Monday, September 27, 2010

Vidyo SVC Video Conference

Vidyo provides Personal Telepresence enabling multi-party video conferences using a personal computer, with HD quality over converged IP networks.

Built on the new H.264 Scalable Video Coding (SVC) standard, Vidyo’s products deliver good error resilience and low latency videoconferencing over the Internet and general-purpose networks. Vidyo’s technology for OEMs and end-to-end product solutions for enterprises support point-to-point and multi-point connections that include a variety of different platforms ranging from Mac & Windows desktops to dedicated room solutions.

Vidyo is also betting on the iPad and iPhone as the future of video conferencing.


Sunday, September 26, 2010

HD Camera System Using FPGA and CCD

Elphel has developed several cameras, including models built around CCD/CMOS sensors and a Xilinx Spartan-3E FPGA:

http://wiki.elphel.com/index.php?title=353

as well as a model based on the TMS320DM6467 and a Xilinx Spartan-6 FPGA:


Both the CCD and CMOS sensor drivers are implemented in the FPGA. The code is distributed under the GNU GPL v3, and the hardware documentation under the GNU FDL.

The previous 2-Mpixel CMOS camera can be found at:


Regarding algorithms, the following paper is helpful:


CCD cameras embed imaging algorithms



May 1, 2007

To increase performance, camera vendors are embedding image-processing and machine-vision algorithms within their cameras.

By Andrew Wilson, Editor

Because of their low cost, added functionality, and increasingly easy programming, field-programmable gate arrays (FPGAs) have become a mainstay of today’s CCD cameras. By incorporating functions such as Bayer interpolation, flat-field correction, and bad-pixel correction within these devices, the host computer can be offloaded from such compute-intensive tasks, increasing the speed of machine-vision systems. Today, many CCD cameras use FPGAs to allow these functions to be performed in a pipelined fashion in real time as images are captured. Better yet, some camera vendors are teaming with high-level imaging-software companies to allow more sophisticated image-analysis functions to be performed within these cameras.

When evaluating any specific CCD camera for machine-vision applications, system integrators should be mindful of many of the functions that can be performed within the camera itself. While many currently available CCD cameras use FPGAs in their designs, often these are used just to provide camera timing, triggering, and Ethernet, Camera Link, or USB connectivity. In the simplest designs, such cameras provide raw data over these interfaces and any pre- or postprocessing of the imaging data must be provided by a frame grabber (which often also incorporates an FPGA) or by the host processor itself. Because these cameras do not provide added functionality, they can be an inexpensive way of digitizing images, especially in applications where a frame grabber or host-processor combination can be used.

In many applications, however, embedding added functionality within the camera can prove beneficial, especially when implementing low-level point, area, resampling, or color-correction algorithms. Such algorithms readily lend themselves to the pipeline nature of FPGA logic, and, as such, provide a means for camera developers to increase the processing of functions by an order of magnitude (or greater) than simply offloading these functions to a host CPU.

Before choosing a camera that features these algorithms, it is necessary to understand both the algorithms and the benefits of choosing a camera with this on-board functionality. For example, while functionality such as gamma correction, Bayer interpolation, or flat-field correction is important in many machine-vision applications, functions such as JPEG/MPEG-2 compression are more suited to security surveillance. In these applications, image bandwidth reduction may be more important than preserving the quality of the original image.

Class actions

Image-processing algorithms generally fall into three classes: point operations, neighborhood operations, and more complex recursive operations. Because of their pipelined logic and on-board memory, FPGAs are mainly used to compute point and neighborhood operations. More recursive operations are usually handled by an external CPU or, in the case of smart cameras, by an on-board CPU. However, with the introduction of RISC processors embedded within the FPGA itself, future generations of cameras will likely perform all three types of operation within a single device.

Point-processing functions are mostly used to remap the camera’s original pixel values into a new set of pixel values using the FPGA’s on-board look-up tables. They can alter the contrast, brightness, and gamma of the image so that details of specific interest appear more clearly.
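As a concrete illustration, here is a minimal software sketch of such a look-up-table point operation (gamma correction). The 8-bit depth and the gamma value are assumptions for illustration only; in a camera the same table would typically sit in FPGA block RAM and be read once per pixel as data streams through.

```python
# Minimal sketch of an LUT-based point operation (gamma correction).
import numpy as np

def build_gamma_lut(gamma: float, bits: int = 8) -> np.ndarray:
    levels = 2 ** bits
    x = np.arange(levels) / (levels - 1)                  # normalized input codes
    return np.round((x ** (1.0 / gamma)) * (levels - 1)).astype(np.uint8)

def apply_lut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    return lut[image]                                     # one table read per pixel

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)   # synthetic 8-bit frame
out = apply_lut(img, build_gamma_lut(2.2))                     # brightened mid-tones
```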

A more sophisticated class of operators are neighborhood operators. Indeed, many of the most powerful image-processing algorithms rely on these operators to perform functions such as spatial averaging, sharpening or blurring of image details, edge detection, and image-contrast enhancement.

In spatial convolution, the intensity of each output pixel is determined as a function of the intensity values of its neighboring pixels. This involves multiplying a group of pixels in the input image with an array of pixels in the convolution mask. By performing this function, the output value of each pixel is a weighted average of each input pixel and its neighboring pixels in the convolution kernel.
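A minimal software sketch of this kind of neighborhood convolution is shown below; the 7 x 7 averaging kernel is chosen to match the low-pass example in Figure 1, and in an FPGA the two inner loops would be unrolled into parallel multiply-accumulate stages.

```python
# Minimal sketch of a spatial (neighborhood) convolution with a box kernel.
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Weighted average of each pixel and its neighbors (edge pixels replicated)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image.astype(np.float64), ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(image.shape, dtype=np.float64)
    for i in range(kh):                        # in an FPGA these loops are unrolled
        for j in range(kw):                    # into parallel multiply-accumulates
            out += kernel[i, j] * padded[i:i + image.shape[0], j:j + image.shape[1]]
    return np.clip(out, 0, 255).astype(np.uint8)

lowpass = np.full((7, 7), 1.0 / 49.0)          # 7 x 7 averaging (box) kernel
img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
smoothed = convolve2d(img, lowpass)            # softened edges, as in Figure 1
```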

“Since mathematical manipulation of pixel data is the strongest point of an FPGA’s abilities,” says Anthony Pieri of Basler Vision Technologies, “FPGAs are very useful for incorporating finite and infinite impulse response filters in digital cameras. A typical example is a low-pass filter that can be used to “smooth” the pixels within an image. A low-pass filter is in essence a pixel averaging process and is suited to an FPGA’s math implementation.”


FIGURE 1. FPGAs are useful for incorporating finite impulse response and infinite impulse response filters in digital cameras. Low-pass filters can smooth the pixels within an image with a pixel averaging process. An original image (top) goes through a low-pass filter using a 7 x 7 neighborhood filter (bottom). Edges have softened, and small craters are fading or obliterated.

These types of filters are often used for preprocessing an image before it is passed onto a more powerful processing system (such as a CPU or DSP) for more complex analysis (see Fig. 1 on p. 49). “FPGA-based preprocessing capability has been a staple in Basler cameras used in inspection equipment designed to detect defects at the micron level in products such as CDs, DVDs, or glass substrates,” Pieri says.

Unsharp masking is another example of a neighborhood operator that increases the contrast of images within a scene. To do this, an image is first blurred by replacing each pixel value with the average of the pixel values that surround it. The resultant image is then subtracted from the original image within the FPGA, resulting in an image with small objects and structures enhanced. And, because FPGAs can be rapidly configured to incorporate pipelined multipliers and accumulators, they are ideally suited to performing these neighborhood operations (see Vision Systems Design, November 2005, p. 97).
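A minimal sketch of unsharp masking along these lines, reusing the convolve2d() helper and test image from the convolution sketch above; the 5 x 5 blur size and the amount parameter are arbitrary illustration values.

```python
# Minimal sketch of unsharp masking: blur, subtract, add the detail back.
import numpy as np

def unsharp_mask(image: np.ndarray, amount: float = 1.0) -> np.ndarray:
    # Blur with a 5 x 5 box filter, subtract to isolate fine detail, add it back.
    blurred = convolve2d(image, np.full((5, 5), 1.0 / 25.0)).astype(np.float64)
    detail = image.astype(np.float64) - blurred
    return np.clip(image + amount * detail, 0, 255).astype(np.uint8)

sharpened = unsharp_mask(img)                  # small structures enhanced
```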

While FPGA-based point and neighborhood operators are useful in performing many types of image-processing functions, they are generally not suited to more recursive image-processing operations such as statistical image analysis or image understanding. Here, von Neumann processors remain the architecture of choice. Despite this, many camera vendors have incorporated several different architectures within their FPGAs that allow bad-pixel correction, flat-field correction, filtering, color-space conversion, centroid and particle detection, and JPEG/MPEG compression to be performed in real time.

Clever cameras

Two of the most important algorithms currently implemented in FPGA-based cameras are bad-pixel interpolation and flat-field correction. Bad pixels can occur when the sensitivity of a pixel is either lower than that of adjacent pixels (in which case a dark pixel will be detected) or higher (resulting in a bright pixel). Alternatively, hot pixels may have sensitivity equal to that of adjacent pixels but produce a high-intensity output, appearing completely bright during long integration times (see Vision Systems Design, July 2006, p. 20).

As in many areas of image processing, a number of methods exist to compensate for bad pixels. One approach, for example, would be to replace the output from a dead pixel with a copy of the output from a neighboring pixel that was read previously. More sophisticated approaches involve substituting, for each defective pixel, the average of readouts from surrounding pixels. This approach can be easily accomplished within an FPGA (see Fig. 2 on p. 52).
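A minimal sketch of the neighbor-averaging approach, assuming a precomputed defective-pixel map; the synthetic image and the flagged location are made up for illustration.

```python
# Minimal sketch of bad-pixel correction from a defective-pixel map.
import numpy as np

def correct_bad_pixels(image: np.ndarray, bad_map: np.ndarray) -> np.ndarray:
    """Replace each flagged pixel with the mean of its non-flagged 3 x 3 neighbors."""
    out = image.astype(np.float64)
    h, w = image.shape
    for y, x in zip(*np.nonzero(bad_map)):
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        patch = image[y0:y1, x0:x1].astype(np.float64)
        good = ~bad_map[y0:y1, x0:x1]          # exclude other defects in the window
        out[y, x] = patch[good].mean() if good.any() else 0.0
    return np.round(out).astype(np.uint8)

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
bad = np.zeros(img.shape, dtype=bool)
bad[100, 200] = True                            # one hypothetical hot pixel
fixed = correct_bad_pixels(img, bad)
```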


FIGURE 2. Several methods can compensate for bad pixels in CCD imagers. One is to replace the output from a dead pixel with a copy of the output from a neighboring pixel that was read previously. More sophisticated approaches involve substituting, for each defective pixel, the average of readouts from surrounding pixels; for example, software used for detecting defective pixels and an image from the camera containing a cluster of defective pixels (top) and the same image with the defective cluster pixels corrected (bottom).

For its range of Lynx CCD cameras, Imperx uses a camera’s on-board FPGA to compute a gray-scale value based on the values of the two adjacent pixels. To find dead pixels, however, the camera must first be subjected to uniform illumination and a defective-pixel map file created. As images are captured by the CCD array, each pixel is compared with the known value in the table. Should the pixel be known to be dark, bright, or hot, the camera’s FPGA generates a new gray-scale value that is read out with other known good-pixel values as a defective-pixel compensated image.

Pixels within a CCD device also may not return similar values even though the illumination across the field of view is constant. To compensate for this, flat-field correction measures the response of each pixel in the CCD to a known illumination and corrects for any variation in illumination. By computing a histogram of the value of the intensity of each pixel across the array, the percentage difference of each pixel compared with its ideal flat-field value can be determined. In Imperx cameras, this flat-field-correction file contains coefficients describing these nonuniformities. By uploading the file to the camera’s nonvolatile memory, the FPGA can compensate for each individual pixel value as images are captured in real time.
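A minimal sketch of per-pixel flat-field correction under these assumptions: the gain coefficients are derived from a synthetic flat frame, whereas a real camera would load an equivalent coefficient file into nonvolatile memory and apply it in the FPGA.

```python
# Minimal sketch of flat-field correction using a per-pixel gain map.
import numpy as np

def flat_field_coefficients(flat_frame: np.ndarray) -> np.ndarray:
    """Per-pixel gain map measured from a uniformly illuminated reference frame."""
    flat = flat_frame.astype(np.float64)
    return flat.mean() / np.maximum(flat, 1.0)

def apply_flat_field(image: np.ndarray, gain: np.ndarray) -> np.ndarray:
    return np.clip(image.astype(np.float64) * gain, 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
flat = np.random.normal(128.0, 5.0, img.shape)   # hypothetical flat-field capture
corrected = apply_flat_field(img, flat_field_coefficients(flat))
```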

“Incorporating a 32-bit RISC power PC in the FPGA,” says Petko Dinev, president of Imperx, “enables firmware and software upgradeability and features such as programmable frame rates, multiple triggering options, user-programmable LUTs, and defective pixel correction maps (DPMs). The on-board RISC processor allows developers to create LUTs and DPMs and upload them into the camera.”

Rotation and cropping

Neighborhood operations such as image rotation and cropping algorithms can be invaluable, especially in high-speed systems where digitized images may be skewed or when particular regions of interest need to be quickly isolated. If an incoming image such as a printed-circuit board under test is skewed, image-rotation algorithms can be used to align the image correctly. Since an image-rotation algorithm runs inside the FPGA, these functions can run in real time at frame rates. In other applications, the image may need to be analyzed for proper orientation and then displayed. In these cases, image rotation and 2-D scaling can analyze the incoming image and perform an automatic rotation and scaling for the display device in real time.

Kerry Van Iseghem, cofounder of Imaging Solutions Group (ISG), states that his company’s engineers specialize in image algorithm development and implementation. Functions such as rotation, cropping, and centroid detection are very important and can be used in conjunction with other algorithms to speed machine-vision applications. “Centroid detection, for example, is a neighborhood operation used in many sorting algorithms and in conjunction with other machine-vision algorithms to trigger pneumatic systems,” he says.


By properly determining the center of a faulty part, the triggering mechanism will ensure that pneumatic air pumps properly blow defective products from the production line. Centroid detection also finds uses in applications such as laser-based Free Space Optical Communication systems (en.wikipedia.org/wiki/Free-space_optical_communication). If the lasers used in such an application are not continuously aligned, the bandwidth drops to zero and the system fails. Here, CCD cameras can act as the input to servomotors to keep laser beams properly aligned.

“Object- and particle-detection algorithms coupled with centroid detection can also track multiple targets within moving images,” says Van Iseghem. “This is important in applications such as detecting and tracking defects on the surface of semiconductor wafers or detection and tracking of military or automotive targets. By implementing the functions with an FPGA, data throughput can be sustained within ISG’s cameras while image-processing functions are being performed.”
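A minimal sketch of centroid detection on a thresholded image; the threshold value and synthetic image are illustrative only. The same coordinate averaging maps naturally onto FPGA accumulators that sum x, y, and pixel counts as the image streams through.

```python
# Minimal sketch of centroid detection on a binary (thresholded) image.
import numpy as np

def centroid(binary: np.ndarray) -> tuple:
    """Centroid (row, column) of the above-threshold pixels."""
    ys, xs = np.nonzero(binary)
    if ys.size == 0:
        raise ValueError("no object pixels found")
    return float(ys.mean()), float(xs.mean())

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
mask = img > 250                                 # hypothetical defect threshold
if mask.any():
    cy, cx = centroid(mask)                      # drives the ejection trigger timing
```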

Color spaces

Image-interpolation algorithms are especially important in color cameras that use CCDs with a Bayer filter mosaic to produce color images. In these cameras, a color filter array is overlaid on a square grid of photodetectors. Because the human eye is more sensitive to green light, the Bayer pattern uses twice as many green (luminance) elements as red or blue chrominance sensors. To obtain an RGB output from such an array, this Bayer pattern must be interpolated to produce a color value for each pixel.

To create a full-color image, interpolation generates color values for each pixel, creating a full three-channel RGB color image. In modern CCD cameras, this interpolation is accomplished within the camera’s FPGA. Here again, a number of different algorithms can be used to accomplish this task (see Fig. 3).


FIGURE 3. To create a full-color image, interpolation generates values for each pixel, creating a full three-channel RGB color image. These include nearest neighbor interpolation (top), bilinear interpolation (middle), and interpolation using a variable number of gradients (bottom).

These are generally classed as either nonadaptive or adaptive. Nonadaptive methods include nearest neighbor, bilinear, and smooth-hue transition. Adaptive methods can use edge sensing or variable gradient algorithms to perform the interpolation function. While nonadaptive methods are generally less complex and faster, adaptive algorithms are more complex, more computationally demanding, and can produce more accurately interpolated images.
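A minimal sketch of the simplest nonadaptive approach, bilinear interpolation over an assumed RGGB mosaic: each missing sample is filled with the average of the available same-color neighbors in a 3 x 3 window, while measured samples are kept unchanged. Adaptive methods replace these plain averages with edge- or gradient-weighted ones.

```python
# Minimal sketch of bilinear Bayer demosaicing for an assumed RGGB pattern.
import numpy as np

def neighbor_average(values: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Average of the sampled (mask=True) pixels in each 3 x 3 neighborhood."""
    v = np.pad(np.where(mask, values, 0.0), 1)
    m = np.pad(mask.astype(np.float64), 1)
    s = np.zeros(values.shape)
    n = np.zeros(values.shape)
    for dy in range(3):
        for dx in range(3):
            s += v[dy:dy + values.shape[0], dx:dx + values.shape[1]]
            n += m[dy:dy + values.shape[0], dx:dx + values.shape[1]]
    return s / np.maximum(n, 1.0)

def demosaic_bilinear_rggb(raw: np.ndarray) -> np.ndarray:
    raw = raw.astype(np.float64)
    h, w = raw.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True                  # red sample sites
    masks[0::2, 1::2, 1] = True                  # green sites on red rows
    masks[1::2, 0::2, 1] = True                  # green sites on blue rows
    masks[1::2, 1::2, 2] = True                  # blue sample sites
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        avg = neighbor_average(raw, masks[:, :, c])
        rgb[:, :, c] = np.where(masks[:, :, c], raw, avg)   # keep measured samples
    return np.clip(rgb, 0, 255).astype(np.uint8)

raw = np.random.randint(0, 256, (480, 640)).astype(np.float64)  # synthetic mosaic
full_color = demosaic_bilinear_rggb(raw)
```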

Many manufacturers provide RGB data output for their cameras by performing Bayer interpolation within the camera. Examples include the PicSight Camera Link camera from Leutron Vision, all ISG LightWise Cameras, and the TMC-1000CL and TMC-6700CL cameras from JAI.

The Leutron Vision PicSight Camera Link camera, for example, has an on-board FPGA for optional preprocessing tasks such as Bayer decoder and color space rotation. “To perform Bayer interpolation, the Bayer decoder module implements a 5 × 5 window algorithm,” says Stephane Francois, executive vice president. “Gradients of color values within the 5 × 5 window are calculated relative to the center pixel, and a threshold is used to locate regions of pixels that are most like the pixel under consideration. The pixels in these regions are then weighted and summed to determine the average difference between the color of the actual measured center pixel value and the missing color. This yields excellent results compared to a simple 3 × 3 bilinear interpolation, resulting in sharper edges and better scene contrast.”

In JAI’s latest GigE-based cameras, this interpolation is taken one step further. “After Bayer conversion, most digital cameras apply some level of edge enhancement to counteract the softening effects from the low-pass optical filter and interpolation of colors during Bayer conversion,” says Steve Kinney, product manager. “This increases aliasing effects in the image and decreases the signal-to-noise ratio.” To overcome this, the embedded NuCORE processor in the camera uses a proprietary algorithm to produce a sharp image without introducing these unwanted effects. Better still, the processor’s on-chip hardware allows pixel-error correction to be performed in hardware.

“While most pixel-error correction algorithms use only nearest-neighbor pixels, the NuCORE processor has a proprietary 2-D error-correction algorithm that uses more than the nearest neighbors to produce a more accurate approximation of the bad pixel,” says Kinney. Perhaps one of the most important features of the processor, however, is its digital zoom. Rather than implement a bilinear interpolation, NuCORE engineers have used a bicubic interpolation algorithm that provides more accurate edge detail and less pixelation than bilinear methods (see “GigE cameras leverage consumer ICs for improved performance,” p. 10).

JPEG and MPEG

In applications such as traffic surveillance and security, it may be necessary to transmit digital data from CCD cameras over lower-performance networks such as Ethernet and USB. A number of camera manufacturers have opted to incorporate image compression within their cameras. Most often, these cameras use the Joint Photographic Experts Group (JPEG) standard or a version of it in their designs. While a version of the baseline JPEG standard can be used for lossless compression, most often a lossy compression is used, built around a discrete cosine transform (DCT). In a variation of the standard, adopted by the Moving Picture Experts Group (MPEG), only the changes from one frame to another are stored, and these changes are once again encoded using the DCT.
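For reference, here is a minimal sketch of the 8 x 8 forward DCT that sits at the heart of both baseline JPEG and MPEG residual coding; real encoders follow it with quantization and entropy coding, and hardware designs typically split it into two 1-D passes (rows, then columns), much like the systolic implementation described further below.

```python
# Minimal sketch of the 8 x 8 forward DCT used in JPEG/MPEG compression.
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    # Orthonormal DCT-II basis: C[k, i] = a(k) * sqrt(2/n) * cos((2i + 1) k pi / 2n)
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    c[0, :] /= np.sqrt(2.0)                      # a(0) = 1/sqrt(2), a(k>0) = 1
    return c

def dct2_8x8(block: np.ndarray) -> np.ndarray:
    c = dct_matrix(8)
    return c @ (block - 128.0) @ c.T             # level shift, then separable 2-D DCT

block = np.random.randint(0, 256, (8, 8)).astype(np.float64)
coeffs = dct2_8x8(block)                         # coeffs[0, 0] is the DC term
```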

Camera designers have a number of choices. Image compression can be implemented using off-the-shelf custom ICs especially designed for the task, embedded as cores within FPGAs, or off-loaded to the host computer or to an embedded CPU within the camera. Of course, each of these approaches has its trade-offs. While off-the-shelf codecs may prove effective for low-resolution cameras, multiple devices may be required for large-format imagers. Embedding these functions within FPGAs using off-the-shelf or custom cores provides a flexible but engineering-intensive solution. Finally, off-loading image compression to the camera’s on-board processor results in a slower, though more flexible, implementation.

To perform JPEG compression in hardware at the high data rate provided by the Micron Technology sensor in its cameras, FastVision has reduced the size of the DCT core by creating a systolic form of a fast 2-D 8 × 8 DCT, implemented with two systolic 1-D eight-point DCTs in a Xilinx XC2V2000 (see Vision Systems Design, December 2004, p. 39). To sustain the 660-Mpixel/s data rate, the FPGA runs at 85 MHz (see Fig. 4).


FIGURE 4. To perform JPEG compression in its cameras, FastVision reduced the size of the DCT core by creating a systolic form of a fast 2-D 8 × 8 DCT, implemented with two systolic 1-D eight-point DCTs implemented in a Xilinx XC2V2000, resulting in an original test target captured by the camera (top). The bottom image is the result of JPEG compression.

ISG has also implemented the JPEG algorithm within a gate array in its cameras. The company says it can compress images of greater than 8 Mpixels at greater than 72 frames/s. Furthermore, ISG’s JPEG algorithm has bandwidth monitoring, so a particular bandwidth can be set and the algorithm will vary the compression ratio to maintain the highest quality at that bandwidth. Most JPEG algorithms are limited to a fixed compression ratio, which wastes bandwidth; since each image compresses differently from the next, the available bandwidth can instead be fully used to guarantee the best possible image quality.

Advanced features

“Many advanced features found within digital cameras now offer specialized functionality targeted for specific applications,” says Mike Cyros, president of Allied Vision Technologies. “These include Secure Image Signature, which places a digital time stamp, frame counter, and trigger counter into every image delivered from the camera in real time. Monitoring the trigger-to-frame counter gives an indication of a missed frame, which may be very useful in time-critical systems.

“Other functions such as deferred image transport allow multiple cameras to acquire images simultaneously and stage the read-out of images in a controlled manner to the PC. This ensures the camera-to-PC bus bandwidths are not exceeded.” Similarly, a multisequence image feature allows a preprogrammed set of specific image acquisitions, each with individual parameters, to occur using a single trigger. “This is an important way to ease integration with complicated handling systems for the vision integrator,” Cyros says.

Of course, implementing algorithms within an FPGA still remains a challenge for many engineers. After algorithms are written in high-level languages, they must be recoded into hardware description languages such as Verilog and then simulated and synthesized to FPGA gates. However, this may be about to change. At VISION 2006 (Stuttgart, Germany), Silicon Software showed how its VisualApplets tool was used to program the eneo series of smart cameras from Videor Technical.

“VisualApplets uses a graphical user interface that allows developers to program and modify FPGA processors without knowledge of the hardware or HDL languages,” says Klaus-Henning Noffz, executive manager of Silicon Software, “both easing and accelerating system programming. The freely accessible Xilinx Spartan-3 FPGA within the eneo camera allows application-specific algorithms for image preprocessing to be rapidly implemented to off-load these functions from the host CPU.”

It may not be long before other providers of image-processing software follow suit. When this happens, both camera and frame-grabber vendors will be able to incorporate some of the software functions offered by software companies in hardware-based FPGA implementations. When such correct partitioning of point, neighborhood, and deterministic algorithms between FPGAs and CPUs is performed, a dramatic increase in speed and throughput of machine-vision applications will result.



RPDZ minimizes processing loads in networked cameras

Resolution Proportional Digital Zoom (RPDZ) is a postprocessing algorithm that maintains a constant data rate between a camera and a host computer while digitally modifying (zooming) the camera’s field of view (FOV). This is done by subsampling pixels in the image in a manner proportional to the digital zoom level.

Some vision applications demand that the FOV for the image being captured be repeatedly adjusted from wide to narrow and back again. In other words, the camera/sensor being used must capture a wide-angle view of a scene, zoom in on a specific object or section of the FOV, then zoom out again to see the big picture.

One way to tackle this challenge is by equipping the camera with a large, variable optical zoom lens. This enables a camera of moderate resolution (and cost) to capture enough detail in both wide-angle and close-up views, but adds the considerable expense and weight of the zoom lens to the vision system. A less expensive and less bulky alternative is to use a high-resolution camera with fixed optics, provided there is sufficient pixel density in the image to identify or analyze small objects by digitally “zooming” into an area of interest.

This challenge, however, becomes more demanding when the bandwidth of the video output must be considered. For example, unmanned aerial vehicles (UAVs) typically must transmit both wide-angle and close-up video information via an RF link of limited bandwidth. Similarly, some machine-vision or traffic applications call for multiple cameras sending both wide angle and close-up views over a single network.


At full resolution, this image represents 3 Mpixels of data, or more than 45 Mbytes/s at 8-bit gray scale and 15 frames/s (top). Applying RPDZ to the full FOV image (taking every fourth pixel and every fourth row), the roads and the vehicles are still visible (especially when viewed in 15-frames/s video), but the bandwidth is reduced to less than 200,000 pixels per frame, or less than 3 Mbytes/s, well within the constraints of the typical RF link or multicamera network (middle). The full 4x RPDZ zoom view, with the same 200,000-pixels-per-frame resolution, achieves full clarity by taking each pixel and each row of the digitally zoomed image and still supports real-time video transmission via an RF downlink (bottom).

In these applications, it may be inefficient or even impossible to use a megapixel camera with basic digital zoom capabilities, because the wide-angle view from a 4-Mpixel camera operating at 15 frames/s would generate more than 60 Mbytes of data per second at 8-bit gray scale. This is enough to choke even a high-bandwidth network and well beyond what a typical RF link can bear. The zoom lens option might satisfy the bandwidth requirement, but at an added cost and weight that greatly reduces the attractiveness of the system, especially for a UAV-type application.

Resolution Proportional Digital Zoom eliminates the need for an optical zoom, but avoids the bandwidth issues of megapixel digital zoom. For example, in a 4-Mpixel camera (2048 x 2048) utilizing RPDZ, the full FOV is imaged by capturing every fourth pixel in every fourth row, thus creating a 512 x 512 resolution. A 2X zoom is accomplished by digitally zooming to half of the full FOV and increasing the pixel density to every second pixel in every second line, again a 512 x 512 resolution. At 4X digital zoom, every pixel in the center 512 x 512 section of the image is used.
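A minimal sketch of this readout scheme for an assumed 2048 x 2048 sensor; whatever the zoom level, the output stays 512 x 512, which is exactly what keeps the link bandwidth constant.

```python
# Minimal sketch of RPDZ readout: constant 512 x 512 output at every zoom level.
import numpy as np

def rpdz_readout(frame: np.ndarray, zoom: int) -> np.ndarray:
    """zoom = 1 (full FOV), 2, or 4; the returned image is always 512 x 512."""
    h, w = frame.shape
    step = 4 // zoom                             # read every 4th, 2nd, or every pixel
    crop_h, crop_w = h // zoom, w // zoom        # centered FOV for this zoom level
    y0, x0 = (h - crop_h) // 2, (w - crop_w) // 2
    return frame[y0:y0 + crop_h:step, x0:x0 + crop_w:step]

sensor = np.random.randint(0, 256, (2048, 2048), dtype=np.uint8)
for z in (1, 2, 4):
    assert rpdz_readout(sensor, z).shape == (512, 512)   # constant output data rate
```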

No matter what the zoom ratio or FOV, RPDZ holds the resolution at a level that fits within the standard TV bandwidth of a real-time RF link and greatly minimizes the processing load in network scenarios. In addition, full resolution snapshot images can still be captured when needed and transmitted in non-real-time over the lower bandwidth data link-something a low-resolution camera with a zoom lens cannot do.
Steve Kinney, product manager
JAI, San Jose, CA, USA



Saturday, September 25, 2010

Synopsys FPGA Synthesis 2010.9 Patch Version Release

The Synopsys FPGA Synthesis 2010.9 patch versions, fpga201009-1.exe for Windows and fpga_vE-2010.09-1_linux.tar for Linux, have been released. Hopefully they fix some of the bugs in last month's release, which introduced many new features and interface changes.

Friday, September 24, 2010

Comparison Between JPEG 2000, JPEG XR and H.264 Intra

1. Image quality comparison:

http://www.discoverdvc.org/publications/EPFL/A%20comparative%20study%20of%20JPEG%202000,%20AVCH.264,%20and%20HD%20Photo.pdf

In this paper, a study evaluating rate-distortion performance between JPEG 2000, AVC/H.264 High 4:4:4 Intra, and JPEG XR is reported. A set of ten high-definition color images with different spatial resolutions was used, and both PSNR and the perceptual MSSIM index were considered as distortion metrics. Results show that, for the material used in the experiments, the overall compression efficiency of the three coding approaches is quite comparable, within an average range of ±10% in bitrate, with all three outperforming conventional JPEG. JPEG 2000 and AVC/H.264 High 4:4:4 Intra Profile can slightly outperform JPEG XR, with gains of around 0.5~1.7 dB and 0.1~1.2 dB in PSNR respectively, when considering only the luminance component of the image in YUV 4:4:4.
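For reference, a minimal sketch of the PSNR metric used in the study (MSSIM is a separate perceptual metric not reproduced here):

```python
# Minimal sketch of PSNR between a reference image and a decoded image (8-bit).
import numpy as np

def psnr(reference: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

A gain of 1 dB at the same bitrate corresponds to a mean squared error roughly 20% lower.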

2. Performance comparison:


At 1080p resolution in raw RGB 4:4:4 mode, H.264 already offers peak signal-to-noise ratio performance similar to JPEG 2000 and JPEG XR. Since the wavelet transform is a global transform (whereas AVC and JPEG XR use block transforms), the memory bandwidth requirements of JPEG 2000 far exceed those of AVC and JPEG XR. When tiling is used in JPEG 2000 to constrain memory bandwidth (for example, with 128x128 tiles a 1920x1080 frame splits into roughly 15x9 = 135 tiles), H.264 HP and JPEG XR may in fact be superior.


Wednesday, September 22, 2010

OpenCV Image and Video Processing

OpenCV is a good library for state-of-the-art image and video processing. It covers Human-Computer Interaction (HCI); Object Identification, Segmentation and Recognition; Face Recognition; Gesture Recognition; Motion Tracking, Ego Motion and Motion Understanding; Structure From Motion (SFM); Stereo and Multi-Camera Calibration and Depth Computation; and Mobile Robotics. It is freely available for Windows, Linux, Mac, and even the iPhone. OpenCV was originally started by Intel in 1999, partly to showcase how fast Intel CPUs could run such workloads.
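A minimal usage sketch with the Python bindings; note that the OpenCV releases current in 2010 exposed a C-style API, the cv2 module shown here came later, and "frame.png" is a hypothetical input file.

```python
# Minimal sketch of basic OpenCV usage: image I/O, edge detection, video capture.
import cv2

img = cv2.imread("frame.png")                    # load a BGR image from disk
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # convert to gray scale
edges = cv2.Canny(gray, 100, 200)                # Canny edge detection
cv2.imwrite("edges.png", edges)

cap = cv2.VideoCapture(0)                        # open the first attached camera
ok, frame = cap.read()                           # grab a single video frame
cap.release()
```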

Here is the official site:


Some tutorials:



Even a book:


VSofts H.264 Encoder on Virtex-6

http://www.vsofts.com/products/AVC-I_Enc.pdf

Xilinx and VSofts Demonstrate Low Latency Real-Time H.264/AVC-I IP Core Compression Solution for Xilinx FPGAs
Xilinx and VSofts Show Broadcast Equipment Designers How New Real-Time Encoding Technology Meets Highest Video Quality Expectations at IBC 2010

AMSTERDAM, Sept 10, 2010 /PRNewswire via COMTEX/ --

Xilinx(R), Inc. (Nasdaq: XLNX) and Vanguard Software Solutions, Inc. (VSofts) today demonstrated at the IBC2010 conference the ability of VSofts' H.264/AVC-I IP core to deliver a very low latency, ITU (International Telecommunication Union) and Panasonic AVC-Intra compliant Field Programmable Gate Array (FPGA) implementation of the industry-standard codec for providing minimal delay from source to encoded video in real-time video broadcast applications.

"The technology of video capture and broadcast is moving very rapidly," said Felix Nemirovsky, VSofts' Vice President of Marketing. "When it comes to High Value Content acquisition, editing and archiving, the flexibility and high data rate, real-time performance offered by basing our AVC I-frames encoder on Xilinx's proven and market-leading FPGAs allows us to meet the demands of a rapidly evolving marketplace."

As the demands of video capture and display move toward 3D TV and 4Kx2K Digital Cinema, FPGAs give designers flexibility during the design cycle and into production as standards evolve, while providing the levels of performance required for driving real-time video. Each evolution from 1080i through 4Kx2K formats requires an 8X increase of throughput off the back of the camera from 1.5Gbps to 12Gbps. This creates a potential chokepoint at each stage of the broadcast chain from camera, to contribution encoders passing video from professional studios, to video servers and to post-production houses. The H.264/AVC Intra IP core relies on the inherent flexibility, parallel processing and high-speed connectivity capabilities of Xilinx's Virtex(R)-6 FPGA to overcome these chokepoints by significantly reducing the encoding delay from camera acquisition through to live broadcast.

"The broadcast industry's evolution is moving faster than custom or standard off-the-shelf system-on-chip solutions can support in terms of performance, flexibility and time-to-market," said Dean Westman, vice president, Communications Business at Xilinx. "VSofts' H.264/AVC Intra encoding solutions underscores the value of IP in Xilinx's Targeted Design Platform approach for meeting the performance and development needs of broadcast equipment developers."

Xilinx Broadcast Targeted Design Platforms

Once in production at the end of the year, VSofts' H.264/AVC Intra IP core will join Xilinx's portfolio of Encoding IP that make up Xilinx's design platforms for the broadcast industry. This IP core can be used in conjunction with the Xilinx Broadcast Processing and Xilinx Broadcast Connectivity Kits, which integrate key hardware and software elements needed to quickly build systems and fully verify performance for a broad range of video and audio applications. These platforms include connectivity and video processing IP blocks, design environments and reference designs, along with a base set of digital audio/video development boards and industry standard FPGA Mezzanine Cards (FMC).

For broadcast applications, the Targeted Design Platform approach simplifies the development of complete broadcast audio and video interface solutions, including triple rate SDI solutions with support for standard definition TV to 3DTV and beyond in a single programmable device. It also enables the earliest possible adoption of emerging standards, such as the DisplayPort, rapidly replacing DVI (Digital Visual Interface), and new Ethernet AVB (Audio Video Bridging) technology that guarantees timing and bandwidth availability in IP networks. For more, please visit http://www.xilinx.com/broadcast.


Tuesday, September 21, 2010

Recent developments in standardization of HEVC or H.265

The paper "Recent developments in standardization of high efficiency video coding (HEVC)" is

http://www-ee.uta.edu/Dip/Courses/EE5355/HEVC.pdf

Some progress at


High Efficiency Video Coding / HEVC / H.265 : Beyond H.264

Vcodex, May 2010

"Work is continuing on the new video coding standard, currently known as "High Efficiency Video Coding" (HEVC). A Joint Collaborative Team on Video Coding (JCT-VC) has been set up by ISO/IEC MPEG and ITU-T VCEG. Following a Call for Proposals in January, 27 proposals were submitted to the first meeting of the JCT-VC in April. Elements of some of these proposals have been combined to develop an initial Test Model, a starting point for development of the new standard. The initial Test Model has similarities to earlier standards such as H.264/AVC, including block-based intra/inter prediction, block transform and entropy coding. New features include increased prediction flexibility, more sophisticated interpolation filters, a wider range of block sizes and new entropy coding schemes. Coding performance varies across the different proposals. It looks like we might expect to see a 2x compression improvement compared with H.264/AVC (i.e. half the bitrate at the same visual quality), at the expense of a significant increase in computational complexity (perhaps 3x or more). You can find the technical proposals here."


"Report of Subjective Test Results of Responses to the Joint Call for Proposals (CfP) on Video Coding Technology for High Efficiency Video Coding (HEVC)" can be found @



Monday, September 20, 2010

Xilinx ChipScope Video Tutorial

On YouTube, there are three clips published by Xilinx:





BAE Systems buys US intelligence services unit

Published: Monday, Sep. 20, 2010 - 3:51 am

British defense contractor BAE Systems says it is buying a U.S. intelligence services business in a $296 million deal.

BAE said Monday that it has an agreement to buy the Intelligence Services Group of L-1 Identity Solutions, Inc.

BAE says the group is a provider of security and counter-threat capabilities to the U.S. government. L-1 is based in Stamford, Connecticut.

The deal is subject to approval by U.S. regulators.

Tuesday, September 14, 2010

Digital Camera System

A digital camera data flow looks like:

Light -> Lenses -> CCD -> AFE -> Image Processing -> Memory

In camera front end (AFE) includes CDS, PGA and ADC. The main functions of the AFE in dataflow order:

Sampling: samples the analog data from the CCD. Many samplers offer Correlated Double Sampling (CDS), which samples twice: once at a reference level and once at the data level. The two samples are subtracted to eliminate noise (dark current and die inconsistency); in other words, the CDS subtracts the CCD output's black level from the video level. Common-mode and power-supply noise are rejected by the differential CDS input stage.

Amplification: most AFEs include a Programmable Gain Amplifier (PGA), which lets the user adjust the gain according to the expected lighting of the scene. The PGA is digitally controlled with 10-bit or 12-bit resolution on a dB scale, giving a wide gain range. The PGA can be programmed to switch gain every pixel, in a user-defined pattern of different gains. Control logic allows a camera system to set the desired gain ratios for color balance; the system gain can then be changed by writing to a single register, and the color balance is maintained.

The PGA and black level auto-calibration are controlled through a simple serial interface. The timing circuitry is designed to enable users to select a wide variety of available CCD and image sensors for their applications. Read back of the serial data registers is available from the digital output bus.

ADC: an analog-to-digital converter that converts the conditioned analog signal to a digital output.
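A minimal numerical sketch of this CDS -> PGA -> ADC chain; the voltage levels, the 1 V full scale, the dB-coded gain, and the 12-bit ADC width are all assumptions for illustration.

```python
# Minimal sketch of the AFE dataflow: CDS subtraction, PGA gain, ADC quantization.
import numpy as np

def afe_pipeline(video_level, black_level, gain_db, adc_bits=12):
    cds = video_level - black_level              # CDS: subtract per-pixel black level
    pga = cds * (10.0 ** (gain_db / 20.0))       # PGA: programmable gain on a dB scale
    full_scale = 1.0                             # assumed 1 V analog full scale
    codes = np.round(pga / full_scale * (2 ** adc_bits - 1))
    return np.clip(codes, 0, 2 ** adc_bits - 1).astype(np.int32)   # ADC output codes

video = np.array([0.22, 0.35, 0.48])             # sampled video levels (V), made up
black = np.array([0.20, 0.20, 0.20])             # sampled reference levels (V), made up
digital = afe_pipeline(video, black, gain_db=6.0)
```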

A quick paper on a CCD image-processing prototyping platform can be found at:


Sharp solutions:

http://sharp-world.com/products/device/solution/pdf/camera.pdf

TI digital camera solutions using DMVA1 and DM368:


Sony HD Camera CCD Sensors

Sony provides a series of high quality HD CCD sensors, such as

ICX665/675 Series: Diagonal 7.705 mm (Type 1/2.3) 10.17M-Effective Pixel
ICX637CQZ: Diagonal 7.215 mm (Type 1/2.5) 9.14M-Effective Pixel

Please see the following links for detail:



ICX625: 5M (2456x2058) 15 fps. The following link is the data sheet of ICX625:


ICX655: 5M (2456x2058) 7.5 fps, the single tap version of the ICX625.

A good camera system overview using Sony CCD:

A white paper for Black/White camera system overview using Sony CCD:


AFE
vsp2260 (TI)

Powering (+15 & -8):
AAT3408 charge pump (Analogic)
LT3487 (Linear Tech)

V drivers:
CXD3400 (Sony)
LR366851 (Sharp)
LR36687U/Y (Sharp)
LR36689U (Sharp)
KS7221D (Samsung)

H driver:
74AC04 (Fairchild)

ADC:
AD9978 (Analog Devices)
AD9845 (Analog Devices)

Glue (LVDS2CMOS, support logic, etc):
spartan3e100 (Xilinx)

TI Davinci HD EVM Support Updates

720p Davinci HD (DM6467) EVM (Revision F):




DaVinci HD EVM Documentation
DaVinci HD EVM BOM: board component population detail (PDF - 06/04/08)
DaVinci HD FAQ: questions and comments from EVM users (HTM - 06/04/08)
DaVinci HD EVM Gerber: printable version of Gerber data (PDF - 06/04/08)
DaVinci HD EVM Layout Info: Gerber data in native format and PCAD database (ZIP - 06/04/08)
DaVinci HD EVM Orcad Source: schematic source for the DaVinci HD EVM in Orcad format (DSN - 06/04/08)
DaVinci HD Schematics: board schematics (PDF - 06/04/08)
DaVinci HD Technical Reference: board details, switch settings, connector pinouts (PDF - 06/04/08)
DaVinci HD Test Cases: test case results (TXT - 06/04/08)

DaVinci HD EVM Software Resources

ARM GEL File: sets up the PLL and DDR memory (GEL - 07/28/10)
DSP GEL File: sets up the memory map (GEL - 07/28/10)
DaVinci HD Set MAC Address: COFF file for programming the DaVinci HD MAC address; requires the DaVinci ARM9 GEL file (OUT - 06/04/08)
DaVinci HD Test Code: ARM-based test code and board support library (ZIP - 07/28/10)


DaVinci™ HD1080p (DM6467t) EVM Support Home (Revision C)



DaVinci™ HD1080p EVM Documentation

DaVinci™ HD1080p EVM Schematics: board schematics in PDF format (PDF - 09/01/10)
DaVinci™ HD1080p EVM Technical Reference: board documentation, jumper settings, connection details, etc. (PDF - 08/14/09)
DaVinci™ HD1080p EVM BOM: board component population detail (PDF - 03/31/10)
DaVinci™ HD1080p EVM Gerber: printable version of Gerber data in PDF format (PDF - 03/31/10)
DaVinci™ HD1080p EVM Layout Info: Gerber data in PCAD and database format (ZIP - 03/31/10)
DaVinci™ HD1080p EVM Orcad Source: Orcad design file for the EVM (DSN - 09/01/10)

Davinci™ HD1080p EVM Software Resources

DaVinci™ HD1080p EVM ARM GEL File: EVM-specific ARM GEL file; includes memory map and EMIF/PLL/PINMUX init (GEL - 07/28/10)
DaVinci™ HD1080p EVM DSP GEL File: EVM-specific DSP GEL file (GEL - 07/01/10)
DaVinci™ HD1080p EVM CCS Config Files: importable configurations for Spectrum Digital JTAG emulators (CCS - 10/01/09)
Target Content: source code and tests for the DaVinci™ HD1080p EVM (ZIP - 07/01/10)

Monday, September 13, 2010

INTERRA's VEGA H264/MVC Analyzer for 3D Video

MVC is the latest addition to the widely popular H.264/AVC video coding standard, developed jointly by MPEG and VCEG, and it enables efficient representation of multiple views of a video scene in one video stream. Intended for encoding stereoscopic (two-view) video, free-viewpoint television, and multi-view 3D television, MVC is gaining wide interest and momentum in consumer electronics, the DTV market, and other applications such as mobile 3D video and telepresence. The MVC standard enables superior compression performance for delivering 3D video and is backward compatible with H.264/AVC codecs.

Interra's Vega H264/MVC analyzer presents an in-depth view of MVC by performing frame by frame and block by block analysis of encoded streams. Other MVC features of Vega include frame statistics and thumbnail display for all views, and side by side comparison of 2 views.

Interra's Vega H264 Debug and Analysis platform provides an effective debug point for development and quality assurance of codecs used in digital media products. With over 250 licensees worldwide, Vega is the de facto standard compliance-assurance tool among tier-1 broadcasters, IPTV, VOD, and codec vendors. Vega H264 supports a wide range of audio and video technologies, such as MPEG-2 Transport, MPEG-2 Program, DVB-H, MVC, SVC, AVI, QuickTime, H.264, VC-1, MPEG-4, MPEG-2 Video, MP4, Dolby Digital (AC3), Dolby Digital Plus, Dolby E, and AAC audio.

More detail info can be found @


According to a press release, Imagination selected this tool for its IP product.


YouTube Tests Live Streaming Channel

http://www.multichannel.com/article/457048-YouTube_Tests_Live_Streaming_Channel.php

Todd Spangler -- Multichannel News, 9/13/2010 12:11:50 PM

Google's YouTube on Monday kicked off a two-day test of a live channel, another indication the search giant is looking to bolster its push into TV with content tailored to viewing in the living room.

The YouTube live streaming channel, which began broadcasting 8 a.m. (PT) on Monday, features content from four partners: Howcast, Next New Networks, Rocketboom and Young Hollywood.

"This new platform integrates live streaming directly into YouTube channels; all broadcasters need is a webcam or external USB/FireWire camera," YouTube said in a blog posting announcing the trial. The company added that based on the results of the initial live-streaming test, "we'll evaluate rolling out the platform more broadly to our partners worldwide."

The broader Google TV initiative aims to provide an Internet search and application platform for consumer electronics companies and pay-TV operators. The system, based on the Android operating system for mobile phones, is supposed to let TV viewers search across live listings, DVR recordings, and Web content, as well as run third-party apps designed for the TV.

YouTube also has introduced a feature called "Leanback," which is intended to provide a continuous stream of clips based on a user's preference for viewing on a TV set.

Meanwhile, Google has reportedly been in discussions with movie studios about offering movie rentals via YouTube.


Tuesday, September 7, 2010

Apple to buy Cirrus

There is active discussion about this rumor at:

SAN JOSE, Calif. - There are rumors that Apple Inc. could buy Cirrus Logic Inc., according to Barrons.com, which cites TheFlyOnTheWall.com as its source.

With the iPhone and iPad, Apple has become more of a systems house than a traditional OEM. Apple now designs its own complex ASICs.

It even acquired IC design houses P.A. Semi and Intrinsity to bolster its ASIC design expertise. More at:



Sunday, September 5, 2010

Synopsys 2010.09 Release

Synopsys' third tool release of 2010 is just out. Hopefully it offers better support for new FPGA chips, including Virtex-7.
