Video quality measurement has been gaining attention. Video service providers, television network operators, surveillance equipment manufacturers and broadcasters need to ensure that the video products and services they offer meet minimum quality requirements. Although network bandwidth has increased, video of ever higher resolution is being carried over still-limited bandwidths. Several video quality metrics are in common use, including PSNR, VQM, MPQM, SSIM and NQM.
PSNR
PSNR is derived by setting the mean squared error (MSE) in relation to the maximum possible value of the luminance as follows:

MSE = \frac{1}{M N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( f(i,j) - F(i,j) \right)^2, \qquad PSNR = 10 \log_{10} \frac{255^2}{MSE}

where f(i, j) is the original signal at pixel (i, j), F(i, j) is the reconstructed signal, and M x N is the picture size. The result is a single number in decibels, typically ranging from 30 to 40 for medium to high quality video.
Although several objective video quality models have been developed in the past two decades, PSNR continues to be the most popular measure of the quality difference between pictures.
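As a minimal sketch of the formula above, assuming 8-bit grayscale frames (the function name psnr and the peak value of 255 are assumptions, not part of any standard API):

import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized 8-bit frames."""
    f = original.astype(np.float64)
    F = reconstructed.astype(np.float64)
    mse = np.mean((f - F) ** 2)      # mean squared error over the M x N picture
    if mse == 0:
        return float("inf")          # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

For a video sequence, PSNR is commonly computed per frame and the per-frame values are then averaged.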
Video Quality Metric (VQM)
VQM was developed by the Institute for Telecommunication Sciences (ITS) to provide an objective measurement of perceived video quality. It measures the perceptual effects of video impairments, including blurring, jerky/unnatural motion, global noise, block distortion and color distortion, and combines them into a single metric. Test results show that VQM correlates highly with subjective video quality assessment, and it has been adopted by ANSI as an objective video quality standard.
Moving Pictures Quality Metric (MPQM)
MPQM is an objective quality metric for moving pictures which incorporates two human vision characteristics, contrast sensitivity and masking. It first decomposes an original sequence and a distorted version of it into perceptual channels. A channel-based distortion measure is then computed, accounting for contrast sensitivity and masking. Finally, the data is pooled over all the channels to compute the quality rating, which is then scaled from 1 to 5 (from bad to excellent).
The actual computation of the distortion E for a given block is given by

E = \left( \frac{1}{N} \sum_{c=1}^{N} \frac{1}{X \cdot Y} \sum_{x=1}^{X} \sum_{y=1}^{Y} \left| e[c, x, y, t] \right|^{\beta} \right)^{1/\beta}

where e[c, x, y, t] is the masked error signal at position (x, y) and time t in the current block and in the channel c; X and Y are the horizontal and vertical dimensions of the blocks; N is the number of channels; \beta is a constant having the value 4. The final quality measure can be expressed either using the Masked PSNR (MPSNR) as follows

MPSNR = 10 \log_{10} \frac{255^2}{E^2}

or can be mapped to the 5-point ITU quality scale using the equation

Q = \frac{5}{1 + k E}

where k is a constant to be obtained experimentally. This can be done if the subjective quality rating Q is known for a given sequence: E has to be computed and then one solves for k from the last equation.
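The pooling and mapping steps above can be sketched as follows, assuming the perceptual channel decomposition and masking have already produced a masked error block of shape (N channels, X, Y); the array layout, the function name and the value of k used here are illustrative assumptions:

import numpy as np

def mpqm_block_scores(masked_error, k=0.1, beta=4.0, peak=255.0):
    """Minkowski pooling of a masked error block of shape (N, X, Y) into the
    distortion E, the Masked PSNR and a 1-to-5 quality rating."""
    n_channels, X, Y = masked_error.shape
    # Per-channel Minkowski sum over the block, normalized by the block size.
    per_channel = np.sum(np.abs(masked_error) ** beta, axis=(1, 2)) / (X * Y)
    # Average over the N channels, then take the 1/beta root.
    E = per_channel.mean() ** (1.0 / beta)
    mpsnr = 10.0 * np.log10(peak ** 2 / E ** 2)
    quality = 5.0 / (1.0 + k * E)    # mapping to the 5-point quality scale
    return E, mpsnr, quality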
Structural Similarity Index (SSIM)
SSIM measures structural distortion, based on the premise that the human visual system is highly specialized in extracting structural information from the viewing field rather than in extracting errors.
If you let x = {x_i | i = 1, 2, ..., N} be the original signal and y = {y_i | i = 1, 2, ..., N} be the distorted signal, the structural similarity index can be calculated as:

SSIM(x, y) = \frac{(2 \mu_x \mu_y + C_1)(2 \sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}

with
- \mu_x the average of x;
- \mu_y the average of y;
- \sigma_x^2 the variance of x;
- \sigma_y^2 the variance of y;
- \sigma_{xy} the covariance of x and y;
- C_1 = (k_1 L)^2 and C_2 = (k_2 L)^2, two variables to stabilize the division with weak denominator;
- L the dynamic range of the pixel values (typically this is 2^{bits per pixel} - 1, i.e. 255 for 8-bit images);
- k_1 = 0.01 and k_2 = 0.03 by default.
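A minimal sketch of this formula applied globally to two grayscale images, assuming 8-bit data so that L = 255 (practical implementations usually evaluate it over local windows and average the resulting map):

import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Global SSIM between two equally sized grayscale images (no windowing)."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    C1, C2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = np.mean((x - mu_x) * (y - mu_y))
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))

In practice SSIM is commonly computed over sliding windows (for example an 8x8 or an 11x11 Gaussian-weighted window) and the mean of the per-window values is reported as the image score.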
Noise Quality Measure (NQM)
In NQM, a degraded image is modeled as an original image that has been subjected to linear frequency distortion and additive noise injection. These two sources of degradation are considered independent and are decoupled into two quality measures: a distortion measure (DM) of the effect of frequency distortion, and a noise quality measure (NQM) of the effect of additive noise. The NQM takes into account: (1) variation in contrast sensitivity with distance and image dimensions; (2) variation in the local luminance mean; (3) contrast interaction between spatial frequencies; (4) contrast masking effects. The DM is computed in three steps. First, the frequency distortion in the degraded image is found. Second, the deviation of this frequency distortion from an all-pass response of unity gain is computed. Finally, the deviation is weighted by a model of the frequency response of the human visual system.
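As a toy illustration of those three DM steps only (this is not the published NQM/DM implementation; the FFT-ratio estimate of the frequency distortion and the contrast-sensitivity weighting below are simplified assumptions):

import numpy as np

def toy_distortion_measure(original, degraded):
    """Toy sketch of the three DM steps on two equally sized grayscale images."""
    O = np.fft.fft2(original.astype(np.float64))
    D = np.fft.fft2(degraded.astype(np.float64))

    # Step 1: estimate the frequency distortion of the degradation channel.
    eps = 1e-8
    H = D / (O + eps)

    # Step 2: deviation of that response from an all-pass response of unity gain.
    deviation = np.abs(1.0 - np.abs(H))

    # Step 3: weight the deviation by a crude stand-in for the human contrast
    # sensitivity function (placeholder shape, not the published HVS model),
    # then pool into a single number.
    rows, cols = original.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)          # normalized radial spatial frequency
    csf = (0.1 + f) * np.exp(-10.0 * f)     # assumed band-pass-like weighting
    return np.sum(csf * deviation) / np.sum(csf)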