Comparing VVC, HEVC and AV1 using Objective and Subjective Assessments

Fan Zhang, Angeliki Katsenou, Mariana Afonso, Goce Dimitrov and David Bull

ABSTRACT

In this paper, the performance of three state-of-the-art video codecs: the High Efficiency Video Coding (HEVC) Test Model (HM), AOMedia Video 1 (AV1) and the Versatile Video Coding Test Model (VTM), is evaluated using both objective and subjective quality assessments. Nine source sequences were carefully selected to offer both diversity and representativeness, and versions at different resolutions were encoded by all three codecs at pre-defined target bitrates. The compression efficiency of the three codecs is evaluated using two commonly used objective quality metrics, PSNR and VMAF. The subjective quality of their reconstructed content is also evaluated through psychophysical experiments. Furthermore, HEVC and AV1 are compared within a dynamic optimization framework (convex hull rate-distortion optimization) across resolutions over a wider bitrate range, using both objective and subjective evaluations. Finally, the computational complexities of the three tested codecs are compared. The subjective assessments indicate that, for the tested versions, there is no significant difference between AV1 and HM, while the tested VTM version shows significant enhancements. The selected source sequences, compressed video content and associated subjective data are available online, offering a resource for compression performance evaluation and objective video quality assessment.
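The convex hull rate-distortion optimization mentioned above can be illustrated with a minimal sketch: given (bitrate, quality) operating points measured at several encoded resolutions, the best cross-resolution operating points lie on the upper convex hull of the combined rate-quality cloud. The bitrate and VMAF values below are purely illustrative and are not measurements from this study.

```python
def upper_convex_hull(points):
    """Return the upper convex hull of (rate, quality) points,
    sorted by increasing rate (monotone-chain upper envelope)."""
    pts = sorted(set(points))
    hull = []
    for p in pts:
        # Pop points that would make the envelope concave from above.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Cross product >= 0 means hull[-1] lies on or below
            # the segment from hull[-2] to p, so it is dominated.
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# Illustrative (bitrate in kbps, VMAF) points from three resolutions.
points_by_resolution = {
    "1080p": [(2000, 80.0), (4000, 92.0), (8000, 96.0)],
    "720p":  [(1000, 75.0), (2000, 85.0), (4000, 90.0)],
    "540p":  [(500, 68.0),  (1000, 78.0), (2000, 83.0)],
}
all_points = [p for pts in points_by_resolution.values() for p in pts]
hull = upper_convex_hull(all_points)
```

With these illustrative numbers, the hull picks the lower resolutions at low bitrates and switches to 1080p at higher rates, which is exactly the resolution-switching behaviour the convex hull framework exploits.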

Parts of this work have been presented at the IEEE International Conference on Image Processing (ICIP) 2019 in Taipei and at the Alliance for Open Media (AOM) Symposium 2019 in San Francisco.

SOURCE SEQUENCES

DATABASE

[DOWNLOAD] subjective data.

[DOWNLOAD] all videos from University of Bristol Research Data Storage Facility.

If this content has been mentioned in a research publication, please give credit to the University of Bristol by referencing the following papers:

[1] A. V. Katsenou, F. Zhang, M. Afonso and D. R. Bull, “A Subjective Comparison of AV1 and HEVC for Adaptive Video Streaming,” 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 2019, pp. 4145-4149.

[2] F. Zhang, A. V. Katsenou, M. Afonso, G. Dimitrov and D. R. Bull, “Comparing VVC, HEVC and AV1 using Objective and Subjective Assessments”, arXiv:2003.10282 [eess.IV], 2020.

High Frame Rate Video

As the demand for higher quality and more immersive video content increases, the need to extend the current video parameter space of spatial resolutions and display sizes to include, among other things, a wider colour gamut, higher dynamic range and higher frame rates becomes ever greater. Increased frame rates can provide a more realistic portrayal of a scene by reducing motion blur, while also minimizing temporal aliasing and the associated visual artefacts.

The BVI-HFR video database is the first publicly available high frame rate video database, and contains 22 unique HD video sequences at frame rates up to 120 Hz. Sample frames from some of the video sequences can be seen below:

sparkler hamster catch flowers bobblehead cyclist


Subjective evaluations by 51 participants of the sequences in the BVI-HFR video database have shown a clear relationship between frame rate and perceived quality (MOS), although with diminishing returns at higher frame rates. The results also showed a degree of content dependency; for example, the benefits of higher frame rates are more likely to be observed in video sequences with high motion speeds (e.g. those captured with a moving camera).


Publications

A. Mackin, F. Zhang and D. Bull, “A Study of Subjective Video Quality at Various Frame Rates,” 2015 IEEE International Conference on Image Processing (ICIP), 2015.


What’s on TV: A Large-Scale Quantitative Characterisation of Modern Broadcast Video Content

Video databases, used for benchmarking and evaluating the performance of new video technologies, should represent the full breadth of consumer video content. The parameterisation of video databases using low-level features has proven to be an effective way of quantifying the diversity within a database. However, without a comprehensive understanding of the importance and relative frequency of these features in the content people actually consume, the utility of such information is limited. Conducted in collaboration with the BBC, “What’s on TV” is a large-scale analysis of the low-level features that exist in contemporary broadcast video. The project aims to establish an efficient set of features that can be used to characterise the spatial and temporal variation in modern consumer content. The meaning and relative significance of this feature set, together with the shape of their frequency distributions, represent highly valuable information for researchers wanting to model the diversity of modern consumer content in representative video databases.
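The paper defines the specific feature set used in the study; purely as an illustration of the kind of low-level spatial and temporal descriptors involved, the widely used ITU-T P.910 spatial information (SI) and temporal information (TI) measures can be sketched as follows (a minimal NumPy implementation, assuming a list of greyscale frames):

```python
import numpy as np

def sobel_magnitude(frame):
    """3x3 Sobel gradient magnitude via array slicing (valid region only)."""
    f = frame.astype(np.float64)
    gx = (f[:-2, 2:] + 2 * f[1:-1, 2:] + f[2:, 2:]
          - f[:-2, :-2] - 2 * f[1:-1, :-2] - f[2:, :-2])
    gy = (f[2:, :-2] + 2 * f[2:, 1:-1] + f[2:, 2:]
          - f[:-2, :-2] - 2 * f[:-2, 1:-1] - f[:-2, 2:])
    return np.sqrt(gx ** 2 + gy ** 2)

def si_ti(frames):
    """ITU-T P.910 spatial (SI) and temporal (TI) information:
    the maxima over time of the standard deviation of the Sobel-filtered
    frame and of the inter-frame difference, respectively."""
    si = max(float(np.std(sobel_magnitude(f))) for f in frames)
    ti = max(float(np.std(b.astype(np.float64) - a.astype(np.float64)))
             for a, b in zip(frames[:-1], frames[1:]))
    return si, ti
```

SI captures spatial detail (edges and texture) while TI captures motion activity; plotting a database's sequences in the SI-TI plane is a common first check of its coverage of the content space.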

Publications:

F. Mercer Moss, F. Zhang, R. Baddeley and D. Bull, “What’s on TV: A Large-Scale Quantitative Characterisation of Modern Broadcast Video Content,” 2016 IEEE International Conference on Image Processing (ICIP), 2016.


Optimal presentation duration for video quality assessment

Video content distributors, codec developers and researchers in related fields often rely on subjective assessments to ensure that their video processing procedures result in satisfactory quality. The current 10-second recommendation for the length of test sequences in subjective video quality assessment studies, however, has recently been questioned. Not only do sequences of this length depart from modern cinematic shooting styles, but the use of shorter sequences would also enable substantial efficiency improvements to the data collection process. This project, therefore, aims to explore the impact of using different length video sequences upon viewer rating behaviour, and the consequent savings that could be made in time, labour and money.

Publications:

 

Felix Mercer Moss, Ke Wang, Fan Zhang, Roland Baddeley and David R. Bull, On the optimal presentation duration for subjective video quality assessment, IEEE Transactions on Circuits and Systems for Video Technology, Volume PP, Issue 99, July 2015.

Felix Mercer Moss, Chun-Ting Yeh, Fan Zhang, Roland Baddeley and David R. Bull, Support for reduced presentation durations in subjective video quality assessment, Signal Processing: Image Communication, Volume 48, October 2016, Pages 38-49.

Perceptual Quality Metrics (PVM)

RESEARCHERS

Dr. Fan (Aaron) Zhang

INVESTIGATOR

Prof. David Bull, Dr. Dimitris Agrafiotis and Dr. Roland Baddeley

DATES

2012-2015

FUNDING

ORSAS and EPSRC

SOURCE CODE 

PVM Matlab code Download.

INTRODUCTION

It is known that the human visual system (HVS) employs independent processes (distortion detection and artefact perception, also often referred to as near-threshold and supra-threshold distortion perception) to assess video quality at various distortion levels. Visual masking effects also play an important role in video distortion perception, especially within spatial and temporal textures.

Figure: Algorithmic diagram for PVM.
It is well known that small differences in textured content can be tolerated by the HVS. In this work, we employ the dual-tree complex wavelet transform (DT-CWT) in conjunction with motion analysis to characterise this tolerance within spatial and temporal textures. The DT-CWT has been found to be particularly powerful in this context due to its shift invariance and orientation selectivity properties. In highly distorted video content, particularly compressed material, blurring is one of the most commonly occurring artefacts. This is detected in our approach by comparing high frequency subband coefficients from the reference and distorted frames, also facilitated by the DT-CWT. This measure is motion-weighted in order to simulate the tolerance of the HVS to blurring in content with high temporal activity. Inspired by the previous work of Chandler and Hemami, and of Larson and Chandler, thresholded differences (defined as noticeable distortion) and blurring artefacts are non-linearly combined using a modified geometric mean model, in which the proportion of each component is adaptively tuned. The performance of the proposed video metric is assessed and validated using the VQEG FRTV Phase I and the LIVE video databases, and shows clear improvements in correlation with subjective scores over existing metrics such as PSNR, SSIM, VIF, VSNR, VQM and MOVIE, and in many cases over STMAD.
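The exact combination model and its parameters are given in the referenced papers; purely as a schematic illustration (not the actual PVM formula), an adaptively weighted geometric mean of a noticeable-distortion term and a blurring term can be sketched as:

```python
def combine_components(noticeable_distortion, blur, alpha):
    """Schematic weighted geometric mean of two distortion components,
    both assumed normalised to [0, 1]. The weight alpha in [0, 1] is
    the adaptively tuned proportion: alpha = 1 uses only the
    noticeable-distortion term, alpha = 0 only the blur term, and
    alpha = 0.5 reduces to the ordinary geometric mean."""
    return noticeable_distortion ** alpha * blur ** (1.0 - alpha)
```

The multiplicative form means either component can dominate when it is severe, which is one motivation for geometric-mean pooling over a simple weighted sum.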

RESULTS

Figure: Scatter plots of subjective DMOS versus different video metrics on the VQEG database.
Figure: Scatter plots of subjective DMOS versus different video metrics on the LIVE video database.

REFERENCE

  1. A Perception-based Hybrid Model for Video Quality Assessment, F. Zhang and D. Bull, IEEE T-CSVT, June 2016.
  2. Quality Assessment Methods for Perceptual Video Compression, F. Zhang and D. Bull, ICIP, Melbourne, Australia, September 2013.

 

Parametric Video Coding

RESEARCHERS

Dr. Fan (Aaron) Zhang

INVESTIGATOR

Prof. David Bull, Dr. Dimitris Agrafiotis and Dr. Roland Baddeley

DATES

2008-2015

FUNDING

ORSAS and EPSRC

INTRODUCTION

In most cases, the target of video compression is to provide good subjective quality rather than simply to produce pictures that are as similar as possible to the originals. Based on this premise, it is possible to conceive of a compression scheme in which an analysis/synthesis framework is employed rather than the conventional energy minimization approach. If such a scheme were practical, it could offer lower bitrates through reduced residual and motion vector coding, using a parametric approach to describe texture warping and/or synthesis.


Instead of encoding whole images or prediction residuals after translational motion estimation, our algorithm employs a perspective motion model to warp static textures and utilises texture synthesis to create dynamic textures. Texture regions are segmented using features derived from the complex wavelet transform and further classified according to their spatial and temporal characteristics. Moreover, a compatible artefact-based video metric (AVM) is proposed with which to evaluate the quality of the reconstructed video. This is also employed in-loop to prevent warping and synthesis artefacts. The proposed algorithm has been integrated into an H.264 video coding framework. The results show significant bitrate savings of up to 60% compared with H.264 at the same objective quality (based on AVM) and subjective scores.
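As a minimal sketch of the warping step (not the actual codec implementation), a perspective motion model can be applied by inverse-mapping each output pixel through a 3x3 homography; the example below uses nearest-neighbour sampling, whereas a practical encoder would use sub-pixel interpolation:

```python
import numpy as np

def warp_perspective(src, H, out_shape):
    """Warp a greyscale image with a 3x3 homography H using inverse
    mapping and nearest-neighbour sampling. Pixels that map outside
    the source are left at zero."""
    h_out, w_out = out_shape
    Hinv = np.linalg.inv(H)
    out = np.zeros(out_shape, dtype=src.dtype)
    # Homogeneous coordinates (x, y, 1) for every output pixel.
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    sx, sy, sw = Hinv @ coords
    # Dehomogenise and round to the nearest source pixel.
    sx = np.round(sx / sw).astype(int)
    sy = np.round(sy / sw).astype(int)
    valid = (0 <= sx) & (sx < src.shape[1]) & (0 <= sy) & (sy < src.shape[0])
    out.ravel()[valid] = src[sy[valid], sx[valid]]
    return out
```

For example, the homography [[1, 0, 1], [0, 1, 0], [0, 0, 1]] translates the texture one pixel to the right; a full perspective model additionally captures rotation, zoom and shear of the textured region.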

RESULTS


REFERENCE

  1. Perception-oriented Video Coding based on Image Analysis and Completion: a Review. P. Ndjiki-Nya, D. Doshkov, H. Kaprykowsky, F. Zhang, D. Bull, T. Wiegand, Signal Processing: Image Communication, July 2012.
  2. A Parametric Framework For Video Compression Using Region-based Texture Models. F. Zhang and D. Bull, IEEE J-STSP, November 2011.