Full-reference video quality assessment using deep 3D convolutional neural networks
S.V.R. Dendi, G. Krishnappa
Published by Institute of Electrical and Electronics Engineers Inc.
2019
Abstract
We present a novel framework called Deep Video QUality Evaluator (DeepVQUE) for full-reference video quality assessment (FRVQA) using deep 3D convolutional neural networks (3D ConvNets). DeepVQUE is a framework complementary to traditional handcrafted-feature-based methods in that it uses deep 3D ConvNet models for feature extraction. 3D ConvNets are capable of extracting spatio-temporal features of a video that are vital for video quality assessment (VQA). Most existing FRVQA approaches operate on the spatial and temporal domains independently followed by pooling, and often ignore the crucial spatio-temporal relationship of intensities in natural videos. In this work, we pay special attention to the contribution of spatio-temporal dependencies in natural videos to quality assessment. Specifically, the proposed approach estimates the spatio-temporal quality of a video with respect to its pristine version by applying commonly used distance measures, such as the l1 or l2 norm, to the volume-wise pristine and distorted 3D ConvNet features. Spatial quality is estimated using off-the-shelf full-reference image quality assessment (FRIQA) methods. Overall video quality is estimated using support vector regression (SVR) applied to the spatio-temporal and spatial quality estimates. Additionally, we illustrate the ability of the proposed approach to localize distortions in space and time. © 2019 IEEE.
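The abstract outlines a three-stage pipeline: volume-wise 3D ConvNet feature distances for spatio-temporal quality, an off-the-shelf FRIQA measure for spatial quality, and SVR to map both to an overall score. The following Python sketch illustrates that flow under stated assumptions that are not taken from the paper: torchvision's r3d_18 is used as a stand-in 3D ConvNet backbone, PSNR as a stand-in FRIQA measure, and scikit-learn's SVR as the regressor; all function names and shapes are illustrative.

# Hedged sketch of the DeepVQUE-style pipeline described in the abstract.
# Assumptions (not from the paper): torchvision r3d_18 backbone, PSNR as the
# off-the-shelf FRIQA measure, scikit-learn SVR for the final regression.

import numpy as np
import torch
import torchvision
from sklearn.svm import SVR

# Pretrained 3D ConvNet used as a fixed spatio-temporal feature extractor.
backbone = torchvision.models.video.r3d_18(weights="DEFAULT")
backbone.fc = torch.nn.Identity()  # keep the penultimate feature vector
backbone.eval()

def spatiotemporal_distance(ref_clips, dist_clips, p=2):
    """Volume-wise distance between 3D ConvNet features of reference and
    distorted clips (tensors of shape [N_clips, 3, T, H, W], values in [0, 1])."""
    with torch.no_grad():
        f_ref = backbone(ref_clips)
        f_dist = backbone(dist_clips)
    # l1 (p=1) or l2 (p=2) norm per clip volume, averaged over the video.
    return torch.norm(f_ref - f_dist, p=p, dim=1).mean().item()

def spatial_quality(ref_frames, dist_frames):
    """Stand-in FRIQA: mean PSNR over frames (arrays of shape
    [N_frames, H, W, C], values in [0, 1])."""
    mse = np.mean((ref_frames - dist_frames) ** 2, axis=(1, 2, 3))
    return float(np.mean(10.0 * np.log10(1.0 / np.maximum(mse, 1e-8))))

def video_features(ref_clips, dist_clips, ref_frames, dist_frames):
    """Concatenate spatio-temporal and spatial quality estimates per video."""
    return [
        spatiotemporal_distance(ref_clips, dist_clips, p=1),
        spatiotemporal_distance(ref_clips, dist_clips, p=2),
        spatial_quality(ref_frames, dist_frames),
    ]

# Overall quality via SVR, as in the abstract: X holds one feature row per
# training video, y the corresponding subjective scores (e.g., DMOS).
# regressor = SVR(kernel="rbf").fit(X, y)
# predicted_quality = regressor.predict([video_features(...)])

In this sketch the backbone stays frozen, so only the lightweight SVR needs training data, which mirrors the abstract's use of a pretrained 3D ConvNet purely for feature extraction.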
About the journal
Journal: 25th National Conference on Communications, NCC 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.