DigitalF: Elektrotechnik und Medientechnik
A. Aldahdooh, E. Masala, O. Janssens, G. Van Wallendael, M. Barkowsky, P. Le Callet, P. Lambert
Improved Performance Measures for Video Quality Assessment Algorithms Using Training and Validation Sets
IEEE Transactions on Multimedia, vol. 74, pp. 32-41
Due to the three-dimensional spatiotemporal regularities of natural videos and the small scale of existing video quality databases, effective objective video quality assessment (VQA) metrics are difficult to obtain but highly desirable. In this paper, we propose a general-purpose no-reference VQA framework based on weakly supervised learning with a convolutional neural network (CNN) and a resampling strategy. First, an eight-layer CNN is trained by weakly supervised learning to construct the relationship between the deformations of the three-dimensional discrete cosine transform of video blocks and the corresponding weak labels judged by a full-reference (FR) VQA metric. Thus, the CNN acquires the quality assessment capability transferred from the FR-VQA metric, and effective features of the distorted videos can be extracted through the trained network. Then, we map the frequency histogram calculated from the quality score vectors predicted by the trained network onto the perceptual quality. Specifically, to improve the performance of the mapping function, we transfer the frequency histogram of the distorted images and videos to resample the training set. The experiments are carried out on several widely used video quality assessment databases. The experimental results demonstrate that the proposed method is on a par with some state-of-the-art VQA metrics and has promising robustness.
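The histogram-based pooling step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the block scores, the bin count, and the placeholder linear mapping are all assumptions standing in for the trained CNN outputs and the learned mapping function.

```python
import numpy as np

# Hypothetical block-level quality scores as the trained CNN might
# predict them (illustrative random data, not from the paper).
rng = np.random.default_rng(0)
block_scores = rng.uniform(0.0, 1.0, size=500)

def frequency_histogram(scores, n_bins=10):
    """Normalized frequency histogram of block-level quality scores."""
    hist, _ = np.histogram(scores, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

# The histogram serves as a fixed-length feature for the whole video.
feature = frequency_histogram(block_scores)

# A regressor then maps this feature onto perceptual quality; a plain
# linear map stands in here for the learned mapping function.
weights = np.linspace(0.0, 1.0, feature.size)  # placeholder weights
predicted_quality = float(feature @ weights)
```

The key property this sketch shows is that the histogram turns a variable number of per-block predictions into a fixed-length descriptor, which makes the subsequent regression to a single perceptual score straightforward.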
DigitalF: Angewandte Informatik
Beitrag (Sammelband oder Tagungsband)
A. Aldahdooh, E. Masala, O. Janssens, G. Van Wallendael, M. Barkowsky
Comparing simple video quality measures for loss-impaired video sequences on a large-scale database
2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)
The performance of objective video quality measures is usually assessed by comparing their predictions to subjective assessment results, which are regarded as the ground truth. In this work, we propose a complementary approach to this performance evaluation by means of a large-scale database of test sequences evaluated with several objective measurement algorithms. Such an approach is expected to detect performance anomalies that could highlight shortcomings in current objective measurement algorithms. Using realistic coding and network transmission conditions, we investigate the consistency of the predictions of the different measures, as well as how much their behavior can be predicted by content, coding, and transmission features. We discuss unexpected and peculiar behaviors, and highlight how a large-scale database can help in identifying anomalies not easily found by means of subjective testing. We expect that this analysis will shed light on directions to pursue in order to overcome some of the limitations of existing reliability assessment methods for objective video quality measures.
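The cross-measure consistency check described above can be sketched with a rank correlation between two objective measures over a large set of sequences; sequences where the measures strongly disagree are candidates for anomaly inspection. The measure names and the synthetic scores below are illustrative assumptions, not data from the study.

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation between two score vectors
    (simple rank transform; ties are not specially handled)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical per-sequence scores from two objective measures on a
# large set of loss-impaired sequences (illustrative random data).
rng = np.random.default_rng(1)
measure_a = rng.normal(size=1000)
measure_b = measure_a + 0.3 * rng.normal(size=1000)  # correlated measure

# High overall rank correlation indicates consistent predictions;
# individual sequences with large rank differences flag potential
# anomalies worth closer inspection.
rho = spearman(measure_a, measure_b)
rank_gap = np.abs(np.argsort(np.argsort(measure_a))
                  - np.argsort(np.argsort(measure_b)))
suspect = np.argsort(rank_gap)[-10:]  # 10 most discordant sequences
```

A large database makes this kind of screening useful precisely because discordant sequences are rare: with thousands of conditions, even a small disagreement rate yields enough cases to study systematically.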