Publications


The search for “[G.] [Wallendael]” returned 3 publications
    Digital / Angewandte Informatik

    Contribution (edited volume or conference proceedings)

    A. Aldahlooh, G. Wallendael, E. Masala, L. Tiotsop, Marcus Barkowsky

    Computing Quality-of-Experience Ranges for Video Quality Estimation

    Proceedings of the 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX) [5-7 June 2019; Berlin]

    2019

    DOI: 10.1109/QoMEX.2019.8743303

    Abstract

    Typically, the measurement of the Quality of Experience for video sequences aims at a single value, in most cases the Mean Opinion Score (MOS). Predicting this value with various algorithms has been widely studied; however, deviation from the MOS is often treated as an unpredictable error. The approach in this contribution estimates intervals of video quality instead of the single-valued MOS. Well-known video quality estimators are fused to output a lower and an upper bound for the expected video quality, on the basis of a model derived from a well-known subjectively annotated dataset. Results on different datasets provide insight into the suitability of the well-known estimators for this particular approach.
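The fusion idea described in the abstract can be illustrated with a minimal sketch: each objective estimator's score is mapped to the MOS scale and the envelope of the mapped scores gives the quality interval. The linear calibration coefficients and raw scores below are hypothetical placeholders; the paper derives its actual model from a subjectively annotated dataset.

```python
# Hedged sketch: fusing several objective quality scores into a MOS interval.
# Calibration pairs (slope, intercept) and raw scores are invented for illustration.

def fuse_to_interval(scores, calibration):
    """Map each estimator's score to the MOS scale and take the envelope."""
    mapped = [a * s + b for s, (a, b) in zip(scores, calibration)]
    return min(mapped), max(mapped)

# Hypothetical per-estimator linear calibrations onto a MOS scale of 1..5
calibration = [(0.04, 1.0),   # e.g. a PSNR-like score in dB
               (4.0, 1.0),    # e.g. an SSIM-like score in [0, 1]
               (-0.04, 5.0)]  # e.g. a distortion measure (lower is better)

scores = [38.0, 0.82, 22.0]   # hypothetical raw estimator outputs
low, high = fuse_to_interval(scores, calibration)
print(f"expected MOS in [{low:.2f}, {high:.2f}]")  # → expected MOS in [2.52, 4.28]
```

The width of the resulting interval reflects how much the calibrated estimators disagree on a given sequence.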

    Digital / Angewandte Informatik

    Journal article

    A. Aldahlooh, G. Wallendael, P. Lambert, E. Masala, Marcus Barkowsky

    Improving relevant subjective testing for validation: Comparing machine learning algorithms for finding similarities in VQA datasets using objective measures

    Signal Processing: Image Communication, vol. 74 (May), pp. 32-41

    2019

    DOI: 10.1016/j.image.2019.01.004

    Abstract

    Subjective quality assessment is a necessary activity to validate objective measures or to assess the performance of innovative video processing technologies. However, designing and performing comprehensive tests requires expertise and a large effort, especially for the execution part. In this work we propose a methodology that, given a set of processed video sequences prepared by video quality experts, attempts to reduce the number of subjective tests by selecting a subset of minimum size which is expected to yield the same conclusions as the larger set. To this aim, we combine information coming from different types of objective quality metrics with clustering and machine learning algorithms that perform the actual selection, thereby reducing the required subjective assessment effort while trying to preserve the variety of content and conditions needed to ensure the validity of the conclusions. Experiments are conducted on one of the largest publicly available subjectively annotated video sequence datasets. As performance criterion, we chose the validation criteria for video quality measurement algorithms established by the International Telecommunication Union.
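The subset-selection step the abstract describes can be sketched as follows: each sequence is represented by a feature vector of objective metric scores, and a clustering-style procedure keeps a small set of mutually dissimilar representatives. Greedy farthest-point selection is used here as a cheap stand-in; the paper itself compares several clustering and machine learning algorithms for this step, and the feature vectors and subset size below are hypothetical.

```python
# Hedged sketch: pick a small, diverse subset of sequences based on their
# objective-metric feature vectors. All numbers are invented for illustration.
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_representatives(features, k):
    """Greedy farthest-point selection: a simple proxy for clustering."""
    chosen = [0]  # start from the first sequence
    while len(chosen) < k:
        # pick the sequence farthest from everything already chosen
        best = max((i for i in range(len(features)) if i not in chosen),
                   key=lambda i: min(dist(features[i], features[j]) for j in chosen))
        chosen.append(best)
    return sorted(chosen)

# Hypothetical per-sequence feature vectors (e.g. PSNR-like and SSIM-like scores)
features = [(30, 0.80), (31, 0.82), (45, 0.98), (44, 0.97), (20, 0.55)]
print(select_representatives(features, 3))  # → [0, 2, 4]
```

Sequences 1 and 3 are skipped because they sit next to already-selected ones in feature space, which mirrors the goal of dropping redundant test conditions.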

    Digital / Elektrotechnik und Medientechnik

    Journal article

    G. Wallendael, A. Aldahlooh, P. Lambert, E. Masala, O. Janssens, Marcus Barkowsky, P. Callet

    Improved Performance Measures for Video Quality Assessment Algorithms Using Training and Validation Sets

    IEEE Transactions on Multimedia, vol. 74, pp. 32-41

    2018

    Abstract

    Due to the three-dimensional spatiotemporal regularities of natural videos and small-scale video quality databases, effective objective video quality assessment (VQA) metrics are difficult to obtain but highly desirable. In this paper, we propose a general-purpose no-reference VQA framework that is based on weakly supervised learning with a convolutional neural network (CNN) and a resampling strategy. First, an eight-layer CNN is trained by weakly supervised learning to construct the relationship between the deformations of the three-dimensional discrete cosine transform of video blocks and corresponding weak labels judged by a full-reference (FR) VQA metric. Thus, the CNN obtains the quality assessment capacity converted from the FR-VQA metric, and the effective features of the distorted videos can be extracted through the trained network. Then, we map the frequency histogram calculated from the quality score vectors predicted by the trained network onto the perceptual quality. Specifically, to improve the performance of the mapping function, we transfer the frequency histogram of the distorted images and videos to resample the training set. The experiments are carried out on several widely used video quality assessment databases. The experimental results demonstrate that the proposed method is on a par with some state-of-the-art VQA metrics and has promising robustness.
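The histogram-based pooling stage mentioned in the abstract can be sketched in a few lines: the per-block quality scores predicted by the network are summarized as a frequency histogram, and a learned mapping turns that histogram into a single perceptual quality value. The bin edges, weights, and block scores below are hypothetical placeholders, and the learned mapping is reduced to a linear regressor for illustration.

```python
# Hedged sketch of histogram-based quality pooling. All parameters are
# invented stand-ins for the learned mapping described in the abstract.

def histogram(scores, edges):
    """Normalized frequency histogram of per-block quality scores."""
    counts = [0] * (len(edges) - 1)
    for s in scores:
        for i in range(len(counts)):
            if edges[i] <= s < edges[i + 1]:
                counts[i] += 1
                break
    return [c / len(scores) for c in counts]

def predict_quality(hist, weights, bias):
    """Stand-in for the learned mapping (a linear regressor over the histogram)."""
    return bias + sum(w * h for w, h in zip(weights, hist))

block_scores = [0.2, 0.4, 0.45, 0.7, 0.9, 0.95]      # hypothetical CNN outputs
hist = histogram(block_scores, [0.0, 0.25, 0.5, 0.75, 1.01])
quality = predict_quality(hist, weights=[1.0, 2.5, 3.5, 4.8], bias=0.0)
print(round(quality, 3))  # → 3.183
```

Pooling over a histogram rather than a plain average lets the mapping weight low-quality blocks differently from high-quality ones, which is the motivation the abstract gives for this stage.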