Publications


Search for "E. Masala" returned 10 publications
    DigitalF: Applied Computer Science

    Contribution (edited volume or conference proceedings)

    L. Tiotsop, E. Masala, A. Aldahdooh, G. Van Wallendael, Marcus Barkowsky

    Computing Quality-of-Experience Ranges for Video Quality Estimation

    Proceedings of the 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX) [5-7 June 2019; Berlin]

    2019

    DOI: 10.1109/QoMEX.2019.8743303


    Typically, the measurement of the Quality of Experience for video sequences aims at a single value, in most cases the Mean Opinion Score (MOS). Predicting this value using various algorithms has been widely studied. However, deviation from the MOS is often handled as an unpredictable error. The approach in this contribution estimates intervals of video quality instead of the single-valued MOS. Well-known video quality estimators are fused together to output a lower and an upper bound for the expected video quality, on the basis of a model derived from a well-known subjectively annotated dataset. Results on different datasets provide insight into the suitability of the well-known estimators for this particular approach.
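The interval idea described in this abstract can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's trained model: the fusion weights, the linear combination, and the clamping to the 1-5 MOS scale are placeholders standing in for the model derived from the annotated dataset.

```python
# Hypothetical sketch: fuse several objective quality scores into a
# [lower, upper] MOS range instead of a single MOS estimate.
# Weights are illustrative, NOT the paper's trained coefficients.

def mos_interval(scores, lower_w, upper_w):
    """Fuse per-estimator scores into a (lower, upper) MOS bound pair."""
    lo = sum(w * s for w, s in zip(lower_w, scores))
    hi = sum(w * s for w, s in zip(upper_w, scores))
    # Clamp to the 1..5 MOS scale and keep the interval ordered.
    lo, hi = sorted((max(1.0, min(5.0, lo)), max(1.0, min(5.0, hi))))
    return lo, hi

# Example: scores from three estimators (e.g. PSNR-, SSIM-, VMAF-like),
# assumed already rescaled to the MOS range.
lo, hi = mos_interval([3.2, 3.8, 3.5],
                      lower_w=[0.4, 0.3, 0.2],
                      upper_w=[0.3, 0.4, 0.4])
```

A downstream application could then treat any subjective score falling inside [lo, hi] as consistent with the prediction, rather than penalizing every deviation from a single MOS value.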

    DigitalF: Applied Computer Science

    Journal article

    A. Aldahdooh, E. Masala, G. Van Wallendael, P. Lambert, Marcus Barkowsky

    Improving relevant subjective testing for validation: Comparing machine learning algorithms for finding similarities in VQA datasets using objective measures

    Signal Processing: Image Communication, vol. 74, pp. 32-41

    2019

    DOI: 10.1016/j.image.2019.01.004


    Subjective quality assessment is a necessary activity to validate objective measures or to assess the performance of innovative video processing technologies. However, designing and performing comprehensive tests requires expertise and a large effort, especially for the execution part. In this work we propose a methodology that, given a set of processed video sequences prepared by video quality experts, attempts to reduce the number of subjective tests by selecting a subset with minimum size which is expected to yield the same conclusions as the larger set. To this aim, we combine information coming from different types of objective quality metrics with clustering and machine learning algorithms that perform the actual selection, therefore reducing the required subjective assessment effort while trying to preserve the variety of content and conditions needed to ensure the validity of the conclusions. Experiments are conducted on one of the largest publicly available subjectively annotated video sequence datasets. As the performance criterion, we chose the validation criteria for video quality measurement algorithms established by the International Telecommunication Union.
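The subset-selection idea can be sketched with a toy stand-in. The sketch below uses greedy farthest-point selection over objective-metric feature vectors; the paper's actual pipeline combines clustering and machine learning, so the selection rule, the feature choice, and the tiny data here are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's pipeline): pick a small, diverse
# subset of processed video sequences, using their objective-metric
# scores as feature vectors, via greedy farthest-point selection.
import math

def select_subset(features, k):
    """features: {seq_id: (metric1, metric2, ...)}; returns k diverse ids."""
    def dist(a, b):
        return math.dist(features[a], features[b])
    ids = list(features)
    chosen = [ids[0]]                      # seed with an arbitrary sequence
    while len(chosen) < k:
        # Add the sequence farthest from everything already chosen.
        nxt = max((i for i in ids if i not in chosen),
                  key=lambda i: min(dist(i, c) for c in chosen))
        chosen.append(nxt)
    return chosen

# Feature vectors could be (PSNR, SSIM) pairs rescaled to [0, 1].
subset = select_subset({"a": (0.1, 0.2), "b": (0.9, 0.8),
                        "c": (0.15, 0.25), "d": (0.5, 0.5)}, k=2)
```

The point of any such rule is the same as in the paper: keep the subset spread out over content and conditions so that conclusions drawn from it transfer to the full set.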

    DigitalF: Applied Computer Science

    Journal article

    A. Aldahdooh, E. Masala, G. Van Wallendael, Marcus Barkowsky

    Framework for reproducible objective video quality research with case study on PSNR implementations

    Digital Signal Processing, vol. 77, pp. 195-206

    2018

    DOI: 10.1016/j.dsp.2017.09.013


    Reproducibility is an important and recurrent issue in objective video quality research because the presented algorithms are complex, depend on specific implementations in software packages, or their parameters need to be trained on a particular, sometimes unpublished, dataset. Textual descriptions often lack the required detail, and even for the simple Peak Signal to Noise Ratio (PSNR) several mutations exist for images and videos, in particular considering the choice of the peak value and the temporal pooling. This work presents results achieved through the analysis of objective video quality measures evaluated on a reproducible large-scale database containing about 60,000 HEVC coded video sequences. We focus on PSNR, one of the most widespread measures, considering its two most common definitions. The sometimes largely different results achieved by applying the two definitions highlight the importance of strict reproducibility of research in video quality evaluation in particular. Reproducibility is also often a question of computational power, and PSNR is a computationally inexpensive algorithm running faster than real time. Complex algorithms cannot be reasonably developed and evaluated on the aforementioned 160 hours of video sequences. Therefore, techniques to select subsets of coding parameters are then introduced. Results show that an accurate selection can preserve the variety of the results seen on the large database, but with much lower complexity. Finally, note that our accompanying SoftwareX paper presents the software framework which allows full reproducibility of all the research results presented here, and shows how the same framework can be used to produce derived work for other measures or indexes proposed by other researchers, whose integration into our open framework we strongly encourage.
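The two temporal-pooling "mutations" of PSNR mentioned above can be made concrete. The sketch below assumes 8-bit content (peak fixed at 255) and toy grayscale frames; it is a minimal illustration of why the definitions diverge, not the framework's implementation.

```python
# Two common PSNR definitions for video: average the per-frame PSNR over
# time, or compute PSNR of the temporally pooled MSE. They agree only
# when per-frame MSE is constant across the sequence.
import math

def frame_mse(ref, deg):
    """Mean squared error between two frames (lists of pixel rows)."""
    pix = [(r - d) ** 2 for rr, dr in zip(ref, deg) for r, d in zip(rr, dr)]
    return sum(pix) / len(pix)

def psnr(mse, peak=255.0):
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def psnr_mean_of_frames(ref_seq, deg_seq):
    """Definition 1: temporal mean of per-frame PSNR values."""
    return sum(psnr(frame_mse(r, d))
               for r, d in zip(ref_seq, deg_seq)) / len(ref_seq)

def psnr_of_pooled_mse(ref_seq, deg_seq):
    """Definition 2: PSNR of the MSE pooled over all frames."""
    mses = [frame_mse(r, d) for r, d in zip(ref_seq, deg_seq)]
    return psnr(sum(mses) / len(mses))

# One nearly clean frame and one badly distorted frame:
seq_ref = [[[0, 0]], [[0, 0]]]
seq_deg = [[[1, 1]], [[10, 10]]]
mean_psnr = psnr_mean_of_frames(seq_ref, seq_deg)
pooled_psnr = psnr_of_pooled_mse(seq_ref, seq_deg)
```

Because the logarithm is applied before versus after pooling, Definition 1 rewards sequences with a few very clean frames, which is exactly the kind of divergence the paper quantifies at scale.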

    DigitalF: Applied Computer Science

    Journal article

    A. Aldahdooh, E. Masala, G. Van Wallendael, Marcus Barkowsky

    Reproducible research framework for objective video quality measures using a large-scale database approach

    SoftwareX, vol. 8, pp. 64-68

    2018

    DOI: 10.1016/j.softx.2017.09.004


    This work presents a framework to facilitate reproducibility of research in video quality evaluation. Its initial version is built around the JEG-Hybrid database of HEVC coded video sequences. The framework is modular, organized in the form of pipelined activities, which range from the tools needed to generate the whole database from reference signals up to the analysis of the video quality measures already present in the database. Researchers can re-run, modify and extend any module, starting from any point in the pipeline, while always achieving perfect reproducibility of the results. The modularity of the structure makes it possible to work on subsets of the database, since some analyses might otherwise be too computationally intensive. For this purpose, the framework also includes a software module to compute interesting subsets, in terms of coding conditions, of the whole database. An example shows how the framework can be used to investigate how small differences in the definition of the widespread PSNR metric can yield very different results, discussed in more detail in our accompanying research paper Aldahdooh et al. (2018). This further underlines the importance of reproducibility to allow comparing different research work with high confidence. To the best of our knowledge, this framework is the first attempt to bring exact end-to-end reproducibility to video quality evaluation research.
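The pipelined-modules idea can be sketched generically. The stage names and toy transformations below are hypothetical, not the framework's actual API; the point is only the property the abstract describes: each stage consumes the previous stage's output, so a run can restart from any cached intermediate artifact with identical results.

```python
# Minimal sketch of a restartable pipeline (names are hypothetical,
# NOT the framework's API): stages compose left to right, and a run
# can begin at any stage given that stage's cached input.

def run_pipeline(stages, data, start=0):
    """Apply stages[start:] in order; earlier stages are assumed cached."""
    for stage in stages[start:]:
        data = stage(data)
    return data

stages = [
    lambda seqs: [s + ".hevc" for s in seqs],   # encode reference signals
    lambda bits: [(b, len(b)) for b in bits],   # parse bitstream info
    lambda info: [size for _, size in info],    # compute a toy measure
]

# Full run from the reference signals:
full = run_pipeline(stages, ["src1", "src2"])
# Re-run from stage 1 on the cached output of stage 0:
partial = run_pipeline(stages, ["src1.hevc", "src2.hevc"], start=1)
```

As long as every stage is deterministic, `full` and `partial` are identical, which is what lets researchers extend one module without regenerating the whole database.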

    DigitalF: Electrical Engineering and Media Technology

    Journal article

    A. Aldahdooh, E. Masala, O. Janssens, G. Van Wallendael, Marcus Barkowsky, P. Le Callet, P. Lambert

    Improved Performance Measures for Video Quality Assessment Algorithms Using Training and Validation Sets

    IEEE Transactions on Multimedia, vol. 74, pp. 32-41

    2018


    Due to the three-dimensional spatiotemporal regularities of natural videos and small-scale video quality databases, effective objective video quality assessment (VQA) metrics are difficult to obtain but highly desirable. In this paper, we propose a general-purpose no-reference VQA framework that is based on weakly supervised learning with convolutional neural network (CNN) and resampling strategy. First, an eight-layer CNN is trained by weakly supervised learning to construct the relationship between the deformations of the three dimensional discrete cosine transform of video blocks and corresponding weak labels judged by a full-reference (FR) VQA metric. Thus, the CNN obtains the quality assessment capacity converted from the FR-VQA metric, and the effective features of the distorted videos can be extracted through the trained network. Then, we map the frequency histogram calculated from the quality score vectors predicted by the trained network onto the perceptual quality. Specially, to improve the performance of the mapping function, we transfer the frequency histogram of the distorted images and videos to resample the training set. The experiments are carried out on several widely used video quality assessment databases. The experimental results demonstrate that the proposed method is on a par with some state-of-the-art VQA metrics and has promising robustness.

    DigitalF: Applied Computer Science

    Contribution (edited volume or conference proceedings)

    A. Aldahdooh, E. Masala, G. Van Wallendael, Marcus Barkowsky

    Comparing temporal behavior of fast objective video quality measures on a large-scale database

    2016 Picture Coding Symposium (PCS)

    2016


    In many application scenarios, video quality assessment is required to be fast and reasonably accurate. The characterization of objective algorithms by subjective assessment is well established but limited due to the small number of test samples. Verification using large-scale objectively annotated databases provides a complementary solution. In this contribution, three simple but fast measures are compared regarding their agreement on a large-scale database. In contrast to subjective experiments, not only sequence-wise but also framewise agreement can be analyzed. Insight is gained into the behavior of the measures with respect to 5952 different coding configurations of High Efficiency Video Coding (HEVC). Consistency within a video sequence is analyzed as well as across video sequences. The results show that the occurrence of discrepancies depends mostly on the configured coding structure and the source content. The detailed observations stimulate questions on the combined usage of several video quality measures for encoder optimization.
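The framewise agreement analyzed in this contribution can be illustrated with a small stand-in: check whether two fast per-frame measures order every pair of frames the same way (a Kendall-style concordance ratio). The scores below are made up, and this simple pairwise rule is an assumption, not the paper's exact analysis.

```python
# Hedged illustration of framewise agreement between two fast measures:
# the fraction of frame pairs that both measures rank the same way.

def pairwise_agreement(scores_a, scores_b):
    """Concordance ratio over all frame pairs; 1.0 = perfect agreement."""
    n, agree, total = len(scores_a), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            da = scores_a[i] - scores_a[j]
            db = scores_b[i] - scores_b[j]
            total += 1
            if da * db > 0 or (da == 0 and db == 0):
                agree += 1
    return agree / total

ratio = pairwise_agreement([40.1, 38.7, 41.0, 39.2],   # e.g. PSNR per frame
                           [0.95, 0.90, 0.97, 0.93])   # e.g. SSIM per frame
```

Unlike subjective experiments, such per-frame comparisons can be computed over every frame of a large-scale database, which is what makes the discrepancy analysis in the paper feasible.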

    DigitalF: Applied Computer Science

    Contribution (edited volume or conference proceedings)

    A. Aldahdooh, E. Masala, O. Janssens, G. Van Wallendael, Marcus Barkowsky

    Comparing simple video quality measures for loss-impaired video sequences on a large-scale database

    2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)

    2016


    The performance of objective video quality measures is usually identified by comparing their predictions to subjective assessment results which are regarded as the ground truth. In this work we propose a complementary approach for this performance evaluation by means of a large-scale database of test sequences evaluated with several objective measurement algorithms. Such an approach is expected to detect performance anomalies that could highlight shortcomings in current objective measurement algorithms. Using realistic coding and network transmission conditions, we investigate the consistency of the prediction of different measures as well as how much their behavior can be predicted by content, coding and transmission features, discussing unexpected and peculiar behaviors, and highlighting how a large-scale database can help in identifying anomalies not easily found by means of subjective testing. We expect that this analysis will shed light on directions to pursue in order to overcome some of the limitations of existing reliability assessment methods for objective video quality measures.

    DigitalF: Applied Computer Science

    Contribution (edited volume or conference proceedings)

    G. Van Wallendael, N. Staelens, E. Masala, Marcus Barkowsky

    Full-HD HEVC-Encoded Video Quality Assessment Database

    Ninth International Workshop on Video Processing and Quality Metrics (VPQM) [Feb 2015; Chandler, AZ, USA]

    2015

    DigitalF: Applied Computer Science

    Journal article

    Marcus Barkowsky, E. Masala, G. Van Wallendael, K. Brunnström, N. Staelens, P. Le Callet

    Objective Video Quality Assessment – Towards Large Scale Video Database Enhanced Model Development

    IEICE Transactions on Communications, vol. E98-B, no. 1, pp. 2-11

    2015


    The current development of video quality assessment algorithms suffers from the lack of available video sequences for training, verification and validation to determine and enhance the algorithm's application scope. The Joint Effort Group of the Video Quality Experts Group (VQEG-JEG) is currently driving efforts towards the creation of large scale, reproducible, and easy to use databases. These databases will contain bitstreams of recent video encoders (H.264, H.265), packet loss impairment patterns and impaired bitstreams, pre-parsed bitstream information into files in XML syntax, and well-known objective video quality measurement outputs. The database is continuously updated and enlarged using reproducible processing chains. Currently, more than 70,000 sequences are available for statistical analysis of video quality measurement algorithms. New research questions are posed as the database is designed to verify and validate models on a very large scale, testing and validating various scopes of applications, while subjective assessment has to be limited to a comparably small subset of the database. Special focus is given on the principles guiding the database development, and some results are given to illustrate the practical usefulness of such a database with respect to the detailed new research questions.

    DigitalF: Applied Computer Science

    Journal article

    G. Van Wallendael, N. Staelens, E. Masala, L. Janowski, K. Berger, Marcus Barkowsky

    Dreamed about training, verifying and validating your QoE model on a million videos?

    VQEG (Video Quality Experts Group) eLetter, vol. 1, no. 2, pp. 19-29

    2014