Publications


Search for “L. Janowski” found 8 publications
    DigitalF: Angewandte Informatik

    Journal article

    G. van Wallendael, N. Staelens, E. Masala, L. Janowski, K. Berger, Marcus Barkowsky

    Dreamed about training, verifying and validating your QoE model on a million videos?

    VQEG (Video Quality Experts Group) eLetter, vol. 1, no. 2, pp. 19-29

    2014


    Contribution (edited volume or conference proceedings)

    M. Pinson, C. Schmidmer, L. Janowski, R. Pepion, Q. Huynh-Thu, P. Corriveau, A. Younkin, P. Le Callet, Marcus Barkowsky, W. Ingram

    Subjective and objective evaluation of an audiovisual subjective dataset for research and development

    2013 Fifth International Workshop on Quality of Multimedia Experience (QoMEX)

    2013

    Abstract:

    In 2011, the Video Quality Experts Group (VQEG) ran subjects through the same audiovisual subjective test at six different international laboratories. That small dataset is now publicly available for research and development purposes.


    Contribution (edited volume or conference proceedings)

    Marcus Barkowsky, N. Staelens, L. Janowski

    Open collaboration on hybrid video quality models ‐ VQEG joint effort group hybrid

    2013 IEEE 15th International Workshop on Multimedia Signal Processing (MMSP)

    2013

    Abstract:

    Several factors limit advances in automating video quality measurement. Modelling the human visual system requires multi- and interdisciplinary efforts. A joint effort may bridge the large gap between the knowledge required for conducting a psychophysical experiment on isolated visual stimuli and that required for engineering a universal model for video quality estimation under real-time constraints. Verification and validation require input ranging from professional content production to innovative machine learning algorithms. Our paper aims at highlighting the complex interactions and the multitude of open questions, as well as the industrial requirements, that led to the creation of the Joint Effort Group in the Video Quality Experts Group. The paper zooms in on the first activity, the creation of a hybrid video quality model.


    Contribution (edited volume or conference proceedings)

    M. Leszczuk, L. Janowski, Marcus Barkowsky

    Freely Available Large-scale Video Quality Assessment Database in Full-HD Resolution with H.264 Coding

    Proceedings of the 2013 IEEE Globecom Workshops (GC Wkshps), Atlanta, GA, USA

    2013

    Abstract:

    Video databases often focus on a particular use case with a limited set of sequences. In this paper, a different type of database creation is proposed: an exhaustive number of test conditions will be continuously created and made freely available for objective and subjective evaluation. At the moment, the database comprises more than ten thousand JM/x264-encoded video sequences. An extensive study of the possible encoding parameter space led to a first subset selection of 1296 configurations. At the moment, only ten source sequences have been used, but extension to more than one hundred sequences is planned. Some Full-Reference (FR) and No-Reference (NR) metrics were selected and calculated. The resulting data will be freely available to the research community and possible exploitation areas are suggested.


    Contribution (edited volume or conference proceedings)

    Marcus Barkowsky, N. Staelens, L. Janowski, Y. Koudota, M. Leszczuk, M. Urvoy, P. Hummelbrunner, I. Sedano, K. Brunnström

    Subjective experiment dataset for joint development of hybrid video quality measurement algorithms

    QoEMCS 2012 ‐ Third Workshop on Quality of Experience for Multimedia Content Sharing, Berlin

    2012

    Abstract:

    The application area of an objective measurement algorithm for video quality is always limited by the scope of the video datasets that were used during its development and training. This is particularly true for measurements which rely solely on information available at the decoder side, for example hybrid models that analyze the bitstream and the decoded video. This paper proposes a framework which enables researchers to train, test and validate their algorithms on a large database of video sequences in such a way that the ‐ often limited ‐ scope of their development can be taken into consideration. A freely available video database for the development of hybrid models is described containing the network bitstreams, parsed information from these bitstreams for easy access, the decoded video sequences, and subjectively evaluated quality scores.


    Journal article

    M. Pinson, L. Janowski, R. Pepion, Q. Huynh-Thu, C. Schmidmer, P. Corriveau, A. Younkin, P. Le Callet, Marcus Barkowsky, W. Ingram

    The Influence of Subjects and Environment on Audiovisual Subjective Tests: An International Study

    IEEE Journal of Selected Topics in Signal Processing, vol. 6, no. 6, pp. 640-651

    2012

    DOI: 10.1109/JSTSP.2012.2215306

    Abstract:

    Traditionally, audio quality and video quality are evaluated separately in subjective tests. Best practices within the quality assessment community were developed before many modern mobile audiovisual devices and services came into use, such as internet video, smart phones, tablets and connected televisions. These devices and services raise unique questions that require jointly evaluating both the audio and the video within a subjective test. However, audiovisual subjective testing is a relatively under-explored field. In this paper, we address the question of determining the most suitable way to conduct audiovisual subjective testing on a wide range of audiovisual quality. Six laboratories from four countries conducted a systematic study of audiovisual subjective testing. The stimuli and scale were held constant across experiments and labs; only the environment of the subjective test was varied. Some subjective tests were conducted in controlled environments and some in public environments (a cafeteria, patio or hallway). The audiovisual stimuli spanned a wide range of quality. Results show that these audiovisual subjective tests were highly repeatable from one laboratory and environment to the next. The number of subjects was the most important factor. Based on this experiment, 24 or more subjects are recommended for Absolute Category Rating (ACR) tests. In public environments, 35 subjects were required to obtain the same Student's t-test sensitivity. The second most important variable was individual differences between subjects. Other environmental factors had minimal impact, such as language, country, lighting, background noise, wall color, and monitor calibration. Analyses indicate that Mean Opinion Scores (MOS) are relative rather than absolute. Our analyses show that the results of experiments done in pristine, laboratory environments are highly representative of those devices in actual use, in a typical user environment.


    Contribution (edited volume or conference proceedings)

    S. Tourancheau, K. Wang, J. Bulat, R. Cousseau, L. Janowski, K. Brunnström, Marcus Barkowsky

    Reproducibility of crosstalk measurements on active glasses 3D LCD displays based on temporal characterization

    Proceedings of SPIE Vol. 8288: Stereoscopic Displays and Applications XXIII

    2012

    ISBN: 9780819489357

    Abstract:

    Crosstalk is one of the main display-related perceptual factors degrading image quality and causing visual discomfort on 3D displays. It causes visual artifacts such as ghosting effects, blurring, and lack of color fidelity which are considerably annoying and can lead to difficulties to fuse stereoscopic images. On stereoscopic LCDs with shutter glasses, crosstalk is mainly due to dynamic temporal aspects: imprecise target luminance (highly dependent on the combination of left-view and right-view pixel color values in disparity regions) and synchronization issues between shutter glasses and LCD. These different factors largely influence the reproducibility of crosstalk measurements across laboratories and need to be evaluated in several different locations involving similar and differing conditions. In this paper we propose a fast and reproducible measurement procedure for crosstalk based on high-frequency temporal measurements of both display and shutter responses. It permits to fully characterize crosstalk for any right/left color combination and at any spatial position on the screen. Such a reliable objective crosstalk measurement method at several spatial positions is considered a mandatory prerequisite for evaluating the perceptual influence of crosstalk in further subjective studies.


    Contribution (edited volume or conference proceedings)

    N. Staelens, I. Sedano, Marcus Barkowsky, L. Janowski, K. Brunnström, P. Le Callet

    Standardized Toolchain and Model Development for Video Quality Assessment ‐ The Mission of the Joint Effort Group in VQEG

    Proceedings of 2011 Third International Workshop on Quality of Multimedia Experience (QoMEX), Mechelen, Belgium

    2011

    Abstract:

    Since 1997, the Video Quality Experts Group (VQEG) has been active in the field of subjective and objective video quality assessment. The group has validated competitive quality metrics throughout several projects. Each of these projects requires mandatory actions such as creating a test plan and obtaining databases consisting of degraded video sequences with corresponding subjective quality ratings. Recently, VQEG started a new open initiative, the Joint Effort Group (JEG), for encouraging joint collaboration on all mandatory actions needed to validate video quality metrics. Within the JEG, effort is made to advance the field of both subjective and objective video quality measurement by providing proper software tools and subjective databases to the community. One of the subprojects of the JEG is the joint development of a hybrid H.264/AVC objective quality metric. In this paper, we introduce the JEG and provide an overview of the different ongoing activities within this newly started group.