Search for "W. Ingram" returned 2 publications
    DigitalF: Angewandte Informatik

    Contribution (edited volume or conference proceedings)

    M. Pinson, C. Schmidmer, L. Janowski, R. Pepion, Q. Huynh-Thu, P. Corriveau, A. Younkin, P. Le Callet, Marcus Barkowsky, W. Ingram

    Subjective and objective evaluation of an audiovisual subjective dataset for research and development

    2013 Fifth International Workshop on Quality of Multimedia Experience (QoMEX)



    In 2011, the Video Quality Experts Group (VQEG) ran subjects through the same audiovisual subjective test at six different international laboratories. That small dataset is now publicly available for research and development purposes.



    M. Pinson, L. Janowski, R. Pepion, Q. Huynh-Thu, C. Schmidmer, P. Corriveau, A. Younkin, P. Le Callet, Marcus Barkowsky, W. Ingram

    The Influence of Subjects and Environment on Audiovisual Subjective Tests: An International Study

    IEEE Journal of Selected Topics in Signal Processing, vol. 6, no. 6, pp. 640-651


    DOI: 10.1109/JSTSP.2012.2215306


    Traditionally, audio quality and video quality are evaluated separately in subjective tests. Best practices within the quality assessment community were developed before many modern mobile audiovisual devices and services came into use, such as internet video, smart phones, tablets and connected televisions. These devices and services raise unique questions that require jointly evaluating both the audio and the video within a subjective test. However, audiovisual subjective testing is a relatively under-explored field. In this paper, we address the question of determining the most suitable way to conduct audiovisual subjective testing on a wide range of audiovisual quality. Six laboratories from four countries conducted a systematic study of audiovisual subjective testing. The stimuli and scale were held constant across experiments and labs; only the environment of the subjective test was varied. Some subjective tests were conducted in controlled environments and some in public environments (a cafeteria, patio or hallway). The audiovisual stimuli spanned a wide range of quality. Results show that these audiovisual subjective tests were highly repeatable from one laboratory and environment to the next. The number of subjects was the most important factor. Based on this experiment, 24 or more subjects are recommended for Absolute Category Rating (ACR) tests. In public environments, 35 subjects were required to obtain the same Student's t-test sensitivity. The second most important variable was individual differences between subjects. Other environmental factors had minimal impact, such as language, country, lighting, background noise, wall color, and monitor calibration. Analyses indicate that Mean Opinion Scores (MOS) are relative rather than absolute. Our analyses show that the results of experiments done in pristine, laboratory environments are highly representative of those devices in actual use, in a typical user environment.