Publications


The search for “[Barkowsky] [Marcus]” returned 7 publications
    Digital · Angewandte Informatik

    Contribution (edited volume or conference proceedings)

    Katharina Heydn, Marc-Philipp Dietrich, Marcus Barkowsky, Götz Winterfeldt, S. Mammen, A. Nüchter

    The Golden Bullet: A Comparative Study for Target Acquisition, Pointing and Shooting

    Proceedings of the 2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games) [4-6 Sept. 2019; Vienna, Austria]

    2019

    DOI: 10.1109/VS-Games.2019.8864589

    In this study, we evaluate an interaction sequence performed with six modalities consisting of desktop-based (DB) and virtual reality (VR) environments using different input devices. For the study, we implemented a vertical prototype of a first-person shooter (FPS) game scenario, focusing on the genre-defining point-and-shoot mechanic. We introduce measures to evaluate the success of the corresponding interaction sequence (times for target acquisition, pointing, shooting, overall net time, and number of shots) and conduct experiments to record and compare the users' performances. We show that interaction using head-tracking for landscape rotation performs similarly to input with a screen-centered mouse and also yielded the shortest times in target acquisition and pointing. Although head-tracking for target acquisition and pointing was most efficient, subjects rated the modality using head-tracking for target acquisition and a 3DOF controller for pointing best. Eye-tracking (ET) yields promising results, but calibration issues need to be resolved to enhance reliability and overall user experience.
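
    The timing measures listed in this abstract (target acquisition, pointing, shooting, overall net time) can be derived from a timestamped per-trial event log. A minimal sketch, in which the event names are hypothetical and not taken from the paper:

```python
def interaction_times(events):
    """Derive per-trial timing measures from (timestamp_s, event_name) pairs.

    The event names are illustrative assumptions, not the paper's labels;
    each event is assumed to occur exactly once per trial.
    """
    t = {name: ts for ts, name in events}
    return {
        "target_acquisition": t["target_acquired"] - t["target_shown"],
        "pointing": t["aim_on_target"] - t["target_acquired"],
        "shooting": t["shot_fired"] - t["aim_on_target"],
        "overall_net": t["shot_fired"] - t["target_shown"],
    }

# One hypothetical trial, timestamps in seconds
trial = [(0.0, "target_shown"), (0.4, "target_acquired"),
         (0.7, "aim_on_target"), (0.9, "shot_fired")]
times = interaction_times(trial)
```

    The number of shots per trial would simply be counted from repeated `shot_fired` events in a real log.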

    Digital · Angewandte Informatik

    Contribution (edited volume or conference proceedings)

    L. Tiotsop, E. Masala, A. Aldahdooh, G. Van Wallendael, Marcus Barkowsky

    Computing Quality-of-Experience Ranges for Video Quality Estimation

    Proceedings of the 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX) [5-7 June 2019; Berlin]

    2019

    DOI: 10.1109/QoMEX.2019.8743303

    Typically, the measurement of Quality of Experience for video sequences aims at a single value, in most cases the Mean Opinion Score (MOS). Predicting this value using various algorithms has been widely studied; however, deviation from the MOS is often handled as an unpredictable error. The approach in this contribution estimates intervals of video quality instead of the single-valued MOS. Well-known video quality estimators are fused to output a lower and an upper border for the expected video quality, on the basis of a model derived from a well-known subjectively annotated dataset. Results on different datasets provide insight into the suitability of the well-known estimators for this particular approach.
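
    The fusion step described here — combining several single-valued quality estimates into a lower and upper bound — can be sketched as follows. The estimator names, the per-estimator error margins, and the clamp to a 1–5 MOS scale are assumptions for illustration, not the paper's fitted model:

```python
def qoe_range(predictions, margins):
    """Fuse calibrated per-estimator MOS predictions into a (lower, upper) interval.

    predictions: estimator name -> predicted MOS (assumed already mapped to the MOS scale)
    margins:     estimator name -> typical absolute error of that estimator on a
                 subjectively annotated training set (an assumption in this sketch)
    """
    lo = min(p - margins[e] for e, p in predictions.items())
    hi = max(p + margins[e] for e, p in predictions.items())
    # Clamp to the 1..5 MOS scale
    return max(1.0, lo), min(5.0, hi)

# Hypothetical estimators and margins
low, high = qoe_range({"psnr": 3.0, "ssim": 3.4}, {"psnr": 0.5, "ssim": 0.3})
```

    A sequence that all estimators agree on thus gets a narrow interval, while disagreement or unreliable estimators widen it.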

    Digital · Angewandte Informatik

    Journal article

    A. Aldahdooh, E. Masala, G. Van Wallendael, P. Lambert, Marcus Barkowsky

    Improving relevant subjective testing for validation: Comparing machine learning algorithms for finding similarities in VQA datasets using objective measures

    Signal Processing: Image Communication, vol. 74, pp. 32-41

    2019

    DOI: 10.1016/j.image.2019.01.004

    Subjective quality assessment is a necessary activity to validate objective measures or to assess the performance of innovative video processing technologies. However, designing and performing comprehensive tests requires expertise and a large effort, especially for the execution part. In this work we propose a methodology that, given a set of processed video sequences prepared by video quality experts, attempts to reduce the number of subjective tests by selecting a subset with minimum size which is expected to yield the same conclusions as the larger set. To this aim, we combine information coming from different types of objective quality metrics with clustering and machine learning algorithms that perform the actual selection, thereby reducing the required subjective assessment effort while trying to preserve the variety of content and conditions needed to ensure the validity of the conclusions. Experiments are conducted on one of the largest publicly available subjectively annotated video sequence datasets. As performance criterion, we chose the validation criteria for video quality measurement algorithms established by the International Telecommunication Union.
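
    The selection idea — keeping a small subset of sequences that still spans the variety of the full set, as judged by objective metrics — can be approximated with a greedy farthest-point pass over the metric feature vectors. This is a simple stand-in for the clustering and machine-learning selection the paper actually studies, with made-up sequence names and metric values:

```python
import math

def select_subset(features, k):
    """Greedily pick k items whose objective-metric feature vectors are maximally
    spread out; a stand-in for the paper's clustering-based selection.

    features: sequence name -> tuple of objective metric scores (illustrative)
    """
    names = list(features)
    chosen = [names[0]]  # seed with an arbitrary first item
    while len(chosen) < k:
        # Next item: the one farthest from everything already chosen
        best = max((n for n in names if n not in chosen),
                   key=lambda n: min(math.dist(features[n], features[c])
                                     for c in chosen))
        chosen.append(best)
    return chosen

# Two near-duplicate pairs: a subset of size 2 should keep one of each pair
subset = select_subset({"seq_a": (0.0, 0.0), "seq_b": (0.1, 0.0),
                        "seq_c": (10.0, 10.0), "seq_d": (10.0, 10.1)}, 2)
```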

    Digital · Angewandte Informatik

    Journal article

    T. Mizdos, Marcus Barkowsky, M. Uhrina, P. Pocta

    Linking Bitstream Information to QoE: A Study on Still Images Using HEVC Intra Coding

    Advances in Electrical and Electronic Engineering, vol. 17, no. 4, pp. 436-445

    2019

    DOI: 10.15598/aeee.v17i4.3625

    The coding tools used in image and video encoders aim at high perceptual quality for low bitrates. Analyzing the results of the encoders in terms of quantization parameter, image partitioning, prediction modes, or residuals may provide important insight into the link between those tools and human perception. As a first step, this contribution analyzes the possibility of transcoding reference images of three well-known image databases, i.e. IRCCyN/IVC, LIVE and TID2013, from their original, older formats to HEVC, thus creating a homogeneous database of 327 HEVC-encoded images accompanied by bitstream parameters and values obtained from objective and subjective assessments. Secondly, it analyzes some of the HEVC intra coding parameters regarding their influence on image quality by using machine learning, namely Support Vector Regression.
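
    The final step — learning a mapping from bitstream parameters to quality — uses Support Vector Regression in the paper. As a dependency-free sketch of the same idea, here is an ordinary-least-squares stand-in relating a single bitstream parameter (the quantization parameter, with made-up example values) to a quality score:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b.

    A deliberately simple stand-in for the SVR model the paper uses.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical training pairs: HEVC quantization parameter vs. subjective score
qp = [22, 27, 32, 37]
mos = [4.5, 4.0, 3.0, 2.0]
slope, intercept = fit_line(qp, mos)
```

    The negative slope reflects the expected trend (coarser quantization, lower quality); a real model would take the full parameter vector and a nonlinear regressor.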

    Digital · Elektrotechnik und Medientechnik

    Journal article

    K. Brunnström, Marcus Barkowsky

    Statistical quality of experience analysis: on planning the sample size and statistical significance testing

    Journal of Electronic Imaging, vol. 27, no. 5

    2018

    DOI: 10.1117/1.JEI.27.5.053013

    This paper analyzes how an experimenter can balance errors in subjective video quality tests between the statistical power of finding an effect if it is there and not claiming an effect if none is there, i.e., balancing Type I and Type II errors. The risk of committing Type I errors increases with the number of comparisons performed in statistical tests. We show that when controlling for this while keeping the power of the experiment at a reasonably high level, it is unlikely that the number of test subjects normally used and recommended by the International Telecommunication Union (ITU), i.e., 15, is sufficient, whereas the number used by the Video Quality Experts Group (VQEG), i.e., 24, is more likely to be sufficient. Examples are also given for the influence of Type I error on the statistical significance of comparing objective metrics by correlation. We also present a comparison between parametric and nonparametric statistics. The comparison targets the question of whether we would reach different conclusions on the statistical difference between the video quality ratings of different video clips in a subjective test, based on a comparison between Student's t-test and the Mann–Whitney U-test. We found that there was hardly a difference when few comparisons are compensated for, i.e., almost the same conclusions are reached. When the number of comparisons is increased, larger and larger differences between the two methods are revealed. In these cases, the parametric t-test yields clearly more significant cases than the nonparametric test, which makes it more important to investigate whether the assumptions for performing a certain test are met.
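
    The interplay the abstract describes — Type I risk growing with the number of comparisons, and sample size keeping power up — can be made concrete with a normal-approximation sample-size calculation under a Bonferroni correction. The effect size and target power below are arbitrary example values, not figures from the paper:

```python
import math
from statistics import NormalDist

def subjects_per_group(effect_size, alpha=0.05, power=0.8, comparisons=1):
    """Normal-approximation sample size per group for a two-sample comparison,
    with alpha Bonferroni-corrected for the number of planned comparisons."""
    a = alpha / comparisons          # Bonferroni correction controls Type I error
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - a / 2)   # two-sided test
    z_beta = z.inv_cdf(power)        # guards against Type II error
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)
```

    With a single comparison and a large effect (d = 0.8) this lands in the mid-twenties, close to the VQEG number of 24 discussed in the abstract; correcting for ten comparisons pushes the requirement well beyond it.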

    Digital · Elektrotechnik und Medientechnik

    Journal article

    A. Aldahdooh, E. Masala, O. Janssens, G. Van Wallendael, Marcus Barkowsky, P. Le Callet, P. Lambert

    Improved Performance Measures for Video Quality Assessment Algorithms Using Training and Validation Sets

    IEEE Transactions on Multimedia

    2018

    Due to the three-dimensional spatiotemporal regularities of natural videos and small-scale video quality databases, effective objective video quality assessment (VQA) metrics are difficult to obtain but highly desirable. In this paper, we propose a general-purpose no-reference VQA framework that is based on weakly supervised learning with a convolutional neural network (CNN) and a resampling strategy. First, an eight-layer CNN is trained by weakly supervised learning to construct the relationship between the deformations of the three-dimensional discrete cosine transform of video blocks and corresponding weak labels judged by a full-reference (FR) VQA metric. Thus, the CNN obtains the quality assessment capacity converted from the FR-VQA metric, and the effective features of the distorted videos can be extracted through the trained network. Then, we map the frequency histogram calculated from the quality score vectors predicted by the trained network onto the perceptual quality. Specifically, to improve the performance of the mapping function, we transfer the frequency histogram of the distorted images and videos to resample the training set. The experiments are carried out on several widely used video quality assessment databases. The experimental results demonstrate that the proposed method is on a par with some state-of-the-art VQA metrics and has promising robustness.
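
    The histogram step — turning the per-block score vector predicted by the network into a normalized frequency histogram before mapping it to perceptual quality — can be sketched as follows; the bin count and the score range are assumptions, not values from the paper:

```python
def score_histogram(block_scores, bins=10, lo=0.0, hi=1.0):
    """Normalized frequency histogram of per-block quality scores in [lo, hi]."""
    counts = [0] * bins
    for s in block_scores:
        # Map the score to a bin index; scores at the upper edge go to the last bin
        idx = min(bins - 1, int((s - lo) / (hi - lo) * bins))
        counts[idx] += 1
    return [c / len(block_scores) for c in counts]

hist = score_histogram([0.1, 0.4, 0.9], bins=2)
# A learned mapping function would then regress such histograms onto a MOS value
```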

    Digital · Angewandte Gesundheitswissenschaften · Angewandte Informatik · Elektrotechnik und Medientechnik · IQW

    Journal article

    Andreas Gegenfurtner, Armin Eichinger, Richard Latzel, Marc-Philipp Dietrich, Marcus Barkowsky, Alexandra Glufke, Angelika Stadler, Wolfgang Stern

    Mobiles Eye-Tracking in den angewandten Wissenschaften

    Bavarian Journal of Applied Sciences, vol. 4, no. 1, pp. 370-395

    2018

    DOI: 10.25929/bjas.v4i1.54

    Mobile eye tracking is more popular than ever as a research method and is gaining importance in various fields of the applied sciences. This contribution discusses how the recording and analysis of gaze movements is used in mobility, usability engineering, sports science, augmented/mixed/virtual reality, and medicine and continuing medical education. The contribution is structured in three parts: the first part explains the fundamentals of eye tracking; the second part illustrates the use of mobile eye tracking in selected fields of the applied sciences; and a concluding third part outlines potentials and risks as well as future lines of research, in order to further establish mobile eye tracking as a digital research method.