Publications


5833 publications found
    Sustainable · Angewandte Naturwissenschaften und Wirtschaftsingenieurwesen · IPH Teisnach

    Contribution (edited volume or conference proceedings)

    Gerald Fütterer, W. Krais, A. Engelbrecht, A. Sperl, S. Killinger, M. Werni

    Abschattungsfreies Multi-Schiefspiegel-Teleskop als studentisches Entwicklungsprojekt

    DGaO Proceedings zur 119. Jahrestagung in Aalen (22.-26.05.2018) 2018

    2018

    Abstract:

    The Faculty of Applied Natural Sciences and Industrial Engineering at the Technische Hochschule Deggendorf teaches a broad spectrum of knowledge. To make this hands-on, a telescope construction project was launched. The Technology Campus Teisnach provides the basis for manufacturing and measuring high-precision telescope optics. The starting parameters are a primary mirror diameter of 400 mm and the goal of matching the optical imaging performance of systems available on the market. The optical design is based on a subset of the parameter space published by M. Brunn in 1989. The concept was later built by D. Stevick as an f/12 system (with reference to the work of M. Paul, 1935). The THD project started with a comparison of f/7 systems implemented in Zemax. The imaging performance was compared over a field of 0.7°. The mechanical design includes FEM simulation of thermal effects on lightweighted mirrors. Different tubes were compared with one another, including a CFRP monocoque tube. A further point is the design of the tracking system. The current state of the development is presented.

    Sustainable · Angewandte Naturwissenschaften und Wirtschaftsingenieurwesen · IPH Teisnach

    Contribution (edited volume or conference proceedings)

    Gerald Fütterer, W. Krais, A. Engelbrecht, A. Sperl, S. Killinger, M. Werni

    Developing a four-tilted-mirror telescope as a student project

    Optics Education and Outreach V, vol. 10741

    2018

    ISBN: 9781510620537

    DOI: 10.1117/12.2320542

    Abstract:

    The Faculty of Applied Natural Sciences and Industrial Engineering, which is part of the Deggendorf Institute of Technology (DIT), transfers a broad spectrum of knowledge to the students. Edifying the interrelations that exist between seemingly isolated fields of knowledge is a permanent process. In order to make this practical, a telescope construction project was launched. The Technology Campus Teisnach bundles capacities for process development, production and measurement of high-precision optics, including telescope optics, which qualifies the campus as the base of the in-house project. Fixed boundary conditions are, e.g., a 400 mm diameter of the primary mirror M1 and the objective to realize an imaging performance equivalent to commercial telescopes. Furthermore, an unobscured tilted-mirror system should be realized. The optical design, chosen as the result of an analysis of the state of the art, is based on a subset of the parameter space published in 1989 by M. Brunn [1, 2]. The concept was later built by D. Stevick as an f/12 system (with reference to the work of M. Paul, 1935) [3]. The DIT project started with a comparison of f/7 systems, which were implemented in the optical design software Zemax. The imaging performance was compared within a field of view of 0.7°. The mechanical design includes FEM simulation of thermal effects on lightweighted mirrors. Different tubes were compared, including a carbon-fiber-reinforced polymer (CFRP) monocoque tube. Another task is the realization of fast and precise tracking. The state of the development is set out.
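
    The fixed boundary conditions above pin down a few derived quantities. A minimal back-of-the-envelope sketch in Python, under the assumption that f/7 refers to the system focal ratio over the 400 mm aperture; beyond that, the numbers are not taken from the paper:

        import math

        # Derived figures for a 400 mm f/7 system: focal length f = D * N,
        # and the linear extent of a 0.7 deg field in the focal plane.
        d_mm, f_number, field_deg = 400.0, 7.0, 0.7
        focal_mm = d_mm * f_number                      # 2800 mm
        field_mm = focal_mm * math.radians(field_deg)   # ~34 mm
        print(f"focal length = {focal_mm:.0f} mm, field = {field_mm:.1f} mm")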

    Sustainable · Angewandte Naturwissenschaften und Wirtschaftsingenieurwesen · IPH Teisnach

    Contribution (edited volume or conference proceedings)

    Gerald Fütterer

    CSLM illumination for 1D and 2D encoded holographic 3D displays

    Illumination Optics V; SPIE Illumination Optics Conference; SPIE Optical Systems Design (OSD) [May 14-16, 2018; Frankfurt, Germany], vol. 10693

    2018

    ISBN: 9781510619234

    DOI: 10.1117/12.2312745

    Abstract:

    To leave the path of classic holography and limit the space-bandwidth product of the holographic reconstruction is one way to enable interactive real-time holographic 3D displays. Thus, a couple of major problems - among several others - can be reduced to a practical level. This holds, e.g., for the computation power, the data transfer rate and the pixel count of the spatial light modulator (SLM) used. Although this idea is almost twenty years old, the maximum time span of IP protection, displays based on space-bandwidth-limited CGH reconstruction, which can also be referred to as space-bandwidth-limited reconstruction of wave front segments, are still not on the market. There are several technological reasons for that. However, the technological barriers can be tackled gradually. One problem to be solved is the illumination of the entrance plane of the preferable complex-valued spatial light modulator (CSLM). Here, CSLM means that both the phase and the amplitude of each pixel are modulated. The display diagonals of desktop and TV-type CSLMs might be, e.g., 32 and 65 inches respectively. In other words, reasonably large collimated illumination wave fields are mandatory. In addition, a small form factor is a must-have in order to obtain commercial success. The solution is an optical system design based on volume gratings that exploit Bragg diffraction. Classic refractive optics fails here. In other words, Bragg-diffraction-based volume gratings are key components of illumination units of holographic 3D displays. They can be used within a parameter space that cannot be addressed by surface-relief-type diffraction optics. But their layout depends on the parameters of the illumination wave field, which has to be tailored with regard to the optical system of the discrete, e.g. 1D- or 2D-encoded, holographic 3D display. This will be described in more detail. The example used for the description is a double-wedge-type backlight unit. Furthermore, it will be explained why the use of complex-valued secondary light sources is a must-have in holographic 3D displays. For this, a short explanation of coherent retinal inter-object-point crosstalk will be given too. Finally, the description of the wave field shaping (WFS), which is required in order to form the optimized complex-valued light source planes, is provided. In other words, a description of a tailored coherence preparation is given, which up to now is not state of the art. The cause-and-effect relationship of the light propagating from the primary light sources, which are lasers, to the final receptor, which is the retina, will be pointed out. Although this tailored partially coherent illumination totally differs from the state of the art of information displays, it might help to understand a technology which will come in the next decades.
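
    As a small illustration of why the volume-grating layout is tied to the illumination wave field, the following sketch evaluates the standard Bragg condition lambda = 2 n Lambda sin(theta_B) for a volume grating. The wavelength, grating period and refractive index are illustrative assumptions, not values from the paper:

        import math

        def bragg_angle_deg(wavelength_nm, period_nm, n_medium):
            """Internal Bragg angle from lambda = 2 * n * Lambda * sin(theta_B)."""
            s = wavelength_nm / (2.0 * n_medium * period_nm)
            if not 0.0 < s <= 1.0:
                raise ValueError("no Bragg solution for these parameters")
            return math.degrees(math.asin(s))

        # Illustrative numbers: 532 nm laser, 500 nm grating period,
        # photopolymer with n = 1.5 (all assumed).
        print(f"{bragg_angle_deg(532.0, 500.0, 1.5):.2f} deg")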

    Sustainable · Angewandte Naturwissenschaften und Wirtschaftsingenieurwesen · IPH Teisnach

    Contribution (edited volume or conference proceedings)

    Gerald Fütterer

    Optimization of the complex coherence function for diffraction-based wavefront transformations

    Unconventional Optical Imaging

    2018

    ISBN: 9781510618800

    DOI: 10.1117/12.2307245

    Abstract:

    Partial coherence is used in a plurality of applications: magnifying microscopic imaging, interferometric measurement, lithographic imaging, CGH-based wave front shaping, interference lithography and space-bandwidth-limited wave front reconstruction, just to name a few. In some applications the primary light source is characterized by a limited coherence length and an extended angular spectrum of plane waves, which has to be narrowed, e.g. if an excimer laser is used. Sometimes the angular spectrum of plane waves of the primary light source has to be increased in order to be practical. In general there are several possibilities: the primary light source can be used directly, the system has to be adapted, or the coherence function Γ has to be tailored in order to meet the specific requirements. Almost all embodiments come with only small changes of the light source's coherence properties. For example, using a spectral bandpass filter or limiting the size of the light source seems to be the standard solution for almost everything. However, more advanced tailoring of the complex-valued coherence function Γ leads to an increased image quality, e.g. in interferometers but not limited to them; it reduces background noise, decouples Fizeau cavities, or enables completely new illumination and imaging system designs that provide unique features. This aspect will be discussed herein. Furthermore, the propagation of the complex coherence will be taken into account. This is done in order to provide defined conditions in defined planes of imaging devices. In other words, the usage of the Wiener-Khintchin theorem and the van Cittert-Zernike theorem is just a part of the system analysis and system optimization that has to be done. Although generic approaches are used, discrete light source layouts are strongly related to the discrete optical devices that make use of them. The specific tailoring of the complex coherence function, which is related to the space-bandwidth-limited reconstruction of wave front segments, also referred to as space-bandwidth-limited CGH reconstruction, will be described in more detail. For this type of real-time dynamic imaging two major problems - among several others - have to be solved. One problem is the huge computation power and the other one is the coherent retinal cross talk of adjacent image points, which are reconstructed in the image volume. The disclosed layouts of tailored secondary light sources are based on the Wiener-Khintchin theorem and the van Cittert-Zernike theorem. Both problems mentioned above can be solved. Tailored complex-valued light sources reduce the required computation power by enabling a reduced coherent overlay of sub-CGH areas. Furthermore, they reduce the coherent retinal cross talk of dynamic real-time space-bandwidth-limited CGH reconstruction, which is used in advanced imaging applications, too. This results in an increased image quality of imaging based on partially coherent wave field reconstruction.
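
    A minimal numerical illustration of the van Cittert-Zernike theorem invoked above: far from a spatially incoherent source, the complex degree of coherence is the normalized Fourier transform of the source intensity distribution, so shaping the (secondary) source shapes Γ. The 1D top-hat source and all sampling parameters below are assumptions for illustration only:

        import numpy as np

        # 1D incoherent source: a 0.4 mm wide top-hat intensity profile (assumed).
        x = np.linspace(-1e-3, 1e-3, 2048)              # source coordinate [m]
        intensity = (np.abs(x) < 0.2e-3).astype(float)

        # Van Cittert-Zernike: the complex degree of coherence in the far field
        # is the normalized Fourier transform of the source intensity.
        gamma = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(intensity)))
        gamma /= gamma[gamma.size // 2]                 # normalize: |gamma(0)| = 1

        # |gamma| quantifies the mutual coherence of two points as a function of
        # their separation; tailoring `intensity` tailors this function.
        print(np.abs(gamma[gamma.size // 2 : gamma.size // 2 + 5]))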

    Digital · Maschinenbau und Mechatronik

    Contribution (edited volume or conference proceedings)

    Gabriel Herl, Jochen Hiller, A. Stock, T. Sauer

    Edge preserving compression of CT scans using wavelets

    SHM-NDT 2018 International Symposium on Structural Health Monitoring and Nondestructive Testing 4-5 Oct 2018, Saarbrücken – Germany

    2018

    Abstract:

    This work addresses the efficient storage of computed tomography (CT) data with an emphasis on surface quality. Industrial dimensional metrology often requires high measurement accuracy, and we show that this is retained when using wavelet-based compression methods. The applied techniques include a tensor-product wavelet transform and soft wavelet shrinkage. In tests on real objects, we compared dimensional CT measurements of compressed and uncompressed volumes. We were able to reduce the necessary storage space significantly with a minimal loss of accuracy. For a multi-sphere phantom, we decreased the storage space to 4.7% (from 638 MB to 30 MB) with an average deviation of less than 1 µm from the original volume.
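
    A minimal sketch of the scheme described above - a tensor-product (separable) wavelet transform followed by soft shrinkage - using the PyWavelets library. The wavelet, decomposition level and threshold are illustrative assumptions, not the parameters used in the paper:

        import numpy as np
        import pywt  # PyWavelets

        def compress_volume(vol, wavelet="db2", level=3, rel_thresh=0.02):
            """Separable 3-D wavelet transform plus soft shrinkage of the
            detail coefficients (assumed parameters, for illustration)."""
            coeffs = pywt.wavedecn(vol, wavelet=wavelet, level=level)
            t = rel_thresh * np.abs(vol).max()
            # coeffs[0] is the approximation band; leave it untouched and
            # shrink only the detail bands, where most coefficients drop to
            # exactly zero and become cheap to store in sparse/entropy-coded form.
            shrunk = [coeffs[0]] + [
                {k: pywt.threshold(v, t, mode="soft") for k, v in d.items()}
                for d in coeffs[1:]
            ]
            rec = pywt.waverecn(shrunk, wavelet=wavelet)
            return rec[tuple(slice(s) for s in vol.shape)]  # crop any padding

        vol = np.random.rand(64, 64, 64).astype(np.float32)
        rec = compress_volume(vol)
        print(float(np.abs(rec - vol).mean()))  # mean reconstruction error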

    Digital · Maschinenbau und Mechatronik

    Contribution (edited volume or conference proceedings)

    Gabriel Herl, Jochen Hiller, A. Stock, T. Sauer

    Metal artifact reduction by fusion of CT scans from different positions using the unfiltered backprojection

    8th Conference on Industrial Computed Tomography (iCT 2018), 6-9 Feb 2018, Wels, Austria

    2018

    Abstract:

    Metal objects or metal parts in an object are still a major problem in X-ray computed tomography (CT) because of so-called metal artifacts. We propose a new method - a multipositional data fusion - for automatically fusing multiple CT volumes acquired at different positions in order to reduce these metal artifacts. After scanning a specimen several times at different positions and reconstructing every scan (e.g. by filtered backprojection), we also perform an unfiltered backprojection. Based on the assumption that metal artifacts occur most where X-rays are strongly attenuated, the unfiltered backprojection is used to autonomously estimate the likelihood of metal artifacts in every voxel of every scan. The different volumes are registered and then fused by a weighted sum that prefers voxels with low values in the unfiltered backprojection results. In our tests on real objects, our method fully automatically created optimized volumes with significantly fewer metal artifacts. The multipositional data fusion was compared with the commercial multi-spectra fusion of Werth Messtechnik GmbH and outperformed it in one of the use cases.
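
    A minimal sketch of the fusion step described above: already-registered volumes are combined by a weighted sum that prefers voxels with low unfiltered-backprojection values. The inverse weighting used here is one plausible instance of such a preference, not necessarily the exact weighting of the paper:

        import numpy as np

        def fuse_volumes(recons, ubps, eps=1e-6):
            """Weighted fusion of registered CT volumes: voxels with a low
            unfiltered-backprojection (UBP) value, i.e. voxels less likely
            to be corrupted by metal artifacts, receive a high weight."""
            weights = [1.0 / (u + eps) for u in ubps]
            total = sum(weights)
            return sum(w * r for w, r in zip(weights, recons)) / total

        # Toy data standing in for two registered scans of the same specimen:
        rng = np.random.default_rng(0)
        r1, r2 = rng.random((32, 32, 32)), rng.random((32, 32, 32))
        u1, u2 = rng.random((32, 32, 32)), rng.random((32, 32, 32))
        print(fuse_volumes([r1, r2], [u1, u2]).shape)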

    Digital · Elektrotechnik und Medientechnik

    Journal article

    K. Brunnström, Marcus Barkowsky

    Statistical quality of experience analysis: on planning the sample size and statistical significance testing

    Journal of Electronic Imaging, vol. 27, no. 05

    2018

    DOI: 10.1117/1.JEI.27.5.053013

    Abstract:

    This paper analyzes how an experimenter can balance errors in subjective video quality tests between the statistical power of finding an effect if it is there and not claiming that an effect is there if it is not, i.e., balancing Type I and Type II errors. The risk of committing Type I errors increases with the number of comparisons that are performed in statistical tests. We show that when controlling for this and at the same time keeping the power of the experiment at a reasonably high level, it is unlikely that the number of test subjects normally used and recommended by the International Telecommunication Union (ITU), i.e., 15, is sufficient, whereas the number used by the Video Quality Experts Group (VQEG), i.e., 24, is more likely to be sufficient. Examples are also given for the influence of Type I errors on the statistical significance of comparing objective metrics by correlation. We also present a comparison between parametric and nonparametric statistics. The comparison targets the question of whether we would reach different conclusions on the statistical difference between the video quality ratings of different video clips in a subjective test, based on a comparison between the Student's t-test and the Mann-Whitney U-test. We found that there was hardly a difference when only a few comparisons are compensated for; almost the same conclusions are reached. When the number of comparisons is increased, larger and larger differences between the two methods are revealed. In these cases, the parametric t-test yields clearly more significant cases than the nonparametric test, which makes it more important to investigate whether the assumptions for performing a certain test are met.
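
    A short sketch of the kind of comparison discussed above, using SciPy: the same pair of rating samples is tested with the parametric t-test and the nonparametric Mann-Whitney U-test, with a Bonferroni-style correction for multiple comparisons. All data and the number of comparisons are illustrative assumptions:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Toy ratings for two video clips, 24 subjects each (VQEG-style panel);
        # the rating distributions are assumed for illustration.
        clip_a = rng.normal(3.2, 0.8, 24)
        clip_b = rng.normal(3.8, 0.8, 24)

        t_p = stats.ttest_ind(clip_a, clip_b).pvalue     # parametric t-test
        u_p = stats.mannwhitneyu(clip_a, clip_b).pvalue  # nonparametric U-test

        # Bonferroni control of the family-wise Type I error when the same
        # alpha has to cover m pairwise comparisons in one experiment.
        m, alpha = 10, 0.05
        print(f"t-test: p={t_p:.4f}, significant at alpha/m: {t_p < alpha / m}")
        print(f"U-test: p={u_p:.4f}, significant at alpha/m: {u_p < alpha / m}")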

    Digital · Elektrotechnik und Medientechnik

    Journal article

    Glenn Van Wallendael, Peter Lambert, Ahmed Aldahdooh, O. Janssens, Enrico Masala, Marcus Barkowsky, P. Le Callet

    Improved Performance Measures for Video Quality Assessment Algorithms Using Training and Validation Sets

    IEEE Transactions on Multimedia, vol. 74, pp. 32-41

    2018

    Abstract:

    Due to the three-dimensional spatiotemporal regularities of natural videos and the small scale of video quality databases, effective objective video quality assessment (VQA) metrics are difficult to obtain but highly desirable. In this paper, we propose a general-purpose no-reference VQA framework based on weakly supervised learning with a convolutional neural network (CNN) and a resampling strategy. First, an eight-layer CNN is trained by weakly supervised learning to construct the relationship between the deformations of the three-dimensional discrete cosine transform of video blocks and corresponding weak labels judged by a full-reference (FR) VQA metric. Thus, the CNN obtains the quality assessment capacity converted from the FR-VQA metric, and the effective features of the distorted videos can be extracted through the trained network. Then, we map the frequency histogram calculated from the quality score vectors predicted by the trained network onto the perceptual quality. Specifically, to improve the performance of the mapping function, we transfer the frequency histogram of the distorted images and videos to resample the training set. The experiments are carried out on several widely used video quality assessment databases. The experimental results demonstrate that the proposed method is on a par with some state-of-the-art VQA metrics and has promising robustness.
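
    A sketch of the final mapping stage only: each video yields a vector of per-block quality scores from the trained network, and the frequency histogram of that vector is regressed onto the perceptual quality. The histogram size, the SVR regressor and the toy data are assumptions for illustration; the paper's CNN front end is not reproduced here:

        import numpy as np
        from sklearn.svm import SVR

        def score_histogram(block_scores, bins=20):
            """Frequency histogram of one video's per-block quality scores."""
            hist, _ = np.histogram(block_scores, bins=bins, range=(0.0, 1.0),
                                   density=True)
            return hist

        rng = np.random.default_rng(1)
        # Toy per-block score vectors and subjective scores (assumed data):
        train_X = np.stack([score_histogram(rng.beta(2, 5, 500))
                            for _ in range(80)])
        train_mos = rng.uniform(1.0, 5.0, 80)

        mapper = SVR(kernel="rbf").fit(train_X, train_mos)  # histogram -> quality
        test_hist = score_histogram(rng.beta(2, 5, 500))
        print(float(mapper.predict(test_hist[None, :])[0]))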

    Digital · Elektrotechnik und Medientechnik · Institut ProtectIT

    Journal article

    Martin Schramm, R. Dojen, Michael Heigl

    A Vendor-Neutral Unified Core for Cryptographic Operations in GF(p) and GF(2^m) Based on Montgomery Arithmetic (Article ID 4983404)

    Security and Communication Networks, no. 9, pp. 1-18

    2018

    DOI: 10.1155/2018/4983404

    Abstract:

    In the emerging IoT ecosystem, in which internetworking will reach a totally new dimension, the crucial role of efficient security solutions for embedded devices is beyond controversy. Typically, IoT-enabled devices are equipped with integrated circuits such as ASICs or FPGAs to perform highly specific tasks. Such devices must have cryptographic layers implemented and must be able to access cryptographic functions for encrypting/decrypting and signing/verifying data using various algorithms, and to generate true random numbers, random primes, and cryptographic keys. Given the limited resources that typical IoT devices exhibit due to energy-efficiency requirements, hardware structures that are efficient in terms of time, area, and power consumption must be deployed. In this paper, we describe a scalable, word-based, multivendor-capable cryptographic core that is able to perform arithmetic operations in prime and binary extension finite fields based on Montgomery arithmetic. The functional range comprises the calculation of modular additions and subtractions, the determination of the Montgomery parameters, and the execution of Montgomery multiplications and Montgomery exponentiations. A prototype implementation of the adaptable arithmetic core is detailed. Furthermore, the decomposition of cryptographic algorithms to be used together with the proposed core is stated, and a performance analysis is given.
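
    For reference, a textbook software sketch of the Montgomery multiplication (REDC) that such a core implements in hardware, here for GF(p) only; the toy modulus and radix are illustrative, and the paper's word-wise hardware architecture is not modeled:

        def montgomery_multiply(a, b, m, r_bits):
            """REDC-style Montgomery multiplication in GF(p): returns
            a * b * R^-1 mod m with R = 2^r_bits, replacing the division
            by m with shifts and masks (requires m odd and m < R)."""
            r = 1 << r_bits
            m_prime = pow(-m, -1, r)        # m * m_prime = -1 mod R (Python 3.8+)
            t = a * b
            u = (t + ((t * m_prime) & (r - 1)) * m) >> r_bits
            return u - m if u >= m else u

        # Usage: map operands into the Montgomery domain (x -> x*R mod m),
        # multiply there, then map back with one more REDC against 1.
        m, r_bits = 97, 8                   # toy modulus and radix
        R = 1 << r_bits
        a_mont, b_mont = (11 * R) % m, (23 * R) % m
        p_mont = montgomery_multiply(a_mont, b_mont, m, r_bits)
        print(montgomery_multiply(p_mont, 1, m, r_bits), (11 * 23) % m)  # 59 59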

    Sustainable · Elektrotechnik und Medientechnik · TC Freyung

    Contribution (edited volume or conference proceedings)

    Luis Ramirez Camargo, Wolfgang Dorner, K. Gruber, F. Nitsch

    Assessing regional reanalysis data sets for planning small-scale renewable energy systems

    20th EGU General Assembly, EGU2018, Proceedings from the conference held 4-13 April, 2018 in Vienna, Austria, p.4996

    2018

    Abstract:

    An accurate resource availability estimation is vital for the proper location, sizing and economic viability of renewable energy plants. Large photovoltaic (PV) and wind installations undergo a long and exhaustive planning process that would imply unacceptably high costs for developers of small-scale installations. In a context of abolished feed-in tariffs, electricity feed-in restricted by grid capacity constraints, and storage systems being commercialized at lower costs, the acquisition of high-quality solar radiation and wind speed data becomes important for planners of small-scale installations as well. These data allow the characterization of the short-term and inter-annual variability of resource availability. Global reanalysis data sets provide long time series of these variables, with temporal resolutions as high as one hour, at no cost for the final user. However, due to their coarse spatial resolution and relatively low accuracy, these products are only an inferior alternative for data retrieval compared to, e.g., satellite-derived radiation data sets or advanced interpolation methods for wind speed data. The COSMO-REA6 and COSMO-REA2 regional reanalyses overcome this limitation by increasing the resolution to six and two kilometres respectively. The accuracy of these data sets for variables of high relevance to meteorology, such as rainfall, has been assessed with satisfactory results, but an independent evaluation for variables relevant to renewable energy generation has not been performed yet. This work presents an assessment of the variables of these data sets that had been made available to the public by November 2017. The assessment is performed for the area of the federal state of Bavaria in Germany and the whole of the Czech Republic, using data from the Bavarian agro-meteorological network and the Czech Hydrometeorological Institute. Accuracy indicators are calculated for horizontal global radiation or cloud coverage (depending on data availability from the weather stations) and for wind speeds at 10 meters height. While there are important differences between weather station and cloud coverage data, the results for wind speeds and global solar irradiance are satisfactory for most of the locations. For certain locations, widely used indicators such as Pearson's correlation coefficient reach values above 0.8 for wind speeds and above 0.9 for global solar irradiance. The mean bias error is consistently lower than 10 W/m2, and can be as low as 0.3 W/m2, for the irradiance data; for the wind speed data it is, with a few exceptions, lower than 2 m/s in Germany and lower than 1 m/s in the Czech Republic. A total of eight indicators for the hourly data in the period between 1995 and 2015 are calculated, presented, discussed and compared against the international literature dealing with the accuracy of solar irradiance and wind speed data sets.
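
    Two of the accuracy indicators named above - Pearson's correlation coefficient and the mean bias error - are straightforward to compute once reanalysis and station time series are paired. A minimal sketch with toy data; the real inputs would be COSMO-REA6/-REA2 grid cells matched to weather stations:

        import numpy as np

        def accuracy_indicators(model, station):
            """Pearson's r and mean bias error (MBE) between a reanalysis
            series and a ground-station series; two of the paper's eight
            indicators."""
            r = float(np.corrcoef(model, station)[0, 1])
            mbe = float(np.mean(model - station))
            return {"pearson_r": r, "mbe": mbe}

        # Toy hourly irradiance series in W/m^2 (assumed data):
        rng = np.random.default_rng(2)
        station = np.clip(rng.normal(300.0, 150.0, 24 * 365), 0.0, None)
        model = station + rng.normal(5.0, 40.0, station.size)  # biased, noisy
        print(accuracy_indicators(model, station))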

    Sustainable · Elektrotechnik und Medientechnik · TC Freyung

    Journal article

    Q. Chaudhry, J. Lewis, A. Dudkiewicz, A. B. A. Boxall, G. Allmaier, Peter Hofmann, K. Tiede, A. Lehner, K. Molhave

    Development of a sample preparation approach to measure the size of nanoparticle aggregates by electron microscopy

    Particuology [available online 29 November 2018]

    2018

    DOI: 10.1016/j.partic.2018.05.007

    Abstract:

    Electron microscopy (EM) is widely used for nanoparticle (NP) sizing. Following an initial assessment of two sample preparation protocols described in the current literature as “unperturbed”, we found that neither could accurately measure the size of NPs featuring a broad size distribution, e.g., aggregates. Because many real-world NP samples consist of aggregates, this finding was of considerable concern. The data showed that the protocols introduced errors into the measurement by either inducing agglomeration artefacts or skewing the size distribution towards small particles (skewing artefact). The focus of this work was to develop and apply a mathematical refinement to correct the skewing artefact. When applied to the measurement of synthetic amorphous silica NPs, this refinement provided a much improved agreement between EM and a reference methodology. Further investigation highlighted the influence of NP chemistry on the refinement. This study emphasised the urgent need for greater and more detailed consideration of the sample preparation of NP aggregates to routinely achieve accurate measurements by EM. It also provided a novel refinement solution applicable to the size characterisation of silica and citrate-coated gold NPs featuring broad size distributions. With further research, this approach could be extended to other NP types.

    Digital · Angewandte Wirtschaftswissenschaften · TC Grafenau

    Contribution (edited volume or conference proceedings)

    Leon Binder, Ali Fallah Tehrani, P. M. Svasta, Monica I. Ciolacu

    Education 4.0 - Artificial Intelligence Assisted Higher Education: Early Recognition System with Machine Learning to Support Students' Success

    2018 IEEE 24th International Symposium for Design and Technology in Electronic Packaging (SIITME)

    2018

    ISBN: 978-1-5386-5578-8

    DOI: 10.1109/SIITME.2018.8599203

    Abstract:

    Education 4.0 is being empowered more and more by artificial intelligence (AI) methods, and we observe a continuously growing demand for adaptive and personalized education. In this paper we present an innovative approach to promoting AI in Education 4.0. Our first contribution is an AI-assisted higher education process with smart sensors and wearable devices for self-regulated learning. Secondly, we describe our first results with Education 4.0 didactic methods implemented with learning analytics and machine learning algorithms. The aim of this case study is to predict the final score of students before they participate in the final examination. We propose an Early Recognition System fed with real data captured in a blended learning course, with a personalized test at the beginning of the semester and an adaptive learning environment based on Auto Tutor and N. A. Crowder's theory, with adaptive self-assessment feedback. Focusing on students' success and their experiences is a win-win scenario for students and professors as well as for the administration.
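
    A minimal sketch of the early-recognition idea: predict pass/fail from features available early in the semester and flag at-risk students. The feature set, the logistic-regression model and all data below are illustrative assumptions, not the system described in the paper:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        n = 200
        # Assumed early-semester features: entry-test score, LMS clicks,
        # points in self-assessment quizzes.
        X = np.column_stack([
            rng.uniform(0, 100, n),
            rng.poisson(120, n),
            rng.uniform(0, 50, n),
        ])
        # Toy pass/fail ground truth loosely driven by the features:
        y = (0.4 * X[:, 0] + 0.05 * X[:, 1] + 0.8 * X[:, 2]
             + rng.normal(0, 10, n)) > 55

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        at_risk = clf.predict_proba(X_te)[:, 1] < 0.5  # flag likely-fail students
        print(f"accuracy={clf.score(X_te, y_te):.2f}, flagged={int(at_risk.sum())}")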

    Digital · Angewandte Wirtschaftswissenschaften

    Contribution (edited volume or conference proceedings)

    Heribert Popp, Leon Binder, Monica I. Ciolacu, R. Beer

    Education 4.0 für Akademiker 4.0 Kompetenzen – Blended Learning 4.0 Prozess mit Learning Analytics Cockpit

    Proceedings der Pre-Conference-Workshops der 16. E-Learning Fachtagung Informatik co-located with 16th e-Learning Conference of the German Computer Society (DeLFI 2018)

    2018

    Abstract:

    We present our results on Education 4.0, implemented didactically with an efficient Blended Learning 4.0 process. In the reflection phase of the blended-learning process, we build a learning analytics cockpit. For this, we let a neural network (NN) learn from the click data and the points achieved in the control questions of the courses (of the first two months), as well as from pass/fail (classification) or the points achieved (regression) in the exam. We then test the trained NNs on the next run of the course (achieving a prediction accuracy of 76%-100% for classification) and deploy them in the following course run. Training and testing the NNs take place before the cockpit is used. The LA cockpit, implemented in PHP as a plugin to the learning management system Moodle, is used only in the working phase to apply the trained NN. So far we have trained NNs for three courses and were able to test the effects of using the cockpit in two simulations, carrying out first successful applications in Mathematics and in Information and Knowledge Management (IWM) in the business administration degree program in the winter semester 2017/18 and the summer semester 2018, respectively. Based on the predictions, warning e-mails were sent to students classified as at risk. As a result, the failure rate in Mathematics and the number of no-shows in IWM were nearly halved.

    Angewandte Wirtschaftswissenschaften

    Journal article

    Josef Scherer

    Compliance bewegt …

    Interview

    ZRFC – Risk, Fraud & Compliance: Prävention und Aufdeckung durch Compliance-Organisationen, vol. 13, no. 4

    2018

    Digital · Angewandte Wirtschaftswissenschaften

    Book (monograph)

    K. Fruth, Josef Scherer

    Empirische Erhebung zum Verbreitungsgrad von Unternehmensführung 4.0 im Mittelstand

    2018

    ISBN: 978-3-947301-12-6

    Digital · Angewandte Wirtschaftswissenschaften

    Book (monograph)

    K. Fruth, Josef Scherer

    Die digitale Transformation von Normen, Richtlinien und Standards

    - ISO 9001 (2015) (QM) - ISO 19600 (2014) (Compliance) - COSO I (2013) (Compliance) - IDW PS 980 (2011) (Compliance) - ISO 37001 (Anti-Korruption) - ISO 31000 (2018) (Risk) - IDW PS 981 (2017) (Risk) - IDW PS 982 (2017) (IKS) - IDW PS 983 (2017) (Revision) - DIIR Nr. 3 (2016) (Interne Revision) - PAS 99 (2012) (Integriertes Managementsystem) zielführend anwenden!

    2018

    ISBN: 978-3-947301-13-3

    Angewandte Wirtschaftswissenschaften

    Journal article

    Josef Scherer

    Haftung eines Risikomanagers

    Interview [16.07.2018]

    RiskNET

    2018

    Digital · Angewandte Wirtschaftswissenschaften

    Book (monograph)

    K. Fruth, Josef Scherer

    Handbuch: Integriertes Managementsystem (IMS) „on demand“ mit Governance, Risk und Compliance (GRC)

    2018

    ISBN: 394730109X

    Digital · Angewandte Wirtschaftswissenschaften

    Book (monograph)

    K. Fruth, Josef Scherer

    Handbuch: Einführung in Product Compliance, Vertragsmanagement und Qualitätsmanagement

    Anlagenband zu: Integriertes Qualitätsmanagement- und Leistungserbringungs-Management mit Governance, Risk und Compliance (GRC)

    2018

    ISBN: 3947301065