00 Computer Science, Knowledge, Systems
Deep learning-based image registration (DLIR) has been widely developed, but perceiving both small and large deformations remains challenging. Moreover, the effectiveness of DLIR methods has rarely been validated on downstream tasks. In this study, a multi-scale complexity-aware registration network (MSCAReg-Net) was proposed, devising a complexity-aware technique to facilitate DLIR under a single-resolution framework. Specifically, the technique comprises a multi-scale complexity-aware module (MSCA-Module) to perceive deformations of distinct complexities, together with a feature calibration module (FC-Module) and a feature aggregation module (FA-Module) that support the MSCA-Module by generating more distinguishable deformation features. Experimental results demonstrated the superiority of the proposed MSCAReg-Net over existing methods in terms of registration accuracy. In addition to the Dice similarity coefficient (DSC) and the percentage of voxels with a non-positive Jacobian determinant (|J(phi)| <= 0), a comprehensive evaluation of registration performance was performed by applying the method to a downstream task of multi-atlas hippocampus segmentation (MAHS). Experimental results demonstrated that the method yielded better hippocampus segmentation than other DLIR methods and a segmentation performance comparable to that of the leading SyN method. The comprehensive assessment, comprising DSC, |J(phi)| <= 0, and the downstream application to MAHS, demonstrates the advances of this method.
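The abstract evaluates registration quality with the Dice similarity coefficient (DSC). As a reference for that metric, here is a minimal stdlib-Python sketch of the DSC between two label masks given as voxel-coordinate sets; the function name and toy data are illustrative and not taken from the paper:

```python
def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient (DSC) between two binary masks,
    each given as an iterable of voxel coordinates."""
    a, b = set(seg_a), set(seg_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

# Toy example: two overlapping 1-D "masks" sharing 2 of 4 voxels each
atlas = [(0,), (1,), (2,), (3,)]
warped = [(2,), (3,), (4,), (5,)]
print(dice_coefficient(atlas, warped))  # 2*2 / (4+4) = 0.5
```

A DSC of 1.0 means perfect overlap of the warped atlas label with the target label; values near 0 indicate registration failure.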
Model transformations are central to model-driven software development. Applications of model transformations include creating models, handling model co-evolution, model merging, and understanding model evolution. In the past, various (semi-)automatic approaches to derive model transformations from meta-models or from examples have been proposed. These approaches require time-consuming handcrafting or the recording of concrete examples, or they are unable to derive complex transformations. We propose a novel unsupervised approach, called Ockham, which is able to learn edit operations from model histories in model repositories. Ockham is based on the idea that meaningful domain-specific edit operations are the ones that compress the model differences. It employs frequent subgraph mining to discover frequent structures in model difference graphs. We evaluate our approach in two controlled experiments and one real-world case study of a large-scale industrial model-driven architecture project in the railway domain. We found that our approach is able to discover frequent edit operations that have actually been applied before. Furthermore, Ockham is able to extract edit operations that are meaningful—in the sense of explaining model differences through the edit operations they comprise—to practitioners in an industrial setting. We also discuss use cases (i.e., semantic lifting of model differences and change profiles) for the discovered edit operations in this industrial setting. We find that the edit operations discovered by Ockham can be used to better understand and simulate the evolution of models.
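Ockham applies frequent subgraph mining to model difference graphs. Real subgraph mining (e.g., gSpan) is far more involved; the following drastically simplified stdlib sketch conveys only the support-counting idea, treating each difference graph as a set of labeled edges (source type, change label, target type) and keeping patterns that reach a support threshold. All names and data are illustrative, not from the paper:

```python
from collections import Counter

def frequent_edges(difference_graphs, min_support):
    """Toy stand-in for frequent subgraph mining: count labeled edges
    across all model difference graphs and keep those whose support
    (number of graphs containing them) meets the threshold."""
    counts = Counter()
    for graph in difference_graphs:
        # Count each edge pattern at most once per graph.
        for pattern in set(graph):
            counts[pattern] += 1
    return {p for p, c in counts.items() if c >= min_support}

diffs = [
    {("Class", "add", "Attribute"), ("Class", "rename", "Class")},
    {("Class", "add", "Attribute"), ("Package", "delete", "Class")},
    {("Class", "add", "Attribute")},
]
print(frequent_edges(diffs, min_support=2))
# {('Class', 'add', 'Attribute')}
```

In Ockham the recurring structures found this way are candidates for meaningful domain-specific edit operations, because they compress the observed model differences.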
Modeling and executing knowledge-intensive processes (KiPs) are challenging with state-of-the-art approaches, and the specific demands of KiPs are the subject of ongoing research. In this context, little attention has been paid to the ontology-driven combination of data-centric and semantic business process modeling, which finds additional motivation in enabling the division of labor between humans and artificial intelligence. Such approaches have characteristics that could allow support for KiPs based on the inferencing capabilities of reasoners. We confirm this by showing that reasoners can infer the executability of tasks based on a currently researched ontology- and data-driven business process model (ODD-BP model). Further support for KiPs by the proposed inference mechanism results from its ability to infer the relevance of tasks, depending on the extent to which their execution would contribute to process progress. Beyond these contributions along the execution perspective (start-to-end direction), we also show how our approach can help to reach specific process goals by inferring the relevance of process elements with regard to their support in achieving such goals (end-to-start direction). The elements contributing the most valuable process progress can be identified at the intersection of the execution and the goal perspectives. This paper introduces the new approach and verifies its practicability with an evaluation of a KiP in the field of emergency call centers.
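The core inference described above, a task becomes executable once its required data is available, can be illustrated with a closed-world toy sketch. The actual ODD-BP approach uses ontology reasoners, not this hand-rolled check; the task and data names below are invented for illustration:

```python
def executable_tasks(tasks, available_data):
    """Toy stand-in for reasoner-based executability inference:
    a task is executable once all of its required data objects
    are available in the current process state."""
    return [name for name, required in tasks.items()
            if required <= available_data]  # subset test

# Hypothetical emergency-call-center tasks and their data requirements
tasks = {
    "locate_caller": {"phone_number"},
    "dispatch_unit": {"location", "incident_type"},
}
print(executable_tasks(tasks, {"phone_number", "incident_type"}))
# ['locate_caller']
```

A reasoner generalizes this far beyond subset checks, e.g., deriving data availability indirectly from ontology axioms, and the paper's relevance inference additionally ranks tasks by their contribution to process progress.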
The objective of the German non-profit association NFDI (German abbreviation for "National Research Data Infrastructure") is to make the data stock of the entire German science system accessible to the public. To do so, it should involve all stakeholders. Currently, however, the Universities of Applied Sciences (UAS) are underrepresented in the NFDI, and there is a danger that their needs will be neglected. We therefore present the project "Research Data Management at Universities of Applied Sciences in the State of Rhineland-Palatinate" (FDM@HAW.rlp), which is funded by the German Federal Ministry of Education and Research (BMBF) and financed within the Recovery and Resilience Facility of the European Union. In the project, seven public UAS in Rhineland-Palatinate and the Catholic University of Applied Sciences (CUAS) Mainz pursue a common goal: to establish institutional research data management (RDM) within three years by building up competencies at the UAS, setting up services for researchers, and finding solutions for a shared technical infrastructure.
Background: High numbers of consumable medical materials (eg, sterile needles and swabs) are used during the daily routine of intensive care units (ICUs) worldwide. Although medical consumables largely contribute to total ICU hospital expenditure, many hospitals do not track the individual use of materials. Current tracking solutions meeting the specific requirements of the medical environment, like barcodes or radio frequency identification, require specialized material preparation and high infrastructure investment. This impedes the accurate prediction of consumption, leads to high storage maintenance costs caused by large inventories, and hinders scientific work due to inaccurate documentation. Thus, new cost-effective and contactless methods for object detection are urgently needed.
Objective: The goal of this work was to develop and evaluate a contactless visual recognition system for tracking medical consumable materials in ICUs using a deep learning approach on a distributed client-server architecture.
Methods: We developed Consumabot, a novel client-server optical recognition system for medical consumables, based on the convolutional neural network model MobileNet implemented in TensorFlow. The software was designed to run on single-board computer platforms as a detection unit. The system was trained to recognize 20 different materials in the ICU, with 100 sample images provided for each consumable material. We assessed top-1 recognition rates in different real-world ICU settings: materials presented to the system without visual obstruction, materials 50% covered, and scenarios with multiple items. We further performed an analysis of variance with repeated measures to quantify the effect of adverse real-world circumstances.
Results: Consumabot reached a >99% reliability of recognition after about 60 steps of training and 150 steps of validation. A desirable low cross entropy of <0.03 was reached for the training set after about 100 iteration steps and after 170 steps for the validation set. The system showed a high top-1 mean recognition accuracy in a real-world scenario of 0.85 (SD 0.11) for objects presented to the system without visual obstruction. Recognition accuracy was lower, but still acceptable, in scenarios where the objects were 50% covered (P<.001; mean recognition accuracy 0.71; SD 0.13) or multiple objects of the target group were present (P=.01; mean recognition accuracy 0.78; SD 0.11), compared to a nonobstructed view. The approach met the criteria of absence of explicit labeling (eg, barcodes, radio frequency labeling) while maintaining a high standard for quality and hygiene with minimal consumption of resources (eg, cost, time, training, and computational power).
Conclusions: Using a convolutional neural network architecture, Consumabot consistently achieved good results in the classification of consumables and is thus a feasible way to recognize and register medical consumables directly in a hospital's electronic health record. The system shows limitations when materials are partially covered, because identifying characteristics of the consumables are then not visible to the system. Further evaluation of the approach in different medical circumstances is needed.
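The Consumabot evaluation centers on the top-1 recognition rate: the fraction of samples for which the classifier's highest-scoring class matches the true material. As a reference, here is a minimal stdlib sketch of that metric; the class names and scores are invented for illustration:

```python
def top1_accuracy(predictions, labels):
    """Top-1 recognition rate: fraction of samples whose
    highest-scoring class equals the true label."""
    hits = sum(1 for scores, true in zip(predictions, labels)
               if max(scores, key=scores.get) == true)
    return hits / len(labels)

# Hypothetical per-class softmax scores for two samples
preds = [
    {"syringe": 0.91, "swab": 0.06, "needle": 0.03},
    {"syringe": 0.40, "swab": 0.55, "needle": 0.05},
]
print(top1_accuracy(preds, ["syringe", "needle"]))  # 0.5
```

Averaging this rate over repeated presentations under each condition (unobstructed, 50% covered, multiple items) yields the mean accuracies reported in the Results section.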
The aim of this work was to develop and evaluate the reinforcement learning algorithm VentAI, which is able to suggest a dynamically optimized mechanical ventilation regime for critically ill patients. We built and validated it on 11,943 events of volume-controlled mechanical ventilation derived from 61,532 distinct ICU admissions, and tested it on an independent, secondary dataset (200,859 ICU stays; 25,086 mechanical ventilation events). A patient "data fingerprint" of 44 features was extracted as a multidimensional time series in 4-hour time steps. We used a Markov decision process, including a reward system and a Q-learning approach, to find optimized settings for positive end-expiratory pressure (PEEP), fraction of inspired oxygen (FiO2), and ideal body weight-adjusted tidal volume (Vt). The observed outcome was in-hospital or 90-day mortality. VentAI reached a significantly higher estimated performance return of 83.3 (primary dataset) and 84.1 (secondary dataset) compared to physicians' standard clinical care (51.1). The number of recommended action changes per mechanically ventilated patient consistently exceeded that of the clinicians. VentAI chose ventilation regimes with lower Vt (5–7.5 mL/kg) 202.9% more frequently, but regimes with higher Vt (7.5–10 mL/kg) 50.8% less frequently. VentAI recommended PEEP levels of 5–7 cm H2O 29.3% more frequently and PEEP levels of 7–9 cm H2O 53.6% more frequently. VentAI avoided high (>55%) FiO2 values (59.8% decrease) while preferring the range of 50–55% (140.3% increase). In conclusion, VentAI provides reproducible high performance by dynamically choosing an optimized, individualized ventilation strategy and thus might benefit critically ill patients.
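VentAI is built on a Markov decision process with a Q-learning approach. The core of tabular Q-learning is a single update rule, sketched below in stdlib Python; the state and action names are invented placeholders, not the paper's 44-feature state space or its actual ventilator action grid:

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values()) if s_next in Q and Q[s_next] else 0.0
    Q.setdefault(s, {}).setdefault(a, 0.0)
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Toy ventilation states/actions (names illustrative, not from the paper)
Q = {"hypoxic": {"raise_peep": 0.0, "raise_fio2": 0.0},
     "stable":  {"keep_settings": 1.0}}
q_update(Q, "hypoxic", "raise_peep", r=0.5, s_next="stable")
print(round(Q["hypoxic"]["raise_peep"], 4))  # 0.1 * (0.5 + 0.99*1.0) = 0.149
```

Iterating such updates over logged 4-hour time steps, with a reward tied to the mortality outcome, is what lets the learned policy be compared against the clinicians' observed actions via an estimated performance return.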
This paper describes the recently started joint project "Visual Knowledge Communication". The partners are psychologists and computer scientists from four universities of the German state of Rhineland-Palatinate. The starting point for the project was the fact that visualizations have attracted considerable interest in both psychology and computer science in recent years. However, psychologists and computer scientists have so far pursued their investigations independently of each other. The main goal of this project is to support and foster cooperation between psychologists and computer scientists in several visualization research projects.
The paper sketches the overall project. It then discusses in more detail the authors' subproject which deals with a peer review process for animations developed by students. The basic ideas, the main goals, and the project plan are described.
This paper is a work-in-progress report. Therefore, it does not contain any results.
Online learning algorithms and indoor positioning systems are complex applications in the environment of cyber-physical systems. These distributed systems are created by networking intelligent machines and autonomous robots on the Internet of Things, using embedded systems that enable the exchange of information at any time. This information is processed by machine learning algorithms to make decisions about current developments in production or to influence logistics processes for optimization purposes. In this article, we present and categorize the further development of the prototype of a novel indoor positioning system that constantly adapts its knowledge to the conditions of its environment with the help of online learning. We apply online learning algorithms to sound-based indoor localization with low-cost hardware and, in an experimental case study, demonstrate the system's improvement over its predecessor and its adaptability to different applications.
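The defining trait of online learning is that the model updates incrementally with each new observation instead of being retrained in batch. A minimal stdlib sketch for a sound-based setting, assuming (hypothetically) a single gain factor mapping time-of-flight to distance that is calibrated with a least-mean-squares step; the class and its parameters are illustrative, not the article's actual algorithm:

```python
class OnlineCalibrator:
    """Online (incremental) least-mean-squares update of one gain factor
    mapping acoustic time-of-flight (s) to distance (m); the estimate
    keeps adapting as new labeled measurements arrive."""
    def __init__(self, gain=340.0, lr=0.5):
        self.gain = gain  # initial guess: speed of sound in m/s
        self.lr = lr      # learning rate of the LMS update
    def update(self, tof, true_distance):
        pred = self.gain * tof
        error = true_distance - pred
        self.gain += self.lr * error * tof  # LMS gradient step
        return pred

cal = OnlineCalibrator(gain=300.0, lr=0.5)
cal.update(1.0, 340.0)   # one measurement shifts the gain toward 340
print(cal.gain)          # 320.0
```

This adapt-as-you-go behavior is what allows such a positioning system to track changing environmental conditions (temperature, room acoustics) without offline retraining.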
The Saarschleife geotope (SW Germany) represents one of the most prominent geotopes of the SaarLorLux region and is known far beyond the borders of the Greater Region. Surprisingly, there is no visual representation of the relief history and genesis of this river meander, which is unique in Central Europe, as is common at sites with comparably outstanding phenomena, such as the Rocher Saint-Michel d'Aiguilhe (France) or some national parks in the U.S. (e.g., the Grand Canyon). The Saarschleife geotope was therefore chosen as a pilot object for the envisaged analysis of the landscape genesis as well as for 3D mapping and visualization. The visualization presents the relief history and geological evolution of the last 300 million years in selected geological epochs, which are of fundamental importance for understanding today's geomorphological relief conditions, and is compiled into a summarized chronology.
Containerization is one of the most important topics for modern data centers and web developers. Since the number of containers on single- and multi-node systems is growing, knowledge about the energy consumption behavior of individual web-service containers is essential in order to save energy and, of course, money. In this article, we show how the energy consumption behavior of individual containerized web services/web apps changes as replicas of a service are created in order to scale and load-balance it.
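One naive way to reason about per-container energy when scaling replicas is to split a node's dynamic (above-idle) power evenly across the running replicas. This is only a back-of-the-envelope sketch under that even-split assumption, not the attribution method of the article, and the numbers are made up:

```python
def energy_share_per_replica(total_power_w, idle_power_w, replicas):
    """Naive attribution: split the dynamic (above-idle) power of a node
    evenly across the running replicas of one service."""
    if replicas <= 0:
        raise ValueError("need at least one replica")
    return (total_power_w - idle_power_w) / replicas

# Hypothetical measurement: node draws 60 W with 4 replicas, 20 W idle
print(energy_share_per_replica(60.0, 20.0, 4))  # 10.0 W per replica
```

In practice the interesting finding is precisely where this even-split assumption breaks down, e.g., when shared runtime overheads or load-balancing skew make per-replica consumption non-uniform.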