00 Computer Science, Knowledge, Systems
Deep learning-based image registration (DLIR) has been widely developed, but perceiving both small and large deformations remains challenging. In addition, the effectiveness of DLIR methods has rarely been validated on downstream tasks. In this study, a multi-scale complexity-aware registration network (MSCAReg-Net) is proposed, using a complexity-aware technique to facilitate DLIR within a single-resolution framework. Specifically, the technique devises a multi-scale complexity-aware module (MSCA-Module) to perceive deformations of distinct complexities, and employs a feature calibration module (FC-Module) and a feature aggregation module (FA-Module) to support the MSCA-Module by generating more distinguishable deformation features. Experimental results demonstrate the superiority of the proposed MSCAReg-Net over existing methods in terms of registration accuracy. Beyond the indices of Dice similarity coefficient (DSC) and percentage of voxels with non-positive Jacobian determinant (|J(phi)| ≤ 0), a comprehensive evaluation of registration performance was performed by applying the method to a downstream task of multi-atlas hippocampus segmentation (MAHS). The results demonstrate that the method yields better hippocampus segmentation than other DLIR methods and segmentation performance comparable to the leading SyN method. This comprehensive assessment, covering DSC, |J(phi)| ≤ 0, and the downstream MAHS application, demonstrates the advances of the method.
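For readers unfamiliar with the two evaluation indices, the following NumPy sketch shows how the Dice similarity coefficient and the fraction of voxels with non-positive Jacobian determinant are commonly computed. The array shapes and the channel-last, voxel-unit displacement-field convention are our assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the two registration metrics named above (assumptions:
# 3D label maps, displacement field of shape (D, H, W, 3) in voxel units).
import numpy as np

def dice_coefficient(seg_a, seg_b, label):
    """Dice similarity coefficient (DSC) for one anatomical label."""
    a, b = (seg_a == label), (seg_b == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def nonpositive_jacobian_fraction(disp):
    """Fraction of voxels with det(J_phi) <= 0 for phi(x) = x + u(x)."""
    jac = np.zeros(disp.shape[:3] + (3, 3))
    for i in range(3):
        # row i of grad(u): partial derivatives of u_i along the 3 axes
        jac[..., i, 0], jac[..., i, 1], jac[..., i, 2] = np.gradient(disp[..., i])
    jac += np.eye(3)                      # J_phi = I + grad(u)
    return float((np.linalg.det(jac) <= 0).mean())
```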
Model transformations are central to model-driven software development. Applications of model transformations include creating models, handling model co-evolution, model merging, and understanding model evolution. In the past, various (semi-)automatic approaches for deriving model transformations from meta-models or from examples have been proposed. These approaches require time-consuming handcrafting or the recording of concrete examples, or they are unable to derive complex transformations. We propose a novel unsupervised approach, called Ockham, which is able to learn edit operations from model histories in model repositories. Ockham is based on the idea that meaningful domain-specific edit operations are the ones that compress the model differences. It employs frequent subgraph mining to discover frequent structures in model difference graphs. We evaluate our approach in two controlled experiments and one real-world case study of a large-scale industrial model-driven architecture project in the railway domain. We found that our approach is able to discover frequent edit operations that have actually been applied before. Furthermore, Ockham is able to extract edit operations that are meaningful to practitioners in an industrial setting, in the sense of explaining model differences through the edit operations they comprise. We also discuss use cases for the discovered edit operations in this industrial setting, namely semantic lifting of model differences and change profiles. We find that the edit operations discovered by Ockham can be used to better understand and simulate the evolution of models.
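As a hedged illustration of the core idea (not Ockham's actual implementation), the toy sketch below counts how often labeled structures recur across model difference graphs and keeps those above a support threshold. Real frequent subgraph mining, such as gSpan, handles arbitrary subgraphs; here each "pattern" is just a single labeled edge, and the triple encoding is our own.

```python
# Toy stand-in for frequent subgraph mining over model difference graphs.
from collections import Counter

def frequent_edge_patterns(difference_graphs, min_support):
    """difference_graphs: list of graphs, each a set of
    (source_label, edge_label, target_label) triples (our toy encoding)."""
    support = Counter()
    for graph in difference_graphs:
        for pattern in set(graph):        # count each pattern once per graph
            support[pattern] += 1
    return {p: c for p, c in support.items() if c >= min_support}

# Toy usage: three model differences; 'ADD:Class -contains-> ADD:Attribute'
# recurs and would be proposed as a candidate edit operation.
diffs = [
    {("ADD:Class", "contains", "ADD:Attribute"), ("DEL:Class", "refs", "Class")},
    {("ADD:Class", "contains", "ADD:Attribute")},
    {("ADD:Class", "contains", "ADD:Attribute"), ("ADD:Class", "refs", "Class")},
]
print(frequent_edge_patterns(diffs, min_support=2))
```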
The Saarschleife geotope (SW Germany) is one of the most prominent geotopes of the SaarLorLux region and is known far beyond the borders of the Greater Region. Surprisingly, there is no visual representation of the relief history and genesis of this river meander, which is unique in Central Europe, although such representations are common at sites with comparably outstanding phenomena, such as the Rocher Saint-Michel d'Aiguilhe (France) or some national parks in the U.S. (e.g. the Grand Canyon). The Saarschleife geotope was therefore chosen as a pilot object for the envisaged analysis of landscape genesis as well as for 3D mapping and visualization. The visualization presents the relief history and geological evolution of the last 300 million years in selected geological epochs that are of fundamental importance for understanding today's geomorphological relief, compiled into a summarized chronology.
The objective of the German non-profit association NFDI (German abbreviation for "National Research Data Infrastructure") is to make the data stock of the entire German science system accessible to the public. To do so, it should involve all stakeholders. Currently, however, the Universities of Applied Sciences (UAS) are underrepresented in the NFDI, and there is a danger that their needs will be neglected. We therefore present the project "Research Data Management at Universities of Applied Sciences in the State of Rhineland-Palatinate" (FDM@HAW.rlp), which is funded by the German Federal Ministry of Education and Research (BMBF) and financed within the Recovery and Resilience Facility of the European Union. In the project, seven public UAS in Rhineland-Palatinate and the Catholic University of Applied Sciences (CUAS) Mainz pursue a common goal: they intend to establish institutional research data management (RDM) within a period of three years by building up competencies at the UAS, setting up services for researchers, and finding solutions for a common technical infrastructure.
Social media data are transforming sustainability science. However, restrictions on data accessibility and ethical concerns about potential data misuse threaten this nascent field. Here, we review the literature on the use of social media data in environmental and sustainability research. We find that such data can play a novel and irreplaceable role in achieving the UN Sustainable Development Goals by allowing a nuanced understanding of human-nature interactions at scale, observing the dynamics of social-ecological change, and investigating the co-construction of nature values. We reveal threats to data access and highlight the scientific responsibility to address trade-offs between research transparency and privacy protection while promoting inclusivity. This contributes to a wider societal debate on the use of social media data for sustainability science and for the common good.
This paper describes the recently started joint project "Visual Knowledge Communication". The partners are psychologists and computer scientists from four universities in the German state of Rhineland-Palatinate. The starting point for the project was the observation that visualizations have attracted considerable interest in both psychology and computer science in recent years, yet psychologists and computer scientists have so far pursued their investigations independently of each other. The main goal of the project is to support and foster cooperation between psychologists and computer scientists in several visualization research projects.
The paper sketches the overall project. It then discusses in more detail the authors' subproject, which deals with a peer-review process for animations developed by students. The basic ideas, the main goals, and the project plan are described.
This paper is a work-in-progress report and therefore does not yet contain results.
One key to successful and fluent human-robot collaboration in disassembly processes is equipping the robot system with greater autonomy and intelligence. In this paper, we present an informed software agent that controls the robot's behavior to form an intelligent robot assistant for disassembly. Since the disassembly process depends first of all on the product structure, we inform the agent through product models in a generic way. The product model is then transformed into a directed graph and used to build, share, and define a coarse disassembly plan, as sketched below. To refine the workflow, we formulate the loosening of a connection and the distribution of the work as a search problem. The resulting detailed plan consists of a sequence of actions that are used to call, parametrize, and execute robot programs that carry out the assistance. The aim of this research is to equip robot systems with the knowledge and skills to assist autonomously, ultimately improving the ergonomics of disassembly workstations.
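A minimal sketch of the coarse planning step, under the assumption that the directed graph derived from the product model encodes "must be removed before" precedences between parts; a topological order then yields one feasible disassembly sequence. The part names and encoding are illustrative, not taken from the paper.

```python
# Coarse disassembly plan from a precedence graph (illustrative encoding).
from graphlib import TopologicalSorter

# part -> set of parts that must be removed first (assumed toy example)
precedence = {
    "battery": {"cover"},
    "board":   {"battery", "screws"},
    "cover":   {"screws"},
    "screws":  set(),
}

coarse_plan = list(TopologicalSorter(precedence).static_order())
print(coarse_plan)   # e.g. ['screws', 'cover', 'battery', 'board']
```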
Ahmad et al. were the first to propose applying the sharp function to image classification. Continuing their work, in this paper we investigate the use of the sharp function as an edge detector within well-known diffusion models. Further, we discuss the weak formulation of the nonlinear diffusion equation and prove uniqueness of the weak solution of the nonlinear problem. An anisotropic generalization of the sharp-operator-based diffusion has also been implemented and tested on various types of images.
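For context, the sketch below implements the classical Perona-Malik scheme, representative of the "well-known diffusion models" mentioned; the paper's sharp-function edge detector would replace the diffusivity g, whose exact form we do not reproduce here.

```python
# Classical Perona-Malik nonlinear diffusion as a stand-in; the paper's
# sharp-operator-based diffusivity would be substituted for g below.
import numpy as np

def nonlinear_diffusion(img, n_iter=20, kappa=0.1, dt=0.2):
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping diffusivity
    for _ in range(n_iter):
        # differences to the four neighbors (periodic boundary via np.roll,
        # chosen here for brevity only)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```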
Background: In recent years, the volume of medical knowledge and health data has increased rapidly. For example, the increased availability of electronic health records (EHRs) provides accurate, up-to-date, and complete information about patients at the point of care and enables medical staff to access patient records quickly for more coordinated and efficient care. With this increase in knowledge, the complexity of accurate, evidence-based medicine continues to grow. Health care workers must deal with an increasing amount of data and documentation, while relevant patient data are frequently overshadowed by a layer of less relevant data, causing medical staff to miss important values or abnormal trends and their significance for the progression of the patient's case.
Objective: The goal of this work is to analyze the current laboratory results of patients in the intensive care unit (ICU) and to classify which of these lab values could be abnormal the next time the test is performed. Detecting near-future abnormalities can support clinicians in their decision-making in the ICU by drawing their attention to the important values and helping them focus future lab testing, saving both time and money. It also gives doctors more time to spend with patients rather than skimming through long lists of lab values.
Methods: We used Structured Query Language (SQL) to extract 25 lab values for mechanically ventilated ICU patients from the MIMIC-III and eICU data sets. We applied time-windowed sampling and holding together with a support vector machine to fill in the missing values in the sparse time series, and the Tukey range to detect and remove anomalies. We then used the data to train 4 deep learning models for time series classification as well as a gradient boosting-based algorithm, and compared their performance on both data sets.
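As a concrete illustration of the Tukey-range step, the sketch below drops values outside the conventional fences [Q1 - 1.5*IQR, Q3 + 1.5*IQR]; the multiplier 1.5 and the toy lab series are our assumptions, as the abstract does not state them.

```python
# Tukey-range anomaly removal (conventional 1.5*IQR fences assumed).
import numpy as np

def tukey_filter(values, k=1.5):
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    mask = (values >= q1 - k * iqr) & (values <= q3 + k * iqr)
    return values[mask]

lactate = [1.1, 1.3, 1.2, 9.8, 1.4, 1.0]   # toy lab series with one outlier
print(tukey_filter(lactate))               # 9.8 is removed
```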
Results: The models tested in this work (deep neural networks and gradient boosting), combined with the preprocessing pipeline, achieved an accuracy of at least 80% on the multilabel classification task. Moreover, the model based on the multiple convolutional neural network outperformed the other algorithms on both data sets, with accuracy exceeding 89%.
Conclusions: In this work, we show that machine learning and deep neural networks can predict near-future abnormalities in lab values with satisfactory results. Our system was trained, validated, and tested on 2 well-known data sets to bridge the reality gap as much as possible. Finally, the model can be used together with our preprocessing pipeline on real-life EHRs to improve patients' diagnosis and treatment.
Numerous research methods have been developed to detect anomalies in the areas of security and risk analysis. In healthcare, there are likewise numerous use cases where anomaly detection is relevant, early detection of sepsis being one of them. Early treatment of sepsis is cost-effective and reduces the number of hospital days of patients in the ICU. No single procedure is sufficient for sepsis diagnosis; combinations of approaches are needed, and detecting anomalies in patient time series data could help speed up some of these decisions. However, our algorithm must be viewed as complementary to approaches based on laboratory values and physician judgment. The focus of this work is to develop a hybrid method for detecting anomalies that occur, for example, in multidimensional medical signals, sensor signals, or other time series in business and nature. The novelty of our approach lies in extending and combining existing techniques, namely statistics, self-organizing maps, and linear discriminant analysis, with the goal of identifying different types of anomalies in real-time measurement data and locating the point at which an anomaly occurs. The proposed algorithm not only detects anomalies but also finds the actual point at which an anomaly starts.
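The sketch below illustrates one ingredient of such a hybrid scheme in simplified form: a small self-organizing-map-style codebook is fitted to windows of normal data, a statistical threshold is derived from the resulting quantization errors, and the first window exceeding it approximates the anomaly onset. This is our own simplification, omitting the linear discriminant analysis stage; it is not the authors' exact combination.

```python
# Simplified hybrid detector: SOM-style codebook + statistical threshold.
# Our own illustrative simplification, not the authors' algorithm.
import numpy as np

def fit_codebook(windows, n_units=8, lr=0.5, epochs=20, seed=0):
    """Competitive learning on normal windows (a bare-bones SOM without
    a neighborhood function)."""
    rng = np.random.default_rng(seed)
    units = windows[rng.choice(len(windows), n_units)].astype(float)
    for epoch in range(epochs):
        for w in windows:
            bmu = np.argmin(np.linalg.norm(units - w, axis=1))
            units[bmu] += lr * (0.99 ** epoch) * (w - units[bmu])
    return units

def quantization_errors(series, units, win=16):
    """Distance from each sliding window to its best-matching unit."""
    return np.array([np.min(np.linalg.norm(units - series[i:i + win], axis=1))
                     for i in range(len(series) - win + 1)])

# Toy usage: sine signal with an injected level shift at index 300.
t = np.linspace(0, 20, 400)
sig = np.sin(t)
sig[300:] += 2.0
train = sig[:250]                                     # assumed anomaly-free
W = np.lib.stride_tricks.sliding_window_view(train, 16).copy()
units = fit_codebook(W)
ref = quantization_errors(train, units)
thresh = ref.mean() + 3 * ref.std()                   # statistical threshold
errs = quantization_errors(sig, units)
hits = np.flatnonzero(errs > thresh)
print(hits[0] if hits.size else None)                 # approximate onset index
```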