00 Computer Science, Knowledge, Systems
The Saarschleife geotope (SW Germany) is one of the most prominent geotopes of the SaarLorLux region and is known far beyond the borders of the Greater Region. Surprisingly, there is no visual representation of the relief history and genesis of this river meander, which is unique for Central Europe, although such representations are common at sites with comparably outstanding phenomena, such as the Rocher Saint-Michel d'Aiguilhe (France) or some national parks in the U.S. (e.g. the Grand Canyon). The Saarschleife geotope was therefore chosen as a pilot object for the envisaged analysis of landscape genesis as well as for 3D mapping and visualization. The visualization presents the relief history and geological evolution of the last 300 million years in selected geological epochs that are of fundamental importance for understanding today's geomorphological relief conditions, compiled into a summarized chronology.
Background: High numbers of consumable medical materials (eg, sterile needles and swabs) are used during the daily routine of intensive care units (ICUs) worldwide. Although medical consumables largely contribute to total ICU hospital expenditure, many hospitals do not track the individual use of materials. Current tracking solutions meeting the specific requirements of the medical environment, like barcodes or radio frequency identification, require specialized material preparation and high infrastructure investment. This impedes the accurate prediction of consumption, leads to high storage maintenance costs caused by large inventories, and hinders scientific work due to inaccurate documentation. Thus, new cost-effective and contactless methods for object detection are urgently needed.
Objective: The goal of this work was to develop and evaluate a contactless visual recognition system for tracking medical consumable materials in ICUs using a deep learning approach on a distributed client-server architecture.
Methods: We developed Consumabot, a novel client-server optical recognition system for medical consumables, based on the convolutional neural network model MobileNet implemented in TensorFlow. The software was designed to run on single-board computer platforms as a detection unit. The system was trained to recognize 20 different materials in the ICU, with 100 sample images provided for each consumable material. We assessed the top-1 recognition rates in the context of different real-world ICU settings: materials presented to the system without visual obstruction, materials 50% covered, and scenarios with multiple items. We further performed an analysis of variance with repeated measures to quantify the effect of adverse real-world circumstances.
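As a rough illustration of this kind of setup, the sketch below fine-tunes a MobileNet classifier head with TensorFlow/Keras. The directory layout, image size, and hyperparameters are illustrative assumptions, not the study's actual configuration.

```python
import tensorflow as tf

NUM_CLASSES = 20  # number of consumable materials, as in the study

base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # reuse pretrained features, train only the classifier head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# roughly 100 images per class, as in the study; "materials/" is a hypothetical path
train_ds = tf.keras.utils.image_dataset_from_directory(
    "materials/", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=10)
```

Freezing the pretrained backbone keeps training cheap enough for single-board detection units, which is consistent with the low-resource goal stated below.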
Results: Consumabot reached >99% recognition reliability after about 60 training steps and 150 validation steps. A desirably low cross entropy of <0.03 was reached for the training set after about 100 iteration steps and for the validation set after 170 steps. The system showed a high top-1 mean recognition accuracy of 0.85 (SD 0.11) in a real-world scenario for objects presented without visual obstruction. Recognition accuracy was lower, but still acceptable, in scenarios where the objects were 50% covered (P<.001; mean recognition accuracy 0.71; SD 0.13) or where multiple objects of the target group were present (P=.01; mean recognition accuracy 0.78; SD 0.11), compared to an unobstructed view. The approach met the criterion of requiring no explicit labeling (eg, barcodes, radio frequency labeling) while maintaining a high standard for quality and hygiene with minimal consumption of resources (eg, cost, time, training, and computational power).
Conclusions: Using a convolutional neural network architecture, Consumabot consistently achieved good results in the classification of consumables and thus offers a feasible way to recognize and register medical consumables directly in a hospital's electronic health record. The system shows limitations when materials are partially covered, because identifying characteristics of the consumables are then hidden from the system. Further development and assessment under different medical circumstances are needed.
Numerous research methods have been developed to detect anomalies in the areas of security and risk analysis. In healthcare, there are numerous use cases where anomaly detection is relevant, such as the early detection of sepsis. Early treatment of sepsis is cost-effective and reduces the number of hospital days of patients in the ICU. No single procedure is sufficient for sepsis diagnosis, and combinations of approaches are needed. Detecting anomalies in patient time series data could help speed up certain decisions; however, our algorithm must be viewed as complementary to other approaches based on laboratory values and physician judgments. The focus of this work is to develop a hybrid method for detecting anomalies that occur, for example, in multidimensional medical signals, sensor signals, or other time series in business and nature. The novelty of our approach lies in extending and combining existing approaches (statistics, Self-Organizing Maps, and Linear Discriminant Analysis) in a unique and unprecedented way, with the goal of identifying different types of anomalies in real-time measurement data and pinpointing where an anomaly occurs. The proposed algorithm has the potential not only to detect anomalies but also to find the actual points at which an anomaly starts.
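A minimal sketch of one ingredient of such a hybrid (not the paper's full method): a Self-Organizing Map trained on normal signal windows, flagging windows whose quantization error is unusually high. It assumes the third-party minisom package; window size, signal, and threshold rule are illustrative.

```python
import numpy as np
from minisom import MiniSom

def sliding_windows(series, width):
    return np.stack([series[i:i + width] for i in range(len(series) - width + 1)])

rng = np.random.default_rng(0)
normal = np.sin(np.linspace(0, 60, 3000)) + 0.05 * rng.standard_normal(3000)
X = sliding_windows(normal, 32)

som = MiniSom(8, 8, 32, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 5000)  # learn the shape of "normal" windows

def qe(som, x):  # distance of a sample to its best-matching unit
    return np.linalg.norm(x - som.get_weights()[som.winner(x)])

errors = np.array([qe(som, x) for x in X])
threshold = errors.mean() + 3 * errors.std()  # simple statistical cut-off

test = normal.copy()
test[1500:1532] += 2.0  # inject an anomaly
flags = [i for i, w in enumerate(sliding_windows(test, 32)) if qe(som, w) > threshold]
print("anomalous windows start near index:", flags[:5])
```

The threshold here also marks where the anomalous region begins, which loosely mirrors the paper's goal of locating the point at which an anomaly starts.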
Containerization is one of the most important topics for modern data centers and web developers. Since the number of containers on one- and multi-node systems is growing, knowledge about the energy consumption behavior of single web-service containers is essential in order to save energy and, of course, money. In this article, we are going to show how the energy consumption behavior of single containerized web services/web apps changes while creating replicas of the service in order to scale and balance the web service.
One key to successful and fluent human-robot collaboration in disassembly processes is equipping the robot system with higher autonomy and intelligence. In this paper, we present an informed software agent that controls the robot behavior to form an intelligent robot assistant for disassembly purposes. Since the disassembly process depends first of all on the product structure, we inform the agent through a generic approach based on product models. The product model is transformed into a directed graph and used to build, share, and define a coarse disassembly plan. To refine the workflow, we formulate "the problem of loosening a connection and the distribution of the work" as a search problem. The resulting detailed plan consists of a sequence of actions that are used to call, parametrize, and execute robot programs to carry out the assistance. The aim of this research is to equip robot systems with knowledge and skills that allow them to perform their assistance autonomously and ultimately improve the ergonomics of disassembly workstations.
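A minimal sketch of the graph step described above: precedence relations from a product structure become a directed graph, and a topological sort yields one feasible coarse disassembly order. The product model and edges here are hypothetical examples, not the authors' generic transformation.

```python
import networkx as nx

# edge (a, b) means: a must be loosened/removed before b becomes reachable
precedence = [
    ("housing_screws", "cover"),
    ("cover", "battery_connector"),
    ("battery_connector", "battery"),
    ("cover", "mainboard_screws"),
    ("mainboard_screws", "mainboard"),
]

g = nx.DiGraph(precedence)
plan = list(nx.topological_sort(g))  # one feasible coarse disassembly sequence
print(plan)
```

Refining such a coarse plan into concrete, parametrized robot actions is where the search-problem formulation mentioned above would take over.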
Companies, including those in the information and communication technology (ICT) sector, have made considerable progress in assessing the sustainability of their processes and products. However, surprisingly little attention has been given to the sustainability performance of software products. For this article, we chose a case study approach to explore the extent to which software manufacturers have considered sustainability criteria for their products. We selected a manufacturer of sustainability management software on the assumption that it would be more likely to integrate elements of sustainability performance into its products. In the case study, we applied a previously developed set of criteria for sustainable software (SCSS), using a questionnaire and experiments, to assess a web-based sustainability management software product regarding its sustainability performance. The assessment finds that, despite a sustainability-conscious manufacturer, a systematic assessment of the sustainability of software products is missing in the case study. This implies that sustainability assessment for software products is still novel, that corresponding knowledge is missing, and that suitable tools are not yet widely applied in the industry. The SCSS presents a suitable approach to close this gap, but it requires further refinement, for example regarding its applicability to web-based software on external servers.
The aim of this work was to develop and evaluate the reinforcement learning algorithm VentAI, which is able to suggest a dynamically optimized mechanical ventilation regime for critically ill patients. We built, validated, and tested its performance on 11,943 events of volume-controlled mechanical ventilation derived from 61,532 distinct ICU admissions and tested it on an independent, secondary dataset (200,859 ICU stays; 25,086 mechanical ventilation events). A patient “data fingerprint” of 44 features was extracted as a multidimensional time series in 4-hour time steps. We used a Markov decision process, including a reward system and a Q-learning approach, to find optimized settings for positive end-expiratory pressure (PEEP), fraction of inspired oxygen (FiO2), and ideal body weight-adjusted tidal volume (Vt). The observed outcome was in-hospital or 90-day mortality. VentAI reached a significantly increased estimated performance return of 83.3 (primary dataset) and 84.1 (secondary dataset) compared to physicians’ standard clinical care (51.1). The number of recommended action changes per mechanically ventilated patient consistently exceeded that of the clinicians. VentAI chose ventilation regimes with lower Vt (5–7.5 mL/kg) 202.9% more frequently, but regimes with higher Vt (7.5–10 mL/kg) 50.8% less frequently. VentAI recommended PEEP levels of 5–7 cm H2O 29.3% more frequently and PEEP levels of 7–9 cm H2O 53.6% more frequently. VentAI avoided high (>55%) FiO2 values (59.8% decrease) while preferring the range of 50–55% (140.3% increase). In conclusion, VentAI provides reproducible high performance by dynamically choosing an optimized, individualized ventilation strategy and thus might benefit critically ill patients.
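For readers unfamiliar with the method, the core of tabular Q-learning is a single update rule, sketched below on a toy discretized state/action space. The states, actions, reward, and hyperparameters are illustrative stand-ins, not the study's 44-feature state or reward system.

```python
import numpy as np

n_states, n_actions = 100, 7 * 7 * 7  # e.g. discretized PEEP x FiO2 x Vt bins
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99  # learning rate, discount factor

def q_update(s, a, r, s_next, done):
    """One Q-learning step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    target = r if done else r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

# a replayed transition (s, a, r, s'); in the real system the reward would
# encode the clinical outcome rather than an arbitrary value
q_update(s=3, a=42, r=1.0, s_next=17, done=False)
```

Learning from recorded transitions in this off-policy way is what allows such a system to be trained on retrospective ICU data rather than by experimenting on patients.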
Background: Problem drinking, particularly risky single-occasion drinking, is widespread among adolescents and young adults in most Western countries. Mobile phone text messaging allows proactive and cost-effective delivery of short messages at any time and place, including individualised information at the times when young people typically drink alcohol. The main objective of the planned study is to test the efficacy of a combined web- and text-messaging-based intervention to reduce problem drinking in young people with heterogeneous educational levels.
Methods/Design: A two-arm cluster-randomised controlled trial with one follow-up assessment after 6 months will be conducted to test the efficacy of the intervention in comparison to assessment only. The fully automated intervention program will provide online feedback based on the social norms approach as well as individually tailored mobile phone text messages to stimulate (1) positive outcome expectations for drinking within low-risk limits, (2) self-efficacy to resist alcohol, and (3) planning processes to translate intentions to resist alcohol into action. Program participants will receive up to two text messages per week over a period of 3 months. Study participants will be 934 students from approximately 93 upper secondary and vocational schools in Switzerland. The main outcome criterion will be risky single-occasion drinking in the 30 days preceding the follow-up assessment.
Discussion: This is the first study testing the efficacy of a combined web- and text-messaging-based intervention to reduce problem drinking in young people. Should this intervention approach prove effective, it could be easily implemented in various settings and could reach large numbers of young people in a cost-effective way.
Background: Tobacco smoking prevalence continues to be high, particularly among adolescents and young adults with lower educational levels, and is therefore a serious public health problem. Tobacco smoking and problem drinking often co-occur and relapses after successful smoking cessation are often associated with alcohol use. This study aims at testing the efficacy of an integrated smoking cessation and alcohol intervention by comparing it to a smoking cessation only intervention for young people, delivered via the Internet and mobile phone.
Methods/Design: A two-arm cluster-randomised controlled trial with one follow-up assessment after 6 months will be conducted. Participants in the integrated intervention group will: (1) receive individually tailored web-based feedback on their drinking behaviour based on age and gender norms, (2) receive individually tailored mobile phone text messages promoting drinking within low-risk limits over a 3-month period, (3) receive individually tailored mobile phone text messages supporting smoking cessation for 3 months, and (4) be offered the option of registering for a more intensive program that provides strategies for smoking cessation centred around a self-defined quit date. Participants in the smoking cessation only intervention group will receive components (3) and (4) only. Study participants will be 1350 students from vocational schools in Switzerland who smoke tobacco daily or occasionally. The main outcome criteria are 7-day point prevalence smoking abstinence and cigarette consumption assessed at the 6-month follow-up.
Discussion: This is the first study testing a fully automated intervention for smoking cessation that simultaneously addresses alcohol use and interrelations between tobacco and alcohol use. The integrated intervention can be easily implemented in various settings and could be used with large groups of young people in a cost-effective way.
Modeling and executing knowledge-intensive processes (KiPs) are challenging with state-of-the-art approaches, and the specific demands of KiPs are the subject of ongoing research. In this context, little attention has been paid to the ontology-driven combination of data-centric and semantic business process modeling, which is additionally motivated by enabling the division of labor between humans and artificial intelligence. Such approaches have characteristics that could support KiPs based on the inferencing capabilities of reasoners. We confirm this by showing that reasoners can infer the executability of tasks based on a currently researched ontology- and data-driven business process model (ODD-BP model). Further support for KiPs by the proposed inference mechanism results from its ability to infer the relevance of tasks, depending on the extent to which their execution would contribute to process progress. Besides these contributions along the execution perspective (start-to-end direction), we also show how our approach can help to reach specific process goals by inferring the relevance of process elements regarding their support in achieving such goals (end-to-start direction). The elements with the most valuable process progress can be identified in the intersection of the execution and goal perspectives. This paper introduces this new approach and verifies its practicability with an evaluation of a KiP in the field of emergency call centers.
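As a generic illustration of reasoner-inferred task executability (not the ODD-BP model itself), the following sketch uses the owlready2 package, whose sync_reasoner() runs the HermiT reasoner (a Java runtime is required). A task whose required input is asserted as available is classified into a defined "executable" class. All names are hypothetical.

```python
from owlready2 import get_ontology, Thing, ObjectProperty, sync_reasoner

onto = get_ontology("http://example.org/oddbp#")

with onto:
    class DataObject(Thing): pass
    class AvailableData(DataObject): pass  # data that is currently at hand
    class Task(Thing): pass
    class hasAvailableInput(ObjectProperty):
        domain = [Task]; range = [DataObject]
    class ExecutableTask(Task):
        # defined class: any task with at least one available input
        equivalent_to = [Task & hasAvailableInput.some(AvailableData)]

triage = Task("triage_call")
loc = AvailableData("caller_location")
triage.hasAvailableInput = [loc]

sync_reasoner()  # classification re-parents individuals into defined classes
print(ExecutableTask in triage.is_a)  # True: executability was inferred, not asserted
```

The point mirrored here is that executability is never stated explicitly; it emerges from the data state and the class definition, which is what the paper exploits in both the execution and the goal direction.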
The objective investigation of the dynamic properties of vocal fold vibrations demands the recording and quantitative analysis of laryngeal high-speed video (HSV). Quantifying vocal fold vibration patterns requires, as a first step, segmenting the glottal area within each video frame, from which the vibrating edges of the vocal folds are usually derived. Consequently, the outcome of any further vibration analysis depends on the quality of this initial segmentation. In this work we propose, for the first time, a procedure to fully automatically segment not only the time-varying glottal area but also the vocal fold tissue directly from laryngeal HSV using a deep Convolutional Neural Network (CNN) approach. Eighteen different CNN configurations were trained and evaluated on a total of 13,000 HSV frames obtained from 56 healthy and 74 pathologic subjects. The segmentation quality of the best-performing CNN model, which uses Long Short-Term Memory (LSTM) cells to also take the temporal context into account, was investigated in depth on 15 test video sequences comprising 100 consecutive images each. As performance measures, the Dice Coefficient (DC) as well as the precision of four anatomical landmark positions were used. Over all test data, a mean DC of 0.85 was obtained for the glottis, and 0.91 and 0.90 for the right and left vocal fold, respectively. The grand average precision of the identified landmarks amounts to 2.2 pixels and is in the same range as comparable manual expert segmentations, which can be regarded as the gold standard. The method proposed here requires no user interaction and overcomes the limitations of current semiautomatic or computationally expensive approaches. Thus, it also allows for the analysis of long HSV sequences and holds the promise to facilitate the objective analysis of vocal fold vibrations in clinical routine. The dataset used here, including the ground truth, will be provided freely to all scientific groups to allow quantitative benchmarking of segmentation approaches in the future.
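For reference, the Dice Coefficient used above as the segmentation quality measure can be computed for binary masks as in this small sketch:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Overlap of two binary masks: 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```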
Fuzzy system based on two-step cascade genetic optimization strategy for tobacco tar prediction
(2019)
There are many challenges in accurately measuring cigarette tar constituents, including the need for standardized smoke generation methods related to unstable mixtures. In this research, algorithms were developed that fuse artificial intelligence methods to predict tar concentration. The outputs are three fuzzy structures optimized with genetic algorithms: GA-FUZZY, GA-ANFIS (genetic algorithm combined with an adaptive neuro-fuzzy inference system), and GA-GA-FUZZY. The proposed algorithms are used for tar prediction in the cigarette production process, and the prediction results are compared with chromatography (high-performance liquid chromatography, HPLC) readings.
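As a drastically simplified illustration of genetically optimizing a fuzzy system (in the spirit of the GA-FUZZY variant, not the paper's actual two-step cascade), the sketch below evolves the parameters of a toy one-input, two-rule Sugeno-style model against synthetic data. All parameters, the target signal, and the fitness function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x)  # synthetic stand-in for measured concentrations

def fuzzy_predict(p, x):
    c1, c2, s, o1, o2 = p  # two Gaussian membership centers, shared width, rule outputs
    w1 = np.exp(-((x - c1) / s) ** 2)
    w2 = np.exp(-((x - c2) / s) ** 2)
    return (w1 * o1 + w2 * o2) / (w1 + w2 + 1e-9)  # weighted rule average

def fitness(p):
    return -np.mean((fuzzy_predict(p, x) - y) ** 2)  # negative MSE

pop = rng.uniform(-1, 1, size=(50, 5))
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]            # selection: keep the 10 best
    children = parents[rng.integers(0, 10, 40)] + 0.05 * rng.standard_normal((40, 5))
    pop = np.vstack([parents, children])               # elitism + Gaussian mutation

best = pop[np.argmax([fitness(p) for p in pop])]
print("best MSE:", -fitness(best))
```

A real GA would add crossover and constraint handling, and a cascade like the paper's would chain a second optimization stage on top of the first.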
Model transformations are central to model-driven software development. Applications of model transformations include creating models, handling model co-evolution, model merging, and understanding model evolution. In the past, various (semi-)automatic approaches to derive model transformations from meta-models or from examples have been proposed. These approaches require time-consuming handcrafting or the recording of concrete examples, or they are unable to derive complex transformations. We propose a novel unsupervised approach, called Ockham, which is able to learn edit operations from model histories in model repositories. Ockham is based on the idea that meaningful domain-specific edit operations are the ones that compress the model differences. It employs frequent subgraph mining to discover frequent structures in model difference graphs. We evaluate our approach in two controlled experiments and one real-world case study of a large-scale industrial model-driven architecture project in the railway domain. We found that our approach is able to discover frequent edit operations that have actually been applied before. Furthermore, Ockham is able to extract edit operations that are meaningful—in the sense of explaining model differences through the edit operations they comprise—to practitioners in an industrial setting. We also discuss use cases (i.e., semantic lifting of model differences and change profiles) for the discovered edit operations in this industrial setting. We find that the edit operations discovered by Ockham can be used to better understand and simulate the evolution of models.
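As a heavily simplified stand-in for the frequent-structure-mining step described above, the sketch below counts labelled change edges across a set of model difference graphs and keeps those above a support threshold. The railway-flavoured labels are hypothetical, and a real miner (as in Ockham) would discover larger connected subgraphs, e.g. with a gSpan-style algorithm.

```python
from collections import Counter

# each difference graph reduced to a set of (source_type, change_kind, target_type) edges
diff_graphs = [
    {("Signal", "add", "Route"), ("Route", "add", "Track")},
    {("Signal", "add", "Route"), ("Route", "add", "Track"), ("Signal", "delete", "Track")},
    {("Signal", "add", "Route")},
]

support = Counter(edge for g in diff_graphs for edge in g)
min_support = 2
frequent = [e for e, n in support.items() if n >= min_support]
print(frequent)  # candidate building blocks of recurring edit operations
```

The compression intuition carries over directly: structures that recur across many differences are the ones worth packaging as named edit operations.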
Deep learning-based image registration (DLIR) has been widely developed, but it remains challenging to perceive both small and large deformations. Moreover, the effectiveness of DLIR methods has rarely been validated on downstream tasks. In this study, a multi-scale complexity-aware registration network (MSCAReg-Net) is proposed, devising a complexity-aware technique to facilitate DLIR under a single-resolution framework. Specifically, the technique uses a multi-scale complexity-aware module (MSCA-Module) to perceive deformations of distinct complexities, supported by a feature calibration module (FC-Module) and a feature aggregation module (FA-Module) that generate more distinguishable deformation features. Experimental results demonstrate the superiority of the proposed MSCAReg-Net over existing methods in terms of registration accuracy. In addition to the Dice similarity coefficient (DSC) and the percentage of voxels with a non-positive Jacobian determinant (|J(φ)| ≤ 0), a comprehensive evaluation of the registration performance was performed by applying the method to a downstream task of multi-atlas hippocampus segmentation (MAHS). Experimental results demonstrate that the method yields better hippocampus segmentation than other DLIR methods and segmentation performance comparable to the leading SyN method. The comprehensive assessment, including DSC, |J(φ)| ≤ 0, and the downstream application to MAHS, demonstrates the advances of this method.
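For reference, the fraction of voxels with a non-positive Jacobian determinant, used above as a regularity measure, can be computed from a dense displacement field as in this sketch. The array layout (channels first, voxel units) is an assumption.

```python
import numpy as np

def nonpositive_jacobian_fraction(disp):
    """disp: displacement field u of phi(x) = x + u(x), shape (3, D, H, W)."""
    grads = [np.gradient(disp[i]) for i in range(3)]  # du_i/dx_j along each axis
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            # Jacobian of phi: J_ij = delta_ij + du_i/dx_j
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    det = np.linalg.det(J)
    return float((det <= 0).mean())  # share of folding (non-diffeomorphic) voxels
```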
Internet of Things (IoT) and Artificial Intelligence (AI) are among the most promising and disruptive areas of current research and development. However, these areas require deep knowledge in multiple disciplines such as sensors, protocols, embedded programming, distributed systems, statistics, and algorithms. This broad knowledge is not easy to acquire, and the software used to design these systems is becoming increasingly complex. Small and medium-sized enterprises therefore have problems developing new business ideas. Meanwhile, node- and block-based software tools have been released and are freely available as open-source toolboxes. In this paper, we present an overview of multiple node- and block-based software tools for developing IoT- and AI-based business ideas. We arrange these tools according to their capabilities and further propose extensions and combinations of tools to design a useful open-source library for small and medium-sized enterprises that is easy to use, supports rapid prototyping, and enables new business ideas to be developed using distributed computing.
Online Learning algorithms and Indoor Positioning Systems are complex applications in the environment of cyber-physical systems. These distributed systems are created by networking intelligent machines and autonomous robots on the Internet of Things, using embedded systems that enable the exchange of information at any time. This information is processed by Machine Learning algorithms to make decisions about current developments in production or to influence logistics processes for optimization purposes. In this article, we present and categorize the further development of the prototype of a novel Indoor Positioning System, which constantly adapts its knowledge to the conditions of its environment with the help of Online Learning. We apply Online Learning algorithms to sound-based indoor localization with low-cost hardware and demonstrate, in an experimental case study, the improvement of the system over its predecessor and its adaptability to different applications.
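A minimal sketch of the online-learning idea above: a regressor that keeps adapting its position estimate as new sound-feature samples arrive, using scikit-learn's partial_fit interface. The features, the linear mapping, and the streaming loop are illustrative assumptions, not the system's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(learning_rate="constant", eta0=0.01)

rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.5, 0.3])  # hypothetical mapping: sound features -> x-coordinate

for t in range(1000):                  # stream of incoming measurements
    features = rng.standard_normal(3)  # e.g. time-of-flight / level differences
    position_x = features @ true_w + 0.01 * rng.standard_normal()
    model.partial_fit(features.reshape(1, -1), [position_x])  # one incremental update

print(model.coef_)  # converges toward the underlying mapping
```

Because each update is incremental, the model keeps tracking the environment if the acoustic conditions drift, which is the property the system above relies on.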
This paper describes the project “Visual Knowledge Communication”, a joint project that started recently. The partners are psychologists and computer scientists from four universities of the German state of Rhineland-Palatinate. The starting point for the project was the fact that visualizations have attracted considerable interest in both psychology and computer science in recent years. However, psychologists and computer scientists have so far pursued their investigations independently of each other. The main goal of this project is to support and foster cooperation between psychologists and computer scientists in several visualization research projects.
The paper sketches the overall project. It then discusses in more detail the authors' subproject, which deals with a peer review process for animations developed by students. The basic ideas, the main goals, and the project plan are described.
This paper is a work-in-progress report. Therefore, it does not contain any results.
Background: In recent years, the volume of medical knowledge and health data has increased rapidly. For example, the increased availability of electronic health records (EHRs) provides accurate, up-to-date, and complete information about patients at the point of care and enables medical staff to access patient records quickly for more coordinated and efficient care. With this increase in knowledge, the complexity of accurate, evidence-based medicine continues to grow. Health care workers must deal with an increasing amount of data and documentation. Meanwhile, relevant patient data are frequently overshadowed by a layer of less relevant data, causing medical staff to miss important values or abnormal trends and their significance for the progression of the patient’s case.
Objective: The goal of this work is to analyze the current laboratory results of patients in the intensive care unit (ICU) and classify which of these lab values could be abnormal the next time the test is done. Detecting near-future abnormalities can support clinicians' decision-making in the ICU by drawing their attention to the important values and focusing future lab testing, saving both time and money. Additionally, it gives doctors more time to spend with patients rather than skimming through a long list of lab values.
Methods: We used Structured Query Language to extract 25 lab values for mechanically ventilated patients in the ICU from the MIMIC-III and eICU data sets. Additionally, we applied time-windowed sampling and holding, and a support vector machine to fill in the missing values in the sparse time series, as well as the Tukey range to detect and delete anomalies. Then, we used the data to train 4 deep learning models for time series classification, as well as a gradient boosting–based algorithm and compared their performance on both data sets.
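A minimal sketch of the Tukey-range outlier rule mentioned in the Methods, applied to a single lab-value series; the conventional 1.5 × IQR factor and the example values are illustrative.

```python
import numpy as np

def tukey_outlier_mask(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.nanpercentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - k * iqr) | (values > q3 + k * iqr)

lactate = np.array([1.1, 1.3, 1.2, 9.8, 1.4, 1.2])  # hypothetical series
print(tukey_outlier_mask(lactate))  # flags the implausible 9.8 for deletion
```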
Results: The models tested in this work (deep neural networks and gradient boosting), combined with the preprocessing pipeline, achieved an accuracy of at least 80% on the multilabel classification task. Moreover, the model based on the multiple convolutional neural network outperformed the other algorithms on both data sets, with the accuracy exceeding 89%.
Conclusions: In this work, we show that using machine learning and deep neural networks to predict near-future abnormalities in lab values can achieve satisfactory results. Our system was trained, validated, and tested on 2 well-known data sets to ensure that our system bridged the reality gap as much as possible. Finally, the model can be used in combination with our preprocessing pipeline on real-life EHRs to improve patients’ diagnosis and treatment.
In their paper, Ahmad et al. first proposed applying the sharp function to the classification of images. In continuation of their work, in this paper we investigate the use of the sharp function as an edge detector within well-known diffusion models. Further, we discuss the formulation of the weak solution of the nonlinear diffusion equation and prove the uniqueness of the weak solution of the nonlinear problem. The anisotropic generalization of sharp-operator-based diffusion has also been implemented and tested on various types of images.
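For orientation, the sketch below shows one explicit step of the classical Perona-Malik nonlinear diffusion scheme, the family of models referred to above; the sharp-function-based diffusivity proposed by the authors is not reproduced here, and the standard exponential edge-stopping function is used instead.

```python
import numpy as np

def perona_malik_step(u, dt=0.15, kappa=0.1):
    # differences to the four neighbours (reflecting the discrete gradient)
    dn = np.roll(u, -1, axis=0) - u
    ds = np.roll(u, 1, axis=0) - u
    de = np.roll(u, -1, axis=1) - u
    dw = np.roll(u, 1, axis=1) - u
    g = lambda d: np.exp(-(d / kappa) ** 2)  # diffusivity: small across strong edges
    return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

u = np.random.default_rng(0).random((64, 64))  # stand-in image
for _ in range(20):
    u = perona_malik_step(u)  # smooths homogeneous regions while preserving edges
```

Replacing g with an edge detector derived from the sharp function is precisely the kind of substitution the paper investigates.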
Optimal mental workload plays a key role in driving performance. Thus, driver-assisting systems that automatically adapt to a driver's current mental workload via brain-computer interfacing might greatly contribute to traffic safety. To design economical brain-computer interfaces that do not compromise driver comfort, it is necessary to identify the brain areas that are most sensitive to changes in mental workload. In this study, we used functional near-infrared spectroscopy and subjective ratings to measure mental workload in two virtual driving environments with distinct demands. We found that demanding city environments induced both higher subjective workload ratings and higher bilateral middle frontal gyrus activation than less demanding country environments. A further analysis with higher spatial resolution revealed a center of activation in the right anterior dorsolateral prefrontal cortex, an area highly involved in spatial working memory processing. Thus, a main component of drivers' mental workload in complex surroundings might stem from the fact that large amounts of spatial information about the course of the road as well as about other road users have to be constantly maintained, processed, and updated. We propose that the right middle frontal gyrus might be a suitable region for the application of powerful small-area brain-computer interfaces.
Social media data are transforming sustainability science. However, challenges from restrictions in data accessibility and ethical concerns regarding potential data misuse have threatened this nascent field. Here, we review the literature on the use of social media data in environmental and sustainability research. We find that they can play a novel and irreplaceable role in achieving the UN Sustainable Development Goals by allowing a nuanced understanding of human-nature interactions at scale, observing the dynamics of social-ecological change, and investigating the co-construction of nature values. We reveal threats to data access and highlight scientific responsibility to address trade-offs between research transparency and privacy protection, while promoting inclusivity. This contributes to a wider societal debate of social media data for sustainability science and for the common good.
Sustainable software products - Towards assessment criteria for resource and energy efficiency
(2018)
Many authors have proposed criteria to assess the “environmental friendliness” or “sustainability” of software products. However, a causal model that links observable properties of a software product to conditions of it being green or (more general) sustainable is still missing. Such a causal model is necessary because software products are intangible goods and, as such, only have indirect effects on the physical world. In particular, software products are not subject to any wear and tear, they can be copied without great effort, and generate no waste or emissions when being disposed of. Viewed in isolation, software seems to be a perfectly sustainable type of product. In real life, however, software products with the same or similar functionality can differ substantially in the burden they place on natural resources, especially if the sequence of released versions and resulting hardware obsolescence is taken into account. In this article, we present a model describing the causal chains from software products to their impacts on natural resources, including energy sources, from a life-cycle perspective. We focus on (i) the demands of software for hardware capacities (local, remote, and in the connecting network) and the resulting hardware energy demand, (ii) the expectations of users regarding such demands and how these affect hardware operating life, and (iii) the autonomy of users in managing their software use with regard to resource efficiency. We propose a hierarchical set of criteria and indicators to assess these impacts. We demonstrate the application of this set of criteria, including the definition of standard usage scenarios for chosen categories of software products. We further discuss the practicability of this type of assessment, its acceptability for several stakeholders and potential consequences for the eco-labeling of software products and sustainable software design.
The objective of the German non-profit association NFDI (the German abbreviation for “National Research Data Infrastructure”) is to make the data stock of the entire German science system accessible to the public. To do so, it should involve all stakeholders. Currently, however, the Universities of Applied Sciences (UAS) are underrepresented in the NFDI, and there is a danger that their needs will be neglected. We therefore present the project “Research Data Management at Universities of Applied Sciences in the State of Rhineland-Palatinate” (FDM@HAW.rlp), which is funded by the German Federal Ministry of Education and Research (BMBF) and financed within the Recovery and Resilience Facility of the European Union. In the project, seven public UAS in Rhineland-Palatinate and the Catholic University of Applied Sciences (CUAS) Mainz pursue a common goal: to establish institutional research data management (RDM) within three years by building up competencies at the UAS, setting up services for researchers, and finding solutions for a common technical infrastructure.