00 Computer Science, Knowledge, Systems
Sustainable software products - Towards assessment criteria for resource and energy efficiency
(2018)
Many authors have proposed criteria to assess the “environmental friendliness” or “sustainability” of software products. However, a causal model that links observable properties of a software product to conditions of it being green or (more generally) sustainable is still missing. Such a causal model is necessary because software products are intangible goods and, as such, only have indirect effects on the physical world. In particular, software products are not subject to wear and tear, can be copied with little effort, and generate no waste or emissions when being disposed of. Viewed in isolation, software seems to be a perfectly sustainable type of product. In real life, however, software products with the same or similar functionality can differ substantially in the burden they place on natural resources, especially if the sequence of released versions and resulting hardware obsolescence is taken into account. In this article, we present a model describing the causal chains from software products to their impacts on natural resources, including energy sources, from a life-cycle perspective. We focus on (i) the demands of software for hardware capacities (local, remote, and in the connecting network) and the resulting hardware energy demand, (ii) the expectations of users regarding such demands and how these affect hardware operating life, and (iii) the autonomy of users in managing their software use with regard to resource efficiency. We propose a hierarchical set of criteria and indicators to assess these impacts. We demonstrate the application of this set of criteria, including the definition of standard usage scenarios for chosen categories of software products. We further discuss the practicability of this type of assessment, its acceptability for several stakeholders and potential consequences for the eco-labeling of software products and sustainable software design.
Optimal mental workload plays a key role in driving performance. Thus, driver-assisting systems that automatically adapt to a driver's current mental workload via brain–computer interfacing might greatly contribute to traffic safety. To design economical brain–computer interfaces that do not compromise driver comfort, it is necessary to identify brain areas that are most sensitive to mental workload changes. In this study, we used functional near-infrared spectroscopy and subjective ratings to measure mental workload in two virtual driving environments with distinct demands. We found that demanding city environments induced both higher subjective workload ratings as well as higher bilateral middle frontal gyrus activation than less demanding country environments. A further analysis with higher spatial resolution revealed a center of activation in the right anterior dorsolateral prefrontal cortex. This area is highly involved in spatial working memory processing. Thus, a main component of drivers’ mental workload in complex surroundings might stem from the fact that large amounts of spatial information about the course of the road as well as other road users have to be constantly maintained, processed and updated. We propose that the right middle frontal gyrus might be a suitable region for the application of powerful small-area brain–computer interfaces.
The aim of this work was to develop and evaluate the reinforcement learning algorithm VentAI, which is able to suggest a dynamically optimized mechanical ventilation regime for critically ill patients. We built, validated and tested its performance on 11,943 events of volume-controlled mechanical ventilation derived from 61,532 distinct ICU admissions and tested it on an independent, secondary dataset (200,859 ICU stays; 25,086 mechanical ventilation events). A patient “data fingerprint” of 44 features was extracted as multidimensional time series in 4-hour time steps. We used a Markov decision process, including a reward system and a Q-learning approach, to find the optimized settings for positive end-expiratory pressure (PEEP), fraction of inspired oxygen (FiO2) and ideal body weight-adjusted tidal volume (Vt). The observed outcome was in-hospital or 90-day mortality. VentAI reached a significantly increased estimated performance return of 83.3 (primary dataset) and 84.1 (secondary dataset) compared to physicians’ standard clinical care (51.1). The number of recommended action changes per mechanically ventilated patient consistently exceeded that of the clinicians. VentAI chose ventilation regimes with lower Vt (5–7.5 mL/kg) 202.9% more frequently, but regimes with higher Vt (7.5–10 mL/kg) 50.8% less frequently. VentAI recommended PEEP levels of 5–7 cmH2O 29.3% more frequently and PEEP levels of 7–9 cmH2O 53.6% more frequently. VentAI avoided high (>55%) FiO2 values (59.8% decrease), while preferring the range of 50–55% (140.3% increase). In conclusion, VentAI provides reproducible high performance by dynamically choosing an optimized, individualized ventilation strategy and thus might be of benefit for critically ill patients.
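The Q-learning core described above can be illustrated with a deliberately tiny sketch. The states, actions, and reward dynamics below are invented placeholders for illustration only, not VentAI's 44-feature clinical state space, its reward system, or its action grid:

```python
import random

# Toy tabular Q-learning sketch: state/action names and rewards are
# hypothetical, NOT clinical recommendations.
STATES = ["stable", "hypoxic"]
ACTIONS = ["low_vt", "high_vt"]

def step(state, action):
    """Invented dynamics: low tidal volume stabilizes (+1), high keeps
    the toy patient hypoxic (-1)."""
    if action == "low_vt":
        return "stable", 1.0
    return "hypoxic", -1.0

def train(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1, seed=42):
    random.seed(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state = random.choice(STATES)
        for _ in range(4):  # truncated horizon of 4-hour time steps
            # epsilon-greedy action selection
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # standard Q-learning update
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

Under these invented dynamics the learned greedy policy always prefers the action with the positive reward, which is the mechanism by which VentAI's much larger model favors particular PEEP, FiO2, and Vt ranges.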
Internet of Things (IoT) and Artificial Intelligence (AI) are among the most promising and disruptive areas of current research and development. However, these areas require deep knowledge in multiple disciplines such as sensors, protocols, embedded programming, distributed systems, statistics and algorithms. This broad knowledge is not easy to acquire, and the software used to design these systems is becoming increasingly complex. Small and medium-sized enterprises therefore have problems in developing new business ideas. At the same time, node- and block-based software tools have been released and are freely available as open-source toolboxes. In this paper, we present an overview of multiple node- and block-based software tools for developing IoT- and AI-based business ideas. We arrange these tools according to their capabilities and further propose extensions and combinations of tools to design a useful open-source library for small and medium-sized enterprises that is easy to use, helps with rapid prototyping, and enables new business ideas to be developed using distributed computing.
Background: Tobacco smoking prevalence continues to be high, particularly among adolescents and young adults with lower educational levels, and is therefore a serious public health problem. Tobacco smoking and problem drinking often co-occur and relapses after successful smoking cessation are often associated with alcohol use. This study aims at testing the efficacy of an integrated smoking cessation and alcohol intervention by comparing it to a smoking cessation only intervention for young people, delivered via the Internet and mobile phone.
Methods/Design: A two-arm cluster-randomised controlled trial with one follow-up assessment after 6 months will be conducted. Participants in the integrated intervention group will: (1) receive individually tailored web-based feedback on their drinking behaviour based on age and gender norms, (2) receive individually tailored mobile phone text messages to promote drinking within low-risk limits over a 3-month period, (3) receive individually tailored mobile phone text messages to support smoking cessation for 3 months, and (4) be offered the option of registering for a more intensive program that provides strategies for smoking cessation centred around a self-defined quit date. Participants in the smoking cessation only intervention group will only receive components (3) and (4). Study participants will be 1350 students who smoke tobacco daily/occasionally, from vocational schools in Switzerland. Main outcome criteria are 7-day point prevalence smoking abstinence and cigarette consumption assessed at the 6-month follow-up.
Discussion: This is the first study testing a fully automated intervention for smoking cessation that simultaneously addresses alcohol use and interrelations between tobacco and alcohol use. The integrated intervention can be easily implemented in various settings and could be used with large groups of young people in a cost-effective way.
Background: Problem drinking, particularly risky single-occasion drinking, is widespread among adolescents and young adults in most Western countries. Mobile phone text messaging allows a proactive and cost-effective delivery of short messages at any time and place, including the delivery of individualised information at times when young people typically drink alcohol. The main objective of the planned study is to test the efficacy of a combined web- and text messaging-based intervention to reduce problem drinking in young people with heterogeneous educational levels.
Methods/Design: A two-arm cluster-randomised controlled trial with one follow-up assessment after 6 months will be conducted to test the efficacy of the intervention in comparison to assessment only. The fully-automated intervention program will provide an online feedback based on the social norms approach as well as individually tailored mobile phone text messages to stimulate (1) positive outcome expectations to drink within low-risk limits, (2) self-efficacy to resist alcohol and (3) planning processes to translate intentions to resist alcohol into action. Program participants will receive up to two weekly text messages over a time period of 3 months. Study participants will be 934 students from approximately 93 upper secondary and vocational schools in Switzerland. Main outcome criterion will be risky single-occasion drinking in the past 30 days preceding the follow-up assessment.
Discussion: This is the first study testing the efficacy of a combined web- and text messaging-based intervention to reduce problem drinking in young people. If this intervention approach proves effective, it could be easily implemented in various settings, and it could reach large numbers of young people in a cost-effective way.
Background: High numbers of consumable medical materials (eg, sterile needles and swabs) are used during the daily routine of intensive care units (ICUs) worldwide. Although medical consumables largely contribute to total ICU hospital expenditure, many hospitals do not track the individual use of materials. Current tracking solutions meeting the specific requirements of the medical environment, like barcodes or radio frequency identification, require specialized material preparation and high infrastructure investment. This impedes the accurate prediction of consumption, leads to high storage maintenance costs caused by large inventories, and hinders scientific work due to inaccurate documentation. Thus, new cost-effective and contactless methods for object detection are urgently needed.
Objective: The goal of this work was to develop and evaluate a contactless visual recognition system for tracking medical consumable materials in ICUs using a deep learning approach on a distributed client-server architecture.
Methods: We developed Consumabot, a novel client-server optical recognition system for medical consumables, based on the convolutional neural network model MobileNet implemented in TensorFlow. The software was designed to run on single-board computer platforms as a detection unit. The system was trained to recognize 20 different materials in the ICU, with 100 sample images provided for each consumable material. We assessed the top-1 recognition rates in the context of different real-world ICU settings: materials presented to the system without visual obstruction, 50% covered materials, and scenarios of multiple items. We further performed an analysis of variance with repeated measures to quantify the effect of adverse real-world circumstances.
Results: Consumabot reached a >99% reliability of recognition after about 60 steps of training and 150 steps of validation. A desirably low cross-entropy of <0.03 was reached for the training set after about 100 iteration steps and after 170 steps for the validation set. The system showed a high top-1 mean recognition accuracy in a real-world scenario of 0.85 (SD 0.11) for objects presented to the system without visual obstruction. Recognition accuracy was lower, but still acceptable, in scenarios where the objects were 50% covered (P<.001; mean recognition accuracy 0.71; SD 0.13) or multiple objects of the target group were present (P=.01; mean recognition accuracy 0.78; SD 0.11), compared to a nonobstructed view. The approach met the criteria of absence of explicit labeling (eg, barcodes, radio frequency labeling) while maintaining a high standard for quality and hygiene with minimal consumption of resources (eg, cost, time, training, and computational power).
Conclusions: Using a convolutional neural network architecture, Consumabot consistently achieved good results in the classification of consumables and thus is a feasible way to recognize and register medical consumables directly to a hospital’s electronic health record. The system shows limitations when the materials are partially covered, because identifying characteristics of the consumables are then hidden from the system. Further development of the assessment in different medical circumstances is needed.
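The top-1 recognition rate reported in the results is simply the fraction of frames whose highest-scoring class matches the ground truth. A minimal sketch, with invented class names and scores standing in for the network's softmax outputs:

```python
def top1_accuracy(predictions, labels):
    """predictions: list of dicts mapping class name -> score;
    labels: the true class name for each frame."""
    correct = sum(
        1 for scores, truth in zip(predictions, labels)
        if max(scores, key=scores.get) == truth
    )
    return correct / len(labels)

# Hypothetical outputs for three frames of ICU consumables.
preds = [
    {"sterile_needle": 0.91, "swab": 0.09},
    {"sterile_needle": 0.40, "swab": 0.60},
    {"sterile_needle": 0.75, "swab": 0.25},  # misclassified frame
]
truth = ["sterile_needle", "swab", "swab"]
accuracy = top1_accuracy(preds, truth)  # 2 of 3 frames correct
```

The same metric, averaged over many frames per scenario, yields the 0.85 / 0.71 / 0.78 figures above.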
Fuzzy system based on two-step cascade genetic optimization strategy for tobacco tar prediction
(2019)
There are many challenges in accurately measuring cigarette tar constituents, including the need for standardized smoke-generation methods for unstable mixtures. In this research, algorithms were developed using a fusion of artificial intelligence methods to predict tar concentration. The outputs of this development are three fuzzy structures optimized with genetic algorithms: genetic algorithm (GA)-FUZZY, GA-adaptive neuro-fuzzy inference system (ANFIS), and GA-GA-FUZZY. The proposed algorithms are used for tar prediction in the cigarette production process, and the prediction results are compared with high-performance liquid chromatography (HPLC) readings.
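The genetic-optimization layer of such a cascade can be sketched in isolation. The sketch below evolves a single hypothetical model parameter toward a known target; the fuzzy and ANFIS structures from the paper are not reproduced, and the target value is invented:

```python
import random

TARGET = 10.5  # hypothetical tar concentration the toy model should output

def fitness(param):
    # Higher is better: negative squared error to the target.
    return -(param - TARGET) ** 2

def evolve(pop_size=20, generations=100, seed=1):
    random.seed(seed)
    population = [random.uniform(0.0, 20.0) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]   # selection (keep best half)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                 # crossover: average parents
            child += random.gauss(0.0, 0.5)     # mutation: Gaussian noise
            children.append(child)
        population = parents + children         # elitism: parents survive
    return max(population, key=fitness)

best = evolve()
```

In a two-step cascade, a GA like this would first tune the coarse structure and then a second GA (or ANFIS training) would refine the fuzzy membership parameters.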
Modeling and executing knowledge-intensive processes (KiPs) are challenging with state-of-the-art approaches, and the specific demands of KiPs are the subject of ongoing research. In this context, little attention has been paid to the ontology-driven combination of data-centric and semantic business process modeling, which finds additional motivation by enabling the division of labor between humans and artificial intelligence. Such approaches have characteristics that could allow support for KiPs based on the inferencing capabilities of reasoners. We confirm this as we show that reasoners can infer the executability of tasks based on a currently researched ontology- and data-driven business process model (ODD-BP model). Further support for KiPs by the proposed inference mechanism results from its ability to infer the relevance of tasks, depending on the extent to which their execution would contribute to process progress. Besides these contributions along the execution perspective (start-to-end direction), we also show how our approach can help to reach specific process goals by inferring the relevance of process elements with regard to their support in achieving such goals (end-to-start direction). The elements with the most valuable process progress can be identified at the intersection of both the execution and the goal perspective. This paper introduces this new approach and verifies its practicability with an evaluation of a KiP in the field of emergency call centers.
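The executability inference (start-to-end direction) can be illustrated with a highly simplified, set-based sketch: a task is inferred executable once all of its required data items are present. The task and datum names are hypothetical, and the actual ODD-BP model uses an ontology and a description-logic reasoner rather than Python sets:

```python
# Hypothetical emergency-call tasks and their data preconditions.
TASKS = {
    "locate_caller":  {"requires": {"incoming_call"}},
    "dispatch_unit":  {"requires": {"incoming_call", "caller_location"}},
    "close_incident": {"requires": {"dispatch_confirmation"}},
}

def executable_tasks(available_data):
    """A task is executable iff its required data is a subset of what
    has been collected so far."""
    return {name for name, spec in TASKS.items()
            if spec["requires"] <= available_data}

# As data arrives during the call, more tasks become executable.
first = executable_tasks({"incoming_call"})
later = executable_tasks({"incoming_call", "caller_location"})
```

A reasoner additionally ranks the executable tasks by how much their execution would advance the process, which this sketch does not attempt to model.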
Online Learning algorithms and Indoor Positioning Systems are complex applications in the environment of cyber-physical systems. These distributed systems are created by networking intelligent machines and autonomous robots on the Internet of Things using embedded systems that enable the exchange of information at any time. This information is processed by Machine Learning algorithms to make decisions about current developments in production or to influence logistics processes for optimization purposes. In this article, we present and categorize the further development of the prototype of a novel Indoor Positioning System, which constantly adapts its knowledge to the conditions of its environment with the help of Online Learning. Here, we apply Online Learning algorithms in the field of sound-based indoor localization with low-cost hardware and demonstrate the improvement of the system over its predecessor and its adaptability for different applications in an experimental case study.
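The constant adaptation via Online Learning can be illustrated with a minimal sketch: a sound-based range model whose speed-of-sound parameter is re-estimated from each new (time-of-flight, distance) sample, so the model tracks changing room conditions. The values and learning rate below are illustrative and not taken from the prototype:

```python
def make_online_estimator(initial_speed=343.0, lr=0.5):
    """Returns an update function that refines the speed-of-sound
    estimate one sample at a time (normalized LMS step)."""
    speed = initial_speed

    def update(time_of_flight, measured_distance):
        nonlocal speed
        # Residual expressed directly in m/s, then a damped correction.
        error = measured_distance / time_of_flight - speed
        speed += lr * error
        return speed

    return update

estimate = make_online_estimator()
# Warm room: samples imply ~349 m/s, so the estimate drifts upward.
for _ in range(30):
    current = estimate(0.01, 3.49)  # 3.49 m measured at 10 ms flight time
```

The same pattern, applied per anchor and per feature, is what lets a localization system keep adapting without periodic offline retraining.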
The objective investigation of the dynamic properties of vocal fold vibrations demands the recording and further quantitative analysis of laryngeal high-speed video (HSV). Quantification of the vocal fold vibration patterns requires as a first step the segmentation of the glottal area within each video frame, from which the vibrating edges of the vocal folds are usually derived. Consequently, the outcome of any further vibration analysis depends on the quality of this initial segmentation process. In this work we propose, for the first time, a procedure to fully automatically segment not only the time-varying glottal area but also the vocal fold tissue directly from laryngeal HSV using a deep Convolutional Neural Network (CNN) approach. Eighteen different CNN configurations were trained and evaluated on a total of 13,000 HSV frames obtained from 56 healthy and 74 pathologic subjects. The segmentation quality of the best-performing CNN model, which uses Long Short-Term Memory (LSTM) cells to also take the temporal context into account, was investigated in depth on 15 test video sequences comprising 100 consecutive images each. As performance measures, the Dice Coefficient (DC) as well as the precision of four anatomical landmark positions were used. Over all test data, a mean DC of 0.85 was obtained for the glottis, and 0.91 and 0.90 for the right and left vocal fold, respectively. The grand average precision of the identified landmarks amounts to 2.2 pixels and is in the same range as comparable manual expert segmentations, which can be regarded as the gold standard. The method proposed here requires no user interaction and overcomes the limitations of current semiautomatic or computationally expensive approaches. Thus, it also allows for the analysis of long HSV sequences and holds the promise of facilitating the objective analysis of vocal fold vibrations in clinical routine. The dataset used here, including the ground truth, will be provided freely to all scientific groups to allow quantitative benchmarking of segmentation approaches in the future.
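The Dice Coefficient used as the performance measure above compares a predicted segmentation mask with its ground truth: twice the overlap divided by the total mask sizes. A minimal sketch on toy binary masks (not actual HSV frames):

```python
def dice_coefficient(pred, truth):
    """Dice Coefficient for two flattened binary masks:
    DC = 2|A ∩ B| / (|A| + |B|), 1.0 for identical masks."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Flattened toy masks: 1 = glottis pixel, 0 = background.
predicted    = [1, 1, 0, 1, 0, 0, 0, 0, 0]
ground_truth = [1, 1, 1, 1, 0, 0, 0, 0, 0]
score = dice_coefficient(predicted, ground_truth)  # 2*3 / (3+4) = 6/7
```

A DC of 1.0 means perfect overlap; the 0.85 to 0.91 values reported above mean the CNN masks agree with the expert masks over most of their area.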
Containerization is one of the most important topics for modern data centers and web developers. Since the number of containers on one- and multi-node systems is growing, knowledge about the energy consumption behavior of single web-service containers is essential in order to save energy and, of course, money. In this article, we show how the energy consumption behavior of single containerized web services/web apps changes when creating replicas of the service in order to scale and balance the web service.
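One simple way to attribute a node's measured energy to individual containers, sketched below, is to split the measured energy by each container's CPU share; this is a common attribution heuristic, not necessarily the measurement method used in the article, and all numbers are invented:

```python
def energy_per_container(total_watts, duration_s, cpu_shares):
    """Attribute node energy (joules) to containers proportionally
    to their relative CPU usage shares."""
    total_share = sum(cpu_shares.values())
    joules = total_watts * duration_s
    return {name: joules * share / total_share
            for name, share in cpu_shares.items()}

# Hypothetical one-minute window: 50 W node, two web replicas and a DB.
usage = {"web-replica-1": 0.30, "web-replica-2": 0.30, "db": 0.40}
shares = energy_per_container(50.0, 60.0, usage)
```

Repeating such an attribution before and after adding replicas makes the per-replica energy overhead of scaling directly comparable.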
Companies have made considerable progress in assessing the sustainability of their processes and products, including in the information and communication technology (ICT) sector. However, it is surprising that little attention has been given to the sustainability performance of software products. For this article, we chose a case study approach to explore the extent to which software manufacturers have considered sustainability criteria for their products. We selected a manufacturer of sustainability management software on the assumption that they would be more likely to integrate elements of sustainability performance in their products. In the case study, we applied a previously developed set of criteria for sustainable software (SCSS), using a questionnaire and experiments, to assess a web-based sustainability management software product regarding its sustainability performance. The assessment finds that, despite the manufacturer being sustainability-conscious, a systematic sustainability assessment of its software products is missing. This implies that sustainability assessment for software products is still novel, corresponding knowledge is missing, and suitable tools are not yet widely applied in the industry. The SCSS presents a suitable approach to close this gap, but it does require further refinement, for example regarding its applicability to web-based software on external servers.