FB Umweltplanung/-technik (UCB)
The Saarschleife geotope (SW Germany) represents one of the most prominent geotopes of the SaarLorLux region and is known far beyond the borders of the Greater Region. Surprisingly, there is no visual representation of the relief history and genesis of this river meander, which is unique in Central Europe, as is common at sites with comparably outstanding phenomena such as the Rocher Saint-Michel d'Aiguilhe (France) or some national parks in the U.S. (e.g. the Grand Canyon). The Saarschleife geotope was therefore chosen as a pilot object for the envisaged analysis of landscape genesis as well as for 3D mapping and visualization. The visualization presents the relief history and geological evolution of the last 300 million years in selected geological epochs that are of fundamental importance for understanding today's geomorphological relief conditions, compiled into a summarized chronology.
Terrestrial cyanobacteria grow as phototrophic biofilms and offer a wide spectrum of interesting products. For the cultivation of phototrophic biofilms, different reactor concepts have been developed in recent years. One of the main influencing factors is the surface material and the adhesion strength of the chosen production strain. In this work, a flow chamber was developed that, in combination with optical coherence tomography and computational fluid dynamics simulation, enables a straightforward analysis of adhesion forces between different biofilms and varied surface materials. With this setup, differences between two cyanobacteria strains and two surface materials were shown. Adhesion increased with longer biofilm cultivation time in all experiments. Additionally, the content of extracellular polymeric substances was analyzed and its role in surface adhesion evaluated. To test the comparability of the flow chamber results with other methods, analogous experiments were conducted with a rotational rheometer, which proved successful. Thus, the presented flow chamber provides an easy-to-implement method for analyzing biofilm adhesion that can be used in future research to determine suitable combinations of microorganisms and cultivation surfaces at lab scale in advance of larger processes.
Science on ecosystems and people to support the Kunming-Montreal Global Biodiversity Framework
(2023)
In December 2022, members of the Convention on Biological Diversity adopted the new Kunming-Montreal Global Biodiversity Framework (GBF) to guide international biodiversity conservation efforts until 2030 in order to be able to live ‘in harmony with nature’ by 2050. This framework addresses the implementation gap left after the Aichi Biodiversity Targets, which were the previous global instrument for mainstreaming biodiversity conservation between 2010 and 2020.
The aim of this editorial is to draw attention to the GBF targets that are most relevant to our readership, with two objectives: First, to suggest how Ecosystems and People may be a venue for emerging research insights in support of the GBF. Second, to highlight examples of recent research in Ecosystems and People that can contribute to enrich, or even challenge, the evidence and development of the GBF Targets.
1. Woody riparian vegetation (WRV) benefits benthic macroinvertebrates in running waters. However, while some functions are provided by WRV irrespective of surrounding and catchment land use, others are context-specific. In recent large-scale studies, effects of WRV on macroinvertebrates were therefore small compared to catchment land use, raising the question about the relevance of WRV for restoration.
2. Model-based recursive partitioning was used to identify context-dependent effects of WRV on the macroinvertebrates' ecological status in small (catchment area 10–100 km²) lowland (n = 361) and mountain (n = 748) streams. WRV cover was quantified from orthophotos along the near (500 m) and far (5000 m) upstream river network and used to predict the site's ecological status. Agricultural, urban and woodland cover at the local and catchment scales, along with hydromorphology, were considered as partitioning variables.
3. In rural agricultural landscapes, the effect of WRV on the ecological status was large, indicating that establishing near-upstream WRV can improve the ecological status by as much as two of the five classes according to the EU Water Framework Directive.
4. Even in urban landscapes, effects of far-upstream WRV were large if catchments had a moderate share of agricultural land use in addition. The beneficial effects of WRV were only limited in purely urban catchments or in a multiple stressor context.
5. Synthesis and applications. While woody riparian vegetation (WRV) can improve the ecological status even in urban settings, it is especially relevant for river management in rural agricultural catchments, where developing WRV is potentially an effective measure to achieve good ecological status.
Rewetting agglomeration (Umbenetzungsagglomeration) offers the possibility of separating particles according to two properties: the wetting behaviour of the solid with respect to the suspension liquid and the binder liquid, and the particle size. The aim of this work is to determine gravimetrically the purity of the graphite agglomerates formed in a mixture with quartz sand and to correlate it with the wetting properties. The quality of these results is examined by X-ray microtomography (µ-CT). It was shown that quartz sand cannot be wetted by paraffin oil, so purities of 99.5% to 99.9% are achieved, which the µ-CT results confirm. An influence of the quartz sand particle size could not be confirmed in the investigated range.
Owing to a long history of anthropogenic pressures, freshwater ecosystems are among the most vulnerable to biodiversity loss [1]. Mitigation measures, including wastewater treatment and hydromorphological restoration, have aimed to improve environmental quality and foster the recovery of freshwater biodiversity [2]. Here, using 1,816 time series of freshwater invertebrate communities collected across 22 European countries between 1968 and 2020, we quantified temporal trends in taxonomic and functional diversity and their responses to environmental pressures and gradients. We observed overall increases in taxon richness (0.73% per year), functional richness (2.4% per year) and abundance (1.17% per year). However, these increases primarily occurred before the 2010s, and have since plateaued. Freshwater communities downstream of dams, urban areas and cropland were less likely to experience recovery. Communities at sites with faster rates of warming had fewer gains in taxon richness, functional richness and abundance. Although biodiversity gains in the 1990s and 2000s probably reflect the effectiveness of water-quality improvements and restoration projects, the decelerating trajectory in the 2010s suggests that the current measures offer diminishing returns. Given new and persistent pressures on freshwater ecosystems, including emerging pollutants, climate change and the spread of invasive species, we call for additional mitigation to revive the recovery of freshwater biodiversity.
This research conducted a probabilistic life-cycle assessment (pLCA) into the greenhouse gas (GHG) emissions performance of nine combinations of truck size and powertrain technology for a recent past and a future (largely decarbonised) situation in Australia. This study finds that the relative and absolute life-cycle GHG emissions performance strongly depends on the vehicle class, powertrain and year of assessment. Life-cycle emission factor distributions vary substantially in their magnitude, range and shape. Diesel trucks had lower life-cycle GHG emissions in 2019 than electric trucks (battery, hydrogen fuel cell), mainly due to the high carbon-emission intensity of the Australian electricity grid (mainly coal) and hydrogen production (mainly through steam–methane reforming). The picture is, however, very different for a more decarbonised situation, where battery electric trucks, in particular, provide deep reductions (about 75–85%) in life-cycle GHG emissions. Fuel-cell electric (hydrogen) trucks also provide substantial reductions (about 50–70%), but not as deep as those for battery electric trucks. Moreover, hydrogen trucks exhibit the largest uncertainty in emissions performance, which reflects the uncertainty and general lack of information for this technology. They therefore carry an elevated risk of not achieving the expected emission reductions. Battery electric trucks show the smallest (absolute) uncertainty, which suggests that these trucks are expected to deliver the deepest and most robust emission reductions. Operational emissions (on-road driving and vehicle maintenance combined) dominate life-cycle emissions for all vehicle classes. Vehicle manufacturing and upstream emissions make a relatively small contribution to life-cycle emissions from diesel trucks (<5% each), but these are important aspects for electric trucks (5% to 30%).
Hydrological variability is a key factor in structuring biotic and abiotic processes in river ecosystems and is of particular importance to fish populations. We used 171 hydrological indices (HI) and young-of-the-year (YOY) fish abundances as indicators of reproductive success to compare species' response patterns to high and low flows on short-, intermediate-, and long-term scales. Our study included 13 common fish species in headwater streams of North Rhine-Westphalia, Germany. Generalized linear models using YOY abundances and HI on high- and low-flow patterns explained on average 64 % of the variability. HI calculated from long time series worked better than HI describing short- and intermediate-term high- and low flows. Species' reproductive success response to low flow HI depended on specific ecological traits whereas high flow HI differentially affected species according to their life history strategies. Equilibrium strategists responded negatively to high frequency and magnitude along with late timing of high flow, while periodic and opportunistic species mostly thrived under these conditions. We identified four species traits that mediated these differences between life history strategies. The reproductive success of species with low relative fecundity, large eggs and larvae, and long incubation periods was negatively impacted by the high frequency, high magnitude, and late timing of high flows. Conversely, the reproductive success of species with high relative fecundity, short incubation periods and small eggs and larvae was fostered by strong, frequent, and late high flows. The consideration of the relationship between reproductive success, life history, and fish species traits over several years under a range of flows is a novel step towards the implementation of measures to mitigate negative impacts and enhance conditions for successful fish reproduction.
In the past decade, research on measuring and assessing the environmental impact of software has gained significant momentum in science and industry. However, due to the large number of research groups, measurement setups, procedure models, tools, and general novelty of the research area, a comprehensive research framework has yet to be created. The literature documents several approaches from researchers and practitioners who have developed individual methods and models, along with more general ideas like the integration of software sustainability in the context of the UN Sustainable Development Goals, or science communication approaches to make the resource cost of software transparent to society. However, a reference measurement model for the energy and resource consumption of software is still missing. In this article, we jointly develop the Green Software Measurement Model (GSMM), in which we bring together the core ideas of the measurement models, setups, and methods of over 10 research groups in four countries who have done pioneering work in assessing the environmental impact of software. We briefly describe the different methods and models used by these research groups, derive the components of the GSMM from them, and then we discuss and evaluate the resulting reference model. By categorizing the existing measurement models and procedures and by providing guidelines for assimilating and tailoring existing methods, we expect this work to aid new researchers and practitioners who want to conduct measurements for their individual use cases.
Social media data are transforming sustainability science. However, challenges from restrictions in data accessibility and ethical concerns regarding potential data misuse have threatened this nascent field. Here, we review the literature on the use of social media data in environmental and sustainability research. We find that they can play a novel and irreplaceable role in achieving the UN Sustainable Development Goals by allowing a nuanced understanding of human-nature interactions at scale, observing the dynamics of social-ecological change, and investigating the co-construction of nature values. We reveal threats to data access and highlight scientific responsibility to address trade-offs between research transparency and privacy protection, while promoting inclusivity. This contributes to a wider societal debate of social media data for sustainability science and for the common good.
Additive manufacturing is an essential tool in innovative production processes. The extended degrees of freedom offer much potential in usage, construction, and product design. Rising raw material and energy costs, constantly increasing environmental requirements, and the increasing demand for resource-saving products represent a paradigm shift in classic production processes.
In addition to the purely energetic evaluation, developing energy models is a method to determine energy consumption and reduce it in the long term. The specific energy consumption model, also known as the SEC model, allows a quick estimation of energy consumption by multiplying the SEC by a reference quantity such as the mass of the workpiece, the manufacturing time, or the exposed area. A strong dependence on the machine used, the peripheral devices considered, and the part geometry is noticeable here.
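As a minimal sketch, the SEC model reduces to a single multiplication; the SEC value and workpiece mass below are hypothetical placeholders, not measured data from this study:

```python
# Illustrative sketch of the specific energy consumption (SEC) model:
# energy = SEC * reference quantity. All numbers are hypothetical.

def sec_energy(sec: float, reference_quantity: float) -> float:
    """Estimate energy consumption as SEC multiplied by a reference
    quantity such as workpiece mass (kg), build time (h), or exposed area (m^2)."""
    return sec * reference_quantity

# Hypothetical SEC of 120 kWh per kg of part mass, applied to a 0.5 kg workpiece.
energy_kwh = sec_energy(sec=120.0, reference_quantity=0.5)
print(energy_kwh)  # 60.0
```

The simplicity of this estimate is exactly why, as noted above, the result depends so strongly on which machine, peripherals, and geometry the SEC value was derived from.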
Previous studies, such as those by Kellens et al. and Baumers et al., have laid the basis for understanding the energy demands of PBF-LB/M processes. Various energy models have subsequently been proposed, including those by Paul and Anand, Yi et al., Lv et al., and Hui et al. These models are often limited by their specificity to sub-processes or subsystems, which restricts their applicability to other manufacturing machines or leads to inaccuracies in energy consumption predictions. The simulation accuracy ACC is mostly in the range of 90%, albeit with the limitation of small sample sizes. Moreover, nearly all these models rely heavily on process time information, making the accuracy of their simulations largely dependent on the quality of the underlying time model.
In the following study, two manufacturing machines of the PBF-LB/M process are analyzed and compared with other studies. The aim is to analyze the power and resource consumption to use these data to build an improved energy model with a high accuracy, which can be used as an additional parameter in the adapted design methodology. Furthermore, potential savings are derived from the load curves.
Diadromous fish have exhibited a dramatic decline since the end of the 20th century. The allis shad (Alosa alosa) population in the Gironde-Garonne-Dordogne (GGD) system, once considered as a reference in Europe, remains low despite a fishing ban in 2008. One hypothesis to explain this decline is that the downstream migration and growth dynamics of young stages have changed due to environmental modifications in the rivers and estuary. We retrospectively analysed juvenile growth and migration patterns using otoliths from adults caught in the GGD system 30 years apart during their spawning migration, in 1987 and 2016. We coupled otolith daily growth increments and laser ablation inductively-coupled plasma mass spectrometry measurements of Sr:Ca, Ba:Ca, and Mn:Ca ratios along the longest growth axis from hatching to an age of 100 days (i.e., during the juvenile stage). A back-calculation allowed us to estimate the size of juveniles at the entrance into the brackish estuary. Based on the geochemistry data, we distinguished four different zones that juveniles encountered during their downstream migration: freshwater, fluvial estuary, brackish estuary, and lower estuary. We identified three migration patterns during the first 100 days of their life: (a) Individuals that reached the lower estuary zone, (b) individuals that reached the brackish estuary zone, and (c) individuals that reached the fluvial estuary zone. On average, juveniles from the 1987 subsample stayed slightly longer in freshwater than juveniles from the 2016 subsample. In addition, juveniles from the 2016 subsample entered the brackish estuary at a smaller size. This result suggests that juveniles from the 2016 subsample might have encountered more difficult conditions during their downstream migration, which we attribute to a longer exposure to the turbid maximum zone. 
This assumption is supported by the microchemical analyses of the otoliths, which suggest, based on wider Mn:Ca peaks, that juveniles in the 2010s experienced a longer period of physiological stress during their downstream migration than juveniles in the 1980s. Finally, juveniles from the 2016 subsample took longer than the 100 days expected from previous studies to exit the lower estuary. Adding a new marker (i.e., Ba:Ca) helped us refine the interpretation of the downstream migration for each individual.
Background: The environmental impact of electric scooters has been the subject of critical debate in the scientific community for the past 5 years. The data published so far are very inhomogeneous and partly methodologically incomplete. Most of the data available in the literature suffer from an average bias of 34%, because end-of-life (EOL) impacts have not been modelled, reported or specified. In addition, the average lifetime mileage of shared fleets of e-scooters, as they are operated in cities around the world, has recently turned out to be much lower than expected. This casts the scooters in an unfavourable light for the necessary mobility transition. Data on impact categories other than the global warming potential (GWP) are scarce. This paper aims to quantify the strengths and weaknesses of e-scooters in terms of their contribution to sustainable transport by more specifically defining and extending the life cycle assessment (LCA) modelling conditions: the modelling is based on two genuine material inventories obtained by dismantling two different e-scooters, one based on a traditional aluminium frame and another, for the first time, based on plastic material.
Results: This study provides complete inventory data to facilitate further LCA modelling of electric kick scooters. The plastic scooter had a 26% lower lifetime GWP than the aluminium vehicle. A favourable choice of electric motor promises a further reduction in GWP. In addition to GWP, the scooter's life cycles were assessed across seven other impact categories and showed no critical environmental or health impacts compared to a passenger car. On the other hand, only the resource extraction impact revealed clear advantages for electric scooters compared to passenger cars.
Conclusions: Under certain conditions, scooters can still be an important element of the desired mobility transition. Ensuring a sufficiently long lifetime is the crucial factor in making the electric scooter a favourable or even competitive vehicle in a future sustainable mobility system. A scooter mileage of more than 5400 km is required to achieve lower CO2eq/pkm emissions than passenger cars, which seems unlikely in today's standard use case of shared scooter fleets. In contrast, widespread use of e-scooters as a commuting tool is modelled to save 4% of greenhouse gas (GHG) emissions across the German mobility sector.
Ephemeroptera, Plecoptera and Trichoptera are three orders of freshwater macroinvertebrates with a short terrestrial adult life-stage that they use to disperse by flying upstream. This aerial dispersal can be assisted by native riparian forest, but regional variation has not yet been empirically tested. In this study we compared the EPT community of 153 sampling sites located in freshwater streams in four European regions (Central Plains, Central Highlands, Alps, Iberia). In each site, we assessed the EPT community dispersal ability using the Species Flying Propensity index. We also calculated the native deciduous forest cover in the riparian buffer and several environmental stressors such as saprobic pollution or catchment anthropization. Finally, we tested which of these parameters have a significant effect on the EPT community. In the Central Highlands and in Iberia, the share of weak dispersers increased with native deciduous forest cover, indicating a positive effect on dispersal of EPTs. In the Central Plains and the Alps, no such effect was found. We conclude that the effect of native deciduous forest depends on regional landscape characteristics and the regional species pool, but considering the dispersal of the regional EPT communities is needed to create effective river management policies.
We present the concrete realization of a virtual laboratory equipped with a pedagogical agent. Its functionality and media didactics take into account the results of a usability test on a prototype system and the students' demand for such automated assistance, as obtained from a preliminary survey. The pedagogical agent mediates between the content and the learner by activating him or her. To provide information about the learner's skills, we propose a pragmatic and simplified competence model based on fundamental representations in physics (experiment, figure, text and equation). Moreover, automated feedback relates the student's self-assessment and submitted answer to the correctness of the respective task. In consequence, the pedagogical agent enables mental reflection for a critical review of one's own learning process. Interestingly, learning pathways can be envisioned, thus giving valuable insight into individual strengths and weaknesses.
One key to successful and fluent human-robot collaboration in disassembly processes is equipping the robot system with higher autonomy and intelligence. In this paper, we present an informed software agent that controls the robot behavior to form an intelligent robot assistant for disassembly purposes. Since the disassembly process depends first on the product structure, we inform the agent through product models using a generic approach. The product model is then transformed into a directed graph and used to build, share and define a coarse disassembly plan. To refine the workflow, we formulate the problem of loosening a connection and distributing the work as a search problem. The resulting detailed plan consists of a sequence of actions used to call, parametrize and execute robot programs for the fulfillment of the assistance. The aim of this research is to equip robot systems with the knowledge and skills to perform their assistance autonomously and ultimately improve the ergonomics of disassembly workstations.
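The step from a product model to a directed graph and a coarse disassembly plan can be sketched as follows; the part names and precedence constraints are hypothetical and only illustrate the idea, not the authors' actual product models:

```python
# Minimal sketch (hypothetical data): precedence constraints from a
# product model form a directed graph, and a coarse disassembly plan
# is one topological ordering of that graph.
from graphlib import TopologicalSorter

# Each entry reads: part -> set of parts that must be removed first.
precedence = {
    "battery": {"cover"},               # the cover blocks the battery
    "mainboard": {"cover", "battery"},  # mainboard sits under both
    "frame": {"mainboard"},
}

plan = list(TopologicalSorter(precedence).static_order())
print(plan)  # ['cover', 'battery', 'mainboard', 'frame']
```

A refinement step, as described above, would then expand each plan entry into concrete, parametrized robot actions found by search.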
Research in global change ecology relies heavily on global climatic grids derived from estimates of air temperature in open areas at around 2 m above the ground. These climatic grids do not reflect conditions below vegetation canopies and near the ground surface, where critical ecosystem functions occur and most terrestrial species reside. Here, we provide global maps of soil temperature and bioclimatic variables at a 1-km² resolution for 0–5 and 5–15 cm soil depth. These maps were created by calculating the difference (i.e. offset) between in situ soil temperature measurements, based on time series from over 1200 1-km² pixels (summarized from 8519 unique temperature sensors) across all the world's major terrestrial biomes, and coarse-grained air temperature estimates from ERA5-Land (an atmospheric reanalysis by the European Centre for Medium-Range Weather Forecasts). We show that mean annual soil temperature differs markedly from the corresponding gridded air temperature, by up to 10°C (mean = 3.0 ± 2.1°C), with substantial variation across biomes and seasons. Over the year, soils in cold and/or dry biomes are substantially warmer (+3.6 ± 2.3°C) than gridded air temperature, whereas soils in warm and humid environments are on average slightly cooler (−0.7 ± 2.3°C). The observed substantial and biome-specific offsets emphasize that the projected impacts of climate and climate change on near-surface biodiversity and ecosystem functioning are inaccurately assessed when air rather than soil temperature is used, especially in cold environments. The global soil-related bioclimatic variables provided here are an important step forward for any application in ecology and related disciplines. Nevertheless, we highlight the need to fill remaining geographic gaps by collecting more in situ measurements of microclimate conditions to further enhance the spatiotemporal resolution of global soil temperature products for ecological applications.
The current work investigates the capability of a tailored multivariate curve resolution–alternating least squares (MCR-ALS) algorithm to analyse glucose, phosphate, ammonium and acetate dynamics simultaneously in an E. coli BL21 fed-batch fermentation. The high-cell-density (HCDC) process is monitored by ex situ online attenuated total reflection (ATR) Fourier transform infrared (FTIR) spectroscopy and several in situ online process sensors. This approach efficiently utilises automatically generated process data to reduce the time- and cost-consuming reference measurement effort for multivariate calibration. To determine metabolite concentrations with accuracies between ±0.19 and ±0.96 g L⁻¹, the presented approach primarily needs, besides online sensor measurements, single FTIR measurements for each of the components of interest. The ambiguities in alternating least squares solutions for concentration estimation are reduced by the insertion of analytical process knowledge, primarily in the form of elementary carbon mass balances. In this way, the established idea of mass balance constraints in MCR is combined with the consistency check of measured data by carbon balances, as commonly applied in bioprocess engineering. The constraints are calculated from online process data and theoretical assumptions. This increased calculation effort largely replaces the need for manually conducted quantitative chemical analysis, leads to good estimations of concentration profiles and improves process understanding.
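The elementary carbon mass balance used as a constraint can be illustrated with a minimal consistency check; the species set and molar flows below are hypothetical, not process data from the study:

```python
# Illustrative sketch (hypothetical numbers) of an elementary carbon
# mass balance: carbon fed as substrate must be recovered in biomass,
# products and CO2 for the measured data to be consistent.

C_PER_MOL = {"glucose": 6, "acetate": 2, "biomass": 1, "co2": 1}  # mol C per mol

def carbon_closure(mol_flows: dict) -> float:
    """Fraction of fed carbon recovered in the outputs; a value near
    1.0 means the data satisfy the carbon balance."""
    c_in = mol_flows["glucose"] * C_PER_MOL["glucose"]
    c_out = sum(mol_flows[s] * C_PER_MOL[s] for s in ("biomass", "acetate", "co2"))
    return c_out / c_in

# Hypothetical molar flows (mol): 1 glucose in; biomass, acetate, CO2 out.
closure = carbon_closure({"glucose": 1.0, "biomass": 3.5, "acetate": 0.5, "co2": 1.5})
print(closure)  # 1.0 for a perfectly closed balance
```

In the MCR-ALS setting described above, such a balance acts as a constraint that narrows the space of admissible concentration profiles.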
Many borate crystals feature nonlinear optical properties that allow for efficient frequency conversion of common lasers down into the ultraviolet spectrum. Twinning may degrade crystal quality and affect nonlinear optical properties, in particular if crystals are composed of twin domains with opposing polarities. Here, we use measurements of optical activity to demonstrate the existence of inversion twins within single crystals of YAl3(BO3)4 (YAB) and K2Al2B2O7 (KABO). We determine the optical rotatory dispersion of YAB and KABO throughout the visible spectrum using a spectrophotometer with rotatable polarizers. Space-resolved measurements of the optical rotation can be related to the twin structure and give estimates on the extent of twinning. The reported dispersion relations for the rotatory power of YAB and KABO may be used to assess crystal quality and to select twin-free specimens.
In their paper, Ahmad et al. were the first to propose applying the sharp function to the classification of images. Continuing their work, in this paper we investigate the use of the sharp function as an edge detector through well-known diffusion models. Further, we discuss the formulation of the weak solution of the nonlinear diffusion equation and prove the uniqueness of the weak solution of the nonlinear problem. An anisotropic generalization of sharp-operator-based diffusion has also been implemented and tested on various types of images.
1. Recent reports on insect decline have highlighted the need for long-term data on insect communities towards identifying their trends and drivers.
2. With the launch of many new insect monitoring schemes to investigate insect communities over large spatial and temporal scales, Malaise traps have become one of the most important tools due to the broad spectrum of species collected and reduced capture bias through passive sampling of insects day and night. However, Malaise traps can vary in size, shape, and colour, and it is unknown how these differences affect biomass, species richness, and composition of trap catch, making it difficult to compare results between studies.
3. We compared five Malaise trap types (three variations of the Townes and two variations of the Bartak Malaise trap) to determine their effects on biomass and species richness as identified by metabarcoding.
4. Insect biomass varied by 20%–55%, not strictly following trap size but varying with trap type. Total species richness was 20%–38% higher in the three Townes trap models compared to the Bartak traps. Bartak traps captured a lower richness of highly mobile taxa but an increased richness of ground-dwelling taxa. The white-roofed Townes trap captured a higher richness of pollinators.
5. We find that biomass, total richness, and taxa-group-specific richness are all sensitive to Malaise trap type. Trap type should therefore be carefully considered and aligned with monitoring and research questions. Additionally, our estimates of trap-type effects can be used to adjust results to facilitate comparisons across studies.
Species distribution models (SDMs) are key tools in biodiversity and conservation, but assessing their reliability in unsampled locations is difficult, especially where there are sampling biases. We present a spatially-explicit sensitivity analysis for SDMs – SDM profiling – which assesses the leverage that unsampled locations have on the overall model by exploring the interaction between the effect on the variable response curves and the prevalence of the affected environmental conditions. The method adds a ‘pseudo-presence’ and ‘pseudo-absence’ to unsampled locations, re-running the SDM for each, and measuring the difference between the probability surfaces of the original and new SDMs. When the standardised difference values are plotted against each other (a ‘profile plot’), each point's location can be summarized by four leverage measures, calculated as the distances to each corner. We explore several applications: visualization of model certainty; identification of optimal new sampling locations and redundant existing locations; and flagging potentially erroneous occurrence records.
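The four leverage measures can be sketched as corner distances in the profile plot; this is an illustrative reading of the method, with the standardised difference values scaled to the unit square, and not the authors' exact implementation:

```python
# Sketch of the 'profile plot' leverage measures: each unsampled
# location yields a point (x, y), where x and y are the standardised
# differences between the original SDM surface and the surfaces after
# adding a pseudo-presence and a pseudo-absence, respectively. Its
# leverage is summarised as the distances to the four plot corners.
from math import hypot

def corner_distances(x: float, y: float) -> dict:
    """Distances of a profile-plot point to the four corners of the
    unit square spanned by the standardised differences (assumed scaled to [0, 1])."""
    return {
        "low_low": hypot(x - 0.0, y - 0.0),
        "low_high": hypot(x - 0.0, y - 1.0),
        "high_low": hypot(x - 1.0, y - 0.0),
        "high_high": hypot(x - 1.0, y - 1.0),
    }

# A point in the centre of the plot is equidistant from all corners.
d = corner_distances(0.5, 0.5)
print(d)
```

Points near a corner (small distance) are those whose pseudo-presence/absence perturbations have an extreme effect on the model, flagging high-leverage unsampled locations.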
The number of additive manufacturing methods and materials is growing rapidly, leaving gaps in the knowledge of specific material properties. A relatively recent addition is metal-filled filament printed similarly to the fused filament fabrication (FFF) technology used for plastic materials, but with additional debinding and sintering steps. While the tensile, bending, and shear properties of metals manufactured this way have been studied thoroughly, their fatigue properties remain unexplored. Thus, this paper aims to determine the tensile, fatigue, and impact strengths of Markforged 17-4 PH and BASF Ultrafuse 316L stainless steel to answer whether metal FFF can currently be used safely for structural parts. They are compared to two 316L variants manufactured via selective laser melting (SLM) and to literature results. For extrusion-based additive manufacturing methods, a significant decrease in tensile and fatigue strength is observed compared to specimens manufactured via SLM. Defects created during extrusion and by the pathing scheme, which cause a rough surface and internal voids that act as local stress risers, account for the strength decrease. The findings cast doubt on whether the metal FFF technique can be safely used for structural components; further developments are therefore needed to reduce internal material defects.
Background: In recent years, the volume of medical knowledge and health data has increased rapidly. For example, the increased availability of electronic health records (EHRs) provides accurate, up-to-date, and complete information about patients at the point of care and enables medical staff to access patient records quickly for more coordinated and efficient care. With this increase in knowledge, the complexity of accurate, evidence-based medicine continues to grow. Health care workers must deal with an increasing amount of data and documentation. Meanwhile, relevant patient data are frequently overshadowed by a layer of less relevant data, causing medical staff to miss important values or abnormal trends and their significance for the progression of the patient’s case.
Objective: The goal of this work is to analyze the current laboratory results for patients in the intensive care unit (ICU) and classify which of these lab values could be abnormal the next time the test is done. Detecting near-future abnormalities can support clinicians in their decision-making process in the ICU by drawing their attention to the important values and helping them focus future lab testing, saving both time and money. Additionally, it gives doctors more time to spend with patients, rather than skimming through a long list of lab values.
Methods: We used Structured Query Language to extract 25 lab values for mechanically ventilated patients in the ICU from the MIMIC-III and eICU data sets. Additionally, we applied time-windowed sampling and holding, and a support vector machine to fill in the missing values in the sparse time series, as well as the Tukey range to detect and delete anomalies. Then, we used the data to train 4 deep learning models for time series classification, as well as a gradient boosting–based algorithm and compared their performance on both data sets.
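Two of the preprocessing steps named above, sample-and-hold gap filling and Tukey-range outlier detection, can be sketched in a few lines of plain Python. The quartile estimation and all function names here are simplified assumptions, not the authors' actual pipeline:

```python
def tukey_fences(values, k=1.5):
    """Tukey's range rule: points outside [Q1 - k*IQR, Q3 + k*IQR]
    are treated as anomalies. Simple interpolated quartiles."""
    s = sorted(values)

    def quantile(q):
        pos = q * (len(s) - 1)
        lo, hi = int(pos), min(int(pos) + 1, len(s) - 1)
        return s[lo] + (pos - lo) * (s[hi] - s[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def sample_and_hold(series):
    """Fill gaps (None) in a sparse time series by carrying the last
    observed value forward."""
    filled, last = [], None
    for v in series:
        if v is not None:
            last = v
        filled.append(last)
    return filled
```

For example, `tukey_fences([1, 2, 3, 4, 100])` yields fences that exclude the value 100, and `sample_and_hold([1, None, None, 2, None])` returns `[1, 1, 1, 2, 2]`.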
Results: The models tested in this work (deep neural networks and gradient boosting), combined with the preprocessing pipeline, achieved an accuracy of at least 80% on the multilabel classification task. Moreover, the model based on the multiple convolutional neural network outperformed the other algorithms on both data sets, with the accuracy exceeding 89%.
Conclusions: In this work, we show that using machine learning and deep neural networks to predict near-future abnormalities in lab values can achieve satisfactory results. Our system was trained, validated, and tested on 2 well-known data sets to ensure that our system bridged the reality gap as much as possible. Finally, the model can be used in combination with our preprocessing pipeline on real-life EHRs to improve patients’ diagnosis and treatment.
Numerous research methods have been developed to detect anomalies in the areas of security and risk analysis. In healthcare, there are numerous use cases where anomaly detection is relevant, for example the early detection of sepsis. Early treatment of sepsis is cost effective and reduces the number of hospital days of patients in the ICU. No single procedure is sufficient for sepsis diagnosis; combinations of approaches are needed. Detecting anomalies in patient time series data could help speed up some clinical decisions. However, our algorithm must be viewed as complementary to other approaches based on laboratory values and physician judgment. The focus of this work is to develop a hybrid method for detecting anomalies that occur, for example, in multidimensional medical signals, sensor signals, or other time series in business and nature. The novelty of our approach lies in the extension and combination of existing approaches (statistics, self-organizing maps, and linear discriminant analysis) with the goal of identifying different types of anomalies in real-time measurement data and locating the point where each anomaly occurs. The proposed algorithm not only detects anomalies but also finds the actual point at which an anomaly starts.
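The hybrid method above combines statistics, self-organizing maps, and linear discriminant analysis. Its purely statistical ingredient, locating the index where an anomaly starts, might be sketched as a rolling z-score test; the window size, threshold, and function name are illustrative assumptions, not the paper's algorithm:

```python
import statistics

def anomaly_start(series, window=20, z_thresh=4.0):
    """Return the first index whose value deviates more than z_thresh
    standard deviations from the preceding window of observations,
    or None if no such point exists. A purely statistical stand-in
    for one stage of the hybrid detection method."""
    for i in range(window, len(series)):
        ref = series[i - window:i]
        mu = statistics.fmean(ref)
        sd = statistics.pstdev(ref)
        if sd > 0 and abs(series[i] - mu) / sd > z_thresh:
            return i
    return None
```

On a regular alternating signal followed by a sudden spike, the function returns the index of the spike; on a signal without such a deviation it returns `None`.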
A new comprehensive evaluation system presented here allows education for sustainable development (ESD) in degree programs to be compared and quantified. The evaluation is based on a criteria system with three hierarchical levels, the highest of which comprises a list of 35 indicator terms. First, the two most popular undergraduate (bachelor’s) degree programs in Germany, mechanical engineering (ME) and business administration (BA), were reviewed for ESD content based on the new evaluation scheme. Additionally, we reviewed and quantified ESD subjects and their temporal changes across the entire bandwidth of degree programs of one university (Umwelt-Campus Birkenfeld, University of Applied Sciences Trier), going back to 1999. Moreover, a spot check on international ME and BA bachelor’s degree programs was performed. Through our reviews, we found a high number of elective classes dedicated to ESD, particularly in BA bachelor programs. However, the percentage of compulsory classes related to ESD is relatively low (5-6 %) in both ME and BA programs. The spot check on degree programs outside Germany revealed similar results. The time-trend analysis at Umwelt-Campus Birkenfeld shows that a considerable share of the ESD content that was part of the original diploma degrees was moved to what are now master’s degrees.
Concerns over climate change, air pollution, and oil supply have stimulated the market for battery electric vehicles (BEVs). The environmental impacts of BEVs are typically evaluated through a standardized life-cycle assessment (LCA) methodology. Here, the LCA literature was surveyed with the objective of sketching the major trends and challenges in the impact assessment of BEVs. It was found that BEVs tend to be more energy efficient and less polluting than conventional cars. BEVs decrease exposure to air pollution, as their impacts largely result from vehicle production and electricity generation outside of urban areas. The carbon footprint of BEVs, being highly sensitive to the carbon intensity of the electricity mix, may decrease in the near future through a shift to renewable energies and technology improvements in general. A minority of LCAs covers impact categories other than the carbon footprint, revealing a mixed picture. To date, little attention has been paid in LCA to the efficiency advantage of BEVs in urban traffic, the gap between on-road and certified energy consumption, the local exposure to air pollutants and noise, and the aging of emission-control technologies in conventional cars. Improvements of BEV components, directed charging, second-life reuse of vehicle batteries, as well as vehicle-to-home and vehicle-to-grid applications will significantly reduce the environmental impacts of BEVs in the future.
Global change effects on biodiversity and human wellbeing call for improved long-term environmental data as a basis for science, policy and decision making, including increased interoperability, multifunctionality, and harmonization. Based on the example of two global initiatives, the International Long-Term Ecological Research (ILTER) network and the Group on Earth Observations Biodiversity Observation Network (GEO BON), we propose merging the frameworks behind these initiatives, namely ecosystem integrity and essential biodiversity variables, to serve as an improved guideline for future site-based long-term research and monitoring in terrestrial, freshwater and coastal ecosystems. We derive a list of specific recommendations of what and how to measure at a monitoring site and call for an integration of sites into co-located site networks across individual monitoring initiatives, and centered on ecosystems. This facilitates the generation of linked comprehensive ecosystem monitoring data, supports synergies in the use of costly infrastructures, fosters cross-initiative research and provides a template for collaboration beyond the ILTER and GEO BON communities.
Local biodiversity trends over time are likely to be decoupled from global trends, as local processes may compensate or counteract global change. We analyze 161 long-term biological time series (15–91 years) collected across Europe, using a comprehensive dataset comprising ~6,200 marine, freshwater and terrestrial taxa. We test whether (i) local long-term biodiversity trends are consistent among biogeoregions, realms and taxonomic groups, and (ii) changes in biodiversity correlate with regional climate and local conditions. Our results reveal that local trends of abundance, richness and diversity differ among biogeoregions, realms and taxonomic groups, demonstrating that biodiversity changes at local scale are often complex and cannot be easily generalized. However, we find increases in richness and abundance with increasing temperature and naturalness as well as a clear spatial pattern in changes in community composition (i.e. temporal taxonomic turnover) in most biogeoregions of Northern and Eastern Europe.
Intraspecific diet specialization, usually driven by resource availability, competition and predation, is common in natural populations. However, the role of parasites on diet specialization of their hosts has rarely been studied. Eye flukes can impair vision ability of their hosts and have been associated with alterations of fish feeding behavior. Here it was assessed whether European perch (Perca fluviatilis) alter their diet composition as a consequence of infection with eye flukes. Young-of-the-year (YOY) perch from temperate Lake Müggelsee (Berlin, Germany) were sampled in two years, eye flukes counted and fish diet was evaluated using both stomach content and stable isotope analyses. Perch diet was dominated by zooplankton and benthic macroinvertebrates. Both methods indicated that with increasing eye fluke infection intensity fish had a more selective diet, feeding mainly on the benthic macroinvertebrate Dikerogammarus villosus, while less intensively infected fish appeared to be generalist feeders showing no preference for any particular prey type. Our results show that infection with eye flukes can indirectly affect interaction of the host with lower trophic levels by altering the diet composition and highlight the underestimated role of parasites in food web studies.
The aim of this work was to develop and evaluate the reinforcement learning algorithm VentAI, which is able to suggest a dynamically optimized mechanical ventilation regime for critically ill patients. We built, validated and tested its performance on 11,943 events of volume-controlled mechanical ventilation derived from 61,532 distinct ICU admissions and tested it on an independent, secondary dataset (200,859 ICU stays; 25,086 mechanical ventilation events). A patient “data fingerprint” of 44 features was extracted as a multidimensional time series in 4-hour time steps. We used a Markov decision process, including a reward system and a Q-learning approach, to find the optimized settings for positive end-expiratory pressure (PEEP), fraction of inspired oxygen (FiO2) and ideal body weight-adjusted tidal volume (Vt). The observed outcome was in-hospital or 90-day mortality. VentAI reached a significantly increased estimated performance return of 83.3 (primary dataset) and 84.1 (secondary dataset) compared to physicians’ standard clinical care (51.1). The number of recommended action changes per mechanically ventilated patient consistently exceeded that of the clinicians. VentAI chose ventilation regimes with lower Vt (5–7.5 mL/kg) 202.9% more frequently, but regimes with higher Vt (7.5–10 mL/kg) 50.8% less frequently. VentAI recommended PEEP levels of 5–7 cm H2O 29.3% more frequently and PEEP levels of 7–9 cm H2O 53.6% more frequently. VentAI avoided high (>55%) FiO2 values (59.8% decrease), while preferring the range of 50–55% (140.3% increase). In conclusion, VentAI provides reproducible high performance by dynamically choosing an optimized, individualized ventilation strategy and thus might benefit critically ill patients.
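The core of the Q-learning approach described above is the Bellman backup. A minimal tabular sketch follows; the state/action encodings, the constants, and the hand-worked transitions are illustrative assumptions, not the VentAI implementation:

```python
ALPHA, GAMMA = 0.1, 0.99    # learning rate and discount factor (assumed values)
N_STATES, N_ACTIONS = 5, 3  # e.g. discretised patient states x (PEEP, FiO2, Vt) bins
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def q_update(s, a, reward, s_next):
    """One Q-learning backup:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = reward + GAMMA * max(Q[s_next])
    Q[s][a] += ALPHA * (target - Q[s][a])

# Two hand-worked updates: a rewarded transition, then a bootstrap from it.
q_update(0, 1, 1.0, 1)  # Q[0][1] becomes 0.1 * 1.0 = 0.1
q_update(1, 0, 0.0, 0)  # Q[1][0] becomes 0.1 * (0.99 * 0.1) = 0.0099
```

In the actual study, the reward would reflect the mortality-based outcome and the learned table (or its function approximation) would be queried for the highest-value ventilation setting per patient state.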
1. Among the many concerns for biodiversity in the Anthropocene, recent reports of flying insect loss are particularly alarming, given their importance as pollinators, pest control agents, and as a food source. Few insect monitoring programmes cover the large spatial scales required to provide more generalizable estimates of insect responses to global change drivers.
2. We ask how climate and surrounding habitat affect flying insect biomass using data from the first year of a new monitoring network at 84 locations across Germany comprising a spatial gradient of land cover types from protected to urban and crop areas.
3. Flying insect biomass increased linearly with temperature across Germany. However, the effect of temperature on flying insect biomass flipped to negative in the hot months of June and July when local temperatures most exceeded long-term averages.
4. Land cover explained little variation in insect biomass, but biomass was lowest in forests. Grasslands, pastures, and orchards harboured the highest insect biomass. The date of peak biomass was primarily driven by surrounding land cover, with grasslands especially having earlier insect biomass phenologies.
5. Standardised, large-scale monitoring provides key insights into the underlying processes of insect decline and is pivotal for the development of climate-adapted strategies to promote insect diversity. In a temperate climate region, we find that the positive effects of temperature on flying insect biomass diminish in a German summer at locations where temperatures most exceeded long-term averages. Our results highlight the importance of local adaptation in climate change-driven impacts on insect communities.
This paper presents a feasibility study on the production of recycled glycol-modified polyethylene terephthalate (PETG) material for additive manufacturing. Past studies showed a variety of results for the recycling of 3D-printing material, so the precise effect on the material properties is not completely clear. For this work, PETG waste of the same grade was recycled once and further processed into 3D-printing filament. The study compares three blend ratios between purchased plastic pellets and recycled pellets to determine the degradation effect of one recycling cycle and possible blend ratios to counter these effects. Furthermore, the results include a commercially available filament. The comparison uses the filament diameter, the dimensional accuracy of the printed test specimens and the mechanical properties as quality criteria. The study shows that the recycled material exhibits only a minor decrease in tensile strength and Young’s modulus.
Background: Tobacco smoking prevalence continues to be high, particularly among adolescents and young adults with lower educational levels, and is therefore a serious public health problem. Tobacco smoking and problem drinking often co-occur and relapses after successful smoking cessation are often associated with alcohol use. This study aims at testing the efficacy of an integrated smoking cessation and alcohol intervention by comparing it to a smoking cessation only intervention for young people, delivered via the Internet and mobile phone.
Methods/Design: A two-arm cluster-randomised controlled trial with one follow-up assessment after 6 months will be conducted. Participants in the integrated intervention group will: (1) receive individually tailored web-based feedback on their drinking behaviour based on age and gender norms, (2) receive individually tailored mobile phone text messages to promote drinking within low-risk limits over a 3-month period, (3) receive individually tailored mobile phone text messages to support smoking cessation for 3 months, and (4) be offered the option of registering for a more intensive program that provides strategies for smoking cessation centred around a self-defined quit date. Participants in the smoking cessation only intervention group will only receive components (3) and (4). Study participants will be 1350 students who smoke tobacco daily/occasionally, from vocational schools in Switzerland. Main outcome criteria are 7-day point prevalence smoking abstinence and cigarette consumption assessed at the 6-month follow up.
Discussion: This is the first study testing a fully automated intervention for smoking cessation that simultaneously addresses alcohol use and interrelations between tobacco and alcohol use. The integrated intervention can be easily implemented in various settings and could be used with large groups of young people in a cost-effective way.
Background: Problem drinking, particularly risky single-occasion drinking is widespread among adolescents and young adults in most Western countries. Mobile phone text messaging allows a proactive and cost-effective delivery of short messages at any time and place and allows the delivery of individualised information at times when young people typically drink alcohol. The main objective of the planned study is to test the efficacy of a combined web- and text messaging-based intervention to reduce problem drinking in young people with heterogeneous educational level.
Methods/Design: A two-arm cluster-randomised controlled trial with one follow-up assessment after 6 months will be conducted to test the efficacy of the intervention in comparison to assessment only. The fully-automated intervention program will provide an online feedback based on the social norms approach as well as individually tailored mobile phone text messages to stimulate (1) positive outcome expectations to drink within low-risk limits, (2) self-efficacy to resist alcohol and (3) planning processes to translate intentions to resist alcohol into action. Program participants will receive up to two weekly text messages over a time period of 3 months. Study participants will be 934 students from approximately 93 upper secondary and vocational schools in Switzerland. Main outcome criterion will be risky single-occasion drinking in the past 30 days preceding the follow-up assessment.
Discussion: This is the first study testing the efficacy of a combined web- and text messaging-based intervention to reduce problem drinking in young people. Given that this intervention approach proves to be effective, it could be easily implemented in various settings, and it could reach large numbers of young people in a cost-effective way.
Background: High numbers of consumable medical materials (eg, sterile needles and swabs) are used during the daily routine of intensive care units (ICUs) worldwide. Although medical consumables largely contribute to total ICU hospital expenditure, many hospitals do not track the individual use of materials. Current tracking solutions meeting the specific requirements of the medical environment, like barcodes or radio frequency identification, require specialized material preparation and high infrastructure investment. This impedes the accurate prediction of consumption, leads to high storage maintenance costs caused by large inventories, and hinders scientific work due to inaccurate documentation. Thus, new cost-effective and contactless methods for object detection are urgently needed.
Objective: The goal of this work was to develop and evaluate a contactless visual recognition system for tracking medical consumable materials in ICUs using a deep learning approach on a distributed client-server architecture.
Methods: We developed Consumabot, a novel client-server optical recognition system for medical consumables, based on the convolutional neural network model MobileNet implemented in Tensorflow. The software was designed to run on single-board computer platforms as a detection unit. The system was trained to recognize 20 different materials in the ICU, while 100 sample images of each consumable material were provided. We assessed the top-1 recognition rates in the context of different real-world ICU settings: materials presented to the system without visual obstruction, 50% covered materials, and scenarios of multiple items. We further performed an analysis of variance with repeated measures to quantify the effect of adverse real-world circumstances.
Results: Consumabot reached a >99% reliability of recognition after about 60 steps of training and 150 steps of validation. A desirable low cross entropy of <0.03 was reached for the training set after about 100 iteration steps and after 170 steps for the validation set. The system showed a high top-1 mean recognition accuracy in a real-world scenario of 0.85 (SD 0.11) for objects presented to the system without visual obstruction. Recognition accuracy was lower, but still acceptable, in scenarios where the objects were 50% covered (P<.001; mean recognition accuracy 0.71; SD 0.13) or multiple objects of the target group were present (P=.01; mean recognition accuracy 0.78; SD 0.11), compared to a nonobstructed view. The approach met the criteria of absence of explicit labeling (eg, barcodes, radio frequency labeling) while maintaining a high standard for quality and hygiene with minimal consumption of resources (eg, cost, time, training, and computational power).
Conclusions: Using a convolutional neural network architecture, Consumabot consistently achieved good results in the classification of consumables and thus offers a feasible way to recognize and register medical consumables directly in a hospital’s electronic health record. The system shows limitations when materials are partially covered and identifying characteristics of the consumables are therefore hidden from the system. Further development and assessment under different medical circumstances are needed.
Fuzzy system based on two-step cascade genetic optimization strategy for tobacco tar prediction
(2019)
There are many challenges in accurately measuring cigarette tar constituents, including the need for standardized smoke-generation methods for unstable mixtures. In this research, algorithms were developed using a fusion of artificial-intelligence methods to predict tar concentration. The outputs are three fuzzy structures optimized with genetic algorithms: GA-FUZZY, GA-ANFIS (genetic algorithm combined with an adaptive neuro-fuzzy inference system), and GA-GA-FUZZY. The proposed algorithms are used for tar prediction in the cigarette production process, and the prediction results are compared with chromatographic (HPLC) readings.
Since operational managers often monitor large numbers of wind turbines (WTs), they depend on a toolset that provides highly condensed information to identify and prioritize low-performing WTs or to schedule preventive maintenance measures. Power curves are a frequently used tool to assess the performance of WTs. The power curve health value (HV) used in this work is intended to detect power curve anomalies, since small deviations in the power curve are not easy to identify. It evaluates deviations in the linear region of power curves by performing a principal component analysis. To calculate the HV, the standard deviation in the direction of the second principal component of a reference data set is compared to that of a combined data set consisting of the reference data and the data of the evaluated period. This article examines the applicability of this HV for different purposes as well as its sensitivities, and provides a modified HV approach to make it more robust and suitable for heterogeneous data sets. The modified HV was tested on ENGIE's open-data wind farm and on data of on- and offshore WTs from the WInD-Pool. It detected anomalies in the linear region of the power curve in a reliable and sensitive manner and was also able to detect long-term power curve degradation. In addition, about 7 % of all corrective maintenance measures were preceded by high HVs, with a median alarm horizon of three days. Overall, the HV proved to be a promising tool for various applications.
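The described comparison of scatter along the second principal component can be sketched for 2-D (wind speed, power) points in plain Python; a closed-form 2x2 eigenvalue solution avoids any linear-algebra dependency. The HV is modelled here as a simple standard-deviation ratio, which is an assumption rather than the exact formula of the article:

```python
import math

def second_pc_std(points):
    """Standard deviation of 2-D points along the second principal
    component, i.e. the square root of the smaller eigenvalue of the
    2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # eigenvalues of [[sxx, sxy], [sxy, syy]] via trace/determinant
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam_min = tr / 2 - math.sqrt(max(tr * tr / 4 - det, 0.0))
    return math.sqrt(max(lam_min, 0.0))

def health_value(reference, evaluated):
    """Illustrative HV: scatter orthogonal to the main power-curve
    direction in the combined set relative to the reference set."""
    return second_pc_std(reference + evaluated) / second_pc_std(reference)
```

With this convention, an evaluated period that scatters like the reference yields an HV near 1, while anomalous deviations from the linear region push the HV above 1.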
This study introduced an automated long-term fermentation process for fungi grown in pellet form. The goal was to reduce the overgrowth of bioreactor internals and sensors while achieving better rheological properties in the fermentation broth, such as improved oxygen transfer and mixing time. Because this could not be accomplished with continuous culture and fed-batch fermentation, repeated-batch fermentation was implemented with the help of additional bioreactor internals (“sporulation supports”), which capture some of the biomass during fermentation. After harvesting the suspended biomass, intermediate cleaning was performed using a cleaning device. The biomass retained on the sporulation support went through the sporulation phase, and the spores were subsequently used as inocula for the next batch. The reason for this approach was that retained pellets could otherwise cause problems (e.g., overgrowth on sensors) in subsequent batches, because the fungus would then show undesirable hyphal growth. Various sporulation supports were tested for sufficient biomass fixation to start the next batch. A reproducible spore concentration within the required range could be achieved by adjusting the sporulation support (design and construction material) and adapting the intermediate cleaning accordingly.
This article presents experience curves and cost-benefit analyses for electric and plug-in hybrid cars sold in Germany. We find that between 2010 and 2016, the prices and price differentials relative to conventional cars declined at learning rates of 23 ± 2% and 32 ± 2% for electric cars and 6 ± 1% and 37 ± 2% for plug-in hybrids. If trends persist, price break-even with conventional cars may be reached after another 7 ± 1 million electric cars and 5 ± 1 million plug-in hybrids are produced. The user costs of electric and plug-in hybrid cars relative to their conventional counterparts are declining annually by 14% and 26%, respectively. The costs for mitigating CO2 and air-pollutant emissions through the deployment of electrified cars also tend to decline. However, at current levels, NOX and particle emissions are still mitigated at lower cost by state-of-the-art after-treatment systems than through the electrification of powertrains. Overall, the observation of robust technological learning suggests that policy makers should focus their support on non-cost market barriers to the electrification of road transport, specifically addressing the availability of recharging infrastructure.
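The learning rates above follow the standard power-law experience curve, under which the price drops by a fixed fraction with every doubling of cumulative production. A small sketch of that arithmetic (function names are assumed; the 23 % figure is the electric-car learning rate reported above):

```python
import math

def learning_rate(b):
    """Learning rate of the experience curve p = a * x**(-b):
    the fractional price drop per doubling of cumulative production."""
    return 1.0 - 2.0 ** (-b)

def price_after(p0, x0, x1, b):
    """Price once cumulative production grows from x0 to x1 units,
    starting from price p0 at x0."""
    return p0 * (x1 / x0) ** (-b)

# elasticity implied by a 23 % learning rate
b = -math.log2(1.0 - 0.23)
```

For example, starting at a price of 100 at one cumulative unit, a doubling of production brings the price to 77, and a quadrupling to 77 % of that again.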
Purpose: The well-to-wheel (WTW) methodology is widely used for policy support in road transport. It can be seen as a simplified life cycle assessment (LCA) that focuses on the energy consumption and CO2 emissions only for the fuel being consumed, ignoring other stages of a vehicle’s life cycle. WTW results are therefore different from LCA results. In order to close this gap, the authors propose a hybrid WTW+LCA methodology useful to assess the greenhouse gas (GHG) profiles of road vehicles.
Methods: The proposed hybrid WTW+LCA method keeps the main hypotheses of the WTW methodology but integrates them with LCA data restricted to the global warming potential (GWP) arising during the manufacturing of the battery pack. WTW data are used for the GHG intensity of the EU electricity mix, after a consistency check against the main life cycle inventory (LCI) sources available in the literature.
Results and discussion: A numerical example is provided, comparing GHG emissions due to the use of a battery electric vehicle (BEV) with emissions from an internal combustion engine vehicle. This comparison is done both according to the WTW approach (namely the JEC WTW version 4) and the proposed hybrid WTW+LCA method. The GHG savings due to the use of BEVs calculated with the WTW-4 range between 44 and 56 %, while according to the hybrid method the savings are lower (31–46 %). This difference is due to the GWP which arises as a result of the manufacturing of the battery pack for the electric vehicles.
Conclusions: The WTW methodology used in policy support to quantify energy content and GHG emissions of fuels and powertrains can produce results closer to the LCA methodology by adopting a hybrid WTW+LCA approach. While evaluating GHG savings due to the use of BEVs, it is important that this method considers the GWP due to the manufacturing of the battery pack.
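The difference between the two approaches comes down to amortising the battery-manufacturing GWP over the vehicle's lifetime mileage. A toy sketch of that arithmetic (all parameter names and numbers are illustrative, not the study's inventory data):

```python
def ghg_saving(icev_gCO2_km, bev_use_gCO2_km,
               battery_gwp_gCO2=0.0, lifetime_km=1.0):
    """Fractional GHG saving of a BEV relative to an internal combustion
    engine vehicle. With battery_gwp_gCO2 = 0 this mirrors a pure WTW
    comparison; adding the battery-manufacturing GWP amortised over the
    lifetime mileage gives a hybrid WTW+LCA figure."""
    bev_total = bev_use_gCO2_km + battery_gwp_gCO2 / lifetime_km
    return 1.0 - bev_total / icev_gCO2_km
```

With assumed values of 160 gCO2/km for the conventional car and 80 gCO2/km for the BEV's electricity use, the WTW-style saving is 50 %; adding an assumed 1.6 tCO2 battery GWP over 200,000 km lowers it to 45 %, illustrating how the hybrid method shrinks the reported savings.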
While the contribution of renewable energy technologies to the energy system is increasing, so is the system's complexity. In addition to new types of consumer systems, the future system will be characterized by volatile generation plants that require storage technologies. Furthermore, a solid interconnected grid that enables the transit of electrical energy can reduce the need for generation and storage systems. Appropriate methods are therefore needed to analyze the interactions between energy production and consumption within different system constellations. Energy system models can help to understand and build these future energy systems. However, although various energy models already exist, none of them can cover all issues related to integrating renewable energy systems. The existing research gap is also reflected in the fact that current models cannot represent the entire energy system for very high shares of renewable energies at high temporal resolution (15-min or 1-h steps) and high spatial resolution. Additionally, the low availability of open-source energy models leads to a lack of transparency about exactly how they work. To close this gap, the sector-coupled energy model (UCB-SEnMod) was developed. Its unique features are its modular structure, high flexibility, and broad applicability: it can model any system constellation and can easily be extended with new functions thanks to its software design. The software architecture makes it possible to model individual buildings or companies, whole regions, or even countries. In addition, we plan to release UCB-SEnMod as an open-source framework to enable users to understand its functionality and configuration options more easily. This paper presents the methodology of the UCB-SEnMod model. The main components of the model are described in detail, i.e., the energy generation systems, the consumption components in the electricity, heat, and transport sectors, and the possibilities for load balancing.
To optimize fed-batch fermentations of methylotrophic organisms, an online measurement method is presented that determines the methanol concentration in the medium during a fermentation via a sweep-gas pervaporation principle. In contrast to other analytical methods, this measurement method makes it possible, in processes with methanol as the central substrate, to keep the substrate concentration at a defined value by means of closed-loop control. Difficulties encountered in adapting the measurement method to fermentation processes, and how they were overcome, are described.
The implementation of single-use technologies offers several major advantages, e.g. the prevention of cross-contamination, especially when spore-forming microorganisms are present. This study investigated the application of a single-use bioreactor in batch fermentation of the filamentous fungus Penicillium sp. (IBWF 040-09) from the Institute of Biotechnology and Drug Research (IBWF), which is capable of intracellular production of a protease inhibitor against parasitic proteases as a secondary metabolite. Several modifications to the single-use bioreactor were proposed in this study to allow fermentations in which the fungus forms pellets. In parallel, fermentations in a conventional glass bioreactor were conducted as a reference. Although there are significant differences in the construction material and gassing system, the similarity of the two types of bioreactors in terms of fungal metabolic activity and the reproducibility of fermentations could be demonstrated using statistical methods. Under the selected cultivation conditions, growth rate, yield coefficient, substrate uptake rate, and the formation of the intracellular protease-inhibiting substance in the single-use bioreactor were similar to those in the glass bioreactor.
Purification of mRNA with oligo(dT)-functionalized magnetic particles involves a series of magnetic separations for buffer exchange and washing. Magnetic particles interact and agglomerate with each other when a magnetic field is applied, which can result in a decreased total surface area and thus a decreased yield of mRNA. In addition, agglomeration may also be caused by mRNA loading on the magnetic particles. Therefore, it is of interest how the individual steps of magnetic separation and subsequent redispersion in the buffers used affect the particle size distribution. The lysis/binding buffer is the most important buffer for the separation of mRNA from the multicomponent suspension of cell lysate. Therefore, monodisperse magnetic particles loaded with mRNA were dispersed in the lysis/binding buffer and in the reference system deionized water, and the particle size distributions were measured. A concentration-dependent agglomeration tendency was observed in deionized water. In contrast, no significant agglomeration was detected in the lysis/binding buffer. With regard to magnetic particle recycling, the influence of different storage and drying processes on particle size distribution was investigated. Agglomeration occurred in all process alternatives. For de-agglomeration, ultrasonic treatment was examined. It represents a suitable method for reproducible restoration of the original particle size distribution.
Laboratory protocols using magnetic beads have gained importance in the purification of mRNA for vaccines. Here, the produced mRNA hybridizes specifically to oligo(dT)-functionalized magnetic beads after cell lysis. The mRNA-loaded magnetic beads can be selectively separated using a magnet. Subsequently, impurities are removed by washing steps and the mRNA is eluted. Magnetic separation is utilized in each step, using different buffers such as the lysis/binding buffer. To reduce the time required for purification of larger amounts of mRNA vaccine for clinical trials, high-gradient magnetic separation (HGMS) is suitable. Here, magnetic beads are selectively retained in a flow-through separation chamber. To meet the requirements of biopharmaceutical production, a disposable HGMS separation chamber made of a certified material (United States Pharmacopeia Class VI) was developed, which can be manufactured using 3D printing. Due to the special design, the filter matrix itself is not in contact with the product. The separation chamber was tested with suspensions of oligo(dT)-functionalized Dynabeads MyOne loaded with synthetic mRNA. At a concentration of cB = 1.6–2.1 g·L–1 in lysis/binding buffer, these 1 μm magnetic particles are retained to more than 99.39% at volumetric flow rates of up to 150 mL·min–1 with the developed SU-HGMS separation chamber. At volumetric flow rates below 50 mL·min–1, the retained particle mass even exceeds 99.99%.
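The retention figures quoted above are simply the fraction of particle mass held back in the chamber, computed from inlet and outlet concentrations. A sketch with illustrative numbers (not measured data from the study):

```python
# Retention of a flow-through magnetic separation chamber: the fraction of
# particle mass held back, from inlet and outlet concentrations.
# The example concentrations are illustrative, not measured values.
def retention_percent(c_in, c_out):
    """Mass retention in percent from inlet/outlet particle concentrations (g/L)."""
    return 100.0 * (1.0 - c_out / c_in)

# e.g. 1.6 g/L entering, 0.008 g/L passing through
r = retention_percent(c_in=1.6, c_out=0.008)
```

A retention of 99.39% thus corresponds to only 0.61% of the bead mass escaping with the flow-through at the highest tested flow rate.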
Productive biofilms are gaining growing interest in research due to their potential to produce valuable compounds and bioactive substances such as antibiotics. This is supported by recent developments in biofilm photobioreactors that have established the controlled phototrophic cultivation of algae and cyanobacteria. Cultivation of biofilms can be challenging due to the need for surfaces for biofilm adhesion. The total production of biomass, and thus the production of e.g. bioactive substances, within the bioreactor volume depends strongly on the available cultivation surface. To enlarge the surface area of biofilm photobioreactors, biocarriers can be implemented in the cultivation. Here, the material properties and design of the biocarriers are important for initial biofilm formation and growth of cyanobacteria. In this study, special biocarriers were designed and additively manufactured to investigate different polymeric materials and surface designs with regard to biofilm adhesion of the terrestrial cyanobacterium Nostoc flagelliforme (CCAP 1453/33). Properties of the 3D-printed materials were characterized by determination of wettability, surface roughness, and density. To evaluate the influence of wettability on biofilm formation, material properties were specifically modified by gas-phase fluorination, and biofilm formation was analyzed on biocarriers with basic and optimized geometry in shaking-flask cultivation. We found that the different polymeric materials showed no significant differences in wettability and, with identical surface design, no significant effect on biomass adhesion. However, fluorinated materials as well as the optimized biocarrier design showed improved wettability and increased biomass adhesion per biocarrier surface.
Hydrochar derived from Argan nut shell (ANS) was synthesized and applied to remove bisphenol A (BPA) and diuron. The results indicated that the hydrochar prepared at 200 °C (HTC@ANS-200) possessed a higher specific surface area (42 m2/g) than the hydrochar prepared at 180 °C (HTC@ANS-180, 17 m2/g). The hydrochars exhibited spherical particles rich in functional groups. HTC@ANS-200 showed high adsorption efficiency, removing about 92% of BPA and 95% of diuron. The maximum Langmuir adsorption capacities of HTC@ANS-200 at room temperature were 1162.79 mg/g for bisphenol A and 833.33 mg/g for diuron, higher than most reported adsorbents. The adsorption process was spontaneous (− ΔG°) and exothermic (− ΔH°). Excellent reusability was retained after five cycles; the removal efficiency decreased only slightly, by 4% for BPA and 1% for diuron. Fourier-transform infrared spectrometry demonstrated that the aromatic C=C and OH groups played major roles in the adsorption mechanisms of BPA and diuron in this study. The high adsorption capacity was attributed to the beneficial porosity (the pores of HTC@ANS-200 are larger than the BPA and diuron molecules) and surface functional groups. BPA and diuron adsorption also occurred via multiple adsorption mechanisms, including pore filling, π–π interactions, and hydrogen bonding interactions on HTC@ANS-200.
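The maximum capacities quoted above are fit parameters of the Langmuir isotherm. A minimal sketch of the isotherm itself, where q_max is taken from the abstract but the affinity constant K_L and the equilibrium concentration are assumed illustration values:

```python
# Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e).
# q_max for BPA is from the abstract; K_L and c_e below are assumed,
# not fitted values from the study.
def langmuir_qe(c_e, q_max, k_l):
    """Equilibrium loading q_e (mg/g) at equilibrium concentration c_e (mg/L)."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

# BPA example: q_max = 1162.79 mg/g (reported), K_L = 0.05 L/mg (assumed)
q = langmuir_qe(c_e=50.0, q_max=1162.79, k_l=0.05)
```

The loading approaches q_max asymptotically as c_e grows, which is why the reported capacities are maxima rather than values reached at every concentration.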
Modeling and executing knowledge-intensive processes (KiPs) are challenging with state-of-the-art approaches, and the specific demands of KiPs are the subject of ongoing research. In this context, little attention has been paid to the ontology-driven combination of data-centric and semantic business process modeling, which finds additional motivation in enabling the division of labor between humans and artificial intelligence. Such approaches have characteristics that could allow support for KiPs based on the inferencing capabilities of reasoners. We confirm this by showing that reasoners can infer the executability of tasks based on a currently researched ontology- and data-driven business process model (ODD-BP model). Further support for KiPs by the proposed inference mechanism results from its ability to infer the relevance of tasks, depending on the extent to which their execution would contribute to process progress. Besides these contributions along the execution perspective (start-to-end direction), we also show how our approach can help to reach specific process goals by inferring the relevance of process elements regarding their support in achieving such goals (end-to-start direction). The elements with the most valuable process progress can be identified in the intersection of both the execution and the goal perspective. This paper introduces this new approach and verifies its practicability through an evaluation of a KiP in the field of emergency call centers.
A comprehensive overview is provided evaluating direct real-world CO2 emissions of both diesel and petrol cars newly registered in Europe between 1995 and 2015. Before 2011, European diesel cars emitted less CO2 per kilometre than petrol cars, but since then there has been no appreciable difference in per-km CO2 emissions between diesel and petrol cars. Real-world CO2 emissions of diesel cars have not declined appreciably since 2001, while the CO2 emissions of petrol cars have been stagnant since 2012. When adding black carbon related CO2-equivalents, such as from diesel cars without particulate filters, diesel cars turn out to have had much higher climate-relevant emissions than petrol cars until the year 2001. From 2001 to 2015, CO2-equivalent emissions from new diesel cars and petrol cars were hardly distinguishable. Lifetime use-phase CO2-equivalent emissions of all European passenger vehicles were modelled for 1995–2015 based on three scenarios: the historic case; a scenario freezing the percentage of diesel cars at the low levels of the early 1990s (thus avoiding the observed “boom” in new diesel registrations); and an advanced mitigation scenario based on high proportions of petrol hybrid cars and cars burning gaseous fuels. The difference in CO2-equivalent emissions between the historic case and the scenario avoiding the diesel car boom is only 0.4%. The advanced mitigation scenario would have achieved a 3.4% reduction in total CO2-equivalent emissions over the same time frame. The European diesel car boom appears to have been ineffective at reducing climate-warming emissions from the European transport sector.
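The CO2-equivalent accounting described above can be sketched as converting black carbon emissions with a global warming potential (GWP) and adding them to tailpipe CO2. The GWP value and the per-km emission numbers below are illustrative assumptions, not figures from the study:

```python
# Sketch of CO2-equivalent accounting including black carbon (BC):
# CO2eq = CO2 + BC * GWP_BC. The GWP and emission numbers are
# illustrative assumptions, not values from the study.
def co2_equivalent(co2_g_per_km, bc_g_per_km, gwp_bc=900.0):
    """Per-km CO2-equivalent (g/km); gwp_bc is an assumed 100-year GWP for BC."""
    return co2_g_per_km + bc_g_per_km * gwp_bc

# older diesel without particulate filter vs. petrol (illustrative numbers)
diesel = co2_equivalent(co2_g_per_km=160.0, bc_g_per_km=0.03)
petrol = co2_equivalent(co2_g_per_km=175.0, bc_g_per_km=0.001)
```

With these toy numbers the unfiltered diesel ends up above the petrol car in CO2-equivalents despite lower tailpipe CO2, which mirrors the pre-2001 pattern the study reports.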