Organic semiconductor distributed feedback laser fabricated by direct laser interference ablation
(2007)
We use a pulsed, frequency-tripled picosecond Nd:YAG laser for holographic ablation to pattern a surface relief grating into an organic semiconductor guest-host system. The resulting second-order distributed feedback lasers exhibit laser action with thresholds comparable to those obtained with resonators structured by standard lithographic techniques. The details of the interference ablation of tris-(8-hydroxyquinoline) aluminum (Alq3) doped with the laser dye 4-dicyanomethylene-2-methyl-6-(p-dimethylaminostyryl)-4H-pyran (DCM) are presented and discussed. Lasing action is demonstrated at a wavelength of 646.6 nm, exploiting second-order Bragg reflection in a relief grating with a period of 399 nm.
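The reported values can be cross-checked against the Bragg condition for a distributed feedback resonator, m·λ = 2·n_eff·Λ. A minimal sketch (the function name and the derived index are illustrative, not taken from the paper):

```python
# Second-order Bragg condition for a DFB laser: m * lam = 2 * n_eff * period.
# For m = 2 this reduces to lam = n_eff * period, so the effective refractive
# index of the guided mode can be estimated from the reported values.

def effective_index(lam_nm: float, period_nm: float, order: int = 2) -> float:
    """Effective mode index implied by the Bragg condition."""
    return order * lam_nm / (2.0 * period_nm)

n_eff = effective_index(646.6, 399.0)  # wavelength and period from the abstract
print(f"estimated n_eff = {n_eff:.3f}")
```

With λ = 646.6 nm and Λ = 399 nm this gives n_eff ≈ 1.62, a plausible effective index for an Alq3:DCM waveguide film.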
This paper describes the project "Visual Knowledge Communication", a recently started joint project. The partners are psychologists and computer scientists from four universities of the German state of Rhineland-Palatinate. The starting point for the project was the fact that visualizations have attracted considerable interest in both psychology and computer science in recent years. However, psychologists and computer scientists have so far pursued their investigations independently of each other. The main goal of this project is to support and foster cooperation between psychologists and computer scientists in several visualization research projects.
The paper sketches the overall project. It then discusses in more detail the authors' subproject, which deals with a peer review process for animations developed by students. The basic ideas, the main goals, and the project plan are described.
This paper is a work-in-progress report. Therefore, it does not contain any results.
The photo-Dember effect is a source of impulsive THz emission following femtosecond pulsed optical excitation. This emission results from the ultrafast spatial separation of electron-hole pairs in strong carrier gradients due to their different diffusion coefficients. The associated time-dependent polarization is oriented perpendicular to the excited surface, which is unfavorable for efficient outcoupling of THz radiation. We propose a scheme for generating strong carrier gradients parallel to the excited surface. The resulting photo-Dember currents are oriented in the same direction and emit THz radiation in the favorable direction perpendicular to the surface. This effect is demonstrated for GaAs and In0.53Ga0.47As. Surprisingly, the photo-Dember THz emitters provide higher bandwidth than photoconductive emitters. Multiplexing of phase-coherent photo-Dember currents by periodically tailoring the photoexcited spatial carrier distribution gives rise to strongly enhanced THz emission, reaching electric field amplitudes comparable to those of a high-efficiency, externally biased photoconductive emitter.
Background: Electric vehicles have been identified as a key technology for reducing future emissions and energy consumption in the mobility sector. The focus of this article is to review and assess the energy efficiency and the environmental impact of battery electric cars (BEV), currently the only technical alternative on the market to vehicles with internal combustion engines (ICEV). Electricity on board a car can be provided either by a battery or by a fuel cell (FCV). The technical structure of BEVs is described, clarifying that it is relatively simple compared to that of ICEVs. Consequently, ICEVs can be 'e-converted' by experienced personnel. Such an e-conversion project generated the close-to-reality data reported here.
Results: The practicability of today's BEVs is discussed, revealing that small-size BEVs in particular are useful. This article reports on an e-conversion of a used Smart. Measurements on this car before and after conversion confirmed a fourfold energy efficiency advantage of the BEV over the ICEV, as suggested in the literature. Preliminary energy efficiency data for FCVs are reviewed and found to be only slightly lower than those for BEVs. However, well-to-wheel efficiency suffers from a 47% to 63% energy loss during hydrogen production. With respect to energy efficiency, BEVs are found to represent the only alternative to ICEVs. This, however, holds only if the electricity is provided by very efficient power plants or, better, by renewable energy production. Literature data on energy consumption and greenhouse gas (GHG) emissions comparing ICEVs with BEVs have so far suffered from standardized driving-cycle figures that underestimate ICEV street consumption by about 25%. Literature data available for BEVs, on the other hand, were mostly modeled and based on relatively heavy BEVs as well as on driving conditions that do not represent the most useful field of BEV operation. Literature data have been compared with measurements on the converted Smart, revealing a distinct GHG emissions advantage under the conditions of the German electricity grid, which can be considerably extended by charging with electricity from renewable sources. The life cycle carbon footprint of BEVs is reviewed on the basis of literature data, with emphasis on lithium-ion batteries. Battery life cycle assessment (LCA) data available in the literature so far vary significantly, by a factor of up to 5.6, depending on the LCA methodology but also on the battery chemistry. The carbon footprint over 100,000 km calculated for the converted 10-year-old Smart exhibits a possible reduction of over 80% in comparison to the Smart with internal combustion engine.
Conclusion: The findings of this article confirm that the electric car can serve as a suitable instrument towards a much more sustainable future in mobility. This is particularly true for small-size BEVs, which are so far underrepresented in the LCA literature. While the CO2 LCA of BEVs seems relatively well understood apart from the battery, the life cycle impact of BEVs in categories other than global warming potential reveals a complex and still incomplete picture. Since the technology of the electric car is of limited complexity, with the exception of the battery, used cars can also be converted from combustion to electric drive. This way, it seems possible to reduce CO2-equivalent emissions by 80% (a factor 5 efficiency improvement).
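The headline figures quoted above are related by elementary arithmetic; a small sketch (purely illustrative, using no data beyond the percentages quoted in the abstract):

```python
# A factor-k efficiency improvement corresponds to a 1 - 1/k reduction in
# per-km energy use or emissions, which links the "factor 5" and "80 %"
# statements in the conclusion.

def reduction_from_factor(factor: float) -> float:
    """Relative reduction implied by a given efficiency improvement factor."""
    return 1.0 - 1.0 / factor

print(f"factor 5 -> {reduction_from_factor(5):.0%} reduction")

# Well-to-wheel view of hydrogen: if 47-63 % of the energy is lost during
# hydrogen production, only 37-53 % of the primary energy reaches the stack.
for loss in (0.47, 0.63):
    print(f"H2 production loss {loss:.0%} -> {1 - loss:.0%} delivered")
```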
Water is crucial for socio-economic development and healthy ecosystems. With current population growth and in view of future water scarcity, development calls for improved sectoral allocation of groundwater and surface water for domestic, agricultural and industrial use. Instead of intensifying the pressure on water resources, leading to conflicts among users and excessive pressure on the environment, sewage effluents, after pre-treatment, provide an alternative nutrient-rich water source for agriculture in the vicinity of cities. Water scarcity often occurs in arid and semiarid regions affected by droughts and large climate variability, where the choice of crop to be grown is limited by environmental factors. Jatropha has been introduced as a potential renewable energy resource, since it is claimed to be drought resistant and can be grown on marginal sites. Sewage effluents provide a source of water and nutrients for cultivating jatropha in a combined plant production/effluent treatment system. Nevertheless, the use of sewage effluents for irrigation in arid climates carries the risk of salinization. Thus, irrigation with sewage effluents needs to consider both the water requirement of the crop and the amount needed to control salinity build-up in the topsoil. Using data from a case study in southern Morocco, irrigation requirements were calculated using CROPWAT 8.0. We present crop evapotranspiration during the growing period, the required irrigation, the resulting nutrient input and the related risk of salinization from irrigating jatropha with sewage effluent.
Innovative multi-stage biogas plant and novel analytical system: First project experiences
(2012)
The applied research and development project presented here targets the development and application of new and improved techniques in plant design, performance analysis and process control. In the following, the required steps are illustrated and the goals are outlined. The project covers the development of a previously patented anaerobic digestion process, the adaptation of flow cytometry as an analytical instrument, and the investigation of innovative ways of disposing of solid fermentation wastes. The preliminary experiences with a newly built research plant employing a novel anaerobic biogas digestion technique are discussed. In this paper, the first outcomes concerning construction and operation are presented. A novel method of disposing of the fermentation wastes is also discussed and first results are shown.
Many SMEs still face the problem that their corporate structures and processes are not designed for efficient development and market positioning, and that appropriate methods and tools are lacking. SMEs are often inefficiently oriented towards internal or external demands for services. This article answers the following key questions: 1) Which studies are available on strategic planning in young SMEs? 2) Which aspects should be considered in the implementation and control of these instruments?
Most of the land reforms of recent decades have followed an approach of "formalization and capitalization" of individual land titles (de Soto 2000). However, within the privatization agenda, benefits of unimproved land (such as land rents and value capture) are reaped privately by well-organized actors, whereas the costs of valorization (e.g., infrastructure) or the opportunity costs of land use changes are shifted onto poorly organized groups. Consequences of capitalization and formalization include rent seeking and land grabbing. In developing countries, formal law often turns out to work in favor of the winners of the titling process and is opposed by the customary rights of the losers. This causes a lack of general acknowledgement of formalized law (which is held responsible for depriving vulnerable groups of their livelihoods) and often leads to a clash of formal and customary norms. Countries may fall into a state of de facto anarchy and "de facto open access". Encroachment on and destruction of natural resources may spread. A reframing of development policy is necessary in order to fight these aberrations. Examples and evidence are provided from Cambodia, which in this respect has many features in common with other countries in Asia and Sub-Saharan Africa.
Background: On the way to a more sustainable society, transport urgently needs to be optimized with regard to energy consumption and pollution control. While in earlier decades Europe followed automobile technology leaps initiated in the USA, it has decoupled itself over the last 20 years by focusing research capacity on the diesel powertrain. The resulting technology shift has led to some 45 million extra diesel cars in Europe. Its outcome in terms of health and environmental effects is investigated below.
Results: The greenhouse gas savings expected from the shift to diesel cars have been overestimated. Only about one tenth of the overall energy efficiency improvements of passenger cars can be attributed to it. These minor savings are, on the other hand, overcompensated by a significant increase in supply chain CO2 emissions and by extensive black carbon emissions of diesel cars without particulate filters. We conclude that the European diesel car boom did not cool down the atmosphere. Moreover, toxic NOx emissions of diesel cars have been underestimated up to 20-fold in officially announced data. The voluntary agreement signed in 1998 between the European automobile industry and the European Commission, which envisaged reducing CO2 emissions, has been identified as a key driver of the ensuing European diesel car boom. Four factors have been quantified in order to explain the very different dieselization rates across Europe: the influence of the national car/supplier industry, ecological modernization, fuel tourism and corporatist political governance. Comparing the European diesel strategy with the Japanese petrol-hybrid avenue makes clear that a different road would have reduced both CO2 emissions and pollutants more effectively.
Conclusion: Europe's car fleets have been persistently transformed from petrol-driven to diesel-driven over the last 20 years. This paper investigates how this came to be and why Europe took a distinct route compared to other parts of the world. It also attempts to evaluate the outcome of the stated goal of this transformation, which was primarily a robust reduction in GHG emissions. We conclude that global warming has been negatively affected, and air pollution has become alarming in many European locations. More progressive development scenarios could have prevented these outcomes.
Many SMEs still face the problem that their corporate structures and processes are not designed for efficient development and market positioning, and that appropriate methods and tools are lacking. SMEs are often inefficiently oriented towards internal or external demands for services. The goal of the research, in terms of content and methodology, was on the one hand to investigate the practice of strategic planning and the implementation and application of service engineering in young SMEs, and on the other hand to tailor these instruments specifically to young SMEs, whose performance and probability of success can be increased by their application. Both goals have been achieved.
This paper analyzed the characteristics of the tourism destination ecosystem of Dunhuang City from the perspective of entropy. Against this background, an evaluation index system that considers the potential for sustainable development was formed, based on dissipative structure theory and entropy change in the tourism destination ecosystem. A sustainable development potential evaluation model for the tourism destination ecosystem was then built on the basis of information entropy. We analyzed the impact of each indicator on the sustainable development potential and proposed measures for the tourism destination ecosystem. The conclusions include: (a) the demands of the Dunhuang tourism destination ecosystem on the natural ecosystem grew continuously between 2000 and 2012; (b) the sustainable development potential of the Dunhuang tourism destination ecosystem followed an oscillating upward trend during the study period, which depended on government attention, while pollution problems improved.
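The abstract does not spell out the model formulation; a common way to build an information-entropy-based evaluation index is the entropy weight method, sketched below (the indicator data are hypothetical, and this is an assumed standard formulation rather than the paper's exact model):

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: rows = years, columns = indicators (positive).
    Indicators whose values vary more across years carry more information
    and therefore receive larger weights."""
    m = len(matrix)                 # number of observations (years)
    n = len(matrix[0])              # number of indicators
    k = 1.0 / math.log(m)
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)  # entropy in [0, 1]
        raw.append(1.0 - e)                                    # information divergence
    s = sum(raw)
    return [w / s for w in raw]

# Hypothetical data for three years and two indicators:
w = entropy_weights([[1.0, 2.0], [1.0, 4.0], [1.0, 9.0]])
print(w)  # the constant first indicator gets weight ~0, the varying one ~1
```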
Integrated analysis of socio-economic metabolism can provide a basis for understanding and optimizing regional sustainability. This paper conducts a socio-economic metabolism analysis using the emergy accounting method, coupled with data envelopment analysis and decomposition analysis, to assess the sustainability of Qingyang city and its eight sub-regions, and to identify the major driving factors of performance change during 2000–2007, as a basis for future policy scenarios. The results indicate that Qingyang depended heavily on non-renewable emergy flows and feedback (purchased) emergy flows, except for the two sub-regions Huanxian and Huachi, which depended mainly on renewable emergy flows. Zhenyuan, Huanxian and Qingcheng were identified as relatively emergy-efficient, while the other five sub-regions have the potential to reduce natural resource inputs and waste output in order to become efficient. The decomposition analysis shows that economic growth, together with an increased emergy yield ratio and population growth not accompanied by a sufficient increase in resource utilization efficiency, is the main driver of the unsustainable economic model in Qingyang, and calls for policies to promote resource utilization efficiency and to optimize natural resource use.
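The emergy indicators mentioned here follow standard textbook definitions; a minimal sketch (the flow values are hypothetical, and these are assumed conventional formulas, not necessarily the paper's exact accounting):

```python
def emergy_indices(renewable, nonrenewable, purchased):
    """Standard emergy ratios: EYR = total emergy / purchased emergy,
    ELR = (non-renewable + purchased) / renewable, ESI = EYR / ELR.
    Higher ESI suggests a more sustainable regional metabolism."""
    total = renewable + nonrenewable + purchased
    eyr = total / purchased                       # emergy yield ratio
    elr = (nonrenewable + purchased) / renewable  # environmental loading ratio
    esi = eyr / elr                               # emergy sustainability index
    return eyr, elr, esi

# Hypothetical flows (solar emjoules) for a region dominated by
# non-renewable inputs, as described for Qingyang:
eyr, elr, esi = emergy_indices(renewable=2.0, nonrenewable=6.0, purchased=2.0)
print(f"EYR={eyr:.1f}  ELR={elr:.1f}  ESI={esi:.2f}")
```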
Background: Problem drinking, particularly risky single-occasion drinking, is widespread among adolescents and young adults in most Western countries. Mobile phone text messaging allows proactive and cost-effective delivery of short messages at any time and place, and thus the delivery of individualized information at the times when young people typically drink alcohol. The main objective of the planned study is to test the efficacy of a combined web- and text-messaging-based intervention to reduce problem drinking in young people with heterogeneous educational levels.
Methods/Design: A two-arm cluster-randomised controlled trial with one follow-up assessment after 6 months will be conducted to test the efficacy of the intervention in comparison to assessment only. The fully automated intervention program will provide online feedback based on the social norms approach as well as individually tailored mobile phone text messages to stimulate (1) positive outcome expectations to drink within low-risk limits, (2) self-efficacy to resist alcohol and (3) planning processes to translate intentions to resist alcohol into action. Program participants will receive up to two text messages per week over a period of 3 months. Study participants will be 934 students from approximately 93 upper secondary and vocational schools in Switzerland. The main outcome criterion will be risky single-occasion drinking in the 30 days preceding the follow-up assessment.
Discussion: This is the first study to test the efficacy of a combined web- and text-messaging-based intervention to reduce problem drinking in young people. Should this intervention approach prove effective, it could easily be implemented in various settings and could reach large numbers of young people in a cost-effective way.
Background: Tobacco smoking prevalence continues to be high, particularly among adolescents and young adults with lower educational levels, and is therefore a serious public health problem. Tobacco smoking and problem drinking often co-occur and relapses after successful smoking cessation are often associated with alcohol use. This study aims at testing the efficacy of an integrated smoking cessation and alcohol intervention by comparing it to a smoking cessation only intervention for young people, delivered via the Internet and mobile phone.
Methods/Design: A two-arm cluster-randomised controlled trial with one follow-up assessment after 6 months will be conducted. Participants in the integrated intervention group will: (1) receive individually tailored web-based feedback on their drinking behaviour based on age and gender norms, (2) receive individually tailored mobile phone text messages to promote drinking within low-risk limits over a 3-month period, (3) receive individually tailored mobile phone text messages to support smoking cessation for 3 months, and (4) be offered the option of registering for a more intensive program that provides strategies for smoking cessation centred on a self-defined quit date. Participants in the smoking cessation only intervention group will receive only components (3) and (4). Study participants will be 1350 students who smoke tobacco daily or occasionally, from vocational schools in Switzerland. The main outcome criteria are 7-day point prevalence smoking abstinence and cigarette consumption assessed at the 6-month follow-up.
Discussion: This is the first study testing a fully automated intervention for smoking cessation that simultaneously addresses alcohol use and interrelations between tobacco and alcohol use. The integrated intervention can be easily implemented in various settings and could be used with large groups of young people in a cost-effective way.
The services sector, also called the "tertiary sector", has become increasingly important in recent decades. This structural change is characterized by a significant increase in employment in the services sector, while the former economic importance of traditional areas such as agriculture and forestry, as well as manufacturing, is declining. This article reviews research on the service sector from the 1970s up to the present. Its goal is to demonstrate the necessity of service engineering research.
Issues of climate change have been recognized as serious challenges for sustainable development at both the global and the local level. Given that most anthropogenic carbon emissions result from the energy consumption sector and that energy is also a key resource for economic development, this paper investigates the relationship between CO2 emissions, fossil energy consumption, and economic growth in nine European countries over the period 1970–2008, based on the Granger causality test, followed by a risk analysis of the impact of CO2 reduction on local economic growth, classified by the degree of causality. The results show various feedback causal relationships between carbon emissions, energy consumption and economic growth, with both unidirectional and bidirectional Granger causality. The impact of reducing CO2 emissions on economic growth also varies between countries.
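The core of a bivariate Granger test is an F-test comparing an autoregression of one series with and without lagged values of the other. A minimal sketch on toy data (an assumed simplification of the paper's procedure, not its actual implementation or data):

```python
import numpy as np

def granger_f(y, x, lags=1):
    """F-statistic for 'x Granger-causes y': compare the residual sum of
    squares of an AR(lags) model of y (restricted) with a model that also
    includes lagged x terms (unrestricted). Large F suggests causality."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    Y = y[lags:]
    lag_y = [y[lags - k:n - k] for k in range(1, lags + 1)]
    lag_x = [x[lags - k:n - k] for k in range(1, lags + 1)]
    Xr = np.column_stack([np.ones(n - lags)] + lag_y)           # restricted
    Xu = np.column_stack([np.ones(n - lags)] + lag_y + lag_x)   # unrestricted
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df = n - lags - Xu.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df)

# Toy series in which x leads y by one step, so F should be large:
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.roll(x, 1) * 0.9 + rng.normal(scale=0.1, size=200)
print(f"F = {granger_f(y, x, lags=1):.1f}")
```

In practice one would compare the statistic against the F(lags, df) distribution to obtain a p-value.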
With less than 6% of total global water resources but one fifth of the global population, China faces serious challenges in its water resources management, particularly in rural areas, due to the long-standing urban-rural dualistic structure and economy-centered development policies. This paper addresses the key water crises in rural China, including potable water supply, wastewater treatment and disposal, water for agricultural purposes, and environmental concerns. It then analyzes the administrative system for water resources with regard to the characteristics of the current administrative structure and its regulations. Finally, synthetic approaches to solving the water problems of rural China are proposed with regard to institutional reform, regulation revision, economic instruments, technology innovation and capacity building. These recommendations provide valuable insights for water managers in rural China, helping them identify the most appropriate pathways for optimizing their water resources, reducing total wastewater discharge and improving their water-related ecosystems.
In their paper, Ahmad et al. were the first to propose applying the sharp function to the classification of images. Continuing their work, in this paper we investigate the use of the sharp function as an edge detector in well-known diffusion models. Further, we discuss the formulation of the weak solution of the nonlinear diffusion equation and prove the uniqueness of the weak solution of the nonlinear problem. The anisotropic generalization of sharp-operator-based diffusion has also been implemented and tested on various types of images.
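The "well-known diffusion models" referred to here are of the Perona-Malik type, where an edge detector modulates the diffusivity. A minimal sketch of one explicit step (shown with the classical gradient-magnitude detector; the paper replaces this detector with the sharp function — that substitution is not reproduced here):

```python
import numpy as np

def perona_malik_step(u, dt=0.2, kappa=0.1):
    """One explicit step of Perona-Malik nonlinear diffusion on a 2-D image.
    The diffusivity g decays with the local difference magnitude, so
    smoothing is suppressed across strong edges. Borders are treated as
    periodic (via np.roll) for brevity."""
    dn = np.roll(u, -1, axis=0) - u   # north difference
    ds = np.roll(u, 1, axis=0) - u    # south difference
    de = np.roll(u, -1, axis=1) - u   # east difference
    dw = np.roll(u, 1, axis=1) - u    # west difference
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping diffusivity
    return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

# Smooth a noisy step image while preserving the vertical edge:
img = np.zeros((8, 8))
img[:, 4:] = 1.0
img += np.random.default_rng(1).normal(scale=0.02, size=img.shape)
out = perona_malik_step(img)
print(out.shape)
```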
A local non-restrictive ramp metering strategy, PRO, is introduced. It is based on the stochasticity of capacity. The ramp metering algorithm has the following innovative features:
• upstream time shifted measurements for anticipation
• the metering is actuated every second
• up to three vehicles per green are allowed
The theory of this strategy is described in the first part. On freeway B27, three ramp meters running the PRO algorithm were installed. In the second part, the effects on traffic flow and safety are described on the basis of extensive, detailed traffic and accident data. The impact is positive with regard to vehicle speed, queue duration and length, as well as capacity and traffic safety. The improvements in speeds, travel times and capacities are statistically significant. The ramp metering systems are highly cost-effective.
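For readers unfamiliar with local ramp metering, the general shape of such a feedback controller can be sketched as follows. This is an ALINEA-style illustration, not the PRO algorithm itself (whose capacity-stochasticity logic the abstract does not detail); all parameter values are assumed:

```python
def metering_rate(occupancy_pct, rate_prev, target_occ=18.0, gain=70.0,
                  r_min=200.0, r_max=2000.0):
    """ALINEA-style local feedback sketch: at each control interval, the ramp
    flow (veh/h) is adjusted in proportion to the deviation of measured
    mainline occupancy (in percent) from a target value, then clipped to a
    feasible range. The resulting rate is realized via green-phase timing."""
    r = rate_prev + gain * (target_occ - occupancy_pct)
    return max(r_min, min(r_max, r))

# Occupancy above target lowers the metering rate, below target raises it:
r = 1200.0
for occ in (15.0, 20.0, 30.0):
    r = metering_rate(occ, r)
    print(f"occ={occ:4.1f}% -> rate={r:.0f} veh/h")
```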
A common answer to the financial challenges of green transformation and the shortcomings of the current taxation system is the “double dividend approach”. Environmental taxes should either feed the public purse in order to remove other distorting taxes, or directly contribute to financing green transformation. Germany adopted the former approach. However, this article argues, by using the example of Germany, that “good taxes” in terms of public finance should be neutral in terms of environmental protection and vice versa. Neutral taxation in terms of environmental impacts can be best achieved by applying the “Henry George principle”. Additionally, neutral taxation in terms of public finance is best achieved if the revenues from environmental taxes are redistributed to the citizens as an ecological basic income. Thus, distortive effects of environmental charges in terms of distribution and political decision-making might be removed. However, such a financial framework could be introduced step by step, starting with a tax shift.
The introduction of functionalized magnetizable particles for the purification of enzymes or for the multiple use of pre-immobilized biocatalysts offers great potential for time and cost savings in biotechnological process design. The selective separation of the magnetizable particles is performed, for example, by a high-gradient magnetic separator. In this study, FEM and CFD simulations of the magnetic field and the fluid flow field within the filter chamber of a magnetic separator were carried out to find an optimal separator design. The motion of virtual magnetizable particles was calculated with a one-way coupled Lagrangian approach in order to test many geometric and parametric variations in reduced time. It was found that a flow homogenizer smoothed the fluid flow, so that the linear velocity became nearly uniform over the cross-section in the direction of flow. Furthermore, the retention of magnetizable particles increases with the total edge length within the filter matrix.
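The one-way coupled Lagrangian approach mentioned above can be sketched in its generic form: the particle feels drag toward the local fluid velocity plus a magnetic force, while the fluid itself is unaffected. All constants and the explicit Euler scheme are assumptions for illustration, not the study's solver:

```python
def track_particle(pos, vel, steps, dt, fluid_vel, mag_force,
                   mass=1e-12, drag=1e-8):
    """One-way coupled Lagrangian particle tracking sketch. fluid_vel and
    mag_force stand in for lookups into precomputed CFD and FEM fields;
    drag is a lumped Stokes-drag coefficient (N per m/s)."""
    x, v = list(pos), list(vel)
    for _ in range(steps):
        u = fluid_vel(x)                      # local fluid velocity (m/s)
        f = mag_force(x)                      # local magnetic force (N)
        a = [(drag * (u[i] - v[i]) + f[i]) / mass for i in range(len(x))]
        v = [v[i] + dt * a[i] for i in range(len(x))]
        x = [x[i] + dt * v[i] for i in range(len(x))]
    return x, v

# Uniform flow in +x, no magnetic force: the particle relaxes to the fluid velocity.
x, v = track_particle([0.0, 0.0], [0.0, 0.0], steps=2000, dt=1e-5,
                      fluid_vel=lambda p: (0.01, 0.0),
                      mag_force=lambda p: (0.0, 0.0))
print(v)
```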
The introduction of functionalized magnetizable particles and high-gradient magnetic separation represents a time- and money-saving alternative to conventional purification and separation unit operations in the biotechnological sector. This technique has particular advantages for the recycling of immobilized enzymes. A new magnetic filter with sight glasses was constructed and produced to study the performance of high-gradient magnetic separation under varied parameters. Optical analysis identified the build-up of clogging as the major parameter affecting separation performance. For the cleaning procedure, a two-phase flow of water with highly dispersed air bubbles was tested, which led to nearly complete cleaning of the filter chamber.
Electrical stimulation is used, for example, to treat neuronal disorders and depression with deep brain stimulation or transcranial electrical stimulation. Depending on the application, different electrodes are used and thus different electrical characteristics exist, which have to be handled by the stimulator. Without a measuring device, the user would have to trust that the stimulator is able to deliver the required stimulation signal. The objective of this paper is therefore to present a method to increase this level of confidence through characterization and modelling of the electrical behavior, using the example of one channel of our stimulation device for experimental use. Several simulation studies with an electrode model, using values in a typical range for cortical applications, show the influence of the load on the stimulator and the possibility of pre-estimating measurement signals in complex networks.
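Electrode loads of this kind are commonly modelled as a series access resistance plus a parallel RC interface. A minimal sketch of the voltage response of such a model to a constant current pulse (the component values are assumed "typical cortical range" placeholders, not the paper's measured data):

```python
import math

def electrode_voltage(t, i_stim, rs=1e3, rp=10e3, cp=1e-7):
    """Voltage across a simplified electrode model (series Rs plus parallel
    Rp||Cp) under a constant stimulation current i_stim switched on at t = 0:
    an instantaneous Rs step followed by exponential charging with tau = Rp*Cp."""
    tau = rp * cp
    return i_stim * (rs + rp * (1.0 - math.exp(-t / tau)))

# 100 uA pulse: 0.1 V step from Rs, then charging toward (Rs + Rp) * I = 1.1 V.
for t in (0.0, 1e-3, 10e-3):
    print(f"t={t * 1e3:4.1f} ms  V={electrode_voltage(t, 100e-6):.3f} V")
```

Knowing this response shape lets the stimulator's output stage be checked against what the load should produce, which is the confidence-building idea described in the abstract.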
In recent decades, green infrastructure has been widely implemented worldwide. Among these measures, green roofs appear to be particularly flexible sustainable drainage facilities. To predict their effectiveness for planning purposes, a tool is required that provides information as a function of local meteorological variables. Thus, a relatively simple one-dimensional water balance approach at the daily scale has been proposed. The crucial evapotranspiration process, usually treated as a variable dependent on the water balance, is replaced here by empirical relationships providing an a-priori assessment of soil water losses through actual evapotranspiration. The modelling scheme, which under some simplifications can be used without calibration, has been applied to experimental runoff data monitored at a green roof located near Bernkastel (Germany) between April 2005 and December 2006. Two different empirical relationships have been used to model actual evapotranspiration, considering a water-availability-limited and an energy-limited scheme. Model errors, ranging from 2% to 40% on the long-term scale and from 1% to 36% at the event scale, appear strongly related to the particular relationship considered.
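The daily-scale, one-dimensional water balance described here amounts to a bucket model: storage fills with precipitation, empties through the a-priori evapotranspiration estimate, and spills as runoff once the substrate capacity is exceeded. A minimal sketch (all numbers hypothetical, in mm per day):

```python
def daily_water_balance(precip, et_act, storage0, capacity):
    """One-dimensional daily bucket sketch of a green roof water balance.
    precip and et_act are daily series (mm); storage spills as runoff once
    the substrate capacity (mm) is exceeded."""
    s = storage0
    runoff = []
    for p, et in zip(precip, et_act):
        s = max(0.0, s + p - et)       # ET cannot drive storage negative
        r = max(0.0, s - capacity)     # excess leaves the roof as runoff
        s -= r
        runoff.append(r)
    return runoff, s

# Hypothetical week: 30 mm substrate capacity, starting half full.
runoff, s = daily_water_balance(precip=[0, 12, 25, 0, 0, 8, 0],
                                et_act=[2, 2, 3, 3, 3, 2, 2],
                                storage0=15.0, capacity=30.0)
print(runoff, s)
```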
Purpose: The well-to-wheel (WTW) methodology is widely used for policy support in road transport. It can be seen as a simplified life cycle assessment (LCA) that focuses on the energy consumption and CO2 emissions only for the fuel being consumed, ignoring other stages of a vehicle’s life cycle. WTW results are therefore different from LCA results. In order to close this gap, the authors propose a hybrid WTW+LCA methodology useful to assess the greenhouse gas (GHG) profiles of road vehicles.
Methods: The proposed method (hybrid WTW+LCA) keeps the main hypotheses of the WTW methodology, but integrates them with LCA data restricted to the global warming potential (GWP) arising during the manufacturing of the battery pack. WTW data are used for the GHG intensity of the EU electricity mix, after a consistency check against the main life cycle inventory (LCI) sources available in the literature.
Results and discussion: A numerical example is provided, comparing the GHG emissions due to the use of a battery electric vehicle (BEV) with the emissions from an internal combustion engine vehicle. The comparison is made both according to the WTW approach (namely the JEC WTW version 4) and according to the proposed hybrid WTW+LCA method. The GHG savings due to the use of BEVs calculated with WTW-4 range between 44% and 56%, while according to the hybrid method the savings are lower (31–46%). The difference is due to the GWP arising from the manufacturing of the battery pack for the electric vehicles.
Conclusions: The WTW methodology used in policy support to quantify energy content and GHG emissions of fuels and powertrains can produce results closer to the LCA methodology by adopting a hybrid WTW+LCA approach. While evaluating GHG savings due to the use of BEVs, it is important that this method considers the GWP due to the manufacturing of the battery pack.
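The hybrid idea reduces to simple arithmetic: amortize the battery-manufacturing GWP over the vehicle lifetime and add it to the BEV's WTW emissions before computing the saving. A sketch with illustrative numbers (the per-km intensities, battery GWP and lifetime below are assumptions, not the paper's data):

```python
def hybrid_ghg_saving(wtw_icev, wtw_bev, battery_gwp_kg, lifetime_km):
    """Hybrid WTW+LCA saving: battery-manufacturing GWP (kg CO2-eq) is
    amortized over lifetime_km and added to the BEV's WTW intensity
    (g CO2-eq/km) before comparing against the ICEV."""
    bev_total = wtw_bev + battery_gwp_kg * 1000.0 / lifetime_km  # g CO2-eq/km
    return 1.0 - bev_total / wtw_icev

# Illustrative: 160 g/km ICEV, 80 g/km BEV on the EU mix, a 3 t CO2-eq
# battery pack and a 150,000 km lifetime. WTW alone would report a 50 %
# saving; the battery amortization lowers it.
s = hybrid_ghg_saving(160.0, 80.0, 3000.0, 150_000)
print(f"hybrid saving = {s:.0%}")
```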
Resource prospects of municipal solid wastes generated in the Ga East Municipal Assembly of Ghana
(2017)
Background: Municipal solid waste management has recently become an important public health concern. Municipal solid waste is a major source of raw materials that could be recovered for diverse applications.
Objectives: The present study aimed to determine the composition of municipal solid waste and recoverable resources from the waste of the Ga East Municipal Assembly (GEMA) in the Greater Accra region of Ghana.
Methods: An exploratory approach was used to collect pertinent data from the Abloradgei dumpsite in GEMA using semi-structured interviews and a focus group discussion. A field characterization study was undertaken to segregate and estimate the value of the various components of the collected waste. Dumpsite workers were asked about the current general composition of MSW, the mode of collection and disposal, records of sanitation-related diseases, the use of modern treatment plants, waste management legislation and enforcement challenges, the number of trucks received by the dumpsite per day, records of pretreatment of MSW before disposal, and the use of personal protective equipment.
Results: The results showed that a significant proportion (48.8%) of the municipal solid waste was organic material, while the remainder (51.2%) was inorganic. The results also showed that 63% of the municipal solid waste is collected without sorting at the source and without modern treatment before dumping. The annual value of the recyclable materials in GEMA municipal solid waste was estimated at Ghana Cedis (GH¢) 9,381,960 for plastic, 985,111 for mixed glass, 5,160,078 for paper and 11,586,770 for metal, a total of GH¢ 27,113,919 ($10,845,568). In addition, 2,106,339.2 m³ (74,384,667.5 ft³) per annum of biogas could be produced from these components, with a market value of GH¢ 1,997,972.17 ($768,393.62), corresponding to 11,579 MWh (1.32 MW) of electricity and 9,535 MWh (1.09 MW) of heat. All of this is estimated to be lost under current waste management practices.
Conclusions: We recommend that GEMA institute sustainable recycling practices and biogas production technologies; prioritize sanitation and waste management education for the public; make home segregation of waste materials obligatory; support workers by providing them with protective clothing; incorporate informal waste collectors and scavengers into the new system; and collaborate with research institutions on waste-to-resource projects to ensure a more sustainable waste management system in the municipality.
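The monetary figures reported above can be cross-checked with a few lines of arithmetic. The component values are taken directly from the abstract; the implied exchange rate is our own derivation, not a figure from the study.

```python
# Consistency check of the reported annual resource values (GH¢ per annum,
# figures taken directly from the abstract).
recyclables = {
    "plastic":     9_381_960,
    "mixed glass":   985_111,
    "paper":       5_160_078,
    "metal":      11_586_770,
}

total_ghc = sum(recyclables.values())
print(total_ghc)  # 27113919, matching the reported GH¢ 27,113,919

# Implied exchange rate from the reported USD equivalent of $10,845,568
rate = total_ghc / 10_845_568
print(round(rate, 2))  # 2.5 GH¢ per USD, consistent with the study period
```

The four component values do sum exactly to the reported total, and the GH¢/USD conversion is internally consistent across the reported figures.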
Driven by decreasing PV and energy storage prices, increasing electricity costs and policy support from the Thai government (the self-consumption era), rooftop PV and energy storage systems are expected to be deployed rapidly in the country, which may disrupt the existing business models of Thai distribution utilities through revenue erosion and lost earnings opportunities. The retail rates that directly affect ratepayers (non-solar customers) are expected to increase. This paper presents a framework for evaluating the impacts of PV with and without energy storage systems on Thai distribution utilities and ratepayers using cost-benefit analysis (CBA). Prior to the calculation of cost and benefit components, changes in energy sales need to be addressed. Government policies supporting PV generation will also help accelerate rooftop PV installation. Benefit components include avoided transmission losses and deferred distribution capacity at appropriate PV penetration levels, while cost components consist of revenue losses, program costs, integration costs and unrecovered fixed costs. For Thailand, comparing the total costs and total benefits of rooftop PV and energy storage systems is necessary in order to adopt policy support and mitigation approaches, such as business model innovation and regulatory reform, effectively.
Concerns over climate change, air pollution, and oil supply have stimulated the market for battery electric vehicles (BEVs). The environmental impacts of BEVs are typically evaluated through a standardized life-cycle assessment (LCA) methodology. Here, the LCA literature was surveyed with the objective of sketching the major trends and challenges in the impact assessment of BEVs. It was found that BEVs tend to be more energy efficient and less polluting than conventional cars. BEVs decrease exposure to air pollution because their impacts largely result from vehicle production and electricity generation outside of urban areas. The carbon footprint of BEVs, being highly sensitive to the carbon intensity of the electricity mix, may decrease in the near future through a shift to renewable energies and technology improvements in general. A minority of LCAs covers impact categories other than carbon footprint, revealing a mixed picture. To date, little attention has been paid in LCA to the efficiency advantage of BEVs in urban traffic, the gap between on-road and certified energy consumption, the local exposure to air pollutants and noise, and the aging of emission control technologies in conventional cars. Improvements of BEV components, directed charging, second-life reuse of vehicle batteries, as well as vehicle-to-home and vehicle-to-grid applications will significantly reduce the environmental impacts of BEVs in the future.
Artificial light at night (ALAN) is a widespread alteration of the natural environment that can affect the functioning of ecosystems. ALAN can change the movement patterns of freshwater animals that move into the adjacent riparian and terrestrial ecosystems, but the implications for local riparian consumers that rely on these subsidies are still unexplored. We conducted a 2-year field experiment to quantify changes of freshwater-terrestrial linkages by installing streetlights in a previously light-naïve riparian area adjacent to an agricultural drainage ditch. We compared the abundance and community composition of emerging aquatic insects, flying insects, and ground-dwelling arthropods with an unlit control site. Comparisons were made within and between years using two-way generalized least squares (GLS) model and a BACI design (Before-After Control-Impact). Aquatic insect emergence, the proportion of flying insects that were aquatic in origin, and the total abundance of flying insects all increased in the ALAN-illuminated area. The abundance of several night-active ground-dwelling predators (Pachygnatha clercki, Trochosa sp., Opiliones) increased under ALAN and their activity was extended into the day. Conversely, the abundance of nocturnal ground beetles (Carabidae) decreased under ALAN. The changes in composition of riparian predator and scavenger communities suggest that the increase in aquatic-to-terrestrial subsidy flux may cascade through the riparian food web. The work is among the first studies to experimentally manipulate ALAN using a large-scale field experiment, and provides evidence that ALAN can affect processes that link adjacent ecosystems. Given the large number of streetlights that are installed along shorelines of freshwater bodies throughout the globe, the effects could be widespread and represent an underestimated source of impairment for both aquatic and riparian systems.
The current work investigates the capability of a tailored multivariate curve resolution–alternating least squares (MCR-ALS) algorithm to analyse glucose, phosphate, ammonium and acetate dynamics simultaneously in an E. coli BL21 fed-batch fermentation. The high-cell-density (HCDC) process is monitored by ex situ online attenuated total reflection (ATR) Fourier transform infrared (FTIR) spectroscopy and several in situ online process sensors. This approach efficiently utilises automatically generated process data to reduce the time- and cost-consuming reference measurement effort for multivariate calibration. To determine metabolite concentrations with accuracies between ±0.19 and ±0.96 g L⁻¹, the presented approach primarily needs, besides online sensor measurements, single FTIR measurements for each of the components of interest. The ambiguities in alternating least squares solutions for concentration estimation are reduced by inserting analytical process knowledge, primarily in the form of elementary carbon mass balances. In this way, the established idea of mass balance constraints in MCR is combined with the consistency check of measured data by carbon balances, as commonly applied in bioprocess engineering. The constraints are calculated based on online process data and theoretical assumptions. This increased calculation effort can replace, to a large extent, the need for manually conducted quantitative chemical analysis, leads to good estimates of concentration profiles and improves process understanding.
In Germany, ground lease (Erbbaurecht) contracts are very often drafted from a political point of view and rarely with an orientation toward the market, which undermines the acceptance of the ground lease instrument. The determination of the ground rent (Erbbauzins) plays an important role here; given the low level of interest rates, it is often perceived as inappropriate. On the other hand, deriving market-consistent ground rents by way of comparison proves difficult. This article therefore presents a practice-oriented approach, based on capital market theory, for determining market-consistent ground rents. A key aspect is the shift in the risk/return position that arises from granting a ground lease as opposed to full ownership. Neither the ground lessor (Erbbauverpflichteter) nor the leaseholder (Erbbauberechtigter) should be placed in a worse position in this respect than under full ownership. This requirement is made concrete by means of the Sharpe ratio. To ensure that the leaseholder is not worse off than under full ownership, a "subsidization" of his return is needed. It is shown that this can be achieved without losses in the risk/return position relative to full ownership. Based on these considerations, minimum return requirements for the leaseholder and maximum rates for the ground lessor are calculated, both expressed relative to land values.
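The Sharpe-ratio condition used in this approach can be illustrated with a small, purely hypothetical calculation: a position is no worse than full ownership if its excess return per unit of risk at least matches that of full ownership. All rates and volatilities below are invented for illustration and are not figures from the article.

```python
# Hypothetical sketch of the Sharpe-ratio condition: the ground-lease
# position must offer at least the same excess return per unit of risk
# as full ownership. All inputs are invented, not the article's values.

def sharpe(ret, risk_free, volatility):
    return (ret - risk_free) / volatility

RISK_FREE = 0.01   # assumed risk-free rate
R_FULL = 0.045     # assumed total return of full ownership
VOL_FULL = 0.07    # assumed volatility of full ownership
VOL_LEASE = 0.04   # assumed (lower) volatility of the ground-lease position

# Minimum ground-lease return so that the Sharpe ratio of the lease
# position is not worse than that of full ownership:
r_lease_min = RISK_FREE + sharpe(R_FULL, RISK_FREE, VOL_FULL) * VOL_LEASE
print(f"{r_lease_min:.3%}")  # 3.000%
```

Because the lease position is assumed to carry less risk, the minimum acceptable rent lies below the full-ownership return, which mirrors the article's point that "market-consistent" ground rents need not match full-ownership yields.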
One key to successful and fluent human-robot collaboration in disassembly processes is equipping the robot system with greater autonomy and intelligence. In this paper, we present an informed software agent that controls the robot behavior to form an intelligent robot assistant for disassembly purposes. Since the disassembly process depends first of all on the product structure, we inform the agent through a generic approach based on product models. The product model is transformed into a directed graph and used to build, share and define a coarse disassembly plan. To refine the workflow, we formulate the problem of loosening a connection and distributing the work as a search problem. The resulting detailed plan consists of a sequence of actions that are used to call, parametrize and execute robot programs for the fulfillment of the assistance. The aim of this research is to equip robot systems with the knowledge and skills to perform their assistance autonomously and thus improve the ergonomics of disassembly workstations.
Global change effects on biodiversity and human wellbeing call for improved long-term environmental data as a basis for science, policy and decision making, including increased interoperability, multifunctionality, and harmonization. Based on the example of two global initiatives, the International Long-Term Ecological Research (ILTER) network and the Group on Earth Observations Biodiversity Observation Network (GEO BON), we propose merging the frameworks behind these initiatives, namely ecosystem integrity and essential biodiversity variables, to serve as an improved guideline for future site-based long-term research and monitoring in terrestrial, freshwater and coastal ecosystems. We derive a list of specific recommendations of what and how to measure at a monitoring site and call for an integration of sites into co-located site networks across individual monitoring initiatives, and centered on ecosystems. This facilitates the generation of linked comprehensive ecosystem monitoring data, supports synergies in the use of costly infrastructures, fosters cross-initiative research and provides a template for collaboration beyond the ILTER and GEO BON communities.
Deep brain stimulation (DBS) is a neurosurgical intervention in which electrodes are permanently implanted into the brain in order to modulate pathologic neural activity. The post-operative reconstruction of the DBS electrodes is important for efficient stimulation parameter tuning. A major limitation of existing approaches for electrode reconstruction from post-operative imaging, which prevents routine clinical use, is that they are manual or semi-automatic, and thus both time-consuming and subjective. Moreover, existing methods rely on a simplified model of a straight-line electrode trajectory rather than the more realistic curved trajectory. The main contribution of this paper is that, for the first time, we present a highly accurate and fully automated method for electrode reconstruction that considers curved trajectories. The robustness of our proposed method is demonstrated using a multi-center clinical dataset consisting of N = 44 electrodes. In all cases the electrode trajectories were successfully identified and reconstructed. In addition, the accuracy is demonstrated quantitatively using a high-accuracy phantom with known ground truth. In the phantom experiment, the method could detect individual electrode contacts with high accuracy and the trajectory reconstruction reached an error level below 100 μm (0.046 ± 0.025 mm). An implementation of the method is made publicly available so that it can be used directly by researchers or clinicians. This constitutes an important step towards the future integration of lead reconstruction into standard clinical care.
Static (one-legged stance) and dynamic (star excursion balance) postural control tests were performed by 14 adolescent athletes with and 17 without back pain to determine reproducibility. The total, mediolateral and anterior-posterior displacements of the centre of pressure in mm for the static test, and the normalized and composite reach distances for the dynamic test, were analysed. Intraclass correlation coefficients, 95% confidence intervals, and a Bland-Altman analysis were calculated to assess reproducibility. On the static test, intraclass correlation coefficients of 0.54 to 0.65 and 0.61 to 0.69 were obtained for subjects with back pain, and 0.45 to 0.49 and 0.52 to 0.60 for those without, for the right and left legs, respectively. Likewise, on the dynamic test, values of 0.79 to 0.88 and 0.75 to 0.93 were obtained for subjects with back pain, and 0.61 to 0.82 and 0.60 to 0.85 for those without, for the right and left legs, respectively. No systematic bias was observed between test and retest on either the static or the dynamic test. The one-legged stance and star excursion balance tests thus show fair to excellent reliability and can be used as measures of postural control in adolescent athletes with and without back pain.
Introduction: Annually, 2 million sports-related injuries are reported in Germany, a large proportion of which affect athletes. Multiple sport injury prevention programs designed to decrease acute and overuse injuries in athletes have proven effective. Yet it remains uncertain which of the programs' components, general or sports-specific, led to these positive effects. Although the superiority of sports-specific injury prevention programs has not been established, coaches and athletes alike prefer specialized over generalized exercise programs. Therefore, this systematic review aimed to present the available evidence on how general and sports-specific prevention programs affect injury rates in athletes.
Methods: PubMed and Web of Science were electronically searched throughout April 2018. The inclusion criteria were publication dates Jan 2006–Dec 2017, athletes (11–45 years), exercise-based injury prevention programs and injury incidence. The methodological quality was assessed with the Cochrane Collaboration assessment tools.
Results: Of the initial 6619 findings, 15 studies met the inclusion criteria. In addition, 13 studies were added from reference lists and external sources, resulting in a total of 28 studies. Of these, one used sports-specific, seven general and 20 mixed prevention strategies. Twenty-four studies reported reduced injury rates. Of the four ineffective programs, one was general and three were mixed.
Conclusion: General and mixed programs positively affect injury rates. Sports-specific programs remain uninvestigated, and despite wide discussion regarding the definition, no consensus has been reached. Defining this terminology and investigating the true effectiveness of such injury prevention programs (IPPs) is a potential avenue for future research.
Sustainable software products - Towards assessment criteria for resource and energy efficiency
(2018)
Many authors have proposed criteria to assess the “environmental friendliness” or “sustainability” of software products. However, a causal model that links observable properties of a software product to conditions of it being green or (more general) sustainable is still missing. Such a causal model is necessary because software products are intangible goods and, as such, only have indirect effects on the physical world. In particular, software products are not subject to any wear and tear, they can be copied without great effort, and generate no waste or emissions when being disposed of. Viewed in isolation, software seems to be a perfectly sustainable type of product. In real life, however, software products with the same or similar functionality can differ substantially in the burden they place on natural resources, especially if the sequence of released versions and resulting hardware obsolescence is taken into account. In this article, we present a model describing the causal chains from software products to their impacts on natural resources, including energy sources, from a life-cycle perspective. We focus on (i) the demands of software for hardware capacities (local, remote, and in the connecting network) and the resulting hardware energy demand, (ii) the expectations of users regarding such demands and how these affect hardware operating life, and (iii) the autonomy of users in managing their software use with regard to resource efficiency. We propose a hierarchical set of criteria and indicators to assess these impacts. We demonstrate the application of this set of criteria, including the definition of standard usage scenarios for chosen categories of software products. We further discuss the practicability of this type of assessment, its acceptability for several stakeholders and potential consequences for the eco-labeling of software products and sustainable software design.
Companies have made considerable progress in assessing the sustainability of their processes and products, including in the information and communication technology (ICT) sector. It is therefore surprising that little attention has been given to the sustainability performance of software products. For this article, we chose a case study approach to explore the extent to which software manufacturers have considered sustainability criteria for their products. We selected a manufacturer of sustainability management software on the assumption that it would be more likely to integrate elements of sustainability performance into its products. In the case study, we applied a previously developed set of criteria for sustainable software (SCSS), using a questionnaire and experiments, to assess a web-based sustainability management software product with regard to its sustainability performance. The assessment finds that, despite a sustainability-conscious manufacturer, a systematic assessment of the sustainability of software products is missing in the case study. This implies that sustainability assessment for software products is still novel, that corresponding knowledge is missing, and that suitable tools are not yet widely applied in the industry. The SCSS presents a suitable approach to close this gap, but it requires further refinement, for example regarding its applicability to web-based software on external servers.
Containerization is one of the most important topics for modern data centers and web developers. As the number of containers on single- and multi-node systems grows, knowledge about the energy consumption behavior of individual web-service containers is essential in order to save energy and, of course, money. In this article, we show how the energy consumption behavior of single containerized web services/web apps changes when replicas of the service are created in order to scale and balance the web service.
Radar target simulator with complex-valued delay line modeling based on standard radar components
(2018)
With increasing radar activities in the automotive, industrial and private sectors, there is a need to test radar sensors in their environment. A radar target simulator can help test radar systems repeatably. In this paper, the authors present a low-cost hardware concept for radar target simulation. The theoretical foundations are derived and analyzed. An implementation of a demonstrator operating in the 24 GHz ISM band is shown, in which the dynamic range simulation was implemented in an FPGA with fast-sampling ADCs and DACs. By using an FIR filtering approach, a fine discretization of the range could be reached, which will furthermore allow an inherent and automatic Doppler simulation by moving the target.
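The idea of a fine range discretization via FIR filtering can be sketched as a fractional-delay filter: a windowed-sinc FIR delays the sampled signal by a non-integer number of samples, corresponding to simulated target ranges finer than one ADC sample spacing. This is a generic textbook illustration, not the authors' FPGA implementation.

```python
import math

def fractional_delay_fir(delay, n_taps=21):
    """Hamming-windowed-sinc FIR taps that delay a signal by
    `center + delay` samples, where delay may be fractional.
    Generic sketch, not the paper's design."""
    center = (n_taps - 1) / 2
    taps = []
    for n in range(n_taps):
        x = n - center - delay
        sinc = 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
        # Hamming window limits truncation ripple
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / (n_taps - 1))
        taps.append(sinc * window)
    return taps

def fir_filter(taps, signal):
    """Direct-form FIR convolution (zero initial state)."""
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if i - k >= 0:
                acc += h * signal[i - k]
        out.append(acc)
    return out

# Delay a slow sine by half a sample (plus the filter's 10-sample group
# delay): the half-sample step models a range step finer than one ADC bin.
sig = [math.sin(2 * math.pi * 0.02 * n) for n in range(100)]
delayed = fir_filter(fractional_delay_fir(0.5), sig)
```

Sweeping the fractional delay over time would shift the simulated range continuously, which is what produces the inherent Doppler mentioned in the abstract.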
Life-threatening cardiomyopathy is a severe, but common, complication associated with severe trauma or sepsis. Several signaling pathways involved in apoptosis and necroptosis are linked to trauma- or sepsis-associated cardiomyopathy. However, the underlying causative factors are still debatable. Heparan sulfate (HS) fragments belong to the class of danger/damage-associated molecular patterns liberated from endothelial-bound proteoglycans by heparanase during tissue injury associated with trauma or sepsis. We hypothesized that HS induces apoptosis or necroptosis in murine cardiomyocytes. By using a novel Medical-In silico approach that combines conventional cell culture experiments with machine learning algorithms, we aimed to replace a significant part of the expensive and time-consuming cell culture experiments and data generation with computational intelligence (refinement and replacement). Cardiomyocytes exposed to HS showed an activation of the intrinsic apoptosis signaling pathway via cytochrome C and the activation of caspase 3 (both p < 0.001). Notably, the exposure to HS resulted in the induction of necroptosis by tumor necrosis factor α and receptor-interacting protein 3 (p < 0.05; p < 0.01) and, hence, an increased level of necrotic cardiomyocytes. In conclusion, using this novel Medical-In silico approach, our data suggest (i) that HS induces necroptosis in cardiomyocytes by phosphorylation (activation) of receptor-interacting protein 3, (ii) that HS is a therapeutic target in trauma- or sepsis-associated cardiomyopathy, and (iii) that this proof-of-concept is a first step toward simulating the extent of activated components in the pro-apoptotic pathway induced by HS with only a small data set gained from in vitro experiments by using machine learning algorithms.
Many borate crystals feature nonlinear optical properties that allow for efficient frequency conversion of common lasers down into the ultraviolet spectrum. Twinning may degrade crystal quality and affect nonlinear optical properties, in particular if crystals are composed of twin domains with opposing polarities. Here, we use measurements of optical activity to demonstrate the existence of inversion twins within single crystals of YAl3(BO3)4 (YAB) and K2Al2B2O7 (KABO). We determine the optical rotatory dispersion of YAB and KABO throughout the visible spectrum using a spectrophotometer with rotatable polarizers. Space-resolved measurements of the optical rotation can be related to the twin structure and give estimates on the extent of twinning. The reported dispersion relations for the rotatory power of YAB and KABO may be used to assess crystal quality and to select twin-free specimens.
Background: The STarT-Back-Approach (STarT: Subgroups for Targeted Treatment) was developed in the UK and has demonstrated clinical and cost effectiveness. Based on the results of a brief questionnaire, patients with low back pain are stratified into three treatment groups. Since the organisation of physiotherapy differs between Germany and the UK, the aim of this study is to explore German physiotherapists’ views and perceptions about implementing the STarT-Back-Approach.
Methods: Three two-hour think-tank workshops with physiotherapists were conducted. Focus groups, using a semi-structured interview guideline, followed a presentation of the STarT-Back-Approach, with discussions audio recorded, transcribed and qualitatively analysed using content analysis.
Results: Nineteen physiotherapists participated (15 female, mean age 41.2 (SD 8.6) years). Three main themes emerged, each with multiple subthemes: 1) the intervention (15 subthemes), 2) the healthcare context (26 subthemes) and 3) individual characteristics (8 subthemes). Therapists’ perceptions of the extent to which the STarT-Back intervention would require changes to their normal clinical practice varied considerably. They felt that within their current healthcare context, there were significant financial disincentives that would discourage German physiotherapists from providing the STarT-Back treatment pathways, such as the early discharge of low-risk patients with supported self-management materials. They also discussed the need for appropriate standardised graduate and post-graduate skills training for German physiotherapists to treat high-risk patients with a combined physical and psychological approach (e.g., communication skills).
Conclusions: Whilst many German physiotherapists are positive about the STarT-Back-Approach, there are a number of substantial barriers to implementing the matched treatment pathways in Germany. These include financial disincentives within the healthcare system to early discharge of low-risk patients. Therapists also highlighted the need for solutions in respect of scalable physiotherapy training to gain skills in combined physical and psychological approaches.
A comprehensive overview is provided evaluating the direct real-world CO2 emissions of both diesel and petrol cars newly registered in Europe between 1995 and 2015. Before 2011, European diesel cars emitted less CO2 per kilometre than petrol cars, but since then there has been no appreciable difference in per-km CO2 emissions between diesel and petrol cars. Real-world CO2 emissions of diesel cars have not declined appreciably since 2001, while the CO2 emissions of petrol cars have been stagnant since 2012. When black-carbon-related CO2 equivalents are added, such as those from diesel cars without particulate filters, diesel cars turn out to have had much higher climate-relevant emissions than petrol cars until the year 2001. From 2001 to 2015, CO2-equivalent emissions from new diesel cars and petrol cars were hardly distinguishable. Lifetime use-phase CO2-equivalent emissions of all European passenger vehicles were modelled for 1995–2015 based on three scenarios: the historical case; a scenario freezing the percentage of diesel cars at the low levels of the early 1990s (thus avoiding the observed “boom” in new diesel registrations); and an advanced mitigation scenario based on high proportions of petrol hybrid cars and cars burning gaseous fuels. The difference in CO2-equivalent emissions between the historical case and the scenario avoiding the diesel car boom is only 0.4%. The advanced mitigation scenario would have achieved a 3.4% reduction in total CO2-equivalent emissions over the same time frame. The European diesel car boom appears to have been ineffective at reducing climate-warming emissions from the European transport sector.
Passenger cars in Europe have become both heavier and more powerful over the past decades. This trend has increased vehicle utility but it might have also offset technical improvements in powertrain efficiency. Here, we analyze efficiency trade-offs and CO2 emissions for three popular compact cars in Germany. We find that mass, power, and front area of model variants has increased by 66%, 147%, and 22%, respectively between 1980 and 2018. In the same period, fuel consumption decreased 14% for gasoline models but it increased 9% for diesel models. However, if vehicle mass, power, and front area had remained at 1980 levels, technical efficiency improvements would have decreased the fuel consumption of gasoline and diesel models by 23% and 24%, respectively. The related efficiency trade-offs amount to 24 g CO2/km or 13% of the current fuel consumption for gasoline models and 40 g CO2/km or 25% of the current fuel consumption for diesel models. These findings suggest that about half of the technical efficiency improvements in gasoline models and all of the technical efficiency improvements in diesel models are offset through other vehicle attributes. By accounting for the observed efficiency trade-offs, climate policy could become more effective.
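As a quick consistency check, the absolute trade-offs and percentage figures quoted above jointly imply the current per-km CO2 levels of the analyzed models; the arithmetic below uses only the figures stated in the abstract.

```python
# Back-of-the-envelope check of the trade-off figures in the abstract:
# 24 g CO2/km is stated to be 13 % of current gasoline fuel consumption,
# and 40 g CO2/km to be 25 % of current diesel fuel consumption.
implied_gasoline = 24 / 0.13   # implied current gasoline level, g CO2/km
implied_diesel = 40 / 0.25     # implied current diesel level, g CO2/km
print(round(implied_gasoline), round(implied_diesel))  # 185 160
```

The implied levels of roughly 185 and 160 g CO2/km are plausible real-world values for compact gasoline and diesel cars, so the stated percentages and absolute trade-offs are mutually consistent.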
This article presents experience curves and cost-benefit analyses for electric and plug-in hybrid cars sold in Germany. We find that between 2010 and 2016, the prices and the price differentials relative to conventional cars declined at learning rates of 23 ± 2% and 32 ± 2% for electric cars and 6 ± 1% and 37 ± 2% for plug-in hybrids. If these trends persist, price break-even with conventional cars may be reached after another 7 ± 1 million electric cars and 5 ± 1 million plug-in hybrids are produced. The user costs of electric and plug-in hybrid cars relative to their conventional counterparts are declining annually by 14% and 26%. The costs of mitigating CO2 and air pollutant emissions through the deployment of electrified cars also tend to decline. However, at current levels, NOX and particle emissions are still mitigated at lower cost by state-of-the-art after-treatment systems than through the electrification of powertrains. Overall, the observation of robust technological learning suggests that policy makers should focus their support on non-cost market barriers to the electrification of road transport, specifically addressing the availability of recharging infrastructure.
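The learning rates quoted above follow the standard experience-curve definition: costs fall by the learning rate each time cumulative production doubles. A minimal sketch of that relationship, with invented prices and volumes rather than the article's data:

```python
import math

def price_after(p0, cum0, cum, learning_rate):
    """Experience curve: price falls by `learning_rate` per doubling of
    cumulative production. Textbook formula; inputs are illustrative."""
    b = math.log2(1 - learning_rate)   # experience exponent (negative)
    return p0 * (cum / cum0) ** b

# Invented example: a 23 % learning rate applied to an assumed price of
# 30,000 EUR at 100,000 cumulative units.
p0 = 30_000
print(round(price_after(p0, 100_000, 200_000, 0.23)))  # one doubling: 23100
print(round(price_after(p0, 100_000, 400_000, 0.23)))  # two doublings: 17787
```

Extrapolating such a curve on the price differential to conventional cars is what yields the break-even production volumes reported in the abstract.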
Driven by falling photovoltaic (PV) installation costs and potential support policies, rooftop PV is expected to expand rapidly in Thailand. As a result, the relevant stakeholders, especially utilities, have concerns about the net economic impacts of high PV adoption. Using a cost–benefit analysis, this study quantifies the net economic impacts of rooftop PV systems on three utilities and on ratepayers in Thailand by applying nine different PV adoption scenarios with various buyback rates and annual percentages of PV cost reduction. Under Thailand’s current electricity tariff structure, Thai utilities are well protected and able to pass all PV-related costs on to the ratepayers in the form of changes in retail rates. We find that when PV adoption is low, the net economic impacts on both the utilities and the retail rates are small, and the impact on each utility depends on its specific characteristics. On the other hand, when PV adoption ranges from 9–14% on an energy basis, five-year retail rate impacts become noticeable, ranging between 6% and 11% of the projected retail rates in 2036, depending on the PV adoption level. Thus, it is necessary for Thailand to make tradeoffs among the stakeholders and maximize the benefits of rooftop PV adoption.
Background: High numbers of consumable medical materials (eg, sterile needles and swabs) are used during the daily routine of intensive care units (ICUs) worldwide. Although medical consumables largely contribute to total ICU hospital expenditure, many hospitals do not track the individual use of materials. Current tracking solutions meeting the specific requirements of the medical environment, like barcodes or radio frequency identification, require specialized material preparation and high infrastructure investment. This impedes the accurate prediction of consumption, leads to high storage maintenance costs caused by large inventories, and hinders scientific work due to inaccurate documentation. Thus, new cost-effective and contactless methods for object detection are urgently needed.
Objective: The goal of this work was to develop and evaluate a contactless visual recognition system for tracking medical consumable materials in ICUs using a deep learning approach on a distributed client-server architecture.
Methods: We developed Consumabot, a novel client-server optical recognition system for medical consumables, based on the convolutional neural network model MobileNet implemented in TensorFlow. The software was designed to run on single-board computer platforms as a detection unit. The system was trained to recognize 20 different materials in the ICU, with 100 sample images provided for each consumable material. We assessed the top-1 recognition rates in the context of different real-world ICU settings: materials presented to the system without visual obstruction, 50% covered materials, and scenarios of multiple items. We further performed an analysis of variance with repeated measures to quantify the effect of adverse real-world circumstances.
Results: Consumabot reached >99% recognition reliability after about 60 steps of training and 150 steps of validation. A desirably low cross-entropy of <0.03 was reached for the training set after about 100 iteration steps, and after 170 steps for the validation set. The system showed a high top-1 mean recognition accuracy in a real-world scenario of 0.85 (SD 0.11) for objects presented to the system without visual obstruction. Recognition accuracy was lower, but still acceptable, in scenarios where the objects were 50% covered (P<.001; mean recognition accuracy 0.71; SD 0.13) or multiple objects of the target group were present (P=.01; mean recognition accuracy 0.78; SD 0.11), compared to a nonobstructed view. The approach met the criteria of absence of explicit labeling (eg, barcodes, radio frequency labeling) while maintaining a high standard for quality and hygiene with minimal consumption of resources (eg, cost, time, training, and computational power).
Conclusions: Using a convolutional neural network architecture, Consumabot consistently achieved good results in the classification of consumables and thus is a feasible way to recognize and register medical consumables directly to a hospital’s electronic health record. The system shows limitations when materials are partially covered, so that identifying characteristics of the consumables are hidden from the system. Further evaluation under different medical circumstances is needed.
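As a minimal illustration of the top-1 metric reported above — not the Consumabot implementation itself — the recognition rate can be computed from a classifier's softmax outputs as follows (toy data, assuming NumPy is available):

```python
import numpy as np

def top1_accuracy(probs, labels):
    """Fraction of samples whose highest-probability class matches the label."""
    preds = np.argmax(probs, axis=1)
    return float(np.mean(preds == labels))

# toy example: 4 samples, 3 classes
probs = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.3, 0.3, 0.4],
    [0.6, 0.3, 0.1],
])
labels = np.array([0, 1, 2, 1])  # last sample is misclassified
print(top1_accuracy(probs, labels))  # 0.75
```

The same per-class softmax outputs would, in a deployed system, come from the MobileNet forward pass on each camera frame.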
Fuzzy system based on two-step cascade genetic optimization strategy for tobacco tar prediction
(2019)
There are many challenges in accurately measuring cigarette tar constituents, including the need for standardized smoke generation methods for unstable mixtures. In this research, algorithms fusing artificial intelligence methods were developed to predict tar concentration. The outputs are three fuzzy structures optimized with genetic algorithms: GA-FUZZY, GA-ANFIS (genetic algorithm combined with an adaptive neuro-fuzzy inference system), and GA-GA-FUZZY. The proposed algorithms are applied to tar prediction in the cigarette production process, and the predictions are compared with high-performance liquid chromatography (HPLC) readings.
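The paper's GA-optimized fuzzy predictors are not reproduced here, but the underlying idea — a genetic algorithm tuning the parameters of a small fuzzy inference system against reference measurements — can be sketched as follows. All data, rule shapes, and GA settings below are illustrative assumptions, not the authors' configuration:

```python
import random

random.seed(42)

def triangular(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if not (a < b < c) or x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_predict(x, params):
    """Two-rule zero-order Sugeno-style model:
    IF x is LOW THEN tar = p_low; IF x is HIGH THEN tar = p_high."""
    peak_low, peak_high, p_low, p_high = params
    w_low = triangular(x, -1.0, peak_low, 1.5)
    w_high = triangular(x, -0.5, peak_high, 2.0)
    total = w_low + w_high
    return (w_low * p_low + w_high * p_high) / total if total else 0.0

# synthetic 'measurements' generated from known parameters, so the GA
# can in principle recover them (purely illustrative, not the paper's data)
true_params = [0.2, 0.8, 2.0, 12.0]
data = [(x / 10.0, fuzzy_predict(x / 10.0, true_params)) for x in range(11)]

def fitness(params):
    """Negative sum of squared prediction errors (higher is better)."""
    return -sum((fuzzy_predict(x, params) - y) ** 2 for x, y in data)

# plain GA: tournament selection, gaussian mutation, elitism
pop = [[random.uniform(0, 1), random.uniform(0, 1),
        random.uniform(0, 15), random.uniform(0, 15)] for _ in range(30)]
best = max(pop, key=fitness)
f0 = fitness(best)
for _ in range(80):
    children = []
    for _ in range(len(pop) - 1):
        parent = max(random.sample(pop, 2), key=fitness)  # tournament of 2
        children.append([g + random.gauss(0, 0.1) for g in parent])
    pop = children + [best]                               # elitism
    cand = max(pop, key=fitness)
    if fitness(cand) > fitness(best):
        best = cand
print(fitness(best) >= f0)  # True: elitism guarantees no regression
```

In the paper's cascaded GA-GA variant, a second GA stage would refine the parameters found by the first; the single-stage loop above only conveys the basic optimization principle.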
A new comprehensive evaluation system presented here allows education for sustainable development (ESD) in degree programs to be compared and quantified. The evaluation is based on a criteria system with three hierarchical levels, the highest of which comprises a list of 35 indicator terms. First, the two most popular undergraduate (bachelor’s) degree programs in Germany, mechanical engineering (ME) and business administration (BA), were reviewed for ESD content using the new evaluation scheme. Additionally, we reviewed and quantified ESD subjects and their temporal changes across the entire bandwidth of degree programs of one university (Umwelt-Campus Birkenfeld, University of Applied Sciences Trier), back to 1999. Moreover, a spot check on international ME and BA bachelor’s degree programs was performed. Through our reviews, we found a high number of elective classes dedicated to ESD, particularly in BA bachelor’s programs. However, the percentage of compulsory classes related to ESD is relatively low, at 5–6% in both ME and BA programs. The spot check on degree programs outside Germany revealed similar results. The time-trend analysis at Umwelt-Campus Birkenfeld showed that a considerable share of ESD content that was part of the original diploma degrees has moved to what are now master’s degrees.
Aim: The aim of the study was to identify common orthopedic sports injury profiles in adolescent elite athletes with respect to age, sex, and anthropometrics.
Methods: A retrospective data analysis of 718 orthopedic presentations among 381 adolescent elite athletes from 16 different sports to a sports medical department was performed. Recorded data of history and clinical examination included area, cause and structure of acute and overuse injuries. Injury-events were analyzed in the whole cohort and stratified by age (11–14/15–17 years) and sex. Group differences were tested by chi-squared-tests. Logistic regression analysis was applied examining the influence of factors age, sex, and body mass index (BMI) on the outcome variables area and structure (α = 0.05).
Results: Higher proportions of injury-events were reported for females (60%) and athletes of the older age group (66%) than males and younger athletes. The most frequently injured area was the lower extremity (47%) followed by the spine (30.5%) and the upper extremity (12.5%). Acute injuries were mainly located at the lower extremity (74.5%), while overuse injuries were predominantly observed at the lower extremity (41%) as well as the spine (36.5%). Joints (34%), muscles (22%), and tendons (21.5%) were found to be the most often affected structures. The injured structures were different between the age groups (p = 0.022), with the older age group presenting three times more frequent with ligament pathology events (5.5%/2%) and less frequent with bony problems (11%/20.5%) than athletes of the younger age group. The injured area differed between the sexes (p = 0.005), with males having fewer spine injury-events (25.5%/34%) but more upper extremity injuries (18%/9%) than females. Regression analysis showed statistically significant influence for BMI (p = 0.002) and age (p = 0.015) on structure, whereas the area was significantly influenced by sex (p = 0.005).
Conclusion: Events of soft-tissue overuse injuries are the most common reasons for orthopedic presentations of adolescent elite athletes. Mostly, the lower extremity and the spine are affected, while sex- and age-specific characteristics of the affected area and structure must be considered. Therefore, prevention strategies addressing these injury-event profiles should be implemented as early as adolescence, taking age, sex, and injury entity into account.
Introduction: Injury prevention programs (IPPs) are an inherent part of training in recreational and professional sports. Providing performance-enhancing benefits in addition to injury prevention may help adjust coaches’ and athletes’ attitudes towards implementing injury prevention in their daily routine. Conventional thinking by players and coaches alike seems to suggest that IPPs need to be specific to one’s sport to allow for performance enhancement. This systematic literature review aims, first, to determine the nature of IPP exercises and whether they are specific to the sport or based on general conditioning; and second, to establish whether general, sport-specific, or mixed IPPs improve key performance indicators, with the aim of better facilitating long-term implementation of these programs.
Methods: PubMed and Web of Science were electronically searched throughout March 2018. The inclusion criteria were randomized control trials, publication dates between Jan 2006 and Feb 2018, athletes (11–45 years), injury prevention programs and included predefined performance measures that could be categorized into balance, power, strength, speed/agility and endurance. The methodological quality of included articles was assessed with the Cochrane Collaboration assessment tools.
Results: Of 6619 initial findings, 22 studies met the inclusion criteria. In addition, reference lists unearthed a further 6 studies, making a total of 28. Nine studies used sport-specific IPPs, eleven general and eight mixed prevention strategies. Overall, general programs ranged from 29–57% in their effectiveness across performance outcomes. Mixed IPPs improved balance outcomes in 80% of cases but only 20–44% in others. Sport-specific programs led to larger-scale improvements in balance (66%), power (83%), strength (75%), and speed/agility (62%).
Conclusion: Sports-specific IPPs have the strongest influence on most performance indices based on the significant improvement versus control groups. Other factors such as intensity, technical execution and compliance should be accounted for in future investigations in addition to exercise modality.
The increasing availability of off-the-shelf high-frequency components is making radar measurement popular in mainstream industrial applications. We present a cooperative FM radar for strongly reflective environments, devised for a range of up to approx. 120 m. The target is designed with an unambiguous signature method and satisfies coherence. A prototype built with commercial semiconductor components operates in the 24 GHz industrial, scientific and medical band. First experimental results taken in sewage pipes are presented, using the target prototype and a standard FMCW radio station. An overview of four data acquisition procedures is given.
Following a quantitative analysis of adequate feedstock, comprising 11 woody biomass species, four biochars were generated using a Kon-Tiki flame curtain kiln in the state of Aguascalientes, Mexico. Despite the high quality (certified by the European Biochar Certificate), the biochars contain substantial quantities of hazardous substances, such as polycyclic aromatic hydrocarbons, polychlorinated dibenzo-p-dioxins and dibenzofurans, polychlorinated biphenyls, and heavy metals, which can induce adverse effects if wrongly applied to the environment. To assess the toxicity of biochars to non-target organisms, toxicity tests with four benthic and zooplanktonic invertebrate species, the ciliate Paramecium caudatum, the rotifer Lecane quadridentata, and the cladocerans Daphnia magna and Moina macrocopa, were performed using biochar elutriates. In acute and chronic toxicity tests, no acute toxic effect on ciliates was detected, but significant lethality to rotifers and cladocerans was observed. This lethal toxicity might be due to the ingestion of biochar by cladocerans and rotifers and the subsequent enzymatic/mechanical digestion releasing toxic substances present in the biochar. No chronic toxicity was found where biochar elutriates were mixed with soil. These data indicate that it is instrumental to use toxicity tests to assess biochars’ toxicity to the environment, especially when applied close to sensitive habitats, and to adhere closely to the quantitative set-point values.
Online Learning algorithms and Indoor Positioning Systems are complex applications in the environment of cyber-physical systems. These distributed systems are created by networking intelligent machines and autonomous robots on the Internet of Things using embedded systems that enable the exchange of information at any time. This information is processed by Machine Learning algorithms to make decisions about current developments in production or to influence logistics processes for optimization purposes. In this article, we present and categorize the further development of the prototype of a novel Indoor Positioning System, which constantly adapts its knowledge to the conditions of its environment with the help of Online Learning. Here, we apply Online Learning algorithms in the field of sound-based indoor localization with low-cost hardware and demonstrate the improvement of the system over its predecessor and its adaptability for different applications in an experimental case study.
Internet of Things (IoT) and Artificial Intelligence (AI) are among the most promising and disruptive areas of current research and development. However, these areas require deep knowledge in multiple disciplines such as sensors, protocols, embedded programming, distributed systems, statistics and algorithms. This broad knowledge is not easy to acquire, and the software used to design these systems is becoming increasingly complex. Small and medium-sized enterprises therefore have problems developing new business ideas. However, node- and block-based software tools have been released and are freely available as open-source toolboxes. In this paper, we present an overview of multiple node- and block-based software tools for developing IoT- and AI-based business ideas. We arrange these tools according to their capabilities and further propose extensions and combinations of tools to design a useful open-source library for small and medium-sized enterprises that is easy to use and helps with rapid prototyping, enabling new business ideas to be developed using distributed computing.
Background: Telerehabilitation can contribute to the maintenance of successful rehabilitation regardless of location and time.
Objective: The aim of the study was to investigate a specific three-month interactive telerehabilitation with regard to effectiveness in functioning and return to work compared to usual aftercare.
Methods: From August 2016 to December 2017, 111 patients (mean 54.9 years old; SD 6.8; 54.3% female) with hip or knee replacement were enrolled in the randomized controlled trial. At discharge from inpatient rehabilitation and after three months, their distance in the 6-minute walk test was assessed as the primary endpoint. Other functional parameters, including health related quality of life, pain, and time to return to work, were secondary endpoints.
Results: Patients in the intervention group performed telerehabilitation for an average of 55.0 minutes (SD 9.2) per week. Adherence was high, at over 75%, until the 7th week of the three-month intervention phase. Almost all the patients and therapists used the communication options. Both the intervention group (average difference 88.3 m; SD 57.7; P=.95) and the control group (average difference 79.6 m; SD 48.7; P=.95) increased their distance in the 6-minute walk test. Improvements in other functional parameters, as well as in quality of life and pain, were achieved in both groups. The higher proportion of working patients in the intervention group (64.6%; P=.01) versus the control group (46.2%) is of note.
Conclusions: The effect of the investigated telerehabilitation therapy in patients following knee or hip replacement was equivalent to the usual aftercare in terms of functional testing, quality of life, and pain. Since a significantly higher return-to-work rate could be achieved, this therapy might be a promising supplement to established aftercare.
Background: To facilitate access to evidence-based care for back pain, a German private medical insurance offered a health program proactively to their members. Feasibility and long-term efficacy of this approach were evaluated.
Methods: Using Zelen’s design, adult members of the health insurance with chronic back pain according to billing data were randomized to the intervention (IG) or the control group (CG). Participants allocated to the IG were invited to participate in the comprehensive health program comprising medical exercise therapy and lifestyle coaching, and those allocated to the CG to a longitudinal back pain survey. Primary outcomes were back pain severity (Korff’s Chronic Pain Grade Questionnaire) as well as health-related quality of life (SF-12), assessed by identical online questionnaires at baseline and 2-year follow-up in both study arms. In addition to analyses of covariance, a subgroup analysis explored the heterogeneity of treatment effects among different risks of back pain chronification (STarT Back Tool).
Results: Out of 3462 persons selected, randomized and thereafter contacted, 552 agreed to participate. At the 24-month follow-up, data were available on 189 of 258 (73.3%) in the IG and on 255 of 294 (86.7%) in the CG. Significant, small beneficial effects were seen in the primary outcomes: compared to the CG, the IG reported less disability (1.6 vs 2.0; p = 0.025; d = 0.24) and scored better on the SF-12 physical health scale (43.3 vs 41.0; p < 0.007; d = 0.26). No effect was seen in back pain intensity or in the SF-12 mental health scale. Persons with medium or high risk of back pain chronification at baseline responded better to the health program in all primary outcomes than the subgroup with low risk at baseline.
Conclusions: After 2 years, the proactive health program resulted in small positive long-term improvements. Using risk screening prior to inclusion in the health program might increase the percentage of participants deriving benefits from it.
Background: Stratified care is an up-to-date treatment approach suggested for patients with back pain in several guidelines. A comprehensively studied stratification instrument is the STarT Back Tool (SBT). It was developed to stratify patients with back pain into three subgroups, according to their risk of persistent disabling symptoms. The primary aim was to analyse the disability differences in patients with back pain 12 months after inclusion according to the subgroups determined at baseline using the German version of the SBT (STarT-G). Moreover, the potential to improve the prognosis for disability by adding further predictor variables, an analysis of differences in pain intensity according to the STarT-Classification, and discriminative ability were investigated.
Methods: Data from the control group of a randomized controlled trial were analysed. Trial participants were members of a private medical insurance with a minimum age of 18 and indicated as having persistent back pain. Measurements were made for the risk of back pain chronification using the STarT-G, disability (as primary outcome) and back pain intensity with the Chronic Pain Grade Scale (CPGS), health-related quality of life with the SF-12, psychological distress with the Patient Health Questionnaire-4 (PHQ-4) and physical activity. Analysis of variance (ANOVA), multiple linear regression, and area under the curve (AUC) analysis were conducted.
Results: The mean age of the 294 participants was 53.5 (SD 8.7) years, and 38% were female. The ANOVA for disability and pain showed significant differences (p < 0.01) among the risk groups at 12 months. Post hoc Tukey tests revealed significant differences among all three risk groups for every comparison for both outcomes. The AUC for the STarT-G’s ability to discriminate reference standard ‘cases’ for chronic pain status at 12 months was 0.79. A prognostic model including the STarT-Classification and the variables global health and disability at baseline explained 45% of the variance in disability at 12 months.
Conclusions: Disability differences in patients with back pain after a period of 12 months are in accordance with the subgroups determined using the STarT-G at baseline. Results should be confirmed in a study developed with the primary aim to investigate those differences.
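The discriminative ability reported above (AUC 0.79) can be illustrated with a minimal sketch: the AUC equals the Mann-Whitney probability that a randomly chosen 'case' scores higher than a randomly chosen 'non-case'. The scores below are toy values, not the study's data:

```python
def auc_from_scores(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: probability that a randomly
    chosen 'case' scores higher than a randomly chosen 'non-case',
    counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# toy screening scores: cases tend to score higher than non-cases
cases = [7, 5, 6]
noncases = [3, 5, 2]
print(auc_from_scores(cases, noncases))  # ≈ 0.944
```

An AUC of 0.5 would indicate no discrimination; values near 1.0 indicate that the screening score almost always ranks cases above non-cases.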
This article investigates the representation of the issue of refugees travelling to the Italian coast that was reported by two major Italian newspapers between August 8th and August 19th, 2017. Using analysis tools belonging to communication theory and cognitive sciences, i.e. the concepts of frame and attitude, this article highlights two major points: firstly, the analysis reveals how the two newspapers aimed at establishing a specific relationship with their readers on this topic in the relevant period on the basis of specific interpretative models; secondly, each of these interpretative models relies on the representation of specific emotions which play a central role in the interpretation of reality according to a characteristic facet of the definition of post-truth.
We present the concrete realization of a virtual laboratory equipped with a pedagogical agent. Its functionality and media didactics take into account the results of a usability test on a prototype system and the students' demand for such automated assistance, as obtained from a preliminary survey. The pedagogical agent mediates between the content and the learner by activating him or her. To provide information about the learner's skills, we propose a pragmatic and simplified competence model that is based on fundamental representations in physics (experiment, figure, text and equation). Moreover, automated feedback relates the student's self-assessment and submitted answer to the correctness of the respective task. As a consequence, the pedagogical agent enables mental reflection for a critical review of one's own learning process. Interestingly, learning pathways can be envisioned, thus giving valuable insight into individual strengths and weaknesses.
This study compares the environmental impacts of petrol, diesel, natural gas, and electric vehicles using a process-based attributional life cycle assessment (LCA) and the ReCiPe characterization method that captures 18 impact categories and the single-score endpoints. Unlike common practice, we derive the cradle-to-grave inventories from an originally combustion-engine VW Caddy that was disassembled and electrified in our laboratory, and whose energy consumption was measured on the road. Ecoinvent 2.2 and 3.0 emission inventories were contrasted, exhibiting largely insignificant impact deviations. The Ecoinvent 3.0 emission inventory for the diesel car was additionally updated with recent close-to-real-world emission values and revealed strong increases in four midpoint impact categories when matched with the standard Ecoinvent 3.0 emission inventory. Producing batteries with photovoltaic electricity instead of Chinese coal-based electricity decreases the climate impacts of battery production by 69%. Break-even mileages for the electric VW Caddy to surpass the combustion-engine models in terms of climate change impact ranged from 17,000 to 310,000 km under various conditions. Break-even mileages, when contrasting the VW Caddy and a mini car (SMART), which was likewise electrified, did not show systematic differences. Also, CO2-eq emissions in terms of passenger kilometers travelled (54–158 g CO2-eq/PKT) are fairly similar based on 1 person travelling in the mini car and 1.57 persons in the mid-sized car (VW Caddy). Additionally, under optimized conditions (battery production and use phase utilizing renewable electricity), the two electric cars can compete well in terms of CO2-eq emissions per passenger kilometer with other traffic modes (diesel bus, coach, trains) over their lifetime. Only electric buses were found to have lower life cycle carbon emissions (27–52 g CO2-eq/PKT) than the two electric passenger cars.
Background: As electric kick scooters, three-wheelers, and passenger cars enter the streets, efficiency trade-offs across vehicle types gain practical relevance for consumers and policy makers. Here, we compile a comprehensive dataset of 428 electric vehicles, including seven vehicle types and information on certified and real-world energy consumption. Regression analysis is applied to quantify trade-offs between energy consumption and other vehicle attributes.
Results: Certified and real-world energy consumption of electric vehicles increase by 60% and 40%, respectively, with each doubling of vehicle mass, but only by 5% with each doubling of rated motor power. These findings hold roughly also for passenger cars whose energy consumption tends to increase 0.6 ± 0.1 kWh/100 km with each 100 kg of vehicle mass. Battery capacity and vehicle mass are closely related. A 10 kWh increase in battery capacity increases the mass of electric cars by 15 kg, their drive range by 40–50 km, and their energy consumption by 0.7–1.0 kWh/100 km. Mass-produced state-of-the-art electric passenger cars are 2.1 ± 0.8 kWh/100 km more efficient than first-generation vehicles, produced at small scale.
Conclusion: Efficiency trade-offs in electric vehicles differ from those in conventional cars—the latter showing a strong dependency of fuel consumption on rated engine power. Mass-related efficiency trade-offs in electric vehicles are large and could be tapped by stimulating mode shift from passenger cars to light electric road vehicles. Electric passenger cars still offer potentials for further efficiency improvements. These could be exploited through a dedicated energy label with battery capacity as utility parameter.
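The reported elasticity — roughly +60% certified consumption per doubling of vehicle mass — corresponds to a slope of log2(1.6) ≈ 0.68 in a log-log regression. A minimal sketch with synthetic data (not the study's dataset) illustrates how such a trade-off is estimated:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic fleet: consumption scales as mass**b with b = log2(1.6),
# i.e. +60% per doubling of vehicle mass (illustrative, not the paper's data)
b_true = np.log2(1.6)
mass = rng.uniform(300, 2500, size=200)                      # kg
consumption = 0.5 * mass**b_true * rng.lognormal(0, 0.05, size=200)

# the elasticity is the slope of the regression in log-log space
slope, intercept = np.polyfit(np.log(mass), np.log(consumption), 1)
print(2**slope)  # ≈ 1.6, i.e. +60% per doubling of mass
```

The same approach generalizes to multiple regressors (e.g. adding log motor power), which is how independent mass and power elasticities can be separated.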
Hydrochar derived from Argan nut shell (ANS) was synthesized and applied to remove bisphenol A (BPA) and diuron. The results indicated that the hydrochar prepared at 200 °C (HTC@ANS-200) possessed a higher specific surface area (42 m²/g) than the hydrochar prepared at 180 °C (HTC@ANS-180; 17 m²/g). The hydrochars exhibited spherical particles rich in functional groups. HTC@ANS-200 showed high adsorption efficiency, removing about 92% of BPA and 95% of diuron. The maximum Langmuir adsorption capacities of HTC@ANS-200 at room temperature were 1162.79 mg/g for bisphenol A and 833.33 mg/g for diuron (higher than most reported adsorbents). The adsorption process was spontaneous (−ΔG°) and exothermic (−ΔH°). Excellent reusability was retained after five cycles; the removal efficiency decreased only slightly, by 4% for BPA and 1% for diuron. Fourier-transform infrared spectrometry demonstrated that aromatic C=C and OH groups played major roles in the adsorption mechanisms of BPA and diuron in this study. The high adsorption capacity was attributed to the beneficial porosity (the pores of HTC@ANS-200 are larger than the BPA and diuron molecules) and surface functional groups. BPA and diuron adsorption also occurred via multiple mechanisms, including pore filling, π–π interactions, and hydrogen bonding on HTC@ANS-200.
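For illustration, the Langmuir isotherm underlying the reported maximum capacities is q = q_max·K·C/(1 + K·C); a minimal sketch follows, where the affinity constant K is an assumed value, not one fitted in the study:

```python
def langmuir_q(c, q_max, k):
    """Equilibrium adsorbed amount (mg/g) from the Langmuir isotherm:
    q = q_max * k * c / (1 + k * c), saturating at q_max."""
    return q_max * k * c / (1.0 + k * c)

q_max = 1162.79   # mg/g, reported maximum capacity for BPA
k = 0.05          # L/mg, assumed affinity constant for illustration
for c in (10, 100, 1000):   # equilibrium concentrations in mg/L
    print(c, round(langmuir_q(c, q_max, k), 1))
```

At low concentration the uptake grows almost linearly (Henry regime); at high concentration it saturates at q_max, which is why q_max is reported as the maximum adsorption capacity.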
To optimize fed-batch fermentations of methylotrophic organisms, an online measurement method is presented that determines the methanol concentration in the medium during fermentation using a sweep-gas pervaporation principle. In contrast to other analytical methods, this method makes it possible to keep the substrate concentration at a defined value via closed-loop control in processes with methanol as the central substrate. Difficulties in adapting the measurement method to fermentation processes, as well as how they were overcome, are presented.
Irrigated paddy rice agriculture accounts for a major share of the Asia-Pacific region’s total water withdrawal. Furthermore, climate change induced water scarcity in the Asia-Pacific region is projected to intensify in the near future. Therefore, methods to reduce water consumption through efficiency measures are needed to ensure long-term (water) sustainability. The subak irrigation system of Karangasem, Indonesia, and the tameike system of Kunisaki, Japan, are two examples of sustainable paddy rice irrigation. This research, through interviews and an extensive survey, comparatively assessed the socio-environmental sustainability of the two irrigation management systems with special reference to the intensity and nature of social capital, equity of water distribution, water demand, water footprint, and water quality. The prevailing social capital paradigm of each system was also compared to its overall managerial outcomes to analyze how cooperative action contributes to sustainable irrigation management. Both systems show a comparable degree of sustainable irrigation management, ensure an equitable use of water, and maintain relatively fair water quality due to the land-use practices adopted. However, the systems differ in water demand and water efficiency, principally because of differences in the irrigation management strategies: human versus structural. These findings could help devise mechanisms for transitioning to sustainable irrigation management in the commercially oriented paddy rice agricultural systems across the Asia-Pacific region.
Automated evaluation of contact angles in a three-phase system of selective agglomeration in liquids
(2020)
This study aims at an automated evaluation of contact angles in a three-phase system of selective agglomeration in liquids. Wetting properties, quantified by contact angles, are essential in many industries and their processes. Selective agglomeration as a three-phase system consists of a suspension liquid, a heterogeneous solid phase and an immiscible binding liquid. It offers the chance of establishing more efficient separation processes because of the shape-dependent wetting properties of fine particles (size ≤ 10 µm). In the present paper, an experimental setup for contact angle measurements of fine particles based on the sessile drop method is described. Moreover, a new algorithm is discussed, which can be used to automatically compute contact angles from image data captured by a high-speed camera. The algorithm uses a marker-based watershed transform to segment the image data into regions representing the droplet, the carrier plate coated by fine particles, and the background. The main idea is a parametric modelling approach for the time-dependent droplet contour by an ellipse.
The results show that the development of the dynamic contact angles towards a static contact angle can be efficiently determined based on this novel technique. These findings are useful for a detailed discrimination of wetting properties of spherical and irregularly shaped particles as well as their wetting kinetics. Also, a better understanding of selective agglomeration processes will be promoted by this user-friendly method.
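As a simplified illustration of the ellipse-based contour model described above — not the authors' algorithm — the contact angle can be read off analytically where a fitted ellipse intersects the baseline:

```python
import math

def contact_angle_deg(a, b, y0):
    """Contact angle of a sessile droplet modelled as the ellipse
    x^2/a^2 + (y - y0)^2/b^2 = 1 resting on the baseline y = 0.
    a, b: semi-axes; y0: height of the ellipse centre above the baseline
    (negative if the centre lies below it); requires |y0| < b."""
    # right-hand intersection of ellipse and baseline
    x_c = a * math.sqrt(1.0 - (y0 / b) ** 2)
    if y0 == 0.0:
        return 90.0  # baseline through the centre: vertical tangent
    # implicit differentiation: dy/dx = -b^2 x / (a^2 (y - y0)); at y = 0:
    slope = (b ** 2 * x_c) / (a ** 2 * y0)
    # tangent direction pointing upward along the droplet surface
    tx, ty = (1.0, slope) if slope > 0 else (-1.0, -slope)
    # angle between the inward baseline direction (-1, 0) and the tangent
    cos_theta = -tx / math.hypot(tx, ty)
    return math.degrees(math.acos(cos_theta))

# sanity checks with a circular cap (a = b = 1):
print(round(contact_angle_deg(1, 1, 0.0), 1))                  # 90.0
print(round(contact_angle_deg(1, 1, -1 / math.sqrt(2)), 1))    # 45.0
print(round(contact_angle_deg(1, 1, 1 / math.sqrt(2)), 1))     # 135.0
```

In the paper's setting, the ellipse parameters would come from fitting the segmented droplet contour in each high-speed frame, so the dynamic contact angle can be tracked frame by frame.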
Local biodiversity trends over time are likely to be decoupled from global trends, as local processes may compensate or counteract global change. We analyze 161 long-term biological time series (15–91 years) collected across Europe, using a comprehensive dataset comprising ~6,200 marine, freshwater and terrestrial taxa. We test whether (i) local long-term biodiversity trends are consistent among biogeoregions, realms and taxonomic groups, and (ii) changes in biodiversity correlate with regional climate and local conditions. Our results reveal that local trends of abundance, richness and diversity differ among biogeoregions, realms and taxonomic groups, demonstrating that biodiversity changes at local scale are often complex and cannot be easily generalized. However, we find increases in richness and abundance with increasing temperature and naturalness as well as a clear spatial pattern in changes in community composition (i.e. temporal taxonomic turnover) in most biogeoregions of Northern and Eastern Europe.
Background: Recent shoulder injury prevention programs have utilized resistance exercises combined with different forms of instability, with the goal of eliciting functional adaptations and thereby reducing the risk of injury. However, it is still unknown how an unstable weight mass (UWM) affects the muscular activity of the shoulder stabilizers. The aim of the study was to assess the neuromuscular activity of dynamic shoulder stabilizers under four conditions of stable and unstable weight mass during three shoulder exercises. It was hypothesized that a combined condition of weight with UWM would elicit greater activation due to the increased stabilization demand.
Methods: Sixteen participants (7 m/9 f) were included in this cross-sectional study and prepared with an EMG setup for the following muscles: upper/lower trapezius (U.TA/L.TA), lateral deltoid (DE), latissimus dorsi (LD), serratus anterior (SA) and pectoralis major (PE). A maximal voluntary isometric contraction test (MVIC; 5 s) was performed on an isokinetic dynamometer. Next, internal/external rotation (In/Ex), abduction/adduction (Ab/Ad) and diagonal flexion/extension (F/E) exercises (5 reps) were performed with four custom-made pipes representing different exercise conditions: first the empty pipe (P; 0.5 kg) and then, in randomized order, the water-filled pipe (PW; 1 kg), the weight pipe (PG; 4.5 kg) and the weight + water-filled pipe (PWG; 4.5 kg), while EMG was recorded. Raw root-mean-square values (RMS) were normalized to MVIC (%MVIC). Differences between conditions for RMS%MVIC, scapular stabilizer ratios (SR: U.TA/L.TA; U.TA/SA) and contraction ratios (CR: concentric/eccentric) were analyzed (paired t-test; p ≤ 0.05; Bonferroni adjusted α = 0.008).
Results: PWG showed significantly greater muscle activity for all exercises and all muscles except PE compared to P and PW. Condition PG elicited muscular activity comparable to PWG (p > 0.008), with significantly lower activation of L.TA and SA in the In/Ex rotation. The SR ratio was significantly higher in PWG compared to P and PW. No significant differences were found for the CR ratio in any exercise or muscle.
Conclusion: Higher weight generated greater muscle activation, whereas a UWM raised neuromuscular activity by increasing the stabilization demands. Especially in the In/Ex rotation, a UWM increased the RMS%MVIC and the SR ratio. This might improve training effects in shoulder injury prevention and rehabilitation programs.
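The %MVIC normalization described in the Methods (raw RMS values expressed relative to the MVIC trial) can be sketched as follows; the signals and amplitudes below are synthetic placeholders, not study data.

```python
import numpy as np

def rms(signal):
    """Root-mean-square amplitude of an EMG window."""
    return np.sqrt(np.mean(np.square(signal)))

def normalize_to_mvic(task_emg, mvic_emg):
    """Express task EMG amplitude as a percentage of the
    maximal voluntary isometric contraction (MVIC) amplitude."""
    return 100.0 * rms(task_emg) / rms(mvic_emg)

# Illustrative synthetic signals (arbitrary units, not study data)
rng = np.random.default_rng(0)
mvic = rng.normal(0.0, 1.0, 5000)   # MVIC reference trial
task = rng.normal(0.0, 0.4, 5000)   # exercise trial, lower amplitude

activation = normalize_to_mvic(task, mvic)  # roughly 40 %MVIC here
```

The paired comparisons between conditions (P, PW, PG, PWG) are then run on these normalized values rather than on raw amplitudes, which removes between-subject differences in absolute EMG magnitude.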
The objective investigation of the dynamic properties of vocal fold vibrations demands the recording and further quantitative analysis of laryngeal high-speed video (HSV). Quantification of the vocal fold vibration patterns requires as a first step the segmentation of the glottal area within each video frame, from which the vibrating edges of the vocal folds are usually derived. Consequently, the outcome of any further vibration analysis depends on the quality of this initial segmentation step. In this work we propose, for the first time, a procedure to fully automatically segment not only the time-varying glottal area but also the vocal fold tissue directly from laryngeal HSV using a deep Convolutional Neural Network (CNN) approach. Eighteen different CNN configurations were trained and evaluated on a total of 13,000 HSV frames obtained from 56 healthy and 74 pathologic subjects. The segmentation quality of the best performing CNN model, which uses Long Short-Term Memory (LSTM) cells to also take the temporal context into account, was thoroughly investigated on 15 test video sequences comprising 100 consecutive images each. As performance measures, the Dice Coefficient (DC) as well as the precision of four anatomical landmark positions were used. Over all test data, a mean DC of 0.85 was obtained for the glottis and 0.91 and 0.90 for the right and left vocal fold (VF), respectively. The grand average precision of the identified landmarks amounts to 2.2 pixels and is in the same range as comparable manual expert segmentations, which can be regarded as the gold standard. The method proposed here requires no user interaction and overcomes the limitations of current semiautomatic or computationally expensive approaches.
Thus, it also allows the analysis of long HSV sequences and holds the promise of facilitating the objective analysis of vocal fold vibrations in clinical routine. The dataset used here, including the ground truth, will be provided freely to all scientific groups to allow quantitative benchmarking of segmentation approaches in the future.
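The Dice Coefficient used as the main performance measure compares a predicted binary mask with the ground-truth mask; a minimal sketch on toy masks (not the study's data):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice Coefficient between two binary segmentation masks:
    DC = 2*|pred AND truth| / (|pred| + |truth|), in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 4x4 masks: predicted glottal area vs. ground truth
pred  = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
dc = dice_coefficient(pred, truth)  # 2*3/(4+3) = 6/7 ≈ 0.857
```

A DC of 1 means perfect overlap; the reported glottis mean of 0.85 thus indicates that predicted and expert masks share most of their area.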
Electric drive systems are increasingly used in automobiles. However, the combination of comfort, dynamics and safety requirements places high demands on torque accuracy. The complex interplay of battery, inverter and electrical machine introduces numerous system uncertainties, arising from parameter fluctuations and measurement errors, that influence the system performance. In this paper, these influences on the closed-loop torque control are analyzed and quantified using a variance-based sensitivity analysis. The method makes it possible to connect the variance of the torque accuracy with the parameter uncertainties causing this variance. Moreover, it quantifies the influences of the parameters independently of the complexity of the analyzed system. In addition, two methods to ensure convergence of the estimated variance-based sensitivity measures are proposed. The results of the analysis are presented for 19 static operating points of a battery electric drive system.
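Variance-based sensitivity analysis attributes the output variance (here, the torque error) to individual input uncertainties via first-order Sobol indices. The following is a generic pick-freeze Monte Carlo sketch on a toy linear stand-in model, not the paper's drive-system model:

```python
import numpy as np

def sobol_first_order(f, n, dim, rng):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    S_i = Var(E[Y|X_i]) / Var(Y) for a model f with `dim` independent
    standard-normal inputs."""
    A = rng.standard_normal((n, dim))
    B = rng.standard_normal((n, dim))
    yA, yB = f(A), f(B)
    var_y = np.var(yA)
    S = np.empty(dim)
    for i in range(dim):
        AB = B.copy()
        AB[:, i] = A[:, i]           # freeze input i at the values from A
        S[i] = np.mean(yA * (f(AB) - yB)) / var_y
    return S

def model(X):
    # Toy stand-in: output variance dominated by parameter 0
    return 3.0 * X[:, 0] + 1.0 * X[:, 1]

S = sobol_first_order(model, 100_000, 2, np.random.default_rng(1))
# analytically S = [9/10, 1/10], so the estimate is close to [0.9, 0.1]
```

An index near 1 marks a parameter whose uncertainty drives most of the torque variance; this is the kind of ranking the analysis produces per operating point.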
Since operational managers often monitor large numbers of wind turbines (WTs), they depend on a toolset that provides them with highly condensed information to identify and prioritize low-performing WTs or schedule preventive maintenance measures. Power curves are a frequently used tool to assess the performance of WTs. The power curve health value (HV) used in this work is designed to detect power curve anomalies, since small deviations in the power curve are not easy to identify. It evaluates deviations in the linear region of power curves by performing a principal component analysis. To calculate the HV, the standard deviation in the direction of the second principal component of a reference data set is compared to that of a combined data set consisting of the reference data and the data of the evaluated period. This article examines the applicability of this HV for different purposes as well as its sensitivities, and provides a modified HV approach to make it more robust and suitable for heterogeneous data sets. The modified HV was tested on ENGIE's open data wind farm and on data from on- and offshore WTs from the WInD-Pool. It detected anomalies in the linear region of the power curve in a reliable and sensitive manner and was also able to detect long-term power curve degradation. Moreover, about 7 % of all corrective maintenance measures were preceded by high HVs, with a median alarm horizon of three days. Overall, the HV proved to be a promising tool for various applications.
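The HV idea, comparing second-principal-component scatter of a reference data set with that of the combined set, can be illustrated as below. This is a simplified sketch of the concept; the exact published HV definition may differ in detail.

```python
import numpy as np

def second_pc_std(data):
    """Standard deviation of (wind speed, power) points projected onto
    the second principal component, i.e. across the power curve."""
    centered = data - data.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    pc2 = vecs[:, 0]  # eigh sorts ascending: column 0 = least variance
    return np.std(centered @ pc2)

def health_value(reference, evaluated):
    """Simplified HV-style indicator (assumption: a sketch, not the
    paper's exact formula): scatter of the combined set relative to
    the reference set. Values well above 1 flag anomalous scatter."""
    combined = np.vstack([reference, evaluated])
    return second_pc_std(combined) / second_pc_std(reference)

# Synthetic linear power-curve region: reference is tight, the
# evaluated period scatters more around the same line.
rng = np.random.default_rng(2)
x = rng.uniform(4, 10, 500)
ref = np.column_stack([x, 2 * x + rng.normal(0, 0.2, 500)])
x2 = rng.uniform(4, 10, 200)
bad = np.column_stack([x2, 2 * x2 + rng.normal(0, 1.0, 200)])
hv = health_value(ref, bad)  # clearly > 1: the period is anomalous
```

Because the first principal component follows the power curve itself, scatter along the second component isolates deviations from the curve rather than normal movement along it.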
Stabilization exercise (SE) is an evidence-based option for the management of chronic non-specific low back pain (LBP). However, the optimal dose-response relationship for the utmost treatment success is still unknown. The purpose is to systematically review the dose-response relationship of stabilization exercises on pain and disability in patients with chronic non-specific LBP. A systematic review with meta-regression was conducted (PubMed, Web of Knowledge, Cochrane). Eligibility criteria were RCTs on patients with chronic non-specific LBP, written in English or German, adopting a longitudinal core-specific/stabilizing/motor control exercise intervention with at least one outcome for pain intensity and/or disability. Meta-regressions (dependent variable = effect sizes (Cohen's d) of the interventions for pain and for disability; independent variables = training characteristics (duration, frequency, time per session)), controlled for (low) study quality (PEDro) and (low) sample size (n), were conducted to reveal the optimal dose required for therapy success. From the 3,415 studies initially identified, 50 studies (n = 2,786 LBP patients) were included, of which n = 1,239 patients received SE. Training duration was 7.0 ± 3.3 weeks, training frequency was 3.1 ± 1.8 sessions per week, with a mean training time of 44.6 ± 18.0 min per session. The meta-regressions' mean effect size was d = 1.80 (pain) and d = 1.70 (disability). Total R2 was 0.445 and 0.17, respectively. Moderate-quality evidence (R2 = 0.231) revealed that a training duration of 20 to 30 min elicited the largest effect (both on pain and disability, logarithmic association). Low-quality evidence (R2 = 0.125) revealed that training 3 to 5 times per week led to the largest effect of SE in patients with chronic non-specific LBP (inverted U-shaped association).
In patients with non-specific chronic LBP, stabilization exercise with a training frequency of 3 to 5 times per week (Grade C) and a training time of 20 to 30 min per session (Grade A) elicited the largest effect on pain and disability.
Study design: Systematic review with meta-analysis and meta-regression.
Background and objectives: We systematically reviewed and delineated the existing evidence on sustainability effects of motor control exercises on pain intensity and disability in chronic low back pain patients when compared with an inactive or passive control group or with other exercises. Secondary aims were to reveal whether moderating factors like the time after intervention completion, the study quality, and the training characteristics affect the potential sustainability effects.
Methods: Relevant scientific databases (Medline, Web of Knowledge, Cochrane) were screened. Eligibility criteria for selecting studies: all RCTs and CTs on chronic (≥ 12/13 weeks) nonspecific low back pain, written in English or German, adopting a longitudinal core-specific/stabilizing sensorimotor control exercise intervention with at least one pain intensity and disability outcome assessment at a follow-up (sustainability) timepoint of ≥ 4 weeks after exercise intervention completion.
Results and conclusions: From the 3,415 studies initially retrieved, 10 (2 CTs & 8 RCTs) on N = 1,081 patients were included in the review and analyses. Low to moderate quality evidence shows a sustainable positive effect of motor control exercise on pain (SMD = -.46, Z = 2.9, p < .001) and disability (SMD = -.44, Z = 2.5, p < .001) in low back pain patients when compared to any control. The subgroup effects are less conclusive, and no clear direction emerges for the sustainability effect at short versus mid versus long term, for the type of comparator, or for the dose of training. Low quality studies overestimated the effect of motor control exercises.
Background: Core-specific sensorimotor exercises are proven to enhance neuromuscular activity of the trunk, improve athletic performance and prevent back pain. However, the dose-response relationship and, therefore, the dose required to improve trunk function are still under debate. The purpose of the present trial will be to compare four different sensorimotor exercise intervention strategies with respect to their effect on trunk function.
Methods/design: A single-blind, four-armed, randomized controlled trial with a 3-week (home-based) intervention phase and two measurement days pre and post intervention (M1/M2) is designed. Experimental procedures on both measurement days will include evaluation of maximum isokinetic and isometric trunk strength (extension/flexion, rotation) including perturbations, as well as neuromuscular trunk activity while performing strength testing. The primary outcome is trunk strength (peak torque). Neuromuscular activity (amplitude, latencies as a response to perturbation) serves as secondary outcome.
The control group will perform a standardized exercise program of four sensorimotor exercises (three sets of 10 repetitions) in each of six training sessions (30 min duration) over 3 weeks. The intervention groups’ programs differ in the number of exercises, sets per exercise and, therefore, overall training amount (group I: six sessions, three exercises, two sets; group II: six sessions, two exercises, two sets; group III: six sessions, one exercise, three sets). The intervention programs of groups I, II and III include additional perturbations for all exercises to increase both the difficulty and the efficacy of the exercises performed. Statistical analysis will be performed after examining the underlying assumptions for parametric and non-parametric testing.
Discussion: The results of the study will be clinically relevant, not only for researchers but also for (sports) therapists, physicians, coaches, athletes and the general population who have the aim of improving trunk function.
Since tangible assets of companies are becoming increasingly insignificant, emphasis should rather be placed on human capital as an essential source of competitive edge. Accordingly, this paper aims to shed light on the major demands that Millennials place on their prospective employers. It seeks to identify attractiveness factors that German retailers should particularly promote in order to succeed in the war for talents and attract the most promising candidates among the German Gen Y. The work is based on a mixed-methods approach. First, interviews with German retail experts as well as generational keynote speakers were conducted in order to obtain a deep understanding and assessment of the German retail landscape from a professional perspective. The insights gained were subsequently used to design a questionnaire, whose distribution led to a final sample of 216 usable responses by Millennials. The data obtained from the expert interviews and the survey were then compared in order to evaluate to what extent the expectations of the Millennials correspond to the experts' assessment. This study reveals Millennials to be driven by growth needs, such as a wide offer of development opportunities or scope for decision-making, when choosing an employer. Among the relatedness needs, a harmonious working environment is particularly important, whereas a free weekend ranks first among the existential needs. Moreover, male Millennials consider Media Markt to be the most popular employer in the German retail sector, while dm is preferred from a female perspective. Overall, employers in the German retail sector provide the majority of factors required by Millennials, yet retail is only considered the 4th most popular industry, behind the automotive, IT, and art and entertainment industries.
Our findings provide valuable practical implications, as the results may help companies build a target-group-specific employer brand. Marketing strategies can be aligned with the identified attractiveness factors to efficiently and cost-effectively attract and bind Millennials to the company. Customized recruiting campaigns enhance the appeal of an employer, driving the likelihood of obtaining the coveted status of Employer of Choice. To the best of the author's knowledge, no study has yet dealt specifically with the attractiveness factors demanded by Millennials in the context of the German retail sector or with their most aspired-to employers in this industry. Furthermore, the attractiveness factors identified in the literature were embedded in Alderfer's ERG theory. This work also offers a bilateral perspective through the survey conducted among Millennials, which was additionally expanded through the lens of experts.
Major financial institutions operate in different regions of the world, facing different regulatory landscapes for supply chain risks. In this environment, an optimization issue arises: how to best comply with the different regulations while achieving cost efficiency at the same time. In this research, the international regulatory landscape for supply chain risks of financial institutions is introduced and compared internationally. It is understood as an integral part of the supply chain risk management of financial institutions, which is analysed as the research background. Additionally, expert interviews are conducted in order to link the regulation analysis to the current challenges that financial institutions face. Finally, recommendations are developed on how banks can be cost efficient while remaining regulatory compliant in the face of increased international regulation in the area of supply chain risk management. The outcome of the underlying research shows that banking regulation in the area of supply chain risks is an important lever in the banking sector to protect customers and financial markets. However, the regulatory landscape is heterogeneous and not consistent on an international scale. Regulation in Asia is highly diverse across countries due to different states of economic development. The US applies a rather pragmatic approach to supply chain risk regulation, drawing on different standards from standard-setting institutions. Lastly, the EU is very restrictive and strives to unify regulation across member states. Banks should follow a consistent management approach, keeping in mind their international locations and the strictest regulatory environment they operate in, to improve cost efficiency while remaining regulatory compliant. Collaboration with and amongst regulators and other banks internationally is also recommended for improved cost efficiency.
Radar systems for contactless vital sign monitoring are well known and a current object of research. Such radar-based sensors can be used for monitoring elderly people in their homes, but also for detecting the activity of prisoners and for controlling electrical devices (light, audio, etc.) in smart living environments. Mostly, these sensors are intended to be mounted on the ceiling in the middle of a room. In retirement homes the rooms are mostly rectangular and of standardized size, and furniture like beds and seating is found at the borders or in the corners of the room. As the propagation path from the center of the room ceiling to the borders and corners of a room is 1.4 and 1.7 times longer, respectively, the power reflected by people located there is 6 or even 10 dB lower than if they were located in the center of the room. Furthermore, classical antennas in microstrip technology concentrate radiation in the broadside direction, and radar systems with only one single planar antenna must be mounted horizontally aligned when measuring in all directions. Thus, an antenna pattern that increases radiation towards the room corners and borders to compensate the free-space loss is needed. In this contribution, a specification of typical room sizes in retirement homes is given. A method for shaping the antenna gain in the E-plane by a one-dimensional series-fed traveling-wave patch array, and in the H-plane by an antenna feeding network, is presented for improved detection of people at the room borders and corners with a 24 GHz digital beamforming (DBF) radar system. The feeding network is a parallel-fed power divider for microstrip patch antennas at 24 GHz. Both approaches are explained in theory, and the design parameters and the layout of the antennas are given. The simulations of the antenna arrays were executed with CST MWS. Simulations and measurements of the proposed antennas are compared to each other. Both antennas are used for both the transmit and the receive channel.
The sensor topology of the radar system is explained. Furthermore, the measurement results of the prototype are presented and discussed.
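The quoted 6 and 10 dB figures follow from two-way free-space propagation: received radar power scales with 1/R⁴, so a path 1.4 (≈ √2) or 1.7 (≈ √3) times longer costs 40·log10(ratio) dB:

```python
import math

def radar_loss_db(range_ratio):
    """Extra two-way free-space loss, in dB, when the target range
    grows by `range_ratio` (received radar power scales with 1/R^4)."""
    return 40.0 * math.log10(range_ratio)

# Ceiling-mounted sensor in a rectangular room: the path to the wall
# midpoint is ~1.4x and to the corner ~1.7x the path to a point
# directly below the sensor.
loss_wall = radar_loss_db(1.4)    # ≈ 5.8 dB (the text's ~6 dB)
loss_corner = radar_loss_db(1.7)  # ≈ 9.2 dB (the text's ~10 dB)
```

This is exactly the loss the shaped antenna pattern is meant to compensate: the gain must rise towards the borders and corners by roughly these amounts.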
Background: The Musculoskeletal Health Questionnaire (MSK-HQ) has been developed to measure musculoskeletal health status across musculoskeletal conditions and settings. However, the MSK-HQ needs to be further evaluated across settings and different languages.
Objective: The objective of the study was to evaluate and compare measurement properties of the MSK-HQ across Danish (DK) and English (UK) cohorts of patients from primary care physiotherapy services with musculoskeletal pain.
Methods: MSK-HQ was translated into Danish according to international guidelines. Measurement invariance was assessed by differential item functioning (DIF) analyses. Test-retest reliability, measurement error, responsiveness and minimal clinically important change (MCIC) were evaluated and compared between DK (n = 153) and UK (n = 166) cohorts.
Results: The Danish version demonstrated acceptable face and construct validity. Of the 14 MSK-HQ items, three showed DIF for language (pain/stiffness at night, understanding condition and confidence in managing symptoms) and three showed DIF for pain location (walking, washing/dressing and physical activity levels). Intraclass correlation coefficients for test-retest were 0.86 (95% CI 0.81 to 0.91) for the DK cohort and 0.77 (95% CI 0.49 to 0.90) for the UK cohort. The systematic measurement error was 1.6 and 3.9 points for the DK and UK cohorts, respectively, with the random measurement error being 8.6 and 9.9 points. Areas under the receiver operating characteristic (ROC) curves of the change scores against patients' own judgment at 12 weeks exceeded 0.70 in both cohorts. Absolute and relative MCIC estimates were 8–10 points and 26% for the DK cohort and 6–8 points and 29% for the UK cohort.
Conclusions: The measurement properties of the MSK-HQ were acceptable across countries, but seem more suited for group-level than individual-level evaluation. Researchers and clinicians should be aware that some discrepancy exists and should take the observed measurement error into account when evaluating change in scores over time.
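Systematic and random measurement error from test-retest data, as reported in the Results, are commonly derived from the distribution of test-retest differences; a sketch with made-up scores (the study's exact definitions may differ):

```python
import numpy as np

def measurement_error(test, retest):
    """Systematic error = mean test-retest difference.
    Random error taken here as 1.96 * SD of the differences
    (assumption: one common convention, not necessarily the study's)."""
    d = np.asarray(retest, dtype=float) - np.asarray(test, dtype=float)
    return d.mean(), 1.96 * d.std(ddof=1)

# Hypothetical paired questionnaire scores for 8 stable patients
test   = [30, 42, 25, 38, 47, 33, 29, 41]
retest = [32, 40, 27, 41, 45, 36, 30, 43]
systematic, random_err = measurement_error(test, retest)
```

A change score smaller than the random error cannot be distinguished from measurement noise, which is why the abstract recommends group-level rather than individual-level evaluation.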
Stratified care for low back pain (LBP) has been shown to be clinically and cost-effective in the UK, but its transferability to the German healthcare system is unknown. This study explores LBP patients' perspectives regarding future implementation of stratified care, through in-depth interviews (n = 12). The STarT-Back-Tool was completed by participants prior to the interviews. Interview data were analysed using Grounded Theory. The overarching theme identified from the data was 'treatment success', with subthemes of 'assessment and treatment planning', 'acceptance of the questionnaire' and 'contextual factors'. Patients identified the underlying cause of pain as being of great importance (whereas STarT-Back allocates treatment based on prognosis). The integration of the STarT-Back-Tool in consultations was considered helpful as long as it does not disrupt the therapeutic relationship, and acceptable if the tool results are handled confidentially. Results indicate that for patients to find STarT-Back acceptable, the shift from a focus on identifying a cause of pain and subsequent diagnosis to prediction-orientated treatment planning must be made clear. Patient 'buy-in' is important for successful uptake of clinical interventions, and the findings can help to inform future strategies for implementing STarT-Back in Germany, as well as having potential implications for transferability to other similar healthcare systems.
Deep brain stimulation (DBS) is an established therapy for movement disorders such as Parkinson's disease (PD) and essential tremor (ET). Adjusting the stimulation parameters, however, is a labour-intensive process and often requires several patient visits. Physicians prefer objective tools to improve (or maintain) the performance of DBS. Wearable motion sensors (WMS) are able to detect some manifestations of pathological signs, such as tremor in PD. However, the interpretation of sensor data is often highly technical, and methods to visualise tremor data of patients undergoing DBS in a clinical setting are lacking. This work aims to visualise the dynamics of tremor responses to DBS parameter changes with WMS while patients perform clinical hand movements. To this end, we attended the DBS programming sessions of two patients with the aim of visualising certain aspects of the clinical examination. PD tremor and ET were effectively quantified by acceleration amplitude and frequency. Tremor dynamics were analysed and visualised based on setpoints, movement transitions and stability aspects. These methods have not yet been employed, and the examples demonstrate how tremor dynamics can be visualised with simple analysis techniques. We thereby provide a basis for future research on visualisation tools to assist clinicians who frequently encounter patients for DBS therapy, which could lead to benefits in terms of enhanced evaluation of treatment efficacy.
For the assessment of human reaction time, a test environment was developed. The system consists of an embedded device with organic light-emitting diode (OLED) displays and push buttons for the combined presentation of visual stimuli and registration of the haptic human reaction. The test leader can define the test sequence with the aid of a graphical user interface (GUI) on a personal computer (PC). The system was validated by measuring its latency times, which are determined by the specific hardware and software constellation. By investigating the displays' light emission with a photodiode and recording the current consumption, the latency times and their variance were specified. In the fastest mode the system can reach an error limit of 60 μs.
Although ground leases (Erbbaurechte) have seen renewed use in recent years, they still lead a niche existence. A ground lease splits the property rights in the real estate. This creates additional monitoring and enforcement costs on the one hand and, on the other, restricts the leaseholder's rights of disposal. Both lead to value discounts from which full ownership does not suffer. This burdens both the return and the possibilities of providing affordable housing via ground leases. Added to this are disadvantages in the saleability and mortgageability of ground leases. On the other hand, ground leases can be understood as an instrument for reallocating investment risks to the leaseholder. Assuming market conformity, the required returns of the ground lessors fall more strongly than the required returns of the leaseholders rise. This can give rise to considerable discounting gains that are not generated under full ownership and that can overcompensate the disadvantages of the ground lease. However, the way ground leases are applied in Germany does not allow this potential added value to actually be realized. Models are presented that remedy these application problems in a simple manner.
Background: Deficiency in musculoskeletal imaging (MI) education will pose a great challenge to physiotherapists in clinical decision making in this era of first-contact physiotherapy practices in many developed and developing countries. This study evaluated the nature and the level of MI training received by physiotherapists who graduate from Nigerian universities.
Methods: An online version of the previously validated Physiotherapist Musculoskeletal Imaging Profiling Questionnaire (PMIPQ) was administered to all eligible physiotherapists identified through the database of the Medical Rehabilitation Therapist Board of Nigeria. Data were obtained on demographics, nature, and level of training on MI procedures using the PMIPQ. Logistic regression, Friedman’s analysis of variance (ANOVA) and Kruskal-Wallis tests were used for the statistical analysis of collected data.
Results: The results (n = 400) showed that only 10.0% of the respondents had a stand-alone entry-level course in MI, 92.8% did not have any MI placement during their clinical internship, and 67.3% had never attended an MI workshop. There was a significant difference in the level of training received across MI procedures [χ2 (15) = 1285.899; p = 0.001]. However, there was no significant difference in the level of MI training across institutions offering the entry-level programme (p = 0.36). Study participants with transitional Doctor of Physiotherapy education were better trained in MI than their counterparts with a bachelor's degree only (p = 0.047).
Conclusions: Most physiotherapy programmes in Nigeria did not include a specific MI module; imaging instructions were mainly provided through clinical science courses. The overall self-reported level of MI training among the respondents was deficient. It is recommended that stand-alone MI education should be introduced in the early part of the entry-level physiotherapy curriculum.
Unintended nuclear war
(2021)
Carbon footprinting of universities worldwide: Part I — objective comparison by standardized metrics
(2021)
Background: Universities, as innovation drivers in science and technology worldwide, should be leading the Great Transformation towards a carbon-neutral society, and many have indeed picked up the challenge. However, only a small number of universities worldwide are collecting and publishing their carbon footprints, and some of them have defined zero-emission targets. Unfortunately, there is limited consistency between the reported carbon footprints (CFs) because of different analysis methods, different impact measures, and different target definitions by the respective universities.
Results: Comprehensive CF data of 20 universities from around the globe were collected and analysed, and essential factors contributing to the university CF were identified. For the first time, CF data from universities were not only compared but also evaluated, partly corrected, and augmented by missing contributions to improve consistency and comparability. The CF performance of each university in the respective year was thus homogenized and measured by means of two metrics: CO2e emissions per capita and per m2 of constructed area. Both metrics vary by one order of magnitude across the different universities in this study. However, we identified ten universities reaching a per capita carbon footprint lower than or close to 1.0 t (metric tons) CO2e per person and year (normalized by the number of people associated with the university), independent of the university's size. In addition to these two metrics, we suggest a new metric expressing the economic efficiency in terms of the CF per $ of expenditures and year. We then aggregated the results for all three impact measures, arriving at an overall carbon performance for the respective universities, which we found to be independent of geographical latitude. Instead, the per capita measure correlates with the national per capita CFs, reaching on average 23% of the national per capita impacts. The three top-performing universities are located in Switzerland, Chile, and Germany.
Conclusion: The usual reporting of CO2 emissions is categorized into Scopes 1–3 following the GHG Protocol Corporate Accounting Standard which makes comparison across universities challenging. In this study, we attempted to standardize the CF metrics, allowing us to objectively compare the CF at several universities. From this study, we observed that, almost 30 years after the Earth Summit in Rio de Janeiro (1992), the results are still limited. Only one zero emission university was identified, and hence, the transformation should speed up globally.
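The three standardized metrics (per capita, per m² of constructed area, per $ of expenditures) amount to simple normalizations of the total footprint; a sketch with a hypothetical university, not one of the 20 analysed:

```python
def cf_metrics(total_co2e_t, people, area_m2, expenditures_usd):
    """Three standardized carbon-footprint metrics for cross-university
    comparison: per capita (t), per m^2 (kg) and per $ spent (kg)."""
    return {
        "t_co2e_per_capita": total_co2e_t / people,
        "kg_co2e_per_m2": 1000.0 * total_co2e_t / area_m2,
        "kg_co2e_per_usd": 1000.0 * total_co2e_t / expenditures_usd,
    }

# Hypothetical figures for illustration only
m = cf_metrics(total_co2e_t=25_000, people=30_000,
               area_m2=400_000, expenditures_usd=500e6)
# m["t_co2e_per_capita"] ≈ 0.83, i.e. below the ~1.0 t/person benchmark
```

Normalizing this way is what allows universities of very different sizes and budgets to be ranked on a common scale, as done for the 20 institutions in the study.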
Laboratory protocols using magnetic beads have gained importance in the purification of mRNA for vaccines. Here, the produced mRNA hybridizes specifically to oligo(dT)-functionalized magnetic beads after cell lysis. The mRNA-loaded magnetic beads can be selectively separated using a magnet. Subsequently, impurities are removed by washing steps and the mRNA is eluted. Magnetic separation is utilized in each step, using different buffers such as the lysis/binding buffer. To reduce the time required for purification of larger amounts of mRNA vaccine for clinical trials, high-gradient magnetic separation (HGMS) is suitable; here, magnetic beads are selectively retained in a flow-through separation chamber. To meet the requirements of biopharmaceutical production, a disposable single-use (SU) HGMS separation chamber made of a certified material (United States Pharmacopeia Class VI) was developed, which can be manufactured using 3D printing. Due to the special design, the filter matrix itself is not in contact with the product. The separation chamber was tested with suspensions of oligo(dT)-functionalized Dynabeads MyOne loaded with synthetic mRNA. At a concentration of cB = 1.6–2.1 g·L–1 in lysis/binding buffer, these 1 μm magnetic particles are retained to more than 99.39% at volumetric flows of up to 150 mL·min–1 with the developed SU-HGMS separation chamber. When the separation chamber is operated at volumetric flow rates below 50 mL·min–1, the retained particle mass even exceeds 99.99%.
Purification of mRNA with oligo(dT)-functionalized magnetic particles involves a series of magnetic separations for buffer exchange and washing. Magnetic particles interact and agglomerate with each other when a magnetic field is applied, which can result in a decreased total surface area and thus a decreased yield of mRNA. In addition, agglomeration may also be caused by mRNA loading on the magnetic particles. Therefore, it is of interest how the individual steps of magnetic separation and subsequent redispersion in the buffers used affect the particle size distribution. The lysis/binding buffer is the most important buffer for the separation of mRNA from the multicomponent suspension of cell lysate. Therefore, monodisperse magnetic particles loaded with mRNA were dispersed in the lysis/binding buffer and in the reference system deionized water, and the particle size distributions were measured. A concentration-dependent agglomeration tendency was observed in deionized water. In contrast, no significant agglomeration was detected in the lysis/binding buffer. With regard to magnetic particle recycling, the influence of different storage and drying processes on particle size distribution was investigated. Agglomeration occurred in all process alternatives. For de-agglomeration, ultrasonic treatment was examined. It represents a suitable method for reproducible restoration of the original particle size distribution.
The implementation of single-use (SU) technologies offers several major advantages, e.g. the prevention of cross-contamination, especially when spore-forming microorganisms are present. This study investigated the application of a single-use bioreactor in batch fermentations of the filamentous fungus Penicillium sp. (IBWF 040-09) from the Institute of Biotechnology and Drug Research (IBWF), which is capable of intracellularly producing a protease inhibitor against parasitic proteases as a secondary metabolite. Several modifications to the SU bioreactor are suggested in this study to allow fermentations in which the fungus forms pellets. Fermentations in a conventional glass bioreactor were conducted in parallel as a reference. Although there are significant differences in the construction material and gassing system, the similarity of the two bioreactor types in terms of fungal metabolic activity and the reproducibility of the fermentations could be demonstrated using statistical methods. Under the selected cultivation conditions, growth rate, yield coefficient, substrate uptake rate, and the formation of the intracellular protease-inhibiting substance in the single-use bioreactor were similar to those in the glass bioreactor.
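The yield coefficient mentioned above relates biomass produced to substrate consumed; a minimal sketch with invented batch data (not values from this study):

```python
def yield_coefficient(x0, x1, s0, s1):
    """Biomass yield Y_X/S = biomass produced per substrate consumed (g/g),
    from start/end biomass (x0, x1) and substrate (s0, s1) concentrations in g/L."""
    dX = x1 - x0  # biomass formed
    dS = s0 - s1  # substrate consumed
    if dS <= 0:
        raise ValueError("no substrate consumed")
    return dX / dS

# hypothetical batch: biomass 0.5 -> 8.0 g/L while glucose drops 20 -> 2 g/L
y = yield_coefficient(0.5, 8.0, 20.0, 2.0)
print(round(y, 3))  # 0.417
```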
Thailand’s power system is undergoing an energy transition driven by the increasing integration of Renewable Energy (RE), prosumers with self-consumption, and digitalization-based business models in a Local Energy Market (LEM). This paper introduces a decentralized business model and a possible trading platform for electricity trading in Thailand’s micro-grid to deal with this power system transformation. The approach is hybrid P2P, a market structure in which sellers and buyers either negotiate energy exchanges directly (fully P2P trading) or trade through an algorithm on the market platform (community-based trading). For community-based trading, a decentralized price mechanism combining an Auction Mechanism (AM), Bill Sharing (BS), and a Traditional Mechanism (TM) is proposed. The approach is validated through a test case in which the community’s daytime energy imports and exports are significantly reduced when 75 consumers and 25 rooftop-PV prosumers participate in the decentralized trading model. Furthermore, a comparative analysis confirms that the decentralized business model outperforms a centralized approach at both the community and the individual level.
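As an illustration of how a community-based clearing step might work, here is a minimal double-auction sketch; the paper’s actual AM/BS/TM price mechanism is more elaborate, and all prices and quantities below are invented:

```python
def clear_double_auction(bids, asks):
    """Match buy bids against sell asks by price priority.
    bids/asks: lists of (price, kWh); trade price = midpoint of matched pair.
    Returns a list of (clearing_price, kWh) trades."""
    bids = sorted(bids, reverse=True)  # highest-paying buyer first
    asks = sorted(asks)                # cheapest seller first
    trades, bi, ai = [], 0, 0
    while bi < len(bids) and ai < len(asks) and bids[bi][0] >= asks[ai][0]:
        bp, bq = bids[bi]
        ap, aq = asks[ai]
        qty = min(bq, aq)
        trades.append(((bp + ap) / 2, qty))
        if bq == qty:
            bi += 1
        else:
            bids[bi] = (bp, bq - qty)
        if aq == qty:
            ai += 1
        else:
            asks[ai] = (ap, aq - qty)
    return trades

# two buyers and two PV prosumers; prices in THB/kWh, quantities in kWh
trades = clear_double_auction([(4.2, 3), (3.8, 2)], [(3.5, 4), (4.0, 2)])
```

Unmatched quantity (here, the second seller’s remaining energy) would fall back to the traditional mechanism, i.e. trading with the grid at the utility tariff.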
As productive biofilms gain increasing interest in research, quantitatively monitoring biofilm formation online or offline during a process remains a challenge. Optical coherence tomography (OCT) is a fast and widely used method for scanning biofilms, but it has difficulty penetrating optically denser materials. X-ray microtomography (µCT) can measure biofilms in most geometries but is very time-consuming. By combining both methods for the first time, the weaknesses of each can be compensated. The phototrophic cyanobacterium Tolypothrix distorta was cultured in a moving-bed photobioreactor on a biocarrier with a semi-enclosed geometry. An automated workflow was developed to process µCT scans of the biocarriers. This allowed biomass volume and biofilm coverage on the biocarrier to be quantified, both globally and spatially resolved. At the beginning of the cultivation, a growth limitation was detected in the outer region of the carrier, presumably due to shear stress. In the later phase, light limitations were found inside the biocarrier. The µCT data and the biofilm thicknesses measured by OCT correlated well; the latter can therefore be used to rapidly measure biofilm formation during a process. The methods presented here can help gain a deeper understanding of biofilms within a process and detect any limitations.
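The µCT quantification step essentially amounts to counting biomass voxels in a segmented volume; a minimal sketch on a synthetic array (the threshold and voxel size are hypothetical, and the paper’s automated workflow involves more than simple global thresholding):

```python
import numpy as np

def biomass_volume(scan, threshold, voxel_mm3):
    """Segment a grayscale µCT volume by global threshold and
    return the biomass volume in mm^3 (voxel count x voxel volume)."""
    mask = scan > threshold
    return mask.sum() * voxel_mm3

# synthetic 3D scan: 10 bright "biofilm" voxels in an otherwise dark volume
scan = np.zeros((20, 20, 20))
scan[5, 5, :10] = 200
vol = biomass_volume(scan, threshold=100, voxel_mm3=0.001)
print(vol)  # 0.01
```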
Introduction: The use of social marketing strategies to promote cognitive health has received little attention in research. The objective of this scoping review is twofold: (i) to identify the social marketing strategies that have been used in recent years to initiate and maintain health-promoting behaviour; (ii) to advance research in this area to inform policy and practice on how best to use these strategies to promote cognitive health.
Methods and analysis: We will use the five-stage methodological framework of Arksey and O'Malley. Articles in English published since 2010 will be searched in electronic databases (the Cochrane Library, DoPHER, the International Bibliography of the Social Sciences, PsycInfo, PubMed, ScienceDirect, Scopus). Quantitative and qualitative study designs as well as reviews will be considered. We will include those articles that report the design, implementation, outcomes and evaluation of programmes and interventions concerning social marketing and/or health promotion and/or promotion of cognitive health. Grey literature will not be searched. Two independent reviewers will assess in detail the abstracts and full text of selected citations against the inclusion criteria. A Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) flowchart will be used to illustrate the process of article selection. We will use a data extraction form, present the results through narrative synthesis and discuss them in relation to the scoping review research questions.
Ethics and dissemination: Ethics approval is not required for conducting this scoping review. The results of the review will be the first step to advance a conceptual framework, which contributes to the development of interventions targeting the promotion of cognitive health. The results will be published in a peer-reviewed scientific journal. They will also be disseminated to key stakeholders in the field of the promotion of cognitive health.
The aim of this work was to develop and evaluate the reinforcement learning algorithm VentAI, which suggests a dynamically optimized mechanical ventilation regime for critically ill patients. We built, validated and tested its performance on 11,943 events of volume-controlled mechanical ventilation derived from 61,532 distinct ICU admissions and tested it on an independent, secondary dataset (200,859 ICU stays; 25,086 mechanical ventilation events). A patient “data fingerprint” of 44 features was extracted as a multidimensional time series in 4-hour time steps. We used a Markov decision process, including a reward system and a Q-learning approach, to find the optimized settings for positive end-expiratory pressure (PEEP), fraction of inspired oxygen (FiO2) and ideal-body-weight-adjusted tidal volume (Vt). The observed outcome was in-hospital or 90-day mortality. VentAI reached a significantly increased estimated performance return of 83.3 (primary dataset) and 84.1 (secondary dataset) compared to physicians’ standard clinical care (51.1). The number of recommended action changes per mechanically ventilated patient consistently exceeded that of the clinicians. VentAI chose ventilation regimes with lower Vt (5–7.5 mL/kg) 202.9% more frequently, but regimes with higher Vt (7.5–10 mL/kg) 50.8% less frequently. VentAI recommended PEEP levels of 5–7 cmH2O 29.3% more frequently and PEEP levels of 7–9 cmH2O 53.6% more frequently. VentAI avoided high (>55%) FiO2 values (59.8% decrease), while preferring the range of 50–55% (140.3% increase). In conclusion, VentAI provides reproducibly high performance by dynamically choosing an optimized, individualized ventilation strategy and might thus benefit critically ill patients.
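The tabular Q-learning core of such an off-policy approach can be sketched in a few lines; the toy states, actions and rewards below stand in for the paper’s 44-feature state space and discretized PEEP/FiO2/Vt actions and are not its actual implementation:

```python
def q_learning(transitions, n_states, n_actions, alpha=0.1, gamma=0.99):
    """Tabular Q-learning over logged (state, action, reward, next_state)
    tuples, i.e. off-policy learning from retrospective data."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for s, a, r, s2 in transitions:
        target = r + gamma * max(Q[s2])       # bootstrap on best next action
        Q[s][a] += alpha * (target - Q[s][a])  # temporal-difference update
    return Q

# toy log: two patient states, two discretized ventilation actions;
# action 1 in state 0 is followed by reward 1 (e.g. favourable outcome)
log = [(0, 1, 1.0, 1), (1, 0, 0.0, 0)] * 200
Q = q_learning(log, n_states=2, n_actions=2)
best_action_in_state_0 = max(range(2), key=lambda a: Q[0][a])
```

After training, the learned policy is read off the Q-table by taking the highest-valued action per state, analogous to recommending a ventilator setting per 4-hour step.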
Big Data is now poised to transform decision-making systems. Decisions are no longer based solely on the structured information that organizations have traditionally collected and stored, but also on unstructured data from outside the corporate perimeter. The cloud and the information it contains influence decisions, and the industry is witnessing the emergence of business intelligence 3.0. With the growth of the internet, social networks, connected objects and communication, information is now more abundant than ever before, and its production is growing rapidly and substantially. In 2012, 2.5 exabytes of data (one exabyte being a million gigabytes) were added to big data every day (McAfee et al., 2012), which is expected to exceed 40 zettabytes by 2020 (Valduriez, 2014), with 30 billion connected devices (The Internet Of Nothings, 2014) and 50 billion sensors (Davenport & Soulard, 2014). One of the most critical aspects of this information flow is its impact on the way decisions are made. In an environment in which data was scarce and difficult to obtain, it was logical to let decision-making be guided by the intuition of the experienced decision-maker (Klein, Phillips, Rall, & Peluso, 2007). However, since information and knowledge are now available to everyone, the role of experts and decision-makers is gradually changing. Big data, in particular, makes it possible for analytical and decision-making systems to base their decisions on global models that consider all the dimensions of the situations encountered, something that until now was beyond human reach because human rationality is bounded (Simon & Newell, 1971). The processing of big data and unstructured data, however, requires modifying the architecture of organizations’ decision support systems (DSS).
This paper is an inventory of the developments that decision support systems have undergone under the pressure of big data. Finally, it opens the debate on the ethical questions raised by these new technologies, observing that the analysis of personal data has become more contentious than in the past.
The integration of genetic algorithms to optimize value chain networks could greatly improve the performance of supply chains. This paper therefore describes in more detail the application of genetic algorithms to value chains in the automotive industry. For this purpose, a theoretical model is built to evaluate whether its application can optimize the value chain; the approach is described and analyzed, and its restrictions are shown. Instead of looking at the entire network, individual finished goods and their bills of material are used as the basis for optimization, which greatly reduces the complexity of the original supply chain network problem.
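A genetic algorithm over a single product’s bill of material can be sketched as follows: each gene selects a supplier for one component, and fitness is the total cost. This is an illustrative toy, not the paper’s model, and the supplier costs are invented:

```python
import random

def evolve(costs, pop_size=30, generations=100, mutation=0.1, seed=42):
    """Minimize total BOM cost; costs[i][j] = cost of supplier j for component i.
    Individuals are supplier indices, one gene per BOM component."""
    rng = random.Random(seed)
    n = len(costs)

    def fitness(ind):  # lower total cost = better
        return sum(costs[i][g] for i, g in enumerate(ind))

    pop = [[rng.randrange(len(costs[i])) for i in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]       # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            if rng.random() < mutation:        # mutate: random supplier swap
                i = rng.randrange(n)
                child[i] = rng.randrange(len(costs[i]))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# 3 components, 2 suppliers each; optimum picks the cheaper supplier per row
costs = [[5, 3], [2, 4], [7, 6]]
best = evolve(costs)
```

Restricting the chromosome to one finished good’s bill of material, as above, is what keeps the search space small compared with encoding the whole supply chain network.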
Toward a detailed discussion of process mining, the objective of this paper is to analyze successful implementations of process mining in the practice of supply chain management. The research comprises the investigation of use cases in companies that are already actively using process mining.
Purpose: This research aims to highlight the applicability of process mining in the supply chain management business field.
Research Methodology: In order to examine the applicability of process mining in supply chain management, a study was conducted among experts in this business field. The theoretical findings were then compared with the results and evaluated.
Results: Process mining can be applied very well in the SCM area. The resulting advantages are primarily significant potential benefits and improved process throughput times. The information gained from the operational areas supported by process mining is suitable for reliable decisions at both the tactical and the strategic level.
Limitations: The results on the application of process mining show a certain generalization and have to be adapted and adjusted to the respective application case.
Contribution: This study is useful, especially for the purchasing and logistics business area.
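The throughput-time analysis mentioned under Results can be illustrated with a minimal event-log sketch; the case IDs, activities and timestamps are invented:

```python
from datetime import datetime

def case_throughput_hours(event_log):
    """Throughput time per case: last event timestamp minus first, in hours.
    event_log: iterable of (case_id, activity, timestamp) tuples."""
    cases = {}
    for case_id, activity, ts in event_log:
        first, last = cases.get(case_id, (ts, ts))
        cases[case_id] = (min(first, ts), max(last, ts))
    return {cid: (last - first).total_seconds() / 3600
            for cid, (first, last) in cases.items()}

# toy purchase-order log, as might be extracted from an ERP system
log = [
    ("PO-1", "create order",  datetime(2021, 3, 1, 8, 0)),
    ("PO-1", "goods receipt", datetime(2021, 3, 2, 8, 0)),
    ("PO-2", "create order",  datetime(2021, 3, 1, 9, 0)),
    ("PO-2", "goods receipt", datetime(2021, 3, 1, 21, 0)),
]
times = case_throughput_hours(log)
print(times)  # {'PO-1': 24.0, 'PO-2': 12.0}
```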
This paper presents a study of Industry 4.0 technologies at Deere & Company and their impact on company operations. The focus of the research was Deere & Company’s implementation of Industry 4.0 in its factories and the factors involved. The study uses a systematic literature review as well as a comprehensive review of the company’s current developments, relying on freely available information on the company website; public and investor relations materials were also used as credible sources. The analysis found that adopting Industry 4.0 technologies in agricultural manufacturing results in higher-quality products, increased productivity and safety, and wider acceptance among stakeholders. The study assumes full implementation of these technologies in all agricultural manufacturing companies and emphasizes up-to-date technologies. This topic can be useful for engineers in mechanical and agricultural fields, managers in business, and marketers.
The aim of the study is to find out how SMEs used social media during Corona and how customers received it, in order to determine what SMEs should continue or avoid in the future. An interpretivist approach was adopted through problem-centred interviews with three SMEs. The second part of the study used an objectivist approach: an online survey with purposive sampling. The results were evaluated by means of thematic analysis. The SMEs interviewed considered social media essential during Corona, owing to limited resources and the feeling of being overwhelmed by the situation. Customers likewise consider a social media presence indispensable, and their followership is based on the desire for the latest information. However, it also became clear that the survey participants do not trust the information on social media and prefer information on the website or at the location itself. No answers could be obtained on how the experts would respond without, or after, Corona. Furthermore, due to anonymisation, the attitude of the survey participants toward the individual SMEs could not be clarified.
Both brick-and-mortar retail and online pure players are undergoing a transformation driven by increasing digitalization and dynamically changing trends. The furniture industry in particular is required to adapt to these developments. The intensification of market and competitive landscapes and the change in consumer behaviour, notably the demand for merged shopping channels, make a rethink necessary. Current circumstances such as the Covid-19 pandemic add urgency to this need. For the furniture industry, establishing an omnichannel strategy entails both opportunities and threats, particularly with regard to logistical challenges; these must be recognized, exploited, and resolved.