Stabilization exercise (SE) is an evidence-based option for the management of chronic non-specific low back pain (LBP). However, the optimal dose-response relationship for maximal treatment success is still unknown. The purpose of this review is to systematically examine the dose-response relationship of stabilization exercises on pain and disability in patients with chronic non-specific LBP. A systematic review with meta-regression was conducted (PubMed, Web of Knowledge, Cochrane). Eligibility criteria were RCTs on patients with chronic non-specific LBP, written in English or German, adopting a longitudinal core-specific/stabilizing/motor control exercise intervention with at least one outcome for pain intensity and/or disability. Meta-regressions (dependent variable = effect sizes (Cohen's d) of the interventions, for pain and for disability; independent variables = training characteristics (duration, frequency, time per session)), controlling for (low) study quality (PEDro) and (low) sample size (n), were conducted to reveal the optimal dose required for therapy success. Of the 3,415 studies initially identified, 50 studies (n = 2,786 LBP patients) were included; n = 1,239 patients received SE. Training duration was 7.0 ± 3.3 weeks, training frequency was 3.1 ± 1.8 sessions per week, with a mean training time of 44.6 ± 18.0 min per session. The meta-regressions' mean effect size was d = 1.80 (pain) and d = 1.70 (disability); total R2 was 0.445 and 0.17, respectively. Moderate-quality evidence (R2 = 0.231) revealed that a training time of 20 to 30 min per session elicited the largest effect (both for pain and disability, logarithmic association). Low-quality evidence (R2 = 0.125) revealed that training 3 to 5 times per week led to the largest effect of SE in patients with chronic non-specific LBP (inverted U-shaped association).
In patients with chronic non-specific LBP, stabilization exercise with a training frequency of 3 to 5 times per week (Grade C) and a training time of 20 to 30 min per session (Grade A) elicited the largest effect on pain and disability.
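The dose-response meta-regression described above can be sketched as a weighted least-squares fit of study effect sizes against a training characteristic. All numbers and the logarithmic model below are invented for illustration and are not taken from the review:

```python
import numpy as np

# Hypothetical per-study data: effect sizes (Cohen's d) for pain,
# session duration in minutes, and inverse-variance-style weights.
d = np.array([1.2, 1.9, 2.1, 1.6, 1.1, 0.8])
minutes = np.array([15, 25, 30, 40, 55, 70])
weights = np.array([30, 45, 38, 52, 27, 33])  # e.g. study sample sizes

# Logarithmic dose-response model: d = b0 + b1 * ln(minutes),
# fitted by weighted least squares via the normal equations.
X = np.column_stack([np.ones_like(minutes, dtype=float), np.log(minutes)])
W = np.diag(weights.astype(float))
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)

# Weighted R^2, analogous to the moderator R^2 reported in the review.
resid = d - X @ beta
ss_res = np.sum(weights * resid**2)
ss_tot = np.sum(weights * (d - np.average(d, weights=weights))**2)
r2 = 1 - ss_res / ss_tot
```

In a real meta-regression the weights would come from the sampling variances of the effect sizes, and study quality would enter as an additional moderator.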
Introduction: Annually, 2 million sports-related injuries are reported in Germany, a large proportion of which involve athletes. Multiple sport injury prevention programs designed to decrease acute and overuse injuries in athletes have been proven effective. Yet it remains uncertain which of the programs' components, general or sports-specific, led to these positive effects. Although the superiority of sports-specific injury prevention programs has not been established, coaches and athletes alike prefer specialized over generalized exercise programs. Therefore, this systematic review aimed to present the available evidence on how general and sports-specific prevention programs affect injury rates in athletes.
Methods: PubMed and Web of Science were electronically searched throughout April 2018. The inclusion criteria were publication dates Jan 2006–Dec 2017, athletes (11–45 years), exercise-based injury prevention programs and injury incidence. The methodological quality was assessed with the Cochrane Collaboration assessment tools.
Results: Of the initial 6619 findings, 15 studies met the inclusion criteria. In addition, 13 studies were added from reference lists and external sources, making a total of 28 studies. Of these, one used a sports-specific, seven a general, and 20 a mixed prevention strategy. Twenty-four studies revealed reduced injury rates. Of the four ineffective programs, one was general and three were mixed.
Conclusion: General and mixed programs positively affect injury rates. Sports-specific programs remain uninvestigated, and despite wide discussion regarding their definition, no consensus has been reached. Defining this terminology and investigating the true effectiveness of such injury prevention programs (IPPs) is a potential avenue for future research.
Freedom of trade, occupation and profession in times of the Covid-19 pandemic in South Africa
(2022)
This paper evaluates the freedom of trade, occupation, and profession in South Africa in the context of the Covid-19 pandemic. It does so by focusing on the pertinent provisions and rights contained in the Constitution of the Republic of South Africa, 1996 (the Constitution) and in relevant international and regional human rights instruments. It then discusses the interlinkage between the freedom of trade, occupation, and profession and other pertinent fundamental rights, as well as the limitation, enforcement, and interpretation of those rights. This is followed by some final observations.
Purpose: The well-to-wheel (WTW) methodology is widely used for policy support in road transport. It can be seen as a simplified life cycle assessment (LCA) that focuses on the energy consumption and CO2 emissions only for the fuel being consumed, ignoring other stages of a vehicle’s life cycle. WTW results are therefore different from LCA results. In order to close this gap, the authors propose a hybrid WTW+LCA methodology useful to assess the greenhouse gas (GHG) profiles of road vehicles.
Methods: The proposed method (hybrid WTW+LCA) keeps the main hypotheses of the WTW methodology, but integrates them with LCA data restricted to the global warming potential (GWP) occurring during the manufacturing of the battery pack. WTW data are used for the GHG intensity of the EU electric mix, after a consistency check with the main life cycle impact (LCI) sources available in literature.
Results and discussion: A numerical example is provided, comparing GHG emissions due to the use of a battery electric vehicle (BEV) with emissions from an internal combustion engine vehicle. This comparison is done both according to the WTW approach (namely the JEC WTW version 4) and the proposed hybrid WTW+LCA method. The GHG savings due to the use of BEVs calculated with the WTW-4 range between 44 and 56 %, while according to the hybrid method the savings are lower (31–46 %). This difference is due to the GWP which arises as a result of the manufacturing of the battery pack for the electric vehicles.
Conclusions: The WTW methodology used in policy support to quantify energy content and GHG emissions of fuels and powertrains can produce results closer to the LCA methodology by adopting a hybrid WTW+LCA approach. While evaluating GHG savings due to the use of BEVs, it is important that this method considers the GWP due to the manufacturing of the battery pack.
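The gap between the two methods comes down to adding the battery-pack manufacturing GWP to the BEV's use-phase emissions. A toy calculation (all figures assumed for illustration, not taken from the paper) shows how the hybrid WTW+LCA savings end up below the WTW-only savings:

```python
# Hypothetical figures for illustration only.
KM = 150_000                 # assumed lifetime mileage [km]
ICE_WTW = 160.0              # ICE well-to-wheel intensity [g CO2-eq/km]
BEV_WTW = 75.0               # BEV well-to-wheel intensity, EU mix [g CO2-eq/km]
BATTERY_GWP = 3.0e6          # battery-pack manufacturing GWP [g CO2-eq]

ice_total = ICE_WTW * KM
bev_wtw_total = BEV_WTW * KM                     # WTW-only accounting
bev_hybrid_total = bev_wtw_total + BATTERY_GWP   # hybrid WTW+LCA accounting

saving_wtw = 1 - bev_wtw_total / ice_total
saving_hybrid = 1 - bev_hybrid_total / ice_total
print(f"WTW saving:    {saving_wtw:.0%}")
print(f"Hybrid saving: {saving_hybrid:.0%}")
```

With these assumed numbers the WTW saving is about 53% and the hybrid saving about 41%, mirroring the direction (though not the exact values) of the difference reported above.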
Innovative multi-stage biogas plant and novel analytical system: First project experiences
(2012)
The applied research and development project presented here targets the development and application of new and improved techniques in plant design, performance analysis and process control. The required steps are illustrated and the goals outlined. The project covers the development of a previously patented anaerobic digestion process, the adaptation of flow cytometry as an analytical instrument, and the investigation of innovative ways of disposing of solid fermentation wastes. Preliminary experiences with a newly built research plant employing a novel anaerobic biogas digestion technique are discussed, along with first outcomes concerning construction and operation. A novel method for disposing of the fermentation wastes is also presented together with first results.
In recent decades, Green Infrastructures have been widely implemented worldwide. Among these, green roofs appear to be particularly flexible sustainable drainage facilities. To predict their effectiveness for planning purposes, a tool is required that provides information as a function of local meteorological variables. Thus, a relatively simple daily-scale, one-dimensional water balance approach is proposed. The crucial evapotranspiration process, usually treated as a variable dependent on the water balance, is replaced here by empirical relationships providing an a priori assessment of soil water losses through actual evapotranspiration. The modelling scheme, which under some simplifications can be used without calibration, has been applied to experimental runoff data monitored at a green roof located near Bernkastel (Germany) between April 2005 and December 2006. Two different empirical relationships have been used to model actual evapotranspiration, considering a water-availability-limited and an energy-limited scheme. The quantified model errors, ranging from 2% to 40% at the long-term scale and from 1% to 36% at the event scale, appear strongly related to the particular relationship considered.
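A one-dimensional daily water balance of the kind described above can be sketched as a simple bucket model. The function below is an assumed minimal implementation (parameter names and values are invented, not the study's): storage gains rain, loses availability-limited actual evapotranspiration, and spills runoff above capacity.

```python
# Minimal daily bucket water balance for a green roof (illustrative sketch,
# not the study's model; all parameters are assumptions).
def simulate(rain_mm, et_pot_mm, capacity_mm=30.0, storage_mm=10.0):
    """Return daily runoff [mm]; actual ET is limited by available water."""
    runoff = []
    s = storage_mm
    for p, et in zip(rain_mm, et_pot_mm):
        s += p                      # rainfall enters the substrate storage
        et_act = min(et, s)         # water-availability-limited actual ET
        s -= et_act
        q = max(0.0, s - capacity_mm)  # storage above capacity drains off
        s -= q
        runoff.append(q)
    return runoff
```

For example, a 40 mm rain day on a roof holding 8 mm of water yields 16 mm of runoff once the 30 mm capacity is exceeded, while dry days produce none.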
Since operational managers often monitor large numbers of wind turbines (WTs), they depend on a toolset that provides highly condensed information to identify and prioritize low-performing WTs or schedule preventive maintenance measures. Power curves are a frequently used tool to assess the performance of WTs. The power curve health value (HV) used in this work is intended to detect power curve anomalies, since small deviations in the power curve are not easy to identify. It evaluates deviations in the linear region of power curves by performing a principal component analysis: to calculate the HV, the standard deviation in the direction of the second principal component of a reference data set is compared to the standard deviation of a combined data set consisting of the reference data and the data of the evaluated period. This article examines the applicability of this HV for different purposes as well as its sensitivities, and provides a modified HV approach that makes it more robust and suitable for heterogeneous data sets. The modified HV was tested on ENGIE's open-data wind farm and on data of on- and offshore WTs from the WInD-Pool. It proved able to detect anomalies in the linear region of the power curve in a reliable and sensitive manner and could also detect long-term power curve degradation. Moreover, about 7 % of all corrective maintenance measures were preceded by high HVs, with a median alarm horizon of three days. Overall, the HV proved to be a promising tool for various applications.
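The core of the health-value idea can be sketched in a few lines: find the second principal component of a reference (wind speed, power) cloud and compare the spread along it before and after adding the evaluated period. This is a simplified proxy with invented, normalised data, not the paper's exact formula:

```python
import numpy as np

# Simplified sketch of the HV idea (not the paper's exact formula):
# deviations orthogonal to the linear power-curve region are measured
# along the 2nd principal component of a reference data set.
def pc2_direction(data):
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[1]  # unit vector of the second principal component

def std_along(data, direction):
    centered = data - data.mean(axis=0)
    return (centered @ direction).std(ddof=1)

rng = np.random.default_rng(0)
wind = rng.uniform(0.4, 0.8, 500)  # normalised wind speed (assumed units)
ref = np.column_stack([wind, 2 * wind - 0.3 + rng.normal(0, 0.02, 500)])
# evaluation period with a small systematic power deficit (the anomaly)
bad = np.column_stack([wind, 2 * wind - 0.45 + rng.normal(0, 0.02, 500)])

pc2 = pc2_direction(ref)
hv_proxy = std_along(np.vstack([ref, bad]), pc2) / std_along(ref, pc2)
# hv_proxy noticeably above 1 signals a power-curve anomaly
```

Because the deficit shifts points orthogonally to the reference power curve's main axis, the combined set's spread along the second component grows, and the ratio rises well above 1.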
Species distribution models (SDMs) are key tools in biodiversity and conservation, but assessing their reliability in unsampled locations is difficult, especially where there are sampling biases. We present a spatially-explicit sensitivity analysis for SDMs – SDM profiling – which assesses the leverage that unsampled locations have on the overall model by exploring the interaction between the effect on the variable response curves and the prevalence of the affected environmental conditions. The method adds a ‘pseudo-presence’ and ‘pseudo-absence’ to unsampled locations, re-running the SDM for each, and measuring the difference between the probability surfaces of the original and new SDMs. When the standardised difference values are plotted against each other (a ‘profile plot’), each point's location can be summarized by four leverage measures, calculated as the distances to each corner. We explore several applications: visualization of model certainty; identification of optimal new sampling locations and redundant existing locations; and flagging potentially erroneous occurrence records.
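The profiling mechanic described above — add a pseudo-presence and a pseudo-absence at an unsampled location, refit, and measure how much the prediction surface moves — can be sketched with a toy one-variable logistic-regression SDM. This is an assumed minimal implementation for illustration, not the authors' code:

```python
import numpy as np

# Toy SDM: logistic regression on one environmental covariate, fitted by
# plain gradient descent (assumed implementation, invented data).
def fit_logistic(x, y, lr=0.5, steps=2000):
    b0, b1 = 0.0, 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(b0 + b1 * x)))
        b0 -= lr * np.mean(p - y)
        b1 -= lr * np.mean((p - y) * x)
    return b0, b1

def predict(x, b0, b1):
    return 1 / (1 + np.exp(-(b0 + b1 * x)))

rng = np.random.default_rng(1)
env = rng.uniform(0, 1, 40)                       # environmental covariate
occ = (rng.uniform(size=40) < env).astype(float)  # presences favour high env

grid = np.linspace(0, 1, 101)                     # prediction surface
base = predict(grid, *fit_logistic(env, occ))

x_new = 0.05                                      # an unsampled location
leverages = {}
for label, y_new in [("pseudo-presence", 1.0), ("pseudo-absence", 0.0)]:
    b0, b1 = fit_logistic(np.append(env, x_new), np.append(occ, y_new))
    # leverage: mean absolute change of the whole prediction surface
    leverages[label] = np.abs(predict(grid, b0, b1) - base).mean()
```

Plotting the two leverage values against each other for every unsampled cell would give the 'profile plot' described above; a pseudo-presence that contradicts the fitted trend moves the surface most.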
Artificial light at night (ALAN) is a widespread alteration of the natural environment that can affect the functioning of ecosystems. ALAN can change the movement patterns of freshwater animals that move into the adjacent riparian and terrestrial ecosystems, but the implications for local riparian consumers that rely on these subsidies are still unexplored. We conducted a 2-year field experiment to quantify changes of freshwater-terrestrial linkages by installing streetlights in a previously light-naïve riparian area adjacent to an agricultural drainage ditch. We compared the abundance and community composition of emerging aquatic insects, flying insects, and ground-dwelling arthropods with an unlit control site. Comparisons were made within and between years using two-way generalized least squares (GLS) models and a BACI (Before-After Control-Impact) design. Aquatic insect emergence, the proportion of flying insects that were aquatic in origin, and the total abundance of flying insects all increased in the ALAN-illuminated area. The abundance of several night-active ground-dwelling predators (Pachygnatha clercki, Trochosa sp., Opiliones) increased under ALAN and their activity was extended into the day. Conversely, the abundance of nocturnal ground beetles (Carabidae) decreased under ALAN. The changes in composition of riparian predator and scavenger communities suggest that the increase in aquatic-to-terrestrial subsidy flux may cascade through the riparian food web. The work is among the first studies to experimentally manipulate ALAN using a large-scale field experiment, and provides evidence that ALAN can affect processes that link adjacent ecosystems. Given the large number of streetlights that are installed along shorelines of freshwater bodies throughout the globe, the effects could be widespread and represent an underestimated source of impairment for both aquatic and riparian systems.
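The BACI contrast underlying the design described above is simple arithmetic: the impact is the change at the lit site beyond the change at the control site. The abundances below are invented for illustration:

```python
# BACI contrast on (invented) mean abundances per trap night.
before_control, after_control = 14.0, 16.0   # unlit control site
before_impact, after_impact = 15.0, 24.0     # ALAN-illuminated site

# Change at the impact site minus the background change at the control site.
baci_effect = (after_impact - before_impact) - (after_control - before_control)
```

In practice the GLS models test whether this site-by-period interaction differs from zero while accounting for temporal autocorrelation.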
Online Learning algorithms and Indoor Positioning Systems are complex applications in the environment of cyber-physical systems. These distributed systems are created by networking intelligent machines and autonomous robots on the Internet of Things using embedded systems that enable the exchange of information at any time. This information is processed by Machine Learning algorithms to make decisions about current developments in production or to influence logistics processes for optimization purposes. In this article, we present and categorize the further development of the prototype of a novel Indoor Positioning System, which constantly adapts its knowledge to the conditions of its environment with the help of Online Learning. Here, we apply Online Learning algorithms in the field of sound-based indoor localization with low-cost hardware and demonstrate the improvement of the system over its predecessor and its adaptability for different applications in an experimental case study.
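A minimal flavour of the online-learning idea described above can be shown with an incremental update rule. The scenario is assumed for illustration (the paper does not specify this model): the positioning system refines a per-beacon range bias as new labelled measurements arrive, instead of retraining from scratch.

```python
# Hypothetical online update of a range-measurement bias (illustrative
# sketch only; not the system's actual algorithm).
class OnlineBias:
    def __init__(self, lr=0.1):
        self.bias = 0.0   # current estimate of the systematic range error [m]
        self.lr = lr      # learning rate

    def update(self, measured_m, true_m):
        # prediction error of the bias-corrected measurement
        error = (measured_m - self.bias) - true_m
        self.bias += self.lr * error   # stochastic-gradient-style step
        return self.bias
```

Each new ground-truth observation nudges the estimate, so the model keeps adapting as acoustic conditions in the environment change.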
During the low-interest phase that has persisted since 2009, the share of capital income in national income declined. Since the share of the factor labour did not change substantially, the production factor land gained in importance. This is not reflected in the national accounts, however, because land is not reported separately there. With the help of the Henry George theorem, an attempt is made to develop a method that quantifies the share of the factor land. Explicitly reporting land income in the national accounts could help to correct the role of this neglected factor.
Ground leases (Erbbaurechte) have seen renewed use in recent years but still lead a niche existence. A ground lease splits the property rights in the real estate. This creates additional monitoring and enforcement costs on the one hand, and interferes with the leaseholder's rights of disposal on the other. Both lead to value discounts from which full ownership is exempt. This burdens both the return and the scope for providing affordable housing through ground leases. There are also disadvantages regarding the transferability and mortgageability of ground leases. On the other hand, ground leases can be understood as an instrument for reallocating investment risks to the leaseholder. Provided the terms are in line with the market, the required returns of the ground lessors fall by more than the required returns of the leaseholders rise. This can give rise to potentially considerable discounting gains that are not generated under full ownership and that may overcompensate the disadvantages of the ground lease. However, the way ground leases are applied in Germany does not allow this potential added value to be actually realised. Models are presented that remedy these application problems in a simple way.
A common answer to the financial challenges of green transformation and the shortcomings of the current taxation system is the “double dividend approach”. Environmental taxes should either feed the public purse in order to remove other distorting taxes, or directly contribute to financing green transformation. Germany adopted the former approach. However, this article argues, by using the example of Germany, that “good taxes” in terms of public finance should be neutral in terms of environmental protection and vice versa. Neutral taxation in terms of environmental impacts can be best achieved by applying the “Henry George principle”. Additionally, neutral taxation in terms of public finance is best achieved if the revenues from environmental taxes are redistributed to the citizens as an ecological basic income. Thus, distortive effects of environmental charges in terms of distribution and political decision-making might be removed. However, such a financial framework could be introduced step by step, starting with a tax shift.
Most of the land reforms of recent decades have followed an approach of “formalization and capitalization” of individual land titles (de Soto 2000). However, within the privatization agenda, benefits of unimproved land (such as land rents and value capture) are reaped privately by well-organized actors, whereas the costs of valorization (e.g., infrastructure) or opportunity costs of land use changes are shifted onto poorly organized groups. Consequences of capitalization and formalization include rent seeking and land grabbing. In developing countries, formal law often transpires to work in favor of the winners of the titling process and is opposed by the customary rights of the losers. This causes a lack of general acknowledgement of formalized law (which is made responsible for deprivation of livelihoods of vulnerable groups) and often leads to a clash of formal and customary norms. Countries may fall into a state of de facto anarchy and “de facto open access”. Encroachment and destruction of natural resources may spread. A reframing of development policy is necessary in order to fight these aberrations. Examples and evidence are provided from Cambodia, which has many features in common with other countries in Asia and Sub-Saharan Africa in this respect.
Given demographic trends, the pay-as-you-go statutory pension insurance will soon no longer be able to secure an adequate standard of living in old age. By giving citizens an indirect stake in corporate wealth through funded old-age provision, the pension gap could be at least partially closed. In a new reform attempt, corporate wealth should also be fully included in the inheritance tax; however, the continued existence of the companies concerned must not be jeopardised. The inheritance tax on corporate wealth could then serve as a source of financing for a funded basic old-age pension.
For municipal ground leases, the ground rent is both a key lever for profitability and relevant under municipal-finance and state-aid law. It is usually determined by applying a suitable ground-rent rate to the land value, and that rate should be in line with the market. At least for ground leases on multi-family housing, however, deriving the rate either from the primary market (initial granting of ground leases) or from the secondary market (resales) is currently hardly feasible for this purpose. The property yield rate (Liegenschaftszinssatz) is also unsuitable, since it is derived from a model for full ownership with a completely different risk/return profile. The article therefore argues for greater use of economically based methods and presents a capital-market-oriented mark-to-model approach. Initial rough calculations also suggest long-term construction financing rates as a rule of thumb. Municipalities will typically entrust the determination of market-based ground-rent rates to publicly appointed and sworn or certified property valuers, to whom these methods are often unfamiliar. Their admissibility is also in question, since they have not yet become established best practice. Legislators and regulators would therefore be well advised to regulate the determination of market-based ground-rent rates explicitly and, beyond mark-to-market methods, to permit other suitable economically based methods as well as empirically validated rules of thumb.
In Germany, ground-lease contracts are very often designed on political grounds rather than with an eye to the market, and the acceptance of the ground lease suffers as a result. A key factor is the setting of the ground rent, which is often perceived as inappropriate given the low level of interest rates. At the same time, deriving "market-based" ground rents by way of comparison is difficult. The article therefore presents a practice-oriented approach, based on capital-market theory, for setting market-based ground rents. Central to it is the shift in the risk/return position that granting a ground lease entails compared with full ownership. Neither the ground lessor nor the leaseholder may be left worse off in this respect than under full ownership. This requirement is made concrete through the Sharpe ratio. So that the leaseholder is not worse off than under full ownership, his return needs to be "subsidised"; it is shown that this can be done without a loss in the risk/return position relative to full ownership. On the basis of these considerations, minimum required returns for the leaseholder and maximum rates for the ground lessor are calculated, both expressed relative to land values.
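The Sharpe-ratio condition described above — neither party should end up with a worse risk-adjusted position than under full ownership — can be stated in a few lines. All return and volatility figures below are invented for illustration:

```python
# Illustrative Sharpe-ratio comparison (all figures assumed, not from the
# article): a leasehold position is acceptable only if its risk-adjusted
# return is not below that of full ownership.
def sharpe(expected_return, risk_free, volatility):
    return (expected_return - risk_free) / volatility

full_ownership = sharpe(0.045, 0.01, 0.10)  # assumed full-property profile
leasehold = sharpe(0.040, 0.01, 0.08)       # assumed leaseholder profile

acceptable = leasehold >= full_ownership
```

Because the ground lease shifts risk between the parties, a lower absolute return can still satisfy the condition if volatility falls proportionally more.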
E-commerce live streaming - An emerging industry in China and a potential future trend in the world
(2021)
With the widespread use of the Internet, many industries have developed rapidly, and the Internet-based economy poses a significant threat to the traditional economy. Live streaming plus e-commerce, the result of combining live streaming with various industries through the Internet, has become an established part of the global economy. E-commerce live streaming is one of the most essential types of online live streaming. In this article, it is defined as live streaming on an e-commerce platform, used by Key Opinion Leaders or product sellers through the platform's built-in live streaming function to promote goods, brands, events, etc., in order to achieve the goals of brand exposure and product sales. Compared with the traditional economic model, the combined model of e-commerce and live streaming has its own advantages and characteristics, and this kind of marketing tool is now prevalent. However, there are many deficiencies in e-commerce live streaming that need to be improved, since its development is still immature and supervision of Internet use is still evolving.
This paper aims to collect and analyze various digital technologies connected to pharmacies and Health 4.0, with the goal of giving pharmacies basic recommendations for action so that they remain successful businesses in the digital future of healthcare. While the health sector as a whole is growing continuously, the total number of pharmacies is shrinking. To withstand the competitive pressure on the pharmaceutical market, pharmacies have to integrate more efficient digital technologies in order to improve the customer experience. Hence, the acceptance of and attitude towards digital health solutions in German society are examined using a short survey and a detailed questionnaire. Based on an analysis of the survey results and of the questionnaire answered by a pharmacist, specific digital methods and technologies that make sense for pharmacies are elaborated. As the future of pharmacies is still largely unexplored while the health market is shifting to more efficient digital solutions, pharmacies have to adapt to current developments quickly. This paper can therefore serve as a guideline for pharmacies in the rapid shift toward more digital markets.
In this paper, the radio frequency (RF) behavior of mechanically stressed coaxial and, for the first time, also twisted-pair transmission lines is investigated over their service life. The main goal is to enable predictive maintenance for cables in moving applications and to avoid preventive replacement, which also reduces the use of high-cost resources. For this purpose, stranded and solid-core variants of coaxial and twisted-pair cables are mechanically loaded on the two-pulley apparatus according to EN 50396. Their RF transmission (S21) behavior is measured using a vector network analyzer and presented over bending cycles. For the first time, the phase response of mechanically loaded transmission lines is evaluated with respect to their service life. Two significant causes of the increasing attenuation and altered phase response are identified: breakage of the foil screen and increasing surface roughness of the copper conductors. The identified causes are supported with evidence from the literature. Through measurements and theoretical calculations, it is shown that the phase is much more suitable than the amplitude for assessing the remaining service life. The findings can be used to implement a cable monitoring system in industrial environments that monitors the lines in situ and reminds the user to replace them whenever a certain wear level is reached.
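Why the phase can be the more sensitive wear indicator can be illustrated with synthetic data: if the phase drifts strongly relative to its measurement noise while the amplitude barely changes, the phase trend has a far higher signal-to-noise ratio. The degradation model and all numbers below are invented, not measured values from the paper:

```python
import numpy as np

# Invented degradation model: S21 phase drifts noticeably over bending
# cycles, amplitude hardly at all (illustration only, not measured data).
rng = np.random.default_rng(0)
cycles = np.arange(0, 100_000, 1000)
phase_deg = -35.0 - 1e-4 * cycles + rng.normal(0, 0.3, cycles.size)
s21_db = -1.2 - 1e-6 * cycles + rng.normal(0, 0.05, cycles.size)

def trend_snr(y, x):
    """Magnitude of the linear trend over the full range, relative to the
    residual noise — a simple proxy for how detectable the drift is."""
    coeffs = np.polyfit(x, y, 1)
    resid = y - np.polyval(coeffs, x)
    return abs(coeffs[0] * (x[-1] - x[0])) / resid.std(ddof=1)
```

Under these assumptions `trend_snr(phase_deg, cycles)` comes out far higher than `trend_snr(s21_db, cycles)`, mirroring the paper's conclusion that the phase is the better wear indicator.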
Research in global change ecology relies heavily on global climatic grids derived from estimates of air temperature in open areas at around 2 m above the ground. These climatic grids do not reflect conditions below vegetation canopies and near the ground surface, where critical ecosystem functions occur and most terrestrial species reside. Here, we provide global maps of soil temperature and bioclimatic variables at a 1-km2 resolution for 0–5 and 5–15 cm soil depth. These maps were created by calculating the difference (i.e. offset) between in situ soil temperature measurements, based on time series from over 1200 1-km2 pixels (summarized from 8519 unique temperature sensors) across all the world's major terrestrial biomes, and coarse-grained air temperature estimates from ERA5-Land (an atmospheric reanalysis by the European Centre for Medium-Range Weather Forecasts). We show that mean annual soil temperature differs markedly from the corresponding gridded air temperature, by up to 10°C (mean = 3.0 ± 2.1°C), with substantial variation across biomes and seasons. Over the year, soils in cold and/or dry biomes are substantially warmer (+3.6 ± 2.3°C) than gridded air temperature, whereas soils in warm and humid environments are on average slightly cooler (−0.7 ± 2.3°C). The observed substantial and biome-specific offsets emphasize that the projected impacts of climate and climate change on near-surface biodiversity and ecosystem functioning are inaccurately assessed when air rather than soil temperature is used, especially in cold environments. The global soil-related bioclimatic variables provided here are an important step forward for any application in ecology and related disciplines. Nevertheless, we highlight the need to fill remaining geographic gaps by collecting more in situ measurements of microclimate conditions to further enhance the spatiotemporal resolution of global soil temperature products for ecological applications.
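The offset computation at the heart of the mapping above is a per-pixel difference between in situ soil temperature and the gridded air temperature. The values below are invented purely to illustrate the arithmetic and the summary statistic:

```python
import numpy as np

# Invented per-pixel values for illustration only.
soil_t = np.array([4.2, 8.1, 15.3, 9.0])   # mean annual soil T [°C]
air_t = np.array([1.0, 6.2, 16.0, 5.5])    # ERA5-Land air T, same pixels [°C]

# The bioclimatic offset: positive where soils are warmer than the air grid.
offset = soil_t - air_t
print(f"mean offset: {offset.mean():.2f} ± {offset.std(ddof=1):.2f} °C")
```

Aggregating such offsets per biome and season yields summaries like the +3.6 ± 2.3°C reported above for cold and/or dry biomes.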
Gait analysis is the systematic study of human movement. Combining wearable foot pressure sensors with machine learning (ML) solutions for high-fidelity body pose tracking from RGB video frames could reveal more insights into gait abnormalities. However, accurate detection of heel strike (HS) and toe-off (TO) events is crucial to compute interpretable gait parameters. In this work, we present an experimental platform to study the timing of gait events using a new wearable foot pressure sensor (ActiSense System, IEE S.A., Luxembourg) and Google's open-source ML solution MediaPipe Pose. For this purpose, two StereoPi systems were built to capture stereoscopic videos and images in real time. MediaPipe Pose was applied to the synchronized StereoPi cameras, and two algorithms (ALs) were developed to detect HS and TO events for gait analysis. Preliminary results from a healthy subject walking on a treadmill show a mean relative deviation across all time spans of less than 4% for the ActiSense device and less than 16% for AL2 (33% for AL1) employing MediaPipe Pose on StereoPi videos. Finally, this work offers a platform for the development of sensor- and video-based ALs to automatically identify the timing of gait events in healthy individuals and those with gait disorders.
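A video-based gait-event rule of the kind mentioned above can be sketched as peak detection on a pose keypoint trajectory. The rule below is a hypothetical illustration, not the paper's AL1 or AL2: heel strikes are taken as local maxima of the heel keypoint's image y-coordinate (image y grows downward, so maxima correspond to the foot's lowest position).

```python
# Hypothetical heel-strike detector on a heel-keypoint trajectory such as
# MediaPipe Pose would provide (illustrative rule, not the paper's ALs).
def detect_heel_strikes(heel_y, fps, min_interval_s=0.6):
    """heel_y: vertical heel position per frame (larger = lower in image).
    Returns heel-strike times in seconds."""
    events = []
    for i in range(1, len(heel_y) - 1):
        # local maximum of the image y-coordinate = foot at its lowest
        if heel_y[i] >= heel_y[i - 1] and heel_y[i] > heel_y[i + 1]:
            t = i / fps
            # refractory period: ignore peaks closer than one plausible step
            if not events or t - events[-1] >= min_interval_s:
                events.append(t)
    return events
```

On a synthetic 1 Hz gait signal sampled at 30 fps, the detector returns one event per stride, spaced one second apart.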
Since the tangible assets of companies are becoming increasingly insignificant, emphasis should rather be placed on human capital as an essential source of competitive edge. This paper accordingly aims to shed light on the major demands that Millennials place on their prospective employers. It seeks to identify attractiveness factors that German retailers should particularly promote in order to succeed in the war for talent and attract the most promising candidates among the German Gen Y. The work is based on a mixed-methods approach. First, interviews with German retail experts as well as generational keynote speakers were conducted in order to obtain a deep understanding and assessment of the German retail landscape from a professional perspective. The insights gained were subsequently used to design a questionnaire, whose distribution led to a final sample of 216 usable responses by Millennials. The data obtained from the expert interviews and the survey were then compared in order to evaluate to what extent the expectations of the Millennials correspond to the experts' assessment. This study reveals Millennials to be driven by growth needs, such as a wide offer of development opportunities or scope for decision-making, when choosing an employer. Among the relatedness needs, a harmonious working environment is particularly important, whereas a weekend off ranks first among the existential needs. Moreover, male Millennials consider Media Markt the most popular employer in the German retail sector, while dm is preferred from a female perspective. Overall, employers in the German retail sector provide the majority of the factors demanded by Millennials, yet retail is only considered the fourth most popular industry, behind the automotive, IT, and art and entertainment industries.
Our findings provide valuable practical implications, as the results might help companies build a target-group-specific employer brand. Marketing strategies can be aligned with the identified attractiveness factors to efficiently and cost-effectively attract and bind Millennials to the company. Customized recruiting campaigns enhance the appeal and attractiveness of an employer, raising the likelihood of attaining the coveted status of Employer of Choice. To the best of the author's knowledge, no study has yet dealt specifically with the attractiveness factors demanded by Millennials in the context of the German retail sector or with their most aspired-to employers in this industry. Furthermore, the attractiveness factors identified in the literature were embedded in Alderfer's ERG theory. The work also offers a bilateral perspective through the survey conducted among Millennials, which was additionally expanded through the lens of experts.
Purpose: Grounded in the theoretical concepts of utilitarianism and deontology, this paper aims to evaluate the issue of child labour from an ethics perspective. By linking utilitarianism with normative stakeholder theory, relevant stakeholder groups are identified in order to examine their influence on, and role in, the occurrence of child labour, allowing for a practical reference. The findings may serve companies in particular as a basis for decision-making in the development of their value chains.
Design/Methodology/Approach: The author uses a literature review to analyze the findings of existing literature on the topic of child labour in an ethics context, drawing on literature indexed in Web of Science and Google Scholar and employing forward and backward citation analysis.
Findings: The investigation of child labour in terms of ethics yields conflicting results. From a deontological perspective, child labour can never be ethical and should always be rejected, as it cannot be willed to become a universal law. In contrast, from a utilitarian standpoint, child labour is ethically justifiable as long as the beneficiaries of the labour are greater in number than the children working or suffering.
Originality/Value: The examination of child labour from the perspective of deontology and utilitarianism in conjunction with normative stakeholder theory constitutes a novelty in the ethics literature. The integration of theoretical findings into a practical business context provides additional value for managers and global supply chain managers.
The dark side of Samsung’s value chain: The human costs of cobalt mining “BLOOD, SWEAT AND COBALT”
(2021)
Samsung has been implicitly linked to human rights abuses and wider social downgrading propagated within the Democratic Republic of Congo (DRC). Reports by different studies have shown artisanal cobalt mines (ASM) to exploit child labour and subject workers to perilous conditions. The IT multinational is dependent upon Congolese cobalt as a key element in the lithium-ion batteries used to produce its array of electronics. However, irresponsible cobalt sourcing practices undertaken by the Tier 1 suppliers Glencore and Huayou have resulted in ASM operations being incorporated into Samsung’s global value chain as Tier 2 suppliers. Analysis of the relationships underpinning Samsung’s cobalt value chain within a theoretical framework highlights the presence of a relational governance structure, with captive elements among upstream Tier 1 and Tier 2 suppliers. Samsung is thereby reliant upon both Glencore and Huayou to transmit and enforce private codes of conduct down the value chain to eliminate human rights abuses. In conjunction, the DRC’s weak and unstable institutional environment has facilitated corruption and the improper enforcement of laws across the ASM industry. It is thereby imperative that Samsung takes ownership of the issues present within its value chain, as both Tier 1 suppliers and the Congolese government have failed to ensure responsible cobalt sourcing practices to date. This report recommends that Samsung adopt a holistic action plan, utilising not only its own resources and capabilities but also those of critical stakeholders, including Tier 1 suppliers, NGOs, and the DRC and South Korean governments. Most prominently, this report suggests that supply chain transparency can be improved using certificates of origin and blockchain technology.
Furthermore, it is recommended that poverty alleviation is targeted as a key measure through “Cobalt for Development”, an action plan designed to instigate both social and economic upgrading within ASM operations and the wider community. By employing a multi-scalar approach and addressing the issues inherent across multiple governance levels, Samsung can ensure a responsible source of cobalt be sustained.
Containerization is one of the most important topics for modern data centers and web developers. Since the number of containers on single- and multi-node systems is growing, knowledge about the energy consumption behavior of individual web-service containers is essential in order to save energy and, of course, money. In this article, we show how the energy consumption behavior of single containerized web services/web apps changes when replicas of the service are created in order to scale and load-balance the web service.
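The kind of per-replica comparison described here can be sketched minimally: integrate each container's power trace over time and compare total energy at different replica counts. The traces, the one-second sampling interval, and all numbers below are invented for illustration (in practice, samples might come from RAPL counters or an external power meter), not taken from the article's measurements:

```python
# Sketch (assumption): per-container power samples are already available;
# the traces and the 1 s sampling interval below are purely illustrative.

def energy_joules(power_watts, interval_s):
    """Trapezoidal integration of a power trace (W) sampled every interval_s."""
    return sum((a + b) / 2 * interval_s
               for a, b in zip(power_watts, power_watts[1:]))

# Hypothetical traces for 1 vs. 3 replicas serving the same total load
single = [12.0, 14.5, 13.8, 12.9]       # W, one container
scaled = [[6.1, 6.4, 6.2, 6.0]] * 3     # W, three replicas

e_single = energy_joules(single, 1.0)
e_scaled = sum(energy_joules(t, 1.0) for t in scaled)
print(f"1 replica: {e_single:.1f} J, 3 replicas: {e_scaled:.1f} J")
```

Comparing `e_single` and `e_scaled` over identical workloads is one simple way to express the scaling behavior the article studies.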
Context: In the framework of studying cosmic microwave background polarization and characterizing its Galactic foregrounds, the angular power spectrum analysis of the thermal dust polarization map has led to intriguing evidence of an E/B asymmetry and a positive TE correlation. The interpretation of these observations is the subject of theoretical and simulation-driven studies in which the correlation between the density structure of the interstellar medium (ISM) and the magnetic field appears to be a key aspect. In this context, and when the magnetized ISM structures are modeled in three dimensions, dust clouds are generally considered to be filamentary structures only, but both filamentary and sheet-like shapes are supported by observational and theoretical evidence.
Aims: We aim to study the influence of the cloud shape and its connection to the local magnetic field, as well as the influence from the viewing angle, on the angular power spectra measured on thermal dust polarization maps; we specifically focus on the dependence of the E/B power asymmetry and TE correlation.
Methods: To this end, we simulated realistic interstellar clouds with both filament-like and sheet-like shapes using the software ASTERION, which also allowed us to generate synthetic maps of thermal dust polarized emission with an area of 400 square degrees. Then, we computed their polarization power spectra in the multipole range ℓ ∈ [100, 500] and focused on the E/B power asymmetry, quantified through the ℛEB ratio, and the correlation coefficient rTE between T and E modes. We quantified the dependence of ℛEB and rTE values on the offset angle (between the longest cloud axis and local magnetic field lines) and inclination angle (between the line of sight and the magnetic field) for both types of cloud shapes, either embedded in a regular magnetic field or coupled to a nonregular field to mimic turbulence.
Results: We find that both types of cloud shapes cover the same regions of the (ℛEB, rTE) parameter space. The dependence on the inclination and offset angles is similar for both shapes, although sheet-like structures generally show larger scatter than filamentary structures. In addition to the known dependence on the offset angle, we find a strong dependence of ℛEB and rTE on the inclination angle.
Conclusions: The very fact that filament-like and sheet-like structures may lead to polarization power spectra with similar (ℛEB,rTE) values complicates their interpretation. We argue that interpreting them solely in terms of filament characteristics is risky, and in future analyses, this degeneracy should be accounted for, as should the connection to the magnetic field geometry. Our results based on maps of 400 square degrees clarify that the overall geometrical arrangement of the magnetized ISM surrounding the observer leaves its marks on polarization power spectra.
For the assessment of human reaction time, a test environment was developed. The system consists of an embedded device with organic light-emitting diode (OLED) displays and push buttons for the combined presentation of visual stimuli and registration of the haptic human reaction. The test leader can define the test sequence with the aid of a graphical user interface (GUI) on a personal computer (PC). The system was validated by measuring the latency times of the whole chain, which are determined by the specific hardware and software configuration. By investigating the display’s light emission with a photodiode and recording the current consumption, latency times and their variance were specified. In the fastest mode, the system can reach an error limit of 60 μs.
The photo-Dember effect is a source of impulsive THz emission following femtosecond pulsed optical excitation. This emission results from the ultrafast spatial separation of electron-hole pairs in strong carrier gradients due to their different diffusion coefficients. The associated time-dependent polarization is oriented perpendicular to the excited surface, which is unfavorable for efficient out-coupling of THz radiation. We propose a scheme for generating strong carrier gradients parallel to the excited surface. The resulting photo-Dember currents are oriented in the same direction and emit THz radiation in the favorable direction perpendicular to the surface. This effect is demonstrated for GaAs and In(0.53)Ga(0.47)As. Surprisingly, the photo-Dember THz emitters provide higher bandwidth than photoconductive emitters. Multiplexing of phase-coherent photo-Dember currents by periodically tailoring the photoexcited spatial carrier distribution gives rise to strongly enhanced THz emission, which reaches electric field amplitudes comparable to a high-efficiency externally biased photoconductive emitter.
Globally networked supply chains (SCs) lower companies’ costs, but at the same time they increase dependence on suppliers and make the SCs more susceptible to disruption. In recent years, the uncertainties facing SCs have also increased sharply, driven among other things by Brexit, trade conflicts, and the Corona pandemic. Against this background, companies are increasingly focusing on the development of new SC strategies, with strong attention paid to improving the resilience of SCs in order to reduce, among other things, the risks they face. This work deals with the effects of rising uncertainty on the design and resilience of SCs and aims to determine whether changes in the SC are required to respond to the effects of rising uncertainty, and how resilience can be ensured in the future (trade-off between resilience and cost efficiency). The investigation used qualitative research in the form of expert interviews, since these allow, among other things, specific opinions as well as reasonings and attitudes of particular persons on the topics at hand to be elicited. The results show that costs usually remain the decisive aspect and that more transparency, more flexibility, and better risk management are needed. Furthermore, the future will require, among other things, greater consideration of uncertainties, higher safety stocks, reduced SC complexity, and possibly more local sourcing. A further investigation into the costs that can arise from resilience instruments as well as from a lack of resilience is recommended.
Companies have made considerable progress in assessing the sustainability of their processes and products, including in the information and communication technology (ICT) sector. However, it is surprising that little attention has been given to the sustainability performance of software products. For this article, we chose a case study approach to explore the extent to which software manufacturers have considered sustainability criteria for their products. We selected a manufacturer of sustainability management software on the assumption that they would be more likely to integrate elements of sustainability performance into their products. In the case study, we applied a previously developed set of criteria for sustainable software (SCSS), using a questionnaire and experiments, to assess a web-based sustainability management software product regarding its sustainability performance. The assessment finds that, despite a sustainability-conscious manufacturer, a systematic assessment of the sustainability of software products is missing in the case study. This implies that sustainability assessment for software products is still novel, corresponding knowledge is missing, and suitable tools are not yet being widely applied in the industry. The SCSS presents a suitable approach to close this gap, but it requires further refinement, for example regarding its applicability to web-based software on external servers.
Sustainable software products - Towards assessment criteria for resource and energy efficiency
(2018)
Many authors have proposed criteria to assess the “environmental friendliness” or “sustainability” of software products. However, a causal model that links observable properties of a software product to conditions of it being green or (more general) sustainable is still missing. Such a causal model is necessary because software products are intangible goods and, as such, only have indirect effects on the physical world. In particular, software products are not subject to any wear and tear, they can be copied without great effort, and generate no waste or emissions when being disposed of. Viewed in isolation, software seems to be a perfectly sustainable type of product. In real life, however, software products with the same or similar functionality can differ substantially in the burden they place on natural resources, especially if the sequence of released versions and resulting hardware obsolescence is taken into account. In this article, we present a model describing the causal chains from software products to their impacts on natural resources, including energy sources, from a life-cycle perspective. We focus on (i) the demands of software for hardware capacities (local, remote, and in the connecting network) and the resulting hardware energy demand, (ii) the expectations of users regarding such demands and how these affect hardware operating life, and (iii) the autonomy of users in managing their software use with regard to resource efficiency. We propose a hierarchical set of criteria and indicators to assess these impacts. We demonstrate the application of this set of criteria, including the definition of standard usage scenarios for chosen categories of software products. We further discuss the practicability of this type of assessment, its acceptability for several stakeholders and potential consequences for the eco-labeling of software products and sustainable software design.
The number of additive manufacturing methods and materials is growing rapidly, leaving gaps in the knowledge of specific material properties. A relatively recent addition is the metal-filled filament to be printed similarly to the fused filament fabrication (FFF) technology used for plastic materials, but with additional debinding and sintering steps. While tensile, bending, and shear properties of metals manufactured this way have been studied thoroughly, their fatigue properties remain unexplored. Thus, the paper aims to determine the tensile, fatigue, and impact strengths of Markforged 17-4 PH and BASF Ultrafuse 316L stainless steel to answer whether metal FFF can be used for structural parts safely with the current state of technology. They are compared to two 316L variants manufactured via selective laser melting (SLM) and literature results. For extrusion-based additive manufacturing methods, a significant decrease in tensile and fatigue strength is observed compared to specimens manufactured via SLM. Defects created during extrusion and by the pathing scheme, which cause a rough surface and internal voids that act as local stress risers, account for the strength decrease. The findings cast doubt on whether the metal FFF technique can be safely used for structural components; therefore, further developments are needed to reduce internal material defects.
Railroads, roads, rivers, and airways are the most common modes of transportation for people and commodities. The cost of different modes of transportation varies according to distance, luxury, size, fragility, and other factors. When these factors are accounted for, transportation can become prohibitively expensive for many individuals. A new means of conveyance has therefore been developed: Elon Musk initially proposed it as the fifth mode of transportation in 2012. For commuters and goods, the Hyperloop offers a quick and cost-effective way of transportation. The Hyperloop is essentially a vacuum-tube train that transports people or products at very high speeds and with high efficiency. Compared to traditional forms of transportation, the Hyperloop is attractive since it is highly energy-efficient, quiet, and self-contained. Increased cargo delivery speed will be the most evident benefit of this idea to industry. The Hyperloop also has the potential to make a significant contribution to green supply chains: it is a carbon-free form of transportation that could transform inland freight transportation as well as maritime and air freight transit, and it can move freight below ground, above ground, and underwater. The aim of this paper is to explain this new, innovative technology as a development for logistics concepts.
Background: The STarT-MSK-Tool is an adaptation of the well established STarT-Back-Tool, used to risk-stratify patients with a wider range of musculoskeletal presentations.
Objective: To formally translate and cross-culturally adapt the Keele STarT-MSK risk stratification tool into German (STarT-MSKG) and to establish its reliability and validity.
Methods: A formal, multi-step, forward and backward translation approach was used. To assess validity, patients aged ≥ 18 years with acute, subacute or chronic musculoskeletal presentations in the lumbar spine, hip, knee, shoulder, or neck were included. A prospective cohort design was used, with initial data collected electronically at the point of consultation. Retest and 6-month follow-up questionnaires were sent by email. Test-retest reliability, construct validity, discriminative ability, predictive ability and floor or ceiling effects were analysed using the intraclass correlation coefficient and comparisons with a reference standard (Orebro Musculoskeletal Pain Questionnaire: OMPQ) using correlations, ROC curves and regression models.
Results: The participants’ (n = 287) mean age was 47 (SD = 15.8) years, 51% were female, with 48.8% at low, 43.6% at medium, and 7.7% at high risk. With ICC = 0.75 (95% CI 0.69; 0.81), test-retest reliability was good. Construct validity was good, with correlations for the STarT-MSKG-Tool against the OMPQ-Tool of rs = 0.74 (95% CI 0.68, 0.79). The ability of the tool [comparison OMPQ] to predict 6-month pain and disability was acceptable with AUC = 0.77 (95% CI 0.71, 0.83) [OMPQ = 0.74] and 0.76 (95% CI 0.69, 0.82) [OMPQ = 0.72], respectively. However, the explained variance for predicting 6-month pain (linear = 21% [OMPQ = 17%]; logistic = 29%) and disability (linear = 20% [OMPQ = 19%]; logistic = 26%), whilst comparable to the existing OMPQ reference standard, fell short of the a priori target of ≥ 30%.
Conclusions: The German version of the STarT-MSK-Tool is a valid instrument for use across multiple musculoskeletal conditions and is available for use in clinical practice. Comparison with the OMPQ suggests it is a good alternative.
Stratified care for low back pain (LBP) has been shown to be clinically- and cost-effective in the UK, but its transferability to the German healthcare system is unknown. This study explores LBP patients’ perspectives regarding future implementation of stratified care, through in-depth interviews (n = 12). The STarT-Back-Tool was completed by participants prior to interviews. Interview data were analysed using Grounded Theory. The overarching theme identified from the data was ‘treatment-success’, with subthemes of ‘assessment and treatment planning’, ‘acceptance of the questionnaire’ and ‘contextual factors’. Patients identified the underlying cause of pain as being of great importance (whereas STarT-Back allocates treatment based on prognosis). The integration of the STarT-Back-Tool in consultations was considered helpful as long as it does not disrupt the therapeutic relationship, and was acceptable if tool results are handled confidentially. Results indicate that for patients to find STarT-Back acceptable, the shift from a focus on identifying a cause of pain and subsequent diagnosis, to prediction-orientated treatment planning, must be made clear. Patient ‘buy in’ is important for successful uptake of clinical interventions, and findings can help to inform future strategies for implementing STarT-Back in Germany, as well as having potential implications for transferability to other similar healthcare systems.
Background: The STarT-Back-Approach (STarT: Subgroups for Targeted Treatment) was developed in the UK and has demonstrated clinical and cost effectiveness. Based on the results of a brief questionnaire, patients with low back pain are stratified into three treatment groups. Since the organisation of physiotherapy differs between Germany and the UK, the aim of this study is to explore German physiotherapists’ views and perceptions about implementing the STarT-Back-Approach.
Methods: Three two-hour think-tank workshops with physiotherapists were conducted. Focus groups, using a semi-structured interview guideline, followed a presentation of the STarT-Back-Approach, with discussions audio recorded, transcribed and qualitatively analysed using content analysis.
Results: Nineteen physiotherapists participated (15 female, mean age 41.2 (SD 8.6) years). Three main themes emerged, each with multiple subthemes: 1) the intervention (15 subthemes), 2) the healthcare context (26 subthemes) and 3) individual characteristics (8 subthemes). Therapists’ perceptions of the extent to which the STarT-Back intervention would require changes to their normal clinical practice varied considerably. They felt that within their current healthcare context, there were significant financial disincentives that would discourage German physiotherapists from providing the STarT-Back treatment pathways, such as the early discharge of low-risk patients with supported self-management materials. They also discussed the need for appropriate standardised graduate and post-graduate skills training for German physiotherapists to treat high-risk patients with a combined physical and psychological approach (e.g., communication skills).
Conclusions: Whilst many German physiotherapists are positive about the STarT-Back-Approach, there are a number of substantial barriers to implementing the matched treatment pathways in Germany. These include financial disincentives within the healthcare system to early discharge of low-risk patients. Therapists also highlighted the need for solutions in respect of scalable physiotherapy training to gain skills in combined physical and psychological approaches.
Background: Stratified care is an up-to-date treatment approach suggested for patients with back pain in several guidelines. A comprehensively studied stratification instrument is the STarT Back Tool (SBT). It was developed to stratify patients with back pain into three subgroups, according to their risk of persistent disabling symptoms. The primary aim was to analyse the disability differences in patients with back pain 12 months after inclusion according to the subgroups determined at baseline using the German version of the SBT (STarT-G). Moreover, the potential to improve prognosis for disability by adding further predictor variables, an analysis for differences in pain intensity according to the STarT-Classification, and discriminative ability were investigated.
Methods: Data from the control group of a randomized controlled trial were analysed. Trial participants were members of a private medical insurance with a minimum age of 18 and indicated as having persistent back pain. Measurements were made for the risk of back pain chronification using the STarT-G, disability (as primary outcome) and back pain intensity with the Chronic Pain Grade Scale (CPGS), health-related quality of life with the SF-12, psychological distress with the Patient Health Questionnaire-4 (PHQ-4) and physical activity. Analysis of variance (ANOVA), multiple linear regression, and area under the curve (AUC) analysis were conducted.
Results: The mean age of the 294 participants was 53.5 (SD 8.7) years, and 38% were female. The ANOVA for disability and pain showed significant differences (p < 0.01) among the risk groups at 12 months. Post hoc Tukey tests revealed significant differences among all three risk groups for every comparison for both outcomes. AUC for STarT-G’s ability to discriminate reference standard ‘cases’ for chronic pain status at 12 months was 0.79. A prognostic model including the STarT-Classification, the variables global health, and disability at baseline explained 45% of the variance in disability at 12 months.
Conclusions: Disability differences in patients with back pain after a period of 12 months are in accordance with the subgroups determined using the STarT-G at baseline. Results should be confirmed in a study developed with the primary aim to investigate those differences.
The integration of genetic algorithms to optimize the networks of value chains could enormously improve the performance of supply chains. For this reason, this paper describes in more detail the application of genetic algorithms to the value chains of the automotive industry. For this purpose, a theoretical model is built to evaluate whether its application can optimize the value chain. This option is described and analyzed, and its restrictions are shown. Instead of looking at the entire network, individual finished goods and their bills of material are used as the basis for optimization, which greatly reduces the complexity of the original problem. The original complexity of the supply chain networks can thus be reduced and considered on the basis of the bill of material.
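To make the BOM-level idea concrete, here is a minimal, hypothetical sketch of such a genetic algorithm: each gene selects a supplier for one bill-of-material item, and fitness is total sourcing cost. The cost table, operators, and parameters are invented for illustration and are not taken from the paper's model:

```python
import random

random.seed(42)  # deterministic toy run

# Hypothetical data: cost of sourcing each of 5 BOM items from 3 suppliers.
COSTS = [[4.0, 3.2, 5.1],
         [2.5, 2.9, 2.2],
         [7.8, 6.9, 7.1],
         [1.4, 1.6, 1.3],
         [3.3, 3.1, 3.6]]

def fitness(chrom):
    """Total sourcing cost of a supplier assignment (lower is better)."""
    return sum(COSTS[i][s] for i, s in enumerate(chrom))

def tournament(pop, k=3):
    """Pick the cheapest of k randomly sampled chromosomes."""
    return min(random.sample(pop, k), key=fitness)

def crossover(a, b):
    """Single-point crossover of two parent chromosomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    """Reassign each gene to a random supplier with probability `rate`."""
    return [random.randrange(3) if random.random() < rate else g for g in chrom]

pop = [[random.randrange(3) for _ in COSTS] for _ in range(20)]
for _ in range(50):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(20)]

best = min(pop, key=fitness)
```

Restricting the chromosome to one finished good's bill of material, as the paper proposes, is what keeps the search space small compared to encoding the entire network.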
Fuzzy system based on two-step cascade genetic optimization strategy for tobacco tar prediction
(2019)
There are many challenges in accurately measuring cigarette tar constituents, including the need for standardized smoke generation methods related to unstable mixtures. In this research, algorithms were developed using a fusion of artificial intelligence methods to predict tar concentration. The outputs of this development are three fuzzy structures optimized with genetic algorithms (GA), resulting in GA-FUZZY, GA-adaptive neuro-fuzzy inference system (ANFIS), and GA-GA-FUZZY algorithms. The proposed algorithms are used for tar prediction in the cigarette production process. The prediction results are compared with chromatography (HPLC) readings.
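As a rough illustration of the fuzzy building block involved (not the paper's actual GA-FUZZY system), the sketch below evaluates triangular membership functions and defuzzifies with a weighted average; in a GA-tuned scheme, the triangle corner points and consequents would be the parameters encoded in the chromosomes. All numbers are invented:

```python
def tri(x, a, b, c):
    """Triangular membership with corners a <= b <= c."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# One hypothetical input (e.g. a normalized sensor reading) mapped to three
# fuzzy sets, each with a crisp tar-level consequent in mg (illustrative).
SETS = [(0.0, 0.0, 0.5, 4.0),   # low  -> 4 mg
        (0.2, 0.5, 0.8, 9.0),   # mid  -> 9 mg
        (0.5, 1.0, 1.0, 14.0)]  # high -> 14 mg

def predict_tar(x):
    """Sugeno-style weighted average of rule consequents."""
    weighted = [(tri(x, a, b, c), out) for a, b, c, out in SETS]
    total = sum(m for m, _ in weighted)
    return sum(m * out for m, out in weighted) / total if total else 0.0
```

A GA would then search over the `(a, b, c, out)` tuples to minimize prediction error against reference readings.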
One key to successful and fluent human-robot collaboration in disassembly processes is equipping the robot system with higher autonomy and intelligence. In this paper, we present an informed software agent that controls the robot behavior to form an intelligent robot assistant for disassembly purposes. Since the disassembly process primarily depends on the product structure, we inform the agent through product models using a generic approach. The product model is then transformed into a directed graph and used to build, share and define a coarse disassembly plan. To refine the workflow, we formulate the problem of loosening a connection and distributing the work as a search problem. The resulting detailed plan consists of a sequence of actions that are used to call, parametrize and execute robot programs to fulfill the assistance task. The aim of this research is to equip robot systems with the knowledge and skills to act autonomously in their assistance, and thereby to improve the ergonomics of disassembly workstations.
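The coarse-plan step (product model → directed graph → feasible removal sequence) can be sketched with a topological sort; the toy product structure and part names below are hypothetical, and the paper's agent certainly refines plans far beyond this:

```python
from collections import deque

# Hypothetical product model: an edge "a -> b" means part a must be
# removed before part b becomes accessible.
PRECEDENCE = {
    "screws":  ["cover"],              # screws block the cover
    "cover":   ["battery", "board"],   # cover blocks battery and board
    "battery": [],
    "board":   [],
}

def disassembly_plan(prec):
    """Kahn's algorithm: a coarse plan is any topological order of the graph."""
    indeg = {p: 0 for p in prec}
    for blocked in prec.values():
        for p in blocked:
            indeg[p] += 1
    queue = deque(p for p, d in indeg.items() if d == 0)
    order = []
    while queue:
        part = queue.popleft()
        order.append(part)
        for nxt in prec[part]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(prec):
        raise ValueError("cyclic precedence - no feasible sequence")
    return order

plan = disassembly_plan(PRECEDENCE)
```

Each entry of `plan` could then be mapped to a parametrized robot program, with the finer "how to loosen this connection and who does it" question handled by the search step the paper describes.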
Irrigated paddy rice agriculture accounts for a major share of the Asia-Pacific region’s total water withdrawal. Furthermore, climate-change-induced water scarcity in the Asia-Pacific region is projected to intensify in the near future. Therefore, methods to reduce water consumption through efficiency measures are needed to ensure long-term (water) sustainability. The subak irrigation system of Karangasem, Indonesia, and the tameike system of Kunisaki, Japan, are two examples of sustainable paddy rice irrigation. This research, through interviews and an extensive survey, comparatively assessed the socio-environmental sustainability of the two irrigation management systems, with special reference to the intensity and nature of social capital, equity of water distribution, water demand, water footprint, and water quality. The prevailing social capital paradigm of each system was also compared to its overall managerial outcomes to analyze how cooperative action contributes to sustainable irrigation management. Both systems show a comparable degree of sustainable irrigation management, ensure an equitable use of water, and maintain relatively fair water quality due to the land-use practices adopted. However, the systems differ in water demand and water efficiency, principally because of differences in their irrigation management strategies: human versus structural. These findings could help devise mechanisms for transitioning to sustainable irrigation management in the commercially oriented paddy rice agricultural systems across the Asia-Pacific region.
This study describes how the robotics industry has evolved, how a new phase of advanced robotics has emerged, and the relationship between humans and robots sharing the same workplace. Designing safer robots for human-machine interaction systems is an urgent research topic in the field of industrial robotics. Many of the problems in industrial robotics are related not just to technological issues but also to human factors; human-robot collaboration is therefore also discussed, as an effective method to tackle this issue is the introduction of collaborative robots.
Background: To facilitate access to evidence-based care for back pain, a German private medical insurance offered a health program proactively to their members. Feasibility and long-term efficacy of this approach were evaluated.
Methods: Using Zelen’s design, adult members of the health insurance with chronic back pain according to billing data were randomized to the intervention group (IG) or the control group (CG). Participants allocated to the IG were invited to participate in the comprehensive health program comprising medical exercise therapy and lifestyle coaching, and those allocated to the CG were invited to a longitudinal back pain survey. Primary outcomes were back pain severity (Korff’s Chronic Pain Grade Questionnaire) as well as health-related quality of life (SF-12), assessed by identical online questionnaires at baseline and 2-year follow-up in both study arms. In addition to analyses of covariance, a subgroup analysis explored the heterogeneity of treatment effects among different risks of back pain chronification (STarT Back Tool).
Results: Out of 3462 persons selected, randomized and thereafter contacted, 552 agreed to participate. At the 24-month follow-up, data were available for 189 of 258 (73.3%) participants in the IG and for 255 of 294 (86.7%) in the CG. Significant, small beneficial effects were seen in the primary outcomes: compared to the CG, the IG reported less disability (1.6 vs 2.0; p = 0.025; d = 0.24) and scored better on the SF-12 physical health scale (43.3 vs 41.0; p < 0.007; d = 0.26). No effect was seen in back pain intensity or in the SF-12 mental health scale. Persons with a medium or high risk of back pain chronification at baseline responded better to the health program in all primary outcomes than the subgroup with low risk at baseline.
Conclusions: After 2 years, the proactive health program resulted in small positive long-term improvements. Using risk screening prior to inclusion in the health program might increase the percentage of participants deriving benefits from it.
The data presented here contain information on cheating behavior from experiments and general self-reported attitudes related to honesty-related social norms and trust, together with individual-level demographic variables. Our sample included 493 university students in five countries, namely, Germany, Vietnam, Taiwan, China, and Japan. The experiment was monetarily incentivized based on the performance on a matrix task. The participants also answered a survey questionnaire. The dataset is valuable for academic researchers in sociology, psychology, and economics who are interested in honesty, norms, and cultural differences.
Deep brain stimulation (DBS) is a neurosurgical intervention where electrodes are permanently implanted into the brain in order to modulate pathologic neural activity. The post-operative reconstruction of the DBS electrodes is important for efficient stimulation parameter tuning. A major limitation of existing approaches for electrode reconstruction from post-operative imaging, which prevents their use in clinical routine, is that they are manual or semi-automatic, and thus both time-consuming and subjective. Moreover, the existing methods rely on a simplified model of a straight-line electrode trajectory, rather than the more realistic curved trajectory. The main contribution of this paper is that, for the first time, we present a highly accurate and fully automated method for electrode reconstruction that considers curved trajectories. The robustness of our proposed method is demonstrated using a multi-center clinical dataset consisting of N = 44 electrodes. In all cases, the electrode trajectories were successfully identified and reconstructed. In addition, the accuracy is demonstrated quantitatively using a high-accuracy phantom with known ground truth. In the phantom experiment, the method could detect individual electrode contacts with high accuracy and the trajectory reconstruction reached an error level below 100 μm (0.046 ± 0.025 mm). An implementation of the method is made publicly available such that it can directly be used by researchers or clinicians. This constitutes an important step towards future integration of lead reconstruction into standard clinical care.
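The abstract does not spell out the authors' reconstruction algorithm; purely to illustrate the idea of fitting a curved rather than straight trajectory model, the sketch below fits three independent quadratics to synthetically generated, hypothetical artifact centroids (all coordinates and noise levels are made up, not real CT data):

```python
import numpy as np

# Hypothetical artifact centroids along a slightly curved electrode,
# ordered from entry point to tip -- synthetic, NOT real imaging data.
t = np.linspace(0.0, 1.0, 20)                        # normalized depth
true_pts = np.stack([40 + 5 * t + 1.5 * t ** 2,      # x: gentle bend
                     55 - 10 * t,                    # y
                     70 - 30 * t + 2.0 * t ** 2],    # z
                    axis=1)
rng = np.random.default_rng(0)
noisy = true_pts + rng.normal(scale=0.2, size=true_pts.shape)

# Model the trajectory as three independent quadratics in t
# (a curved polynomial model instead of a straight line).
coeffs = [np.polyfit(t, noisy[:, k], deg=2) for k in range(3)]

def curve(s):
    """Evaluate the reconstructed trajectory at parameter value(s) s."""
    return np.stack([np.polyval(c, s) for c in coeffs], axis=-1)

tip = curve(1.0)  # reconstructed tip position
mean_err = np.linalg.norm(curve(t) - true_pts, axis=1).mean()
print(tip.round(1), round(mean_err, 3))
```

Fitting each coordinate separately against a depth parameter is one simple way to represent a curved lead; the published method may well use a different curve model and a dedicated artifact-detection step.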
Agility and digital trends go hand in hand, but the advantages of digitalization put the established automotive companies under high pressure. For years now, automotive groups have no longer been the innovation drivers of the industry; this status is reserved for radical companies like Tesla. But is there any chance that conservative companies will reinvent themselves, establish leaner structures and thus regain market dominance and innovation leadership?
Concerns over climate change, air pollution, and oil supply have stimulated the market for battery electric vehicles (BEVs). The environmental impacts of BEVs are typically evaluated through a standardized life-cycle assessment (LCA) methodology. Here, the LCA literature was surveyed with the objective of sketching the major trends and challenges in the impact assessment of BEVs. It was found that BEVs tend to be more energy efficient and less polluting than conventional cars. BEVs decrease exposure to air pollution as their impacts largely result from vehicle production and electricity generation outside of urban areas. The carbon footprint of BEVs, being highly sensitive to the carbon intensity of the electricity mix, may decrease in the near future through a shift to renewable energies and technology improvements in general. A minority of LCAs covers impact categories other than the carbon footprint, revealing a mixed picture. To date, little attention has been paid in LCA to the efficiency advantage of BEVs in urban traffic, the gap between on-road and certified energy consumption, local exposure to air pollutants and noise, and the aging of emission control technologies in conventional cars. Improvements of BEV components, directed charging, second-life reuse of vehicle batteries, as well as vehicle-to-home and vehicle-to-grid applications will significantly reduce the environmental impacts of BEVs in the future.
Background: Electric vehicles have been identified as a key technology for reducing future emissions and energy consumption in the mobility sector. The focus of this article is to review and assess the energy efficiency and the environmental impact of battery electric cars (BEV), the only technical alternative to vehicles with internal combustion engines (ICEV) available on the market today. Electricity onboard a car can be provided either by a battery or by a fuel cell (FCV). The technical structure of BEVs is described, clarifying that it is relatively simple compared to that of ICEVs. Consequently, ICEVs can be ‘e-converted’ by experienced personnel. Such an e-conversion project generated the close-to-reality data reported here.
Results: The practicability of today's BEVs is discussed, revealing that small-size BEVs in particular are useful. This article reports on the e-conversion of a used Smart. Measurements on this car, before and after conversion, confirmed the fourfold energy-efficiency advantage of BEVs over ICEVs suggested in the literature. Preliminary energy-efficiency data for FCVs are reviewed and found to be only slightly lower than for BEVs. However, well-to-wheel efficiency suffers from a 47% to 63% energy loss during hydrogen production. With respect to energy efficiency, BEVs are thus found to represent the only alternative to ICEVs. This, however, is only true if the electricity is provided by very efficient power plants or, better, by renewable energy production. Literature data on energy consumption and greenhouse gas (GHG) emissions of ICEVs compared to BEVs suffer from standardized driving-cycle figures that underestimate consumption under real street conditions by about 25%. Literature data available for BEVs, on the other hand, were mostly modeled and based on relatively heavy BEVs as well as driving conditions that do not represent the most useful field of BEV operation. Literature data have been compared with measurements on the converted Smart, revealing a distinct GHG emission advantage under German electricity grid conditions, which can be considerably extended by charging with electricity from renewable sources. The life cycle carbon footprint of BEVs is reviewed based on literature data with emphasis on lithium-ion batteries. Battery life cycle assessment (LCA) data available in the literature so far vary significantly, by a factor of up to 5.6, depending on the LCA methodology but also on the battery chemistry. The carbon footprint over 100,000 km calculated for the converted 10-year-old Smart exhibits a possible reduction of over 80% in comparison to the Smart with internal combustion engine.
Conclusion: The findings of this article confirm that the electric car can serve as a suitable instrument towards a much more sustainable future in mobility. This is particularly true for small-size BEVs, which are underrepresented in the LCA literature so far. While the CO2 LCA of BEVs seems to be relatively well understood apart from the battery, the life cycle impact of BEVs in categories other than global warming potential reveals a complex and still incomplete picture. Since the technology of the electric car is of limited complexity, with the exception of the battery, used cars can also be converted from combustion to electric drive. This way, it seems possible to reduce CO2-equivalent emissions by 80% (a factor-of-5 efficiency improvement).
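The well-to-wheel argument above multiplies stage efficiencies along the energy chain. A minimal sketch with hypothetical stage values (a 60% efficient hydrogen production step, i.e. a 40% loss; a 55% fuel cell; a 90% electric drivetrain — these are illustrative figures, not values from the article):

```python
def well_to_wheel(*stage_efficiencies):
    """Overall chain efficiency as the product of the stage efficiencies."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

# Hypothetical FCV chain: hydrogen production, fuel cell, drivetrain
print(round(well_to_wheel(0.60, 0.55, 0.90), 3))  # -> 0.297
```

Under these made-up assumptions the hydrogen chain delivers under 30% of the primary energy to the wheel, which is the kind of comparison behind the article's FCV-versus-BEV conclusion.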
A comprehensive overview is provided evaluating the direct real-world CO2 emissions of both diesel and petrol cars newly registered in Europe between 1995 and 2015. Before 2011, European diesel cars emitted less CO2 per kilometre than petrol cars, but since then there has been no appreciable difference in per-km CO2 emissions between diesel and petrol cars. Real-world CO2 emissions of diesel cars have not declined appreciably since 2001, while the CO2 emissions of petrol cars have been stagnant since 2012. When adding black-carbon-related CO2-equivalents, such as from diesel cars without particulate filters, diesel cars were found to have had much higher climate-relevant emissions than petrol cars until the year 2001. From 2001 to 2015, CO2-equivalent emissions from new diesel cars and petrol cars were hardly distinguishable. Lifetime use-phase CO2-equivalent emissions of all European passenger vehicles were modelled for 1995–2015 based on three scenarios: the historical case; a scenario freezing the percentage of diesel cars at the low levels of the early 1990s (thus avoiding the observed “boom” in new diesel registrations); and an advanced mitigation scenario based on high proportions of petrol hybrid cars and cars burning gaseous fuels. The difference in CO2-equivalent emissions between the historical case and the scenario avoiding the diesel car boom is only 0.4%. The advanced mitigation scenario would have achieved a 3.4% reduction in total CO2-equivalent emissions over the same time frame. The European diesel car boom thus appears to have been ineffective at reducing climate-warming emissions from the European transport sector.
A new comprehensive evaluation system presented here allows education for sustainable development (ESD) in degree programs to be compared and quantified. The evaluation is based on a criteria system with three hierarchical levels, the highest of which considers a list of 35 indicator terms. First, the two most popular undergraduate (bachelor’s) degree programs in Germany (mechanical engineering, ME, and business administration, BA) were reviewed for ESD content based on the new evaluation scheme. Additionally, we reviewed and quantified ESD subjects and their temporal changes across the entire bandwidth of degree programs of one university (Umwelt-Campus Birkenfeld, University of Applied Sciences Trier), back to 1999. Moreover, a spot check on international ME and BA bachelor’s degree programs was performed. Through our reviews, we found a high number of elective classes dedicated to ESD, particularly in BA bachelor programs. However, the percentage of compulsory classes related to ESD is relatively low, at 5-6% in both ME and BA programs. The spot check on degree programs outside Germany revealed similar results. Analysing the time trend at Umwelt-Campus Birkenfeld, a considerable share of the ESD content that was part of the original diploma degrees was moved to what are now master’s degrees.
This study compares the environmental impacts of petrol, diesel, natural gas, and electric vehicles using a process-based attributional life cycle assessment (LCA) and the ReCiPe characterization method, which captures 18 impact categories and the single-score endpoints. Unlike common practice, we derive the cradle-to-grave inventories from an originally combustion-engine VW Caddy that was disassembled and electrified in our laboratory, and whose energy consumption was measured on the road. Ecoinvent 2.2 and 3.0 emission inventories were contrasted, exhibiting essentially insignificant impact deviations. The Ecoinvent 3.0 emission inventory for the diesel car was additionally updated with recent close-to-real-world emission values and revealed strong increases over four midpoint impact categories when matched with the standard Ecoinvent 3.0 emission inventory. Producing batteries with photovoltaic electricity instead of Chinese coal-based electricity decreases the climate impacts of battery production by 69%. Break-even mileages for the electric VW Caddy to pass the combustion engine models under various conditions in terms of climate change impact ranged from 17,000 to 310,000 km. Break-even mileages contrasting the VW Caddy with a mini car (SMART), which was electrified as well, did not show systematic differences. Also, CO2-eq emissions in terms of passenger kilometers travelled (54–158 g CO2-eq/PKT) are fairly similar, based on 1 person travelling in the mini car and 1.57 persons in the mid-sized car (VW Caddy). Additionally, under optimized conditions (battery production and use phase utilizing renewable electricity), the two electric cars can compete well over their lifetime, in terms of CO2-eq emissions per passenger kilometer, with other traffic modes (diesel bus, coach, trains). Only electric buses were found to have lower life cycle carbon emissions (27–52 g CO2-eq/PKT) than the two electric passenger cars.
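A break-even mileage of the kind reported above follows from dividing the electric car's extra production emissions by its per-kilometre use-phase saving. A minimal sketch with hypothetical figures (4 t extra production CO2e, 180 g/km for the combustion car, 80 g/km for the BEV — not the study's values):

```python
def break_even_km(extra_production_kg_co2e, ice_g_per_km, bev_g_per_km):
    """Mileage at which the BEV's extra production emissions are offset
    by its lower per-kilometre use-phase emissions."""
    saving_kg_per_km = (ice_g_per_km - bev_g_per_km) / 1000.0
    return extra_production_kg_co2e / saving_kg_per_km

# Hypothetical figures: 4 t CO2e extra for battery/drivetrain production,
# 180 g/km for the combustion car, 80 g/km for the BEV
print(round(break_even_km(4000.0, 180.0, 80.0)))  # -> 40000
```

The wide 17,000–310,000 km range in the study reflects how sensitive this quotient is to the battery production pathway and the electricity mix, both of which shift the numerator and the denominator.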
Universities, as innovation drivers in science and technology worldwide, should attempt to become carbon-neutral institutions and should lead this transformation. Many universities have picked up the challenge and quantified their carbon footprints; to date, however, quantification has been limited to use-phase emissions. So far, data on the embodied impacts of university campus infrastructure are missing, which prevents us from evaluating their life cycle costs. In this paper, we quantify the embodied impacts of two university campuses of very different sizes and climate zones: the Umwelt-Campus Birkenfeld (UCB), Germany, and the Nanyang Technological University (NTU), Singapore. We also quantify the effect of switching to a fully renewable energy supply on the carbon footprint of a university campus, based on the example of UCB. The embodied impacts amount to 13.7 (UCB) and 26.2 (NTU) kg CO2e/m2•y, equivalent to 59.2% (UCB) and 29.8% (NTU) of the building life cycle impacts, respectively. Consequently, embodied impacts can be dominant and should therefore be quantified and reported. When adding the further use-phase impacts caused by the universities on top of the building life cycle impacts (e.g., mobility impacts), both institutions exhibit very similar emissions of 124.5–126.3 kg CO2e/m2•y despite their different sizes, structures, and locations. Embodied impacts comprise 11.0–20.8% of the total impacts at the two universities. In conclusion, efficient reduction of university carbon footprints requires a holistic approach considering all impacts caused on and by a campus, including upstream effects.
Carbon footprinting of universities worldwide: Part I — objective comparison by standardized metrics
(2021)
Background: Universities, as innovation drivers in science and technology worldwide, should be leading the Great Transformation towards a carbon-neutral society, and many have indeed picked up the challenge. However, only a small number of universities worldwide are collecting and publishing their carbon footprints, and some of them have defined zero emission targets. Unfortunately, there is limited consistency between the reported carbon footprints (CFs) because of different analysis methods, different impact measures, and different target definitions by the respective universities.
Results: Comprehensive CF data of 20 universities from around the globe were collected and analysed, and the essential factors contributing to the university CF were identified. For the first time, university CF data were not only compared but also evaluated, partly corrected, and augmented with missing contributions to improve consistency and comparability. The CF performance of each university in the respective year is thus homogenized and measured by means of two metrics: CO2e emissions per capita and per m2 of constructed area. Both metrics vary by one order of magnitude across the universities in this study. However, we identified ten universities reaching a per capita carbon footprint of lower than or close to 1.0 t (metric tons) CO2e per person and year (normalized by the number of people associated with the university), independent of the university’s size. In addition to these two metrics, we suggest a new metric expressing economic efficiency in terms of the CF per $ of expenditure and year. We then aggregated the results for all three impact measures, arriving at an overall carbon performance for each university, which we found to be independent of geographical latitude. Instead, the per capita measure correlates with the national per capita CFs, reaching on average 23% of the national impacts per capita. The three top-performing universities are located in Switzerland, Chile, and Germany.
Conclusion: The usual reporting of CO2 emissions is categorized into Scopes 1–3 following the GHG Protocol Corporate Accounting Standard, which makes comparison across universities challenging. In this study, we attempted to standardize the CF metrics, allowing us to objectively compare the CFs of several universities. We observed that, almost 30 years after the Earth Summit in Rio de Janeiro (1992), progress is still limited: only one zero-emission university was identified, and hence the transformation should speed up globally.
Cryotropic gelation is one of the most common approaches to design novel hydrogels with multifaceted technological and biological functionalities. In the present paper, we studied the ability of highly galactosyl-substituted galactomannans, i.e. fenugreek and alfalfa gum, to form physically crosslinked hydrogels via cryogenic processing. Cycling of the galactomannan solutions (0.25 to 4% wt) from 25 to −20 to 25 °C induced the physical crosslinking of the galactomannan chains leading to the formation of different cryogel structures, i.e. filamentous aggregates (c* < c < 1%), cellular-like gel networks (1 ≤ c < 4%) or a homogeneously swollen gel (c ≥ 4%), depending on the total biopolymer content. Alfalfa gum-based cryogels exhibited higher elasticity and stiffness, better uniformity of the structure and a lower macropore size than their fenugreek counterparts. The physical blending of alfalfa or fenugreek gum with locust bean gum (2% total biopolymer) led to the reinforcement of the mechanical properties of the cryogels without significantly altering their microstructural aspects.
The present work aimed at investigating an extraction protocol based on consecutive steps of isoelectric-point (pH ~ 4.25) mediated gum swelling and deproteinisation as an alternative method to produce flaxseed gum extracts with enhanced techno-functional characteristics. The osidic and proximate composition, structural conformation, flow behaviour, dynamic rheological and thermal properties of gums isolated from brown and golden flaxseeds were assessed. Gum extraction under near-isoelectric-point conditions did not impair the extraction yield, residual protein or ash content, whilst it resulted in minor changes in the sugar composition of the flaxseed gum extracts. The deconvolution of the GPC/SEC chromatograms revealed the presence of four major polysaccharidic populations corresponding to arabinoxylans, rhamnogalacturonan–I and two AX-RG-I composite fractions. The latter appeared to minimise the intra- and interchain non-covalent polymer interactions (hydrogen bonding), leading to a better solvation affinity in water and lyotropic solvents. Golden flaxseed gums exhibited higher molecular weights (Mw = 1.34–1.15 × 106 Da) and intrinsic viscosities (6.63–5.13 dL g−1) as well as better thickening and viscoelastic performance than the brown flaxseed gum exemplars. Golden flaxseed gums also exhibited better thermal stability than their brown counterparts and are therefore suitable for product applications involving severe heat treatments.
The future of German pharmacy business models with retail clinics as USP against online pharmacies
(2022)
Purpose: This study aims to discuss the opportunities for brick-and-mortar pharmacies in Germany arising from in-store health services, e.g. vaccinations or Covid-19 tests. Retail clinics based on such services could provide a USP against online pharmacies. Additionally, the study shows what possible retail clinics could look like and how widely they would be accepted by the population.
Research Methodology: The survey was conducted with Google Forms, and MS Excel was used for the analysis. Various studies were reviewed, and care was taken to stay as close as possible to practice, drawing on figures from, e.g., associations, companies, and Statista.
Results: The launch of retail clinics could give German brick-and-mortar pharmacies a new boost and differentiate them from online pharmacies on the market. Pharmaceutical services are in great demand among the population; the legal framework will hopefully soon be in place, and the proper services will then have to be offered to fully realize the large sales potential.
Discussion and Conclusion: The study only provides a general overview of how health services can help brick-and-mortar pharmacies compete with online pharmacies in Germany. There are also other possible USPs for such pharmacies that are not covered in this study. In addition, the legal framework under which pharmacies operate must be analyzed in detail by legal experts to obtain a precise overview of what is possible for pharmacies in the area of pharmaceutical services and retail clinics. The study is useful for pharmacists and business economists in general, as well as for professionals in health care management.
Background: Tobacco smoking prevalence continues to be high, particularly among adolescents and young adults with lower educational levels, and is therefore a serious public health problem. Tobacco smoking and problem drinking often co-occur and relapses after successful smoking cessation are often associated with alcohol use. This study aims at testing the efficacy of an integrated smoking cessation and alcohol intervention by comparing it to a smoking cessation only intervention for young people, delivered via the Internet and mobile phone.
Methods/Design: A two-arm cluster-randomised controlled trial with one follow-up assessment after 6 months will be conducted. Participants in the integrated intervention group will: (1) receive individually tailored web-based feedback on their drinking behaviour based on age and gender norms, (2) receive individually tailored mobile phone text messages to promote drinking within low-risk limits over a 3-month period, (3) receive individually tailored mobile phone text messages to support smoking cessation for 3 months, and (4) be offered the option of registering for a more intensive program that provides strategies for smoking cessation centred around a self-defined quit date. Participants in the smoking cessation only intervention group will only receive components (3) and (4). Study participants will be 1350 students who smoke tobacco daily/occasionally, from vocational schools in Switzerland. Main outcome criteria are 7-day point prevalence smoking abstinence and cigarette consumption assessed at the 6-month follow-up.
Discussion: This is the first study testing a fully automated intervention for smoking cessation that simultaneously addresses alcohol use and interrelations between tobacco and alcohol use. The integrated intervention can be easily implemented in various settings and could be used with large groups of young people in a cost-effective way.
Background: Problem drinking, particularly risky single-occasion drinking is widespread among adolescents and young adults in most Western countries. Mobile phone text messaging allows a proactive and cost-effective delivery of short messages at any time and place and allows the delivery of individualised information at times when young people typically drink alcohol. The main objective of the planned study is to test the efficacy of a combined web- and text messaging-based intervention to reduce problem drinking in young people with heterogeneous educational level.
Methods/Design: A two-arm cluster-randomised controlled trial with one follow-up assessment after 6 months will be conducted to test the efficacy of the intervention in comparison to assessment only. The fully automated intervention program will provide online feedback based on the social norms approach as well as individually tailored mobile phone text messages to stimulate (1) positive outcome expectations to drink within low-risk limits, (2) self-efficacy to resist alcohol and (3) planning processes to translate intentions to resist alcohol into action. Program participants will receive up to two weekly text messages over a time period of 3 months. Study participants will be 934 students from approximately 93 upper secondary and vocational schools in Switzerland. Main outcome criterion will be risky single-occasion drinking in the 30 days preceding the follow-up assessment.
Discussion: This is the first study testing the efficacy of a combined web- and text-messaging-based intervention to reduce problem drinking in young people. Should this intervention approach prove effective, it could be easily implemented in various settings and could reach large numbers of young people in a cost-effective way.
Internet of Things (IoT) and Artificial Intelligence (AI) are among the most promising and disruptive areas of current research and development. However, these areas require deep knowledge of multiple disciplines such as sensors, protocols, embedded programming, distributed systems, statistics and algorithms. This broad knowledge is not easy to acquire, and the software used to design these systems is becoming increasingly complex. Small and medium-sized enterprises therefore have problems developing new business ideas. Meanwhile, node- and block-based software tools have been released and are freely available as open-source toolboxes. In this paper, we present an overview of several node- and block-based software tools for developing IoT- and AI-based business ideas. We arrange these tools according to their capabilities and further propose extensions and combinations of tools to design a useful open-source library for small and medium-sized enterprises that is easy to use and helps with rapid prototyping, enabling new business ideas to be developed using distributed computing.
Value-based controlling & international accounting of Economic Value Added (EVA) – An overview
(2022)
This paper discusses an important target variable in value-based management: Economic Value Added (EVA). EVA is a measure of a company's financial performance based on residual wealth, calculated by deducting its cost of capital from its operating profit, adjusted for taxes on a cash basis. EVA can also be referred to as economic profit, as it attempts to capture the true economic profit of a company. The measure was devised by the management consulting firm Stern Value Management, originally incorporated as Stern Stewart & Co. This research also discusses the adjustments and the different types of assumptions that are necessary for the calculation, as well as how to use them properly to obtain an interpretable result. The paper explains the formula and which conversions should be considered. It remains to be noted that the EVA concept represents only a small advance from a scientific point of view, but that the clever marketing by Stern & Stewart has initiated a renaissance of the underlying residual profit concept. The paper provides practitioners and academics with a good overview of the demonstrable added value of EVA controlling and, in contrast, also illustrates the weaknesses of the calculation model and the inaccuracy due to interpretation variables, which overall limit the value of EVA as a management key performance indicator. The research includes a comprehensive and substantial discussion of the scientific literature on EVA and its interpretation.
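The definition above can be written as EVA = NOPAT − WACC × invested capital. A minimal sketch with hypothetical figures (the NOPAT, capital base and WACC below are invented for illustration):

```python
def eva(nopat, invested_capital, wacc):
    """Economic Value Added: after-tax operating profit (NOPAT) minus
    the capital charge (invested capital times the cost of capital)."""
    return nopat - invested_capital * wacc

# Hypothetical firm: NOPAT of 120, capital base of 1,000, WACC of 9 %
print(round(eva(nopat=120.0, invested_capital=1000.0, wacc=0.09), 2))  # -> 30.0
```

A positive result means the firm earned more than its capital charge in the period; the adjustments and conversions discussed in the paper change the inputs (NOPAT and invested capital), not this basic formula.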
This text explains the role “Green Bonds” play in financing projects and how the green factor is weighted. It discusses how the label “green” can change the price of a bond, whether there is a “green premium”, and for which group of investors this type of bond is interesting. We discuss ways for issuers to reduce their cost of capital, taking the risks into account, and ways to improve their financing conditions. The sustainable and eco-friendly aspects are also highlighted; they might become crucial in future investing, which gives this type of bond an interesting role.
Global change effects on biodiversity and human wellbeing call for improved long-term environmental data as a basis for science, policy and decision making, including increased interoperability, multifunctionality, and harmonization. Based on the example of two global initiatives, the International Long-Term Ecological Research (ILTER) network and the Group on Earth Observations Biodiversity Observation Network (GEO BON), we propose merging the frameworks behind these initiatives, namely ecosystem integrity and essential biodiversity variables, to serve as an improved guideline for future site-based long-term research and monitoring in terrestrial, freshwater and coastal ecosystems. We derive a list of specific recommendations of what and how to measure at a monitoring site and call for an integration of sites into co-located site networks across individual monitoring initiatives, and centered on ecosystems. This facilitates the generation of linked comprehensive ecosystem monitoring data, supports synergies in the use of costly infrastructures, fosters cross-initiative research and provides a template for collaboration beyond the ILTER and GEO BON communities.
This article investigates the representation of the issue of refugees travelling to the Italian coast that was reported by two major Italian newspapers between August 8th and August 19th, 2017. Using analysis tools belonging to communication theory and cognitive sciences, i.e. the concepts of frame and attitude, this article highlights two major points: firstly, the analysis reveals how the two newspapers aimed at establishing a specific relationship with their readers on this topic in the relevant period on the basis of specific interpretative models; secondly, each of these interpretative models relies on the representation of specific emotions which play a central role in the interpretation of reality according to a characteristic facet of the definition of post-truth.
The Covid-19 outbreak had a huge impact on the economy worldwide, as businesses had to close or cease their activities due to the lockdown regulations. The “luckiest” firms were able to operate, but under restricted conditions. In order to avoid what certain authors called a “bankruptcy epidemic”, European countries took economic and fiscal measures to help companies compensate for their financial losses. In addition to government grants, emergency legislation was adopted with the aim of adapting insolvency and restructuring procedures to the sanitary situation, and specific rules relating to company law were also implemented. This paper deals with the measures taken by the state of Luxembourg and gives a brief overview of the legal amendments.
Optimal mental workload plays a key role in driving performance. Thus, driver-assisting systems that automatically adapt to a driver's current mental workload via brain–computer interfacing might greatly contribute to traffic safety. To design economic brain–computer interfaces that do not compromise driver comfort, it is necessary to identify the brain areas that are most sensitive to changes in mental workload. In this study, we used functional near-infrared spectroscopy and subjective ratings to measure mental workload in two virtual driving environments with distinct demands. We found that demanding city environments induced both higher subjective workload ratings and higher bilateral middle frontal gyrus activation than less demanding country environments. A further analysis with higher spatial resolution revealed a center of activation in the right anterior dorsolateral prefrontal cortex, an area highly involved in spatial working memory processing. Thus, a main component of drivers’ mental workload in complex surroundings might stem from the fact that large amounts of spatial information about the course of the road as well as about other road users have to be constantly maintained, processed and updated. We propose that the right middle frontal gyrus might be a suitable region for the application of powerful small-area brain–computer interfaces.
The purpose of this article is to evaluate optimal expected utility risk measures (OEU) in a risk-constrained portfolio optimization context where the expected portfolio return is maximized. We compare portfolio optimization under an OEU constraint to a portfolio selection model using value at risk as the constraint. OEU is a coherent risk measure for utility functions with constant relative risk aversion and allows the investor's risk attitude and time preference to be specified individually. In a case study with three indices, we investigate how these theoretical differences influence the performance of the portfolio selection strategies. A copula approach with univariate ARMA-GARCH models is used in a rolling forecast to simulate monthly future returns and calculate the derived measures for the optimization. The results of this study illustrate that both optimization strategies perform considerably better than an equally weighted portfolio and a buy-and-hold portfolio. Moreover, our results illustrate that portfolio optimization with an OEU constraint has individualized effects: for example, less risk-averse investors lose more portfolio value in financial crises but outperform their more risk-averse counterparts in bull markets.
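To illustrate the value-at-risk constraint used as the comparison benchmark, the sketch below computes an empirical VaR from simulated return scenarios and checks a portfolio against a risk limit. This is a minimal, hypothetical example; the function names and the plain historical-simulation estimator are simplifications of our own, not the paper's ARMA-GARCH copula pipeline.

```python
def empirical_var(returns, alpha=0.95):
    """Empirical value at risk: the loss threshold that is not exceeded
    with probability alpha, estimated from historical/simulated returns."""
    losses = sorted(-r for r in returns)          # convert returns to losses, ascending
    k = min(int(alpha * len(losses)), len(losses) - 1)  # index of the alpha-quantile
    return losses[k]

def satisfies_var_constraint(weights, scenarios, limit, alpha=0.95):
    """Check whether the portfolio VaR over simulated return scenarios
    (one list of asset returns per scenario) stays within the risk limit."""
    port_returns = [sum(w * r for w, r in zip(weights, s)) for s in scenarios]
    return empirical_var(port_returns, alpha) <= limit
```

In a selection model of the kind compared in the paper, a check like `satisfies_var_constraint` would sit inside the optimizer as the feasibility test for candidate weight vectors.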
Following a quantitative analysis of suitable feedstock, comprising 11 woody biomass species, four biochars were generated using a Kon-Tiki flame curtain kiln in the state of Aguascalientes, Mexico. Despite their high quality (certified by the European Biochar Certificate), the biochars contain substantial quantities of hazardous substances, such as polycyclic aromatic hydrocarbons, polychlorinated dibenzo-p-dioxins and dibenzofurans, polychlorinated biphenyls, and heavy metals, which can induce adverse effects if wrongly applied to the environment. To assess the toxicity of the biochars to non-target organisms, toxicity tests were performed with biochar elutriates on four benthic and zooplanktonic invertebrate species: the ciliate Paramecium caudatum, the rotifer Lecane quadridentata, and the cladocerans Daphnia magna and Moina macrocopa. In acute and chronic toxicity tests, no acute toxic effect on ciliates was detected, but significant lethality to rotifers and cladocerans was observed. This lethal toxicity might be due to the ingestion of biochar by cladocerans and rotifers, with subsequent enzymatic/mechanical digestion releasing toxic substances present in the biochar. No chronic toxicity was found where biochar elutriates were mixed with soil. These data indicate that it is essential to use toxicity tests to assess biochars' toxicity to the environment, especially when they are applied close to sensitive habitats, and to adhere closely to the quantitative set-point values.
This paper is structured into two closely related parts: first, an analysis of the parliamentary and governmental measures against the COVID-19 pandemic; and second, the future regulatory framework for freedom of movement and other rights in the European area, according to the new European Pact on Migration and Asylum.
This paper analyzes the characteristics of the tourism destination ecosystem in Dunhuang City from the perspective of entropy. An evaluation index system that considers the potential for sustainable development was formed, based on dissipative structure theory and entropy change in the tourism destination ecosystem. A sustainable development potential evaluation model for the tourism destination ecosystem was then built based on information entropy. We analyzed the impact of each indicator on the sustainable development potential and proposed measures for the tourism destination ecosystem. The conclusions include: (a) the demands of the Dunhuang tourism destination ecosystem on the natural ecosystem grew continuously between 2000 and 2012; (b) the sustainable development potential of the Dunhuang tourism destination ecosystem followed an oscillating upward trend during the study period, which depends on government attention, and pollution problems were improved.
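The information-entropy weighting that typically underlies such evaluation models can be sketched as follows. This is a generic illustration of the entropy weight method on assumed, already-positive indicator data, not the paper's actual index system.

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: rows are observations (e.g. years), columns
    are indicators. Indicators whose values vary more across observations
    carry more information and therefore receive a larger weight."""
    n = len(matrix)                       # number of observations
    m = len(matrix[0])                    # number of indicators
    divergences = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]      # share of each observation in the column sum
        # Shannon entropy, normalised to [0, 1] by log(n)
        e = -sum(x * math.log(x) for x in p if x > 0) / math.log(n)
        divergences.append(1.0 - e)       # higher divergence = more informative
    s = sum(divergences)
    return [d / s for d in divergences]
```

A constant indicator has maximum entropy and receives (numerically) zero weight, so only indicators that discriminate between years influence the composite potential score.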
The objective investigation of the dynamic properties of vocal fold vibrations demands the recording and further quantitative analysis of laryngeal high-speed video (HSV). Quantification of the vocal fold vibration patterns requires, as a first step, the segmentation of the glottal area within each video frame, from which the vibrating edges of the vocal folds are usually derived. Consequently, the outcome of any further vibration analysis depends on the quality of this initial segmentation process. In this work we propose, for the first time, a procedure to fully automatically segment not only the time-varying glottal area but also the vocal fold tissue directly from laryngeal HSV using a deep convolutional neural network (CNN) approach. Eighteen different CNN configurations were trained and evaluated on a total of 13,000 HSV frames obtained from 56 healthy and 74 pathologic subjects. The segmentation quality of the best performing CNN model, which uses Long Short-Term Memory (LSTM) cells to also take the temporal context into account, was investigated in depth on 15 test video sequences comprising 100 consecutive images each. As performance measures, the Dice coefficient (DC) as well as the precision of four anatomical landmark positions were used. Over all test data, a mean DC of 0.85 was obtained for the glottis and 0.91 and 0.90 for the right and left vocal fold, respectively. The grand average precision of the identified landmarks amounts to 2.2 pixels and is in the same range as comparable manual expert segmentations, which can be regarded as the gold standard. The method proposed here requires no user interaction and overcomes the limitations of current semiautomatic or computationally expensive approaches.
Thus, it also allows for the analysis of long HSV sequences and holds the promise of facilitating the objective analysis of vocal fold vibrations in clinical routine. The dataset used here, including the ground truth, will be provided freely to all scientific groups to allow quantitative benchmarking of segmentation approaches in the future.
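The Dice coefficient used above as the primary performance measure can be computed directly from two binary masks. A minimal sketch, in which flat 0/1 lists stand in for the per-frame segmentation masks:

```python
def dice_coefficient(pred, truth):
    """Dice coefficient between two binary masks (flat lists of 0/1):
    DC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    size_sum = sum(pred) + sum(truth)
    if size_sum == 0:
        return 1.0                     # both masks empty: treated as perfect agreement
    return 2.0 * intersection / size_sum
```

A reported mean DC of 0.85 for the glottis thus means that, on average, the overlap between predicted and expert masks was 85% of their combined size.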
Background: Telerehabilitation can contribute to the maintenance of successful rehabilitation outcomes regardless of location and time.
Objective: The aim of the study was to investigate a specific three-month interactive telerehabilitation with regard to effectiveness in functioning and return to work compared to usual aftercare.
Methods: From August 2016 to December 2017, 111 patients (mean age 54.9 years; SD 6.8; 54.3% female) with hip or knee replacement were enrolled in the randomized controlled trial. At discharge from inpatient rehabilitation and after three months, the distance achieved in the 6-minute walk test was assessed as the primary endpoint. Other functional parameters, including health-related quality of life, pain, and time to return to work, were secondary endpoints.
Results: Patients in the intervention group performed telerehabilitation for an average of 55.0 minutes (SD 9.2) per week. Adherence was high, at over 75%, until the 7th week of the three-month intervention phase. Almost all patients and therapists used the communication options. Both the intervention group (average difference 88.3 m; SD 57.7; P=.95) and the control group (average difference 79.6 m; SD 48.7; P=.95) increased their distance in the 6-minute walk test. Improvements in other functional parameters, as well as in quality of life and pain, were achieved in both groups. Of note, the proportion of working patients was higher in the intervention group (64.6%; P=.01) than in the control group (46.2%).
Conclusions: The effect of the investigated telerehabilitation therapy in patients following knee or hip replacement was equivalent to the usual aftercare in terms of functional testing, quality of life, and pain. Since a significantly higher return-to-work rate could be achieved, this therapy might be a promising supplement to established aftercare.
The concept of the circular economy (CE) is becoming increasingly important in the pursuit of more sustainable societies. CE strategies are being applied in the sustainable management of a plethora of areas, such as energy, water, food, and eco-industrial parks. The present paper focuses on the question of how CE principles can support the sustainable management of water in the agricultural sector around the world, considering different legislative environments, water resources management guidelines, environmental stressors, and CE practices. Considering these practices and circumstances, seven countries were compared: Brazil, Germany, Japan, Mexico, Morocco, Portugal, and Taiwan. Together, CE experts in the seven countries developed a set of 44 criteria to assess each of these areas. Broader establishment of, and respect for, water resources legislation was found to be strongly correlated with lower agricultural water use. While the application of CE practices was found not to be correlated with lower consumption, such practices are still novel in most countries. Based on the studied countries, it can be concluded that a global CE agenda has not yet been reached for water resources. Further application and a greater variety of practices are required to better represent the impact of CE on a national scale, but local success stories could support the wider application of CE in agriculture. The findings and the framework of the study can be applied to other countries to direct CE strategies for more sustainable water use in agriculture. Increasing CE implementation, motivated by legislation and better management, can help ensure water security across nations.
Using a radar operating in the 24 GHz ISM band in frequency-modulated continuous-wave (FMCW) mode, the major vital signs, heart rate and respiration rate, are monitored. Monitoring is contactless, with the patient sitting upright at a distance of 1–2 m from the radar. The radar and sampling platform are components developed in-house at the university. Communication with the radar is handled in MATLAB via TCP/IP, and the signal processing and real-time visualization are developed in MATLAB as well. The cornerstones of this publication are the wavelet packet transform and a spectral frequency estimation for vital-sign calculation. The wavelet transform allows fine tuning of frequency subspaces, separating the heartbeat signal from respiration and, more importantly, from noise and other movement. Heartbeat and respiration are monitored independently and compared to ECG data recorded in parallel.
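The spectral frequency estimation step can be illustrated with a plain DFT peak search over a physiologically plausible band. This is a simplified stand-in (pure-Python DFT on a synthetic signal) for the wavelet-packet-based processing described above, not the MATLAB implementation itself.

```python
import math

def dominant_frequency(signal, fs, f_lo, f_hi):
    """Spectral frequency estimation: return the frequency in [f_lo, f_hi]
    (Hz) with the largest DFT magnitude. A plain DFT over the band stands
    in for the analysis after wavelet-packet band separation."""
    n = len(signal)
    best_f, best_mag = f_lo, -1.0
    for k in range(n // 2 + 1):
        f = k * fs / n                       # frequency of DFT bin k
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            mag = math.hypot(re, im)
            if mag > best_mag:
                best_f, best_mag = f, mag
    return best_f
```

Restricting the search to, say, 0.8–2.0 Hz (48–120 bpm) keeps the far stronger, lower-frequency respiration component from masking the heartbeat peak.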
Radar target simulator with complex-valued delay line modeling based on standard radar components
(2018)
With increasing radar activity in the automotive, industrial, and private sectors, there is a need to test radar sensors in their environment. A radar target simulator can help to test radar systems repeatably. In this paper, the authors present a low-cost hardware concept for radar target simulation. The theoretical foundations are derived and analyzed. An implementation of a demonstrator operating in the 24 GHz ISM band is shown, for which the dynamic range simulation was implemented in an FPGA with fast-sampling ADCs and DACs. By using an FIR filtering approach, a fine discretization of the range was achieved, which will furthermore allow inherent and automatic Doppler simulation by moving the target.
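The fine range discretization via FIR filtering can be sketched as a fractional-delay filter: the round-trip delay of the simulated target is approximated by a windowed-sinc FIR whose taps interpolate between sample instants. The following is a generic software illustration under assumptions of our own (tap count, Hamming window), not the paper's FPGA implementation.

```python
import math

def fractional_delay_fir(delay, n_taps=21):
    """Windowed-sinc FIR taps approximating a (fractional) sample delay of
    `delay` on top of the filter's bulk delay of (n_taps - 1) / 2 samples.
    Fine tap spacing lets the simulated target range be discretised far
    below one sample period."""
    center = (n_taps - 1) / 2 + delay
    taps = []
    for i in range(n_taps):
        x = i - center
        sinc = 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
        # Hamming window to suppress truncation ripple
        w = 0.54 - 0.46 * math.cos(2 * math.pi * i / (n_taps - 1))
        taps.append(sinc * w)
    return taps

def apply_fir(taps, signal):
    """Convolve signal with taps (same-length output, zero-padded head)."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if 0 <= n - k < len(signal):
                acc += h * signal[n - k]
        out.append(acc)
    return out
```

Sweeping `delay` smoothly over time moves the simulated target, which is what produces the inherent Doppler shift mentioned in the abstract.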
The objective of this study is to allow a better understanding of the role of Industry 4.0 technologies, especially filament extrusion technology, in the reduction of costs, environmental impact, and energy consumption, and in the possibility of expanding the range of printable materials. The study focuses on the desktop filament extruders currently available on the market; these machines are assessed and possible future modifications are presented. The research leading to this study consists of a review of the existing literature, supplemented by information from the websites of different extruder manufacturers. The study has demonstrated that the extrusion of material at home is still not an exact science, and the process can end up costing the user large sums of money over time. Moreover, there are still limitations to the use of this technology, such as the lack of standardized extrusion settings, the necessity of pre-drying the pellets, and the complexity of cleaning the extruder after each use.
Background: The Musculoskeletal Health Questionnaire (MSK-HQ) has been developed to measure musculoskeletal health status across musculoskeletal conditions and settings. However, the MSK-HQ needs to be further evaluated across settings and different languages.
Objective: The objective of the study was to evaluate and compare measurement properties of the MSK-HQ across Danish (DK) and English (UK) cohorts of patients from primary care physiotherapy services with musculoskeletal pain.
Methods: MSK-HQ was translated into Danish according to international guidelines. Measurement invariance was assessed by differential item functioning (DIF) analyses. Test-retest reliability, measurement error, responsiveness and minimal clinically important change (MCIC) were evaluated and compared between DK (n = 153) and UK (n = 166) cohorts.
Results: The Danish version demonstrated acceptable face and construct validity. Of the 14 MSK-HQ items, three showed DIF for language (pain/stiffness at night, understanding of the condition, and confidence in managing symptoms) and three showed DIF for pain location (walking, washing/dressing, and physical activity levels). Intraclass correlation coefficients for test-retest reliability were 0.86 (95% CI 0.81 to 0.91) for the DK cohort and 0.77 (95% CI 0.49 to 0.90) for the UK cohort. The systematic measurement error was 1.6 and 3.9 points for the DK and UK cohorts, respectively, with random measurement error being 8.6 and 9.9 points. Areas under the receiver operating characteristic (ROC) curves of the change scores against patients' own judgment at 12 weeks exceeded 0.70 in both cohorts. Absolute and relative MCIC estimates were 8–10 points and 26% for the DK cohort and 6–8 points and 29% for the UK cohort.
Conclusions: The measurement properties of the MSK-HQ were acceptable across countries, but seem more suited to group-level than individual-level evaluation. Researchers and clinicians should be aware that some discrepancies exist and should take the observed measurement error into account when evaluating change in scores over time.
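The ROC analysis of change scores against the patients' own judgment reduces to the probability that a randomly chosen improved patient shows a larger change score than a randomly chosen non-improved one. A minimal sketch of that pairwise AUC computation, on illustrative data rather than the study's:

```python
def roc_auc(scores_improved, scores_not_improved):
    """Area under the ROC curve via pairwise comparison: the probability
    that an improved patient's change score exceeds a non-improved
    patient's score, with ties counting half."""
    wins = 0.0
    for a in scores_improved:
        for b in scores_not_improved:
            if a > b:
                wins += 1.0
            elif a == b:
                wins += 0.5
    return wins / (len(scores_improved) * len(scores_not_improved))
```

An AUC above 0.70, as reported for both cohorts, means the change score separates self-reported improvers from non-improvers clearly better than chance (0.50).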
Driven by falling photovoltaic (PV) installation costs and potential support policies, rooftop PV is expected to expand rapidly in Thailand. As a result, the relevant stakeholders, especially utilities, have concerns about the net economic impacts of high PV adoption. Using a cost–benefit analysis, this study quantifies the net economic impacts of rooftop PV systems on three utilities and on ratepayers in Thailand by applying nine different PV adoption scenarios with various buyback rates and annual percentages of PV cost reduction. Under Thailand's current electricity tariff structure, Thai utilities are well protected and able to pass all PV-related costs on to ratepayers through changes in retail rates. We find that when PV adoption is low, the net economic impacts on both the utilities and retail rates are small, and the impacts on each utility depend on its specific characteristics. On the other hand, when PV adoption ranges from 9–14% on an energy basis, five-year retail rate impacts become noticeable, at between 6% and 11% of the projected retail rates in 2036, depending on the PV adoption level. Thus, it is necessary for Thailand to make tradeoffs among the stakeholders and maximize the benefits of rooftop PV adoption.
Driven by decreasing PV and energy storage prices, increasing electricity costs, and policy support from the Thai government (the self-consumption era), rooftop PV and energy storage systems are expected to be deployed rapidly in the country. This may disrupt the existing business models of Thai distribution utilities through revenue erosion and lost earnings opportunities, and the retail rates that directly affect ratepayers (non-solar customers) are expected to increase. This paper presents a framework for evaluating the impacts of PV, with and without energy storage systems, on Thai distribution utilities and ratepayers using cost–benefit analysis (CBA). Prior to the calculation of cost and benefit components, changes in energy sales need to be addressed. Government policies supporting PV generation will also help accelerate rooftop PV installation. Benefit components include avoided costs from reduced transmission losses and deferred distribution capacity at an appropriate PV penetration level, while cost components consist of revenue losses, program costs, integration costs, and unrecovered fixed costs. It is necessary for Thailand to compare the total costs and total benefits of rooftop PV and energy storage systems in order to adopt policy support and mitigation approaches, such as business model innovation and regulatory reform, effectively.
Aim: The aim of the study was to identify common orthopedic sports injury profiles in adolescent elite athletes with respect to age, sex, and anthropometrics.
Methods: A retrospective data analysis of 718 orthopedic presentations to a sports medicine department among 381 adolescent elite athletes from 16 different sports was performed. Recorded data from history and clinical examination included the area, cause, and structure of acute and overuse injuries. Injury events were analyzed in the whole cohort and stratified by age (11–14/15–17 years) and sex. Group differences were tested by chi-squared tests. Logistic regression analysis was applied to examine the influence of the factors age, sex, and body mass index (BMI) on the outcome variables area and structure (α = 0.05).
Results: Higher proportions of injury events were reported for females (60%) and athletes of the older age group (66%) than for males and younger athletes. The most frequently injured area was the lower extremity (47%), followed by the spine (30.5%) and the upper extremity (12.5%). Acute injuries were mainly located at the lower extremity (74.5%), while overuse injuries were predominantly observed at the lower extremity (41%) as well as the spine (36.5%). Joints (34%), muscles (22%), and tendons (21.5%) were found to be the most often affected structures. The injured structures differed between the age groups (p = 0.022), with the older age group presenting three times more frequently with ligament pathology events (5.5%/2%) and less frequently with bony problems (11%/20.5%) than athletes of the younger age group. The injured area differed between the sexes (p = 0.005), with males having fewer spine injury events (25.5%/34%) but more upper extremity injuries (18%/9%) than females. Regression analysis showed a statistically significant influence of BMI (p = 0.002) and age (p = 0.015) on structure, whereas area was significantly influenced by sex (p = 0.005).
Conclusion: Soft-tissue overuse injuries are the most common reason for orthopedic presentations of adolescent elite athletes. The lower extremity and the spine are most often affected, while sex- and age-specific effects on the affected area and structure must be considered. Therefore, prevention strategies addressing these injury profiles should be implemented as early as adolescence, taking age, sex, and injury entity into account.
Background: On the way to a more sustainable society, transport urgently needs to be optimized with regard to energy consumption and pollution control. While in earlier decades Europe followed automobile technology leaps initiated in the USA, it has decoupled itself for the last 20 years by focusing research capacity on the diesel powertrain. The resulting technology shift has led to some 45 million extra diesel cars in Europe. Its outcome in terms of health and environmental effects is investigated below.
Results: The expected greenhouse gas savings from the shift to diesel cars have been overestimated. Only about one tenth of the overall energy efficiency improvements of passenger cars can be attributed to it. These minor savings are, on the other hand, overcompensated by a significant increase in supply chain CO2 emissions and extensive black carbon emissions from diesel cars without particulate filters. We conclude that the European diesel car boom did not cool down the atmosphere. Moreover, toxic NOx emissions of diesel cars have been underestimated by up to 20-fold in officially announced data. The voluntary agreement signed in 1998 between the European automobile industry and the European Commission, which envisaged reducing CO2 emissions, has been identified as elementary to the ensuing European diesel car boom. Four factors have been quantified to explain the very different dieselization rates across Europe: the impact of the national car/supplier industry, ecological modernization, fuel tourism, and corporatist political governance. Comparing the European diesel strategy with the Japanese petrol-hybrid avenue makes clear that a different road would have reduced both CO2 emissions and pollutants more effectively.
Conclusion: Europe's car fleets have been persistently transformed from petrol-driven to diesel-driven over the last 20 years. This paper investigates how this came to be and why Europe took a distinct route compared to other parts of the world. It also attempts to evaluate the outcome against the stated goal of this transformation, which was primarily a robust reduction in GHG emissions. We conclude that global warming has been negatively affected and that air pollution has become alarming in many European locations. More progressive development scenarios could have prevented these outcomes.
This research paper discusses how RFID technology could improve current deposit bottle logistics processes in food retailing and which obstacles impede successful implementation. The research methodology comprised desk research: library resources, EBSCOhost, wiso.net, Google Scholar, scientific journals, Statista, and SpringerLink. The implementation of RFID is potentially beneficial, but some obstacles remain. To validate this conclusion, further study and a practical proof of concept are necessary. Contributions: supply chain management, return logistics, food retail, beverage industry.
Objective: The article highlights the significance of culture in the entrepreneurial landscape and provides entrepreneurs and (project) managers with a guidance tool to overcome previously unconsidered stumbling blocks when operating in intercultural settings.
Research Design & Methods: The article is based on a critical review of existing approaches to intercultural impact in business life, using the archival technique and covering the period 1990-2020. The review identifies gaps in the existing literature on the implementation of a subcultural business environment and addresses them by designing an appropriate model to bypass, where possible, the apparent pitfalls of intercultural business communication and co-existence.
Findings: Culture impacts diverse parts of society and business, including entrepreneurship. This article identifies the pitfalls that are advisable to consider when encountering an intercultural, entrepreneurship-driven workplace.
Implications & Recommendations: Based on the study review, startups, as well as large corporations' projects of a creational nature, are advised to reconsider their perception and handling of culture by applying The Building of Cultural and Entrepreneurial Force.
Contribution & Value Added: The added value of this article lies in its analysis of cultural essentialism and anti-essentialism, and of the implications to beware of in the managerial and entrepreneurial context, related to The Building of Intercultural and Entrepreneurial Force, which intends to ease the co-working of intercultural teams.
Objective: In this article, the methods used to simplify business modelling and the founding of new companies are presented and critically reflected upon. Furthermore, it is discussed to what extent a specific method is advantageous, disadvantageous, applicable, not applicable, or even contradictory.
Methodology: The theoretical analysis is underpinned by a qualitative interview study asking company founders about applying the methods mentioned above. The work is based on scientific papers and books and is complemented by the data originating from a specially designed study.
Findings: The results conclude that business model founding instruments provide strategic guidelines that favour entrepreneurs, yet they turn out to be of minor real-life significance, as numerous factors rooted in different fields of expertise come into play.
Value Added: The added value of this paper lies in its elaboration of the efficiency-enhancing and risk-minimizing components of the respective methods. Accordingly, managers and entrepreneurs across industries are equipped with sufficient information to ease the decision for or against one of the methods, as realistic expectations regarding their application are likely to emerge.
Recommendations: The limitations of this study are rooted in the chosen qualitative research design, since every interviewee is subject to their individual perception.
Purpose: In this article, the canvas used to simplify the business modeling of a platform, together with its visual depiction, is put into the entrepreneurial context and critically reflected upon. Furthermore, it is discussed to what extent the canvas is advantageous, disadvantageous, applicable, not applicable, or even contradictory.
Methodology: The analysis is based on theoretical research. Additionally, qualitative interviews with business founders were conducted.
Results: The results conclude that the canvas, employed to ease the process of sharpening the business model, supplies founders with the essential aspects to cover, yet these are part of a larger set of factors that come into play.
Conclusion: The limitations of this study are rooted in the chosen research design based on the conceptual review.
Many borate crystals feature nonlinear optical properties that allow for efficient frequency conversion of common lasers down into the ultraviolet spectrum. Twinning may degrade crystal quality and affect nonlinear optical properties, in particular if crystals are composed of twin domains with opposing polarities. Here, we use measurements of optical activity to demonstrate the existence of inversion twins within single crystals of YAl3(BO3)4 (YAB) and K2Al2B2O7 (KABO). We determine the optical rotatory dispersion of YAB and KABO throughout the visible spectrum using a spectrophotometer with rotatable polarizers. Space-resolved measurements of the optical rotation can be related to the twin structure and give estimates on the extent of twinning. The reported dispersion relations for the rotatory power of YAB and KABO may be used to assess crystal quality and to select twin-free specimens.
Multimodal meaning making: The annotation of nonverbal elements in multimodal corpus transcription
(2021)
The article discusses how to integrate annotation for nonverbal elements (NVE) from multimodal raw data as part of a standardized corpus transcription. We argue that it is essential to include multimodal elements when investigating conversational data, and that in order to integrate these elements, a structured approach to complex multimodal data is needed. We discuss how to formulate a structured corpus-suitable standard syntax and taxonomy for nonverbal features such as gesture, facial expressions, and physical stance, and how to integrate it in a corpus. Using corpus examples, the article describes the development of a robust annotation system for spoken language in the corpus of Video-mediated English as a Lingua Franca Conversations (ViMELF 2018) and illustrates how the system can be used for the study of spoken discourse. The system takes into account previous research on multimodality, transcribes salient nonverbal features in a concise manner, and uses a standard syntax. While such an approach introduces a degree of subjectivity through the criteria of salience and conciseness, the system also offers considerable advantages: it is versatile and adaptable, flexible enough to work with a wide range of multimodal data, and it allows both quantitative and qualitative research on the pragmatics of interaction.
Deep brain stimulation (DBS) is an established therapy for movement disorders such as Parkinson's disease (PD) and essential tremor (ET). Adjusting the stimulation parameters, however, is a labour-intensive process and often requires several patient visits. Physicians would prefer objective tools to improve (or maintain) the performance of DBS. Wearable motion sensors (WMS) are able to detect some manifestations of pathological signs, such as tremor in PD. However, the interpretation of sensor data is often highly technical, and methods to visualise the tremor data of patients undergoing DBS in a clinical setting are lacking. This work aims to visualise the dynamics of tremor responses to DBS parameter changes with WMS while patients perform clinical hand movements. To this end, we attended the DBS programming sessions of two patients with the aim of visualising certain aspects of the clinical examination. PD tremor and ET were effectively quantified by acceleration amplitude and frequency. Tremor dynamics were analysed and visualised based on setpoints, movement transitions, and stability aspects. These methods have not yet been employed, and the examples demonstrate how tremor dynamics can be visualised with simple analysis techniques. We thereby provide a base for future research on visualisation tools to assist clinicians who frequently encounter patients undergoing DBS therapy. This could lead to benefits in terms of enhanced evaluation of treatment efficacy in the future.
This paper presents a feasibility study on the production of recycled glycol-modified polyethylene terephthalate (PETG) material for additive manufacturing. Past studies showed a variety of results for the recycling of 3D-printing material; therefore, the precise effect on the material properties is not completely clear. For this work, PETG waste of the same grade was recycled once and further processed into 3D-printing filament. The study compares three blend ratios of purchased plastic pellets and recycled pellets to determine the degradation effect of one recycling cycle and possible blend ratios to counter these effects. Furthermore, the results include a commercially available filament. The comparison uses the filament diameter, the dimensional accuracy of the printed test specimens, and the mechanical properties as quality criteria. The study shows that the recycled material exhibits a minor decrease in tensile strength and Young's modulus.
The following paper aims to determine consumers' expectations of, and attitudes towards, the innovation "Metaverse". It also explores the role the Meta Group plays in mass adoption and how the company influences consumers' possible use of, and opinion on, the project. These results are connected to the fashion industry, further exploring new types of products and a possible distribution channel. This study is therefore useful to developers of metaverses and AR/VR products, the Meta Group, and fashion companies. The main results of this research are: Meta and the Metaverse are viewed critically, and the required technology has not yet reached mainstream use, but interest is present. Digital fashion divided the participants, with some not willing to spend any money and some having already spent over 100€, although the Metaverse's influence on future purchases is small. The Metaverse could serve as a new distribution channel for clothing products. Google Forms was used to conduct this survey-based research. The biggest limitation is the nonexistence of the Metaverse as envisioned by Meta, which made it hard for participants to answer some of the questions asked.
Electric drive systems are increasingly used in automobiles. However, the combination of comfort, dynamics, and safety requirements places high demands on torque accuracy. The complex interplay of battery, inverter, and electrical machine introduces numerous system uncertainties, arising from parameter fluctuations and measurement errors, that influence system performance. In this paper, these influences on the closed-loop torque control are analyzed and quantified using a variance-based sensitivity analysis. The method makes it possible to attribute the variance of the torque accuracy to the parameter uncertainties causing it. Moreover, it quantifies the influence of the parameters independently of the complexity of the analyzed system. In addition, two methods to ensure convergence of the estimated variance-based sensitivity measures are proposed. The results of the analysis are presented for 19 static operating points of a battery electric drive system.
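Variance-based sensitivity analysis of this kind can be sketched with first-order Sobol indices. The toy torque-error model, the parameter names, and the uncertainty ranges below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative toy model: torque error as a function of three uncertain
# parameters (stator resistance, flux linkage, current measurement error).
# The model structure and the uniform ranges are assumptions.
def torque_error(r_s, psi_f, i_err):
    return 0.8 * r_s + 1.5 * psi_f ** 2 + 0.3 * i_err

rng = np.random.default_rng(0)
n = 50_000
A = rng.uniform(0.9, 1.1, (n, 3))   # two independent sample matrices
B = rng.uniform(0.9, 1.1, (n, 3))

y_A = torque_error(*A.T)
y_B = torque_error(*B.T)
var_y = np.var(np.concatenate([y_A, y_B]))

# First-order Sobol indices via the Saltelli pick-freeze estimator:
# S[j] attributes a share of the output variance to parameter j.
S = []
for j in range(3):
    A_Bj = A.copy()
    A_Bj[:, j] = B[:, j]            # replace column j of A with B's column j
    y_ABj = torque_error(*A_Bj.T)
    S.append(np.mean(y_B * (y_ABj - y_A)) / var_y)

print([round(s, 2) for s in S])     # psi_f dominates in this toy model
```

For an additive model with independent inputs, the first-order indices sum to approximately one; the gap to one indicates interaction effects.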
Companies frequently rely on the know-how of external service providers when developing software and solutions. At the same time, modern forms of work and collaboration are changing how products and services are developed. How do these trends influence the collaboration and cooperation between companies and their external agile service providers? The aim of this scientific work is to find out which steps companies must take to implement agile working and collaboration with external service providers. A case study including a qualitative survey was therefore used to identify and demonstrate which measures and actions companies must take to achieve the goal of effectively implementing agile collaboration and cooperation. Three core topics were identified, on the basis of which the research questions regarding these measures are answered: first, which options companies have for implementing an internal agile setup in order to work with agile service providers at eye level; second, which contract variants can support and improve agile collaboration; and third, which agile techniques and methods should be used in agile collaboration. The results of the case studies confirm the assumption that the three identified core topics are essential for effective collaboration in an agile environment. While it was shown on the one hand that contract requirements changed with regard to their flexibility and adaptability, it was also shown that the internal setup requires agile drivers, techniques, and methods to enable effective collaboration with agile service providers. This article provides an overview of the most important content within the three core topics and also gives companies guidance on how to create a basis for effective collaboration.
Unintended nuclear war (2021)
While the contribution of renewable energy technologies to the energy system is increasing, so is the system's complexity. In addition to new types of consumer systems, the future system will be characterized by volatile generation plants that require storage technologies. Furthermore, a solid interconnected grid that enables the transit of electrical energy can reduce the need for generation and storage systems. Appropriate methods are therefore needed to analyze the interactions between energy production and consumption within different system constellations. Energy system models can help to understand and build these future energy systems. However, although various energy models already exist, none of them can cover all issues related to integrating renewable energy systems. The existing research gap is also reflected in the fact that current models cannot model the entire energy system for very high shares of renewable energies with both high temporal resolution (15-min or 1-h steps) and high spatial resolution. Additionally, the low availability of open-source energy models leads to a lack of transparency about exactly how they work. To close this gap, the sector-coupled energy model UCB-SEnMod was developed. Its unique features are its modular structure, high flexibility, and broad applicability: it can model any system constellation and, owing to its software design, can easily be extended with new functions. The software architecture makes it possible to map individual buildings or companies, regions, or even entire countries. In addition, we plan to make UCB-SEnMod available as an open-source framework so that users can more easily understand its functionality and configuration options. This paper presents the methodology of the UCB-SEnMod model. The main components of the model are described in detail, i.e., the energy generation systems, the consumption components in the electricity, heat, and transport sectors, and the options for load balancing.
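A minimal building block of such a model is a time-stepped dispatch of generation, load, and storage. The sketch below is illustrative only and is not the UCB-SEnMod implementation; the greedy charging rule and the charging efficiency are assumptions:

```python
# Greedy single-node dispatch over a generation/load time series
# (e.g. 15-min steps) with a simple battery model. Illustrative only.
def dispatch(generation, load, capacity, eta_charge=0.95):
    soc = 0.0            # battery state of charge [kWh]
    grid_import = []     # residual demand per step [kWh]
    for gen, dem in zip(generation, load):
        net = gen - dem
        if net >= 0:
            # surplus charges the battery; anything beyond capacity is curtailed
            soc = min(capacity, soc + net * eta_charge)
            grid_import.append(0.0)
        else:
            # a deficit is covered from the battery first, then the grid
            from_battery = min(-net, soc)
            soc -= from_battery
            grid_import.append(-net - from_battery)
    return grid_import

# Two steps: 1 kWh surplus, then 1 kWh deficit
print(dispatch([2.0, 0.0], [1.0, 1.0], capacity=10.0, eta_charge=1.0))  # → [0.0, 0.0]
```

In a modular framework of the kind described, each sector (electricity, heat, transport) would contribute its own generation and load series to such a balancing step.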
In this paper, two simple synthetic aperture radar (SAR) methods are applied to data from a 24 GHz FMCW radar mounted on a linear drive for educational purposes. The data of near- and far-range measurements are evaluated using two different SAR signal-processing algorithms featuring the 2D-FFT and the frequency back-projection (FBP) method (Moreira et al., 2013). The two algorithms are compared with respect to runtime, image pixel size, and azimuth and range resolution. The far-range measurements are executed at a range of 60 to 135 m by monitoring cars in a parking lot. The near-range measurements from 0 to 5 m are realised in a measuring chamber equipped with absorber foam and nearly ideal targets such as corner reflectors. The comparison of the 2D-FFT and FBP algorithms shows that both deliver good and similar results for the far-range measurements, but the runtime of the FBP algorithm is up to 150 times longer than that of the 2D-FFT. In the near-range measurements, the FBP algorithm displays a very good azimuth resolution, and targets that are very close to each other can be separated easily. In contrast, the 2D-FFT algorithm has a lower azimuth resolution in the near range, so targets that are very close to each other merge and cannot be separated.
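The back-projection idea can be illustrated on simulated data: range-compressed profiles of a single point target, collected along a linear aperture, are coherently summed onto an image grid. This is a simplified sketch, not the FBP variant of Moreira et al.; the geometry, the sinc range response, and the signal model are assumptions:

```python
import numpy as np

c = 3e8
fc = 24e9                                  # 24 GHz FMCW carrier
positions = np.linspace(-0.5, 0.5, 41)     # antenna positions along the rail [m]
target = (0.0, 3.0)                        # assumed point target (x, y) [m]
rbins = np.linspace(2.0, 4.0, 201)         # range axis of compressed profiles [m]

# Simulated range-compressed data: sinc-shaped peak with two-way carrier phase
data = np.zeros((positions.size, rbins.size), complex)
for m, xa in enumerate(positions):
    r = np.hypot(target[0] - xa, target[1])
    data[m] = np.sinc((rbins - r) / 0.05) * np.exp(-4j * np.pi * fc * r / c)

# Back projection: for every pixel, interpolate each profile at the pixel's
# range and remove the expected phase before summing coherently
xs = np.linspace(-0.5, 0.5, 51)
ys = np.linspace(2.5, 3.5, 51)
image = np.zeros((ys.size, xs.size), complex)
for m, xa in enumerate(positions):
    for iy, y in enumerate(ys):
        r = np.hypot(xs - xa, y)
        sample = (np.interp(r, rbins, data[m].real)
                  + 1j * np.interp(r, rbins, data[m].imag))
        image[iy] += sample * np.exp(4j * np.pi * fc * r / c)

iy, ix = np.unravel_index(np.abs(image).argmax(), image.shape)
print(xs[ix], ys[iy])   # the image peak recovers the target position
```

The nested pixel loop is what makes back projection slow relative to the 2D-FFT approach, consistent with the runtime difference reported in the abstract.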
In the single-processor scheduling problem with time restrictions, there is one main processor and B resources that are used to execute the jobs. A perfect schedule has no idle times or gaps on the main processor, and its makespan is therefore equal to the sum of the processing times. In general, more resources result in smaller makespans, but since in practical applications it is often more economical not to mobilize unnecessary and expensive resources, we investigate in this paper the problem of finding the smallest number B of resources that makes a perfect schedule possible. We show that the decision version of this problem is NP-complete, derive new structural properties of perfect schedules, and describe a Mixed Integer Linear Programming (MIP) formulation to solve the problem. A large number of computational tests shows that (for our randomly chosen problem instances) only B = 3 or B = 4 resources are sufficient for a perfect schedule.
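For small instances, the minimal B can be found by brute force over job orders. The sketch below assumes one common reading of the time-restriction constraint (any unit-length time window may intersect at most B jobs); it is an illustration, not the paper's MIP formulation:

```python
from itertools import permutations

def perfect_schedule_ok(order, B, eps=1e-9):
    """Check whether the gapless schedule running jobs in this order keeps
    every unit-length window intersecting at most B jobs (assumed constraint)."""
    starts, t = [], 0.0
    for d in order:
        starts.append(t)
        t += d
    ends = [s + d for s, d in zip(starts, order)]
    # the per-window job count only changes at these breakpoints
    for bp in [0.0] + ends + [s - 1.0 for s in starts]:
        t0 = bp + eps
        hits = sum(1 for s, e in zip(starts, ends) if s < t0 + 1.0 and e > t0)
        if hits > B:
            return False
    return True

def min_resources(proc_times):
    """Smallest B admitting a perfect (gapless) schedule, by brute force."""
    B = 1
    while not any(perfect_schedule_ok(order, B)
                  for order in permutations(proc_times)):
        B += 1
    return B

print(min_resources([1.0, 1.0, 1.0]))   # → 2
print(min_resources([0.5, 0.5, 0.5]))   # → 3
```

Shorter jobs pack more job boundaries into a unit window, which is why the second instance needs more resources; a MIP formulation replaces this enumeration for realistic instance sizes.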
Numerous research methods have been developed to detect anomalies in the areas of security and risk analysis. In healthcare, there are many use cases where anomaly detection is relevant, for example the early detection of sepsis. Early treatment of sepsis is cost-effective and reduces the number of hospital days of patients in the ICU. No single procedure is sufficient for sepsis diagnosis, and combinations of approaches are needed. Detecting anomalies in patient time-series data could help accelerate some clinical decisions; however, our algorithm must be viewed as complementary to other approaches based on laboratory values and physician judgement. The focus of this work is to develop a hybrid method for detecting anomalies that occur, for example, in multidimensional medical signals, sensor signals, or other time series in business and nature. The novelty of our approach lies in extending and combining existing approaches (statistics, Self-Organizing Maps, and Linear Discriminant Analysis) in a unique and unprecedented way, with the goal of identifying different types of anomalies in real-time measurement data and determining the point where each anomaly occurs. The proposed algorithm not only has the full potential to detect anomalies, but can also find the actual points where an anomaly starts.
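The statistical building block of such a hybrid can be illustrated with a rolling z-score that flags the point where an anomaly starts; the SOM and LDA stages of the paper's method are not reproduced here, and the window size and threshold are assumptions:

```python
import numpy as np

def rolling_zscore_flags(series, window=20, thresh=4.0):
    """Flag points deviating more than `thresh` standard deviations
    from the statistics of the preceding `window` samples."""
    x = np.asarray(series, float)
    flags = np.zeros(x.size, bool)
    for i in range(window, x.size):
        ref = x[i - window:i]
        mu, sd = ref.mean(), ref.std()
        if sd > 0 and abs(x[i] - mu) / sd > thresh:
            flags[i] = True
    return flags

# Regular signal with one injected anomaly at index 50
signal = np.array([i % 2 for i in range(100)], float)
signal[50] = 10.0
print(np.flatnonzero(rolling_zscore_flags(signal)))   # → [50]
```

Because only past samples form the reference window, the detector runs online and pinpoints the first anomalous sample rather than a whole anomalous region.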
In 2019, it was found at IBM that bank sales depended strongly on a few large banks and that the growth targets of the sales division could not be achieved through the existing business with these same customers. To counteract this dependency, an NCA-specific sales team for the banking industry was established to support small and medium-sized banks with personal commitment and expertise and to develop them into long-term business partners of IBM. This research focuses on the development of a performance measurement system (PMS) for NCA sales teams. It postulates the hypothesis that more effective and better-suited performance measurement systems can be developed for NCA sales of information technology to financial institutions. The authors use the methodology of expert interviews and Mayring's qualitative content analysis to gain insights into the relevant factors that need to be considered when evaluating the performance of such sales teams. The paper identifies stakeholders, challenges, and goals that should be integrated into a performance measurement system, as well as KPIs to measure them. The results are consolidated into a conceptual sketch for an NCA-sales-optimized PMS. The paper distinguishes itself from other research through an approach that gives detailed guidance for the practical implementation of its findings. The research was conducted with professionals in the IT sector; however, all of them were working for the same company, and the data was collected in the short span of one week as part of a research project. The outcome can be used for further studies on how to effectively measure performance in NCA sales teams.
In this paper, the mechanical damage behavior is investigated based on the characteristic roughness of the surface and the orientation of superficial structures. The main goal is to explore the surface roughness of mechanically loaded copper conductors as a lifetime indicator. For this purpose, copper conductors are mechanically stressed in accordance with EN 50396 and then examined metallographically and microscopically. The microstructure examination shows that the roughness is caused by material extrusion and cracks due to work hardening in the surface area. Using confocal microscopy, it is shown for the first time that significant surface roughness forms over the service life of copper conductors. The roughness increases monotonically, but not linearly, with the number of cycles due to internal microstructural processes, and its development can be divided into three stages. First inspections of the conductor surface over the lifetime show a correlation between the roughness and the intensity of structures oriented at 45° to the loading direction. This phenomenon, already known from microscopic slip lines, is thus also evident in macroscopic roughness formation and is well supported by the research theory on material extrusion along dislocation lines. In summary, the lifetime of a conductor can be determined from its developing roughness, which enables its use as a sensor element.
Background: Recent shoulder injury prevention programs have utilized resistance exercises combined with different forms of instability, with the goal of eliciting functional adaptations and thereby reducing the risk of injury. However, it is still unknown how an unstable weight mass (UWM) affects the muscular activity of the shoulder stabilizers. The aim of the study was to assess the neuromuscular activity of the dynamic shoulder stabilizers under four conditions of stable and unstable weight mass during three shoulder exercises. It was hypothesized that a combined condition of weight and UWM would elicit greater activation due to the increased stabilization demand.
Methods: Sixteen participants (7 m/9 f) were included in this cross-sectional study and prepared with an EMG setup for the following muscles: upper/lower trapezius (U.TA/L.TA), lateral deltoid (DE), latissimus dorsi (LD), serratus anterior (SA), and pectoralis major (PE). A maximal voluntary isometric contraction test (MVIC; 5 s) was performed on an isokinetic dynamometer. Next, internal/external rotation (In/Ex), abduction/adduction (Ab/Ad), and diagonal flexion/extension (F/E) exercises (5 reps) were performed with four custom-made pipes representing different exercise conditions: first the empty pipe (P; 0.5 kg) and then, in randomized order, the water-filled pipe (PW; 1 kg), the weight pipe (PG; 4.5 kg), and the weight + water-filled pipe (PWG; 4.5 kg), while EMG was recorded. Raw root-mean-square values (RMS) were normalized to the MVIC (%MVIC). Differences between conditions for RMS%MVIC, scapular stabilizer ratios (SR: U.TA/L.TA; U.TA/SA), and contraction ratios (CR: concentric/eccentric) were analyzed (paired t-test; p ≤ 0.05; Bonferroni-adjusted α = 0.008).
Results: PWG showed significantly greater muscle activity for all exercises and all muscles except for PE compared to P and PW. Condition PG elicited muscular activity comparable to PWG (p > 0.008) with significantly lower activation of L.TA and SA in the In/Ex rotation. The SR ratio was significantly higher in PWG compared to P and PW. No significant differences were found for the CR ratio in all exercises and for all muscles.
Conclusion: Higher weight generated greater muscle activation, whereas a UWM raised the neuromuscular activity by increasing the stabilization demands. Especially in the In/Ex rotation, a UWM increased the RMS%MVIC and the SR ratio. This might improve training effects in shoulder prevention and rehabilitation programs.
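The normalization described in the Methods (trial RMS expressed relative to the MVIC trial) can be sketched as follows; the signals are synthetic placeholders, not study data:

```python
import numpy as np

def rms(signal):
    """Root-mean-square amplitude of an EMG segment."""
    x = np.asarray(signal, float)
    return float(np.sqrt(np.mean(x ** 2)))

def rms_percent_mvic(trial, mvic_trial):
    """Trial RMS as a percentage of the MVIC trial's RMS (%MVIC)."""
    return 100.0 * rms(trial) / rms(mvic_trial)

# Synthetic example: exercise EMG at half the MVIC amplitude
exercise = [1.0, -1.0, 1.0, -1.0]
mvic = [2.0, -2.0, 2.0, -2.0]
print(rms_percent_mvic(exercise, mvic))   # → 50.0
```

Normalizing to %MVIC makes activation levels comparable across muscles and participants, which is what allows the between-condition comparisons reported in the Results.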