In this paper, the mechanical damage behavior is investigated based on the characteristic roughness of the surface and the orientation of superficial structures. The main goal is to explore the surface roughness of mechanically loaded copper conductors as a lifetime indicator. For this purpose, copper conductors are mechanically stressed in accordance with EN 50396 and then examined metallographically and microscopically. The microstructure examination shows that the roughness is caused by material extrusion and cracks due to work hardening in the surface area. Using confocal microscopy, it is shown for the first time that significant surface roughness forms over the service life of copper conductors. The roughness increases monotonically, but not linearly, with the number of cycles due to internal microstructural processes and can be divided into three stages. First inspections of the conductor surface over the lifetime show a correlation between the intensity of structures oriented 45° to the loading direction and the roughness. This phenomenon, already known from microscopic slip lines, is thus also evident in macroscopic roughness formation and is consistent with the theory of material extrusion along dislocation lines. In summary, a lifetime determination is possible based on the developing roughness, which enables the conductor's use as a sensor element.
Geometry Generation of Involute Gear Drives: Profile-Shifted Helical Spur Gears
(2022)
In this work, the gear geometry of spur gears is computed and formatted for transfer to a CAD program. The contours of the involute and the trochoid are generated according to the same rules as in manufacturing by hobbing. The user can specify the main properties such as module, tip clearance, and corner rounding. In addition, helical, profile-shifted spur gears with a deep tooth form and tip shortening can be generated.
Via data export, the coordinates are saved and transferred to the CAD program by a macro. The 3D body is created from the two contour curves by sweeping along the helix.
For further processing, the gear geometry is exported to a universal file format after manual tessellation.
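The involute flank contour described above can be generated parametrically. As a minimal sketch (Python; not the tool's actual implementation, and omitting the trochoid root fillet, profile shift, and helix sweep), the flank coordinates could be computed like this:

```python
import math

def involute_points(base_radius, t_max, n=50):
    """Parametric involute of a circle, the tooth-flank contour of a spur
    gear: x = r_b(cos t + t sin t), y = r_b(sin t - t cos t)."""
    pts = []
    for i in range(n):
        t = t_max * i / (n - 1)
        x = base_radius * (math.cos(t) + t * math.sin(t))
        y = base_radius * (math.sin(t) - t * math.cos(t))
        pts.append((x, y))
    return pts

# Example values: module m = 2 mm, z = 20 teeth, pressure angle 20 deg,
# no profile shift or tip shortening
m, z, alpha = 2.0, 20, math.radians(20)
r_base = 0.5 * m * z * math.cos(alpha)          # base circle radius
r_tip = 0.5 * m * z + m                          # tip circle radius
t_tip = math.sqrt((r_tip / r_base) ** 2 - 1)     # roll angle at the tip circle
flank = involute_points(r_base, t_tip)           # contour points for CAD export
```

The roll angle at the tip follows from the involute radius relation r = r_b * sqrt(1 + t^2), so the last contour point lies exactly on the tip circle.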
In 2019 at IBM, it was found that bank sales depended strongly on a few large banks and that the growth targets of the sales division could not be achieved with the existing business with these same customers. To counteract this dependency, an NCA-specific sales team for the banking industry was established to support small and medium-sized banks with personal commitment and expertise and to develop them into long-term business partners of IBM. This research focuses on the development of a performance measurement system (PMS) for NCA sales teams. It postulates the hypothesis that more effective and better-suited performance measurement systems can be developed for NCA sales of information technology to financial institutions. The authors use the methodology of expert interviews and Mayring's qualitative content analysis to gain insights into the relevant factors that need to be considered when evaluating the performance of such sales teams. The paper identifies stakeholders, challenges, and goals that should be integrated into a performance measurement system, as well as KPIs to measure them. The results are consolidated into a conceptual sketch for an NCA-sales-optimized PMS. The paper distinguishes itself from other research through an approach that gives detailed guidance for the practical implementation of its findings. The research was conducted with professionals in the IT sector; however, all of them were working for the same company, and the data was collected in the short span of one week as part of a research project. The outcome can be used for further studies on how to effectively measure performance in NCA sales teams.
Numerous research methods have been developed to detect anomalies in the areas of security and risk analysis. In healthcare, there are numerous use cases where anomaly detection is relevant; the early detection of sepsis is one such use case. Early treatment of sepsis is cost-effective and reduces the number of hospital days patients spend in the ICU. No single procedure is sufficient for sepsis diagnosis, and combinations of approaches are needed. Detecting anomalies in patient time series data could help accelerate some clinical decisions. However, our algorithm must be viewed as complementary to other approaches based on laboratory values and physician judgment. The focus of this work is to develop a hybrid method for detecting anomalies that occur, for example, in multidimensional medical signals, sensor signals, or other time series in business and nature. The novelty of our approach lies in the extension and combination of existing approaches (statistics, Self-Organizing Maps, and Linear Discriminant Analysis) in a unique and unprecedented way, with the goal of identifying different types of anomalies in real-time measurement data and defining the point where the anomaly occurs. The proposed algorithm not only has the potential to detect anomalies, but also to find the actual points where an anomaly starts.
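The statistical component of such a hybrid detector can be illustrated in isolation. The sketch below (Python; a plain sliding-window z-score, not the paper's combined statistics/SOM/LDA method) flags the index in a time series where a deviation starts:

```python
import math
import statistics

def zscore_anomalies(series, window=20, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the preceding window of samples."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.fmean(hist)
        sd = statistics.pstdev(hist)
        if sd > 0 and abs(series[i] - mu) / sd > threshold:
            flags.append(i)
    return flags

# Smooth synthetic signal with an injected spike at index 60
signal = [math.sin(i / 5) for i in range(100)]
signal[60] += 5.0
anomalies = zscore_anomalies(signal)
```

A hybrid method as described in the abstract would combine such a statistical score with learned structure (e.g. SOM distances) to separate different anomaly types.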
In the single-processor scheduling problem with time restrictions, there is one main processor and B resources that are used to execute the jobs. A perfect schedule has no idle times or gaps on the main processor, and the makespan is therefore equal to the sum of the processing times. In general, more resources result in smaller makespans, and since in practical applications it is often more economical not to mobilize unnecessary and expensive resources, we investigate in this paper the problem of finding the smallest number B of resources that makes a perfect schedule possible. We show that the decision version of this problem is NP-complete, derive new structural properties of perfect schedules, and describe a Mixed Integer Linear Programming (MIP) formulation to solve the problem. A large number of computational tests shows that (for our randomly chosen problem instances) only B=3 or B=4 resources are sufficient for a perfect schedule.
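The notion of a perfect schedule used above can be checked directly from its definition: the jobs must tile the interval from 0 to the total processing time with no gaps. A minimal sketch in Python (hypothetical job and start-time inputs; it checks only the gap-free condition, not the B-resource constraint that the paper's MIP formulation handles):

```python
def is_perfect(proc_times, starts):
    """Return True if the jobs tile [0, sum(proc_times)] back to back,
    i.e. the main processor has no idle gaps and no overlaps."""
    order = sorted(range(len(proc_times)), key=lambda j: starts[j])
    t = 0
    for j in order:
        if starts[j] != t:           # gap or overlap before job j
            return False
        t += proc_times[j]
    return t == sum(proc_times)      # makespan equals total processing time

print(is_perfect([3, 1, 2], [0, 3, 4]))  # gap-free: True
print(is_perfect([3, 1, 2], [0, 4, 5]))  # idle slot at t = 3: False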
In this paper, two simple synthetic aperture radar (SAR) methods are applied to data from a 24 GHz FMCW radar mounted on a linear drive for educational purposes. The data of near- and far-range measurements are evaluated using two different SAR signal processing algorithms featuring the 2D-FFT and the frequency back projection (FBP) method (Moreira et al., 2013). The two algorithms are compared with respect to runtime, image pixel size, and azimuth and range resolution. The far-range measurements are executed at a range of 60 to 135 m by monitoring cars in a parking lot. The near-range measurements from 0 to 5 m are realised in a measuring chamber equipped with absorber foam and nearly ideal targets such as corner reflectors. The comparison of the 2D-FFT and FBP algorithms shows that both deliver good and similar results for the far-range measurements, but the runtime of the FBP algorithm is up to 150 times longer than that of the 2D-FFT. In the near-range measurements, the FBP algorithm achieves a very good azimuth resolution, and targets that are very close to each other can be separated easily. In contrast, the 2D-FFT algorithm has a lower azimuth resolution in the near range, so targets that are very close to each other merge and cannot be separated.
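The back projection idea, and why its runtime grows so quickly, can be illustrated with a toy delay-and-sum version: for every image pixel, every range profile is sampled at the computed antenna-to-pixel distance (hence the pixel count times aperture-position count inner loop). The sketch below (Python; magnitude-only, idealized delta range profiles, no phase handling as a real FBP implementation would need) reconstructs a single synthetic point target:

```python
import math

def backprojection(positions, profiles, r_max, grid):
    """Delay-and-sum back projection: for each image pixel, sum the
    range-profile samples at the computed antenna-pixel distance."""
    n_bins = len(profiles[0])
    dr = r_max / n_bins                       # range-bin width
    image = []
    for (px, py) in grid:
        val = 0.0
        for (ax, ay), prof in zip(positions, profiles):
            k = int(math.hypot(px - ax, py - ay) / dr)
            if k < n_bins:
                val += prof[k]
        image.append(val)
    return image

# Synthetic scene: point target at (0, 3) m, antenna moving along y = 0
target = (0.0, 3.0)
positions = [(x / 10.0 - 1.0, 0.0) for x in range(21)]   # aperture x in [-1, 1] m
r_max, n_bins = 5.0, 100
profiles = []
for (ax, ay) in positions:
    prof = [0.0] * n_bins
    prof[int(math.hypot(target[0] - ax, target[1] - ay) / (r_max / n_bins))] = 1.0
    profiles.append(prof)

grid = [(gx / 10.0 - 1.0, 3.0) for gx in range(21)]      # image line through y = 3 m
image = backprojection(positions, profiles, r_max, grid)
```

The brightest pixel coincides with the target position, since only there do all range profiles add up coherently.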
While the contribution of renewable energy technologies to the energy system is increasing, so is the system's complexity. In addition to new types of consumer systems, the future system will be characterized by volatile generation plants that require storage technologies. Furthermore, a solid interconnected system that enables the transit of electrical energy can reduce the need for generation and storage systems. Therefore, appropriate methods are needed to analyze energy production and consumption interactions within different system constellations. Energy system models can help to understand and build these future energy systems. However, although various energy models already exist, none of them can cover all issues related to integrating renewable energy systems. The existing research gap is also reflected in the fact that current models cannot represent the entire energy system for very high shares of renewable energies at high temporal resolution (15-min or 1-h steps) and high spatial resolution. Additionally, the low availability of open-source energy models leads to a lack of transparency about exactly how they work. To close this gap, the sector-coupled energy model (UCB-SEnMod) was developed. Its unique features are its modular structure, high flexibility, and broad applicability: it can model any system constellation and, owing to its software design, can easily be extended with new functions. The software architecture makes it possible to map individual buildings or companies, regions, or even countries. In addition, we plan to make the energy model UCB-SEnMod available as an open-source framework to enable users to understand its functionality and configuration options more easily. This paper presents the methodology of the UCB-SEnMod model. The main components of the model are described in detail, i.e., the energy generation systems, the consumption components in the electricity, heat, and transport sectors, and the possibilities of load balancing.
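The load-balancing idea (a residual load left after renewable generation, served by storage) can be sketched in a few lines. This is a generic illustration with hypothetical inputs, not the UCB-SEnMod implementation:

```python
def dispatch(demand, generation, capacity, soc0=0.0):
    """Greedy storage dispatch over a time series (e.g. 15-min steps):
    renewable surplus charges the store, deficits discharge it, and the
    remainder is the residual load that grid or backup must cover.
    Illustrative only; losses and power limits are ignored."""
    soc = soc0                                   # state of charge
    residual = []
    for d, g in zip(demand, generation):
        balance = g - d                          # > 0: surplus, < 0: deficit
        if balance >= 0:
            soc += min(balance, capacity - soc)  # charge up to capacity
            residual.append(0.0)
        else:
            discharge = min(-balance, soc)       # discharge what is stored
            soc -= discharge
            residual.append(-balance - discharge)  # unmet demand
    return residual, soc

residual, soc = dispatch(demand=[2, 2, 2], generation=[4, 1, 0], capacity=10)
```

In this tiny example the first step's surplus of 2 charges the store, which then covers the later deficits until it is empty, leaving a residual load of 1 in the last step.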
Unintended nuclear war
(2021)
Companies frequently rely on the know-how of external service providers when developing software and solutions. At the same time, modern forms of work and collaboration are changing how products and services are developed. How do these trends influence the collaboration and cooperation between companies and their external agile service providers? The goal of this scientific work is to find out which steps companies must take to implement agile working and the collaboration with external service providers. A case study including a qualitative survey was therefore used to identify and demonstrate which measures and actions companies must take to achieve the goal of effectively implementing agile collaboration and cooperation. Three core topics were identified, on the basis of which the research questions regarding these measures are answered: first, which options companies have to implement an internal agile setup in order to collaborate with agile service providers at eye level; second, which contract variants can support and improve agile collaboration; and third, which agile techniques and methods should be used in agile collaboration. The results of the case studies confirm the assumption that the three identified core topics are essential for effective collaboration in an agile environment. While it was shown on the one hand that contract requirements changed with respect to their flexibility and adaptability, it was also shown that the internal setup requires agile drivers, techniques, and methods to enable effective collaboration with agile service providers. This article gives an overview of the most important content within the three core topics and also gives companies guidance on how to create a basis for effective collaboration.
Electric drive systems are increasingly used in automobiles. However, the combination of comfort, dynamics, and safety requirements places high demands on torque accuracy. The complex interplay of battery, inverter, and electrical machine gives rise to numerous system uncertainties, caused by parameter fluctuations and measurement errors, that influence the system performance. In this paper, these influences on the closed-loop torque control are analyzed and quantified using a variance-based sensitivity analysis. The method makes it possible to connect the variance of the torque accuracy with the parameter uncertainties causing this variance. Moreover, it quantifies the influences of the parameters independently of the complexity of the analyzed system. In addition, two methods to ensure convergence of the estimated variance-based sensitivity measures are proposed. The results of the analysis are presented for 19 static operating points of a battery electric drive system.
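Variance-based sensitivity analysis of this kind commonly rests on Sobol indices, which apportion the output variance to the individual input parameters. A minimal Monte Carlo sketch (Python; the standard pick-and-freeze estimator applied to a toy linear model, not the paper's drive-system model):

```python
import random

def sobol_first_order(model, n_params, n=100_000, seed=1):
    """Estimate first-order Sobol indices S_i = V[E(Y|X_i)] / V(Y) with the
    pick-and-freeze Monte Carlo estimator, using two independent sample
    matrices A and B (inputs assumed uniform on [0, 1])."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n   # total output variance V(Y)
    S = []
    for i in range(n_params):
        # A_B^i: matrix A with column i replaced by column i of B
        yABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # estimator: S_i ~ (1/n) * sum y_B * (y_ABi - y_A) / V(Y)
        S.append(sum(yb * (yi - ya) for ya, yb, yi in zip(yA, yB, yABi)) / (n * var))
    return S

# Toy model Y = 4*X1 + X2: analytically S1 = 16/17 ~ 0.94, S2 = 1/17 ~ 0.06
S = sobol_first_order(lambda x: 4 * x[0] + x[1], n_params=2)
```

Applied to a torque-control model, such indices connect the variance of the torque accuracy to the individual parameter uncertainties, as the abstract describes; the estimator's own Monte Carlo convergence is exactly the issue the paper's two proposed convergence checks address.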