Numerous research methods have been developed to detect anomalies in the areas of security and risk analysis. Healthcare also offers many use cases where anomaly detection is relevant; the early detection of sepsis is one example. Early treatment of sepsis is cost-effective and reduces the number of hospital days patients spend in the ICU. No single procedure is sufficient for sepsis diagnosis, so combinations of approaches are needed. Detecting anomalies in patient time-series data could help accelerate some clinical decisions; however, our algorithm must be viewed as complementary to other approaches based on laboratory values and physician judgment. The focus of this work is to develop a hybrid method for detecting anomalies as they occur, for example, in multidimensional medical signals, sensor signals, or other time series in business and nature. The novelty of our approach lies in extending and combining existing techniques, statistics, Self-Organizing Maps, and Linear Discriminant Analysis, in a new way, with the goal of identifying different types of anomalies in real-time measurement data and locating the point at which an anomaly occurs. The proposed algorithm thus not only detects anomalies but also finds the actual points where an anomaly starts.
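As a rough illustration of how a Self-Organizing Map can be combined with a simple statistical threshold for time-series anomaly detection, consider the following Python sketch. It is not the authors' hybrid algorithm (which also incorporates Linear Discriminant Analysis); the window length, map size, and 3-sigma rule are illustrative assumptions.

```python
import numpy as np

# Minimal sketch, assuming a 1-D signal: score sliding windows by their
# quantization error against a small self-organizing map, then flag
# windows whose error exceeds a simple statistical (3-sigma) threshold.

def sliding_windows(signal, width):
    """Stack overlapping windows of `width` samples into rows."""
    return np.lib.stride_tricks.sliding_window_view(signal, width)

def train_som(data, rows=5, cols=5, epochs=10, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny SOM with exponentially decaying rate and neighborhood."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(rows, cols, data.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    order = rng.permutation(n_steps) % len(data)
    for t, x in enumerate(data[order]):
        lr = lr0 * np.exp(-t / n_steps)
        sigma = sigma0 * np.exp(-t / n_steps)
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        h = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1)
                   / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights

def quantization_errors(data, weights):
    """Distance of each window to its best-matching SOM unit."""
    flat = weights.reshape(-1, weights.shape[-1])
    return np.min(np.linalg.norm(flat[None, :, :] - data[:, None, :],
                                 axis=-1), axis=1)

# Usage: fit the map on the full signal (the few anomalous windows barely
# influence it), then report where the anomalous region begins.
signal = np.sin(np.linspace(0, 20 * np.pi, 2000))
signal[1200:1220] += 3.0                      # injected anomaly
windows = sliding_windows(signal, 32)
weights = train_som(windows)
errors = quantization_errors(windows, weights)
threshold = errors.mean() + 3 * errors.std()
anomalous_starts = np.where(errors > threshold)[0]
print("first anomalous window starts at sample", anomalous_starts.min())
```

Reporting the first window above the threshold mirrors the abstract's goal of finding the point where an anomaly starts, not just whether one exists.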
Abstract: This paper is about distinguishing fully random from semi-random shuffled data sets using unsupervised learning algorithms. Because of the limits of the k-means algorithm alone, a recurrent autoencoder is used for feature extraction to improve the k-means results. In a further step, the autoencoder alone is used for clustering.
Introduction: In recent years, machine learning has been used in more and more areas, and it is also well suited to pattern recognition in data. Random data is characterized by the absence of defined patterns. Permutations without repetition have the highest entropy for a sequence of their length, which matches Andrei Kolmogorov's characterization of random data as having the highest information content and being incompressible. Therefore, this paper analyses the difference between random permutations and well-shuffled permutations that still contain some residual patterns. This is done with a recurrent autoencoder.
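As a hedged illustration of the pipeline described above, the following Python sketch trains an LSTM autoencoder on synthetic permutations and clusters the learned codes with k-means. The sequence length, bottleneck size, and the "semi-random" generator (a few transpositions applied to the identity permutation) are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

SEQ_LEN, CODE_DIM, N = 32, 8, 500
rng = np.random.default_rng(0)

def full_random():
    """A uniformly random permutation (no residual patterns)."""
    return rng.permutation(SEQ_LEN)

def semi_random(swaps=16):
    """Start from the identity and apply only a few random transpositions,
    so residual order patterns remain (assumed stand-in for 'semi-random')."""
    p = np.arange(SEQ_LEN)
    for _ in range(swaps):
        i, j = rng.integers(SEQ_LEN, size=2)
        p[i], p[j] = p[j], p[i]
    return p

x = np.stack([full_random() for _ in range(N)]
             + [semi_random() for _ in range(N)]).astype("float32")
x = x[..., None] / SEQ_LEN          # scale values into [0, 1)

# Recurrent autoencoder: LSTM encoder -> fixed-length code -> LSTM decoder.
encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, 1)),
    tf.keras.layers.LSTM(CODE_DIM),
])
decoder = tf.keras.Sequential([
    tf.keras.layers.RepeatVector(SEQ_LEN),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x, x, epochs=20, batch_size=64, verbose=0)

# Cluster the learned codes instead of the raw sequences.
codes = encoder.predict(x, verbose=0)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(codes)
print("cluster sizes:", np.bincount(labels))
```

The design point is the one the abstract makes: k-means on raw permutations sees little structure, while the encoder's compressed codes give it features in which residual order patterns can separate from true randomness.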
In this paper, two simple synthetic aperture radar (SAR) methods are applied to data from a 24 GHz FMCW radar mounted on a linear drive for educational purposes. The data of near- and far-range measurements are evaluated using two different SAR signal-processing algorithms, a 2D-FFT and the frequency back-projection (FBP) method (Moreira et al., 2013). The two algorithms are compared with respect to runtime, image pixel size, and azimuth and range resolution. The far-range measurements are carried out at ranges of 60 to 135 m by monitoring cars in a parking lot. The near-range measurements from 0 to 5 m are realised in a measuring chamber equipped with absorber foam and nearly ideal targets such as corner reflectors. The comparison shows that both algorithms deliver good and similar results for the far-range measurements, but the runtime of the FBP algorithm is up to 150 times longer than that of the 2D-FFT. In the near-range measurements, the FBP algorithm displays a very good azimuth resolution, and targets that are very close to each other can be separated easily. In contrast, the 2D-FFT algorithm has a lower azimuth resolution in the near range, so targets that are very close to each other merge together and cannot be separated.
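To illustrate why back projection costs so much more runtime than an FFT-based approach, the following Python sketch implements a generic time-domain back projection on simulated range-compressed profiles. It is not the paper's frequency back-projection implementation; the rail geometry, target positions, and the 24 GHz carrier are assumed values for illustration.

```python
import numpy as np

# Toy back-projection sketch: every image pixel coherently sums the
# phase-corrected echoes over all antenna positions along the rail, so the
# cost scales as O(pixels x aperture positions), far above a 2D-FFT.

c, fc = 3e8, 24e9                              # assumed 24 GHz carrier
wavelength = c / fc
n_pos, n_range = 64, 256
rail = np.linspace(-0.5, 0.5, n_pos)           # antenna positions [m]
r_axis = np.linspace(0.1, 5.0, n_range)        # range bins [m] (near range)
dr = r_axis[1] - r_axis[0]
targets = [(0.2, 2.0), (-0.3, 3.5)]            # (cross-range, range) [m]

def range_bin(R):
    """Map a distance to the nearest range bin index."""
    return np.clip(np.round((R - r_axis[0]) / dr).astype(int), 0, n_range - 1)

# Simulate range-compressed profiles: one complex peak per target carrying
# the two-way phase -4*pi*R/lambda at each rail position.
profiles = np.zeros((n_pos, n_range), dtype=complex)
for k, xa in enumerate(rail):
    for xt, rt in targets:
        R = np.hypot(xt - xa, rt)
        profiles[k, range_bin(R)] += np.exp(-1j * 4 * np.pi * R / wavelength)

# Back projection: look up each pixel's echo per antenna position, undo
# the phase, and integrate coherently over the whole aperture.
x_img = np.linspace(-0.6, 0.6, 120)
image = np.zeros((len(x_img), n_range), dtype=complex)
for k, xa in enumerate(rail):
    for i, xp in enumerate(x_img):
        R = np.hypot(xp - xa, r_axis)          # antenna-to-pixel distances
        image[i] += (profiles[k, range_bin(R)]
                     * np.exp(1j * 4 * np.pi * R / wavelength))

peak = np.unravel_index(np.argmax(np.abs(image)), image.shape)
print("brightest pixel at x = %.2f m, r = %.2f m"
      % (x_img[peak[0]], r_axis[peak[1]]))
```

The per-pixel, per-position inner loop is what drives the runtime gap reported in the abstract, while the exact range lookup at every aperture position is also what buys back projection its better near-range azimuth resolution.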