
Analysis of anomalies in random permutations using recurrent neural networks

  • Abstract: This paper is about detecting the difference between fully random and semi-randomly shuffled data sets with the use of unsupervised learning algorithms. Because of the limits of the k-means algorithm alone, a recurrent autoencoder is used for feature extraction to improve the results of k-means. In a next step, the autoencoder alone is used for clustering. Introduction: In recent years, machine learning has been applied in more and more areas, and it is also well suited to pattern recognition in data. Random data are characterized by the absence of defined patterns. Permutations without repetitions have the highest entropy for a sequence of their length, which matches the characterization of randomness by Andrei Kolmogorov, who states that random data carry the highest amount of information and cannot be compressed. Therefore, this paper analyses the difference between fully random permutations and well-shuffled permutations that still retain some residual patterns. This is done via a recurrent autoencoder; a hypothetical sketch of such a pipeline is given below.
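The following is a minimal, hypothetical sketch of such a pipeline, not the authors' implementation: it generates fully random and only partially shuffled permutations, trains a small LSTM autoencoder (Keras) for feature extraction, and clusters the learned feature vectors with k-means (scikit-learn). Sequence length, network sizes, and the number of swaps are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): fully random vs. partially shuffled
# permutations, a recurrent (LSTM) autoencoder for feature extraction, and k-means
# clustering of the extracted features. All sizes and parameters are assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.cluster import KMeans

SEQ_LEN = 16      # length of each permutation (assumption)
N_SAMPLES = 2000  # sequences per class (assumption)
rng = np.random.default_rng(0)

def random_permutation():
    """Fully random permutation of 0..SEQ_LEN-1 (maximum-entropy case)."""
    return rng.permutation(SEQ_LEN)

def semi_random_permutation(n_swaps=8):
    """Partially shuffled permutation: only a few random swaps are applied,
    so residual ordering patterns remain."""
    p = np.arange(SEQ_LEN)
    for _ in range(n_swaps):
        i, j = rng.integers(0, SEQ_LEN, size=2)
        p[i], p[j] = p[j], p[i]
    return p

# Build the data set: first half fully random, second half semi-random.
X = np.array([random_permutation() for _ in range(N_SAMPLES)] +
             [semi_random_permutation() for _ in range(N_SAMPLES)])
X = X[..., None] / (SEQ_LEN - 1)   # scale to [0, 1], shape (N, SEQ_LEN, 1)

# Recurrent autoencoder: LSTM encoder -> fixed-size code -> LSTM decoder.
inputs = keras.Input(shape=(SEQ_LEN, 1))
code = layers.LSTM(8)(inputs)                        # learned feature vector
x = layers.RepeatVector(SEQ_LEN)(code)
x = layers.LSTM(8, return_sequences=True)(x)
outputs = layers.TimeDistributed(layers.Dense(1))(x)
autoencoder = keras.Model(inputs, outputs)
encoder = keras.Model(inputs, code)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=20, batch_size=64, verbose=0)

# Cluster the extracted features instead of the raw sequences.
features = encoder.predict(X, verbose=0)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
```

In this sketch, a permutation produced by only a few swaps keeps most elements near their original positions; that residual structure is the kind of pattern the autoencoder features are meant to expose to k-means.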

Metadata
Author:Fabian Fries, Ernst Georg Haffner
URN:urn:nbn:de:hbz:tr5-746
Document Type:Working Paper
Language:English
Date of OPUS upload:2022/08/11
Date of first Publication:2022/08/15
Publishing University:Hochschule Trier
Release Date:2022/08/15
Tag:anomalies in permutations; recurrent neural networks
GND Keyword:Machine learning; Pattern recognition; Data set; Permutation; Algorithm; k-means algorithm; Neural network
Page Number:4
First Page:1
Last Page:4
Departments:FB Technik
Dewey Decimal Classification:0 Computer science, information & general works / 00 Computer science, knowledge & systems / 000 Computer science, information & general works
Licence (German):Creative Commons - CC BY - Attribution 4.0 International