To see the other types of publications on this topic, follow the link: Anomaly.

Dissertations / Theses on the topic 'Anomaly'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Anomaly.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ståhl, Björn. "Online Anomaly Detection." Thesis, Blekinge Tekniska Högskola, Avdelningen för interaktion och systemdesign, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2825.

Full text
Abstract:
The role of software-intensive systems has shifted: the traditional one of fulfilling isolated computational tasks is gradually being replaced by larger collaborative societies with interaction as the primary resource. This can be observed in anything from logistics to rescue operations and resource management: numerous services with key roles in the modern infrastructure. In the light of this new collaborative order, it is imperative that the tools (compilers, debuggers, profilers) and methods (requirements, design, implementation, testing) that supported traditional software-engineering values also adjust and extend towards those nurtured by the online instrumentation of software-intensive systems; that is, to help avoid situations where limitations in technology and methodology would prevent us from ascertaining the well-being and security of systems that assist our very lives. Coupled with most perspectives on software development and maintenance is one well-established member of, and complement to, the development process: debugging, the art of discovering, localising, and correcting undesirable behaviours in software-intensive systems, the need for which tends to far outlive development itself. Debugging is currently performed on the premise that the developer operates from a god-like perspective, one that implies access to and knowledge of source code, along with minute control over execution properties. However, both the quality and the accessibility of such information steadily decline with time, as requirements, implementations, hardware components and their associated developers all fall behind their continuously evolving surroundings. In this thesis, it is argued that the current practice of software debugging is insufficient, and as a precursory action a technical platform is introduced that is suitable for experimenting with future methods of online debugging, maintenance and analysis.
An initial implementation of this platform is then used to experiment with a simple method targeting online observation of software behaviour.
APA, Harvard, Vancouver, ISO, and other styles
2

Sutton, Patrick James. "The dimensional-reduction anomaly." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ59681.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tran, Thi Minh Hanh. "Anomaly detection in video." Thesis, University of Leeds, 2018. http://etheses.whiterose.ac.uk/22443/.

Full text
Abstract:
Anomaly detection is an area of video analysis of great importance in automated surveillance. Although it has been extensively studied, there has been little work on using deep convolutional neural networks to learn spatio-temporal feature representations. In this thesis we present novel approaches for learning motion features and modelling normal spatio-temporal dynamics for anomaly detection. The contributions are divided into two main chapters. The first introduces a method that uses a convolutional autoencoder to learn motion features from foreground optical-flow patches. The autoencoder is coupled with a spatial sparsity constraint, known as Winner-Take-All, to learn shift-invariant and generic flow features. This method avoids the hand-crafted feature representations used in state-of-the-art methods. Moreover, to capture variations in the scale of motion patterns as an object moves in depth through the scene, we also divide the image plane into regions and learn a separate normality model in each region. We compare the methods with state-of-the-art approaches on two datasets and demonstrate improved performance. The second main chapter presents an end-to-end method that learns normal spatio-temporal dynamics from video volumes using a sequence-to-sequence encoder-decoder for prediction and reconstruction. This work is based on the intuition that the encoder-decoder learns to estimate normal sequences in a training set with low error, and therefore estimates an abnormal sequence with high error. The error between the network's output and the target is used to classify a video volume as normal or abnormal. In addition to reconstruction error, we also use prediction error for anomaly detection. We evaluate the second method on three datasets. The prediction models show performance comparable with state-of-the-art methods. In comparison with the first proposed method, performance is improved on one dataset, and running time is significantly faster.
APA, Harvard, Vancouver, ISO, and other styles
4

Barone, Joshua M. "Automated Timeline Anomaly Detection." ScholarWorks@UNO, 2013. http://scholarworks.uno.edu/td/1609.

Full text
Abstract:
Digital forensics is the practice of trained investigators gathering and analyzing evidence from digital devices such as computers and smartphones. On these devices, it is possible to change the system time for a purpose other than what is intended. Currently there are no documented techniques to determine when this occurs. This research seeks to prove out a technique for determining when the time has been changed on a forensic disk image by analyzing the log files found on the image. Out of this research a tool was created to perform this analysis in an automated fashion. This tool is TADpole, a command-line program that analyzes the log files on a disk image and determines whether a timeline anomaly has occurred.
APA, Harvard, Vancouver, ISO, and other styles
5

Samuelsson, Jonas. "Anomaly Detection in ConsoleLogs." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-314514.

Full text
Abstract:
The overall purpose of this project was to find anomalies in unstructured console logs. Logs were generated from system components in a contact center, specifically components in an email chain. An anomaly is behaviour that can be described as abnormal. Such behaviour was found by creating features of the data that could later be analyzed by a data-mining model. The mining model involved the use of normalisation methods together with different distance functions. The algorithms used to generate results on the prepared data were DBSCAN, Local Outlier Factor, and k-NN Global Anomaly Score. Every algorithm was combined with two different normalisation techniques, namely Min-Max and Z-transformation normalisation. The six different experiments yielded three data points that could be considered anomalies. Further inspection of the data showed that the anomalies could be divided into two different types: system-related or user-behaviour-related. Two out of three algorithms gave an anomaly score to a data point, whereas the third gave a binary anomaly value. All six experiments in this project had a common denominator: two data points could be classified as anomalies in all six experiments.
APA, Harvard, Vancouver, ISO, and other styles
6

Das, Mahashweta. "Spatio-Temporal Anomaly Detection." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1261540196.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mazel, Johan. "Unsupervised network anomaly detection." Thesis, Toulouse, INSA, 2011. http://www.theses.fr/2011ISAT0024/document.

Full text
Abstract:
Anomaly detection has become a vital component of any network in today's Internet. Ranging from non-malicious unexpected events such as flash crowds and failures, to network attacks such as denials-of-service and network scans, network traffic anomalies can have serious detrimental effects on the performance and integrity of the network. The continuous emergence of new anomalies and attacks creates a continuous challenge to cope with events that put the network integrity at risk. Moreover, the inherently polymorphic nature of traffic, caused among other things by a highly changing protocol landscape, complicates an anomaly detection system's task. In fact, most network anomaly detection systems proposed so far employ knowledge-dependent techniques, using either misuse-detection signature-based methods or anomaly detection relying on supervised-learning techniques. However, both approaches present major limitations: the former fails to detect and characterize unknown anomalies (leaving the network unprotected for long periods), and the latter requires training over labeled normal traffic, a difficult and expensive stage that needs to be updated on a regular basis to follow network traffic evolution. Such limitations impose a serious bottleneck on the problem presented above. We introduce an unsupervised approach to detect and characterize network anomalies, without relying on signatures, statistical training, or labeled traffic, which represents a significant step towards the autonomy of networks. Unsupervised detection is accomplished by means of robust data-clustering techniques, combining Sub-Space clustering with Evidence Accumulation or Inter-Clustering Results Association, to blindly identify anomalies in traffic flows. The results of several unsupervised detections are also correlated to improve detection robustness.
The correlation results are further used, along with other anomaly characteristics, to build an anomaly hierarchy in terms of dangerousness. Characterization is then achieved by building efficient filtering rules to describe a detected anomaly. The detection and characterization performances, and their sensitivities to parameters, are evaluated over a substantial subset of the MAWI repository, which contains real network traffic traces. Our work shows that unsupervised learning techniques allow anomaly detection systems to isolate anomalous traffic without any previous knowledge. We think that this contribution constitutes a great step towards autonomous network anomaly detection. This PhD thesis has been funded through the ECODE project by the European Commission under Framework Programme 7. The goal of this project is to develop, implement, and experimentally validate a cognitive routing system that meets the challenges experienced by the Internet in terms of manageability and security, availability and accountability, as well as routing-system scalability and quality. The use case concerned inside the ECODE project is network anomaly detection.
APA, Harvard, Vancouver, ISO, and other styles
8

Leto, Kevin. "Anomaly detection in HPC systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
In the supercomputing domain, anomaly detection is an excellent strategy for keeping system performance (availability and reliability) high, making it possible to prevent failures and to adapt maintenance activity to the health of the system itself. The supercomputer examined in this research is called MARCONI and belongs to CINECA, an Italian inter-university consortium based in Bologna. The data extracted for the analysis refer in particular to node "r183c12s04", but to prove the generality of the approach further tests were also run on different nodes (albeit on a smaller scale). The approach exploits the potential of machine learning, combining unsupervised and supervised training. An autoencoder is trained in an unsupervised manner to obtain a compressed representation (dimensionality reduction) of the raw data extracted from a node of the system. The compressed data are then fed to a three-layer neural network (input, hidden, output) to perform a supervised classification between normal and anomalous states. Our approach proved very promising, reaching accuracy, precision, recall and F1-score levels all above 97% for the main node, while lower but still very positive levels (above 83% on average) were found for the other nodes considered. The imperfect performance on the other nodes is certainly caused by the low number of anomalous examples present in the reference datasets.
APA, Harvard, Vancouver, ISO, and other styles
9

Martin, Xiumin. "Accrual persistence and accrual anomaly." Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/4824.

Full text
Abstract:
Thesis (Ph. D.)--University of Missouri-Columbia, 2007.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on September 28, 2007). Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
10

Nguyen, Quyen Do. "Anomaly handling in visual analytics." Worcester, Mass. : Worcester Polytechnic Institute, 2008. http://www.wpi.edu/Pubs/ETD/Available/etd-122307-132119/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Nordlöf, Jonas. "Anomaly detection in videosurveillance feeds." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-105521.

Full text
Abstract:
Traditional passive surveillance is proving ineffective, as the number of available cameras often exceeds an operator's ability to monitor them. Furthermore, monitoring surveillance cameras requires a focus that operators can only uphold for a short amount of time. Algorithms for automatic detection of anomalies in video surveillance feeds are therefore constructed and presented in this thesis, using hidden Markov models (HMMs) and a Gaussian mixture probability hypothesis density (GM-PHD) filter. Four different models are created and evaluated using the PETS2009 dataset and a simulated dataset from FOI. The first three models capture the normal behaviour of crowds in order to detect anomalies. The first uses a single HMM to model all observed behaviours. The second uses two different HMMs, created by manually splitting the observations in the training set into two parts corresponding to different behaviours; this model does not perform as well as the first. The third model is obtained by clustering the observations in the training dataset, using dynamic time warping (DTW) and z-scores, and creating a separate HMM for each cluster; this model is regarded as the most efficient anomaly detector. The last model uses information from all crowds in the surveilled scene but does not perform well enough to be used to detect anomalies.
APA, Harvard, Vancouver, ISO, and other styles
12

Nguyen, Quyen Do. "Anomaly Handling in Visual Analytics." Digital WPI, 2007. https://digitalcommons.wpi.edu/etd-theses/1144.

Full text
Abstract:
Visual analytics is an emerging field which uses visual techniques to interact with users in the analytical reasoning process. Users can choose the most appropriate representation that conveys the important content of their data by acting upon different visual displays. The data itself has many features of interest, including clusters, trends (commonalities) and anomalies. Most visualization techniques currently focus on the discovery of trends and other relations, where uncommon phenomena are treated as outliers and are either removed from the datasets or de-emphasized on the visual displays. Much less work has been done on the visual analysis of outliers, or anomalies. In this thesis, I will introduce a method to identify different levels of "outlierness" by using interactive selection and other approaches to process outliers after detection. In one approach, the values of these outliers will be estimated from the values of their k-nearest neighbors and replaced to increase the consistency of the whole dataset. Other approaches will leave users with the choice of removing the outliers from the graphs or highlighting the unusual patterns on the graphs if points of interest lie in these anomalous regions. I will develop and test these anomaly handling methods within the XMDV Tool.
APA, Harvard, Vancouver, ISO, and other styles
13

Turcotte, Melissa. "Anomaly detection in dynamic networks." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/24673.

Full text
Abstract:
Anomaly detection in dynamic communication networks has many important security applications. These networks can be extremely large and so detecting any changes in their structure can be computationally challenging; hence, computationally fast, parallelisable methods for monitoring the network are paramount. For this reason the methods presented here use independent node and edge based models to detect locally anomalous substructures within communication networks. As a first stage, the aim is to detect changes in the data streams arising from node or edge communications. Throughout the thesis simple, conjugate Bayesian models for counting processes are used to model these data streams. A second stage of analysis can then be performed on a much reduced subset of the network comprising nodes and edges which have been identified as potentially anomalous in the first stage. The first method assumes communications in a network arise from an inhomogeneous Poisson process with piecewise constant intensity. Anomaly detection is then treated as a changepoint problem on the intensities. The changepoint model is extended to incorporate seasonal behaviour inherent in communication networks. This seasonal behaviour is also viewed as a changepoint problem acting on a piecewise constant Poisson process. In a static time frame, inference is made on this extended model via a Gibbs sampling strategy. In a sequential time frame, where the data arrive as a stream, a novel, fast Sequential Monte Carlo (SMC) algorithm is introduced to sample from the sequence of posterior distributions of the changepoints over time. A second method is considered for monitoring communications in a large scale computer network. The usage patterns in these types of networks are very bursty in nature and don't fit a Poisson process model. 
For tractable inference, discrete time models are considered, where the data are aggregated into discrete time periods and probability models are fitted to the communication counts. In a sequential analysis, anomalous behaviour is then identified from outlying behaviour with respect to the fitted predictive probability models. Seasonality is again incorporated into the model and is treated as a changepoint model on the transition probabilities of a discrete time Markov process. Second stage analytics are then developed which combine anomalous edges to identify anomalous substructures in the network.
APA, Harvard, Vancouver, ISO, and other styles
14

Zhang, Dongyang. "PRAAG Algorithm in Anomaly Detection." Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-194193.

Full text
Abstract:
Anomaly detection has been one of the most important applications of data mining, widely applied in industries such as finance, medicine, telecommunications, and even manufacturing. In many scenarios, data arrive as a large stream, so it is preferable to analyze the data without storing all of them. In other words, the key is to improve the space efficiency of algorithms, for example by extracting a statistical summary of the data. In this thesis, we study the PRAAG algorithm, a collective anomaly detection algorithm based on quantile features of the data, so its space efficiency essentially depends on that of the quantile algorithm. Firstly, the thesis investigates quantile summary algorithms that provide quantile information about a dataset without storing all the data points. Then, we implement the selected algorithms and run experiments to test their performance. Finally, the report focuses on experimenting with PRAAG to understand how the parameters affect performance, and compares it with other anomaly detection algorithms. In conclusion, the GK algorithm provides a more space-efficient way to estimate quantiles than simply storing all data points. Also, PRAAG is effective in terms of True Prediction Rate (TPR) and False Prediction Rate (FPR) compared with a baseline algorithm, CUSUM. In addition, there are many possible improvements to be investigated, such as parallelizing the algorithm.
APA, Harvard, Vancouver, ISO, and other styles
15

Ohlsson, Jonathan. "Anomaly Detection in Microservice Infrastructures." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231993.

Full text
Abstract:
Anomaly detection in time series is a broad field with many application areas, and has been researched for many years. In recent years the need for monitoring and DevOps has increased, partly due to the increased usage of microservice infrastructures. Applying time-series anomaly detection to the metrics emitted by these microservices can yield new insights into system health and could enable detecting anomalous conditions before they escalate into a full incident. This thesis investigates how two proposed anomaly detectors, one based on the RPCA algorithm and the other on the HTM neural network, perform on metrics emitted by a microservice infrastructure, with the goal of enhancing infrastructure monitoring. The detectors are evaluated against a random sample of metrics from a digital rights management company's microservice infrastructure, as well as the open-source NAB dataset. It is illustrated that both algorithms are able to detect every known incident in the company metrics tested. Their ability to detect anomalies is shown to be dependent on the defined threshold value for what qualifies as an outlier. The RPCA detector proved better at detecting anomalies on the company microservice metrics; however, the HTM detector performed better on the NAB dataset. Findings also highlight the difficulty of manually annotating anomalies even with domain knowledge, an issue that proved true both for the dataset created for this project and for the NAB dataset. The thesis concludes that the proposed detectors possess different abilities, each with its respective trade-offs. Although they are similar in detection accuracy and false-positive rates, each has different inherent suitability for tasks such as continuous monitoring or ease of deployment in an existing monitoring setup.
APA, Harvard, Vancouver, ISO, and other styles
16

Aradhye, Hrishikesh Balkrishna. "Anomaly Detection Using Multiscale Methods." The Ohio State University, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=osu989701610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Ioannidou, Polyxeni. "Anomaly Detection in Computer Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-295762.

Full text
Abstract:
In this degree project, we study the anomaly detection problem in log files of computer networks. In particular, we try to find an efficient way to detect anomalies in our data, which consist of different logging messages from different systems in CERN's network for the LHC-b experiment. The contributions of the thesis are twofold: 1) the thesis serves as a survey of how we can detect threats and errors in systems that log a huge number of messages into the databases of a computer network. 2) Scientists in the LHC-b experiment use Elasticsearch, a well-regarded open-source search engine and logging platform that provides log monitoring as well as data-stream processing. Moreover, Elasticsearch provides a machine learning feature that automatically models the behaviour of the data, learning trends and periodicity to identify anomalies. As an alternative to Elasticsearch's machine learning feature, we build, test and evaluate some machine learning models that the scientists of the experiment can use for the same purpose. We further provide results showing that our models generalize well to unseen log messages in the database.
APA, Harvard, Vancouver, ISO, and other styles
18

Patton, Michael Dean. "Seedlet Technology for anomaly detection." Diss., Mississippi State : Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-08022002-142101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Alkadi, Alaa. "Anomaly Detection in RFID Networks." UNF Digital Commons, 2017. https://digitalcommons.unf.edu/etd/768.

Full text
Abstract:
Available security standards for RFID networks (e.g. ISO/IEC 29167) are designed to secure individual tag-reader sessions and do not protect against active attacks that could also compromise the system as a whole (e.g. tag cloning or replay attacks). Proper traffic characterization models of the communication within an RFID network can lead to better understanding of operation under “normal” system state conditions and can consequently help identify security breaches not addressed by current standards. This study of RFID traffic characterization considers two piecewise-constant data smoothing techniques, namely Bayesian blocks and Knuth’s algorithms, over time-tagged events and compares them in the context of rate-based anomaly detection. This was accomplished using data from experimental RFID readings and comparing (1) the event counts versus time if using the smoothed curves versus empirical histograms of the raw data and (2) the threshold-dependent alert-rates based on inter-arrival times obtained if using the smoothed curves versus that of the raw data itself. Results indicate that both algorithms adequately model RFID traffic in which inter-event time statistics are stationary but that Bayesian blocks become superior for traffic in which such statistics experience abrupt changes.
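The rate-based detection scheme this abstract describes can be illustrated with a minimal sketch: a constant baseline rate stands in for the piecewise-constant smoothing (Bayesian blocks or Knuth's algorithm), and alerts fire on improbably short inter-arrival times. All timestamps and the threshold below are illustrative, not experimental values from the thesis.

```python
# Minimal rate-based anomaly detector over time-tagged events.
# A single baseline rate estimated from training data is a crude
# stand-in for the Bayesian blocks / Knuth's piecewise-constant
# smoothing; alerts fire when an inter-arrival gap is improbably
# short for that rate. All numbers are illustrative.

def inter_arrivals(timestamps):
    """Differences between consecutive event times."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def baseline_rate(timestamps):
    """Mean event rate (events per unit time) over the training data."""
    span = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / span

def alerts(timestamps, rate, threshold):
    """Flag gaps shorter than threshold/rate (a burst of reads)."""
    cutoff = threshold / rate
    return [i for i, gap in enumerate(inter_arrivals(timestamps)) if gap < cutoff]

# Normal traffic: roughly one read per second, then a sudden burst.
normal = [0.0, 1.1, 2.0, 3.2, 4.1, 5.0]
burst = [6.0, 6.01, 6.02, 7.0]
rate = baseline_rate(normal)                    # ~1 event per second
idx = alerts(normal + burst, rate, threshold=0.1)
# idx points at the two gaps inside the burst
```

A proper implementation would replace the single baseline rate with per-block rates from the fitted piecewise-constant model, so the alert cutoff adapts when the traffic statistics change abruptly.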
APA, Harvard, Vancouver, ISO, and other styles
20

Tuma, Soraya Ivonne Lozada. "Inversão por etapas de anomalias magnéticas bi-dimensionais." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/14/14132/tde-12062006-173944/.

Full text
Abstract:
This work presents a three-step magnetic inversion procedure in which quantities invariant with respect to the magnetic source are sequentially inverted to recover i) the geometry of the source in the substrate, ii) its magnetization intensity and iii) the inclination of the source magnetization. The first inverted quantity (called the geometrical function) is obtained as the ratio between the intensity of the gradient of the magnetic anomaly and the intensity of the anomalous magnetic field. For homogeneous sources, the geometrical function depends only on the source geometry, which allows the shape of the body to be reconstructed using arbitrary values for the magnetization. In the second step, the shape of the source is fixed and the magnetization intensity is estimated by fitting the modulus of the gradient of the magnetic anomaly, a quantity invariant with the magnetization direction and equivalent to the amplitude of the analytic signal. In the last step, the shape of the source and the magnetization intensity are fixed and the inclination of the magnetization is determined by fitting the magnetic anomaly. Besides recovering the shape and magnetization of homogeneous sources, this technique allows one, in some cases, to verify whether the magnetic sources are homogeneous. This is possible because the geometrical function of heterogeneous sources can be fitted by a homogeneous model, but the model so obtained fits neither the amplitude of the analytic signal nor the magnetic anomaly. This criterion appears effective in recognizing strongly heterogeneous sources. The stepwise inversion method is tested in numerical computer experiments and used to interpret a magnetic anomaly generated by intrusive basic rocks of the Paraná Basin.
This work presents a three-step magnetic inversion procedure in which invariant quantities related to source parameters are sequentially inverted to provide i) the cross-section of two-dimensional sources, ii) the intensity of the source magnetization, and iii) the inclination of the source magnetization. The first inverted quantity (called the geometrical function) is obtained as the ratio of the intensity gradient of the total-field anomaly to the intensity of the anomalous vector field. For homogeneous sources, the geometrical function depends only on the source geometry, thus allowing shape reconstruction using arbitrary values for the source magnetization. In the second step, the source shape is fixed and the magnetization intensity is estimated by fitting the intensity gradient of the total-field anomaly, a quantity invariant with magnetization direction and equivalent to the amplitude of the analytic signal. In the last step, the source shape and magnetization intensity are fixed and the magnetization inclination is determined by fitting the magnetic anomaly. Besides furnishing the shape and magnetization of homogeneous two-dimensional sources, this technique makes it possible in some cases to check whether the causative sources are homogeneous. This is possible because the geometrical function of inhomogeneous sources can be fitted by a homogeneous model, but the model thus obtained fits neither the amplitude of the analytic signal nor the magnetic anomaly itself. This criterion seems effective in recognizing strongly inhomogeneous sources. The proposed technique is tested with numerical experiments, and used to model a magnetic anomaly from intrusive basic rocks of the Paraná Basin, Brazil.
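As a toy illustration of why the geometrical function is invariant to magnetization intensity, the ratio can be computed along a one-dimensional profile with finite differences. The thesis works with two-dimensional sources and the full vector field; the profile values below are made up solely to show the cancellation of a constant scale factor.

```python
# Simplified 1-D illustration of the "geometrical function": the
# ratio between the gradient intensity of the anomaly and the
# intensity of the anomalous field itself. Scaling the source
# magnetization scales both numerator and denominator, so the
# ratio depends only on the geometry. Profile values are invented.

def gradient(profile, dx):
    """Central finite differences (one-sided at the ends)."""
    g = [(profile[1] - profile[0]) / dx]
    for i in range(1, len(profile) - 1):
        g.append((profile[i + 1] - profile[i - 1]) / (2 * dx))
    g.append((profile[-1] - profile[-2]) / dx)
    return g

def geometrical_function(profile, dx, eps=1e-12):
    """|gradient| / |field| at each sample (eps avoids division by zero)."""
    grads = gradient(profile, dx)
    return [abs(g) / (abs(t) + eps) for g, t in zip(grads, profile)]

anomaly = [1.0, 2.0, 4.0, 2.0, 1.0]    # illustrative anomaly profile
scaled = [10 * t for t in anomaly]      # same source, 10x magnetization
gf1 = geometrical_function(anomaly, dx=1.0)
gf2 = geometrical_function(scaled, dx=1.0)
# gf1 and gf2 agree: the ratio does not depend on magnetization intensity
```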
APA, Harvard, Vancouver, ISO, and other styles
21

Aussel, Nicolas. "Real-time anomaly detection with in-flight data : streaming anomaly detection with heterogeneous communicating agents." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLL007/document.

Full text
Abstract:
With the increase in the number of sensors and actuators in aircraft and the development of reliable data links between aircraft and the ground, it has become possible to improve the safety and reliability of on-board systems by applying real-time analysis techniques. However, given the limited availability of on-board computing resources and the high cost of data links, current architectural solutions cannot fully exploit all the available resources, limiting their accuracy. Our goal is to propose a distributed failure-prediction algorithm that can be executed both on board the aircraft and in a ground station while respecting a communication budget. In this approach, the ground station has fast computing resources and historical data, and the aircraft has limited computing resources and the current flight data. In this thesis, we study the specific characteristics of aeronautical data and the existing methods for producing failure predictions from them, and we propose a solution to the stated problem. Our contribution is detailed in three parts. First, we study the rare-event prediction problem created by the high reliability of aeronautical systems. Many classification learning methods rely on balanced datasets. Several approaches exist to correct dataset imbalance, and we study their effectiveness on extremely imbalanced datasets. Second, we study the problem of textual log analysis, since many aeronautical systems produce not easily interpreted labels or numerical values but textual log messages.
We study existing methods, based on statistical approaches and on deep learning, for converting textual log messages into a form usable as input to classification learning algorithms. We propose our own method based on natural language processing and show how its performance exceeds that of the other methods on a standard public dataset. Finally, we offer a solution to the stated problem by proposing a new distributed learning algorithm building on two existing learning paradigms, active learning and federated learning. We detail our algorithm and its implementation and provide a comparison of its performance with existing methods.
With the rise of the number of sensors and actuators in an aircraft and the development of reliable data links from the aircraft to the ground, it becomes possible to improve aircraft security and maintainability by applying real-time analysis techniques. However, given the limited availability of on-board computing and the high cost of the data links, current architectural solutions cannot fully leverage all the available resources, limiting their accuracy. Our goal is to provide a distributed algorithm for failure prediction that could be executed both on board the aircraft and on a ground station and that would produce on-board failure predictions in near real time under a communication budget. In this approach, the ground station holds fast computation resources and historical data, and the aircraft holds limited computational resources and the current flight's data. In this thesis, we study the specificities of aeronautical data and the methods that already exist to produce failure predictions from them, and we propose a solution to the problem stated. Our contribution is detailed in three main parts. First, we study the problem of rare-event prediction created by the high reliability of aeronautical systems. Many learning methods for classifiers rely on balanced datasets. Several approaches exist to correct a dataset imbalance, and we study their efficiency on extremely imbalanced datasets. Second, we study the problem of log parsing, as many aeronautical systems do not produce easy-to-classify labels or numerical values but log messages in full text. We study existing methods, based on a statistical approach and on deep learning, to convert full-text log messages into a form usable as input by learning algorithms for classifiers.
We then propose our own method based on natural language processing and show how it outperforms the other approaches on a public benchmark. Last, we offer a solution to the stated problem by proposing a new distributed learning algorithm that relies on two existing learning paradigms: active learning and federated learning. We detail our algorithm and its implementation and provide a comparison of its performance with existing methods.
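The imbalance-correction setting of the first contribution can be sketched with the simplest of the approaches such studies typically evaluate, random oversampling of the minority class. The toy dataset and labels below are illustrative, not the thesis's data or implementation.

```python
# Random oversampling: duplicate minority-class samples until both
# classes are equally represented, one of the simplest corrections
# for imbalanced training sets in rare-event prediction.
# The samples and labels below are illustrative toy values.
import random

def oversample(samples, labels, seed=0):
    """Duplicate minority-class samples until classes are balanced."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        picks = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(picks)
        out_y.extend([y] * target)
    return out_x, out_y

X = [[0.1], [0.2], [0.3], [0.9]]   # three "normal" samples, one "failure"
y = [0, 0, 0, 1]
Xb, yb = oversample(X, y)
# yb now contains as many 1s as 0s
```

On extremely imbalanced data, plain duplication like this risks overfitting to the few minority samples, which is one reason such studies compare it against undersampling and synthetic-sample methods.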
APA, Harvard, Vancouver, ISO, and other styles
22

Di, Felice Marco. "Unsupervised anomaly detection in HPC systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
This study is based on the analysis of unsupervised techniques applied to the detection of anomalous states in HPC systems, complex computers capable of reaching performance on the order of PetaFLOPS. In the HPC world, an anomaly is a particular state that induces a change in performance with respect to the normal operation of the system. Anomalies can be of different natures, such as a fault affecting a component, an incorrect configuration, or an application entering an unexpected state and causing a premature termination of processes. The datasets used in this project were collected from D.A.V.I.D.E., a real HPC system located at CINECA in Casalecchio di Reno, or were generated by simulating the state of a single node of a virtual HPC system analogous to the CINECA one, modelled according to specific non-linear functions but free of noise. This study proposes a novel, unsupervised approach, never applied before to perform anomaly detection in HPC systems, and focuses on identifying the possible advantages induced by the use of these techniques in this field. Several cases are implemented and shown that produced interesting groupings through combinations of Variational Autoencoders, a particular type of probabilistic autoencoder able to preserve the variance of the input set in its latent space, with clustering algorithms such as K-Means, DBSCAN, Gaussian Mixture and others already known in the literature.
APA, Harvard, Vancouver, ISO, and other styles
23

Brauckhoff, Daniela. "Network traffic anomaly detection and evaluation." Aachen Shaker, 2010. http://d-nb.info/1001177746/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Brax, Christoffer. "Anomaly detection in the surveillance domain." Doctoral thesis, Örebro universitet, Akademin för naturvetenskap och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-16373.

Full text
Abstract:
In the post-September 11 era, the demand for security has increased in virtually all parts of society. The need for increased security originates from the emergence of new threats which differ from the traditional ones in such a way that they cannot be easily defined and are sometimes unknown or hidden in the "noise" of daily life. When the threats are known and definable, methods based on situation recognition can be used to find them. However, when the threats are hard or impossible to define, other approaches must be used. One such approach is data-driven anomaly detection, where a model of normalcy is built and used to find anomalies, that is, things that do not fit the normal model. Anomaly detection has been identified as one of many enabling technologies for increasing security in society. In this thesis, the problem of how to detect anomalies in the surveillance domain is studied. This is done by a characterisation of the surveillance domain and a literature review that identifies a number of weaknesses in previous anomaly detection methods used in the surveillance domain. Examples of identified weaknesses include the handling of contextual information, the inclusion of expert knowledge and the handling of joint attributes. Based on the findings from this study, a new anomaly detection method is proposed. The proposed method is evaluated with respect to detection performance and computational cost on a number of datasets, recorded from real-world sensors, in different application areas of the surveillance domain. The method is also compared to two other commonly used anomaly detection methods. Finally, the method is evaluated on a dataset with anomalies developed together with maritime subject matter experts. The conclusion of the thesis is that the proposed method has a number of strengths compared to previous methods and is suitable for use in operative maritime command and control systems.
Christoffer Brax also does research at the University of Skövde, Informatics Research Centre
APA, Harvard, Vancouver, ISO, and other styles
25

Riveiro, María José. "Visual analytics for maritime anomaly detection." Doctoral thesis, Örebro universitet, Akademin för naturvetenskap och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-12783.

Full text
Abstract:
The surveillance of large sea areas typically involves  the analysis of huge quantities of heterogeneous data.  In order to support the operator while monitoring maritime traffic, the identification of anomalous behavior or situations that might need further investigation may reduce operators' cognitive load. While it is worth acknowledging that existing mining applications support the identification of anomalies, autonomous anomaly detection systems are rarely used for maritime surveillance. Anomaly detection is normally a complex task that can hardly be solved by using purely visual or purely computational methods. This thesis suggests and investigates the adoption of visual analytics principles to support the detection of anomalous vessel behavior in maritime traffic data. This adoption involves studying the analytical reasoning process that needs to be supported,  using combined automatic and visualization approaches to support such process, and evaluating such integration. The analysis of data gathered during interviews and participant observations at various maritime control centers and the inspection of video recordings of real anomalous incidents lead to a characterization of the analytical reasoning process that operators go through when monitoring traffic. These results are complemented with a literature review of anomaly detection techniques applied to sea traffic. A particular statistical-based technique is implemented, tested, and embedded in a proof-of-concept prototype that allows user involvement in the detection process. The quantitative evaluation carried out by employing the prototype reveals that participants who used the visualization of normal behavioral models outperformed the group without aid. 
The qualitative assessment shows that  domain experts are positive towards providing automatic support and the visualization of normal behavioral models, since these aids may reduce reaction time, as well as increase trust and comprehensibility in the system. Based on the lessons learned, this thesis provides recommendations for designers and developers of maritime control and anomaly detection systems, as well as guidelines for carrying out evaluations of visual analytics environments.
Maria Riveiro is also affiliated with the Informatics Research Centre and the Information Fusion Research Program, University of Skövde
APA, Harvard, Vancouver, ISO, and other styles
26

Satam, Shalaka Chittaranjan. "Bluetooth Anomaly Based Intrusion Detection System." Thesis, The University of Arizona, 2017. http://hdl.handle.net/10150/625890.

Full text
Abstract:
Bluetooth is a wireless technology that is used to communicate over personal area networks (PANs). With the advent of the Internet of Things (IoT), Bluetooth is the technology of choice for small and short-range communication networks. For instance, most modern cars can connect to mobile devices using Bluetooth. This ubiquitous presence of Bluetooth makes it important that it is secure and its data is protected. Previous work has shown that Bluetooth is vulnerable to attacks like the man-in-the-middle attack, the Denial of Service (DoS) attack, etc. Moreover, all Bluetooth devices are mobile devices, and thus power utilization is an important performance parameter. An attacker can easily increase the power consumption of a mobile device by launching an attack vector against that device. As part of this thesis we present an anomaly-based intrusion detection system for Bluetooth networks, Bluetooth IDS (BIDS). BIDS uses an N-gram-based approach to characterize the normal behavior of the Bluetooth protocol. Machine learning algorithms were used to build the normal behavior models for the protocol during the training phase of the system, thus allowing classification of observed Bluetooth events as normal or abnormal during the operational phase of the system. The experimental results showed that the models developed in this thesis had high accuracy, with a precision of 99.2% and a recall of 99.5%.
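The N-gram modelling of normal protocol behaviour described above can be sketched in a few lines: the set of n-grams seen in normal event sequences during training serves as the normal model, and a new sequence is flagged when it contains n-grams outside that set. The event names below are illustrative, not actual Bluetooth protocol primitives.

```python
# Minimal N-gram anomaly detector in the spirit of BIDS: n-grams
# observed in normal training sequences form the model of normal
# behaviour; a sequence containing unseen n-grams is abnormal.
# Event names are invented for illustration.

def ngrams(sequence, n):
    """All contiguous n-grams of a sequence, as a set of tuples."""
    return {tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1)}

def train(normal_sequences, n=2):
    """Union of n-grams over all normal training sequences."""
    model = set()
    for seq in normal_sequences:
        model |= ngrams(seq, n)
    return model

def is_abnormal(sequence, model, n=2):
    """Abnormal if any observed n-gram falls outside the normal model."""
    return bool(ngrams(sequence, n) - model)

normal = [["inquiry", "page", "connect", "data", "disconnect"],
          ["inquiry", "connect", "data", "data", "disconnect"]]
model = train(normal)
ok = is_abnormal(["inquiry", "page", "connect", "data", "disconnect"], model)
bad = is_abnormal(["connect", "connect", "connect", "connect"], model)
# ok is False (all bigrams seen in training); bad is True
```

The thesis builds statistical models over such n-gram features with machine learning rather than using a hard set-membership rule; this sketch only shows the n-gram characterization itself.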
APA, Harvard, Vancouver, ISO, and other styles
27

Forstén, Andreas. "Unsupervised Anomaly Detection in Receipt Data." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215161.

Full text
Abstract:
With the progress of data handling methods and computing power comes the possibility of automating tasks that are not necessarily handled by humans. This study was done in cooperation with a company that digitalizes receipts for companies. We investigate the possibility of automating the task of finding anomalous receipt data, which could automate the work of receipt auditors. We study both anomalous user behaviour and individual receipts. The results indicate that automation is possible, which may reduce the necessity of human inspection of receipts.
With the progress made in data handling and computing power comes the possibility of automating tasks that are not necessarily performed by humans. This study was done in cooperation with a company that digitalizes companies' receipts. We investigate the possibility of automating the search for anomalous receipt data, which could relieve auditors. We study both anomalous user behaviour and individual receipts. The results indicate that automation is possible, which may reduce the need for human inspection of receipts.
APA, Harvard, Vancouver, ISO, and other styles
28

Tjhai, Gina C. "Anomaly-based correlation of IDS alarms." Thesis, University of Plymouth, 2011. http://hdl.handle.net/10026.1/308.

Full text
Abstract:
An Intrusion Detection System (IDS) is one of the major techniques for securing information systems and keeping pace with current and potential threats and vulnerabilities in computing systems. It is an indisputable fact that the art of detecting intrusions is still far from perfect, and IDSs tend to generate a large number of false alarms. Hence humans inevitably have to validate those alarms before any action can be taken. As IT infrastructures become larger and more complicated, the number of alarms that need to be reviewed can escalate rapidly, making this task very difficult to manage. The need for an automated correlation and reduction system is therefore very much evident. In addition, alarm correlation is valuable in providing the operators with a more condensed view of potential security issues within the network infrastructure. The thesis embraces a comprehensive evaluation of the problem of false alarms and a proposal for an automated alarm correlation system. A critical analysis of existing alarm correlation systems is presented along with a description of the need for an enhanced correlation system. The study concludes that whilst a large amount of work has been carried out on improving correlation techniques, none of it is perfect. Existing systems either required an extensive level of domain knowledge from human experts to run effectively or were unable to provide high-level information about the false alerts for future tuning. The overall objective of the research has therefore been to establish an alarm correlation framework and system which enables the administrator to effectively group alerts from the same attack instance and subsequently reduce the volume of false alarms without the need for domain knowledge. The achievement of this aim has comprised the proposal of an attribute-based approach, which is used as a foundation to systematically develop an unsupervised two-stage correlation technique.
From this formation, a novel SOM K-Means Alarm Reduction Tool (SMART) architecture has been modelled as the framework from which a time- and attribute-based aggregation technique is offered. The thesis describes the design and features of the proposed architecture, focusing upon the key components forming the underlying architecture, the alert attributes and the way they are processed and applied to correlate alerts. The architecture is strengthened by the development of a statistical tool, which offers a means to perform result and alert analysis and comparison. The main concepts of the novel architecture are validated through the implementation of a prototype system. A series of experiments was conducted to assess the effectiveness of SMART in reducing false alarms. This aimed to prove the viability of implementing the system in a practical environment and to show that the study has provided an appropriate contribution to knowledge in this field.
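The time- and attribute-based aggregation idea can be sketched in simplified form: alerts that share key attributes and fall within a time window collapse into a single correlated group. The SOM and K-Means stages of the actual SMART architecture are not reproduced here, and the alert fields are illustrative.

```python
# Sketch of time- and attribute-based alert aggregation: alerts
# sharing (source, destination, signature) within a time window
# are merged into one group, condensing the operator's view.
# Alert fields and the window length are illustrative.

def aggregate(alerts, window):
    """Group alerts by (src, dst, sig) within `window` seconds."""
    groups = []
    open_groups = {}   # attribute key -> index of the latest open group
    for alert in sorted(alerts, key=lambda a: a["time"]):
        key = (alert["src"], alert["dst"], alert["sig"])
        idx = open_groups.get(key)
        if idx is not None and alert["time"] - groups[idx][-1]["time"] <= window:
            groups[idx].append(alert)
        else:
            open_groups[key] = len(groups)
            groups.append([alert])
    return groups

alerts = [
    {"time": 0,   "src": "a", "dst": "b", "sig": "scan"},
    {"time": 5,   "src": "a", "dst": "b", "sig": "scan"},
    {"time": 6,   "src": "c", "dst": "b", "sig": "scan"},
    {"time": 500, "src": "a", "dst": "b", "sig": "scan"},
]
groups = aggregate(alerts, window=60)
# three groups: the first two alerts merge; the others stand alone
```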
APA, Harvard, Vancouver, ISO, and other styles
29

Cheng, Leon. "Unsupervised topic discovery by anomaly detection." Thesis, Monterey, California: Naval Postgraduate School, 2013. http://hdl.handle.net/10945/37599.

Full text
Abstract:
Approved for public release; distribution is unlimited
With the vast amount of information and public comment available online, it is of increasing interest to understand what is being said and what topics are trending online. Government agencies, for example, want to know what policies concern the public without having to look through thousands of comments manually. Topic detection provides automatic identification of topics in documents based on the information content and enhances many natural language processing tasks, including text summarization and information retrieval. Unsupervised topic detection, however, has always been a difficult task. Methods such as Latent Dirichlet Allocation (LDA) convert documents from word space into topic space (weighted sums over topics), but do not perform any form of classification, nor do they address the relation of generated topics to actual human-level topics. In this thesis we attempt a novel way of unsupervised topic detection and classification by performing LDA and then clustering. We propose variations to the popular K-Means clustering algorithm to optimize the choice of centroids, and we perform experiments using Facebook data and the New York Times (NYT) corpus. Although the results were poor for the Facebook data, our method performed acceptably with the NYT data. The new clustering algorithms also performed slightly and consistently better than the normal K-Means algorithm.
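The second stage of the pipeline described above, clustering documents in topic space, can be sketched with a plain K-Means pass over document-topic vectors. The LDA step and the thesis's centroid-selection variants are omitted, and the topic vectors below are illustrative.

```python
# Sketch of the "LDA then clustering" second stage: documents
# already mapped into topic space (weighted sums over topics) are
# grouped with plain Lloyd's K-Means. The LDA step and the thesis's
# centroid-choice variants are omitted; vectors are illustrative.

def kmeans(points, centroids, iterations=10):
    """Plain Lloyd's algorithm on lists of equal-length vectors."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            best = min(range(len(centroids)),
                       key=lambda i: sum((a - b) ** 2
                                         for a, b in zip(p, centroids[i])))
            clusters[best].append(p)
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Document-topic weights for six documents over two topics.
docs = [[0.9, 0.1], [0.8, 0.2], [0.95, 0.05],
        [0.1, 0.9], [0.2, 0.8], [0.15, 0.85]]
centroids, clusters = kmeans(docs, centroids=[docs[0], docs[3]])
# two clusters of three documents each, one per dominant topic
```

The thesis's contribution concerns precisely the part this sketch hard-codes: how the initial centroids are chosen.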
APA, Harvard, Vancouver, ISO, and other styles
30

Tziakos, Ioannis. "Subspace discovery for video anomaly detection." Thesis, Queen Mary, University of London, 2010. http://qmro.qmul.ac.uk/xmlui/handle/123456789/387.

Full text
Abstract:
In automated video surveillance anomaly detection is a challenging task. We address this task as a novelty detection problem where pattern description is limited and labelling information is available only for a small sample of normal instances. Classification under these conditions is prone to over-fitting. The contribution of this work is to propose a novel video abnormality detection method that does not need object detection and tracking. The method is based on subspace learning to discover a subspace where abnormality detection is easier to perform, without the need of detailed annotation and description of these patterns. The problem is formulated as one-class classification utilising a low dimensional subspace, where a novelty classifier is used to learn normal actions automatically and then to detect abnormal actions from low-level features extracted from a region of interest. The subspace is discovered (using both labelled and unlabelled data) by a locality preserving graph-based algorithm that utilises the Graph Laplacian of a specially designed parameter-less nearest neighbour graph. The methodology compares favourably with alternative subspace learning algorithms (both linear and non-linear) and direct one-class classification schemes commonly used for off-line abnormality detection in synthetic and real data. Based on these findings, the framework is extended to on-line abnormality detection in video sequences, utilising multiple independent detectors deployed over the image frame to learn the local normal patterns and infer abnormality for the complete scene. The method is compared with an alternative linear method to establish advantages and limitations in on-line abnormality detection scenarios. 
Analysis shows that the alternative approach is better suited for cases where the subspace learning is restricted on the labelled samples, while in the presence of additional unlabelled data the proposed approach using graph-based subspace learning is more appropriate.
APA, Harvard, Vancouver, ISO, and other styles
31

Huang, Chengqiang. "Featured anomaly detection methods and applications." Thesis, University of Exeter, 2018. http://hdl.handle.net/10871/34351.

Full text
Abstract:
Anomaly detection is a fundamental research topic that has been widely investigated. From critical industrial systems, e.g., network intrusion detection systems, to people's daily activities, e.g., mobile fraud detection, anomaly detection has become the very first vital resort to protect and secure public and personal property. Although anomaly detection methods have been under consistent development over the years, the explosive growth of data volume and the continued dramatic variation of data patterns pose great challenges for anomaly detection systems and are fuelling a great demand for more intelligent anomaly detection methods with distinct characteristics to cope with various needs. To this end, this thesis starts by presenting a thorough review of existing anomaly detection strategies and methods. The advantages and disadvantages of the strategies and methods are elaborated. Afterwards, four distinctive anomaly detection methods, especially for time series, are proposed in this work, aiming at resolving specific needs of anomaly detection under different scenarios, e.g., enhanced accuracy, interpretable results, and self-evolving models. Experiments are presented and analysed to offer a better understanding of the performance of the methods and their distinct features. To be more specific, the key contents of this thesis are as follows: 1) Support Vector Data Description (SVDD) is investigated as a primary method to achieve accurate anomaly detection. The applicability of SVDD to noisy time series datasets is carefully examined, and it is demonstrated that relaxing the decision boundary of SVDD always results in better accuracy in network time series anomaly detection. A theoretical analysis of the parameter utilised in the model is also presented to ensure the validity of the relaxation of the decision boundary.
2) To support a clear explanation of the detected time series anomalies, i.e., anomaly interpretation, the periodic pattern of time series data is considered as the contextual information to be integrated into SVDD for anomaly detection. The formulation of SVDD with contextual information maintains multiple discriminants which help in distinguishing the root causes of the anomalies. 3) In an attempt to further analyse a dataset for anomaly detection and interpretation, Convex Hull Data Description (CHDD) is developed for realising one-class classification together with data clustering. CHDD approximates the convex hull of a given dataset with the extreme points which constitute a dictionary of data representatives. According to the dictionary, CHDD is capable of representing and clustering all the normal data instances so that anomaly detection is realised with certain interpretation. 4) Besides better anomaly detection accuracy and interpretability, better solutions for anomaly detection over streaming data with evolving patterns are also researched. Under the framework of Reinforcement Learning (RL), a time series anomaly detector that is consistently trained to cope with the evolving patterns is designed. Due to the fact that the anomaly detector is trained with labeled time series, it avoids the cumbersome work of threshold setting and the uncertain definitions of anomalies in time series anomaly detection tasks.
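The effect of relaxing a data-description boundary, as studied in contribution 1), can be illustrated with a much simpler spherical data description standing in for kernel SVDD: taking a radius quantile below the maximum lets noisy training points fall outside the boundary, in the spirit of a soft margin. All values below are illustrative.

```python
# Simplified illustration of relaxing a data-description boundary.
# Full SVDD fits a minimal kernel sphere around the data; here a
# plain sphere around the mean stands in for it, and "relaxing"
# means choosing a radius quantile below the maximum so that noisy
# training points fall outside the boundary. Values are invented.

def fit_sphere(points, quantile=1.0):
    """Center = mean; radius = the given quantile of training distances."""
    n, dim = len(points), len(points[0])
    center = [sum(p[d] for p in points) / n for d in range(dim)]
    dists = sorted(sum((a - b) ** 2 for a, b in zip(p, center)) ** 0.5
                   for p in points)
    radius = dists[min(n - 1, int(quantile * (n - 1)))]
    return center, radius

def is_anomalous(p, center, radius):
    """A point outside the sphere is declared anomalous."""
    return sum((a - b) ** 2 for a, b in zip(p, center)) ** 0.5 > radius

train = [[0.0], [0.1], [0.2], [0.1], [5.0]]          # last point is noise
tight_c, tight_r = fit_sphere(train, quantile=1.0)    # encloses the noise
relaxed_c, relaxed_r = fit_sphere(train, quantile=0.75)
noise_in_tight = is_anomalous([5.0], tight_c, tight_r)        # False
noise_in_relaxed = is_anomalous([5.0], relaxed_c, relaxed_r)  # True
```

In SVDD proper the same effect is obtained through the trade-off parameter that permits some training points to lie outside the sphere, which is the parameter the thesis analyses theoretically.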
APA, Harvard, Vancouver, ISO, and other styles
32

Soares, Nuno Domingues Mateus Pedroso. "The accruals anomaly in the UK." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.505405.

Full text
Abstract:
In this thesis I provide evidence related to the existence, or otherwise, of the accruals anomaly (Sloan, 1996) in the UK stock market. The accruals anomaly is one of the several anomalies relative to the efficient market hypothesis that have been reported in the accounting and finance literature, and that has received wide attention from researchers in order to better understand it and determine if a real anomaly exists.
APA, Harvard, Vancouver, ISO, and other styles
33

Pellissier, Muriel. "Anomaly detection technique for sequential data." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM078/document.

Full text
Abstract:
De nos jours, beaucoup de données peuvent être facilement accessibles. Mais toutes ces données ne sont pas utiles si nous ne savons pas les traiter efficacement et si nous ne savons pas extraire facilement les informations pertinentes à partir d'une grande quantité de données. Les techniques de détection d'anomalies sont utilisées par de nombreux domaines afin de traiter automatiquement les données. Les techniques de détection d'anomalies dépendent du domaine d'application, des données utilisées ainsi que du type d'anomalie à détecter.Pour cette étude nous nous intéressons seulement aux données séquentielles. Une séquence est une liste ordonnée d'objets. Pour de nombreux domaines, il est important de pouvoir identifier les irrégularités contenues dans des données séquentielles comme par exemple les séquences ADN, les commandes d'utilisateur, les transactions bancaires etc.Cette thèse présente une nouvelle approche qui identifie et analyse les irrégularités de données séquentielles. Cette technique de détection d'anomalies peut détecter les anomalies de données séquentielles dont l'ordre des objets dans les séquences est important ainsi que la position des objets dans les séquences. Les séquences sont définies comme anormales si une séquence est presque identique à une séquence qui est fréquente (normale). Les séquences anormales sont donc les séquences qui diffèrent légèrement des séquences qui sont fréquentes dans la base de données.Dans cette thèse nous avons appliqué cette technique à la surveillance maritime, mais cette technique peut être utilisée pour tous les domaines utilisant des données séquentielles. Pour notre application, la surveillance maritime, nous avons utilisé cette technique afin d'identifier les conteneurs suspects. En effet, de nos jours 90% du commerce mondial est transporté par conteneurs maritimes mais seulement 1 à 2% des conteneurs peuvent être physiquement contrôlés. 
Ce faible pourcentage est dû à un coût financier très élevé et au besoin trop important de ressources humaines pour le contrôle physique des conteneurs. De plus, le nombre de conteneurs voyageant par jours dans le monde ne cesse d'augmenter, il est donc nécessaire de développer des outils automatiques afin d'orienter le contrôle fait par les douanes afin d'éviter les activités illégales comme les fraudes, les quotas, les produits illégaux, ainsi que les trafics d'armes et de drogues. Pour identifier les conteneurs suspects nous comparons les trajets des conteneurs de notre base de données avec les trajets des conteneurs dits normaux. Les trajets normaux sont les trajets qui sont fréquents dans notre base de données.Notre technique est divisée en deux parties. La première partie consiste à détecter les séquences qui sont fréquentes dans la base de données. La seconde partie identifie les séquences de la base de données qui diffèrent légèrement des séquences qui sont fréquentes. Afin de définir une séquence comme normale ou anormale, nous calculons une distance entre une séquence qui est fréquente et une séquence aléatoire de la base de données. La distance est calculée avec une méthode qui utilise les différences qualitative et quantitative entre deux séquences
Nowadays, huge quantities of data are easily accessible, but these data are not useful if we do not know how to process them efficiently and how to extract relevant information from them. Anomaly detection techniques are used in many domains to help process data in an automated way. Which technique to use depends on the application domain, the type of data, and the type of anomaly. For this study we are interested only in sequential data. A sequence is an ordered list of items, also called events. Identifying irregularities in sequential data is essential for many application domains such as DNA sequences, system calls, user commands, banking transactions, etc. This thesis presents a new approach for identifying and analyzing irregularities in sequential data. This anomaly detection technique can detect anomalies in sequential data where the order of the items in the sequences matters. Moreover, our technique considers not only the order of the events but also their position within the sequences. A sequence is flagged as anomalous if it is quasi-identical to a usual behavior, that is, if it differs only slightly from a frequent (common) sequence. The differences between two sequences are based on the order of the events and their position in the sequence. In this thesis we applied this technique to maritime surveillance, but it can be used in any other domain involving sequential data. For maritime surveillance, automated tools are needed to facilitate the targeting of suspicious containers performed by customs. Indeed, nowadays 90% of world trade is transported by containers, yet only 1-2% of containers can be physically checked because of the high financial cost and the large human resources needed to control a container.
As the number of containers travelling around the world every day is very large, it is necessary to control the containers in order to avoid illegal activities such as fraud, quota evasion, illegal products, hidden activities, and drug or arms smuggling. For the maritime domain, we can use this technique to identify suspicious containers by comparing the container trips in the data set with itineraries that are known to be normal (common). A container trip, also called an itinerary, is an ordered list of actions performed on a container at specific geographical positions. The different actions are: loading, transshipment, and discharging. For each action performed on a container, we know the container ID and its geographical position (port ID). This technique is divided into two parts. The first part detects the common (most frequent) sequences of the data set. The second part identifies those sequences that are slightly different from the common sequences, using a distance-based method to classify a given sequence as normal or suspicious. The distance is calculated using a method that combines quantitative and qualitative differences between two sequences.
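The two-step scheme described above can be sketched in a few lines. This is an illustrative stand-in: the port codes, the support threshold, and the position-wise distance are our assumptions, not the thesis's exact measure.

```python
# Sketch: (1) find frequent itineraries, (2) flag rare itineraries that are
# slightly different from (but not equal to) a frequent one.
from collections import Counter

def distance(a, b):
    """Count position-wise mismatches, padding the shorter sequence."""
    n = max(len(a), len(b))
    return sum(1 for i in range(n)
               if (a[i] if i < len(a) else None) != (b[i] if i < len(b) else None))

def suspicious(trips, min_support=3, max_dist=1):
    freq = [t for t, c in Counter(trips).items() if c >= min_support]
    flagged = []
    for trip in set(trips):
        if trip in freq:
            continue
        if any(0 < distance(trip, f) <= max_dist for f in freq):
            flagged.append(trip)
    return freq, flagged

# Hypothetical port codes: four identical trips plus one that deviates once.
trips = [("SHA", "SIN", "RTM")] * 4 + [("SHA", "DXB", "RTM")]
freq, flagged = suspicious(trips)
print(flagged)   # [('SHA', 'DXB', 'RTM')] -- one stop differs from a frequent trip
```

A trip identical to a frequent one is normal by construction; only near-misses of common itineraries are reported as suspicious.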
APA, Harvard, Vancouver, ISO, and other styles
34

Udd, Robert. "Anomaly Detection in SCADA Network Traffic." Thesis, Linköpings universitet, Programvara och system, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-122680.

Full text
Abstract:
Critical infrastructure provides us with the most important parts of modern society: electricity, water and transport. To increase efficiency and to meet new demands from customers, remote monitoring and control of the systems is necessary. This opens new ways for an attacker to reach the Supervisory Control And Data Acquisition (SCADA) systems that control and monitor the physical processes involved. It also increases the need for security features designed specifically for these settings. Anomaly-based detection is a technique suitable for the more deterministic SCADA systems. This thesis uses a combination of two techniques to detect anomalies. The first is an automatic whitelist that learns the behavior of the network flows. The second utilizes the differences in arrival times of the network packets. A prototype anomaly detector has been developed in Bro. To analyze the IEC 60870-5-104 protocol, a new parser for Bro was also developed. The resulting anomaly detector achieved a high detection rate for three of the four types of attacks evaluated. The studied detection methods are promising when used in a highly deterministic setting, such as a SCADA system.
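The two techniques can be sketched together in a small Python class. This is our own minimal illustration, not the thesis's Bro implementation; the flow tuple, smoothing factor and jitter bound are assumptions (port 2404 is the standard IEC 60870-5-104 TCP port).

```python
# Sketch of flow-whitelist learning plus an inter-arrival-time check.
class FlowAnomalyDetector:
    def __init__(self, max_jitter=0.5):
        self.whitelist = set()      # (src, dst, port) tuples seen in training
        self.last_seen = {}         # flow -> timestamp of previous packet
        self.intervals = {}         # flow -> learned mean inter-arrival time
        self.max_jitter = max_jitter

    def train(self, flow, ts):
        self.whitelist.add(flow)
        if flow in self.last_seen:
            gap = ts - self.last_seen[flow]
            prev = self.intervals.get(flow, gap)
            self.intervals[flow] = 0.9 * prev + 0.1 * gap   # smoothed estimate
        self.last_seen[flow] = ts

    def check(self, flow, ts):
        if flow not in self.whitelist:
            return "unknown flow"
        expected = self.intervals.get(flow)
        last = self.last_seen.get(flow)
        self.last_seen[flow] = ts
        if expected is not None and last is not None:
            if abs((ts - last) - expected) > self.max_jitter * expected:
                return "timing anomaly"
        return "ok"

det = FlowAnomalyDetector()
for t in range(10):                      # regular 1-second polling cycle
    det.train(("10.0.0.1", "10.0.0.2", 2404), float(t))
print(det.check(("10.0.0.1", "10.0.0.2", 2404), 10.0))   # ok
print(det.check(("10.0.0.9", "10.0.0.2", 2404), 10.5))   # unknown flow
```

The whitelist catches flows never seen during learning, while the timing check exploits the regular polling behaviour typical of SCADA traffic.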
APA, Harvard, Vancouver, ISO, and other styles
35

Svensson, Carolin. "Anomaly Detection in Encrypted WLAN Traffic." Thesis, Linköpings universitet, Kommunikationssystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-172689.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Putina, Andrian. "Unsupervised anomaly detection : methods and applications." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT012.

Full text
Abstract:
Une anomalie (également connue sous le nom de outlier) est une instance qui s'écarte de manière significative du reste des données et est définie par Hawkins comme "une observation, qui s'écarte tellement des autres observations qu'elle éveille les soupçons qu'il a été généré par un mécanisme différent". La détection d’anomalies (également connue sous le nom de détection de valeurs aberrantes ou de nouveauté) est donc le domaine de l’apprentissage automatique et de l’exploration de données dans le but d’identifier les instances dont les caractéristiques semblent être incohérentes avec le reste de l’ensemble de données. Dans de nombreuses applications, distinguer correctement l'ensemble des points de données anormaux (outliers) de l'ensemble des points normaux (inliers) s'avère très important. Une première application est le nettoyage des données, c'est-à-dire l'identification des mesures bruyantes et fallacieuses dans un ensemble de données avant d'appliquer davantage les algorithmes d'apprentissage. Cependant, avec la croissance explosive du volume de données pouvant être collectées à partir de diverses sources, par exemple les transactions par carte, les connexions Internet, les mesures de température, etc., l'utilisation de la détection d'anomalies devient une tâche autonome cruciale pour la surveillance continue des systèmes. Dans ce contexte, la détection d'anomalies peut être utilisée pour détecter des attaques d'intrusion en cours, des réseaux de capteurs défaillants ou des masses cancéreuses. La thèse propose d'abord une approche basée sur un collection d'arbres pour la détection non supervisée d'anomalies, appelée "Random Histogram Forest (RHF)". L'algorithme résout le problème de la dimensionnalité en utilisant le quatrième moment central (alias 'kurtosis') dans la construction du modèle en bénéficiant d'un temps d'exécution linéaire. 
Un moteur de détection d'anomalies basé sur le stream, appelé 'ODS', qui exploite DenStream, une technique de clustering non supervisée est présenté par la suite et enfin un moteur de détection automatisée d'anomalies qui allège l'effort humain requis lorsqu'il s'agit de plusieurs algorithmes et hyper-paramètres est présenté en dernière contribution
An anomaly (also known as an outlier) is an instance that deviates significantly from the rest of the input data, defined by Hawkins as 'an observation, which deviates so much from other observations as to arouse suspicions that it was generated by a different mechanism'. Anomaly detection (also known as outlier or novelty detection) is thus the machine learning and data mining field concerned with identifying those instances whose features appear to be inconsistent with the remainder of the dataset. In many applications, correctly distinguishing the set of anomalous data points (outliers) from the set of normal ones (inliers) proves to be very important. A first application is data cleaning, i.e., identifying noisy and fallacious measurements in a dataset before applying learning algorithms. However, with the explosive growth of data volume collectable from various sources, e.g., card transactions, internet connections, temperature measurements, etc., anomaly detection becomes a crucial stand-alone task for continuous monitoring of systems. In this context, anomaly detection can be used to detect ongoing intrusion attacks, faulty sensor networks or cancerous masses. The thesis first proposes a batch tree-based approach for unsupervised anomaly detection, called 'Random Histogram Forest (RHF)'. The algorithm addresses the curse of dimensionality by using the fourth central moment (aka kurtosis) in the model construction while boasting linear running time. A stream-based anomaly detection engine, called 'ODS', that leverages DenStream, an unsupervised clustering technique, is presented subsequently; finally, an automated anomaly detection engine, which alleviates the human effort required when dealing with several algorithms and hyper-parameters, is presented as the last contribution.
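The role of kurtosis in RHF can be illustrated with a simplified, stdlib-only sketch (our illustration, not the published algorithm): attributes whose values have heavy tails score a high fourth moment, so splits on them are more likely to isolate outliers.

```python
# Sketch: pick the split dimension by kurtosis, as heavy-tailed attributes
# are the ones most likely to hold outliers.
import random

def kurtosis(xs):
    """Population kurtosis (fourth standardised central moment)."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    if var == 0:
        return 0.0
    return sum((x - m) ** 4 for x in xs) / n / var ** 2

random.seed(0)
data = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(500)]
data.append([0.0, 12.0])            # outlier hidden in dimension 1

scores = [kurtosis([row[d] for row in data]) for d in range(2)]
split_dim = max(range(2), key=lambda d: scores[d])
print(split_dim)                    # 1 -- the dimension holding the outlier
```

A Gaussian attribute has kurtosis near 3; the single extreme value in dimension 1 inflates its kurtosis far beyond that, so the sketch prefers splitting there.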
APA, Harvard, Vancouver, ISO, and other styles
37

Joshi, Vineet. "Unsupervised Anomaly Detection in Numerical Datasets." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1427799744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Jirwe, Marcus. "Online Anomaly Detection on the Edge." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299565.

Full text
Abstract:
Today's society relies heavily on industry, and the automation of factory tasks is more prevalent than ever before. However, the machines taking on these tasks require maintenance to continue operating. This maintenance is typically performed periodically, can be expensive, and sometimes requires expert knowledge. Thus it would be very beneficial if one could predict when a machine needs maintenance and employ maintenance only as necessary. One method to predict when maintenance is necessary is to collect sensor data from a machine and analyse it for anomalies. Anomalies are usually an indicator of unexpected behaviour and can therefore show when a machine needs maintenance. Due to concerns such as privacy and security, the data is often not allowed to leave the local system. Hence it is necessary to perform this kind of anomaly detection in an online manner and in an edge environment, which imposes limitations on hardware and computational ability. In this thesis we consider four machine learning anomaly detection methods that can learn and detect anomalies in this kind of environment: LoOP, iForestASD, KitNet and xStream. We first evaluate the four anomaly detectors on the Skoltech Anomaly Benchmark using its suggested metrics as well as Receiver Operating Characteristic curves. We also perform further evaluation on two data sets provided by the company Gebhardt. The experimental results are promising and indicate that the considered methods perform well at the task of anomaly detection. We finally propose some avenues for future work, such as implementing a dynamically changing anomaly threshold.
Dagens samhälle är väldigt beroende av industrin och automatiseringen av fabriksuppgifter är mer förekommande än någonsin. Dock kräver maskinerna som tar sig an dessa uppgifter underhåll för att forsätta arbeta. Detta underhåll ges typiskt periodvis och kan vara dyrt och samtidigt kräva expertkunskap. Därför skulle det vara väldigt fördelaktigt om det kunde förutsägas när en maskin behövde underhåll och endast göra detta när det är nödvändigt. En metod för att förutse när underhåll krävs är att samla in sensordata från en maskin och analysera det för att hitta anomalier. Anomalier fungerar ofta som en indikator av oväntat beteende, och kan därför visa att en maskin behöver underhåll. På grund av frågor som integritet och säkerhet är det ofta inte tillåtet att datan lämnar det lokala systemet. Därför är det nödvändigt att denna typ av anomalidetektering genomförs sekventiellt allt eftersom datan samlas in, och att detta sker på nätverkskanten. Miljön som detta sker i påtvingar begränsningar på både hårdvara och beräkningsförmåga. I denna avhandling så överväger vi fyra anomalidetektorer som med användning av maskininlärning lär sig och upptäcker anomalier i denna sorts miljö. Dessa metoder är LoOP, iForestASD, KitNet och xStream. Vi analyserar först de fyra anomalidetektorerna genom Skoltech Anomaly Benchmark där vi använder deras föreslagna mått samt ”Receiver Operating Characteristic”-kurvor. Vi genomför även vidare analys på två dataset som vi har tillhandhållit av företaget Gebhardt. De experimentella resultaten är lovande och indikerar att de övervägda metoderna presterar väl när det kommer till detektering av anomalier. Slutligen föreslår vi några idéer som kan utforskas för framtida arbete, som att implementera en tröskel för anomalidetektering som anpassar sig dynamiskt.
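The dynamically changing anomaly threshold suggested as future work could, for instance, track an exponentially weighted mean and variance of the stream, a minimal sketch under our own assumptions (the smoothing factor, multiplier and warm-up length are illustrative):

```python
# Sketch: a point is anomalous if it deviates from the running mean by more
# than k running standard deviations at the time it arrives; anomalous points
# are not fed back into the running statistics.
import math

def stream_anomalies(values, alpha=0.1, k=3.0, warmup=10):
    mean, var, flagged = values[0], 0.0, []
    for i, x in enumerate(values):
        std = math.sqrt(var)
        if i >= warmup and abs(x - mean) > k * std:
            flagged.append(i)
            continue                 # do not let anomalies shift the model
        diff = x - mean
        mean += alpha * diff
        var = (1 - alpha) * (var + alpha * diff * diff)
    return flagged

data = [10.0] * 30 + [25.0] + [10.0] * 10
print(stream_anomalies(data))        # [30]
```

Because the threshold follows the data, slow drifts in a sensor's baseline raise no alarms while sudden jumps still do, which suits the edge setting described above.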
APA, Harvard, Vancouver, ISO, and other styles
39

Wu, Mingxi. "Statistical methods for fast anomaly detection." [Gainesville, Fla.] : University of Florida, 2008. http://purl.fcla.edu/fcla/etd/UFE0022572.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Rosario, Dalton S. "Algorithm development for hyperspectral anomaly detection." College Park, Md.: University of Maryland, 2008. http://hdl.handle.net/1903/8583.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2008.
Thesis research directed by: Applied Mathematics and Scientific Computation Program. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
41

GHORBANI, SONIYA. "Anomaly Detection in Electricity Consumption Data." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-35011.

Full text
Abstract:
Distribution grids play an important role in delivering electricity to end users. Electricity customers would like to have a continuous electricity supply without any disturbance. For customers such as airports and hospitals, electricity interruption may have devastating consequences. Therefore, many electricity distribution companies are looking for ways to prevent power outages. Sometimes the power outages are caused from the grid side, such as failure in transformers or a breakdown in power cables because of wind, and sometimes the outages are caused by the customers, such as overload. In fact, a very high peak in electricity consumption and an irregular load profile may cause these kinds of failures. In this thesis, we used an approach consisting of two main steps for detecting customers with irregular load profiles. In the first step, we create a dictionary based on all common load profile shapes using daily electricity consumption for a one-month period. In the second step, the load profile shapes of customers for a specific week are compared with the load patterns in the dictionary. If the electricity consumption for any customer during that week is not similar to any of the load patterns in the dictionary, it will be grouped as an anomaly. In this case, load profile data are transformed to symbols using Symbolic Aggregate approXimation (SAX) and then clustered using hierarchical clustering. The approach is used to detect anomalies in the weekly load profiles of a data set provided by HEM Nät, a power distribution company located in the south of Sweden.
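The SAX transformation used above can be sketched compactly (our illustration; the segment count, alphabet size and example load values are assumptions): z-normalise a daily load profile, reduce it with Piecewise Aggregate Approximation (PAA), then map each segment to a symbol via Gaussian breakpoints.

```python
# Sketch of SAX: z-normalisation -> PAA -> symbols, alphabet size 4.
import math

BREAKPOINTS = [-0.67, 0.0, 0.67]        # standard SAX breakpoints for a = 4

def sax(series, n_segments=4, alphabet="abcd"):
    m = sum(series) / len(series)
    s = math.sqrt(sum((x - m) ** 2 for x in series) / len(series)) or 1.0
    z = [(x - m) / s for x in series]       # z-normalise
    seg = len(z) // n_segments
    word = ""
    for i in range(n_segments):
        paa = sum(z[i * seg:(i + 1) * seg]) / seg   # segment mean
        word += alphabet[sum(paa > b for b in BREAKPOINTS)]
    return word

# 24 hourly readings: low at night, rising toward a pronounced evening peak.
load = [2] * 6 + [5] * 6 + [6] * 6 + [9] * 6
print(sax(load))    # 'abcd' -- night maps to 'a', the evening peak to 'd'
```

Two customers with the same daily shape get the same SAX word, so hierarchical clustering of the words groups common profiles into the dictionary, and weeks whose words match nothing in it stand out.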
APA, Harvard, Vancouver, ISO, and other styles
42

Oriwoh, Edewede. "A smart home anomaly detection framework." Thesis, University of Bedfordshire, 2015. http://hdl.handle.net/10547/622486.

Full text
Abstract:
Smart Homes (SHs), as subsets of the Internet of Things (IoT), make use of Machine Learning and Artificial Intelligence tools to provide technology-enabled solutions which assist their occupants and users with their Activities of Daily Living (ADL). Some SHs provide always-present health management support and care services. Having these services provided at home enables SH occupants such as the elderly and disabled to continue to live in their own homes and localities, thus aiding Ageing In Place goals and eliminating the need for them to be relocated in order to continue receiving the same support and services. Introducing and interconnecting smart, autonomous systems in homes to enable these service provisions and Assistance Technologies (AT) requires that certain interfaces in, and connections to, SHs are exposed to the Internet, among other public-facing networks. This introduces the potential for cyber-physical attacks to be perpetrated through, from and against SHs. Apart from the actual threats posed by these attacks to SH occupants and their homes, the potential that these attacks might occur can adversely affect the adoption or uptake of SH solutions. This thesis identifies key attributes of the different elements (things or nodes and rooms or zones) in SHs and the relationships that exist between these elements. These relationships can be used to build security baselines for SHs such that any deviation from the baseline is described as anomalous. The thesis demonstrates the application of these relationships to Anomaly Detection (AD) through the analysis of several hypothetical scenarios and the decisions reached about whether they are normal or anomalous. This thesis also proposes an Internet of Things Digital Forensics Framework (IDFF), a Forensics Edge Management System (FEMS), a FEMS Decision-Making Algorithm (FDMA) and an IoT Incident Response plan.
These tools can be combined to provide proactive (autonomous and human-led) Digital Forensics services within cyber-physical environments like the Smart Home.
APA, Harvard, Vancouver, ISO, and other styles
43

Rossell, Daniel. "Anomaly detection using adaptive resonance theory." Thesis, Boston University, 2013. https://hdl.handle.net/2144/12205.

Full text
Abstract:
Thesis (M.S.)--Boston University
This thesis focuses on the problem of anomaly detection in computer networks. Anomalies are often malicious intrusion attempts that represent a serious threat to network security. Adaptive Resonance Theory (ART) is used as a classification scheme for identifying malicious network traffic. ART was originally developed as a theory to explain how the human eye categorizes visual patterns. For network intrusion detection, the core ART algorithm is implemented as a clustering algorithm that groups network traffic into clusters. A machine learning process allows the number of clusters to change over time to best conform to the data. Network traffic is characterized by network flows, which represent a packet, or series of packets, between two distinct nodes on a network. These flows can contain a number of attributes, including IP addresses, ports, size, and duration. These attributes form a multi-dimensional vector that is used in the clustering process. Once data is clustered along the defined dimensions, anomalies are identified as data points that do not match known good or nominal network traffic. The ART clustering algorithm is tested on a realistic network environment that was generated using the network flow simulation tool FS. The clustering results for this simulation show very promising detection rates for the ART clustering algorithm.
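The ART-style clustering loop described above can be approximated in a short sketch. This is a simplified, distance-based stand-in for the full ART match/resonance equations (the vigilance radius, learning rate and flow vectors are our assumptions): each flow vector joins the nearest prototype if it passes a vigilance test, otherwise it founds a new cluster.

```python
# Sketch: vigilance-gated online clustering in the spirit of ART.
def art_cluster(vectors, vigilance=1.0, lr=0.5):
    prototypes, labels = [], []
    for v in vectors:
        dists = [sum((a - b) ** 2 for a, b in zip(v, p)) ** 0.5
                 for p in prototypes]
        best = min(range(len(dists)), key=dists.__getitem__) if dists else None
        if best is not None and dists[best] <= vigilance:    # resonance
            prototypes[best] = [p + lr * (a - p)             # move prototype
                                for p, a in zip(prototypes[best], v)]
            labels.append(best)
        else:                                                # new category
            prototypes.append(list(v))
            labels.append(len(prototypes) - 1)
    return labels, prototypes

# Toy 2-D flow features; the far-away pair forms its own cluster.
flows = [(1.0, 1.0), (1.2, 0.9), (9.0, 9.0), (1.1, 1.1), (9.2, 8.8)]
labels, protos = art_cluster(flows)
print(labels)   # [0, 0, 1, 0, 1]
```

In the intrusion-detection setting, flows landing in small or previously unseen categories, rather than the established nominal clusters, would be reported as anomalies.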
APA, Harvard, Vancouver, ISO, and other styles
44

Örneholm, Filip. "Anomaly Detection in Seasonal ARIMA Models." Thesis, Uppsala universitet, Tillämpad matematik och statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-388503.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Sreenivasulu, Ajay. "Evaluation of cluster based Anomaly detection." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-18053.

Full text
Abstract:
Anomaly detection has been widely researched and used in various application domains such as network intrusion, military, and finance. Anomalies can be defined as unusual behavior that differs from the expected normal behavior. This thesis focuses on evaluating the performance of different clustering algorithms, namely k-means, DBSCAN, and OPTICS, as anomaly detectors. The data is generated using the MixSim package available in R. The algorithms were tested on different cluster overlaps and dimensions. Evaluation metrics such as recall, precision, and F1 score were used to analyze the performance of the clustering algorithms. The results show that DBSCAN performed better than the other algorithms when provided low-dimensional data with different cluster overlap settings, but it did not perform well when provided high-dimensional data. For high-dimensional data, k-means performed better than DBSCAN and OPTICS across the different cluster overlaps.
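The evaluation pattern above, treating a clustering algorithm's noise points as predicted anomalies and scoring them against ground truth, can be sketched with a minimal stdlib DBSCAN (our simplified illustration, not the thesis's R/MixSim setup):

```python
# Minimal DBSCAN; points labelled -1 (noise) are reported as anomalies,
# then compared with ground truth via precision and recall.
def dbscan(points, eps=1.0, min_pts=3):
    n = len(points)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbours = [[j for j in range(n) if dist(points[i], points[j]) <= eps]
                  for i in range(n)]
    labels = [-1] * n          # -1 = noise / anomaly
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbours[i]) < min_pts:
            continue
        stack, labels[i] = [i], cluster
        while stack:                       # expand the cluster
            j = stack.pop()
            for k in neighbours[j]:
                if labels[k] == -1:
                    labels[k] = cluster
                    if len(neighbours[k]) >= min_pts:
                        stack.append(k)
        cluster += 1
    return labels

pts = [(0, 0), (0.5, 0), (0, 0.5), (0.4, 0.4), (10, 10)]
truth = [0, 0, 0, 0, 1]                      # 1 = true anomaly
pred = [1 if l == -1 else 0 for l in dbscan(pts)]
tp = sum(p and t for p, t in zip(pred, truth))
precision = tp / max(sum(pred), 1)
recall = tp / max(sum(truth), 1)
print(precision, recall)                     # 1.0 1.0
```

F1 then follows as the harmonic mean of precision and recall; sweeping eps and the dimensionality reproduces the kind of comparison the thesis performs.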
APA, Harvard, Vancouver, ISO, and other styles
46

Dao, Quang Hoan <1992>. "Anomaly detection with time series forecasting." Master's Degree Thesis, Università Ca' Foscari Venezia, 2020. http://hdl.handle.net/10579/17320.

Full text
Abstract:
Anomaly detection in time series is a very large and complex field. In recent years, several techniques based on data science were designed to improve the efficiency of methods developed for this purpose. In this thesis, we introduce Recurrent Neural Networks (RNNs) with LSTM units, ARIMA, and the Facebook Prophet library for detecting anomalies with time series forecasting. Due to the challenges in obtaining labeled anomaly datasets, an unsupervised approach is employed. Unsupervised anomaly detection is the process of finding outlying records in a given dataset without prior need for training. An anomaly could become normal as the data evolves; it is therefore necessary to maintain a dynamic system that adapts to the changes. While LSTM and ARIMA are powerful methods for forecasting a time series into the future, the Prophet package works best with time series that have strong seasonal effects and several seasons of historical data. Prophet is robust to missing data and shifts in the trend, and typically handles anomalies well. The resulting prediction errors are modeled to give anomaly scores. We also provide a quantitative comparison among the examined techniques to select the optimal choice for the problem. In our experiments, we use practical datasets collected from real products used by thousands of users.
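The pattern shared by all three forecasters above, predict each point, model the prediction errors, and score points whose error is extreme, can be sketched with a moving average standing in for the forecaster (our illustration; the window, the robust threshold and the data are assumptions):

```python
# Sketch: forecast-error anomaly scoring with a median/MAD threshold.
import statistics

def forecast_errors(series, window=5):
    """Absolute error of a moving-average one-step forecast."""
    errs = []
    for i in range(window, len(series)):
        pred = sum(series[i - window:i]) / window
        errs.append(abs(series[i] - pred))
    return errs

def anomalies(series, window=5, k=4.0):
    errs = forecast_errors(series, window)
    med = statistics.median(errs)
    mad = statistics.median(abs(e - med) for e in errs) or 1e-9
    # Flag points whose error is k robust deviations above the typical error.
    return [i + window for i, e in enumerate(errs) if (e - med) / mad > k]

data = [10, 11, 10, 12, 11, 10, 11, 12, 30, 11, 10, 12]
print(anomalies(data))    # [8] -- the spike at index 8
```

Swapping the moving average for an LSTM, ARIMA or Prophet forecast changes only the `pred` line; the error modelling and scoring stay the same.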
APA, Harvard, Vancouver, ISO, and other styles
47

Zavanin, Eduardo Marcio 1989. "Mecanismo de Pontecorvo estendido." [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/278245.

Full text
Abstract:
Advisor: Marcelo Moraes Guzzo
Master's dissertation - Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin
Made available in DSpace on 2018-08-21. Previous issue date: 2013
Resumo: O objetivo desse trabalho é desenvolver um mecanismo que possa servir como solução para as anomalias dos antineutrinos de reatores e do Gálio. Relaxando a hipótese de Pontecorvo, permitindo que os ângulos de mistura que compõem um estado de sabor possuam diferentes valores, conseguimos explicar o fenômeno de desaparecimento de neutrinos/antineutrinos em baixas distâncias, através de um parâmetro livre. Para confrontar o mecanismo desenvolvido também fazemos uma analise criteriosa de alguns limites experimentais obtidos por aceleradores de partículas e identificamos uma possível dependência desse parâmetro livre com a energia. Adotando esse dependência energética para o parâmetro livre, conseguimos acomodar a grande maioria dos dados experimentais em física de neutrinos através de um único modelo
Abstract: This project aims to develop a mechanism that provides a possible solution to the reactor antineutrino anomaly and the Gallium anomaly. Relaxing Pontecorvo's hypothesis, allowing the mixing angles that compose a flavor state to take different values, makes it possible to explain the disappearance phenomenon at short baselines through a free parameter. To confront the mechanism developed, we also perform an analysis of some experimental limits obtained by particle accelerators and identify a possible dependence of this free parameter on the energy. Adopting this energy dependence for the free parameter, we can accommodate almost every experiment in neutrino physics within a single model
Mestrado
Física
Mestre em Física
APA, Harvard, Vancouver, ISO, and other styles
48

Balocchi, Leonardo. "Anomaly detection mediante algoritmi di machine learning." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
The aim of this work is to study the damage state of the Z-24 bridge, a structure that, in the course of its demolition, was subjected to progressive damage. The study was carried out using machine learning algorithms; in particular, two anomaly detection algorithms were chosen. Since two algorithms are used, namely ACH and KNN, a comparison was made at the end of the work to determine which one is better suited for structural analysis. The comparison between the two algorithms was performed using confusion matrices and ROC curves.
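The KNN side of the comparison above can be sketched briefly (an illustration under our own assumptions; the features, distance threshold and labels are invented): the anomaly score of a point is its distance to the k-th nearest training point, and a confusion matrix summarises the detector against ground truth.

```python
# Sketch: k-th-nearest-neighbour distance as an anomaly score, evaluated
# with a 2x2 confusion matrix (rows = truth, columns = prediction).
def knn_score(train, x, k=3):
    d = sorted(sum((a - b) ** 2 for a, b in zip(x, t)) ** 0.5 for t in train)
    return d[k - 1]

train = [(i * 0.1, i * 0.1) for i in range(20)]      # healthy-state features
tests = [(0.5, 0.5), (0.55, 0.5), (5.0, 5.0)]
truth = [0, 0, 1]                                    # 1 = true anomaly
pred = [1 if knn_score(train, x) > 0.5 else 0 for x in tests]

cm = [[0, 0], [0, 0]]
for t, p in zip(truth, pred):
    cm[t][p] += 1
print(cm)     # [[2, 0], [0, 1]]
```

Sweeping the distance threshold instead of fixing it at 0.5 traces out the ROC curve used for the comparison.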
APA, Harvard, Vancouver, ISO, and other styles
49

SUNDHOLM, JOEL. "Feature Extraction for Anomaly Detection in Maritime Trajectories." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-155898.

Full text
Abstract:
The operators of a maritime surveillance system are hard pressed to make complete use of the near real-time information flow available today. To assist them in this matter there has been an increasing amount of interest in automated systems for the detection of anomalous trajectories. Specifically, it has been proposed that the framework of conformal anomaly detection can be used, as it provides the key property of a well-tuned alarm rate. However, in order to get an acceptable precision there is a need to carefully tailor the nonconformity measure used to determine if a trajectory is anomalous. This also applies to the features that are used by the measure. To contribute to a better understanding of what features are feasible and how the choice of feature space relates to the types of anomalies that can be found, we have evaluated a number of features on real maritime trajectory data with simulated anomalies. It is found that none of the tested feature spaces was best for detecting all anomaly types in the test set. While one feature space might be best for detecting one kind of anomaly, another feature space might be better for other anomalies. There are indications that the best possible nonconformity measure should capture both absolute anomalies, such as an anomalous position, as well as relative anomalies, such as strange turns or stops.
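The conformal framework the thesis builds on can be shown in a bare-bones sketch (our illustration; the 1-D speed feature and nearest-neighbour nonconformity measure are assumptions): a nonconformity score is turned into a p-value, and an alarm fires when the p-value drops below a chosen significance level, which is what gives the well-tuned alarm rate.

```python
# Sketch of conformal anomaly detection with a nearest-neighbour
# nonconformity measure on a toy 1-D trajectory feature (speed).
def nonconformity(train, x):
    return min(abs(x - t) for t in train)

def conformal_p(train, x):
    a_x = nonconformity(train, x)
    # Leave-one-out nonconformity scores for the training examples.
    scores = [nonconformity(train[:i] + train[i + 1:], t)
              for i, t in enumerate(train)]
    ge = sum(s >= a_x for s in scores)
    return (ge + 1) / (len(train) + 1)

speeds = [10.1, 10.3, 9.8, 10.0, 10.2, 9.9, 10.4, 10.05]
print(conformal_p(speeds, 10.1) > 0.15)    # True  -> no alarm
print(conformal_p(speeds, 25.0) > 0.15)    # False -> alarm at epsilon = 0.15
```

Under exchangeability the p-values are (approximately) uniform, so the long-run alarm rate on normal trajectories is bounded by epsilon regardless of which features feed the nonconformity measure; that is exactly why the thesis can vary the feature space freely.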
APA, Harvard, Vancouver, ISO, and other styles
50

Balupari, Ravindra. "Real-time network-based anomaly intrusion detection." Ohio : Ohio University, 2002. http://www.ohiolink.edu/etd/view.cgi?ohiou1174579398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
