
Dissertations / Theses on the topic 'Root Cause Analysis'



Consult the top 50 dissertations / theses for your research on the topic 'Root Cause Analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Melo, Daniel Araújo. "ARCA - Alerts root cause analysis framework." Universidade Federal de Pernambuco, 2014. https://repositorio.ufpe.br/handle/123456789/13946.

Full text
Abstract:
Modern virtual plagues, or malwares, have focused on internal host infection and employ evasive techniques to conceal themselves from antivirus systems and users. Traditional network security mechanisms, such as firewalls, IDS (Intrusion Detection Systems) and antivirus systems, have lost efficiency when fighting malware propagation. Recent research presents alternatives to detect malicious traffic and malware propagation through traffic analysis; however, the presented results are based on experiments with biased artificial traffic or traffic too specific to generalize, do not consider the existence of background traffic related to local network services, or demand previous knowledge of the network infrastructure. In particular, they do not consider a well-known intrusion detection systems problem: the high false positive rate, which may be responsible for 99% of total alerts. This dissertation proposes a framework (ARCA - Alerts Root Cause Analysis) capable of guiding a security engineer, or system administrator, to identify alert root causes, malicious or not, and to allow the identification of malicious traffic and false positives. Moreover, it describes modern malware propagation mechanisms and presents methods to detect malware through analysis of IDS alerts and false positive reduction. ARCA combines an aggregation method based on Relative Uncertainty with Apriori, a frequent itemset mining algorithm. Tests with two real datasets show an 88% reduction in the amount of alerts to be analyzed without previous knowledge of the network infrastructure.
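For readers unfamiliar with frequent itemset mining, a minimal sketch of the kind of analysis the abstract describes is given below. This is not the ARCA implementation: the alert fields and the support threshold are invented, and the mlxtend library is used purely for illustration.

# Sketch: mining frequent attribute combinations from IDS alerts with Apriori.
# Hypothetical alert fields; not the ARCA implementation.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori

alerts = [
    ["sig:ET_SCAN", "src:10.0.0.5", "dport:445"],
    ["sig:ET_SCAN", "src:10.0.0.5", "dport:139"],
    ["sig:HTTP_ERR", "src:10.0.0.7", "dport:80"],
    ["sig:ET_SCAN", "src:10.0.0.5", "dport:445"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(alerts).transform(alerts), columns=te.columns_)

# Frequent itemsets are candidate root causes: e.g. one noisy host or one
# misfiring signature explaining a large share of the alert volume.
itemsets = apriori(onehot, min_support=0.5, use_colnames=True)
print(itemsets.sort_values("support", ascending=False))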
APA, Harvard, Vancouver, ISO, and other styles
2

AGUIAR, MILENA CABRAL. "ROOT CAUSE ANALYSIS: SURVEY METHODS AND EXEMPLIFICATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2014. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=23437@1.

Full text
Abstract:
Great challenges have emerged for organizations due to the technological advances of recent times. Product quality is no longer a means to gain competitive advantage, but a necessity for organizations to keep their customers. Thus, ways to make quality increasingly present in organizations are necessary. In this context, the objectives of this work are to study the major root cause analysis methods in the literature, presenting their stages, features, peculiarities and a comparison, and to exemplify the application of these methods. Once known by organizations, the application of such methods can prevent the recurrence of failures, leading organizations to a higher level of quality, increased productivity, and thus increased customer satisfaction. The research began with a survey and study of the methods found in academic references (5 Whys, Ishikawa Diagram, Control Barrier Analysis, Event and Causal Factor Charting, Fault Tree Analysis and Root Cause Map), followed by the presentation of an example failure developed in the context of a metalworking manufacturing industry. Each method was applied to the non-conformance problem. The results of the methods were compared, and then the advantages and disadvantages of the methods were highlighted. The 5 Whys, Ishikawa Diagram and Control Barrier Analysis were considered appropriate for simple problems in an industrial organization. For complex problems, whose root causes are not easily identified, Event and Causal Factor Charting, Fault Tree Analysis and Root Cause Map were considered more appropriate.
APA, Harvard, Vancouver, ISO, and other styles
3

Elliott, Grant Stephen. "Improving customer service contact root-cause analysis." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/50095.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2009.
Includes bibliographical references (p. 50).
When a customer calls or e-mails customer service, a customer service agent will diagnose the issue, render a solution, and then wrap up the call or e-mail. For many customer service departments, this wrap-up process requires the agent to classify the reason the customer contacted customer service. Typically, this classification is done by assigning a code that describes the reason for a contact. Additionally, if a contact requires a concession, the agent will classify the reason the customer requires a concession and select an appropriate code. These codes are used by the various business teams within the company to identify and correct failures in their processes. Therefore, these codes should drill down to the root cause of a contact or concession to allow for efficient correction. Codes that do not clearly identify the root cause of a contact are of little or no use to the company. Additionally, the codes must be developed in such a way that they can be accurately chosen by either the agent or the customer. Having agents select the wrong code not only obscures the true cause of a contact, but also creates additional work due to the process involved in determining the correct code. This thesis looks at the challenges inherent in developing a list of codes that both provides clear insight into the root cause of customer contacts and can be accurately selected by the customer service agent or the customer.
by Grant Stephen Elliott.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Changlin. "Root Cause Localization for Unreproducible Builds." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1595524817828183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Josefsson, Tim. "Root-cause analysis through machine learning in the cloud." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-340428.

Full text
Abstract:
It has been predicted that by 2021 there will be 28 billion connected devices and that 80% of global consumer internet traffic will be related to streaming services such as Netflix, Hulu and Youtube. This connectivity will in turn be matched by a cloud infrastructure that will ensure connectivity and services. With such an increase in infrastructure, the need for reliable systems will also rise. One solution to providing reliability in data centres is root-cause analysis, where the aim is to identify the root cause of a service degradation in order to prevent it or allow for easy localization of the problem. In this report we explore an approach to root-cause analysis using a machine learning model called self-organizing maps. Self-organizing maps provide data classification while also providing visualization of the model, which is something many machine learning models fail to do. We show that self-organizing maps are a promising solution to root-cause analysis. Within the report we also compare our approach to another prominent approach and show that our model performs favorably. Finally, we touch upon some interesting research topics that we believe can further the field of root-cause analysis.
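A minimal sketch of the idea described in this abstract, assuming hypothetical service metrics and using the MiniSom library rather than the thesis code: train a self-organizing map on data from healthy operation, then flag samples that map far from the learned grid.

# Sketch: SOM-based anomaly detection on service metrics (hypothetical data).
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 4))   # e.g. CPU, memory, latency, I/O

som = MiniSom(8, 8, input_len=4, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(normal)
som.train_random(normal, 2000)

def anomaly_score(x):
    # Distance from the sample to its best-matching codebook vector.
    return np.linalg.norm(x - som.quantization(np.atleast_2d(x))[0])

threshold = np.quantile([anomaly_score(x) for x in normal], 0.99)
suspect = np.array([6.0, 5.5, 4.0, 7.0])       # degraded-looking sample
print(anomaly_score(suspect) > threshold)      # True -> flag for RCA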
APA, Harvard, Vancouver, ISO, and other styles
6

Medidi, Prasadbabu. "Waste in Lean Software Development : A Root Cause Analysis." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4238.

Full text
Abstract:
Context: The removal of waste is a crucial area in lean software development. It has been found that there was little evidence on the root causes of waste in lean software development, and root causes from the state of practice had not been investigated. Furthermore, relations between wastes were not successfully exposed through root cause identification processes. Objectives: The objective of this study was to perform an in-depth investigation to identify causes which lead to wastes in the lean software development process, in the context of medium to large software development. To this end, the researcher also identified relationships that exist between wastes. Methods: The researcher conducted a literature review to look for evidence on waste-related activities offered in peer-reviewed literature. Furthermore, the author conducted seven semi-structured interviews and used the Grounded Theory method for both literature and interview data analysis. Results: The researcher identified three categories of factors of wastes: technical, non-technical and global software product development. In the technical category, factors relating to different technical aspects of building a product, such as required resource issues and solving complexity issues, among others, were identified. Similarly, factors relating to people's knowledge and management issues, as well as factors concerning communication, coordination and temporal distance, were identified as non-technical and global software product development factors respectively. For all seven kinds of wastes, the root causes were identified.
APA, Harvard, Vancouver, ISO, and other styles
7

Ellis, Kathryn. "Improving root cause analysis of bacteriological water quality failures." Thesis, University of Sheffield, 2013. http://etheses.whiterose.ac.uk/5701/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pereira, Rosangela de Fátima. "A data-driven solution for root cause analysis in cloud computing environments." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-03032017-082237/.

Full text
Abstract:
Failure analysis and resolution in cloud-computing environments is a highly important issue, its primary motivation being the mitigation of the impact of such failures on applications hosted in these environments. Although there are advances in the immediate detection of failures, there is a lack of research on root cause analysis of failures in cloud computing. In this process, failures are tracked to analyze their causal factor. This practice allows cloud operators to act through a more effective process in preventing failures, resulting in a reduction of the number of recurring failures. Although this practice is commonly performed through human intervention, based on the expertise of professionals, the complexity of cloud-computing environments, coupled with the large volume of log data generated in these environments and the wide interdependence between system components, has made manual analysis impractical. Therefore, scalable solutions are needed to automate the root cause analysis process in cloud-computing environments, allowing the analysis of large data sets with satisfactory performance. Based on these requirements, this thesis presents a data-driven solution for root cause analysis in cloud-computing environments. The proposed solution includes the required functionalities for the collection, processing and analysis of data, as well as a method based on Bayesian networks for the automatic identification of root causes. The validation of the proposal is accomplished through a proof of concept using OpenStack, a framework for cloud-computing infrastructure, and Hadoop, a framework for distributed processing of large data volumes. The tests presented satisfactory performance, and the developed model correctly classified the root causes with a low rate of false positives.
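The Bayesian-network step can be illustrated with a small sketch. The network structure, probabilities, and node names below are invented for illustration; the model the thesis builds from OpenStack log data is necessarily much larger. The example uses the pgmpy library.

# Sketch: root cause inference with a Bayesian network (pgmpy).
# Structure and probabilities are invented; not the thesis model.
from pgmpy.models import BayesianNetwork  # BayesianModel in older pgmpy
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("disk_fault", "vm_error"), ("net_fault", "vm_error")])
model.add_cpds(
    TabularCPD("disk_fault", 2, [[0.95], [0.05]]),
    TabularCPD("net_fault", 2, [[0.90], [0.10]]),
    TabularCPD("vm_error", 2,
               # P(vm_error | disk_fault, net_fault)
               [[0.99, 0.30, 0.20, 0.05],   # no error
                [0.01, 0.70, 0.80, 0.95]],  # error
               evidence=["disk_fault", "net_fault"], evidence_card=[2, 2]),
)

# Given that the logs show a VM error, which cause is most likely?
posterior = VariableElimination(model).query(
    variables=["disk_fault", "net_fault"], evidence={"vm_error": 1})
print(posterior)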
APA, Harvard, Vancouver, ISO, and other styles
9

Mustafa, Mohamed. "A Model to Identify Failure & the Root Cause." Thesis, Linnéuniversitetet, Institutionen för maskinteknik (MT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-68770.

Full text
Abstract:
By identifying failures, manufacturing companies in today's competitive world can gain beneficial attributes. The purpose of this thesis is to develop a model for identifying failures and their root causes. The model was developed to identify a failure and the root cause behind it, which should result in a decrease in failure time (time with a non-functioning machine). The developed model has been tested and analyzed in a manufacturing company. The model was established through studies based on preventive and predictive maintenance: FMEA & RCA.
APA, Harvard, Vancouver, ISO, and other styles
10

Siekkinen, Matti. "Root cause analysis of TCP throughput : methodology, techniques, and applications." Nice, 2006. http://www.theses.fr/2006NICE4037.

Full text
Abstract:
The interest of the research community in measuring the Internet has grown tremendously during the last couple of years. This increase of interest is largely due to the growth and expansion of the Internet, which has been overwhelming. We have experienced exponential growth in terms of traffic volumes and the number of devices connected to the Internet. In addition, the heterogeneity of the Internet is constantly increasing: we observe more and more different devices with different communication needs residing in or moving between different types of networks. This evolution has brought up many needs - commercial, social, and technical - to know more about the users, traffic, and devices connected to the Internet. Unfortunately, little such knowledge is available today and more is required every day. That is why Internet measurement has grown to become a substantial research domain today. This thesis is concerned with TCP traffic. TCP is estimated to carry over 90% of the Internet's traffic, which is why it plays a crucial role in the functioning of the entire Internet. The most important performance metric for applications is typically throughput, i.e. the amount of data transmitted over a period of time. Our definition of the root cause analysis of TCP throughput is the analysis and inference of the reasons that prevent a given TCP connection from achieving a higher throughput. These reasons can be many: the application, the network, or even the TCP protocol itself. This thesis comprises three parts: methodology, techniques, and applications. The first part introduces our database management system-based methodology for passive traffic analysis. In that part we explain our approach, the InTraBase, which is based on an object-relational database management system. We also describe our prototype of this approach, which is implemented on PostgreSQL, and evaluate and optimize its performance. In the second part, we present the primary contributions of this thesis: the techniques for root cause analysis of TCP throughput. We introduce the different potential causes that can prevent a given TCP connection from achieving a higher throughput and explain in detail the algorithms we developed and used to detect such causes. Given the large heterogeneity and potentially large impact of applications that operate on top of TCP, we emphasize their analysis. The core of the third part of this thesis is a case study of traffic originating from clients of a commercial ADSL access network. The study focuses on performance analysis of data transfers from the point of view of the client. We discover some surprising results, such as the poor overall performance of P2P applications for file distribution due to upload rate limits enforced by client applications. The third part essentially binds the first two together: we give an idea of the capabilities of a system combining the methodology of the first part with the techniques of the second part to produce meaningful results in a real-world case study.
APA, Harvard, Vancouver, ISO, and other styles
11

Forsberg, Viktor. "AUTOMATIC ANOMALY DETECTION AND ROOT CAUSE ANALYSIS FOR MICROSERVICE CLUSTERS." Thesis, Umeå universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-164740.

Full text
Abstract:
Large microservice clusters deployed in the cloud can be very difficult to both monitor and debug. Monitoring these clusters is a first step towards the detection of anomalies, deviations from normal behaviour. Anomalies are often indicators that a component is failing or is about to fail and should hence be detected as soon as possible. There are often lots of metrics available to view. Furthermore, any errors that occur often propagate to other microservices, making it hard to manually locate the root cause of an anomaly; because of this, automatic methods are needed to detect and correct the problems. The goal of this thesis is to create a solution that can automatically monitor a microservice cluster, detect anomalies, and find a root cause. The anomaly detection is based on an unsupervised clustering algorithm that learns the normal behaviour of each service and then looks for data that falls outside that behaviour. Once an anomaly is detected, the proposed method tries to match the data against predefined root causes. The proposed solution is evaluated in a real microservice cluster deployed in the cloud, using Kubernetes together with a service mesh and several other tools to help gather metrics and trace requests in the system.
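A minimal sketch of the clustering idea, with invented metrics and thresholds rather than the thesis' actual algorithm: learn the normal behaviour of a service from historical samples, then score new samples by their distance to the nearest cluster centre.

# Sketch: learn a service's normal behaviour with clustering, then flag
# samples far from every cluster centre (hypothetical metrics).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(1000, 3))       # e.g. latency, error rate, CPU

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(normal)
dist_to_centre = km.transform(normal).min(axis=1)
threshold = np.quantile(dist_to_centre, 0.99)

new_sample = np.array([[8.0, 9.0, 7.5]])
is_anomaly = km.transform(new_sample).min(axis=1) > threshold
print(is_anomaly)  # True -> try to match against predefined root causes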
APA, Harvard, Vancouver, ISO, and other styles
12

Wepener, Clare. "The development and validation of a questionnaire on Root Cause Analysis." Master's thesis, Faculty of Health Sciences, 2021. http://hdl.handle.net/11427/33082.

Full text
Abstract:
Background: Root Cause Analysis (RCA) is a method of investigating adverse events (AEs). The purpose of RCA is to improve quality of care and patient safety through a retrospective, structured investigative process of an incident, resulting in recommendations to prevent the recurrence of medical errors. Aim: The aim of the study was to develop and validate a prototype questionnaire to establish whether the RCA model and processes employed at the research setting were perceived by the users to be acceptable, thorough and credible in terms of internationally established criteria. Methods: This is a validation study comprising four phases to meet the study objectives: 1) the development of a prototype questionnaire guided by a literature review; 2) assessment of the content validity of the questionnaire and numerical evaluation of its face validity; 3) qualitative assessment of face validity through cognitive interviews; and 4) assessment of reliability by test-retest. Results: Content validity assessment in Phase 2 resulted in the removal of 1/36 (2.77%) question items and the amendment of 7/36 (19.44%), leaving 35 items for the revised questionnaire. Analysis of data from the cognitive interviews resulted in the amendment of 20/35 (57.14%) question items but no removals. Reliability of the final questionnaire achieved the predetermined ≥0.7 level of agreement. Conclusion: The questionnaire achieved a high content validity index, and face validity was enhanced by cognitive interviews providing qualitative data. The inter-rater coefficient indicated a high level of reliability. The tool was designed for a local private healthcare sector, which may limit its use.
APA, Harvard, Vancouver, ISO, and other styles
13

Sun, Xuewen M. Eng Massachusetts Institute of Technology, and Bangqi Yin. "A root cause analysis of stock-outs in the pharmaceutical industry." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92120.

Full text
Abstract:
Thesis: M. Eng. in Logistics, Massachusetts Institute of Technology, Engineering Systems Division, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 45-46).
PharCo (an assumed name) is a leading global healthcare company with well-recognized brands of both pharmaceutical and consumer healthcare products. As PharCo continues to expand its global presence, product stock-outs in their pharmaceutical business unit have been consistently increasing. PharCo suspected that manufacturing quality defects were a major cause of stock-outs, reducing the production yield and preventing the company from meeting customer demand. To help test this hypothesis and address the stock-out challenge, we reviewed existing research on the subject of product stock-outs within the pharmaceutical industry. To understand PharCo's manufacturing process, we conducted on-site visits and reviewed their quality control practices. Finally, we designed a mixed methods approach that combines qualitative and quantitative techniques to analyze the root causes of product stock-outs at PharCo. The analysis revealed that, instead of manufacturing quality defects, regulatory issues were the primary cause for stock-outs at PharCo. Regulatory challenges associated with developments such as new product launches, license renewals, and formulation modifications need to be addressed for PharCo to reduce their stock-out level.
by Xuewen Sun and Bangqi Yin.
M. Eng. in Logistics
APA, Harvard, Vancouver, ISO, and other styles
14

Roberts, J. (Juho). "Iterative root cause analysis using data mining in software testing processes." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201604271548.

Full text
Abstract:
In order to remain competitive, companies need to be constantly vigilant and aware of current trends in the industry in which they operate. The terms big data and data mining have exploded in popularity in recent years, and will continue to do so with the launch of the internet of things (IoT) and the 5th generation of mobile networks (5G) in the next decade. Companies need to recognize the value of the big data they are generating in their day-to-day operations, and learn how and why to exploit data mining techniques to extract the most knowledge out of the data their customers and the company itself are generating. The root cause analysis of faults uncovered during base station system testing is a difficult process due to the profound complexity caused by the multi-disciplinary nature of a base station system, and the sheer volume of log data output by the numerous system components. The goal of this research is to investigate whether data mining can be exploited to conduct root cause analysis. It took the form of action research conducted in industry at an organisational unit responsible for the research and development of mobile base station equipment. In this thesis, we survey existing literature on how data mining has been used to address root cause analysis. We then propose a novel approach to root cause analysis by making iterations of the root cause analysis process with data mining. We use the data mining tool Splunk in this thesis as an example; however, the practices presented in this research can be applied to other similar tools. We conduct root cause analysis by mining system logs generated by mobile base stations, to investigate which system component is causing a base station to fall short of its performance specifications. We then evaluate and validate our hypotheses by conducting a training session for the test engineers to collect feedback on the suitability of data mining in their work. The results from the evaluation show that, amongst other benefits, data mining makes root cause analysis more efficient, and also makes bug reporting in the target organisation more complete. We conclude that data mining techniques can be a significant asset in root cause analysis. The efficiency gains are significant in comparison to the manual root cause analysis currently being conducted at the target organisation.
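As a rough, library-neutral stand-in for the kind of log-mining query the thesis runs in Splunk, the sketch below counts error signatures per component in base-station logs. The log format and component names are invented.

# Sketch: count error signatures per component in system logs.
# The log format is invented; a stand-in for a log-mining tool query.
import re
from collections import Counter

logs = [
    "2016-03-01 12:00:01 [DSP-2] ERROR buffer overrun",
    "2016-03-01 12:00:03 [L2-SCHED] WARN late TTI",
    "2016-03-01 12:00:04 [DSP-2] ERROR buffer overrun",
    "2016-03-01 12:00:09 [DSP-2] ERROR crc mismatch",
]

pattern = re.compile(r"\[(?P<component>[\w-]+)\] ERROR (?P<msg>.*)")
errors = Counter(
    (m["component"], m["msg"])
    for line in logs if (m := pattern.search(line))
)

# The component dominating the error counts is the first root-cause suspect.
for (component, msg), count in errors.most_common(3):
    print(component, msg, count)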
APA, Harvard, Vancouver, ISO, and other styles
15

López, Sergio. "Anomaly Detection and Root Cause Analysis for LTE Radio Base Stations." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231618.

Full text
Abstract:
This project aims to detect possible anomalies in the resource consumption of radio base stations within the 4G LTE Radio architecture. This has been done by analyzing the statistical data that each node generates every 15 minutes, in the form of "performance maintenance counters". In this thesis, we introduce methods that allow resources to be automatically monitored after software updates, in order to detect any anomalies in the consumption patterns of the different resources compared to the reference period before the update. Additionally, we also attempt to narrow down the origin of anomalies by pointing out parameters potentially linked to the issue.
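One simple way to realize the comparison this abstract describes, sketched here with invented counter names and a nonparametric test chosen for illustration: compare each counter's post-update samples against its pre-update reference window.

# Sketch: flag PM counters whose post-update distribution deviates from the
# pre-update reference window (hypothetical counters, not the thesis method).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
reference = {"cpu_load": rng.normal(40, 5, 672),   # one week of 15-min samples
             "mem_used": rng.normal(60, 4, 672)}
after = {"cpu_load": rng.normal(41, 5, 672),       # unchanged
         "mem_used": rng.normal(75, 4, 672)}       # regression after update

for counter in reference:
    stat, p = mannwhitneyu(reference[counter], after[counter])
    if p < 0.01:
        print(f"{counter}: consumption pattern changed (p={p:.1e})")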
APA, Harvard, Vancouver, ISO, and other styles
16

Bedford, Nigel St John. "Hong Kong's 1997 problem : critical analysis of its root cause(s) /." Thesis, [Hong Kong : University of Hong Kong], 1992. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13302164.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Riesel, Max. "Root cause analysis using Bayesian networks for a video streaming service." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252717.

Full text
Abstract:
In this thesis, an approach for localizing the culprits of degradation of quality measures in an IPTV streaming service using Bayesian networks is presented. This task is referred to as Root Cause Analysis (RCA). The objective of this thesis is to develop a model that is able to provide useful information to technicians by generating a list of probable root causes in order to shorten the amount of time spent on troubleshooting. A performance comparison is presented in the section on experimental results between Bayesian models such as Naive Bayes (NB), Tree Augmented Naive Bayes (TAN) and Hill Climbing (HC) and the non-Bayesian methods K-Nearest Neighbors and Random Forest. The results of the RCA models indicated that the most frequent most probable cause of degradation of quality is the signal strength of the user's Wi-Fi, as reported at the user's TV box.
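The comparison described here can be approximated with a short sketch. The data is synthetic, and GaussianNB stands in for the Bayesian models; the thesis' TAN and Hill Climbing variants are not part of scikit-learn.

# Sketch: comparing Bayesian and non-Bayesian classifiers on labelled fault
# records (synthetic data, invented features).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)  # 3 root-cause labels

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("k-NN", KNeighborsClassifier()),
                  ("Random Forest", RandomForestClassifier(random_state=0))]:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {score:.2f}")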
APA, Harvard, Vancouver, ISO, and other styles
18

Lightner, Cynthia. "Experiences and Barriers for Patient Safety Officers Conducting Root Cause Analysis." ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/3796.

Full text
Abstract:
Research shows that, when unintentional harm to patients in outpatient and hospital settings occurs, root cause analysis (RCA) investigations should be conducted to identify and implement corrective actions to prevent future patient harm. Executives at a small healthcare consulting company that employs patient safety officers (PSOs) responsible for conducting RCAs were concerned with the low quality of RCA outcomes, prompting this postinvestigation assessment of PSOs' RCA training and experiences. Guided by adult learning theory, the purpose of this study was to assess PSOs' RCA training and investigation experiences by examining self-reported benefits, attitudes, barriers, and time since training, and the relationship between time since training and the number of barriers encountered during RCA investigations. This quantitative study used a preestablished survey with a purposeful sample of 89 PSOs located at 75 military health care facilities in the United States and abroad. Data analysis included descriptive statistics and Kendall's tau-b correlations. Results indicated that PSOs had positive training experiences, valued RCA investigations, varied on the time since RCA training, and encountered barriers conducting RCAs. Kendall's tau-b correlation analysis showed that the time since training was not significantly associated with the frequency of barriers they encountered. Findings suggest that the transfer of technical RCA knowledge was applied during actual RCA investigations regardless of time since training, and barriers contributed to subpar quality RCA outcomes. RCA professional development was designed to enhance nontechnical, soft competency skills as a best practice to overcome encountered barriers and promote social change in the field.
APA, Harvard, Vancouver, ISO, and other styles
19

Patsanis, Alexandros. "Network Anomaly Detection and Root Cause Analysis with Deep Generative Models." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-397367.

Full text
Abstract:
The project's objective is to detect network anomalies happening in a telecommunication network due to hardware malfunction or software defects after a vast upgrade of the network's system over a specific area, such as a city. The network's system generates statistical data at 15-minute intervals for different locations in the area of interest. For every interval, all statistical data generated over an area are aggregated and converted to images. In this way, an image represents a snapshot of the network for a specific interval, where statistical data are represented as points having different density values. To address that problem, this project makes use of Generative Adversarial Networks (GANs), which learn a manifold of the normal network pattern. Mapping new unseen images to the learned manifold then yields an anomaly score used to detect anomalies. The anomaly score is a combination of the reconstruction error and the learned feature representation. Two models for detecting anomalies are used in this project, AnoGAN and f-AnoGAN. Furthermore, f-AnoGAN uses a state-of-the-art approach called Wasserstein GAN with gradient penalty, which improves the initial implementation of GANs. Both quantitative and qualitative evaluation measurements are used to assess the GAN models, where F1 score and Wasserstein loss are used for quantitative evaluation and linear interpolation in the hidden space for qualitative evaluation. Moreover, to set a threshold, a prediction model is used to predict the expected behaviour of the network for a specific interval. The predicted behaviour is then fed to the anomaly detection model to define a threshold automatically. Our experiments were implemented successfully for both the prediction and anomaly detection models. We additionally tested known abnormal behaviours, which were detected and visualised. However, more research has to be done on the evaluation of GANs, as there is no universal approach to evaluating them.
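The anomaly score the abstract mentions, combining reconstruction error with the learned feature representation, takes roughly the following form in the f-AnoGAN literature. The function below is a sketch with placeholder networks, not the project's code.

# Sketch of an f-AnoGAN-style anomaly score: reconstruction error plus a
# discriminator-feature residual. G, E and f are placeholders for trained
# generator, encoder and discriminator-feature networks.
import numpy as np

def anomaly_score(x, G, E, f, kappa=1.0):
    x_hat = G(E(x))                            # reconstruction via latent space
    residual = np.mean((x - x_hat) ** 2)       # image-space error
    feature = np.mean((f(x) - f(x_hat)) ** 2)  # feature-space error
    return residual + kappa * feature          # high score -> anomalous snapshot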
APA, Harvard, Vancouver, ISO, and other styles
20

Mdini, Maha. "Anomaly detection and root cause diagnosis in cellular networks." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2019. http://www.theses.fr/2019IMTA0144/document.

Full text
Abstract:
With the evolution of automation and artificial intelligence tools, mobile networks have become more and more machine reliant. Today, a large part of their management tasks runs in an autonomous way, without human intervention. In this thesis, we have focused on taking advantage of data analysis tools to automate the troubleshooting task and carry it to a deeper level. To do so, we have defined two main objectives: anomaly detection and root cause diagnosis. The first objective is about detecting issues in the network automatically without including expert knowledge. To meet this objective, we have proposed an algorithm, Watchmen Anomaly Detection (WAD), based on pattern recognition. It learns patterns from periodic time series and detects distortions in the flow of new data. The second objective aims at identifying the root cause of issues without any prior knowledge about the network topology and services. To address this question, we have designed an algorithm, Automatic Root Cause Diagnosis (ARCD), that identifies the roots of network issues. ARCD is composed of two independent threads: Major Contributor identification and Incompatibility detection. WAD and ARCD have been proven to be effective. However, many improvements of these algorithms are possible.
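A minimal stand-in for the pattern-recognition idea behind WAD, under the assumption of a daily-periodic KPI (the published algorithm is more elaborate): build a robust per-slot baseline from history and flag slots whose deviation exceeds a few median absolute deviations.

# Sketch: learn a baseline from a periodic KPI series and flag distortions.
# A stand-in for the idea behind WAD, not the published algorithm.
import numpy as np

period = 96                               # 15-min samples per day
rng = np.random.default_rng(3)
history = np.tile(np.sin(np.linspace(0, 2 * np.pi, period)), 14)
history += rng.normal(0, 0.05, history.size)   # two weeks of normal traffic

by_slot = history.reshape(-1, period)
baseline = np.median(by_slot, axis=0)
mad = np.median(np.abs(by_slot - baseline), axis=0) + 1e-9

today = baseline.copy()
today[40:44] = 0.0                        # simulated mid-day outage

deviation = np.abs(today - baseline) / mad
print(np.where(deviation > 6)[0])         # slots flagged as anomalous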
APA, Harvard, Vancouver, ISO, and other styles
21

Tian, Yue. "On improving estimation of root cause distribution of volume diagnosis." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6653.

Full text
Abstract:
Identifying the common root causes of systematic defects in a short time is crucial for yield improvement. Diagnosis-driven yield analysis (DDYA), such as root cause deconvolution (RCD), is a method to estimate the root cause distribution by applying statistical analysis to volume diagnosis. By fixing the identified common root causes, yield can be improved. With advanced technologies, smaller feature sizes and more complex fabrication processes for manufacturing VLSI semiconductor devices lead to more complicated failure mechanisms. Lack of domain knowledge of such failure mechanisms makes identifying the emerging root causes more and more difficult. These root causes include, but are not limited to, layout pattern root causes (certain prone-to-fail layout shapes) and cell internal root causes. RCD has proved to have a certain degree of success in previous work; however, these root causes are not included and pose a challenge for RCD. Furthermore, the complexity of volume diagnosis makes investigation of RCD difficult. To overcome the above challenges, improvement based on a better understanding of the method is desired. The first part of this dissertation proposes a card game model to create controllable diagnosis data which can be used to evaluate the effectiveness of DDYA techniques. Generally, each DDYA technique can have its own potential issues, which need to be evaluated for future improvement. However, due to the limitations of real diagnosis data, it is difficult to 1) obtain diagnosis data with sufficient diversity and 2) isolate certain issues and evaluate them separately. With the card game model and correct statistical model parameters, the impact of different diagnosis scenarios on RCD is evaluated. The overfitting problem caused by limited sample size is alleviated by the proposed cross-validation method. In the second part of this dissertation, an enhanced RCD flow based on pre-extracted layout patterns is proposed to identify layout pattern root causes. Prone-to-fail layout patterns are crucial factors for yield loss, but they normally come in an enormous number of types, which impacts the effectiveness of RCD. A controlled experiment shows the effectiveness of the enhanced RCD on both layout pattern root causes and interconnect root causes after the extension to layout pattern root causes. A test case from silicon data also validates the proposed flow. The last part of this dissertation addresses the extension of RCD to cell internal root causes. Due to limited domain knowledge of both the diagnosis process and defect behavior, the parameters of the RCD model are not perfectly accurate. As RCD moves to identify cell internal root causes, this limitation becomes an inescapable challenge: owing to the inherent characteristics of cell internal root causes, RCD faces more difficulty from less accurate model parameters. Rather than enhancing domain knowledge, supervised learning of more accurate parameters from training data is proposed to improve the accuracy of RCD. Both controlled experiments and real silicon data show that, with parameters learned through supervised learning, the accuracy of RCD with cell internal root causes is greatly improved.
APA, Harvard, Vancouver, ISO, and other styles
22

Philip, Justin. "Root cause analysis of production defects in a foundry using lean tools." Master's thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/14565.

Full text
Abstract:
Includes bibliographical references.
Lean manufacturing is one of the philosophies that many major businesses have been trying to adopt in order to remain competitive in an increasingly global market. This research project focuses on the implementation of lean principles, standardizing operations in the production line and thereby improving productivity. The study is conducted in a large-scale metal casting company, Atlantis Foundries, which manufactures cylinder blocks and gearbox castings. At present, the scrap and rework rates of heavy-duty cores exceed set targets; this is a major quality concern for the company. From the literature, it is known that the introduction of standardised work is one of the best practices for building quality into products. Therefore, the project focuses on the introduction of standardised work at the core shop heavy-duty flow line to reduce the scrap rate, reduce the rework rate and increase the production of heavy-duty cores. To know how employees accept standardised work, it is essential to diagnose employee behaviour. The project analyses the behaviour of core shop heavy-duty flow line employees towards the introduction of standardised work. Moreover, it analyses the personal and training development of employees, whether employees structure their work environment (5S), and the existence of the seven wastes at the core shop heavy-duty flow line by means of a structured standardised questionnaire. Finally, the project uses an A3 practical problem solving (PPS) report to analyse the root cause of production defects, addressing the increase in rework that occurred at the core shop heavy-duty flow line after the introduction of standardised work. Standardised work was introduced with the generation of standard work instructions, job element sheets, a skills training matrix and layered process audits prepared in consultation with the operators. Analysis of production figures showed that the introduction of standardised work reduced the scrap rate, reduced the rework rate and increased production. Analysis of employee behaviour, the personal and training development of employees, the structuring of the work environment (5S) and the existence of the seven wastes via the questionnaire led to the respective conclusions that employees are satisfied with standardised work, the personal and training development of employees increased, employees structure their work environment, and the seven wastes are less present at the core shop heavy-duty flow line. The PPS analysis found that the increase in rework was due to the worn bushes of Machine 150. Hence, checks for the worn bushes of Machine 150 were included in changeover procedures and total productive maintenance activities. The project suggests that lean tools such as standard work instructions, job element sheets, skills training matrices and layered process audits should be introduced in each department of the company. Standard work instructions should be introduced for changeovers as well as for total productive maintenance checks. TPM checks must be done regularly at all stations of the core shop heavy-duty flow line.
APA, Harvard, Vancouver, ISO, and other styles
23

Zasadziński, Michał. "Model driven root cause analysis and reliability enhancement for large distributed computing systems." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/663480.

Full text
Abstract:
Over the last years, the number of Big Data, supercomputing, Internet of Things (IoT) and edge systems has snowballed. Large, distributed, and complex IT systems form the core of many areas and services in academia and business. Any failure or performance degradation occurring in these systems causes negative effects, e.g., decreased user experience or raised operational costs. In response, IT operators resolve failures, issues, and unexpected events. Operators are aided by tools for monitoring, diagnostics, and root cause analysis (RCA), and perform actions to recover a system to its normal state. However, the characteristics of future IT systems make diagnostics and root cause analysis complicated, so even the most skillful operators have problems satisfying QoS constraints. In this thesis, we would like to aid operators' work and, in the long term, substitute for them in RCA. We contribute to such environments in two areas: diagnostics, RCA and root cause classification; and prevention of failures. We focus on aspects such as scalability, dynamism, lack of knowledge on system failures, predictability, and prevention of failures, and use different IT environments to diversify the use cases. Firstly, we propose a fast RCA system based on probabilistic reasoning. The system can diagnose networks of devices with millions of nodes in a diagnostic model and solves the problem of scalability of RCA. We create diagnostic models based on Bayesian networks and then transform them into a more efficient structure for runtime use, namely Arithmetic Circuits. Thanks to the proposed optimization in this transformation and a cache-based mechanism, the solution achieves great performance. We also propose actor-based RCA, a method based on distributing diagnostic calculations across the devices and on the self-diagnostics paradigm. Thanks to this solution, results of partial diagnosis are known even when connectivity with a part of the diagnosed system is lost. We show that the contribution works well in a simulated IoT system with high dynamism in its structure. Secondly, we focus on knowledge integration and partial knowledge of a diagnosed system. The path to NoOps involves precise, reliable and fast diagnostics, and also reusing as much knowledge as possible after the system is reconfigured or changed. We propose a weighted-graph framework which can transfer knowledge and perform high-quality diagnostics of IT systems. We encode all possible data in a graph representation of a system state and automatically calculate the weights of these graphs. Then, thanks to a similarity evaluation, we transfer knowledge about failures from one system to another and use it for diagnostics. We successfully evaluate the proposed approach on a Big Data cluster and a cloud system of containers running Spark, Hadoop, Kafka and Cassandra. Thirdly, we focus on the predictability of a supercomputing environment and the prevention of failures. Failed jobs in a supercomputer waste, e.g., CPU time and energy. Mining data collected during the operation of data centers helps to find patterns explaining failures and can be used to predict them. Automating system reactions, e.g., early termination of jobs when software failures are predicted, not only increases availability and reduces operating cost, but also frees people's time.
We explore a unique dataset containing the topology, operation metrics, and job scheduler history from the petascale Mistral supercomputer. We extract the most relevant system features deciding the final state of a job through decision trees, and then propose actions to prevent failures. We create a model to predict job evolution based on the power time series of nodes. Finally, we evaluate the effect on CPU time saving for static and dynamic job termination policies. We finish the thesis with a discussion of the contributions and directions for future work.
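The decision-tree step described above can be sketched as follows, on synthetic data standing in for the Mistral topology, metrics, and scheduler history.

# Sketch: predicting a job's final state from scheduler/node features with a
# decision tree (synthetic data; the real Mistral features are richer).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 3))            # e.g. mean node power, load, runtime
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 2000) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("accuracy:", tree.score(X_te, y_te))
print("feature importances:", tree.feature_importances_)  # which signals decide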
In recent years, the volume of Big Data, supercomputing, IoT devices, and edge systems has grown dramatically. Large, complex distributed information technology (IT) systems form the core of many areas and services in academia and industry. Any failure or performance degradation occurring in these systems can have significant adverse effects. In response, IT operators are tasked with resolving failures, problems, and unexpected events. However, the characteristics of emerging and future IT systems make diagnosis and root cause analysis (RCA) difficult and complex. Even the most skilled operators struggle to manage these systems so as to meet expected quality-of-service levels and deliver a flawless user experience. In this thesis, we aim to assist the operator's work and, in the long term, to replace it with an automated RCA system. To that end, we contribute in two areas: the diagnosis and classification of failures, and their prevention. In particular, we focus on aspects such as scalability, dynamism, lack of knowledge about system failures, predictability, and failure prevention. For each of these aspects, we use a different IT environment. First, we propose a fast RCA system based on probabilistic reasoning. The system can diagnose networks of devices with millions of nodes in a single diagnostic model and solves the problem of RCA scalability. We build diagnostic models based on Bayesian networks. Thanks to the optimization proposed for these networks, the solution outperforms state-of-the-art techniques in terms of memory consumption and runtime. In addition, we propose an actor-based RCA. This method relies on distributing diagnostic computations across the devices and on the self-diagnosis paradigm. We show that this contribution works well in a simulated IoT system with a highly dynamic structure. Second, we focus on knowledge integration and on partial knowledge of a diagnosed system. The path towards NoOps involves reusing as much knowledge as possible after a system is reconfigured or changed. We propose a weighted-graph-based approach that can transfer knowledge between different systems and perform high-quality diagnosis of IT systems. We encode all available data into a graph of a system state. Then, using a graph similarity function, we transfer failure knowledge from one system to another and use it for diagnosis. We successfully evaluate the proposed approach on Spark, Hadoop, Kafka, and Cassandra systems, using a Big Data cluster and a containerized cloud system. Third, we focus on the predictability of a supercomputing environment and on failure prevention. Failed jobs in a supercomputer cause losses in, e.g., CPU time and energy consumption. Data collected during data center operation can help find patterns that explain failures and can be used to predict them. We explore a unique dataset containing the topology, operation metrics, and job scheduler history of the Mistral supercomputer. We extract, through decision trees, the system features most relevant to deciding the final state of a job. We propose actions to prevent failures. We build a model to predict job evolution based on the power time series of nodes. Finally, we evaluate the effect on CPU time savings of the different job termination policies. We close the thesis with a brief discussion of its contributions and considerations of possible future work.
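The decision-tree step in the abstract above lends itself to a brief illustration. The sketch below is not the author's code: the job features, synthetic data, and failure rule are hypothetical stand-ins for the Mistral scheduler history and node power metrics.

```python
# Hypothetical sketch: classify a job's final state with a decision tree
# and inspect which features decide it. Feature names and the failure
# rule are invented; the thesis uses real Mistral operation data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(1, 128, n),     # nodes_requested (hypothetical)
    rng.normal(300, 50, n),      # mean_node_power_watts (hypothetical)
    rng.exponential(3600, n),    # elapsed_seconds (hypothetical)
])
y = ((X[:, 1] > 340) & (X[:, 2] < 1800)).astype(int)  # 1 = job failed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", tree.score(X_te, y_te))
# The learned splits show which features decide the final job state
print(export_text(tree, feature_names=[
    "nodes_requested", "mean_node_power_watts", "elapsed_seconds"]))
```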
APA, Harvard, Vancouver, ISO, and other styles
24

Parthasarathy, Sailashri 1982. "Application of artificial intelligence techniques for root cause analysis of customer support calls." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111276.

Full text
Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, in conjunction with the Leaders for Global Operations Program at MIT, 2017.
Thesis: S.M. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, in conjunction with the Leaders for Global Operations Program at MIT, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 66-68).
Dell Technologies seeks to use the advancements in the field of artificial intelligence to improve its products and services. This thesis aims to implement artificial intelligence techniques in the context of Dell's Client Solutions Division, specifically to analyze the root cause of customer calls so actions can be taken to remedy them. This improves the customer experience while reducing the volume of calls, and hence costs, to Dell. This thesis evaluated the external vendor landscape for text analytics, developed an internal proof-of-concept model using open source algorithms, and explored other applications for artificial intelligence within Dell. The external technologies were not a good fit for this use case at this time. The internal model achieved an accuracy of 72%, which was above the acceptable internal threshold of 65%, thus making it viable to replace manual analytics with an artificial intelligence model. Other applications were identified in the Client Solutions division as well as in the Support and Services, Supply Chain, and Sales and Marketing divisions. Our recommendations include developing a production model from the internal proof-of-concept model, improving the quality of the call logs, and exploring the use of artificial intelligence across the business. Towards that end, the specific recommendations are: (i) to build division-based teams focused on deploying artificial intelligence technologies, (ii) to test speech analytics, and (iii) to develop a Dell-wide Center of Excellence. The division-based teams are estimated to incur an annual cost of $1.5M per team, while the Center of Excellence is estimated to cost $1.8M annually.
by Sailashri Parthasarathy.
M.B.A.
S.M. in Engineering Systems
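The abstract above does not name the open-source algorithms behind the proof-of-concept model. As a purely hypothetical illustration of call-log root cause classification of this kind, one common baseline combines TF-IDF features with a linear classifier; the call texts and labels below are invented.

```python
# Hypothetical baseline for classifying support calls by root cause.
# Real systems would train on thousands of labeled call logs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

calls = [
    "laptop will not power on after update",
    "battery drains quickly overnight",
    "blue screen when docking station attached",
    "wifi drops every few minutes",
]
labels = ["power", "power", "driver", "network"]  # invented root causes

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(calls, labels)
print(model.predict(["screen goes blue when I plug in the dock"]))
```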
APA, Harvard, Vancouver, ISO, and other styles
25

Sowade, Enrico, Eloi Ramon, Kalyan Yoti Mitra, Carme Martínez-Domingo, Marta Pedró, Jofre Pallarès, Fausta Loffredo, et al. "All-inkjet-printed thin-film transistors: manufacturing process reliability by root cause analysis." Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-211665.

Full text
Abstract:
We report on the detailed electrical investigation of all-inkjet-printed thin-film transistor (TFT) arrays, focusing on TFT failures and their origins. The TFT arrays were manufactured on flexible polymer substrates under ambient conditions, without the need for a cleanroom environment or inert atmosphere, and at a maximum temperature of 150 °C. Alternative manufacturing processes for electronic devices such as inkjet printing suffer from lower accuracy compared to traditional microelectronic manufacturing methods. Furthermore, printing methods usually do not allow the manufacturing of electronic devices with high yield (a high number of functional devices). In general, the manufacturing yield is much lower than that of the established conventional manufacturing methods based on lithography. Thus, the focus of this contribution is a comprehensive analysis of defective TFTs printed by inkjet technology. Based on root cause analysis, we present the defects by developing failure categories and discuss the reasons for the defects. This procedure identifies failure origins and allows optimization of the manufacturing process, ultimately leading to a yield improvement.
APA, Harvard, Vancouver, ISO, and other styles
26

Lambert, Madeline (Madeline Marie). "A root cause analysis of REXIS detection efficiency loss during phase E operations." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127077.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 203-205).
The Regolith X-ray Imaging Spectrometer (REXIS) is a student-built instrument flown on NASA's Origins, Spectral Interpretation, Resource Identification, Safety, Regolith Explorer (OSIRIS-REx) mission. During main science operations, the instrument experienced detector efficiency loss in the form of loss of iron calibration source counts, which greatly affected the science output. In this thesis, a root cause investigation is performed on the loss of iron counts, and an optical light leak onto the edge of the instrument's detectors is identified as the most likely cause. A CAST analysis is then performed to identify possible organizational and cultural causes of the design that allowed for an optical light leak, and recommendations for future similar instruments (low-cost, high-risk) are made.
by Madeline Lambert.
S.M.
S.M. Massachusetts Institute of Technology, Department of Aeronautics and Astronautics
APA, Harvard, Vancouver, ISO, and other styles
27

Rademeyer, Anerie. "The development of a root cause analysis process for variations in human performance." Thesis, Pretoria : [s.n.], 2009. http://upetd.up.ac.za/thesis/available/etd-04012009-231223/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Perez, Bianca. "A root cause analysis of the barriers to transparency among physicians: a systemic perspective." Doctoral diss., University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4821.

Full text
Abstract:
Transparency in healthcare relates to formally reporting medical errors and disclosing bad outcomes to patients and families. Unfortunately, most physicians are not in the habit of communicating transparently, as many studies have shown the existence of a large medical error information gap. Research also shows that creating a culture of transparency would mutually support patient safety and risk management goals by concomitantly reducing medical errors and alleviating the malpractice crisis. Three predictor variables are used to represent the various dimensions of this context. Perfectionism represents the intrapersonal domain, socio-organizational climate represents the interpersonal and institutional domains, and medico-legal environment represents the societal domain. Chin and Benne's normative re-educative strategy provides theoretical support for the notion that successful organizational change hinges upon addressing the structural and cultural barriers displayed by individuals and groups. The Physician Transparency Questionnaire was completed by 270 physicians who were drawn from a multi-site healthcare organization in Central Florida. Structural equation modeling was used to determine whether perfectionism, socio-organizational climate, and medico-legal environment significantly predict two transparency outcomes, namely, error reporting transparency and provider-patient transparency. Perfectionism and socio-organizational climate were found to be statistically significant predictors. Collectively, these variables accounted for nearly half of the variance in each transparency outcome. Within socio-organizational climate, policies had the greatest influence on transparency, followed by immunity and professional norms. Multiple group analysis showed that the covariance model developed in this study generalizes across gender, medical specialty, and occupation. In addition, group means comparison tests revealed a number of interesting trends in error reporting and disclosure practices that provide insights into the behavioral and cognitive psychology behind transparent communication: 1) physicians are more inclined to engage in provider-patient transparency than in error reporting transparency, 2) physicians are more inclined to report serious errors than less serious errors, and 3) physicians are more inclined to express sympathy for bad outcomes than to apologize for a preventable error or be honest about the details surrounding bad outcomes. These results suggest that change efforts would need to be directed at medical education curricula and health provider organizations to ensure that current and future generations of physicians replace the pursuit of perfectionism with the pursuit of excellence. Also, a number of institutional changes are recommended, such as clearly communicating transparency policies and guidelines, promoting professional norms that encourage learning from mistakes rather than an aversion to error, and reassuring physicians that reporting and disclosure activities will not compromise their reputation. From the perspective of patient safety advocates and risk managers, the results are heartening because they emphasize a key principle in quality improvement: small changes can yield big results. From an ethical standpoint, this research suggests that healthcare organizations can inhibit (or facilitate) the emergence of professional virtues.
Thus, although organizations cannot make a physician become virtuous, it is within their power to create conditions that encourage the physician to practice certain virtues. With respect to leadership styles, this research finds that bottom-up, grassroots change efforts can elicit professional virtues, and that culture change in healthcare lies beyond the scope of the medico-legal system.
ID: 030646191; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Thesis (Ph.D.)--University of Central Florida, 2011; Includes bibliographical references (p. 126-142).
Ph.D.
Doctorate
Health and Public Affairs
Public Affairs
APA, Harvard, Vancouver, ISO, and other styles
29

Jacob, Thomas. "Root cause analysis of low on-time delivery performance at a computer manufacturing plant." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/46079.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management, 1997, and Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1997.
Includes bibliographical references (p. 73).
by Thomas Jacob.
M.S.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
30

Harper, Benjamin C. "Root cause analysis and mitigation paths for persistent inventory shortages to an assembly area." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43835.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2008.
Includes bibliographical references (p. 89-90).
The strategic alignment of a company impacts the culture of the organization, which in turn reinforces the strategic alignment. The corporate behavior resulting from the combination of alignment and culture determines the organization's ability to handle disruption and change. This thesis explores the intersection of these two elements in the context of experience gained at Spirit AeroSystems through an internship. The importance of Spirit's alignment and culture comes to light in observing the responses of different parts of the organization to a supply shock caused by an industry-wide titanium and aluminum shortage. A method to analytically assess delinquent part delivery and determine the optimal balance of increased upstream labor capacity versus downstream cost avoidance is presented. This information requires a supportive organizational structure to be utilized fully, and the viability of such a structure depends heavily on the existing culture. Several organizational structures are proposed to internalize the external costs of delinquency, and the cultural viability of these options is explored. The key attributes of a viable, effective structure are control by the Fuselage customer, cultural infusion, and strategic coordination with Supply Chain Management.
by Benjamin C. Harper.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
31

von, Hacht Johan. "Anomaly Detection for Root Cause Analysis in System Logs using Long Short-Term Memory." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301656.

Full text
Abstract:
Many software systems are under test to ensure that they function as expected. Sometimes a test can fail, and in that case it is essential to understand the cause of the failure. However, as systems grow larger and become more complex, this task can become non-trivial and potentially take much time. Therefore, automating the process of root cause analysis, even partially, can save time for the developers involved. This thesis investigates the use of a Long Short-Term Memory (LSTM) anomaly detector on system logs for root cause analysis. The implementation is evaluated in a quantitative and a qualitative experiment. The quantitative experiment evaluates the performance of the anomaly detector in terms of precision, recall, and F1 measure. Anomaly injection is used to measure these metrics, since there are no labels in the data. Additionally, the LSTM is compared with a baseline model. The qualitative experiment evaluates how effective the anomaly detector could be for root cause analysis of the test failures. This was evaluated in interviews with an expert in the software system that produced the log data the thesis uses. The results show that the LSTM anomaly detector achieved a higher F1 measure than the proposed baseline implementation, thanks to its ability to detect unusual events and events happening out of order. The qualitative results indicate that the anomaly detector could be used for root cause analysis. In many of the evaluated test failures, the expert being interviewed could deduce the cause of the failure. Even if the detector did not find the exact issue, a particular part of the software might be highlighted, meaning that it produces many anomalous log messages. With this information, the expert could contact the people responsible for that part of the application for help. In conclusion, the anomaly detector automatically collects the necessary information for the expert to perform root cause analysis, and could thus save the expert time on this task. With further improvements, it could also be possible for non-experts to utilise the anomaly detector, reducing the need for an expert.
Many software systems are tested to ensure that they work as intended. Sometimes a test can fail, and in that case it is important to understand why. This can become problematic as software systems grow and become more complex, since the task can turn non-trivial and time-consuming. Automating the troubleshooting process could save the developers involved a great deal of time. This thesis investigates the use of a Long Short-Term Memory (LSTM) anomaly detector for root cause analysis in logs. The implementation is evaluated through a quantitative and a qualitative study. The quantitative study evaluates the performance of the anomaly detector using precision, recall, and F1 measures. Artificially injected anomalies are used to compute these measures, since the data used has no labels. The implementation is also compared with a simpler baseline anomaly detector. The qualitative study evaluates how useful the anomaly detector is for root cause analysis of failed tests, through interviews with an expert on the software that produced the data used in this thesis. The results show that the LSTM anomaly detector achieved a higher F1 measure than the simple model, thanks to its ability to detect unusual log messages and log messages occurring out of order. The qualitative results indicate that the anomaly detector can be used for root cause analysis of failed tests. In many of the failed tests evaluated, the expert could find the reason for the failure from what the anomaly detector found. Even if the detector did not find the exact cause of a failed test, it can highlight a certain part of the software, meaning that that part produced many anomalies in the logs. With this information, the expert can contact other people who know that part of the software better for help. The anomaly detector automatically collects the information the expert needs to perform root cause analysis. Thanks to this, the expert can spend less time on this task. With some improvements, it could also be possible for less experienced developers to use the anomaly detector, reducing the need for an expert.
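As a rough sketch of the detection scheme both abstracts describe, the model below learns to predict the next log-event key and flags an event as anomalous when it falls outside the top-k predictions. This mirrors the general log-key LSTM approach (as in DeepLog) rather than the thesis's exact implementation, and the key stream here is synthetic.

```python
# Sketch of LSTM next-log-key prediction for anomaly detection.
# An event is anomalous if the observed key is not in the top-k predictions.
import torch
import torch.nn as nn

vocab, window, k = 8, 4, 2
seq = torch.randint(0, vocab, (500,))          # synthetic log-key stream

class NextKeyLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(vocab, 16)
        self.lstm = nn.LSTM(16, 32, batch_first=True)
        self.out = nn.Linear(32, vocab)
    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h[:, -1])              # logits for the next key

X = torch.stack([seq[i:i + window] for i in range(len(seq) - window)])
y = seq[window:]
model = NextKeyLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(20):                            # short demo training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    topk = model(X).topk(k, dim=1).indices
    anomalous = (topk != y.unsqueeze(1)).all(dim=1)
print("flagged", int(anomalous.sum()), "of", len(y), "events as anomalous")
```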
APA, Harvard, Vancouver, ISO, and other styles
32

Ali, Raman. "Root Cause Analysis for In-Transit Time Performance: Time Series Analysis for Inbound Quantity Received into Warehouse." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184062.

Full text
Abstract:
Cytiva is a global provider of technologies to pharmaceutical companies worldwide, and it is critical to ensure that Cytiva's customers receive deliveries of products on time. Cytiva's products are shipped via road transportation within most parts of Europe, and air freight is used for the rest of the world. The company is challenged to deliver products on time between regional distribution points and from manufacturing sites to its regional distribution centers. The on-time performance for the delivery of goods is today 79%, compared to the company's goal of 95%. The goal of this work is to find the root causes and recommend improvement opportunities for the logistics organization's inbound in-transit time performance, towards the target of a 95% success rate for shipping in-transit times. Data for this work was collected from the company's systems to create visibility for the logistics specialists and to create predictions for use at the warehouse in Rosersberg. Visibility was created by implementing various dashboards in the QlikSense program for use by the logistics group. The prediction models were built on the Holt-Winters forecasting technique to predict the quantity, weight, and volume of products arriving daily, five days ahead, which is sufficient for use in the daily work. With this forecasting technique, highly accurate models were found for both quantity and weight, with accuracies of 96.02% and 92.23%, respectively. For the volume, however, too many outliers were replaced by mean values, and the accuracy of the model was 75.82%. Large numbers of discrepancies were also found in the data, which has led to a large ongoing project to resolve them. This means that the models shown in this thesis cannot be considered completely reliable for the company to use while so many errors remain in the data; the models may need to be adjusted once data quality has improved. As of today, the models can be used as a rough guide.
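For readers unfamiliar with the technique, a minimal Holt-Winters sketch using statsmodels follows; the daily inbound-quantity series is synthetic, and the five-step horizon matches the five-day prediction window mentioned in the abstract.

```python
# Holt-Winters (triple exponential smoothing) on a synthetic daily
# inbound-quantity series with a weekly seasonal pattern.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
days = pd.date_range("2021-01-01", periods=120, freq="D")
qty = (1000 + 200 * np.sin(2 * np.pi * np.arange(120) / 7)
       + rng.normal(0, 50, 120))
series = pd.Series(qty, index=days)

fit = ExponentialSmoothing(series, trend="add", seasonal="add",
                           seasonal_periods=7).fit()
print(fit.forecast(5))   # five-day-ahead forecast of inbound quantity
```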
APA, Harvard, Vancouver, ISO, and other styles
33

Miller, Kristi. "Effect of Root Cause Analysis on Pre-Licensure, Senior-Level Nursing Students’ Safe Medication Administration Practices." Digital Commons @ East Tennessee State University, 2018. https://dc.etsu.edu/etd/3432.

Full text
Abstract:
Aim: The aim of this study was to examine if student nurse participation in root cause analysis has the potential to reduce harm to patients from medication errors by increasing student nurse sensitivity to signal and responder bias. Background: Schools of nursing have traditionally relied on strategies that focus on individual characteristics and responsibility to prevent harm to patients. The modern patient safety movement encourages utilization of systems theory strategies like Root Cause Analysis (RCA). The Patient Risk Detection Theory (Despins, Scott-Cawiezell, & Rouder, 2010) supports the use of nurse training to reduce harm to patients. Method: Descriptive and inferential analyses of the demographic and major study variables were conducted. Validity and reliability assessments for the instruments were performed. The Safe Administration of Medications-Revised Scale (SAM-R; Bravo, 2014) was used to measure sensitivity to signal. The Safety Attitudes Questionnaire (SAQ; Sexton et al., 2006) was used to assess responder bias; this was the first use of this instrument with nursing students. Results: The sample consisted of 125 senior-level nursing students from three universities in the southeastern United States. The SAQ was found to be a valid and reliable test of safety attitudes in nursing students. Further support for the validity and reliability of the SAM-R was provided. A significant difference in safety climate between schools was observed. There were no differences detected between the variables. Conclusion: The results of this study provide support for the use of the SAQ and the SAM-R to further test the PRDT, and to explore methods to improve nursing student ability to administer medications safely.
APA, Harvard, Vancouver, ISO, and other styles
34

Jain, Pranav. "Root cause analysis of solder flux residue incidence in the manufacture of electronic power modules." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/69489.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 75-76).
This work investigates the root causes of the incidence of solder flux residue underneath electronic components in the manufacture of power modules. The existing deionized-water-based centrifugal cleaning process was analyzed and hypotheses for root causes of the problem were proposed. The experimentation included cleaning tests using agitation and soak cycles, in which parameters such as chemical agent, time, and temperature were also varied. A novel method of determining residue incidence using visual inspection was proposed. Results suggest that the centrifugal process with water is incapable of providing enough agitation to clean the residue effectively. It was also found that product design and architecture greatly influence cleaning process effectiveness. It was concluded that effective printed circuit board cleaning requires high agitation and efficient product design.
by Pranav Jain.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
35

Shabbir, Muhammad Humas. "Streamlining information and part flow by re-designing process flow to aid root cause analysis." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113727.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 63).
Waters Corporation is a world-leading analytical instrument manufacturing company, with an overarching goal of achieving and maintaining high product robustness. Analysis of the challenge identifies a lack of root cause analysis as the core problem. This is further attributed to the inefficient process flow of information and parts: there is no tracking mechanism for parts, which stems from a lack of ownership and value at each stage of the root cause analysis phase. A new process flow is designed around the current process to address gaps and inefficiencies. This process flow redesign covers both the information flow for the hot parts list and the movement of parts; it will streamline the overall root cause process and, secondarily, help save cost and eliminate redundancies. A layout improvement solution is developed, and a plan for implementation recommended. The new process flow is designed to increase visual control of the process and to move material effectively. Each phase of the project has been reviewed and discussed to encourage stakeholder involvement in order to develop a continuous improvement culture.
by Muhammad Humas Shabbir.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
36

Liu, Qinyan. "Optimal coordinate sensor placements for estimating mean and variance components of variation sources." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/2238.

Full text
Abstract:
In-process Optical Coordinate Measuring Machines (OCMM) offer the potential of diagnosing, in a timely manner, the variation sources responsible for product quality defects. Such a sensor system can help manufacturers improve product quality and reduce process downtime. Effective use of sensory data in diagnosing variation sources depends on the optimal design of the sensor system, a problem often known as sensor placement. This thesis addresses coordinate sensor placement for diagnosing dimensional variation sources in assembly processes. Sensitivity indices for detecting process mean and variance components are defined as the design criteria and are derived in terms of process layout and sensor deployment information. Exchange algorithms, originally developed in the research on optimal experiment design, are employed and revised to maximize the detection sensitivity. A sort-and-cut procedure is used, which remarkably improves the efficiency of the current exchange routine. The resulting optimal sensor layouts and their implications are illustrated in the specific context of a panel assembly process.
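The thesis's sensitivity indices are specific to its process model and are not reproduced here. The sketch below instead shows a generic exchange-style search of the kind the abstract references, using the standard D-optimality criterion det(A^T A) over random candidate sensor rows as a stand-in objective.

```python
# Generic exchange algorithm sketch: swap one selected sensor for one
# candidate whenever the swap improves a D-optimality objective.
import numpy as np

rng = np.random.default_rng(2)
candidates = rng.normal(size=(30, 5))  # 30 candidate locations, 5 parameters
n_sensors = 8

def logdet(idx):
    A = candidates[list(idx)]
    sign, val = np.linalg.slogdet(A.T @ A)
    return val if sign > 0 else -np.inf

chosen = set(range(n_sensors))         # arbitrary initial layout
improved = True
while improved:
    improved = False
    for i in list(chosen):
        for j in range(len(candidates)):
            if j in chosen:
                continue
            trial = (chosen - {i}) | {j}
            if logdet(trial) > logdet(chosen) + 1e-12:
                chosen, improved = trial, True
                break                  # restart the scan after a swap
        if improved:
            break
print("selected sensor locations:", sorted(chosen))
```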
APA, Harvard, Vancouver, ISO, and other styles
37

Thorsell, Tobias. "Orsaksanalys och lösningsförslag vid fel vid kommunikation av växelläge." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Maskinteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-19025.

Full text
Abstract:
This bachelor-level degree project was carried out in cooperation with Kongsberg Automotive in Mullsjö, which develops and manufactures components for the automotive industry. The company has received complaints about a gear selector in its product range, a product fitted in the customers' trucks and buses. The failures have occurred relatively rarely, but by a large enough margin to be classified as serious. They are connected to the magnet arm that communicates with the vehicle's transmission, and their consequence is that the vehicle becomes unusable and must be towed away. The purpose of this thesis is for the author to tackle the problems with the gear selector's magnet arm in an engineering manner, so that the underlying root cause can be established. The goal of the work is to find the root causes of why the magnet arm breaks or jumps out of position, and to develop a design that prevents the problems from arising again. To structure the work, the author used a problem-solving method called Six Sigma DMAIC, around which the whole project, and thus the report, is built. The author concluded that the root causes of the problems with the magnet arm lay in the design of the components that provide the function of the product's gear buttons. Their design made it possible for the driver to incorrectly activate two buttons at the same time, which caused the product to be affected in the wrong way. The thesis resulted in a concept that, together with the company's own solution, removes the underlying root causes and prevents the problems from arising again.
This bachelor thesis was carried out in cooperation with Kongsberg Automotive AB, Mullsjö, which develops and produces parts for the automotive industry. The company has received complaints about a gear lever unit which they produce and which sits in the customers' trucks and buses. The failures have occurred relatively infrequently, but by a large enough margin to classify them as serious. These failures are connected to the magnet arm in the product which communicates with the transmission of the vehicle, and their consequence is an unusable truck in need of towing. The intent of this thesis is that the author should tackle the problems with the magnet arm on an engineering basis so that the root causes of the problems can be ascertained. The goal is to find these root causes of why the magnet arm breaks, or dislocates, and to generate a design that prevents the problems from reappearing. To structure the work, the author has used a method for problem solving called Six Sigma DMAIC, which is the base for the whole project and therefore the thesis. Through extensive analyses the author ascertained that the root causes of the problems with the magnet arm came from the design of parts, relating to the knob of the product, that enable two buttons to be activated simultaneously. The thesis resulted in a concept which, together with the company's own solution, removes the underlying root causes and prevents the problem from reappearing.
APA, Harvard, Vancouver, ISO, and other styles
38

Heravizadeh, Mitra. "Quality-aware business process management." Thesis, Queensland University of Technology, 2009. https://eprints.qut.edu.au/30410/1/Mitra_Heravizadeh_Thesis.pdf.

Full text
Abstract:
Business Process Management (BPM) has emerged as a popular management approach in both Information Technology (IT) and management practice. While there has been much research on business process modelling and the BPM life cycle, there has been little attention given to managing the quality of a business process during its life cycle. This study addresses this gap by providing a framework for organisations to manage the quality of business processes during different phases of the BPM life cycle. This study employs a multi-method research design which is based on the design science approach and the action research methodology. During the design science phase, the artifacts to model a quality-aware business process were developed. These artifacts were then evaluated through three cycles of action research which were conducted within three large Australian-based organisations. This study contributes to the body of BPM knowledge in a number of ways. Firstly, it presents a quality-aware BPM life cycle that provides a framework on how quality can be incorporated into a business process and subsequently managed during the BPM life cycle. Secondly, it provides a framework to capture and model quality requirements of a business process as a set of measurable elements that can be incorporated into the business process model. Finally, it proposes a novel root cause analysis technique for determining the causes of quality issues within business processes.
APA, Harvard, Vancouver, ISO, and other styles
39

Sagala, Ramadhan Kurniawan. "Visualization of Vehicle Usage Based on Position Data for Root-Cause Analysis: A Case Study in Scania CV AB." Thesis, Uppsala universitet, Människa-datorinteraktion, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-355909.

Full text
Abstract:
Root cause analysis (RCA) is a process in Scania carried out to understand the root cause of vehicle breakdowns. It is commonly done by studying vehicle warranty claims and failure reports, identifying patterns that are correlated to the breakdowns, and then analyzing the root cause based on those findings. Vehicle usage is believed to be one of the factors that may contribute to the breakdowns, but data on vehicle usage is not commonly utilized in RCA. This thesis investigates a way to support the RCA process by introducing a dataset of vehicle usage based on position data gathered in project FUMA (Fleet telematics big data analytics for vehicle Usage Modeling and Analysis). A user-centered design process was carried out for a visualization tool that presents FUMA data to people working in the RCA process. Interviews were conducted to gain insights about the RCA process and generate design ideas. The PACT framework was used to organize the ideas, and use cases were developed to project a conceptual scenario. A low-fidelity prototype was developed as a design artifact for the visualization, and a formative test was done to validate the design and gather feedback for future prototyping iterations. In each design phase, more insights were obtained about how visualization of vehicle usage should be used in RCA. Based on this study, the prototype design showed a promising start in visualizing vehicle usage for RCA purposes. Improvements in data presentation, however, still need to be addressed to reach the level of practicality required in RCA.
Root cause analysis (RCA) is a process at Scania used to understand the root cause of vehicles' need for repair. Most often, vehicle warranty claims and failure reports are studied to identify and analyze patterns corresponding to the various repair needs. Vehicle usage is believed to be one of the factors contributing to repair needs, but data about it is rarely used in RCA. This thesis investigates how the RCA process can benefit from position data collected in project FUMA (Fleet telematics big data analytics for vehicle Usage Modeling and Analysis). A user-centered design methodology was used to develop a visualization tool that presents FUMA data to people involved in the RCA process. Interviews were conducted to gather insights about the RCA process and to generate design ideas. The PACT framework was then used to organize the ideas, and use cases were developed to create a conceptual scenario. A low-fidelity prototype was developed as a design artifact for the visualization, and a formative test was performed to validate the design and collect feedback for future prototyping iterations. During each design phase, more insights were gathered about how the visualization of vehicle usage could be used for RCA. Based on this, the prototype design showed a promising start for visualizing vehicle usage in RCA. However, improvements in how the data is presented must be made to achieve the practicality required for RCA.
APA, Harvard, Vancouver, ISO, and other styles
40

Madenas, Nikolaos. "Integrating product lifecycle management systems with maintenance information across the supply chain for root cause analysis." Thesis, Cranfield University, 2014. http://dspace.lib.cranfield.ac.uk/handle/1826/9331.

Full text
Abstract:
Purpose: The purpose of this research is to develop a system architecture for integrating PLM systems with maintenance information to support root cause analysis by allowing engineers to visualise cross supply chain data in a single environment. By integrating product data from PLM systems with warranty claims, vehicle diagnostics and technical publications, engineers were able to improve root cause analysis and close the information gaps. Methodology: The methodology was divided into four phases and combined multiple data collection approaches and methods depending on each objective. Data collection was achieved through a combination of semi-structured interviews with experts from the automotive sector, study of the internal documentation, and testing of the systems used. The system architecture was modelled using UML diagrams. Findings: The literature review in the areas of information flow in the supply chain and root cause analysis provides an overview of the current state of research and reveals research gaps. In addition, the industry survey conducted highlighted supply chain issues related to information flow and the use of Product Lifecycle Management (PLM) systems. Prior to developing the system architecture, current-state process maps were captured to identify challenges and areas of improvement. The main finding of this research is a novel system architecture for integrating PLM systems with maintenance information across the supply chain to support root cause analysis. This research shows the potential of PLM systems within maintenance procedures by demonstrating, through the integration of PLM systems with warranty information, vehicle diagnostics and technical publications, that both PD engineers and warranty engineers benefited. The automotive experts who validated the system architecture recognised that the proposed solution provides a standardised approach for root cause analysis across departments and suppliers. To evaluate the applicability of the architecture in a different industry sector, the proposed solution was also tested using a case study from the defence sector. Originality/Value: This research addressed the research gaps by demonstrating that: i) a system architecture can be developed to integrate PLM systems with maintenance information to allow the utilisation of knowledge and data across the product lifecycle; ii) the network can be treated as a virtual warehouse where maintenance data are integrated and shared within the supply chain; iii) product data can be utilised in conjunction with maintenance information to support warranty and product development engineers; iv) disparate pieces of data can be integrated where data mining techniques could later be applied.
APA, Harvard, Vancouver, ISO, and other styles
41

Balasubramanian, Prashanth. "Root cause analysis-based approach for improving preventive/corrective maintenance of an automated prescription-filling system." Diss., Online access via UMI:, 2009.

Find full text
Abstract:
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2009.
Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
42

Singh, Karen J. "Patient safety and the RCA: A document analysis." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/87825/1/Karen_Singh_Thesis.pdf.

Full text
Abstract:
This research examined the function of Queensland Health's Root Cause Analysis (RCA) in improving patient safety, through an investigation of patient harm events where permanent harm and preventable death, Severity Assessment Code 1, were the outcome of healthcare. Unedited and highly legislated RCAs from across Queensland Health public hospitals from 2009, 2010 and 2011 comprised the data. A document analysis revealed that the RCAs opposed organisational policy and dominant theoretical directives. If we accept the prevailing assumption that patient harm is a systemic issue, then the RCA is failing to address harm events in healthcare.
APA, Harvard, Vancouver, ISO, and other styles
43

Carata, Lucian. "Provenance-based computing." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/287562.

Full text
Abstract:
Relying on computing systems that become increasingly complex is difficult: with many factors potentially affecting the result of a computation or its properties, understanding where problems appear and fixing them is a challenging proposition. Typically, the process of finding solutions is driven by trial and error or by experience-based insights. In this dissertation, I examine the idea of using provenance metadata (the set of elements that have contributed to the existence of a piece of data, together with their relationships) instead. I show that considering provenance a primitive of computation enables the exploration of system behaviour, targeting both retrospective analysis (root cause analysis, performance tuning) and hypothetical scenarios (what-if questions). In this context, provenance can be used as part of feedback loops, with a double purpose: building software that is able to adapt for meeting certain quality and performance targets (semi-automated tuning) and enabling human operators to exert high-level runtime control with limited previous knowledge of a system's internal architecture. My contributions towards this goal are threefold: providing low-level mechanisms for meaningful provenance collection considering OS-level resource multiplexing, proving that such provenance data can be used in inferences about application behaviour and generalising this to a set of primitives necessary for fine-grained provenance disclosure in a wider context. To derive such primitives in a bottom-up manner, I first present Resourceful, a framework that enables capturing OS-level measurements in the context of application activities. It is the contextualisation that allows tying the measurements to provenance in a meaningful way, and I look at a number of use-cases in understanding application performance. This also provides a good setup for evaluating the impact and overheads of fine-grained provenance collection. I then show that the collected data enables new ways of understanding performance variation by attributing it to specific components within a system. The resulting set of tools, Soroban, gives developers and operation engineers a principled way of examining the impact of various configuration, OS and virtualization parameters on application behaviour. Finally, I consider how this supports the idea that provenance should be disclosed at application level and discuss why such disclosure is necessary for enabling the use of collected metadata efficiently and at a granularity which is meaningful in relation to application semantics.
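A toy example makes the provenance primitive concrete: record which inputs and processes contributed to each artifact, then query the ancestors of a suspect output during retrospective analysis. The graph below, with invented file and job names, illustrates only the idea, not the dissertation's OS-level mechanisms.

```python
# Minimal provenance graph: an edge u -> v means "u contributed to v".
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("config.yaml", "train_job"),
    ("dataset_v3", "train_job"),
    ("train_job", "model.bin"),
    ("model.bin", "report.pdf"),
])
# Everything the report's existence depends on, for root cause analysis:
print(nx.ancestors(g, "report.pdf"))
```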
APA, Harvard, Vancouver, ISO, and other styles
44

Conradsson, Emil, and Vidar Johansson. "A MODEL-INDEPENDENT METHODOLOGY FOR A ROOT CAUSE ANALYSIS SYSTEM: A STUDY INVESTIGATING INTERPRETABLE MACHINE LEARNING METHODS." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160372.

Full text
Abstract:
Today, companies like Volvo GTO experience a vast increase in data and the ability to process it. This makes it possible to utilize machine learning models to construct a root cause analysis system in order to predict, explain and prevent defects. However, there exists a trade-off between model performance and explanation capability, both of which are essential to such a system. This thesis aims to, with the use of machine learning models, inspect the relationship between sensor data from the painting process and the texture defect orange peel. The aim is also to evaluate the consistency of different explanation methods. After the data was preprocessed and new features were engineered, e.g. adjustments, three machine learning models were trained and tested. In order to explain a linear model, one can use its coefficients. In the case of a tree-based model, MDI is a common global explanation method. SHAP is a state-of-the-art model-independent method that can explain a model globally and locally. These three methods were compared in order to evaluate the consistency of their explanations. If SHAP were consistent with the others on a global level, it could be argued that SHAP can be used locally in a root cause analysis. The study showed that the coefficients and MDI were consistent with SHAP, as the overall correlation between them was high and because they tended to weight the features in a similar way. From this conclusion, a root cause analysis algorithm was developed with SHAP as a local explanation method. Finally, it cannot be concluded that there is a relationship between the sensor data and orange peel, as the adjustments of the process were the most impactful features.
Today, companies like Volvo GTO experience a large increase in data and an improved ability to process it. This makes it possible to use machine learning models to create a root cause analysis system to predict, explain, and prevent defects. There is, however, a trade-off between model performance and explanation capability, both of which are essential for such a system. This thesis aims to use machine learning models to investigate the relationship between sensor data from the painting process and the texture defect orange peel. The aim is also to evaluate how consistent different explanation methods are. After the data was preprocessed and new variables were created, e.g. adjustments made to the process, three machine learning models were trained and tested. A linear model can be interpreted through its coefficients. A common method for globally explaining tree-based models is MDI. SHAP is a modern model-independent method that can explain models both globally and locally. These three explanation methods were then compared to evaluate how consistent they were in their explanations. If SHAP is consistent with the others at a global level, it can be argued that SHAP can be used locally in a root cause analysis. The study showed that the coefficients and MDI were consistent with SHAP, as the overall correlation between them was high and the methods tended to weight the variables in a similar way. From this conclusion, a root cause analysis algorithm was developed with SHAP as the local explanation method. Finally, no conclusion can be drawn that there is a relationship between the sensor data and orange peel, since the adjustments to the process were the most significant variables.
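A minimal sketch of the consistency check the thesis describes might compare a tree ensemble's MDI importances with mean-|SHAP| importances; the regression data below is synthetic, whereas the study used painting-process sensor data and three models.

```python
# Compare global MDI feature importances with global SHAP importances.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=6, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

mdi = model.feature_importances_                 # global MDI ranking
shap_values = shap.TreeExplainer(model).shap_values(X)
shap_imp = np.abs(shap_values).mean(axis=0)      # global SHAP ranking

corr = np.corrcoef(mdi, shap_imp)[0, 1]
print("correlation between MDI and SHAP importances:", round(corr, 3))
```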
APA, Harvard, Vancouver, ISO, and other styles
45

Solakoglu, Gokce. "Using DMAIC methodology to optimize data processing cycles with an overall goal of improving root cause analysis procedure." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113768.

Full text
Abstract:
Thesis: M. Eng. in Advanced Manufacturing and Design, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 68-69).
The main objective of this thesis is to use the DMAIC methodology to streamline customer-related procedures at Waters Corporation in order to improve root cause analysis (RCA) capability. First, a software-based approach is proposed to streamline the data collection stage in the field. The proposed system would ensure that field service reports capture essential information, are consistent, and are more easily filled out while at the customer site. Second, a new coding system is proposed to enable global service support engineers to better identify the underlying causes of field calls. By addressing these weaknesses in the current process, this thesis contributes a strategy to improve the content of the data captured during field applications and to provide better feedback to the quality department for improved product robustness.
by Gokce Solakoglu.
M. Eng. in Advanced Manufacturing and Design
APA, Harvard, Vancouver, ISO, and other styles
46

Regele, Oliver Brian. "Applied discrete event simulation for root cause analysis and evaluation of corrective process change Efficacy within vaccine manufacturing." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/126896.

Full text
Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, May, 2020
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 137-141).
Digital Transformation of the biopharmaceutical industry is enabling improved operations through smart manufacturing. One area of interest is the application of advanced data analytics techniques to supplement traditional workflows. The focus of this research was developing a process simulation model to address a defect observed at a manufacturing line at the Sanofi Pasteur Lyon site. This defect entailed a series of Out-of-Trend batches with abnormally low content of a certain attribute, at the end of a two-year process with complex product batch genealogy, which complicated the use of traditional approaches to Root Cause Analysis. This study performed a statistical analysis of the defect batches' attribute content through production stages to determine which stage contained a root cause. Once this analysis identified the Valence Assembly process as the stage of origin, a Discrete Event Simulator for this process was developed based on historical process data and specifications. This simulator was able to model the current process and replicate the defect in silico. The simulator identified a specific root cause in the batch testing protocol, as well as the expected incidence rate of the defect over future campaigns. Finally, the simulator evaluated the efficacy of two potential Corrective Process Changes. This work functions as a practical exploration of integrating novel data analysis and simulation techniques into traditional vaccine manufacturing activities.
by Oliver Brian Regele.
M.B.A.
S.M.
M.B.A. Massachusetts Institute of Technology, Sloan School of Management
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
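As an illustration of the discrete event simulation approach the abstract above describes, a toy SimPy model is sketched below; the stage durations, attribute distribution, and out-of-trend rule are hypothetical stand-ins for the proprietary vaccine process.

```python
# Toy discrete event simulation of batches flowing through two stages,
# counting how often a batch falls below a hypothetical attribute limit.
import random
import simpy

random.seed(0)
out_of_trend = 0

def batch(env):
    global out_of_trend
    yield env.timeout(random.uniform(5, 10))  # hypothetical assembly stage
    content = random.gauss(100, 8)            # attribute content after assembly
    yield env.timeout(2)                      # hypothetical testing stage
    if content < 90:                          # hypothetical out-of-trend rule
        out_of_trend += 1

env = simpy.Environment()
for _ in range(1000):
    env.process(batch(env))
env.run()
print(f"out-of-trend incidence: {out_of_trend / 1000:.1%}")
```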
APA, Harvard, Vancouver, ISO, and other styles
47

Swift, James D. "Root Cause Analysis and Classification of Single Point Failures in Designs Applying Triple Modular Redundancy in SRAM FPGAs." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8744.

Full text
Abstract:
Radiation effects encountered in space or aviation environments can affect the configuration bits in Field Programmable Gate Arrays (FPGAs), causing errors in FPGA output. One method of increasing FPGA reliability in radiation environments is adding redundant logic to mask errors and allow time for repair. Despite the redundancy added with triple modular redundancy (TMR) and configuration scrubbing, there exist some configuration bits that individually affect multiple TMR domains, causing errors in FPGA output. A new tool called DeBit is introduced that identifies hardware resources associated with a single-bit failure. This tool identifies a novel failure mode involving global routing resources, and the failure mode is verified through a series of directed tests on global routing resources. Lastly, a mitigation strategy is proposed and tested on a single error in a TMR design.
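A toy model of the TMR principle at issue: a majority voter masks an upset confined to one domain, whereas an upset in a shared resource that corrupts two domains, the kind of single point failure the thesis classifies, defeats the vote. The bit patterns below are arbitrary.

```python
def vote(a, b, c):
    # Bitwise majority of three redundant domains
    return (a & b) | (a & c) | (b & c)

truth = 0b1011
single = vote(truth ^ 0b0100, truth, truth)           # one domain upset
double = vote(truth ^ 0b0100, truth ^ 0b0100, truth)  # upset shared by two domains
print("single-domain upset masked:", single == truth)  # True
print("multi-domain upset masked: ", double == truth)  # False
```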
APA, Harvard, Vancouver, ISO, and other styles
48

Liang, Ge, and Liang Yu. "Quality Driven Re-engineering Framework." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2161.

Full text
Abstract:
Context. Software re-engineering has been identified as a business-critical activity to improve legacy systems in industry. It is the process of understanding existing software and improving it for modified or improved functionality, better maintainability, configurability, reusability, or other quality goals. However, there is little knowledge of how to integrate software quality attributes into the re-engineering process, and it is essential to resolve quality problems through applying software re-engineering processes. Objectives. In this study we perform an in-depth investigation to identify and resolve quality problems by applying software re-engineering processes. In the end, we created a quality-driven re-engineering framework. Methods. First, we conducted a literature review to gather knowledge for building the quality-driven re-engineering framework. After that, we performed a case study at Ericsson to validate the processes of the framework. Finally, we carried out an experiment to show that the identified quality problems had been resolved. Results. We compared three existing re-engineering frameworks and identified their weaknesses. To address these weaknesses, we created a quality-driven re-engineering framework. This framework is used to improve software quality by identifying and resolving root-cause problems in legacy systems. Moreover, we validated the framework for one type of legacy system by successfully applying it in a real case at Ericsson, and we showed through an experiment at Ericsson that the efficiency of a legacy system improved. Conclusions. We conclude that the quality-driven re-engineering framework is applicable and that it can improve the efficiency of a legacy system. Moreover, we conclude that there is a need for further empirical validation of the framework in full-scale industrial trials.
APA, Harvard, Vancouver, ISO, and other styles
49

Youssef, Amanda. "Root-cause analysis and characterization of oxygen-related defects in silicon PV material: an approach from macro to nanoscale." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/122510.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 141-155).
With energy demand forecast to grow significantly, efforts towards mitigating global warming by reducing greenhouse gas emissions are becoming stricter as more power generation plants are deployed to meet global demand. Deployment of renewable energy technologies as a low-carbon alternative to fossil fuels is an attractive solution. Photovoltaics (PV) present several advantages over other energy sources because PV is modular and has proven to be a scalable and reliable technology. A capital expenditure reduction of 70% has been found to be necessary to meet the climate targets of 7-10 TW of PV by 2030. This can be achieved through different channels: improving the conversion efficiency and device performance of silicon modules, increasing solar cell manufacturing yield, reducing silicon feedstock material use, etc. This research focuses on n-type monocrystalline silicon and aims to increase conversion efficiency by up to 20% relative and manufacturing yield by up to 50%, as levers to reduce capital expenditure. The increase in conversion efficiency and manufacturing yield is achieved by defect engineering and mitigation of a lifetime-limiting bulk defect in n-type monocrystalline silicon, characterized by low-lifetime concentric rings. Temperature- and injection-dependent photoluminescence imaging is applied to investigate the defect's root cause by studying its evolution under several high-temperature process conditions, and the defect is found to be caused by oxide-related precipitates. Synchrotron-based mic ...
by Amanda Youssef.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other styles
50

Willeke, Larissa, and Wiktor Suvander. "Incomplete Delivery: Description of Causes and Effects." Thesis, Linköpings universitet, Kvalitetsteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-98726.

Full text
Abstract:
Quality defects are a common problem for producing companies, but their causes and consequences are often unknown. The purpose of this thesis assignment is to develop a step-by-step analysis method for identifying the root causes of quality defects based on previously examined consequences. The first steps focus on customer recovery, while the following steps concentrate on process recovery. The analysis method is process-oriented, as the complete production and delivery process is scrutinized upstream through a combination of commonly used quality tools.

To test the applicability of the presented method, this thesis comprises a case study conducted at one company receiving complaints about quality defects. For the case study company, the consequences and causes of quality defects are described and analyzed, and suggestions for improvement are developed.

In the investigated case, the developed method helps to identify the causes and consequences of incomplete delivery, the company's major quality problem. The upstream approach proved advantageous for two reasons. First, including the customer side guarantees that the cause analysis is limited to the relevant problems. With the help of the method, the severity of consequences, depending on the customers' awareness of defects and available time, can be detected. Second, problems can be scrutinized in their natural order, as difficulties in production, once identified, can be followed step by step to the causes in a preceding step. The main causes identified in this case study are a lack of process definition and of standardization. Thus, the portrayed case suggests that the regular appearance of quality defects is not a coincidence; the reasons lie in the underlying, possibly insufficiently defined and managed processes.

The general finding of the thesis assignment is the presented analysis method, which comprises a systematic, process-oriented approach designed to examine the consequences and causes of quality defects. In contrast to the root cause analysis approaches found in the literature, each analysis step is described in detail, which makes the method easy to apply in practice. The method is therefore a valid tool for dealing with a high degree of complexity, and the case study showed that it is effective and efficient for scrutinizing problems with these characteristics. Under different circumstances, the application of single quality tools might be sufficient and hence more resource-effective. Further investigation is necessary, since the method has only been tested in one case study.
APA, Harvard, Vancouver, ISO, and other styles