Dissertations / Theses on the topic 'Process mining'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Process mining.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
van der Aalst, Wil M. P., Arya Adriansyah, Ana Karla Alves de Medeiros, Franco Arcieri, Thomas Baier, Tobias Blickle, R. P. Jagadeesh Chandra Bose, et al. "Process Mining Manifesto." Springer, 2011. http://dx.doi.org/10.1007/978-3-642-28108-2_19.
Khodabandelou, Ghazaleh. "Mining Intentional Process Models." PhD thesis, Université Panthéon-Sorbonne - Paris I, 2014. http://tel.archives-ouvertes.fr/tel-01010756.
Remberg, Julia. "Grundlagen des Process Mining : [Studienarbeit] /." [München] : Grin-Vel, 2008. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=017676071&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.
Nguyen, Hoang H. "Stage-aware business process mining." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/130602/9/Hoang%20Nguyen%20Thesis.pdf.
Baier, Thomas, Jan Mendling, and Mathias Weske. "Bridging abstraction layers in process mining." Elsevier, 2014. http://dx.doi.org/10.1016/j.is.2014.04.004.
Pika, Anastasiia. "Mining process risks and resource profiles." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/86079/1/Anastasiia_Pika_Thesis.pdf.
Gerke, Kerstin. "Continual process improvement based on reference models and process mining." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2011. http://dx.doi.org/10.18452/16353.
The dissertation at hand takes business processes as its subject. They are a major asset of any organization and are naturally subject to continual improvement. An optimally designed process, having once proven itself, must remain flexible, as new developments demand swift adaptations. However, many organizations do not adequately describe these processes, even though doing so is a prerequisite for their improvement. Very often the process model created during an information system's implementation either is not used in the first place or is not maintained, resulting in an obvious lack of correspondence between the model and operational reality. Process mining techniques prevent this: they extract the process knowledge inherent in an information system and visualize it in the form of process models. Continual process improvement depends greatly on this modeling approach, and reference models such as ITIL and CobiT are suitable and powerful means for the efficient design and control of processes. Process improvement typically consists of a number of analysis, design, implementation, execution, monitoring, and evaluation activities, and this dissertation proposes a methodology that supports and facilitates them. An empirical analysis revealed both the challenges and the potential benefits of applying these process mining techniques successfully. This in turn led to a detailed consideration of specific aspects of data preparation for process mining algorithms, with a focus on the provision of enterprise data and RFID events. The dissertation also examines the importance of analyzing the execution of reference processes to ensure compliance with modified or entirely new business processes. The methodology was trialled in a number of practical cases; the results demonstrate its power and universality. This new approach ushers in an enhanced continual inter-departmental and inter-organizational improvement process.
Muñoz-Gama, Jorge. "Conformance checking and diagnosis in process mining." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/284964.
In recent decades, the capacity of information systems to generate and store event data has grown exponentially, especially in contexts such as industry. Devices permanently connected to the Internet (the Internet of Things), social networks, smartphones and cloud computing provide new sources of data, a trend that will continue in the coming years. The omnipresence of large volumes of event data stored in logs opens the door to process mining, a new discipline at the crossroads of business process management, process modeling and business intelligence. Process mining techniques can be used to discover, analyze and improve real processes by extracting models from observed behavior. The ability of these models to represent reality determines the quality of the results obtained and conditions their effectiveness. Conformance checking, the ultimate goal of this thesis, makes it possible to analyze observed and modeled behavior and to determine whether the model is a faithful representation of reality. Most efforts in conformance checking have focused on measuring and ensuring that models capture all observed behavior, also called "fitness". Other properties, such as ensuring the "precision" of models (not modeling unnecessary behavior), have been relegated to the background. The first part of this thesis focuses on analyzing precision, where models that describe reality precisely are preferred to overly generic ones. The thesis presents a new technique based on detecting "escape arcs", i.e. points where the modeled behavior deviates from the behavior reflected in the log. These escape arcs are used to determine, in the form of a metric, the level of precision between a log and a model, and to locate possible points of improvement. The thesis also presents a confidence interval over the metric, as well as a multi-factor metric to measure the severity of the detected imprecisions. Conformance checking can be a costly operation for real scenarios, and understanding the reasons behind the problems requires effort. The second part of the thesis shifts the focus from precision to fitness and proposes the use of decomposition techniques to aid fitness checking. The proposed techniques are based on decomposing the model into single-entry single-exit components, called SESEs, which represent subprocesses within the main process. Checking fitness at the subprocess level provides detailed information about where the problems are, aiding their diagnosis. Moreover, the relations between subprocesses can be exploited to improve the diagnostic capabilities and to identify which areas concentrate the highest density of problems. Finally, the thesis proposes two direct applications of the decomposition techniques: 1) the theory is extended to include data information in fitness checking, and 2) decomposed systems are used in real time to monitor fitness.
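As a rough illustration of the escape-arc idea summarized in the abstract above (and not the thesis's actual algorithm), the following Python sketch builds a prefix automaton from a toy log, asks a model for the activities it allows after each observed prefix, and counts the modeled-but-never-observed continuations as escape arcs; the `allows()` callback is a hypothetical stand-in for querying a real process model.

```python
# Toy escape-arc precision in the spirit of log/model precision metrics.
# `allows(prefix)` is a hypothetical stand-in for querying a process model:
# it returns the set of activities the model permits after a given prefix.
from collections import defaultdict

def escape_arc_precision(log, allows):
    observed = defaultdict(set)          # prefix -> activities seen next in the log
    weight = defaultdict(int)            # prefix -> how often the prefix occurs
    for trace in log:
        for i, act in enumerate(trace):
            prefix = tuple(trace[:i])
            observed[prefix].add(act)
            weight[prefix] += 1
    num = den = 0
    escapes = {}
    for prefix, seen in observed.items():
        allowed = allows(prefix) | seen              # model behaviour at this state
        num += weight[prefix] * len(seen)
        den += weight[prefix] * len(allowed)
        if allowed - seen:
            escapes[prefix] = allowed - seen         # escape arcs: modeled, never observed
    return (num / den if den else 1.0), escapes

# Tiny example: a model that also allows "d" after ("a",), never seen in the log.
log = [["a", "b", "c"], ["a", "c", "b"]]
allows = lambda prefix: {"d"} if prefix == ("a",) else set()
precision, escapes = escape_arc_precision(log, allows)
print(round(precision, 3), escapes)
```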
Selig, Henny. "Continuous Event Log Extraction for Process Mining." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210710.
Process mining is the use of data science techniques on transaction data to identify or monitor processes within an organization. The analyzed data often originates from process-unaware enterprise software, such as SAP systems, which are centered on business documents. The differences in data management between Enterprise Resource Planning (ERP) systems and process mining systems result in a large share of ambiguous cases affected by convergence and divergence. This leads to a gap between the process as interpreted by process mining and the process as executed in the ERP system. In this thesis, a purchasing process in an SAP ERP system is used to show how ERP data can be extracted and transformed into a process-mining-oriented event log that expresses ambiguous cases as precisely as possible. Since the content and structure of the event log already define the scope (which process) and granularity (the activity types), the outcome of process mining depends on the quality of the event log. The results of this thesis show how definitions of case types and event granularity can be used to improve that quality. The described solution supports continuous event log extraction from the ERP system.
Bredenkamp, Ben. "Analysis and modelling of mining induced seismicity." Thesis, Stellenbosch : University of Stellenbosch, 2006. http://hdl.handle.net/10019.1/2257.
Earthquakes and other seismic events are known to have catastrophic effects on people and property. These large-scale events are almost always preceded by smaller-scale seismic events called precursors, such as tremors or other vibrations. The use of precursor data to predict the realization of seismic hazards has been a long-standing technical problem in different disciplines. For example, blasting or other mining activities have the potential to induce the collapse of rock surfaces, or the occurrence of other dangerous seismic events in large volumes of rock. In this study, seismic data (T4) obtained from a mining concern in South Africa were considered using a nonlinear time series approach. In particular, the method of surrogate analysis was used to characterize the deterministic structure in the data prior to fitting a predictive model. The seismic data set (T4) is a set of seismic events for a small volume of rock in a mine observed over a period of 12 days. The surrogate data were generated to have structure similar to that of T4 according to some basic seismic laws. In particular, the surrogate data sets were generated to have the same autocorrelation structure and amplitude distributions as the underlying data set T4. The surrogate data derived from T4 allow for the assessment of some basic hypotheses regarding both types of data sets. The structure in both types of data (i.e. the relationship between the past behavior and the future realization of components) was investigated by means of three test statistics, each of which provided partial information on the structure in the data. The first is the average mutual information between the reconstructed past and future states of T4. The second is a correlation dimension estimate, Dc, which gives an indication of the deterministic structure (predictability) of the reconstructed states of T4. The final statistic is the correlation coefficient, which gives an indication of the predictability of the future behavior of T4 based on its past states. The past states of T4 were reconstructed by reducing the dimension of a delay coordinate embedding of the components of T4. The map from past states to future realizations of T4 values was estimated using Long Short-Term Memory (LSTM) recurrent neural networks. The application of LSTM recurrent neural networks to point processes has not been reported before in the literature. Comparison of the stochastic surrogate data with the measured structure in the T4 data set showed that the structure in T4 differed significantly from that of the surrogate data sets. However, the relationship between the past states and the future realization of components for both T4 and the surrogate data did not appear to be deterministic. The application of LSTM in the modeling of T4 shows that the approach could model point processes at least as well as, or even better than, previously reported applications on time series data.
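The surrogate-data test described above can be approximated with a generic Fourier-transform surrogate: randomizing the phases of the signal's spectrum preserves its power spectrum (and hence its linear autocorrelation) while destroying nonlinear determinism. The sketch below, which assumes `numpy` is available, is a simplified stand-in for the thesis's surrogate construction (which additionally matches the amplitude distribution).

```python
# Minimal phase-randomized (FT) surrogate: same power spectrum / linear
# autocorrelation as the original series, nonlinear structure destroyed.
import numpy as np

def ft_surrogate(x, rng=None):
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spec.shape)
    surr_spec = np.abs(spec) * np.exp(1j * phases)
    surr_spec[0] = spec[0]               # keep the mean (DC component) unchanged
    return np.fft.irfft(surr_spec, n=len(x))

# Compare a test statistic on the original series against its distribution over
# many surrogates to judge whether the measured structure is deterministic.
x = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.default_rng(0).standard_normal(500)
stat = lambda s: np.mean((s[2:] - 2 * s[1:-1] + s[:-2]) ** 2)   # crude roughness measure
print(stat(x), np.mean([stat(ft_surrogate(x, seed)) for seed in range(20)]))
```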
Al Jlailaty, Diana. "Mining Business Process Information from Emails Logs for Process Models Discovery." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED028.
The information exchanged in email texts usually concerns complex events or business processes in which the entities exchanging emails collaborate to achieve the processes' final goals. Thus, the flow of information in the sent and received emails constitutes an essential part of such processes, i.e. the tasks or business activities. Extracting information about business processes from emails can help enhance email management for users. It can also be used to find rich answers to several analytical queries about the employees and organizations enacting these business processes. None of the previous works has fully dealt with the problem of automatically transforming email logs into event logs in order to deduce the undocumented business processes. Towards this aim, this thesis develops a framework that induces business process information from emails. We introduce approaches that contribute the following: (1) discovering, for each email, the process topic it concerns; (2) finding the business process instance each email belongs to; (3) extracting business process activities from emails and associating these activities with metadata describing them; (4) improving the performance of business process instance discovery and business activity discovery from emails by exploiting the relation between these two problems; and (5) a preliminary estimation of the real timestamp of a business process activity instead of using the email timestamp. Using the results of these approaches, an event log is generated which can be used for deducing the business process models of an email log. The efficiency of all of the above approaches is demonstrated by several experiments on the open Enron email dataset.
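As a much-simplified illustration of the email-to-event-log transformation discussed above (the thesis learns topics, instances and activities automatically, which this sketch does not), the code below groups emails into cases by a normalized subject line and emits case id / activity / timestamp records; the keyword-based activity labels and field names are purely illustrative assumptions.

```python
# Naive email-log -> event-log conversion: the case notion is the normalized
# subject thread, the activity label comes from simple keyword matching, and
# the event timestamp is the email timestamp (the thesis estimates a better one).
import re
from datetime import datetime

ACTIVITY_KEYWORDS = {            # illustrative mapping, not learned from data
    "order": "Place order",
    "invoice": "Send invoice",
    "payment": "Confirm payment",
}

def normalize_subject(subject):
    return re.sub(r"^(re|fwd?)\s*:\s*", "", subject.strip().lower())

def label_activity(body):
    for keyword, activity in ACTIVITY_KEYWORDS.items():
        if keyword in body.lower():
            return activity
    return "Other"

def emails_to_event_log(emails):
    events = [
        {
            "case_id": normalize_subject(mail["subject"]),
            "activity": label_activity(mail["body"]),
            "timestamp": datetime.fromisoformat(mail["sent"]),
        }
        for mail in emails
    ]
    return sorted(events, key=lambda e: (e["case_id"], e["timestamp"]))

emails = [
    {"subject": "Order 42", "body": "please process this order", "sent": "2019-03-01T09:00:00"},
    {"subject": "Re: Order 42", "body": "invoice attached", "sent": "2019-03-02T10:30:00"},
]
for event in emails_to_event_log(emails):
    print(event)
```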
Mantila, K. (Kimmo). "Channels to mining industry and technology market." Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201309251727.
Bala, Saimir, Jan Mendling, Martin Schimak, and Peter Queteschiner. "Case and Activity Identification for Mining Process Models from Middleware." Springer, Cham, 2018. http://epub.wu.ac.at/6620/1/PoEM2018%2Dsubmitted.pdf.
Turner, Christopher James. "A genetic programming based business process mining approach." Thesis, Cranfield University, 2009. http://dspace.lib.cranfield.ac.uk/handle/1826/4471.
SOARES, FABIO DE AZEVEDO. "TEXT MINING AT THE INTELLIGENT WEB CRAWLING PROCESS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=13212@1.
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
This dissertation presents a study on the application of text mining as part of an intelligent Web crawling process. The most usual way of gathering data on the Web is the use of web crawlers, programs that, once provided with an initial set of URLs (seeds), start the methodical procedure of visiting a site, storing it on disk and extracting the hyperlinks that will be used for the next visits. However, seeking content in this way is an expensive and exhausting task. An intelligent web crawling process, rather than collecting and storing every web document available, analyzes its available crawling options to find links that are likely to provide content highly relevant to a topic defined a priori. In the approach suggested in this work, topics are defined not by keywords but by the use of text documents as examples. Pre-processing techniques used in text mining, including the use of a thesaurus, then semantically analyze the document submitted as an example. Based on this analysis, the web crawler is guided toward its objective: retrieving information relevant to the example document. Starting from seeds, or by querying the available search engines, the crawler analyzes, exactly as in the previous step, every document retrieved from the Web. Once the similarity level between each retrieved document and the example document is obtained, the retrieved document's hyperlinks are analyzed, queued and later dequeued according to their probable degree of importance. At the end of the data-gathering process, another text mining technique, document clustering, is applied in order to select the most representative documents of the collection. The implementation of a tool incorporating the researched heuristics made it possible to evaluate the performance of the developed techniques and to compare the results with other means of retrieving data on the Web. This work shows that the use of text mining is a path worth exploring in the process of retrieving relevant information on the Web.
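A bare-bones version of the similarity-driven crawling frontier described in this abstract can be written with a term-frequency cosine score: each fetched page is compared against the example document, and its outgoing links inherit that score as their queue priority. This sketch omits the thesaurus, clustering and real HTTP fetching; the `fetch` callable is an assumed placeholder.

```python
# Skeleton of a focused crawler frontier: links inherit the cosine similarity
# between the fetched page and the example document as their priority.
import heapq
import math
import re
from collections import Counter

def vectorize(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def focused_crawl(seeds, example_text, fetch, budget=50):
    example = vectorize(example_text)
    frontier = [(-1.0, url) for url in seeds]     # max-heap via negated priority
    heapq.heapify(frontier)
    seen, scored = set(seeds), []
    while frontier and budget > 0:
        _, url = heapq.heappop(frontier)
        text, links = fetch(url)                  # placeholder: returns (page text, out-links)
        budget -= 1
        score = cosine(vectorize(text), example)
        scored.append((score, url))
        for link in links:
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-score, link))
    return sorted(scored, reverse=True)

# Offline stub instead of real HTTP fetching, just to make the sketch runnable.
pages = {
    "seed": ("process mining extracts models from event logs", ["a", "b"]),
    "a": ("event logs and process models for conformance checking", []),
    "b": ("gardening tips and cooking recipes", []),
}
print(focused_crawl(["seed"], "discover process models from event logs", pages.get))
```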
JIMÉNEZ, HAYDÉE GUILLOT. "APPLYING PROCESS MINING TO THE ACADEMIC ADMINISTRATION DOMAIN." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2017. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=32300@1.
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
Higher education institutions keep a sizable amount of data, including student records and the structure of degree curricula. This work, adopting a process mining approach, focuses on the problem of identifying how closely students follow the recommended order of the courses in a degree curriculum, and to what extent their performance is affected by the order they actually adopt. It addresses this problem by applying two existing techniques to student records: process discovery with conformance checking, and frequent itemsets. Finally, the dissertation covers experiments performed by applying these techniques to a case study involving over 60,000 student records from PUC-Rio. The experiments show that the frequent itemset technique performs better than the process discovery and conformance checking techniques, and they confirm the relevance of analyses based on the process mining approach to help academic coordinators in their quest for better degree curricula.
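The frequent-itemset analysis mentioned above can be illustrated with a few lines of brute-force support counting over per-student "transactions" (the set of courses a student took in a term); real implementations would use Apriori or FP-growth pruning, and the data below is invented, not taken from the PUC-Rio records.

```python
# Minimal brute-force frequent itemset mining over per-term course sets.
from itertools import combinations

def frequent_itemsets(transactions, min_support=0.5, max_size=3):
    n = len(transactions)
    items = sorted({item for t in transactions for item in t})
    result = {}
    for size in range(1, max_size + 1):
        for candidate in combinations(items, size):
            support = sum(1 for t in transactions if set(candidate) <= t) / n
            if support >= min_support:
                result[candidate] = support
    return result

# Invented student/term transactions: which courses were taken together.
transactions = [
    {"Calculus I", "Programming I"},
    {"Calculus I", "Programming I", "Linear Algebra"},
    {"Programming I", "Linear Algebra"},
    {"Calculus I", "Programming I"},
]
for itemset, support in sorted(frequent_itemsets(transactions).items()):
    print(itemset, round(support, 2))
```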
Patel, Akash. "Data Mining of Process Data in Multivariable Systems." Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-201087.
Modeling control systems in industrial processes by means of system identification experiments can be both costly and time-consuming. Increased access to large volumes of historical stored data and to processing power has therefore generated great interest in data mining algorithms. This thesis focuses on the evaluation of a data mining algorithm for multivariable processes, where the extracted data segments can potentially be used for system identification. The first part of the thesis explores the effect the algorithm's many parameters have on its performance. To simplify the choice of parameters, a user interface was developed. The second part of the thesis evaluates the algorithm's performance by modeling a simulated process based on the extracted data segments. The results show that the algorithm is particularly sensitive to the choice of cut-off frequencies in the band-pass filter, the threshold value for the reciprocal condition number, and the order of the Laguerre filter. Furthermore, the results show that it is possible, through the developed user interface, to choose parameter values that yield satisfactory extracted data segments. Finally, it is concluded that a simulated process can be modeled with high accuracy using the data segments extracted by the algorithm.
Cotroneo, Orazio. "Mining declarative process models with quantitative temporal constraints." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24636/.
Komulainen, O. (Olli). "Process mining benefits for organizations using ERP systems." Master's thesis, University of Oulu, 2017. http://urn.fi/URN:NBN:fi:oulu-201712013238.
Many sources have described for decades how globalization accelerates business life. Business management has become more demanding, and even large companies have to compete against innovative start-ups. At the same time, companies must create ever more value for their end customers with ever fewer resources. It is therefore important for them to focus on their core business and to ensure high organizational efficiency. One fundamental way to improve efficiency is to analyze and develop business processes. Traditional methods, such as modeling and developing business processes in workshops, are resource-intensive, and they are also weak at describing organizations' actual processes with all their real-life variation. In addition, organizations clearly have challenges in managing their business processes. On the other hand, organizations have in recent years collected large amounts of operational data into various data stores, which is why, for example, the term big data has become common. However, most of the existing data is not used effectively, for instance in data-driven decision-making. Process mining combines business process management and a data-driven approach to support a company's ever-growing needs. Its main purpose is to use the by-product data generated by ERP systems and other IT systems, with the help of algorithms, to model business processes visually, and to make findings that identify the areas with the greatest potential for process improvement. Process mining is a topical research area, but its benefits to end customers have not yet been studied or documented extensively. The purpose of this study is first to validate that organizations have challenges with their business processes and that data is available to them from ERP systems. The main goal is then to determine, through a case study (QPR ProcessAnalyzer), what kinds of benefits are achieved with process mining. The research goals are addressed through three research questions: RQ1) What kinds of challenges do organizations have with their business processes? RQ2) Does the data available from ERP systems support the development of business processes? RQ3) How does process mining help organizations understand and develop their business processes? The first two research questions relate both to the case study and to a literature review covering business process management, ERP systems and business intelligence. The third research question relates to the case study, which has an internal perspective, represented by QPR Software's personnel, and an external perspective, represented by two customer organizations. The study identifies benefits and use cases of process mining, presents in more detail one use case in which process mining supports an ERP system implementation project, and observes that process mining can help with the main business process management challenges identified in the literature review. The study can also be generalized to the wider field of process mining, since in addition to the case study it draws on earlier literature and terminology.
Shahbaz, Muhammad. "Product and manufacturing process improvement using data mining." Thesis, Loughborough University, 2005. https://dspace.lboro.ac.uk/2134/34834.
Liu, Siyao. "Integrating Process Mining with Discrete-Event Simulation Modeling." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5735.
Burattin, Andrea <1984>. "Applicability of Process Mining Techniques in Business Environments." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5446/1/thesis-final-v4.pdf.
PESTANA, L. F. "Aplicação do Process Mining na Auditoria de Processos Governamentais." Universidade Federal do Espírito Santo, 2017. http://repositorio.ufes.br/handle/10/8692.
Business process auditing is a topic of growing relevance in the literature. However, traditional, manual techniques prove unsatisfactory or insufficient, since they are costly, may be biased and error-prone, and involve large amounts of time, human and material resources. In this context, the present study demonstrates how process mining can be used, in an automated way, to audit governmental processes, based on an information system and a mining tool called ProM. Using conformance checking techniques, the actual processes of a governmental organization were compared with their official models. The results show some divergences between them and indicate that the technique can be used as an auxiliary means of auditing business processes.
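A trivial form of the conformance check used in such audits is to replay each recorded trace against the transitions the official model allows and report every deviation. The sketch below does this for a directly-follows model; it only illustrates the idea behind tool-supported checks such as those in ProM, and the process and log are invented.

```python
# Replay traces against an official model expressed as allowed "directly
# follows" pairs (plus allowed start/end activities) and report deviations.
def audit_traces(log, allowed_pairs, start_acts, end_acts):
    findings = []
    for case_id, trace in log.items():
        if trace and trace[0] not in start_acts:
            findings.append((case_id, f"unexpected start activity '{trace[0]}'"))
        if trace and trace[-1] not in end_acts:
            findings.append((case_id, f"unexpected end activity '{trace[-1]}'"))
        for a, b in zip(trace, trace[1:]):
            if (a, b) not in allowed_pairs:
                findings.append((case_id, f"'{a}' directly followed by '{b}' is not allowed"))
    return findings

# Invented example: payment before approval violates the official model.
official = {("Request", "Approve"), ("Approve", "Pay"), ("Pay", "Archive")}
log = {
    "case-1": ["Request", "Approve", "Pay", "Archive"],
    "case-2": ["Request", "Pay", "Approve", "Archive"],
}
for case_id, issue in audit_traces(log, official, {"Request"}, {"Archive"}):
    print(case_id, "->", issue)
```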
Papangelakis, Vladimiros George. "Mathematical modelling of an exothermic pressure leaching process." Thesis, McGill University, 1990. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121089.
The object of the present thesis was the development of a mathematical model suitable for computer simulation of hydrometallurgical processes. The model formulation was made for a strongly exothermic three-phase reaction system, namely the pressure oxidation process as applied to the treatment of refractory gold ores and concentrates. The steps followed during the course of this work involved, first, the experimental identification of the intrinsic kinetics of the two principal refractory gold minerals, arsenopyrite and pyrite, and second, the development of reactor models describing the isothermal and non-isothermal behaviour of batch and multi-stage continuous reactors at steady state. Emphasis was given to the identification of feed conditions for autothermal operation. The key features of the developed model are the coupling of both mass and heat balance equations, the description of the non-isothermal performance of a multistage continuous reactor, and the treatment of a two-mineral mixture concentrate. In addition, continuous functions are used to describe the size distribution of reacting particles, and gas-liquid mass transfer rate limitations are assessed. The model predictions were in good agreement with pilot-plant scale industrial data. Simulation runs of alternative reactor configurations and feed compositions elucidated the impact of the size of the first reactor stage, the rate-limiting regime, and the sulphur content of the feed on the attainment of autogenous performance.
Bala, Saimir, Cristina Cabanillas Macias, Andreas Solti, Jan Mendling, and Axel Polleres. "Mining Project-Oriented Business Processes." Springer, Cham, 2015. http://dx.doi.org/10.1007/978-3-319-23063-4_28.
García Oliva, Rodrigo Alfonso, and Jesús Javier Santos Barrenechea. "Modelo de evaluación de métricas de control para procesos de negocio utilizando Process Mining." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2020. http://hdl.handle.net/10757/653470.
This project aims to analyze the complexity of business processes in retail companies at a depth that is very difficult or even impossible to reach with other techniques. Process mining makes it possible to close this gap, and that is what we want to demonstrate through the implementation of a process mining model. The project proposes a model that contemplates the presence of various sources of information for a logistics process in a retail company, as well as the application of the three phases of process mining (discovery, conformance and enhancement). Additionally, a diagnostic phase is proposed, which details a set of control metrics to evaluate the logistics process and thus generate an improvement plan that gives guidelines to optimize the process based on what has been analyzed with this technique. The model was implemented in a Peruvian retail company (TopiTop S.A.) for the analysis of the logistics process, specifically the management of purchase orders. Applying the model and evaluating the proposed metrics led to the identification of anomalies in the process through each of the model's phases, ensuring the quality of the analysis in the pre-processing phase, generating the process model, and deriving control metrics with the open-source tool ProM Tools.
Thesis
Sharma, Sumana. "An Integrated Knowledge Discovery and Data Mining Process Model." VCU Scholars Compass, 2008. http://scholarscompass.vcu.edu/etd/1615.
Schönig, Stefan, Cristina Cabanillas Macias, Claudio Di Ciccio, Stefan Jablonski, and Jan Mendling. "Mining Resource Assignments and Teamwork Compositions from Process Logs." Gesellschaft für Informatik e.V., 2016. http://epub.wu.ac.at/5688/1/Schoenig_et_al_2016_Softwaretechnik%2DTrends.pdf.
Yongsiriwit, Karn. "Modeling and mining business process variants in cloud environments." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLL002/document.
More and more organizations are adopting cloud-based Process-Aware Information Systems (PAIS) to manage and execute processes in the cloud as an environment in which to optimally share and deploy their applications. This is especially true for large organizations with branches operating in different regions and a considerable number of similar processes. Such organizations need to support many variants of the same process due to their branches' local cultures, regulations, etc. However, developing a new process variant from scratch is error-prone and time-consuming. Motivated by the "design by reuse" paradigm, branches may collaborate to develop new process variants by learning from their similar processes. These processes are often heterogeneous, which prevents easy and dynamic interoperability between branches. A process variant is an adjustment of a process model in order to flexibly adapt to specific needs. Much research in both academia and industry aims to facilitate the design of process variants. Several approaches have been developed to assist process designers by searching for similar business process models or by using reference models. However, these approaches are cumbersome, time-consuming and error-prone, and they recommend entire process models, which are not handy for process designers who need to adjust a specific part of a process model. In fact, process designers can better develop process variants with an approach that recommends a well-selected set of activities from a process model, referred to as a process fragment. Large organizations with multiple branches also execute business process variants in the cloud in order to optimally deploy and share common resources. However, these cloud resources may be described using different cloud resource description standards, which prevents interoperability between branches. In this thesis, we address the above shortcomings by proposing an ontology-based approach to semantically populate a common knowledge base of processes and cloud resources and thus enable interoperability between an organization's branches. We construct our knowledge base by extending existing ontologies. We then propose an approach to mine this knowledge base to assist the development of business process variants. Furthermore, we adopt a genetic algorithm to optimally allocate cloud resources to business processes. To validate our approach, we develop two proofs of concept and perform experiments on real datasets. Experimental results show that our approach is feasible and accurate in real use cases.
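To make the resource-allocation step concrete, the sketch below shows a textbook genetic algorithm that assigns each process activity to one cloud resource so as to minimize an invented cost matrix; it is a generic GA, not the allocation model or the ontology machinery developed in the thesis.

```python
# Generic GA: a chromosome maps each activity to one cloud resource index;
# fitness is the (invented) total execution cost of that assignment.
import random

def genetic_allocation(costs, pop_size=30, generations=100, mutation_rate=0.1, seed=0):
    rng = random.Random(seed)
    n_acts, n_res = len(costs), len(costs[0])
    cost = lambda chrom: sum(costs[a][r] for a, r in enumerate(chrom))
    population = [[rng.randrange(n_res) for _ in range(n_acts)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)
        parents = population[: pop_size // 2]            # simple truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n_acts)               # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < mutation_rate:
                child[rng.randrange(n_acts)] = rng.randrange(n_res)
            children.append(child)
        population = parents + children
    best = min(population, key=cost)
    return best, cost(best)

# costs[a][r] = invented cost of running activity a on resource r
costs = [[4, 2, 7], [1, 6, 3], [5, 5, 2], [8, 1, 4]]
print(genetic_allocation(costs))
```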
Castellano, Mattia. "Business Process Management e tecniche per l'applicazione del Process Mining. Il caso Università degli Studi di Parma." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.
Kluska, Martin. "Získávání znalostí z procesních logů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-399172.
Schönig, Stefan, Cristina Cabanillas Macias, Claudio Di Ciccio, Stefan Jablonski, and Jan Mendling. "Mining team compositions for collaborative work in business processes." Springer Berlin Heidelberg, 2016. http://dx.doi.org/10.1007/s10270-016-0567-4.
Southavilay, Vilaythong. "A Data Mining Toolbox for Collaborative Writing Processes." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9764.
Moses, Lucian Benedict. "Flotation as a separation technique in the coal gold agglomeration process." Thesis, Cape Technikon, 2000. http://hdl.handle.net/20.500.11838/2155.
Internationally, there is an increasing need for safer, more environmentally acceptable processes that can be applied to mining operations, especially on a small scale, where mercury amalgamation is the main process used for the recovery of free gold. An alternative, more environmentally acceptable process, called the Coal Gold Agglomeration (CGA) process, has been investigated at the Cape Technikon. This work examines the application of flotation as a means of separation for the CGA process. The CGA process is based on the recovery of hydrophobic gold particles from ore slurries into agglomerates formed from coal and oil. The agglomerates are separated from the slurry through scraping, screening, flotation or a combination of these, and are then ashed to release the gold particles, after which the gold is smelted to form bullion. All components were contacted for fifty minutes, after which a frother was added; after three minutes of conditioning, air was introduced into the system at a rate of one litre per minute per cell volume. The addition of a collector (potassium amyl xanthate) at the start of each run significantly improved gold recoveries. Preliminary experiments indicated that the use of baffles decreased gold recoveries, which was concluded to be due to agglomerate breakage. The system was also found to be frother-selective, and hence only DOW-200 was used in subsequent experiments. Either a significant increase or a significant decrease in the air addition rate had a negative effect on recoveries; therefore, the air addition rate was not altered during further tests. The use of tap water, as opposed to distilled water, decreased the attainable recoveries by less than five per cent, a very encouraging result in terms of the practical implementation of the CGA process.
Reguieg, Hicham. "Using MapReduce to scale event correlation discovery for process mining." PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2014. http://tel.archives-ouvertes.fr/tel-01002623.
Ostovar, Alireza. "Business process drift: Detection and characterization." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/127157/1/Alireza_Ostovar_Thesis.pdf.
Canturk, Deniz. "Time-based Workflow Mining." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606149/index.pdf.
Full textworkflow logs"
containing information about the workflow process as it is actually being executed. In this thesis, new mining technique based on time information is proposed. It is assumed that events in workflow logs bear timestamps. This information is used in to determine task orders and control flows between tasks. With this new algorithm, basic workflow structures, sequential, parallel, alternative and iterative (i.e., loops) routing, and advance workflow structure or-join can be mined. While mining the workflow structures, this algorithm also handles the noise problem.
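The core of the timestamp-based idea can be shown in a few lines: from a timestamped log one derives the directly-follows relation, treats activity pairs observed in both orders as parallel, and treats one-directional pairs as causal. The sketch below is a generic illustration of that reasoning, not the algorithm developed in the thesis.

```python
# Derive causal vs. parallel relations between activities from event timestamps.
from collections import defaultdict

def order_relations(cases):
    follows = defaultdict(int)                      # (a, b) -> directly-follows count
    for events in cases.values():
        ordered = [a for _, a in sorted(events)]    # sort each case by timestamp
        for a, b in zip(ordered, ordered[1:]):
            follows[(a, b)] += 1
    causal, parallel = set(), set()
    for (a, b) in follows:
        if (b, a) in follows:
            parallel.add(frozenset((a, b)))         # seen in both orders -> parallel
        else:
            causal.add((a, b))                      # one direction only -> causal
    return causal, parallel

cases = {
    "c1": [(1, "A"), (2, "B"), (3, "C"), (4, "D")],
    "c2": [(1, "A"), (2, "C"), (3, "B"), (4, "D")],
}
causal, parallel = order_relations(cases)
print("causal:", sorted(causal))
print("parallel:", [tuple(sorted(p)) for p in parallel])
```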
Fordal, Arnt Ove. "Process Data Mining for Parameter Estimation : With the DYNIA Method." Thesis, Norwegian University of Science and Technology, Department of Engineering Cybernetics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10005.
Updating the model parameters of the control system of an oil and gas production system, for reasons of cost-effectiveness and production optimization, requires a data set of input and output values for the system identification procedure. For system identification to provide a well-performing model, this data set must be informative. Traditionally, the way of obtaining an informative data set has been to take the production system out of normal operation in order to perform experiments specifically designed to produce informative data. It is, however, desirable to use segments of process data from normal operation in the system identification procedure, as this eliminates the costs connected with a halt of operation. The challenge is to identify segments of the process data that give an informative data set. Dynamic Identifiability Analysis (DYNIA) is an approach to locating periods of high information content and parameter identifiability in a data set. An introduction to the concepts of data mining, system identification and parameter identifiability lays the foundation for an extensive review of the DYNIA method in this context. An implementation of the DYNIA method is presented. Examples and a case study show promising results for the practical functionality of the method, but also highlight elements that should be improved. A discussion of the industrial applicability of DYNIA is presented, as well as suggestions for modifications that may improve the method.
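A toy version of this windowed identifiability analysis can be built by Monte Carlo sampling a parameter, scoring every sample on each sliding window of the data, and measuring how tightly the best-scoring samples cluster per window (a narrow spread suggests the window is informative for that parameter). The sketch below does this for a single gain of an invented first-order model; it follows the spirit of DYNIA rather than its published details.

```python
# Windowed identifiability: a narrow spread of the top parameter samples in a
# window indicates that the window carries information about the parameter.
import random
import statistics

def simulate(gain, u):                       # invented first-order response model
    y, out = 0.0, []
    for uk in u:
        y = 0.8 * y + gain * uk
        out.append(y)
    return out

def window_identifiability(u, y_meas, window=20, n_samples=200, top_frac=0.1, seed=1):
    rng = random.Random(seed)
    samples = [rng.uniform(0.0, 2.0) for _ in range(n_samples)]
    predictions = {g: simulate(g, u) for g in samples}
    spreads = []
    for start in range(0, len(u) - window):
        def window_error(g):
            return sum((predictions[g][k] - y_meas[k]) ** 2 for k in range(start, start + window))
        best = sorted(samples, key=window_error)[: int(n_samples * top_frac)]
        spreads.append((start, statistics.pstdev(best)))   # small spread = identifiable here
    return spreads

u = [0.0] * 40 + [1.0] * 40                  # step input halfway through the record
y_meas = simulate(1.3, u)                    # "measured" data generated with gain = 1.3
for start, spread in window_identifiability(u, y_meas)[::10]:
    print(start, round(spread, 3))
```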
Bani, Mustafa Ahmed Mahmood. "A knowledge discovery and data mining process model for metabolomics." Thesis, Aberystwyth University, 2012. http://hdl.handle.net/2160/6889468e-851f-47fd-bd44-fe65fe516c7a.
Nyman, Tobias. "Kan Process Mining informera RPA för att automatisera komplexa affärsprocesser?" Thesis, Högskolan i Karlstad, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-84879.
Myers, David. "Detecting cyber attacks on industrial control systems using process mining." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/130799/1/David_Myers_Thesis.pdf.
Öberg, Johanna. "Time prediction and process discovery of administration process." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-432893.
Al Dahami, Abdulelah. "A stage-based model for enabling decision support in process mining." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/103533/1/Abdulelah%20Saleh%20A_Al%20Dahami_Thesis.pdf.
Bala, Saimir. "Mining Projects from Structured and Unstructured Data." Jens Gulden, Selmin Nurcan, Iris Reinhartz-Berger, Widet Guédria, Palash Bera, Sérgio Guerreiro, Michael Fellman, Matthias Weidlich, 2017. http://epub.wu.ac.at/7205/1/ProjecMining%2DCamera%2DReady.pdf.
Hasan, Muayad Mohammed. "Enhanced recovery of heavy oil using a catalytic process." Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/53253/.
Rojas-Candio, Piero, Arturo Villantoy-Pasapera, Jimmy Armas-Aguirre, and Santiago Aguirre-Mayorga. "Evaluation Method of Variables and Indicators for Surgery Block Process Using Process Mining and Data Visualization." Repositorio Academico - UPC, 2021. http://hdl.handle.net/10757/653799.
In this paper, we propose a method that allows us to formulate and evaluate process mining indicators through questions related to process traceability, and to bring about a clear understanding of the process variables through data visualization techniques. The proposal identifies bottlenecks and policy violations that arise from the difficulty of carrying out measurements and analysis for the improvement of process quality assurance and process transformation. It was validated in a health clinic in Lima, Peru, with data obtained from an information system that supports the surgery block process. Finally, the results contribute to the optimization of decision-making by the medical staff involved in the surgery block process.
Peer reviewed
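One of the simplest traceability indicators of the kind formulated in the entry above is the average waiting time between consecutive activities in each case, which points directly at bottleneck hand-offs. The sketch below computes it from a generic event log; the activities and timestamps are invented, and the metric is merely illustrative of what such a method evaluates.

```python
# Average waiting time between consecutive activities, per activity hand-off.
from collections import defaultdict
from datetime import datetime

def handoff_waiting_times(event_log):
    per_case = defaultdict(list)
    for case_id, activity, ts in event_log:
        per_case[case_id].append((datetime.fromisoformat(ts), activity))
    totals = defaultdict(lambda: [0.0, 0])           # (from, to) -> [sum hours, count]
    for events in per_case.values():
        events.sort()
        for (t1, a1), (t2, a2) in zip(events, events[1:]):
            bucket = totals[(a1, a2)]
            bucket[0] += (t2 - t1).total_seconds() / 3600.0
            bucket[1] += 1
    return {pair: total / count for pair, (total, count) in totals.items()}

event_log = [
    ("s-1", "Admission", "2020-05-01T08:00:00"),
    ("s-1", "Anesthesia", "2020-05-01T10:30:00"),
    ("s-1", "Surgery", "2020-05-01T11:00:00"),
    ("s-2", "Admission", "2020-05-02T09:00:00"),
    ("s-2", "Anesthesia", "2020-05-02T13:00:00"),
]
for pair, hours in handoff_waiting_times(event_log).items():
    print(pair, round(hours, 2), "h")
```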
Evangelista Pescorán, Misael Elias, and Andre Junior Coronado Torres. "Modelo para la evaluación de variables en el Sector Salud utilizando Process Mining y Data Visualization." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2020. http://hdl.handle.net/10757/653132.
The present work proposes a model for the evaluation of variables in the health sector using Process Mining and Data Visualization, supported by the Celonis tool. It addresses the difficulty of understanding the activities involved in business processes and their results. The project focuses on the investigation of two emerging disciplines. The first is Process Mining, which focuses mainly on processes and the data recorded for each event, in order to discover a model, check the conformance of processes, or improve them (Process Mining: An innovative technique for the improvement of the processes, 2016). The second is Data Visualization, which allows data to be presented in a graphic or pictorial format ("Data Visualization: What it is and why it matters", 2016). The project mainly involves research: first, Process Mining and Data Visualization techniques are analyzed. Second, the characteristics and qualities of the two disciplines are distilled, and a model is designed for the evaluation of variables in the health sector using Process Mining and Data Visualization; this adds value because a graphic or pictorial format that adequately represents the results of a process mining technique makes understanding and analysis in decision-making more accurate. Third, the model is validated in an institution that provides services in the health sector, analyzing one of its core processes. Finally, a continuity plan is drawn up so that the proposed model can be applied to process optimization techniques in organizations.
Thesis