
Dissertations / Theses on the topic 'Process mining'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Process mining.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

van der Aalst, Wil M. P., Arya Adriansyah, Ana Karla Alves de Medeiros, Franco Arcieri, Thomas Baier, Tobias Blickle, R. P. Jagadeesh Chandra Bose, et al. "Process Mining Manifesto." Springer, 2011. http://dx.doi.org/10.1007/978-3-642-28108-2_19.

Full text
Abstract:
Process mining techniques are able to extract knowledge from event logs commonly available in today's information systems. These techniques provide new means to discover, monitor, and improve processes in a variety of application domains. There are two main drivers for the growing interest in process mining. On the one hand, more and more events are being recorded, thus, providing detailed information about the history of processes. On the other hand, there is a need to improve and support business processes in competitive and rapidly changing environments. This manifesto is created by the IEEE Task Force on Process Mining and aims to promote the topic of process mining. Moreover, by defining a set of guiding principles and listing important challenges, this manifesto hopes to serve as a guide for software developers, scientists, consultants, business managers, and end-users. The goal is to increase the maturity of process mining as a new tool to improve the (re)design, control, and support of operational business processes.
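The manifesto's core notion, extracting process knowledge from recorded event logs, can be illustrated with a minimal sketch (the log and activity names below are invented, not taken from the manifesto): counting the directly-follows relation, the starting point of many discovery algorithms.

```python
from collections import defaultdict

# A toy event log: each trace is the ordered list of activities
# recorded for one case. Activity names are illustrative only.
event_log = [
    ["register", "check", "approve", "notify"],
    ["register", "check", "reject", "notify"],
    ["register", "check", "approve", "notify"],
]

def directly_follows(log):
    """Count how often activity a is directly followed by activity b."""
    counts = defaultdict(int)
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return dict(counts)

dfg = directly_follows(event_log)
print(dfg)
```

The resulting counts form a directly-follows graph, from which discovery algorithms derive control-flow models.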
APA, Harvard, Vancouver, ISO, and other styles
2

Khodabandelou, Ghazaleh. "Mining Intentional Process Models." PhD thesis, Université Panthéon-Sorbonne - Paris I, 2014. http://tel.archives-ouvertes.fr/tel-01010756.

Full text
Abstract:
Until now, process mining techniques have modeled processes in terms of the sequences of tasks that occur during process execution. However, research on process modeling and guidance has shown that many problems, such as a lack of flexibility or adaptation, are solved more effectively when intentions are explicitly specified. This thesis presents a novel process mining approach, called the Map Miner Method (MMM). This method is designed to automate the construction of an intentional process model from users' activity traces. MMM uses hidden Markov models to model the relationship between users' activities and their strategies (i.e., the different ways of achieving intentions). The method also includes two algorithms specifically developed to determine users' intentions and to construct the intentional process model of the Map. MMM can construct the Map process model at different levels of precision (pseudo-Map and Map process model) with respect to the Map metamodel formalism. The proposed method as a whole was applied and validated on practical datasets, in a large-scale experiment, on the event traces of Eclipse UDC developers.
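MMM's actual algorithms are specific to the Map metamodel, but the underlying hidden-Markov-model idea of relating observed activities to latent strategies can be sketched generically. Below is a tiny Viterbi decoder; all state names, activity names, and probabilities are invented for the example and are not from the thesis.

```python
# Hidden states = hypothetical user strategies; observations = logged activities.
states = ["explore", "exploit"]
start_p = {"explore": 0.6, "exploit": 0.4}
trans_p = {"explore": {"explore": 0.7, "exploit": 0.3},
           "exploit": {"explore": 0.4, "exploit": 0.6}}
emit_p = {"explore": {"open": 0.5, "edit": 0.1, "search": 0.4},
          "exploit": {"open": 0.1, "edit": 0.7, "search": 0.2}}

def viterbi(obs):
    """Return the most likely strategy sequence for an activity trace."""
    # Each layer maps state -> (probability of best path, that path).
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        layer = {}
        for s in states:
            prob, path = max(
                (V[-1][p][0] * trans_p[p][s] * emit_p[s][o], V[-1][p][1])
                for p in states
            )
            layer[s] = (prob, path + [s])
        V.append(layer)
    return max(V[-1].values())[1]

print(viterbi(["search", "open", "edit", "edit"]))
```

Decoding a trace of activities yields one strategy per activity, which is the kind of activity-to-strategy assignment MMM builds on.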
APA, Harvard, Vancouver, ISO, and other styles
3

Remberg, Julia. "Grundlagen des Process Mining : [Studienarbeit] /." [München] : Grin-Vel, 2008. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=017676071&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Nguyen, Hoang H. "Stage-aware business process mining." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/130602/9/Hoang%20Nguyen%20Thesis.pdf.

Full text
Abstract:
Process mining enables the analysis of event logs to gain actionable insights into an organisation’s operations. However, state-of-the-art process mining techniques do not exploit the natural decomposition characteristics of business processes. “Process stages” are a generic type of business process decomposition prevalent in multiple domains, e.g. the stages of loan processing, the support levels in an IT helpdesk, or the clinical stages in patient treatment. This study contributes a novel approach to process mining based on process stages. The approach is grounded in four techniques that allow the mining of process stages, the automated discovery of process models, the mining of process performance, and the multi-perspective comparison of process variants. The approach has been implemented in an open-source toolset and evaluated with real-life datasets from different domains.
APA, Harvard, Vancouver, ISO, and other styles
5

Baier, Thomas, Jan Mendling, and Mathias Weske. "Bridging abstraction layers in process mining." Elsevier, 2014. http://dx.doi.org/10.1016/j.is.2014.04.004.

Full text
Abstract:
While the maturity of process mining algorithms increases and more process mining tools enter the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Current approaches to event log abstraction try to abstract from the events in an automated way that does not capture the domain knowledge required to fit business activities. This can lead to misinterpretation of discovered process models. We developed an approach that aims to abstract an event log to the same abstraction level that is needed by the business. We use domain knowledge extracted from existing process documentation to semi-automatically match events and activities. Our abstraction approach is able to deal with n:m relations between events and activities and also supports concurrency. We evaluated our approach in two case studies with a German IT outsourcing company.
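The core abstraction step can be sketched minimally. The mapping below is hard-coded and purely illustrative (the paper derives it semi-automatically from process documentation, and additionally handles n:m relations and concurrency, which this toy 1:n version does not):

```python
# Hypothetical mapping from low-level event types to business activities.
event_to_activity = {
    "open_ticket": "Register Incident",
    "assign_agent": "Register Incident",
    "run_diagnosis": "Resolve Incident",
    "write_reply": "Resolve Incident",
    "close_ticket": "Close Incident",
}

def abstract_trace(events):
    """Lift a low-level event trace to the business-activity level,
    merging consecutive events that realize the same activity."""
    activities = []
    for e in events:
        act = event_to_activity.get(e, "Unknown")
        if not activities or activities[-1] != act:
            activities.append(act)
    return activities

trace = ["open_ticket", "assign_agent", "run_diagnosis",
         "write_reply", "close_ticket"]
print(abstract_trace(trace))
```

The abstracted trace is what a business-level discovery algorithm would then consume instead of the raw event stream.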
APA, Harvard, Vancouver, ISO, and other styles
6

Pika, Anastasiia. "Mining process risks and resource profiles." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/86079/1/Anastasiia_Pika_Thesis.pdf.

Full text
Abstract:
This research contributes novel techniques for identifying and evaluating business process risks and analysing human resource behaviour. The developed techniques use predefined indicators to identify process risks in individual process instances, evaluate overall process risk, predict process outcomes and analyse human resource behaviour based on the analysis of information about process executions recorded in event logs by information systems. The results of this research can help managers to more accurately evaluate the risk exposure of their business processes, to more objectively evaluate the performance of their employees, and to identify opportunities for improvement of resource and process performance.
APA, Harvard, Vancouver, ISO, and other styles
7

Gerke, Kerstin. "Continual process improvement based on reference models and process mining." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2011. http://dx.doi.org/10.18452/16353.

Full text
Abstract:
This dissertation takes business processes as its subject. They are a major asset of any organization and are naturally subject to continual improvement. Even an optimally designed process, having once proven itself, must remain flexible, as new developments demand swift adaptations. However, many organizations do not adequately describe their processes, though doing so is a prerequisite for improving them. Very often the process model created during an information system’s implementation either is not used in the first place or is not maintained, resulting in an obvious lack of correspondence between the model and operational reality. Process mining techniques prevent this: they extract the process knowledge inherent in an information system and visualize it in the form of process models. Continual process improvement depends greatly on this modeling approach, and reference models, such as ITIL and CobiT, are suitable and powerful means for the efficient design and control of processes. Process improvement typically consists of a number of analysis, design, implementation, execution, monitoring, and evaluation activities. This dissertation proposes a methodology that supports and facilitates these activities. An empirical analysis revealed both the challenges and the potential benefits of successfully applying process mining techniques. This in turn led to a detailed consideration of specific aspects of data preparation for process mining algorithms, with a focus on the provision of enterprise data and RFID events. The dissertation also examines the importance of analyzing the execution of reference processes to ensure compliance with modified or entirely new business processes. The methodology was tried out in a number of practical cases; the results demonstrate its power and its general applicability for efficient, continual, inter-departmental and inter-organizational process improvement.
APA, Harvard, Vancouver, ISO, and other styles
8

Muñoz-Gama, Jorge. "Conformance checking and diagnosis in process mining." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/284964.

Full text
Abstract:
In recent decades, the capability of information systems to generate and record overwhelming amounts of event data has grown exponentially in several domains, in particular in industrial scenarios. Devices connected to the internet (the Internet of Things), social interaction, mobile computing, and cloud computing provide new sources of event data, and this trend will continue in the coming decades. The omnipresence of large amounts of event data stored in logs is an important enabler for process mining, a novel discipline for addressing challenges related to business process management, process modeling, and business intelligence. Process mining techniques can be used to discover, analyze, and improve real processes by extracting models from observed behavior. The capability of these models to represent reality determines the quality of the results obtained from them, conditioning their usefulness. Conformance checking is the aim of this thesis: modeled and observed behavior are analyzed to determine whether a model is a faithful representation of the behavior observed in the log. Most efforts in conformance checking have focused on measuring and ensuring that models capture all the behavior in the log, i.e., fitness. Other properties, such as ensuring a precise model (one not including unnecessary behavior), have been disregarded. The first part of the thesis focuses on analyzing and measuring the precision dimension of conformance, where models that describe reality precisely are preferred to overly general models. The thesis includes a novel technique based on detecting escaping arcs, i.e., points where the modeled behavior deviates from the behavior reflected in the log. The detected escaping arcs are used to determine, in terms of a metric, the precision between log and model, and to locate possible actuation points for achieving a more precise model. The thesis also presents a confidence interval on the provided precision metric, and a multi-factor measure to assess the severity of the detected imprecisions. Checking conformance can be time-consuming in real-life scenarios, and understanding the reasons behind conformance mismatches can be an effort-demanding task. The second part of the thesis shifts the focus from the precision dimension to the fitness dimension, and proposes the use of decomposition techniques to aid in checking and diagnosing fitness. The proposed approach is based on decomposing the model into single-entry single-exit components. The resulting fragments represent subprocesses within the main process with a simple interface to the rest of the model. Fitness checking per component provides well-localized conformance information, aiding the diagnosis of the causes behind the problems. Moreover, the relations between components can be exploited to improve the diagnostic capabilities of the analysis, identifying areas with a high degree of mismatches, or providing a hierarchy for zoom-in/zoom-out analysis. Finally, the thesis proposes two main applications of the decomposed approach. First, the proposed theory is extended to incorporate data information for fitness checking in a decomposed manner. Second, a real-time event-based framework is presented for monitoring fitness.
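The escaping-arcs idea can be sketched in simplified, unweighted form (the thesis's actual metric is more elaborate, e.g. weighted and accompanied by a confidence interval; the model, log, and activity names here are invented):

```python
from collections import defaultdict

# The "model" is given here abstractly as a function from a trace prefix
# to the set of activities the model allows next (hypothetical model:
# a -> b -> (c or d) -> e).
def model_allows(prefix):
    table = {(): {"a"}, ("a",): {"b"}, ("a", "b"): {"c", "d"},
             ("a", "b", "c"): {"e"}, ("a", "b", "d"): {"e"}}
    return table.get(tuple(prefix), set())

log = [["a", "b", "c", "e"], ["a", "b", "c", "e"]]

def escaping_arcs_precision(log, allows):
    """1 minus the fraction of modeled continuations never seen in the log."""
    observed = defaultdict(set)
    for trace in log:
        for i, act in enumerate(trace):
            observed[tuple(trace[:i])].add(act)
    allowed = escaping = 0
    for prefix, seen in observed.items():
        m = allows(prefix)
        allowed += len(m)
        escaping += len(m - seen)  # behavior the model permits but the log never uses
    return 1 - escaping / allowed if allowed else 1.0

print(escaping_arcs_precision(log, model_allows))
```

Here the model allows "d" after ("a", "b") but the log never takes it, so that arc is escaping and precision drops below 1.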
APA, Harvard, Vancouver, ISO, and other styles
9

Selig, Henny. "Continuous Event Log Extraction for Process Mining." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210710.

Full text
Abstract:
Process mining is the application of data science technologies to transactional business data to identify or monitor processes within an organization. The analyzed data often originates from process-unaware enterprise software, e.g. Enterprise Resource Planning (ERP) systems. The differences in data management between ERP and process mining systems result in a large fraction of ambiguous cases, affected by convergence and divergence. The consequence is a chasm between the process as interpreted by process mining and the process as executed in the ERP system. In this thesis, a purchasing process of an SAP ERP system is used to demonstrate how ERP data can be extracted and transformed into a process mining event log that expresses ambiguous cases as accurately as possible. As the content and structure of the event log already define the scope (i.e. which process) and the granularity (i.e. activity types), the process mining results depend on the event log quality. The results of this thesis show how the consideration of case attributes, the notion of a case, and the granularity of events can be used to manage the event log quality. The proposed solution supports continuous event extraction from the ERP system.
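The central transformation, grouping ERP-style table rows into traces under a chosen case notion, can be sketched as follows (field names and values are illustrative, not actual SAP table columns):

```python
# Toy ERP-style rows; the chosen case notion (here, a purchase order id)
# determines which rows form one trace.
rows = [
    {"po_id": "4711", "activity": "Create PO", "ts": "2017-03-01"},
    {"po_id": "4712", "activity": "Create PO", "ts": "2017-03-02"},
    {"po_id": "4711", "activity": "Receive Goods", "ts": "2017-03-05"},
    {"po_id": "4711", "activity": "Pay Invoice", "ts": "2017-03-09"},
]

def to_event_log(rows, case_key="po_id"):
    """Group rows into traces by case id, ordered by timestamp."""
    log = {}
    for r in sorted(rows, key=lambda r: r["ts"]):
        log.setdefault(r[case_key], []).append(r["activity"])
    return log

log = to_event_log(rows)
print(log)
```

Choosing a different case notion (e.g. an invoice id instead of the purchase order id) would regroup the same rows into different traces, which is exactly why the case notion shapes the resulting process mining analysis.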
APA, Harvard, Vancouver, ISO, and other styles
11

Bredenkamp, Ben. "Analysis and modelling of mining induced seismicity." Thesis, Stellenbosch : University of Stellenbosch, 2006. http://hdl.handle.net/10019.1/2257.

Full text
Abstract:
Thesis (MScEng (Process Engineering))--University of Stellenbosch, 2006.
Earthquakes and other seismic events are known to have catastrophic effects on people and property. These large-scale events are almost always preceded by smaller-scale seismic events called precursors, such as tremors or other vibrations. The use of precursor data to predict the realization of seismic hazards has been a long-standing technical problem in different disciplines. For example, blasting or other mining activities have the potential to induce the collapse of rock surfaces, or the occurrence of other dangerous seismic events, in large volumes of rock. In this study, seismic data (T4) obtained from a mining concern in South Africa were considered using a nonlinear time series approach. In particular, the method of surrogate analysis was used to characterize the deterministic structure in the data prior to fitting a predictive model. The seismic data set (T4) is a set of seismic events for a small volume of rock in a mine observed over a period of 12 days. The surrogate data were generated to have structure similar to that of T4 according to some basic seismic laws. In particular, the surrogate data sets were generated to have the same autocorrelation structure and amplitude distributions as the underlying data set T4. The surrogate data derived from T4 allow the assessment of some basic hypotheses regarding both types of data sets. The structure in both types of data (i.e. the relationship between past behavior and the future realization of components) was investigated by means of three test statistics, each of which provided partial information on the structure in the data. The first is the average mutual information between the reconstructed past and future states of T4. The second is a correlation dimension estimate, Dc, which gives an indication of the deterministic structure (predictability) of the reconstructed states of T4. The final statistic is the correlation coefficient, which gives an indication of the predictability of the future behavior of T4 based on its past states. The past states of T4 were reconstructed by reducing the dimension of a delay coordinate embedding of the components of T4. The map from past states to future realizations of T4 values was estimated using Long Short-Term Memory (LSTM) recurrent neural networks. The application of LSTM recurrent neural networks to point processes has not been reported before in the literature. Comparison of the stochastic surrogate data with the measured structure in the T4 data set showed that the structure in T4 differed significantly from that of the surrogate data sets. However, the relationship between the past states and the future realization of components for both T4 and the surrogate data did not appear to be deterministic. The application of LSTM to the modeling of T4 shows that the approach can model point processes at least as well as, or even better than, previously reported applications on time series data.
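The state reconstruction step described above, a delay coordinate embedding, can be sketched minimally (the embedding dimension and lag here are arbitrary illustrative values, not those used in the thesis):

```python
# Map a scalar series to reconstructed "states": vectors of lagged values.
def delay_embed(series, dim=3, lag=2):
    """Return the delay-coordinate embedding of a scalar series."""
    n = len(series) - (dim - 1) * lag
    return [tuple(series[i + j * lag] for j in range(dim)) for i in range(n)]

series = [0, 1, 2, 3, 4, 5, 6, 7]
states = delay_embed(series)
print(states)
```

Each reconstructed state bundles the current value with lagged past values; a predictive model (such as the LSTM above) then maps these states to future realizations.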
APA, Harvard, Vancouver, ISO, and other styles
12

Al, Jlailaty Diana. "Mining Business Process Information from Emails Logs for Process Models Discovery." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED028.

Full text
Abstract:
Exchanged information in emails’ texts is usually concerned with complex events or business processes in which the entities exchanging emails collaborate to achieve the processes’ final goals. Thus, the flow of information in the sent and received emails constitutes an essential part of such processes, i.e. the tasks or business activities. Extracting information about business processes from emails can help enhance email management for users. It can also be used to find rich answers to several analytical queries about the employees and the organizations enacting these business processes. None of the previous works has fully dealt with the problem of automatically transforming email logs into event logs to eventually deduce the undocumented business processes. Towards this aim, we work in this thesis on a framework that induces business process information from emails. We introduce approaches that contribute the following: (1) discovering for each email the process topic it is concerned with, (2) finding out the business process instance that each email belongs to, (3) extracting business process activities from emails and associating these activities with metadata describing them, (4) improving the performance of business process instance discovery and business activity discovery from emails by making use of the relation between these two problems, and finally (5) providing a preliminary estimate of the real timestamp of a business process activity instead of using the email timestamp. Using the results of the mentioned approaches, an event log is generated which can be used for deducing the business process models of an email log. The efficiency of all of the above approaches is proven by applying several experiments on the open Enron email dataset.
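The thesis' actual framework uses far richer features than this, but the core step of grouping emails into candidate process instances and emitting an event log can be illustrated with a toy sketch. Everything here is hypothetical (the field names `subject`, `activity`, `timestamp` and the thread-by-normalized-subject heuristic are illustrative assumptions, not the author's method):

```python
import re
from collections import defaultdict

def normalize_subject(subject):
    # Strip reply/forward prefixes so emails of one thread share a key.
    return re.sub(r'^\s*((re|fwd?)\s*:\s*)+', '', subject,
                  flags=re.IGNORECASE).strip().lower()

def emails_to_event_log(emails):
    """Group emails into candidate process instances by normalized subject
    and emit (case_id, activity, timestamp) event-log rows."""
    cases = defaultdict(list)
    for mail in emails:
        cases[normalize_subject(mail['subject'])].append(mail)
    log = []
    for case_id, mails in cases.items():
        for mail in sorted(mails, key=lambda m: m['timestamp']):
            log.append((case_id, mail['activity'], mail['timestamp']))
    return log

# hypothetical mini email log
emails = [
    {'subject': 'Purchase order 42', 'activity': 'submit order', 'timestamp': 1},
    {'subject': 'Re: Purchase order 42', 'activity': 'approve order', 'timestamp': 2},
    {'subject': 'Invoice 7', 'activity': 'send invoice', 'timestamp': 3},
]
log = emails_to_event_log(emails)
```

In the thesis, instance discovery and activity extraction are learned jointly from email content; the subject-thread heuristic above only conveys what "email log to event log" means structurally.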
APA, Harvard, Vancouver, ISO, and other styles
13

Mantila, K. (Kimmo). "Channels to mining industry and technology market." Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201309251727.

Full text
Abstract:
This research is a Master’s thesis made for Parker Hannifin Oy at the University of Oulu, Department of Process and Environmental Engineering. Parker Hannifin Oy commissioned a market research study of the Finnish mining market. The subject was limited to the northern Fennoscandian and Greenland mining markets, with the main focus on the Finnish mining industry and technology market. The research produced new information about the mining market for the decision makers of Parker Hannifin Oy. The research problem was to clarify which delivery models would work with the mining industry, what the commercial potential of the mining industry is, and in which technology areas the supply of Parker Hannifin Oy and the demand of the mining industry would meet. Qualitative and quantitative methods were used in the market research. The sample was collected from public sources and through interviews and questionnaires. The market research began with desk research, in which the researcher investigated the mining market using public information. This was followed by qualitative interviews and questionnaires for the mining industry and for the company that commissioned the research. The data sample was analysed with qualitative research methods. The commercial potential for Parker Hannifin Oy in the Finnish and Swedish mining markets is substantial. Mining is developing and growing very fast in both countries. It was estimated that Parker Hannifin Oy could have increased its revenue by 10 % through sales to the Finnish mining industry in 2012. The production of the Finnish mining industry has been estimated to triple by the year 2022. Also, the Swedish mining industry will need 10,000–15,000 new employees by 2025, which is two to three times more than the Finnish mining industry. The potential mining industry customers considered site containers and a mining company’s own spare part store to be efficient and effective delivery models.
One result of the questionnaire was that most Finnish mining companies keep a store of critical spare parts for mining machines. Products of Parker Hannifin Oy are mostly used in underground mining machines in Finland. The research found that the easiest way for Parker Hannifin Oy to increase its sales could be to find cooperation partners among the local stores and contractors that already do business with local mining companies.
APA, Harvard, Vancouver, ISO, and other styles
14

Bala, Saimir, Jan Mendling, Martin Schimak, and Peter Queteschiner. "Case and Activity Identification for Mining Process Models from Middleware." Springer, Cham, 2018. http://epub.wu.ac.at/6620/1/PoEM2018%2Dsubmitted.pdf.

Full text
Abstract:
Process monitoring aims to provide transparency over operational aspects of a business process. In practice, it is a challenge that traces of business process executions span a number of diverse systems. It is cumbersome manual engineering work to identify which attributes in unstructured event data can serve as case and activity identifiers for extracting and monitoring the business process. Approaches from the literature assume that these identifiers are known a priori and that data is readily available in formats like the eXtensible Event Stream (XES). However, in practice this is hardly the case, specifically when event data from different sources are pooled together in event stores. In this paper, we address this research gap by inferring potential case and activity identifiers in a provenance-agnostic way. More specifically, we propose a semi-automatic technique for discovering event relations that are semantically relevant for business process monitoring. The results are evaluated in an industry case study with an international telecommunication provider.
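The paper infers case identifiers via semantically relevant event relations; a much cruder statistical intuition behind "which column could be a case id" can be sketched as follows. This is not the authors' technique, and the scoring heuristic and field names are assumptions made up for illustration: a good case-id column tends to have values that recur in small groups, unlike a constant column or a unique event key.

```python
from collections import Counter

def score_case_id_candidates(events, attributes):
    """Score each attribute as a candidate case identifier: prefer columns
    whose values recur in small groups rather than being unique per event
    (an event key) or constant across all events."""
    scores = {}
    n = len(events)
    for attr in attributes:
        values = [e[attr] for e in events if attr in e]
        if not values:
            scores[attr] = 0.0
            continue
        distinct = len(Counter(values))
        # One distinct value (constant) and n distinct values (unique key)
        # are both poor case ids; the score peaks at 1.0 in between.
        scores[attr] = (distinct / n) * (1 - distinct / n) * 4
    return scores

# hypothetical pooled event data
events = [
    {'order': 'A', 'event_id': 1, 'system': 'crm'},
    {'order': 'A', 'event_id': 2, 'system': 'crm'},
    {'order': 'B', 'event_id': 3, 'system': 'crm'},
    {'order': 'B', 'event_id': 4, 'system': 'crm'},
]
scores = score_case_id_candidates(events, ['order', 'event_id', 'system'])
```

Here `order` scores highest, matching the intuition that it groups events into cases; the actual paper replaces this kind of frequency heuristic with semantic relation discovery.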
APA, Harvard, Vancouver, ISO, and other styles
15

Turner, Christopher James. "A genetic programming based business process mining approach." Thesis, Cranfield University, 2009. http://dspace.lib.cranfield.ac.uk/handle/1826/4471.

Full text
Abstract:
As business processes become ever more complex, there is a need for companies to understand the processes they already have in place. To undertake this manually would be time consuming. The practice of process mining attempts to automatically construct the correct representation of a process based on a set of process execution logs. The aim of this research is to develop a genetic programming (GP) based approach for business process mining. The focus of this research is on automated/semi-automated business processes within the service industry (by semi-automated it is meant that part of the process is manual and likely to be paper based). This is the first time a GP approach has been used in the practice of process mining. The graph-based representation and fitness parsing used are also unique to the GP approach. A literature review and an industry survey were undertaken as part of this research to establish the state of the art in the research and practice of business process modelling and mining. It is observed that process execution logs exist in most service sector companies but are not utilised for process mining. The development of the new GP approach is documented along with a set of modifications required to enable accuracy in the mining of complex process constructs, semantics and noisy process execution logs. In the context of process mining, accuracy refers to the ability of the mined model to reflect the contents of the event log on which it is based: neither over-describing, by including features that are not recorded in the log, nor under-describing, by including only the most common features and leaving out low-frequency task edges. The complexity of processes, in terms of this thesis, involves the mining of parallel constructs, processes containing complex semantic constructs (AND/XOR split and join points) and processes containing 20 or more tasks.
The level of noise handled by the business process mining approach includes event logs which have a small number of randomly selected tasks missing from a third of their structure. A novel graph representation for use with GP in the mining of business processes is presented, along with a new way of parsing graph-based individuals against process execution logs. The GP process mining approach has been validated with a range of tests drawn from literature and two case studies, provided by the industrial sponsor, utilising live process data. These tests and case studies provide a range of process constructs to fully test and stretch the GP process mining approach. An outlook is given on the future development of the GP process mining approach and of process mining as a practice.
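The "fitness parsing" idea, scoring how well a candidate graph model replays the event log, can be conveyed with a deliberately simplified sketch. The thesis' own representation and fitness function are richer (parallelism, AND/XOR semantics, noise tolerance); the adjacency-set model, the `start` marker and the scoring below are illustrative assumptions only:

```python
def replay_fitness(model, traces):
    """Fraction of log events that can be replayed on a successor-graph
    model: a crude stand-in for the fitness parsing used to score
    candidate process models in evolutionary process mining."""
    replayed = total = 0
    for trace in traces:
        current = 'start'
        for activity in trace:
            total += 1
            if activity in model.get(current, set()):
                replayed += 1
                current = activity
        # Whether the trace reaches a proper end state is ignored here
        # (a simplification a real fitness function would not make).
    return replayed / total if total else 0.0

# model as adjacency sets: which activities may directly follow which
model = {'start': {'a'}, 'a': {'b', 'c'}, 'b': {'d'}, 'c': {'d'}}
traces = [['a', 'b', 'd'], ['a', 'c', 'd'], ['a', 'd']]
score = replay_fitness(model, traces)
```

A GP miner would evaluate many candidate `model` graphs this way each generation, keeping and recombining the fittest; the third trace above shows how a model that cannot explain a log event loses fitness.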
APA, Harvard, Vancouver, ISO, and other styles
16

SOARES, FABIO DE AZEVEDO. "TEXT MINING AT THE INTELLIGENT WEB CRAWLING PROCESS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=13212@1.

Full text
Abstract:
This dissertation presents a study on the application of Text Mining as part of the intelligent Web crawling process. The most usual way of gathering data on the Web is the use of web crawlers. Web crawlers are programs that, once provided with an initial set of URLs (seeds), start the methodical procedure of visiting a site, storing it on disk and extracting the hyperlinks that will be used for the next visits. However, seeking content in this way is an expensive and exhausting task. An intelligent web crawling process, rather than collecting and storing every accessible web document, analyses its crawling options to find links that will probably provide content highly relevant to a topic defined a priori. In the approach suggested in this work, topics are defined not by keywords but by the use of text documents as examples. Next, pre-processing techniques used in Text Mining, including the use of a thesaurus, semantically analyse the document submitted as an example. Based on this analysis, the web crawler is guided toward its objective: retrieving information relevant to the document. Starting from seeds or by querying the available search engines, the crawler analyses, exactly as in the previous step, every document retrieved from the Web. Each retrieved document is then compared with the example document; once the similarity level between them is obtained, the retrieved document’s hyperlinks are analysed, queued and, later, dequeued according to each one’s probable degree of importance.
At the end of the data-gathering process, another Text Mining technique is applied with the purpose of selecting the most representative documents of the collection: Document Clustering. The implementation of a tool incorporating all the researched heuristics made it possible to obtain practical results, evaluate the performance of the developed techniques and compare the results with other means of retrieving data on the Web. This work shows that Text Mining is a path worth exploring in the process of retrieving relevant information on the Web.
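The score-and-queue loop described above (compare each fetched page to the example document, then prioritise its outgoing links by that similarity) can be sketched minimally. The dissertation's pipeline additionally uses thesaurus-based pre-processing; the plain term-frequency cosine, the example text and the URLs below are illustrative assumptions:

```python
import heapq
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    # Cosine similarity of simple term-frequency vectors.
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

example_doc = "process mining extracts process models from event logs"
frontier = []  # max-priority queue via negated similarity scores

def enqueue_links(page_text, links):
    """Score a fetched page against the example document and push its
    outgoing links with that score as their crawl priority."""
    score = cosine_similarity(example_doc, page_text)
    for url in links:
        heapq.heappush(frontier, (-score, url))

enqueue_links("event logs and process models", ["http://a.example"])
enqueue_links("cooking recipes", ["http://b.example"])
best = heapq.heappop(frontier)[1]  # most promising link to visit next
```

Links from on-topic pages bubble to the front of the frontier, so the crawl naturally drifts toward content relevant to the example document.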
APA, Harvard, Vancouver, ISO, and other styles
17

JIMÉNEZ, HAYDÉE GUILLOT. "APPLYING PROCESS MINING TO THE ACADEMIC ADMINISTRATION DOMAIN." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2017. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=32300@1.

Full text
Abstract:
Higher Education Institutions keep a sizable amount of data, including student records and the structure of degree curricula. This work, adopting a process mining approach, focuses on the problem of identifying how closely students follow the recommended order of the courses in a degree curriculum, and to what extent their performance is affected by the order they actually adopt. It addresses this problem by applying two existing techniques to student records: process discovery with conformance checking, and frequent itemsets. Finally, the dissertation covers experiments performed by applying these techniques to a case study involving over 60,000 student records from PUC-Rio. The experiments show that the frequent itemsets technique performs better than the process discovery and conformance checking techniques. They also confirm the relevance of analyses based on the process mining approach in helping academic coordinators in their quest for better degree curricula.
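The frequent-itemsets side of this work can be illustrated at its simplest level: counting which courses co-occur in the same term across many student records. The course names and the pair-counting cut are hypothetical; a full Apriori run would extend this first level to larger itemsets:

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Count co-occurring item pairs across transactions and keep those
    meeting the minimum support: the first level of an Apriori-style
    frequent-itemset search."""
    counts = Counter()
    for items in transactions:
        for pair in combinations(sorted(set(items)), 2):
            counts[pair] += 1
    return {pair: c for pair, c in counts.items() if c >= min_support}

# each transaction: the set of courses one student took in a given term
terms = [
    {'calculus 1', 'programming 1'},
    {'calculus 1', 'programming 1', 'physics 1'},
    {'calculus 1', 'physics 1'},
]
pairs = frequent_pairs(terms, min_support=2)
```

Pairs that recur across many students reveal de facto course groupings, which can then be compared against the recommended curriculum order.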
APA, Harvard, Vancouver, ISO, and other styles
18

Patel, Akash. "Data Mining of Process Data in Multivariable Systems." Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-201087.

Full text
Abstract:
Performing system identification experiments in order to model control plants in industry processes can be costly and time consuming. Therefore, with increasingly more computational power available and abundant access to logged historical data from plants, data mining algorithms have become more appealing. This thesis focuses on evaluating a data mining algorithm for multivariate processes where the mined data can potentially be used for system identification. The first part of the thesis explores the effect many of the necessary user-chosen parameters have on the algorithmic performance. In order to do this, a GUI designed to assist in parameter selection is developed. The second part of the thesis evaluates the proposed algorithm’s performance by modelling a simulated process based on intervals found by the algorithm. The results show that the algorithm is particularly sensitive to the choice of cut-off frequencies in the bandpass filter, the threshold of the reciprocal condition number and the Laguerre filter order. It is also shown that with the GUI it is possible to select parameters such that the algorithm performs satisfactorily and mines data relevant for system identification. Finally, the results show that it is possible to use the mined data to model a simulated process using system identification techniques with good accuracy.
APA, Harvard, Vancouver, ISO, and other styles
19

Cotroneo, Orazio. "Mining declarative process models with quantitative temporal constraints." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24636/.

Full text
Abstract:
Time has always been a subject of study in science, philosophy and religion. The ancient Greeks referred to time with two separate words, Chronos and Kairos: Chronos referring to its quantitative aspect, and Kairos to its qualitative aspect. In this work, time, as a measurement system for a given business context, is explored in both of its forms. In the last few years specifically, embedding the notion of quantitative time in the discovery of declarative mining models has been a focus of research. The aim of this work is to enrich declarative process mining models with the notion of quantitative time, and then to adapt the discovery algorithm, originally inspired by Mooney in 1995 and modified by Palmieri in 2020, to discover the enriched models.
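What "a declarative constraint enriched with quantitative time" means can be shown with one Declare-style template. This is only the semantics of a time-aware *response* constraint, not the thesis' discovery algorithm; the activity names, timestamps and the `max_delay` bound are made up for illustration:

```python
def holds_response(trace, antecedent, consequent, max_delay):
    """Check a time-aware Declare 'response' constraint on one trace:
    every occurrence of `antecedent` must be followed by `consequent`
    within `max_delay` time units. A trace is a list of (activity, ts)."""
    for i, (act, ts) in enumerate(trace):
        if act == antecedent:
            if not any(a == consequent and 0 <= t - ts <= max_delay
                       for a, t in trace[i + 1:]):
                return False
    return True

trace_ok = [('order', 0), ('ship', 3)]     # ship follows within the window
trace_late = [('order', 0), ('ship', 10)]  # ship follows, but too late
ok = holds_response(trace_ok, 'order', 'ship', 5)
late = holds_response(trace_late, 'order', 'ship', 5)
```

A discovery algorithm along the lines the abstract describes would search for constraint instances (and time bounds) like this that hold across a sufficient share of the log's traces.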
APA, Harvard, Vancouver, ISO, and other styles
20

Komulainen, O. (Olli). "Process mining benefits for organizations using ERP systems." Master's thesis, University of Oulu, 2017. http://urn.fi/URN:NBN:fi:oulu-201712013238.

Full text
Abstract:
Various sources have stated for the past couple of decades how globalization is increasing the pace of business environments and how companies must concentrate on their core operations. Today, however, the business environment is even more demanding than before, and large corporations compete neck and neck with disruptive startups. Organizations must therefore deliver more added value to the end customer with fewer internal resources, which is directly linked to the efficiency and effectiveness of organizations. One foundational way to increase efficiency and effectiveness is to analyze and improve companies’ business processes. Traditional methods, such as holding workshops to map the business processes and then analyzing and further developing them, can be very time consuming and are mostly limited to "wish/to-be" processes. Traditional methods also lack the ability to describe the "as-is" state of business processes, where actual variation is present. Furthermore, organizations clearly have challenges with their business process management. On the other hand, in past years organizations have been collecting huge amounts of operational data in their data warehouses, and terminology such as big data has been a hot topic for several years. Still, most of the data is not used very effectively in any analysis, let alone applied to data-based decision making. Process mining combines business process management and a data-driven approach to address the rising needs of organizations. The core idea of process mining is to harness the data underlying ERP and other IT systems, using algorithms to create a visualized process flowchart. Moreover, process mining concentrates on making discoveries and findings that locate pain points and improvement areas where the organization can apply business process development actions. Process mining is also a very timely research area whose actual end-user benefits are not well documented.
The goal of this research is first to validate that organizations have challenges with their business processes and that they have data available in their ERP systems. The main contribution of this research is then to evaluate what kinds of benefits users can achieve from process mining by conducting a case study on one large process mining vendor (QPR ProcessAnalyzer). The research objectives are met by answering three research questions: RQ1) What kinds of business process management challenges do organizations have? RQ2) Does ERP systems’ data help to improve business processes? RQ3) How does process mining help organizations to understand and improve their business processes? The first two research questions relate to both the literature review (business process management, enterprise resource planning systems and business intelligence) and the case study. The third is addressed mainly by the case study, which has two angles: an internal QPR Software employee point of view and an external customer perspective from two large QPR Software customers. The results of the research introduce many benefits and use cases of process mining. A new use case, applying process mining to support an ERP implementation project, is presented in detail together with estimated cost saving calculations. Furthermore, it is found that process mining addresses all three main BPM challenges identified during the research. The results can generally be applied to the process mining industry, as the case study setting has been grounded in the past literature and terminology.
APA, Harvard, Vancouver, ISO, and other styles
21

Shahbaz, Muhammad. "Product and manufacturing process improvement using data mining." Thesis, Loughborough University, 2005. https://dspace.lboro.ac.uk/2134/34834.

Full text
Abstract:
In recent years manufacturing enterprises have become increasingly automated and now collect and store large quantities of data relating to their products and production systems. This electronically stored data can hold both process measures and hidden information, which can be very important once discovered. Knowledge discovery in databases provides the tools to explore historic or current data and reveal many kinds of previously unknown knowledge from these databases. Manufacturing enterprise data is complex and may include information relating to design, process improvement and limitations, manufacturing machines and tools, and product quality. This thesis focuses on issues relating to information extraction from engineering databases in general, and from manufacturing processes’ historical databases in particular. It also addresses the important issue of how the process or the design of the product can be improved based on such information.
APA, Harvard, Vancouver, ISO, and other styles
22

Liu, Siyao. "Integrating Process Mining with Discrete-Event Simulation Modeling." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5735.

Full text
Abstract:
Discrete-event simulation (DES) is an invaluable tool which organizations can use to better understand, diagnose, and optimize their operational processes. Studies have shown that in the typical DES exercise, the greatest amount of time is spent on developing an accurate model of the process to be studied. Process mining, a related field of study, focuses on using historical data stored in software databases to accurately recreate and analyze business processes. Utilizing process mining techniques to rapidly develop DES models can drastically reduce the time spent building simulation models, which will ultimately enable organizations to identify and correct shortcomings in their operations more quickly. Although there have been significant advances in process mining research, several issues with current process mining methods still prevent them from seeing widespread industry adoption. One such issue, which this study examines, is the lack of cross-compatibility between process mining tools and other process analysis tools. Specifically, this study develops and characterizes a method through which mined process models can be converted into discrete-event simulation models. The developed method utilizes a plugin written for the ProM Framework, an existing collection of process mining tools, which takes a mined process model as its input and outputs an Excel workbook providing the process data in a format more easily read by DES packages. Two event logs which mimic real-world processes were used in the development and validation of the plugin. The developed plugin successfully extracted the critical process data from the mined process model and converted it into a format more easily utilized by DES packages. Several limitations constrain model accuracy, but the plugin developed in this study shows that the conversion of process models to basic simulation models is possible.
Future research can focus on addressing these limitations to improve model accuracy.
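The kind of process data a DES package needs from an event log (activity-to-activity timing and routing probabilities) can be sketched in a few lines. This is not the ProM plugin described in the abstract; the log format and the two derived statistics are simplifying assumptions chosen for illustration:

```python
from collections import defaultdict

def log_to_des_inputs(event_log):
    """Derive basic DES inputs from an event log: mean inter-event time
    per activity transition and per-activity routing probabilities.
    event_log: iterable of (case_id, activity, timestamp) tuples."""
    by_case = defaultdict(list)
    for case, activity, ts in event_log:
        by_case[case].append((ts, activity))
    gaps = defaultdict(list)
    follows = defaultdict(lambda: defaultdict(int))
    for events in by_case.values():
        events.sort()
        for (t1, a1), (t2, a2) in zip(events, events[1:]):
            gaps[(a1, a2)].append(t2 - t1)
            follows[a1][a2] += 1
    mean_gap = {k: sum(v) / len(v) for k, v in gaps.items()}
    probs = {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
             for a, nxt in follows.items()}
    return mean_gap, probs

# hypothetical two-case event log
log = [('c1', 'register', 0), ('c1', 'check', 4),
       ('c2', 'register', 1), ('c2', 'check', 3), ('c2', 'pay', 8)]
mean_gap, probs = log_to_des_inputs(log)
```

Tables like `mean_gap` and `probs` are exactly the sort of content a conversion tool might write to a workbook for a simulation package to parameterise service times and branching.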
APA, Harvard, Vancouver, ISO, and other styles
23

Burattin, Andrea <1984&gt. "Applicability of Process Mining Techniques in Business Environments." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5446/1/thesis-final-v4.pdf.

Full text
Abstract:
This thesis analyses problems related to the applicability of Process Mining tools and techniques in business environments. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying circumstances where problems can emerge: data preparation, the actual mining, and results interpretation. Other problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigated the data preparation problem and proposed a solution for the identification of the "case ids" whenever this field is not explicitly indicated. After that, we concentrated on problems at mining time and propose a generalization of a well-known control-flow discovery algorithm in order to exploit non-instantaneous events. The use of interval-based recording leads to an important performance improvement. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for extending a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches.
Two actual mining algorithms are proposed: the first is an adaptation of a frequency counting algorithm to the control-flow discovery problem; the second is a framework of models which can be used for different kinds of streams (stationary versus evolving).
APA, Harvard, Vancouver, ISO, and other styles
24

Burattin, Andrea <1984>. "Applicability of Process Mining Techniques in Business Environments." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5446/.

Full text
Abstract:
This thesis analyses problems related to the applicability, in business environments, of Process Mining tools and techniques. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying circumstances where problems can emerge: data preparation, actual mining, and results interpretation. Other problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigated the data preparation problem and proposed a solution for the identification of the "case-ids" whenever this field is not explicitly indicated. After that, we concentrated on problems at mining time and propose the generalization of a well-known control-flow discovery algorithm in order to exploit non-instantaneous events. The usage of interval-based recording leads to an important improvement in performance. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for the extension of a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches.
Two actual mining algorithms are proposed: the first adapts a frequency-counting algorithm to the control-flow discovery problem; the second constitutes a framework of models that can be used for different kinds of streams (stationary versus evolving).
APA, Harvard, Vancouver, ISO, and other styles
25

PESTANA, L. F. "Aplicação do Process Mining na Auditoria de Processos Governamentais." Universidade Federal do Espírito Santo, 2017. http://repositorio.ufes.br/handle/10/8692.

Full text
Abstract:
The auditing of business processes is a topic of growing relevance in the literature. However, traditional, manual techniques prove unsatisfactory or insufficient, since they are costly, can be biased and error-prone, and involve large amounts of time, human and material resources. In this context, the present study demonstrates how the process mining technique can be applied automatically to the auditing of governmental processes, based on an information system and a mining tool called ProM. Using conformance checking techniques, the actual processes of a governmental organization were compared against their respective official models. The results reveal some divergences between them and indicate that the technique can serve as an auxiliary means for carrying out business process audits.
APA, Harvard, Vancouver, ISO, and other styles
26

Papangelakis, Vladimiros George. "Mathematical modelling of an exothermic pressure leaching process." Thesis, McGill University, 1990. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121089.

Full text
Abstract:
The object of the present thesis was the development of a mathematical model suitable for computer simulation of hydrometallurgical processes. The model formulation was made for a strongly exothermic three-phase reaction system, namely the pressure oxidation process as applied to the treatment of refractory gold ores and concentrates. The steps followed during the course of this work involved first, the experimental identification of the intrinsic kinetics of the two principal refractory gold minerals, arsenopyrite and pyrite, and second, the development of reactor models describing the isothermal and non-isothermal behaviour of batch and multi-stage continuous reactors at steady state. Emphasis was given to the identification of feed conditions for autothermal operation. The key features of the developed model are the coupling of both mass and heat balance equations, the description of the non-isothermal performance of a multi-stage continuous reactor, and the treatment of a two-mineral mixture concentrate. In addition, continuous functions are used to describe the size distribution of reacting particles and gas-liquid mass transfer rate limitations are assessed. The model predictions were in good agreement with pilot-plant scale industrial data. Simulation runs of alternative reactor configurations and feed compositions elucidated the impact of the size of the first reactor stage, the rate limiting regime, and the sulphur content of the feed on the attainment of autogenous performance.
APA, Harvard, Vancouver, ISO, and other styles
27

Bala, Saimir, Cristina Cabanillas Macias, Andreas Solti, Jan Mendling, and Axel Polleres. "Mining Project-Oriented Business Processes." Springer, Cham, 2015. http://dx.doi.org/10.1007/978-3-319-23063-4_28.

Full text
Abstract:
Large engineering processes need to be monitored in detail regarding when what was done in order to prove compliance with rules and regulations. A typical problem of these processes is the lack of control that a central process engine provides, such that it is difficult to track the actual course of work even if data is stored in version control systems (VCS). In this paper, we address this problem by defining a mining technique that helps to generate models that visualize the work history as GANTT charts. To this end, we formally define the notion of a project-oriented business process and a corresponding mining algorithm. Our evaluation based on a prototypical implementation demonstrates the benefits in comparison to existing process mining approaches for this specific class of processes.
APA, Harvard, Vancouver, ISO, and other styles
28

García, Oliva Rodrigo Alfonso, and Barrenechea Jesús Javier Santos. "Modelo de evaluación de métricas de control para procesos de negocio utilizando Process Mining." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2020. http://hdl.handle.net/10757/653470.

Full text
Abstract:
This project aims to analyze the complexity of business processes in retail companies at a depth that other techniques make very difficult or even impossible to reach. With Process Mining it is possible to overcome this gap, and that is what we want to demonstrate through the implementation of a Process Mining model. The project proposes a Process Mining model that contemplates the presence of various sources of information of a logistics process in a retail company, as well as the application of the three phases of Process Mining (Discovery, Conformance and Enhancement). Additionally, a diagnostic phase is proposed, which details a set of control metrics to evaluate the logistics process and thus generate an improvement plan that gives the guidelines to optimize the process based on what has been analyzed through this technique. The model developed was implemented in a Peruvian company in the retail sector (TopiTop S.A.) for the analysis of the logistics process, specifically the management of purchase orders. As a result of applying the model and evaluating the proposed metrics, anomalies in the process were identified through each of the phases of the proposed model, ensuring the quality of the analysis in the pre-processing phase, generating the process model, and extracting information that was derived into control metrics through the open-source tool ProM Tools.
APA, Harvard, Vancouver, ISO, and other styles
29

Sharma, Sumana. "An Integrated Knowledge Discovery and Data Mining Process Model." VCU Scholars Compass, 2008. http://scholarscompass.vcu.edu/etd/1615.

Full text
Abstract:
Enterprise decision making is continuously transforming in the wake of ever-increasing amounts of data. Organizations are collecting massive amounts of data in their quest for knowledge nuggets in the form of novel, interesting, understandable patterns that underlie these data. The search for knowledge is a multi-step process comprising various phases including development of domain (business) understanding, data understanding, data preparation, modeling, evaluation and, ultimately, the deployment of the discovered knowledge. These phases are represented in the form of Knowledge Discovery and Data Mining (KDDM) Process Models that are meant to provide explicit support towards execution of the complex and iterative knowledge discovery process. Review of existing KDDM process models reveals that they have certain limitations (fragmented design, only a checklist-type description of tasks, lack of support towards execution of tasks, especially those of the business understanding phase, etc.) which are likely to affect the efficiency and effectiveness with which KDDM projects are currently carried out. This dissertation addresses the various identified limitations of existing KDDM process models through an improved model (named the Integrated Knowledge Discovery and Data Mining Process Model) which presents an integrated view of the KDDM process and provides explicit support towards execution of each one of the tasks outlined in the model. We also evaluate the effectiveness and efficiency offered by the IKDDM model against CRISP-DM, a leading KDDM process model, in aiding data mining users to execute various tasks of the KDDM process. Results of statistical tests indicate that the IKDDM model outperforms CRISP-DM in terms of efficiency and effectiveness; the IKDDM model also outperforms CRISP-DM in terms of the quality of the process model itself.
APA, Harvard, Vancouver, ISO, and other styles
30

Schönig, Stefan, Cristina Cabanillas Macias, Claudio Di Ciccio, Stefan Jablonski, and Jan Mendling. "Mining Resource Assignments and Teamwork Compositions from Process Logs." Gesellschaft für Informatik e.V, 2016. http://epub.wu.ac.at/5688/1/Schoenig_et_al_2016_Softwaretechnik%2DTrends.pdf.

Full text
Abstract:
Process mining aims at discovering processes by extracting knowledge from event logs. Such knowledge may refer to different business process perspectives. The organisational perspective deals, among other things, with the assignment of human resources to process activities. Information about the resources that are involved in process activities can be mined from event logs in order to discover resource assignment conditions. This is valuable for process analysis and redesign. Prior process mining approaches in this context present one of the following issues: (i) they are limited to discovering a restricted set of resource assignment conditions; (ii) they are not fully efficient; (iii) the discovered process models are difficult to read due to the high number of assignment conditions included; or (iv) they are limited by the assumption that only one resource is responsible for each process activity and hence, collaborative activities are disregarded. To overcome these issues, we present an integrated process mining framework that provides extensive support for the discovery of resource assignment and teamwork patterns.
APA, Harvard, Vancouver, ISO, and other styles
31

Yongsiriwit, Karn. "Modeling and mining business process variants in cloud environments." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLL002/document.

Full text
Abstract:
More and more organizations are adopting cloud-based Process-Aware Information Systems (PAIS) to manage and execute processes in the cloud as an environment to optimally share and deploy their applications. This is especially true for large organizations having branches operating in different regions with a considerable number of similar processes. Such organizations need to support many variants of the same process due to their branches' local culture, regulations, etc. However, developing a new process variant from scratch is error-prone and time-consuming. Motivated by the "Design by Reuse" paradigm, branches may collaborate to develop new process variants by learning from their similar processes. These processes are often heterogeneous, which prevents easy and dynamic interoperability between different branches. A process variant is an adjustment of a process model in order to flexibly adapt to specific needs. Much research in both academia and industry aims to facilitate the design of process variants. Several approaches have been developed to assist process designers by searching for similar business process models or using reference models. However, these approaches are cumbersome, time-consuming and error-prone. Likewise, such approaches recommend entire process models, which are not handy for process designers who need to adjust a specific part of a process model. In fact, process designers can better develop process variants with an approach that recommends a well-selected set of activities from a process model, referred to as a process fragment. Large organizations with multiple branches execute BP variants in the cloud as an environment to optimally deploy and share common resources. However, these cloud resources may be described using different cloud resource description standards, which prevents interoperability between different branches.
In this thesis, we address the above shortcomings by proposing an ontology-based approach to semantically populate a common knowledge base of processes and cloud resources and thus enable interoperability between an organization's branches. We construct our knowledge base by extending existing ontologies. We thereafter propose an approach to mine such a knowledge base to assist the development of BP variants. Furthermore, we adopt a genetic algorithm to optimally allocate cloud resources to BPs. To validate our approach, we develop two proofs of concept and perform experiments on real datasets. Experimental results show that our approach is feasible and accurate in real use cases.
APA, Harvard, Vancouver, ISO, and other styles
32

Yongsiriwit, Karn. "Modeling and mining business process variants in cloud environments." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLL002.

Full text
Abstract:
More and more organizations are adopting cloud-based Process-Aware Information Systems (PAIS) to manage and execute processes in the cloud as an environment to optimally share and deploy their applications. This is especially true for large organizations having branches operating in different regions with a considerable number of similar processes. Such organizations need to support many variants of the same process due to their branches' local culture, regulations, etc. However, developing a new process variant from scratch is error-prone and time-consuming. Motivated by the "Design by Reuse" paradigm, branches may collaborate to develop new process variants by learning from their similar processes. These processes are often heterogeneous, which prevents easy and dynamic interoperability between different branches. A process variant is an adjustment of a process model in order to flexibly adapt to specific needs. Much research in both academia and industry aims to facilitate the design of process variants. Several approaches have been developed to assist process designers by searching for similar business process models or using reference models. However, these approaches are cumbersome, time-consuming and error-prone. Likewise, such approaches recommend entire process models, which are not handy for process designers who need to adjust a specific part of a process model. In fact, process designers can better develop process variants with an approach that recommends a well-selected set of activities from a process model, referred to as a process fragment. Large organizations with multiple branches execute BP variants in the cloud as an environment to optimally deploy and share common resources. However, these cloud resources may be described using different cloud resource description standards, which prevents interoperability between different branches.
In this thesis, we address the above shortcomings by proposing an ontology-based approach to semantically populate a common knowledge base of processes and cloud resources and thus enable interoperability between an organization's branches. We construct our knowledge base by extending existing ontologies. We thereafter propose an approach to mine such a knowledge base to assist the development of BP variants. Furthermore, we adopt a genetic algorithm to optimally allocate cloud resources to BPs. To validate our approach, we develop two proofs of concept and perform experiments on real datasets. Experimental results show that our approach is feasible and accurate in real use cases.
APA, Harvard, Vancouver, ISO, and other styles
33

Castellano, Mattia. "Business Process Management e tecniche per l'applicazione del Process Mining. Il caso Università degli Studi di Parma." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
In a world where the opening of new markets and the introduction of new technologies continuously generate new opportunities, companies must increasingly learn to adapt and to manage change. With the spread of Process Thinking, organizations begin to internalize concepts such as processes, activities, events, flows, change and development. It is in this scenario that disciplines oriented towards organizational change and business process improvement arise, such as Business Process Re-engineering (BPR) and Business Process Management (BPM). With the growth of technology and the era of Information Technology (IT), information systems take on an increasingly important role in the life of the organization, supporting the execution of processes and beginning to produce large quantities of traces relating to the execution of tasks. The era of Big Data and Data Mining begins. Research comes to satisfy the business need of extracting tangible value from these data, or logs, with Process Mining techniques. Process Mining techniques, considered a form of Business Intelligence (BI), are applied today in various industrial sectors, first among them the Services sector. An application of Process Mining techniques is analyzed in detail in a project commissioned by the Università degli Studi di Parma to HSPI S.p.A., a management consulting firm where I carried out my thesis internship and actively participated in the analysis. Process Mining proves to be a valid technique for offline data analysis, and research is currently focused on implementing Process Mining for real-time data analysis, in order to address the need for change in a timely manner and derive a competitive advantage from it.
APA, Harvard, Vancouver, ISO, and other styles
34

Kluska, Martin. "Získávání znalostí z procesních logů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-399172.

Full text
Abstract:
This Master's thesis describes knowledge discovery from process logs using process mining algorithms. The chosen algorithms, which aim to create a process model based on event log analysis, are described in detail. The goal is to design components that can import the process and run simulations. Results from the components can be used for short-term planning.
APA, Harvard, Vancouver, ISO, and other styles
35

Schönig, Stefan, Cristina Cabanillas Macias, Claudio Di Ciccio, Stefan Jablonski, and Jan Mendling. "Mining team compositions for collaborative work in business processes." Springer Berlin Heidelberg, 2016. http://dx.doi.org/10.1007/s10270-016-0567-4.

Full text
Abstract:
Process mining aims at discovering processes by extracting knowledge about their different perspectives from event logs. The resource perspective (or organisational perspective) deals, among other things, with the assignment of resources to process activities. Mining in relation to this perspective aims to extract rules on resource assignments for the process activities. Prior research in this area is limited by the assumption that only one resource is responsible for each process activity, and hence, collaborative activities are disregarded. In this paper, we lift this assumption by developing a process mining approach that is able to discover team compositions for collaborative process activities from event logs. We evaluate our novel mining approach in terms of computational performance and practical applicability.
APA, Harvard, Vancouver, ISO, and other styles
36

Southavilay, Vilaythong. "A Data Mining Toolbox for Collaborative Writing Processes." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9764.

Full text
Abstract:
Collaborative writing (CW) is an essential skill in academia and industry. Providing support during the process of CW can be useful not only for achieving better quality documents, but also for improving the CW skills of the writers. In order to properly support collaborative writing, it is essential to understand how ideas and concepts are developed during the writing process, which consists of a series of steps of writing activities. These steps can be considered as sequence patterns comprising both time events and the semantics of the changes made during those steps. Two techniques can be combined to examine those patterns: process mining, which focuses on extracting process-related knowledge from event logs recorded by an information system; and semantic analysis, which focuses on extracting knowledge about what the student wrote or edited. This thesis contributes (i) techniques to automatically extract process models of collaborative writing processes and (ii) visualisations to describe aspects of collaborative writing. These two techniques form a data mining toolbox for collaborative writing by using process mining, probabilistic graphical models, and text mining. First, I created a framework, WriteProc, for investigating collaborative writing processes, integrated with the existing cloud computing writing tools in Google Docs. Secondly, I created a new heuristic to extract the semantic nature of text edits that occur in the document revisions and automatically identify the corresponding writing activities. Thirdly, based on sequences of writing activities, I propose methods to discover the writing process models and transitional state diagrams using a process mining algorithm, Heuristics Miner, and Hidden Markov Models, respectively. Finally, I designed three types of visualisations and made contributions to their underlying techniques for analysing writing processes.
All components of the toolbox are validated against annotated writing activities of real documents and a synthetic dataset. I also illustrate how the automatically discovered process models and visualisations are used in the process analysis with real documents written by groups of graduate students. I discuss how the analyses can be used to gain further insight into how students work and create their collaborative documents.
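The dependency metric at the heart of Heuristics Miner, the discovery algorithm the thesis applies to sequences of writing activities, can be illustrated in a few lines. This is a minimal sketch: the writing-activity labels and traces below are invented for illustration, not taken from the thesis data.

```python
from collections import Counter

def dependency_measures(traces):
    # Count directly-follows pairs a > b across all traces, then compute
    # the Heuristics Miner dependency measure:
    #   dep(a, b) = (|a>b| - |b>a|) / (|a>b| + |b>a| + 1)
    df = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    dep = {}
    for (a, b), n_ab in df.items():
        n_ba = df.get((b, a), 0)
        dep[(a, b)] = (n_ab - n_ba) / (n_ab + n_ba + 1)
    return df, dep

# Hypothetical writing-activity traces (one list per document's revision history)
traces = [
    ["outline", "draft", "revise", "revise", "finalise"],
    ["outline", "draft", "revise", "finalise"],
    ["outline", "revise", "draft", "finalise"],
]
df, dep = dependency_measures(traces)
```

Values near 1 indicate a strong one-directional (causal) dependency between two activities; values near 0 suggest the activities interleave in both orders.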
APA, Harvard, Vancouver, ISO, and other styles
37

Moses, Lucian Benedict. "Flotation as a separation technique in the coal gold agglomeration process." Thesis, Cape Technikon, 2000. http://hdl.handle.net/20.500.11838/2155.

Full text
Abstract:
Thesis (MTech (Chemical Engineering))--Cape Technikon, 2000.
Internationally, there is an increase in the need for safer environmental processes that can be applied to mining operations, especially on a small scale, where mercury amalgamation is the main process used for the recovery of free gold. An alternative, more environmentally acceptable, process called the Coal Gold Agglomeration (CGA) process has been investigated at the Cape Technikon. This paper explains the application of flotation as a means of separation for the CGA process. The CGA process is based on the recovery of hydrophobic gold particles from ore slurries into agglomerates formed from coal and oil. The agglomerates are separated from the slurry through scraping, screening, flotation or a combination of the aforementioned. They are then ashed to release the gold particles, after which the gold is smelted to form bullion. All components were contacted for fifty minutes, after which a frother was added; after three minutes of conditioning, air, at a rate of one litre per minute per cell volume, was introduced into the system. The addition of a collector (Potassium Amyl Xanthate) at the start of each run significantly improved gold recoveries. Preliminary experiments indicated that the use of baffles decreased the gold recoveries, which was concluded to be due to agglomerate breakage. The system was also found to be frother-selective and hence only DOW-200 was used in subsequent experiments. A significant increase or decrease in the air addition rate both had a negative effect on the recoveries; therefore, the air addition rate was not altered during further tests. The use of tap water as opposed to distilled water decreased the attainable recoveries by less than five per cent. This was a very encouraging result in terms of the practical implementation of the CGA process.
APA, Harvard, Vancouver, ISO, and other styles
38

Reguieg, Hicham. "Using MapReduce to scale event correlation discovery for process mining." PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2014. http://tel.archives-ouvertes.fr/tel-01002623.

Full text
Abstract:
The volume of data related to business process execution is increasing significantly in the enterprise. Many data sources include events related to the execution of the same processes in various systems or applications. Event correlation is the task of analyzing a repository of event logs in order to find out the set of events that belong to the same business process execution instance. This is a key step in the discovery of business processes from event execution logs. Event correlation is a computationally intensive task in the sense that it requires a deep analysis of very large and growing repositories of event logs, and exploration of various possible relationships among the events. In this dissertation, we present a scalable data analysis technique to support efficient event correlation for mining business processes. We propose a two-stage approach to compute correlation conditions and their entailed process instances from event logs using the MapReduce framework. The experimental results show that the algorithm scales well to large datasets.
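The two-stage idea (a map phase emitting candidate correlation keys, a reduce phase grouping events that share a key into candidate process instances) can be sketched in plain Python. The event attributes and ids below are hypothetical, and a real deployment would run these phases on a MapReduce framework such as Hadoop rather than in-process.

```python
from collections import defaultdict

def map_phase(events, attrs):
    # Map: for each event, emit ((attribute, value), event_id) for every
    # candidate correlation attribute the event carries.
    for ev in events:
        for attr in attrs:
            if attr in ev:
                yield (attr, ev[attr]), ev["id"]

def reduce_phase(pairs):
    # Reduce: group event ids that share an (attribute, value) key into
    # candidate process-instance sets; singleton groups correlate nothing.
    groups = defaultdict(list)
    for key, ev_id in pairs:
        groups[key].append(ev_id)
    return {k: v for k, v in groups.items() if len(v) > 1}

# Hypothetical event log: ids and attribute names are invented
events = [
    {"id": 1, "order": "A1", "invoice": "X"},
    {"id": 2, "order": "A1"},
    {"id": 3, "order": "B7", "invoice": "X"},
]
instances = reduce_phase(map_phase(events, ["order", "invoice"]))
```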
APA, Harvard, Vancouver, ISO, and other styles
39

Ostovar, Alireza. "Business process drift: Detection and characterization." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/127157/1/Alireza_Ostovar_Thesis.pdf.

Full text
Abstract:
This research contributes a set of techniques for the early detection and characterization of process drifts, i.e. statistically significant changes in the behavior of business operations, as recorded in transactional data. Early detection and subsequent characterization of process drifts allows organizations to take prompt remedial actions and avoid potential repercussions resulting from unplanned changes in the behavior of their operations.
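One common way to make "statistically significant change" concrete is to compare the activity-frequency distributions of two adjacent windows of the log with a chi-square statistic. The sketch below is a simplified, stdlib-only illustration of that idea, not the thesis's actual detection method, and the activity names are invented.

```python
from collections import Counter

def chi_square_stat(window_a, window_b):
    # Chi-square statistic over the pooled activity alphabet; a large
    # value means the two windows draw from different activity
    # distributions, i.e. a candidate drift point.
    ca, cb = Counter(window_a), Counter(window_b)
    na, nb = len(window_a), len(window_b)
    stat = 0.0
    for act in set(ca) | set(cb):
        oa, ob = ca[act], cb[act]
        ea = (oa + ob) * na / (na + nb)  # expected counts under "no drift"
        eb = (oa + ob) * nb / (na + nb)
        stat += (oa - ea) ** 2 / ea + (ob - eb) ** 2 / eb
    return stat

before = ["check", "approve"] * 20   # behaviour before the change
after_ = ["check", "reject"] * 20    # "approve" replaced by "reject"
same = chi_square_stat(before, list(before))
drifted = chi_square_stat(before, after_)
```

Comparing the statistic against a chi-square critical value for the appropriate degrees of freedom turns this into a yes/no drift test over a sliding window.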
APA, Harvard, Vancouver, ISO, and other styles
40

Canturk, Deniz. "Time-based Workflow Mining." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606149/index.pdf.

Full text
Abstract:
Contemporary workflow management systems are driven by explicit process models, i.e., a completely specified workflow design is required in order to enact a given workflow process. Creating a workflow design is a complicated, time-consuming process, and typically there are discrepancies between the actual workflow processes and the processes as perceived by the management. Therefore, new techniques for discovering workflow models are required. The starting point for such techniques are so-called "workflow logs" containing information about the workflow process as it is actually being executed. In this thesis, a new mining technique based on time information is proposed. It is assumed that events in workflow logs bear timestamps. This information is used to determine task orders and control flows between tasks. With this new algorithm, basic workflow structures (sequential, parallel, alternative and iterative, i.e., loop, routing) and the advanced workflow structure or-join can be mined. While mining the workflow structures, this algorithm also handles the noise problem.
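The core step of such time-based mining (sorting each case's events by timestamp, then classifying task pairs as causal or parallel depending on whether the directly-follows relation holds in one or both directions) can be sketched as follows; the log is a made-up example, not data from the thesis.

```python
from collections import defaultdict

def order_relations(event_log):
    # event_log rows: (case_id, task, timestamp). Sort each case by
    # timestamp, collect directly-follows pairs, then split them into
    # causal (one direction only) and parallel (both directions observed).
    cases = defaultdict(list)
    for case, task, ts in event_log:
        cases[case].append((ts, task))
    follows = set()
    for evs in cases.values():
        evs.sort()  # task order is recovered from the timestamps
        for (_, a), (_, b) in zip(evs, evs[1:]):
            follows.add((a, b))
    causal = {p for p in follows if (p[1], p[0]) not in follows}
    parallel = {p for p in follows if (p[1], p[0]) in follows}
    return causal, parallel

# Made-up log: B and C swap order across cases, so they run in parallel
log = [
    ("c1", "A", 1), ("c1", "B", 2), ("c1", "C", 3),
    ("c2", "A", 1), ("c2", "C", 2), ("c2", "B", 3),
]
causal, parallel = order_relations(log)
```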
APA, Harvard, Vancouver, ISO, and other styles
41

Fordal, Arnt Ove. "Process Data Mining for Parameter Estimation : With the DYNIA Method." Thesis, Norwegian University of Science and Technology, Department of Engineering Cybernetics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10005.

Full text
Abstract:

Updating the model parameters of the control system of an oil and gas production system, for reasons of cost-effectiveness and production optimization, requires a data set of input and output values for the system identification procedure. For the system identification to provide a well-performing model, this data set must be informative. Traditionally, an informative data set has been obtained by taking the production system out of normal operation in order to perform experiments specifically designed to produce informative data. It is, however, desirable to use segments of process data from normal operation in the system identification procedure, as this eliminates the costs connected with a halt of operation. The challenge is to identify segments of the process data that give an informative data set. Dynamic Identifiability Analysis (DYNIA) is an approach to locating periods of high information content and parameter identifiability in a data set. An introduction to the concepts of data mining, system identification and parameter identifiability lays the foundation for an extensive review of the DYNIA method in this context. An implementation of the DYNIA method is presented. Examples and a case study show promising results for the practical functionality of the method, but also raise awareness of elements that should be improved. A discussion of the industrial applicability of DYNIA is presented, as well as suggestions towards modifications that may improve the method.
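The windowed identifiability idea behind DYNIA can be illustrated with a toy one-parameter model y = k*u: the parameter is re-estimated in each sliding window, and windows where the fit is tight (low residual error) are the informative segments. This is a sketch of the concept only, not the DYNIA algorithm itself, and the data series are invented.

```python
def sliding_estimates(us, ys, width):
    # Re-estimate the gain k of the toy model y = k*u in each sliding
    # window by least squares, and record the residual sum of squares;
    # low-RSS windows are the informative, identifiable segments.
    out = []
    for i in range(len(us) - width + 1):
        u, y = us[i:i + width], ys[i:i + width]
        k = sum(a * b for a, b in zip(u, y)) / sum(a * a for a in u)
        rss = sum((b - k * a) ** 2 for a, b in zip(u, y))
        out.append((k, rss))
    return out

us = [1, 2, 3, 4, 5, 6]
ys = [2, 4, 6, 8, 13, 9]   # y = 2*u at first, then hit by a disturbance
est = sliding_estimates(us, ys, 3)
```

The early windows recover k = 2 with near-zero residual (high identifiability), while windows covering the disturbed samples show a large residual, marking them as uninformative for estimating k.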

APA, Harvard, Vancouver, ISO, and other styles
42

Bani, Mustafa Ahmed Mahmood. "A knowledge discovery and data mining process model for metabolomics." Thesis, Aberystwyth University, 2012. http://hdl.handle.net/2160/6889468e-851f-47fd-bd44-fe65fe516c7a.

Full text
Abstract:
This thesis presents a novel knowledge discovery and data mining process model for metabolomics, which was successfully developed, implemented and applied to a number of metabolomics applications. The process model provides a formalised framework and a methodology for conducting justifiable and traceable data mining in metabolomics. It promotes the achievement of metabolomics analytical objectives and contributes towards the reproducibility of its results. The process model was designed to satisfy the requirements of data mining in metabolomics and to be consistent with the scientific nature of metabolomics investigations. It considers the practical aspects of the data mining process, covering management, human interaction, quality assurance and standards, in addition to other desired features such as visualisation, data exploration, knowledge presentation and automation. The development of the process model involved investigating data mining concepts, approaches and techniques; in addition to the popular data mining process models, which were critically analysed in order to utilise their better features and to overcome their shortcomings. Inspiration from process engineering, software engineering, machine learning and scientific methodology was also used in developing the process model along with the existing ontologies of scientific experiments and data mining. The process model was designed to support both data-driven and hypothesis-driven data mining. It provides a mechanism for defining the analytical objectives of metabolomics data mining, considering their achievability, feasibility, measurability and success criteria. 
The process model also provides a novel strategy for performing justifiable selection of data mining techniques, taking into consideration the achievement of the process's analytical objectives and taking into account the nature and quality of the metabolomics data, in addition to the requirements and feasibility of the selected data mining techniques. The model ensures validity and reproducibility of the outcomes by defining traceability and assessment mechanisms, which cover all the procedures applied and the deliveries generated throughout the process. The process also defines evaluation mechanisms, which cover not only the technical aspects of the data mining model, but also the contextual aspects of the acquired knowledge. The process model was implemented using a software environment, and was applied to four real-world metabolomics applications. The applications demonstrated the proposed process model's applicability to various data mining approaches, goals, tasks, and techniques. They also confirmed the process's applicability to various metabolomics investigations and approaches using data generated by a variety of data acquisition instruments. The outcomes of the process execution in these applications were used in evaluating the process model's design and its satisfaction of the requirements of metabolomics data mining.
APA, Harvard, Vancouver, ISO, and other styles
43

Nyman, Tobias. "Kan Process Mining informera RPA för att automatisera komplexa affärsprocesser?" Thesis, Högskolan i Karlstad, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-84879.

Full text
Abstract:
Companies automate their business processes to make their operations more efficient. Robotic process automation (RPA) is a software system that works by imitating the human user and can thereby automate business processes that are standardised, repetitive and of low complexity. A complex business process has several variations, and the logic between the activities in the process is not always clear. As a result, RPA cannot automate business processes of too high complexity, since the cost of the RPA project is not considered profitable. Process mining is a tool that is implemented on top of an organisation's databases and the event logs of its information systems, and uses the data to model all of the organisation's business processes. The purpose of this qualitative study is to investigate whether RPA and process mining can automate complex business processes. The research question of this report is: To what degree can process mining inform RPA in order to automate more complex business processes? The literature review covers earlier research in three main areas: complex business processes, RPA and process mining. The empirical study was carried out through four interviews at four different companies: the first with an expert in business processes, the second and third with RPA consultants, and the fourth with a consultant in RPA and process mining. The thesis concludes that RPA and process mining can automate more complex business processes where the data is structured and there are clear rules. RPA is best suited to less complex processes because they deliver a better return on investment (ROI). RPA and process mining cannot automate business processes of a complexity where the starting point differs every time and the logic of execution is random.
APA, Harvard, Vancouver, ISO, and other styles
44

Myers, David. "Detecting cyber attacks on industrial control systems using process mining." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/130799/1/David_Myers_Thesis.pdf.

Full text
Abstract:
Industrial control systems conduct processes which are core to our lives, from the generation, transmission and distribution of power, to the treatment and supply of water. These industrial control systems are moving from dedicated, serial-based communications to switched and routed corporate networks to facilitate the monitoring and management of industrial processes. However, this connection to corporate networks can expose industrial control systems to the Internet, placing them at risk of cyber-attack. In this study, we develop and evaluate a process-mining based anomaly detection system to generate process models of, and detect cyber-attacks on, industrial control system processes and devices.
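A minimal version of process-mining-based anomaly detection is to mine the set of directly-follows transitions from attack-free traces and flag any transition outside that set. The ICS activity labels below are illustrative, and the thesis's actual system is more elaborate.

```python
def learn_model(normal_traces):
    # Mine the set of directly-follows transitions seen in attack-free
    # operation; this set serves as a very simple process model.
    allowed = set()
    for trace in normal_traces:
        allowed.update(zip(trace, trace[1:]))
    return allowed

def nonconforming(trace, allowed):
    # Flag every transition in a new trace that the mined model forbids.
    return [(a, b) for a, b in zip(trace, trace[1:]) if (a, b) not in allowed]

# Illustrative water-treatment cycle; labels are invented
normal = [["idle", "fill", "heat", "drain", "idle"]]
model = learn_model(normal)
alerts = nonconforming(["idle", "fill", "drain", "idle"], model)  # skips "heat"
```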
APA, Harvard, Vancouver, ISO, and other styles
45

Öberg, Johanna. "Time prediction and process discovery of administration process." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-432893.

Full text
Abstract:
Machine learning and process mining are two techniques that are becoming more and more popular among organisations for business intelligence purposes. Results from these techniques can be very useful for organisations' decision-making. The Swedish National Forensic Centre (NFC), an organisation that performs forensic analyses, is in need of a way to visualise and understand its administration process. In addition, the organisation would like to be able to predict the time analyses will take to perform. In this project, it was evaluated whether machine learning and process mining could be used on NFC's administration process-related data to satisfy the organisation's needs. Using the process mining tool Mehrwerk Process Mining, implemented in the software Qlik Sense, different process variants were discovered from the data and visualised in a comprehensible way. The process variants were easy to interpret and useful for NFC. Machine learning regression models were trained on the data to predict analysis length. Two different datasets were tried: a large dataset with few features and a smaller dataset with more features. The models were then evaluated on test datasets. The models did not predict the length of analyses in an acceptable way. A reason for this could be that the information in the data was not sufficient for this prediction.
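The regression task described (predicting analysis duration from case attributes) reduces, in its simplest form, to ordinary least squares. The feature below (number of exhibits per case) is a hypothetical stand-in for NFC's real features, and the thesis used richer models than a single-feature fit.

```python
def fit_linear(xs, ys):
    # Ordinary least squares for a single feature: y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical training data: exhibits per case vs. analysis days
exhibits = [1, 2, 3, 4]
days = [3, 5, 7, 9]
a, b = fit_linear(exhibits, days)
predicted = a + b * 5  # predicted duration for a five-exhibit case
```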
APA, Harvard, Vancouver, ISO, and other styles
46

Al, Dahami Abdulelah. "A stage-based model for enabling decision support in process mining." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/103533/1/Abdulelah%20Saleh%20A_Al%20Dahami_Thesis.pdf.

Full text
Abstract:
This thesis introduces a decision support tool that represents the process tasks of a spaghetti-like model in stages, making business process mining results easier to understand. In addition, this representation helps to evaluate the proposed solution and compare it with others. The tool addresses the difficulty of visualising and aligning tasks in a process model, and clearly shows the comprehensive flow relations, with more accurate dependencies, for decision-makers from a business perspective.
APA, Harvard, Vancouver, ISO, and other styles
47

Bala, Saimir. "Mining Projects from Structured and Unstructured Data." Jens Gulden, Selmin Nurcan, Iris Reinhartz-Berger, Widet Guédria, Palash Bera, Sérgio Guerreiro, Michael Fellman, Matthias Weidlich, 2017. http://epub.wu.ac.at/7205/1/ProjecMining%2DCamera%2DReady.pdf.

Full text
Abstract:
Companies working on safety-critical projects must adhere to strict rules imposed by the domain, especially when human safety is involved. These projects need to be compliant to standard norms and regulations. Thus, all the process steps must be clearly documented in order to be verifiable for compliance in a later stage by an auditor. Nevertheless, documentation often comes in the form of manually written textual documents in different formats. Moreover, the project members use diverse proprietary tools. This makes it difficult for auditors to understand how the actual project was conducted. My research addresses the project mining problem by exploiting logs from project-generated artifacts, which come from software repositories used by the project team.
APA, Harvard, Vancouver, ISO, and other styles
48

Hasan, Muayad Mohammed. "Enhanced recovery of heavy oil using a catalytic process." Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/53253/.

Full text
Abstract:
Oil is a major source of energy around the world. With the decline of light conventional oil, more attention is being paid to heavy oil and bitumen as a good alternative to light oil for energy supplies. Heavy crude oils tend to have a higher concentration of metals and several other elements, such as sulfur and nitrogen, and extraction of these heavy oils requires more effort and cost. The Toe-to-Heel Air Injection with in-situ catalytic upgrading (THAI-CAPRI) process is an integrated process that includes recovery and upgrading of heavy oil and bitumen using air injection and horizontal injector and producer wells. Since the process works through a short-distance displacement technique, the produced oil flows easily toward the horizontal producer well. This direct mobilized-oil production and the short distance are the major properties of this method, leading to robust operational stability and high oil recovery. This technique offers the possibility of a higher recovery percentage and lower environmental impact compared with other technologies such as steam-based techniques. A catalyst plays a crucial role in conducting the THAI-CAPRI technique successfully. However, heavy coke can be formed as a result of the thermal cracking of heavy oil occurring in the THAI-CAPRI process, and a catalyst resistant enough for use in CAPRI needs to be developed. Therefore, there is a need to understand the pore structure in order to achieve high catalyst quality, since the structure directly affects the fluid behaviour within a disordered porous material. In this study, novel experimental techniques were used to obtain more accurate results from gas adsorption curves, by combining data obtained for two adsorptives, namely nitrogen and argon, both before and after mercury porosimetry.
This new method allows the effect of pore-pore co-operation during adsorption to be studied, which significantly affects the accuracy of the pore size distributions obtained for porous solids. A comparison between the results obtained from the characterisation of a mixed silica-alumina pellet and those obtained from pure silica and alumina catalysts was presented, to study the effects of surface chemistry on the different wetting properties of adsorbates. The pore networks within pellets invaded by mercury following mercury porosimetry have been imaged by computerized X-ray tomography (CXT). It was noticed that the silica-alumina catalyst had a hierarchical internal structure, similar to that of blood vessels in the body. To validate the findings of the pore geometry characterisation obtained from the new method, several techniques, such as cryoporometry, gas sorption isotherms and mercury intrusion experiments, were considered. Further, a novel well design consisting of two horizontal injectors and two horizontal producers was used in different well configurations to investigate the potential for improved efficiency of the THAI process in heavy oil recovery. A 3D simulation model, employing the CMG-STARS simulator, was applied in this simulation. Two horizontal injectors and producers were designed in this project, instead of the single horizontal injector and producer used in the Greaves model (the base-case model), to investigate the effect of the extra injector and producer on the performance of the THAI process. It was found that the locations of the injection and production wells significantly affected the oil production. To study the effectiveness of the catalysts in the oil upgrading process, the CAPRI technique was simulated to investigate the effect of several parameters, such as catalyst packing porosity, the thickness of the catalyst layer and the hydrogen-to-air ratio, on the performance of the CAPRI process.
The TC3 model used by Rabiu Ado (2017), which was the same model used in the experimental study of Greaves et al. (2012), was also used in this study. The Houdry catalyst characterised by the experimental work was placed around the horizontal producer in this simulation.
APA, Harvard, Vancouver, ISO, and other styles
49

Rojas-Candio, Piero, Arturo Villantoy-Pasapera, Jimmy Armas-Aguirre, and Santiago Aguirre-Mayorga. "Evaluation Method of Variables and Indicators for Surgery Block Process Using Process Mining and Data Visualization." Repositorio Academico - UPC, 2021. http://hdl.handle.net/10757/653799.

Full text
Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher in which it was published.
In this paper, we proposed a method that allows us to formulate and evaluate process mining indicators through questions related to the process traceability, and to bring about a clear understanding of the process variables through data visualization techniques. This proposal identifies bottlenecks and violations of policies that arise due to the difficulty of carrying out measurements and analysis for the improvement of process quality assurance and process transformation. The proposal validation was carried out in a health clinic in Lima (Peru) with data obtained from an information system that supports the surgery block process. Finally, the results contribute to the optimization of decision-making by the medical staff involved in the surgery block process.
Peer reviewed
APA, Harvard, Vancouver, ISO, and other styles
50

Evangelista, Pescorán Misael Elias, and Torres Andre Junior Coronado. "Modelo para la evaluación de variables en el Sector Salud utilizando Process Mining y Data Visualization." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2020. http://hdl.handle.net/10757/653132.

Full text
Abstract:
The present work proposes a model for the evaluation of variables in the health sector using Process Mining and Data Visualization supported by the Celonis tool. This arises from the problem oriented to the difficulty in understanding the activities that are involved in business processes and their results. The project focuses on the investigation of two emerging disciplines. One of these disciplines is Process Mining and it focuses mainly on the processes, on the data for each event, this in order to discover a model, see conformity of the processes or improve them (Process Mining: An innovative technique for the improvement of the processes, 2016). The second discipline is Data Visualization, this allows data to be presented in a graphic or pictorial format ("Data Visualization: What it is and why it matters", 2016). This project mainly involves research, first, Process Mining and Data Visualization techniques are analyzed. Second, the characteristics and qualities of the disciplines are separated, and a model is designed for the evaluation of variables in the Health Sector using Process Mining and Data Visualization, generating added value, given that by having a graphic or pictorial format that adequately represents the results of using a process mining technique, understanding and analysis in decision making is more accurate. Third, the model is validated in an institution that provides services in the Health Sector, analyzing one of the core processes. Finally, a continuity plan is drawn up so that the proposed model can be applied to process optimization techniques in organizations.
Thesis
APA, Harvard, Vancouver, ISO, and other styles