To see the other types of publications on this topic, follow the link: Intelligent Workflows.

Dissertations / Theses on the topic 'Intelligent Workflows'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 29 dissertations / theses for your research on the topic 'Intelligent Workflows.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Obeso Duque, Aleksandra. "Performance Prediction for Enabling Intelligent Resource Management on Big Data Processing Workflows." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-372178.

Abstract:
Mobile cloud computing offers an augmented infrastructure that allows resource-constrained devices to use remote computational resources as an enabler for highly intensive computation, thus improving the end-user experience. Being able to efficiently manage cloud elasticity represents a big challenge for dynamic, on-demand resource scaling. In this sense, the development of intelligent tools that ease the understanding of the behavior of a highly dynamic system and detect resource bottlenecks under given service-level constraints represents an interesting case study. In this project, a comparative study has been carried out for different distributed services, taking into account the tools that are available for load generation, benchmarking and sensing of key performance indicators. Based on that, the big data processing framework Hadoop MapReduce has been deployed as a virtualized service on top of a distributed environment. Experiments for different cluster setups using different benchmarks have been conducted on this testbed in order to collect traces of both resource usage statistics at the infrastructure level and performance metrics at the platform level. Different machine learning approaches have been applied to the collected traces, generating prediction and classification models whose performance is then evaluated and compared. The highly accurate results, namely a Normalized Mean Absolute Error below 10.3% for the regressor and an accuracy score above 99.9% for the classifier, show the feasibility of the generated models for service performance prediction and resource bottleneck detection. These models could further be used to trigger auto-scaling processes in cloud environments under dynamic loads in order to fulfill service-level requirements.
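To make the evaluation concrete, here is a minimal sketch of the kind of workflow the abstract describes: fitting a regressor to resource-usage traces and scoring it with a normalized mean absolute error. The feature names, target, model choice and normalization are assumptions for illustration; the thesis does not publish its exact setup here.

```python
# Illustrative sketch: train a performance regressor on resource-usage
# traces and evaluate it with a Normalized Mean Absolute Error (NMAE),
# the metric reported in the abstract. Features and target are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)

# Synthetic stand-in for collected traces: [cpu_util, mem_util, io_wait]
X = rng.uniform(0, 1, size=(500, 3))
# Hypothetical target: MapReduce job completion time (seconds)
y = 120 + 300 * X[:, 0] + 150 * X[:, 2] + rng.normal(0, 10, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
# One common NMAE definition: MAE normalized by the target range.
nmae = mean_absolute_error(y_test, pred) / (y_test.max() - y_test.min())
print(f"NMAE: {nmae:.3%}")  # the thesis reports below 10.3% on real traces
```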
2

Fourli-Kartsouni, Florendia. "Intelligent workflow support for context sensitive business process modelling." Saarbrücken: VDM Verlag Dr. Müller, 2004. http://d-nb.info/99121773X/04.

3

Cheung, Yee Chung. "Compliance flow : an intelligent workflow management system to support engineering processes." Thesis, Loughborough University, 2003. https://dspace.lboro.ac.uk/2134/35616.

Abstract:
This work is about extending the scope of current workflow management systems to support engineering processes. On the one hand, engineering processes are relatively dynamic; on the other, their specification and performance are constrained by industry standards and guidelines for the sake of product acceptability, such as IEC 61508 for safety and ISO 9001 for quality. A number of technologies have been proposed to increase the adaptability of current workflow systems to deal with dynamic situations. A primary concern is how to support open-ended processes that cannot be completely specified in detail prior to their execution. A survey of adaptive workflow systems is given and the enabling technologies are discussed. Engineering processes are studied, and their characteristics are identified and discussed. Current workflow systems have been successfully used in managing "administrative" processes for some time, but they lack the flexibility to support dynamic, unpredictable, collaborative, and highly interdependent engineering processes.
4

Guo, Hanwen. "Workflow resource pattern simulation and visualization." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/65502/1/Hanwen_Guo_Thesis.pdf.

Abstract:
This thesis addresses process simulation and validation in Business Process Management. It proposes that the hybrid Multi Agent System (MAS) / 3D Virtual World approach is a valid method for better simulating the behaviour of human resources in business processes, supporting a wide range of rich visualization applications that can facilitate communication between business analysts and stakeholders. It is expected that the findings of this thesis may be fruitfully extended from BPM to other application domains, such as social simulation in video games and computer-based training animations.
5

Romeo, Marco. "Automated processes and intelligent tools in CG media production." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/373915.

Abstract:
Modern media production relies heavily on computer-generated content, such as 3D animations and digital visual effects. These complex assets populate videogames, films, television, mobile devices and the Internet, but their creation is still a complex task that requires a lot of human intervention and follows non-standard, error-prone workflows demanding considerable production effort. The thesis contributes to two main areas of computer-generated content. The initial research is geared towards the automation of rigging, of emotional facial expression and expressive movement, and towards the semi-automated generation of clips. The most recent research, based on extensive work with the industry, defines a generic computer-generated media production workflow, including production processes and a sample pipeline; starting from this workflow, approaches to automating the most critical steps are presented and discussed from both scientific and industry perspectives. In summary, this thesis contributes a series of algorithms and tools that are "aware" of the pipeline and assist the user in the production process: intelligent tools.
6

Hachicha, Rim. "Modélisation et analyse de la flexibilité dans les systèmes workflow." Paris, CNAM, 2007. http://www.theses.fr/2007CNAM0565.

Abstract:
This thesis is devoted to the formal modeling and management of workflow systems. We address one of the principal problems of workflow systems, namely flexibility: current models as well as current systems are not sufficiently flexible and adaptable. To this end, we propose a task and actor model that specifies the formal relations between workflow tasks and actors and allows a flexible assignment of actors to workflow activities. Workflow task allocation is based on the concept of actor/task distance and on an agent coalition formation process. The model allows checking the interchangeability of actors and the coherence of workflow tasks as the environment evolves. We propose a distributed, agent-oriented architecture that integrates the formal model and carries out the functionality required by workflow systems. This architecture is adaptable and reactive, and ensures the reusability of the workflow system. We implemented the model on the JADE agent platform using the JESS expert system and validated it on a real application.
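The actor/task distance concept lends itself to a short illustration. The sketch below assigns a workflow task to the actor whose competences are closest to the task's requirements; the skill encoding and distance function are invented for the example and are not the thesis's formal model.

```python
# Illustrative sketch of distance-based task allocation, loosely following
# the actor/task distance idea. Skills, levels and the metric are assumed.
def distance(actor_skills: dict, task_req: dict) -> float:
    """Sum of shortfalls between required and offered skill levels."""
    return sum(max(level - actor_skills.get(skill, 0.0), 0.0)
               for skill, level in task_req.items())

actors = {
    "alice": {"review": 0.9, "approve": 0.4},
    "bob":   {"review": 0.5, "approve": 0.8},
}
task = {"review": 0.6, "approve": 0.7}  # hypothetical workflow task

# Allocate the task to the closest (best-matching) actor.
best = min(actors, key=lambda a: distance(actors[a], task))
print(best, {a: round(distance(actors[a], task), 2) for a in actors})
# -> 'bob' (alice falls short on the approval competence)
```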
7

Reul, Christian. "An Intelligent Semi-Automatic Workflow for Optical Character Recognition of Historical Printings." Gutachter: Frank Puppe, Marcus Liwicki. Würzburg: Universität Würzburg, 2020. http://d-nb.info/1215500882/34.

8

Lopes, Robson da Silva. "Planejamento instrucional adaptativo usando Workflow e planejamento genético." Universidade Federal de Uberlândia, 2009. https://repositorio.ufu.br/handle/123456789/12483.

Abstract:
Many distance learning systems do not take students' individual characteristics into account, applying the same pedagogical strategies and content sequences to every student. Adaptive distance learning environments must therefore address two problems: instructional planning and the student model. The first allows generating a content sequence specific to each student, and the second provides the information needed to achieve adaptivity. Planning techniques from Artificial Intelligence have been successfully used to determine the sequence of instructional actions, and workflow technology has been used to manage these systems. This work presents an adaptive instructional planner that uses genetic algorithms, together with a student model based on the taxonomy of educational objectives and on learning styles. (Funded by Fundação de Amparo a Pesquisa do Estado de Minas Gerais.)
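As a rough illustration of genetic content sequencing, the sketch below evolves an ordering of learning units that respects prerequisite constraints. The encoding, fitness function and operators are assumptions for demonstration, not the planner described in the thesis.

```python
# Illustrative sketch: a tiny genetic algorithm that searches for a content
# ordering with no prerequisite violations. Units and prerequisites invented.
import random

units = ["intro", "variables", "loops", "functions", "project"]
prereqs = [("intro", "variables"), ("variables", "loops"),
           ("loops", "functions"), ("functions", "project")]

def violations(seq):
    pos = {u: i for i, u in enumerate(seq)}
    return sum(pos[a] > pos[b] for a, b in prereqs)

def mutate(seq):
    i, j = random.sample(range(len(seq)), 2)
    child = list(seq)
    child[i], child[j] = child[j], child[i]  # swap two units
    return child

random.seed(1)
population = [random.sample(units, len(units)) for _ in range(20)]
for _ in range(100):  # generations
    population.sort(key=violations)
    parents = population[:10]                      # truncation selection
    population = parents + [mutate(random.choice(parents))
                            for _ in range(10)]    # offspring by mutation
population.sort(key=violations)
print(population[0], "violations:", violations(population[0]))
```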
9

Rabenius, Michaela. "Deep Learning-based Lung Triage for Streamlining the Workflow of Radiologists." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160537.

Abstract:
The usage of deep learning algorithms such as Convolutional Neural Networks within the field of medical imaging has grown in popularity over the past few years. In particular, these types of algorithms have been used to detect abnormalities in chest x-rays, one of the most commonly performed types of radiographic examination. To try to improve the workflow of radiologists, this thesis investigated the possibility of using convolutional neural networks to create a lung triage that sorts a bulk of chest x-ray images by degree of disease, so that sick lungs are prioritized before healthy lungs. The results from using a binary relevance approach to train multiple classifiers for different observations commonly found in chest x-rays show that several models fail to learn how to classify x-ray images, most likely due to insufficient and/or imbalanced data. Using a binary relevance approach to create a triage is feasible but inflexible, since multiple models must be handled simultaneously. In future work it would therefore be interesting to investigate other approaches, such as a single binary classification model or a multi-label classification model.
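The binary relevance approach mentioned here is easy to sketch: one independent binary classifier per observation, with a triage score derived from the per-label probabilities. The sketch below uses synthetic features standing in for CNN image embeddings and is only an illustration of the idea, not the thesis's models.

```python
# Illustrative sketch of binary relevance: one classifier per radiographic
# observation, then a triage score ranking images by most alarming finding.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Synthetic multi-label data standing in for image embeddings + findings.
X, Y = make_multilabel_classification(n_samples=300, n_features=20,
                                      n_classes=4, random_state=0)

# Binary relevance: OneVsRest fits one independent classifier per label.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

proba = clf.predict_proba(X)          # per-label probabilities
triage_score = proba.max(axis=1)      # most alarming finding per image
worst_first = np.argsort(-triage_score)
print("examine first:", worst_first[:5])
```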
10

Pla, Planas Albert. "Multi-attribute auctions: application to workflow management systems." Doctoral thesis, Universitat de Girona, 2014. http://hdl.handle.net/10803/134731.

Abstract:
Resource and task allocation for workflows poses an allocation problem in which several attributes may be involved (economic cost, delivery time, CO2 emissions, etc.); it must therefore be treated from a multi-criteria perspective, so that all of the attributes are taken into account when deciding the optimal assignments. Auction mechanisms offer the chance to allocate resources and services in a competitive market environment whilst optimizing outcomes for all of the participants. In this thesis, we propose the use of multi-attribute auctions for allocating resources to workflows occurring in dynamic environments where task performance is uncertain. To this end, we present an auction mechanism for allocating multi-attribute tasks and resources in the workflow domain (PUMAA), a framework for customizing the outcomes of the auctions depending on the particularities of the domain (FMAAC), and a multi-dimensional fairness mechanism favouring egalitarian allocations.
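A minimal sketch of multi-attribute winner determination may help here: each bid is scored across several attributes with buyer-defined weights, and the best-scoring bid wins. The attributes, weights and normalization below are hypothetical, not PUMAA's actual mechanism.

```python
# Illustrative sketch of multi-attribute winner determination: bids scored
# over cost, time and CO2 with buyer-defined weights (all values invented).
bids = {  # resource -> (cost in EUR, delivery in hours, CO2 in kg)
    "provider_a": (100.0, 24.0, 5.0),
    "provider_b": (120.0, 12.0, 4.0),
    "provider_c": (90.0, 48.0, 9.0),
}
weights = {"cost": 0.5, "time": 0.3, "co2": 0.2}  # buyer preferences

def score(bid):
    cost, time, co2 = bid
    # Normalize each attribute against the worst offer (lower is better).
    worst = [max(b[i] for b in bids.values()) for i in range(3)]
    parts = (cost / worst[0], time / worst[1], co2 / worst[2])
    return (weights["cost"] * parts[0] + weights["time"] * parts[1]
            + weights["co2"] * parts[2])

winner = min(bids, key=lambda r: score(bids[r]))
print(winner, {r: round(score(b), 3) for r, b in bids.items()})
# -> provider_b wins despite the highest cost, thanks to time and CO2.
```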
11

Maita, Ana Rocío Cárdenas. "Um estudo da aplicação de técnicas de inteligência computacional e de aprendizado em máquina de mineração de processos de negócio." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-22012016-155157/.

Abstract:
Process mining is a relatively recent research area that lies between data mining and machine learning, on one hand, and business process modeling and analysis, on the other. It aims at discovering, monitoring and improving real business processes by extracting knowledge from event logs available in process-oriented information systems. The main objective of this work was to assess the application of computational intelligence and machine learning techniques, including artificial neural networks and support vector machines, in process mining. Since these techniques are currently the most widely applied in data mining tasks, it would be expected that they were also widely applied in process mining; this had not been evidenced in the recent literature and is confirmed by this work. We sought to understand the broad scenario of the process mining area, including the main features found over the last ten years in terms of types of process mining, data mining tasks used, and techniques applied to solve such tasks. The main focus was to identify whether computational intelligence and machine learning techniques were indeed not being widely used in process mining, and to identify the main reasons for this phenomenon. This was accomplished through a systematic, scientifically rigorous study of the area, followed by validation of the lessons learned through an application example. The study delimits the area from several angles: on the one hand, the most commonly used approaches, techniques, mining tasks and tools; on the other, the publication venues, universities and researchers engaged in the area's development. The results show that 81% of current publications follow traditional data mining approaches, and that the most studied type of process mining is discovery, addressed by 71% of the primary studies. These results are valuable for practitioners and researchers involved in the topic, and represent a major contribution to the area.
12

Abdel, Ahad George, and Abo Jack Dilli. "Digitalisering utifrån ekonomers perspektiv : En fallstudie vid två offentliga organisationer." Thesis, Högskolan Väst, Avd för företagsekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-16627.

Abstract:
Digitalization is a current topic, and technology has grown and affected society, including economists and their ways of working. There are several studies on how digitalization affects the role of economists in the private sector; research on economists' own experiences, and on the public sector, is however scarce. The public sector has been criticized because its technological development does not progress at the pace of the rest of society, since its structures make it difficult to introduce modernization work. The purpose of the study is to map and analyze experiences with digitalization, and the opportunities and challenges around it, from the perspective of economists. The study was conducted using a qualitative method, with data collected at two public organizations through six semi-structured interviews. Digitization results in analog information being digitized; it improves internal efficiency by streamlining work processes, eliminating manual handling and reducing human error. Management control systems are about influencing the behavior of employees, but also of managers, in the organization. The empirical results show that digitization has contributed to a transition from the analog to the digital and has streamlined work processes within the organizations. With regard to management control, the study identified a combination of results control and action control in the organizations. The study concludes that the advantages of digitization are more efficient work processes and time freed up for more qualified tasks. The opportunities are that streamlining allows more focus on qualified tasks that require the human eye, reducing human error. One disadvantage is that the economists do not possess IT skills themselves and need the help of the IT department, which is a challenge because communication issues arise; as a result, the economists find that the systems are not always optimal for their tasks. Another disadvantage is that digitalization negatively affects social contact and creative ability, as a result of more digital meetings instead of meeting physically.
13

Tomasone, Marco Benito. "Pipeline per il Machine Learning: Analisi dei workflow e framework per l’orchestrazione i casi Recommendation System e Face2Face Traslation." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Abstract:
Nowadays, many of the problems we face daily involve Artificial Intelligence techniques. Machine learning is a rapidly developing field, and its models and algorithms are expanding at a remarkable pace. Many problems are tackled and solved through pipelines of machine learning models which, sequenced together, lead to the desired solution. Platforms such as Ai4Eu are emerging as exchange hubs for models, datasets and knowledge, which raises the need to automate the creation of ML pipelines on these platforms, reusing code wherever possible. Using a bottom-up approach, this thesis begins with a close examination of the workflow of the YouTube recommendation system. It then reviews the standard approaches in the literature, identifying two main classes of recommendation system according to the filtering applied: collaborative filtering (the class the YouTube recommender belongs to) and item-based filtering; the most important part of this kind of application turns out to be data management. Following the same method, the Face2Face translation pipeline is studied, analyzing for each component its approach to data and its model structure, and comparing each component with its counterparts in the literature to assess invariants and versatility; some components pair standard models with different data approaches, while others pair standard data approaches with a wide variety of available models. Finally, three frameworks for orchestrating machine learning pipelines are presented: MLRun, ZenML and Kale, chosen because they support deployment and code reuse. Apart from small differences, these three frameworks turn out to be largely equivalent.
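The notion of a pipeline of sequenced models, central to this abstract, can be illustrated in a few lines: each stage consumes the previous stage's output, as in the Face2Face translation case. The stage functions below are placeholders, not the API of MLRun, ZenML or Kale.

```python
# Illustrative sketch of an ML pipeline as sequenced stages; the stage
# bodies are stubs standing in for real transcription/translation models.
from functools import reduce

def transcribe(video):   return {"text": f"words({video})"}
def translate(payload):  return {"text": f"fr({payload['text']})"}
def synthesize(payload): return {"audio": f"speech({payload['text']})"}

def run_pipeline(stages, source):
    """Sequentialize the stages: each consumes the previous output."""
    return reduce(lambda data, stage: stage(data), stages, source)

print(run_pipeline([transcribe, translate, synthesize], "clip.mp4"))
```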
14

Klinga, Peter. "Transforming Corporate Learning using Automation and Artificial Intelligence : An exploratory case study for adopting automation and AI within Corporate Learning at financial services companies." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279570.

Abstract:
As the emergence of new technologies continuously disrupts the way organizations function and develop, the majority of initiatives within Learning and Development (L&D) are far from fully effective. The purpose of this study was to conduct an exploratory case study investigating how automation and AI technologies could improve corporate learning within financial services companies. The study was delimited to three case companies, all primarily operating in the Nordic financial services industry. The exploratory research was carried out through a literature review, several in-depth interviews and a survey of a selected number of research participants. The research revealed that the current state of training within financial services is characterized by a significant amount of manual and administrative work, a lack of intelligence in decision-making, and little consideration of employees' existing knowledge. Moreover, the empirical evidence revealed a wide array of opportunities for adopting automation and AI technologies into the respective learning workflows of the L&D organizations within the case companies.
15

Fischer, Tobias Christian, and Elin Lawson. "The Embeddedness of Information Technology in the Workflow of Business Processes : How Can IT Support and Improve the Way Work is Done?" Thesis, Uppsala universitet, Företagsekonomiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-202627.

Abstract:
Wise investments in Information Technology have become increasingly important for staying competitive in today's environment. Large numbers of people and IT systems are involved in the process of turning input into output. As these employees and IT systems must be harmonized, it becomes relevant to study how employees' routines and habits relate to the usage and embeddedness of these systems. The purpose of this paper is therefore to investigate how embedded IT can lead to improved business processes. This is done by exploring how embedded IT is used in workflows and by examining what support and hindrance IT can offer. Extensive theoretical research was conducted within the fields of habits and routines, business processes and embedded IT, developing a framework for analysis. A case study was then conducted in which a specific insurance claims process was thoroughly analyzed through interviews and work shadowing, facilitating a within-case analysis. The results show the interdependency between the pillars of this study: workflow habits and routines influence IT usage, whereas IT aims to provide support through automatization and informatization. However, to enable this and achieve significant improvement, the processes IT aims to support need to be fully known.
16

Guizani, Nachoua. "Conception d’une architecture de services d’intelligence ambiante pour l’optimisation de la qualité de service de transmission de messages en e-santé." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1157/document.

Abstract:
Routing policy management of eHealth messages in a ubiquitous environment raises several key issues, such as taking into account the diversity and specificity of the different use cases and actors, as well as the dynamicity of the medical, social, logistic and environmental contexts. We propose an original, autonomous and adaptive service orchestration methodology aiming to optimize message flow and personalize transmission quality by sending messages to the most appropriate recipients in a timely manner. Our solution is a generic, model-driven architecture whose domain information and context models were designed according to user needs and requirements. Our approach consists in composing, in real time, services for the dynamic fusion and management of heterogeneous information from the source, target and message ecosystems, driven by artificial intelligence methods for routing decision support. The aim is to ensure reliable, personalized and dynamic context-aware communication, whatever the scenario and the message type (alarm, technical, etc.). Our architecture is applicable to various domains and has been strengthened by business process modeling (BPM) to make the operation of its services explicit. The proposed framework is based on ontologies and is compatible with the HL7 V3 standard. Self-adaptation of the routing decision process is performed by means of a dynamic Bayesian network, and supervision of message status is based on timed Petri nets.
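As a heavily simplified illustration of probabilistic routing support, the sketch below performs a one-step Bayesian update over candidate recipients given observed message context. The thesis uses a full dynamic Bayesian network; the recipients, priors and likelihoods here are invented.

```python
# Illustrative, heavily simplified stand-in for the routing decision: a
# one-step Bayesian update over candidate recipients given message context.
priors = {"cardiologist": 0.2, "nurse": 0.5, "technician": 0.3}

# P(observed context feature | recipient is appropriate) - invented values
likelihood = {
    ("alarm", "cardiologist"): 0.8, ("alarm", "nurse"): 0.4,
    ("alarm", "technician"): 0.1,
    ("night", "cardiologist"): 0.3, ("night", "nurse"): 0.7,
    ("night", "technician"): 0.5,
}

def posterior(context_features):
    scores = {}
    for recipient, p in priors.items():
        for feature in context_features:
            p *= likelihood[(feature, recipient)]
        scores[recipient] = p
    z = sum(scores.values())  # normalize into a distribution
    return {r: s / z for r, s in scores.items()}

# Route the message to the most probable recipient for this context.
print(posterior(["alarm", "night"]))
```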
17

Bougueng, Tchemeube Renaud. "Location-Aware Business Process Management for Real-time Monitoring of Patient Care Processes." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/24336.

Abstract:
Long wait times are a global issue in the healthcare sector, particularly in Canada. Despite numerous research findings on wait time management, the issue persists. This is partly because, for a given hospital, the data required to conduct wait time analysis is currently scattered across various information systems. Moreover, such data is usually inaccurate (because of possible human errors), imprecise and late. The whole situation contributes to the current state of wait times. This thesis proposes a location-aware business process management system for real-time care process monitoring. More precisely, the system improves the visibility of process execution by gathering accurate and granular process information, including wait time measurements, as processes execute. The major contributions of this thesis include an architecture for the system, a prototype that takes advantage of a commercial real-time location system combined with a business process management system to accurately measure wait times, and a case study based on a real cardiology process from an Ontario hospital.
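The wait-time measurements at the heart of this system can be illustrated with a short sketch that derives a patient's cumulative waiting time from timestamped location events. The zones and events below are invented; a real deployment would consume RTLS readings mapped to process activities.

```python
# Illustrative sketch: compute per-patient wait time from location events
# (patient, zone, timestamp), summing intervals spent in a waiting zone.
from datetime import datetime

events = [  # invented readings, as a location system might emit them
    ("p1", "waiting_room", "2013-05-06 08:00"),
    ("p1", "triage",       "2013-05-06 08:35"),
    ("p1", "waiting_room", "2013-05-06 08:50"),
    ("p1", "echo_lab",     "2013-05-06 09:40"),
]

def wait_minutes(events, patient, wait_zone="waiting_room"):
    ts = lambda s: datetime.strptime(s, "%Y-%m-%d %H:%M")
    total, entered = 0.0, None
    for p, zone, stamp in events:
        if p != patient:
            continue
        if zone == wait_zone:
            entered = ts(stamp)            # patient started waiting
        elif entered is not None:
            total += (ts(stamp) - entered).total_seconds() / 60
            entered = None                 # waiting interval closed
    return total

print(wait_minutes(events, "p1"))  # 35 + 50 = 85 minutes waited
```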
18

El, Amraoui Yassine. "Faciliter l'inclusion humaine dans le processus de science des données : de la capture des exigences métier à la conception d'un workflow d'apprentissage automatique opérationnel." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4017.

Abstract:
When data scientists need to create machine learning workflows to solve a problem, they first understand the business needs, analyze the data, and then experiment to find a solution. They judge the success of each attempt using metrics like accuracy, recall, and F-score: if these metrics meet expectations on the test data, it is a success; otherwise, it is considered a failure. However, they often do not pinpoint why a workflow fails before trying a new one. This trial-and-error process can involve many attempts, because it is not guided and relies on the preferences and knowledge of the data scientist, leading to varying trial counts among practitioners. Also, evaluating solutions on a test set does not guarantee performance on real-world data, so deployed models need additional monitoring, and a poorly performing workflow may require restarting the whole process with adjustments based on new data. Furthermore, each data scientist learns from their own experiences without sharing knowledge, which can lead to repeated mistakes and oversights, and the interpretation of similarity between use cases varies among practitioners, making the process even more subjective. Overall, the process lacks structure and depends heavily on the individual knowledge and decisions of the data scientists involved.
In this work, we present how to mutualize data science knowledge related to anomaly detection in time series in order to help data scientists generate machine learning workflows, guiding them along the phases of the process. To this end, we propose three main contributions. Contribution 1: integrating data, business requirements, and solution components in ML workflow design. While automatic approaches focus on the data, our approach considers the dependencies between the data, the business requirements, and the solution components; this holistic approach ensures a more comprehensive understanding of the problem and guides the development of appropriate solutions. Contribution 2: customizing workflows for tailored solutions by leveraging partial and modular configurations. Our approach assists data scientists in customizing workflows for their specific problems by employing various variability models and a constraint system. This setup enables users to receive feedback based on their data and business requirements, possibly only partially identified; additionally, we showed that users can access previous experiments based on problem settings or create entirely new ones. Contribution 3: enhancing software product line knowledge through the exploitation of new products. We propose a practice-driven approach to building a software product line (SPL) as a first step toward designing generic solutions for detecting anomalies in time series, while capturing new knowledge and capitalizing on existing knowledge when dealing with new experiments or use cases. The incrementality of knowledge acquisition and the instability of the domain are supported by the SPL through its structure and the exploitation of partial configurations associated with past use cases. As far as we know, this is the first application of the SPL paradigm in such a context and with a knowledge acquisition objective. By capturing practices in partial descriptions of the problems and descriptions of the solutions implemented, we obtain the abstractions needed to reason about datasets, solutions, and business requirements. The SPL is then used to produce new solutions, compare them to past solutions, and identify knowledge that was not explicit. The growing abstraction supported by the SPL also brings other benefits: in knowledge sharing, we have observed a shift in the approach to creating ML workflows, focusing on analyzing problems before looking for similar applications.
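Contribution 2's constraint system can be illustrated with a minimal check of a partial configuration against requires/excludes rules of a feature model. The features and rules below are hypothetical, not the thesis's actual variability models.

```python
# Illustrative sketch: validate a partial workflow configuration against
# simple feature-model constraints (requires/excludes). All names invented.
requires = {"seasonal_decomposition": {"fixed_sampling_rate"},
            "autoencoder_detector": {"training_window"}}
excludes = {("streaming_input", "full_history_scan")}

def check(partial):
    """Return constraint violations for a partial configuration (a set)."""
    errors = []
    for feature, deps in requires.items():
        if feature in partial and not deps <= partial:
            errors.append(f"{feature} requires {deps - partial}")
    for a, b in excludes:
        if a in partial and b in partial:
            errors.append(f"{a} excludes {b}")
    return errors

config = {"streaming_input", "autoencoder_detector", "training_window"}
print(check(config) or "partial configuration is consistent so far")
```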
19

Oliveira, Nathália Ribeiro Schalcher de. "FORMAÇÃO E CUMPRIMENTO DE CONTRATOS ELETRÔNICOS NO SISTEMA DE COMÉRCIO INTELIGENTE - ICS." Universidade Federal do Maranhão, 2004. http://tedebc.ufma.br:8080/jspui/handle/tede/347.

Abstract:
This work is part of the ICS (Intelligent Commerce System) project, developed in the Intelligent Systems Laboratory (LSI) at UFMA under the coordination of Prof. Sofiane Labidi. Its main objective is to develop an intelligent environment for the last two phases of the ICS lifecycle: the formation and fulfillment of contracts. The ICS architecture and environment are presented, along with the aspects of their development. The implemented system uses intelligent agents to automate both the contracting of deals closed among ICS users and the monitoring of the obligations they assume. We propose the use of Semantic Web standards and tools to manage the information contained in the contracts. For the monitoring, we propose a model that combines temporal workflow with active rules based on the ECAA paradigm of active database systems.
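The ECAA-style active rules used for monitoring are straightforward to sketch: on an event, a condition over the contract state selects either the action or an alternative action. The contract fields and deadline logic below are invented for illustration.

```python
# Illustrative sketch of an ECAA (event-condition-action-alternative) rule
# for contract monitoring; contract fields and deadlines are hypothetical.
from datetime import date

contract = {"buyer": "u42", "due": date(2004, 3, 15), "delivered": None}

rules = [{
    "event": "daily_check",
    "condition": lambda c, today: c["delivered"] is None and today > c["due"],
    "action": lambda c: print(f"notify {c['buyer']}: delivery overdue"),
    "alt_action": lambda c: print("obligation on track"),
}]

def on_event(event, contract, today):
    for rule in rules:
        if rule["event"] != event:
            continue
        if rule["condition"](contract, today):
            rule["action"](contract)      # obligation violated
        else:
            rule["alt_action"](contract)  # ECAA: alternative action

on_event("daily_check", contract, today=date(2004, 3, 20))
```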
20

Fdhila, Walid. "Décentralisation Optimisée et Synchronisation des Procédés Métiers Inter-Organisationnels." Phd thesis, Université Henri Poincaré - Nancy I, 2011. http://tel.archives-ouvertes.fr/tel-00643827.

Abstract:
Globalization, the continual growth in the size of companies and the need for agility have driven enterprises to outsource their activities, to sell parts of their processes, and even to distribute processes that were until then centralized. Moreover, most business processes in today's industry involve complex interactions between a large number of geographically distributed services, developed and maintained by different organizations. Some of these processes can be very complex and manipulate large amounts of data, and the organizations that own them must handle a considerable number of process instances simultaneously; some even struggle to manage them in a centralized manner. Consequently, enterprises recognize the need to partition their business processes in a flexible way and to be able to distribute them efficiently, while respecting the semantics and objectives of the centralized process. The work presented in this thesis proposes a decentralization methodology that makes it possible to decentralize business processes in an optimized, generic and flexible manner. In other words, this approach transforms a centralized process into a set of cooperating fragments, which are deployed and executed independently, distributed geographically, and can be invoked remotely. This thesis also proposes an environment for modeling web service choreographies in a formal language, namely the event calculus.
21

Jovanovic, Petar. "Requirement-driven design and optimization of data-intensive flows." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/400139.

Abstract:
Data have become the number one asset of today's business world, and their exploitation and analysis attract the attention of people from different fields with different technical backgrounds. Data-intensive flows are central processes in today's business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. However, designing and optimizing such data flows, to satisfy both users' information needs and agreed quality standards, is known to be a burdensome task, typically left to the manual efforts of a BI system designer. These tasks have become even more challenging for next-generation BI systems, where data flows typically need to combine data from in-house transactional storages with data coming from external sources in a variety of formats (e.g., social media, governmental data, news feeds). Moreover, to make an impact on business outcomes, data flows are expected to answer unanticipated analytical needs of a broader set of business users and to deliver valuable information in near real-time (i.e., at the right time). These challenges clearly indicate a need to boost the automation of the design and optimization of data-intensive flows. This PhD thesis aims at providing automatable means for managing the lifecycle of data-intensive flows. The study first analyzes the remaining challenges in the field by surveying the current literature and envisioning an architecture for managing the lifecycle of data-intensive flows. Following the proposed architecture, we then provide automatic techniques covering the different phases of that lifecycle. In particular, the thesis first proposes an approach (CoAl) for the incremental design of data-intensive flows by means of multi-flow consolidation. CoAl not only facilitates the maintenance of data flow designs in the face of changing information needs, but also supports multi-flow optimization by maximizing reuse. Next, in the data warehousing (DW) context, we propose a complementary method (ORE) for the incremental design of the target DW schema, along with systematic tracing of evolution metadata, which can further facilitate the design of back-end data-intensive flows (i.e., ETL processes). The thesis then studies the problem of implementing data-intensive flows in the deployable formats of different execution engines, and proposes the BabbleFlow system for translating logical data-intensive flows into executable formats spanning one or several execution engines. Lastly, the thesis focuses on managing the execution of data-intensive flows on distributed data processing platforms, and to this end proposes an algorithm (H-WorD) that supports the scheduling of data-intensive flows by workload-driven redistribution of data in computing clusters. The overall outcome of this thesis is an end-to-end platform for managing the lifecycle of data-intensive flows, called Quarry. The techniques proposed in this thesis, plugged into the Quarry platform, greatly reduce manual effort and assist users of different technical skills in their analytical tasks. Finally, the results of this thesis contribute to the field of data-intensive flows in today's BI systems, and advocate further attention from both academia and industry to the problems of designing and optimizing data-intensive flows.
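To make the multi-flow consolidation idea concrete, here is a minimal Python sketch of how flows sharing a prefix of operations can be merged so the shared work is kept only once. It illustrates only the general reuse principle; the flow encoding and the greedy prefix merging are assumptions, not the CoAl algorithm from the thesis.

    # Minimal sketch: flows that share a prefix of operations are merged into
    # a prefix tree, so the shared operators are stored (and executed) once.
    # The flow representation is an illustrative assumption, not CoAl itself.

    def consolidate(flows):
        """Merge flows (lists of hashable operation descriptors) into a prefix tree."""
        root = {}
        for flow in flows:
            node = root
            for op in flow:
                node = node.setdefault(op, {})  # reuse the node if the op already exists
        return root

    def count_ops(tree):
        """Count operator nodes needed after consolidation."""
        return sum(1 + count_ops(child) for child in tree.values())

    flow_a = ["extract(sales)", "clean_nulls", "join(customers)", "aggregate(month)"]
    flow_b = ["extract(sales)", "clean_nulls", "join(customers)", "filter(region)"]

    merged = consolidate([flow_a, flow_b])
    print(count_ops(merged))  # 5 operators instead of 8: the 3-op prefix is shared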
APA, Harvard, Vancouver, ISO, and other styles
22

Fdhila, Walid. "Décentralisation optimisée et synchronisation des procédés métiers inter-organisationnels." Electronic Thesis or Diss., Nancy 1, 2011. http://www.theses.fr/2011NAN10058.

Full text
Abstract:
In mainstream service orchestration platforms, the orchestration model is executed by a centralized orchestrator through which all interactions are channeled. This architecture is not optimal in terms of communication overhead and suffers from the usual problems of a single point of failure. Moreover, globalization and increasing competitive pressure have created the need for agility in business processes, including the ability to outsource, offshore, or otherwise distribute once-centralized business processes or parts thereof. An organization that aims for such fragmentation of its business processes needs to be able to separate a process into different parts. There is therefore a growing need for the ability to fragment one's business processes in an agile manner, and to distribute and wire these fragments together so that their combined execution recreates the function of the original process. This thesis focuses on solving some of the core challenges resulting from the need to restructure enterprise interactions, which corresponds to the fragmentation of intra- and inter-enterprise business process models. It describes how to identify, create, and execute process fragments without losing the operational semantics of the original process models, proposes methods to optimize the fragmentation process in terms of QoS properties and communication overhead, and presents a framework for modeling web service choreographies in the Event Calculus formal language.
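The fragmentation idea can be illustrated with a short Python sketch: activities of a centralized process are grouped by the partner assigned to execute them, and every control-flow edge that crosses partners is replaced by a send/receive pair. The partner assignment and message encoding are illustrative assumptions, not the optimized decentralization method of the thesis.

    # Minimal sketch: split a centralized process into per-partner fragments;
    # cross-partner control-flow edges become asynchronous message exchanges.

    activities = {"A": "org1", "B": "org1", "C": "org2", "D": "org2"}
    edges = [("A", "B"), ("B", "C"), ("C", "D")]  # sequential control flow

    fragments = {}
    for act, org in activities.items():
        fragments.setdefault(org, {"activities": [], "messages": []})
        fragments[org]["activities"].append(act)

    for src, dst in edges:
        s_org, d_org = activities[src], activities[dst]
        if s_org != d_org:  # a crossing edge becomes a send/receive pair
            fragments[s_org]["messages"].append(f"send {src}->{dst} to {d_org}")
            fragments[d_org]["messages"].append(f"recv {src}->{dst} from {s_org}")

    for org, frag in fragments.items():
        print(org, frag)  # two cooperating fragments recreate the original flow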
APA, Harvard, Vancouver, ISO, and other styles
23

Yen, Yu-Hong, and 顏毓宏. "Development of Intelligent Navigation Workflow andTraining for Spinal Surgery." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/12645496827549995725.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Mechanical Engineering, 2007. In this research, we developed an intelligent navigation workflow and a computer-aided training system for pedicle screw implantation in spinal surgery. Based on biplane X-ray imaging of a lumbar saw bone, spatial coordinate correlations between the physical saw bone and a 3D digitizer establish an off-line training environment. A senior spine surgeon sketched 2D safety areas on the biplane images, whose projective intersection generates the 3D safety zones for pedicle insertion. Physicians can then operate the 3D digitizer to set the pedicle drilling position and orientation on the saw bone, with 3D visualization referenced to the safety zones. Grading levels preset by the senior spine surgeon represent the test results of pedicle screw implantation. The techniques developed for this system involve X-ray image rectification, registration, image processing, dilation and erosion, biplane calibration, and spatial geometric transformation, and constitute a pioneering study toward further clinical applications. The same principles of pedicle insertion can be applied to either the thoracic or the lumbar spine.
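As a rough illustration of the projective-intersection idea, the sketch below tests whether a candidate 3D drill position projects inside the surgeon's 2D safety polygon in both calibrated views; a point belongs to the 3D safety zone only if both tests pass. The camera matrices and polygons are toy placeholders, not calibration data from the thesis.

    # Minimal sketch: a 3D point lies in the safety zone iff its projection
    # falls inside the sketched 2D safety area in BOTH X-ray views.

    def project(P, X):
        """Project a homogeneous 3D point X with a 3x4 camera matrix P."""
        u = [sum(P[r][c] * X[c] for c in range(4)) for r in range(3)]
        return (u[0] / u[2], u[1] / u[2])

    def inside(poly, pt):
        """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
        x, y = pt
        hit = False
        for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
            if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                hit = not hit
        return hit

    P_ap  = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]]   # toy antero-posterior view
    P_lat = [[0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 2]]   # toy lateral view
    zone_ap  = [(-1, -1), (1, -1), (1, 1), (-1, 1)]      # surgeon-drawn safety areas
    zone_lat = [(0, -1), (2, -1), (2, 1), (0, 1)]

    X = [0.2, 0.1, 1.0, 1.0]  # candidate drill-tip position (homogeneous)
    safe = inside(zone_ap, project(P_ap, X)) and inside(zone_lat, project(P_lat, X))
    print("inside safety zone:", safe)  # True for this toy setup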
APA, Harvard, Vancouver, ISO, and other styles
24

Huang, Ching-Jen, and 黃敬仁. "Using Intelligent Agent Technology to Enable Autonomous Workflow Management for Collaborative Design." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/73849789014166453396.

Full text
Abstract:
Doctoral dissertation, National Tsing Hua University, Department of Industrial Engineering and Engineering Management, 2006. With the support of modern network capacity, collaborative design systems have become a sharp tool for enterprises to respond quickly to time-to-market pressure in distributed environments. The techniques for implementing a collaborative design system fall into two parts: data processing and presentation, which includes Product Data Management (PDM), product configuration management, and product presentation techniques; and management and control, which builds a workflow management system into the collaborative design system to fulfill management and time-control requirements. In this research, we focus on developing a loosely coupled collaborative workflow management system with intelligent agent technology in a distributed environment. The research proceeds in three stages. The first stage develops a general process definition and modeling technology based on the WfMC workflow reference model, defining the workflow elements as message, task, and process, which can be combined flexibly into various workflow templates; we further define a workflow monitoring mechanism for controlling workflow status. The second stage develops the supporting activities for the first, studying the applicable technology and managerial support for the collaborative system from the viewpoint of a contact center. The third stage targets a distributed collaborative design environment by extending the general process definition and modeling technology developed in the first stage. With an information communication mechanism constructed with agent technology, the process elements, built with RDF, can be applied to other workflow platforms. The contribution of this research is a dynamic and autonomous workflow construction technology and process control mechanism: intelligent agent technology is applied to build the information communication mechanism in a distributed environment, and RDF/XML process elements are adopted to solve communication compatibility problems among heterogeneous platforms, enhancing the effectiveness of workflows.
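A minimal Python sketch of the agent-based coordination idea follows: each site runs an agent holding a local routing table of workflow elements, and finished tasks are handed to the next responsible agent by message. Class, task, and message names are illustrative assumptions, not the dissertation's actual design, and the RDF/XML serialization is omitted.

    # Minimal sketch: agents advance a distributed workflow by exchanging
    # messages; the routing table plays the role of the process definition.

    import queue

    class WorkflowAgent:
        def __init__(self, name, routing, network):
            # routing: finished task -> (successor task, agent responsible for it)
            self.name, self.routing, self.network = name, routing, network
            self.inbox = queue.Queue()
            network[name] = self.inbox  # register this agent's mailbox

        def notify(self, agent, task):
            self.network[agent].put({"task": task, "from": self.name})

        def step(self):
            msg = self.inbox.get()
            print(f"{self.name}: executing task '{msg['task']}'")
            nxt = self.routing.get(msg["task"])
            if nxt:  # hand control to the next responsible agent
                task, owner = nxt
                self.notify(owner, task)

    network = {}
    design = WorkflowAgent("design", {"draft_model": ("run_simulation", "analysis")}, network)
    analysis = WorkflowAgent("analysis", {}, network)

    design.inbox.put({"task": "draft_model", "from": "user"})
    design.step()     # design executes draft_model, then notifies analysis
    analysis.step()   # analysis executes run_simulation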
APA, Harvard, Vancouver, ISO, and other styles
25

Reul, Christian. "An Intelligent Semi-Automatic Workflow for Optical Character Recognition of Historical Printings." Doctoral thesis, 2020. https://doi.org/10.25972/OPUS-20923.

Full text
Abstract:
Optical Character Recognition (OCR) on historical printings is a challenging task, mainly due to the complexity of the layouts and the highly variant typography. Nevertheless, in the last few years great progress has been made in the area of historical OCR, resulting in several powerful open-source tools for preprocessing, layout analysis and segmentation, Automatic Text Recognition (ATR), and postcorrection. Their major drawback is that they offer only limited applicability to non-technical users like humanist scholars, in particular when it comes to combining several tools in a workflow. Furthermore, depending on the material, these tools are usually not able to fully automatically achieve sufficiently low error rates, let alone perfect results, creating a demand for an interactive postcorrection functionality which, however, is generally not incorporated. This thesis addresses these issues by presenting an open-source OCR software called OCR4all, which combines state-of-the-art OCR components and continuous model training into a comprehensive workflow. While a variety of materials can already be processed fully automatically, books with more complex layouts require manual intervention by the users. This is mostly because the Ground Truth (GT) required for training stronger mixed models (for segmentation as well as text recognition) is not yet available in the desired quantity or quality. To deal with this issue in the short run, OCR4all offers better recognition capabilities in combination with a very comfortable Graphical User Interface (GUI) that allows error corrections not only in the final output, but already in early stages, to minimize error propagation. In the long run, this constant manual correction produces large quantities of valuable, high-quality training material which can be used to improve fully automatic approaches. Furthermore, extensive configuration capabilities are provided to set the degree of automation of the workflow and, if necessary, to adapt the carefully selected default parameters to specific printings. The architecture of OCR4all allows an easy integration (or substitution) of newly developed tools for its main components by supporting standardized interfaces like PageXML, thus aiming at continually higher automation for historical printings. In addition to OCR4all, several methodical extensions in the form of accuracy-improving techniques for training and recognition are presented, most notably an effective, sophisticated, and adaptable voting methodology using a single ATR engine, a pretraining procedure, and an Active Learning (AL) component. Experiments showed that combining pretraining and voting significantly improves the effectiveness of book-specific training, reducing the obtained Character Error Rates (CERs) by more than 50%. The proposed extensions were further evaluated in two real-world case studies. First, the voting and pretraining techniques were transferred to the task of constructing so-called mixed models trained on a variety of different fonts, using 19th-century Fraktur script as an example, resulting in a considerable improvement over a variety of existing open-source and commercial engines and models. Second, the extension from ATR on raw text to the adjacent topic of typography recognition was successfully addressed by thoroughly indexing a historical lexicon that heavily relies on different font types to encode its complex semantic structure. In the main experiments on very complex early printed books, even users with minimal or no experience were able not only to deal comfortably with the challenges presented by the complex layouts, but also to recognize the text with manageable effort and great quality, achieving excellent CERs below 0.5%. Furthermore, the fully automated application to 19th-century novels showed that OCR4all (average CER of 0.85%) can considerably outperform the commercial state-of-the-art tool ABBYY Finereader (5.3%) on moderate layouts if suitably pretrained mixed ATR models are available.
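The voting idea can be illustrated with a few lines of Python: several recognition hypotheses for the same line vote character by character, and the majority wins. The actual method described in the thesis first aligns variable-length outputs and can weight votes by confidence; this toy version assumes pre-aligned strings of equal length.

    # Minimal sketch: character-level majority voting over aligned OCR outputs.

    from collections import Counter

    def vote(hypotheses):
        """Majority vote per character position over equally long hypotheses."""
        assert len({len(h) for h in hypotheses}) == 1, "toy version needs aligned inputs"
        return "".join(
            Counter(chars).most_common(1)[0][0]  # the most frequent character wins
            for chars in zip(*hypotheses)
        )

    # Outputs of five differently trained models for the same text line:
    print(vote(["hiftorical", "historical", "histurical", "historical", "hisforical"]))
    # -> "historical": each model errs somewhere, but the ensemble recovers the truth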
APA, Harvard, Vancouver, ISO, and other styles
26

Cardoso, Afonso Pires de Matos Gomes. "Method to Foster Intelligent Processes Automation into an Organization." Master's thesis, 2021. http://hdl.handle.net/10362/125982.

Full text
Abstract:
Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. The present dissertation introduces a framework for process automation aimed at making processes intelligent, in order to increase consistency, optimize execution time, and free workers from low-value-added tasks. Fostering intelligent process automation in an organization remains an underdeveloped topic; being relatively recent, intelligent process automation can be seen as an extension of automated processes "on steroids". The main objective of this project is to propose a method for finding and promoting intelligent automation for a target process, providing a walkthrough guideline so that organizations can identify, assess, and design the conversion of current processes into intelligent automated processes. To evaluate the proposed framework, a case study is presented and a set of interviews was carried out with two groups: an academic group and a group of agents who manage processes in an organization.
APA, Harvard, Vancouver, ISO, and other styles
27

"Explainable AI in Workflow Development and Verification Using Pi-Calculus." Doctoral diss., 2020. http://hdl.handle.net/2286/R.I.55566.

Full text
Abstract:
Computer science education is an increasingly vital area of study, with various challenges that raise the difficulty level for new students and result in higher attrition rates. As part of an effort to resolve this issue, a new visual programming language environment was developed for this research: the Visual IoT and Robotics Programming Language Environment (VIPLE). VIPLE is based on computational thinking and flowcharts, which reduces the need to memorize the detailed syntax of text-based programming languages. VIPLE has been used at Arizona State University (ASU) across multiple years and sections of FSE100, as well as in universities worldwide. Another major issue with teaching large programming classes is the potential lack of qualified teaching assistants to grade and offer insight into a student's programs at a level beyond output analysis. In this dissertation, I propose a novel framework for performing semantic autograding, which analyzes student programs at a semantic level to give students additional, systematic help. A general autograder is not practical for general programming languages, due to the flexibility of their semantics; a practical autograder is possible in VIPLE because of its simplified syntax and restricted semantic options. The design of this autograder is based on the concept of theorem provers. To achieve this goal, I employ a modified version of Pi-Calculus to represent VIPLE programs and Hoare Logic to formalize program requirements. By building on the inference rules of Pi-Calculus and Hoare Logic, I construct a theorem prover that can perform automated semantic analysis. Furthermore, building on this theorem prover enables me to develop a self-learning algorithm that can learn the conditions for a program's correctness according to a given solution program.
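As a tiny illustration of the Hoare-logic side of such an autograder, the sketch below computes the weakest precondition of an assignment (substitute the assigned expression into the postcondition) and then checks the required implication by sampling; the real system proves this symbolically over a Pi-Calculus encoding of the program. All names and the sampling-based check are assumptions for illustration.

    # Minimal sketch: the weakest precondition of "x := e" w.r.t. postcondition
    # Q is Q with x replaced by e; a Hoare triple {P} x := e {Q} is valid when
    # P implies that weakest precondition. The sampling check below merely
    # stands in for the symbolic proof the actual theorem prover performs.

    def wp_assign(assign, post):
        """wp of `x := assign(x)` w.r.t. post(x): evaluate post on the new value."""
        return lambda x: post(assign(x))

    post = lambda x: x > 0         # required after the statement: x > 0
    assign = lambda x: x + 1       # the statement: x := x + 1
    pre = wp_assign(assign, post)  # semantically: x + 1 > 0, i.e. x > -1

    claimed_pre = lambda x: x >= 0
    valid = all(pre(x) for x in range(-100, 100) if claimed_pre(x))
    print("triple {x >= 0} x := x + 1 {x > 0} holds on samples:", valid)  # True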
APA, Harvard, Vancouver, ISO, and other styles
28

Custódio, David José Fernandes. "A strategy for the integration of hyper-automation technologies into the Portuguese companies." Master's thesis, 2022. http://hdl.handle.net/10362/135617.

Full text
Abstract:
Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. Today's competitive world demands that companies explore and adopt new business automation technologies in order to evolve their processes and obtain substantial benefits, from increased efficiency to reduced costs. Business processes that currently involve a lot of manual work, or that add no value to the company, are the prime candidates for automation, so that employees can focus their knowledge on more relevant tasks. The purpose of this study is to propose a strategy for integrating hyper-automation technologies into the processes of current Portuguese companies and thereby increase their competitiveness. The study analyzes the subject and the relevance of hyper-automation, and collects information from Portuguese companies about their automated processes, as the basis for identifying business needs that may be included in a strategy for applying hyper-automation technologies. Relevant literature on the domain is gathered to build a comprehensive body of knowledge. The results are analyzed to understand to what extent Portuguese companies would benefit from hyper-automation technologies, reporting the benefits inherent in technological evolution and measuring in which areas and departments managers believe hyper-automation will have a major influence in the short term.
APA, Harvard, Vancouver, ISO, and other styles
29

Witty, Derick. "Implementation of a Laboratory Information Management System To Manage Genomic Samples." Thesis, 2013. http://hdl.handle.net/1805/3521.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI). A Laboratory Information Management System (LIMS) is designed to manage laboratory processes and data, and its core functionality can be extended through configuration tools and add-on modules to support complex laboratory workflows. The purpose of this project is to demonstrate how laboratory data and processes from a complex workflow can be implemented using a LIMS. Genomic samples have become an important part of the drug development process due to advances in molecular testing technology, which evaluates genomic material for disease markers and provides efficient, cost-effective, and accurate results for a growing number of clinical indications. Preparing genomic samples for evaluation requires a complex laboratory process called the precision aliquotting workflow, which processes genomic samples into precisely created aliquots for analysis. The workflow is defined by a set of aliquotting scheme attributes that are executed according to scheme-specific rules logic: the aliquotting scheme defines the attributes of each aliquot based on the achieved sample recovery of the genomic sample, and the rules logic drives the creation of the aliquots based on the scheme definitions. LabWare LIMS is a Windows®-based open-architecture system that manages laboratory data and workflow processes. A LabWare LIMS model was developed to implement the precision aliquotting workflow using a combination of core functionality and configured code.
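The scheme-plus-rules idea can be sketched in a few lines of Python: given the achieved sample recovery, the first matching rule decides how many aliquots of which volume to create. The thresholds, volumes, and field names are invented placeholders, not LabWare configuration from the project.

    # Minimal sketch: rule-driven aliquot planning from achieved sample recovery.

    def plan_aliquots(recovery_ul, scheme):
        """Return the list of aliquot volumes for the first matching scheme rule."""
        for rule in scheme:
            if recovery_ul >= rule["min_recovery_ul"]:
                n = min(rule["max_aliquots"], int(recovery_ul // rule["volume_ul"]))
                return [rule["volume_ul"]] * n  # n aliquots of the rule's volume
        return []  # recovery too low: create no aliquots, flag for review

    scheme = [  # evaluated top-down, most demanding rule first
        {"min_recovery_ul": 200, "volume_ul": 50, "max_aliquots": 4},
        {"min_recovery_ul": 100, "volume_ul": 50, "max_aliquots": 2},
        {"min_recovery_ul": 50,  "volume_ul": 25, "max_aliquots": 2},
    ]

    print(plan_aliquots(130, scheme))  # [50, 50]
    print(plan_aliquots(60, scheme))   # [25, 25]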
APA, Harvard, Vancouver, ISO, and other styles