
Dissertations / Theses on the topic 'Artificial intelligence and process'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Artificial intelligence and process.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Bellman, Markus, and Gustav Göransson. "Intelligent Process Automation : Building the bridge between Robotic Process Automation and Artificial Intelligence." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263090.

Full text
Abstract:
Process Automation has the potential to yield great benefits for companies and organizations, especially in the financial services industry, where companies are information-intensive and experience rich data flows. This has mostly been done through Robotic Process Automation (RPA), but the increased maturity of machine learning algorithms has increased the viability of combining classic RPA with Artificial Intelligence, leading to Intelligent Process Automation (IPA). However, the transition from RPA to IPA entails a set of challenges that must be dealt with before the benefits of the new technology can be harvested. The aim of this research was to identify the challenges that companies will face, and to provide guidance on the preparations that need to be made before IPA can be implemented at full scale. The research was conducted as a theory-building case study at a large Swedish bank. An empirical study was conducted, consisting of interviews with researchers as well as automation professionals and R&D staff at the case company. The findings of the empirical study and previous research in the area were combined and condensed into a guiding framework for organizations wanting to adopt IPA.
APA, Harvard, Vancouver, ISO, and other styles
2

McBrien, Andrew. "Artificial intelligence methods in process plant layout." Thesis, University of Nottingham, 1994. http://eprints.nottingham.ac.uk/14403/.

Full text
Abstract:
The thesis describes "Plant Layout System", or PLS, an Expert System which automates all aspects of the conceptual layout of a chemical process plant, from sizing equipment using process data to deriving the equipment items' elevation and plan positions. PLS has been applied to a test process of typical size and complexity which encompasses a wide range of layout issues and problems. The thesis presents the results of the tests to show that PLS generates layouts that are entirely satisfactory and conventional from an engineering viewpoint. The major advance made in this work is the Expert System approach to the layout of any kind of process plant. The thesis describes the approach in full, together with the engineering principles which it acknowledges. Plant layout problems are computationally complex. PLS decomposes layout into a sequence of formalised steps and uses a powerful and sophisticated technique to reduce plant complexity. PLS uses constraint propagation for spatial synthesis and includes propagation algorithms developed specifically for this domain. PLS includes a novel qualitative technique to select constraints to be relaxed. A conventional frame-based representation was found to be appropriate, but with procedural knowledge recorded in complex forward-chaining rules with novel features. Numerous examples of the layout engineer's knowledge are included to elucidate the epistemology of the domain.
3

Corea, F. M. Ravindra. "Artificial intelligence in process supervision and control." Thesis, University of Newcastle Upon Tyne, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336160.

Full text
4

Cheung, Yen Ping. "Artificial intelligence techniques for assembly process planning." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/79999/.

Full text
Abstract:
Due to current trends in adopting flexible manufacturing philosophies, there has been a growing interest in applying Artificial Intelligence (AI) techniques to implement these manufacturing strategies, because conventional computational methods alone are not sufficient to meet the requirements for greater flexibility. This research examines the possibility of applying AI techniques to process planning and also addresses the various problems encountered when implementing such techniques. In this project, AI planning techniques were reviewed, and some of these techniques were adopted and later extended to develop an assembly planner illustrating the feasibility of applying AI techniques to process planning. The focus was on assembly process planning because little work in this area has been reported. Logical decisions, such as the sequencing of tasks, which form part of the process planning function, can be viewed as an AI planning problem. The prototype Automatic Assembly Planner (AAP) was implemented using Edinburgh Prolog on a SUN workstation. Even though the expected assembly sequences were obtained, the major problem facing this approach, and perhaps AI applications in general, is that of extracting relevant design data for the process planning function, as illustrated by the planner. It is also believed that if process planning can be regarded as making logical decisions with the knowledge of company-specific data, then AAP has also provided some possible answers as to how human process planners perform their tasks. The same kind of reasoning for deciding the sequence of operations could also be employed for planning different products based on a different set of company data. AAP has illustrated the potential of applying AI techniques to process planning. The complexity of assembly can be tackled by breaking assemblies into sub-goals. The Modal Truth Criterion (MTC) was applied and tested in a real situation.
A system for representing the logic of assembly was devised, and a redundant-goals elimination feature was added to the MTC in the AAP. Even though the ideal is a generative planner, in practice variant planners are still valid and perhaps closer to manual assembly process planning.
5

Duttala, Satish. "Virtual material processing artificial intelligence based process selection." Ohio : Ohio University, 2002. http://www.ohiolink.edu/etd/view.cgi?ohiou1174590077.

Full text
6

Gault, Rosemary S. "Intelligent process planning for rapid prototyping." Thesis, Cardiff University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.367509.

Full text
7

Winsor, Julian. "Process based functionality modelling." Thesis, University of Strathclyde, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264285.

Full text
8

Xia, T. A. "The application of artificial intelligence techniques to process identification and control." Thesis, Swansea University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.636701.

Full text
Abstract:
The application of artificial intelligence techniques (viz. neural networks, genetic algorithms and fuzzy logic systems) to process identification and control has been investigated with different systems. Neural networks and fuzzy logic systems are able to learn the dynamics of a process by training with a set of data obtained from that process and, subsequently, are able to provide good predictive performance. Genetic algorithms and evolution strategies are non-gradient-based search schemes which facilitate non-linear system optimisation. Thus, they can be applied in the process industries in place of more traditional linear modelling and optimisation. Based on the standard backpropagation network (BPN), a new neural network, the extended backpropagation network (ExtBPN), has been proposed and tested using different SISO, SIMO and MIMO systems. Two main disadvantages of the standard BPN, i.e. a long training time and poor ability to extrapolate outside the range of training data, are overcome to some extent by using the ExtBPN. A unified strategy for model-based predictive and model reference control has been developed based on the optimisation of a cost function which contains the feedback of error information for the adjustment of future set-points, in order to compensate for the mismatch between the model and the real process. In this way, the inherent advantages of both feedback and feedforward control have been utilised. Several new control strategies have been developed from this basis and tested with linear or non-linear, SISO or MIMO systems using neural network and fuzzy process models. These new control strategies (viz. the generalised horizon adjusted predictive (GHAP) control, the predictive direct model reference control (PDMRC), the modified predictive internal model reference control (MPIMC) and the predictive generic model reference control (PGMC)), in which the cost function is optimised using genetic algorithms and evolution strategies, have been applied successfully to different processes. It has been shown, further, that a variety of time-integral performance criteria can be employed for the design of PID and model-based predictive controllers.
9

Silva, R. G. "Cutting tool condition monitoring of the turning process using artificial intelligence." Thesis, University of South Wales, 1997. https://pure.southwales.ac.uk/en/studentthesis/cutting-tool-condition-monitoring-of-the-turning-process-using-artificial-intelligence(25bc91ec-44fd-435b-a55d-06c564f88f35).html.

Full text
Abstract:
This thesis relates to the application of Artificial Intelligence to tool wear monitoring. The main objective is to develop an intelligent condition monitoring system able to detect when a cutting tool is worn out. To accomplish this objective, it is proposed to use a combined Expert System and Neural Network able to process data coming from external sensors, combine this with information from the knowledge base, and thereafter estimate the wear state of the tool. The novelty of this work is mainly associated with the configuration of the proposed system. With the combination of sensor-based information and inference rules, the result is an on-line system that can learn from experience and can update the knowledge base pertaining to information associated with different cutting conditions. Two neural networks resolve the problem of interpreting the complex sensor inputs, while the Expert System, keeping track of previous successes, estimates which of the two neural networks is more reliable. Also, mis-classifications are filtered out through the use of a rough but approximate estimator, Taylor's tool life equation. In this study, an on-line tool wear monitoring system for turning processes has been developed which can reliably estimate tool wear under common workshop conditions. The system's modular structure makes it easy to update as required by different machines and/or processes. The use of Taylor's tool life equation, although weak as a tool life estimator, proved to be crucial in achieving higher performance levels. The application of the Self-Organizing Map to tool wear monitoring is, in itself, new and proved to be slightly more reliable than the Adaptive Resonance Theory neural network.
10

Badhe, Y. P. "Process modeling and optimization using artificial intelligence and machine learning formalisms." Thesis (Ph.D.), CSIR-National Chemical Laboratory, Pune, 2008. http://dspace.ncl.res.in:8080/xmlui/handle/20.500.12252/2627.

Full text
11

Karlsson, Frida. "The opportunities of applying Artificial Intelligence in strategic sourcing." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281306.

Full text
Abstract:
Artificial Intelligence technology has become increasingly important from a business perspective. In strategic sourcing, the technology has not been explored much; however, 67% of CPOs in a survey indicated that AI is one of their top priorities for the next 10 years. AI can be used to identify patterns, predict prices and provide support in decision making. A qualitative case study has been performed in the strategic sourcing function at a large global industrial company, where the purpose has been to investigate how applicable AI is to the strategic sourcing process at the case company. In order to achieve the purpose of this study, it has been important to understand the strategic sourcing process, what AI technology is, and what it is capable of in strategic sourcing. Based on the empirical data collection combined with literature, opportunities for applying AI in strategic sourcing have been identified and key areas for an implementation have been suggested. These include Forecasting, Spend Analysis & Savings Tracking, Supplier Risk Management, Supplier Identification & Selection, the RFQ process, the Negotiation process, Contract Management and Supplier Performance Management. These key areas have followed the framework identified in the literature study while identifying and adding new factors. It also seemed important to consider challenges and risks, readiness and maturity, as well as factors that appear important for enabling an implementation. To assess how mature and ready the strategic sourcing function is for an implementation, some of the previous digital projects involving AI technologies have been mapped and analysed. Based on the identified key areas of opportunity, use cases and corresponding benefits of applying AI have been suggested, and a guideline covering important factors to consider when applying the technology has been provided.
However, it has been concluded that it might be beneficial to start with a smaller use case and then scale it up. Also, as the strategic sourcing function has been establishing a spend analytics platform for the indirect team, a good start might be to evaluate that project and then apply AI on top of the existing solution. Other factors to consider are ensuring data quality and security, aligning with top management, and demonstrating the advantages AI can provide in terms of increased efficiency and cost savings. The entire strategic sourcing function should be involved in an AI project, and the focus should be not only on technological aspects but also on soft factors, including change management and working agile, in order to successfully apply AI in strategic sourcing.
12

Douratsos, Ioannis. "Application of artificial neural networks to in-line pH process control." Thesis, Liverpool John Moores University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310117.

Full text
13

Karlsson, Marcus. "Developing services based on Artificial Intelligence." Thesis, Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-73090.

Full text
Abstract:
This thesis explores the development process of services based on artificial intelligence (AI) technology within an industrial setting. There has been a renewed interest in the technology, and leading technology companies as well as many start-ups have integrated it into their market offerings. The technology's general application potential for enhancing products and services, along with its task automation possibilities for improved operational excellence, makes it a valuable asset for companies. However, the implementation rate of AI services is still low for many industrial actors. Research in the area has been technically dominated, with little contribution from other disciplines. Therefore, the purpose of this thesis is to identify the development challenges of AI services and, drawing on service development and value theory, to propose a process framework promoting implementation. The work makes two main contributions: first, comparing theoretical and practical development challenges, and second, combining AI with service development and value theory. The empirical research was done through a single case study based on a systematic combining research approach, moving iteratively between theory and empirical findings to direct and support the thesis throughout the work process. The data was collected through semi-structured interviews with a purposive sample consisting of two groups of participants, one AI expert group and one case-internal group, supported by participant observation of the case environment. The data analysis was done through flexible pattern matching. The results were divided into two sections: practical challenges, and development aspects of AI service development. These were combined with the selected theories, and a process framework was generated. The study revealed a currently understudied area concerning business and organisational aspects of AI service development.
Several such challenges were identified with only limited theoretical research as support. For wider industrial adoption of AI technology, more research is needed to understand its integration into the organisation. Further, sustainability and ethical aspects were found not to be a primary concern, being mentioned in only one of the interviews, despite the plethora of theory and identified risks found in the literature. Lastly, the interdisciplinary research approach was found to be beneficial to the AI field for integrating the technology into an industrial setting. The developed framework could draw on existing service development models to help manage the identified challenges.
14

Maree, Charl. "Diagnostic monitoring of dynamic systems using artificial immune systems." Thesis, Stellenbosch : University of Stellenbosch, 2006. http://hdl.handle.net/10019.1/1780.

Full text
Abstract:
Thesis (MScEng (Process Engineering))--University of Stellenbosch, 2006.
The natural immune system is an exceptional pattern recognition system, based on memory and learning, that is capable of detecting both known and unknown pathogens. Artificial immune systems (AIS) employ some of the functionalities of the natural immune system in detecting change in dynamic process systems. The emerging field of artificial immune systems has enormous potential in the application of fault detection systems in process engineering. This thesis aims firstly to familiarise the reader with the various current methods in the field of fault detection and identification. Secondly, the notion of artificial immune systems is introduced and explained. Finally, this thesis investigates the performance of AIS on data gathered from simulated case studies, both with and without noise. Three different methods of generating detectors are used to monitor various processes for anomalous events: (1) random generation of detectors, (2) convex hulls, and (3) the hypercube vertex approach. It is found that random generation provides a reasonable rate of detection, while convex hulls fail to achieve the required objectives. The hypercube vertex method achieved the highest detection rate and lowest false alarm rate in all case studies. The hypercube vertex method originates from this project and is the recommended method for all real-valued systems, at least those with a small number of variables. It is found that, in some cases, AIS are capable of perfect classification, where 100% of anomalous events are identified and no false alarms are generated. Noise has, as expected, some effect on the detection capability in all case studies. The computational cost of the various methods is compared, concluding that the hypercube vertex method has a higher cost than the other methods researched.
This increased computational cost does not, however, exceed reasonable limits, so the hypercube vertex method nonetheless remains the chosen method. The thesis concludes by considering AIS performance against the comparative criteria for diagnostic methods. It is found that AIS compare well to current methods, and that some of those methods' limitations are indeed solved, and their abilities surpassed, in certain cases. Recommendations are made for future study in the field of AIS, and the use of the hypercube vertex method is highly recommended in real-valued scenarios such as process engineering.
15

McClelland, David. "The conflict between artificial intelligence and adversarial argumentation in the litigatory process." Thesis, Queen's University Belfast, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252314.

Full text
16

Chen, Hsinchun, and Vasant Dhar. "Cognitive Process as a Basis for Intelligent Retrieval Systems Design." Pergamon Press, 1991. http://hdl.handle.net/10150/105912.

Full text
Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
Two studies were conducted to investigate the cognitive processes involved in online document-based information retrieval. These studies led to the development of five computational models of online document retrieval. These models were then incorporated into the design of an "intelligent" document-based retrieval system. Following a discussion of this system, we discuss the broader implications of our research for the design of information retrieval systems.
17

Arana, Landín Ines. "A process-oriented approach to representing and reasoning about naive physiology." Thesis, University of Aberdeen, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259706.

Full text
Abstract:
This thesis presents the RAP system: a Reasoner About Physiology. RAP consists of two modules: knowledge representation and reasoning. The knowledge representation module describes commonsense anatomy and physiology at various levels of abstraction and detail. This representation is broad (it covers several physiological systems), dense (the number of relationships between anatomical and physiological elements is high) and uniform (the same kind of formalism is used to represent anatomy, physiology and their interrelationships). These features lead to a 'natural' representation of naive physiology which is, therefore, easy to understand and use. The reasoning module performs two tasks: 1) it infers the behaviour of a complex physiological process using the behaviours of its subprocesses and the relationships between them; 2) it reasons about the effect of introducing a fault into the model. In order to reason about the behaviour of a complex process, RAP uses a mechanism consisting of the following tasks: (i) understanding how subprocesses behave; (ii) comprehending how these subprocesses affect each other's behaviours; (iii) "aggregating" these behaviours together to obtain the behaviour of the top-level process; (iv) giving that process a temporal context in which to act. RAP uses limited commonsense knowledge about faults to reason about the effect of a fault in the model. It discovers new processes which originate as a consequence of a fault and detects processes which misbehave due to a fault. The effects of both newly generated and misbehaving processes are then propagated throughout the model to obtain the overall effect of the fault. RAP represents and reasons about naive physiology and is a step forward in the development of systems which use commonsense knowledge.
18

Yannopoulos, Georgios. "Modelling the legal process for information applications in law." Thesis, Queen Mary, University of London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362733.

Full text
19

Khabbaz, Beheshti Behrouz. "Plant Information Modelling, Using Artificial Intelligence, for Process Hazard and Risk Analysis Study." Thesis, Curtin University, 2019. http://hdl.handle.net/20.500.11937/75673.

Full text
Abstract:
In this research, the application of Artificial Intelligence and knowledge engineering was investigated across several areas: automation of equipment arrangement design, automation of piping and support design, the use of machine learning to automate stress analysis, and, finally, the use of information modelling to shift the 'field weld locating' activity from the construction phase to the design phase. The results of integrating these methods in case studies, to increase safety over the lifecycle of process plants, were analysed and discussed.
APA, Harvard, Vancouver, ISO, and other styles
20

Wang, Gang [Verfasser], and J. [Akademischer Betreuer] Hubbuch. "Advancing Downstream Process Development - Mechanistic Modeling and Artificial Intelligence / Gang Wang ; Betreuer: J. Hubbuch." Karlsruhe : KIT-Bibliothek, 2018. http://d-nb.info/1163320358/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Saunders, Brandon Scot. "Observational Intelligence: An Overview of Computational Actual Entities and their Use as Agents of Artificial Intelligence." VCU Scholars Compass, 2007. http://scholarscompass.vcu.edu/etd/1235.

Full text
Abstract:
This thesis' focus is on the use of Alfred North Whitehead's concept of Actual Entities as a computational tool for computer science and the introduction of a novel usage of Actual Entities as learning agents. Actual Entities are vector based agents that interact within their environment through a process called prehension. It is the combined effect of multiple Actual Entities working within a Colony of Prehending Entities that produces emergent, intelligent behavior. It is not always the case that prehension functions for desired behavior are known beforehand and frequently the functions are too complex to construct by hand. Through the use of Artificial Neural Networks and a technique called Observational Intelligence, Actual Entities can extract the characteristic behavior of observable phenomena. This behavior is then converted into a functional form and generalized to provide a knowledge base for how an observed object interacts with its surroundings.
APA, Harvard, Vancouver, ISO, and other styles
22

Gulesin, Mahmut. "An intelligent knowledge based process planning and fixturing system using the step standard." Thesis, Coventry University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335330.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Moral, Hakan. "Modeling Of Activated Sludge Process By Using Artificial Neural Networks." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12605733/index.pdf.

Full text
Abstract:
Current activated sludge models are deterministic in character and are constructed on the basis of fundamental biokinetics. However, calibrating these models is extremely time consuming and laborious. In this study, one of the artificial intelligence techniques, Artificial Neural Networks (ANNs), was used as an easy-to-calibrate and user-friendly computer model. These models can be used not only directly as a substitute for deterministic models but can also be plugged into the system as error predictors. Three systems were modeled by using ANN models. Initially, a hypothetical wastewater treatment plant constructed in the Simulation of Single-Sludge Processes for Carbon Oxidation, Nitrification & Denitrification (SSSP) program, which is an implementation of Activated Sludge Model No 1 (ASM1), was used as the source of input and output data. The other systems were actual treatment plants: the Ankara Central Wastewater Treatment Plant (ACWTP) and the İskenderun Wastewater Treatment Plant (IskWTP). A sensitivity analysis was applied to the hypothetical plant for both the simulation results obtained by the SSSP program and the developed ANN model. Sensitivity tests carried out by comparing the responses of the two models indicated parallel sensitivities. In hypothetical WWTP modeling, the highest correlation coefficient obtained with the ANN model versus SSSP was about 0.980. By using actual data from IskWTP, the best fit obtained by the ANN model yielded an R value of 0.795, which can be considered very high for such noisy data. Similarly, for ACWTP the R value obtained was 0.688, for which the accuracy of fit is debatable.
APA, Harvard, Vancouver, ISO, and other styles
24

Zegers, Pablo. "Some new results on the architecture, training process, and estimation error bounds for learning machines." Diss., The University of Arizona, 2002. http://hdl.handle.net/10150/280015.

Full text
Abstract:
The importance of the problem of designing learning machines rests on the promise of one day delivering systems able to learn complex behavior and assist us in a myriad of situations. Despite its long standing in the scientific arena, progress towards producing useful machines is hampered by many unanswered questions. This dissertation makes some important contributions towards this overall goal. In particular, it focuses on providing a practical solution for building systems that can learn and modulate dynamic behavior, on presenting an incremental learning scheme that makes it possible to check whether a learning machine has attained generalization capability just from studying its adaptation behavior, and on studying a bound that limits the learning capacity of any machine. The first contribution develops a Dynamic Neural Network (DNN), a hybrid architecture that employs a Recurrent Neural Network (RNN) in cascade with a Non-Recurrent Neural Network (NRNN). The RNN is in charge of generating a simple limit cycle while the NRNN is devoted to reshaping the limit cycle into the desired spatio-temporal behavior. The main advantage of this architecture is the simplicity of training, which results from the decomposition of the overall training task into independent spatial and temporal learning subtasks; this in turn reduces the overall training complexity to that of training a feedforward neural network alone. The second contribution of this dissertation presents an incremental learning procedure that makes it possible to determine whether a learning system has generalized or not. The procedure employs some concepts from statistical learning theory to prove that when a system generalizes, the probability that it will encounter unexpected situations decreases exponentially to zero. 
The third contribution uses the well-known fact that the problem underlying the design of a learning machine corresponds to an estimation problem and is thus bounded by the Fisher information quantity. Given how important it is to know more about this bound, a series of properties of the Fisher information are presented.
APA, Harvard, Vancouver, ISO, and other styles
25

Semrau, Penelope. "An analysis of cognitive theories in artificial intelligence and psychology in relation to the qualitative process of emotion /." The Ohio State University, 1987. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487329662146988.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Mendling, Jan, Gero Decker, Richard Hull, Hajo A. Reijers, and Ingo Weber. "How do Machine Learning, Robotic Process Automation, and Blockchains Affect the Human Factor in Business Process Management?" Association for Information Systems, 2018. http://epub.wu.ac.at/6557/1/CAIS%2D2018%2DMendling_et_al%2DHow_do_Machine_Learning%2C_Robotic_Process_Automation_and_Blockchains_affect_the_Human_Factor_in_Business_Process_Management.pdf.

Full text
Abstract:
This paper summarizes a panel discussion at the 15th International Conference on Business Process Management. The panel discussed to what extent the emergence of recent technologies including machine learning, robotic process automation, and blockchain will reduce the human factor in business process management. The panel discussion took place on 14 September 2017, at the Universitat Politècnica de Catalunya in Barcelona, Spain. Jan Mendling served as chair; Gero Decker, Richard Hull, Hajo Reijers, and Ingo Weber participated as panelists. The discussions emphasized the impact of emerging technologies at the task level and the coordination level. The major challenges that the panel identified relate to employment, technology acceptance, ethics, customer experience, job design, social integration, and regulation.
APA, Harvard, Vancouver, ISO, and other styles
27

Mendling, Jan, Gero Decker, Richard Hull, Hajo A. Reijers, and Ingo Weber. "How do Machine Learning, Robotic Process Automation, and Blockchains Affect the Human Factor in Business Process Management?" Association for Information Systems, 2018. http://epub.wu.ac.at/6383/1/CAIS%2D2018%2DMendling_et_al%2DHow_do_Machine_Learning%2C_Robotic_Process_Automation_and_Blockchains_affect_the_Human_Factor_in_Business_Process_Management.pdf.

Full text
Abstract:
This paper summarizes a panel discussion at the 15th International Conference on Business Process Management. The panel discussed to what extent the emergence of recent technologies including machine learning, robotic process automation, and blockchain will reduce the human factor in business process management. The panel discussion took place on 14 September 2017, at the Universitat Politècnica de Catalunya in Barcelona, Spain. Jan Mendling served as chair; Gero Decker, Richard Hull, Hajo Reijers, and Ingo Weber participated as panelists. The discussions emphasized the impact of emerging technologies at the task level and the coordination level. The major challenges that the panel identified relate to employment, technology acceptance, ethics, customer experience, job design, social integration, and regulation.
APA, Harvard, Vancouver, ISO, and other styles
28

Panchapakesan, Ashwin. "Optimizing Shipping Container Damage Prediction and Maritime Vessel Service Time in Commercial Maritime Ports Through High Level Information Fusion." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39593.

Full text
Abstract:
The overwhelming majority of global trade is executed over maritime infrastructure, and port-side optimization problems are significant given that commercial maritime ports are hubs at which sea trade routes and land/rail trade routes converge. Therefore, optimizing maritime operations brings the promise of improvements with global impact. Major performance bottlenecks in the maritime trade process include the handling of insurance claims on shipping containers and vessel service time at port. The former has high input dimensionality, including data pertaining to environmental and human attributes as well as operational attributes such as the weight balance of a shipping container, and therefore lends itself to multiple classification methodologies, many of which are explored in this work. In order to compare their performance, a first-of-its-kind dataset was developed with carefully curated attributes. The performance of these methodologies was improved by exploring metalearning techniques applied to the collective output of a subset of these classifiers. The latter problem is formulated as a schedule optimization and solved with a fuzzy system that controls port-side resource deployment, whose parameters are optimized by a multi-objective evolutionary algorithm that outperforms current industry practice (as mined from real-world data). This methodology has been applied to multiple ports across the globe to demonstrate its generalizability, and it improves upon current industry practice even with synthetically increased vessel traffic.
APA, Harvard, Vancouver, ISO, and other styles
29

Pokharel, Gaurab. "Increasing the Value of Information During Planning in Uncertain Environments." Oberlin College Honors Theses / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1624976272271825.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Hunter, Stephen Leon. "Non-linear neurocontrol of chemical processes using reinforcement learning." Thesis, Stellenbosch : Stellenbosch University, 2011. http://hdl.handle.net/10019.1/17871.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2011. The difficulties of chemical process control using plain Proportional-Integral-Derivative (PID) methods include interaction between manipulated and controlled process variables as well as difficulty in tuning. One way of eliminating these problems is to use a centralized non-linear control solution such as a feed-forward neural network. While many ways exist to train such neurocontrollers, one of the promising active research areas is reinforcement learning. The biggest drawing card of the reinforcement learning paradigm for neurocontrol is that no expert knowledge of the system is necessary - all control knowledge is gained by interaction with the plant model. This work uses episodic reinforcement learning to train controllers using two types of process model - non-linear dynamic models and non-linear autoregressive models. The first was termed model-based training and the second data-based learning. By testing the controllers obtained during data-based learning on the original model, the effect of plant-model mismatch, and therefore real-world applicability, could be seen. In addition, two reinforcement learning algorithms, Policy Gradients with Parameter-based Exploration (PGPE) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), were compared to one another. Set point tracking was facilitated by the use of integral error feedback. Two control case studies were conducted to test the effectiveness of each type of controller and algorithm, and allowed comparison to multi-loop feedback control. The first is a ball mill grinding circuit pilot plant model with 5 degrees of freedom, and the second a 41-stage binary distillation column with 7 degrees of freedom. 
The ball mill case study showed that centralized non-linear feedback control using neural networks can improve on even highly optimized PI control methods, with the proposed integral error-feedback neural network architecture working very well at tracking the set point. CMA-ES produced better results than PGPE, being able to find up to 20% better solutions. When compared to PI control, the ball mill neurocontrol solution had a 6% higher productivity and showed more than 10% improvement of the product size set point tracking. In the case of some plant-model mismatch (88% fit), the data-based ball mill neurocontroller still achieved better set point tracking and disturbance handling than PI control, but productivity did not improve. The distillation case study showed less positive results. While reinforcement learning was able to learn successful controllers in the case of no plant-model mismatch and outperform LV- and (L/D)(V/B)-based PI control, the best-performing neurocontroller still performed up to 20% worse than DB-based PI control. Once again, CMA-ES showed better performance than PGPE, with the latter even failing to find feasible control solutions. While on-line learning in the ball mill study was made impossible due to stability issues, on-line adaptation in the distillation case study succeeded with the use of a partial neurocontroller. The learner was able to achieve, with a success rate of just over 50%, greater than 95% purity in both distillate and bottoms within 2,000 minutes of interacting with the plant. Overall, reinforcement learning showed that, when there is sufficient room for improvement over existing control implementations, it can make for a very good replacement control solution even when no model is available. 
Future work should focus on evaluating these techniques in lab-scale control studies.
APA, Harvard, Vancouver, ISO, and other styles
31

Sampaio, Daniel Julien Barros da Silva. "Automação do monitoramento da qualidade do processo de solda a ponto resistiva." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-17082010-113505/.

Full text
Abstract:
In this work, a non-destructive, non-invasive, individualized, real-time system has been proposed and evaluated to monitor, in an industrial environment, the quality of welds produced by the resistance spot welding process (RSWP). This system is able to reduce or eliminate the need for destructive tests, leading to cost reduction and increased productivity. The monitoring system is based on pattern recognition with multilayer Perceptron artificial neural networks (ANNs). The process features used as input to the ANN are the adjusted parameters of a parametric mathematical model created to reflect the fundamental properties of the process variable that is measurable in real time, in this work the dynamic resistance curve. The adjusted model parameter values are also related to the process states and conditions, so that it is possible to identify the causes of detected bad quality. In order to evaluate and validate the proposed system, real data obtained in the production of electric contacts by RSWP were used. The results show that the proposed system is capable of properly monitoring the quality of the investigated process, with a mean square error of 16.5 N in the estimation of the shear force supported by the weld in the worst case. The system also proved able to identify the cause of welds whose estimated quality was considered low, with a reliability of more than 97%. The proposed system contains no specificities of any particular production process and therefore has the potential to be applied to processes other than RSWP.
APA, Harvard, Vancouver, ISO, and other styles
32

Zanuttini, Bruno. "Computational Aspects of Learning, Reasoning, and Deciding." Habilitation à diriger des recherches, Université de Caen, 2011. http://tel.archives-ouvertes.fr/tel-00995250.

Full text
Abstract:
We present results and research projects about the computational aspects of classical problems in Artificial Intelligence. We are interested in the setting of agents able to describe their environment through a possibly huge number of Boolean descriptors, and to act upon this environment. The typical applications of this kind of study are the design of autonomous robots (for exploring unknown zones, for instance) or of software assistants (for scheduling, for instance). The ultimate goal of research in this domain is the design of agents able to learn autonomously by interacting with their environment (including human users), able to reason in order to produce new pieces of knowledge and to explain observed phenomena, and, finally, able to decide which action to take at any moment, in a rational fashion. Ideally, such agents will be fast and efficient as soon as they start to interact with their environment, they will improve their behavior as time goes by, and they will be able to communicate naturally with humans. Among the numerous research questions raised by these objectives, we are especially interested in concept and preference learning, in reinforcement learning, in planning, and in some underlying problems in complexity theory. Particular attention is paid to interaction with humans and to the huge numbers of environment descriptors that are necessary in real-world applications.
APA, Harvard, Vancouver, ISO, and other styles
33

Berggren, Andreas, Martin Gunnarsson, and Johannes Wallin. "Artificial intelligence as a decision support system in property development and facility management." Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-25535.

Full text
Abstract:
The construction industry has long been hesitant to apply new technologies. In property development, the industry relies heavily on employees bringing experience from one project to another. These employees learn to manage risks in connection with the acquisition of land, but when they retire, the knowledge disappears. An AI-based decision-support system that takes the risks and the market into account when acquiring land can learn from each project and bring this knowledge into future projects. In facility management, artificial intelligence could increase the efficiency of staff allocation in ongoing operations. The purpose of the study is to analyse how companies in the real estate industry can improve their decision-making with the help of AI in property development and property management. In this study, two case studies of two different players in the real estate industry have been performed. One player, Bygg-Fast, represents property development, and the other, VGR, represents facility management. The study is based on interviews, discussions, and collected data. By mapping and then quantifying the risks and market indicators that are input data in the process, a basis can be created. The data can be used for a model that lays the foundation for an AI-based decision-support system that will help the property developer make calculated decisions in the land acquisition process. By mapping what the flow through a property looks like, measuring points can be set out to analyse how long activities take in the specific business. These measured values provide a collection of data that makes it easier to plan the activities conducted in the property. A more efficient flow can be achieved by visualizing the entire process so that staff can be allocated to the right part of the flow. By being flexible and able to re-plan the business quickly if planning is disrupted, a high level of efficiency can be achieved. This could be done by an AI-based decision-support system that simulates alternative day plans.
APA, Harvard, Vancouver, ISO, and other styles
34

Ernsberger, Timothy S. "Integrating Deterministic Planning and Reinforcement Learning for Complex Sequential Decision Making." Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1354813154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Soutter, James. "An integrated architecture for operating procedure synthesis." Thesis, Loughborough University, 1996. https://dspace.lboro.ac.uk/2134/7436.

Full text
Abstract:
The task of creating the operating procedures for a processing plant is time consuming and requires the involvement of key members of the design team. As one of the consequences, the writing of operating procedures is often put off till the final stages of the design process. However, some operability problems will remain hidden in the design until the operating procedure is considered. These problems are expensive to fix because they require undoing some of the design decisions that have already been made. This thesis reports on research into the automatic creation of operating procedures, a field of research sometimes called Operating Procedure Synthesis (OPS). One motivation for OPS research is to develop a tool that can detect operability problems in the design of a plant and thus allow operability problems to be considered earlier in the design process reducing the cost of resolving these problems. Previous OPS systems are generally based around single techniques such as mixed integer linear programming. All the techniques that have been examined in the past are strong in some aspects of OPS and weak in some other aspects. There is no single technique that is strong in all areas of OPS. As a result, no previous OPS system is able to generate all the procedures used as examples in the OPS literature. This thesis presents a new approach to OPS. In this approach, OPS is viewed as a set of distinct but related subtasks. Three subtasks have been identified and examined in this work, namely planning, safety and valve sequencing. Algorithms have been developed to address each of these three subtasks individually. These algorithms have been integrated to form a single OPS system by using a common representation of the operating procedure to be created.
APA, Harvard, Vancouver, ISO, and other styles
36

Sànchez-Ferreres, Josep. "Bridging the gap between textual and formal business process representations." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/673493.

Full text
Abstract:
In the era of digital transformation, an increasing number of organizations are starting to think in terms of business processes. Processes are at the very heart of each business, and must be understood and carried out by a wide range of actors, from both technical and non-technical backgrounds alike. When embracing digital transformation practices, there is a need for all involved parties to be aware of the underlying business processes in an organization. However, the representational complexity and biases of the state-of-the-art modeling notations pose a challenge to understandability. On the other hand, plain-language representations, accessible by nature and easily understood by everyone, are often frowned upon by technical specialists due to their ambiguity. The aim of this thesis is precisely to bridge this gap: between the world of technical, formal languages and the world of simpler, accessible natural languages. Structured as an article compendium, this thesis presents four main contributions that address specific problems in the intersection between the fields of natural language processing and business process management.
APA, Harvard, Vancouver, ISO, and other styles
37

Woodward, Mark P. "Framing Human-Robot Task Communication as a Partially Observable Markov Decision Process." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10188.

Full text
Abstract:
As general-purpose robots become more capable, pre-programming of all tasks at the factory will become less practical. We would like for non-technical human owners to be able to communicate, through interaction with their robot, the details of a new task; I call this interaction "task communication". During task communication the robot must infer the details of the task from unstructured human signals, and it must choose actions that facilitate this inference. In this dissertation I propose the use of a partially observable Markov decision process (POMDP) for representing the task communication problem: with the unobservable task details and unobservable intentions of the human teacher captured in the state, with all signals from the human represented as observations, and with the cost function chosen to penalize uncertainty. This dissertation presents the framework, works through an example of framing task communication as a POMDP, and presents results from a user experiment where subjects communicated a task to a POMDP-controlled virtual robot and to a human-controlled virtual robot. The task communicated in the experiment consisted of a single object movement, and the communication in the experiment was limited to binary approval signals from the teacher. This dissertation makes three contributions: 1) It frames human-robot task communication as a POMDP, a widely used framework. This enables the leveraging of techniques developed for other problems framed as POMDPs. 2) It provides an example of framing a task communication problem as a POMDP. 3) It validates the framework through results from a user experiment. The results suggest that the proposed POMDP framework produces robots that are robust to teacher error, that can accurately infer task details, and that are perceived to be intelligent.<br>Engineering and Applied Sciences
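The belief-update step at the core of such a POMDP controller can be illustrated with a toy Bayesian filter. The sketch below is a simplified illustration under assumed parameters (three candidate objects, a 0.9 teacher-reliability probability), not Woodward's actual model: the robot moves an object, observes a binary approval signal, and updates its belief over which object the teacher intended.

```python
def update_belief(belief, action, approval, p_correct=0.9):
    """Bayes update of the belief over which object the teacher wants moved,
    after the robot moves object `action` and observes a binary approval.
    Observation model (assumed): P(approve | action == target) = p_correct,
    and P(approve | action != target) = 1 - p_correct."""
    posterior = []
    for obj, prior in enumerate(belief):
        likelihood = p_correct if (obj == action) == approval else 1 - p_correct
        posterior.append(prior * likelihood)
    z = sum(posterior)  # normalizing constant
    return [p / z for p in posterior]

# Uniform prior over three candidate objects; teacher approves moving object 0.
belief = [1 / 3, 1 / 3, 1 / 3]
belief = update_belief(belief, action=0, approval=True)
print(belief)  # belief mass shifts toward object 0
```

A POMDP controller would additionally choose the next action to reduce the entropy of this belief, which is what the dissertation's uncertainty-penalizing cost function encourages.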
APA, Harvard, Vancouver, ISO, and other styles
38

Cyvoct, Alexandra, and Shirin Fathi. "Artificial Intelligence in Business-to-Business Sales Processes : The impact on the sales representatives and management implications." Thesis, Linköpings universitet, Företagsekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157988.

Full text
Abstract:
Background: Sales representatives in B2B companies are experiencing several changes in their environment, which have already altered the activities they perform. In order to meet new customer needs, Artificial Intelligence (AI) enables effective use of the large amount of complex data that is available, known as Big Data. AI is developing human-like intelligence and is expected to impact occupational roles while threatening to automate tasks typically performed by humans. Previous technologies have already impacted sales representatives in the performance of their sales activities; however, it is still uncertain how AI will impact and benefit them. Previous empirical findings and the lack of studies centered on the individual impact of AI confirm the need for more academic research. Purpose: The aim of this research is to explore how the implementation of Artificial Intelligence and the usage of Big Data in Business-to-Business selling processes are impacting sales representatives in terms of performed activities. Further, the aim is also to explore the management of individuals during the implementation of AI. Methodology: This qualitative study is based on a realistic perspective with an inductive research approach. The empirical data has been collected through semi-structured interviews with six AI providers and two consulting firms with proven experience of working with AI and sales in B2B companies. Conclusion: AI is characterized by its adaptive capability as well as its ability to process and combine large amounts of real-time, online and historical data. As a result, the selling process is constantly provided with more accurate, faster and original insights. Through the analytical capacity of AI, sales representatives gain extensive knowledge about the customer and the external world. AI also simplifies the creation and maintenance of long-lasting customer relationships by providing specific and valuable content.
Administrative tasks and non-sales activities can also be automated through the usage of AI, which enables sales representatives to focus on their core tasks, for instance relationship building and value-adding activities. The threat of automation and job elimination should be reframed as an opportunity to augment human capabilities; adopting this approach strongly emphasizes the importance of human-machine collaboration. In order to increase the willingness to change working procedures at the individual level, communication during the process of change should be centered on creating a positive perception and understanding of AI. It is also important to create trust in AI and promote a data-driven culture in order to ensure systematic usage of the system.
APA, Harvard, Vancouver, ISO, and other styles
39

Lesage, Laurent. "Data analysis for insurance : recommendation system based on a Multivariate Hawkes Process." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0048.

Full text
Abstract:
The objective of the thesis is to build a recommendation system for insurance. Observing the behavior and evolution of customers in the insurance context suggests that customers modify their insurance cover when a significant event happens in their life. In order to take into account the influence of life events (e.g. marriage, birth, change of job) on customers' selection of insurance cover, we model the recommendation system with a Multivariate Hawkes Process, which includes several specific features aimed at computing relevant recommendations for the customers of a Luxembourgish insurance company. Several of these features are intended to provide a personalized background intensity for each customer thanks to a Machine Learning model, to use triggering functions suited to insurance data, or to overcome flaws in real-world data by adding a specific penalization term to the objective function. We define a complete framework of Multivariate Hawkes Processes with a Gamma density excitation function (i.e. estimation, simulation, goodness-of-fit) and we demonstrate some mathematical properties (i.e. expectation, variance) of the transient regime of the process. Our recommendation system has been back-tested over a full year.
Observations of the model parameters and results from this back-test show that taking life events into account through a Multivariate Hawkes Process allows us to improve the accuracy of recommendations significantly. The thesis is presented in 4 chapters. Chapter 1 explains how the background intensity of the Multivariate Hawkes Process is computed with a Machine Learning algorithm, so that each customer receives a personalized recommendation. Chapter 1 presents an extended version of the method introduced in "A Recommendation System For Car Insurance", in which the method is used to make the algorithm explainable. Chapter 2 presents a Multivariate Hawkes Processes framework for computing the dependency between the propensity to accept a recommendation and the occurrence of life events: definitions, notations, simulation, estimation, properties, etc. Chapter 3 presents several results of the recommendation system: estimated parameters of the model, effects of contributions, back-testing of the model's accuracy, etc. Chapter 4 presents the implementation of our work as an R package.
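The Gamma-excitation Hawkes model summarized above can be sketched in a few lines: the conditional intensity of each dimension is its background rate plus Gamma-kernel contributions from all past events. This is a minimal illustrative implementation, not the thesis's estimated model; the two dimensions ("life event", "cover change"), the parameter values, and the fixed shape/rate are made-up examples.

```python
import math

def gamma_pdf(x, shape, rate):
    """Gamma density used as the Hawkes excitation kernel."""
    if x <= 0:
        return 0.0
    return (rate ** shape) * (x ** (shape - 1)) * math.exp(-rate * x) / math.gamma(shape)

def hawkes_intensity(t, history, mu, alpha, shape=2.0, rate=1.0):
    """Conditional intensity of each dimension of a multivariate Hawkes
    process at time t, given the past event times of every dimension.

    history[j]  : sorted list of past event times of dimension j
    mu[i]       : background intensity of dimension i
    alpha[i][j] : excitation of dimension i by events of dimension j
    """
    d = len(mu)
    lam = []
    for i in range(d):
        total = mu[i]
        for j in range(d):
            for tk in history[j]:
                if tk < t:
                    total += alpha[i][j] * gamma_pdf(t - tk, shape, rate)
        lam.append(total)
    return lam

# Toy example: life events (dim 0) excite cover changes (dim 1), not vice versa.
mu = [0.2, 0.1]
alpha = [[0.0, 0.0], [0.5, 0.0]]
history = [[1.0, 2.5], []]  # two past life events, no past cover changes
print(hawkes_intensity(3.0, history, mu, alpha))
```

In the thesis the background term mu is itself personalized per customer by a Machine Learning model rather than being a constant as here.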
APA, Harvard, Vancouver, ISO, and other styles
40

Lundgren, Patric, and Christofer Wiechert. "Artificiell intelligens i rekryteringsprocessen : En kvalitativ studie om rekryterares perception." Thesis, Högskolan i Skövde, Institutionen för handel och företagande, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-17010.

Full text
Abstract:
The phenomenon of Artificial Intelligence (AI) is a trending technology that is applied in several different areas of society. In HR work, the recruitment process can be based on AI technology and large parts of it can be automated. Previous research has shown that both selection and matching of candidates are areas where companies can automate in order to make more efficient use of their time. AI technology is not yet a well-established phenomenon in Swedish companies, and therefore recruiters' perception of its use has been studied. The purpose of the study is to increase understanding of the use of AI technology in the recruitment process of staffing agencies, as there are large volumes of job seekers in the staffing industry and their main business is staffing and recruitment. The theoretical frame is based on two different approaches to recruitment: the psychometric approach, which is an objective approach, and the social approach, which is a subjective approach, to the design of the recruitment process. The theoretical framework is also based on a research summary on AI technology in order to compare previous research with the recruiters' insights in the analysis. The authors have developed their own analysis model to apply the theoretical frame to the empirical material. To create a deeper understanding of recruiters' perception of the use of AI in the recruitment process, the study is based on qualitative interviews with recruiters at staffing companies.
In order to create variation among the respondents, the authors of the study conducted interviews with nine respondents at seven different staffing companies. The empirical data has been analyzed using the authors' analysis model. The results suggest that the recruitment process today is not adapted to the use of AI and that, above all, the work with the requirement profile needs to be developed in order for AI to reach its maximum usefulness. The conclusion of the study is that a development of the two previously presented approaches to the recruitment process will be required. The authors propose the automated approach to recruitment as a third approach, where the initial process is objectified and adapted for AI while the human factors are maintained in subjective interview processes and human decisions.
APA, Harvard, Vancouver, ISO, and other styles
41

R, Santana Estela. "Humanoid Robots and Artificial Intelligence in Aircraft Assembly : A case study and state-of-the-art review." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Industriell organisation och produktion, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-40100.

Full text
Abstract:
Increasing demands, a need for more efficient manufacturing processes and pressure to remain competitive have been driving the development and use of technology in industry since the industrial revolution. The number of operational industrial robots worldwide has been increasing every year and is expected to reach 3 million by 2020. The aerospace industry still faces difficulty when it comes to automation due to the complexity of the products and low production volumes. These aspects make traditional fixed robots very difficult to implement and economically unfeasible, which is why the assembly of aircraft remains mainly manual work. These challenges have led the industry to consider other possibilities for automation, bringing the attention of many companies to humanoid robots. The aim of this thesis was to investigate the applicability of autonomous humanoid robots in aircraft assembly activities by focusing on four domains: mobility, manipulation, instruction supply and human-robot interaction. A case study was made at one workstation of the pre-assembly process of a military aircraft at Saab AB, in order to collect technical requirements for a humanoid robot to perform in this station. In addition, a state-of-the-art literature review was made focusing on commercially available products and ongoing research projects. Crossing the information gathered in the case study with the state-of-the-art review provided an idea of how close humanoid robots are to performing in the aircraft assembly process in each of the four domains. In general, the findings show that the mechanical structure and other hardware are not the biggest challenge when it comes to creating highly autonomous humanoid robots. Physically, such robots already exist, but they mostly lack autonomy and intelligence.
In conclusion, the main challenges concern the degree of intelligence required for autonomous operation, including the capability to reason, learn from experience, make decisions and act on its own, as well as the integration of all the different technologies into one single platform. In many domains, sub-problems have been addressed individually, but complete solutions for tasks such as autonomous indoor navigation and object manipulation are still under development.
APA, Harvard, Vancouver, ISO, and other styles
42

de, Petris Micaela, and Leila Ramirez. "Artificiell intelligens inom rekryteringsprocessen - Med fokus på rationalitet, etik och användningsområden." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20029.

Full text
Abstract:
Artificial Intelligence (AI) is widely spoken about today. There are several different uses for it: image, sound and face recognition, personal assistants, robotics and more. AI has recently begun to be implemented within HR work and has come to influence the traditional recruitment process. Recruitment work is today perceived as problematic; 72% of the business sector report that they find it difficult to find the right workforce. Employers also state that the administrative work is very time-consuming. There is also a problem concerning rationality and ethics, since decision-making is often affected by human bias. Our study shows that the implementation of AI can result in reduced costs while saving time for staff, leaving more time to develop personal relationships with new employees. One of the interviews shows that the implementation of AI in recruitment can save recruiters 10 hours a week.
The study also shows that it is common to believe that recruitment becomes more fair and ethically correct when AI is implemented; however, there are researchers who believe that this is a great challenge. This qualitative study aims to examine how Swedish companies use AI in their recruitment process and whether AI can change the work within the recruitment process regarding rationality and ethics. The information was gathered through previous research as well as interviews in order to answer the research questions.
APA, Harvard, Vancouver, ISO, and other styles
43

Kobeissi, Meriana. "A conversational AI Framework for Cognitive Process Analysis." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAS025.

Full text
Abstract:
Business processes (BP) are the foundational pillars of organizations, encapsulating a range of structured activities aimed at fulfilling distinct organizational objectives. These processes, characterized by a plethora of tasks, interactions, and workflows, offer a structured methodology for overseeing crucial operations across diverse sectors. A pivotal insight for organizations has been the discernment of the profound value inherent in the data produced during these processes. Process analysis, a specialized discipline, ventures into these data logs, facilitating a deeper comprehension and enhancement of BPs. This analysis can be categorized into two perspectives: instance-level, which focuses on individual process executions, and process-level, which examines the overarching process. However, applying process analysis in practice poses challenges for users, involving the need to access data, navigate low-level APIs, and employ tool-dependent methods. Real-world application often encounters complexities and user-centric obstacles. Specifically, instance-level analysis demands that users access stored process execution data, a task that can be intricate for business professionals due to the requirement of mastering complex query languages like SQL and Cypher. Conversely, process-level analysis involves the use of methods and algorithms that harness process execution data extracted from information systems. These methodologies collectively fall under the umbrella of process mining techniques. The application of process mining confronts analysts with the intricate task of method selection, which involves sifting through unstructured method descriptions.
Additionally, the application of process mining methods depends on specific tools and necessitates a certain level of technical expertise. To address these challenges, this thesis introduces AI-driven solutions, with a focus on integrating cognitive capabilities into process analysis to facilitate analysis tasks at both the instance level and the process level for all users. The primary objectives are twofold. First, to enhance the accessibility of process execution data by creating an interface capable of automatically constructing the corresponding database query from natural language; this is complemented by proposing a suitable storage technique and query language that the interface is designed around. In this regard, we introduce a graph meta-model based on the Labeled Property Graph (LPG) for efficient data storage. Second, to streamline the discovery and accessibility of process mining techniques, we present a service-oriented architecture. This architecture comprises three core components: an LPG meta-model describing process mining methods, a service-oriented REST API design tailored for these methods, and a component that matches user requirements expressed in natural language with appropriate services. For the validation of our graph meta-model, we used two publicly accessible process datasets available in both CSV and OCEL formats. These datasets were instrumental in evaluating the performance of our natural language querying pipeline. We gathered natural language queries from external users and produced additional ones through paraphrasing tools. Our service-oriented framework was assessed using natural language queries specifically designed for process mining service descriptions. Additionally, we carried out a use-case study with external participants to evaluate the user experience and gather feedback. We publicly provide the evaluation results to ensure reproducibility in the studied area.
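The idea of translating natural language questions into graph queries over an LPG event store can be illustrated with a toy template matcher. Everything below is hypothetical: the node labels (`:Case`, `:Event`), the trigger phrases, and the Cypher templates are illustrative stand-ins, not the pipeline developed in the thesis, which constructs queries far more flexibly.

```python
# Hypothetical NL-to-Cypher templates over an assumed event graph shaped as
# (:Case)-[:HAS_EVENT]->(:Event {activity, timestamp}).
TEMPLATES = {
    "how many cases": "MATCH (c:Case) RETURN count(c)",
    "events of case": "MATCH (c:Case {id: $id})-[:HAS_EVENT]->(e:Event) "
                      "RETURN e.activity ORDER BY e.timestamp",
}

def nl_to_cypher(question):
    """Return the Cypher template of the first trigger phrase found in the
    question, or None when no template applies."""
    q = question.lower()
    for trigger, cypher in TEMPLATES.items():
        if trigger in q:
            return cypher
    return None

print(nl_to_cypher("How many cases were opened last year?"))
```

A real pipeline replaces this brittle keyword matching with learned semantic parsing, but the interface contract is the same: natural language in, a database query against the LPG meta-model out.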
APA, Harvard, Vancouver, ISO, and other styles
44

Johansson, Jennifer, and Senja Herranen. "The application of Artificial Intelligence (AI) in Human Resource Management: Current state of AI and its impact on the traditional recruitment process." Thesis, Högskolan i Jönköping, Internationella Handelshögskolan, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-44323.

Full text
Abstract:
Background: Due to globalization, the world is constantly becoming more technology-driven, which means organizations have to stay up to date in order to remain competitive. Human Resource Management (HRM) is more important than ever, especially with a focus on the recruitment of new employees who will bring skills and knowledge to an organization. With technological advances also comes the opportunity to streamline activities that previously had to be carried out by humans. Therefore, it is of the highest importance to consider and evaluate the impact technology might have on the area of HRM and specifically the recruitment process. Purpose: The purpose of this thesis is to research the implications that technological advancements, in particular Artificial Intelligence (AI), have for the recruitment process. It aims to investigate where AI can be implemented in the traditional recruitment process and possibly make the process more effective, as well as what the implications of having AI within recruitment would be. Method: This thesis uses a qualitative study with semi-structured interviews conducted with eight international companies from all over the world. It is viewed through an interpretivist research philosophy with an inductive research approach. Conclusion: The results show that the area of AI in recruitment is relatively new and there are not many companies that utilize AI in all parts of their recruitment process. The most suitable parts of traditional recruitment in which to implement AI include activities such as pre-selection, communication with candidates, and sending out recruitment results to applicants. The main benefits of AI were seen as improved speed and quality and the elimination of routine tasks, while the major challenge was seen as the companies' overall readiness for new technologies.
APA, Harvard, Vancouver, ISO, and other styles
45

Prado, Quintana Elvis David, and Alvarado Wiliam Eduardo Valdivieso. "Propuesta de transformación digital y mejora del gobierno de TI para el organismo estatal encargado de la identificación de los peruanos." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2019. http://hdl.handle.net/10757/628218.

Full text
Abstract:
This thesis aims to develop a Digital Transformation proposal for the identification registration process, based on the improvement of IT processes through the COBIT PAM guidelines for the state body responsible for the identification of Peruvians. To achieve this purpose, the institution's strategic objectives and their relationship with IT processes will be identified. Subsequently, the processes to be reviewed under the COBIT 5 Process Assessment Model will be determined. Based on the results obtained, an improvement plan will be developed whose objective will be to meet the criteria necessary to ensure that the IT processes are consolidated. Finally, with solid IT support in place, a digital transformation proposal will be made for the identification registration process that, with the help of a virtual assistant based on artificial intelligence, will provide an omnichannel service to citizens.
APA, Harvard, Vancouver, ISO, and other styles
46

Gaffet, Alexandre. "Machine learning approaches for automotive production process diagnosis." Electronic Thesis or Diss., Toulouse, INSA, 2023. http://www.theses.fr/2023ISAT0060.

Full text
Abstract:
Technological advances have enabled the adoption of new methods for collecting and storing large amounts of data in many companies through cloud computing. This large volume of data is referred to as Big Data, and some view its collection as a new form of "black gold". The rise of Big Data is also linked to the fourth industrial revolution, known as Industry 4.0, which involves the digitalization of companies and the use of data to reorganize production processes. Data collected in industry can contain various types of information, such as information about manufacturing materials, the state of manufacturing processes, maintenance records, or test results from inspection equipment. One way to derive value from this data is to use artificial intelligence, machine learning, or data mining methods to determine the health state of the systems it is linked to. This thesis focuses on Condition-Based Maintenance (CBM), which is based on the health state of equipment. The online diagnosis task, which involves detecting faults and identifying their causes, is integrated into this process and can significantly improve the efficiency of the production line by increasing product quality and reducing downtime of production equipment. Diagnosis methods can be particularly valuable in the automotive industry, due to the increasing complexity of vehicles and the corresponding complexity of the production lines required to manufacture them.
This thesis was conducted in collaboration between Vitesco Technologies, a global supplier of vehicle powertrains, and LAAS (Laboratory for Analysis and Architecture of Systems), a laboratory of the CNRS (French National Centre for Scientific Research). The thesis first presents a data-based health management method in a univariate framework, tailored to the industrial constraints of the specific test equipment used in the SMT (Surface Mount Technology) production line. It addresses the constraints of the case study: a very large amount of unlabeled data, from which it is difficult to extract coherent time series. The method is based on clustering using Gaussian Mixture Models (GMM) and uses process capability as a health indicator. The results show that the method is effective at detecting critical cases and improving maintenance decisions for faults related to equipment and product batches. To further improve the performance of the method and confirm its applicability in an online real-world scenario, an additional anomaly detection step using Extreme Value Theory (EVT) is applied upstream of the method, and an isolation step is carried out at the end of the process. A decomposition method based on spectral clustering is then proposed for managing data in a multivariate framework. The approach involves grouping similar data into blocks and applying a fault detection method to each block. This method is applied to the Tennessee Eastman chemical production control process and to the SMT line test process. The results show that combining the use of blocks with the Local Outlier Factor (LOF) anomaly detection method leads to improved fault detection performance. The thesis also suggests using information from other equipment to improve quality-defect detection and test-process automation with supervised learning methods.
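The capability-based health indicator this abstract refers to can be illustrated with a minimal sketch. The Cpk formula and the 1.33 rule of thumb below are standard process-capability conventions, and the specification limits and sample measurements are invented for illustration — none of these values are taken from the thesis:

```python
import statistics

def cpk(measurements, lower_spec, upper_spec):
    """Process capability index: how comfortably the measured
    distribution sits inside its specification limits."""
    mu = statistics.fmean(measurements)
    sigma = statistics.stdev(measurements)
    return min(upper_spec - mu, mu - lower_spec) / (3 * sigma)

# Hypothetical test readings from one SMT inspection step.
healthy = [5.0, 5.1, 4.9, 5.05, 4.95, 5.02, 4.98]
drifting = [5.6, 5.7, 5.5, 5.65, 5.8, 5.55, 5.75]

# A common rule of thumb flags Cpk below ~1.33 as a degraded process.
print(cpk(healthy, 4.0, 6.0))   # well centered: clearly capable
print(cpk(drifting, 4.0, 6.0))  # drifted toward the upper limit: flagged
```

Grouping readings first (the thesis uses GMM clustering) and computing such an indicator per cluster gives a per-equipment or per-batch health score rather than one global value.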
APA, Harvard, Vancouver, ISO, and other styles
47

Schneider, Homero Mauricio 1953. "Setting up a backtrack-free customisation process for product families = Estabelecendo um processo de customização livre de retrocessos para famílias de produtos." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261226.

Full text
Abstract:
Advisor: Yuzo Iano. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação (Doctorate in Electrical Engineering, Telecommunications and Telematics).
Product family is a key concept in the area of mass customisation. Although the design of a product family is a difficult and challenging task, deriving members of the product family to meet the requirements of individual customers can be a routine design task. In this work, we propose a formal approach to modelling the customisation of product families that achieves this goal. In effect, we set up a theory for the customisation of product families. The approach is based on a knowledge framework for the representation of product families, which combines a generic product structure and a constraint network extended with design functions. The method for deriving members of the product family is a two-stage instantiation process. First, a solution to the constraint network model consistent with the customer requirements is found. Next, this solution is used to transform the generic product structure into a specific structure that corresponds to a member of the product family.
In this work, we prove that if the constraint network model extended with design functions satisfies a few modelling conditions, then finding solutions becomes a backtrack-free process. Although there are other works in the literature that also claim to be backtrack-free, a remarkable fact about our approach is that we achieve this by introducing knowledge about the product family, instead of resorting to computational power and pre-processing as those approaches do. Another remarkable aspect of our approach is that components can be designed as part of the customisation process using the design functions. This implies that it is possible to have an efficient customisation process without compromising the flexibility of the product family. In the conclusion of this work, we argue that our approach can deal with customisation problems outside the product configuration area. Two appendices are also added to the thesis. One is a complete modelling of the Automatic Transfer Switch (ATS) product family using our approach; this example is used in the main body of the thesis to illustrate the concepts being introduced. The other is a computational implementation of the first-stage customisation process of the ATS product family.
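The backtrack-free instantiation idea can be sketched in miniature: if every design function depends only on customer requirements or on variables decided earlier, a single ordered pass derives a family member with no search. The variable names, rules, and values below are invented for illustration and are not taken from the ATS family in the thesis:

```python
# Each product variable is computed by a design function whose inputs are
# already decided (customer requirements or earlier variables), so
# instantiation proceeds in one pass with no backtracking.
design_functions = {
    # variable: (inputs, design function)
    "breaker_rating": (["load_amps"], lambda a: 100 if a <= 80 else 250),
    "controller":     (["breaker_rating"], lambda r: "CTRL-S" if r <= 100 else "CTRL-L"),
    "enclosure":      (["breaker_rating", "outdoor"],
                       lambda r, o: "NEMA-3R" if o else "NEMA-1"),
}

def instantiate(requirements):
    """Derive one product-family member from customer requirements."""
    solution = dict(requirements)
    for var, (inputs, fn) in design_functions.items():  # topological order
        solution[var] = fn(*(solution[i] for i in inputs))
    return solution

spec = instantiate({"load_amps": 120, "outdoor": True})
print(spec["breaker_rating"], spec["controller"], spec["enclosure"])
```

The thesis's contribution is proving which modelling conditions on the constraint network guarantee that such an ordering exists for the whole family; this sketch simply assumes one is given.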
APA, Harvard, Vancouver, ISO, and other styles
48

Rakhshan, Pouri Samaneh. "COMPARATIVE STUDIES OF DIFFUSION MODELS AND ARTIFICIAL NEURAL INTELLIGENCE ON ELECTROCHEMICAL PROCESS OF U AND Zr DISSOLUTIONS IN LiCl-KCl EUTECTIC SALTS." VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/5026.

Full text
Abstract:
The electrorefiner (ER) is the heart of pyroprocessing technology, operating at high temperature (723 K – 773 K) to separate uranium from Experimental Breeder Reactor-II (EBR-II) used metallic fuel. One of the most common electroanalytical methods for determining the thermodynamic and electrochemical behavior of elemental species in the eutectic molten salt LiCl-KCl inside the ER is cyclic voltammetry (CV). Information from CV can be used to estimate diffusion coefficients, apparent standard potentials, transfer coefficients, and the number of electrons transferred. Therefore, predicting the trace of each species from the CV method in the absence of experimental data is important for safeguarding this technology. This work focused on the development of an interactive computational design for the CV method by analyzing available uranium chloride data sets (1 to 10 wt%) in a LiCl-KCl molten salt at 773 K under different scan rates, to help elucidate, improve, and provide robustness in detection analysis. A principled method and a computational code have been developed using electrochemical fundamentals and coupling various variables such as the diffusion coefficients, formal potentials, and process time duration. Although this computational model works moderately well on the reported uranium data sets, it has difficulty tracing zirconium data sets due to their complex CV structures. Therefore, an artificial neural intelligence (ANI) data analysis has been proposed to resolve this issue and to provide a comparative study against the precursor computational model. For this purpose, ANI has been applied to 0.5 to 5 wt% zirconium chloride in LiCl-KCl eutectic molten salt at 773 K under different scan rates to mimic the system and provide simulated current and potential data sets for the unseen data.
In addition, a Graphical User Interface (GUI) was created in the commercial software Matlab to provide a controllable environment for different users. The computational code shows limitations in predicting CV at high concentrations and in capturing the adsorption peaks, where it exhibits dissimilarity. However, the model is able to capture the important anodic and cathodic peaks of the uranium chloride CV, which is the main focus of this study. Furthermore, the developed code is able to calculate the concentration of each species as a function of time. Due to the complexity of the CV of zirconium chloride, the computational model is used to predict the probable reactions occurring at each peak. The resulting study reveals that the reaction at the highest anodic peak corresponds to a combination of 70% Zr/Zr+4 and 30% Zr/Zr+2 for the 1.07 wt% and 2.49 wt% zirconium chloride, and a 30% Zr/Zr+4 and 70% Zr/Zr+2 combination for 4.98 wt% ZrCl4. The proposed alternative ANI method has demonstrated its capability to predict the trend of species in a new situation with high accuracy and without any dissimilarity. The two final structures from the zirconium chloride study with high accuracy (that is, low error) are [9, 15, 10]-18 and [10, 11, 25]-19. These two final structures have been applied to uranium chloride salt experimental data sets to further validate the ANI's ability and concept. Three different fixed data combinations were considered. The results indicate that increasing the number of training data sets does not necessarily improve the prediction process. The outcome of the ANI implementation on the uranium chloride data set shows good prediction with a specific fixed data combination and the [9, 15, 10]-18 structure. Thus, it can be concluded that ANI is a promising method for safeguarding pyroprocessing technology due to its robustness in predicting CV plots with high accuracy.
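Locating anodic and cathodic peaks in a voltammogram, as described in this abstract, reduces to finding local extrema in a current-versus-potential trace. The sketch below uses a synthetic two-peak curve and a simple strict-local-maximum rule; the curve shape, positions, and threshold are illustrative assumptions, not the thesis's data or algorithm:

```python
import math

def find_peaks(ys, min_height=0.0):
    """Indices of strict local maxima at or above a height threshold."""
    return [i for i in range(1, len(ys) - 1)
            if ys[i] > ys[i - 1] and ys[i] > ys[i + 1] and ys[i] >= min_height]

# Synthetic current trace with two Gaussian "anodic" peaks (illustrative only).
xs = [i * 0.01 for i in range(200)]
ys = [math.exp(-((x - 0.5) ** 2) / 0.002)
      + 0.6 * math.exp(-((x - 1.4) ** 2) / 0.002) for x in xs]

peaks = find_peaks(ys, min_height=0.1)
print([round(xs[i], 2) for i in peaks])  # peak positions near 0.5 and 1.4
```

Cathodic peaks would be found the same way on the negated reverse-sweep current; on real CV data a smoothing step would typically precede the extremum search.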
APA, Harvard, Vancouver, ISO, and other styles
49

Huntsinger, Richard A. "Evaluating Forecasting Performance in the Context of Process-Level Decisions: Methods, Computation Platform, and Studies in Residential Electricity Demand Estimation." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/898.

Full text
Abstract:
This dissertation explores how decisions about the forecasting process can affect the evaluation of forecasting performance, in general and in the domain of residential electricity demand estimation. Decisions of interest include those around data sourcing, sampling, clustering, temporal magnification, algorithm selection, testing approach, evaluation metrics, and others. Models of the forecasting process and analysis methods are formulated in terms of a three-tier decision taxonomy, by which decision effects are exposed through systematic enumeration of the techniques resulting from those decisions. A computation platform based on the models is implemented to compute and visualize the effects. The methods and computation platform are first demonstrated by applying them to 3,003 benchmark datasets to investigate various decisions, including those that could impact the relationship between data entropy and forecastability. Then, they are used to study over 10,624 week-ahead and day-ahead residential electricity demand forecasting techniques, utilizing fine-resolution electricity usage data collected over 18 months on groups of 782 and 223 households by real smart electric grids in Ireland and Australia, respectively. The main finding from this research is that forecasting performance is highly sensitive to the interaction effects of many decisions. Sampling is found to be an especially effective data strategy, clustering not so, temporal magnification mixed. Other relationships between certain decisions and performance are surfaced, too. While these findings are empirical and specific to one practically scoped investigation, they are potentially generalizable, with implications for residential electricity demand estimation, smart electric grid design, and electricity policy.
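The "systematic enumeration of the techniques resulting from those decisions" that this abstract describes can be sketched as a Cartesian product over decision options. The decision names and option values below are hypothetical stand-ins, not the dissertation's actual taxonomy:

```python
import itertools

# Hypothetical decision taxonomy: each forecasting-process decision has a
# set of options; one "technique" is one choice per decision.
decisions = {
    "sampling":      ["none", "random-10%", "stratified"],
    "clustering":    ["none", "k-means"],
    "magnification": ["hourly", "daily", "weekly"],
    "algorithm":     ["naive", "ARIMA", "gradient-boosting"],
}

techniques = [dict(zip(decisions, combo))
              for combo in itertools.product(*decisions.values())]
print(len(techniques))  # 3 * 2 * 3 * 3 = 54 candidate techniques
```

Evaluating every enumerated technique on the same held-out data is what exposes the interaction effects between decisions that the dissertation reports.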
APA, Harvard, Vancouver, ISO, and other styles
50

Bergling, Malin, and Lisa Warnberg. "Artificiell intelligens och dess påverkan på revisionsbolags legitimitet : En kvalitativ studie om hur revisionsbolags legitimitet kommer att påverkas av artificiell intelligens." Thesis, Karlstads universitet, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-78742.

Full text
Abstract:
Legitimacy is about meeting society's expectations, and since we live in a society that is constantly changing, the perception of what is legitimate is also changing. The main purpose of an auditor in today's society is to review financial reports and guarantee their quality. In recent years, technological developments have changed the auditing profession, and previous research shows that digitalization has made work easier for auditors, as certain parts of the audit process have been automated. The purpose of this study was to examine how AI in audit firms will affect the legitimacy of auditors. The focus has been on the audit firms' working process, how that process will change with the help of AI, and finally how the auditors believe this will affect the firms' legitimacy. Qualitative interviews were conducted with auditors in order to go into depth and obtain the respondents' personal reflections. The results of the study show that the introduction of AI will have a positive impact on the audit process, since standardized and time-consuming tasks are expected to be automated. This means that auditors will be able to perform more accurate analyses and have more time for their customers, which suggests that audit firms will be able to increase their legitimacy. The conclusion is therefore that AI will help audit firms with their legitimacy in the future.
APA, Harvard, Vancouver, ISO, and other styles