Dissertations / Theses on the topic 'Artificial intelligent techniques'

Consult the top 50 dissertations / theses for your research on the topic 'Artificial intelligent techniques.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Liu, Huixiang. "Intelligent search techniques for large software systems." Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6422.

Full text
Abstract:
There are many tools available today to help software engineers search in source code systems. It is often the case, however, that there is a gap between what people really want to find and the actual query strings they specify. This is because a concept in a software system may be represented by many different terms, while the same term may have different meanings in different places. Therefore, software engineers often have to guess as they specify a search, and often have to repeatedly search before finding what they want. To alleviate the search problem, this thesis describes a study of what we call intelligent search techniques as implemented in a software exploration environment, whose purpose is to facilitate software maintenance. We propose to utilize some information retrieval techniques to automatically apply transformations to the query strings. The thesis first introduces the intelligent search techniques used in our study, including abbreviation concatenation and abbreviation expansion. Then it describes in detail the rating algorithms used to evaluate the query results' similarity to the original query strings. Next, we describe a series of experiments we conducted to assess the effectiveness of both the intelligent search methods and our rating algorithms. Finally, we describe how we use the analysis of the experimental results to recommend an effective combination of searching techniques for software maintenance, as well as to guide our future research.
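As an aside for readers unfamiliar with these ideas, the sketch below shows, under invented assumptions (a toy identifier corpus, a toy abbreviation dictionary and a generic string-similarity ratio), how abbreviation expansion and a similarity-rating step might be combined; it is not the thesis's actual algorithm.

```python
import re
from difflib import SequenceMatcher

# Toy identifier corpus and abbreviation dictionary -- invented for illustration.
IDENTIFIERS = ["getCustomerRecord", "cust_rec_cache", "print_error_message",
               "errMsgBuffer", "calc_total_amount", "customerRecordIndex"]
ABBREVIATIONS = {"cust": "customer", "rec": "record", "err": "error",
                 "msg": "message", "calc": "calculate", "amt": "amount"}

def normalise(name):
    """Split camelCase/snake_case into words and expand known abbreviations."""
    words = re.sub(r"([a-z])([A-Z])", r"\1 \2", name).replace("_", " ").lower().split()
    return " ".join(ABBREVIATIONS.get(w, w) for w in words)

def rate(query, identifier):
    """Rate how similar an identifier is to the expanded query string (0..1)."""
    return SequenceMatcher(None, normalise(query), normalise(identifier)).ratio()

def search(query, top_n=3):
    """Return the top-rated identifiers for a query."""
    return sorted(((round(rate(query, i), 2), i) for i in IDENTIFIERS), reverse=True)[:top_n]

print(search("cust rec"))   # customer-record identifiers should rank highest
```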
APA, Harvard, Vancouver, ISO, and other styles
2

Kostias, Aristotelis, and Georgios Tagkoulis. "Development of an Artificial Intelligent Software Agent using Artificial Intelligence and Machine Learning Techniques to play Backgammon Variants." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-251923.

Full text
Abstract:
Artificial Intelligence has seen enormous progress in many disciplines in recent years. In particular, digitalized versions of board games require the application of artificial intelligence due to their complex decision-making environment. Game developers aim to create board game software agents that are intelligent, adaptive and responsive. However, the process of designing and developing such a software agent is far from straightforward due to the nature and diversity of each game. The thesis examines and presents a detailed procedure for constructing a software agent for backgammon variants, using temporal difference learning, artificial neural networks and backpropagation. Different artificial intelligence and machine learning algorithms used in board games are overviewed and presented. Finally, the thesis describes the development and implementation of a software agent for the backgammon variant called Swedish Tables and evaluates its performance.
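A minimal sketch of the combination named above (temporal-difference learning with a small neural network trained by backpropagation) is given below; the 198-unit board encoding follows the classic TD-Gammon convention, while the network size, learning rate and the random transition are placeholders, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_HIDDEN = 198, 40        # 198 inputs mirrors the classic TD-Gammon encoding
W1 = rng.normal(0, 0.1, (N_HIDDEN, N_FEATURES))
W2 = rng.normal(0, 0.1, (1, N_HIDDEN))
ALPHA = 0.01                          # learning rate

def value(x):
    """Estimated probability that the side to move eventually wins."""
    h = np.tanh(W1 @ x)
    v = 1.0 / (1.0 + np.exp(-(W2 @ h)[0]))
    return v, h

def td_update(x_t, x_next, reward, terminal):
    """TD(0) update: backpropagate the temporal-difference error."""
    global W1, W2
    v_t, h = value(x_t)
    v_next = reward if terminal else value(x_next)[0]
    delta = v_next - v_t                                # TD error
    grad_out = v_t * (1.0 - v_t)                        # sigmoid derivative
    grad_hidden = W2.flatten() * grad_out * (1.0 - h ** 2)
    W2 += ALPHA * delta * grad_out * h.reshape(1, -1)
    W1 += ALPHA * delta * np.outer(grad_hidden, x_t)

# One fictitious transition between two random board encodings.
x0, x1 = rng.random(N_FEATURES), rng.random(N_FEATURES)
td_update(x0, x1, reward=0.0, terminal=False)
```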
APA, Harvard, Vancouver, ISO, and other styles
3

Angeli, Chrissanthi. "Intelligent fault detection techniques for an electro-hydraulic system." Thesis, University of Sussex, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yang, Jianhua. "Intelligent data mining using artificial neural networks and genetic algorithms : techniques and applications." Thesis, University of Warwick, 2010. http://wrap.warwick.ac.uk/3831/.

Full text
Abstract:
Data Mining (DM) refers to the analysis of observational datasets to find relationships and to summarize the data in ways that are both understandable and useful. Many DM techniques exist. Compared with other DM techniques, Intelligent Systems (ISs) based approaches, which include Artificial Neural Networks (ANNs), fuzzy set theory, approximate reasoning, and derivative-free optimization methods such as Genetic Algorithms (GAs), are tolerant of imprecision, uncertainty, partial truth, and approximation. They provide flexible information processing capability for handling real-life situations. This thesis is concerned with the ideas behind design, implementation, testing and application of a novel ISs based DM technique. The unique contribution of this thesis is in the implementation of a hybrid IS DM technique (Genetic Neural Mathematical Method, GNMM) for solving novel practical problems, the detailed description of this technique, and the illustrations of several applications solved by this novel technique. GNMM consists of three steps: (1) GA-based input variable selection, (2) Multi- Layer Perceptron (MLP) modelling, and (3) mathematical programming based rule extraction. In the first step, GAs are used to evolve an optimal set of MLP inputs. An adaptive method based on the average fitness of successive generations is used to adjust the mutation rate, and hence the exploration/exploitation balance. In addition, GNMM uses the elite group and appearance percentage to minimize the randomness associated with GAs. In the second step, MLP modelling serves as the core DM engine in performing classification/prediction tasks. An Independent Component Analysis (ICA) based weight initialization algorithm is used to determine optimal weights before the commencement of training algorithms. The Levenberg-Marquardt (LM) algorithm is used to achieve a second-order speedup compared to conventional Back-Propagation (BP) training. In the third step, mathematical programming based rule extraction is not only used to identify the premises of multivariate polynomial rules, but also to explore features from the extracted rules based on data samples associated with each rule. Therefore, the methodology can provide regression rules and features not only in the polyhedrons with data instances, but also in the polyhedrons without data instances. A total of six datasets from environmental and medical disciplines were used as case study applications. These datasets involve the prediction of longitudinal dispersion coefficient, classification of electrocorticography (ECoG)/Electroencephalogram (EEG) data, eye bacteria Multisensor Data Fusion (MDF), and diabetes classification (denoted by Data I through to Data VI). GNMM was applied to all these six datasets to explore its effectiveness, but the emphasis is different for different datasets. For example, the emphasis of Data I and II was to give a detailed illustration of how GNMM works; Data III and IV aimed to show how to deal with difficult classification problems; the aim of Data V was to illustrate the averaging effect of GNMM; and finally Data VI was concerned with the GA parameter selection and benchmarking GNMM with other IS DM techniques such as Adaptive Neuro-Fuzzy Inference System (ANFIS), Evolving Fuzzy Neural Network (EFuNN), Fuzzy ARTMAP, and Cartesian Genetic Programming (CGP). In addition, datasets obtained from published works (i.e. Data II & III) or public domains (i.e. 
Data VI) where previous results were available in the literature were also used to benchmark GNMM's effectiveness. As a closely integrated system, GNMM has the merit that it needs little human interaction. With some predefined parameters, such as the GA's crossover probability and the shape of the ANNs' activation functions, GNMM is able to process raw data until human-interpretable rules are extracted. This is an important feature in practice, as quite often users of a DM system have little or no need to fully understand the internal components of such a system. Through the case study applications, it has been shown that the GA-based variable selection stage is capable of: filtering out irrelevant and noisy variables, thereby improving the accuracy of the model; making the ANN structure less complex and easier to understand; and reducing the computational complexity and memory requirements. Furthermore, rule extraction ensures that the MLP training results are easily understandable and transferable.
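To make the first GNMM step concrete, here is a hedged sketch of GA-based input-variable selection with an adaptive mutation rate; a linear least-squares fit stands in for the MLP fitness evaluation, and the data, population size and rates are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                                # 10 candidate inputs
y = 2 * X[:, 0] - 3 * X[:, 4] + 0.1 * rng.normal(size=200)    # only inputs 0 and 4 matter

def fitness(mask):
    """Negative error of a least-squares fit on the selected inputs, minus a size penalty."""
    if not mask.any():
        return -1e9
    A = np.c_[X[:, mask.astype(bool)], np.ones(len(X))]
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return -np.mean(resid ** 2) - 0.01 * mask.sum()

pop = rng.integers(0, 2, size=(30, X.shape[1]))
mutation_rate, prev_avg = 0.05, -np.inf
for gen in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    # Adaptive mutation: explore more when the average fitness stops improving.
    mutation_rate = min(0.3, mutation_rate * 1.5) if scores.mean() <= prev_avg else 0.05
    prev_avg = scores.mean()
    parents = pop[np.argsort(scores)[-15:]]                   # keep the fitter half (elite group)
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(0, len(parents), 2)]      # pick two parents
        cut = rng.integers(1, X.shape[1])
        child = np.r_[a[:cut], b[cut:]]                       # one-point crossover
        child[rng.random(X.shape[1]) < mutation_rate] ^= 1    # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected inputs:", np.flatnonzero(best))               # typically indices 0 and 4
```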
APA, Harvard, Vancouver, ISO, and other styles
5

Ebada, Adel. "Intelligent techniques-based approach for ship manoeuvring simulations and analysis artificial neural networks application /." [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=984707166.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yang, Ao. "Artificial Intelligent Techniques in Residential Water End-use Studies for Optimized Urban Water Management." Thesis, Griffith University, 2018. http://hdl.handle.net/10072/382672.

Full text
Abstract:
In the urban water planning and management industry, end-use water consumption monitoring is a primary tool for water demand management and source substitution. Numerous residential end-use consumption studies have been carried out worldwide in the last two decades. With the rapid development of intelligent technology, the traditional time-consuming process for water flow data disaggregation has been replaced by a smart water metering system with advanced analysis. However, the existing water flow trace analysis system cannot accurately disaggregate all categories of residential water end-use events. In response to this issue, this research focused on developing new techniques which can improve the autonomous categorisation accuracy of the residential water flow disaggregation process. A rigorous research method was adopted to achieve the above-mentioned research objectives and included the following two stages: (1) review and testing of pattern recognition techniques; and (2) software development. This study employed the extensive South-east Queensland (SEQ) Residential Water End Use Study dataset to undertake the development of the intelligent and autonomous water end-use recognition technique. Due to the array of objectives, methods, and results, this thesis has been structured around two refereed journal publications produced during the MPhil study. Two themes emerged from the research, namely: (1) development of a hybrid intelligent model for mechanised water end-use analysis; and (2) optimising the water end-use analysis process with Self-organising maps and K-means clustering. The application of many sophisticated intelligent techniques has been attempted in order to tackle this complex problem. In the first stage, the original application of the Dynamic Time Warping (DTW) algorithm was found to be ineffective due to the settings of the threshold value. Through further investigation into the existing database, the Artificial Bee Colony (ABC) and K-Medoids algorithms were selected. In this stage, this technique was applied to assist in finding toilet events in an artificially mixed dataset. An accuracy of 95.71% for correctly classified mechanical events was achieved when tested on 136 mixed events from different categories. The performance of the selected algorithms has been compared against previously reported approaches, with the technique and accuracy comparisons presented in a refereed journal paper. While the ABC and K-Medoids approach to clustering flow data into water end-use categories was suitable for mechanical end-use categories, it was less effective for other behaviourally influenced categories. Further exploration of various water flow data clustering techniques was required in order to discover a more suitable approach for the preliminary clustering of flow data into all of the water end-use categories. This prompted the research activities for the second journal paper, described as follows. The study continued with the development of a hybrid technique in the second stage. Self-organising maps (SOM) and K-means algorithms were applied to the existing software Autoflow through pre-grouping of water end-use events in order to improve the accuracy. The verification on two datasets (i.e., (1) over 100,000 single events, and (2) 30 independent homes) resulted in an improvement in water end-use categorisation accuracy, when compared to the original technique employed in Autoflow, for each residential end-use category.
Accuracy improvements were particularly noticeable for the mechanical water end-use event categories (i.e., washing machine, toilet, and evaporative cooler). The research outcomes have implications for researchers and the water industry. For researchers, the revised Autoflow v3.1 developed in this study is more accurate than previous versions reported in the literature. The novel hybrid pattern recognition approach and the associated algorithms employed in this latest Autoflow v3.1 version can be adapted for a range of pattern recognition problems. For the water industry, an accurate and autonomous water end-use analysis software tool has a range of implications, including providing bottom-up data for demand forecasting and infrastructure planning, supporting evidence-based water demand management, and enabling end-use level customer feedback through phone and web-based applications.
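A compact sketch of the two-stage pre-grouping idea (a small self-organising map followed by K-means on the SOM codebook) appears below; the event features, grid size and cluster count are invented and are not Autoflow's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy events: columns = [volume (L), duration (s), peak flow rate (L/min)]
events = np.vstack([
    rng.normal([6, 60, 6],    [0.5, 5, 0.5], size=(50, 3)),   # toilet-like
    rng.normal([60, 2400, 9], [5, 120, 1],   size=(50, 3)),   # washing-machine-like
    rng.normal([15, 300, 5],  [3, 60, 1],    size=(50, 3)),   # tap/shower-like
])
X = (events - events.mean(0)) / events.std(0)                  # standardise features

# Stage 1: a small 1-D self-organising map compresses the events into codebook vectors.
n_units, n_iter = 12, 2000
codebook = rng.normal(size=(n_units, X.shape[1]))
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((codebook - x) ** 2).sum(1))              # best-matching unit
    lr = 0.5 * (1 - t / n_iter)                                # decaying learning rate
    sigma = max(1.0, 3 * (1 - t / n_iter))                     # shrinking neighbourhood
    influence = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
    codebook += lr * influence[:, None] * (x - codebook)

# Stage 2: K-means groups the codebook vectors into candidate end-use pre-groups.
k = 3
centres = codebook[rng.choice(n_units, k, replace=False)]
for _ in range(50):
    labels = np.argmin(((codebook[:, None] - centres) ** 2).sum(2), axis=1)
    centres = np.array([codebook[labels == j].mean(0) if np.any(labels == j) else centres[j]
                        for j in range(k)])

# Each event inherits the cluster of its best-matching SOM unit.
event_groups = labels[np.argmin(((X[:, None] - codebook) ** 2).sum(2), axis=1)]
print(np.bincount(event_groups))                               # rough sizes of the three pre-groups
```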
Thesis (Masters)
Master of Philosophy (MPhil)
School of Eng & Built Env
Science, Environment, Engineering and Technology
Full Text
APA, Harvard, Vancouver, ISO, and other styles
7

Hasan, Irfan. "Machine learning techniques for automated knowledge acquisition in intelligent knowledge-based systems." Kutztown University of Pennsylvania, 1991. Remote access available to Kutztown University faculty, staff, and students only: http://www.kutztown.edu/library/services/remote_access.asp.

Full text
Abstract:
Thesis (M.S.)--Kutztown University of Pennsylvania, 1991.
Source: Masters Abstracts International, Volume: 45-06, page: 3187. Abstract precedes thesis as [2] preliminary leaves. Typescript. Includes bibliographical references (leaves 102-104).
APA, Harvard, Vancouver, ISO, and other styles
8

Wong, Kam Cheung. "Intelligent methods of power system components monitoring by artificial neural networks and optimisation using evolutionary computing techniques." Thesis, University of Sunderland, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285580.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jarvis, Matthew P. "Applying machine learning techniques to rule generation in intelligent tutoring systems." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0429104-112724.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: Intelligent Tutoring Systems; Model Tracing; Machine Learning; Artificial Intelligence; Programming by Demonstration. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
10

Raja, Muhammad Nouman Amjad. "Load-settlement investigation of geosynthetic-reinforced soil using experimental, analytical, and intelligent modelling techniques." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2021. https://ro.ecu.edu.au/theses/2455.

Full text
Abstract:
During the past five decades, numerous studies have been conducted to investigate the load-settlement behaviour of geosynthetic-reinforced soil. The main advantages of reinforced soil foundations are the increase in bearing capacity and the decrease in settlement. For pavement foundation design, meanwhile, the strength of the subgrade soil is often measured in terms of the California bearing ratio (CBR). Researchers have suggested various methods to improve the quality of geosynthetic-reinforced foundation soils. In the recent past, the wraparound geosynthetic reinforcement technique has been proposed to strengthen the foundation soil effectively. However, there are several research gaps in the area; for example, there has been no analytical solution for estimating the ultimate bearing capacity of wraparound reinforced foundations, and there has been no evaluation of this technique under repeated loading conditions. Similarly, for planar geosynthetic-reinforced soil foundations, the prediction of load-settlement behaviour also requires more attention. The advent of artificial intelligence (AI) based modelling techniques has made many traditional approaches antiquated. Despite this, there is limited research on using AI techniques to derive mathematical expressions for predicting the load-settlement behaviour of reinforced soil foundations or the strength of reinforced subgrade soil. This research examines the load-settlement behaviour of geosynthetic-reinforced foundation soils using experimental, analytical, and intelligent modelling methods. For this purpose, extensive laboratory measurements and analytical, numerical and AI-based modelling and analysis have been conducted to: (i) derive a theoretical expression to estimate the ultimate bearing capacity of a footing resting on a soil bed strengthened by the wraparound reinforcement technique; (ii) present, through a detailed experimental study, the effectiveness of wraparound reinforcement for improving the load-settlement characteristics of sandy soil under repeated loading conditions; (iii) build executable artificial intelligence-based, computationally intelligent soft computing models and convert them into simple mathematical equations for estimating (a) the ultimate bearing capacity of reinforced soil foundations, (b) the settlement at peak footing loads, and (c) the strength (California bearing ratio) of geosynthetic-reinforced subgrade soil; and (iv) examine and predict the settlement of geosynthetic-reinforced soil foundations (GRSF) under service loading conditions using a novel hybrid approach, that is, finite element modelling (FEM) combined with AI modelling. In the analytical phase, a theoretical expression has been developed for estimating the ultimate bearing capacity of a strip footing resting on a soil bed reinforced with a geosynthetic layer having wraparound ends. The wraparound ends of the geosynthetic reinforcement are considered to provide the shearing resistance at the soil-geosynthetic interface as well as the passive resistance due to confinement of the soil by the geosynthetic reinforcement. The values of ultimate load-bearing capacity determined using the developed analytical expression are close to those of the model studies reported in the literature, with differences in the range of 0% to 25% and an average difference of 10%.
In the experimental phase, model footing load tests have been conducted on a strip footing resting on a sandy soil bed reinforced with geosynthetic in wraparound and planar forms under monotonic and repeated loadings. The geosynthetic layers were laid according to the reinforcement ratio to minimise the scale effect. The effects of repeated load amplitude and the number of cycles, and of reinforcement parameters such as the number of layers, reinforcement width, lap-length ratio and planar width of the wraparound, were investigated, and their potential influence on the load-settlement behaviour has been studied. The wraparound reinforced model has shown about 45% lower average total settlement than the unreinforced model. In comparison, the double-layer reinforced model has shown about 41% lower settlement, at the cost of twice the material and 1.5 times the occupied land width ratio. Additionally, for lower settlement levels (s/B ≤ 5%), the wraparound geotextile with a smaller occupied land width ratio (bp/B = 3.5) has performed well in comparison to the wraparound with a slightly larger occupied land width ratio (bp/B = 4). However, the wraparound with an occupied width ratio of 4 provides more stability to the foundation soil for higher settlement levels. The performance of the fully wrapped model (bp/B = 2.8) is more similar to that of the planar double-layer reinforced model (b/B = 4); however, it is noted that even the fully wrapped model outperforms the planar single-layer reinforced model with the same amount of geotextile and 50% less occupied land width. For the data analytic methods, historical data were first collected to build the various machine learning (ML) models, and a detailed comparison was then presented among the ML-based models and against other available theoretical methods. A comprehensive study was conducted for each model to choose its structure, its optimisation and the tuning of its hyperparameters, and to interpret it in the form of mathematical expressions. The forecasting strength of the models was assessed through a cross-validation approach, rigorous statistical testing, a multi-criteria approach, and an external validation process. The traditional statistical indices, such as the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), and mean absolute percent deviation (MAPD), along with several other modern model performance indicators, were utilised to evaluate the accuracy of the developed models. For ultimate bearing capacity (UBC) estimation, the performance of the extreme learning machine (ELM) and TreeNet models has shown a good degree of prediction accuracy in comparison with traditional methods over the test dataset. However, the overall performance of the ELM model (R2 = 0.9586, MAPD = 12.8%) was better than that of the TreeNet model (R2 = 0.9147, MAPD = 17.2%). Similarly, for settlement estimation at peak footing loads, the multivariate adaptive regression splines (MARS) modelling technique outperformed (R2 = 0.974, RMSE = 1.19 mm, and MAPD = 7.19%) several other robust AI-based models, namely ELM, support vector regression (SVR), Gaussian process regression (GPR), and stochastic gradient boosting trees (SGBT). For CBR, the competency and reliability of several intelligent models, namely artificial neural network (ANN), least median of squares regression (LMSR), GPR, elastic net regularization regression (ENRR), lazy K-star (LKS), M5 model trees, alternating model trees (AMT), and random forest (RF), were evaluated.
Among all the intelligent modelling techniques, ANN (R2 = 0.944, RMSE = 1.74, and MAE = 1.27) and LKS (R2 = 0.955, RMSE = 1.52, and MAE = 1.04) achieved the highest ranking scores of 35 and 40, respectively, in predicting the CBR of geosynthetic-reinforced soil. Moreover, for UBC and settlement at peak footing loads, new model footing load tests, and for the strength of reinforced subgrade soil, new CBR tests, were also conducted to verify the predictive veracity of the developed AI-based models. For predicting the settlement behaviour of GRSF under various service loads, an integrated numerical-artificial intelligence approach was utilised. First, the large-scale footing load tests were simulated using the FEM technique. At the second stage, a detailed parametric study was conducted to find the effect of footing, geosynthetic and soil-strength parameters on the settlement of GRSF under various service loads. Afterwards, a novel evolutionary artificial intelligence model, that is, the grey-wolf optimised artificial neural network (ANN-GWO), was developed and translated into a simple mathematical equation for estimating the load-settlement behaviour of GRSF. The results of this study indicate that the proposed ANN-GWO model predicts the settlement of GRSF with high accuracy for the training (RMSE = 0.472 mm, MAE = 0.833, R2 = 0.982) and testing (RMSE = 0.612 mm, MAE = 0.363, R2 = 0.962) datasets. Furthermore, the predictive veracity of the model was verified by detailed and rigorous statistical testing and against several independent scientific studies reported in the literature. This work is practically valuable for understanding and predicting the load-settlement behaviour of reinforced soil foundations and applies to the traditional planar geosynthetic-reinforced technique as well as the recently developed wraparound geosynthetic-reinforced foundation soil technique. For wraparound reinforced soil foundations, the analytical expression will be helpful in the estimation of ultimate bearing capacity, and the experimental study shows the beneficial effects of such foundation systems in terms of enhanced bearing capacity, reduced settlement, and economic benefits in terms of saved land area and amount of geosynthetic, under repeated loading conditions. Moreover, the developed AI-based models and mathematical expressions will be helpful for practitioners in predicting the strength and settlement of reinforced soil in an effective and intelligent way and will be beneficial for the broader understanding of embedding intelligent modelling techniques within geosynthetic-reinforced soil (GRS) technology for automation in construction projects.
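As a hedged illustration of the optimiser component of ANN-GWO only, the sketch below runs a standard grey wolf optimiser loop to fit the two parameters of a toy settlement predictor by minimising RMSE; the data and model form are invented, not the thesis's.

```python
import numpy as np

rng = np.random.default_rng(0)
loads = np.linspace(50, 300, 20)                        # applied pressure (kPa), synthetic
settle = 0.04 * loads + 1.5 + rng.normal(0, 0.3, 20)    # synthetic settlement (mm)

def rmse(w):
    """Objective: RMSE of a two-parameter settlement predictor."""
    return np.sqrt(np.mean((settle - (w[0] * loads + w[1])) ** 2))

n_wolves, n_iter, dim = 12, 200, 2
wolves = rng.uniform(-2, 2, size=(n_wolves, dim))
for t in range(n_iter):
    fit = np.array([rmse(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(fit)[:3]]    # the three best wolves lead the pack
    a = 2 * (1 - t / n_iter)                            # exploration coefficient decays 2 -> 0
    for i in range(n_wolves):
        new_pos = np.zeros(dim)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            new_pos += leader - A * np.abs(C * leader - wolves[i])
        wolves[i] = new_pos / 3.0                       # average of the three suggested moves

best = wolves[np.argmin([rmse(w) for w in wolves])]
print("fitted slope/intercept:", np.round(best, 3), "RMSE:", round(rmse(best), 3))
```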
APA, Harvard, Vancouver, ISO, and other styles
11

Alonso, Martínez Margarita. "Conocimiento y Bases de Datos: una propuesta de integración inteligente. Knowledge and Databases: An Intelligent integration proposal." Doctoral thesis, Universidad de Cantabria, 1992. http://hdl.handle.net/10803/31767.

Full text
Abstract:
This thesis studies and characterizes the application of Expert Systems in the management of companies, particularly in problems related to decision making. The connection between Expert Systems and Databases offers new possibilities in the study of Business Management. Expert Systems provide techniques for the control of large volumes of information together with heuristic knowledge, reasoning and learning capabilities, and interactive user communication. The objective of knowledge and database integration is to establish a framework for the effective and global control of decision-making processes. The interaction of Expert Systems and Databases could be improved by sharing one logical design for the information in order to obtain both the conceptual scheme of the Database and the Knowledge Base of the Expert System.
APA, Harvard, Vancouver, ISO, and other styles
12

Rego, Máñez Albert. "Intelligent multimedia flow transmission through heterogeneous networks using cognitive software defined networks." Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/160483.

Full text
Abstract:
This thesis addresses the problem of routing in Software Defined Networks (SDN), specifically the problem of designing a routing protocol based on Artificial Intelligence (AI) to ensure Quality of Service (QoS) in multimedia transmissions. In the first part of the work, SDN is introduced and its architecture, protocols and advantages are discussed. Then, the state of the art is presented, where several works regarding QoS, routing, SDN and AI are detailed. In the next chapter, the SDN controller, which plays the central role in the proposed architecture, is presented. The design of the controller is detailed and its performance is compared to another common controller. Later, the routing proposals are described. First, a modification of a traditional routing protocol is discussed; this modification adapts a traditional routing protocol to SDN, with a focus on multimedia transmissions. Then, the final proposal is described and its messages, architecture and algorithms are depicted. As regards AI, chapter 5 details the module of the architecture that implements it, along with all the intelligent methods used in the routing proposal. Furthermore, the intelligent route decision algorithm is described and the final proposal is compared to the traditional routing protocol and its adaptation to SDN, showing an increase in the final quality of the transmission. Finally, some applications based on the routing proposal are described. The applications are presented to demonstrate that the proposed solution can work with heterogeneous networks.
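As a rough illustration of the kind of QoS-aware route computation an SDN controller module might perform, the sketch below runs Dijkstra over a small link-state view whose costs blend delay, loss and free bandwidth; the topology, weights and cost formula are assumptions, not the protocol proposed in the thesis.

```python
import heapq

# Link-state view: (delay ms, loss fraction, free bandwidth Mb/s) -- invented topology.
LINKS = {
    ("s1", "s2"): (5, 0.001, 100), ("s2", "s4"): (7, 0.002, 80),
    ("s1", "s3"): (3, 0.010, 40),  ("s3", "s4"): (4, 0.001, 40),
    ("s2", "s3"): (2, 0.001, 60),
}

def cost(delay, loss, bw, w=(1.0, 500.0, 100.0)):
    """Composite QoS cost: lower delay/loss and more free bandwidth are better."""
    return w[0] * delay + w[1] * loss + w[2] / bw

def neighbours(node):
    for (a, b), metrics in LINKS.items():
        if a == node:
            yield b, cost(*metrics)
        elif b == node:
            yield a, cost(*metrics)

def best_path(src, dst):
    """Plain Dijkstra on the composite cost."""
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        c, node, path = heapq.heappop(heap)
        if node == dst:
            return c, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, link_cost in neighbours(node):
            if nxt not in seen:
                heapq.heappush(heap, (c + link_cost, nxt, path + [nxt]))
    return float("inf"), []

# The controller would then install flow rules along the returned path.
print(best_path("s1", "s4"))
```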
Rego Máñez, A. (2020). Intelligent multimedia flow transmission through heterogeneous networks using cognitive software defined networks [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/160483
TESIS
APA, Harvard, Vancouver, ISO, and other styles
13

Bourrier, Yannick. "Diagnostic et prise de décision pédagogique pour la construction de compétences non-techniques en situation critique." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS002/document.

Full text
Abstract:
Non-technical skills (NTS) are a set of metacognitive abilities that complement technical skills and allow for a safe and efficient technical activity. They play an important role during the handling of critical situations in many domains, including driving and acute medicine. This thesis work focused on the building of a virtual environment for learning (VEL) dedicated to the training of these non-technical skills through the experience of critical situations. The main contributions target two fundamental aspects of the construction of such a VEL. First, we focused our efforts on the conception of an architecture able to diagnose a learner's NTS. This is an ill-defined problem, given the low degree of formalisation of the domain, the real-time aspects of this learning process, and the relations, unique to each individual, between criticality, technical skills and non-technical skills. This architecture combines domain knowledge, machine learning, and a Bayesian network to cross the semantic gap between the learner's perceptual-gestural activity inside the VEL and the diagnosis of high-level, cognitive NTS. Second, we built a pedagogical module, able to make decisions based on the diagnostic module, in order to build a "journey through criticality" adapted to each learner's characteristics and able to strengthen their NTS. This module associates the knowledge about the learner obtained by the Bayesian network with a reinforcement-learning "multi-armed bandit" algorithm, to reinforce the learner's NTS over time. Experiments were conducted in order to validate our modelling choices. They were first conducted on real user data, obtained during training sessions performed on a "large scale" driving simulator, in order to evaluate the robustness of the Bayesian network as well as its ability to provide varied diagnostics given its inputs. We then built a synthetic dataset in order to test the pedagogical module, more specifically its capability to provide learning situations adapted to learners of different profiles and to contribute to these learners' acquisition of NTS over time.
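A minimal sketch of the multi-armed bandit idea used by such a pedagogical module is shown below, with an epsilon-greedy rule standing in for the thesis's algorithm; the scenario names and the simulated learning-gain signal are invented.

```python
import random

SCENARIOS = ["low_criticality", "medium_criticality", "high_criticality"]
TRUE_GAIN = {"low_criticality": 0.2, "medium_criticality": 0.5, "high_criticality": 0.35}

counts = {s: 0 for s in SCENARIOS}
values = {s: 0.0 for s in SCENARIOS}      # running estimate of the learning gain per arm
EPSILON = 0.1

def choose():
    """Epsilon-greedy: mostly exploit the best estimate, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(SCENARIOS)
    return max(SCENARIOS, key=lambda s: values[s])

def update(scenario, reward):
    """Incremental mean update of the chosen arm's estimated gain."""
    counts[scenario] += 1
    values[scenario] += (reward - values[scenario]) / counts[scenario]

random.seed(0)
for session in range(500):
    s = choose()
    observed = random.gauss(TRUE_GAIN[s], 0.1)   # noisy, simulated learning-gain measurement
    update(s, observed)

print(max(values, key=values.get), {k: round(v, 2) for k, v in values.items()})
```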
APA, Harvard, Vancouver, ISO, and other styles
14

Sugianto, Nehemia. "Responsible AI for Automated Analysis of Integrated Video Surveillance in Public Spaces." Thesis, Griffith University, 2021. http://hdl.handle.net/10072/409586.

Full text
Abstract:
Understanding customer experience in real-time can potentially support people’s safety and comfort while in public spaces. Existing techniques, such as surveys and interviews, can only analyse data at specific times. Therefore, organisations that manage public spaces, such as local government or business entities, cannot respond immediately when urgent actions are needed. Manual monitoring through surveillance cameras can enable organisation personnel to observe people. However, fatigue and human distraction during constant observation cannot ensure reliable and timely analysis. Artificial intelligence (AI) can automate people observation and analyse their movement and any related properties in real-time. Analysing people’s facial expressions can provide insight into how comfortable they are in a certain area, while analysing crowd density can inform us of the area’s safety level. By observing the long-term patterns of crowd density, movement, and spatial data, the organisation can also gain insight to develop better strategies for improving people’s safety and comfort. There are three challenges to making an AI-enabled video surveillance system work well in public spaces. First is the readiness of AI models to be deployed in public space settings. Existing AI models are designed to work in generic/particular settings and will suffer performance degradation when deployed in a real-world setting. Therefore, the models require further development to tailor them for the specific environment of the targeted deployment setting. Second is the inclusion of AI continual learning capability to adapt the models to the environment. AI continual learning aims to learn from new data collected from cameras to adapt the models to constant visual changes introduced in the setting. Existing continuous learning approaches require long-term data retention and past data, which then raise data privacy issues. Third, most of the existing AI-enabled surveillance systems rely on centralised processing, meaning data are transmitted to a central/cloud machine for video analysis purposes. Such an approach involves data privacy and security risks. Serious data threats, such as data theft, eavesdropping or cyberattack, can potentially occur during data transmission. This study aims to develop an AI-enabled intelligent video surveillance system based on deep learning techniques for public spaces established on responsible AI principles. This study formulates three responsible AI criteria, which become the guidelines to design, develop, and evaluate the system. Based on the criteria, a framework is constructed to scale up the system over time to be readily deployed in a specific real-world environment while respecting people’s privacy. The framework incorporates three AI learning approaches to iteratively refine the AI models within the ethical use of data. First is the AI knowledge transfer approach to adapt existing AI models from generic deployment to specific real-world deployment with limited surveillance datasets. Second is the AI continuous learning approach to continuously adapt AI models to visual changes introduced by the environment without long-period data retention and the need for past data. Third is the AI federated learning approach to limit sensitive and identifiable data transmission by performing computation locally on edge devices rather than transmitting to the central machine. 
This thesis contributes to the study of responsible AI specifically in the video surveillance context from both technical and non-technical perspectives. It uses three use cases at an international airport as the application context to understand passenger experience in real-time to ensure people’s safety and comfort. A new video surveillance system is developed based on the framework to provide automated people observation in the application context. Based on real deployment using the airport’s selected cameras, the evaluation demonstrates that the system can provide real-time automated video analysis for three use cases while respecting people’s privacy. Based on comprehensive experiments, AI knowledge transfer can be an effective way to address limited surveillance datasets issue by transferring knowledge from similar datasets rather than training from scratch on surveillance datasets. It can be further improved by incrementally transferring knowledge from multi-datasets with smaller gaps rather than a one-stage process. Learning without Forgetting is a viable approach for AI continuous learning in the video surveillance context. It consistently outperforms fine-tuning and joint-training approaches with lower data retention and without the need for past data. AI federated learning can be a feasible solution to allow continuous learning in the video surveillance context without compromising model accuracy. It can obtain comparable accuracy with quicker training time compared to joint-training.
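To illustrate the federated-learning principle described above, the sketch below simulates a few edge devices that fit a small model locally and share only model weights for averaging (FedAvg-style); a linear model stands in for the deep networks used in the thesis, and the data and round counts are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_W = np.array([2.0, -1.0, 0.5])

def local_data(n=200):
    """Each simulated edge device holds its own private data."""
    X = rng.normal(size=(n, 3))
    return X, X @ TRUE_W + rng.normal(0, 0.1, n)

devices = [local_data() for _ in range(4)]          # four edge devices
global_w = np.zeros(3)

def local_update(w, X, y, lr=0.05, epochs=20):
    """A few steps of local gradient descent starting from the shared global weights."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

for rnd in range(10):                                # federated rounds
    local_ws = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(local_ws, axis=0)             # FedAvg: only weights are aggregated

print("federated estimate:", np.round(global_w, 2), "target:", TRUE_W)
```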
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Dept Bus Strategy & Innovation
Griffith Business School
Full Text
APA, Harvard, Vancouver, ISO, and other styles
15

Cerda-Villafana, Gustavo. "Artificial intelligence techniques in flood forecasting." Thesis, University of Bristol, 2005. http://hdl.handle.net/1983/09d0faea-8622-4609-a33c-e4baefa304f5.

Full text
Abstract:
The need for reliable, easy to set up and operate, hydrological forecasting systems is an appealing challenge to researchers working in the area of flood risk management. Currently, advancements in computing technology have provided water engineering with powerful tools for modelling hydrological processes, among them Artificial Neural Networks (ANN) and genetic algorithms (GA). These have been applied in many case studies with different levels of success. Despite the large amount of work published in this field so far, it is still a challenge to use ANN models reliably in a real-time operational situation. This thesis sets out to explore new ways of improving the accuracy and reliability of ANN in hydrological modelling. The study is divided into four areas: signal preprocessing, integrated GA, schematic application of weather radar data, and multiple input in flow routing. In signal preprocessing, digital filters were adopted to process the raw rainfall data before they are fed into ANN models. This novel technique demonstrated that significant improvement in modelling could be achieved. A GA, besides finding the best parameters of the ANN architecture, defined the moving average values for previous rainfall and flow data used as one of the inputs to the model. A distributed scheme was implemented to construct the model exploiting radar rainfall data. The results from weather radar rainfall were not as good as the results from the raingauge estimations, which were used for comparison. Multiple-input modelling has been carried out for a river junction, with excellent results, and for an extraction pump, with less promising results. Two conceptual models for flow routing modelling and a transfer function model for rainfall-runoff modelling have been used to compare the ANN model's performance, which was close to the estimations generated by the conceptual models and better than the transfer function model. The flood forecasting system implemented in East Anglia by the Environment Agency and the NERC HYREX project have been the main data sources to test the model.
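A minimal sketch of the signal-preprocessing idea, smoothing raw rainfall with a simple moving-average (FIR) filter before assembling ANN inputs, is given below; the window length and the synthetic rainfall series are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
rainfall = rng.gamma(0.3, 2.0, size=240)            # noisy hourly raw gauge data (mm)

def moving_average(signal, window=6):
    """Causal moving-average filter: each value averages the previous `window` hours."""
    kernel = np.ones(window) / window
    padded = np.r_[np.zeros(window - 1), signal]
    return np.convolve(padded, kernel, mode="valid")

smoothed = moving_average(rainfall)

# The ANN input for time t could then combine recent smoothed rainfall with lagged values.
t = 48
ann_input = np.r_[smoothed[t - 6:t], rainfall[t - 2:t]]
print(ann_input.shape)                               # (8,) illustrative input features
```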
APA, Harvard, Vancouver, ISO, and other styles
16

Marroquín, Cortez Roberto Enrique. "Context-aware intelligent video analysis for the management of smart buildings." Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCK040/document.

Full text
Abstract:
To date, computer vision systems are limited to extracting digital data from what the cameras "see". However, the meaning of what they observe could be greatly enhanced by knowledge of the environment and human skills. In this work, we propose a new approach to cross-fertilize computer vision with contextual information, based on a semantic modelization defined by an expert. This approach extracts knowledge from images and uses it to perform real-time reasoning according to the contextual information, events of interest and logic rules. Reasoning with image knowledge makes it possible to overcome some problems of computer vision, such as occlusion and missed detections, and to offer services such as people guidance and people counting. The proposed approach is the first step towards developing an "all-seeing" smart building that can automatically react according to its evolving information, i.e., a context-aware smart building. The proposed framework, named WiseNET, is an artificial intelligence (AI) that is in charge of taking decisions in a smart building (which can be extended to a group of buildings or even a smart city). This AI enables communication between the building itself and its users using a language understandable by humans.
APA, Harvard, Vancouver, ISO, and other styles
17

O'Neill, J. "Applying artificial intelligence techniques to data distribution." Thesis, Queen's University Belfast, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.680239.

Full text
Abstract:
Automatic data distribution is one of the most crucial issues preventing the development of a fully automatic parallelisation environment. Researchers have proposed solutions that utilise artificial intelligence (AI) technology, including expert systems and neural networks, to try to solve the problem. In this research project, alternative artificial intelligence techniques, including Genetic Algorithms (GAs) and Ant Colony Optimisation (ACO), are investigated to determine whether their use would be beneficial in the data distribution process. A data distribution tool has been developed for each technique in order to verify the detailed analysis. The tools were tested using 300 example loops, and the results show that the introduction of these techniques was successful in determining an appropriate data partition and distribution strategy for all 300 test cases. Furthermore, a novel hyper-heuristic approach to the data distribution problem involving case-based reasoning is also investigated. The aim of the hyper-heuristic approach is to select the most appropriate heuristic to apply to a particular problem. The approach has been verified by the development of a case-based reasoning tool that chooses an appropriate heuristic based on previous experience. Results show that the approach is effective at identifying similar cases in the case base and choosing the most appropriate heuristic to apply.
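The hyper-heuristic stage can be pictured with a tiny case-based-reasoning sketch: represent past loops by a few features, find the most similar stored case, and reuse its heuristic. The features, cases and heuristic names below are invented for illustration.

```python
import numpy as np

# Past cases: (loop depth, array rank, communication/computation ratio) -> chosen heuristic.
CASE_BASE = [
    ((1, 1, 0.1), "block"),
    ((2, 2, 0.4), "cyclic"),
    ((3, 2, 0.8), "block-cyclic"),
    ((2, 3, 0.2), "block"),
]

def recommend(features):
    """Reuse the heuristic of the nearest stored case (Euclidean distance)."""
    f = np.asarray(features, dtype=float)
    dists = [np.linalg.norm(f - np.asarray(c, dtype=float)) for c, _ in CASE_BASE]
    return CASE_BASE[int(np.argmin(dists))][1]

print(recommend((2, 2, 0.7)))    # -> 'cyclic' for this toy case base
```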
APA, Harvard, Vancouver, ISO, and other styles
18

Cheung, Yen Ping. "Artificial intelligence techniques for assembly process planning." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/79999/.

Full text
Abstract:
Due to current trends in adopting flexible manufacturing philosophies, there has been a growing interest in applying Artificial Intelligence (AI) techniques to implement these manufacturing strategies. This is because conventional computational methods alone are not sufficient to meet these requirements for more flexibility. This research examines the possibility of applying AI techniques to process planning and also addresses the various problems when implementing such techniques. In this project AI planning techniques were reviewed and some of these techniques were adopted and later extended to develop an assembly planner to illustrate the feasibility of applying AI techniques to process planning. The focus was on assembly process planning because little work in this area has been reported. Logical decisions like the sequencing of tasks which is a part of the process planning function can be viewed as an AI planning problem. The prototype Automatic Assembly Planner (AAP) was implemented using Edinburgh Prolog on a SUN workstation. Even though expected assembly sequences were obtained, the major problem facing this approach and perhaps AI applications in general is that of extracting relevant design data for the process planning function as illustrated by the planner. It is also believed that if process planning can be regarded as making logical decisions with the knowledge of company specific data then perhaps AAP has also provided some possible answers as to how human process planners perform their tasks. The same kind of reasoning for deciding the sequence of operations could also be employed for planning different products based on a different set of company data. AAP has illustrated the potentialities of applying AI techniques to process planning. The complexity of assembly can be tackled by breaking assemblies into sub-goals. The Modal Truth Criterion (MTC) was applied and tested in a real situation. A system for representing the logic of assembly was devised. A redundant goals elimination feature was also added in addition to the MTC in the AAP. Even though the ideal is a generative planner, in practice variant planners are still valid and perhaps closer to manual assembly process planning.
APA, Harvard, Vancouver, ISO, and other styles
19

Rodriguez, Martins Alejandro. "Artificial intelligence techniques for modeling financial analysis." Repositório Institucional da UFSC, 1996. http://repositorio.ufsc.br/xmlui/handle/123456789/76408.

Full text
Abstract:
Thesis (doctorate) - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Production Engineering, Florianópolis, 1996
Although monitoring the financial health of small firms is decisive to their success, these firms commonly have difficulty analysing their operational financial condition. To overcome this, the present thesis proposes a financial knowledge representation that is capable of proposing alternative actions whenever a deviation is detected. The knowledge representation developed recognizes the existence of different phases of analysis: one that looks for clues about possible financial problems, and another that focuses in more detail on the potential problems detected in the prior phase. The vagueness present in many semantic rules was implemented using the Theory of Fuzzy Sets. The uncertainty about the future behavior of some key financial variables is incorporated by means of managers' perceptions about trends and events. A practical formulation of this proposal is presented for the retail bus sector.
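As a toy illustration of using fuzzy sets for vague semantic rules of this kind, the sketch below evaluates one rule ("IF liquidity is low AND debt is high THEN risk is high") with a triangular membership function; the ratio thresholds are invented.

```python
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def financial_risk(current_ratio, debt_to_equity):
    """Degree to which 'risk is high' fires, using min() as the fuzzy AND."""
    low_liquidity = triangular(current_ratio, 0.0, 0.8, 1.5)
    high_debt = triangular(debt_to_equity, 1.0, 2.5, 4.0)
    return min(low_liquidity, high_debt)

print(financial_risk(1.0, 2.0))   # a partial, graded degree of truth rather than a yes/no
```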
APA, Harvard, Vancouver, ISO, and other styles
20

Pearson, Kyle A., Leon Palafox, and Caitlin A. Griffith. "Searching for exoplanets using artificial intelligence." OXFORD UNIV PRESS, 2018. http://hdl.handle.net/10150/627143.

Full text
Abstract:
In the last decade, over a million stars were monitored to detect transiting planets. Manual interpretation of potential exoplanet candidates is labour intensive and subject to human error, the results of which are difficult to quantify. Here we present a new method of detecting exoplanet candidates in large planetary search projects that, unlike current methods, uses a neural network. Neural networks, also called 'deep learning' or 'deep nets', are designed to give a computer perception into a specific problem by training it to recognize patterns. Unlike past transit detection algorithms, deep nets learn to recognize planet features instead of relying on hand-coded metrics that humans perceive as the most representative. Our convolutional neural network is capable of detecting Earth-like exoplanets in noisy time series data with a greater accuracy than a least-squares method. Deep nets are highly generalizable allowing data to be evaluated from different time series after interpolation without compromising performance. As validated by our deep net analysis of Kepler light curves, we detect periodic transits consistent with the true period without any model fitting. Our study indicates that machine learning will facilitate the characterization of exoplanets in future analysis of large astronomy data sets.
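A hedged, minimal sketch of a 1-D convolutional classifier of this general kind, applied to synthetic light curves with box-shaped dips, is shown below; the architecture and the toy data generator are illustrative and are not the network used in the paper.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
N, LENGTH = 2000, 256

def light_curve(has_transit):
    """Flat noisy flux, with an optional shallow box-shaped transit dip."""
    flux = 1.0 + rng.normal(0, 0.001, LENGTH)
    if has_transit:
        start = rng.integers(60, 180)
        flux[start:start + 20] -= 0.002
    return flux

labels = rng.integers(0, 2, N)
curves = np.stack([light_curve(l) for l in labels])[..., None]   # shape (N, LENGTH, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, 7, activation="relu", input_shape=(LENGTH, 1)),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),              # probability of a transit
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(curves, labels, epochs=3, batch_size=64, validation_split=0.2, verbose=0)
```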
APA, Harvard, Vancouver, ISO, and other styles
21

Fischer, Daniel Poehlman Skipper William. "Artificial intelligence techniques applied to fault detection systems /." *McMaster only, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
22

Wen, Chien-Hsien. "Applying artificial intelligence hybrid techniques in wastewater treatment." Ohio : Ohio University, 1997. http://www.ohiolink.edu/etd/view.cgi?ohiou1184357721.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Chui, David Kam Hung. "Artificial intelligence techniques for power system decision problems." Thesis, Queen Mary, University of London, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387837.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Usherwood, Thomas William. "Applications of artificial intelligence techniques to thermodynamic modelling." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.282348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Witt, Neil A. J. "Artificial intelligence techniques applied to automatic ship guidance." Thesis, University of Plymouth, 1995. http://hdl.handle.net/10026.1/2592.

Full text
Abstract:
It has been estimated that over eighty per cent of marine accidents have been caused by operator error. The skills of the operator in the handling of the vessel are variable and are subject to external influences. Over the past seventy years many advances have been made in the field of ship control. Early developments on proportional controllers have led to today's modern control systems, which have interfacing capabilities with electronic navigation equipment. This research investigates traditional control methodologies and introduces the concept of applying artificial intelligence (AI) methods to the ship guidance problem. Research into AI techniques has been burgeoning over the last fifteen years, and the main areas investigated are expert systems, fuzzy logic and neural networks. These areas are compared, and the research proposes that it is feasible to design and develop a novel, advanced autopilot that is capable of learning the control functions of the operator as well as the manoeuvring characteristics of the vessel. An assessment is undertaken as to the feasibility of replicating a helmsperson's vessel handling functions with an intelligent neural network control system. This system has the capability of learning the course keeping and track keeping functions for a specific vessel. The research has been carried out under two specific task areas: neural course keeping control utilising simulation methods; and neural track keeping control exploiting the use of simulation and scale model techniques. The use of a scale model has allowed the collection of accurate training data through an integrated navigation and data collection system. The use of such a test bed has permitted the testing of the neural track keeping system. Alternative research has concentrated on the use of mathematical models of vessels, with all the training data created through the use of simulation techniques. Whilst this approach is suitable for the initial design of a neural control system, it cannot fully replicate the disturbances acting upon, and the responses of, a real vessel. By utilising a scale model containing a navigation, data collection and control system, it has been possible to expose the vessel to real environmental data that is unobtainable when using simulation methods. The results of the neural control strategies implemented on the vessel guidance problem are evaluated against the teacher in terms of performance measures. The results indicate that the performance of the final track keeping system is as desired, in that it has learnt the control action of the operator. Areas for further research are presented, including the application of alternative AI techniques and the use of more accurate navigation sensors.
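A minimal sketch of the course-keeping idea, assuming a generic regressor and synthetic training data rather than the thesis's actual network or logged helm data: a small feedforward network learns a rudder command from heading error and yaw rate, trained here on samples generated from a simple PD-like "teacher" rule standing in for the recorded helmsperson.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Training inputs: heading error (deg) and yaw rate (deg/s) seen by the helmsperson.
heading_error = rng.uniform(-20.0, 20.0, 3000)
yaw_rate = rng.uniform(-2.0, 2.0, 3000)
X = np.column_stack([heading_error, yaw_rate])

# Illustrative "teacher" rudder command (a PD-like rule standing in for recorded
# helm actions); the real work trained on data logged from a scale-model vessel.
rudder = np.clip(1.2 * heading_error + 4.0 * yaw_rate, -35.0, 35.0)
rudder += rng.normal(0.0, 0.5, rudder.shape)     # measurement noise

net = MLPRegressor(hidden_layer_sizes=(12, 12), max_iter=3000, random_state=1)
net.fit(X, rudder)

# The trained network now acts as the autopilot's course-keeping law.
print("suggested rudder for 5 deg error, 0.3 deg/s yaw:",
      net.predict([[5.0, 0.3]])[0])
```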
APA, Harvard, Vancouver, ISO, and other styles
26

Zhu, Yaoyao. "Unsupervised database discovery based on artificial intelligence techniques." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1024314290.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Jansen, John F. "Application of artificial intelligence techniques to power system design." Diss., Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/14974.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Leach, Andrew Richard. "The application of artificial intelligence techniques in conformational analysis." Thesis, University of Oxford, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314898.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Gwyn, Bryan James. "The use of artificial intelligence techniques for power analysis." Thesis, City University London, 1999. http://openaccess.city.ac.uk/8281/.

Full text
Abstract:
This thesis reports the research carried out into the use of Artificial Intelligence techniques for Power System Analysis. A number of aspects of Power System analysis and its management are investigated and the application of Artificial Intelligence techniques is researched. The use of software tools for checking the application of power system protection systems particularly for complex circuit arrangements was investigated. It is shown that the software provides a more accurate and efficient way of carrying out these investigations. The National Grid Company's (plc, UK) use of software tools for checking the application of protection systems is described, particularly for complex circuit arrangements such as multi-terminal circuits and composite overhead line and cable circuits. Also described, is how investigations have been made into an actual system fault that resulted in a failure of protection to operate. Techniques using digital fault records to replay a fault into a static model of protection are used in the example. The need for dynamic modelling of protection is also discussed. Work done on automating the analysis of digital fault records using computational techniques is described. An explanation is given on how a rule-based system has been developed to classify fault types and analyse the response of protection during a power system fault or disturbance in order to determine correct or incorrect operation. The development of expert systems for on-line application in Energy Control Centres (ECC), is reported. The development of expert systems is a continuous process as new knowledge is gained in the field of artificial intelligence and new expert system development tools are built. Efforts are being made for on-line application of expert systems in ECC as preventive control under normal/alert conditions and as a corrective control during a disturbance. This will enable a more secure power system operation. Considerable scope exists in the development of expert systems and their application to power system operation and control. An overview of the many different types of Neural Network has been carried out explaining terminology and methodology along with a number of techniques used for their implementation. Although the mathematical concepts are not new, many of them were recorded more than fifty years ago, the introduction of fast computers has enabled many of these concepts to be used for today's complex problems. The use of Genetic Algorithm based Artificial Neural Networks is demonstrated for Electrical Load Forecasting and the use of Self Organising Maps is explored for classifying Power System digital fault records. The background of the optimisation process carried out in this thesis is given and an introduction to the method applied, in particular Evolutionary Programming and Genetic Algorithms. Possible solutions to optimisation problems were introduced to be either local or global minimum solutions with the latter being the desirable result. The evolutionary computation that has potential to produce a global solution to a problem due to the searching mechanisms that are inherent to the procedures is discussed. Various mechanisms may be introduced to the genetic algorithm routine which may eliminate the problems of premature convergence, thus enhancing the methods' chances of producing the best solution. The other, more traditional methods of optimisation described include Lagrange multipliers, Dynamic Programming, Local Search and Simulated annealing. 
Only the Dynamic Programming method guarantees a global optimum solution to an optimisation problem; however, for complex problems, the method could take a vast amount of time to locate a solution due to the potential for combinatorial explosion, since every possible solution is considered. The Lagrange multiplier method and the local search method are useful for quick location of a global minimum and are therefore useful when the topography of the optimisation problem is uni-modal. However, in a complex multi-modal problem, a global solution is less likely. The simulated annealing method has been more popular for solving complex multi-modal problems since it includes techniques for the search to avoid being trapped in local minimum solutions. An Artificial Neural Network and a Genetic Algorithm have been used to design a neural network for short-term load forecasting. The forecasting model has been used to produce a forecast of the load in the 24 hours of the forecast day concerned, using data provided by an Italian power company. The results obtained are promising. In this particular case, the comparison between the results from the Genetic Algorithm - Artificial Neural Network and the Back Propagation - Neural Network shows that the Genetic Algorithm - Artificial Neural Network does not provide a faster solution than the Back Propagation - Neural Network. The application of Evolutionary Programming to fault section estimation is investigated and a comparison made with a Genetic Algorithm approach. To enhance service reliability and to reduce power outages, rapid restoration of the power system is required. As a first step of restoration, the fault section should be estimated accurately and quickly. Fault Section Estimation (FSE) identifies faulted components in a power system by using information on the operation of protection relays and circuit breakers. However, this task is difficult, especially for cases where a relay or circuit breaker fails to operate and for multiple faults. An Evolutionary Programming (EP) approach has been developed for solving the FSE problem, including malfunctions of protection relays and/or circuit breakers and multiple fault cases. A comparison is made with the Genetic Algorithm (GA) approach at the same time. Two different population sizes are tested for each case. In general, EP showed faster computational speed than GA, by an average factor of about 13. The final results were almost the same. The convergence speed (the required number of generations to get an optimum result) is a very important factor in real-time applications. Test results show that EP is better than GA. However, as both EP and GA are evolutionary algorithms, their efficiencies are largely dependent on the complexity of the problem, which might differ from case to case. The use of Artificial Neural Networks to classify digital fault records is investigated, showing that Self Organising Maps could be useful for classifying records if integrated into other systems. Digital fault records are a very useful source of information to the protection engineer, assisting with the investigation of a suspected unwanted operation or failure to operate of a protection scheme. After a widespread power system disturbance, due to a storm for example, a large number of fault records can be produced. A method of automatically classifying fault records would be very helpful in reducing the amount of time spent in manual analysis, thus assisting the engineer to focus on records that need in-depth analysis.
Fault classification using rule-based methods has already been developed. The completed work is preliminary in nature, and an overview of an extension to this work, involving the extraction of frequency components from the digital fault record data and their use as input to a SOM network, is described.
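To make the GA-ANN combination concrete, the sketch below is a toy Python example with synthetic data, not the Italian load data or the network described in the thesis: the weights of a tiny one-hidden-layer network are evolved by a simple genetic algorithm so that the network reproduces a 24-hour load curve.

```python
import numpy as np

rng = np.random.default_rng(2)

hours = np.arange(24) / 24.0
# Synthetic daily load curve (per unit) standing in for historical data.
load = 0.6 + 0.3 * np.sin(2 * np.pi * (hours - 0.3)) + 0.05 * rng.normal(size=24)

N_HIDDEN = 6
N_W = 1 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1      # weights + biases of a 1-6-1 net

def forecast(w, x):
    w1 = w[:N_HIDDEN]; b1 = w[N_HIDDEN:2 * N_HIDDEN]
    w2 = w[2 * N_HIDDEN:3 * N_HIDDEN]; b2 = w[-1]
    hidden = np.tanh(np.outer(x, w1) + b1)
    return hidden @ w2 + b2

def fitness(w):                                   # negative mean squared error
    return -np.mean((forecast(w, hours) - load) ** 2)

pop = rng.normal(0.0, 1.0, (60, N_W))
for generation in range(300):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]       # keep the fittest individuals
    children = parents[rng.integers(0, 20, 60)] + rng.normal(0.0, 0.1, (60, N_W))
    children[:5] = parents[-5:]                   # elitism: carry the best forward
    pop = children

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("forecast for hour 18:", forecast(best, np.array([18 / 24.0]))[0])
```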
APA, Harvard, Vancouver, ISO, and other styles
30

Wong, King-sau, and 黃敬修. "Improving the performance of lifts using artificial intelligence techniques." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B2768295X.

Full text
Abstract:
Abstract of thesis entitled "Improving the Performance of Lifts Using Artificial Intelligence Techniques", submitted by Wong King Sau for the degree of Doctor of Philosophy at the University of Hong Kong in August 2003. An elevator group control system manages multiple elevators to serve hall calls in a building. Most elevator group control systems need to recognize the traffic pattern of the building and then change their control algorithms to improve the efficiency of the elevator system. However, the traffic flow in a building is very difficult to classify into distinct patterns. Traffic recognition systems can recognize certain traffic patterns, but mixed traffic patterns are difficult to recognize. The aim of this study was therefore to develop improved duplex elevator group control systems that do not need to recognize the traffic pattern. A fuzzy logic control unit and a genetic algorithms control unit were used. The fuzzy logic control unit integrates with the conventional duplex elevator group control system to improve performance, especially in mixed traffic patterns with intermittent heavy traffic demand. This system will send more than one elevator to a floor with heavy demand, according to the overall passenger traffic conditions in the building. The genetic algorithms control unit divides the building into three zones and assigns an appropriate number of elevators to each zone. The floors covered by each zone are adjusted every five minutes. This control unit optimizes elevator group control by equalizing the number of hall calls in each zone, the total elevator door opening time in each zone, and the number of floors served by each elevator. Both control units were tested with a computer simulator. The performance of the elevator system is given by indices such as average waiting time, wasted man-hours, and long waiting time percentage. The new performance index "wasted man-hours" indicates the total time spent by passengers in a building waiting for the lift service. Both proposed systems perform better than the conventional duplex control system.
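A compact, hypothetical Python illustration of the zoning idea (not the thesis's controller): contiguous floor ranges are assigned to three zones so that the expected number of hall calls is as equal as possible across zones. With only two zone boundaries an exhaustive search suffices, so it stands in here for the genetic algorithm, which becomes worthwhile once more objectives and elevators are added. The call figures are invented.

```python
import itertools
import numpy as np

# Expected hall calls per floor over the next interval (illustrative figures).
calls = np.array([9.0, 2.0, 1.5, 1.0, 2.5, 3.0, 1.0, 0.5, 2.0, 4.0, 1.0, 0.5])
n_floors = len(calls)

def imbalance(cut1, cut2):
    """Spread of hall-call totals across the three contiguous zones."""
    zones = [calls[:cut1], calls[cut1:cut2], calls[cut2:]]
    totals = [z.sum() for z in zones]
    return max(totals) - min(totals)

# Exhaustive search over the two zone boundaries (a stand-in for the GA).
best = min(((c1, c2) for c1, c2 in itertools.combinations(range(1, n_floors), 2)),
           key=lambda cuts: imbalance(*cuts))
print("zone boundaries after floors:", best,
      "zone call totals:", [calls[:best[0]].sum(),
                            calls[best[0]:best[1]].sum(),
                            calls[best[1]:].sum()])
```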
APA, Harvard, Vancouver, ISO, and other styles
31

Crespo-Sandoval, J. "Condition monitoring of outdoor insulation using artificial intelligence techniques." Thesis, Cardiff University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527122.

Full text
Abstract:
The work reported in this thesis is concerned with the application of artificial intelligence to the monitoring of outdoor insulation. The work comprised a comprehensive literature survey and the development of computerised systems for capturing, storing and processing data recorded in laboratory and field tests. Extensive programmes of pollution tests on insulating material samples and complete outdoor insulators have been carried out, and the results were analysed using an artificial intelligence technique. In addition, existing long-term field data from a natural pollution testing station have been analysed and classified. The extensive literature survey reviewed the mechanisms causing degradation and failure of insulators, techniques for monitoring insulator degradation and the application of artificial intelligence techniques to their condition monitoring. The data acquisition systems were designed to interface with the existing accelerated ageing unit and fog chamber facilities and to capture and store large quantities of leakage current data. Analysis of the laboratory test results on silicone rubber samples by means of the proposed artificial intelligence technique enabled certain types of leakage current waveshapes to be identified that were related to the extent of insulator degradation. Based on the results, a new technique was proposed for monitoring polymeric insulators and predicting imminent failure. Further analysis of the test results has revealed that the rate of increase of accumulated energy can be used as an indicator of imminent insulator failure; this result is new and has not been published before to the author's knowledge. Clean fog tests were performed on polluted insulators and the results analysed using the artificial intelligence technique. The effects of increasing insulator degradation, pollution severity and applied voltage were investigated. By applying a normalisation procedure, it was possible to apply the monitoring technique developed on insulator samples, and it was demonstrated that the technique can distinguish good insulators from those that have been subjected to severe degradation levels. A new analysis technique was developed to convert existing field data into an easily accessible format, to perform a diagnostic analysis of the data in order to indicate imminent insulator failure, and to act as a user-friendly interface for insulator monitoring. A computer programme was developed which incorporated the field data analysis and diagnostic procedure.
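The accumulated-energy indicator mentioned above can be sketched as follows. This is a generic Python example with synthetic leakage-current data and arbitrary thresholds, not the thesis's measurement system: the cumulative energy dissipated by the leakage current is tracked, and a sustained rise in its rate of increase is flagged as a warning of imminent degradation.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 50.0                                    # samples per second (illustrative)
t = np.arange(0, 600, 1 / fs)                # ten minutes of monitoring
voltage = 11e3 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)

# Synthetic leakage current: small at first, growing discharge activity later.
leakage = 1e-4 * np.sin(2 * np.pi * 50 * t) * (1 + 4 * (t / t[-1]) ** 3)
leakage += rng.normal(0, 2e-5, t.shape)

power = voltage * leakage                    # instantaneous dissipated power (W)
energy = np.cumsum(power) / fs               # accumulated energy (J)

# Rate of increase of accumulated energy over one-minute windows.
window = int(60 * fs)
rates = [(energy[i + window] - energy[i]) / 60.0
         for i in range(0, len(energy) - window, window)]
RATE_LIMIT = 2.0                             # W; an arbitrary alarm level for the sketch
for minute, rate in enumerate(rates):
    flag = "ALARM" if rate > RATE_LIMIT else "ok"
    print(f"minute {minute + 1}: mean energy rate {rate:6.2f} W  {flag}")
```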
APA, Harvard, Vancouver, ISO, and other styles
32

Beikzadeh, Mohammad Reza. "Automatic high-level synthesis based upon artificial intelligence techniques." Thesis, University of Essex, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315691.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Rafiq, M. Y. "Artificial intelligence techniques for the structural design of buildings." Thesis, University of Strathclyde, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.382446.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Sham, Siu Hung Robin. "Application of artificial intelligence techniques to conceptual bridge design." Thesis, Imperial College London, 1989. http://hdl.handle.net/10044/1/47651.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Werbelow, Wayne Louis. "The application of artificial intelligence techniques to software maintenance." Thesis, Kansas State University, 1985. http://hdl.handle.net/2097/9890.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Priest, Alexander. "The applications of artificial intelligence techniques in carcinogen chemistry." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:851d96ac-1329-4d79-b437-7ee48795fe60.

Full text
Abstract:
Computer-based drug design is a vital area of pharmaceutical chemistry; Quantitative Structure-Activity Relationships (QSARs), determined computationally from experimental observations, are crucial in identifying candidate drugs by early screening, saving time on synthesis and in vivo testing. This thesis investigates the viability and the practicalities of using Mass Spectra-based pseudo-molecular descriptors, in comparison with other molecular descriptor systems, to predict the carcinogenicity, mutagenicity and the chloride (Cl-) transport inhibiting ability of a variety of molecules, and in the first case, of chemotherapeutic drugs particularly. It does so by identifying a number of QSARs which link the physical properties of chemicals with their concomitant activities in a reliable and mathematical manner. First, this thesis confirms that carcinogenicity and mutagenicity are indeed predictable using a variety of Artificial Intelligence techniques, both supervised and unsupervised, information germane to pharmaceutical research groups interested in the preliminary screening of candidate anti-cancer drugs. Secondly, it demonstrates that Mass Spectral intensities possess great descriptive fidelity and shows that reducing the burden of dimensionality is not only important, but imperative; selecting this smaller set of orthogonal descriptors is best achieved using Principal Component Analysis, as opposed to the selection of a set of the most frequent fragments, or the use of every peak up to a number determined by the boundaries of supervised learning. Thirdly, it introduces a novel system of backpropagation and demonstrates that it is more efficient than its principal competitor at monitoring a series of connection weights when applied to this area of research, which requires complex relationships. Finally, it promulgates some preliminary conclusions about which AI techniques are applicable to certain problem scenarios, how these techniques might be applied, and the likelihood that that application will result in the identification of a series of reliable QSARs.
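As a hedged sketch of the descriptor pipeline, with synthetic data and generic scikit-learn models rather than the thesis's own spectra or backpropagation variant: mass-spectral intensity vectors are reduced with Principal Component Analysis, and the resulting orthogonal descriptors feed a small classifier that predicts a mutagenicity label.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)

# Synthetic "mass spectra": 300 molecules x 200 m/z intensity bins.
n_mol, n_bins = 300, 200
spectra = rng.random((n_mol, n_bins))
spectra[:, [10, 55, 120]] *= 5.0          # a few dominant fragment peaks
# Toy mutagenicity labels loosely tied to those fragment intensities.
labels = (spectra[:, 10] + spectra[:, 55] - spectra[:, 120] > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, labels, test_size=0.25, random_state=0)

# Reduce the burden of dimensionality: keep a small set of orthogonal descriptors.
pca = PCA(n_components=10).fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(pca.transform(X_train), y_train)

print("held-out accuracy:", clf.score(pca.transform(X_test), y_test))
```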
APA, Harvard, Vancouver, ISO, and other styles
37

Starkey, Andrew J. "Condition monitoring of ground anchorages using artificial intelligence techniques." Thesis, University of Aberdeen, 2001. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=217212.

Full text
Abstract:
Neural networks are a form of Artificial Intelligence based on the architecture of the human brain. They allow complicated non-linear relationships to be learnt from example data, and for further test data to be identified according to the relationship previously learnt. This allows the construction of control systems and diagnostic systems of geotechnical processes which were previously not possible due to their complicated non-linear nature. The main topic of research is the application of neural networks to the diagnosis of the condition of ground anchorages. Ground anchorages are in use in many engineering structures such as tunnels, retaining walls and dams and it has been reported that only 5-10% are routinely monitored during service. The conventional method of testing is load lift-off testing, which is expensive and time consuming. The patented technique, GRANIT, makes use of neural networks to learn the complicated relationship between the vibrational response of an anchorage to an applied axial impulse and its post-tension level. Research has been conducted into the parameters of the system which affect the diagnostic ability of the neural network. Further research into the application of the GRANIT technique to the identification of other faults in the anchorage has been conducted, such as change in free length, or gaps in the grouting. An automated procedure for the identification of the frequencies of interest in the response signatures of the GRANIT system has been investigated, and an example is given of an application of this automated procedure in the area of vibro-impact ground moling, a patented technique which uses both vibration and impact to maximise its penetration depth. Further research into the use of neural networks in an automated process has also been undertaken, and the development of a new technique is presented. This new technique has the potential of returning parameters of interest from any given group of signals, and has potential of application outwith geotechnical data. A patent application for this new technique has now been filed by the author.
APA, Harvard, Vancouver, ISO, and other styles
38

Turan, Kamil Hakan. "Reliability-based Optimization Of River Bridges Using Artificial Intelligence Techniques." Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613062/index.pdf.

Full text
Abstract:
Proper bridge design is based on consideration of structural, hydraulic, and geotechnical conformities at an optimum level. The objective of this study is to develop an optimization-based methodology to select appropriate dimensions for components of a river bridge such that the aforementioned design aspects can be satisfied jointly. The structural and geotechnical design parts use a statistically-based technique, artificial neural network (ANN) models. Therefore, relevant data from many bridge projects were collected and analyzed from different aspects to put them into matrix form. ANN architectures are used in the objective function of the optimization problem, which is modeled using Genetic Algorithms with penalty functions as the constraint-handling method. Bridge scour reliability comprises one of the constraints and is evaluated using the Monte Carlo simulation technique. All these mechanisms are assembled in a software framework, named AIROB. Finally, an application built on AIROB is presented to assess the outputs of the software by focusing on the evaluation of hydraulic-structure interactions.
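A simplified sketch of the reliability constraint, with illustrative distributions, limits and cost figures that are not taken from AIROB: Monte Carlo simulation estimates the probability that scour depth exceeds the foundation depth, and that probability is turned into a penalty term which an optimiser could add to the bridge cost objective.

```python
import numpy as np

rng = np.random.default_rng(5)

def scour_failure_probability(foundation_depth_m, n_samples=100_000):
    """P(scour depth > foundation depth) by Monte Carlo with toy distributions."""
    flow_depth = rng.lognormal(mean=1.0, sigma=0.3, size=n_samples)       # m
    velocity = rng.normal(2.0, 0.4, n_samples)                            # m/s
    pier_width = 1.5                                                      # m
    # Simplified scour estimate, loosely shaped like common empirical formulas.
    scour = 1.2 * pier_width ** 0.65 * flow_depth ** 0.35 * np.maximum(velocity, 0) ** 0.4
    return float(np.mean(scour > foundation_depth_m))

def penalised_cost(foundation_depth_m, target_pf=0.01, penalty_weight=1e4):
    material_cost = 800.0 * foundation_depth_m           # toy cost model
    pf = scour_failure_probability(foundation_depth_m)
    penalty = penalty_weight * max(0.0, pf - target_pf)   # GA-style penalty term
    return material_cost + penalty, pf

for depth in (2.0, 3.0, 4.0, 5.0):
    cost, pf = penalised_cost(depth)
    print(f"foundation depth {depth:.1f} m: P_f = {pf:.4f}, penalised cost = {cost:.0f}")
```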
APA, Harvard, Vancouver, ISO, and other styles
39

Forde, Bruce W. R. "An application of selected artificial intelligence techniques to engineering analysis." Thesis, University of British Columbia, 1989. http://hdl.handle.net/2429/29102.

Full text
Abstract:
This thesis explores the application of some of the more practical artificial intelligence (Al) techniques developed to date in the field of engineering analysis. The limitations of conventional computer-aided analysis programs provide the motivation for knowledge automation and development of a hybrid approach for constructing and controlling engineering analysis software. Artificial intelligence technology used in this thesis includes: object-oriented programming, generic application frameworks, event-driven architectures, and knowledge-based expert systems. Emphasis is placed on the implementation-independent description of objects, classes, methods, and inheritance using a simple graphical representation. The kinds of knowledge used in the analysis process, the programs that control this knowledge, and the resources that perform numerical computation are described as part of a hybrid system for engineering analysis. Modelling, solution, and interpretation activities are examined for a generic problem and a control framework is adopted for event-driven operation. An intelligent finite element analysis program called "SNAP" is developed to demonstrate the application of Al in the numerical analysis of two-dimensional linear problems in solid and structural mechanics. A step-by-step discussion is given for the design, implementation, and operation of the SNAP software to provide a clear understanding of the principles involved. The general conclusion of this thesis is that a variety of artificial intelligence techniques can be used to significantly improve the engineering analysis process, and that much research is still to be done. A series of projects suitable for completion by graduate students in the field of structural engineering are described at the end of the thesis.
APA, Harvard, Vancouver, ISO, and other styles
40

Sartoros, Christine. "Application of artificial intelligence techniques for inductively coupled plasma spectrometry." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0016/NQ44574.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Allen, Michael James. "Artificial intelligence techniques for efficient object location in image sequences." Thesis, University of Wolverhampton, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343257.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Su Liang. "Development of automated bearing condition monitoring using artificial intelligence techniques." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/195557/.

Full text
Abstract:
A recent series of tapered roller bearing tests have been conducted at the University of Southampton to evaluate the effectiveness of using multiple sensing technologies to detect incipient faults. The test rig was instrumented with on-line sensors including vibration, temperature and electrostatic wear and oil-line debris sensors. Off-line techniques were also used such as debris analysis and bearing surface examination. The electrostatic sensors, in particular, have the potential to detect early decay of tribological contacts within rolling element bearings. These sensors have the unique ability to detect surface charge associated with surface phase transformations, material transfer, tribofilm breakdown and debris generation. Thus, they have the capability to detect contact decay before conventional techniques such as vibration and debris monitoring. However, precursor electrostatic events can not always be clearly seen using time and frequency based techniques. Therefore, an intelligent system that can process signals from multiple sensors is needed to enable early and automatic detection of novel events and provide reasoning to these detected anomalies. Operators could then seek corroborative trends between sensors and set robust alarms to ensure safe running. This has been the motivation of this study.
APA, Harvard, Vancouver, ISO, and other styles
43

Sayers, William Keith Paul. "Artificial intelligence techniques for flood risk management in urban environments." Thesis, University of Exeter, 2015. http://hdl.handle.net/10871/21030.

Full text
Abstract:
Flooding is an important concern for the UK, as evidenced by the many extreme flooding events in the last decade. Improved flood risk intervention strategies are therefore highly desirable. The application of hydroinformatics tools, and optimisation algorithms in particular, which could provide guidance towards improved intervention strategies, is hindered by the necessity of performing flood modelling in the process of evaluating solutions. Flood modelling is a computationally demanding task; reducing its impact upon the optimisation process would therefore be a significant achievement and of considerable benefit to this research area. In this thesis sophisticated multi-objective optimisation algorithms have been utilised in combination with cutting-edge flood-risk assessment models to identify least-cost and most-benefit flood risk interventions that can be made on a drainage network. Software analysis and optimisation has improved the flood risk model performance. Additionally, artificial neural networks used as feature detectors have been employed as part of a novel development of an optimisation algorithm. This has alleviated the computational time-demands caused by using extremely complex models. The results from testing indicate that the developed algorithm with feature detectors outperforms (given limited computational resources available) a base multi-objective genetic algorithm. It does so in terms of both dominated hypervolume and a modified convergence metric, at each iteration. This indicates both that a shorter run of the algorithm produces a more optimal result than a similar length run of a chosen base algorithm, and also that a full run to complete convergence takes fewer iterations (and therefore less time) with the new algorithm.
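A hedged sketch of the feature-detector idea, using a generic scikit-learn surrogate, a random candidate pool instead of the full multi-objective GA, and invented objective functions: an inexpensive neural network is trained on previously evaluated intervention strategies and used to pre-screen new candidates, so that the expensive flood model is only run on the most promising ones.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

def expensive_flood_model(x):
    """Stand-in for a slow hydraulic simulation: returns (cost, flood damage)."""
    cost = x.sum()
    damage = 100.0 / (1.0 + 3.0 * x[0] + 1.5 * x[1] + 0.5 * x[2]) + rng.normal(0, 0.2)
    return cost, damage

# Initial archive of fully evaluated intervention strategies (pipe upsizing, storage, ...).
X = rng.random((60, 3)) * 5.0
Y = np.array([expensive_flood_model(x) for x in X])

# Cheap surrogate "feature detector" that predicts both objectives.
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=4000, random_state=0)
surrogate.fit(X, Y)

# Pre-screen a large batch of candidates; only simulate the most promising ones.
candidates = rng.random((1000, 3)) * 5.0
predicted = surrogate.predict(candidates)
score = predicted[:, 0] + predicted[:, 1]          # crude scalarisation for the sketch
chosen = candidates[np.argsort(score)[:10]]

for x in chosen:
    print("simulated objectives:", expensive_flood_model(x))
```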
APA, Harvard, Vancouver, ISO, and other styles
44

Monti, Matteo. "Non-evolutive pattern recognition techniques: An application in medical image diagnostics." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6286/.

Full text
Abstract:
The study of artificial intelligence aims to solve a class of problems that require cognitive processes which are difficult to encode in an algorithm. Visual recognition of shapes and figures, the interpretation of sounds, and games with incomplete knowledge all rely on the human ability to interpret partial input as if it were complete, and to act accordingly. In the first chapter of this thesis, a simple mathematical formalism is constructed to describe the act of making choices. The process of 'learning' is described in terms of the maximisation of a performance function over a parameter space for an ansatz of a function from a vector space to a finite, discrete set of choices, by means of a training set that describes examples of correct choices to be reproduced. Some of the most widespread artificial intelligence techniques are analysed in the light of this formalism, and some problems arising from their use are highlighted. In the second chapter, the same formalism is applied to a less intuitive but more functional redefinition of the performance function which, for a linear ansatz, allows the explicit formulation of a set of equations in the components of the vector in parameter space that identifies the absolute maximum of the performance function. The solution of this set of equations is treated by means of the contraction mapping theorem. A natural polynomial generalisation is also shown. In the third chapter, some examples to which the results of the second chapter can be applied are studied in more detail. The concept of the intrinsic degree of a problem is introduced. Several performance optimisations are also discussed, such as the elimination of zeros, analytical precomputation, fingerprinting, and the reordering of components for the partial expansion of high-dimensional scalar products. Finally, single-choice problems are introduced, that is, the class of problems for which a training set is available for only one choice. The fourth chapter discusses in more detail an application in the field of medical image diagnostics, in particular the problem of computer-aided detection of microcalcifications in mammograms.
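To give a concrete flavour of the formalism, the sketch below is a generic illustration only; the thesis's actual performance function and contraction-based solution are not reproduced here. It fits a linear ansatz from a feature vector to a finite set of choices by iteratively nudging the parameters towards the training examples, and measures performance as the fraction of correct choices.

```python
import numpy as np

rng = np.random.default_rng(7)

n_features, n_choices = 4, 3
true_W = rng.normal(size=(n_choices, n_features))
X = rng.normal(size=(500, n_features))
correct_choice = np.argmax(X @ true_W.T, axis=1)      # training set of correct choices

def choose(W, x):
    """Linear ansatz: the choice is the row of W with the largest score."""
    return int(np.argmax(W @ x))

def performance(W):
    return np.mean([choose(W, x) == c for x, c in zip(X, correct_choice)])

# Simple iterative update of the parameter vector (perceptron-style), standing in
# for the thesis's explicit fixed-point solution of the maximisation problem.
W = np.zeros((n_choices, n_features))
for _ in range(20):
    for x, c in zip(X, correct_choice):
        guess = choose(W, x)
        if guess != c:
            W[c] += 0.1 * x
            W[guess] -= 0.1 * x

print("fraction of correct choices on the training set:", performance(W))
```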
APA, Harvard, Vancouver, ISO, and other styles
45

Xia, T. A. "The application of artificial intelligence techniques to process identification and control." Thesis, Swansea University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.636701.

Full text
Abstract:
The application of artificial intelligence techniques (viz. neural networks, genetic algorithms and fuzzy logic systems) to process identification and control has been investigated with different systems. Neural networks and fuzzy logic systems are able to learn the dynamics of a process by training with a set of data obtained from that process and, subsequently, are able to provide a good predictive performance. Genetic algorithms and evolution strategies are non-gradient-based search schemes which facilitate non-linear system optimisation. Thus, they can be applied to the process industries in place of more traditional linear modelling and optimisation. Based on the standard backpropagation network (BPN), a new neural network - the extended backpropagation network (ExtBPN) - has been proposed and tested using different SISO, SIMO and MIMO systems. Two main disadvantages of the standard BPN, i.e. a long training time and poor ability to extrapolate outside the range of training data, are overcome to some extent by using the ExtBPN. A unified strategy for model-based predictive and model reference control has been developed, based on the optimisation of a cost function which contains the feedback of error information for the adjustment of future set-points in order to compensate for the mismatch between the model and the real process. In this way, the inherent advantages of both feedback and feedforward control have been utilised. Several new control strategies have been developed from this basis and tested with linear or non-linear, SISO or MIMO systems using neural network and fuzzy process models. These new control strategies (viz. generalised horizon adjusted predictive (GHAP) control, predictive direct model reference control (PDMRC), modified predictive internal model reference control (MPIMC) and predictive generic model reference control (PGMC)), in which the cost function is optimised using genetic algorithms and evolution strategies, have been applied successfully to different processes. It has been shown, further, that a variety of time-integral performance criteria can be employed for the design of PID and model-based predictive controllers.
APA, Harvard, Vancouver, ISO, and other styles
46

Lucouw, Alexander. "Distributed fault detection and diagnostics using artificial intelligence techniques / A. Lucouw." Thesis, North-West University, 2009. http://hdl.handle.net/10394/4110.

Full text
Abstract:
With the advancement of automated control systems in the past few years, the focus has also shifted to safer, more reliable systems with less harmful effects on the environment. With increased job mobility, less experienced operators could cause more damage through incorrect identification and handling of plant faults, often causing faults to progress to failures. The development of an automated fault detection and diagnostic system can reduce the number of failures by assisting the operator in making correct decisions. By providing information such as fault type, fault severity, fault location and cause of the fault, it is possible to do scheduled maintenance of small faults rather than unscheduled maintenance of large faults. Different fault detection and diagnostic systems have been researched and the best system chosen for implementation as a distributed fault detection and diagnostic architecture. The aim of the research is to develop a distributed fault detection and diagnostic system. Smaller building blocks are used instead of a single system that attempts to detect and diagnose all the faults in the plant. The phases that the research follows include an in-depth literature study, followed by the creation of a simplified fault detection and diagnostic system. When all the aspects concerning the simple model are identified and addressed, an advanced fault detection and diagnostic system is created, followed by an implementation of the fault detection and diagnostic system on a physical system.
Thesis (M.Ing. (Computer and Electronic Engineering))--North-West University, Potchefstroom Campus, 2009.
APA, Harvard, Vancouver, ISO, and other styles
47

Flitman, Andrew. "Towards the application of artificial intelligence techniques for discrete event simulation." Thesis, University of Warwick, 1986. http://wrap.warwick.ac.uk/51317/.

Full text
Abstract:
The possibility of incorporating Artificial Intelligence (A.I.) techniques into Visual Interactive Discrete Event Simulation was examined. After a study of the current state of the art, work was undertaken to investigate the usefulness of PROLOG as a simulation language. This led to the development of a working Simulation Engine, allowing simulations to be developed quickly. The way PROLOG facilitated development of the engine indicated its possible usefulness as a medium for controlling external simulations. Tests on the feasibility of this were made, resulting in the development of an assembler link which allows PROLOG to remotely communicate with and control procedural-language programs resident on a separate microcomputer. Experiments using this link were then made to test the application of A.I. techniques to current visual simulations. Studies were carried out on controlling the simulation, monitoring and learning from a simulation, using simulation as a window onto expert system performance, and manipulating the simulation. This study represents a practical attempt to understand and develop the possible uses of A.I. techniques within visual interactive simulation. The thesis concludes with a discussion of the advantages attainable through such a merger of techniques, followed by areas in which the research may be expanded.
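Independently of the PROLOG implementation, the core of a discrete event simulation engine can be sketched in a few lines. This is a generic Python example, not the engine described in the thesis: a time-ordered event queue is repeatedly popped, and each event handler may schedule further events.

```python
import heapq

class SimulationEngine:
    """Minimal discrete event simulation: a priority queue of (time, event) pairs."""

    def __init__(self):
        self.clock = 0.0
        self._queue = []
        self._counter = 0                      # tie-breaker for simultaneous events

    def schedule(self, delay, handler, *args):
        heapq.heappush(self._queue, (self.clock + delay, self._counter, handler, args))
        self._counter += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.clock, _, handler, args = heapq.heappop(self._queue)
            handler(self, *args)

# Toy model: customers arrive every 4 time units and are served for 3.
def arrival(sim, customer_id):
    print(f"t={sim.clock:5.1f}  customer {customer_id} arrives")
    sim.schedule(3.0, departure, customer_id)
    sim.schedule(4.0, arrival, customer_id + 1)

def departure(sim, customer_id):
    print(f"t={sim.clock:5.1f}  customer {customer_id} departs")

engine = SimulationEngine()
engine.schedule(0.0, arrival, 1)
engine.run(until=20.0)
```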
APA, Harvard, Vancouver, ISO, and other styles
48

White, Peter. "On the application of artificial intelligence techniques to heat exchanger design." Thesis, University of Ulster, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.281354.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Zhao, Weiqiang. "Improved condition monitoring of hydraulic turbines based on artificial intelligence techniques." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/672373.

Full text
Abstract:
As a type of renewable energy that can provide a rapid response to the requirements of the power grid, hydropower occupies a fairly important position in the energy market. In recent years, with the large-scale entry of new renewable energies (NREs) such as wind energy and solar energy, the stability of the power grid has been challenged: the intermittent power supply from the NREs requires the hydraulic turbines to work more often in off-design conditions and to regulate their output much more frequently than before. In this new scenario, several problems have appeared in hydraulic turbine units. In order to reduce maintenance periods and critical damage to the unit, condition monitoring techniques have proved to be a useful tool for operators. However, these techniques must be improved and updated in order to take account of this new situation for hydropower. At present, hydraulic turbines are monitored by different types of sensors; however, new data analysis technologies such as artificial intelligence have not been implemented in the systematic analysis of the prototypes. These techniques could improve the existing condition monitoring systems and could help to improve the diagnostic capacity for some critical problems where classical analysis may fail. In this study, existing monitoring and field test data from various types of turbines (pump turbine, Francis turbine and Pelton turbine) have been used, and several artificial intelligence (AI) techniques and data-driven methods have been applied in order to improve the existing condition monitoring techniques. Firstly, for the pump turbine analyzed, artificial neural networks (ANNs) have been used to generate vibration hill charts based on the indicators used for condition monitoring. This has helped to analyze abnormal behaviors of the machine and to propose better condition monitoring based on the generated maps, which can provide effective guidance for the operation plan of the unit. Secondly, the limits of operation of a large Francis turbine due to overload instability have been analyzed. AI techniques have been applied to existing data to analyze the feasibility of detecting the overload instability several seconds before it occurs. It is shown that, by implementing these techniques in the existing condition monitoring system, the operating range of the unit could be safely increased. Finally, for a failure that occurred in a Pelton turbine (a broken bucket), artificial neural networks combined with dimension reduction techniques have been used to build a model that can accurately predict the damage, which is helpful for scheduled maintenance. This is an article-based thesis, built on three journal papers published during the course of the work. These three papers concern improved hydro turbine condition monitoring and fault diagnosis based on AI techniques, and they are attached and commented on throughout this thesis.
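A hedged sketch of the vibration hill chart idea, using synthetic data and a generic scikit-learn regressor rather than the monitoring data or models used in the thesis: a network is fitted to vibration indicators recorded at various operating points (head, power) and then evaluated on a grid of operating conditions, so that abnormal new readings can be compared against the expected level.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)

# Historical operating points: net head (m), output power (MW) and an RMS vibration level.
head = rng.uniform(80.0, 120.0, 2000)
power = rng.uniform(20.0, 100.0, 2000)
vibration = (0.5 + 0.02 * np.abs(power - 0.8 * head)        # higher away from best efficiency
             + 0.002 * (head - 100.0) ** 2 / 10.0
             + rng.normal(0.0, 0.05, head.shape))

X = np.column_stack([head, power])
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(X, vibration)

# Expected vibration over a (head, power) grid: the "hill chart" used for monitoring.
H, P = np.meshgrid(np.linspace(80, 120, 5), np.linspace(20, 100, 5))
grid = np.column_stack([H.ravel(), P.ravel()])
expected = model.predict(grid).reshape(H.shape)
print(np.round(expected, 2))

# An online reading far above the expected level at its operating point is flagged.
reading, operating_point = 1.8, np.array([[100.0, 80.0]])
print("anomalous:", reading > model.predict(operating_point)[0] + 0.3)
```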
APA, Harvard, Vancouver, ISO, and other styles
50

Luwes, Nicolaas Johannes. "Artificial intelligence machine vision grading system." Thesis, Bloemfontein : Central University of Technology, Free State, 2014. http://hdl.handle.net/11462/35.

Full text
APA, Harvard, Vancouver, ISO, and other styles