Dissertations / Theses on the topic 'Expert systems (Computer science) – Industrial applications'


Consult the top 30 dissertations / theses for your research on the topic 'Expert systems (Computer science) – Industrial applications.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Van Niekerk, Theo. "Monitoring and diagnosis for control of an intelligent machining process." Thesis, Port Elizabeth Technikon, 2001. http://hdl.handle.net/10948/70.

Full text
Abstract:
A multi-level, modular control scheme to realize integrated process monitoring, diagnosis and control for intelligent machining is proposed and implemented. A PC-based hardware architecture is presented that manipulates machining process cutting parameters through a PMAC interface card, and that samples and processes process performance parameters by means of DSP interface cards. Controller hardware to interface the PC-based PMAC card to a machining process for the direct control of speed, feed and depth of cut is described. Sensors that directly measure on-line process performance parameters, including cutting forces, cutting sound, tool-workpiece vibration, cutting temperature and spindle current, are described. The indirect measurement of the performance parameter surface roughness, and tool wear monitoring, through the use of NF sensor fusion modeling, is described and verified. An object-based software architecture, with corresponding user interfaces (using Microsoft Visual C++ Foundation Classes and implemented C++ classes for sending motion control commands to the PMAC and receiving processed on-line sensor data from the DSP), is explained. The software structure indicates all the components necessary for integrating the monitoring, diagnosis and control scheme. C-based software code, executed on the DSP for real-time sampling, filtering and FFT processing of sensor signals, is explained. Making use of experimental data and regression analysis, analytical relationships between the cutting parameters (independent) and each of the performance parameters (dependent) are obtained and used to simulate the machining process. A fuzzy relation that contains values determined from statistical data (indicating the strength of connection between the independent and dependent variables) is proposed. The fuzzy relation forms the basis of a diagnostic scheme that is able to intelligently determine which independent variable to change when a machining performance parameter exceeds its control limits. The intelligent diagnosis scheme is extensively tested using the machining process simulation.
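To make the fuzzy-relation diagnosis idea above concrete, here is a minimal, hypothetical Python sketch: a matrix of connection strengths links cutting parameters to performance parameters, and when a performance parameter exceeds its control limit the most strongly connected cutting parameter is chosen for adjustment. All names, limits and strength values are invented; this is not the thesis's implementation.

```python
# Illustrative sketch of fuzzy-relation based diagnosis.
# Strength of connection (0..1) between each performance parameter (dependent)
# and each cutting parameter (independent); values are invented.
R = {
    "cutting_force":     {"speed": 0.3, "feed": 0.8, "depth_of_cut": 0.9},
    "vibration":         {"speed": 0.7, "feed": 0.5, "depth_of_cut": 0.4},
    "surface_roughness": {"speed": 0.6, "feed": 0.9, "depth_of_cut": 0.3},
}

CONTROL_LIMITS = {"cutting_force": 1500.0,    # N
                  "vibration": 2.0,           # g RMS
                  "surface_roughness": 1.6}   # micrometres Ra

def diagnose(measurements):
    """For every performance parameter out of limits, return the cutting
    parameter with the strongest fuzzy connection to it (the one to adjust)."""
    actions = {}
    for perf, value in measurements.items():
        if value > CONTROL_LIMITS[perf]:
            strengths = R[perf]
            actions[perf] = max(strengths, key=strengths.get)
    return actions

print(diagnose({"cutting_force": 1720.0, "vibration": 1.1, "surface_roughness": 1.9}))
# -> {'cutting_force': 'depth_of_cut', 'surface_roughness': 'feed'}
```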
2

Landry, Jacques-André. "Computer software for the control of potato storage environment." Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=41668.

Full text
Abstract:
Much research has proven that computer-controlled vegetable storage can achieve better storage conditions than traditional control systems. During the last 10 years, the use of microcomputer-based environmental control systems has become commonplace. However, to take full advantage of this computerization of the control process, it is not enough only to program the control functions that are performed by normal analog equipment; new and better control strategies must be developed. Recent advances in computer technology have made possible the development of expert systems, a branch of artificial intelligence. One of the advantages of developing such a system is that it provides a reasoning tool which approaches the level of proficiency human experts exhibit in the field. The application of new control methods using expert systems has been extensively demonstrated for greenhouse environments; however, the application of expert systems to the control of vegetable storage is still to be investigated. In the first phase of this project, sophisticated control software was developed and implemented using a conventional algorithm-based programming language. Throughout three years of experimentation in an industrial potato storage facility, the software proved to be appropriate for the control of the storage environmental parameters (temperature and relative humidity). During the second phase, the application of an expert system for the on-line control of potato storage was explored, and a rule-based expert system that could replace the conventional algorithm-based control routines was developed. Integrating the expert system into the control software will result in highly efficient control software which can easily be maintained and improved as new knowledge emerges. The use of an expert system also makes it possible to represent heuristic knowledge in the form of rules, which was not possible with conventional algorithm-based programming.
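As a rough illustration of the rule-based control routines discussed above, the following sketch encodes a couple of IF-THEN rules for a storage environment. The setpoints, dead-bands and actions are invented, and a real controller would handle many more conditions (sensor faults, outside-air state, actuator interlocks).

```python
# Minimal sketch of a rule-based control step for a vegetable store (toy values).
SETPOINT_T = 4.0      # degrees C
SETPOINT_RH = 95.0    # percent
DEADBAND_T = 0.5
DEADBAND_RH = 2.0

def control_step(temp_c, rh_pct):
    """Fire simple IF-THEN rules and return the list of actions to take."""
    actions = []
    if temp_c > SETPOINT_T + DEADBAND_T:
        actions.append("run refrigeration / ventilate with cooler outside air")
    elif temp_c < SETPOINT_T - DEADBAND_T:
        actions.append("reduce ventilation / enable heating")
    if rh_pct < SETPOINT_RH - DEADBAND_RH:
        actions.append("run humidifier")
    elif rh_pct > SETPOINT_RH + DEADBAND_RH:
        actions.append("increase air exchange to shed moisture")
    return actions or ["hold current state"]

print(control_step(5.1, 91.0))
```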
3

Vom Braucke, Troy S. "Establishment of a database for tool life performance." Swinburne University of Technology, 2004. http://adt.lib.swin.edu.au./public/adt-VSWT20050914.085324.

Full text
Abstract:
The cutting tool industry has evolved over the last half century to the point where an increasing range and complexity of cutting tools are available for metal machining. This highlighted a need to provide an intelligent, user-friendly system of tool selection and recommendation that can also provide predictive economic performance data for engineers and end-users alike. Such an 'expert system' was developed for a local manufacturer of cutting tools in the form of a relational database to be accessed over the Internet. A number of predictive performance models for various machining processes were reviewed; however, they did not encompass the wide range of variables encountered in metal machining, so adapting these existing models for an expert system was judged to be economically prohibitive at this time. Interrogation of published expert systems from cutting tool manufacturers showed the knowledge-engineered principle to be a common approach to transferring economic and technological information to an end-user, the key advantage being the flexibility to allow further improvements as new knowledge is gained. As such, a relational database was built on the knowledge-engineered principle, using skilled, craft-oriented knowledge, to establish an expert system for the selection and performance assessment of cutting tools. An investigation into the tapping of austenitic stainless steels was undertaken to develop part of a larger expert system. The expert system was then interrogated in this specific area in order to challenge, by experiment, the skilled craft-oriented knowledge it contained. The experimental results were incorporated into the database where appropriate, providing a user-friendly working expert system for intelligent cutting tool selection, recommendation and performance data.
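The "relational database as expert system" idea can be pictured with a toy sketch like the one below, which stores hypothetical tapping recommendations and retrieves them by work material and operation. The schema, table names and data are invented for illustration and are not taken from the thesis.

```python
# Toy relational store of cutting-tool recommendations (invented data).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE recommendation (
    work_material TEXT, operation TEXT, tool TEXT,
    speed_m_min REAL, expected_life_min REAL)""")
conn.executemany(
    "INSERT INTO recommendation VALUES (?,?,?,?,?)",
    [("304 stainless", "tapping", "spiral-flute HSS-E tap",   8.0, 25.0),
     ("304 stainless", "tapping", "forming tap, TiN coated", 12.0, 40.0),
     ("mild steel",    "tapping", "spiral-point HSS tap",    15.0, 60.0)])

# Query: recommend tools for a given material and operation, best life first.
rows = conn.execute(
    """SELECT tool, speed_m_min, expected_life_min FROM recommendation
       WHERE work_material = ? AND operation = ?
       ORDER BY expected_life_min DESC""",
    ("304 stainless", "tapping")).fetchall()
for tool, speed, life in rows:
    print(f"{tool}: {speed} m/min, expected life {life} min")
```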
4

Diaz, Zarate Gerardo Daniel. "A knowledge-based system for estimating the duration of cast in place concrete activities." FIU Digital Commons, 1992. http://digitalcommons.fiu.edu/etd/2806.

Full text
5

Markarian, Naro R. "Environmental control of vegetable storage environments." Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=31268.

Full text
Abstract:
A large-scale, state-of-the-art experimental storage facility was constructed on the Macdonald Campus of McGill University. This facility will serve as a tool to further investigate many of the laboratory experiments performed in agricultural and food science, by providing a representation of actual storage facilities in use in the industry today. The facility was fully instrumented to provide valuable data on the stored commodity and its environment. Custom control software with a user-friendly graphical interface was developed. This fully automated software allows data acquisition and control of temperature and relative humidity in the experimental storage facility.
Experiments were performed and the control software provided adequate temperature and relative humidity control. The controller was based on a conventional PID (proportional, integral and derivative) controller. To further improve control of the storage facility, a novel multivariable PID controller was developed using enthalpy, which encompasses both temperature and relative humidity, as the process variable. The novel controller was tested using a mathematical model developed for the storage facility. Simulations were performed comparing the performance of the novel multivariable controller to two other conventional controllers. The results demonstrate that the novel multivariable PID controller is capable of controlling temperature and relative humidity better than the other two conventional control techniques.
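A minimal sketch of the enthalpy-based multivariable control idea: temperature and relative humidity are folded into a single moist-air enthalpy value (using standard psychrometric approximations) and one PID loop acts on the enthalpy error. The gains, setpoints and sensor readings are invented; this is not the thesis's controller.

```python
# Sketch: PID control on moist-air enthalpy computed from T and RH.
import math

P_ATM = 101325.0  # Pa

def enthalpy_kj_per_kg(temp_c, rh_pct):
    p_ws = 610.94 * math.exp(17.625 * temp_c / (temp_c + 243.04))  # Magnus approximation
    p_w = (rh_pct / 100.0) * p_ws
    w = 0.622 * p_w / (P_ATM - p_w)            # humidity ratio, kg water / kg dry air
    return 1.006 * temp_c + w * (2501.0 + 1.86 * temp_c)

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

setpoint_h = enthalpy_kj_per_kg(4.0, 95.0)        # target storage condition
controller = PID(kp=2.0, ki=0.1, kd=0.5, dt=60.0) # invented gains, 60 s sample time
measured_h = enthalpy_kj_per_kg(5.2, 90.0)        # current sensor readings
print(controller.update(setpoint_h, measured_h))  # signed control effort
```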
6

Karandikar, Swanandesh S. "Expert system applications in architecture." Thesis, Virginia Tech, 1989. http://hdl.handle.net/10919/44116.

Full text
Abstract:
This study proposes an Architectural Expert System (AES) to act as a design partner for architectural designers. Architectural designers face the very complex task of searching a solution space that is a labyrinth of several domains, ranging from the social to the cultural and from the aesthetic to the scientific. Each of these domains brings its own experts. After progressing through tedious analytical procedures involving the physical principles of architecture, and applying knowledge gained from experience, these experts are able to convert raw data into useful design guidelines.

Research in the field of artificial intelligence has developed techniques that can capture such expertise in a computer program, which then emulates the expert. This technology is known as an Expert System (ES). This study uses the technology to develop a system to aid architectural design. An AES model is derived from a literature review. As a system based on this model is complex and would require custom-built software, an alternative is developed based on the derived model. From this alternative, a prototype is developed for energy audit and energy conservation by capturing the expertise of an energy-conscious design expert. This prototype module is one component of a sub-system of the AES and provides an example for further modules. Various areas, such as design, architecture, artificial intelligence and expert systems technology, and energy-conscious design and energy conservation, converge and become parts of this study.
Master of Science

7

Akladios, Magdy. "Safety by design: an expert systems approach." Morgantown, W. Va.: [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=1033.

Full text
Abstract:
Thesis (Ph. D.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains xi, 238 p. : ill. (some col.) Includes abstract. Includes bibliographical references (p. 231-238).
8

Venneti, Vikram V. "Texpert-expert system for evaluating product design for worker's safety and health." Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=679.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains x, 161 p. : ill. (some col.) Includes abstract. Includes bibliographical references (p. 156-161).
9

Shen, Yan 1954. "ADVICE: AN EXPERT SYSTEM TO HELP EVALUATE GRADUATE STUDY PLANS OF SYSTEMS & INDUSTRIAL ENGINEERING STUDENTS." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/291320.

Full text
10

Fan, Yu. "Continuous time Bayesian Network approximate inference and social network applications." Diss., [Riverside, Calif.] : University of California, Riverside, 2009. http://proquest.umi.com/pqdweb?index=0&did=1957308751&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268330625&clientId=48051.

Full text
Abstract:
Thesis (Ph. D.)--University of California, Riverside, 2009.
Includes abstract. Title from first page of PDF file (viewed March 8, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 130-133). Also issued in print.
11

Woodford, Brendon James. "Connectionist-Based Intelligent Information Systems for image analysis and knowledge engineering: applications in horticulture." University of Otago. Department of Information Science, 2008. http://adt.otago.ac.nz./public/adt-NZDU20080515.111549.

Full text
Abstract:
New Zealand's main export earnings come from primary production, including agriculture, horticulture, and viticulture. One of the major contributors within horticulture is the production of quality export-grade fruit, specifically apples. To maintain a competitive advantage, the systems and methods used to grow the fruit are constantly being refined and are increasingly based on data collected and analysed both by the orchardists who grow the produce and by researchers who refine the methods used to determine high levels of fruit quality. Supporting this data analysis and the resulting decision-making process requires efficient and reliable tools. This thesis addresses the issue by applying the techniques of Connectionist-Based Intelligent Information Systems (CBIIS) for image analysis and knowledge discovery. Using advanced neurocomputing techniques and a novel knowledge engineering methodology, it seeks solutions to a set of specific problems within the horticultural domain. In particular, it describes a methodology based on previous research into neuro-fuzzy systems for knowledge acquisition, manipulation, and extraction, and furthers this area by introducing a novel knowledge-based architecture for knowledge discovery using an on-line/real-time incremental learning system based on the Evolving Connectionist System (ECOS) paradigm, known as the Evolving Fuzzy Neural Network (EFuNN). The emphasis of this work is knowledge discovery from these data sets using a novel rule-insertion and rule-extraction method. The advantage of this method is that it can operate on data sets of limited size. It can be used to validate the results produced by the EFuNN and also allows greater insight into which aspects of the collected data contribute to the development of high-quality produce.
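The "evolving" behaviour of ECOS-style networks can be illustrated with a small sketch: a new rule node is created whenever an incoming example is too far from every existing node, otherwise the nearest node is refined. The threshold, learning rate and data are invented, and a real EFuNN additionally maintains fuzzy input and output connection layers; this is only a toy analogue of the idea.

```python
# Toy "evolving nodes" sketch: nodes are created or refined on-line.
import numpy as np

class EvolvingNodes:
    def __init__(self, distance_threshold=0.3, learning_rate=0.1):
        self.nodes = []                    # list of centre vectors
        self.dthr = distance_threshold
        self.lr = learning_rate

    def update(self, x):
        x = np.asarray(x, dtype=float)
        if not self.nodes:
            self.nodes.append(x.copy())
            return 0
        dists = [np.linalg.norm(x - c) for c in self.nodes]
        i = int(np.argmin(dists))
        if dists[i] > self.dthr:           # nothing close enough: evolve a new node
            self.nodes.append(x.copy())
            return len(self.nodes) - 1
        self.nodes[i] += self.lr * (x - self.nodes[i])   # refine the nearest node
        return i

model = EvolvingNodes()
for sample in [[0.1, 0.2], [0.12, 0.22], [0.8, 0.9], [0.82, 0.88]]:
    model.update(sample)
print(len(model.nodes), "rule nodes created")   # expect 2 for this toy data
```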
12

Jamieson, Andrew George. "A novel systems design approach to wireless sensor networks for industrial applications." Thesis, University of Glasgow, 2008. http://theses.gla.ac.uk/424/.

Full text
Abstract:
Wireless Sensor Networks are a constantly evolving field spanning the Electronic Engineering and Computer Science domains; where the primary tasks of computation, communication and sensing are combined into ever smaller physical node devices. By utilising advanced multi-hop mesh networking techniques, scores of these nodes can form complex heterogeneous networks, where the efforts of discrete nodes combine to achieve a common goal. This themed Engineering Doctorate portfolio describes a four year period where the Research Engineer, in conjunction with the sponsoring company, The Kelvin Institute Ltd, undertook a range of interlinked research, development and business projects. Each venture was closely aligned to the technical and commercial interests of the sponsor, initiated in late 2003 by a preliminary look at the state-of-the-art and evaluation of early development nodes for location services. The primary project stemmed from this initial research and was undertaken with a collaborating company in the UK rail industry. Results from this work inspired a further project considering the use of security features to provide a new routing methodology for Wireless Sensor Networks and other ad-hoc topologies. In addition to the technical merit and academic contributions to the field, this themed portfolio considers the equally important commercial and business aspects, highlighting notable events and achievements throughout the course of the Engineering Doctorate programme.
13

Shraim, Mustafa S. "An expert system for designing statistical experiments." Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182516879.

Full text
14

Araiza, Roberto. "The use of interval-related expert knowledge in processing 2-D and 3-D data with an emphasis on applications to geosciences and biosciences /." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
15

Maema, Mathe. "OVR : a novel architecture for voice-based applications." Thesis, Rhodes University, 2011. http://hdl.handle.net/10962/d1006694.

Full text
Abstract:
Despite the inherent limitation of accessing information serially, voice applications are growing in popularity as computing technologies advance. This is a positive development, because voice communication offers a number of benefits over other forms of communication. For example, voice may be better for delivering services to users whose eyes and hands may be engaged in other activities (e.g. driving) or to semi-literate or illiterate users. This thesis proposes a knowledge-based architecture for building voice applications to help reduce the limitations of serial access to information. The proposed architecture, called OVR (Ontologies, VoiceXML and Reasoners), uses a rich backend that represents knowledge via ontologies and utilises reasoning engines to reason with it in order to generate intelligent behaviour. Ontologies were chosen over other knowledge representation formalisms because of their expressivity and executable format, and because current trends suggest a general shift towards the use of ontologies in many systems used for information storing and sharing. For the frontend, the architecture uses VoiceXML, the emerging de facto standard for voice-automated applications. A functional prototype was built for an initial validation of the architecture. The system is a simple voice application that helps locate information about service providers that offer HIV (Human Immunodeficiency Virus) testing. We called this implementation HTLS (HIV Testing Locator System). The functional prototype was implemented using a number of technologies. OWL API, a Java interface designed to facilitate manipulation of ontologies authored in OWL, was used to build a customised query interface for HTLS. The Pellet reasoner was used to support queries to the knowledge base, and Drools (the JBoss rule engine) was used for processing dialog rules. VXI was used as the VoiceXML browser and an experimental softswitch called iLanga as the bridge to the telephony system. (At the heart of iLanga is Asterisk, a well-known PBX-in-a-box.) HTLS behaved properly under system testing, providing the sought initial validation of OVR.
16

Miller, Richard Allen. "Computer simulation of continuous fermentation of glucose to ethanol with the use of an expert system for parameter calculations and applications for bioreactor control." Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/41545.

Full text
17

Ji, Katrina Yun. "ADAP: A component-based model using design patterns with applications in E-Commerce." CSUSB ScholarWorks, 2000. https://scholarworks.lib.csusb.edu/etd-project/1694.

Full text
18

Cimiano, Philipp. "Ontology learning and population from text: algorithms, evaluation and applications." New York, NY: Springer, 2006. http://www.loc.gov/catdir/enhancements/fy0824/2006931701-d.html.

Full text
19

Gilkinson, John C. "An expert scheduling system utilizing a genetic algorithm in solving a multi-parameter job shop problem." Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1175881721.

Full text
20

Uwais, Syed Muhammad. "Integration of expert system and analytic hierarchical process for the selection and evaluation of R&D projects in the pharmaceutical industry." Ohio : Ohio University, 1995. http://www.ohiolink.edu/etd/view.cgi?ohiou1178823422.

Full text
21

Conroy, Justin Anderson. "Analysis of adaptive neuro-fuzzy network structures." Thesis, Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/19684.

Full text
22

Clark, Matthew C. "Knowledge guided processing of magnetic resonance images of the brain." University of South Florida, 2001. http://purl.fcla.edu/fcla/etd/SFE0000001.

Full text
Abstract:
Includes vita.
Title from PDF of title page.
Document formatted into pages; contains 222 pages.
Includes bibliographical references.
Text (Electronic thesis) in PDF format.
This dissertation presents a knowledge-guided expert system that is capable of applying routines for multispectral analysis, (un)supervised clustering, and basic image processing to automatically detect and segment brain tissue abnormalities, and then label glioblastoma-multiforme brain tumors in magnetic resonance volumes of the human brain. The magnetic resonance images used here consist of three feature images (T1-weighted, proton density, T2-weighted) and the system is designed to be independent of a particular scanning protocol. Separate but contiguous 2D slices in the transaxial plane form a brain volume. This allows complete tumor volumes to be measured and, if repeat scans are taken over time, the system may be used to monitor tumor response to past treatments and aid in the planning of future treatment. Furthermore, once processing begins, the system is completely unsupervised, thus avoiding the problems of human variability found in supervised segmentation efforts. Each slice is initially segmented by an unsupervised fuzzy c-means algorithm. The segmented image, along with its respective cluster centers, is then analyzed by a rule-based expert system which iteratively locates tissues of interest based on the hierarchy of cluster centers in feature space. Model-based recognition techniques analyze tissues of interest by searching for expected characteristics and comparing those found with previously defined qualitative models. Normal/abnormal classification is performed through a default reasoning method: if a significant model deviation is found, the slice is considered abnormal; otherwise, the slice is considered normal. Tumor segmentation in abnormal slices begins with multispectral histogram analysis and thresholding to separate suspected tumor from the rest of the intra-cranial region. The tumor is then refined with a variant of seed growing, followed by spatial component analysis and a final thresholding step to remove non-tumor pixels. The knowledge used in this system was extracted from general principles of magnetic resonance imaging, the distributions of individual voxels and cluster centers in feature space, and anatomical information. Knowledge is used both for single-slice processing and for information propagation between slices. A standard rule-based expert system shell (CLIPS) was modified to include the multispectral analysis, clustering, and image processing tools. A total of sixty-three volume data sets from eight patients and seventeen volunteers (four with and thirteen without gadolinium enhancement), acquired from a single magnetic resonance imaging system with slightly varying scanning protocols, were available for processing. All volumes were processed for normal/abnormal classification. Tumor segmentation was performed on the abnormal slices and the results were compared with a radiologist-labeled 'ground truth' tumor volume and with tumor segmentations created by applying supervised k-nearest neighbors, a partially supervised variant of the fuzzy c-means clustering algorithm, and a commercially available seed growing package. The results of the developed automatic system generally correspond well to ground truth, both on a per-slice basis and, more importantly, in tracking total tumor volume during treatment over time.
System requirements: World Wide Web browser and PDF reader.
Mode of access: World Wide Web.
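For readers unfamiliar with the unsupervised step that starts this pipeline, here is a minimal fuzzy c-means implementation on toy one-dimensional "intensity" data. No spatial information is used and all parameters are invented; this is not the dissertation's code.

```python
# Minimal fuzzy c-means clustering on toy 1-D intensity data.
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
    """X: (n, d) data. Returns (centers, memberships U of shape (c, n))."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0, keepdims=True)          # membership columns sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=0, keepdims=True)
    return centers, U

# Toy "voxel intensities": three tissue-like intensity groups.
X = np.concatenate([np.random.normal(mu, 5, 200) for mu in (40, 100, 160)]).reshape(-1, 1)
centers, U = fuzzy_cmeans(X, c=3)
print(np.sort(centers.ravel()))                 # roughly [40, 100, 160]
```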
23

Kanumury, Rajesh. "Integrating business and engineering processes in manufacturing environment using AI concepts." Ohio : Ohio University, 1995. http://www.ohiolink.edu/etd/view.cgi?ohiou1179423333.

Full text
24

Julio, Marcia Regina Ferro Moss. "Um estudo de metricas de similaridade em sistemas baseados em casos aplicados a area da saude." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276510.

Full text
Abstract:
Advisors: Gilberto Shigueo Nakamiti, Heloisa Vieira da Rocha
Dissertation (professional master's degree) - Universidade Estadual de Campinas, Instituto de Computação
Abstract: When choosing a solution to a problem, humans often draw on past experiences with similar problems, which can help predict success or failure. Case-Based Systems (CBS) can use previous solutions to interpret a new situation or to create an appropriate solution for a new problem. This work presents a study of similarity metrics in case-based systems, together with an application in the health area, more specifically concerning lateral epicondylitis, an elbow tendinitis. The study of similarity metrics was based on a literature review of Case-Based Reasoning and, above all, on the learning obtained from developing the CBR application in the health area. Health professionals took part in and supported the development of the application, supplied real cases for the case base, and participated in validation testing.
Master's degree
Software Engineering
Professional Master in Computing
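A small sketch of the kind of weighted similarity metric such case-based systems study: each attribute contributes a local similarity, and the weighted sum ranks the stored cases. The attributes, weights and cases below are invented and purely illustrative, not drawn from the dissertation's case base.

```python
# Toy weighted similarity for case retrieval (invented attributes and cases).
WEIGHTS = {"age": 0.2, "pain_level": 0.3, "grip_strength": 0.3, "weeks_of_symptoms": 0.2}
RANGES  = {"age": 60.0, "pain_level": 10.0, "grip_strength": 50.0, "weeks_of_symptoms": 52.0}

def local_sim(attr, a, b):
    """Numeric local similarity in [0, 1]: 1 minus the normalised absolute difference."""
    return 1.0 - min(abs(a - b) / RANGES[attr], 1.0)

def global_sim(query, case):
    return sum(w * local_sim(attr, query[attr], case[attr]) for attr, w in WEIGHTS.items())

case_base = [
    {"id": 1, "age": 45, "pain_level": 7, "grip_strength": 20, "weeks_of_symptoms": 6,  "outcome": "physiotherapy"},
    {"id": 2, "age": 30, "pain_level": 3, "grip_strength": 40, "weeks_of_symptoms": 2,  "outcome": "rest and brace"},
    {"id": 3, "age": 55, "pain_level": 9, "grip_strength": 12, "weeks_of_symptoms": 30, "outcome": "specialist referral"},
]

query = {"age": 48, "pain_level": 8, "grip_strength": 18, "weeks_of_symptoms": 8}
best = max(case_base, key=lambda c: global_sim(query, c))
print(best["id"], best["outcome"], round(global_sim(query, best), 3))
```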
25

Jones, Timothy (Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW). "Automated feature recognition system for supporting engineering activities downstream of conceptual design." 2007. http://handle.unsw.edu.au/1959.4/40486.

Full text
Abstract:
Transfer of information between CAD models and downstream manufacturing process planning software typically involves redundant user interaction. Many existing tools are process-centric and unsuited for selection of a "best process" in the context of existing concurrent engineering design tools. A computer based Feature-Recognition (FR) process is developed to extract critical manufacturing features from engineering product CAD models. FR technology is used for automating the extraction of data from CAD product models and uses wire-frame geometry extracted from an IGES neutral file format. Existing hint-based feature recognition techniques have been extended to encompass a broader range of manufacturing domains than typical in the literature, by utilizing a combination of algorithms, each successful at a limited range of features. Use of wire-frame models simplifies product geometry and has the potential to support rapid manufacturing shape evaluation at the conceptual design stage. Native CAD files are converted to IGES neutral files to provide geometry data marshalling to remove variations in user modelling practice, and to provide a consistent starting point for FR operations. Wire-frame models are investigated to reduce computer resources compared to surface and solid models, and provide a means to recover intellectual property in terms of manufacturing design intent from legacy and contemporary product models. Geometric ambiguity in regard to what is 'solid' and what is not has plagued wire-frame FR development in the past. A new application of crossing number theory (CNT) has been developed to solve the wire-frame ambiguity problem for a range of test parts. The CNT approach works satisfactorily for products where all faces of the product can be recovered and is tested using a variety of mechanical engineering parts. Platform independent tools like Extensible Mark-up Language are used to capture data from the FR application and provide a means to separate FR and decision support applications. Separate applications are composed of reusable software modules that may be combined as required. Combining rule-based and case-based reasoning provides decision support to the manufacturing application as a means of rejecting unsuitable processes on functional and economic grounds while retaining verifiable decision pathways to satisfy industry regulators.
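The crossing-number idea behind the disambiguation step can be shown in two dimensions: a point lies inside a closed boundary exactly when a ray cast from it crosses the boundary an odd number of times. The sketch below is a generic 2-D version of that test, not the thesis's 3-D wire-frame implementation.

```python
# Generic 2-D crossing-number (point-in-polygon) test.
def point_in_polygon(pt, polygon):
    """polygon: list of (x, y) vertices in order. Ray is cast in the +x direction."""
    x, y = pt
    crossings = 0
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through the point?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge meets that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                crossings += 1
    return crossings % 2 == 1   # odd number of crossings -> inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon((5, 5), square))   # True
print(point_in_polygon((15, 5), square))  # False
```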
26

Moodley, Deshendran. "Ontology driven multi-agent systems : an architecture for sensor web applications." Thesis, 2009. http://hdl.handle.net/10413/8148.

Full text
Abstract:
Advances in sensor technology and space science have resulted in the availability of vast quantities of high-quality earth observation data. This data can be used for monitoring the earth and for enhancing our understanding of natural processes. Sensor Web researchers are working on constructing a worldwide computing infrastructure that enables dynamic sharing and analysis of complex heterogeneous earth observation data sets. Key challenges that are currently being investigated include data integration; service discovery, reuse and composition; semantic interoperability; and system dynamism. Two emerging technologies that have shown promise in dealing with these challenges are ontologies and software agents. This research investigates how these technologies can be integrated into an Ontology Driven Multi-Agent System (ODMAS) for the Sensor Web. The research proposes an ODMAS framework and an implemented middleware platform, i.e. the Sensor Web Agent Platform (SWAP). SWAP deals with ontology construction, ontology use, and agent-based design, implementation and deployment. It provides a semantic infrastructure, an abstract architecture, an internal agent architecture and a Multi-Agent System (MAS) middleware platform. Distinguishing features include: the incorporation of Bayesian Networks to represent and reason about uncertain knowledge; ontologies to describe system entities such as agent services, interaction protocols and agent workflows; and a flexible adapter-based MAS platform that facilitates agent development, execution and deployment. SWAP aims to guide and ease the design, development and deployment of dynamic alerting and monitoring applications. The efficacy of SWAP is demonstrated by two satellite image processing applications, viz. wildfire detection and informal settlement monitoring. This approach can provide significant benefits to a wide range of Sensor Web users. These include: developers, for deploying agents and agent-based applications; end users, for accessing, managing and visualising information provided by real-time monitoring applications; and scientists, who can use the Sensor Web as a scientific computing platform to facilitate knowledge sharing and discovery. An Ontology Driven Multi-Agent Sensor Web has the potential to forever change the way in which geospatial data and knowledge are accessed and used. This research describes this far-reaching vision, identifies key challenges and provides a first step towards the vision.
Thesis (Ph.D.)-University of KwaZulu-Natal, 2009.
27

Floryan, Mark. "Evolving expert knowledge bases: Applications of crowdsourcing and serious gaming to advance knowledge development for intelligent tutoring systems." 2013. https://scholarworks.umass.edu/dissertations/AAI3589022.

Full text
Abstract:
This dissertation presents a novel effort to develop ITS technologies that adapt by observing student behavior. In particular, we define an evolving expert knowledge base (EEKB) that structures a domain's information as a set of nodes and the relationships that exist between those nodes. The structure of this model is not the particularly novel aspect of this work, but rather the model's evolving behavior. Past efforts have shown that this model, once created, is useful for providing students with expert feedback as they work within our ITS called Rashi. We present an algorithm that observes groups of students as they work within Rashi, and collects student contributions to form an accurate domain level EEKB. We then present experimentation that simulates more than 15,000 data points of real student interaction and analyzes the quality of the EEKB models that are produced. We discover that EEKB models can be constructed accurately, and with significant efficiency compared to human constructed models of the same form. We are able to make this judgment by comparing our automatically constructed models with similar models that were hand crafted by a small team of domain experts. We also explore several tertiary effects. We focus on the impact that gaming and game mechanics have on various aspects of this model acquisition process. We discuss explicit game mechanics that were implemented in the source ITS from which our data was collected. Students who are given our system with game mechanics contribute higher amounts of data, while also performing higher quality work. Additionally, we define a novel type of game called a knowledge-refinement game (KRG), which motivates subject matter experts (SMEs) to contribute to an already constructed EEKB, but for the purpose of refining the model in areas in which confidence is low. Experimental work with the KRG provides strong evidence that: 1) the quality of the original EEKB was indeed strong, as validated by KRG players, and 2) both the quality and breadth of knowledge within the EEKB are increased when players use the KRG.
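The evolving-knowledge-base idea can be pictured with a toy aggregation step: relationships between domain nodes proposed by students are counted, and only those with enough independent support are admitted to the EEKB. The node names, contributions and support threshold below are invented, not taken from Rashi.

```python
# Toy aggregation of student-proposed relationships into an evolving knowledge base.
from collections import Counter

contributions = [            # (student_id, source_node, relation, target_node)
    (1, "fever",   "supports",    "infection"),
    (2, "fever",   "supports",    "infection"),
    (3, "fever",   "supports",    "infection"),
    (4, "fatigue", "supports",    "infection"),
    (5, "rash",    "contradicts", "food poisoning"),
]

MIN_SUPPORT = 2   # number of contributions required before a link is accepted

support = Counter()
for _, src, rel, dst in contributions:
    support[(src, rel, dst)] += 1

eekb = {edge for edge, count in support.items() if count >= MIN_SUPPORT}
for src, rel, dst in sorted(eekb):
    print(f"{src} --{rel}--> {dst}")   # only well-supported links enter the EEKB
```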
28

"Effective use of artificial intelligence in predicting energy consumption and underground dam levels in two gold mines in South Africa." Thesis, 2015. http://hdl.handle.net/10210/13332.

Full text
Abstract:
D.Ing. (Electrical and Electronic Engineering)
The electricity shortage in South Africa has required the implementation of demand-side management (DSM) projects. The DSM projects were implemented by installing energy monitoring and control systems to monitor certain mining aspects such as water pumping systems. Certain energy-saving procedures and control systems followed by the mining industry are not sustainable and must be updated regularly in order to meet any changes in the water pumping system. In addition, the present water pumping, monitoring, and control system does not predict the energy consumption or the underground water dam levels. Hence, there is a need to introduce a new monitoring system that could control and predict the energy consumption of the underground water pumping system and the dam levels based on present and historical data. The work is undertaken to investigate the feasibility of using artificial intelligence in certain aspects of the mining industry. If successful, artificial intelligence systems could lead to improved safety, reduced electrical energy consumption, and decreased human error throughout the pump station monitoring and control process ...
29

Luo, Yi. "End-to-end Speech Separation with Neural Networks." Thesis, 2021. https://doi.org/10.7916/d8-2bsv-xt40.

Full text
Abstract:
Speech separation has long been an active research topic in the signal processing community, owing to its importance in a wide range of applications such as hearable devices and telecommunication systems. It not only serves as a fundamental problem for all higher-level speech processing tasks such as automatic speech recognition, natural language understanding, and smart personal assistants, but also plays an important role in smart earphones and augmented and virtual reality devices. With the recent progress in deep neural networks, the separation performance has been significantly advanced by various new problem definitions and model architectures. The most widely-used approach in the past years performs separation in the time-frequency domain, where a spectrogram or a time-frequency representation is first calculated from the mixture signal and multiple time-frequency masks are then estimated for the target sources. The masks are applied on the mixture's time-frequency representation to extract the target representations, and then operations such as the inverse short-time Fourier transform are utilized to convert them back to waveforms. However, such frequency-domain methods may have difficulties in modeling the phase spectrogram, as the conventional time-frequency masks often only consider the magnitude spectrogram. Moreover, the training objectives for the frequency-domain methods are typically also defined in the frequency domain, which may not be in line with widely-used time-domain evaluation metrics such as signal-to-noise ratio and signal-to-distortion ratio. The problem formulation of time-domain, end-to-end speech separation naturally arises to tackle the disadvantages of frequency-domain systems. End-to-end speech separation networks take the mixture waveform as input and directly estimate the waveforms of the target sources. Following the general pipeline of conventional frequency-domain systems, which contains a waveform encoder, a separator, and a waveform decoder, time-domain systems can be designed in a similar way while significantly improving the separation performance. In this dissertation, I focus on multiple aspects of the general problem formulation of end-to-end separation networks, including system design, model architectures, and training objectives. I start with a single-channel pipeline, which we refer to as the time-domain audio separation network (TasNet), to validate the advantage of end-to-end separation compared with conventional time-frequency domain pipelines. I then move to the multi-channel scenario and introduce the filter-and-sum network (FaSNet) for both fixed-geometry and ad-hoc geometry microphone arrays. Next, I introduce methods for lightweight network architecture design that allow the models to maintain separation performance while using as little as 2.5% of the model size and 17.6% of the model complexity. After that, I look into the training objective functions for end-to-end speech separation and describe two training objectives for separating varying numbers of sources and improving the robustness under reverberant environments, respectively. Finally, I take a step back, revisit several problem formulations in the end-to-end separation pipeline, and raise further questions in this framework to be analyzed and investigated in future work.
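As an example of the time-domain objectives mentioned above, the scale-invariant signal-to-noise ratio (SI-SNR) commonly used to train end-to-end separation networks can be computed as follows. The signals are toy examples and this is not code from the dissertation.

```python
# Scale-invariant signal-to-noise ratio (SI-SNR) on toy signals.
import numpy as np

def si_snr(estimate, target, eps=1e-8):
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Project the estimate onto the target to get the "true" component ...
    s_target = (np.dot(estimate, target) / (np.dot(target, target) + eps)) * target
    # ... and treat everything else as error.
    e_noise = estimate - s_target
    return 10.0 * np.log10((np.dot(s_target, s_target) + eps) / (np.dot(e_noise, e_noise) + eps))

t = np.linspace(0, 1, 8000)
clean = np.sin(2 * np.pi * 220 * t)
noisy_estimate = clean + 0.1 * np.random.randn(t.size)
print(round(si_snr(noisy_estimate, clean), 2), "dB")   # roughly 17 dB for this noise level
```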
30

Liang, Dawen. "Understanding Music Semantics and User Behavior with Probabilistic Latent Variable Models." Thesis, 2016. https://doi.org/10.7916/D8TH8MZP.

Full text
Abstract:
Bayesian probabilistic modeling provides a powerful framework for building flexible models that incorporate latent structures through the likelihood model and the prior. When we specify a model, we make certain assumptions about the underlying data-generating process with respect to these latent structures. For example, the latent Dirichlet allocation (LDA) model assumes that when generating a document, we first select a latent topic and then select a word that often appears in the selected topic. We can uncover the latent structures conditioned on the observed data via posterior inference. In this dissertation, we apply the tools of probabilistic latent variable models and try to understand complex real-world data about music semantics and user behavior. We first look into the problem of automatic music tagging -- inferring the semantic tags (e.g., "jazz", "piano", "happy", etc.) from the audio features. We treat music tagging as a matrix completion problem and apply the Poisson matrix factorization model jointly on the vector-quantized audio features and a "bag-of-tags" representation. This approach exploits the shared latent structure between semantic tags and acoustic codewords. We present experimental results on the Million Song Dataset for both annotation and retrieval tasks, illustrating the steady improvement in performance as more data is used. We then move to the intersection between music semantics and user behavior: music recommendation. The leading performance in music recommendation is achieved by collaborative filtering methods which exploit the similarity patterns in users' listening histories. We address the fundamental cold-start problem of collaborative filtering: it cannot recommend new songs that no one has listened to. We train a neural network on semantic tagging information as a content model and use it as a prior in a collaborative filtering model. The proposed system is evaluated on the Million Song Dataset and shows comparably better results than the collaborative filtering approaches, in addition to favorable performance in the cold-start case. Finally, we focus on general recommender systems. We examine two different types of data, implicit and explicit feedback, and introduce the notion of user exposure (whether or not a user is exposed to an item) as part of the data-generating process, which is latent for implicit data and observed for explicit data. For implicit data, we propose a probabilistic matrix factorization model and infer the user exposure from data. In the language of causal analysis (Imbens and Rubin, 2015), user exposure has a close connection to the assignment mechanism. We leverage this connection more directly for explicit data and develop a causal inference approach to recommender systems. We demonstrate that causal inference for recommender systems leads to improved generalization to new data. Exact posterior inference is generally intractable for latent variable models. Throughout this thesis, we design specific inference procedures to tractably analyze the large-scale data encountered in each scenario.
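As a toy stand-in for the Poisson factorization of a count matrix described above, the sketch below runs the classical multiplicative updates that minimise the KL divergence between a non-negative count matrix and its low-rank reconstruction, which corresponds to maximising a Poisson likelihood. The full models in the dissertation are Bayesian, with priors and variational inference; everything here (data, rank, iteration count) is an invented illustration.

```python
# Toy KL-divergence NMF as a stand-in for Poisson matrix factorization of counts.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic low-rank-ish count data (e.g. song-by-codeword or song-by-tag counts).
V = rng.poisson(lam=rng.gamma(2.0, 1.0, (20, 1)) * rng.gamma(2.0, 1.0, (1, 30)))
k = 5
W = rng.random((20, k)) + 0.1
H = rng.random((k, 30)) + 0.1

for _ in range(200):
    WH = W @ H + 1e-9
    W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T)   # multiplicative update for W
    WH = W @ H + 1e-9
    H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V))   # multiplicative update for H

print("reconstruction mean abs error:", np.abs(V - W @ H).mean())
```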