Dissertations / Theses on the topic 'Inductive learning'

Consult the top 50 dissertations / theses for your research on the topic 'Inductive learning'.

1

Ray, Oliver. "Hybrid abductive inductive learning." Thesis, Imperial College London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.428111.

2

Pascoe, James. "The evolution of 'Boxes' to quantized inductive learning : a study in inductive learning /." Thesis, This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-12172008-063016/.

3

林謀楷 and Mau-kai Lam. "Inductive machine learning with bias." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31212426.

4

Heinz, Jeffrey Nicholas. "Inductive learning of phonotactic patterns." Diss., Restricted to subscribing institutions, 2007. http://proquest.umi.com/pqdweb?did=1467886191&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

5

Tappert, Peter M. "Damage identification using inductive learning." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-05092009-040651/.

6

Chu, Mabel. "Constructing transformation rules for inductive learning." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0023/MQ51055.pdf.

7

Kit, Chun Yu. "Unsupervised lexical learning as inductive inference." Thesis, University of Sheffield, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340205.

8

Law, Mark. "Inductive learning of answer set programs." Thesis, Imperial College London, 2018. http://hdl.handle.net/10044/1/64824.

Abstract:
The goal of Inductive Logic Programming (ILP) is to find a hypothesis that explains a set of examples in the context of some pre-existing background knowledge. Until recently, most research on ILP targeted learning definite logic programs. This thesis constitutes the first comprehensive work on learning answer set programs, introducing new learning frameworks, theoretical results on the complexity and generality of these frameworks, algorithms for learning ASP programs, and an extensive evaluation of these algorithms. Although there is previous work on learning ASP programs, existing learning frameworks are either brave -- where examples should be explained by at least one answer set -- or cautious -- where examples should be explained by all answer sets. There are cases where brave induction is too weak and cautious induction is too strong. Our proposed frameworks combine brave and cautious learning and can learn ASP programs containing choice rules and constraints. Many applications of ASP use weak constraints to express a preference ordering over the answer sets of a program. Learning weak constraints corresponds to preference learning, which we achieve by introducing ordering examples. We then explore the generality of our frameworks, investigating what it means for a framework to be general enough to distinguish one hypothesis from another. We show that our frameworks are more general than both brave and cautious induction. We also present a new family of algorithms, called ILASP (Inductive Learning of Answer Set Programs), which we prove to be sound and complete. This work concerns learning from both non-noisy and noisy examples. In the latter case, ILASP returns a hypothesis that maximises the coverage of examples while minimising the length of the hypothesis. In our evaluation, we show that ILASP scales to tasks with large numbers of examples, finding accurate hypotheses even in the presence of high proportions of noisy examples.
9

Adjodah, Dhaval D. K. (Adjodlah Dhaval Dhamnidhi Kumar). "Social inductive biases for reinforcement learning." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/128415.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, September, 2019
Cataloged from the official PDF of thesis. "The Table of Contents does not accurately represent the page numbering"--Disclaimer page.
Includes bibliographical references (pages 117-126).
How can we build machines that collaborate and learn more seamlessly with humans, and with each other? How do we create fairer societies? How do we minimize the impact of information manipulation campaigns, and fight back? How do we build machine learning algorithms that are more sample efficient when learning from each other's sparse data, and under time constraints? At the root of these questions is a simple one: how do agents, human or machines, learn from each other, and can we improve it and apply it to new domains? The cognitive and social sciences have provided innumerable insights into how people learn from data using both passive observation and experimental intervention. Similarly, the statistics and machine learning communities have formalized learning as a rigorous and testable computational process.
There is a growing movement to apply insights from the cognitive and social sciences to improving machine learning, as well as opportunities to use machine learning as a sandbox to test, simulate and expand ideas from the cognitive and social sciences. A less researched but fertile part of this intersection is the modeling of social learning: past work has focused more on how agents can learn from the 'environment', and there is less work that borrows from both communities to look into how agents learn from each other. This thesis presents novel contributions on the nature and usefulness of social learning as an inductive bias for reinforcement learning.
I start by presenting the results from two large-scale online human experiments: first, I observe Dunbar cognitive limits that shape and limit social learning in two different social trading platforms, with the additional contribution that synthetic financial bots that transcend human limitations can obtain higher profits even when using naive trading strategies. Second, I devise a novel online experiment to observe how people, at the individual level, update their belief of future financial asset prices (e.g. S&P 500 and Oil prices) from social information. I model such social learning using Bayesian models of cognition, and observe that people make strong distributional assumptions on the social data they observe (e.g. assuming that the likelihood data is unimodal).
I was fortunate to collect one round of predictions during the Brexit market instability, and find that social learning leads to higher performance than when learning from the underlying price history (the environment) during such volatile times. Having observed the cognitive limits and biases people exhibit when learning from other agents, I present a motivational example of the strength of inductive biases in reinforcement learning: I implement a learning model with a relational inductive bias that pre-processes the environment state into a set of relationships between entities in the world. I observe strong improvements in performance and sample efficiency, and even find the learned relationships to be strongly interpretable.
Finally, given that most modern deep reinforcement learning algorithms are distributed (in that they have separate learning agents), I investigate the hypothesis that viewing deep reinforcement learning as a social learning distributed search problem could lead to strong improvements. I do so by creating a fully decentralized, sparsely-communicating and scalable learning algorithm, and observe strong learning improvements with lower communication bandwidth usage (between learning agents) when using communication topologies that naturally evolved due to social learning in humans. Additionally, I provide a theoretical upper bound (that agrees with our empirical results) regarding which communication topologies lead to the largest learning performance improvement.
Given a future increasingly filled with decentralized autonomous machine learning systems that interact with humans, there is an increasing need to understand social learning to build resilient, scalable and effective learning systems, and this thesis provides insights into how to build such systems.
by Dhaval D.K. Adjodah.
10

Shi, Guang. "Inductive learning in network fault diagnosis." Dissertation, Carleton University, Systems and Computer Engineering, Ottawa, 1994.

11

Grant, Timothy John. "Inductive learning of knowledge-based planning operators." [Maastricht : Maastricht : Rijksuniversiteit Limburg] ; University Library, Maastricht University [Host], 1996. http://arno.unimaas.nl/show.cgi?fid=6686.

12

Swersky, Kevin. "Inductive principles for learning Restricted Boltzmann Machines." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/27816.

Abstract:
We explore the training and usage of the Restricted Boltzmann Machine for unsupervised feature extraction. We investigate the many different aspects involved in their training, and by applying the concept of iterate averaging we show that it is possible to greatly improve on state of the art algorithms. We also derive estimators based on the principles of pseudo-likelihood, ratio matching, and score matching, and we test them empirically against contrastive divergence, and stochastic maximum likelihood (also known as persistent contrastive divergence). Our results show that ratio matching and score matching are promising approaches to learning Restricted Boltzmann Machines. By applying score matching to the Restricted Boltzmann Machine, we show that training an auto-encoder neural network with a particular kind of regularization function is asymptotically consistent. Finally, we discuss the concept of deep learning and its relationship to training Restricted Boltzmann Machines, and briefly explore the impact of fine-tuning on the parameters and performance of a deep belief network.
13

Tang, M. X. "Knowledge-based design support and inductive learning." Thesis, University of Edinburgh, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.662724.

Abstract:
In order to incorporate inductive learning techniques into a knowledge-based design model and an integrated knowledge-based design support system architecture, the computational techniques for developing a knowledge-based design support system architecture and the role of inductive learning in AI-based design are investigated. This investigation gives a background to the development of an incremental learning model for design suitable for a class of design tasks whose structures are not well known initially. This incremental learning model for design is used as a basis to develop a knowledge-based design support system architecture that can be used as a kernel for knowledge-based design applications. This architecture integrates a number of computational techniques to support the representation and reasoning of design knowledge. In particular, it integrates a blackboard control system with an assumption-based truth maintenance system in an object-oriented environment to support the exploration of multiple design solutions by supporting the exploration and management of design contexts. As an integral part of this knowledge-based design support architecture, a design concept learning system utilising a number of unsupervised inductive learning techniques is developed. This design concept learning system combines concept formation techniques with design heuristics as background knowledge to build a design concept tree from raw data or past design examples. The effectiveness of this knowledge-based design support architecture and the design concept learning system is demonstrated through a realistic design domain, the design of small-molecule drugs, one of the key tasks of which is to identify a pharmacophore description (the structure of a design problem) from known molecule examples. In this thesis, knowledge-based design and inductive learning techniques are first reviewed.
Based on this review, an incremental learning model and an integrated architecture for intelligent design support are presented.
14

Brown, Martin Richard. "Inductive learning with uncertainty for image processing." Thesis, De Montfort University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391102.

15

Hanna, S. "Addressing complex design problems through inductive learning." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1353781/.

Abstract:
Optimisation and related techniques are well suited to clearly defined problems involving systems that can be accurately simulated, but not to tasks in which the phenomena in question are highly complex or the problem ill-defined. These latter are typical of architecture and particularly creative design tasks, which therefore currently lack viable computational tools. It is argued that as design teams and construction projects of unprecedented scale are increasingly frequent, this is just where such optimisation and communication tools are most needed. This research develops a method by which to address complex design problems, by using inductive machine learning from example precedents either to approximate the behaviour of a complex system or to define objectives for its optimisation. Two design domains are explored. A structural problem of the optimisation of stiffness and mass of fine scale, modular space frames has relatively clearly defined goals, but a highly complex geometry of many interconnected members. A spatial problem of the layout of desks in the workplace addresses the social relationships supported by the pattern of their arrangement, and presents a design situation in which even the problem objectives are not known. These problems are chosen to represent a range of scales, types and sources of complexity against which the methods can be tested. The research tests two hypotheses in the context of these domains, relating to the simulation of a system and to communication between the designer and the machine. The first hypothesis is that the underlying structure and causes of a system’s behaviour must be understood to effectively predict or simulate its behaviour. This hypothesis is typical of modelling approaches in engineering. 
It is falsified by demonstrating that a function can be learned that models the system in question—either optimising structural stiffness or determining desirable spatial patterns—without recourse to a bottom up simulation of that system. The second hypothesis is that communication of the behaviour of these systems to the machine requires explicit, a priori definitions and agreed upon conventions of meaning. This is typical of classical, symbolic approaches in artificial intelligence and still implicitly underlies computer aided design tools. It is falsified by a test equivalent to a test of linguistic competence, showing that the computer can form a concept of, and satisfy, a particular requirement that is implied only by ostensive communication by examples. Complex, ill-defined problems are handled in practice by hermeneutic, reflective processes, criticism and discussion. Both hypotheses involve discerning patterns caused by the complex structure from the higher level behaviour only, forming a predictive approximation of this, and using it to produce new designs. It is argued that as these abilities are the input and output requirements for a human designer to engage in the reflective design process, the machine can thus be provided with the appropriate interface to do so, resulting in a novel means of interaction with the computer in a design context. It is demonstrated that the designs output by the computer display both novelty and utility, and are therefore a potentially valuable contribution to collective creativity.
16

Torgo, Luís Fernando Raínho Alves. "Inductive learning of tree-based regression models." Doctoral thesis, Universidade do Porto. Reitoria, 1999. http://hdl.handle.net/10216/10018.

Abstract:
Doctoral dissertation in Computer Science presented to the Faculty of Sciences of the University of Porto.
This thesis explores different aspects of the methodology of inducing regression trees from data samples. The main goal of this study is to improve the predictive accuracy of regression trees while maintaining, as far as possible, their comprehensibility and computational efficiency. Our study of this type of regression model is divided into three main parts. The first part describes in detail two methodologies for growing regression trees: one that minimises the mean squared error, and another that minimises the mean absolute deviation. The analysis presented focuses primarily on the computational efficiency of the tree-growing process. Several new algorithms are presented that yield significant gains in computational efficiency. Finally, an experimental comparison of the two alternative methodologies is presented, clearly showing the different practical objectives of each. Pruning regression trees is a standard procedure in this type of methodology, whose main goal is to provide a better compromise between the simplicity and comprehensibility of the trees and their predictive accuracy. The second part of this dissertation describes a series of new pruning techniques based on a process of selection from a set of alternative pruned trees. We also present an extensive set of experiments comparing different methods of pruning regression trees. The results of this comparison, carried out on a large set of problems, show that our pruning techniques achieve significantly better predictive accuracy than current state-of-the-art methods. The final part of this dissertation presents a new type of tree, which we call local regression trees. These hybrid models result from the integration of regression trees with modelling techniques ...
17

Morris, William C. "Emergent grammatical relations : an inductive learning system /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1998. http://wwwlib.umi.com/cr/ucsd/fullcit?p9828973.

19

Lukac, Martin. "Quantum Inductive Learning and Quantum Logic Synthesis." PDXScholar, 2009. https://pdxscholar.library.pdx.edu/open_access_etds/2319.

Abstract:
Since the Quantum Computer is almost realizable on a large scale and Quantum Technology is one of the main solutions to the Moore Limit, Quantum Logic Synthesis (QLS) has become a required theory and tool for designing Quantum Logic Circuits. However, despite its growth, there is no unified approach to QLS, as Quantum Computing is still being discovered and novel applications are being identified. The intent of this study is to experimentally explore principles of Quantum Logic Synthesis and its applications to Inductive Machine Learning. Based on an algorithmic approach, I first design a Genetic Algorithm for Quantum Logic Synthesis that is used to prove and verify the methods proposed in this work. Based on results obtained from the evolutionary experimentation, I propose a fast, structure- and cost-based exhaustive search that is used for the design of a novel, least expensive universal family of quantum gates. The results from both the evolutionary and heuristic search are used to formulate an Inductive Learning Approach based on Quantum Logic Synthesis, with the intended application being humanoid behavioral robotics. The presented approach illustrates a successful algorithmic approach, where the search algorithm was able to invent/discover novel quantum circuits as well as novel principles in Quantum Logic Synthesis.
20

Skabar, Andrew Alojz. "Inductive learning techniques for mineral potential mapping." Thesis, Queensland University of Technology, 2001.

21

Lam, Mau-kai. "Inductive machine learning with bias /." Hong Kong : University of Hong Kong, 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13972558.

22

Griffiths, Anthony D. "Inductive generalisation in case-based reasoning systems." Thesis, University of York, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336844.

23

Pettersson, Emil. "Meta-Interpretive Learning Versus Inductive Metalogic Programming : A Comparative Analysis in Inductive Logic Programming." Thesis, Uppsala universitet, Institutionen för informatik och media, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-393291.

Abstract:
Artificial intelligence and machine learning are fields of research that have become very popular and are getting more attention in the media as our computational power increases and the theories and latest developments of these fields can be put into practice in the real world. The field of machine learning consists of different paradigms, two of which are the symbolic and connectionist paradigms. In 1991 it was pointed out by Minsky that we could benefit from sharing ideas between the paradigms instead of competing for dominance in the field. That is why this thesis investigates two approaches to inductive logic programming, where the main research goals are, first, to find similarities or differences between the approaches and potential areas where cross-pollination could be beneficial, and secondly, to investigate their performance relative to each other based on the results published in the research. The approaches investigated are Meta-Interpretive Learning and Inductive Metalogic Programming, which belong to the symbolic paradigm of machine learning. The research is conducted through a comparative study based on published research papers. The conclusion of the study suggests that at least two aspects of the approaches could potentially be shared between them, namely the reversible aspect of the meta-interpreter and restricting the hypothesis space using the Herbrand base. However, the findings regarding performance were deemed incompatible, in terms of a fair one-to-one comparison. The results of the study are mainly specific, but could be interpreted as motivation for similar collaboration efforts between different paradigms.
24

Bosch, Antal P. J. van den. "Learning to pronounce written words : a study in inductive language learning /." Cadier en Keer : Maastricht : Phidippides ; University Library, Maastricht University [Host], 1997. http://arno.unimaas.nl/show.cgi?fid=5918.

25

Alrajeh, Dalal S. "Requirements Elaboration using Model Checking and Inductive Learning." Thesis, Imperial College London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.511853.

26

Snyders, Sean. "Inductive machine learning bias in knowledge-based neurocomputing." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53463.

Abstract:
Thesis (MSc) -- Stellenbosch University, 2003.
ENGLISH ABSTRACT: The integration of symbolic knowledge with artificial neural networks is becoming an increasingly popular paradigm for solving real-world problems. This paradigm, named knowledge-based neurocomputing, provides means for using prior knowledge to determine the network architecture, to program a subset of weights to induce a learning bias which guides network training, and to extract refined knowledge from trained neural networks. The role of neural networks then becomes that of knowledge refinement. It thus provides a methodology for dealing with uncertainty in the initial domain theory. In this thesis, we address several advantages of this paradigm and propose a solution for the open question of determining the strength of this learning, or inductive, bias. We develop a heuristic for determining the strength of the inductive bias that takes the network architecture, the prior knowledge, the learning method, and the training data into consideration. We apply this heuristic to well-known synthetic problems as well as published difficult real-world problems in the domain of molecular biology and medical diagnoses. We found that, not only do the networks trained with this adaptive inductive bias show superior performance over networks trained with the standard method of determining the strength of the inductive bias, but that the extracted refined knowledge from these trained networks delivers more concise and accurate domain theories.
27

Zimmermann, Tom. "Inductive Learning and Theory Testing: Applications in Finance." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:17467320.

Abstract:
This thesis explores the opportunities for economic research that arise from importing empirical methods from the field of machine learning. Chapter 1 applies inductive learning to cross-sectional asset pricing. Researchers have documented over three hundred variables that can explain differences in cross-sectional stock returns. But which ones contain independent information? Chapter 1 develops a framework, deep conditional portfolio sorts, that can be used to answer this question and that is based on ideas from the machine learning literature, tailored to an asset-pricing application. The method is applied to predicting future stock returns based on past stock returns at different horizons, and short-term returns (i.e. the past six months of returns) rather than medium- or long-term returns are recovered as the variables that convey almost all information about future returns. Chapter 2 argues that machine learning techniques, although focusing on predictions, can be used to test theories. In most theory tests, researchers control for known theories. In contrast, chapter 2 develops a simple model that illustrates how machine learning can be used to conduct an inductive test that makes it possible to control for some unknown theories, as long as they are covered in some way by the data. The method is applied to the theory that realization utility and nominal loss aversion lead to the disposition effect (the propensity to sell winners rather than losers). An inductive test finds that short-term price trends and other features of the price history are more important for predicting selling decisions than returns relative to purchase price. Chapter 3 provides another perspective on the disposition effect in the more traditional spirit of behavioral finance. It assesses the implications of different theories for an investor's probability to sell a stock as a function of the stock's return and then tests those implications empirically.
Three different approaches that have been used in the literature are shown to lead to findings that are, at first sight, contradictory: the probability of selling a stock is either V-shaped or inverted V-shaped in the stock's return. Since these approaches compute different conditional probabilities, however, they can be reconciled once the conditioning set is taken into account.
Economics
APA, Harvard, Vancouver, ISO, and other styles
28

Tschorn, Patrick. "Incremental inductive logic programming for learning from annotated corpora." Thesis, Lancaster University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.538607.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Gandhi, Sachin. "Learning from a Genetic Algorithm with Inductive Logic Programming." Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1125511501.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Miramontes, Hercog Luis. "Evolutionary and conventional reinforcement learning in multi agent systems for social simulation." Thesis, London South Bank University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288112.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Ortz, Courtney. "Aging and Associative and Inductive Reasoning Processes in Discrimination Learning." TopSCHOLAR®, 2006. http://digitalcommons.wku.edu/theses/266.

Full text
Abstract:
The purpose of this study was to investigate how associative and inductive reasoning processes develop over trials in feature positive (FP) and feature negative (FN) discrimination learning. Younger and older adults completed initial and transfer tasks with either consistent or inconsistent transfer. Participants articulated a rule on every trial. The measure of discrimination learning was the number of trials it took participants to articulate the exact rule. In the initial task, older adults articulated the rule more slowly than younger adults in FP discrimination and took marginally more trials to articulate the rule in FN discrimination than younger adults. Age differences were greater in FP discrimination than in FN discrimination learning because younger adults performed well in FP discrimination learning. In the transfer task, older adults articulated the FP rule more slowly than younger adults and both groups articulated the rule more quickly with consistent than inconsistent transfer. Older adults articulated the FN rule more slowly than younger adults. The differences in trials to articulate the FN rule for the two groups were somewhat larger for inconsistent transfer than consistent transfer. Discrimination learning was explained in terms of associative and inductive reasoning processes reasonably well. The measure of associative processes was forgotten responses, whereas the measures of inductive reasoning processes were irrelevant cue shifts and perseverations. In FP discrimination learning in the initial task, older adults had a greater proportion of forgotten responses, irrelevant cue shifts, and marginally more perseverations than younger adults. Therefore, older adults had more difficulty with associative and inductive reasoning processes than younger adults in FP discrimination. In FN discrimination, older adults had a greater proportion of forgotten responses than younger adults.
Older and younger adults had a similar number of irrelevant cue shifts and perseverations. Therefore, in FN discrimination older adults had more difficulty with associative processes than younger adults. Both groups had difficulty with inductive reasoning processes. In FP discrimination in the transfer task, older adults had a greater proportion of forgotten responses, irrelevant cue shifts, and perseverations than younger adults, and these proportions were similar in consistent and inconsistent transfer. Therefore, in FP discrimination older adults had more difficulty than younger adults with both associative and inductive reasoning processes. Both processes were similar with regards to consistent and inconsistent transfer. In FN discrimination, older adults had a greater proportion of forgotten responses than younger adults, and the proportion of forgotten responses was greater in inconsistent than in consistent transfer. Both groups made a similar number of irrelevant cue shifts, and there was a marginal difference in consistent and inconsistent transfer for this measure with a greater number in inconsistent transfer. Older adults had a greater proportion of perseverations than younger adults. However, there were no differences in the number of perseverations for consistent and inconsistent transfer. Thus, older adults had difficulty with associative and inductive reasoning processes. Younger adults' inductive reasoning skills improved. The associative and inductive reasoning processes in FN discrimination were not as efficient in inconsistent transfer as in consistent transfer.
APA, Harvard, Vancouver, ISO, and other styles
32

Snyder, Thomas D. "The effects of variability on damage identification with inductive learning." Thesis, Virginia Tech, 1994. http://hdl.handle.net/10919/42160.

Full text
Abstract:
This work discusses the effects of inherent variabilities on the damage identification problem. The goal of damage identification is to detect structural damage before it reaches a level which will detrimentally affect the structure’s performance. Inductive learning is one tool which has been proposed as an effective method to perform damage identification. There are many variabilities which are inherent in damage identification and can cause problems when attempting to detect damage. Temperature fluctuation and manufacturing variability are specifically addressed. Temperature is shown to be a cause-effect variability which has a measurable effect on the damage identification problem. The inductive learning method is altered to accommodate temperature and shown experimentally to be effective in identifying added mass damage at several locations on an aluminum plate. Manufacturing variability is shown to be a non-quantifiable variability. The inductive learning method is shown to be able to accommodate this variability through careful examination of statistical significances in dynamic response data. The method is experimentally shown to be effective in detecting hole damage in randomly selected aluminum plates from a manufactured batch.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
33

MacKendrick, Alex. "Interleaved Effects in Inductive Category Learning: The Role of Memory Retention." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5846.

Full text
Abstract:
Interleaved effects are widely documented. Research demonstrates that interleaved presentation orders, as opposed to blocked orders, typically benefit inductive category learning. What drives interleaved effects is less straightforward. Interleaved presentations provide both the opportunity to compare and contrast between different types of category exemplars, which are temporally juxtaposed, and the opportunity to space study of the same type of category exemplars, which are temporally separated within the presentation span. Accordingly, interleaved effects might be driven by enhanced discrimination, enhanced memory retention, or both in some measure. Though recent studies have largely endorsed enhanced discrimination as the critical mechanism driving interleaved effects, there is no strong evidence to controvert the contribution of enhanced memory retention to interleaved effects. I further examined the role of memory retention by manipulating both presentation order and category structure. Across two experiments I found that memory retention may drive interleaved effects in categorization tasks.
APA, Harvard, Vancouver, ISO, and other styles
34

Lo, Chang-yun. "Optimizing ship air-defense evaluation model using simulation and inductive learning." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26678.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Dsouza, Michael Dylan. "Fast Static Learning and Inductive Reasoning with Applications to ATPG Problems." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/51591.

Full text
Abstract:
Relations among various nodes in the circuit, as captured by static and inductive invariants, have shown to have a positive impact on a wide range of EDA applications. Techniques such as boolean constraint propagation for static learning and assume-then-verify approach to reason about inductive invariants have been possible due to efficient SAT solvers. Although a significant amount of research effort has been dedicated to the development of effective invariant learning techniques over the years, the computation time for deriving powerful multi-node invariants is still a bottleneck for large circuits. Fast computation of static and inductive invariants is the primary focus of this thesis. We present a novel technique to reduce the cost of static learning by intelligently identifying redundant computations that may not yield new invariants, thereby achieving significant speedup. The process of inductive invariant reasoning relies on the assume-then-verify framework, which requires multiple iterations to complete, making it infeasible for cases with a large set of multi-node invariants. We present filtering techniques that can be applied to a diverse set of multi-node invariants to achieve a significant boost in performance of the invariant checker. Mining and reasoning about all possible potential multi-node invariants is simply infeasible. To alleviate this problem, strategies that narrow down the focus on specific types of powerful multi-node invariants are also presented. Experimental results reflect the promise of these techniques. As a measure of quality, the invariants are utilized for untestable fault identification and to constrain ATPG for path delay fault testing, with positive results.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
36

Santos, Jose Carlos Almeida. "Efficient learning and evaluation of complex concepts in inductive logic programming." Thesis, Imperial College London, 2010. http://hdl.handle.net/10044/1/6409.

Full text
Abstract:
Inductive Logic Programming (ILP) is a subfield of Machine Learning with foundations in logic programming. In ILP, logic programming, a subset of first-order logic, is used as a uniform representation language for the problem specification and induced theories. ILP has been successfully applied to many real-world problems, especially in the biological domain (e.g. drug design, protein structure prediction), where relational information is of particular importance. The expressiveness of logic programs grants flexibility in specifying the learning task and understandability to the induced theories. However, this flexibility comes at a high computational cost, constraining the applicability of ILP systems. Constructing and evaluating complex concepts remain two of the main issues that prevent ILP systems from tackling many learning problems. These learning problems are interesting both from a research perspective, as they raise the standards for ILP systems, and from an application perspective, where these target concepts naturally occur in many real-world applications. Such complex concepts cannot be constructed or evaluated by parallelizing existing top-down ILP systems or improving the underlying Prolog engine. Novel search strategies and cover algorithms are needed. The main focus of this thesis is on how to efficiently construct and evaluate complex hypotheses in an ILP setting. In order to construct such hypotheses we investigate two approaches. The first, the Top Directed Hypothesis Derivation framework, implemented in the ILP system TopLog, involves the use of a top theory to constrain the hypothesis space. In the second approach we revisit the bottom-up search strategy of Golem, lifting its restriction on determinate clauses which had rendered Golem inapplicable to many key areas. These developments led to the bottom-up ILP system ProGolem. A challenge that arises with a bottom-up approach is the coverage computation of long, non-determinate, clauses. 
Prolog’s SLD-resolution is no longer adequate. We developed a new, Prolog-based, theta-subsumption engine which is significantly more efficient than SLD-resolution in computing the coverage of such complex clauses. We provide evidence that ProGolem achieves the goal of learning complex concepts by presenting a protein-hexose binding prediction application. The theory ProGolem induced has a statistically significant better predictive accuracy than that of other learners. More importantly, the biological insights ProGolem’s theory provided were judged by domain experts to be relevant and, in some cases, novel.
APA, Harvard, Vancouver, ISO, and other styles
37

Hall, Johan. "MaltParser -- An Architecture for Inductive Labeled Dependency Parsing." Licentiate thesis, Växjö University, School of Mathematics and Systems Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-566.

Full text
Abstract:

This licentiate thesis presents a software architecture for inductive labeled dependency parsing of unrestricted natural language text, which achieves a strict modularization of parsing algorithm, feature model and learning method such that these parameters can be varied independently. The architecture is based on the theoretical framework of inductive dependency parsing by Nivre (2006) and has been realized in MaltParser, a system that supports several parsing algorithms and learning methods, for which complex feature models can be defined in a special description language. Special attention is given in this thesis to learning methods based on support vector machines (SVM).

The implementation is validated in three sets of experiments using data from three languages (Chinese, English and Swedish). First, we check if the implementation realizes the underlying architecture. The experiments show that the MaltParser system outperforms the baseline and satisfies the basic constraints of well-formedness. Furthermore, the experiments show that it is possible to vary parsing algorithm, feature model and learning method independently. Secondly, we focus on the special properties of the SVM interface. It is possible to reduce the learning and parsing time without sacrificing accuracy by dividing the training data into smaller sets, according to the part-of-speech of the next token in the current parser configuration. Thirdly, the last set of experiments presents a broad empirical study that compares SVM to memory-based learning (MBL) with five different feature models, where all combinations have gone through parameter optimization for both learning methods. The study shows that SVM outperforms MBL for more complex and lexicalized feature models with respect to parsing accuracy. There are also indications that SVM, with a splitting strategy, can achieve faster parsing than MBL. The parsing accuracy achieved is the highest reported for the Swedish data set and very close to the state of the art for Chinese and English.
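The splitting strategy described in this abstract, dividing the SVM training data into smaller sets according to the part-of-speech of the next token and training one classifier per set, can be sketched roughly as follows. The POS tags, feature vectors and labels below are invented for illustration, and a dependency-free nearest-centroid learner stands in for the SVM; the partitioning idea is the same.

```python
from collections import defaultdict
import math

# Toy training instances: (POS of next token, feature vector, action label).
data = [
    ("NN", [1.0, 0.0], 0), ("NN", [0.9, 0.1], 0), ("NN", [0.1, 1.0], 1),
    ("VB", [0.2, 0.8], 1), ("VB", [0.3, 0.9], 1), ("VB", [1.0, 0.2], 0),
]

# 1. Partition the training data by the POS of the next token.
by_pos = defaultdict(list)
for pos, x, y in data:
    by_pos[pos].append((x, y))

# 2. Train one small model per partition instead of a single large one;
#    with SVMs this reduces training time, since cost grows superlinearly
#    in the number of training instances.
def train(rows):
    sums = defaultdict(lambda: [0.0, 0.0])  # per-class feature sums (2-d toy features)
    counts = defaultdict(int)
    for x, y in rows:
        counts[y] += 1
        for i, v in enumerate(x):
            sums[y][i] += v
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

centroids = {pos: train(rows) for pos, rows in by_pos.items()}

# 3. At parsing time, route each instance to the model for its partition.
def predict(pos, x):
    cents = centroids[pos]
    return min(cents, key=lambda y: math.dist(x, cents[y]))
```

Because each partition's model only ever sees instances routed to it, accuracy is preserved as long as the partitioning feature is also available at prediction time, which it is here (the next token's POS is part of the parser configuration).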


APA, Harvard, Vancouver, ISO, and other styles
38

Ferdinand, Vanessa Anne. "Inductive evolution : cognition, culture, and regularity in language." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/11741.

Full text
Abstract:
Cultural artifacts, such as language, survive and replicate by passing from mind to mind. Cultural evolution always proceeds by an inductive process, where behaviors are never directly copied, but reverse engineered by the cognitive mechanisms involved in learning and production. I will refer to this type of evolutionary change as inductive evolution and explain how this represents a broader class of evolutionary processes that can include both neutral and selective evolution. This thesis takes a mechanistic approach to understanding the forces of evolution underlying change in culture over time, where the mechanisms of change are sought within human cognition. I define culture as anything that replicates by passing through a cognitive system and take language as a premier example of culture, because of the wealth of knowledge about linguistic behaviors (external language) and its cognitive processing mechanisms (internal language). Mainstream cultural evolution theories related to social learning and social transmission of information define culture ideationally, as the subset of socially-acquired information in cognition that affects behaviors. Their goal is to explain behaviors with culture and avoid circularity by defining behaviors as markedly not part of culture. I take a reductionistic approach and argue that all there is to culture is brain states and behaviors, and further, that a complete explanation of the forces of cultural change cannot come from a subset of cognition related to social learning alone, but necessarily involves domain-general mechanisms, because cognition is an integrated system. Such an approach should decompose culture into its constituent parts and explore 1) how brain states affect behavior, 2) how behavior affects brain states, and 3) how brain states and behaviors change over time when they are linked up in a process of cultural transmission, where one person's behavior is the input to another.
I conduct several psychological experiments on frequency learning with adult learners and describe the behavioral biases that alter the frequencies of linguistic variants over time. I also fit probabilistic models of cognition to participant data to understand the inductive biases at play during linguistic frequency learning. Using these inductive and behavioral biases, I infer a Markov model over my empirical data to extrapolate participants' behavior forward in cultural evolutionary time and determine equivalences (and divergences) between inductive evolution and standard models from population genetics. As a key divergence point, I introduce the concept of non-binomial cultural drift, argue that this is a rampant form of neutral evolution in culture, and empirically demonstrate that probability matching is one such inductive mechanism that results in non-binomial cultural drift. I argue further that all inductive problems involving representativeness are potential drivers of neutral evolution unique to cultural systems. I also explore deviations from probability matching and describe non-neutral evolution due to inductive regularization biases in a linguistic and non-linguistic domain. Here, I offer a new take on an old debate about the domain-specificity vs -generality of the cognitive mechanisms involved in language processing, and show that the evolution of regularity in language cannot be predicted in isolation from the general cognitive mechanisms involved in frequency learning. Using my empirical data on regularization vs probability matching, I demonstrate how the use of appropriate non-binomial null hypotheses offers us greater precision in determining the strength of selective forces in cultural evolution.
APA, Harvard, Vancouver, ISO, and other styles
39

Hayward, Ross. "Analytic and inductive learning in an efficient connectionist rule-based reasoning system." Thesis, Queensland University of Technology, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
40

Selpi. "An inductive logic programming approach to learning which uORFs regulate gene expression." Thesis, Robert Gordon University, 2008. http://hdl.handle.net/10059/224.

Full text
Abstract:
Some upstream open reading frames (uORFs) regulate gene expression (i.e. they are functional) and can play key roles in keeping organisms healthy. However, how uORFs are involved in gene regulation is not yet fully understood. In order to get a complete view of how uORFs are involved in gene regulation, it is expected that a large number of functional uORFs are needed. Unfortunately, lab experiments to verify that uORFs are functional are expensive. In this thesis, for the first time, the use of inductive logic programming (ILP) is explored for the task of learning which uORFs regulate gene expression in the yeast Saccharomyces cerevisiae. This work is directed to help select sets of candidate functional uORFs for experimental studies. With limited background knowledge, ILP can generate hypotheses which make the search for novel functional uORFs 17 times more efficient than random sampling. Adding mRNA secondary structure to the background knowledge results in hypotheses with significantly increased performance. This work is the first machine learning work to study both uORFs and mRNA secondary structures in the context of gene regulation. Using a novel combination of knowledge about biological conservation, gene ontology annotations and genes' response to different conditions results in hypotheses that are simple, informative, have an estimated sensitivity of 81% and provide provisional insights into biological characteristics of functional uORFs. The hypotheses predict 299 further genes to have 450 novel functional uORFs. A comparison with a related study suggests that 8 of these predicted functional uORFs (from 8 genes) are strong candidates for experimental studies.
APA, Harvard, Vancouver, ISO, and other styles
41

Nechab, Said. "Contributions to the development of safer expert systems and inductive learning algorithms." Thesis, University of Salford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261864.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Konda, Ramesh. "Predicting Machining Rate in Non-Traditional Machining using Decision Tree Inductive Learning." NSUWorks, 2010. http://nsuworks.nova.edu/gscis_etd/199.

Full text
Abstract:
Wire Electrical Discharge Machining (WEDM) is a nontraditional machining process used for machining intricate shapes in high strength and temperature resistive (HSTR) materials. WEDM provides high accuracy, repeatability, and a better surface finish; however, the tradeoff is a very slow machining rate. Due to the slow machining rate in WEDM, machining tasks take many hours depending on the complexity of the job. Because of this, users of WEDM try to predict machining rate beforehand so that input parameter values can be pre-programmed to achieve automated machining. However, partial success with traditional methodologies such as thermal modeling, artificial neural networks, mathematical, statistical, and empirical models left this problem still open for further research and exploration of alternative methods. Also, earlier efforts in applying the decision tree rule induction algorithms for predicting the machining rate in WEDM had limitations such as use of a coarse-grained method of discretizing the target and exploration of only C4.5 as the learning algorithm. The goal of this dissertation was to address the limitations reported in literature in using decision tree rule induction algorithms for WEDM. In this study, the three decision tree inductive algorithms C5.0, CART and CHAID have been applied for predicting material removal rate when the target was discretized into a varied number of classes (two, three, four, and five classes) by three discretization methods. There were a total of 36 distinct combinations when learning algorithms, discretization methods, and number of classes in the target are combined. All of these 36 models have been developed and evaluated based on the prediction accuracy. From this research, a total of 21 models were found to be suitable for WEDM, with prediction accuracies ranging from 71.43% to 100%.
The models identified in the current study not only achieved better prediction accuracy than previous studies, but also allow users much better control over WEDM than was previously possible. Application of inductive learning and development of suitable predictive models for WEDM by incorporating a varied number of classes in the target, different learning algorithms, and different discretization methods are the major contributions of this research.
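The recipe at the core of this dissertation, discretizing the continuous machining-rate target into classes and then inducing a decision-tree classifier, can be illustrated with a minimal sketch. The sample values below are invented, and a one-level decision stump stands in for the C5.0/CART/CHAID learners used in the study.

```python
from statistics import mode

# Hypothetical WEDM samples: (pulse-on-time setting, material removal rate).
# All values are invented for illustration.
samples = [(2, 0.8), (3, 1.1), (4, 1.9), (5, 2.4), (6, 3.0), (7, 3.6)]

def discretize(rates, n_classes):
    """Equal-width binning of the continuous target into labels 0..n_classes-1."""
    lo, hi = min(rates), max(rates)
    width = (hi - lo) / n_classes
    return [min(int((r - lo) / width), n_classes - 1) for r in rates]

def induce_stump(xs, ys):
    """One-level decision tree: pick the split threshold on x that minimizes
    misclassifications, labeling each side by its majority class."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = mode(left), mode(right)
        err = sum(y != lm for y in left) + sum(y != rm for y in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    return best[1:]  # (threshold, class if x <= threshold, class if x > threshold)

classes = discretize([r for _, r in samples], n_classes=2)
threshold, low_class, high_class = induce_stump([x for x, _ in samples], classes)
print(threshold, low_class, high_class)  # → 4 0 1
```

With a finer-grained discretization (more classes) the same idea generalizes to deeper trees; the study's point is that the choice of discretization method and class count materially affects the resulting prediction accuracy.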
APA, Harvard, Vancouver, ISO, and other styles
43

Mansour, Tarek M. Eng Massachusetts Institute of Technology. "Deep neural networks are lazy : on the inductive bias of deep learning." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121680.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 75-78).
Deep learning models exhibit superior generalization performance despite being heavily overparametrized. Although widely observed in practice, there is currently very little theoretical backing for such a phenomenon. In this thesis, we propose a step forward towards understanding generalization in deep learning. We present evidence that deep neural networks have an inherent inductive bias that makes them inclined to learn generalizable hypotheses and avoid memorization. In this respect, we propose results that suggest that the inductive bias stems from neural networks being lazy: they tend to learn simpler rules first. We also propose a definition of simplicity in deep learning based on the implicit priors ingrained in deep neural networks.
by Tarek Mansour.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
44

Park, Sae Bin. "The Process of Inductive Learning in Spaced, Massed, Interleaved, and Desirable Difficulty Conditions." Scholarship @ Claremont, 2012. http://scholarship.claremont.edu/cmc_theses/322.

Full text
Abstract:
One way people enhance their learning is through a desirable difficulty that makes the learning phase more difficult. The present research was devised to further explore these results and test the hypothesis that desirable difficulties benefit inductive learning by helping people engage in deeper processing strategies. In this experiment, participants studied different butterfly species that were presented in either a clear or a blurry manner, a perceptual disfluency manipulation. All participants were exposed to the interleaved and blocked conditions (within subjects); there was also a between-subjects condition of fluent vs. disfluent. I hypothesized that subjects would perform better when presented with disfluency (blurry pictures) because people would be able to engage in deeper processing strategies. The results supported my hypothesis that desirable difficulties benefit inductive learning by engaging the subject in deeper processing.
APA, Harvard, Vancouver, ISO, and other styles
45

Lu, Ying. "Transfer Learning for Image Classification." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEC045/document.

Full text
Abstract:
When learning a classification model for a new target domain with only a small number of training samples, applying machine learning algorithms generally yields over-fitted classifiers with poor generalization ability. On the other hand, collecting a sufficient number of manually labeled training samples can prove very expensive. Transfer learning methods aim to solve this kind of problem by transferring knowledge from a related source domain, which contains much more data, to facilitate classification in the target domain. Depending on the assumptions made about the target and source domains, transfer learning can be divided into three categories: inductive transfer learning, transductive transfer learning (domain adaptation), and unsupervised transfer learning. We focus on the first, which assumes that the target task and the source task are different but related. More specifically, we assume that both the target task and the source task are classification tasks, while the target categories and the source categories are different but related. We propose two different methods to tackle this problem. In the first work, we propose a new discriminative transfer learning method, namely DTL (Discriminative Transfer Learning), which combines a series of hypotheses made both by the model learned from the target samples and by additional models learned from samples of the source categories. Specifically, we use the sparse reconstruction residual as a basic discriminant and enhance its discriminative power by comparing two residuals, one from a positive dictionary and one from a negative dictionary.
On this basis, we exploit similarities and dissimilarities by choosing both positively correlated and negatively correlated source categories to form additional dictionaries. A new cost function based on the Wilcoxon-Mann-Whitney statistic is proposed for choosing the additional dictionaries with unbalanced data. In addition, two parallel boosting processes are applied to both the positive and the negative data distributions to further improve classifier performance. On two different image classification databases, the proposed DTL consistently outperforms other state-of-the-art knowledge transfer methods while maintaining a very efficient running time. In the second work, we combine the power of optimal transport (OT) and deep neural networks (DNN) to solve the ITL problem. Specifically, we propose a new method for jointly fine-tuning a neural network with source data and target data. By adding an optimal transport loss (OT loss) between the predictions of the source and target classifiers as a constraint on the source classifier, the proposed JTLN (Joint Transfer Learning Network) can effectively learn knowledge useful for target classification from the source data. Furthermore, by using different metrics as the cost matrix for the OT loss, JTLN can incorporate different prior knowledge about the relationship between the target and source categories. We carried out experiments with an AlexNet-based JTLN on image classification datasets, and the results verify the effectiveness of the proposed JTLN.
To our knowledge, the proposed JTLN is the first work to tackle ITL with deep neural networks (DNN) while incorporating prior knowledge about the relationship between the target and source categories.
When learning a classification model for a new target domain with only a small number of training samples, brute-force application of machine learning algorithms generally leads to over-fitted classifiers with poor generalization ability. On the other hand, collecting a sufficient number of manually labeled training samples may prove very expensive. Transfer Learning methods aim to solve this kind of problem by transferring knowledge from a related source domain, which has much more data, to help classification in the target domain. Depending on the assumptions made about the target and source domains, transfer learning can be divided into three categories: Inductive Transfer Learning (ITL), Transductive Transfer Learning (Domain Adaptation) and Unsupervised Transfer Learning. We focus on the first, which assumes that the target task and source task are different but related. More specifically, we assume that both the target task and the source task are classification tasks, while the target categories and source categories are different but related. We propose two different methods to approach this ITL problem. In the first work we propose a new discriminative transfer learning method, namely DTL (Discriminative Transfer Learning), combining a series of hypotheses made by both the model learned with target training samples and the additional models learned with source category samples. Specifically, we use the sparse reconstruction residual as a basic discriminant, and enhance its discriminative power by comparing two residuals from a positive and a negative dictionary. On this basis, we make use of similarities and dissimilarities by choosing both positively correlated and negatively correlated source categories to form additional dictionaries. A new cost function based on the Wilcoxon-Mann-Whitney statistic is proposed to choose the additional dictionaries with unbalanced training data. 
Also, two parallel boosting processes are applied to the positive and negative data distributions to further improve classifier performance. On two different image classification databases, the proposed DTL consistently outperforms other state-of-the-art transfer learning methods, while maintaining a very efficient runtime. In the second work we combine the power of Optimal Transport and Deep Neural Networks to tackle the ITL problem. Specifically, we propose a novel method to jointly fine-tune a Deep Neural Network with source data and target data. By adding an Optimal Transport loss (OT loss) between source and target classifier predictions as a constraint on the source classifier, the proposed Joint Transfer Learning Network (JTLN) can effectively learn useful knowledge for target classification from source data. Furthermore, by using different kinds of metrics as the cost matrix for the OT loss, JTLN can incorporate different prior knowledge about the relatedness between target categories and source categories. We carried out experiments with JTLN based on AlexNet on image classification datasets, and the results verify the effectiveness of the proposed JTLN in comparison with standard consecutive fine-tuning. To the best of our knowledge, the proposed JTLN is the first work to tackle ITL with Deep Neural Networks while incorporating prior knowledge on the relatedness between target and source categories. This Joint Transfer Learning with OT loss is general and can also be applied to other kinds of Neural Networks.
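The OT-loss constraint described in this abstract can be sketched in a few lines. The following is a minimal, illustrative NumPy implementation of an entropy-regularized OT cost (computed via Sinkhorn iterations), not the authors' code: the distributions, the cost matrix `C`, and the regularization value are all assumed for the example.

```python
import numpy as np

def sinkhorn_ot(p, q, C, reg=0.1, n_iters=200):
    """Entropy-regularized optimal transport between discrete
    distributions p and q under ground cost matrix C.
    Returns the transport cost <T, C> of the converged plan T."""
    K = np.exp(-C / reg)              # Gibbs kernel of the cost matrix
    u = np.ones_like(p)
    for _ in range(n_iters):
        v = q / (K.T @ u)             # scale columns to match marginal q
        u = p / (K @ v)               # scale rows to match marginal p
    T = u[:, None] * K * v[None, :]   # transport plan
    return float(np.sum(T * C))

# Toy setup: 3 source categories, 2 target categories.
# C[i, j] is an assumed dissimilarity between source class i and target class j.
p = np.array([0.5, 0.3, 0.2])         # mean source classifier predictions
q = np.array([0.6, 0.4])              # mean target classifier predictions
C = np.array([[0.1, 0.9],
              [0.8, 0.2],
              [0.5, 0.5]])
loss = sinkhorn_ot(p, q, C)
```

In a JTLN-style setup, `p` and `q` would be batch-averaged source and target predictions, and this cost would be added to the training objective; minimizing it pulls the two prediction distributions together under the chosen ground metric.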
APA, Harvard, Vancouver, ISO, and other styles
46

Mallen, Jason. "Utilising incomplete domain knowledge in an information theoretic guided inductive knowledge discovery algorithm." Thesis, University of Portsmouth, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295773.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Torres, Padilla Juan Pablo. "Inductive Program Synthesis with a Type System." Thesis, Uppsala universitet, Informationssystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385282.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Mamer, Thierry. "A sequence-length sensitive approach to learning biological grammars using inductive logic programming." Thesis, Robert Gordon University, 2011. http://hdl.handle.net/10059/662.

Full text
Abstract:
This thesis aims to investigate whether the ideas behind compression principles, such as the Minimum Description Length (MDL) principle, can help us to improve the process of learning biological grammars from protein sequences using Inductive Logic Programming (ILP). Contrary to most traditional ILP learning problems, biological sequences often vary greatly in their length. This variation in length is an important feature of biological sequences which should not be ignored by ILP systems. However, we have identified that some ILP systems do not take into account the length of examples when evaluating their proposed hypotheses. During the learning process, many ILP systems use clause evaluation functions to assign a score to induced hypotheses, estimating their quality and effectively influencing the search. Traditionally, clause evaluation functions do not take into account the length of the examples which are covered by the clause. We propose L-modification, a way of modifying existing clause evaluation functions so that they take into account the length of the examples from which they learn. An empirical study was undertaken to investigate whether significant improvements can be achieved by applying L-modification to a standard clause evaluation function. Furthermore, we investigated more generally how ILP systems cope with the length of examples in training data. We show that our L-modified clause evaluation function outperforms our benchmark function in every experiment we conducted, and thus we demonstrate that L-modification is a useful concept. We also show that the length of the examples in the training data used by ILP systems has an undeniable impact on the results.
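The core idea of L-modification, scoring clauses by the length of the sequences they cover rather than by a unit count per example, can be shown with a toy sketch. This is our illustration, not the thesis's actual evaluation function; the function names and toy sequences are assumptions.

```python
def unit_count_score(pos_covered, neg_covered):
    """Baseline coverage score: covered positives minus covered negatives,
    each example counting as one unit regardless of its length."""
    return len(pos_covered) - len(neg_covered)

def l_modified_score(pos_covered, neg_covered):
    """Length-sensitive variant: each covered sequence contributes its
    length, so explaining one long sequence outweighs several short ones."""
    return sum(len(s) for s in pos_covered) - sum(len(s) for s in neg_covered)

pos = ["MKVLAT", "MK"]   # toy positive protein sequences covered by a clause
neg = ["MKVL"]           # toy negative sequence covered by the same clause
baseline = unit_count_score(pos, neg)   # 2 - 1 = 1
modified = l_modified_score(pos, neg)   # (6 + 2) - 4 = 4
```

Under the baseline, covering the long positive and the short one is worth the same; under the length-sensitive score the long sequence dominates, which is the kind of bias the thesis argues matters for biological data.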
APA, Harvard, Vancouver, ISO, and other styles
49

Yu, Ting. "Incorporating prior domain knowledge into inductive machine learning: its implementation in contemporary capital markets." University of Technology, Sydney. Faculty of Information Technology, 2007. http://hdl.handle.net/2100/385.

Full text
Abstract:
An ideal inductive machine learning algorithm produces a model that best approximates an underlying target function at reasonable computational cost. This requires the resultant model to be consistent with the training data and to generalize well to unseen data. Regular inductive machine learning algorithms rely heavily on numerical data as well as general-purpose inductive bias. However, certain environments contain rich domain knowledge prior to the learning task, and it is not easy for regular inductive learning algorithms to utilize such prior domain knowledge. This thesis discusses and analyzes various methods of incorporating prior domain knowledge into inductive machine learning through three key issues: consistency, generalization and convergence. Additionally, three new methods are proposed and tested over data sets collected from capital markets. These methods utilize financial knowledge collected from various sources, such as experts and research papers, to facilitate the learning process of kernel methods (emerging inductive learning algorithms). The test results are encouraging and demonstrate that prior domain knowledge is valuable to inductive learning machines.
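One standard way to inject prior knowledge into kernel methods (the family of learners used in this thesis) is to mix a data-driven kernel with a knowledge-derived one; a convex combination of valid kernels is itself a valid kernel. The sketch below is generic and illustrative: the "knowledge" kernel, which trusts only the first feature, and the weight `alpha` are our assumptions, not the thesis's three methods.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Generic data-driven similarity: Gaussian RBF kernel."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def knowledge_kernel(X, Y):
    """Hypothetical prior-knowledge kernel: a domain expert believes only
    the first feature matters, so similarity depends on it alone."""
    return np.exp(-np.abs(X[:, :1] - Y[:, :1].T))

def combined_kernel(X, Y, alpha=0.7):
    """A convex combination of two positive semi-definite kernels is PSD,
    so the result can be fed directly to an SVM or other kernel method."""
    return alpha * rbf_kernel(X, Y) + (1 - alpha) * knowledge_kernel(X, Y)

X = np.array([[0.0, 1.0],
              [1.0, 0.0]])
K = combined_kernel(X, X)   # symmetric PSD Gram matrix with unit diagonal
```

The combined Gram matrix can then be passed to any kernel learner that accepts precomputed kernels; tuning `alpha` trades off trust in the data against trust in the expert.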
APA, Harvard, Vancouver, ISO, and other styles
50

Westendorp, James Computer Science & Engineering Faculty of Engineering UNSW. "Robust incremental relational learning." Awarded by: University of New South Wales. Computer Science & Engineering, 2009. http://handle.unsw.edu.au/1959.4/43513.

Full text
Abstract:
Real-world learning tasks present a range of issues for learning systems. Learning tasks can be complex and the training data noisy. When operating as part of a larger system, there may be limitations on available memory and computational resources. Learners may also be required to provide results from a stream. This thesis investigates the problem of incremental, relational learning from imperfect data with constrained time and memory resources. The learning process involves incremental update of a theory when an example is presented that contradicts the theory. Contradictions occur if there is an incorrect theory or noisy data. The learner cannot discriminate between the two possibilities, so both are considered and the better possibility used. Additionally, all changes to the theory must have support from multiple examples. These two principles allow learning from imperfect data. The Minimum Description Length principle is used for selection between possible worlds and determining appropriate levels of additional justification. A new encoding scheme allows the use of MDL within the framework of Inductive Logic Programming. Examples must be stored to provide additional justification for revisions without violating resource requirements. A new algorithm determines when to discard examples, minimising total usage while ensuring sufficient storage for justifications. Searching for revisions is the most computationally expensive part of the process, yet not all searches are successful. Another new algorithm uses a notion of theory stability as a guide to occasionally disallow entire searches to reduce overall time. The approach has been implemented as a learner called NILE. Empirical tests include two challenging domains where this type of learner acts as one component of a larger task. The first of these involves recognition of behavior activation conditions in another agent as part of an opponent modeling task. 
The second, more challenging task is learning to identify objects in visual images by recognising relationships between image features. These experiments highlight NILE's strengths and limitations, as well as providing new domains for future work in ILP.
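The MDL-based choice NILE faces on a contradiction, revise the theory or record the example as noise, can be sketched as a comparison of total description lengths. All bit costs here are invented for illustration; the thesis defines an actual encoding scheme for ILP theories.

```python
def total_description_length(theory_bits, exception_bits):
    """MDL objective: bits to encode the theory plus bits to list every
    contradicting example as an exception (i.e. as noise)."""
    return theory_bits + sum(exception_bits)

# A new example contradicts the current 40-bit theory.
# Option A: keep the theory and pay 12 assumed bits to record the exception.
keep = total_description_length(theory_bits=40.0, exception_bits=[12.0])
# Option B: revise the theory into a larger 48-bit one covering the example.
revise = total_description_length(theory_bits=48.0, exception_bits=[])
choice = "revise" if revise < keep else "keep"   # pick the shorter encoding
```

Here revising wins (48 bits beats 40 + 12 bits); had the revision cost more than the theory-plus-exception encoding, the example would be treated as noise, which is how MDL lets an incremental learner cope with imperfect data without a separate noise threshold.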
APA, Harvard, Vancouver, ISO, and other styles