Dissertations / Theses on the topic 'Inductive learning'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Inductive learning.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Ray, Oliver. "Hybrid abductive inductive learning." Thesis, Imperial College London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.428111.
Pascoe, James. "The evolution of 'Boxes' to quantized inductive learning: a study in inductive learning." Thesis, This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-12172008-063016/.
林謀楷 and Mau-kai Lam. "Inductive machine learning with bias." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31212426.
Heinz, Jeffrey Nicholas. "Inductive learning of phonotactic patterns." Diss., Restricted to subscribing institutions, 2007. http://proquest.umi.com/pqdweb?did=1467886191&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.
Tappert, Peter M. "Damage identification using inductive learning." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-05092009-040651/.
Chu, Mabel. "Constructing transformation rules for inductive learning." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0023/MQ51055.pdf.
Kit, Chun Yu. "Unsupervised lexical learning as inductive inference." Thesis, University of Sheffield, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340205.
Law, Mark. "Inductive learning of answer set programs." Thesis, Imperial College London, 2018. http://hdl.handle.net/10044/1/64824.
Adjodah, Dhaval D. K. (Adjodah Dhaval Dhamnidhi Kumar). "Social inductive biases for reinforcement learning." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/128415.
Full textCataloged from the official PDF of thesis. "The Table of Contents does not accurately represent the page numbering"--Disclaimer page.
Includes bibliographical references (pages 117-126).
How can we build machines that collaborate and learn more seamlessly with humans, and with each other? How do we create fairer societies? How do we minimize the impact of information manipulation campaigns, and fight back? How do we build machine learning algorithms that are more sample-efficient when learning from each other's sparse data, and under time constraints? At the root of these questions is a simple one: how do agents, human or machine, learn from each other, and can we improve it and apply it to new domains? The cognitive and social sciences have provided innumerable insights into how people learn from data using both passive observation and experimental intervention. Similarly, the statistics and machine learning communities have formalized learning as a rigorous and testable computational process.
There is a growing movement to apply insights from the cognitive and social sciences to improving machine learning, as well as opportunities to use machine learning as a sandbox to test, simulate and expand ideas from the cognitive and social sciences. A less researched but fertile part of this intersection is the modeling of social learning: past work has focused on how agents can learn from the 'environment', and less work borrows from both communities to examine how agents learn from each other. This thesis presents novel contributions concerning the nature and usefulness of social learning as an inductive bias for reinforcement learning.
I start by presenting the results from two large-scale online human experiments: first, I observe Dunbar cognitive limits that shape and limit social learning in two different social trading platforms, with the additional contribution that synthetic financial bots that transcend human limitations can obtain higher profits even when using naive trading strategies. Second, I devise a novel online experiment to observe how people, at the individual level, update their belief of future financial asset prices (e.g. S&P 500 and Oil prices) from social information. I model such social learning using Bayesian models of cognition, and observe that people make strong distributional assumptions on the social data they observe (e.g. assuming that the likelihood data is unimodal).
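The Bayesian belief-update model the abstract describes can be illustrated with a toy conjugate-Gaussian sketch. The assumption of a Gaussian (unimodal) likelihood for peers' predictions mirrors the distributional assumption participants reportedly made; all numbers here are invented for illustration, not taken from the experiment:

```python
import numpy as np

# An agent holds a Gaussian belief over a future asset price and updates it
# from peers' predictions, assuming the social data come from a unimodal
# Gaussian likelihood. Toy values throughout.
prior_mean, prior_var = 2000.0, 100.0        # prior belief about the price
peer_predictions = np.array([2040.0, 2055.0, 2060.0])
likelihood_var = 400.0                       # assumed noise in peers' guesses

# Conjugate Gaussian update: precision-weighted average of prior and data.
n = len(peer_predictions)
post_var = 1.0 / (1.0 / prior_var + n / likelihood_var)
post_mean = post_var * (prior_mean / prior_var +
                        peer_predictions.sum() / likelihood_var)

print(round(post_mean, 1))  # belief shifts toward the peers' consensus
```

The posterior mean lands between the prior and the peer consensus, with the pull toward the peers growing as more predictions arrive or as the assumed likelihood noise shrinks.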
I was fortunate to collect one round of predictions during the Brexit market instability, and find that social learning leads to higher performance than learning from the underlying price history (the environment) during such volatile times. Having observed the cognitive limits and biases people exhibit when learning from other agents, I present a motivational example of the strength of inductive biases in reinforcement learning: I implement a learning model with a relational inductive bias that pre-processes the environment state into a set of relationships between entities in the world. I observe strong improvements in performance and sample efficiency, and even find the learned relationships to be strongly interpretable.
Finally, given that most modern deep reinforcement learning algorithms are distributed (in that they have separate learning agents), I investigate the hypothesis that viewing deep reinforcement learning as a social learning distributed search problem could lead to strong improvements. I do so by creating a fully decentralized, sparsely-communicating and scalable learning algorithm, and observe strong learning improvements with lower communication bandwidth usage (between learning agents) when using communication topologies that naturally evolved due to social learning in humans. Additionally, I provide a theoretical upper bound (that agrees with our empirical results) regarding which communication topologies lead to the largest learning performance improvement.
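The decentralized, sparsely-communicating setup described above can be sketched as agents that take local optimization steps and mix parameters only with neighbors on a fixed communication topology. This is a generic illustration under stated assumptions (a toy quadratic objective, a ring topology), not the thesis's algorithm:

```python
import numpy as np

# Sparsely-communicating decentralized learners: each agent optimizes its own
# parameter estimate and periodically averages with its neighbours on a fixed
# communication graph, with no global broadcast. Toy objective and topology.
rng = np.random.default_rng(0)
n_agents, dim = 8, 4
target = np.ones(dim)                      # optimum of the shared objective

# Ring topology: each agent communicates with only two neighbours.
neighbours = {i: [(i - 1) % n_agents, (i + 1) % n_agents]
              for i in range(n_agents)}

params = rng.normal(size=(n_agents, dim))  # independent initialisations

for step in range(200):
    # Local gradient step on the quadratic objective ||params - target||^2.
    params -= 0.05 * 2 * (params - target)
    # Sparse communication: convex mixing with neighbours only.
    mixed = params.copy()
    for i, nbrs in neighbours.items():
        mixed[i] = (params[i] + params[nbrs].sum(axis=0)) / (1 + len(nbrs))
    params = mixed

print(np.allclose(params, target, atol=1e-3))
```

Despite each agent seeing only two peers, all agents converge to the shared optimum; the choice of mixing graph governs how quickly information diffuses, which is the knob the thesis's topology results concern.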
Given a future increasingly filled with decentralized autonomous machine learning systems that interact with humans, there is an increasing need to understand social learning to build resilient, scalable and effective learning systems, and this thesis provides insights into how to build such systems.
Ph.D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences.
Shi, Guang. "Inductive learning in network fault diagnosis." Dissertation, Carleton University, Department of Systems and Computer Engineering, Ottawa, 1994.
Grant, Timothy John. "Inductive learning of knowledge-based planning operators." [Maastricht: Rijksuniversiteit Limburg]; University Library, Maastricht University [Host], 1996. http://arno.unimaas.nl/show.cgi?fid=6686.
Swersky, Kevin. "Inductive principles for learning Restricted Boltzmann Machines." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/27816.
Tang, M. X. "Knowledge-based design support and inductive learning." Thesis, University of Edinburgh, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.662724.
Brown, Martin Richard. "Inductive learning with uncertainty for image processing." Thesis, De Montfort University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391102.
Hanna, S. "Addressing complex design problems through inductive learning." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1353781/.
Torgo, Luís Fernando Raínho Alves. "Inductive learning of tree-based regression models." Doctoral thesis, Universidade do Porto. Reitoria, 1999. http://hdl.handle.net/10216/10018.
This thesis explores different aspects of the methodology for inducing regression trees from data samples. The main goal of the study is to improve the predictive accuracy of regression trees while preserving, as far as possible, their comprehensibility and computational efficiency. The study of this type of regression model is divided into three main parts. The first part describes in detail two methodologies for growing regression trees: one that minimizes the mean squared error and one that minimizes the mean absolute deviation. The analysis focuses primarily on the computational efficiency of the tree-growing process, and several new algorithms are presented that yield significant gains in computational efficiency. Finally, an experimental comparison of the two alternative methodologies is presented, clearly showing the different practical objectives of each. Pruning regression trees is a standard procedure in this type of methodology, whose main goal is to provide a better compromise between the simplicity and comprehensibility of the trees and their predictive accuracy. The second part of the dissertation describes a series of new pruning techniques based on selecting from a set of alternative pruned trees, together with an extensive set of experiments comparing different methods for pruning regression trees. The results of this comparison, carried out on a large set of problems, show that the proposed pruning techniques achieve significantly better predictive accuracy than current state-of-the-art methods. The final part of the dissertation presents a new type of tree, called local regression trees.
These hybrid models result from integrating regression trees with local modelling techniques ...
Morris, William C. "Emergent grammatical relations: an inductive learning system." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1998. http://wwwlib.umi.com/cr/ucsd/fullcit?p9828973.
Lukac, Martin. "Quantum Inductive Learning and Quantum Logic Synthesis." PDXScholar, 2009. https://pdxscholar.library.pdx.edu/open_access_etds/2319.
Skabar, Andrew Alojz. "Inductive learning techniques for mineral potential mapping." Thesis, Queensland University of Technology, 2001.
Lam, Mau-kai. "Inductive machine learning with bias." Hong Kong: University of Hong Kong, 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13972558.
Griffiths, Anthony D. "Inductive generalisation in case-based reasoning systems." Thesis, University of York, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336844.
Full textPettersson, Emil. "Meta-Interpretive Learning Versus Inductive Metalogic Programming : A Comparative Analysis in Inductive Logic Programming." Thesis, Uppsala universitet, Institutionen för informatik och media, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-393291.
Bosch, Antal P. J. van den. "Learning to pronounce written words: a study in inductive language learning." Cadier en Keer: Maastricht: Phidippides; University Library, Maastricht University [Host], 1997. http://arno.unimaas.nl/show.cgi?fid=5918.
Alrajeh, Dalal S. "Requirements Elaboration using Model Checking and Inductive Learning." Thesis, Imperial College London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.511853.
Snyders, Sean. "Inductive machine learning bias in knowledge-based neurocomputing." Thesis, Stellenbosch: Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53463.
ENGLISH ABSTRACT: The integration of symbolic knowledge with artificial neural networks is becoming an increasingly popular paradigm for solving real-world problems. This paradigm, named knowledge-based neurocomputing, provides means for using prior knowledge to determine the network architecture, to program a subset of weights to induce a learning bias which guides network training, and to extract refined knowledge from trained neural networks. The role of neural networks then becomes that of knowledge refinement. It thus provides a methodology for dealing with uncertainty in the initial domain theory. In this thesis, we address several advantages of this paradigm and propose a solution for the open question of determining the strength of this learning, or inductive, bias. We develop a heuristic for determining the strength of the inductive bias that takes the network architecture, the prior knowledge, the learning method, and the training data into consideration. We apply this heuristic to well-known synthetic problems as well as to published difficult real-world problems in the domains of molecular biology and medical diagnosis. We found that not only do the networks trained with this adaptive inductive bias show superior performance over networks trained with the standard method of determining the strength of the inductive bias, but the refined knowledge extracted from these trained networks also delivers more concise and accurate domain theories.
Zimmermann, Tom. "Inductive Learning and Theory Testing: Applications in Finance." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:17467320.
Tschorn, Patrick. "Incremental inductive logic programming for learning from annotated corpora." Thesis, Lancaster University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.538607.
Gandhi, Sachin. "Learning from a Genetic Algorithm with Inductive Logic Programming." Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1125511501.
Miramontes Hercog, Luis. "Evolutionary and conventional reinforcement learning in multi-agent systems for social simulation." Thesis, London South Bank University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288112.
Ortz, Courtney. "Aging and Associative and Inductive Reasoning Processes in Discrimination Learning." TopSCHOLAR®, 2006. http://digitalcommons.wku.edu/theses/266.
Snyder, Thomas D. "The effects of variability on damage identification with inductive learning." Thesis, Virginia Tech, 1994. http://hdl.handle.net/10919/42160.
MacKendrick, Alex. "Interleaved Effects in Inductive Category Learning: The Role of Memory Retention." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5846.
Full textLo, Chang-yun. "Optimizing ship air-defense evaluation model using simulation and inductive learning." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26678.
Full textDsouza, Michael Dylan. "Fast Static Learning and Inductive Reasoning with Applications to ATPG Problems." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/51591.
Santos, Jose Carlos Almeida Santos. "Efficient learning and evaluation of complex concepts in inductive logic programming." Thesis, Imperial College London, 2010. http://hdl.handle.net/10044/1/6409.
Full textHall, Johan. "MaltParser -- An Architecture for Inductive Labeled Dependency Parsing." Licentiate thesis, Växjö University, School of Mathematics and Systems Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-566.
Full textThis licentiate thesis presents a software architecture for inductive labeled dependency parsing of unrestricted natural language text, which achieves a strict modularization of parsing algorithm, feature model and learning method such that these parameters can be varied independently. The architecture is based on the theoretical framework of inductive dependency parsing by Nivre \citeyear{nivre06c} and has been realized in MaltParser, a system that supports several parsing algorithms and learning methods, for which complex feature models can be defined in a special description language. Special attention is given in this thesis to learning methods based on support vector machines (SVM).
The implementation is validated in three sets of experiments using data from three languages (Chinese, English and Swedish). First, we check whether the implementation realizes the underlying architecture. The experiments show that the MaltParser system outperforms the baseline and satisfies the basic constraints of well-formedness. Furthermore, the experiments show that it is possible to vary parsing algorithm, feature model and learning method independently. Secondly, we focus on the special properties of the SVM interface. It is possible to reduce the learning and parsing time without sacrificing accuracy by dividing the training data into smaller sets, according to the part-of-speech of the next token in the current parser configuration. Thirdly, the last set of experiments presents a broad empirical study that compares SVM to memory-based learning (MBL) with five different feature models, where all combinations have gone through parameter optimization for both learning methods. The study shows that SVM outperforms MBL for more complex and lexicalized feature models with respect to parsing accuracy. There are also indications that SVM, with a splitting strategy, can achieve faster parsing than MBL. The parsing accuracy achieved is the highest reported for the Swedish data set and very close to the state of the art for Chinese and English.
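The splitting strategy described in the abstract (partitioning training data by the part-of-speech of the next token, then training one smaller model per partition) can be sketched as follows. This is an illustrative stand-in, not MaltParser code: a majority-vote stub plays the role of each per-partition SVM, and the toy feature/transition triples are invented:

```python
from collections import Counter, defaultdict

# (features, next_token_pos, transition) training triples -- toy data.
train = [
    (("NN", "DT"), "NN", "SHIFT"),
    (("NN", "DT"), "NN", "SHIFT"),
    (("VB", "NN"), "VB", "LEFT-ARC"),
    (("VB", "NN"), "VB", "LEFT-ARC"),
    (("VB", "NN"), "VB", "SHIFT"),
]

def train_split_models(data):
    """Train one model per POS partition instead of one monolithic model.
    Each partition's 'model' here is simply its majority transition."""
    buckets = defaultdict(Counter)
    for features, pos, transition in data:
        buckets[pos][transition] += 1
    return {pos: counts.most_common(1)[0][0] for pos, counts in buckets.items()}

models = train_split_models(train)
print(models["NN"], models["VB"])
```

Because each partition is much smaller than the full training set, per-partition training (quadratic-or-worse for SVMs) and prediction both get cheaper, which is why the splitting reduces learning and parsing time without hurting accuracy.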
Ferdinand, Vanessa Anne. "Inductive evolution : cognition, culture, and regularity in language." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/11741.
Hayward, Ross. "Analytic and inductive learning in an efficient connectionist rule-based reasoning system." Thesis, Queensland University of Technology, 2001.
Selpi. "An inductive logic programming approach to learning which uORFs regulate gene expression." Thesis, Robert Gordon University, 2008. http://hdl.handle.net/10059/224.
Nechab, Said. "Contributions to the development of safer expert systems and inductive learning algorithms." Thesis, University of Salford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261864.
Konda, Ramesh. "Predicting Machining Rate in Non-Traditional Machining using Decision Tree Inductive Learning." NSUWorks, 2010. http://nsuworks.nova.edu/gscis_etd/199.
Mansour, Tarek. "Deep neural networks are lazy: on the inductive bias of deep learning." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121680.
Full textThesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Deep learning models exhibit superior generalization performance despite being heavily overparametrized. Although widely observed in practice, there is currently very little theoretical backing for this phenomenon. In this thesis, we propose a step forward towards understanding generalization in deep learning. We present evidence that deep neural networks have an inherent inductive bias that makes them inclined to learn generalizable hypotheses and avoid memorization. In this respect, we propose results suggesting that the inductive bias stems from neural networks being lazy: they tend to learn simpler rules first. We also propose a definition of simplicity in deep learning based on the implicit priors ingrained in deep neural networks.
Park, Sae Bin. "The Process of Inductive Learning in Spaced, Massed, Interleaved, and Desirable Difficulty Conditions." Scholarship @ Claremont, 2012. http://scholarship.claremont.edu/cmc_theses/322.
Lu, Ying. "Transfer Learning for Image Classification." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEC045/document.
When learning a classification model for a new target domain with only a small amount of training samples, brute-force application of machine learning algorithms generally leads to over-fitted classifiers with poor generalization skills. On the other hand, collecting a sufficient number of manually labeled training samples may prove very expensive. Transfer Learning methods aim to solve this kind of problem by transferring knowledge from a related source domain, which has much more data, to help classification in the target domain. Depending on different assumptions about the target domain and source domain, transfer learning can be categorized into three categories: Inductive Transfer Learning, Transductive Transfer Learning (Domain Adaptation) and Unsupervised Transfer Learning. We focus on the first, which assumes that the target task and source task are different but related. More specifically, we assume that both the target task and source task are classification tasks, while the target categories and source categories are different but related. We propose two different methods to approach this ITL problem. In the first work we propose a new discriminative transfer learning method, namely DTL, combining a series of hypotheses made by both the model learned with target training samples and the additional models learned with source category samples. Specifically, we use the sparse reconstruction residual as a basic discriminant, and enhance its discriminative power by comparing two residuals from a positive and a negative dictionary. On this basis, we make use of similarities and dissimilarities by choosing both positively correlated and negatively correlated source categories to form additional dictionaries. A new cost function based on the Wilcoxon-Mann-Whitney statistic is proposed to choose the additional dictionaries with unbalanced training data.
Also, two parallel boosting processes are applied to both the positive and negative data distributions to further improve classifier performance. On two different image classification databases, the proposed DTL consistently outperforms other state-of-the-art transfer learning methods, while at the same time maintaining very efficient runtime. In the second work we combine the power of Optimal Transport and Deep Neural Networks to tackle the ITL problem. Specifically, we propose a novel method to jointly fine-tune a Deep Neural Network with source data and target data. By adding an Optimal Transport loss (OT loss) between source and target classifier predictions as a constraint on the source classifier, the proposed Joint Transfer Learning Network (JTLN) can effectively learn useful knowledge for target classification from source data. Furthermore, by using different kinds of metrics as the cost matrix for the OT loss, JTLN can incorporate different prior knowledge about the relatedness between target categories and source categories. We carried out experiments with JTLN based on AlexNet on image classification datasets, and the results verify the effectiveness of the proposed JTLN in comparison with standard consecutive fine-tuning. To the best of our knowledge, the proposed JTLN is the first work to tackle ITL with Deep Neural Networks while incorporating prior knowledge on the relatedness between target and source categories. This joint transfer learning with OT loss is general and can also be applied to other kinds of neural networks.
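The OT loss mentioned above can be illustrated in miniature: it measures the transport cost between the source and target classifiers' predicted class distributions under a cost matrix encoding category relatedness. For general cost matrices (as in JTLN) this requires solving a small linear program, but in the special case of ordered categories with an absolute-difference cost, the optimum has a closed form: the sum of absolute CDF differences (1-D Wasserstein-1). This sketch uses that special case with invented toy distributions:

```python
import numpy as np

def wasserstein1(p, q):
    """Exact 1-D optimal transport cost between two histograms on
    unit-spaced bins: sum of absolute differences of their CDFs."""
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

p_source = np.array([0.7, 0.2, 0.1])   # source classifier prediction (toy)
p_target = np.array([0.6, 0.3, 0.1])   # target classifier prediction (toy)

loss = wasserstein1(p_source, p_target)
print(round(loss, 3))
```

Unlike a KL divergence, this loss is sensitive to how far apart categories are, which is exactly the hook that lets prior knowledge about category relatedness enter the training objective.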
Mallen, Jason. "Utilising incomplete domain knowledge in an information theoretic guided inductive knowledge discovery algorithm." Thesis, University of Portsmouth, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295773.
Torres Padilla, Juan Pablo. "Inductive Program Synthesis with a Type System." Thesis, Uppsala universitet, Informationssystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385282.
Mamer, Thierry. "A sequence-length sensitive approach to learning biological grammars using inductive logic programming." Thesis, Robert Gordon University, 2011. http://hdl.handle.net/10059/662.
Yu, Ting. "Incorporating prior domain knowledge into inductive machine learning: its implementation in contemporary capital markets." University of Technology, Sydney, Faculty of Information Technology, 2007. http://hdl.handle.net/2100/385.
Westendorp, James. "Robust incremental relational learning." University of New South Wales, Computer Science & Engineering, 2009. http://handle.unsw.edu.au/1959.4/43513.