Academic literature on the topic 'Inductive supervised learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Inductive supervised learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Inductive supervised learning"

1

Wu, Haiping, Khimya Khetarpal, and Doina Precup. "Self-Supervised Attention-Aware Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10311–19. http://dx.doi.org/10.1609/aaai.v35i12.17235.

Full text
Abstract:
Visual saliency has emerged as a major visualization tool for interpreting deep reinforcement learning (RL) agents. However, much of the existing research uses it as an analyzing tool rather than an inductive bias for policy learning. In this work, we use visual attention as an inductive bias for RL agents. We propose a novel self-supervised attention learning approach which can 1. learn to select regions of interest without explicit annotations, and 2. act as a plug for existing deep RL methods to improve the learning performance. We empirically show that the self-supervised attention-aware deep RL methods outperform the baselines in the context of both the rate of convergence and performance. Furthermore, the proposed self-supervised attention is not tied with specific policies, nor restricted to a specific scene. We posit that the proposed approach is a general self-supervised attention module for multi-task learning and transfer learning, and empirically validate the generalization ability of the proposed method. Finally, we show that our method learns meaningful object keypoints highlighting improvements both qualitatively and quantitatively.
APA, Harvard, Vancouver, ISO, and other styles
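The core idea of this paper, using visual attention as an inductive bias rather than an analysis tool, can be pictured with a minimal sketch (shapes, names, and values here are invented for illustration, not taken from the paper): a learned saliency vector is softmax-normalised and used to gate the observation before the policy sees it.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attended_observation(obs, attn_logits):
    """Gate each observation region by a learned attention weight, so the
    downstream policy mostly sees the salient regions (the inductive bias)."""
    return obs * softmax(attn_logits)

obs = np.ones(4)                               # four image regions, equal signal
attn_logits = np.array([4.0, 0.0, 0.0, 0.0])   # attention strongly favours region 0
gated = attended_observation(obs, attn_logits)
```

In the paper the attention weights are themselves learned without annotations; here they are fixed only to show how the gating plugs in front of an existing policy.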
2

Bisio, Federica, Sergio Decherchi, Paolo Gastaldo, and Rodolfo Zunino. "Inductive bias for semi-supervised extreme learning machine." Neurocomputing 174 (January 2016): 154–67. http://dx.doi.org/10.1016/j.neucom.2015.04.104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hovsepian, Karen, Peter Anselmo, and Subhasish Mazumdar. "Supervised inductive learning with Lotka–Volterra derived models." Knowledge and Information Systems 26, no. 2 (January 16, 2010): 195–223. http://dx.doi.org/10.1007/s10115-009-0280-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Juan, Liu, and Li Weihua. "A hybrid genetic algorithm for supervised inductive learning." Wuhan University Journal of Natural Sciences 1, no. 3-4 (December 1996): 611–16. http://dx.doi.org/10.1007/bf02900895.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

B, Amarnath, and S. Appavu alias Balamurugan. "Feature Selection for Supervised Learning via Dependency Analysis." Journal of Computational and Theoretical Nanoscience 13, no. 10 (October 1, 2016): 6885–91. http://dx.doi.org/10.1166/jctn.2016.5642.

Full text
Abstract:
A new feature selection method based on Inductive probability is proposed in this paper. The main idea is to find the dependent attributes and remove the redundant ones among them. The technology to obtain the dependency needed is based on Inductive probability approach. The purpose of the proposed method is to reduce the computational complexity and increase the classification accuracy of the selected feature subsets. The dependence between two attributes is determined based on the probabilities of their joint values that contribute to positive and negative classification decisions. If there is an opposing set of attribute values that do not lead to opposing classification decisions (zero probability), the two attributes are considered independent, otherwise dependent. One of them can be removed and thus the number of attributes is reduced. A new attribute selection algorithm with Inductive probability is implemented and evaluated through extensive experiments, comparing with related attribute selection algorithms over eight datasets such as Molecular Biology, Connect4, Soybean, Zoo, Ballon, Mushroom, Lenses and Fictional from UCI Machine Learning Repository databases.
APA, Harvard, Vancouver, ISO, and other styles
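The dependency criterion described in the abstract can be roughly illustrated as follows (our own loose reading, not the authors' code): count, for every joint value of two attributes, how often it occurs with each class; if every joint value commits to a single classification decision, the pair is treated as dependent, and one attribute of the pair becomes a removal candidate.

```python
from collections import defaultdict

def joint_class_counts(col_a, col_b, labels):
    """Map each joint value of two attributes to its [negative, positive] counts."""
    counts = defaultdict(lambda: [0, 0])
    for a, b, y in zip(col_a, col_b, labels):
        counts[(a, b)][y] += 1
    return counts

def dependent(col_a, col_b, labels):
    """Loose reading of the paper's criterion: the attributes are dependent
    when every joint value leads to only one classification decision."""
    counts = joint_class_counts(col_a, col_b, labels)
    return all(min(c) == 0 for c in counts.values())

# toy data: B duplicates A, so the joint values fully determine the class
a, b, y = [0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1]
```

With these toy columns `dependent(a, b, y)` holds, so one of the two attributes could be dropped to shrink the feature set.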
6

Zhu, Ruifeng, Fadi Dornaika, and Yassine Ruichek. "Inductive semi-supervised learning with Graph Convolution based regression." Neurocomputing 434 (April 2021): 315–22. http://dx.doi.org/10.1016/j.neucom.2020.12.084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yang, Shuyi, Dino Ienco, Roberto Esposito, and Ruggero G. Pensa. "ESA☆: A generic framework for semi-supervised inductive learning." Neurocomputing 447 (August 2021): 102–17. http://dx.doi.org/10.1016/j.neucom.2021.03.051.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dornaika, F., R. Dahbi, A. Bosaghzadeh, and Y. Ruichek. "Efficient dynamic graph construction for inductive semi-supervised learning." Neural Networks 94 (October 2017): 192–203. http://dx.doi.org/10.1016/j.neunet.2017.07.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Zhao, Lei Jia, Mingbo Zhao, Qiaolin Ye, Min Zhang, and Meng Wang. "Adaptive non-negative projective semi-supervised learning for inductive classification." Neural Networks 108 (December 2018): 128–45. http://dx.doi.org/10.1016/j.neunet.2018.07.017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tian, Xilan, Gilles Gasso, and Stéphane Canu. "A multiple kernel framework for inductive semi-supervised SVM learning." Neurocomputing 90 (August 2012): 46–58. http://dx.doi.org/10.1016/j.neucom.2011.12.036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Inductive supervised learning"

1

Khalid, Fahad. "Measure-based Learning Algorithms : An Analysis of Back-propagated Neural Networks." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4795.

Full text
Abstract:
In this thesis we present a theoretical investigation of the feasibility of using a problem specific inductive bias for back-propagated neural networks. We argue that if a learning algorithm is biased towards optimizing a certain performance measure, it is plausible to assume that it will generate a higher performance score when evaluated using that particular measure. We use the term measure function for a multi-criteria evaluation function that can also be used as an inherent function in learning algorithms, in order to customize the bias of a learning algorithm for a specific problem. Hence, the term measure-based learning algorithms. We discuss different characteristics of the most commonly used performance measures and establish similarities among them. The characteristics of individual measures and the established similarities are then correlated to the characteristics of the backpropagation algorithm, in order to explore the applicability of introducing a measure function to backpropagated neural networks. Our study shows that there are certain characteristics of the error back-propagation mechanism and the inherent gradient search method that limit the set of measures that can be used for the measure function. Also, we highlight the significance of taking the representational bias of the neural network into account when developing methods for measure-based learning. The overall analysis of the research shows that measure-based learning is a promising area of research with potential for further exploration. We suggest directions for future research that might help realize measure-based neural networks.
The study is an investigation of the feasibility of using a generic inductive bias for back-propagated artificial neural networks, one that could incorporate any single problem-specific performance metric, or a combination of them, as the optimization target. We have identified several limitations of both the standard error back-propagation mechanism and the inherent gradient search approach. These limitations suggest exploring methods other than back-propagation, as well as using global search methods instead of gradient search. We also emphasize the importance of taking the representational bias of the neural network into consideration, since only a combination of procedural and representational bias can provide highly optimal solutions.
APA, Harvard, Vancouver, ISO, and other styles
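The thesis's central idea, biasing learning toward the measure you will be evaluated with, can be sketched in miniature (a toy construction of ours, not Khalid's method): train a linear classifier by numeric-gradient ascent directly on a differentiable surrogate of an F-beta-like measure, instead of minimizing squared error.

```python
import numpy as np

def f_beta_surrogate(p, y, beta=2.0):
    """Soft F-beta: true/false positives computed from predicted probabilities,
    so the measure itself is smooth and can drive learning."""
    tp = np.sum(p * y)
    fp = np.sum(p * (1 - y))
    fn = np.sum((1 - p) * y)
    b2 = beta ** 2
    return (1 + b2) * tp / ((1 + b2) * tp + b2 * fn + fp + 1e-9)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # linearly separable toy labels

def measure(w):
    p = 1 / (1 + np.exp(-(X @ w)))          # logistic predictions
    return f_beta_surrogate(p, y)

# crude numeric-gradient ascent on the measure function itself
w = np.zeros(2)
for _ in range(200):
    grad = np.array([(measure(w + 1e-4 * e) - measure(w - 1e-4 * e)) / 2e-4
                     for e in np.eye(2)])
    w = w + 0.5 * grad
```

The thesis argues that back-propagation restricts which measures can play this role; the numeric gradient above sidesteps that restriction only because the toy model is tiny.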
2

Carroll, James Lamond. "A Bayesian Decision Theoretical Approach to Supervised Learning, Selective Sampling, and Empirical Function Optimization." Diss., Brigham Young University, 2010. http://contentdm.lib.byu.edu/ETD/image/etd3413.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lehmann, Jens. "Learning OWL Class Expressions." Doctoral thesis, Universitätsbibliothek Leipzig, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-38351.

Full text
Abstract:
With the advent of the Semantic Web and Semantic Technologies, ontologies have become one of the most prominent paradigms for knowledge representation and reasoning. The popular ontology language OWL, based on description logics, became a W3C recommendation in 2004 and a standard for modelling ontologies on the Web. In the meantime, many studies and applications using OWL have been reported in research and industrial environments, many of which go beyond Internet usage and employ the power of ontological modelling in other fields such as biology, medicine, software engineering, knowledge management, and cognitive systems. However, recent progress in the field faces a lack of well-structured ontologies with large amounts of instance data due to the fact that engineering such ontologies requires a considerable investment of resources. Nowadays, knowledge bases often provide large volumes of data without sophisticated schemata. Hence, methods for automated schema acquisition and maintenance are sought. Schema acquisition is closely related to solving typical classification problems in machine learning, e.g. the detection of chemical compounds causing cancer. In this work, we investigate both, the underlying machine learning techniques and their application to knowledge acquisition in the Semantic Web. In order to leverage machine-learning approaches for solving these tasks, it is required to develop methods and tools for learning concepts in description logics or, equivalently, class expressions in OWL. In this thesis, it is shown that methods from Inductive Logic Programming (ILP) are applicable to learning in description logic knowledge bases. The results provide foundations for the semi-automatic creation and maintenance of OWL ontologies, in particular in cases when extensional information (i.e. 
facts, instance data) is abundantly available, while corresponding intensional information (schema) is missing or not expressive enough to allow powerful reasoning over the ontology in a useful way. Such situations often occur when extracting knowledge from different sources, e.g. databases, or in collaborative knowledge engineering scenarios, e.g. using semantic wikis. It can be argued that being able to learn OWL class expressions is a step towards enriching OWL knowledge bases in order to enable powerful reasoning, consistency checking, and improved querying possibilities. In particular, plugins for OWL ontology editors based on learning methods are developed and evaluated in this work. The developed algorithms are not restricted to ontology engineering and can handle other learning problems. Indeed, they lend themselves to generic use in machine learning in the same way as ILP systems do. The main difference, however, is the employed knowledge representation paradigm: ILP traditionally uses logic programs for knowledge representation, whereas this work rests on description logics and OWL. This difference is crucial when considering Semantic Web applications as target use cases, as such applications hinge centrally on the chosen knowledge representation format for knowledge interchange and integration. The work in this thesis can be understood as a broadening of the scope of research and applications of ILP methods. This goal is particularly important since the number of OWL-based systems is already increasing rapidly and can be expected to grow further in the future. The thesis starts by establishing the necessary theoretical basis and continues with the specification of algorithms. It also contains their evaluation and, finally, presents a number of application scenarios. The research contributions of this work are threefold: The first contribution is a complete analysis of desirable properties of refinement operators in description logics. 
Refinement operators are used to traverse the target search space and are, therefore, a crucial element in many learning algorithms. Their properties (completeness, weak completeness, properness, redundancy, infinity, minimality) indicate whether a refinement operator is suitable for being employed in a learning algorithm. The key research question is which of those properties can be combined. It is shown that there is no ideal, i.e. complete, proper, and finite, refinement operator for expressive description logics, which indicates that learning in description logics is a challenging machine learning task. A number of other new results for different property combinations are also proven. The need for these investigations has already been expressed in several articles prior to this PhD work. The theoretical limitations, which were shown as a result of these investigations, provide clear criteria for the design of refinement operators. In the analysis, as few assumptions as possible were made regarding the used description language. The second contribution is the development of two refinement operators. The first operator supports a wide range of concept constructors and it is shown that it is complete and can be extended to a proper operator. It is the most expressive operator designed for a description language so far. The second operator uses the light-weight language EL and is weakly complete, proper, and finite. It is straightforward to extend it to an ideal operator, if required. It is the first published ideal refinement operator in description logics. While the two operators differ a lot in their technical details, they both use background knowledge efficiently. The third contribution is the actual learning algorithms using the introduced operators. New redundancy elimination and infinity-handling techniques are introduced in these algorithms. 
According to the evaluation, the algorithms produce very readable solutions, while their accuracy is competitive with the state-of-the-art in machine learning. Several optimisations for achieving scalability of the introduced algorithms are described, including a knowledge base fragment selection approach, a dedicated reasoning procedure, and a stochastic coverage computation approach. The research contributions are evaluated on benchmark problems and in use cases. Standard statistical measurements such as cross validation and significance tests show that the approaches are very competitive. Furthermore, the ontology engineering case study provides evidence that the described algorithms can solve the target problems in practice. A major outcome of the doctoral work is the DL-Learner framework. It provides the source code for all algorithms and examples as open-source and has been incorporated in other projects.
APA, Harvard, Vancouver, ISO, and other styles
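The notion of a downward refinement operator, central to this thesis, can be illustrated with a toy operator over conjunctions of atomic classes (a deliberately tiny stand-in for the description-logic operators analysed in the work):

```python
# toy vocabulary of atomic classes; a class expression is a frozenset of atoms
# read as their conjunction, with the empty set standing for Thing (top)
ATOMS = frozenset({"Person", "Tall", "Rich"})

def refine(expr):
    """One downward refinement step: specialise a conjunction by adding any
    atom it does not yet contain. Each child subsumes fewer instances."""
    return [expr | {atom} for atom in sorted(ATOMS - expr)]

top = frozenset()          # the most general expression
children = refine(top)     # e.g. {Person}, {Rich}, {Tall}
```

A learning algorithm searches the space generated by repeatedly applying such an operator; the thesis's results concern which properties (completeness, properness, finiteness) a real DL operator can combine.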
4

Bayoudh, Meriam. "Apprentissage de connaissances structurelles à partir d’images satellitaires et de données exogènes pour la cartographie dynamique de l’environnement amazonien." Thesis, Antilles-Guyane, 2013. http://www.theses.fr/2013AGUY0671/document.

Full text
Abstract:
Classical methods for satellite image analysis are inadequate for the current bulky data flow. Automating the interpretation of such images therefore becomes crucial for the analysis and management of phenomena that change in time and space and are observable by satellite. This work aims at automating land cover cartography from satellite images through expressive and easily interpretable mechanisms, explicitly taking into account the structural aspects of geographic information. It is part of the object-based image analysis framework and assumes that useful contextual knowledge can be extracted from maps. First, a supervised parameterization method for a segmentation algorithm is proposed. Secondly, a supervised classification of geographical objects is presented; it combines machine learning by inductive logic programming with the multi-class rule set intersection approach. These approaches are applied to the cartography of the French Guiana coastline. The results demonstrate the feasibility of the segmentation parameterization, but also its variability as a function of the reference map classes and of the input data; methodological developments nevertheless make an operational implementation of such an approach conceivable. The results of the supervised object classification show that it is possible to induce expressive classification rules that convey consistent, structural information in a given application context and lead to reliable predictions, with overall accuracy and Kappa values of 84.6% and 0.7, respectively. In conclusion, this work contributes to the automation of dynamic cartography from remotely sensed images and proposes original and promising perspectives.
APA, Harvard, Vancouver, ISO, and other styles
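The rule-based classification stage described above can be pictured with a minimal first-match rule applier (the feature names and thresholds below are invented for illustration, not induced rules from the thesis):

```python
# each induced rule: (condition over an object's features, class label)
rules = [
    (lambda o: o["ndvi"] > 0.6, "forest"),
    (lambda o: o["ndvi"] <= 0.2 and o["brightness"] > 0.5, "urban"),
]

def classify(obj, rules, default="unknown"):
    """Return the label of the first rule whose condition matches the object."""
    for condition, label in rules:
        if condition(obj):
            return label
    return default

label = classify({"ndvi": 0.8, "brightness": 0.1}, rules)
```

In the thesis the rules are learned by inductive logic programming and combined by multi-class rule set intersection rather than first-match order; the sketch only shows how expressive, human-readable rules map segmented objects to map classes.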
5

Motta, Eduardo Neves. "Supervised Learning Incremental Feature Induction and Selection." Pontifícia Universidade Católica do Rio de Janeiro, 2014. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28688@1.

Full text
Abstract:
Non-linear feature induction from basic features is a method of generating predictive models with higher precision for classification problems. However, feature induction may rapidly lead to a huge number of features, causing overfitting and models with low predictive power. To prevent this side effect, regularization techniques are employed to obtain a trade-off between a reduced feature set representative of the domain and generalization power. In this work, we describe a supervised machine learning approach that incrementally induces and selects feature conjunctions derived from base features. This approach integrates decision trees, support vector machines and feature selection using sparse perceptrons in a machine learning framework named IFIS (Incremental Feature Induction and Selection). Using IFIS, we generate regularized non-linear models with high performance using a linear algorithm. We evaluate our system on two natural language processing tasks in two different languages. For the first task, POS tagging, we use two corpora: the WSJ corpus for English and Mac-Morpho for Portuguese. Our results are competitive with the state-of-the-art in both, achieving accuracies of 97.14 per cent and 97.13 per cent, respectively. In the second task, dependency parsing, we use the CoNLL 2006 Shared Task Portuguese corpus, achieving better results than those reported during that competition and competitive with the state-of-the-art for this task, with a UAS score of 92.01 per cent. Applying model regularization using a sparse perceptron, we obtain SVM models 10 times smaller while maintaining their accuracies. We achieve model reduction through regularization of feature domains, which can reach 99 per cent. Using the regularized model, we shrink the model's physical size by up to 82 per cent and cut prediction time by up to 84 per cent. Downsizing domains and models also enhances feature engineering, through compact domain analysis and the incremental inclusion of new features.
APA, Harvard, Vancouver, ISO, and other styles
6

Boonkwan, Prachya. "Scalable semi-supervised grammar induction using cross-linguistically parameterized syntactic prototypes." Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/9808.

Full text
Abstract:
This thesis is about the task of unsupervised parser induction: automatically learning grammars and parsing models from raw text. We endeavor to induce such parsers by observing sequences of terminal symbols. We focus on overcoming the problem of frequent collocation that is a major source of error in grammar induction. For example, since a verb and a determiner tend to co-occur in a verb phrase, the probability of attaching the determiner to the verb is sometimes higher than that of attaching the core noun to the verb, resulting in erroneous attachment *((Verb Det) Noun) instead of (Verb (Det Noun)). Although frequent collocation is the heart of grammar induction, it is precariously capable of distorting the grammar distribution. Natural language grammars follow a Zipfian (power law) distribution, where the frequency of any grammar rule is inversely proportional to its rank in the frequency table. We believe that covering the most frequent grammar rules in grammar induction will have a strong impact on accuracy. We propose an efficient approach to grammar induction guided by cross-linguistic language parameters. Our language parameters consist of 33 parameters of frequent basic word orders, which are easy to be elicited from grammar compendiums or short interviews with naïve language informants. These parameters are designed to capture frequent word orders in the Zipfian distribution of natural language grammars, while the rest of the grammar including exceptions can be automatically induced from unlabeled data. The language parameters shrink the search space of the grammar induction problem by exploiting both word order information and predefined attachment directions. The contribution of this thesis is three-fold. (1) We show that the language parameters are adequately generalizable cross-linguistically, as our grammar induction experiments will be carried out on 14 languages on top of a simple unsupervised grammar induction system. 
(2) Our specification of language parameters improves the accuracy of unsupervised parsing even when the parser is exposed to much less frequent linguistic phenomena in longer sentences, where accuracy decreases by less than 10%. (3) We investigate the prevalent factors behind errors in grammar induction, which leaves room for accuracy improvement. The proposed language parameters efficiently cope with the most frequent grammar rules in natural languages. With only 10 man-hours for preparing syntactic prototypes, the approach improves the accuracy of directed dependency recovery over the state-of-the-art completely unsupervised parser of Gillenwater et al. (2010) in: (1) Chinese by 30.32% (2) Swedish by 28.96% (3) Portuguese by 37.64% (4) Dutch by 15.17% (5) German by 14.21% (6) Spanish by 13.53% (7) Japanese by 13.13% (8) English by 12.41% (9) Czech by 9.16% (10) Slovene by 7.24% (11) Turkish by 6.72% and (12) Bulgarian by 5.96%. It is noted that although the directed dependency accuracies of some languages are below 60%, their TEDEVAL scores are still satisfactory (approximately 80%). This suggests that our parsed trees are, in fact, closely related to the gold-standard trees despite the discrepancy in annotation schemes. We perform an analysis of over- and under-generation errors. We found three prevalent problems that cause errors in the experiments: (1) PP attachment (2) discrepancies between dependency annotation schemes and (3) rich morphology. The methods presented in this thesis were originally presented in Boonkwan and Steedman (2011). The thesis presents a great deal more detail in the design of the cross-linguistic language parameters, the algorithm for lexicon inventory construction, experimental results, and error analysis.
APA, Harvard, Vancouver, ISO, and other styles
7

Packer, Thomas L. "Scalable Detection and Extraction of Data in Lists in OCRed Text for Ontology Population Using Semi-Supervised and Unsupervised Active Wrapper Induction." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4258.

Full text
Abstract:
Lists of records in machine-printed documents contain much useful information. As one example, the thousands of family history books scanned, OCRed, and placed on-line by FamilySearch.org probably contain hundreds of millions of fact assertions about people, places, family relationships, and life events. Data like this cannot be fully utilized until a person or process locates the data in the document text, extracts it, and structures it with respect to an ontology or database schema. Yet, in the family history industry and other industries, data in lists goes largely unused because no known approach adequately addresses all of the costs, challenges, and requirements of a complete end-to-end solution to this task. The diverse information is costly to extract because many kinds of lists appear even within a single document, differing from each other in both structure and content. The lists' records and component data fields are usually not set apart explicitly from the rest of the text, especially in a corpus of OCRed historical documents. OCR errors and the lack of document structure (e.g. HTML tags) make list content hard to recognize by a software tool developed without a substantial amount of highly specialized, hand-coded knowledge or machine learning supervision. Making an approach that is not only accurate but also sufficiently scalable in terms of time and space complexity to process a large corpus efficiently is especially challenging. In this dissertation, we introduce a novel family of scalable approaches to list discovery and ontology population. Its contributions include the following. We introduce the first general-purpose methods of which we are aware for both list detection and wrapper induction for lists in OCRed or other plain text.
We formally outline a mapping between in-line labeled text and populated ontologies, effectively reducing the ontology population problem to a sequence labeling problem, opening the door to applying sequence labelers and other common text tools to the goal of populating a richly structured ontology from text. We provide a novel admissible heuristic for inducing regular expression wrappers using an A* search. We introduce two ways of modeling list-structured text with a hidden Markov model. We present two query strategies for active learning in a list-wrapper induction setting. Our primary contributions are two complete and scalable wrapper-induction-based solutions to the end-to-end challenge of finding lists, extracting data, and populating an ontology. The first has linear time and space complexity and extracts highly accurate information at a low cost in terms of user involvement. The second has time and space complexity that are linear in the size of the input text and quadratic in the length of an output record and achieves higher F1-measures for extracted information as a function of supervision cost. We measure the performance of each of these approaches and show that they perform better than strong baselines, including variations of our own approaches and a conditional random field-based approach.
APA, Harvard, Vancouver, ISO, and other styles
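A regular-expression wrapper of the kind the dissertation induces can be sketched by hand (the record pattern below is a made-up example, not an induced one): once such a wrapper exists, running it over OCRed plain text yields structured records ready for ontology population.

```python
import re

# hypothetical wrapper for "Name, b. YEAR" records in a printed name list
wrapper = re.compile(r"(?P<name>[A-Z][a-z]+ [A-Z][a-z]+),\s*b\.\s*(?P<year>\d{4})")

def extract_records(text):
    """Apply the wrapper to plain text and emit one dict per matched record."""
    return [m.groupdict() for m in wrapper.finditer(text)]

page = "John Smith, b. 1842\nMary Smith, b. 1844\n"
records = extract_records(page)
```

The dissertation's contribution is inducing such wrappers automatically (e.g. via an A* search with an admissible heuristic) with little supervision; writing one by hand, as here, is exactly the cost the approach aims to avoid.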
8

Augier, Sébastien. "Apprentissage Supervisé Relationnel par Algorithmes d'Évolution." Phd thesis, Université Paris Sud - Paris XI, 2000. http://tel.archives-ouvertes.fr/tel-00947322.

Full text
Abstract:
This thesis concerns the learning of relational rules from examples and counter-examples using evolutionary algorithms. We first study a language bias expressive enough to cover both the learning-from-interpretations relational setting and classical propositional formalisms. Although the cost of induction is characterized by the NP-hard complexity of the subsumption test for this class of languages, a solution capable of handling complex real-world problems in practice is proposed. The SIAO1 system, which uses this language bias to learn relational rules, is then presented. It is based on an evolutionary search strategy that differs from classical approaches mainly in: mutation and crossover operators guided by the domain theory and by the training examples; and respect for the order relation defined on the language. Evaluation of the system on several benchmark machine learning datasets shows that SIAO1 is versatile, compares favorably with other approaches, and requires little user input for specifying search or evaluation biases. The third part of this work proposes two generic parallel architectures derived from the asynchronous master-slave and pipeline models. They are studied in the context of knowledge discovery from data using SIAO1, in terms of both the speedup they provide and their ability to scale. A simple yet accurate model for predicting the performance of each parallel architecture is also proposed.
APA, Harvard, Vancouver, ISO, and other styles
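The SIAO1 abstract above describes evolutionary search over relational rules with operators guided by the training examples. A drastically simplified, illustrative-only sketch of that idea (not the SIAO1 system itself, and propositional rather than relational) seeds a population with rules copied from positive examples, mutates by dropping attribute tests, and scores rules by positive minus negative coverage:

```python
import random

def covers(rule, example):
    """A rule is a conjunction of attribute=value tests."""
    return all(example.get(attr) == val for attr, val in rule.items())

def fitness(rule, positives, negatives):
    return (sum(covers(rule, e) for e in positives)
            - sum(covers(rule, e) for e in negatives))

def mutate(rule, rng):
    """Generalize: drop one attribute test (keep at least one)."""
    if len(rule) <= 1:
        return dict(rule)
    dropped = rng.choice(sorted(rule))
    return {a: v for a, v in rule.items() if a != dropped}

def evolve(positives, negatives, generations=30, seed=0):
    rng = random.Random(seed)
    # Example-guided initialization: each positive example becomes a rule.
    population = [dict(e) for e in positives]
    for _ in range(generations):
        candidate = mutate(rng.choice(population), rng)
        worst = min(population, key=lambda r: fitness(r, positives, negatives))
        if (fitness(candidate, positives, negatives)
                >= fitness(worst, positives, negatives)):
            population[population.index(worst)] = candidate
    return max(population, key=lambda r: fitness(r, positives, negatives))
```

Seeding from examples and mutating toward generality loosely mirrors the abstract's theme of operators directed by the training data rather than blind random variation.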

Book chapters on the topic "Inductive supervised learning"

1

Zhang, Mingxing, Fumin Shen, Hanwang Zhang, Ning Xie, and Wankou Yang. "Hashing with Inductive Supervised Learning." In Lecture Notes in Computer Science, 447–55. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24078-7_45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lahbib, Dhafer, Marc Boullé, and Dominique Laurent. "Itemset-Based Variable Construction in Multi-relational Supervised Learning." In Inductive Logic Programming, 130–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38812-5_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

El Hamri, Mourad, Younès Bennani, and Issam Falih. "Inductive Semi-supervised Learning Through Optimal Transport." In Communications in Computer and Information Science, 668–75. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92307-5_78.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bisio, Federica, Sergio Decherchi, Paolo Gastaldo, and Rodolfo Zunino. "Inductive Bias for Semi-supervised Extreme Learning Machine." In Proceedings of ELM-2014 Volume 1, 61–70. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-14063-6_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Venturini, Gilles. "SIA: A supervised inductive algorithm with genetic search for learning attributes based concepts." In Machine Learning: ECML-93, 280–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-56602-3_142.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Govada, Aruna, Pravin Joshi, Sahil Mittal, and Sanjay K. Sahay. "Hybrid Approach for Inductive Semi Supervised Learning Using Label Propagation and Support Vector Machine." In Machine Learning and Data Mining in Pattern Recognition, 199–213. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-21024-7_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Martin, Eric, Samuel Kaski, Fei Zheng, Geoffrey I. Webb, Xiaojin Zhu, Ion Muslea, Kai Ming Ting, et al. "Supervised Descriptive Rule Induction." In Encyclopedia of Machine Learning, 938–41. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_802.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Novak, Petra Kralj, Nada Lavrač, and Geoffrey I. Webb. "Supervised Descriptive Rule Induction." In Encyclopedia of Machine Learning and Data Mining, 1–4. Boston, MA: Springer US, 2016. http://dx.doi.org/10.1007/978-1-4899-7502-7_808-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Novak, Petra Kralj, Nada Lavrač, and Geoffrey I. Webb. "Supervised Descriptive Rule Induction." In Encyclopedia of Machine Learning and Data Mining, 1210–13. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_808.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Toscano, David S., and Enrique V. Carrera. "Failure Detection in Induction Motors Using Non-supervised Machine Learning Algorithms." In Systems and Information Sciences, 48–59. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59194-6_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Inductive supervised learning"

1

Shi, Yuan, Zhenzhong Lan, Wei Liu, and Wei Bi. "Extending Semi-supervised Learning Methods for Inductive Transfer Learning." In 2009 Ninth IEEE International Conference on Data Mining (ICDM). IEEE, 2009. http://dx.doi.org/10.1109/icdm.2009.75.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hovsepian, Karen, Peter Anselmo, and Subhasish Mazumdar. "Supervised Inductive Learning with Lotka-Volterra Derived Models." In 2008 Eighth IEEE International Conference on Data Mining (ICDM). IEEE, 2008. http://dx.doi.org/10.1109/icdm.2008.108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhan, Wang, and Min-Ling Zhang. "Inductive Semi-supervised Multi-Label Learning with Co-Training." In KDD '17: The 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3097983.3098141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sarkar, Anoop, and Gholamreza Haffari. "Inductive semi-supervised learning methods for natural language processing." In the Human Language Technology Conference of the NAACL, Companion Volume: Tutorial Abstracts. Morristown, NJ, USA: Association for Computational Linguistics, 2006. http://dx.doi.org/10.3115/1614101.1614106.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yoo, Jaemin, Hyunsik Jeon, and U. Kang. "Belief Propagation Network for Hard Inductive Semi-Supervised Learning." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/580.

Full text
Abstract:
Given graph-structured data, how can we train a robust classifier in a semi-supervised setting that performs well without neighborhood information? In this work, we propose belief propagation networks (BPN), a novel approach to train a deep neural network in a hard inductive setting, where the test data are given without neighborhood information. BPN uses a differentiable classifier to compute the prior distributions of nodes, and then diffuses the priors through the graph structure, independently of the prior computation. This separable structure improves the generalization performance of BPN for isolated test instances, compared with previous approaches that jointly use features and neighborhoods without distinction. As a result, BPN outperforms state-of-the-art methods on four datasets with an average margin of 2.4 percentage points in accuracy.
APA, Harvard, Vancouver, ISO, and other styles
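The BPN abstract above separates prior computation from diffusion over the graph. A schematic sketch of that separation (not the paper's actual network; the mixing rule and `alpha` are illustrative assumptions) diffuses classifier-produced class priors by iterated neighbor averaging, so an isolated test node with no edges simply keeps its own prior:

```python
import numpy as np

def diffuse_priors(priors, adjacency, alpha=0.5, iters=20):
    """priors: (n, c) rows summing to 1; adjacency: (n, n) 0/1 symmetric."""
    deg = adjacency.sum(axis=1, keepdims=True)
    # Row-normalize the adjacency; rows of isolated nodes stay all-zero.
    norm_adj = np.divide(adjacency, deg,
                         out=np.zeros_like(adjacency, dtype=float),
                         where=deg > 0)
    beliefs = priors.copy()
    for _ in range(iters):
        neighbor_msg = norm_adj @ beliefs
        # Mix each node's own prior with its neighbors' current beliefs;
        # isolated nodes receive no message and retain their prior.
        mixed = np.where(deg > 0,
                         (1 - alpha) * priors + alpha * neighbor_msg,
                         priors)
        beliefs = mixed / mixed.sum(axis=1, keepdims=True)
    return beliefs
```

The point of the sketch is the hard-inductive behavior: a test node that arrives without neighborhood information falls back cleanly to the classifier's prior.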
6

Chen, Wentao, Chenyang Si, Wei Wang, Liang Wang, Zilei Wang, and Tieniu Tan. "Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/313.

Full text
Abstract:
Few-shot learning is a challenging task since only a few instances are given for recognizing an unseen class. One way to alleviate this problem is to acquire a strong inductive bias via meta-learning on similar tasks. In this paper, we show that such inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes. Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations by maximizing the similarity of an image to its discriminative part. To mitigate the overfitting in few-shot classification caused by data scarcity, we further propose a part augmentation strategy by retrieving extra images from a base dataset. We conduct systematic studies on miniImageNet and tieredImageNet benchmarks. Remarkably, our method yields impressive results, outperforming the previous best unsupervised methods by 7.74% and 9.24% under 5-way 1-shot and 5-way 5-shot settings, which are comparable with state-of-the-art supervised methods.
APA, Harvard, Vancouver, ISO, and other styles
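The abstract above learns representations by maximizing the similarity of an image to its own part. A schematic sketch of such an objective (not the paper's method; the InfoNCE-style form and `temperature` value are assumptions) treats row i of the part embeddings as the positive for image i and all other rows as negatives:

```python
import numpy as np

def part_contrastive_loss(image_emb, part_emb, temperature=0.1):
    """InfoNCE-style loss; row i of part_emb is the positive for image i."""
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    prt = part_emb / np.linalg.norm(part_emb, axis=1, keepdims=True)
    logits = img @ prt.T / temperature            # (n, n) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # diagonal = matched pairs
```

Minimizing this pulls each image embedding toward its own part and pushes it away from other images' parts, which is the self-supervised signal the abstract relies on in place of class labels.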
7

Wang, De, Feiping Nie, and Heng Huang. "Large-scale adaptive semi-supervised learning via unified inductive and transductive model." In KDD '14: The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2623330.2623731.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Zhi, and Zhoujun Li. "Inductive and Effective Privacy-preserving Semi-supervised Learning with Harmonic Anchor Mixture." In ISEEIE 2021: 2021 International Symposium on Electrical, Electronics and Information Engineering. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3459104.3459187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

de Sousa, Celso A. R. "An inductive semi-supervised learning approach for the Local and Global Consistency algorithm." In 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016. http://dx.doi.org/10.1109/ijcnn.2016.7727722.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cropper, Andrew. "Playgol: Learning Programs Through Play." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/841.

Full text
Abstract:
Children learn through play. We introduce the analogous idea of learning programs through play. In this approach, a program induction system (the learner) is given a set of user-supplied build tasks and initial background knowledge (BK). Before solving the build tasks, the learner enters an unsupervised playing stage where it creates its own play tasks to solve, tries to solve them, and saves any solutions (programs) to the BK. After the playing stage is finished, the learner enters the supervised building stage where it tries to solve the build tasks and can reuse solutions learnt whilst playing. The idea is that playing allows the learner to discover reusable general programs on its own which can then help solve the build tasks. We claim that playing can improve learning performance. We show that playing can reduce the textual complexity of target concepts which in turn reduces the sample complexity of a learner. We implement our idea in Playgol, a new inductive logic programming system. We experimentally test our claim on two domains: robot planning and real-world string transformations. Our experimental results suggest that playing can substantially improve learning performance.
APA, Harvard, Vancouver, ISO, and other styles
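The Playgol abstract above describes a play stage that grows the background knowledge before a supervised build stage reuses it. A toy illustration of that two-stage structure (far simpler than Playgol, which is an inductive logic programming system; the primitives and naming scheme here are invented for the example) composes string operations during play and reuses the grown library during building:

```python
from itertools import product

PRIMITIVES = {
    "upper": str.upper,
    "strip": str.strip,
    "first": lambda s: s[:1],
}

def play(samples):
    """Play stage: keep every 2-step composition that changes some sample."""
    learned = {}
    for (n1, f1), (n2, f2) in product(PRIMITIVES.items(), repeat=2):
        name = f"{n2}_of_{n1}"  # f1 is applied first, then f2
        composed = lambda s, f1=f1, f2=f2: f2(f1(s))
        if any(composed(s) != s for s in samples):
            learned[name] = composed
    return learned

def build(task_input, task_output, background):
    """Build stage: return the name of any known program solving the task."""
    for name, fn in background.items():
        if fn(task_input) == task_output:
            return name
    return None
```

Here play is unsupervised (it never sees the build tasks), yet a build task such as mapping " alice " to "ALICE" becomes solvable by lookup once a strip-then-upper composition has been saved to the background knowledge.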
