Academic literature on the topic 'Inductive learning'

Generate an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Inductive learning.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Inductive learning"

1. Wu, Xindong. "Inductive learning." Journal of Computer Science and Technology 8, no. 2 (April 1993): 118–32. http://dx.doi.org/10.1007/bf02939474.
2. Chan, T. Y. T. "Inductive pattern learning." IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 29, no. 6 (1999): 667–74. http://dx.doi.org/10.1109/3468.798072.
3. Hadjimichael, Michael, and Anita Wasilewska. "Interactive inductive learning." International Journal of Man-Machine Studies 38, no. 2 (February 1993): 147–67. http://dx.doi.org/10.1006/imms.1993.1008.
4. Kubat, Miroslav. "Conceptual inductive learning." Artificial Intelligence 52, no. 2 (December 1991): 169–82. http://dx.doi.org/10.1016/0004-3702(91)90041-h.
5. Pham, D. T., S. Bigot, and S. S. Dimov. "RULES-F: A fuzzy inductive learning algorithm." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 220, no. 9 (September 1, 2006): 1433–47. http://dx.doi.org/10.1243/0954406c20004.
Abstract: Current inductive learning algorithms have difficulties handling attributes with numerical values. This paper presents RULES-F, a new fuzzy inductive learning algorithm in the RULES family, which integrates the capabilities and performance of a good inductive learning algorithm for classification applications with the ability to create accurate and compact fuzzy models for the generation of numerical outputs. The performance of RULES-F in two simulated control applications involving numerical output parameters is demonstrated and compared with that of the well-known fuzzy rule induction algorithm by Wang and Mendel.
6. Santos, Paulo, Chris Needham, and Derek Magee. "Inductive learning spatial attention." Sba: Controle & Automação Sociedade Brasileira de Automatica 19, no. 3 (September 2008): 316–26. http://dx.doi.org/10.1590/s0103-17592008000300007.
Abstract: This paper investigates the automatic induction of spatial attention from the visual observation of objects manipulated on a table top. In this work, space is represented in terms of a novel observer-object relative reference system, named Local Cardinal System, defined upon the local neighbourhood of objects on the table. We present results of applying the proposed methodology on five distinct scenarios involving the construction of spatial patterns of coloured blocks.
7. Liu, Xiaobo. "Ensemble Inductive Transfer Learning." Journal of Fiber Bioengineering and Informatics 8, no. 1 (June 2015): 105–15. http://dx.doi.org/10.3993/jfbi03201510.
8. Russell, Stuart. "Inductive learning by machines." Philosophical Studies 64, no. 1 (October 1991): 37–64. http://dx.doi.org/10.1007/bf00356089.
9. Ray, Oliver. "Nonmonotonic abductive inductive learning." Journal of Applied Logic 7, no. 3 (September 2009): 329–40. http://dx.doi.org/10.1016/j.jal.2008.10.007.
10. Rahmatian, Rouhollah, and Fatemeh Zarekar. "Inductive/Deductive Learning by Considering the Role of Gender—A Case Study of Iranian French-Learners." International Education Studies 9, no. 12 (November 28, 2016): 254. http://dx.doi.org/10.5539/ies.v9n12p254.
Abstract: This article defines the objective of discovering the first preferred styles of Iranian learners of French as a Foreign Language (FFL) as regards inductive or deductive learning; and secondly, the difference between gender-based learning tendencies. Considering these points as target variables, the questionnaire developed by Felder and Silverman in 1988 was applied to form the learning styles and consequently to associate them with inductive and deductive approaches. The results led the team to set the idea which is based on the choice of induction or deduction in language learning and the gender variable that follows different directions. Consequently, in terms of the inductive approach, we find ourselves facing a rather male solicitation. A proportion of the use of this approach is also associated with women whose motivation is seen rather noticeably. Moreover, the significance is relative rather than significant in all the relationships studied in this research: males and inductive (1)/deductive learning (2); females and inductive (3)/deductive learning (4); inductive (5)/deductive (6) among Iranians.

Dissertations / Theses on the topic "Inductive learning"

1. Ray, Oliver. "Hybrid abductive inductive learning." Thesis, Imperial College London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.428111.
2. Pascoe, James. "The evolution of 'Boxes' to quantized inductive learning: a study in inductive learning." Thesis, This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-12172008-063016/.
3

林謀楷 and Mau-kai Lam. "Inductive machine learning with bias." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31212426.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4. Heinz, Jeffrey Nicholas. "Inductive learning of phonotactic patterns." Diss., Restricted to subscribing institutions, 2007. http://proquest.umi.com/pqdweb?did=1467886191&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.
5. Tappert, Peter M. "Damage identification using inductive learning." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-05092009-040651/.
6. Chu, Mabel. "Constructing transformation rules for inductive learning." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0023/MQ51055.pdf.
7. Kit, Chun Yu. "Unsupervised lexical learning as inductive inference." Thesis, University of Sheffield, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340205.
8. Law, Mark. "Inductive learning of answer set programs." Thesis, Imperial College London, 2018. http://hdl.handle.net/10044/1/64824.
Abstract: The goal of Inductive Logic Programming (ILP) is to find a hypothesis that explains a set of examples in the context of some pre-existing background knowledge. Until recently, most research on ILP targeted learning definite logic programs. This thesis constitutes the first comprehensive work on learning answer set programs, introducing new learning frameworks, theoretical results on the complexity and generality of these frameworks, algorithms for learning ASP programs, and an extensive evaluation of these algorithms. Although there is previous work on learning ASP programs, existing learning frameworks are either brave -- where examples should be explained by at least one answer set -- or cautious where examples should be explained by all answer sets. There are cases where brave induction is too weak and cautious induction is too strong. Our proposed frameworks combine brave and cautious learning and can learn ASP programs containing choice rules and constraints. Many applications of ASP use weak constraints to express a preference ordering over the answer sets of a program. Learning weak constraints corresponds to preference learning, which we achieve by introducing ordering examples. We then explore the generality of our frameworks, investigating what it means for a framework to be general enough to distinguish one hypothesis from another. We show that our frameworks are more general than both brave and cautious induction. We also present a new family of algorithms, called ILASP (Inductive Learning of Answer Set Programs), which we prove to be sound and complete. This work concerns learning from both non-noisy and noisy examples. In the latter case, ILASP returns a hypothesis that maximises the coverage of examples while minimising the length of the hypothesis. In our evaluation, we show that ILASP scales to tasks with large numbers of examples finding accurate hypotheses even in the presence of high proportions of noisy examples.
9. Adjodah, Dhaval D. K. "Social inductive biases for reinforcement learning." Thesis (Ph.D.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019. https://hdl.handle.net/1721.1/128415.
Abstract:
How can we build machines that collaborate and learn more seamlessly with humans, and with each other? How do we create fairer societies? How do we minimize the impact of information manipulation campaigns, and fight back? How do we build machine learning algorithms that are more sample efficient when learning from each other's sparse data, and under time constraints? At the root of these questions is a simple one: how do agents, human or machines, learn from each other, and can we improve it and apply it to new domains? The cognitive and social sciences have provided innumerable insights into how people learn from data using both passive observation and experimental intervention. Similarly, the statistics and machine learning communities have formalized learning as a rigorous and testable computational process.
There is a growing movement to apply insights from the cognitive and social sciences to improving machine learning, as well as opportunities to use machine learning as a sandbox to test, simulate and expand ideas from the cognitive and social sciences. A less researched and fertile part of this intersection is the modeling of social learning: past work has been more focused on how agents can learn from the 'environment', and there is less work that borrows from both communities to look into how agents learn from each other. This thesis presents novel contributions into the nature and usefulness of social learning as an inductive bias for reinforcement learning.
I start by presenting the results from two large-scale online human experiments: first, I observe Dunbar cognitive limits that shape and limit social learning in two different social trading platforms, with the additional contribution that synthetic financial bots that transcend human limitations can obtain higher profits even when using naive trading strategies. Second, I devise a novel online experiment to observe how people, at the individual level, update their belief of future financial asset prices (e.g. S&P 500 and Oil prices) from social information. I model such social learning using Bayesian models of cognition, and observe that people make strong distributional assumptions on the social data they observe (e.g. assuming that the likelihood data is unimodal).
I was fortunate to collect one round of predictions during the Brexit market instability, and find that social learning leads to higher performance than when learning from the underlying price history (the environment) during such volatile times. Having observed the cognitive limits and biases people exhibit when learning from other agents, I present a motivational example of the strength of inductive biases in reinforcement learning: I implement a learning model with a relational inductive bias that pre-processes the environment state into a set of relationships between entities in the world. I observe strong improvements in performance and sample efficiency, and even observe the learned relationships to be strongly interpretable.
Finally, given that most modern deep reinforcement learning algorithms are distributed (in that they have separate learning agents), I investigate the hypothesis that viewing deep reinforcement learning as a social learning distributed search problem could lead to strong improvements. I do so by creating a fully decentralized, sparsely-communicating and scalable learning algorithm, and observe strong learning improvements with lower communication bandwidth usage (between learning agents) when using communication topologies that naturally evolved due to social learning in humans. Additionally, I provide a theoretical upper bound (that agrees with our empirical results) regarding which communication topologies lead to the largest learning performance improvement.
Given a future increasingly filled with decentralized autonomous machine learning systems that interact with humans, there is an increasing need to understand social learning to build resilient, scalable and effective learning systems, and this thesis provides insights into how to build such systems.
10. Shi, Guang. "Inductive learning in network fault diagnosis." Dissertation, Department of Systems and Computer Engineering, Carleton University, Ottawa, 1994.

Books on the topic "Inductive learning"

1. Flach, Peter A. Second-order inductive learning. Tilburg: Tilburg University, 1990.
2. Muggleton, Stephen, ed. Inductive logic programming. London: Academic Press in association with Turing Institute Press, 1992.
3. Utgoff, Paul E. Machine Learning of Inductive Bias. Boston, MA: Springer US, 1986.
4. Utgoff, Paul E. Machine learning of inductive bias. Boston: Kluwer Academic Publishers, 1986.
5. Utgoff, Paul E. Machine Learning of Inductive Bias. Boston, MA: Springer US, 1986. http://dx.doi.org/10.1007/978-1-4613-2283-2.
6. Gentry, James A. Using inductive learning to predict bankruptcy. [Urbana, Ill.]: College of Commerce and Business Administration, University of Illinois at Urbana-Champaign, 1992.
7. Flach, P. A. The role of explanations in inductive learning. Tilburg: Tilburg University, 1991.
8. Brown, Martin Richard. Inductive learning with uncertainty for image processing. Leicester: De Montfort University, 1996.
9. Kalkanis, G. Inductive learning of statistically reliable tree classifiers. Manchester: UMIST, 1992.
10. Ivakhnenko, Alekseĭ Grigorʹevich, ed. Inductive learning algorithms for complex systems modeling. Boca Raton: CRC Press, 1994.

Book chapters on the topic "Inductive learning"

1. Utgoff, Paul E., James Cussens, Stefan Kramer, Sanjay Jain, Frank Stephan, Luc De Raedt, Ljupčo Todorovski, et al. "Inductive Learning." In Encyclopedia of Machine Learning, 529. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_395.
2. Sakama, Chiaki. "Learning Dishonesty." In Inductive Logic Programming, 225–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38812-5_16.
3. Kakas, A. C., and F. Riguzzi. "Learning with abduction." In Inductive Logic Programming, 181–88. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3540635149_47.
4. De Raedt, Luc, and Ingo Thon. "Probabilistic Rule Learning." In Inductive Logic Programming, 47–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21295-6_9.
5. Džeroski, Sašo, Luc De Raedt, and Hendrik Blockeel. "Relational reinforcement learning." In Inductive Logic Programming, 11–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0027307.
6. Utgoff, Paul E., James Cussens, Stefan Kramer, Sanjay Jain, Frank Stephan, Luc De Raedt, Ljupčo Todorovski, et al. "Inductive Bias." In Encyclopedia of Machine Learning, 522. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_390.
7. Utgoff, Paul E., James Cussens, Stefan Kramer, Sanjay Jain, Frank Stephan, Luc De Raedt, Ljupčo Todorovski, et al. "Inductive Inference." In Encyclopedia of Machine Learning, 523–28. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_392.
8. Utgoff, Paul E., James Cussens, Stefan Kramer, Sanjay Jain, Frank Stephan, Luc De Raedt, Ljupčo Todorovski, et al. "Inductive Inference." In Encyclopedia of Machine Learning, 528. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_393.
9. Utgoff, Paul E., James Cussens, Stefan Kramer, Sanjay Jain, Frank Stephan, Luc De Raedt, Ljupčo Todorovski, et al. "Inductive Programming." In Encyclopedia of Machine Learning, 537–44. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_399.
10. Utgoff, Paul E., James Cussens, Stefan Kramer, Sanjay Jain, Frank Stephan, Luc De Raedt, Ljupčo Todorovski, et al. "Inductive Synthesis." In Encyclopedia of Machine Learning, 544. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_400.

Conference papers on the topic "Inductive learning"

1. Fitri, Mustika, Dian Budiana, and Adang Suherman. "Inductive Learning Methods towards Learning Outcomes." In Proceedings of the 3rd International Conference on Sport Science, Health, and Physical Education (ICSSHPE 2018). Paris, France: Atlantis Press, 2019. http://dx.doi.org/10.2991/icsshpe-18.2019.63.
2. Rossi, Ryan A., Rong Zhou, and Nesreen K. Ahmed. "Deep Inductive Network Representation Learning." In Companion of the The Web Conference 2018. New York, New York, USA: ACM Press, 2018. http://dx.doi.org/10.1145/3184558.3191524.
3. Zhao, Sheng, Jianhua Tao, and Lianhong Cai. "Prosodic phrasing with inductive learning." In 7th International Conference on Spoken Language Processing (ICSLP 2002). ISCA, 2002. http://dx.doi.org/10.21437/icslp.2002-109.
4. Liu, Peishun, and Xuefang Wang. "Inductive Learning in Malware Detection." In 2008 4th International Conference on Wireless Communications, Networking and Mobile Computing (WiCOM). IEEE, 2008. http://dx.doi.org/10.1109/wicom.2008.2921.
5

"RECOGNIZING REAL EMOTIONS THROUGH INDUCTIVE WRITING TEACHING." In 16th International Conference on e-Learning. IADIS Press, 2022. http://dx.doi.org/10.33965/el2022_202203c027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6. Aksoy, Ahmet, and Mehmet Hadi Gunes. "SILEA: A system for inductive LEArning." In 2016 7th International Conference on Information, Intelligence, Systems & Applications (IISA). IEEE, 2016. http://dx.doi.org/10.1109/iisa.2016.7785430.
7. Honavar, Vasant. "Inductive learning using generalized distance measures." In Aerospace Sensing, edited by Firooz A. Sadjadi. SPIE, 1992. http://dx.doi.org/10.1117/12.139960.
8. Orndoff, C. "Solar panel renewable energy inductive learning." In 2010 IEEE International Symposium on Sustainable Systems and Technology (ISSST). IEEE, 2010. http://dx.doi.org/10.1109/issst.2010.5507767.
9. Jantke, Klaus P. "Case-based learning in inductive inference." In The fifth annual workshop. New York, New York, USA: ACM Press, 1992. http://dx.doi.org/10.1145/130385.130409.
10. Shi, Yuan, Zhenzhong Lan, Wei Liu, and Wei Bi. "Extending Semi-supervised Learning Methods for Inductive Transfer Learning." In 2009 Ninth IEEE International Conference on Data Mining (ICDM). IEEE, 2009. http://dx.doi.org/10.1109/icdm.2009.75.

Reports on the topic "Inductive learning"

1. Lukac, Martin. Quantum Inductive Learning and Quantum Logic Synthesis. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.2316.
2. Hurwitz, David, Rachel Adams, H. Benjamin Mason, Kamilah Buker, and Richard Slocum. Innovation in the classroom: A transportation geotechnics application of desktop learning modules to promote inductive learning. Oregon State University, 2017. http://dx.doi.org/10.5399/osu/1113.
3. Pazzani, Michael J. A Combined Analytic and Inductive Approach to Learning in Knowledge-based Systems. Fort Belvoir, VA: Defense Technical Information Center, January 1997. http://dx.doi.org/10.21236/ada335735.
4. Schmid, Ute, and Fritz Wysotzki. Applying Inductive Program Synthesis to Learning Domain-Dependent Control Knowledge - Transforming Plans into Programs. Fort Belvoir, VA: Defense Technical Information Center, June 2000. http://dx.doi.org/10.21236/ada382307.
5. Küsters, Ralf, and Ralf Molitor. Computing Least Common Subsumers in ALEN. Aachen University of Technology, 2000. http://dx.doi.org/10.25368/2022.110.
Abstract: Computing the least common subsumer (lcs) in description logics is an inference task first introduced for sublanguages of CLASSIC. Roughly speaking, the lcs of a set of concept descriptions is the most specific concept description that subsumes all of the input descriptions. As such, the lcs allows to extract the commonalities from given concept descriptions, a task essential for several applications like, e.g., inductive learning, information retrieval, or the bottom-up construction of KR-knowledge bases. Previous work on the lcs has concentrated on description logics that either allow for number restrictions or for existential restrictions. Many applications, however, require to combine these constructors. In this work, we present an lcs algorithm for the description logic ALEN, which allows for both constructors (as well as concept conjunction, primitive negation, and value restrictions). The proof of correctness of our lcs algorithm is based on an appropriate structural characterization of subsumption in ALEN also introduced in this paper.
6. Langley, Pat, and Herbert A. Simon. Applications of Machine Learning and Rule Induction. Fort Belvoir, VA: Defense Technical Information Center, February 1995. http://dx.doi.org/10.21236/ada292607.