Academic literature on the topic 'Connectionist learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Connectionist learning.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Connectionist learning"

1

Hinton, Geoffrey E. "Connectionist learning procedures." Artificial Intelligence 40, no. 1-3 (1989): 185–234. http://dx.doi.org/10.1016/0004-3702(89)90049-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Robins, Anthony V. "Multiple Representations in Connectionist Systems." International Journal of Neural Systems 2, no. 4 (1991): 345–62. http://dx.doi.org/10.1142/s0129065791000327.

Full text
Abstract:
This paper proposes an extension to the basic framework of distributed representation through the learning and use of different sorts of information—“multiple representations”—in connectionist/neural network systems. In current distributed networks units are typically ascribed only one “representing” or information carrying state (activation). Similarly, connections carry a single piece of information (a weight derived from the structure of the population of patterns). In this paper we explore units and connections with multiple information carrying states. In this extended framework, multiple distributed representations can coexist with a given pattern of activation. Processing may be based on the interaction of these representations and multiple learning processes can occur simultaneously in a network. We illustrate these extensions using (in addition to patterns of activation) “centrality distribution” representations. Centrality distributions are applied to two tasks, the representation of category and type hierarchy information and the highlighting of exceptional mappings to speed up learning. We suggest that the use of multiple distributed representations in a network can increase the flexibility and power of connectionist systems while remaining within the subsymbolic paradigm. This topic is of particular relevance in the context of the recent interest in the limitations of connectionism and the interface between connectionist and symbolic methods.
APA, Harvard, Vancouver, ISO, and other styles
3

Hanson, Stephen José, and David J. Burr. "What connectionist models learn: Learning and representation in connectionist networks." Behavioral and Brain Sciences 13, no. 3 (1990): 471–89. http://dx.doi.org/10.1017/s0140525x00079760.

Full text
Abstract:
Connectionist models provide a promising alternative to the traditional computational approach that has for several decades dominated cognitive science and artificial intelligence, although the nature of connectionist models and their relation to symbol processing remains controversial. Connectionist models can be characterized by three general computational features: distinct layers of interconnected units, recursive rules for updating the strengths of the connections during learning, and “simple” homogeneous computing elements. Using just these three features one can construct surprisingly elegant and powerful models of memory, perception, motor control, categorization, and reasoning. What makes the connectionist approach unique is not its variety of representational possibilities (including “distributed representations”) or its departure from explicit rule-based models, or even its preoccupation with the brain metaphor. Rather, it is that connectionist models can be used to explore systematically the complex interaction between learning and representation, as we try to demonstrate through the analysis of several large networks.
APA, Harvard, Vancouver, ISO, and other styles
4

Aksoy, Mehmet, and Hassan Mathkou. "CSLS: Connectionist Symbolic Learning System." Mathematical and Computational Applications 14, no. 3 (2009): 177–86. http://dx.doi.org/10.3390/mca14030177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hadley, Robert F. "Systematicity in Connectionist Language Learning." Mind & Language 9, no. 3 (1994): 247–72. http://dx.doi.org/10.1111/j.1468-0017.1994.tb00225.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Christiansen, Morten H., and Nick Chater. "Generalization and Connectionist Language Learning." Mind & Language 9, no. 3 (1994): 273–87. http://dx.doi.org/10.1111/j.1468-0017.1994.tb00226.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Abraham, Ajith. "Complex learning in connectionist networks." Neurocomputing 130 (April 2014): 52. http://dx.doi.org/10.1016/j.neucom.2013.07.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Eiser, J. Richard, Tom Stafford, and Russell H. Fazio. "Prejudiced learning: A connectionist account." British Journal of Psychology 100, no. 2 (2009): 399–413. http://dx.doi.org/10.1348/000712608x357849.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Neal, Radford M. "Connectionist learning of belief networks." Artificial Intelligence 56, no. 1 (1992): 71–113. http://dx.doi.org/10.1016/0004-3702(92)90065-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Clark, Andy. "Representational trajectories in connectionist learning." Minds and Machines 4, no. 3 (1994): 317–32. http://dx.doi.org/10.1007/bf00974197.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Connectionist learning"

1

Lee, Michael D. "Connectionist learning of mental representation." PhD thesis, University of Adelaide, 1997. http://hdl.handle.net/2440/19137.

Full text
Abstract:
Title page, contents and abstract only. The complete thesis in print form is available from the University Library. Thesis (Ph.D.), University of Adelaide, Depts. of Psychology and Electrical and Electronic Engineering, 1997.
APA, Harvard, Vancouver, ISO, and other styles
2

Bartos, Paul D. "Connectionist modelling of category learning." Thesis, Open University, 2002. http://oro.open.ac.uk/18865/.

Full text
Abstract:
A shortcoming is identified with respect to the ability of exemplar-based connectionist models of category learning to offer accounts of learning about stimuli with variable dimensionality. Models which may simulate these tasks, such as the configural-cue network (Gluck & Bower, 1988b), appear to be unable to accurately simulate certain data well simulated by exemplar-based models such as ALCOVE (Kruschke, 1992). A task in which the advantage of ALCOVE is exemplified is the prediction of human learning rates on the six category structures tested by Shepard, Hovland, and Jenkins (1961). The ability of ALCOVE to simulate the observed order of difficulty depends on its incorporation of selective attention processes (Nosofsky, Gluck, Palmeri, McKinley, & Glauthier, 1994). This thesis focuses on developing configural-cue network models which incorporate these processes. Informed by an information-theoretic approach to modelling the implementation of selective attention using a configural-cue representation, five connectionist models are developed. Each is capable of predicting the order of difficulty reported by Shepard et al. (1961). Two models employ a modular structure, but analysis suggests that these may lack much of the functionality of the basic configural-cue network. The remaining three incorporate dimensional attention schemes. These models appear to offer superior generalisability in relation to the simulation of learning about variable dimensionality stimuli. This generalisability is examined by applying a variant of one of these dimensional attention models, to data collected by Kruschke (1996a) on the inverse base-rate effect and base-rate neglect. The model provides a qualitative fit to this data. The success of these configural-cue models on these two tasks, only successfully modelled previously using two distinct types of representation, indicates that the approach has some potential for further applications. 
Differences between the models applied, however, indicate that more sophisticated conceptions of the attention process may be required to allow further generalisability.
APA, Harvard, Vancouver, ISO, and other styles
3

Bartos, Paul D. "Connectionist modelling of category learning." N.p., 2001. http://dart.open.ac.uk/abstracts/page.php?thesisid=155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Noelle, David Charles. "A connectionist model of instructed learning." PhD diss., University of California, San Diego, 1997. http://wwwlib.umi.com/cr/ucsd/fullcit?p9811797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Stark, Randall J. "Connectionist variable binding architectures." Thesis, University of Sussex, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260835.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hetherington, Phil A. (Phillip Alan). "The sequential learning problem in connectionist networks /." Thesis, McGill University, 1991. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=60076.

Full text
Abstract:
The modelling of strictly sequential experimental tasks, such as serial list learning, has underscored a potential problem for connectionism: namely, the inability of connectionist networks to retain old information during the acquisition of new material (McCloskey & Cohen, 1989; Ratcliff, 1990). While humans also suffer from interference, connectionist networks experience a much greater loss of old material; this excessive retroactive interference is termed the sequential learning problem. This paper reviews two papers arguing that connectionist networks are unable to overcome the sequential learning problem, and five papers offering potential solutions. Simulations exploring issues arising from these reviews are described in the later part of the paper. It is true that connectionist models do suffer from the sequential learning problem. However, it appears that the problem is found only with simulations employing a strictly sequential training regime and involving small, unstructured item sets. Hence, there is no reason to believe that more realistic simulations of large, structured domains, such as language, will suffer from the sequential learning problem.
APA, Harvard, Vancouver, ISO, and other styles
7

Alquézar, Mancho René. "Symbolic and connectionist learning techniques for grammatical inference." Doctoral thesis, Universitat Politècnica de Catalunya, 1997. http://hdl.handle.net/10803/6651.

Full text
Abstract:
This thesis is structured in four parts for a total of ten chapters.

The first part, introduction and review (Chapters 1 to 4), presents an extensive state-of-the-art review of both symbolic and connectionist GI methods, which also serves to state most of the basic material needed to describe the contributions of the thesis. These contributions constitute the contents of the remaining parts (Chapters 5 to 10).

The second part, contributions on symbolic and connectionist techniques for regular grammatical inference (Chapters 5 to 7), describes the contributions related to the theory and methods for regular GI, which include other lateral subjects such as the representation of finite-state machines (FSMs) in recurrent neural networks (RNNs).

The third part of the thesis, augmented regular expressions and their inductive inference, comprises Chapters 8 and 9. The augmented regular expressions (or AREs) are defined and proposed as a new representation for a subclass of CSLs that does not contain all the context-free languages but a large class of languages capable of describing patterns with symmetries and other (context-sensitive) structures of interest in pattern recognition problems.

The fourth part of the thesis includes just Chapter 10: conclusions and future research. Chapter 10 summarizes the main results obtained and points out the lines of further research that should be followed, both to deepen some of the theoretical aspects raised and to facilitate the application of the developed GI tools to real-world problems in the area of computer vision.
APA, Harvard, Vancouver, ISO, and other styles
8

Bale, Tracey Ann. "Modular connectionist architectures and the learning of quantification skills." Thesis, University of Surrey, 1998. http://epubs.surrey.ac.uk/842886/.

Full text
Abstract:
Modular connectionist systems comprise autonomous, communicating modules, achieving a behaviour more complex than that of a single neural network. The component modules, possibly of different topologies, may operate under various learning algorithms. Some modular connectionist systems are constrained at the representational level, in that the connectivity of the modules is hard-wired by the modeller; others are constrained at an architectural level, in that the modeller explicitly allocates each module to a specific subtask. Our approach aims to minimise these constraints, thus reducing the bias possibly introduced by the modeller. This is achieved, in the first case, through the introduction of adaptive connection weights and, in the second, by the automatic allocation of modules to subtasks as part of the learning process. The efficacy of a minimally constrained system, with respect to representation and architecture, is demonstrated by a simulation of numerical development amongst children. The modular connectionist system MASCOT (Modular Architecture for Subitising and Counting Over Time) is a dual-routed model simulating the quantification abilities of subitising and counting. A gating network learns to integrate the outputs of the two routes in determining the final output of the system. MASCOT simulates subitising through a numerosity detection system comprising modules with adaptive weights that self-organise over time. The effectiveness of MASCOT is demonstrated in that the distance effect and Fechner's law for numbers are seen to be consequences of this learning process. The automatic allocation of modules to subtasks is illustrated in a simulation of learning to count. Introducing feedback into one of two competing expert networks enables a mixture-of-experts model to perform decomposition of a task into static and temporal subtasks, and to allocate appropriate expert networks to those subtasks. 
MASCOT successfully performs decomposition of the counting task with a two-gated mixture-of-experts model and exhibits childlike counting errors.
APA, Harvard, Vancouver, ISO, and other styles
9

Hofer, Daniel G. Sbarbaro. "Connectionist feedforward networks for control of nonlinear systems." Thesis, University of Glasgow, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390248.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Peters, Lorna. "Children's early learning about object balancing : behavioural and connectionist studies." Thesis, University of Hertfordshire, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302287.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Connectionist learning"

1

Touretzky, David. Connectionist Approaches to Language Learning. Springer US, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Touretzky, David, ed. Connectionist Approaches to Language Learning. Springer US, 1991. http://dx.doi.org/10.1007/978-1-4615-4008-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Phaf, R. Hans. Learning in Natural and Connectionist Systems. Springer Netherlands, 1994. http://dx.doi.org/10.1007/978-94-011-0840-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

French, Robert M., and Jacques P. Sougné, eds. Connectionist Models of Learning, Development and Evolution. Springer London, 2001. http://dx.doi.org/10.1007/978-1-4471-0281-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cleeremans, Axel. Mechanisms of Implicit Learning: Connectionist Models of Sequence Processing. MIT Press, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mira, José, and Alberto Prieto, eds. Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence. Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45720-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Phaf, R. Hans. Learning in Natural and Connectionist Systems: Experiments and a Model. Kluwer Academic Publishers, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mozer, Michael C., ed. Proceedings of the 1993 Connectionist Models Summer School. Lawrence Erlbaum Associates, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wermter, Stefan, Ellen Riloff, and Gabriele Scheler, eds. Connectionist, Statistical and Symbolic Approaches to Learning for Natural Language Processing. Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-60925-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

French, Robert M. Connectionist Models of Learning, Development and Evolution: Proceedings of the Sixth Neural Computation and Psychology Workshop, Liège, Belgium, 16-18 September 2000. Springer London, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Connectionist learning"

1

Chappelier, Jean-Cédric, Marco Gori, and Alain Grumbach. "Time in Connectionist Models." In Sequence Learning. Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-44565-x_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Karaminis, Themis N., and Michael S. C. Thomas. "Connectionist Theories of Learning." In Encyclopedia of the Sciences of Learning. Springer US, 2012. http://dx.doi.org/10.1007/978-1-4419-1428-6_398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Eikmeyer, Hans-Jürgen. "Connectionist models of utterance production." In Parallelism, Learning, Evolution. Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/3-540-55027-5_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Seridi-Bouchelaghem, Hassina, Toufik Sari, and Mokthar Sellami. "A Connectionist Approach for Adaptive Lesson." In Technology Enhanced Learning. Springer US, 2005. http://dx.doi.org/10.1007/0-387-24047-0_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Schwartz, D. B., and V. K. Samalam. "Analog VLSI for Connectionist Learning." In International Neural Network Conference. Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sayegh, Samir I. "A Fast Connectionist Learning Paradigm." In International Neural Network Conference. Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_77.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Phaf, R. Hans. "A connectionist approach to learning." In Learning in Natural and Connectionist Systems. Springer Netherlands, 1994. http://dx.doi.org/10.1007/978-94-011-0840-9_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Porat, Sara, and Jerome A. Feldman. "Learning Automata from Ordered Examples." In Connectionist Approaches to Language Learning. Springer US, 1991. http://dx.doi.org/10.1007/978-1-4615-4008-3_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Phaf, R. Hans. "Psychological constraints on learning." In Learning in Natural and Connectionist Systems. Springer Netherlands, 1994. http://dx.doi.org/10.1007/978-94-011-0840-9_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Touretzky, David S. "Introduction." In Connectionist Approaches to Language Learning. Springer US, 1991. http://dx.doi.org/10.1007/978-1-4615-4008-3_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Connectionist learning"

1

Baker, Walter L., and Jay A. Farrell. "Connectionist learning systems for control." In Boston - DL tentative, edited by David P. Casasent. SPIE, 1991. http://dx.doi.org/10.1117/12.25211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mitchell, Ben, and John Sheppard. "Deep Structure Learning: Beyond Connectionist Approaches." In 2012 Eleventh International Conference on Machine Learning and Applications (ICMLA). IEEE, 2012. http://dx.doi.org/10.1109/icmla.2012.34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Montgomery, L., K. Krishnakumar, and G. Weeks. "Structural control using connectionist learning principles." In Guidance, Navigation and Control Conference. American Institute of Aeronautics and Astronautics, 1992. http://dx.doi.org/10.2514/6.1992-4467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Franklin, Judy A., Richard S. Sutton, Charles W. Anderson, Oliver G. Selfridge, and Daniel B. Schwartz. "Connectionist Learning Control at GTE Laboratories." In 1989 Symposium on Visual Communications, Image Processing, and Intelligent Robotics Systems, edited by Guillermo Rodriguez. SPIE, 1990. http://dx.doi.org/10.1117/12.969923.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chang, Chen-Huei, Chao-Chih Chang, and Shu-Yuen Hwang. "Connectionist learning procedure for edge detector." In Robotics - DL tentative, edited by David P. Casasent. SPIE, 1992. http://dx.doi.org/10.1117/12.57061.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Farrell, J., B. Goldenthal, and K. Govindarajan. "Connectionist learning control systems: submarine depth control." In 29th IEEE Conference on Decision and Control. IEEE, 1990. http://dx.doi.org/10.1109/cdc.1990.204050.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gallant. "A learning algorithm for connectionist concept classes." In International Joint Conference on Neural Networks. IEEE, 1989. http://dx.doi.org/10.1109/ijcnn.1989.118613.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Prasad, Rajesh S., U. V. Kulkarni, and Jayashree R. Prasad. "Machine learning in Evolving Connectionist Text Summarizer." In 2009 3rd International Conference on Anti-counterfeiting, Security, and Identification in Communication (2009 ASID). IEEE, 2009. http://dx.doi.org/10.1109/icasid.2009.5277001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Henderson, James, and Peter Lane. "A connectionist architecture for learning to parse." In the 36th annual meeting. Association for Computational Linguistics, 1998. http://dx.doi.org/10.3115/980845.980934.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Henderson, James, and Peter Lane. "A connectionist architecture for learning to parse." In the 17th international conference. Association for Computational Linguistics, 1998. http://dx.doi.org/10.3115/980451.980934.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Connectionist learning"

1

Goetsch, Gordon J. CONSENSUS: A Statistical Learning Procedure in a Connectionist Network. Defense Technical Information Center, 1987. http://dx.doi.org/10.21236/ada188531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Oliver, William L., and Walter Schneider. Using Rules and Task Division to Augment Connectionist Learning. Defense Technical Information Center, 1988. http://dx.doi.org/10.21236/ada218903.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Library, Spring. The Cycle of Learning, Memorizing, and Forgetting. Spring Library, 2020. http://dx.doi.org/10.47496/sl.blog.17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Baird, Natalie, Tanushree Bharat Shah, Ali Clacy, et al. maths inside Resource Suite with Interdisciplinary Learning Activities. University of Glasgow, 2021. http://dx.doi.org/10.36399/gla.pubs.234071.

Full text
Abstract:
Maths inside is a photo competition open to everyone living in Scotland, hosted by the University of Glasgow. The maths inside project seeks to nourish a love for mathematics by embarking on a journey of discovery through a creative lens. This suite of resources have been created to inspire entrants, and support families, teachers and those out-of-school to make deeper connections with their surroundings. The maths inside is waiting to be discovered! Also contained in the suite is an example to inspire and support you to design your own interdisciplinary learning (IDL) activity matched to Education Scotland experiences and outcomes (Es+Os), to lead pupils towards the creation of their own entry. These resources are not prescriptive, and are designed with a strong creativity ethos for them to be adapted and delivered in a manner that meets the specific needs of those participating. The competition and the activities can be tailored to meet all and each learners' needs. We recommend that those engaging with maths inside for the first time complete their own mapping exercise linking the designed activity to the Es+Os. To create a collaborative resource bank open to everyone, we invite you to treat these resources as a working document for entrants, parents, carers, teachers and schools to make their own. Please share your tips, ideas and activities at info@mathsinside.com and through our social media channels. Past winning entries of the competition are also available for inspiration and for using as a teaching resource. Already inspired? Enter the competition!
APA, Harvard, Vancouver, ISO, and other styles