Academic literature on the topic 'Learning and Forgetting'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Learning and Forgetting.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Learning and Forgetting"

1

Kamuangu, Giasuma. "Learning and Forgetting." International Journal of Learning: Annual Review 12, no. 4 (2007): 45–52. http://dx.doi.org/10.18848/1447-9494/cgp/v14i04/45310.

2

Li, Zhizhong, and Derek Hoiem. "Learning without Forgetting." IEEE Transactions on Pattern Analysis and Machine Intelligence 40, no. 12 (December 1, 2018): 2935–47. http://dx.doi.org/10.1109/tpami.2017.2773081.

3

Carmona, Salvador, and Anders Gronlund. "Learning from Forgetting." Management Learning 29, no. 1 (March 1998): 21–38. http://dx.doi.org/10.1177/1350507698291002.

4

Unterstell, Rembert. "Learning, Remembering and Forgetting." German Research 41, no. 2 (September 2019): 14–15. http://dx.doi.org/10.1002/germ.201970205.

5

Villas-Boas, Sofia Berto, and J. Miguel Villas-Boas. "Learning, Forgetting, and Sales." Management Science 54, no. 11 (November 2008): 1951–60. http://dx.doi.org/10.1287/mnsc.1080.0909.

6

Ishikawa, Masumi. "Structural learning with forgetting." Neural Networks 9, no. 3 (April 1996): 509–21. http://dx.doi.org/10.1016/0893-6080(96)83696-3.

7

Globerson, Shlomo. "Incorporating Forgetting into Learning Curves." International Journal of Operations & Production Management 7, no. 4 (April 1987): 80–94. http://dx.doi.org/10.1108/eb054802.

8

Gold, James M., Gina Rehkemper, Sidney W. Binks, Constance J. Carpenter, Kirsten Fleming, Terry E. Goldberg, and Daniel R. Weinberger. "Learning and forgetting in schizophrenia." Journal of Abnormal Psychology 109, no. 3 (2000): 534–38. http://dx.doi.org/10.1037/0021-843x.109.3.534.

9

Watanabe, Eiji. "Selective Learning Algorithms with Forgetting Taking Account of the Balance between Learning and Forgetting." Transactions of the Institute of Systems, Control and Information Engineers 10, no. 12 (1997): 628–36. http://dx.doi.org/10.5687/iscie.10.628.

10

Leung, Chi Sing, and Lai Wan Chan. "The Behavior of Forgetting Learning in Bidirectional Associative Memory." Neural Computation 9, no. 2 (February 1, 1997): 385–401. http://dx.doi.org/10.1162/neco.1997.9.2.385.

Abstract:
Forgetting learning is an incremental learning rule in associative memories. With it, the recent learning items can be encoded, and the old learning items will be forgotten. In this article, we analyze the storage behavior of bidirectional associative memory (BAM) under the forgetting learning. That is, “Can the most recent k learning item be stored as a fixed point?” Also, we discuss how to choose the forgetting constant in the forgetting learning such that the BAM can correctly store as many as possible of the most recent learning items. Simulation is provided to verify the theoretical analysis.
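
The forgetting-learning rule summarized in this abstract lends itself to a short illustration. The sketch below is not the paper's implementation; it assumes the common Hebbian-with-decay form, in which a forgetting constant alpha in (0, 1) scales the existing weight matrix before each new pattern pair is stored, so the most recent pairs dominate the memory. The dimensions and the alpha value are arbitrary choices.

```python
import numpy as np

def bipolar_sign(v):
    """Map a real vector to {-1, +1} (ties go to +1)."""
    return np.where(v >= 0, 1, -1)

def forgetting_learning(pairs, n, m, alpha=0.9):
    """Encode a stream of bipolar pattern pairs (x, y) with exponential forgetting."""
    W = np.zeros((n, m))
    for x, y in pairs:
        W = alpha * W + np.outer(x, y)   # decay old items, then store the new one
    return W

def recall_y(W, x, steps=5):
    """Bidirectional recall: alternate the two layer updates a few times."""
    y = bipolar_sign(W.T @ x)
    for _ in range(steps):
        x = bipolar_sign(W @ y)
        y = bipolar_sign(W.T @ x)
    return y

# Toy usage: stream 30 random pairs through the memory; the most recent pair
# should be recalled far more reliably than the earliest one.
rng = np.random.default_rng(0)
pairs = [(rng.choice([-1, 1], 64), rng.choice([-1, 1], 32)) for _ in range(30)]
W = forgetting_learning(pairs, n=64, m=32, alpha=0.9)
print("last pair recalled: ", np.array_equal(recall_y(W, pairs[-1][0]), pairs[-1][1]))
print("first pair recalled:", np.array_equal(recall_y(W, pairs[0][0]), pairs[0][1]))
```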

Dissertations / Theses on the topic "Learning and Forgetting"

1

Packer, Heather S. "Evolving ontologies with online learning and forgetting algorithms." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/194923/.

Abstract:
Agents that require vocabularies to complete tasks can be limited by static vocabularies which cannot evolve to meet unforeseen domain tasks, or reflect their changing needs or environment. However, agents can benefit from using evolution algorithms to evolve their vocabularies, namely the ability to support new domain tasks. While an agent can capitalise on being able to support more domain tasks, using existing techniques can hinder them because they do not consider the associated costs involved with evolving an agent's ontology. With this motivation, we explore the area of ontology evolution in agent systems, and focus on the reduction of the costs associated with an evolving ontology. In more detail, we consider how an agent can reduce the costs of evolving an ontology; these include costs associated with: the acquisition of new concepts; processing new concepts; the increased memory usage from storing new concepts; and the removal of unnecessary concepts. Previous work reported in the literature has largely failed to analyse these costs in the context of evolving an agent's ontology. Against this background, we investigate and develop algorithms to enable agents to evolve their ontologies. More specifically, we present three online evolution algorithms that enable agents to: i) augment domain-related concepts, ii) use prediction to select concepts to learn, and iii) prune unnecessary concepts from their ontology, with the aim to reduce the costs associated with the acquisition, processing and storage of acquired concepts. In order to evaluate our evolution algorithms, we developed an agent framework which enables agents to use these algorithms and measure an agent's performance. Finally, our empirical evaluation shows that our algorithms are successful in reducing the costs associated with evolving an agent's ontology.
2

Vik, Mikael Eikrem. "Reducing catastrophic forgetting in neural networks using slow learning." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8702.

Abstract:

This thesis describes a connectionist approach to learning and long-term memory consolidation, inspired by empirical studies on the roles of the hippocampus and neocortex in the brain. The existence of complementary learning systems is due to demands posed on our cognitive system by the nature of our experiences. It has been shown that dual-network architectures utilizing information transfer can successfully avoid the phenomenon of catastrophic forgetting involved in multiple sequence learning. The experiments involve a Reverberated Simple Recurrent Network which is trained on multiple sequences, with the memory being reinforced by means of self-generated pseudopatterns. My focus is on how differentiated learning speed affects the level of forgetting, without explicit training on the data used to form the existing memory.
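
The pseudopattern mechanism mentioned here can be sketched in a few lines. The code below is not the thesis's Reverberated Simple Recurrent Network; it only illustrates the generic pseudorehearsal recipe it builds on (probe the old network with random inputs, keep its outputs as targets, and interleave those pairs with the new data at a slow learning rate). The network shape, learning rate, and pattern counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical feed-forward stand-in for the network holding the existing memory.
net = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 10))

def make_pseudopatterns(model, n_patterns=64, in_dim=10):
    """Probe the trained model with random inputs and keep its outputs as targets."""
    with torch.no_grad():
        probes = torch.rand(n_patterns, in_dim)
        targets = model(probes)
    return probes, targets

def train_with_pseudorehearsal(model, new_x, new_y, lr=1e-3, epochs=50):
    """Interleave training on new data with rehearsal of self-generated pseudopatterns."""
    pseudo_x, pseudo_y = make_pseudopatterns(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)   # slow learning: small step size
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(new_x), new_y) + loss_fn(model(pseudo_x), pseudo_y)
        loss.backward()
        opt.step()
    return model
```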

3

Besedin, Andrey. "Continual forgetting-free deep learning from high-dimensional data streams." Electronic Thesis or Diss., Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1263.

Abstract:
In this thesis, we propose a new deep-learning-based approach for online classification on streams of high-dimensional data. In recent years, Neural Networks (NN) have become the primary building block of state-of-the-art methods in various machine learning problems. Most of these methods, however, are designed to solve the static learning problem, when all data are available at once at training time. Performing Online Deep Learning is exceptionally challenging. The main difficulty is that NN-based classifiers usually rely on the assumption that the sequence of data batches used during training is stationary, or in other words, that the distribution of data classes is the same for all batches (the i.i.d. assumption). When this assumption does not hold, Neural Networks tend to forget the concepts that are temporarily not available in the stream. In the literature, this phenomenon is known as catastrophic forgetting. The approaches we propose in this thesis aim to guarantee the i.i.d. nature of each batch that comes from the stream and to compensate for the lack of historical data. To do this, we train generative and pseudo-generative models capable of producing synthetic samples from classes that are absent or under-represented in the stream, and complete the stream's batches with these samples. We test our approaches in an incremental learning scenario and in a specific type of continual learning. Our approaches perform classification on dynamic data streams with accuracy close to the results obtained in the static classification configuration, where all data are available for the duration of learning. Besides, we demonstrate the ability of our methods to adapt to unseen data classes and to new instances of already known data categories, while avoiding forgetting previously acquired knowledge.
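
A minimal sketch of the batch-completion idea described above follows. The generator interface (sample(class_id, k)) and the per-class quota are hypothetical stand-ins rather than the thesis's actual models; the point is only to show how a stream batch can be topped up with synthetic samples so that its class distribution looks i.i.d. again.

```python
import numpy as np

class ToyGenerator:
    """Placeholder for a trained class-conditional generative model (e.g., a GAN or VAE)."""
    def __init__(self, dim):
        self.dim = dim
    def sample(self, class_id, k):
        # Crude stand-in: class-dependent Gaussian noise instead of learned samples.
        return np.random.randn(k, self.dim) + class_id

def complete_batch(batch_x, batch_y, generator, all_classes, per_class):
    """Pad a stream batch with synthetic samples so every class reaches per_class items."""
    xs, ys = [batch_x], [batch_y]
    for c in all_classes:
        missing = per_class - int(np.sum(batch_y == c))
        if missing > 0:
            xs.append(generator.sample(c, missing))
            ys.append(np.full(missing, c))
    x, y = np.concatenate(xs), np.concatenate(ys)
    order = np.random.permutation(len(y))   # shuffle real and synthetic samples together
    return x[order], y[order]

# Usage: a batch from the stream contains only classes 0 and 1; classes 2-4 are filled in.
gen = ToyGenerator(dim=8)
bx, by = np.random.randn(20, 8), np.repeat([0, 1], 10)
cx, cy = complete_batch(bx, by, gen, all_classes=range(5), per_class=10)
print(np.bincount(cy))   # 10 samples per class
```
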
4

Evilevitch, Anton, and Robert Ingram. "Avoiding Catastrophic Forgetting in Continual Learning through Elastic Weight Consolidation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302552.

Abstract:
Image classification is an area of computer science with many areas of application. One key issue with using Artificial Neural Networks (ANN) for image classification is the phenomenon of Catastrophic Forgetting when training tasks sequentially (i.e. Continual Learning). This is when the network quickly loses its performance on a given task after it has been trained on a new task. Elastic Weight Consolidation (EWC) has previously been proposed as a remedy to lessen the effects of this phenomenon through the use of a loss function which utilizes a Fisher Information Matrix. We want to explore and establish whether this still holds true for modern network architectures, and to what extent it can be applied using today's state-of-the-art networks. We focus on applying this approach to tasks within the same dataset. Our results indicate that the approach is feasible, and does in fact lessen the effect of Catastrophic Forgetting. These results are achieved, however, at the cost of much longer execution times and time spent tuning the hyperparameters.
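
For reference, a minimal sketch of the EWC penalty evaluated in this thesis (after Kirkpatrick et al., 2017) is shown below: after training on task A, a copy of the parameters and a diagonal Fisher estimate are kept, and a quadratic penalty anchored at those parameters is added to the task-B loss. The variable names, the diagonal Fisher approximation, and the regularization strength are illustrative assumptions, not details taken from the thesis.

```python
import torch

def diag_fisher(model, loader, loss_fn):
    """Estimate a diagonal Fisher matrix from squared gradients on old-task data."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2 / len(loader)
    return fisher

def ewc_penalty(model, old_params, fisher, lam=1000.0):
    """(lam / 2) * sum_i F_i * (theta_i - theta_old_i)^2, summed over all parameters."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * penalty

# After finishing task A (model, loss_fn, task_a_loader, optimizer are assumed to exist):
#   old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
#   fisher = diag_fisher(model, task_a_loader, loss_fn)
# While training on task B:
#   loss = loss_fn(model(x), y) + ewc_penalty(model, old_params, fisher)
#   loss.backward(); optimizer.step()
```
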
5

Ahmad, Neida Basheer. "Forgetting Can Be Helpful for Learning: How Wakeful, Offline Processing Influences Infant Language Learning." Thesis, The University of Arizona, 2017. http://hdl.handle.net/10150/624894.

Abstract:
In previous work, 11-month-old infants were unable to learn rules about the relation of the consonants in CVCV words when the stimuli were randomly ordered. By chance, the random ordering of the stimuli promoted local spurious generalizations that impeded infants' learning of the phonotactic rules. This experiment asked whether a 30-second delay after exposure to a list of 24 randomly ordered words promotes learning. The 30-second delay did promote learning, though not until the third block of testing, suggesting that a longer delay might have shown a more robust effect. The interaction between conformity and block did not approach significance. However, t-tests performed on each of the three blocks revealed that in the third block, infants displayed a novelty preference, wherein they listened longer to stimuli that did not conform to their familiarization rule than to stimuli that conformed to it. Additionally, there is a trend toward an interaction between the previous experiment (no delay) and the current experiment (30-sec delay), suggesting that the 30-second delay may have made a difference in infants' behavior.
6

Hough, Gerald E. "Learning, forgetting, and remembering: retention of song in the adult songbird." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu148820355277807.

7

Beaulieu, Shawn L. "Developing Toward Generality: Combating Catastrophic Forgetting with Developmental Compression." ScholarWorks @ UVM, 2018. https://scholarworks.uvm.edu/graddis/874.

Abstract:
General intelligence is the exhibition of intelligent behavior across multiple problems in a variety of settings, however intelligence is defined and measured. Endemic in approaches to realize such intelligence in machines is catastrophic forgetting, in which sequential learning corrupts knowledge obtained earlier in the sequence or in which tasks antagonistically compete for system resources. Methods for obviating catastrophic forgetting have either sought to identify and preserve features of the system necessary to solve one problem when learning to solve another, or enforce modularity such that minimally overlapping sub-functions contain task-specific knowledge. While successful in some domains, both approaches scale poorly because they require larger architectures as the number of training instances grows, causing different parts of the system to specialize for separate subsets of the data. Presented here is a method called developmental compression that addresses catastrophic forgetting in the neural networks of embodied agents. It exploits the mild impacts of developmental mutations to lessen adverse changes to previously evolved capabilities and 'compresses' specialized neural networks into a single generalized one. In the absence of domain knowledge, developmental compression produces systems that avoid overt specialization, alleviating the need to engineer a bespoke system for every task permutation, and does so in a way that suggests better scalability than existing approaches. This method is validated on a robot control problem and may be extended to other machine learning domains in the future.
8

Weeks, Clinton. "Investigation of the differential forgetting rates of item and associative information." [St. Lucia, Qld.], 2002. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe16837.pdf.

9

Ariel, Robert. "The Contribution of Past Test Performance, New Learning, and Forgetting to Judgment-of-Learning Resolution." Kent State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=kent1277315741.

10

Jaber, Mohamad Y. "The effects of learning and forgetting on the economic manufactured quantity (EMQ)." Thesis, University of Nottingham, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.319967.


Books on the topic "Learning and Forgetting"

1

The book of learning and forgetting. New York: Teachers College Press, 1998.

2

Rediscovering psychoanalysis: Thinking and feeling, learning and forgetting. Hove, East Sussex: Routledge, 2008.

3

Benkard, C. Lanier. Learning and forgetting: The dynamics of aircraft production. Cambridge, MA: National Bureau of Economic Research, 1999.

4

The house on Beartown Road: A memoir of learning and forgetting. New York: Random House, 2003.

5

Harrow, Jenny. Modelling risk in public services organisations: Managers, organisational learning and organisational forgetting. York: ESRC Risk & Human Behaviour Programme, 1995.

6

Chen, Chong. Strategic Memory: The Natural History of Learning and Forgetting. Brain & Life Publishing, 2018.

7

Stark, Alastair. Crafting and Forgetting Policy Lessons. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198831990.003.0006.

Abstract:
This chapter explores the data relating to the lesson-learning process. It presents an expanded conceptualization of inquiry learning that covers how lessons are produced, how they are implemented, and their shelf-life once institutionalized. This reconceptualization reveals several new and complex aspects of learning that have not been considered before in public inquiry scholarship. It also draws attention to two specific aspects of the lesson-learning process, largely neglected in relation to inquiries, that influence the effectiveness of learning attempts. The first is the issue of policy transfer, which shapes the way in which inquiry lessons are crafted and communicated. The second is the issue of institutional amnesia, which often undermines the lesson-learning gains that inquiries produce.
8

Learning and Forgetting in Development NGOs: Insights from Organisational Theory. Taylor & Francis Group, 2018.

9

Cohen, Elizabeth. The House on Beartown Road: A Memoir of Learning and Forgetting. Thorndike Press, 2003.

10

Cohen, Elizabeth, and Elizabeth Cohen Van Pelt. The House on Beartown Road: A Memoir of Learning and Forgetting. Random House, 2003.


Book chapters on the topic "Learning and Forgetting"

1

Argote, Linda. "Organizational Forgetting." In Organizational Learning, 57–84. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4614-5251-5_3.

2

Li, Zhizhong, and Derek Hoiem. "Learning Without Forgetting." In Computer Vision – ECCV 2016, 614–29. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46493-0_37.

3

MacLeod, Colin M. "Directed Forgetting." In Encyclopedia of the Sciences of Learning, 993–95. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4419-1428-6_1084.

4

Holan, Pablo Martin De, and Nelson Phillips. "Organizational Forgetting." In Handbook of Organizational Learning and Knowledge Management, 433–51. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2015. http://dx.doi.org/10.1002/9781119207245.ch20.

5

Dar-El, Ezey M. "Applications (With & Without Forgetting)." In HUMAN LEARNING: From Learning Curves to Learning Organizations, 99–133. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4757-3113-2_6.

6

der Vorm, Martijn van. "Learning and forgetting counterinsurgency." In The Conduct of War in the 21st Century, 189–208. Routledge Advances in Defence Studies. Abingdon, Oxon; New York, NY: Routledge, 2021. http://dx.doi.org/10.4324/9781003054269-17.

7

Dar-El, Ezey M. "Summary of Learning Models · No Forgetting." In HUMAN LEARNING: From Learning Curves to Learning Organizations, 25–55. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4757-3113-2_3.

8

Dar-El, Ezey M. "A Summary of Learning Models with Forgetting." In HUMAN LEARNING: From Learning Curves to Learning Organizations, 77–97. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4757-3113-2_5.

9

Kontinen, Tiina. "Unlearning, forgetting and ignorance." In Learning and Forgetting in Development NGOs, 100–127. Abingdon, Oxon; New York, NY: Routledge, 2018. http://dx.doi.org/10.4324/9781315108988-5.

10

Tsapanos, Nikolaos, Anastasios Tefas, and Ioannis Pitas. "Dynamic Shape Learning and Forgetting." In Artificial Neural Networks – ICANN 2010, 333–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15825-4_44.


Conference papers on the topic "Learning and Forgetting"

1

Miao, Yuan. "Mobile Learning against Forgetting." In 2008 The Second International Conference on Next Generation Mobile Applications, Services, and Technologies. IEEE, 2008. http://dx.doi.org/10.1109/ngmast.2008.87.

2

Shibata, Takashi, Go Irie, Daiki Ikami, and Yu Mitsuzumi. "Learning with Selective Forgetting." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/137.

Abstract:
Lifelong learning aims to train a highly expressive model for a new task while retaining all knowledge for previous tasks. However, many practical scenarios do not always require the system to remember all of the past knowledge. Instead, ethical considerations call for selective and proactive forgetting of undesirable knowledge in order to prevent privacy issues and data leakage. In this paper, we propose a new framework for lifelong learning, called Learning with Selective Forgetting, which is to update a model for the new task with forgetting only the selected classes of the previous tasks while maintaining the rest. The key is to introduce a class-specific synthetic signal called mnemonic code. The codes are "watermarked" on all the training samples of the corresponding classes when the model is updated for a new task. This enables us to forget arbitrary classes later by only using the mnemonic codes without using the original data. Experiments on common benchmark datasets demonstrate the remarkable superiority of the proposed method over several existing methods.
3

Xie, Yanlu, Yue Chen, and Man Li. "Convolution Forgetting Curve Model for Repeated Learning." In 2020 International Conference on Artificial Intelligence and Education (ICAIE). IEEE, 2020. http://dx.doi.org/10.1109/icaie50891.2020.00109.

4

Kukk, V., and K. Umbleja. "Analysis of forgetting in a learning environment." In 2012 13th Biennial Baltic Electronics Conference (BEC2012). IEEE, 2012. http://dx.doi.org/10.1109/bec.2012.6376885.

5

Gidaris, Spyros, and Nikos Komodakis. "Dynamic Few-Shot Visual Learning Without Forgetting." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00459.

6

Sakamoto, Yasuaki, and Toshihiko Matsuka. "Incorporating Forgetting in a Category Learning Model." In 2007 International Joint Conference on Neural Networks. IEEE, 2007. http://dx.doi.org/10.1109/ijcnn.2007.4371432.

7

Jin, Yaochu, and B. Sendhoff. "Alleviating Catastrophic Forgetting via Multi-Objective Learning." In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.247332.

8

Gunay, Melike, Eyyup Yildiz, Yagiz Nalcakan, Batuhan Asiroglu, Ahmet Zencirli, Bugra Rumeysa Mete, and Tolga Ensari. "Digital Data Forgetting: A Machine Learning Approach." In 2018 2nd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT). IEEE, 2018. http://dx.doi.org/10.1109/ismsit.2018.8567046.

9

Rostami, Mohammad, Soheil Kolouri, and Praveen K. Pilly. "Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/463.

Abstract:
Despite huge success, deep networks are unable to learn effectively in sequential multitask learning settings as they forget the past learned tasks after learning new tasks. Inspired from complementary learning systems theory, we address this challenge by learning a generative model that couples the current task to the past learned tasks through a discriminative embedding space. We learn an abstract generative distribution in the embedding that allows generation of data points to represent past experience. We sample from this distribution and utilize experience replay to avoid forgetting and simultaneously accumulate new knowledge to the abstract distribution in order to couple the current task with past experience. We demonstrate theoretically and empirically that our framework learns a distribution in the embedding, which is shared across all tasks, and as a result tackles catastrophic forgetting.
10

Riopel, Martin. "PRACTICE AND FORGETTING CURVES DEDUCED FROM SCALE INVARIANCE." In International Conference on Education and New Learning Technologies. IATED, 2017. http://dx.doi.org/10.21125/edulearn.2017.0188.


Reports on the topic "Learning and Forgetting"

1

Library, Spring. The Cycle of Learning, Memorizing, and Forgetting. Spring Library, December 2020. http://dx.doi.org/10.47496/sl.blog.17.

2

Kyllonen, Patrick C., and William C. Tirre. Individual Differences in Associative Learning and Forgetting. Fort Belvoir, VA: Defense Technical Information Center, August 1989. http://dx.doi.org/10.21236/ada212765.

3

Benkard, C. Lanier. Learning and Forgetting: The Dynamics of Aircraft Production. Cambridge, MA: National Bureau of Economic Research, May 1999. http://dx.doi.org/10.3386/w7127.

