Dissertations / Theses on the topic 'Learning and Forgetting'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 46 dissertations / theses for your research on the topic 'Learning and Forgetting.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Packer, Heather S. "Evolving ontologies with online learning and forgetting algorithms." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/194923/.
Vik, Mikael Eikrem. "Reducing catastrophic forgetting in neural networks using slow learning." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8702.
This thesis describes a connectionist approach to learning and long-term memory consolidation, inspired by empirical studies on the roles of the hippocampus and neocortex in the brain. The existence of complementary learning systems is due to demands posed on our cognitive system by the nature of our experiences. It has been shown that dual-network architectures utilizing information transfer can successfully avoid the phenomenon of catastrophic forgetting involved in multiple sequence learning. The experiments involve a Reverberated Simple Recurrent Network which is trained on multiple sequences, with the memory reinforced by means of self-generated pseudopatterns. My focus is on how differentiated learning speed affects the level of forgetting, without explicit training on the data used to form the existing memory.
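For readers unfamiliar with the pseudopattern mechanism mentioned above, it can be sketched in a few lines: random inputs are passed through the already-trained network, and the resulting input-output pairs are interleaved with new training data so that old knowledge is rehearsed without storing old data. A minimal illustration in Python (hypothetical names, not the thesis code):

```python
import numpy as np

def generate_pseudopatterns(net_forward, n_patterns, input_dim, seed=0):
    """Label random inputs with the trained network's own outputs.

    Interleaving these (input, target) pairs with a new task's data
    approximates rehearsal of old knowledge without storing old data.
    """
    rng = np.random.default_rng(seed)
    inputs = rng.uniform(-1.0, 1.0, size=(n_patterns, input_dim))
    targets = np.array([net_forward(x) for x in inputs])
    return inputs, targets
```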
Besedin, Andrey. "Continual forgetting-free deep learning from high-dimensional data streams." Electronic Thesis or Diss., Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1263.
In this thesis, we propose a new deep-learning-based approach for online classification on streams of high-dimensional data. In recent years, neural networks (NN) have become the primary building block of state-of-the-art methods in various machine learning problems. Most of these methods, however, are designed to solve the static learning problem, when all data are available at once at training time. Performing online deep learning is exceptionally challenging. The main difficulty is that NN-based classifiers usually rely on the assumption that the sequence of data batches used during training is stationary, or in other words, that the distribution of data classes is the same for all batches (the i.i.d. assumption). When this assumption does not hold, neural networks tend to forget the concepts that are temporarily not available in the stream. In the literature, this phenomenon is known as catastrophic forgetting. The approaches we propose in this thesis aim to guarantee the i.i.d. nature of each batch that comes from the stream and to compensate for the lack of historical data. To do this, we train generative models and pseudo-generative models capable of producing synthetic samples from classes that are absent or underrepresented in the stream, and complete the stream's batches with these samples. We test our approaches in an incremental learning scenario and in a specific type of continual learning. Our approaches perform classification on dynamic data streams with accuracy close to the results obtained in the static classification setting, where all data are available for the duration of learning. Besides, we demonstrate the ability of our methods to adapt to unseen data classes and new instances of already known data categories, while avoiding forgetting the previously acquired knowledge.
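The batch-completion idea in this abstract reduces to topping up each incoming batch with synthetic samples for classes that are missing or underrepresented. A rough sketch, where generator_sample stands in for whatever per-class generative model has been trained (names and shapes are illustrative assumptions):

```python
import numpy as np

def complete_batch(x, y, generator_sample, all_classes, per_class):
    """Top up a stream batch so every class reaches per_class samples.

    x: (n, d) array of inputs; y: (n,) array of integer class labels.
    generator_sample(c, k) is assumed to return k synthetic samples of
    class c, making the completed batch look i.i.d. over classes.
    """
    xs, ys = [x], [y]
    for c in all_classes:
        deficit = per_class - int(np.sum(y == c))
        if deficit > 0:
            xs.append(generator_sample(c, deficit))
            ys.append(np.full(deficit, c))
    return np.concatenate(xs), np.concatenate(ys)
```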
Evilevitch, Anton, and Robert Ingram. "Avoiding Catastrophic Forgetting in Continual Learning through Elastic Weight Consolidation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302552.
Image classification is a field of computer science with many areas of application. A key issue in the use of artificial neural networks (ANNs) for image classification is the phenomenon of catastrophic forgetting. This occurs when a network is trained sequentially (i.e., continual learning): the network rapidly loses performance on a given task after being trained on a new one. Elastic Weight Consolidation (EWC) has previously been proposed as a mitigation, applying a loss function that uses the Fisher information matrix. We want to explore and establish whether this still holds for modern network architectures, and to what extent it can be applied. We apply the method to tasks within a single dataset. Our results show that the method is feasible and has a reducing effect on catastrophic forgetting. These results, however, come at the cost of longer running times and increased time spent on hyperparameter selection.
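For reference, the EWC penalty this thesis evaluates has a compact form: the new task's loss is augmented with a quadratic term that anchors each parameter in proportion to its estimated Fisher information from earlier tasks. A minimal sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def ewc_loss(task_loss, params, old_params, fisher_diag, lam):
    """Elastic Weight Consolidation objective:
    L = L_task + (lam / 2) * sum_i F_i * (theta_i - theta_star_i)^2,
    where F_i is the diagonal Fisher information estimated on the old
    task and theta_star_i the parameters learned for it.
    """
    penalty = sum(np.sum(f * (p - p_old) ** 2)
                  for p, p_old, f in zip(params, old_params, fisher_diag))
    return task_loss + 0.5 * lam * penalty
```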
Ahmad, Neida Basheer, and Neida Basheer Ahmad. "Forgetting Can Be Helpful for Learning: How Wakeful, Offline Processing Influences Infant Language Learning." Thesis, The University of Arizona, 2017. http://hdl.handle.net/10150/624894.
Hough, Gerald E. "Learning, forgetting, and remembering: retention of song in the adult songbird." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu148820355277807.
Beaulieu, Shawn L. "Developing Toward Generality: Combating Catastrophic Forgetting with Developmental Compression." ScholarWorks @ UVM, 2018. https://scholarworks.uvm.edu/graddis/874.
Weeks, Clinton. "Investigation of the differential forgetting rates of item and associative information." [St. Lucia, Qld.], 2002. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe16837.pdf.
Ariel, Robert. "The Contribution of Past Test Performance, New Learning, and Forgetting to Judgment-of-Learning Resolution." Kent State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=kent1277315741.
Jaber, Mohamad Y. "The effects of learning and forgetting on the economic manufactured quantity (EMQ)." Thesis, University of Nottingham, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.319967.
Lesort, Timothée. "Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAE003.
Humans learn all their lives. They accumulate knowledge from a sequence of learning experiences and remember the essential concepts without forgetting what they have learned previously. Artificial neural networks struggle to learn similarly. They often rely on rigorously preprocessed data to learn solutions to specific problems such as classification or regression. In particular, they forget their past learning experiences if trained on new ones. Therefore, artificial neural networks are often inept at dealing with real-life settings such as an autonomous robot that has to learn online to adapt to new situations and overcome new problems without forgetting its past learning experiences. Continual learning (CL) is a branch of machine learning addressing this type of problem. Continual algorithms are designed to accumulate and improve knowledge in a curriculum of learning experiences without forgetting. In this thesis, we propose to explore continual algorithms with replay processes. Replay processes gather together rehearsal methods and generative replay methods. Generative replay consists of regenerating past learning experiences with a generative model to remember them. Rehearsal consists of saving a core-set of samples from past learning experiences to rehearse them later. Replay processes make possible a compromise between optimizing the current learning objective and the past ones, enabling learning without forgetting in sequential task settings. We show that they are very promising methods for continual learning. Notably, they enable the re-evaluation of past data with new knowledge and the confrontation of data from different learning experiences. We demonstrate their ability to learn continually through unsupervised learning, supervised learning and reinforcement learning tasks.
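Of the two replay processes named here, rehearsal is the simpler to picture: keep a small core-set of past samples and mix a few into every new batch. A sketch using reservoir sampling to maintain the core-set (the buffer policy is my assumption; the thesis does not prescribe this particular one):

```python
import random

class RehearsalBuffer:
    """Fixed-size core-set of past samples, mixed into each new batch so
    the optimizer balances the current objective against earlier tasks."""

    def __init__(self, capacity):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, sample):
        # Reservoir sampling keeps the buffer a uniform sample of the stream.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = sample

    def mix(self, batch, k):
        # Return the new batch augmented with up to k rehearsed samples.
        return batch + random.sample(self.data, min(k, len(self.data)))
```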
Larsen, Caroline, and Elin Ryman. "A quantitative analysis of how the Variational Continual Learning method handles catastrophic forgetting." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280447.
Catastrophic forgetting is a problem that arises when an artificial neural network overwrites old knowledge as new information is learned. Several methods claimed to handle this problem have been trained and evaluated on datasets consisting of a small number of tasks, which does not represent a real-world situation where the number of tasks can be large. This report examines how three versions of the Variational Continual Learning (VCL) method handle catastrophic forgetting when artificial neural networks are trained on a dataset with 20 tasks, as well as a dataset with 5 tasks. The results show that all three versions of the method performed well, although there were some signs of catastrophic forgetting. In particular, the two versions of VCL extended with an episodic memory achieved the best results. In summary, all three versions of the VCL method handle catastrophic forgetting when trained on a dataset consisting of up to 20 tasks.
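The Bayesian recursion that VCL approximates is easiest to see in a conjugate toy case: the posterior after task t becomes the prior for task t+1. A sketch for a one-dimensional Gaussian mean with known noise variance (illustrative only; VCL itself applies variational approximations over network weights):

```python
def vcl_style_update(prior_mu, prior_var, data, noise_var):
    """Exact conjugate update from p(theta | D_1:t-1) to p(theta | D_1:t).

    VCL approximates this recursion for neural networks; here the exact
    Gaussian case shows the posterior feeding back in as the next prior.
    """
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mu = post_var * (prior_mu / prior_var + sum(data) / noise_var)
    return post_mu, post_var  # use as (prior_mu, prior_var) for task t+1
```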
Masana, Castrillo Marc. "Lifelong Learning of Neural Networks: Detecting Novelty and Adapting to New Domains without Forgetting." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/671591.
Computer vision has gone through considerable changes in the last decade as neural networks have come into common use. As available computational capabilities have grown, neural networks have achieved breakthroughs in many computer vision tasks, and have even surpassed human performance in others. With accuracy being so high, focus has shifted to other issues and challenges. One research direction that saw a notable increase in interest is lifelong learning systems. Such systems should be capable of efficiently performing tasks, identifying and learning new ones, and should moreover be able to deploy smaller versions of themselves which are experts on specific tasks. In this thesis, we contribute to research on lifelong learning and address the compression and adaptation of networks to small target domains, the incremental learning of networks faced with a variety of tasks, and finally the detection of out-of-distribution samples at inference time. We explore how knowledge can be transferred from large pretrained models to more task-specific networks capable of running on smaller devices by extracting the most relevant information based on activation statistics. Using a pretrained model provides more robust representations and a more stable initialization when learning a smaller task, which leads to higher performance and is known as domain adaptation. However, those models are too large for certain applications that need to be deployed on devices with limited memory and computational capacity. In this thesis we show that, after performing domain adaptation, some learned activations barely contribute to the predictions of the model. Therefore, we propose to apply network compression based on low-rank matrix decomposition using the activation statistics. This results in a significant reduction of the model size and the computational cost. Like human intelligence, machine intelligence aims to have the ability to learn and remember knowledge. However, when a trained neural network is presented with learning a new task, it ends up forgetting previous ones. This is known as catastrophic forgetting and its avoidance is studied in continual learning. The work presented in this thesis extensively surveys continual learning techniques (both when knowing the task-ID at test time and when not) and presents an approach to avoid catastrophic forgetting in sequential task learning scenarios. Our technique is based on using ternary masks to update a network to new tasks, reusing the knowledge of previous ones while not forgetting anything about them. In contrast to earlier work, our masks are applied to the activations of each layer instead of the weights. This considerably reduces the number of mask parameters to be added for each new task, by more than three orders of magnitude for most networks. Furthermore, the analysis of a wide range of work on incremental learning without access to the task-ID provides insight into current state-of-the-art approaches that focus on avoiding catastrophic forgetting by using regularization, rehearsal of previous tasks from a small memory, or compensating the task-recency bias. We also consider the problem of out-of-distribution detection. Neural networks trained with a cross-entropy loss force the outputs of the model to tend toward a one-hot encoded vector. This leads to models being overly confident when presented with images or classes that were not present in the training distribution. The capacity of a system to be aware of the boundaries of the learned tasks and identify anomalies or classes which have not been learned yet is key to lifelong learning and autonomous systems. In this thesis, we present a metric learning approach to out-of-distribution detection that learns the task at hand in an embedding space.
Tummaluri, Raghuram R. "Operator Assignment in Labor Intensive Cells Considering Operation Time Based Skill Levels, Learning and Forgetting." Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1126900571.
Sharp, Jessica Lynn. "Retention in Male and Female Rats: Forgetting Curves for an Element that Violates Pattern Structure." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1505299781953525.
Osorio, Ricardo M. Tamayo. "Sources of dissociation in the forgetting trajectories of implicit and explicit knowledge." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2009. http://dx.doi.org/10.18452/15867.
In this dissertation I investigate dissociations in the forgetting patterns of implicit and explicit knowledge. I claim that this approach may provide significant constraints on the assumption that a single system or mechanism determines both implicit and explicit processes. In the theoretical part, I construct a definition of implicit knowledge as information learned and retrieved without intention. I also explain the general role of single dissociations in theories of implicit knowledge, and I present an overview of the main lines of research concerned with the functions, operation, development, neural substrates, and forgetting patterns of implicit knowledge. In general, I argue that comparing the forgetting patterns of implicit and explicit knowledge may be best regarded from a graded perspective and may usefully bridge the gap between research on implicit learning and implicit memory. In a series of four experiments, university students were exposed to environmental regularities embedded in artificial grammar (AG) and serial reaction time (SRT) tasks. To compare the forgetting patterns, participants' implicit (motor-performance based) and explicit (recognition based) knowledge was assessed before and after a retention interval. Taken together, the results indicate that explicit knowledge decays faster than implicit knowledge in both AG and SRT tasks. Furthermore, an interference task introduced instead of a retention interval produced the same pattern of dissociations. Finally, I conducted a set of simulations to assess the ability of a single-system model (Shanks, Wilkinson, & Channon, 2003) to account for my experimental results. The simulations showed that the model best fits the empirical data by introducing changes in the parameters related to (a) the common knowledge strength (for implicit and explicit knowledge), and (b) the reliability of the explicit test. In sum, my dissertation (1) suggests a conceptual framework for implicit and explicit knowledge, (2) provides new empirical evidence of dissociations in their forgetting patterns, and (3) identifies specific boundary conditions for a single-system model.
Li, Max Hongming. "Extension on Adaptive MAC Protocol for Space Communications." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1275.
Sharp, Jessica L. "Learning And Forgetting Of Complex Serial Behaviors In Rats: Interference And Spacing Effects In The Serial Multiple Choice Task." Kent State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=kent1564070613748065.
Kurtz, Tanja. "Individual differences in learning and forgetting in old age: the role of basic cognitive abilities and subjective organization." Ulm: Universität Ulm, Fakultät für Ingenieurwissenschaften und Informatik, 2014. http://d-nb.info/1047384558/34.
Kubik, Veit. "Effects of Testing and Enactment on Memory." Doctoral thesis, Stockholms universitet, Psykologiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-108094.
At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 1: Epub ahead of print. Paper 2: Manuscript. Paper 3: Manuscript.
Wilson, Haley Pace. "Generalizability of Predictive Performance Optimizer Predictions Across Learning Task Type." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1471010032.
Gatto, Lorenzo. "Apprendimento continuo per il riconoscimento di immagini" [Continual learning for image recognition]. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15607/.
Martínez Plumed, Fernando. "Incremental and developmental perspectives for general-purpose learning systems." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/67269.
The overwhelming success of Artificial Intelligence (AI) at solving specific tasks (from recommender systems to self-driving vehicles) has not yet been matched by similar progress on more general AI systems that address a wider variety of tasks. This thesis addresses the creation of general-purpose AI systems as well as the analysis and evaluation of both their development and their cognitive capabilities. First, this thesis contributes a general-purpose learning system that combines several advantages: expressiveness, comprehensibility and versatility. The system is based on inherently general approaches, inductive programming and reinforcement learning, and relies on a dynamic library of learning operators, so it is able to operate in a wide variety of contexts. This flexibility, together with its declarative character, makes it possible to use the system instrumentally in order to facilitate the understanding of the different constructs that each task requires to be solved. Finally, the learning process is also revisited through an evolutionary and incremental approach to the acquisition, consolidation and forgetting of knowledge, which is necessary when working with limited resources (memory and time). Second, this thesis analyses the use of human intelligence tests for the evaluation of AI systems, and asks whether their use can constitute a valid alternative to current, more task-oriented AI evaluation approaches. To this end, an exhaustive literature review is conducted of AI systems that have been used to solve this kind of problem. This makes it possible to analyse what intelligence tests really measure in AI systems, whether they are meaningful for their evaluation, whether they really constitute complex problems and, finally, whether they are useful for understanding (human) intelligence. Finally, the concepts of cognitive development and incremental learning in AI systems are analysed, not only at the conceptual level but also through these problems, thereby improving the understanding and construction of evolving general-purpose systems.
Martínez Plumed, F. (2016). Incremental and developmental perspectives for general-purpose learning systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/67269
Johansson, Philip. "Incremental Learning of Deep Convolutional Neural Networks for Tumour Classification in Pathology Images." Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158225.
Hocquet, Guillaume. "Class Incremental Continual Learning in Deep Neural Networks." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST070.
We are interested in the problem of continual learning of artificial neural networks in the case where the data are available for only one class at a time. To address the problem of catastrophic forgetting that restrains learning performance in these conditions, we propose an approach based on the representation of the data of a class by a normal distribution. The transformations associated with these representations are performed using invertible neural networks, which can be trained with the data of a single class. Each class is assigned a network that models its features. In this setting, predicting the class of a sample corresponds to identifying the network that best fits the sample. The advantage of such an approach is that once a network is trained, it is no longer necessary to update it later, as each network is independent of the others. It is this particularly advantageous property that sets our method apart from previous work in this area. We support our demonstration with experiments performed on various datasets and show that our approach performs favorably compared to the state of the art. Subsequently, we propose to optimize our approach by reducing its impact on memory by factoring the network parameters. It is then possible to significantly reduce the storage cost of these networks with a limited performance loss. Finally, we also study strategies to produce efficient feature-extractor models for continual learning and show their relevance compared to the networks traditionally used for continual learning.
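The prediction rule described here, one independent density model per class and classification by highest likelihood, can be sketched as follows, with diagonal Gaussians standing in for the invertible networks used in the thesis:

```python
import numpy as np

class PerClassDensityClassifier:
    """One density model per class, trained independently; prediction
    picks the model under which a sample is most likely. Diagonal
    Gaussians stand in here for the thesis's invertible networks."""

    def __init__(self):
        self.models = {}

    def fit_class(self, label, x):
        # Adding a new class never requires touching earlier models.
        self.models[label] = (x.mean(axis=0), x.var(axis=0) + 1e-6)

    def log_likelihood(self, label, x):
        mu, var = self.models[label]
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

    def predict(self, x):
        return max(self.models, key=lambda c: self.log_likelihood(c, x))
```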
Liang, Hongyan. "Three Essays on Performance Evaluation in Operations and Supply Chain Management." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1504827189112207.
Alhawari, Omar I. "Operator Assignment Decisions in a Highly Dynamic Cellular Environment." Ohio University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1221596218.
Nguyen, Minh Ha. "Cooperative coevolutionary mixture of experts: a neuro ensemble approach for automatic decomposition of classification problems." Awarded by: University of New South Wales - Australian Defence Force Academy, School of Information Technology and Electrical Engineering, 2006. http://handle.unsw.edu.au/1959.4/38752.
Cook, Samantha. "The Effect of oestrogen in a series of models related to schizophrenia and Alzheimer's disease. A preclinical investigation into the effect of oestrogen on memory, executive function and anxiety in response to pharmacological insult and in a model of natural forgetting." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5508.
Velková, Romana. "Psychologické aspekty reklamy" [Psychological aspects of advertising]. Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-199786.
Gerbier, Emilie. "Effet du type d'agencement temporel des répétitions d'une information sur la récupération explicite" [Effect of the temporal scheduling of repetitions of information on explicit retrieval]. Thesis, Lyon 2, 2011. http://www.theses.fr/2011LYO20029/document.
How information is repeated over time determines future recollection of this information. Studies in psychology have revealed a distributed practice effect: one retains information better when its occurrences are separated by long lags rather than by short lags. Our studies focused specifically on cases in which items were repeated over several days. We compared the efficiency of three different temporal schedules of repetition: a uniform schedule, in which repetitions occurred at equal intervals; an expanding schedule, in which repetitions occurred at longer and longer intervals; and a contracting schedule, in which repetitions occurred at shorter and shorter intervals. In Experiments 1 and 2, the learning phase lasted one week and the retention interval lasted two days. The expanding and uniform schedules were more efficient than the contracting schedule. In Experiment 3, the learning phase lasted two weeks and the retention interval lasted 2, 6, or 13 days. The superiority of the expanding schedule over the other two schedules emerged gradually as the retention interval increased, suggesting that different schedules yield different forgetting rates. We also tested major theories of the distributed practice effect, such as the encoding variability (Experiment 4) and study-phase retrieval (Experiment 2) theories. Our results appear to be consistent with the study-phase retrieval theory. We conclude by emphasizing the importance of considering findings from other areas of cognitive science, especially neuroscience and computer science, in the study of the distributed practice effect.
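The three repetition schedules compared in these experiments differ only in how a fixed learning span is divided. A small sketch of the arithmetic (illustrative; the actual lags used are reported in the thesis):

```python
def schedule(total_span, n_reps, kind):
    """Days on which an item is presented. 'uniform' spaces repetitions
    evenly, 'expanding' uses growing gaps, 'contracting' the reverse."""
    weights = {"uniform": [1] * (n_reps - 1),
               "expanding": list(range(1, n_reps)),
               "contracting": list(range(n_reps - 1, 0, -1))}[kind]
    scale = total_span / sum(weights)
    days, day = [0], 0.0
    for w in weights:
        day += w * scale
        days.append(round(day))
    return days

# schedule(12, 4, "uniform")     -> [0, 4, 8, 12]
# schedule(12, 4, "expanding")   -> [0, 2, 6, 12]
# schedule(12, 4, "contracting") -> [0, 6, 10, 12]
```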
Kim, Jong Wook; Koubek, Richard J.; Ritter, Frank E. "Procedural skills from learning to forgetting." 2008. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-3130/index.html.
Chen, Hsin Min (陳新民). "Lot-Sizing Models with Learning and Forgetting Effects." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/18037690590189420474.
National Taiwan University of Science and Technology
Department of Industrial Management
Academic year 90 (2001/02)
This dissertation studies the problems of incorporating both learning and forgetting effects into lot-sizing models in order to determine lot sizes and relevant management decisions. Three discrete time-varying demand models and one continuous stochastic demand model are proposed. This study also provides valuable suggestions for practitioners choosing appropriate lot-sizing techniques. Chapter two deals with the discrete time-varying demand lot-sizing problem in which both learning and forgetting effects on setup time and unit production time are considered under the condition of fixed learning and forgetting rates. The optimal production policy, including the number of production runs, lot sizes, and time points to start setups and production, can be obtained by using a multi-dimensional forward dynamic programming algorithm. Experimental results indicate that the effects of learning on the lot-size decision are more influential than forgetting effects. The production learning effect on the total cost is more influential than either the forgetting effects or the setup learning effect. Since the multi-dimensional forward dynamic programming algorithm mentioned above becomes computationally intractable for the problem in which the forgetting effect on unit production time is a function of the break length and the level of experience gained prior to the break, a near-optimal forward dynamic programming algorithm is proposed in Chapter three. The near-optimal solution is compared with those obtained by the multi-dimensional forward dynamic programming algorithm and four extended heuristics (the least unit cost heuristic, the technique for order placement and sizing, the Silver-Meal heuristic, and the economic production quantity algorithm). Several important observations obtained from a two-phase experiment verify the quality of the proposed algorithm and the chosen heuristic method. In Chapter four, the original Wagner-Whitin and the classical economic order quantity algorithms are extended to solve the problem in which the effects of production learning, production forgetting, and the time value of money on cost are considered simultaneously. Numerical examples indicate that the corresponding parameters for the three effects have significant impacts on the determination of lot sizes and relevant costs. Comparisons among models with and without the three effects are also made. In Chapter five, we consider a continuous stochastic demand lot-sizing model in which the replenishment lead time is affected by manufacturing learning and forgetting. Based on three propositions that each feasible solution must satisfy, an effective search algorithm is derived to obtain the optimal solution with integer decision variables, including the number of orders, the order size, and the reorder level. Computational results indicate that the learning and forgetting effects on the expected total cost become significant as the ordering cost or the backorder cost increases.
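The learning and forgetting effects underlying these models are commonly formalized with Wright's power-law learning curve, and a break in production is represented by losing part of the accumulated experience. A minimal sketch of that cost ingredient (an illustration of the standard formulation, not the dissertation's specific model):

```python
from math import log2

def unit_time(n, t1, learning_rate):
    """Wright's learning curve: time for the n-th unit is t1 * n**(-b),
    with b = -log2(learning_rate); e.g. learning_rate=0.8 means each
    doubling of cumulative output cuts unit time to 80%."""
    b = -log2(learning_rate)
    return t1 * n ** (-b)

def lot_production_time(lot_size, t1, learning_rate, retained_units=0):
    """Total time for one lot. Forgetting over a break is represented by
    restarting the curve with only retained_units of prior experience."""
    start = retained_units + 1
    return sum(unit_time(n, t1, learning_rate)
               for n in range(start, start + lot_size))
```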
Van Rensburg, Madri Stephani Jansen. "Forgetting to remember: organisational memory." Thesis, 2011. http://hdl.handle.net/10500/4812.
Psychology
Ph. D. (Consulting Psychology)
Gao, Zhi-Xian (高植賢). "Considering Learning and Forgetting Effects in Lot Sizing Methods." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/33904955168299427783.
Osothsilp, Napassavong. "Worker-task assignment based on individual learning, forgetting, and task complexity." 2002. http://www.library.wisc.edu/databases/connect/dissertations.html.
Weng, Li Cheng (翁麗卿). "Lot sizing models with learning and forgetting in production and setups." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/14093641266444446126.
National Taiwan Institute of Technology
Graduate Institute of Management Technology
Academic year 85 (1996/97)
This thesis deals with the problem of incorporating both learning and forgetting effects into discrete time-varying demand lot-sizing models in order to determine lot sizes. Forgetting is a retrogression in learning: breaks between intermittent production runs cause a loss of labour productivity, and infrequent setups cause a loss of setup proficiency. The focus of this work is the producer's view; we assume that inventory is received at the end of each period. For conciseness, only four lot-sizing models chosen from the literature are included in this study. Three heuristic models, the economic order quantity (EOQ), the least unit cost (LUC), and the Silver-Meal (SM) models, are extended to the case where the two effects are considered simultaneously. The extended Wagner-Whitin algorithm (WWA) is used to generate optimal solutions. Several important conclusions are drawn from a comparison of the three heuristic solutions with the optimal solutions, and suggestions are made for future research and for practitioners choosing an appropriate lot-sizing technique.
Hockenberry, Jason. "Learning, forgetting and technology substitution in the treatment of coronary artery disease." 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3316896.
王哲宏. "A study on resource-constrained multi-project scheduling with learning and forgetting considerations." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/46820788578569134309.
National Pingtung University of Science and Technology
Department of Industrial Management
Academic year 93 (2004/05)
In line with today's competitive environment, a business's ability to estimate costs and time activities when bidding on projects is critical both for operations and for profitability. For a project characterized by repetitive procedures, the effects of learning experience and forgetting periods on subsequent operations are the main factors determining the variation of the net present value. This study systematically incorporates the time value of money, together with the effects of learning experience and forgetting periods, into the resource-constrained multi-project scheduling problem and develops two efficient solution procedures: an optimal model and a genetic algorithm. To test the superiority of the model and algorithm proposed in this work, the developed heuristic rule is compared with existing heuristic rules, followed by an analysis of the key factors affecting project scheduling performance. The implementation results indicate that the proposed heuristic rule is superior to existing heuristic rules and that the effects of learning experience and forgetting periods are significant. This study therefore recommends that decision makers consider such features in project planning. In addition, the proposed search heuristic is helpful to project managers as a means of cost reduction in project scheduling.
Glysch, Randall L. "The influence of story structure on learning and forgetting of younger and older adults." 1990. http://catalog.hathitrust.org/api/volumes/oclc/23856798.html.
Typescript. Includes bibliographical references (leaves 36-40).
Chiang, Chun-Yi (江俊儀). "A Study of Mixed Inventory Models with Ordering Process under Learning and Forgetting Effect." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/ng5q78.
National Taipei University of Technology
Graduate Institute of Industrial Engineering and Management
Academic year 98 (2009/10)
Traditionally, repeated ordering processes are modeled by assuming that the ordering cost is constant, so that the relationship between ordering cost and total cost is a simple linear function. In fact, the unit ordering cost is not fixed, owing to the learning effect in the ordering operation. Moreover, interruption of the ordering operation causes a forgetting effect, which results in an actual total cost higher than the total cost obtained when only the learning effect is taken into consideration. This thesis investigates the impact of the learning and forgetting effects on ordering cost for a continuous-review inventory model involving controllable lead time with a mixture of backorder price discounts and partial lost sales. In this setting, the order quantity, backorder price discount, safety factor and lead time are decision variables, and the objective is to minimize the expected total cost with respect to them. We assume two probability distributions for lead-time demand: the normal distribution and a general distribution. We develop an algorithmic procedure for each case to find the optimal order quantity, backorder price discount, safety factor and lead time, and give two numerical examples to illustrate the results.
"Incremental Learning With Sample Generation From Pretrained Networks." Master's thesis, 2020. http://hdl.handle.net/2286/R.I.57207.
Dissertation/Thesis
Master's Thesis, Computer Science, 2020
Mondesire, Sean. "Complementary Layered Learning." Doctoral diss., 2014. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6140.
Ph.D.
Doctorate
Computer Science
Engineering and Computer Science
Computer Science
"Lifelong Adaptive Neuronal Learning for Autonomous Multi-Robot Demining in Colombia, and Enhancing the Science, Technology and Innovation Capacity of the Ejército Nacional de Colombia." Doctoral diss., 2019. http://hdl.handle.net/2286/R.I.55488.
Dissertation/Thesis
Doctoral Dissertation, Applied Mathematics for the Life and Social Sciences, 2019
Langa, Selaelo Norah. "The Role and function of emotions in primary school children's meaningful learning." Diss., 1999. http://hdl.handle.net/10500/17169.
Psychology of Education
M.Ed.(Psychology of Education)
Anbil Parthipan, Sarath Chandar. "On challenges in training recurrent neural networks." Thesis, 2019. http://hdl.handle.net/1866/23435.
In a multi-step prediction problem, the prediction at each time step can depend on the input at any of the previous time steps far in the past. Modelling such long-term dependencies is one of the fundamental problems in machine learning. In theory, recurrent neural networks (RNNs) can model any long-term dependency. In practice, they can only model short-term dependencies due to the problem of vanishing and exploding gradients. This thesis explores the problem of vanishing gradients in recurrent neural networks and proposes novel solutions to it. Chapter 3 explores the idea of using external memory to store the hidden states of a Long Short-Term Memory (LSTM) network. By making the read and write operations of the external memory discrete, the proposed architecture reduces the rate at which gradients vanish in an LSTM. These discrete operations also enable the network to create dynamic skip connections across time. Chapter 4 attempts to characterize all the sources of vanishing gradients in a recurrent neural network and proposes a new recurrent architecture which has significantly better gradient flow than state-of-the-art recurrent architectures. The proposed Non-saturating Recurrent Units (NRUs) have no saturating activation functions and use additive cell updates instead of multiplicative cell updates. Chapter 5 discusses the challenges of using recurrent neural networks in the context of lifelong learning. In the lifelong learning setting, the network is expected to learn a series of tasks over its lifetime. The dependencies in lifelong learning are not just within a task, but also across tasks. This chapter discusses the two fundamental problems in lifelong learning: (i) catastrophic forgetting of old tasks, and (ii) network capacity saturation. Further, it proposes a solution to both problems while training a recurrent neural network.
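The additive-update idea behind the proposed Non-saturating Recurrent Units can be sketched loosely: when the state changes by addition through a non-saturating activation rather than by gated multiplicative rescaling, the Jacobian of the state transition stays close to the identity, which helps gradients flow. A toy illustration in the spirit of the chapter (not the NRU equations themselves):

```python
import numpy as np

def additive_cell_step(h, x, Wx, Wh, b):
    """Toy additive recurrent update: h_t = h_{t-1} + relu(Wx@x + Wh@h + b).

    The state accumulates content instead of being multiplicatively
    rescaled by gates, so dh_t/dh_{t-1} = I + diag(relu') @ Wh stays
    near the identity and gradients decay more slowly across time.
    """
    return h + np.maximum(0.0, Wx @ x + Wh @ h + b)
```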