Academic literature on the topic 'Continuous Learning'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Continuous Learning.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Continuous Learning"

1

Geeganage, Dakshi T. K., and Asoka S. Karunananda. "Ontology Driven Continuous Learning Approach." International Journal of Knowledge Engineering-IACSIT 1, no. 1 (2015): 37–42. http://dx.doi.org/10.7763/ijke.2015.v1.6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Williams, Ruth. "Continuous learning." Nursing Management 24, no. 6 (2017): 11. http://dx.doi.org/10.7748/nm.24.6.11.s11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Martin, Alec. "Continuous learning partnership." Education + Training 27, no. 5 (1985): 132–34. http://dx.doi.org/10.1108/eb017135.

Full text
Abstract:
The consultative paper Towards an Adult Training Strategy, published by the Manpower Services Commission in 1983, explicitly recognised and sought to foster wider appreciation of the need for systematic continuous learning throughout adult life. Adults will come from four decades; from various backgrounds of culture and language; from work, leisure and unemployment. If the opportunities of this situation are seized, a massive commitment will have been made to the development of the learning society. Inevitably this will mean commitment to the Open Society which, as Karl Popper long ago pointed out, also has its enemies, and recognition that mass communication has already decentralised power by distributing the information‐base.
APA, Harvard, Vancouver, ISO, and other styles
4

Mahida, Ankur. "A Review on Continuous Integration and Continuous Deployment (CI/CD) for Machine Learning." International Journal of Science and Research (IJSR) 10, no. 3 (2021): 1967–70. http://dx.doi.org/10.21275/sr24314131827.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

van Breda-Verduijn, Hester, and Marjoleine Heijboer. "Learning culture, continuous learning, organizational learning anthropologist." Industrial and Commercial Training 48, no. 3 (2016): 123–28. http://dx.doi.org/10.1108/ict-11-2015-0074.

Full text
Abstract:
Purpose – The purpose of this paper is to clarify the way an organizational culture forms the right breeding ground for continuous learning. More and more organizations feel the urgency for innovation and continuous improvement, and learning is a key issue in this. A powerful learning culture forms an effective breeding ground for continuous learning. For that reason, this paper analyzes the concept of "learning culture": how does it contribute to continuous improvement and innovation? The authors answer this question from the perspective of an organizational learning anthropologist.
Design/methodology/approach – The paper combines the perspectives of educational sciences and cultural anthropology, and is based on a variety of professional literature. The main point of reference is Schein's (1999) model of organizational culture.
Findings – Each organization has its own unique learning culture. A learning culture is considered effective when it supports the organizational objectives and forms an effective breeding ground for the learning needed within the organization.
Practical implications – This perspective offers learning and development professionals new ways of looking at the learning issues and solutions in their organizations. They become acquainted with a method for analyzing the learning culture in their own organization, understand how their organizational culture can influence learning issues, and get ideas on how to improve their organization's learning culture.
Originality/value – This paper combines insights from cultural anthropology and educational sciences.
APA, Harvard, Vancouver, ISO, and other styles
6

Stratford, Elaine. "Collaboration and continuous learning." Geographical Research 60, no. 2 (2022): 216–17. http://dx.doi.org/10.1111/1745-5871.12539.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hylton, Preetee. "'Continuous learning is contagious'." BDJ Team 8, no. 10 (2021): 34–36. http://dx.doi.org/10.1038/s41407-021-0769-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Day, George S. "Continuous Learning about Markets." California Management Review 36, no. 4 (1994): 9–31. http://dx.doi.org/10.2307/41165764.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Rowold, Jens, and Jan Schilling. "Career‐related continuous learning." Career Development International 11, no. 6 (2006): 489–503. http://dx.doi.org/10.1108/13620430610692917.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Day, George S. "Continuous learning about markets." Planning Review 20, no. 5 (1992): 47–49. http://dx.doi.org/10.1108/eb054381.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Continuous Learning"

1

Effraimidis, Dimitros. "Computation approaches for continuous reinforcement learning problems." Thesis, University of Westminster, 2016. https://westminsterresearch.westminster.ac.uk/item/q0y82/computation-approaches-for-continuous-reinforcement-learning-problems.

Full text
Abstract:
Optimisation theory is at the heart of any control process, where we seek to control the behaviour of a system through a set of actions. Linear control problems have been extensively studied, and optimal control laws have been identified. But the world around us is highly non-linear and unpredictable. For these dynamic systems, which do not possess the nice mathematical properties of their linear counterparts, classic control theory breaks down and other methods have to be employed. Nature, however, thrives by optimising non-linear and highly complicated systems. Evolutionary Computing (EC) methods exploit nature's way by imitating the evolution process, avoiding the need to solve the control problem analytically. Reinforcement Learning (RL), on the other hand, regards the optimal control problem as a sequential one. In every discrete time step an action is applied. The transition of the system to a new state is accompanied by a single numerical value, the "reward", which designates the quality of the control action. Even though the feedback is limited to a single real number, the introduction of the Temporal Difference method made it possible to obtain accurate predictions of the value functions. This paved the way to optimising complex structures, such as Neural Networks, which are used to approximate the value functions. In this thesis we investigate the solution of continuous Reinforcement Learning control problems by EC methodologies. The reward accumulated over an episode suffices to formulate the required measure, fitness, in order to optimise a population of candidate solutions. In particular, we explore the limits of applicability of a specific branch of EC, that of Genetic Programming (GP). The evolving population in the GP case is composed of individuals that translate directly into mathematical functions, which can serve as control laws.
The major contribution of this thesis is the proposed unification of these disparate Artificial Intelligence paradigms. The information provided by the systems is exploited on a step-by-step basis by the RL part of the proposed scheme and on an episodic basis by GP. This makes it possible to augment the function set of the GP scheme with adaptable Neural Networks. In the quest to achieve stable behaviour of the RL part of the system, a modification of the Actor-Critic algorithm has been implemented. Finally, we successfully apply the GP method to multi-action control problems, extending the spectrum of problems that this method has been proved to solve. We also investigate the capability of GP on problems from the food industry, which likewise exhibit non-linearity and lack a definite model describing their behaviour.
APA, Harvard, Vancouver, ISO, and other styles
2

Welby-Solomon, Vanessa. "The continuous learning cycle: Investigating possibilities for experiential learning." Thesis, University of the Western Cape, 2015. http://hdl.handle.net/11394/5357.

Full text
Abstract:
Magister Educationis (Adult Learning and Global Change) - MEd(AL)
Scholars focusing on experiential learning argue that experience should be considered as critical for adult learning. This research paper frames experiential learning within a Constructivist framework. It focuses on an investigation into the ways that facilitators use the Continuous Learning Cycle, a model for learning based on Kolb's Learning Cycle, to facilitate learning through experience during the triad skills observation role-play in a workshop that is part of an induction programme for a retail bank. Indications are that facilitators use the Continuous Learning Cycle in limited ways, and therefore undermine the possibilities for optimal experiential learning; and that the Continuous Learning Cycle has limitations.
APA, Harvard, Vancouver, ISO, and other styles
3

Johannemann, Jonathan. "COAL: a continuous active learning system." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111453.

Full text
Abstract:
Thesis: M. Fin., Massachusetts Institute of Technology, Sloan School of Management, Master of Finance Program, 2017. Cataloged from PDF version of thesis. Includes bibliographical references (pages 59-60).
In this thesis, our objective is to enable businesses looking to enhance their product by varying its attributes, where the effectiveness of the new product is assessed by humans. To achieve this, we mapped the task to a machine learning problem. The solution is twofold: learn a non-linear model that can map the attribute space to the human response, which can then be used to make predictions, and an active learning strategy that enables learning this model incrementally. We developed a system called the Continuous Active Learning system (COAL).
by Jonathan Johannemann. M. Fin.
APA, Harvard, Vancouver, ISO, and other styles
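The incremental active-learning loop described in the abstract above can be sketched in miniature. The 1-D threshold "model", the pool of attribute values, and the oracle below are illustrative assumptions for this sketch, not the COAL implementation, which learns a non-linear model of human responses.

```python
def fit_threshold(labelled):
    # Hypothetical stand-in for the learned response model: a 1-D
    # decision threshold placed midway between the lowest positively
    # labelled point and the highest negatively labelled point.
    pos = [x for x, y in labelled if y == 1]
    neg = [x for x, y in labelled if y == 0]
    return (min(pos) + max(neg)) / 2

def active_learning(pool, oracle, rounds):
    # Incremental loop: start from the two extremes, then repeatedly
    # query the unlabelled point nearest the current decision boundary
    # (uncertainty sampling) and refit the model.
    lo, hi = min(pool), max(pool)
    labelled = [(lo, oracle(lo)), (hi, oracle(hi))]
    unlabelled = [x for x in pool if x not in (lo, hi)]
    for _ in range(rounds):
        t = fit_threshold(labelled)
        x = min(unlabelled, key=lambda u: abs(u - t))  # most uncertain point
        labelled.append((x, oracle(x)))
        unlabelled.remove(x)
    return fit_threshold(labelled)

# Hypothetical human oracle: attribute values >= 0.63 are rated effective.
pool = [i / 10 for i in range(11)]
estimate = active_learning(pool, lambda x: 1 if x >= 0.63 else 0, rounds=3)
```

With only three queries beyond the two seed labels, the estimated boundary already lands close to the oracle's true cutoff, which is the point of querying where the model is least certain.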
4

Boyer, Eric. "Continuous auditory feedback for sensorimotor learning." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066165/document.

Full text
Abstract:
Our sensorimotor system has developed a specific relationship between our actions and their sonic outcomes, which it interprets as auditory feedback. The development of motion sensing and audio technologies allows emphasizing this relationship through interactive sonification of movement. We propose several experimental frameworks (visual, non-visual, tangible, virtual) to assess the contribution of sonification to sensorimotor control and learning in interactive systems. First, we show that the auditory system integrates dynamic auditory cues for online motor control, either from head or hand movements. Auditory representations of space and of the scene can be built from audio features and transformed into motor commands. The framework of a virtual sonic object illustrates that auditory-motor representations can shape exploratory movement features and allow for sensory substitution. Second, we measure that continuous auditory feedback in a tracking task significantly helps performance. Both error and task sonification can aid performance but have different effects on learning. We also observe that sonification of the user's movement can increase the energy of the produced motion and prevent feedback dependency. Finally, we present the concept of a sound-oriented task, where the target is expressed as acoustic features to match. We show that motor adaptation can be driven by interactive audio cues alone. In this work, we highlight important guidelines for sonification design in auditory-motor coupling research, as well as applications through original setups we developed, such as perceptual and physical training, and playful gesture-sound interactive scenarios for rehabilitation.
APA, Harvard, Vancouver, ISO, and other styles
5

Nichols, B. "Reinforcement learning in continuous state- and action-space." Thesis, University of Westminster, 2014. https://westminsterresearch.westminster.ac.uk/item/967w8/reinforcement-learning-in-continuous-state-and-action-space.

Full text
Abstract:
Reinforcement learning in continuous state-space poses the problem that the values of all state-action pairs cannot be stored in a lookup table, owing both to storage limitations and to the inability to visit all states often enough to learn the correct values. This can be overcome by using function approximation techniques with generalisation capability, such as artificial neural networks, to store the value function. With such an approximator we can select the optimal action by comparing the values of each possible action; when the action-space is continuous, however, this is not possible. In this thesis we investigate methods to select the optimal action when artificial neural networks are used to approximate the value function, through the application of numerical optimization techniques. Although it has been stated in the literature that gradient-ascent methods can be applied to action selection [47], it has also been claimed that solving this problem would be infeasible and that it is therefore necessary to utilise a second artificial neural network to approximate the policy function [21, 55]. The major contributions of this thesis include an investigation of the applicability of action selection by numerical optimization methods, including gradient ascent along with other derivative-based and derivative-free numerical optimization methods, and the proposal of two novel algorithms based on two alternative action selection methods: NM-SARSA [40] and NelderMead-SARSA. We empirically compare the proposed methods to state-of-the-art methods on three continuous state- and action-space control benchmark problems from the literature: minimum-time full swing-up of the Acrobot, the Cart-Pole balancing problem, and a double-pole variant. We also present novel results from applying the existing direct policy search method, genetic programming, to the Acrobot benchmark problem [12, 14].
APA, Harvard, Vancouver, ISO, and other styles
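The core idea in the thesis above, choosing a continuous action by numerically optimising an approximate value function rather than enumerating actions, can be sketched with derivative-free search. The toy quadratic Q-surface and the golden-section search here are illustrative stand-ins; the thesis uses neural-network approximators and optimisers such as Nelder-Mead.

```python
def q_value(state, action):
    # Toy stand-in for a learned Q-function approximator: smooth and
    # unimodal in the action, with its peak at action = -0.5 * state.
    return -((action + 0.5 * state) ** 2)

def select_action(state, lo=-2.0, hi=2.0, tol=1e-6):
    # Derivative-free greedy action selection: golden-section search
    # for the action maximising Q(s, a) over a bounded continuous interval.
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if q_value(state, c) >= q_value(state, d):
            b = d  # the maximum lies in [a, d]
        else:
            a = c  # the maximum lies in [c, b]
    return (a + b) / 2

best = select_action(1.0)  # the toy surface peaks at action = -0.5
```

Because the search only evaluates Q, never its gradient, the same loop works unchanged for non-differentiable or noisy approximators, which is the appeal of the derivative-free methods the thesis investigates.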
6

Stefano, Alexandra di. "Beyond the rhetoric: a grounded perspective on learning company and learning community relationships." Thesis, Open University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Dahlberg, Leslie. "Evolutionary Computation in Continuous Optimization and Machine Learning." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-35674.

Full text
Abstract:
Evolutionary computation is a field which uses natural computational processes to optimize mathematical and industrial problems. Differential Evolution, Particle Swarm Optimization, and Estimation of Distribution Algorithms are some of the newer emerging varieties which have attracted great interest among researchers. This work compares these three algorithms on a set of mathematical and machine learning benchmarks, and also synthesizes a new algorithm from the three and compares it to them. The benchmark results show which algorithm is best suited to handle various machine learning problems and present the advantages of using the new algorithm. The new algorithm, called DEDA (Differential Estimation of Distribution Algorithms), has shown promising results at both machine learning and mathematical optimization tasks.
APA, Harvard, Vancouver, ISO, and other styles
8

Tappen, Marshall Friend 1976. "Learning continuous models for estimating intrinsic component images." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37878.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (leaves 137-144).
The goal of computer vision is to use an image to recover the characteristics of a scene, such as its shape or illumination. This is difficult because an image is the mixture of multiple characteristics. For example, an edge in an image could be caused by either an edge on a surface or a change in the surface's color. Distinguishing the effects of different scene characteristics is an important step towards high-level analysis of an image. This thesis describes how to use machine learning to build a system that recovers different characteristics of the scene from a single, gray-scale image of the scene. The goal of the system is to use the observed image to recover images, referred to as Intrinsic Component Images, that represent the scene's characteristics. The development of the system is focused on estimating two important characteristics of a scene, its shading and reflectance, from a single image. From the observed image, the system estimates a shading image, which captures the interaction of the illumination and shape of the scene pictured, and an albedo image, which represents how the surfaces in the image reflect light. Measured both qualitatively and quantitatively, this system produces state-of-the-art estimates of shading and albedo images. This system is also flexible enough to be used for the separate problem of removing noise from an image. Building this system requires algorithms for continuous regression and for learning the parameters of a Conditionally Gaussian Markov Random Field. Unlike previous work, this system is trained using real-world surfaces with ground-truth shading and albedo images. The learning algorithms are designed to accommodate the large amount of data in this training set.
by Marshall Friend Tappen. Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
9

Tummala, Akhil. "Self-learning algorithms applied in Continuous Integration system." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16675.

Full text
Abstract:
Context: Continuous Integration (CI) is a software development practice in which developers integrate code into a shared repository. An automated system then verifies the code and runs automated test cases to find integration errors. For this research, Ericsson's CI system is used. The tests performed in CI are regression tests. Based on their time scopes, the regression test suites are categorized into hourly and daily test suites. The hourly test is performed on all the commits made in a day, whereas the daily test is performed at night on the latest build that passed the hourly test. Here, the hourly and daily test suites are static, and the hourly test suite is a subset of the daily test suite. Since the daily test is performed at the end of the day, its results are obtained only on the next day, delaying feedback to the developers regarding integration errors. To mitigate this problem, this research investigates the possibility of creating a learning model and integrating it into the CI system, so that a dynamic hourly test suite can be created for faster feedback. Objectives: This research aims to find a suitable machine learning algorithm for the CI system and to investigate the feasibility of creating self-learning test machinery. This goal is achieved by examining the CI system and finding out what type of data is required for creating a learning model for prioritizing the test cases. Once the necessary data is obtained, the selected algorithms are evaluated to find the most suitable learning algorithm, and it is investigated whether the created learning model can be integrated into the CI workflow. Methods: An experiment is conducted to evaluate the learning algorithms, using data provided by Ericsson AB, Gothenburg. The dataset consists of the daily test information and the test case results.
The algorithms evaluated in this experiment are Naïve Bayes, support vector machines, and decision trees. The evaluation is performed with leave-one-out cross-validation, and each learning algorithm's performance is measured by its prediction accuracy. After obtaining the accuracies, the algorithms are compared to find the most suitable machine learning algorithm for the CI system. Results: Based on the experimental results, support vector machines outperformed the Naïve Bayes and decision tree algorithms. However, due to challenges in the current CI system, integrating the created learning model into CI is not feasible. The primary challenge is that a test case failure cannot be mapped to its respective commit (one cannot find which commit made the test case fail), because the daily test is performed on the latest build, which combines all commits made that day. Another challenge is limited data storage, which leads to problems such as the curse of dimensionality and class imbalance. Conclusions: Through this research, a suitable learning algorithm is identified for creating self-learning test machinery, and the challenges of integrating the model into CI are identified. Based on the experimental results, support vector machines achieve higher prediction accuracy in test case result classification than Naïve Bayes and decision trees.
APA, Harvard, Vancouver, ISO, and other styles
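The leave-one-out evaluation described in the abstract above is simple to sketch: hold out each sample once, train on the rest, and score the held-out prediction. The 1-nearest-neighbour classifier and the tiny pass/fail dataset below are hypothetical stand-ins for the Naïve Bayes, SVM, and decision-tree models compared in the thesis.

```python
def leave_one_out_accuracy(X, y, classify):
    # Train on all samples but one, predict the held-out sample,
    # and report the fraction of held-out samples predicted correctly.
    correct = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        correct += classify(train_X, train_y, X[i]) == y[i]
    return correct / len(X)

def nearest_neighbour(train_X, train_y, x):
    # Minimal 1-NN classifier: predict the label of the closest
    # training point by squared Euclidean distance.
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in train_X]
    return train_y[dists.index(min(dists))]

# Tiny synthetic "test passed / test failed" dataset, for illustration only.
X = [[0.0, 0.1], [0.1, 0.0], [1.0, 0.9], [0.9, 1.0]]
y = ["pass", "pass", "fail", "fail"]
acc = leave_one_out_accuracy(X, y, nearest_neighbour)
```

Swapping `nearest_neighbour` for any other `classify(train_X, train_y, x)` function reproduces the comparison procedure the thesis uses to rank candidate algorithms by prediction accuracy.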
10

Pinto, Rafael Coimbra. "Continuous reinforcement learning with incremental Gaussian mixture models." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/157591.

Full text
Abstract:
This thesis' original contribution is a novel algorithm which integrates a data-efficient function approximator with reinforcement learning in continuous state spaces. The complete research includes the development of a scalable online and incremental algorithm capable of learning from a single pass through data. This algorithm, called Fast Incremental Gaussian Mixture Network (FIGMN), was employed as a sample-efficient function approximator for the state space of continuous reinforcement learning tasks, which, combined with linear Q-learning, results in competitive performance. Then, this same function approximator was employed to model the joint state and Q-values space, all in a single FIGMN, resulting in a concise and data-efficient algorithm, i.e., a reinforcement learning algorithm that learns from very few interactions with the environment. A single episode is enough to learn the investigated tasks in most trials. Results are analysed in order to explain the properties of the obtained algorithm, and it is observed that the use of the FIGMN function approximator brings some important advantages to reinforcement learning in relation to conventional neural networks.
APA, Harvard, Vancouver, ISO, and other styles
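The first combination the thesis above describes, a Gaussian feature representation feeding linear Q-learning, can be sketched as follows. As a simplifying assumption, fixed radial-basis features stand in for the mixture components that FIGMN would fit incrementally, and the single transition shown is hypothetical.

```python
import math

def rbf_features(state, centres, width=0.5):
    # Gaussian radial-basis features over a 1-D state: a fixed, simplified
    # stand-in for the Gaussian mixture components FIGMN fits incrementally.
    return [math.exp(-((state - c) ** 2) / (2 * width ** 2)) for c in centres]

def q_values(weights, feats):
    # Linear Q-function: one weight vector per discrete action.
    return [sum(w * f for w, f in zip(wa, feats)) for wa in weights]

def q_learning_step(weights, s, a, r, s2, centres, alpha=0.1, gamma=0.9):
    # Standard linear Q-learning update: w_a += alpha * td_error * phi(s).
    feats, feats2 = rbf_features(s, centres), rbf_features(s2, centres)
    td_error = r + gamma * max(q_values(weights, feats2)) - q_values(weights, feats)[a]
    weights[a] = [w + alpha * td_error * f for w, f in zip(weights[a], feats)]
    return weights

# One hypothetical transition: taking action 0 in state 0.0 yields reward 1.0.
centres = [0.0, 1.0]
weights = [[0.0, 0.0], [0.0, 0.0]]  # two discrete actions, two features
weights = q_learning_step(weights, s=0.0, a=0, r=1.0, s2=1.0, centres=centres)
```

After the update, the estimated value of the rewarded action rises in the visited state while the other action's estimate is untouched, which is exactly the credit assignment the linear update performs.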

Books on the topic "Continuous Learning"

1

United States. Dept. of Labor. Office of the Assistant Secretary for Administration and Management, ed. Continuous learning catalog: Continuous learning: everybody's business. U.S. Dept. of Labor, Office of the Assistant Secretary for Administration and Management, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Canada. Dept. of National Defence. Directorate of Continuous Learning Strategies. Manager's guide to continuous learning. National Defence, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Canadian Centre for Management Development, ed. Continuous learning: A CCMD report: summary. Canadian Centre for Management Development, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rao, K. Sudha. Influence of continuous evaluation on learning. National Council of Educational Research and Training, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yasin, Mahmuddin. Organisasi, manajemen, leadership: Studi transformasi BUMN : pentingnya continuous learning dan continuous improvement. Exposé, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Saenz, Maria Jesus, Eduardo Ubaghs, and Alejandra Isabel Cuevas. Enabling Horizontal Collaboration Through Continuous Relational Learning. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-08093-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Oosthuizen, Izak. Self-directed learning research: An imperative for transforming the educational landscape. AOSIS, 2016.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Howard, Robert, ed. The Learning imperative: Managing people for continuous innovation. Harvard Business School Press, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

London, Manuel, ed. Continuous learning in organizations: Individual, group, and organizational perspectives. Lawrence Erlbaum, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wardrop, Alex. The Para-Academic Handbook: A Toolkit for Making-Learning-Creating-Acting. Intellect, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Continuous Learning"

1

Akers, L. T. "Continuous Learning." In ACS Symposium Series. American Chemical Society, 2010. http://dx.doi.org/10.1021/bk-2010-1055.ch007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hijfte, Stijn Van. "Continuous Learning." In Make Your Organization a Center of Innovation. Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6507-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

John, C. Frederic. "Continuous Learning." In Storytelling and Market Research. Routledge, 2021. http://dx.doi.org/10.4324/9781003202516-10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Carter, Eric, and Matthew Hurst. "Continuous Delivery." In Agile Machine Learning. Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5107-2_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gruber, Susan, and Mark J. van der Laan. "Bounded Continuous Outcomes." In Targeted Learning. Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-9782-1_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bishop, Christopher M., and Hugh Bishop. "Continuous Latent Variables." In Deep Learning. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-45468-4_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shultz, Thomas R., Scott E. Fahlman, Susan Craw, et al. "Continuous Attribute." In Encyclopedia of Machine Learning. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_172.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Moeller, John, Vivek Srikumar, Sarathkrishna Swaminathan, Suresh Venkatasubramanian, and Dustin Webb. "Continuous Kernel Learning." In Machine Learning and Knowledge Discovery in Databases. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46227-1_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bang, Henning, and Thomas Nesset Midelfart. "Continuous Team Learning." In Effective Management Teams and Organizational Behavior. Routledge, 2021. http://dx.doi.org/10.4324/9781003053552-18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ducoulombier, Antoine, and Michèle Sebag. "Continuous mimetic evolution." In Machine Learning: ECML-98. Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0026704.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Continuous Learning"

1

Vignau, Benjamin, Patrice Clémente, Pascal Berthomé, and Joseph Kawalec. "Continuous learning: a feasible solution for continuous authentication using PPG?" In 2024 IEEE International Joint Conference on Biometrics (IJCB). IEEE, 2024. http://dx.doi.org/10.1109/ijcb62174.2024.10744508.

2

Runkel, Christina, Ander Biguri, and Carola-Bibiane Schönlieb. "Continuous Learned Primal Dual." In 2024 IEEE 34th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2024. http://dx.doi.org/10.1109/mlsp58920.2024.10734760.

3

Spears, Tyler, and P. Thomas Fletcher. "Learning Spatially-Continuous Fiber Orientation Functions." In 2024 IEEE International Symposium on Biomedical Imaging (ISBI). IEEE, 2024. http://dx.doi.org/10.1109/isbi56570.2024.10635838.

4

Dhakan, Paresh, Kathryn Elizabeth Merrick, Inaki Rano, and Nazmul Haque Siddique. "Modular Continuous Learning Framework." In 2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob). IEEE, 2018. http://dx.doi.org/10.1109/devlrn.2018.8761008.

5

Cerman, Otto, and Petr Husek. "Self-learning continuous controllers." In 2010 11th International Conference on Control, Automation, Robotics and Vision (ICARCV 2010). IEEE, 2010. http://dx.doi.org/10.1109/icarcv.2010.5707210.

6

Vasilateanu, Andrei, and A. G. Turcus. "CHATBOT FOR CONTINUOUS MOBILE LEARNING." In 11th International Conference on Education and New Learning Technologies. IATED, 2019. http://dx.doi.org/10.21125/edulearn.2019.0525.

7

Baucum, Michael, Daniel Belotto, Sayre Jeannet, Eric Savage, Prannoy Mupparaju, and Carlos W. Morato. "Semi-supervised Deep Continuous Learning." In the 2017 International Conference. ACM Press, 2017. http://dx.doi.org/10.1145/3094243.3094247.

8

Pazis, Jason, and Michail G. Lagoudakis. "Learning continuous-action control policies." In 2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL). IEEE, 2009. http://dx.doi.org/10.1109/adprl.2009.4927541.

9

Krisler, Brian, and Richard Alterman. "Continuous learning through inline training." In 2016 IEEE Frontiers in Education Conference (FIE). IEEE, 2016. http://dx.doi.org/10.1109/fie.2016.7757547.

10

Gary, Kevin A., and Suhas Xavier. "Agile learning through continuous assessment." In 2015 IEEE Frontiers in Education Conference (FIE). IEEE, 2015. http://dx.doi.org/10.1109/fie.2015.7344278.


Reports on the topic "Continuous Learning"

1

Baird, Leemon C., III, and A. H. Klopf. Reinforcement Learning With High-Dimensional, Continuous Actions. Defense Technical Information Center, 1993. http://dx.doi.org/10.21236/ada280844.

2

Griffith, David, Susan Heller-Zeisler, Joy Herman, et al. Providing NIST Supervisors with a Continuous Learning Program. National Institute of Standards and Technology, 2011. http://dx.doi.org/10.6028/nist.ir.7776.

3

Chernozhukov, Victor, Greg Lewis, Vasilis Syrgkanis, and Mert Demirer. Semi-Parametric Efficient Policy Learning with Continuous Actions. The IFS, 2019. http://dx.doi.org/10.1920/wp.cem.2019.3419.

4

Meadors, Grant, Shira Goldhaber-Gordon, and Lexington Smith. Deep learning to help find continuous gravitational waves. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1830555.

5

Lee, Ying-Ying, and Kyle Colangelo. Double debiased machine learning nonparametric inference with continuous treatments. The IFS, 2019. http://dx.doi.org/10.1920/wp.cem.2019.5419.

6

Lee, Ying-Ying, and Kyle Colangelo. Double debiased machine learning nonparametric inference with continuous treatments. The IFS, 2019. http://dx.doi.org/10.1920/wp.cem.2019.7219.

7

Buckell, Chris, and Mairi Macintyre. Sustaining Continuous Improvement in Public Sector Services Through Double Loop Learning. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317335.

8

Ghavamzadeh, Mohammad, Sridhar Mahadevan, and Rajbala Makar. Extending Hierarchical Reinforcement Learning to Continuous-Time, Average-Reward, and Multi-Agent Models. Defense Technical Information Center, 2003. http://dx.doi.org/10.21236/ada445107.

9

Stone, Peter, and Manuela Veloso. Beating a Defender in Robotic Soccer: Memory-Based Learning of a Continuous Function,. Defense Technical Information Center, 1995. http://dx.doi.org/10.21236/ada303088.

10

Desk, Front. Report on Technology-Enabled Learning Competency Framework for Teachers in Zambia. Commonwealth of Learning (COL), 2023. http://dx.doi.org/10.56059/11599/5458.

Abstract:
The Report on Technology-Enabled Learning Competency Framework for Teachers in Zambia addresses the imperative of adapting to 21st-century education demands. Amidst the rise of technology-driven learning environments, this framework emerges as a response to evolving pedagogical landscapes. Acknowledging ICT's transformative potential in education, Zambia's Ministry of General Education seeks innovation through technology-enabled learning. Yet, teacher competencies in this realm remain uneven. The Teaching Council of Zambia intervenes to uplift teachers' continuous professional development through technology. Thus, this framework outlines vital knowledge, skills and attitudes, nurturing digital literacy and technological adeptness. Aligned with an international model designed by UNESCO and Zambia's context, the framework standardises competencies, offers guidance, fosters teacher professional growth and bridges digital disparities, ultimately enhancing education quality.
