Journal articles on the topic 'Action learning system'

Consult the top 50 journal articles for your research on the topic 'Action learning system.'

1

Eason, Ken. "Action learning across the decades." Leadership in Health Services 30, no. 2 (May 2, 2017): 118–28. http://dx.doi.org/10.1108/lhs-11-2016-0057.

Abstract:
Purpose: The purpose of this paper is to explore how action learning concepts were used in two healthcare projects undertaken many decades apart. The specific purpose in both cases was to examine how action learning can contribute to shared learning across key stakeholders in a complex socio-technical system. In each case study, action learning supported joint design programmes and the sharing of perspectives about the complex system under investigation.

Design/methodology/approach: Two action learning projects are described: first, the Hospital Internal Communications (HIC) project led by Reg Revans in the 1960s. Senior staff in ten London hospitals formed action learning teams to address communication issues. Second, in the Better Outcomes for People with Learning Disabilities: Transforming Care (BOLDTC) project, videoconferencing equipment enabled people with learning disabilities to increase their opportunities to communicate. A mutual learning process was established to enable stakeholders to explore the potential of the technical system to improve individual care.

Findings: The HIC project demonstrated the importance of evidence being shared between team members and that action had to engage the larger healthcare system outside the hospital. The BOLDTC project confirmed the continuing relevance of action learning to healthcare today. Mutual learning was achieved between health and social care specialists and technologists.

Originality/value: This work draws together the socio-technical systems tradition (considering both social and technical issues in organisations) and action learning to demonstrate that complex systems development needs to be undertaken as a learning process in which action provides the fuel for learning and design.
2

Bruggeman, H., J. J. Rieser, and H. L. Pick. "An action system analysis of visuomotor learning." Journal of Vision 4, no. 8 (August 1, 2004): 285. http://dx.doi.org/10.1167/4.8.285.

3

Tsai, Jen-Kai, Chen-Chien Hsu, Wei-Yen Wang, and Shao-Kang Huang. "Deep Learning-Based Real-Time Multiple-Person Action Recognition System." Sensors 20, no. 17 (August 23, 2020): 4758. http://dx.doi.org/10.3390/s20174758.

Abstract:
Action recognition has gained great attention in automatic video analysis, greatly reducing the cost of human resources for smart surveillance. Most methods, however, focus on the detection of only one action event for a single person in a well-segmented video, rather than the recognition of multiple actions performed by more than one person at the same time for an untrimmed video. In this paper, we propose a deep learning-based multiple-person action recognition system for use in various real-time smart surveillance applications. By capturing a video stream of the scene, the proposed system can detect and track multiple people appearing in the scene and subsequently recognize their actions. Thanks to high resolution of the video frames, we establish a zoom-in function to obtain more satisfactory action recognition results when people in the scene become too far from the camera. To further improve the accuracy, recognition results from inflated 3D ConvNet (I3D) with multiple sliding windows are processed by a nonmaximum suppression (NMS) approach to obtain a more robust decision. Experimental results show that the proposed method can perform multiple-person action recognition in real time suitable for applications such as long-term care environments.
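The final decision step this abstract describes, non-maximum suppression over scores from multiple sliding windows, can be sketched in a few lines. This is an illustrative 1-D temporal NMS under assumed names and a temporal-IoU overlap measure, not the authors' implementation:

```python
# Hedged sketch of NMS over overlapping sliding-window action detections.
# Each detection is a (start_frame, end_frame, score) tuple; overlapping
# lower-scoring rivals of a kept window are suppressed.

def temporal_iou(a, b):
    """Intersection-over-union of two (start, end) windows."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def temporal_nms(windows, iou_threshold=0.5):
    """Keep the highest-scoring window, drop overlapping rivals, repeat."""
    remaining = sorted(windows, key=lambda w: w[2], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        remaining = [w for w in remaining
                     if temporal_iou(best[:2], w[:2]) < iou_threshold]
    return kept

# Three overlapping detections of one action plus one distinct detection:
dets = [(0, 16, 0.9), (4, 20, 0.8), (2, 18, 0.7), (40, 56, 0.6)]
print(temporal_nms(dets))  # the 0.8 and 0.7 windows are suppressed
```

In a real pipeline the scores would come from the I3D classifier applied to each window; here they are hypothetical constants.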
4

Sun, Shih-Wei, Bao-Yun Liu, and Pao-Chi Chang. "Deep Learning-Based Violin Bowing Action Recognition." Sensors 20, no. 20 (October 9, 2020): 5732. http://dx.doi.org/10.3390/s20205732.

Abstract:
We propose a violin bowing action recognition system that can accurately recognize distinct bowing actions in classical violin performance. This system can recognize bowing actions by analyzing signals from a depth camera and from inertial sensors that are worn by a violinist. The contribution of this study is threefold: (1) a dataset comprising violin bowing actions was constructed from data captured by a depth camera and multiple inertial sensors; (2) data augmentation was achieved for depth-frame data through rotation in three-dimensional world coordinates and for inertial sensing data through yaw, pitch, and roll angle transformations; and, (3) bowing action classifiers were trained using different modalities, to compensate for the strengths and weaknesses of each modality, based on deep learning methods with a decision-level fusion process. In experiments, large external motions and subtle local motions produced from violin bow manipulations were both accurately recognized by the proposed system (average accuracy > 80%).
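The decision-level fusion this abstract mentions, combining per-modality classifier outputs, can be sketched as a weighted score average. The class names, scores, and the averaging rule below are assumptions for illustration; the paper's exact fusion rule may differ:

```python
# Hedged sketch of decision-level fusion: each modality's classifier
# contributes a {class: score} vote, and the fused decision is the class
# with the highest weighted sum of scores.

def fuse_decisions(score_dicts, weights=None):
    """score_dicts: one {class: score} mapping per modality classifier."""
    if weights is None:
        weights = [1.0] * len(score_dicts)
    classes = set().union(*score_dicts)
    fused = {c: sum(w * s.get(c, 0.0) for w, s in zip(weights, score_dicts))
             for c in classes}
    return max(fused, key=fused.get), fused

# Hypothetical bowing-action scores from a depth-camera classifier and
# an inertial-sensor classifier:
depth_scores = {"detache": 0.5, "legato": 0.3, "spiccato": 0.2}
inertial_scores = {"detache": 0.2, "legato": 0.6, "spiccato": 0.2}
label, fused = fuse_decisions([depth_scores, inertial_scores])
print(label)  # "legato": the inertial evidence outweighs the depth cue
```

Weighting the modalities differently (e.g. trusting inertial sensors more for subtle local motions) is one way such a scheme compensates for each modality's weaknesses.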
5

AGOGINO, ADRIAN, and KAGAN TUMER. "LEARNING INDIRECT ACTIONS IN COMPLEX DOMAINS: ACTION SUGGESTIONS FOR AIR TRAFFIC CONTROL." Advances in Complex Systems 12, no. 04n05 (August 2009): 493–512. http://dx.doi.org/10.1142/s0219525909002283.

Abstract:
Providing intelligent algorithms to manage the ever-increasing flow of air traffic is critical to the efficiency and economic viability of air transportation systems. Yet, current automated solutions leave existing human controllers "out of the loop" rendering the potential solutions both technically dangerous (e.g. inability to react to suddenly developing conditions) and politically charged (e.g. role of air traffic controllers in a fully automated system). Instead, this paper outlines a distributed agent-based solution where agents provide suggestions to human controllers. Though conceptually pleasing, this approach introduces two critical research issues. First, the agent actions are now filtered through interactions with other agents, human controllers and the environment before leading to a system state. This indirect action-to-effect process creates a complex learning problem. Second, even in the best case, not all air traffic controllers will be willing or able to follow the agents' suggestions. This partial participation effect will require the system to be robust to the number of controllers that follow the agent suggestions. In this paper, we present an agent reward structure that allows agents to learn good actions in this indirect environment, and explore the ability of those suggestion agents to achieve good system level performance. We present a series of experiments based on real historical air traffic data combined with simulation of air traffic flow around the New York city area. Results show that the agents can improve system-wide performance by up to 20% over that of human controllers alone, and that these results degrade gracefully when the number of human controllers that follow the agents' suggestions declines.
6

SUBRAMANIAN, K., and S. SURESH. "HUMAN ACTION RECOGNITION USING META-COGNITIVE NEURO-FUZZY INFERENCE SYSTEM." International Journal of Neural Systems 22, no. 06 (November 27, 2012): 1250028. http://dx.doi.org/10.1142/s0129065712500281.

Abstract:
We propose a sequential meta-cognitive learning algorithm for a Neuro-Fuzzy Inference System (McFIS) to efficiently recognize human actions from video sequences. Optical flow information between two consecutive image planes can represent actions hierarchically from the local pixel level to the global object level, and is therefore used to describe human actions in the McFIS classifier. The McFIS classifier and its sequential learning algorithm are developed based on the principles of self-regulation observed in human meta-cognition. McFIS decides what to learn, when to learn, and how to learn based on the knowledge stored in the classifier and the information contained in the new training samples. The sequential learning algorithm of McFIS is controlled and monitored by meta-cognitive components, which use class-specific, knowledge-based criteria along with self-regulatory thresholds to decide on one of the following strategies: (i) sample deletion, (ii) sample learning, and (iii) sample reserve. The performance of the proposed McFIS-based human action recognition system is evaluated using the benchmark Weizmann and KTH video sequences. The simulation results are compared with the well-known SVM classifier and with state-of-the-art action recognition results reported in the literature. The results clearly indicate that the McFIS action recognition system achieves better performance with minimal computational effort.
7

TUMER, KAGAN, and NEWSHA KHANI. "LEARNING FROM ACTIONS NOT TAKEN IN MULTIAGENT SYSTEMS." Advances in Complex Systems 12, no. 04n05 (August 2009): 455–73. http://dx.doi.org/10.1142/s0219525909002301.

Abstract:
In large cooperative multiagent systems, coordinating the actions of the agents is critical to the overall system achieving its intended goal. Even when the agents aim to cooperate, ensuring that the agent actions lead to good system level behavior becomes increasingly difficult as systems become larger. One of the fundamental difficulties in such multiagent systems is the slow learning process where an agent not only needs to learn how to behave in a complex environment, but also needs to account for the actions of other learning agents. In this paper, we present a multiagent learning approach that significantly improves the learning speed in multiagent systems by allowing an agent to update its estimate of the rewards (e.g. value function in reinforcement learning) for all its available actions, not just the action that was taken. This approach is based on an agent estimating the counterfactual reward it would have received had it taken a particular action. Our results show that the rewards on such "actions not taken" are beneficial early in training, particularly when only particular "key" actions are used. We then present results where agent teams are leveraged to estimate those rewards. Finally, we show that the improved learning speed is critical in dynamic environments where fast learning is critical to tracking the underlying processes.
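The core idea of this abstract, updating value estimates for every available action using estimated counterfactual rewards rather than only the action taken, can be shown with a toy stationary-reward problem. The reward values, learning rate, and learner structure below are hypothetical illustrations, not the authors' experimental setup:

```python
# Hedged sketch: a learner that updates ALL actions per step using
# estimated counterfactual rewards converges far faster than one that
# updates only the action actually taken, given the same interaction budget.

import random

def counterfactual_learner(reward_fn, n_actions, steps, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0] * n_actions
    for _ in range(steps):
        taken = rng.randrange(n_actions)          # same interaction budget
        for a in range(n_actions):                # update every action with
            q[a] += alpha * (reward_fn(a) - q[a]) # its counterfactual reward
    return q

def standard_learner(reward_fn, n_actions, steps, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0] * n_actions
    for _ in range(steps):
        taken = rng.randrange(n_actions)          # update only the action taken
        q[taken] += alpha * (reward_fn(taken) - q[taken])
    return q

reward = lambda a: [0.2, 1.0, 0.5][a]   # hypothetical true action rewards
q_cf = counterfactual_learner(reward, 3, steps=20)
q_std = standard_learner(reward, 3, steps=20)
# After the same 20 interactions, q_cf is much closer to the true rewards.
```

In the paper's multiagent setting the counterfactual reward must itself be estimated (e.g. via agent teams); here it is read directly from the known reward function to keep the sketch minimal.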
8

Shvarts, Anna, Rosa Alberto, Arthur Bakker, Michiel Doorman, and Paul Drijvers. "Embodied instrumentation in learning mathematics as the genesis of a body-artifact functional system." Educational Studies in Mathematics 107, no. 3 (June 3, 2021): 447–69. http://dx.doi.org/10.1007/s10649-021-10053-0.

Abstract:
Recent developments in cognitive and educational science highlight the role of the body in learning. Novel digital technologies increasingly facilitate bodily interaction. Aiming for understanding of the body’s role in learning mathematics with technology, we reconsider the instrumental approach from a radical embodied cognitive science perspective. We highlight the complexity of any action regulation, which is performed by a complex dynamic functional system of the body and brain in perception-action loops driven by multilevel intentionality. Unlike mental schemes, functional systems are decentralized and can be extended by artifacts. We introduce the notion of a body-artifact functional system, pointing to the fact that artifacts are included in the perception-action loops of instrumented actions. The theoretical statements of this radical embodied reconsideration of the instrumental approach are illustrated by an empirical example, in which embodied activities led a student to the development of instrumented actions with a unit circle as an instrument to construct a sine graph. Supplementing videography of the student’s embodied actions and gestures with eye-tracking data, we show how new functional systems can be formed. Educational means to facilitate the development of body-artifact functional systems are discussed.
9

Jeon, Sang Gil, Yeongmahn You, Hyun Kyung Jo, and Yoon Jeong Baek. "Developing an Evaluation Framework for Action Learning Using Viable System Model: In Search of Viable Action Learning." International Journal of Learning: Annual Review 12, no. 9 (2006): 195–206. http://dx.doi.org/10.18848/1447-9494/cgp/v12i09/48086.

10

Takahashi, Yasutake, and Minoru Asada. "State-Action Space Construction for Multi-Layered Learning System." Journal of the Robotics Society of Japan 21, no. 2 (2003): 164–71. http://dx.doi.org/10.7210/jrsj.21.164.

11

Greene, Sarah M., Robert J. Reid, and Eric B. Larson. "Implementing the Learning Health System: From Concept to Action." Annals of Internal Medicine 157, no. 3 (August 7, 2012): 207. http://dx.doi.org/10.7326/0003-4819-157-3-201208070-00012.

12

Halonen, Raija. "Action learning with an information system project: subjective reflections." Reflective Practice 9, no. 1 (February 2008): 89–99. http://dx.doi.org/10.1080/14623940701816691.

13

Graham, Wayne. "Developing organizations: a system of enquiry, action and learning." Development and Learning in Organizations: An International Journal 31, no. 1 (January 3, 2017): 12–14. http://dx.doi.org/10.1108/dlo-03-2016-0023.

Abstract:
Purpose: This paper aims to demonstrate the applicability of an action learning model to improve organizational outcomes.

Design/methodology/approach: This paper extends previous work by applying the system of enquiry, action and learning (SEAL) model using an action research methodology to a small business operating in the health services industry.

Findings: The SEAL model is a useful approach to introduce small business practitioners to the principles of organizational development (OD).

Research limitations/implications: The application is limited to one small business, and subsequent studies could apply the model to more organizations that operate in industries other than health services.

Practical implications: Business owners from this study and previous studies have found the model to be useful in the improvement of organizational outcomes.

Originality/value: The SEAL model is a simplified model that introduces principles of OD and has provided value to the business owners of this study.
14

Tasnim, Nusrat, Md Mahbubul Islam, and Joong-Hwan Baek. "Deep Learning-Based Action Recognition Using 3D Skeleton Joints Information." Inventions 5, no. 3 (September 17, 2020): 49. http://dx.doi.org/10.3390/inventions5030049.

Abstract:
Human action recognition has become one of the most attractive and demanding fields of research in computer vision and pattern recognition, as it facilitates easy, smart, and comfortable ways of human-machine interaction. With massive improvements in research in recent years, several methods have been suggested for discriminating different types of human actions using color, depth, inertial, and skeleton information. Despite the existence of action identification methods using different modalities, classifying human actions using skeleton joint information in 3-dimensional space is still a challenging problem. In this paper, we conceive an efficacious method for action recognition using 3D skeleton data. First, large-scale 3D skeleton joint information was analyzed and some meaningful pre-processing was performed. Then, a simple, straightforward deep convolutional neural network (DCNN) was designed for classifying the desired actions in order to evaluate the effectiveness of the proposed system. We also trained prior DCNN models such as ResNet18 and MobileNetV2, which outperform existing systems using human skeleton joint information.
15

WU, KANGHENG, QIANG YANG, and YUNFEI JIANG. "ARMS: an automatic knowledge engineering tool for learning action models for AI planning." Knowledge Engineering Review 22, no. 2 (June 2007): 135–52. http://dx.doi.org/10.1017/s0269888907001087.

Abstract:
We present an action model learning system known as ARMS (Action-Relation Modelling System) for automatically discovering action models from a set of successfully observed plans. Current artificial intelligence (AI) planners show impressive performance in many real-world and artificial domains, but they all require the definition of an action model. ARMS is aimed at automatically learning action models from observed example plans, where each example plan is a sequence of action traces. These action models can then be refined by human editors. The expectation is that this system will lessen the burden on human editors of designing action models from scratch. In this paper, we describe ARMS in detail. To learn action models, ARMS gathers knowledge on the statistical distribution of frequent sets of actions in the example plans. It then builds a weighted propositional satisfiability (weighted SAT) problem and solves it using a weighted MAX-SAT solver. Furthermore, we show empirical evidence that ARMS can indeed learn a good approximation of the final action models effectively.
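The weighted MAX-SAT step that ARMS delegates to a solver can be illustrated with a tiny brute-force version. This is purely a sketch: ARMS derives its clauses from plan traces and uses a real weighted MAX-SAT solver, and every variable and clause below is hypothetical:

```python
# Hedged sketch of weighted MAX-SAT: find the truth assignment that
# maximizes the total weight of satisfied clauses (brute force, so only
# viable for a handful of variables).

from itertools import product

def weighted_maxsat(n_vars, clauses):
    """clauses: list of (weight, literals); a literal is (var, polarity).
    Returns (best_assignment, best_score)."""
    best_score, best = -1, None
    for assign in product([False, True], repeat=n_vars):
        score = sum(w for w, lits in clauses
                    if any(assign[v] == pol for v, pol in lits))
        if score > best_score:
            best_score, best = score, assign
    return best, best_score

# Hypothetical constraints over two propositions about an action model:
# x0 is strongly suggested; x0 and x1 should not both hold; x1 is weakly
# suggested.
clauses = [(5, [(0, True)]),
           (3, [(0, False), (1, False)]),
           (1, [(1, True)])]
print(weighted_maxsat(2, clauses))  # ((True, False), 8)
```

In ARMS the satisfying assignment is then decoded back into preconditions and effects of the learned action schemas.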
16

Powell, Daryl John, and Paul Coughlan. "Rethinking lean supplier development as a learning system." International Journal of Operations & Production Management 40, no. 7/8 (May 19, 2020): 921–43. http://dx.doi.org/10.1108/ijopm-06-2019-0486.

Abstract:
Purpose: This paper investigates developing a learning-to-learn capability as a critical success factor for sustainable lean transformation.

Design/methodology/approach: This research design is guided by our research question: how can suppliers learn to learn as part of a buyer-led collaborative lean transformation? The authors adopt action learning research to generate actionable knowledge from a lean supplier development initiative over a three-year period.

Findings: Drawing on emergent insights from the initiative, the authors find that developing a learning-to-learn capability is a core and critical success factor for lean transformation. The authors also find that network action learning has a significant enabling role in buyer-led collaborative lean transformations.

Originality/value: The authors contribute to lean theory and practice by making the distinction between learning about and implementing lean best practices and adopting a learning-to-learn perspective to build organisational capabilities, consistent with lean thinking and practice. Further, the authors contribute to methodology, adopting action learning research to explore learning-to-learn as a critical success factor for sustainable lean transformation.
17

Herwig, Arvid, Wolfgang Prinz, and Florian Waszak. "Two Modes of Sensorimotor Integration in Intention-Based and Stimulus-Based Actions." Quarterly Journal of Experimental Psychology 60, no. 11 (October 2007): 1540–54. http://dx.doi.org/10.1080/17470210601119134.

Abstract:
Human actions may be driven endogenously (to produce desired environmental effects) or exogenously (to accommodate to environmental demands). There is a large body of evidence indicating that these two kinds of action are controlled by different neural substrates. However, only little is known about what happens—in functional terms—on these different “routes to action”. Ideomotor approaches claim that actions are selected with respect to their perceptual consequences. We report experiments that support the validity of the ideomotor principle and that, at the same time, show that it is subject to a far-reaching constraint: It holds for endogenously driven actions only! Our results suggest that the activity of the two “routes to action” is based on different types of learning: The activity of the system guiding stimulus-based actions is accompanied by stimulus–response (sensorimotor) learning, whereas the activity of the system controlling intention-based actions results in action–effect (ideomotor) learning.
18

Chen, Bo, Chunsheng Hua, Decai Li, Yuqing He, and Jianda Han. "Intelligent Human–UAV Interaction System with Joint Cross-Validation over Action–Gesture Recognition and Scene Understanding." Applied Sciences 9, no. 16 (August 9, 2019): 3277. http://dx.doi.org/10.3390/app9163277.

Abstract:
We propose an intelligent human–unmanned aerial vehicle (UAV) interaction system, in which, instead of using the conventional remote controller, the UAV flight actions are controlled by a deep learning-based action–gesture joint detection system. The Resnet-based scene-understanding algorithm is introduced into the proposed system to enable the UAV to adjust its flight strategy automatically, according to the flying conditions. Meanwhile, both the deep learning-based action detection and multi-feature cascade gesture recognition methods are employed by a cross-validation process to create the corresponding flight action. The effectiveness and efficiency of the proposed system are confirmed by its application to controlling the flight action of a real flying UAV for more than 3 h.
19

West, Greg L., Kyoko Konishi, and Veronique D. Bohbot. "Video Games and Hippocampus-Dependent Learning." Current Directions in Psychological Science 26, no. 2 (April 2017): 152–58. http://dx.doi.org/10.1177/0963721416687342.

Abstract:
Research examining the impact of video games on neural systems has largely focused on visual attention and motor control. Recent evidence now shows that video games can also impact the hippocampal memory system. Further, action and 3D-platform video-game genres are thought to have differential impacts on this system. In this review, we examine the specific design elements unique to either action or 3D-platform video games and break down how they could either favor or discourage use of the hippocampal memory system during gameplay. Analysis is based on well-established principles of hippocampus-dependent and non-hippocampus-dependent forms of learning from the human and rodent literature.
20

Hartanto, Herlina, Maria Cristina B. Lorenzo, and Anita L. Frio. "Collective action and learning in developing a local monitoring system." International Forestry Review 4, no. 3 (September 1, 2002): 184–95. http://dx.doi.org/10.1505/ifor.4.3.184.17404.

21

Rooney-Varga, Juliette N., Florian Kapmeier, John D. Sterman, Andrew P. Jones, Michele Putko, and Kenneth Rath. "The Climate Action Simulation." Simulation & Gaming 51, no. 2 (December 22, 2019): 114–40. http://dx.doi.org/10.1177/1046878119890643.

Abstract:
Background. We describe and provide an initial evaluation of the Climate Action Simulation, a simulation-based role-playing game that enables participants to learn for themselves about the response of the climate-energy system to potential policies and actions. Participants gain an understanding of the scale and urgency of climate action, the impact of different policies and actions, and the dynamics and interactions of different policy choices. Intervention. The Climate Action Simulation combines an interactive computer model, En-ROADS, with a role-play in which participants make decisions about energy and climate policy. They learn about the dynamics of the climate and energy systems as they discover how En-ROADS responds to their own climate-energy decisions. Methods. We evaluated learning outcomes from the Climate Action Simulation using pre- and post-simulation surveys as well as a focus group. Results. Analysis of survey results showed that the Climate Action Simulation increases participants’ knowledge about the scale of emissions reductions and policies and actions needed to address climate change. Their personal and emotional engagement with climate change also grew. Focus group participants were overwhelmingly positive about the Climate Action Simulation, saying it left them feeling empowered to make a positive difference in addressing the climate challenge. Discussion and Conclusions. Initial evaluation results indicate that the Climate Action Simulation offers an engaging experience that delivers gains in knowledge about the climate and energy systems, while also opening affective and social learning pathways.
22

Buchan, I., and J. Ainsworth. "Combining Health Data Uses to Ignite Health System Learning." Methods of Information in Medicine 54, no. 06 (2015): 479–87. http://dx.doi.org/10.3414/me15-01-0064.

Abstract:
Objectives: In this paper we aim to characterise the critical mass of linked data, methods and expertise required for health systems to adapt to the needs of the populations they serve – more recently known as learning health systems. The objectives are to: 1) identify opportunities to combine separate uses of common data sources in order to reduce duplication of data processing and improve information quality; 2) identify challenges in scaling-up the reuse of health data sufficiently to support health system learning.

Methods: The challenges and opportunities were identified through a series of e-health stakeholder consultations and workshops in Northern England from 2011 to 2014. From 2013 the concepts presented here have been refined through feedback to collaborators, including patient/citizen representatives, in a regional health informatics research network (www.herc.ac.uk).

Results: Health systems typically have separate information pipelines for: 1) commissioning services; 2) auditing service performance; 3) managing finances; 4) monitoring public health; and 5) research. These pipelines share common data sources but usually duplicate data extraction, aggregation, cleaning/preparation and analytics. Suboptimal analyses may be performed due to a lack of expertise, which may exist elsewhere in the health system but is fully committed to a different pipeline. Contextual knowledge that is essential for proper data analysis and interpretation may be needed in one pipeline but accessible only in another. The lack of capable health and care intelligence systems for populations can be attributed to a legacy of three flawed assumptions: 1) universality: the generalizability of evidence across populations; 2) time-invariance: the stability of evidence over time; and 3) reducibility: the reduction of evidence into specialised subsystems that may be recombined.

Conclusions: We conceptualize a population health and care intelligence system capable of supporting health system learning and we put forward a set of maturity tests of progress toward such a system. A factor common to each test is data-action latency; a mature system spawns timely actions proportionate to the information that can be derived from the data, and in doing so creates meaningful measurement about system learning. We illustrate, using future scenarios, some major opportunities to improve health systems by exchanging conventional intelligence pipelines for networked critical masses of data, methods and expertise that minimise data-action latency and ignite system-learning.
23

Jo, YoungJae, and SuHong Park. "Development of Action Learning Support System Model based on Flipped Learning in University Class." Korean Association For Learner-Centered Curriculum And Instruction 19, no. 19 (October 15, 2019): 25–54. http://dx.doi.org/10.22251/jlcci.2019.19.19.25.

24

Studley, Matthew, and Larry Bull. "Using the XCS Classifier System for Multi-objective Reinforcement Learning Problems." Artificial Life 13, no. 1 (January 2007): 69–86. http://dx.doi.org/10.1162/artl.2007.13.1.69.

Abstract:
We investigate the performance of a learning classifier system in some simple multi-objective, multi-step maze problems, using both random and biased action-selection policies for exploration. Results show that the choice of action-selection policy can significantly affect the performance of the system in such environments. Further, this effect is directly related to population size, and we relate this finding to recent theoretical studies of learning classifier systems in single-step problems.
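The contrast this abstract draws between random and biased action selection during exploration can be sketched as two policies over a classifier system's prediction array. The names and the epsilon-greedy form of the biased policy are illustrative assumptions, not the paper's exact XCS configuration:

```python
# Hedged sketch of two exploration policies for a learning classifier
# system: purely random selection versus biased (epsilon-greedy)
# selection over the system's prediction array.

import random

def select_action(prediction_array, policy, epsilon=0.3, rng=random):
    """prediction_array maps each action to the system's payoff prediction."""
    actions = list(prediction_array)
    if policy == "random":                  # pure exploration
        return rng.choice(actions)
    if policy == "biased":                  # mostly greedy, sometimes random
        if rng.random() < epsilon:
            return rng.choice(actions)
        return max(actions, key=prediction_array.get)
    raise ValueError(policy)

# Hypothetical maze actions and predictions:
preds = {"left": 12.0, "right": 45.5, "forward": 31.0}
print(select_action(preds, "biased", epsilon=0.0))  # "right"
```

As the abstract notes, which policy works better interacts with population size, so in practice the choice would be tuned per environment.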
25

Leem, JoonBum, and Ha Young Kim. "Action-specialized expert ensemble trading system with extended discrete action space using deep reinforcement learning." PLOS ONE 15, no. 7 (July 27, 2020): e0236178. http://dx.doi.org/10.1371/journal.pone.0236178.

26

Garcia, J., and F. Fernandez. "Safe Exploration of State and Action Spaces in Reinforcement Learning." Journal of Artificial Intelligence Research 45 (December 19, 2012): 515–64. http://dx.doi.org/10.1613/jair.3761.

Abstract:
In this paper, we consider the important problem of safe exploration in reinforcement learning. While reinforcement learning is well-suited to domains with complex transition dynamics and high-dimensional state-action spaces, an additional challenge is posed by the need for safe and efficient exploration. Traditional exploration techniques are not particularly useful for solving dangerous tasks, where the trial and error process may lead to the selection of actions whose execution in some states may result in damage to the learning system (or any other system). Consequently, when an agent begins an interaction with a dangerous and high-dimensional state-action space, an important question arises; namely, that of how to avoid (or at least minimize) damage caused by the exploration of the state-action space. We introduce the PI-SRL algorithm which safely improves suboptimal albeit robust behaviors for continuous state and action control tasks and which efficiently learns from the experience gained from the environment. We evaluate the proposed method in four complex tasks: automatic car parking, pole-balancing, helicopter hovering, and business management.
27

Xue, Yiran, Rui Wu, Jiafeng Liu, and Xianglong Tang. "Crowd Evacuation Guidance Based on Combined Action Reinforcement Learning." Algorithms 14, no. 1 (January 18, 2021): 26. http://dx.doi.org/10.3390/a14010026.

Abstract:
Existing crowd evacuation guidance systems require the manual design of models and input parameters, incurring a significant workload and a potential for errors. This paper proposed an end-to-end intelligent evacuation guidance method based on deep reinforcement learning, and designed an interactive simulation environment based on the social force model. The agent could automatically learn a scene model and path planning strategy with only scene images as input, and directly output dynamic signage information. Aiming to solve the “dimension disaster” phenomenon of the deep Q network (DQN) algorithm in crowd evacuation, this paper proposed a combined action-space DQN (CA-DQN) algorithm that grouped Q network output layer nodes according to action dimensions, which significantly reduced the network complexity and improved system practicality in complex scenes. In this paper, the evacuation guidance system is defined as a reinforcement learning agent and implemented by the CA-DQN method, which provides a novel approach for the evacuation guidance problem. The experiments demonstrate that the proposed method is superior to the static guidance method, and on par with the manually designed model method.
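The output-grouping idea behind CA-DQN, one group of Q-nodes per action dimension instead of one node per joint action, can be shown with a small counting sketch. The scene (8 signage boards, 4 directions each) and all function names are hypothetical illustrations of the abstract's claim, not the paper's network:

```python
# Hedged sketch: a flat action space needs one Q-output per joint action
# (multiplicative growth), while grouping outputs per action dimension
# needs only one node per per-dimension choice (additive growth).

from math import prod

def flat_outputs(dims):
    """Output nodes when every joint action gets its own Q-node."""
    return prod(dims)

def grouped_outputs(dims):
    """Output nodes when nodes are grouped per action dimension."""
    return sum(dims)

def decode_joint_action(grouped_q, dims):
    """Pick each dimension's action independently from its own group."""
    action, offset = [], 0
    for d in dims:
        group = grouped_q[offset:offset + d]
        action.append(group.index(max(group)))
        offset += d
    return action

dims = [4] * 8                    # e.g. 8 signs, 4 directions each
print(flat_outputs(dims))         # 65536 joint-action nodes
print(grouped_outputs(dims))      # 32 grouped nodes
```

This is the sense in which grouping "significantly reduced the network complexity": the output layer shrinks from the product to the sum of the per-dimension action counts.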
APA, Harvard, Vancouver, ISO, and other styles
28

Bonny, Binoo P., R. M. Prasad, Sindhu S. Narayan, and Mercy Varughese. "Participatory Learning, Experimentation, Action and Dissemination (PLEAD)." Outlook on Agriculture 34, no. 2 (June 2005): 111–15. http://dx.doi.org/10.5367/0000000054224346.

Full text
Abstract:
The authors evolve a model for technology evolution and adaptation in agriculture through a participatory approach. The model follows the premise that the integration of local knowledge, the experience of farmers, and quality assessment of evolved strategies help in developing technologies that promote the long-term sustainability of the system. The premise is tested through field interventions under way in 18 farmer research groups (FRGs) formed for the purpose in the two agroclimatic zones of Kerala where rice forms the major crop. The experimentation is carried out in the fields of selected promoter farmers from the FRGs, taking into account the existing agro-ecological peculiarities and land-use pattern. Appropriate technologies for the system are selected by the farmers from a basket of scientifically proven options and are integrated to enhance the quality of farmer-tried strategies, without researchers conducting any new experiments. The process has resulted in the participatory learning, experimentation, action and dissemination (PLEAD) model, which allows interactive participation of farmers, thereby enabling them to become decision makers through the process of action–reflection–action (PRAXIS) in successful field trials conducted by them. The key elements of the model include agro-ecosystem scanning, farmer-led experimentation, and farmer-to-farmer extension. The processes provide lateral and co-learning experiences that benefit all the participants.
APA, Harvard, Vancouver, ISO, and other styles
29

Yuan, Yuan, Dong Wang, and Qi Wang. "Memory-Augmented Temporal Dynamic Learning for Action Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9167–75. http://dx.doi.org/10.1609/aaai.v33i01.33019167.

Full text
Abstract:
Human actions captured in video sequences contain two crucial factors for action recognition, i.e., visual appearance and motion dynamics. To model these two aspects, Convolutional and Recurrent Neural Networks (CNNs and RNNs) are adopted in most existing successful methods for recognizing actions. However, CNN-based methods are limited in modeling long-term motion dynamics. RNNs are able to learn temporal motion dynamics but lack effective ways to tackle unsteady dynamics in long-duration motion. In this work, we propose a memory-augmented temporal dynamic learning network, which learns to write the most evident information into an external memory module and to ignore irrelevant information. In particular, we present a differential memory controller that makes a discrete decision on whether the external memory module should be updated with the current feature. The discrete memory controller takes the memory history, context embedding, and current feature as inputs and controls the information flow into the external memory module. Additionally, we train this discrete memory controller using a straight-through estimator. We evaluate this end-to-end system on benchmark datasets (UCF101 and HMDB51) of human action recognition. The experimental results show consistent improvements on both datasets over prior works and our baselines.
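The straight-through estimator mentioned in the abstract makes a hard write/skip decision in the forward pass while letting gradients flow as if the gate were the identity, which is what allows a discrete memory controller to be trained end-to-end. A minimal, framework-free sketch of the trick (function and variable names are ours):

```python
def straight_through_binarize(p):
    """Straight-through gate on a probability p in [0, 1].

    Forward pass: hard 0/1 decision (write to memory or skip).
    Backward pass: pretend the gate was the identity, so upstream
    gradients pass straight through the non-differentiable threshold.
    Returns (decision, grad_fn).
    """
    decision = 1.0 if p >= 0.5 else 0.0
    grad_fn = lambda upstream: upstream   # identity on the backward pass
    return decision, grad_fn
```

In an autograd framework the same effect is usually obtained by adding the hard decision minus the soft probability as a constant, e.g. `y = p + stop_gradient(hard - p)`.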
APA, Harvard, Vancouver, ISO, and other styles
30

Jiang, Meiying, and Qibing Jin. "Multivariable System Identification Method Based on Continuous Action Reinforcement Learning Automata." Processes 7, no. 8 (August 17, 2019): 546. http://dx.doi.org/10.3390/pr7080546.

Full text
Abstract:
In this work, a closed-loop identification method based on a reinforcement learning algorithm is proposed for multiple-input multiple-output (MIMO) systems. This method offers an attractive alternative to current frequency-domain identification algorithms, which are usually dependent on the attenuation factor. With this method, after continuously interacting with the environment, the optimal attenuation factor can be identified by continuous action reinforcement learning automata (CARLA), and the corresponding parameters can then be estimated. Moreover, the proposed method can be applied to time-varying systems online due to its online learning ability. The simulation results suggest that the presented approach meets the requirement of identification accuracy in both square and non-square systems.
APA, Harvard, Vancouver, ISO, and other styles
31

Torabi, Faraz. "Imitation Learning from Observation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9900–9901. http://dx.doi.org/10.1609/aaai.v33i01.33019900.

Full text
Abstract:
Humans and other animals have a natural ability to learn skills from observation, often simply from seeing the effects of these skills: without direct knowledge of the underlying actions being taken. For example, after observing an actor doing a jumping jack, a child can copy it despite not knowing anything about what's going on inside the actor's brain and nervous system. The main focus of this thesis is extending this ability to artificial autonomous agents, an endeavor recently referred to as "imitation learning from observation." Imitation learning from observation is especially relevant today due to the accessibility of many online videos that can be used as demonstrations for robots. Meanwhile, advances in deep learning have enabled us to solve increasingly complex control tasks mapping visual input to motor commands. This thesis contributes algorithms that learn control policies from state-only demonstration trajectories. Two types of algorithms are considered. The first type begins by recovering the missing action information from demonstrations and then leverages existing imitation learning algorithms on the full state-action trajectories. Our preliminary work has shown that learning an inverse dynamics model of the agent in a self-supervised fashion and then inferring the actions performed by the demonstrator enables sufficient action recovery for this purpose. The second type of algorithm uses model-free end-to-end learning. Our preliminary results indicate that iteratively optimizing a policy based on the closeness of the imitator's and expert's state transitions leads to a policy that closely mimics the demonstrator's trajectories.
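The first family of algorithms described above, recovering the missing actions with a self-supervised inverse dynamics model, can be outlined in a few lines. This is an illustrative sketch of the pattern (as in behavioral cloning from observation), not the thesis's code; `inverse_model` stands for any learned mapping from a state transition (s, s') to the action that caused it.

```python
def infer_actions(demo_states, inverse_model):
    """Recover missing actions from a state-only demonstration trajectory.

    demo_states: sequence of states [s0, s1, ..., sT] from the demonstrator.
    inverse_model: learned (s, s_next) -> action mapping, trained in a
    self-supervised way on the agent's own experience.
    Returns (state, inferred_action) pairs that a standard imitation
    learning algorithm can then consume.
    """
    return [(s, inverse_model(s, s_next))
            for s, s_next in zip(demo_states, demo_states[1:])]
```

For instance, with a toy environment where the action is simply the state increment, the inverse model is exact and the full state-action trajectory is recovered.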
APA, Harvard, Vancouver, ISO, and other styles
32

Keusch, Gerald T., Wen L. Kilama, Suerie Moon, Nicole A. Szlezák, and Catherine M. Michaud. "The Global Health System: Linking Knowledge with Action—Learning from Malaria." PLoS Medicine 7, no. 1 (January 19, 2010): e1000179. http://dx.doi.org/10.1371/journal.pmed.1000179.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Xu, Lei. "Bayesian Ying-Yang system, best harmony learning, and five action circling." Frontiers of Electrical and Electronic Engineering in China 5, no. 3 (September 2010): 281–328. http://dx.doi.org/10.1007/s11460-010-0108-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Xin, Bo, Haixu Yu, You Qin, Qing Tang, and Zhangqing Zhu. "Exploration Entropy for Reinforcement Learning." Mathematical Problems in Engineering 2020 (January 9, 2020): 1–12. http://dx.doi.org/10.1155/2020/2672537.

Full text
Abstract:
The analysis and termination conditions of the training process of a Reinforcement Learning (RL) system have always been key issues in training an RL agent. In this paper, a new approach based on State Entropy and Exploration Entropy is proposed to analyse the training process. State Entropy denotes the uncertainty of the agent's action selection at each state the agent traverses, while Exploration Entropy denotes the action-selection uncertainty of the whole system. The action-selection uncertainty of a certain state, or of the whole system, reflects the degree of exploration and the stage of the learning process for an agent. Exploration Entropy is thus a new criterion for analysing and managing the training process of RL. The theoretical analysis and experimental results illustrate that the curve of Exploration Entropy contains more information than existing analytical methods.
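Although the abstract gives no formulas, a natural reading of State Entropy is the Shannon entropy of the action-selection distribution at a state, with Exploration Entropy aggregating it over the whole system. The sketch below is one plausible formalization under that assumption (the mean over states is our choice, not necessarily the paper's):

```python
import math


def state_entropy(action_probs):
    # Shannon entropy of the action-selection distribution at one state:
    # 0 for a fully greedy policy, log(n) for a uniform one over n actions.
    return -sum(p * math.log(p) for p in action_probs if p > 0)


def exploration_entropy(policy):
    # policy: {state: [action probabilities]}.  Aggregate the uncertainty of
    # the whole system as the mean state entropy (one possible formalization).
    return sum(state_entropy(ps) for ps in policy.values()) / len(policy)
```

As training converges and action selection becomes greedy, both quantities fall toward zero, which is what makes the entropy curve usable as a termination criterion.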
APA, Harvard, Vancouver, ISO, and other styles
35

MEGA, SATORU, YOUNES FADIL, ARATA HORIE, and KUNIAKI UEHARA. "ASSIST IN COOKING: ACTION SUPPORT SYSTEM FOR INTERACTIVE SELF-TRAINING." International Journal of Semantic Computing 02, no. 02 (June 2008): 207–33. http://dx.doi.org/10.1142/s1793351x08000415.

Full text
Abstract:
Human-computer interaction systems have been developed in recent years. These systems use multimedia techniques to create Mixed-Reality environments where users can train themselves. Although most of these systems rely strongly on interactivity with the users, taking users' states into account, they still lack the ability to consider users' preferences when helping them. In this paper, we introduce an Action Support System for Interactive Self-Training (ASSIST) in cooking. ASSIST focuses on recognizing users' cooking actions as well as the real objects related to these actions, so as to provide accurate and useful assistance. Before the recognition and instruction processes, it takes users' cooking preferences and, by collaborative filtering, suggests one or more recipes likely to satisfy those preferences. When the cooking process starts, ASSIST recognizes users' hand movements using a similarity measure algorithm called AMSS. When the recognized cooking action is correct, ASSIST instructs the user in the next cooking procedure through virtual objects. When a cooking action is incorrect, the cause of the failure is analyzed and ASSIST provides the user with support information according to that cause, to improve the incorrect cooking action. Furthermore, we construct parallel transition models from cooking recipes for more flexible instruction. This enables users to perform the necessary cooking actions in any order they want, allowing more flexible learning.
APA, Harvard, Vancouver, ISO, and other styles
36

Al-Faris, Mahmoud, John Chiverton, David Ndzi, and Ahmed Isam Ahmed. "Vision Based Dynamic Thermal Comfort Control Using Fuzzy Logic and Deep Learning." Applied Sciences 11, no. 10 (May 19, 2021): 4626. http://dx.doi.org/10.3390/app11104626.

Full text
Abstract:
A wide range of techniques exist to help control the thermal comfort of an occupant in indoor environments. A novel technique is presented here to adaptively estimate the occupant's metabolic rate. This is performed by using a computer vision system to identify the activity of the occupant. Recognized actions are then translated into metabolic rates. The widely used Predicted Mean Vote (PMV) thermal comfort index is computed using the adaptively estimated metabolic rate value. The PMV is then used as an input to a fuzzy control system. The performance of the proposed system is evaluated using simulations of various activities. The integration of the PMV thermal comfort index and the action recognition system provides the opportunity to adaptively control an occupant's thermal comfort without the need to attach a sensor to the occupant at all times. The obtained results are compared with those for the case of one or two fixed metabolic rates, and appear to show improved performance, even in the presence of errors in the action recognition system.
APA, Harvard, Vancouver, ISO, and other styles
37

Wada, Atsushi, and Keiki Takadama. "Analyzing Strength-Based Classifier System from Reinforcement Learning Perspective." Journal of Advanced Computational Intelligence and Intelligent Informatics 13, no. 6 (November 20, 2009): 631–39. http://dx.doi.org/10.20965/jaciii.2009.p0631.

Full text
Abstract:
Learning Classifier Systems (LCSs) are rule-based adaptive systems that have both Reinforcement Learning (RL) and rule-discovery mechanisms for effective and practical on-line learning. With the aim of establishing a common theoretical basis between LCSs and RL algorithms to share each field's findings, a detailed analysis was performed to compare the learning processes of these two approaches. Based on our previous work on deriving an equivalence between the Zeroth-level Classifier System (ZCS) and Q-learning with Function Approximation (FA), this paper extends the analysis to the influence of actually applying the conditions for this equivalence. Comparative experiments have revealed interesting implications: (1) ZCS's original parameter, the deduction rate, plays a role in stabilizing the action selection, but (2) from the Reinforcement Learning perspective, such a process inhibits the ability to accurately estimate values for the entire state-action space, thus limiting the performance of ZCS in problems requiring accurate value estimation.
APA, Harvard, Vancouver, ISO, and other styles
38

ALEVEN, VINCENT, IDO ROLL, BRUCE M. McLAREN, and KENNETH R. KOEDINGER. "Automated, Unobtrusive, Action-by-Action Assessment of Self-Regulation During Learning With an Intelligent Tutoring System." Educational Psychologist 45, no. 4 (October 25, 2010): 224–33. http://dx.doi.org/10.1080/00461520.2010.517740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Lee, Seongwoo, Joonho Seon, Chanuk Kyeong, Soohyun Kim, Youngghyu Sun, and Jinyoung Kim. "Novel Energy Trading System Based on Deep-Reinforcement Learning in Microgrids." Energies 14, no. 17 (September 3, 2021): 5515. http://dx.doi.org/10.3390/en14175515.

Full text
Abstract:
Inefficiencies in the energy trading systems of microgrids are mainly caused by uncertainty in non-stationary operating environments. This uncertainty can be mitigated by analyzing patterns of primary operation parameters and their corresponding actions. In this paper, a novel energy trading system based on a double deep Q-networks (DDQN) algorithm and a double Kelly strategy is proposed for improving profits while reducing dependence on the main grid in microgrid systems. The DDQN algorithm is proposed in order to select optimized actions for improving energy transactions. Additionally, the double Kelly strategy is employed to control the microgrid's energy trading quantity to produce long-term profits. The simulation results confirm that the proposed strategies achieve a significant improvement in total profits and in independence from the main grid via optimized energy transactions.
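The DDQN bootstrap target underlying such an agent takes the standard double-DQN form: the online network selects the greedy next action and the target network evaluates it, which reduces the overestimation bias of vanilla DQN. A generic sketch of that one step (the trading-specific reward shaping and Kelly sizing from the paper are omitted here):

```python
def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double-DQN bootstrap target for one transition.

    next_q_online / next_q_target: per-action Q-values for the next state,
    from the online and target networks respectively.
    """
    if done:
        return reward
    # Online network selects the greedy action...
    a_star = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    # ...and the target network evaluates it.
    return reward + gamma * next_q_target[a_star]
```

The vanilla-DQN target would instead use `max(next_q_target)`, letting the same network both select and evaluate the action.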
APA, Harvard, Vancouver, ISO, and other styles
40

GANESH, SUMITRA, and RUZENA BAJCSY. "LEARNING AND RECOGNITION OF HUMAN ACTIONS USING OPTIMAL CONTROL PRIMITIVES." International Journal of Humanoid Robotics 06, no. 03 (September 2009): 459–79. http://dx.doi.org/10.1142/s0219843609001802.

Full text
Abstract:
We propose a unified approach for recognition and learning of human actions, based on an optimal control model of human motion. In this model, the goals and preferences of the agent engaged in a particular action are encapsulated as a cost function or performance criterion, that is optimized to yield the details of the movement. The cost function is a compact, intuitive and flexible representation of the action. A parameterized form of the cost function is considered, wherein the structure reflects the goals of the actions, and the parameters determine the relative weighting of different terms. We show how the cost function parameters can be estimated from data by solving a nonlinear least squares problem. The parameter estimation method is tested on motion capture data for two different reaching actions and six different subjects. We show that the problem of action recognition in the context of this representation is similar to that of mode estimation in a hybrid system and can be solved using a particle filter if a receding horizon formulation of the optimal controller is adopted. We use the proposed approach to recognize different reaching actions from the 3D hand trajectory of subjects.
APA, Harvard, Vancouver, ISO, and other styles
41

Berns, Gregory S., and Terrence J. Sejnowski. "A Computational Model of How the Basal Ganglia Produce Sequences." Journal of Cognitive Neuroscience 10, no. 1 (January 1998): 108–21. http://dx.doi.org/10.1162/089892998563815.

Full text
Abstract:
We propose a systems-level computational model of the basal ganglia based closely on known anatomy and physiology. First, we assume that the thalamic targets, which relay ascending information to cortical action and planning areas, are tonically inhibited by the basal ganglia. Second, we assume that the output stage of the basal ganglia, the internal segment of the globus pallidus (GPi), selects a single action from several competing actions via lateral interactions. Third, we propose that a form of local working memory exists in the form of reciprocal connections between the external globus pallidus (GPe) and the subthalamic nucleus (STN). As a test of the model, the system was trained to learn a sequence of states that required the context of previous actions. The striatum, which was assumed to represent a conjunction of cortical states, directly selected the action in the GP during training. The STN-to-GP connection strengths were modified by an associative learning rule and came to encode the sequence after 20 to 40 iterations through the sequence. Subsequently, the system automatically reproduced the sequence when cued to the first action. The behavior of the model was found to be sensitive to the ratio of the striatal-nigral learning rate to the STN-GP learning rate. Additionally, the degree of striatal inhibition of the globus pallidus had a significant influence on both learning and the ability to select an action. Low learning rates, which would be hypothesized to reflect low levels of dopamine, as in Parkinson's disease, led to slow acquisition of contextual information. However, this could be partially offset by modeling a lesion of the globus pallidus that resulted in an increase in the gain of the STN units. The parameter sensitivity of the model is discussed within the framework of existing behavioral and lesion data.
APA, Harvard, Vancouver, ISO, and other styles
42

Eggert, Elena, Annet Bluschke, Adam Takacs, Maximilian Kleimaker, Alexander Münchau, Veit Roessner, Moritz Mückschel, and Christian Beste. "Perception-Action Integration Is Modulated by the Catecholaminergic System Depending on Learning Experience." International Journal of Neuropsychopharmacology 24, no. 7 (March 17, 2021): 592–600. http://dx.doi.org/10.1093/ijnp/pyab012.

Full text
Abstract:
Background: The process underlying the integration of perception and action is a focal topic in neuroscientific research, and cognitive frameworks such as the theory of event coding have been developed to explain the mechanisms of perception-action integration. The neurobiological underpinnings are poorly understood. While it has been suggested that the catecholaminergic system may play a role, there are opposing predictions regarding the effects of catecholamines on perception-action integration. Methods: Methylphenidate (MPH) is a compound commonly used to modulate the catecholaminergic system. In a double-blind, randomized crossover study design, we examined the effect of MPH (0.25 mg/kg) on perception-action integration using an established “event file coding” paradigm in a group of n = 45 healthy young adults. Results: The data reveal that, compared with the placebo, MPH attenuates binding effects based on the established associations between stimuli and responses, provided participants are already familiar with the task. However, without prior task experience, MPH did not modulate performance compared with the placebo. Conclusions: Catecholamines and learning experience interactively modulate perception-action integration, especially when perception-action associations have to be reconfigured. The data suggest there is a gain control–based mechanism underlying the interactive effects of learning/task experience and catecholaminergic activity during perception-action integration.
APA, Harvard, Vancouver, ISO, and other styles
43

GUERRA-FILHO, GUTEMBERG. "THE MORPHOLOGY OF HUMAN ACTIONS: FINDING ESSENTIAL ACTUATORS, MOTION PATTERNS, AND THEIR COORDINATION." International Journal of Humanoid Robotics 06, no. 03 (September 2009): 537–60. http://dx.doi.org/10.1142/s0219843609001814.

Full text
Abstract:
In this paper, we present the steps required for the construction of a praxicon, a structured lexicon of human actions, through the learning of grammar systems for human actions. The discovery of a Human Activity Language involves learning the syntax of human motion which requires the construction of this praxicon. The morphology inference process assumes that a non-arbitrary symbolic representation of the human movement is given. Thus, to analyze the morphology of a particular action, we are given a symbolic representation for the motion of each actuator associated with several repeated performances of this action. As a formal model, we propose a new Parallel Synchronous Grammar System where each component grammar corresponds to an actuator. We present a novel parallel learning algorithm to induce this grammar system. Our representation explicitly contains the set of joints (degrees of freedom) actually responsible for achieving the goal aimed by the activity, the motion performed by each participating actuator, and the synchronization rules modeling coordination among these actuators. We evaluated our inference approach with synthetic data and real human motion data. The algorithm manages to induce the correct grammar system even when the input contains noise. Therefore, our approach was successful in both representational and learning aspects, and may serve as a tool to parse movement, learn patterns, and to generate actions.
APA, Harvard, Vancouver, ISO, and other styles
44

P, Nishanth. "Machine Learning based Human Fall Detection System." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 25, 2021): 2677–82. http://dx.doi.org/10.22214/ijraset.2021.35394.

Full text
Abstract:
Falls have become a major cause of death, and they are common among the elderly. According to the World Health Organization (WHO), 3 out of 10 elderly people aged 65 and over who live alone tend to fall, and this rate may rise in the coming years. In recent years, the safety of elderly residents living alone has received increased attention in a number of countries. Fall detection systems based on wearable sensors emerged as an early means of detecting falls using IoT technology, but they have drawbacks, including high intrusiveness, low accuracy, and poor reliability. This work describes a fall detection system that does not rely on wearable sensors and is based on machine learning and image analysis in Python. The camera's high-frequency pictures are sent to the network, which uses a Convolutional Neural Network to identify the key points of the human body. A Support Vector Machine then uses the output of the feature extraction to classify the fall, and relatives are notified via mobile message. Rather than modelling individual activities, we use both motion and context information to recognize activities in a scene. This is based on the notion that actions that are spatially and temporally connected rarely occur alone and can serve as context for one another. We propose a hierarchical representation of action segments and activities using a two-layer random field model. The model allows for the simultaneous integration of motion and a variety of context features at multiple levels, as well as the automatic learning of statistics that represent the patterns of the features.
APA, Harvard, Vancouver, ISO, and other styles
45

Solovyeva, Elena, and Ali Abdullah. "Controlling system based on neural networks with reinforcement learning for robotic manipulator." Information and Control Systems, no. 5 (October 20, 2020): 24–32. http://dx.doi.org/10.31799/1684-8853-2020-5-24-32.

Full text
Abstract:
Introduction: Due to its advantages, such as high flexibility and the ability to move heavy pieces with high torques and forces, the robotic arm, also called a manipulator robot, is the most widely used industrial robot. Purpose: We improve the control quality of a manipulator robot with seven degrees of freedom in the V-REP program's environment using a reinforcement learning method based on deep neural networks. Methods: The action signal's policy is estimated by building a numerical algorithm using deep neural networks. The actor network sends the action signal to the robotic manipulator, and the critic network performs numerical function approximation to calculate the value function (Q-value). Results: We create a model of the robot and the environment using the reinforcement learning library in MATLAB, connecting the output signals (the action signals) to a simulated robot in the V-REP program. We train the robot to reach an object in its workspace by interacting with the environment and calculating the reward of each interaction. Observations were modeled using three vision sensors. Based on the proposed deep learning method, an agent representing the robotic manipulator was built using a four-layer neural network for the actor and a four-layer neural network for the critic. The agent was trained for several hours until the robot started to reach the object in its workspace in an acceptable way. The main advantage over supervised-learning control is that the robot can perform actions and train at the same time, giving it the ability to reach an object in its workspace in a continuous action space. Practical relevance: The results obtained are used to control the movement of the manipulator without the need to construct kinematic models, which reduces the mathematical complexity of the calculation and provides a universal solution.
APA, Harvard, Vancouver, ISO, and other styles
46

Ye, Wei Qiong, Kai Li Cheng, and Lin Xu. "Applied-Information Technology and Parallel Learning Rules in Intelligent Control System." Advanced Materials Research 908 (March 2014): 539–42. http://dx.doi.org/10.4028/www.scientific.net/amr.908.539.

Full text
Abstract:
This paper applies information technology, taking the experiences formed by a control system's exploratory actions and state transitions as its basis, and, under unsupervised conditions, derives planning and control rules from those experiences as control knowledge. In the learning process, experiences are generalized into rules, which in turn generate higher-level concepts or rules, constituting a multi-resolution knowledge architecture. In the experiment, a mobile robot effectively learned and planned system states and action control in a quasi-optimal manner from random experiences.
APA, Harvard, Vancouver, ISO, and other styles
47

LIN, CHIN-TENG, MING-CHIH KAN, and I.-FANG CHUNG. "A NEURAL NETWORK THAT LEARNS FROM FUZZY DATA FOR LANGUAGE ACQUISITION." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 04, no. 06 (December 1996): 581–603. http://dx.doi.org/10.1142/s0218488596000330.

Full text
Abstract:
This paper proposes a four-layered fuzzy language acquisition network (FLAN) for acquiring fuzzy language. It can catch the intended information from a sentence (command) spoken in natural language with fuzzy terms. The intended information includes a meaningful semantic action and the fuzzy linguistic information of that action (for example, the phrase “move forward” represents the meaningful semantic action and the phrase “very high speed” represents the linguistic information in the fuzzy command “Move forward at a very high speed.”). The proposed FLAN has two important features. First, no restrictions whatsoever are placed on the fuzzy language input used to specify the desired information, and the network requires no acoustic, prosodic, grammatical, or syntactic structure. Second, the linguistic information of an action is learned automatically and is represented by fuzzy numbers based on α-level sets. A supervised learning scheme is proposed to train the FLAN on fuzzy training data. This learning scheme consists of the mutual-information (MI) supervised learning algorithm for learning meaningful semantic actions, and the fuzzy backpropagation (FBP) learning algorithm for learning linguistic information. An experimental system is constructed to illustrate the performance and applicability of the proposed FLAN.
APA, Harvard, Vancouver, ISO, and other styles
48

Chien, Chih-Feng, and Zahra Moghadasian. "An Action Research." International Journal of Online Pedagogy and Course Design 2, no. 4 (October 2012): 1–19. http://dx.doi.org/10.4018/ijopcd.2012100101.

Full text
Abstract:
This article reports on an action research study conducted in an undergraduate course, Second Language Acquisition and Development, in a 3-way blended learning (BL) environment: face-to-face (F2F), an online learning system (eLearning), and Second Life (SL). The study was the result of a joint project between a university in America and one in Hong Kong; data collected from the American students were included in this study. The purposes of this action research are to: 1) monitor the BL course and provide suggestions for future courses, and 2) investigate whether students' time spent in eLearning and the number of discussion posts affect students' achievement. This action research provides suggestions for improving the application of 3-way BL in future courses in terms of course design, instructor preparation, and online grading.
APA, Harvard, Vancouver, ISO, and other styles
49

Jiang, Fengqing, and Xiao Chen. "An Action Recognition Algorithm for Sprinters Using Machine Learning." Mobile Information Systems 2021 (May 19, 2021): 1–10. http://dx.doi.org/10.1155/2021/9919992.

Full text
Abstract:
The advancements in modern science and technology have greatly promoted the progress of sports science. Advanced technological methods have been widely used in sports training, which has not only improved the scientific level of training but also promoted the continuous growth of sports technology and competition results. With the development of sports science and the gradual deepening of sport practice, the use of scientific training methods and monitoring approaches has improved the effect of sports training and athletes' performance. This paper takes sprinting as its research problem and constructs a sprinter action recognition system based on machine learning. In view of the shortcomings of the traditional dual-stream convolutional neural network in processing long-term video information, a time-segmented dual-stream network based on sparse sampling is used to better represent long-term motion characteristics. First, the continuous video-frame data is divided into multiple segments, and a short sequence of data containing user actions is formed by randomly sampling each segment of the video-frame sequence. Each short sequence is then fed to the dual-stream network for feature extraction. The optical-flow image extraction involved in the dual-stream network is implemented using the Lucas–Kanade algorithm. The system has been tested in real scenarios, and the results show that the system design meets the expected requirements for sprinters.
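The Lucas–Kanade step used for optical-flow extraction solves a small least-squares problem per image window: with spatial gradients Ix, Iy and temporal gradient It, it solves A v = b where A stacks [Ix Iy] and b = -It. A textbook single-window sketch (our own minimal version, not the system's actual implementation, which would operate on full image patches):

```python
def lucas_kanade_flow(Ix, Iy, It):
    """One Lucas-Kanade flow estimate from per-pixel gradients in a window.

    Ix, Iy, It: equal-length sequences of spatial and temporal gradients.
    Solves the 2x2 normal equations of A v = -It, A = [Ix Iy], for the
    flow vector (u, v).
    """
    sxx = sum(x * x for x in Ix)
    syy = sum(y * y for y in Iy)
    sxy = sum(x * y for x, y in zip(Ix, Iy))
    sxt = sum(x * t for x, t in zip(Ix, It))
    syt = sum(y * t for y, t in zip(Iy, It))
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-12:
        return 0.0, 0.0              # ill-conditioned window: report no flow
    u = (-syy * sxt + sxy * syt) / det
    v = (sxy * sxt - sxx * syt) / det
    return u, v
```

In practice one would use a library routine (e.g. OpenCV's pyramidal Lucas–Kanade) rather than this scalar version; the sketch only shows the linear system being solved per window.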
APA, Harvard, Vancouver, ISO, and other styles
50

Khany, Reza, and Majid Amiri. "Action control, L2 motivational self system, and motivated learning behavior in a foreign language learning context." European Journal of Psychology of Education 33, no. 2 (December 8, 2016): 337–53. http://dx.doi.org/10.1007/s10212-016-0325-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
