
Journal articles on the topic 'Vision-based motion controls'



Consult the top 50 journal articles for your research on the topic 'Vision-based motion controls.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Park, Jaehong, Wonsang Hwang, Hyunil Kwon, Kwangsoo Kim, and Dong-il “Dan” Cho. "A novel line of sight control system for a robot vision tracking system, using vision feedback and motion-disturbance feedforward compensation." Robotica 31, no. 1 (April 12, 2012): 99–112. http://dx.doi.org/10.1017/s0263574712000124.

Abstract:
This paper presents a novel line of sight control system for a robot vision tracking system, which uses a position feedforward controller to preposition a camera, and a vision feedback controller to compensate for the positioning error. Continuous target tracking is an important function for service robots, surveillance robots, and cooperating robot systems. However, it is difficult to track a specific target using only vision information, while a robot is in motion. This is especially true when a robot is moving fast or rotating fast. The proposed system controls the camera line of sight, using a feedforward controller based on estimated robot position and motion information. Specifically, the camera is rotated in the direction opposite to the motion of the robot. To implement the system, a disturbance compensator is developed to determine the current position of the robot, even when the robot wheels slip. The disturbance compensator is comprised of two extended Kalman filters (EKFs) and a slip detector. The inputs of the disturbance compensator are data from an accelerometer, a gyroscope, and two wheel-encoders. The vision feedback information, which is the targeting error, is used as the measurement update for the two EKFs. Using output of the disturbance compensator, an actuation module pans the camera to locate a target at the center of an image plane. This line of sight control methodology improves the recognition performance of the vision tracking system, by keeping a target image at the center of an image frame. The proposed system is implemented on a two-wheeled robot. Experiments are performed for various robot motion scenarios in dynamic situations to evaluate the tracking and recognition performance. Experimental results showed the proposed system achieves high tracking and recognition performances with a small targeting error.
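For readers who want a concrete picture of the feedforward-plus-feedback line-of-sight idea summarised above, here is a deliberately minimal Python sketch. The gains, sampling period, and pixel-to-angle conversion are illustrative assumptions, not values or code from the paper, and the EKF-based disturbance compensator is not reproduced.

```python
import numpy as np

DT = 0.01          # control period [s] (assumed)
K_FB = 0.8         # vision feedback gain on the pixel targeting error (assumed)
PIX_TO_RAD = 0.001 # rough conversion from pixel error to pan angle [rad/px] (assumed)

def pan_command(pan_angle, robot_yaw_rate, target_pixel_error):
    """One control step for the camera pan axis.

    pan_angle          -- current camera pan relative to the robot body [rad]
    robot_yaw_rate     -- yaw rate of the robot base from gyro/odometry [rad/s]
    target_pixel_error -- horizontal offset of the target from the image centre [px]
    """
    # Feedforward: rotate the camera opposite to the robot's own rotation.
    feedforward = -robot_yaw_rate * DT
    # Feedback: trim the remaining error measured by the vision system.
    feedback = -K_FB * PIX_TO_RAD * target_pixel_error
    return pan_angle + feedforward + feedback

# Toy usage: the robot spins at 0.5 rad/s while the target sits 12 px off centre.
angle = 0.0
for _ in range(100):
    angle = pan_command(angle, robot_yaw_rate=0.5, target_pixel_error=12.0)
print(f"pan angle after 1 s: {angle:.3f} rad")
```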
2

Rioux, Antoine, Claudia Esteves, Jean-Bernard Hayet, and Wael Suleiman. "Cooperative Vision-Based Object Transportation by Two Humanoid Robots in a Cluttered Environment." International Journal of Humanoid Robotics 14, no. 03 (August 25, 2017): 1750018. http://dx.doi.org/10.1142/s0219843617500189.

Abstract:
Although in recent years, there have been quite a few studies aimed at the navigation of robots in cluttered environments, few of these have addressed the problem of robots navigating while moving a large or heavy object. Such a functionality is especially useful when transporting objects of different shapes and weights without having to modify the robot hardware. In this work, we tackle the problem of making two humanoid robots navigate in a cluttered environment while transporting a very large object that simply could not be moved by a single robot. We present a complete navigation scheme, from the incremental construction of a map of the environment and the computation of collision-free trajectories to the design of the control to execute those trajectories. We present experiments made on real NAO robots, equipped with RGB-D sensors mounted on their heads, moving an object around obstacles. Our experiments show that a significantly large object can be transported without modifying the robot main hardware, and therefore that our scheme enhances the humanoid robots capacities in real-life situations. Our contributions are: (1) a low-dimension multi-robot motion planning algorithm that finds an obstacle-free trajectory, by using the constructed map of the environment as an input, (2) a framework that produces continuous and consistent odometry data, by fusing the visual and the robot odometry information, (3) a synchronization system that uses the projection of the robots based on their hands positions coupled with the visual feedback error computed from a frontal camera, (4) an efficient real-time whole-body control scheme that controls the motions of the closed-loop robot–object–robot system.
3

Wang, Ling, and Sitong Chen. "Student Physical Fitness Test System and Test Data Analysis System Based on Computer Vision." Wireless Communications and Mobile Computing 2021 (May 15, 2021): 1–8. http://dx.doi.org/10.1155/2021/5589065.

Abstract:
Computer vision technology is one of the main research directions of artificial intelligence. With the rapid growth of image or video data scale and the improvement of computing power, computer vision technology has achieved unprecedented development in recent years and is widely used in a variety of scenes. This study mainly discusses the design of student physical fitness test system and test data analysis system based on computer vision. This study is mainly based on the motion attitude determination algorithm to identify the motion. In hardware configuration, the key is CPU and GPU. The model realizes large-scale matrix computation based on the parallel computing power provided by GPU and uses CPU to realize data reading and preprocessing. The assessment controller is responsible for the transmission of instructions and status information and controls the operation of the entire pitch assessment system. It is the control center of the entire system. ZigBee wireless communication technology is adopted as the communication method of human posture measurement terminal and assessment controller. The input image is preprocessed through scaling and standardization. The image is scaled to the resolution of 224 × 224 when input, which is performed to realize data parallel training. The image was changed by means of random horizontal flip, random rotation, and color change to achieve the effect of expanding the dataset. Then, the test evaluation module was used to evaluate various test indexes of the body. During the sit-up test, nine out of 10 sit-ups can be accurately counted and the recognition rate reaches 90 percent. The results show that the system designed in this study has high accuracy and good performance, which can be used for the physical fitness test and test data analysis of students.
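The sit-up counting step mentioned in this abstract reduces to counting cycles of a pose-derived torso angle. The sketch below is a hedged illustration with a synthetic angle signal and made-up hysteresis thresholds; it is not the system's trained posture model.

```python
import numpy as np

def count_situps(torso_angles_deg, up_thresh=60.0, down_thresh=20.0):
    """Count repetitions from a torso-angle signal using hysteresis thresholds."""
    count, is_up = 0, False
    for angle in torso_angles_deg:
        if not is_up and angle > up_thresh:
            is_up = True            # trunk has risen past the "up" threshold
        elif is_up and angle < down_thresh:
            is_up = False           # trunk returned to the mat: one full repetition
            count += 1
    return count

# Synthetic signal: ten sit-up cycles of a torso angle oscillating between 5 and 75 deg.
t = np.linspace(0, 10 * 2 * np.pi, 2000)
angles = 40 + 35 * np.sin(t)
print(count_situps(angles))   # -> 10
```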
4

Khan, Taha, Jerker Westin, and Mark Dougherty. "Motion Cue Analysis for Parkinsonian Gait Recognition." Open Biomedical Engineering Journal 7, no. 1 (January 15, 2013): 1–8. http://dx.doi.org/10.2174/1874120701307010001.

Abstract:
This paper presents a computer-vision based marker-free method for gait-impairment detection in Patients with Parkinson’s disease (PWP). The system is based upon the idea that a normal human body attains equilibrium during the gait by aligning the body posture with Axis-of-Gravity (AOG) using feet as the base of support. In contrast, PWP appear to be falling forward as they are less-able to align their body with AOG due to rigid muscular tone. A normal gait exhibits periodic stride-cycles with stride-angle around 45° between the legs, whereas PWP walk with shortened stride-angle with high variability between the stride-cycles. In order to analyze Parkinsonian-gait (PG), subjects were videotaped with several gait-cycles. The subject’s body was segmented using a color-segmentation method to form a silhouette. The silhouette was skeletonized for motion cues extraction. The motion cues analyzed were stride-cycles (based on the cyclic leg motion of skeleton) and posture lean (based on the angle between leaned torso of skeleton and AOG). Cosine similarity between an imaginary perfect gait pattern and the subject gait patterns produced 100% recognition rate of PG for 4 normal-controls and 3 PWP. Results suggested that the method is a promising tool to be used for PG assessment in home-environment.
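The recognition step described above comes down to a cosine similarity between a subject's motion-cue vector and an idealised gait template. A small sketch follows; the three features and their values are invented for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical features: [mean stride angle (deg), stride variability, posture lean (deg)]
ideal_gait   = np.array([45.0, 2.0, 2.0])   # imaginary "perfect" pattern
subject_gait = np.array([28.0, 9.0, 11.0])  # shortened, variable, forward-leaning gait

score = cosine_similarity(ideal_gait, subject_gait)
print(f"similarity to normal gait: {score:.3f}")  # lower score -> more Parkinsonian-like
```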
5

Mitrokhin, A., P. Sutor, C. Fermüller, and Y. Aloimonos. "Learning sensorimotor control with neuromorphic sensors: Toward hyperdimensional active perception." Science Robotics 4, no. 30 (May 15, 2019): eaaw6736. http://dx.doi.org/10.1126/scirobotics.aaw6736.

Abstract:
The hallmark of modern robotics is the ability to directly fuse the platform’s perception with its motoric ability—the concept often referred to as “active perception.” Nevertheless, we find that action and perception are often kept in separated spaces, which is a consequence of traditional vision being frame based and only existing in the moment and motion being a continuous entity. This bridge is crossed by the dynamic vision sensor (DVS), a neuromorphic camera that can see the motion. We propose a method of encoding actions and perceptions together into a single space that is meaningful, semantically informed, and consistent by using hyperdimensional binary vectors (HBVs). We used DVS for visual perception and showed that the visual component can be bound with the system velocity to enable dynamic world perception, which creates an opportunity for real-time navigation and obstacle avoidance. Actions performed by an agent are directly bound to the perceptions experienced to form its own “memory.” Furthermore, because HBVs can encode entire histories of actions and perceptions—from atomic to arbitrary sequences—as constant-sized vectors, autoassociative memory was combined with deep learning paradigms for controls. We demonstrate these properties on a quadcopter drone ego-motion inference task and the MVSEC (multivehicle stereo event camera) dataset.
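Binding perceptions and actions into hyperdimensional binary vectors, as this abstract describes, is commonly done with element-wise XOR, which is self-inverse. The sketch below shows that operation on random 10,000-bit vectors; the dimensionality and encoding are generic assumptions rather than the authors' exact scheme.

```python
import numpy as np

D = 10_000  # hypervector dimensionality (typical choice, assumed here)
rng = np.random.default_rng(0)

def random_hbv():
    """A random dense binary hypervector."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    """Bind two hypervectors with element-wise XOR (self-inverse)."""
    return np.bitwise_xor(a, b)

def hamming_similarity(a, b):
    """1.0 for identical vectors, ~0.5 for unrelated ones."""
    return 1.0 - np.count_nonzero(a != b) / D

perception = random_hbv()   # e.g. an encoded DVS event slice
action     = random_hbv()   # e.g. an encoded velocity command
memory     = bind(perception, action)

# Unbinding with either component recovers the other exactly.
recovered_action = bind(memory, perception)
print(hamming_similarity(recovered_action, action))   # 1.0
print(hamming_similarity(memory, random_hbv()))       # ~0.5 (unrelated)
```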
6

Wong, Sai-Keung, Kai-Min Chen, and Ting-Yu Chen. "Interactive Sand Art Drawing Using RGB-D Sensor." International Journal of Software Engineering and Knowledge Engineering 28, no. 05 (May 2018): 643–61. http://dx.doi.org/10.1142/s0218194018500183.

Abstract:
We present an interactive system using one RGB-D sensor, which allows a user to use bare hands to perform sand drawing. Our system supports the common sand drawing functions, such as sand erosion, sand spilling, and sand leaking. To use hands to manipulate the virtual sand, we design four key hand gestures. The idea is that the gesture of one hand controls the drawing actions. The motion and gesture of the other hand control the drawing positions. There are three major steps. First, our system adopts a vision-based bare-hand detection method which computes the hand position and recognizes the hand gestures. Second, the drawing positions and the drawing actions are sent to a sand drawing subsystem. Finally, the subsystem performs the sand drawing actions. Experimental results show that our system enables users to draw a rich variety of sand pictures.
7

Kadota, Hisao, Hidenori Kawamura, Masahito Yamamoto, Toshihiko Takaya, and Azuma Ohuchi. "Vision-Based Motion Control of Indoor Blimp Robot (Featured Robot 1, Session: TP1-B)." Abstracts of the International Conference on Advanced Mechatronics: Toward Evolutionary Fusion of IT and Mechatronics: ICAM 2004.4 (2004): 47. http://dx.doi.org/10.1299/jsmeicam.2004.4.47_1.

8

Caccia, M. "Vision-based ROV horizontal motion control." IFAC Proceedings Volumes 37, no. 8 (July 2004): 60–65. http://dx.doi.org/10.1016/s1474-6670(17)31951-1.

9

Bouteraa, Yassine, Ismail Ben Abdallah, Atef Ibrahim, and Tariq Ahamed Ahanger. "Fuzzy logic-based connected robot for home rehabilitation." Journal of Intelligent & Fuzzy Systems 40, no. 3 (March 2, 2021): 4835–50. http://dx.doi.org/10.3233/jifs-201671.

Abstract:
In this paper, a robotic system dedicated to remote wrist rehabilitation is proposed as an Internet of Things (IoT) application. The system offers patients home rehabilitation. Since the physiotherapist and the patient are on different sites, the system guarantees that the physiotherapist controls and supervises the rehabilitation process and that the patient repeats the same gestures made by the physiotherapist. A human-machine interface (HMI) has been developed to allow the physiotherapist to remotely control the robot and supervise the rehabilitation process. Based on a computer vision system, physiotherapist gestures are sent to the robot in the form of control instructions. Wrist range of motion (RoM), EMG signal, sensor current measurement, and streaming from the patient’s environment are returned to the control station. The various acquired data are displayed in the HMI and recorded in its database, which allows later monitoring of the patient’s progress. During the rehabilitation process, the developed system makes it possible to follow the muscle contraction thanks to an extraction of the Electromyography (EMG) signal as well as the patient’s resistance thanks to a feedback from a current sensor. Feature extraction algorithms are implemented to transform the EMG raw signal into a relevant data reflecting the muscle contraction. The solution incorporates a cascade fuzzy-based decision system to indicate the patient’s pain. As measurement safety, when the pain exceeds a certain threshold, the robot should stop the action even if the desired angle is not yet reached. Information on the patient, the evolution of his state of health and the activities followed, are all recorded, which makes it possible to provide an electronic health record. Experiments on 3 different subjects showed the effectiveness of the developed robotic solution.
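Transforming a raw EMG signal into features that reflect muscle contraction, as mentioned above, is typically done with windowed statistics such as mean absolute value and root-mean-square. The sketch below uses a synthetic signal and an assumed window length; it is not the paper's feature set or fuzzy decision stage.

```python
import numpy as np

def emg_features(signal, fs=1000, window_s=0.2):
    """Windowed mean-absolute-value and RMS features of a raw EMG signal."""
    win = int(fs * window_s)
    feats = []
    for i in range(len(signal) // win):
        seg = signal[i * win:(i + 1) * win]
        mav = np.mean(np.abs(seg))
        rms = np.sqrt(np.mean(seg ** 2))
        feats.append((mav, rms))
    return np.array(feats)

# Synthetic example: a burst of "contraction" in the middle of a 3 s recording.
fs = 1000
t = np.arange(0, 3, 1 / fs)
emg = 0.05 * np.random.randn(t.size)
emg[fs:2 * fs] += 0.4 * np.random.randn(fs)   # stronger activity for 1 s

for mav, rms in emg_features(emg, fs):
    print(f"MAV={mav:.3f}  RMS={rms:.3f}")
```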
10

Kamangar, Zahed, Soran Saeed, and Asrin Zardoie. "Training Robot Arm 5 Degree of Freedom for Tracking the desired route using MLP." Kurdistan Journal of Applied Research 2, no. 3 (August 27, 2017): 232–39. http://dx.doi.org/10.24017/science.2017.3.44.

Abstract:
This paper work presents a new method of controlling the robot arm. The control system is the most important part of industrial robot. In industrial robot arms, it is very important to control the desired path and direction. In this paper, the presented control method is a multilayer neural network. Which controls and compares the location of the joins at the end point of the path relative to the zero position (the beginning of the path-static state). And try to learn the ultimate position of each joints due to changes in angles and direction of movement to carry out the motion process. The superiority of this method is that it can operate without considering 3D space (working space), the dynamic equations, and have Cartesian coordinates of the points on the desired path. Innovating this method of controlling the choice of the route is based on feedback from the vision system and human intelligence. This way, the operator selects and applies how to move the joints and the links of the robot and the method of walking the path. Applying the path through the movement of links and motion of joints and changing their angles in order to reach the end effector to the end point of the path. In this system, using the potentiometers (volumes) as an encoder connected to the axis of the joints, it is possible to obtain the location of the joints on the basis of variations in the voltage range and convert it to the equivalent digital 1024-0 values as has been used the MLP neural network input.
11

Pieters, Roel, Zhenyu Ye, Pieter Jonker, and Henk Nijmeijer. "Direct Motion Planning for Vision-Based Control." IEEE Transactions on Automation Science and Engineering 11, no. 4 (October 2014): 1282–88. http://dx.doi.org/10.1109/tase.2014.2345954.

12

Oda, Naoki, Masahide Ito, and Masaaki Shibata. "Vision-based motion control for robotic systems." IEEJ Transactions on Electrical and Electronic Engineering 4, no. 2 (March 2009): 176–83. http://dx.doi.org/10.1002/tee.20395.

13

Guillén-Bonilla, José Trinidad, Claudia Carolina Vaca García, Stefano Di Gennaro, María Eugenia Sánchez Morales, and Cuauhtémoc Acosta Lúa. "Vision-Based Nonlinear Control of Quadrotors Using the Photogrammetric Technique." Mathematical Problems in Engineering 2020 (November 16, 2020): 1–10. http://dx.doi.org/10.1155/2020/5146291.

Abstract:
This paper presents a controller designed via the backstepping technique, for the tracking of a reference trajectory obtained via the photogrammetric technique. The dynamic equations used to represent the motion of the quadrotor helicopter are based on the Newton–Euler model. The resulting quadrotor model has been divided into four subsystems for the altitude, longitudinal, lateral, and yaw motions. A control input is designed for each subsystem. Furthermore, the photogrammetric technique has been used to obtain the reference trajectory to be tracked. The performance and effectiveness of the proposed nonlinear controllers have been tested via numerical simulations using the Pixhawk Pilot Support Package developed for Matlab/Simulink.
14

Nomura, M., and N. Watanabe. "Vision Based Motion Control Application for Factory Automation." IFAC Proceedings Volumes 25, no. 29 (October 1992): 259–63. http://dx.doi.org/10.1016/s1474-6670(17)50576-5.

15

Caccia, M. "Vision-based ROV horizontal motion control: Experimental results." IFAC Proceedings Volumes 37, no. 10 (July 2004): 397–402. http://dx.doi.org/10.1016/s1474-6670(17)31764-0.

16

Huang, Shiuh-Jer, and Shian-Shin Wu. "Vision-Based Robotic Motion Control for Non-autonomous Environment." Journal of Intelligent and Robotic Systems 54, no. 5 (October 14, 2008): 733–54. http://dx.doi.org/10.1007/s10846-008-9286-6.

17

Wang, Min, Xiadong Lv, and Xinhan Huang. "Vision Based Motion Control and Trajectory Tracking for Microassembly Robots." International Journal of Information Acquisition 04, no. 03 (September 2007): 237–49. http://dx.doi.org/10.1142/s0219878907001319.

Abstract:
This paper presents a vision based motion control and trajectory tracking strategies for microassembly robots including a self-optimizing visual servoing depth motion control method and a novel trajectory snake tracking strategy. To measure micromanipulator depth motion, a normalized gray-variance focus measure operator is developed using depth from focus techniques. The extracted defocus features are theoretically distributed with one peak point which can be applied to locate the microscopic focal depth via self-optimizing control. Tracking differentiators are developed to suppress noises and track the features and their differential values without oscillation. Based on the differential defocus signals a coarse-to-fine self-optimizing controller is presented for micromanipulator to precisely locate focus depth. As well as a novel trajectory snake energy function of robotic motion is defined involving kinematic energy, curve potential and image potential energy. The motion trajectory can be located through searching the converged energy distribution of the snake function. Energy weights in the function are real-time adjusted to avoid local minima during convergence. To improve snake searching efficiency, quadratic-trajectory least square estimator is employed to predict manipulator motion position before tracking. Experimental results in a microassembly robotic system demonstrate that the proposed strategies are successful and effective.
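The depth-from-focus step rests on a normalized gray-variance focus measure that peaks at the focal depth. A minimal sketch of one such measure is given below; the exact operator and normalization used in the paper may differ.

```python
import numpy as np

def gray_variance_focus(gray):
    """Normalized gray-level variance: higher value = sharper (better focused) image."""
    gray = np.asarray(gray, dtype=np.float64)
    return gray.var() / (gray.mean() + 1e-9)   # normalize by mean intensity

def best_focus(image_stack):
    """Index of the sharpest frame in a stack taken at different depths."""
    scores = [gray_variance_focus(img) for img in image_stack]
    return int(np.argmax(scores)), scores

# Toy stack: frame 1 has more high-frequency content than the blurred frames 0 and 2.
rng = np.random.default_rng(1)
sharp = rng.integers(0, 255, (64, 64)).astype(np.float64)
blurred = 0.5 * (sharp + sharp.mean())
idx, scores = best_focus([blurred, sharp, blurred])
print("sharpest frame:", idx)
```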
18

Mitchell, Alexandra G., Robert D. McIntosh, Stephanie Rossit, Michael Hornberger, and Suvankar Pal. "Assessment of visually guided reaching in prodromal Alzheimer’s disease: a cross-sectional study protocol." BMJ Open 10, no. 6 (June 2020): e035021. http://dx.doi.org/10.1136/bmjopen-2019-035021.

Abstract:
Introduction: Recent evidence has implicated the precuneus of the medial parietal lobe as one of the first brain areas to show pathological changes in Alzheimer’s disease (AD). Damage to the precuneus through focal brain injury is associated with impaired visually guided reaching, particularly for objects in peripheral vision. This raises the hypothesis that peripheral misreaching may be detectable in patients with prodromal AD. The aim of this study is to assess the frequency and severity of peripheral misreaching in patients with mild cognitive impairment (MCI) and AD. Methods and analysis: Patients presenting with amnestic MCI, mild-to-moderate AD and healthy older-adult controls will be tested (target N=24 per group). Peripheral misreaching will be assessed using two set-ups: a tablet-based task of lateral reaching and motion-tracked radial reaching (in depth). There are two versions of each task, one where participants can look directly at targets (free reaching), another where they must maintain central fixation (peripheral reaching). All tasks will be conducted first on their dominant, and then their non-dominant side. For each combination of task and side, a Peripheral Misreaching Index (PMI) will be calculated as the increase in absolute reaching error between free and peripheral reaching. Each patient will be classified as showing peripheral misreaching if their PMI is significantly abnormal, by comparison to control performance, on either side of space. We will then test whether the frequency of peripheral misreaching exceeds the chance level in each patient group and compare the overall severity of misreaching between groups. Ethics and dissemination: Ethical approval was provided by the National Health Service (NHS) East of England, Cambridge Central Research Ethics Committee (REC 19/EE/0170). The results of this study will be published in a peer-reviewed journal and presented at academic conferences.
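The Peripheral Misreaching Index defined in this protocol is the increase in absolute reaching error from free to peripheral reaching, which is simple to compute once the errors are recorded. The sketch below uses invented error values purely for illustration.

```python
import numpy as np

def peripheral_misreaching_index(free_errors_mm, peripheral_errors_mm):
    """PMI = mean absolute error (peripheral reaching) - mean absolute error (free reaching)."""
    free = np.mean(np.abs(free_errors_mm))
    peripheral = np.mean(np.abs(peripheral_errors_mm))
    return peripheral - free

# Invented example data for one participant and one side of space (errors in mm).
free_errors = [4.2, 5.1, 3.8, 6.0, 4.9]
peripheral_errors = [11.5, 14.2, 9.8, 13.0, 12.4]
print(f"PMI = {peripheral_misreaching_index(free_errors, peripheral_errors):.1f} mm")
```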
19

Chroust, S., and M. Vincze. "Comparison of Prediction Methods for Vision-Based Control of Motion." IFAC Proceedings Volumes 33, no. 27 (September 2000): 207–12. http://dx.doi.org/10.1016/s1474-6670(17)37930-2.

20

Jiang, Zhao Hui. "Vision-based Cartesian space motion control for flexible robotic manipulators." International Journal of Modelling, Identification and Control 4, no. 4 (2008): 406. http://dx.doi.org/10.1504/ijmic.2008.021480.

21

Hirai, S., T. Masui, and S. Kawamura. "1A1-M5 Vision-based Motion Control of Pneumatic Group Actuators." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2001 (2001): 17. http://dx.doi.org/10.1299/jsmermd.2001.17_2.

22

Caccia, M. "Vision-based ROV horizontal motion control: Near-seafloor experimental results." Control Engineering Practice 15, no. 6 (June 2007): 703–14. http://dx.doi.org/10.1016/j.conengprac.2006.05.008.

23

Yu, Junzhi, Kai Wang, Min Tan, and Jianwei Zhang. "Design and Control of an Embedded Vision Guided Robotic Fish with Multiple Control Surfaces." Scientific World Journal 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/631296.

Abstract:
This paper focuses on the development and control issues of a self-propelled robotic fish with multiple artificial control surfaces and an embedded vision system. By virtue of the hybrid propulsion capability in the body plus the caudal fin and the complementary maneuverability in accessory fins, a synthesized propulsion scheme including a caudal fin, a pair of pectoral fins, and a pelvic fin is proposed. To achieve flexible yet stable motions in aquatic environments, a central pattern generator- (CPG-) based control method is employed. Meanwhile, a monocular underwater vision serves as sensory feedback that modifies the control parameters. The integration of the CPG-based motion control and the visual processing in an embedded microcontroller allows the robotic fish to navigate online. Aquatic tests demonstrate the efficacy of the proposed mechatronic design and swimming control methods. Particularly, a pelvic fin actuated sideward swimming gait was first implemented. It is also found that the speeds and maneuverability of the robotic fish with coordinated control surfaces were largely superior to that of the swimming robot propelled by a single control surface.
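The CPG-based swimming control mentioned above can be pictured as a chain of coupled phase oscillators whose outputs drive the fin joints with a travelling-wave phase lag. The sketch below is a generic oscillator chain with illustrative parameters, not the robot's tuned controller.

```python
import numpy as np

def simulate_cpg(n_joints=4, freq_hz=1.0, phase_lag=0.4, amp=0.3, dt=0.01, steps=200):
    """Chain of phase oscillators: each joint tracks its neighbour with a fixed phase lag."""
    phase = np.zeros(n_joints)
    coupling = 2.0
    angles = []
    for _ in range(steps):
        dphase = 2 * np.pi * freq_hz * np.ones(n_joints)
        for i in range(1, n_joints):
            # Pull each oscillator toward a constant lag behind its predecessor.
            dphase[i] += coupling * np.sin(phase[i - 1] - phase[i] - phase_lag)
        phase += dphase * dt
        angles.append(amp * np.sin(phase))   # joint angle commands [rad]
    return np.array(angles)

traj = simulate_cpg()
print(traj.shape)   # (200, 4) joint-angle commands forming a travelling wave
```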
24

Windsor, Shane P., Richard J. Bomphrey, and Graham K. Taylor. "Vision-based flight control in the hawkmoth Hyles lineata." Journal of The Royal Society Interface 11, no. 91 (February 6, 2014): 20130921. http://dx.doi.org/10.1098/rsif.2013.0921.

Abstract:
Vision is a key sensory modality for flying insects, playing an important role in guidance, navigation and control. Here, we use a virtual-reality flight simulator to measure the optomotor responses of the hawkmoth Hyles lineata, and use a published linear-time invariant model of the flight dynamics to interpret the function of the measured responses in flight stabilization and control. We recorded the forces and moments produced during oscillation of the visual field in roll, pitch and yaw, varying the temporal frequency, amplitude or spatial frequency of the stimulus. The moths’ responses were strongly dependent upon contrast frequency, as expected if the optomotor system uses correlation-type motion detectors to sense self-motion. The flight dynamics model predicts that roll angle feedback is needed to stabilize the lateral dynamics, and that a combination of pitch angle and pitch rate feedback is most effective in stabilizing the longitudinal dynamics. The moths’ responses to roll and pitch stimuli coincided qualitatively with these functional predictions. The moths produced coupled roll and yaw moments in response to yaw stimuli, which could help to reduce the energetic cost of correcting heading. Our results emphasize the close relationship between physics and physiology in the stabilization of insect flight.
25

Oda, Naoki. "Vision-Based Motion Control of Mobile Robotic Systems for Human Support." Journal of The Institute of Electrical Engineers of Japan 130, no. 6 (2010): 340–43. http://dx.doi.org/10.1541/ieejjournal.130.340.

26

Moshtagh, N., N. Michael, A. Jadbabaie, and K. Daniilidis. "Vision-Based, Distributed Control Laws for Motion Coordination of Nonholonomic Robots." IEEE Transactions on Robotics 25, no. 4 (August 2009): 851–60. http://dx.doi.org/10.1109/tro.2009.2022439.

27

Bonin-Font, Francisco, Javier Antich Tobaruela, Alberto Ortiz Rodriguez, and Gabriel Oliver. "Vision-based mobile robot motion control combining T2 and ND approaches." Robotica 32, no. 4 (September 6, 2013): 591–609. http://dx.doi.org/10.1017/s0263574713000878.

Abstract:
Navigating along a set of programmed points in a completely unknown environment is a challenging task which mostly depends on the way the robot perceives and symbolizes the environment and decisions it takes in order to avoid the obstacles while it intends to reach subsequent goals. Tenacity and Traversability (T2)-based strategies have demonstrated to be highly effective for reactive navigation, extending the benefits of the artificial Potential Field method to complex situations, such as trapping zones or mazes. This paper presents a new approach for reactive mobile robot behavior control which rules the actions to be performed to avoid unexpected obstacles while the robot executes a mission between several defined sites. This new strategy combines the T2 principles to escape from trapping zones together with additional criteria based on the Nearness Diagram (ND) strategy to move in cluttered or densely occupied scenarios. Success in a complete set of experiments, using a mobile robot equipped with a single camera, shows extensive environmental conditions where the strategy can be applied.
28

Oh, P., and D. Burschka. "From the guest editors - Software packages for vision-based motion control." IEEE Robotics & Automation Magazine 12, no. 4 (December 2005): 3–4. http://dx.doi.org/10.1109/mra.2005.1577015.

29

Schreiber, Michael. "SmartImage sensors deliver low‐cost Windows‐based vision to motion control." Assembly Automation 18, no. 3 (September 1998): 215–19. http://dx.doi.org/10.1108/01445159810224833.

30

Sharma, R., and S. Hutchinson. "Motion perceptibility and its application to active vision-based servo control." IEEE Transactions on Robotics and Automation 13, no. 4 (1997): 607–17. http://dx.doi.org/10.1109/70.611333.

31

Jabbari Asl, Hamed, and Ton Duc Do. "Asymptotic Vision-Based Tracking Control of the Quadrotor Aerial Vehicle." Mathematical Problems in Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/954976.

Abstract:
This paper proposes an image-based visual servo (IBVS) controller for the 3D translational motion of the quadrotor unmanned aerial vehicles (UAV). The main purpose of this paper is to provide asymptotic stability for vision-based tracking control of the quadrotor in the presence of uncertainty in the dynamic model of the system. The aim of the paper also includes the use of flow of image features as the velocity information to compensate for the unreliable linear velocity data measured by accelerometers. For this purpose, the mathematical model of the quadrotor is presented based on the optic flow of image features which provides the possibility of designing a velocity-free IBVS controller with considering the dynamics of the robot. The image features are defined from a suitable combination of perspective image moments without using the model of the object. This property allows the application of the proposed controller in unknown places. The controller is robust with respect to the uncertainties in the translational dynamics of the system associated with the target motion, image depth, and external disturbances. Simulation results and a comparison study are presented which demonstrate the effectiveness of the proposed approach.
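This entry builds on classical image-based visual servoing, where the camera velocity is computed from the image-feature error through the pseudo-inverse of the interaction matrix, v = -λ L⁺ (s - s*). The sketch below implements that textbook law for point features; the paper's moment-based features, optic-flow velocity estimate, and robustness terms are not reproduced.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z,  x * y,      -(1 + x ** 2),  y],
        [0.0,      -1.0 / Z, y / Z,  1 + y ** 2, -x * y,        -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera twist v = -lambda * L^+ * (s - s*), stacking all point features."""
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ error

# Toy example with four normalized points slightly offset from their goals.
current = [(0.12, 0.10), (-0.08, 0.11), (-0.09, -0.12), (0.10, -0.09)]
desired = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
depths  = [1.0, 1.0, 1.0, 1.0]          # assumed constant depth [m]
print(ibvs_velocity(current, desired, depths))   # [vx, vy, vz, wx, wy, wz]
```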
32

Kim, Wooh Yun, Ji Wook Kwon, and Ji Won Seo. "Formation Control of Quadrotor UAVs by Vision-Based Positioning." Applied Mechanics and Materials 798 (October 2015): 282–86. http://dx.doi.org/10.4028/www.scientific.net/amm.798.282.

Abstract:
In this paper, a formation control method of quadrotor Unmanned Aerial Vehicles (UAVs) by vision-based positioning is presented. The relative positions and attitudes of two UAVs with respect to a visual marker attached to the third UAV is estimated by a camera calibration method. Based on the estimated positions and attitudes, two UAVs are controlled to the desired positions to form a given formation with respect to the third UAV. A simplified dynamics model of a quadrotor UAV is utilized to design a controller. The proposed formation control method is validated by an experiment with a motion capture system which provides the ground truth of the position data.
33

Fujita, Toyomi, Takayuki Tanaka, Satoru Takahashi, Hidenori Takauji, and Shun’ichi Kaneko. "Special Issue on Vision and Motion Control." Journal of Robotics and Mechatronics 27, no. 2 (April 20, 2015): 121. http://dx.doi.org/10.20965/jrm.2015.p0121.

Abstract:
Robot vision is an important robotics and mechatronics technology for realizing intelligent robot systems that work in the real world. Recent improvements in computer processing are enabling environment to be recognized and robot to be controlled based on dynamic high-speed, highly accurate image information. In industrial application, target objects are detected much more robustly and reliably through high-speed processing. In intelligent systems applications, security systems that detect human beings have recently been applied positively in computer vision. Another attractive application is recognizing actions and gestures by detecting human – an application that would enable human beings and robots to interact and cooperate more smoothly when robots observe and assist human partners. This key technology could be used for aiding the elderly and handicapped in practical environments such as hospital, home, and so on. This special issue covers topics on robot vision and motion control including dynamic image processing. These articles are certain to be both informative and interesting to robotics and mechatronics researchers. We thank the authors for submitting their work and for assisting during the review process. We also thank the reviewers for their dedicated time and effort.
34

Osman, Kawther, Jawhar Ghommam, Hasan Mehrjerdi, and Maarouf Saad. "Vision-based curved lane keeping control for intelligent vehicle highway system." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 233, no. 8 (November 12, 2018): 961–79. http://dx.doi.org/10.1177/0959651818810621.

Abstract:
This article addresses the coordinated longitudinal and lateral motion control for an intelligent vehicle highway system. The strategy of this work consists of defining the edges of the traveled lane using a vision sensor. According to the detected boundaries, a constrained path-following method is proposed to drive the longitudinal and the lateral vehicle’s motion. Error constraints of the intelligent vehicle highway system position are manipulated by including the function of barrier Lyapunov in designing the guidance algorithm for the intelligent vehicle highway system. To calculate the necessary forces that would steer the vehicle to the desired path, a control design is proposed that integrates the sign of the error for the compensation of the uncertain vehicle’s parameters. The Lyapunov function is later used to minimize the path-following errors and to guarantee a stable system. The efficiency of the developed approach is proved by numerical simulations.
35

Du, Qin Jun, Chao Sun, and Xing Guo Huang. "Motion Control System Design of a Humanoid Robot Based on Stereo Vision." Applied Mechanics and Materials 55-57 (May 2011): 877–80. http://dx.doi.org/10.4028/www.scientific.net/amm.55-57.877.

Abstract:
Vision is an important means of the humanoid robot to get external environmental information; vision system is an important part of humanoid robot. The system of a humanoid robot with the functions of visual perception and object manipulation is very complex because the body of the humanoid robot possesses many joint units and sensors. Two computers linked by Memolink communication unit is adopted to meet the needs of real time motion control and visual information processing tasks. The motion control system included coordination control computer, the distributed DSP joint controllers, DC motor drivers and sensors. Linux and real-time RT-Linux OS are used as the operating system to achieve the real-time control capability.
36

Shangguan, Zeyu, Lingyu Wang, Jianquan Zhang, and Wenbo Dong. "Vision-Based Object Recognition and Precise Localization for Space Body Control." International Journal of Aerospace Engineering 2019 (March 25, 2019): 1–10. http://dx.doi.org/10.1155/2019/7050915.

Abstract:
The space motion control is an important issue on space robot, rendezvous and docking, small satellite formation, and some on-orbit services. The motion control needs robust object detection and high-precision object localization. Among many sensing systems such as laser radar, inertia sensors, and GPS navigation, vision-based navigation is more adaptive to noncontact applications in the close distance and in high-dynamic environment. In this work, a vision-based system serving for a free-floating robot inside the spacecraft is introduced, and the method to measure space body 6-DOF position-attitude is presented. At first, the deep-learning method is applied for robust object detection in the complex background, and after the object is navigated at the close distance, the reference marker is used for more precise matching and edge detection. After the accurate coordinates are gotten in the image sequence, the object space position and attitude are calculated by the geometry method and used for fine control. The experimental results show that the recognition method based on deep-learning at a distance and marker matching in close range effectively eliminates the false target recognition and improves the precision of positioning at the same time. The testing result shows the recognition accuracy rate is 99.8% and the localization precision is far less than 1% in 1.5 meters. The high-speed camera and embedded electronic platform driven by GPU are applied for accelerating the image processing speed so that the system works at best by 70 frames per second. The contribution of this work is to introduce the deep-learning method for precision motion control and in the meanwhile ensure both the robustness and real time of the system. It aims at making such vision-based system more practicable in the real-space applications.
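The "geometry method" for recovering position and attitude from matched marker points is commonly realised as a Perspective-n-Point solve. Below is a minimal OpenCV sketch with made-up marker geometry, intrinsics, and pixel coordinates; it illustrates the principle rather than the paper's pipeline.

```python
import numpy as np
import cv2

# Assumed square marker, 10 cm on a side, corner coordinates in the marker frame [m].
object_points = np.array([[-0.05,  0.05, 0.0],
                          [ 0.05,  0.05, 0.0],
                          [ 0.05, -0.05, 0.0],
                          [-0.05, -0.05, 0.0]], dtype=np.float64)

# Assumed pinhole intrinsics and the detected corner pixels in one image.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
image_points = np.array([[300.0, 200.0],
                         [380.0, 202.0],
                         [378.0, 280.0],
                         [302.0, 278.0]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)          # 3x3 attitude of the marker in the camera frame
print("translation [m]:", tvec.ravel())
print("rotation matrix:\n", R)
```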
37

Park, Jaehong, Wonsang Hwang, Wook Bahn, Chang-hun Lee, Tae-il Kim, Muhammad Muneeb Shaikh, Kwang-soo Kim, and Dong-il “Dan” Cho. "Pan/Tilt Camera Control for Vision Tracking System Based on the Robot Motion and Vision Information." IFAC Proceedings Volumes 44, no. 1 (January 2011): 3165–70. http://dx.doi.org/10.3182/20110828-6-it-1002.01781.

38

Hsu, J.-C., R.-H. Lin, and E. C. Yeh. "Vision-based motion measurement by directly extracting image features for vehicular steering control." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 211, no. 4 (April 1, 1997): 277–89. http://dx.doi.org/10.1243/0954407971526434.

Abstract:
An image sensing technique is developed using image plane analysis to measure the vehicular motion variables through an in-vehicle camera. A new measurement model is derived to directly connect the image features of vanishing point and base point with the desired motion variables comprising heading angle, lateral deviation, yaw rate and sideslip angle of the vehicle. These are useful in vehicular steering control applications such as automatic and four-wheel steering systems for providing feedback motion information in a convenient way. In order to test the proposed vision-based measuring scheme, a computerized road scene is simulated as the test sample using a newly proposed model for curvature-based road generation. Finally, experimental works are performed using the real road scene to verify the image sensing method. Consistent results are obtained by comparing with other measurements from a yaw rate gyro and from vehicular traces left on the road.
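The measurement model described above ties the lane vanishing point to the vehicle heading angle; under an idealised pinhole model with negligible roll, the heading is roughly the arctangent of the vanishing point's horizontal offset over the focal length. The sketch below shows only that simplified relation, not the paper's full derivation, and the numbers are invented.

```python
import math

def heading_from_vanishing_point(u_vp, cx, fx):
    """Approximate heading angle [rad] of a forward-looking camera relative to the lane.

    u_vp -- horizontal pixel coordinate of the lane vanishing point
    cx   -- principal point x-coordinate [px]
    fx   -- focal length [px]
    Assumes negligible camera roll; an idealisation of the image-feature model
    discussed in the abstract, not the paper's full measurement model.
    """
    return math.atan2(u_vp - cx, fx)

print(math.degrees(heading_from_vanishing_point(u_vp=352.0, cx=320.0, fx=800.0)))
```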
39

Yang, Bo, and Chao Liu. "Robust 3D Motion Tracking for Vision-Based Control in Robotic Heart Surgery." Asian Journal of Control 16, no. 3 (October 8, 2013): 632–45. http://dx.doi.org/10.1002/asjc.785.

40

Lee, Daeho, and Youngtae Park. "Vision-based remote control system by motion detection and open finger counting." IEEE Transactions on Consumer Electronics 55, no. 4 (November 2009): 2308–13. http://dx.doi.org/10.1109/tce.2009.5373803.

41

Shakernia, Omid, Yi Ma, T. John Koo, and Shankar Sastry. "Landing an Unmanned Air Vehicle: Vision Based Motion Estimation and Nonlinear Control." Asian Journal of Control 1, no. 3 (October 22, 2008): 128–45. http://dx.doi.org/10.1111/j.1934-6093.1999.tb00014.x.

42

Tseng, Yuan-Wei, Tsung-Wui Hung, Chung-Long Pan, and Rong-Ching Wu. "Motion Control System of Unmanned Railcars Based on Image Recognition." Applied System Innovation 2, no. 1 (March 5, 2019): 9. http://dx.doi.org/10.3390/asi2010009.

Abstract:
The main purpose of this paper is to construct an autopilot system for unmanned railcars based on computer vision technology in a fixed luminous environment. Four graphic predefined signs of different colors and shapes serve as motion commands of acceleration, deceleration, reverse and stop for the motion control system of railcars based on image recognition. The predefined signs’ strong classifiers were trained based on Haar-like feature training and AdaBoosting from Open Source Computer Vision Library (OpenCV). Comprehensive system integrations such as hardware, device drives, protocols, an application program in Python and man machine interface have been properly done. The objectives of this research include: (1) Verifying the feasibility of graphic predefined signs serving as commands of a motion control system of railcars with computer vision through experiments; (2) Providing reliable solutions for motion control of unmanned railcars, based on image recognition at affordable cost. The experiment results successfully verify the proposed methodology and integrated system. In the main program, every predefined sign must be detected at least three times in consecutive images within 0.2 s before the system confirms the detection. This digital filter like feature can filter out false detections and make the correct rate of detections close to 100%. After detecting a predefined sign, it was observed that the system could generate new motion commands to drive the railcars within 0.3 s. Therefore, both real time performance and the precision of the system are good. Since the sensing and control devices of the proposed system consist of computer, camera and predefined signs only, both the implementation and maintenance costs are very low. In addition, the proposed system is immune to electromagnetic interference, so it is ideal to merge into popular radio Communication Based Train Control (CBTC) systems in railways to improve the safety of operations.
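The detection pipeline above (Haar-like features and AdaBoost cascades from OpenCV, plus a require-several-consecutive-hits filter) can be sketched roughly as follows. The cascade file name and camera index are placeholders, and the three-hit confirmation is a simplified stand-in for the paper's 0.2 s rule.

```python
import cv2

# Placeholder path: a cascade trained on one of the predefined signs (not provided here).
cascade = cv2.CascadeClassifier("sign_accelerate_cascade.xml")
if cascade.empty():
    raise SystemExit("cascade file not found; train or download one first")

cap = cv2.VideoCapture(0)            # camera index is an assumption
consecutive_hits = 0
CONFIRM_AFTER = 3                    # detections in a row before a command is issued

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    consecutive_hits = consecutive_hits + 1 if len(detections) > 0 else 0
    if consecutive_hits >= CONFIRM_AFTER:
        print("sign confirmed -> issue ACCELERATE command")   # motion-command stub
        consecutive_hits = 0

cap.release()
```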
43

Mirisola, Luiz G. B., and Jorge Dias. "Exploiting Attitude Sensing in Vision-Based Navigation for an Airship." Journal of Robotics 2009 (2009): 1–16. http://dx.doi.org/10.1155/2009/854102.

Abstract:
An Attitude Heading Reference System (AHRS) is used to compensate for rotational motion, facilitating vision-based navigation above smooth terrain by generating virtual images to simulate pure translation movement. The AHRS combines inertial and earth field magnetic sensors to provide absolute orientation measurements, and our recently developed calibration routine determines the rotation between the frames of reference of the AHRS and the monocular camera. In this way, the rotation is compensated, and the remaining translational motion is recovered by directly finding a rigid transformation to register corresponding scene coordinates. With a horizontal ground plane, the pure translation model performs more accurately than image-only approaches, and this is evidenced by recovering the trajectory of our airship UAV and comparing with GPS data. Visual odometry is also fused with the GPS, and ground plane maps are generated from the estimated vehicle poses and used to evaluate the results. Finally, loop closure is detected by looking for a previous image of the same area, and an open source SLAM package based in 3D graph optimization is employed to correct the visual odometry drift. The accuracy of the height estimation is also evaluated against ground truth in a controlled environment.
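Compensating camera rotation with an AHRS, as described above, amounts to warping each frame by the infinite homography H = K R K⁻¹ so that only translational motion remains between views. A hedged OpenCV sketch follows; the intrinsics, rotation, and image are illustrative, and the AHRS-camera calibration step is omitted.

```python
import numpy as np
import cv2

def derotate(image, K, R_ref_from_cam):
    """Warp an image so it looks as if the camera had not rotated.

    K               -- 3x3 camera intrinsic matrix
    R_ref_from_cam  -- rotation mapping current camera-frame coordinates into the
                       reference (derotated) frame, derived from the AHRS attitude
    For rotation-only motion the pixel mapping is the infinite homography
    H = K * R * K^-1.
    """
    H = K @ R_ref_from_cam @ np.linalg.inv(K)
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))

# Illustrative numbers: a 5-degree roll about the optical axis and generic intrinsics.
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
a = np.deg2rad(5.0)
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
img = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
print(derotate(img, K, R).shape)   # (480, 640, 3)
```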
44

Dirik, Castillo, and Kocamaz. "Gaze-Guided Control of an Autonomous Mobile Robot Using Type-2 Fuzzy Logic." Applied System Innovation 2, no. 2 (April 24, 2019): 14. http://dx.doi.org/10.3390/asi2020014.

Abstract:
Motion control of mobile robots in a cluttered environment with obstacles is an important problem. It is unsatisfactory to control a robot’s motion using traditional control algorithms in a complex environment in real time. Gaze tracking technology has brought an important perspective to this issue. Gaze guided driving a vehicle based on eye movements supply significant features of nature task to realization. This paper presents an intelligent vision-based gaze guided robot control (GGC) platform that uses a user-computer interface based on gaze tracking enables a user to control the motion of a mobile robot using eyes gaze coordinate as inputs to the system. In this paper, an overhead camera, eyes tracking device, a differential drive mobile robot, vision and interval type-2 fuzzy inference (IT2FIS) tools are utilized. The methodology incorporates two basic behaviors; map generation and go-to-goal behavior. Go-to-goal behavior based on an IT2FIS is more soft and steady progress in data processing with uncertainties to generate better performance. The algorithms are implemented in the indoor environment with the presence of obstacles. Experiments and simulation results indicated that intelligent vision-based gaze guided robot control (GGC) system can be successfully applied and the IT2FIS can successfully make operator intention, modulate speed and direction accordingly.
45

Ho, Chao Ching, and C. L. Shih. "Machine Vision Based Tracking Control of a Ball-Beam System." Key Engineering Materials 381-382 (June 2008): 301–4. http://dx.doi.org/10.4028/www.scientific.net/kem.381-382.301.

Abstract:
The dynamic behavior of a ball-beam system is highly nonlinear and its characteristic is difficult to define. In this paper we present a new ball-beam balancing control system using machine vision to feedback the beam angle and ball position on the beam. Adaptive threshold based continuously mean shift vision tracking algorithm is applied to record the ball position and the beam angle with highly captured frame-rate. The proposed vision tracking algorithm is tolerant to lighting influence, highly computing efficiency and more robust than traditional template pattern matching or edge detection algorithm under non-ideal environment. The vision tracking performance is experimentally tested on a ball-beam benchmark system, where a PD controller is applied to control the motion of the ball to maintain balance. Experimental result shows that the beam angle measurement, ball tracking and balancing control of the vision feedback system are robust, accurate and highly efficient.
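The continuously adaptive mean-shift (CamShift) tracker mentioned above is available directly in OpenCV. The sketch below tracks a coloured ball from a back-projected hue histogram; the camera index, initial window, and termination criteria are assumptions, and the adaptive-threshold refinements described in the paper are not included.

```python
import cv2

cap = cv2.VideoCapture(0)                      # camera index is an assumption
ok, frame = cap.read()
if not ok:
    raise SystemExit("no camera frame available")

x, y, w, h = 300, 200, 40, 40                  # assumed initial window around the ball
track_window = (x, y, w, h)

# Hue histogram of the initial region, used for back projection in later frames.
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Stop each CamShift iteration after 10 steps or a sub-pixel move.
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    (cx, cy), _, _ = rot_rect                  # ball centre feeds the balancing controller
    print(f"ball at ({cx:.0f}, {cy:.0f})")

cap.release()
```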
46

Han, SangUk, and SangHyun Lee. "A vision-based motion capture and recognition framework for behavior-based safety management." Automation in Construction 35 (November 2013): 131–41. http://dx.doi.org/10.1016/j.autcon.2013.05.001.

47

Yuan, Cao, Ma Lianchuan, and Weigang Ma. "Mobile Target Tracking Based on Hybrid Open-Loop Monocular Vision Motion Control Strategy." Discrete Dynamics in Nature and Society 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/690576.

Abstract:
This paper proposes a new real-time target tracking method based on the open-loop monocular vision motion control. It uses the particle filter technique to predict the moving target’s position in an image. Due to the properties of the particle filter, the method can effectively master the motion behaviors of the linear and nonlinear. In addition, the method uses the simple mathematical operation to transfer the image information in the mobile target to its real coordinate information. Therefore, it requires few operating resources. Moreover, the method adopts the monocular vision approach, which is a single camera, to achieve its objective by using few hardware resources. Firstly, the method evaluates the next time’s position and size of the target in an image. Later, the real position of the objective corresponding to the obtained information is predicted. At last, the mobile robot should be controlled in the center of the camera’s vision. The paper conducts the tracking test to the L-type and the S-type and compares with the Kalman filtering method. The experimental results show that the method achieves a better tracking effect in the L-shape experiment, and its effect is superior to the Kalman filter technique in the L-type or S-type tracking experiment.
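A generic particle filter of the kind described above runs a predict-weight-resample cycle over candidate target positions in the image plane. The sketch below uses a placeholder Gaussian likelihood around a known "true" position; the paper's appearance model and monocular coordinate transfer are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, likelihood, motion_std=5.0):
    """One predict-update-resample cycle for 2D image-plane target tracking."""
    # Predict: random-walk motion model in pixel coordinates.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by how well it explains the current image.
    weights = np.array([likelihood(p) for p in particles])
    weights /= weights.sum() + 1e-12
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Placeholder likelihood: the target actually sits at pixel (200, 150).
true_pos = np.array([200.0, 150.0])
likelihood = lambda p: np.exp(-np.sum((p - true_pos) ** 2) / (2 * 20.0 ** 2))

particles = rng.uniform(0, 320, size=(500, 2))
for _ in range(20):
    particles = particle_filter_step(particles, likelihood)
print("estimated target position:", particles.mean(axis=0))
```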
48

Futagami, Takuya, Noboru Hayasaka, and Takao Onoye. "Evaluation for Energy Savings in Occupancy Lighting Control using Vision-based Motion Sensor." Transactions of the Institute of Systems, Control and Information Engineers 33, no. 5 (May 15, 2020): 139–48. http://dx.doi.org/10.5687/iscie.33.139.

49

Wang, Xianlun, and Longfei Chen. "A Vision-Based Coordinated Motion Scheme for Dual-Arm Robots." Journal of Intelligent & Robotic Systems 97, no. 1 (May 31, 2019): 67–79. http://dx.doi.org/10.1007/s10846-019-01035-9.

50

Aladem, Mohamed, Stanley Baek, and Samir A. Rawashdeh. "Evaluation of Image Enhancement Techniques for Vision-Based Navigation under Low Illumination." Journal of Robotics 2019 (March 20, 2019): 1–15. http://dx.doi.org/10.1155/2019/5015741.

Abstract:
Cameras are valuable sensors for robotics perception tasks. Among these perception tasks are motion estimation, localization, and object detection. Cameras are attractive sensors because they are passive and relatively cheap and can provide rich information. However, being passive sensors, they rely on external illumination from the environment which means that their performance degrades in low-light conditions. In this paper, we present and investigate four methods to enhance images under challenging night conditions. The findings are relevant to a wide range of feature-based vision systems, such as tracking for augmented reality, image registration, localization, and mapping, as well as deep learning-based object detectors. As autonomous mobile robots are expected to operate under low-illumination conditions at night, evaluation is based on state-of-the-art systems for motion estimation, localization, and object detection.
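One standard enhancement option for the low-illumination setting evaluated above is contrast-limited adaptive histogram equalisation on the luminance channel. The OpenCV sketch below shows that single technique with placeholder file names and parameters; it is only one of several methods such evaluations compare.

```python
import cv2

def enhance_low_light(bgr_image, clip_limit=3.0, tile_grid=(8, 8)):
    """Apply CLAHE to the luminance channel of a dark image."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

# Placeholder file name; any underexposed night image will do.
img = cv2.imread("night_scene.png")
if img is not None:
    cv2.imwrite("night_scene_clahe.png", enhance_low_light(img))
```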
