
Journal articles on the topic 'Machine (robot) vision system'



Consult the top 50 journal articles for your research on the topic 'Machine (robot) vision system.'




1

Jiménez Moreno, Robinson, Oscar Aviles, and Ruben Darío Hernández Beleño. "Humanoid Robot Cooperative System by Machine Vision." International Journal of Online Engineering (iJOE) 13, no. 12 (2017): 162. http://dx.doi.org/10.3991/ijoe.v13i12.7594.

Abstract:
This article presents a supervised position-control system, based on image processing, for cooperative work between two autonomous humanoid robots. The first robot picks up an object and carries it to the second robot, which then places it at an endpoint; the task is accomplished through straight-line trajectories and 180-degree turns. A Microsoft Kinect is used to find the exact spatial position of each robot and of the reference object: colour-space conversion and filtering are applied to the RGB camera's images, and the result is combined with information from the depth sensor to obtain each final location. Algorithms developed in C# command the two robots to work together to transport the reference object from an initial point, handing it from one robot to the other and depositing it at the endpoint. The experiment was repeated over the same trajectory under uniform lighting conditions, achieving a successful delivery of the object each time.
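The colour-space filtering step this abstract describes can be sketched in a few lines. The helper below is a hypothetical simplification (plain per-channel RGB thresholding in NumPy, with made-up thresholds), not the authors' Kinect pipeline:

```python
import numpy as np

def locate_color_blob(rgb, lo, hi):
    """Return the (row, col) centroid of pixels whose RGB values fall
    inside the [lo, hi] per-channel range, or None if nothing matches."""
    mask = np.all((rgb >= lo) & (rgb <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Tiny synthetic frame: a red patch on a black background.
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[2:4, 6:8] = (200, 20, 20)
print(locate_color_blob(frame, (150, 0, 0), (255, 60, 60)))  # → (2.5, 6.5)
```

In the paper the 2-D position found this way would be combined with the depth sensor's reading to give a 3-D location; the sketch only covers the image-plane half.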
2

Pereira, Tiago, Tiago Gameiro, José Pedro, Carlos Viegas, and N. M. Fonseca Ferreira. "Vision System for a Forestry Navigation Machine." Sensors 24, no. 5 (2024): 1475. http://dx.doi.org/10.3390/s24051475.

Abstract:
This article presents the development of a vision system designed to enhance the autonomous navigation capabilities of robots in complex forest environments. Leveraging RGBD and thermal cameras, specifically the Intel RealSense 435i and FLIR ADK, the system integrates diverse visual sensors with advanced image processing algorithms. This integration enables robots to make real-time decisions, recognize obstacles, and dynamically adjust their trajectories during operation. The article focuses on the architectural aspects of the system, emphasizing the role of sensors and the formulation of algorithms crucial for ensuring safety during robot navigation in challenging forest terrains. Additionally, the article discusses the training of two datasets specifically tailored to forest environments, aiming to evaluate their impact on autonomous navigation. Tests conducted in real forest conditions affirm the effectiveness of the developed vision system. The results underscore the system's pivotal contribution to the autonomous navigation of robots in forest environments.
3

Zhu, Yunxuan. "Further Perspective of Machine Vision in Industrial Robot Systems." Highlights in Science, Engineering and Technology 39 (April 1, 2023): 909–14. http://dx.doi.org/10.54097/hset.v39i.6675.

Abstract:
With the rapid development of automation systems and artificial intelligence, industrial robots have come to play significant roles in automated production processes. Thanks to improvements in computer chips, more and more vision algorithms are able to run on industrial robot systems, and robotic systems based on visual recognition are increasingly replacing those based on traditional sensors. However, much work remains to be done in real-world situations. This research focuses on the application of machine vision in the field of industrial robot systems. It first gives an overall summary of industrial robot systems and machine vision as well as their applications. It then gives an example of a sorting system and expounds its strategy. The conclusion is that machine vision can be widely used in industrial robot systems because of its excellent performance, although problems remain that may be solved by further development of algorithms.
4

Oh, Je-Keun, Giho Jang, Semin Oh, et al. "Bridge inspection robot system with machine vision." Automation in Construction 18, no. 7 (2009): 929–41. http://dx.doi.org/10.1016/j.autcon.2009.04.003.

5

Martyshkin, Alexey I. "Motion Planning Algorithm for a Mobile Robot with a Smart Machine Vision System." Nexo Revista Científica 33, no. 02 (2020): 651–71. http://dx.doi.org/10.5377/nexo.v33i02.10800.

Abstract:
This study is devoted to the challenges of motion planning for mobile robots with smart machine vision systems. Motion planning in environments with obstacles is a problem that must be dealt with when creating robots suitable for operation in real-world conditions. The solutions found to date are predominantly special-purpose and highly specialized, which makes it hard to judge how successfully they solve the problem of effective motion planning. Solutions with a narrow application field have been under development for a long time, yet no major breakthrough has been observed; only a systematic improvement in the characteristics of such systems can be noted. The purpose of this study is to develop and investigate a motion planning algorithm for a mobile robot with a smart machine vision system, which is also the research subject of this article. The study reviews domestic and foreign mobile robots that solve the motion planning problem in a known environment with unknown obstacles, and considers local, global, and individual navigation methods for mobile robots. In the course of the work, a mobile robot prototype was built that is capable of recognizing obstacles of regular geometric shapes and of planning and correcting its movement path. Environment objects are identified and classified as obstacles by means of digital image processing methods and algorithms. The distance to an obstacle and the relative angle are calculated by photogrammetric methods; image quality is improved by linear contrast enhancement and optimal linear filtering using the Wiener-Hopf equation. Virtual tools for testing mobile robot motion algorithms were reviewed, leading to the selection of the Webots software package for prototype testing. In testing, the mobile robot successfully identified the obstacle, planned a path in accordance with the obstacle avoidance algorithm, and continued moving to its destination. Conclusions are drawn regarding the completed research.
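The photogrammetric distance estimate mentioned in this abstract reduces, in the simplest pinhole-camera case, to a one-line formula; the function and the numbers below are illustrative assumptions, not values from the study:

```python
def distance_from_height(focal_px, real_height_m, pixel_height):
    """Pinhole model: an object of real height H at distance Z projects
    to h = f * H / Z pixels, so Z = f * H / h."""
    return focal_px * real_height_m / pixel_height

# A 0.30 m tall obstacle imaged 60 px tall by a camera with f = 800 px:
print(distance_from_height(800, 0.30, 60))  # → 4.0 (metres)
```

The relative angle can be recovered similarly from the object's horizontal pixel offset and the focal length; both estimates assume a calibrated camera.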
6

Fu, Yan. "Application of Machine Vision Recognition System in Mobile Robot." Journal of Physics: Conference Series 2083, no. 4 (2021): 042036. http://dx.doi.org/10.1088/1742-6596/2083/4/042036.

Abstract:
To solve the problem of autonomous recognition for a hexapod robot and to further the intelligent, human-oriented development of robots, a simple visual recognition system based on OpenMV is designed. OpenMV is the main platform, a hexapod robot the machine carrier, Python the main development language, and C the auxiliary development language, with image processing technology applied throughout, realizing a working visual recognition application.
7

Cai, Lin. "Development and Design of Smart-Robot Image Transmission and Processing System Based on On-Line Control." Applied Mechanics and Materials 602-605 (August 2014): 813–16. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.813.

Abstract:
With the rapid development of network technology, communication technology, multimedia technology, and maturing robot technology, networked robot control systems have gradually become a main direction of current robot research. A network-based robot is one that the public can operate remotely over a network: the idea is to integrate network technology and robot technology, controlling the robot through the network. In networked robots, machine vision plays an increasingly important role. When a robot is controlled in an unfamiliar environment, images serve an observational function: machine vision can recognize the robot's path from image features and can also provide a visual understanding of the observed space, so that an unfamiliar environment can be perceived and the robot controlled. In essence, the image transmission and processing involved in networked robot control belongs to the field of robot vision. A robot's vision system is a machine vision system: it uses a computer to realize human visual functions, with the objective of understanding the three-dimensional world [5]. This three-dimensional understanding refers to perceiving the shape, size, texture, distance, and motion characteristics of the observed objects, and it informs the conceptual design of the robot.
8

Xie, Xiang. "Industrial Robot Assembly Line Design Using Machine Vision." Journal of Robotics 2023 (March 30, 2023): 1–13. http://dx.doi.org/10.1155/2023/4409033.

Abstract:
To further improve the functional requirements and performance indicators of industrial robot assembly systems, and to let the vision system measure and recognize target positions on the assembly line more accurately, this article constructs a machine-vision-based robot assembly line system with obstacle detection and obstacle-avoiding path planning for the robot arm, and further improves the intelligence and accuracy of the assembly line through the design and optimization of the system's software modules. Experimental verification of the positioning error of an eye-to-hand binocular vision system and an eye-in-hand monocular vision system shows that the proposed system meets the design accuracy requirements of less than 0.1 mm in the x/y directions and less than 1 mm in the depth direction, verifying its feasibility and high accuracy.
9

Ho, Chao Ching, Ming Chen Chen, and Chih Hao Lien. "Machine Vision-Based Intelligent Fire Fighting Robot." Key Engineering Materials 450 (November 2010): 312–15. http://dx.doi.org/10.4028/www.scientific.net/kem.450.312.

Abstract:
Designing a visual monitoring system to detect fire flames is a complex task because a large amount of video data must be transmitted and processed in real time. In this work, an intelligent fire detection and fighting system is proposed that uses machine vision to locate fire flame positions and to control a mobile robot to approach the fire source. This real-time fire monitoring system uses a motion history detection algorithm to register possible fire positions in the transmitted video data and then analyses the spectral, spatial, and temporal characteristics of the fire regions in the image sequences. The fire detecting and fighting system is based on a visual servoing feedback framework with portable components, off-the-shelf commercial hardware, and embedded programming. Experimental results show that the proposed intelligent fire fighting system reliably detects the fire flame and extinguishes the fire source.
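The motion history detection step the abstract mentions can be approximated as below. This is a minimal NumPy sketch under assumed threshold, decay, and tau values; the real system also analyses the spectral and temporal characteristics of candidate regions before declaring fire:

```python
import numpy as np

def update_motion_history(mhi, prev, curr, thresh=25, decay=1, tau=255):
    """Motion history image: set pixels that changed between frames to tau,
    and decay the rest toward zero so recent motion stays brightest."""
    motion = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    mhi = np.maximum(mhi.astype(int) - decay, 0)
    mhi[motion] = tau
    return mhi.astype(np.uint8)

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 1] = 200                       # one flickering pixel between frames
mhi = np.zeros((4, 4), dtype=np.uint8)
mhi = update_motion_history(mhi, prev, curr)
print(int(mhi[1, 1]), int(mhi[0, 0]))  # → 255 0
```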
10

Rahmadian, Reza, and Mahendra Widyartono. "Machine Vision and Global Positioning System for Autonomous Robotic Navigation in Agriculture: A Review." Journal of Information Engineering and Educational Technology 1, no. 1 (2017): 46. http://dx.doi.org/10.26740/jieet.v1n1.p46-54.

Abstract:
Interest in robotic agriculture has led to the development of agricultural robots that help improve farming operations and increase agricultural productivity. Much research has been conducted to increase the capability of robots to assist agricultural operations, leading to the development of autonomous robots. This development provides a means of reducing agriculture's dependency on operators and workers, as well as reducing the inaccuracy caused by human error. There are two important components for autonomous navigation: machine vision for guiding the robot through the crops, and GPS technology for guiding it through the agricultural fields.
11

Kabir, Raihan, Yutaka Watanobe, Md Rashedul Islam, Keitaro Naruse, and Md Mostafizer Rahman. "Unknown Object Detection Using a One-Class Support Vector Machine for a Cloud–Robot System." Sensors 22, no. 4 (2022): 1352. http://dx.doi.org/10.3390/s22041352.

Abstract:
Inter-robot communication and high computational power are challenging issues for deploying indoor mobile robot applications with sensor data processing. Thus, this paper presents an efficient cloud-based multirobot framework with inter-robot communication and high computational power to deploy autonomous mobile robots for indoor applications. Deployment of usable indoor service robots requires uninterrupted movement and enhanced robot vision with a robust classification of objects and obstacles using vision sensor data in the indoor environment. However, state-of-the-art methods face degraded indoor object and obstacle recognition for multiobject vision frames and unknown objects in complex and dynamic environments. From these points of view, this paper proposes a new object segmentation model to separate objects from a multiobject robotic view-frame. In addition, we present a support vector data description (SVDD)-based one-class support vector machine for detecting unknown objects in an outlier detection fashion for the classification model. A cloud-based convolutional neural network (CNN) model with a SoftMax classifier is used for training and identification of objects in the environment, and an incremental learning method is introduced for adding unknown objects to the robot knowledge. A cloud–robot architecture is implemented using a Node-RED environment to validate the proposed model. A benchmarked object image dataset from an open resource repository and images captured from the lab environment were used to train the models. The proposed model showed good object detection and identification results. The performance of the model was compared with three state-of-the-art models and was found to outperform them. Moreover, the usability of the proposed system was enhanced by the unknown object detection, incremental learning, and cloud-based framework.
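The one-class classification used here for unknown-object detection can be illustrated with scikit-learn's `OneClassSVM` (a ν-SVM formulation closely related to SVDD). The sketch below trains on synthetic 2-D "features" standing in for CNN feature vectors; it is a generic illustration, not the paper's model or data:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Train the one-class model on "known object" feature vectors only.
rng = np.random.default_rng(0)
known_features = rng.normal(0.0, 0.3, size=(200, 2))  # stand-in for CNN features
detector = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(known_features)

print(detector.predict([[0.0, 0.0]]))   # → [1]  (known object)
print(detector.predict([[5.0, 5.0]]))   # → [-1] (unknown object, flagged as outlier)
```

In the paper's pipeline, a sample flagged as an outlier would be routed to the incremental-learning step so it can be added to the robot's knowledge.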
12

Li, Xing Ze, Ling Zhu, and Yi Hua. "Embedded Robot Vision System Based on DSP." Applied Mechanics and Materials 734 (February 2015): 168–71. http://dx.doi.org/10.4028/www.scientific.net/amm.734.168.

Abstract:
To address the real-time limitations of industrial robot vision systems, an embedded robot vision system based on a DSP microprocessor is designed. The system uses a CCD camera and an ultrasonic sensor to collect information about the target environment, and the DSP processes the images and recognizes the target. Results are then sent wirelessly through the communication module to the host computer, providing target-object information to the robot control layer. The design covers the hardware and software, image collection and processing, and robot control, and meets the real-time requirements of a machine vision system.
13

Che, Chang, Haotian Zheng, Zengyi Huang, Wei Jiang, and Bo Liu. "Intelligent robotic control system based on computer vision technology." Applied and Computational Engineering 64, no. 1 (2024): 150–55. http://dx.doi.org/10.54254/2755-2721/64/20241373.

Abstract:
Computer vision is a simulation of biological vision using computers and related equipment, and an important part of the field of artificial intelligence. Its research goal is to enable computers to recognize three-dimensional environmental information from two-dimensional images. Computer vision builds on image processing, signal processing, probability and statistical analysis, computational geometry, neural networks, machine learning theory, and computer information processing, analysing and processing visual information by computer. The article explores the intersection of computer vision technology and robotic control, highlighting its importance in fields such as industrial automation, healthcare, and environmental protection. Computer vision technology, which simulates human visual observation, plays a crucial role in enabling robots to perceive and understand their surroundings, leading to advancements in tasks like autonomous navigation, object recognition, and waste management. By integrating computer vision with robot control, robots gain the ability to interact intelligently with their environment, improving efficiency, quality, and environmental sustainability. The article also discusses methodologies for developing intelligent garbage sorting robots, emphasizing the application of computer vision image recognition, feature extraction, and reinforcement learning techniques. Overall, the integration of computer vision technology with robot control holds promise for enhancing human-computer interaction, intelligent manufacturing, and environmental protection efforts.
14

Binggao, He, Fan Caitian, Mu Xinbei, and Wang Rui. "MOBILE ROBOT TRACKING SYSTEM BASED ON MACHINE VISION AND LASER RADAR." Вестник ТОГУ, no. 2(73) (June 24, 2024): 63–70. http://dx.doi.org/10.38161/1996-3440-2024-2-63-70.

Abstract:
The proposed solution addresses the issue of insufficient real-time performance and accuracy in mobile robot path tracking by introducing a system that combines machine vision and laser radar. In this study, the Broadcom BCM2711 microcontroller chip is connected to the RS232 communication interface for transmitting information to the ARM embedded processor. Users can access position distance, direction, and other robot-related data through the man-machine interface's LCD display in a Windows operating system environment. By initiating an adaptive position tracking algorithm program identified by the robot within the position tracking unit, mobile position tracking of the robot is achieved. Experimental results demonstrate significant improvements in both real-time performance and accuracy of this mobile robot tracking system.
15

Zhang, Xinwei, Jianming Qi, Xinsheng Jiao, and Zhiyong Zhou. "Realization of calibration control algorithm for curved surface-oriented machine vision." Journal of Physics: Conference Series 2183, no. 1 (2022): 012026. http://dx.doi.org/10.1088/1742-6596/2183/1/012026.

Abstract:
To address the positioning deviation of composite-material surface metallization spraying equipment when spraying a workpiece, and to ensure processing quality, a robot vision calibration control algorithm is designed to complete positioning calibration during the thermal spraying process. The vision calibration system is composed of a control system, a robot system, a vision system, and a laser rangefinder. When the system starts, the laser rangefinder transmits measurement data to the control system in real time; XYZABC position compensation data is computed by the control algorithm and transmitted to the robot to complete the vision calibration. Taking automatic surface-metallization spraying equipment for composite materials as an example, the robot system's planar motion radius is at least 2.55 m, its travel in the height direction is 3 m, the repeat positioning accuracy is ±0.05 mm, and the mold positioning accuracy is ±0.08 mm. With this, the metallization spraying of the composite material surface is completed.
16

Tan, K. S., M. N. Ayob, H. B. Hassrizal, et al. "Integrating Vision System to a Pick and Place Cartesian Robot." Journal of Physics: Conference Series 2107, no. 1 (2021): 012037. http://dx.doi.org/10.1088/1742-6596/2107/1/012037.

Abstract:
A vision-aided pick-and-place Cartesian robot is a combination of a machine vision system and a robotic system that communicate with each other to perform object sorting. In this project, a machine vision algorithm for object sorting is proposed to solve sorting failures caused by imperfect image edges and differing colours. The image is acquired by a camera, followed by image calibration. Pre-processing is performed using HSI colour-space transformation, a Gaussian filter for image filtering, Otsu's method for image binarization, and Canny edge detection. LabVIEW edge-based geometric matching is selected for template matching. After the vision application analyses the image, an electrical signal is sent to the robotic arm for object sorting if the acquired image matches the template image. The proposed machine vision algorithm yielded accurate template-matching scores from 800 to 1000 under different disturbances and conditions, and it provides more customizable parameters for each method while improving the accuracy of template matching.
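Of the preprocessing steps listed, Otsu's method is the easiest to show in isolation. The implementation below is a plain NumPy version run on a synthetic bimodal image (the paper's pipeline is built in LabVIEW, so this is only an illustration of the algorithm, not the authors' code):

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the threshold that maximises between-class variance
    over a uint8 image's 256-bin histogram (pixels < t form class 0)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal image: dark background (~20) with a bright object (~200).
img = np.full((8, 8), 20, dtype=np.uint8)
img[2:6, 2:6] = 200
t = otsu_threshold(img)
print(20 < t <= 200)  # → True: the threshold separates the two modes
```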
17

Halawi Ghoson, Nourhan, Nisar Hakam, Zohreh Shakeri, Vincent Meyrueis, Stéphane Loubère, and Khaled Benfriha. "TOWARDS REMOTE CONTROL OF MANUFACTURING MACHINES THROUGH ROBOT VISION SENSORS." Proceedings of the Design Society 3 (June 19, 2023): 3601–10. http://dx.doi.org/10.1017/pds.2023.361.

Abstract:
The remote management of equipment is part of the functionality granted by the design principles of Industry 4.0. However, some critical operations are still managed by operators; machine setup and initialization are a significant illustration. Since initialization is a repetitive task, industrial robots with a smart vision system can undertake these duties, enhancing the autonomy and flexibility of the manufacturing process. The smart vision system is considered essential for the implementation of several characteristics of Industry 4.0. This paper introduces a novel solution for controlling manufacturing machines using a camera embedded on the robot. The implementation requires the development of an interactive interface, designed in accordance with the supervision system known as the Manufacturing Execution System. The framework is implemented inside a manufacturing cell, demonstrating a quick response time and an improvement between the cameras.
18

Rahmadian, Reza, and Mahendra Widyartono. "Harvesting System for Autonomous Robotic in Agriculture: A Review." INAJEEE Indonesian Journal of Electrical and Eletronics Engineering 2, no. 1 (2019): 1. http://dx.doi.org/10.26740/inajeee.v2n1.p1-6.

Abstract:
Modern technology has led to the development of agricultural robots that help increase agricultural productivity. Numerous studies have been conducted to help increase the capability of robots to assist agricultural operations, leading to the development of autonomous robots. The aim of this development is to help reduce agriculture's dependency on operators and workers, as well as to reduce the inaccuracy caused by human error. There are two important components for autonomous harvesting: machine vision for detecting the crops and guiding the robot through the field, and an actuator to grab or pick the crops or fruits.
19

Huang, Wensheng, and Hongli Xu. "Development of six-DOF welding robot with machine vision." Modern Physics Letters B 32, no. 34n36 (2018): 1840079. http://dx.doi.org/10.1142/s0217984918400791.

Abstract:
The application of machine vision to industrial robots is a hot topic in robot research nowadays. A welding robot with machine vision has been developed: its six degrees-of-freedom (DOF) manipulator reaches the welding point conveniently and flexibly, singularities in its motion trail are prevented, and the stability of the mechanism is fully guaranteed. A precision industrial camera captures the optical features of the workpiece on its CCD sensor, and the workpiece is identified and located through visual pattern-recognition algorithms based on grey-scale processing, on the gradient direction of edge pixels, or on geometric elements, so that high-speed visual acquisition, image preprocessing, feature extraction and recognition, and target location are integrated and hardware processing power is exploited. Another task is to plan the control strategy of the control system; the host computer software is programmed so that the multi-axis motion trajectory is optimized and servo control is accomplished. Finally, a prototype was developed, and validation experiments show that the welding robot achieves high stability, efficiency, and precision, even when welding joints are random and the workpiece contour is irregular.
20

Kuznetsova, Anna, Tatiana Maleva, and Vladimir Soloviev. "Using YOLOv3 Algorithm with Pre- and Post-Processing for Apple Detection in Fruit-Harvesting Robot." Agronomy 10, no. 7 (2020): 1016. http://dx.doi.org/10.3390/agronomy10071016.

Abstract:
A machine vision system for detecting apples in orchards was developed. The system was designed to be used in harvesting robots and is based on a YOLOv3 algorithm with special pre- and post-processing. The proposed pre- and post-processing techniques made it possible to adapt the YOLOv3 algorithm to be used in an apple-harvesting robot machine vision system, providing an average apple detection time of 19 ms with a share of objects being mistaken for apples at 7.8% and a share of unrecognized apples at 9.2%. Both the average detection time and error rates are less than in all known similar systems. The system can operate not only in apple-harvesting robots but also in orange-harvesting robots.
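YOLO-family detectors conventionally end with non-maximum suppression; the abstract does not say whether the authors' custom post-processing includes it, so the sketch below is a generic illustration of that standard step, not their method:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep highest-scoring boxes,
    drop any box overlapping an already-kept one by more than iou_thresh."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(int(i))
    return keep

# Two near-duplicate "apple" boxes plus one distinct detection:
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]
```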
21

Mao, Ziqi. "Design of Path Planning Robot Based on Machine Vision." Highlights in Science, Engineering and Technology 12 (August 26, 2022): 177–80. http://dx.doi.org/10.54097/hset.v12i.1451.

Abstract:
We established a path-planning mobile robot system based on machine vision. The robot has multiple sensors to scan for obstacles. After capturing images, the robot greyscales and processes them and finds the most efficient path using the ant colony algorithm. Finally, the robot reaches the destination through the cooperation of motors and other components under the control of the motherboard. Simulations prove that the path-planning mobile robot can find the most efficient way with its sensors and programs, and that it can reach its destination through the cooperation of its moving components.
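The greyscaling step mentioned here is the simplest part of the pipeline to show in code. A standard luminance-weighted conversion (ITU-R BT.601 weights, which are a common default and not necessarily the authors' exact choice) looks like:

```python
import numpy as np

def to_greyscale(rgb):
    """Luminance-weighted RGB-to-greyscale conversion (BT.601 weights)."""
    return (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

px = np.array([[[255, 0, 0]]], dtype=np.uint8)  # a single pure-red pixel
print(to_greyscale(px))  # → [[76]]
```

The resulting greyscale map is what an occupancy grid for the ant colony planner would typically be thresholded from.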
22

Ho, Chao Ching, You Min Chen, Tien Yun Chi, and Tzu Hsin Kuo. "Machine Vision-Based Automatic Placement System for Solenoid Housing." Key Engineering Materials 649 (June 2015): 9–13. http://dx.doi.org/10.4028/www.scientific.net/kem.649.9.

Abstract:
This paper proposes a machine vision-based, servo-controlled delta robotic system for solenoid housing placement. The system consists of a charge-coupled device camera and a delta robot. To begin the placement process, the solenoid housing targets inside the camera field were identified and used to guide the delta robot to the grabbing zone according to the calibrated homography transformation. To determine the angle of solenoid housing, image preprocessing was then implemented in order to rotate the target object to assemble with the solenoid coil. Finally, the solenoid housing was grabbed automatically and placed in the collecting box. The experimental results demonstrate that the proposed system can help to reduce operator fatigue and to achieve high-quality placements.
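Guiding the robot via a calibrated homography, as described here, amounts to one homogeneous matrix multiply per detected point. The matrix below is a made-up example (pure scale plus translation), not a real calibration:

```python
import numpy as np

def apply_homography(H, pt):
    """Map an image point (u, v) to robot-plane coordinates via a 3x3
    homography in homogeneous coordinates, then dehomogenize."""
    u, v = pt
    x, y, w = H @ np.array([u, v, 1.0])
    return float(x / w), float(y / w)

# Illustrative H: scale pixels by 0.5 and shift (pixels → millimetres).
H = np.array([[0.5, 0.0, 10.0],
              [0.0, 0.5, 20.0],
              [0.0, 0.0, 1.0]])
print(apply_homography(H, (100, 40)))  # → (60.0, 40.0)
```

In practice H is estimated once from point correspondences between the camera image and the delta robot's working plane, then reused for every grab.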
23

Nussibaliyeva, Arailym, Gani Sergazin, Gulzhamal Tursunbayeva, et al. "Development of an Artificial Vision for a Parallel Manipulator Using Machine-to-Machine Technologies." Sensors 24, no. 12 (2024): 3792. http://dx.doi.org/10.3390/s24123792.

Abstract:
This research focuses on developing an artificial vision system for a flexible delta robot manipulator and integrating it with machine-to-machine (M2M) communication to optimize real-time device interaction. This integration aims to increase the speed of the robotic system and improve its overall performance. The proposed combination of an artificial vision system with M2M communication can detect and recognize targets with high accuracy in real time within the limited space considered for positioning, further localization, and carrying out manufacturing processes such as assembly or sorting of parts. In this study, RGB images are used as input data for the MASK-R-CNN algorithm, and the results are processed according to the features of the delta robot arm prototype. The data obtained from MASK-R-CNN are adapted for use in the delta robot control system, considering its unique characteristics and positioning requirements. M2M technology enables the robot arm to react quickly to changes, such as moving objects or changes in their position, which is crucial for sorting and packing tasks. The system was tested under near real-world conditions to evaluate its performance and reliability.
24

Ehrenman, Gayle. "Eyes on the Line." Mechanical Engineering 127, no. 08 (2005): 25–27. http://dx.doi.org/10.1115/1.2005-aug-2.

Abstract:
This article discusses vision-enabled robots that are helping factories keep the production lines rolling, even when parts are out of place. The automotive industry was one of the earliest adopters of industrial robots and continues to be one of their biggest users, but industrial robots are now turning up in more unusual factory settings, including pharmaceutical production and packaging, consumer electronics assembly, machine tooling, and food packaging. No current market research is available that breaks down vision-enabled versus blind robot usage. However, all the major industrial robot manufacturers are turning out models that are vision-enabled; one manufacturer said that its entire current line of robots is vision-enabled. All it takes to change over the robot system is some fairly basic tooling changes to the robot's end-effector and some programming changes in the software. The combination of speed, relatively low cost, flexibility, and ease of use that vision-enabled robots offer is making an increasing number of factories consider putting another set of eyes on their lines.
APA, Harvard, Vancouver, ISO, and other styles
25

Liu, Zhen Yu, and He Wen Xu. "Application of Machine Vision Based on DM642 of Embedded System." Applied Mechanics and Materials 701-702 (December 2014): 428–32. http://dx.doi.org/10.4028/www.scientific.net/amm.701-702.428.

Full text
Abstract:
Taking industrial robot workpiece sorting as its background, this article introduces an embedded machine vision system based on the DM642. The system performs image preprocessing, feature extraction, image recognition, and related tasks on the DSP, and transmits detection results to the robot controller through a network interface. Experimental results show that the system can effectively solve the problem of sorting regular geometric workpieces and can meet the real-time and accuracy requirements of industrial applications.
APA, Harvard, Vancouver, ISO, and other styles
26

Paneru, Biplov, Bishwash Paneru, Ramhari Poudyal, Krishna Bikram Shah, Khem Narayan Poudyal, and Yam Krishna Poudel. "Automated Environmental Stewardship: A Ribbon-Cutting Robot with Machine Vision for Sustainable Operation." Jurnal Teknokes 17, no. 1 (2024): 8–19. http://dx.doi.org/10.35882/teknokes.v17i1.679.

Full text
Abstract:
This paper provides a novel way of automating ribbon-cutting ceremonies using a purpose-built robot with advanced computer vision capabilities. The system achieves an outstanding 92% accuracy rate when assessing image data, using a servo motor for ribbon identification, a motor driver for robot movement control, and nichrome wire for precision cutting. The robot's ability to recognize and interact with the ribbon is greatly improved by a Keras- and TensorFlow-based red-ribbon identification model, which achieved about 93% accuracy on the test set before deployment in the system. Implemented on a Raspberry Pi robot, the method is remarkably successful at automating ceremonial activities, removing the need for human intervention. This multidisciplinary approach ensures the precision and speed of ribbon-cutting events, representing a significant step forward in the merging of tradition and technology via the seamless integration of robots and computer vision.
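The red-ribbon identification the abstract describes relies on a trained Keras/TensorFlow model, but the underlying decision — does this frame contain enough "ribbon red"? — can be sketched with a crude colour heuristic. This is a hypothetical stand-in for illustration only; the function name and thresholds are assumptions, not the authors' code:

```python
def red_ratio(pixels):
    """Fraction of pixels that look 'ribbon red': the red channel
    clearly dominating green and blue. A crude sanity-check heuristic,
    not a substitute for the paper's learned classifier."""
    red = sum(1 for r, g, b in pixels if r > 120 and r > 2 * g and r > 2 * b)
    return red / len(pixels)

# a toy frame: 70% saturated red "ribbon" pixels, 30% grey background
frame = [(200, 30, 40)] * 70 + [(90, 90, 95)] * 30
print(red_ratio(frame))  # → 0.7
```

A real deployment would feed whole frames to the CNN; a heuristic like this is only useful as a cheap pre-filter.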
APA, Harvard, Vancouver, ISO, and other styles
27

Hou, Lixin, Zeye Liu, Jixuan You, et al. "Tomato Sorting System Based on Machine Vision." Electronics 13, no. 11 (2024): 2114. http://dx.doi.org/10.3390/electronics13112114.

Full text
Abstract:
In the fresh tomato market, it is crucial to sort and sell tomatoes based on their quality, as this enhances the competitiveness and profitability of the market. However, the manual sorting process is subjective and inefficient. To address this issue, we have developed an automatic tomato sorting system that uses a Raspberry Pi 4B as the control platform for the robot arm, integrated with a human–computer interaction sorting interface. Our experimental results indicate that this sorting method has an accuracy rate of 99.1% and an efficiency of 1350 tomatoes per hour. This development is in line with the needs of modern agricultural mechanization and intelligence.
APA, Harvard, Vancouver, ISO, and other styles
28

Khodabandehloo, K. "Robotic handling and packaging of poultry products." Robotica 8, no. 4 (1990): 285–97. http://dx.doi.org/10.1017/s0263574700000321.

Full text
Abstract:
SUMMARY: This paper presents the findings of a research programme leading to the development of a robotic system for packaging poultry portions. The results show that an integrated system, incorporating machine vision and robots, can be made feasible for industrial use. The elements of this system, including the end-effector, the vision module, the robot hardware and the system software are presented. Models and algorithms for automatic recognition and handling of poultry portions are discussed.
APA, Harvard, Vancouver, ISO, and other styles
29

Marshall, S. "Machine vision: Automated visual inspection and robot vision." Automatica 30, no. 4 (1994): 731–32. http://dx.doi.org/10.1016/0005-1098(94)90163-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Xiao, Xu, Yiming Jiang, and Yaonan Wang. "Key Technologies for Machine Vision for Picking Robots: Review and Benchmarking." Machine Intelligence Research 22, no. 1 (2025): 2–16. https://doi.org/10.1007/s11633-024-1517-1.

Full text
Abstract:
The increase in precision agriculture has promoted the development of picking-robot technology, and the visual recognition system at its core is crucial for improving the level of agricultural automation. This paper reviews the progress of visual recognition technology for picking robots, including image capture technology, target detection algorithms, spatial positioning strategies, and scene understanding. The article begins with a description of the basic structure and function of the vision system of a picking robot and emphasizes the importance of achieving high-efficiency, high-accuracy recognition in the natural agricultural environment. Subsequently, various image processing techniques and vision algorithms are analysed, including colour image analysis, three-dimensional depth perception, and automatic object recognition technology that integrates machine learning and deep learning algorithms. The paper also highlights the challenges existing technologies face with dynamic lighting, occlusion, fruit maturity diversity, and real-time processing. It further discusses multisensor information fusion and methods for combining visual recognition with the robot control system to improve picking accuracy and speed. Innovative research is also introduced, such as the application of convolutional neural networks (CNNs) for accurate fruit detection and the development of event-based vision systems to improve the response speed of the system. Finally, the future development of visual recognition technology for picking robots is predicted and new research trends are proposed, including the refinement of algorithms, hardware innovation, and the adaptability of the technology to different agricultural conditions.
The purpose of this paper is to provide researchers and practitioners in the field of agricultural robotics with a comprehensive analysis of visual recognition technology, covering current achievements, existing challenges, and future development prospects.
APA, Harvard, Vancouver, ISO, and other styles
31

Kondo, Naoshi, Kazuya Yamamoto, Hiroshi Shimizu, et al. "A Machine Vision System for Tomato Cluster Harvesting Robot." Engineering in Agriculture, Environment and Food 2, no. 2 (2009): 60–65. http://dx.doi.org/10.1016/s1881-8366(09)80017-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Younas, Muhammad Awais, Ali Hassan Abdullah, Ghulam Muhayyu Din, Muhammad Faisal, Muhammad Mudassar, and Amsh Bin Yasir. "Smart Manufacturing System Using LLM for Human-Robot Collaboration: Applications and Challenges." European Journal of Theoretical and Applied Sciences 3, no. 1 (2025): 215–26. https://doi.org/10.59324/ejtas.2025.3(1).21.

Full text
Abstract:
In the era of Industry 4.0, emerging technologies such as artificial intelligence (AI), big data, and the internet of things (IoT) are rapidly transforming and upgrading the manufacturing industry, with robots playing an increasingly crucial role in this process. These advancements lay the foundation for high-quality development in intelligent manufacturing. With the introduction of Industry 5.0, the human-centered approach has gained significant attention, giving rise to a new field of human-centric manufacturing. The distinction between humans and robots in intelligent manufacturing systems is becoming increasingly blurred, and research on human-robot collaboration has become a hot topic. This paper proposes a prototype method for human-robot smart collaborative operation in intelligent manufacturing systems, based on the integration of large language models (LLMs) and machine vision. By leveraging the strengths of computer vision and LLMs, the method aims to enhance the intelligence of human-robot smart collaboration in manufacturing systems. Additionally, this study discusses the applications and challenges of the proposed model.
APA, Harvard, Vancouver, ISO, and other styles
33

Muhammad, Awais Younas, Hassan Abdullah Ali, Muhayyu Din Ghulam, Faisal Muhammad, Mudassar Muhammad, and Bin Yasir Amsh. "Smart Manufacturing System Using LLM for Human-Robot Collaboration: Applications and Challenges." European Journal of Theoretical and Applied Sciences 3, no. 1 (2025): 215–26. https://doi.org/10.59324/ejtas.2025.3(1).21.

Full text
Abstract:
In the era of Industry 4.0, emerging technologies such as artificial intelligence (AI), big data, and the internet of things (IoT) are rapidly transforming and upgrading the manufacturing industry, with robots playing an increasingly crucial role in this process. These advancements lay the foundation for high-quality development in intelligent manufacturing. With the introduction of Industry 5.0, the human-centered approach has gained significant attention, giving rise to a new field of human-centric manufacturing. The distinction between humans and robots in intelligent manufacturing systems is becoming increasingly blurred, and research on human-robot collaboration has become a hot topic. This paper proposes a prototype method for human-robot smart collaborative operation in intelligent manufacturing systems, based on the integration of large language models (LLMs) and machine vision. By leveraging the strengths of computer vision and LLMs, the method aims to enhance the intelligence of human-robot smart collaboration in manufacturing systems. Additionally, this study discusses the applications and challenges of the proposed model.
APA, Harvard, Vancouver, ISO, and other styles
34

Ye, Li, Bin Luo, and Jian Hua Yang. "Based on the Photoelectric Orientation System of the Robot in the Application of Production Line." Applied Mechanics and Materials 778 (July 2015): 235–39. http://dx.doi.org/10.4028/www.scientific.net/amm.778.235.

Full text
Abstract:
This paper uses a laser sensor to perform a preliminary mapping of the workpiece positions. The manipulator then positions itself accurately on the workpieces by means of machine vision and grasps them precisely by correcting its error automatically. By combining a laser sensor with machine vision, a robot on an industrial production line can locate and fetch workpieces more accurately, meeting the production line's requirements for precise robot localization.
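The two-stage scheme in the abstract — laser sensor for coarse positioning, machine vision for fine correction — can be sketched as a simple coordinate refinement. This is a minimal illustration under assumed units; the function and the calibration factor are hypothetical, not from the paper:

```python
def refine_position(coarse_xy, vision_offset_px, mm_per_px):
    """Coarse-to-fine positioning: a laser scan gives a rough workpiece
    location in mm; the camera then measures the residual offset of the
    workpiece from the gripper axis in pixels, which is converted to mm
    and applied as a correction."""
    cx, cy = coarse_xy
    du, dv = vision_offset_px
    return (cx + du * mm_per_px, cy + dv * mm_per_px)

# laser says the part is near (120, 45) mm; the camera sees it
# 8 px right and 3 px up of the gripper axis at 0.25 mm/px
print(refine_position((120.0, 45.0), (8, -3), 0.25))  # → (122.0, 44.25)
```

The mm-per-pixel factor would come from a one-off camera calibration; everything else is plain arithmetic.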
APA, Harvard, Vancouver, ISO, and other styles
35

Chen, Guangrong, and Liang Hong. "Research on Environment Perception System of Quadruped Robots Based on LiDAR and Vision." Drones 7, no. 5 (2023): 329. http://dx.doi.org/10.3390/drones7050329.

Full text
Abstract:
Due to their high stability and adaptability, quadruped robots are currently widely discussed in the robotics field. To cope with complicated indoor or outdoor environments, a quadruped robot should be equipped with an environment perception system, which typically contains LiDAR or a vision sensor and deploys SLAM (Simultaneous Localization and Mapping). In this paper, comparative experimental platforms, comprising a quadruped robot and a vehicle with LiDAR and a vision sensor, are established first. Secondly, single-sensor SLAM, including LiDAR SLAM and visual SLAM, is investigated separately to highlight the advantages and disadvantages of each. Multi-sensor SLAM based on LiDAR and vision is then addressed to improve environmental perception performance. Thirdly, YOLOv5 (You Only Look Once), improved by adding ASFF (adaptive spatial feature fusion), is employed for the image processing of gesture recognition to achieve human–machine interaction. Finally, the challenges of environment perception systems for mobile robots are discussed based on a comparison between wheeled and legged robots. This research provides insight into the environment perception of legged robots.
APA, Harvard, Vancouver, ISO, and other styles
36

Song, Changjiang. "Fuel Tank Position Localization Based on Machine Vision." International Journal of Computer Science and Information Technology 4, no. 2 (2024): 38–47. http://dx.doi.org/10.62051/ijcsit.v4n2.06.

Full text
Abstract:
Refueling robots, as an important part of intelligent service systems, greatly enhance the safety and efficiency of the refueling process. This study focuses on the machine vision module in refueling robots, specifically exploring the application of high-resolution cameras and LiDAR in data acquisition. We used Convolutional Neural Networks (CNNs) such as ResNet and MobileNet for feature extraction, which ensured high-precision recognition and classification in a variety of environments. Meanwhile, the target detection module uses YOLOv4 and MobileNet3 with fast and accurate target localization capabilities to effectively identify and calibrate the location of fueling ports. In addition, we introduced Extended Kalman Filter (EKF) and Bayesian Filter algorithms for data fusion and state estimation, which improves the robustness and reliability of the system. By combining these advanced vision techniques and algorithms, the refueling robot realizes efficient and accurate automatic refueling operation. This research provides theoretical and technical support for the further development of intelligent refueling robots so that they can still operate stably in complex environments.
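The abstract's Extended Kalman Filter operates on a nonlinear model, but the predict/update structure it shares with the plain Kalman filter can be conveyed in one dimension. The noise parameters and measurements below are assumptions for illustration, not the authors' implementation:

```python
def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter with a constant-position model:
    process noise q, measurement noise r, initial state x0 with
    variance p0. Returns the sequence of filtered estimates."""
    x, p = x0, p0
    out = []
    for z in measurements:
        p += q                      # predict: uncertainty grows
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update toward measurement z
        p *= (1 - k)                # uncertainty shrinks after update
        out.append(x)
    return out

# noisy readings of a fuelling-port coordinate that is really at 1.0
zs = [0.9, 1.1, 1.05, 0.95, 1.0, 1.02]
est = kalman_1d(zs)
print(round(est[-1], 2))
```

An EKF replaces the scalar predict/update with linearized (Jacobian-based) versions of a nonlinear motion and measurement model, but the gain computation follows the same pattern.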
APA, Harvard, Vancouver, ISO, and other styles
37

Xia, Wen Tao, Yan Ying Wang, Zhi Gang Huang, Hao Guan, and Ping Cai Li. "Trajectory Control of Museum Commentary Robot Based on Machine Vision." Applied Mechanics and Materials 615 (August 2014): 145–48. http://dx.doi.org/10.4028/www.scientific.net/amm.615.145.

Full text
Abstract:
The aim of the design is to make a museum robot move along a desired trajectory. A commentary robot in a museum can not only arouse visitors' curiosity but also save human resources. Furthermore, the robot can change and upgrade its software according to the museum's operating situation to accomplish different trajectories in different spaces. A machine vision tracking method is applied to the museum robot, which mainly uses a camera to seek the marked objects in the proper order and accomplish the designed trajectory movement.
APA, Harvard, Vancouver, ISO, and other styles
38

Kasaei, S. Hamidreza Mohades, S. Mohammadreza Mohades Kasaei, S. A. Monadjemi, and Mohsen Taheri. "BRAIN Journal - Modeling and Implementation of Omnidirectional Soccer Robot with Wide Vision Scope Applied in Robocup-MSL." BRAIN - Broad Research in Artificial Intelligence and Neuroscience 1, no. 3 (2010): 65–74. https://doi.org/10.5281/zenodo.1036437.

Full text
Abstract:
The purpose of this paper is to design and implement a middle-size soccer robot conforming to the RoboCup MSL league rules. The proposed autonomous soccer robot consists of a mechanical platform, a motion control module, an omni-directional vision module, a front vision module, and an image processing and recognition module, and covers target object positioning and real-coordinate reconstruction, robot path planning, competition strategies, and obstacle avoidance. The robot is equipped with a laptop computer system and interface circuits to make decisions. The omnidirectional vision sensor of the vision system handles image processing and positioning for obstacle avoidance and target tracking. A boundary-following algorithm (BFA) is applied to find the important features of the field. We utilize sensor data fusion for the control system parameters, self-localization, and world modeling: vision-based self-localization and the conventional odometry system are fused for robust self-localization. The localization algorithm includes filtering, sharing, and integration of the data for the different types of objects recognized in the environment. For control strategies, we present three state modes: the Attack Strategy, the Defense Strategy, and the Intercept Strategy. The methods have been tested on middle-size robots in many RoboCup competition fields.
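The fusion of vision-based self-localization with odometry that the abstract mentions is commonly done by inverse-variance weighting: the drifting but smooth odometry estimate is blended with the noisy but drift-free vision estimate. A one-axis sketch follows; the variances and values are assumptions, not from the paper:

```python
def fuse(odom, vision, var_odom, var_vision):
    """Inverse-variance weighted fusion of two 1-D position estimates.
    The more uncertain source gets the smaller weight, which is the
    standard way to combine drifting odometry with drift-free but
    noisy vision-based self-localization."""
    w = var_vision / (var_odom + var_vision)   # weight on odometry
    return w * odom + (1 - w) * vision

# odometry drifted to 2.30 m (variance 0.09); vision says 2.10 m (variance 0.01)
x = fuse(2.30, 2.10, 0.09, 0.01)
print(round(x, 3))  # → 2.12
```

With var_vision nine times smaller than var_odom, the fused estimate lands much closer to the vision reading, as expected.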
APA, Harvard, Vancouver, ISO, and other styles
39

Liu, Xue, Kailang Lan, Haisen Zeng, Meng Song, Min Liu, and Xin Liu. "Research on Autonomous Spraying Robot Based on Machine Vision." Highlights in Science, Engineering and Technology 9 (September 30, 2022): 161–67. http://dx.doi.org/10.54097/hset.v9i.1737.

Full text
Abstract:
In view of the problems that current spraying robots require manual teaching and cannot meet the requirements of flexible processing, this paper investigates an autonomous spraying robot based on machine vision. First, the overall design, structural design, and parts selection of the robot system are carried out according to the market's functional requirements. Then, a feature extraction algorithm for the workpieces to be sprayed is designed, which mainly uses OpenCV to denoise the collected images, remove the background, and extract features. According to the image processing results, the spraying trajectory is determined by trajectory planning. Finally, autonomous spraying experiments are carried out on the built spraying robot platform; the spraying process and the system's adaptation to workpiece shape and pose are analyzed, and the goal of adaptive robot spraying is realized.
APA, Harvard, Vancouver, ISO, and other styles
40

Lu, Sheng Rong, and Huan Long Guo. "Research of Machine Vision System about Robot Soccer Based on the HSV." Advanced Materials Research 121-122 (June 2010): 807–12. http://dx.doi.org/10.4028/www.scientific.net/amr.121-122.807.

Full text
Abstract:
This paper discusses image processing for a soccer robot vision system and the factors that influence it. After acquiring RGB images and converting them to the HSV colour space, an identification procedure is designed that lets the robot capture entities of a specific colour, giving an accurate basis for judging the robot's subsequent movements. First, the relevant background knowledge on images is presented. Second, images and image segmentation are briefly introduced. Third, the influencing factors and image processing techniques are proposed. Fourth, the conversion from the RGB to the HSV model is presented. Finally, the robot's image acquisition and identification procedures are designed in a modular fashion.
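The RGB-to-HSV conversion at the heart of the abstract can be illustrated with Python's standard `colorsys` module. The hue band and saturation/value thresholds below are assumptions for illustration, not values from the paper:

```python
import colorsys

def classify_pixel(r, g, b, hue_lo, hue_hi, s_min=0.4, v_min=0.2):
    """Return True if an 8-bit RGB pixel falls inside a target hue band.

    Thresholding in HSV is far more robust to lighting changes than
    thresholding raw RGB, which is why the conversion is done first.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    deg = h * 360.0  # colorsys returns hue in [0, 1)
    return hue_lo <= deg <= hue_hi and s >= s_min and v >= v_min

# a saturated orange "ball" pixel versus a grey "floor" pixel,
# tested against an assumed 10°–50° orange hue band
print(classify_pixel(230, 120, 20, 10, 50))   # → True
print(classify_pixel(128, 128, 130, 10, 50))  # → False
```

Note that the grey pixel is rejected by its low saturation even though its brightness is similar, which is exactly the property that makes HSV segmentation practical on a soccer field.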
APA, Harvard, Vancouver, ISO, and other styles
41

Tung, Tzu-Jan, Mohamed Al-Hussein, and Pablo Martinez. "Vision-Based Guiding System for Autonomous Robotic Corner Cleaning of Window Frames." Buildings 13, no. 12 (2023): 2990. http://dx.doi.org/10.3390/buildings13122990.

Full text
Abstract:
Corner cleaning is the most important manufacturing step of window framing to ensure aesthetic quality. After the welding process, the current methods to clean the welding seams lack quality control and adaptability. This increases rework, cost, and the waste produced in manufacturing and is largely due to the use of CNC cutting machines, as well as the reliance on manual inspection and weld seam cleaning. Dealing with manufacturing imperfections becomes a challenging task, as CNC machines rely on predetermined cleaning paths and frame information. To tackle such challenges using Industry 4.0 approaches and automation technology, such as robots and sensors, in this paper, a novel intelligent system is proposed to increase the process capacity to adapt to variability in weld cleaning conditions while ensuring quality through a combined approach of robot arms and machine vision that replaces the existing manual-based methods. Using edge detection to identify the window position and its orientation, artificial intelligence image processing techniques (Mask R-CNN model) are used to detect the window weld seam and to guide the robot manipulator in its cleaning process. The framework is divided into several modules, beginning with the estimation of a rough position for the purpose of guiding the robot toward the window target, followed by an image processing and detection module used in conjunction with instance segmentation techniques to segment the target area of the weld seam, and, finally, the generation of cleaning paths for further robot manipulation. The proposed robotic system is validated two-fold: first, in a simulated environment and then, in a real-world scenario, with the results obtained demonstrating the effectiveness and adaptability of the proposed system. The evaluation of the proposed framework shows that the trained Mask R-CNN can locate and quantify weld seams with 95% mean average precision (less than 1 cm).
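The step from a segmented weld-seam mask to a robot cleaning path can be sketched by collapsing each mask row to its column centroid. This is a simplified illustration of the path-generation module; the real system works on Mask R-CNN output, and this helper is hypothetical:

```python
def seam_path(mask):
    """Turn a binary weld-seam mask (rows of 0/1) into an ordered list
    of (row, column-centroid) waypoints. This mimics the stage after
    instance segmentation, where the detected seam region is reduced
    to a path the cleaning manipulator can follow."""
    path = []
    for y, row in enumerate(mask):
        cols = [x for x, v in enumerate(row) if v]
        if cols:  # skip rows with no seam pixels
            path.append((y, sum(cols) / len(cols)))
    return path

# a tiny diagonal seam mask
mask = [
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
]
print(seam_path(mask))  # → [(0, 1.5), (1, 2.5), (2, 3.5)]
```

In a real pipeline these pixel waypoints would then be mapped into the robot's base frame using the camera calibration before being sent to the manipulator.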
APA, Harvard, Vancouver, ISO, and other styles
42

Zhang, Chen, Xuewu Xu, Chen Fan, and Guoping Wang. "Literature Review of Machine Vision in Application Field." E3S Web of Conferences 236 (2021): 04027. http://dx.doi.org/10.1051/e3sconf/202123604027.

Full text
Abstract:
Focusing on the application and research of machine vision, this paper gives a comprehensive and detailed account of its two main application areas: visual inspection and robot vision. It introduces the composition, characteristics, and application advantages of machine vision systems and, based on an analysis of the current state of research at home and abroad, forecasts the development trends of machine vision applications.
APA, Harvard, Vancouver, ISO, and other styles
43

Gurko, Alexander, Oleg Sergiyenko, and Lars Lindner. "Robust laser positioning in a mobile robot machine vision system." Vehicle and electronics. Innovative technologies, no. 20 (November 30, 2021): 27–36. http://dx.doi.org/10.30977/veit.2021.20.0.03.

Full text
Abstract:
Problem. Laser scanning devices are widely used in the Machine Vision Systems (MVS) of autonomous mobile robots for solving SLAM problems. One of the concerns with MVS operation is the ability to detect relatively small obstacles. This requires scanning a limited sector within the field of view or even focusing on a specific point in space. The accuracy of laser beam positioning is hampered by various kinds of uncertainty, due both to model simplification and to the use of inaccurate parameter values, as well as to a lack of information about perturbations. Goal. This paper presents an improvement of the MVS described in previous works of the authors, through robust control of the DC motor that drives the positioning laser. Methodology. For this purpose, a DC motor model is built that takes parametric uncertainty into account. A robust digital PD controller for laser positioning is designed, and a comparative evaluation of the robust properties of the obtained control system against a classical one is carried out. The formation of the PWM signal by the microcontroller and the processes in the H-bridge are also taken into account. Results. The obtained digital controller meets the transient-response and accuracy requirements and combines the simplicity of a classical controller with weak sensitivity to the parametric uncertainties of the drive model. Originality. The originality of the paper lies in its focus on the MVS of the autonomous mobile robot developed by the authors. Practical value. Implementing the MVS with the proposed controller will increase the reliability of obstacle detection within the robot's field of view and the accuracy of environment mapping.
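The digital PD controller the abstract describes has the discrete form u[k] = Kp·e[k] + Kd·(e[k] − e[k−1])/Δt. A minimal sketch with assumed gains and a toy first-order plant (not the authors' motor model) shows the structure:

```python
def make_pd(kp, kd, dt):
    """Discrete PD controller: u[k] = kp*e[k] + kd*(e[k]-e[k-1])/dt.
    The closure keeps the previous error for the backward-difference
    derivative term."""
    prev = {"e": 0.0}
    def step(error):
        u = kp * error + kd * (error - prev["e"]) / dt
        prev["e"] = error
        return u
    return step

# drive a toy first-order plant x' = u toward a setpoint
pd = make_pd(kp=2.0, kd=0.05, dt=0.01)
x, setpoint = 0.0, 1.0
for _ in range(1000):
    x += pd(setpoint - x) * 0.01  # Euler integration of the plant
print(abs(setpoint - x) < 1e-3)  # → True
```

A robust design like the paper's would additionally verify that the closed loop stays stable over the whole uncertainty range of the motor parameters, not just for nominal gains.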
APA, Harvard, Vancouver, ISO, and other styles
44

Guan, Ning, and Pei Zhang. "Active Compliance Control System for Intelligent Inspection Robot in Power Room." Journal of Control Science and Engineering 2022 (September 21, 2022): 1–7. http://dx.doi.org/10.1155/2022/7829082.

Full text
Abstract:
In order to solve the problems of blind spots and the time-consuming, labour-intensive monitoring involved in the manual inspection of power information and communication rooms, an active compliance control system for intelligent inspection robots in such rooms is proposed. This research covers: a machine-vision-based non-contact detection method for the inspection robot in the power room; optimization of the power room environment; control of the inspection robot through the monitoring system terminal; automatic visual inspection transmitted over the monitoring network; display at the monitoring terminal and calibration of the machine; and an edge detection method for inspection images. A motion model of the inspection robot in the power room is constructed and a compliant control method for the robot is designed. The experimental results show that the ineffective torque fluctuation range of each joint is smallest when the inspection robot is disturbed by obstacles while executing signal change instructions, with an average fluctuation range of the robot limbs of 0.65%. Conclusion. Based on the research results of this paper, the controllability of power signal lamp changes can be further optimized in the future, improving the safety of circuit operation.
APA, Harvard, Vancouver, ISO, and other styles
45

Xing, Si Ming, and Zhi Yong Luo. "Research on Wire-Plugging Robot System Based on Machine Vision." Applied Mechanics and Materials 275-277 (January 2013): 2459–66. http://dx.doi.org/10.4028/www.scientific.net/amm.275-277.2459.

Full text
Abstract:
ADSL line testing in the telecommunications field is high-intensity work, and the current testing method has low efficiency and cannot be automated. In this paper, a wire-plugging test robot system based on machine vision is designed to realize remote, automatic wire-plugging tests and to improve work efficiency. The system uses a dual-positioning method based on colour-coded block recognition and visual locating. Colour-coded blocks are recognized for coarse positioning of the socket, and the stepper motors on the X- and Y-axes are driven to move quickly to the vicinity of the socket. Video-based positioning is then used to pinpoint the socket. After pinpointing, the X- and Y-axis stepper motors align the plug with the socket, and the Z-axis motor is driven to perform the wire-plugging action. The plug is reset to a safe place after wire-plugging is finished. Performance tests have proved that this wire-plugging test robot system can perform plug-testing tasks quickly and accurately, making it a stable wire-plugging device.
APA, Harvard, Vancouver, ISO, and other styles
46

Xiem, HoangVan, and Do Nam. "An efficient regression method for 3D object localization in machine vision systems." IAES International Journal of Robotics and Automation (IJRA) 11, no. 2 (2022): 111–21. https://doi.org/10.11591/ijra.v11i2.pp111-121.

Full text
Abstract:
Machine vision, or robot vision, is playing an important role in many industrial systems and has many potential applications in future automation tasks such as in-house robot management, swarm robotics control, production line monitoring, and robot grasping. One of the most common yet challenging tasks in machine vision is 3D object localization. Although several works have been introduced and have achieved good results for object localization, there is still room to further improve object location determination. In this paper, we introduce a novel 3D object localization algorithm in which a checkerboard-pattern-based method is used to initialize the object location, followed by a regression model to regularize it. The proposed object localization is employed in a low-cost robot grasping system in which only one simple 2D camera is used. Experimental results showed that the proposed algorithm significantly improves the accuracy of object localization compared to related works.
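The regression step that regularizes the checkerboard-based initial location can be illustrated with ordinary least squares on calibration pairs. This is a 1-D sketch with invented data; the paper's actual regression model may differ:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y ≈ a*x + b. Here it regresses the
    crude checkerboard-based estimate (x) onto ground-truth positions
    (y) collected during a calibration run, so the learned (a, b) can
    correct future estimates."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# calibration pairs: (initial estimate, measured true position) in mm;
# the invented data has a constant 1 mm bias
init = [10.0, 20.0, 30.0, 40.0]
true = [11.0, 21.0, 31.0, 41.0]
a, b = fit_linear(init, true)
print(round(a, 3), round(b, 3))  # → 1.0 1.0
```

Applying `a * x + b` to a new initial estimate then removes the systematic bias, which is the sense in which regression "regularizes" the localization.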
APA, Harvard, Vancouver, ISO, and other styles
47

Opiyo, Samwel, Cedric Okinda, Jun Zhou, Emmy Mwangi, and Nelson Makange. "Medial axis-based machine-vision system for orchard robot navigation." Computers and Electronics in Agriculture 185 (June 2021): 106153. http://dx.doi.org/10.1016/j.compag.2021.106153.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Gao, Mingyu, Xiao Li, Zhiwei He, and Yuxiang Yang. "An Automatic Assembling System for Sealing Rings Based on Machine Vision." Journal of Sensors 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/4207432.

Full text
Abstract:
In order to grab and place the sealing rings of a battery lid quickly and accurately, an automatic assembling system for sealing rings based on machine vision is developed in this paper. The whole system is composed of light sources, cameras, industrial control units, and a 4-degree-of-freedom industrial robot. Specifically, the sealing rings are recognized and located automatically by the machine vision module. The industrial robot is then controlled to grab the sealing rings dynamically under the joint work of multiple control units and visual feedback. Furthermore, the coordinates of the fast-moving battery lid are tracked by the machine vision module. Finally, the sealing rings are placed on the sealing ports of the battery lid accurately and automatically. Experimental results demonstrate that the proposed system can grab the sealing rings and place them on the sealing ports of the fast-moving battery lid successfully. More importantly, the proposed system clearly improves the efficiency of the battery production line.
APA, Harvard, Vancouver, ISO, and other styles
49

Al-Khafaji, Israa M. Abdalameer, and A. V. Panov. "FEDERATED LEARNING FOR VISION-BASED OBSTACLE AVOIDANCE IN MOBILE ROBOTS." Bulletin of the South Ural State University. Ser. Computer Technologies, Automatic Control & Radioelectronics 23, no. 3 (2023): 35–47. http://dx.doi.org/10.14529/ctcr230304.

Full text
Abstract:
Federated learning (FL) is a machine learning approach that allows multiple devices or systems to train a model collaboratively without exchanging their data. This is particularly useful for autonomous mobile robots, as it allows them to train models customized to their specific environment and tasks while keeping the data they collect private. Research objective: to train a model to recognize and classify different types of objects, or to navigate around obstacles in its environment. Materials and methods: we used FL to train models for a variety of tasks, such as object recognition, obstacle avoidance, localization, and path planning, performed by an autonomous mobile robot operating in a warehouse. We equipped the robot with sensors and a processor to collect data and perform machine learning tasks. The robot must communicate with a central server or cloud platform that coordinates the training process and collects model updates from the different devices. We trained a convolutional neural network (CNN) and used a PID algorithm to generate a control signal that adjusts the position or another variable of the system based on the difference between the desired and actual values, using the proportional, integral, and derivative terms to achieve the desired performance. Results: despite careful design and execution, there are several challenges to implementing FL in autonomous mobile robots, including the need to ensure data privacy and security and the need to manage the communication and computational resources required to train the model. Conclusion. We conclude that FL enables autonomous mobile robots to continuously improve their performance and adapt to changing environments, and that it can potentially improve the performance of vision-based obstacle avoidance strategies and enable robots to learn and adapt more quickly and effectively, leading to more robust and autonomous systems.
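The server-side aggregation step in FL is typically FedAvg: the central server averages client model parameters weighted by each client's local dataset size, so no raw data ever leaves a robot. A minimal sketch (the parameter vectors and dataset sizes are invented for illustration):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: element-wise average of client parameter
    vectors, weighted by each client's local dataset size. The server
    only ever sees model parameters, never the underlying data."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# two robots trained locally; robot A saw 3x as many obstacle images,
# so its parameters dominate the global model
avg = fed_avg([[2.0, 8.0], [6.0, 4.0]], [3, 1])
print(avg)  # → [3.0, 7.0]
```

In a full round, the server would broadcast `avg` back to the robots, each would train further on its private data, and the cycle would repeat.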
APA, Harvard, Vancouver, ISO, and other styles
50

Byzkrovnyi, Oleksandr, Kyrylo Smelyakov, Anastasiya Chupryna, Loreta Savulioniene, and Paulius Sakalys. "COMPARISON OF POTENTIAL ROAD ACCIDENT DETECTION ALGORITHMS FOR MODERN MACHINE VISION SYSTEM." ENVIRONMENT. TECHNOLOGIES. RESOURCES. Proceedings of the International Scientific and Practical Conference 3 (June 13, 2023): 50–55. http://dx.doi.org/10.17770/etr2023vol3.7299.

Full text
Abstract:
Nowadays, robotics is a rapidly developing industry. Robots are becoming more sophisticated, and this requires more sophisticated technologies. One of them is robot vision, which is needed for robots that perceive the environment using vision instead of a battery of sensors. These data are used to analyze the situation at hand and develop a real-time action plan for the given scenario. This article explores the most suitable algorithm for detecting potential road accidents, specifically focusing on the scenario of turning left across one or more oncoming lanes. The selection of the optimal algorithm is based on a comparative analysis of evaluation and testing results, including metrics such as the maximum frames per second of video processing during detection on the robot's hardware. The study categorises potential accidents into two classes: danger and not-danger. The YOLOv7 and Detectron2 algorithms are compared, and the article aims to create simple models with the potential for future refinement. The article also provides conclusions and recommendations regarding the practical implementation of the proposed models and algorithm.
APA, Harvard, Vancouver, ISO, and other styles