
Journal articles on the topic 'Vision sensor'

Consult the top 50 journal articles for your research on the topic 'Vision sensor.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Yang, Le, Han Wang, Jiajian Zheng, Xin Duan, and Qishuo Cheng. "Research and Application of Visual Object Recognition System Based on Deep Learning and Neural Morphological Computation." International Journal of Computer Science and Information Technology 2, no. 1 (2024): 10–17. http://dx.doi.org/10.62051/ijcsit.v2n1.02.

Abstract:
The development of advanced optoelectronic vision sensors for high-level image recognition and data preprocessing is poised to accelerate the progress of machine vision and mobile electronic technology. Compared to traditional sensory computing methods, such as analog-to-digital signal conversion and digital logic computation tasks (i.e., Von Neumann computing), neural morphological vision computing can significantly improve energy efficiency and data processing speed by minimizing unnecessary raw data transmission between front-end photosensitive sensors and back-end processors. Neural morpho…
2

Chai, Yang. "(Invited) Bioinspired in-Sensor Computing for Artificial Vision." ECS Meeting Abstracts MA2024-02, no. 35 (2024): 2466. https://doi.org/10.1149/ma2024-02352466mtgabs.

Abstract:
The visual scene in the physical world integrates multidimensional information (spatial, temporal, polarization, spectrum, etc.) and typically displays unstructured characteristics. Conventional image sensors cannot process this multidimensional vision data, creating a need for vision sensors that can efficiently extract features from substantial multidimensional vision data. Vision sensors are able to transform the unstructured visual scene into featured information without relying on sophisticated algorithms and complex hardware. In this talk, I will describe our team’s efforts towards bioins…
3

Bassett, J., and G. Walker. "A Split Image Vision Sensor." Journal of Engineering for Industry 117, no. 1 (1995): 94–101. http://dx.doi.org/10.1115/1.2803284.

Abstract:
A vision sensor has been developed that uses only two lenses, a split prism, and a detector to acquire an image. This system uses the split prism to create a split image such that the displacement of the image is proportional to its range from the sensor. Prototype sensors have been examined both theoretically and experimentally, and have been found to measure object ranges with less than ±2 percent error. Acquisition of a single-point depth measurement is sufficiently fast for real-time use, and the optical components needed to build the sensor are inexpensive. The effect that each optical co…
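A minimal sketch of the ranging principle entry 3 describes, assuming the stated proportionality between split-image displacement and range holds and that a linear calibration has been measured beforehand (all numbers below are invented for illustration):

```python
import numpy as np

# Hypothetical calibration: the abstract states image displacement is
# proportional to object range, so a linear fit d = a*r + b suffices.
calib_ranges = np.array([0.5, 1.0, 1.5, 2.0])      # known ranges (m)
calib_disps = np.array([12.1, 24.3, 36.2, 48.5])   # measured displacements (px)

a, b = np.polyfit(calib_ranges, calib_disps, 1)    # fit d = a*r + b

def range_from_displacement(d_px: float) -> float:
    """Invert the linear model to estimate range from displacement."""
    return (d_px - b) / a

print(range_from_displacement(30.0))  # ~1.24 m under this toy calibration
```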
4

Sundar, Varun, and Mohit Gupta. "Quanta Computer Vision." XRDS: Crossroads, The ACM Magazine for Students 31, no. 2 (2024): 38–43. https://doi.org/10.1145/3703403.

Abstract:
Light impinges on a camera's sensor as a collection of discrete quantized elements, or photons. An emerging class of devices, called single-photon sensors, offers the unique capability of detecting individual photons with high-timing precision. With the increasing accessibility of high-resolution single-photon sensors, we can now explore what computer vision would look like if we could operate on light, one photon at a time.
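To make the "one photon at a time" idea concrete, the following toy simulation (an assumption-laden sketch, not from the article) models a single-photon sensor: photon arrivals per pixel are Poisson, each exposure yields a one-bit detection map, and averaging many binary frames recovers the underlying flux:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: per-pixel photon flux (expected photons per exposure).
flux = np.linspace(0.05, 2.0, 64).reshape(8, 8)

def binary_frame(flux, rng):
    # Photon arrivals are Poisson; a single-photon pixel reports 1 if
    # at least one photon was detected during the exposure.
    return (rng.poisson(flux) > 0).astype(np.uint8)

# The mean detection rate over many one-bit frames is 1 - exp(-flux),
# which can be inverted per pixel to estimate the flux.
frames = np.stack([binary_frame(flux, rng) for _ in range(5000)])
rate = frames.mean(axis=0)
flux_estimate = -np.log1p(-np.clip(rate, 0.0, 1 - 1e-6))

print(np.abs(flux_estimate - flux).max())  # small residual error
```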
5

Hasegawa, Hiroaki, Yosuke Suzuki, Aiguo Ming, Masatoshi Ishikawa, and Makoto Shimojo. "Robot Hand Whose Fingertip Covered with Net-Shape Proximity Sensor - Moving Object Tracking Using Proximity Sensing -." Journal of Robotics and Mechatronics 23, no. 3 (2011): 328–37. http://dx.doi.org/10.20965/jrm.2011.p0328.

Abstract:
Occlusion within several millimeters of an object to be grasped makes it difficult for a vision-sensor-based approach to detect the relative positioning between the object and the robot fingers during grasping. The proximity sensor we proposed detects the object at near range very effectively. We developed a thin proximity sensor sheet to cover the 3 fingers of a robot hand. Integrating sensors and hand control, we implemented an object-tracking controller. Using proximity sensory signals, the controller coordinates wrist positioning based on palm proximity sensors and grasping from fingertip sensors, e…
6

Yuhara, H. "Stereo vision sensor." JSAE Review 21, no. 4 (2000): 529–34. http://dx.doi.org/10.1016/s0389-4304(00)00080-1.

7

Kolar, Prasanna, Patrick Benavidez, and Mo Jamshidi. "Survey of Datafusion Techniques for Laser and Vision Based Sensor Integration for Autonomous Navigation." Sensors 20, no. 8 (2020): 2180. http://dx.doi.org/10.3390/s20082180.

Abstract:
This paper focuses on data fusion, which is fundamental to one of the most important modules in any autonomous system: perception. Over the past decade, there has been a surge in the usage of smart/autonomous mobility systems. Such systems can be used in various areas of life, such as safe mobility for the disabled and senior citizens, and depend on accurate sensor information in order to function optimally. This information may be from a single sensor or a suite of sensors with the same or different modalities. We review various types of sensors, their data, and the need for fusion…
8

Fossum, Eric R., Nobukazu Teranishi, and Albert J. P. Theuwissen. "Digital Image Sensor Evolution and New Frontiers." Annual Review of Vision Science 10, no. 1 (2024): 171–98. http://dx.doi.org/10.1146/annurev-vision-101322-105538.

Abstract:
This article reviews nearly 60 years of solid-state image sensor evolution and identifies potential new frontiers in the field. From early work in the 1960s, through the development of charge-coupled device image sensors, to the complementary metal oxide semiconductor image sensors now ubiquitous in our lives, we discuss highlights in the evolutionary chain. New frontiers, such as 3D stacked technology, photon-counting technology, and others, are briefly discussed.
9

Cho, Dooyong, and Junho Gong. "A Feasibility Study on Extension of Measurement Distance in Vision Sensor Using Super-Resolution for Dynamic Response Measurement." Sensors 23, no. 20 (2023): 8496. http://dx.doi.org/10.3390/s23208496.

Abstract:
The current civil infrastructure conditions can be assessed through the measurement of displacement using conventional contact-type sensors. To address the disadvantages of traditional sensors, vision-based sensor measurement systems have been derived in numerous studies and proven as an alternative to traditional sensors. Despite the benefits of the vision sensor, it is well known that the accuracy of the vision-based displacement measurement is largely dependent on the camera extrinsic or intrinsic parameters. In this study, the feasibility of a deep learning-based single image super-r…
10

Salim, Aya Zuhair, and Luma Issa Abdul-Kareem. "A Review of Advances in Bio-Inspired Visual Models Using Event- and Frame-Based Sensors." Advances in Technology Innovation 10, no. 1 (2025): 44–57. https://doi.org/10.46604/aiti.2024.14121.

Abstract:
This paper reviews visual system models using event- and frame-based vision sensors. The event-based sensors mimic the retina by recording data only in response to changes in the visual field, thereby optimizing real-time processing and reducing redundancy. In contrast, frame-based sensors capture duplicate data, requiring more processing resources. This research develops a hybrid model that combines both sensor types to enhance efficiency and reduce latency. Through simulations and experiments, this approach addresses limitations in data integration and speed, offering improvements over exist…
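A minimal sketch of the event-based principle this review builds on: a DVS-style pixel fires an event only where log intensity changes by more than a contrast threshold, so unchanged pixels produce no data. The threshold and toy frames are assumptions for illustration:

```python
import numpy as np

def frames_to_events(prev, curr, threshold=0.2):
    """Emit DVS-style events where log intensity changed by more than
    the contrast threshold; polarity is +1 (brighter) or -1 (darker)."""
    dlog = np.log(curr + 1e-6) - np.log(prev + 1e-6)
    ys, xs = np.nonzero(np.abs(dlog) >= threshold)
    polarity = np.sign(dlog[ys, xs]).astype(int)
    return list(zip(xs.tolist(), ys.tolist(), polarity.tolist()))

rng = np.random.default_rng(1)
prev = rng.uniform(0.2, 1.0, (4, 4))
curr = prev.copy()
curr[1, 2] *= 1.8            # one pixel brightens
print(frames_to_events(prev, curr))  # [(2, 1, 1)] - only that pixel fires
```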
11

Bloss, Richard. "Latest in VISION SENSOR Technology as well as innovations in sensing, pressure, force, medical, particle size and many other applications." Sensor Review 37, no. 1 (2017): 7–11. http://dx.doi.org/10.1108/sr-09-2016-0186.

Abstract:
Purpose: The purpose of this paper is to review some of the latest in new vision sensor technologies as well as other innovative sensor products being developed and reaching the market. Design/methodology/approach: This study is a review of published information and papers on research as well as contact and discussions with researchers and suppliers in this field at the Vision Show and the Ceramics Show. Findings: Microelectronics and electrochemical technologies have been a major factor in technology advancements of sensors for a wide range of applications. Vision sensors have become very import…
12

AlHarami, AlKhzami, Abubakar Abubakar, Bo Zhang, and Amine Bermak. "Progressive Early Image Recognition for Wireless Vision Sensor Networks." Sensors 22, no. 17 (2022): 6348. http://dx.doi.org/10.3390/s22176348.

Abstract:
A wireless vision sensor network (WVSN) is built by using multiple image sensors connected wirelessly to a central server node performing video analysis, ultimately automating different tasks such as video surveillance. In such applications, a large deployment of sensors in the same way as Internet-of-Things (IoT) devices is required, leading to extreme requirements in terms of sensor cost, communication bandwidth and power consumption. To achieve the best possible trade-off, we propose in this paper a new concept that attempts to achieve image compression and early image recognition leading t…
13

Kim, H., K. Choi, and I. Lee. "Improving Car Navigation with a Vision-Based System." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3/W5 (August 20, 2015): 459–65. http://dx.doi.org/10.5194/isprsannals-ii-3-w5-459-2015.

Abstract:
The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion with a GPS and in-vehicle sensors. We employed a single photo resection process to derive the position and attitude of the camera and thus those of the car. These image georeferencing results are combi…
14

Simoni, Andrea, Alvise Sartori, Massimo Gottardi, and Alessandro Zorat. "A digital vision sensor." Sensors and Actuators A: Physical 47, no. 1-3 (1995): 439–43. http://dx.doi.org/10.1016/0924-4247(94)00937-d.

15

Tyler, Neil. "Event-Based Vision Sensor." New Electronics 51, no. 18 (2019): 9. http://dx.doi.org/10.12968/s0047-9624(22)61429-9.

16

Kodukula, Venkatesh, Saad Katrawala, Britton Jones, Carole-Jean Wu, and Robert LiKamWa. "Dynamic Temperature Management of Near-Sensor Processing for Energy-Efficient High-Fidelity Imaging." Sensors 21, no. 3 (2021): 926. http://dx.doi.org/10.3390/s21030926.

Abstract:
Vision processing on traditional architectures is inefficient due to energy-expensive off-chip data movement. Many researchers advocate pushing processing close to the sensor to substantially reduce data movement. However, continuous near-sensor processing raises sensor temperature, impairing imaging/vision fidelity. We characterize the thermal implications of using 3D stacked image sensors with near-sensor vision processing units. Our characterization reveals that near-sensor processing reduces system power but degrades image quality. For reasonable image fidelity, the sensor temperature need…
17

Wang, Qilong, Yu Zhang, Weichao Shi, and Meng Nie. "Laser Ranging-Assisted Binocular Visual Sensor Tracking System." Sensors 20, no. 3 (2020): 688. http://dx.doi.org/10.3390/s20030688.

Abstract:
To improve the low measurement accuracy of the binocular vision sensor along the optical axis during target tracking, we propose in this paper a method for auxiliary correction using a laser-ranging sensor. In the measurement process, limited by the mechanical performance of the two-dimensional turntable, the measurement value of the laser-ranging sensor lags. In this paper, the lag information is updated directly to resolve the time delay. Moreover, in order to take full advantage of binocular vision sensors and laser-ranging sensors in target track…
18

Vladareanu, Luige. "Advanced Intelligent Control through Versatile Intelligent Portable Platforms." Sensors 20, no. 13 (2020): 3644. http://dx.doi.org/10.3390/s20133644.

Abstract:
The main purpose of this research is deep investigation, and communication of new trends, in the design, control and applications of the real-time control of intelligent sensor systems using advanced intelligent control methods and techniques. Innovative multi-sensor fusion techniques, integrated through the Versatile Intelligent Portable (VIP) platforms, are developed, combined with computer vision, virtual and augmented reality (VR&AR) and intelligent communication, including remote control, adaptive sensor networks, human-robot (H2R) interaction systems and machine-to-machine (M2M) interfaces…
19

Menegatti, Emanuele, Manuel Cavasin, Enrico Pagello, Enzo Mumolo, and Massimiliano Nolich. "Combining Audio and Video Surveillance with a Mobile Robot." International Journal on Artificial Intelligence Tools 16, no. 02 (2007): 377–98. http://dx.doi.org/10.1142/s0218213007003321.

Abstract:
This paper presents a Distributed Perception System for intelligent surveillance applications. The system prototype presented in this paper is composed of a static acoustic agent and a static vision agent cooperating with a mobile vision agent mounted on a mobile robot. The audio and video sensors distributed in the environment are used as a single sensor to reveal and track the presence of a person in the surveilled environment. The robot extends the capabilities of the system by adding a mobile sensor (in this work an omnidirectional camera). The mobile omnidirectional camera can be used t…
20

Ruseruka, Cuthbert, Judith Mwakalonge, Gurcan Comert, Saidi Siuhi, Frank Ngeni, and Kristin Major. "Pavement Distress Identification Based on Computer Vision and Controller Area Network (CAN) Sensor Models." Sustainability 15, no. 8 (2023): 6438. http://dx.doi.org/10.3390/su15086438.

Abstract:
Recent technological developments have attracted the use of machine learning technologies and sensors in various pavement maintenance and rehabilitation studies. To avoid excessive road damage, which causes high road maintenance costs, reduced mobility, vehicle damage, and safety concerns, periodic maintenance of roads is necessary. As part of maintenance works, road pavement conditions should be monitored continuously. This monitoring is possible using modern distress detection methods that are simple to use, comparatively cheap, less labor-intensive, faster, safer, and able to provide d…
21

Antonenko, V. A., and V. M. Borovytsky. "Signal processing in facet systems of technical vision." Optoelectronic Information-Power Technologies 44, no. 2 (2023): 38–43. http://dx.doi.org/10.31649/1681-7893-2022-44-2-38-43.

Abstract:
The article presents an overview of bio-inspired motion sensors for facet systems of technical vision – the Reichardt correlation detector and the Horridge and Nguyen model – and proposes a universal motion-detection sensor. This sensor contains a microcontroller that quickly calculates the correlation function and its maximum value to find the direction and speed of movement in the field of view. The principles of their operation, advantages, disadvantages, and possibilities of application are considered.
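For readers unfamiliar with the Reichardt correlation detector mentioned above, a toy delay-and-correlate implementation (discrete delay line and parameters invented for illustration) can be sketched as follows:

```python
import numpy as np

def reichardt(left: np.ndarray, right: np.ndarray, delay: int = 1) -> float:
    """Delay-and-correlate motion detector over two adjacent photoreceptor
    signals; positive output means motion from left to right."""
    l_d = np.roll(left, delay)    # delayed copies (toy discrete delay line)
    r_d = np.roll(right, delay)
    # Opponent correlation: each half multiplies the delayed neighbour.
    return float(np.mean(l_d * right - r_d * left))

# A moving edge reaches 'left' one time step before 'right'.
t = np.arange(50)
left = np.sin(0.4 * t)
right = np.sin(0.4 * (t - 1))      # same signal, arriving later
print(reichardt(left, right) > 0)  # True: detector reports left-to-right
```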
22

Casciati, Fabio, Sara Casciati, and Li Jun Wu. "Vision-Based Sensing in Dynamic Tests." Key Engineering Materials 569-570 (July 2013): 767–74. http://dx.doi.org/10.4028/www.scientific.net/kem.569-570.767.

Abstract:
The availability of a suitable data acquisition sensor network is a key implementation issue to link models with real world structures. Non-contact displacement sensors should be preferred since they do not change the system properties. A two-dimensional vision-based displacement measurement sensor is the focus of this contribution. In particular, the perspective distortion introduced by the angle between the optic axis of the camera and the normal to the plane in which the structural system deforms is considered. A two-dimensional affine transformation is utilized to eliminate the distortion…
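A minimal sketch of the correction step this abstract describes: a two-dimensional affine transform is fitted by least squares to control points and then applied to measured pixel coordinates (the point values below are hypothetical):

```python
import numpy as np

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares 2D affine transform A (2x3) with dst ~ A @ [x, y, 1]."""
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                  # N x 3 design matrix
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                                  # 2 x 3

# Hypothetical control points: pixel coordinates of known targets (src)
# and their true positions in the structure's plane (dst), in mm.
src = np.array([[10.0, 12.0], [410.0, 20.0], [400.0, 300.0], [15.0, 290.0]])
dst = np.array([[0.0, 0.0], [500.0, 0.0], [500.0, 350.0], [0.0, 350.0]])

A = fit_affine(src, dst)
measured_px = np.array([205.0, 160.0, 1.0])
print(A @ measured_px)  # measured point mapped into the structure plane
```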
23

Hyndhavi, M., et al. "Development of Vehicle Tracking Using Sensor Fusion." Information Technology in Industry 9, no. 2 (2021): 731–39. http://dx.doi.org/10.17762/itii.v9i2.406.

Abstract:
The development of vehicle tracking using sensor fusion is presented in this paper. Advanced driver assistance systems (ADAS) have become more popular in recent years. These systems use sensor information for real-time control. To improve their quality and robustness, especially in the presence of environmental noise such as varying lighting and weather conditions, the fusion of sensors has been the center of attention in recent studies. Faced with complex traffic conditions, a single sensor has been unable to meet the security requirements of ADAS and autonomous driving. The common environment p…
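As a toy illustration of measurement-level sensor fusion of the kind such ADAS studies build on (not the paper's specific filter), two noisy position readings can be combined by inverse-variance weighting, the static form of a Kalman update:

```python
# Minimal sketch, assuming each sensor's noise is zero-mean Gaussian
# with a known variance; sensor names and numbers are illustrative.

def fuse(z_vision: float, var_vision: float,
         z_radar: float, var_radar: float) -> tuple[float, float]:
    """Inverse-variance (static Kalman) fusion of two position estimates."""
    w = var_radar / (var_vision + var_radar)   # weight on the vision reading
    z = w * z_vision + (1 - w) * z_radar
    var = (var_vision * var_radar) / (var_vision + var_radar)
    return z, var

# Vision is noisier in bad lighting, so radar dominates the fused estimate.
z, var = fuse(z_vision=24.8, var_vision=4.0, z_radar=25.6, var_radar=0.5)
print(z, var)  # ~25.51 m with variance ~0.44, tighter than either sensor
```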
24

Bharatidevi, V. "Sensor Applications in Robotics." Research and Development in Machine Design 8, no. 1 (2025): 5–8. https://doi.org/10.5281/zenodo.14831418.

Abstract:
Sensors play a crucial role in robotics by enabling perception, control, and decision-making, allowing robots to interact intelligently with their environment. This paper explores various sensor technologies used in robotics, including vision sensors, LiDAR, ultrasonic sensors, force/torque sensors, and inertial measurement units (IMUs). The integration of these sensors enhances robotic autonomy in applications such as industrial automation, medical robotics, autonomous vehicles, and humanoid robotics. Recent advancements in AI-driven sensor fusion and edge computing have further improved…
25

Shen, Tzung-Sz, Jianbing Huang, and Chia-Hsiang Menq. "Multiple-Sensor Planning and Information Integration for Automatic Coordinate Metrology." Journal of Computing and Information Science in Engineering 1, no. 2 (2001): 167–79. http://dx.doi.org/10.1115/1.1385827.

Abstract:
Multiple-sensor integration of vision and touch probe sensors has been shown to be a feasible approach for rapid and high-precision coordinate acquisition [Shen, T. S., Huang, J., and Meng, C. H., 2000, “Multiple-sensor integration for rapid and high-precision coordinate metrology,” IEEE/ASME Trans. Mechatron. 5, pp. 110–121]. However, the automation of coordinate measurements is still hindered by unknown surface areas that cannot be digitized using the vision system due to occlusions. It is identified that the estimation and reasoning of unknown surface areas, and automatic sensor planning us…
26

Uhm, Taeyoung, Jeongwoo Park, Jungwoo Lee, Gideok Bae, Geonhui Ki, and Youngho Choi. "Design of Multimodal Sensor Module for Outdoor Robot Surveillance System." Electronics 11, no. 14 (2022): 2214. http://dx.doi.org/10.3390/electronics11142214.

Abstract:
Recent studies on surveillance systems have employed various sensors to recognize and understand outdoor environments. In a complex outdoor environment, useful sensor data obtained under all weather conditions, during the night and day, can be utilized for application to robots in a real environment. Autonomous surveillance systems require a sensor system that can acquire various types of sensor data and can be easily mounted on fixed and mobile agents. In this study, we propose a method for modularizing multiple vision and sound sensors into one system, extracting data synchronized with 3D Li…
27

Cao, Ming Qiang, Zhi Hong Yan, Yong Lun Song, and Zhi Xiang Chen. "Study on Weld Seam Tracking System Based on Laser Vision Sensing." Advanced Materials Research 655-657 (January 2013): 1108–13. http://dx.doi.org/10.4028/www.scientific.net/amr.655-657.1108.

Abstract:
Seam tracking is a basic requirement for ensuring a fine weld shape. To meet this requirement, researchers have put forward a variety of sensors, and some of them have been applied in practice successfully, such as contact sensors, arc sensors and vision sensors. The laser vision sensor has proven to be one of the most successful technologies owing to its many advantages. In this paper, based on laser visual sensing technology, a seam tracking system is established with an embedded microcontroller LPC1768. With this system, a real-time tracking algorithm is proposed and the seam tracking pro…
28

López-Medina, Miguel Ángel, Macarena Espinilla, Chris Nugent, and Javier Medina Quero. "Evaluation of convolutional neural networks for the classification of falls from heterogeneous thermal vision sensors." International Journal of Distributed Sensor Networks 16, no. 5 (2020): 155014772092048. http://dx.doi.org/10.1177/1550147720920485.

Abstract:
The automatic detection of falls within environments where sensors are deployed has attracted considerable research interest due to the prevalence and impact of falling people, especially the elderly. In this work, we analyze the capabilities of non-invasive thermal vision sensors to detect falls using several architectures of convolutional neural networks. First, we integrate two thermal vision sensors with different capabilities: (1) low resolution with a wide viewing angle and (2) high resolution with a central viewing angle. Second, we include fuzzy representation of thermal information. T…
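To give a concrete sense of the kind of model such a study evaluates, here is a minimal convolutional classifier for low-resolution thermal frames in PyTorch; the architecture and input size are illustrative assumptions, not the networks from the paper:

```python
import torch
import torch.nn as nn

# Sketch of a fall/no-fall classifier for low-resolution thermal frames
# (e.g., a 32x24 sensor); layer sizes are invented for illustration.
class ThermalFallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # 32x24 -> 16x12
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # 16x12 -> 8x6
        )
        self.classifier = nn.Linear(16 * 8 * 6, 2)   # fall / no fall

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = ThermalFallNet()(torch.randn(4, 1, 24, 32))  # batch of 4 frames
print(logits.shape)                                   # torch.Size([4, 2])
```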
29

Guerra, Edmundo, Rodrigo Munguía, Yolanda Bolea, and Antoni Grau. "Detection and Positioning of Pipes and Columns with Autonomous Multicopter Drones." Mathematical Problems in Engineering 2018 (June 21, 2018): 1–13. http://dx.doi.org/10.1155/2018/2758021.

Abstract:
A multimodal sensory array to accurately position aerial multicopter drones with respect to pipes has been studied, and a solution exploiting both LiDAR and vision sensors has been proposed. Several challenges, including detection of pipes and other cylindrical elements in sensor space and validation of the elements detected, have been studied. A probabilistic parametric method has been applied to segment and position cylinders with LiDAR, while several vision-based techniques have been tested to find the contours of the pipe, combined with conic estimation for cylinder pose recovery. Multiple sol…
30

Kamal, Rohan. "Third Vision for Blinds." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (2023): 5550–53. http://dx.doi.org/10.22214/ijraset.2023.52499.

Abstract:
This paper introduces a third vision system designed to assist visually impaired individuals in detecting obstacles and dangers during walking, as well as identifying the world around them. The proposed solution acts as an artificial vision and alarm unit, consisting of three sensors (ultrasonic, water, and heat flame), and a microcontroller (Arduino Uno R3) to process sensor signals into short pulses for the Arduino pins to activate buzzers and LED bulbs. Our project aims to provide an affordable and lightweight smart stick suitable for most blind people, making it accessible to all…
31

Kang, Zaohui, Jizhong Xue, Chun Sing Lai, Yu Wang, Haoliang Yuan, and Fangyuan Xu. "Vision Transformer-Based Photovoltaic Prediction Model." Energies 16, no. 12 (2023): 4737. http://dx.doi.org/10.3390/en16124737.

Abstract:
Sensing cloud movement information has always been a difficult problem in photovoltaic (PV) prediction. The information used by current PV prediction methods makes it challenging to accurately perceive cloud movements. The obstruction of the sun by clouds leads to a significant decrease in actual PV power generation, and the PV prediction network model cannot respond in time, resulting in a significant decrease in prediction accuracy. In order to overcome this problem, this paper develops a vision transformer model for PV prediction, in which the target PV sensor information and the surrou…
32

Senarath, W. A. T. N., S. A. W. Fernando, and R. M. T. P. Rajakaruna. "Contact Position Estimation in the Event of Simultaneous Multiple Contacts in Vision-based Tactile Sensors." Journal of Advances in Engineering and Technology 2, no. 1 (2022): 52–64. http://dx.doi.org/10.54389/hhzm8357.

Abstract:
Tactile sensors are used to detect physical contact or pressure. They provide feedback about the physical environment and allow more natural and intuitive interaction with machines. Tactile sensors have many applications in the fields of agriculture, space exploration, health and automotive. Capacitive, resistive, as well as vision (optical) based tactile sensors have been proposed in the literature. This paper proposes a novel approach to solving the problem of estimating the contact locations in the event of simultaneous multiple contacts in vision-based tactile sensors. The relationship bet…
33

Monta, Mitsuji, Naoshi Kondo, Seiichi Arima, and Kazuhiko Namba. "Robotic Vision for Bioproduction Systems." Journal of Robotics and Mechatronics 15, no. 3 (2003): 341–48. http://dx.doi.org/10.20965/jrm.2003.p0341.

Abstract:
The vision system is one of the most important external sensors for an agricultural robot, because the robot has to find its target among various objects against a complicated background. Optical and morphological properties should therefore be investigated first so that the target object can be recognized properly when a visual sensor for an agricultural robot is developed. A TV camera is widely used as a vision sensor for agricultural robots. A target image can be easily obtained by using color component images from a TV camera when the target color is different from the colors of the other objects and its backgroun…
34

Guo, Yishan, Mingqiu Li, and Mingqiu Li. "Research on Road Condition Sensing Technology based on Vision and Radar Information." Journal of Physics: Conference Series 2400, no. 1 (2022): 012032. http://dx.doi.org/10.1088/1742-6596/2400/1/012032.

Abstract:
In the process of intelligent vehicle driving, due to the complexity of the environment, a single sensor or multiple homogeneous sensors cannot completely perceive the traffic environment around the intelligent vehicle. Therefore, it is necessary to study the information fusion scheme of different sensors and make use of the advantages of each sensor to make up for the deficiency of a single sensor, so as to realize the function of cooperation and mutual compensation between multiple sensors. In this paper, millimeter-wave radar and camera are selected as sensors for an intelligent ve…
35

Chen, Zhenchang. "Analysis of the Application Scenarios of Different Sensors in Automated Guided Vehicles." Highlights in Science, Engineering and Technology 114 (October 31, 2024): 122–28. http://dx.doi.org/10.54097/n0an7570.

Abstract:
This study offers a detailed examination of diverse sensor technologies employed in Automated Guided Vehicles (AGVs) across various settings, including warehouses, hospitals, and outdoor environments. The paper investigates the use of a wide range of sensors in AGVs, including Light Detection and Ranging (LiDAR), inertial, vision-based, and magnetic sensors, to improve navigation accuracy, reliability, and flexibility. LiDAR generates precise 3D maps and identifies obstacles; inertial sensors, such as accelerometers and gyroscopes, deliver essential data for movement and orientation; vision-based sensors…
36

Quan, Shengjiang, Xiao Liang, Hairui Zhu, Masahiro Hirano, and Yuji Yamakawa. "HiVTac: A High-Speed Vision-Based Tactile Sensor for Precise and Real-Time Force Reconstruction with Fewer Markers." Sensors 22, no. 11 (2022): 4196. http://dx.doi.org/10.3390/s22114196.

Abstract:
Although they have been under development for years and are attracting a lot of attention, vision-based tactile sensors still have common defects: the use of such devices to infer the direction of external forces is poorly investigated, and the operating frequency is too low for them to be applied in practical scenarios. Moreover, discussion of the deformation of elastomers used in vision-based tactile sensors remains insufficient. This research focuses on analyzing the deformation of a thin elastic layer on a vision-based tactile sensor by establishing a simplified deformation model, which is…
37

Mueggler, Elias, Henri Rebecq, Guillermo Gallego, Tobi Delbruck, and Davide Scaramuzza. "The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM." International Journal of Robotics Research 36, no. 2 (2017): 142–49. http://dx.doi.org/10.1177/0278364917691115.

Abstract:
New vision sensors, such as the dynamic and active-pixel vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array. These sensors have great potential for high-speed robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and very high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightnes…
38

Feng, Yang, Hengyi Lv, Hailong Liu, Yisa Zhang, Yuyao Xiao, and Chengshan Han. "Event Density Based Denoising Method for Dynamic Vision Sensor." Applied Sciences 10, no. 6 (2020): 2024. http://dx.doi.org/10.3390/app10062024.

Abstract:
The dynamic vision sensor (DVS) is a new type of image sensor with application prospects in the fields of automobiles and robots. Dynamic vision sensors differ greatly from traditional image sensors in terms of pixel principle and output data. Background activity (BA) in the data affects image quality, but there is currently no unified indicator to evaluate the image quality of event streams. This paper proposes a method to eliminate background activity, along with a method and performance indices for evaluating filter performance: noise in real (NIR) and real in noise (RIN). The low…
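A minimal sketch of density-based background-activity filtering in the spirit of this paper (window sizes and thresholds are invented): an event is kept only if enough other events occur nearby in space and time:

```python
def density_filter(events, dt=2000, r=1, min_support=2):
    """Keep an event only if at least `min_support` other events occurred
    within `r` pixels and `dt` microseconds; isolated events are noise."""
    events = sorted(events, key=lambda e: e[2])   # (x, y, t) sorted by time
    kept = []
    for i, (x, y, t) in enumerate(events):
        support = 0
        for j, (x2, y2, t2) in enumerate(events):
            if j != i and abs(t2 - t) <= dt and abs(x2 - x) <= r and abs(y2 - y) <= r:
                support += 1
        if support >= min_support:
            kept.append((x, y, t))
    return kept

stream = [(10, 10, 100), (11, 10, 900), (10, 11, 1500),  # a real edge
          (40, 7, 1200)]                                  # isolated BA noise
print(density_filter(stream))  # the lone event at (40, 7) is dropped
```

The quadratic scan over all event pairs is only for clarity; a real filter would index events by pixel and timestamp.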
39

LaFiandra, M., and W. Harper. "A Comparison of Soldier Performance on a Target Detection and Identification Task Using Fused Sensor Technology and Current Night Vision Technology." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 51, no. 19 (2007): 1332–35. http://dx.doi.org/10.1177/154193120705101912.

Abstract:
Soldiers rely on night vision devices to enhance their ability to detect and identify objects of interest in environments of reduced luminosity. The night vision device that is currently being used by United States Army Soldiers deployed in Iraq and Afghanistan is based on Image Intensifying (I2) technology. An alternative technology for night vision devices is to use a fused sensor that combines I2 technology and a thermal sensor. The purpose of this study is to compare Soldier performance on detecting and identifying human targets while using a night vision device with I2 technology to their…
40

Zhang, Yunhui, Yuanbing Zhao, Yinan Wu, and Xuan Zhang. "Digital Communication of Folk Art in Urban Scenes Based on Vision Sensor Images." Mobile Information Systems 2022 (June 20, 2022): 1–13. http://dx.doi.org/10.1155/2022/2800496.

Abstract:
With the continuous development of China’s socio-economic level and culture, some of China’s folk arts are gradually declining, and the protection and dissemination of folk arts are extremely important. In today’s modern society, with digital information as the carrier, digital communication has advantages that traditional communication methods cannot match. This paper proposes research on the digital communication of folk art in urban scenes based on vision sensor images. This article introduces the related applications of vision sensor images and studies the spread of folk art in citie…
41

Maeder, Andreas, Hannes Bistry, and Jianwei Zhang. "Intelligent Vision Systems for Robotic Applications." International Journal of Information Acquisition 05, no. 03 (2008): 259–67. http://dx.doi.org/10.1142/s0219878908001648.

Abstract:
Vision-based sensors are a key component of robot systems, where many tasks depend on image data. Real-time control constraints bind a lot of processing power for only a single sensor modality. Dedicated and distributed processing resources are the "natural" solution to overcome this limitation. This paper presents experiments, using embedded processors as well as dedicated hardware, to execute various image (pre)processing tasks. Architectural concepts and requirements for intelligent vision systems have been acquired.
42

Yu, Feng. "Singing and Nervous System Regulation Based on Wireless Sensor Network Perception." Journal of Sensors 2021 (October 25, 2021): 1–10. http://dx.doi.org/10.1155/2021/2258625.

Abstract:
In order to build an intelligent platform that can be applied to singing and nervous system adjustment, this paper optimizes the positioning and information processing algorithms for wireless sensor network perception. Moreover, this article combines binocular vision to realize the singer’s real-time positioning, combines the singer’s emotion recognition with the intelligent sensor system, and combines the emotion recognition with the adjustment of the nervous system, so that the singer can better control the intelligent platform. In addition, in order to solve the problem of multisensor infor…
43

Idesawa, Masanori, Yasushi Mae, and Junji Oaki. "Special Issue on Robot Vision - Vision for Action -." Journal of Robotics and Mechatronics 21, no. 6 (2009): 671. http://dx.doi.org/10.20965/jrm.2009.p0671.

Abstract:
Robot vision is a key technology in robotics and mechatronics for realizing intelligent robot systems that work in the real world. The fact that robot vision algorithms required much time and effort to apply in real-world applications delayed their dissemination until new forms were made possible by recent rapid improvements in computer speed. Now the day is coming when robot vision may surpass human vision in many applications. This special issue presents 13 papers on the latest robot vision achievements and their applications. The first two propose ways of measuring and modeling 3D objects in…
44

Zhang, Zeyu. "Research on application method of intelligent driving technology based on monocular vision sensor." Theoretical and Natural Science 52, no. 1 (2024): 186–91. http://dx.doi.org/10.54254/2753-8818/52/2024ch0160.

Abstract:
With the development of driverless cars, intelligent driving technology is increasingly used in the automotive industry. The monocular vision sensor plays an indispensable role in intelligent driving technology because of its simple structure, low cost and abundant information. This paper discusses and optimizes the application of the monocular vision sensor in intelligent driving. The basic principles and key technologies of the monocular vision sensor are described in detail. Regarding the specific application of the monocular vision sensor, this paper focuses on the monocular vision sensor's depth lear…
45

Lv, Hengyi, Yang Feng, Yisa Zhang, and Yuchen Zhao. "Dynamic Vision Sensor Tracking Method Based on Event Correlation Index." Complexity 2021 (April 27, 2021): 1–11. http://dx.doi.org/10.1155/2021/8973482.

Abstract:
The dynamic vision sensor is a kind of bioinspired sensor. It has the characteristics of fast response, large dynamic range, and asynchronous output event stream. These characteristics give it advantages in the field of tracking that traditional image sensors do not have. The output form of the dynamic vision sensor is an asynchronous event stream, and the object information needs to be provided by the relevant event cluster. This article proposes a method based on the event correlation index to obtain the object’s position, contour, and other information and is compatible with traditional track…
46

Okada, Kei, Takeshi Morishita, Marika Hayashi, Masayuki Inaba, and Hirochika Inoue. "Design and Development of a Small Stereovision Sensor Module for Small Self-Contained Autonomous Robots." Journal of Robotics and Mechatronics 17, no. 3 (2005): 248–54. http://dx.doi.org/10.20965/jrm.2005.p0248.

Abstract:
We designed a small stereovision (SSV) sensor module for easily adding visual functions to a small robot and enabling their use. The SSV sensor module concept includes 1) a vision sensor module containing a camera and a visual processor and 2) connection to a robot system through a general-purpose interface. This design enables the use of visual functions as ordinary sensors, such as touch or ultrasonic sensors, by simply connecting a general-purpose interface port such as an IO port or serial connector. We developed a prototype module with small CMOS image sensors for a mobile phone and a 16 b…
47

Li, Yushan, Wenbo Zhang, Xuewu Ji, Chuanxiang Ren, and Jian Wu. "Research on Lane a Compensation Method Based on Multi-Sensor Fusion." Sensors 19, no. 7 (2019): 1584. http://dx.doi.org/10.3390/s19071584.

Abstract:
The curvature of the lane output by the vision sensor jumps over a period of time because of shadows, changes in lighting and line breaks, which leads to serious problems for unmanned driving control. It is particularly important to predict or compensate the real lane in real time during sensor jumps. This paper presents a lane compensation method based on multi-sensor fusion of global positioning system (GPS), inertial measurement unit (IMU) and vision sensors. In order to compensate the lane, a cubic polynomial function of the longitudinal distance is selected as the lane model. In thi…
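The lane model is stated explicitly in the abstract: a cubic polynomial in the longitudinal distance. A minimal fitting sketch with made-up lane points shows how such a model can bridge vision dropouts:

```python
import numpy as np

# The lane is modelled as y = c3*x^3 + c2*x^2 + c1*x + c0, where x is the
# longitudinal distance and y the lateral offset; the samples are invented.
x = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])   # longitudinal distance (m)
y = np.array([0.02, 0.09, 0.22, 0.41, 0.67, 1.00])  # lateral lane offset (m)

coeffs = np.polyfit(x, y, 3)      # [c3, c2, c1, c0]
lane = np.poly1d(coeffs)

print(lane(18.0))           # predicted offset where the sensor dropped out
print(lane.deriv(2)(18.0))  # second derivative, ~curvature for small slopes
```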
48

Wang, Chen-Yu, Shi-Jun Liang, Shuang Wang, et al. "Gate-tunable van der Waals heterostructure for reconfigurable neural network vision sensor." Science Advances 6, no. 26 (2020): eaba6173. http://dx.doi.org/10.1126/sciadv.aba6173.

Abstract:
Early processing of visual information takes place in the human retina. Mimicking the neurobiological structures and functionalities of the retina provides a promising pathway to achieving vision sensors with highly efficient image processing. Here, we demonstrate a prototype vision sensor that operates via the gate-tunable positive and negative photoresponses of van der Waals (vdW) vertical heterostructures. The sensor emulates not only the neurobiological functionalities of bipolar cells and photoreceptors but also the unique connectivity between bipolar cells and photoreceptors. By tuning ga…
49

Zou, Yanbiao, and Xiangzhi Chen. "Hand–eye calibration of arc welding robot and laser vision sensor through semidefinite programming." Industrial Robot: An International Journal 45, no. 5 (2018): 597–610. http://dx.doi.org/10.1108/ir-02-2018-0034.

Abstract:
Purpose: This paper aims to propose a hand–eye calibration method for an arc welding robot and laser vision sensor by using semidefinite programming (SDP). Design/methodology/approach: The conversion relationship between the pixel coordinate system and the laser plane coordinate system is established on the basis of the mathematical model of three-dimensional measurement of the laser vision sensor. In addition, the conversion relationship between the arc welding robot coordinate system and the laser vision sensor measurement coordinate system is also established on the basis of the hand–eye calibration model. …
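The first conversion the abstract mentions, from pixel coordinates to laser-plane coordinates, can be modelled for a calibrated stripe sensor as a planar homography; the matrix below is a hypothetical calibration result, not taken from the paper:

```python
import numpy as np

# Hypothetical 3x3 homography H mapping image pixels on the laser stripe
# into laser-plane coordinates (mm), obtained from planar calibration.
H = np.array([[0.12,  0.001, -35.0],
              [0.002, 0.118, -28.0],
              [1e-5,  1e-5,   1.0]])

def pixel_to_laser_plane(u: float, v: float) -> tuple[float, float]:
    """Map an image pixel on the laser stripe into laser-plane coordinates."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w   # dehomogenize

print(pixel_to_laser_plane(320.0, 240.0))  # stripe point in the laser plane
```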
50

You, B.-H., and J.-W. Kim. "A study on an automatic seam tracking system by using an electromagnetic sensor for sheet metal arc welding of butt joints." Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 216, no. 6 (2002): 911–20. http://dx.doi.org/10.1243/095440502320193030.

Abstract:
Many sensors, such as the vision sensor and the laser displacement sensor, have been developed to automate the arc welding process. However, these sensors have some problems due to the effects of arc light, fumes and spatter. An electromagnetic sensor, which utilizes the generation of an eddy current, was developed for detecting the weld line of a butt joint in which the root gap size was zero. An automatic seam tracking system designed for sheet metal arc welding was constructed with the sensor. Through experiments, it was revealed that the system had an excellent seam tracking accuracy of the…