
Journal articles on the topic 'Système de multi-Camera'


Consult the top 50 journal articles for your research on the topic 'Système de multi-Camera.'


1. Guo Peiyao (郭珮瑶), Pu Zhiyuan (蒲志远), and Ma Zhan (马展). "Multi-Camera Systems: Imaging Enhancement and Applications" [多相机系统:成像增强及应用]. Laser & Optoelectronics Progress 58, no. 18 (2021): 1811013. http://dx.doi.org/10.3788/lop202158.1811013.
2. Xiao Yifan (肖一帆) and Hu Wei (胡伟). "High-Precision Calibration Based on a Multi-Camera System" [基于多相机系统的高精度标定]. Laser & Optoelectronics Progress 60, no. 20 (2023): 2015003. http://dx.doi.org/10.3788/lop222787.
3. Zhao Yanfang (赵艳芳), Sun Peng (孙鹏), Dong Mingli (董明利), Liu Qilin (刘其林), Yan Bixi (燕必希), and Wang Jun (王君). "On-Orbit Autonomous Orientation Method for a Multi-Camera Vision Measurement System" [多相机视觉测量系统在轨自主定向方法]. Laser & Optoelectronics Progress 61, no. 10 (2024): 1011003. http://dx.doi.org/10.3788/lop231907.
4. Ren Guoyin (任国印), Lü Xiaoqi (吕晓琪), and Li Yuhao (李宇豪). "A DTN-Based Real-Time Multi-Face Tracking System across Multiple Camera Fields of View" [多摄像机视场下基于一种DTN的多人脸实时跟踪系统]. Laser & Optoelectronics Progress 59, no. 2 (2022): 0210004. http://dx.doi.org/10.3788/lop202259.0210004.
5. Kulathunga, Geesara, Aleksandr Buyval, and Aleksandr Klimchik. "Multi-Camera Fusion in Apollo Software Distribution." IFAC-PapersOnLine 52, no. 8 (2019): 49–54. http://dx.doi.org/10.1016/j.ifacol.2019.08.047.
6. Mehta, S. S., and T. F. Burks. "Multi-camera Fruit Localization in Robotic Harvesting." IFAC-PapersOnLine 49, no. 16 (2016): 90–95. http://dx.doi.org/10.1016/j.ifacol.2016.10.017.
7. Kennady, R., et al. "A Nonoverlapping Vision Field Multi-Camera Network for Tracking Human Build Targets." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 3 (2023): 366–69. http://dx.doi.org/10.17762/ijritcc.v11i3.9871.
Abstract:
This research presents a procedure for tracking human build targets in a multi-camera network with nonoverlapping vision fields. The proposed approach consists of three main steps: single-camera target detection, single-camera target tracking, and multi-camera target association and continuous tracking. The multi-camera target association includes target characteristic extraction and the establishment of topological relations. Target characteristics are extracted based on the HSV (Hue, Saturation, and Value) values of each human build movement target, and the space-time topological relations o
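The abstract in entry 7 describes extracting appearance characteristics from the HSV values of each target for cross-camera association. Below is a minimal, illustrative sketch of that general idea using OpenCV; the function names, histogram bin counts, and the Bhattacharyya-distance matching rule are assumptions for illustration, not the authors' implementation.

```python
import cv2

def hsv_signature(person_crop_bgr, bins=(16, 16)):
    """Hypothetical appearance feature: a normalized 2D Hue-Saturation histogram."""
    hsv = cv2.cvtColor(person_crop_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, list(bins), [0, 180, 0, 256])
    cv2.normalize(hist, hist, alpha=1.0, norm_type=cv2.NORM_L1)
    return hist

def appearance_distance(sig_a, sig_b):
    """Smaller is more similar: Bhattacharyya distance between HSV histograms."""
    return cv2.compareHist(sig_a, sig_b, cv2.HISTCMP_BHATTACHARYYA)

# Usage idea: associate a target that left camera A with candidates appearing in camera B.
# crops are hypothetical BGR patches produced by a per-camera detector/tracker:
#   exit_sig = hsv_signature(crop_from_camera_a)
#   best = min(candidate_crops_b,
#              key=lambda c: appearance_distance(exit_sig, hsv_signature(c)))
```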
8. Guler, Puren, Deniz Emeksiz, Alptekin Temizel, Mustafa Teke, and Tugba Taskaya Temizel. "Real-time multi-camera video analytics system on GPU." Journal of Real-Time Image Processing 11, no. 3 (2013): 457–72. http://dx.doi.org/10.1007/s11554-013-0337-2.
9. Huang, Sunan, Rodney Swee Huat Teo, and William Wai Lun Leong. "Multi-Camera Networks for Coverage Control of Drones." Drones 6, no. 3 (2022): 67. http://dx.doi.org/10.3390/drones6030067.
Abstract:
Multiple unmanned multirotor (MUM) systems are becoming a reality. They have a wide range of applications such as for surveillance, search and rescue, monitoring operations in hazardous environments and providing communication coverage services. Currently, an important issue in MUM is coverage control. In this paper, an existing coverage control algorithm has been extended to incorporate a new sensor model, which is downward facing and allows pan-tilt-zoom (PTZ). Two new constraints, namely view angle and collision avoidance, have also been included. Mobile network coverage among the MUMs is s
10. Wang, Liang. "Multi-Camera Calibration Based on 1D Calibration Object." Acta Automatica Sinica 33, no. 3 (2007): 0225. http://dx.doi.org/10.1360/aas-007-0225.
11. Moreno, Patricio, Juan Francisco Presenza, Ignacio Mas, and Juan Ignacio Giribet. "Aerial Multi-Camera Robotic Jib Crane." IEEE Robotics and Automation Letters 6, no. 2 (2021): 4103–8. http://dx.doi.org/10.1109/lra.2021.3065299.
12. Wu, Yi-Chang, Ching-Han Chen, Yao-Te Chiu, and Pi-Wei Chen. "Cooperative People Tracking by Distributed Cameras Network." Electronics 10, no. 15 (2021): 1780. http://dx.doi.org/10.3390/electronics10151780.
Abstract:
In the application of video surveillance, reliable people detection and tracking are always challenging tasks. The conventional single-camera surveillance system may encounter difficulties such as narrow-angle of view and dead space. In this paper, we proposed multi-cameras network architecture with an inter-camera hand-off protocol for cooperative people tracking. We use the YOLO model to detect multiple people in the video scene and incorporate the particle swarm optimization algorithm to track the person movement. When a person leaves the area covered by a camera and enters an area covered
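Entry 12 combines per-camera YOLO detection, particle-swarm tracking, and an inter-camera hand-off protocol. As a rough illustration of the hand-off step only, here is a toy sketch under assumed data structures (a camera adjacency map and per-track appearance features); it is not the protocol from the paper.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Track:
    track_id: int
    camera_id: str
    feature: np.ndarray   # appearance embedding (assumed, e.g. a color histogram)

# Assumed topology: which cameras a person can plausibly walk into from each camera.
ADJACENT = {"cam_A": ["cam_B"], "cam_B": ["cam_A", "cam_C"], "cam_C": ["cam_B"]}

def hand_off(exiting: Track, new_tracks: list, max_dist: float = 0.5):
    """When a track leaves one camera, try to re-attach its identity to a track that
    newly appeared in an adjacent camera, using nearest appearance distance."""
    candidates = [t for t in new_tracks if t.camera_id in ADJACENT[exiting.camera_id]]
    if not candidates:
        return None
    best = min(candidates,
               key=lambda t: float(np.linalg.norm(t.feature - exiting.feature)))
    dist = float(np.linalg.norm(best.feature - exiting.feature))
    return best if dist <= max_dist else None
```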
13. Fu, Qiang, Xiang-Yang Chen, and Wei He. "A Survey on 3D Visual Tracking of Multicopters." International Journal of Automation and Computing 16, no. 6 (2019): 707–19. http://dx.doi.org/10.1007/s11633-019-1199-2.
Abstract:
Three-dimensional (3D) visual tracking of a multicopter (where the camera is fixed while the multicopter is moving) means continuously recovering the six-degree-of-freedom pose of the multicopter relative to the camera. It can be used in many applications, such as precision terminal guidance and control algorithm validation for multicopters. However, it is difficult for many researchers to build a 3D visual tracking system for multicopters (VTSMs) by using cheap and off-the-shelf cameras. This paper firstly gives an overview of the three key technologies of a 3D VTSMs: multi-camera
14. Yang, Yi, Di Tang, Dongsheng Wang, Wenjie Song, Junbo Wang, and Mengyin Fu. "Multi-camera visual SLAM for off-road navigation." Robotics and Autonomous Systems 128 (June 2020): 103505. http://dx.doi.org/10.1016/j.robot.2020.103505.
15. Feng, Xin, Xiao Lv, Junyu Dong, Yongshun Liu, Fengfeng Shu, and Yihui Wu. "Double-Glued Multi-Focal Bionic Compound Eye Camera." Micromachines 14, no. 8 (2023): 1548. http://dx.doi.org/10.3390/mi14081548.
Abstract:
Compound eye cameras are a vital component of bionics. Compound eye lenses are currently used in light field cameras, monitoring imaging, medical endoscopes, and other fields. However, the resolution of the compound eye lens is still low at the moment, which has an impact on the application scene. Photolithography and negative pressure molding were used to create a double-glued multi-focal bionic compound eye camera in this study. The compound eye camera has 83 microlenses, with ommatidium diameters ranging from 400 μm to 660 μm, and a 92.3 degree field-of-view angle. The double-gluing structu
16. Liu, Xinhua, Jie Tian, Hailan Kuang, and Xiaolin Ma. "A Stereo Calibration Method of Multi-Camera Based on Circular Calibration Board." Electronics 11, no. 4 (2022): 627. http://dx.doi.org/10.3390/electronics11040627.
Abstract:
In the application of 3D reconstruction of multi-cameras, it is necessary to calibrate the camera used separately, and at the same time carry out multi-stereo calibration, and the calibration accuracy directly affects the effect of the 3D reconstruction of the system. Many researchers focus on the optimization of the calibration algorithm and the improvement of calibration accuracy after obtaining the calibration plate pattern coordinates, ignoring the impact of calibration on the accuracy of the calibration board pattern coordinate extraction. Therefore, this paper proposes a multi-camera ste
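Entry 16 calibrates a multi-camera rig from a circular calibration board. A bare-bones two-camera version of that workflow, using OpenCV's standard circle-grid detection and stereo-calibration calls, is sketched below; the grid layout, spacing, and data handling are assumptions, and the paper's own refinement of circle-center extraction is not reproduced.

```python
import cv2
import numpy as np

PATTERN = (4, 11)   # assumed asymmetric circle-grid layout (points per row, rows)
SPACING = 0.02      # assumed board spacing in meters

# Ideal board coordinates for an asymmetric circle grid (OpenCV sample convention).
objp = np.array([[(2 * j + i % 2) * SPACING, i * SPACING, 0.0]
                 for i in range(PATTERN[1]) for j in range(PATTERN[0])], np.float32)

def detect(gray):
    ok, centers = cv2.findCirclesGrid(gray, PATTERN, flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
    return centers if ok else None

def stereo_calibrate(imgs_cam1, imgs_cam2):
    """imgs_cam*: grayscale views of the same board poses seen by the two cameras."""
    obj_pts, pts1, pts2, size = [], [], [], None
    for g1, g2 in zip(imgs_cam1, imgs_cam2):
        c1, c2 = detect(g1), detect(g2)
        if c1 is None or c2 is None:
            continue
        obj_pts.append(objp); pts1.append(c1); pts2.append(c2)
        size = g1.shape[::-1]
    # Per-camera intrinsics first, then the extrinsic transform between the cameras.
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, pts1, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, pts2, size, None, None)
    _, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, pts1, pts2, K1, d1, K2, d2, size, flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, d1, K2, d2, R, T
```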
17. Alsadik, Bashar, Fabio Remondino, and Francesco Nex. "Simulating a Hybrid Acquisition System for UAV Platforms." Drones 6, no. 11 (2022): 314. http://dx.doi.org/10.3390/drones6110314.
Abstract:
Currently, there is a rapid trend in the production of airborne sensors consisting of multi-view cameras or hybrid sensors, i.e., a LiDAR scanner coupled with one or multiple cameras to enrich the data acquisition in terms of colors, texture, completeness of coverage, accuracy, etc. However, the current UAV hybrid systems are mainly equipped with a single camera that will not be sufficient to view the facades of buildings or other complex objects without having double flight paths with a defined oblique angle. This entails extensive flight planning, acquisition duration, extra costs, and data
18. Dexheimer, Eric, Patrick Peluse, Jianhui Chen, James Pritts, and Michael Kaess. "Information-Theoretic Online Multi-Camera Extrinsic Calibration." IEEE Robotics and Automation Letters 7, no. 2 (2022): 4757–64. http://dx.doi.org/10.1109/lra.2022.3145061.
19. Xu, Jian, Chunjuan Bo, and Dong Wang. "A novel multi-target multi-camera tracking approach based on feature grouping." Computers & Electrical Engineering 92 (June 2021): 107153. http://dx.doi.org/10.1016/j.compeleceng.2021.107153.
20. Li, Yun-Lun, Hao-Ting Li, and Chen-Kuo Chiang. "Multi-Camera Vehicle Tracking Based on Deep Tracklet Similarity Network." Electronics 11, no. 7 (2022): 1008. http://dx.doi.org/10.3390/electronics11071008.
Abstract:
Multi-camera vehicle tracking at the city scale has received lots of attention in the last few years. It has large-scale differences, frequent occlusion, and appearance differences caused by the viewing angle differences, which is quite challenging. In this research, we propose the Tracklet Similarity Network (TSN) for a multi-target multi-camera (MTMC) vehicle tracking system based on the evaluation of the similarity between vehicle tracklets. In addition, a novel component, Candidates Intersection Ratio (CIR), is proposed to refine the similarity. It provides an associate scheme to build the
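Entry 20 scores similarity between vehicle tracklets with a learned network and then associates tracklets across cameras. Independent of the learned Tracklet Similarity Network, the association step is commonly posed as an assignment problem; the sketch below shows that generic step, with cosine similarity standing in for the network's score (an assumption, not the paper's model).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracklets(feats_cam1, feats_cam2, min_sim=0.6):
    """feats_cam*: (N, D) arrays of per-tracklet embeddings (assumed precomputed).
    Returns matched index pairs (i, j) whose similarity clears a threshold."""
    a = feats_cam1 / np.linalg.norm(feats_cam1, axis=1, keepdims=True)
    b = feats_cam2 / np.linalg.norm(feats_cam2, axis=1, keepdims=True)
    sim = a @ b.T                              # cosine-similarity matrix
    rows, cols = linear_sum_assignment(-sim)   # Hungarian step: maximize total similarity
    return [(i, j) for i, j in zip(rows, cols) if sim[i, j] >= min_sim]
```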
21. Oh, Hyondong, Dae-Yeon Won, Sung-Sik Huh, David Hyunchul Shim, Min-Jea Tahk, and Antonios Tsourdos. "Indoor UAV Control Using Multi-Camera Visual Feedback." Journal of Intelligent & Robotic Systems 61, no. 1-4 (2010): 57–84. http://dx.doi.org/10.1007/s10846-010-9506-8.
22. Wang, Chuan, Shijie Liu, Xiaoyan Wang, and Xiaowei Lan. "Time Synchronization and Space Registration of Roadside LiDAR and Camera." Electronics 12, no. 3 (2023): 537. http://dx.doi.org/10.3390/electronics12030537.
Abstract:
The sensing system consisting of Light Detection and Ranging (LiDAR) and a camera provides complementary information about the surrounding environment. To take full advantage of multi-source data provided by different sensors, an accurate fusion of multi-source sensor information is needed. Time synchronization and space registration are the key technologies that affect the fusion accuracy of multi-source sensors. Due to the difference in data acquisition frequency and deviation in startup time between LiDAR and the camera, asynchronous data acquisition between LiDAR and camera is easy to occu
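Entry 22 deals with pairing LiDAR sweeps and camera frames captured at different rates and with a startup offset. One common, simple form of time synchronization is nearest-timestamp matching within a tolerance; a sketch of that generic step is given below (the tolerance value and data layout are assumptions, and the paper's space-registration step is not shown).

```python
import bisect

def pair_by_timestamp(lidar_stamps, camera_stamps, tol=0.05):
    """Match each LiDAR sweep to the closest camera frame within `tol` seconds.
    Both inputs are sorted lists of timestamps in seconds."""
    if not camera_stamps:
        return []
    pairs = []
    for i, t in enumerate(lidar_stamps):
        k = bisect.bisect_left(camera_stamps, t)
        # Closest of the two neighbours around the insertion point.
        best = min((j for j in (k - 1, k) if 0 <= j < len(camera_stamps)),
                   key=lambda j: abs(camera_stamps[j] - t))
        if abs(camera_stamps[best] - t) <= tol:
            pairs.append((i, best))
    return pairs
```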
23. Dinc, Semih, Farbod Fahimi, and Ramazan Aygun. "Mirage: an O(n) time analytical solution to 3D camera pose estimation with multi-camera support." Robotica 35, no. 12 (2017): 2278–96. http://dx.doi.org/10.1017/s0263574716000874.
Abstract:
Mirage is a camera pose estimation method that analytically solves pose parameters in linear time for multi-camera systems. It utilizes a reference camera pose to calculate the pose by minimizing the 2D projection error between reference and actual pixel coordinates. Previously, Mirage has been successfully applied to trajectory tracking (visual servoing) problem. In this study, a comprehensive evaluation of Mirage is performed by particularly focusing on the area of camera pose estimation. Experiments have been performed using simulated and real data on noisy and noise-free environment
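Mirage itself solves the pose analytically in linear time, and that solver is not reproduced here. For orientation only, the following sketch evaluates the quantity the abstract refers to, the 2D reprojection error of a camera pose, using OpenCV's generic solvePnP/projectPoints calls as a stand-in (my substitution, not the Mirage algorithm).

```python
import cv2
import numpy as np

def pose_and_reprojection_error(object_pts, image_pts, K, dist=None):
    """object_pts: (N, 3) known 3D points; image_pts: (N, 2) measured pixels;
    K: 3x3 intrinsic matrix. Returns (rvec, tvec) and the RMS reprojection error."""
    dist = np.zeros(5) if dist is None else dist
    ok, rvec, tvec = cv2.solvePnP(object_pts.astype(np.float32),
                                  image_pts.astype(np.float32), K, dist)
    if not ok:
        raise RuntimeError("pose estimation failed")
    proj, _ = cv2.projectPoints(object_pts.astype(np.float32), rvec, tvec, K, dist)
    err = np.sqrt(np.mean(np.sum((proj.reshape(-1, 2) - image_pts) ** 2, axis=1)))
    return (rvec, tvec), err
```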
24. Kim, Jinwoo, and Seokho Chi. "Multi-camera vision-based productivity monitoring of earthmoving operations." Automation in Construction 112 (April 2020): 103121. http://dx.doi.org/10.1016/j.autcon.2020.103121.
25. Pedersini, F., A. Sarti, and S. Tubaro. "Accurate and simple geometric calibration of multi-camera systems." Signal Processing 77, no. 3 (1999): 309–34. http://dx.doi.org/10.1016/s0165-1684(99)00042-0.
26. Li, Congcong, Jing Li, Yuguang Xie, Jiayang Nie, Tao Yang, and Zhaoyang Lu. "Multi-camera joint spatial self-organization for intelligent interconnection surveillance." Engineering Applications of Artificial Intelligence 107 (January 2022): 104533. http://dx.doi.org/10.1016/j.engappai.2021.104533.
27. Gu, Yuantao, Yilun Chen, Zhengwei Jiang, and Kun Tang. "Particle Filter Based Multi-Camera Integration for Face 3D-Pose Tracking." International Journal of Wavelets, Multiresolution and Information Processing 4, no. 4 (2006): 677–90. http://dx.doi.org/10.1142/s0219691306001531.
Abstract:
Face tracking has many visual applications such as human-computer interfaces, video communications and surveillance. Color-based particle trackers have been proved robust and versatile for a modest computational cost. In this paper, a probabilistic method for integrating multi-camera information is introduced to track human face 3D-pose variations. The proposed method fuses information coming from several calibrated cameras via one color-based particle filter. The algorithm relies on the following novelties. First, the human head other than face is defined as the target of our algorithm. To di
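Entry 27 fuses several calibrated views in a color-based particle filter. To make the predict/weight/resample loop concrete, here is a compact single-view toy version under an assumed color-likelihood function; the multi-camera fusion and the head model from the paper are not included.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, color_likelihood, motion_std=5.0):
    """particles: (N, 2) image positions; weights: (N,) normalized weights.
    color_likelihood(x, y) -> float is an assumed appearance score for a state."""
    n = len(particles)
    # 1) Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # 2) Weight: score each hypothesis with the color model and renormalize.
    weights = np.array([color_likelihood(x, y) for x, y in particles])
    weights = weights / max(weights.sum(), 1e-12)
    # 3) Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```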
28. Su, Shan, Li Yan, Hong Xie, et al. "Multi-Level Hazard Detection Using a UAV-Mounted Multi-Sensor for Levee Inspection." Drones 8, no. 3 (2024): 90. http://dx.doi.org/10.3390/drones8030090.
Abstract:
This paper introduces a developed multi-sensor integrated system comprising a thermal infrared camera, an RGB camera, and a LiDAR sensor, mounted on a lightweight unmanned aerial vehicle (UAV). This system is applied to the inspection tasks of levee engineering, enabling the real-time, rapid, all-day, all-round, and non-contact acquisition of multi-source data for levee structures and their surrounding environments. Our aim is to address the inefficiencies, high costs, limited data diversity, and potential safety hazards associated with traditional methods, particularly concerning the structur
29. Li, Jincheng, Guoqing Deng, Wen Zhang, Chaofan Zhang, Fan Wang, and Yong Liu. "Realization of CUDA-based real-time multi-camera visual SLAM in embedded systems." Journal of Real-Time Image Processing 17, no. 3 (2019): 713–27. http://dx.doi.org/10.1007/s11554-019-00924-4.
30. Svanström, Fredrik, Fernando Alonso-Fernandez, and Cristofer Englund. "Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities." Drones 6, no. 11 (2022): 317. http://dx.doi.org/10.3390/drones6110317.
Abstract:
Automatic detection of flying drones is a key issue where its presence, especially if unauthorized, can create risky situations or compromise security. Here, we design and evaluate a multi-sensor drone detection system. In conjunction with standard video cameras and microphone sensors, we explore the use of thermal infrared cameras, pointed out as a feasible and promising solution that is scarcely addressed in the related literature. Our solution integrates a fish-eye camera as well to monitor a wider part of the sky and steer the other cameras towards objects of interest. The sensing solution
31. Chen, Andrew Tzer-Yeu, Morteza Biglari-Abhari, and Kevin I.-Kai Wang. "Investigating fast re-identification for multi-camera indoor person tracking." Computers & Electrical Engineering 77 (July 2019): 273–88. http://dx.doi.org/10.1016/j.compeleceng.2019.06.009.
32. Wang, Sheng, Zhisheng You, and Yuxi Zhang. "A Novel Multi-Projection Correction Method Based on Binocular Vision." Electronics 12, no. 4 (2023): 910. http://dx.doi.org/10.3390/electronics12040910.
Abstract:
In order to improve the accuracy of multi-projection correction fusion, a multi-projection correction method based on binocular vision is proposed. To date, most of the existing methods are based on the single-camera mode, which may lose the depth information of the display wall and may not accurately obtain the details of the geometric structure of the display wall. The proposed method uses the depth information of a binocular camera to build a high-precision 3D display wall model; thus, there is no need to know the specific CAD size of the display wall in advance. Meanwhile, this method can
33. Sasaki, Kazuyuki, Yasuo Sakamoto, Takashi Shibata, and Yasufumi Emori. "The Multi-Purpose Camera: A New Anterior Eye Segment Analysis System." Ophthalmic Research 22, no. 1 (1990): 3–8. http://dx.doi.org/10.1159/000267056.
34. Kornuta, Tomasz, and Cezary Zieliński. "Robot Control System Design Exemplified by Multi-Camera Visual Servoing." Journal of Intelligent & Robotic Systems 77, no. 3-4 (2013): 499–523. http://dx.doi.org/10.1007/s10846-013-9883-x.
35. Hung, Michael Chien-Chun, and Kate Ching-Ju Lin. "Joint sink deployment and association for multi-sink wireless camera networks." Wireless Communications and Mobile Computing 16, no. 2 (2014): 209–22. http://dx.doi.org/10.1002/wcm.2509.
36. Tran, Nha, Toan Nguyen, Minh Nguyen, Khiet Luong, and Tai Lam. "Global-local attention with triplet loss and label smoothed crossentropy for person re-identification." IAES International Journal of Artificial Intelligence (IJ-AI) 12, no. 4 (2023): 1883. http://dx.doi.org/10.11591/ijai.v12.i4.pp1883-1891.
Abstract:
Person re-identification (Person Re-ID) is a research direction on tracking and identifying people in surveillance camera systems with non-overlapping camera perspectives. Despite much research on this topic, there are still some practical problems that Person Re-ID has not yet solved, in reality, human objects can easily be obscured by obstructions such as other people, trees, luggage, umbrellas, signs, cars, motorbikes. In this paper, we propose a multibranch deep learning network architecture. In which one branch is for the representation of global features and two branches are for the repr
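Entry 36 trains its re-identification branches with a triplet loss plus a label-smoothed cross-entropy. That combination is easy to state in PyTorch with the stock loss modules, as sketched below; the margin, smoothing factor, loss weighting, and triplet mining are assumed details, and the global-local attention network itself is not shown.

```python
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=0.3)              # margin is an assumed value
smooth_ce = nn.CrossEntropyLoss(label_smoothing=0.1)    # label-smoothed classification loss

def reid_loss(embeddings, logits, labels, anchor_idx, pos_idx, neg_idx, w=1.0):
    """embeddings: (B, D) features; logits: (B, C) identity scores; labels: (B,) ids.
    anchor/pos/neg index tensors select triplets mined elsewhere (assumed given)."""
    l_tri = triplet(embeddings[anchor_idx], embeddings[pos_idx], embeddings[neg_idx])
    l_ce = smooth_ce(logits, labels)
    return l_ce + w * l_tri
```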
37. Popovic, Vladan, Kerem Seyid, Abdulkadir Akin, et al. "Image Blending in a High Frame Rate FPGA-based Multi-Camera System." Journal of Signal Processing Systems 76, no. 2 (2013): 169–84. http://dx.doi.org/10.1007/s11265-013-0858-8.
38. Wang, Haoyu, Chi Chen, Yong He, et al. "Easy Rocap: A Low-Cost and Easy-to-Use Motion Capture System for Drones." Drones 8, no. 4 (2024): 137. http://dx.doi.org/10.3390/drones8040137.
Abstract:
Fast and accurate pose estimation is essential for the local motion control of robots such as drones. At present, camera-based motion capture (Mocap) systems are mostly used by robots. However, this kind of Mocap system is easily affected by light noise and camera occlusion, and the cost of common commercial Mocap systems is high. To address these challenges, we propose Easy Rocap, a low-cost, open-source robot motion capture system, which can quickly and robustly capture the accurate position and orientation of the robot. Firstly, based on training a real-time object detector, an object-filte
39. Fan, Zhijie, Zhiwei Cao, Xin Li, Chunmei Wang, Bo Jin, and Qianjin Tang. "Video Surveillance Camera Identity Recognition Method Fused With Multi-Dimensional Static and Dynamic Identification Features." International Journal of Information Security and Privacy 17, no. 1 (2023): 1–18. http://dx.doi.org/10.4018/ijisp.319304.
Abstract:
With the development of smart cities, video surveillance networks have become an important infrastructure for urban governance. However, by replacing or tampering with surveillance cameras, an important front-end device, attackers are able to access the internal network. In order to identify illegal or suspicious camera identities in advance, a camera identity identification method that incorporates multidimensional identification features is proposed. By extracting the static information of cameras and dynamic traffic information, a camera identity system that incorporates explicit, implicit,
40. Chebi, Hocine. "Novel greedy grid-voting algorithm for optimisation placement of multi-camera." International Journal of Sensor Networks 35, no. 3 (2021): 170. http://dx.doi.org/10.1504/ijsnet.2021.10036663.
41. Chebi, Hocine. "Novel greedy grid-voting algorithm for optimisation placement of multi-camera." International Journal of Sensor Networks 35, no. 3 (2021): 170. http://dx.doi.org/10.1504/ijsnet.2021.113840.
42. Kim, Juhwan, and Dongsik Jo. "Optimal Camera Placement to Generate 3D Reconstruction of a Mixed-Reality Human in Real Environments." Electronics 12, no. 20 (2023): 4244. http://dx.doi.org/10.3390/electronics12204244.
Abstract:
Virtual reality and augmented reality are increasingly used for immersive engagement by utilizing information from real environments. In particular, three-dimensional model data, which is the basis for creating virtual places, can be manually developed using commercial modeling toolkits, but with the advancement of sensing technology, computer vision technology can also be used to create virtual environments. Specifically, a 3D reconstruction approach can generate a single 3D model from image information obtained from various scenes in real environments using several cameras (multi-cameras). T
43. Wang, Bo, Jiayao Hou, Yanyan Ma, Fei Wang, and Fei Wei. "Multi-DS Strategy for Source Camera Identification in Few-Shot Sample Data Sets." Security and Communication Networks 2022 (September 6, 2022): 1–14. http://dx.doi.org/10.1155/2022/8716884.
Abstract:
Source camera identification (SCI) is an intriguing problem in digital forensics, which identifies the source device of given images. However, most existing works require sufficient training samples to ensure performance. In this work, we propose a method based on semi-supervised ensemble learning (multi-DS) strategy, which extends labeled data set by multi-distance-based clustering strategy and then calibrate the pseudo-labels through a self-correction mechanism. Next, we iteratively perform the calibration-appending-training process to improve our model. We design comprehensive experiments,
44. Ferraguti, Federica, Chiara Talignani Landi, Silvia Costi, et al. "Safety barrier functions and multi-camera tracking for human–robot shared environment." Robotics and Autonomous Systems 124 (February 2020): 103388. http://dx.doi.org/10.1016/j.robot.2019.103388.
45. Vandendriessche, Jurgen, Bruno da Silva, Lancelot Lhoest, An Braeken, and Abdellah Touhafi. "M3-AC: A Multi-Mode Multithread SoC FPGA Based Acoustic Camera." Electronics 10, no. 3 (2021): 317. http://dx.doi.org/10.3390/electronics10030317.
Abstract:
Acoustic cameras allow the visualization of sound sources using microphone arrays and beamforming techniques. The required computational power increases with the number of microphones in the array, the acoustic images resolution, and in particular, when targeting real-time. Such a constraint limits the use of acoustic cameras in many wireless sensor network applications (surveillance, industrial monitoring, etc.). In this paper, we propose a multi-mode System-on-Chip (SoC) Field-Programmable Gate Arrays (FPGA) architecture capable to satisfy the high computational demand while providing wirele
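Entry 45 builds acoustic images from a microphone array with beamforming. The classic delay-and-sum formulation it builds on can be written compactly; below is a far-field, frequency-domain toy version with assumed array geometry and sampling parameters, far simpler than the multi-mode FPGA design in the paper.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def delay_and_sum_power(frames, mic_xy, fs, azimuths_deg):
    """frames: (M, L) one time window per microphone; mic_xy: (M, 2) positions in meters;
    fs: sampling rate in Hz. Returns steered response power per candidate azimuth
    under a far-field (plane wave) assumption."""
    M, L = frames.shape
    spectra = np.fft.rfft(frames, axis=1)                 # (M, F)
    freqs = np.fft.rfftfreq(L, d=1.0 / fs)                # (F,)
    powers = []
    for az in np.deg2rad(np.asarray(azimuths_deg)):
        direction = np.array([np.cos(az), np.sin(az)])    # unit vector toward the source
        delays = mic_xy @ direction / C                   # (M,) arrival-time advances, s
        # Compensate each microphone's advance so the array "looks" toward `az`.
        phases = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
        aligned = spectra * phases
        powers.append(float(np.sum(np.abs(aligned.sum(axis=0)) ** 2)))
    return np.array(powers)

# The azimuth with maximum steered power is the estimated source direction:
#   est = azimuths_deg[np.argmax(delay_and_sum_power(frames, mic_xy, fs, azimuths_deg))]
```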
46. Park, K. S., S. Y. Chang, and S. K. Youn. "Topology optimization of the primary mirror of a multi-spectral camera." Structural and Multidisciplinary Optimization 25, no. 1 (2003): 46–53. http://dx.doi.org/10.1007/s00158-002-0271-6.
47. Lin, Shifeng, and Ning Wang. "Cloud robotic grasping of Gaussian mixture model based on point cloud projection under occlusion." Assembly Automation 41, no. 3 (2021): 312–23. http://dx.doi.org/10.1108/aa-11-2020-0170.
Abstract:
Purpose: In multi-robot cooperation, the cloud can share sensor data, which can help robots better perceive the environment. For cloud robotics, robot grasping is an important ability that must be mastered. Usually, the information source of grasping mainly comes from visual sensors. However, due to the uncertainty of the working environment, the information acquisition of the vision sensor may encounter the situation of being blocked by unknown objects. This paper aims to propose a solution to the problem in robot grasping when the vision sensor information is blocked by sharing the informatio
48. González-Galván, Emilio J., Sergio R. Cruz-Ramírez, Michael J. Seelinger, and J. Jesús Cervantes-Sánchez. "An efficient multi-camera, multi-target scheme for the three-dimensional control of robots using uncalibrated vision." Robotics and Computer-Integrated Manufacturing 19, no. 5 (2003): 387–400. http://dx.doi.org/10.1016/s0736-5845(03)00048-6.
49. Tao, Kekai, Gaoge Lian, Yongshun Liu, et al. "Design and Integration of the Single-Lens Curved Multi-Focusing Compound Eye Camera." Micromachines 12, no. 3 (2021): 331. http://dx.doi.org/10.3390/mi12030331.
Abstract:
Compared with a traditional optical system, the single-lens curved compound eye imaging system has superior optical performance, such as a large field of view (FOV), small size, and high portability. However, defocus and low resolution hinder the further development of single-lens curved compound eye imaging systems. In this study, the design of a nonuniform curved compound eye with multiple focal lengths was used to solve the defocus problem. A two-step gas-assisted process, which was combined with photolithography, soft photolithography, and ultraviolet curing, was proposed for fabricating t
50. Nuger, Evgeny, and Beno Benhabib. "Multi-Camera Active-Vision for Markerless Shape Recovery of Unknown Deforming Objects." Journal of Intelligent & Robotic Systems 92, no. 2 (2018): 223–64. http://dx.doi.org/10.1007/s10846-018-0773-0.