Selection of scientific literature on the topic "Camera guidance for robot"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a type of source:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Camera guidance for robot".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work is formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, when the relevant parameters are available in the metadata.

Journal articles on the topic "Camera guidance for robot"

1

Golkowski, Alexander Julian, Marcus Handte, Peter Roch, and Pedro J. Marrón. "An Experimental Analysis of the Effects of Different Hardware Setups on Stereo Camera Systems." International Journal of Semantic Computing 15, no. 03 (2021): 337–57. http://dx.doi.org/10.1142/s1793351x21400080.

Annotation:
For many application areas such as autonomous navigation, the ability to accurately perceive the environment is essential. For this purpose, a wide variety of well-researched sensor systems are available that can be used to detect obstacles or navigation targets. Stereo cameras have emerged as a very versatile sensing technology in this regard due to their low hardware cost and high fidelity. Consequently, much work has been done to integrate them into mobile robots. However, the existing literature focuses on presenting the concepts and algorithms used to implement the desired robot functions
2

Rusydi, Muhammad Ilhamdi, Aulia Novira, Takayuki Nakagome, et al. "Autonomous Movement Control of Coaxial Mobile Robot based on Aspect Ratio of Human Face for Public Relation Activity Using Stereo Thermal Camera." Journal of Robotics and Control (JRC) 3, no. 3 (2022): 361–73. http://dx.doi.org/10.18196/jrc.v3i3.14750.

Annotation:
In recent years, robots that recognize people around them and provide guidance, information, and monitoring have been attracting attention. The mainstream of conventional human recognition technology is the method using a camera or laser range finder. However, it is difficult to recognize with a camera due to fluctuations in lighting 1), and it is often affected by the recognition environment such as misrecognition 2) with a person's leg and a chair's leg with a laser range finder. Therefore, we propose a human recognition method using a thermal camera that can visualize human heat. This study
3

Yang, Long, and Nan Feng Xiao. "Robot Stereo Vision Guidance System Based on Attention Mechanism." Applied Mechanics and Materials 385-386 (August 2013): 708–11. http://dx.doi.org/10.4028/www.scientific.net/amm.385-386.708.

Annotation:
An attention mechanism is added to a traditional robot stereo vision system, so that candidate workpiece positions are obtained quickly from a saliency image, greatly accelerating the computation. First, stereo camera calibration is performed to obtain the camera intrinsic and extrinsic matrices. These parameter matrices are then used to rectify the newly captured images; a disparity map is computed with the OpenCV library, while the saliency image is computed with the Itti algorithm. The workpiece's spatial pose in the left camera's coordinate frame is obtained by the triangulation measurement principle. After a series of coordinates
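The rectify-then-match pipeline described in the annotation above bottoms out in disparity computation. As an illustrative sketch (not the authors' code, and the function name is an assumption), a minimal numpy block matcher shows the sum-of-absolute-differences idea behind OpenCV's StereoBM:

```python
import numpy as np

def disparity_sad(left, right, max_disp=16, block=5):
    """Minimal block-matching stereo on rectified grayscale images:
    for each left-image pixel, find the horizontal shift d of the
    right image that minimizes the sum of absolute differences (SAD)
    over a small window. Hypothetical sketch of the idea behind
    OpenCV's StereoBM, not a production matcher."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1]).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))  # winning shift = disparity
    return disp
```

In practice one would use `cv2.StereoBM_create` or `cv2.StereoSGBM_create` on the rectified pair; depth then follows from disparity via the triangulation principle the annotation mentions.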
4

Sai, Hesin, and Yoshikuni Okawa. "Structured Sign for Guidance of Mobile Robot." Journal of Robotics and Mechatronics 3, no. 5 (1991): 379–86. http://dx.doi.org/10.20965/jrm.1991.p0379.

Annotation:
As part of a guidance system for mobile robots operating on a wide, flat floor, such as an ordinary factory or a gymnasium, we have proposed a special-purpose sign. It consists of a cylinder with four slits and a fluorescent light placed on the axis of the cylinder. Two of the slits are parallel to each other, and the other two are angled. A robot obtains an image of the sign with a TV camera. After thresholding, we have four bright sets of pixels which correspond to the four slits of the cylinder. By measuring the relative distances between the four points, we compute the distance
5

Parpală, Radu Constantin, Mario Andrei Ivan, Lidia Florentina Parpală, Costel Emil Coteț, and Cicerone Laurențiu Popa. "Camera Calibration in High-Speed Robotic Assembly Operations." Applied Sciences 14, no. 19 (2024): 8687. http://dx.doi.org/10.3390/app14198687.

Annotation:
The increase in positioning accuracy and repeatability allowed the integration of robots in assembly operations using guidance systems (structured applications) or video acquisition systems (unstructured applications). This paper proposes a procedure to determine the measuring plane using a 3D laser camera. To validate the procedure, the camera coordinates and orientation will be verified using robot coordinates. This procedure is an essential element for camera calibration and consists of developing a mathematical model using the least square method and planar regression. The mathematical mod
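The planar-regression step described in the annotation above can be sketched with ordinary least squares. This is a hypothetical illustration of fitting a measuring plane z = ax + by + c to 3D camera points; the function name and interface are assumptions, not the paper's implementation:

```python
import numpy as np

def fit_plane(points):
    """Fit z = a*x + b*y + c to N x 3 points by ordinary least squares.
    Returns (a, b, c). Illustrative sketch of planar regression on
    3D camera data, assuming the plane is not near-vertical."""
    pts = np.asarray(points, dtype=float)
    # Design matrix [x, y, 1]; solve A @ [a, b, c] ~= z in least squares.
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs
```

For planes that may be vertical, a total-least-squares fit via SVD of the centered points is the more robust choice.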
6

Imasato, Akimitsu, and Noriaki Maru. "Guidance and Control of Nursing Care Robot Using Gaze Point Detector and Linear Visual Servoing." International Journal of Automation Technology 5, no. 3 (2011): 452–57. http://dx.doi.org/10.20965/ijat.2011.p0452.

Annotation:
The gaze guidance and control we propose for a nursing robot uses a gaze point detector (GPD) and linear visual servoing (LVS). The robot captures stereo camera images, presents them via a head-mounted display (HMD) to the user, calculates the user's gaze tracked by the camera, and moves toward the gaze point using LVS. Since, in this proposal, persons requiring nursing share the robot's field of view via the GPD, the closer they get to the target, the more accurate the control becomes. The GPD, worn on the user's head, comprises an HMD and a CCD camera.
7

Blais, François, Marc Rioux, and Jacques Domey. "Compact three-dimensional camera for robot and vehicle guidance." Optics and Lasers in Engineering 10, no. 3-4 (1989): 227–39. http://dx.doi.org/10.1016/0143-8166(89)90039-0.

8

Yang, Chun Hui, and Fu Dong Wang. "Trajectory Recognition and Navigation Control in the Mobile Robot." Key Engineering Materials 464 (January 2011): 11–14. http://dx.doi.org/10.4028/www.scientific.net/kem.464.11.

Annotation:
Fast and accurate acquisition of navigation information is the key premise for robot guidance. In this paper, a robot trajectory guidance system composed of a camera, a digital signal controller, and a mobile platform driven by stepper motors is presented. First, the JPEG (Joint Photographic Experts Group) image taken by the camera is decoded into the corresponding pixel image. Through binarization, the image is then transformed into a binary image. A fast line extraction algorithm is presented based on the Column Elementary Line Segment method. Furthermore, the trajectory direction deviation parameters and
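The binarize-then-extract flow described in the annotation above can be illustrated with a small, hypothetical sketch. The Column Elementary Line Segment method itself is not detailed here, so per-column centroids of the dark guide line serve as a stand-in; all names are illustrative assumptions:

```python
import numpy as np

def track_centers(gray, thresh=128):
    """Binarize a grayscale image and, for each column, return the
    centroid row of the dark (below-threshold) pixels. Hypothetical
    stand-in for column-wise line-segment extraction: the centers can
    afterwards be fit with a line to get the trajectory's direction
    deviation for steering the stepper motors."""
    binary = gray < thresh          # True where the dark guide line is
    centers = {}
    for x in range(gray.shape[1]):
        rows = np.flatnonzero(binary[:, x])
        if rows.size:
            centers[x] = rows.mean()
    return centers
```

Fitting `np.polyfit(list(centers), list(centers.values()), 1)` to the recovered centers would then give the line's slope, i.e. the heading deviation to feed back to the controller.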
9

Belmonte, Álvaro, José Ramón, Jorge Pomares, Gabriel Garcia, and Carlos Jara. "Optimal Image-Based Guidance of Mobile Manipulators using Direct Visual Servoing." Electronics 8, no. 4 (2019): 374. http://dx.doi.org/10.3390/electronics8040374.

Annotation:
This paper presents a direct image-based controller to perform the guidance of a mobile manipulator using image-based control. An eye-in-hand camera is employed to perform the guidance of a mobile differential platform with a seven degrees-of-freedom robot arm. The presented approach is based on an optimal control framework and it is employed to control mobile manipulators during the tracking of image trajectories taking into account robot dynamics. The direct approach allows us to take both the manipulator and base dynamics into account. The proposed image-based controllers consider the optim
10

Bazeille, Stephane, Emmanuel Battesti, and David Filliat. "A Light Visual Mapping and Navigation Framework for Low-Cost Robots." Journal of Intelligent Systems 24, no. 4 (2015): 505–24. http://dx.doi.org/10.1515/jisys-2014-0116.

Annotation:
We address the problems of localization, mapping, and guidance for robots with limited computational resources by combining vision with the metrical information given by the robot odometry. We propose in this article a novel light and robust topometric simultaneous localization and mapping framework using appearance-based visual loop-closure detection enhanced with the odometry. The main advantage of this combination is that the odometry makes the loop-closure detection more accurate and reactive, while the loop-closure detection enables the long-term use of odometry for guidance by co
More sources

Dissertations on the topic "Camera guidance for robot"

1

Pearson, Christopher Mark. "Linear array cameras for mobile robot guidance." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318875.

2

Grepl, Pavel. "Strojové vidění pro navádění robotu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-443727.

Annotation:
Master's thesis deals with the design, assembly, and testing of a camera system for localization of randomly placed and oriented objects on a conveyor belt with the purpose of guiding a robot on those objects. The theoretical part is focused on research in individual components making a camera system and on the field of 2D and 3D localization of objects. The practical part consists of two possible arrangements of the camera system, solution of the chosen arrangement, creating testing images, programming the algorithm for image processing, creating HMI, and testing the complete system.
3

Macknojia, Rizwan. "Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.

Annotation:
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect’s depth sensor capabilities and performance. The study comprises examination of the resolution, quantization error, and random distribution of depth data. In addition, the effects of color and reflectance characteristics of an object are also analyzed. The study e
4

Maier, Daniel, and Maren Bennewitz (academic supervisor). "Camera-based humanoid robot navigation." Freiburg: Universität, 2015. http://d-nb.info/1119452082/34.

5

Gu, Lifang. "Visual guidance of robot motion." University of Western Australia. Dept. of Computer Science, 1996. http://theses.library.uwa.edu.au/adt-WU2003.0004.

Annotation:
Future robots are expected to cooperate with humans in daily activities. Efficient cooperation requires new techniques for transferring human skills to robots. This thesis presents an approach on how a robot can extract and replicate a motion by observing how a human instructor conducts it. In this way, the robot can be taught without any explicit instructions and the human instructor does not need any expertise in robot programming. A system has been implemented which consists of two main parts. The first part is data acquisition and motion extraction. Vision is the most important sens
6

Arthur, Richard B. "Vision-Based Human Directed Robot Guidance." Diss., Brigham Young University, 2004. http://contentdm.lib.byu.edu/ETD/image/etd564.pdf.

7

Stark, Per. "Machine vision camera calibration and robot communication." Thesis, University West, Department of Technology, Mathematics and Computer Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1351.

Annotation:
This thesis is a part of a larger project included in the European project, AFFIX. The reason for the project is to try to develop a new method to assemble an aircraft engine part so that the weight and manufacturing costs are reduced. The proposal is to weld sheet metal parts instead of using cast parts. A machine vision system is suggested to be used in order to detect the joints for the weld assembly operation of the sheet metal. The final system aims to locate a hidden curve on an object. The coordinates for the curve are calculated by the machine vision system and sent to a robot. The
8

Snailum, Nicholas. "Mobile robot navigation using single camera vision." Thesis, University of East London, 2001. http://roar.uel.ac.uk/3565/.

Annotation:
This thesis describes the research carried out in overcoming the problems encountered during the development of an autonomous mobile robot (AMR) which uses a single television camera for navigation in environments with visible edges, such as corridors and hallways. The objective was to determine the minimal sensing and signal processing requirements for a real AMR that could achieve self-steering, navigation and obstacle avoidance in real unmodified environments. A goal was to design algorithms that could meet the objective while being able to run on a laptop personal computer (PC). This const
9

Toschi, Marco. "Towards Monocular Depth Estimation for Robot Guidance." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Annotation:
Human visual perception is a powerful tool to let us interact with the world, interpreting depth using both physiological and psychological cues. In the early days, machine vision was primarily inspired by physiological cues, guiding robots with bulky sensors based on focal length adjustments, pattern matching, and binocular disparity. In reality, however, we always get a certain degree of depth sensation from the monocular image reproduced on the retina, which is judged by our brain upon empirical grounds. With the advent of deep learning techniques, estimating depth from a monocular image ha
10

Xi, Min. "Image sequence guidance for mobile robot navigation." Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/36082/1/36082_Xi_1998.pdf.

Annotation:
Vision-based mobile robot navigation is a challenging issue in automated robot control. Using a camera as an active sensor requires the processing of a huge amount of visual data that is captured as an image sequence. The relevant visual information for a robot navigation system needs to be extracted from the visual data and used for real-time control. Several questions need to be answered, including: 1) What is the relevant information, and how to extract it from a sequence of 2D images? 2) How to recognise the 3D surrounding environment from the extracted images? 3) How to generate a collision-
More sources

Books on the topic "Camera guidance for robot"

1

Roth, Zvi S., ed. Camera-Aided Robot Calibration. CRC Press, 1996.

2

Horn, Geoffrey M. Camera operator. Gareth Stevens Pub., 2009.

3

Snailum, Nicholas. Mobile robot navigation using single camera vision. University of East London, 2001.

4

Great Britain, Joint Advisory Committee for Broadcasting and Performing Arts, and Great Britain, Health and Safety Executive, eds. Camera Operations on Location: Guidance for Managers and Camera Crews. HSE Books, 1997.

5

Pomerleau, Dean A. Neural Network Perception for Mobile Robot Guidance. Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-3192-0.

6

Pomerleau, Dean A. Neural Network Perception for Mobile Robot Guidance. Springer US, 1993.

7

Pomerleau, Dean A. Neural network perception for mobile robot guidance. Kluwer Academic Publishers, 1993.

8

Manatt, Kathleen G. Robot scientist. Cherry Lake Pub., 2007.

9

Steer, Barry. Navigation for the guidance of a mobile robot. typescript, 1985.

10

Christensen, H. I., Kevin Bowyer, and Horst Bunke, eds. Active Robot Vision: Camera Heads, Model Based Navigation and Reactive Control. World Scientific, 1993.

More sources

Book chapters on the topic "Camera guidance for robot"

1

Bihlmaier, Andreas. "Endoscope Robots and Automated Camera Guidance." In Learning Dynamic Spatial Relations. Springer Fachmedien Wiesbaden, 2016. http://dx.doi.org/10.1007/978-3-658-14914-7_2.

2

Baltes, Jacky. "Camera Calibration Using Rectangular Textures." In Robot Vision. Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_30.

3

Martínez, Antonio B., and Albert Larré. "Fast Mobile Robot Guidance." In Traditional and Non-Traditional Robotic Sensors. Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-75984-0_26.

4

Scheibe, Karsten, Hartmut Korsitzky, Ralf Reulke, Martin Scheele, and Michael Solbrig. "EYESCAN - A High Resolution Digital Panoramic Camera." In Robot Vision. Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_10.

5

Rojtberg, Pavel. "User Guidance for Interactive Camera Calibration." In Virtual, Augmented and Mixed Reality. Multimodal Interaction. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-21607-8_21.

6

Barnes, Nick, and Zhi-Qiang Liu. "Object Recognition Mobile Robot Guidance." In Knowledge-Based Vision-Guided Robots. Physica-Verlag HD, 2002. http://dx.doi.org/10.1007/978-3-7908-1780-5_4.

7

Bihlmaier, Andreas. "Intraoperative Robot-Based Camera Assistance." In Learning Dynamic Spatial Relations. Springer Fachmedien Wiesbaden, 2016. http://dx.doi.org/10.1007/978-3-658-14914-7_6.

8

Börner, Anko, Heiko Hirschmüller, Karsten Scheibe, Michael Suppa, and Jürgen Wohlfeil. "MFC - A Modular Line Camera for 3D World Modelling." In Robot Vision. Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-78157-8_24.

9

Tian, Jiandong. "Imaging Modeling and Camera Sensitivity Recovery." In All Weather Robot Vision. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6429-8_3.

10

Bobadilla, Leonardo, Katrina Gossman, and Steven M. LaValle. "Manipulating Ergodic Bodies through Gentle Guidance." In Robot Motion and Control 2011. Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-2343-9_23.


Conference papers on the topic "Camera guidance for robot"

1

Raj, Subin, Yashaswi Sinha, and Pradipta Biswas. "Mixed Reality based Robot Teleoperation with Haptic Guidance." In 2024 IEEE Conference on Telepresence. IEEE, 2024. https://doi.org/10.1109/telepresence63209.2024.10841737.

2

Carvalho, Ana Claudia, Ana Isabel Carvalho, and Ricardo Correia. "Robot With Vision Guidance System for Unscrewing Operation." In 2024 International Conference on Decision Aid Sciences and Applications (DASA). IEEE, 2024. https://doi.org/10.1109/dasa63652.2024.10836165.

3

Kameoka, Kanako, Shigeru Uchikado, and Sun Lili. "Visual Guidance for a Mobile Robot with a Camera." In TENCON 2006 - 2006 IEEE Region 10 Conference. IEEE, 2006. http://dx.doi.org/10.1109/tencon.2006.344018.

4

"Guidance of Robot Arms using Depth Data from RGB-D Camera." In 10th International Conference on Informatics in Control, Automation and Robotics. SciTePress - Science and Technology Publications, 2013. http://dx.doi.org/10.5220/0004481903150321.

5

Martinez-Rey, Miguel, Felipe Espinosa, Alfredo Gardel, Carlos Santos, and Enrique Santiso. "Mobile robot guidance using adaptive event-based pose estimation and camera sensor." In 2016 Second International Conference on Event-based Control, Communication, and Signal Processing (EBCCSP). IEEE, 2016. http://dx.doi.org/10.1109/ebccsp.2016.7605089.

6

Li, Zhiyuan, and Demao Ye. "Research on visual guidance algorithm of forking robot based on monocular camera." In Conference on Optics Ultra Precision Manufacturing and Testing, edited by Dawei Zhang, Lingbao Kong, and Xichun Luo. SPIE, 2020. http://dx.doi.org/10.1117/12.2575675.

7

Samu, Tayib, Nikhal Kelkar, David Perdue, Michael A. Ruthemeyer, Bradley O. Matthews, and Ernest L. Hall. "Line following using a two camera guidance system for a mobile robot." In Photonics East '96, edited by David P. Casasent. SPIE, 1996. http://dx.doi.org/10.1117/12.256287.

8

Parco-Llorona, Diego, Alvaro Chávez-Urbina, Jaime Huaytalla-Pariona, and Deyby Huamanchahua. "Proof of Concept of a Soft Robot for Camera Guidance in Surgical Interventions." In 2023 3rd International Conference on Emerging Smart Technologies and Applications (eSmarTA). IEEE, 2023. http://dx.doi.org/10.1109/esmarta59349.2023.10293509.

9

Parco Llorona, Diego Sthywen, Alvaro Adolfo Chavez Urbina, Alberto Jesus Torres Hinostroza, and Frank William Zárate Peña. "Conceptual Design of a Soft Robot for Camera Guidance in Endoscopic Surgical Interventions." In JCRAI 2023: 2023 International Joint Conference on Robotics and Artificial Intelligence. ACM, 2023. http://dx.doi.org/10.1145/3632971.3633044.

10

Sai, Heshin, and Yoshikuni Okawa. "A Structured Sign for the Guidance of Autonomous Mobile Robots." In ASME 1993 International Computers in Engineering Conference and Exposition. American Society of Mechanical Engineers, 1993. http://dx.doi.org/10.1115/cie1993-0019.

Annotation:
Abstract For the explicit purpose of guiding autonomous mobile robots on wide and flat planes, we propose a specially designed guiding sign. It consists of a cylinder with four slits on its surface, and a fluorescent light on the center axis of the cylinder. Two outer slits are parallel to each other, and the other two inner slits are angled. A robot takes an image of the sign with a TV camera. By the threshold operation, it has four bright sets of pixels each of which corresponds to one of the four slits of the cylinder. By measuring the relative distances between those four bright blobs in t

Organization reports on the topic "Camera guidance for robot"

1

Chen, J., W. E. Dixon, D. M. Dawson, and V. K. Chitrakaran. Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera. Defense Technical Information Center, 2004. http://dx.doi.org/10.21236/ada465705.

2

Haas, Gary, and Philip R. Osteen. Wall Sensing for an Autonomous Robot With a Three-Dimensional Time-of-Flight (3-D TOF) Camera. Defense Technical Information Center, 2011. http://dx.doi.org/10.21236/ada539897.

3

O'Dea, Annika, Nicholas Spore, Tanner Jernigan, et al. 3D measurements of water surface elevation using a flash lidar camera. Engineer Research and Development Center (U.S.), 2023. http://dx.doi.org/10.21079/11681/47496.

Annotation:
This Coastal and Hydraulics Engineering technical note (CHETN) presents preliminary results from a series of tests conducted at the US Army Engineer Research and Development Center (ERDC), Coastal and Hydraulics Laboratory (CHL), Field Research Facility (FRF), in Duck, North Carolina, to explore the capabilities and limitations of the GSFL16K Flash Lidar Camera in nearshore science and engineering applications. The document summarizes the spatial coverage and density of data collected in three deployment scenarios and with a range of tuning parameters and provides guidance for future deploymen
4

Lee, W. S., Victor Alchanatis, and Asher Levi. Innovative yield mapping system using hyperspectral and thermal imaging for precision tree crop management. United States Department of Agriculture, 2014. http://dx.doi.org/10.32747/2014.7598158.bard.

Annotation:
Original objectives and revisions – The original overall objective was to develop, test and validate a prototype yield mapping system for unit area to increase yield and profit for tree crops. Specific objectives were: (1) to develop a yield mapping system for a static situation, using hyperspectral and thermal imaging independently, (2) to integrate hyperspectral and thermal imaging for improved yield estimation by combining thermal images with hyperspectral images to improve fruit detection, and (3) to expand the system to a mobile platform for a stop-measure- and-go situation. There were no