Academic literature on the topic 'Vision, Monocular'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Vision, Monocular.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Vision, Monocular"

1. Coday, Mary P., Michael A. Warner, Kurt V. Jahrling, and Peter A. D. Rubin. "Acquired Monocular Vision." Ophthalmic Plastic and Reconstructive Surgery 18, no. 1 (January 2002): 56–63. http://dx.doi.org/10.1097/00002341-200201000-00009.

2. Holm, Ejler. "Nystagmus in Monocular Vision." Acta Ophthalmologica 5, no. 1-3 (May 27, 2009): 387–99. http://dx.doi.org/10.1111/j.1755-3768.1927.tb01020.x.

3. Uozato, Hiroshi. "Binocular and Monocular Vision." Japanese Orthoptic Journal 35 (2006): 61–66. http://dx.doi.org/10.4263/jorthoptic.35.61.

4. Kraut, Joel A., and Veronica Lopez-Fernandez. "Adaptation to Monocular Vision." International Ophthalmology Clinics 42, no. 3 (2002): 203–13. http://dx.doi.org/10.1097/00004397-200207000-00021.

5. Dickmanns, Ernst Dieter, and Volker Graefe. "Dynamic monocular machine vision." Machine Vision and Applications 1, no. 4 (December 1988): 223–40. http://dx.doi.org/10.1007/bf01212361.

6. Ihrig, Carolyn, and Daniel P. Schaefer. "Acquired Monocular Vision Rehabilitation program." Journal of Rehabilitation Research and Development 44, no. 4 (2007): 593. http://dx.doi.org/10.1682/jrrd.2006.06.0071.

7. Erkelens, C. J., and R. van Ee. "Monocular symmetry in binocular vision." Journal of Vision 7, no. 4 (March 1, 2007): 5. http://dx.doi.org/10.1167/7.4.5.

8. Guerrero, J. J., and C. Sagüés. "Navigation from Uncalibrated Monocular Vision." IFAC Proceedings Volumes 31, no. 3 (March 1998): 351–56. http://dx.doi.org/10.1016/s1474-6670(17)44110-3.

9. Abel, Sharon M., and Christine Tikuisis. "Sound localization with monocular vision." Applied Acoustics 66, no. 8 (August 2005): 932–44. http://dx.doi.org/10.1016/j.apacoust.2004.11.011.

10. Ramachandran, V. S., S. Cobb, and L. Levi. "Monocular double vision in strabismus." NeuroReport 5, no. 12 (July 1994): 1418. http://dx.doi.org/10.1097/00001756-199407000-00001.

Dissertations / Theses on the topic "Vision, Monocular"

1. Jama, Michal. "Monocular vision based localization and mapping." Diss., Kansas State University, 2011. http://hdl.handle.net/2097/8561.
Abstract:
Doctor of Philosophy, Department of Electrical and Computer Engineering. Balasubramaniam Natarajan; Dale E. Schinstock.
In this dissertation, two applications related to vision-based localization and mapping are considered: (1) improving satellite-navigation-based location estimates by using on-board camera images, and (2) deriving position information from a video stream and using it to aid the autopilot of an unmanned aerial vehicle (UAV).

In the first part of this dissertation, a method is presented for analyzing a minimization process called bundle adjustment (BA), used in stereo-imagery-based 3D terrain reconstruction to refine estimates of camera poses (positions and orientations). In particular, imagery obtained with pushbroom cameras is of interest. This work proposes a method to identify cases in which BA does not work as intended, i.e., cases in which the pose estimates returned by the BA are not more accurate than estimates provided by a satellite navigation system, due to the existence of degrees of freedom (DOF) in the BA. Use of inaccurate pose estimates causes warping and scaling effects in the reconstructed terrain and prevents the terrain from being used in scientific analysis. The main contributions of this part of the work include: 1) formulation of a method for detecting DOF in the BA; and 2) identifying that two camera geometries commonly used to obtain stereo imagery have DOF. This part also presents results demonstrating that avoidance of the DOF can give significant accuracy gains in aerial imagery.

The second part of this dissertation proposes a vision-based system for UAV navigation. This is a monocular vision-based simultaneous localization and mapping (SLAM) system, which measures the position and orientation of the camera and builds a map of the environment using a video stream from a single camera. This is different from common SLAM solutions that use sensors that measure depth, such as LIDAR, stereoscopic cameras, or depth cameras. The SLAM solution was built by significantly modifying and extending a recent open-source SLAM solution that is fundamentally different from the traditional approach to solving the SLAM problem. The modifications made are those needed to provide the position measurements necessary for the navigation solution on a UAV while simultaneously building the map, all while maintaining control of the UAV. The main contributions of this part include: 1) extension of the map-building algorithm to enable it to be used realistically while controlling a UAV and simultaneously building the map; 2) improved performance of the SLAM algorithm at lower camera frame rates; and 3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV that uses monocular vision for navigation is feasible, and can be effective in GPS-denied environments.
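
The degree-of-freedom check described above can be illustrated numerically: gauge freedoms in bundle adjustment show up as near-zero singular values of the problem's Jacobian. The sketch below is a generic illustration of that idea on a toy, randomly generated Jacobian; it is not the dissertation's detection method.

```python
# A minimal numerical sketch of detecting degrees of freedom (gauge freedoms)
# in bundle adjustment: unconstrained directions appear as near-zero singular
# values of the BA Jacobian. The Jacobian below is a toy stand-in, not one
# produced by a real photogrammetric pipeline.
import numpy as np


def count_gauge_freedoms(jacobian: np.ndarray, rel_tol: float = 1e-9) -> int:
    """Return the number of near-zero singular values of the BA Jacobian."""
    singular_values = np.linalg.svd(jacobian, compute_uv=False)
    threshold = rel_tol * singular_values.max()
    return int(np.sum(singular_values < threshold))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 200 residuals over 40 well-constrained parameters...
    well_constrained = rng.standard_normal((200, 40))
    # ...plus 3 extra columns that are linear combinations of the others,
    # mimicking unobservable directions (e.g., a free global translation).
    dependent = well_constrained @ rng.standard_normal((40, 3))
    jacobian = np.hstack([well_constrained, dependent])

    print("detected degrees of freedom:", count_gauge_freedoms(jacobian))
    # Expected output: 3
```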

2. Cheda, Diego. "Monocular Depth Cues in Computer Vision Applications." Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/121644.
Abstract:
Depth perception is a key aspect of human vision. It is a routine and essential visual task that humans perform effortlessly in many daily activities. It has often been associated with stereo vision, but humans have an amazing ability to perceive depth relations even from a single image by using several monocular cues. In the computer vision field, if image depth information were available, many tasks could be posed from a different perspective for the sake of higher performance and robustness. Nevertheless, given a single image, this possibility is usually discarded, since obtaining depth information has frequently relied on three-dimensional reconstruction techniques, which require two or more images of the same scene taken from different viewpoints. Recently, some proposals have shown the feasibility of computing depth information from single images. In essence, the idea is to take advantage of a priori knowledge of the acquisition conditions and the observed scene to estimate depth from monocular pictorial cues. These approaches try to estimate the scene depth maps precisely by employing computationally demanding techniques. However, to assist many computer vision algorithms, it is not really necessary to compute a costly and detailed depth map of the image. Indeed, just a rough depth description can be very valuable in many problems.

In this thesis, we have demonstrated how coarse depth information can be integrated in different tasks following holistic and alternative strategies to obtain more precise and robust results. In that sense, we have proposed a simple, but reliable enough, technique whereby image scene regions are categorized into discrete depth ranges to build a coarse depth map. Based on this representation, we have explored the potential usefulness of our method in three application domains from novel viewpoints: camera rotation parameter estimation, background estimation, and pedestrian candidate generation. In the first case, we have computed the rotation of a camera mounted in a moving vehicle using two novel methods that identify distant elements in the image, where the translation component of the image flow field is negligible. In background estimation, we have proposed a novel method to reconstruct the background by penalizing close regions in a cost function that integrates color, motion, and depth terms. Finally, we have benefited from the geometric and depth information available in single images for pedestrian candidate generation, significantly reducing the number of generated windows to be further processed by a pedestrian classifier. In all cases, results have shown that our depth-based approaches contribute to better performance.
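
As a loose illustration of the coarse-depth-map idea (not the region classifier proposed in the thesis), the sketch below bins image rows into a few discrete depth ranges using the vertical-position cue, assuming a forward-facing camera over a ground plane with the horizon near mid-image.

```python
# Illustrative sketch only: assign each pixel row to a discrete depth range
# using the vertical-position monocular cue (lower rows of a forward-facing
# scene over a ground plane are usually closer). The thesis builds its coarse
# depth map with a learned region classifier; this is the simplest stand-in.
import numpy as np


def coarse_depth_map(height: int, width: int, n_ranges: int = 4) -> np.ndarray:
    """Return an (height, width) array of depth-range labels 0..n_ranges-1.

    Label 0 = nearest range (bottom of the image), n_ranges-1 = farthest.
    Rows above the horizon (assumed at mid-height) get the farthest label.
    """
    horizon_row = height // 2                      # crude horizon assumption
    labels = np.full((height, width), n_ranges - 1, dtype=np.int32)
    below = np.arange(horizon_row, height)         # rows below the horizon
    # Map bottom row -> 0 (near), horizon row -> n_ranges - 1 (far).
    row_labels = np.floor(
        (height - 1 - below) / max(height - horizon_row, 1) * n_ranges
    ).astype(np.int32)
    labels[below, :] = np.clip(row_labels, 0, n_ranges - 1)[:, None]
    return labels


if __name__ == "__main__":
    depth_labels = coarse_depth_map(480, 640)
    print(np.unique(depth_labels))                 # e.g. [0 1 2 3]
```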

3. Veldman, Kyle John. "Monocular vision for collision avoidance in vehicles." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101478.
Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2015. Cataloged from PDF version of thesis. Includes bibliographical references (page 21).
An experimental study, facilitated by Ford Global Technologies, Inc., on the potential substitution of stereovision systems in car automation with monocular vision systems. The monocular system pairs a camera and passive lens with an active lens. Most active lenses require linear actuating systems to adjust the optical parameters of the system, but this experiment employed an Optotune focus-tunable lens adjusted by a Lorentz actuator for a much more reliable system. Tests were conducted in a lab environment to capture images of environmental objects at different distances from the system and pass those images through an image-processing algorithm operating a high-pass filter to separate in-focus aspects of the image from out-of-focus ones. Although the system is in the early phases of testing, monocular vision shows the ability to replace stereovision systems. However, additional testing must be done to acclimate the apparatus to environmental factors, reduce the processing time, and redesign the system for portability.
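
The focus-sweep principle described above can be sketched generically: capture frames at several lens settings, score each patch with a high-pass (sharpness) measure, and treat the sharpest setting as a depth proxy. In the sketch below, `capture_at_focus` is a hypothetical placeholder for the camera and tunable-lens hardware; nothing here reproduces the thesis's actual apparatus or algorithm.

```python
# Generic depth-from-focus sketch. Hardware access is mocked out: the
# capture_at_focus function below is a hypothetical stand-in for grabbing a
# grayscale frame while the tunable lens is driven to a given setting.
import numpy as np


def capture_at_focus(setting: float) -> np.ndarray:
    """Hypothetical placeholder: return a grayscale frame for a lens setting."""
    rng = np.random.default_rng(int(setting * 100))
    return rng.random((120, 160))


def highpass_energy(patch: np.ndarray) -> float:
    """Mean squared response of a discrete Laplacian (a simple sharpness score)."""
    lap = (
        -4 * patch[1:-1, 1:-1]
        + patch[:-2, 1:-1] + patch[2:, 1:-1]
        + patch[1:-1, :-2] + patch[1:-1, 2:]
    )
    return float(np.mean(lap ** 2))


def best_focus_for_patch(settings, patch=np.s_[40:80, 60:100]) -> float:
    """Return the lens setting whose frame is sharpest inside one image patch."""
    scores = [highpass_energy(capture_at_focus(s)[patch]) for s in settings]
    return settings[int(np.argmax(scores))]


if __name__ == "__main__":
    focus_settings = [0.0, 0.25, 0.5, 0.75, 1.0]
    print("sharpest at lens setting:", best_focus_for_patch(focus_settings))
```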

4. Ng, Matthew James. "Corridor Navigation for Monocular Vision Mobile Robots." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1856.
Abstract:
Monocular vision robots use a single camera to gather information about their environment. By analyzing this scene, the robot can determine the best direction in which to navigate. Many modern approaches to robot hallway navigation involve a plethora of sensors to detect features in the environment: laser range finders, inertial measurement units, motor encoders, and cameras. Even when all these sensors are combined, there is unused data that could be useful for navigation. To step back and develop a baseline approach, this thesis explores the reliability and capability of using a camera alone for navigation. The basic navigation structure begins by taking frames from the camera and breaking them down to find the most prominent lines. The location where these lines intersect determines the forward direction in which to drive the robot. To improve the accuracy of navigation, algorithmic improvements and additional features from the camera frames are used, including line-intersection weighting to reduce noise from extraneous lines, floor segmentation to improve rotational stability, and person detection.
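
The core step of detecting prominent lines and steering toward their common intersection can be sketched with standard OpenCV calls. The snippet below is a generic vanishing-point estimate from Hough line segments; it omits the intersection weighting, floor segmentation, and person detection described above, and its Canny and Hough thresholds are arbitrary example values.

```python
# Generic sketch of the corridor-navigation core step: detect prominent lines
# with a probabilistic Hough transform and treat the median of their pairwise
# intersections as the vanishing point / forward steering target.
import cv2
import numpy as np


def vanishing_point(gray: np.ndarray):
    """Return an (x, y) steering target from line intersections, or None."""
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                               minLineLength=40, maxLineGap=10)
    if segments is None:
        return None

    segs = [tuple(float(v) for v in seg[0]) for seg in segments]
    points = []
    for i in range(len(segs)):
        x1, y1, x2, y2 = segs[i]
        for j in range(i + 1, len(segs)):
            x3, y3, x4, y4 = segs[j]
            # Intersection of the two infinite lines through the segments.
            denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
            if abs(denom) < 1e-6:                 # nearly parallel lines
                continue
            px = ((x1 * y2 - y1 * x2) * (x3 - x4)
                  - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
            py = ((x1 * y2 - y1 * x2) * (y3 - y4)
                  - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
            points.append((px, py))

    if not points:
        return None
    pts = np.array(points)
    return float(np.median(pts[:, 0])), float(np.median(pts[:, 1]))


if __name__ == "__main__":
    frame = cv2.imread("corridor.jpg", cv2.IMREAD_GRAYSCALE)  # any test image
    if frame is not None:
        print("steering target:", vanishing_point(frame))
```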

5. Pereira, Fabio Irigon. "High precision monocular visual odometry." Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/183233.
Abstract:
Recovering three-dimensional information from two-dimensional images is an important problem in computer vision that finds several applications in our society. Robotics, the entertainment industry, medical diagnosis and prosthetics, and even interplanetary exploration benefit from vision-based 3D estimation. The problem can be divided into two interdependent operations: estimating the camera position and orientation when each image was produced, and estimating the 3D scene structure. This work focuses on computer vision techniques used to estimate the trajectory of a vehicle equipped with a camera, a problem known as visual odometry. In order to provide an objective measure of estimation efficiency and to compare the achieved results to the state-of-the-art works in visual odometry, a popular high-precision dataset was selected and used. In the course of this work, new techniques for image feature tracking, camera pose estimation, 3D point position calculation, and scale recovery are proposed. The achieved results outperform the best-ranked results on the chosen dataset.
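
A minimal two-frame monocular visual-odometry step (feature tracking, essential-matrix estimation with RANSAC, pose recovery) can be sketched with OpenCV as below. This is a generic baseline rather than the thesis's high-precision pipeline, the intrinsic matrix K is an assumed example, and, as with any monocular method, the translation is recovered only up to scale.

```python
# Minimal two-frame monocular visual odometry sketch with OpenCV: track FAST
# corners with sparse optical flow, estimate the essential matrix with RANSAC,
# and recover the relative camera rotation and (up-to-scale) translation.
import cv2
import numpy as np

# Assumed pinhole intrinsics [[fx, 0, cx], [0, fy, cy], [0, 0, 1]];
# replace with your camera's calibration.
K = np.array([[718.0, 0.0, 607.0],
              [0.0, 718.0, 185.0],
              [0.0, 0.0, 1.0]])


def relative_pose(img0: np.ndarray, img1: np.ndarray):
    """Return (R, t_unit) between two grayscale frames, or None on failure."""
    keypoints = cv2.FastFeatureDetector_create(threshold=25).detect(img0)
    if len(keypoints) < 8:
        return None
    p0 = cv2.KeyPoint_convert(keypoints).astype(np.float32).reshape(-1, 1, 2)

    # Track the corners into the second frame with pyramidal Lucas-Kanade.
    p1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None)
    good = status.ravel() == 1
    pts0, pts1 = p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)
    if len(pts0) < 8:
        return None

    E, inliers = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    if E is None or E.shape[0] != 3:
        return None
    _, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=inliers)
    return R, t                          # t has unit norm (scale is unobservable)


if __name__ == "__main__":
    f0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # any consecutive pair
    f1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
    if f0 is not None and f1 is not None:
        print(relative_pose(f0, f1))
```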

6. Goroshin, Rostislav. "Obstacle detection using a monocular camera." Thesis, Atlanta, Ga.: Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24697.

7. Benoit, Stephen M. "Monocular optical flow for real-time vision systems." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23862.
Abstract:
This thesis introduces a monocular optical flow algorithm that has been shown to perform well at nearly real-time frame rates (4 FPS) on natural image sequences. The system is completely bottom-up, using pixel region-matching techniques. A coordinated gradient descent method is broken down into two stages: pixel region-matching error measures are locally minimized, and flow field consistency constraints apply non-linear adaptive diffusion, causing confident measurements to influence their less confident neighbors. Convergence is usually accomplished within one iteration for an image frame pair. Temporal integration and Kalman filtering predict upcoming flow fields and figure/ground separation. The algorithm is designed for flexibility: large displacements are tracked as easily as sub-pixel displacements, and higher-level information can feed flow field predictions into the measurement process.
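
For a practical reference point, a dense optical-flow field can be computed with an off-the-shelf OpenCV routine as below. This is Farneback's method with arbitrary example parameters, offered purely for orientation; it is not the region-matching, adaptive-diffusion algorithm the thesis develops.

```python
# Generic dense optical-flow example (Farneback's polynomial-expansion method
# in OpenCV), offered only as a practical reference point.
import cv2
import numpy as np


def dense_flow(prev_gray: np.ndarray, next_gray: np.ndarray) -> np.ndarray:
    """Return an (H, W, 2) array of per-pixel (dx, dy) displacements."""
    return cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)


if __name__ == "__main__":
    a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)   # any image pair
    b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)
    if a is not None and b is not None:
        flow = dense_flow(a, b)
        magnitude = np.linalg.norm(flow, axis=2)
        print("median displacement (pixels):", float(np.median(magnitude)))
```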

8. 李宏釗 and Wan-chiu Li. "Localization of a mobile robot by monocular vision." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31226371.

9. Malan, Daniel Francois. "3D tracking between satellites using monocular computer vision." Thesis, Stellenbosch: University of Stellenbosch, 2005. http://hdl.handle.net/10019.1/3081.
Abstract:
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2005.
Visually estimating the three-dimensional position, orientation, and motion between an observer and a target is an important problem in computer vision. Solutions that compute three-dimensional movement from two-dimensional intensity images usually rely on stereoscopic vision. Some research has also been done on systems utilising a single (monocular) camera. This thesis investigates methods for estimating position and pose from monocular image sequences. The intended future application is visual tracking between satellites flying in close formation. The ideas explored in this thesis build on methods developed for use in camera calibration and structure from motion (SfM). All these methods rely heavily on different variations of the Kalman filter. After describing the problem from a mathematical perspective, we develop different approaches to solving the estimation problem. The different approaches are successfully tested on simulated as well as real-world image sequences, and their performance is analysed.
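
The filtering machinery this kind of work builds on can be illustrated with the simplest possible case: a linear constant-velocity Kalman filter tracking a single relative-position coordinate from noisy measurements. All values in the sketch below are arbitrary illustrative numbers, not anything drawn from the thesis.

```python
# Minimal constant-velocity Kalman filter over one coordinate, as a toy
# illustration of the predict/update framework used for relative-motion
# estimation. Noise levels and the simulated target are arbitrary.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])                  # only position is measured
Q = 1e-3 * np.eye(2)                        # process noise covariance
R = np.array([[0.5]])                       # measurement noise covariance


def kalman_track(measurements):
    """Run predict/update steps over a sequence of 1-D position measurements."""
    x = np.zeros((2, 1))                    # initial state estimate
    P = np.eye(2)                           # initial state covariance
    estimates = []
    for z in measurements:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = np.array([[z]]) - H @ x         # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_positions = 0.4 * np.arange(30)                    # steady approach
    noisy = true_positions + rng.normal(0.0, 0.7, size=30)  # sensor noise
    print(kalman_track(noisy)[-5:])
```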

10. Li, Wan-chiu. "Localization of a mobile robot by monocular vision." Hong Kong: University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23765896.

Books on the topic "Vision, Monocular"

1. Lepetit, Vincent. Monocular model-based 3D tracking of rigid objects. Boston, MA: NOW Publishers, 2005.

2. Salzmann, Mathieu. Deformable surface 3D reconstruction from monocular images. San Rafael, Calif.: Morgan & Claypool, 2011.

3. Ṣafadī, Khalīl ibn Aybak. al-Shuʻūr bi-al-ʻūr. ʻAmmān, al-Urdun: Dār ʻĀmmār, 1988.

4. Royal National Institute for the Blind. Sight in one eye only (monocular vision) and people with learning difficulties. London: R.N.I.B., 2002.

5. Schaeren, Peter. Real-time 3-D scene acquisition by monocular motion induced stereo. Konstanz: Hartung-Gorre, 1994.

6. Verghese, Gilbert. Perspective alignment back-projection for real-time monocular three-dimensional model-based computer vision. Toronto: Dept. of Computer Science, University of Toronto, 1995.

7. Brady, Frank B. A singular view: The art of seeing with one eye. 5th ed. Annapolis, Md.: F.B. Brady, 1994.

8. Brady, Frank B. A singular view: The art of seeing with one eye. Toronto: Edgemore Enterprises, 1992.

9. Assaré, Patativa do, ed. O Patativa que eu conheci. Campinas, SP: Komedi, 2010.

10. Ontario. Ministry of Transportation. Safety Research Office, ed. Monocular vision and commercial motor vehicle safety. [Downsview, Ont.]: Safety Research Office, Safety Policy Branch, 1995.

Book chapters on the topic "Vision, Monocular"

1. Said, Engy T., and Bishoy Said. "Postoperative Monocular Vision Loss." In Clinical Anesthesiology, 453–61. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-8696-1_54.

2. Poeck, Klaus. "Monocular Loss of Vision." In Diagnostic Decisions in Neurology, 84–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 1985. http://dx.doi.org/10.1007/978-3-642-70693-6_22.

3. Merriott, David, Steven Carter, and Lilangi S. Ediriwickrema. "Transient Monocular Vision Loss." In Controversies in Neuro-Ophthalmic Management, 171–87. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74103-7_17.

4. Luo, Chong, and Wenjun Zeng. "Monocular and Binocular People Tracking." In Computer Vision, 1–4. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_872-1.

5. Shirai, Yoshiaki. "Shape from Monocular Images." In Three-Dimensional Computer Vision, 141–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 1987. http://dx.doi.org/10.1007/978-3-642-82429-6_8.

6. Vernon, David. "Monocular Vision — Segmentation in Additive Images." In Fourier Vision, 27–48. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1413-8_3.

7. Vernon, David. "Monocular Vision — Segmentation in Occluding Images." In Fourier Vision, 49–73. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1413-8_4.

8. Li, Ruilong, Yuliang Xiu, Shunsuke Saito, Zeng Huang, Kyle Olszewski, and Hao Li. "Monocular Real-Time Volumetric Performance Capture." In Computer Vision – ECCV 2020, 49–67. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58592-1_4.

9. Lerasle, F., G. Rives, M. Dhome, and A. Yassine. "Human body tracking by monocular vision." In Lecture Notes in Computer Science, 518–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61123-1_166.

10. Sayd, Patrick, Michel Dhome, and Jean-Marc Lavest. "Recovering Generalized Cylinders by monocular vision." In Object Representation in Computer Vision II, 25–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61750-7_22.

Conference papers on the topic "Vision, Monocular"

1. Hwang, Jihye, Yeounggwang Ji, and Eun Yi Kim. "Monocular vision-based collision avoidance system." In the 14th international conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2371664.2371688.

2. Yang, Jiaolong, Lei Chen, and Wei Liang. "Monocular vision based robot self-localization." In 2010 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2010. http://dx.doi.org/10.1109/robio.2010.5723497.

3. Royer, E., J. Bom, M. Dhome, B. Thuilot, M. Lhuillier, and F. Marmoiton. "Outdoor autonomous navigation using monocular vision." In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2005. http://dx.doi.org/10.1109/iros.2005.1545495.

4. Stein, Gregory J., Christopher Bradley, Victoria Preston, and Nicholas Roy. "Enabling Topological Planning with Monocular Vision." In 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020. http://dx.doi.org/10.1109/icra40945.2020.9197484.

5. Chang, C. C., and X. H. Xiao. "Monocular vision technique for vibration measurement." In Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2009. SPIE, 2009. http://dx.doi.org/10.1117/12.815424.

6. Sakamoto, Kunio, Kazuki Saruta, and Kazutoki Takeda. "Monocular multiview stereoscopic 3D vision system." In Photonics West 2001 - Electronic Imaging, edited by Stephen A. Benton, Sylvia H. Stevenson, and T. John Trout. SPIE, 2001. http://dx.doi.org/10.1117/12.429481.

7. Kumar, Anil, Hailin Ren, and Pinhas Ben-Tzvi. "Obstacle Identification for Vision Assisted Control Architecture of a Hybrid Mechanism Mobile Robot." In ASME 2017 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/dscc2017-5324.
Abstract:
This paper presents a monocular vision-based, unsupervised floor-detection algorithm for semi-autonomous control of a Hybrid Mechanism Mobile Robot (HMMR). The paper primarily focuses on combining monocular vision cues with inertial sensing and ultrasonic ranging for on-line obstacle identification and path planning in the event of limited wireless connectivity. A novel, unsupervised vision algorithm was developed to detect the floor and identify traversable areas in order to avoid obstacles within the semi-autonomous control architecture. The floor-detection algorithms were validated and experimentally tested in an indoor environment under various lighting conditions.
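
As a rough illustration of monocular floor detection (not the authors' algorithm, which also fuses inertial sensing and ultrasonic ranging), the sketch below learns a simple color model from a strip at the bottom of the frame, assumed to be floor directly ahead of the robot, and marks similar pixels elsewhere as traversable.

```python
# Rough monocular floor-detection sketch: build a simple HSV color model from
# a strip at the bottom of the frame (assumed to be floor in front of the
# robot) and mark pixels elsewhere whose color falls inside that model.
import cv2
import numpy as np


def floor_mask(bgr_frame: np.ndarray, strip_height: int = 40) -> np.ndarray:
    """Return a uint8 mask (255 = likely floor) for a color camera frame."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    seed = hsv[-strip_height:, :, :].reshape(-1, 3)   # bottom strip, assumed floor
    mean = seed.mean(axis=0)
    std = seed.std(axis=0) + 1e-6

    lower = np.clip(mean - 2.5 * std, 0, 255).astype(np.uint8)
    upper = np.clip(mean + 2.5 * std, 0, 255).astype(np.uint8)
    mask = cv2.inRange(hsv, lower, upper)

    # Light morphological clean-up to remove speckle.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)


if __name__ == "__main__":
    frame = cv2.imread("indoor_scene.jpg")            # any test image
    if frame is not None:
        traversable = floor_mask(frame)
        print("floor fraction:", float((traversable > 0).mean()))
```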

8. Eade, E. D., and T. W. Drummond. "Edge Landmarks in Monocular SLAM." In British Machine Vision Conference 2006. British Machine Vision Association, 2006. http://dx.doi.org/10.5244/c.20.2.

9. Castelow, D. A., and A. J. Rerolle. "A Monocular Ground Plane Estimation System." In British Machine Vision Conference 1991. Springer-Verlag London Limited, 1991. http://dx.doi.org/10.5244/c.5.58.

10. Marzorati, D., M. Matteucci, D. A. Migliore, and D. G. Sorrenti. "Monocular SLAM with Inverse Scaling Parametrization." In British Machine Vision Conference 2008. British Machine Vision Association, 2008. http://dx.doi.org/10.5244/c.22.94.

Reports on the topic "Vision, Monocular"

1. CuQlock-Knopp, V. G., Dawn E. Sipes, Warren Torgerson, Edward Bender, and John O. Merritt. Extended Use of Night Vision Goggles: An Evaluation of Comfort for Monocular and Biocular Configurations. Fort Belvoir, VA: Defense Technical Information Center, July 1997. http://dx.doi.org/10.21236/ada328485.

2. CuQlock-Knopp, V. G., Warren Torgerson, Dawn E. Sipes, Edward Bender, and John O. Merritt. A Comparison of Monocular, Biocular, and Binocular Night Vision Goggles for Traversing Off-Road Terrain on Foot. Fort Belvoir, VA: Defense Technical Information Center, March 1995. http://dx.doi.org/10.21236/ada294018.