Academic literature on the topic 'Vision sensor'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Vision sensor.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Vision sensor"

1

Yang, Le, Han Wang, Jiajian Zheng, Xin Duan, and Qishuo Cheng. "Research and Application of Visual Object Recognition System Based on Deep Learning and Neural Morphological Computation." International Journal of Computer Science and Information Technology 2, no. 1 (2024): 10–17. http://dx.doi.org/10.62051/ijcsit.v2n1.02.

Abstract:
The development of advanced optoelectronic vision sensors for high-level image recognition and data preprocessing is poised to accelerate the progress of machine vision and mobile electronic technology. Compared to traditional sensory computing methods, such as analog-to-digital signal conversion and digital logic computation tasks (i.e., Von Neumann computing), neural morphological vision computing can significantly improve energy efficiency and data processing speed by minimizing unnecessary raw data transmission between front-end photosensitive sensors and back-end processors. Neural morpho…
2

Chai, Yang. "(Invited) Bioinspired in-Sensor Computing for Artificial Vision." ECS Meeting Abstracts MA2024-02, no. 35 (2024): 2466. https://doi.org/10.1149/ma2024-02352466mtgabs.

Abstract:
The visual scene in the physical world integrates multidimensional information (spatial, temporal, polarization, spectrum, etc.) and typically displays unstructured characteristics. Conventional image sensors cannot process this multidimensional vision data, creating a need for vision sensors that can efficiently extract features from substantial multidimensional vision data. Vision sensors are able to transform the unstructured visual scene into featured information without relying on sophisticated algorithms and complex hardware. In this talk, I will describe our team’s efforts towards bioins…
3

Bassett, J., and G. Walker. "A Split Image Vision Sensor." Journal of Engineering for Industry 117, no. 1 (1995): 94–101. http://dx.doi.org/10.1115/1.2803284.

Abstract:
A vision sensor has been developed that uses only two lenses, a split prism, and a detector to acquire an image. This system uses the split prism to create a split image such that the displacement of the image is proportional to its range from the sensor. Prototype sensors have been examined both theoretically and experimentally, and have been found to measure object ranges with less than ±2 percent error. Acquisition of a single-point depth measurement is sufficiently fast for real-time use, and the optical components needed to build the sensor are inexpensive. The effect that each optical co…
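The range relation this abstract describes invites a tiny worked example. A minimal sketch in Python, assuming the stated proportionality between split-image displacement and range; the calibration points and constants below are hypothetical, not from the paper:

```python
# Sketch: range from split-image displacement (hypothetical calibration).
# The abstract states the prism yields an image displacement proportional
# to object range, so a linear fit over calibration data suffices here.
import numpy as np

# Hypothetical calibration: displacement (pixels) measured at known ranges (mm).
disp_px = np.array([12.0, 18.0, 24.0, 30.0])
range_mm = np.array([200.0, 300.0, 400.0, 500.0])

k, b = np.polyfit(disp_px, range_mm, 1)   # range ~= k * displacement + b

def estimate_range(displacement_px: float) -> float:
    """Map a measured split-image displacement to object range (mm)."""
    return k * displacement_px + b

print(estimate_range(21.0))  # ~350 mm with the sample calibration above
```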
4

Sundar, Varun, and Mohit Gupta. "Quanta Computer Vision." XRDS: Crossroads, The ACM Magazine for Students 31, no. 2 (2024): 38–43. https://doi.org/10.1145/3703403.

Abstract:
Light impinges on a camera's sensor as a collection of discrete quantized elements, or photons. An emerging class of devices, called single-photon sensors, offers the unique capability of detecting individual photons with high-timing precision. With the increasing accessibility of high-resolution single-photon sensors, we can now explore what computer vision would look like if we could operate on light, one photon at a time.
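For a sense of what operating on light "one photon at a time" means computationally, here is a minimal sketch, assuming an idealized single-photon sensor with binary readout and Poisson photon arrivals; the flux value and frame count are illustrative, not from the article:

```python
# Sketch: recovering intensity from single-photon binary frames.
# Each pixel in each frame reports 1 if at least one photon arrived
# (Poisson arrivals), else 0 -- an idealized quanta-sensor model.
import numpy as np

rng = np.random.default_rng(0)
flux = 0.4                      # mean photons per pixel per frame (assumed)
n_frames, n_pixels = 1000, 64

# P(detection) = 1 - exp(-flux) for Poisson arrivals with binary readout.
binary_frames = rng.random((n_frames, n_pixels)) < 1.0 - np.exp(-flux)

# Invert the mean detection rate for a maximum-likelihood flux estimate.
rate = np.clip(binary_frames.mean(axis=0), 0.0, 1.0 - 1e-9)
flux_hat = -np.log(1.0 - rate)
print(flux_hat.mean())          # ~0.4, matching the assumed flux
```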
5

Hasegawa, Hiroaki, Yosuke Suzuki, Aiguo Ming, Masatoshi Ishikawa, and Makoto Shimojo. "Robot Hand Whose Fingertip Covered with Net-Shape Proximity Sensor - Moving Object Tracking Using Proximity Sensing -." Journal of Robotics and Mechatronics 23, no. 3 (2011): 328–37. http://dx.doi.org/10.20965/jrm.2011.p0328.

Abstract:
Occlusion within several millimeters of an object to be grasped makes it difficult for a vision-sensor-based approach to detect the relative positioning between the object and the robot fingers during grasping. The proximity sensor we proposed detects the object at close range very effectively. We developed a thin proximity sensor sheet to cover the 3 fingers of a robot hand. Integrating sensors and hand control, we implemented an object-tracking controller. Using proximity sensory signals, the controller coordinates wrist positioning based on palm proximity sensors and grasping from fingertip sensors…
6

Yuhara, H. "Stereo vision sensor." JSAE Review 21, no. 4 (2000): 529–34. http://dx.doi.org/10.1016/s0389-4304(00)00080-1.

7

Kolar, Prasanna, Patrick Benavidez, and Mo Jamshidi. "Survey of Datafusion Techniques for Laser and Vision Based Sensor Integration for Autonomous Navigation." Sensors 20, no. 8 (2020): 2180. http://dx.doi.org/10.3390/s20082180.

Abstract:
This paper focuses on data fusion, which is fundamental to one of the most important modules in any autonomous system: perception. Over the past decade, there has been a surge in the usage of smart/autonomous mobility systems. Such systems can be used in various areas of life like safe mobility for the disabled, senior citizens, and so on and are dependent on accurate sensor information in order to function optimally. This information may be from a single sensor or a suite of sensors with the same or different modalities. We review various types of sensors, their data, and the need for fusion…
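As a concrete instance of the fusion problem this survey reviews, a minimal sketch of inverse-variance weighting of two independent measurements of the same quantity; the laser and vision noise figures are assumptions for illustration, not values from the paper:

```python
# Sketch: the simplest flavor of sensor fusion -- inverse-variance
# weighting of two noisy estimates of one state (e.g., range to an
# obstacle from a laser scanner and from a stereo camera).

def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Fuse two independent Gaussian measurements of one state."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2)   # fused variance <= min(var1, var2)

laser = (10.02, 0.01)   # accurate (assumed variance)
vision = (10.50, 0.25)  # noisier (assumed variance)
print(fuse(*laser, *vision))  # dominated by the laser: ~10.04 m
```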
8

Fossum, Eric R., Nobukazu Teranishi, and Albert J. P. Theuwissen. "Digital Image Sensor Evolution and New Frontiers." Annual Review of Vision Science 10, no. 1 (2024): 171–98. http://dx.doi.org/10.1146/annurev-vision-101322-105538.

Abstract:
This article reviews nearly 60 years of solid-state image sensor evolution and identifies potential new frontiers in the field. From early work in the 1960s, through the development of charge-coupled device image sensors, to the complementary metal oxide semiconductor image sensors now ubiquitous in our lives, we discuss highlights in the evolutionary chain. New frontiers, such as 3D stacked technology, photon-counting technology, and others, are briefly discussed.
9

Cho, Dooyong, and Junho Gong. "A Feasibility Study on Extension of Measurement Distance in Vision Sensor Using Super-Resolution for Dynamic Response Measurement." Sensors 23, no. 20 (2023): 8496. http://dx.doi.org/10.3390/s23208496.

Abstract:
The current civil infrastructure conditions can be assessed through the measurement of displacement using conventional contact-type sensors. To address the disadvantages of traditional sensors, vision-based sensor measurement systems have been derived in numerous studies and proven as an alternative to traditional sensors. Despite the benefits of the vision sensor, it is well known that the accuracy of the vision-based displacement measurement is largely dependent on the camera extrinsic or intrinsic parameters. In this study, the feasibility of a deep learning-based single image super-r…
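The dependence on camera parameters that this abstract mentions can be made concrete with a back-of-the-envelope pixel-footprint calculation; a minimal sketch with illustrative pixel pitch and focal length (the paper's actual setup may differ):

```python
# Sketch: why super-resolution can extend measurement distance.
# For a pinhole camera, one pixel spans roughly d * p / f of the scene
# (d: distance, p: pixel pitch, f: focal length). Upscaling by a factor
# s shrinks the effective footprint, so the same displacement resolution
# is kept at s-times the distance. All numbers are illustrative.

def pixel_footprint_mm(d_mm: float, pitch_um: float, f_mm: float,
                       sr: float = 1.0) -> float:
    return d_mm * (pitch_um * 1e-3) / f_mm / sr

print(pixel_footprint_mm(10_000, 4.8, 50))          # ~0.96 mm/pixel at 10 m
print(pixel_footprint_mm(40_000, 4.8, 50, sr=4.0))  # same at 40 m with 4x SR
```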
10

Salim, Aya Zuhair, and Luma Issa Abdul-Kareem. "A Review of Advances in Bio-Inspired Visual Models Using Event- and Frame-Based Sensors." Advances in Technology Innovation 10, no. 1 (2025): 44–57. https://doi.org/10.46604/aiti.2024.14121.

Abstract:
This paper reviews visual system models using event- and frame-based vision sensors. The event-based sensors mimic the retina by recording data only in response to changes in the visual field, thereby optimizing real-time processing and reducing redundancy. In contrast, frame-based sensors capture duplicate data, requiring more processing resources. This research develops a hybrid model that combines both sensor types to enhance efficiency and reduce latency. Through simulations and experiments, this approach addresses limitations in data integration and speed, offering improvements over exist…
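To illustrate the event- versus frame-based distinction this review draws, a minimal sketch of DVS-style event generation from an ordinary frame sequence; the log-intensity threshold model is a common simplification, not the review's specific formulation, and the threshold value is an assumption:

```python
# Sketch: event generation from frames. A pixel emits an event only when
# its log-intensity changes by more than a threshold, so static regions
# produce no data -- the redundancy reduction the abstract describes.
import numpy as np

def frames_to_events(frames: np.ndarray, threshold: float = 0.2):
    """frames: (T, H, W) intensities in (0, 1]; returns (t, y, x, polarity)."""
    ref = np.log(frames[0] + 1e-6)            # per-pixel reference level
    events = []
    for t in range(1, len(frames)):
        logf = np.log(frames[t] + 1e-6)
        diff = logf - ref
        fired = np.abs(diff) >= threshold
        for y, x in zip(*np.nonzero(fired)):
            events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
        ref[fired] = logf[fired]              # reset reference where events fired
    return events
```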

Dissertations / Theses on the topic "Vision sensor"

1

Ollesson, Niklas. "Automatic Configuration of Vision Sensor." Thesis, Linköpings universitet, Datorseende, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93415.

Abstract:
In factory automation, cameras and image processing algorithms can be used to inspect objects. This can decrease the number of faulty objects that leave the factory and reduce the manual labour needed. A vision sensor is a system in which the camera and image processing are delivered together, and that only needs to be configured for the application it is to be used for. Thus no programming knowledge is needed for the customer. In this Master’s thesis a way to make the configuration of a vision sensor even easier is developed and evaluated. The idea is that the customer knows his or her product much better than…
2

Bolduc, Marc. "A foveated sensor for robotic vision." Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22640.

Abstract:
The design of a visual system for autonomous mobile robots presents conflicting requirements. Using a foveated sensor, based on models of the primate retina, affords a compromise between requirements of a wide visual field, high resolution, and small fast-to-process output images. From a review of the biology literature and existing models of data reduction performed by the primate retina, it becomes apparent that overlapping receptive field models are more biologically realistic than nonoverlapping ones. They also provide more flexibility in terms of the type of computation masks that can be…
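A minimal sketch of log-polar, retina-like resampling with overlapping receptive fields, in the spirit of the models this thesis reviews; the grid sizes and the 3x3 overlap are illustrative choices, not the thesis design:

```python
# Sketch: foveated (log-polar) resampling -- dense at the center, coarse
# in the periphery. Overlap is approximated by each ring cell averaging
# a small 3x3 neighborhood around its sample point.
import numpy as np

def foveate(img: np.ndarray, n_rings: int = 16, n_wedges: int = 32) -> np.ndarray:
    """img: (H, W) grayscale. Returns (n_rings, n_wedges) log-polar samples."""
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx)
    out = np.zeros((n_rings, n_wedges))
    for i in range(n_rings):
        r = r_max ** ((i + 1) / n_rings)          # radii grow geometrically
        for j in range(n_wedges):
            a = 2.0 * np.pi * j / n_wedges
            y, x = int(cy + r * np.sin(a)), int(cx + r * np.cos(a))
            y0, y1 = max(y - 1, 0), min(y + 2, h)
            x0, x1 = max(x - 1, 0), min(x + 2, w)
            out[i, j] = img[y0:y1, x0:x1].mean()  # overlapping receptive field
    return out
```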
3

Hol, Jeroen D. "Sensor Fusion and Calibration of Inertial Sensors, Vision, Ultra-Wideband and GPS." Doctoral thesis, Linköpings universitet, Reglerteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-66184.

Abstract:
The usage of inertial sensors has traditionally been confined primarily to the aviation and marine industry due to their associated cost and bulkiness. During the last decade, however, inertial sensors have undergone a rather dramatic reduction in both size and cost with the introduction of MEMS technology. As a result of this trend, inertial sensors have become commonplace for many applications and can even be found in many consumer products, for instance smart phones, cameras and game consoles. Due to the drift inherent in inertial technology, inertial sensors are typically used in combinati…
4

Hagfalk, Erik, and Ianke Erik Eriksson. "Vision Sensor Scheduling for Multiple Target Tracking." Thesis, Linköping University, Automatic Control, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57717.

Abstract:
This thesis considers the problem of tracking multiple static or moving targets with one single pan/tilt-camera with a limited field of view. The objective is to minimize both the time needed to pan and tilt the camera's view between the targets and the total position uncertainty of all targets. To solve this problem, several planning methods have been developed and evaluated by Monte Carlo simulations and real world experiments. If the targets are moving and their true positions are unknown, both their current and future positions need to be estimated in order to calculate the best sensor…
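A minimal sketch of a greedy one-step scheduler for this setting; the slew-time model and the uncertainty weighting are assumptions for illustration, not the planning methods developed in the thesis:

```python
# Sketch: one pan/tilt camera, several targets, and a score that trades
# the time to slew the view against each target's position uncertainty.

def next_target(cam_angle, targets, slew_rate=1.0, weight=2.0):
    """targets: list of (angle_rad, uncertainty). Returns index to view next."""
    def cost(t):
        angle, uncertainty = t
        slew_time = abs(angle - cam_angle) / slew_rate
        return slew_time - weight * uncertainty   # prefer uncertain, nearby targets
    return min(range(len(targets)), key=lambda i: cost(targets[i]))

print(next_target(0.0, [(0.1, 0.05), (1.2, 0.9), (2.5, 0.3)]))  # -> 1
```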
5

Arias, Estrada Miguel Octavio. "VLSI architecture for a motion vision sensor." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0023/NQ31480.pdf.

6

Benlamri, Rachid. "A multiple-sensor based system for image inspection." Thesis, University of Manchester, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307427.

7

Simon, D. G. "A new sensor for robot arm and tool calibration." Thesis, University of Surrey, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266337.

8

Marbán González, Arturo. "Vision based sensor substitution in robotic assisted surgery." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/664614.

Abstract:
Perceiving and understanding the world represents a long-term goal in the field of Artificial Intelligence (AI). In recent years, advances in the field of Machine Learning (ML), and specifically in Deep Learning (DL), have led to the development of powerful models based on Deep Neural Networks (DNN) capable of interpreting high dimensional data, leading to higher performance in perception related tasks. DNNs designed in a Supervised Learning (SL) setting, such as Convolutional Neural Networks (CNN) and Long-Short Term Memory (LSTM) networks, greatly contribute to the state of the art in image…
9

Bunschoten, Roland. "Mapping and localization from a panoramic vision sensor." Universiteit van Amsterdam, 2003. http://dare.uva.nl/document/69485.

10

Hamid, Gabriel. "A design for a vision-based velocity sensor." Thesis, University of Oxford, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386618.


Books on the topic "Vision sensor"

1

National Research Council (U.S.), Committee on New Sensor Technologies: Materials and Applications, ed. Expanding the Vision of Sensor Materials. National Academy Press, 1995.

2

Haala, Norbert. Multi-Sensor-Photogrammetrie: Vision oder Wirklichkeit? Verlag der Bayerischen Akademie der Wissenschaften in Kommission beim Verlag C.H. Beck, 2005.

3

Chen, Shengyong, Y. F. Li, Jianwei Zhang, and Wanliang Wang, eds. Active Sensor Planning for Multiview Vision Tasks. Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-77072-5.

4

Chen, Shengyong. Active Sensor Planning for Multiview Vision Tasks. Springer-Verlag Berlin Heidelberg, 2008.

5

Distributed Video Sensor Networks: Research Challenges and Future Directions Workshop (2009: Riverside, Calif.), ed. Distributed Video Sensor Networks. Springer, 2011.

6

Durrant-Whyte, Hugh F. Integration, Coordination and Control of Multi-Sensor Robot Systems. Springer US, 1987.

7

Loffeld, Otmar, Centre national de la recherche scientifique (France), and Society of Photo-optical Instrumentation Engineers, eds. Vision Systems: Sensors, Sensor Systems, and Components: 10–12 June 1996, Besançon, France. SPIE, the International Society for Optical Engineering, 1996.

8

Fernández-Berni, Jorge, Ricardo Carmona-Galán, and Ángel Rodríguez-Vázquez. Low-Power Smart Imagers for Vision-Enabled Sensor Networks. Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-2392-8.

9

Carmona-Galán, Ricardo, Ángel Rodríguez-Vázquez, and SpringerLink (Online service), eds. Low-Power Smart Imagers for Vision-Enabled Sensor Networks. Springer New York, 2012.

10

Bestehorn, Markus. Querying Moving Objects Detected by Sensor Networks. Springer New York, 2013.


Book chapters on the topic "Vision sensor"

1

Varshney, Pramod K. "Sensor Fusion." In Computer Vision. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_301-1.

2

Varshney, Pramod K. "Sensor Fusion." In Computer Vision. Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_301.

3

Varshney, Pramod K. "Sensor Fusion." In Computer Vision. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63416-2_301.

4

Liu, Liyuan, Mingxin Zhao, Ke Ning, Xu Yang, Xuemin Zheng, and Nanjian Wu. "Neuromorphic Vision Chip." In Near-sensor and In-sensor Computing. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11506-6_5.

5

Benosman, R., and J. Devars. "Panoramic Stereovision Sensor." In Panoramic Vision. Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4757-3482-9_9.

6

Terzopoulos, Demetri, and Faisal Z. Qureshi. "Virtual Vision." In Distributed Video Sensor Networks. Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-127-1_11.

7

Jeong, Wootae. "Sensors, Machine Vision, and Sensor Networks." In Springer Handbook of Automation. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-030-96729-1_14.

8

Christensen, H. I., D. Kragic, and F. Sandberg. "Vision for Interaction." In Sensor Based Intelligent Robots. Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45993-6_4.

9

Clark, James J. "Active Sensor (Eye) Movement Control." In Computer Vision. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_278-1.

10

Clark, James J. "Active Sensor (Eye) Movement Control." In Computer Vision. Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_278.


Conference papers on the topic "Vision sensor"

1

Liu, Xiaolin, Chao Gao, Bowei Jiang, et al. "CMOS Compatible Spike Vision Sensor." In 2025 9th IEEE Electron Devices Technology & Manufacturing Conference (EDTM). IEEE, 2025. https://doi.org/10.1109/edtm61175.2025.11041282.

2

Yi, Juheon, Chulhong Min, and Fahim Kawsar. "Vision Paper." In SenSys '21: The 19th ACM Conference on Embedded Networked Sensor Systems. ACM, 2021. http://dx.doi.org/10.1145/3485730.3493453.

3

Kumar, Manish, and Devendra P. Garg. "Three-Dimensional Occupancy Grids With the Use of Vision and Proximity Sensors in a Robotic Workcell." In ASME 2004 International Mechanical Engineering Congress and Exposition. ASMEDC, 2004. http://dx.doi.org/10.1115/imece2004-59593.

Abstract:
This paper discusses the use of multiple vision sensors and a proximity sensor to obtain a three-dimensional occupancy profile of the robotic workspace, identify key features, and obtain a 3-D model of the objects in the workspace. The present research makes use of three identical vision sensors. Two of these sensors are mounted on a stereo rig on the sidewall of the robotic workcell. The third vision sensor is located above the workcell. The vision sensors on the stereo rig provide information about the three-dimensional position of any point in the robotic workspace. The camera to robot calibration…
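A minimal sketch of the occupancy-grid step, assuming stereo triangulation has already produced 3-D points in the workcell frame; the grid extent, resolution, and sample points are illustrative, and the paper's calibration step is omitted:

```python
# Sketch: filling a 3-D occupancy grid from triangulated stereo points,
# the kind of workspace profile this paper builds.
import numpy as np

res = 0.05                                   # 5 cm voxels (assumed)
grid = np.zeros((40, 40, 20), dtype=bool)    # 2 m x 2 m x 1 m workcell (assumed)

def mark_occupied(points_xyz: np.ndarray) -> None:
    """points_xyz: (N, 3) coordinates in meters, workcell frame."""
    idx = np.floor(points_xyz / res).astype(int)
    # keep only points that fall inside the grid bounds
    ok = np.all((idx >= 0) & (idx < grid.shape), axis=1)
    grid[tuple(idx[ok].T)] = True

mark_occupied(np.array([[0.62, 0.40, 0.33], [1.10, 1.10, 0.20]]))
print(np.argwhere(grid))                     # -> [[12  8  6] [22 22  4]]
```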
4

Sohal, Shubhdildeep S., and Pinhas Ben-Tzvi. "Sensor Based Target Tracking With Application to Autonomous Docking and Self-Reconfigurability." In ASME 2020 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/detc2020-22181.

Abstract:
This paper presents a target detection technique, which combines a supervised learning model with sensor data to eliminate false positives for a given input image frame. Such a technique aids with selective docking procedures where multiple robots are present in the environment. Hence the sensor data provides additional information for this decision-making process. Sensor accuracy plays a crucial role when the motion of the robot is defined by the use of data recorded by its sensors. The uncertainties in the sensory data can cause misalignments due to poor calibration of the sensor…
5

Shahshahani, Allen, Jake Shahshahani, Lynne L. Grewe, Archana Kashyap, and Krishnan Chandran. "iSight: computer vision based system to assist low vision." In Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII, edited by Ivan Kadar. SPIE, 2018. http://dx.doi.org/10.1117/12.2305233.

6

Zukerman, Ingrid, Enes Makalic, and Michael Niemann. "Combining probabilistic reference resolution with simulated vision." In 2008 International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP). IEEE, 2008. http://dx.doi.org/10.1109/issnip.2008.4761990.

7

Douillard, Bertrand, Alex Brooks, and Fabio Ramos. "A 3D laser and vision based classifier." In 2009 International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP). IEEE, 2009. http://dx.doi.org/10.1109/issnip.2009.5416828.

8

Liao, Fuyou, and Yang Chai. "In-sensor Computing Devices for Bio-inspired Vision Sensors." In 2022 6th IEEE Electron Devices Technology & Manufacturing Conference (EDTM). IEEE, 2022. http://dx.doi.org/10.1109/edtm53872.2022.9798059.

9

Larkin, Eugene Vasilievich, Aleksandr Nikolaevich Privalov, and Tatiana Alekseevna Akimenko. "IR Sensor Test System." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-1130-1135.

Abstract:
Methods for testing IR sensors, which are currently widely used as a source of information about the environment in various sectors of the economy, are investigated. It is shown that, due to the transformation of the informative parameters of the observed scene by the sensor, information loss at the output of the device is possible. The structure of the testing system has been developed, the main element of which is a patented generator of reference test signals, which makes it possible to evaluate the following informative parameters: thermal signal characteristic, distortion…
10

Pavel, M., J. Larimer, and A. Ahumada. "Sensor fusion for synthetic vision." In 8th Computing in Aerospace Conference. American Institute of Aeronautics and Astronautics, 1991. http://dx.doi.org/10.2514/6.1991-3730.


Reports on the topic "Vision sensor"

1

Ban, Heng. Novel Corrosion Sensor for Vision 21 Systems. Office of Scientific and Technical Information (OSTI), 2005. http://dx.doi.org/10.2172/882673.

2

Berry, Nina M., and Teresa H. Ko. On computer vision in wireless sensor networks. Office of Scientific and Technical Information (OSTI), 2004. http://dx.doi.org/10.2172/919195.

3

Ban, Heng, and Bharat Soni. Novel Corrosion Sensor for Vision 21 Systems. Office of Scientific and Technical Information (OSTI), 2007. http://dx.doi.org/10.2172/921641.

4

Ban, Heng. Novel Corrosion Sensor for Vision 21 Systems. Office of Scientific and Technical Information (OSTI), 2004. http://dx.doi.org/10.2172/839161.

5

Schneider, Kathryn, Joseph Conroy, and William Nothwang. Computing Optic Flow with ArduEye Vision Sensor. Defense Technical Information Center, 2013. http://dx.doi.org/10.21236/ada572633.

6

Sekmen, Aki, and Fenghui Yao. Multi-Sensor Vision Data Fusion for Smart Airborne Surveillance. Defense Technical Information Center, 2009. http://dx.doi.org/10.21236/ada499525.

7

Surana, Amit, and Allen Tannenbaum. Vision-Based Autonomous Sensor-Tasking in Uncertain Adversarial Environments. Defense Technical Information Center, 2015. http://dx.doi.org/10.21236/ada619641.

8

Carter, Jimmy, Melissa Pham, Richard Masarro, Jarrod Edwards, John Anderson, and Robert Fischer. Terrestrial vision-based localization using synthetic horizons. Engineer Research and Development Center (U.S.), 2024. http://dx.doi.org/10.21079/11681/49217.

Abstract:
Vision-based localization could improve navigation and routing solutions in GPS-denied environments. In this study, data from a Carnegie Robotics MultiSense S7 stereo camera were matched to a synthetic horizon derived from foundation sources using novel two-dimensional correlation techniques. Testing was conducted at multiple observation locations over known ground control points (GCPs) at the US Army Engineer Research and Development Center (ERDC), Geospatial Research Laboratory (GRL), Corbin Research Facility. Testing was conducted at several different observational azimuths for these locati…
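A minimal sketch of the matching idea, reduced to one dimension: correlate an observed horizon elevation profile against a synthetic 360-degree horizon to recover azimuth. The profiles here are random stand-ins, and the report's actual two-dimensional correlation pipeline differs:

```python
# Sketch: azimuth from horizon-profile correlation (1-D toy version).
import numpy as np

rng = np.random.default_rng(1)
synthetic = rng.random(360)                  # elevation per degree of azimuth
true_az = 137
observed = synthetic[true_az:true_az + 90]   # camera sees a 90-degree slice

def best_azimuth(obs: np.ndarray, synth: np.ndarray) -> int:
    """Slide the observed slice around the synthetic horizon; pick the
    shift with the highest correlation coefficient."""
    scores = [np.corrcoef(obs, np.roll(synth, -a)[: len(obs)])[0, 1]
              for a in range(len(synth))]
    return int(np.argmax(scores))

print(best_azimuth(observed, synthetic))     # -> 137
```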
9

Siegel, David A., Ivona Cetinic, Andrew F. Thompson, et al. EXport Processes in the Ocean from RemoTe Sensing (EXPORTS) North Atlantic sensor calibration and intercalibration documents. NASA STI Program and Woods Hole Oceanographic Institution, 2023. http://dx.doi.org/10.1575/1912/66998.

Abstract:
The following documents collect information regarding the calibration and intercalibration of various sensors that were deployed during the North Atlantic field component of the NASA EXPORTS project (EXPORTS NA), which took place between May 4 and June 1, 2021 (Johnson et al., 2023). The EXPORTS NA campaign was designed to provide a contrasting end member to the earlier North Pacific field campaign, and focused on carbon export associated with the North Atlantic spring bloom, in which gravitational sinking of organic particles, the physical advection and mixing, and active transport by verti…
10

Kamp, Jan, Pieter Blok, Gerrit Polder, Jan van der Wolf, and Henk Jalink. Smart disease detection seed potatoes 2015-2018: Detection of virus and bacterial diseases using vision and sensor technology. Stichting Wageningen Research, Wageningen Plant Research, Business Unit Field Crops, 2020. http://dx.doi.org/10.18174/494707.
