Academic literature on the topic 'View cameras'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, book chapters, conference papers, reports, and other scholarly sources on the topic 'View cameras.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "View cameras"

1

Anjum, Nadeem. "Camera Localization in Distributed Networks Using Trajectory Estimation." Journal of Electrical and Computer Engineering 2011 (2011): 1–13. http://dx.doi.org/10.1155/2011/604647.

Abstract:
This paper presents an algorithm for camera localization using trajectory estimation (CLUTE) in a distributed network of nonoverlapping cameras. The algorithm recovers the extrinsic calibration parameters, namely, the relative position and orientation of the camera network on a common ground plane coordinate system. We first model the observed trajectories in each camera's field of view using Kalman filtering, then we use this information to estimate the missing trajectory information in the unobserved areas by fusing the results of a forward and backward linear regression estimation from adja
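The gap-filling step this abstract describes (a forward and a backward estimate fused across the unobserved region) can be sketched with plain least-squares line fits. This is an illustrative simplification, not the paper's CLUTE implementation, and all function names and data below are invented:

```python
import numpy as np

def fuse_missing_segment(before, after, n_missing):
    """Estimate a missing ground-plane trajectory segment between two
    non-overlapping camera views by blending a forward extrapolation of
    the exit track with a backward extrapolation of the entry track.
    (Sketch only: CLUTE additionally smooths tracks with a Kalman filter
    and recovers the cameras' relative positions and orientations.)"""
    before, after = np.asarray(before, float), np.asarray(after, float)
    t_b, t_a = np.arange(len(before)), np.arange(len(after))
    # Fit x(t) and y(t) as straight lines in each camera's view.
    fwd = [np.polyfit(t_b, before[:, i], 1) for i in range(2)]
    bwd = [np.polyfit(t_a, after[:, i], 1) for i in range(2)]
    gap = np.arange(1, n_missing + 1)
    # Forward prediction continues past the last observed sample;
    # backward prediction runs before the first sample of the next view.
    p_fwd = np.stack([np.polyval(f, len(before) - 1 + gap) for f in fwd], axis=1)
    p_bwd = np.stack([np.polyval(b, gap - (n_missing + 1)) for b in bwd], axis=1)
    w = (gap / (n_missing + 1))[:, None]  # weight shifts toward the nearer view
    return (1 - w) * p_fwd + w * p_bwd
```

For a target moving in a straight line at constant speed, both extrapolations agree and the blend reproduces the true positions in the gap.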
2

Agnello, F. "PERSPECTIVE RESTITUTION FROM VIEW CAMERAS PHOTOS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-2/W1-2022 (February 25, 2022): 17–24. http://dx.doi.org/10.5194/isprs-archives-xlvi-2-w1-2022-17-2022.

Abstract:
The paper aims at discussing the accuracy of perspective restitution from view camera photos; view cameras are non-standard cameras frequently used in the past century for on-field shooting of buildings and urban sites; this is why the reconstruction of lost buildings often deals with photos taken with a view camera. The case study chosen for the proposed experiment is an urban complex built in Palermo in the '50s. The site features a very regular layout with surfaces at right angles, which supports the graphic reconstruction of the photos' inner and outer orientation. The site has been su
3

Fan, Zhen, Xiu Li, and Yipeng Li. "Multi-Agent Deep Reinforcement Learning for Online 3D Human Poses Estimation." Remote Sensing 13, no. 19 (2021): 3995. http://dx.doi.org/10.3390/rs13193995.

Abstract:
Most multi-view based human pose estimation techniques assume the cameras are fixed. While in dynamic scenes, the cameras should be able to move and seek the best views to avoid occlusions and extract 3D information of the target collaboratively. In this paper, we address the problem of online view selection for a fixed number of cameras to estimate multi-person 3D poses actively. The proposed method exploits a distributed multi-agent based deep reinforcement learning framework, where each camera is modeled as an agent, to optimize the action of all the cameras. An inter-agent communication pr
4

Dang, Chang Gwon, Seung Soo Lee, Mahboob Alam, et al. "Korean Cattle 3D Reconstruction from Multi-View 3D-Camera System in Real Environment." Sensors 24, no. 2 (2024): 427. http://dx.doi.org/10.3390/s24020427.

Abstract:
The rapid evolution of 3D technology in recent years has brought about significant change in the field of agriculture, including precision livestock management. From 3D geometry information, the weight and characteristics of body parts of Korean cattle can be analyzed to improve cow growth. In this paper, a system of cameras is built to synchronously capture 3D data and then reconstruct a 3D mesh representation. In general, to reconstruct non-rigid objects, a system of cameras is synchronized and calibrated, and then the data of each camera are transformed to global coordinates. However, when
5

Steger, Carsten, and Markus Ulrich. "A Multi-view Camera Model for Line-Scan Cameras with Telecentric Lenses." Journal of Mathematical Imaging and Vision 64, no. 2 (2021): 105–30. http://dx.doi.org/10.1007/s10851-021-01055-x.

Abstract:
We propose a novel multi-view camera model for line-scan cameras with telecentric lenses. The camera model supports an arbitrary number of cameras and assumes a linear relative motion with constant velocity between the cameras and the object. We distinguish two motion configurations. In the first configuration, all cameras move with independent motion vectors. In the second configuration, the cameras are mounted rigidly with respect to each other and therefore share a common motion vector. The camera model can model arbitrary lens distortions by supporting arbitrary positions of the li
6

McKay, Carolyn, and Murray Lee. "Body-worn images: Point-of-view and the new aesthetics of policing." Crime, Media, Culture: An International Journal 16, no. 3 (2019): 431–50. http://dx.doi.org/10.1177/1741659019873774.

Abstract:
Police organisations across much of the Western world have eagerly embraced body-worn video camera technology, seen as a way to enhance public trust in police, provide transparency in policing activity, reduce conflict between police and citizens and provide a police perspective of incidents and events. Indeed, the cameras have become an everyday piece of police ‘kit’. Despite the growing ubiquity of the body-worn video camera, understandings of the nature and value of the audiovisual footage produced by police remain inchoate. Given body-worn video camera’s promise of veracity, this article i
7

Yin, Lei, Xiangjun Wang, Yubo Ni, Kai Zhou, and Jilong Zhang. "Extrinsic Parameters Calibration Method of Cameras with Non-Overlapping Fields of View in Airborne Remote Sensing." Remote Sensing 10, no. 8 (2018): 1298. http://dx.doi.org/10.3390/rs10081298.

Abstract:
Multi-camera systems are widely used in the fields of airborne remote sensing and unmanned aerial vehicle imaging. The measurement precision of these systems depends on the accuracy of the extrinsic parameters. Therefore, it is important to accurately calibrate the extrinsic parameters between the onboard cameras. Unlike conventional multi-camera calibration methods with a common field of view (FOV), multi-camera calibration without overlapping FOVs has certain difficulties. In this paper, we propose a calibration method for a multi-camera system without common FOVs, which is used on aero phot
8

Zhang, Yaning, Tianhao Wu, Jungang Yang, and Wei An. "Infrared Camera Array System and Self-Calibration Method for Enhanced Dim Target Perception." Remote Sensing 16, no. 16 (2024): 3075. http://dx.doi.org/10.3390/rs16163075.

Abstract:
Camera arrays can enhance the signal-to-noise ratio (SNR) between dim targets and backgrounds through multi-view synthesis. This is crucial for the detection of dim targets. To this end, we design and develop an infrared camera array system with a large baseline. The multi-view synthesis of camera arrays relies heavily on the calibration accuracy of relative poses in the sub-cameras. However, the sub-cameras within a camera array lack strict geometric constraints. Therefore, most current calibration methods still consider the camera array as multiple pinhole cameras for calibration. Moreover,
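The SNR gain from multi-view synthesis mentioned in this abstract can be illustrated with a toy averaging model: the signal is common to all aligned views while zero-mean noise shrinks by the square root of the number of views fused. All numbers here are invented for illustration and do not come from the paper's system:

```python
import numpy as np

# Illustrative 1-D model of multi-view synthesis for a dim target:
# averaging N registered views keeps the signal intact but reduces
# zero-mean noise by sqrt(N), improving the signal-to-noise ratio.
rng = np.random.default_rng(0)
signal = 1.0                                         # dim target intensity
views = signal + rng.normal(0.0, 5.0, (16, 10_000))  # 16 noisy registered views
fused = views.mean(axis=0)                           # synthesis step (here: averaging)
snr_single = signal / views[0].std()
snr_fused = signal / fused.std()                     # roughly sqrt(16) = 4x better
```

Real camera-array synthesis must first solve the relative-pose calibration problem the abstract focuses on; averaging misregistered views would blur the target instead of enhancing it.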
9

Liu, Zhe, Zhaozong Meng, Nan Gao, and Zonghua Zhang. "Calibration of the Relative Orientation between Multiple Depth Cameras Based on a Three-Dimensional Target." Sensors 19, no. 13 (2019): 3008. http://dx.doi.org/10.3390/s19133008.

Abstract:
Depth cameras play a vital role in three-dimensional (3D) shape reconstruction, machine vision, augmented/virtual reality and other visual information-related fields. However, a single depth camera cannot obtain complete information about an object by itself due to the limitation of the camera’s field of view. Multiple depth cameras can solve this problem by acquiring depth information from different viewpoints. In order to do so, they need to be calibrated to be able to accurately obtain the complete 3D information. However, traditional chessboard-based planar targets are not well suited for
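The composition step underlying this kind of calibration (each depth camera estimates its pose with respect to the shared 3D target, and the camera-to-camera transform follows by composing one pose with the inverse of the other) can be sketched as follows. This is the standard rigid-transform algebra, not the paper's specific target design:

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Relative orientation of camera 2 expressed in camera 1's frame.

    Each camera i observes the same calibration target, yielding a
    target-to-camera transform: X_cam_i = R_i @ X_target + t_i.
    Substituting X_target = R2.T @ (X_cam2 - t2) into camera 1's
    equation gives the camera-to-camera transform returned below."""
    R12 = R1 @ R2.T
    t12 = t1 - R12 @ t2
    return R12, t12  # so that X_cam1 = R12 @ X_cam2 + t12
```

A quick consistency check: any 3D point expressed in camera 2's frame, pushed through (R12, t12), must land on the same point expressed in camera 1's frame.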
10

Chiu, Cheng Yu, Chih Han Chang, Hsin Jung Lin, and Tsong Liang Huang. "New Lane Departure Warning System Based on Side-View Cameras." Applied Mechanics and Materials 764-765 (May 2015): 1361–65. http://dx.doi.org/10.4028/www.scientific.net/amm.764-765.1361.

Abstract:
This paper addressed a new lane departure warning system (LDWS). We used side-view cameras to promote Advanced Driver Assistance Systems (ADAS). A left side-view camera detected the left lane next to the vehicle, and a right side-view camera detected the right lane. The two cameras ran their algorithms and issued warning messages independently and separately. Our algorithm combined those warning messages to analyze the environment. Finally, we used the LUXGEN MPV for testing and showed the results of verifications and tests.

Dissertations / Theses on the topic "View cameras"

1

McLemore, Donald Rodney Jr. "Layered Sensing Using Master-Slave Cameras." Wright State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=wright1253565440.

2

Miller, Gregor. "High quality novel view rendering from multiple cameras." Thesis, University of Surrey, 2007. http://epubs.surrey.ac.uk/843243/.

Abstract:
The research presented in this thesis is targeted towards obtaining high quality novel views of a dynamic scene using video from multiple wide-baseline views, with free-viewpoint video as the main application goal. The research has led to several novel contributions to the 3D reconstruction computer vision literature. The first novel contribution of this work is the exact view-dependent visual hull, a method to efficiently reconstruct a three dimensional representation of the scene with respect to a given viewpoint. This approach includes two novel contributions which allow the reconstruction
3

Li, Yixin M. Eng Massachusetts Institute of Technology. "Multi-view tracking of soccer players with dynamic cameras." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106111.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. Includes bibliographical references (pages 45–47). Challenges such as player occlusion, fast player motion, and the small size of players relative to the background make it difficult to track soccer players accurately and consistently throughout a game. To address these challenges, we present a multi-view approach to tracking soccer players. Here, we formulate tracking as the problem of assigning a label to eac
4

Rydström, Daniel. "Calibration of Laser Triangulating Cameras in Small Fields of View." Thesis, Linköpings universitet, Datorseende, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94210.

Abstract:
A laser triangulating camera system projects a laser line onto an object to create height curves on the object surface. By moving the object, height curves from different parts of the object can be observed and combined to produce a three-dimensional representation of the object. The calibration of such a camera system involves transforming received data to get real-world measurements instead of pixel-based measurements. The calibration method presented in this thesis focuses specifically on small fields of view. The goal is to provide an easy-to-use and robust calibration method that can complemen
5

Shin, Sung Woong. "Rigorous Model of Panoramic Cameras." The Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=osu1048869881.

6

Jambi, Layal. "Development of small field of view gamma cameras for medical imaging." Thesis, University of Leicester, 2018. http://hdl.handle.net/2381/43063.

Abstract:
In recent years, the development of small field of view (SFOV) gamma cameras in nuclear medicine has advanced considerably. High-resolution compact gamma cameras are designed for use in intraoperative medical imaging procedures, such as head and neck sentinel node biopsies, or for small-organ imaging, such as thyroid investigations. SFOV imaging can offer advantages over large field of view (LFOV) cameras in spatial resolution and sensitivity, although there is a trade-off between spatial resolution and sensitivity. SFOV cameras are also highly favourable in terms of r
7

Asadipour, Saeedeh. "5 Broken Cameras: Landscape, Trauma, and Witnessing." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1459439752.

8

Raffoul, Joseph Naim. "Blob Feature Extraction for Event Detection Cameras." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1590165017029087.

9

Zamalieva, Daniya. "Transformational Models for Background Subtraction with Moving Cameras." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1408702317.

10

Lee, Young-ran. "Pose estimation of line cameras using linear features /." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1486457871786059.


Books on the topic "View cameras"

1

Simmons, Steve. Using the view camera. Amphoto, 1987.

2

Stroebel, Leslie D. Stroebel's view camera basics. Focal Press, 1995.

3

Stroebel, Leslie D. View camera technique. 4th ed. Focal Press, 1985.

4

Stroebel, Leslie D., Jeff Wignall, and Eastman Kodak Company, eds. Photography with large-format cameras. Professional Photography Division, Photographic Products Group, Eastman Kodak Co., 1988.

5

Stroebel, Leslie D. View camera technique. 6th ed. Focal Press, 1993.

6

Stroebel, Leslie D. View camera technique. 5th ed. Focal Press, 1986.

7

Stone, Jim. A user's guide to the view camera. 2nd ed. Longman, 1997.

8

West, Bert. Build your own view camera: An easy & inexpensive passport to the professional world of photography for the hobbyist. Dogstar Pub., 1995.

9

Stone, Jim. A user's guide to the view camera. HarperCollins, 1987.

10

Langley Research Center, ed. Development of a large field of view shadowgraph system for a 16 ft. transonic wind tunnel. National Aeronautics and Space Administration, Langley Research Center, 2000.


Book chapters on the topic "View cameras"

1

Wan, Cheng, Yiquan Wu, and Jun Sato. "Dynamic Multiple View Geometry with Affine Cameras." In Lecture Notes in Computer Science. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11758-4_22.

2

Duchamp, Gaspard, Omar Ait-Aider, Eric Royer, and Jean-Marc Lavest. "Multiple View 3D Reconstruction with Rolling Shutter Cameras." In Communications in Computer and Information Science. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29971-6_12.

3

Puig, Luis, and J. J. Guerrero. "Two-View Relations Between Omnidirectional and Conventional Cameras." In Omnidirectional Vision Systems. Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-4947-7_4.

4

Köpeczi-Bócz, Ákos T., Tian Mi, Gábor Orosz, and Dénes Takács. "YOLOgraphy: Image Processing Based Vehicle Position Recognition." In Lecture Notes in Mechanical Engineering. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-70392-8_56.

Abstract:
A methodology is developed to extract vehicle kinematic information from roadside cameras at an intersection using deep learning. The ground truth data of top view bounding boxes are collected with the help of unmanned aerial vehicles (UAVs). These top view bounding boxes, containing vehicle position, size, and orientation information, are converted to roadside view bounding boxes using a homography transformation. The ground truth data and the roadside view images are used to train a modified YOLOv5 neural network and, thus, to learn the homography transformation matrix. The output o
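The top-view-to-roadside-view conversion this abstract describes amounts to mapping the corners of a bounding box through a 3×3 planar homography, which is valid because the road surface is (approximately) a plane. A minimal sketch of that mapping, with an invented example matrix rather than one learned by the paper's network:

```python
import numpy as np

def warp_points(H, pts):
    """Map 2D points through a 3x3 homography (projective transform).

    `pts` is an (N, 2) array, e.g. the corners of a top-view bounding
    box; the result is the corresponding points in the target image.
    `H` here is a made-up example, not a matrix from the paper."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homog @ H.T                              # apply the transform
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian
```

With the identity matrix the points come back unchanged; a general homography additionally encodes the perspective foreshortening between the two views.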
5

Perkins, Alan C., A. H. Ng, and John E. Lees. "Chapter 1 Small Field of View Gamma Cameras and Intraoperative Applications." In Gamma Cameras for Interventional and Intraoperative Imaging. CRC Press, 2016. http://dx.doi.org/10.1201/9781315370224-2.

6

Darambara, D. G. "Chapter 2 Detector Design for Small Field of View (SFOV) Nuclear Cameras." In Gamma Cameras for Interventional and Intraoperative Imaging. CRC Press, 2016. http://dx.doi.org/10.1201/9781315370224-3.

7

Jain, Lakhmi C., Margarita N. Favorskaya, and Dmitry Novikov. "Panorama Construction from Multi-view Cameras in Outdoor Scenes." In Computer Vision in Control Systems-2. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11430-9_4.

8

Kroeger, Till, Ralf Dragon, and Luc Van Gool. "Multi-view Tracking of Multiple Targets with Dynamic Cameras." In Lecture Notes in Computer Science. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11752-2_54.

9

Lv, Xun, Yuan Wang, Feiyi Xu, Jianhui Nie, Feng Xu, and Hao Gao. "Novel View Synthesis of Dynamic Human with Sparse Cameras." In Artificial Intelligence. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93046-2_37.

10

Li, Ming, Xueqian Jin, Xuejiao Hu, Jingzhao Dai, Sidan Du, and Yang Li. "MODE: Multi-view Omnidirectional Depth Estimation with 360$$^\circ $$ Cameras." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19827-4_12.


Conference papers on the topic "View cameras"

1

Yogamani, Senthil, David Unger, Venkatraman Narayanan, and Varun Ravi Kumar. "FisheyeBEVSeg: Surround View Fisheye Cameras based Bird’s-Eye View Segmentation for Autonomous Driving." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2024. http://dx.doi.org/10.1109/cvprw63382.2024.00140.

2

Huang, Hongtao, and Wei Tian. "3D Object Detection based on Surround-View Fisheye Cameras." In 2024 8th CAA International Conference on Vehicular Control and Intelligence (CVCI). IEEE, 2024. https://doi.org/10.1109/cvci63518.2024.10830142.

3

Xu, Weichen, Yezhi Shen, Qian Lin, Jan P. Allebach, and Fengqing Zhu. "Pose Guided Portrait View Interpolation from Dual Cameras with a Long Baseline." In 2024 IEEE 26th International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2024. http://dx.doi.org/10.1109/mmsp61759.2024.10743353.

4

Wu, Haoyu, Shaomin Xiong, and Toshiki Hirano. "A Real-Time Human Recognition and Tracking System With a Dual-Camera Setup." In ASME 2019 28th Conference on Information Storage and Processing Systems. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/isps2019-7469.

Abstract:
Most surveillance camera systems are still controlled and monitored by humans. Smart surveillance camera systems are proposed to automatically understand the captured scene, identify objects of interest, detect abnormalities, etc. However, most surveillance cameras are either wide-angle or pan-tilt-zoom (PTZ). When the cameras are in wide-view mode, small objects can be hard to recognize. On the other hand, when the cameras are zoomed in on the object of interest, the global view cannot be covered and important events outside the zoomed view will be missed. In this paper
5

Xie, Yupeng, Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, and Gauthier Lafruit. "View Synthesis: LiDAR Camera versus Depth Estimation." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita v Plzni, 2021. http://dx.doi.org/10.24132/csrn.2021.3101.35.

Abstract:
Depth-Image-Based Rendering (DIBR) can synthesize a virtual view image from a set of multiview images and corresponding depth maps. However, this requires an accurate depth map estimation that incurs a high computational cost of several minutes per frame in DERS (MPEG-I's Depth Estimation Reference Software), even using a high-class computer. LiDAR cameras can thus be an alternative solution to DERS in real-time DIBR applications. We compare the quality of a low-cost LiDAR camera, the Intel Realsense LiDAR L515 calibrated and configured adequately, with DERS using MPEG-I's Reference Vi
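The geometric kernel of the DIBR process this abstract discusses, warping a pixel into a virtual view using its depth, can be sketched as follows. The intrinsics and pose below are illustrative values, and real DIBR pipelines add occlusion handling and hole filling on top of this step:

```python
import numpy as np

def reproject_pixel(u, v, depth, K, R, t):
    """Warp one pixel into a virtual view (the core DIBR step).

    Back-project pixel (u, v) with its depth into 3D using the
    intrinsic matrix K, move the point into the virtual camera's
    frame with rotation R and translation t, and project it again."""
    # Back-projection: X = depth * K^{-1} [u, v, 1]^T
    X = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    Xv = R @ X + t            # into the virtual camera's coordinates
    uvw = K @ Xv              # perspective projection
    return uvw[:2] / uvw[2]   # pixel coordinates in the virtual view
```

For example, with focal length 100 px and principal point (50, 50), translating the virtual camera 0.1 m along x shifts a pixel at the principal point (depth 1 m) by 10 px, which is the parallax DIBR exploits.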
6

Xie, Yupeng, Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, and Gauthier Lafruit. "View Synthesis: LiDAR Camera versus Depth Estimation." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita, 2021. http://dx.doi.org/10.24132/csrn.2021.3002.35.

Abstract:
Depth-Image-Based Rendering (DIBR) can synthesize a virtual view image from a set of multiview images and corresponding depth maps. However, this requires an accurate depth map estimation that incurs a high computational cost of several minutes per frame in DERS (MPEG-I's Depth Estimation Reference Software), even using a high-class computer. LiDAR cameras can thus be an alternative solution to DERS in real-time DIBR applications. We compare the quality of a low-cost LiDAR camera, the Intel Realsense LiDAR L515 calibrated and configured adequately, with DERS using MPEG-I's Reference View Sy
7

López-Fernández, D., F. J. Madrid-Cuevas, A. Carmona-Poyato, R. Muñoz-Salinas, and R. Medina-Carnicer. "Multi-view gait recognition on curved trajectories." In ICDSC '15: International Conference on distributed Smart Cameras. ACM, 2015. http://dx.doi.org/10.1145/2789116.2789122.

8

Jarusirisawad, Songkran, and Hideo Saito. "3DTV View Generation Using Uncalibrated Cameras." In 2008 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON). IEEE, 2008. http://dx.doi.org/10.1109/3dtv.2008.4547807.

9

Meijer, Peter B. L., Christian Leistner, and Anthony Martiniere. "Multiple View Camera Calibration for Localization." In 2007 First ACM/IEEE International Conference on Distributed Smart Cameras. IEEE, 2007. http://dx.doi.org/10.1109/icdsc.2007.4357528.

10

Nikitin, Pavel, Marco Cagnazzo, Joel Jung, and Attilio Fiandrotti. "Exploiting View Synthesis for Super-multiview Video Compression." In ICDSC 2019: 13th International Conference on Distributed Smart Cameras. ACM, 2019. http://dx.doi.org/10.1145/3349801.3349820.


Reports on the topic "View cameras"

1

Mathew, Jijo K., Haydn Malackowski, Yerassyl Koshan, et al. Development of Latitude/Longitude (and Route/Milepost) Model for Positioning Traffic Management Cameras. Purdue University, 2024. http://dx.doi.org/10.5703/1288284317720.

Abstract:
Traffic Incident Management (TIM) is a FHWA Every Day Counts initiative with the objective of reducing secondary crashes, improving travel reliability, and ensuring the safety of responders. Agency roadside cameras play a critical role in TIM by helping dispatchers quickly identify the precise location of incidents when receiving reports from motorists with varying levels of spatial accuracy. Reconciling position reports that are often mile-marker based with cameras that operate in a Pan-Tilt-Zoom (PTZ) coordinate system relies on dispatchers having detailed knowledge of hundreds of cameras an
2

Porcel Magnusson, Cristina. Unsettled Topics Concerning Coating Detection by LiDAR in Autonomous Vehicles. SAE International, 2021. http://dx.doi.org/10.4271/epr2021002.

Abstract:
Autonomous vehicles (AVs) utilize multiple devices, like high-resolution cameras and radar sensors, to interpret the driving environment and achieve full autonomy. One of these instruments—the light detection and ranging (LiDAR) sensor—utilizes pulsed infrared (IR) light, typically at wavelengths of 905 nm or 1,550 nm, to calculate object distance and position. Exterior automotive paint covers an area larger than any other exterior material. Therefore, understanding how LiDAR wavelengths interact with vehicle coatings is extremely important for the safety of future automated driving technologi
3

Edmonds, P. H., and S. S. Medley. A procedure for generating quantitative 3-D camera views of tokamak divertors. Office of Scientific and Technical Information (OSTI), 1996. http://dx.doi.org/10.2172/238597.

4

Saito, Hideo, Shigeyuki Baba, Makoto Kimura, Sundar Vedula, and Takeo Kanade. Appearance-Based Virtual View Generation of Temporally-Varying Events from Multi-Camera Images in the 3D Room. Defense Technical Information Center, 1999. http://dx.doi.org/10.21236/ada363762.

5

Smyth, Christopher C., James W. Gombash, and Patricia M. Burcham. Indirect Vision Driving with Fixed Flat Panel Displays for Near Unity, Wide, and Extended Fields of Camera View. Defense Technical Information Center, 2001. http://dx.doi.org/10.21236/ada395211.

6

Tao, Yang, Amos Mizrach, Victor Alchanatis, Nachshon Shamir, and Tom Porter. Automated imaging broiler chick sexing for gender-specific and efficient production. United States Department of Agriculture, 2014. http://dx.doi.org/10.32747/2014.7594391.bard.

Abstract:
Extending the previous two years of research results (Mizrach et al., 2012; Tao, 2011, 2012), the third year's efforts in both Maryland and Israel were directed towards the engineering of the system. The activities included robust chick handling and conveyor system development, optical system improvement, online dynamic motion imaging of chicks, multi-image-sequence optimal feather extraction and detection, and pattern recognition. Mechanical System Engineering: the third model of the mechanical chick handling system with a high-speed imaging system was built as shown in Fig. 1. This syst
7

Brodie, Katherine, Brittany Bruder, Richard Slocum, and Nicholas Spore. Simultaneous mapping of coastal topography and bathymetry from a lightweight multicamera UAS. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/41440.

Abstract:
A low-cost multicamera Unmanned Aircraft System (UAS) is used to simultaneously estimate open-coast topography and bathymetry from a single longitudinal coastal flight. The UAS combines nadir and oblique imagery to create a wide field of view (FOV), which enables collection of mobile, long-dwell time series of the littoral zone suitable for structure-from-motion (SfM) and wave-speed inversion algorithms. Resultant digital surface models (DSMs) compare well with terrestrial topographic lidar and bathymetric survey data at Duck, NC, USA, with root-mean-square error (RMSE)/bias of 0.26/–0.05 and
8

Anderson, Gerald L., and Kalman Peleg. Precision Cropping by Remotely Sensed Prorotype Plots and Calibration in the Complex Domain. United States Department of Agriculture, 2002. http://dx.doi.org/10.32747/2002.7585193.bard.

Abstract:
This research report describes a methodology whereby multi-spectral and hyperspectral imagery from remote sensing is used for deriving predicted field maps of selected plant growth attributes which are required for precision cropping. A major task in precision cropping is to establish areas of the field that differ from the rest of the field and share a common characteristic. Yield distribution maps can be prepared by yield monitors, which are available for some harvester types. Other field attributes of interest in precision cropping, e.g. soil properties, leaf nitrate, biomass etc., are ob