Academic literature on the topic 'Depth camera'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Depth camera.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Depth camera"

1. Yamazoe, Hirotake, Hiroshi Habe, Ikuhisa Mitsugami, and Yasushi Yagi. "Depth error correction for projector-camera based consumer depth cameras." Computational Visual Media 4, no. 2 (March 14, 2018): 103–11. http://dx.doi.org/10.1007/s41095-017-0103-7.

2. Liu, Zhe, Zhaozong Meng, Nan Gao, and Zonghua Zhang. "Calibration of the Relative Orientation between Multiple Depth Cameras Based on a Three-Dimensional Target." Sensors 19, no. 13 (July 8, 2019): 3008. http://dx.doi.org/10.3390/s19133008.

Abstract:
Depth cameras play a vital role in three-dimensional (3D) shape reconstruction, machine vision, augmented/virtual reality and other visual information-related fields. However, a single depth camera cannot obtain complete information about an object by itself due to the limitation of the camera’s field of view. Multiple depth cameras can solve this problem by acquiring depth information from different viewpoints. In order to do so, they need to be calibrated to be able to accurately obtain the complete 3D information. However, traditional chessboard-based planar targets are not well suited for calibrating the relative orientations between multiple depth cameras, because the coordinates of different depth cameras need to be unified into a single coordinate system, and the multiple camera systems with a specific angle have a very small overlapping field of view. In this paper, we propose a 3D target-based multiple depth camera calibration method. Each plane of the 3D target is used to calibrate an independent depth camera. All planes of the 3D target are unified into a single coordinate system, which means the feature points on the calibration plane are also in one unified coordinate system. Using this 3D target, multiple depth cameras can be calibrated simultaneously. In this paper, a method of precise calibration using lidar is proposed. This method is not only applicable to the 3D target designed for the purposes of this paper, but it can also be applied to all 3D calibration objects consisting of planar chessboards. This method can significantly reduce the calibration error compared with traditional camera calibration methods. In addition, in order to reduce the influence of the infrared transmitter of the depth camera and improve its calibration accuracy, the calibration process of the depth camera is optimized. A series of calibration experiments were carried out, and the experimental results demonstrated the reliability and effectiveness of the proposed method.
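The core step this abstract describes, expressing each camera's extrinsics in the single target coordinate system and chaining transforms to obtain the relative pose between cameras, can be sketched in a few lines of numpy. This is a minimal illustration with made-up poses, not the authors' implementation:

```python
import numpy as np

def pose_to_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Hypothetical extrinsics: each camera's pose expressed in the common
# target coordinate system (one plane of the 3D target per camera).
T_target_cam1 = pose_to_T(rot_z(0.0),       [0.0, 0.0, 1.0])
T_target_cam2 = pose_to_T(rot_z(np.pi / 2), [0.5, 0.0, 1.0])

# Relative orientation between the two depth cameras: map a point from
# camera-1 coordinates into camera-2 coordinates via the target frame.
T_cam2_cam1 = np.linalg.inv(T_target_cam2) @ T_target_cam1

p_cam1 = np.array([0.1, 0.2, 2.0, 1.0])       # homogeneous point in camera 1
p_cam2 = T_cam2_cam1 @ p_cam1                 # same point in camera 2
p_back = np.linalg.inv(T_cam2_cam1) @ p_cam2  # round-trip check
```

Once every camera is tied to the shared target frame this way, all of them can be unified pairwise without any overlapping field of view between the cameras themselves.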
3. Chiu, Chuang-Yuan, Michael Thelwell, Terry Senior, Simon Choppin, John Hart, and Jon Wheat. "Comparison of depth cameras for three-dimensional reconstruction in medicine." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 233, no. 9 (June 28, 2019): 938–47. http://dx.doi.org/10.1177/0954411919859922.

Abstract:
KinectFusion is a typical three-dimensional reconstruction technique which enables generation of individual three-dimensional human models from consumer depth cameras for understanding body shapes. The aim of this study was to compare three-dimensional reconstruction results obtained using KinectFusion from data collected with two different types of depth camera (time-of-flight and stereoscopic cameras) and compare these results with those of a commercial three-dimensional scanning system to determine which type of depth camera gives improved reconstruction. Torso mannequins and machined aluminium cylinders were used as the test objects for this study. Two depth cameras, Microsoft Kinect V2 and Intel Realsense D435, were selected as the representatives of time-of-flight and stereoscopic cameras, respectively, to capture scan data for the reconstruction of three-dimensional point clouds by KinectFusion techniques. The results showed that both time-of-flight and stereoscopic cameras, using the developed rotating camera rig, provided repeatable body scanning data with minimal operator-induced error. However, the time-of-flight camera generated more accurate three-dimensional point clouds than the stereoscopic sensor. Thus, this suggests that applications requiring the generation of accurate three-dimensional human models by KinectFusion techniques should consider using a time-of-flight camera, such as the Microsoft Kinect V2, as the image capturing sensor.
4. Yang, Yuxiang, Xiang Meng, and Mingyu Gao. "Vision System of Mobile Robot Combining Binocular and Depth Cameras." Journal of Sensors 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/4562934.

Abstract:
In order to optimize three-dimensional (3D) reconstruction and obtain more precise actual distances to the object, a 3D reconstruction system combining binocular and depth cameras is proposed in this paper. The whole system consists of two identical color cameras, a TOF depth camera, an image processing host, a mobile robot control host, and a mobile robot. Because of structural constraints, the resolution of the TOF depth camera is very low, which can hardly meet the requirements of trajectory planning. The resolution of binocular stereo cameras can be very high, but stereo matching performs poorly in low-texture scenes, so binocular stereo cameras alone also struggle to meet the accuracy requirements. In this paper, the proposed system integrates the depth camera with stereo matching to improve the precision of the 3D reconstruction. Moreover, a double-threaded processing method is applied to improve the efficiency of the system. The experimental results show that the system can effectively improve the accuracy of 3D reconstruction, accurately identify the distance from the camera, and support the trajectory-planning strategy.
5. Zhang, Huang, and Zhao. "A New Model of RGB-D Camera Calibration Based On 3D Control Field." Sensors 19, no. 23 (November 21, 2019): 5082. http://dx.doi.org/10.3390/s19235082.

Abstract:
With extensive application of RGB-D cameras in robotics, computer vision, and many other fields, accurate calibration becomes more and more critical to the sensors. However, most existing models for calibrating depth and the relative pose between a depth camera and an RGB camera are not universally applicable to many different kinds of RGB-D cameras. In this paper, by using the collinear equation and space resection of photogrammetry, we present a new model to correct the depth and calibrate the relative pose between depth and RGB cameras based on a 3D control field. We establish a rigorous relationship model between the two cameras; then, we optimize the relative parameters of two cameras by least-squares iteration. For depth correction, based on the extrinsic parameters related to object space, the reference depths are calculated by using a collinear equation. Then, we calibrate the depth measurements with consideration of the distortion of pixels in depth images. We apply Kinect-2 to verify the calibration parameters by registering depth and color images. We test the effect of depth correction based on 3D reconstruction. Compared to the registration results from a state-of-the-art calibration model, the registration results obtained with our calibration parameters improve dramatically. Likewise, the performances of 3D reconstruction demonstrate obvious improvements after depth correction.
6. Wang, Tian-Long, Lin Ao, Jie Zheng, and Zhi-Bin Sun. "Reconstructing Depth Images for Time-of-Flight Cameras Based on Second-Order Correlation Functions." Photonics 10, no. 11 (October 31, 2023): 1223. http://dx.doi.org/10.3390/photonics10111223.

Abstract:
Depth cameras are closely related to our daily lives and have been widely used in fields such as machine vision, autonomous driving, and virtual reality. Despite their diverse applications, depth cameras still encounter challenges like multi-path interference and mixed pixels. Compared to traditional sensors, depth cameras have lower resolution and a lower signal-to-noise ratio. Moreover, when used in environments with scattering media, object information scatters multiple times, making it difficult for time-of-flight (ToF) cameras to obtain effective object data. To tackle these issues, we propose a solution that combines ToF cameras with second-order correlation transform theory. In this article, we explore the utilization of ToF camera depth information within a computational correlated imaging system under ambient light conditions. We integrate compressed sensing and non-training neural networks with ToF technology to reconstruct depth images from a series of measurements at a low sampling rate. The research indicates that by leveraging the depth data collected by the camera, we can recover negative depth images. We analyzed and addressed the reasons behind the generation of negative depth images. Additionally, under undersampling conditions, the use of reconstruction algorithms results in a higher peak signal-to-noise ratio compared to images obtained from the original camera. The results demonstrate that the introduced second-order correlation transformation can effectively reduce noise originating from the ToF camera itself and direct ambient light, thereby enabling the use of ToF cameras in complex environments such as scattering media.
7. Zhou, Yang, Danqing Chen, Jun Wu, Mingyi Huang, and Yubin Weng. "Calibration of RGB-D Camera Using Depth Correction Model." Journal of Physics: Conference Series 2203, no. 1 (February 1, 2022): 012032. http://dx.doi.org/10.1088/1742-6596/2203/1/012032.

Abstract:
This paper proposes a calibration method for RGB-D cameras, especially their depth cameras. First, a checkerboard calibration board is used under an auxiliary infrared light source to collect calibration images. Then, the internal and external parameters of the depth camera are calculated by Zhang's calibration method, which improves the accuracy of the internal parameters. Next, a depth correction model is proposed to directly calibrate the distortion of the depth image, which is more intuitive and faster than the disparity distortion correction model. The method is simple, high-precision, and suitable for most depth cameras.
8. Tu, Li-fen, and Qi Peng. "Method of Using RealSense Camera to Estimate the Depth Map of Any Monocular Camera." Journal of Electrical and Computer Engineering 2021 (May 18, 2021): 1–9. http://dx.doi.org/10.1155/2021/9152035.

Abstract:
Robot detection, recognition, positioning, and other applications require not only real-time video image information but also the distance from the target to the camera, that is, depth information. This paper proposes a method to automatically generate a depth map for any monocular camera based on RealSense camera data. By using this method, any current single-camera detection system can be upgraded online. Without changing the original system, the depth information of the original monocular camera can be obtained simply, and the transition from 2D detection to 3D detection can be realized. In order to verify the effectiveness of the proposed method, a hardware system was constructed using the Micro-vision RS-A14K-GC8 industrial camera and the Intel RealSense D415 depth camera, and the depth map fitting algorithm proposed in this paper was used to test the system. The results show that, apart from a few areas with missing depth, the recovered depth in the remaining areas is good and can describe the distance difference between the target and the camera. In addition, in order to verify the scalability of the method, a new hardware system was constructed with different cameras, and images were collected in a complex farmland environment. The generated depth map was good and could likewise describe the distance difference between the target and the camera.
9. Unger, Michael, Adrian Franke, and Claire Chalopin. "Automatic depth scanning system for 3D infrared thermography." Current Directions in Biomedical Engineering 2, no. 1 (September 1, 2016): 369–72. http://dx.doi.org/10.1515/cdbme-2016-0162.

Abstract:
Infrared thermography can be used as a pre-, intra- and post-operative imaging technique during medical treatment of patients. Modern infrared thermal cameras are capable of acquiring images with a high sensitivity of 10 mK and beyond. They provide a planar image of an examined 3D object in which this high sensitivity is only reached within a plane perpendicular to the camera axis and defined by the focus of the lens. Out-of-focus planes are blurred and their temperature values are inaccurate. A new 3D infrared thermography system is built by combining a thermal camera with a depth camera. Multiple images at varying focal planes are acquired with the infrared camera using a motorized system. The sharp regions of individual images are projected onto the 3D object's surface obtained by the depth camera. The system evaluation showed that the deviation between measured temperature values and a ground truth is reduced with our system.
10. Haider, Azmi, and Hagit Hel-Or. "What Can We Learn from Depth Camera Sensor Noise?" Sensors 22, no. 14 (July 21, 2022): 5448. http://dx.doi.org/10.3390/s22145448.

Abstract:
Although camera and sensor noise are often disregarded, assumed negligible, or dealt with in the context of denoising, in this paper we show that significant information can actually be deduced from camera noise about the captured scene and the objects within it. Specifically, we deal with depth cameras and their noise patterns. We show that from sensor noise alone, the object's depth and location in the scene can be deduced. Sensor noise can indicate the source camera type, and within a camera type the specific device used to acquire the images. Furthermore, we show that noise distribution on surfaces provides information about the light direction within the scene as well as allows us to distinguish between real and masked faces. Finally, we show that the size of depth shadows (missing depth data) is a function of the object's distance from the background, its distance from the camera, and the object's size; hence, it can be used to authenticate an object's location in the scene. This paper provides tools and insights into what can be learned from depth camera sensor noise.
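The geometric dependence of depth-shadow size on the object's distances can be illustrated with a similar-triangles sketch. This assumes an idealized pinhole emitter/sensor pair with a fixed baseline and is not the paper's actual model; the function name and numbers are illustrative:

```python
def depth_shadow_width(baseline, d_obj, d_bg):
    """Width (metres, on the background plane) of the occlusion shadow an
    object edge at depth d_obj casts on a background at depth d_bg, for an
    emitter/sensor pair separated by `baseline` (idealized similar triangles)."""
    return baseline * (d_bg - d_obj) / d_obj

# The shadow grows with the object-background gap and shrinks as the object
# moves away from the camera, matching the dependencies the abstract lists.
near = depth_shadow_width(0.075, 0.8, 2.0)  # object close to the camera
far = depth_shadow_width(0.075, 1.5, 2.0)   # same background, object farther
```

Inverting such a relation is what lets shadow size act as a consistency check on an object's claimed position in the scene.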

Dissertations / Theses on the topic "Depth camera"

1. Sjöholm, Daniel. "Calibration using a general homogeneous depth camera model." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-204614.

Abstract:
Being able to accurately measure distances in depth images is important for accurately reconstructing objects. But the measurement of depth is a noisy process and depth sensors could use additional correction even after factory calibration. We regard the pair of depth sensor and image sensor to be one single unit, returning complete 3D information. The 3D information is combined by relying on the more accurate image sensor for everything except the depth measurement. We present a new linear method of correcting depth distortion, using an empirical model based around the constraint of only modifying depth data, while keeping planes planar. The depth distortion model is implemented and tested on the Intel RealSense SR300 camera. The results show that the model is viable and generally decreases depth measurement errors after calibrating, with an average improvement in the 50 percent range on the tested data sets.
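The flavour of depth correction described here, fitting a correction to raw depth against reference measurements, can be illustrated with a toy least-squares fit. The numbers are synthetic and the model is a plain affine correction, not the thesis's plane-preserving model:

```python
import numpy as np

# Hypothetical raw depth readings (m) and ground-truth distances (m),
# e.g. from a plane target placed at known positions.
d_raw = np.array([0.52, 1.03, 1.55, 2.08, 2.61])
d_true = np.array([0.50, 1.00, 1.50, 2.00, 2.50])

# Fit d_true ~ a * d_raw + b in the least-squares sense.
A = np.stack([d_raw, np.ones_like(d_raw)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, d_true, rcond=None)

d_corrected = a * d_raw + b
rms_before = np.sqrt(np.mean((d_raw - d_true) ** 2))
rms_after = np.sqrt(np.mean((d_corrected - d_true) ** 2))
```

A real calibration would fit per-pixel or spatially varying terms and, as the thesis stresses, constrain the correction so that planar surfaces remain planar.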
2. Jansson, Isabell. "Visualizing Realtime Depth Camera Configuration using Augmented Reality." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139934.

Abstract:
A Time-of-Flight camera from SICK IVP AB is used to monitor a region of interest, which implies that the camera has to be configured: it has to be mounted correctly and be aware of the region of interest. The current configuration process forces the user to manage the captured 3-dimensional data in a 2-dimensional environment, which obstructs the process and can cause configuration errors due to misinterpretations of the captured data. The aim of the thesis is to investigate the concept of using Augmented Reality as a tool for facilitating the configuration process of a Time-of-Flight camera and to evaluate whether Augmented Reality enhances the understanding of the process. In order to evaluate the concept, a prototype application is developed. The thesis report discusses the motivation and background of the work, the implementation, as well as the results.
3. Efstratiou, Panagiotis. "Skeleton Tracking for Sports Using LiDAR Depth Camera." Thesis, KTH, Medicinteknik och hälsosystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297536.

Abstract:
Skeletal tracking can be accomplished by deploying human pose estimation strategies. Deep learning has been shown to be the leading approach in this realm, and in combination with a light detection and ranging (LiDAR) depth camera, the development of a markerless motion analysis software system appears feasible. The project utilizes a trained convolutional neural network to track humans doing sport activities and to provide feedback after biomechanical analysis. Implementations of four filtering methods suited to the movement's nature are presented: a Kalman filter, a fixed-interval smoother, a Butterworth filter, and a moving average filter. The software appears practicable in the field, evaluating videos at 30 Hz, as demonstrated on indoor cycling and hammer throwing events. The non-static camera behaves quite well against a standstill, upright person, with a mean absolute error of 8.32% and 6.46% with respect to the left and right knee angle, respectively. An impeccable system would benefit not only the sports domain but also the health industry as a whole.
4. Huotari, V. (Ville). "Depth camera based customer behaviour analysis for retail." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201510292099.

Abstract:
In the 2000s, traditional shop-based retailing has had to adapt to competition created by internet-based e-commerce. In contrast to traditional retail, e-commerce can gather an unprecedented amount of information about its customers and their behaviour. To enable behaviour-based analysis in traditional retailing, customers need to be tracked reliably through the store. One such tracking technology is the depth camera people tracking system developed at VTT, Technical Research Centre of Finland Ltd. This study aims to use the aforementioned people tracking system's data to enable e-commerce-style behavioural analysis in physical retail locations. The study follows the design science research paradigm to construct a real-life artefact. The artefact designed and implemented is based on accumulated knowledge from a systematic literature review, application domain analysis, and iterative software engineering practices. The systematic literature review is used to understand what kind of performance evaluation is done in retail. These metrics are then analysed with regard to people tracking technologies to propose a conceptual framework for customer tracking in retail. From this, the artefact is designed, implemented, and evaluated. Evaluation is done by a combination of requirement validation, field experiments, and three distinct real-life field studies. The literature review found that retailing traditionally uses easily available performance metrics such as sales and profit. It was also clear that movement data, apart from traffic counting, has been unavailable to retail and thus is not often used as a quantifiable performance metric. As a result, this study presents one novel way to use customer movement as a store performance metric. The artefact constructed quantifies, visualises, and analyses customer tracking data from the provided depth camera system, which is a new approach in the people tracking domain. The evaluation with real-life cases concludes that the artefact can indeed find and classify interesting behavioural patterns in customer tracking data.
5. Januzi, Altin. "Triple-Camera Setups for Image-Based Depth Estimation." Thesis, Uppsala universitet, Institutionen för elektroteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-422717.

Abstract:
This study attempted to determine whether a three-camera setup could improve depth map retrieval compared to a two-camera setup. The study is based on three main ideas for exploiting information from the additional camera: three cameras on one axis, to exploit the benefits of both wide and short baselines; three cameras on different axes, to exploit vertical and horizontal scanning of the scene; and a third idea combining the two previous ones. More than three cameras would impose particular complications on the solution, and without sufficient theoretical justification the study was limited to the possible three-camera configurations. As a practical connection was of interest, the study was further limited by the requirement to perform in real time. An implementation based on previous research was made to evaluate images of specific scenes. Pre-processing by Census transformation of the images, camera calibration and rectification of the different camera setups, and optimization by the SGM algorithm are part of the solution used to produce results for analysis. Two tests were then studied: first one with rendered images and then one with images from real cameras. From these tests it was noted that a three-camera configuration can improve the results significantly; further, if the third camera was placed on an axis perpendicular to the first camera pair, unique information was yielded which improves the result in specific cases. Using three cameras on the same axis showed no improvement with respect to the error metrics BMP and BMPRE, but offers wider application uses, consistently providing better results than the worst pair.
6. Rangappa, Shreedhar. "Absolute depth using low-cost light field cameras." Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/36224.

Abstract:
Digital cameras are increasingly used for measurement tasks within engineering scenarios, often being part of metrology platforms. Existing cameras are well equipped to provide 2D information about the fields of view (FOV) they observe, the objects within the FOV, and the accompanying environments. But for some applications these 2D results are not sufficient, specifically applications that require Z-dimensional data (depth data) along with the X and Y dimensional data. New designs of camera systems have previously been developed by integrating multiple cameras to provide 3D data, ranging from 2-camera photogrammetry to multiple-camera stereo systems. Many earlier attempts to record 3D data on 2D sensors have been completed, and likewise many research groups around the world are currently working on camera technology, but from different perspectives: computer vision, algorithm development, metrology, etc. Plenoptic or lightfield camera technology was defined as a technique over 100 years ago but has remained dormant as a potential metrology instrument. Lightfield cameras utilize an additional micro lens array (MLA) in front of the imaging sensor to create multiple viewpoints of the same scene and allow encoding of depth information. A small number of companies have explored the potential of lightfield cameras, but in the majority these have been aimed at domestic consumer photography, only ever recording scenes as relative-scale greyscale images. This research considers the potential for lightfield cameras to be used for world-scene metrology applications, specifically to record absolute coordinate data. Specific interest has been paid to a range of low-cost lightfield cameras to: understand the functional and behavioural characteristics of the optics; identify the potential need for optical and/or algorithm development; define sensitivity, repeatability and accuracy characteristics and limiting thresholds of use; and allow quantified 3D absolute-scale coordinate data to be extracted from the images. The novel output of this work is: an analysis of lightfield camera system sensitivity leading to the definition of Active Zones (linear data generation, good data) and Inactive Zones (non-linear data generation, poor data); the development of bespoke calibration algorithms that remove radial/tangential distortion from the data captured using any MLA-based camera; and a lightfield-camera-independent algorithm that allows the delivery of 3D coordinate data in absolute units within a well-defined measurable range from a given camera.
7. Nassir, Cesar. "Domain-Independent Moving Object Depth Estimation using Monocular Camera." Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233519.

Abstract:
Today automotive companies across the world strive to create vehicles with fully autonomous capabilities. There are many benefits of developing autonomous vehicles, such as reduced traffic congestion, increased safety, and reduced pollution. To be able to achieve that goal there are many challenges ahead; one of them is visual perception. Being able to estimate depth from a 2D image has been shown to be a key component for 3D recognition, reconstruction, and segmentation. Estimating depth in an image from a monocular camera is an ill-posed problem, since there is ambiguity in the mapping from colour intensity to depth value. Depth estimation from stereo images has come far compared to monocular depth estimation and was initially what depth estimation relied on. However, being able to exploit monocular cues is necessary for scenarios when stereo depth estimation is not possible. We have presented a novel CNN network, BiNet, which is inspired by ENet, to tackle depth estimation of moving objects using only a monocular camera in real time. It performs better than ENet on the Cityscapes dataset while adding only a small overhead to the complexity.
8. Pinard, Clément. "Robust Learning of a depth map for obstacle avoidance with a monocular stabilized flying camera." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLY003/document.

Abstract:
Consumer unmanned aerial vehicles (UAVs) are mainly flying cameras. They democratized aerial footage, but with their success came security concerns. This work aims at improving UAV security with obstacle avoidance, while keeping a smooth flight. In this context, we use only one stabilized camera, because of weight and cost incentives. For their robustness in computer vision and their capacity to solve complex tasks, we chose to use convolutional neural networks (CNN). Our strategy is based on incrementally learning tasks with increasing complexity, whose first steps are to construct a depth map from the stabilized camera. This thesis focuses on studying the ability of CNNs to train for this task. In the case of stabilized footage, the depth map is closely linked to optical flow. We thus adapt FlowNet, a CNN known for optical flow, to output depth directly from two stabilized frames. This network is called DepthNet. This experiment succeeded with synthetic footage, but is not robust enough to be used directly on real videos. Consequently, we consider self-supervised training with real videos, based on differentiable image reprojection. This training method for CNNs being rather novel in the literature, a thorough study is needed in order not to depend too much on heuristics. Finally, we developed a depth-fusion algorithm to use DepthNet efficiently on real videos. Multiple frame pairs are fed to DepthNet to obtain a wide depth-sensing range.
APA, Harvard, Vancouver, ISO, and other styles
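The geometric relation underlying depth from stabilized frames, as in the thesis above, is plain triangulation: once rotation is removed, flow magnitude is inversely proportional to depth. A minimal sketch with illustrative baseline and focal-length values (this is the classical relation, not DepthNet itself):

```python
import numpy as np

def depth_from_flow(flow_mag, baseline_m, focal_px):
    """Triangulate per-pixel depth from optical-flow magnitude, assuming a
    purely translational (stabilized) camera displaced by baseline_m between
    the two frames: Z = f * B / disparity."""
    eps = 1e-6  # avoid division by zero in textureless regions
    return focal_px * baseline_m / np.maximum(flow_mag, eps)

# Larger flow means a closer object, which is why feeding DepthNet multiple
# frame pairs (different baselines) widens the measurable depth range.
flow = np.array([[2.0, 4.0], [8.0, 1.0]])  # disparity in pixels
depth = depth_from_flow(flow, baseline_m=0.10, focal_px=400.0)
```

With a 10 cm baseline and a 400 px focal length, a 2 px flow maps to 20 m, a 1 px flow to 40 m.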
9

Kuznetsova, Alina [Verfasser]. "Hand pose recognition using a consumer depth camera / Alina Kuznetsova." Hannover : Technische Informationsbibliothek (TIB), 2016. http://d-nb.info/1100290125/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sandberg, David. "Model-Based Video Coding Using a Colour and Depth Camera." Thesis, Linköpings universitet, Datorseende, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-68737.

Full text
Abstract:
In this master thesis, a model-based video coding algorithm has been developed that uses input from a colour and depth camera, such as the Microsoft Kinect. Using a model-based representation of a video has several advantages over the commonly used block-based approach, used by the H.264 standard. For example, videos can be rendered in 3D, be viewed from alternative views, and have objects inserted into them for augmented reality and user interaction. This master thesis demonstrates a very efficient way of encoding the geometry of a scene. The results of the proposed algorithm show that it can reach very low bitrates with comparable results to the H.264 standard.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Depth camera"

1

Li, Xun, 1958-, ed. Lights! camera! kai shi!: In depth interviews with China's new generation of movie directors. Norwalk, Conn: Eastbridge, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Michael, Underwood. Death in camera. Bath: Chivers, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Copyright Paperback Collection (Library of Congress), ed. Eileen Fulton's lights, camera, death. New York: Ballantine Books, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Jiang, Zicheng Liu, and Ying Wu. Human Action Recognition with Depth Cameras. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-04561-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fossati, Andrea, Juergen Gall, Helmut Grabner, Xiaofeng Ren, and Kurt Konolige, eds. Consumer Depth Cameras for Computer Vision. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-4640-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

La camera verde: Il cinema e la morte. Napoli: Ipermedium libri, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zanuttigh, Pietro, Giulio Marin, Carlo Dal Mutto, Fabio Dominio, Ludovico Minto, and Guido Maria Cortelazzo. Time-of-Flight and Structured Light Depth Cameras. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30973-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cámara de gas. 2nd ed. Barcelona: Planeta, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Understanding close-up photography: Creative close encounters with or without a macro Lens. New York: Amphoto Books, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Johnson, Jean. Dr. J.R.N. Owen: Frontier doctor and leader of Death Valley's camel caravan. Death Valley, Calif. (P.O. Box 338): Death Valley '49ers, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Depth camera"

1

Langmann, Benjamin. "Depth Camera Assessment." In Wide Area 2D/3D Imaging, 5–19. Wiesbaden: Springer Fachmedien Wiesbaden, 2014. http://dx.doi.org/10.1007/978-3-658-06457-0_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhao, Jinghao, and Jiro Tanaka. "Hand Gesture Authentication Using Depth Camera." In Advances in Intelligent Systems and Computing, 641–54. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03405-4_45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Krüger, Björn, Anna Vögele, Lukas Herwartz, Thomas Terkatz, Andreas Weber, Carmen Garcia, Ingo Fietze, and Thomas Penzel. "Sleep Detection Using a Depth Camera." In Computational Science and Its Applications – ICCSA 2014, 824–35. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09144-0_57.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kotfis, Dave. "Octree Mapping from a Depth Camera." In GPU Pro 360, 317–34. Boca Raton, FL: A K Peters/CRC Press, 2018. http://dx.doi.org/10.1201/9781351052108-18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zanuttigh, Pietro, Giulio Marin, Carlo Dal Mutto, Fabio Dominio, Ludovico Minto, and Guido Maria Cortelazzo. "3D Scene Reconstruction from Depth Camera Data." In Time-of-Flight and Structured Light Depth Cameras, 231–51. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30973-6_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yu, Meng-Chieh, Huan Wu, Jia-Ling Liou, Ming-Sui Lee, and Yi-Ping Hung. "Multiparameter Sleep Monitoring Using a Depth Camera." In Biomedical Engineering Systems and Technologies, 311–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38256-7_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Piperi, Erald, Ilo Bodi, and Elidon Avrami. "Multi-depth Camera Setup for Body Scanning." In Lecture Notes on Multidisciplinary Industrial Engineering, 334–38. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-48933-4_32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Israël, Jonathan, and Aurélien Plyer. "A Brute Force Approach to Depth Camera Odometry." In Consumer Depth Cameras for Computer Vision, 49–60. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-4640-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kirschmann, Moritz A., Jörg Pierer, Alexander Steinecker, Philipp Schmid, and Arne Erdmann. "Plenoptic Inspection System for Automatic Quality Control of MEMS and Microsystems." In IFIP Advances in Information and Communication Technology, 220–32. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72632-4_16.

Full text
Abstract:
Optical quality control of MEMS and microsystems is challenging as these structures are micro-scale and three-dimensional. Here we lay out different optical systems that can be used for 3D optical quality control in general and for such structures in particular. We further investigate one of these technologies – plenoptic cameras – and characterize them for the described task, showing advantages and disadvantages. Key advantages are a huge increase in depth of field compared to conventional microscope camera systems, allowing for fast acquisition of non-flat systems, and secondly the resulting total-focus images and depth maps. Finally, we conclude that these advantages render plenoptic cameras a valuable technology for the application of quality control.
APA, Harvard, Vancouver, ISO, and other styles
10

Kim, Sung-Yeol, Andreas Koschan, Mongi A. Abidi, and Yo-Sung Ho. "Three-Dimensional Video Contents Exploitation in Depth Camera-Based Hybrid Camera System." In Signals and Communication Technology, 349–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12802-8_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Depth camera"

1

Song, Hyok, Jisang Yoo, Sooyeong Kwak, Cheon Lee, and Byeongho Choi. "3D mesh and multi-view synthesis implementation using stereo cameras and a depth camera." In 2013 3DTV Vision Beyond Depth (3DTV-CON). IEEE, 2013. http://dx.doi.org/10.1109/3dtv.2013.6676645.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Xie, Yupeng, Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, and Gauthier Lafruit. "View Synthesis: LiDAR Camera versus Depth Estimation." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita v Plzni, 2021. http://dx.doi.org/10.24132/csrn.2021.3101.35.

Full text
Abstract:
Depth-Image-Based Rendering (DIBR) can synthesize a virtual view image from a set of multiview images and corresponding depth maps. However, this requires an accurate depth map estimation that incurs a high computational cost of several minutes per frame in DERS (MPEG-I's Depth Estimation Reference Software), even using a high-class computer. LiDAR cameras can thus be an alternative solution to DERS in real-time DIBR applications. We compare the quality of a low-cost LiDAR camera, the Intel Realsense LiDAR L515 calibrated and configured adequately, with DERS using MPEG-I's Reference View Synthesizer (RVS). In IV-PSNR, the LiDAR camera reaches 32.2 dB view synthesis quality with a 15 cm camera baseline and 40.3 dB with a 2 cm baseline. Though DERS outperforms the LiDAR camera by 4.2 dB, the latter provides a better quality-performance trade-off. However, visual inspection demonstrates that LiDAR's virtual views have even slightly higher quality than with DERS in most tested low-texture scene areas, except for object borders. Overall, we highly recommend using LiDAR cameras over advanced depth estimation methods (like DERS) in real-time DIBR applications. Nevertheless, this requires delicate calibration with multiple tools, further exposed in the paper.
APA, Harvard, Vancouver, ISO, and other styles
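For context on the decibel figures quoted above, plain PSNR is computed as below; the paper's IV-PSNR metric additionally tolerates small pixel shifts and global colour offsets, so the two are related but not interchangeable. A minimal sketch:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference view and a
    synthesized view: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100.0)
syn = ref + 10.0  # a uniform error of 10 grey levels
value = psnr(ref, syn)  # 10 * log10(255^2 / 100) ≈ 28.13 dB
```

Each extra dB corresponds to a roughly 21% drop in mean squared error, which is why the 4.2 dB gap between DERS and the L515 is perceptible mainly at object borders.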
3

Xie, Yupeng, Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, and Gauthier Lafruit. "View Synthesis: LiDAR Camera versus Depth Estimation." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita, 2021. http://dx.doi.org/10.24132/csrn.2021.3002.35.

Full text
Abstract:
Depth-Image-Based Rendering (DIBR) can synthesize a virtual view image from a set of multiview images and corresponding depth maps. However, this requires an accurate depth map estimation that incurs a high computational cost of several minutes per frame in DERS (MPEG-I's Depth Estimation Reference Software), even using a high-class computer. LiDAR cameras can thus be an alternative solution to DERS in real-time DIBR applications. We compare the quality of a low-cost LiDAR camera, the Intel Realsense LiDAR L515 calibrated and configured adequately, with DERS using MPEG-I's Reference View Synthesizer (RVS). In IV-PSNR, the LiDAR camera reaches 32.2 dB view synthesis quality with a 15 cm camera baseline and 40.3 dB with a 2 cm baseline. Though DERS outperforms the LiDAR camera by 4.2 dB, the latter provides a better quality-performance trade-off. However, visual inspection demonstrates that LiDAR's virtual views have even slightly higher quality than with DERS in most tested low-texture scene areas, except for object borders. Overall, we highly recommend using LiDAR cameras over advanced depth estimation methods (like DERS) in real-time DIBR applications. Nevertheless, this requires delicate calibration with multiple tools, further exposed in the paper.
APA, Harvard, Vancouver, ISO, and other styles
4

Belhedi, Amira, Adrien Bartoli, Vincent Gay-bellile, Steve Bourgeois, Patrick Sayd, and Kamel Hamrouni. "Depth Correction for Depth Camera From Planarity." In British Machine Vision Conference 2012. British Machine Vision Association, 2012. http://dx.doi.org/10.5244/c.26.43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Peng, Chao-Chung. "Depth Camera Noise Modeling." In 2023 International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan). IEEE, 2023. http://dx.doi.org/10.1109/icce-taiwan58799.2023.10227043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wong, Emily, Isabella Humphrey, Scott Switzer, Christopher Crutchfield, Nathan Hui, Curt Schurgers, and Ryan Kastner. "Underwater Depth Calibration Using a Commercial Depth Camera." In WUWNet'22: The 16th International Conference on Underwater Networks & Systems. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3567600.3568158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Long, Yunfei, Daniel Morris, Xiaoming Liu, Marcos Castro, Punarjay Chakravarty, and Praveen Narayanan. "Radar-Camera Pixel Depth Association for Depth Completion." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.01232.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tanee, Siraprapa, and Dusit Thanapatay. "Scoliosis screening using depth camera." In 2017 International Electrical Engineering Congress (iEECON). IEEE, 2017. http://dx.doi.org/10.1109/ieecon.2017.8075869.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Popescu, Voicu, and Daniel Aliaga. "The depth discontinuity occlusion camera." In the 2006 symposium. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1111411.1111436.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Wei-Chih, Keng-Ren Lin, Chi L. Tsui, David Schipf, and Jonathan Leang. "Underwater camera with depth measurement." In SPIE Smart Structures and Materials + Nondestructive Evaluation and Health Monitoring, edited by Tribikram Kundu. SPIE, 2016. http://dx.doi.org/10.1117/12.2219013.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Depth camera"

1

Clausen, Jay, Michael Musty, Anna Wagner, Susan Frankenstein, and Jason Dorvee. Modeling of a multi-month thermal IR study. Engineer Research and Development Center (U.S.), July 2021. http://dx.doi.org/10.21079/11681/41060.

Full text
Abstract:
Inconsistent and unacceptable probability of detection (PD) and false alarm rates (FAR) due to varying environmental conditions hamper buried object detection. A 4-month study evaluated the environmental parameters impacting standoff thermal infrared (IR) detection of buried objects. Field observations were integrated into a model depicting the temporal and spatial thermal changes through a 1-week period utilizing a 15-minute time-step interval. The model illustrates the surface thermal observations obtained with a thermal IR camera contemporaneously with a 3-D presentation of subsurface soil temperatures obtained with 156 buried thermocouples. Precipitation events and subsequent soil moisture responses synchronized to the temperature data are also included in the model simulation. The simulation shows the temperature response of buried objects due to changes in incoming solar radiation, air/surface soil temperature changes, latent heat exchange between the objects and surrounding soil, and impacts due to precipitation/changes in soil moisture. Differences are noted between the thermal response of plastic and metal objects as well as depth of burial below the ground surface. Nearly identical environmental conditions on different days did not always elicit the same spatial thermal response.
APA, Harvard, Vancouver, ISO, and other styles
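The subsurface temperature response the report models can be illustrated with a one-dimensional heat-conduction sketch; the diffusivity and sinusoidal surface forcing below are assumed illustrative values, not the study's calibrated parameters:

```python
import numpy as np

# Explicit finite-difference diffusion of a diurnal surface temperature wave
# into a 50 cm soil column; the 15-minute model step of the study is here
# subdivided into 60 s steps for numerical stability (alpha*dt/dz^2 = 0.3).
alpha = 5e-7            # soil thermal diffusivity, m^2/s (assumed)
dz, dt = 0.01, 60.0     # 1 cm grid, 60 s time step
T = np.full(50, 15.0)   # initial uniform soil temperature, deg C
amp10 = 0.0             # observed amplitude of the diurnal wave at 10 cm depth

for step in range(2 * 24 * 60):  # two simulated days
    t = step * dt
    T[0] = 15.0 + 10.0 * np.sin(2.0 * np.pi * t / 86400.0)  # surface forcing
    T[1:-1] += alpha * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    amp10 = max(amp10, abs(T[10] - 15.0))

# The 10 deg C surface wave arrives at 10 cm depth attenuated and delayed,
# which is why buried objects show a damped, lagged thermal IR signature.
```

The attenuation and phase lag with depth are what make the thermal contrast of a buried object depend so strongly on burial depth and time of day.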
2

Limoges, A., A. Normandeau, J. B R Eamer, N. Van Nieuwenhove, M. Atkinson, H. Sharpe, T. Audet, et al. 2022William-Kennedy expedition: Nunatsiavut Coastal Interaction Project (NCIP). Natural Resources Canada/CMSS/Information Management, 2023. http://dx.doi.org/10.4095/332085.

Full text
Abstract:
The accelerating Arctic cryosphere decline severely impacts the land on which northern communities live through the presence of coastal and marine geohazards and coastal erosion, which further places the cultural heritage of coastal archaeological sites at risk. Sea ice decline also compromises the formation of polynyas, with unknown consequences for the regional ecosystems. From the 10th to the 18th of July 2022, a scientific cruise onboard the research vessel William-Kennedy allowed the collection of a suite of samples and data from the marine coastal environment of Nain, Nunatsiavut. In total, 42 surface sediment samples, 29 sediment cores, 41 conductivity-temperature-depth (CTD) profiles, 13 water samples, 24 phytoplankton nets and 13 zooplankton nets were collected. The cruise allowed the deployment of 2 moorings equipped with sediment traps in Nain Bay and within deeper offshore waters. Triangulation showed that the 2 moorings were correctly placed near their target locations. Drop-camera transects were deployed in Webb Bay and at the easternmost tip of Paul's Island to image the seabed and study benthic habitats. Finally, acoustic sub-bottom profiling along the entire study area allowed a high-resolution characterization of the stratigraphy of the seafloor, helped identify locations for sediment sampling, and provided geological information about the depositional environments. The material and data collected during the research cruise will be key to 1) evaluating the productivity and dynamics of small recurring polynyas (i.e., rattles) on diverse timescales, 2) assessing marine and coastal geohazards (e.g., landslides) in relation to the deglacial history of Nain, 3) investigating the seabed geomorphology in Webb Bay and linkages with permafrost and sea-level changes, and 4) conducting benthic habitat characterization.
Co-led by the University of New Brunswick (UNB) and Natural Resources Canada (NRCan), this cruise was done in collaboration with the Government of Nunatsiavut, Université du Québec à Montréal, Université Laval, Dalhousie University and Memorial University, and was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and NRCan.
APA, Harvard, Vancouver, ISO, and other styles
3

Christie, Benjamin, Osama Ennasr, and Garry Glaspell. Autonomous navigation and mapping in a simulated environment. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/42006.

Full text
Abstract:
Unknown Environment Exploration (UEE) with an Unmanned Ground Vehicle (UGV) is extremely challenging. This report investigates a frontier exploration approach, in simulation, that leverages Simultaneous Localization And Mapping (SLAM) to efficiently explore unknown areas by finding navigable routes. The solution utilizes a diverse sensor payload that includes wheel encoders, three-dimensional (3-D) LIDAR, and Red, Green, Blue and Depth (RGBD) cameras. The main goal of this effort is to leverage frontier-based exploration with a UGV to produce a 3-D map (up to 10 cm resolution). The solution provided leverages the Robot Operating System (ROS).
APA, Harvard, Vancouver, ISO, and other styles
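The core primitive of the frontier-based exploration described in the report above is frontier detection on an occupancy grid: a known-free cell adjacent to at least one unknown cell is a candidate goal for the UGV. A minimal sketch (generic, not the report's ROS implementation):

```python
import numpy as np

FREE, OCC, UNKNOWN = 0, 1, -1

def frontier_cells(grid):
    """Return (row, col) of free cells bordering unexplored space."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            # 4-connected neighbourhood check for unknown cells
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = np.array([[0, 0, -1],
                 [0, 1, -1],
                 [0, 0,  0]])
# Only the free cells touching the unknown right-hand column are frontiers.
```

In a full system, SLAM keeps this grid updated while a planner repeatedly drives the robot toward the nearest (or largest) frontier cluster until no frontiers remain.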
4

Ennasr, Osama, Brandon Dodd, Michael Paquette, Charles Ellison, and Garry Glaspell. Low size, weight, power, and cost (SWaP-C) payload for autonomous navigation and mapping on an unmanned ground vehicle. Engineer Research and Development Center (U.S.), September 2023. http://dx.doi.org/10.21079/11681/47683.

Full text
Abstract:
Autonomous navigation and unknown environment exploration with an unmanned ground vehicle (UGV) is extremely challenging. This report investigates a mapping and exploration solution utilizing low size, weight, power, and cost payloads. The platform presented here leverages simultaneous localization and mapping to efficiently explore unknown areas by finding navigable routes. The solution utilizes a diverse sensor payload that includes wheel encoders, 3D lidar, and red-green-blue and depth cameras. The main goal of this effort is to leverage path planning and navigation for mapping and exploration with a UGV to produce an accurate 3D map. The solution provided also leverages the Robot Operating System.
APA, Harvard, Vancouver, ISO, and other styles
5

Kulhandjian, Hovannes. Detecting Driver Drowsiness with Multi-Sensor Data Fusion Combined with Machine Learning. Mineta Transportation Institute, September 2021. http://dx.doi.org/10.31979/mti.2021.2015.

Full text
Abstract:
In this research work, we develop a drowsy driver detection system through the application of visual and radar sensors combined with machine learning. The system concept was derived from the desire to achieve a high level of driver safety through the prevention of potentially fatal accidents involving drowsy drivers. According to the National Highway Traffic Safety Administration, drowsy driving resulted in 50,000 injuries across 91,000 police-reported accidents, and a death toll of nearly 800 in 2017. The objective of this research work is to provide a working prototype of Advanced Driver Assistance Systems that can be installed in present-day vehicles. By integrating two modes of surveillance, a camera and a micro-Doppler radar sensor, to examine biometric expressions of drowsiness, our system offers high reliability, over 95% accuracy, in its drowsy driver detection capabilities. The camera is used to monitor the driver's eyes, mouth and head movement and recognize when a discrepancy occurs in the driver's blinking pattern, yawning incidence, and/or head drop, thereby signaling that the driver may be experiencing fatigue or drowsiness. The micro-Doppler sensor allows the driver's head movement to be captured both during the day and at night. Through data fusion and deep learning, the ability to quickly analyze and classify a driver's behavior under various conditions such as lighting, pose-variation, and facial expression in a real-time monitoring system is achieved.
APA, Harvard, Vancouver, ISO, and other styles
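A common way to turn camera-tracked eye landmarks into the blink signal the abstract describes is the eye-aspect-ratio (EAR); this is a generic sketch of that feature, not necessarily the exact pipeline used in the report:

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered outer corner,
    two upper-lid points, inner corner, two lower-lid points (as in the
    common 68-point face model). EAR drops sharply when the eye closes,
    so a sustained run of low-EAR frames flags a long blink or microsleep."""
    def d(a, b):
        return math.dist(a, b)
    # mean of the two vertical lid gaps, normalized by the horizontal width
    return (d(eye[1], eye[5]) + d(eye[2], eye[4])) / (2.0 * d(eye[0], eye[3]))

open_eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]
```

A per-driver threshold on EAR (often around 0.2-0.25) combined with a frame-count window distinguishes normal blinks from drowsiness-related eyelid closure.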
6

Amiri, Rahmatullah, and Ashley Jackson. Taliban Taxation in Afghanistan: (2006-2021). Institute of Development Studies (IDS), February 2022. http://dx.doi.org/10.19088/ictd.2022.004.

Full text
Abstract:
Before taking control of Afghanistan in August 2021, the Taliban had developed a remarkably state-like revenue collection system throughout the country. This ICTD research explores how that came to be, and what factors shaped the various forms of Taliban taxation. Drawing primarily on fieldwork from Helmand, Ghazni and Kunduz provinces, this paper explores in depth three commonplace types of Taliban taxation: ushr (effectively a harvest tax, applied to both legal crops as well as opium), taxation on transport of goods (similar to customs), and taxes on aid interventions. The paper pays particular attention to geographic variation, exploring how and why each practice evolved differently at the subnational level.
APA, Harvard, Vancouver, ISO, and other styles
7

Khan, Asad, Angeli Jayme, Imad Al-Qadi, and Gregary Renshaw. Embedded Energy Harvesting Modules in Flexible Pavements. Illinois Center for Transportation, April 2024. http://dx.doi.org/10.36501/0197-9191/24-008.

Full text
Abstract:
Energy from pavements can be harvested in multiple ways to produce clean energy. One of the techniques is electromagnetic energy harvesting, in which mechanical energy from vehicles is captured in the form of input displacement to produce electricity. In this study, a rack-and-pinion electromagnetic energy harvester proposed in the literature as a speed bump is optimized for highway-speed vehicles. A displacement transfer plate is also proposed, with a minimum depth of embedment in the pavement to carry input displacements from passing vehicles and excite the energy harvester. The energy harvester was designed, and kinematic modeling was carried out to establish power–output relations as a function of rack velocity. Sensitivity analysis of various parameters indicated that, for high-speed applications where rack velocities are relatively high, small input excitations could be harnessed to achieve the rated revolutions per minute (RPM) of the generator. A set of laboratory tests was conducted to validate the kinematic model, and a good correlation was observed between measured and predicted voltages. Dynamic modeling of the plate was done for both recovery and compression to obtain the plate and rack velocities. Using Monte Carlo simulation, the plate was designed for a class-9 truck with wide-base tires moving at 128 km/h. Design and layout of the energy harvester with a displacement transfer plate was proposed for field validation. The energy harvester with the displacement plate could be integrated with transverse rumble strips in construction zones and near diversions. Hence, it could be used as a standalone system to power roadside applications such as safety signs, road lights, speed cameras, and vehicle-to-infrastructure (V2I) systems.
APA, Harvard, Vancouver, ISO, and other styles
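The power-output relation as a function of rack velocity described in the abstract above can be sketched from first principles: the rack (driven by the displacement plate) spins the pinion, a gear train multiplies the speed, and an idealized DC generator converts shaft speed to electrical power. All parameter values below are illustrative assumptions, not the study's design values:

```python
import math

def generator_rpm(rack_velocity_mps, pinion_radius_m, gear_ratio):
    """Kinematics of the rack-and-pinion stage: rack velocity -> pinion
    angular speed -> geared-up generator shaft speed in RPM."""
    omega = rack_velocity_mps / pinion_radius_m  # pinion angular speed, rad/s
    return omega * gear_ratio * 60.0 / (2.0 * math.pi)

def electrical_power_w(rpm, ke_v_per_rads, r_load_ohm, r_coil_ohm):
    """Idealized DC generator: EMF = ke * omega; power dissipated in the load."""
    omega = rpm * 2.0 * math.pi / 60.0
    emf = ke_v_per_rads * omega
    current = emf / (r_load_ohm + r_coil_ohm)
    return current ** 2 * r_load_ohm

# A 0.5 m/s plate velocity with a 2 cm pinion and 5:1 gear-up spins the
# generator at roughly 1194 RPM, yielding a few watts into a 10-ohm load.
rpm = generator_rpm(rack_velocity_mps=0.5, pinion_radius_m=0.02, gear_ratio=5.0)
power = electrical_power_w(rpm, ke_v_per_rads=0.05, r_load_ohm=10.0, r_coil_ohm=2.0)
```

This is why the sensitivity analysis in the report finds that small input excitations suffice at highway speeds: rack velocity, not stroke length, sets the generator RPM.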
8

HEFNER, Robert. IHSAN ETHICS AND POLITICAL REVITALIZATION Appreciating Muqtedar Khan’s Islam and Good Governance. IIIT, October 2020. http://dx.doi.org/10.47816/01.001.20.

Full text
Abstract:
Ours is an age of pervasive political turbulence, and the scale of the challenge requires new thinking on politics as well as public ethics for our world. In Western countries, the specter of Islamophobia, alt-right populism, along with racialized violence has shaken public confidence in long-secure assumptions rooted in democracy, diversity, and citizenship. The tragic denouement of so many of the Arab uprisings together with the ascendance of apocalyptic extremists like Daesh and Boko Haram have caused an even greater sense of alarm in large parts of the Muslim-majority world. It is against this backdrop that M.A. Muqtedar Khan has written a book of breathtaking range and ethical beauty. The author explores the history and sociology of the Muslim world, both classic and contemporary. He does so, however, not merely to chronicle the phases of its development, but to explore just why the message of compassion, mercy, and ethical beauty so prominent in the Quran and Sunna of the Prophet came over time to be displaced by a narrow legalism that emphasized jurisprudence, punishment, and social control. In the modern era, Western Orientalists and Islamists alike have pushed the juridification and interpretive reification of Islamic ethical traditions even further. Each group has asserted that the essence of Islam lies in jurisprudence (fiqh), and both have tended to imagine this legal heritage on the model of Western positive law, according to which law is authorized, codified, and enforced by a leviathan state. “Reification of Shariah and equating of Islam and Shariah has a rather emaciating effect on Islam,” Khan rightly argues. It leads its proponents to overlook “the depth and heights of Islamic faith, mysticism, philosophy or even emotions such as divine love (Muhabba)” (13). 
As the sociologist of Islamic law, Sami Zubaida, has similarly observed, in all these developments one sees evidence, not of a traditionalist reassertion of Muslim values, but a “triumph of Western models” of religion and state (Zubaida 2003:135). To counteract these impoverishing trends, Khan presents a far-reaching analysis that “seeks to move away from the now failed vision of Islamic states without demanding radical secularization” (2). He does so by positioning himself squarely within the ethical and mystical legacy of the Qur’an and traditions of the Prophet. As the book’s title makes clear, the key to this effort of religious recovery is “the cosmology of Ihsan and the worldview of Al-Tasawwuf, the science of Islamic mysticism” (1-2). For Islamist activists whose models of Islam have more to do with contemporary identity politics than a deep reading of Islamic traditions, Khan’s foregrounding of Ihsan may seem unfamiliar or baffling. But one of the many achievements of this book is the skill with which it plumbs the depth of scripture, classical commentaries, and tasawwuf practices to recover and confirm the ethic that lies at their heart. “The Quran promises that God is with those who do beautiful things,” the author reminds us (Khan 2019:1). The concept of Ihsan appears 191 times in 175 verses in the Quran (110). The concept is given its richest elaboration, Khan explains, in the famous hadith of the Angel Gabriel. This tradition recounts that when Gabriel appeared before the Prophet he asked, “What is Ihsan?” Both Gabriel’s question and the Prophet’s response make clear that Ihsan is an ideal at the center of the Qur’an and Sunna of the Prophet, and that it enjoins “perfection, goodness, to better, to do beautiful things and to do righteous deeds” (3). It is this cosmological ethic that Khan argues must be restored and implemented “to develop a political philosophy … that emphasizes love over law” (2). 
In its expansive exploration of Islamic ethics and civilization, Khan’s Islam and Good Governance will remind some readers of the late Shahab Ahmed’s remarkable book, What is Islam? The Importance of Being Islamic (Ahmed 2016). Both are works of impressive range and spiritual depth. But whereas Ahmed stood in the humanities wing of Islamic studies, Khan is an intellectual polymath who moves easily across the Islamic sciences, social theory, and comparative politics. He brings the full weight of his effort to conclusion with policy recommendations for how “to combine Sufism with political theory” (6), and to do so in a way that recommends specific “Islamic principles that encourage good governance, and politics in pursuit of goodness” (8).
APA, Harvard, Vancouver, ISO, and other styles
9

Leis, B. N., and N. D. Ghadiali. L51720 Pipe Axial Flaw Failure Criteria - PAFFC Version 1.0 Users Manual and Software. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), May 1994. http://dx.doi.org/10.55274/r0011357.

Full text
Abstract:
In the early 1970s, the Pipeline Research Council International, Inc. (PRCI) developed a failure criterion for pipes that had a predominately empirical basis. This criterion was based on flaw sizes that existed prior to pressurization and did not address possible growth due to the pressure in service or in a hydrostatic test or during the hold time at pressure in a hydrotest. So long as that criterion was used within the scope of the underlying database and empirical calibration, the results of its predictions were reasonably accurate. However, with the advent of newer steels and the related increased toughness that supported significant stable flaw growth, it became evident that this criterion should be updated. This updating led to the PRCI ductile flaw growth model (DFGM) that specifically accounted for the stable growth observed at flaws controlled by the steel's toughness and a limit-states analysis that addressed plastic collapse at the flaw. This capability provided an accurate basis to assess flaw criticality in pipelines and also the means to develop hydrotest plans on a pipeline-specific basis. Unfortunately, this enhanced capability came at the expense of increased complexity that made this new capability difficult to use on a day-to-day basis. To counter this complexity, this capability has been recast in the form of a PC computer program. Benefit: This topical report contains the computer program and technical manual for a failure criterion that will predict the behavior of an axially oriented, partially through the wall flaw in a pipeline. The model has been given the acronym PAFFC which stands for Pipe Axial Flaw Failure Criteria. PAFFC is an extension of a previously developed ductile flaw growth model, L51543, and can account for both a flaw's time dependent growth under pressure as well as its unstable growth leading to failure.
As part of the output, the user is presented with a graphical depiction of the flaw sizes, in terms of combinations of flaw length and depth, that will fail (or survive) a given operating or test pressure. Compared with existing criteria, this model provides a more accurate prediction of flaw behavior for a broad range of pipeline conditions.
APA, Harvard, Vancouver, ISO, and other styles
10

Raymond, Kara, Laura Palacios, Cheryl McIntyre, and Evan Gwilliam. Status of climate and water resources at Saguaro National Park: Water year 2019. Edited by Alice Wondrak Biel. National Park Service, December 2021. http://dx.doi.org/10.36967/nrr-2288717.

Full text
Abstract:
Climate and hydrology are major drivers of ecosystems. They dramatically shape ecosystem structure and function, particularly in arid and semi-arid ecosystems. Understanding changes in climate, groundwater, and water quality and quantity is central to assessing the condition of park biota and key cultural resources. The Sonoran Desert Network collects data on climate, groundwater, and surface water at 11 National Park Service units in southern Arizona and New Mexico. This report provides an integrated look at climate, groundwater, and springs conditions at Saguaro National Park (NP) during water year 2019 (October 2018–September 2019). Annual rainfall in the Rincon Mountain District was 27.36" (69.49 cm) at the Mica Mountain RAWS station and 12.89" (32.74 cm) at the Desert Research Learning Center Davis station. February was the wettest month, accounting for nearly one-quarter of the annual rainfall at both stations. Each station recorded extreme precipitation events (>1") on three days. Mean monthly minimum and maximum air temperatures were 25.6°F (-3.6°C) and 78.1°F (25.6°C), respectively, at the Mica Mountain station, and 37.7°F (3.2°C) and 102.3°F (39.1°C), respectively, at the Desert Research Learning Center station. Overall temperatures in WY2019 were cooler than the mean for the entire record. The reconnaissance drought index for the Mica Mountain station indicated wetter conditions than average in WY2019. Both of the park's NOAA COOP stations (one in each district) had large data gaps, partially due to the 35-day federal government shutdown in December and January. For this reason, climate conditions for the Tucson Mountain District are not reported. The mean groundwater level at well WSW-1 in WY2019 was higher than the mean for WY2018. The water level has generally been increasing since 2005, reflecting continued aquifer recovery since the Central Avra Valley Storage and Recovery Project came online, recharging Central Arizona Project water.
Water levels at the Red Hills well generally declined starting in fall WY2019, continuing through spring. Monsoon storms led to rapid water-level increases. The peak water level occurred on September 18. The Madrona Pack Base well water level in WY2019 remained above 10 feet (3.05 m) below measuring point (bmp) in the fall and winter, followed by a steep decline starting in May and continuing until the end of September, when the water level rebounded following a three-day rain event. The highest water level was recorded on February 15. Median water levels in the wells in the middle reach of Rincon Creek in WY2019 were higher than the medians for WY2018 (+0.18–0.68 ft / 0.05–0.21 m), but still generally lower than 6.6 feet (2 m) bgs, the mean depth to water required to sustain juvenile cottonwood and willow trees. RC-7 was dry in June–September, and RC-4 was dry only in September. RC-5, RC-6, and Well 633106 did not go dry, and varied by approximately 3–4 feet (about 1 m). Eleven springs were monitored in the Rincon Mountain District in WY2019. Most springs had relatively few indications of anthropogenic or natural disturbance. Anthropogenic disturbance included spring boxes or other modifications to flow. Examples of natural disturbance included game trails and scat. In addition, several sites exhibited slight disturbance from fires (e.g., burned woody debris and adjacent fire-scarred trees) and evidence of high-flow events. Crews observed 1–7 taxa of facultative/obligate wetland plants and 0–3 invasive non-native species at each spring. Across the springs, crews observed four non-native plant species: rose natal grass (Melinis repens), Kentucky bluegrass (Poa pratensis), crimson fountaingrass (Cenchrus setaceus), and red brome (Bromus rubens). Baseline data on water quality and chemistry were collected at all springs. It is likely that all springs had surface water for at least some part of WY2019. However, the temperature sensors used to estimate surface-water persistence failed...
APA, Harvard, Vancouver, ISO, and other styles
