Academic literature on the topic 'RGB-Depth Image'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'RGB-Depth Image.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "RGB-Depth Image"

1

Li, Hengyu, Hang Liu, Ning Cao, et al. "Real-time RGB-D image stitching using multiple Kinects for improved field of view." International Journal of Advanced Robotic Systems 14, no. 2 (2017): 172988141769556. http://dx.doi.org/10.1177/1729881417695560.

Full text
Abstract:
This article concerns the problems of a defective depth map and limited field of view of Kinect-style RGB-D sensors. An anisotropic diffusion based hole-filling method is proposed to recover invalid depth data in the depth map. The field of view of the Kinect-style RGB-D sensor is extended by stitching depth and color images from several RGB-D sensors. By aligning the depth map with the color image, the registration data calculated by registering color images can be used to stitch depth and color images into a depth and color panoramic image concurrently in real time. Experiments show that the
APA, Harvard, Vancouver, ISO, and other styles
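The stitching idea above reuses the registration computed between the color images to warp the pre-aligned depth maps as well, so color and depth panoramas are produced together. Below is a minimal sketch of that idea with OpenCV; the ORB-plus-homography pipeline, canvas size, and function names are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def stitch_rgbd(color_a, color_b, depth_a, depth_b):
    """Estimate a homography from the color pair and apply it to both the
    color and the depth image of camera B (sketch, not the paper's method)."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(color_a, None)
    kp_b, des_b = orb.detectAndCompute(color_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:200]
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = color_a.shape[:2]
    size = (w * 2, h)  # crude panorama canvas
    color_pano = cv2.warpPerspective(color_b, H, size)
    depth_pano = cv2.warpPerspective(depth_b, H, size,
                                     flags=cv2.INTER_NEAREST)  # avoid mixing depths
    color_pano[:, :w] = color_a
    depth_pano[:, :w] = depth_a
    return color_pano, depth_pano
```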
2

Wu, Yan, Jiqian Li, and Jing Bai. "Multiple Classifiers-Based Feature Fusion for RGB-D Object Recognition." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 05 (2017): 1750014. http://dx.doi.org/10.1142/s0218001417500148.

Full text
Abstract:
RGB-D-based object recognition has been enthusiastically investigated in the past few years. RGB and depth images provide useful and complementary information. Fusing RGB and depth features can significantly increase the accuracy of object recognition. However, previous works just simply take the depth image as the fourth channel of the RGB image and concatenate the RGB and depth features, ignoring the different power of RGB and depth information for different objects. In this paper, a new method which contains three different classifiers is proposed to fuse features extracted from RGB image a
APA, Harvard, Vancouver, ISO, and other styles
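Entry 2 contrasts the common baseline, treating depth as a fourth image channel and concatenating RGB and depth features, with a smarter classifier-level fusion. The sketch below shows the naive channel stacking and a fixed-weight score fusion as points of reference; the weights and helper names are made up here, and the paper's actual scheme learns how to combine three classifiers.

```python
import numpy as np

def stack_rgbd(rgb, depth):
    """Baseline criticized in the abstract: treat depth as a fourth channel."""
    depth = depth.astype(np.float32)[..., None]                       # H x W x 1
    return np.concatenate([rgb.astype(np.float32), depth], axis=-1)   # H x W x 4

def fuse_scores(rgb_scores, depth_scores, w_rgb=0.6, w_depth=0.4):
    """Weighted score-level fusion of two per-class classifier outputs.
    The fixed weights are illustrative; the paper instead learns how much
    to trust each modality for different objects."""
    return w_rgb * np.asarray(rgb_scores) + w_depth * np.asarray(depth_scores)

# Example: two 3-class score vectors fused into one prediction
probs = fuse_scores([0.2, 0.7, 0.1], [0.5, 0.3, 0.2])
print(probs.argmax())  # predicted class index
```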
3

Cao, Hao, Xin Zhao, Ang Li, and Meng Yang. "Depth Image Rectification Based on an Effective RGB–Depth Boundary Inconsistency Model." Electronics 13, no. 16 (2024): 3330. http://dx.doi.org/10.3390/electronics13163330.

Full text
Abstract:
Depth image has been widely involved in various tasks of 3D systems with the advancement of depth acquisition sensors in recent years. Depth images suffer from serious distortions near object boundaries due to the limitations of depth sensors or estimation methods. In this paper, a simple method is proposed to rectify the erroneous object boundaries of depth images with the guidance of reference RGB images. First, an RGB–Depth boundary inconsistency model is developed to measure whether collocated pixels in depth and RGB images belong to the same object. The model extracts the structures of RG
APA, Harvard, Vancouver, ISO, and other styles
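The rectification method above hinges on an RGB-depth boundary inconsistency measure: depth edges that have no supporting edge in the registered RGB image are treated as erroneous. A rough sketch of such an inconsistency map follows; the Canny thresholds and tolerance radius are placeholders, and the paper's model is more elaborate than this.

```python
import cv2
import numpy as np

def boundary_inconsistency(rgb, depth, tol_px=3):
    """Mark depth edges that have no RGB edge within tol_px pixels (sketch)."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    rgb_edges = cv2.Canny(gray, 50, 150)
    depth_u8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    depth_edges = cv2.Canny(depth_u8, 30, 90)
    # Grow RGB edges so small misalignments between the modalities are tolerated.
    kernel = np.ones((2 * tol_px + 1, 2 * tol_px + 1), np.uint8)
    rgb_support = cv2.dilate(rgb_edges, kernel)
    # Depth edges without nearby RGB support are likely distorted boundaries.
    return (depth_edges > 0) & (rgb_support == 0)
```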
4

Oyama, Tadahiro, and Daisuke Matsuzaki. "Depth Image Generation from Monocular RGB Image." Proceedings of the JSME Annual Conference on Robotics and Mechatronics (Robomec) 2019 (2019): 2P2-H09. http://dx.doi.org/10.1299/jsmermd.2019.2p2-h09.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Longyu, Hao Xia, and Yanyou Qiao. "Texture Synthesis Repair of RealSense D435i Depth Images with Object-Oriented RGB Image Segmentation." Sensors 20, no. 23 (2020): 6725. http://dx.doi.org/10.3390/s20236725.

Full text
Abstract:
A depth camera is a kind of sensor that can directly collect distance information between an object and the camera. The RealSense D435i is a low-cost depth camera that is currently in widespread use. When collecting data, an RGB image and a depth image are acquired simultaneously. The quality of the RGB image is good, whereas the depth image typically has many holes. In a lot of applications using depth images, these holes can lead to serious problems. In this study, a repair method of depth images was proposed. The depth image is repaired using the texture synthesis algorithm with the RGB ima
APA, Harvard, Vancouver, ISO, and other styles
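The starting point of entry 5 is that RealSense D435i depth frames contain many zero-valued holes while the RGB frame is clean. The snippet below only shows hole detection plus a stock OpenCV inpainting call as a stand-in for the repair step; the paper's actual method uses texture synthesis guided by object-oriented RGB segmentation.

```python
import cv2
import numpy as np

def repair_depth(depth):
    """Fill zero-valued holes in a depth frame (simplified stand-in)."""
    hole_mask = (depth == 0).astype(np.uint8)            # 1 where depth is missing
    scale = 255.0 / max(int(depth.max()), 1)
    depth_u8 = cv2.convertScaleAbs(depth, alpha=scale)   # 8-bit copy for inpainting
    filled_u8 = cv2.inpaint(depth_u8, hole_mask, 5, cv2.INPAINT_TELEA)
    # Rescale back to the original depth range (coarse approximation).
    filled = filled_u8.astype(np.float32) / scale
    out = depth.astype(np.float32).copy()
    out[hole_mask == 1] = filled[hole_mask == 1]
    return out
```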
6

Kwak, Jeonghoon, and Yunsick Sung. "Automatic 3D Landmark Extraction System Based on an Encoder–Decoder Using Fusion of Vision and LiDAR." Remote Sensing 12, no. 7 (2020): 1142. http://dx.doi.org/10.3390/rs12071142.

Full text
Abstract:
To provide a realistic environment for remote sensing applications, point clouds are used to realize a three-dimensional (3D) digital world for the user. Motion recognition of objects, e.g., humans, is required to provide realistic experiences in the 3D digital world. To recognize a user’s motions, 3D landmarks are provided by analyzing a 3D point cloud collected through a light detection and ranging (LiDAR) system or a red green blue (RGB) image collected visually. However, manual supervision is required to extract 3D landmarks as to whether they originate from the RGB image or the 3D point c
APA, Harvard, Vancouver, ISO, and other styles
7

Tang, Shengjun, Qing Zhu, Wu Chen, et al. "Enhanced RGB-D Mapping Method for Detailed 3D Modeling of Large Indoor Environments." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-1 (June 2, 2016): 151–58. http://dx.doi.org/10.5194/isprsannals-iii-1-151-2016.

Full text
Abstract:
RGB-D sensors are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping of indoor environments. First, they only allow a measurement range with a limited distance (e.g., within 3 m) and a limited field of view. Second, the error of the depth measurement increases with increasing distance to the sensor. In this paper, we propose an enhanced RGB-D mapping method for detailed 3D modeling of large indoor environments by combini
APA, Harvard, Vancouver, ISO, and other styles
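Entry 7 highlights two limitations of RGB-D sensors for dense mapping: a short usable range (roughly within 3 m) and depth error that grows with distance. A common way to account for this, sketched below under an assumed quadratic noise model for structured-light sensors, is to discard far measurements and down-weight the rest; this only illustrates the stated drawback and is not the mapping method proposed in the paper.

```python
import numpy as np

def weight_depth(depth_m, max_range=3.0, sigma0=0.003):
    """Return a validity mask and per-pixel weights for a metric depth map.
    Assumes depth noise grows roughly quadratically with distance (a common
    structured-light model); sigma0 is an assumed coefficient, not from the paper."""
    depth_m = np.asarray(depth_m, dtype=np.float64)
    valid = (depth_m > 0) & (depth_m <= max_range)
    sigma = sigma0 * depth_m ** 2                 # assumed distance-dependent noise
    weights = np.zeros_like(depth_m)
    weights[valid] = 1.0 / sigma[valid] ** 2      # inverse-variance weighting
    return valid, weights
```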
8

Xu, Dan, Ba Li, Guanyun Xi, Shusheng Wang, Lei Xu, and Juncheng Ma. "A Shooting Distance Adaptive Crop Yield Estimation Method Based on Multi-Modal Fusion." Agronomy 15, no. 5 (2025): 1036. https://doi.org/10.3390/agronomy15051036.

Full text
Abstract:
To address the low estimation accuracy of deep learning-based crop yield image recognition methods under untrained shooting distances, this study proposes a shooting distance adaptive crop yield estimation method by fusing RGB and depth image information through multi-modal data fusion. Taking strawberry fruit fresh weight as an example, RGB and depth image data of 348 strawberries were collected at nine heights ranging from 70 to 115 cm. First, based on RGB images and shooting height information, a single-modal crop yield estimation model was developed by training a convolutional neural netwo
APA, Harvard, Vancouver, ISO, and other styles
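Making yield estimation "shooting-distance adaptive" comes down to the fact that the same fruit covers fewer pixels when imaged from farther away, and depth (or shooting height) lets pixel measurements be converted to physical size. The sketch below shows that pinhole-camera conversion; the focal-length values and function name are assumptions for illustration, not parameters from the paper.

```python
def pixel_area_to_cm2(pixel_count, depth_cm, fx_px=600.0, fy_px=600.0):
    """Convert a segmented-fruit pixel area to an approximate physical area.
    Pinhole model: one pixel spans (Z / fx) cm horizontally at distance Z,
    so N pixels correspond to N * (Z/fx) * (Z/fy) cm^2.
    fx_px / fy_px are assumed focal lengths in pixels."""
    return pixel_count * (depth_cm / fx_px) * (depth_cm / fy_px)

# Example: a 5,000-pixel fruit mask seen from 90 cm vs. 115 cm
print(pixel_area_to_cm2(5000, 90.0))    # ~112.5 cm^2
print(pixel_area_to_cm2(5000, 115.0))   # ~183.7 cm^2: same pixels, larger fruit
```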
9

Lee, Ki-Seung. "Improving the Performance of Automatic Lip-Reading Using Image Conversion Techniques." Electronics 13, no. 6 (2024): 1032. http://dx.doi.org/10.3390/electronics13061032.

Full text
Abstract:
Variation in lighting conditions is a major cause of performance degradation in pattern recognition when using optical imaging. In this study, infrared (IR) and depth images were considered as possible robust alternatives against variations in illumination, particularly for improving the performance of automatic lip-reading. The variations due to lighting conditions were quantitatively analyzed for optical, IR, and depth images. Then, deep neural network (DNN)-based lip-reading rules were built for each image modality. Speech recognition techniques based on IR or depth imaging required an addi
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "RGB-Depth Image"

1

Deng, Zhuo. "RGB-Depth Image Segmentation and Object Recognition for Indoor Scenes." Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/427631.

Full text
Abstract:
Computer and Information Science, Ph.D. With the advent of the Microsoft Kinect, the landscape of various vision-related tasks has changed. First, using an active infrared structured-light sensor, the Kinect can directly provide depth information that is hard to infer from traditional RGB images. Second, RGB and depth information are generated synchronously and can be easily aligned, which makes their direct integration possible. In this thesis, I propose several algorithms or systems that focus on how to integrate depth information with traditional visual appearances for address
APA, Harvard, Vancouver, ISO, and other styles
2

Hasnat, Md Abul. "Unsupervised 3D image clustering and extension to joint color and depth segmentation." Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4013/document.

Full text
Abstract:
Access to 3D image sequences has now become widespread, thanks to recent advances in the development of depth sensors as well as methods for handling 3D information derived from 2D images. As a result, there is strong interest in the computer vision research community in integrating 3D information. Indeed, research has shown that the performance of certain applications can be improved by integrating 3D information. However, problems remain to be solved for the analysis and
APA, Harvard, Vancouver, ISO, and other styles
3

Baban A Erep, Thierry Roland. "Contribution au développement d'un système intelligent de quantification des nutriments dans les repas d'Afrique subsaharienne [Contribution to the development of an intelligent system for quantifying nutrients in sub-Saharan African meals]." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP100.

Full text
Abstract:
Malnutrition, whether due to insufficient or excessive nutrient intake, is a global public health challenge affecting billions of people. It affects every organ system and is a major risk factor for non-communicable diseases such as cardiovascular disease, diabetes, and certain cancers. Assessing dietary intake is crucial for preventing malnutrition, but it remains challenging. Traditional dietary assessment methods are laborious and prone to bias. Advances in AI have enabled the design of VBD
APA, Harvard, Vancouver, ISO, and other styles
4

Řehánek, Martin. "Detekce objektů pomocí Kinectu [Object Detection Using the Kinect]." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236602.

Full text
Abstract:
With the release of the Kinect device, new possibilities appeared, allowing simple use of image depth in image processing. The aim of this thesis is to propose a method for object detection and recognition in a depth map. The well-known Bag of Words method and a descriptor based on the Spin Image method are used for object recognition. The Spin Image method is one of several existing approaches to depth maps described in this thesis. Object detection in the image is handled by the sliding-window technique, which is improved and sped up by using the depth information.
APA, Harvard, Vancouver, ISO, and other styles
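The thesis abstract notes that the sliding-window detector is "improved and sped up by using the depth information." One simple way depth can prune a sliding-window search, sketched below as an assumption about the general idea rather than the thesis's exact procedure, is to reject window scales that are physically implausible given the depth inside the window.

```python
import numpy as np

def plausible_window(depth_patch, window_px, obj_size_m=(0.1, 0.5), fx_px=525.0):
    """Decide whether a sliding window is worth classifying, given its depth.
    An object of physical width W metres at distance Z appears about
    fx * W / Z pixels wide; windows far outside that range are skipped.
    obj_size_m and fx_px are assumed values for illustration."""
    valid = depth_patch[depth_patch > 0]
    if valid.size == 0:
        return False                          # no depth available: skip the window
    z = np.median(valid)                      # metres
    expected_px = fx_px * np.array(obj_size_m) / z
    return expected_px[0] * 0.5 <= window_px <= expected_px[1] * 2.0
```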
5

Santos, Leandro Tavares Aragao dos. "Generating Superresolved Depth Maps Using Low Cost Sensors and RGB Images." Pontifícia Universidade Católica do Rio de Janeiro, 2014. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28673@1.

Full text
Abstract:
Pontifícia Universidade Católica do Rio de Janeiro; Coordenação de Aperfeiçoamento do Pessoal de Ensino Superior; Programa de Excelência Acadêmica. The applications of three-dimensional reconstruction of a real scene are extremely diverse. The emergence of low-cost depth sensors such as the Kinect suggests the development of reconstruction systems cheaper than those that already exist. However, the data provided by this device still fall far short compared with those provided by more sophisticated systems. In academia and industry, some
APA, Harvard, Vancouver, ISO, and other styles
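The thesis above targets super-resolving depth maps from low-cost sensors with the help of registered RGB images. A standard baseline for that task, RGB-guided (joint bilateral) upsampling, is sketched below for orientation; it requires opencv-contrib-python and is not the reconstruction pipeline developed in the thesis.

```python
import cv2
import numpy as np

def upsample_depth(depth_lr, rgb_hr):
    """RGB-guided depth upsampling baseline (needs opencv-contrib-python).
    The low-res depth is first resized to the RGB resolution, then smoothed
    with a joint bilateral filter that respects RGB edges."""
    h, w = rgb_hr.shape[:2]
    depth_up = cv2.resize(depth_lr.astype(np.float32), (w, h),
                          interpolation=cv2.INTER_NEAREST)
    guide = rgb_hr.astype(np.float32)
    # Arguments: joint (guide), src, filter diameter, sigmaColor, sigmaSpace.
    return cv2.ximgproc.jointBilateralFilter(guide, depth_up, 9, 25, 7)
```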
6

Thörnberg, Jesper. "Combining RGB and Depth Images for Robust Object Detection using Convolutional Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-174137.

Full text
Abstract:
We investigated the advantage of combining RGB images with depth data to get more robust object classifications and detections using pre-trained deep convolutional neural networks. We relied upon the raw images from publicly available datasets captured using Microsoft Kinect cameras. The raw images varied in size, and therefore required resizing to fit our network. We designed a resizing method called "bleeding edge" to avoid distorting the objects in the images. We present a novel method of interpolating the missing depth pixel values by comparing to similar RGB values. This method proved sup
APA, Harvard, Vancouver, ISO, and other styles
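Entry 6 mentions "a novel method of interpolating the missing depth pixel values by comparing to similar RGB values." The naive sketch below illustrates that general idea, borrowing the depth of the neighbouring valid pixel with the most similar colour; the window size and distance measure are assumptions, not the thesis's algorithm.

```python
import numpy as np

def fill_depth_by_rgb(depth, rgb, radius=5):
    """Fill zero-depth pixels with the depth of the most RGB-similar valid
    neighbour inside a (2*radius+1)^2 window. Naive O(holes * window) sketch."""
    out = depth.astype(np.float32).copy()
    rgb = rgb.astype(np.float32)
    h, w = depth.shape
    for y, x in zip(*np.where(depth == 0)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch_d = depth[y0:y1, x0:x1]
        if not np.any(patch_d > 0):
            continue                                 # no valid neighbour, leave hole
        diff = np.linalg.norm(rgb[y0:y1, x0:x1] - rgb[y, x], axis=-1)
        diff[patch_d == 0] = np.inf                  # compare only against valid depth
        iy, ix = np.unravel_index(np.argmin(diff), diff.shape)
        out[y, x] = patch_d[iy, ix]
    return out
```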
7

Möckelind, Christoffer. "Improving deep monocular depth predictions using dense narrow field of view depth images." Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.

Full text
Abstract:
In this work we study a depth prediction problem where we provide a narrow field of view depth image and a wide field of view RGB image to a deep network tasked with predicting the depth for the entire RGB image. We show that by providing a narrow field of view depth image, we improve results for the area outside the provided depth compared to an earlier approach only utilizing a single RGB image for depth prediction. We also show that larger depth maps provide a greater advantage than smaller ones and that the accuracy of the model decreases with the distance from the provided depth. Further,
APA, Harvard, Vancouver, ISO, and other styles
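The experiments in entry 7 feed a wide field-of-view RGB image together with a dense but narrow field-of-view depth image into a single network. One trivial way to build such an input tensor, assumed here purely for illustration, is to embed the narrow depth crop in a full-size channel that is zero outside its field of view, plus a validity mask:

```python
import numpy as np

def build_input(rgb, depth_crop, top_left):
    """Stack RGB with a narrow-FoV depth channel and its validity mask.
    rgb: H x W x 3, depth_crop: h x w metric depth, top_left: (row, col)
    of the crop inside the RGB frame. The layout is an assumption."""
    H, W = rgb.shape[:2]
    depth_full = np.zeros((H, W), np.float32)
    mask = np.zeros((H, W), np.float32)
    r, c = top_left
    h, w = depth_crop.shape
    depth_full[r:r + h, c:c + w] = depth_crop
    mask[r:r + h, c:c + w] = 1.0
    return np.dstack([rgb.astype(np.float32) / 255.0, depth_full, mask])  # H x W x 5
```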
8

Hammond, Patrick Douglas. "Deep Synthetic Noise Generation for RGB-D Data Augmentation." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7516.

Full text
Abstract:
Considerable effort has been devoted to finding reliable methods of correcting noisy RGB-D images captured with unreliable depth-sensing technologies. Supervised neural networks have been shown to be capable of RGB-D image correction, but require copious amounts of carefully-corrected ground-truth data to train effectively. Data collection is laborious and time-intensive, especially for large datasets, and generation of ground-truth training data tends to be subject to human error. It might be possible to train an effective method on a relatively smaller dataset using synthetically damaged dep
APA, Harvard, Vancouver, ISO, and other styles
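The motivation in entry 8 is to damage clean depth maps synthetically so that a correction network can be trained without large hand-corrected datasets. A toy damager is sketched below; the rectangular dropouts and noise magnitudes are invented for illustration and are far simpler than the learned synthetic noise the thesis studies.

```python
import numpy as np

def damage_depth(depth, rng=None, n_holes=20, max_hole=15, noise_std=0.01):
    """Create a synthetically damaged copy of a clean depth map:
    random rectangular dropouts plus mild multiplicative Gaussian noise."""
    rng = rng or np.random.default_rng()
    noisy = depth.astype(np.float32) * (1.0 + rng.normal(0, noise_std, depth.shape))
    h, w = depth.shape
    for _ in range(n_holes):
        hh, ww = rng.integers(2, max_hole, size=2)
        y, x = rng.integers(0, h - hh), rng.integers(0, w - ww)
        noisy[y:y + hh, x:x + ww] = 0.0              # simulate missing depth
    return noisy
```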
9

Tu, Chieh-Min (杜介民). "Depth Image Inpainting with RGB-D Camera." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/k4m42a.

Full text
Abstract:
Master's thesis, I-Shou University, Department of Information Engineering, academic year 103 (ROC calendar). Since Microsoft released the inexpensive Kinect sensor as a new natural user interface, stereo imaging has moved from earlier multi-view color image synthesis to the synthesis of color and depth images. However, the captured depth images may lose some depth values, so the stereoscopic effect is often poor. This thesis develops an object-based depth inpainting method based on a Kinect RGB-D camera. First, background differencing, frame differencing, and depth thresholding strategies are used as a basis for segmenting foreground objects from a dynamic
APA, Harvard, Vancouver, ISO, and other styles
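The inpainting pipeline in entry 9 first segments foreground objects by combining background differencing, frame differencing, and depth thresholding. A compact sketch of that segmentation step is given below; the thresholds are placeholders and the thesis's object-based inpainting itself is not reproduced.

```python
import cv2
import numpy as np

def foreground_mask(frame, prev_frame, background, depth,
                    diff_thresh=25, near_mm=500, far_mm=2500):
    """Combine background differencing, frame differencing and a depth band
    to obtain a rough foreground-object mask (placeholder thresholds)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    bg_diff = cv2.absdiff(gray, cv2.cvtColor(background, cv2.COLOR_BGR2GRAY))
    fr_diff = cv2.absdiff(gray, cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY))
    moving = (bg_diff > diff_thresh) | (fr_diff > diff_thresh)
    in_range = (depth > near_mm) & (depth < far_mm)   # keep plausible depths only
    mask = (moving & in_range).astype(np.uint8) * 255
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
```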
10

Lin, Shih-Pi (林士筆). "In-air Handwriting Chinese Character Recognition Base on RGB Image without Depth Information." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/2mhfzk.

Full text
Abstract:
Master's thesis, National Central University, Department of Information Engineering, academic year 107 (ROC calendar). As technology changes rapidly, human-computer interaction (HCI) is no longer limited to the keyboard. Existing handwriting products provide sufficient features to recognize handwriting trajectories in terms of density and stability. For Chinese characters, it is relatively difficult for machines to obtain a stable trajectory compared with English letters and numerals. In the past, in-air hand detection and tracking often relied on devices with depth information; for example, the Kinect uses two infrared cameras to obtain depth information, which makes the devices more expensive. Therefore, th
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "RGB-Depth Image"

1

Pan, Hong, Søren Ingvor Olsen, and Yaping Zhu. "Joint Spatial-Depth Feature Pooling for RGB-D Object Classification." In Image Analysis. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-19665-7_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Shirui, Hamid A. Jalab, and Zhen Dai. "Intrinsic Face Image Decomposition from RGB Images with Depth Cues." In Advances in Visual Informatics. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-34032-2_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Guo, Jinxin, Qingxiang Wang, and Xiaoqiang Ren. "Target Recognition Based on Kinect Combined RGB Image with Depth Image." In Advances in Intelligent Systems and Computing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25128-4_89.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mechal, Chaymae El, Najiba El Amrani El Idrissi, and Mostefa Mesbah. "CNN-Based Obstacle Avoidance Using RGB-Depth Image Fusion." In Lecture Notes in Electrical Engineering. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-6893-4_78.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Petrelli, Alioscia, and Luigi Di Stefano. "Learning to Weight Color and Depth for RGB-D Visual Search." In Image Analysis and Processing - ICIAP 2017. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68560-1_58.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chen, Anran, Yao Zhao, and Chunyu Lin. "RGB Image Guided Depth Hole-Filling Using Bidirectional Attention Mechanism." In Advances in Intelligent Information Hiding and Multimedia Signal Processing. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1053-1_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Farahnakian, Fahimeh, and Jukka Heikkonen. "RGB and Depth Image Fusion for Object Detection Using Deep Learning." In Advances in Intelligent Systems and Computing. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3357-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Khaire, Pushpajit, Javed Imran, and Praveen Kumar. "Human Activity Recognition by Fusion of RGB, Depth, and Skeletal Data." In Proceedings of 2nd International Conference on Computer Vision & Image Processing. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-7895-8_32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Li, Yijin, Xinyang Liu, Wenqi Dong, et al. "DELTAR: Depth Estimation from a Light-Weight ToF Sensor and RGB Image." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19769-7_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kam, Jaewon, Jungeon Kim, Soongjin Kim, Jaesik Park, and Seungyong Lee. "CostDCNet: Cost Volume Based Depth Completion for a Single RGB-D Image." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20086-1_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "RGB-Depth Image"

1

Qiu, Zhouyan, Shang Zeng, Joaquín Martínez Sánchez, and Pedro Arias. "Comparative analysis of image super-resolution: A concurrent study of RGB and depth images." In 2024 International Workshop on the Theory of Computational Sensing and its Applications to Radar, Multimodal Sensing and Imaging (CoSeRa). IEEE, 2024. http://dx.doi.org/10.1109/cosera60846.2024.10720360.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sun, Xiaorong, Qinghua Zeng, Sheng Hong, Yineng Li, and Yan Wang. "Local Linear Fitting Method for Lidar Depth Completion Based on RGB Image Guidance." In 2024 7th International Conference on Robotics, Control and Automation Engineering (RCAE). IEEE, 2024. https://doi.org/10.1109/rcae62637.2024.10834263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Morisset, Maxime, Marc Donias, and Christian Germain. "Principal Curvatures as Pose-Invariant Features of Depth Maps for RGB-D Object Recognition." In 2024 IEEE Thirteenth International Conference on Image Processing Theory, Tools and Applications (IPTA). IEEE, 2024. http://dx.doi.org/10.1109/ipta62886.2024.10755742.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Baban A Erep, Thierry Roland, Lotfi Chaari, Pierre Ele, and Eugene Sobngwi. "ESeNet-D: Efficient Semantic Segmentation for RGB-Depth Food Images." In 2024 IEEE 34th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2024. http://dx.doi.org/10.1109/mlsp58920.2024.10734761.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chaar, Mohamad Mofeed, Jamal Raiyn, and Galia Weidl. "Predicting Depth Maps from Single RGB Images and Addressing Missing Information in Depth Estimation." In 11th International Conference on Vehicle Technology and Intelligent Transport Systems. SCITEPRESS - Science and Technology Publications, 2025. https://doi.org/10.5220/0013365900003941.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yu, Yeh-Wei, Tzu-Kai Wang, Chi-Chung Lau, et al. "Repairing IR depth image with 2D RGB image." In Current Developments in Lens Design and Optical Engineering XIX, edited by R. Barry Johnson, Virendra N. Mahajan, and Simon Thibault. SPIE, 2018. http://dx.doi.org/10.1117/12.2321205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Issaranon, Theerasit, Chuhang Zou, and David Forsyth. "Counterfactual Depth from a Single RGB Image." In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 2019. http://dx.doi.org/10.1109/iccvw.2019.00268.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Wenju, Wenkang Hu, Tianzhen Dong, and Jiantao Qu. "Depth Image Enhancement Algorithm Based on RGB Image Fusion." In 2018 11th International Symposium on Computational Intelligence and Design (ISCID). IEEE, 2018. http://dx.doi.org/10.1109/iscid.2018.10126.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hui, Tak-Wai, and King Ngi Ngan. "Depth enhancement using RGB-D guided filtering." In 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014. http://dx.doi.org/10.1109/icip.2014.7025778.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bai, Jinghui, Jingyu Yang, Xinchen Ye, and Chunping Hou. "Depth refinement for binocular kinect RGB-D cameras." In 2016 Visual Communications and Image Processing (VCIP). IEEE, 2016. http://dx.doi.org/10.1109/vcip.2016.7805545.

Full text
APA, Harvard, Vancouver, ISO, and other styles