A selection of scholarly literature on the topic "RGB-D Image"
Format your reference in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, theses, conference papers, and other scholarly sources on the topic "RGB-D Image".
Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these are available in the source metadata.
Journal articles on the topic "RGB-D Image"
Uddin, Md Kamal, Amran Bhuiyan, and Mahmudul Hasan. "Fusion in Dissimilarity Space Between RGB-D and Skeleton for Person Re-Identification." International Journal of Innovative Technology and Exploring Engineering 10, no. 12 (October 30, 2021): 69–75. http://dx.doi.org/10.35940/ijitee.l9566.10101221.
Li, Hengyu, Hang Liu, Ning Cao, Yan Peng, Shaorong Xie, Jun Luo, and Yu Sun. "Real-time RGB-D image stitching using multiple Kinects for improved field of view." International Journal of Advanced Robotic Systems 14, no. 2 (March 1, 2017): 172988141769556. http://dx.doi.org/10.1177/1729881417695560.
Wu, Yan, Jiqian Li, and Jing Bai. "Multiple Classifiers-Based Feature Fusion for RGB-D Object Recognition." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 05 (February 27, 2017): 1750014. http://dx.doi.org/10.1142/s0218001417500148.
Kitzler, Florian, Norbert Barta, Reinhard W. Neugschwandtner, Andreas Gronauer, and Viktoria Motsch. "WE3DS: An RGB-D Image Dataset for Semantic Segmentation in Agriculture." Sensors 23, no. 5 (March 1, 2023): 2713. http://dx.doi.org/10.3390/s23052713.
Zheng, Huiming, and Wei Gao. "End-to-End RGB-D Image Compression via Exploiting Channel-Modality Redundancy." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (March 24, 2024): 7562–70. http://dx.doi.org/10.1609/aaai.v38i7.28588.
Peroš, Josip, Rinaldo Paar, Vladimir Divić, and Boštjan Kovačić. "Fusion of Laser Scans and Image Data—RGB+D for Structural Health Monitoring of Engineering Structures." Applied Sciences 12, no. 22 (November 19, 2022): 11763. http://dx.doi.org/10.3390/app122211763.
Yan, Zhiqiang, Hongyuan Wang, Qianhao Ning, and Yinxi Lu. "Robust Image Matching Based on Image Feature and Depth Information Fusion." Machines 10, no. 6 (June 8, 2022): 456. http://dx.doi.org/10.3390/machines10060456.
Yuan, Yuan, Zhitong Xiong, and Qi Wang. "ACM: Adaptive Cross-Modal Graph Convolutional Neural Networks for RGB-D Scene Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9176–84. http://dx.doi.org/10.1609/aaai.v33i01.33019176.
Wang, Z., T. Li, L. Pan, and Z. Kang. "SCENE SEMANTIC SEGMENTATION FROM INDOOR RGB-D IMAGES USING ENCODE-DECODER FULLY CONVOLUTIONAL NETWORKS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W7 (September 12, 2017): 397–404. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w7-397-2017.
Kanda, Takuya, Kazuya Miyakawa, Jeonghwang Hayashi, Jun Ohya, Hiroyuki Ogata, Kenji Hashimoto, Xiao Sun, Takashi Matsuzawa, Hiroshi Naito, and Atsuo Takanishi. "Locating Mechanical Switches Using RGB-D Sensor Mounted on a Disaster Response Robot." Electronic Imaging 2020, no. 6 (January 26, 2020): 16–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.6.iriacv-016.
Повний текст джерелаДисертації з теми "RGB-D Image"
Murgia, Julian. "Segmentation d'objets mobiles par fusion RGB-D et invariance colorimétrique." Thesis, Belfort-Montbéliard, 2016. http://www.theses.fr/2016BELF0289/document.
This PhD thesis falls within the scope of video surveillance, and more precisely focuses on the detection of moving objects in image sequences. In many applications, good detection of moving objects is an indispensable prerequisite to any treatment applied to these objects, such as tracking people or cars, counting passengers, detecting dangerous situations in specific environments (level crossings, pedestrian crossings, intersections, etc.), or controlling autonomous vehicles. The reliability of computer-vision-based systems requires robustness against difficult conditions, often caused by lighting (day/night, shadows), weather (rain, wind, snow) and the topology of the observed scene (occlusions). The work detailed in this PhD thesis aims at reducing the impact of illumination conditions by improving the quality of the detection of mobile objects in indoor or outdoor environments and at any time of the day. Thus, we propose three strategies working in combination to improve the detection of moving objects:
i) using colorimetric invariants and/or color spaces that provide invariant properties;
ii) using a passive stereoscopic camera (in outdoor environments) and the active Microsoft Kinect camera (in indoor environments) in order to partially reconstruct the 3D environment, providing an additional dimension (depth information) to the background/foreground subtraction algorithm;
iii) a new fusion algorithm based on fuzzy logic to combine color and depth information with a certain level of uncertainty for pixel classification.
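For readers who want a concrete picture of strategy iii), the sketch below shows one way to fuse color and depth background-subtraction cues with fuzzy memberships. It is an illustrative toy, not the thesis's algorithm: the ramp memberships, the thresholds, and the probabilistic-sum OR operator are all assumptions.

```python
import numpy as np

def fuzzy_foreground_mask(color_diff, depth_diff,
                          c_lo=10.0, c_hi=40.0, d_lo=0.05, d_hi=0.25):
    """Fuse color and depth background-subtraction cues via fuzzy memberships.

    color_diff: per-pixel absolute difference to the color background model.
    depth_diff: per-pixel absolute difference (metres) to the depth model;
                0 marks pixels where the sensor returned no measurement.
    All thresholds are illustrative guesses, not values from the thesis.
    """
    # Membership degrees: linear ramps from background (0) to foreground (1).
    mu_c = np.clip((color_diff - c_lo) / (c_hi - c_lo), 0.0, 1.0)
    mu_d = np.clip((depth_diff - d_lo) / (d_hi - d_lo), 0.0, 1.0)

    depth_valid = depth_diff > 0  # the depth cue is unusable on invalid pixels

    # Fuzzy OR (probabilistic sum) where both cues exist; color-only fallback.
    fused = np.where(depth_valid, mu_c + mu_d - mu_c * mu_d, mu_c)
    return fused > 0.5  # defuzzify to a binary foreground mask
```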
Tykkälä, Tommi. "Suivi de caméra image en temps réel base et cartographie de l'environnement." PhD thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00933813.
Lai, Po Kong. "Immersive Dynamic Scenes for Virtual Reality from a Single RGB-D Camera." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39663.
Повний текст джерелаKadkhodamohammadi, Abdolrahim. "3D detection and pose estimation of medical staff in operating rooms using RGB-D images." Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAD047/document.
In this thesis, we address the two problems of person detection and pose estimation in Operating Rooms (ORs), which are key ingredients in the development of surgical assistance applications. We perceive the OR using compact RGB-D cameras that can be conveniently integrated in the room. These sensors provide complementary information about the scene, which enables us to develop methods that can cope with numerous challenges present in the OR, e.g. clutter, textureless surfaces and occlusions. We present novel part-based approaches that take advantage of depth, multi-view and temporal information to construct robust human detection and pose estimation models. Evaluation is performed on new single- and multi-view datasets recorded in operating rooms. We demonstrate very promising results and show that our approaches outperform state-of-the-art methods on this challenging data acquired during real surgeries.
Meilland, Maxime. "Cartographie RGB-D dense pour la localisation visuelle temps-réel et la navigation autonome." PhD thesis, Ecole Nationale Supérieure des Mines de Paris, 2012. http://tel.archives-ouvertes.fr/tel-00686803.
Повний текст джерелаVillota, Juan Carlos Perafán. "Adaptive registration using 2D and 3D features for indoor scene reconstruction." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-17042017-090901/.
The alignment of pairs of point clouds is an important task in building 3D maps of environments. The combination of 2D local features with depth information provided by RGB-D cameras is often used to improve such alignments. However, in indoor environments with low illumination or little visual texture, methods using only 2D local features are not particularly robust. Under these conditions, 2D features are hard to detect, leading to misalignment between consecutive frame pairs. Using local 3D features can be a solution, since such features are extracted directly from 3D points and are resistant to variations in visual texture and illumination. As such variations in real indoor scenes are inevitable, this thesis presents a new system developed to improve the alignment between pairs of frames using an adaptive combination of sparse 2D and 3D features. This combination is based on the levels of geometric structure and visual texture contained in each scene. The system was tested on RGB-D datasets, including videos with unconstrained camera motion and natural changes in illumination. Experimental results show that our proposal outperforms methods that use 2D or 3D features separately, improving the accuracy of scene alignment in real indoor environments.
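As a rough illustration of the adaptive idea, the sketch below derives a per-frame weight for 2D versus 3D feature correspondences from simple texture and structure measures; the gradient-based measures, the reference levels, and the weighting formula are illustrative assumptions, not the estimators used in the thesis.

```python
import numpy as np

def adaptive_feature_weight(gray, depth, texture_ref=0.02, struct_ref=0.05):
    """Return w2d in [0, 1]: how much to trust 2D features for one RGB-D frame.

    Use (1 - w2d) for the 3D-feature correspondences. gray is an 8-bit
    grayscale image, depth a metric depth map; both reference levels are
    made-up values for illustration.
    """
    # Visual texture level: mean image gradient magnitude.
    gy, gx = np.gradient(gray.astype(np.float32) / 255.0)
    texture = float(np.mean(np.hypot(gx, gy)))

    # Geometric structure level: mean depth gradient magnitude.
    dy, dx = np.gradient(depth.astype(np.float32))
    structure = float(np.mean(np.hypot(dx, dy)))

    # Normalize each cue against its reference level and weight the 2D
    # features by their relative richness; epsilon avoids division by zero.
    t = min(texture / texture_ref, 1.0)
    s = min(structure / struct_ref, 1.0)
    return t / (t + s + 1e-6)
```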
Shi, Yangyu. "Infrared Imaging Decision Aid Tools for Diagnosis of Necrotizing Enterocolitis." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40714.
Повний текст джерелаBaban, A. Erep Thierry Roland. "Contribution au développement d'un système intelligent de quantification des nutriments dans les repas d'Afrique subsaharienne." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP100.
Malnutrition, including under- and overnutrition, is a global health challenge affecting billions of people. It impacts all organ systems and is a significant risk factor for noncommunicable diseases such as cardiovascular diseases, diabetes, and some cancers. Assessing food intake is crucial for preventing malnutrition but remains challenging. Traditional methods for dietary assessment are labor-intensive and prone to bias. Advancements in AI have made Vision-Based Dietary Assessment (VBDA) a promising solution for automatically analyzing food images to estimate portions and nutrition. However, food image segmentation in VBDA faces challenges due to food's non-rigid structure, high intra-class variation (where the same dish can look very different), inter-class resemblance (where different foods appear similar) and scarcity of publicly available datasets. Almost all food segmentation research has focused on Asian and Western foods, with no datasets for African cuisines. However, African dishes often involve mixed food classes, making accurate segmentation challenging. Additionally, research has largely focused on RGB images, which provide color and texture but may lack geometric detail. To address this, RGB-D segmentation combines depth data with RGB images. Depth images provide crucial geometric details that enhance RGB data, improve object discrimination, and are robust to factors like illumination and fog. Despite its success in other fields, RGB-D segmentation for food is underexplored due to difficulties in collecting food depth images. This thesis makes key contributions by developing new deep learning models for RGB (mid-DeepLabv3+) and RGB-D (ESeNet-D) image segmentation and introducing the first food segmentation datasets focused on African food images. Mid-DeepLabv3+ is based on DeepLabv3+, featuring a simplified ResNet backbone with an added skip layer (middle layer) in the decoder and a SimAM attention mechanism. This model offers an optimal balance between performance and efficiency, matching DeepLabv3+'s performance while cutting the computational load in half. ESeNet-D consists of two encoder branches using EfficientNetV2 as the backbone, with a fusion block for multi-scale integration and a decoder employing self-calibrated convolution and learned interpolation for precise segmentation. ESeNet-D outperforms many RGB and RGB-D benchmark models while having fewer parameters and FLOPs. Our experiments show that, when properly integrated, depth information can significantly improve food segmentation accuracy. We also present two new datasets: AfricaFoodSeg for "food/non-food" segmentation with 3,067 images (2,525 for training, 542 for validation), and CamerFood focusing on Cameroonian cuisine. The CamerFood datasets include CamerFood10, with 1,422 images from ten food classes, and CamerFood15, an enhanced version with 15 food classes, 1,684 training images, and 514 validation images. Finally, we address the challenge of scarce depth data in RGB-D food segmentation by demonstrating that Monocular Depth Estimation (MDE) models can aid in generating effective depth maps for RGB-D datasets.
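The two-branch encoder-fusion pattern that ESeNet-D builds on can be sketched in a few lines of PyTorch. The toy model below only illustrates encoding RGB and depth separately and fusing the feature maps; the layer sizes and the 1x1-convolution fusion are assumptions, and it implements neither EfficientNetV2 backbones nor self-calibrated convolutions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRGBDFusionSeg(nn.Module):
    """Toy two-branch RGB-D segmentation network; not ESeNet-D itself."""

    def __init__(self, n_classes: int = 15):
        super().__init__()
        def branch(in_ch):  # a small strided conv encoder per modality
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
        self.rgb_enc = branch(3)    # encodes the color image
        self.depth_enc = branch(1)  # encodes the depth map
        self.fuse = nn.Conv2d(128, 64, 1)  # 1x1 conv merges both modalities
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, rgb, depth):
        feats = torch.cat([self.rgb_enc(rgb), self.depth_enc(depth)], dim=1)
        logits = self.head(F.relu(self.fuse(feats)))
        # Upsample back to input resolution for per-pixel class scores.
        return F.interpolate(logits, size=rgb.shape[2:], mode="bilinear",
                             align_corners=False)

# Example: out = TinyRGBDFusionSeg()(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
```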
Hasnat, Md Abul. "Unsupervised 3D image clustering and extension to joint color and depth segmentation." Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4013/document.
Access to 3D images at a reasonable frame rate is widespread now, thanks to recent advances in low-cost depth sensors as well as efficient methods to compute 3D from 2D images. As a consequence, there is high demand for enhancing the capability of existing computer vision applications by incorporating 3D information. Indeed, numerous studies have demonstrated that the accuracy of different tasks increases when 3D information is included as an additional feature. However, for the task of indoor scene analysis and segmentation, several important issues remain, such as: (a) how can the 3D information itself be exploited? and (b) what is the best way to fuse color and 3D in an unsupervised manner? In this thesis, we address these issues and propose novel unsupervised methods for 3D image clustering and joint color and depth image segmentation. To this aim, we consider image normals as the prominent feature from the 3D image and cluster them with methods based on finite statistical mixture models. We consider the Bregman Soft Clustering method to ensure computationally efficient clustering. Moreover, we exploit several probability distributions from directional statistics, such as the von Mises-Fisher distribution and the Watson distribution. By combining these, we propose novel Model-Based Clustering methods. We empirically validate these methods using synthetic data and then demonstrate their application to 3D/depth image analysis. Afterward, we extend these methods to segment synchronized 3D and color images, also called RGB-D images. To this aim, we first propose a statistical image generation model for RGB-D images. Then, we propose a novel RGB-D segmentation method using joint color-spatial-axial clustering and a statistical planar region merging method. Results show that the proposed method is comparable with state-of-the-art methods and requires less computation time. Moreover, it opens interesting perspectives for fusing color and geometry in an unsupervised manner. We believe that the methods proposed in this thesis are equally applicable and extendable for clustering different types of data, such as speech, gene expressions, etc. Moreover, they can be used for complex tasks, such as joint image-speech data analysis.
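To give a flavor of clustering surface normals as directional data, here is a minimal spherical k-means sketch. Spherical k-means can be read as the hard-assignment limit of a von Mises-Fisher mixture with shared concentration, so it is a simplification of, not a substitute for, the Bregman soft clustering developed in the thesis; for axial data (the Watson case) each normal n would additionally be identified with -n.

```python
import numpy as np

def spherical_kmeans(normals, k=4, iters=20, seed=0):
    """Cluster unit surface normals on the sphere by cosine similarity.

    normals: (N, 3) array of unit vectors, e.g. estimated from a depth image.
    Returns (labels, centers); k, iters and the init scheme are illustrative.
    """
    normals = np.asarray(normals, dtype=np.float64)
    rng = np.random.default_rng(seed)
    centers = normals[rng.choice(len(normals), size=k, replace=False)]
    for _ in range(iters):
        # Assign each normal to the center with the largest dot product.
        labels = np.argmax(normals @ centers.T, axis=1)
        for j in range(k):
            members = normals[labels == j]
            if len(members):
                m = members.sum(axis=0)
                centers[j] = m / np.linalg.norm(m)  # vMF mean-direction MLE
    return labels, centers
```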
Řehánek, Martin. "Detekce objektů pomocí Kinectu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236602.
Books on the topic "RGB-D Image"
Rosin, Paul L., Yu-Kun Lai, Ling Shao, and Yonghuai Liu, eds. RGB-D Image Analysis and Processing. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3.
Rosin, Paul L., Yonghuai Liu, Ling Shao, and Yu-Kun Lai. RGB-D Image Analysis and Processing. Springer International Publishing AG, 2020.
Kohli, Pushmeet, Zhengyou Zhang, Ling Shao, and Jungong Han. Computer Vision and Machine Learning with RGB-D Sensors. Springer, 2014.
Kohli, Pushmeet, Zhengyou Zhang, Ling Shao, and Jungong Han. Computer Vision and Machine Learning with RGB-D Sensors. Springer, 2016.
Computer Vision and Machine Learning with RGB-D Sensors. Springer, 2014.
Book chapters on the topic "RGB-D Image"
Civera, Javier, and Seong Hun Lee. "RGB-D Odometry and SLAM." In RGB-D Image Analysis and Processing, 117–44. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_6.
Zollhöfer, Michael. "Commodity RGB-D Sensors: Data Acquisition." In RGB-D Image Analysis and Processing, 3–13. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_1.
Malleson, Charles, Jean-Yves Guillemaut, and Adrian Hilton. "3D Reconstruction from RGB-D Data." In RGB-D Image Analysis and Processing, 87–115. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_5.
Ren, Tongwei, and Ao Zhang. "RGB-D Salient Object Detection: A Review." In RGB-D Image Analysis and Processing, 203–20. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_9.
Cong, Runmin, Hao Chen, Hongyuan Zhu, and Huazhu Fu. "Foreground Detection and Segmentation in RGB-D Images." In RGB-D Image Analysis and Processing, 221–41. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_10.
Sahin, Caner, Guillermo Garcia-Hernando, Juil Sock, and Tae-Kyun Kim. "Instance- and Category-Level 6D Object Pose Estimation." In RGB-D Image Analysis and Processing, 243–65. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_11.
Zhang, Song-Hai, and Yu-Kun Lai. "Geometric and Semantic Modeling from RGB-D Data." In RGB-D Image Analysis and Processing, 267–82. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_12.
Schwarz, Max, and Sven Behnke. "Semantic RGB-D Perception for Cognitive Service Robots." In RGB-D Image Analysis and Processing, 285–307. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_13.
Spinsante, Susanna. "RGB-D Sensors and Signal Processing for Fall Detection." In RGB-D Image Analysis and Processing, 309–34. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_14.
Moyà-Alcover, Gabriel, Ines Ayed, Javier Varona, and Antoni Jaume-i-Capó. "RGB-D Interactive Systems on Serious Games for Motor Rehabilitation Therapy and Therapeutic Measurements." In RGB-D Image Analysis and Processing, 335–53. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_15.
Conference papers on the topic "RGB-D Image"
Teng, Qianqian, and Xianbo He. "RGB-D Image Modeling Method Based on Transformer: RDT." In 2024 3rd International Conference on Artificial Intelligence, Internet of Things and Cloud Computing Technology (AIoTC), 386–89. IEEE, 2024. http://dx.doi.org/10.1109/aiotc63215.2024.10748282.
Wang, Kexuan, Chenhua Liu, Huiguang Wei, Li Jing, and Rongfu Zhang. "RFNET: Refined Fusion Three-Branch RGB-D Salient Object Detection Network." In 2024 IEEE International Conference on Image Processing (ICIP), 741–46. IEEE, 2024. http://dx.doi.org/10.1109/icip51287.2024.10647308.
Fouad, Islam I., Sherine Rady, and Mostafa G. M. Mostafa. "Efficient image segmentation of RGB-D images." In 2017 12th International Conference on Computer Engineering and Systems (ICCES). IEEE, 2017. http://dx.doi.org/10.1109/icces.2017.8275331.
Li, Shijie, Rong Li, and Juergen Gall. "Semantic RGB-D Image Synthesis." In 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2023. http://dx.doi.org/10.1109/iccvw60793.2023.00101.
Zhang, Xiaoxiong, Sajid Javed, Ahmad Obeid, Jorge Dias, and Naoufel Werghi. "Gender Recognition on RGB-D Image." In 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020. http://dx.doi.org/10.1109/icip40778.2020.9191068.
Zhang, Shaopeng, Ming Zhong, Gang Zeng, and Rui Gan. "Joining geometric and RGB features for RGB-D semantic segmentation." In The Second International Conference on Image, Video Processing and Artificial Intelligence, edited by Ruidan Su. SPIE, 2019. http://dx.doi.org/10.1117/12.2541645.
Li, Benchao, Wanhua Li, Yongyi Tang, Jian-Fang Hu, and Wei-Shi Zheng. "GL-PAM RGB-D Gesture Recognition." In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451157.
Shibata, Toshihiro, Yuji Akai, and Ryo Matsuoka. "Reflection Removal Using RGB-D Images." In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451639.
Valognes, Julien, Maria A. Amer, and Niloufar Salehi Dastjerdi. "Effective keyframe extraction from RGB and RGB-D video sequences." In 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA). IEEE, 2017. http://dx.doi.org/10.1109/ipta.2017.8310120.
Hui, Tak-Wai, and King Ngi Ngan. "Depth enhancement using RGB-D guided filtering." In 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014. http://dx.doi.org/10.1109/icip.2014.7025778.