Academic literature on the topic '3D object registration'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic '3D object registration.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "3D object registration"

1

Benfield, Kate J., Dylan E. Burruel, and Trevor J. Lujan. "Guidelines for Accurate Multi-Temporal Model Registration of 3D Scanned Objects." Journal of Imaging 9, no. 2 (2023): 43. http://dx.doi.org/10.3390/jimaging9020043.

Full text
Abstract:
Changes in object morphology can be quantified using 3D optical scanning to generate 3D models of an object at different time points. This process requires registration techniques that align target and reference 3D models using mapping functions based on common object features that are unaltered over time. The goal of this study was to determine guidelines when selecting these localized features to ensure robust and accurate 3D model registration. For this study, an object of interest (tibia bone replica) was 3D scanned at multiple time points, and the acquired 3D models were aligned using a simple cubic registration block attached to the object. The size of the registration block and the number of planar block surfaces selected to calculate the mapping functions used for 3D model registration were varied. Registration error was then calculated as the average linear surface variation between the target and reference tibial plateau surfaces. We obtained very low target registration errors when selecting block features with an area equivalent to at least 4% of the scanning field of view. Additionally, we found that at least two orthogonal surfaces should be selected to minimize registration error. Therefore, when registering 3D models to measure multi-temporal morphological change (e.g., mechanical wear), we recommend selecting multiplanar features that account for at least 4% of the scanning field of view. For the first time, this study has provided guidelines for selecting localized object features that can provide accurate 3D model registration for 3D scanned objects.
APA, Harvard, Vancouver, ISO, and other styles
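For readers who want to reproduce the kind of error metric described in the Benfield et al. abstract above, the following is a minimal sketch (not the authors' code) of how an average linear surface deviation between a registered target surface and a reference surface can be computed from nearest-neighbour distances; the array names are illustrative.

import numpy as np
from scipy.spatial import cKDTree

def mean_surface_deviation(reference_pts, target_pts):
    """Average nearest-neighbour distance (same units as the input, e.g. mm)
    between two registered (N, 3) point sets sampled from the same surface."""
    tree = cKDTree(reference_pts)        # reference surface points
    dists, _ = tree.query(target_pts)    # closest reference point per target point
    return float(dists.mean())

# Hypothetical usage: both arrays are already expressed in the same frame after
# registration, e.g. err = mean_surface_deviation(reference_plateau, registered_plateau)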
2

Hendriksen, Luuk Antonie, Andrea Sciacchitano, and Fulvio Scarano. "Object Registration Techniques For 3D Particle Tracking." Proceedings of the International Symposium on the Application of Laser and Imaging Techniques to Fluid Mechanics 21 (July 8, 2024): 1–31. http://dx.doi.org/10.55037/lxlaser.21st.113.

Full text
Abstract:
Image-based 3D particle tracking is currently the most widely used technique for volumetric velocity measurements. Inspecting the flow field around an object is, however, hampered by the object itself, which obstructs the view across it. In this study, the problem of measurement limitations due to such occlusions is addressed. The present work builds upon the recent proposal of Wieneke and Rockstroh (2024), whereby the information from the occluded lines of sight can be incorporated into the particle tracking algorithm. The approach, however, necessitates methods that accurately evaluate the shape and position of the object within the measurement domain. Methods of object marking and the subsequent 3D registration of a digital object model (CAD) are discussed. For the latter, the Iterative Closest Point (ICP) registration algorithm is adopted. The accuracy of object registration is evaluated by means of experiments, where marking approaches that include physical and optically projected markers are discussed and compared. Three objects with growing levels of geometrical complexity are considered: a cube, a truncated wing and a scaled model of a sport cyclist. The registered CAD representations of the physical objects are included in aerodynamic experiments, and the flow field is measured by means of large-scale particle tracking using helium-filled soap bubbles. Three operating regimes are studied and compared: monolithic, partitioned and object-aware (OA) monolithic. The results indicate that object registration enables a correct reconstruction of particle tracers and strongly reduces the domain clipping typical of the monolithic approach. Furthermore, the dynamical use of all views in the OA monolithic method offers clear benefits compared to the partitioned approach, namely a lower occurrence of ghost particles. Finally, the combined visualization of the object and the surrounding flow pattern offers means of insightful data inspection and interpretation, along with posing a basis for PIV data assimilation at the fluid-solid interface.
APA, Harvard, Vancouver, ISO, and other styles
3

Sileo, Monica, Domenico Daniele Bloisi, and Francesco Pierri. "Grasping of Solid Industrial Objects Using 3D Registration." Machines 11, no. 3 (2023): 396. http://dx.doi.org/10.3390/machines11030396.

Full text
Abstract:
Robots allow industrial manufacturers to speed up production and to increase the product’s quality. This paper deals with the grasping of partially known industrial objects in an unstructured environment. The proposed approach consists of two main steps: (1) the generation of an object model, using multiple point clouds acquired by a depth camera from different points of view; (2) the alignment of the generated model with the current view of the object in order to detect the grasping pose. More specifically, the model is obtained by merging different point clouds with a registration procedure based on the iterative closest point (ICP) algorithm. Then, a grasping pose is placed on the model. Such a procedure only needs to be executed once, and it works even in the presence of objects only partially known or when a CAD model is not available. Finally, the current object view is aligned to the model and the final grasping pose is estimated. Quantitative experiments using a robot manipulator and three different real-world industrial objects were conducted to demonstrate the effectiveness of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
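As a side note to the Sileo et al. entry above: once a rigid transform aligning the stored object model with the current depth view has been estimated (for example with ICP), transferring the grasping pose annotated on the model into the camera frame reduces to a composition of homogeneous transforms. The sketch below is an assumption about that final step, not the paper's implementation, and the matrices are purely illustrative.

import numpy as np

def transfer_grasp(T_view_model, T_model_grasp):
    """Both arguments are 4x4 homogeneous transforms; the result expresses the
    grasp pose in the current camera/view frame."""
    return T_view_model @ T_model_grasp

# Illustrative example: a 90-degree rotation about z plus a translation.
T_view_model = np.array([[0.0, -1.0, 0.0, 0.10],
                         [1.0,  0.0, 0.0, 0.05],
                         [0.0,  0.0, 1.0, 0.30],
                         [0.0,  0.0, 0.0, 1.00]])
T_model_grasp = np.eye(4)
T_model_grasp[:3, 3] = [0.00, 0.02, 0.05]   # grasp point defined on the model
print(transfer_grasp(T_view_model, T_model_grasp))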
4

Saiti, Evdokia, Antonios Danelakis, and Theoharis Theoharis. "Cross-time registration of 3D point clouds." Computers and Graphics 99 (July 21, 2021): 139–52. https://doi.org/10.1016/j.cag.2021.07.005.

Full text
Abstract:
Registration is a ubiquitous operation in visual computing and constitutes an important pre-processing step for operations such as 3D object reconstruction, retrieval and recognition. Particularly in cultural heritage (CH) applications, registration techniques are essential for the digitization and restoration pipelines. Cross-time registration is a special case where the objects to be registered are instances of the same object after undergoing processes such as erosion or restoration. Traditional registration techniques are inadequate to address this problem with the required high accuracy for detecting minute changes; some are extremely slow. A deep learning registration framework for cross-time registration is proposed which uses the DeepGMR network in combination with a novel down-sampling scheme for cross-time registration. A dataset especially designed for cross-time registration is presented (called ECHO) and an extensive evaluation of state-of-the-art methods is conducted for the challenging case of cross-time registration.
APA, Harvard, Vancouver, ISO, and other styles
5

Kornuta, Tomasz, and Maciej Stefańczyk. "Modreg: A Modular Framework for RGB-D Image Acquisition and 3D Object Model Registration." Foundations of Computing and Decision Sciences 42, no. 3 (2017): 183–201. http://dx.doi.org/10.1515/fcds-2017-0009.

Full text
Abstract:
RGB-D sensors have become a standard in robotic applications requiring object recognition, such as object grasping and manipulation. A typical object recognition system relies on matching features extracted from RGB-D images retrieved from the robot sensors with the features of the object models. In this paper we present ModReg: a system for registration of 3D models of objects. The system consists of modular software associated with a multi-camera setup supplemented with an additional pattern projector, used for the registration of high-resolution RGB-D images. The objects are placed on a fiducial board with two dot patterns enabling extraction of masks of the placed objects and estimation of their initial poses. The acquired dense point clouds constituting subsequent object views undergo pairwise registration and are finally optimized with a graph-based technique derived from SLAM. The combination of all those elements resulted in a system able to generate consistent 3D models of objects.
APA, Harvard, Vancouver, ISO, and other styles
6

Widyastuti, Ratri, Asep Yusup Saptari, and Arif Rahman. "Registration Strategy of Handheld Scanner (HS) and Terrestrial Laser Scanner Integration for Building Utility Mapping." IOP Conference Series: Earth and Environmental Science 1047, no. 1 (2022): 012012. http://dx.doi.org/10.1088/1755-1315/1047/1/012012.

Full text
Abstract:
Currently, a 3D model that represents the existing condition of a building is needed, especially for building management. When performing 3D mapping of building utilities such as pipelines, Terrestrial Laser Scanner (TLS) technology could not cover objects in the ceiling, so a Handheld Scanner (HS) was used to complete the scanned objects. The purpose of this article is to carry out the mapping process that integrates HS and TLS data for building pipelines. The integration of two sensors with different resolutions and scan coverage requires a separate strategy to generate 3D models of pipeline objects in a building. Specific targets and a 3D transformation method were used to register the two sets of point clouds obtained by the two technologies. A target object such as a sphere, which is usually used as a tie point between scan results in the registration process, cannot be recorded by the HS. The specific targets were therefore pasted on the pipelines so that they could be captured by the HS, and they become the tie points in the registration process. Meanwhile, registration using the Iterative Closest Point (ICP) algorithm cannot be carried out because the two scan results do not meet the overlap percentage standard, so a 3D transformation was used in the registration process instead. The resulting registration accuracy is 0.062 m.
APA, Harvard, Vancouver, ISO, and other styles
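The target-based 3D transformation mentioned in the Widyastuti et al. abstract above is typically computed from corresponding target coordinates in the two point clouds. The following sketch shows the standard Kabsch/SVD least-squares solution for a rigid transform under that assumption; it is not the authors' software, and the variable names are illustrative.

import numpy as np

def rigid_transform_from_targets(src, dst):
    """src, dst: (N, 3) corresponding target centres in the HS and TLS clouds.
    Returns rotation R (3x3) and translation t (3,) with dst ~ src @ R.T + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Registration accuracy can then be reported as the RMS residual over the targets:
# residuals = np.linalg.norm((src @ R.T + t) - dst, axis=1)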
7

Jerbić, Bojan, Filip Šuligoj, Marko Švaco, and Bojan Šekoranja. "Robot Assisted 3D Point Cloud Object Registration." Procedia Engineering 100 (2015): 847–52. http://dx.doi.org/10.1016/j.proeng.2015.01.440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Saiti, Evdokia, and Theoharis Theoharis. "Multimodal registration across 3D point clouds and CT-volumes." Computers and Graphics 106 (June 25, 2022): 259–66. https://doi.org/10.1016/j.cag.2022.06.012.

Full text
Abstract:
Multimodal registration is a challenging problem in visual computing, commonly faced during medical image-guided interventions, data fusion and 3D object retrieval. The main challenge of multimodal registration is finding accurate correspondence between modalities, since different modalities do not exhibit the same characteristics. This paper explores how the coherence of different modalities can be utilized for the challenging task of 3D multimodal registration. A novel deep learning multimodal registration framework is proposed by introducing a siamese deep learning architecture, especially designed for aligning and fusing modalities of different structural and physical principles. The cross-modal attention blocks lead the network to establish correspondences between features of different modalities. The proposed framework focuses on the alignment of 3D point clouds and the micro-CT 3D volumes of the same object. A multimodal dataset consisting of real micro-CT scans and their synthetically generated 3D models (point clouds) is presented and utilized for evaluating our methodology.
APA, Harvard, Vancouver, ISO, and other styles
9

Grzelka, Kornelia, Karolina Pargieła, Aleksandra Jasińska, Artur Warchoł, and Jarosław Bydłosz. "Registration of Objects for 3D Cadastre: An Integrated Approach." Land 13, no. 12 (2024): 2070. https://doi.org/10.3390/land13122070.

Full text
Abstract:
3D cadastral issues have been the subject of scientific research for more than 20 years. However, the initial registration of objects in 3D cadastres remains a significant challenge. The purpose of this study is to verify whether it is possible to register objects for future 3D cadastres based on data from various sources such as laser scanning measurements and technical documentation. The research object is a building in Krakow (Poland). For objects with easy access, such as common parts, parking lots, and outer parts of the building, laser scanning was applied. The premises (apartments) are private properties without free access; thus, the technical documentation of the building was used. A 3D model was built in a BIM environment (Autodesk Revit) based on a terrestrial laser scanning point cloud and technical drawings, but only for the geometrical part of the building. To illustrate the legal relationships among the 3D cadastral objects in the building, a UML model was also created. The results prove that the registration of objects for future 3D cadastres using both methods is possible. However, further research is required.
APA, Harvard, Vancouver, ISO, and other styles
10

Chua, Chin Seng, and Ray Jarvis. "3D free-form surface registration and object recognition." International Journal of Computer Vision 17, no. 1 (1996): 77–99. http://dx.doi.org/10.1007/bf00127819.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "3D object registration"

1

Thornton, Kenneth B. "Accurate image-based 3D object registration and reconstruction /." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/5910.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Grankvist, Ola. "Recognition and Registration of 3D Models in Depth Sensor Data." Thesis, Linköpings universitet, Datorseende, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-131452.

Full text
Abstract:
Object recognition is the art of localizing predefined objects in image sensor data. In this thesis a depth sensor was used, which has the benefit that the 3D pose of the object can be estimated. This has applications in e.g. automatic manufacturing, where a robot picks up parts or tools with a robot arm. This master thesis presents an implementation and an evaluation of a system for object recognition of 3D models in depth sensor data. The system uses several depth images rendered from a 3D model and describes their characteristics using so-called feature descriptors. These are then matched with the descriptors of a scene depth image to find the 3D pose of the model in the scene. The pose estimate is then refined iteratively using a registration method. Different descriptors and registration methods are investigated. One of the main contributions of this thesis is that it compares two different types of descriptors, local and global, a comparison that has seen little attention in research. This is done for two different scene scenarios, and for different types of objects and depth sensors. The evaluation shows that global descriptors are fast and robust for objects with a smooth visible surface whereas the local descriptors perform better for larger objects in clutter and occlusion. This thesis also presents a novel global descriptor, the CESF, which is observed to be more robust than other global descriptors. As for the registration methods, ICP is shown to perform the most accurately and ICP point-to-plane the most robustly.
APA, Harvard, Vancouver, ISO, and other styles
3

Amplianitis, Konstantinos. "3D real time object recognition." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17717.

Full text
Abstract:
Object recognition is a natural process of the human brain performed in the visual cortex, relying on a binocular depth perception system that renders a three-dimensional representation of the objects in a scene. Hitherto, computer and software systems have been used to simulate the perception of three-dimensional environments with the aid of sensors that capture real-time images. In the process, such images are used as input data for further analysis and development of algorithms, an essential ingredient for simulating the complexity of human vision, so as to achieve scene interpretation for object recognition, similar to the way the human brain perceives it. The rapid pace of technological advancement in hardware and software is continuously bringing the machine-based process for object recognition nearer to the human vision prototype. The key in this field is the development of algorithms that achieve robust scene interpretation. A lot of significant effort has been successfully carried out over the years in 2D object recognition, as opposed to 3D. It is therefore within the context and scope of this dissertation to contribute towards the enhancement of 3D object recognition: a better interpretation and understanding of reality and the relationship between objects in a scene. Through the use and application of low-cost commodity sensors, such as the Microsoft Kinect, RGB and depth data of a scene have been retrieved and manipulated in order to generate human-like visual perception data. The goal herein is to show how RGB and depth information can be utilised in order to develop a new class of 3D object recognition algorithms, analogous to the perception processed by the human brain.
APA, Harvard, Vancouver, ISO, and other styles
4

Manikhi, Omid, and Behnam Adlkhast. "A 3D OBJECT SCANNER: An approach using Microsoft Kinect." Thesis, Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-24418.

Full text
Abstract:
In this thesis report, an approach to use Microsoft Kinect to scan an object and provide a 3D model for further processing has been proposed. The additional required hardware to rotate the object and fully expose it to the sensor, the drivers and SDKs used, and the implemented software are discussed. It is explained how the acquired data is stored, and an efficient storage and mapping method requiring no special hardware and memory is introduced. The solution proposed circumvents the point cloud registration task based on the fact that the transformation from one frame to the next is known with extremely high precision. Next, a method to merge the acquired 3D data from all over the object into a single noise-free model is proposed using Spherical Transformation, and a few experiments and their results are demonstrated and discussed.
APA, Harvard, Vancouver, ISO, and other styles
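The idea in the Manikhi and Adlkhast abstract above, that point cloud registration can be skipped when the frame-to-frame transformation is known exactly, can be illustrated with a toy turntable merge. The sketch below is an assumption for illustration only (it supposes the clouds are expressed in a frame whose z-axis coincides with the turntable axis), not the thesis code.

import numpy as np

def rotation_about_z(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def merge_turntable_frames(frames, step_deg):
    """frames: list of (N_i, 3) clouds captured after successive turntable steps
    of step_deg; returns a single merged cloud in the frame of the first scan."""
    merged = []
    for k, pts in enumerate(frames):
        R = rotation_about_z(np.deg2rad(-k * step_deg))  # undo the k-th rotation
        merged.append(pts @ R.T)
    return np.vstack(merged)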
5

Ghorpade, Vijaya Kumar. "3D Semantic SLAM of Indoor Environment with Single Depth Sensor." Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAC085/document.

Full text
Abstract:
Intelligent autonomous actions in an ordinary environment by a mobile robot require maps. A map holds the spatial information about the environment and gives the 3D geometry of the robot's surroundings, not only to avoid collision with complex obstacles, but also for self-localization and task planning. However, in the future, service and personal robots will prevail, and the need arises for the robot to interact with the environment in addition to localizing and navigating. This interaction demands that next-generation robots understand and interpret their environment and perform tasks in a human-centric form. A simple map of the environment is far from sufficient for robots to co-exist with and assist humans in the future. Human beings effortlessly make maps and interact with the environment; these are trivial tasks for them. However, for robots these seemingly simple tasks are complex conundrums. Layering semantic information on regular geometric maps is the leap that helps an ordinary mobile robot become a more intelligent autonomous system. A semantic map augments a general map with information about entities, i.e., objects, functionalities, or events, that are located in the space. The inclusion of semantics in the map enhances the robot's spatial knowledge representation and improves its performance in managing complex tasks and human interaction. Many approaches have been proposed to address the semantic SLAM problem with laser scanners and RGB-D time-of-flight sensors, but the field is still in its nascent phase. In this thesis, an endeavour to solve semantic SLAM using a time-of-flight sensor that gives only depth information is proposed. Time-of-flight cameras have dramatically changed the field of range imaging and surpassed traditional scanners in terms of rapid data acquisition, simplicity and price, and it is believed that these depth sensors will be ubiquitous in future robotic applications. Starting with a brief motivation for a semantic stance in ordinary maps in the first chapter, the state-of-the-art methods are discussed in the second chapter. Before using the camera for data acquisition, its noise characteristics were studied meticulously and the camera properly calibrated. The novel noise-filtering algorithm developed in the process helps to obtain clean data for better scan matching and SLAM. The quality of the SLAM process is evaluated using a context-based similarity score metric, which has been specifically designed for the type of acquisition parameters and the data that have been used. Abstracting a semantic layer on the point cloud reconstructed from SLAM is done in two stages. In large-scale, higher-level semantic interpretation, the prominent surfaces in the indoor environment are extracted and recognized; these include surfaces such as walls, doors, ceilings and clutter. In indoor single-scene, object-level semantic interpretation, a single 2.5D scene from the camera is parsed and its objects and surfaces are recognized. Object recognition is achieved using a novel shape signature based on the probability distribution of the most stable and repeatable 3D keypoints. The classification of prominent surfaces and single-scene semantic interpretation are done using supervised machine learning and deep learning systems. To this end, the object dataset and SLAM data are also made publicly available for academic research.
APA, Harvard, Vancouver, ISO, and other styles
6

Mahiddine, Amine. "Recalage hétérogène pour la reconstruction 3D de scènes sous-marines." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM4027/document.

Full text
Abstract:
The survey and 3D reconstruction of underwater scenes are becoming more indispensable every day given our growing interest in studying the seabed. Most of the existing works in this area are based on the use of acoustic sensors, the image often being only illustrative. The objective of this thesis is to develop techniques for the fusion of heterogeneous data from a photogrammetric system and an acoustic system. The presented work is organized in three parts. The first is devoted to the processing of 2D data to improve the colors of underwater images, in order to increase the repeatability of the feature descriptors at each 2D point. Then, we propose a system for creating mosaics, in order to visualize the scene. In the second part, a 3D reconstruction method from an unordered set of several images is proposed. The computed 3D data are then merged with data from the acoustic system in order to reconstruct the underwater site. In the last part of this thesis, we propose an original method of 3D registration that is distinguished by the nature of the descriptor extracted at each point. The descriptor that we propose is invariant to isometric transformations (rotation, translation) and addresses the problem of multi-resolution. We validate our approach with a study on synthetic and real data, where we show the limits of the registration methods existing in the literature. Finally, we propose an application of our method to the recognition of 3D objects.
APA, Harvard, Vancouver, ISO, and other styles
7

Ingberg, Benjamin. "Registration of 2D Objects in 3D data." Thesis, Linköpings universitet, Datorseende, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119338.

Full text
Abstract:
In the field of industrial automation large savings can be realized if the position and orientation of an object is known. Knowledge about an object's position and orientation can be used by advanced robotic systems to be able to work with complex items. Specifically, 2D objects are a big enough sub-domain to motivate special attention. Traditionally this problem has been solved with large mechanical systems that force the objects into specific configurations. Besides being expensive, taking up a lot of space and having great difficulty handling fragile items, these mechanical systems have to be constructed for each particular type of object. This thesis explores the possibility of using registration algorithms from computer vision based on 3D data to find flat objects. While systems for locating 3D objects already exist, they have issues with locating essentially flat objects since their positioning is mostly a function of their contour. The thesis consists of a brief examination of 2D algorithms and their extension to 3D as well as results from the most suitable algorithm.
APA, Harvard, Vancouver, ISO, and other styles
8

Yaqub, Mohammad. "Automatic measurements of femoral characteristics using 3D ultrasound images in utero." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:857a12d2-ffe3-4fa6-89c3-0d8319ee2fbb.

Full text
Abstract:
Vitamin D is very important for endochondral ossification and it is commonly insufficient during pregnancy (Javaid et al., 2006). Insufficiency of vitamin D during pregnancy predicts bone mass and hence predicts adult osteoporosis (Javaid et al., 2006). The relationship between maternal vitamin D and manually measured fetal biometry has been studied (Mahon et al., 2009). However, manual fetal biometry especially volumetric measurements are subjective, time-consuming and possibly irreproducible. Computerised measurements can overcome or at least reduce such problems. This thesis concerns the development and evaluation of novel methods to do this. This thesis makes three contributions. Firstly, we have developed a novel technique based on the Random Forests (RF) classifier to segment and measure several fetal femoral characteristics from 3D ultrasound volumes automatically. We propose a feature selection step in the training stage to eliminate irrelevant features and utilise the "good" ones. We also develop a weighted voting mechanism to weight tree probabilistic decisions in the RF classifier. We show that the new RF classifier is more accurate than the classic method (Yaqub et al., 2010b, Yaqub et al., 2011b). We achieved 83% segmentation precision using the proposed technique compared to manually segmented volumes. The proposed segmentation technique was also validated on segmenting adult brain structures in MR images and it showed excellent accuracy. The second contribution is a wavelet-based image fusion technique to enhance the quality of the fetal femur and to compensate for missing information in one volume due to signal attenuation and acoustic shadowing. We show that using image fusion to increase the image quality of ultrasound images of bony structures leads to a more accurate and reproducible assessment and measurement qualitatively and quantitatively (Yaqub et al., 2010a, Yaqub et al., 2011a). The third contribution concerns the analysis of data from a cohort study of 450 fetal femoral ultrasound volumes (18-21 week gestation). The femur length, cross-sectional areas, volume, splaying indices and angles were automatically measured using the RF method. The relationship between these measurements and the fetal gestational age and maternal vitamin D was investigated. Segmentation of a fetal femur is fast (2.3s/volume), thanks to the parallel implementation. The femur volume, length, splaying index were found to significantly correlate with fetal gestational age. Furthermore, significant correlations between the automatic measurements and 10 nmol increment in maternal 25OHD during second trimester were found.
APA, Harvard, Vancouver, ISO, and other styles
9

Ye, Mao. "MONOCULAR POSE ESTIMATION AND SHAPE RECONSTRUCTION OF QUASI-ARTICULATED OBJECTS WITH CONSUMER DEPTH CAMERA." UKnowledge, 2014. http://uknowledge.uky.edu/cs_etds/25.

Full text
Abstract:
Quasi-articulated objects, such as human beings, are among the most commonly seen objects in our daily lives. Extensive research has been dedicated to 3D shape reconstruction and motion analysis for this type of object for decades. A major motivation is their wide range of applications, such as in entertainment, surveillance and health care. Most existing studies relied on one or more regular video cameras. In recent years, commodity depth sensors have become more and more widely available. The geometric measurements delivered by the depth sensors provide significantly valuable information for these tasks. In this dissertation, we propose three algorithms for monocular pose estimation and shape reconstruction of quasi-articulated objects using a single commodity depth sensor. These three algorithms achieve shape reconstruction with increasing levels of granularity and personalization. We then further develop a method for highly detailed shape reconstruction based on our pose estimation techniques. Our first algorithm takes advantage of a motion database acquired with an active marker-based motion capture system. This method combines pose detection through nearest neighbor search with pose refinement via non-rigid point cloud registration. It is capable of accommodating different body sizes and achieves more than twice the accuracy of a previous state of the art on a publicly available dataset. The above algorithm performs frame-by-frame estimation and is therefore less prone to tracking failure. Nonetheless, it does not guarantee temporal consistency of both the skeletal structure and the shape, which could be problematic for some applications. To address this problem, we develop a real-time model-based approach for quasi-articulated pose and 3D shape estimation based on the Iterative Closest Point (ICP) principle with several novel constraints that are critical for the monocular scenario. In this algorithm, we further propose a novel method for automatic body size estimation that enables it to accommodate different subjects. Due to its local search nature, the ICP-based method can be trapped in local minima in the case of some complex and fast motions. To address this issue, we explore the potential of using a statistical model for soft point correspondence association. Towards this end, we propose a unified framework based on the Gaussian Mixture Model for joint pose and shape estimation of quasi-articulated objects. This method achieves state-of-the-art performance on various publicly available datasets. Based on our pose estimation techniques, we then develop a novel framework that achieves highly detailed shape reconstruction by only requiring the user to move naturally in front of a single depth sensor. Our experiments demonstrate reconstructed shapes with rich geometric details for various subjects with different apparel. Last but not least, we explore the applicability of our method in two real-world applications. First, we combine our ICP-based method with cloth simulation techniques for virtual try-on. Our system delivers the first promising 3D-based virtual clothing system. Second, we explore the possibility of extending our pose estimation algorithms to assist physical therapists in identifying their patients' movement dysfunctions that are related to injuries. Our preliminary experiments have demonstrated promising results in comparison with the gold-standard active marker-based commercial system.
Throughout the dissertation, we develop various state-of-the-art algorithms for pose estimation and shape reconstruction of quasi-articulated objects by leveraging the geometric information from depth sensors. We also demonstrate their great potentials for different real-world applications.
APA, Harvard, Vancouver, ISO, and other styles
10

Mohamed, Waleed A. "Medical Image Registration and 3D Object Matching." Thesis, 2012. http://spectrum.library.concordia.ca/973698/1/thesis.pdf.

Full text
Abstract:
The great challenge in image registration and 3D object matching is to devise computationally efficient algorithms for aligning images so that their details overlap accurately and for retrieving similar shapes from large databases of 3D models. The first problem addressed in this thesis is medical image registration, which we formulate as an optimization problem in the information-theoretic framework. We introduce a viable and practical image registration method by maximizing an entropic divergence measure using a modified simultaneous perturbation stochastic approximation algorithm. The feasibility of the proposed image registration approach is demonstrated through extensive experiments. The rest of the thesis is devoted to a joint exploitation of the geometry and topology of 3D objects for as parsimonious a representation of models as possible and its subsequent application to 3D object representation, matching, and retrieval problems. More precisely, we introduce a skeletal graph for topological 3D shape representation using Morse theory. The proposed skeletonization algorithm encodes a 3D shape into a topological Reeb graph using a normalized mixture distance function. We also propose a novel graph matching algorithm that compares the relative shortest paths between the skeleton endpoints. Moreover, we describe a skeletal graph for 3D object matching and retrieval. This skeleton is constructed from the second eigenfunction of the Laplace-Beltrami operator defined on the surface of the 3D object. Using the generalized eigenvalue decomposition, a matrix computational framework based on the finite element method is presented to compute the spectrum of the Laplace-Beltrami operator. Illustrative experiments on two standard 3D shape benchmarks are provided to demonstrate the feasibility and the much improved performance of the proposed skeletal graphs as shape descriptors for 3D object matching and retrieval.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "3D object registration"

1

Lee, Junesuk, Eung-su Kim, and Soon-Yong Park. "3D Non-rigid Registration of Deformable Object Using GPU." In Pattern Recognition and Image Analysis. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31332-6_53.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mielke, Tonia, Fabian Joeres, and Christian Hansen. "Natural 3D Object Manipulation for Interactive Laparoscopic Augmented Reality Registration." In Virtual, Augmented and Mixed Reality: Design and Development. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05939-1_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Collignon, André, Dirk Vandermeulen, Paul Suetens, and Guy Marchal. "An Object Oriented Tool for 3D Multimodality Surface-based Image Registration." In Computer Assisted Radiology / Computergestützte Radiologie. Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/978-3-642-49351-5_93.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tan, Qimeng, Delun Li, Congcong Bao, Ming Chen, and Yun Zhang. "A Coarse Registration Algorithm Between 3D Point Cloud and CAD Model of Non-cooperative Object for Space Manipulator." In Intelligent Robotics and Applications. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27541-9_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Burel, Gilles, Hugues Henocq, and Jean-Yves Catros. "Registration of 3D Objects Using Linear Algebra." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/978-3-540-49197-2_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ohkubo, Ryo, Ryo Kurazume, and Katsushi Ikeuchi. "Simultaneous Registration of 2D Images onto 3D Models for Texture Mapping." In Digitally Archiving Cultural Objects. Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-75807_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Masuda, Tomohito, Yuichiro Hirota, Ko Nishino, and Katsushi Ikeuchi. "Simultaneous Determination of Registration and Deformation Parameters among 3D Range Images." In Digitally Archiving Cultural Objects. Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-75807_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mahmoud, Nader, Stephane A. Nicolau, Arabi Keshk, Mostafa A. Ahmad, Luc Soler, and Jacques Marescaux. "Fast 3D Structure From Motion with Missing Points from Registration of Partial Reconstructions." In Articulated Motion and Deformable Objects. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31567-1_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Liu, Alan, Elizabeth Bullitt, and Stephen M. Pizer. "3D/2D registration via skeletal near projective invariance in tubular objects." In Medical Image Computing and Computer-Assisted Intervention — MICCAI’98. Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0056284.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yamaguchi, Takuma, Hiroshi Kawasaki, Ryo Furukawa, and Toshihiro Nakayama. "Super-Resolution of Multiple Moving 3D Objects with Pixel-Based Registration." In Computer Vision – ACCV 2009. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12297-2_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "3D object registration"

1

Jin, David, Sushrut Karmalkar, Harry Zhang, and Luca Carlone. "Multi-Model 3D Registration: Finding Multiple Moving Objects in Cluttered Point Clouds." In 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024. http://dx.doi.org/10.1109/icra57147.2024.10610926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Preteux, Francoise J., Sorin Curila, and Marius Malciu. "3D object registration in image sequences." In Photonics West '98 Electronic Imaging, edited by Edward R. Dougherty and Jaakko T. Astola. SPIE, 1998. http://dx.doi.org/10.1117/12.304599.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lam, Joseph, and Michael Greenspan. "3D Object Recognition by Surface Registration of Interest Segments." In 2013 International Conference on 3D Vision (3DV). IEEE, 2013. http://dx.doi.org/10.1109/3dv.2013.34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rodriguez, Diego, Florian Huber, and Sven Behnke. "Category-level Part-based 3D Object Non-rigid Registration." In 17th International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2022. http://dx.doi.org/10.5220/0010761800003124.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kim, Sunghan, Mingyu Kim, Jeongtae Lee, et al. "Registration of 3D Point Clouds for Ship Block Measurement." In SNAME 5th World Maritime Technology Conference. SNAME, 2015. http://dx.doi.org/10.5957/wmtc-2015-252.

Full text
Abstract:
In this paper, a software system for registration of point clouds is developed. The system consists of two modules for registration and user interaction. The registration module contains functions for manual and automatic registration. The manual method allows a user to select feature points or planes from the point clouds manually. The selected planes or features are then processed to establish the correspondence between the point clouds, and registration is performed to obtain one large point cloud. The automatic registration uses sphere targets. Sphere targets are attached to an object of interest. A scanner measures the object as well as the targets to produce point clouds, from which the targets are extracted using shape-intrinsic properties. Then correspondence between the point clouds is obtained using the targets, and the registration is performed. The user interaction module provides a GUI environment that allows a user to navigate point clouds, compute various features, visualize point clouds, and select/unselect points interactively, together with a point-processing unit containing functions for filtering, estimation of geometric features, and various data structures for managing point clouds of large size. The developed system is tested with actual measurement data of various blocks in a shipyard.
APA, Harvard, Vancouver, ISO, and other styles
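For the sphere-target extraction step described in the Kim et al. abstract above, a common building block is a linear least-squares sphere fit; the fitted centres from two scans then serve as corresponding points for a rigid registration such as the Kabsch solution sketched earlier on this page. The code below is a generic sketch of that building block, not the authors' system.

import numpy as np

def fit_sphere(points):
    """points: (N, 3) samples on a sphere target. Rearranging |p - c|^2 = r^2
    into 2 p.c + (r^2 - |c|^2) = |p|^2 gives a linear system in the unknowns.
    Returns (centre (3,), radius)."""
    A = np.hstack([2.0 * points, np.ones((points.shape[0], 1))])
    b = (points ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = x[:3]
    radius = float(np.sqrt(x[3] + centre @ centre))
    return centre, radius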
6

Huynh, Brandon, Jason Orlosky, and Tobias Hollerer. "Semantic Labeling and Object Registration for Augmented Reality Language Learning." In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 2019. http://dx.doi.org/10.1109/vr.2019.8797804.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lim, Ser Nam, Li Guan, Shubao Liu, and Xingwei Yang. "Automatic Registration of Smooth Object Image to 3D CAD Model for Industrial Inspection Applications." In 2013 International Conference on 3D Vision (3DV). IEEE, 2013. http://dx.doi.org/10.1109/3dv.2013.19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Choy, Christopher Bongsoo, Michael Stark, Sam Corbett-Davies, and Silvio Savarese. "Enriching object detection with 2D-3D registration and continuous viewpoint estimation." In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2015. http://dx.doi.org/10.1109/cvpr.2015.7298866.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Song, Limei, Hongwei An, and Hui Xiong. "Color 3D measurement and digital stereo flag registration for large object." In 2010 10th International Conference on Signal Processing (ICSP 2010). IEEE, 2010. http://dx.doi.org/10.1109/icosp.2010.5655925.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Dang, Zheng, and Mathieu Salzmann. "AutoSynth: Learning to Generate 3D Training Data for Object Point Cloud Registration." In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.00827.

Full text
APA, Harvard, Vancouver, ISO, and other styles
