Dissertations / Theses on the topic '2D images'
Consult the top 50 dissertations / theses for your research on the topic '2D images.'
Truong, Michael Vi Nguyen. "2D-3D registration of cardiac images." Thesis, King's College London (University of London), 2014. https://kclpure.kcl.ac.uk/portal/en/theses/2d3d-registration-of-cardiac-images(afef93e6-228c-4bc7-aab0-94f1e1ecf006).html.
Jones, Jonathan-Lee. "2D and 3D segmentation of medical images." Thesis, Swansea University, 2015. https://cronfa.swan.ac.uk/Record/cronfa42504.
Guarnera, Giuseppe Claudio. "Shape Modeling and Description from 2D Images." Doctoral thesis, Università di Catania, 2013. http://hdl.handle.net/10761/1365.
Sdiri, Bilel. "2D/3D Endoscopic image enhancement and analysis for video guided surgery." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCD030.
Minimally invasive surgery has made remarkable progress in recent decades and has become a very popular diagnostic and treatment tool, especially with rapid medical and technological advances leading to innovative new tools such as robotic surgical systems and wireless capsule endoscopy. Owing to the intrinsic characteristics of the endoscopic environment, including dynamic illumination conditions and moist tissues with high reflectance, endoscopic images often suffer from several degradations, such as large dark regions with low contrast and sharpness, and artifacts such as specular reflections and blur. These challenges, together with the introduction of three-dimensional (3D) imaging surgical systems, have raised the question of endoscopic image quality, which needs to be enhanced. The enhancement process aims either to provide surgeons and doctors with better visual feedback or to improve the outcome of subsequent tasks such as feature extraction for 3D organ reconstruction and registration. This thesis addresses the problem of endoscopic image quality enhancement by proposing novel enhancement techniques for both two-dimensional (2D) and stereo (i.e. 3D) endoscopic images. In the context of automatic tissue abnormality detection and classification for gastro-intestinal tract disease diagnosis, we propose a pre-processing enhancement method for 2D endoscopic images and wireless capsule endoscopy that improves both local and global contrast. The proposed method exposes subtle inner structures and tissue details, which improves the feature detection process and the automatic classification rate of neoplastic, non-neoplastic and inflammatory tissues. Inspired by the binocular vision attention features of the human visual system, we then propose an adaptive enhancement technique for stereo endoscopic images combining depth and edginess information. The adaptability of the proposed method consists in adjusting the enhancement to both local image activity and depth level within the scene, while controlling the inter-view difference using a binocular perception model. A subjective experiment was conducted to evaluate the performance of the proposed algorithm in terms of visual quality, with scores from both expert and non-expert observers demonstrating the efficiency of our 3D contrast enhancement technique. In the same scope, a further stereo endoscopic image enhancement work resorts to the wavelet domain to target the enhancement towards specific image components, using the multiscale representation and its efficient space-frequency localization property. The proposed joint enhancement methods rely on cross-view processing and depth information, for both the wavelet decomposition and the enhancement steps, to exploit inter-view redundancies together with perceptual human visual system properties related to contrast sensitivity and binocular combination and rivalry. The visual quality of the processed images and objective assessment metrics demonstrate the efficiency of our joint stereo enhancement in adjusting the image illumination in both dark and saturated regions and emphasizing local image details such as fine veins and micro vessels, compared to other endoscopic enhancement techniques for 2D and 3D images.
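The kind of local/global contrast enhancement described above can be illustrated, very roughly, with an off-the-shelf CLAHE pass on the lightness channel; this is a common baseline, not one of the methods proposed in the thesis, and the file names are placeholders.

```python
import cv2

frame = cv2.imread("endoscopy_frame.png")            # placeholder path
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2Lab)
l, a, b = cv2.split(lab)

# Contrast-limited adaptive histogram equalisation on the lightness channel.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_enhanced = clahe.apply(l)

enhanced = cv2.cvtColor(cv2.merge((l_enhanced, a, b)), cv2.COLOR_Lab2BGR)
cv2.imwrite("endoscopy_frame_enhanced.png", enhanced)
```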
Meng, Ting, and Yating Yu. "Deconvolution algorithms of 2D Transmission Electron Microscopy images." Thesis, KTH, Optimeringslära och systemteori, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-110096.
Huang, Hui. "Efficient reconstruction of 2D images and 3D surfaces." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2821.
Henrichsen, Arne. "3D reconstruction and camera calibration from 2D images." Master's thesis, University of Cape Town, 2000. http://hdl.handle.net/11427/9725.
Full textA 3D reconstruction technique from stereo images is presented that needs minimal intervention from the user. The reconstruction problem consists of three steps, each of which is equivalent to the estimation of a specific geometry group. The first step is the estimation of the epipolar geometry that exists between the stereo image pair, a process involving feature matching in both images. The second step estimates the affine geometry, a process of finding a special plane in projective space by means of vanishing points. Camera calibration forms part of the third step in obtaining the metric geometry, from which it is possible to obtain a 3D model of the scene. The advantage of this system is that the stereo images do not need to be calibrated in order to obtain a reconstruction. Results for both the camera calibration and reconstruction are presented to verify that it is possible to obtain a 3D model directly from features in the images.
Agerskov, Niels, and Gabriel Carrizo. "Application for Deriving 2D Images from 3D CT Image Data for Research Purposes." Thesis, KTH, Skolan för teknik och hälsa (STH), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-190881.
At Karolinska University Hospital, Huddinge, there has long been a wish to perform templating of hip prostheses using data from computed tomography (CT) examinations. This has so far not been possible, because the program used for templating hip prostheses only accepts traditional plain radiographs. The aim of this project was therefore to create a software application that can be used to generate 2D images for prosthesis templating from CT data. The application was built mainly with the Python libraries NumPy and The Visualization Toolkit (VTK), together with the user-interface library PyQt4. It includes a graphical user interface and methods for optimising the images for templating purposes. The application works, but the image quality needs to be evaluated with a larger sample group.
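A much-simplified sketch of the core operation, collapsing a CT volume into a radiograph-like 2D image with NumPy, is shown below; it is not the authors' application, and the synthetic volume, Hounsfield shift and projection axis are assumptions for the example.

```python
import numpy as np

def radiograph_like_projection(volume_hu: np.ndarray, axis: int = 1) -> np.ndarray:
    """Mean-intensity projection of a CT volume given in Hounsfield units."""
    # Shift so that air (-1000 HU) contributes roughly zero attenuation.
    attenuation = np.clip(volume_hu + 1000.0, 0.0, None)
    projection = attenuation.mean(axis=axis)
    # Normalise to [0, 1] for display or export as an 8-bit image.
    return (projection - projection.min()) / (projection.max() - projection.min() + 1e-9)

volume = np.random.randint(-1000, 1500, size=(64, 128, 128)).astype(np.float32)
image2d = radiograph_like_projection(volume, axis=1)
print(image2d.shape)   # (64, 128)
```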
Srinivasan, Nirmala. "Cross-Correlation Of Biomedical Images Using Two Dimensional Discrete Hermite Functions." University of Akron / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=akron1341866987.
Bowden, Nathan Charles. "Camera based texture mapping: 3D applications for 2D images." Texas A&M University, 2005. http://hdl.handle.net/1969.1/2407.
Chaudhary, Priyanka. "Spheroid Detection in 2D Images Using Circular Hough Transform." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_theses/9.
Dandu, Sai Venkata Satya Siva Kumar, and Sujit Kadimisetti. "2D Spectral Subtraction for Noise Suppression in Fingerprint Images." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13848.
Full textLe, Van Linh. "Automatic landmarking for 2D biological images : image processing with and without deep learning methods." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0238.
Landmarks are used in applications from different domains such as biomedicine and biology. They are one of the data types used in many kinds of analysis: for example, they serve not only to measure the form of an object but also to determine the similarity between two objects. In biology, landmarks are used to analyse inter-organism variation; however, supplying landmarks is very laborious and they are most often provided manually. In recent years, several methods have been proposed to predict landmarks automatically, but difficulties remain because these methods focus on specific data. This thesis addresses the automatic determination of landmarks on biological images, more specifically on two-dimensional images of beetles. In our research, we collaborated with biologists to build a dataset comprising images of 293 beetles. For each beetle in this dataset, five images corresponding to five parts were taken into account: head, body, pronotum, and left and right mandibles. Along with each image, a set of landmarks was manually provided by biologists. As a first step, we applied a method previously used on fly wings to our dataset, with the aim of testing the suitability of image processing techniques for our problem. Secondly, we developed a method consisting of several stages to automatically provide the landmarks on the images. These first two steps were carried out on the mandible images, which are considered well suited to image processing methods. Thirdly, we turned to the other, more complex parts of the beetles. For this we relied on deep learning: we designed a new convolutional neural network model, named EB-Net, to predict the landmarks on the remaining images. In addition, we proposed a new procedure to augment the number of images in our dataset, whose limited size was the main obstacle to applying deep learning. Finally, to improve the quality of the predicted coordinates, we employed transfer learning, another deep learning technique: EB-Net was first trained on a public facial keypoints dataset and then fine-tuned on the beetle images. The obtained results were discussed with biologists, who confirmed that the quality of the predicted landmarks is statistically good enough to replace manual landmarks for most morphometric analyses.
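For illustration, a generic CNN that regresses landmark coordinates can be written in a few lines of Keras; this is not EB-Net, and the input size and number of landmarks are assumed values.

```python
from tensorflow.keras import layers, models

NUM_LANDMARKS = 8            # assumed number of landmarks per image
INPUT_SHAPE = (192, 256, 1)  # assumed grayscale input size

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_LANDMARKS * 2),   # one (x, y) pair per landmark
])
model.compile(optimizer="adam", loss="mse")   # regression on coordinates
model.summary()
```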
Grandi, Jerônimo Gustavo. "Multidimensional similarity search for 2D-3D medical data correlation and fusion." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/104133.
Images of the inner anatomy are essential in clinical practice, and establishing a correlation between them is an important procedure for diagnosis and treatment. In this thesis, we propose an approach to correlate within-modality 2D and 3D data from ordinary acquisition protocols based solely on the pixel/voxel information. The work was divided into two development phases. First, we explored the similarity problem between medical images from the perspective of image quality assessment. This led to the development of a two-step technique that settles the compromise between the processing speed and precision of two known approaches. We evaluated the quality and applicability of the two-step technique and, in the second phase, extended the method to use similarity analysis to find, given an arbitrary slice image (2D), the location of this slice within the volume data (3D). The solution minimizes the virtually infinite number of possible cross-section orientations and uses optimizations to reduce the computational workload and output accurate results. The match is displayed in a volumetric three-dimensional visualization fusing the 3D data with the 2D slice. An experimental analysis demonstrated that, despite the computational complexity of the algorithm, the use of severe data sampling allows a good compromise between performance and accuracy, even with low-gradient-intensity datasets.
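The slice-in-volume idea can be sketched as a similarity search; the toy example below only scans axis-aligned slices with normalised cross-correlation on synthetic data, whereas the thesis handles arbitrary slice orientations and uses optimizations.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalised cross-correlation between two images of the same size."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

volume = np.random.rand(40, 128, 128)                      # synthetic 3D data
query = volume[17] + 0.05 * np.random.rand(128, 128)       # noisy copy of slice 17

scores = [ncc(query, volume[z]) for z in range(volume.shape[0])]
print("best matching slice:", int(np.argmax(scores)))      # expected: 17
```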
Armande, Nasser. "Caracterisation de reseaux fins dans les images 2d et 3d applications : images satellites et medicales." Paris 11, 1997. http://www.theses.fr/1997PA112094.
Allouch, Yair. "Multi scale geometric segmentation on 2D and 3D Digital Images." [Beer Sheva]: Ben Gurion University of the Negev, 2007. http://aranne5.lib.ad.bgu.ac.il/others/AlloucheYair.pdf.
Gadsby, David. "Object recognition for threat detection from 2D X-ray images." Thesis, Manchester Metropolitan University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.493851.
Dowell, Rachel J. (Rachel Jean). "Registration of 2D ultrasound images in preparation for 3D reconstruction." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10181.
Cheng, Yuan, 1971. "3D reconstruction from 2D images and applications to cell cytoskeleton." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/88870.
Full textIncludes bibliographical references (leaves 121-129).
Approaches to achieve three dimensional (3D) reconstruction from 2D images can be grouped into two categories: computer-vision-based reconstruction and tomographic reconstruction. By exploring both the differences and connections between these two types of reconstruction, the thesis attempts to develop a new technique that can be applied to 3D reconstruction of biological structures. Specific attention is given to the reconstruction of the cell cytoskeleton from electron microscope images. The thesis is composed of two parts. The first part studies computer-vision-based reconstruction methods that extract 3D information from geometric relationship among images. First, a multiple-feature-based stereo reconstruction algorithm that recovers the 3D structure of an object from two images is presented. A volumetric reconstruction method is then developed by extending the algorithm to multiple images. The method integrates a sequence of 3D reconstruction from different stereo pairs. It achieves a globally optimized reconstruction by evaluating certainty values of each stereo reconstruction. This method is tuned and applied to 3D reconstruction of the cell cytoskeleton. Feasibility, reliability and flexibility of the method are explored.
The second part of the thesis focuses on a special tomographic reconstruction, discrete tomography, where the object to be reconstructed is composed of a discrete set of materials each with uniform values. A Bayesian labeling process is proposed as a framework for discrete tomography. The process uses an expectation-maximization (EM) algorithm with which the reconstruction is obtained efficiently. Results demonstrate that the proposed algorithm achieves high reconstruction quality even with a small number of projections. An interesting relationship between discrete tomography and conventional tomography is also derived, showing that discrete tomography is a more generalized form of tomography and conventional tomography is only a special case of such generalization.
Mertzanidou, T. "Automatic correspondence between 2D and 3D images of the breast." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1362435/.
Ngo, Hoai Diem Phuc. "Rigid transformations on 2D digital images : combinatorial and topological analysis." Thesis, Paris Est, 2013. http://www.theses.fr/2013PEST1091/document.
In this thesis, we study rigid transformations in the context of computer imagery. In particular, we develop a fully discrete framework for handling such transformations. Rigid transformations, initially defined in the continuous domain, are involved in a wide range of digital image processing applications. In this context, the induced digital rigid transformations present different geometrical and topological properties with respect to their continuous analogues. In order to overcome the issues raised by these differences, we propose to formulate rigid transformations on digital images in a fully discrete framework. In this framework, Euclidean rigid transformations producing the same digital rigid transformation are put in the same equivalence class. Moreover, the relationship between these classes can be modeled as a graph structure. We prove that this graph has polynomial space complexity with respect to the size of the considered image, and presents useful structural properties. In particular, it allows us to generate incrementally all digital rigid transformations without numerical approximation. This structure constitutes a theoretical tool for investigating the relationships between geometry and topology in the context of digital images. It is also interesting from a methodological point of view, as we illustrate by its use for assessing the topological behavior of images under rigid transformations.
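A digital rigid transformation of the kind studied here is simply a Euclidean motion followed by rounding back to the pixel grid, as the small illustration below shows; it is not code from the thesis, and the angle and translation values are arbitrary.

```python
import numpy as np

def digital_rigid_transform(points, angle, tx, ty):
    """Map integer pixel coordinates through a rigid motion, then round back
    to the grid (nearest-neighbour digitisation)."""
    c, s = np.cos(angle), np.sin(angle)
    rotation = np.array([[c, -s], [s, c]])
    return np.rint(points @ rotation.T + np.array([tx, ty])).astype(int)

pixels = np.array([[x, y] for x in range(4) for y in range(4)])
print(digital_rigid_transform(pixels, angle=np.pi / 6, tx=0.3, ty=-0.2))
```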
Phan, Tan Binh. "On the 3D hollow organ cartography using 2D endoscopic images." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0135.
Structure-from-motion (SfM) algorithms represent an efficient means of constructing extended 3D surfaces using images of a scene acquired from different viewpoints. SfM methods simultaneously determine the camera motion and a 3D point cloud lying on the surfaces to be recovered. Classical SfM algorithms use feature point detection and matching methods to track homologous points across the image sequences, each point track corresponding to a 3D point to be reconstructed. The SfM algorithms exploit the correspondences between homologous points to recover the 3D scene structure and the successive camera poses in an arbitrary world coordinate system. Various state-of-the-art SfM algorithms can efficiently reconstruct different types of scenes, under the condition that the images include enough texture or structure. However, most of the existing solutions are inappropriate, or at least not optimal, when the sequences of images have few or no textures. This thesis proposes two dense optical flow (DOF)-based SfM solutions to reconstruct complex scenes using images with few textures and acquired under changing illumination conditions. It is notably shown how accurate DOF fields can be used optimally thanks to an image selection strategy which both maximizes the number and size of homologous point sets and minimizes the errors in homologous point localization. The accuracy of the proposed 3D cartography methods is assessed on phantoms with known dimensions. The robustness and the interest of the proposed methods are demonstrated on various complex medical scenes using a constant algorithm parameter set. The proposed solutions reconstructed organs seen in different medical examinations (the epithelial surface of the inner stomach wall, the inner epithelial bladder surface, and the skin surface in dermatology) and various imaging modalities (white light for all examinations, green-blue light in gastroscopy and fluorescence in cystoscopy).
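One building block of such a DOF-based pipeline, dense optical flow giving per-pixel correspondences from which a relative camera pose is recovered, can be sketched with OpenCV; the camera intrinsics and frame paths below are assumptions, and this is not the authors' implementation.

```python
import cv2
import numpy as np

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow: one displacement vector per pixel.
flow = cv2.calcOpticalFlowFarneback(img1, img2, None, 0.5, 3, 21, 3, 5, 1.2, 0)

h, w = img1.shape
ys, xs = np.mgrid[0:h:8, 0:w:8]                             # subsample the field
pts1 = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(np.float32)
pts2 = pts1 + flow[ys, xs].reshape(-1, 2)

K = np.array([[800.0, 0, w / 2], [0, 800.0, h / 2], [0, 0, 1]])  # assumed intrinsics
E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("relative rotation:\n", R, "\ntranslation direction:", t.ravel())
```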
Zhang, Yan. "Feature-based automatic registration of images with 2D and 3D models." Thesis, University of Central Lancashire, 2006. http://clok.uclan.ac.uk/21603/.
Lu, Ping. "Rotation Invariant Registration of 2D Aerial Images Using Local Phase Correlation." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-199588.
Mastin, Dana Andrew. "Statistical methods for 2D-3D registration of optical and LIDAR images." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/55123.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (p. 121-123).
Fusion of 3D laser radar (LIDAR) imagery and aerial optical imagery is an efficient method for constructing 3D virtual reality models. One difficult aspect of creating such models is registering the optical image with the LIDAR point cloud, which is a camera pose estimation problem. We propose a novel application of mutual information registration which exploits statistical dependencies in urban scenes, using variables such as LIDAR elevation, LIDAR probability of detection (pdet), and optical luminance. We employ the well known downhill simplex optimization to infer camera pose parameters. Utilization of OpenGL and graphics hardware in the optimization process yields registration times on the order of seconds. Using an initial registration comparable to GPS/INS accuracy, we demonstrate the utility of our algorithms with a collection of urban images. Our analysis begins with three basic methods for measuring mutual information. We demonstrate the utility of the mutual information measures with a series of probing experiments and registration tests. We improve the basic algorithms with a novel application of foliage detection, where the use of only non-foliage points improves registration reliability significantly. Finally, we show how the use of an existing registered optical image can be used in conjunction with foliage detection to achieve even more reliable registration.
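The core mechanism, mutual information computed from a joint histogram and maximised with a downhill simplex (Nelder-Mead) search, can be illustrated with a toy 2D example; the sketch below only optimises a translation on synthetic images, not the full camera pose handled in the thesis.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def mutual_information(a, b, bins=32):
    """Mutual information of two images estimated from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

fixed = np.random.rand(128, 128)
moving = nd_shift(fixed, (3.5, -2.0), order=1)          # known misalignment

def cost(params):                                        # negative MI to minimise
    return -mutual_information(fixed, nd_shift(moving, params, order=1))

res = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead")   # downhill simplex
print("recovered shift:", res.x)                             # close to (-3.5, 2.0)
```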
Qiu, Xuchong. "2D and 3D Geometric Attributes Estimation in Images via deep learning." Thesis, Marne-la-vallée, ENPC, 2021. http://www.theses.fr/2021ENPC0005.
The visual perception of 2D and 3D geometric attributes (e.g. translation, rotation, spatial size) is important in robotic applications. It helps a robotic system build knowledge about its surrounding environment and can serve as input for downstream tasks such as motion planning and physical interaction with objects. The main goal of this thesis is to automatically detect the positions and poses of objects of interest for robotic manipulation tasks. In particular, we are interested in the low-level task of estimating occlusion relationships to discriminate between different objects, and in the high-level tasks of visual object tracking and object pose estimation. The first focus is tracking the object of interest with correct locations and sizes in a given video. We first study systematically the tracking framework based on the discriminative correlation filter (DCF) and propose to leverage semantic information in two tracking stages: the visual feature encoding stage and the target localization stage. Our experiments demonstrate that the involvement of semantics improves the performance of both localization and size estimation in our DCF-based tracking framework. We also analyse failure cases. The second focus is using object shape information to improve object 6D pose estimation and to refine object poses. We propose to estimate the 2D projections of object 3D surface points with deep models to recover object 6D poses. Our results show that the proposed method benefits from the large number of 3D-to-2D point correspondences and achieves better performance. As a second part, we study the constraints of existing object pose refinement methods and develop a pose refinement method for objects in the wild. Our experiments demonstrate that our models, trained on either real data or generated synthetic data, can refine pose estimates for objects in the wild, even though these objects are not seen during training. The third focus is studying geometric occlusion in single images to better discriminate objects in the scene. We first formalize a definition of geometric occlusion and propose a method to automatically generate high-quality occlusion annotations. Then we propose a new occlusion relationship formulation (abbnom) and the corresponding inference method. Experiments on occlusion reasoning benchmarks demonstrate the superiority of the proposed formulation and method. To recover accurate depth discontinuities, we also propose a depth map refinement method and a single-stage monocular depth estimation method. All the methods that we propose leverage the versatility and power of deep learning, which should facilitate their integration in the visual perception module of modern robotic systems. Besides the above methodological advances, we also made software (for occlusion and pose estimation) and datasets (of high-quality occlusion information) available as a contribution to the scientific community.
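The geometric half of the pose pipeline, recovering a 6D pose from 3D-to-2D point correspondences with PnP, can be sketched as follows; the deep model that predicts the projections is not shown, and all numbers below are synthetic assumptions.

```python
import cv2
import numpy as np

# Synthetic 3D model points and camera intrinsics (assumptions for the example).
object_points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                          [1, 1, 0], [1, 0, 1]], dtype=np.float64)
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])

# Ground-truth pose, used here only to synthesise the 2D observations that a
# deep model would normally predict.
rvec_gt = np.array([0.1, -0.2, 0.05])
tvec_gt = np.array([0.3, -0.1, 4.0])
image_points, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K, None)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
print("estimated rotation:", rvec.ravel(), "translation:", tvec.ravel())
```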
Sintorn, Ida-Maria. "Segmentation methods and shape descriptions in digital images : applications in 2D and 3D microscopy." Uppsala : Centre for Image Analysis, Swedish University of Agricultural Sciences, 2005. http://epsilon.slu.se/200520.pdf.
Karathanou, Argyro. "Image processing for on-line analysis of electron microscope images : automatic recognition of reconstituted membranes." PhD thesis, Université de Haute Alsace - Mulhouse, 2009. http://tel.archives-ouvertes.fr/tel-00559800.
North, Peter R. J. "The reconstruction of visual appearance by combining stereo surfaces." Thesis, University of Sussex, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362837.
Full textChiu, Bernard. "A new segmentation algorithm for prostate boundary detection in 2D ultrasound images." Thesis, Waterloo, Ont. : University of Waterloo, [Dept. of Electrical and Computer Engineering], 2003. http://etd.uwaterloo.ca/etd/bcychiu2003.pdf.
Full text"A thesis presented to the University of Waterloo in fulfilment of the thesis requirement for the degree of Master of Applied Science in Electrical and Computer Engineering". Includes bibliographical references.
Randell, Charles James. "3D underwater monocular machine vision from 2D images in an attenuating medium." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ32764.pdf.
Full textLaw, Kwok-wai Albert, and 羅國偉. "3D reconstruction of coronary artery and brain tumor from 2D medical images." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B31245572.
Full textChau, T. K. W. "An investigation into interpretation of 2D images using a knowledge based controller." Thesis, City University London, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375830.
Zöllei, Lilla, 1977. "2D-3D rigid-body registration of X-ray fluoroscopy and CT images." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86790.
Härd, Victoria. "Automatic Alignment of 2D Cine Morphological Images Using 4D Flow MRI Data." Thesis, Linköpings universitet, Datorseende, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-131470.
Jiang, Qitong. "Euler Characteristic Transform of Shapes in 2D Digital Images as Cubical Sets." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1586387046539831.
Reddy, Serendra. "Automatic 2D-to-3D conversion of single low depth-of-field images." Doctoral thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/24475.
Full textARAÚJO, Caio Fernandes. "Segmentação de imagens 3D utilizando combinação de imagens 2D." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/21040.
Automatic image segmentation is still a great challenge today. Although a human can make this distinction, in most cases easily and quickly, for a computer the task may not be so trivial. Several characteristics have to be taken into account by the computer, which may include color, position, neighborhoods, and texture, among others. This challenge increases greatly with medical images such as MRI: besides showing organs with different shapes in different people, these images have regions where the intensity variation between neighboring pixels is subtle, which complicates automatic segmentation even further. Furthermore, this variation does not allow a pre-defined shape in many cases, because the internal differences between patients' bodies, especially those with a pathology, may be too large for a generalization. Yet precisely because of such problems, these cases are the main targets of the professionals who analyze medical images. This work therefore aims to contribute to the segmentation of medical images. To do so, it uses the idea of Bagging to generate different 2D images from a single 3D image, and combination of classifiers to unite them, in order to achieve statistically better results compared to popular segmentation methods. To verify the effectiveness of the proposed method, the segmentation of the images was performed using four different segmentation techniques and their combined results. The chosen techniques were binarization by the Otsu method, K-Means, the SOM neural network and the statistical GMM model. The images used in the experiments were real brain MRI scans, and the objective was to segment the gray matter (GM) of the brain. The images were all 3D, and the segmentations were performed on 2D slices of the original image, which first passes through a preprocessing stage in which the brain is extracted from the skull. The results show that the proposed method was successful, since in all the applied techniques there was an improvement in the segmentation accuracy rate, confirmed by the statistical T-Test. Thus, the work shows that using the principles of combination of classifiers in medical image segmentation can yield better results.
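The combination idea can be illustrated with a minimal sketch in which one slice is segmented by three simple methods and the binary masks are fused by majority vote; this is not the dissertation code, and random data stand in for an MR slice.

```python
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

slice2d = np.random.rand(64, 64)           # stand-in for one MR slice
x = slice2d.reshape(-1, 1)

mask_otsu = slice2d > threshold_otsu(slice2d)
mask_kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(x)
mask_kmeans = mask_kmeans.reshape(slice2d.shape).astype(bool)
mask_gmm = GaussianMixture(n_components=2, random_state=0).fit(x).predict(x)
mask_gmm = mask_gmm.reshape(slice2d.shape).astype(bool)

# Align label conventions so that True always means the brighter class.
masks = [m if slice2d[m].mean() > slice2d[~m].mean() else ~m
         for m in (mask_otsu, mask_kmeans, mask_gmm)]

majority = np.sum(masks, axis=0) >= 2      # majority vote over the three masks
print("pixels in combined segmentation:", int(majority.sum()))
```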
Boui, Marouane. "Détection et suivi de personnes par vision omnidirectionnelle : approche 2D et 3D." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLE009/document.
In this thesis we address the problem of 3D people detection and tracking in omnidirectional image sequences, in order to realize applications that allow 3D pose estimation. This requires stable and accurate tracking of the person in a real environment. To achieve this, we use a catadioptric camera composed of a spherical mirror and a perspective camera. This type of sensor is commonly used in computer vision and robotics. Its main advantage is its wide field of view, which allows it to acquire a 360-degree view of the scene with a single sensor and in a single image. However, this kind of sensor generally introduces significant distortions in the images, which prevents a direct application of the methods conventionally used in perspective vision. This thesis describes two tracking approaches that take these distortions into account. These methods reflect the progress of our work over three years, allowing us to move from person detection to the 3D estimation of the person's pose. The first step of this work consisted in developing a person detection algorithm for omnidirectional images. We proposed to extend the conventional approach for human detection in perspective images, based on the Histogram of Oriented Gradients (HOG), in order to adapt it to spherical images. Our approach uses Riemannian manifolds to adapt the gradient computation for omnidirectional images, as well as the spherical gradient for spherical images, to generate our omnidirectional image descriptor.
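The perspective-image starting point mentioned above, the standard HOG person detector, looks as follows in OpenCV; the spherical adaptation developed in the thesis is not reproduced here, and the image path is a placeholder.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("omnidirectional_frame.png")          # placeholder path
boxes, weights = hog.detectMultiScale(image, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in boxes:
    cv2.rectangle(image, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)
cv2.imwrite("detections.png", image)
```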
Chu, Jiaqi. "Orbital angular momentum encoding/decoding of 2D images for scalable multiview colour displays." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/274903.
Full textBaudour, Alexis. "Détection de filaments dans des images 2D et 3D : modélisation, étude mathématique et algorithmes." Phd thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00507520.
Full textBEIL, FRANK MICHAEL. "Approche structurelle de l'analyse de la texture dans les images cellulaires 2d et 3d." Paris 7, 1999. http://www.theses.fr/1999PA077019.
Full textMEZERREG, MOHAMED. "Structures de donnees graphiques : contribution a la conception d'un s.g.b.d. images 2d et 3d." Paris 7, 1990. http://www.theses.fr/1990PA077155.
Stebbing, Richard. "Model-based segmentation methods for analysis of 2D and 3D ultrasound images and sequences." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:f0e855ca-5ed9-4e40-994c-9b470d5594bf.
Full textKang, Xin, and 康欣. "Feature-based 2D-3D registration and 3D reconstruction from a limited number of images via statistical inference for image-guidedinterventions." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B48079625.
Lohou, Christophe. "Contribution à l'analyse topologique des images : étude d'algorithmes de squelettisation pour images 2D et 3D selon une approche topologie digitale ou topologie discrète." Marne-la-Vallée, 2001. http://www.theses.fr/2001MARN0120.
This thesis proposes new thinning algorithms for 2D or 3D images according to two approaches, using either digital topology or discrete topology. In the first part, we recall some fundamental notions of digital topology and several well-known thinning algorithms that delete simple points. Then, we propose a methodology to produce new thinning algorithms based on the parallel deletion of P-simple points. Such algorithms are conceived so that they delete at least the points removed by a given existing thinning algorithm. By applying this methodology, we produce two new algorithms. Although the results seem satisfying, the design and encoding of the new algorithms are not easy. In the second part, we use the concept of partially ordered set (or poset). We propose, more straightforwardly than before, a thinning algorithm consisting of the repeated parallel deletion of αn-simple points, followed by the parallel deletion of βn-simple points. We have also proposed new definitions of end points which allow us to obtain either curve skeletons or surface skeletons. The thinning scheme is applied to 2D and 3D binary images, and to 2D grayscale images. Finally, a study of a parallel filtering of skeletons is developed.
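For comparison only, a standard off-the-shelf 2D thinning can be obtained with scikit-image as below; this is not the P-simple-point or poset-based operators developed in the thesis.

```python
import numpy as np
from skimage.morphology import skeletonize

binary = np.zeros((64, 64), dtype=bool)
binary[20:44, 10:54] = True                # a filled rectangle as test object

skeleton = skeletonize(binary)             # medial curve of the rectangle
print("object pixels:", int(binary.sum()), "-> skeleton pixels:", int(skeleton.sum()))
```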
Pitocchi, Jonathan. "Quantitative assessment of bone quality after total hip replacement through medical images: 2D and 3D approaches." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13023/.
Full textCheng, Jie-Zhi, and 鄭介誌. "Cell-Based Image Segmentation for 2D and 2D Series Ultrasound Images." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/74833813995255310003.
Full text國立臺灣大學
醫學工程學研究所
95
Boundary information about the object of interest in sonography is the fundamental basis for many clinical studies. It can help to manifest anatomical abnormality by characterizing morphological features and plays an essential role in numerous quantitative ultrasound image analyses. For instance, the evaluation of the functional properties of the heart demands the quantification of the deformation of the epi- and endo-cardiac surfaces. To draw a convincing conclusion from such quantitative analysis, the boundary information should be reliable and efficiently generated, which means robust image segmentation techniques are necessary. This study addresses the challenging segmentation problem of ultrasound images in two parts: 2D and 2D series. These two parts are attacked by the two proposed algorithms, the ACCOMP and C2RC-MAP algorithms, respectively. The unique feature of the proposed algorithms is the cell-based concept. A cell is a catchment basin tessellated by a two-pass watershed transformation and serves as the basic operational unit in the two proposed algorithms. Taking the cell tessellation as the basis is beneficial in three main respects. First, compared to finding solutions directly on pixels, searching on cells is more efficient, because the search space spanned by cells is dramatically smaller than the space of pixels, so redundant computation can be saved. Second, concrete region and edge information can be obtained from the cell tessellation; this information on regions and edges can be a valuable clue to assist the segmentation task. Third, as a cell is a group of pixels with homogeneous intensity, it may be statistically more robust to noise, which can potentially improve image processing in ultrasound images. With these three advantages, cell-based image segmentation approaches may be more efficacious and efficient than pixel-based approaches. The ACCOMP algorithm is a two-phase data-driven approach constituted by a partition phase and an edge-grouping phase. The partition phase aims to tessellate the image or ROI into prominent components and is carried out by the cell competition algorithm. The second phase is realized by a cell-based graph-traversing algorithm. By focusing on the edge information, the complicated echogenicity problem can be bypassed. The ACCOMP algorithm was validated on 300 breast sonograms, including 165 carcinomas and 135 benign cysts. The results show that more than 70% of the derived boundaries fall within the span of the manual outlines under a 95% confidence interval. The robustness of reproducibility is confirmed by the Friedman test, whose p-value is 0.54. It is also suggested that the lesion sizes derived by the ACCOMP algorithm are highly correlated with the lesions defined by the average manually delineated boundaries. To ensure that the delineated boundaries of a series of 2D images closely follow the visually perceivable edges with high boundary coherence between consecutive slices, the C2RC-MAP algorithm is proposed. It deforms the region boundary in a cell-by-cell fashion through a cell-based two-region competition process. The cell-based deformation is guided by a cell-based MAP framework with a posterior function characterizing the distribution of the cell means in each region, the salience and shape complexity of the region boundary, and the boundary coherence of the consecutive slices.
The C2RC-MAP algorithm was validated using 10 series of breast sonograms, including 7 compression series and 3 freehand series. The compression series contain 2 carcinoma and 5 fibroadenoma cases, and the freehand series 2 carcinoma and 1 fibroadenoma cases. The results show that more than 70% of the derived boundaries fall within the span of the manually delineated boundaries. The robustness of the proposed algorithm to the variation of the ROI is confirmed by Friedman tests, whose p-values are 0.517 and 0.352 for the compression and freehand series groups, respectively. The Pearson correlations between the lesion sizes derived by the proposed algorithm and those defined by the average manually delineated boundaries are all higher than 0.990. The overlapping and difference ratios between the derived boundaries and the average manually delineated boundaries are mostly higher than 0.90 and lower than 0.13, respectively. For both series groups, all assessments conclude that the boundaries derived by the proposed algorithm are comparable to those delineated manually. Moreover, the proposed algorithm is shown to be superior to the Chan and Vese level-set method based on paired-sample t-tests on the performance indices at the 5% significance level.
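The "cell" notion can be illustrated by tessellating an image into catchment basins with a watershed on the gradient magnitude, the kind of basic unit ACCOMP and C2RC-MAP operate on; the competition and MAP machinery of the thesis is not reproduced, and random data stand in for a sonogram.

```python
import numpy as np
from skimage import filters, segmentation

image = np.random.rand(128, 128)           # stand-in for a sonogram
gradient = filters.sobel(image)

# With no markers given, every local minimum of the gradient seeds one
# catchment basin, i.e. one "cell" of the tessellation.
cells = segmentation.watershed(gradient)
print("number of cells:", int(cells.max()))
```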
Cheng, Jie-Zhi. "Cell-Based Image Segmentation for 2D and 2D Series Ultrasound Images." 2007. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-1107200714111100.
Hsu, Yung-Chih, and 徐永智. "Reconstruct 2D Magnetic Resonance Images to 3D Images." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/00306249532420443901.
Full text國立臺灣大學
電機工程學研究所
89
Due to recent advances in magnetic resonance (MR) imaging, fetal images can be acquired using fast MR imaging sequences such as T2-weighted fast spin echo (SE). Preliminary results on direct three-dimensional fetal MR imaging, which facilitates thin-slice acquisition, have been demonstrated with scan times on the order of 29 s. However, the current scan time can be detrimental to image quality because of both maternal respiratory motion and fetal motion. Here, we propose to develop a mathematically based method, pseudo-3D imaging, that could potentially be applied to fetal MR imaging. The main idea of pseudo-3D is to use three sets of orthogonal 2D thick-slice images and, by proper mathematical calculation, reconstruct the 3D volume data with improved resolution. Post-processing to eliminate the block effect is used to improve the quality of the reconstructed images and make them more readable. The results from both mathematical phantom and experimental studies show that the proposed algorithm is theoretically feasible in the absence of image mis-registration. Therefore, in situations where true 3D acquisition is hampered by factors such as scan time, leaving multi-slice 2D acquisition as the only possible approach, pseudo-3D reconstruction using three orthogonal 2D slices seems to be a good alternative for achieving MR imaging with a pseudo-isotropic spatial resolution.
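A much-simplified sketch of the orthogonal-stack idea is given below: three thick-slice stacks simulated along x, y and z are interpolated back to a common isotropic grid and fused by plain averaging, whereas the thesis uses a proper mathematical reconstruction; all data are synthetic.

```python
import numpy as np
from scipy.ndimage import zoom

iso = np.random.rand(64, 64, 64)           # synthetic "true" isotropic volume

def thick_slices(vol, axis, factor=4):
    """Simulate a thick-slice acquisition by block-averaging along one axis."""
    vol = np.moveaxis(vol, axis, 0)
    lowres = vol.reshape(vol.shape[0] // factor, factor, *vol.shape[1:]).mean(axis=1)
    return np.moveaxis(lowres, 0, axis)

def upsample(vol, axis, factor=4):
    factors = [1, 1, 1]
    factors[axis] = factor
    return zoom(vol, factors, order=1)     # linear interpolation back to isotropic

stacks = [thick_slices(iso, ax) for ax in range(3)]     # three orthogonal stacks
fused = np.mean([upsample(s, ax) for ax, s in enumerate(stacks)], axis=0)
print("fused volume shape:", fused.shape)               # (64, 64, 64)
```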