Dissertations / Theses on the topic 'Tomographic reconstruction algorithms'

Consult the top 50 dissertations / theses for your research on the topic 'Tomographic reconstruction algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Kim, Chuyoung. "Algorithms for Tomographic Reconstruction of Rectangular Temperature Distributions using Orthogonal Acoustic Rays." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/73754.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Non-intrusive acoustic thermometry using an acoustic impulse generator and two microphones is developed and integrated with tomographic techniques to reconstruct temperature contours. A low-velocity plume at around 450 °F exiting through a rectangular duct (3.25 by 10 inches) was used for validation and reconstruction. A static-temperature relative error of 0.3% compared with thermocouple-measured data was achieved using a cross-correlation algorithm to calculate the speed of sound. Two tomographic reconstruction algorithms, the simplified multiplicative algebraic reconstruction technique (SMART) and the least-squares method (LSQR), are investigated for visualizing temperature contours of the heated plume. A rectangular arrangement of the transmitter and microphones with a traversing mechanism collected two orthogonal sets of acoustic projection data. Both reconstruction techniques successfully recreated the overall character of the contour; however, in future work, integration of refraction effects and implementation of additional angled projections are required to improve local temperature estimation accuracy. The root-mean-square percentage errors of reconstructing non-uniform, asymmetric temperature contours using the SMART and LSQR methods are 20% and 19%, respectively.
Master of Science
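The cross-correlation time-of-flight measurement described in this abstract can be sketched numerically. Everything below (constants, function names, sampling rate, path length) is illustrative rather than taken from the thesis; the per-path speed of sound is converted to temperature via the ideal-gas relation c² = γRT:

```python
import numpy as np

GAMMA = 1.4          # ratio of specific heats for dry air (assumed)
R_AIR = 287.05       # specific gas constant of dry air, J/(kg*K)

def time_of_flight(tx, rx, fs):
    """Estimate the propagation delay (s) of rx relative to tx by
    locating the peak of their cross-correlation."""
    corr = np.correlate(rx, tx, mode="full")
    lag = np.argmax(corr) - (len(tx) - 1)   # lag in samples
    return lag / fs

def path_temperature(path_len, tof):
    """Path-averaged absolute temperature (K) from the ideal-gas
    relation c**2 = GAMMA * R_AIR * T, with c = path_len / tof."""
    c = path_len / tof
    return c * c / (GAMMA * R_AIR)

# synthetic check: an impulse delayed by 100 samples at fs = 100 kHz
fs = 100_000.0
tx = np.zeros(512); tx[10] = 1.0
rx = np.zeros(512); rx[110] = 1.0
tof = time_of_flight(tx, rx, fs)       # 1 ms delay
temp = path_temperature(0.347, tof)    # c = 347 m/s -> roughly 300 K
```

In the same spirit, the thesis derives a speed of sound per acoustic ray before feeding the path averages to the SMART/LSQR reconstructions.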
2

MALALLA, NUHAD ABDULWAHED YOUNIS. "C-ARM TOMOGRAPHIC IMAGING TECHNIQUE FOR DETECTION OF KIDNEY STONES." OpenSIUC, 2016. https://opensiuc.lib.siu.edu/dissertations/1278.

Abstract:
Nephrolithiasis, the presence of kidney stones, is among the most common painful disorders of the urinary system. Various imaging modalities are used to diagnose patients with symptoms of renal or urinary tract disease, such as plain kidney-ureter-bladder x-ray (KUB), intravenous pyelography (IVP), and computed tomography (CT). As the traditional three-dimensional (3D) technique for detecting nephrolithiasis and kidney stones, CT provides detailed cross-sectional images as well as the 3D structure of the kidney by moving the x-ray beam in a circle around the body. However, a CT scan of the kidney entails a relatively high radiation exposure, greater than that of regular x-rays. The C-arm technique is a newer x-ray imaging modality that uses a 2D array detector and a cone-shaped x-ray beam to create 3D information about the scanned object. Both the x-ray source and the 2D array detector are mounted on a C-shaped wheeled structure (the C-arm). A series of projection images is acquired by rotating the C-arm around the patient along a circular path in a single rotation. The characteristic structure of the C-arm allows a wide variety of movements around the patient, which helps keep the patient stationary during the scan. In this work, we investigated a C-arm technique to generate a series of tomographic images for nephrolithiasis and kidney stone detection. C-arm tomosynthesis is a new 3D kidney imaging method that provides a series of two-dimensional (2D) images along a partial circular orbit over a limited view angle. Our experiments were done with a kidney phantom, formed from a pig kidney with two kidney stones embedded inside it, at low radiation dosage. The radiation dose and scanning time needed for kidney imaging are both dramatically reduced thanks to the cone-beam geometry and the limited angular rotation.
To demonstrate the capability of our C-arm tomosynthesis to generate 3D kidney information for kidney stone detection, two groups of tomographic image reconstruction algorithms were developed for C-arm tomosynthesis: direct algorithms, such as filtered back-projection (FBP), and iterative algorithms, such as the simultaneous algebraic reconstruction technique (SART), maximum-likelihood expectation maximization (MLEM), ordered-subset maximum-likelihood expectation maximization (OS-MLEM), and pre-computed penalized likelihood reconstruction (PPL). Three projection methods were investigated: the pixel-driven method (PDM), the ray-driven method (RDM), and the distance-driven method (DDM). The methods differ in their trade-off between calculation accuracy and computing time. Preliminary results demonstrated the capability of the proposed technique to generate volumetric data about the kidney for nephrolithiasis and kidney stone detection using all of the investigated reconstruction algorithms. Although the algorithms differ in strategy, the embedded kidney stones can be clearly visualized in all reconstruction results. Computer simulation studies were also performed on a simulated phantom to evaluate the results for each reconstruction algorithm. To mimic the kidney phantom, the simulated phantom contained two kidney stones of different sizes. A dataset of projection images was collected using a virtual C-arm tomosynthesis system with a geometric configuration similar to the real technique. All investigated algorithms were used to reconstruct the 3D information, and several image-quality metrics were applied to evaluate the imaging system and the reconstruction algorithms. The results show the capability of C-arm tomosynthesis to generate 3D information about kidney structures and to identify the size and location of kidney stones with a limited radiation dose.
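Of the iterative algorithms listed (SART, MLEM, OS-MLEM, PPL), MLEM has the most compact update rule. A minimal numpy sketch on a toy system matrix, not the thesis implementation:

```python
import numpy as np

def mlem(A, y, n_iter=200, eps=1e-12):
    """Basic MLEM for Poisson data y ~ Poisson(A @ x):
    x <- x * A.T(y / (A x)) / A.T(1), elementwise, starting from ones."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image A.T(1)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)      # measured / re-projected
        x = x * (A.T @ ratio) / np.maximum(sens, eps)
    return x

# toy 2x2 object observed through its row and column sums (4 "rays")
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
x_true = np.array([4., 1., 2., 3.])
y = A @ x_true                                  # noise-free projections
# x_hat stays nonnegative and reproduces the projections, though it need
# not equal x_true: four row/column sums are rank-deficient for 4 pixels
x_hat = mlem(A, y)
```

The multiplicative form keeps the estimate nonnegative automatically, one reason EM-type updates are popular for emission and limited-angle data.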
3

Millardet, Maël. "Amélioration de la quantification des images TEP à l'yttrium 90." Thesis, Ecole centrale de Nantes, 2022. https://tel.archives-ouvertes.fr/tel-03871632.

Abstract:
Yttrium-90 PET imaging is becoming increasingly popular. However, the probability that a decay of an yttrium-90 nucleus leads to the emission of a positron is only 3.2 × 10⁻⁵, and the reconstructed images are therefore characterised by a high level of noise, as well as a positive bias in low-activity regions. To correct these problems, classical methods use penalised algorithms or allow negative values in the image. However, a study comparing and combining these different methods in the specific context of yttrium-90 was still missing at the beginning of this thesis, which therefore aims to fill this gap. Unfortunately, the methods allowing negative values cannot be used directly in a dosimetric study. The thesis therefore starts by proposing a new image post-processing method that removes the negative values while preserving the mean values as locally as possible. A complete multi-objective analysis of these different methods is then proposed. The thesis ends by laying the foundations of what could become an algorithm providing an adequate set of reconstruction hyperparameters from the sinograms alone.
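The thesis's post-processing removes negative values while preserving mean values as locally as possible; its actual algorithm is not given in the abstract. Purely as an illustration of the idea, here is a deliberately naive scheme of our own construction: each negative voxel is zeroed and its (negative) mass is pushed onto the in-bounds 4-neighbours, which conserves the global sum exactly:

```python
import numpy as np

def zero_negatives_preserve_sum(img, max_iter=100):
    """Toy illustration only: zero out negative pixels and spread their
    (negative) mass over the in-bounds 4-neighbours, repeating until no
    negatives remain, so the global sum is conserved exactly."""
    out = img.astype(float).copy()
    n, m = out.shape
    for _ in range(max_iter):
        negs = np.argwhere(out < 0)
        if len(negs) == 0:
            break
        for i, j in negs:
            d, out[i, j] = out[i, j], 0.0       # remove the negative value
            nbrs = [(i + di, j + dj)
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + di < n and 0 <= j + dj < m]
            for a, b in nbrs:                   # push its mass locally
                out[a, b] += d / len(nbrs)
    return out

img = np.array([[5., -1., 4.],
                [2., 3., -2.],
                [1., 6., 2.]])
fixed = zero_negatives_preserve_sum(img)   # nonnegative, same total as img
```

A sum-preserving correction matters here because dosimetry integrates activity over regions, so globally rescaling or simply clipping negatives would bias the dose estimate.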
4

Defontaine-Caritu, Marielle. "Reconstruction optique de tomographies : application à la tomographie ultrasonore en réflexion." Compiègne, 1995. http://www.theses.fr/1995COMPD814.

Abstract:
Tomographic reconstruction techniques remain an active research topic, with ever more sophisticated algorithms adapting to the associated acquisition techniques. Despite enormous progress in computing, these reconstruction methods still depend on machine computation time, bulk, and high cost. We present an optical reconstruction system based on the parallel straight-ray backprojection algorithm. The projection data are encoded optically by an acousto-optic modulator and deflector pair, instantly forming the optical backprojection images. A CCD camera placed at the image focal point at the end of the optical chain performs the integration. The same sequence is applied for every projection, and after a complete revolution of the camera, kept in exposure, the reconstructed slice image is obtained on the sensor. The only time-limiting factor is the projection acquisition time, about one second in our case. After validating this method with simulated projections, we coupled it to an ultrasonic acquisition system. The algorithm implemented on the optical chain is not an exact reconstruction method; nevertheless, the resulting images correctly render the explored structures. Finally, with a view to achieving exact reconstruction, a last part is devoted to planned improvements of the optical chain.
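The optical chain implements straight parallel-ray backprojection. A minimal numerical analogue of that operation, with our own grid and detector parametrization and nearest-neighbour detector sampling:

```python
import numpy as np

def backproject(sinogram, thetas):
    """Unfiltered parallel-beam backprojection: smear each 1-D projection
    back across an n x n grid along its acquisition angle
    (nearest-neighbour sampling of the detector coordinate)."""
    n = sinogram.shape[1]
    c = (n - 1) / 2.0                                # grid centre
    yy, xx = np.mgrid[0:n, 0:n] - c
    img = np.zeros((n, n))
    for p, th in zip(sinogram, thetas):
        t = xx * np.cos(th) + yy * np.sin(th) + c    # detector coordinate
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        img += p[idx]                                # smear along the rays
    return img / len(thetas)

# a point object at the grid centre projects to a spike in every view;
# its backprojection therefore peaks at the centre pixel
thetas = np.array([0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4])
sino = np.zeros((4, 11)); sino[:, 5] = 1.0
img = backproject(sino, thetas)
```

As the abstract notes for the optical chain, plain backprojection of this kind is not exact: without the ramp filtering of filtered back-projection, the point spreads into a star-shaped blur around the peak.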
5

Vallot, Delphine. "Reconstruction adaptative optimisée pour la quantification en tomographie de positons couplée à un tomodensitomètre." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30188.

Abstract:
This study was initiated to evaluate an iterative reconstruction algorithm for positron emission tomography based on a regularization method that achieves convergence. Our aim was to assess its performance in comparison with other currently available algorithms, and to study the impact of the only parameter available to users for possible optimization, using both anthropomorphic phantoms and clinical data. We confirm that this algorithm has several advantages over the traditional OSEM-MLEM in terms of noise, contrast and detectability. By using more anthropomorphic phantoms and with access to more reconstruction parameters, its performance could be further improved to reduce the artefacts and the overestimation of certain metrics. Work with the vendor is still in progress.
6

Velo, Alexandre França. "Análise da aplicação de diferentes algoritmos de reconstrução de imagens tomográficas de objetos industriais." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/85/85131/tde-08022019-142220/.

Abstract:
There is interest in industry in using computed tomography to examine the interior (i) of manufactured industrial objects or (ii) of machines and their means of production. In these cases, the purpose of the tomography system is (a) to control the quality of the final product and (b) to optimize production, contributing to the pilot phase of projects and to analysing the quality of the means of production without interrupting the production line. Continuous quality assurance of the means of production is the key to ensuring product quality and competitiveness. The Radiation Technology Center of the Nuclear and Energy Research Institute (IPEN/CNEN-SP) has been developing this technology for industrial process analysis for some time. To date, the laboratory has developed three generations of tomography systems: (i) first generation; (ii) third generation; and (iii) Instant Non-Scanning tomography. Image reconstruction algorithms are of central importance to the optimal functioning of this technology. In this PhD thesis, tomographic image reconstruction algorithms were developed and analyzed for implementation in the experimental protocols of these tomography systems. Analytical and iterative image reconstruction methods were developed using Matlab® r2013b. The iterative algorithms produced images with better spatial resolution than those obtained by the analytical method; however, the images from the analytical method presented less noise. The time needed to obtain an image by the iterative method is relatively high, and increases as the image pixel matrix grows, while the analytical method provides images almost instantly. For reconstructions using the Instant Non-Scanning tomography system, images obtained by the analytical method did not reach satisfactory quality compared with the iterative methods.
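As a sketch of the iterative family compared in this thesis, here is plain Kaczmarz/ART on a toy consistent system; the matrix sizes and data are arbitrary, and the thesis's own algorithms are not reproduced:

```python
import numpy as np

def kaczmarz(A, y, n_sweeps=500, relax=1.0):
    """Algebraic reconstruction (Kaczmarz / ART): cyclically project the
    estimate onto the hyperplane of each ray equation a_i . x = y_i."""
    x = np.zeros(A.shape[1])
    norms = (A * A).sum(axis=1)               # ||a_i||^2 per ray
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if norms[i] > 0:
                x += relax * (y[i] - A[i] @ x) / norms[i] * A[i]
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 16))     # toy 40-ray, 16-pixel system matrix
x_true = rng.random(16)
y = A @ x_true                    # noise-free, consistent projections
x_hat = kaczmarz(A, y)            # converges to x_true for this system
```

The sweep structure makes the trade-off in the abstract concrete: each image estimate costs a full pass over all rays, repeated many times, whereas an analytical method touches the data once.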
7

Belward, Catherine. "Reconstruction algorithms for electrical impedance tomography /." St. Lucia, Qld, 2003. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe17243.pdf.

8

Polydorides, Nicholas. "Image reconstruction algorithms for soft-field tomography." Thesis, University of Manchester, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.488242.

9

林吉雄 and Kat-hung Lam. "Geometric object reconstruction from orthogonal ray sum data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31210855.

10

Lam, Kat-hung. "Geometric object reconstruction from orthogonal ray sum data /." [Hong Kong : University of Hong Kong], 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13458747.

11

Artola, Jose. "A study on electrical impedance tomography reconstruction algorithms." Thesis, University of York, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241077.

12

Nordin, Md Jan. "An image reconstruction algorithm for a dual modality tomographic system." Thesis, Sheffield Hallam University, 1995. http://shura.shu.ac.uk/20268/.

Abstract:
This thesis describes an investigation into the use of dual modality tomography to measure component concentrations within a cross-section. The benefits and limitations of using dual modality compared with a single modality are investigated and discussed. A number of methods are available to provide imaging systems for process tomography applications, and seven imaging techniques are reviewed. Two tomography modalities were chosen for investigation (Electrical Impedance Tomography (EIT) and optical tomography) and the proposed dual modality system is presented. Image reconstruction algorithms for EIT (based on a modified Newton-Raphson method), for optical tomography (based on the back-projection method), and for both modalities combined into a single tomographic imaging system are described, enabling comparisons between the individual and combined modalities. To analyse the performance of the image reconstruction algorithms used in the EIT, optical tomography and dual modality investigations, a sequence of reconstructions using a series of phantoms is performed on a simulated vessel. Results from two distinct cases are presented: a) simulation of a vertical pipe whose cross-section is filled with liquid, or with liquid and the objects being imaged, and b) simulation of a horizontal pipe where the conveying liquid level may vary from pipe-full down to 14% liquid. A computer simulation of an EIT imaging system based on a 16-electrode sensor array is used. The quantitative images obtained from simulated reconstruction are compared, in terms of percentage area, with the actual cross-section of the model. The results show that useful reconstructions may be obtained with widely differing liquid levels, despite the limitations in the accuracy of the reconstructions.
The test results obtained using the phantoms with optical tomography, based on two projections each of sixteen views, show that the images produced agree closely on a quantitative basis with the physical models. The accuracy of the optical reconstructions, neglecting the effects of aliasing due to only two projections, is much higher than for the EIT reconstructions. Neglecting aliasing, the measured accuracies range from 0.1% to 0.8% for the pipe filled with water. For the sewer condition, i.e. the pipe not filled with water, the major phase is measured with an accuracy of 1% to 3.4%. For the single optical modality, the minor components are measured with accuracies of 6.6% to 19%. The test results obtained using the phantoms show that the images produced by combining the EIT and optical tomography methods agree quantitatively with the physical models. The EIT eliminates most of the aliasing, and the results show that the optical part of the system then provides accuracies for the minor components in the range of 1% to 5%. It is concluded that the dual modality system shows a measurable increase in accuracy compared with the single modality systems. The dual modality system should be investigated further using laboratory flow rigs in order to check accuracies and determine practical limitations. Finally, suggestions for future work on improving the accuracy, speed and resolution of the dual modality imaging system are presented.
13

Araujo, Ericky Caldas de Almeida. "Estudo e aplicação do algoritmo FDK para a reconstrução de imagens tomográficas multi-cortes." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-10032009-132445/.

Abstract:
This work consisted of the study and application of the FDK (Feldkamp-Davis-Kress) algorithm for tomographic image reconstruction using cone-beam geometry, resulting in the implementation of an adapted multi-slice computed tomography (MSCT) system. For the acquisition of the projections, a rotating platform coupled to a goniometer, an x-ray unit and a CCD-type digital detector were used. The FDK algorithm was implemented on a PC, which was used for the reconstruction process. Initially, the original FDK algorithm was applied considering only ideal physical conditions in the measurement process. Then, corrections for some artifacts related to the projection measurement process were incorporated. Computational simulations were performed to test the functioning of the implemented algorithm. Three projection acquisition systems, using different x-ray equipment, detectors, methodologies and radiographic techniques, were assembled and tested in order to ensure that the best possible set of data was collected. The implemented MSCT system was calibrated using a specially designed and manufactured object with a known linear attenuation coefficient distribution. Finally, the implemented MSCT system was used for multi-slice tomographic reconstruction of an inhomogeneous object whose attenuation coefficient distribution was unknown. The reconstructed images were analyzed to assess the performance of the implemented MSCT system.
14

Magee, Kathleen Ann 1959. "Parallel implementations of image reconstruction algorithms for emission tomography." Thesis, The University of Arizona, 1990. http://hdl.handle.net/10150/277300.

Abstract:
Techniques for implementing the EM and simulated-annealing image reconstruction algorithms on a large-grain parallel computer for faster execution per iteration are developed for emission tomography applications. The speedups obtained by implementing the algorithms on up to 54 processors connected in a ring topology are found to be nearly linear. Reconstruction involves finding an estimate of the emission distribution that minimizes an energy function that contains a data-agreement term and a noise-control term. The EM algorithm minimizes the complete-incomplete form of the data-agreement term, which is easily partitioned for parallel computation. The simulated-annealing algorithm is a Monte Carlo method in which any form of data-agreement and noise-control term can be minimized. In the reconstruction of a thyroid phantom, it is demonstrated that the complete-incomplete data-agreement term can be used to facilitate the parallel implementation of simulated annealing while still guaranteeing convergence.
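The partitioning idea, splitting the complete-incomplete data-agreement term across processors, can be illustrated by dividing the projection rows of an MLEM-style EM update into per-worker blocks. The blocks are summed serially here in place of ring communication (all names and sizes below are our own), and the blocked result matches the serial update exactly:

```python
import numpy as np

def em_update(A, y, x, eps=1e-12):
    """One serial EM (MLEM) update for Poisson-distributed projections."""
    ratio = y / np.maximum(A @ x, eps)
    return x * (A.T @ ratio) / np.maximum(A.T @ np.ones(len(y)), eps)

def em_update_partitioned(A, y, x, n_workers=3, eps=1e-12):
    """The same update with the projection data split into contiguous
    blocks, one per 'processor'; the per-block back-projections are
    simply summed (done serially here, standing in for the ring)."""
    num = np.zeros_like(x)
    den = np.zeros_like(x)
    for idx in np.array_split(np.arange(len(y)), n_workers):
        fwd = A[idx] @ x                        # this worker's projections
        num += A[idx].T @ (y[idx] / np.maximum(fwd, eps))
        den += A[idx].T @ np.ones(len(idx))
    return x * num / np.maximum(den, eps)

rng = np.random.default_rng(1)
A = rng.random((12, 6))
x = np.ones(6)
y = A @ (rng.random(6) + 0.5)
# the partitioned update reproduces the serial one
```

Because the numerator and denominator are plain sums over rays, the speedup per iteration is limited mainly by the final reduction, consistent with the nearly linear scaling reported in the abstract.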
15

Kalisse, Camille George Emile. "Reconstruction algorithms for the Aberdeen impedance imaging systems." Thesis, University of Aberdeen, 1993. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU055336.

Abstract:
The backprojection method for electrical impedance image reconstruction has been adapted for the opposing current drive configuration implemented in the second generation of Aberdeen impedance imaging systems. The logarithmic conformal transformation is used to solve the forward problem for a two-dimensional homogeneous medium of circular cross-section. Pixel weights for backprojection are calculated from the normalised distances of the pixel centres from the boundary side of backprojection. An experimental solution to the forward problem in a homogeneous medium of irregular cross-section and three-dimensional boundary is proposed and implemented. A thorax phantom was built for this purpose using radiotherapy moulding techniques. The potential distribution in this phantom was measured using a tetrapolar impedance measuring device, and the equipotential lines falling on the electrodes were plotted. A reconstruction matrix capable of reconstructing dynamic impedance images of the thorax was formulated. Images representing resistivity change distributions between maximum inspiration and maximum expiration have been reconstructed. These thorax cross-section images show the most faithful representation of the expected resistivity changes due to respiration.
16

Weber, Loriane. "Iterative tomographic X-Ray phase reconstruction." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI085/document.

Abstract:
Phase contrast imaging has been of growing interest in the biomedical field, since it provides an enhanced contrast compared to attenuation-based imaging. Actually, the phase shift of the incoming X-ray beam induced by an object can be up to three orders of magnitude higher than its attenuation, particularly for soft tissues in the imaging energy range. Phase contrast can be, among others existing techniques, achieved by letting a coherent X-ray beam freely propagate after the sample. In this case, the obtained and recorded signals can be modeled as Fresnel diffraction patterns. The challenge of quantitative phase imaging is to retrieve, from these diffraction patterns, both the attenuation and the phase information of the imaged object, quantities that are non-linearly entangled in the recorded signal. In this work we consider developments and applications of X-ray phase micro and nano-CT. First, we investigated the reconstruction of seeded bone scaffolds using sed multiple distance phase acquisitions. Phase retrieval is here performed using the mixed approach, based on a linearization of the contrast model, and followed by filtered-back projection. We implemented an automatic version of the phase reconstruction process, to allow for the reconstruction of large sets of samples. The method was applied to bone scaffold data in order to study the influence of different bone cells cultures on bone formation. Then, human bone samples were imaged using phase nano-CT, and the potential of phase nano-imaging to analyze the morphology of the lacuno-canalicular network is shown. We applied existing tools to further characterize the mineralization and the collagen orientation of these samples. Phase retrieval, however, is an ill-posed inverse problem. A general reconstruction method does not exist. Existing methods are either sensitive to low frequency noise, or put stringent requirements on the imaged object. 
Therefore, we considered the joint inverse problem combining phase retrieval and tomographic reconstruction. We proposed an innovative algorithm for this problem, which combines phase retrieval and tomographic reconstruction into a single iterative regularized loop, where a linear phase contrast model is coupled with an algebraic tomographic reconstruction algorithm. This algorithm is applied to numerically simulated data.
17

Tairi, Souhil. "Développement de méthodes itératives pour la reconstruction en tomographie spectrale." Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0160/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In recent years, hybrid pixel detectors have paved the way for the development of spectral X-ray computed tomography (spectral CT). Spectral CT makes it possible to extract more information about the internal structure of the object than conventional absorption CT. One of its objectives in medical imaging is to identify and quantify components of interest in an object, such as biological markers called contrast agents (iodine, barium, etc.). Most of the state of the art proceeds in two steps: "pre-reconstruction", which separates the components in projection space and then reconstructs, and "post-reconstruction", which reconstructs the object and then separates the components. In this thesis we are interested in an approach that separates and reconstructs the components of the object simultaneously. The state of the art in simultaneous reconstruction and separation of spectral CT data remains sparse to this day, and existing reconstruction approaches are limited in their performance and often do not take the complexity of the acquisition model into account. The main objective of this thesis is to propose reconstruction and separation approaches that account for the complexity of the model in order to improve the quality of the reconstructed images. The problem to be solved is an inverse, ill-posed, non-convex problem of very large dimension. To solve it, we propose a proximal algorithm with a variable metric. Promising results are obtained on real data and show advantages in terms of reconstruction quality.
In recent years, hybrid pixel detectors have paved the way for the development of spectral X-ray computed tomography (spectral CT). Spectral CT provides more information about the internal structure of the object compared to conventional absorption CT. One of its objectives in medical imaging is to obtain images of components of interest in an object, such as biological markers called contrast agents (iodine, barium, etc.). The state of the art in simultaneous reconstruction and separation of spectral CT data remains limited to this day. Existing reconstruction approaches are limited in their performance and often do not take into account the complexity of the acquisition model. The main objective of this thesis work is to propose better-quality reconstruction approaches that take into account the complexity of the model in order to improve the quality of the reconstructed images. Our contribution considers the non-linear polychromatic model of the X-ray beam and combines it with a prior model on the components of the object to be reconstructed. The problem thus obtained is an inverse, non-convex and ill-posed problem of very large dimensions. To solve it, we propose a proximal algorithm with a variable metric. Promising results are shown on real data. They show that the proposed approach allows good separation and reconstruction despite the presence of noise (Gaussian or Poisson). Compared to existing approaches, the proposed approach has advantages in terms of convergence speed.
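The variable-metric proximal iteration described in this abstract can be illustrated, in a deliberately simplified convex setting, by a projected-gradient step under a diagonal metric. Everything below (the function name, the row-absolute-sum majorizer, the non-negativity constraint used as the "prox") is an illustrative sketch, not the author's algorithm:

```python
import numpy as np

def vm_proximal_nnls(A, b, n_iter=5000):
    """Solve min ||Ax - b||^2 s.t. x >= 0 with a diagonal-metric
    proximal gradient step. The diagonal D (row absolute sums of
    A^T A) majorizes A^T A, so the scaled step is always safe, and
    the prox of the constraint under a diagonal metric is still the
    simple projection onto the non-negative orthant."""
    AtA = A.T @ A
    D = np.sum(np.abs(AtA), axis=1)        # diagonal metric, D >= A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = np.maximum(0.0, x - grad / D)  # metric-scaled step + prox
    return x
```

In the thesis the data term is the non-convex polychromatic likelihood and the metric changes along the iterations; this sketch only shows the mechanics of a metric-scaled proximal step.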
18

Cheng, Jiayi. "Investigating signal denoising and iterative reconstruction algorithms in photoacoustic tomography." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/62689.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Photoacoustic tomography (PAT) is a promising biomedical imaging modality that achieves strong optical contrast and high ultrasound resolution. This technique is based on the photoacoustic (PA) effect, in which tissue illuminated by a nanosecond pulsed laser generates acoustic waves through thermoelastic expansion. By detecting the PA waves, the initial pressure distribution, which corresponds to the optical absorption map, can be obtained by a reconstruction algorithm. In a data acquisition system based on a linear array transducer, the PA signals are contaminated with various noises. The reconstruction also suffers from artifacts and missing structures due to the limited detection view. We aim to reduce the effect of noise by denoising preprocessing. The PAT system with a linear array transducer and a parallel data acquisition system (DAQ) exhibits prominent band-shaped noise due to signal interference. The band-shaped noise is treated as a low-rank matrix and the pure PA signal as a sparse matrix. The robust principal component analysis (RPCA) algorithm is applied to extract the pure PA signal from the noise-contaminated PA measurement. The RPCA approach is conducted on experimental data from different samples. The denoising results are compared with several methods, and RPCA is shown to outperform the other methods. This demonstrates that RPCA is promising for reducing the background noise in PA image reconstruction. We also aim to improve the iterative reconstruction. The variance-reduced stochastic gradient descent (VR-SGD) algorithm is implemented in PAT reconstruction. A new forward projection matrix is also developed to match the measurement data more accurately. Using different evaluation criteria, such as peak signal-to-noise ratio (PSNR), relative root-mean-square of reconstruction error (RRMSE) and line profile comparisons, the reconstructions from various iterative algorithms are compared.
The advantages of VR-SGD are demonstrated on both simulation and experimental data. Our results indicate that VR-SGD in combination with the accurate projection matrix can lead to improved reconstruction in a small number of iterations. RPCA denoising and VR-SGD iterative reconstruction have been implemented in PAT. Our results show that RPCA and VR-SGD are promising approaches to improve the image reconstruction quality in PAT.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
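The low-rank + sparse split used in the abstract above is commonly solved in the literature by principal component pursuit; a minimal inexact augmented-Lagrangian sketch (not the thesis code, with illustrative parameter heuristics) might look like:

```python
import numpy as np

def rpca(M, lam=None, tol=1e-7, max_iter=300):
    """Split M into a low-rank part L (e.g. structured band noise) and a
    sparse part S (the signal of interest) by alternating singular value
    thresholding and elementwise soft thresholding."""
    lam = lam or 1.0 / np.sqrt(max(M.shape))
    norm_M = np.linalg.norm(M)
    mu = 1.25 / np.linalg.norm(M, 2)       # common initial penalty heuristic
    Y = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: elementwise soft thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Z = M - L - S
        Y += mu * Z                        # dual ascent on the constraint
        mu *= 1.6                          # gradually enforce M = L + S
        if np.linalg.norm(Z) < tol * norm_M:
            break
    return L, S
```

Applied to the stacked PA traces, `L` would capture the band-shaped interference and `S` the signal; the thesis evaluates this separation on experimental data.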
19

Armstrong, Ian. "Quantitative accuracy of iterative reconstruction algorithms in positron emission tomography." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/quantitative-accuracy-of-iterative-reconstruction-algorithms-in-positron-emission-tomography(512320eb-05dc-4ea5-999f-03be55de362d).html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Positron Emission Tomography (PET) plays an essential role in the management of patients with cancer. It is used to detect and characterise malignancy as well as monitor response to therapy. PET is a quantitative imaging tool, producing images that quantify the uptake of a radiotracer that has been administered to the patient. The most common measure of uptake derived from the image is known as a Standardised Uptake Value (SUV). Data acquired on the scanner are processed to produce images that are reported by clinicians. This task is known as image reconstruction and uses computational algorithms to process the scan data. The last decade has seen substantial development of two such algorithms, which have become commercially available: modelling of the scanner spatial resolution (resolution modelling) and time of flight (TOF). The Biograph mCT was the first scanner from Siemens Healthcare to feature these two algorithms, and the scanner at Central Manchester University Hospitals was the first Biograph mCT to go live in the UK. This PhD project, sponsored by Siemens Healthcare, aims to evaluate the effect of these algorithms on SUV in routine oncology imaging through a combination of phantom and patient studies. Resolution modelling improved visualisation of small objects and resulted in significant increases of uptake measurements. This may pose a challenge to clinicians when interpreting established uptake metrics that are used as an indication of disease status. Resolution modelling reduced the variability of SUV. This improved precision is particularly beneficial when assessing SUV changes during therapy monitoring. TOF was shown to reduce image noise while conserving FDG uptake measurements, relative to non-TOF algorithms. As a result of this work, TOF has been used routinely since mid-2014 at the CMUH department. This has facilitated a reduction of patient and staff radiation dose and an increase of 100 scans performed each year in the department.
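For reference, the body-weight SUV discussed above is the tissue activity concentration divided by the injected dose per unit body weight, with the dose decay-corrected to scan time. A small sketch (function name and the F-18 half-life default are illustrative conventions, not from the thesis):

```python
def suv_bw(conc_bq_per_ml, injected_dose_bq, body_weight_kg,
           minutes_post_injection=0.0, half_life_min=109.77):
    """Body-weight SUV: concentration / (decay-corrected dose / weight).
    Uses the usual convention that 1 g of tissue occupies about 1 mL,
    so body weight in kg is converted to grams ~ millilitres."""
    decayed_dose = injected_dose_bq * 0.5 ** (minutes_post_injection / half_life_min)
    return conc_bq_per_ml / (decayed_dose / (body_weight_kg * 1000.0))
```

By construction, a tracer spread uniformly through the body gives SUV = 1 everywhere, which is why SUV is a convenient normalised uptake metric.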
20

Lei, Qingchun. "Development and Validation of Reconstruction Algorithms for 3D Tomography Diagnostics." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/74233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This work reports three reconstruction algorithms developed to address practical issues encountered in 3D tomography diagnostics, such as the limited view angles available in many practical applications, the large scale and nonlinearity of the problems when they are posed in 3D, and measurement uncertainty. These algorithms are: an algebraic reconstruction technique (ART) screening algorithm, a nonlinear iterative reconstruction technique (NIRT), and an iterative reconstruction technique integrating view registration optimization (IRT-VRO). The ART screening algorithm was developed to enhance the performance of the traditional ART algorithm for linear tomography problems, the NIRT to solve nonlinear tomography problems, and the IRT-VRO to address view registration uncertainty in both linear and nonlinear problems. This dissertation describes the mathematical formulations and the experimental and numerical validations for these algorithms. The results obtained in this dissertation are expected to lay the groundwork for further development and expanded adoption of tomography diagnostics in various practical applications.
Ph. D.
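The classic ART underlying the screening algorithm above is the Kaczmarz row-action method: the current image estimate is successively projected onto the hyperplane defined by each ray measurement. A generic sketch (not the dissertation's screening variant):

```python
import numpy as np

def art(A, b, n_sweeps=200, relax=1.0):
    """Classic ART (Kaczmarz): cycle through the rows of the system
    A x = b, projecting the current estimate onto each measurement's
    hyperplane; `relax` is the usual relaxation factor."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)   # ||a_i||^2 for each ray
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

For a consistent system, the iterates converge to a solution; the dissertation's contribution is in screening which equations/voxels participate, not in this basic update.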
21

Meeson, Stuart. "An investigation of optimal performance criteria in electrical impedance tomography." Thesis, University of Southampton, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Musca, Lorenzo. "Tomographic 3D reconstruction through linear inversion for breast microwave imaging." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10507/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In the present thesis we address the problem of detecting and localizing a small spherical target with characteristic electrical properties inside a volume of cylindrical shape, representing the female breast, with MWI. One of the main contributions of this project is to extend the existing linear inversion algorithm from planar-slice to volume reconstruction; results obtained under the same conditions and experimental setup are reported for the two different approaches. A preliminary comparison and performance analysis of the reconstruction algorithms is performed via numerical simulations in a software-created environment: a single dipole antenna is used to illuminate the virtual breast phantom from different positions and, for each position, the corresponding scattered field value is recorded. The collected data are then exploited to reconstruct the investigation domain, along with the scatterer position, in the form of an image called a pseudospectrum. In this process the tumor is modeled as a dielectric sphere of small radius and, for electromagnetic scattering purposes, it is treated as a point-like source. To improve the performance of the reconstruction technique, we repeat the acquisition for a number of frequencies in a given range: the different pseudospectra, reconstructed from single-frequency data, are incoherently combined with the MUltiple SIgnal Classification (MUSIC) method, which returns an overall enhanced image. We exploit this multi-frequency approach to test the performance of the 3D linear inversion reconstruction algorithm while varying the source position inside the phantom and the height of the antenna plane. Analysis results and reconstructed images are then reported. Finally, we perform 3D reconstruction from experimental data gathered with the acquisition system in the microwave laboratory at DIFA, University of Bologna, for a recently developed breast-phantom prototype; the obtained pseudospectrum and a performance analysis for the real model are reported.
23

Abhange, Shital K. "Graphical User Interface (GUI) to Study Different Reconstruction Algorithms in Computed Tomography." Wright State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=wright1239218325.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Tafas, Jihad. "An algorithm for two-dimensional density reconstruction in proton computed tomography (PCT)." CSUSB ScholarWorks, 2007. https://scholarworks.lib.csusb.edu/etd-project/3281.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The purpose of this thesis is to develop an optimized and effective iterative reconstruction algorithm, together with hardware acceleration methods that work synergistically with it during reconstruction in proton computed tomography, which accurately maps the electron density.
25

Perez-Juste, Abascal Juan Felipe. "Improvements in reconstruction algorithms for electrical impedance tomography of brain function." Thesis, University College London (University of London), 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.444541.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Deidda, Daniel. "Hybrid kernelised expectation maximisation reconstruction algorithms for quantitative positron emission tomography." Thesis, University of Leeds, 2018. http://etheses.whiterose.ac.uk/22956/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Positron emission tomography (PET) imaging is commonly used in clinical practice to diagnose different diseases. However, the limited spatial resolution sometimes prevents the desired diagnostic accuracy. This work examines some of the issues related to PET image reconstruction in the context of PET-magnetic resonance (MR) imaging, and proposes a novel PET iterative reconstruction algorithm, hybrid kernelised expectation maximisation (HKEM), to overcome these issues by exploiting synergistic information from PET and MR images. When the number of detected events is low, the reconstructed images are often biased and noisy. Anatomically-guided PET image reconstruction techniques have been demonstrated to reduce partial volume effect (PVE) and noise, and to improve quantification, but they have also been shown to rely on accurate registration between the anatomical image and the PET image; otherwise they may suppress important PET information, which may lead to false negative detection of disease. The aim of the HKEM algorithm is to maintain the benefits of the anatomically-guided methods and overcome their limitations by incorporating synergistic information iteratively. The findings obtained using simulated and real datasets, from region of interest (ROI) and voxel-wise analyses, are as follows: first, anatomically-guided techniques provide reduced PVE and higher contrast compared to standard techniques, and HKEM provides even higher contrast in almost all the cases; second, the absolute bias and the noise affecting low-count datasets are reduced; third, HKEM reduces the suppression of PET-unique features due to PET-MR spatial inconsistency. This thesis therefore argues that using synergistic information, via the kernel method, increases the accuracy and precision of the PET clinical diagnostic examination. The promising quantitative features of the HKEM method give the opportunity to explore many possible clinical applications, such as cancer and inflammation.
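In its basic (non-hybrid) kernelised form, the kernel method referenced above reparameterises the image as x = Kα, with K built from similarities in an anatomical (e.g. MR) prior, and runs MLEM on the coefficients α. A toy 1D sketch under those assumptions (illustrative only, not the HKEM implementation):

```python
import numpy as np

def gaussian_kernel(prior, sigma=1.0):
    """Row-normalised pixel-similarity kernel from a 1D prior image."""
    d = prior[:, None] - prior[None, :]
    K = np.exp(-0.5 * (d / sigma) ** 2)
    return K / K.sum(axis=1, keepdims=True)

def kem(A, K, y, n_iter=5000):
    """Kernelised EM: MLEM on coefficients a, with image x = K a."""
    a = np.ones(K.shape[1])
    sens = K.T @ (A.T @ np.ones(A.shape[0]))      # sensitivity term
    for _ in range(n_iter):
        proj = A @ (K @ a)                        # forward projection
        a *= (K.T @ (A.T @ (y / proj))) / sens    # multiplicative update
    return K @ a
```

The hybrid variant of the thesis additionally feeds the evolving PET estimate itself into the kernel at each iteration, which is what protects PET-unique features from being suppressed.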
27

Deba, Charlie Nindjou. "Evaluation and verification of five different image reconstruction algorithms for electrical resistance tomography applications." Thesis, Cape Peninsula University of Technology, 2016. http://hdl.handle.net/20.500.11838/2465.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (MTech (Electrical Engineering))--Cape Peninsula University of Technology, 2016.
Tomography is the ability to internally visualise an opaque medium or body using different imaging techniques. Electrical Resistance Tomography (ERT) is a method commonly used in process tomography. It uses non-intrusive resistance measurements between a set of electrodes attached to the circumference of a fixed cross-section with a given conductivity and permittivity distribution. ERT is simple, low cost, safe and non-invasive. Despite these advantages, reconstructing the internal conductivity of the pipe still faces crucial challenges such as noise, relatively low spatial resolution, and the ill-posedness of the inverse problem solved by the reconstruction algorithms. Although previous work showed the potential of various algorithms for the reconstruction of ERT tomograms, no full characterisation and comparison of different algorithms could be found for real flow situations. The ERT system was tested in the identification of different objects and fluid beds in a real-time situation. The data collected from the measurements were then used for image reconstruction using an algorithm developed by Time Long (a one-step algorithm) and four EIDORS-based algorithms, namely: the Gauss-Newton algorithm with a Laplace prior (LP) and with a Gaussian prior (Automatic Hyper Parameter Selection (AHSP)), the Total Variation (TV) algorithm and the Conjugate Gradient (CG) algorithm. The performance of each algorithm was tested in different scenarios. The results obtained were then compared based on the quality and accuracy of the images as well as the computational time of each algorithm. Firstly, reconstructed images were obtained using objects placed inside the ERT test pipe. Secondly, the algorithms' performance was put to the test in a level-bed setup experiment, and finally the reconstructions were applied to a real flow situation, where different flow rates were applied.
The results obtained were then analysed and compared.
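A one-step linearised Gauss-Newton reconstruction of the kind implemented by EIDORS-style solvers can be sketched as follows; here a synthetic Jacobian stands in for the ERT forward model, and a first-difference operator stands in for the Laplace-type smoothness prior (all illustrative assumptions):

```python
import numpy as np

def one_step_gauss_newton(J, dv, lam=1e-2):
    """One-step linearised update for delta-conductivity ds:
    solve (J^T J + lam * L^T L) ds = J^T dv, where J is the
    sensitivity (Jacobian) matrix, dv the voltage-change data and
    L a simple first-difference smoothness prior."""
    L = np.diff(np.eye(J.shape[1]), axis=0)   # (n-1) x n difference operator
    H = J.T @ J + lam * (L.T @ L)
    return np.linalg.solve(H, J.T @ dv)
```

The hyperparameter `lam` trades data fit against smoothness; the AHSP variant cited above selects it automatically rather than by hand.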
28

Abdmouleh, Fatma. "Reconstitution tomographique de propriétés qualitatives et quantitatives d'images." Phd thesis, Université de Strasbourg, 2013. http://tel.archives-ouvertes.fr/tel-01011954.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Tomography consists in reconstructing an nD object from (n-1)D projections. This discipline raises several questions that research attempts to answer. In this thesis we are interested in three aspects of this problem: 1) the reconstruction of a 2D image from projections in a rarely studied setting, that of point sources; 2) the uniqueness of this reconstruction; 3) the estimation of information about an object without going through the reconstruction of its image. To address the reconstruction problem for the class of convex sets, we define a new class of sets with convexity properties, which we call quadrant convexity with respect to point sources. After a study of this new class of sets, we show that it has strong links with the class of convex sets. We then propose a reconstruction algorithm for quadrant-convex sets which, when uniqueness of the reconstruction is guaranteed, reconstructs convex sets in polynomial time. We show that if a conjecture we have proposed is true, the uniqueness conditions for quadrant-convex sets are the same as those for convex sets. Concerning the third aspect studied in this thesis, we propose a method to estimate, from a single projection, the area of a 2D set. Concerning the estimation of the perimeter of a 2D set, by considering the projections of a convex set from a second source, we obtain two lower bounds and an upper bound on the perimeter of the projected object.
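The thesis works with point (fan-beam) sources, where the weighting is more involved; the underlying idea of estimating area from a single projection is easiest to see in the parallel-beam case, where the total mass of any projection equals the object's area (Cavalieri's principle). A discrete illustration:

```python
import numpy as np

# For a binary 2D set, every parallel projection carries the same total
# mass, equal to the set's area -- the simplest instance of estimating
# the area from a single projection.
n = 256
yy, xx = np.mgrid[:n, :n]
disk = ((xx - 128.0) ** 2 + (yy - 100.0) ** 2 <= 40.0 ** 2).astype(float)

horizontal = disk.sum(axis=0)   # projection onto the x-axis
vertical = disk.sum(axis=1)     # projection onto the y-axis
area = disk.sum()               # ground-truth pixel count
```

Both `horizontal.sum()` and `vertical.sum()` recover `area` exactly, and the pixel count itself approximates the continuous area pi * 40^2 of the disk.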
29

Laurent, Christophe. "Adéquation algorithmes et architectures parallèles pour la reconstruction 3D en tomographie X." Phd thesis, Université Claude Bernard - Lyon I, 1996. http://tel.archives-ouvertes.fr/tel-00004999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The goal of this thesis is to study the suitability of parallel approaches for 3D imaging, taking as target problem the reconstruction of 3D images in X-ray tomography. The technological evolution of acquisition systems, from the first scanner to the Morphomètre, has raised new issues for reconstruction methods and for computing requirements. A first part is devoted to the foundations of computed tomography and to a presentation of parallel machines. The evolution of high-performance computers, from the Cray 1 to the Cray T3D, makes it possible to solve problems of increasing size. In the second part, we define the methodology followed to parallelize the reconstruction algorithms. We identify the basic operators (projection and backprojection) on which the parallelization effort is focused. We define two parallelization approaches (local and global). Different optimized versions of these parallel algorithms are presented, taking into account on the one hand the minimization of communication times (computation/communication overlap, communication on a binary tree) and on the other hand load balancing (adaptive partitioning). In the third part, from these parallelized operators, we build three parallel reconstruction methods: an analytical method (the Feldkamp method) and two algebraic methods (the block-ART method and the SIRT method). The algorithms are tested on different parallel machines: MasPar, a network of Sun workstations, a farm of Alpha processors, the Intel Paragon, the IBM SP1 and the Cray T3D. To ensure portability across the different machines, we used PVM. We evaluate our performance and present our 3D reconstruction results from simulated data. Then, we present 3D reconstructions performed on the Cray T3D from experimental data acquired with the Morphomètre, a prototype 3D X-ray scanner.
30

Franchois, Ann. "Contribution a la tomographie microonde : algorithmes de reconstruction quantitative et verifications experimentales." Paris 11, 1993. http://www.theses.fr/1993PA112234.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis is a contribution to the development of active microwave imaging for biomedical applications, on both the algorithmic and experimental levels. The objective is the reconstruction of the complex permittivity of a biological object from measurements of the scattered field at a number of points close to the object. A two-dimensional configuration for the media and a transverse magnetic polarization for the fields were adopted, so that the direct and inverse problems can be solved from a scalar integral equation, discretized for this purpose by a method of moments. The inverse problem, which is nonlinear and ill-posed, is formulated as the minimization of the mean square error with respect to the measurements. Two reconstruction methods are presented: simulated annealing, a global optimization method, and Levenberg-Marquardt, a local Gauss-Newton-type method, for which the choice of the regularization parameter, by an empirical method and by generalized cross-validation, is also studied. A calibration procedure for a planar microwave camera at 2.45 GHz was also implemented in order to obtain real data with satisfactory accuracy. It includes a calibration method for the receiver, allowing its systematic errors to be reduced, and different methods for determining the incident field in the object and the complex permittivity of the external medium. The Levenberg-Marquardt method is finally applied to the real data thus obtained for three homogeneous phantoms with various contrasts.
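The Levenberg-Marquardt method used above can be sketched generically: the damping parameter interpolates between a Gauss-Newton step and gradient descent, and is adapted based on whether the step reduced the cost. A minimal sketch on a toy nonlinear least-squares fit (standing in for the permittivity reconstruction, which uses the full scattering model):

```python
import numpy as np

def levenberg_marquardt(r, jac, p0, n_iter=100, lam=1e-2):
    """Minimise ||r(p)||^2. Each step solves
    (J^T J + lam I) step = -J^T r; lam is halved after a successful
    step (towards Gauss-Newton) and doubled after a failed one
    (towards small gradient steps)."""
    p = np.asarray(p0, dtype=float)
    cost = np.sum(r(p) ** 2)
    for _ in range(n_iter):
        J, res = jac(p), r(p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ res)
        p_new = p + step
        cost_new = np.sum(r(p_new) ** 2)
        if cost_new < cost:
            p, cost, lam = p_new, cost_new, lam * 0.5
        else:
            lam *= 2.0
    return p
```

In the thesis, `lam` plays the role of the regularization parameter whose selection (empirical vs. generalized cross-validation) is studied.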
31

Xu, Minghua. "Photoacoustic computed tomography in biological tissues: algorithms and breast imaging." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1275.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Photoacoustic computed tomography (PAT) has great potential for application in the biomedical field. It best combines the high contrast of electromagnetic absorption and the high resolution of ultrasonic waves in biological tissues. In Chapter II, we present time-domain reconstruction algorithms for PAT. First, a formal reconstruction formula for arbitrary measurement geometry is presented. Then, we derive a universal and exact back-projection formula for three commonly used measurement geometries, including spherical, planar and cylindrical surfaces. We also find this back-projection formula can be extended to arbitrary measurement surfaces under certain conditions. A method to implement the back-projection algorithm is also given. Finally, numerical simulations are performed to demonstrate the performance of the back-projection formula. In Chapter III, we present a theoretical analysis of the spatial resolution of PAT for the first time. The three common geometries as well as other general cases are investigated. The point-spread functions (PSFs) related to the bandwidth and the sensing aperture of the detector are derived. Both the full-width-at-half-maximum of the PSF and the Rayleigh criterion are used to define the spatial resolution. In Chapter IV, we first present a theoretical analysis of spatial sampling in the PA measurement for three common geometries. Then, based on the sampling theorem, we propose an optimal sampling strategy for the PA measurement. Optimal spatial sampling periods for different geometries are derived. The aliasing effects on the PAT images are also discussed. Finally, we conduct numerical simulations to test the proposed optimal sampling strategy and also to demonstrate how the aliasing related to spatially discrete sampling affects the PAT image. In Chapter V, we first describe a prototype of the RF-induced PAT imaging system that we have built.
Then, we present experiments of phantom samples as well as a preliminary study of breast imaging for cancer detection.
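The back-projection idea can be illustrated by an idealised 2D delay-and-sum sketch: sensors on a ring record a pulse at the time of flight from a point absorber, and each trace is smeared back along circles of equal distance. This is a simplification of the exact weighted formula derived in the thesis; geometry and pulse shape below are illustrative:

```python
import numpy as np

c = 1.0                                     # normalised speed of sound
src = np.array([0.2, -0.1])                 # point absorber position
ang = np.linspace(0, 2 * np.pi, 64, endpoint=False)
sensors = np.stack([np.cos(ang), np.sin(ang)], axis=1)   # radius-1 ring

# idealised PA traces: a narrow pulse at each sensor's time of flight
t = np.linspace(0.0, 2.5, 600)
dists = np.linalg.norm(sensors - src, axis=1)
traces = np.exp(-((t[None, :] - dists[:, None] / c) / 0.02) ** 2)

# delay-and-sum back-projection over an image grid
g = np.linspace(-0.5, 0.5, 101)
img = np.zeros((g.size, g.size))
for i, s in enumerate(sensors):
    d = np.sqrt((g[None, :] - s[0]) ** 2 + (g[:, None] - s[1]) ** 2)
    img += np.interp(d / c, t, traces[i])

iy, ix = np.unravel_index(img.argmax(), img.shape)
peak = (g[ix], g[iy])   # the reconstruction should focus near src
```

With a full detection ring, the summed delays interfere constructively only at the true source position, which is why the image peaks there.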
32

Israel-Jost, Vincent. "Optimisation de la reconstruction en tomographie d'émission monophotonique avec colimateur sténopé." Université Louis Pasteur (Strasbourg) (1971-2008), 2006. https://publication-theses.unistra.fr/public/theses_doctorat/2006/ISRAEL-JOST_Vincent_2006.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In SPECT small animal imaging, it is highly recommended to accurately model the response of the detector in order to improve the low spatial resolution. The volume to reconstruct is thus obtained both by backprojecting and by deconvolving the projections. We chose iterative methods, which permit one to solve the inverse problem independently of the model's complexity. We describe in this work a Gaussian model of the point spread function (PSF) whose position, width and maximum are computed according to physical and geometrical parameters. Then we use the rotation symmetry to replace the computation of P projection operators, each one corresponding to one position of the detector around the object, by the computation of only one of them. This is achieved by choosing an appropriate polar discretization, for which we control the angular density of voxels to avoid oversampling the center of the field of view. Finally, we propose a new family of algorithms, the so-called frequency-adapted algorithms, which make it possible to optimize the reconstruction of a given band in the frequency domain with respect to both the speed of convergence and the quality of the image.
33

Israel-Jost, Vincent Sonnendrücker Eric Constantinesco André. "Optimisation de la reconstruction en tomographie d'émission monophotonique avec colimateur sténopé." Strasbourg : Université Louis Pasteur, 2007. http://eprints-scd-ulp.u-strasbg.fr:8080/698/01/israel-jost2006.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Donner, Quentin. "Correction de l'atténuation et du rayonnement diffusé en tomographie d'émission à simples photons." Université Joseph Fourier (Grenoble), 1994. http://www.theses.fr/1994GRE10155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The aim of single photon emission tomography is to produce a functional image of an organ. To do so, a radioelement that binds to the organ is administered to the patient, and the distribution of the radioelement is then computed from measurements of the emitted radiation. However, a non-negligible proportion of this radiation interacts with matter. The objective of this work is to take these interactions into account in order to reconstruct more accurately. After describing the main characteristics of the acquisition system, we focus on the reconstruction of the emission distribution from attenuated projections, in particular when the acquisition geometry is cone-beam. Several prediction-correction methods are frequently used, but their convergence has not been established. In fact, we show theoretically that the simplest of these methods diverges in a particular case. Attenuation correction can also be handled by the preconditioned gradient method. We propose several preconditioners, which lead to different reconstruction algorithms. These algorithms are closely analogous to the prediction-correction algorithms, but they have the advantage of being convergent. We close this chapter by validating a prediction-correction method, that of Morozumi, on simulated and experimental data. We then address the problem of determining the attenuating object following two approaches. The first relies on measurements of the direct radiation: the emission is first reconstructed without attenuation correction, and an ellipsoid is then fitted to the outer contours of this image. The second approach is based on measurements of the scattered radiation: we study the possibility of using these measurements to establish an attenuation map and highlight the difficulties associated with this problem. We finish by examining how the attenuation map can be used to correct the effects of scattered radiation. We propose a prediction-correction method, discuss its convergence, and compare it with a classical method on experimental data.
The aim of single photon emission tomography is to compute a functional picture of an organ. This is done by administering to the patient a radiopharmaceutical which is fixing in the organ. Then, one computes the distribution of the radiopharmaceutical from the measurement of the emitted gamma-rays. However, an important part of these gamma-rays are interacting with the matter inside the body. The aim of this work is to take these interactions into account so as to reconstruct more accurately. .
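The preconditioned-gradient idea discussed in this abstract can be illustrated on a toy system. The matrix below is a random stand-in for the attenuated projection operator, not the thesis model; the preconditioner and step size are one simple, convergent choice among the several the thesis studies:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for an attenuated projection operator A (rows: detector bins,
# columns: emission voxels). A real A would carry exponential attenuation
# weights along each ray; a random matrix only illustrates the solver.
A = rng.normal(size=(60, 25))
x_true = rng.random(25)
p = A @ x_true                          # noise-free attenuated projections

# Preconditioned gradient descent on f(x) = 0.5 * ||A x - p||^2.
d = 1.0 / (A * A).sum(axis=0)           # diagonal preconditioner
B = A * np.sqrt(d)                      # column-scaled operator
step = 1.0 / np.linalg.eigvalsh(B.T @ B).max()   # guarantees convergence

x = np.zeros(25)
for _ in range(2000):
    x -= step * d * (A.T @ (A @ x - p))

print(np.linalg.norm(A @ x - p) / np.linalg.norm(p))  # small at convergence
```

Unlike the predictor-corrector iterations the abstract mentions, this descent provably converges for any step below 2/L, where L is the largest eigenvalue of the preconditioned normal matrix.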
35

Iborra Carreres, Amadeo. "Development of a New 3D Reconstruction Algorithm for Computed Tomography (CT)." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/59421.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
[EN] Model-based computed tomography (CT) image reconstruction is dominated by iterative algorithms. Although long reconstruction times remain a barrier in practical applications, techniques to speed up their convergence are under investigation, with impressive results. In this thesis, a direct algorithm is proposed for model-based image reconstruction. The model-based approach relies on the construction of a model matrix that poses a linear system whose solution is the reconstructed image. The proposed algorithm consists of the QR decomposition of this matrix and the resolution of the system by backward substitution. The cost of this image reconstruction technique is one matrix-vector multiplication and one backward substitution, since the model construction and the QR decomposition are performed only once: each image reconstruction amounts to solving the same CT system for a different right-hand side. Several problems arise in the implementation of this algorithm, such as the exact calculation of a volume intersection, the definition of fill-in reduction strategies optimized for CT model matrices, and the exploitation of CT symmetries to reduce the size of the system. These problems are detailed, solutions to overcome them are proposed, and as a result a proof-of-concept implementation has been obtained. Reconstructed images have been analyzed and compared against the filtered backprojection (FBP) and maximum likelihood expectation maximization (MLEM) reconstruction algorithms, and the results show several benefits of the proposed algorithm. Although high resolutions have not yet been achieved, the results also demonstrate the potential of this algorithm, as substantial performance and scalability improvements would follow from better fill-in strategies or additional symmetries in the CT geometry.
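The factor-once, solve-many structure described above can be sketched with dense NumPy linear algebra (toy dimensions, no sparsity, fill-in reduction, or symmetry exploitation, unlike the thesis implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for a CT model matrix: rows = ray measurements, cols = voxels.
A = rng.random((120, 40))
Q, R = np.linalg.qr(A)          # factor once, offline

def reconstruct(b):
    # Per image: one matrix-vector product (Q.T @ b) plus a triangular solve.
    # A dedicated backward substitution would exploit that R is upper
    # triangular; np.linalg.solve is used here for brevity.
    return np.linalg.solve(R, Q.T @ b)

x_true = rng.random(40)
x_rec = reconstruct(A @ x_true)     # noise-free projections of a known image
print(np.allclose(x_rec, x_true))   # -> True
```

Every subsequent scan with the same geometry reuses Q and R, which is what makes the direct approach competitive despite the one-time factorization cost.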
Iborra Carreres, A. (2015). Development of a New 3D Reconstruction Algorithm for Computed Tomography (CT) [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/59421
TESIS
36

Bouaoune, Yasmina. "Développement d'algorithmes de reconstruction d'images pour la réalisation d'un système numérique de radiographie rotationnelle panoramique dentaire." Paris 12, 1994. http://www.theses.fr/1994PA120024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The aim of this thesis is to develop a digital dental panoramic radiography system that includes an image reconstruction algorithm. Replacing conventional radiographic film with X-ray-sensitive detectors brings all the benefits inherent to digital images: near-instantaneous acquisition, reduced doses, the ability to apply digital image processing techniques, archiving, and transmission. Current dental rotational panoramic radiography systems suffer from defects that appear as attenuation zones, inhomogeneous with respect to the rest of the image, when the X-ray beam traverses structures lying outside the tomographic slab (cervical vertebrae, mandibular rami). This poor image quality often hampers reliable diagnostic use of the panoramic image. The development of a simulation tool proved necessary to validate the two image formation algorithms for digital dental panoramic radiography that we propose: one is based on the tomographic principle of conventional panoramic imaging, the other is an original image reconstruction method. The simulation results were confirmed by a full-scale experiment.
37

Hou, Steven Shuyu. "Time-Domain Fluorescence Diffuse Optical Tomography: Algorithms and Applications." Thesis, Harvard University, 2014. http://nrs.harvard.edu/urn-3:HUL.InstRepos:13070053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Fluorescence diffuse optical tomography provides non-invasive, in vivo imaging of molecular targets in small animals. While standard fluorescence microscopy is limited to shallow depths and small fields of view, tomographic methods allow recovery of the distribution of fluorescent probes throughout the small animal body. In this thesis, we present novel reconstruction algorithms for the tomographic separation of optical parameters using time-domain (TD) measurements. These techniques are validated using simulations and with experimental phantom and mouse imaging studies. We outline the contributions of each chapter of the thesis below. First, we explore the TD fluorescence tomography reconstruction problem for single and multiple fluorophores with discrete lifetimes. We focus on late-arriving photons and compare a direct inversion approach with a two-step, asymptotic approach operating on the same TD data. We show that for lifetime multiplexing, the two methods produce fundamentally different kinds of solutions. The direct inversion is computationally inefficient and results in poor separation but has overall higher resolution, while the asymptotic approach provides better separation, relative quantitation of lifetime components, and localization, but has overall lower resolution. We verify these results with simulation and experimental phantoms. Second, we introduce novel high resolution lifetime multiplexing algorithms which combine asymptotic methods for separation of fluorophores with the high resolving power of early photon tomography. We show the effectiveness of such methods in achieving high resolution reconstructions of multiple fluorophores in simulations with complex-shaped phantoms and a digital mouse atlas, and also experimentally in fluorescent tube phantoms. Third, we compare the performance of tomographic spectral and lifetime multiplexing.
We show that both of these techniques involve a two-step procedure, consisting of a diffuse propagation step and a basis-function mixing step. However, in these two techniques the order of the two steps is switched, which leads to a fundamental difference in imaging performance. As an illustration of this difference, we show that the relative concentrations of three colocalized fluorophores in a diffuse medium can be accurately retrieved with lifetime methods but cannot be retrieved with spectral methods. Fourth, we address the long-standing challenge in diffuse optical tomography (DOT) of cross-talk between absorption and scattering. We extend the ideas developed from lifetime multiplexing algorithms by using a constrained optimization approach for separation of absorption and scattering in DOT. Using custom-designed phantoms, we demonstrate that a novel technique allows better separation of absorption and scattering inclusions compared to existing algorithms for CW and TD diffuse optical tomography. Finally, we show experimental validation of the lifetime multiplexing algorithms developed in this thesis using three experimental models. First, we show the reconstruction of overlapping complex shapes in a dish phantom. Second, we demonstrate the localization accuracy of lifetime-based methods using fluorescent pellets embedded in a sacrificed mouse. Third, we show, using planar imaging and tomography, the in vivo recovery of multiple anatomically targeted near-infrared fluorophores. In summary, we have presented novel reconstruction algorithms and experimental methods that extend the capability of time-domain fluorescence diffuse optical tomography systems. The methods developed in this thesis should also have applicability to general multi-parameter image reconstruction problems.
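For known, discrete lifetimes, the "basis-function mixing" step of lifetime unmixing reduces to a linear least-squares fit of decay amplitudes. A minimal sketch with illustrative lifetimes and amplitudes (not values from the thesis):

```python
import numpy as np

# Late-arriving photons from colocalized fluorophores follow a sum of
# exponential decays; with known, distinct lifetimes, the relative
# concentrations fall out of a linear least-squares fit.
t = np.linspace(1.0, 10.0, 200)                 # ns, late-photon window
lifetimes = np.array([0.6, 1.1, 1.9])           # ns, illustrative
a_true = np.array([0.5, 0.3, 0.2])              # relative concentrations

basis = np.exp(-t[:, None] / lifetimes[None, :])
decay = basis @ a_true                          # noise-free TD measurement

a_fit, *_ = np.linalg.lstsq(basis, decay, rcond=None)
print(np.allclose(a_fit, a_true))  # -> True
```

In tomography this mixing step is combined with a diffuse propagation model; the sketch isolates only the unmixing to show why distinct lifetimes make colocalized concentrations recoverable.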
Engineering and Applied Sciences
38

Charbonnier, Pierre. "Reconstruction d'image : régularisation avec prise en compte des discontinuités." Nice, 1994. https://hal.archives-ouvertes.fr/tel-01183977.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis is devoted to regularized image reconstruction. We consider the case where the image to be reconstructed consists of homogeneous regions separated by sharp edges: edge-preserving regularization. The reconstructed image is obtained by minimizing an energy functional, or criterion. First, we propose sufficient conditions for a criterion to preserve discontinuities. We then show that under these conditions, the criterion can be transformed by introducing an auxiliary variable that interacts with the main variable either additively or multiplicatively. The auxiliary variable plays a double role: it marks the discontinuities and it eases the optimization of the criterion. The augmented criterion is quadratic in the main variable when the auxiliary variable is fixed, and convex in the auxiliary variable when the main variable is fixed; in the latter case, we also give a closed-form expression for the optimal value of the auxiliary variable. Second, we exploit the properties of this half-quadratic regularization from an algorithmic point of view. The adopted optimization strategy minimizes the criterion alternately with respect to the main variable and the auxiliary variable. This leads to two algorithms with progressive introduction of discontinuities, ARTUR and LEGEND, of which we present a theoretical and practical study. We apply the algorithms to tomographic reconstruction. ARTUR and LEGEND achieve a good compromise between computation time and reconstructed image quality. They nevertheless involve solving large linear systems, which are computationally heavy to handle.
As a perspective, we propose a technique to limit the amount of information needed for reconstruction, allowing larger image sizes to be considered.
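The alternating half-quadratic scheme can be sketched on a 1D denoising toy in the spirit of ARTUR. The potential phi(t) = sqrt(delta² + t²), the parameters, and the test signal are illustrative choices, not the thesis code:

```python
import numpy as np

# Edge-preserving restoration: alternate a closed-form update of the
# auxiliary edge variable b with a quadratic (hence easy) subproblem in x.
def artur_denoise(y, lam=1.0, delta=0.05, iters=30):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # finite-difference operator
    x = y.copy()
    for _ in range(iters):
        d = D @ x
        # optimal auxiliary variable b = phi'(t) / (2t) for this phi
        b = 1.0 / (2.0 * np.sqrt(delta**2 + d**2))
        # fixed b: minimize ||x - y||^2 + lam * sum b_i (Dx)_i^2
        x = np.linalg.solve(np.eye(n) + lam * D.T @ (b[:, None] * D), y)
    return x

y = np.concatenate([np.zeros(50), np.ones(50)])       # step edge
noisy = y + np.random.default_rng(2).normal(0, 0.1, 100)
restored = artur_denoise(noisy)
print(np.abs(restored - y).mean() < np.abs(noisy - y).mean())
```

The auxiliary weight b collapses near large gradients, so smoothing is strong in flat regions but weak across the edge, which is exactly the "progressive introduction of discontinuities" behavior described above.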
39

Girard, Didier Laurent Pierre Jean. "Les méthodes de régularisation optimale et leurs applications en tomographie nouveaux algorithmes performants de reconstruction d'images /." S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00311758.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Boyacioglu, Rasim. "Performance Evaluation Of Current Density Based Magnetic Resonance Electrical Impedance Tomography Reconstruction Algorithms." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12611016/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Magnetic Resonance Electrical Impedance Tomography (MREIT) reconstructs the conductivity distribution from internal current density (MRCDI) and boundary voltage measurements. Many algorithms have been proposed for the solution of the MREIT inverse problem; they can be divided into two groups: current density (J) based and magnetic flux density (B) based reconstruction algorithms. In this thesis, J-based MREIT reconstruction algorithms are implemented and optimized with modifications. These algorithms are simulated with five conductivity models of different geometries and conductivity values. The simulation results are discussed and the reconstruction algorithms are compared according to their performance. The Equipotential-Projection algorithm has lower error percentages than the other algorithms in the noise-free case, whereas the Hybrid algorithm performs best in the noisy cases. Although the J-substitution and Hybrid algorithms have relatively long reconstruction times, they produce the perceptually best images. The Integration along Cartesian Grid Lines and Integration along Equipotential Lines algorithms diverge as the noise level increases. The Equipotential-Projection algorithm exhibits erroneous lines starting from the corners of the FOV, especially in noisy cases, whereas Solution as a Linear Equation System has a typical grid artifact. Considering performance on the data of experiment 1, only the Solution as a Linear Equation System algorithm partially reconstructed all elements, which shows that it is robust to noise. The Equipotential-Projection algorithm partially reconstructed the resistive element, and the other algorithms failed to reconstruct the conductivity distribution. Experimental results obtained with a higher conductivity contrast show that the Solution as a Linear Equation System, J-Substitution, and Hybrid algorithms reconstructed both phantom elements, and that the Hybrid algorithm is superior to the other algorithms in percentage-error comparison.
41

Eker, Gokhan. "Performance Evaluation Of Magnetic Flux Density Based Magnetic Resonance Electrical Impedance Tomography Reconstruction Algorithms." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610940/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Magnetic Resonance Electrical Impedance Tomography (MREIT) reconstructs images of the electrical conductivity distribution from magnetic flux density (B) measurements. The magnetic flux density is generated by a current applied externally to the object and is measured by a Magnetic Resonance Imaging (MRI) scanner. From the measured data and peripheral voltage measurements, the conductivity distribution of the object can be reconstructed. There are two types of reconstruction algorithms. The first type uses current density distributions to reconstruct the conductivity distribution; the object must be rotated in the MRI scanner to measure all three components of the magnetic flux density. These are called J-based reconstruction algorithms. The second type uses only the component of the magnetic flux density parallel to the main magnetic field of the MRI scanner, which eliminates the need for subject rotation. These are called B-based reconstruction algorithms. In this study, four B-based reconstruction algorithms proposed by several research groups are examined. The algorithms are tested on different computer models with noise-free and noisy data. With noise-free data, the algorithms work successfully. System SNRs of 30, 20 and 13 are used for the noisy data, for which the performance is not as satisfactory as in the noise-free case. Two of the algorithms use the second derivative of the z component of B (Bz) and are very sensitive to noise. One algorithm uses only a first derivative of Bz and is therefore less sensitive to noise. The remaining algorithm uses a sensitivity matrix to reconstruct the conductivity distribution.
42

Woo, Hok Wai. "Development of a new reconstruction algorithm and an electrical impedance tomography system /." Thesis, Connect to this title online; UW restricted, 1990. http://hdl.handle.net/1773/5866.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Recur, Benoît. "Précision et qualité en reconstruction tomographique : algorithmes et applications." Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14113/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Many modalities are available to acquire an object non-destructively (X-ray scanner, micro-scanner, terahertz waves, transmission electron microscopy, etc.). These tools acquire a set of projections around the object, and a reconstruction step then yields a representation of the acquired domain. The main limitation of these methods is that they rely on a continuous model of space while operating in a finite domain. The resulting discretization step is a source of errors in the produced images. Moreover, the acquisition is not performed ideally and may be corrupted by artifacts and noise. Many methods, direct or iterative, have been developed to reduce these errors and produce an image as faithful to reality as possible. An overview of these reconstructions is given here, enriched by a study of their quality, precision, and robustness to acquisition noise. Since discretization is one of the main limitations, we then seek to adapt discrete methods to the reconstruction of real data. These methods are exact in a finite domain but are not suited to real acquisitions, notably because of their sensitivity to errors. We therefore propose a link between the two worlds and develop new discrete methods that are more robust to noise. Finally, we address the missing-data problem, i.e. when the acquisition is not uniform around the object, which causes deformations in the reconstructed images. Since discrete methods are insensitive to this effect, we propose a first step toward a solution using the tools developed in this work.
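Among the iterative methods such a survey covers, ART (Kaczmarz) is the classic example: it sweeps over the projection equations a_i · x = p_i and projects the current estimate onto each hyperplane in turn. A minimal sketch on a random consistent system, not a real scanner geometry:

```python
import numpy as np

def art(A, p, sweeps=300, relax=1.0):
    # Kaczmarz / ART: successive projection onto each measurement hyperplane.
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, p_i in zip(A, p):
            x += relax * (p_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(80, 30))      # 80 "ray sums", 30 "pixels" (toy system)
x_true = rng.random(30)
x_rec = art(A, A @ x_true)         # noise-free, consistent data
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

On consistent data ART converges to an exact solution; with noisy projections the relaxation parameter and stopping rule govern the quality/noise trade-off studied in such comparisons.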
44

Martin Lorca, Dario. "Implementation And Comparison Of Reconstruction Algorithms For Magnetic Resonance." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608250/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In magnetic resonance electrical impedance tomography (MR-EIT), cross-sectional images of a conductivity distribution are reconstructed. When a current is injected into a conductor, it generates a magnetic field, which can be measured by a magnetic resonance imaging (MRI) scanner. MR-EIT reconstruction algorithms can be grouped into two types: current density based reconstruction algorithms (Type-I) and magnetic flux density based reconstruction algorithms (Type-II). The aim of this study is to implement a series of reconstruction algorithms for MR-EIT proposed by several research groups and to compare their performance under the same conditions. Five direct and one iterative Type-I algorithms, and one iterative Type-II algorithm, are investigated. Reconstruction errors and spatial resolution are quantified and compared. Noise levels corresponding to system SNRs of 60, 30 and 20 are considered. The iterative algorithms provide the lowest errors in the noise-free case. In the noisy cases, the iterative Type-I algorithm yields a lower error than the Type-II, although it can diverge for SNR below 20. Both suffer significant blurring, especially at SNR 20. Another two algorithms make use of integration in the reconstruction, producing intermediate errors but strong blurring. Equipotential lines are calculated for two reconstruction algorithms; these lines may not be found accurately when the SNR is below 20. A further disadvantage is that some pixels may not be covered and therefore cannot be reconstructed. Finally, the algorithm involving the solution of a linear system provides the least blurred images with intermediate error values, and is also very robust against noise.
45

Vernersson, Ante. "Adaptive Backprojection : Developing a simple reconstruction algorithm." Thesis, Mittuniversitetet, Avdelningen för elektronikkonstruktion, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-34098.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This group project aims to investigate the possibility of constructing a prototype micro-CT scanner, hopefully furthering the advancement of the field of small CT scanners, with the ultimate goal of providing a setup usable in an ambulance, to scan and send images to the hospital so that treatment can commence directly upon arrival. This report deals with the matter of reconstructing the acquired images to build a three-dimensional model of the scanned object. An algorithm is developed in a manner understandable to those not familiar with traditional reconstruction techniques, simplifying the understanding of CT backprojection while also providing a useful algorithm to use along with the scanner setup. The working principles of the reconstruction algorithm are explained along with its development process, and the project ends with clear results in the form of an "Adaptive Backprojection" algorithm.
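The smearing principle behind backprojection can be shown in its simplest form with just two orthogonal views, where it reduces to two outer additions; a real scanner uses many angles plus filtering:

```python
import numpy as np

# Backprojection: each projection value is "smeared" back across the ray it
# was measured along, and the smears from all angles are summed.
img = np.zeros((32, 32))
img[10, 20] = 1.0                       # point source

proj_rows = img.sum(axis=1)             # view at 0 degrees
proj_cols = img.sum(axis=0)             # view at 90 degrees

backproj = proj_rows[:, None] + proj_cols[None, :]
peak = tuple(int(i) for i in np.unravel_index(backproj.argmax(),
                                              backproj.shape))
print(peak)  # -> (10, 20)
```

The point source is recovered at the crossing of the two smears; the star-shaped residue along each ray is what filtering (or many additional angles) suppresses in a full reconstruction.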
46

關福延 and Folk-year Kwan. "An intelligent approach to automatic medical model reconstruction from serial planar CT images." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31243216.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Allali, Anthony. "Algorithme de reconstruction itératif pour tomographie optique diffuse avec mesures dans le domaine temporel." Mémoire, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/8909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Diffuse optical tomography imaging requires modeling the propagation of light in biological tissue for a given optical and geometric configuration; this is called the forward problem. A new approach based on the finite-difference method is developed to numerically model, via the diffusion equation (DE), time-domain light propagation in a 3D inhomogeneous medium with irregular boundaries, for the case of intrinsic imaging, i.e. imaging the absorption and scattering optical parameters of a tissue. Finite elements, computationally heavy because they use unstructured meshes, are generally preferred, since finite differences do not easily accommodate irregular boundaries. The use of the blocking-off method together with a 3D Sobel filter can in principle overcome these difficulties and yield equations that are fast to solve numerically with finite differences. In this work, an algorithm is developed to implement this approach, apply it in various cases, and validate it by comparing the results with Monte Carlo simulations, which serve as the reference. The ultimate objective of the project is three-dimensional imaging of a small animal, which is why the propagation model is at the heart of the image reconstruction algorithm. Obtaining images requires solving a large-scale inverse problem, and the algorithm is based on an objective function that is minimized iteratively with a gradient-based method. The objective function measures the discrepancy between the experimental measurements made on the subject and their predictions obtained from the propagation model. One of the difficulties in this type of algorithm is obtaining the gradient; this is done with auxiliary (adjoint) variables.
The goal is to develop and combine methods that allow the algorithm to converge as quickly as possible toward optical properties as faithful to reality as possible, exploiting the temporal dependence of time-resolved measurements, which provide more information than any other type of measurement in DOT. Results illustrating the reconstruction of a complex medium such as a mouse are presented to demonstrate the potential of our approach.
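The forward model at the heart of such a reconstruction loop is a time-stepped diffusion solver. A minimal 1D explicit finite-difference sketch with illustrative (non-tissue) coefficients, homogeneous Dirichlet boundaries, and the speed of light absorbed into the units:

```python
import numpy as np

# Explicit finite-difference stepping of du/dt = d/dx( D(x) du/dx ) - mu_a u,
# the kind of forward evaluation a gradient-based inversion repeats many times.
nx, dx, dt = 100, 0.05, 1e-4
D = np.full(nx, 0.3);  D[40:60] = 0.1       # inhomogeneous diffusion coeff.
mua = np.full(nx, 0.05)                     # absorption coefficient

assert dt <= dx**2 / (2 * D.max())          # explicit-scheme stability bound

u = np.zeros(nx)
u[50] = 1.0 / dx                            # impulsive source (delta approx.)
for _ in range(500):
    flux = D[:-1] * np.diff(u) / dx         # interface fluxes (crude choice)
    u[1:-1] += dt * (np.diff(flux) / dx - mua[1:-1] * u[1:-1])

print(u.sum() * dx)   # photon "mass" decays slowly through absorption
```

Blocking-off, as described above, amounts to masking nodes outside the irregular domain so the same structured-grid update can be kept; the sketch omits that and the 3D extension for brevity.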
48

Israel-Jost, Vincent. "Optimisation de la reconstruction en tomographie d'émission monophotonique avec collimateur sténopé." Phd thesis, Université Louis Pasteur - Strasbourg I, 2006. http://tel.archives-ouvertes.fr/tel-00112526.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In small-animal imaging by single photon emission computed tomography (SPECT), modeling the physical response of the detector requires great care in order to counterbalance its low intrinsic resolution. The slices of the volume to be reconstructed are thus obtained from the projections both by a backprojection operation and by a deconvolution of the detector's impulse response. We therefore opt for iterative methods for solving a large linear system, which work independently of the complexity of the model.
To obtain usable results in the rat and the mouse, in terms of both spatial resolution and computation time, we describe in this work the choices behind our modeling with a Gaussian impulse response, adjusted according to physical and geometric parameters. We then use the rotational symmetry inherent to the device to reduce the computation of P projection operators to that of a single one, through a discretization of space compatible with this symmetry, while controlling the angular density of voxels to avoid oversampling at the center of the volume.
Finally, we propose a new class of frequency-adapted algorithms that optimize the reconstruction of a given spatial frequency band, thus avoiding the computation of many iterations when the spectrum to be reconstructed lies mostly in the high frequencies.
49

Rouault-Pic, Sandrine. "Reconstruction en tomographie locale : introduction d'information à priori basse résolution." Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00005016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
One of the current objectives in tomography is to reduce the dose delivered to the patient. New imaging systems, integrating small high-resolution detectors or strongly collimated sources, make it possible to reduce the dose. These devices raise the problem of reconstructing an image from local information. One way to approach the local tomography problem is to introduce a priori information in order to remove the non-uniqueness of the solution. We therefore propose to complement the high-resolution local projections (from the systems described above) with complete low-resolution projections, coming for instance from a standard CT scan. We assume that the registration of the two data sets has already been performed, as this part is not the subject of our work. We first adapted classical reconstruction methods (ART, regularized conjugate gradient, and filtered backprojection) to the local problem by introducing the a priori information into the reconstruction process. We then consider wavelet-based reconstruction methods and also propose an adaptation to our problem. In all cases, the dual resolution also appears in the reconstructed image, with finer resolution in the region of interest. Finally, given the high computational cost of the methods used, we propose a parallelization of the implemented algorithms.
50

Barrera Cruz, Marco Antonio. "Hybrid method algebraic/inverse radon transform for region of interest reconstruction of computed tomography images /." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2009. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography