Academic literature on the topic 'Image reconstruction. Three-dimensional imaging. Computer vision'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Image reconstruction. Three-dimensional imaging. Computer vision.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Image reconstruction. Three-dimensional imaging. Computer vision"

1

Wu, Fupei, Shukai Zhu, and Weilin Ye. "A Single Image 3D Reconstruction Method Based on a Novel Monocular Vision System." Sensors 20, no. 24 (December 9, 2020): 7045. http://dx.doi.org/10.3390/s20247045.

Full text
Abstract:
Three-dimensional (3D) reconstruction and measurement are popular techniques in precision manufacturing processes. In this manuscript, a single image 3D reconstruction method is proposed based on a novel monocular vision system, which includes a three-level charge coupled device (3-CCD) camera and a ring structured multi-color light emitting diode (LED) illumination. Firstly, a procedure for the calibration of the illumination’s parameters, including LEDs’ mounted angles, distribution density and incident angles, is proposed. Secondly, the incident light information, the color distribution information and gray level information are extracted from the acquired image, and the 3D reconstruction model is built based on the camera imaging model. Thirdly, the surface height information of the detected object within the field of view is computed based on the built model. The proposed method aims at solving the uncertainty and the slow convergence issues arising in 3D surface topography reconstruction using current shape-from-shading (SFS) methods. Three-dimensional reconstruction experimental tests are carried out on convex, concave, angular surfaces and on a mobile subscriber identification module (SIM) card slot, showing relative errors less than 3.6%, respectively. Advantages of the proposed method include a reduced time for 3D surface reconstruction compared to other methods, demonstrating good suitability of the proposed method in reconstructing surface 3D morphology.
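The method above recovers surface height from shading under a dedicated 3-CCD camera and ring-shaped multi-color LED illumination, and its imaging model is specific to that hardware. As loosely related background only, the sketch below shows one generic step shared by many shading-based pipelines: integrating an estimated gradient field (p = dz/dx, q = dz/dy) into a height map with the Frankot-Chellappa least-squares method; the gradient estimation itself, where the paper's contribution lies, is assumed to have been performed elsewhere.

    import numpy as np

    def integrate_gradients(p, q):
        """Frankot-Chellappa integration: recover a height map z (up to an offset)
        from estimated surface gradients p = dz/dx, q = dz/dy, in the least-squares
        sense via the FFT."""
        ny, nx = p.shape
        wx = 2 * np.pi * np.fft.fftfreq(nx)
        wy = 2 * np.pi * np.fft.fftfreq(ny)
        WX, WY = np.meshgrid(wx, wy)
        denom = WX**2 + WY**2
        denom[0, 0] = 1.0                  # avoid division by zero at the DC term
        Z = (-1j * WX * np.fft.fft2(p) - 1j * WY * np.fft.fft2(q)) / denom
        Z[0, 0] = 0.0                      # absolute height is not observable
        return np.real(np.fft.ifft2(Z))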
APA, Harvard, Vancouver, ISO, and other styles
2

Casero, Ramón, Urszula Siedlecka, Elizabeth S. Jones, Lena Gruscheski, Matthew Gibb, Jürgen E. Schneider, Peter Kohl, and Vicente Grau. "Transformation diffusion reconstruction of three-dimensional histology volumes from two-dimensional image stacks." Medical Image Analysis 38 (May 2017): 184–204. http://dx.doi.org/10.1016/j.media.2017.03.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Loli Piccolomini, Elena, and Elena Morotti. "A Model-Based Optimization Framework for Iterative Digital Breast Tomosynthesis Image Reconstruction." Journal of Imaging 7, no. 2 (February 13, 2021): 36. http://dx.doi.org/10.3390/jimaging7020036.

Full text
Abstract:
Digital Breast Tomosynthesis is an X-ray imaging technique that allows a volumetric reconstruction of the breast, from a small number of low-dose two-dimensional projections. Although it is already used in the clinical setting, enhancing the quality of the recovered images is still a subject of research. The aim of this paper was to propose and compare, in a general optimization framework, three slightly different models and corresponding accurate iterative algorithms for Digital Breast Tomosynthesis image reconstruction, characterized by a convergent behavior. The suggested model-based implementations are specifically aligned to Digital Breast Tomosynthesis clinical requirements and take advantage of a Total Variation regularizer. We also tune a fully-automatic strategy to set a proper regularization parameter. We assess our proposals on real data, acquired from a breast accreditation phantom and a clinical case. The results confirm the effectiveness of the presented framework in reconstructing breast volumes, with particular focus on the masses and microcalcifications, in few iterations and in enhancing the image quality in a prolonged execution.
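The entry above casts Digital Breast Tomosynthesis reconstruction as model-based optimization with a Total Variation regularizer. The authors' three models and convergent algorithms are not reproduced here; the sketch below only illustrates the general idea with plain gradient descent on a least-squares data term plus a smoothed TV penalty, where the forward projector A, its adjoint At, the measured projections y, the regularization weight, and the step size are all placeholders.

    import numpy as np

    def tv_gradient(x, eps=1e-6):
        """Gradient of a smoothed (isotropic) total-variation penalty on a 2D image."""
        dx = np.diff(x, axis=1, append=x[:, -1:])
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        px, py = dx / mag, dy / mag
        # Negative divergence of the normalized gradient field.
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        return -div

    def reconstruct(A, At, y, shape, lam=0.01, step=1e-3, iters=100):
        """Gradient descent on 0.5*||A(x) - y||^2 + lam * TV(x).
        A and At are callables implementing the forward projector and its adjoint."""
        x = np.zeros(shape)
        for _ in range(iters):
            grad = At(A(x) - y) + lam * tv_gradient(x)
            x = np.clip(x - step * grad, 0, None)   # keep attenuation non-negative
        return x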
APA, Harvard, Vancouver, ISO, and other styles
4

Jiang, Shufeng, and Keqi Wang. "Image Processing and Splicing Method for 3D Optical Scanning Surface Reconstruction of Wood Grain." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 08 (November 20, 2019): 2054021. http://dx.doi.org/10.1142/s021800142054021x.

Full text
Abstract:
Based on environment compensation, scanning image processing technology was employed to investigate a point cloud data and space matching method for wood grain. A collision avoidance recognition algorithm was used to collocate mark points, which remarkably reduced the error matching of distance-coincidence mark points. The proposed method used weight-based marking of flag sample points to compensate for the ambiguity of the mark points' distinguishing information in the scanning environment, and selected the optimal path for the weighted results. The same splicing points in different images were identified, solving the problem of fuzzy splicing by distance matching. Experimental results were compared with a three-dimensional (3D) printed wood cross-section model reconstructed by surface fitting. Results showed that the 3D scanning image mosaic of wood growth texture at the cross-section had no obvious stereo characteristics. The proposed method has improved the accuracy of surface mosaics in reverse scanning imaging for wood grain. This method can be applied to support the application needs of reverse surface reconstruction.
APA, Harvard, Vancouver, ISO, and other styles
5

Grass, M., R. Koppe, E. Klotz, R. Proksa, M. H. Kuhn, H. Aerts, J. Op de Beek, and R. Kemkers. "Three-dimensional reconstruction of high contrast objects using C-arm image intensifier projection data." Computerized Medical Imaging and Graphics 23, no. 6 (December 1999): 311–21. http://dx.doi.org/10.1016/s0895-6111(99)00028-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chen, Yuan-Tsung, and Ming-Shi Wang. "Three-dimensional reconstruction and fusion for multi-modality spinal images." Computerized Medical Imaging and Graphics 28, no. 1-2 (January 2004): 21–31. http://dx.doi.org/10.1016/j.compmedimag.2003.08.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Ting. "Optimized Fuzzy Clustering Algorithms for Brain MRI Image Segmentation Based on Local Gaussian Probability and Anisotropic Weight Models." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 09 (May 27, 2018): 1857005. http://dx.doi.org/10.1142/s0218001418570057.

Full text
Abstract:
Brain Magnetic Resonance Imaging (MRI) image segmentation is one of the critical technologies of clinical medicine, and is the basis of three-dimensional reconstruction and downstream analysis between normal tissues and diseased tissues. However, there are various limitations in brain MRI images, such as gray irregularities, noise, and low contrast, reducing the accuracy of brain MRI image segmentation. In this paper, we propose two optimization solutions for the fuzzy clustering algorithm, based on a local Gaussian probability fuzzy C-means (LGP-FCM) model and an anisotropic weight fuzzy C-means (AW-FCM) model, and apply them to brain MRI image segmentation. An FCM clustering algorithm is proposed based on AW-FCM. By introducing the new neighborhood weight calculation method, each point has an anisotropic weight, which effectively overcomes the influence of noise on the image segmentation. In addition, the LGP model is introduced in the objective function of fuzzy clustering, and a fuzzy clustering segmentation algorithm based on LGP-FCM is proposed. A clustering segmentation algorithm of the adaptive scale fuzzy LGP model is proposed. The neighborhood scale corresponding to each pixel in the image is automatically estimated, which improves the robustness of the model and achieves the purpose of precise segmentation. Extensive experimental results demonstrate that the proposed LGP-FCM algorithm outperforms comparison algorithms in terms of sensitivity, specificity and accuracy. LGP-FCM can effectively segment the target regions from brain MRI images.
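Both variants described above (LGP-FCM and AW-FCM) extend the classical fuzzy C-means objective. As background only, the sketch below implements the standard FCM updates for cluster centers and memberships; the local Gaussian probability and anisotropic weight terms of the paper are not implemented, and the number of clusters c and the fuzzifier m are arbitrary choices.

    import numpy as np

    def fuzzy_c_means(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
        """Standard fuzzy C-means on feature vectors X of shape (n_samples, n_features).
        Returns cluster centers and the membership matrix U of shape (c, n_samples)."""
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        U = rng.random((c, n))
        U /= U.sum(axis=0)                          # memberships sum to 1 per sample
        for _ in range(iters):
            Um = U ** m
            centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
            d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-10
            U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
            U_new /= U_new.sum(axis=0)
            if np.abs(U_new - U).max() < tol:
                U = U_new
                break
            U = U_new
        return centers, U

    # Example: cluster the intensities of a grayscale slice into c tissue classes.
    # labels = fuzzy_c_means(img.reshape(-1, 1))[1].argmax(axis=0).reshape(img.shape)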
APA, Harvard, Vancouver, ISO, and other styles
8

Hoffmeister, Jeffrey W., Gregory C. Rinehart, and Michael W. Vannier. "Three-dimensional surface reconstructions using a general purpose image processing system." Computerized Medical Imaging and Graphics 14, no. 1 (January 1990): 35–42. http://dx.doi.org/10.1016/0895-6111(90)90138-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yuan, Xiaohui, and Xiaojing Yuan. "Fusion of multi-planar images for improved three-dimensional object reconstruction." Computerized Medical Imaging and Graphics 35, no. 5 (July 2011): 373–82. http://dx.doi.org/10.1016/j.compmedimag.2010.11.013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Guohui, Xuan Zhang, and Jin Cheng. "A Unified Shape-From-Shading Approach for 3D Surface Reconstruction Using Fast Eikonal Solvers." International Journal of Optics 2020 (May 29, 2020): 1–12. http://dx.doi.org/10.1155/2020/6156058.

Full text
Abstract:
Object shape reconstruction from images has been an active topic in computer vision. Shape-from-shading (SFS) is an important approach for inferring 3D surface from a single shading image. In this paper, we present a unified SFS approach for surfaces of various reflectance properties using fast eikonal solvers. The whole approach consists of three main components: a unified SFS model, a unified eikonal-type partial differential image irradiance (PDII) equation, and fast eikonal solvers for the PDII equation. The first component is designed to address different reflectance properties including diffuse, specular, and hybrid reflections in the imaging process of the camera. The second component is meant to derive the PDII equation under an orthographic camera projection and a single distant point light source whose direction is the same as the camera. Finally, the last component is targeted at solving the resultant PDII equation by using fast eikonal solvers. It comprises two Godunov-based schemes with fast sweeping method that can handle the eikonal-type PDII equation. Experiments on several synthetic and real images demonstrate that each type of the surfaces can be effectively reconstructed with more accurate results and less CPU running time.
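The paper above reduces shape-from-shading to an eikonal-type PDE solved with Godunov schemes and fast sweeping. Its PDII equation and reflectance models are not reproduced here; the sketch below is only a textbook 2D fast sweeping solver for |grad u| = f with u = 0 at given source pixels, where the grid spacing h, the right-hand side f, and the source set are placeholders (boundary handling is also simplified).

    import numpy as np

    def fast_sweeping_eikonal(f, sources, h=1.0, sweeps=4):
        """Solve |grad u| = f on a 2D grid using the Godunov upwind discretization
        and Gauss-Seidel sweeps in four alternating orderings."""
        ny, nx = f.shape
        u = np.full((ny, nx), 1e10)
        for i, j in sources:
            u[i, j] = 0.0
        orders = [(range(ny), range(nx)),
                  (range(ny), range(nx - 1, -1, -1)),
                  (range(ny - 1, -1, -1), range(nx)),
                  (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
        for _ in range(sweeps):
            for ys, xs in orders:
                for i in ys:
                    for j in xs:
                        a = min(u[max(i - 1, 0), j], u[min(i + 1, ny - 1), j])
                        b = min(u[i, max(j - 1, 0)], u[i, min(j + 1, nx - 1)])
                        fh = f[i, j] * h
                        if abs(a - b) >= fh:       # update reached from one side only
                            ubar = min(a, b) + fh
                        else:                      # two-sided Godunov update
                            ubar = 0.5 * (a + b + np.sqrt(2 * fh**2 - (a - b)**2))
                        u[i, j] = min(u[i, j], ubar)
        return u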
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Image reconstruction. Three-dimensional imaging. Computer vision"

1

Mai, Fei, and 買斐. "3D reconstruction of lines, ellipses and curves from multiple images." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B40887911.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mai, Fei. "3D reconstruction of lines, ellipses and curves from multiple images." Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/B40887911.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Liu, Chi Hin. "Structured lighting 3D reconstruction and 3D shape matching of human model for garment industries." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?MECH%202006%20LIUC.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Xiongbo, and 張雄波. "3D trajectory recovery in spatial and time domains from multiple images." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/195966.

Full text
Abstract:
Recovering 3D structure from multiple 2D images is a fundamental problem in computer vision. Most existing methods focus on the reconstruction of static points in 3D space; however, the reconstruction of trajectories which result from moving points should also have our full attention due to its high efficiency in structure modeling and description. Depending on whether points are moving in the spatial domain or in the time domain, trajectory recovery turns out to be a curve reconstruction problem or a non-rigid structure recovery problem, respectively. This thesis addresses several issues that were not considered in existing approaches in both problems. For the curve reconstruction problem, we propose a dedicated method for planar curve reconstruction and an optimization method for general curve reconstruction. In the planar curve reconstruction method, measured projected curves that are typically represented by sequences of points are fitted using B-splines before reconstruction, enabling the occlusion problem to be handled naturally. Also, an optimization algorithm is developed to match the fitted curves across images while enforcing the planarity constraint, and the algorithm is guaranteed to converge. In the general curve reconstruction method, the Non-Uniform Rational B-Spline (NURBS) is employed for curve representation in 3D space, which improves the flexibility in curve description while maintaining the smoothness of a curve at the same time. Starting with measured point sequences of projected curves, a complete set of algorithms is developed and evaluated, including curve initialization and optimization of the initialized curve by minimizing the 2D reprojection error that is defined to be the 2D Euclidean distance from measured points to reprojected curves. Experiments show that the proposed methods are robust and efficient, and are excellent in producing high-quality reconstruction results. For the non-rigid structure recovery problem, we propose two methods for the recovery of non-rigid structures together with a strategy that automates the process of non-rigid structure recovery. Compared with existing methods on synthetic datasets, both of the proposed methods perform significantly better when there are noise contaminations in the measurements, and are capable of recovering the ground-truth solution when the measurements are noise-free, whereas no existing method has achieved this so far. In the first method, namely the factorization-based method, the available constraints in non-rigid structure from motion are analyzed and the ambiguity of the solution space of the proposed method is clarified, leading to a straightforward approach that requires only the solution of several linear equations in the least-squares sense instead of having to solve non-linear optimization problems as in existing methods. In the second method, namely the bundle adjustment method, a modified trajectory basis model that is demonstrated to be more flexible for non-rigid structure description is proposed. The method seeks the optimal non-rigid structure and camera matrices by alternately solving a set of linear equations in the least-squares sense. Experiments on real non-rigid motions show that the method improves the quality of reconstruction significantly.
Thesis (Ph.D.), Electrical and Electronic Engineering, The University of Hong Kong, 2013.
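The thesis above refines NURBS-represented 3D curves by minimizing the 2D reprojection error against measured projected points. The sketch below is a much-simplified version of that idea: it refines a set of plain 3D points (not a NURBS parameterization) by nonlinear least squares over the reprojection residuals, with the camera matrices assumed known and fixed; the function names are illustrative only.

    import numpy as np
    from scipy.optimize import least_squares

    def project(P, X):
        """Project 3D points X of shape (n, 3) with a 3x4 camera matrix P."""
        Xh = np.hstack([X, np.ones((len(X), 1))])
        x = (P @ Xh.T).T
        return x[:, :2] / x[:, 2:3]

    def refine_points(X0, cameras, observations):
        """Minimize the 2D reprojection error of 3D points over all views.
        cameras: list of 3x4 matrices; observations: list of (n, 2) measured points."""
        def residuals(flat):
            X = flat.reshape(-1, 3)
            return np.concatenate([(project(P, X) - obs).ravel()
                                   for P, obs in zip(cameras, observations)])
        return least_squares(residuals, X0.ravel()).x.reshape(-1, 3)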
APA, Harvard, Vancouver, ISO, and other styles
5

Kalghatgi, Roshan Satish. "Reconstruction techniques for fixed 3-D lines and fixed 3-D points using the relative pose of one or two cameras." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43590.

Full text
Abstract:
In general, stereovision can be defined as a two part problem. The first is the correspondence problem. This involves determining the image point in each image of a set of images that correspond to the same physical point P. We will call this set of image points, N. The second problem is the reconstruction problem. Once a set of image points, N, that correspond to point P has been determined, N is then used to extract three dimensional information about point P. This master's thesis presents three novel solutions to the reconstruction problem. Two of the techniques presented are for detecting the location of a 3-D point and one for detecting a line expressed in a three dimensional coordinate system. These techniques are tested and validated using a unique 3-D finger detection algorithm. The techniques presented are unique because of their simplicity and because they do not require the cameras to be placed in specific locations, orientations or have specific alignments. On the contrary, it will be shown that the techniques presented in this thesis allow the two cameras used to assume almost any relative pose provided that the object of interest is within their field of view. The relative pose of the cameras at a given instant in time, along with basic equations from the perspective image model are used to form a system of equations that when solved, reveal the 3-D coordinates of a particular fixed point of interest or the three dimensional equation of a fixed line of interest. Finally, it will be shown that a single moving camera can successfully perform the same line and point detection accomplished by two cameras by altering the pose of the camera. The results presented in this work are beneficial to any typical stereovision application because of the computational ease in comparison to other point and line reconstruction techniques. But more importantly, this work allows for a single moving camera to perceive three-dimensional position information, which effectively removes the two camera constraint for a stereo vision system. When used with other monocular cues such as texture or color, the work presented in this thesis could be as accurate as binocular stereo vision at interpreting three dimensional information. Thus, this work could potentially increase the three dimensional perception of a robot that normally uses one camera, such as an eye-in-hand robot or a snake like robot.
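The thesis above recovers a fixed 3D point by combining the relative pose of two cameras with the perspective imaging model into a solvable system of equations. One standard way to set up such a system, not necessarily the thesis's own formulation, is linear (DLT) triangulation; in the sketch below the 3x4 projection matrices, the calibration K, and the relative pose (R, t) are assumed inputs.

    import numpy as np

    def triangulate_point(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one 3D point from two views.
        P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coordinates."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]                        # de-homogenize

    # Example with an assumed calibration K and relative pose (R, t):
    # P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    # P2 = K @ np.hstack([R, t.reshape(3, 1)])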
APA, Harvard, Vancouver, ISO, and other styles
6

Ganapathi, Annadurai Kartick. "3D Shape Reconstruction from Multiple Range Image Views." The University of Waikato, 2006. http://hdl.handle.net/10289/2267.

Full text
Abstract:
Shape reconstruction of different three-dimensional objects using multiple range images has evolved within the recent past. In this research, shape reconstruction of a three-dimensional object using multiple range image views is investigated. Range images were captured using the Waikato Range Imager. This range imaging camera is novel in that it uses heterodyne imaging and is capable of acquiring range images with precision of less than a millimeter simultaneously over a full field. Multiple views of small objects were taken, and the FastRBF was explored as a means of registration and surface rendering. For comparison to the real range data, simulated range data under noise-free conditions were also generated and reconstructed with the FastRBF toolbox. The registration and reconstruction of a simple object was performed using different views with the FastRBF toolbox. Analysis of the registration process showed that the translation error produced by distortion during registration of different views hinders the process of reconstructing a complete surface. While analyzing the shape reconstruction using the FastRBF tool, it was also determined that a small change in accuracy values can affect the interpolation drastically. Results of reconstruction of a real 3D object from multiple views are shown.
APA, Harvard, Vancouver, ISO, and other styles
7

Steedly, Drew. "Rigid Partitioning Techniques for Efficiently Generating 3D Reconstructions from Images." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4925.

Full text
Abstract:
This thesis explores efficient techniques for generating 3D reconstructions from imagery. Non-linear optimization is one of the core techniques used when computing a reconstruction and is a computational bottleneck for large sets of images. Since non-linear optimization requires a good initialization to avoid getting stuck in local minima, robust systems for generating reconstructions from images build up the reconstruction incrementally. A hierarchical approach is to split up the images into small subsets, reconstruct each subset independently and then hierarchically merge the subsets. Rigidly locking together portions of the reconstructions reduces the number of parameters needed to represent them when merging, thereby lowering the computational cost of the optimization. We present two techniques that involve optimizing with parts of the reconstruction rigidly locked together. In the first, we start by rigidly grouping the cameras and scene features from each of the reconstructions being merged into separate groups. Cameras and scene features are then incrementally unlocked and optimized until the reconstruction is close to the minimum energy. This technique is most effective when the influence of the new measurements is restricted to a small set of parameters. Measurements that stitch together weakly coupled portions of the reconstruction, though, tend to cause deformations in the low error modes of the reconstruction and cannot be efficiently incorporated with the previous technique. To address this, we present a spectral technique for clustering the tightly coupled portions of a reconstruction into rigid groups. Reconstructions partitioned in this manner can closely mimic the poorly conditioned, low error modes, and therefore efficiently incorporate measurements that stitch together weakly coupled portions of the reconstruction. We explain how this technique can be used to scalably and efficiently generate reconstructions from large sets of images.
APA, Harvard, Vancouver, ISO, and other styles
8

Schindler, Grant. "Unlocking the urban photographic record through 4D scene modeling." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34719.

Full text
Abstract:
Vast collections of historical photographs are being digitally archived and placed online, providing an objective record of the last two centuries that remains largely untapped. We propose that time-varying 3D models can pull together and index large collections of images while also serving as a tool of historical discovery, revealing new information about the locations, dates, and contents of historical images. In particular, our goal is to use computer vision techniques to tie together a large set of historical photographs of a given city into a consistent 4D model of the city: a 3D model with time as an additional dimension. To extract 4D city models from historical images, we must perform inference about the position of cameras and scene structure in both space and time. Traditional structure from motion techniques can be used to deal with the spatial problem, while here we focus on the problem of inferring temporal information: a date for each image and a time interval for which each structural element in the scene persists. We first formulate this task as a constraint satisfaction problem based on the visibility of structural elements in each image, resulting in a temporal ordering of images. Next, we present methods to incorporate real date information into the temporal inference solution. Finally, we present a general probabilistic framework for estimating all temporal variables in structure from motion problems, including an unknown date for each camera and an unknown time interval for each structural element. Given a collection of images with mostly unknown or uncertain dates, we can use this framework to automatically recover the dates of all images by reasoning probabilistically about the visibility and existence of objects in the scene. We present results for image collections consisting of hundreds of historical images of cities taken over decades of time, including Manhattan and downtown Atlanta.
APA, Harvard, Vancouver, ISO, and other styles
9

Wan, Sau Kuen. "Modeling with panoramic image network for image-based walkthroughs." access full-text access abstract and table of contents, 2005. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?mphil-cs-b1988588xa.pdf.

Full text
Abstract:
Thesis (M.Phil.)--City University of Hong Kong, 2005. Submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Philosophy. Includes bibliographical references (leaves 81-84).
APA, Harvard, Vancouver, ISO, and other styles
10

Liang, Chen. "3D model reconstruction from silhouettes." Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/b40203311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Image reconstruction. Three-dimensional imaging. Computer vision"

1

Salzmann, Mathieu. Deformable surface 3D reconstruction from monocular images. San Rafael, Calif.: Morgan & Claypool, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Favaro, Paolo. 3-D shape estimation and image restoration: Exploiting defocus and motion blur. London: Springer, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Schlüns, Karsten, and Andreas Koschan, eds. Computer vision: Three-dimensional data from images. Singapore: Springer, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Klette, Reinhard. Computer vision: Three-dimensional data from images. Singapore: Springer, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Toriwaki, Junichiro. Fundamentals of Three-Dimensional Digital Image Processing. London: Springer London, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hemmy, D. C., and R. D. Cooter, eds. Craniofacial deformities: Atlas of three dimensional reconstruction from computed tomography. New York: Springer-Verlag, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pu, Shi. Knowledge based building facade reconstruction from laser point clouds and images. Delft: Netherlands Geodetic Commission, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

IEEE Workshop on Stereo and Multi-Baseline Vision (2001: Kauai, Hawaii). IEEE Workshop on Stereo and Multi-Baseline Vision (SMBV 2001): Proceedings, 9-10 December 2001, Kauai, Hawaii. Los Alamitos, Calif.: IEEE Computer Society, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wong, Kenneth H. Medical imaging 2010: Visualization, image-guided procedures, and modeling: 14-16 February 2010, San Diego, California, United States. Bellingham, Wash.: SPIE, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Image reconstruction. Three-dimensional imaging. Computer vision"

1

Minar, Matiur Rahman, and Heejune Ahn. "CloTH-VTON: Clothing Three-Dimensional Reconstruction for Hybrid Image-Based Virtual Try-ON." In Computer Vision – ACCV 2020, 154–72. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69544-6_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tiwari, Shailendra, and Rajeev Srivastava. "Research and Developments in Medical Image Reconstruction Methods and its Applications." In Research Developments in Computer Vision and Image Processing, 274–312. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4558-5.ch014.

Full text
Abstract:
Image reconstruction from projection is the field that lays the foundation for Medical Imaging or Medical Image Processing. The rapid and proceeding progress in medical image reconstruction, and the related developments in analysis methods and computer-aided diagnosis, has promoted medical imaging into one of the most important sub-fields in scientific imaging. Computer technology has enabled tomographic and three-dimensional reconstruction of images, illustrating both anatomical features and physiological functioning, free from overlying structures. In this chapter, the authors share their opinions on the research and development in the field of Medical Image Reconstruction Techniques, Computed Tomography (CT), challenges and the impact of future technology developments in CT, Computed Tomography Metrology in industrial research & development, technology, and clinical performance of different CT-scanner generations used for cardiac imaging, such as Electron Beam CT (EBCT), single-slice CT, and Multi-Detector row CT (MDCT) with 4, 16, and 64 simultaneously acquired slices. The authors identify the limitations of current CT-scanners, indicate potential of improvement and discuss alternative system concepts such as CT with area detectors and Dual Source CT (DSCT), recent technology with a focus on generation and detection of X-rays, as well as image reconstruction are discussed. Furthermore, the chapter includes aspects of applications, dose exposure in computed tomography, and a brief overview on special CT developments. Since this chapter gives a review of the major accomplishments and future directions in this field, with emphasis on developments over the past 50 years, the interested reader is referred to recent literature on computed tomography including a detailed discussion of CT technology in the references section.
APA, Harvard, Vancouver, ISO, and other styles
3

Ferrari, Claudio, Stefano Berretti, and Alberto del Bimbo. "Single View 3D Face Reconstruction." In Recent Advances in 3D Imaging, Modeling, and Reconstruction, 215–27. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-5225-5294-9.ch010.

Full text
Abstract:
3D face reconstruction from a single 2D image is a fundamental computer vision problem of extraordinary difficulty that dates back to the 1980s. Briefly, it is the task of recovering the three-dimensional geometry of a human face from a single RGB image. While the problem of automatically estimating the 3D structure of a generic scene from RGB images can be regarded as a general task, the particular morphology and non-rigid nature of human faces make it a challenging problem for which dedicated approaches are still currently studied. This chapter aims at providing an overview of the problem, its evolutions, the current state of the art, and future trends.
APA, Harvard, Vancouver, ISO, and other styles
4

Hong, Stephen Baoming. "Three-Dimensional Reconstruction Methods in Near-Field Coded Aperture for SPECT Imaging System." In Computer Vision in Medical Imaging, 175–88. World Scientific, 2013. http://dx.doi.org/10.1142/9789814460941_0010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tiwari, Shailendra, and Rajeev Srivastava. "Research and Developments in Medical Image Reconstruction Methods and Its Applications." In Medical Imaging, 491–535. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-0571-6.ch019.

Full text
Abstract:
Image reconstruction from projection is the field that lays the foundation for Medical Imaging or Medical Image Processing. The rapid and proceeding progress in medical image reconstruction, and the related developments in analysis methods and computer-aided diagnosis, has promoted medical imaging into one of the most important sub-fields in scientific imaging. Computer technology has enabled tomographic and three-dimensional reconstruction of images, illustrating both anatomical features and physiological functioning, free from overlying structures. In this chapter, the authors share their opinions on the research and development in the field of Medical Image Reconstruction Techniques, Computed Tomography (CT), challenges and the impact of future technology developments in CT, Computed Tomography Metrology in industrial research & development, technology, and clinical performance of different CT-scanner generations used for cardiac imaging, such as Electron Beam CT (EBCT), single-slice CT, and Multi-Detector row CT (MDCT) with 4, 16, and 64 simultaneously acquired slices. The authors identify the limitations of current CT-scanners, indicate potential of improvement and discuss alternative system concepts such as CT with area detectors and Dual Source CT (DSCT), recent technology with a focus on generation and detection of X-rays, as well as image reconstruction are discussed. Furthermore, the chapter includes aspects of applications, dose exposure in computed tomography, and a brief overview on special CT developments. Since this chapter gives a review of the major accomplishments and future directions in this field, with emphasis on developments over the past 50 years, the interested reader is referred to recent literature on computed tomography including a detailed discussion of CT technology in the references section.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Image reconstruction. Three-dimensional imaging. Computer vision"

1

Müller, Simone, and Dieter Kranzlmüller. "Dynamic Sensor Matching for Parallel Point Cloud Data Acquisition." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita, 2021. http://dx.doi.org/10.24132/csrn.2021.3002.3.

Full text
Abstract:
Based on depth perception of individual stereo cameras, spatial structures can be derived as point clouds. The quality of such three-dimensional data is technically restricted by sensor limitations, latency of recording, and insufficient object reconstructions caused by surface illustration. Additionally external physical effects like lighting conditions, material properties, and reflections can lead to deviations between real and virtual object perception. Such physical influences can be seen in rendered point clouds as geometrical imaging errors on surfaces and edges. We propose the simultaneous use of multiple and dynamically arranged cameras. The increased information density leads to more details in surrounding detection and object illustration. During a pre-processing phase the collected data are merged and prepared. Subsequently, a logical analysis part examines and allocates the captured images to three-dimensional space. For this purpose, it is necessary to create a new metadata set consisting of image and localisation data. The post-processing reworks and matches the locally assigned images. As a result, the dynamic moving images become comparable so that a more accurate point cloud can be generated. For evaluation and better comparability we decided to use synthetically generated data sets. Our approach builds the foundation for dynamic and real-time based generation of digital twins with the aid of real sensor data.
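The paper above fuses depth data from multiple, dynamically arranged cameras before a matching stage. The sketch below covers only the most basic part of such a pipeline: transforming per-camera point clouds into a common world frame using known 4x4 extrinsic poses and stacking them; the calibration, the metadata set, and the post-processing matching described in the abstract are not reproduced, and the poses are assumed to be available.

    import numpy as np

    def merge_point_clouds(clouds, poses):
        """Transform per-camera point clouds (each of shape (n_i, 3)) into a common
        world frame using known 4x4 camera-to-world poses, then stack them."""
        merged = []
        for pts, T in zip(clouds, poses):
            homog = np.hstack([pts, np.ones((len(pts), 1))])
            merged.append((T @ homog.T).T[:, :3])
        return np.vstack(merged)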
APA, Harvard, Vancouver, ISO, and other styles
2

Tallita Passos, Bianka, Wemerson Delcio Parreira, Anita Maria da Rocha Fernandes, and Eros Comunello. "Detecção de buracos em pavimento asfáltico com base em Processamento Digital de Imagens e Deep Learning" [Pothole detection in asphalt pavement based on digital image processing and deep learning]. In Computer on the Beach. São José: Universidade do Vale do Itajaí, 2021. http://dx.doi.org/10.14210/cotb.v12.p422-427.

Full text
Abstract:
The road infrastructure conditions are directly related to the safety and operational cost of transportation. Potholes are defects in the paving that affect safety on the road. Therefore, identifying potholes is an important step in defining road maintenance and intervention strategies. Among the approaches used to detect defects in roads are vibration techniques, laser scanning and 3D reconstruction, and finally methods that are vision-based. These vision-based methods utilize image processing, which is considered low cost and can be performed with common two-dimensional cameras. This research aims to combine digital image processing and deep learning concepts, facilitating the recognition of pothole-like defects in road images with asphalt paving. In order to carry out these experiments, different network architectures were used.
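The study above combines image preprocessing with deep networks to recognize pothole-like defects, but the abstract does not spell out a single architecture. Purely as an illustration, the PyTorch sketch below defines a tiny binary classifier for preprocessed 64x64 grayscale road patches; the layer sizes, patch size, and two-class layout are assumptions, not the networks evaluated in the paper.

    import torch
    import torch.nn as nn

    class PotholeNet(nn.Module):
        """Tiny binary classifier sketch for 64x64 grayscale road patches."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, 2)   # pothole / no pothole

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # logits = PotholeNet()(torch.randn(8, 1, 64, 64))   # a batch of preprocessed patches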
APA, Harvard, Vancouver, ISO, and other styles
3

Sun, Zhenyu, Liqiang Liu, and Lihui Wang. "Three-dimensional Reconstruction of Single Pipeline Radiographic Image." In 2020 International Conference on Computer Vision, Image and Deep Learning (CVIDL). IEEE, 2020. http://dx.doi.org/10.1109/cvidl51233.2020.00019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sharma, Arvind, Shraddha Chaudhary, Sumantra Dutta Roy, and Prakash Chand. "Three dimensional reconstruction of cylindrical pellet using stereo vision." In 2015 Fifth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG). IEEE, 2015. http://dx.doi.org/10.1109/ncvpripg.2015.7489939.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chang, Hsuan-Ting, Chien-Yue Chen, Jhe-Sian Lin, and Wu-Jhyun Li. "Image Reconstruction by Applying Fresnel Transform on Phase-Only Computer Generated Hologram at Tilted Planes." In Digital Holography and Three-Dimensional Imaging. Washington, D.C.: OSA, 2015. http://dx.doi.org/10.1364/dh.2015.dw2a.11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kashiwagi, Akifumi, and Yuji Sakamoto. "A fast calculation method of cylindrical computer-generated holograms which perform image-reconstruction of volume data." In Digital Holography and Three-Dimensional Imaging. Washington, D.C.: OSA, 2007. http://dx.doi.org/10.1364/dh.2007.dwb7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Guo, Min, Yu-juan Si, Shi-gang Wang, Yuan-zhi Lyu, Bo-wen Jia, and Wei Wu. "Computer virtual reconstruction of a three dimensional scene in integral imaging." In 2016 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2016. http://dx.doi.org/10.1109/icalip.2016.7846529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mehta, Chandresh, and Thenkurussi Kesavadas. "A Framework for Three Dimensional Solid Model Reconstruction." In ASME 2006 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2006. http://dx.doi.org/10.1115/detc2006-99501.

Full text
Abstract:
3D object construction by reverse engineering falls into two categories: surface reconstruction and solid model reconstruction. The 3D surface reconstruction techniques are intended to extract only the geometric information from the measured point cloud and are commonly used in computer graphics and computer vision, whereas the 3D solid model reconstruction techniques are expected to extract the geometric as well as the topological information from the measured point cloud and have application in the field of CAD/CAM. This paper presents a novel framework for 3D solid model reconstruction, which will enable reconstruction of a B-rep model of a physical object based on the 3D point cloud data captured from the surface of the object. In this framework, we use a magnetic position sensor for measuring the data from the surface of the object. This has numerous advantages over conventional methods of data acquisition that use a laser scanner or a coordinate measuring machine. For segmenting the measured point cloud data into sub-regions, a non-iterative region growing algorithm is developed and implemented. Our surface detection scheme is based on a Modified Gaussian Image (MGI) of the sub-region, and least-squares techniques are used for fitting a surface to the points in a sub-region. The reconstructed B-Rep model is stored in an ISO 10303 (STEP) file format so that it can be imported into standard CAD/CAM systems for future modifications or analysis.
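The framework above segments the measured point cloud into sub-regions and fits a surface to each by least squares. As a minimal illustration of the simplest such fit, and not of the paper's MGI-based surface detection, the sketch below fits a plane to a segmented patch of points with the SVD.

    import numpy as np

    def fit_plane(points):
        """Least-squares plane fit to an (n, 3) patch of points.
        Returns a point on the plane (the centroid) and the unit normal."""
        centroid = points.mean(axis=0)
        # The right singular vector with the smallest singular value is the
        # direction of least variance, i.e. the plane normal.
        _, _, Vt = np.linalg.svd(points - centroid)
        normal = Vt[-1]
        return centroid, normal / np.linalg.norm(normal)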
APA, Harvard, Vancouver, ISO, and other styles
9

Theodoracatos, Vassilios E., and Dale E. Calkins. "A 3-D Vision System Model for Automatic Object Surface Sensing." In ASME 1992 Design Technical Conferences. American Society of Mechanical Engineers, 1992. http://dx.doi.org/10.1115/detc1992-0166.

Full text
Abstract:
The development of a “light striping” (structured light) based three-dimensional vision system for automatic surface sensing is presented. The three-dimensional world-point reconstruction process and system modeling methodology involves homogeneous coordinate transformations applied in two independent stages: the video imaging stage, using three-dimensional perspective transformations, and the mechanical scanning stage, using three-dimensional affine transformations. Concatenation of the two independent matrix models leads to a robust four-by-four matrix system model. The independent treatment of the two-dimensional imaging process from the three-dimensional modeling process has reduced the number of unknown internal and external geometrical parameters. The reconstructed sectional contours (light stripes) are automatically and in real-time registered with respect to a common world coordinate system in a format compatible with B-spline surface approximation. The reconstruction process is demonstrated by measuring the surface of a 19.5-ft-long by 2-ft-beam rowing shell. A detailed statistical accuracy and precision analysis shows an average error of 0.2 percent (0.002) of the object’s largest dimension within the camera’s field of view. System sensitivity analysis reveals a nonlinear increase for angles between the normals of the image and laser planes higher than 45 degrees.
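The system above concatenates a perspective video-imaging model and an affine scanning-stage model into a single four-by-four homogeneous matrix. The sketch below only shows the mechanics of building and concatenating homogeneous transforms and applying the result to a point; the perspective calibration itself is not reproduced, and the rotations and translations are assumed, pre-calibrated placeholders.

    import numpy as np

    def affine(R, t):
        """Build a 4x4 homogeneous matrix from a 3x3 rotation and a translation."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def transform(T, p):
        """Apply a 4x4 homogeneous transform to a 3D point."""
        return (T @ np.append(p, 1.0))[:3]

    # Concatenating an assumed stage motion with an assumed camera pose gives one
    # combined matrix mapping camera-frame measurements to world coordinates:
    # world_from_camera = affine(R_stage, t_stage) @ affine(R_cam, t_cam)
    # X_world = transform(world_from_camera, X_camera)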
APA, Harvard, Vancouver, ISO, and other styles
10

Medellín Castillo, Hugo I., and Manuel A. Ochoa Alfaro. "Development of a Tridimensional Visualization and Model Reconstruction System Based on Computed Tomographic Data." In ASME 2011 International Mechanical Engineering Congress and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/imece2011-62822.

Full text
Abstract:
Medical image processing constitutes an important research area of biomedical engineering since it provides accurate human body information for 3D visualization and analysis, diagnosis, surgical treatment planning, surgical training, prosthesis and implant design, and wafer and surgical guide design. Computed tomography (CT) and magnetic resonance imaging (MRI) have had a great impact in medicine since they can represent complex three-dimensional (3D) anomalies or deformities. In this paper, the development of a system for tridimensional visualization and model reconstruction based on CT data is presented. The aim is to provide a system capable of assisting the design process of prostheses, implants and surgical guides by reconstructing anatomical 3D models which can be exported to any CAD program or computer aided surgery (CAS) system. A complete description of the proposed system is presented. The new system is able to visualize and reconstruct bone and/or soft tissues. Three types of renderers are used: one for 3D visualization based on three planes, another for 3D surface reconstruction based on the well-known marching cubes algorithm, and a third for 3D volume visualization based on the ray-casting algorithm. The functionality and performance of the system are evaluated by means of four case studies. The results have proved the capability of the system to visualize and reconstruct anatomical 3D models from medical images.
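The system above reconstructs surfaces with the marching cubes algorithm and renders volumes by ray casting. As a small usage sketch only, the snippet below extracts an iso-surface mesh from a synthetic volume with scikit-image's marching cubes implementation; the synthetic sphere and the threshold stand in for real CT data and a tissue-specific level, and voxels are assumed isotropic.

    import numpy as np
    from skimage import measure

    # Synthetic sphere volume standing in for a CT scan.
    z, y, x = np.mgrid[-32:32, -32:32, -32:32]
    volume = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(np.float32)

    # Extract an iso-surface mesh at the chosen threshold
    # (with real data, e.g. a bone-level threshold).
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    print(verts.shape, faces.shape)        # mesh vertices and triangle indices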
APA, Harvard, Vancouver, ISO, and other styles
