To see the other types of publications on this topic, follow the link: 2D-3D dimensional images.

Journal articles on the topic '2D-3D dimensional images'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic '2D-3D dimensional images.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Kim, Hyungsuk, Chang Hyun Yoo, Soo Bin Park, and Hyun Seok Song. "Difference in glenoid retroversion between two-dimensional axial computed tomography and three-dimensional reconstructed images." Clinics in Shoulder and Elbow 23, no. 2 (2020): 71–79. http://dx.doi.org/10.5397/cise.2020.00122.

Full text
Abstract:
Background: The glenoid version of the shoulder joint correlates with the stability of the glenohumeral joint and the clinical results of total shoulder arthroplasty. We sought to analyze and compare the glenoid version measured by traditional axial two-dimensional (2D) computed tomography (CT) and three-dimensional (3D) reconstructed images at different levels. Methods: A total of 30 cases, including 15 male and 15 female patients, who underwent 3D shoulder CT imaging was randomly selected and matched by sex consecutively at one hospital. The angular difference between the scapular body axis a
APA, Harvard, Vancouver, ISO, and other styles
2

Sudjai, Narumol, Palanan Siriwanarangsun, Nittaya Lektrakul, et al. "Robustness of Radiomic Features: Two-Dimensional versus Three-Dimensional MRI-Based Feature Reproducibility in Lipomatous Soft-Tissue Tumors." Diagnostics 13, no. 2 (2023): 258. http://dx.doi.org/10.3390/diagnostics13020258.

Full text
Abstract:
This retrospective study aimed to compare the intra- and inter-observer manual-segmentation variability in the feature reproducibility between two-dimensional (2D) and three-dimensional (3D) magnetic-resonance imaging (MRI)-based radiomic features. The study included patients with lipomatous soft-tissue tumors that were diagnosed with histopathology and underwent MRI scans. Tumor segmentation based on the 2D and 3D MRI images was performed by two observers to assess the intra- and inter-observer variability. In both the 2D and the 3D segmentations, the radiomic features were extracted from the
APA, Harvard, Vancouver, ISO, and other styles
3

Tulunoglu, Ozlem, Elcin Esenlik, Ayse Gulsen, and Ibrahim Tulunoglu. "A Comparison of Three-Dimensional and Two-Dimensional Cephalometric Evaluations of Children with Cleft Lip and Palate." European Journal of Dentistry 05, no. 04 (2011): 451–58. http://dx.doi.org/10.1055/s-0039-1698918.

Full text
Abstract:
Objectives: The aim of this retrospective study was to compare the consistency of orthodontic measurement performed on cephalometric films and 3D CT images of cleft lip and palate (CLP) patients. Methods: The study was conducted with 2D radiographs and 3D CT images of 9 boys and 6 girls aged 7-12 with CLP. 3D reconstructions were performed using MIMICS software. Results: Frontal analysis found statistical differences for all parameters except occlusal plane tilt (OcP-tilt) and McNamara analysis found statistical differences in 2D and 3D measurements for all parameters except ANS-Me and
APA, Harvard, Vancouver, ISO, and other styles
4

Takeuchi, Hironori, Kenji Matsuura, Tetsushi Ueta, and Tomohito Wada. "Development of a Support System for Recalling 3D Vision from a 2D Plane." Journal of Educational Multimedia and Hypermedia 32, no. 1 (2025): 5–34. https://doi.org/10.70725/014177zoxurj.

Full text
Abstract:
Basketball tactical patterns are typically taught using tools such as 2D tactical boards. Rapid decision-making in a team depends on the ability to connect two-dimensional (2D) third-person positions with three-dimensional (3D) first-person perspectives. This study develops a support system that offers a virtual environment to enhance the efficient recall of 3D vision from a 2D board. The proposed system generates static or dynamic 3D visualizations based on 2D inputs. A total of 30 volunteers participated in this study and were randomly assigned to groups that received either 2D images, stati
APA, Harvard, Vancouver, ISO, and other styles
5

Gunasekaran, Ganesan, and Meenakshisundaram Venkatesan. "An Efficient Technique for Three-Dimensional Image Visualization Through Two-Dimensional Images for Medical Data." Journal of Intelligent Systems 29, no. 1 (2017): 100–109. http://dx.doi.org/10.1515/jisys-2017-0315.

Full text
Abstract:
Abstract The main idea behind this work is to present three-dimensional (3D) image visualization through two-dimensional (2D) images that comprise various images. 3D image visualization is one of the essential methods for excerpting data from given pieces. The main goal of this work is to figure out the outlines of the given 3D geometric primitives in each part, and then integrate these outlines or frames to reconstruct 3D geometric primitives. The proposed technique is very useful and can be applied to many kinds of images. The experimental results showed a very good determination of the reco
APA, Harvard, Vancouver, ISO, and other styles
6

Yahanda, Alexander T., Timothy J. Goble, Peter T. Sylvester, et al. "Impact of 3-Dimensional Versus 2-Dimensional Image Distortion Correction on Stereotactic Neurosurgical Navigation Image Fusion Reliability for Images Acquired With Intraoperative Magnetic Resonance Imaging." Operative Neurosurgery 19, no. 5 (2020): 599–607. http://dx.doi.org/10.1093/ons/opaa152.

Full text
Abstract:
Abstract BACKGROUND Fusion of preoperative and intraoperative magnetic resonance imaging (iMRI) studies during stereotactic navigation may be very useful for procedures such as tumor resections but can be subject to error because of image distortion. OBJECTIVE To assess the impact of 3-dimensional (3D) vs 2-dimensional (2D) image distortion correction on the accuracy of auto-merge image fusion for stereotactic neurosurgical images acquired with iMRI using a head phantom in different surgical positions. METHODS T1-weighted intraoperative images of the head phantom were obtained using 1.5T iMRI.
APA, Harvard, Vancouver, ISO, and other styles
7

Sun, Haoran. "A Review of 3D-2D Registration Methods and Applications based on Medical Images." Highlights in Science, Engineering and Technology 35 (April 11, 2023): 200–224. http://dx.doi.org/10.54097/hset.v35i.7055.

Full text
Abstract:
The registration of preoperative three-dimensional (3D) medical images with intraoperative two-dimensional (2D) data is a key technology for image-guided radiotherapy, minimally invasive surgery, and interventional procedures. In this paper, we review 3D-2D registration methods using computed tomography (CT) and magnetic resonance imaging (MRI) as preoperative 3D images and ultrasound, X-ray, and visible light images as intraoperative 2D images. The 3D-2D registration techniques are classified into intensity-based, structure-based, and gradient-based according to the different registration fea
APA, Harvard, Vancouver, ISO, and other styles
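As a purely illustrative aside (not drawn from the review in entry 7), the intensity-based family of 3D-2D registration methods mentioned above typically optimizes a similarity measure between the intraoperative 2D image and a simulated projection of the preoperative 3D volume. A minimal sketch of one common measure, normalized cross-correlation, assuming NumPy arrays of equal shape:

```python
import numpy as np

def ncc(fixed: np.ndarray, moving: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized images.

    In intensity-based 3D-2D registration this score would be evaluated
    between the intraoperative X-ray (fixed) and a digitally reconstructed
    radiograph rendered from the preoperative CT at a candidate pose
    (moving); the pose maximizing the score is taken as the registration.
    """
    f = fixed.astype(np.float64).ravel()
    m = moving.astype(np.float64).ravel()
    f -= f.mean()
    m -= m.mean()
    denom = np.linalg.norm(f) * np.linalg.norm(m)
    return float(np.dot(f, m) / denom) if denom > 0 else 0.0
```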
8

Holzleitner, Iris J., Alex L. Jones, Kieran J. O’Shea, et al. "Do 3D Face Images Capture Cues of Strength, Weight, and Height Better than 2D Face Images do?" Adaptive Human Behavior and Physiology 7, no. 3 (2021): 209–19. http://dx.doi.org/10.1007/s40750-021-00170-8.

Full text
Abstract:
Abstract Objectives A large literature exists investigating the extent to which physical characteristics (e.g., strength, weight, and height) can be accurately assessed from face images. While most of these studies have employed two-dimensional (2D) face images as stimuli, some recent studies have used three-dimensional (3D) face images because they may contain cues not visible in 2D face images. As equipment required for 3D face images is considerably more expensive than that required for 2D face images, we here investigated how perceptual ratings of physical characteristics from 2D and 3D fa
APA, Harvard, Vancouver, ISO, and other styles
9

Esan, Dorcas Oladayo, Pius Adewale Owolawi, and Chunling Tu. "Advanced 3D Artistic Image Generation with VAE-SDFCycleGAN." Journal of Information Systems and Informatics 6, no. 4 (2024): 2508–24. https://doi.org/10.51519/journalisi.v6i4.900.

Full text
Abstract:
Generation of a 3-dimensional (3D)-based artistic image from a 2-dimensional (2D) image using a generative adversarial network (GAN) framework is challenging. Most existing artistic GAN-based frameworks lack robust algorithms and suitable 3D data representations that can fit into a GAN to produce high-quality 3D artistic images. To produce 3D artistic images from a 2D image with considerably improved scalability and visual quality, this research integrates an innovative variational autoencoder signed distance function, cycle generative adversarial network (VAE-SDFCycleGAN). The proposed method feeds
APA, Harvard, Vancouver, ISO, and other styles
10

Choi, Chang-Hyuk, Hee-Chan Kim, Daewon Kang, and Jun-Young Kim. "Comparative study of glenoid version and inclination using two-dimensional images from computed tomography and three-dimensional reconstructed bone models." Clinics in Shoulder and Elbow 23, no. 3 (2020): 119–24. http://dx.doi.org/10.5397/cise.2020.00220.

Full text
Abstract:
Background: This study was performed to compare glenoid version and inclination measured using two-dimensional (2D) images from computed tomography (CT) scans or three-dimensional (3D) reconstructed bone models.Methods: Thirty patients who had undergone conventional CT scans were included. Two orthopedic surgeons measured glenoid version and inclination three times on 2D images from CT scans (2D measurement), and two other orthopedic surgeons performed the same measurements using 3D reconstructed bone models (3D measurement). The 3D-reconstructed bone models were acquired and measured with Mim
APA, Harvard, Vancouver, ISO, and other styles
11

Falah K., Rasha, and Rafeef Mohammed H. "Convert 2D shapes in to 3D images." Journal of Al-Qadisiyah for Computer Science and Mathematics 9, no. 2 (2017): 19–23. http://dx.doi.org/10.29304/jqcm.2017.9.2.146.

Full text
Abstract:
There are several complex programs that are used to convert 2D images into 3D models with difficult techniques. In this paper, a useful technique is introduced that uses simple capabilities and a simple language for converting 2D to 3D images. The technique used is a three-dimensional projection based on three images of the same shape, displaying the three-dimensional image from different sides; to implement this work, visual programming with the 3Dtruevision engine is used, which gives acceptable results in a short time. It could also be used in the field of engineering drawing
APA, Harvard, Vancouver, ISO, and other styles
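Entry 11 reconstructs a 3D shape from three 2D views of the same object. One simple way to realize that idea, sketched here as an assumption rather than the authors' 3Dtruevision implementation, is silhouette carving with three orthogonal binary views: a voxel is kept only if it lies inside the silhouette in every view.

```python
import numpy as np

def carve_from_orthogonal_views(front, side, top):
    """Carve a voxel model from three orthogonal binary silhouettes.

    front: (Z, X) bool array, the shape seen along the +Y axis
    side:  (Z, Y) bool array, the shape seen along the +X axis
    top:   (Y, X) bool array, the shape seen along the +Z axis
    Returns a (Z, Y, X) bool voxel grid kept where all three views agree.
    """
    Z, X = front.shape
    _, Y = side.shape
    vox = np.ones((Z, Y, X), dtype=bool)
    vox &= front[:, None, :]   # broadcast the front view over Y
    vox &= side[:, :, None]    # broadcast the side view over X
    vox &= top[None, :, :]     # broadcast the top view over Z
    return vox
```

Three consistent views constrain the visual hull far more than one, which is why even this brute-force intersection already yields a recognizable 3D model for simple engineering shapes.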
12

Chen, Shuting, Yanfei Su, Baiqi Lai, et al. "2D3D-DescNet: Jointly Learning 2D and 3D Local Feature Descriptors for Cross-Dimensional Matching." Remote Sensing 16, no. 13 (2024): 2493. http://dx.doi.org/10.3390/rs16132493.

Full text
Abstract:
The cross-dimensional matching of 2D images and 3D point clouds is an effective method by which to establish the spatial relationship between 2D and 3D space, which has potential applications in remote sensing and artificial intelligence (AI). In this paper, we propose a novel multi-task network, 2D3D-DescNet, to learn 2D and 3D local feature descriptors jointly and perform cross-dimensional matching of 2D image patches and 3D point cloud volumes. The 2D3D-DescNet contains two branches with which to learn 2D and 3D feature descriptors, respectively, and utilizes a shared decoder to generate th
APA, Harvard, Vancouver, ISO, and other styles
13

Zhou, Hui-li, Hong Xiang, Li Duan, et al. "Application of Combined Two-Dimensional and Three-Dimensional Transvaginal Contrast Enhanced Ultrasound in the Diagnosis of Endometrial Carcinoma." BioMed Research International 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/292743.

Full text
Abstract:
Objective. The goal of this study was to explore the clinical value of combining two-dimensional (2D) and three-dimensional (3D) transvaginal contrast-enhanced ultrasounds (CEUS) in diagnosis of endometrial carcinoma (EC). Methods. In this prospective diagnostic study, transvaginal 2D and 3D CEUS were performed on 68 patients with suspected EC, and the results of the obtained 2D-CEUS and 3D-CEUS images were compared with the gold standard for statistical analysis. Results. 2D-CEUS benign endometrial lesions showed the normal uterine perfusion phase while EC cases showed early arrival and early w
APA, Harvard, Vancouver, ISO, and other styles
14

Yamamoto, Seiichi, Masao Yoshino, Kohei Nakanishi, et al. "Trial of three-dimensional image estimation from two-dimensional projected trajectory images of alpha particles in GAGG scintillator." Journal of Instrumentation 20, no. 05 (2025): T05010. https://doi.org/10.1088/1748-0221/20/05/t05010.

Full text
Abstract:
Abstract High-resolution trajectory images of alpha particles emitted by Ac-225 and its daughter radionuclides were obtained using a Gd3Al2Ga3O12 (GAGG) scintillator combined with a magnifying unit and a high-sensitivity CCD camera. However, these images were limited to two-dimensional (2D) projections. To achieve more precise estimations of alpha particle trajectories, three-dimensional (3D) images were desired. For this purpose we tried to estimate 3D images by analyzing the intensities and projected ranges of 2D trajectory images of 8.4 MeV alpha particles emitted by an Ac-225 daughter radi
APA, Harvard, Vancouver, ISO, and other styles
15

Jacobs, R., A. Adriansens, K. Verstreken, P. Suetens, and D. van Steenberghe. "Predictability of a three-dimensional planning system for oral implant surgery." Dentomaxillofacial Radiology 28, no. 2 (1999): 105–11. http://dx.doi.org/10.1038/sj/dmfr/4600419.

Full text
Abstract:
OBJECTIVES To compare 2D CT alone with 2D + 3D reconstruction for pre-operative planning of implant placement. METHODS Spiral CT scans of 33 consecutive patients were used for both reformatted 2D and 3D computer-assisted planning. The number, site and size of implants and the occurrence of anatomical complications during planning and implant placement were statistically compared using the percentage agreement and the Kendall's correlation coefficients (tau). Although planning was performed in 33 patients, implants were only placed in 21 patients. In 11 patients surgery was based on 2D + 3D ima
APA, Harvard, Vancouver, ISO, and other styles
16

Park, Minsoo, Hang-Nga Mai, Mai Yen Mai, Thaw Thaw Win, Du-Hyeong Lee, and Cheong-Hee Lee. "Intra- and Interrater Agreement of Face Esthetic Analysis in 3D Face Images." BioMed Research International 2023 (April 10, 2023): 1–7. http://dx.doi.org/10.1155/2023/3717442.

Full text
Abstract:
The use of three-dimensional (3D) facial scans for facial analysis is increasing in maxillofacial treatment. The aim of this study was to investigate the consistency of two-dimensional (2D) and 3D facial analyses performed by multiple raters. Six men and four women (25–36-year-old) participated in this study. The 2D images of the smiling and resting faces in the frontal and sagittal planes were obtained. The 3D facial and intraoral scans were merged to generate virtual 3D faces. Ten clinicians performed facial analyses by investigating 14 indices of 2D and 3D faces. Intra- and interrater agree
APA, Harvard, Vancouver, ISO, and other styles
17

Bentley, Laurence R., and Mehran Gharibi. "Two‐ and three‐dimensional electrical resistivity imaging at a heterogeneous remediation site." GEOPHYSICS 69, no. 3 (2004): 674–80. http://dx.doi.org/10.1190/1.1759453.

Full text
Abstract:
Geometrically complex heterogeneities at a decommissioned sour gas plant could not be adequately characterized with drilling and 2D electrical resistivity surveys alone. In addition, 2D electrical resistivity imaging profiles produced misleading images as a result of out‐of‐plane resistivity anomalies and violation of the 2D assumption. Accurate amplitude and positioning of electrical conductivity anomalies associated with the subsurface geochemical distribution were required to effectively analyze remediation alternatives. Forward and inverse modeling and field examples demonstrated that 3D r
APA, Harvard, Vancouver, ISO, and other styles
18

Brownhill, Daniel, Yachin Chen, Barbara A. K. Kreilkamp, et al. "Automated subcortical volume estimation from 2D MRI in epilepsy and implications for clinical trials." Neuroradiology 64, no. 5 (2021): 935–47. http://dx.doi.org/10.1007/s00234-021-02811-x.

Full text
Abstract:
Abstract Purpose Most techniques used for automatic segmentation of subcortical brain regions are developed for three-dimensional (3D) MR images. MRIs obtained in non-specialist hospitals may be non-isotropic and two-dimensional (2D). Automatic segmentation of 2D images may be challenging and represents a lost opportunity to perform quantitative image analysis. We determine the performance of a modified subcortical segmentation technique applied to 2D images in patients with idiopathic generalised epilepsy (IGE). Methods Volume estimates were derived from 2D (0.4 × 0.4 × 3 mm) and 3D (1 × 1x1m
APA, Harvard, Vancouver, ISO, and other styles
19

Hosoi, Fumiki, Sho Umeyama, and Kuangting Kuo. "Estimating 3D Chlorophyll Content Distribution of Trees Using an Image Fusion Method Between 2D Camera and 3D Portable Scanning Lidar." Remote Sensing 11, no. 18 (2019): 2134. http://dx.doi.org/10.3390/rs11182134.

Full text
Abstract:
An image fusion method has been proposed for plant images taken using a two-dimensional (2D) camera and three-dimensional (3D) portable lidar for obtaining a 3D distribution of physiological and biochemical plant properties. In this method, a 2D multispectral camera with five bands (475–840 nm) and a 3D high-resolution portable scanning lidar were applied to three sets of sample trees. After producing vegetation index (VI) images from multispectral images, 3D point cloud lidar data were projected onto the 2D plane based on perspective projection, keeping the depth information of each of the li
APA, Harvard, Vancouver, ISO, and other styles
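The fusion described in entry 19 projects the 3D lidar points onto the 2D image plane so that each point can take the vegetation-index (VI) value of the pixel it lands on. A minimal pinhole-projection sketch; the intrinsic parameters fx, fy, cx, cy and the function name are placeholders, not values from the paper:

```python
import numpy as np

def sample_vi_at_points(points_cam, fx, fy, cx, cy, vi_image):
    """Project 3D points (camera coordinates, shape N x 3) onto the image
    plane with a pinhole model and sample a vegetation-index map at each
    projected pixel. Points behind the camera or outside the image get NaN.
    """
    X, Y, Z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    vi = np.full(len(points_cam), np.nan)
    front = Z > 0
    u = np.round(fx * X[front] / Z[front] + cx).astype(int)  # column index
    v = np.round(fy * Y[front] / Z[front] + cy).astype(int)  # row index
    h, w = vi_image.shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    vi[np.flatnonzero(front)[ok]] = vi_image[v[ok], u[ok]]
    return vi
```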
20

Sun, Tianze, Blessed Kondowe, Brave Kadoko Nyirenda, Jun Liu, Hui Zhang, and Jin Shang. "Experimental comparative study of Two-dimensional and Three-dimensional CT reconstruction in detecting maxillofacial fractures at Mzuzu Central Hospital, Malawi." Malawi Medical Journal 36, no. 5 (2025): 303–7. https://doi.org/10.4314/mmj.v36i5.2.

Full text
Abstract:
Objective: The aim of this study is to compare the diagnostic value of two-dimensional (2D) CT and three-dimensional (3D) CT reconstruction techniques in detecting maxillofacial fractures in patients at Mzuzu Central Hospital (MCH). Methods: 67 maxillofacial trauma patients admitted to Mzuzu Central Hospital from Jan to Sep 2024 underwent multi-slice spiral CT (MSCT) scanning. Images were post-processed using 2D and 3D reconstruction techniques. Clinical and radiological data were collected from the patients, and a comparative analysis of the results from the two reconstruction techniques was perf
APA, Harvard, Vancouver, ISO, and other styles
21

Wee, Lai K., Hum Y. Chai, Sharul R. Bin Samsury, Naizaithull F. Binti Mujamil, and Eko Supriyanto. "Comparative studies of two dimensional and three dimensional ultrasonic nuchal translucency in trisomy assessments." Anais da Academia Brasileira de Ciências 84, no. 4 (2012): 1157–68. http://dx.doi.org/10.1590/s0001-37652012000400030.

Full text
Abstract:
Current two-dimensional (2D) ultrasonic marker measurements inherently suffer from intra- and inter-observer variability limitations. The objective of this paper is to investigate the performance of conventional 2D ultrasonic marker measurements and a proposed programmable interactive three-dimensional (3D) marker evaluation. It is essential to analyze whether 3D volumetric measurement possesses higher impact and reproducibility vis-à-vis 2D measurement. Twenty-three cases of prenatal ultrasound examination were obtained from a collaborating hospital after Ethical Committe
APA, Harvard, Vancouver, ISO, and other styles
22

Mario, Julia, Shambhavi Venkataraman, Valerie Fein-Zachary, Mark Knox, Alexander Brook, and Priscilla Slanetz. "Lumpectomy Specimen Radiography: Does Orientation or 3-Dimensional Tomosynthesis Improve Margin Assessment?" Canadian Association of Radiologists Journal 70, no. 3 (2019): 282–91. http://dx.doi.org/10.1016/j.carj.2019.03.005.

Full text
Abstract:
Purpose Our purpose was twofold. First, we sought to determine whether 2 orthogonal oriented views of excised breast cancer specimens could improve surgical margin assessment compared to a single unoriented view. Second, we sought to determine whether 3D tomosynthesis could improve surgical margin assessment compared to 2D mammography alone. Materials and Methods Forty-one consecutive specimens were prospectively imaged using 4 protocols: single view unoriented 2D image acquired on a specimen unit (1VSU), 2 orthogonal oriented 2D images acquired on the specimen unit (2VSU), 2 orthogonal orient
APA, Harvard, Vancouver, ISO, and other styles
23

Peluso, Antonino, Giulia Falone, Rossana Pipitone, Francesco Moscagiuri, Francesco Caroccia, and Michele D’Attilio. "Three-Dimensional Enlow’s Counterpart Analysis: Neutral Track." Diagnostics 13, no. 14 (2023): 2337. http://dx.doi.org/10.3390/diagnostics13142337.

Full text
Abstract:
The aim of this study is to provide a novel method to perform Enlow’s neutral track analysis on cone-beam computed tomography (CBCT) images. Eighteen CBCT images of skeletal Class I (ANB = 2° ± 2°) subjects (12 males and 6 females, aged from 9 to 19 years) with no history of previous orthodontic treatment were selected. For each subject, 2D Enlow’s neutral track analysis was performed on lateral cephalograms extracted from CBCT images and 3D neutral track analysis was performed on CBCT images. A Student’s t-test did not show any statistically significant difference between the 2D and 3D measur
APA, Harvard, Vancouver, ISO, and other styles
24

Bobulski, J. "Multimodal face recognition method with two-dimensional hidden Markov model." Bulletin of the Polish Academy of Sciences Technical Sciences 65, no. 1 (2017): 121–28. http://dx.doi.org/10.1515/bpasts-2017-0015.

Full text
Abstract:
Abstract The paper presents a new solution for the face recognition based on two-dimensional hidden Markov models. The traditional HMM uses one-dimensional data vectors, which is a drawback in the case of 2D and 3D image processing, because part of the information is lost during the conversion to one-dimensional features vector. The paper presents a concept of the full ergodic 2DHMM, which can be used in 2D and 3D face recognition. The experimental results demonstrate that the system based on two dimensional hidden Markov models is able to achieve a good recognition rate for 2D, 3D and multimo
APA, Harvard, Vancouver, ISO, and other styles
25

Chiu, Chun-Yi, Yung-Hui Huang, Wei-Chang Du, et al. "Efficient Strike Artifact Reduction Based on 3D-Morphological Structure Operators from Filtered Back-Projection PET Images." Sensors 21, no. 21 (2021): 7228. http://dx.doi.org/10.3390/s21217228.

Full text
Abstract:
Positron emission tomography (PET) can provide functional images and identify abnormal metabolic regions of the whole-body to effectively detect tumor presence and distribution. The filtered back-projection (FBP) algorithm is one of the most common images reconstruction methods. However, it will generate strike artifacts on the reconstructed image and affect the clinical diagnosis of lesions. Past studies have shown reduction in strike artifacts and improvement in quality of images by two-dimensional morphological structure operators (2D-MSO). The morphological structure method merely processe
APA, Harvard, Vancouver, ISO, and other styles
26

Yang, Guangjie, Aidi Gong, Pei Nie, et al. "Contrast-Enhanced CT Texture Analysis for Distinguishing Fat-Poor Renal Angiomyolipoma From Chromophobe Renal Cell Carcinoma." Molecular Imaging 18 (January 1, 2019): 153601211988316. http://dx.doi.org/10.1177/1536012119883161.

Full text
Abstract:
Objective: To evaluate the value of 2-dimensional (2D) and 3-dimensional (3D) computed tomography texture analysis (CTTA) models in distinguishing fat-poor angiomyolipoma (fpAML) from chromophobe renal cell carcinoma (chRCC). Methods: We retrospectively enrolled 32 fpAMLs and 24 chRCCs. Texture features were extracted from 2D and 3D regions of interest in triphasic CT images. The 2D and 3D CTTA models were constructed with the least absolute shrinkage and selection operator algorithm and texture scores were calculated. The diagnostic performance of the 2D and 3D CTTA models was evaluated with
APA, Harvard, Vancouver, ISO, and other styles
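Entry 26 contrasts texture features computed from a 2D region of interest with those from a 3D volume of interest. As an illustrative aside, not the study's actual radiomics pipeline, simple first-order features can be computed identically in either case, since NumPy reductions are dimension-agnostic:

```python
import numpy as np

def first_order_features(image, mask):
    """First-order texture statistics inside a masked region.

    Works identically for a 2D ROI (H, W) and a 3D VOI (D, H, W), which is
    what makes 2D-versus-3D feature comparisons straightforward to set up.
    """
    vals = image[mask.astype(bool)].astype(np.float64)
    hist, _ = np.histogram(vals, bins=64)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "skewness": float(((vals - vals.mean()) ** 3).mean() / (vals.std() ** 3 + 1e-12)),
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```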
27

Costa e Silva, Adriana Paula de Andrade da, José Leopoldo Ferreira Antunes, and Marcelo Gusmão Paraiso Cavalcanti. "Interpretation of mandibular condyle fractures using 2D- and 3D-computed tomography." Brazilian Dental Journal 14, no. 3 (2003): 203–8. http://dx.doi.org/10.1590/s0103-64402003000300012.

Full text
Abstract:
Computed tomography (CT) has been increasingly used in the examination of patients with craniofacial trauma. This technique is useful in the examination of the temporomandibular joint and allows the diagnosis of fractures of the mandibular condyle. Aiming to verify whether the three-dimensional reconstructed images from CT (3D-CT) produce more effective visual information than the two-dimensional (2D-CT) ones, we evaluated 2D-CT and 3D-CT examinations of 18 patients with mandibular condyle fractures. We observed that 2D-CT and 3D-CT reconstructed images produced similar information for the dia
APA, Harvard, Vancouver, ISO, and other styles
28

Matos, Ana Paula Pinho, Osvaldo Luiz Aranda, Edson Marchiori, et al. "Three-Dimensional Microscopic Characteristics of the Human Uterine Cervix Evaluated by Microtomography." Diagnostics 15, no. 5 (2025): 603. https://doi.org/10.3390/diagnostics15050603.

Full text
Abstract:
Objectives: To analyze the microscopic anatomy of the human uterine cervix in two-dimensional (2D) and three-dimensional (3D) images obtained by microtomography (microCT). Methods: Human uterine cervixes surgically removed for benign gynecologic conditions were immersed in formalin and iodine solution for more than 72 h and images were acquired by microtomography. Results: In total, 10 cervical specimens were evaluated. The images provided by microCT allowed the study of the vaginal squamous epithelium, demonstrated microscopic 3D images of the metaplastic process between the exo and endocervi
APA, Harvard, Vancouver, ISO, and other styles
29

Madhu, Aravind P., C. Akhil Balu, Akshay Krishnan, Adithya Aravind, Jibin Noble, and Vishnu Sankar. "Design of 3D volumetric display." Journal of Physics: Conference Series 2070, no. 1 (2021): 012204. http://dx.doi.org/10.1088/1742-6596/2070/1/012204.

Full text
Abstract:
Abstract Stereoscopic, or multi-view, display systems, which can give significant visual cues that help the human brain understand three-dimensional (3D) objects, are regarded as better alternatives to traditional two-dimensional (2D) displays. A device that can render 3D images for viewers without the use of specific headgear or glasses is known as an auto-stereoscopic display. Manipulation of light rays via light engines is also used to create 3D images in 3D space. We introduce a new auto-stereoscopic swept-volume display (SVD) system based on light-emitting diode (LED) arrays in this rese
APA, Harvard, Vancouver, ISO, and other styles
30

Canessa, Enrique, and Livio Tenze. "Morphing a Stereogram into Hologram." Journal of Imaging 6, no. 1 (2020): 1. http://dx.doi.org/10.3390/jimaging6010001.

Full text
Abstract:
We developed a method to transform stereoscopic two-dimensional (2D) images into holograms via unsupervised morphing deformations between left (L) and right (R) input images. By using robust DeepFlow and light-field rendering algorithms, we established correlations between a 2D scene and its three-dimensional (3D) display on a Looking Glass HoloPlay monitor. The possibility of applying this method, together with a lookup table for multi-view glasses-free 3D streaming with a stereo webcam, was also analyzed.
APA, Harvard, Vancouver, ISO, and other styles
31

Matsuyama, S., K. Ishii, S. Toyama, et al. "3D imaging of human cells using PIXEμCT." International Journal of PIXE 24, no. 01n02 (2014): 67–75. http://dx.doi.org/10.1142/s0129083514500089.

Full text
Abstract:
We report imaging of human lung epithelial cells exposed to cobalt oxide microparticles using three-dimensional (3D) particle-induced X-ray emission microcomputed tomography (PIXEμCT). The use of energy-selectable quasi-monochromatic low-energy X-rays generated via proton microbeam bombardment led to high-quality images. We also carried out two-dimensional (2D) micro-PIXE imaging. The 3D PIXEμCT imaging data are complementary with 2D micro-PIXE images and the CT value ratios of the cells show that the strong absorption stems from
APA, Harvard, Vancouver, ISO, and other styles
32

Li, Chenshuang, Hellen Teixeira, Nipul Tanna, et al. "The Reliability of Two- and Three-Dimensional Cephalometric Measurements: A CBCT Study." Diagnostics 11, no. 12 (2021): 2292. http://dx.doi.org/10.3390/diagnostics11122292.

Full text
Abstract:
Cephalometry is a standard diagnostic tool in orthodontic and orthognathic surgery fields. However, built-in magnification from the cephalometric machine produces double images from left- and right-side craniofacial structures on the film, which poses difficulty for accurate cephalometric tracing and measurements. The cone-beam computed tomography (CBCT) images not only allow three-dimensional (3D) analysis, but also enable the extraction of two-dimensional (2D) images without magnification. To evaluate the most reliable cephalometric analysis method, we extracted 2D lateral cephalometric imag
APA, Harvard, Vancouver, ISO, and other styles
33

Lee, J. J., J. L. Hunter, W. J. Lin, and R. W. Linton. "Three-Dimensional Display of Secondary Ion Images." Proceedings, annual meeting, Electron Microscopy Society of America 48, no. 2 (1990): 344–45. http://dx.doi.org/10.1017/s0424820100135320.

Full text
Abstract:
Since the sample surface region is continuously sputtered in dynamic secondary ion mass spectrometry (SIMS), three dimensional (3D) chemical maps can be obtained by acquiring a series of two dimensional (2D) images. Owing to the limitations of the ion beam sputtering technique, SIMS analysis artifacts resulting from factors such as surface roughness, matrix effects, and atomic mixing are present in the 3D volume data. One potential advantage of using 3D display is to provide visual feedback regarding the elimination of artifacts by utilizing correction algorithms as well as correlative informa
APA, Harvard, Vancouver, ISO, and other styles
34

Zamora, Natalia, Jose M. Llamas, Rosa Cibrián, Jose L. Gandia, and Vanessa Paredes. "Cephalometric measurements from 3D reconstructed images compared with conventional 2D images." Angle Orthodontist 81, no. 5 (2011): 856–64. http://dx.doi.org/10.2319/121210-717.1.

Full text
Abstract:
Abstract Objective: To assess whether the values of different measurements taken on three-dimensional (3D) reconstructions from cone-beam computed tomography (CBCT) are comparable with those taken on two-dimensional (2D) images from conventional lateral cephalometric radiographs (LCRs) and to examine if there are differences between the different types of CBCT software when taking those measurements. Material and Methods: Eight patients were selected who had both an LCR and a CBCT. The 3D reconstructions of each patient in the CBCT were evaluated using two different software packages, NemoCeph
APA, Harvard, Vancouver, ISO, and other styles
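Cephalometric comparisons such as the one in entry 34 ultimately reduce to angles and distances between anatomical landmarks, measured either on 2D film coordinates or on 3D CBCT coordinates. A small, hypothetical helper (not part of NemoCeph or any other cited software) that works for both cases:

```python
import numpy as np

def landmark_angle(b, a, c):
    """Angle in degrees at landmark a, formed by the segments a-b and a-c.

    Accepts 2D or 3D coordinates, so the same code measures, for example,
    an angle on a lateral cephalogram (2D) or on a CBCT reconstruction (3D).
    """
    u = np.asarray(b, dtype=float) - np.asarray(a, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(a, dtype=float)
    cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
```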
35

Liu, S. C., H. W. Wang, H. L. Kao, P. C. Hsiao, and W. F. Su. "Three-dimensional bone CT reconstruction anatomy of the vidian canal." Rhinology journal 51, no. 4 (2013): 306–14. http://dx.doi.org/10.4193/rhino12.189.

Full text
Abstract:
Objectives: To examine the anatomical features of the anterior opening of the vidian canal using three-dimensional (3D) computed tomography (CT) images of the bone. Methods: We reviewed 62 patients who had undergone bilateral vidian neurectomies. One hundred and twenty-four vidian canals and their surrounding anatomies were analyzed. 3D images were reconstructed using algorithms and compared with conventional two-dimensional (2D) CT images. Results: A bony prominence that overlaid the vidian canal along the sphenoid sinus floor was found in 60 (48.39 %) canals. Pneumatization of the pterygoid
APA, Harvard, Vancouver, ISO, and other styles
36

Reid, Donald B., Myles Douglas, and Edward B. Diethrich. "The Clinical Value of Three-Dimensional Intravascular Ultrasound Imaging." Journal of Endovascular Therapy 2, no. 4 (1995): 356–64. http://dx.doi.org/10.1177/152660289500200408.

Full text
Abstract:
Two-dimensional (2D) intravascular ultrasound (IVUS) imaging can now be reconstructed into three dimensions from serial 2D images captured following a “pullback” of the IVUS catheter through the target site. Three-dimensional (3D) reconstructions provide “longitudinal” and “volume” images. The former is similar to an angiogram and can be examined in three dimensions by rotating the image around its longitudinal axis, providing clinically useful information during endovascular procedures. The volume view takes longer to create and is not an exact reconstruction, but it provides images that can
APA, Harvard, Vancouver, ISO, and other styles
37

Al-Khuzaie, Maryam I. Mousa, and Waleed A. Mahmoud Al-Jawher. "Enhancing Brain Tumor Classification with a Novel Three-Dimensional Convolutional Neural Network (3D-CNN) Fusion Model." Journal Port Science Research 7, no. 3 (2024): 254–67. http://dx.doi.org/10.36371/port.2024.3.5.

Full text
Abstract:
Three-dimensional convolutional neural networks (3D CNNs) have been widely applied to analyze brain tumour images (BT) to understand the disease's progress better. It is well-known that training 3D-CNN is computationally expensive and has the potential of overfitting due to the small sample size available in the medical imaging field. Here, we proposed a novel 2D-3D approach by converting a 2D brain image to a 3D fused image using a learnable weighted gradient of the image. By the 2D-to-3D conversion, the proposed model can easily forward the fused 3D image through a pre-trained 3D model while
APA, Harvard, Vancouver, ISO, and other styles
38

Zhao, Jiangpeng, Heping Xie, Cunbao Li, and Yifei Liu. "Deep Learning-Based Reconstruction of 3D Morphology of Geomaterial Particles from Single-View 2D Images." Materials 17, no. 20 (2024): 5100. http://dx.doi.org/10.3390/ma17205100.

Full text
Abstract:
The morphology of particles formed in different environments contains critical information. Thus, the rapid and effective reconstruction of their three-dimensional (3D) morphology is crucial. This study reconstructs the 3D morphology from two-dimensional (2D) images of particles using artificial intelligence (AI). More than 100,000 particles were sampled from three sources: naturally formed particles (desert sand), manufactured particles (lunar soil simulant), and numerically generated digital particles. A deep learning approach based on a voxel representation of the morphology and multi-dimen
APA, Harvard, Vancouver, ISO, and other styles
39

ZAIKIN, A., J. KURTHS, P. SAPARIN, W. GOWIN, and S. PROHASKA. "MODELING BONE RESORPTION IN 2D CT AND 3D μCT IMAGES." International Journal of Bifurcation and Chaos 15, no. 09 (2005): 2995–3009. http://dx.doi.org/10.1142/s0218127405013836.

Full text
Abstract:
We study several algorithms to simulate bone mass loss in two-dimensional and three-dimensional computed tomography bone images. The aim is to extrapolate and predict the bone loss, to provide test objects for newly developed structural measures, and to understand the physical mechanisms behind the bone alteration. Our bone model approach differs from those already reported in the literature by two features. First, we work with original bone images, obtained by computed tomography (CT); second, we use structural measures of complexity to evaluate bone resorption and to compare it with the data
APA, Harvard, Vancouver, ISO, and other styles
40

Schell, Adam, John M. Rhee, John Holbrook, Eric Lenehan, and Kun Young Park. "Assessing Foraminal Stenosis in the Cervical Spine: A Comparison of Three-Dimensional Computed Tomographic Surface Reconstruction to Two-Dimensional Modalities." Global Spine Journal 7, no. 3 (2017): 266–71. http://dx.doi.org/10.1177/2192568217699190.

Full text
Abstract:
Study Design: Retrospective radiographic study. Objective: The optimal radiographic modality for assessing cervical foraminal stenosis is unclear. Determination on conventional axial cuts is made difficult due in part to the complex, oblique orientation of the cervical neuroforamen. The utility of 3-dimensional (3D) computed tomography (CT) reconstruction in improving neuroforaminal assessment is not well understood. The objective of this study is to determine inter-rater variability in grading cervical foraminal stenosis using 3 different CT imaging modalities: 3D CT surface reconstructions (3
APA, Harvard, Vancouver, ISO, and other styles
41

ALEKSANDROVA, O. "3D FACE MODEL RECONSTRUCTING FROM ITS 2D IMAGES USING NEURAL NETWORKS." Scientific papers of Donetsk National Technical University. Series: Informatics, Cybernetics and Computer Science 2 - №1, no. 33-34 (2022): 57–64. http://dx.doi.org/10.31474/1996-1588-2021-2-33-57-64.

Full text
Abstract:
The most common methods for reconstructing 3D models of the face are considered, their quantitative estimates are analyzed and determined, and the most promising approach, the 3D Morphable Model, is highlighted. The necessity of modifying it in order to improve the reconstruction results, based on principal component analysis and the use of a generative adversarial network, is substantiated. One of the advantages of using the 3D Morphable Model with principal component analysis is to present only a plausible solution when the solution space is limited, which simplifies the problem t
APA, Harvard, Vancouver, ISO, and other styles
42

Eom, Junseong, and Sangjun Moon. "Three-Dimensional High-Resolution Digital Inline Hologram Reconstruction with a Volumetric Deconvolution Method." Sensors 18, no. 9 (2018): 2918. http://dx.doi.org/10.3390/s18092918.

Full text
Abstract:
The digital in-line holographic microscope (DIHM) was developed for a 2D imaging technology and has recently been adapted to 3D imaging methods, providing new approaches to obtaining volumetric images with both a high resolution and wide field-of-view (FOV), which allows the physical limitations to be overcome. However, during the sectioning process of 3D image generation, the out-of-focus image of the object becomes a significant impediment to obtaining evident 3D features in the 2D sectioning plane of a thick biological sample. Based on phase retrieved high-resolution holographic imaging and
APA, Harvard, Vancouver, ISO, and other styles
43

Ban, Yuxi, Yang Wang, Shan Liu, et al. "2D/3D Multimode Medical Image Alignment Based on Spatial Histograms." Applied Sciences 12, no. 16 (2022): 8261. http://dx.doi.org/10.3390/app12168261.

Full text
Abstract:
The key to image-guided surgery (IGS) technology is to find the transformation relationship between preoperative 3D images and intraoperative 2D images, namely, 2D/3D image registration. A feature-based 2D/3D medical image registration algorithm is investigated in this study. We use a two-dimensional weighted spatial histogram of gradient directions to extract statistical features, overcome the algorithm’s limitations, and expand the applicable scenarios under the premise of ensuring accuracy. The proposed algorithm was tested on CT and synthetic X-ray images, and compared with existing algori
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Xiao Wei, Seok Ki Lee, Sung Jin Cho, and Seok Tae Kim. "3D Integral Imaging Encryption Using a Depth-Converted Elemental Image Array." Applied Mechanics and Materials 479-480 (December 2013): 958–62. http://dx.doi.org/10.4028/www.scientific.net/amm.479-480.958.

Full text
Abstract:
We propose a three-dimensional (3D) image encryption method based on the modified computational integral imaging (CII) technique with the smart pixel mapping (SPM) algorithm, which is introduced for reconstructing orthoscopic 3D images with improved image quality. The depth-converted two-dimensional (2D) elemental image array (EIA) is firstly obtained by SPM-based CII system, and then the 2D EIA is encrypted by Fibonacci transform for 3D image encryption. Compared with conventional encryption methods based on integral imaging (II), the proposed method enables us to reconstruct orthoscopic 3D i
APA, Harvard, Vancouver, ISO, and other styles
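Entry 44 encrypts the 2D elemental image array with a Fibonacci transform, i.e. a coordinate scrambling by the matrix [[1, 1], [1, 0]] taken modulo the image size. A hedged, stand-alone sketch of that scrambling step (the function name and iteration count are assumptions, and the paper's full pipeline also includes the depth-converted CII reconstruction):

```python
import numpy as np

def fibonacci_scramble(img, iterations=1):
    """Scramble a square N x N image with the Fibonacci transform.

    Each pixel (x, y) moves to ((x + y) mod N, x mod N), i.e. coordinates
    are multiplied by the matrix [[1, 1], [1, 0]] modulo N. The map is a
    bijection for any N, and repeating it eventually returns the original
    image, so decryption is simply further iterations of the same map.
    """
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "image must be square"
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, x % n] = out[x, y]
        out = scrambled
    return out
```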
45

Zheng, Siming, Mingyu Zhu, and Mingliang Chen. "Hybrid Multi-Dimensional Attention U-Net for Hyperspectral Snapshot Compressive Imaging Reconstruction." Entropy 25, no. 4 (2023): 649. http://dx.doi.org/10.3390/e25040649.

Full text
Abstract:
In order to capture the spatial-spectral (x,y,λ) information of the scene, various techniques have been proposed. Different from the widely used scanning-based methods, spectral snapshot compressive imaging (SCI) utilizes the idea of compressive sensing to compressively capture the 3D spatial-spectral data-cube in a single-shot 2D measurement and thus it is efficient, enjoying the advantages of high-speed and low bandwidth. However, the reconstruction process, i.e., to retrieve the 3D cube from the 2D measurement, is an ill-posed problem and it is challenging to reconstruct high quality images. Pr
APA, Harvard, Vancouver, ISO, and other styles
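In the snapshot compressive imaging setup reviewed in entry 45, the 3D spatial-spectral cube is modulated by a coded mask, sheared band by band, and summed into a single 2D measurement; reconstruction then has to invert this ill-posed forward model. An illustrative CASSI-style forward operator (array shapes and the one-pixel shear step are assumptions, not the paper's exact optical system):

```python
import numpy as np

def sci_forward(cube, mask, step=1):
    """Forward model of spectral snapshot compressive imaging.

    cube: (H, W, L) spatial-spectral data cube
    mask: (H, W) coded aperture pattern
    Each spectral band is modulated by the mask, shifted horizontally by
    `step` pixels per band, and all bands are summed into one 2D snapshot.
    """
    H, W, L = cube.shape
    y = np.zeros((H, W + step * (L - 1)))
    for k in range(L):
        y[:, k * step: k * step + W] += mask * cube[:, :, k]
    return y
```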
46

Campero, Alvaro, Matias Baldoncini, Juan F. Villalonga, et al. "A simple technique for generating 3D endoscopic images." Surgical Neurology International 14 (February 17, 2023): 54. http://dx.doi.org/10.25259/sni_1106_2022.

Full text
Abstract:
Background: Most neurosurgical photographs are limited to two-dimensional (2D), in this sense, most teaching and learning of neuroanatomical structures occur without an appreciation of depth. The objective of this article is to describe a simple technique for obtaining right and left 2D endoscopic images with manual angulation of the optic. Methods: The implementation of a three-dimensional (3D) endoscopic image technique is reported. We first describe the background and core principles related to the methods employed. Photographs are taken demonstrating the principles and also during an endos
APA, Harvard, Vancouver, ISO, and other styles
47

Sezer, Sümeyye, Vitoria Piai, Roy P. C. Kessels, and Mark ter Laan. "Information Recall in Pre-Operative Consultation for Glioma Surgery Using Actual Size Three-Dimensional Models." Journal of Clinical Medicine 9, no. 11 (2020): 3660. http://dx.doi.org/10.3390/jcm9113660.

Full text
Abstract:
Three-dimensional (3D) technologies are being used for patient education. For glioma, a personalized 3D model can show the patient specific tumor and eloquent areas. We aim to compare the amount of information that is understood and can be recalled after a pre-operative consult using a 3D model (physically printed or in Augmented Reality (AR)) versus two-dimensional (2D) MR images. In this explorative study, healthy individuals were eligible to participate. Sixty-one participants were enrolled and assigned to either the 2D (MRI/fMRI), 3D (physical 3D model) or AR groups. After undergoing a moc
APA, Harvard, Vancouver, ISO, and other styles
48

Ottensmeyer, F. P., and N. A. Farrow. "Three-dimensional reconstruction from dark-field electron micrographs of macromolecules at random unknown angles." Proceedings, annual meeting, Electron Microscopy Society of America 50, no. 2 (1992): 1058–59. http://dx.doi.org/10.1017/s0424820100129929.

Full text
Abstract:
Electron microscopy produces 2D images of 3D objects with resolutions generally from about 2-5 nm for stained or shadowed specimens, to as good as 0.3-0.5 nm for unstained specimens using bright field or dark field techniques. Many groups have worked on methods that attempt to recover the 3D information that is lost in the 2D representation. We have built on and extended previous techniques, and report here the development and application of a robust, unbiased quaternion-based alignment procedure to facilitate 3D reconstruction of molecules imaged at random unknown orientations. The approach i
APA, Harvard, Vancouver, ISO, and other styles
49

Wong, Ka Wai, and Benjamin Bachmann. "Three-dimensional electron temperature measurement of inertial confinement fusion hotspots using x-ray emission tomography." Review of Scientific Instruments 93, no. 7 (2022): 073501. http://dx.doi.org/10.1063/5.0097471.

Full text
Abstract:
We present a novel approach to reconstruct three-dimensional (3D) electron temperature distributions of inertially confined fusion plasma hotspots at the National Ignition Facility. Using very limited number of two-dimensional (2D) x-ray imaging lines of sight, we perform 3D reconstructions of x-ray emission distributions from different x-ray energy channels ranging from 20 to 30 keV. 2D time-integrated x-ray images are processed using the algebraic reconstruction technique to reconstruct a 3D hotspot x-ray emission distribution that is self-consistent with the input images. 3D electron temper
APA, Harvard, Vancouver, ISO, and other styles
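Entry 49 reconstructs the 3D x-ray emission with the algebraic reconstruction technique (ART). A compact, generic Kaczmarz-style ART sketch; the dense system matrix, relaxation factor, and iteration count are placeholder choices for illustration, not the facility's implementation:

```python
import numpy as np

def art_reconstruct(A, b, iterations=10, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz iterations).

    Solves A x ≈ b for the flattened emission volume x, where each row of
    A models one line-of-sight measurement in b. Repeated sweeps over the
    rows converge toward a volume consistent with the input projections.
    """
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(iterations):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```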
50

Fisichella, V. A., F. Jäderling, S. Horvath, P. O. Stotzer, A. Kilander, and M. Hellström. "Primary three-dimensional analysis with perspective-filet view versus primary two-dimensional analysis: Evaluation of lesion detection by inexperienced readers at computed tomographic colonography in symptomatic patients." Acta Radiologica 50, no. 3 (2009): 244–55. http://dx.doi.org/10.1080/02841850802714797.

Full text
Abstract:
Background: “Perspective-filet view” is a novel three-dimensional (3D) viewing technique for computed tomography colonography (CTC). Studies with experienced readers have shown a sensitivity for perspective-filet view similar to that of 2D or 3D endoluminal fly-through in detection of colorectal lesions. It is not known whether perspective-filet view, compared to axial images, improves lesion detection by inexperienced readers. Purpose: To compare primary 3D analysis using perspective-filet view (3D Filet) with primary 2D analysis, as used by inexperienced CTC readers. Secondary aims were to c
APA, Harvard, Vancouver, ISO, and other styles