
Journal articles on the topic "Multi-modal imaging"


Browse the top 50 scholarly journal articles on the topic "Multi-modal imaging".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these details are available in the metadata.

Browse journal articles from many different disciplines and compile your bibliography accordingly.

1

Mohankumar, Arthi, and Roshni Mohan. "Multi-Modal imaging of torpedo maculopathy." TNOA Journal of Ophthalmic Science and Research 61, no. 1 (2023): 143. http://dx.doi.org/10.4103/tjosr.tjosr_9_22.

2

Zhang, Haotian, Qiaoyu Ma, Yiran Qiu, and Zongying Lai. "A Multi-Hierarchical Complementary Feature Interaction Network for Accelerated Multi-Modal MR Imaging." Applied Sciences 14, no. 21 (2024): 9764. http://dx.doi.org/10.3390/app14219764.

Abstract:
Magnetic resonance (MR) imaging is widely used in the clinical field due to its non-invasiveness, but the long scanning time is still a bottleneck for its popularization. Using the complementary information between multi-modal imaging to accelerate imaging provides a novel and effective MR fast imaging solution. However, previous technologies mostly use simple fusion methods and fail to fully utilize their potential sharable knowledge. In this study, we introduced a novel multi-hierarchical complementary feature interaction network (MHCFIN) to realize joint reconstruction of multi-modal MR ima
3

Alilet, Mona, Julien Behr, Jean-Philippe Nueffer, Benoit Barbier-Brion, and Sébastien Aubry. "Multi-modal imaging of the subscapularis muscle." Insights into Imaging 7, no. 6 (2016): 779–91. http://dx.doi.org/10.1007/s13244-016-0526-1.

4

Dumbryte, Irma, Donatas Narbutis, Maria Androulidaki, Arturas Vailionis, Saulius Juodkazis, and Mangirdas Malinauskas. "Teeth Microcracks Research: Towards Multi-Modal Imaging." Bioengineering 10, no. 12 (2023): 1354. http://dx.doi.org/10.3390/bioengineering10121354.

Abstract:
This perspective is an overview of the recent advances in teeth microcrack (MC) research, where there is a clear tendency towards a shift from two-dimensional (2D) to three-dimensional (3D) examination techniques, enhanced with artificial intelligence models for data processing and image acquisition. X-ray micro-computed tomography combined with machine learning allows 3D characterization of all spatially resolved cracks, despite the locations within the tooth in which they begin and extend, and the arrangement of MCs and their structural properties. With photoluminescence and micro-/nano-Rama
5

Hallinan, Robert, Brendan M. Connolly, and David C. Mackenzie. "Renal vein thrombosis: Multi-modal imaging findings." Visual Journal of Emergency Medicine 38 (January 2025): 102194. https://doi.org/10.1016/j.visj.2025.102194.

6

Blinowska, Katarzyna, Gernot Müller-Putz, Vera Kaiser, et al. "Multimodal Imaging of Human Brain Activity: Rational, Biophysical Aspects and Modes of Integration." Computational Intelligence and Neuroscience 2009 (2009): 1–10. http://dx.doi.org/10.1155/2009/813607.

Abstract:
Until relatively recently the vast majority of imaging and electrophysiological studies of human brain activity have relied on single-modality measurements usually correlated with readily observable or experimentally modified behavioural or brain state patterns. Multi-modal imaging is the concept of bringing together observations or measurements from different instruments. We discuss the aims of multi-modal imaging and the ways in which it can be accomplished using representative applications. Given the importance of haemodynamic and electrophysiological signals in current multi-modal imaging
7

Watkin, Kenneth L., and Michael A. McDonald. "Multi-Modal Contrast Agents." Academic Radiology 9, no. 2 (2002): S285–S289. http://dx.doi.org/10.1016/s1076-6332(03)80205-2.

8

Merkle, Arno, Leah L. Lavery, Jeff Gelb, and Nicholas Piché. "Fusing Multi-scale and Multi-modal 3D Imaging and Characterization." Microscopy and Microanalysis 20, S3 (2014): 820–21. http://dx.doi.org/10.1017/s1431927614005820.

9

Dong, Di, Jie Tian, Yakang Dai, Guorui Yan, Fei Yang, and Ping Wu. "Unified reconstruction framework for multi-modal medical imaging." Journal of X-Ray Science and Technology 19, no. 1 (2011): 111–26. http://dx.doi.org/10.3233/xst-2010-0281.

10

Bansal, Reema, Nitin Kumar, and Monika Balyan. "Multi-modal imaging in benign familial fleck retina." Indian Journal of Ophthalmology 69, no. 6 (2021): 1641. http://dx.doi.org/10.4103/ijo.ijo_633_21.

11

Merk, Vivian, Johan Decelle, Si Chen, and Derk Joester. "Multi-modal correlative chemical imaging of aquatic microorganisms." Microscopy and Microanalysis 27, S1 (2021): 298–300. http://dx.doi.org/10.1017/s1431927621001641.

12

Cole, Laura M., Joshua Handley, Emmanuelle Claude, et al. "Multi-Modal Mass Spectrometric Imaging of Uveal Melanoma." Metabolites 11, no. 8 (2021): 560. http://dx.doi.org/10.3390/metabo11080560.

Abstract:
Matrix assisted laser desorption ionisation mass spectrometry imaging (MALDI-MSI), was used to obtain images of lipids and metabolite distribution in formalin fixed and embedded in paraffin (FFPE) whole eye sections containing primary uveal melanomas (UM). Using this technique, it was possible to obtain images of lysophosphatidylcholine (LPC) type lipid distribution that highlighted the tumour regions. Laser ablation inductively coupled plasma mass spectrometry images (LA-ICP-MS) performed on UM sections showed increases in copper within the tumour periphery and intratumoural zinc in tissue fr
13

Beckus, Andre, Alexandru Tamasan, and George K. Atia. "Multi-Modal Non-Line-of-Sight Passive Imaging." IEEE Transactions on Image Processing 28, no. 7 (2019): 3372–82. http://dx.doi.org/10.1109/tip.2019.2896517.

14

Lee, Junwon, Jeremy Rogers, Michael Descour, et al. "Imaging quality assessment of multi-modal miniature microscope." Optics Express 11, no. 12 (2003): 1436. http://dx.doi.org/10.1364/oe.11.001436.

15

Dong, Di, Jie Tian, Yakang Dai, Guorui Yan, Fei Yang, and Ping Wu. "Unified reconstruction framework for multi-modal medical imaging." Journal of X-Ray Science and Technology: Clinical Applications of Diagnosis and Therapeutics 19, no. 1 (2011): 111–26. http://dx.doi.org/10.3233/xst-2010-0281.

Abstract:
Various types of advanced imaging technologies have significantly improved the quality of medical care available to patients. Corresponding medical image reconstruction algorithms, especially 3D reconstruction, play an important role in disease diagnosis and treatment assessment. However, these increasing reconstruction methods are not implemented in a unified software framework, which brings along lots of disadvantages such as breaking connection of different modalities, lack of module reuse and inconvenience to method comparison. This paper discusses reconstruction process from the viewpoint
16

Sawai, Toshiki, Masato Matsubayashi, Fumiya Uchida, Masatoshi Miyahara, and Hideo Nishikawa. "Quadricuspid pulmonary valve evaluated by multi-modal imaging." European Heart Journal - Cardiovascular Imaging 19, no. 12 (2018): 1333. http://dx.doi.org/10.1093/ehjci/jey113.

17

Medina-Valdés, L., M. Pérez-Liva, J. Camacho, J. M. Udías, J. L. Herraiz, and N. González-Salido. "Multi-modal Ultrasound Imaging for Breast Cancer Detection." Physics Procedia 63 (2015): 134–40. http://dx.doi.org/10.1016/j.phpro.2015.03.022.

18

Phillips, N. W., M. W. M. Jones, G. van Riessen, D. J. Vine, B. Abbey, and F. Hofmann. "Multi-modal Nanoscale Imaging of Materials and Biology." Microscopy and Microanalysis 24, S2 (2018): 32–33. http://dx.doi.org/10.1017/s1431927618012588.

19

Hathon, Lori A., Michael T. Myers, Mike Dixon, and Kultaransingh Hooghan. "Multi-modal SEM Imaging for Shale Reservoir Characterization." Microscopy and Microanalysis 23, S1 (2017): 2116–17. http://dx.doi.org/10.1017/s1431927617011242.

20

Schall, Ulrich, Paul E. Rasser, Ross Fulham, et al. "PHENOTYPING OF SCHIZOPHRENIA BY MULTI-MODAL BRAIN IMAGING." Schizophrenia Research 117, no. 2-3 (2010): 480–81. http://dx.doi.org/10.1016/j.schres.2010.02.906.

21

Rabha, Diganta, Sritam Biswas, Nabadweep Chamuah, Manab Mandal, and Pabitra Nath. "Wide-field multi-modal microscopic imaging using smartphone." Optics and Lasers in Engineering 137 (February 2021): 106343. http://dx.doi.org/10.1016/j.optlaseng.2020.106343.

22

Casciani, Emanuele, Chiara De Vincentiis, Maria Chiara Colaiacomo, and Gian Franco Gualdi. "Multi-Modal Imaging Technologies in Cardiovascular Risk Assessment." Therapeutic Apheresis and Dialysis 17, no. 2 (2012): 138–49. http://dx.doi.org/10.1111/j.1744-9987.2012.01132.x.

23

Gubarkova, Ekaterina V., Varvara V. Dudenkova, Felix I. Feldchtein, et al. "Multi-modal optical imaging characterization of atherosclerotic plaques." Journal of Biophotonics 9, no. 10 (2015): 1009–20. http://dx.doi.org/10.1002/jbio.201500223.

24

Huang, Xiaojing, Hanfei Yan, Ajith Pattammattel, Longlong Wu, Ian Robinson, and Yong Chu. "Correlative imaging with multi-modal scanning probe microscopy." Acta Crystallographica Section A Foundations and Advances 79, a2 (2023): C497. http://dx.doi.org/10.1107/s2053273323091180.

25

Pasupuleti, Murali Krishna. "AI-Driven Radiology: Multi-Modal Imaging Diagnosis Using Ensemble Models." International Journal of Academic and Industrial Research Innovations (IJAIRI) 05, no. 05 (2025): 620–30. https://doi.org/10.62311/nesx/rphcr22.

Abstract:
Abstract: The integration of artificial intelligence (AI) in radiology has significantly improved diagnostic workflows, particularly with the advent of multi-modal imaging systems. This study proposes an ensemble deep learning framework for radiological diagnosis by combining complementary information from computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). Modality-specific convolutional neural networks—ResNet50 for CT, DenseNet121 for MRI, and EfficientNet for PET—were independently trained and their outputs aggregated via a fusion layer for fi
26

Breen, William G., Madhava P. Aryal, Yue Cao, and Michelle M. Kim. "Integrating multi-modal imaging in radiation treatments for glioblastoma." Neuro-Oncology 26, Supplement_1 (2024): S17–S25. http://dx.doi.org/10.1093/neuonc/noad187.

Abstract:
Abstract Advances in diagnostic and treatment technology along with rapid developments in translational research may now allow the realization of precision radiotherapy. Integration of biologically informed multimodality imaging to address the spatial and temporal heterogeneity underlying treatment resistance in glioblastoma is now possible for patient care, with evidence of safety and potential benefit. Beyond their diagnostic utility, several candidate imaging biomarkers have emerged in recent early-phase clinical trials of biologically based radiotherapy, and their definitive assessment in
27

Kotsugi, Masashi, Kengo Konishi, Shohei Yokoyama, et al. "Transarterial embolization for anterior cranial fossa dural arteriovenous fistula based on multi-modal three-dimensional imaging." Surgical Neurology International 15 (October 25, 2024): 386. http://dx.doi.org/10.25259/sni_698_2024.

Abstract:
Background: Dural arteriovenous fistula (DAVF) in the anterior cranial fossa (ACF) is known to show a high risk of intracranial hemorrhage. Recently, multi-modal fusion imaging with computed tomography angiography, computed tomography venography, and three-dimensional (3D) rotation angiography have been used preoperatively to ensure anatomical safety. We report on endovascular treatment as a first-line approach for ACFDAVF based on the understanding of vascular anatomy obtained from multi-modal fusion imaging. Methods: All patients with ACF-DAVF treated endovascularly as a first-line approach
28

Prabhakar, Neeraj, Ilya Belevich, Markus Peurla, et al. "Cell Volume (3D) Correlative Microscopy Facilitated by Intracellular Fluorescent Nanodiamonds as Multi-Modal Probes." Nanomaterials 11, no. 1 (2020): 14. http://dx.doi.org/10.3390/nano11010014.

Abstract:
Three-dimensional correlative light and electron microscopy (3D CLEM) is attaining popularity as a potential technique to explore the functional aspects of a cell together with high-resolution ultrastructural details across the cell volume. To perform such a 3D CLEM experiment, there is an imperative requirement for multi-modal probes that are both fluorescent and electron-dense. These multi-modal probes will serve as landmarks in matching up the large full cell volume datasets acquired by different imaging modalities. Fluorescent nanodiamonds (FNDs) are a unique nanosized, fluorescent, and el
29

Mgbole, Toochukwu Juliet. "Machine learning integration for early-stage cancer detection using multi-modal imaging analysis." World Journal of Advanced Research and Reviews 25, no. 1 (2025): 385–413. https://doi.org/10.30574/wjarr.2025.25.1.0066.

Abstract:
Introduction: Early detection of cancer plays a crucial role in improving patient outcomes and survival rates. Traditional diagnostic methods often face challenges in accurately identifying early-stage cancers, leading to delayed treatment and reduced chances of successful intervention. Progress achieved in AI within the past few years, specifically, ML and DL, significantly enhanced the potential to diagnose and predict cancer. This review analyses the use of multi-modal imaging data, genomics, and clinical parameters to employ ML approaches in early cancer diagnosis. Combining machine learni
30

Wang, Yuhao, Yang Liu, Aihua Zheng, and Pingping Zhang. "DeMo: Decoupled Feature-Based Mixture of Experts for Multi-Modal Object Re-Identification." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 8 (2025): 8141–49. https://doi.org/10.1609/aaai.v39i8.32878.

Abstract:
Multi-modal object Re-IDentification (ReID) aims to retrieve specific objects by combining complementary information from multiple modalities. Existing multi-modal object ReID methods primarily focus on the fusion of heterogeneous features. However, they often overlook the dynamic quality changes in multi-modal imaging. In addition, the shared information between different modalities can weaken modality-specific information. To address these issues, we propose a novel feature learning framework called DeMo for multi-modal object ReID, which adaptively balances decoupled features using a mixtur
31

Shin, Tae-Hyun, Youngseon Choi, Soojin Kim, and Jinwoo Cheon. "Recent advances in magnetic nanoparticle-based multi-modal imaging." Chemical Society Reviews 44, no. 14 (2015): 4501–16. http://dx.doi.org/10.1039/c4cs00345d.

32

Hartley, Matthew, and Gerard Kleywegt. "Towards Public Archiving of Large, Multi-Modal Imaging Datasets." Microscopy and Microanalysis 28, S1 (2022): 1526–27. http://dx.doi.org/10.1017/s1431927622006134.

33

Soslow, Jonathan H., and Margaret M. Samyn. "Multi-modal imaging of the pediatric heart transplant recipient." Translational Pediatrics 8, no. 4 (2019): 322–38. http://dx.doi.org/10.21037/tp.2019.08.04.

34

Allegra Mascaro, Anna Letizia, Leonardo Sacconi, Ludovico Silvestri, Graham Knott, and Francesco S. Pavone. "Multi-Modal Optical Imaging of the Cerebellum in Animals." Cerebellum 15, no. 1 (2015): 18–20. http://dx.doi.org/10.1007/s12311-015-0730-4.

35

Zappia, Marcello, Francesco Di Pietto, Alberto Aliprandi, et al. "Multi-modal imaging of adhesive capsulitis of the shoulder." Insights into Imaging 7, no. 3 (2016): 365–71. http://dx.doi.org/10.1007/s13244-016-0491-8.

36

Tan, Tao, Zhang Li, Yue Sun, and Shandong Wu. "Guest Editorial: Multi-Modal Joint Learning in Healthcare Imaging." IEEE Journal of Biomedical and Health Informatics 29, no. 5 (2025): 3083–85. https://doi.org/10.1109/jbhi.2025.3556451.

37

Zhang, Yilin. "Multi-Modal Medical Image Matching Based on Multi-Task Learning and Semantic-Enhanced Cross-Modal Retrieval." Traitement du Signal 40, no. 5 (2023): 2041–49. http://dx.doi.org/10.18280/ts.400522.

Abstract:
With the continuous advancement of medical imaging technology, a vast amount of multi-modal medical image data has been extensively utilized for disease diagnosis, treatment, and research. Effective management and utilization of these data becomes a pivotal challenge, particularly when undertaking image matching and retrieval. Although numerous methods for medical image matching and retrieval exist, they primarily rely on traditional image processing techniques, often limited to manual feature extraction and singular modality handling. To address these limitations, this study introduces an alg
38

Pan, Wenjie, Linhan Huang, Jianbao Liang, Lan Hong, and Jianqing Zhu. "Progressively Hybrid Transformer for Multi-Modal Vehicle Re-Identification." Sensors 23, no. 9 (2023): 4206. http://dx.doi.org/10.3390/s23094206.

Abstract:
Multi-modal (i.e., visible, near-infrared, and thermal-infrared) vehicle re-identification has good potential to search vehicles of interest in low illumination. However, due to the fact that different modalities have varying imaging characteristics, a proper multi-modal complementary information fusion is crucial to multi-modal vehicle re-identification. For that, this paper proposes a progressively hybrid transformer (PHT). The PHT method consists of two aspects: random hybrid augmentation (RHA) and a feature hybrid mechanism (FHM). Regarding RHA, an image random cropper and a local region h
39

Badhiwala, Krishna N., Daniel L. Gonzales, Daniel G. Vercosa, Benjamin W. Avants, and Jacob T. Robinson. "Microfluidics for electrophysiology, imaging, and behavioral analysis of Hydra." Lab on a Chip 18, no. 17 (2018): 2523–39. http://dx.doi.org/10.1039/c8lc00475g.

40

Dehghani, Farzaneh, Reihaneh Derafshi, Joanna Lin, Sayeh Bayat, and Mariana Bento. "Alzheimer Disease Detection Studies: Perspective on Multi-Modal Data." Yearbook of Medical Informatics 33, no. 01 (2024): 266–76. https://doi.org/10.1055/s-0044-1800756.

Abstract:
Summary Objectives: Alzheimer's Disease (AD) is one of the most common neurodegenerative diseases, resulting in progressive cognitive decline, and so accurate and timely AD diagnosis is of critical importance. To this end, various medical technologies and computer-aided diagnosis (CAD), ranging from biosensors and raw signals to medical imaging, have been used to provide information about the state of AD. In this survey, we aim to provide a review on CAD systems for automated AD detection, focusing on different data types: namely, signals and sensors, medical imaging, and electronic medical re
41

Li, Ruijiang. "Abstract IA05: Toward multi-modal foundation AI for precision oncology." Clinical Cancer Research 31, no. 2_Supplement (2025): IA05. https://doi.org/10.1158/1557-3265.targetedtherap-ia05.

Abstract:
Abstract Clinical decision-making is a complex process that involves information obtained from multiple data modalities. Artificial intelligence (AI) approaches that can effectively integrate multi-modal data hold significant promise to advance clinical care. Two areas of success are imaging and digital pathology, where AI has shown great potential to improve cancer diagnosis and treatment. This talk will provide an overview on how AI can be used to extract information from imaging and pathology and identify prognostic and predictive biomarkers for personalized cancer treatment. In particular,
42

Liu, Tracy W., Seth T. Gammon, and David Piwnica-Worms. "Multi-Modal Multi-Spectral Intravital Microscopic Imaging of Signaling Dynamics in Real-Time during Tumor–Immune Interactions." Cells 10, no. 3 (2021): 499. http://dx.doi.org/10.3390/cells10030499.

Abstract:
Intravital microscopic imaging (IVM) allows for the study of interactions between immune cells and tumor cells in a dynamic, physiologically relevant system in vivo. Current IVM strategies primarily use fluorescence imaging; however, with the advances in bioluminescence imaging and the development of new bioluminescent reporters with expanded emission spectra, the applications for bioluminescence are extending to single cell imaging. Herein, we describe a molecular imaging window chamber platform that uniquely combines both bioluminescent and fluorescent genetically encoded reporters, as well
43

Xiao, Peng, Zhengyu Duan, Gengyuan Wang, et al. "Multi-modal Anterior Eye Imager Combining Ultra-High Resolution OCT and Microvascular Imaging for Structural and Functional Evaluation of the Human Eye." Applied Sciences 10, no. 7 (2020): 2545. http://dx.doi.org/10.3390/app10072545.

Abstract:
To establish complementary information for the diagnosis and evaluation of ocular surface diseases, we developed a multi-modal, non-invasive optical imaging platform by combining ultra-high resolution optical coherence tomography (UHR-OCT) with a microvascular imaging system based on slit-lamp biomicroscopy. Our customized UHR-OCT module achieves an axial resolution of ≈2.9 μm in corneal tissue with a broadband light source and an A-line acquisition rate of 24 kHz with a line array CCD camera. The microvascular imaging module has a lateral resolution of 3.5 μm under maximum magnification of ≈1
44

Cheng, Hanlong, Xueyan Wang, Xuan Liu, et al. "An effective NIR laser/tumor-microenvironment co-responsive cancer theranostic nanoplatform with multi-modal imaging and therapies." Nanoscale 13, no. 24 (2021): 10816–28. http://dx.doi.org/10.1039/d1nr01645h.

45

Kuang, Xiao-yan, Huan Liu, Wen-yong Hu, and Yuan-zhi Shao. "Hydrothermal synthesis of core–shell structured TbPO4:Ce3+@TbPO4:Gd3+ nanocomposites for magnetic resonance and optical imaging." Dalton Trans. 43, no. 32 (2014): 12321–28. http://dx.doi.org/10.1039/c4dt00249k.

Abstract:
Multi-modal imaging based on multifunctional nanoparticles provides deep, non-invasive and highly sensitive imaging and is a promising alternative approach that can improve the sensitivity of early cancer diagnosis.
46

Liu, Yu, Xiaolin Lv, Heng Liu, et al. "Porous gold nanocluster-decorated manganese monoxide nanocomposites for microenvironment-activatable MR/photoacoustic/CT tumor imaging." Nanoscale 10, no. 8 (2018): 3631–38. http://dx.doi.org/10.1039/c7nr08535d.

47

Liu, Tracy W., Seth T. Gammon, David Fuentes, and David Piwnica-Worms. "Multi-Modal Multi-Spectral Intravital Macroscopic Imaging of Signaling Dynamics in Real Time during Tumor–Immune Interactions." Cells 10, no. 3 (2021): 489. http://dx.doi.org/10.3390/cells10030489.

Abstract:
A major obstacle in studying the interplay between cancer cells and the immune system has been the examination of proposed biological pathways and cell interactions in a dynamic, physiologically relevant system in vivo. Intravital imaging strategies are one of the few molecular imaging techniques that can follow biological processes at cellular resolution over long periods of time in the same individual. Bioluminescence imaging has become a standard preclinical in vivo optical imaging technique with ever-expanding versatility as a result of the development of new emission bioluminescent report
48

Khalil, Adil Ibrahim. "Multi-Modal Fusion Techniques for Improved Diagnosis in Medical Imaging." Journal of Information Systems Engineering and Management 10, no. 1s (2024): 47–56. https://doi.org/10.52783/jisem.v10i1s.100.

Abstract:
Identifying diverse disease states is crucial for prompt and efficient clinical management. Complementary data from many medical imaging modalities, including MRI, CT, and PET, can be integrated to improve diagnostic performance. This work aims to assess how well multi-modal fusion methods work to enhance medical picture diagnosis. A multicenter study was conducted with 150 patients with different clinical conditions (mean age 58.2 ± 12.4 years, 52% female). After gathering data from MRI, CT, and PET scans, structural, functional, and textural characteristics were removed from each modality. T
49

Joshi, Bishnu P., and Thomas D. Wang. "Exogenous Molecular Probes for Targeted Imaging in Cancer: Focus on Multi-modal Imaging." Cancers 2, no. 2 (2010): 1251–87. http://dx.doi.org/10.3390/cancers2021251.

50

Sun, He, and Katherine L. Bouman. "Deep Probabilistic Imaging: Uncertainty Quantification and Multi-modal Solution Characterization for Computational Imaging." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (2021): 2628–37. http://dx.doi.org/10.1609/aaai.v35i3.16366.

Abstract:
Computational image reconstruction algorithms generally produce a single image without any measure of uncertainty or confidence. Regularized Maximum Likelihood (RML) and feed-forward deep learning approaches for inverse problems typically focus on recovering a point estimate. This is a serious limitation when working with under-determined imaging systems, where it is conceivable that multiple image modes would be consistent with the measured data. Characterizing the space of probable images that explain the observational data is therefore crucial. In this paper, we propose a variational deep p