To see the other types of publications on this topic, follow the link: nnU-Net.

Journal articles on the topic 'nnU-Net'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 16 journal articles for your research on the topic 'nnU-Net.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Savjani, Ricky. "nnU-Net: Further Automating Biomedical Image Autosegmentation." Radiology: Imaging Cancer 3, no. 1 (January 1, 2021): e209039. http://dx.doi.org/10.1148/rycan.2021209039.

2

Sunoqrot, Mohammed R. S., Kirsten M. Selnæs, Elise Sandsmark, Sverre Langørgen, Helena Bertilsson, Tone F. Bathen, and Mattijs Elschot. "The Reproducibility of Deep Learning-Based Segmentation of the Prostate Gland and Zones on T2-Weighted MR Images." Diagnostics 11, no. 9 (September 16, 2021): 1690. http://dx.doi.org/10.3390/diagnostics11091690.

Abstract:
Volume of interest segmentation is an essential step in computer-aided detection and diagnosis (CAD) systems. Deep learning (DL)-based methods provide good performance for prostate segmentation, but little is known about the reproducibility of these methods. In this work, an in-house collected dataset from 244 patients was used to investigate the intra-patient reproducibility of 14 shape features for DL-based segmentation methods of the whole prostate gland (WP), peripheral zone (PZ), and the remaining prostate zones (non-PZ) on T2-weighted (T2W) magnetic resonance (MR) images compared to manual segmentations. The DL-based segmentation was performed using three different convolutional neural networks (CNNs): V-Net, nnU-Net-2D, and nnU-Net-3D. The two-way random, single score intra-class correlation coefficient (ICC) was used to measure the inter-scan reproducibility of each feature for each CNN and the manual segmentation. We found that the reproducibility of the investigated methods is comparable to manual for all CNNs (14/14 features), except for V-Net in PZ (7/14 features). The ICC score for segmentation volume was found to be 0.888, 0.607, 0.819, and 0.903 in PZ; 0.988, 0.967, 0.986, and 0.983 in non-PZ; 0.982, 0.975, 0.973, and 0.984 in WP for manual, V-Net, nnU-Net-2D, and nnU-Net-3D, respectively. The results of this work show the feasibility of embedding DL-based segmentation in CAD systems, based on multiple T2W MR scans of the prostate, which is an important step towards the clinical implementation.
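The two-way random, single-score ICC used in this study can be computed from a two-way ANOVA decomposition of the repeated measurements. A plain-Python sketch (the `icc2_1` helper and the toy data below are illustrative, not the study's code):

```python
def icc2_1(scores):
    """Two-way random, single-score ICC, i.e. ICC(2,1).

    scores: list of subjects, each a list of k repeated measurements
    (e.g. one shape-feature value per scan)."""
    n, k = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    # mean squares for subjects (rows), measurements (columns), and residual
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_cols = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    ms_err = sum((scores[i][j] - row_means[i] - col_means[j] + grand) ** 2
                 for i in range(n) for j in range(k)) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```

Perfect inter-scan agreement yields an ICC of exactly 1.0; any disagreement between repeated scans pulls the score below 1.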
3

Zhang, Guobin, Zhiyong Yang, Bin Huo, Shude Chai, and Shan Jiang. "Multiorgan segmentation from partially labeled datasets with conditional nnU-Net." Computers in Biology and Medicine 136 (September 2021): 104658. http://dx.doi.org/10.1016/j.compbiomed.2021.104658.

4

Lian, Luya, Tianer Zhu, Fudong Zhu, and Haihua Zhu. "Deep Learning for Caries Detection and Classification." Diagnostics 11, no. 9 (September 13, 2021): 1672. http://dx.doi.org/10.3390/diagnostics11091672.

Abstract:
Objectives: Deep learning methods have achieved impressive diagnostic performance in the field of radiology. The current study aimed to use deep learning methods to detect caries lesions, classify different radiographic extensions on panoramic films, and compare the classification results with those of expert dentists. Methods: A total of 1160 dental panoramic films were evaluated by three expert dentists. All caries lesions in the films were marked with circles, whose combination was defined as the reference dataset. A training and validation dataset (1071) and a test dataset (89) were then established from the reference dataset. A convolutional neural network, called nnU-Net, was applied to detect caries lesions, and DenseNet121 was applied to classify the lesions according to their depths (lesions in the outer, middle, or inner third of dentin: D1/2/3). The performance of the test dataset in the trained nnU-Net and DenseNet121 models was compared with the results of six expert dentists in terms of the intersection over union (IoU), Dice coefficient, accuracy, precision, recall, negative predictive value (NPV), and F1-score metrics. Results: nnU-Net yielded caries lesion segmentation IoU and Dice coefficient values of 0.785 and 0.663, respectively, and the accuracy and recall rate of nnU-Net were 0.986 and 0.821, respectively. The results of the expert dentists and the neural network were shown to be no different in terms of accuracy, precision, recall, NPV, and F1-score. For caries depth classification, DenseNet121 showed an overall accuracy of 0.957 for D1 lesions, 0.832 for D2 lesions, and 0.863 for D3 lesions. The recall results of the D1/D2/D3 lesions were 0.765, 0.652, and 0.918, respectively. All metric values, including accuracy, precision, recall, NPV, and F1-score values, were proven to be no different from those of the experienced dentists.
Conclusion: In detecting and classifying caries lesions on dental panoramic radiographs, the performance of deep learning methods was similar to that of expert dentists. The impact of applying these well-trained neural networks for disease diagnosis and treatment decision making should be explored.
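The overlap and confusion-matrix metrics this abstract reports (IoU, Dice, accuracy, precision, recall, NPV, F1-score) all derive from the same four pixel counts. A minimal sketch on flattened binary masks (the `seg_metrics` helper is illustrative, not the authors' implementation):

```python
def seg_metrics(pred, truth):
    """pred, truth: flattened binary masks (sequences of 0/1)."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # a.k.a. sensitivity
    return {
        "iou": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": precision,
        "recall": recall,
        "npv": tn / (tn + fn),
        "f1": 2 * precision * recall / (precision + recall),
    }
```

Note that on binary masks the Dice coefficient and the F1-score coincide by construction, since both equal 2TP / (2TP + FP + FN).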
5

Abel, Lorraine, Jakob Wasserthal, Thomas Weikert, Alexander W. Sauter, Ivan Nesic, Marko Obradovic, Shan Yang, et al. "Automated Detection of Pancreatic Cystic Lesions on CT Using Deep Learning." Diagnostics 11, no. 5 (May 19, 2021): 901. http://dx.doi.org/10.3390/diagnostics11050901.

Abstract:
Pancreatic cystic lesions (PCL) are a frequent and underreported incidental finding on CT scans and can transform into neoplasms with devastating consequences. We developed and evaluated an algorithm based on a two-step nnU-Net architecture for automated detection of PCL on CTs. A total of 543 cysts on 221 abdominal CTs were manually segmented in 3D by a radiology resident in consensus with a board-certified radiologist specialized in abdominal radiology. This information was used to train a two-step nnU-Net for detection with the performance assessed depending on lesions’ volume and location in comparison to three human readers of varying experience. Mean sensitivity was 78.8 ± 0.1%. The sensitivity was highest for large lesions with 87.8% for cysts ≥220 mm3 and for lesions in the distal pancreas with up to 96.2%. The number of false-positive detections for cysts ≥220 mm3 was 0.1 per case. The algorithm’s performance was comparable to human readers. To conclude, automated detection of PCL on CTs is feasible. The proposed model could serve radiologists as a second reading tool. All imaging data and code used in this study are freely available online.
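Detection statistics of the kind reported here (sensitivity stratified by lesion volume, false positives per case) can be tabulated from per-case lesion lists. A sketch under an assumed data layout; the `detection_stats` helper and its field names are illustrative, not from the paper:

```python
def detection_stats(cases, vol_threshold):
    """cases: list of dicts, each with
         'lesions'   = [(volume_mm3, detected_bool), ...]  ground-truth lesions
         'false_pos' = number of false-positive detections in that case.
    Returns (sensitivity for lesions >= vol_threshold, false positives per case)."""
    eligible = [(v, d) for c in cases for v, d in c["lesions"] if v >= vol_threshold]
    sensitivity = sum(1 for _, d in eligible if d) / len(eligible)
    fp_per_case = sum(c["false_pos"] for c in cases) / len(cases)
    return sensitivity, fp_per_case
```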
6

Huo, Lu, Xiaoxin Hu, Qin Xiao, Yajia Gu, Xu Chu, and Luan Jiang. "Segmentation of whole breast and fibroglandular tissue using nnU-Net in dynamic contrast enhanced MR images." Magnetic Resonance Imaging 82 (October 2021): 31–41. http://dx.doi.org/10.1016/j.mri.2021.06.017.

7

Heidenreich, Julius F., Tobias Gassenmaier, Markus J. Ankenbrand, Thorsten A. Bley, and Tobias Wech. "Self-configuring nnU-net pipeline enables fully automatic infarct segmentation in late enhancement MRI after myocardial infarction." European Journal of Radiology 141 (August 2021): 109817. http://dx.doi.org/10.1016/j.ejrad.2021.109817.

8

Tahuk, Paulus Klau, Agustinus Agung Dethan, and Stefanus Sio. "ENERGY AND NITROGEN BALANCE OF MALE BALI CATTLE FATTENED BY GREEN FEED IN SMALLHOLDER FARMS." Journal of Tropical Animal Science and Technology 2, no. 1 (July 31, 2020): 23–36. http://dx.doi.org/10.32938/jtast.v2i1.590.

Abstract:
The experiment was conducted over 3 months, from March to June 2013, using nine male Bali cattle aged 2.5-3.5 years (an average of 3.0 years, estimated from dentition) with initial body weights of 227-290 kg (an average of 257.40 ± 23.60 kg) at the fattening stalls of the Bero Sembada Farmers Group, Laen Manen Sub-District, Belu Regency, East Nusa Tenggara. The research followed the ranchers' usual cattle-fattening practice, including the management of feeding, housing, and health. The feeds given during the study were Centrosema pubescens, Clitoria ternatea, fresh Zea mays straw, Pennisetum purpuphoides, Leucaena leucocephala, natural grass, Pennisetum purpureum, and Sesbania grandiflora. The variables measured were the consumption and digestibility of energy and N, energy and N balance, net nitrogen utilization (NNU), and biological value. Data were analyzed with descriptive analysis procedures. The results showed an energy intake of 30.657 Mcal/head/day; fecal, digested, and urinary energy of 10.136, 20.522, and 1.026 Mcal/head/day, respectively; and an energy balance of 19.496 Mcal/head/day. N intake was 169 g/head/day; fecal N excretion, urinary N excretion, and digested N were 50, 20, and 119 g/head/day, respectively, giving an N balance of 104 g/head/day. Net nitrogen utilization and the biological value of nitrogen were 58.580% and 83.194%, respectively. It can be concluded that male Bali cattle in the finishing phase, fattened on a single-forage feed, show improved energy and nitrogen intake and digestibility, resulting in a positive energy and nitrogen balance and in sufficiently high net nitrogen utilization and feed protein biological value.
9

Jung, Seok-Ki, Ho-Kyung Lim, Seungjun Lee, Yongwon Cho, and In-Seok Song. "Deep Active Learning for Automatic Segmentation of Maxillary Sinus Lesions Using a Convolutional Neural Network." Diagnostics 11, no. 4 (April 12, 2021): 688. http://dx.doi.org/10.3390/diagnostics11040688.

Abstract:
The aim of this study was to segment the maxillary sinus into maxillary bone, air, and lesion, and to evaluate the accuracy of the segmentation by comparing and analyzing it against the results produced by experts. We randomly selected 83 cases for deep active learning. Our active learning framework consists of three steps; each step adds new volumes to improve the performance of the model trained on limited datasets, while inferring automatically using the model trained in the previous step. We evaluated the effect of active learning on dental cone-beam computed tomography (CBCT) volumes with our customized 3D nnU-Net in all three steps. The dice similarity coefficients (DSCs) at each step for air were 0.920 ± 0.17, 0.925 ± 0.16, and 0.930 ± 0.16, respectively, and for the lesion 0.770 ± 0.18, 0.750 ± 0.19, and 0.760 ± 0.18, respectively. The time consumed by convolutional neural network (CNN)-assisted, manually modified segmentation decreased by approximately 493.2 s for 30 scans in the second step and by approximately 362.7 s for 76 scans in the last step. In conclusion, this study demonstrates that a deep active learning framework can alleviate annotation effort and cost by training efficiently on limited CBCT datasets.
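The iterative scheme described above (train on a small labeled set, let the current model pre-segment the next batch of volumes, have an expert correct those predictions, then retrain) can be sketched generically. This is a hypothetical illustration; the `train`, `predict`, and `correct` callables are placeholders, not the authors' code:

```python
def active_learning(labeled, pool, rounds, per_round, train, predict, correct):
    """labeled: list of (volume, mask) pairs; pool: unlabeled volumes.
    train/predict/correct stand in for model training, inference,
    and expert correction of a predicted mask."""
    model = train(labeled)
    for _ in range(rounds):
        batch, pool = pool[:per_round], pool[per_round:]
        # the expert corrects the model's prediction instead of
        # segmenting each new volume from scratch
        labeled = labeled + [(x, correct(x, predict(model, x))) for x in batch]
        model = train(labeled)
    return model, labeled
```

The payoff reported in the abstract is exactly this loop's economy: each round's annotations start from a model prediction, so annotation time per scan drops as the model improves.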
10

Bouget, David, Roelant S. Eijgelaar, André Pedersen, Ivar Kommers, Hilko Ardon, Frederik Barkhof, Lorenzo Bello, et al. "Glioblastoma Surgery Imaging–Reporting and Data System: Validation and Performance of the Automated Segmentation Task." Cancers 13, no. 18 (September 17, 2021): 4674. http://dx.doi.org/10.3390/cancers13184674.

Abstract:
For patients with presumed glioblastoma, essential tumor characteristics are determined from preoperative MR images to optimize the treatment strategy. This procedure is time-consuming and subjective when performed manually or by crude eyeballing. The standardized GSI-RADS aims to provide neurosurgeons with automatic tumor segmentations to extract tumor features rapidly and objectively. In this study, we improved automatic tumor segmentation, compared the agreement with manual raters, described the technical details of the different components of GSI-RADS, and determined their speed. Two recent neural network architectures were considered for the segmentation task: nnU-Net and AGU-Net. Two preprocessing schemes were introduced to investigate the tradeoff between performance and processing speed. A summarized description of the tumor feature extraction and standardized reporting process is included. The trained architectures for automatic segmentation and the code for computing the standardized report are distributed as open-source and as open-access software. Validation studies were performed on a dataset of 1594 gadolinium-enhanced T1-weighted MRI volumes from 13 hospitals and 293 T1-weighted MRI volumes from the BraTS challenge. The glioblastoma tumor core segmentation reached a Dice score slightly below 90%, a patientwise F1-score close to 99%, and a 95th percentile Hausdorff distance slightly below 4.0 mm on average with either architecture and the heavy preprocessing scheme. A patient MRI volume can be segmented in less than one minute, and a standardized report can be generated in up to five minutes. The proposed GSI-RADS software showed robust performance on a large collection of MRI volumes from various hospitals and generated results within a reasonable runtime.
11

Tampu, Iulian Emil, Neda Haj-Hosseini, and Anders Eklund. "Does Anatomical Contextual Information Improve 3D U-Net-Based Brain Tumor Segmentation?" Diagnostics 11, no. 7 (June 25, 2021): 1159. http://dx.doi.org/10.3390/diagnostics11071159.

Abstract:
Effective, robust, and automatic tools for brain tumor segmentation are needed for the extraction of information useful in treatment planning. Recently, convolutional neural networks have shown remarkable performance in the identification of tumor regions in magnetic resonance (MR) images. Context-aware artificial intelligence is an emerging concept for the development of deep learning applications for computer-aided medical image analysis. A large portion of the current research is devoted to the development of new network architectures to improve segmentation accuracy by using context-aware mechanisms. In this work, it is investigated whether or not the addition of contextual information from the brain anatomy in the form of white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) masks and probability maps improves U-Net-based brain tumor segmentation. The BraTS2020 dataset was used to train and test two standard 3D U-Net (nnU-Net) models that, in addition to the conventional MR image modalities, used the anatomical contextual information as extra channels in the form of binary masks (CIM) or probability maps (CIP). For comparison, a baseline model (BLM) that only used the conventional MR image modalities was also trained. The impact of adding contextual information was investigated in terms of overall segmentation accuracy, model training time, domain generalization, and compensation for fewer MR modalities available for each subject. Median (mean) Dice scores of 90.2 (81.9), 90.2 (81.9), and 90.0 (82.1) were obtained on the official BraTS2020 validation dataset (125 subjects) for BLM, CIM, and CIP, respectively. Results show that there is no statistically significant difference when comparing Dice scores between the baseline model and the contextual information models (p > 0.05), even when comparing performances for high and low grade tumors independently. In a few low grade cases where improvement was seen, the number of false positives was reduced. 
Moreover, no improvements were found when considering model training time or domain generalization. Only in the case of compensation for fewer MR modalities available for each subject did the addition of anatomical contextual information significantly improve (p < 0.05) the segmentation of the whole tumor. In conclusion, there is no overall significant improvement in segmentation performance when using anatomical contextual information in the form of either binary WM, GM, and CSF masks or probability maps as extra channels.
12

Isensee, Fabian, Paul F. Jaeger, Simon A. A. Kohl, Jens Petersen, and Klaus H. Maier-Hein. "nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation." Nature Methods, December 7, 2020. http://dx.doi.org/10.1038/s41592-020-01008-z.

13

Tan, Wenjun, Peifang Huang, Xiaoshuo Li, Genqiang Ren, Yufei Chen, and Jinzhu Yang. "A review on segmentation of lung parenchyma based on deep learning methods." Journal of X-Ray Science and Technology, August 28, 2021, 1–15. http://dx.doi.org/10.3233/xst-210956.

Abstract:
Precise segmentation of lung parenchyma is essential for effective analysis of the lung. Because of its clear contrast and large regional area compared with other tissues in the chest, lung tissue is relatively easy to segment, although special attention to the details of lung segmentation is still needed. To improve the quality and speed of lung parenchyma segmentation based on computed tomography (CT) or computed tomography angiography (CTA) images, the 4th International Symposium on Image Computing and Digital Medicine (ISICDM 2020) provided interesting and valuable research ideas and approaches. For the lung parenchyma segmentation task, 9 of the 12 participating teams used the U-Net network or its modified forms, while other teams used methods such as attention mechanisms and multi-scale feature fusion to improve segmentation accuracy. Among them, U-Net achieved the best results, with a final dice coefficient of 0.991 for CT segmentation and 0.984 for CTA segmentation. In addition, the attention U-Net and nnU-Net networks also performed well. In this review paper, the methods chosen by the 12 teams from different research groups are evaluated and their segmentation results are analyzed for the study and reference of those involved.
14

Zhang, Guobin, Zhiyong Yang, Bin Huo, Shude Chai, and Shan Jiang. "Automatic segmentation of organs at risk and tumors in CT images of lung cancer from partially labelled datasets with a semi-supervised conditional nnU-Net." Computer Methods and Programs in Biomedicine, September 2021, 106419. http://dx.doi.org/10.1016/j.cmpb.2021.106419.

15

Mariscal Harana, J., V. Vergani, C. Asher, R. Razavi, A. King, B. Ruijsink, and E. Puyol Anton. "Large-scale, multi-vendor, multi-protocol, quality-controlled analysis of clinical cine CMR using artificial intelligence." European Heart Journal - Cardiovascular Imaging 22, Supplement_2 (June 1, 2021). http://dx.doi.org/10.1093/ehjci/jeab090.046.

Abstract:
Funding Acknowledgements: Type of funding sources: Public grant(s) – National budget only. Main funding source(s): Advancing Impact Award scheme of the EPSRC Impact Acceleration Account at King’s College London. Background: Artificial intelligence (AI) has the potential to facilitate the automation of CMR analysis for biomarker extraction. However, most AI algorithms are trained on a specific input domain (e.g., scanner vendor or hospital-tailored imaging protocol) and lack the robustness to perform optimally when applied to CMR data from other input domains. Purpose: To develop and validate a robust CMR analysis tool for automatic segmentation and cardiac function analysis which achieves state-of-the-art performance for multi-vendor short-axis cine CMR images. Methods: The current work is an extension of our previously published quality-controlled AI-based tool for cine CMR analysis [1]. We deployed an AI algorithm that is equipped to handle different image sizes and domains automatically - the ‘nnU-Net’ framework [2] - and retrained our tool using the UK Biobank (UKBB) cohort population (n = 4,872) and a large database of clinical CMR studies obtained from two NHS hospitals (n = 3,406). The NHS hospital data came from three different scanner types: Siemens Aera 1.5T (n = 1,419), Philips Achieva 1.5T and 3T (n = 1,160), and Philips Ingenia 1.5T (n = 827). The nnU-Net was used to segment both ventricles and the myocardium. The proposed method was evaluated on randomly selected test sets from UKBB (n = 488) and NHS (n = 331) and on two external publicly available databases of clinical CMRs acquired on Philips, Siemens, General Electric (GE), and Canon CMR scanners – ACDC (n = 100) [3] and M&Ms (n = 321) [4]. We calculated the Dice scores - which measure the overlap between manual and automatic segmentations - and compared manual vs AI-based measures of biventricular volumes and function. 
Results: Table 1 shows that the Dice scores for the NHS, ACDC, and M&Ms scans are similar to those obtained in the highly controlled, single-vendor and single-field-strength UKBB scans. Although our AI-based tool was only trained on CMR scans from two vendors (Philips and Siemens), it performs similarly on unseen vendors (GE and Canon). Furthermore, it achieves state-of-the-art performance in online segmentation challenges, without being specifically trained on these databases. Table 1 also shows good agreement between manual and automated clinical measures of ejection fraction and ventricular volume and mass. Conclusions: We show that our proposed AI-based tool, which combines training on a large-scale multi-domain CMR database with a state-of-the-art AI algorithm, allows us to robustly deal with routine clinical data from multiple centres, vendors, and field strengths. This is a fundamental step for the clinical translation of AI algorithms. Moreover, our method yields a range of additional metrics of cardiac function (filling and ejection rates, regional wall motion, and strain) at no extra computational cost.
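Of the clinical measures compared in this abstract, ejection fraction follows directly from the segmented end-diastolic and end-systolic volumes. A minimal sketch (the function name and the example volumes are illustrative):

```python
def ejection_fraction(edv_ml, esv_ml):
    """EF (%) = stroke volume / end-diastolic volume.

    edv_ml, esv_ml: end-diastolic and end-systolic ventricular
    volumes in mL, e.g. from summed segmented short-axis slices."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml
```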
16

Aviles, J., G. Maso Talou, O. Camara, M. Mejia Cordova, E. Ferdian, G. Kat, A. Young, et al. "Automatic segmentation of the aorta on multi-center and multi-vendor phase-contrast enhanced magnetic resonance angiographies and the advantages of transfer learning." European Heart Journal - Cardiovascular Imaging 22, Supplement_2 (June 1, 2021). http://dx.doi.org/10.1093/ehjci/jeab090.121.

Abstract:
Funding Acknowledgements: Type of funding sources: Public grant(s) – National budget only. Main funding source(s): Guala A. received funding from the Spanish Ministry of Science, Innovation and Universities. Background: Phase-contrast (PC) enhanced magnetic resonance (MR) angiography (MRA) is a class of angiogram that exploits velocity data to increase the signal-to-noise ratio, thus avoiding the administration of the external contrast agent normally used to segment 4D flow MR data. Training deep learning algorithms to segment PC-MRA requires a large amount of manually annotated data; however, the relative novelty of the sequence, its rapid evolution, and the extensive time needed to manually segment the data limit its availability. Purpose: The aim of this study was to test a deep learning algorithm for the segmentation of multi-center and multi-vendor PC-MRA and to test whether transfer learning (TL) improves performance. Methods: A large dataset (LD) of 262 PC-MRAs and a small dataset (SD) of 22, acquired without contrast agent at 1.5 T on a General Electric and a Siemens scanner, respectively, were manually annotated and divided into training (232 and 15 cases) and testing (30 and 7 cases) sets. Both included PC-MRAs of healthy subjects and of patients with aortic diseases (excluding dissections) and native aortas. A convolutional neural network (CNN) based on the nnU-Net framework [1] was trained on the LD and another on the SD. The left ventricle was removed semi-automatically from the DL segmentations of the LD, as it was not relevant for this application. Each network was then tested on the test set of the dataset it was trained on and on that of the other dataset to assess generalizability. Finally, a fine-tuning transfer learning approach was applied to the LD network and its performance on both test sets was evaluated. Dice score (DS), Hausdorff distance (HD), Jaccard score (J), and Average Symmetrical Surface Distance (ASSD) were used as segmentation quality metrics. 
Results: The LD network achieved good performance on the LD test set, with a DS of 0.904, an ASSD of 1.47, a J of 0.827, and an HD of 6.35, which further improved after removing the left ventricle in post-processing to a DS of 0.942, an ASSD of 0.93, a J of 0.892, and an HD of 3.32. The SD network achieved an average DS of 0.895, an ASSD of 0.59, a J of 0.812, and an HD of 2.05. When tested on the test set of the other dataset, the LD network achieved a DS of 0.612 and the SD network a DS of 0.375, showing limited generalizability. However, applying transfer learning to the LD network improved the evaluation metrics on the SD test set from a DS of 0.612 to 0.858, while slightly worsening performance on the LD test set, without post-processing, to a DS of 0.882. Conclusions: The nnU-Net framework is effective for fast automatic segmentation of the aorta from multi-center and multi-vendor PC-MRA, showing performance comparable with the state of the art. The application of transfer learning increases generalization to data from centers not included in the original training. These results unlock the possibility of fully automatic analysis of multi-vendor, multi-center 4D flow MR data.
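The surface-distance metrics used in this abstract, the Hausdorff distance and the Average Symmetrical Surface Distance (ASSD), can be computed by nearest-neighbour search between the two segmentations' surface point sets. A plain-Python sketch on 2D points (real pipelines operate on 3D voxel surfaces, usually via distance transforms for speed):

```python
def _nn_dist(p, pts):
    # distance from point p to its nearest neighbour in pts
    return min(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 for q in pts)

def hausdorff(a, b):
    # symmetric Hausdorff distance: worst-case nearest-neighbour
    # distance, taken in both directions
    return max(max(_nn_dist(p, b) for p in a),
               max(_nn_dist(q, a) for q in b))

def assd(a, b):
    # average symmetric(al) surface distance: mean nearest-neighbour
    # distance over both surfaces
    d_ab = [_nn_dist(p, b) for p in a]
    d_ba = [_nn_dist(q, a) for q in b]
    return (sum(d_ab) + sum(d_ba)) / (len(d_ab) + len(d_ba))
```

The two metrics summarize the same distances differently: Hausdorff reports the single worst boundary error, while ASSD averages over the whole surface, so a segmentation with one stray outlier can have a good ASSD but a poor Hausdorff distance.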
