Journal articles on the topic 'Lesions segmentation'




Consult the top 50 journal articles for your research on the topic 'Lesions segmentation.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Ma, Tian, Xinlei Zhou, Jiayi Yang, Boyang Meng, Jiali Qian, Jiehui Zhang, and Gang Ge. "Dental Lesion Segmentation Using an Improved ICNet Network with Attention." Micromachines 13, no. 11 (November 7, 2022): 1920. http://dx.doi.org/10.3390/mi13111920.

Abstract:
Precise segmentation of tooth lesions is critical to the creation of an intelligent tooth lesion detection system. To address the problem that tooth lesions are similar to normal tooth tissues and difficult to segment, an improved segmentation method based on the image cascade network (ICNet) is proposed to segment various lesion types, such as calculus, gingivitis, and tartar. First, the ICNet network model is used to achieve real-time segmentation of lesions. Second, the Convolutional Block Attention Module (CBAM) is integrated into the ICNet network structure, and large-size convolutions in the spatial attention module are replaced with layered dilated convolutions to enhance the relevant features while suppressing useless features and to resolve inaccurate lesion segmentations. Finally, part of the convolution in the network model is replaced with asymmetric convolution to reduce the computation added by the attention module. Experimental results show that, compared with Fully Convolutional Networks (FCN), U-Net, SegNet, and other segmentation algorithms, our method significantly improves the segmentation results and processes images at a higher rate, satisfying the accuracy and real-time requirements of tooth lesion segmentation.
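As a rough illustration of the attention idea described above, the following is a minimal sketch (assuming PyTorch) of a CBAM-style spatial attention block in which the usual large-kernel convolution is replaced by stacked dilated convolutions; it is an illustrative stand-in, not the authors' exact ICNet modification.

```python
import torch
import torch.nn as nn

class DilatedSpatialAttention(nn.Module):
    """CBAM-style spatial attention where the usual large convolution is
    replaced by stacked 3x3 dilated convolutions (hypothetical sketch)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 2, kernel_size=3, padding=1, dilation=1),
            nn.Conv2d(2, 2, kernel_size=3, padding=2, dilation=2),
            nn.Conv2d(2, 1, kernel_size=3, padding=4, dilation=4),
        )

    def forward(self, x):
        # Pool along the channel axis to get a 2-channel spatial descriptor.
        avg_pool = torch.mean(x, dim=1, keepdim=True)
        max_pool, _ = torch.max(x, dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn  # reweight the feature map spatially

feats = torch.randn(1, 64, 64, 64)
print(DilatedSpatialAttention()(feats).shape)  # torch.Size([1, 64, 64, 64])
```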
2

Verma, Khushboo, Satwant Kumar, and David Paydarfar. "Automatic Segmentation and Quantitative Assessment of Stroke Lesions on MR Images." Diagnostics 12, no. 9 (August 24, 2022): 2055. http://dx.doi.org/10.3390/diagnostics12092055.

Abstract:
Lesion studies are crucial in establishing brain-behavior relationships, and accurately segmenting the lesion represents the first step in achieving this. Manual lesion segmentation is the gold standard for chronic strokes. However, it is labor-intensive, subject to bias, and limits sample size. Therefore, our objective is to develop an automatic segmentation algorithm for chronic stroke lesions on T1-weighted MR images. Methods: To train our model, we utilized an open-source dataset: ATLAS v2.0 (Anatomical Tracings of Lesions After Stroke). We partitioned the dataset of 655 T1 images with manual segmentation labels into five subsets and performed a 5-fold cross-validation to avoid overfitting of the model. We used a deep neural network (DNN) architecture for model training. Results: To evaluate the model performance, we used three metrics that pertain to diverse aspects of volumetric segmentation, including shape, location, and size. The Dice similarity coefficient (DSC) compares the spatial overlap between manual and machine segmentation. The average DSC was 0.65 (0.61–0.67; 95% bootstrapped CI). Average symmetric surface distance (ASSD) measures contour distances between the two segmentations. ASSD between manual and automatic segmentation was 12 mm. Finally, we compared the total lesion volumes and the Pearson correlation coefficient (ρ) between the manual and automatically segmented lesion volumes, which was 0.97 (p-value < 0.001). Conclusions: We present the first automated segmentation model trained on a large multicentric dataset. This model will enable automated on-demand processing of MRI scans and quantitative chronic stroke lesion assessment.
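The evaluation metrics mentioned in this abstract are straightforward to compute; the sketch below (Python with NumPy/SciPy, on made-up toy data) shows how a Dice similarity coefficient and a Pearson correlation between manual and automatic lesion volumes are typically obtained.

```python
import numpy as np
from scipy.stats import pearsonr

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 3D masks standing in for manual and automatic segmentations.
manual = np.zeros((32, 32, 32), dtype=bool); manual[10:20, 10:20, 10:20] = True
auto = np.zeros_like(manual); auto[12:20, 10:20, 10:20] = True
print("DSC:", round(dice(manual, auto), 3))

# Correlation between per-patient lesion volumes (hypothetical numbers).
manual_vols = np.array([1200, 5400, 300, 9800, 2500])
auto_vols = np.array([1100, 5600, 350, 9500, 2400])
rho, p = pearsonr(manual_vols, auto_vols)
print(f"Pearson rho = {rho:.3f}, p = {p:.4f}")
```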
3

Rossi, Farli. "APPLICATION OF A SEMI-AUTOMATED TECHNIQUE IN LUNG LESION SEGMENTATION." Jurnal Teknoinfo 15, no. 1 (January 15, 2021): 56. http://dx.doi.org/10.33365/jti.v15i1.945.

Abstract:
Segmentation is one of the most important steps in automated medical diagnosis applications, which affects the accuracy of the overall system. In this study, we apply a semi-automated technique that combines an active contour and low-level processing techniques in lung lesion segmentation by extracting lung lesions from thoracic Positron Emission Tomography (PET)/Computed Tomography (CT) images. The lesions were first segmented in Positron Emission Tomography (PET) images which have been converted previously to Standardised Uptake Values (SUVs). The segmented PET images then serve as an initial contour for subsequent active contour segmentation of corresponding CT images. To measure accuracy, the Jaccard Index (JI) was used. Jaccard Index (JI) was calculated by comparing the segmented lesion to alternative segmentations obtained from the QIN lung CT segmentation challenge, which is possible by registering the whole body PET/CT images to the corresponding thoracic CT images. The results showed that the semi-automated technique (combination techniques between an active contour and low-level processing) in lung lesion segmentation has moderate accuracy with an average JI value of 0.76±0.12.
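For reference, the Jaccard index used here as the accuracy measure can be computed directly from two binary masks; the following toy sketch (NumPy, with a hypothetical SUV cut-off standing in for the PET-derived contour) is illustrative only.

```python
import numpy as np

def jaccard_index(a, b):
    """Jaccard index (intersection over union) of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Toy example: a PET-derived mask (SUV threshold) compared against a
# stand-in reference CT segmentation; all values here are made up.
suv = np.random.default_rng(1).uniform(0, 6, size=(64, 64))
pet_mask = suv > 2.5                          # hypothetical SUV cut-off
ct_mask = np.roll(pet_mask, shift=2, axis=0)  # stand-in for the CT reference
print("Jaccard index:", round(jaccard_index(pet_mask, ct_mask), 3))
```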
4

Abdullah, Bassem A., Akmal A. Younis, and Nigel M. John. "Multi-Sectional Views Textural Based SVM for MS Lesion Segmentation in Multi-Channels MRIs." Open Biomedical Engineering Journal 6, no. 1 (May 9, 2012): 56–72. http://dx.doi.org/10.2174/1874120701206010056.

Abstract:
In this paper, a new technique is proposed for automatic segmentation of multiple sclerosis (MS) lesions from brain magnetic resonance imaging (MRI) data. The technique uses a trained support vector machine (SVM) to discriminate between blocks in MS lesion regions and blocks in non-lesion regions, mainly based on textural features with the aid of other features. The classification is performed independently on each of the axial, sagittal, and coronal brain views, and the resulting segmentations are aggregated to provide a more accurate output segmentation. The main contribution of the proposed technique is the use of textural features to detect MS lesions in a fully automated approach that does not rely on manually delineating the MS lesions. In addition, the technique introduces the concept of multi-sectional view segmentation to produce a verified segmentation. The proposed textural-based SVM technique was evaluated using three simulated datasets and more than fifty real MRI datasets, and the results were compared with state-of-the-art methods. The obtained results indicate that the proposed method would be viable for use in clinical practice for the detection of MS lesions in MRI.
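A minimal sketch of the block-wise SVM classification idea, assuming scikit-learn and hypothetical texture features; in the actual method the axial, sagittal, and coronal predictions would additionally be aggregated.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical block-wise texture features (e.g. statistics per 8x8 block).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))       # 500 blocks, 16 texture features each
y = rng.integers(0, 2, size=500)     # 1 = MS-lesion block, 0 = non-lesion block

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)
# In the full method, this block-level classifier would be applied to the
# axial, sagittal, and coronal views separately and the outputs aggregated.
print(clf.predict(X[:5]))
```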
5

Wang, Xueling, Xianmin Meng, and Shu Yan. "Deep Learning-Based Image Segmentation of Cone-Beam Computed Tomography Images for Oral Lesion Detection." Journal of Healthcare Engineering 2021 (September 21, 2021): 1–7. http://dx.doi.org/10.1155/2021/4603475.

Abstract:
This paper aimed to study the adoption of a deep learning (DL) algorithm for the segmentation of oral lesions in cone-beam computed tomography (CBCT) images. 90 patients with oral lesions were taken as research subjects and grouped into blank, control, and experimental groups, whose images were processed by manual segmentation, a threshold segmentation algorithm, and a fully convolutional neural network (FCNN) DL algorithm, respectively. The effects of the different methods on oral lesion CBCT image recognition and segmentation were then analyzed. The results showed no substantial difference in the number of patients with different types of oral lesions among the three groups (P > 0.05). The accuracy of lesion segmentation in the experimental group was as high as 98.3%, while those of the blank group and control group were 78.4% and 62.1%, respectively. The accuracy of CBCT image segmentation in the blank and control groups was considerably inferior to that of the experimental group (P < 0.05). The segmentation of the lesion and the lesion model in the experimental and control groups was evidently superior to the blank group (P < 0.05). In short, the image segmentation accuracy of the FCNN DL method was better than the traditional manual segmentation and threshold segmentation algorithms. Applying the DL segmentation algorithm to CBCT images of oral lesions can accurately identify and segment the lesions.
6

Xiong, Hui, Laith R. Sultan, Theodore W. Cary, Susan M. Schultz, Ghizlane Bouzghar, and Chandra M. Sehgal. "The diagnostic performance of leak-plugging automated segmentation versus manual tracing of breast lesions on ultrasound images." Ultrasound 25, no. 2 (January 25, 2017): 98–106. http://dx.doi.org/10.1177/1742271x17690425.

Abstract:
Purpose: To assess the diagnostic performance of a leak-plugging segmentation method that we have developed for delineating breast masses on ultrasound images. Materials and methods: Fifty-two biopsy-proven breast lesion images were analyzed by three observers using the leak-plugging and manual segmentation methods. From each segmentation method, grayscale and morphological features were extracted and classified as malignant or benign by logistic regression analysis. The performance of leak-plugging and manual segmentations was compared by: size of the lesion, overlap area (Oa) between the margins, and area under the ROC curves (Az). Results: The lesion size from leak-plugging segmentation correlated closely with that from manual tracing (R2 of 0.91). Oa was higher for leak plugging, 0.92 ± 0.01 and 0.86 ± 0.06 for benign and malignant masses, respectively, compared to 0.80 ± 0.04 and 0.73 ± 0.02 for manual tracings. Overall Oa between leak-plugging and manual segmentations was 0.79 ± 0.14 for benign and 0.73 ± 0.14 for malignant lesions. Az for leak plugging was consistently higher (0.910 ± 0.003) compared to 0.888 ± 0.012 for manual tracings. The coefficient of variation of Az between three observers was 0.29% for leak plugging compared to 1.3% for manual tracings. Conclusion: The diagnostic performance, size measurements, and observer variability for automated leak-plugging segmentations were either comparable to or better than those of manual tracings.
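The classification step described here (logistic regression over extracted features, evaluated by the area under the ROC curve, Az) can be sketched as follows with scikit-learn; the features and labels below are randomly generated placeholders, not the study data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# Hypothetical grayscale/morphological features per lesion and biopsy labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(52, 6))      # 52 lesions, 6 features each
y = rng.integers(0, 2, size=52)   # 1 = malignant, 0 = benign

# Cross-validated malignancy probabilities, then the area under the ROC curve.
probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]
print("Az (area under ROC curve):", round(roc_auc_score(y, probs), 3))
```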
7

Wang, Ying, Jie Su, Qiuyu Xu, and Yixin Zhong. "A Collaborative Learning Model for Skin Lesion Segmentation and Classification." Diagnostics 13, no. 5 (February 28, 2023): 912. http://dx.doi.org/10.3390/diagnostics13050912.

Abstract:
The automatic segmentation and classification of skin lesions are two essential tasks in computer-aided skin cancer diagnosis. Segmentation aims to detect the location and boundary of the skin lesion area, while classification is used to evaluate the type of skin lesion. The location and contour information of lesions provided by segmentation is essential for the classification of skin lesions, while the skin disease classification helps generate target localization maps to assist the segmentation task. Although segmentation and classification are studied independently in most cases, we find that meaningful information can be exploited from the correlation of dermatological segmentation and classification tasks, especially when the sample data are insufficient. In this paper, we propose a collaborative learning deep convolutional neural network (CL-DCNN) model based on the teacher–student learning method for dermatological segmentation and classification. To generate high-quality pseudo-labels, we provide a self-training method: the segmentation network is selectively retrained on pseudo-labels screened by the classification network. Specifically, we obtain high-quality pseudo-labels for the segmentation network by providing a reliability measure method. We also employ class activation maps to improve the localization ability of the segmentation network. Furthermore, we provide lesion contour information by using the lesion segmentation masks to improve the recognition ability of the classification network. Experiments were carried out on the ISIC 2017 and ISIC Archive datasets. The CL-DCNN model achieved a Jaccard of 79.1% on the skin lesion segmentation task and an average AUC of 93.7% on the skin disease classification task, which is superior to advanced skin lesion segmentation and classification methods.
8

Liang, Yingbo, and Jian Fu. "Watershed Algorithm for Medical Image Segmentation Based on Morphology and Total Variation Model." International Journal of Pattern Recognition and Artificial Intelligence 33, no. 05 (April 8, 2019): 1954019. http://dx.doi.org/10.1142/s0218001419540193.

Abstract:
The traditional watershed algorithm suffers from false marking in medical image segmentation, which causes over-segmentation, and images may also be contaminated by noise during acquisition. In this study, we propose an improved watershed segmentation algorithm based on morphological processing and the total variation (TV) model for medical image segmentation. First, morphological gradient preprocessing is performed on MRI images of brain lesions. Second, the gradient images are denoised with the total variation model, reducing image noise while retaining the edge information of the brain-lesion MRI images. Then, internal and external markers are obtained by the forced minimum technique, and the gradient amplitude images are corrected using these markers. Finally, the modified gradient image is subjected to the watershed transform. Segmentation and simulation experiments on brain-lesion MRI images are carried out in MATLAB, and the segmentation results are compared with other watershed algorithms. The experimental results demonstrate that our method obtains the smallest number of regions and can extract brain lesions from MRI images effectively. In addition, this method inhibits over-segmentation, improving the segmentation of lesions in brain MRI images.
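A rough sketch of a marker-controlled watershed pipeline with total-variation denoising of the morphological gradient, using scikit-image; the simple threshold-based markers below are a stand-in for the paper's forced-minimum marker extraction, and the sample image replaces a brain MRI slice.

```python
import numpy as np
from skimage import data, filters, morphology, restoration, segmentation

# Stand-in image (scikit-image sample) in place of a brain MRI slice.
image = data.camera() / 255.0

# 1) Morphological gradient, 2) total-variation denoising of the gradient.
gradient = morphology.dilation(image, morphology.disk(2)) - \
           morphology.erosion(image, morphology.disk(2))
gradient = restoration.denoise_tv_chambolle(gradient, weight=0.1)

# 3) Internal/external markers (simplified here with intensity thresholds),
# 4) marker-controlled watershed on the corrected gradient image.
otsu = filters.threshold_otsu(image)
markers = np.zeros_like(image, dtype=int)
markers[image < 0.5 * otsu] = 1   # background marker
markers[image > 1.2 * otsu] = 2   # foreground marker
labels = segmentation.watershed(gradient, markers)
print(labels.shape, np.unique(labels))
```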
9

Kaur, Manpreet, Sunitha Varghese, Leon Jekel, Niklas Tillmanns, Sara Merkaj, Khaled Bousabarah, MingDe Lin, Jitendra Bhawnani, Veronica Chiang, and Mariam Aboian. "NIMG-07. APPLYING A GLIOMA-TRAINED DEEP LEARNING AUTO-SEGMENTATION TOOL ON BM PRE- AND POST-RADIOSURGERY." Neuro-Oncology 24, Supplement_7 (November 1, 2022): vii162—vii163. http://dx.doi.org/10.1093/neuonc/noac209.626.

Abstract:
PURPOSE Stereotactic radiosurgery (SRS) has become the mainstay to treat BM. Follow-up MRI provides important information on lesion treatment response and guides future therapy planning. Volumetric measurements of BM have shown promise over traditional uni- and two-dimensional measurements for more accurate and repeatable assessment. However, routine clinical use has yet to be achieved because the workflow is laborious. In previous work, we developed a PACS-integrated deep learning algorithm for automatic high- and low-grade glioma 3D segmentation. In this work, we applied this U-Net to segment BM on pre- and post-Gamma Knife (GK) MRI and evaluated the performance. METHODS 10 pre- and post-GK studies were autosegmented in five randomly selected patients (melanoma n = 3, breast n = 2). The glioma-trained algorithm segmented the “Whole Tumor” (tumor core + peritumoral edema on T2w-FLAIR) and “Tumor Core” (CE tumor core + necrosis on SPGR). The AI-generated segmentation was then revised as needed by a board-certified neuroradiologist, and the Dice similarity coefficient (DSC) between the revised and automatic volumetric segmentations was calculated. RESULTS Four patients had multicentric (2-4 BM) lesions. The mean ± SD DSC for Whole Tumor and Tumor Core were 0.92 ± 0.06 and 0.46 ± 0.30 for pretreatment and 0.84 ± 0.09 and 0.41 ± 0.25 for posttreatment BM, respectively. The tool detected lesions with a sensitivity of 45% (5/11) for pretreatment and 50% (3/6) for posttreatment lesions. Three pretreatment and all posttreatment lesions that were not detected by the autosegmentation tool showed only very faint hyperintense peritumoral edema on T2w-FLAIR. CONCLUSION Volumetric segmentation of edema on FLAIR using the glioma-trained segmentation algorithm on pre- and post-GK BM did not require major adjustment of the segmentation when the lesion was detected. On the other hand, given the low sensitivity of lesion detection and the low DSC for the enhancing component, dedicated training of the algorithm on annotated BM data will be needed.
10

Mechrez, Roey, Jacob Goldberger, and Hayit Greenspan. "Patch-Based Segmentation with Spatial Consistency: Application to MS Lesions in Brain MRI." International Journal of Biomedical Imaging 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/7952541.

Abstract:
This paper presents an automatic lesion segmentation method based on similarities between multichannel patches. A patch database is built using training images for which the label maps are known. For each patch in the testing image, k similar patches are retrieved from the database. The matching labels for these k patches are then combined to produce an initial segmentation map for the test case. Finally, an iterative patch-based label refinement process based on the initial segmentation map is performed to ensure the spatial consistency of the detected lesions. The method was evaluated in experiments on multiple sclerosis (MS) lesion segmentation in magnetic resonance images (MRI) of the brain. An evaluation was done for each image in the MICCAI 2008 MS lesion segmentation challenge. Results are shown to compete with the state of the art in the challenge. We conclude that the proposed algorithm for segmentation of lesions provides a promising new approach for local segmentation and global detection in medical images.
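The core retrieval-and-fusion step (find the k most similar training patches, then combine their labels) can be sketched with scikit-learn's nearest-neighbor search; the patch features and labels below are synthetic placeholders, not the paper's multichannel MRI patches.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Hypothetical training database: flattened multichannel patches and the
# lesion / non-lesion label of each patch's central voxel.
train_patches = rng.normal(size=(1000, 75))   # e.g. 5x5 patches x 3 channels
train_labels = rng.integers(0, 2, size=1000)
test_patches = rng.normal(size=(20, 75))

k = 7
nn = NearestNeighbors(n_neighbors=k).fit(train_patches)
_, idx = nn.kneighbors(test_patches)

# Fuse the labels of the k retrieved patches by majority vote to obtain an
# initial segmentation decision for each test patch.
initial_labels = (train_labels[idx].mean(axis=1) > 0.5).astype(int)
print(initial_labels)
```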
11

Noor, N. S. M., N. M. Saad, A. R. Abdullah, and N. M. Ali. "Automated segmentation and classification technique for brain stroke." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 3 (June 1, 2019): 1832. http://dx.doi.org/10.11591/ijece.v9i3.pp1832-1841.

Abstract:
Diffusion-Weighted Imaging (DWI) plays an important role in the diagnosis of brain stroke by providing detailed information regarding the soft tissue contrast in the brain. Conventionally, the differential diagnosis of brain stroke lesions is performed manually by professional neuroradiologists in a highly subjective and time-consuming process. This study proposes a segmentation and classification technique to detect brain stroke lesions based on diffusion-weighted imaging (DWI). The types of stroke lesions consist of acute ischemic, sub-acute ischemic, chronic ischemic, and acute hemorrhage. For segmentation, fuzzy c-means (FCM) and active contour are proposed to segment the lesion region. FCM is implemented with active contour to separate the cerebral spinal fluid (CSF) from the hypointense lesion. Pre-processing is applied to the DWI for image normalization, background removal, and image enhancement. The algorithm performance has been evaluated using the Jaccard index, Dice coefficient (DC), and both false positive rate (FPR) and false negative rate (FNR). The average results for the Jaccard index, DC, FPR, and FNR are 0.55, 0.68, 0.23, and 0.23, respectively. A first-order statistical method is applied to the segmentation result to obtain the features for the classifier input. For classification, a bagged tree classifier is proposed to classify the type of stroke, with a classification accuracy of 90.8%. Based on the results, the proposed technique has the potential to segment and classify brain stroke lesions from DWI images.
12

de Oliveira, Marcela, Marina Piacenti-Silva, Fernando Coronetti Gomes da Rocha, Jorge Manuel Santos, Jaime dos Santos Cardoso, and Paulo Noronha Lisboa-Filho. "Lesion Volume Quantification Using Two Convolutional Neural Networks in MRIs of Multiple Sclerosis Patients." Diagnostics 12, no. 2 (January 18, 2022): 230. http://dx.doi.org/10.3390/diagnostics12020230.

Abstract:
Background: Multiple sclerosis (MS) is a neurologic disease of the central nervous system which affects almost three million people worldwide. MS is characterized by a demyelination process that leads to brain lesions, allowing these affected areas to be visualized with magnetic resonance imaging (MRI). Deep learning techniques, especially computational algorithms based on convolutional neural networks (CNNs), have become frequently used methods that perform feature self-learning and enable segmentation of structures in the image, which is useful for quantitative analysis of MRIs, including quantitative analysis of MS. To obtain quantitative information about lesion volume, it is important to perform proper image preprocessing and accurate segmentation. Therefore, we propose a method for volumetric quantification of lesions on MRIs of MS patients using automatic segmentation of the brain and lesions by two CNNs. Methods: We used CNNs at two different moments: the first to perform brain extraction, and the second for lesion segmentation. This study includes four independent MRI datasets: one for training the brain segmentation models, two for training the lesion segmentation model, and one for testing. Results: The proposed brain detection architecture using binary cross-entropy as the loss function achieved a 0.9786 Dice coefficient, 0.9969 accuracy, 0.9851 precision, 0.9851 sensitivity, and 0.9985 specificity. In the second proposed framework for brain lesion segmentation, we obtained a 0.8893 Dice coefficient, 0.9996 accuracy, 0.9376 precision, 0.8609 sensitivity, and 0.9999 specificity. After quantifying the lesion volume of all patients from the test group using our proposed method, we obtained a mean value of 17,582 mm3. Conclusions: We concluded that the proposed algorithm achieved accurate lesion detection and segmentation with reproducibility corresponding to state-of-the-art software tools and manual segmentation. We believe that this quantification method can add value to treatment monitoring and routine clinical evaluation of MS patients.
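Once a binary lesion mask is available, the volumetric quantification itself reduces to counting voxels and multiplying by the voxel volume taken from the image header; a minimal NumPy sketch with assumed spacing values:

```python
import numpy as np

# Hypothetical binary lesion mask and voxel spacing (mm) from the MRI header.
lesion_mask = np.zeros((40, 40, 40), dtype=bool)
lesion_mask[10:25, 12:22, 14:20] = True
voxel_spacing = (1.0, 1.0, 3.0)        # e.g. 1 x 1 x 3 mm voxels

voxel_volume = np.prod(voxel_spacing)  # mm^3 per voxel
lesion_volume = lesion_mask.sum() * voxel_volume
print(f"Lesion volume: {lesion_volume:.0f} mm^3")
```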
13

Pitkänen, Johanna, Juha Koikkalainen, Tuomas Nieminen, Ivan Marinkovic, Sami Curtze, Gerli Sibolt, Hanna Jokinen, et al. "Evaluating severity of white matter lesions from computed tomography images with convolutional neural network." Neuroradiology 62, no. 10 (April 13, 2020): 1257–63. http://dx.doi.org/10.1007/s00234-020-02410-2.

Abstract:
Purpose: Severity of white matter lesion (WML) is typically evaluated on magnetic resonance images (MRI), yet the more accessible, faster, and less expensive method is computed tomography (CT). Our objective was to study whether WML can be automatically segmented from CT images using a convolutional neural network (CNN). The second aim was to compare CT segmentation with MRI segmentation. Methods: The brain images from the Helsinki University Hospital clinical image archive were systematically screened to make CT-MRI image pairs. Selection criteria for the study were that both CT and MRI images were acquired within 6 weeks. In total, 147 image pairs were included. We used CNN to segment WML from CT images. Training and testing of CNN for CT was performed using 10-fold cross-validation, and the segmentation results were compared with the corresponding segmentations from MRI. Results: A Pearson correlation of 0.94 was obtained between the automatic WML volumes of MRI and CT segmentations. The average Dice similarity index validating the overlap between CT and FLAIR segmentations was 0.68 for the Fazekas 3 group. Conclusion: CNN-based segmentation of CT images may provide a means to evaluate the severity of WML and establish a link between CT WML patterns and the current standard MRI-based visual rating scale.
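The 10-fold cross-validation setup described here can be reproduced with scikit-learn's KFold splitter; the sketch below only shows how the 147 image pairs would be partitioned, not the CNN training itself.

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical identifiers for the 147 CT-MRI image pairs.
pairs = np.arange(147)

kf = KFold(n_splits=10, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kf.split(pairs)):
    # Train the CNN on train_idx pairs, evaluate on test_idx pairs.
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```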
14

Fourcade, Constance, Jean-Sebastien Frenel, Noémie Moreau, Gianmarco Santini, Aislinn Brennan, Caroline Rousseau, Marie Lacombe, et al. "PERCIST-like response assessment with FDG PET based on automatic segmentation of all lesions in metastatic breast cancer." Journal of Clinical Oncology 40, no. 16_suppl (June 1, 2022): e13057-e13057. http://dx.doi.org/10.1200/jco.2022.40.16_suppl.e13057.

Abstract:
e13057 Background: In metastatic breast cancer (MBC), treatment response is often assessed with FDG PET per PERCIST, which evaluates changes in SULpeak of the single hottest tumor lesion identified on the baseline and follow-up images. PERCIST therefore does not consider tumor heterogeneity. This work aims to compare responses determined with the automatic segmentation of all lesions to responses determined manually per PERCIST. Methods: 10 MBC patients (61±14 y/o) undergoing either chemo- or hormonotherapy were randomly selected from the prospective EPICURE study (NCT03958136). A baseline and two follow-up FDG PET were acquired at pre-, early- (1 month) and mid-treatment times for each patient. All metastatic lesions on all images were manually segmented by experts. Using the Advanced Normalization Tools (ANTs) image registration method, we warped baseline lesion segmentations to automatically obtain the follow-up ones. These registered segmentations were compared to the ones done manually using standard biomarkers: SULpeak, lesion size and Total Lesion Glycolysis (TLG). Differences between baseline and follow-up images were visually represented by coloring the follow-up segmentations: in green for responsive lesions (decreasing SULpeak) and in red for progressive ones (increasing SULpeak). Two expert physicians were then asked to evaluate treatment response while seeing these colored segmentations. They assessed the FDG PET images in pairs, evaluating for each patient the baseline and one of the corresponding follow-ups in a blinded manner: either the early- or the mid-treatment follow-up. Evaluations were then compared: i) early- vs mid-treatment response and ii) follow-up response vs patient’s clinical outcome. Results: Biomarkers extracted from the registered segmentations were similar to the ones extracted from the manual segmentations, with a Lin correlation coefficient of 0.92, 0.87 and 0.95 for the SULpeak, lesion size and TLG respectively. These findings were obtained within ~10 min, whereas the manual segmentation of the three PET images for any given patient took ~1 h. With the use of colored segmentations, early follow-up evaluations were predictive of mid-treatment response in 65% of the cases. The blinded physicians agreed with the clinical outcomes 85% and 95% of the time for the early- and mid-treatment images respectively. Conclusions: With segmentations automatically derived from ANTs registration, we managed to extract biomarkers that are comparable to the ones obtained with manual segmentations; both segmentations carried similar information. ANTs fast registration and biomarker computation can make it a useful tool in clinical routine. In addition, lesion coloring helped evaluate treatment response, and early-treatment follow-up images were shown to be predictive of mid-treatment response. Clinical trial information: NCT03958136.
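The PET biomarkers compared in this study can be derived from a lesion segmentation and the SUL image; the toy sketch below computes lesion volume, mean SUL, and Total Lesion Glycolysis (TLG) from synthetic data (SULpeak, which requires a 1 cm3 sphere search around the hottest voxel, is omitted for brevity).

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical SUL (SUV normalized by lean body mass) volume and lesion mask.
sul = rng.uniform(0.5, 8.0, size=(64, 64, 32))
lesion_mask = sul > 5.0                        # toy segmentation
voxel_volume_ml = (4.0 * 4.0 * 4.0) / 1000.0   # 4 mm isotropic voxels, in mL

lesion_volume = lesion_mask.sum() * voxel_volume_ml
mean_sul = sul[lesion_mask].mean()
tlg = mean_sul * lesion_volume                 # Total Lesion Glycolysis
print(f"volume = {lesion_volume:.1f} mL, mean SUL = {mean_sul:.2f}, TLG = {tlg:.1f}")
```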
16

M D, Swetha, and Aditya C R. "Noise Invariant Convolution Neural Network for Segmentation of Multiple Sclerosis Lesions from Brain Magnetic Resonance Imaging." International Journal of Online and Biomedical Engineering (iJOE) 18, no. 13 (October 19, 2022): 38–55. http://dx.doi.org/10.3991/ijoe.v18i13.34273.

Abstract:
The objective of this research is to accurately segment multiple sclerosis (MS) lesions of varying sizes in brain Magnetic Resonance Imaging (MRI) and to classify their types. Designing an effective automatic segmentation and classification tool aids doctors in better understanding MS lesion progression. To meet these research challenges, this paper presents a Noise Invariant Convolution Neural Network (NICNN). The NICNN model is efficient in the detection and segmentation of MS lesions of varying sizes in comparison with standard CNN-based segmentation methods. Further, this paper introduces a new cross-validation scheme to address the class imbalance issue by selecting effective features for classifying the type of MS lesion. The experimental outcomes show that the proposed method provides improved Dice Similarity Coefficient (DSC), Positive Predicted Value (PPV), and True Positive Rate (TPR) values compared to the state-of-the-art CNN-based MS lesion segmentation method, and achieves better accuracy in classifying MS lesion types compared to standard MS lesion type classification models.
17

Meyer-Baese, A., T. Schlossbauer, O. Lange, and A. Wismueller. "Small Lesions Evaluation Based on Unsupervised Cluster Analysis of Signal-Intensity Time Courses in Dynamic Breast MRI." International Journal of Biomedical Imaging 2009 (2009): 1–10. http://dx.doi.org/10.1155/2009/326924.

Abstract:
An application of an unsupervised neural network-based computer-aided diagnosis (CAD) system is reported for the detection and characterization of small indeterminate breast lesions, average size 1.1 mm, in dynamic contrast-enhanced MRI. This system enables the extraction of spatial and temporal features of dynamic MRI data and additionally provides a segmentation with regard to identification and regional subclassification of pathological breast tissue lesions. Lesions with an initial contrast enhancement ≥ 50% were selected with semiautomatic segmentation. This conventional segmentation analysis is based on the mean initial signal increase and postinitial course of all voxels included in the lesion. In this paper, we compare the conventional segmentation analysis with unsupervised classification for the evaluation of signal intensity time courses for the differential diagnosis of enhancing lesions in breast MRI. The results suggest that the computerized analysis system based on unsupervised clustering has the potential to increase the diagnostic accuracy of MRI mammography for small lesions and can be used as a basis for computer-aided diagnosis of breast cancer with MR mammography.
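The unsupervised analysis of signal-intensity time courses can be approximated with a generic clustering algorithm; the sketch below uses k-means from scikit-learn on synthetic washout-like and persistent-enhancement-like curves as a simple stand-in for the neural-network clustering used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical relative signal-intensity time courses (5 post-contrast points)
# for the voxels of one enhancing lesion.
n_timepoints = 5
washout = 1.6 - 0.1 * np.arange(n_timepoints)      # toy "malignant-like" curve
persistent = 1.0 + 0.1 * np.arange(n_timepoints)   # toy "benign-like" curve
curves = np.vstack([washout + rng.normal(0, 0.05, (100, n_timepoints)),
                    persistent + rng.normal(0, 0.05, (100, n_timepoints))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(curves)
print(np.bincount(km.labels_))          # voxels per kinetic cluster
print(km.cluster_centers_.round(2))     # prototype time courses
```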
18

Li, Yingjie, Chao Xu, Jubao Han, Ziheng An, Deyu Wang, Haichao Ma, and Chuanxu Liu. "MHAU-Net: Skin Lesion Segmentation Based on Multi-Scale Hybrid Residual Attention Network." Sensors 22, no. 22 (November 11, 2022): 8701. http://dx.doi.org/10.3390/s22228701.

Abstract:
Melanoma is a main factor that leads to skin cancer, and early diagnosis and treatment can significantly reduce the mortality of patients. Skin lesion boundary segmentation is a key to accurately localizing a lesion in dermoscopic images. However, the irregular shape and size of the lesions and the blurred boundary of the lesions pose significant challenges for researchers. In recent years, pixel-level semantic segmentation strategies based on convolutional neural networks have been widely used, but many methods still suffer from the inaccurate segmentation of fuzzy boundaries. In this paper, we proposed a multi-scale hybrid attentional convolutional neural network (MHAU-Net) for the precise localization and segmentation of skin lesions. MHAU-Net has four main components: multi-scale resolution input, hybrid residual attention (HRA), dilated convolution, and atrous spatial pyramid pooling. Multi-scale resolution inputs provide richer visual information, and HRA solves the problem of blurred boundaries and enhances the segmentation results. The Dice, mIoU, average specificity, and sensitivity on the ISIC2018 task 1 validation set were 93.69%, 90.02%, 92.7% and 93.9%, respectively. The segmentation metrics are significantly better than the latest DCSAU-Net, UNeXt, and U-Net, and excellent segmentation results are achieved on different datasets. We performed model robustness validations on the Kvasir-SEG dataset with an overall sensitivity and average specificity of 95.91% and 96.28%, respectively.
19

Hojjatoleslami, S. A., and F. Kruggel. "Segmentation of large brain lesions." IEEE Transactions on Medical Imaging 20, no. 7 (July 2001): 666–69. http://dx.doi.org/10.1109/42.932750.

20

Huang, Mingfeng, Guoqin Xu, Junyu Li, and Jianping Huang. "A Method for Segmenting Disease Lesions of Maize Leaves in Real Time Using Attention YOLACT++." Agriculture 11, no. 12 (December 2, 2021): 1216. http://dx.doi.org/10.3390/agriculture11121216.

Abstract:
Northern leaf blight (NLB) is a serious disease in maize which leads to significant yield losses. Automatic and accurate methods of quantifying disease are crucial for disease identification and quantitative assessment of severity. Leaf images collected with natural backgrounds pose a great challenge to the segmentation of disease lesions. To address these problems, we propose an image segmentation method based on YOLACT++ with an attention module for segmenting disease lesions of maize leaves in natural conditions in order to improve the accuracy and real-time ability of lesion segmentation. The attention module is equipped on the output of the ResNet-101 backbone and the output of the FPN. The experimental results demonstrate that the proposed method improves segmentation accuracy compared with the state-of-the-art disease lesion-segmentation methods. The proposed method achieved 98.71% maize leaf lesion segmentation precision, a comprehensive evaluation index of 98.36%, and a mean Intersection over Union of 84.91%; the average processing time of a single image was about 31.5 ms. The results show that the proposed method allows for the automatic and accurate quantitative assessment of crop disease severity in natural conditions.
21

Tang, Suigu, Xiaoyuan Yu, Chak-Fong Cheang, Zeming Hu, Tong Fang, I.-Cheong Choi, and Hon-Ho Yu. "Diagnosis of Esophageal Lesions by Multi-Classification and Segmentation Using an Improved Multi-Task Deep Learning Model." Sensors 22, no. 4 (February 15, 2022): 1492. http://dx.doi.org/10.3390/s22041492.

Abstract:
It is challenging for endoscopists to accurately detect esophageal lesions during gastrointestinal endoscopic screening due to visual similarities among different lesions in terms of shape, size, and texture among patients. Additionally, endoscopists are busy fighting esophageal lesions every day, hence the need to develop a computer-aided diagnostic tool to classify and segment the lesions in endoscopic images to reduce their burden. Therefore, we propose a multi-task classification and segmentation (MTCS) model, including the Esophageal Lesions Classification Network (ELCNet) and Esophageal Lesions Segmentation Network (ELSNet). The ELCNet was used to classify types of esophageal lesions, and the ELSNet was used to identify lesion regions. We created a dataset by collecting 805 esophageal images from 255 patients and 198 images from 64 patients to train and evaluate the MTCS model. Compared with other methods, the proposed model not only achieved a high accuracy (93.43%) in classification but also achieved a dice similarity coefficient of 77.84% in segmentation. In conclusion, the MTCS model can boost the performance of endoscopists in the detection of esophageal lesions, as it can accurately multi-classify and segment the lesions and is a potential assistant for endoscopists to reduce the risk of oversight.
22

Swetha, R. "Multi-Lesion Segmentation of Diabetic Retinopathy Using Deep Learning." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 2835–38. http://dx.doi.org/10.22214/ijraset.2022.44497.

Abstract:
Diabetic Retinopathy (DR) and Diabetic Macular Edema (DME) are the two major complications of diabetes and have a significant impact on working-age individuals worldwide. DR does not present any early symptoms, so it is important to diagnose DR at an early stage. The two above-mentioned diseases usually depend on the presence and areas of lesions in fundus images. The four main related lesions are soft exudates, hard exudates, microaneurysms, and haemorrhages. Since lesions in retinal fundus images are a pivotal indicator of DR, analyzing retinal fundus images is the most popular method for DR screening. The examination of fundus images is time-consuming, and small lesions are hard to observe. Therefore, adopting deep learning techniques for lesion segmentation is of great importance. In this project, we use one of the deep learning techniques called U-Net, a variant of Convolutional Neural Networks (CNN), for multiple lesion segmentation.
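For readers unfamiliar with the architecture, the following is a compact two-level U-Net sketch in PyTorch (toy channel counts, hypothetical class count for background plus four lesion types); it illustrates the encoder-decoder-with-skip-connections pattern rather than the exact network used in this work.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net: encoder, bottleneck, decoder with skip connections."""
    def __init__(self, in_ch=3, n_classes=5):   # e.g. background + 4 lesion types
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)   # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 3, 128, 128))
print(logits.shape)  # torch.Size([1, 5, 128, 128])
```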
23

Pang, Yachun, Li Li, Wenyong Hu, Yanxia Peng, Lizhi Liu, and Yuanzhi Shao. "Computerized Segmentation and Characterization of Breast Lesions in Dynamic Contrast-Enhanced MR Images Using Fuzzy c-Means Clustering and Snake Algorithm." Computational and Mathematical Methods in Medicine 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/634907.

Abstract:
This paper presents a novel two-step approach that incorporates fuzzy c-means (FCMs) clustering and gradient vector flow (GVF) snake algorithm for lesions contour segmentation on breast magnetic resonance imaging (BMRI). Manual delineation of the lesions by expert MR radiologists was taken as a reference standard in evaluating the computerized segmentation approach. The proposed algorithm was also compared with the FCMs clustering based method. With a database of 60 mass-like lesions (22 benign and 38 malignant cases), the proposed method demonstrated sufficiently good segmentation performance. The morphological and texture features were extracted and used to classify the benign and malignant lesions based on the proposed computerized segmentation contour and radiologists’ delineation, respectively. Features extracted by the computerized characterization method were employed to differentiate the lesions with an area under the receiver-operating characteristic curve (AUC) of 0.968, in comparison with an AUC of 0.914 based on the features extracted from radiologists’ delineation. The proposed method in current study can assist radiologists to delineate and characterize BMRI lesion, such as quantifying morphological and texture features and improving the objectivity and efficiency of BMRI interpretation with a certain clinical value.
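The first step of the approach, fuzzy c-means clustering, can be written in a few lines of NumPy; the sketch below is a minimal generic FCM implementation applied to synthetic one-dimensional intensities, not the paper's full FCM + GVF snake pipeline.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ X / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        u = 1.0 / (dist ** (2.0 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Toy 1-D "pixel intensity" data standing in for a breast DCE-MRI ROI.
X = np.concatenate([np.random.default_rng(1).normal(0.2, 0.05, 300),
                    np.random.default_rng(2).normal(0.8, 0.05, 100)])[:, None]
centers, u = fuzzy_c_means(X, n_clusters=2)
print("cluster centers:", centers.ravel().round(2))
lesion_mask = u.argmax(axis=1) == centers.ravel().argmax()
print("pixels assigned to the brighter (lesion) cluster:", lesion_mask.sum())
```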
24

Rajaraman, Sivaramakrishnan, Feng Yang, Ghada Zamzmi, Zhiyun Xue, and Sameer K. Antani. "A Systematic Evaluation of Ensemble Learning Methods for Fine-Grained Semantic Segmentation of Tuberculosis-Consistent Lesions in Chest Radiographs." Bioengineering 9, no. 9 (August 24, 2022): 413. http://dx.doi.org/10.3390/bioengineering9090413.

Abstract:
Automated segmentation of tuberculosis (TB)-consistent lesions in chest X-rays (CXRs) using deep learning (DL) methods can help reduce radiologist effort, supplement clinical decision-making, and potentially result in improved patient treatment. The majority of works in the literature discuss training automatic segmentation models using coarse bounding box annotations. However, the granularity of the bounding box annotation could result in the inclusion of a considerable fraction of false positives and negatives at the pixel level that may adversely impact overall semantic segmentation performance. This study evaluates the benefits of using fine-grained annotations of TB-consistent lesions toward training the variants of U-Net models and constructing their ensembles for semantically segmenting TB-consistent lesions in both original and bone-suppressed frontal CXRs. The segmentation performance is evaluated using several ensemble methods such as bitwise-AND, bitwise-OR, bitwise-MAX, and stacking. Extensive empirical evaluations showcased that the stacking ensemble demonstrated superior segmentation performance (Dice score: 0.5743, 95% confidence interval: (0.4055, 0.7431)) compared to the individual constituent models and other ensemble methods. To the best of our knowledge, this is the first study to apply ensemble learning to improve fine-grained TB-consistent lesion segmentation performance.
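The simpler mask-level ensembling schemes mentioned here (bitwise-AND, bitwise-OR, MAX) amount to elementwise operations on the member predictions; stacking additionally trains a meta-model and is omitted from this toy NumPy sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-pixel lesion probabilities from three U-Net variants.
probs = [rng.random((128, 128)) for _ in range(3)]
masks = [p > 0.5 for p in probs]            # each member's binary prediction

ens_and = np.logical_and.reduce(masks)      # bitwise-AND ensemble
ens_or = np.logical_or.reduce(masks)        # bitwise-OR ensemble
ens_max = np.maximum.reduce(probs) > 0.5    # MAX over the probability maps

for name, mask in [("AND", ens_and), ("OR", ens_or), ("MAX", ens_max)]:
    print(name, int(mask.sum()), "positive pixels")
```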
25

Foo, Alex, Wynne Hsu, Mong Li Lee, Gilbert Lim, and Tien Yin Wong. "Multi-Task Learning for Diabetic Retinopathy Grading and Lesion Segmentation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 08 (April 3, 2020): 13267–72. http://dx.doi.org/10.1609/aaai.v34i08.7035.

Abstract:
Although deep learning for Diabetic Retinopathy (DR) screening has shown great success in achieving clinically acceptable accuracy for referable versus non-referable DR, there remains a need to provide more fine-grained grading of the DR severity level as well as automated segmentation of lesions (if any) in the retina images. We observe that the DR severity level of an image is dependent on the presence of different types of lesions and their prevalence. In this work, we adopt a multi-task learning approach to perform the DR grading and lesion segmentation tasks. In light of the lack of lesion segmentation mask ground-truths, we further propose a semi-supervised learning process to obtain the segmentation masks for the various datasets. Experiments results on publicly available datasets and a real world dataset obtained from population screening demonstrate the effectiveness of the multi-task solution over state-of-the-art networks.
26

Zortea, Maciel, Stein Olav Skrøvseth, Thomas R. Schopf, Herbert M. Kirchesch, and Fred Godtliebsen. "Automatic Segmentation of Dermoscopic Images by Iterative Classification." International Journal of Biomedical Imaging 2011 (2011): 1–19. http://dx.doi.org/10.1155/2011/972648.

Abstract:
Accurate detection of the borders of skin lesions is a vital first step for computer-aided diagnostic systems. This paper presents a novel automatic approach to segmentation of skin lesions that is particularly suitable for analysis of dermoscopic images. Assumptions about the image acquisition, in particular, the approximate location and color, are used to derive an automatic rule to select small seed regions, likely to correspond to samples of skin and the lesion of interest. The seed regions are used as initial training samples, and the lesion segmentation problem is treated as a binary classification problem. An iterative hybrid classification strategy, based on a weighted combination of estimated posteriors of a linear and quadratic classifier, is used to update both the automatically selected training samples and the segmentation, increasing reliability and final accuracy, especially for those challenging images, where the contrast between the background skin and lesion is low.
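The hybrid classification step, a weighted combination of the posterior probabilities of a linear and a quadratic classifier, can be sketched with scikit-learn's discriminant analysis classes; the seed-region features, labels, and mixing weight below are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)

rng = np.random.default_rng(0)
# Hypothetical color features sampled from automatically selected seed regions:
# class 0 = background skin, class 1 = lesion.
X = np.vstack([rng.normal(0.7, 0.05, size=(200, 3)),    # skin-like RGB
               rng.normal(0.35, 0.08, size=(200, 3))])   # lesion-like RGB
y = np.repeat([0, 1], 200)

lda = LinearDiscriminantAnalysis().fit(X, y)
qda = QuadraticDiscriminantAnalysis().fit(X, y)

pixels = rng.uniform(0.2, 0.9, size=(10, 3))   # unlabeled pixels to classify
w = 0.5                                        # mixing weight (assumed)
posterior = w * lda.predict_proba(pixels) + (1 - w) * qda.predict_proba(pixels)
labels = posterior.argmax(axis=1)              # 1 = lesion
print(labels)
```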
27

Li, Yu, Meilong Zhu, Guangmin Sun, Jiayang Chen, Xiaorong Zhu, and Jinkui Yang. "Weakly supervised training for eye fundus lesion segmentation in patients with diabetic retinopathy." Mathematical Biosciences and Engineering 19, no. 5 (2022): 5293–311. http://dx.doi.org/10.3934/mbe.2022248.

Abstract:
Objective: Diabetic retinopathy is the leading cause of vision loss in working-age adults. Early screening and diagnosis can help to facilitate subsequent treatment and prevent vision loss. Deep learning has been applied in various fields of medical identification. However, current deep learning-based lesion segmentation techniques rely on a large amount of pixel-level labeled ground truth data, which limits their performance and application. In this work, we present a weakly supervised deep learning framework for eye fundus lesion segmentation in patients with diabetic retinopathy. Methods: First, an efficient segmentation algorithm based on grayscale and morphological features is proposed for rapid coarse segmentation of lesions. Then, a deep learning model named Residual-Attention Unet (RAUNet) is proposed for eye fundus lesion segmentation. Finally, a data sample of fundus images with labeled lesions and unlabeled images with coarse segmentation results is jointly used to train RAUNet to broaden the diversity of lesion samples and increase the robustness of the segmentation model. Results: A dataset containing 582 fundus images with labels verified by doctors, including hemorrhage (HE), microaneurysm (MA), hard exudate (EX) and soft exudate (SE), and 903 images without labels was used to evaluate the model. In the ablation test, the proposed RAUNet achieved the highest intersection over union (IOU) on the labeled dataset, and the proposed attention and residual modules both improved the IOU of the UNet benchmark. Using both the images labeled by doctors and the proposed coarse segmentation method, the weakly supervised framework based on the RAUNet architecture significantly improved the mean segmentation accuracy by over 7% on the lesions. Significance: This study demonstrates that combining unlabeled medical images with coarse segmentation results can effectively improve the robustness of the lesion segmentation model and proposes a practical framework for improving the performance of medical image segmentation given limited labeled data samples.
28

Zhang, Jinling, Jun Yang, and Min Zhao. "Automatic Segmentation Algorithm of Magnetic Resonance Image in Diagnosis of Liver Cancer Patients under Deep Convolutional Neural Network." Scientific Programming 2021 (September 10, 2021): 1–13. http://dx.doi.org/10.1155/2021/4614234.

Abstract:
To study the influence of different sequences of magnetic resonance imaging (MRI) images on the segmentation of hepatocellular carcinoma (HCC) lesions, the U-Net was improved. Moreover, a deep fusion network (DFN), a data enhancement strategy, and a random data (RD) strategy were introduced, and a multisequence MRI image segmentation algorithm based on DFN was proposed. Segmentation experiments on single-sequence and multisequence MRI images were designed, and the segmentation results of single-sequence MRI images were compared with those of the fully convolutional network (FCN) algorithm. In addition, an RD experiment and a single-input experiment were also designed. It was found that the sensitivity (0.595 ± 0.145) and DSC (0.587 ± 0.113) obtained by the improved U-Net were significantly higher than the sensitivity (0.405 ± 0.098) and DSC (0.468 ± 0.115, P < 0.05) obtained by U-Net. The sensitivity of the multisequence MRI image segmentation algorithm based on DFN (0.779 ± 0.015) was significantly higher than that of the FCN algorithm (0.604 ± 0.056, P < 0.05). The multisequence MRI image segmentation algorithm based on the DFN had higher indicators for liver cancer lesions than the improved U-Net. When RD was added, it not only increased the DSC of the single-sequence network enhanced by the hepatocyte-specific magnetic resonance contrast agent (Gd-EOB-DTPA) by 1% but also increased the DSC of the multisequence MRI image segmentation algorithm based on DFN by 7.6%. In short, the improved U-Net can significantly improve the recognition rate of small lesions in liver cancer patients. The addition of the RD strategy improved the segmentation indicators of liver cancer lesions of the DFN and can fuse image features of multiple sequences, thereby improving the accuracy of lesion segmentation.
29

Okuboyejo, Damilola, and Oludayo O. Olugbara. "Segmentation of Melanocytic Lesion Images Using Gamma Correction with Clustering of Keypoint Descriptors." Diagnostics 11, no. 8 (July 29, 2021): 1366. http://dx.doi.org/10.3390/diagnostics11081366.

Abstract:
The early detection of skin cancer, especially through the examination of lesions with malignant characteristics, has been reported to significantly decrease the potential fatalities. Segmentation of the regions that contain the actual lesions is one of the most widely used steps for achieving an automated diagnostic process of skin lesions. However, accurate segmentation of skin lesions has proven to be a challenging task in medical imaging because of the intrinsic factors such as the existence of undesirable artifacts and the complexity surrounding the seamless acquisition of lesion images. In this paper, we have introduced a novel algorithm based on gamma correction with clustering of keypoint descriptors for accurate segmentation of lesion areas in dermoscopy images. The algorithm was tested on dermoscopy images acquired from the publicly available dataset of Pedro Hispano hospital to achieve compelling equidistant sensitivity, specificity, and accuracy scores of 87.29%, 99.54%, and 96.02%, respectively. Moreover, the validation of the algorithm on a subset of heavily noised skin lesion images collected from the public dataset of International Skin Imaging Collaboration has yielded the equidistant sensitivity, specificity, and accuracy scores of 80.59%, 100.00%, and 94.98%, respectively. The performance results are propitious when compared to those obtained with existing modern algorithms using the same standard benchmark datasets and performance evaluation indices.
30

Jamil, Uzma, M. Usman Akram, Shehzad Khalid, Sarmad Abbas, and Kashif Saleem. "Computer Based Melanocytic and Nevus Image Enhancement and Segmentation." BioMed Research International 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/2082589.

Abstract:
Digital dermoscopy aids dermatologists in monitoring potentially cancerous skin lesions. Melanoma is the 5th most common form of skin cancer; it is rare but the most dangerous, and it is curable if detected at an early stage. Automated segmentation of a cancerous lesion from normal skin is the most critical yet tricky part of computerized lesion detection and classification. The effectiveness and accuracy of lesion classification are critically dependent on the quality of lesion segmentation. In this paper, we have proposed a novel approach that can automatically preprocess the image and then segment the lesion. The system filters unwanted artifacts including hairs, gel, bubbles, and specular reflection. A novel approach is presented using the concept of wavelets for detection and inpainting of the hairs present in the cancer images. The contrast of the lesion with the skin is enhanced using an adaptive sigmoidal function that takes care of the localized intensity distribution within a given lesion’s images. We then present a segmentation approach to precisely segment the lesion from the background. The proposed approach is tested on the European database of dermoscopic images. Results are compared with competitors to demonstrate the superiority of the suggested approach.
31

Jaworek-Korjakowska, Joanna, and Pawel Kleczek. "Region Adjacency Graph Approach for Acral Melanocytic Lesion Segmentation." Applied Sciences 8, no. 9 (August 22, 2018): 1430. http://dx.doi.org/10.3390/app8091430.

Abstract:
Malignant melanoma is among the fastest increasing malignancies in many countries. Due to its propensity to metastasize and lack of effective therapies for most patients with advanced disease, early detection of melanoma is a clinical imperative. In non-Caucasian populations, melanomas are frequently located in acral volar areas and their dermoscopic appearance differs from the non-acral ones. Although lesion segmentation is a natural preliminary step towards its further analysis, so far virtually no acral skin lesion segmentation method has been proposed. Our goal was to develop an effective segmentation algorithm dedicated for acral lesions. We obtain a superpixel oversegmentation of a lesion image by performing clustering in a joint color-spatial 5d space defined by coordinates of CIELAB color space and spatial coordinates of the image. We then construct a region adjacency graph based on this superpixel representation. We obtain the ultimate segmentation result by performing a hierarchical region merging. The proposed segmentation method has been tested on 134 color dermoscopic images of different types of acral melanocytic lesions (including melanoma) from various sources. It achieved an average Dice index value of 0.85, accuracy 0.91, precision 0.89, sensitivity 0.87, and specificity 0.88. Experimental results suggest the effectiveness of the proposed method, which would help improve the accuracy of other diagnostic algorithms for acral melanoma detection. The results also suggest that the computational approach towards lesion segmentation yields more stable output than manual segmentation by dermatologists.
32

Li, Dapeng, and Xiaoguang Liu. "Design of an Incremental Music Teaching and Assisted Therapy System Based on Artificial Intelligence Attention Mechanism." Occupational Therapy International 2022 (June 16, 2022): 1–11. http://dx.doi.org/10.1155/2022/7117986.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
As artificial intelligence technology continues to advance, it is playing an increasingly important role across industries, including in incremental music teaching and assisted therapy systems. This study designs artificial intelligence models for the segmentation of multiple sclerosis (MS) lesions from the perspectives of attention mechanisms, contextual information guidance, and long-range dependencies, combined with incremental music teaching. Automatic and accurate segmentation of MS lesions is achieved through multidimensional analysis of multimodal magnetic resonance imaging data, providing a basis for physicians to quantitatively analyze MS lesions and thereby assisting in the diagnosis and treatment of MS. To address the highly variable location, size, number, and shape of MS lesions, the paper first designs a 3D context-guided module based on Kronecker convolution to integrate lesion information from different fields of view, starting from the capture of lesion contextual information. A 3D spatial attention module is then introduced to enhance the representation of lesion features in MRI images. The experiments confirm that the context-guided module, cross-dimensional cross-attention module, and multidimensional feature similarity module designed for the characteristics of MS lesions are effective, and that the proposed attentional context U-Net and multidimensional cross-attention U-Net offer clear advantages on objective evaluation metrics for lesion segmentation. Combined with the incremental music teaching approach to assist treatment, this provides a new idea for intelligent assisted therapy. From algorithm design to experimental validation, in terms of accuracy, experimental complexity, computational cost, and time cost, the proposed attention-based system combined with incremental music teaching shows distinct advantages in the MS lesion segmentation task.
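
A generic 3D spatial attention block of the kind referenced in this abstract might look as follows (a CBAM-style sketch in PyTorch, not the paper's exact module):

```python
import torch
import torch.nn as nn

class SpatialAttention3D(nn.Module):
    """CBAM-style spatial attention for 3D feature maps (generic sketch)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv3d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                          # x: (B, C, D, H, W)
        avg_pool = x.mean(dim=1, keepdim=True)     # channel-wise average map
        max_pool = x.max(dim=1, keepdim=True).values  # channel-wise max map
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn                            # re-weight voxels by attention
```
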
33

Fuller, Sarah N., Ahmad Shafiei, David J. Venzon, David J. Liewehr, Michal Mauda Havanuk, Maran G. Ilanchezhian, Maureen Edgerly, et al. "Tumor Doubling Time Using CT Volumetric Segmentation in Metastatic Adrenocortical Carcinoma." Current Oncology 28, no. 6 (November 1, 2021): 4357–66. http://dx.doi.org/10.3390/curroncol28060370.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Adrenocortical carcinoma (ACC) is a rare malignancy with an overall unfavorable prognosis. Clinicians treating patients with ACC have noted accelerated growth in metastatic liver lesions that requires rapid intervention compared to other metastatic locations. This study measured and compared the growth rates of metastatic ACC lesions in the lungs, liver, and lymph nodes using volumetric segmentation. A total of 12 patients with metastatic ACC (six male; six female) were selected based on their medical history. Computed tomography (CT) exams were retrospectively reviewed and a sample of ≤5 metastatic lesions per organ was selected for evaluation. Lesions in the liver, lung, and lymph nodes were measured and evaluated by volumetric segmentation. Statistical analyses were performed to compare the volumetric growth rates of the lesions in each organ system. In this cohort, 5/12 patients had liver lesions, 7/12 had lung lesions, and 5/12 had lymph node lesions. A total of 92 lesions were evaluated and segmented for lesion volumetry. The volume doubling time per organ system was 27 days in the liver, 90 days in the lungs, and 95 days in the lymph nodes. In this series of 12 patients with metastatic ACC, liver lesions showed a faster growth rate than lung or lymph node lesions.
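
Volume doubling time, as used above, is conventionally computed as DT = t · ln 2 / ln(V2/V1), assuming exponential growth; a small sketch with made-up volumes:

```python
import math

def volume_doubling_time(v1_ml, v2_ml, days_between):
    """Tumor volume doubling time in days: DT = t * ln(2) / ln(V2 / V1)."""
    return days_between * math.log(2) / math.log(v2_ml / v1_ml)

# Hypothetical lesion growing from 10 mL to 20 mL over 30 days -> DT = 30 days
print(volume_doubling_time(10.0, 20.0, 30))
```
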
34

Jayachandran, A., and B. AnuSheeba. "Hybrid Melanoma Classification System Using Multi-Layer Fuzzy C-Means Clustering and Deep Convolutional Neural Network." Journal of Medical Imaging and Health Informatics 11, no. 11 (November 1, 2021): 2709–15. http://dx.doi.org/10.1166/jmihi.2021.3873.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Skin cancer is considered one of the most common types of cancer in several countries. Due to the difficulty and subjectivity of the clinical diagnosis of skin lesions, computer-aided diagnosis (CAD) systems are being developed to assist experts in performing more reliable diagnoses. The clinical analysis and diagnosis of skin lesions rely not only on visual information but also on the context information provided by the patient. Skin lesion segmentation plays a significant part in the early and precise identification of skin cancer using CAD models. However, the segmentation of skin lesions in dermoscopic images is difficult because of artefacts (hairs, gel bubbles, ruler markers), unclear boundaries, poor contrast, and other factors. In this work, a multi-class skin lesion classification system is developed based on multi-layered fuzzy C-means clustering and deep convolutional neural networks. We evaluate the performance of the proposed MLFCM with DCNN model on multi-class skin cancer dermoscopy images. Our results suggest that it is possible to boost the performance of skin lesion segmentation and classification simultaneously by training a unified model to perform both tasks in a mutually bootstrapping way.
35

Jekel, Leon, Khaled Bousabarah, MingDe Lin, Sara Merkaj, Manpreet Kaur, Arman Avesta, Sanjay Aneja, et al. "NIMG-02. PACS-INTEGRATED AUTO-SEGMENTATION WORKFLOW FOR BRAIN METASTASES USING NNU-NET." Neuro-Oncology 24, Supplement_7 (November 1, 2022): vii162. http://dx.doi.org/10.1093/neuonc/noac209.622.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
PURPOSE: Monitoring metastatic disease to the brain is laborious and time-consuming, especially in the setting of multiple metastases and when performed manually. Response assessment in brain metastases based on the maximal unidimensional diameter, as per the RANO-BM guideline, is commonly performed; however, accurate volumetric lesion estimates can be crucial for clinical decision-making and enhance outcome prediction. We propose a deep learning (DL)-based auto-segmentation approach with the potential to improve time-efficiency, reproducibility, and robustness against inter-rater variability. MATERIALS AND METHODS: We retrospectively retrieved 259 patients with a total of 916 lesions from our institutional database from 2014 to 2021. Patients with a prior history of local radiation therapy or surgery were excluded. Manually generated trainee segmentations were revised and adjusted by a board-certified radiologist and served as ground truth for the evaluation of segmentation accuracy. Model performance was tested via the Dice similarity coefficient (DSC). Volumetric measurements were then obtained within our PACS-integrated workflow on Visage 7 (Visage Imaging, Inc., San Diego, CA) at the click of one button. RESULTS: For model training and evaluation, a 90:10 train-test split on the patient level was performed (n = 234:25 patients; n = 861:55 lesions). A DL algorithm (nnU-Net) was incrementally trained on 10 batches of 23 patients. The DSC of the U-Net gradually increased throughout the training process and reached a plateau of 0.85. The sensitivity of the algorithm was 83%, with detection of 46 out of 55 lesions in the testing dataset. The lesions that were not detected by the algorithm were below 5 mm in size. The false positive rate was 8% (n = 4/50). CONCLUSION: Our study demonstrates the feasibility of PACS-integrated automated segmentation workflows for brain metastases. An incremental-training approach is recommended to adapt DL algorithms to specific hospital settings.
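
The Dice similarity coefficient used to score the model can be computed from two binary masks as follows (a generic implementation, not tied to the Visage or nnU-Net tooling):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0
```
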
36

Lu, Fangfang, Chi Tang, Tianxiang Liu, Zhihao Zhang, and Leida Li. "Multi-Attention Segmentation Networks Combined with the Sobel Operator for Medical Images." Sensors 23, no. 5 (February 24, 2023): 2546. http://dx.doi.org/10.3390/s23052546.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Medical images are an important basis for diagnosing diseases, and CT images in particular are an important tool for diagnosing lung lesions. However, manual segmentation of infected areas in CT images is time-consuming and laborious. With their excellent feature extraction capabilities, deep learning-based methods have been widely used for automatic lesion segmentation of COVID-19 CT images, but their segmentation accuracy is still limited. To effectively quantify the severity of lung infections, we propose a Sobel operator combined with multi-attention networks for COVID-19 lesion segmentation (SMA-Net). In our SMA-Net method, an edge feature fusion module uses the Sobel operator to add edge detail information to the input image. To guide the network to focus on key regions, SMA-Net introduces a self-attentive channel attention mechanism and a spatial linear attention mechanism. In addition, the Tversky loss function is adopted in the segmentation network to better handle small lesions. Comparative experiments on public COVID-19 datasets show that the average Dice similarity coefficient (DSC) and intersection over union (IoU) of the proposed SMA-Net model are 86.1% and 77.8%, respectively, which are better than those of most existing segmentation networks.
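
For reference, a common form of the Tversky loss mentioned above, sketched for binary segmentation; the alpha/beta values are illustrative, not those used in SMA-Net:

```python
import torch

def tversky_loss(probs, target, alpha=0.7, beta=0.3, eps=1e-6):
    """Tversky loss for binary segmentation.

    With alpha > beta, false negatives are penalized more than false
    positives, which tends to help with small lesions.
    """
    probs = probs.reshape(-1)
    target = target.reshape(-1).float()
    tp = (probs * target).sum()          # soft true positives
    fn = ((1 - probs) * target).sum()    # soft false negatives
    fp = (probs * (1 - target)).sum()    # soft false positives
    return 1 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)
```
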
37

Wang, Deli, Zheng Gong, Yanfen Zhang, and Shouxi Wang. "Convolutional Neural Network Intelligent Segmentation Algorithm-Based Magnetic Resonance Imaging in Diagnosis of Nasopharyngeal Carcinoma Foci." Contrast Media & Molecular Imaging 2021 (August 13, 2021): 1–9. http://dx.doi.org/10.1155/2021/2033806.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The aim of this study was to explore the value of a convolutional neural network (CNN)-based magnetic resonance imaging (MRI) intelligent segmentation model in the identification of nasopharyngeal carcinoma (NPC) lesions. The multisequence cross convolutional (MSCC) method was used in the complex convolutional network algorithm to establish the two-dimensional (2D) ResUNet intelligent segmentation model for MRI images of NPC lesions. Moreover, a multisequence multidimensional fusion segmentation model (MSCC-MDF) was further established. With 45 NPC patients as the research subjects, the Dice coefficient, Hausdorff distance (HD), and percentage of area difference (PAD) were calculated to evaluate the segmentation of MRI lesions. The results showed that the MSCC-processed 2D-ResUNet model had the largest Dice coefficient for segmenting NPC tumor lesions, 0.792 ± 0.045, and also had the smallest HD and PAD, which were 5.94 ± 0.41 mm and 15.96 ± 1.232%, respectively. When the batch size was 5, the convergence curve was relatively smooth and the convergence speed was the best. The largest Dice coefficient of the MSCC-MDF model for segmenting NPC tumor lesions was 0.896 ± 0.09, and its HD and PAD were the smallest, at 5.07 ± 0.54 mm and 14.41 ± 1.33%, respectively; its Dice coefficient was significantly higher than those of other algorithms, while its HD and PAD were significantly lower (P < 0.05). In summary, the MSCC-MDF model significantly improved the segmentation of MRI lesions in NPC patients, which provides a reference for the diagnosis of NPC.
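
The Hausdorff distance reported above can be computed between two binary masks roughly as follows; this sketch works in voxel units, so the voxel spacing would still have to be applied to obtain millimetres:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(mask_a, mask_b):
    """Symmetric Hausdorff distance (in voxels) between two binary masks,
    computed on their foreground coordinates."""
    pts_a = np.argwhere(mask_a)   # coordinates of foreground voxels in A
    pts_b = np.argwhere(mask_b)   # coordinates of foreground voxels in B
    d_ab = directed_hausdorff(pts_a, pts_b)[0]
    d_ba = directed_hausdorff(pts_b, pts_a)[0]
    return max(d_ab, d_ba)
```
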
38

Driessen, Julia, Gerben J. C. Zwezerijnen, Jakoba J. Eertink, Marie José Kersten, Anton Hagenbeek, Otto S. Hoekstra, Josée M. Zijlstra, and Ronald Boellaard. "Baseline Metabolic Tumor Volume in 18FDG-PET-CT Scans in Classical Hodgkin Lymphoma Using Semi-Automatic Segmentation." Blood 134, Supplement_1 (November 13, 2019): 4049. http://dx.doi.org/10.1182/blood-2019-125495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Introduction Baseline metabolic tumor volume (bMTV) is increasingly studied as a prognostic factor for classical Hodgkin lymphoma (cHL). Before implementation as a clinical prognostic marker, it is important to investigate different methods for deriving bMTV since not all methods are suitable for each type of malignancy. Semi-automatic segmentation is influenced less by observer bias and variability compared to manual segmentation and might therefore be more reliable for assessing bMTV. However, not much is known about the use of different semi-automatic segmentation methods and how this influences the prognostic value of bMTV in cHL. Here we present a comparison of bMTV derived with 6 semi-automatic segmentation methods. In addition, a visual quality scoring of all segmentations is performed to gain insight into which segmentation methods could be used to determine bMTV in cHL. Methods We selected 61 baseline 18FDG-PET-CT scans that met specific quality criteria (http://EARL.EANM.org) from patients treated in the Transplant BRaVE study for relapsed/refractory cHL [Blood 2018 132:2923]. Six semi-automatic segmentation methods were applied using the Accurate tool, an in-house developed software application which has already been validated in other types of cancer, including diffuse large B-cell lymphoma [Eur Radiol 2019 06178:9, J Nucl Med. 2018;59(suppl 1):1753]. We compared two fixed thresholds (SUV4.0 and SUV2.5), two relative thresholds (A50P: a contrast corrected 50% of standard uptake value (SUV) peak, and 41max: 41% of SUVmax), and 2 majority vote methods, MV2 and MV3 selecting delineations of ≥2 and ≥3 of previously mentioned methods, respectively. Quality of the segmentation was scored using visual quality scores (QS) by two reviewers (JD, GZ): QS-1 for complete selections containing all visible tumor localizations; QS-2 when segmentations 'flood' into regions with physiological FDG uptake; QS-3 when segmentations do not select all visible lesions; or QS-4: a combination of QS-2 and QS-3. In addition, the quality of the delineation was rated: QS-A for good visual delineation of lesions; QS-B for too small delineation; and QS-C for too large delineation. All segmentations that had score QS-2 or QS-4 were manually adapted by erasing regions that flooded into areas with high physiological uptake. Figure 1 shows examples of the quality scores. We used Spearman's correlations to compare the bMTV of all semi-automatic methods. Comparison of quality scores was performed using chi-square tests. Results The median bMTV differed substantially among the segmentation methods, ranging from 24 mL for SUV4.0 to 88 mL for 41max (Table 1). However, there was a high significant correlation (p <0.0001) between all methods with spearman coefficients ranging between 0.77 and 0.93 (Table 2). The quality of the segmentation was best using the SUV2.5 threshold with QS-1 in 64% of scans and delineation was best for MV3 with QS-A in 56% (Table 3). The segmentation quality was significantly better when less than 5 lesions were present on a scan. A large difference was observed for SUV2.5 with score QS-1 in 91% of cases for scans with <5 lesions (n=22), compared to QS-1 in 49% for scans containing ≥5 lesions (n=39) (p <0.001; Table 3). The delineation quality did not depend on the number of lesions. However, for SUV2.5, A50P and MV3, the delineation was considered better when the SUVmax of selected volumes of interest (VOI) was <10, while SUV4.0 performed significantly better with a SUVmax ≥10 (Table 3). 
Conclusions We found a good correlation between all methods, suggesting that the segmentation method used will probably not influence the predictive value of bMTV. Ease of use was highest with a semi-automatic segmentation of bMTV using the SUV2.5 segmentation method. SUV2.5 had the best visual quality and needed least manual adaptation. To investigate possible implementation of bMTV in clinical practice, we will validate the quality of the segmentation methods and the predictive value of bMTV in a larger cohort of patients with other prognostic parameters including quantitative radiomics analysis of baseline PET-scans. Disclosures Kersten: Bristol-Myers Squibb: Honoraria, Research Funding; Gilead: Honoraria; Roche: Honoraria, Research Funding; Celgene: Honoraria, Research Funding; Novartis: Honoraria; Mundipharma: Honoraria, Research Funding; Amgen: Honoraria, Research Funding; Miltenyi: Honoraria; Takeda Oncology: Research Funding; Kite Pharma: Honoraria, Research Funding. Zijlstra:Janssen: Honoraria; Gilead: Consultancy, Honoraria, Membership on an entity's Board of Directors or advisory committees; Takeda: Consultancy, Honoraria, Membership on an entity's Board of Directors or advisory committees; Roche: Consultancy, Honoraria, Membership on an entity's Board of Directors or advisory committees.
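
A fixed-threshold segmentation such as SUV2.5 or SUV4.0 reduces, in essence, to thresholding the SUV volume and summing voxel volumes; a minimal sketch (the voxel size is an illustrative assumption, and the manual exclusion of physiological uptake described above is not shown):

```python
import numpy as np

def metabolic_tumor_volume(suv, threshold=2.5, voxel_volume_ml=0.064):
    """Metabolic tumor volume (mL) from an SUV image using a fixed threshold,
    e.g. SUV2.5 or SUV4.0. The voxel volume here assumes a 4x4x4 mm grid."""
    mask = suv >= threshold
    return mask.sum() * voxel_volume_ml
```
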
39

Kalinovsky, A., V. Liauchuk, and A. Tarasau. "LESION DETECTION IN CT IMAGES USING DEEP LEARNING SEMANTIC SEGMENTATION TECHNIQUE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W4 (May 10, 2017): 13–17. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w4-13-2017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of deep learning. For training and testing of the algorithms, a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. Algorithms based on deep convolutional networks were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding-window technique, and direct detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.
40

Vélez, Paulina, Manuel Miranda, Carmen Serrano, and Begoña Acha. "Does a Previous Segmentation Improve the Automatic Detection of Basal Cell Carcinoma Using Deep Neural Networks?" Applied Sciences 12, no. 4 (February 17, 2022): 2092. http://dx.doi.org/10.3390/app12042092.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Basal Cell Carcinoma (BCC) is the most frequent skin cancer, and its increasing incidence is producing a high overload in dermatology services. It is therefore convenient to aid physicians in detecting it early. In this paper, we propose a tool for the detection of BCC to provide prioritization in the teledermatology consultation. First, we analyze whether a prior segmentation of the lesion improves its subsequent classification. Second, we analyze three deep neural networks and ensemble architectures to distinguish between BCC and nevus, and between BCC and other skin lesions. The best segmentation results are obtained with a SegNet deep neural network. An accuracy of 98% for distinguishing BCC from nevus and 95% for classifying BCC vs. all lesions has been obtained. The proposed algorithm outperforms the winner of the ISIC 2019 challenge in almost all metrics. Finally, we conclude that when deep neural networks are used for classification, a prior segmentation of the lesion does not improve the classification results, while an ensemble of different neural network configurations improves classification performance compared with individual neural network classifiers. Regarding the segmentation step, supervised deep learning-based methods outperform unsupervised ones.
41

Ferrante, Matteo, Lisa Rinaldi, Francesca Botta, Xiaobin Hu, Andreas Dolp, Marta Minotti, Francesca De Piano, et al. "Application of nnU-Net for Automatic Segmentation of Lung Lesions on CT Images and Its Implication for Radiomic Models." Journal of Clinical Medicine 11, no. 24 (December 9, 2022): 7334. http://dx.doi.org/10.3390/jcm11247334.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Radiomics investigates the predictive role of quantitative parameters calculated from radiological images. In oncology, tumour segmentation constitutes a crucial step of the radiomic workflow. Manual segmentation is time-consuming and prone to inter-observer variability. In this study, a state-of-the-art deep-learning network for automatic segmentation (nnU-Net) was applied to computed tomography images of lung tumour patients, and its impact on the performance of survival radiomic models was assessed. In total, 899 patients were included, from two proprietary datasets and one public dataset. Different network architectures (2D, 3D) were trained and tested on different combinations of the datasets. Automatic segmentations were compared to reference manual segmentations performed by physicians using the Dice similarity coefficient. Subsequently, the accuracy of radiomic models for survival classification based on either manual or automatic segmentations was compared, considering both hand-crafted and deep-learning features. The best agreement between automatic and manual contours (Dice = 0.78 ± 0.12) was achieved by averaging 2D and 3D predictions and applying customised post-processing. The accuracy of the survival classifier (ranging between 0.65 and 0.78) was not statistically different when using manual versus automatic contours, with both hand-crafted and deep features. These results support the promising role nnU-Net can play in automatic segmentation, accelerating the radiomic workflow without impairing the models’ accuracy. Further investigations on different clinical endpoints and populations are encouraged to confirm and generalise these findings.
42

Tong, Xiaozhong, Junyu Wei, Bei Sun, Shaojing Su, Zhen Zuo, and Peng Wu. "ASCU-Net: Attention Gate, Spatial and Channel Attention U-Net for Skin Lesion Segmentation." Diagnostics 11, no. 3 (March 12, 2021): 501. http://dx.doi.org/10.3390/diagnostics11030501.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Segmentation of skin lesions is a challenging task because of the wide range of skin lesion shapes, sizes, colors, and texture types. In the past few years, deep learning networks such as U-Net have been successfully applied to medical image segmentation and exhibited faster and more accurate performance. In this paper, we propose an extended version of U-Net for the segmentation of skin lesions using the concept of the triple attention mechanism. We first selected regions using attention coefficients computed by the attention gate and contextual information. Second, a dual attention decoding module consisting of spatial attention and channel attention was used to capture the spatial correlation between features and improve segmentation performance. The combination of the three attentional mechanisms helped the network to focus on a more relevant field of view of the target. The proposed model was evaluated using three datasets, ISIC-2016, ISIC-2017, and PH2. The experimental results demonstrated the effectiveness of our method with strong robustness to the presence of irregular borders, lesion and skin smooth transitions, noise, and artifacts.
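
An additive attention gate of the kind used in attention U-Net variants can be sketched as follows (a generic PyTorch module that assumes the skip features and gating signal share the same spatial size; it is not the ASCU-Net implementation):

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate in the style of Attention U-Net (generic sketch)."""
    def __init__(self, in_channels, gating_channels, inter_channels):
        super().__init__()
        self.theta_x = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x, g):
        # x: skip-connection features; g: gating signal (assumed same spatial size)
        attn = torch.sigmoid(self.psi(torch.relu(self.theta_x(x) + self.phi_g(g))))
        return x * attn   # attention coefficients suppress irrelevant regions
```
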
43

Moreau, Noémie, Caroline Rousseau, Constance Fourcade, Gianmarco Santini, Aislinn Brennan, Ludovic Ferrer, Marie Lacombe, et al. "Automatic Segmentation of Metastatic Breast Cancer Lesions on 18F-FDG PET/CT Longitudinal Acquisitions for Treatment Response Assessment." Cancers 14, no. 1 (December 26, 2021): 101. http://dx.doi.org/10.3390/cancers14010101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Metastatic breast cancer patients receive lifelong medication and are regularly monitored for disease progression. The aim of this work was to (1) propose networks to segment breast cancer metastatic lesions on longitudinal whole-body PET/CT and (2) extract imaging biomarkers from the segmentations and evaluate their potential to determine treatment response. Baseline and follow-up PET/CT images of 60 patients from the EPICUREseinmeta study were used to train two deep-learning models to segment breast cancer metastatic lesions: One for baseline images and one for follow-up images. From the automatic segmentations, four imaging biomarkers were computed and evaluated: SULpeak, Total Lesion Glycolysis (TLG), PET Bone Index (PBI) and PET Liver Index (PLI). The first network obtained a mean Dice score of 0.66 on baseline acquisitions. The second network obtained a mean Dice score of 0.58 on follow-up acquisitions. SULpeak, with a 32% decrease between baseline and follow-up, was the biomarker best able to assess patients’ response (sensitivity 87%, specificity 87%), followed by TLG (43% decrease, sensitivity 73%, specificity 81%) and PBI (8% decrease, sensitivity 69%, specificity 69%). Our networks constitute promising tools for the automatic segmentation of lesions in patients with metastatic breast cancer allowing treatment response assessment with several biomarkers.
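
Total lesion glycolysis (TLG), one of the biomarkers above, is simply the segmented metabolic volume multiplied by the mean SUV inside it; a minimal sketch:

```python
import numpy as np

def total_lesion_glycolysis(suv, lesion_mask, voxel_volume_ml):
    """TLG = metabolic tumor volume (mL) * mean SUV inside the segmented lesions."""
    lesion_mask = lesion_mask.astype(bool)
    volume_ml = lesion_mask.sum() * voxel_volume_ml
    mean_suv = suv[lesion_mask].mean()
    return volume_ml * mean_suv
```
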
44

Hui, Haisheng, Xueying Zhang, Zelin Wu, and Fenlian Li. "Dual-Path Attention Compensation U-Net for Stroke Lesion Segmentation." Computational Intelligence and Neuroscience 2021 (August 31, 2021): 1–16. http://dx.doi.org/10.1155/2021/7552185.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
For the segmentation of stroke lesions, the attention U-Net model based on the self-attention mechanism can suppress irrelevant regions in an input image while highlighting salient features useful for specific tasks. However, when the lesion is small and its contour is blurred, attention U-Net may generate wrong attention coefficient maps, leading to incorrect segmentation results. To cope with this issue, we propose a dual-path attention compensation U-Net (DPAC-UNet), which consists of a primary path network and an auxiliary path network. Both networks are attention U-Net models and identical in structure. The primary path network is the core network that performs accurate lesion segmentation and outputs the final segmentation result. The auxiliary path network generates auxiliary attention compensation coefficients and sends them to the primary path network to compensate for and correct possible attention coefficient errors. To realize the compensation mechanism of DPAC-UNet, we propose a weighted binary cross-entropy Tversky (WBCE-Tversky) loss to train the primary path network for accurate segmentation, and another compound loss function, called tolerance loss, to train the auxiliary path network to generate auxiliary compensation attention coefficient maps with an expanded coverage area. We conducted segmentation experiments using 239 MRI scans from the Anatomical Tracings of Lesions After Stroke (ATLAS) dataset to evaluate the performance and effectiveness of our method. The experimental results show that the DSC score of the proposed DPAC-UNet is 6% higher than that of the single-path attention U-Net, and also higher than those of existing segmentation methods in the related literature. Our method therefore demonstrates strong performance in stroke lesion segmentation.
45

Hwang, Yoo Na, Min Ji Seo, and Sung Min Kim. "A Segmentation of Melanocytic Skin Lesions in Dermoscopic and Standard Images Using a Hybrid Two-Stage Approach." BioMed Research International 2021 (April 6, 2021): 1–19. http://dx.doi.org/10.1155/2021/5562801.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The segmentation of a skin lesion is regarded as very challenging because of the low contrast between the lesion and the surrounding skin, the existence of various artifacts, and different imaging acquisition conditions. The purpose of this study is to segment melanocytic skin lesions in dermoscopic and standard images using a hybrid model combining a new hierarchical K-means and a level set approach, called HK-LS. Although the level set method is usually sensitive to the initial estimate, it is widely used in biomedical image segmentation because it can segment more complex images and does not require a large number of manually labelled images. A preprocessing step makes the proposed model less sensitive to intensity inhomogeneity. The proposed method was evaluated on medical skin images from two publicly available datasets, the PH2 database and the Dermofit database. All skin lesions were segmented with high accuracy (>94%) and Dice coefficients (>0.91) with respect to the ground truth on both databases. The quantitative experimental results reveal that the proposed method yielded significantly better results than other traditional level set models and has a certain advantage over the segmentation results of U-Net in standard images. The proposed method has high clinical applicability for the segmentation of melanocytic skin lesions in dermoscopic and standard images.
46

Abdullah Hamad, Abdulsattar, Mustafa Musa Jaber, Mohammed Altaf Ahmed, Ghaida Muttashar Abdulsahib, Osamah Ibrahim Khalaf, and Zelalem Meraf. "Using Convolutional Neural Networks for Segmentation of Multiple Sclerosis Lesions in 3D Magnetic Resonance Imaging." Advances in Materials Science and Engineering 2022 (April 22, 2022): 1–10. http://dx.doi.org/10.1155/2022/4905115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Multiple sclerosis is diagnosed using magnetic resonance imaging to detect its lesions. Experts usually perform this detection manually, but there is interest in automating it to speed up the diagnosis and monitoring of the disease, and a variety of automatic image segmentation methods have been proposed to quickly detect these lesions. In the first approach, a Gaussian mixture model is constructed to identify outliers in each image. Then, using a set of rules based on expert knowledge of multiple sclerosis lesions, the outliers of the model that do not match the lesions' characteristics are discarded. Furthermore, segmented lesions usually correspond to gray matter-rich brain regions. In some cases, false positives can be detected, but the rules used cannot eliminate all errors without jeopardizing the quality of the segmentation. The second method involves training a convolutional neural network (CNN) that can segment lesions based on a set of training images. This technique learns a set of filters that, when applied to small sections of an image called “patches,” produce a set of characteristics that can be used to classify each voxel of the image as lesion or healthy tissue. The results show that the networks are capable of producing results on the studied database comparable to those produced by the algorithms in the literature.
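
The Gaussian-mixture outlier step can be approximated as follows; flagging the lowest-likelihood voxels with a quantile cutoff is an illustrative assumption, not the expert rule set described in the paper:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_outlier_mask(intensities, n_components=3, quantile=0.01):
    """Flag low-likelihood voxels under a Gaussian mixture of tissue
    intensities as candidate lesions (threshold choice is illustrative)."""
    x = intensities.reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(x)
    log_lik = gmm.score_samples(x)                 # per-voxel log-likelihood
    threshold = np.quantile(log_lik, quantile)     # cutoff for "outlier" voxels
    return (log_lik < threshold).reshape(intensities.shape)
```
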
47

Satheesha, T. Y., D. Sathyanarayana, and M. N. Giri Prasad. "Proposed Threshold Algorithm for Accurate Segmentation for Skin Lesion." International Journal of Biomedical and Clinical Engineering 4, no. 2 (July 2015): 40–47. http://dx.doi.org/10.4018/ijbce.2015070104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Automated diagnosis of skin cancer can be achieved easily only with effective segmentation of the skin lesion, but this is a highly challenging task due to the presence of intensity variations in images of skin lesions. The authors present a histogram-analysis-based fuzzy C-means threshold technique to overcome these drawbacks. It not only reduces the computational complexity but also unifies the advantages of soft and hard threshold algorithms. The calculation of threshold values, even in the presence of abrupt intensity variations, is simplified, and segmentation of skin lesions is achieved more efficiently with the following algorithm. The experimental verification is done on a large set of skin lesion images containing every possible artifact that commonly degrades segmentation outputs. The algorithm's efficiency was measured through comparison with other prominent threshold methods. The approach has performed reasonably well and can be implemented in expert skin cancer diagnostic systems.
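
A toy version of a fuzzy C-means-derived threshold is sketched below; the midpoint-of-centers rule is an illustrative simplification, not the histogram analysis described in the paper:

```python
import numpy as np

def fcm_threshold(intensities, n_clusters=2, m=2.0, n_iter=50):
    """Toy 1-D fuzzy C-means on pixel intensities; returns the midpoint
    between the extreme fuzzy cluster centers as a segmentation threshold."""
    x = intensities.astype(np.float64).ravel()
    rng = np.random.default_rng(0)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                                 # fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)            # weighted cluster centers
        dist = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / (dist ** (2 / (m - 1)))              # standard FCM membership update
        u /= u.sum(axis=0)
    centers = np.sort(centers)
    return (centers[0] + centers[-1]) / 2.0
```
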
48

Ding, Xiangwen, and Shengsheng Wang. "Efficient Unet with depth-aware gated fusion for automatic skin lesion segmentation." Journal of Intelligent & Fuzzy Systems 40, no. 5 (April 22, 2021): 9963–75. http://dx.doi.org/10.3233/jifs-202566.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Melanoma is a very serious disease. The segmentation of skin lesions is a critical step in diagnosing melanoma. However, skin lesions exhibit large size variations, irregular shapes, blurred borders, and complex background information, so their segmentation remains a challenging problem. Although deep learning models usually achieve good performance for skin lesion segmentation, they have a large number of parameters and FLOPs, which limits their application scenarios. These models also do not make good use of low-level feature maps, which are essential for predicting detailed information. The proposed EUnet-DGF uses MBConv to implement a lightweight encoder while maintaining strong encoding ability. Moreover, the depth-aware gated fusion block designed by us can fuse feature maps of different depths and help predict pixels on small patterns. Experiments conducted on the ISIC 2017 and PH2 datasets show the superiority of our model. In particular, EUnet-DGF accounts for only 19% and 6.8% of the original U-Net in terms of the number of parameters and FLOPs, respectively. It possesses great application potential in practical computer-aided diagnosis systems.
49

Ge, Ting, Ning Mu, Tianming Zhan, Zhi Chen, Wanrong Gao, and Shanxiang Mu. "Brain Lesion Segmentation Based on Joint Constraints of Low-Rank Representation and Sparse Representation." Computational Intelligence and Neuroscience 2019 (July 1, 2019): 1–11. http://dx.doi.org/10.1155/2019/9378014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The segmentation of brain lesions from a brain magnetic resonance (MR) image is of great significance for clinical diagnosis and follow-up treatment. An automatic segmentation method for brain lesions is proposed based on low-rank representation (LRR) and sparse representation (SR) theory. The proposed method decomposes the brain image into a background part composed of brain tissue and a brain lesion part. Considering that each pixel in the brain tissue can be represented by the background dictionary, a low-rank representation that incorporates a sparsity-inducing regularization term is adopted to model this part. The linearized alternating direction method with adaptive penalty (LADMAP) is then used to solve the model, and the brain lesions are obtained from the response of the residual matrix. The presented model not only reflects the global structure of the image but also preserves the local information of the pixels, thus improving the representation accuracy. Experimental results on data from brain tumor patients and multiple sclerosis patients reveal that the proposed method is superior to several existing methods in terms of segmentation accuracy while performing the segmentation automatically.
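
A typical low-rank-plus-sparse formulation of this kind of decomposition reads as follows; the exact regularizers, dictionary construction, and LADMAP parameters in the paper may differ:

```latex
\min_{Z,\,E}\ \|Z\|_{*} \;+\; \lambda \|Z\|_{1} \;+\; \gamma \|E\|_{2,1}
\quad \text{s.t.} \quad X = DZ + E
```

Here X collects the image pixels or patches, D is the background (brain-tissue) dictionary, Z is the low-rank and sparse representation coefficient matrix, and the lesions are read off from the residual matrix E.
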
50

Xie, Fei, Panpan Zhang, Tao Jiang, Jiao She, Xuemin Shen, Pengfei Xu, Wei Zhao, Gang Gao, and Ziyu Guan. "Lesion Segmentation Framework Based on Convolutional Neural Networks with Dual Attention Mechanism." Electronics 10, no. 24 (December 13, 2021): 3103. http://dx.doi.org/10.3390/electronics10243103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Computational intelligence has been widely used in medical information processing, and deep learning methods in particular have many successful applications in medical image analysis. In this paper, we propose an end-to-end medical lesion segmentation framework based on convolutional neural networks with a dual attention mechanism, which integrates both fully and weakly supervised segmentation. The weakly supervised segmentation module achieves accurate lesion segmentation by using bounding-box labels of lesion areas, which addresses the high cost of pixel-level lesion labels in medical images. In addition, a dual attention mechanism (channel and spatial attention) is introduced to enhance the network’s ability to learn visual features by helping it focus on feature extraction from important regions. Compared with the current mainstream approach to weakly supervised segmentation using pseudo labels, it can greatly reduce the gap between ground-truth labels and pseudo labels. The experimental results show that our proposed framework achieved competitive performance on an oral lesion dataset, and the framework was further extended to dermatological lesion segmentation.
