Academic literature on the topic 'Histopathological tumor segmentation'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Histopathological tumor segmentation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Histopathological tumor segmentation"

1

Elidrissi, Sofyan, Ikram Ben Abdel Ouahab, Mohammed Bouhorma, and Fatiha Elouaai. "Unveiling the Clinical Significance of Microsatellite Instability in Colorectal Cancer: Deep Learning and the Segment Anything Model for Accurate Segmentation and Classification." International Journal of Online and Biomedical Engineering (iJOE) 21, no. 06 (2025): 97–110. https://doi.org/10.3991/ijoe.v21i06.54491.

Abstract:
Microsatellite instability (MSI) is crucial for colorectal cancer (CRC) diagnosis and prognosis. Accurate differentiation between MSI and microsatellite stability (MSS) tumors is essential for personalized treatment. This paper introduces a novel approach combining the segment anything model (SAM), Yolov8, and convolutional neural networks (CNNs) for precise segmentation and classification of histopathological images. SAM employs a prompt-based mechanism for segmenting tumor regions like invasive margins, tumor-infiltrating lymphocytes (TILs), and necrotic areas. Integrating SAM’s segmentation with CNN-based classification achieves high-accuracy MSI-H/MSS subtyping by focusing on key histopathological features. Tested on TCGA-CRC data, this approach outperformed traditional methods in segmentation and classification accuracy, enhancing MSI/MSS diagnostic potential and enabling efficient high-throughput analysis in clinical and research settings.
2

Liu, Yiqing, Qiming He, Hufei Duan, Huijuan Shi, Anjia Han, and Yonghong He. "Using Sparse Patch Annotation for Tumor Segmentation in Histopathological Images." Sensors 22, no. 16 (2022): 6053. http://dx.doi.org/10.3390/s22166053.

Abstract:
Tumor segmentation is a fundamental task in histopathological image analysis. Creating accurate pixel-wise annotations for such segmentation tasks in a fully-supervised training framework requires significant effort. To reduce the burden of manual annotation, we propose a novel weakly supervised segmentation framework based on sparse patch annotation, i.e., only small portions of patches in an image are labeled as ‘tumor’ or ‘normal’. The framework consists of a patch-wise segmentation model called PSeger, and an innovative semi-supervised algorithm. PSeger has two branches for patch classification and image classification, respectively. This two-branch structure enables the model to learn more general features and thus reduce the risk of overfitting when learning sparsely annotated data. We incorporate the idea of consistency learning and self-training into the semi-supervised training strategy to take advantage of the unlabeled images. Trained on the BCSS dataset with only 25% of the images labeled (five patches for each labeled image), our proposed method achieved competitive performance compared to the fully supervised pixel-wise segmentation models. Experiments demonstrate that the proposed solution has the potential to reduce the burden of labeling histopathological images.
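The semi-supervised strategy described above combines consistency learning with self-training on unlabeled images. A minimal sketch of a generic pseudo-labeling step (an illustration of the general self-training technique, not the paper's exact algorithm; the function name and threshold are my assumptions):

```python
import numpy as np

def pseudo_labels(probs: np.ndarray, threshold: float = 0.9):
    """Self-training step: keep only confidently predicted unlabeled patches.

    probs: predicted tumor probabilities for unlabeled patches.
    Returns the pseudo-labels for the retained patches and the retention mask.
    """
    conf = np.maximum(probs, 1.0 - probs)   # confidence of a binary prediction
    keep = conf >= threshold                # discard uncertain patches
    labels = (probs >= 0.5).astype(int)     # hard labels from probabilities
    return labels[keep], keep

probs = np.array([0.97, 0.55, 0.08, 0.91, 0.5])
labels, keep = pseudo_labels(probs)
print(labels.tolist(), int(keep.sum()))  # prints: [1, 0, 1] 3
```

Raising the threshold yields fewer but cleaner pseudo-labels; the usual trade-off is label noise versus the amount of extra training signal.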
3

Zadeh Shirazi, Amin, Eric Fornaciari, Mark D. McDonnell, et al. "The Application of Deep Convolutional Neural Networks to Brain Cancer Images: A Survey." Journal of Personalized Medicine 10, no. 4 (2020): 224. http://dx.doi.org/10.3390/jpm10040224.

Abstract:
In recent years, improved deep learning techniques have been applied to biomedical image processing for the classification and segmentation of different tumors based on magnetic resonance imaging (MRI) and histopathological imaging (H&E) clinical information. Deep Convolutional Neural Networks (DCNNs) architectures include tens to hundreds of processing layers that can extract multiple levels of features in image-based data, which would be otherwise very difficult and time-consuming to be recognized and extracted by experts for classification of tumors into different tumor types, as well as segmentation of tumor images. This article summarizes the latest studies of deep learning techniques applied to three different kinds of brain cancer medical images (histology, magnetic resonance, and computed tomography) and highlights current challenges in the field for the broader applicability of DCNN in personalized brain cancer care by focusing on two main applications of DCNNs: classification and segmentation of brain cancer tumors images.
4

van der Kamp, Ananda, Thomas de Bel, Ludo van Alst, et al. "Automated Deep Learning-Based Classification of Wilms Tumor Histopathology." Cancers 15, no. 9 (2023): 2656. http://dx.doi.org/10.3390/cancers15092656.

Abstract:
(1) Background: Histopathological assessment of Wilms tumors (WT) is crucial for risk group classification to guide postoperative stratification in chemotherapy pre-treated WT cases. However, due to the heterogeneous nature of the tumor, significant interobserver variation between pathologists in WT diagnosis has been observed, potentially leading to misclassification and suboptimal treatment. We investigated whether artificial intelligence (AI) can contribute to accurate and reproducible histopathological assessment of WT through recognition of individual histopathological tumor components. (2) Methods: We assessed the performance of a deep learning-based AI system in quantifying WT components in hematoxylin and eosin-stained slides by calculating the Sørensen–Dice coefficient for fifteen predefined renal tissue components, including six tumor-related components. We trained the AI system using multiclass annotations from 72 whole-slide images of patients diagnosed with WT. (3) Results: The overall Dice coefficient for all fifteen tissue components was 0.85 and for the six tumor-related components was 0.79. Tumor segmentation worked best to reliably identify necrosis (Dice coefficient 0.98) and blastema (Dice coefficient 0.82). (4) Conclusions: Accurate histopathological classification of WT may be feasible using a digital pathology-based AI system in a national cohort of WT patients.
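The Sørensen–Dice coefficient used to score the fifteen tissue components is a simple overlap ratio between a predicted mask and a reference annotation. A minimal NumPy sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sørensen–Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 4x4 masks: prediction misses one annotated pixel and adds one false positive
truth = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))  # 2*3 / (4+4) = 0.75
```

In multiclass settings such as the fifteen renal tissue components, the coefficient is computed per class on one-vs-rest binary masks and then averaged.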
5

Park, Youngjae, Jinhee Park, and Gil-Jin Jang. "Efficient Perineural Invasion Detection of Histopathological Images Using U-Net." Electronics 11, no. 10 (2022): 1649. http://dx.doi.org/10.3390/electronics11101649.

Abstract:
Perineural invasion (PNI), a sign of poor diagnosis and tumor metastasis, is common in a variety of malignant tumors. The infiltrating patterns and morphologies of tumors vary by organ and histological diversity, making PNI detection difficult in biopsy, which must be performed manually by pathologists. As the diameters of PNI nerves are measured on a millimeter scale, the PNI region is extremely small compared to the whole pathological image. In this study, an efficient deep learning-based method is proposed for detecting PNI regions in multiple types of cancers using only PNI annotations without detailed segmentation maps for each nerve and tumor cells obtained by pathologists. The key idea of the proposed method is to train the adopted deep learning model, U-Net, to capture the boundary regions where two features coexist. A boundary dilation method and a loss combination technique are proposed to improve the detection performance of PNI without requiring full segmentation maps. Experiments were conducted with various combinations of boundary dilation widths and loss functions. It is confirmed that the proposed method effectively improves PNI detection performance from 0.188 to 0.275. Additional experiments were also performed on normal nerve detection to validate the applicability of the proposed method to the general boundary detection tasks. The experimental results demonstrate that the proposed method is also effective for general tasks, and it improved nerve detection performance from 0.511 to 0.693.
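The boundary-dilation idea, supervising a band of pixels around the annotated boundary where nerve and tumor features coexist, can be sketched in pure NumPy (a simplified interpretation of the method; the names and the square structuring element are my assumptions):

```python
import numpy as np

def dilate(mask: np.ndarray, width: int) -> np.ndarray:
    """Binary dilation with a (2*width+1)^2 square structuring element."""
    padded = np.pad(mask.astype(bool), width)
    out = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    for dy in range(2 * width + 1):          # OR over all shifts of the window
        for dx in range(2 * width + 1):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def boundary_band(mask: np.ndarray, width: int) -> np.ndarray:
    """Band of `width` pixels around the mask boundary: dilation minus erosion."""
    eroded = ~dilate(~mask.astype(bool), width)  # erosion = dilation of complement
    return dilate(mask, width) & ~eroded

mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True                # a 3x3 annotated region
band = boundary_band(mask, 1)
print(int(band.sum()))               # 24 pixels in the 1-pixel boundary band
```

Widening the band gives the model more positive pixels to learn from at the cost of localization precision, matching the paper's experiments over dilation widths.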
6

Sun, Yibao, Zhaoyang Xu, Yihao Guo, et al. "Scale-Adaptive viable tumor burden estimation via histopathological microscopy image segmentation." Computers in Biology and Medicine 189 (May 2025): 109915. https://doi.org/10.1016/j.compbiomed.2025.109915.

7

Mutkule, Prasad R., Nilesh P. Sable, Parikshit N. Mahalle, and Gitanjali R. Shinde. "Histopathological parameter and brain tumor mapping using distributed optimizer tuned explainable AI classifier." Journal of Autonomous Intelligence 7, no. 5 (2024): 1617. http://dx.doi.org/10.32629/jai.v7i5.1617.

Abstract:
Brain tumors represent a critical and severe challenge worldwide; early and accurate diagnosis is necessary to improve predictions for individuals with brain tumors. Several studies on brain tumor mapping have been conducted recently; however, the methods have some drawbacks, including poor image quality, a lack of data, and limited generalization ability. To tackle these drawbacks, this research presents a distributed optimizer tuned explainable AI classifier model for brain tumor mapping from histopathological images. The foraging gyps africanus optimization enabled explainable artificial intelligence (FGAO enabled explainable AI) combines the advantages of the explainable AI classifier model and a hybrid spatio-temporal attention-based ResUNet segmentation model. The hybrid spatio-temporal attention-based ResUNet segmentation model accurately segments the histopathological images, leveraging both spatio-temporal attention and the ResUNet model, which addresses performance degradation problems. The nature-inspired algorithm draws inspiration from foraging and hunting traits to optimize the tunable parameters of the explainable AI classifier. The SHAP model in the explainable AI translates insights into predictions, producing explanations for the decisions made by the CNN model, which fosters end-user confidence. The experimental results show that the FGAO-enabled explainable AI model outperforms conventional approaches, with an accuracy of 95.75%, a sensitivity of 95.10%, and a specificity of 96.32% for TP 80.
8

Althubaity, DaifAllah D., Faisal Fahad Alotaibi, Abdalla Mohamed Ahmed Osman, et al. "Automated Lung Cancer Segmentation in Tissue Micro Array Analysis Histopathological Images Using a Prototype of Computer-Assisted Diagnosis." Journal of Personalized Medicine 13, no. 3 (2023): 388. http://dx.doi.org/10.3390/jpm13030388.

Abstract:
Background: Lung cancer is a fatal disease that kills approximately 85% of those diagnosed with it. In recent years, advances in medical imaging have greatly improved the acquisition, storage, and visualization of various pathologies, making it a necessary component in medicine today. Objective: Develop a computer-aided diagnostic system to detect lung cancer early by segmenting tumor and non-tumor tissue on Tissue Micro Array Analysis (TMA) histopathological images. Method: The prototype computer-aided diagnostic system was developed to segment tumor areas, non-tumor areas, and fundus on TMA histopathological images. Results: The system achieved an average accuracy of 83.4% and an F-measurement of 84.4% in segmenting tumor and non-tumor tissue. Conclusion: The computer-aided diagnostic system provides a second diagnostic opinion to specialists, allowing for more precise diagnoses and more appropriate treatments for lung cancer.
9

Altini, Nicola, Emilia Puro, Maria Giovanna Taccogna, et al. "Tumor Cellularity Assessment of Breast Histopathological Slides via Instance Segmentation and Pathomic Features Explainability." Bioengineering 10, no. 4 (2023): 396. http://dx.doi.org/10.3390/bioengineering10040396.

Abstract:
The segmentation and classification of cell nuclei are pivotal steps in the pipelines for the analysis of bioimages. Deep learning (DL) approaches are leading the digital pathology field in the context of nuclei detection and classification. Nevertheless, the features that are exploited by DL models to make their predictions are difficult to interpret, hindering the deployment of such methods in clinical practice. On the other hand, pathomic features can be linked to an easier description of the characteristics exploited by the classifiers for making the final predictions. Thus, in this work, we developed an explainable computer-aided diagnosis (CAD) system that can be used to support pathologists in the evaluation of tumor cellularity in breast histopathological slides. In particular, we compared an end-to-end DL approach that exploits the Mask R-CNN instance segmentation architecture with a two steps pipeline, where the features are extracted while considering the morphological and textural characteristics of the cell nuclei. Classifiers that are based on support vector machines and artificial neural networks are trained on top of these features in order to discriminate between tumor and non-tumor nuclei. Afterwards, the SHAP (Shapley additive explanations) explainable artificial intelligence technique was employed to perform a feature importance analysis, which led to an understanding of the features processed by the machine learning models for making their decisions. An expert pathologist validated the employed feature set, corroborating the clinical usability of the model. Even though the models resulting from the two-stage pipeline are slightly less accurate than those of the end-to-end approach, the interpretability of their features is clearer and may help build trust for pathologists to adopt artificial intelligence-based CAD systems in their clinical workflow. 
To further show the validity of the proposed approach, it has been tested on an external validation dataset, which was collected from IRCCS Istituto Tumori “Giovanni Paolo II” and made publicly available to ease research concerning the quantification of tumor cellularity.
10

Wu, Rujuan, Jiayi Yang, Qi Chen, et al. "Distinguishing of Histopathological Staging Features of H-E Stained Human cSCC by Microscopical Multispectral Imaging." Biosensors 14, no. 10 (2024): 467. http://dx.doi.org/10.3390/bios14100467.

Abstract:
Cutaneous squamous cell carcinoma (cSCC) is the second most common malignant skin tumor. Early and precise diagnosis of tumor staging is crucial for long-term outcomes. While pathological diagnosis has traditionally served as the gold standard, the assessment of differentiation levels heavily depends on subjective judgments. Therefore, how to improve the diagnosis accuracy and objectivity of pathologists has become an urgent problem to be solved. We used multispectral imaging (MSI) to enhance tumor classification. The hematoxylin and eosin (H&E) stained cSCC slides were from Shanghai Ruijin Hospital. Scale-invariant feature transform was applied to multispectral images for image stitching, while the adaptive threshold segmentation method and random forest segmentation method were used for image segmentation, respectively. Synthetic pseudo-color images effectively highlight tissue differences. Quantitative analysis confirms significant variation in the nuclear area between normal and cSCC tissues (p < 0.001), supported by an AUC of 1 in ROC analysis. The AUC within cSCC tissues is 0.57. Further study shows higher nuclear atypia in poorly differentiated cSCC tissues compared to well-differentiated cSCC (p < 0.001), also with an AUC of 1. Lastly, well differentiated cSCC tissues show more and larger keratin pearls. These results have shown that combined MSI with imaging processing techniques will improve H&E stained human cSCC diagnosis accuracy, and it will be well utilized to distinguish histopathological staging features.

Dissertations / Theses on the topic "Histopathological tumor segmentation"

1

Lerousseau, Marvin. "Weakly Supervised Segmentation and Context-Aware Classification in Computational Pathology." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG015.

Abstract:
Anatomic pathology is the medical discipline responsible for the diagnosis and characterization of diseases through the macroscopic, microscopic, molecular and immunologic inspection of tissues. Modern technologies have made possible the digitization of tissue glass slides into whole slide images, which can themselves be processed by artificial intelligence to enhance the capabilities of pathologists. This thesis presents several novel and powerful approaches that tackle pan-cancer segmentation and classification of whole slide images. Learning segmentation models for whole slide images is challenged by an annotation bottleneck which arises from (i) a shortage of pathologists, (ii) a cumbersome and tedious annotation process, and (iii) major inter-annotator discrepancy. My first line of work tackled pan-cancer tumor segmentation by designing two novel state-of-the-art weakly supervised approaches that exploit slide-level annotations, which are fast and easy to obtain. In particular, my second segmentation contribution was a generic and highly powerful algorithm that leverages percentage annotations on a slide basis, without needing any pixel-based annotation. Extensive large-scale experiments showed the superiority of my approaches over weakly supervised and supervised methods for pan-cancer tumor segmentation on a dataset of more than 15,000 unfiltered and extremely challenging whole slide images from snap-frozen tissues. My results also indicated the robustness of my approaches to noise and systemic biases in annotations.
Digital slides are difficult to classify due to their colossal sizes, which range from millions of pixels to billions of pixels, often weighing more than 500 megabytes. The straightforward use of traditional computer vision is therefore not possible, prompting the use of multiple instance learning, a machine learning paradigm consisting in treating a whole slide image as a set of patches uniformly sampled from it. Up to my work, the great majority of multiple instance learning approaches considered patches as independently and identically sampled, i.e. they discarded the spatial relationship of patches extracted from a whole slide image. Some approaches exploited such spatial interconnection by leveraging graph-based models, although the true domain of whole slide images is specifically the image domain, which is better suited to convolutional neural networks. I designed a highly powerful and modular multiple instance learning framework that leverages the spatial relationship of patches extracted from a whole slide image by building a sparse map from the patch embeddings, which is then further processed into a whole slide image embedding by a sparse-input convolutional neural network, before being classified by a generic classifier model. My framework essentially bridges the gap between multiple instance learning and fully convolutional classification. I performed extensive experiments on three whole slide image classification tasks, including the cancer pathologist's golden task of subtyping tumors, on a dataset of more than 20,000 whole slide images from public data. Results highlighted the superiority of my approach over all other widespread multiple instance learning methods.
Furthermore, while my experiments only investigated my approach with sparse-input convolutional neural networks with two convolutional layers, the results showed that my framework works better as the number of parameters increases, suggesting that more sophisticated convolutional neural networks can easily obtain superior results.
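The sparse-map construction at the core of the thesis's multiple instance learning framework can be sketched as follows (a simplified reading; the function and parameter names are mine, and a real pipeline would obtain the embeddings from a trained patch encoder):

```python
import numpy as np

def build_sparse_map(coords, embeddings, patch_size, grid_shape):
    """Scatter patch embeddings into a sparse 2-D map at their slide positions.

    coords: (N, 2) pixel coordinates of patch top-left corners in the slide.
    embeddings: (N, D) embedding vectors from any patch encoder.
    Returns an (H, W, D) array that is zero wherever no patch was sampled;
    a sparse-input CNN can then process this map into a whole-slide embedding.
    """
    h, w = grid_shape
    d = embeddings.shape[1]
    grid = np.zeros((h, w, d), dtype=np.float32)
    for (x, y), emb in zip(coords, embeddings):
        grid[y // patch_size, x // patch_size] = emb  # preserve spatial layout
    return grid

# Toy example: 3 patches of size 256 from a slide mapped onto a 4x4 grid
coords = np.array([[0, 0], [512, 0], [256, 768]])
embeddings = np.ones((3, 8), dtype=np.float32)
grid = build_sparse_map(coords, embeddings, patch_size=256, grid_shape=(4, 4))
print(int((grid.sum(axis=-1) > 0).sum()))  # 3 occupied grid cells
```

Unlike bag-style multiple instance learning, neighboring patches stay adjacent in the map, which is what lets a convolutional classifier exploit their spatial relationship.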
2

Huang, Pei-Chen. "Real Time Automatic Lung Tumor Segmentation in Whole-slide Histopathological Images." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/2h8u6r.


Book chapters on the topic "Histopathological tumor segmentation"

1

Lerousseau, Marvin, Maria Vakalopoulou, Marion Classe, et al. "Weakly Supervised Multiple Instance Learning Histopathological Tumor Segmentation." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59722-1_45.

2

Hieber, Daniel, Nico Haisch, Gregor Grambow, et al. "Comparing nnU-Net and deepflash2 for Histopathological Tumor Segmentation." In Studies in Health Technology and Informatics. IOS Press, 2024. http://dx.doi.org/10.3233/shti240487.

Abstract:
Machine Learning (ML) has evolved beyond being a specialized technique exclusively used by computer scientists. Besides the general ease of use, automated pipelines allow for training sophisticated ML models with minimal knowledge of computer science. In recent years, Automated ML (AutoML) frameworks have become serious competitors for specialized ML models and have even been able to outperform the latter for specific tasks. Moreover, this success is not limited to simple tasks but also complex ones, like tumor segmentation in histopathological tissue, a very time-consuming task requiring years of expertise by medical professionals. Regarding medical image segmentation, the leading AutoML frameworks are nnU-Net and deepflash2. In this work, we begin to compare those two frameworks in the area of histopathological image segmentation. This use case proves especially challenging, as tumor and healthy tissue are often not clearly distinguishable by hard borders but rather through heterogeneous transitions. A dataset of 103 whole-slide images from 56 glioblastoma patients was used for the evaluation. Training and evaluation were run on a notebook with consumer hardware, determining the suitability of the frameworks for their application in clinical scenarios rather than high-performance scenarios in research labs.
3

Spiess, Ellena, Dominik Müller, Moritz Dinser, et al. "Automatic Segmentation of Histopathological Glioblastoma Whole-Slide Images Utilizing MONAI." In Studies in Health Technology and Informatics. IOS Press, 2025. https://doi.org/10.3233/shti250279.

Abstract:
Manual segmentation of histopathological images is both resource-intensive and prone to human error, particularly when dealing with challenging tumor types like Glioblastoma (GBM), an aggressive and highly heterogeneous brain tumor. The fuzzy borders of GBM make it especially difficult to segment, requiring models with strong generalization capabilities to achieve reliable results. In this study, we leverage the Medical Open Network for Artificial Intelligence (MONAI) framework to segment GBM tissue from hematoxylin and eosin-stained Whole-Slide Images. MONAI performed comparably well to state-of-the-art AutoML tools on our in-house dataset, achieving a Dice score of 79%. These promising results highlight the potential for future research on public datasets.
4

Kim, Ho Heon, Won Chan Jeong, Youngjin Park, and Young Sin Ko. "Understanding Stain Separation Improves Cross-Scanner Adenocarcinoma Segmentation with Joint Multi-Task Learning." In Studies in Health Technology and Informatics. IOS Press, 2025. https://doi.org/10.3233/shti250272.

Abstract:
Digital pathology has made significant advances in tumor diagnosis and segmentation; however, image variability resulting from tissue preparation and digitization - referred to as domain shift - remains a significant challenge. Variations caused by heterogeneous scanners introduce color inconsistencies that negatively affect the performance of segmentation algorithms. To address this issue, we have developed a joint multitask U-net architecture trained for both segmentation and stain separation. This model isolates the stain matrix and stain density, allowing it to handle color variations and improve generalization across different scanners. On 180 stain images from three different scanners, our model achieved a Dice score of 0.898 and an Intersection Over Union (IoU) score of 0.816, outperforming conventional supervised learning methods by +1.5% and +2.5%, respectively. On external datasets containing images from six different scanners, our model averaged a Dice score and IoU of 0.792. By leveraging our novel approach to stain separation, we improved segmentation accuracy and generalization across diverse histopathological samples. These advances may pave the way for more reliable and consistent diagnostic tools for breast adenocarcinoma.
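The stain-separation component builds on the Beer–Lambert model of light absorption. Classical color deconvolution illustrates the idea (this is the standard fixed-matrix technique, not the paper's learned multitask approach; the stain vectors are the widely used Ruifrok–Johnston reference estimates):

```python
import numpy as np

# Reference H&E stain vectors: optical-density direction per RGB channel
HE_STAINS = np.array([[0.65, 0.70, 0.29],    # hematoxylin
                      [0.07, 0.99, 0.11]])   # eosin

def stain_densities(rgb: np.ndarray, stains: np.ndarray = HE_STAINS) -> np.ndarray:
    """Estimate per-pixel stain densities via Beer–Lambert color deconvolution.

    rgb: (..., 3) image with values in (0, 1]. Returns (..., n_stains) densities:
    optical density OD = -log(I/I0) projected through the stain-matrix
    pseudo-inverse (least-squares unmixing).
    """
    od = -np.log(np.clip(rgb, 1e-6, 1.0))    # Beer–Lambert optical density
    return od @ np.linalg.pinv(stains)       # (3, n_stains) unmixing matrix

# A pure-hematoxylin pixel: transmitted light is exp(-density * stain vector)
pixel = np.exp(-1.0 * HE_STAINS[0])[None, :]
dens = stain_densities(pixel)
print(np.round(dens, 2))  # ≈ [1, 0]: one unit of hematoxylin, no eosin
```

Fixed reference vectors like these are exactly what scanner-to-scanner color shifts break; the paper's model instead estimates the stain matrix and densities jointly with segmentation to generalize across scanners.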

Conference papers on the topic "Histopathological tumor segmentation"

1

Mezgebo, Biniyam, Joema Lima, Abdoljalil Addeh, et al. "Attention-Enhanced UNet for Automated Gleason Score 3 Tumor Segmentation in Histopathological Whole Slide Images." In 2025 IEEE 22nd International Symposium on Biomedical Imaging (ISBI). IEEE, 2025. https://doi.org/10.1109/isbi60581.2025.10980899.

2

Huang, Xiansong, Hongliang He, Pengxu Wei, Chi Zhang, Juncen Zhang, and Jie Chen. "Tumor Tissue Segmentation for Histopathological Images." In MMAsia '19: ACM Multimedia Asia. ACM, 2019. http://dx.doi.org/10.1145/3338533.3372210.

3

Musulin, Jelena, Daniel Štifanić, Ana Zulijani, and Zlatan Car. "SEMANTIC SEGMENTATION OF ORAL SQUAMOUS CELL CARCINOMA ON EPITHELLIAL AND STROMAL TISSUE." In 1st INTERNATIONAL Conference on Chemo and BioInformatics. Institute for Information Technologies, University of Kragujevac, 2021. http://dx.doi.org/10.46793/iccbi21.194m.

Abstract:
Oral cancer (OC) is among the top ten cancers worldwide, with more than 90% being squamous cell carcinoma. Despite diagnostic and therapeutic developments, OC patients' mortality and morbidity rates remain high, with no advancement in the last 50 years. Development of diagnostic tools for identifying pre-cancer lesions and detecting early-stage OC might contribute to minimally invasive treatment/surgery, improving prognosis and survival rates and maintaining a high quality of life for patients. For this reason, Artificial Intelligence (AI) algorithms are widely used as a computational aid in tumor classification and segmentation to help clinicians discover cancer earlier and better monitor oral lesions. In this paper, we propose an AI-based system for automatic segmentation of the epithelial and stromal tissue from oral histopathological images in order to assist clinicians in discovering new informative features. In terms of semantic segmentation, the proposed AI system based on preprocessing methods and deep convolutional neural networks produced satisfactory results, with 0.878 ± 0.027 mIOU and a 0.955 ± 0.014 F1 score. The obtained results show that the proposed AI-based system has great potential in diagnosing oral squamous cell carcinoma; therefore, this paper is a first step towards analysing the tumor microenvironment, specifically segmentation of the microenvironment cells.