Dissertations / Theses on the topic 'Intelligence artificielle en médecine'
Consult the top 50 dissertations / theses for your research on the topic 'Intelligence artificielle en médecine.'
Julen, Nathalie. "Eléments pour une université virtuelle en médecine : le projet CARDIOLAB." Rennes 1, 2002. http://www.theses.fr/2002REN1B063.
Bourgeois-République, Claire. "Plate-forme de réglage automatique et adaptatif d'implant cochléaire par algorithme évolutionnaire interactif." Dijon, 2004. http://www.theses.fr/2004DIJOS070.
Wolman, Frédéric. "Modèles du processus de raisonnement diagnostique : application au développement d'un système d'aide au diagnostic dans les tumeurs osseuses." Bordeaux 2, 1991. http://www.theses.fr/1991BOR23057.
Drancé, Martin. "Graphes de connaissances et intelligence artificielle explicable : application au repositionnement de médicaments." Electronic Thesis or Diss., Bordeaux, 2024. https://theses.hal.science/tel-04874772.
Drug repositioning involves finding new therapeutic uses for existing medications that are already approved to treat other conditions. This approach takes advantage of the existing knowledge about these molecules, enabling faster and less costly development compared to creating new drugs. Repositioning is particularly useful for addressing unmet medical needs, such as rare or emerging diseases. In recent years, the development of knowledge graphs has enabled the consolidation of this biomedical information around drugs, coming from large data sources or knowledge repositories. A knowledge graph is a structured representation of information integrated from different sources, linking these pieces of information together using relationships. This representation is especially useful for understanding the complex relationships that structure knowledge about drugs, and it is now widely used for drug repositioning. An effective way to reposition drugs using these graphs is to employ artificial intelligence (AI) methods that predict new links between objects in the graph. In this way, a well-trained model can suggest a new connection between a drug and a disease, indicating a potential opportunity for repositioning. However, this methodology has a significant disadvantage: link prediction models often provide opaque results that cannot be easily interpreted by end users. This thesis explores the use of explainable AI methods for repositioning drugs based on biomedical data represented in knowledge graphs. First, we analyze the impact of pre-training on multi-hop reasoning models for link prediction. We demonstrate that building representations of the graph entities before model training improves the predictive performance, as well as the quantity and diversity of explanations. Secondly, we examine how the addition of relationships in a knowledge graph affects link prediction results.
We show that adding links in three biomedical knowledge graphs improves the predictive performance of the SQUIRE model across different types of relationships related to drug repositioning. An analysis of the impact on model explainability is also conducted following the addition of these relationships. Finally, we propose a new methodology for the task of link classification in a knowledge graph, based on the use of random forests. Using information about the neighborhood of each node in the graph, we show that a random forest model can accurately predict the existence or absence of a link between two nodes. These results allow for a visualization of the nodes used to make the predictions. Lastly, we apply this method to drug repositioning for amyotrophic lateral sclerosis (ALS).
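The neighborhood-based link classification described in this abstract can be sketched in a few lines. The sketch below is purely illustrative, not the thesis's actual pipeline: the toy graph, the three hand-picked features (node degrees and common-neighbor count), and the training pairs are all invented for demonstration.

```python
# Illustrative sketch of random-forest link classification on a graph,
# using simple node-neighborhood features. Graph and features are invented.
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

def pair_features(g, u, v):
    """Interpretable features for a candidate (node, node) pair."""
    common = len(list(nx.common_neighbors(g, u, v)))
    return [g.degree(u), g.degree(v), common]

# Toy graph standing in for a biomedical knowledge graph.
g = nx.Graph([("drugA", "geneX"), ("geneX", "diseaseY"),
              ("drugB", "geneZ"), ("geneX", "drugB")])

# Positive pairs (existing links) and negative pairs (absent links).
pairs = [("drugA", "geneX"), ("geneX", "drugB"),
         ("drugA", "diseaseY"), ("drugB", "diseaseY")]
labels = [1, 1, 0, 0]

X = [pair_features(g, u, v) for u, v in pairs]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

# Feature importances point back to the neighborhood information used,
# which is what makes the predictions inspectable.
print(clf.feature_importances_.shape)  # (3,)
```

Because each feature has a direct graph interpretation, inspecting the trained forest (importances, or the paths through individual trees) points back to the nodes and neighborhoods that drove a prediction, which is the explainability property the abstract emphasizes.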
Ounissi, Mehdi. "Decoding the Black Box : Enhancing Interpretability and Trust in Artificial Intelligence for Biomedical Imaging - a Step Toward Responsible Artificial Intelligence." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS237.
In an era dominated by AI, its opaque decision-making, known as the "black box" problem, poses significant challenges, especially in critical areas like biomedical imaging where accuracy and trust are crucial. Our research focuses on enhancing AI interpretability in biomedical applications. We have developed a framework for analyzing biomedical images that quantifies phagocytosis in neurodegenerative diseases using time-lapse phase-contrast video microscopy. Traditional methods often struggle with rapid cellular interactions and with distinguishing cells from backgrounds, which is critical for studying conditions like frontotemporal dementia (FTD). Our scalable, real-time framework features an explainable cell segmentation module that simplifies deep learning algorithms, enhances interpretability, and maintains high performance by incorporating visual explanations and simplifying the model. We also address issues in visual generative models, such as hallucinations in computational pathology, by using a unique encoder for Hematoxylin and Eosin staining coupled with multiple decoders. This method improves the accuracy and reliability of synthetic stain generation, employing innovative loss functions and regularization techniques that enhance performance and enable the precise synthetic stains crucial for pathological analysis. Our methodologies have been validated against several public benchmarks, showing top-tier performance. Notably, our framework distinguished between mutant and control microglial cells in FTD, providing new biological insights into this previously unexplored phenomenon. Additionally, we introduced a cloud-based system that integrates complex models and provides real-time feedback, facilitating broader adoption and iterative improvements through pathologist insights.
The release of novel datasets, including video microscopy of microglial cell phagocytosis and a virtual staining dataset related to pediatric Crohn's disease, along with all source code, underscores our commitment to transparent, open scientific collaboration and advancement. Our research highlights the importance of interpretability in AI, advocating for technology that integrates seamlessly with user needs and ethical standards in healthcare. Enhanced interpretability allows researchers to better understand data and improve tool performance.
Crémilleux, Bruno. "Induction automatique : aspects théoriques, le système ARBRE, applications en médecine." PhD thesis, Grenoble 1, 1991. http://tel.archives-ouvertes.fr/tel-00339492.
Dubuc, Antoine. "Utilisation de nouvelles approches en intelligence artificielle pour le diagnostic de lésions de la muqueuse orale." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES058.
Clinical manifestations of oral mucosal pathologies exhibit significant polymorphism, ranging from benign dermatoses to pre-cancerous and cancerous lesions. Early diagnosis remains a key element of management, especially for oral cavity cancers. However, this diagnosis is currently often delayed, whether due to the inherent difficulty of the clinical presentation or to the insufficient availability of specialist consultations, leading to extended waiting times. It therefore appears useful to propose reliable diagnostic support tools for oral healthcare professionals. While artificial intelligence in dermatological diagnosis is advancing rapidly, its application to oral cavity pathology is still relatively limited. We adapted and evaluated various artificial intelligence approaches and developed a tool intended for healthcare professionals.
Blanchard, Jean-Marc. "Modélisation de l'expertise en recherche clinique : Application à la cancérologie." Lyon, INSA, 1994. http://www.theses.fr/1994ISAL0133.
The medical field and doctors' everyday practice showed an early interest in Artificial Intelligence (AI). The application presented in this work addresses AI within the field of Clinical Research in Oncology. A full presentation of this specific field of research gives a good appreciation of the difficulties met by clinicians in the course of their activity. The expert programs of this application were developed to provide an aid both for therapeutic decision-making and for the inclusion of patients in Clinical Studies in Oncology. Validation of the quality of the decisions proposed by the therapeutic decision-support system led to an overall agreement of 85% with the experts' proposed decisions, after an initial evaluation of about 80%. We also demonstrate that it is possible to improve the expertise simply by extending it to some pathologies that were not taken into account by the initial model. Finally, based on the organization of the knowledge bases used and on the identified structure of the experts' therapeutic process, we propose a conceptual model of this process. This model, decomposed into three stages of resolution, captures the expertise expressed by physicians as they put their skills into practice.
Marie, Florent. "COLISEUM-3D : Une plate-forme innovante pour la segmentation d'images médicales par Raisonnement à Partir de Cas (RàPC) et méthodes d'apprentissage de type Deep Learning." Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCD051.
Nephroblastoma (or Wilms' tumor) affects the kidney and is the most common abdominal cancer in children. During therapy, it is recommended to preserve kidney function as far as possible by sparing the pathological kidney under certain conditions. Imaging information is important to validate these conditions, but this partly relies on segmentation of the CT images. As this task is very time-consuming, this information is often not fully exploited. Artificial Intelligence (AI) represents a promising way to automate segmentation but generally requires a large amount of learning data. Case-Based Reasoning (CBR), among others, is a knowledge-based AI approach that integrates domain knowledge and adapts an existing solution to optimize the resolution of a new problem, whereas Convolutional Neural Networks (CNNs) are a purely experiential approach. We propose a CBR system coupled with a region growing algorithm in order to segment kidneys deformed by nephroblastoma. In parallel, a training method for CNNs, called OV²ASSION, is proposed for tumor segmentation. Both approaches aim to address a lack of learning data (few segmentations of nephroblastomas and pathological kidneys are available). The evaluations show good performance, with computed segmentations similar to those made by radiologists and surgeons.
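The region-growing component mentioned in this abstract can be illustrated with a minimal intensity-threshold version on a toy 2D array. This is a generic sketch, not the thesis's COLISEUM-3D implementation: the seed, tolerance, and test image are invented, and the CBR machinery the thesis uses to choose such parameters is not reproduced here.

```python
# Minimal region-growing sketch (illustrative only): grow a segment from a
# seed pixel, accepting 4-connected neighbors whose intensity is within
# `tol` of the seed value.
import numpy as np

def region_grow(img, seed, tol=10):
    h, w = img.shape
    seed_val = int(img[seed])
    visited = np.zeros_like(img, dtype=bool)
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if visited[y, x]:
            continue
        visited[y, x] = True
        # Push 4-connected neighbors whose intensity is close to the seed's.
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx] \
               and abs(int(img[ny, nx]) - seed_val) <= tol:
                stack.append((ny, nx))
    return visited

# Toy "image": a bright square (the organ) on a dark background.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200
mask = region_grow(img, seed=(3, 3), tol=20)
print(mask.sum())  # 16: the 4x4 bright square
```

In the thesis's setting, the appeal of coupling such a simple, parameter-driven algorithm with CBR is that past segmentation cases can suggest good seeds and tolerances for a new patient, compensating for the scarcity of training data.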
Ouédraogo, Ismaila. "Technologie mobile et intelligence artificielle pour l'amélioration de la littératie en santé dans les milieux défavorisés." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0023.
Access to and use of health information is a major challenge in sub-Saharan Africa, especially for populations with low literacy. These difficulties are exacerbated by the increasing prevalence of foreign-language content in digital health solutions, as well as by the sometimes inadequate design of these solutions for local populations. These factors must be taken into account in the development and implementation of digital health solutions to ensure that they are truly accessible and beneficial to all populations. In this context, this research focuses on improving the accessibility and use of health information (health literacy) among low-literacy populations in Burkina Faso through AI-enabled mobile health solutions. The research methodology includes literature reviews, interviews, surveys and observations to accurately understand the specific needs of low-literacy users. Based on this feedback, concrete design principles are established to guide the development of a prototype Interactive Voice Response (IVR) system in the Dioula language. The mobile service is then evaluated with users to enable iterative improvements to the solution, taking user feedback into account. In addition, this research contributes to the creation of annotated speech data in Dioula to address the lack of speech data for assistive speech technologies for this population. By highlighting the importance of local languages and adapted technologies, this study demonstrates how AI-enabled mobile health solutions can effectively overcome literacy-related barriers and improve the health literacy of marginalised populations. The findings of this study are in line with the United Nations Sustainable Development Goals (SDGs), thus reinforcing their positive impact on the health of vulnerable populations in Burkina Faso.
Yameogo, Relwende Aristide. "Risques et perspectives du big data et de l'intelligence artificielle : approche éthique et épistémologique." Thesis, Normandie, 2020. http://www.theses.fr/2020NORMLH10.
In the 21st century, the use of big data and AI in the field of health has gradually expanded, accompanied by problems linked to the emergence of practices based on the use of digital traces. The aim of this thesis is to evaluate the use of big data and AI in medical practice, to uncover the processes generated by digital tools in the field of health, and to highlight the ethical problems they pose. The use of ICTs in medical practice is mainly based on the use of EHRs, prescription software and connected objects. These uses raise many problems for physicians, who are aware of the risks involved in protecting patients' health data. In this work, we implement a method for designing CDSS, the temporal fuzzy vector space. This method allows us to model a new clinical diagnostic score for pulmonary embolism. Through the "Human-trace" paradigm, our research allows us not only to measure the limitations in the use of ICT, but also to highlight the interpretative biases due to the disconnect between the individual, grasped in all his complexity as a "Human-trace", and the data circulating about him via digital traces. While big data coupled with AI can play a major role in the implementation of CDSS, it cannot be limited to this field. We also study how to set up big data and AI development processes that respect the deontological and medical ethics rules associated with the appropriation of ICTs by the actors of the health system.
Aygalinc, Pascal. "Application de la reconnaissance des formes à l'aide au diagnostic médical : sélection multicritère de variables explicatives." Lille 1, 1986. http://www.theses.fr/1986LIL10083.
Pressat-Laffouilhère, Thibaut. "Modèle ontologique formel, un appui à la sélection des variables pour la construction des modèles multivariés." Electronic Thesis or Diss., Normandie, 2023. http://www.theses.fr/2023NORMR104.
Responding to a causal research question in the context of observational studies requires the selection of confounding variables. Integrating them into a multivariate model as co-variables helps reduce bias in estimating the true causal effect of exposure on the outcome. Identification is achieved through causal diagrams (CDs) or directed acyclic graphs (DAGs). These representations, composed of nodes and directed arcs, prevent the selection of variables that would introduce bias, such as mediating and colliding variables. However, existing methods for constructing CDs lack systematic approaches and exhibit limitations in terms of formalism, expressiveness, and completeness. To offer a formal and comprehensive framework capable of representing all necessary information for variable selection on an enriched CD, analyzing this CD, and, most importantly, explaining the analysis results, we propose utilizing an ontological model enriched with inference rules. An ontological model allows for representing knowledge in the form of an expressive and formal graph consisting of classes and relations similar to the nodes and arcs of CDs. We developed the OntoBioStat (OBS) ontology based on a list of competency questions about variable selection and an analysis of the scientific literature on CDs and ontologies. The construction framework of OBS is richer than that of a CD, incorporating implicit elements like necessary causes, study context, uncertainty in knowledge, and data quality. To evaluate the contribution of OBS, we used it to represent variables from a published observational study and compared its conclusions with those of a CD. OBS identified new confounding variables due to its different construction framework and its axioms and inference rules. OBS was also used to represent an ongoing retrospective study analysis.
The model explained statistical correlations found between study variables and highlighted potential confounding variables and their possible substitutes (proxies). Information on data quality and causal relation uncertainty facilitated proposing sensitivity analyses, enhancing the robustness of the study's conclusions. Finally, inferences were explained through the reasoning capabilities provided by OBS's formal representation. Ultimately, OBS will be integrated into statistical analysis tools to leverage existing libraries for variable selection, making it accessible to epidemiologists and biostatisticians.
Hacid, Mohand Saïd. "Intégration des techniques d'intelligence artificielle dans un environnement de simulation : Application a la rééducation écologique de la mémoire." Lyon, INSA, 1991. http://www.theses.fr/1991ISAL0057.
The REMISSION (Rééducation Ecologique de la Mémoire par un Système de SimulatiON) system, within which the work presented here is framed, falls within current research in neuropsychology on memory re-education. The REMISSION system combines techniques of Artificial Intelligence, Data Management, and Interactive Simulation and Animation. Its architecture is based on the co-operation of two specialized components: 1. the graphic component, which ensures the management of graphic objects, the graphic handling of the patient's interactions, time management, and serves as the patient interface; 2. the decisional component, the intelligent part of the system, which carries out both the re-education process and its strategy, and also manages the patient's interactions on the play-field representing the simulated environment. This co-operation has required the development of a communication interface, which allows one to handle external events (the patient's actions on the environment) and internal events (interferences, aids) triggered by a generator. The knowledge base for the simulation is modelled on the object-and-rule paradigm.
Hillmer, Dirk. "Computer-based analysis of Biological Images : Neuronal Networks for Image Processing." Electronic Thesis or Diss., Bordeaux, 2024. https://theses.hal.science/tel-04650911.
AI in medicine is a rapidly growing field, and its significance in dermatology is increasingly pronounced. Advancements in neural networks, accelerated by powerful GPUs, have catalyzed the development of AI systems for skin disorder analysis. This study presents a novel approach that harnesses computer graphics techniques to create AI networks tailored to skin disorders. The synergy of these techniques not only generates training data but also optimizes image manipulation for enhanced processing. Vitiligo, a common depigmenting skin disorder, serves as the case study. The evolution of targeted therapies underscores the necessity for precise assessment of the affected surface area. However, traditional evaluation methods are time-intensive and prone to inter- and intra-rater variability. In response, this research endeavors to construct an artificial intelligence (AI) system capable of objectively quantifying facial vitiligo severity. The AI model's training and validation leveraged a dataset of one hundred facial vitiligo images. Subsequently, an independent dataset of sixty-nine facial vitiligo images was used for final evaluation. The scores assigned by three expert physicians were compared with both inter- and intra-rater performances, as well as with the AI's assessments. The AI model achieved an accuracy of 93%, demonstrating its efficacy in quantifying facial vitiligo severity. The outcomes highlighted substantial concordance between AI-generated scores and those provided by human raters. Expanding beyond facial vitiligo, this model's utility in analyzing full-body images and images from various angles emerged as a promising avenue for exploration. Integrating these images into a comprehensive representation could offer insights into vitiligo's progression over time, thereby enhancing clinical diagnosis and research outcomes.
While the journey has been fruitful, certain aspects of the research encountered roadblocks due to insufficient image and data resources. Explorations into the analysis of in vivo mouse models, the pigmentation of skin cells in preclinical embryo models, and retinal image recognition were regrettably halted. Nevertheless, these challenges illustrate the dynamic nature of research and underscore the importance of adaptability in navigating unforeseen obstacles. In conclusion, this study showcases the potential of AI to transform dermatological assessment. By providing an objective evaluation of facial vitiligo severity, the proposed AI model offers a valuable adjunct to human assessment in both clinical practice and research settings. The ongoing pursuit of integrating AI into the analysis of diverse image datasets holds promise for broader applications in dermatology and beyond.
La, Barbera Giammarco. "Learning anatomical digital twins in pediatric 3D imaging for renal cancer surgery." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT040.
Pediatric renal cancers account for 9% of pediatric cancers, with a 9/10 survival rate at the expense of the loss of a kidney. Nephron-sparing surgery (NSS, partial removal of the kidney) is possible if the cancer meets specific criteria (regarding volume, location and extent of the lesion). Indication for NSS relies on preoperative imaging, in particular X-ray Computerized Tomography (CT). While assessing all criteria in 2D images is not always easy or even feasible, 3D patient-specific models offer a promising solution. Building 3D models of the renal tumor anatomy based on segmentation is widely developed in adults but not in children. There is a need for dedicated image processing methods for pediatric patients due to the specificities of the images with respect to adults and to the heterogeneity in pose and size of the structures (subjects ranging from a few days to 16 years of age). Moreover, in CT images, injection of a contrast agent (contrast-enhanced CT, ceCT) is often used to facilitate the identification of the interface between different tissues and structures, but this can lead to heterogeneity in the contrast and brightness of some anatomical structures, even among patients of the same medical database (i.e., same acquisition procedure). This can complicate subsequent analyses, such as segmentation. The first objective of this thesis is to perform organ/tumor segmentation from abdominal-visceral ceCT images, from which an individual 3D patient model is derived. Transfer learning approaches (from adult data to children's images) are proposed to improve on state-of-the-art performance. The first question we want to answer is whether such methods are feasible, despite the obvious structural difference between the datasets, thanks to geometric domain adaptation.
A second question is whether the standard techniques of data augmentation can be replaced by data homogenization techniques using Spatial Transformer Networks (STN), improving training time, memory requirements and performance. In order to deal with variability in contrast medium diffusion, a second objective is to perform cross-domain CT image translation from ceCT to contrast-free CT and vice versa, using a Cycle Generative Adversarial Network (CycleGAN). In fact, the combined use of ceCT and CT images can improve segmentation performance on certain anatomical structures in ceCT, but at the cost of a double radiation exposure. To limit the radiation dose, generative models can be used to synthesize one modality instead of acquiring it. We present an extension of CycleGAN to generate such images from unpaired databases. Anatomical constraints are introduced by automatically selecting the region of interest and by using the score of a Self-Supervised Body Regressor, improving the selection of anatomically paired images between the two domains (CT and ceCT) and enforcing anatomical consistency. A third objective of this work is to complete the 3D model of patients affected by renal tumors by also including arteries, veins and the collecting system (i.e. ureters). An extensive study and benchmarking of the literature on anatomic tubular structure segmentation is presented. Modifications of state-of-the-art methods for our specific application are also proposed. Moreover, we present for the first time the use of the so-called vesselness function as a loss function for training a segmentation network. We demonstrate that combining eigenvalue information with the structural and voxel-wise information of other loss functions results in an improvement in performance. Finally, a tool developed for using the proposed methods in a real clinical setting is shown, as well as a clinical study to further evaluate the benefits of using 3D models in pre-operative planning.
The intent of this research is to demonstrate, through a retrospective evaluation by experts, that the criteria for NSS are more likely to be found in 3D than in 2D images. This study is still ongoing.
El, Hadramy Sidaty. "AI-enabled IVUS-guided augmented reality for liver surgery." Electronic Thesis or Diss., Strasbourg, 2024. http://www.theses.fr/2024STRAD031.
This thesis presents a novel augmented reality (AR) system guided by intravascular ultrasound (IVUS) to enhance the precision of minimally invasive liver surgery. While IVUS provides advantages for intraoperative imaging, no previous method has integrated this real-time data into the surgical guidance of liver procedures. The proposed method aims to align intraoperative IVUS data with preoperative CT images, providing surgeons with an anatomically accurate view of liver structures that adapts to the deformation of the organ during the intervention. To implement this system, we adopt a non-rigid registration approach based on anatomical features, which combines a biomechanical model with artificial intelligence (AI) models; this combination allows for both accuracy and real-time performance. Major contributions include the development of SOniCS, a framework that integrates FEniCS and SOFA for accurate soft tissue simulation, and an AI-driven method for reconstructing 3D IVUS volumes without external tracking systems, thus simplifying the surgical setup. The primary contribution of this thesis is a patient-specific, real-time registration method that allows for continuous alignment of IVUS data with preoperative models and adaptation to patient-specific characteristics. The work presented in this thesis represents an advancement in computer-assisted liver surgery, providing a practical system to assist minimally invasive liver interventions.
Morizet-Mahoudeaux, Pierre. "Suivi en continu de processus dynamiques : fonctionnement en mode bouclé contrôlé d'un système d'intelligence artificielle." Compiègne, 1987. http://www.theses.fr/1987COMPE068.
This work presents the study and realization of an artificial-intelligence-based system which monitors and diagnoses the successive states of a dynamic process. The first chapter shows, with the help of two examples (one concerning signal processing, the other the monitoring of resuscitation protocols for neonates), the properties that a system for continuous monitoring of dynamic processes must satisfy. The necessity of maintaining inconsistencies, inhibition and reactivation for facts, rules and goals is presented. The knowledge acquisition and structuring system (SUPER), which formed the foundation of this work, is then described, along with its natural language interface and object representation model. The specific properties of this system in comparison with general expert systems are given and justified, and the solutions offered by non-monotonic logic expert systems are also presented. The third part describes the fact-base maintenance system, which operates when data change, are inserted or are suppressed. The particular case of change-of-state rules is studied, with the possibility of memorizing or not the fact that the system's state has changed. The looped inference engine is then presented; it manages goal analysis and resolution planning according to past and present data. Two application examples are given in the last part. The first shows how to manage signal processing algorithms during problem resolution as intermediate results are confirmed or invalidated. The second describes the management of resolution strategies in the case of a slowly evolving process-environment system.
Boukebeche, Madani. "AIPSys : un système à base de connaissances orienté objet en vue de l'aide à la prescription de médicaments." Paris 5, 1997. http://www.theses.fr/1997PA055005.
Curé, Olivier. "Siam : système interactif d'automédication multimédia." Paris 5, 1999. http://www.theses.fr/1999PA05S019.
Portet, François. "Pilotages d'algorithmes pour la reconnaissance en ligne d'arythmies cardiaques." Rennes 1, 2005. https://tel.archives-ouvertes.fr/tel-00011942v2.
Full textYang, Yingyu. "Analyse automatique de la fonction cardiaque par intelligence artificielle : approche multimodale pour un dispositif d'échocardiographie portable." Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ4107.
According to the 2023 annual report of the World Heart Federation, cardiovascular diseases (CVD) accounted for nearly one third of all global deaths in 2021. Compared to high-income countries, more than 80% of CVD deaths occurred in low- and middle-income countries, and the inequitable distribution of CVD diagnosis and treatment resources remains unresolved. In the face of this challenge, affordable point-of-care ultrasound (POCUS) devices demonstrate significant potential to improve the diagnosis of CVDs. Furthermore, by taking advantage of artificial intelligence (AI)-based tools, POCUS enables non-experts to help, thus largely improving access to care, especially in under-served regions. The objective of this thesis is to develop robust and automatic algorithms to analyse cardiac function for POCUS devices, with a focus on echocardiography (ECHO) and the electrocardiogram (ECG). Our first goal is to obtain explainable cardiac features from each single modality; our second goal is to explore a multi-modal approach combining ECHO and ECG data. We start by presenting two novel deep learning (DL) frameworks for echocardiography segmentation and motion estimation tasks, respectively. By incorporating shape and motion priors into DL models, we demonstrate through extensive experiments that such priors help improve accuracy and generalise well to different unseen datasets. Furthermore, we are able to extract left ventricular ejection fraction (LVEF), global longitudinal strain (GLS) and other useful indices for myocardial infarction (MI) detection. Next, we propose an explainable DL model for unsupervised electrocardiogram decomposition. This model can extract interpretable information related to different ECG subwaves without manual annotation.
We further apply those parameters to a linear classifier for myocardial infarction detection, which shows good generalisation across different datasets. Finally, we combine data from both modalities for trustworthy multi-modal classification. Our approach employs decision-level fusion with uncertainty, allowing training with unpaired multi-modal data. We further evaluate the trained model using paired multi-modal data, showcasing the potential of multi-modal MI detection to surpass detection from a single modality. Overall, our proposed robust and generalisable algorithms for ECHO and ECG analysis demonstrate significant potential for portable cardiac function analysis. We anticipate that our novel framework could be further validated using real-world portable devices. We envision that such advanced integrative tools may significantly contribute to better identification of CVD patients.
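The abstract does not detail its fusion rule; a minimal sketch of one common decision-level scheme, weighting each modality's predicted class probabilities by the inverse of its uncertainty, is shown below (the function name, numbers, and weighting choice are all illustrative assumptions, not the thesis's method):

```python
import numpy as np

def fuse_decisions(probs, uncertainties):
    """Decision-level fusion sketch: weight each modality's class-probability
    vector by the inverse of its scalar uncertainty, then renormalise."""
    w = 1.0 / np.asarray(uncertainties, dtype=float)
    fused = np.average(np.asarray(probs, dtype=float), axis=0, weights=w)
    return fused / fused.sum()

# Hypothetical outputs: the ECHO model is confident, the ECG model is not
echo_p = [0.9, 0.1]   # P(MI), P(no MI) from echocardiography
ecg_p = [0.4, 0.6]    # P(MI), P(no MI) from ECG
fused = fuse_decisions([echo_p, ecg_p], uncertainties=[0.1, 0.5])
```

Because the modalities are fused only at the decision level, each classifier can be trained on its own (possibly unpaired) data, which matches the unpaired-training property the abstract highlights.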
Wallis, David. "A study of machine learning and deep learning methods and their application to medical imaging." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST057.
We first use Convolutional Neural Networks (CNNs) to automate mediastinal lymph node detection using FDG-PET/CT scans. We build a fully automated model to go directly from whole-body FDG-PET/CT scans to node localisation. The results show a comparable performance to an experienced physician. In the second half of the thesis we experimentally test the performance, interpretability, and stability of radiomic and CNN models on three datasets (2D brain MRI scans, 3D CT lung scans, 3D FDG-PET/CT mediastinal scans). We compare how the models improve as more data is available and examine whether there are patterns common to the different problems. We question whether current methods for model interpretation are satisfactory. We also investigate how precise segmentation affects the performance of the models.
Fromont, Élisa. "Apprentissage multisource par programmation logique inductive : application à la caractérisation d'arythmies cardiaques." Rennes 1, 2005. https://tel.archives-ouvertes.fr/tel-00011455v2.
Benkirane, Hakim. "Deep learning methods for the integration of multi-omics and histopathology data for precision medicine in oncology." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASR022.
Precision medicine is an emerging approach for disease treatment and prevention that takes into account individual variability in genes, environment, and lifestyle. The objective is to predict more accurately which treatment and prevention strategies for a particular disease will work in which groups of people. In oncology, precision medicine comes with a drastic increase in the data collected for each individual, characterized by a large diversity of data sources. Advanced cancer patients receiving treatment, for instance, often undergo complete molecular profiling on top of clinical profiling and pathology imaging. As a consequence, integration methods for multi-modal data (image, clinical, molecular) are a critical issue for the definition of individual predictive models. This thesis tackles the development of computational models and learning strategies adept at deciphering complex, high-dimensional interactions. A significant focus is also placed on the explainability of these AI-driven models, ensuring that predictions are understandable and clinically actionable.
Ketata, Firas. "Risk prediction of endocrine diseases using data science and explainable artificial intelligence." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2024. http://www.theses.fr/2024UBFCD022.
This thesis aims to predict the risk of endocrine diseases using data science and machine learning. The aim is to leverage this risk identification to assist doctors in managing financial resources, personalizing the treatment of carbohydrate anomalies in patients with beta-thalassemia major, and screening for metabolic syndrome in adolescents. An explainability study of the predictions was developed in this thesis to evaluate the reliability of predicting glucose anomalies and to reduce the financial burden associated with screening for metabolic syndrome. Finally, in response to the observed limitations of explainable machine learning, we propose an approach to improve and evaluate this explainability, which we test on several datasets.
Karboub, Kaouter. "Contribution à l'amélioration des performances des services médicaux urgents appliquant l'IoT et l'intelligence artificielle." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0093.
The Internet of Things (IoT) and Artificial Intelligence (AI) are two advancing technological areas that enable hands-free task execution and intelligent data analysis. These technologies show promising potential for improving human-to-machine interactions in the clinical workflow, creating a better foundation for clinical decision-making, and improving the accessibility of clinical data. The novel aspects, rapid advancement, and new application possibilities of IoT and AI are still in their initial phases. Hence, this thesis has the objectives of identifying and investigating the potential, challenges, and possibilities of using IoT and AI to assess clinical settings. On the other hand, various organizations claim that more attention should be paid to the efficient use of healthcare resources. Internationally rising life expectancy and population size are accompanied by hospitals that rely more on short admissions, and thus on limited bed capacity. The World Health Report published by the World Health Organization shows that 20-40% of all healthcare resources are not being sufficiently utilized. Thus, tools that support an efficient healthcare system are highly relevant to present-day society. The goal of this thesis is to apply methods from IoT, AI, modeling, and optimization to hospital patient flow, with a view to providing management and planners with a range of decision tools for improving the utilization of hospital resources. We elaborate on several relevant hospital optimization problems which relate to decision making at the strategic, tactical, and operational levels. In addition, we focus on various types of patient flow, from inpatient to outpatient admissions, which has led to many different research studies.
Methodologically, we mainly focus on evaluating the different instances of patient flow, specifically for patients with cardiovascular diseases (CVD), based on Markov chain modeling. The patient stay in the hospital is separated into three main phases; each phase is interdependent, time-varying, and a function of the other phases. The core of the contribution is to support every step of the process of admitting, treating, and discharging patients with solutions that can help physicians make decisions quickly and efficiently. These techniques use IoT to collect electrocardiogram (ECG) signals from patients with different CVD pathologies and to transfer these data to a platform that can preprocess and store them. AI is used to automatically classify these signals, along with three MIT datasets, and decide which patients have cardiovascular diseases without physician intervention. AI is then used to predict efficiently which patients should be discharged, based on their epidemiological and physiological signals and characteristics, their length of stay (LOS), and their admission and transfer history. Finally comes the role of metaheuristic optimization, which takes into account the admission, treatment trajectory, and first survival analysis of these patients to decide which patients will be allocated a bed in which ward, mainly in the Intensive Care Unit (ICU). The proposed system for studying and optimizing patient flow in a healthcare facility shows high performance on the different performance metrics used in this research project.
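The three-phase Markov view of a hospital stay described above can be sketched with a standard absorbing-chain computation of expected length of stay; the transition matrix below is purely illustrative and not taken from the thesis:

```python
import numpy as np

# States: 0 = admission, 1 = treatment, 2 = discharge preparation, 3 = out
# (absorbing). Daily transition probabilities are hypothetical.
P = np.array([
    [0.6, 0.4, 0.0, 0.0],
    [0.0, 0.7, 0.3, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 1.0],
])

# Fundamental matrix N = (I - Q)^-1 gives the expected number of days spent
# in each transient phase before absorption; row 0 is a stay starting at
# admission, so its sum is the expected length of stay (LOS).
Q = P[:3, :3]
N = np.linalg.inv(np.eye(3) - Q)
expected_los = N[0].sum()
```

With these example probabilities the expected LOS is 2.5 + 10/3 + 2 days; fitting such a matrix per patient group is one simple way the phase structure can feed bed-allocation decisions.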
Mercadier, Yves. "Classification automatique de textes par réseaux de neurones profonds : application au domaine de la santé." Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS068.
This Ph.D. focuses on the analysis of textual data in the health domain, and in particular on the supervised multi-class classification of data from the biomedical literature and social media. One of the major difficulties when exploring such data with supervised learning methods is obtaining enough data for model training. Indeed, it is generally necessary to label the data manually before performing the learning step. The large size of the data sets makes this labelling task very expensive; semi-automatic systems should reduce this cost. In this context, active learning, in which an oracle intervenes to choose the best examples to label, is promising. The intuition is as follows: by choosing the examples smartly rather than randomly, the models should improve with less effort for the oracle, and therefore at lower cost (i.e. with fewer annotated examples). In this Ph.D., we evaluate different active learning approaches combined with recent deep learning models. In addition, when only a small annotated data set is available, one possible improvement is to artificially increase the data quantity during the training phase by automatically creating new data from existing data. More precisely, we inject knowledge by taking into account the invariant properties of the data with respect to certain transformations. The augmented data can thus cover an unexplored input space, avoid overfitting, and improve the generalization of the model. We propose and evaluate a new approach to textual data augmentation. These two contributions are evaluated on different textual datasets in the medical domain.
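The active learning intuition above can be sketched as a pool-based uncertainty-sampling loop. The stand-in model (a nearest-centroid classifier on synthetic data) is an illustrative assumption; the thesis itself uses deep networks on medical text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pool: 200 points, label decided by the sign of x0 + x1
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def centroids(idx):
    return [X[[i for i in idx if y[i] == c]].mean(axis=0) for c in (0, 1)]

def predict(cents, pts):
    d0 = np.linalg.norm(pts - cents[0], axis=1)
    d1 = np.linalg.norm(pts - cents[1], axis=1)
    return (d1 < d0).astype(int), np.abs(d0 - d1)  # labels, confidence margin

# Seed with five labelled examples per class, then query the oracle 30 times
seed0 = np.where(y == 0)[0][:5]
seed1 = np.where(y == 1)[0][:5]
labeled = [int(i) for i in np.r_[seed0, seed1]]
pool = [i for i in range(200) if i not in labeled]

for _ in range(30):
    _, conf = predict(centroids(labeled), X[pool])
    query = pool[int(np.argmin(conf))]   # smallest margin = most uncertain
    labeled.append(query)
    pool.remove(query)

pred, _ = predict(centroids(labeled), X)
accuracy = float((pred == y).mean())
```

Each round retrains on the labelled set and asks the oracle for the example the current model is least sure about, which is the "choose smartly, not randomly" strategy the abstract describes.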
Chamma, Ahmad. "Statistical interpretation of high-dimensional complex prediction models for biomedical data." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG028.
Modern large health datasets represent population characteristics in multiple modalities, including brain imaging and socio-demographic data. These large cohorts make it possible to predict and understand individual outcomes, leading to promising results in the epidemiological context of forecasting/predicting the occurrence of diseases, health outcomes, or other events of interest. As data collection expands into different scientific domains, such as brain imaging and genomic analysis, variables are related by complex, possibly non-linear dependencies, along with high degrees of correlation. As a result, popular models such as linear and tree-based techniques are no longer effective in such high-dimensional settings. Powerful non-linear machine learning algorithms, such as Random Forests (RFs) and Deep Neural Networks (DNNs), have become important tools for characterizing inter-individual differences and predicting biomedical outcomes, such as brain age. Explaining the decision process of machine learning algorithms is crucial both to improve the performance of a model and to aid human understanding. This can be achieved by assessing the importance of variables. Traditionally, scientists have favored simple, transparent models such as linear regression, where the importance of variables can be easily measured by coefficients. However, with the use of more advanced methods, direct access to the internal structure has become limited and/or uninterpretable from a human perspective. As a result, these methods are often referred to as "black box" methods. Standard approaches based on Permutation Importance (PI) assess the importance of a variable by measuring the decrease in the loss score when the variable of interest is replaced by its permuted version.
While these approaches increase the transparency of black box models and provide statistical validity, they can produce unreliable importance assessments when variables are correlated. The goal of this work is to overcome the limitations of standard permutation importance by integrating conditional schemes. Therefore, we investigate two model-agnostic frameworks, Conditional Permutation Importance (CPI) and Block-Based Conditional Permutation Importance (BCPI), which effectively account for correlations between covariates and overcome the limitations of PI. We present two new algorithms designed to handle situations with correlated variables, whether grouped or ungrouped. Our theoretical and empirical results show that CPI provides computationally efficient and theoretically sound methods for evaluating individual variables. The CPI framework guarantees type-I error control and produces a concise selection of significant variables in large datasets. BCPI presents a strategy for managing both individual and grouped variables. It integrates statistical clustering and uses prior knowledge of grouping to adapt the DNN architecture using stacking techniques. This framework is robust and maintains type-I error control even in scenarios with highly correlated groups of variables. It performs well on various benchmarks. Empirical evaluations of our methods on several biomedical datasets showed good face validity. Our methods have also been applied to multimodal brain data in addition to socio-demographics, paving the way for new discoveries and advances in the targeted areas. The CPI and BCPI frameworks are proposed as replacements for conventional permutation-based methods. They provide improved interpretability and reliability in estimating variable importance for high-performance machine learning models.
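The contrast between marginal permutation (PI) and conditional permutation (CPI) can be sketched on a toy example where two inputs are nearly identical copies of the same signal; the data, "black box" model, and residual-based conditional permutation below follow the generic scheme the abstract describes, not the thesis's actual code:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 5000
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)          # x2 is nearly a copy of x1
y = x1 + 0.05 * rng.normal(size=n)          # only the shared signal drives y

predict = lambda a, b: 0.5 * a + 0.5 * b    # "black box" using both inputs
loss = lambda a, b: np.mean((y - predict(a, b)) ** 2)
base = loss(x1, x2)

# Standard PI: permute x2 marginally, which also destroys its link to x1
pi = loss(x1, rng.permutation(x2)) - base

# CPI: keep E[x2 | x1] in place and permute only the conditional residual
slope, intercept = np.polyfit(x1, x2, 1)
cond_mean = slope * x1 + intercept
resid = x2 - cond_mean
cpi = loss(x1, cond_mean + rng.permutation(resid)) - base
```

Marginal permutation inflates the importance of x2 (the loss jumps because the correlation with x1 is broken), while the conditional scheme correctly reports that x2 adds almost nothing beyond x1, which is the failure mode of PI that CPI is designed to fix.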
Dojat, Michel. "Contribution à la représentation d'expertises médicales dynamiques : Application en réanimation médicale." Compiègne, 1994. http://www.theses.fr/1994COMPD747.
To represent dynamic medical expertise we studied two mechanisms for temporal abstraction: aggregation of similar observed situations and forgetting of irrelevant information. These mechanisms allow clinical observations to be incrementally interpreted in real time. Knowledge distribution between autonomous and asynchronous agents was investigated. We built NéoGanesh, a closed-loop knowledge-based system used for ventilator management. NéoGanesh integrates a distributed model of medical reasoning and an explicit representation of time. The system is based on the representation of physicians' expertise. It interprets clinical data in real time and controls the mechanical assistance provided to a patient who suffers from a lung disease. NéoGanesh develops a therapeutic strategy to gradually re-educate the respiratory muscles of the patient, and evaluates his capacity to breathe without mechanical assistance. Our representation paradigm, based on object orientation and production rules, facilitates extension, refinement and reuse of knowledge bases. A clinical evaluation of NéoGanesh was performed at Henri Mondor Hospital (Créteil, France). The results show the clinical interest of such a system for patient monitoring and the relevance of the selected models and knowledge representation choices.
Jeanpierre, Laurent. "Apprentissage et adaptation pour la modélisation stochastique de systèmes dynamiques réels." Nancy 1, 2002. http://docnum.univ-lorraine.fr/public/SCD_T_2002_0246_JEANPIERRE.pdf.
The exploitation of Artificial Intelligence algorithms in real conditions is an interesting way to improve them, since real, uncontrolled constraints expose weaknesses very quickly. In particular, I study two medical diagnosis problems and a classical robot navigation problem. Using fuzzy sets with Markov models provides an intuitive but powerful system for such situations. I then introduce diagnosis learning, which improves cooperation with doctors, as it allows the model to be corrected while ensuring numerical stability. Thus, doctors can modify the patient model without setting each parameter manually. Finally, I show this approach can be generalized to a whole class of diagnosis problems. To achieve this goal, I present an integrated development environment that allows modules to be simply linked together to solve a given problem. This should help create new applications while minimizing researchers' programming time.
Aegerter, Philippe. "Consult-eao : étude et réalisation d'un système expert d'aide à l'enseignement de la médecine : application aux pays en développement." Compiègne, 1988. http://www.theses.fr/1988COMPD116.
This thesis describes an intelligent computer-assisted instruction system designed for rural health workers in developing countries. This system, called Consult-Eao, provides case-simulation teaching. By asking the questions he thinks relevant, the student has to solve the diagnostic and therapeutic problem of the patient presented by the system. Consult-Eao includes an expert system which solves the case alongside the student in order to guide him. This module includes a knowledge base of the domain, medicine in developing countries, which is represented by frames. The reasoning strategy is represented by meta-rules. A pedagogical module supervises the student and updates a student model. Thus the system, which can autonomously generate adapted cases, is able to perform pedagogical monitoring of the student.
Dujols, Pierre. "Analyse des énoncés médicaux en langage naturel : vers un prototype d'indexation automatique." Montpellier 2, 1990. http://www.theses.fr/1990MON20008.
Braham, Najoua. "Organisation d'un système de simulation de cas autour d'un système expert en hématologie." Compiègne, 1986. http://www.theses.fr/1986COMPS144.
Full textArantes, Júnior Wilmondes Manzi de. "P. A. S. Pluggable Alert System : un système pour la génération et l'affichage d'alertes médicales adaptées à l'utilisateur." Lyon, INSA, 2006. http://theses.insa-lyon.fr/publication/2006ISAL0025/these.pdf.
We propose a system that is able to detect and trigger user-defined medical alerts in the context of healthcare networks. Such alerts are created using fuzzy linguistic variables associated with importance levels (e.g. alert if age = elderly, important; and air-temperature = very-hot, very-important) and whose dependency relationships (e.g. the weight depends on the age) are modeled through a weighted oriented graph. Each alert the system triggers has two quality indicators, an applicability level and a trust level, which state to what extent the patient is concerned and how reliable the alert is. Our system is also able to transparently infer missing information by using a historical database containing previously reported similar cases. Finally, a multi-agent display module adapts each alert to the context, which is represented by the patient (elderly, etc.), the healthcare user (physician, etc.), the display device (PC, PDA, etc.), the place (hospital, etc.) and the urgency of the alert itself (very urgent, etc.). The adaptation process is controlled by three intelligent agents, the patient, the physician and the alert, which negotiate to agree on the min-max quality levels required for the three dimensions of display: contents (information that will be shown), graphics (graphic components that will be used) and security (protection mechanisms to use). The corresponding task announcements are then broadcast within three societies of reactive agents (which have no cognitive capabilities and simply perform tasks) representing these dimensions, and the winning ones collaborate to build the interface of the alert. The final system will be integrated into the hospital information system provided by the company that sponsored this research and will be patented as soon as possible.
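An alert rule of the form "age = elderly (important) and air-temperature = very-hot (very-important)" can be sketched with trapezoidal membership functions and an importance-weighted conjunction. The membership shapes, weights, and exponent-based weighting scheme below are hypothetical choices for illustration, not the thesis's actual formulation:

```python
def trapezoid(x, a, b, c, d):
    """Fuzzy membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def alert_level(age, temp):
    elderly = trapezoid(age, 60, 75, 120, 121)    # degree of "elderly"
    very_hot = trapezoid(temp, 30, 35, 50, 51)    # degree of "very-hot"
    weights = {"elderly": 0.7, "very_hot": 1.0}   # importance levels
    # Weighted conjunction: raising a membership to a weight < 1 relaxes
    # the less important condition before taking the fuzzy "and" (min).
    return min(elderly ** weights["elderly"],
               very_hot ** weights["very_hot"])

level = alert_level(age=80, temp=37)
```

A fully satisfied rule yields level 1.0, a clearly inapplicable one yields 0.0, and borderline cases (e.g. age 65) yield a graded value that could feed the applicability indicator described above.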
Bonnet, Christine. "Simulation dans un contexte rééducationnel : Acquisition des éléments de l'environnement de simulation et analyse des interactions de l'utilisateur." Lyon, INSA, 1991. http://www.theses.fr/1991ISAL0056.
The automation of memory re-education following an ecological approach requires the integration of Artificial Intelligence techniques (use of an expert system shell: Nexpert Object), data management (use of a relational database management system: Oracle), and interactive simulation and animation techniques (use of a graphical software package: Data Views). Knowledge acquisition requires user-friendly environments allowing the neuropsychologists to input the elements of the re-education into the system. We implemented all these techniques in a system which innovates in computerized memory re-education: the REMISSION system (Rééducation Écologique de la Mémoire : un Système de Simulation). The elements of both the simulation environment and the re-educational strategies are modeled with object and rule concepts. Elements are introduced into the system via a descriptive language aimed at making knowledge expression easier. A graphical component (which manages the objects, the events and the time) and a deductive one cooperate in the system. The evaluation of the patient's actions is carried out by mapping the patient's and the system's solutions.
Mobarek, Iyad. "Conception d'un système national des équipements médicaux automatiques pour améliorer la performance et réduire les coûts d'investissements et d'exploitations des dispositifs médicaux." Compiègne, 2006. http://www.theses.fr/2006COMP1623.
This thesis describes the different phases of developing, implementing and evaluating a unique Clinical Engineering System (CES) based in Jordan. This includes establishing and then automating all technical issues and information related to medical equipment in 29 hospitals, 685 health centers, 332 dental clinics, 348 pediatric and mother care clinics and 23 blood banks. Every piece of medical equipment was assigned an identity code that can be recognized through a bar-code scanning system, and similarly all other involved parameters, like hospitals, personnel, spare parts, workshops, etc., are also coded comprehensively. The fully automated CES presents a powerful system, implemented over a network covering the different locations of the Directorate of Biomedical Engineering (DBE) at the Ministry of Health all over the country. It is the first comprehensive CES to be implemented at the national level, and the automated system can read and report in both Arabic and English. Compared to international figures, the developed clinical engineering system has enhanced the performance of medical equipment, including its availability (uptime), up to the best available international levels at extremely low cost. The complete system proved to be an invaluable tool to manage, control and report all the different parameters concerning medical equipment in the considered clinical engineering system. The system was evaluated and found to be reliable, effective and unique compared to internationally available systems, and it presents a successful model for other countries.
Hamdi, Takoua. "Analyse de l'évolution de la glycémie des patients diabétiques insulinodépendants." Electronic Thesis or Diss., Toulon, 2019. http://www.theses.fr/2019TOUL0004.
The main goal of this thesis is to help type 1 diabetes (T1D) patients control and stabilize their blood sugar levels. For this, an analysis of the evolution of blood glucose is necessary; then, after recording blood glucose values using a CGM, a good glycemia prediction method is essential for the patient to adjust the dose of insulin injected on the basis of these predicted values. In this context, the first chapter presents a study of the principle of blood glucose regulation, covering glycemic homeostasis, the evolution of blood glucose, the organs responsible for its regulation, and the sets of mechanisms regulating it. To better understand diabetes, we also present generalities on diabetes: its history, the distribution of diabetes in the world, the types of diabetes and the differences between them, the means of treating type 1 diabetes, and the materials and techniques used for the management of diabetes. In the second chapter, we study the evolution of blood glucose and show that it has a chaotic character. As a result, blood glucose is unpredictable in the long term, with a predictability limit of almost 45 minutes. The third chapter continues the work presented in the preceding chapter. Indeed, after determining the predictability limit, we study approaches to predicting glucose levels. An extensive bibliographic search was carried out on blood glucose prediction methods, covering both mathematical methods and artificial intelligence methods. In this work, two approaches to predicting blood glucose are proposed. The first approach is a new adaptive ANN, obtained by optimizing the ANN architecture for each patient. The precision of the proposed ANNs is discussed on the basis of statistical criteria such as RMSE and MAPE.
The average RMSE obtained is 6.43 mg/dl, and the average MAPE is 3.87%, for a prediction horizon HP = 15 min. Compared with other models established in the literature, the proposed method has several advantages, such as accuracy and adaptability. The experiments thus show the capacity of the proposed ANNs for better prediction of glycemia levels. The second approach is a weighted SVR based on the DE algorithm; the average RMSE obtained was 9.44 mg/dl for an HP equal to 15 min. A comparison with techniques established in the literature shows that the proposed method has many advantages, such as precision, adaptability and ease of application. Based on the experimental results, the proposed combination of SVR with the DE optimization algorithm improved prediction accuracy due to its efficiency in modeling nonlinear and complex data sets.
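The two reported error metrics are standard and easy to reproduce; the CGM readings below are hypothetical values for illustration, not data from the thesis:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error, in the units of the measurements (mg/dl)."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) \
           / len(y_true)

y_true = [110, 125, 140, 150, 145]   # observed glycemia (mg/dl)
y_pred = [108, 130, 138, 155, 140]   # 15-min-ahead predictions (HP = 15 min)
err_rmse = rmse(y_true, y_pred)
err_mape = mape(y_true, y_pred)
```

RMSE penalizes large excursions more heavily, while MAPE normalizes each error by the observed value, which is why both are reported when comparing glucose predictors.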
Wacquant, Sylvie. "Contribution à l'étude d'un modèle de réseaux d'automates corticaux : principes & outils logiciels." Rouen, 1993. http://www.theses.fr/1993ROUES063.
Gal, Jocelyn. "Application d’algorithmes de machine learning pour l’exploitation de données omiques en oncologie." Electronic Thesis or Diss., Université Côte d'Azur (ComUE), 2019. http://theses.univ-cotedazur.fr/2019AZUR6026.
The development of computer science in medicine and biology has generated a large volume of data. The complexity and the amount of information to be integrated for optimal decision-making in medicine have largely exceeded human capacities. These data include demographic, clinical and radiological variables, but also biological variables and particularly omics (genomics, proteomics, transcriptomics and metabolomics), characterized by a large number of measured variables relative to a generally small number of patients. Their analysis represents a real challenge, as they are frequently "noisy" and associated with situations of multi-collinearity. Nowadays, computational power makes it possible to identify clinically relevant models within these sets of data by using machine learning algorithms. Through this thesis, our goal is to apply supervised and unsupervised learning methods to large biological data sets in order to contribute to optimizing the classification and therapeutic management of patients with various types of cancer. In the first part of this work, a supervised learning method is applied to germline immunogenetic data to predict the efficacy and toxicity of immune checkpoint inhibitor therapy. In the second part, different unsupervised learning methods are compared to evaluate the contribution of metabolomics to the diagnosis and management of breast cancer. Finally, the third part of this work exposes the contribution that simulated therapeutic trials can make to biomedical research. The application of machine learning methods in oncology offers new perspectives to clinicians, allowing them to make diagnoses faster and more accurately, and to optimize therapeutic management in terms of efficacy and toxicity.
Rebaud, Louis. "Whole-body / total-body biomarkers in PET imaging." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPAST047.
This thesis, in partnership with Institut Curie and Siemens Healthineers, explores the use of Positron Emission Tomography (PET) for cancer prognosis, focusing on non-Hodgkin lymphomas, especially follicular lymphoma (FL) and diffuse large B-cell lymphoma (DLBCL). Assuming that current biomarkers computed from PET images overlook significant information, this work focuses on the search for new biomarkers in whole-body PET imaging. An initial manual approach validated a previously identified feature (tumor fragmentation) and explored the prognostic significance of splenic involvement in DLBCL, finding that the volume of splenic involvement does not further stratify patients with such involvement. To overcome the empirical limitations of the manual search, a semi-automatic feature identification method was developed. It consists of the automatic extraction of thousands of candidate biomarkers and their subsequent testing by a selection pipeline designed to identify features quantifying new prognostic information. The selected biomarkers were then analysed and re-encoded in simpler and more intuitive ways. Using this approach, 22 new image-based biomarkers were identified, reflecting biological information about the tumours but also the overall health status of the patient. Among them, 10 features were found to be prognostic of both FL and DLBCL patient outcomes. The thesis also addresses the challenge of using these features in clinical practice, proposing the Individual Coefficient Approximation for Risk Estimation (ICARE) model. This machine learning model, designed to reduce overfitting and improve generalizability, demonstrated its effectiveness in the HECKTOR 2022 challenge for predicting outcomes of head and neck cancer patients from [18F]-PET/CT scans. This model was also found to overfit less than other machine learning methods in an exhaustive comparison on a benchmark of 71 medical datasets.
All these developments were implemented in a software extension of a prototype developed by Siemens Healthineers.
Demuth, Stanislas. "Computational approach for precision medicine in multiple sclerosis." Electronic Thesis or Diss., Strasbourg, 2024. http://www.theses.fr/2024STRAJ062.
This PhD work explored the secondary use of clinical research data in multiple sclerosis (MS) and its integration with modern information technology to support neurologists' therapeutic decisions. Tabular data from 11 industrial RCTs and two cohorts of the French MS registry, covering 31,786 patients with MS, were integrated into a homemade cloud-based precision medicine platform. The resulting clinical decision support system relied on interactive data visualization. It showed a discriminatory capacity similar to machine learning, but better explainability and calibration, in a held-out real-world population. Dedicated training of neurologists appeared necessary. Regulatory barriers were addressed by generating virtual patients using a privacy-by-design method; they achieved sufficient privacy and clinical utility to serve as a proxy for the reference data. These translational efforts demonstrate the clinical utility of several data engineering processes for developing a new paradigm of precision medicine in MS.
Zhao, Xin. "Development of an intelligent garment integrating physiological sensors and a decision making system : applied to the online human well-being monitoring." Thesis, Lille, 2020. https://pepite-depot.univ-lille.fr/LIBRE/EDSPI/2020/2020LILUI076.pdf.
Full textFetal movements are one significant indicator of fetal health status. A reduction or discontinuation in fetal movements perceived by the mother can be a sign that fetal development requires enhanced monitoring. In practice, maternal perception of reduced fetal movements helps to determine the optimal time for delivery, especially for high-risk pregnancies. However, fetal movement counting by the mother suffers from imprecision and subjectivity due to each mother's personal habits, customs and activity-rest periods, which often leads to unnecessary concern and anxiety for the mother. Ultrasound-based technology, on the other hand, can be used for accurate and reliable monitoring of fetal movements. However, concerns have been expressed in the literature about excessive exposure to ultrasound, which could have negative effects. Besides, it requires an in-hospital setting, which can sometimes be far from the living environment, immobilization of the mother, and trained personnel to operate the device. In this study, we present a new garment-based wearable system for online monitoring of fetal movements. The proposed system is mainly composed of: 1) a garment carefully designed to enhance pregnant women's comfort and guarantee the quality of measured signals, 2) a network of sensors and a communicating embedded system integrated at the right positions in the garment and 3) a mobile application connected to the garment and linked to a cloud shared with healthcare professionals, allowing the mother to transmit data and visualize in return the key information related to her baby's health. This work highlights the development of an embedded decision-making algorithm for online and reliable counting of fetal movements based on the sensor data. The proposed system is connected to a remote medical expert system on the cloud computing platform, with which clinicians can make advanced medical diagnoses.
Unlike existing wearable systems, both the electronic/signal issues and the textile/garment design have been fully taken into account in the proposed intelligent garment, and a balance between the embedded intelligence and that of the cloud has been struck to guarantee an overall improvement. The experimental results show that the proposed system can effectively and automatically perform fetal movement counting, and has the potential to offer an innovative solution in the field of human health, benefiting pregnant women and alleviating the burden on health systems in the monitoring of fetal development. This research provides guidance for the application of remote health monitoring using wearable systems in antenatal care.
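The abstract does not detail the embedded counting algorithm itself. As a hedged illustration only (the sampling rate, threshold and refractory period below are invented, not taken from the thesis), a naive threshold-based movement counter over a smoothed sensor magnitude signal might look like:

```python
import numpy as np

def count_movements(signal, fs, threshold, refractory_s=2.0):
    """Naive event counter: a movement is flagged when the smoothed
    signal magnitude crosses the threshold, with a refractory period
    so one prolonged movement is not counted several times."""
    # Moving-average smoothing over a 0.5 s window.
    win = max(1, int(0.5 * fs))
    kernel = np.ones(win) / win
    smooth = np.convolve(np.abs(signal), kernel, mode="same")
    refractory = int(refractory_s * fs)
    count, last = 0, -refractory
    for i, v in enumerate(smooth):
        if v > threshold and i - last >= refractory:
            count += 1
            last = i
    return count

# Synthetic example: three one-second bursts of activity in a quiet signal.
fs = 50
sig = np.zeros(60 * fs)
for start in (5, 20, 40):
    sig[start * fs : start * fs + fs] = 1.0
print(count_movements(sig, fs, threshold=0.5))  # → 3
```

A real embedded system would of course add sensor fusion and artifact rejection (e.g. for maternal motion), which this sketch omits.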
Molinet, Benjamin. "Génération et évaluation d'explications argumentatives en langage naturel appliquées au domaine médical." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4063.
Full textArgument(ation) mining, a rapidly growing area of Natural Language Processing (NLP) and computational models of argument, aims at the automatic recognition of argument structures (i.e., components and relations) in natural language textual resources. In the healthcare domain, argument mining has proven beneficial in providing methods to automatically detect argumentative structures to support Evidence-Based Medicine (EBM). The importance of these approaches relies on the fact that, despite the accuracy of neural models in medical diagnosis, the explanation of their outcomes remains problematic. The thesis tackles this open issue and focuses on the generation and assessment of natural language argumentative explanations for diagnosis predictions, supporting clinicians in decision making and learning. First, I proposed a novel complete pipeline to automatically generate natural language explanations of diagnoses in medical question answering exams, relying on a medical ontology and clinical entities from the exam texts. I defined a state-of-the-art medical named entity recognition and classification (NERC) system to detect layperson symptoms and medical findings, which I align to ontology terms so as to justify a diagnosis of a clinical case provided to medical residents. The NERC module, called SYMEXP, allows our system to generate template-based natural language argumentative explanations to justify why the correct answer is correct and why the other proposed options are not. Second, I proposed an argument-based explanation assessment framework, called ABEXA, to automatically extract the argumentative structure of a medical question answering document and highlight a set of customisable criteria to characterise the clinical explanation and the document argumentation. ABEXA tackles the issue of explanation assessment from the argumentative viewpoint by defining a set of graph rules over an automatically generated argumentative graph.
Third, I contributed to the design and development of the ANTIDOTE software tool, proposing different modules for argumentation-driven explainable Artificial Intelligence for digital medicine. Our system offers the following functionalities: multilingual argumentative analysis for the medical domain; explanation, extraction and generation of clinical diagnoses; multilingual large language models for the medical domain; and the first multilingual benchmark for medical question answering. In conclusion, in this thesis, I explore how artificial intelligence combined with argumentation theory can lead to more transparent healthcare systems. We apply our results to the critical domain of medicine, showing their potential in terms of support for the education of, for example, clinical residents.
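The abstract describes SYMEXP as producing template-based argumentative explanations from matched clinical entities. The actual templates and ontology alignment are not public in this abstract, but the general mechanism can be sketched with invented slot names and example entities:

```python
def explain(diagnosis, findings, wrong_option, missing_finding):
    """Toy template-based justification: detected clinical entities fill
    slots in fixed argumentative templates, one supporting the correct
    answer and one attacking a distractor (all inputs hypothetical)."""
    correct = (f"The diagnosis '{diagnosis}' is supported because the case "
               f"mentions {', '.join(findings)}.")
    incorrect = (f"The option '{wrong_option}' is unlikely because the "
                 f"expected finding '{missing_finding}' is absent.")
    return correct + " " + incorrect

text = explain("pneumonia", ["fever", "productive cough"],
               "pulmonary embolism", "pleuritic chest pain")
print(text)
```

The template pair mirrors the abstract's goal of justifying both why the correct answer is correct and why the other options are not.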
Lambert, Benjamin. "Quantification et caractérisation de l'incertitude de segmentation d'images médicales par des réseaux profonds." Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALS011.
Full textIn recent years, artificial intelligence algorithms have demonstrated outstanding performance in a wide range of tasks, including the segmentation and classification of medical images. The automatic segmentation of lesions in brain MRIs enables a rapid quantification of disease progression: a count of new lesions, a measure of total lesion volume and a description of lesion shape. This analysis can then be used by the neuroradiologist to adapt therapeutic treatment if necessary, making medical decisions faster and more precise. At present, these algorithms, which are often regarded as black boxes, produce predictions without any information concerning their certainty. This hinders the full adoption of artificial intelligence algorithms in sensitive areas, as they tend to produce errors with high confidence, potentially misleading human decision-makers. Identifying and understanding the causes of these failures is key to maximizing the usefulness of AI algorithms and enabling their acceptance within the medical profession. To achieve this goal, it is important to be able to distinguish between the two main sources of uncertainty: first, aleatoric uncertainty, which corresponds to uncertainty linked to intrinsic image noise and acquisition artifacts; secondly, epistemic uncertainty, which relates to the lack of knowledge of the model. The joint aim of Pixyl and GIN is to achieve better identification of the sources of uncertainty in deep neural networks, and consequently to develop new methods for estimating this uncertainty in routine, real-time clinical use. In the context of medical image segmentation, uncertainty estimation is relevant at several scales. Firstly, at the voxel scale, uncertainty can be quantified using uncertainty maps. This makes it possible to superimpose the image, its segmentation and the uncertainty map to visualize uncertain areas.
Secondly, for pathologies such as Multiple Sclerosis, the radiologist's attention is focused on the lesion rather than the voxel. Structural uncertainty estimation, i.e. at the lesion scale, enables the radiologist to quickly review uncertain lesions that may be false positives. Thirdly, high-level metrics such as the volume or number of lesions are commonly extracted from segmentations. Being able to associate predictive intervals with these metrics is important so that the clinician can take this uncertainty into account in their analysis. Finally, uncertainty can be quantified at the scale of the whole image, for example to detect out-of-distribution images that present a significant anomaly that could bias their analysis. In this thesis, the development of uncertainty quantification tools operating at each of these levels is proposed. More generally, the desired methods should enable Pixyl to improve its current models, services and products. For clinical application, inference time is particularly critical: decision support is only useful if it is fast enough to be applied during the patient consultation (i.e. in less than 5 minutes). Moreover, innovative solutions will need to maintain a high level of performance even when applied to small image databases, as is generally the case in the medical field.
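The voxel-scale uncertainty maps mentioned above are commonly built from the disagreement between several stochastic predictions (an ensemble, or MC-dropout passes); the thesis's own estimators are not reproduced here, but a minimal predictive-entropy sketch for a binary segmentation looks like:

```python
import numpy as np

def entropy_uncertainty_map(prob_maps):
    """Voxel-wise predictive entropy from several segmentation models
    (or MC-dropout passes). prob_maps: (n_models, H, W) foreground
    probabilities. High entropy marks voxels where the models disagree
    or hedge, i.e. candidate uncertain areas to overlay on the image."""
    p = np.clip(np.mean(prob_maps, axis=0), 1e-7, 1 - 1e-7)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Example: three models agree on one voxel and disagree on another.
maps = np.array([[[0.95, 0.2]],
                 [[0.90, 0.8]],
                 [[0.97, 0.5]]])
u = entropy_uncertainty_map(maps)
```

Lesion-scale (structural) uncertainty can then be derived by aggregating such a map over each connected component, which is one plausible route from voxel maps to the lesion-level review the abstract describes.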
Henry-Chatelain, Catherine. "Génération de méta-faits pour la modélisation du raisonnement en diagnostic médical : application du diagnostic de l'infection néonatale." Compiègne, 1987. http://www.theses.fr/1987COMPD068.
Full textThe theme of this work is the development of an expert system for the diagnosis of materno-foetal infections in newborn babies. The study is part of the development of an essential expert system usable in either diagnostic or simulation mode. Firstly, we present the various stages of expert system development, as well as the main modes of knowledge representation, via a description of expert systems in the medical field. Secondly, we describe the essential expert system and its natural language interface, with which its development has been conducted. Following this, we describe the main features of materno-foetal infections, so as to highlight the various problems associated with their diagnosis. These are broken down and formulated in such a way that the analysis takes the form of fairly simple reasoning processes. We put forward a general-purpose model of knowledge representation, based here upon infection criteria, as well as a module for the automatic generation of meta-knowledge; the latter, using the direct description of the basic facts, allows us to deduce new data in terms compatible with those used by doctors. The practical use of the module is described in considerable detail. All of the generated meta-knowledge is reported, together with its analysis and the choice of triggerable rules. An example of a consultation is given. Results are presented for the evaluation phase, which was conducted in a paediatric intensive care unit.
Yazzourh, Sophia. "Apprentissage par renforcement et outcome-weighted learning bayésien pour la médecine de précision : Intégration de connaissances médicales dans les algorithmes de décision." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES139.
Full textPrecision medicine aims to tailor treatments to the characteristics of each patient by relying on the frameworks of Individualized Treatment Regimes (ITR) and Dynamic Treatment Regimes (DTR). ITRs involve a single therapeutic decision, while DTRs allow for the adaptation of treatments over time through a sequence of decisions. For these approaches to be effective, they must be capable of handling complex data and integrating medical knowledge, which is essential for enabling realistic and safe clinical use. This work presents three research projects. First, a state-of-the-art review of methods for integrating medical knowledge into Reinforcement Learning (RL) models was conducted, considering the context of DTRs and their specific constraints for application to observational data. Second, a probabilistic method for constructing rewards was developed for RL models, based on the preferences of medical experts. Illustrated by case studies on diabetes and cancer, this method generates data-driven rewards, avoiding the biases of "manual" construction and ensuring consistency with medical objectives in learning treatment recommendation strategies. Third, a Bayesian framework for the Outcome-Weighted Learning (OWL) method was proposed to quantify uncertainty in treatment recommendations, thereby enhancing the robustness of therapeutic decisions, and was illustrated through simulation studies. These contributions aim to improve the reliability of decision-making tools in precision medicine, by integrating medical knowledge into RL models on the one hand, and by proposing a Bayesian framework to quantify uncertainty in the OWL model on the other. This work is part of a broader perspective of interdisciplinary collaboration, particularly among the fields of machine learning, medical sciences, and statistics.
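Classical (non-Bayesian) OWL, which the abstract builds on, reduces the estimation of an individualized treatment rule to a classification of the received treatment weighted by outcome over propensity. As an illustrative sketch only (the simulation, coding and solver below are invented; the thesis's Bayesian formulation adds uncertainty quantification on top of this), a weighted logistic surrogate looks like:

```python
import numpy as np

def owl_weighted_logistic(X, A, Y, propensity, lr=0.1, epochs=500):
    """Sketch of outcome-weighted learning: classify the received
    treatment A (coded -1/+1), weighting each patient by outcome over
    propensity, so the rule mimics the treatments of good outcomes."""
    w = Y / propensity                 # larger weight = better outcome
    t = (A + 1) / 2                    # -1/+1 -> 0/1 labels
    beta = np.zeros(X.shape[1])
    for _ in range(epochs):            # weighted logistic, gradient ascent
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta = beta + lr * X.T @ (w * (t - p)) / len(Y)
    return beta                        # rule: recommend treatment if X @ beta > 0

# Hypothetical simulation: treatment helps when the covariate x is positive.
rng = np.random.default_rng(1)
n = 500
X = np.c_[np.ones(n), rng.normal(size=n)]
A = rng.choice([-1, 1], size=n)                      # randomized, propensity 0.5
Y = 2.0 + A * X[:, 1] + rng.normal(scale=0.1, size=n)
beta = owl_weighted_logistic(X, A, Y, propensity=np.full(n, 0.5))
```

The learned rule should recommend treatment for patients with positive x, matching the simulated benefit; a Bayesian variant would replace the point estimate `beta` with a posterior.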
Mroueh, Dit Injibar Mohamed. "Classification évidentielle mono- et multi-label : application à la détection de maladies cardio-vasculaires." Thesis, Troyes, 2021. http://www.theses.fr/2021TROY0011.
Full textThis thesis focuses on the detection of cardiovascular diseases through the monitoring of physiological signals. The objective is to develop mono- and multi-label classification approaches, based on the theory of belief functions, to predict or diagnose complications linked to one or more cardiovascular diseases. First, an approach combining parameter extraction and information modeling in an evidential framework is developed to predict atrial fibrillation, a cardiac arrhythmia. An extension of this approach uses a reject option in classification and an alternative information modeling. The thesis then broadens the field of application to cover several cardiovascular diseases at the same time. The problem is thus defined as multi-label classification, where the labels represent features of the diseases. A multi-label classification approach is developed in the evidential domain which makes use of correlations between diseases to increase diagnostic accuracy. Finally, a theoretical approach to multi-label classification, which takes advantage of the correlation between labels, is proposed. This ensemble method allows for efficient multi-label classification. The proposed approaches are validated using a public medical database, MIMIC III, hosted on PhysioNet.
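The evidential framework mentioned above rests on belief functions, whose central operation for fusing information from several sources is Dempster's rule of combination. The thesis's own models are not reproduced here, but the rule itself (with an invented two-disease example) can be sketched as:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over a frame
    of discernment. Focal elements are frozensets; mass assigned to
    conflicting (disjoint) pairs is normalized out."""
    combined, conflict = {}, 0.0
    for a, v1 in m1.items():
        for b, v2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    k = 1.0 - conflict                      # normalization constant
    return {f: v / k for f, v in combined.items()}

# Hypothetical example: two sources of evidence over two arrhythmias,
# atrial fibrillation (AF) and ventricular tachycardia (VT).
AF, VT = frozenset({"AF"}), frozenset({"VT"})
theta = AF | VT                             # total ignorance
m1 = {AF: 0.6, theta: 0.4}
m2 = {AF: 0.5, VT: 0.2, theta: 0.3}
m = dempster_combine(m1, m2)                # combined belief leans towards AF
```

Mass left on `theta` after combination is one way such models express the uncertainty that motivates a reject option in classification.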
Ameli, Samila. "Construction d'un langage de dictionnaire conceptuel en vue du traitement du langage naturel : application au langage médical." Compiègne, 1989. http://www.theses.fr/1989COMPD226.
Full textThis study deals with the realisation of a « new generation » information retrieval system that takes the meaning of texts into consideration. This system compares texts (questions and documents) by their content. A knowledge base being indispensable for text “comprehension”, a dictionary of concepts has been designed in which concepts and their mutual relations are defined through a user-friendly language called SUMIX. SUMIX enables us (1) to resolve ambiguities due to polysemy by considering context dependencies, (2) to make use of property inheritance, which can greatly help knowledge engineers in the creation of the knowledge and inference base, and (3) to define subject-dependent relations between concepts which make metaknowledge handling possible. The dictionary of concepts is essentially used (1) to index concepts (and not character strings), which enables us to select a wide range of documents in the conceptual extraction phase, and (2) to filter the previously selected documents by comparing the structure of each document with that of the query in the structural analysis phase.
Pouchoulin, Dominique. "Transfert de masse en hémodiafiltration sur membranes haute perméabilité." Compiègne, 1988. http://www.theses.fr/1988COMPD148.
Full textHaemodialysis with low-permeability cellulose membranes is the basic treatment for renal insufficiency (purification by diffusion). The development of high-permeability membranes and hemofiltration methods allows a more physiological purification (by convection). Combining the two processes, haemodiafiltration is the most efficient technique and the best suited to short dialysis (3 x 2 h/week). An analysis of the coupling between convection and diffusion is carried out to develop a one-dimensional numerical model. The model is compared with in vitro results (aqueous solutions). Modifications of the properties of a polysulfone membrane with protein adsorption are studied. All measurements are made with hollow-fibre modules. The reliability of the results is ensured by the development of an automated test bench: regulation of flows and pressures, on-line photometric concentration measurement, data acquisition, and computerized data processing. Blood ultrafiltration is studied with a one-dimensional osmotic model using literature data. This model allows an analysis of retrofiltration phenomena and the optimisation of the geometry of a haemofilter. Predictions of the limiting flux are compared with those of the gel model.