Dissertations / Theses on the topic 'Base de données hospitalières'
Consult the top 50 dissertations / theses for your research on the topic 'Base de données hospitalières.'
Jouanjus, Emilie. "Identification des complications graves associées à l'usage de substances psychoactives." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2123/.
The French Addictovigilance system is unique in Europe. However, it is not meant to reliably and exhaustively capture the dangerousness of drugs with a potential for abuse. Notably, the under-reporting of serious abuse and dependence cases raises the issue of the relevance of using these data to assess the risk of medical complications associated with psychoactive drug use. Another possible approach is the use of administrative computerized hospital databases. We used data from the French hospital database PMSI (Programme de Médicalisation des Systèmes d'Information) to estimate the frequency of complications related to psychoactive substance use. First, a three-source capture-recapture analysis was applied. Then, cannabis-related hospitalizations identified in the PMSI were systematically reviewed. These studies revealed a relatively high prevalence of cardiovascular complications, and these findings led us to specifically characterize cannabis-related cardiovascular complications at the national level using the data collected by the French Addictovigilance system (i.e. spontaneous reports). In conclusion, this thesis qualitatively and quantitatively characterized complications related to psychoactive drugs, particularly cannabis. In doing so, we assessed the relevance of the data sources that can be explored to identify serious complications related to psychoactive drug use (including the PMSI) and defined methodological criteria to make the best use of them.
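Editorial note: as a rough illustration of the capture-recapture idea mentioned in this abstract, the sketch below shows only the simpler two-source (Lincoln-Petersen) special case, not the three-source analysis actually used in the thesis; the counts are invented.

```python
# Illustrative sketch only: the thesis applies a three-source capture-recapture
# analysis; the two-source Lincoln-Petersen estimator below shows the core idea
# with made-up counts (n1, n2, m are hypothetical, not taken from the thesis).

def lincoln_petersen(n1: int, n2: int, m: int) -> float:
    """Estimate the total number of cases N from two overlapping sources.

    n1: cases found in source 1 (e.g. addictovigilance reports)
    n2: cases found in source 2 (e.g. PMSI hospital stays)
    m : cases found in both sources
    """
    if m == 0:
        raise ValueError("no overlap: the estimator is undefined")
    return (n1 * n2) / m

if __name__ == "__main__":
    estimate = lincoln_petersen(n1=120, n2=300, m=45)
    print(f"Estimated total number of cases: {estimate:.0f}")
```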
Couris, Chantal Marie. "Les bases de données médico-administratives hospitalières comme source d'information permanente pour l'épidémiologie descriptive du cancer : méthodes d'estimation et de correction du nombre de cancers du sein incidents à l'hôpital." Lyon 1, 2001. http://www.theses.fr/2001LYO1T234.
Elenga, Narcisse. "L’épidémiologie de l’infection VIH chez l’adulte en Guadeloupe : étude à partir de la base de données hospitalière sur l’infection VIH (1988-2009)." Thesis, Antilles-Guyane, 2014. http://www.theses.fr/2014AGUY0786/document.
With a cumulative incidence rate of 56 cases per 100,000 inhabitants, Guadeloupe is the second most HIV-affected French region. Most publications on HIV/AIDS were essentially descriptive, stemming from annual reports, and the Guadeloupean Hospital Database on HIV had scarcely been exploited for research. The objective of this work was to estimate the fundamental aspects of HIV/AIDS in Guadeloupe. Late presentation for care, the delay between HIV diagnosis and the first specialised consultation, follow-up interruption, and the incidence of depression, opportunistic infections and deaths were studied, as well as their predictive factors. We were able to highlight the following points: very late HIV diagnosis (CD4 < 200/mm3) concerned 40.12% of HIV-infected adults followed in Guadeloupe; 36% of patients had a delay between HIV diagnosis and the first specialised consultation; and more than 22% of patients were permanently lost to follow-up and never seen again. The three most frequent AIDS-defining conditions were oesophageal candidiasis, HIV wasting syndrome and pneumocystosis. The causes of death reported in the literature were similar to those observed in our study; AIDS-related infections were the leading cause of death. This thesis thus tried to generate, in one of the most HIV-affected French regions, indicators from the French Hospital Database on HIV. This database often did not contain all the desired information, but the large number of observations provided high statistical power for the studied variables. These results, although often confirming clinicians' empirical knowledge, may help understand some aspects of the HIV epidemic in Guadeloupe.
Pinaire, Jessica. "Explorer les trajectoires de patients via les bases médico-économiques : application à l'infarctus du myocarde." Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS020/document.
With approximately 120,000 people affected each year, 12,000 deaths at the first event and 18,000 deaths after one year, myocardial infarction is a major public health issue. This pathology requires hospitalization and management in an intensive cardiac care unit. We study it using the French national Prospective Payment System (PPS) databases. The collection of national hospital data within the framework of the PPS generates about 25 million records per year. These data, initially collected for medico-economic purposes, contain information that may serve other purposes: improving patient care, predicting the evolution of care, planning its costs, etc. Another emerging issue is that of providing tools for exploring patients' hospital trajectories using PPS data. Through several objectives, this thesis aims to propose tools combining methods from three disciplines: medical informatics, data mining and biostatistics. We make four contributions. The first concerns the constitution of a quality database to analyze patient trajectories. The second is a semi-automatic method for the systematic review of the literature; this part of the work delineates the contours of the trajectory concept in the biomedical field. The third is the identification of care trajectories for the prediction of in-hospital death; our research strategy is divided into two phases: 1) identification of typical patient trajectories using data mining tools; 2) construction of a prediction model of death from these trajectories. Finally, the last contribution is the characterization of patient flows through the various hospital events, also taking delays and costs into account; here we propose an approach combining data mining with longitudinal data clustering.
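Editorial note: the sketch below is only a toy illustration of the "typical trajectory" mining idea described above (support counting of ordered event subsequences across stays); the event codes and thresholds are invented and do not reflect the thesis's actual pipeline.

```python
# Hypothetical sketch: count the support of candidate event subsequences
# across patient trajectories (event codes are invented).

from itertools import combinations
from collections import Counter

def subsequences(seq, length):
    """All distinct ordered subsequences of a given length."""
    return {tuple(seq[i] for i in idx) for idx in combinations(range(len(seq)), length)}

def frequent_patterns(trajectories, length=2, min_support=0.5):
    counts = Counter()
    for traj in trajectories:
        for pattern in subsequences(traj, length):
            counts[pattern] += 1
    threshold = min_support * len(trajectories)
    return {p: c for p, c in counts.items() if c >= threshold}

trajectories = [
    ["emergency", "coronarography", "stent", "rehab"],
    ["emergency", "coronarography", "stent"],
    ["emergency", "thrombolysis", "rehab"],
]
print(frequent_patterns(trajectories, length=2, min_support=0.6))
```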
Ledieu, Thibault. "Analyse et visualisation de trajectoires de soins par l’exploitation de données massives hospitalières pour la pharmacovigilance." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1B032/document.
The emergence of big data in health is an opportunity to answer questions about vigilance and quality of care. In this thesis, we present approaches that exploit the diversity and volume of intra-hospital data for pharmacovigilance and for monitoring the proper use of drugs. The approach is based on the modelling of intra-hospital care trajectories adapted to the specific needs of pharmacovigilance. Using data from a hospital warehouse, it is necessary to characterize events of interest and identify a link between the administration of these health products and the occurrence of adverse reactions, or to look for cases of drug misuse. The hypothesis put forward in this thesis is that an interactive visual approach is suitable for the exploitation of these heterogeneous, multi-domain biomedical data in the field of pharmacovigilance. We developed two prototypes allowing the visualization and analysis of care trajectories. The first is a tool for visualizing the patient record as a timeline. The second is a tool for visualizing and searching a cohort of event sequences; it relies on sequence analysis algorithms (Smith-Waterman, Apriori, GSP) to search for similarities or recurring event patterns. These human-machine interfaces have been the subject of usability studies on use cases from actual practice, which have shown their potential for routine use.
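Editorial note: the abstract names Smith-Waterman among the sequence-analysis algorithms used; the minimal sketch below shows local alignment applied to event codes, with arbitrary scoring values that are assumptions for illustration, not the thesis's settings.

```python
# Minimal Smith-Waterman local alignment over event codes (drug administrations,
# lab events, ...). Scoring values are arbitrary assumptions for illustration.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            h[i][j] = max(0, diag, h[i-1][j] + gap, h[i][j-1] + gap)
            best = max(best, h[i][j])
    return best  # similarity score of the best local alignment

stay_a = ["heparin", "INR", "vitK", "transfusion"]
stay_b = ["admission", "heparin", "INR", "transfusion"]
print(smith_waterman(stay_a, stay_b))
```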
Ferret, Laurie. "Anticoagulants oraux, réutilisation de données hospitalières informatisées dans une démarche de soutien à la qualité des soins." Thesis, Lille 2, 2015. http://www.theses.fr/2015LIL2S016/document.
Introduction: Oral anticoagulants raise major issues in terms of bleeding risk and appropriate use. The computerization of medical records offers access to large databases that can be explored automatically. The objective of this work is to show how routinely collected data can be reused to study issues related to anticoagulants, in an approach supporting quality of care. Methods: This work was carried out on the electronic data (97,355 records) of a community hospital. For each inpatient stay we have diagnostic, biological, drug and administrative data, and the discharge letters. The work is organized around three axes. Axis I: the objective is to evaluate the accuracy of the detection of factors that may increase the anticoagulant effect of vitamin K antagonists (VKA), using rules developed in the PSIP European project (grant agreement no. 216130); a case review over one year enabled the calculation of the positive predictive value and sensitivity of the rules. Axis II: we conducted a cohort study on data from 2007 to 2012 to determine the main factors involved in raising the VKA-related bleeding risk in real-life practice; cases were stays with an elevation of the INR beyond 5, controls were stays without. Axis III: we put data reuse at the service of a study of prescription quality; on the one hand we assessed compliance with the recommendations on thromboembolic risk management in atrial fibrillation (AF) in the elderly, on the other hand we investigated the prescription of direct oral anticoagulants. Results: Axis I: the positive predictive value of the rules intended to detect the factors favoring INR elevation under VKA treatment is 22.4%, and their sensitivity is 84.6%; the main contributing rules are those detecting an infectious syndrome and amiodarone. Axis II: the main factors increasing the INR under VKA treatment highlighted by the cohort study are infectious syndrome, cancer, hepatic insufficiency and hypoproteinemia. Axis III: the compliance rate with the recommendations in atrial fibrillation in the elderly is 47.8%; only 45% of patients receive oral anticoagulants, 22.9% receive no antithrombotic treatment at all and 32.1% receive platelet aggregation inhibitors; direct oral anticoagulants are prescribed at inadequate dosages in 15% and 31.4% of patients, respectively for dabigatran and rivaroxaban, and these errors are mainly underdosages in elderly patients with atrial fibrillation (82.6%). Discussion: The computerization of medical records has led to the creation of large medical databases, which can be used for various purposes, as this work shows. In the first axis we showed that rule-based decision support systems detect the contributing factors for VKA overdose with good sensitivity but a low positive predictive value. The second axis shows that the data can be used for exploratory purposes to identify factors associated with an increased INR in patients receiving VKA in real-life practice. The third axis shows that rule-based systems can also be used to identify inappropriate prescribing, with the aim of improving the quality of care. In the field of anticoagulation, this work opens innovative perspectives for improving the quality of care.
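Editorial note: the Axis I metrics quoted above are standard rule-evaluation measures; the sketch below simply shows how they are defined, with hypothetical counts chosen only so that they reproduce the reported percentages.

```python
# How the Axis I rule-evaluation metrics are defined (counts are hypothetical,
# chosen only to reproduce the reported 22.4% PPV and 84.6% sensitivity):
# PPV = TP / (TP + FP); sensitivity = TP / (TP + FN), with the case review
# serving as the reference standard.

def ppv(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

tp, fp, fn = 22, 76, 4  # hypothetical counts from a one-year case review
print(f"PPV = {ppv(tp, fp):.1%}, sensitivity = {sensitivity(tp, fn):.1%}")
```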
Deboscker, Stéphanie. "Les entérocoques résistants aux glycopeptides : épidémiologie et modélisation de leur transmission hospitalière." Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAJ106.
The objective of our work was to study the factors associated with the acquisition of glycopeptide-resistant enterococci (GRE) during a single-strain outbreak, to describe their natural history and to model their transmission between three specialized wards. The Bayesian multivariable analysis of our first study showed that a history of hospitalization and the use of antibiotics and antacids during hospitalization were associated with an increased risk of GRE acquisition. The description of GRE carriers followed since 2007 then showed that half of the patients had negative screenings after 3 months. Finally, the literature review revealed that the most relevant model for simulating GRE diffusion in hospital was an agent-based model. The simulations confirmed the importance of hand hygiene during patient care in comparison with other barrier measures: with 80% compliance, there were no secondary cases in 50% of the simulations.
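Editorial note: the toy sketch below illustrates the kind of agent-based transmission simulation described above (hand-hygiene compliance blocking transmission between patients through a shared health-care worker); all sizes, rates and the ward structure are invented and are not the thesis's model.

```python
# Toy agent-based sketch of GRE spread in a ward: one index carrier, contacts
# between a shared health-care worker and patients; hand-hygiene compliance
# blocks transmission. All rates and sizes are invented for illustration.

import random

def simulate(n_patients=20, n_days=30, p_transmit=0.1, compliance=0.8, seed=1):
    rng = random.Random(seed)
    carrier = [False] * n_patients
    carrier[0] = True                      # index case
    for _ in range(n_days):
        contaminated_hands = False
        for p in rng.sample(range(n_patients), n_patients):  # daily care round
            if rng.random() < compliance:  # hand hygiene performed before contact
                contaminated_hands = False
            if carrier[p]:
                contaminated_hands = True
            elif contaminated_hands and rng.random() < p_transmit:
                carrier[p] = True
    return sum(carrier) - 1                # number of secondary cases

runs = [simulate(seed=s) for s in range(100)]
print("simulations with no secondary case:", sum(r == 0 for r in runs), "/ 100")
```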
Dehainsala, Hondjack. "Explicitation de la sémantique dans les bases de données : Base de données à base ontologique et le modèle OntoDB." PhD thesis, Université de Poitiers, 2007. http://tel.archives-ouvertes.fr/tel-00157595.
A domain ontology defines the concepts of a field of study in terms of classes and properties, as well as the relationships that link them. With the development of stable ontology models in various domains (OWL for the Semantic Web, PLIB for engineering data), more and more data (or metadata) are described by reference to such ontologies. The growing size of these data makes it necessary to manage them within dedicated databases, which we call ontology-based databases (OBDBs), whose distinctive feature is that they represent, in addition to the data, the ontologies that define their meaning. Several OBDB architectures have been proposed in recent years. The schemas they use for data representation either consist of a single triple table of the form (subject, predicate, object), or are split into unary and binary tables, one per class and one per property. While such representations allow great flexibility in the structure of the represented data, they neither scale when each instance is described by a significant number of properties, nor fit the structure of conventional databases, which is based on n-ary relations. The OntoDB model aims to overcome this double drawback. By introducing typing assumptions that seem acceptable in many application domains, we propose an OBDB architecture made of four parts. The first two correspond to the usual structure of databases: the data, stored under a logical data schema, and a meta-base describing the whole table structure. The two other, original parts represent respectively the ontologies and the ontology meta-model within a reflexive meta-schema. Abstraction and naming mechanisms make it possible, respectively, to associate each piece of data with the ontological concept that defines its meaning, and to access the data from the concepts without worrying about how the data are represented. This architecture makes it possible both to manage efficiently large volumes of data defined by reference to ontologies (ontology-based data) and to index conventional databases at the knowledge level by adding the ontology and meta-schema parts to them. The proposed architecture model has been validated by an operational prototype implemented on top of PostgreSQL with the PLIB ontology model. We also present a comparative evaluation of our proposals against previously proposed models.
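Editorial note: the sketch below contrasts, in schematic form, the three storage layouts discussed in this abstract (single triple table, per-class/per-property tables, and an OntoDB-style horizontal table per class); all table and column names are invented, and OntoDB itself is implemented on PostgreSQL, not SQLite.

```python
# Sketch of the three storage layouts discussed above (names are invented).

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
-- 1. Single triple table (subject, predicate, object)
CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT);

-- 2. One unary table per class and one binary table per property
CREATE TABLE class_person (id INTEGER);
CREATE TABLE prop_name   (id INTEGER, value TEXT);
CREATE TABLE prop_age    (id INTEGER, value INTEGER);

-- 3. OntoDB-style horizontal layout: one table per class, one column per
--    single-valued property, close to usual n-ary relational schemas
CREATE TABLE e_person (id INTEGER PRIMARY KEY, name TEXT, age INTEGER);
""")
db.execute("INSERT INTO e_person VALUES (1, 'Alice', 33)")
print(db.execute("SELECT name, age FROM e_person").fetchall())
```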
Bounar, Boualem. "Génération automatique de programmes sur une base de données en réseau : couplage PROLOG-Base de données en réseau." Lyon 1, 1986. http://www.theses.fr/1986LYO11703.
Full textEl, Khalil Firas. "Sécurité de la base de données cadastrales." Thesis, Polynésie française, 2015. http://www.theses.fr/2015POLF0001/document.
Quantity-Based Aggregation (QBA) control is closely related to inference control in databases and has rarely been addressed by the scientific community. Let us consider a set S of N elements. The aggregation of at most k elements out of N is not considered sensitive, while the aggregation of more than k out of N elements is considered sensitive and should be prevented. The role of QBA control is to make sure that the number of disclosed elements of S is less than or equal to k, where k < N.
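Editorial note: the minimal sketch below only illustrates the k-of-N disclosure threshold stated in this abstract; the class, method names and per-user bookkeeping are invented, not the thesis's mechanism.

```python
# Minimal sketch of a quantity-based aggregation (QBA) guard: a user may see at
# most k distinct elements of a protected set S. Names and the API are invented.

class QbaGuard:
    def __init__(self, k: int):
        self.k = k
        self.disclosed: dict[str, set] = {}   # user -> elements already seen

    def request(self, user: str, element: str) -> bool:
        seen = self.disclosed.setdefault(user, set())
        if element in seen:
            return True                        # re-reading is not a new disclosure
        if len(seen) >= self.k:
            return False                       # would exceed the k-of-N threshold
        seen.add(element)
        return True

guard = QbaGuard(k=2)
print([guard.request("u1", parcel) for parcel in ("p1", "p2", "p3")])  # [True, True, False]
```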
Dehainsala, Hondjack. "Explicitation de la sémantique dans les bases de données : base de données à base ontologique et le modèle OntoDB." Poitiers, 2007. http://www.theses.fr/2007POIT2270.
An Ontology-Based DataBase (OBDB) is a database that stores both data and the ontologies that define their meaning. In this thesis, we propose a new architecture model for OBDBs, called OntoDB. This model has two main original features. First, as in usual databases, each stored entity is associated with a logical schema that defines the structure of all its instances; our approach thus makes it possible to add an ontology to an existing database for semantic indexing of its content. Second, the meta-model of the ontology model is also represented in the same database, which makes it possible to support the change and evolution of ontology models. The OntoDB model has been validated by a prototype. A performance evaluation of this prototype has shown that our approach can manage very large data volumes and scales much better than previously proposed approaches.
Lamer, Antoine. "Contribution à la prévention des risques liés à l’anesthésie par la valorisation des informations hospitalières au sein d’un entrepôt de données." Thesis, Lille 2, 2015. http://www.theses.fr/2015LIL2S021/document.
Introduction: Hospital Information Systems (HIS) register millions of data items related to patient care every day: biological results, vital signs, drug administrations, care processes... These data are stored by operational applications that provide remote access and a comprehensive picture of the Electronic Health Record. They may also serve other purposes such as clinical research or public health, particularly when integrated into a data warehouse. Some studies have highlighted a statistical link between compliance with quality indicators related to the anesthesia procedure and patient outcome during the hospital stay. In the University Hospital of Lille, these quality indicators, as well as patient comorbidities during the post-operative period, can be assessed with data collected by HIS applications. The main objective of this work is to integrate data collected by operational applications in order to carry out clinical research studies. Methods: First, the quality of the information registered by the operational applications is evaluated, with methods taken from the literature or developed in this work. Then, the data quality problems highlighted by this evaluation are handled during the integration step of the ETL process. New data are computed and aggregated in order to obtain indicators of quality of care. Finally, two studies demonstrate the usability of the system. Results: Pertinent data from the HIS have been integrated into an anesthesia data warehouse. This system has stored data about hospital stays and interventions (drug administrations, vital signs…) since 2010. Aggregated data have been derived and used in two clinical research studies: the first highlighted a statistical link between anesthesia induction and patient outcome; the second evaluated compliance with quality indicators of ventilation and its impact on comorbidity. Discussion: The data warehouse and the cleaning and integration methods developed as part of this work make it possible to perform statistical analyses on more than 200,000 interventions. This system can be implemented with other applications used in the CHRU of Lille, but also with the Anesthesia Information Management Systems used by other hospitals.
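Editorial note: the sketch below only illustrates the "aggregated quality indicator" idea mentioned above, computing the share of measurements meeting a criterion within one intervention; the criterion, threshold and field names are invented and are not the thesis's indicators.

```python
# Sketch of an aggregated per-intervention indicator: fraction of cleaned
# measurements meeting a quality criterion. The criterion (mean arterial
# pressure >= 65 mmHg) and the record layout are invented for illustration.

records = [  # hypothetical cleaned measurements for one intervention
    {"minute": 0, "map_mmHg": 80},
    {"minute": 5, "map_mmHg": 62},
    {"minute": 10, "map_mmHg": 70},
    {"minute": 15, "map_mmHg": 58},
]

def compliance(measurements, threshold=65):
    ok = sum(m["map_mmHg"] >= threshold for m in measurements)
    return ok / len(measurements)

print(f"share of measurements above threshold: {compliance(records):.0%}")
```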
Lemaire, Pierre. "Base de données informatique : application aux leucémies aiguës." Paris 5, 1997. http://www.theses.fr/1997PA05P039.
Full textKindombi, Lola Ndontoni. "Communications interactives avec une machine base de données." Paris 11, 1985. http://www.theses.fr/1985PA112379.
Full textHsu, Lung-Cheng. "Pbase : une base de données déductive en Prolog." Compiègne, 1988. http://www.theses.fr/1988COMPD126.
This thesis describes a relational database system coupling PROLOG II and VAX RMS (Record Management Services). The SQL-like DDL (Data Definition Language) and DML (Data Manipulation Language) are implemented in PROLOG, and the storage and retrieval of fact records is delegated to RMS. The indexed file organization is adopted to provide a satisfactory response time. An interface written in PASCAL enables the communication between PROLOG and RMS; once the interface is established, access to the database is transparent and no precompilation is required. PBASE can be used as a general DBMS, or it can cooperate with an expert system (our SQL translation module can be considered as such) to manage voluminous facts stored in secondary memory. It can also cooperate with VAX RDB (Relational DataBase) to constitute a powerful deductive database. Although PBASE works for normalized relations as well as non-normalized ones, a normalization module is included to avoid the problems caused by data redundancy.
Fankam, Nguemkam Chimène. "OntoDB2 : un système flexible et efficient de base de données à base ontologique pour le web sémantique et les données techniques." Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aéronautique, 2009. https://tel.archives-ouvertes.fr/tel-00452533.
The need to represent the semantics of data in various scientific fields (medicine, geography, engineering, etc.) has resulted in the definition of data referring to ontologies, also called ontology-based data. With the proliferation of domain ontologies and the increasing volume of data to handle, the need has emerged to define systems capable of managing large amounts of ontology-based data. Such systems are called Ontology-Based DataBase (OBDB) management systems. The main limitations of existing OBDB systems are (1) their rigidity, (2) their lack of support for non-standard data (spatial, temporal, etc.) and (3) their lack of effectiveness in managing large data volumes. In this thesis, we propose a new OBDB, called OntoDB2, allowing (1) the support of ontologies based on different ontology models, (2) the extension of its model to meet specific application requirements, and (3) an original management of ontology-based data that facilitates scalability. OntoDB2 is based on the existence of a kernel ontology and on model-based techniques that enable a flexible extension of this kernel. We propose to store only canonical data, by transforming, under certain conditions, any non-canonical data into its canonical representation. We propose to use the ontology query language (1) to access the non-canonical data thus transformed and (2) to index and pre-compute the reasoning operations using the mechanisms of the underlying DBMS.
Ficheur, Grégoire. "Réutilisation de données hospitalières pour la recherche d'effets indésirables liés à la prise d'un médicament ou à la pose d'un dispositif médical implantable." Thesis, Lille 2, 2015. http://www.theses.fr/2015LIL2S015/document.
Introduction: The adverse events associated with drug administration or with the placement of an implantable medical device should be sought systematically once commercialisation has begun. Studies conducted in this phase are observational studies that can be performed from hospital databases. The objective of this work is to study the interest of re-using hospital data to identify such adverse events. Materials and methods: Two hospital databases covering the years 2007 to 2013 were re-used: the first contains 171 million inpatient stays, including diagnostic codes, procedures and demographic data, linked by a single patient identifier; the second contains the same kinds of information for 80,000 stays, plus the laboratory results and drug administrations for each inpatient stay. Four studies were conducted on these data to identify adverse drug events and adverse events following the placement of an implantable medical device. Results: The first study demonstrates the ability of a set of detection rules to automatically identify adverse drug events involving hyperkalaemia. The second describes the variation of a laboratory result associated with the presence of a frequent sequential pattern composed of drug administrations and laboratory results. The third resulted in a web tool that lets the user explore, on the fly, the reasons for rehospitalisation of patients with an implantable medical device. The fourth and final study estimates the thrombotic and bleeding risks following a total hip replacement. Conclusion: The re-use of hospital data in a pharmacoepidemiological perspective allows the identification of adverse events associated with drug administration or with the placement of an implantable medical device. The value of these data lies in the statistical power they bring and in the types of associations they make it possible to analyse.
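Editorial note: the sketch below is a hedged illustration of a hyperkalaemia detection rule of the kind described in the first study; the threshold, drug list and record layout are assumptions, not the thesis's actual rule set.

```python
# Hedged sketch: flag a stay when a potassium result above 5.5 mmol/L follows
# the administration of a potassium-raising drug. All values are assumptions.

K_RAISING_DRUGS = {"spironolactone", "potassium chloride", "enalapril"}
THRESHOLD = 5.5  # mmol/L

def flag_stay(administrations, lab_results):
    """administrations: list of (hour, drug); lab_results: list of (hour, K+)."""
    for drug_time, drug in administrations:
        if drug.lower() not in K_RAISING_DRUGS:
            continue
        for lab_time, kalemia in lab_results:
            if lab_time > drug_time and kalemia > THRESHOLD:
                return True
    return False

stay_drugs = [(2, "spironolactone")]
stay_labs = [(1, 4.1), (30, 5.9)]
print(flag_stay(stay_drugs, stay_labs))  # True
```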
Bec, Xavier. "Une base de données pour les effets spéciaux numériques." Paris 8, 2000. http://www.theses.fr/2000PA081818.
Full textJouzier, Cécile. "Constitution d'une base de données d'histoire médico-pharmaceutique bordelaise." Bordeaux 2, 2000. http://www.theses.fr/2000BOR2P107.
Full textDevulder, Grégory. "Base de données de séquences, phylogénie et identification bactérienne." Lyon 1, 2004. http://www.theses.fr/2004LYO10164.
Full textOuld, Yahia Sabiha. "Interrogation multi-critères d'une base de données spatio-temporelles." Troyes, 2005. http://www.theses.fr/2005TROY0006.
The study of human behavior in driving situations is of primary importance for improving driver safety. This study is complex because of the numerous situations in which a driver may be involved. The objective of the CASSICE project (Symbolic Characterization of Driving Situations) is to develop a tool that simplifies the analysis of driver behavior. In this work, we mainly focus on the indexing and querying of a multimedia database containing the numerical data and video sequences relating to a type of driving situation. We put the emphasis on the queries to this database: they are often complex because they are formulated according to criteria depending on time and space, and because they use natural-language terms.
Vachey, Françoise. "Les suffixes toponymiques français : atlas et base de données." Nancy 2, 1999. http://www.theses.fr/1999NAN21036.
Full textPloquin, Catherine. "LAB langage d'analyse associé à une base de données." Bordeaux 1, 1985. http://www.theses.fr/1985BOR10534.
Full textAbdelhédi, Fatma. "Conception assistée d’entrepôts de données et de documents XML pour l’analyse OLAP." Thesis, Toulouse 1, 2014. http://www.theses.fr/2014TOU10005/document.
Today, data warehouses are a major issue for business intelligence applications within companies. The sources of a warehouse, i.e. the origin of the data that feed it, are diverse and heterogeneous: sequential files, spreadsheets, relational databases, Web documents. The complexity is such that the software on the market only partially meets the needs of decision makers when they want to analyze the data. Our work therefore lies within the context of decision support systems that integrate all data types (mainly extracted from relational databases and XML document databases) for decision makers, and aims to provide models, methods and software tools to elaborate and manipulate data warehouses. It has specifically focused on two complementary issues: aided data warehouse design and the OLAP analysis of XML documents.
Pineau, Nicolas. "La performance en analyse sensorielle : une approche base de données." PhD thesis, Université de Bourgogne, 2006. http://tel.archives-ouvertes.fr/tel-00125171.
Full textGagnon, Bertrand. "Gestion d'information sur les procédés thermiques par base de données." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65447.
Full textFolio, Patrice. "Etablissement d'une base de données protéomique de Listeria monocytogenes EGDe." Clermont-Ferrand 2, 2003. http://www.theses.fr/2003CLF21478.
Full textPersyn, Emmanuel. "Base de données informatiques sur la première guerre du Golfe." Lille 3, 2003. http://www.theses.fr/2003LIL30018.
Full textTahir, Hassane. "Aide à la contextualisation de l’administration de base de données." Paris 6, 2013. http://www.theses.fr/2013PA066789.
The complexity of database administration tasks requires the development of tools to support database experts. When problems occur, the database administrator (DBA) is frequently the first person blamed. Most DBAs work in a fire-fighting mode and have little opportunity to be proactive; they must be constantly ready to analyze and correct failures based on a large set of procedures. In addition, they continually readjust these procedures and develop practices to manage a multitude of specific situations that differ from the generic situation by a few contextual elements, and these practices have to deal with those contextual elements in order to solve the problem at hand. This thesis aims to use the Contextual Graphs formalism to improve the existing procedures used in database administration. It also shows the benefits of using Contextual Graphs to capture user practices so that they can be reused in the corresponding working contexts. Up to now, this improvement has been achieved by the DBA through practices that adapt procedures to the context in which the tasks are performed and the incidents appear. This work will be the basis for designing and implementing a Context-Based Intelligent Assistant System (CBIAS) for supporting DBAs.
Treger, Michèle. "Spécification et implantation d'une base de données des contacts intermoléculaires." Université Louis Pasteur (Strasbourg) (1971-2008), 1991. http://www.theses.fr/1991STR13089.
Full textÉtat-Le, Blanc Marie-Sylvie d'. "Une base de données sédimentologiques : structure, mise en place, applications." Bordeaux 1, 1986. http://www.theses.fr/1986BOR10565.
Full textPeerbocus, Mohamed Ally. "Gestion de l'évolution spatiotemporelle dans une base de données géographiques." Paris 9, 2001. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2001PA090055.
Full textCuré, Olivier. "Relations entre bases de données et ontologies dans le cadre du web des données." Habilitation à diriger des recherches, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00843284.
Full textDe, Vlieger P. "Création d'un environnement de gestion de base de données " en grille ". Application à l'échange de données médicales." Phd thesis, Université d'Auvergne - Clermont-Ferrand I, 2011. http://tel.archives-ouvertes.fr/tel-00654660.
Full textDe, Vlieger Paul. "Création d'un environnement de gestion de base de données "en grille" : application à l'échange de données médicales." Phd thesis, Université d'Auvergne - Clermont-Ferrand I, 2011. http://tel.archives-ouvertes.fr/tel-00719688.
Full textGrignard, Arnaud. "Modèles de visualisation à base d'agents." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066268/document.
Information visualization is the study of interactive visual representations of abstract data to reinforce human cognition. It is very closely associated with data mining, which allows users to explore, understand and analyze phenomena, systems or masses of data whose complexity continues to grow. However, most existing visualization techniques are not suited to exploring and understanding datasets that consist of a large number of individual data items from heterogeneous sources and that share many properties with what are commonly called "complex systems". The reason is often the use of monolithic and centralized approaches. This situation is reminiscent of the modeling of complex systems (in the social sciences, chemistry, ecology and many other fields) before the progress represented by the generalization of agent-based approaches twenty years ago. In this thesis, I defend the idea that the same approach can be applied with the same success to information visualization. Starting from the now commonly accepted idea that agent-based models offer appropriate representations of the complexity of a real system, I propose an approach based on the definition of agent-based visualization models to facilitate the visual representation of complex data and to provide innovative support for exploring, programmatically and visually, their underlying dynamics. Just like their software counterparts, agent-based visualization models are composed of autonomous graphical entities that can interact and organize themselves, learn from the data they process and, as a result, adapt their behavior and visual representations. By giving users the ability to describe visualization tasks in this form, my goal is to allow them to benefit from the flexibility, modularity and adaptability inherent in agent-based approaches. These concepts have been implemented and experimented with on the GAMA modeling and simulation platform, in which I developed an immersive 3D environment offering the user different points of view and ways to interact with agents. Their implementation is validated on models chosen for their properties, following a linear progression in terms of complexity, which allows us to highlight the concepts of flexibility, modularity and adaptability. Finally, I demonstrate, through the particular case of data visualization, how my approach allows their dynamics to be represented, clarified or even discovered in real time, and how such progress in visualization can contribute, in turn, to improving the modeling of complex systems.
Devogele, Thomas. "Processus d'intégration et d'appariement de bases de données géographiques : application à une base de données routières multi-échelles." Versailles-St Quentin en Yvelines, 1997. https://tel.archives-ouvertes.fr/tel-00085113.
Full textPonchateau, Cyrille. "Conception et exploitation d'une base de modèles : application aux data sciences." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2018. http://www.theses.fr/2018ESMA0005/document.
It is common practice in experimental science to use time series to represent experimental results, which usually come as a list of values in chronological order (indexed by time) and are generally obtained via sensors connected to the studied physical system. These series are analyzed to obtain a mathematical model that describes the data and thus helps to understand and explain the behavior of the studied system. Nowadays, storage and analysis technologies for time series are numerous and mature, but storage and management technologies for mathematical models, and for linking them to experimental numerical data, are both scarce and recent. Yet mathematical models have an essential role to play in the interpretation and validation of experimental results, so an adapted storage system would ease the management and reusability of mathematical models. This work aims at developing a models database to manage mathematical models and to provide a “query by data” system that helps retrieve or identify a model from an experimental time series. I describe the design of the models database (from the modeling of the system to its software architecture) and its extensions enabling the “query by data”, then the models database prototype that I implemented and the results of the tests performed on it.
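Editorial note: the sketch below only illustrates the "query by data" idea described above, ranking a few candidate model functions by their residual error against an observed series; the candidate models and data points are invented, and the actual system manages models far more generally.

```python
# Sketch of "query by data": given an experimental series, rank candidate
# model functions by their residual error. Candidates and data are invented.

import math

candidates = {
    "linear":      lambda t: 2.0 * t + 1.0,
    "exponential": lambda t: math.exp(0.5 * t),
    "constant":    lambda t: 3.0,
}

def best_model(series):
    """series: list of (t, value) pairs; returns the name of the closest model."""
    def rmse(f):
        return math.sqrt(sum((f(t) - v) ** 2 for t, v in series) / len(series))
    return min(candidates, key=lambda name: rmse(candidates[name]))

observed = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)]
print(best_model(observed))  # "linear" fits these points best
```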
Jean, Stéphane. "OntoQL, un langage d'exploitation des bases de données à base ontologique." PhD thesis, Université de Poitiers, 2007. http://tel.archives-ouvertes.fr/tel-00201777.
Full textAlfonso, Espinosa-Oviedo Javier. "Coordination fiable de services de données à base de politiques active." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-01011464.
Full textEspinosa, Oviedo Javier Alfonso. "Coordination fiable de services de données à base de politiques actives." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM022/document.
We propose an approach for adding non-functional properties (exception handling, atomicity, security, persistence) to service coordinations. The approach is based on an Active Policy Model (AP Model) for representing service coordinations with non-functional properties as a collection of types. In our model, a service coordination is represented as a workflow composed of an ordered set of activities, each activity in charge of implementing a call to a service operation. We use the Activity type to represent a workflow and its components (i.e., the workflow's activities and the order among them). A non-functional property is represented as one or several Active Policy types, each policy composed of a set of event-condition-action rules in charge of implementing one aspect of the property. Instances of active policy and activity types are considered in the model as entities that can be executed; we use the Execution Unit type to represent them as entities that go through a series of states at runtime. When an active policy is associated with one or several execution units, its rules verify whether each unit respects the implemented non-functional property by evaluating their conditions over the unit's state; when the property is not verified, the rules execute their actions to enforce it at runtime. We also propose a proof-of-concept Active Policy Execution Engine for executing an active-policy-oriented workflow modelled with our AP Model. The engine implements an execution model that determines how AP, Rule and Activity instances interact with each other to add non-functional properties (NFPs) to a workflow at execution time. We validated the AP Model and the Active Policy Execution Engine by defining active policy types addressing exception handling, atomicity, state management, persistence and authentication. These active policy types were used to implement reliable service-oriented applications and mashups integrating data from services.
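Editorial note: the sketch below only illustrates the event-condition-action pattern behind active policies, using a retry policy attached to a workflow activity; the class and method names are invented and do not reproduce the AP Model's actual API.

```python
# Hedged sketch of an event-condition-action policy: a retry policy watches
# activity terminations and re-executes failed calls. Names are invented.

class RetryPolicy:
    def __init__(self, max_attempts=3):
        self.max_attempts = max_attempts

    def on_event(self, activity, event):               # Event
        if event == "terminated" and activity.failed:   # Condition
            if activity.attempts < self.max_attempts:
                activity.execute()                       # Action: re-run the call

class Activity:
    def __init__(self, call, policy):
        self.call, self.policy, self.attempts, self.failed = call, policy, 0, False

    def execute(self):
        self.attempts += 1
        try:
            self.call()
            self.failed = False
        except Exception:
            self.failed = True
        self.policy.on_event(self, "terminated")

flaky_calls = iter([Exception("timeout"), "ok"])
def flaky_service():
    result = next(flaky_calls)
    if isinstance(result, Exception):
        raise result

activity = Activity(flaky_service, RetryPolicy())
activity.execute()
print("attempts:", activity.attempts, "failed:", activity.failed)  # attempts: 2 failed: False
```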
Dubois, Jean-Christophe. "Vers une interrogation en langage naturel d'une base de données image." Nancy 1, 1998. http://www.theses.fr/1998NAN10044.
Full textDa, Costa David. "Visualisation et fouille interactive de données à base de points d'intérêts." Tours, 2007. http://www.theses.fr/2007TOUR4021.
In this thesis, we address the problem of visual data mining. We generally notice that existing approaches are specific to particular types of data and that a long time must be spent analyzing the results to obtain an answer about the shape of the data. We have therefore developed an interactive visualization environment for data exploration based on points of interest. This tool visualizes all types of data and is generic because it uses only one similarity measure. Such methods must be able to deal with large data sets; we also sought to improve the performance of our visualization algorithms and managed to represent one million data points. We then extended our tool to data clustering. Most existing data clustering methods work automatically, without involving the user in the process. We try to involve the user more significantly in the data clustering process in order to improve their understanding of the results.
Rolbert, Monique. "Résolution des formes pronominales dans l'interface d'interrogation d'une base de données." Aix-Marseille 2, 1989. http://www.theses.fr/1989AIX22018.
Full textPinasseau, Marianne. "Elaboration d'une base de données bibliographique sur les eaux minérales embouteillées." Bordeaux 2, 1999. http://www.theses.fr/1999BOR2P012.
Full textKratky, Andreas. "Les auras numériques : pour une poétique de la base de données." Thesis, Paris 1, 2013. http://www.theses.fr/2013PA010561/document.
Databases are ubiquitous in our lives and play an important role in many aspects of our daily activities. Conceived as technical support for the efficient management of information and as the preferred means of storage, the database has gained a level of importance with aesthetic and political implications that go far beyond purely technical questions. Both theoretical and practical in its approach, our research investigates the database as a means of expressive and poetic creation and reveals its specific character, in particular the discretization of data and the establishment of flexible relationships between them. In order to develop a poetics of the database, we reconsider the term « aura », which was used by Walter Benjamin to analyse the transformations in the nature of aesthetic experience brought about by industrial rationalisation and technology at the end of the nineteenth century. The practical part of our research consists of two interactive projects based on the poetic principles elaborated in the context of this dissertation.
Toumani, Farouk. "Le raisonnement taxinomique dans les modèles conceptuels de données : application à la retro-conception des bases de données relationnelles." Lyon, INSA, 1997. http://www.theses.fr/1997ISAL0051.
Terminological logics, as modern knowledge representation formalisms, are acknowledged to be one of the most promising artificial intelligence techniques for database applications. They allow the development of new data models equipped with taxonomic reasoning abilities. However, these languages turned out to be inadequate in the conceptual modelling area, where emphasis must be put on an accurate and natural description of the universe of discourse. In this work, we first examine the features of terminological logics with respect to the requirements of conceptual modelling. We show that terminological logics do not support the direct modelling requirement and that constructs in these formalisms are semantically overloaded. We then propose a model, defined as a formalization of a semantic model, namely an Entity-Relationship (E/R) model, using terminological logics. We demonstrate that E/R schemas and the schemas of this model are equivalent with respect to their information capacity. This result ensures that reasoning on an E/R schema can be reduced to reasoning on its equivalent schemas in the proposed model. As an application of this work, we propose to use the model in a relational database reverse engineering process, in order to support the automatic construction and enrichment of conceptual schemas and to maintain their correctness (consistency and minimality).
Verlaine, Lionel. "Optimisation des requêtes dans une machine bases de données." Paris 6, 1986. http://www.theses.fr/1986PA066532.
Full textBrahimi, Lahcene. "Données de tests non fonctionnels de l'ombre à la lumière : une approche multidimensionnelle pour déployer une base de données." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2017. http://www.theses.fr/2017ESMA0009/document.
Choosing appropriate database management systems (DBMS) and/or execution platforms for a given database (DB) is complex and tends to be time- and effort-intensive, since this choice has an important impact on the satisfaction of non-functional requirements (e.g., temporal performance or energy consumption). Indeed, a large number of tests have been performed to assess the quality of developed DBs, and this assessment often involves metrics associated with non-functional requirements. This leads to a mine of tests covering all life-cycle phases of DB design. Tests and their environments are usually published in scientific articles or on dedicated websites such as that of the Transaction Processing Council (TPC). This thesis therefore takes a special interest in the capitalization and reuse of performed tests, in order to reduce and master the complexity of the DBMS/platform selection process. By analyzing tests closely, we identify that they involve the data set, the execution platform, the addressed non-functional requirements, the queries used, etc. We thus propose an approach for the conceptualization and persistence of all these dimensions as well as of the test results. Consequently, this thesis leads to the following contributions: (1) a design model based on descriptive, prescriptive and ontological concepts to capture the different dimensions; (2) the development of a multidimensional repository to store the test environments and their results; (3) the development of a decision-making methodology based on a recommender system for DBMS and platform selection.
Dârlea, Georgiana-Lavinia. "Un système de classification supervisée à base de règles implicatives." Chambéry, 2010. http://www.theses.fr/2010CHAMS001.
This PhD thesis presents a series of research works in the field of supervised data classification, more precisely in the domain of semi-automatic learning of fuzzy rule-based classifiers. The manuscript first presents an overview of the classification problem and of the main classification methods already implemented and proven, in order to place the proposed method in the general context of the domain. Once the context is established, the actual research work is presented: the definition of a formal background for representing an elementary fuzzy rule-based classifier in a two-dimensional space, the description of a learning algorithm for these elementary classifiers on a given data set, and the design of a multi-dimensional classification system able to handle multi-class problems by combining the elementary classifiers. The implementation and testing of all these functionalities are then presented, and finally the application of the resulting classifier to two real-world digital image problems: the analysis of the quality of industrial products using 3D tomographic images and the identification of regions of interest in radar satellite images.