Dissertations / Theses on the topic 'Métabolites – Bases de données'
Roux, Aurélie. "Analyse du métabolome urinaire humain par chromatographie liquide couplée à la spectrométrie de masse à haute résolution." Paris 6, 2011. http://www.theses.fr/2011PA066575.
Le Boulch, Malo. "Taxonomie et inférence fonctionnelle des procaryotes : développement de MACADAM, une base de données de voies métaboliques associées à une taxonomie." Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0131.
Prokaryotes are ubiquitous organisms living in communities, whose extreme metabolic diversity is correlated with their ubiquity. To contribute to a better understanding of the functional role of prokaryotes, we developed MACADAM: a database of metabolic pathways associated with a prokaryote-centric taxonomy. The aim is to provide the scientific community with open access to functional information which has been selected for its genomic and annotation quality, and which is interoperable and simply structured, thereby enabling updates to the data MACADAM gathers from sources such as MetaCyc, MicroCyc and RefSeq. MACADAM meets these criteria. It includes PGDBs (Pathway/Genome DataBases) built from RefSeq genomes meeting the "complete genome" quality criterion, using the Pathway Tools software made available by MetaCyc, a metabolic pathway database. In order to enrich the database and increase the quality of its functional information, a collection of expert-curated PGDBs named MicroCyc was added; its PGDBs are favoured over those of RefSeq. Functional information sourced from the literature contained in the FAPROTAX and IJSEM phenotypic databases was also added. MACADAM contains 13,509 PGDBs (13,195 bacterial and 314 archaeal) and 1,260 unique metabolic pathways. Built with interoperable technologies (Python 3, SQLite), downloadable and open source, MACADAM can be integrated into tools requiring the pairing of functional and taxonomic information. To improve its visibility among the microbiology community, MACADAM is available online (http://macadam.toulouse.inra.fr). By using the taxonomy of the NCBI Taxonomy database, MACADAM makes it possible to link any taxon, from phylum to species, to its functional information. Each metabolic pathway is associated with two completeness scores: a PS (Pathway Score) and a PFS (Pathway Frequency Score). With each update, MACADAM integrates the new versions of RefSeq, NCBI Taxonomy and MicroCyc, so that corrections made to the taxonomy are promptly reflected and information on recently submitted genomes is added. Two example uses of MACADAM, and a comparison with an inference approach based on metagenomic reads, allowed for a discussion of the strengths and weaknesses of (i) MACADAM and (ii) inference by a prior taxonomic identification approach. The identification of individuals within prokaryotic communities benefits greatly from advances in sequencing technology and the refinement of bioinformatics analysis pipelines. The analysis of reads from metagenomic sequencing leads to the reconstruction of putative genomes and metagenomic species. In this context, we examined the problem of correcting the taxonomic assignments of metagenomic species, using a phylogenetic tree reconstruction approach on the one hand, and an overall genome relatedness index (ANI) on the other. This work allowed us to clarify the positioning of nine groups of metagenomic species, and highlighted errors of reference genome affiliation in Megasphaera and Blautia obeum. It also allowed us to confirm the reclassification of Ruminococcus gauvreauii into the genus Blautia. To limit errors and prevent their replication, it is important to ensure the quality of the information contained in databases. In this context, the scientific community should have better knowledge of the rules of nomenclature and of systematic methods, and further efforts should be made to advocate the merits of correcting database data. Finally, although metagenomics provides a better understanding of the microbial communities around us, an effort to cultivate organisms that are said to be uncultivable would increase the knowledge and diversity of prokaryotic organisms in databases. These efforts will have a direct impact on the quality of functional information and on MACADAM's coverage of prokaryotic diversity.
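Since MACADAM ships as a downloadable SQLite file, a taxon-to-pathway lookup can be scripted in a few lines. The sketch below is illustrative only: the table and column names (taxon, pathway, taxon_pathway, pathway_score, pathway_frequency_score) are hypothetical stand-ins and should be checked against the schema actually distributed with MACADAM.

```python
import sqlite3

# Hypothetical schema, for illustration only:
#   taxon(taxid, name, rank)
#   pathway(pid, name)
#   taxon_pathway(taxid, pid, pathway_score, pathway_frequency_score)

def pathways_for_taxon(db_path: str, taxon_name: str):
    """Return (pathway name, PS, PFS) rows for every pathway linked to a taxon."""
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            """
            SELECT p.name, tp.pathway_score, tp.pathway_frequency_score
            FROM taxon t
            JOIN taxon_pathway tp ON tp.taxid = t.taxid
            JOIN pathway p ON p.pid = tp.pid
            WHERE t.name = ?
            ORDER BY tp.pathway_score DESC
            """,
            (taxon_name,),
        ).fetchall()
    finally:
        conn.close()

# Usage: pathways_for_taxon("macadam.db", "Blautia obeum")
```

Ordering by PS first mirrors the completeness semantics described above: the higher the Pathway Score, the more complete the pathway for that taxon.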
Gross-Amblard, David. "Tatouage des bases de données." Habilitation à diriger des recherches, Université de Bourgogne, 2010. http://tel.archives-ouvertes.fr/tel-00590970.
Waller, Emmanuel. "Méthodes et bases de données." Paris 11, 1993. http://www.theses.fr/1993PA112481.
Castelltort, Arnaud. "Historisation de données dans les bases de données NoSQL orientées graphes." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20076.
This thesis deals with data historization in the context of graphs. Graph data have been dealt with for many years, but their exploitation in information systems, especially in NoSQL engines, is recent. The emerging Big Data and 3V (Variety, Volume, Velocity) contexts have revealed the limits of classical relational databases. Historization, for its part, has long been considered as linked only to technical and backup issues, and more recently to decisional purposes (Business Intelligence). However, historization is now taking on more and more importance in management applications. In this framework, the graph databases that are often used have received little attention regarding historization. Our first contribution consists in studying the impact of historized data in management information systems. This analysis relies on the hypothesis that historization is taking on more and more importance. Our second contribution proposes an original model for managing historization in NoSQL graph databases. This proposition consists, on the one hand, in elaborating a unique and generic system for representing the history and, on the other hand, in proposing query features. We show that the system can support both simple and complex queries. Our contributions have been implemented and tested over synthetic and real databases.
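To fix ideas about what a generic representation of a graph's history can look like, here is a minimal sketch, not the thesis's actual model: each edge carries a validity interval, an update closes the current version and opens a new one, and a snapshot query filters on time.

```python
from dataclasses import dataclass, field

INF = float("inf")

@dataclass
class VersionedEdge:
    src: str
    dst: str
    label: str
    valid_from: int
    valid_to: float = INF      # open-ended until superseded

@dataclass
class HistorizedGraph:
    edges: list = field(default_factory=list)

    def set_edge(self, src, dst, label, t):
        # Close the live version of this (src, label) edge, if any,
        # then open a new version (assumes a single-valued relation).
        for e in self.edges:
            if e.src == src and e.label == label and e.valid_to == INF:
                e.valid_to = t
        self.edges.append(VersionedEdge(src, dst, label, t))

    def snapshot(self, t):
        # Edges that were live at time t: the simplest temporal query.
        return [e for e in self.edges if e.valid_from <= t < e.valid_to]

g = HistorizedGraph()
g.set_edge("alice", "dept_A", "works_in", t=1)
g.set_edge("alice", "dept_B", "works_in", t=5)
print([(e.src, e.dst) for e in g.snapshot(3)])   # [('alice', 'dept_A')]
```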
Benchkron, Said Soumia. "Bases de données et logiciels intégrés." Paris 9, 1985. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1985PA090025.
Marie-Julie, Jean Michel. "Bases de données d'images - Calculateurs parallèles." Paris 6, 2000. http://www.theses.fr/2000PA066593.
Voisard, Agnès. "Bases de données géographiques : du modèle de données à l'interface utilisateur." Paris 11, 1992. http://www.theses.fr/1992PA112354.
Nguyen, Gia Toan. "Quelques fonctionnalités de bases de données avancées." Habilitation à diriger des recherches, Grenoble 1, 1986. http://tel.archives-ouvertes.fr/tel-00321615.
Gross-Amblard, David. "Approximation dans les bases de données contraintes." Paris 11, 2000. http://www.theses.fr/2000PA112304.
Qian, Shunchu. "Restructuration de bases de données entité-association." Dijon, 1995. http://www.theses.fr/1995DIJOS064.
Collobert, Ronan. "Algorithmes d'Apprentissage pour grandes bases de données." Paris 6, 2004. http://www.theses.fr/2004PA066063.
Bossy, Robert. "Édition coopérative de bases de données scientifiques." Paris 6, 2002. http://www.theses.fr/2002PA066047.
Valceschini-Deza, Nathalie. "Accès sémantique aux bases de données textuelles." Nancy 2, 1999. http://www.theses.fr/1999NAN21021.
Souihli, Asma. "Interrogation des bases de données XML probabilistes." Thesis, Paris, ENST, 2012. http://www.theses.fr/2012ENST0046/document.
Probabilistic XML is a probabilistic model for uncertain tree-structured data, with applications to data integration, information extraction, and uncertain version control. In this dissertation, we explore efficient algorithms for evaluating tree-pattern queries with joins over probabilistic XML or, more specifically, for approximating the probability of each item of a query result. The approach relies on, first, extracting the query lineage over the probabilistic XML document and, second, looking for an optimal strategy to approximate the probability of the propositional lineage formula. ProApproX is the probabilistic query manager for probabilistic XML presented in this thesis. The system allows users to query uncertain tree-structured data in the form of probabilistic XML documents. It integrates a query engine that searches for an optimal strategy to evaluate the probability of the query lineage. ProApproX relies on a query-optimizer-like approach: exploring different evaluation plans for different parts of the formula and predicting the cost of each plan, using a cost model for the various evaluation algorithms. We demonstrate the efficiency of this approach on datasets used in a number of the most popular previous works on probabilistic XML querying, as well as on synthetic data. An early version of the system was demonstrated at the ACM SIGMOD 2011 conference. First steps towards the new query solution were discussed in an EDBT/ICDT PhD Workshop paper (2011). A fully redesigned version that implements the techniques and studies shared in the present thesis is published as a demonstration at CIKM 2012. Our contributions are also part of an IEEE ICDE
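One of the simplest evaluation strategies such an engine can pick for a lineage formula is Monte Carlo sampling. A minimal sketch (not ProApproX's actual code) for a DNF lineage over independent events:

```python
import random

def approx_lineage_probability(lineage, probs, trials=100_000, seed=0):
    """Monte Carlo estimate of P(lineage formula is true).

    `lineage` is a propositional formula in DNF: a list of clauses, each a
    list of (variable, polarity) literals. `probs` maps each variable to its
    independent probability of being true. This is only one of the candidate
    evaluation algorithms a cost-based engine could choose from.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        world = {v: rng.random() < p for v, p in probs.items()}
        if any(all(world[v] == pol for v, pol in clause) for clause in lineage):
            hits += 1
    return hits / trials

# (x1 and x2) or (not x3): exact value is 1 - (1 - 0.9*0.8) * 0.7 = 0.804
lineage = [[("x1", True), ("x2", True)], [("x3", False)]]
print(approx_lineage_probability(lineage, {"x1": 0.9, "x2": 0.8, "x3": 0.7}))
```

Monte Carlo converges slowly for very small probabilities, which is precisely why a cost model that chooses among several algorithms per subformula, as in ProApproX, pays off.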
Benzine, Mehdi. "Combinaison sécurisée des données publiques et sensibles dans les bases de données." Versailles-St Quentin en Yvelines, 2010. http://www.theses.fr/2010VERS0024.
Protection of sensitive data is a major issue in the database field. Many software and hardware solutions have been designed to protect data when stored and during query processing. Moreover, it is also necessary to provide a secure way to combine sensitive data with public data. To achieve this goal, we designed a new storage and processing architecture. Our solution combines a main server that stores public data and a secure server dedicated to the storage and processing of sensitive data. The secure server is a hardware token which is basically a combination of (i) a secured microcontroller and (ii) a large external NAND Flash memory. Queries that combine public and sensitive data are split into two subqueries, the first dealing with the public data, the second with the sensitive data. Each subquery is processed on the server storing the corresponding data. Finally, the data obtained by computing the subquery on public data is sent to the secure server to be combined with the result of the computation on sensitive data. For security reasons, the final result is built on the secure server. This architecture resolves the security problems, because all the computations dealing with sensitive data are done by the secure server, but brings performance problems (little RAM, asymmetric cost of read/write operations, etc.). These problems are addressed by different query optimization strategies.
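A toy rendering of the split-execution idea, with plain Python dictionaries standing in for the two stores (the real design targets a secure microcontroller with NAND Flash); the table contents and names are invented for illustration:

```python
# Main server: public reference data (nothing sensitive here).
PUBLIC_DB = {"drug_42": {"name": "aspirin", "max_dose": 3}}
# Secure token: sensitive patient data, never shipped to the main server.
SENSITIVE_DB = {"patient_7": {"drug": "drug_42", "dose": 5}}

def patients_over_max_dose():
    # Subquery 1, on the main server: evaluate the public part only.
    public_result = {k: v["max_dose"] for k, v in PUBLIC_DB.items()}
    # The public result is shipped to the secure server, where it is joined
    # with the sensitive data; the final result is built on the secure side,
    # so no sensitive value ever leaves the token.
    return [pid for pid, r in SENSITIVE_DB.items()
            if r["dose"] > public_result[r["drug"]]]

print(patients_over_max_dose())   # ['patient_7']
```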
Léonard, Michel. "Conception d'une structure de données dans les environnements de bases de données." Grenoble 1, 1988. http://tel.archives-ouvertes.fr/tel-00327370.
Ripoche, Hugues. "Une construction interactive d'interprétations de données : application aux bases de données de séquences génétiques." Montpellier 2, 1995. http://www.theses.fr/1995MON20248.
Roux, Aurélie. "Analyse du métabolome urinaire humain par chromatographie liquide couplée à la spectrométrie de masse à haute résolution." PhD thesis, Université Pierre et Marie Curie - Paris VI, 2011. http://tel.archives-ouvertes.fr/tel-00641529.
Smine, Hatem. "Outils d'aide à la conception : des bases de données relationnelles aux bases d'objets complexes." Nice, 1988. http://www.theses.fr/1988NICE4213.
Nunez Del Prado Cortez, Miguel. "Attaques d'inférence sur des bases de données géolocalisées." PhD thesis, INSA de Toulouse, 2013. http://tel.archives-ouvertes.fr/tel-00926957.
Thion-Goasdoue, Virginie. "Bases de données, contraintes d'intégrité et logiques modales." Paris 11, 2004. http://www.theses.fr/2004PA112134.
In this thesis, we use tableaux systems for modal logics to solve database problems related to integrity constraints. In the first part, we use a tableaux system for first-order modal logics in the context of a method testing integrity constraint preservation in an object-oriented database. We develop a proof search strategy and prove that it is sound and complete in its unbounded version. This leads to the implementation of a theorem prover for the first-order modal logics K, K4, D, T and S4. The prover can also be used for other applications where testing the validity of first-order modal formulas is needed (software verification, multi-agent systems, etc.). In the second part, we study hybrid multi-modal logic (HMML) as a formalism to express schemas and integrity constraints for semi-structured data. On the one hand, we prove that HMML captures the notion of semi-structured data and constraints on it. On the other hand, we generalize the notion of schema by proposing a definition of schema in which references are "well typed" (contrary to what happens with DTDs), and we prove that this new notion can be formalized by sentences of HMML exactly as a constraint is. When a tableaux system for HMML is added to this approach, some classical database problems can be treated (constraint implication, schema inclusion, constraint satisfiability, etc.).
Guo, Yanli. "Confidentialité et intégrité de bases de données embarquées." Versailles-St Quentin en Yvelines, 2011. http://www.theses.fr/2011VERS0038.
As a decentralized way of managing personal data, the Personal Data Server (PDS) approach resorts to Secure Portable Tokens (SPTs), combining the tamper resistance of a smart card microcontroller with the mass storage capacity of NAND Flash. The data is stored and accessed, and its access rights controlled, using such devices. To support powerful PDS application requirements, a full-fledged DBMS engine is embedded in the SPT. This thesis addresses two problems regarding the confidentiality and integrity of personal data: (i) the database stored on the NAND Flash remains outside the security perimeter of the microcontroller, thus potentially suffering from attacks; (ii) the PDS approach relies on supporting servers to provide durability, availability, and global processing functionalities, and appropriate protocols must ensure that these servers cannot breach the confidentiality of the manipulated data. The proposed solutions rely on cryptographic techniques, without incurring large overheads.
Najjar, Ahmed. "Forage de données de bases administratives en santé." Doctoral thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/28162.
Current health systems are increasingly equipped with data collection and storage systems, and a huge amount of data is therefore stored in medical databases. These databases, designed for administrative or billing purposes, are fed with new data whenever a patient uses the healthcare system. This specificity makes them a rich and extremely interesting source of information: they capture the constraints of reality across a great variety of real medical care situations, and could thus support the conception and modeling of the medical treatment process. However, despite their obvious interest, these administrative databases are still underexploited by researchers. In this thesis, we propose a new approach to mining administrative data in order to detect patterns in patient care trajectories. First, we propose an algorithm able to cluster complex objects that represent medical services. These objects are characterized by a mixture of numerical, categorical and multivalued categorical variables. We propose to extract one projection space for each multivalued variable and to modify the computation of the distance between objects to take these projections into account. Second, a two-step mixture model is proposed to cluster these objects. This model uses the Gaussian distribution for the numerical variables, the multinomial distribution for the categorical variables and hidden Markov models (HMMs) for the multivalued variables. We thus obtain two algorithms able to cluster complex objects characterized by a mixture of variables. Once this stage is reached, an approach for the discovery of care trajectory patterns is set up, involving the following steps: 1. preprocessing, which allows the building and generation of medical service sets (three sets are obtained: one for hospital stays, one for consultations and one for visits); 2. modeling of treatment processes as successions of medical service labels (these complex processes require a sophisticated clustering method, and we propose a clustering algorithm based on HMMs); 3. creation of an approach for visualizing and analyzing trajectory patterns in order to mine the discovered models. Together, these steps constitute the knowledge discovery process from medical administrative databases. We apply this approach to databases for patients over 65 years old who live in the province of Quebec and suffer from heart failure. The data are extracted from three databases: the MSSS MED-ÉCHO database, the RAMQ bank and the database containing death certificate data. The results clearly demonstrate the effectiveness of our approach, detecting special patterns that can help healthcare administrators to better manage health treatments.
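As an illustration of the first contribution, mixing variable types in a single distance, here is a hedged sketch: the projection vectors would come from some factorial method applied to each multivalued variable, and the equal weighting of the three parts is a modeling choice for the example, not the thesis's exact formula.

```python
import numpy as np

def mixed_distance(a, b, num_keys, cat_keys, mv_projections):
    """Toy distance over mixed-type medical-service objects.

    num_keys / cat_keys: names of numeric / categorical fields.
    mv_projections: {field: {value: vector}}, a hypothetical projection
    space per multivalued categorical variable, standing in for the
    projections proposed in the thesis.
    """
    d = sum((a[k] - b[k]) ** 2 for k in num_keys)        # numeric part
    d += sum(a[k] != b[k] for k in cat_keys)             # simple matching
    for k, proj in mv_projections.items():               # multivalued part:
        va = np.mean([proj[v] for v in a[k]], axis=0)    # average the projected
        vb = np.mean([proj[v] for v in b[k]], axis=0)    # values of each set
        d += float(np.sum((va - vb) ** 2))
    return float(np.sqrt(d))

proj = {"ecg": np.array([1.0, 0.0]), "xray": np.array([0.0, 1.0])}
a = {"age": 70, "sex": "F", "acts": ["ecg"]}
b = {"age": 72, "sex": "M", "acts": ["ecg", "xray"]}
print(mixed_distance(a, b, ["age"], ["sex"], {"acts": proj}))
```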
Bost, Raphaël. "Algorithmes de recherche sur bases de données chiffrées." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S001/document.
Searchable encryption aims at making efficient a seemingly easy task: outsourcing the storage of a database to an untrusted server while keeping search features. With the development of Cloud storage services, for both private individuals and businesses, the efficiency of searchable encryption has become crucial: inefficient constructions would not be deployed on a large scale because they would not be usable. The key problem with searchable encryption is that any construction achieving "perfect security" induces a computational or communication overhead that is unacceptable for providers or users, at least with current techniques and by today's standards. This thesis proposes and studies new security notions and new constructions of searchable encryption, aiming at making it more efficient and more secure. In particular, we start by considering the forward and backward privacy of searchable encryption schemes, what they imply in terms of security and efficiency, and how they can be realized. Then, we show how to protect an encrypted database user against active attacks by the Cloud provider, and that such protections have an inherent efficiency cost. Finally, we look at existing attacks against searchable encryption and explain how we might thwart them.
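To make "forward privacy" concrete: an update must not be linkable to earlier search tokens. A much-simplified construction, not one of the thesis's schemes (which rely on tools such as trapdoor permutations), stores each new entry at a fresh random location with an encrypted back-pointer, so old tokens cannot predict where the next update lands:

```python
import os
from hashlib import sha256

SERVER = {}     # untrusted store: random location -> (doc_id, encrypted pointer)
ZERO = b"\x00" * 32

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class Client:
    """Toy forward-private index: entries for a keyword form a backward
    linked list at random locations. For brevity the client walks the list
    itself; a real scheme would send the keyword-derived pad key and the
    head location as a search token and let the server walk it."""

    def __init__(self):
        self.heads = {}                       # keyword -> latest location

    def add(self, keyword, doc_id):
        prev = self.heads.get(keyword, ZERO)
        loc = os.urandom(32)                  # fresh, unpredictable location
        pad = sha256(b"ptr" + keyword.encode() + loc).digest()
        SERVER[loc] = (doc_id, xor(prev, pad))   # encrypt the back-pointer
        self.heads[keyword] = loc

    def search(self, keyword):
        loc, out = self.heads.get(keyword), []
        while loc and loc != ZERO:
            doc_id, enc_prev = SERVER[loc]
            out.append(doc_id)
            loc = xor(enc_prev, sha256(b"ptr" + keyword.encode() + loc).digest())
        return out

c = Client()
c.add("flu", "doc1"); c.add("flu", "doc2")
print(c.search("flu"))   # ['doc2', 'doc1']
```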
Raïssi, Chedy. "Extraction de Séquences Fréquentes : Des Bases de Données Statiques aux Flots de Données." PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2008. http://tel.archives-ouvertes.fr/tel-00351626.
Raissi, Chedy. "Extraction de séquences fréquentes : des bases de données statiques aux flots de données." Montpellier 2, 2008. http://www.theses.fr/2008MON20063.
Laurent, Anne. "Bases de données multidimensionnelles floues et leur utilisation pour la fouille de données." Paris 6, 2002. http://www.theses.fr/2002PA066426.
Sahri, Soror. "Conception et implantation d'un système de bases de données distribuée & scalable : SD-SQL Server." Paris 9, 2006. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2006PA090013.
Our thesis elaborates on the design of a scalable distributed database system (SD-DBS). A novel feature of an SD-DBS is the concept of a scalable distributed relational table, a scalable table for short. Such a table accommodates dynamic splits of its segments at SD-DBS storage nodes. A split occurs when an insert makes a segment overflow, as in a B-tree file. Current DBMSs provide static partitioning only, requiring a cumbersome global reorganization from time to time. The transparency of the distribution of a scalable table is in this light an important step beyond the current technology. Our thesis explores the design issues of an SD-DBS by constructing a prototype termed SD-SQL Server. As its name indicates, it uses the services of SQL Server. SD-SQL Server repartitions a table when an insert overflows existing segments. With the comfort of a single-node SQL Server user, the SD-SQL Server user gets larger tables or faster response times through dynamic parallelism. We present the architecture of our system, its implementation and a performance analysis.
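The core mechanism, a segment that splits when an insert makes it overflow, can be sketched in a few lines; node placement, distributed images and the SQL Server specifics are all elided, and the names below are invented:

```python
SEGMENT_CAPACITY = 4   # tiny, for illustration; real segments hold many tuples

class ScalableTable:
    """Range-partitioned table whose ordered segments split on overflow,
    like B-tree nodes; each segment would live on its own storage node."""

    def __init__(self):
        self.segments = [[]]               # list of sorted key lists

    def _locate(self, key):
        # Segments are ordered by key range: find the one owning `key`.
        for i in range(len(self.segments) - 1):
            if key < self.segments[i + 1][0]:
                return i
        return len(self.segments) - 1

    def insert(self, key):
        i = self._locate(key)
        seg = self.segments[i]
        seg.append(key)
        seg.sort()
        if len(seg) > SEGMENT_CAPACITY:    # overflow: split locally,
            mid = len(seg) // 2            # no global reorganization
            self.segments[i:i + 1] = [seg[:mid], seg[mid:]]

t = ScalableTable()
for k in [5, 1, 9, 3, 7, 2, 8]:
    t.insert(k)
print(t.segments)   # [[1, 2, 3], [5, 7, 8, 9]]
```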
Laabi, Abderrazzak. "Étude et réalisation de la gestion des articles appartenant à des bases de données gérées par une machine bases de données." Paris 11, 1987. http://www.theses.fr/1987PA112338.
The work presented in this thesis is part of a study and development project concerning the design of three layers of the DBMS on the DORSAL-32 database machine. The first layer ensures record management within the storage areas, record and page locking organized according to the access mode and the transaction's coherency degree, and the handling of micro-logs, which guarantee the atomicity of an action. The second layer ensures transaction logging and warm restarts, which guarantee the atomicity and durability of a transaction. The third layer ensures concurrent access management and the handling of lock tables. Performance measures of the methods used are also presented. The last chapter of this report contains research work concerning the implementation of the virtual linear hashing method in our DBMS. The problem studied is the transfer of records from one page to another. Under these conditions, the record pointers that are classically used do not permit direct access. We propose a new pointer which enables direct access to a record, regardless of which page contains it at a given instant.
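For context on why records migrate: in linear hashing, buckets split in a fixed incremental order as the file grows, so a record's page can change over its lifetime, which is what defeats classical record pointers. A textbook-style sketch of the addressing and splitting (not the DORSAL-32 implementation):

```python
class LinearHashFile:
    def __init__(self, initial_buckets=2, capacity=4):
        self.n0 = initial_buckets     # buckets at the start of each round
        self.level = 0                # current doubling round
        self.next_split = 0           # next bucket to split (fixed order)
        self.capacity = capacity
        self.buckets = [[] for _ in range(initial_buckets)]

    def _addr(self, key):
        h = hash(key) % (self.n0 * 2 ** self.level)
        if h < self.next_split:       # bucket already split this round:
            h = hash(key) % (self.n0 * 2 ** (self.level + 1))
        return h

    def insert(self, key):
        self.buckets[self._addr(key)].append(key)
        if len(self.buckets[self._addr(key)]) > self.capacity:
            self._split()             # note: splits bucket `next_split`,
                                      # not necessarily the full one
    def _split(self):
        old = self.buckets[self.next_split]
        self.buckets.append([])
        self.next_split += 1
        if self.next_split == self.n0 * 2 ** self.level:
            self.level += 1           # round complete: double the address space
            self.next_split = 0
        keys, old[:] = old[:], []
        for k in keys:                # records move page here
            self.buckets[self._addr(k)].append(k)

f = LinearHashFile()
for k in range(20):
    f.insert(k)
print(len(f.buckets), f.next_split, f.level)
```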
Boullé, Marc. "Recherche d'une représentation des données efficace pour la fouille des grandes bases de données." Phd thesis, Télécom ParisTech, 2007. http://pastel.archives-ouvertes.fr/pastel-00003023.
Curé, Olivier. "Relations entre bases de données et ontologies dans le cadre du web des données." Habilitation à diriger des recherches, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00843284.
Charmpi, Konstantina. "Méthodes statistiques pour la fouille de données dans les bases de données de génomique." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GRENM017/document.
Our focus is on statistical testing methods that compare a given vector of numeric values, indexed by all genes in the human genome, to a given set of genes, known for instance to be associated with a particular type of cancer. Among existing methods, Gene Set Enrichment Analysis (GSEA) is the most widely used. However, it has several drawbacks. First, the calculation of p-values is very time-consuming and insufficiently precise. Second, like most other methods, it outputs a large number of significant results, the majority of which are not biologically meaningful. These two issues are addressed here by two new statistical procedures, the Weighted and Doubly Weighted Kolmogorov-Smirnov tests. The two tests have been applied both to simulated and real data, and compared with other existing procedures. Our conclusion is that, beyond their mathematical and algorithmic advantages, the WKS and DWKS tests can be more informative in many cases than the classical GSEA test and efficiently address the issues that led to their construction.
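For reference, the classical GSEA statistic that the WKS/DWKS tests revisit is a weighted running sum over the ranked gene list. A compact sketch of that standard definition (not of the thesis's new tests):

```python
import numpy as np

def enrichment_score(values, gene_set_mask, p=1.0):
    """GSEA-style running-sum enrichment score.

    values: one score per gene (e.g. correlation with a phenotype).
    gene_set_mask: boolean array, True for genes in the tested set.
    """
    order = np.argsort(values)[::-1]              # rank genes by score
    mask = gene_set_mask[order]
    w = np.abs(values[order]) ** p
    hit = np.where(mask, w / w[mask].sum(), 0.0)  # step up at set members
    miss = np.where(mask, 0.0, 1.0 / (~mask).sum())
    running = np.cumsum(hit - miss)
    return running[np.argmax(np.abs(running))]    # signed maximal deviation

rng = np.random.default_rng(0)
vals = rng.normal(size=1000)
mask = np.zeros(1000, bool)
mask[rng.choice(1000, 50, replace=False)] = True
print(enrichment_score(vals, mask))
```

Estimating the significance of this score by permutations is what makes classical GSEA p-values slow and imprecise, the first drawback the thesis addresses.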
Kezouit, Omar Abdelaziz. "Bases de données relationnelles et analyse de données : conception et réalisation d'un système intégré." Paris 11, 1987. http://www.theses.fr/1987PA112130.
Zelasco, José Francisco. "Gestion des données : contrôle de qualité des modèles numériques des bases de données géographiques." Thesis, Montpellier 2, 2010. http://www.theses.fr/2010MON20232.
A Digital Surface Model (DSM) is a numerical surface model formed by a set of points, arranged as a grid, describing some physical surface: a terrain in the case of Digital Elevation Models (DEMs), or other possible applications such as a face or some anatomical organ. The study of the precision of these models, which is of particular interest for DEMs, has been the object of several studies in the last decades. Measuring the precision of a DSM, relative to another model of the same physical surface, consists in estimating the expectation of the squared differences between pairs of homologous points, one in each model, corresponding to the same feature of the physical surface. But these pairs are not easily discernible: the grids may not be coincident, and the differences between homologous points corresponding to benchmarks on the physical surface might be subject to special conditions, such as more careful measurements than on ordinary points, which imply a different precision. The procedure generally used to avoid these inconveniences has been to use the squared vertical distances between the models, which only address the vertical component of the error, thus giving a biased estimate when the surface is not horizontal. The Perpendicular Distance Evaluation Method (PDEM), which avoids this bias, provides estimates for the vertical and horizontal components of errors, and is thus a useful tool for detecting discrepancies in Digital Surface Models such as DEMs. The solution includes a special reference to the simplification which arises when the error does not vary across horizontal directions. The PDEM is also assessed with DEMs obtained by means of SAR interferometry.
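The bias that motivates the PDEM can be stated in one line: for a locally planar surface tilted at angle θ, a perpendicular error d⊥ shows up as a vertical difference d_z = d⊥ / cos θ. Hence, under this planar assumption:

```latex
E\!\left[d_z^{2}\right] = \frac{E\!\left[d_\perp^{2}\right]}{\cos^{2}\theta}
\;\;\Longrightarrow\;\;
E\!\left[d_z^{2}\right] \ge E\!\left[d_\perp^{2}\right],
\text{ with equality only for horizontal terrain } (\theta = 0).
```

Vertical-distance estimates therefore systematically overestimate the error on slopes, which is exactly the bias the perpendicular-distance method removes.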
Ykhlef, Mourad. "Interrogation des données semistructurées." Bordeaux 1, 1999. http://www.theses.fr/1999BOR1A640.
Jacob, Stéphane. "Protection cryptographique des bases de données : conception et cryptanalyse." PhD thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00738272.
Collet, Christine. "Les formulaires complexes dans les bases de données multimédia." PhD thesis, Grenoble 1, 1987. http://tel.archives-ouvertes.fr/tel-00325851.
Coulon, Cedric. "Réplication Préventive dans une grappe de bases de données." PhD thesis, Université de Nantes, 2006. http://tel.archives-ouvertes.fr/tel-00481299.
Bouganim, Luc. "Sécurisation du Contrôle d'Accès dans les Bases de Données." Habilitation à diriger des recherches, Université de Versailles-Saint Quentin en Yvelines, 2006. http://tel.archives-ouvertes.fr/tel-00308620.
Jault, Claude. "Méthodologie de la conception des bases de données relationnelles." Paris 9, 1989. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1989PA090011.
This thesis analyses the different relational database design methods and, because of their insufficiencies, proposes a new one. The first chapter presents the concepts: conceptual and logical schemas and models, links between entities, connection cardinalities, relational model concepts (relations, dependencies, primary and foreign keys), normalization (with the demonstration that the 4th normal form is not included in the 3rd), integrity constraints (domain, relation, reference), null values, and a new type of constraint, the constraints between links. The second chapter gives an account of the different methods, which can be dispatched into three groups: those which use the entity-relationship model (the American and French model versions with their extensions, the axial method, the Remora method); those which do not use a conceptual schema (the universal relation approach, the Codd and Date approach, the view integration approach); and the NIAM information analysis method, which uses semantic networks. The third chapter exposes the entity-link-relation method elaborated in this thesis. It is supported by a conceptual model representing the entities and their links, with the integrity constraints between these links. It proceeds in three phases: the global conceptual approach, centered on entities and links (1:n and 1:1, the m:n links being converted into two 1:n links); the detailed conceptual approach, which defines the attributes and the semantic domains, normalizes entities, and examines non-permanent dependencies and the link constraints; and the logical approach, which gives the relational schema, controls its normality, defines integrity constraints and solves referential deadlocks. The fourth chapter presents a concrete case study of the entity-link-relation method.
Fansi, Janvier. "Sécurité des bases de données XML (eXtensible Markup Language)." Pau, 2007. http://www.theses.fr/2007PAUU3007.
XML has emerged as the de facto standard for representing and exchanging information on the Internet. As the Internet is a public network, corporations and organizations which use XML need mechanisms to protect XML data against unauthorised access. Thus, several schemes for XML access control have been proposed. They can be classified into two major categories: view materialization and query rewriting techniques. In this thesis, we point out the drawbacks of view materialization approaches through the development of a prototype of a secured XML database based on one of those approaches. Afterwards, we propose a technique aimed at securing XML by means of query rewriting. We prove its correctness and show that it is more efficient than competing works. Finally, we extend our proposal to control the updating of XML databases.
Hammiche, Samira. "Approximation de requêtes dans les bases de données multimédia." Lyon 1, 2007. http://www.theses.fr/2007LYO10080.
Grison, Thierry. "Intégration de schémas de bases de données entité-association." Dijon, 1994. http://www.theses.fr/1994DIJOS005.
Bidoit-Tollu, Nicole. "Bases de données déductives : négation et logique des défauts." Paris 11, 1989. http://www.theses.fr/1989PA112387.
Gardy, Danièle. "Bases de données, allocations aléatoires : quelques analyses de performances." Paris 11, 1989. http://www.theses.fr/1989PA112221.
This thesis is devoted to the analysis of some parameters of interest for estimating the performance of computer systems, most notably database systems. The unifying features are the description of the phenomena to be studied in terms of random allocations and the systematic use of methods from the average-case analysis of algorithms. We associate a generating function with each parameter of interest, which we use to derive an asymptotic expression for this parameter. The main problem studied in this work is the estimation of the sizes of derived relations in a relational database framework. We show that this is closely related to the so-called "occupancy problem" in urn models, a classical tool of discrete probability theory. We characterize the conditional distribution of the size of a relation derived from relations whose sizes are known, and give conditions which ensure the asymptotic normality of the limiting distribution. We next study the implementation of "logical" relations by multi-attribute or doubly chained trees, for which we give results on the complexity of a random orthogonal range query. Finally, we study some "dynamic" random allocation phenomena, such as the birthday problem, which models the occurrence of collisions in hashing, and a model of the Least Recently Used cache memory algorithm.
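The urn-model connection is easy to make concrete: projecting m tuples onto an attribute with n equally likely values is m balls thrown into n urns, and the expected size of the derived relation is the expected number of occupied urns. A quick check of that classical formula (uniform case only; the thesis handles far more general distributions):

```python
# Expected number of distinct values hit when m tuples are thrown
# uniformly into n possible key values: n * (1 - (1 - 1/n)^m),
# the basic tool for estimating the size of a projection.

def expected_distinct(n: int, m: int) -> float:
    return n * (1.0 - (1.0 - 1.0 / n) ** m)

print(expected_distinct(n=1000, m=5000))   # ~993.3 distinct key values
```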
Richard, Philippe. "Des objets complexes aux bases de données orienté-objet." Paris 11, 1989. http://www.theses.fr/1989PA112299.
This thesis is concerned with the two main streams of database research:
• Complex object models: the relational data model provides a good theoretical foundation for databases and data manipulation languages. However, this model lacks semantic power for the new emerging applications of databases (CAD, CAM, office automation). The data involved in these application domains are structurally more complex than sets of flat tuples. Our work in the Verso project defines a model for complex objects whose operations can be implemented by a finite state automaton which filters data on the fly.
• Object-oriented databases: although complex object models provide good solutions for handling complex data, they fall short of solving the problem of application programming, as their associated languages lack the necessary computing power. In the early 80s, new systems appeared which mixed database functionalities with the computing power of a general-purpose programming language. Depending on the approach, we can speak of persistent programming languages or object-oriented databases.
This work is composed of two parts. The first one (Chapters I to V) presents a state of the art describing the main results of database research on complex object models (Chapter I) and database programming languages (Chapters II to V). The second part (Chapter VI) groups seven publications which describe our work in these two domains.
Coulon, Cédric. "Réplication préventive dans une grappe de bases de données." Nantes, 2006. http://www.theses.fr/2006NANT2074.
In a database cluster, preventive replication can provide strong consistency without the limitations of synchronous replication. In this thesis, we present a full solution for preventive replication that supports multi-master and partial configurations, where databases are partially replicated at different nodes. To increase transaction throughput, we propose an optimization that eliminates delay at the expense of a few transaction aborts, and we introduce concurrent replica refreshment. We describe large-scale experimentation of our algorithm based on our RepDB* prototype over a cluster of 64 nodes running the PostgreSQL DBMS. Our experimental results using the TPC-C benchmark show that the proposed approach yields excellent scale-up and speed-up.
Badr, Youakim. "Couplage documents et bases de données : étude et réalisation." Lyon, INSA, 2003. http://www.theses.fr/2003ISAL0079.
Until recently, databases proved to be a robust and mature technology, serving well the needs of the applications for which they were designed. Today, in the age of XML and multimedia documents, a variety of document-based applications is emerging. Documents are in wide use because they are more flexible, capturing a much greater variety of data types, including images, sound, video clips and especially paragraphs of free text, and because they are designed for human consumption and production. When documents are the norm in human activities, the sophisticated techniques developed for databases no longer apply. Coupling documents and databases is thus attracting increasing attention in the computer science community. How can we develop a generic approach that ensures flexible and well-adapted information capture based on XML documents and, at the same time, efficient information retrieval and manipulation based on databases? This dissertation presents the Coupling Approach, which integrates XML documents of text in natural language with object-relational databases. The Coupling Approach starts from the database schema to produce XML DTDs of arbitrary complexity covering user and application needs. At this point, users or third-party applications generate XML documents containing relevant data and conforming to these DTDs. In the case of paragraphs in natural language, the Coupling Approach carries out information extraction and manipulation of relevant data, whereas in the case of elementary data it applies only data manipulation. In both cases, the manipulation restructures the data in documents into a valid format that is easy to store in the database. An interesting characteristic of the Coupling Approach is the integration of an information extraction system and the design of expressive extraction patterns. Behind the scenes, we have provided the algorithms and formalisms necessary to reduce human intervention and to conceive an approach independent of any application domain. To test our ideas, we have developed a modular architecture and implemented a prototype. Finally, we have validated the prototype on a small corpus of medical records.
Lafaye, Julien. "Tatouage des bases de données avec préservation de contraintes." Paris, CNAM, 2007. http://www.theses.fr/2007CNAM0576.
This thesis deals with database watermarking. Watermarking is an information hiding method which enables marks to be embedded within digital contents. The first application of digital watermarking is copyright protection, which is also the main focus of this thesis. It is divided into three independent parts. In the first one, we present a novel watermarking method for numerical databases which permits embedding digital watermarks in relational tables while preserving the results of queries previously defined using a language similar to SQL. In the second part, we consider XML streams and geographical databases and introduce two novel constraint-preserving watermarking algorithms for these data types. For XML streams, our contribution is a watermarking scheme which translates the original content into a watermarked one with the same type. For geographical databases, we propose an algorithm which embeds the watermark without altering the shapes of the geographical objects. In the third part, we investigate the computational complexity of obtaining optimal watermarking schemes. We show that this task is highly intractable and identify the features responsible for it.
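To fix ideas on the first part, key-driven watermarking of numerical relations in the spirit of Agrawal and Kiernan can be sketched as below. This toy version flips least significant bits pseudorandomly, selected by a secret key, and does not attempt the thesis's actual contribution, namely preserving the results of a declared set of queries.

```python
import hmac, hashlib

KEY = b"secret-watermark-key"   # only the owner can locate and verify marks

def mark_bit(pk):
    """Decide from the key and the tuple's primary key whether to mark it,
    and if so which bit value to embed. Returns None for unmarked tuples."""
    h = hmac.new(KEY, pk.encode(), hashlib.sha256).digest()
    if h[0] % 10 == 0:               # mark roughly 1 tuple in 10
        return h[1] & 1              # pseudorandom bit to embed
    return None

def watermark(rows):
    """rows: list of (primary_key, numeric_value). Flip the least
    significant bit of the values selected by the secret key."""
    out = []
    for pk, value in rows:
        b = mark_bit(pk)
        if b is not None:
            value = (value & ~1) | b
        out.append((pk, value))
    return out

print(watermark([("r1", 100), ("r2", 57), ("r3", 230)]))
```

Detection reruns mark_bit over the suspect table and counts matching bits; the preserved-query guarantee studied in the thesis requires, in addition, restricting which tuples and attributes may be altered.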