Dissertations / Theses on the topic 'Gestion des données réparties'
Robert, de Saint Victor Isabelle. "Système déductif dans le contexte de données réparties." Lyon 1, 1988. http://www.theses.fr/1988LYO10084.
Driouche, Mohamed. "Un système de gestion de base de données réparties dans un environnement temps réel." Paris 6, 1989. http://www.theses.fr/1989PA066730.
Kerhervé, Brigitte. "Vues relationnelles : implantation dans les systèmes de gestion de bases de données centralisés et répartis." Paris 6, 1986. http://www.theses.fr/1986PA066090.
Mokadem, Riad. "Signatures algébriques dans la gestion de structures de données distribuées et scalables." Paris 9, 2006. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2006PA090014.
Recent years have seen the emergence of new architectures involving multiple computers, and with them new concepts: the multicomputer, the network of workstations and, more recently, peer-to-peer and grid computing. This thesis covers the design, implementation and performance measurement of a prototype SDDS manager called SDDS-2005. It manages key-ordered files in the distributed RAM of Windows machines forming a grid or P2P network. Our scheme can back up the RAM of each storage node onto its local disk, writing only the data that has changed since the last backup. We also address record updates and non-key searches (scans). Their common denominator is the application of a new signature scheme that we call algebraic signatures, which lets us find only the areas of a bucket that changed since the last backup. Our signature-based scheme for updating records at the SDDS client should prove advantageous in client-server database systems in general; it holds the promise of interesting possibilities for transactional concurrency control, beyond the mere avoidance of lost updates. Partly pre-computed algebraic signatures of a string encode each symbol by its cumulative signature. They protect the SDDS data against incidental viewing by an unauthorized server administrator. The method appears attractive: it implies no storage overhead and is completely transparent to the servers, since it occurs at the client. Our scheme further provides fast string search (matching) directly on the encoded data at the SDDS servers, as an alternative to the known Karp-Rabin type schemes. Scans can explore the storage nodes in parallel, matching records by their entire non-key content or by substring, prefix, longest common prefix or longest common substring.
The search complexity is almost O(1) for prefix search. The signatures can also be used to detect and localize silent corruption. These features should be of interest to P2P and grid computing. We then propose a novel string search algorithm called n-gram search, which appears to be among the fastest known; it costs only a small fraction of an exact record match, especially for longer search strings. The experiments prove the high efficiency of our implementation. Our backup scheme is substantially more efficient with the algebraic signatures; the signature calculus is itself substantially faster, the gain being about 30%. Experiments also show that our cumulative pre-computing notably accelerates string searches compared to the partial one, at the expense of a higher encoding/decoding overhead. These are new alternatives to the known Karp-Rabin type schemes, and likely to be usually faster. The speed of string matching opens interesting perspectives for the popular join, group-by, rollup and cube database operations. Our work has been the subject of five publications in international conferences [LMS03, LMS05a, LMS05b, ML06, l&al06]; for convenience, we have included the latest publications. The package, termed SDDS-2005, is available for non-commercial use at http://ceria.dauphine.fr/. It builds on earlier versions of the prototype, a cumulative effort of several people, and on the n-gram algorithm implementation. We have also presented the SDDS-2005 prototype at the Microsoft Research Academic Days 2006.
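The change-detection idea in the abstract above can be sketched with per-block polynomial signatures in the Karp-Rabin style. This is a minimal illustrative stand-in for the thesis's algebraic signatures; the block size, base and modulus below are assumptions, not SDDS-2005's parameters:

```python
# Sketch: find which fixed-size areas of a bucket changed since the last
# backup by comparing per-block polynomial signatures (Karp-Rabin style).
# PRIME, BASE and the block size are illustrative choices.

PRIME = 2**61 - 1   # large Mersenne prime modulus
BASE = 257          # polynomial base, larger than the byte alphabet

def block_signature(block: bytes) -> int:
    """Polynomial signature of one block: sum of b[i] * BASE**i mod PRIME."""
    sig = 0
    for byte in reversed(block):
        sig = (sig * BASE + byte) % PRIME
    return sig

def bucket_signatures(bucket: bytes, block_size: int = 4) -> list[int]:
    """Signature of every fixed-size area (block) of the bucket."""
    return [block_signature(bucket[i:i + block_size])
            for i in range(0, len(bucket), block_size)]

def changed_areas(old: bytes, new: bytes, block_size: int = 4) -> list[int]:
    """Indices of blocks whose signatures differ: only these need backup."""
    old_sigs = bucket_signatures(old, block_size)
    new_sigs = bucket_signatures(new, block_size)
    return [i for i, (a, b) in enumerate(zip(old_sigs, new_sigs)) if a != b]

old = b"ABCDEFGHIJKLMNOP"       # four blocks of four bytes
new = b"ABCDEFXHIJKLMNOP"       # one byte changed in block 1
print(changed_areas(old, new))  # -> [1]
```

Since BASE exceeds the byte alphabet, distinct 4-byte blocks here always get distinct signatures, so the comparison is exact rather than probabilistic for this toy configuration.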
Hatimi, Mostafa. "Gestion des données dupliquées dans un environnement sujet aux partitionnements du réseau de communication." Montpellier 2, 1990. http://www.theses.fr/1990MON20133.
Meynard, Michel. "Contrôle de la cohérence des bases de données réparties et dupliquées, sujettes aux partitionnements." Montpellier 2, 1990. http://www.theses.fr/1990MON20022.
Steff, Yann. "SMA et gestion coopérative de réseaux et systèmes : un cadre méthodologique pour une macro-organisation autonome." Toulouse 3, 2002. http://www.theses.fr/2002TOU30043.
Sahri, Soror. "Conception et implantation d'un système de bases de données distribuée & scalable : SD-SQL Server." Paris 9, 2006. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2006PA090013.
Our thesis elaborates on the design of a scalable distributed database system (SD-DBS). A novel feature of an SD-DBS is the concept of a scalable distributed relational table, a scalable table for short. Such a table accommodates dynamic splits of its segments at SD-DBS storage nodes. A split occurs when an insert makes a segment overflow, as in a B-tree file. Current DBMSs provide only static partitioning, requiring a cumbersome global reorganization from time to time. The transparency of the distribution of a scalable table is in this light an important step beyond the current technology. Our thesis explores the design issues of an SD-DBS by constructing a prototype termed SD-SQL Server which, as its name indicates, uses the services of SQL Server. SD-SQL Server repartitions a table when an insert overflows existing segments. With the comfort of a single-node SQL Server user, the SD-SQL Server user gets larger tables or faster response time through dynamic parallelism. We present the architecture of our system, its implementation and the performance analysis.
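The overflow-driven split behaviour of a scalable table can be sketched as follows. The segment capacity, key routing and median-split policy are illustrative assumptions for the general technique, not SD-SQL Server's actual algorithm:

```python
# Sketch: a "scalable table" as an ordered list of segments that split on
# overflow, B-tree style. CAPACITY and the median split are illustrative.

CAPACITY = 4  # maximum keys per segment before a split

class Segment:
    def __init__(self, keys=None):
        self.keys = sorted(keys or [])

def insert(segments: list, key: int) -> None:
    """Insert into the segment covering the key; split it if it overflows."""
    # Route to the last segment whose smallest key is <= key (or the first).
    target = segments[0]
    for seg in segments:
        if seg.keys and seg.keys[0] <= key:
            target = seg
    target.keys = sorted(target.keys + [key])
    if len(target.keys) > CAPACITY:           # overflow -> dynamic split
        mid = len(target.keys) // 2
        new_seg = Segment(target.keys[mid:])  # upper half goes to a new node
        target.keys = target.keys[:mid]
        segments.insert(segments.index(target) + 1, new_seg)

table = [Segment()]
for k in [10, 20, 30, 40, 50]:
    insert(table, k)
print([s.keys for s in table])  # -> [[10, 20], [30, 40, 50]]
```

The point of the sketch is that splits happen incrementally at insert time, so no periodic global reorganization of the table is ever needed.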
Faye, David Célestin. "Médiation de données sémantique dans SenPeer, un système pair-à-pair de gestion de données." Phd thesis, Université de Nantes, 2007. http://tel.archives-ouvertes.fr/tel-00481311.
Fauré, Fabienne. "Gestion de configuration et migration dans les systèmes coopératifs : une architecture répartie orientée services." Toulouse 3, 1994. http://www.theses.fr/1994TOU30253.
Bonnel, Nicolas Achille Jacques. "Adapnet : stratégies adaptatives pour la gestion de données distribuées sur un réseau pair-a pair." Lorient, 2008. http://www.theses.fr/2008LORIS134.
In the last few years, the amount of digital information produced has increased exponentially. This raises problems regarding the storage, access and availability of this data. Software and hardware architectures based on the peer-to-peer (P2P) paradigm seem to satisfy the needs of data storage but cannot handle both data accessibility and availability efficiently. We present in this thesis various contributions on P2P architectures for managing large volumes of information. We propose various strategies that operate on dedicated virtual topologies that can be maintained at low cost. More precisely, these topologies scale well because the cost of node arrival and node departure is on average constant, whatever the size of the network. We analyze the main paradigms of information sharing on a P2P network, considering successively the problem of access to typed (semi-structured) information and the general case that completely separates the nature of the queries from the location of the data. We propose a routing strategy using the structure and content of semi-structured information. We also propose strategies that efficiently explore the network when no assumption can be made about the nature of the data or the queries. In order to manage a quality of service (expressed in terms of speed and reliability), we also investigate the problem of information availability; more precisely, we replicate data stored in the network. We propose a novel approach exploiting an estimation of the local density of data.
Acosta, Francisco. "Les arbres balances : spécification, performances et contrôle de concurrence." Montpellier 2, 1991. http://www.theses.fr/1991MON20201.
Loukil, Adlen. "Méthodologies, Modèles et Architectures de Référence pour la Gestion et l'Echange de Données Médicales Multimédia : Application aux Projets Européen OEDIPE et BRITER." Lyon, INSA, 1997. http://www.theses.fr/1997ISAL0016.
Interchange and integration of medical data is a fundamental task in modern medicine. However, a significant obstacle to the development of efficient interoperable information systems is the lack of software tools that provide transparent access to heterogeneous distributed databases. Currently, most solutions are stand-alone ones fitting only one configuration. To solve these problems of integration and interoperability, we propose in this thesis an original approach based on the definition of communication protocols and the design of generic interfaces between the specific implementations of the protocols and the target databases associated with hospital information systems. The proposed solution rests on a data dictionary modelling the communication protocols and the database structures, and on a generic module for data storage and extraction. The design involves issues related to reverse-engineering procedures and to the automatic generation of SQL statements. To illustrate this approach, we present the demonstration prototype we developed in the framework of the OEDIPE AIM project to experiment with and test the open interchange of ECGs and associated clinical data. The second part is devoted to the modelling and integration of distributed electronic patient records using communication protocols. We first present a multidimensional approach to the structuring of patient records and propose a generic object-oriented information model which integrates biosignals, images and accompanying clinical information. We then describe a prototype system developed in the framework of the BRITER AIM project for accessing and handling heterogeneous patient data stored in distributed electronic patient records, in order to support rehabilitation healthcare professionals in making decisions.
We thus demonstrate that the use of standard communication protocols allows and facilitates the development of portable and interoperable medical applications for the benefit of the healthcare field.
Lobry, Olivier. "Support Mémoire Adaptable Pour Serveurs de Données Répartis." Phd thesis, Université Joseph Fourier (Grenoble), 2000. http://tel.archives-ouvertes.fr/tel-00346893.
Unfortunately, it is not possible to offer a universal data server able to meet the requirements of every information system. Information systems differ significantly in the type of information they handle, the nature of the processing performed, the processing properties they guarantee, the characteristics of the underlying hardware, and so on. As a result, each information system embeds its own data server or servers, implementing fixed management policies.
The drawbacks of such an approach are far from negligible. First, re-implementing elementary resource-management mechanisms increases design cost. Second, behavioural rigidity considerably reduces reactivity to evolution, in quality as well as in quantity, of the information, processing and hardware resources. Finally, the opacity of such systems makes their coexistence on the same platform difficult.
This thesis shows that there is no ideal memory-management policy. Rather than trying to offer an ideal server, it attempts to define an infrastructure for designing adaptable and evolvable data servers. It addresses more particularly the problem of physical memory management, in the context of clusters of machines. It proposes ADAMS, an adaptable memory support based on a hierarchical management model and an event-based communication model. This support eases the integration of different types of policies while clearly separating their respective roles, without making assumptions about their interdependencies.
An integration of ADAMS into the persistent, adaptable distributed virtual memory of the Arias system is then presented. ADAMS extends the characteristics of that system to take into account the particular management needs of data servers, while reducing the granularity of adaptability. Through an example, we illustrate how the resulting support allows one to implement a data server whose management policies can be adapted dynamically.
El, Merhebi Souad. "La gestion d'effet : une méthode de filtrage pour les environnements virtuels répartis." Toulouse 3, 2008. http://thesesups.ups-tlse.fr/243/1/El_Merhebi_Souad.pdf.
Distributed virtual environments (DVEs) are intended to provide an immersive experience to their users within a shared virtual environment. For this purpose, DVEs try to supply participants with coherent views of the shared world. This requires a heavy message exchange between participants, especially with the increasing popularity of massively multiplayer DVEs. This heavy message exchange consumes a lot of processing power and bandwidth, slowing down the system and limiting interactivity. Coherence, interactivity and scalability are basic requirements of DVEs, but they conflict: coherence requires the largest message exchange possible, while interactivity and scalability demand that this exchange be reduced to a minimum. For this reason, the management of message exchange is essential for distributed virtual environments. To manage message exchange intelligently, DVE systems use various filtering techniques. Among them, interest-management techniques filter messages according to users' interests in the world. In this document, we present our interest-management technique, effect management. This technique expresses the interests and manifestations of participants in various media through conscience (awareness) zones and effect zones. When the conscience zone of a participant collides with the effect zone of another participant in a given medium, the first becomes conscious of the second.
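The zone-collision test described above can be sketched with circular zones. The circle geometry and radii are illustrative assumptions; the thesis does not prescribe this shape:

```python
# Sketch: interest filtering with conscience (awareness) and effect zones.
# Zones are modelled as circles: participant P becomes conscious of Q when
# P's conscience zone intersects Q's effect zone. The circular geometry is
# an illustrative assumption.
import math
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    x: float
    y: float
    conscience_radius: float  # how far the participant perceives
    effect_radius: float      # how far the participant's actions carry

def zones_collide(p: Participant, q: Participant) -> bool:
    """True if p's conscience zone intersects q's effect zone."""
    dist = math.hypot(p.x - q.x, p.y - q.y)
    return dist <= p.conscience_radius + q.effect_radius

def should_send(sender: Participant, receiver: Participant) -> bool:
    """Filter: deliver the sender's messages only to conscious receivers."""
    return zones_collide(receiver, sender)

a = Participant("A", 0.0, 0.0, conscience_radius=5.0, effect_radius=1.0)
b = Participant("B", 4.0, 0.0, conscience_radius=1.0, effect_radius=2.0)
c = Participant("C", 20.0, 0.0, conscience_radius=1.0, effect_radius=1.0)
print(should_send(b, a))  # True: A perceives B (distance 4 <= 5 + 2)
print(should_send(b, c))  # False: C is too far away to be conscious of B
```

Filtering at send time is what saves the bandwidth the abstract discusses: messages to non-conscious participants are simply never transmitted.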
Duque, Hector. "Conception et mise en oeuvre d'un environnement logiciel de manipulation et d'accès à des données réparties : application aux grilles d'images médicales : le système DSEM / DM2." Lyon, INSA, 2005. http://theses.insa-lyon.fr/publication/2005ISAL0050/these.pdf.
Our vision in this thesis is that of a bio-medical grid as a partner of hospital information systems, sharing computing resources as well as serving as a platform for sharing information. We therefore aim at (i) providing transparent access to huge distributed medical data sets, (ii) querying these data by their content, and (iii) sharing computing resources within the grid. Assuming the existence of a grid infrastructure, we suggest a multi-layered architecture (Distributed Systems Engines, DSE). This architecture allows us to design high-performance distributed systems which are highly extensible, scalable and open. It ensures the connection between the grid, data storage systems and medical platforms. The conceptual design of the architecture assumes a horizontal definition for each of the layers and is based on a multi-process structure. This structure enables the exchange of messages between processes using the message-passing paradigm. These processes and messages allow one to define entities of a higher level of semantic significance, which we call drivers and which, instead of single messages, deal with different kinds of transactions: queries, tasks and requests. We thus define different kinds of drivers for each kind of transaction and, at a higher level, define services as aggregations of drivers. The architectural framework of drivers and services eases the design of the components of a distributed system (DS), which we call engines, and also eases the extensibility and scalability of the DS.
Monnet, Sébastien. "Gestion des données dans les grilles de calcul : support pour la tolérance aux fautes et la cohérence des données." Phd thesis, Université Rennes 1, 2006. http://tel.archives-ouvertes.fr/tel-00411447.
Chtioui, Hajer. "Gestion de la cohérence des données dans les systèmes multiprocesseurs sur puce." Valenciennes, 2011. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/8926ebf0-3437-465d-8e4f-cfd4328f6db6.
The work presented in this thesis aims to provide an efficient hardware solution for managing the cache coherence of shared data in shared-memory multiprocessor systems-on-chip (MPSoC) dedicated to intensive signal-processing applications. Several solutions have been proposed in the literature to solve this problem. However, most of them are efficient only for high-performance multiprocessor systems, which rarely take hardware resource and energy consumption limitations into account; in MPSoC architectures these constraints are very important. In addition, these solutions do not take into account the access patterns of the different processors to shared data. In this thesis, we propose a new approach to the cache coherence problem. It consists of a new hybrid (invalidation/update) adaptive cache coherence protocol. A hardware architecture that facilitates its implementation and optimizes its performance is also proposed. The experimental results show that the proposed protocol, in conjunction with this architecture, provides an interesting level of performance and energy consumption.
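A hybrid invalidation/update policy of the kind described can be sketched as follows. The per-sharer read counter and the switching threshold are illustrative assumptions, not the protocol defined in the thesis:

```python
# Sketch: a hybrid cache-coherence policy that chooses, per cache line and
# per sharer, between "update" (push the new value) and "invalidate" (drop
# the copy), based on how actively the sharer has been reading the line.
# The counter and UPDATE_THRESHOLD are illustrative assumptions.

UPDATE_THRESHOLD = 2  # sharers that re-read often are worth updating

class Line:
    def __init__(self, value):
        self.value = value
        self.reads_since_write = {}  # cache id -> reads since last write

def read(line: Line, cache_id: int):
    line.reads_since_write[cache_id] = line.reads_since_write.get(cache_id, 0) + 1
    return line.value

def write(line: Line, value) -> dict:
    """Apply a write; return the coherence action taken for each sharer."""
    actions = {}
    for cache_id, reads in line.reads_since_write.items():
        # Active readers get the new value pushed; idle copies are dropped.
        actions[cache_id] = "update" if reads >= UPDATE_THRESHOLD else "invalidate"
    line.value = value
    line.reads_since_write = {c: 0 for c, a in actions.items() if a == "update"}
    return actions

line = Line(0)
read(line, 1); read(line, 1)   # cache 1 reads the line twice
read(line, 2)                  # cache 2 reads it once
print(write(line, 42))         # -> {1: 'update', 2: 'invalidate'}
```

Updating only the active readers avoids both the useless update traffic of a pure update protocol and the re-fetch misses of a pure invalidation protocol, which is the trade-off the abstract targets.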
Villemur, Thierry. "Conception de services et de protocoles pour la gestion de groupes coopératifs." Phd thesis, Université Paul Sabatier - Toulouse III, 1995. http://tel.archives-ouvertes.fr/tel-00146528.
Fan, Linghua. "Un système réparti de gestion de données (DIMS) pour améliorer le pilotage du processus d'innovation." Valenciennes, 2003. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/9859f3a6-4492-4996-8fb0-8ebfe09827e0.
Data integration has gained new importance since the widespread success of the WWW. The goal of data integration is to provide a uniform interface to a multitude of distributed, autonomous and heterogeneous information sources available online. We design novel techniques and tools to simplify the exploitation of heterogeneous web data sources. The Distributed Information Management System (DIMS) is an XML-based data integration system for accessing these web sources. It uses efficient tools to wrap heterogeneous web sources into standard XML data and generates the corresponding wrappers. Our approach is based on mediator-wrapper architectures, in which mediators accept queries from users, process them with respect to the wrappers and return the answers. We used UML to design the software and provide a platform-independent prototype implemented in Java. The DIMS prototype and a case study are used to validate our approach.
Antoniu, Gabriel. "Contribution à la conception de services de partage de données pour les grilles de calcul." Habilitation à diriger des recherches, École normale supérieure de Cachan - ENS Cachan, 2009. http://tel.archives-ouvertes.fr/tel-00437324.
Brun-Cottan, Georges. "Cohérence de données répliquées partagées par un groupe de processus coopérant à distance." Phd thesis, Université Pierre et Marie Curie - Paris VI, 1998. http://tel.archives-ouvertes.fr/tel-00160422.
Full textCe problème est important par deux aspects : par son application dans tous les domaines impliquant la coopération d'individus et par son caractère fondamental dans la structuration et la compréhension des mécanismes de coopération.
Notre étude critique, des critères de cohérence associés aux cohérences dites «faibles», embrasse quatre domaines : les systèmes transactionnels, les mémoires partagées réparties, les objets concurrents et les plate-formes de communication. Notre thèse contribue sur trois points :
Notre modèle d'exécution est libre de tout a priori concernant la causalité des opérations. Ce modèle est basé sur des histoires répliquées.
Notre modèle de partage, la réplication coopérante, dérivée de la réplication active, n'impose pas un ordre commun unique sur l'exécution des opérations. Les réplicats sont autonomes et ne coopèrent que lorsque leur vue de l'histoire globale ne suffit plus à garantir la correction de l'application.
Nos principes systèmes permettent de construire un nouveau type de composant, le gestionnaire de cohérence . Ce composant :
prend en charge la coopération des réplicats. Il implante la partie complexe de la gestion de cohérence : le contrôle de la distribution, de la réplication et de la concurrence ;
maintient, sur l'ordre réparti des opérations, des propriétés déterministes. Ces propriétés définissent un contrat de cohérence ; elles peuvent être utilisées comme critère de correction ;
est choisi à l'exécution par l'application ;
est réutilisable.
Nous avons réalisé Core, une plate-forme de développement complète, partiellement documentée et accessible sur FTP, développée au-dessus d'Unix. Core offre, outre les services usuels nécessaires à la mise en oeuvre de groupes de processus répartis, une bibliothèque extensible de gestionnaires de cohérence. Core offre aussi de nombreuses classes, utilisées tant pour la réalisation de nouveaux gestionnaires que pour l'expression de nouveaux types et modèles d'exécution, par les concepteurs d'applications. Nous avons réalisé, avec Core, deux applications : une application d'édition coopérative basée sur Emacs et une simulation de ressource partagée.
Pons, Jean-François. "Contrôle de la cohérence des accès aux objets dans les systèmes répartis : application des règles d'écriture recouverte." Montpellier 2, 1986. http://www.theses.fr/1986MON20071.
Park, Young-Min. "Réseau virtuel : développement d'un système semi-réparti de gestion de bases de données non limitées sur le Web." Paris 8, 2004. http://www.theses.fr/2004PA082341.
Our project concerns the new notion of a "virtual network". This notion can be contrasted with that of a "classical" or "traditional" network. It is useful above all for large networks, and it constitutes one of the best candidates for replacing classical networks. From an economic point of view, a classical network often requires a substantial investment: the larger the network, the greater the expense for the computers, workstations, servers and the staff who administer and monitor its operation. The "virtual network" operates over the already existing Internet. In place of numerous servers and software packages, a "virtual network" is built from a single application server and several database servers which together perform the tasks a classical network takes on. The first advantage of the "virtual network" is that it is much less expensive than a classical network: it uses the Internet environment to link the machines, and no other hardware is required. The second advantage is that it is easier to preserve the compatibility and coherence of the data, since the "virtual network" uses the same applications for all connected machines and data. The goal of our project is to design the most efficient structure linking the servers of the "virtual network", to develop flexible applications capable of handling databases of any size, and to devise a strategy for partitioning the database servers that achieves the desired storage capacity.
Brahem, Mariem. "Optimisation de requêtes spatiales et serveur de données distribué - Application à la gestion de masses de données en astronomie." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLV009/document.
The big scientific data generated by modern observation telescopes raises recurring performance problems, in spite of the advances in distributed data management systems. The main reasons are the complexity of the systems and the difficulty of adapting the access methods to the data. This thesis proposes new physical and logical optimizations of the execution plans of astronomical queries using transformation rules. These methods are integrated in ASTROIDE, a distributed system for large-scale astronomical data processing. ASTROIDE achieves scalability and efficiency by combining the benefits of distributed processing using Spark with the relevance of an astronomical query optimizer. It supports data access using the commonly used query language ADQL. It implements astronomical query algorithms (cone search, kNN search, cross-match and kNN join) tailored to the proposed physical data organization. Indeed, ASTROIDE offers a data partitioning technique that allows efficient processing of these queries by ensuring load balancing and eliminating irrelevant partitions. This partitioning uses an indexing technique adapted to astronomical data, in order to reduce query processing time.
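The combination of partition pruning and cone search mentioned above can be sketched as follows. Partitioning the sky into declination bands is a simplifying assumption for illustration; ASTROIDE's actual partitioning and indexing are more elaborate:

```python
# Sketch: cone search with partition pruning. The sky is partitioned into
# declination bands; a cone query first discards bands that cannot intersect
# the cone, then tests only the surviving stars. The band partitioning is an
# illustrative simplification.
import math

def angular_sep(ra1, dec1, ra2, dec2):
    """Angular separation in degrees (spherical law of cosines)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(d1) * math.sin(d2)
               + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

def cone_search(partitions, ra0, dec0, radius):
    """partitions: {(dec_min, dec_max): [(name, ra, dec), ...]}"""
    hits = []
    for (dec_min, dec_max), stars in partitions.items():
        # Pruning: skip partitions whose declination band misses the cone.
        if dec_max < dec0 - radius or dec_min > dec0 + radius:
            continue
        hits += [name for name, ra, dec in stars
                 if angular_sep(ra0, dec0, ra, dec) <= radius]
    return hits

partitions = {
    (0, 30):  [("star_a", 10.0, 10.0), ("star_b", 100.0, 20.0)],
    (30, 60): [("star_c", 10.0, 45.0)],
}
print(cone_search(partitions, ra0=10.0, dec0=12.0, radius=3.0))  # ['star_a']
```

The band (30, 60) is never scanned for this query, which is the "eliminating irrelevant partitions" effect the abstract describes.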
Matta, Natalie. "Vers une gestion décentralisée des données des réseaux de capteurs dans le contexte des smart grids." Thesis, Troyes, 2014. http://www.theses.fr/2014TROY0010/document.
This thesis focuses on the decentralized management of data collected by wireless sensor networks deployed in a smart grid, i.e. the evolved new-generation electricity network. It proposes a decentralized architecture based on multi-agent systems for both data and energy management in the smart grid. In particular, our work deals with the data management of sensor networks deployed in the distribution subsystem of a smart grid. It aims at answering two key challenges: (1) the detection and identification of failures and disturbances requiring swift reporting and appropriate reactions; (2) the efficient management of the growing volume of data caused by the proliferation of sensors and other sensing entities such as smart meters. The management of this data can call upon several methods, including the aggregation of data packets, on which we focus in this thesis. To this end, we propose to aggregate (PriBaCC) and/or correlate (CoDA) the contents of these data packets in a decentralized manner. Data processing is thus done faster, leading to rapid and efficient decision-making concerning energy management. The validation of our contributions by means of simulation has shown that they meet the identified challenges, and has highlighted their improvements over existing approaches, particularly in terms of reducing data volume as well as the transmission delay of high-priority data.
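The content-based packet aggregation idea can be sketched as follows. Grouping by measurement type and averaging are illustrative assumptions for the general technique; the actual PriBaCC/CoDA rules are defined in the thesis:

```python
# Sketch: decentralized aggregation of sensor data packets at an
# intermediate agent. Packets with the same measurement type are merged into
# one packet carrying the mean value, shrinking the volume forwarded
# upstream. Grouping by type and averaging are illustrative assumptions.
from collections import defaultdict

def aggregate(packets):
    """packets: [(sensor_id, measurement_type, value)] -> merged packets."""
    groups = defaultdict(list)
    for sensor_id, mtype, value in packets:
        groups[mtype].append(value)
    # One packet per measurement type: (type, mean value, packet count).
    return [(mtype, sum(vals) / len(vals), len(vals))
            for mtype, vals in groups.items()]

packets = [
    ("s1", "voltage", 230.0),
    ("s2", "voltage", 232.0),
    ("s3", "current", 10.0),
]
print(aggregate(packets))  # -> [('voltage', 231.0, 2), ('current', 10.0, 1)]
```

Here three packets become two, and the reduction grows with the number of sensors reporting the same quantity, which is the data-volume effect the abstract measures.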
Bavueza, Munsana Dia Lemfu. "Ravir : un système de coopération des bases de données hétérogènes." Montpellier 2, 1987. http://www.theses.fr/1987MON20265.
Ghurbhurn, Rahee. "Intégration de données à partir de connaissances : une approche multi-agent pour la gestion des changements." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2008. http://tel.archives-ouvertes.fr/tel-00785415.
Grazziottin Ribeiro, Helena. "Un service de règles actives pour fédérations de bases de données." Université Joseph Fourier (Grenoble), 2000. http://www.theses.fr/2000GRE10084.
El, Zoghby Nicole. "Fusion distribuée de données échangées dans un réseau de véhicules." Phd thesis, Université de Technologie de Compiègne, 2014. http://tel.archives-ouvertes.fr/tel-01070896.
Gruszka, Samuel. "Étude et spécification d'un partitionnement dynamique Data-Flow en environnement numérique." Toulouse, INPT, 1995. http://www.theses.fr/1995INPT074H.
Séraphin, John. "Réalisation d'un intranet : cohérence d'un ensemble réparti et communicant, autour d'une architecture réflexive." Paris 5, 1998. http://www.theses.fr/1998PA05S007.
Kostadinov, Dimitre Davidov. "Personnalisation de l'information : une approche de gestion de profils et de reformulation de requêtes." Versailles-St Quentin en Yvelines, 2007. http://www.theses.fr/2007VERS0027.
This thesis contains two parts. The first is a study of the state of the art on data personalization and a proposal for a user profile model. The second focuses on a specific problem: query reformulation using profile knowledge. The relevance of information is defined by a set of criteria and preferences describing the user, and the data describing users is often gathered in the form of profiles. In this thesis we propose a generic and extensible profile model which enables the classification of the profile's contents. Personalization may occur in each step of the query life cycle. The second contribution of this thesis is the study of two query reformulation approaches, based on algorithms for query enrichment and query rewriting, and the proposal of an advanced query reformulation approach.
Cointe, Christophe. "Aide à la gestion de conflits en conception concourante dans un système distribué." Montpellier 2, 1998. http://www.theses.fr/1998MON20082.
Within Artificial Intelligence research, task distribution must contribute to the effectiveness of task realization. Among all the factors which can impede the benefits expected from distribution, the cost of conflict management plays a major role. In order to facilitate the use of methods for conflict detection and management, we consider it beneficial to set our work in the context of open distributed systems. In our approach, the designer is allowed to use all the available methods for conflict management and is guided in his or her choices, on the one hand, by a multi-perspective view of the data and, on the other hand, by the realization context of the task. Furthermore, we use the viewpoint notion to allow for intelligent indexing of data: a designer describes, by means of a viewpoint, the way in which the other designers must interpret the data that he or she proposes. Following our theoretical study, we propose a multi-agent system, CREoPS², based on techniques and tools which are Internet-compatible.
Liu, Ji. "Gestion multisite de workflows scientifiques dans le cloud." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT260/document.
Full textLarge-scale in silico scientific experiments generally contain multiple computational activities to process big data. Scientific Workflows (SWfs) enable scientists to model the data processing activities. Since SWfs deal with large amounts of data, data-intensive SWfs is an important issue. In a data-intensive SWf, the activities are related by data or control dependencies and one activity may consist of multiple tasks to process different parts of experimental data. In order to automatically execute data-intensive SWfs, Scientific Work- flow Management Systems (SWfMSs) can be used to exploit High Performance Computing (HPC) environments provided by a cluster, grid or cloud. In addition, SWfMSs generate provenance data for tracing the execution of SWfs.Since a cloud offers stable services, diverse resources, virtually infinite computing and storage capacity, it becomes an interesting infrastructure for SWf execution. Clouds basically provide three types of services, i.e. Infrastructure-as-a-Service (IaaS), Platform- as-a-Service (PaaS) and Software-as-a-Service (SaaS). SWfMSs can be deployed in the cloud using Virtual Machines (VMs) to execute data-intensive SWfs. With a pay-as-you- go method, the users of clouds do not need to buy physical machines and the maintenance of the machines are ensured by the cloud providers. Nowadays, a cloud is typically made of several sites (or data centers), each with its own resources and data. Since a data- intensive SWf may process distributed data at different sites, the SWf execution should be adapted to multisite clouds while using distributed computing or storage resources.In this thesis, we study the methods to execute data-intensive SWfs in a multisite cloud environment. Some SWfMSs already exist while most of them are designed for computer clusters, grid or single cloud site. In addition, the existing approaches are limited to static computing resources or single site execution. 
We propose SWf partitioning algorithms and a task scheduling algorithm for SWf execution in a multisite cloud. Our proposed algorithms can significantly reduce the overall SWf execution time in a multisite cloud. In particular, we propose a general solution based on multi-objective scheduling in order to execute SWfs in a multisite cloud. The general solution is composed of a cost model, a VM provisioning algorithm, and an activity scheduling algorithm. The VM provisioning algorithm is based on our proposed cost model to generate VM provisioning plans for executing SWfs at a single cloud site. The activity scheduling algorithm enables SWf execution with minimum cost, composed of execution time and monetary cost, in a multisite cloud. We carried out extensive experiments, and the results show that our algorithms can considerably reduce the overall cost of SWf execution in a multisite cloud.
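The multi-objective cost combining execution time and monetary cost described in this abstract can be sketched as a weighted sum of normalized objectives. The weights, site names and figures below are illustrative assumptions, not values or code from the thesis:

```python
# Hedged sketch of a multi-objective cost model for placing a workflow
# activity on a cloud site. All numbers and names are illustrative.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    exec_time: float   # estimated execution time (hours)
    money: float       # estimated monetary cost (dollars)

def combined_cost(site, w_time, w_money, t_max, m_max):
    """Weighted sum of normalized time and monetary cost (lower is better)."""
    return w_time * site.exec_time / t_max + w_money * site.money / m_max

def schedule(sites, w_time=0.5, w_money=0.5):
    """Pick the site minimizing the combined cost."""
    t_max = max(s.exec_time for s in sites)
    m_max = max(s.money for s in sites)
    return min(sites, key=lambda s: combined_cost(s, w_time, w_money, t_max, m_max))

sites = [Site("site-A", exec_time=2.0, money=8.0),
         Site("site-B", exec_time=5.0, money=2.0)]
print(schedule(sites, w_time=0.8, w_money=0.2).name)  # time-weighted: site-A
```

Shifting the weights toward monetary cost (e.g. `w_money=0.9`) makes the same function pick the cheaper site instead, which is the essence of multi-objective scheduling.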
Salmon, Loïc. "Une approche holistique combinant flux temps-réel et données archivées pour la gestion et le traitement d'objets mobiles : application au trafic maritime." Thesis, Brest, 2019. http://www.theses.fr/2019BRES0006/document.
Full text
Over the past few years, the rapid proliferation of sensors and devices recording positioning information has regularly produced very large volumes of heterogeneous data. This raises many research challenges, as the storage, distribution, management, processing and analysis of the large volumes of mobility data generated still need to be addressed. Current work on the manipulation of mobility data has been directed towards either mining archived historical data or continuously processing incoming data streams. The aim of this research is to design a holistic system that provides combined processing of real-time data streams and archived position data. The proposed solution is real-time oriented, with historical data and the information extracted from it used to enhance the quality of query answers. An event paradigm is discussed to facilitate the hybrid approach and to identify typical moving-object behaviors. Finally, a query concerning the signal coverage of moving objects has been studied and applied to maritime data, showing the relevance of a hybrid approach to moving-object data processing.
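The hybrid idea of answering a query from archived positions enriched by real-time updates can be illustrated with a minimal merge, where live data overrides archived data only when it is more recent. The data layout and ship identifiers are illustrative assumptions, not the thesis's design:

```python
# Hedged sketch: combine archived positions with real-time updates.
# archive / live_updates: dicts mapping object id -> (timestamp, position).
def latest_positions(archive, live_updates):
    """Live data overrides archived data when it is more recent."""
    merged = dict(archive)
    for obj, (ts, pos) in live_updates.items():
        if obj not in merged or ts > merged[obj][0]:
            merged[obj] = (ts, pos)
    return merged

archive = {"ship-1": (100, (48.38, -4.49)), "ship-2": (200, (48.40, -4.50))}
live = {"ship-1": (300, (48.39, -4.47))}   # only ship-1 has a fresher position
print(latest_positions(archive, live))
```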
Cron, Geneviève. "Diagnostic par reconnaissance des formes floue d'un système dynamique et réparti : Application à la gestion en temps réel du trafic téléphonique français." Compiègne, 1999. http://www.theses.fr/1999COMP1231.
Full text
Tran, Viet-Trung. "Scalable data-management systems for Big Data." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00920432.
Full text
Billet, Benjamin. "Système de gestion de flux pour l'Internet des objets intelligents." Thesis, Versailles-St Quentin en Yvelines, 2015. http://www.theses.fr/2015VERS012V/document.
Full text
The Internet of Things (IoT) is currently characterized by an ever-growing number of networked Things, i.e., devices which have their own identity together with advanced computation and networking capabilities: smartphones, smart watches, smart home appliances, etc. In addition, these Things are being equipped with more and more sensors and actuators that enable them to sense and act on their environment, enabling the physical world to be linked with the virtual world. Specifically, the IoT raises many challenges related to its very large scale and high dynamicity, as well as the great heterogeneity of the data and systems involved (e.g., powerful versus resource-constrained devices, mobile versus fixed devices, continuously-powered versus battery-powered devices, etc.). These challenges require new systems and techniques for developing applications that are able to (i) collect data from the numerous data sources of the IoT and (ii) interact both with the environment using the actuators, and with the users using dedicated GUIs. To this end, we defend the following thesis: given the huge volume of data continuously being produced by sensors (measurements and events), we must consider (i) data streams as the reference data model for the IoT and (ii) continuous processing as the reference computation model for processing these data streams. Moreover, knowing that privacy preservation and energy consumption are increasingly critical concerns, we claim that all the Things should be autonomous and work together in restricted areas as close as possible to the users rather than systematically shifting the computation logic into powerful servers or into the cloud. For this purpose, our main contribution can be summarized as designing and developing a distributed data stream management system for the IoT. In this context, we revisit two fundamental aspects of software engineering and distributed systems: service-oriented architecture and task deployment.
We address the problems of (i) accessing data streams through services and (ii) deploying continuous processing tasks automatically, according to the characteristics of both tasks and devices. This research work led to the development of a middleware layer called Dioptase, designed to run on the Things and abstract them as generic devices that can be dynamically assigned communication, storage and computation tasks according to their available resources. In order to validate the feasibility and the relevance of our work, we implemented a prototype of Dioptase and evaluated its performance. In addition, we show that Dioptase is a realistic solution which can work in cooperation with legacy sensor and actuator networks currently deployed in the environment.
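The continuous-processing model this abstract advocates, where stream operators are applied lazily, item by item, can be illustrated with a few generator-based operators. The operator names are illustrative, not the Dioptase API:

```python
# Minimal sketch of continuous processing over a sensor data stream,
# in the spirit of treating streams as the reference data model.
def sensor_stream(readings):
    """Simulated source; a finite list stands in for an unbounded sensor feed."""
    for r in readings:
        yield r

def filter_op(stream, predicate):
    """Continuous filter: lets matching items flow through."""
    for item in stream:
        if predicate(item):
            yield item

def map_op(stream, fn):
    """Continuous transformation applied to each item."""
    for item in stream:
        yield fn(item)

# Pipeline: keep temperatures above 20 °C and convert them to Fahrenheit.
pipeline = map_op(
    filter_op(sensor_stream([18.5, 21.0, 25.4, 19.9]), lambda t: t > 20),
    lambda t: t * 9 / 5 + 32,
)
print(list(pipeline))
```

Because generators are lazy, each reading traverses the whole pipeline as it arrives, which mirrors how a stream operator would run continuously on a constrained device.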
Katsifodimos, Asterios. "Scalable view-based techniques for web data : algorithms and systems." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00870456.
Full text
Vargas-Solar, Genoveva. "Service d'évènements flexible pour l'intégration d'applications bases de données réparties." Université Joseph Fourier (Grenoble ; 1971-2015), 2000. http://www.theses.fr/2000GRE10259.
Full text
Sarr, Idrissa. "Routage des transactions dans les bases de données à large échelle." Paris 6, 2010. http://www.theses.fr/2010PA066330.
Full text
Vömel, Christof. "Contributions à la recherche en calcul scientifique haute performance pour les matrices creuses." Toulouse, INPT, 2003. http://www.theses.fr/2003INPT003H.
Full text
Abdouli, Majeb. "Étude des modèles étendus de transactions : adaptation aux SGBD temps réel." Le Havre, 2006. http://www.theses.fr/2006LEHA0011.
Full text
Real-time database systems (RTDBSs) are defined as systems whose objective is not only to respect the temporal constraints of transactions and data (as in real-time systems), but also to preserve the logical consistency of the database (as in classical DBSs). In a DBS, it is difficult to deal with real-time constraints in addition to the database's logical consistency. On the other hand, real-time systems are not designed to meet transactions' real-time constraints when there is a large amount of data. In the majority of previous work on RTDBSs, the systems are based on the flat transaction model and the main aim is to respect both kinds of constraints. In this model, a transaction is composed of two primitive operations: "read" and "write". If an operation fails, the whole transaction is aborted and restarted, often causing the transaction to miss its deadline. We deduce from this that this model is not appropriate for RTDBSs. Our contribution in this work has consisted of developing protocols to manage intra-transaction conflicts in both centralized and distributed environments. We have also developed a concurrency control protocol based on transaction urgency. Finally, we have proposed a hierarchical commit protocol which guarantees the uniform distributed transaction model based on imprecise computation. Each proposed protocol is evaluated and compared to the protocols proposed in the literature.
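A hierarchical commit protocol refines the classical flat two-phase commit, which can be sketched in a few lines. This is only the well-known baseline under illustrative names, not the protocol proposed in the thesis:

```python
# Minimal sketch of flat two-phase commit: the baseline that hierarchical
# commit protocols refine. Participants are modeled as vote functions.
def two_phase_commit(participants):
    """participants: callables returning True (vote commit) or False (vote abort)."""
    # Phase 1 (voting): every participant must vote commit.
    votes = [vote() for vote in participants]
    decision = all(votes)
    # Phase 2 (decision): the coordinator broadcasts the global outcome.
    return "COMMIT" if decision else "ABORT"

print(two_phase_commit([lambda: True, lambda: True]))   # all vote yes
print(two_phase_commit([lambda: True, lambda: False]))  # one abort vote suffices
```

In a real-time setting, the voting phase is exactly where deadlines bite: a slow participant can delay the global decision, which motivates hierarchical variants and imprecise computation.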
Dahan, Sylvain. "Mécanismes de recherche de services extensibles pour les environnements de grilles de calcul." Besançon, 2005. http://www.theses.fr/2005BESA2063.
Full text
The aim of Grid computing is to share computing resources. Users should be able to find the resources they need efficiently. To do so, we propose to connect the resources with an overlay network and to use a flooding search algorithm. Overlay networks are usually formed as a graph or a tree. Trees use an optimal number of messages but suffer from bottlenecks which limit the number of simultaneous searches that can be performed. Graphs use more messages but support a higher number of simultaneous searches. We propose a new topology which uses an optimal number of messages, like trees, and has no bottleneck, like graphs. If every node of a tree is a computer, some computers are leaves which receive messages and the others are intermediate nodes which forward messages. We distribute the intermediate-node role among all the servers in such a way that every server plays the same roles. This new tree structure is built recursively: every server is a leaf, and intermediate nodes are complete graphs of their children. We show that such a tree can be built and that it is possible to run tree traversals on it. We also show that the load is fairly shared between the servers. As a result, this structure outperforms both the tree and the graph in terms of search speed and load.
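The message-optimality claim for tree overlays can be made concrete: flooding a tree of n nodes traverses each of its n - 1 edges exactly once. The sketch below counts messages for an exhaustive flood over an illustrative tree (it is not the thesis's hybrid topology, where intermediate nodes are complete graphs of their children):

```python
# Hedged sketch: flooding search over a tree overlay. Each node forwards the
# query to its children, so reaching all n nodes costs exactly n - 1 messages.
def flood(tree, root, target):
    """Return (found, messages_sent) for an exhaustive flood from root.
    tree: dict mapping a node to the list of its children."""
    messages = 0
    frontier = [root]
    found = False
    while frontier:
        node = frontier.pop()
        if node == target:
            found = True
        children = tree.get(node, [])
        messages += len(children)   # one message per edge traversed
        frontier.extend(children)
    return found, messages

# A 7-node tree: flooding uses 6 messages, whether or not the target exists.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(flood(tree, "A", "G"))  # (True, 6)
```

A graph overlay with redundant edges would send strictly more messages for the same coverage, which is the trade-off the abstract weighs against the tree's bottleneck at interior nodes.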
Ravat, Franck. "Od3 : contribution méthodologique à la conception de bases de données orientées objet réparties." Toulouse 3, 1996. http://www.theses.fr/1996TOU30150.
Full textMedina, Marquez Alejandro. "L'analyse des données évolutives." Paris 9, 1985. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1985PA090022.
Full textBray, Laetitia. "Une plateforme réflexive ouverte pour la gestion d'applications concurrentes réparties à base d'acteurs." Toulouse 3, 2003. http://www.theses.fr/2003TOU30147.
Full textDuflos, Sandrine Viviane Julie. "Sosma : une architecture pour la gestion de la sécurité des applications multimédias réparties." Paris 6, 2005. http://www.theses.fr/2005PA066587.
Full textKouici, Nabil. "Gestion des déconnexions pour applications réparties à base de composants en environnements mobiles." Evry, Institut national des télécommunications, 2005. https://tel.archives-ouvertes.fr/tel-00012013.
Full text
Recent years have been marked by a rapid evolution in the computer networks and machines used in distributed environments. This evolution has opened up new opportunities for mobile computing. Mobile computing allows a mobile user to access various kinds of information at any time and in any place. However, mobile computing raises the problem of data availability in the presence of disconnections. We distinguish two kinds of disconnections: voluntary disconnections and involuntary disconnections. Traditional middleware are mainly connection-oriented programming environments in which a client must maintain a connection to a server. Such middleware are inadequate for mobile computing, where the resources are unstable (bandwidth, battery, memory...). In addition, the development of distributed applications converges more and more towards the use of component-oriented middleware, which better addresses application complexity by separating functional and extra-functional concerns using the component/container paradigm. The objective of this work is the disconnection management of component-based applications in mobile environments. The solution consists in maintaining a logical connection between a client and its servers using the concept of disconnected operation. However, most existing solutions are "ad hoc": they do not separate functional concerns from disconnection management, and they do not propose a disconnection-aware approach to designing distributed applications that have to work in the presence of disconnections. Moreover, the component-oriented paradigm has rarely been applied to disconnection management, this last limitation being due to the newness of the model. In this PhD thesis, we present MADA, a mobile application development approach in which disconnection management is taken into account when modelling the application at the architectural level.
Then, we present a middleware service for the software cache management of the mobile terminal. We validate the solution using a prototype implemented in Java, for CORBA component-based applications, within the DOMINT platform. We also integrate disconnection management into the containers. Finally, we propose a specification and a Java/CCM implementation of our container using the extensible container model (ECM) of OpenCCM.
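The disconnected-operation idea underpinning this work, a client-side software cache that keeps serving results while the connection is lost, can be sketched as a caching proxy. All names here are illustrative assumptions, not the DOMINT or OpenCCM API:

```python
# Hedged sketch of disconnected operation via a client-side caching proxy.
class DisconnectedError(Exception):
    """Raised when the remote server cannot be reached."""

class CachingProxy:
    """Keeps a software cache so invocations still succeed while disconnected."""
    def __init__(self, remote_call):
        self.remote_call = remote_call
        self.cache = {}

    def invoke(self, key):
        try:
            result = self.remote_call(key)
            self.cache[key] = result      # refresh the cache while connected
            return result
        except DisconnectedError:
            if key in self.cache:
                return self.cache[key]    # disconnected operation: serve cached value
            raise                         # nothing cached: the failure is visible

# Usage: the network works once, then drops.
state = {"connected": True}
def remote(key):
    if not state["connected"]:
        raise DisconnectedError()
    return f"value-of-{key}"

proxy = CachingProxy(remote)
print(proxy.invoke("profile"))   # fetched remotely and cached
state["connected"] = False
print(proxy.invoke("profile"))   # served from the software cache
```

Placing this logic in the container rather than in application code is precisely the separation of functional and extra-functional concerns the thesis argues for.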