Dissertations / Theses on the topic 'Allocation des données réparties'
Consult the top 50 dissertations / theses for your research on the topic 'Allocation des données réparties.'
Quiané-Ruiz, Jorge-Arnulfo. "Allocation de requêtes dans des systèmes d'information distribués avec des participants autonomes." Nantes, 2008. https://tel.archives-ouvertes.fr/tel-00464475.
Full text
In large-scale distributed information systems, where participants (consumers and providers) are autonomous and have special interests in some queries, query allocation is a challenge. Much work in this context has focused on distributing queries among providers in a way that maximizes overall performance (typically throughput and response time). However, participants usually have certain expectations of the mediator which are not only performance-related. Such expectations mainly reflect their interests in allocating and performing queries, e.g. their interests towards providers (based on reputation, for example), quality of service, topics of interest, and relationships with other participants. In this context, because of participants' autonomy, dissatisfaction is a problem since it may lead participants to leave the mediator. A participant is satisfied when the query allocation method meets its expectations. Thus, besides balancing query load, preserving the participants' interests so that they are satisfied is also important. In this thesis, we address the query allocation problem in these environments and make the following main contributions. First, we provide a model to characterize the participants' perception of the system regarding their interests and propose measures to evaluate the quality of query allocation methods. Second, we propose a framework for query allocation, called SbQA, that dynamically trades consumers' interests for providers' interests based on their satisfaction. Third, we propose a query allocation approach that allows a query allocation method (specifically SbQA) to scale up in terms of the numbers of mediators, participants, and hence of performed queries. Fourth, we propose a query replication method, called SbQR, that tolerates participant failures when allocating queries while preserving participants' satisfaction and good system performance. Last but not least, we analytically and experimentally validate our proposals and demonstrate that they yield high efficiency while satisfying participants.
Robert de Saint Victor, Isabelle. "Système déductif dans le contexte de données réparties." Lyon 1, 1988. http://www.theses.fr/1988LYO10084.
Full text
Vargas-Solar, Genoveva. "Service d'évènements flexible pour l'intégration d'applications bases de données réparties." Université Joseph Fourier (Grenoble ; 1971-2015), 2000. http://www.theses.fr/2000GRE10259.
Full text
Sarr, Idrissa. "Routage des transactions dans les bases de données à large échelle." Paris 6, 2010. http://www.theses.fr/2010PA066330.
Full textDahan, Sylvain. "Mécanismes de recherche de services extensibles pour les environnements de grilles de calcul." Besançon, 2005. http://www.theses.fr/2005BESA2063.
Full text
The aim of Grid computing is to share computing resources. Users should be able to find the resources they need efficiently. To this end, we propose to connect the resources with an overlay network and to use a flooding search algorithm. Overlay networks are usually formed as a graph or a tree. Trees use an optimal number of messages but suffer from bottlenecks which limit the number of simultaneous searches that can be performed. Graphs use more messages but support a higher number of simultaneous searches. We propose a new topology which uses an optimal number of messages like trees and does not have any bottleneck, like graphs. If every node of a tree is a computer, some computers are leaves which receive messages and the others are intermediate nodes which forward messages. We distribute the intermediate-node role among all servers in such a way that every server plays the same role. This new tree structure is built recursively: every server is a leaf, and intermediate nodes are complete graphs of their children. We show that such a tree can be built and that it is possible to run tree traversals on it. We also show that the load is fairly shared between the servers. As a result, this structure performs better than both the tree and the graph in terms of search speed and load.
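The tree-versus-graph trade-off described in this abstract can be made concrete with a toy flooding search: on a tree overlay every link is crossed at most twice, while extra graph edges cost extra messages. A minimal sketch, illustrative only (the function names and message accounting are our own assumptions, not the thesis's actual topology):

```python
from collections import deque

def flood_search(adj, start, has_resource):
    """Flood a query through an overlay network: every node forwards it
    over all its links; returns the hits and the number of messages sent."""
    visited = {start}
    queue = deque([start])
    messages = 0
    hits = set()
    while queue:
        node = queue.popleft()
        if has_resource(node):
            hits.add(node)
        for neighbour in adj[node]:
            messages += 1  # one message per link traversal
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return hits, messages

# Tree overlay on 5 nodes: exactly 2*(n-1) = 8 messages.
tree = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
print(flood_search(tree, 0, lambda n: n == 4))  # ({4}, 8)
```

Adding a redundant edge to the overlay raises the message count, which is the cost graphs pay for their higher parallelism.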
Ravat, Franck. "Od3 : contribution méthodologique à la conception de bases de données orientées objet réparties." Toulouse 3, 1996. http://www.theses.fr/1996TOU30150.
Full text
Lumineau, Nicolas. "Organisation et localisation de données hétérogènes et réparties sur un réseau Pair-à-Pair." Paris 6, 2005. http://www.theses.fr/2005PA066436.
Full text
Meynard, Michel. "Contrôle de la cohérence des bases de données réparties et dupliquées, sujettes aux partitionnements." Montpellier 2, 1990. http://www.theses.fr/1990MON20022.
Full text
Driouche, Mohamed. "Un système de gestion de base de données réparties dans un environnement temps réel." Paris 6, 1989. http://www.theses.fr/1989PA066730.
Full textOğuz, Damla. "Méthodes d'optimisation pour le traitement de requêtes réparties à grande échelle sur des données liées." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30067/document.
Full text
Linked Data is a term for a set of best practices for publishing and interlinking structured data on the Web. As the number of Linked Data providers increases, the Web becomes a huge global data space. Query federation is one approach for efficiently querying this distributed data space. It is employed via a federated query engine which aims to minimize the response time and the completion time. Response time is the time to generate the first result tuple, whereas completion time refers to the time to provide all result tuples. There are three basic steps in a federated query engine: data source selection, query optimization, and query execution. This thesis contributes to query optimization for query federation. Most studies focus on static query optimization, which generates the query plans before execution and needs statistics. However, the Linked Data environment has several difficulties, such as unpredictable data arrival rates and unreliable statistics. As a consequence, static query optimization can produce inefficient execution plans. These constraints show that adaptive query optimization should be used for federated query processing on Linked Data. In this thesis, we first propose an adaptive join operator which aims to minimize the response time and the completion time for federated queries over SPARQL endpoints. Second, we extend the first proposal to further reduce the completion time. Both proposals can change the join method and the join order during execution by using adaptive query optimization. The proposed operators can handle different data arrival rates of relations and the lack of statistics about them. The performance evaluation in this thesis shows the efficiency of the proposed adaptive operators. They provide faster completion times and almost the same response times compared to symmetric hash join. Compared to bind join, the proposed operators perform substantially better with respect to response time and can also provide faster completion times. In addition, the second proposed operator provides considerably faster response times than bind-bloom join and can improve the completion time as well. The second proposal also provides faster completion times than the first in all conditions. In conclusion, the proposed adaptive join operators provide the best trade-off between response time and completion time. Even though our main objective is to manage different data arrival rates of relations, the performance evaluation reveals that they are successful under both fixed and varying data arrival rates.
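For reference, the symmetric hash join used as a baseline in this abstract can be sketched in a few lines. This is the generic textbook version (the function and variable names are our own), not the thesis's adaptive operator; each arriving tuple probes the opposite hash table before being inserted into its own, which is why first results appear early regardless of arrival rates:

```python
import itertools
from collections import defaultdict

def symmetric_hash_join(left, right, key=lambda t: t[0]):
    """Pipelined symmetric hash join: tuples may arrive from either input
    in any order; each arriving tuple first probes the opposite hash table,
    then is inserted into its own, so the first results stream out early."""
    h_left, h_right = defaultdict(list), defaultdict(list)
    results = []
    # interleave the two inputs to mimic arbitrary arrival order
    for l, r in itertools.zip_longest(left, right):
        if l is not None:
            for match in h_right[key(l)]:
                results.append((l, match))
            h_left[key(l)].append(l)
        if r is not None:
            for match in h_left[key(r)]:
                results.append((match, r))
            h_right[key(r)].append(r)
    return results

pairs = symmetric_hash_join([(1, 'a'), (2, 'b')], [(1, 'x'), (3, 'y'), (2, 'z')])
print(pairs)  # [((1, 'a'), (1, 'x')), ((2, 'b'), (2, 'z'))]
```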
Al King, Raddad. "Localisation de sources de données et optimisation de requêtes réparties en environnement pair-à-pair." Toulouse 3, 2010. http://thesesups.ups-tlse.fr/912/.
Full text
Despite their great success in the file-sharing domain, P2P systems support only simple queries, usually based on looking up a file by its name. Recently, several research efforts have been made to extend P2P systems to share data of fine granularity (i.e. atomic attributes) and to process queries written in a highly expressive language (i.e. SQL). The characteristics of P2P systems (e.g. large scale, node autonomy and instability) make it impractical to maintain a global catalog that stores information about data, schemas and data source hosts. Because of the absence of a global catalog, two problems become more difficult: (i) locating data sources while taking schema heterogeneity into account, and (ii) query optimization. In our thesis, we propose an approach for processing SQL queries in a P2P environment. To solve the semantic heterogeneity between local schemas, our approach is based on a domain ontology and on similarity formulas. As for the structural heterogeneity of local schemas, it is solved by extending a query routing method (i.e. the Chord protocol) with Structure Indexes. Concerning the query optimization problem, we propose to take advantage of the data source localization phase to obtain all the metadata required for generating a close-to-optimal execution plan. Finally, in order to show the feasibility and validity of our propositions, we carry out performance evaluations and discuss the obtained results.
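The Chord protocol named in this abstract assigns each key to its successor node on a circular identifier space. A minimal sketch of that placement rule, without finger tables or the thesis's Structure Index extension (ring size and names are our own assumptions):

```python
import hashlib

def ring_id(name, m=16):
    """Hash a node or key name onto a 2^m identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** m)

def successor(node_ids, key_id):
    """A key is stored on the first node whose id follows it on the ring."""
    for i in sorted(node_ids):
        if i >= key_id:
            return i
    return min(node_ids)  # wrap around past the top of the ring

print(successor([10, 100, 1000], 50))    # 100
print(successor([10, 100, 1000], 2000))  # 10 (wrapped)
```

In the real protocol each node knows only a few other nodes and the lookup is routed hop by hop; the placement rule, however, is exactly this successor function.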
Sahri, Soror. "Conception et implantation d'un système de bases de données distribuée & scalable : SD-SQL Server." Paris 9, 2006. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2006PA090013.
Full text
Our thesis elaborates on the design of a scalable distributed database system (SD-DBS). A novel feature of an SD-DBS is the concept of a scalable distributed relational table, a scalable table in short. Such a table accommodates dynamic splits of its segments at SD-DBS storage nodes. A split occurs when an insert makes a segment overflow, as in, e.g., a B-tree file. Current DBMSs provide only static partitioning, requiring a cumbersome global reorganization from time to time. The transparency of the distribution of a scalable table is in this light an important step beyond the current technology. Our thesis explores the design issues of an SD-DBS by constructing a prototype termed SD-SQL Server. As its name indicates, it uses the services of SQL Server. SD-SQL Server repartitions a table when an insert overflows existing segments. With the comfort of a single-node SQL Server user, the SD-SQL Server user gets larger tables or faster response time through dynamic parallelism. We present the architecture of our system, its implementation and the performance analysis.
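The segment-split behaviour described above resembles a B-tree leaf split applied to distributed storage nodes. A toy single-process sketch, assuming an invented capacity and split point (not SD-SQL Server's actual mechanism):

```python
from bisect import insort

class ScalableTable:
    """Toy scalable table: a list of sorted key segments; an insert that
    overflows a segment splits it in two, like a B-tree leaf, instead of
    forcing a global reorganisation."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.segments = [[]]

    def _locate(self, key):
        # first segment whose largest key covers this key, else the last one
        for i, seg in enumerate(self.segments):
            if seg and key <= seg[-1]:
                return i, seg
        return len(self.segments) - 1, self.segments[-1]

    def insert(self, key):
        i, seg = self._locate(key)
        insort(seg, key)
        if len(seg) > self.capacity:       # overflow: split the segment
            mid = len(seg) // 2
            self.segments.insert(i + 1, seg[mid:])
            del seg[mid:]

t = ScalableTable(capacity=4)
for k in [1, 2, 3, 4, 5]:
    t.insert(k)
print(t.segments)  # [[1, 2], [3, 4, 5]]
```

In the real system each segment lives on a different storage node and the split migrates half the tuples to a new node, transparently to the user.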
Hatimi, Mostafa. "Gestion des données dupliquées dans un environnement sujet aux partitionnements du réseau de communication." Montpellier 2, 1990. http://www.theses.fr/1990MON20133.
Full text
Mokadem, Riad. "Signatures algébriques dans la gestion de structures de données distribuées et scalables." Paris 9, 2006. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2006PA090014.
Full text
Recent years saw the emergence of new architectures involving multiple computers. New concepts were proposed; among the most popular are those of a multicomputer or a network of workstations and, more recently, of Peer-to-Peer and Grid computing. This thesis consists of the design, implementation and performance measurement of a prototype SDDS manager, called SDDS-2005. It manages key-based ordered files in the distributed RAM of Windows machines forming a grid or P2P network. Our scheme can back up the RAM on each storage node onto the local disk. Our goal is to write only the data that has changed since the last backup. We also address record updates and non-key search (scans). Their common denominator is an application of the properties of a new signature scheme that we call algebraic signatures, which are useful in this context. One then needs to find only the areas of the bucket that changed since the last backup. Our signature-based scheme for updating records at the SDDS client should prove its advantages in client-server database systems in general. It holds the promise of interesting possibilities for transactional concurrency control, beyond the mere avoidance of lost updates. Furthermore, a partly pre-computed algebraic signature of a string encodes each symbol by its cumulative signature. Such signatures protect the SDDS data against incidental viewing by an unauthorized server administrator. The method appears attractive: it does not imply any storage overhead, is completely transparent to the servers and occurs at the client. Next, our scheme provides fast string search (match) directly on encoded data at the SDDS servers, as an alternative to the known Karp-Rabin-type schemes. Scans can explore the storage nodes in parallel. They match the records by their entire non-key content or by a substring, prefix, longest common prefix or longest common string. The search complexity is almost O(1) for prefix search. One may also use them to detect and localize silent corruption. These features should be of interest to P2P and Grid computing. We then propose a novel string search algorithm called n-gram search. It appears to be among the fastest known, costing only a small fraction of an entire record match, especially for larger string searches. The experiments prove the high efficiency of our implementation. Our backup scheme is substantially more efficient with the algebraic signatures; the signature calculus is itself substantially faster, the gain being about 30%. Experiments also prove that our cumulative pre-computing notably accelerates the string searches, which are faster than the partial ones, at the expense of higher encoding/decoding overhead. They are new alternatives to the known Karp-Rabin-type schemes, and likely to be usually faster. The speed of string matches opens interesting perspectives for the popular join, group-by, rollup and cube database operations. Our work has been the subject of five publications in international conferences [LMS03, LMS05a, LMS05b, ML06, l&al06]. For convenience, we have included the latest publications. The package termed SDDS-2005 is available for non-commercial use at http://ceria.dauphine.fr/. It builds on earlier versions of the prototype, a cumulative effort of several contributors, and on the n-gram algorithm implementation. We have also presented our prototype, SDDS-2005, at the Microsoft Research Academic Days 2006.
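The Karp-Rabin-type schemes this thesis compares against slide a rolling hash along the text. A minimal textbook version is sketched below; the algebraic signatures of the thesis play an analogous role but are computed over a Galois field rather than this integer modulus (base and modulus here are arbitrary illustrative choices):

```python
def karp_rabin(text, pattern, base=256, mod=1_000_003):
    """Karp-Rabin substring search: compare rolling hashes first and
    confirm candidate positions with a direct string comparison."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)            # weight of the outgoing symbol
    h_pat = h_txt = 0
    for i in range(m):
        h_pat = (h_pat * base + ord(pattern[i])) % mod
        h_txt = (h_txt * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        if h_txt == h_pat and text[i:i + m] == pattern:
            hits.append(i)
        if i < n - m:                        # roll the window one symbol
            h_txt = ((h_txt - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return hits

print(karp_rabin("abracadabra", "abra"))  # [0, 7]
```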
Legtchenko, Sergey. "Adaptation dynamique des architectures réparties pour jeux massivement multijoueurs." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00931865.
Full text
Nicolle, Cécile. "Système d'Accès à des Bases de Données Hétérogènes réparties en vue d'une aide à la décision (SABaDH)." Lyon, INSA, 2001. http://theses.insa-lyon.fr/publication/2001ISAL0076/these.pdf.
Full text
Decision makers have always faced the problem of accessing all the data needed to make the best possible decision. Nowadays, most systems provide help for decision making, but it remains difficult to know where the relevant data can be found. Furthermore, decision makers cannot know the type of all the data they need for a decision. That is why we propose the architecture of an access system which allows decision makers to express requests in a language close to natural language, without any detail about data location. Our system can find this data and provide all relevant information related to the searched data, thereby alleviating some deficiencies of the search domain. It uses the wrapper principle, and XML both as its internal language and as its request and answer language. Two prototypes have been realized: one for searching a base of legal texts, the other for XML interrogation of a Progress database with answers in XML.
Bruneau, Pierrick. "Contributions en classification automatique : agrégation bayésienne de mélanges de lois et visualisation interactive." Phd thesis, Nantes, 2010. http://www.theses.fr/2010NANT2023.
Full text
The internet and recent architectures such as sensor networks are currently witnessing tremendous and continuously growing amounts of data, often distributed on large scales. Combined with user expectations with respect to tooling, this encourages developing adequate techniques for analysis and indexing. Classification and clustering tasks are about characterizing classes within data collections. These are often used as building blocks for designing tools aimed at making data accessible to users. In this document, we describe our contributions to mixture model aggregation. These models are classically used for content categorization. Using variational Bayesian principles, we aimed at designing algorithms with low computation and transmission costs. Doing so, we aimed at proposing a building block for distributed density model estimation. We also contributed to visual classification applied to data streams. To this purpose, we employed bio-mimetic principles and results from graph theory. More specifically, visual and dynamic abstractions of an underlying clustering process were proposed. We strove to provide users with efficient interfaces, while allowing the use of their actions as feedback.
Acosta, Francisco. "Les arbres balancés : spécification, performances et contrôle de concurrence." Montpellier 2, 1991. http://www.theses.fr/1991MON20201.
Full text
Ghassany, Mohamad. "Contributions à l'apprentissage collaboratif non supervisé." Paris 13, 2013. http://www.theses.fr/2013PA132041.
Full text
The research outlined in this thesis concerns the development of collaborative clustering approaches based on topological methods, such as self-organizing maps (SOM), generative topographic mappings (GTM) and variational Bayesian GTM (VBGTM). So far, clustering methods have operated on a single data set, but recent applications require data sets distributed among several sites. Communication between the different data sets is therefore necessary, while respecting the privacy of every site, i.e. sharing data between sites is not allowed. The fundamental concept of collaborative clustering is that the clustering algorithms operate locally on individual data sets, but collaborate by exchanging information about their findings. The strength of collaboration, or confidence, is specified by a parameter called the coefficient of collaboration. This thesis proposes to learn it automatically during the collaboration phase. Two data scenarios are treated in this thesis, referred to as vertical and horizontal collaboration. Vertical collaboration occurs when data sets contain different objects described by the same patterns; horizontal collaboration occurs when they contain the same objects described by different patterns.
Everaere, Patricia. "Contribution à l'étude des opérateurs de fusion : manipulabilité et fusion disjonctive." Artois, 2006. http://www.theses.fr/2006ARTO0402.
Full text
Propositional merging operators aim at defining the beliefs/goals of a group of agents from their individual beliefs/goals, represented by propositional formulae. Two widely used criteria for comparing existing merging operators are rationality and computational complexity. Our claim is that those two criteria are not enough, and that a further one has to be considered as well, namely strategy-proofness. A merging operator is said to be non-strategy-proof if some agent involved in the merging process can change the result of the merging, so as to make it closer to her expected one, by lying about her true beliefs/goals. A non-strategy-proof merging operator does not give any guarantee that the results it provides are adequate to the beliefs/goals of the group, since it does not incite the agents to report their true beliefs/goals. A first contribution of this thesis is a study of the strategy-proofness of existing propositional merging operators. It shows that no existing merging operator fully satisfies the three criteria under consideration: rationality, complexity and strategy-proofness. Our second contribution consists of two new families of disjunctive merging operators, i.e., operators ensuring that the result of the merging process entails the disjunction of the information given at the start. The operators from both families are shown to be valuable alternatives to formula-based merging operators, which are disjunctive but exhibit a high computational complexity, are not strategy-proof, and are not fully rational.
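As an illustration of the classical distance-based merging the families above are contrasted with, here is the textbook sum-of-Hamming-distances (Σ) operator over explicit sets of models; this is standard material, not one of the thesis's new disjunctive operators, and the model-set encoding is our own simplification:

```python
from itertools import product

def merge_sum_hamming(bases, n_vars):
    """Propositional merging with the sum-of-Hamming-distances operator:
    each base is a set of models (0/1 tuples over n_vars variables); the
    result keeps the interpretations minimising the total distance to
    all the bases."""
    def dist(world, base):
        # distance from an interpretation to a base = distance to its closest model
        return min(sum(a != b for a, b in zip(world, v)) for v in base)
    worlds = list(product((0, 1), repeat=n_vars))
    score = {w: sum(dist(w, base) for base in bases) for w in worlds}
    best = min(score.values())
    return {w for w in worlds if score[w] == best}

# Two agents believe x is true, one believes it is false: the majority wins.
print(merge_sum_hamming([{(1,)}, {(1,)}, {(0,)}], 1))  # {(1,)}
```

Note that with one agent on each side the Σ operator returns both interpretations, which is exactly the kind of behaviour a strategic agent may try to exploit by misreporting her base.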
Naacke, Hubert. "Modèle de coût pour médiateur de bases de données hétérogènes." Versailles-St Quentin en Yvelines, 1999. http://www.theses.fr/1999VERS0013.
Full text
Distributed systems access diverse information sources by means of declarative queries. One solution to the problems raised by source heterogeneity relies on the mediator/wrapper architecture. In this architecture, the mediator accepts a user query as input, processes it by accessing the sources through the relevant wrappers, and returns the answer to the user. The mediator offers a global, centralized view of the sources; the wrappers offer uniform access to the sources on behalf of the mediator. To process a query efficiently, the mediator must optimize the plan describing the query processing. To this end, several semantically equivalent plans are considered and the cost (i.e. the response time) of each plan is estimated, in order to choose the cheapest one for execution. The mediator estimates the cost of the operations processed by the sources using the cost information that the sources export. However, because of source autonomy, the exported information may prove insufficient to estimate operation costs with adequate precision. This thesis proposes a new method allowing a wrapper developer to export a cost model of a source to the mediator. The exported model contains statistics describing the data stored in the source, as well as mathematical functions to evaluate the cost of the processing performed by the source. When the wrapper developer lacks information or means, he may provide a partial cost model, which is automatically completed with the generic model predefined within the mediator. We experimentally validate the proposed cost model by accessing web sources. This validation shows the effectiveness of the generic cost model, as well as of the models specialized according to the particularities of the sources and the application cases.
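The completion of a partial exported model with the mediator's generic model amounts to overlaying the wrapper's figures on predefined defaults. A schematic sketch, in which every statistic name and default value is invented for illustration (the thesis's actual model is richer):

```python
# Hypothetical generic cost model predefined within the mediator.
GENERIC_MODEL = {
    "cardinality": 10_000,       # default number of tuples in the source
    "selectivity": 0.1,          # default predicate selectivity
    "cost_per_tuple_ms": 0.05,   # default per-tuple processing cost
}

def complete_cost_model(exported):
    """Overlay a wrapper's partial cost model on the generic defaults:
    exported figures win, missing entries are filled in automatically."""
    model = dict(GENERIC_MODEL)
    model.update(exported)
    return model

def scan_cost_ms(model):
    """Estimated response time of a selection evaluated by the source."""
    return model["cardinality"] * model["selectivity"] * model["cost_per_tuple_ms"]

partial = {"cardinality": 2_000}   # the wrapper only knows the table size
print(scan_cost_ms(complete_cost_model(partial)))  # 10.0
```

The mediator can then compare such estimates across semantically equivalent plans and execute the cheapest one.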
Houas, Heykel. "Allocation de ressources pour la transmission de données multimédia scalables." Phd thesis, Université de Cergy Pontoise, 2009. http://tel.archives-ouvertes.fr/tel-00767889.
Full text
Houas, Heykel. "Allocation de ressources pour la transmission de données multimédia scalables." Cergy-Pontoise, 2009. http://biblioweb.u-cergy.fr/theses/09CERG0430.pdf.
Full text
This thesis is dedicated to resource allocation for the transmission of scalable multimedia data under Quality-of-Service (QoS) constraints on heterogeneous networks. We focus on wired and wireless links (DS-CDMA, OFDMA), with the transmission of images and speech over frequency-selective and non-frequency-selective channels. Resources from the physical layer are addressed: channel code rates (to protect the data against the degradation of the signal-to-noise ratio, SNR), modulation orders, carrier ordering (to convey the layers) and the allocated power. The aim of this work is to allocate these parameters so as to maximize the source rate of the multimedia data under targeted QoS and system payload, with perfect or partial channel knowledge. The QoS is expressed in terms of perceived quality for the end user and in terms of bit error rate per class of the scalable source encoder. In such a context, we propose link adaptation schemes whose novelty is to enable the truncation of the data layers. Moreover, these strategies make use of the sensitivity to transmission errors and of the channel state information to dynamically adapt the protection of the layers (Unequal Error Protection, UEP) in accordance with the QoS requirements. These procedures explore multiple resource optimization criteria: the minimization of the system payload and the maximization of the robustness to channel estimation errors. For each one, we perform the optimal allocation (bit loading) of the above parameters that maximizes the source rate while meeting the receiver's constraints. We show that these schemes fit any communication system, and we present their performance and compare it to state-of-the-art procedures.
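Bit-loading allocations like the one mentioned above are often computed greedily: each additional bit goes to the carrier where it costs the least extra power. A Hughes-Hartogs-style sketch, where the SNR gap value and the interface are our own assumptions rather than the thesis's exact scheme:

```python
def greedy_bit_loading(snrs, total_bits, gap_db=6.0):
    """Greedy bit loading: repeatedly grant one more bit to the carrier
    whose incremental power cost, gap * (2^(b+1) - 2^b) / snr, is smallest."""
    gap = 10 ** (gap_db / 10)
    bits = [0] * len(snrs)

    def extra_power(i):
        # power needed to move carrier i from bits[i] to bits[i] + 1 bits
        return gap * (2 ** (bits[i] + 1) - 2 ** bits[i]) / snrs[i]

    for _ in range(total_bits):
        best = min(range(len(snrs)), key=extra_power)
        bits[best] += 1
    return bits

# A strong carrier absorbs the first bits before a weak one gets any.
print(greedy_bit_loading([10.0, 1.0], 3))  # [3, 0]
```

The same greedy skeleton extends to UEP by giving each layer its own target error rate, i.e. its own gap, which is the kind of per-class QoS constraint the thesis works with.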
Poulliat, Charly. "Allocation et optimisation de ressources pour la transmission de données multimédia." Cergy-Pontoise, 2004. http://www.theses.fr/2004CERG0271.
Full text
Jouis, Christophe. "Contributions à la conceptualisation et à la Modélisation des connaissances à partir d'une analyse linguistique de textes : réalisation d'un prototype : le système SEEK." Paris, EHESS, 1993. http://www.theses.fr/1993EHES0051.
Full text
We present a linguistic and computational model whose aim is the understanding of linguistic items embedded in their context. The model is constituted by knowledge-based systems of contextual exploration, which consist in seeking linguistic clues in texts. It is shown that nothing more than a basic morpho-syntactic analysis and the use of the context of an examined linguistic item is required to build semantic representations. These contextual data express knowledge of the language without using any other knowledge of the world. We present in detail a program based on this model: SEEK, a help tool for knowledge extraction from texts in natural language. The latter has been integrated into a cognitive engineering workbench associated with a methodology of knowledge acquisition and modelling called METODAC.
Ketata, Imen. "Méthode de découverte de sources de données tenant compte de la sémantique en environnement de grille de données." Toulouse 3, 2012. http://thesesups.ups-tlse.fr/1917/.
Full text
Nowadays, data grid applications seek to share a huge number of data sources in an unstable environment where a data source may join or leave the system at any time. These data sources are highly heterogeneous because they are independently developed and managed, and geographically scattered. In this environment, the efficient discovery of relevant data sources for query execution is a complex problem due to source heterogeneity, the large scale of the environment and system instability. The first works on data source discovery were based on keyword search. These initial solutions are not sufficient because they do not take into account the semantic heterogeneity of data sources. Thus, the community has proposed other solutions to consider semantic aspects. A first solution consists in using a global schema or global ontology. However, the design of such a schema or ontology is a complex task due to the number of data sources. Other solutions have been proposed providing mappings between data source schemas, or based on domain ontologies with mapping relations established between them. All these solutions impose a fixed topology for connections as well as for mapping relationships. However, the definition of mapping relations between domain ontologies is a difficult task, and imposing a fixed topology is a major drawback. In this perspective, we propose in this thesis a method for discovering data sources that takes semantic heterogeneity problems into account in an unstable, large-scale environment. For that purpose, we associate a Virtual Organisation (VO) and a domain ontology with each domain, and we rely on the mapping relationships between existing ontologies. We do not impose any hypothesis on the mapping topology, except that it forms a connected graph. We define an addressing system for permanent access from any VOi to another VOj (with i ≠ j) despite peer dynamicity. We also present a maintenance method called 'lazy' that limits the number of messages required to maintain the addressing system during the connection or disconnection of peers. To study the feasibility as well as the viability of our proposals, we carry out a performance evaluation.
Kerhervé, Brigitte. "Vues relationnelles : implantation dans les systèmes de gestion de bases de données centralisés et répartis." Paris 6, 1986. http://www.theses.fr/1986PA066090.
Full text
Cazalens, Sylvie. "Formalisation en logique non standard de certaines méthodes de raisonnement pour fournir des réponses coopératives, dans des systèmes de bases de données et de connaissances." Toulouse 3, 1992. http://www.theses.fr/1992TOU30172.
Full text
Loukil, Adlen. "Méthodologies, Modèles et Architectures de Référence pour la Gestion et l'Echange de Données Médicales Multimédia : Application aux Projets Européen OEDIPE et BRITER." Lyon, INSA, 1997. http://www.theses.fr/1997ISAL0016.
Full textInterchange and Integration of medical data is a fundamental task in modern medicine. However, a significant obstacle to the development of efficient interoperable information systems is the lack of software tools that provide transparent access to heterogeneous distributed databases. Currently most of the solutions are stand-alone ones fitting only one configuration. To solve this problems of integration and interoperability, we propose in this thesis an original approach which is based on the definition of communication protocols and the design of generic interface between the specific implementations of the protocols and the target databases associated to the Hospital Information Systems. The proposed solution is based on the development of a data dictionary modelling the communications protocols and the databases structures and generic module for the data storage and extraction. The design involves issues related to reverse engineering procedures and to automatic generation of SQL statements. To illustrate this approach, we present the demonstration prototype we have developed in the framework of the OEDIPE AIM project to experiment and to test open interchange of ECGs and associated clinical data. The second part is devoted to the modelling and integration of distributed electronic patient records using communications protocols. We first present a multidimensional approach for the structuring of patient records and propose a generic object oriented information model which integrates bio signals, images and accompanying clinical information. We then, describe a prototype system which has been developed in the framework of the BRITER AIM project for accessing and handling heterogeneous patient data stored in distributed electronic patient records in order to support Rehabilitation healthcare professional in making decisions. 
We thus demonstrate that the use of standard communication protocols enables and facilitates the development of portable and interoperable medical applications for the benefit of the health care field.
Bonnel, Nicolas Achille Jacques. "Adapnet : stratégies adaptatives pour la gestion de données distribuées sur un réseau pair-a pair." Lorient, 2008. http://www.theses.fr/2008LORIS134.
In the last few years, the amount of digital information produced has increased exponentially. This raises problems regarding the storage, access and availability of this data. Software and hardware architectures based on the peer-to-peer (p2p) paradigm seem to satisfy the needs of data storage but cannot handle both data accessibility and availability efficiently. We present in this thesis various contributions on p2p architectures for managing large volumes of information. We propose various strategies that operate on dedicated virtual topologies that can be maintained at low cost. More precisely, these topologies scale well because the cost of node arrival and node departure is on average constant, whatever the size of the network. We analyze the main paradigms of information sharing on a p2p network, considering successively the problem of access to typed (semi-structured) information and the general case that completely separates the nature of the queries from the location of the data. We propose a routing strategy using the structure and content of semi-structured information. We also propose strategies that efficiently explore the network when no assumption is made on the nature of the data or queries. In order to manage a quality of service (expressed in terms of speed and reliability), we also investigate the problem of information availability; more precisely, we replicate data stored in the network. We propose a novel approach exploiting an estimation of the local density of data.
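The abstract gives no algorithmic detail on the density-driven replication; as an illustration of the general idea only, here is a toy Python sketch in which each node estimates, over its neighbors, the local density of a data item and pulls a replica when that density falls below a target (the network model, the `target` threshold and the pull rule are assumptions, not the thesis's actual algorithm):

```python
def local_density(node, item, network, holdings):
    """Fraction of a node's neighbors that currently hold the item."""
    neighbors = network[node]
    if not neighbors:
        return 0.0
    return sum(1 for n in neighbors if item in holdings[n]) / len(neighbors)

def replicate_by_density(network, holdings, item, target=0.5):
    """One replication pass: a node lacking the item pulls a copy from a
    neighbor when its estimated local density is below the target."""
    for node in network:
        if item not in holdings[node] \
                and local_density(node, item, network, holdings) < target \
                and any(item in holdings[n] for n in network[node]):
            holdings[node].add(item)
    return holdings
```

A real p2p system would run such a pass continuously and probabilistically; a single pass already raises the item's availability wherever a copy is reachable.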
Faye, David Célestin. "Médiation de données sémantique dans SenPeer, un système pair-à-pair de gestion de données." Phd thesis, Université de Nantes, 2007. http://tel.archives-ouvertes.fr/tel-00481311.
Rivierre, Yvan. "Algorithmes auto-stabilisants pour la construction de structures couvrantes réparties." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM089/document.
This thesis deals with the self-stabilizing construction of spanning structures over a distributed system. Self-stabilization is a paradigm for fault tolerance in distributed algorithms. It guarantees that the system eventually satisfies its specification after transient faults hit the system. Our model of distributed system assumes locally shared memories for communicating, unique identifiers for symmetry breaking, and a distributed daemon for execution scheduling, that is, the weakest proper daemon. More generally, we aim for the weakest possible assumptions, such as arbitrary topologies, in order to propose the most versatile constructions of distributed spanning structures. We present four original self-stabilizing algorithms achieving k-clustering, (f,g)-alliance construction, and ranking. For each of these problems, we prove the correctness of our solutions. Moreover, we analyze their time and space complexity using formal proofs and simulations. Finally, for the (f,g)-alliance problem, we consider the notion of safe convergence in addition to self-stabilization. It forces the system to first quickly satisfy a specification that guarantees a minimum of conditions, and then to converge to a more stringent specification.
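The thesis's own constructions (k-clustering, (f,g)-alliance, ranking) are not reproduced in the abstract; to illustrate the self-stabilization paradigm itself, here is a minimal sketch of the classic self-stabilizing BFS distance computation: starting from an arbitrary (possibly corrupted) state, repeated application of a purely local rule converges to correct distances from the root. The sequential round-based scheduling and the simulation harness are simplifying assumptions:

```python
def stabilize_bfs(adj, root, dist):
    """Self-stabilizing BFS distances. `dist` may start in an ARBITRARY
    state; each node repeatedly applies the local rule
    dist[v] = 0 if v is the root, else 1 + min over neighbors,
    until no node changes (a legitimate fixpoint is reached)."""
    changed = True
    while changed:
        changed = False
        for v in range(len(adj)):           # one sequential activation round
            want = 0 if v == root else 1 + min(dist[u] for u in adj[v])
            if dist[v] != want:
                dist[v] = want
                changed = True
    return dist
```

Whatever the initial corruption, the fixpoint is the true BFS distance vector, which is precisely the self-stabilization guarantee.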
Longueville, Véronique. "Modélisation, calcul et évaluation de liens pour la navigation dans les grands ensembles d'images fixes." Toulouse 3, 1993. http://www.theses.fr/1993TOU30149.
Bergougnoux, Patrick. "MIME, un environnement de développement coopératif pour applications distribuées." Toulouse 3, 1992. http://www.theses.fr/1992TOU30014.
Steff, Yann. "SMA et gestion coopérative de réseaux et systèmes : un cadre méthodologique pour une macro-organisation autonome." Toulouse 3, 2002. http://www.theses.fr/2002TOU30043.
Duque, Hector. "Conception et mise en oeuvre d'un environnement logiciel de manipulation et d'accès à des données réparties : application aux grilles d'images médicales : le système DSEM / DM2." Lyon, INSA, 2005. http://theses.insa-lyon.fr/publication/2005ISAL0050/these.pdf.
Our vision in this thesis is that of a bio-medical grid as a partner of hospital information systems, sharing computing resources as well as providing a platform for sharing information. Therefore, we aim at (i) providing transparent access to huge distributed medical data sets, (ii) querying these data by their content, and (iii) sharing computing resources within the grid. Assuming the existence of a grid infrastructure, we suggest a multi-layered architecture (Distributed Systems Engines – DSE). This architecture allows us to design high-performance distributed systems which are highly extensible, scalable and open. It ensures the connection between the grid, data storage systems, and medical platforms. The conceptual design of the architecture assumes a horizontal definition for each of the layers, and is based on a multi-process structure. This structure enables the exchange of messages between processes using the message-passing paradigm. These processes and messages allow one to define entities of a higher level of semantic significance, which we call Drivers and which, instead of single messages, deal with different kinds of transactions: queries, tasks and requests. Thus, we define different kinds of Drivers for dealing with each kind of transaction and, at a higher level, we define services as an aggregation of Drivers. The architectural framework of Drivers and services eases the design of the components of a distributed system (DS), which we call engines, and also eases the extensibility and scalability of the DS.
Guo, Chaopeng. "Allocation de ressources efficace en énergie pour les bases de données dans le cloud." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30065.
Today many cloud computing and cloud database techniques are adopted both in industry and academia to face the arrival of the big data era. Meanwhile, energy efficiency and energy saving have become a major concern in data centers, which are in charge of large distributed systems and cloud databases. However, the energy efficiency and service-level agreements of cloud databases suffer from resource over-provisioning and resource under-provisioning, namely a gap between the resources provided and the resources required. Since the usage of a cloud database is dynamic, the resources of the system should be provided according to its workload. In this thesis, we present our work on energy-efficient resource provisioning for cloud databases, which utilizes the dynamic voltage and frequency scaling (DVFS) technique to cope with resource provisioning issues. Additionally, a migration approach is introduced to further improve the energy efficiency of cloud database systems. Our contribution can be summarized as follows. First, the energy-efficiency behavior of the cloud database system under the DVFS technique is analyzed. Based on the benchmark results, two frequency selection approaches are proposed. Then, a frequency selection approach with a bounded problem is introduced, in which the power consumption and migration cost are treated separately. A linear programming algorithm and a multi-phase algorithm are proposed. Because of the huge solution space, both algorithms are compared on a small case, while the multi-phase algorithm is evaluated on larger cases. Further, a frequency selection approach with an optimization problem is introduced, in which the energy consumption for executing the workload and the migration cost are handled together. Two algorithms are proposed: a genetic algorithm and a Monte Carlo tree search based algorithm. Both algorithms have their pros and cons.
Finally, a migration approach is introduced to migrate data according to the given frequencies and the current data layout. A migration plan can be obtained in polynomial time by the proposed Constrained MHTM algorithm.
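The thesis's frequency selection algorithms are not given in the abstract; to illustrate the DVFS trade-off they exploit, here is a toy Python model in which energy is power times execution time, dynamic power grows cubically with frequency, and the best discrete frequency is simply the one minimizing that product (the power-model coefficients are illustrative assumptions, not measured values):

```python
def select_frequency(freqs_ghz, cycles, p_static_w=30.0, c=3.0):
    """Pick the discrete CPU frequency (GHz) minimizing energy for a
    workload of `cycles` cycles, under the assumed model
    power = P_static + c * f^3 and time = cycles / f."""
    def energy_joules(f):
        power_w = p_static_w + c * f ** 3   # static + cubic dynamic power
        time_s = cycles / (f * 1e9)         # cycles executed at f GHz
        return power_w * time_s
    return min(freqs_ghz, key=energy_joules)
```

Note the trade-off: with a high static power, running too slowly wastes energy on idle draw, while the cubic dynamic term penalizes the highest frequency, so an intermediate frequency can win.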
Epimakhov, Igor. "Allocation des ressources pour l'optimisation de requêtes dans les systèmes de grille de données." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2017/.
Data grid systems are used more and more due to their storage and computing capacities. One of the main problems of these systems is resource allocation for SQL query optimization. Recently, the scientific community has published numerous approaches and methods of resource allocation, striving to take into account different peculiarities of data grid systems: heterogeneity, instability and large scale. A centralized management structure predominates in the proposed methods, in spite of the risks such a solution incurs in large-scale systems. In this thesis we adopt a hybrid approach to resource allocation for query optimization, meaning that we first make a static resource allocation at query compile time, and then reallocate the resources dynamically at query runtime. As opposed to previously proposed methods, we use a decentralized management structure. The static part of our method consists of a strategy of initial resource allocation through a query broker. As for the dynamic part, we propose a strategy that uses cooperation between mobile relational operations and stationary node coordinators in order to decentralize the process of dynamic resource reallocation. The key elements of our method are: (i) limitation of the search space, to address problems caused by the large scale; (ii) a principle of resource distribution among query operations, to determine the degree of parallelism of operations and to balance the load dynamically; and (iii) decentralization of the dynamic allocation process. Performance evaluation results show the efficiency of our propositions. Our initial resource allocation strategy gives results superior to the reference method we used for comparison. The dynamic reallocation strategy considerably decreases the response time in the presence of system instability and load imbalance.
Sauquet, Dominique. "Lied : un modèle de données sémantique et temporel : son intégration dans une architecture distribuée et son utilisation pour des applications médicales." Châtenay-Malabry, Ecole centrale de Paris, 1998. http://www.theses.fr/1998ECAP0586.
Antoniu, Gabriel. "Contribution à la conception de services de partage de données pour les grilles de calcul." Habilitation à diriger des recherches, École normale supérieure de Cachan - ENS Cachan, 2009. http://tel.archives-ouvertes.fr/tel-00437324.
Le, Sergent Thierry. "Méthodes d'exécution et machines virtuelles parallèles pour l'implantation distribuée du langage de programmation parallèle LCS." Toulouse 3, 1993. http://www.theses.fr/1993TOU30021.
Kandi, Mohamed Mehdi. "Allocation de ressources élastique pour l'optimisation de requêtes." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30172.
Cloud computing has become a widely used way to query databases. Today's cloud providers offer a variety of services implemented on parallel architectures. Performance targets and possible penalties in case of violation are established in advance in a contract called a Service-Level Agreement (SLA). The provider's goal is to maximize its benefit while respecting the needs of tenants. Before the birth of cloud systems, several studies considered the problem of resource allocation for database querying on parallel architectures. The execution plan of each query is a graph of dependent tasks. The expression "resource allocation" in this context often covers both the placement of tasks on available resources and their scheduling, taking into account the dependencies between tasks. The main goal was to minimize query execution time and maximize the use of resources. However, this goal does not necessarily guarantee the best economic benefit for the provider in the cloud. In order to maximize the provider's benefit and meet the needs of tenants, it is important to include the economic model and SLAs in the resource allocation process. Indeed, tenants' performance needs differ, so it would be interesting to allocate resources in a way that favors the most demanding tenants while ensuring an acceptable quality of service for the least demanding ones. In addition, in the cloud the number of assigned resources can increase or decrease according to demand (elasticity), and the monetary cost depends on the number of assigned resources, so it would be interesting to set up a mechanism that automatically chooses the right moment to add or remove resources according to the load (auto-scaling). In this thesis, we are interested in designing elastic resource allocation methods for database queries in the cloud.
This solution includes: (1) a static two-phase resource allocation method to ensure a good compromise between provider benefit and tenant satisfaction, while ensuring a reasonable allocation cost; (2) an SLA-driven resource reallocation to limit the impact of estimation errors on the benefit; and (3) an auto-scaling method based on reinforcement learning that meets the specificities of database queries. In order to evaluate our contributions, we implemented our methods in a simulated cloud environment and compared them with state-of-the-art methods in terms of the monetary cost of query execution as well as the allocation cost.
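The abstract does not specify the reinforcement learning formulation; as a flavor of RL-based auto-scaling, here is a toy tabular Q-learning sketch in which the state is (load level, rented workers), the actions remove/keep/add a worker, and the reward penalizes both SLA violations (unserved load) and the monetary cost of rented workers. The state space, reward coefficients and uniform-random behavior policy are assumptions for illustration, not the thesis's method:

```python
import random

def train_autoscaler(steps=10000, seed=0):
    """Off-policy tabular Q-learning for a toy auto-scaling MDP.
    States: (load level 0..3, rented workers 1..4).
    Actions: -1 remove, 0 keep, +1 add a worker (clamped to 1..4).
    Reward: -10 per unit of unserved load (SLA penalty) minus 1 per
    rented worker (monetary cost). The behavior policy is uniform
    random so every state-action pair gets sampled."""
    rng = random.Random(seed)
    actions = (-1, 0, 1)
    q = {(l, w): {a: 0.0 for a in actions}
         for l in range(4) for w in range(1, 5)}
    alpha, gamma = 0.3, 0.9
    load, workers = rng.randint(0, 3), 1
    for _ in range(steps):
        a = rng.choice(actions)
        new_workers = min(4, max(1, workers + a))
        reward = -10 * max(0, load - new_workers) - new_workers
        new_load = rng.randint(0, 3)             # synthetic next load
        s, s2 = (load, workers), (new_load, new_workers)
        q[s][a] += alpha * (reward + gamma * max(q[s2].values()) - q[s][a])
        load, workers = new_load, new_workers
    return q
```

After training, the greedy policy derived from the Q-table adds a worker under heavy load and sheds workers when the load is low, which is the elasticity behavior the thesis targets.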
Benslimane, Djamal. "Etudes de l'apport des techniques de parallélisme dans l'amélioration des performances des systèmes à base de règles de production." Clermont-Ferrand 2, 1990. http://www.theses.fr/1990CLF21287.
Vilarem, Jean-François. "Contrôle de concurrence mixte en environnement distribué : une méthode fusionnant verrouillage et certification." Montpellier 2, 1989. http://www.theses.fr/1989MON20023.
Brunie, Hugo. "Optimisation des allocations de données pour des applications du Calcul Haute Performance sur une architecture à mémoires hétérogènes." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0014/document.
High Performance Computing, which brings together all the players responsible for improving the computing performance of scientific applications on supercomputers, aims to achieve exascale performance. This race for performance is today characterized by the manufacture of heterogeneous machines in which each component is specialized. Among these components, system memories specialize too, and the trend is towards an architecture composed of several memories with complementary characteristics. The question then arises of how to use these new machines, whose practical performance depends on the placement of application data across the different memories. Trading off code modification effort against performance is challenging. In this thesis, we developed a formulation of the data allocation problem on heterogeneous memory architectures. In this formulation, we showed the benefit of a temporal analysis of the problem, whereas many studies relied solely on a spatial approach; this result highlights their weakness. From this formulation, we developed an offline profiling tool to approximate the coefficients of the objective function, in order to solve the allocation problem and optimize the allocation of data on a composite architecture composed of two main memories with complementary characteristics. To reduce the amount of code change needed to execute an application according to the allocation strategy recommended by our toolbox, we developed a tool that can automatically redirect data allocations from a minimal source-code instrumentation. The performance gains obtained on mini-applications representative of the scientific applications coded by the community make it possible to assert that intelligent data allocation is necessary to fully benefit from heterogeneous memory resources. On some problem sizes, the gain between a naive data placement strategy and an educated one can reach up to a 3.75× speedup.
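The thesis's profiling-driven formulation is not reproduced in the abstract; to illustrate the knapsack-like core of the placement decision, here is a toy greedy sketch that fills the fast memory with the objects of highest access density and spills the rest to the slow memory. The object model and the density heuristic are assumptions: the actual work uses temporal analysis and an objective function fitted by offline profiling, not this static heuristic:

```python
def place_data(objects, fast_capacity):
    """Greedy placement for a two-memory architecture.
    objects: list of (name, size, access_count) tuples.
    Objects are ranked by access density (accesses per unit of size);
    the densest ones go to fast memory until its capacity is exhausted,
    the rest go to slow memory. A heuristic for the underlying
    knapsack-like problem, not an exact solver."""
    ranked = sorted(objects, key=lambda o: o[2] / o[1], reverse=True)
    fast, slow, used = [], [], 0
    for name, size, accesses in ranked:
        if used + size <= fast_capacity:
            fast.append(name)
            used += size
        else:
            slow.append(name)
    return fast, slow
```

Even this crude heuristic shows why placement matters: a hot but small object displaces a large, rarely touched one from the scarce fast memory.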
Bekele, Dawit. "Contribution à l'étude de la répartition d'applications écrites en langage ADA 83." Toulouse 3, 1994. http://www.theses.fr/1994TOU30069.
El, Attar Ali. "Estimation robuste des modèles de mélange sur des données distribuées." Phd thesis, Université de Nantes, 2012. http://tel.archives-ouvertes.fr/tel-00746118.
Benkrid, Soumia. "Le déploiement, une phase à part entière dans le cycle de vie des entrepôts de données : application aux plateformes parallèles." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2014. http://www.theses.fr/2014ESMA0027/document.
Designing a parallel data warehouse consists of choosing the hardware architecture, fragmenting the data warehouse schema, allocating the generated fragments, replicating fragments to ensure high system performance, and defining the treatment strategy and load balancing. The major drawback of this design cycle is its ignorance of the interdependence between the subproblems related to the design of a PDW and the use of heterogeneous metrics to achieve the same goal. Our first proposal defines an analytical cost model for the parallel processing of OLAP queries in a cluster environment. Our second takes into account the interdependence existing between fragmentation and allocation. In this context, we proposed a new approach to design a PDW on a cluster machine. During the fragmentation process, our approach determines whether the fragmentation pattern generated is relevant to the allocation process or not. The results are very encouraging, and validation is done on Teradata. For our third proposition, we presented a design method which is an extension of our work. In this phase, an original replication method based on fuzzy logic is integrated.
Luong, Duc-Hung. "On resource allocation in cloudified mobile network." Thesis, La Rochelle, 2019. http://www.theses.fr/2019LAROS031.
Mobile traffic has been increasing dramatically in recent years, along with the evolution toward the next generation of mobile networks (5G). To face these increasing demands, Network Function Virtualization (NFV), Software Defined Networking (SDN) and Cloud Computing emerged to provide more flexibility and elasticity for mobile networks toward 5G. However, the design of these softwarization technologies for the mobile network is not sufficient by itself, as mobile services also have critical requirements in terms of quality of service and user experience that still need to be fulfilled. Therefore, this thesis focuses on how to apply softwarization efficiently to mobile network services and to associate flexible resource allocation with it. The main objective of this thesis is to propose an architecture leveraging virtualization technologies and cloud computing on the legacy mobile network architecture. The proposal not only brings flexibility as well as high availability to the network infrastructure but also satisfies the quality-of-service requirements of future mobile services. More specifically, we first studied the use of the "cloud-native" approach and "microservices" for the creation of core network components and those of the radio access network (RAN) toward 5G. Then, in order to maintain a target level of quality of service, we dealt with the problem of the automatic scaling of microservices, via a predictive approach that we propose to avoid service degradation. It is integrated with an autonomous orchestration platform for mobile network services. Finally, we also proposed and implemented a multi-level scheduler, which allows both to manage the resources allocated to a virtualized mobile network, called a "slice", and above all to manage the resources allocated to several network instances deployed within the same physical infrastructure. All these proposals were implemented and evaluated on a realistic test bench.
Gara, Slim. "Allocation dynamique des ressources pour la transmission de la vidéo sur un réseau ATM." Versailles-St Quentin en Yvelines, 1998. http://www.theses.fr/1998VERS0007.