Dissertations / Theses on the topic 'Decentralized computing'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 39 dissertations / theses for your research on the topic 'Decentralized computing.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Lu, Kai. "Decentralized load balancing in heterogeneous computational grids." Thesis, The University of Sydney, 2007. http://hdl.handle.net/2123/9382.
Full text
Josilo, Sladana. "Decentralized Algorithms for Resource Allocation in Mobile Cloud Computing Systems." Licentiate thesis, KTH, Nätverk och systemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-228084.
Full text
Ferreira, Heitor José Simões Baptista. "4Sensing - decentralized processing for participatory sensing data." Master's thesis, Faculdade de Ciências e Tecnologia, 2010. http://hdl.handle.net/10362/5091.
Full text
Participatory sensing is a new application paradigm, stemming from both technical and social drives, which is currently gaining momentum as a research domain. It leverages the growing adoption of mobile phones equipped with sensors, such as cameras, GPS and accelerometers, enabling users to collect and aggregate data covering a wide area without incurring the costs associated with a large-scale sensor network. Related research in participatory sensing usually proposes an architecture based on a centralized back-end. Centralized solutions raise a set of issues. On the one hand, there are the privacy implications of a centralized repository hosting sensitive information. On the other hand, this centralized model has financial costs that can discourage grassroots initiatives. This dissertation focuses on the data management aspects of a decentralized infrastructure for the support of participatory sensing applications, leveraging the body of work on participatory sensing and related areas, such as wireless and internet-wide sensor networks, peer-to-peer data management and stream processing. It proposes a framework covering a common set of data management requirements - from data acquisition to processing, storage and querying - with the goal of lowering the barrier for the development and deployment of applications. Alternative architectural approaches - RTree, QTree and NTree - are proposed and evaluated experimentally in the context of a case-study application - SpeedSense - supporting the monitoring and prediction of traffic conditions through the collection of speed and location samples in an urban setting, using GPS-equipped mobile phones.
Wilson, Dany. "Architecture for a Fully Decentralized Peer-to-Peer Collaborative Computing Platform." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32790.
Full text
Bicak, Mesude. "Agent-based modelling of decentralized ant behaviour using high performance computing." Thesis, University of Sheffield, 2011. http://etheses.whiterose.ac.uk/1392/.
Full textTkachuk, Roman-Valentyn. "Towards Decentralized Orchestration of Next-generation Cloud Infrastructures." Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21345.
Full textBarker, James W. "A Low-Cost, Decentralized Distributed Computing Architecture for an Autonomous User Environment." NSUWorks, 1998. http://nsuworks.nova.edu/gscis_etd/402.
Full textTordsson, Johan. "Decentralized resource brokering for heterogeneous grid environments." Licentiate thesis, Umeå : Department of Computing Science, Umeå University, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-966.
Full textAgarwal, Radhika. "DRAP: A Decentralized Public Resourced Cloudlet for Ad-Hoc Networks." Thèse, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/30682.
Full textWang, Mianyu Kam Moshe Kandasamy Nagarajan. "A decentralized control and optimization framework for autonomic performance management of web-server systems /." Philadelphia, Pa. : Drexel University, 2007. http://hdl.handle.net/1860/2643.
Full textAriyattu, Resmi. "Towards federated social infrastructures for plug-based decentralized social networks." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S031/document.
Full textIn this thesis, we address two issues in the area of decentralized distributed systems: network-aware overlays and collaborative editing. Even though network overlays have been extensively studied, most solutions either ignores the underlying physical network topology, or uses mechanisms that are specific to a given platform or applications. This is problematic, as the performance of an overlay network strongly depends on the way its logical topology exploits the underlying physical network. To address this problem, we propose Fluidify, a decentralized mechanism for deploying an overlay network on top of a physical infrastructure while maximizing network locality. Fluidify uses a dual strategy that exploits both the logical links of an overlay and the physical topology of its underlying network to progressively align one with the other. The resulting protocol is generic, efficient, scalable and can substantially improve network overheads and latency in overlay based systems. The second issue that we address focuses on collaborative editing platforms. Distributed collaborative editors allow several remote users to contribute concurrently to the same document. Only a limited number of concurrent users can be supported by the currently deployed editors. A number of peer-to-peer solutions have therefore been proposed to remove this limitation and allow a large number of users to work collaboratively. These decentralized solution assume however that all users are editing the same set of documents, which is unlikely to be the case. To open the path towards more flexible decentralized collaborative editors, we present Filament, a decentralized cohort-construction protocol adapted to the needs of large-scale collaborative editors. Filament eliminates the need for any intermediate DHT, and allows nodes editing the same document to find each other in a rapid, efficient and robust manner by generating an adaptive routing field around themselves. 
Filament's architecture hinges on a set of collaborating self-organizing overlays that utilize the semantic relations between peers. The resulting protocol is efficient, scalable, and provides beneficial load-balancing properties over the involved peers.
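The Fluidify abstract above describes progressively aligning an overlay's logical links with the underlying physical topology. The following toy sketch illustrates that general idea only: a greedy link-swap over synthetic one-dimensional coordinates, with all names and parameters invented for illustration rather than taken from the thesis.

```python
import random

def physical_cost(overlay, coords):
    """Total physical distance spanned by the overlay's logical links."""
    return sum(abs(coords[a] - coords[b]) for a, b in overlay)

def localize(overlay, coords, rounds=10000, seed=42):
    """Greedy link swapping: repeatedly pick two logical links and exchange
    their endpoints if doing so reduces the physical distance covered.
    Each swap preserves every node's degree, so only locality changes."""
    rng = random.Random(seed)
    overlay = list(overlay)
    for _ in range(rounds):
        i, j = rng.randrange(len(overlay)), rng.randrange(len(overlay))
        (a, b), (c, d) = overlay[i], overlay[j]
        old = abs(coords[a] - coords[b]) + abs(coords[c] - coords[d])
        new = abs(coords[a] - coords[d]) + abs(coords[c] - coords[b])
        if new < old:
            overlay[i], overlay[j] = (a, d), (c, b)
    return overlay

# A random overlay over 32 nodes placed on a line.
rng = random.Random(1)
coords = {n: rng.uniform(0, 100) for n in range(32)}
links = [(rng.randrange(32), rng.randrange(32)) for _ in range(64)]
before = physical_cost(links, coords)
after = physical_cost(localize(links, coords), coords)
assert after <= before
```

The actual protocol is decentralized (each node decides from local knowledge), whereas this centralized loop only conveys the cost function being optimized.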
Ben Hafaiedh, Khaled. "Studying the Properties of a Distributed Decentralized b+ Tree with Weak-Consistency." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/20578.
Full textAdam, Constantin. "A Middleware for Self-Managing Large-Scale Systems." Doctoral thesis, KTH, Skolan för elektro- och systemteknik (EES), 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4178.
Full textQC 20100629
Huhtinen, J. (Jouni). "Utilization of neural network and agent technology combination for distributed intelligent applications and services." Doctoral thesis, University of Oulu, 2005. http://urn.fi/urn:isbn:9514278550.
Full textAdam, Constantin. "Scalable Self-Organizing Server Clusters with Quality of Service Objectives." Licentiate thesis, KTH, School of Electrical Engineering (EES), 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272.
Full textAdvanced architectures for cluster-based services that have been recently proposed allow for service differentiation, server overload control and high utilization of resources. These systems, however, rely on centralized functions, which limit their ability to scale and to tolerate faults. In addition, they do not have built-in architectural support for automatic reconfiguration in case of failures or addition/removal of system components.
Recent research in peer-to-peer systems and distributed management has demonstrated the potential benefits of decentralized over centralized designs: a decentralized design can reduce the configuration complexity of a system and increase its scalability and fault tolerance.
This research focuses on introducing self-management capabilities into the design of cluster-based services. Its intended benefits are to make service platforms dynamically adapt to the needs of customers and to environment changes, while giving the service providers the capability to adjust operational policies at run-time.
We have developed a decentralized design that efficiently allocates resources among multiple services inside a server cluster. The design combines the advantages of both centralized and decentralized architectures. It allows associating a set of QoS objectives with each service. In case of overload or failures, the quality of service degrades in a controllable manner. We have evaluated the performance of our design through extensive simulations. The results have been compared with performance characteristics of ideal systems.
Helmy, Ahmed. "Energy-Efficient Bandwidth Allocation for Integrating Fog with Optical Access Networks." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39912.
Full textSolat, Siamak. "Novel fault-tolerant, self-configurable, scalable, secure, decentralized, and high-performance distributed database replication architecture using innovative sharding to enable the use of BFT consensus mechanisms in very large-scale networks." Electronic Thesis or Diss., Université Paris Cité, 2023. http://www.theses.fr/2023UNIP7025.
Full text
This PhD thesis consists of six chapters. In the first chapter, as an introduction, we provide an overview of the general goals and motives of decentralized and permissionless networks, as well as the obstacles they face. In the introduction, we also refer to the solution known as "permissioned blockchain," proposed to improve the performance of networks similar to Bitcoin, which we argue is irrational and illogical; this matter is detailed in Chapter 5. In Chapter 2, we describe the systems on which the proposed idea, Parallel Committees, is based, and detail the indispensable features of, and essential challenges in, replication systems. Then, in Chapter 3, we discuss in detail the low performance and scalability limitations of replication systems that use consensus mechanisms to process transactions, and how these issues can be improved using the sharding technique. We describe the most important challenges in sharding distributed replication systems, an approach that has already been implemented in several blockchain-based replication systems; although it has shown remarkable potential to improve performance and scalability, current sharding techniques still have several significant scalability and security issues. We explain why, for security reasons, most current sharding protocols use a random assignment approach for allocating and distributing nodes between shards. We also detail how a transaction is processed in a sharded replication system under current sharding protocols. We describe how a shared ledger across shards imposes additional scalability limitations and security issues on the network, and explain why cross-shard or inter-shard transactions are undesirable and more costly, due to the problems they cause, including atomicity failure and state transition challenges, along with a review of proposed solutions.
We also review some of the most notable recent works that utilize sharding techniques for replication systems. This part of the work has been published as a peer-reviewed book chapter in "Building Cybersecurity Applications with Blockchain Technology and Smart Contracts" (Springer, 2023). In Chapter 4, we propose a novel sharding technique, Parallel Committees, supporting both processing and storage/state sharding, to improve the scalability and performance of distributed replication systems that use consensus to process clients' requests. We introduce a novel approach to distributing nodes between shards, using a public key generation process that simultaneously mitigates Sybil attacks and serves as a proof-of-work mechanism. Our approach effectively reduces undesirable cross-shard transactions, which are more complex and costly to process than intra-shard transactions. The proposed idea has been published in the peer-reviewed proceedings of IEEE BCCA 2023. We then explain why we do not make use of a blockchain structure in the proposed idea, an issue that is discussed in great detail in Chapter 5. This clarification has been published in the Journal of Software (JSW), Volume 16, Number 3, May 2021. In the final chapter of this thesis, Chapter 6, we summarize the important points and conclusions of this research.
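The node-to-shard assignment this abstract describes (key generation acting both as proof-of-work and as random shard placement) can be sketched minimally as follows. This is an illustrative reconstruction of the general technique, not the thesis's actual protocol: the difficulty parameter is invented, and a random byte string stands in for a real public key.

```python
import hashlib
import os

def generate_shard_key(num_shards, difficulty_bits=12):
    """A joining node grinds candidate public keys until the key's hash
    falls below a difficulty target. The grinding is a proof-of-work that
    rate-limits Sybil identities, and the same hash pseudo-randomly fixes
    the node's shard, so a node cannot choose which committee it joins."""
    target = 1 << (256 - difficulty_bits)
    attempts = 0
    while True:
        attempts += 1
        pubkey = os.urandom(32)                 # stand-in for a real public key
        value = int.from_bytes(hashlib.sha256(pubkey).digest(), "big")
        if value < target:
            return pubkey, value % num_shards, attempts

key, shard, tries = generate_shard_key(num_shards=8)
assert 0 <= shard < 8
```

Because the shard index is derived from a hash the node cannot predict before paying the grinding cost, an attacker cannot cheaply concentrate identities in one target shard.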
Mason, Richard S. "A framework for fully decentralised cycle stealing." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/26039/1/Richard_Mason_Thesis.pdf.
Full text
Full textLi, Jia. "Arms: a decentralised naming model for object-based distributed computing systems." Thesis, Li, Jia (2010) Arms: a decentralised naming model for object-based distributed computing systems. PhD thesis, Murdoch University, 2010. https://researchrepository.murdoch.edu.au/id/eprint/5122/.
Full textMundy, David H. "Decentralised control flow : a computational model for distributed systems." Thesis, University of Newcastle Upon Tyne, 1988. http://hdl.handle.net/10443/2050.
Full textGutierrez, Soto Mariantonieta. "MULTI-AGENT REPLICATOR CONTROL METHODOLOGIES FOR SUSTAINABLE VIBRATION CONTROL OF SMART BUILDING AND BRIDGE STRUCTURES." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1494249419696286.
Full textBlatt, Nicole I. "Trust and influence in the information age operational requirements for network centric warfare." Thesis, Monterey, California. Naval Postgraduate School, 2004. http://hdl.handle.net/10945/1313.
Full text
Military leaders and scholars alike debate the existence of a revolution in military affairs (RMA) based on information technology. This thesis shows that the Information RMA not only exists but will also reshape how we plan, operate, educate, organize, train, and equip forces for the 21st century. It introduces the Communication Technology (CommTech) Model to explain how communication technologies affect organizations, leadership styles, and decision-making processes. Due to the growth of networked enterprises, leaders will have to relinquish tight, centralized control over subordinates. Instead, they will have to perfect softer power skills such as influence and trust as they embrace decentralized decision-making. Network Centric Warfare, Self-Synchronization, and Network Enabled Operations are concepts that provide the framework for integrating information technology into the battlespace. The debate over centralized versus decentralized control in network operations is analyzed with respect to the CommTech Model. A new term, Operational Trust, is introduced and developed, identifying ways to make it easier to build trust among network entities. Finally, the thesis focuses on what leaders need to do to shape network culture for effective operations.
Major, United States Air Force
Babovic, Vladan. "Emergence, evolution, intelligence: hydroinformatics: a study of distributed and decentralised computing using intelligent agents." Rotterdam [etc.]: Balkema, 1996. http://opac.nebis.ch/cgi-bin/showAbstract.pl?u20=905410404X.
Full textJavet, Ludovic. "Privacy-preserving distributed queries compatible with opportunistic networks." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG038.
Full text
In today's society, where the IoT and digital platforms are transforming our daily lives, personal data is generated in profusion and its usage is often beyond our control. Recent legislation like the GDPR in Europe proposes concrete solutions to regulate these new practices and protect our privacy. Meanwhile, on the technical side, new architectures are emerging to respond to this urgent need to reclaim our own personal data. This is the case of Personal Data Management Systems (PDMS), which offer a decentralized way to store and manage personal data, empowering individuals with greater control over their digital lives. This thesis explores the distributed use of these PDMS in an Opportunistic Network context, where messages are transferred from one device to another without the need for any infrastructure. The objective is to enable the implementation of complex processing that crosses data from thousands of individuals, while guaranteeing the security and fault tolerance of the executions. The proposed approach leverages Trusted Execution Environments to define a new computing paradigm, entitled Edgelet computing, that satisfies validity, resiliency and privacy properties. Contributions include: (1) security mechanisms to protect executions from malicious attacks seeking to plunder personal data, (2) resiliency strategies to tolerate failures and message losses induced by the fully decentralized environment, and (3) extensive validations and practical demonstrations of the proposed methods.
Cuadrado-Cordero, Ismael. "Microclouds : an approach for a network-aware energy-efficient decentralised cloud." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S003/document.
Full text
The current datacenter-centralized architecture limits the cloud to the location of the datacenters, generally far from the user. This architecture collides with the latest trend of ubiquity of cloud computing. Moreover, the current estimated energy usage of data centers and core networks adds up to 3% of global energy production, while according to the latest estimations only 42.3% of the population is connected. In this work, we focus on two drawbacks of datacenter-centralized clouds: energy consumption and poor quality of service. On the one hand, due to its centralized nature, energy consumption in networks is affected by the centralized vision of the cloud: backbone networks increase their energy consumption in order to connect clients to the datacenters. On the other hand, distance leads to increased utilization of the broadband Wide Area Network and poor user experience, especially for interactive applications. A distributed approach can provide a better Quality of Experience (QoE) for large urban populations in mobile cloud networks. To do so, the cloud should confine local traffic close to the user, running on the users' and network devices. In this work, we propose a novel distributed cloud architecture based on microclouds. Microclouds are dynamically created and allow users to contribute resources from their computers, mobile and network devices to the cloud. This way, they provide a dynamic and scalable system without the need for extra investment in infrastructure. We also provide a description of a realistic mobile cloud use case, and the adaptation of microclouds to it. Through simulations, we show that our approach saves up to 75% of the energy consumed in standard centralized clouds. Our results also indicate that this architecture scales with the number of mobile devices and provides significantly lower latency than regular datacenter-centralized approaches.
Finally, we analyze the use of incentives for mobile clouds, and propose a new auction system adapted to the high dynamism and heterogeneity of these systems. We compare our solution to other existing auction systems in a mobile cloud use case, and show the suitability of our solution.
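The abstract above mentions an auction system for allocating contributed resources. As background only, a classic sealed-bid second-price (Vickrey) auction, the standard baseline such mechanisms are compared against, can be sketched as follows; this is a generic textbook mechanism, not the thesis's specific design.

```python
def vickrey_allocate(bids):
    """Sealed-bid second-price auction: the highest bidder wins the
    resource but pays only the second-highest bid, which makes truthful
    bidding a dominant strategy for every participant."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

# Three hypothetical devices bid for a slice of microcloud capacity.
winner, price = vickrey_allocate({"dev-a": 5, "dev-b": 8, "dev-c": 3})
assert (winner, price) == ("dev-b", 5)
```

In a highly dynamic mobile setting, bidders join and leave mid-auction, which is precisely why a static mechanism like this needs adaptation.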
Hamidouche, Lyes. "Vers une dissémination efficace de données volumineuses sur des réseaux wi-fi denses." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS188/document.
Full text
We are witnessing a proliferation of mobile technologies and an increasing volume of data used by mobile applications. Devices thus consume more and more bandwidth. In this thesis, we focus on dense Wi-Fi networks during large-scale events (such as conferences). In this context, the bandwidth consumption and the interference caused by the parallel downloads of a large volume of data by several mobile devices connected to the same Wi-Fi network degrade the performance of the dissemination. Device-to-Device (D2D) communication technologies such as Bluetooth or Wi-Fi Direct can be used to improve network performance and deliver better QoE to users. In this thesis we propose two approaches for improving the performance of data dissemination. The first approach, better suited to a dynamic configuration, is to use point-to-point D2D connections on a flat topology for data exchange. Our evaluations show that our approach can reduce dissemination times by up to 60% compared to using Wi-Fi alone. In addition, we ensure a fair distribution of the energy load on the devices to preserve the weakest batteries in the network. We have observed that by taking into account the battery life and the bandwidth of mobile devices, the solicitation of the weakest batteries can be reduced significantly. The second approach, better adapted to static configurations, consists in setting up hierarchical topologies by gathering mobile devices into small clusters. In each cluster, a device is chosen to relay the data it receives from the server and forward it to its neighbors. This approach helps manage interference more efficiently by adjusting the signal strength to limit cluster reach. In this case, we observed gains of up to 30% in dissemination time.
In the continuity of this thesis work, we discuss three perspectives that would be worth pursuing, in particular the automatic adaptation of the dissemination to the state of the network and the simultaneous use of both topology types, flat and hierarchical.
Debbabi, Bassem. "Cube : a decentralised architecture-based framework for software self-management." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM004/document.
Full text
In recent years, the world has witnessed the rapid emergence of several novel technologies and computing environments, including cloud computing, ubiquitous computing and sensor networks. These environments have been rapidly capitalised upon for building new types of applications and bringing added value to users. At the same time, the resulting applications have raised a number of significant new challenges, mainly related to system design, deployment and life-cycle management at runtime. Such challenges stem from the very nature of these novel environments, characterized by large scale, high distribution, resource heterogeneity and increased dynamism. The main objective of this thesis is to provide a generic, reusable and extensible self-management solution for these types of applications, in order to help alleviate this stringent problem. We are particularly interested in providing support for the runtime management of system architecture and life-cycle, focusing on applications that are component-based and that run in highly dynamic, distributed and large-scale environments. In order to achieve this goal, we propose a synergistic solution - the Cube framework - that combines techniques from several adjacent research domains, including self-organization, constraint satisfaction, self-adaptation and self-reflection based on architectural models. In this solution, a set of decentralised Autonomic Managers self-organize dynamically in order to build and administer a target application, by following a shared description of administrative goals. This formal description, called an Archetype, contains a graph-oriented specification of the application elements to manage and of various constraints associated with these elements. A prototype of the Cube framework has been implemented for the particular application domain of data mediation. Experiments have been carried out in the context of two national research projects: Self-XL and Medical.
The results obtained indicate the viability of the proposed solution for creating, repairing and adapting component-based applications running in distributed, volatile and evolving environments.
Scriven, Ian Michael. "Derivation and Application of Approximate Electromagnetic Noise Source Models using Decentralised Parallel Particle Swarm Optimisation." Thesis, Griffith University, 2012. http://hdl.handle.net/10072/367576.
Full text
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Griffith School of Engineering
Science, Environment, Engineering and Technology
Luxey, Adrien. "Les e-squads : un nouveau paradigme pour la conception d'applications ubiquitaires respectant le droit à la vie privée." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S071.
Full text
The emergence of the Internet of Things (IoT) has proved dangerous for people's right to privacy: more and more connected appliances share personal data about people's daily lives with private parties beyond their control. Still, we believe that the IoT could just as well become a guardian of our privacy if it escaped the centralised cloud model. In this thesis, we explore a novel concept, the e-squad: making one's connected devices collaborate through gossip communication to build new privacy-preserving services. We show how an e-squad can privately build knowledge about its user and leverage this information to create predictive applications. We demonstrate that collaboration helps overcome the end devices' poor availability and performance. Finally, we federate e-squads to build a robust anonymous file exchange service, proving their potential beyond the single-user scenario. We hope this manuscript inspires the advent of more humane and delightful technology.
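The e-squad concept above relies on gossip communication among a user's devices. A minimal round-based push-gossip simulation, purely illustrative and not taken from the thesis, shows why such protocols disseminate an update quickly despite no device having a global view:

```python
import random

def push_gossip(n_devices, fanout=2, seed=7):
    """Toy push-gossip simulation: each round, every informed device
    forwards the update to `fanout` peers chosen uniformly at random,
    until the whole squad is informed. Returns the number of rounds."""
    rng = random.Random(seed)
    informed = {0}                      # device 0 originates the update
    rounds = 0
    while len(informed) < n_devices:
        rounds += 1
        for _ in list(informed):
            for _ in range(fanout):
                informed.add(rng.randrange(n_devices))
    return rounds

rounds = push_gossip(50)
assert rounds >= 1
```

The informed set roughly multiplies each round, so full coverage is reached in a number of rounds logarithmic in the squad size, which is what makes gossip attractive for rarely available devices.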
Paulo, João Tiago Medeiros. "Dependable decentralized storage management for cloud computing." Doctoral thesis, 2015. http://hdl.handle.net/1822/38462.
Full text
The volume of worldwide digital information is growing and will continue to grow at an impressive rate. Storage deduplication is accepted as a valuable technique for handling such data explosion: by eliminating unnecessary duplicate content from storage systems, both hardware and storage management costs can be reduced. Nowadays, this technique is applied to distinct storage types and is increasingly desired in cloud computing infrastructures, where a significant portion of worldwide data is stored. However, designing a deduplication system for cloud infrastructures is a complex task, as duplicates must be found and eliminated across a distributed cluster that supports virtual machines and applications with strict storage performance requirements. The core of this dissertation addresses precisely the challenges of cloud infrastructure deduplication. We start by surveying and comparing the existing deduplication systems and the distinct storage environments targeted by them. This discussion is missing in the literature and is important for understanding the novel issues that must be addressed by cloud deduplication systems. Then, as our main contribution, we introduce our own deduplication system, which eliminates duplicates across virtual machine volumes in a distributed cloud infrastructure. Redundant content is found and removed in a cluster-wide fashion while having a negligible impact on the performance of applications using the deduplicated volumes. Our prototype is evaluated in a real distributed setting with a benchmark suited for deduplication systems, which is also a contribution of this dissertation.
Fundação para a Ciência e Tecnologia (FCT) doctoral grant SFRH/BD/71372/2010.
Bergamini, Lorenzo. "Opportunistic computing in fully decentralized and mobile networks." Doctoral thesis, 2013. http://hdl.handle.net/11573/918747.
Full text
"Centralized and Decentralized Methods of Efficient Resource Allocation in Cloud Computing." Doctoral diss., 2016. http://hdl.handle.net/2286/R.I.40818.
Full text
Dissertation/Thesis
Doctoral Dissertation Industrial Engineering 2016
Larkin, Jeffrey Michael. "Automatic service discovery for grid computing: a decentralized approach to the grid." 2005. http://etd.utk.edu/2005/LarkinJeffrey.pdf.
Full text
Title from title page screen (viewed on Aug. 30, 2005). Thesis advisor: Jack Dongarra. Document formatted into pages (v, 28 p. : ill. (some col.)). Vita. Includes bibliographical references (p. 19-21).
Chung, Wu-Chun, and 鍾武君. "Decentralized Management of Resource Information and Discovery in Large-Scale Distributed Computing." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/60346442894734830336.
Full text
National Tsing Hua University (國立清華大學)
Department of Computer Science (資訊工程學系)
Academic year 101
The increasing demand for computing power and services drives the development of new paradigms of large-scale distributed computing systems such as Grids and Clouds. To efficiently manage diverse and scattered resources, a resource information and discovery system is employed to record the status of resource attributes and to locate the resources that fulfill the demands' requirements. As the scale of a system expands, the major difficulty is to prevent communication bottlenecks, single points of failure, and load imbalance in the federation of resources distributed over computer networks. In view of this, decentralized management of large-scale distributed resources has attracted growing attention. With its inherent scalability and robustness, the Peer-to-Peer (P2P) approach is attractive for such environments. In this dissertation, we conduct a series of studies on decentralized resource management for distributed computing environments. The first study investigates the synergy between Grids and P2P, introducing a decentralized Grid-to-Grid (G2G) framework that harmonizes autonomic Grid resources to realize Grid federation. Second, we further explore employing P2P networks for resource information and discovery in large-scale computing environments, proposing corresponding solutions for both structured and unstructured networks. The main results show that our approaches efficiently organize resource information across the distributed environment and locate available resources under decentralized management. Finally, in the third study, we are concerned with overlay maintenance when multiple overlays cohabit in a distributed computing system; the goal of simplifying common overlay maintenance motivates a cooperative strategy for multi-overlay maintenance. Experimental results demonstrate that the cost of multi-overlay maintenance can be significantly reduced by applying our approach.
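As a rough illustration of the structured-overlay style of resource indexing discussed in the abstract (the G2G framework itself is not shown; the class and node names below are hypothetical), a DHT-like scheme maps each resource attribute key onto a responsible node via consistent hashing:

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Minimal structured-overlay sketch: resource keys (e.g. attribute
    strings like 'cpu=8') are mapped onto the node whose position on the
    hash ring follows the key's hashed position."""
    def __init__(self, nodes):
        self.ring = sorted((self._h(n), n) for n in nodes)

    @staticmethod
    def _h(s):
        # SHA-1 digest interpreted as an integer position on the ring
        return int(hashlib.sha1(s.encode()).hexdigest(), 16)

    def lookup(self, key):
        """Return the node responsible for the given resource key."""
        h = self._h(key)
        # first ring entry strictly after the key's position, wrapping around
        pos = bisect_right(self.ring, (h, chr(0x10FFFF)))
        return self.ring[pos % len(self.ring)][1]
```

Any node can then answer "which peer indexes resources with this attribute?" locally, avoiding the centralized index whose bottlenecks the abstract warns about.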
Daneshmand, Amir. "Parallel and Decentralized Algorithms for Big-data Optimization over Networks." Thesis, 2021.
Find full text
Recent decades have witnessed the rise of a data deluge generated by heterogeneous sources, e.g., social networks, streaming, and marketing services, which has naturally created a surge of interest in the theory and applications of large-scale convex and non-convex optimization. For example, real-world instances of statistical learning problems such as deep learning and recommendation systems can generate sheer volumes of spatially and temporally diverse data (up to petabytes in commercial applications) with millions of decision variables to be optimized. Such problems are often referred to as big-data problems. Solving them by standard optimization methods demands an intractable amount of centralized storage and computational resources, which is infeasible; overcoming this limitation is the foremost purpose of the parallel and decentralized algorithms developed in this thesis.
This thesis consists of two parts: (I) Distributed Nonconvex Optimization and (II) Distributed Convex Optimization.
In Part (I), we start by studying a winning paradigm in big-data optimization, the Block Coordinate Descent (BCD) algorithm, which ceases to be effective when problem dimensions grow overwhelmingly large. In particular, we consider a general family of constrained non-convex composite large-scale problems defined on multicore computing machines equipped with shared memory. We design a hybrid deterministic/random parallel algorithm that efficiently solves such problems by synergistically combining Successive Convex Approximation (SCA) with greedy/random dimensionality-reduction techniques. We provide theoretical and empirical results showing the efficacy of the proposed scheme in the face of huge-scale problems. The next step is to broaden the network setting to general mesh networks modeled as directed graphs, for which we propose a class of gradient-tracking-based algorithms with global convergence guarantees to critical points of the problem. We further explore the geometry of the landscape of the non-convex problems to establish second-order guarantees and, for a wide range of machine learning problems, strengthen our convergence results from local to global optimal solutions.
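The block-coordinate idea underlying Part (I) can be sketched on a toy least-squares problem. The example below uses plain randomized block gradient steps as a stand-in; it is not the thesis's hybrid SCA scheme, and the function name and step-size choice are illustrative assumptions:

```python
import numpy as np

def block_coordinate_descent(A, b, block_size=2, iters=500, seed=0):
    """Randomized block coordinate descent for min 0.5 * ||Ax - b||^2:
    each iteration updates one randomly chosen block of variables with a
    gradient step of size 1/L, where L is a global Lipschitz constant."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2  # spectral norm squared of A
    blocks = [range(i, min(i + block_size, n)) for i in range(0, n, block_size)]
    for _ in range(iters):
        blk = list(blocks[rng.integers(len(blocks))])
        grad_blk = A[:, blk].T @ (A @ x - b)  # partial gradient on the block
        x[blk] -= grad_blk / L
    return x
```

Each iteration touches only one block of variables, which is what makes BCD-style methods cheap per step; the thesis's contribution is making such updates effective again at huge scale via SCA surrogates and dimensionality reduction.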
In Part (II), we focus on a family of distributed convex optimization problems defined over meshed networks. Relevant state-of-the-art algorithms often consider limited problem settings and have pessimistic communication complexities with respect to their centralized variants, which raises an important question: can one achieve the rate of centralized first-order methods over networks, and moreover, can one improve upon their communication costs by using higher-order local solvers? To answer these questions, we propose an algorithm that utilizes surrogate objective functions in the local solvers (hence going beyond first-order realms such as proximal gradient), coupled with a perturbed (push-sum) consensus mechanism that tracks locally the gradient of the central objective function. The algorithm is proved to match the convergence rate of its centralized counterpart, up to multiplicative network factors. When considering, in particular, Empirical Risk Minimization (ERM) problems with statistically homogeneous data across the agents, our algorithm employing high-order surrogates provably achieves faster rates than those achievable by first-order methods. These improvements are made without exchanging any Hessian matrices over the network.
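The push-sum consensus component mentioned above builds on the classical push-sum averaging protocol, which can be sketched as follows. This is the plain averaging version, without the gradient-tracking perturbation the thesis adds; the matrix P is assumed column-stochastic (each node's outgoing shares sum to one):

```python
import numpy as np

def push_sum_average(values, P, rounds=100):
    """Push-sum consensus over a directed network: each agent holds a
    value x_i and a weight w_i, repeatedly receives scaled shares along
    in-edges (P is column-stochastic), and estimates the network average
    as the ratio x_i / w_i."""
    x = np.array(values, dtype=float)
    w = np.ones_like(x)
    for _ in range(rounds):
        x = P @ x  # column-stochasticity preserves sum(x)
        w = P @ w  # and sum(w); the ratio corrects the directed bias
    return x / w
```

Because P need only be column-stochastic (not doubly stochastic), the ratio trick makes averaging work on directed graphs, which is why push-sum is the natural consensus primitive for the mesh networks considered in the thesis.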
Finally, we focus on the ill-conditioning issue impacting the efficiency of decentralized first-order methods over networks, which renders them impractical in terms of both computation and communication cost. A natural solution is to develop distributed second-order methods, but their requirement for Hessian information incurs substantial communication overhead on the network. To work around such exorbitant communication costs, we propose a "statistically informed" preconditioned cubic regularized Newton method that provably improves upon the rates of first-order methods. The proposed scheme does not require communication of Hessian information over the network, yet achieves the iteration complexity of centralized second-order methods up to the statistical precision. In addition, the (second-order) approximate nature of the utilized surrogate functions improves upon the per-iteration computational cost of our earlier scheme in this setting.
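For context, a cubic regularized Newton method (in the Nesterov–Polyak style) computes each step by minimizing a cubic model of the objective; in the scheme described above, the Hessian would be replaced by a statistically informed, locally computable preconditioner. The notation below is generic, not the thesis's:

\[
x_{k+1} \;=\; x_k \;+\; \arg\min_{s}\;\Big\{ \nabla f(x_k)^{\top} s \;+\; \tfrac{1}{2}\, s^{\top} H_k\, s \;+\; \tfrac{M}{6}\, \lVert s \rVert^{3} \Big\},
\]

where \(H_k\) stands for the (surrogate) curvature matrix and \(M > 0\) is the cubic regularization parameter. Using a local surrogate for \(H_k\) is what removes the need to exchange Hessian matrices over the network.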
Parker, Christopher A. C. "Collective decision-making in decentralized multiple-robot systems: a biologically inspired approach to making up all of your minds." PhD thesis, 2009. http://hdl.handle.net/10048/495.
Full text
Title from PDF file main screen (viewed on Aug. 19, 2009). "A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Computing Science, University of Alberta." Includes bibliographical references.
Lee, Chang Shen. "Distributed Network Processing and Optimization under Communication Constraint." Thesis, 2021.
Find full text