Dissertations / Theses on the topic 'Decentralized computing'

To see the other types of publications on this topic, follow the link: Decentralized computing.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 31 dissertations / theses for your research on the topic 'Decentralized computing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Josilo, Sladana. "Decentralized Algorithms for Resource Allocation in Mobile Cloud Computing Systems." Licentiate thesis, KTH, Nätverk och systemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-228084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The rapid increase in the number of mobile devices has been followed by an increase in the capabilities of mobile devices, such as the computational power, memory and battery capacity. Yet, the computational resources of individual mobile devices are still insufficient for various delay sensitive and computationally intensive applications. These emerging applications could be supported by mobile cloud computing, which allows using external computational resources. Mobile cloud computing not only improves the users’ perceived performance of mobile applications, but it may also reduce the energy consumption of mobile devices, and thus extend their battery life. However, the overall performance of mobile cloud computing systems is determined by the efficiency of allocating communication and computational resources. The work in this thesis proposes decentralized algorithms for allocating these two resources in mobile cloud computing systems. In the first part of the thesis, we consider the resource allocation problem in a mobile cloud computing system that allows mobile users to use cloud computational resources and the resources of each other. We consider that each mobile device aims at minimizing its perceived response time, and we develop a game theoretical model of the problem. Based on the game theoretical model, we propose an efficient decentralized algorithm that relies on average system parameters, and we show that the proposed algorithm could be a promising solution for coordinating multiple mobile devices. In the second part of the thesis, we consider the resource allocation problem in a mobile cloud computing system that consists of multiple wireless links and a cloud server. We model the problem as a strategic game, in which each mobile device aims at minimizing a combination of its response time and energy consumption for performing the computation. We prove the existence of equilibrium allocations of mobile cloud resources, and we use game theoretical tools for designing polynomial time decentralized algorithms with a bounded approximation ratio. We then consider the problem of allocating communication and computational resources over time slots, and we show that equilibrium allocations still exist. Furthermore, we analyze the structure of equilibrium allocations, and we show that the proposed decentralized algorithm for computing equilibria achieves good system performance. By providing constructive equilibrium existence proofs, the results in this thesis provide low complexity decentralized algorithms for allocating mobile cloud resources for various mobile cloud computing architectures.
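To make the game-theoretic mechanism concrete, here is a minimal sketch of best-response dynamics for decentralized offloading, in the spirit of the algorithms described above; the data structures, parameter values and cost formulas are illustrative assumptions, not the thesis' actual model:

```python
# Hedged sketch of best-response dynamics: each device repeatedly picks
# the option (its own CPU or a shared resource) minimizing its response
# time, given the congestion caused by the others' current choices.

def shared_time(dev, res, others):
    """Response time on a shared resource already chosen by `others` devices."""
    tx = dev["bits"] / (res["bandwidth_bps"] / (others + 1))
    comp = dev["cycles"] * (others + 1) / res["cpu_hz"]
    return tx + comp

def best_response_dynamics(devices, shared, max_rounds=100):
    choice = {d["id"]: "local" for d in devices}
    for _ in range(max_rounds):
        changed = False
        load = {name: sum(1 for c in choice.values() if c == name)
                for name in shared}
        for d in devices:
            cur = choice[d["id"]]
            options = {"local": d["cycles"] / d["local_hz"]}
            for name, res in shared.items():
                others = load[name] - (1 if cur == name else 0)
                options[name] = shared_time(d, res, others)
            best = min(options, key=options.get)
            if best != cur:
                if cur != "local":
                    load[cur] -= 1
                if best != "local":
                    load[best] += 1
                choice[d["id"]] = best
                changed = True
        if not changed:          # no device can improve: equilibrium
            break
    return choice

devices = [{"id": i, "cycles": 2e9, "bits": 1e6, "local_hz": 1e9}
           for i in range(4)]
shared = {"cloud": {"cpu_hz": 8e9, "bandwidth_bps": 2e7}}
print(best_response_dynamics(devices, shared))
```

In this toy run, devices keep joining the cloud until the congestion they impose on one another balances against local execution; the fixed point reached when no device wants to deviate is a pure Nash equilibrium.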



2

Ferreira, Heitor José Simões Baptista. "4Sensing - decentralized processing for participatory sensing data." Master's thesis, Faculdade de Ciências e Tecnologia, 2010. http://hdl.handle.net/10362/5091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Work presented within the scope of the Master's in Computer Engineering, as a partial requirement for obtaining the degree of Master in Computer Engineering.
Participatory sensing is a new application paradigm, stemming from both technical and social drives, which is currently gaining momentum as a research domain. It leverages the growing adoption of mobile phones equipped with sensors, such as camera, GPS and accelerometer, enabling users to collect and aggregate data, covering a wide area without incurring the costs associated with a large-scale sensor network. Related research in participatory sensing usually proposes an architecture based on a centralized back-end. Centralized solutions raise a set of issues. On one side, there are the implications of having a centralized repository hosting privacy-sensitive information. On the other side, this centralized model has financial costs that can discourage grassroots initiatives. This dissertation focuses on the data management aspects of a decentralized infrastructure for the support of participatory sensing applications, leveraging the body of work on participatory sensing and related areas, such as wireless and internet-wide sensor networks, peer-to-peer data management and stream processing. It proposes a framework covering a common set of data management requirements - from data acquisition, to processing, storage and querying - with the goal of lowering the barrier for the development and deployment of applications. Alternative architectural approaches - RTree, QTree and NTree - are proposed and evaluated experimentally in the context of a case-study application - SpeedSense - supporting the monitoring and prediction of traffic conditions, through the collection of speed and location samples in an urban setting, using GPS-equipped mobile phones.
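As a minimal illustration of the kind of reduction a SpeedSense-style application performs, the sketch below averages raw (road segment, speed) samples per segment; the segment identifiers and values are made up, and the framework's decentralized processing is of course far richer:

```python
from collections import defaultdict

# Illustrative reduction of participatory-sensing samples: GPS speed
# readings, already mapped to road segments, are averaged per segment.

def aggregate_speeds(samples):
    """samples: iterable of (segment_id, speed_kmh) tuples."""
    total, count = defaultdict(float), defaultdict(int)
    for segment, speed in samples:
        total[segment] += speed
        count[segment] += 1
    return {seg: total[seg] / count[seg] for seg in total}

print(aggregate_speeds([("A1", 42.0), ("A1", 38.0), ("B7", 55.5)]))
# {'A1': 40.0, 'B7': 55.5}
```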
3

Bicak, Mesude. "Agent-based modelling of decentralized ant behaviour using high performance computing." Thesis, University of Sheffield, 2011. http://etheses.whiterose.ac.uk/1392/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Ant colonies are complex biological systems that respond to changing conditions in nature by solving dynamic problems. Their ability for decentralized decision-making and their self-organized trail systems have inspired computer scientists since the 1990s, and consequently initiated a class of heuristic search algorithms, known as ant colony optimization (ACO) algorithms. These have proven to be very effective in solving combinatorial optimisation problems, especially in the field of telecommunication. The major challenge in social insect research is understanding how colony-level behaviour emerges from individual interactions. Models to date focus on simple pheromone usage with mathematically devised behaviour, which deviates considerably from real ant behaviour. Furthermore, simulating large-scale behaviour at the individual level is a difficult computational challenge; hence models fail to simulate realistic colony sizes and dimensions for foraging environments. In this thesis, FLAME, an agent-based modelling (ABM) framework capable of producing parallelisable models, was used as the modelling platform and simulations were performed on a High Performance Computing (HPC) grid. This enabled large-scale simulations of complex models to be run in parallel on a grid, without compromising the time taken to attain results. Furthermore, the advanced features of the framework, such as the dynamic creation of agents during a simulation, provided realistic grounds for modelling pheromones and the environment. The ABM approach through FLAME was utilized to improve existing models of the Pharaoh's ants (Monomorium pharaonis), focusing on their foraging strategies. Based on related biological research, a number of hypotheses were further tested: (i) the ability of the specialist ‘U-turner' ants in trail maintenance, (ii) the trail choices performed at bifurcations, and (iii) the ability of ants to deposit increased concentrations of pheromones based on food quality. Heterogeneous colonies with 7% U-turner ant agents were further shown to perform significantly better in foraging compared to homogeneous colonies. Furthermore, laying pheromones with a higher intensity based on food quality was shown to be beneficial for Pharaoh's ant colonies in switching to more rewarding trails. The movement of the Pharaoh's ants in unexplored areas (without pheromones) was also investigated by conducting biological experiments. Video tracking was used to extract movement vectors from the recordings of the experiments, and the data obtained were subject to statistical analysis in order to devise parameters for ant movement in the models developed. Overall, this research makes contributions to biology and computer science research by: (i) utilizing ABM and HPC via FLAME to reduce technological challenges, (ii) further validating existing hypotheses through realistic models, (iii) developing a video tracking system to acquire experimental data, and (iv) discussing potential applications to emergent telecommunication and networking problems.
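A tiny sketch of two of the mechanisms tested above (deposits scaled by food quality, evaporation each step) plus probabilistic trail choice at a bifurcation; the constants and data structures are assumptions for illustration, not FLAME model parameters:

```python
import random

EVAPORATION = 0.05    # fraction of pheromone lost per step (assumed value)
BASE_DEPOSIT = 1.0

def step(pheromone, ant_cells, food_quality):
    """Ants deposit pheromone scaled by food quality; the field then decays."""
    for cell in ant_cells:
        pheromone[cell] = pheromone.get(cell, 0.0) + BASE_DEPOSIT * food_quality
    for cell in pheromone:
        pheromone[cell] *= 1.0 - EVAPORATION
    return pheromone

def choose_branch(levels):
    """Trail choice at a bifurcation, weighted by pheromone level."""
    total = sum(levels.values())
    if total == 0:
        return random.choice(list(levels))   # unexplored area: move randomly
    r = random.uniform(0, total)
    for branch, level in levels.items():
        r -= level
        if r <= 0:
            return branch
    return branch                            # guard against float rounding

field = step({}, ant_cells=["n1", "n2"], food_quality=2.0)
print(choose_branch({"left": 3.0, "right": 1.0}))   # 'left' about 75% of the time
```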
4

Wilson, Dany. "Architecture for a Fully Decentralized Peer-to-Peer Collaborative Computing Platform." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We present an architecture for a fully decentralized peer-to-peer collaborative computing platform, offering services similar to a Cloud Service Provider’s Platform-as-a-Service (PaaS) model, using volunteered resources rather than dedicated resources. This thesis is motivated by three research questions: (1) Is it possible to build a peer-to-peer collaborative system using a fully decentralized infrastructure relying only on volunteered resources? (2) How can light virtualization be used to mitigate the complexity inherent to the volunteered resources? (3) What are the minimal requirements for a computing platform similar to the PaaS cloud computing platform? We propose an architecture composed of three layers: the Network layer, the Virtual layer, and the Application layer. We also propose to use light virtualization technologies, or containers, to provide a uniform abstraction of the contributed resources and to isolate the host environment from the contributed environment. Then, we propose a minimal API specification for this computing platform, which is also applicable to PaaS computing platforms. The findings of this thesis corroborate the hypothesis that peer-to-peer collaborative systems can be used as a basis for developing volunteer cloud computing infrastructures. We outline the implications of using light virtualization as an integral virtualization primitive in a public distributed computing platform. Finally, this thesis lays out a starting point for most volunteer cloud computing infrastructure development efforts, because it circumscribes the essential requirements and presents solutions to mitigate the complexities inherent to this paradigm.
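Since the thesis proposes a minimal API specification for such a platform, here is a hedged sketch of what a minimal PaaS-style interface could look like; the method names are hypothetical illustrations, not the thesis' actual specification:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of a minimal PaaS-style interface of the kind the
# thesis argues for; all names here are illustrative assumptions.

class ComputingPlatform(ABC):
    @abstractmethod
    def deploy(self, app_image: str) -> str:
        """Ship an application container to the platform; returns an app id."""

    @abstractmethod
    def scale(self, app_id: str, instances: int) -> None:
        """Request a number of running instances for the application."""

    @abstractmethod
    def status(self, app_id: str) -> dict:
        """Report instance placement and health."""

    @abstractmethod
    def undeploy(self, app_id: str) -> None:
        """Remove the application and release the volunteered resources."""
```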
5

Tkachuk, Roman-Valentyn. "Towards Decentralized Orchestration of Next-generation Cloud Infrastructures." Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cloud Computing helps to efficiently utilize the abundance of computing resources in large data centers. It enables interested parties to deploy their services in data centers while the hardware infrastructure is maintained by the cloud provider. Cloud computing is particularly interesting as it enables automation of service deployment and management processes. However, the more complex the service structure becomes, the more complex the deployment and management automation of all its parts becomes. To this end, the concept of service orchestration is introduced to streamline service deployment and management processes. Orchestration enables the definition and execution of complex automation workflows targeted to provision computing infrastructure, deploy needed service features, and provide management support. In particular, the orchestration process enables the deployment and enforcement of security and compliance mechanisms in the context of systems where sensitive data is being processed. This thesis investigates the orchestration process as a uniform approach to deploy and manage network services and the required security and compliance mechanisms. We investigate different use-cases where the orchestration process is applied to address specific security and compliance requirements. The thesis includes two parts. In the first part, we focus on centralized orchestration mechanisms, where all activities are performed from one trusted server. We explore the use-cases of a security testbed and collaborative AI engineering, and investigate the advantages and limitations of applying orchestration mechanisms in their context. In the second part, we shift towards the investigation of decentralized orchestration mechanisms. We employ blockchain technology as the main decentralization mechanism, exploring the advantages and limitations of its application in the context of digital marketplaces. We demonstrate that the shift towards blockchain-enabled orchestration enables the deployment and management of decentralized security mechanisms, ensuring compliant behavior of digital marketplace actors.
6

Barker, James W. "A Low-Cost, Decentralized Distributed Computing Architecture for an Autonomous User Environment." NSUWorks, 1998. http://nsuworks.nova.edu/gscis_etd/402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The focus of this research was the individual or small organization. These organizations include small businesses, community groups, K-12 schools or community colleges, local government, and the individual user, as well as many others. In this work, all of these organizations as well as the individual user were collectively referred to as users. The common element shared by each of these users was that they each have legitimate purposes for access to Internet services or each provides a service or services that could be enhanced if distributed via the connectivity provided by the Internet. However, the costs of establishing a conventional Internet server and the associated connectivity are prohibitive to such small-scale organizations. The objectives of this research were to: Establish a definition of a low-cost decentralized distributed computing environment for Intel-based personal computers that will provide users the capability to access the full spectrum of Internet services while enabling them with the ability to retain control of their computing environment. Develop a replication process to replicate and distribute the defined environment in a modular form so as to facilitate installation on a target system. Conduct testing and evaluation of the architecture and replication process to validate its ease of configuration and installation, and compliance with the requirements to provide users the capability to access the full spectrum of Internet services while retaining complete control of their computing environment. This was accomplished in three phases: (a) Phase I - Define an objective architecture, (b) Phase II - Develop a technique for replicating and distributing the architecture, and (c) Phase III - Test and validate the architecture and the replication and distribution processes. Definition of the objective architecture was accomplished through development of a prototype system that successfully demonstrated all of the characteristics required by the objectives of this research. Following the definition of the architecture on the prototype system, development of a technique for replicating and distributing the architecture was undertaken. This was accomplished by developing a group of programs that configured a system to the needs of a target user, captured that configured system on a removable medium, and restored that configured system on the target hardware. Finally the architecture, as well as its replication and distribution processes were evaluated for validity using statistical analysis of data collected from test subjects acting as users. All of these tasks were accomplished within the Linux Operating System environment using only software tools developed by the researcher or tools that are a native component of Linux. The first objective of this research was satisfied by the researcher's selection of Linux and its suite of associated applications as the operating system that would host the solution system. The second objective of this research was accomplished by the researcher's development of a suite of software tools that replicated the configured environment, moved the replication to an appropriate media and restored the environment on a target system. Inviting a group of Linux users to use the tools and provide feedback via a survey satisfied the third objective of this research. It was concluded that the three objectives of this research and therefore the overall goal of this research were accomplished. 
In each measured evaluation of the architecture, procedures and programs developed by the researcher, the resulting data were plotted in the advanced area or the area tending toward the advanced level of maturity as defined by the Boloix and Robillard (1995) evaluation scale. In a like manner the resulting data were plotted in the exceptionally compliant range or higher on the normal distribution curve survey scale. The trend of results was consistently at the advanced level of maturity on the Boloix and Robillard (1995) evaluation scale or in the exceptionally compliant range of the normal distribution curve survey scale. The researcher found that the results of testing the defined architecture and replication process revealed users are able to quickly implement a fully configured Linux system with all the capabilities defined in the architecture. This resulting Linux system provided a low cost, decentralized, distributed computing environment for Intel-based personal computers that enabled users to access the full spectrum of Internet services while maintaining control of their computing environment. By accomplishing this objective the researcher's Linux system can provide fiscally constrained individuals or small organizations full access to Internet services without the high costs of establishing a conventional Internet server and associated connectivity, prohibitive to a small-scale organization.
7

Tordsson, Johan. "Decentralized resource brokering for heterogeneous grid environments." Licentiate thesis, Umeå : Department of Computing Science, Umeå University, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-966.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Agarwal, Radhika. "DRAP: A Decentralized Public Resourced Cloudlet for Ad-Hoc Networks." Thèse, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/30682.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Handheld devices are becoming increasingly common, and they have a varied range of resources. Mobile Cloud Computing (MCC) allows resource-constrained devices to offload computation and use the storage capacities of more resourceful surrogate machines. This enables the creation of new and interesting applications for all devices. We propose a scheme that constructs a high-performance decentralized system from a group of volunteer mobile devices which come together to form a resourceful unit (cloudlet). The idea is to design a model that operates as a public resource shared between mobile devices in close geographical proximity. This cloudlet can provide larger storage capability and can be used as a computational resource by other devices in the network. The system needs to watch the movement of the participating nodes and restructure the topology if some nodes that are providing support to the cloudlet fail or move out of the network. In this work, we discuss the need for the system, our goals and the design issues in building a scalable and reconfigurable system. We achieve this by leveraging the concept of a virtual dominating set to create an overlay across the network and distribute the responsibilities of hosting a cloudlet server. We propose an architecture for such a system and develop the algorithms required for its operation. We map the resources available in the network by first scoring each device individually, and then gathering these scores to determine suitable candidate cloudlet nodes. We have simulated cloudlet functionalities for several scenarios and show that our approach is a viable alternative for many applications such as sharing GPS, crowdsourcing, natural language processing, etc.
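The two-phase resource-mapping idea, scoring each device individually and then gathering the scores to pick cloudlet candidates, can be sketched as follows; the attributes and weights are assumptions for illustration, not the thesis' actual metric:

```python
# Hedged sketch: score each device with a weighted sum of its normalized
# capabilities, then keep the k best as cloudlet candidates.

WEIGHTS = {"battery": 0.4, "cpu": 0.3, "storage": 0.2, "stability": 0.1}

def score(device):
    """Weighted sum of capability values, each assumed normalized to [0, 1]."""
    return sum(WEIGHTS[k] * device[k] for k in WEIGHTS)

def cloudlet_candidates(devices, k=3):
    """Gather individual scores and keep the k most suitable nodes."""
    return sorted(devices, key=score, reverse=True)[:k]

devices = [{"battery": 0.9, "cpu": 0.7, "storage": 0.5, "stability": 0.8},
           {"battery": 0.3, "cpu": 0.9, "storage": 0.9, "stability": 0.4}]
print(cloudlet_candidates(devices, k=1))
```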
9

Wang, Mianyu Kam Moshe Kandasamy Nagarajan. "A decentralized control and optimization framework for autonomic performance management of web-server systems /." Philadelphia, Pa. : Drexel University, 2007. http://hdl.handle.net/1860/2643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ariyattu, Resmi. "Towards federated social infrastructures for plug-based decentralized social networks." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S031/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, we address two issues in the area of decentralized distributed systems: network-aware overlays and collaborative editing. Even though network overlays have been extensively studied, most solutions either ignore the underlying physical network topology, or use mechanisms that are specific to a given platform or application. This is problematic, as the performance of an overlay network strongly depends on the way its logical topology exploits the underlying physical network. To address this problem, we propose Fluidify, a decentralized mechanism for deploying an overlay network on top of a physical infrastructure while maximizing network locality. Fluidify uses a dual strategy that exploits both the logical links of an overlay and the physical topology of its underlying network to progressively align one with the other. The resulting protocol is generic, efficient, scalable and can substantially reduce network overheads and latency in overlay-based systems. The second issue that we address focuses on collaborative editing platforms. Distributed collaborative editors allow several remote users to contribute concurrently to the same document. Only a limited number of concurrent users can be supported by the currently deployed editors. A number of peer-to-peer solutions have therefore been proposed to remove this limitation and allow a large number of users to work collaboratively. These decentralized solutions assume, however, that all users are editing the same set of documents, which is unlikely to be the case. To open the path towards more flexible decentralized collaborative editors, we present Filament, a decentralized cohort-construction protocol adapted to the needs of large-scale collaborative editors. Filament eliminates the need for any intermediate DHT, and allows nodes editing the same document to find each other in a rapid, efficient and robust manner by generating an adaptive routing field around themselves. Filament's architecture hinges on a set of collaborating self-organizing overlays that utilize the semantic relations between peers. The resulting protocol is efficient, scalable and provides beneficial load-balancing properties over the involved peers.
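A much-simplified sketch of the core move behind a Fluidify-like alignment: two nodes swap their logical roles whenever the swap lowers the physical distance covered by their overlay links (the actual protocol is decentralized and randomized; all inputs below are toy assumptions):

```python
# Toy locality-improving move: `placement` maps overlay nodes to physical
# hosts, `overlay` lists each node's logical neighbours, and `phys_dist`
# is a host-to-host distance matrix.

def link_cost(node, placement, overlay, phys_dist):
    """Physical distance covered by one node's overlay links."""
    return sum(phys_dist[placement[node]][placement[nbr]]
               for nbr in overlay[node])

def maybe_swap(a, b, placement, overlay, phys_dist):
    """Swap the logical roles of a and b if it improves locality."""
    before = (link_cost(a, placement, overlay, phys_dist)
              + link_cost(b, placement, overlay, phys_dist))
    placement[a], placement[b] = placement[b], placement[a]
    after = (link_cost(a, placement, overlay, phys_dist)
             + link_cost(b, placement, overlay, phys_dist))
    if after >= before:                  # no improvement: undo the swap
        placement[a], placement[b] = placement[b], placement[a]
        return False
    return True

overlay = {"n0": ["n1"], "n1": ["n0"], "n2": ["n3"], "n3": ["n2"]}
placement = {"n0": 0, "n1": 2, "n2": 1, "n3": 3}
phys_dist = [[0, 1, 5, 6], [1, 0, 5, 5], [5, 5, 0, 1], [6, 5, 1, 0]]
maybe_swap("n1", "n2", placement, overlay, phys_dist)
print(placement)   # n0-n1 and n2-n3 now sit on nearby hosts
```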
11

Ben, Hafaiedh Khaled. "Studying the Properties of a Distributed Decentralized b+ Tree with Weak-Consistency." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/20578.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Distributed computing is very popular in the field of computer science and is widely used in web applications. In such systems, tasks and resources are partitioned among several computers so that the workload can be shared among the different computers in the network, in contrast to systems using a single server computer. Distributed system designs are used for many practical reasons and are often found to be more scalable, robust and suitable for many applications. The aim of this thesis is to study the properties of a distributed tree data structure that allows searches, insertions and deletions of data elements. In particular, the b-tree structure [13] is considered, which is a generalization of a binary search tree. The study consists of analyzing the effect of distributing such a tree among several computers and investigates the behavior of such a structure over a long period of time by growing the network of computers supporting the tree, while the state of the structure is instantly updated as insertion and deletion operations are performed. It also attempts to validate the necessary and sufficient invariants of the b-tree structure that guarantee the correctness of the search operations. A simulation study is also conducted to verify the validity of such a distributed data structure and the performance of the algorithm that implements it. Finally, a discussion is provided at the end of the thesis to compare the performance of the system design with other distributed tree structure designs.
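For reference, the search path that such a structure must keep correct follows the classic b-tree descent sketched below; in the distributed setting each child reference may point to a node on another machine, and the separator-key invariant is what keeps routing correct even as the structure is updated:

```python
# Classic b-tree descent: at each node, find the first separator key not
# smaller than the search key and follow the matching child.

class BTreeNode:
    def __init__(self, keys, children=None):
        self.keys = keys              # sorted separator keys
        self.children = children      # None for a leaf

def search(node, key):
    i = 0
    while i < len(node.keys) and key > node.keys[i]:
        i += 1
    if i < len(node.keys) and node.keys[i] == key:
        return True
    if node.children is None:
        return False
    return search(node.children[i], key)   # may be a remote call when distributed

root = BTreeNode([5], [BTreeNode([1, 3]), BTreeNode([7, 9])])
print(search(root, 7), search(root, 4))    # True False
```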
12

Adam, Constantin. "A Middleware for Self-Managing Large-Scale Systems." Doctoral thesis, KTH, Skolan för elektro- och systemteknik (EES), 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4178.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis investigates designs that enable individual components of a distributed system to work together and coordinate their actions towards a common goal. While the basic motivation for our research is to develop engineering principles for large-scale autonomous systems, we address the problem in the context of resource management in server clusters that provide web services. To this end, we have developed, implemented and evaluated a decentralized design for resource management that follows four principles. First, in order to facilitate scalability, each node has only partial knowledge of the system. Second, each node can adapt and change its role at runtime. Third, each node runs a number of local control mechanisms independently and asynchronously from its peers. Fourth, each node dynamically adapts its local configuration in order to optimize a global utility function. The design includes three fundamental building blocks: overlay construction, request routing and application placement. Overlay construction organizes the cluster nodes into a single dynamic overlay. Request routing directs service requests towards nodes with available resources. Application placement partitions the cluster resources between applications, and dynamically adjusts the allocation in response to changes in external load, node failures, etc. We have evaluated the design using complexity analysis, simulation and prototype implementation. Using complexity analysis and simulation, we have shown that the system is scalable, operates efficiently in steady state, quickly adapts to external events and allows for effective service differentiation by a system administrator. A prototype has been built using accepted technologies (Java, Tomcat) and evaluated using standard benchmarks (TPC-W and RUBiS). The evaluation results show that the behavior of the prototype matches closely that of the simulated design for key metrics related to adaptability and robustness, therefore validating our design and proving its feasibility.
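To illustrate the fourth principle, a node adapting its local configuration to optimize a global utility function, here is a toy greedy rebalancing loop; the logarithmic utility shape and step size are assumptions for illustration, not the design's actual functions:

```python
import math

# Toy illustration: a node re-divides its CPU share among hosted
# applications to increase a (made-up) diminishing-returns utility.

def utility(allocations, demands):
    """Sum of per-application log utilities, weighted by demand."""
    return sum(d * math.log(1 + a) for a, d in zip(allocations, demands))

def rebalance(allocations, demands, step=0.01):
    """Greedily shift small CPU slices between apps while utility grows."""
    improved = True
    while improved:
        improved = False
        base = utility(allocations, demands)
        for i in range(len(allocations)):
            for j in range(len(allocations)):
                if i == j or allocations[i] < step:
                    continue
                trial = list(allocations)
                trial[i] -= step
                trial[j] += step
                if utility(trial, demands) > base + 1e-9:
                    allocations, base = trial, utility(trial, demands)
                    improved = True
    return allocations

print(rebalance([0.5, 0.5], demands=[3.0, 1.0]))   # shifts CPU towards app 0
```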
13

Huhtinen, J. (Jouni). "Utilization of neural network and agent technology combination for distributed intelligent applications and services." Doctoral thesis, University of Oulu, 2005. http://urn.fi/urn:isbn:9514278550.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The use of agent systems has increased enormously, especially in the field of mobile services. Intelligent services have also increased rapidly on the web. In this thesis, the utilization of software agent technology in mobile services and decentralized intelligent services in the multimedia business is introduced and described. Both the Genie Agent Architecture (GAA) and the Decentralized International and Intelligent Software Architecture (DIISA) are described. The common problems in decentralized software systems are the lack of intelligence, the communication of software modules and system learning. Another problem is the personalization of users and services. A third problem is the matching of user and service characteristics at the web application level in a non-linear way; in this case it means that web services follow human steps and are capable of learning from human inputs and their characteristics in an intelligent way. This third problem is addressed in this thesis and solutions are presented with two intelligent software architectures and services. The solutions of the thesis are based on a combination of neural network and agent technology. To be more specific, the solutions are based on an intelligent agent which uses certain black-box information, such as a Self-Organizing Map (SOM). The process is as follows: information agents collect information from different sources such as the web, databases, users, other software agents and the environment. The information is filtered and adapted into input vectors. Maps are created from the data entries of an SOM. Using the maps is very simple: input forms are completed by users (automatically or manually) or user agents. Input vectors are formed again and sent to a certain map. The map gives several outputs which are passed through specific algorithms, and this information is passed to an intelligent agent. The need for web intelligence and knowledge representation serving users is a current issue in many business solutions. The main goal is to enable this by means of autonomous agents which communicate with each other using an agent communication language and with users using their native languages via several communication channels.
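The SOM lookup step described above, matching an input vector against the map and handing the winning unit's output to the agent, reduces to finding the best-matching unit; in the sketch below a random array stands in for a trained map:

```python
import numpy as np

# Bare-bones SOM lookup: the input vector assembled by the information
# agents is matched against the map, and the best-matching unit is what
# gets handed on to the intelligent agent.

def best_matching_unit(som_weights, input_vector):
    """som_weights: (rows, cols, dim) array; returns grid coordinates."""
    distances = np.linalg.norm(som_weights - input_vector, axis=2)
    return np.unravel_index(np.argmin(distances), distances.shape)

som = np.random.rand(10, 10, 4)      # a trained map would replace this
print(best_matching_unit(som, np.array([0.2, 0.9, 0.1, 0.5])))
```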
14

Adam, Constantin. "Scalable Self-Organizing Server Clusters with Quality of Service Objectives." Licentiate thesis, KTH, School of Electrical Engineering (EES), 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

Advanced architectures for cluster-based services that have been recently proposed allow for service differentiation, server overload control and high utilization of resources. These systems, however, rely on centralized functions, which limit their ability to scale and to tolerate faults. In addition, they do not have built-in architectural support for automatic reconfiguration in case of failures or addition/removal of system components.

Recent research in peer-to-peer systems and distributed management has demonstrated the potential benefits of decentralized over centralized designs: a decentralized design can reduce the configuration complexity of a system and increase its scalability and fault tolerance.

This research focuses on introducing self-management capabilities into the design of cluster-based services. Its intended benefits are to make service platforms dynamically adapt to the needs of customers and to environment changes, while giving the service providers the capability to adjust operational policies at run-time.

We have developed a decentralized design that efficiently allocates resources among multiple services inside a server cluster. The design combines the advantages of both centralized and decentralized architectures. It allows associating a set of QoS objectives with each service. In case of overload or failures, the quality of service degrades in a controllable manner. We have evaluated the performance of our design through extensive simulations. The results have been compared with performance characteristics of ideal systems.

15

Helmy, Ahmed. "Energy-Efficient Bandwidth Allocation for Integrating Fog with Optical Access Networks." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39912.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Access networks have been going through many reformations to make them adapt to arising traffic trends and become better suited for many new demanding applications. To that end, incorporating fog and edge computing has become a necessity for supporting many emerging applications as well as alleviating network congestions. At the same time, energy-efficiency has become a strong imperative for access networks to reduce both their operating costs and carbon footprint. In this dissertation, we address these two challenges in long-reach optical access networks. We first study the integration of fog and edge computing with optical access networks, which is believed to form a highly capable access network by combining the huge fiber capacity with closer-to-the-edge computing and storage resources. In our study, we examine the offloading performance under different cloudlet placements when the underlying bandwidth allocation is either centralized or decentralized. We combine between analytical modeling and simulation results in order to identify the different factors that affect the offloading performance within each paradigm. To address the energy efficiency requirement, we introduce novel enhancements and modifications to both allocation paradigms that aim to enhance their network performance while conserving energy. We consider this work to be one of the first to explore the integration of fog and edge computing with optical access networks from both bandwidth allocation and energy efficiency perspectives in order to identify which allocation paradigm would be able to meet the requirements of next-generation access networks.
16

Mason, Richard S. "A framework for fully decentralised cycle stealing." Queensland University of Technology, 2007. http://eprints.qut.edu.au/26039/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Ordinary desktop computers continue to obtain ever more resources – increased processing power, memory, network speed and bandwidth – yet these resources spend much of their time underutilised. Cycle stealing frameworks harness these resources so they can be used for high-performance computing. Traditionally cycle stealing systems have used client-server based architectures which place significant limits on their ability to scale and the range of applications they can support. By applying a fully decentralised network model to cycle stealing the limits of centralised models can be overcome. Using decentralised networks in this manner presents some difficulties which have not been encountered in their previous uses. Generally decentralised applications do not require any significant fault tolerance guarantees. High-performance computing on the other hand requires very stringent guarantees to ensure correct results are obtained. Unfortunately mechanisms developed for traditional high-performance computing cannot be simply translated because of their reliance on a reliable storage mechanism. In the highly dynamic world of P2P computing this reliable storage is not available. As part of this research a fault tolerance system has been created which provides considerable reliability without the need for a persistent storage. As well as increased scalability, fully decentralised networks offer the ability for volunteers to communicate directly. This ability provides the possibility of supporting applications whose tasks require direct, message passing style communication. Previous cycle stealing systems have only supported embarrassingly parallel applications and applications with limited forms of communication so a new programming model has been developed which can support this style of communication within a cycle stealing context. In this thesis I present a fully decentralised cycle stealing framework. The framework addresses the problems of providing a reliable fault tolerance system and supporting direct communication between parallel tasks. The thesis includes a programming model for developing cycle stealing applications with direct inter-process communication and methods for optimising object locality on decentralised networks.
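One standard way to obtain fault tolerance without reliable storage, in the spirit of the problem described above (the framework's actual mechanism may differ), is to replicate each task across several volunteers and accept only a majority-agreed result:

```python
from collections import Counter

# Sketch of result verification without persistent storage: run each task
# on several volunteers and accept a result only when a majority agrees.

def reliable_result(task, volunteers, replicas=3):
    results = [run(task) for run in volunteers[:replicas]]
    value, votes = Counter(results).most_common(1)[0]
    if votes > replicas // 2:
        return value
    raise RuntimeError("no majority; reschedule on fresh volunteers")

volunteers = [lambda t: t * t, lambda t: t * t, lambda t: 0]  # one faulty peer
print(reliable_result(7, volunteers))   # 49: the faulty result is outvoted
```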
17

Mundy, David H. "Decentralised control flow : a computational model for distributed systems." Thesis, University of Newcastle Upon Tyne, 1988. http://hdl.handle.net/10443/2050.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents two sets of principles for the organisation of distributed computing systems. Details of models of computation based on these principles are given, together with proposals for programming languages based on each model of computation. The recursive control flow principles are based on the concept of recursive control flow computing system structuring. A recursive control flow computing system comprises a group of subordinate computing systems connected together by a communications medium. Each subordinate computing system may either be a computing system which consists of a processing unit, some memory components, and input/output devices, or is itself a recursive control flow computing system. The memory components of all the subordinate computing systems within a recursive control flow computing system are arranged in a hierarchy. Using suitable addresses, any part of the hierarchy is accessible to any sequence of instructions which may be executed by the processing unit of a subordinate computing system. This global accessibility gives rise to serious difficulties in understanding the meaning of programs written in a programming language based on the recursive control flow model of computation. Reasoning about a particular program in isolation is difficult because the potential interference between the execution of different programs cannot be ignored. The alternative principles, decentralised control flow, restrict the global accessibility of the memory components of the subordinate computing systems. The concept of objects forms the basis of these principles. Information may flow along unnamed channels between instances of these objects, this being the only way in which one instance of an object may communicate with some other instance of an object. Reasoning about a particular program written in a programming language based on the decentralised control flow model of computation is easier since it is guaranteed that there will be no interference between the execution of different programs.
18

Gutierrez, Soto Mariantonieta. "MULTI-AGENT REPLICATOR CONTROL METHODOLOGIES FOR SUSTAINABLE VIBRATION CONTROL OF SMART BUILDING AND BRIDGE STRUCTURES." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1494249419696286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Blatt, Nicole I. "Trust and influence in the information age operational requirements for network centric warfare." Thesis, Monterey, California. Naval Postgraduate School, 2004. http://hdl.handle.net/10945/1313.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Approved for public release; distribution is unlimited.
Military leaders and scholars alike debate the existence of a revolution in military affairs (RMA) based on information technology. This thesis will show that the Information RMA not only exists, but will also reshape how we plan, operate, educate, organize, train, and equip forces for the 21st century. This thesis introduces the Communication Technology (CommTech) Model to explain how communication technologies affect organizations, leadership styles, and decision-making processes. Due to the growth in networking enterprises, leaders will have to relinquish their tight, centralized control over subordinates. Instead, they will have to perfect their use of softer power skills such as influence and trust as they embrace decentralized decision-making. Network Centric Warfare, Self-Synchronization, and Network Enabled Operations are concepts that provide the framework for integrating information technology into the battlespace. The debate that drives centralized versus decentralized control in network operations is analyzed with respect to the CommTech Model. A new term called Operational Trust is introduced and developed, identifying ways to make it easier to build trust among network entities. Finally, the thesis focuses on what leaders need to do to shape network culture for effective operations.
Major, United States Air Force
20

Babovic, Vladan. "Emergence, evolution, intelligence: hydroinformatics : a study of distributed and decentralised computing using intelligent agents /." Rotterdam [etc.] : Balkema, 1996. http://opac.nebis.ch/cgi-bin/showAbstract.pl?u20=905410404X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Cuadrado-Cordero, Ismael. "Microclouds : an approach for a network-aware energy-efficient decentralised cloud." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S003/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The current datacenter-centralized architecture limits the cloud to the location of the datacenters, generally far from the user. This architecture collides with the current trend towards ubiquitous cloud computing. Also, the current estimated energy usage of data centers and core networks adds up to 3% of the global energy production, while according to the latest estimations only 42.3% of the population is connected. In this work, we focused on two drawbacks of datacenter-centralized clouds: energy consumption and poor quality of service. On the one hand, energy consumption in networks is affected by the centralized nature of the cloud: backbone networks increase their energy consumption in order to connect the clients to the datacenters. On the other hand, distance leads to increased utilization of the broadband Wide Area Network and poor user experience, especially for interactive applications. A distributed approach can provide a better Quality of Experience (QoE) in large urban populations in mobile cloud networks. To do so, the cloud should confine local traffic close to the user, running on user and network devices. In this work, we propose a novel distributed cloud architecture based on microclouds. Microclouds are dynamically created and allow users to contribute resources from their computers, mobile and network devices to the cloud. This way, they provide a dynamic and scalable system without the need for extra investment in infrastructure. We also provide a description of a realistic mobile cloud use case, and the adaptation of microclouds to it. Through simulations, we show that our approach saves up to 75% of the energy consumed in standard centralized clouds. Also, our results indicate that this architecture is scalable with the number of mobile devices and provides a significantly lower latency than regular datacenter-centralized approaches. Finally, we analyze the use of incentives for mobile clouds, and propose a new auction system adapted to the high dynamism and heterogeneity of these systems. We compare our solution to other existing auction systems in a mobile cloud use case, and show the suitability of our solution.
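As a baseline for the auction discussion, here is a classic sealed-bid second-price (Vickrey) allocation of a device's spare resources; the thesis proposes its own mechanism adapted to churn and heterogeneity, so this standard form is only a point of comparison:

```python
# Baseline sealed-bid, second-price auction: the highest bidder wins a
# device's spare resources but pays the runner-up's bid, which makes
# truthful bidding the dominant strategy. Bidder names are made up.

def vickrey_winner(bids):
    """bids: dict bidder -> offered price. Returns (winner, price paid)."""
    if len(bids) < 2:
        raise ValueError("need at least two bids")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

print(vickrey_winner({"dev_a": 5.0, "dev_b": 8.0, "dev_c": 6.5}))
# ('dev_b', 6.5)
```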
22

Hamidouche, Lyes. "Vers une dissémination efficace de données volumineuses sur des réseaux wi-fi denses." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS188/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We are witnessing a proliferation of mobile technologies and an increasing volume of data used by mobile applications. Devices thus consume more and more bandwidth. In this thesis, we focus on dense Wi-Fi networks during large-scale events (such as conferences). In this context, the bandwidth consumption and the interference caused by parallel downloads of a large volume of data by several mobile devices connected to the same Wi-Fi network degrade the performance of the dissemination. Device-to-Device (D2D) communication technologies such as Bluetooth or Wi-Fi Direct can be used to improve network performance and deliver a better quality of experience (QoE) to users. In this thesis we propose two approaches for improving the performance of data dissemination. The first approach, more suited to a dynamic configuration, is to use point-to-point D2D connections on a flat topology for data exchange. Our evaluations show that our approach can reduce dissemination times by up to 60% compared to using Wi-Fi alone. In addition, we ensure a fair distribution of the energy load on the devices to preserve the weakest batteries in the network. We have observed that by taking into account the battery life and the bandwidth of mobile devices, the solicitation of the weakest batteries can be reduced significantly. The second approach, more adapted to static configurations, consists of setting up hierarchical topologies by gathering mobile devices into small clusters. In each cluster, a device is chosen to relay the data it receives from the server to its neighbors. This approach helps to manage interference more efficiently by adjusting the signal strength in order to limit cluster reach. In this case, we observed up to 30% gains in dissemination time. As a continuation of this work, we discuss perspectives that would be worth pursuing, in particular the automatic adaptation of the dissemination protocol to the state of the network and the simultaneous use of both topology types, flat and hierarchical.
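The fairness idea in the first approach (soliciting senders according to both bandwidth and remaining battery, so that the weakest batteries are spared) can be sketched as a weighted peer-selection rule; the scoring formula and the 0.6 weight are illustrative assumptions, not the protocol's actual parameters:

```python
# Weighted D2D peer selection: favour senders with good bandwidth while
# sparing the weakest batteries in the network.

def pick_sender(candidates, battery_weight=0.6):
    """candidates: dicts with 'battery' and 'bandwidth' normalized to [0, 1]."""
    def fitness(peer):
        return (battery_weight * peer["battery"]
                + (1 - battery_weight) * peer["bandwidth"])
    return max(candidates, key=fitness)

peers = [{"id": 1, "battery": 0.9, "bandwidth": 0.4},
         {"id": 2, "battery": 0.2, "bandwidth": 0.9}]
print(pick_sender(peers)["id"])   # 1: the low battery of peer 2 is spared
```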
23

Debbabi, Bassem. "Cube : a decentralised architecture-based framework for software self-management." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM004/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In recent years, the world has witnessed the rapid emergence of several novel technologies and computing environments, including cloud computing, ubiquitous computing and sensor networks. These environments have been rapidly capitalised upon for building new types of applications, and bringing added value to users. At the same time, the resulting applications have been raising a number of significant new challenges, mainly related to system design, deployment and life-cycle management during runtime. Such challenges stem from the very nature of these novel environments, characterized by large scales, high distribution, resource heterogeneity and increased dynamism. The main objective of this thesis is to provide a generic, reusable and extensible self-management solution for these types of applications, in order to help alleviate this stringent problem. We are particularly interested in providing support for the runtime management of system architecture and life-cycle, focusing on applications that are component-based and that run in highly dynamic, distributed and large-scale environments. In order to achieve this goal, we propose a synergistic solution – the Cube framework – that combines techniques from several adjacent research domains, including self-organization, constraint satisfaction, self-adaptation and self-reflection based on architectural models. In this solution, a set of decentralised Autonomic Managers self-organizes dynamically to build and administer a target application by following a shared description of administrative goals. This formal description, called Archetype, contains a graph-oriented specification of the application elements to manage and of various constraints associated with these elements. A prototype of the Cube framework has been implemented for the particular application domain of data-mediation. Experiments have been carried out in the context of two national research projects: Self-XL and Medical. The results obtained indicate the viability of the proposed solution for creating, repairing and adapting component-based applications running in distributed, volatile and evolving environments.
24

Luxey, Adrien. "Les e-squads : un nouveau paradigme pour la conception d'applications ubiquitaires respectant le droit à la vie privée." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S071.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The emergence of the Internet of Things (IoT) has proved dangerous for people's right to privacy: more and more connected appliances share personal data about people's daily lives with private parties beyond their control. Still, we believe that the IoT could just as well become a guardian of our privacy, were it to escape the centralised cloud model. In this thesis, we explore a novel concept, the e-squad: making one's connected devices collaborate through gossip communication to build new privacy-preserving services. We show how an e-squad can privately build knowledge about its user and leverage this information to create predictive applications. We demonstrate that collaboration helps to overcome the end devices' poor availability and performance. Finally, we federate e-squads to build a robust anonymous file exchange service, proving their potential beyond the single-user scenario. We hope this manuscript inspires the advent of more humane and delightful technology.
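
For intuition about the gossip-based collaboration described above, here is a minimal anti-entropy sketch in Python: each round, every device pushes its state to one random peer, and peers keep the freshest value per key. It is a generic toy under our own assumptions, not the thesis's exact protocol.

import random

class Device:
    def __init__(self, name):
        self.name = name
        self.store = {}   # key -> (version, value)

    def observe(self, key, value, version):
        self.store[key] = (version, value)

    def push_to(self, peer):
        # anti-entropy: the peer keeps whichever version is newer
        for key, (ver, val) in self.store.items():
            if key not in peer.store or peer.store[key][0] < ver:
                peer.store[key] = (ver, val)

devices = [Device(n) for n in ("phone", "laptop", "tablet", "router")]
devices[0].observe("last_location", "office", version=3)
devices[1].observe("next_meeting", "14:00", version=1)

for _ in range(5):                       # a few gossip rounds
    for d in devices:
        d.push_to(random.choice([p for p in devices if p is not d]))

print(devices[2].store)   # the tablet has learned both facts with high probability
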
25

Paulo, João Tiago Medeiros. "Dependable decentralized storage management for cloud computing." Doctoral thesis, 2015. http://hdl.handle.net/1822/38462.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The MAP-i Doctoral Program of the Universities of Minho, Aveiro and Porto.
The volume of worldwide digital information is growing, and will continue to grow, at an impressive rate. Storage deduplication is accepted as a valuable technique for handling such data explosion: by eliminating unnecessary duplicate content from storage systems, both hardware and storage management costs can be reduced. Nowadays, this technique is applied to distinct storage types, and it is increasingly desired in cloud computing infrastructures, where a significant portion of worldwide data is stored. However, designing a deduplication system for cloud infrastructures is a complex task, as duplicates must be found and eliminated across a distributed cluster that supports virtual machines and applications with strict storage performance requirements. The core of this dissertation addresses precisely the challenges of deduplication in cloud infrastructures. We start by surveying and comparing the existing deduplication systems and the distinct storage environments they target. This discussion is missing in the literature and is important for understanding the novel issues that cloud deduplication systems must address. Then, as our main contribution, we introduce our own deduplication system, which eliminates duplicates across virtual machine volumes in a distributed cloud infrastructure. Redundant content is found and removed in a cluster-wide fashion while having a negligible impact on the performance of applications using the deduplicated volumes. Our prototype is evaluated in a real distributed setting with a benchmark suited for deduplication systems, which is also a contribution of this dissertation.
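
For intuition, deduplication at its simplest stores each unique block once and describes a volume as a list of block digests. The Python sketch below is a single-node toy built on that assumption, not the dissertation's distributed, performance-aware design.

import hashlib

BLOCK_SIZE = 4096
store = {}            # digest -> block bytes, shared across volumes

def write_volume(data: bytes) -> list:
    """Split data into fixed-size blocks, dedupe against the shared
    store, and return the volume's recipe (list of digests)."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # stored once, however many references
        recipe.append(digest)
    return recipe

def read_volume(recipe: list) -> bytes:
    return b"".join(store[d] for d in recipe)

vol_a = write_volume(b"A" * 8192 + b"B" * 4096)
vol_b = write_volume(b"A" * 8192)          # fully duplicate content
print(len(store))                          # 2 unique blocks stored, not 5
assert read_volume(vol_b) == b"A" * 8192

The hard part the dissertation tackles is doing this lookup-and-share step across a cluster, without hurting the latency of virtual machines writing to the volumes.
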
Fundação para a Ciência e Tecnologia (FCT) doctoral grant SFRH/BD/71372/2010.
26

"Centralized and Decentralized Methods of Efficient Resource Allocation in Cloud Computing." Doctoral diss., 2016. http://hdl.handle.net/2286/R.I.40818.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Resource allocation in cloud computing determines how the computing and network resources of service providers are allocated to the service requests of cloud users so as to meet the users' service requirements. Efficient and effective resource allocation is critical to the success of cloud computing. However, it is challenging to satisfy the objectives of all service providers and all cloud users in an unpredictable environment with dynamic workloads, large shared resources and complex policies for managing them. Many studies propose centralized algorithms for achieving optimal resource allocation solutions. However, centralized algorithms may not scale to handle large numbers of service requests within a realistically satisfactory time. Hence, this dissertation presents two studies. One develops and tests heuristics for centralized resource allocation that produce near-optimal solutions in a scalable manner. The other investigates decentralized methods of performing resource allocation. The first part of this dissertation formulates the resource allocation problem as a centralized optimization problem in Mixed Integer Programming (MIP) and obtains optimal solutions for various resource-service problem scenarios. Based on an analysis of the optimal solutions, various heuristics are designed for efficient resource allocation. Extensive experiments with larger numbers of user requests and service providers are conducted to evaluate the performance of the heuristics. Experimental results show that the heuristics perform comparably to the optimal solutions obtained by solving the optimization problem, while demonstrating better computational efficiency, and thus scalability, than solving the optimization problem directly. The second part of this dissertation examines elements of service provider-user coordination, first in the centralized MIP formulation of the resource allocation problem and then in decentralized formulations of the optimization problem for various problem cases. By examining the differences between the centralized optimal solutions and the decentralized solutions for those problem cases, it analyzes how decentralized service provider-user coordination degrades the optimal solutions. Based on this analysis, strategies for decentralized service provider-user coordination are developed.
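
As a concrete illustration of the kind of MIP involved, one simple assignment-style formulation (a generic sketch, not the dissertation's exact model) is:

\min_{x}\ \sum_{i=1}^{m}\sum_{j=1}^{n} c_{ij}\, x_{ij}
\qquad \text{s.t.} \qquad
\sum_{j=1}^{n} x_{ij} = 1 \;\; \forall i, \qquad
\sum_{i=1}^{m} r_{i}\, x_{ij} \le C_{j} \;\; \forall j, \qquad
x_{ij} \in \{0,1\},

where x_{ij} indicates whether request i is allocated to provider j, c_{ij} is the allocation cost, r_i the amount of resource request i demands, and C_j provider j's capacity; all of these symbols are assumptions introduced here for illustration. Heuristics of the kind the dissertation studies trade the exactness of such a formulation for solution times that scale with the number of requests.
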
Dissertation/Thesis
Doctoral Dissertation, Industrial Engineering, 2016
27

Larkin, Jeffrey Michael. "Automatic service discovery for grid computing: a decentralized approach to the grid." 2005. http://etd.utk.edu/2005/LarkinJeffrey.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.S.) -- University of Tennessee, Knoxville, 2005.
Title from title page screen (viewed on Aug. 30, 2005). Thesis advisor: Jack Dongarra. Document formatted into pages (v, 28 p. : ill. (some col.)). Vita. Includes bibliographical references (p. 19-21).
28

Chung, Wu-Chun (鍾武君). "Decentralized Management of Resource Information and Discovery in Large-Scale Distributed Computing." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/60346442894734830336.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Doctoral dissertation
National Tsing Hua University
Department of Computer Science
Academic year 101 (2012-2013)
The increasing demand for computing power and services has driven the development of new paradigms of large-scale distributed computing systems, such as Grids and Clouds. To efficiently manage diverse and scattered resources, a resource information and discovery system is employed to record the status of resource attributes and to locate resources that fulfill the requirements of incoming demands. As the scale of a system expands, the major difficulty is to prevent communication bottlenecks, single points of failure and load imbalance when federating resources distributed over computer networks. In view of this, decentralized management of large-scale distributed resources has received growing attention. With its inherent scalability and robustness, the Peer-to-Peer (P2P) approach is attractive for such environments. In this dissertation, we conduct a series of studies on decentralized resource management for distributed computing environments. The first study investigates the synergy between Grids and P2P, introducing a decentralized Grid-to-Grid (G2G) framework that harmonizes autonomic Grid resources to realize Grid federation. The second study further explores the use of P2P networks for resource information and discovery in large-scale computing environments; we propose corresponding solutions for both structured and unstructured networks. The main results show that our approaches can efficiently organize resource information across the distributed environment and locate available resources under decentralized management. Finally, the third study is concerned with overlay maintenance when multiple overlays co-exist in a distributed computing system. The question of how to simplify common overlay maintenance motivated us to devise a cooperative strategy for multi-overlay maintenance. Experimental results demonstrate that the cost of multi-overlay maintenance can be significantly decreased by applying our approach.
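
As one common structured-P2P building block for such discovery, a consistent-hashing index maps resource descriptions and nodes onto the same identifier ring; each attribute is indexed at its clockwise successor node, Chord-style. The Python toy below is illustrative only, and all names in it are assumptions rather than the dissertation's actual scheme.

import bisect
import hashlib

def h(key: str) -> int:
    # hash node names and resource descriptions onto a small identifier ring
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** 16)

class Ring:
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)

    def successor(self, key: str) -> str:
        """The node responsible for a key: first node clockwise of h(key)."""
        i = bisect.bisect(self.points, (h(key),))
        return self.points[i % len(self.points)][1]

ring = Ring([f"node{i}" for i in range(8)])
# Publish: a resource advertisement is indexed at its successor node.
print(ring.successor("cpu=8,mem=16GB"))
# Discover: a query for the same attributes hashes to the same node
# (in a real Chord overlay the lookup takes O(log N) routing hops).
print(ring.successor("cpu=8,mem=16GB"))
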
29

Daneshmand, Amir. "Parallel and Decentralized Algorithms for Big-data Optimization over Networks." Thesis, 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

Recent decades have witnessed the rise of a data deluge generated by heterogeneous sources, e.g., social networks, streaming and marketing services, which has naturally created a surge of interest in the theory and applications of large-scale convex and non-convex optimization. For example, real-world instances of statistical learning problems such as deep learning and recommendation systems can generate vast volumes of spatially and temporally diverse data (up to petabytes in commercial applications) with millions of decision variables to be optimized. Such problems are often referred to as big-data problems. Solving these problems by standard optimization methods demands an intractable amount of centralized storage and computational resources, which is infeasible; overcoming this limitation is the foremost purpose of the parallel and decentralized algorithms developed in this thesis.


This thesis consists of two parts: (I) Distributed Nonconvex Optimization and (II) Distributed Convex Optimization.


In Part (I), we start by studying a winning paradigm in big-data optimization, the Block Coordinate Descent (BCD) algorithm, which ceases to be effective when problem dimensions grow overwhelmingly. In particular, we consider a general family of constrained non-convex composite large-scale problems defined on multicore computing machines equipped with shared memory. We design a hybrid deterministic/random parallel algorithm that efficiently solves such problems by synergistically combining Successive Convex Approximation (SCA) with greedy/random dimensionality-reduction techniques. We provide theoretical and empirical results showing the efficacy of the proposed scheme in the face of huge-scale problems. The next step is to broaden the network setting to general mesh networks modeled as directed graphs, for which we propose a class of gradient-tracking based algorithms with global convergence guarantees to critical points of the problem. We further explore the geometry of the landscape of the non-convex problems to establish second-order guarantees, strengthening our convergence results from local to global optimal solutions for a wide range of machine learning problems.
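
As a rough illustration of the gradient-tracking idea (here in its basic doubly-stochastic form, not the directed-graph variant the thesis develops), the Python sketch below has each agent mix with neighbors while maintaining a tracker y_i of the global gradient; the quadratic local objectives, complete-graph mixing matrix W and step size are all illustrative assumptions.

import numpy as np

np.random.seed(0)
n, d, alpha = 4, 3, 0.02
A = [np.random.randn(5, d) for _ in range(n)]
b = [np.random.randn(5) for _ in range(n)]

def grad(i, x):
    # gradient of the local objective f_i(x) = 0.5 * ||A_i x - b_i||^2
    return A[i].T @ (A[i] @ x - b[i])

W = np.full((n, n), 1.0 / n)                      # doubly stochastic mixing
x = np.zeros((n, d))                              # one row per agent
y = np.array([grad(i, x[i]) for i in range(n)])   # gradient trackers

for _ in range(1000):
    x_new = W @ x - alpha * y                     # consensus step + tracked descent
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x = x_new

avg_grad = sum(grad(i, x[0]) for i in range(n)) / n
print(np.linalg.norm(avg_grad))   # ~0: agents reach a stationary point of the sum
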


In Part (II), we focus on a family of distributed convex optimization problems defined over meshed networks. Relevant state-of-the-art algorithms often consider limited problem settings and exhibit pessimistic communication complexities with respect to their centralized variants, which raises an important question: can one achieve the rate of centralized first-order methods over networks and, moreover, can one improve upon their communication costs by using higher-order local solvers? To answer these questions, we propose an algorithm that utilizes surrogate objective functions in the local solvers (hence going beyond first-order realms, such as proximal-gradient methods) coupled with a perturbed (push-sum) consensus mechanism that aims to track the gradient of the central objective function locally. The algorithm is proven to match the convergence rate of its centralized counterparts, up to multiplicative network factors. When considering, in particular, Empirical Risk Minimization (ERM) problems with statistically homogeneous data across the agents, our algorithm employing high-order surrogates provably achieves faster rates than what is achievable by first-order methods. These improvements are made without exchanging any Hessian matrices over the network.
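
For readers unfamiliar with the push-sum building block mentioned above, here is a minimal ratio-consensus sketch in Python on a directed graph: agents reach the network-wide average using only pushes to out-neighbors, with no doubly-stochastic matrix required. The four-node digraph is an arbitrary example, and this is only the consensus ingredient, not the thesis's full perturbed optimization algorithm.

import numpy as np

out_neighbors = {0: [1], 1: [2, 3], 2: [0], 3: [0, 2]}   # strongly connected
n = 4
x = np.array([1.0, 5.0, 3.0, 7.0])    # initial values; true average is 4.0
s = x.copy()                           # value mass
w = np.ones(n)                         # weight mass

for _ in range(100):
    s_new, w_new = np.zeros(n), np.zeros(n)
    for i in range(n):
        targets = out_neighbors[i] + [i]        # push equal shares, keep one
        for j in targets:
            s_new[j] += s[i] / len(targets)
            w_new[j] += w[i] / len(targets)
    s, w = s_new, w_new

print(s / w)    # every entry is ~4.0: the ratio corrects the directed imbalance
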


Finally, we focus on the ill-conditioning issue that impacts the efficiency of decentralized first-order methods over networks and renders them impractical in terms of both computation and communication cost. A natural solution is to develop distributed second-order methods, but their requirement for Hessian information incurs substantial communication overhead on the network. To work around such exorbitant communication costs, we propose a "statistically informed" preconditioned cubic regularized Newton method that provably improves upon the rates of first-order methods. The proposed scheme does not require communication of Hessian information over the network and yet achieves the iteration complexity of centralized second-order methods up to the statistical precision. In addition, the approximate (second-order) nature of the utilized surrogate functions improves upon the per-iteration computational cost of our earlier proposed scheme in this setting.

30

Parker, Christopher A. C. "Collective decision-making in decentralized multiple-robot systems: a biologically inspired approach to making up all of your minds." PhD thesis, 2009. http://hdl.handle.net/10048/495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph.D.)--University of Alberta, 2009.
Title from PDF file main screen (viewed on Aug. 19, 2009). "A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Computing Science, University of Alberta." Includes bibliographical references.
31

Lee, Chang Shen. "Distributed Network Processing and Optimization under Communication Constraint." Thesis, 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In recent years, the amount of data in information processing systems has increased significantly; such workloads are often referred to as big data. The design of systems handling big data calls for a scalable approach, which brings distributed systems into the picture. In contrast to centralized systems, data are spread across a network of agents, and the agents cooperatively complete tasks through local communications and local computations. However, designing and analyzing distributed systems, in which no central coordinator with complete information is present, is a challenging task. Since multi-agent coordination rests on communication among agents, practical communication constraints should be taken into consideration in the design and analysis of such systems. The focus of this dissertation is the design and analysis of distributed network processing using finite-rate communications among agents. In particular, we address the following open questions: 1) can one design algorithms that balance a graph weight matrix using finite-rate and simplex communications among agents? 2) can one design algorithms that compute the average of the agents' states using finite-rate and simplex communications? and 3) going beyond ad-hoc algorithmic designs, can one design a black-box mechanism that transforms a general class of algorithms with unquantized communication into their finite-bit quantized counterparts?

This dissertation addresses the above questions. First, we propose novel distributed algorithms that solve the weight-balancing and average consensus problems using only finite-rate simplex communications among agents, compliant with the directed nature of the network topology. A novel convergence analysis is put forth, based on a new metric inspired by positional number-system representations. In the second half of this dissertation, distributed optimization subject to quantized communications is studied. Specifically, we consider a general class of linearly convergent distributed algorithms cast as fixed-point iterates, and propose a novel black-box quantization mechanism built around a quantizer that preserves linear convergence and is provably more communication-efficient than state-of-the-art quantization mechanisms. Extensive numerical results validate our theoretical findings.
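
As a toy illustration of iterating with finite-bit exchanges, the Python sketch below runs average consensus in which only uniformly quantized states cross the network, while each agent keeps its own quantization error locally so the network-wide average is preserved. It is a generic scheme under stated assumptions (states staying inside a known range, a doubly-stochastic complete-graph mixing matrix); the dissertation's mechanism additionally adapts the quantizer to retain linear convergence.

import numpy as np

def quantize(v, bits=6, lo=0.0, hi=8.0):
    # map each entry onto one of 2**bits uniform levels in [lo, hi];
    # assumes the iterates are known to remain inside this range
    step = (hi - lo) / (2 ** bits - 1)
    return lo + step * np.round((np.clip(v, lo, hi) - lo) / step)

n = 4
x = np.array([1.3, 5.7, 3.1, 6.9])     # true average: 4.25
W = np.full((n, n), 1.0 / n)           # doubly stochastic mixing matrix

for _ in range(30):
    q = quantize(x)                    # only these quantized values are transmitted
    x = x + W @ q - q                  # mix received quantized states; keeping the
                                       # local quantization error preserves the
                                       # network-wide average exactly
print(x)          # entries agree up to a quantization-error floor
print(x.mean())   # still ~4.25 (up to floating point): the average is preserved
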
