
Dissertations / Theses on the topic 'Networking and computing resources'

Consult the top 50 dissertations / theses for your research on the topic 'Networking and computing resources.'


1

Grassi, Giulio. "Connected cars : a networking challenge and a computing resource for smart cities." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066554/document.

Full text
Abstract:
In recent years we have seen a continuous integration of technology with the urban environment. This fusion aims to improve the efficiency and the quality of living in big urban agglomerates, while reducing the costs of their management. Cities are getting “smarter and smarter”, with a plethora of IoT devices and sensors deployed all over urban areas. Among those intelligent objects, an important role may be played by cars. Modern vehicles are (or will be) equipped with multiple network interfaces, and they have (or will have) computational capabilities and devices able to sense the environment. However, smart and connected cars do not represent only an opportunity, but also a challenge. Computation capabilities are limited, and mobility and the diversity of network interfaces are obstacles when providing connectivity to the Internet and to other vehicles. When addressing the networking aspect, we believe that a shift in the Internet model is needed, from a host-oriented architecture (IP) to a more content-focused paradigm, the Information Centric Networking (ICN) architectures. This thesis thus analyzes the benefits and the challenges of the ICN paradigm, in particular of Named Data Networking (NDN), in the VANET domain, presenting the first implementation of NDN for VANET (V-NDN) running on real cars. It then proposes Navigo, an NDN-based forwarding mechanism for content retrieval over V2V and V2I communications, with the goal of efficiently discovering and retrieving data while reducing the network overhead. Network mobility is not only a challenge for vehicles, but for any connected mobile device. For this reason, this thesis extends its initial area of interest (VANET) and addresses the network mobility problem for generic mobile nodes, proposing an NDN-based solution dubbed MAP-Me. MAP-Me tackles the intra-AS content provider mobility problem without relying on any fixed node in the network. It exploits notification messages at the time of a handover, together with the forwarding plane, to keep the data provider “always” reachable. Finally, the “connected car” concept is not the only novel element in modern vehicles. Cars won’t only be connected, but also smart, able to locally process data produced by in-car sensors. Vehicles are the perfect candidates to play an important role in the recently proposed Fog Computing architecture. Such an architecture moves computational tasks typical of the cloud away from it and brings them to the edge, closer to where the data is produced. To prove that such a model, with the car as a computing edge node, is already feasible with current technology and not only a vision for the future, this thesis presents ParkMaster. ParkMaster is a fully deployed edge-based system that combines vision and machine learning techniques, the edge (the driver’s smartphone) and the cloud to sense the environment and tackle the parking availability problem.
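For readers unfamiliar with NDN, the sketch below illustrates the name-driven forwarding logic that V-NDN and Navigo build on; it is a simplified model (Content Store, PIT and FIB as plain dictionaries), not code from the thesis.

```python
# Illustrative sketch (not the V-NDN implementation): basic NDN forwarding.
# Names, not hosts, drive retrieval: a node first checks its Content Store,
# then aggregates pending Interests, then forwards using longest-prefix
# match on names in the FIB.
from dataclasses import dataclass, field

@dataclass
class NdnNode:
    content_store: dict = field(default_factory=dict)   # name -> data
    pit: dict = field(default_factory=dict)             # name -> requesting faces
    fib: dict = field(default_factory=dict)             # name prefix -> outgoing face

    def on_interest(self, name: str, in_face: str):
        if name in self.content_store:                   # cache hit: satisfy locally
            return ("data", name, self.content_store[name], in_face)
        if name in self.pit:                              # Interest aggregation
            self.pit[name].add(in_face)
            return ("aggregated", name)
        self.pit[name] = {in_face}
        return ("forward", name, self.longest_prefix_match(name))

    def on_data(self, name: str, payload: bytes):
        self.content_store[name] = payload                # opportunistic caching
        faces = self.pit.pop(name, set())
        return [("data", name, payload, f) for f in faces]

    def longest_prefix_match(self, name: str):
        best = max((p for p in self.fib if name.startswith(p)), key=len, default=None)
        return self.fib.get(best)
```

Navigo extends this kind of name-based lookup to V2V and V2I settings so that content can be discovered and retrieved efficiently while limiting overhead.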
2

Grassi, Giulio. "Connected cars : a networking challenge and a computing resource for smart cities." Electronic Thesis or Diss., Paris 6, 2017. http://www.theses.fr/2017PA066554.

Full text
Abstract:
In recent years we have seen a continuous integration of technology with the urban environment. This fusion aims to improve the efficiency and the quality of living in big urban agglomerates, while reducing the costs of their management. Cities are getting “smarter and smarter”, with a plethora of IoT devices and sensors deployed all over urban areas. Among those intelligent objects, an important role may be played by cars. Modern vehicles are (or will be) equipped with multiple network interfaces, and they have (or will have) computational capabilities and devices able to sense the environment. However, smart and connected cars do not represent only an opportunity, but also a challenge. Computation capabilities are limited, and mobility and the diversity of network interfaces are obstacles when providing connectivity to the Internet and to other vehicles. When addressing the networking aspect, we believe that a shift in the Internet model is needed, from a host-oriented architecture (IP) to a more content-focused paradigm, the Information Centric Networking (ICN) architectures. This thesis thus analyzes the benefits and the challenges of the ICN paradigm, in particular of Named Data Networking (NDN), in the VANET domain, presenting the first implementation of NDN for VANET (V-NDN) running on real cars. It then proposes Navigo, an NDN-based forwarding mechanism for content retrieval over V2V and V2I communications, with the goal of efficiently discovering and retrieving data while reducing the network overhead. Network mobility is not only a challenge for vehicles, but for any connected mobile device. For this reason, this thesis extends its initial area of interest (VANET) and addresses the network mobility problem for generic mobile nodes, proposing an NDN-based solution dubbed MAP-Me. MAP-Me tackles the intra-AS content provider mobility problem without relying on any fixed node in the network. It exploits notification messages at the time of a handover, together with the forwarding plane, to keep the data provider “always” reachable. Finally, the “connected car” concept is not the only novel element in modern vehicles. Cars won’t only be connected, but also smart, able to locally process data produced by in-car sensors. Vehicles are the perfect candidates to play an important role in the recently proposed Fog Computing architecture. Such an architecture moves computational tasks typical of the cloud away from it and brings them to the edge, closer to where the data is produced. To prove that such a model, with the car as a computing edge node, is already feasible with current technology and not only a vision for the future, this thesis presents ParkMaster. ParkMaster is a fully deployed edge-based system that combines vision and machine learning techniques, the edge (the driver’s smartphone) and the cloud to sense the environment and tackle the parking availability problem.
3

Ahmed, Kishwar. "Energy Demand Response for High-Performance Computing Systems." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3569.

Full text
Abstract:
The growing computational demand of scientific applications has greatly motivated the development of large-scale high-performance computing (HPC) systems in the past decade. To accommodate the increasing demand of applications, HPC systems have been going through dramatic architectural changes (e.g., introduction of many-core and multi-core systems, rapid growth of complex interconnection networks for efficient communication between thousands of nodes), as well as a significant increase in size (e.g., modern supercomputers consist of hundreds of thousands of nodes). With such changes in architecture and size, the energy consumption of these systems has increased significantly. With the advent of exascale supercomputers in the next few years, power consumption of HPC systems will surely increase; some systems may even consume hundreds of megawatts of electricity. Demand response programs are designed to help energy service providers stabilize the power system by reducing the energy consumption of participating systems during periods of high power demand or temporary shortages in power supply. This dissertation focuses on developing energy-efficient demand-response models and algorithms to enable HPC systems' demand-response participation. In the first part, we present interconnection network models for performance prediction of large-scale HPC applications. They are based on interconnect topologies widely used in HPC systems: dragonfly, torus, and fat-tree. Our interconnect models are fully integrated with an implementation of the message-passing interface (MPI) that can mimic most of its functions with packet-level accuracy. Extensive experiments show that our integrated models provide good accuracy for predicting network behavior, while at the same time allowing for good parallel scaling performance. In the second part, we present an energy-efficient demand-response model to reduce HPC systems' energy consumption during demand-response periods. We propose HPC job scheduling and resource provisioning schemes to enable HPC systems' emergency demand-response participation. In the final part, we propose an economic demand-response model to allow both the HPC operator and HPC users to jointly reduce the HPC system's energy cost. Our proposed model allows the participation of HPC systems in economic demand-response programs through a contract-based rewarding scheme that can incentivize HPC users to participate in demand response.
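As a rough illustration of emergency demand response at the job level (not the dissertation's actual scheduling or provisioning scheme), a scheduler asked to shed a given amount of power could greedily cap the jobs whose performance suffers least per watt saved:

```python
# Hedged sketch: during a demand-response window the scheduler must shed
# `reduction_target` watts, so it greedily power-caps the running jobs with
# the smallest estimated slowdown per watt saved. Job data is illustrative.
def select_jobs_to_cap(jobs, reduction_target):
    """jobs: list of dicts with 'name', 'power_savings_w', 'slowdown_pct'."""
    ranked = sorted(jobs, key=lambda j: j["slowdown_pct"] / j["power_savings_w"])
    capped, saved = [], 0.0
    for job in ranked:
        if saved >= reduction_target:
            break
        capped.append(job["name"])
        saved += job["power_savings_w"]
    return capped, saved

jobs = [
    {"name": "cfd_run",  "power_savings_w": 40_000, "slowdown_pct": 12},
    {"name": "md_sim",   "power_savings_w": 25_000, "slowdown_pct": 4},
    {"name": "ml_train", "power_savings_w": 60_000, "slowdown_pct": 30},
]
print(select_jobs_to_cap(jobs, reduction_target=60_000))
```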
4

Soares, João Monteiro. "Integration of the cloud computing paradigm with the operator network’s infrastructure." Doctoral thesis, Universidade de Aveiro, 2015. http://hdl.handle.net/10773/14854.

Full text
Abstract:
The proliferation of Internet access allows users to consume services available directly through the Internet, which translates into a change in the paradigm of using applications and in the way of communicating, popularizing the so-called cloud computing paradigm. Cloud computing brings with it requirements at two different levels: at the cloud level, usually relying on centralized data centers, where information technology and network resources must be able to guarantee the demand of such services; and at the access level, i.e., depending on the service being consumed, different quality of service is required in the access network, which is a Network Operator (NO) domain. In summary, there is an obvious network dependency. However, the network has been playing a relatively minor role, mostly as a provider of (best-effort) connectivity within the cloud and in the access network. The work developed in this Thesis enables the effective integration of cloud and NO domains, providing the required network support for the cloud. We propose a framework and a set of associated mechanisms for the integrated management and control of cloud computing and NO domains to provide end-to-end services. Moreover, we elaborate a thorough study on the embedding of virtual resources in this integrated environment. The study focuses on maximizing the hosting of virtual resources on the physical infrastructure through optimal embedding strategies (considering the initial allocation of resources as well as adaptations through time), while at the same time minimizing the costs associated with energy consumption, in single and multiple domains. Furthermore, we explore how the NO can take advantage of the integrated environment to host traditional network functions. In this sense, we study how virtual network Service Functions (SFs) should be modelled and managed in a cloud environment and enhance the framework accordingly. A thorough evaluation of the proposed solutions was performed in the scope of this Thesis, assessing their benefits. We implemented proofs of concept to demonstrate the added value, feasibility and easy deployment characteristics of the proposed framework. Furthermore, the evaluation of the embedding strategies was performed through simulation and Integer Linear Programming (ILP) solving tools, and it showed that it is possible to reduce the energy consumption of the physical infrastructure without jeopardizing the acceptance of virtual resources. These savings can be further increased by allowing virtual resource adaptation through time. However, one should keep in mind the costs associated with adaptation processes: they can be minimized, but the acceptance of virtual resources may also be reduced. This trade-off has also been a subject of the work in this Thesis.
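The embedding problem described above can be pictured with a toy greedy placement that consolidates virtual resources onto as few powered-on hosts as possible; the thesis itself uses ILP formulations and adaptation over time, so this is only a hedged sketch of the energy/acceptance trade-off.

```python
# Simplified first-fit-decreasing embedding sketch (illustrative only):
# prefer hosts that are already powered on, so fewer hosts stay active
# and energy consumption is reduced, at the risk of rejecting requests.
def embed(requests, hosts):
    """requests: {vm_name: cpu_demand}; hosts: {host_name: cpu_capacity}."""
    free = dict(hosts)
    active, placement = set(), {}
    for vm, demand in sorted(requests.items(), key=lambda kv: -kv[1]):
        # already-active hosts first, then the host with most free capacity
        candidates = sorted(free, key=lambda h: (h not in active, -free[h]))
        for h in candidates:
            if free[h] >= demand:
                placement[vm] = h
                free[h] -= demand
                active.add(h)
                break
        else:
            return None, active      # request rejected
    return placement, active

placement, active = embed({"vm1": 4, "vm2": 2, "vm3": 2}, {"h1": 8, "h2": 8})
print(placement, "active hosts:", active)
```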
5

Wickboldt, Juliano Araújo. "Flexible and integrated resource management for IaaS cloud environments based on programmability." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/131894.

Full text
Abstract:
Infrastructure as a Service (IaaS) clouds are becoming an increasingly common way to deploy modern Internet applications. Many cloud management platforms are available for users that want to build a private or public IaaS cloud (e.g., OpenStack, Eucalyptus, OpenNebula). A common design aspect of current platforms is their black-box-like controlling nature. In general, cloud management platforms ship with one or a set of resource allocation strategies hard-coded into their core. Thus, cloud administrators have few opportunities to influence how resources are actually managed (e.g., virtual machine placement or virtual link path selection). Administrators could benefit from customizations in resource management strategies, for example, to achieve environment-specific objectives or to enable application-oriented resource allocation. Furthermore, resource management concerns in clouds are generally divided into computing, storage, and networking. Ideally, these three concerns should be addressed at the same level of importance by platform implementations. However, as opposed to computing and storage management, which have been extensively investigated, network management in cloud environments is rather incipient. The lack of flexibility and unbalanced support for resource management hinders the adoption of clouds as a viable execution environment for many modern Internet applications with strict requirements for elasticity or Quality of Service. In this thesis, a new concept of cloud management platform is introduced where resource management is made flexible by the addition of programmability to the core of the platform. Moreover, a simplified object-oriented API is introduced to enable administrators to write and run resource management programs to handle all kinds of resources from a single point. An implementation is presented as a proof of concept, including a set of drivers to deal with modern virtualization and networking technologies, such as software-defined networking with OpenFlow, Open vSwitches, and Libvirt. Two case studies are conducted to evaluate the use of resource management programs for the deployment and optimization of applications over an emulated network using Linux virtualization containers and Open vSwitches running the OpenFlow protocol. Results show the feasibility of the proposed approach and how deployment and optimization programs are able to achieve different objectives defined by the administrator.
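To make the idea of administrator-written resource management programs concrete, the following sketch shows what such a program could look like against a hypothetical, simplified object-oriented API; the class and method names are illustrative assumptions, not the platform's actual interface.

```python
# Hypothetical sketch of a "resource management program" against a toy API.
class CloudAPI:
    def __init__(self):
        self._hosts = {"h1": 8, "h2": 8}        # host -> free vCPUs
        self.placements = {}

    def hosts(self):
        return dict(self._hosts)

    def create_vm(self, name, vcpus, host):
        if self._hosts[host] < vcpus:
            raise RuntimeError(f"not enough capacity on {host}")
        self._hosts[host] -= vcpus
        self.placements[name] = host

    def create_link(self, a, b, mbps):
        print(f"virtual link {a}<->{b} at {mbps} Mbps")

def deployment_program(api: CloudAPI):
    """Administrator-defined strategy: spread VMs across the least-loaded hosts."""
    for i in range(3):
        host = max(api.hosts(), key=api.hosts().get)
        api.create_vm(f"web{i}", vcpus=2, host=host)
    api.create_link("web0", "web1", mbps=100)

api = CloudAPI()
deployment_program(api)
print(api.placements)
```

The point of such programmability is that the placement policy lives in the administrator's program rather than being hard-coded in the platform core.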
6

Ranadive, Adit Uday. "Virtualized resource management in high performance fabric clusters." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54241.

Full text
Abstract:
Providing performance and isolation guarantees for applications running in virtualized datacenter environments requires continuous management of the underlying physical resources. For communication- and I/O-intensive applications running on such platforms, the management methods must adequately deal with the shared use of the high-performance fabrics these applications require. In particular, new classes of latency-sensitive and data-intensive workloads running in virtualized environments rely on emerging fabrics like 40+Gbps Ethernet and InfiniBand/RoCE with support for RDMA, VMM-bypass and hardware-level virtualization (SR-IOV). However, the benefits provided by these technology advances are offset by several management constraints: (i) the inability of the hypervisor to monitor the VMs’ usage of these fabrics can affect the platform’s ability to provide isolation and performance guarantees, (ii) the hypervisor cannot provide fine-grained I/O provisioning or perform management decisions for VMs, thus reducing the degree of consolidation that can be supported on the platforms, and (iii) without such support it is harder to integrate these fabrics into emerging cloud computing platforms and datacenter fabric management solutions. This is made particularly challenging for workloads spanning multiple VMs, utilizing physical resources distributed across multiple server nodes and the interconnection fabric. This thesis addresses the problem of realizing a flexible, dynamic resource management system for virtualized platforms with high performance fabrics. We make the following key contributions: (i) A lightweight monitoring tool, IBMon, integrated with the hypervisor to monitor VMs’ use of RDMA-enabled virtualized interconnects, using memory introspection techniques. (ii) The design and construction of a resource management system that leverages IBMon to provide latency-sensitive applications performance guarantees. This system is built on microeconomic principles of supply and demand and can be deployed on a per-node (Resource Exchange) or a multi-node (Distributed Resource Exchange) basis. Fine-grained resource allocations can be enforced through several mechanisms, including CPU capping or fabric-level congestion control. (iii) Sphinx, a fabric management solution that leverages Resource Exchange to orchestrate network and provide latency proportionality for consolidated workloads, based on user/application-specified policies. (iv) Implementation and experimental evaluation using InfiniBand clusters virtualized with the Xen or KVM hypervisor, managed via the OpenFloodlight SDN controller, and using representative data-intensive and latency-sensitive benchmarks.
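The supply-and-demand principle behind the Resource Exchange can be pictured with a toy bid-proportional allocator (purely illustrative, not the system's mechanism): VMs bid for a shared fabric-bandwidth budget and receive shares proportional to their bids, capped by their demands.

```python
# Illustrative bid-proportional allocation over a shared bandwidth budget.
def allocate(bids, demands, capacity):
    """bids/demands: {vm: value}; returns {vm: allocated share of capacity}."""
    alloc, remaining, pending = {}, capacity, dict(bids)
    while pending and remaining > 1e-9:
        total_bid = sum(pending.values())
        next_pending = {}
        for vm, bid in pending.items():
            share = remaining * bid / total_bid
            granted = min(share, demands[vm] - alloc.get(vm, 0.0))
            alloc[vm] = alloc.get(vm, 0.0) + granted
            if alloc[vm] < demands[vm] - 1e-9:
                next_pending[vm] = bid         # still unsatisfied, keep bidding
        remaining = capacity - sum(alloc.values())
        if next_pending == pending:
            break
        pending = next_pending
    return alloc

print(allocate({"db": 3, "web": 1}, {"db": 6, "web": 4}, capacity=8))
```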
7

Adam, Jonathan. "Analyzing Function and Potential in Cuba's El Paquete : A Postcolonial Approach." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229990.

Full text
Abstract:
The dire state of Cuban internet connectivity has inspired local informal innovations. One such innovation is El Paquete, a weekly distribution of downloaded content spread through an informal network. Taking a postcolonial approach, I investigate through user experiences how this network operates in a resource-poor environment. This investigation articulates a model of El Paquete centered on social interactions, which inform the system’s function but also shape El Paquete’s design and role in society. Based on this model, a set of speculative design exercises probe possibilities to streamline El Paquete’s compilation, involve consumer preferences in its design directions, or act as a disruption tolerant network. In uncovering the technical possibilities of El Paquete, these designs illuminate how its current design serves Cuban communities by embodying realities and limitations of Cuban society. El Paquete’s embodiment of informal innovation serves as a call to designers to continuously rethink development design processes, centering communities and their knowledge and technical practices.
8

Peres, Martin. "A holistic approach to green networking in wireless networks : collaboration among autonomic systems as a mean towards efficient resource-sharing." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0433/document.

Full text
Abstract:
The last twenty years saw the emergence of wireless systems in everyday life. They made possible technologies such as mobile phones, WiFi or mobile Internet, which are now taken for granted in today’s society. The environmental impact of Information and Communications Technology (ICT) has been rising exponentially, reaching that of the airline industry. The green computing initiative was created in response to this observation in order to meet a 15%-30% reduction in greenhouse gases by 2020, compared to estimations made in 2002, so as to keep the global temperature increase below 2°C. In this thesis, we studied power-saving techniques in wireless networks and how they interact with each other to provide a holistic view of green networking. We also take into account the radio frequency resource, which is the most commonly used communication medium for wireless systems and is becoming a scarce resource due to our society’s ever-increasing need for mobile bandwidth. This thesis goes down the network stack before going up the hardware and software stacks. Contributions have been made at most layers in order to propose an autonomic wireless network where nodes can work collaboratively to improve the network’s performance and globally reduce the radio frequency spectrum usage while also increasing their battery life.
9

Amarasinghe, Heli. "Network Resource Management in Infrastructure-as-a-Service Clouds." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39141.

Full text
Abstract:
Cloud Infrastructure-as-a-Service (IaaS) is a form of utility computing which has emerged with the recent innovations in the service computing and data communication technologies. Regardless of the fact that IaaS is attractive for application service providers, satisfying user requests while ensuring cloud operational objectives is a complicated task that raises several resource management challenges. Among these challenges, limited controllability over network services delivered to cloud consumers is prominent in single datacenter cloud environments. In addition, the lack of seamless service migration and optimization, poor infrastructure utilization, and unavailability of efficient fault tolerant techniques are noteworthy challenges in geographically distributed datacenter clouds. Initially in this thesis, a datacenter resource management framework is presented to address the challenge of limited controllability over cloud network traffic. The proposed framework integrates network virtualization functionalities offered by software defined networking (SDN) into cloud ecosystem. To provide rich traffic control features to IaaS consumers, control plane virtualization capabilities offered by SDN have been employed. Secondly, a quality of service (QoS) aware seamless service migration and optimization framework has been proposed in the context of geo-distributed datacenters. Focus has been given to a mobile end-user scenario where frequent cloud service migrations are required to mitigate QoS violations. Finally, an SDN-based dynamic fault restoration scheme and a shared backup-based fault protection scheme have been proposed. The fault restoration has been achieved by introducing QoS-aware reactive and shared risk link group-aware proactive path computation algorithms. Shared backup protection has been achieved by optimizing virtual and backup link embedding through a novel integer linear programming approach. The proposed solutions significantly improve bandwidth utilization in inter-datacenter networks while recovering from substrate link failures.
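As a hedged illustration of QoS-aware path computation in the spirit described above (not the thesis's exact algorithms), one can prune substrate links that cannot offer the requested bandwidth and then run a shortest-path search on latency over what remains:

```python
# Illustrative QoS-aware path computation: bandwidth feasibility filter
# followed by Dijkstra on latency. Topology values are made up.
import heapq

def qos_path(links, src, dst, min_bw):
    """links: list of (u, v, latency_ms, free_bw_mbps); returns (latency, path)."""
    adj = {}
    for u, v, lat, bw in links:
        if bw >= min_bw:                      # drop links that cannot carry the demand
            adj.setdefault(u, []).append((v, lat))
            adj.setdefault(v, []).append((u, lat))
    dist, heap = {src: 0.0}, [(0.0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return d, path
        if d > dist.get(node, float("inf")):
            continue
        for nxt, lat in adj.get(node, []):
            nd = d + lat
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt, path + [nxt]))
    return None

links = [("A", "B", 5, 800), ("B", "C", 5, 200), ("A", "C", 20, 900)]
print(qos_path(links, "A", "C", min_bw=500))   # B-C is pruned, so A->C directly
```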
10

Mechtri, Marouen. "Virtual networked infrastructure provisioning in distributed cloud environments." Thesis, Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0028/document.

Full text
Abstract:
Cloud computing emerged as a new paradigm for on-demand provisioning of IT resources and for infrastructure externalization, and it is rapidly and fundamentally revolutionizing the way IT is delivered and managed. The resulting incremental Cloud adoption is fostering, to some extent, cloud providers' cooperation and increasing the needs of tenants and the complexity of their demands. Tenants need to network their distributed and geographically spread cloud resources and services. They also want to easily accomplish their deployments and instantiations across heterogeneous cloud platforms. Traditional cloud providers focus on compute resource provisioning and offer mostly virtual machines to tenants and cloud service consumers, who actually expect full-fledged (complete) networking of their virtual and dedicated resources. They not only want to control and manage their applications but also control connectivity to easily deploy complex network functions and services in their dedicated virtual infrastructures. The needs of users are thus growing beyond the simple provisioning of virtual machines to the acquisition of complex, flexible, elastic and intelligent virtual resources and services. The goal of this thesis is to enable the provisioning and instantiation of this type of more complex resources, while empowering tenants with control and management capabilities, and to enable the convergence of cloud and network services. To reach these goals, the thesis proposes mapping algorithms for optimized in-data-center and in-network resource hosting according to the tenants' virtual infrastructure requests. In parallel with the emergence of cloud services, traditional networks are being extended and enhanced with software networks relying on the virtualization of network resources and functions. Software Defined Networks are especially relevant as they decouple network control and data forwarding and provide the needed network programmability and system and network management capabilities. In such a context, the first part proposes optimal (exact) and heuristic placement algorithms to find the best mapping between the tenants' requests and the hosting infrastructures while respecting the objectives expressed in the demands. This includes localization constraints to place some of the virtual resources and services in the same host and to distribute other resources in distinct hosts. The proposed algorithms achieve simultaneous node (host) and link (connection) mappings. A heuristic algorithm is proposed to address the poor scalability and high complexity of the exact solution(s). The heuristic scales much better and is several orders of magnitude more efficient in terms of convergence time towards near-optimal and optimal solutions. This is achieved by reducing the complexity of the mapping process using topological patterns to map virtual graph requests to physical graphs representing, respectively, the tenants' requests and the providers' physical infrastructures. The proposed approach relies on graph decomposition into topology patterns and bipartite graph matching techniques. The third part proposes an open source Cloud Networking framework to achieve cloud and network resource provisioning and instantiation in order to respectively host and activate the tenants' virtual resources and services. This framework enables and facilitates dynamic networking of distributed cloud services and applications. This solution relies on a Cloud Network Gateway Manager and gateways to establish dynamic connectivity between cloud and network resources. The CNG-Manager provides the application networking control and supports the deployment of the needed underlying network functions in the tenant's desired infrastructure (or slice, since the physical infrastructure is shared by multiple tenants, with each tenant receiving a dedicated and isolated portion/share of the physical resources).
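The bipartite matching step mentioned above can be pictured with a toy example: once a placement cost is known for every (virtual node, host) pair, a minimum-cost assignment yields a simultaneous node mapping. The cost values and names below are made up for illustration, not taken from the thesis.

```python
# Toy minimum-cost bipartite assignment of virtual nodes to hosts.
import numpy as np
from scipy.optimize import linear_sum_assignment

virtual_nodes = ["vnf-fw", "vnf-lb", "vm-app"]
hosts = ["host1", "host2", "host3"]

# cost[i][j]: assumed cost of placing virtual_nodes[i] on hosts[j]
# (e.g. energy, residual capacity, or distance to already-mapped neighbours)
cost = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 0.5, 5.0],
    [3.0, 2.0, 1.0],
])

rows, cols = linear_sum_assignment(cost)
mapping = {virtual_nodes[i]: hosts[j] for i, j in zip(rows, cols)}
print(mapping, "total cost:", cost[rows, cols].sum())
```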
11

Bui, Thai Le Quy. "Using Spammers' Computing Resources for Volunteer Computing." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1629.

Full text
Abstract:
Spammers are continually looking to circumvent counter-measures seeking to slow them down. An immense amount of time and money is currently devoted to hiding spam, but not enough is devoted to effectively preventing it. One approach for preventing spam is to force the spammer's machine to solve a computational problem of varying difficulty before granting access. The idea is that suspicious or problematic requests are given difficult problems to solve while legitimate requests are allowed through with minimal computation. Unfortunately, most systems that employ this model waste the computing resources being used, as they are directed towards solving cryptographic problems that provide no societal benefit. While systems such as reCAPTCHA and FoldIt have allowed users to contribute solutions to useful problems interactively, an analogous solution for non-interactive proof-of-work does not exist. Towards this end, this paper describes MetaCAPTCHA and reBOINC, an infrastructure for supporting useful proof-of-work that is integrated into a web spam throttling service. The infrastructure dynamically issues CAPTCHAs and proof-of-work puzzles while ensuring that malicious users solve challenging puzzles. Additionally, it provides a framework that enables the computational resources of spammers to be redirected towards meaningful research. To validate the efficacy of our approach, prototype implementations based on OpenCV and BOINC are described that demonstrate the ability to harvest spammer's resources for beneficial purposes.
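A minimal hashcash-style sketch conveys the proof-of-work idea: the server scales the puzzle difficulty with how suspicious a request looks, and the client must find a nonce whose hash falls below a target. MetaCAPTCHA differs in that it redirects this work towards useful computation, so the snippet below shows only the generic mechanism.

```python
# Generic proof-of-work sketch (not MetaCAPTCHA/reBOINC code).
import hashlib
from itertools import count

def solve(challenge: bytes, difficulty: int) -> int:
    target = 1 << (256 - difficulty)          # smaller target = harder puzzle
    for nonce in count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty))

nonce = solve(b"request-42", difficulty=16)   # ~65k hashes on average
print(nonce, verify(b"request-42", nonce, 16))
```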
12

Kulakov, Y., and R. Rader. "Computing Resources Scaling Survey." Thesis, Sumy State University, 2017. http://essuir.sumdu.edu.ua/handle/123456789/55750.

Full text
Abstract:
The results of a survey on the usage of scalable environments, peak workload management, and automatic scaling configuration among IT companies are presented and discussed in this paper. The hypothesis that most companies use automatic scaling based on static thresholds is checked. An insight into the most popular setups of manual and automatic scalable systems on the market is given.
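The static-threshold policy that the survey examines can be summarized in a few lines; the threshold and bound values below are illustrative assumptions, not figures from the paper.

```python
# Static-threshold autoscaling sketch: scale out above a CPU ceiling,
# scale in below a floor, within fixed instance bounds.
def autoscale(instances, avg_cpu, scale_out_at=70, scale_in_at=30,
              min_instances=2, max_instances=20):
    if avg_cpu > scale_out_at and instances < max_instances:
        return instances + 1
    if avg_cpu < scale_in_at and instances > min_instances:
        return instances - 1
    return instances

print(autoscale(instances=4, avg_cpu=85))   # -> 5 (scale out)
print(autoscale(instances=4, avg_cpu=20))   # -> 3 (scale in)
```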
13

Mechtri, Marouen. "Virtual networked infrastructure provisioning in distributed cloud environments." Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0028.

Full text
Abstract:
Cloud computing emerged as a new paradigm for on-demand provisioning of IT resources and for infrastructure externalization, and it is rapidly and fundamentally revolutionizing the way IT is delivered and managed. The resulting incremental Cloud adoption is fostering, to some extent, cloud providers' cooperation and increasing the needs of tenants and the complexity of their demands. Tenants need to network their distributed and geographically spread cloud resources and services. They also want to easily accomplish their deployments and instantiations across heterogeneous cloud platforms. Traditional cloud providers focus on compute resource provisioning and offer mostly virtual machines to tenants and cloud service consumers, who actually expect full-fledged (complete) networking of their virtual and dedicated resources. They not only want to control and manage their applications but also control connectivity to easily deploy complex network functions and services in their dedicated virtual infrastructures. The needs of users are thus growing beyond the simple provisioning of virtual machines to the acquisition of complex, flexible, elastic and intelligent virtual resources and services. The goal of this thesis is to enable the provisioning and instantiation of this type of more complex resources, while empowering tenants with control and management capabilities, and to enable the convergence of cloud and network services. To reach these goals, the thesis proposes mapping algorithms for optimized in-data-center and in-network resource hosting according to the tenants' virtual infrastructure requests. In parallel with the emergence of cloud services, traditional networks are being extended and enhanced with software networks relying on the virtualization of network resources and functions. Software Defined Networks are especially relevant as they decouple network control and data forwarding and provide the needed network programmability and system and network management capabilities. In such a context, the first part proposes optimal (exact) and heuristic placement algorithms to find the best mapping between the tenants' requests and the hosting infrastructures while respecting the objectives expressed in the demands. This includes localization constraints to place some of the virtual resources and services in the same host and to distribute other resources in distinct hosts. The proposed algorithms achieve simultaneous node (host) and link (connection) mappings. A heuristic algorithm is proposed to address the poor scalability and high complexity of the exact solution(s). The heuristic scales much better and is several orders of magnitude more efficient in terms of convergence time towards near-optimal and optimal solutions. This is achieved by reducing the complexity of the mapping process using topological patterns to map virtual graph requests to physical graphs representing, respectively, the tenants' requests and the providers' physical infrastructures. The proposed approach relies on graph decomposition into topology patterns and bipartite graph matching techniques. The third part proposes an open source Cloud Networking framework to achieve cloud and network resource provisioning and instantiation in order to respectively host and activate the tenants' virtual resources and services. This framework enables and facilitates dynamic networking of distributed cloud services and applications. This solution relies on a Cloud Network Gateway Manager and gateways to establish dynamic connectivity between cloud and network resources. The CNG-Manager provides the application networking control and supports the deployment of the needed underlying network functions in the tenant's desired infrastructure (or slice, since the physical infrastructure is shared by multiple tenants, with each tenant receiving a dedicated and isolated portion/share of the physical resources).
14

Blair, James M. "Architectures for Real-Time Automatic Sign Language Recognition on Resource-Constrained Device." UNF Digital Commons, 2018. https://digitalcommons.unf.edu/etd/851.

Full text
Abstract:
Powerful, handheld computing devices have proliferated among consumers in recent years. Combined with new cameras and sensors capable of detecting objects in three-dimensional space, new gesture-based paradigms of human computer interaction are becoming available. One possible application of these developments is an automated sign language recognition system. This thesis reviews the existing body of work regarding computer recognition of sign language gestures as well as the design of systems for speech recognition, a similar problem. Little work has been done to apply the well-known architectural patterns of speech recognition systems to the domain of sign language recognition. This work creates a functional prototype of such a system, applying three architectures seen in speech recognition systems, using a Hidden Markov classifier with 75-90% accuracy. A thorough search of the literature indicates that no cloud-based system has yet been created for sign language recognition and this is the first implementation of its kind. Accordingly, there have been no empirical performance analyses regarding a cloud-based Automatic Sign Language Recognition (ASLR) system, which this research provides. The performance impact of each architecture, as well as the data interchange format, is then measured based on response time, CPU, memory, and network usage across an increasing vocabulary of sign language gestures. The results discussed herein suggest that a partially-offloaded client-server architecture, where feature extraction occurs on the client device and classification occurs in the cloud, is the ideal selection for all but the smallest vocabularies. Additionally, the results indicate that for the potentially large data sets transmitted for 3D gesture classification, a fast binary interchange protocol such as Protobuf has vastly superior performance to a text-based protocol such as JSON.
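The partially-offloaded split described above can be sketched as follows: the client reduces raw 3D joint data to a compact feature sequence, and only that sequence crosses the network for classification. The feature layout and the stubbed classifier below are illustrative assumptions, not the thesis's implementation.

```python
# Sketch of the client/server split for gesture recognition.
import json

def extract_features(frames):
    """Client side: reduce each frame of (x, y, z) joints to offsets relative
    to the palm, which is much smaller than the raw sensor stream."""
    features = []
    for frame in frames:
        px, py, pz = frame["palm"]
        features.append([[x - px, y - py, z - pz] for (x, y, z) in frame["fingers"]])
    return features

def classify_on_server(payload: str) -> str:
    """Cloud side: a real system would score the sequence against one trained
    HMM per sign and return the most likely label; this is a stub."""
    sequence = json.loads(payload)
    return "hello" if len(sequence) > 10 else "unknown"

frames = [{"palm": (0, 0, 0), "fingers": [(1, 2, 0), (2, 1, 0)]}] * 12
payload = json.dumps(extract_features(frames))   # JSON here; Protobuf was faster
print(classify_on_server(payload))
```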
APA, Harvard, Vancouver, ISO, and other styles
15

Marini, Riccardo. "Software Defined Networking Architectures for LoRaWAN." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
This thesis proposes new solutions for LoRaWAN networks taking advantage of Software Defined Networking architectures. In particular, it provides an analysis of the current implementation of the Adaptive Data Rate mechanism defined by the LoRaWAN standard, as well as a proposal for a new algorithm. Both a cloud-based and a fog-based architecture are considered, in order to observe the differences between the two approaches in a number of different scenarios. The proposed algorithms and the two architectures are compared via numerical results achieved through simulations and experimental tests.
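For readers unfamiliar with ADR, the following simplified sketch shows the commonly documented server-side decision logic (an SNR margin traded against data rate and transmit power); the thresholds, table values and step sizes are illustrative assumptions, not the thesis's algorithm or the exact LoRaWAN specification.

```python
# Simplified network-server-side Adaptive Data Rate (ADR) decision: compute the SNR
# margin from recent uplinks, then raise the data rate or lower the TX power.
REQUIRED_SNR = {0: -20.0, 1: -17.5, 2: -15.0, 3: -12.5, 4: -10.0, 5: -7.5}  # dB per DR
INSTALLATION_MARGIN = 10.0   # dB safety margin (assumed)
MAX_DR, MIN_TX_POWER = 5, 2  # device limits (assumed)

def adr_decision(snr_history, data_rate, tx_power):
    """Return (new_data_rate, new_tx_power) for one end device."""
    snr_max = max(snr_history)                       # best of the last N uplinks
    margin = snr_max - REQUIRED_SNR[data_rate] - INSTALLATION_MARGIN
    steps = int(margin // 3)                         # one step per 3 dB of slack
    while steps > 0:
        if data_rate < MAX_DR:
            data_rate += 1                           # faster, shorter airtime
        elif tx_power > MIN_TX_POWER:
            tx_power -= 1                            # save energy instead
        else:
            break
        steps -= 1
    return data_rate, tx_power

# Example: a device close to the gateway, currently at DR2 / TX power index 5.
print(adr_decision([-3.0, -1.5, -2.2, 0.5], data_rate=2, tx_power=5))  # (3, 5)
```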
APA, Harvard, Vancouver, ISO, and other styles
16

Foss, Richard John. "A networking approach to sharing music studio resources." Thesis, Rhodes University, 1996. http://hdl.handle.net/10962/d1006660.

Full text
Abstract:
This thesis investigates the extent to which networking technology can be used to provide remote workstation access to a pool of shared music studio resources. A pilot system is described in which MIDI messages, studio control data, and audio signals flow between the workstations and a studio server. A booking and timing facility avoids contention and allows for accurate reports of studio usage. The operation of the system has been evaluated in terms of its ability to satisfy three fundamental goals, namely the remote, shared and centralized access to studio resources. Three essential network configurations have been identified, incorporating a mix of star and bus topologies, and their relative potential for satisfying the fundamental goals has been highlighted.
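The booking idea can be illustrated with a minimal sketch: grant a studio slot only if it does not overlap an existing reservation, and derive usage reports from the granted slots. Class and method names are invented for illustration and do not come from the pilot system.

```python
# Minimal booking facility: non-overlapping reservations avoid contention for a
# shared studio resource, and accurate usage reports fall out of the booking log.
from datetime import datetime

class StudioBooking:
    def __init__(self):
        self.bookings = []  # list of (start, end, user)

    def request(self, user, start, end):
        """Grant the slot only if it does not overlap an existing booking."""
        for s, e, _ in self.bookings:
            if start < e and s < end:   # intervals overlap
                return False
        self.bookings.append((start, end, user))
        return True

    def usage_report(self):
        """Per-user studio usage in hours, summed from granted bookings."""
        totals = {}
        for s, e, user in self.bookings:
            totals[user] = totals.get(user, 0.0) + (e - s).total_seconds() / 3600
        return totals

studio = StudioBooking()
print(studio.request("alice", datetime(2024, 1, 8, 9), datetime(2024, 1, 8, 11)))   # True
print(studio.request("bob",   datetime(2024, 1, 8, 10), datetime(2024, 1, 8, 12)))  # False
print(studio.usage_report())
```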
APA, Harvard, Vancouver, ISO, and other styles
17

Peraccini, Simone. "Named Data Networking for Computing in the Mobile Edge." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/17059/.

Full text
Abstract:
Oggi connettersi a Internet è una pratica di uso comune, destinata a crescere notevolmente con l'avvento dell'Internet of Things. Nonostante il grande successo, Internet presenta diversi limiti architetturali che lo rendono un veicolo povero in confronto alla grande mole di contenuti trasmessi. La seconda importante limitazione riguarda la sicurezza che è applicata solamente a livello di host ma non di contenuto. Per promuovere la prototipizzazione di nuovi paradigmi architetturali, nel 2010 la National Science Fondation crea il programma Future Internet Architecture. Tra i progetti nati, il più emergente è Named Data Networking(NDN). Il suo intento è quello di distogliere l'attenzione da "dove" trovare una risorsa per concentrarsi su "cosa" applicazioni e utenti cerchino. Per questo NDN pensa che l'identificazione non debba più riguardare gli host ma i contenuti. Quest'ultimi sono distinti da un nome univoco, godono di immutabilità e racchiudono la strategia di sicurezza. L'obiettivo di questo progetto è quello di dare un piccolo contributo a Named Data Networking, ideando e sviluppando un protocollo di computazione cooperativa, che operi per mezzo del protocollo NDN. L'intento del protocollo è quello di permettere a generici dispositivi wifi di assegnare l'esecuzione di alcuni dei loro task ad un nodo nelle vicinanze. Inoltre, la tesi accenna come il protocollo si inserisca nel tema dell'Edge computing, che assieme al Fog computing pone le basi per l'evoluzione dell'architettura Cloud. Il lavoro ha portato alla realizzazione di un prototipo del protocollo che è possibile installare sui nodi del simulatore di reti ndnSIM. Esso soddisfa i comportamenti base richiesti ed offre buone prestazioni negli scenari statici. L'elaborato si conclude con alcuni test che ne confermano il corretto funzionamento ma allo stesso tempo denotano alcuni aspetti da migliorare negli sviluppi futuri.
APA, Harvard, Vancouver, ISO, and other styles
18

Kodumuri, Samyuktha. "RemoraBook: Privacy-Preserving Social Networking Based On Remora Computing." Cleveland State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=csu1600208690003839.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Zhang, Jun. "Flexible distributed computing with volunteered resources." Thesis, Queen Mary, University of London, 2010. http://qmro.qmul.ac.uk/xmlui/handle/123456789/358.

Full text
Abstract:
Nowadays, computational grids have evolved to a stage where they can comprise many volunteered resources owned by different individual users and/or institutions, such as desktop grids and volunteered computing grids. This brings benefits for large-scale computing, as more resources are available to exploit. On the other hand, the inherent characteristics of the volunteered resources bring some challenges for efficiently exploiting them. For example, jobs may not be able to be executed by some resources, as the computing resources can be heterogeneous. Furthermore, the resources can be volatile as the resource owners usually have the right to decide when and how to donate the idle Central Processing Unit (CPU) cycles of their computers. Therefore, in order to utilise volunteered resources efficiently, this research investigated solutions from different aspects. Firstly, this research proposes a new computational Grid architecture based on Java and Java application migration technologies to provide fundamental support for coping with these challenges. This proposed architecture supports heterogeneous resources, ensuring local activities are not affected by Grid jobs and enabling resources to carry out live and automatic Java application migration. Secondly, this research work proposes some job-scheduling and migration algorithms based on resource availability prediction and/or artificial intelligence techniques. To examine the proposed algorithms, this work includes a series of experiments in both synthetic and practical scenarios and compares the performance of the proposed algorithms with existing ones across a variety of scenarios. According to the critical assessment, each algorithm has its own distinct advantages and performs well when certain conditions are met. In addition, this research analyses the characteristics of resources in terms of the availability pattern of practical volunteer-based grids. The analysis shows that each environment has its own characteristics and each volunteered resource’s availability tends to possess weak correlations across different days and times-of-day.
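As a rough illustration of availability-prediction-based scheduling, the sketch below predicts each volunteered resource's availability for the current hour from past observations and dispatches a job to the compatible resource most likely to stay online; the data structures and prediction rule are assumptions, not the algorithms evaluated in the thesis.

```python
# Availability-aware dispatch on volunteered resources: predict availability for the
# current hour from each resource's trace, then pick the best compatible candidate.
def predict_availability(trace, hour):
    """trace: list of (hour_of_day, was_available) observations."""
    samples = [up for h, up in trace if h == hour]
    return sum(samples) / len(samples) if samples else 0.0

def schedule(job, resources, hour):
    """Pick the compatible resource with the highest predicted availability."""
    candidates = [r for r in resources if job["platform"] in r["platforms"]]
    if not candidates:
        return None
    return max(candidates, key=lambda r: predict_availability(r["trace"], hour))

resources = [
    {"name": "desktop-a", "platforms": {"jvm"}, "trace": [(9, 1), (9, 0), (9, 1)]},
    {"name": "desktop-b", "platforms": {"jvm", "x86"}, "trace": [(9, 1), (9, 1), (9, 1)]},
]
print(schedule({"platform": "jvm"}, resources, hour=9)["name"])  # desktop-b
```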
APA, Harvard, Vancouver, ISO, and other styles
20

Paverd, Andrew James. "Enhanced mobile computing using cloud resources." Master's thesis, University of Cape Town, 2011. http://hdl.handle.net/11427/11063.

Full text
Abstract:
Summary in English.<br>Includes bibliographical references.<br>The purpose of this research is to investigate, review and analyse the use of cloud resources for the enhancement of mobile computing. Mobile cloud computing refers to a distributed computing relationship between a resource-constrained mobile device and a remote high-capacity cloud resource. Investigation of prevailing trends has shown that this will be a key technology in the development of future mobile computing systems. This research presents a theoretical analysis framework for mobile cloud computing. This analysis framework is a structured consolidation of the salient considerations identified in recent scientific literature and commercial endeavours. The use of this framework in the analysis of various mobile application domains has elucidated several significant benefits of mobile cloud computing including increases in system performance and efficiency. Based on recent scientific literature and commercial endeavours, various implementation approaches for mobile cloud computing have been identified, categorized and analysed according to their architectural characteristics. This has resulted in a set of advantages and disadvantages for each category of system architecture. Overall, through the development and application of the new analysis framework, this work provides a consolidated review and structured critical analysis of the current research and developments in the field of mobile cloud computing.
APA, Harvard, Vancouver, ISO, and other styles
21

Su, Ying. "On managing visibility of resources in social networking sites." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/5506.

Full text
Abstract:
Social networking sites have become very popular these days. In these systems, resources are associated with each user (i.e., network node) in the network; these resources include both soft and physical resources, such as pictures, videos, books, etc. Users would like to control who can see their resources, e.g., anyone, only friends, friends and families, friends of friends, etc. The rules defining who can see the resources are called visibility views, and in our system they are identified by the owner nodes and regular path expressions. The users can issue queries for discovering resources, e.g., "Find all lawnmowers in my three-degree network that are visible to me". The evaluation process needs to check all candidate resources to see if they are visible to the query issuer, and this requires computing the visibility views associated with every candidate resource. The visibility views are expensive to evaluate. In order to facilitate visibility query answering, it is necessary to pre-compute and materialize the views in a so-called "cache". But because of the tremendous volume of the whole view data, we have to select only a portion of them to materialize, based on metrics such as the views' sizes, the costs to evaluate them, the popularity of references, the temporal locality of requested views, and the correlation relationships among them. The problem of selecting the views in a static way is called the static view selection problem, and it has been studied extensively by the database community, mostly for answering OLAP queries in data warehouses. We call it "static" because it pre-computes and materializes the views in one lump sum, based on statistics accumulated before the query sequence starts. This problem has deep roots in the caching problem studied in the theory and systems communities. The research done on caching policies solves the problem in an ad-hoc dynamic way by making decisions at each request, which makes use of temporal locality. Many ad-hoc policies have been proposed in the literature. Specifically, there is a branch of research based on the so-called Independent Reference Model, in which view requests are assumed to form a fixed time-independent sequence. It was shown that under this model optimal policies can be found. Coffman and Denning [13] presented a policy called A₀ that is optimal with respect to the long-run average metric for paging, where the cached objects (i.e., memory pages) have uniform sizes and retrieval costs. Bahat [7] later presented a stationary Markov replacement policy C₀, which is optimal for cases in which the retrieval costs for the cached objects are arbitrary but their sizes are uniform. In this thesis we take the problem one step further and assume that both the sizes and the costs are arbitrary. We reveal the intrinsic relationship between the static view selection problem and the caching problem, and show that, still under the independent reference model, dynamic replacement policies (random or deterministic) are no better than the optimal static selection algorithm in terms of the long-run average metric. We then present a family of optimal history-dependent replacement policies for the arbitrary-sizes and arbitrary-costs case, and show that policies A₀ and C₀ are special cases of our policy. Furthermore, we prove that a policy is optimal if and only if the induced stochastic process is a unichain.
We also study approximate static algorithms and policies: we present polynomial static algorithms that are both K-approximate with regard to the fractional optimum solution and H_K'-approximate with regard to the integral optimum solution, where K is the size of the cache, K' is the total size of all view objects minus K, and H_K' is the harmonic number of K'. In addition, we present a K-competitive dynamic policy and show that K is the best approximation ratio for both the static algorithms and dynamic policies.
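A flavour of the static selection problem can be given with a simple greedy sketch that fills the cache by expected cost saved per unit of cache space under the independent reference model; this is only an illustrative heuristic, not the optimal policies or approximation algorithms of the thesis.

```python
# Greedy static view selection: rank views by prob * cost / size ("benefit density")
# and materialise them until the cache is full. Illustrative values only.
def select_views(views, cache_size):
    """views: list of dicts with 'name', 'size', 'cost', 'prob'."""
    ranked = sorted(views, key=lambda v: v["prob"] * v["cost"] / v["size"], reverse=True)
    chosen, used = [], 0
    for v in ranked:
        if used + v["size"] <= cache_size:
            chosen.append(v["name"])
            used += v["size"]
    return chosen

views = [
    {"name": "friends_2deg",  "size": 40, "cost": 90, "prob": 0.50},
    {"name": "family_photos", "size": 25, "cost": 60, "prob": 0.30},
    {"name": "fof_items",     "size": 70, "cost": 95, "prob": 0.15},
]
print(select_views(views, cache_size=80))  # ['friends_2deg', 'family_photos']
```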
APA, Harvard, Vancouver, ISO, and other styles
22

Bou, Abdo Jacques. "Efficient and secure mobile cloud networking." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066551.

Full text
Abstract:
Mobile cloud computing is a very strong candidate for the title "Next Generation Network", as it empowers mobile users with extended mobility, service continuity and superior performance. Users can expect to execute their jobs faster, with lower battery consumption and at affordable prices; however, this is not always the case. Various mobile applications have been developed to take advantage of this new technology, but each application has its own requirements. Several mobile cloud architectures have been proposed, but none was suitable for all mobile applications, which resulted in lower customer satisfaction. In addition, the absence of a valid business model to motivate investors has hindered deployment at production scale. This dissertation proposes a new mobile cloud architecture which positions the mobile operator at the core of this technology, equipped with a revenue-making business model. This architecture, named OCMCA (Operator Centric Mobile Cloud Architecture), connects the user on one side and the Cloud Service Provider (CSP) on the other, and hosts a cloud within the operator's network. The OCMCA/user connection can utilize multicast channels, leading to a much cheaper service for the users and, for the operator, more revenue and lower congestion and rejection rates. The OCMCA/CSP connection is based on federation, so a user who has registered with any CSP can request her environment to be offloaded to the mobile operator's hosted cloud in order to receive all of OCMCA's services and benefits. The contributions of this thesis are manifold. First, OCMCA is proposed and shown to outperform the other mobile cloud architectures. Its business model centres on subscription freedom: a user subscribed to any cloud provider remains able to connect, through this architecture, to her environment by means of offloading and federation.
APA, Harvard, Vancouver, ISO, and other styles
23

Valente, Fredy Joao. "An integrated parallel/distributed environment for high performance computing." Thesis, University of Southampton, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Zaripov, Behruz. "Analysis of Fog Networking Procedures in Heterogeneous Wireless Networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
The purpose of this study is to provide a general framework for the latest trends in mobile network architectures. The two main architectures treated in this work are Cloud-RAN and Fog-RAN. We describe both architectures and then discuss their advantages and disadvantages. We mainly focus on the performance of the Fog-RAN model, taking into account only computation and communication latencies. In order to analyse our Fog-RAN architecture, we measure the impact of the traffic produced in our network using three different policies: Random Policy, Maximum Available Capacity Policy and Nearest Node Policy. Furthermore, we measure the impact of delay by fixing the amount of traffic generated by the network. Numerical results for the considered scenarios show that the Maximum Available Capacity policy outperforms the other two policies when the traffic produced in the network is very high; when the traffic is very low, the best policy is the Nearest Node one. On the other hand, by fixing the amount of traffic we show that when the delay threshold is 1-3 ms the Maximum Available Capacity policy performs better than the other two policies, whereas when the delay threshold is greater than 5 ms the Nearest Node policy shows better results.
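The three selection policies compared in the thesis are easy to state in code; the sketch below applies them to a toy set of fog nodes whose capacities and distances are assumed for illustration.

```python
# The three fog-node selection policies: random, maximum available capacity, and
# nearest node, applied to a toy node set.
import random

nodes = [
    {"name": "fog-1", "free_capacity": 4, "distance_km": 0.5},
    {"name": "fog-2", "free_capacity": 9, "distance_km": 2.0},
    {"name": "fog-3", "free_capacity": 6, "distance_km": 1.2},
]

def random_policy(nodes):
    return random.choice(nodes)

def max_available_capacity_policy(nodes):
    return max(nodes, key=lambda n: n["free_capacity"])

def nearest_node_policy(nodes):
    return min(nodes, key=lambda n: n["distance_km"])

print("random   :", random_policy(nodes)["name"])
print("max cap. :", max_available_capacity_policy(nodes)["name"])   # fog-2
print("nearest  :", nearest_node_policy(nodes)["name"])             # fog-1
```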
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Da Ph D. Massachusetts Institute of Technology. "Computing with unreliable resources : design, analysis and algorithms." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/89846.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 187-197) and index. This thesis is devoted to the study of computing with unreliable resources, a paradigm emerging in a variety of technologies, such as circuit design, cloud computing, and crowdsourcing. In circuit design, as we approach the physical limits, semiconductor fabrication has become increasingly susceptible to fabrication flaws, resulting in unreliable circuit components. In cloud computing, due to co-hosting, virtualization and other factors, the response times of computing nodes are variable. This calls for computation frameworks that take this unreliable quality-of-service into account. In crowdsourcing, we humans are the unreliable computing processors, due to our inherent cognitive limitations. We investigate these three topics in the three parts of this thesis. We demonstrate that it is often necessary to introduce redundancy to achieve reliable computing, and this needs to be carried out judiciously to attain an appealing balance between reliability and resource usage. In particular, it is crucial to take the statistical properties of unreliability into account during system design, rather than to handle it as an afterthought. In the first part, we investigate the topic of circuit design with unreliable circuit components. We first analyze the design of Flash Analog-to-Digital Converters (ADC) with imprecise comparators. Formulating this as a problem of scalar quantization with noisy partition points, we analyze fundamental limits on ADC accuracy and obtain designs that increase the yield of ADCs (e.g., 5% to 10% for 6-bit Flash ADCs). Our results show that, given a fixed amount of silicon area, building more, smaller and less precise comparators leads to better ADC accuracy. We then address the problem of digital circuit design with faulty components. To achieve reliability, we introduce redundant elements that can replace faulty elements via a configurable interconnect. We show that the required number of redundant elements depends on the amount of interconnect available, and propose designs that achieve a near-optimal trade-off between redundancy and interconnect overhead in several design settings. The second part of this thesis explores the problem of executing a collection of tasks in parallel on a group of computing nodes. This setting is often seen in cloud computing and crowdsourcing, where the response times of computing nodes are random due to their variability. In this case, the overall latency is determined by the response time of the slowest computing node, which is often much larger than the average response time. Task replication, which sends the same task to multiple computing nodes and obtains the earliest result, reduces latency, but in general incurs additional resource usage. We propose a theoretical framework to analyze the trade-off between latency and resource usage. We show that, while in general there is a tension between latency and resource usage, there exist scenarios where replicating tasks judiciously reduces both latency and resource usage simultaneously.
Our investigation gives insights on when and how replication helps, and provides efficient scheduling policies for a variety of computing scenarios. Lastly, we study the problem of crowd-based ranking via pairwise comparisons, with humans as unreliable comparators. We formulate this as the problem of approximate sorting with noisy comparisons. By developing a rate-distortion theory on permutation spaces, we obtain information-theoretic lower bounds for the query complexity of approximate sorting with both noiseless and noisy comparisons. Our lower bound shows the optimality of certain existing algorithms with respect to noiseless comparisons and provides a benchmark for approximate sorting with noisy comparisons.
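The replication trade-off in the second part can be illustrated with a small Monte Carlo sketch: under a heavy-tailed (bimodal) response-time model with cancellation of outstanding replicas, a second copy lowers both latency and total machine-time, while a third lowers latency further at extra cost. The service-time distribution below is an assumption chosen purely for illustration.

```python
# Monte Carlo sketch of task replication: latency is the fastest replica's response
# time; resource usage assumes all replicas are cancelled once the first finishes.
import random

def service_time():
    return 1.0 if random.random() < 0.9 else 20.0   # 10% stragglers

def simulate(replicas, trials=100_000):
    lat_sum = cost_sum = 0.0
    for _ in range(trials):
        latency = min(service_time() for _ in range(replicas))
        lat_sum += latency
        cost_sum += replicas * latency               # machine-time consumed
    return lat_sum / trials, cost_sum / trials

random.seed(0)
for r in (1, 2, 3):
    latency, cost = simulate(r)
    print(f"r={r}: mean latency {latency:5.2f}, mean machine-time {cost:5.2f}")
```

With these parameters, two replicas reduce both the mean latency and the mean machine-time relative to a single copy, which is the kind of scenario the abstract refers to.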
APA, Harvard, Vancouver, ISO, and other styles
26

Soundarapandian, Manikandan. "Relational Computing Using HPC Resources: Services and Optimizations." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/56586.

Full text
Abstract:
Computational epidemiology involves processing, analysing and managing large volumes of data. Such massive datasets cannot be handled efficiently by traditional standalone database management systems, owing to their limited computational efficiency and bandwidth when scaling to large volumes of data. In this thesis, we address the management and processing of large volumes of data for modeling, simulation and analysis in epidemiological studies. Traditionally, compute-intensive tasks are processed using high performance computing resources and supercomputers, whereas data-intensive tasks are delegated to standalone databases and some custom programs. The DiceX framework is a one-stop solution for distributed database management and processing, and its main mission is to leverage supercomputing resources for data-intensive computing, in particular relational data processing. While standalone databases are always on and a user can submit queries at any time for required results, supercomputing resources must be acquired and are available for a limited time period. These resources are relinquished either upon completion of execution or at the expiration of the allocated time period. This kind of reservation-based usage style poses critical challenges, including building and launching a distributed data engine onto the supercomputer, saving the engine and resuming from the saved image, devising efficient optimization upgrades to the data engine, and enabling other applications to seamlessly access the engine. These challenges and requirements cause us to align our approach more closely with the cloud computing paradigms of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). In this thesis, we propose cloud-computing-like workflows, but using supercomputing resources, to manage and process relational data-intensive tasks. We propose and implement several services, including database freeze/migrate/resume, ad-hoc resource addition and table redistribution; these services assist in carrying out the workflows defined. We also propose an optimization upgrade to the query planning module of Postgres-XC, the core relational data processing engine of the DiceX framework. With knowledge of the domain semantics, we have devised a more robust data distribution strategy that forcefully pushes the most time-consuming SQL operations down to the Postgres-XC data nodes, bypassing the query planner's default shippability criteria without compromising correctness. Forcing query push-down reduces the query processing time by almost 40%-60% for certain complex spatio-temporal queries on our epidemiology datasets. As part of this work, a generic broker service has also been implemented, which acts as an interface to the DiceX framework by exposing RESTful APIs, which applications can use to query and retrieve results irrespective of the programming language or environment.
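The intuition behind domain-aware distribution and forced push-down can be sketched as follows: if rows are partitioned on the key that queries aggregate over, each data node computes a small partial result and the coordinator only merges them. The table layout and key choice below are assumptions for illustration, not the actual epidemiology schema or the Postgres-XC planner.

```python
# Scatter-gather sketch: partition rows by the grouping key so each node can
# aggregate locally, then merge the small partial results at the coordinator.
from collections import defaultdict

def distribute(rows, num_nodes, key):
    nodes = [[] for _ in range(num_nodes)]
    for row in rows:
        nodes[hash(row[key]) % num_nodes].append(row)
    return nodes

def local_count_by_region(partition):
    counts = defaultdict(int)
    for row in partition:
        counts[row["region"]] += 1
    return counts

def coordinator_merge(partials):
    merged = defaultdict(int)
    for part in partials:
        for region, count in part.items():
            merged[region] += count
    return dict(merged)

rows = [{"region": r, "case_id": i} for i, r in
        enumerate(["north", "south", "north", "east", "south", "north"])]
partitions = distribute(rows, num_nodes=3, key="region")
print(coordinator_merge(local_count_by_region(p) for p in partitions))
# {'north': 3, 'south': 2, 'east': 1}
```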
APA, Harvard, Vancouver, ISO, and other styles
27

Liu, Binghan. "Software Defined Networking and Tunneling for Mobile Networks." Thesis, KTH, Kommunikationssystem, CoS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-118376.

Full text
Abstract:
With the deployment of Long Term Evolution (LTE) networks, mobile networks will become an important infrastructure component in the cloud ecosystem. However, in the cloud computing era, traditional routing and switching platforms do not meet the requirements of this new trend, especially in a mobile network environment. With the recent advances in software switches and efficient virtualization using commodity servers, Software Defined Networking (SDN) has emerged as a powerful technology to meet the new requirements for supporting a new generation of cloud services. This thesis describes an experimental investigation of cloud computing, SDN, and a mobile network's packet core. The design of a mobile network exploiting the evolution of SDN is also presented. The actual implementation consists of a GTP-enabled Open vSwitch together with the transparent mode of mobile network SDN evolution. Open vSwitch is an SDN product designed for computer networks; the implementation extends Open vSwitch with an implementation of the GTP protocol. This extension enables Open vSwitch to serve as an excellent SDN component for mobile networks. In transparent mode, a cloud data center is deployed without making any modification to the existing mobile networks. In the practical evaluation of the GTP-U tunnel protocol implementation, the measured metrics are UDP and TCP throughput, end-to-end latency and jitter. Two experiments have been conducted and are described in the evaluation chapter. Cloud computing has become one of the hottest Internet topics, and it is attractive for mobile networks to adopt cloud computing technology in order to enjoy its benefits, for example to reduce network construction cost and to make network deployment more flexible. This thesis presents a potential direction for mobile network cloud computing. Since this thesis relies on open source projects, readers may use the results to explore a feasible direction for the evolution of mobile network cloud computing.
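For readers unfamiliar with GTP-U, the sketch below builds and parses a minimal 8-byte GTPv1-U (G-PDU) header around a user packet, the kind of encapsulation the Open vSwitch extension performs in the data plane; it is an illustration of the basic header layout, not the thesis's switch code.

```python
# Minimal GTP-U encapsulation/decapsulation: flags, message type, payload length,
# and tunnel endpoint identifier (TEID), followed by the inner user packet.
import struct

GTPU_FLAGS = 0x30        # version 1, protocol type GTP, no optional fields
GTPU_MSG_GPDU = 0xFF     # message type: encapsulated user data (G-PDU)

def gtpu_encapsulate(teid: int, inner_packet: bytes) -> bytes:
    header = struct.pack("!BBHI",
                         GTPU_FLAGS,
                         GTPU_MSG_GPDU,
                         len(inner_packet),   # length of payload after the header
                         teid)
    return header + inner_packet

def gtpu_decapsulate(frame: bytes):
    flags, msg_type, length, teid = struct.unpack("!BBHI", frame[:8])
    return teid, frame[8:8 + length]

tunnel_frame = gtpu_encapsulate(teid=0x1A2B3C4D, inner_packet=b"ip-packet-bytes")
print(gtpu_decapsulate(tunnel_frame))   # (439041101, b'ip-packet-bytes')
```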
APA, Harvard, Vancouver, ISO, and other styles
28

Chauhan, Maneesh. "Measurement and Analysis of Networking Performance in Virtualised Environments." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177327.

Full text
Abstract:
Mobile cloud computing, having embraced ideas like computation offloading, mandates a low-latency, high-speed network to satisfy the quality of service and usability assurances for mobile applications. The networking performance of clouds based on Xen and VMware virtualisation solutions has been extensively studied by researchers, although they have mostly focused on network throughput and bandwidth metrics. This work focuses on the measurement and analysis of the networking performance of VMs in a small, KVM-based data centre, emphasising the role of virtualisation overheads in the Host-VM latency and, eventually, in the overall latency experienced by remote clients. We also present some useful tools, such as Driftanalyser, VirtoCalc and Trotter, that we developed for carrying out specific measurements and analyses. Our work proves that an increase in a VM's CPU workload has direct implications for network round trip times. We also show that virtualisation overheads (VO) have a significant bearing on the end-to-end latency and can contribute up to 70% of the round trip time between the Host and the VM. Furthermore, we thoroughly study latency due to virtualisation overheads as a networking performance metric and analyse the impact of CPU loads and networking workloads on it. We also analyse the resource sharing patterns and their effects amongst VMs of different sizes on the same Host. Finally, having observed a dependency between the network performance of a VM and the Host CPU load, we suggest that in a KVM-based cloud installation, workload profiling and an optimum processor pinning mechanism can be effectively utilised to regulate the network performance of the VMs. The findings from this research work are applicable to optimising latency-oriented VM provisioning in cloud data centres, which would benefit most latency-sensitive mobile cloud applications.
APA, Harvard, Vancouver, ISO, and other styles
29

Guenkova-Luy, Teodora. "Multimedia networking coordination of multimedia services in next generation mobile networks." Saarbrücken VDM, Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3037222&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Li, Ming. "User-Centric Security and Privacy Mechanisms in Untrusted Networking and Computing Environments." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-dissertations/323.

Full text
Abstract:
"Our modern society is increasingly relying on the collection, processing, and sharing of digital information. There are two fundamental trends: (1) Enabled by the rapid developments in sensor, wireless, and networking technologies, communication and networking are becoming more and more pervasive and ad hoc. (2) Driven by the explosive growth of hardware and software capabilities, computation power is becoming a public utility and information is often stored in centralized servers which facilitate ubiquitous access and sharing. Many emerging platforms and systems hinge on both dimensions, such as E-healthcare and Smart Grid. However, the majority information handled by these critical systems is usually sensitive and of high value, while various security breaches could compromise the social welfare of these systems. Thus there is an urgent need to develop security and privacy mechanisms to protect the authenticity, integrity and confidentiality of the collected data, and to control the disclosure of private information. In achieving that, two unique challenges arise: (1) There lacks centralized trusted parties in pervasive networking; (2) The remote data servers tend not to be trusted by system users in handling their data. They make existing security solutions developed for traditional networked information systems unsuitable. To this end, in this dissertation we propose a series of user-centric security and privacy mechanisms that resolve these challenging issues in untrusted network and computing environments, spanning wireless body area networks (WBAN), mobile social networks (MSN), and cloud computing. The main contributions of this dissertation are fourfold. First, we propose a secure ad hoc trust initialization protocol for WBAN, without relying on any pre-established security context among nodes, while defending against a powerful wireless attacker that may or may not compromise sensor nodes. The protocol is highly usable for a human user. Second, we present novel schemes for sharing sensitive information among distributed mobile hosts in MSN which preserves user privacy, where the users neither need to fully trust each other nor rely on any central trusted party. Third, to realize owner-controlled sharing of sensitive data stored on untrusted servers, we put forward a data access control framework using Multi-Authority Attribute-Based Encryption (ABE), that supports scalable fine-grained access and on-demand user revocation, and is free of key-escrow. Finally, we propose mechanisms for authorized keyword search over encrypted data on untrusted servers, with efficient multi-dimensional range, subset and equality query capabilities, and with enhanced search privacy. The common characteristic of our contributions is they minimize the extent of trust that users must place in the corresponding network or computing environments, in a way that is user-centric, i.e., favoring individual owners/users."
APA, Harvard, Vancouver, ISO, and other styles
31

Van, Maren Benjamin Philip, and Maren Benjamin Philip Van. "Using Geosocial Networking Apps to Promote Syphilis Awareness and Health Resources." Thesis, The University of Arizona, 2017. http://hdl.handle.net/10150/625231.

Full text
Abstract:
Since 2013, the United States has seen a rise in syphilis incidence to epidemic proportions, especially among young men who have sex with men (MSM) who utilize geosocial networking (GSN) applications (apps) to find dates and hookups. Syphilis is an easily treatable sexually transmitted disease (STD). In response to this epidemic, the Pima County Health Department has been developing interventions to reduce the incidence of syphilis. In this study, we tested the effectiveness of using GSN apps to increase syphilis awareness and facilitate communication between MSM and health officials. Informed by survey results, the county created a "Health Advisor" app profile on select GSN apps for MSM. Two phases of app messaging were conducted over six months with different styles of messaging. We found that a passive style of messaging was more effective than an active style in terms of user response and the frequency of conversations that include health information. Geosocial networking apps are an efficient medium to distribute health information and alert community members about syphilis or other health concerns, especially to high-risk groups. Future public health efforts should be aimed at strengthening the credibility, presence, and scope of the health official on the GSN apps.
APA, Harvard, Vancouver, ISO, and other styles
32

Miah, Abdul J. "Automated library networking in American public community college learning resources centers." Diss., Virginia Polytechnic Institute and State University, 1989. http://books.google.com/books?id=5LbgAAAAMAAJ.

Full text
Abstract:
Thesis (Ed. D.)--Virginia Polytechnic Institute and State University, 1989. Vita. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references (leaves 148-159).
APA, Harvard, Vancouver, ISO, and other styles
33

Olsson, Philip. "A Study of OpenStack Networking Performance." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191023.

Full text
Abstract:
Cloud computing is a fast-growing sector among software companies. Cloud platforms provide services such as spreading storage and computational power over several geographic locations, on-demand resource allocation and flexible payment options. Virtualization is a technology used in conjunction with cloud technology and offers the possibility to share the physical resources of a host machine by hosting several virtual machines on the same physical machine. Each virtual machine runs its own operating system, which makes the virtual machines hardware independent. The cloud and virtualization layers add additional layers of software to the server environment in order to provide these services. The additional layers add latency overhead, which can be problematic for latency-sensitive applications. The primary goal of this thesis is to investigate how the networking components impact latency in an OpenStack cloud compared to a traditional deployment. The networking components were benchmarked under different load scenarios, and the results indicate that the additional latency added by the networking components is not too significant in the network setup used. Instead, a significant performance degradation could be seen in the applications running in the virtual machine, which caused most of the added latency in the cloud environment.
APA, Harvard, Vancouver, ISO, and other styles
34

Halvardsson, Victor, and Sandra Janson. "Networking within the public sector : How the effect of networking and competitive advantages facilitate growth." Thesis, Linnéuniversitetet, Institutionen för marknadsföring (MF), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-54211.

Full text
Abstract:
The purpose of the study was to describe how networking can provide competitive advantage to facilitate growth when offering consultancy services toward the public sector. The majority of companies are looking to expand their business for different reasons. With today's intense competition on the market, it is becoming increasingly important to outperform competitors in order to maintain the current customer base as well as to gain new business. Companies that work toward the public sector have special laws, directives and regulations that have to be taken into account when conducting business. Involvement with networks is based on different reasons; it can be to gain new customers, contacts and knowledge, to name a few. The authors have performed a qualitative case study with a focus on two companies. The empirical findings are based on information collected through interviews with these companies and through a quantitative self-completion questionnaire with a sample group of 16 respondents. By analysing the empirical information, the authors have concluded that networking activities are important in order to foster growth. However, there is a lack of networking strategies among the two companies of focus, which constrains the firms from getting the most out of the networks.
APA, Harvard, Vancouver, ISO, and other styles
35

Srivatsan, Siddhartha Eluppai. "Integrating heterogeneous computing resources to form a campus grid." [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0024690.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Di, Maria Riccardo. "Elastic computing on Cloud resources for the CMS experiment." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8955/.

Full text
Abstract:
Nowadays, data handling and data analysis in High Energy Physics requires a vast amount of computational power and storage. In particular, the Worldwide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, has proved to be a game changer in the efficiency of data analyses during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. Recently, the Cloud computing paradigm has been emerging and reaching a considerable adoption level by many different scientific organizations, and not only those. Cloud allows access to and utilization of large computing resources, not owned by the user, shared among many scientific communities. Considering the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds and seeing whether they can provide a complementary approach - or even a valid alternative - to the existing technological solutions based on the Grid. In the LHC community, several experiments have been adopting Cloud approaches, and in particular the experience of the CMS experiment is of relevance to this thesis. The LHC Run-II has just started, and Cloud-based solutions are already in production for CMS. However, other approaches to Cloud usage are being considered and are at the prototype level, such as the work done in this thesis. This effort is of paramount importance in order to equip CMS with the capability to elastically and flexibly access and utilize the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend to on-demand resources dynamically allocated as needed. Moreover, direct access to Cloud resources is presented as a suitable use case to meet the needs of the CMS experiment. Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 presents the Cloud approaches pursued and used within the CMS Collaboration. Chapter 4 and Chapter 5 discuss the original and forefront work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources on Clouds, and HEP Computing "as a Service". The impact of such work on benchmark CMS physics use-cases is also demonstrated.
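The elasticity idea, extending the CMS resource pool on demand, can be illustrated with a toy scaling rule that watches the pending-job backlog and adds or drains virtual machines; the thresholds, slot counts and the function itself are assumptions for illustration, not the actual CMS or Cloud interfaces.

```python
# Toy elastic-scaling decision: grow the VM pool when the backlog per slot is too
# deep, and drain the pool when the queue is empty.
def scaling_decision(pending_jobs, running_vms, slots_per_vm=8,
                     scale_up_backlog=2.0, min_vms=0, max_vms=100):
    """Return how many VMs to add (positive) or terminate (negative)."""
    capacity = running_vms * slots_per_vm
    if capacity == 0 or pending_jobs / capacity > scale_up_backlog:
        target = -(-pending_jobs // slots_per_vm)      # ceil: VMs needed for the backlog
        target = min(max(target, running_vms), max_vms)
        return target - running_vms
    if pending_jobs == 0 and running_vms > min_vms:
        return -(running_vms - min_vms)                # drain idle VMs
    return 0

print(scaling_decision(pending_jobs=400, running_vms=10))   # +40
print(scaling_decision(pending_jobs=0,   running_vms=10))   # -10
```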
APA, Harvard, Vancouver, ISO, and other styles
37

Rashid, Md Mamunur. "Non-grid opportunistic resources for (big data) volunteer computing." Thesis, University of Kent, 2017. https://kar.kent.ac.uk/61077/.

Full text
Abstract:
CPU-intensive computing at the LHC (Large Hadron Collider) requires collaborative distributed computing resources to accomplish its data reconstruction and analysis. Currently, the institutional Grid is trying to manage and process large datasets within limited time and cost. The baseline paradigm is now well established: use the Computing Grid, and more specifically the WLCG (Worldwide LHC Computing Grid) and its supporting infrastructures. In order to achieve its Grid computing, LHCb has developed a community Grid solution called DIRAC (Distributed Infrastructure with Remote Agent Control). It is based on a pilot job submission system to the institutional Grid infrastructures. However, there are other computing resources, like idle desktops (e.g. SETI@home) and idle computing clusters (e.g. CERN's Online selection farm outside data-taking periods by LHC detectors), that could be used outside the Grid infrastructures. Because of their lightweight nature, simulation activities in particular could benefit from using those opportunistic resources. The DIRAC architecture allows the use of the existing institutional Grid resources. To expand the capability of existing computing power, I have proposed to integrate opportunistic resources into the distributed computing system (DIRAC). In order not to be dependent on the local settings of the worker node at the external resource, I propose using virtual machines. The architectural modifications required for DIRAC are presented here, with specific examples of data analyses on non-Grid clusters. This solution was achieved by making the necessary changes in three state-of-the-art technologies: DIRAC, CernVM and OpenNebula. The combination of these three techniques is referred to as the DiCON architecture. I refer to the new approach as a framework rather than a specific technical solution to a specific scientific problem, as it can be reused in similar big data analysis environments. I have also shown how this was used to analyse large-scale climate data, which was rather challenging, since it meant applying an infrastructure developed for one research area to another. I have also proposed to use a dataflow architecture to exploit the possibilities of opportunistic resources and, at the same time, establish reliability and stability. Dataflow computing architecture in a virtual environment is seen as a possible future research extension of this work. This is a theoretical contribution only, and it is a unique approach in a virtual cloud (not in-house computing) environment. This paradigm could give the scientific community access to a large number of non-conventional opportunistic CPU resources for scientific data processing. This PhD work examines the challenges of, and optimises the solutions provided by, such a computing infrastructure.
APA, Harvard, Vancouver, ISO, and other styles
38

CRISTOFORI, Andrea. "Grid accounting for computing and storage resources towards standardization." Doctoral thesis, Università degli studi di Ferrara, 2011. http://hdl.handle.net/11392/2389368.

Full text
Abstract:
In the last years, we have seen a growing interest, first from the scientific community and then from commercial vendors, in new technologies like Grid and Cloud computing. The first, in particular, was born to meet the enormous computational requests coming mostly from physics experiments, especially the Large Hadron Collider (LHC) experiments at the Conseil Européen pour la Recherche Nucléaire (European Laboratory for Particle Physics, CERN) in Geneva. Other scientific disciplines that are also benefiting from those technologies are biology, astronomy, earth sciences, life sciences, etc. Grid systems allow the sharing of heterogeneous computational and storage resources between different geographically distributed institutes, agencies or universities. For this purpose, technologies have been developed to allow communication, authentication, storing and processing of the required software and scientific data. This gives different scientific communities access to computational resources that a single institute could not host for logistical and cost reasons. Grid systems were not the only answer to this growing need for resources from different communities. At the same time, in the last years, we have seen the affirmation of so-called Cloud computing. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. The use of both computational paradigms and the utilization of storage resources leverage different authentication and authorization tools. The utilization of those technologies requires systems for the accounting of the consumed resources. Those systems are built on top of the existing infrastructure and collect all the needed data related to the users, groups and resources utilized. This information is then collected in central repositories where it can be analyzed and aggregated. The Open Grid Forum (OGF) is the international body that works to develop standards in the Grid environment. The Usage Record Working Group (UR-WG) is a group born within OGF aiming at standardizing the Usage Record (UR) structure and publication for different kinds of resources. Up to now, the emphasis has been on accounting for computational resources. Over time, the need emerged to extend those concepts to other aspects, and especially to the definition and implementation of a standard UR for storage accounting. Several extensions to the UR definition are proposed in this thesis and the proposed developments in this field are described. The Distributed Grid Accounting System (DGAS) has been chosen, among the other tools available, as the accounting system for the Italian Grid and is also adopted in other countries such as Greece and Germany. Together with HLRmon, it offers a complete accounting system and it is the tool that has been used during the writing of the thesis at INFN-CNAF. The thesis is organized as follows.
• In Chapter 1, I focus on the paradigm of distributed computing and introduce the Grid infrastructure, with particular emphasis on the gLite middleware and the EGI-InSPIRE project.
• In Chapter 2, I discuss some Grid accounting systems for computational resources, with particular attention to DGAS.
• In Chapter 3, the cross-check monitoring system used to verify the correctness of the gathered data at the INFN-CNAF Tier-1 is presented.
• In Chapter 4, another important aspect of accounting, accounting for storage resources, is introduced and the definition of a standard UR for storage accounting is presented.
• In Chapter 5, an implementation of a new accounting system for storage that uses the definitions given in Chapter 4 is presented.
• In Chapter 6, the focus of the thesis moves to the performance and reliability tests performed on the latest development release of DGAS, which implements ActiveMQ as a standard transport mechanism.
• Appendix A collects the BASH scripts and SQL code that are part of the cross-check tool described in Chapter 3.
• Appendix B collects the scripts used in the implementation of the accounting system described in Chapter 5.
• Appendix C collects the scripts and configurations used for the tests of the ActiveMQ implementation of DGAS described in Chapter 6.
• Appendix D collects the publications to which I contributed during the thesis work.
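To give a feel for what storage accounting adds on top of compute accounting, here is a sketch of a per-group storage usage record serialized as JSON; the field names, site name and storage endpoint are illustrative assumptions and do not reproduce the exact schema standardized within OGF UR-WG.

```python
# Illustrative per-group storage usage record: how much space a group occupies on a
# storage system over a validity interval, suitable for publication to a repository.
import json
from datetime import datetime, timedelta, timezone

def storage_usage_record(site, storage_system, group, logical_bytes, file_count,
                         end_time, validity=timedelta(days=1)):
    return {
        "RecordIdentity": f"{site}/{storage_system}/{group}/{end_time.isoformat()}",
        "Site": site,
        "StorageSystem": storage_system,
        "Group": group,
        "ResourceCapacityUsed": logical_bytes,   # bytes occupied at measurement time
        "FileCount": file_count,
        "StartTime": (end_time - validity).isoformat(),
        "EndTime": end_time.isoformat(),
    }

record = storage_usage_record("INFN-CNAF", "storm.cnaf.example", "cms",
                              logical_bytes=42_000_000_000, file_count=1875,
                              end_time=datetime(2011, 3, 1, tzinfo=timezone.utc))
print(json.dumps(record, indent=2))
```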
APA, Harvard, Vancouver, ISO, and other styles
39

Jamaliannasrabadi, Saba. "High Performance Computing as a Service in the Cloud Using Software-Defined Networking." Bowling Green State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1433963448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Delgado, Javier. "A grid computing network platform for enhanced data management and visualization." FIU Digital Commons, 2007. http://digitalcommons.fiu.edu/etd/2766.

Full text
Abstract:
This thesis presents a novel approach towards providing a collaboration environment by using Grid Computing. The implementation includes the deployment of a cluster attached to a mural display for high performance computing and visualization and a Grid-infrastructure for sharing storage space across a wide area network and easing the remote use of the computing resources. A medical data processing application is implemented on the platform. The outcome is enhanced use of remote storage facilities and quick return time for computationally-intensive problems. The central issue of this thesis work is thus one that focuses on the development of a secure distributed system for data management and visualization to respond to the need for more efficient interaction and collaboration between technical researchers and medical professionals. The proposed networked solution is envisioned such as to provide synergy for more collaboration on theoretical and experimental issues involving analysis, visualization, and data sharing across sites.
APA, Harvard, Vancouver, ISO, and other styles
41

Davis, Don, Toby Bennett, and Jay Costenbader. "RECONFIGURABLE GATEWAY SYSTEMS FOR SPACE DATA NETWORKING." International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/608358.

Full text
Abstract:
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California
Over a dozen commercial remote sensing programs are currently under development, representing billions of dollars of potential investment. While technological advances have dramatically decreased the cost of building and launching these satellites, the cost and complexity of accessing their data for commercial use are still prohibitively high. This paper describes Reconfigurable Gateway Systems which provide, to a broad spectrum of existing and new data users, affordable telemetry data acquisition, processing and distribution for real-time remotely sensed data at rates up to 300 Mbps. These Gateway Systems are based upon reconfigurable computing, multiprocessing, and process automation technologies to meet a broad range of satellite communications and data processing applications. Their flexible architecture easily accommodates future enhancements for decompression, decryption, digital signal processing and image / SAR data processing.
APA, Harvard, Vancouver, ISO, and other styles
42

Zhao, Jian, and 趙建. "Performance modeling and optimization solutions for networking systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/196434.

Full text
Abstract:
This thesis targets modeling and resolving practical problems, using mathematical tools, in two representative networking systems of today: peer-to-peer (P2P) video streaming systems and cloud computing systems. In the first part, we study how to mitigate the following tussle between content service providers and ISPs in P2P video streaming systems: network-agnostic P2P protocol designs bring large amounts of inter-ISP traffic and increase the traffic relay cost of ISPs; in turn, ISPs start to throttle P2P packets, which significantly deteriorates P2P streaming performance. First, we investigate the problem in a mesh-based P2P live streaming system. We use end-to-end streaming delays as the performance metric, and quantify the amount of inter-ISP traffic by the number of copies of the live streams imported into each ISP. Considering multiple ISPs at different bandwidth levels, we model the generic relationship between the volume of inter-ISP traffic and streaming performance, which provides useful insights into the design of effective locality-aware peer selection protocols and server deployment strategies across multiple ISPs. Next, we study a similar problem in a hybrid P2P-cloud CDN system for VoD streaming. We characterize the relationship between the costly bandwidth consumption from the cloud CDN and the inter-ISP traffic. We apply a loss network model to derive the bandwidth consumption under any given chunk distribution pattern among peer caches and any streaming request dispatching strategy among ISPs, and derive the optimal peer caching and request dispatching strategies which minimize the bandwidth demand from the cloud CDN. Based on the fundamental insights from our analytical results, we design a locality-aware, hybrid P2P-cloud CDN streaming protocol. In the second part, we study the profit maximization and cost minimization problems in Infrastructure-as-a-Service (IaaS) cloud systems. The first problem is how a geo-distributed cloud system should price its datacenter resources at different locations, such that its overall profit is maximized over long-term operation. We design an efficient online algorithm for dynamic pricing of VM resources across datacenters, together with job scheduling and server provisioning in each datacenter, to maximize the cloud's profit over the long run. Theoretical analysis shows that our algorithm can schedule jobs within their respective deadlines, while achieving a time-averaged overall profit closely approaching the offline maximum, which is computed by assuming perfect information on future job arrivals is freely available. The second problem is how federated clouds should trade their computing resources among each other to reduce cost, by exploiting the diversity of different clouds' workloads and operational costs. We formulate a global cost minimization problem among multiple clouds under the cooperative scenario where each individual cloud's workload and cost information is publicly available. Taking into consideration jobs of disparate lengths, a non-preemptive approximation algorithm for leftover job migration and new job scheduling is designed. Given the selfishness of individual clouds, we further design a randomized double auction mechanism to elicit clouds' truthful bidding for buying or selling virtual machines.
The auction mechanism is proven to be truthful and to guarantee the same approximation ratio as the cooperative approximation algorithm achieves.
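The "loss network model" mentioned above is, in its simplest single-link form, the classical Erlang-B loss system. As a minimal illustrative sketch (not the thesis' multi-ISP model of chunk distribution among peer caches), the following Python snippet computes the Erlang-B blocking probability, i.e. the fraction of streaming requests that would overflow to the cloud CDN when peers jointly offer a fixed number of upload slots; the numeric inputs are made up for illustration.

```python
# Minimal sketch: Erlang-B blocking probability for a single loss system.
# Illustrates the loss-network idea referenced above, not the thesis' model.
def erlang_b(offered_load_erlangs: float, servers: int) -> float:
    """Iteratively compute B(E, m): blocking probability with m servers."""
    b = 1.0  # B(E, 0) = 1
    for m in range(1, servers + 1):
        b = (offered_load_erlangs * b) / (m + offered_load_erlangs * b)
    return b

if __name__ == "__main__":
    # Hypothetical numbers: peers in one ISP offer 50 upload slots and the
    # offered load of streaming requests is 40 Erlangs.
    blocked = erlang_b(40.0, 50)
    print(f"Fraction of requests overflowing to the cloud CDN: {blocked:.3f}")
```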
APA, Harvard, Vancouver, ISO, and other styles
43

Al-Malki, Dana Mohammed. "Development of virtual network computing (VNC) environment for networking and enhancing user experience." Thesis, City, University of London, 2006. http://openaccess.city.ac.uk/18319/.

Full text
Abstract:
Virtual Network Computing (VNC) is a thin client developed by Real VNC Ltd (formerly of Olivetti Research Ltd / AT&T Labs Cambridge) that can be used as a collaborative environment; it has therefore been chosen as the basis of this research study. The purpose of this thesis is to investigate and develop a VNC-based environment over the network and to improve users' Quality of Experience (QoE) when using VNC between networked groups, by incorporating videoconferencing with VNC and by enhancing QoE in mobile environments, where the network status is far from ideal and prone to disconnection. The thesis investigates the operation of VNC in different environments and scenarios, such as wireless environments, examining user and device mobility and ways to sustain a seamless connection while in motion. As part of the study I also surveyed groups that deploy VNC, such as universities, research groups, laboratories and virtual laboratories. In addition, I identified the successful features and security measures in VNC in order to create a secure environment, by pinpointing the strengths and weaknesses of VNC compared with popular thin clients and remote control applications and by analysing how well VNC conforms to several security measures. Furthermore, the success of any scheme that attempts to deliver desirable levels of Quality of Service (QoS) for an effective application on the future Internet must be based not only on the progress of technology, but also on users' requirements. For instance, a collaborative environment has not yet reached the expectations of its users, since it cannot handle unexpected events such as the sudden disconnection of a nomadic user engaged in an ongoing collaborative session; this breaks the social dynamics of the group collaborating in the session. Therefore, I have concluded that knowing the social dynamics of an application's users as a group, along with their requirements and expectations of a successful experience, can lead an application designer to exploit technology to autonomously support initiating and maintaining social interaction. Moreover, I successfully developed a VNC-based environment for networked groups that facilitates the administration of different remote VNC sessions, as well as a prototype that uses videoconferencing in parallel with VNC to provide a better user QoE. The last part of the thesis is concerned with designing a framework to improve and assess the QoE of all users in a collaborative environment, applicable especially in the presence of nomadic clients with their frequent disconnections. I designed a conceptual algorithm called Improved Collaborative Quality of Experience (IC-QoE), which aims to eliminate frustration and improve the QoE of users in a collaborative session in the case of disconnections, examined its use and benefits in real-world scenarios such as research teams, and implemented a prototype to demonstrate its concepts. Finally, I designed a framework suggesting ways to evaluate this algorithm.
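As a rough illustration of the disconnection problem that IC-QoE targets (this is not the IC-QoE algorithm itself; the grace period and update buffering are assumptions made only for illustration), a collaborative server might keep a nomadic client's session alive for a grace period and replay missed updates on reconnection:

```python
# Hypothetical sketch of keeping a collaborative session usable across a
# nomadic client's disconnections: buffer updates during a grace period and
# replay them on reconnection. Illustrates the problem, not IC-QoE itself.
import time

GRACE_PERIOD_S = 60.0

class Session:
    def __init__(self, client_id):
        self.client_id = client_id
        self.connected = True
        self.disconnected_at = None
        self.pending = []            # updates missed while disconnected

    def on_disconnect(self):
        self.connected = False
        self.disconnected_at = time.monotonic()

    def on_update(self, update):
        if self.connected:
            return [update]          # deliver immediately
        if time.monotonic() - self.disconnected_at <= GRACE_PERIOD_S:
            self.pending.append(update)   # buffer for later replay
        return []                    # past the grace period: drop the update

    def on_reconnect(self):
        self.connected = True
        missed, self.pending = self.pending, []
        return missed                # replay everything buffered

if __name__ == "__main__":
    s = Session("nomad-1")
    s.on_disconnect()
    s.on_update("cursor moved")
    s.on_update("slide 3 annotated")
    print("replayed on reconnect:", s.on_reconnect())
```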
APA, Harvard, Vancouver, ISO, and other styles
44

ZHANG, TIANZHU. "Control plane optimization in Software Defined Networking and task allocation for Fog Computing." Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2706750.

Full text
Abstract:
As the next-generation mobile wireless standard, the fifth generation (5G) of cellular/wireless networks has drawn worldwide attention during the past few years. Due to its promise of higher performance over the legacy 4G network, an increasing number of IT companies and institutes have started to form partnerships and create 5G products. Emerging techniques such as Software Defined Networking and Mobile Edge Computing are also envisioned as key enabling technologies to augment 5G capabilities. However, as popular and promising as it is, 5G technology still faces several intrinsic challenges, such as (i) the strict requirements in terms of end-to-end delays, (ii) the required reliability in the control plane and (iii) the minimization of energy consumption. To cope with these daunting issues, we provide the following main contributions. As a first contribution, we address the problem of the optimal placement of SDN controllers. Specifically, we give a detailed analysis of the impact that controller placement imposes on the reactivity of the SDN control plane, due to the consistency protocols adopted to manage the data structures that are shared across different controllers. We compute the Pareto frontier, showing all the possible tradeoffs achievable between the inter-controller delays and the switch-to-controller latencies. We define two data-ownership models and formulate the controller placement problem with the goal of minimizing the reaction time of the control plane, as perceived by a switch. We propose two evolutionary algorithms, namely Evo-Place and Best-Reactivity, to compute the Pareto frontier and the controller placement minimizing the reaction time, respectively. Experimental results show that Evo-Place outperforms its random counterpart, and that Best-Reactivity can achieve a relative error of at most 30% with respect to the optimal algorithm by sampling less than 10% of the whole solution space. As a second contribution, we propose a stateful SDN approach to improve the scalability of traffic classification in SDN networks. In particular, we leverage the OpenState extension to OpenFlow to deploy state machines inside the switch and minimize the number of packets redirected to the traffic classifier. We experimentally compare two approaches, namely Simple Count-Down (SCD) and Compact Count-Down (CCD), to scale the traffic classifier and minimize the flow table occupancy. As a third contribution, we propose an approach to improve the reliability of SDN controllers. We implement BeCheck, a software framework to detect "misbehaving" controllers. BeCheck resides transparently between the control plane and the data plane, and monitors the exchanged OpenFlow traffic messages. We implement three policies to detect misbehaving controllers and forward the intercepted messages. BeCheck, along with the different policies, is validated in a real test-bed. As a fourth contribution, we investigate a mobile gaming scenario in the context of fog computing, denoted as the Integrated Mobile Gaming (IMG) scenario. We partition mobile games into individual tasks and cognitively offload them either to the cloud or to neighboring mobile devices, so as to achieve minimal energy consumption. We formulate the IMG model as an ILP problem and propose a heuristic named Task Allocation with Minimal Energy cost (TAME). Experimental results show that TAME approaches the optimal solutions while outperforming two other state-of-the-art task offloading algorithms.
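To make the controller-placement trade-off above concrete, here is a brute-force toy sketch that enumerates every placement of k controllers on a small, made-up latency matrix and keeps the Pareto-optimal placements with respect to worst-case switch-to-controller latency and worst-case inter-controller latency. It stands in for the evolutionary Evo-Place search; the topology and latency values are assumptions for illustration only.

```python
# Toy sketch of the SDN controller-placement trade-off: enumerate placements
# of k controllers and keep the Pareto-optimal ones. Brute force stands in for
# the Evo-Place evolutionary search; the latency matrix is hypothetical.
from itertools import combinations

# Symmetric latency matrix (ms) between 5 nodes of a toy network.
LAT = [
    [0, 2, 5, 9, 4],
    [2, 0, 3, 7, 6],
    [5, 3, 0, 4, 8],
    [9, 7, 4, 0, 5],
    [4, 6, 8, 5, 0],
]

def objectives(placement):
    """(worst switch-to-controller latency, worst inter-controller latency)."""
    s2c = max(min(LAT[s][c] for c in placement) for s in range(len(LAT)))
    c2c = max(LAT[a][b] for a in placement for b in placement)
    return s2c, c2c

def dominates(a, b):
    """True if objective vector a dominates b (<= everywhere, < somewhere)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_frontier(k):
    cands = {p: objectives(p) for p in combinations(range(len(LAT)), k)}
    return [(p, o) for p, o in cands.items()
            if not any(dominates(o2, o) for p2, o2 in cands.items() if p2 != p)]

if __name__ == "__main__":
    for placement, (s2c, c2c) in sorted(pareto_frontier(2), key=lambda x: x[1]):
        print(f"controllers at {placement}: switch-to-ctrl {s2c} ms, ctrl-to-ctrl {c2c} ms")
```

An evolutionary search such as Evo-Place becomes necessary once the network size makes this exhaustive enumeration intractable.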
APA, Harvard, Vancouver, ISO, and other styles
45

Medhioub, Houssem. "Architectures et mécanismes de fédération dans les environnements cloud computing et cloud networking." Thesis, Evry, Institut national des télécommunications, 2015. http://www.theses.fr/2015TELE0009/document.

Full text
Abstract:
Presented in the literature as a new technology, Cloud Computing has become essential in the development and delivery of IT services. Given the innovative potential of the Cloud, our thesis was conducted in the context of this promising technology. It was clear that the Cloud would change the way we develop, manage and use information systems. However, the adoption and popularization of the Cloud were slow and difficult, given the youth of the concepts and the heterogeneity of the existing solutions. This difficulty in adoption is reflected in the lack of standards, the presence of heterogeneous architectures and APIs, the vendor lock-in imposed by the market leaders, and the lack of cloud federation principles and facilitators. The main motivation for our PhD is to simplify the adoption of the cloud paradigm and the migration to cloud environments and technologies. Our goal has consequently been to improve interoperability and enable federation in the cloud. The thesis focused on two main areas: the first concerns the convergence of future networks and clouds, and the second the improvement of federation and interoperability between heterogeneous cloud solutions and services. Based on our state-of-the-art study of Cloud Computing and Cloud Networking, we defined in this thesis two architectures for Cloud federation. The first architecture enables the convergence of Cloud Computing and Cloud Networking. The second architecture addresses interoperability between services and proposes cloud-brokering solutions. The study enabled the identification of two essential components for cloud federation, namely a generic interface and a message exchange system. These two components are two central contributions of our thesis. The proposed federation architectures and these two components summarize the four major contributions of our work.
APA, Harvard, Vancouver, ISO, and other styles
46

Medhioub, Houssem. "Architectures et mécanismes de fédération dans les environnements cloud computing et cloud networking." Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2015. http://www.theses.fr/2015TELE0009.

Full text
Abstract:
Presented in the literature as a new technology, Cloud Computing has become essential in the development and delivery of IT services. Given the innovative potential of the Cloud, our thesis was conducted in the context of this promising technology. It was clear that the Cloud would change the way we develop, manage and use information systems. However, the adoption and popularization of the Cloud were slow and difficult, given the youth of the concepts and the heterogeneity of the existing solutions. This difficulty in adoption is reflected in the lack of standards, the presence of heterogeneous architectures and APIs, the vendor lock-in imposed by the market leaders, and the lack of cloud federation principles and facilitators. The main motivation for our PhD is to simplify the adoption of the cloud paradigm and the migration to cloud environments and technologies. Our goal has consequently been to improve interoperability and enable federation in the cloud. The thesis focused on two main areas: the first concerns the convergence of future networks and clouds, and the second the improvement of federation and interoperability between heterogeneous cloud solutions and services. Based on our state-of-the-art study of Cloud Computing and Cloud Networking, we defined in this thesis two architectures for Cloud federation. The first architecture enables the convergence of Cloud Computing and Cloud Networking. The second architecture addresses interoperability between services and proposes cloud-brokering solutions. The study enabled the identification of two essential components for cloud federation, namely a generic interface and a message exchange system. These two components are two central contributions of our thesis. The proposed federation architectures and these two components summarize the four major contributions of our work.
APA, Harvard, Vancouver, ISO, and other styles
47

Solomon, Robert Tyree. "Strategies for Human Resources Professionals Using Social Networking Websites for Hiring Decisions." ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/6678.

Full text
Abstract:
The use of social networking websites by employers without adequate strategies can lead to misuse of job applicants' information or discriminatory hiring practices. The purpose of this multiple case study was to identify strategies that some human resource professionals in the southeastern United States implemented to maximize the use of social networking websites in the hiring process. Signaling theory was used as the conceptual framework for this study. Semistructured face-to-face interviews were conducted with 8 purposefully selected human resource professionals who had used social networking websites for at least 3 years to screen and select job applicants. Documentation from participating organizations was also reviewed to assess the guidance employees received for using social networking websites to inform hiring decisions. Two other sources of data included field notes and observations of participants during interviews. Interview transcripts and supporting documents were coded using a priori and emergent codes focused on identifying themes among the strategies hiring managers used. A few of the themes that emerged from the thematic analysis of the interview data were professional social media, personal social media, and legal concerns. The results of this study may contribute to positive social change by providing human resource professionals and hiring managers with more knowledge for optimizing the use of social networking websites for cybervetting and hiring job candidates.
APA, Harvard, Vancouver, ISO, and other styles
48

Krishna, Nitesh. "Software-Defined Computational Offloading for Mobile Edge Computing." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37580.

Full text
Abstract:
Computational offloading advances the deployment of Mobile Edge Computing (MEC) in next-generation communication networks. However, the distributed nature of mobile users and the complexity of applications make it challenging to schedule tasks reasonably among multiple devices. Therefore, by leveraging the ideas of Software-Defined Networking (SDN) and Service Composition (SC), we propose a Software-Defined Service Composition (SDSC) model. In this model, the SDSC controller is deployed at the edge of the network and composes services in a centralized manner to reduce the latency of task execution and the traffic on the access links, while satisfying user-specific requirements. We formulate low-latency service composition as a Constraint Satisfaction Problem (CSP) to make it a user-centric approach. With the advent of SDN, a global view and control of the entire network are made available to the network controller, which our SDSC approach further leverages. Furthermore, service discovery and task offloading are designed for the MEC environment so that users can have a complex yet robust system. Moreover, this approach performs task execution in a distributed manner. We also define a QoS model that provides the composition rule forming the best possible service composition at the time of need. In addition, we have extended our SDSC model to account for the constant mobility of mobile devices. To address the mobility issue, we propose a mobility model and a mobility-aware QoS approach enabled in the SDSC model. The experimental simulation results demonstrate that our approach obtains better performance than the energy-saving greedy algorithm and the random offloading approach in a mobile environment.
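As a rough illustration of the kind of offloading decision described in this abstract (this is not the SDSC controller's algorithm; the processing rates, energy figures and the greedy policy are assumptions made for illustration), the sketch below picks, for each task, the lowest-latency execution site that still fits within a device energy budget:

```python
# Minimal sketch of a computational-offloading decision: choose local, edge,
# or cloud execution per task by estimated latency, subject to a device
# energy budget. All numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cycles: float        # CPU cycles required (in 10^9 cycles)
    data_mb: float       # input data to transfer if offloaded

SITES = {
    # site: (processing rate in 10^9 cycles/s, uplink MB/s, device J per MB sent)
    "local": (1.0, None, 0.0),
    "edge":  (8.0, 10.0, 0.4),
    "cloud": (32.0, 4.0, 0.4),
}
LOCAL_ENERGY_PER_GCYCLE = 0.9   # J spent per 10^9 cycles computed locally (assumed)

def cost(task: Task, site: str):
    """Return (estimated latency in s, device energy in J) for running task at site."""
    rate, uplink, tx_energy = SITES[site]
    latency = task.cycles / rate + (task.data_mb / uplink if uplink else 0.0)
    energy = task.cycles * LOCAL_ENERGY_PER_GCYCLE if site == "local" else task.data_mb * tx_energy
    return latency, energy

def schedule(tasks, energy_budget_j):
    """Greedy per-task choice; tasks fitting no site within the budget stay unscheduled."""
    plan, spent = {}, 0.0
    for t in tasks:
        for site in sorted(SITES, key=lambda s: cost(t, s)[0]):  # lowest latency first
            _, e = cost(t, site)
            if spent + e <= energy_budget_j:
                plan[t.name], spent = site, spent + e
                break
    return plan, spent

if __name__ == "__main__":
    tasks = [Task("render", 6.0, 5.0), Task("physics", 2.0, 1.0), Task("ai", 12.0, 2.0)]
    print(schedule(tasks, energy_budget_j=6.0))
```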
APA, Harvard, Vancouver, ISO, and other styles
49

Chang, Jaewoong. "A modelling and networking architecture for distributed virtual environments with multiple servers." Thesis, University of Hull, 1999. http://hydra.hull.ac.uk/resources/hull:8383.

Full text
Abstract:
Virtual Environments (VEs) attempt to give people the illusion of immersion, that is, of being in a computer-generated world. VEs allow people to actively participate in a synthetic environment. They range from a single user on a single computer to multiple users on several computers connected through a network. When VEs are distributed on multiple computers across a network, we call this a Distributed Virtual Environment (DVE). Virtual Environments can benefit greatly from distributed strategies. A networked VE system based on the client-server model is the most commonly used paradigm for constructing DVE systems. In a client-server model, data can be distributed over several server computers, which provide services to their own clients via networks. In some client-server models, however, a powerful server is required, or it will become a bottleneck. To reduce the amount of data and traffic maintained by a single server, the servers themselves can be distributed, and the virtual environment can be divided over a network of servers. The system described in this thesis, therefore, is based on the client-server model with multiple servers. This arrangement is called a Distributed Virtual Environment System with Multiple Servers (DVM). A DVM system represents a new paradigm of distributed virtual environments based on shared 3D synthetic environments. A variety of network elements are required to support large-scale DVM systems. The network is currently the most constrained resource of a DVM system, and the development of networking architectures is the key to solving the DVM challenge. Therefore, a networking architecture for implementing a DVM model is proposed. Finally, a DVM prototype system is described to demonstrate the validity of the modelling and networking architecture of the DVM model.
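As a small illustration of dividing a virtual environment over several servers, as the abstract describes, the following sketch assigns entities to servers by spatial region. The grid-based scheme, world size and server names are hypothetical choices for illustration, not the thesis' architecture.

```python
# Small sketch of partitioning a virtual environment across multiple servers
# by spatial region (a hypothetical grid scheme, for illustration only).
WORLD_SIZE = 1000.0          # square world, WORLD_SIZE x WORLD_SIZE units
GRID = 2                     # 2 x 2 regions -> 4 servers
SERVERS = [f"server-{i}" for i in range(GRID * GRID)]

def server_for(x: float, y: float) -> str:
    """Map a position in the world to the server owning that region."""
    col = min(int(x / (WORLD_SIZE / GRID)), GRID - 1)
    row = min(int(y / (WORLD_SIZE / GRID)), GRID - 1)
    return SERVERS[row * GRID + col]

if __name__ == "__main__":
    avatars = {"alice": (120.0, 80.0), "bob": (740.0, 910.0), "carol": (510.0, 490.0)}
    for name, (x, y) in avatars.items():
        print(f"{name} at ({x:.0f}, {y:.0f}) -> {server_for(x, y)}")
    # When an avatar crosses a region boundary, its state would be handed off
    # to the neighbouring server; that migration logic is beyond this sketch.
```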
APA, Harvard, Vancouver, ISO, and other styles
50

Zhang, Jie Zhang. "Designing and Building Efficient HPC Cloud with Modern Networking Technologies on Heterogeneous HPC Clusters." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1532737201524604.

Full text
APA, Harvard, Vancouver, ISO, and other styles