
Dissertations / Theses on the topic 'Federated cloud'



Consult the top 28 dissertations / theses for your research on the topic 'Federated cloud.'




1

Al, Abdulwahid Abdulwahid Abdullah. "Federated authentication using the Cloud (Cloud Aura)." Thesis, University of Plymouth, 2017. http://hdl.handle.net/10026.1/9596.

Full text
Abstract:
Individuals, businesses and governments undertake an ever-growing range of activities online and via various Internet-enabled digital devices. Unfortunately, these activities, services, information and devices are the targets of cybercrime. Verifying a user's legitimacy to use or access a digital device or service has become of the utmost importance. Authentication is the frontline countermeasure for ensuring that only the authorised user is granted access; however, it has historically suffered from a range of issues related to the security and usability of its approaches. Traditionally deployed in a point-of-entry mode (although a number of implementations also provide for re-authentication), the intrusive nature of the control is a significant inhibitor. Thus, it is apparent that a more innovative, convenient and secure user authentication solution is vital. This thesis reviews authentication methods along with the current use of authentication technologies, aiming to establish the state of the art and to identify the open problems to be tackled and the available solutions to be adopted. It also investigates whether these authentication technologies can close the gap between the need for high security and the goal of maximising user satisfaction. This is followed by a comprehensive literature survey and critical analysis of the existing research on continuous and transparent multibiometric authentication. It is evident that most of the studies and solutions proposed thus far suffer from one or more shortcomings: for instance, an inability to balance the trade-off between security and usability, confinement to specific devices, lack or neglect of evaluation of users' acceptance and privacy measures, and insufficiency or absence of real tested datasets. It concludes that providing users with adequate protection and convenience requires innovative, robust authentication mechanisms that can be utilised in a universal manner.
Accordingly, a high level of performance, scalability, and interoperability amongst existing and future systems, services and devices is paramount. A survey of 302 digital device users reveals that, despite widespread interest in more security, relatively few respondents use or maintain the available security measures. It is apparent, however, that users do not reject authentication as such but rather the inconvenience of its current common techniques (biometrics attract growing practical interest). The respondents' positive perceptions of Trusted Third Parties (TTPs) support a novel authentication solution in which biometrics are managed by a TTP and work across multiple devices to access multiple services, although such a solution must be developed and implemented considerately. A series of experimental feasibility studies discloses that even though prior Transparent Authentication System (TAS) models performed relatively well in practice on real live user data, an enhanced model utilising multibiometric fusion outperforms them in terms of the security and transparency of the system within a device. It is also empirically established that a centralised federated authentication approach using the Cloud helps construct a richer user profile encompassing multibiometrics and soft biometric information from the user's multiple devices, improving the security and convenience of the technique beyond those of unimodal approaches, Non-Intrusive and Continuous Authentication (NICA), Weighted Majority Voting Fusion (WMVF), and what a single device can achieve by itself. Furthermore, it reduces intrusive authentication requests by 62%-74% (of the total assumed intrusive requests without this model) in the worst cases.
As such, the thesis proposes a novel authentication architecture capable of operating in a transparent, continuous and convenient manner whilst functioning across a range of digital devices with differing hardware configurations, operating systems, processing capabilities and network connectivity (though these have yet to be fully validated). The approach, entitled Cloud Aura, achieves high levels of transparency, being less dependent on secret knowledge or any other intrusive login, and leverages the available devices' capabilities without requiring any external sensors. Cloud Aura incorporates a variety of biometrics of different types, i.e. physiological, behavioural, and soft biometrics, and maintains an ongoing identity confidence level based upon them, which is reflected in the user's privileges and mapped to the risk level associated with each request, resulting in the relevant reaction(s). While in use, it functions with minimal processing overhead, reducing the time required for the authentication decision. Ultimately, a functional proof-of-concept prototype is developed, showing that Cloud Aura is feasible and would provide effective security and user convenience.
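The confidence-driven access control described in this entry can be illustrated with a small sketch. This is not the thesis's implementation: the modalities, weights, thresholds, and the simple weighted-sum fusion below are invented for the example; Cloud Aura's actual fusion and risk mapping are more elaborate.

```python
# Hypothetical sketch: fuse per-modality match scores into an ongoing
# identity confidence, then compare it to the risk level of the request.
# All modality names, weights and thresholds are invented.

def fuse_confidence(scores, weights):
    """Weighted-sum fusion of normalised match scores (0..1) per modality."""
    total = sum(weights.values())
    return sum(scores[m] * weights[m] for m in scores) / total

def access_decision(confidence, risk):
    """Grant transparently, ask for a step-up factor, or deny."""
    if confidence >= risk:
        return "grant"            # transparent access, no challenge
    if confidence >= risk - 0.2:
        return "step-up"          # ask for one intrusive factor
    return "deny"

scores  = {"face": 0.91, "keystroke": 0.74, "gait": 0.62}
weights = {"face": 0.5, "keystroke": 0.3, "gait": 0.2}
conf = fuse_confidence(scores, weights)   # ~0.801
print(access_decision(conf, risk=0.7))    # grant
```

A real system would decay the confidence over time and re-fuse as fresh biometric samples arrive transparently.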
APA, Harvard, Vancouver, ISO, and other styles
2

RODRIGUES, Thiago Gomes. "Cloudacc: a cloud-based accountability framework for federated cloud." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18590.

Full text
Abstract:
The evolution of software service delivery has changed the way accountability is performed. The complexity of cloud computing environments makes it difficult to perform accountability properly, since evidence is spread throughout the whole infrastructure, across different servers, in the physical, virtualization and application layers. This complexity increases when cloud federation is considered because, besides the inherent complexity of the virtualized environment, the federation members may not implement the same security procedures and policies. The main objective of this thesis is to propose an accountability framework named CloudAcc that supports audit, management, planning and billing processes in federated cloud environments, increasing trust and transparency. Furthermore, CloudAcc considers the legal safeguard requirements presented in the Brazilian Marco Civil da Internet. We confirmed CloudAcc's effectiveness by subjecting some infrastructure elements to Denial of Service (DoS) and brute-force attacks, which our framework was able to detect. Given the results obtained, we conclude that CloudAcc contributes to the state of the art by providing a holistic view of the federated cloud environment through evidence collection across the three layers, supporting audit, management, planning and billing processes in federated cloud environments.
3

Liverani, Tommaso. "Federated Learning per Applicazioni Edge Cloud su Piattaforma fog05." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
Federated learning is currently one of the most interesting machine learning techniques. It applies to the fog computing scenario, where it is often necessary to preserve the confidentiality of data held on fog nodes; data privacy is indeed one of the defining characteristics of federated learning. In such scenarios it is particularly useful to employ a platform with migration support, which can be used to implement mechanisms such as load balancing, to migrate the entities involved between fog nodes in an edge-cloud architecture, or to enable future fully decentralised scenarios. The goal of this thesis is therefore to build a federated learning application with migration support for edge-cloud architectures. For this purpose fog05 was chosen: a fog computing platform with migration support that offers particularly innovative and interesting features compared with currently widespread solutions. Fog05 can manage extremely heterogeneous systems through plugins that let it interact with different technologies such as LXD, Docker and KVM. In a first phase we built the application described above on fog05; in a second phase we studied and tested fog05's migration support across the supported technologies.
4

Silva, Francisco Airton Pereira da. "Monext: an accounting framework for federated clouds." Universidade Federal de Pernambuco, 2013. https://repositorio.ufpe.br/handle/123456789/11989.

Full text
Abstract:
Cloud computing has become an established paradigm for running services on external infrastructure that dynamically allocates virtually unlimited capacity. This paradigm creates a new scenario for the deployment of applications and information technology (IT) services. In this model, complete applications and machine infrastructure are offered to users, who are charged only for the resources they consume. Cloud resources are offered through service abstractions classified into three main categories: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). In IaaS, computing resources are offered as virtual machines to the end user. Offering such seemingly unlimited resources necessitates distributing these virtual machines across multiple data centers. This distribution makes it harder to fulfill a number of requirements, including security, reliability, availability, and accounting. Accounting refers to how resources are recorded, accounted for, and charged. Even for a single cloud provider this task is hard, and it becomes more difficult in a cloud federation, or federated cloud, in which a cloud provider dynamically outsources resources to other providers in response to demand variation. A cluster of clouds thus shares heterogeneous resources, requiring greater effort to manage and accurately account for the distributed resources. Some earlier research has addressed the development of platforms for federated clouds but without considering the accounting requirement. This dissertation presents a framework for charging for IaaS with a focus on federated clouds. In order to gather information about this topic area and to generate guidelines for building the framework, we applied a systematic mapping study.
This dissertation also presents an initial validation of the tool through a case study, showing evidence that the requirements generated through the mapping study were fulfilled by the framework and presenting indications of its feasibility in a real cloud service scenario.
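The notion of accounting as "recording, accounting for, and charging" consumed resources can be made concrete with a toy sketch. It is not Monext itself: the record fields, provider names and per-hour rates below are invented, and a real federation would meter many more resource dimensions than VM-hours.

```python
# Hypothetical sketch of IaaS charging in a federation: usage records are
# tagged with the provider that actually hosted the VM (a provider may have
# outsourced it), then aggregated into a per-user bill.

from collections import defaultdict

RATES = {"provider_a": 0.10, "provider_b": 0.14}  # $ per VM-hour (assumed)

def bill(usage_records):
    """Aggregate (user, provider, vm_hours) records into per-user charges."""
    totals = defaultdict(float)
    for user, provider, hours in usage_records:
        totals[user] += hours * RATES[provider]
    return dict(totals)

records = [
    ("alice", "provider_a", 10.0),
    ("alice", "provider_b", 5.0),   # VM outsourced to a federation partner
    ("bob",   "provider_a", 2.0),
]
print(bill(records))   # alice owes about $1.70, bob about $0.20
```

The point the abstract makes is that in a federation such records originate in different administrative domains, so collecting and normalising them is the hard part, not the arithmetic.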
5

MacDermott, A. M. "Collaborative intrusion detection in federated cloud environments using Dempster-Shafer theory of evidence." Thesis, Liverpool John Moores University, 2017. http://researchonline.ljmu.ac.uk/7609/.

Full text
Abstract:
Moving services to the Cloud is a trend that has been increasing in recent years, with a constant increase in the sophistication and complexity of such services. Today, even critical infrastructure operators are considering moving their services and data to the Cloud. As Cloud computing grows in popularity, new models are deployed to further the associated benefits. Federated Clouds are one such concept, offering an alternative for companies reluctant to move their data out of house to Cloud Service Providers (CSPs) due to security and confidentiality concerns. The lack of collaboration among different components within a Cloud federation, or among CSPs, for the detection or prevention of attacks is an issue. Because Cloud environments and Cloud federations are large scale, it is essential that any solution for protecting these services and data scales alongside the environment and adapts to the underlying infrastructure without issues or performance implications. This thesis presents a novel architecture for collaborative intrusion detection specifically for CSPs within a Cloud federation. Our approach offers a proactive model for Cloud intrusion detection based on the distribution of responsibilities: the responsibility for managing the elements of the Cloud is distributed among several monitoring nodes and brokers, following our service-based collaborative intrusion detection, "Security as a Service", methodology. For collaborative intrusion detection, the Dempster-Shafer (D-S) theory of evidence is applied at a fusion node, which collects and fuses the information provided by the monitoring entities and takes the final decision regarding a possible attack. This type of detection and prevention helps increase resilience to attacks in the Cloud.
The main novel contribution of this project is that it provides the means by which DDoS attacks are detected within a Cloud federation, enabling an early, propagated response to block the attack. This inter-domain cooperation offers holistic security and adds to the defence in depth. However, while the utilisation of D-S seems promising, there is an issue regarding conflicting evidence, which is addressed with an extended two-stage D-S fusion process. The evidence from the research strongly suggests that fusion algorithms can play a key role in autonomous decision-making schemes; however, our experimentation highlights areas in which improvements are needed before the approach can be fully applied to federated environments.
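Dempster's rule of combination, which the fusion node applies to the monitoring nodes' reports, can be sketched in a few lines. The mass values below are invented for illustration, and the thesis's extended two-stage fusion for handling conflicting evidence is not reproduced here.

```python
# Minimal Dempster's rule for two monitoring nodes reporting belief masses
# over {attack}, {normal} and the full frame (uncertainty). Mass assigned to
# contradictory pairs (empty intersection) is the conflict K, and the
# surviving masses are renormalised by 1 - K.

def dempster_combine(m1, m2):
    """Combine two mass functions keyed by frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc          # mass lost to conflict (K)
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, N = frozenset({"attack"}), frozenset({"normal"})
node1 = {A: 0.6, N: 0.1, A | N: 0.3}   # invented monitoring reports
node2 = {A: 0.7, N: 0.2, A | N: 0.1}
fused = dempster_combine(node1, node2)
print(fused[A])   # combined belief mass for an attack
```

With highly conflicting reports K approaches 1 and the renormalisation becomes unstable, which is exactly the issue the two-stage process in the thesis is meant to address.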
6

Zapolskas, Vytautas. "Securing Cloud Storage Service." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for telematikk, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18626.

Full text
Abstract:
Cloud computing brought flexibility, scalability, and capital cost savings to the IT industry. As more companies turn to cloud solutions, securing cloud-based services becomes increasingly important, because for many organizations the final barrier to adopting cloud computing is whether it is sufficiently secure. Users increasingly rely on cloud storage, mainly because it can be accessed by multiple devices (e.g. smartphones, tablets, notebooks) at the same time. These services often offer adequate protection for users' private data. However, there have been cases where one user's private data was accessible to other users, since the data is stored in a multi-tenant environment. Such incidents reduce trust in cloud storage service providers; hence there is a need to securely migrate data from one cloud storage provider to another. This thesis proposes the design of a service providing Security as a Service for cloud brokers in a federated cloud. The scheme allows customers to securely migrate from one provider to another. To inform the design, possible security and privacy risks of a cloud storage service were analysed and identified. Moreover, in order to successfully protect private data, data protection requirements (for data retention, sanitization, and processing) were analysed. The proposed scheme utilizes various encryption techniques and also includes identity and key management mechanisms, such as federated identity management. While our proposed design meets most of the defined security and privacy requirements, it remains open how to properly handle data sanitization, meet data protection requirements, and provide users with data recovery capabilities (backups, versioning, etc.).
7

Rinaldi, Riccardo. "Deployment e Gestione di Applicazioni di Federated Learning in Edge Cloud Computing basate sul Framework Fog05." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
Federated learning is the branch of machine learning that emerged to meet the need for architectures capable of handling Big Data while guaranteeing the privacy of sensitive data. To satisfy both requirements, rather than gathering the data in a centralised database, the data never leave the edge of the network. This is where edge computing comes in: smart devices placed between the cloud and the Things pre-process the collected data before the results are aggregated on a single server. Federated learning and edge/cloud computing are two sides of the same coin; the two worlds are deeply interconnected, since doing federated learning means operating in an edge and cloud environment.
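The aggregation loop at the heart of federated learning can be sketched with a minimal FedAvg-style example: each edge node takes a local gradient step on its private data, and only the resulting parameters (never the data) travel to the server, which averages them. The 1-D linear model and the data below are invented for illustration.

```python
# Toy FedAvg-style round: local training stays on each node's private data;
# the server only sees and averages the returned model parameters.

def local_update(weights, data, lr=0.1):
    """One gradient step of a 1-D linear model y = w*x on local data."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(local_weights, sizes):
    """Server-side average of returned parameters, weighted by dataset size."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(local_weights, sizes)) / total

global_w = 0.0
edge_data = [[(1.0, 2.0), (2.0, 4.0)],     # node 1: y = 2x exactly
             [(1.0, 2.2), (3.0, 6.1)]]     # node 2: noisy y ~ 2x
for _ in range(20):                        # communication rounds
    updates = [local_update(global_w, d) for d in edge_data]
    global_w = fed_avg(updates, [len(d) for d in edge_data])
print(round(global_w, 2))                  # converges near w = 2
```

Migration support of the kind fog05 offers matters here because the participating entities (local trainers, the aggregator) may need to move between fog nodes mid-training, e.g. for load balancing.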
8

Ariyattu, Resmi. "Towards federated social infrastructures for plug-based decentralized social networks." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S031/document.

Full text
Abstract:
In this thesis, we address two issues in the area of decentralized distributed systems: network-aware overlays and collaborative editing. Even though network overlays have been extensively studied, most solutions either ignore the underlying physical network topology or use mechanisms that are specific to a given platform or application. This is problematic, as the performance of an overlay network strongly depends on the way its logical topology exploits the underlying physical network. To address this problem, we propose Fluidify, a decentralized mechanism for deploying an overlay network on top of a physical infrastructure while maximizing network locality. Fluidify uses a dual strategy that exploits both the logical links of an overlay and the physical topology of its underlying network to progressively align one with the other. The resulting protocol is generic, efficient, scalable and can substantially reduce network overheads and latency in overlay-based systems. The second issue that we address focuses on collaborative editing platforms. Distributed collaborative editors allow several remote users to contribute concurrently to the same document. Only a limited number of concurrent users can be supported by the currently deployed editors. A number of peer-to-peer solutions have therefore been proposed to remove this limitation and allow a large number of users to work collaboratively. These decentralized solutions assume, however, that all users are editing the same set of documents, which is unlikely to be the case. To open the path towards more flexible decentralized collaborative editors, we present Filament, a decentralized cohort-construction protocol adapted to the needs of large-scale collaborative editors. Filament eliminates the need for any intermediate DHT and allows nodes editing the same document to find each other in a rapid, efficient and robust manner by generating an adaptive routing field around themselves.
Filament's architecture hinges on a set of collaborating self-organizing overlays that utilize the semantic relations between peers. The resulting protocol is efficient, scalable and provides beneficial load-balancing properties over the involved peers.
9

Espling, Daniel. "Enabling Technologies for Management of Distributed Computing Infrastructures." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-80129.

Full text
Abstract:
Computing infrastructures offer remote access to computing power that can be employed, e.g., to solve complex mathematical problems or to host computational services that need to be online and accessible at all times. From the perspective of the infrastructure provider, large amounts of distributed and often heterogeneous computer resources need to be united into a coherent platform that is then made accessible to and usable by potential users. Grid computing and cloud computing are two paradigms that can be used to form such unified computational infrastructures. Resources from several independent infrastructure providers can be joined to form large-scale decentralized infrastructures. The primary advantage of doing this is that it increases the scale of the available resources, making it possible to address more complex problems or to run a greater number of services on the infrastructures. In addition, there are advantages in terms of factors such as fault-tolerance and geographical dispersion. Such multi-domain infrastructures require sophisticated management processes to mitigate the complications of executing computations and services across resources from different administrative domains. This thesis contributes to the development of management processes for distributed infrastructures that are designed to support multi-domain environments. It describes investigations into how fundamental management processes such as scheduling and accounting are affected by the barriers imposed by multi-domain deployments, which include technical heterogeneity, decentralized and (domain-wise) self-centric decision making, and a lack of information on the state and availability of remote resources. 
Four enabling technologies or approaches are explored and developed within this work: (I) The use of explicit definitions of cloud service structure as inputs for placement and management processes to ensure that the resulting placements respect the internal relationships between different service components and any relevant constraints. (II) Technology for the runtime adaptation of Virtual Machines to enable the automatic adaptation of cloud service contexts in response to changes in their environment caused by, e.g., service migration across domains. (III) Systems for managing meta-data relating to resource usage in multi-domain grid computing and cloud computing infrastructures. (IV) A global fairshare prioritization mechanism that enables computational jobs to be consistently prioritized across a federation of several decentralized grid installations. Each of these technologies will facilitate the emergence of decentralized computational infrastructures capable of utilizing resources from diverse infrastructure providers in an automatic and seamless manner.
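The fairshare idea in (IV) can be illustrated with a toy priority function: jobs of users who have consumed less than their target share of the federation's capacity are boosted, and jobs of users who have exceeded it are penalised. The shares and usage figures below are invented; FSGrid's actual mechanism is richer than this.

```python
# Hypothetical fairshare sketch: priority is the gap between a user's
# target share and their actual fraction of the federation's past usage.

def fairshare_priority(target_share, usage, total_usage):
    """Positive when a user is under their target share, negative when over."""
    actual = usage / total_usage if total_usage else 0.0
    return target_share - actual

# (target share of capacity, core-hours consumed so far) -- invented figures
users = {"alice": (0.5, 120.0), "bob": (0.5, 40.0)}
total = sum(u for _, u in users.values())
queue = sorted(users,
               key=lambda u: fairshare_priority(users[u][0], users[u][1], total),
               reverse=True)
print(queue)   # ['bob', 'alice'] -- bob has used less of his share
```

The federation-wide difficulty the thesis addresses is that `usage` and `total_usage` are scattered across independently administered sites, so computing a consistent global priority requires sharing usage metadata between them.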

Note that the author changed surname from Henriksson to Espling in 2011

10

Espling, Daniel. "Metadata Management in Multi-Grids and Multi-Clouds." Licentiate thesis, Umeå universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-51159.

Full text
Abstract:
Grid computing and cloud computing are two related paradigms used to access and use vast amounts of computational resources. The resources are often owned and managed by a third party, relieving users of the costs and burdens of acquiring and managing a large infrastructure themselves. Commonly, the resources are either contributed by different stakeholders participating in shared projects (grids), or owned and managed by a single entity and made available to its users with charging based on actual resource consumption (clouds). Individual grid or cloud sites can form collaborations with other sites, giving each site access to more resources that can be used to execute tasks submitted by users. There are several different models of collaboration between sites, each suitable for different scenarios and each posing additional requirements on the underlying technologies. Metadata concerning the status and resource consumption of tasks is created during their execution on the infrastructure. This metadata is the primary input to many core management processes, e.g., as a base for accounting and billing, as input when prioritizing and placing incoming tasks, and as a base for managing the amount of resources allocated to different tasks. Focusing on the management and utilization of metadata, this thesis contributes to a better understanding of the requirements and challenges imposed by different collaboration models in both grids and clouds. The underlying design criteria and resulting architectures of several software systems are presented in detail. Each system addresses different challenges imposed by cross-site grid and cloud architectures: the LUTSfed approach provides a lean and optional mechanism for filtering and managing usage data between grid or cloud sites.
An accounting and billing system natively designed to support cross-site clouds demonstrates usage data management despite unknown placement and dynamic task resource allocation. The FSGrid system enables fairshare job prioritization across different grid sites, mitigating the problems of heterogeneous scheduling software and local management policies. The results and experiences from these systems are both theoretical and practical, as full-scale implementations of each system have been developed and analyzed as part of this work. Early theoretical work on structure-based service management forms a foundation for future work on structure-aware service placement in cross-site clouds.
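The fairshare idea behind a system like FSGrid can be illustrated with a toy prioritization function. This is a hedged sketch: the formula, project names, and numbers below are invented for illustration and are not FSGrid's actual policy.

```python
# Toy fairshare prioritization: projects whose recent usage is furthest
# below their allocated share get their jobs scheduled first.
# (Illustrative only; not FSGrid's real algorithm.)

def fairshare_order(jobs, usage, share):
    """Sort jobs so that under-served projects run first."""
    def priority(job):
        project = job["project"]
        # Positive when the project has used less than its share.
        return share[project] - usage.get(project, 0.0)
    return sorted(jobs, key=priority, reverse=True)

jobs = [{"id": 1, "project": "astro"}, {"id": 2, "project": "bio"}]
order = fairshare_order(jobs,
                        usage={"astro": 0.7, "bio": 0.2},
                        share={"astro": 0.5, "bio": 0.5})
print([j["id"] for j in order])  # [2, 1] -- "bio" is under its share
```

In a real multi-site deployment, the usage figures would come from the kind of cross-site usage-data management the thesis describes.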
APA, Harvard, Vancouver, ISO, and other styles
11

Al-Hazmi, Yahya [Verfasser], Thomas [Akademischer Betreuer] Magedanz, Thomas [Gutachter] Magedanz, Ina [Gutachter] Schieferdecker, and Serge [Gutachter] Fdida. "Unification of monitoring interfaces of federated cloud and Future Internet testbed infrastructures / Yahya Al-Hazmi ; Gutachter: Thomas Magedanz, Ina Schieferdecker, Serge Fdida ; Betreuer: Thomas Magedanz." Berlin : Technische Universität Berlin, 2016. http://d-nb.info/115618651X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Ferreira Leite, Alessandro. "A user-centered and autonomic multi-cloud architecture for high performance computing applications." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112355/document.

Full text
Abstract:
Cloud computing has been seen as an option to execute high performance computing (HPC) applications. While traditional HPC platforms such as grids and supercomputers offer a stable environment in terms of failures, performance, and number of resources, cloud computing offers on-demand resources, generally with unpredictable performance, at low financial cost. Furthermore, in a cloud environment, failures are part of normal operation. To overcome the limits of a single cloud, clouds can be combined to form a cloud federation, often with minimal additional costs for the users. A cloud federation can help both cloud providers and cloud users to achieve their goals, such as reducing execution time, minimizing cost, increasing availability, and reducing power consumption, among others. Hence, cloud federation can be an elegant solution to avoid over-provisioning, reducing operational costs in an average-load situation and removing resources that would otherwise remain idle and waste power. However, cloud federation increases the range of resources available to the users. As a result, cloud or system administration skills may be demanded of the users, as well as considerable time to learn about the available options. In this context, some questions arise: (a) which cloud resource is appropriate for a given application? (b) how can users execute their HPC applications with acceptable performance and financial costs, without needing to re-engineer the applications to fit the clouds' constraints? (c) how can non-cloud specialists maximize the features of the clouds, without being tied to a cloud provider? and (d) how can cloud providers use the federation to reduce the power consumption of the clouds, while still being able to give service-level agreement (SLA) guarantees to the users? Motivated by these questions, this thesis presents an SLA-aware application consolidation solution for cloud federation. 
Using a multi-agent system (MAS) to negotiate virtual machine (VM) migrations between the clouds, simulation results show that our approach could reduce up to 46% of the power consumption while trying to meet performance requirements. Using the federation, we developed and evaluated an approach to execute a huge bioinformatics application at zero cost. Moreover, we could decrease the execution time by 22.55% over the best single-cloud execution. In addition, this thesis presents a cloud architecture called Excalibur to auto-scale cloud-unaware applications. Executing a genomics workflow, Excalibur could seamlessly scale the applications up to 11 virtual machines, reducing the execution time by 63% and the cost by 84% when compared to a user's configuration. Finally, this thesis presents a product line engineering (PLE) process to handle the variabilities of infrastructure-as-a-service (IaaS) clouds, and an autonomic multi-cloud architecture that uses this process to configure and to deal with failures autonomously. The PLE process uses an extended feature model (EFM) with attributes to describe the resources and to select them based on users' objectives. Experiments realized with two different cloud providers show that, using the proposed model, users could execute their applications in a cloud federation environment without needing to know the variabilities and constraints of the clouds.
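The attribute-based selection idea behind the EFM approach can be sketched as a simple weighted scoring over candidate offerings. This is not the thesis's implementation; the offering names, attributes, and weights are invented for illustration.

```python
# Hypothetical sketch: pick a cloud offering by weighted user objectives,
# loosely in the spirit of selecting over a feature model with attributes.

def select_offering(offerings, weights):
    """Score offerings: reward performance, penalize hourly cost."""
    def score(o):
        return (weights["performance"] * o["performance"]
                - weights["cost"] * o["cost_per_hour"])
    return max(offerings, key=score)

offerings = [
    {"name": "cloud-a.small", "performance": 1.0, "cost_per_hour": 0.05},
    {"name": "cloud-b.large", "performance": 4.0, "cost_per_hour": 0.40},
]
best = select_offering(offerings, {"performance": 1.0, "cost": 5.0})
print(best["name"])  # cloud-b.large
```

A user who weights cost more heavily would see the selection flip to the cheaper offering, which is the point of driving selection from user objectives rather than fixed provider defaults.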
APA, Harvard, Vancouver, ISO, and other styles
13

BATISTA NETO, Luiz Aurélio. "Um Mecanismo de Integração de Identidades Federadas entre Shibboleth e SimpleSAMLphp para aplicações de Nuvens." Universidade Federal do Maranhão, 2014. http://tedebc.ufma.br:8080/jspui/handle/tede/1784.

Full text
Abstract:
CAPES
Cloud computing applications are vulnerable to security threats originating from the Internet, because resources are shared with other users and managed by third parties. The diversity of services and technologies also presents a challenge to integrating identities and user data in a distributed context. To address these issues, identity management techniques, especially those using a federated approach, are crucial to protect information from unauthorized access and to allow the exchange of resources between mutually trusted parties. The objective of this work is to develop a model that allows integration between identity providers through the Security Assertion Markup Language (SAML) protocol, in order to provide access to applications in multiple Cloud Computing domains. In this scenario, each domain groups users and services according to the user-representation mechanism of the identity management system used (Shibboleth or SimpleSAMLphp). The proposed model is implemented to verify its applicability. In experiments performed by computer simulation, the results obtained demonstrate the feasibility of the presented model.
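One practical step in integrating identity providers like Shibboleth and SimpleSAMLphp is normalizing the attributes each provider releases into one common schema. The sketch below is illustrative, not the thesis's code; the attribute OIDs follow common SAML/LDAP conventions (`mail`, `givenName`), and the mapping table is an assumption.

```python
# Normalize provider-specific SAML attribute names to a common schema so a
# cloud service can treat users from different identity providers uniformly.

# Standard LDAP OIDs often seen in Shibboleth attribute releases.
SHIB_MAP = {
    "urn:oid:0.9.2342.19200300.100.1.3": "mail",       # mail
    "urn:oid:2.5.4.42": "givenName",                   # givenName
}

def normalize(attrs, mapping):
    """Rename provider-specific attribute keys; pass unknown keys through."""
    return {mapping.get(k, k): v for k, v in attrs.items()}

shib_attrs = {"urn:oid:0.9.2342.19200300.100.1.3": ["alice@example.org"]}
print(normalize(shib_attrs, SHIB_MAP))  # {'mail': ['alice@example.org']}
```

A SimpleSAMLphp provider that already releases friendly names would use an identity mapping, which is exactly why the normalization layer keeps unknown keys unchanged.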
APA, Harvard, Vancouver, ISO, and other styles
14

Schreiner, Florian [Verfasser], Thomas [Akademischer Betreuer] Magedanz, Odej [Akademischer Betreuer] Kao, Serge [Akademischer Betreuer] Fdida, and Alfonso [Akademischer Betreuer] Ehijo. "Resource efficient quality of service management for NGN services in federated cloud environments / Florian Schreiner. Gutachter: Thomas Magedanz ; Odej Kao ; Serge Fdida ; Alfonso Ehijo. Betreuer: Thomas Magedanz." Berlin : Technische Universität Berlin, 2015. http://d-nb.info/1068255986/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Di, Donato Davide. "Sviluppo, Deployment e Validazione Sperimentale di Architetture Distribuite di Machine Learning su Piattaforma fog05." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19021/.

Full text
Abstract:
Interest in fog computing and the possibilities it offers has been growing steadily, including the ability to exploit considerable computational capacity even in the nodes closest to the end user: this makes it possible to improve several quality parameters of a service, such as delivery latency and communication cost. In this thesis, building on these considerations, we created and tested two distributed machine learning architectures and then used them to provide a prediction service (related to condition monitoring) that improves on the cloud solution with respect to the parameters mentioned above. The fog05 platform, a tool that allows efficient management of the various resources in a network, was then used to deploy these architectures. There were two objectives: to validate the architectures in terms of accuracy and convergence speed, and to confirm fog05's ability to manage complex deployments such as those required in our case. First, the architectures were chosen: one based on the concept of gossip learning, the other on federated learning. These architectures were then implemented with Keras and tested: it emerged clearly that, in use cases like the one examined, distributed approaches can deliver performance only slightly below a centralized solution. Finally, the architectures were successfully deployed with fog05, encapsulating its functionality inside an ad-hoc orchestrator in order to manage the provision of the service offered by the architectures in the most automated and resilient way possible.
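The aggregation step at the heart of the federated-learning architecture mentioned above can be sketched with plain Python lists standing in for Keras model weights. This is a minimal federated-averaging (FedAvg-style) illustration with made-up client data, not the thesis's implementation.

```python
# Minimal federated averaging: combine per-client model parameters,
# weighting each client by how much local data it trained on.

def federated_average(client_weights, client_sizes):
    """Return the size-weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n_params)]

# Two clients with different amounts of local data (sizes 1 and 3).
avg = federated_average([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
print(avg)  # [2.5, 3.5]
```

With real Keras models, the same arithmetic would be applied layer by layer to the arrays returned by `model.get_weights()`.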
APA, Harvard, Vancouver, ISO, and other styles
16

Riteau, Pierre. "Dynamic execution platforms over federated clouds." Rennes 1, 2011. https://tel.archives-ouvertes.fr/tel-00651258v2.

Full text
Abstract:
The increasing need for computing power has led to parallel and distributed computing, which harness the power of large computing infrastructures in a concurrent manner. Recently, virtualization technologies have increased in popularity, thanks to hypervisor improvements, the shift to multi-core architectures, and the spread of Internet services. This has led to the emergence of cloud computing, a paradigm offering computing resources in an elastic, on-demand approach while charging only for consumed resources. In this context, this thesis proposes four contributions to leverage the power of multiple clouds. They follow two directions: the creation of elastic execution platforms on top of federated clouds, and inter-cloud live migration for using them in a dynamic manner. We propose mechanisms to efficiently build elastic execution platforms on top of multiple clouds using the sky computing federation approach. Resilin is a system for creating and managing MapReduce execution platforms on top of federated clouds, allowing users to easily execute MapReduce computations without interacting with low-level cloud interfaces. We propose mechanisms to reconfigure virtual network infrastructures in the presence of inter-cloud live migration, implemented in the ViNe virtual network from the University of Florida. Finally, Shrinker is a live migration protocol that improves the migration of virtual clusters over wide-area networks by eliminating data duplicated between virtual machines.
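The deduplication idea behind Shrinker can be sketched as follows: hash each VM memory page before sending it over the WAN, and transfer only pages whose hash the destination has not already seen. The page contents and sizes below are toy values, not real VM state, and this is an illustration of the general technique rather than Shrinker's protocol.

```python
import hashlib

def pages_to_send(pages, remote_hashes):
    """Split pages into those to transfer and those the remote already holds."""
    send, reuse = [], []
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        if digest in remote_hashes:
            reuse.append(digest)      # destination rebuilds from its own copy
        else:
            send.append(page)         # must actually cross the WAN
            remote_hashes.add(digest) # remote will hold it from now on
    return send, reuse

# Three 4 KiB pages, two of them identical: only two cross the network.
send, reuse = pages_to_send([b"A" * 4096, b"B" * 4096, b"A" * 4096], set())
print(len(send), len(reuse))  # 2 1
```

The win grows with the number of identical pages across the virtual cluster, which is why the technique targets clusters of similar VMs migrating together.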
APA, Harvard, Vancouver, ISO, and other styles
17

Wen, Zhenyu. "Partitioning workflow applications over federated clouds to meet non-functional requirements." Thesis, University of Newcastle upon Tyne, 2016. http://hdl.handle.net/10443/3343.

Full text
Abstract:
With cloud computing, users can acquire computer resources when they need them, on a pay-as-you-go business model. Because of this, many applications are now being deployed in the cloud, and there are many different cloud providers worldwide. Importantly, all these infrastructure providers offer services with different levels of quality. For example, cloud data centres are governed by the privacy and security policies of the country where the centre is located, while many organisations have created their own internal "private cloud" to meet security needs. With all these varieties and uncertainties, application developers who decide to host their system in the cloud face the issue of which cloud to choose to get the best operational conditions in terms of price, reliability and security. And the decision becomes even more complicated if their application consists of a number of distributed components, each with slightly different requirements. Rather than trying to identify the single best cloud for an application, this thesis considers an alternative approach, that is, combining different clouds to meet users' non-functional requirements. Cloud federation offers the ability to distribute a single application across two or more clouds, so that the application can benefit from the advantages of each one of them. The key challenge for this approach is how to find the distribution (or deployment) of application components which can yield the greatest benefits. In this thesis, we tackle this problem and propose a set of algorithms, and a framework, to partition a workflow-based application over federated clouds in order to exploit the strengths of each cloud. The specific goal is to split a distributed application structured as a workflow such that the security and reliability requirements of each component are met, whilst the overall cost of execution is minimised. 
To achieve this, we propose and evaluate a cloud broker for partitioning a workflow application over federated clouds. The broker integrates with the e-Science Central cloud platform to automatically deploy a workflow over public and private clouds. We developed a deployment planning algorithm to partition a large workflow application across federated clouds so as to meet security requirements and minimise the monetary cost. A more generic framework is then proposed to model, quantify and guide the partitioning and deployment of workflows over federated clouds. This framework considers the situation where changes in cloud availability (including cloud failure) arise during workflow execution.
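The partitioning goal described above can be reduced to a toy greedy rule: assign each workflow component to the cheapest cloud whose security level meets the component's requirement. The thesis's actual planning algorithm is more sophisticated; the clouds, security levels, and prices below are invented.

```python
# Toy workflow partitioner: cheapest eligible cloud per component.
# "security" is an ordinal level; a cloud is eligible if its level is
# at least the component's required level.

def partition(components, clouds):
    plan, total = {}, 0.0
    for name, required_level in components:
        eligible = [c for c in clouds if c["security"] >= required_level]
        if not eligible:
            raise ValueError(f"no cloud satisfies component {name!r}")
        chosen = min(eligible, key=lambda c: c["price"])
        plan[name] = chosen["name"]
        total += chosen["price"]
    return plan, total

clouds = [{"name": "public", "security": 1, "price": 1.0},
          {"name": "private", "security": 3, "price": 4.0}]
plan, cost = partition([("ingest", 1), ("medical-records", 3)], clouds)
print(plan, cost)  # {'ingest': 'public', 'medical-records': 'private'} 5.0
```

A per-component greedy choice ignores data-transfer costs between components placed on different clouds, which is one reason a global planning algorithm, as in the thesis, is needed in practice.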
APA, Harvard, Vancouver, ISO, and other styles
18

Huang, Chih-Chieh, and 黃志傑. "Resource Brokerage for Federated Cloud Storage System." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/49057630766612178807.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
101
With the evolution of distributed computing systems, the need for high-performance and large-scale computing keeps increasing. Cloud applications and users may demand a storage system with security, availability, performance, and reliability. A variety of public cloud storage providers and private storage systems have tried to meet these requirements with different approaches. However, because of the CAP theorem, no single cloud storage provider is able to fulfil every expectation at the same time. In addition, cloud storage providers usually offer different APIs for access, so users face the issue of vendor lock-in. Leveraging heterogeneous storage resources in federated cloud storage is a promising way to solve these issues. In this paper, we focus on proposing a federated cloud storage system with a uniform interface. Based on a prioritized brokerage model, a resource brokerage is further presented to benefit matchmaking, with consideration of user requirements, file classifications, and storage characteristics. We evaluate the system performance with real traces and workloads on 31 nodes. Experimental results show that our approach achieves performance gains of 35%–125%.
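The matchmaking step of a prioritized brokerage model can be sketched as scoring each storage backend against a file class's weighted requirements. The backend names, attributes, and weights below are hypothetical; the thesis's actual model is not reproduced here.

```python
# Rank storage backends for a file class by weighted requirement match.
# Attributes are normalized to [0, 1]; higher weight = higher priority.

def matchmake(requirements, backends):
    """Return backends sorted best-first by weighted attribute score."""
    def score(b):
        return sum(weight * b.get(attr, 0.0)
                   for attr, weight in requirements.items())
    return sorted(backends, key=score, reverse=True)

backends = [
    {"name": "public-object-store", "availability": 0.9, "throughput": 0.4},
    {"name": "private-ceph", "availability": 0.7, "throughput": 0.9},
]
# A throughput-sensitive file class weights throughput twice as heavily.
ranked = matchmake({"throughput": 2.0, "availability": 1.0}, backends)
print(ranked[0]["name"])  # private-ceph
```

Changing the weights to favour availability would rank the public object store first, which mirrors how different file classifications can be routed to different backends behind one uniform interface.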
APA, Harvard, Vancouver, ISO, and other styles
19

Chuang, Hung Ming, and 莊閎名. "Federated Anonymous Identity Management for Cloud Computing." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/23862051980371450335.

Full text
Abstract:
Master's thesis
Chang Gung University
Department of Information Management
101
We propose a federated identity management scheme for cloud computing in which cloud service providers form an alliance by agreement, so that users can use every provider's services. A third-party cloud trust center is in charge of the alliance's maintenance and management. We also provide mutual authentication, so users and providers can verify the legitimacy of each other's identity. In addition, since users' personal data are stored with different providers, it is hard to guarantee that the data will not be disclosed or stolen. Following CSA's cloud security guidance, users in our scheme are anonymous in the cloud environment, and anonymity reduces the disclosure of personal privacy data. The issuer can trace a user's real identity to resolve disputes arising from anonymity. Our scheme has the following features: (1) Federated identity management, letting users single sign-on to cloud services. (2) Mutual authentication, to verify the legitimacy of each other's identity. (3) Anonymity, reducing the risk of personal data being disclosed or stolen by accessing services anonymously. (4) Traceability, whereby the issuer can trace a user's real identity. (5) Non-repudiation, so an anonymous user cannot deny what he has done. (6) Unforgeability, so that even providers who know a user's private key cannot forge the user's identity.
APA, Harvard, Vancouver, ISO, and other styles
20

Huang, Hua-Yi, and 黃華儀. "A Federated ID Based Authentication for Hybrid Cloud Environment." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/90618314676007511873.

Full text
Abstract:
Master's thesis
Shih Hsin University
Graduate Institute of Information Management (including the professional master's program)
99
With the maturing of network infrastructure technology and increasing network bandwidth, network services have been applied widely and have become an essential part of personal life and enterprise operations. The term "cloud computing" has quickly become one of the most popular topics in the world, pushing major IT companies to devote their efforts to the development of cloud computing related technologies and services. While cloud computing has a spectrum of benefits, including cost reduction, rapid deployment, flexible expansion, portability, and globalization, according to reports from the Information Systems Audit and Control Association and the Government Technology Research Alliance, enterprises are reluctant to adopt cloud computing solutions owing to concerns about the security mechanisms in these clouds. Furthermore, it is inconvenient for users to re-authenticate when they access cloud services from different organizations. For these reasons, cloud computing is still not broadly adopted. This thesis aims to provide a simple and secure approach to using cloud services. Based on the federated identity management architecture, users can access services on the hybrid cloud through an OpenID. To ensure the integrity and privacy of data transmission between clouds, encryption mechanisms are established during the authentication process.
APA, Harvard, Vancouver, ISO, and other styles
21

Miranda, Pedro Miguel Simões. "Enabling and sharing storage space under a federated cloud environment." Master's thesis, 2015. http://hdl.handle.net/10362/18543.

Full text
Abstract:
To support the Portuguese scientific community LIP, Laboratório de Instrumentação e Física Experimental de Partículas, has been operating a high performance computing (HPC) infrastructure under a grant from the Infraestrutura Nacional de Computação Distribuída (INCD) Program. Now, LIP is promoting another initiative towards the same community, that is, to build a Cloud Computing (CC) service which orchestrates the three fundamental resources: compute, network, and storage. The main goal of this dissertation is to research, implement, benchmark and adopt the most appropriate backend storage architecture for the OpenStack cloud computing software service, chosen by LIP (following EGI, the European Grid Infrastructure) to be the cloud platform to be deployed in the new CC-INCD program. For this work, our objectives are: a) to gain an understanding of OpenStack – its architecture, and the way it works to offer an Infrastructure-as-a-Service (IaaS) platform; b) look for candidates suitable to be deployed as OpenStack's storage backends, which should be able to store templates (images, in the OpenStack terminology) of virtual machines (VMs) and ISO images (CDs/DVDs), ephemeral and persistent virtual disks for VM instances, etc.; c) to present a preliminary study of three file systems that are strong candidates to be integrated with OpenStack: NFS, Ceph and GlusterFS; and, d) to choose a candidate to integrate with OpenStack, and perform an experimental evaluation.
APA, Harvard, Vancouver, ISO, and other styles
22

Huang, Chao-Chi, and 黃昭棋. "A Federated Identity Assurance and Access Management System for Cloud Computing." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/kahvpa.

Full text
Abstract:
Doctoral dissertation
National Taipei University of Technology
Graduate Institute of Mechatronic Technology
99
Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. However, cloud computing services are still in a developmental stage; cloud computing best practices are evolving, and security is still a major concern. Furthermore, the traditional Identity and Access Management (IAM) approach cannot fit into a cloud computing platform, because the enterprise does not control the cloud service provider's IAM practices and has even less influence over its security practices. This work provides a solution for a Federated Identity Assurance and Access Management System for the IAM process in a cloud computing environment. The Federated Identity Manager described here has been implemented. It supports cross-domain single sign-on (CD SSO) and interchanges access control information with partners, reflecting trust relationships. Four subsystems have been successfully implemented in the proposed Management System: an Identity Provisioning Module, an Authentication and Authorization Management Module, a Federated Identity Management Module, and an Assurance Management Module. The results of this research offer a better security service management framework for large-scale cloud security services.
APA, Harvard, Vancouver, ISO, and other styles
23

Lin, Chun-Yo, and 林俊佑. "Realtime CNN-based Object Recognition Framework for Safety-Critical Application using Federated Mobile Cloud Computing Platforms." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/3bxhux.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
105
With novel deep learning methods, vision-based object detection has become more and more powerful. However, the performance of state-of-the-art object detection methods is still unacceptable in real-life scenarios. Take ADAS for example: object detection should recognize objects and run at high fps to avoid potential collisions. In order to improve the performance of object detection, we first analyze the root cause of poor accuracy. We found a big difference between the training set and the testing set, and this difference makes the model unable to recognize objects in the testing set. We visualize the difference by applying PCA to both sets. We also calculate the minimum framerate that object detection needs to achieve: based on the ADAS/AEBS regulation tests, it is 3.75 fps. Thus, we propose a framework in which mobile and cloud devices collaborate to enhance the accuracy and speed of object detection. We do online incremental training on the cloud to enhance the accuracy of the object detection model in real time. In addition, we interleave object detection with object tracking to speed up the overall detection fps. After applying our framework, the model's recall increases by about 56% and the detection fps improves at least three times.
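The detection/tracking interleaving described above can be sketched as running the expensive detector only every N frames and a cheap tracker in between. The detector and tracker below are stubs; in a real pipeline they would be a CNN detector and, for example, an optical-flow tracker.

```python
# Interleave slow detection with fast tracking to raise effective fps.

def process_stream(frames, detect, track, detect_every=4):
    """Run detect() on every detect_every-th frame, track() otherwise."""
    boxes, results = None, []
    for i, frame in enumerate(frames):
        if i % detect_every == 0 or boxes is None:
            boxes = detect(frame)          # expensive CNN pass
        else:
            boxes = track(frame, boxes)    # cheap update of previous boxes
        results.append(boxes)
    return results

# Stubs that count detector invocations instead of doing real inference.
calls = {"detect": 0}
def detect(frame):
    calls["detect"] += 1
    return [("car", frame)]
def track(frame, boxes):
    return boxes

out = process_stream(range(8), detect, track, detect_every=4)
print(calls["detect"])  # 2 -- detector ran only on frames 0 and 4
```

With a detector that takes, say, 4x the time of the tracker, running it on every fourth frame roughly halves per-frame cost, which is how the interleaving helps meet a minimum-fps requirement.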
APA, Harvard, Vancouver, ISO, and other styles
24

Bhojwani, Sushil. "Interoperability in Federated Clouds." Thesis, 2015. http://hdl.handle.net/1828/6732.

Full text
Abstract:
Cloud Computing is the new trend in sharing resources, sharing and managing data, and performing computations on shared resources via the Internet. However, with the constant increase in demand, these resources are insufficient, so users often use another network in conjunction with the current one. All these networks accomplish the goal of providing the user with a virtual or physical machine. However, to achieve this, virtual machine users have to maintain a multitude of credentials and follow a different process for each network. In this thesis, we focus on SAGEFed, a product that enables a user to use the same credentials and commands to reserve resources on two different federated clouds, i.e., SAVI and GENI. As part of SAGEFed, the user can acquire or reserve resources on the clouds with the same API. The same service also manages the credentials, so users do not have to manage different credentials while acquiring resources. Furthermore, SAGEFed demonstrates that any cloud that has some form of client tool can be easily integrated.
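A unified API over heterogeneous client tools, as SAGEFed provides, is essentially an adapter pattern. The sketch below is illustrative only: the client classes and method names are stand-ins, not SAVI's or GENI's real client interfaces.

```python
# Adapter sketch: one reserve() call dispatching to whichever native
# operation each federation member's client tool exposes.

class SaviClient:                      # stand-in for a SAVI client tool
    def boot_server(self, image):
        return f"savi:{image}"

class GeniClient:                      # stand-in for a GENI client tool
    def createsliver(self, image):
        return f"geni:{image}"

class UnifiedCloud:
    def __init__(self, backend):
        self.backend = backend

    def reserve(self, image):
        # Dispatch on the native call the backend happens to expose.
        if hasattr(self.backend, "boot_server"):
            return self.backend.boot_server(image)
        return self.backend.createsliver(image)

print(UnifiedCloud(SaviClient()).reserve("ubuntu"))  # savi:ubuntu
print(UnifiedCloud(GeniClient()).reserve("ubuntu"))  # geni:ubuntu
```

Adding a third cloud then means writing one more thin adapter rather than teaching every user a new credential scheme and command set, which matches the claim that any cloud with a client tool can be integrated.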
APA, Harvard, Vancouver, ISO, and other styles
25

LIU, CHIEN YU, and 劉建佑. "Vertical / Horizontal Resource Allocation Mechanism in Federated Clouds." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/4g4rp7.

Full text
Abstract:
Master's thesis
National Taichung University of Education
Department of Computer Science
102
Cloud computing can be classified into three service models: IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service). Cloud computing significantly reduces hardware cost and improves resource utilization, so businesses and institutions are eager to build their own private cloud systems. In addition, some well-known public clouds, such as Amazon, Google, and Microsoft, provide cloud services, and some cloud platforms, for example RightScale, support cooperation mechanisms that integrate individual clouds. Using such a cooperation mechanism across clouds, we can build a federated cloud. In this study, we adopt a federated cloud named Dcloud, which consists of clouds from NTHU, NTCU, CHU, PU, and NCHU. Dcloud applies GRE tunnels to connect the individual clouds into one virtual network domain. On Dcloud, this study proposes a HAV (Horizontal and Vertical) resource allocation algorithm to improve system performance and reduce communication. The study profiles the characteristics of jobs in advance and then creates virtual clusters across different clouds with system performance in mind. Three algorithms are proposed: the HAV algorithm, the SOS (Sum of Subset) algorithm, and the OP (OpenStack) algorithm. The SOS algorithm is a modified Sum-of-Subset algorithm for resource allocation, and the OP algorithm is a modified Nova module in OpenStack. We adopt the NAS Parallel Benchmarks (Embarrassingly Parallel (EP), Conjugate Gradient (CG), and Integer Sort (IS)) for performance comparison. Experimental results show that our approach can reduce communication overhead and improve resource utilization and throughput.
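A Sum-of-Subset style allocator, as the SOS algorithm's name suggests, picks a set of hosts whose spare capacity exactly covers a request. The sketch below is a generic subset-sum search, not the thesis's modified algorithm; the variable names and core counts are assumptions.

```python
# Hedged sketch of subset-sum resource allocation: choose host indices
# whose free cores sum exactly to the requested amount.

def sum_of_subset(capacities, request):
    """Return indices of capacities summing exactly to `request`, or None."""
    def search(i, remaining, chosen):
        if remaining == 0:
            return chosen
        if i == len(capacities) or remaining < 0:
            return None
        # Branch 1: take host i; branch 2: skip it.
        with_i = search(i + 1, remaining - capacities[i], chosen + [i])
        return with_i if with_i is not None else search(i + 1, remaining, chosen)
    return search(0, request, [])

free_cores = [4, 8, 2, 6]          # hypothetical spare cores per host
picked = sum_of_subset(free_cores, 10)
print(picked, sum(free_cores[i] for i in picked))
```

The exhaustive search is exponential in the number of hosts; a practical allocator would prune or fall back to a best-fit heuristic when no exact subset exists (the function returns `None` in that case).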
APA, Harvard, Vancouver, ISO, and other styles
26

Liu, Hsin-Han, and 劉欣翰. "OpenFlow Supporting Inter-Domain Virtual Cluster over federated clouds." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/92715328535854725874.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
Academic year 102 (2013–14)
To combine hardware and software resources with network functionality, network virtualization involves both resource virtualization and platform virtualization, and raises issues such as virtual clustering over federated clouds. The traffic patterns of virtual clustering over federated clouds can potentially be handled by a software-defined networking (SDN) approach such as OpenFlow (OF). Virtual clusters are interconnected logically by a virtual network spanning several physical machines. Existing solutions offer connectivity as a service, but they are designed for a single cloud. In this thesis, we implement an experimental scenario as a proof of concept for virtual machine clustering over federated clouds. Additional global rules are added to OpenStack's virtual bridges to maintain the inter-cloud connection and to reduce the complexity of the federated cloud architecture. We then evaluate the network architecture and the OF rule space on the tunnel bridges.
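The "global rules on virtual bridges" idea can be modeled as priority-ordered flow matching, which is how an OpenFlow table behaves. The sketch below is a toy model, not the thesis's rules; the field names, port names, and priorities are assumptions.

```python
# Illustrative model of OpenFlow-style matching on a tunnel bridge:
# the highest-priority rule whose match fields all agree with the packet wins.

def match(rule, pkt):
    return all(pkt.get(k) == v for k, v in rule["match"].items())

def apply_rules(rules, pkt):
    for rule in sorted(rules, key=lambda r: -r["priority"]):
        if match(rule, pkt):
            return rule["action"]
    return "drop"   # table-miss behavior in this toy model

rules = [
    # Hypothetical global rule: inter-cloud traffic goes out the GRE tunnel.
    {"priority": 10, "match": {"dst_net": "cloudB"}, "action": "output:gre0"},
    # Low-priority catch-all: normal L2 switching for local traffic.
    {"priority": 1,  "match": {},                    "action": "normal"},
]

print(apply_rules(rules, {"dst_net": "cloudB"}))
print(apply_rules(rules, {"dst_net": "local"}))
```

Keeping the inter-cloud rules few and global, as the abstract describes, limits how much of the OF rule space the tunnel bridges consume.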
APA, Harvard, Vancouver, ISO, and other styles
27

Lee, Yi-Fang, and 李宜芳. "Virtual Clusters for Data Stream Processing on Federated Clouds." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/mqax98.

Full text
Abstract:
Master's thesis
National Taichung University of Education
Department of Computer Science and Information Engineering
Academic year 103 (2014–15)
Recently, the Internet of Things (IoT) has grown rapidly, linking people to people, people to objects, and objects to objects through the Internet. In applications such as health care, finance, climate monitoring, and GPS, data are generated and collected through sensors. The development of the IoT produces large-scale, continuous data, also called Big Data. One challenge in the IoT is handling this high-volume stream of real-time data; data stream processing can provide insights into the underlying data patterns. In general, a system processes data in batches over a fixed-size window. Stream processing instead uses in-memory technology to accelerate processing and satisfy requirements for low latency and high feasibility. Because in-memory technology completes operations in memory, memory access may affect the performance of stream processing and lead to resource interference: when many tasks require resources, how to allocate tasks and improve performance becomes an important issue. This study proposes a resource scheduling approach, named Memory Access Critical Path Schedule (MACPS), for improving the performance of stream processing. The proposed approach exploits system performance by allocating virtual resources to stream processing tasks according to information generated by a profiling system. This work considers not only the efficiency of tasks on critical paths but also the effect of I/O interference, and MACPS takes resource access into account to satisfy the demands of stream processing. To demonstrate the improvement, we simulate single-cloud and multi-cloud execution environments with MACPS applied. Experimental results show the superiority of the proposed approach.
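The critical path that MACPS-style schedulers prioritize is the longest-cost chain through the task DAG, since it lower-bounds the end-to-end latency. The sketch below computes that quantity for a tiny hypothetical pipeline; the task names and costs are made up, and this is not the thesis's MACPS code.

```python
# Hedged sketch: length of the critical (longest) path through a task DAG.

def critical_path(tasks, deps):
    """tasks: {name: cost}; deps: {name: [predecessor names]}.
    Returns the longest total cost along any dependency chain."""
    memo = {}
    def finish(t):
        # Earliest finish time of t = its cost plus the latest predecessor.
        if t not in memo:
            memo[t] = tasks[t] + max((finish(p) for p in deps.get(t, [])),
                                     default=0)
        return memo[t]
    return max(finish(t) for t in tasks)

# Hypothetical stream pipeline: read -> {filter, join} -> sink.
tasks = {"read": 2, "filter": 3, "join": 5, "sink": 1}
deps = {"filter": ["read"], "join": ["read"], "sink": ["filter", "join"]}
print(critical_path(tasks, deps))
```

Here the chain read → join → sink (2 + 5 + 1) dominates, so a scheduler would give those tasks priority for memory and virtual resources.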
APA, Harvard, Vancouver, ISO, and other styles
28

Shie, Meng-Ru, and 謝孟儒. "A Distributed Scheduling Approach based on Game Theory in Federated Clouds." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/6vx8jk.

Full text
Abstract:
Master's thesis
National Taichung University of Education
Department of Computer Science and Information Engineering
Academic year 102 (2013–14)
Cloud computing is becoming increasingly mature, and companies gain competitiveness by adopting it. However, physical cloud resources are finite while demand for them is effectively unbounded, so demand may exceed supply. For this reason, the federated cloud is increasingly important. In a federated cloud, the member clouds are completely independent: cloud providers have their own pricing and scheduling strategies and behave selfishly. When a provider's resources are insufficient to satisfy a request, it can rent the missing resources from other providers with spare capacity; this behavior is called outsourcing. The federated cloud faces two challenges. The first is network delay, because the clouds may be located in different places. The second is resource competition, caused by the selfish policies of the completely independent clouds. This study tries to reduce the impact of both challenges. We propose Task Grouping and a Distributed Scheduling Approach (DSA). Task Grouping groups tasks according to their communication patterns after profiling the jobs. The DSA uses game theory to decide whether to accept outsourcing jobs, considering the remaining resources, the outsourcing requests, the next possible job, and the marginal cost. To evaluate Task Grouping, we run several benchmarks on Unicloud; we also simulate a large federated cloud to evaluate the DSA. The results demonstrate that the proposed approach can greatly reduce the impact of network delay and improve cloud utilization.
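At its simplest, a marginal-cost outsourcing decision accepts a request only when the offered price exceeds the cost of serving it and capacity remains. The sketch below is a deliberately simplified version of that idea, not the DSA itself; the parameters and numbers are assumptions, and the real approach also weighs the next possible job.

```python
# Illustrative decision rule (simplified from the DSA idea): a provider
# accepts an outsourcing request only if it is both feasible and profitable.

def accept_outsourcing(free_cores, req_cores, offered_price, marginal_cost):
    if req_cores > free_cores:
        return False                            # infeasible: not enough cores
    return offered_price > marginal_cost * req_cores   # profitable?

# 4 cores at marginal cost 3.0 each costs 12.0 to serve;
# an offer of 10.0 is rejected as unprofitable.
assert accept_outsourcing(free_cores=16, req_cores=4,
                          offered_price=10.0, marginal_cost=3.0) is False

# The same offer is accepted when the marginal cost drops to 1.5.
print(accept_outsourcing(16, 4, 10.0, 1.5))
```

A game-theoretic scheduler layers strategy on top of this rule, e.g. reserving capacity for a more lucrative job it expects to arrive next.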
APA, Harvard, Vancouver, ISO, and other styles
