Dissertations / Theses on the topic 'Cloud drive'

Consult the top 50 dissertations / theses for your research on the topic 'Cloud drive.'

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Gana, Ossama. "Google Drive un progetto sperimentale di Cloud Storage Forensic." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17712/.

Abstract:
Cloud storage forensics is a branch of digital forensics: a hybrid approach combining computer forensics and network forensics. The use of cloud storage to store data grows daily, because users can access their data securely from anywhere. However, the cloud also becomes attractive to those who intend to use it for more or less illicit purposes, given the possibility of "dispersing" data across a vast infrastructure. Forensic investigators face great difficulty in acquiring digital artifacts from the cloud, given the complex architecture behind cloud storage. This thesis proposes an automated process for acquiring data from the cloud. Current legislation does not consider the acquisition of this type of artifact; the law is very vague on the forensic acquisition of cloud storage and is better suited to acquisition from physical devices. This dissertation shows how the procedure I developed meets most of the requirements imposed by the law.
2

Tumino, Andrea. "Progettazione di un applicativo web-based per il backup dei dati di Google Drive." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17653/.

Abstract:
Description of a web application that offers a backup service for one's Google Drive, both full and customized. The thesis explains the requirements, architecture, and operation of the software, examining the technologies used for development and the difficulties encountered.
3

Barreto, Andres E. "API-Based Acquisition of Evidence from Cloud Storage Providers." ScholarWorks@UNO, 2015. http://scholarworks.uno.edu/td/2030.

Abstract:
Cloud computing and cloud storage services, in particular, pose a new challenge to digital forensic investigations. Currently, evidence acquisition for such services still follows the traditional approach of collecting artifacts on a client device. In this work, we show that such an approach not only requires substantial upfront investment in reverse engineering each service, but is also inherently incomplete, as it misses prior versions of the artifacts as well as cloud-only artifacts that do not have standard serialized representations on the client. We introduce the concept of API-based evidence acquisition for cloud services, which addresses these concerns by utilizing the officially supported API of the service. To demonstrate the utility of this approach, we present a proof-of-concept acquisition tool, kumodd, which can acquire evidence from four major cloud storage providers: Google Drive, Microsoft OneDrive, Dropbox, and Box. The implementation provides both command-line and web user interfaces, and can be readily incorporated into established forensic processes.
4

Kis, Matej. "Analýza současných cloudových řešení." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-220533.

Abstract:
This thesis describes existing cloud storage systems. The prerequisites for developing cloud and distributed systems are presented, and current storage systems such as Dropbox, iCloud and Google Drive are described. The description focuses mainly on the underlying protocols and draws conclusions about their use in cloud storage systems. The practical part of this work focuses on creating two labs, which will be incorporated into the teaching syllabus of the Projecting, Administration and Security of Computer Networks course. The first lab focuses on implementing one's own cloud service. In the second lab, students' attention will concentrate on the interception of communication secured with the SSL protocol.
5

Henziger, Eric. "The Cost of Confidentiality in Cloud Storage." Thesis, Linköpings universitet, Databas och informationsteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148907.

Abstract:
Cloud storage services allow users to store and access data in a secure and flexible manner. In recent years, cloud storage services have seen rapid growth in popularity as well as technological progress, and hundreds of millions of users use these services to store thousands of petabytes of data. Additionally, the synchronization of data that is essential for these types of services accounts for a significant share of total internet traffic. In this thesis, seven cloud storage applications were tested under controlled experiments during the synchronization process to determine feature support and measure performance metrics. Special focus was put on comparing applications that perform client-side encryption of user data to applications that do not. The results show great variation in feature support and performance between the different applications, and that client-side encryption introduces some limitations to other features but does not necessarily impact performance negatively. The results provide insights and enhance the understanding of the advantages and disadvantages that come with certain design choices of cloud storage applications. These insights will help future technological development of cloud storage services.
6

Hellbe, Simon, and Peter Leung. "DIGITAL TRANSFORMATION : HOW APIS DRIVE BUSINESS MODEL CHANGE AND INNOVATION." Thesis, Linköpings universitet, Industriell ekonomi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119506.

Abstract:
Over the years, information technology has created opportunities to improve and extend businesses and to start conducting business in new ways. With the evolution of IT, all businesses and industries are becoming increasingly digitized. This process, or coevolution, of IT and business coming together is called digital transformation. One of the recent trends in this digital transformation is the use of application programming interfaces (APIs). APIs are standardized digital communication interfaces used for communication and exchange of information between systems, services and devices (such as computers, smartphones and connected machines). API communication is one of the foundational building blocks of recent disruptive technology trends such as mobile and cloud computing. The purpose of this study is to gain an understanding of the business impact created in digital transformation related to the use of APIs. To investigate this novel area, an exploratory study is performed in which a frame of reference with an exploratory framework is created based on established academic literature. The exploratory framework consists of three main parts covering the research questions: Business Drivers, Business Model Change & Innovation, and Challenges & Limitations related to API-enabled digital transformation. The framework is used to gather two types of empirical data: interviews (primary data) and contemporary reports (secondary data). Interviews are performed with API-utilizing companies, consulting firms and IT solution providers, and contemporary reports are published by consulting and technology research and advisory firms. Two main business drivers are identified in the study. The first is Understanding & Satisfying Customer Needs, which derives from companies experiencing stronger and changing demands for automated, personalized value-adding services. This requires a higher degree of integration across channels and organizations.
The second driver is Business Agility, which derives from higher requirements on adapting to changing environments while maintaining operational efficiency. Cost Reduction is also mentioned as a third, secondary driver, appearing as a positive side-effect in combination with the other drivers. The identified impact on business models is that business model innovation happens mostly in the front end of the business model, towards customers. Several examples also exist of purely API-enabled businesses that sell services or manage information exchanges over APIs. The challenges and limitations identified are mostly classic challenges of using IT in business rather than challenges specific to the use of APIs; the general consensus is that IT and business need to become more integrated, and that strategy and governance for API initiatives need to be established.
7

Malmborg, Rasmus, and Frank Leonard Ödalen. "Jämförelse mellan populära molnlagringstjänster : Ur ett hastighetsperspektiv." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-37265.

Abstract:
Cloud storage services are seeing increased usage and form a growing market. This paper focuses on examining various cloud storage services from a speed perspective. When small files are exchanged between client and server, the speed of the service is of little importance; for larger transfers, however, the speed of the service plays a more important role. Regular speed measurements were carried out against the most popular cloud storage services. The tests were performed from Sweden and the USA, over several days and at different times of day, to determine whether speed differences exist. The results show significant differences in speed between Sweden and the United States. In Sweden, Mega and Google Drive had the highest average speeds. Within the United States, Google Drive had the highest average speed, but the variability between the services was not as great as in Sweden. In the results between different time periods, it was more difficult to discern a pattern, with the exception of Google Drive in Sweden, which consistently worked best during the night/morning. Mega also worked best during the night.
8

Stanfield, Allison R. "The authentication of electronic evidence." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/93021/1/Allison_Stanfield_Thesis.pdf.

Abstract:
This thesis examines whether the rules of evidence, which were developed around paper documents over centuries, are adequate for the authentication of electronic evidence. The history of documentary evidence is examined, and the nature of electronic evidence is explored, particularly recent types of electronic evidence such as social media and 'the Cloud'. The old rules are then critically applied to the varied types of electronic evidence to determine whether or not they are indeed adequate.
9

Cavallin, Riccardo. "Approccio blockchain per la gestione dei dati personali." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21604/.

Abstract:
This work presents blockchain technology, its functionality and its limits. In particular, the Ethereum, RadixDLT and IOTA platforms are presented. The implications of the GDPR regulation for this technology are discussed with respect to the construction of a data marketplace based on Ethereum. After presenting the architecture of a marketplace, the performance of various online storage services for archiving personal data is analyzed.
10

Semmler, Jiří. "Nástroj pro podporu spolupráce při agilním modelování a vývoji software." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363769.

Abstract:
The aim of this thesis is to define and describe specific challenges occurring at the crossroads between project management and knowledge management, with a focus on agile software development and agile modeling. Based on the identified and verified problem, it attempts to find an existing solution. It then analyses, specifies and designs its own solution. The focus on covering three different perspectives makes this thesis unique. After the design process, the technologies are chosen, and the detailed implementation of the application is described for each technology used. The third-party technologies are connected within the application; this connection creates extra added value for the application and its users in the processes of agile software development and agile modeling.
11

Kourtesis, Dimitrios. "Policy-driven governance in cloud service ecosystems." Thesis, University of Sheffield, 2016. http://etheses.whiterose.ac.uk/17793/.

Abstract:
Cloud application development platforms facilitate new models of software co-development and forge environments best characterised as cloud service ecosystems. The value of those ecosystems increases exponentially with the addition of more users and third-party services. Growth however breeds complexity and puts reliability at risk, requiring all stakeholders to exercise control over changes in the ecosystem that may affect them. This is a challenge of governance. From the viewpoint of the ecosystem coordinator, governance is about preventing negative ripple effects from new software added to the platform. From the viewpoint of third-party developers and end-users, governance is about ensuring that the cloud services they consume or deliver comply with requirements on a continuous basis. To facilitate different forms of governance in a cloud service ecosystem we need governance support systems that achieve separation of concerns between the roles of policy provider, governed resource provider and policy evaluator. This calls for better modularisation of the governance support system architecture, decoupling governance policies from policy evaluation engines and governed resources. It also calls for an improved approach to policy engineering with increased automation and efficient exchange of governance policies and related data between ecosystem partners. The thesis supported by this research is that governance support systems that satisfy such requirements are both feasible and useful to develop through a framework that integrates Semantic Web technologies and Linked Data principles. 
The PROBE framework presented in this dissertation comprises four components: (1) a governance ontology serving as shared ecosystem vocabulary for policies and resources; (2) a method for the definition of governance policies; (3) a method for sharing descriptions of governed resources between ecosystem partners; (4) a method for evaluating governance policies against descriptions of governed ecosystem resources. The feasibility and usefulness of PROBE are demonstrated with the help of an industrial case study on cloud service ecosystem governance.
12

García, García Andrés. "SLA-Driven Cloud Computing Domain Representation and Management." Doctoral thesis, Universitat Politècnica de València, 2014. http://hdl.handle.net/10251/36579.

Abstract:
The assurance of Quality of Service (QoS) to applications, although identified as a key feature long ago [1], is one of the fundamental challenges that remain unsolved. In the Cloud Computing context, Quality of Service is defined as the measure of compliance with certain user requirements in the delivery of a cloud resource, such as CPU or memory load for a virtual machine, or more abstract, higher-level concepts such as response time or availability. Several research groups, both from academia and industry, have started working on describing the QoS levels that define the conditions under which a service needs to be delivered, as well as on developing the necessary means to effectively manage and evaluate the state of these conditions. The authors of [2] propose Service Level Agreements (SLAs) as the vehicle for the definition of QoS guarantees and for the provision and management of resources. A Service Level Agreement (SLA) is a formal contract between providers and consumers which defines the quality of service, the obligations and the guarantees in the delivery of a specific good. In the context of Cloud computing, SLAs are considered to be machine-readable documents which are automatically managed by the provider's platform. SLAs need to be dynamically adapted to the variable conditions of resources and applications. In a multilayer architecture, different parts of an SLA may refer to different resources. SLAs may therefore express complex relationships between entities in a changing environment, and be applied to resource selection to implement intelligent scheduling algorithms. SLAs are therefore widely regarded as a key feature for the future development of Cloud platforms. However, the application of SLAs to Grid and Cloud systems leaves many research lines open. One of these challenges, the modeling of the landscape, lies at the core of the objectives of this Ph.D. thesis.
García García, A. (2014). SLA-Driven Cloud Computing Domain Representation and Management [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/36579
13

Tziakouris, Giannis. "Economics-driven approach for self-securing assets in cloud." Thesis, University of Birmingham, 2017. http://etheses.bham.ac.uk//id/eprint/7868/.

Abstract:
This thesis proposes the engineering of an elastic self-adaptive security solution for the Cloud that treats assets as independent entities with a need for customised, ad-hoc security. The solution exploits agent-based, market-inspired methodologies and learning approaches for managing the changing security requirements of assets, considering the shared and on-demand nature of services and resources while catering for monetary and computational constraints. The use of auction procedures allows the proposed framework to deal with the scale of the problem and the trade-offs that can arise between users and Cloud service provider(s), while the use of a learning technique enables the framework to operate in a proactive, automated fashion and to arrive at more efficient bidding plans informed by historical data. A variant of the proposed framework, grounded in a simulated university application environment, was developed to evaluate the applicability and effectiveness of this solution. As the proposed solution is grounded in market methods, this thesis is also concerned with asserting the dependability of market mechanisms. We follow an experimentally driven approach to demonstrate the deficiency of existing market-oriented solutions in facing common market-specific security threats, and we provide candidate lightweight defensive mechanisms for securing them against these attacks.
14

Ranabahu, Ajith Harshana. "Abstraction Driven Application and Data Portability in Cloud Computing." Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1365271464.

15

Giorgi, Gianmarco <1991&gt. "Cloud technologies and data-driven algorithms for interferometric sensors." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10106/1/giorgi_gianmarco_tesi.pdf.

Abstract:
The PhD project developed over these years began by studying the motivations behind Industry 4.0 and the most popular data-driven algorithms. The study then focused on the main success factors for realizing a connected product, and continued with a more detailed study of the cloud. This led me to analyze different components offered by the main providers and to implement different solutions. The different solutions, at both the architectural and the provider level, allowed us to verify the differences between implementations and their associated costs. The study of the cloud concluded with an exhaustive cost analysis that clearly highlights the gains or losses associated with the choice of a provider as a function of the traffic exchanged. The work then turns to the signal-processing part: the operating principle of the interferometric sensor is first presented, followed by an analysis of the main issues related to the signals produced by the instrument. The main strategies for solving these problems are then analyzed, together with the proposed solution, explaining how that solution was reached through analysis of the main criticalities of the signal. The thesis concludes by analyzing the problems of implementing the algorithmic part within the real sensor, with its limited processing capacity, and the strategies undertaken to mitigate the impact of these issues on the final implementation. The analysis of the implementation is accompanied by data about the timing with which the algorithm can be used and its main limitations.
16

Chen, Ziwei. "Workflow Management Service based on an Event-driven Computational Cloud." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-141696.

Abstract:
The event-driven computing paradigm, also known as trigger computing, is widely used in computer technology. Computer systems, such as database systems, introduce trigger mechanisms to reduce repetitive human intervention. With the growing complexity of industrial use case requirements, independent and isolated triggers cannot fulfil the demands any more. Fortunately, an independent trigger can be triggered by the result produced by other triggered actions, which enables the modelling of complex use cases, where the chains or graphs of triggers are called workflows. Workflow construction and manipulation therefore become a must for implementing complex use cases. As the development platform of this study, VISION Cloud is a computational storage system that executes small programs called storlets as independent computation units in the storage. Similar to the trigger mechanism in database systems, storlets are triggered by specific events and then execute computations. As a result, one storlet can also be triggered by the result produced by other storlets; these are called connections between storlets. Due to the growing complexity of use case requirements, an urgent demand is to have storlet workflow management supported in the VISION system. Furthermore, because of the connections between storlets in VISION, problems such as non-terminating triggering and unexpected overwriting appear as side effects. This study develops a management service that consists of an analysis engine and a multi-level visualization interface. The analysis engine checks the connections between storlets by utilizing automatic theorem proving and deterministic finite automata. The involved storlets and their connections are displayed as graphs via the multi-level visualization interface. Furthermore, the aforementioned connection problems are detected with graph-theory algorithms.
Finally, experimental results with practical use case examples demonstrate the correctness and comprehensiveness of the service. Algorithm performance and possible optimization are also discussed. They lead the way for future work to create a portable framework of event-driven workflow management services.
17

Yanggratoke, Rerngvit. "Data-driven Performance Prediction and Resource Allocation for Cloud Services." Doctoral thesis, KTH, Kommunikationsnät, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-184601.

Abstract:
Cloud services, which provide online entertainment, enterprise resource management, tax filing, etc., are becoming essential for consumers, businesses, and governments. The key functionalities of such services are provided by backend systems in data centers. This thesis focuses on three fundamental problems related to the management of backend systems. We address these problems using data-driven approaches: triggering dynamic allocation by changes in the environment, obtaining configuration parameters from measurements, and learning from observations. The first problem relates to resource allocation for large clouds with potentially hundreds of thousands of machines and services. We developed and evaluated a generic gossip protocol for distributed resource allocation. Extensive simulation studies suggest that the quality of the allocation is independent of the system size for the management objectives considered. The second problem focuses on performance modeling of a distributed key-value store, and we study specifically the Spotify backend for streaming music. We developed analytical models for system capacity under different data allocation policies and for response time distribution. We evaluated the models by comparing model predictions with measurements from our lab testbed and from the Spotify operational environment. We found the prediction error to be below 12% for all investigated scenarios. The third problem relates to real-time prediction of service metrics, which we address through statistical learning. Service metrics are learned from observing device and network statistics. We performed experiments on a server cluster running video streaming and key-value store services. We showed that feature set reduction significantly improves the prediction accuracy, while simultaneously reducing model computation time. Finally, we designed and implemented a real-time analytics engine, which produces model predictions through online learning.
18

Zia-ur-Rehman. "A framework for QoS driven user-side cloud service management." Thesis, Curtin University, 2014. http://hdl.handle.net/20.500.11937/742.

Abstract:
This thesis presents a comprehensive framework that assists the cloud service user in making cloud service management decisions, such as service selection and migration. The proposed framework utilizes the QoS history of the available services for QoS forecasting and multi-criteria decision making. It integrates all the necessary processes, such as QoS monitoring, forecasting, and service comparison and ranking, to recommend the best and optimal decision to the user.
19

Sathyamoorthy, Peramanathan. "Enabling Energy-Efficient Data Communication with Participatory Sensing and Mobile Cloud : Cloud-assisted crowd-sourced data-driven optimization." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-274875.

Abstract:
This thesis proposes a novel power management solution for resource-constrained devices in the context of the Internet of Things (IoT). We focus on smartphones in the IoT, as they are increasingly popular and equipped with strong sensing capabilities. Smartphones have complex and chaotic asynchronous power consumption incurred by heterogeneous components, including their onboard sensors. Their interaction with the cloud can support computation offloading and remote data access via the network. In this work, we aim at monitoring the power consumption behaviour of smartphones and profiling individual applications and the platform to make better power management decisions. A solution is to design an architecture of cloud orchestration that predicts the behaviour of smart devices with respect to time, location, and context. We design and implement this architecture to provide an integrated cloud-based energy monitoring service. This service enables the monitoring of power consumption on smartphones and supports data analysis on the massive data logs collected by a large number of users.
20

Krotsiani, M. "Model driven certification of Cloud service security based on continuous monitoring." Thesis, City University London, 2016. http://openaccess.city.ac.uk/15044/.

Abstract:
Cloud Computing technology offers an advanced approach for the provision of infrastructure, platform and software services without the need of extensive cost of owning, operating or maintaining the computational infrastructures required. However, despite being cost effective, this technology has raised concerns regarding the security, privacy and compliance of data or services offered through cloud systems. This is mainly due to the lack of transparency of services to the consumers, or due to the fact that service providers are unwilling to take full responsibility for the security of services that they offer through cloud systems, and accept liability for security breaches [18]. In such circumstances, there is a trust deficiency that needs to be addressed. The potential of certification as a means of addressing the lack of trust regarding the security of different types of services, including the cloud, has been widely recognised [149]. However, the recognition of this potential has not led to a wide adoption, as it was expected. The reason could be that certification has traditionally been carried out through standards and certification schemes (e.g., ISO27001 [149], ISO27002 [149] and Common Criteria [65]), which involve predominantly manual systems for security auditing, testing and inspection processes. Such processes tend to be lengthy and have a significant financial cost, which often prevents small technology vendors from adopting it [87]. In this thesis, we present an automated approach for cloud service certification, where the evidence is gathered through continuous monitoring. 
This approach can be used to: (a) define and execute automatically certification models, to continuously acquire and analyse evidence regarding the provision of services on cloud infrastructures through continuous monitoring; (b) use this evidence to assess whether the provision is compliant with required security properties; and (c) generate and manage digital certificates to confirm the compliance of services with specific security properties.
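A minimal sketch of the evidence-assessment step described above: a certificate is issued only if the fraction of compliant monitoring events reaches a required threshold (the event encoding and threshold are invented for the example, not taken from the thesis):

```python
def assess(events, required_ratio):
    """Continuous-monitoring check: the fraction of compliant monitoring
    events must reach the threshold before a certificate is (re)issued."""
    ok = sum(events) / len(events)
    return ok >= required_ratio

evidence = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]  # 1 = security check passed
certified = assess(evidence, 0.9)  # 9/10 = 0.9 -> certificate issued
```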
APA, Harvard, Vancouver, ISO, and other styles
21

Chinenyeze, Samuel Jaachimma. "Mango : a model-driven approach to engineering green Mobile Cloud Applications." Thesis, Edinburgh Napier University, 2017. http://researchrepository.napier.ac.uk/Output/976572.

Full text
Abstract:
With the resource constrained nature of mobile devices and the resource abundant offerings of the cloud, several promising optimisation techniques have been proposed by the green computing research community. Prominent techniques and unique methods have been developed to offload resource/computation intensive tasks from mobile devices to the cloud. Most of the existing offloading techniques can only be applied to legacy mobile applications as they are motivated by existing systems. Consequently, they are realised with custom runtimes which incur overhead on the application. Moreover, existing approaches which can be applied to the software development phase, are difficult to implement (based on manual process) and also fall short of overall (mobile to cloud) efficiency in software quality attributes or awareness of full-tier (mobile to cloud) implications. To address the above issues, the thesis proposes a model-driven architecture for integration of software quality with green optimisation in Mobile Cloud Applications (MCAs), abbreviated as Mango architecture. The core aim of the architecture is to present an approach which easily integrates software quality attributes (SQAs) with the green optimisation objective of Mobile Cloud Computing (MCC). Also, as MCA is an application domain which spans through the mobile and cloud tiers; the Mango architecture, therefore, takes into account the specification of SQAs across the mobile and cloud tiers, for overall efficiency. Furthermore, as a model-driven architecture, models can be built for computation intensive tasks and their SQAs, which in turn drives the development – for development efficiency. Thus, a modelling framework (called Mosaic) and a full-tier test framework (called Beftigre) were proposed to automate the architecture derivation and demonstrate the efficiency of Mango approach.
Using real-world scenarios and applications, Mango has been demonstrated to enhance the MCA development process while achieving overall efficiency in terms of SQAs (including mobile performance and energy usage) compared to existing counterparts.
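The classic energy trade-off behind the offloading decisions mentioned above can be sketched as follows: offload when the energy to compute locally exceeds the energy to transmit the input data (the formula is the standard textbook model, not Mango's; all figures are hypothetical):

```python
def should_offload(cycles, cpu_power_w, cpu_speed_hz, data_bytes, bandwidth_bps, radio_power_w):
    """Offload when local computation energy exceeds the energy
    needed to ship the task's data to the cloud."""
    e_local = cpu_power_w * cycles / cpu_speed_hz            # joules to compute locally
    e_tx = radio_power_w * data_bytes * 8 / bandwidth_bps    # joules to transmit the data
    return e_tx < e_local

# Heavy computation, modest data: offloading pays off
offload_heavy = should_offload(2e9, 2.0, 1e9, 1e6, 1e7, 1.0)   # True
# Light computation, same data: cheaper to run locally
offload_light = should_offload(1e8, 2.0, 1e9, 1e6, 1e7, 1.0)   # False
```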
APA, Harvard, Vancouver, ISO, and other styles
22

Engvall, Maja. "Data-driven cost management for a cloud-based service across autonomous teams." Thesis, Uppsala universitet, Avdelningen för datalogi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-326185.

Full text
Abstract:
Spotify started to use the cloud-based data warehouse BigQuery in 2016 with a pay-as-you-go business model. Since then, usage has increased rapidly in volume and in number of users across the organisation, a result of its ease of use compared to previous data warehouse solutions. The technology procurement team lacks an overview of how BigQuery is used across Spotify, and a strategy for maintaining an environment where users make cost-informed decisions when designing queries and creating tables. Incidents resulting in unexpectedly high bills are currently handled by a capacity analyst using billing data, which lacks the granularity to map costs to individual BigQuery users. The objective of this research is to provide recommendations on how audit data can enable a data-driven, cost-effective environment for BigQuery across the matrix-formed engineering organisation at Spotify. First, an overview of current usage patterns is presented based on audit data, modelled with regard to volume, complexity and utilization. Different patterns are identified using K-means clustering, including high-volume-consuming squads and underutilized tables. Secondly, recommendations on transparency of audit data for cost-effectiveness are based on insights from the cluster analysis, interviews and characteristics of the organisation structure. Recommendations include transparency of data consumption to producers, to prevent paying for unused resources, and transparency of usage patterns to consumers, to avoid unexpected bills. Usage growth is recommended to be presented to the technology procurement squad, which enables better understanding and mitigates challenges in cost forecasting and control.
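A minimal sketch of the K-means step applied to usage features, assuming (invented) per-squad records of terabytes scanned and average slot usage; real audit data would have many more dimensions:

```python
def kmeans(points, k, iters=20):
    """Plain k-means over usage feature vectors, e.g. (TB scanned, avg slot count)."""
    # Deterministic "spread" initialisation keeps the sketch reproducible
    centroids = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        centroids = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Invented audit records: (TB scanned per day, avg concurrent slots) per squad
usage = [(0.1, 5), (0.2, 6), (0.15, 4), (9.0, 80), (11.0, 90), (10.5, 85)]
centroids, clusters = kmeans(usage, k=2)  # separates light from heavy consumers
```

In practice one would use a library implementation (e.g. scikit-learn) with normalised features; this only shows the mechanism.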
APA, Harvard, Vancouver, ISO, and other styles
23

Le, Nhan Tam. "Model-Driven Software Engineering for Virtual Machine Images Provisioning in Cloud Computing." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00923811.

Full text
Abstract:
The Infrastructure-as-a-Service (IaaS) layer of cloud computing offers an on-demand deployment service for virtual machine images (VMIs). This service provides a flexible platform for cloud users to develop, deploy and test their applications. Deploying a VMI typically involves booting the image and installing and configuring software packages. In the traditional approach, when a cloud user requests a new platform, the cloud provider selects a suitable template image to clone and deploy on the cloud nodes. The template image contains pre-installed software packages. If it does not match the requirements, it is customised, or a new image is created from scratch to fit the request. In the context of cloud service management, the traditional approach faces the difficult issues of handling the complexity of interdependencies between software packages, and of scaling and maintaining the deployed image at runtime. Cloud providers wish to automate this process to improve the performance of the VMI provisioning process and to give cloud users more flexibility in selecting or creating suitable images, while maximising the benefits for providers in terms of time, resources and operational cost. This thesis proposes an approach, called the Model-Driven approach, for managing software package interdependencies, for modelling and automating the VMI deployment process, and for supporting VMI reconfiguration at runtime.
We address in particular the following challenges: (1) modelling the variability of virtual machine image configurations; (2) reducing the amount of data transferred across the network; (3) optimising the energy consumption of virtual machines; (4) ease of use for cloud users; (5) automating the deployment of VMIs; (6) supporting scaling and reconfiguration of VMIs at runtime; and (7) handling complex VMI deployment topologies. In our approach, we use model-driven engineering techniques to model abstract representations of VMI configurations and of the VMI deployment and reconfiguration processes. We treat VMIs as a product line and use feature models to represent VMI configurations. We also define the deployment and reconfiguration processes and their factors (e.g. virtual machine images, software packages, platform, deployment topology, etc.) as models. Moreover, the Model-Driven approach relies on high-level abstractions of VMI configuration and deployment to make virtual image management in the provisioning process more flexible and easier than in traditional approaches.
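The thesis represents VMI configurations as feature models of a product line. A toy validity check over such a configuration might look like this (package names, dependencies and exclusions are all invented for the sketch):

```python
# Invented VMI feature model: package -> packages it requires
requires = {"web_app": {"python", "nginx"}, "nginx": set(), "python": set(), "mysql": set()}
excludes = [("nginx", "apache")]  # mutually exclusive packages

def valid_configuration(selection):
    """A VMI configuration is valid when every selected package's
    dependencies are also selected and no exclusion pair co-occurs."""
    deps_ok = all(requires.get(pkg, set()) <= selection for pkg in selection)
    excl_ok = all(not ({a, b} <= selection) for a, b in excludes)
    return deps_ok and excl_ok

ok = valid_configuration({"web_app", "python", "nginx"})  # dependencies satisfied
bad = valid_configuration({"web_app", "python"})          # nginx missing
```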
APA, Harvard, Vancouver, ISO, and other styles
24

Pereira, Rosangela de Fátima. "A data-driven solution for root cause analysis in cloud computing environments." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-03032017-082237/.

Full text
Abstract:
Failure analysis and resolution in cloud-computing environments is a highly important issue, its primary motivation being the mitigation of the impact of such failures on applications hosted in these environments. Although there are advances in the immediate detection of failures, there is a lack of research on root cause analysis of failures in cloud computing. In this process, failures are tracked to analyze their causal factor. This practice allows cloud operators to act through a more effective process in preventing failures, reducing the number of recurring failures. Although this practice is commonly performed through human intervention, based on the expertise of professionals, the complexity of cloud-computing environments, coupled with the large volume of data from log records generated in these environments and the wide interdependence between system components, has made manual analysis impractical. Therefore, scalable solutions are needed to automate the root cause analysis process in cloud-computing environments, allowing the analysis of large data sets with satisfactory performance. Based on these requirements, this thesis presents a data-driven solution for root cause analysis in cloud-computing environments. The proposed solution includes the required functionalities for the collection, processing and analysis of data, as well as a method based on Bayesian networks for the automatic identification of root causes. The validation of the proposal is accomplished through a proof of concept using OpenStack, a framework for cloud-computing infrastructure, and Hadoop, a framework for distributed processing of large data volumes.
The tests presented satisfactory performance, and the developed model correctly classified the root causes with a low rate of false positives.
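The Bayesian reasoning behind such automatic root-cause identification reduces, in the simplest case, to Bayes' rule over candidate causes. A minimal sketch with made-up priors and likelihoods (the cause names and probabilities are illustrative, not from the thesis):

```python
# Invented priors P(cause) and likelihoods P(observed log signature | cause)
priors = {"disk_full": 0.2, "net_partition": 0.3, "oom": 0.5}
likelihood = {"disk_full": 0.70, "net_partition": 0.05, "oom": 0.10}

def posterior(priors, likelihood):
    """Bayes' rule: P(cause | evidence) is proportional to P(evidence | cause) P(cause)."""
    joint = {c: priors[c] * likelihood[c] for c in priors}
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}

post = posterior(priors, likelihood)
best = max(post, key=post.get)  # most probable root cause given the evidence
```

A full Bayesian network generalises this to many interdependent variables, but the update rule per node is the same.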
APA, Harvard, Vancouver, ISO, and other styles
25

Albatli, Abdulaziz Mohammed N. "Provenance-driven diagnostic framework for task evictions mitigating strategy in cloud computing." Thesis, University of Leeds, 2017. http://etheses.whiterose.ac.uk/17170/.

Full text
Abstract:
Cloud computing is an evolving paradigm. It delivers virtualized, scalable and elastic resources (e.g. CPU, memory) over a network (e.g. the Internet) from data centres to users (e.g. individuals, enterprises, governments). Applications, platforms, and infrastructures are Cloud services that users can access. Clouds enable users to run highly complex operations to satisfy computation needs through resource virtualization. Virtualization is a method of running a number of virtual machines (VMs) on a single physical server. However, VMs are not a necessity in Clouds. Cloud providers tend to overcommit resources, aiming to leverage unused capacity and maximize profits. This over-commitment can overload the actual physical machine, which lowers performance or leads to the failure of tasks due to a lack of resources (i.e. CPU or RAM), and consequently to SLA violations. There are a number of different strategies to mitigate overload, one of which is VM task eviction. The ambition of this research is to adapt a provenance model, PROV, to help understand the historical usage of a Cloud system and the components that contributed to the overload, so that the causes of task eviction can be identified for future prevention. A novel provenance-driven diagnostic framework is proposed. By studying Google's 29-day Cloud dataset, the PROV model was extended to PROV-TE, which underpinned a number of diagnostic algorithms for identifying tasks evicted due to specific causes. The framework was implemented and tested against the Google dataset. To further evaluate the framework, a simulation tool, SEED, was used to replicate task eviction behaviour with the specifications of Google Cloud and Amazon EC2. The framework, specifically the diagnostic algorithms, was then applied to audit the causes and to identify the relevant evicted tasks. The results were then analysed using precision and recall measures.
The average precision and recall of the diagnostic algorithms are 83% and 90%, respectively.
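The precision and recall evaluation used above can be sketched as set operations over diagnosed versus actual evicted-task IDs (the IDs below are invented; the 5/6 figure happens to match the reported ~83% precision):

```python
def precision_recall(predicted, actual):
    """Precision and recall of diagnosed task IDs against ground truth."""
    predicted, actual = set(predicted), set(actual)
    tp = len(predicted & actual)                       # correctly flagged tasks
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

# Hypothetical task IDs flagged by a diagnostic algorithm vs. the trace ground truth
p, r = precision_recall({1, 2, 3, 4, 5, 9}, {1, 2, 3, 4, 5, 6})
```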
APA, Harvard, Vancouver, ISO, and other styles
26

Flatt, Taylor. "CrowdCloud: Combining Crowdsourcing with Cloud Computing for SLO Driven Big Data Analysis." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/theses/2234.

Full text
Abstract:
The evolution of structured data from simple rows and columns on a spreadsheet to more complex unstructured data such as tweets, videos, voice, and others has resulted in a need for more adaptive analytical platforms. It is estimated that upwards of 80% of data on the Internet today is unstructured, and there is a drastic need for crowdsourcing platforms to perform better in the wake of this tsunami of data. We investigated the employment of a monitoring service which would allow the system to take corrective action in the event that results were trending away from meeting the accuracy, budget, and time SLOs. Initial implementation and system validation have shown that taking corrective action generally leads to a better success rate in reaching the SLOs. A system which can dynamically adjust internal parameters in order to perform better can lead to more harmonious interactions between humans and machine algorithms and to more efficient use of resources.
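The corrective-action idea can be sketched as a simple control rule: raise task redundancy when measured accuracy trends below the SLO and budget remains, relax it when the SLO is comfortably exceeded (parameter names and the 10% slack are invented for the sketch):

```python
def corrective_action(progress, slo, budget_left, redundancy):
    """Adjust crowd-task redundancy (workers per item) based on SLO trend."""
    if progress < slo and budget_left > 0:
        return redundancy + 1          # behind on accuracy: buy more answers
    if progress > slo * 1.1:
        return max(1, redundancy - 1)  # comfortably ahead: spend less
    return redundancy                  # on track: leave parameters alone

# Behind the 0.9 accuracy SLO with budget left -> add a worker per item
new_r = corrective_action(progress=0.7, slo=0.9, budget_left=10, redundancy=3)
```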
APA, Harvard, Vancouver, ISO, and other styles
27

Dhanoa, H. "The pressure-driven fragmentation of clouds at high redshift." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1419505/.

Full text
Abstract:
Understanding the role of star formation and its feedback effects at high redshift is extremely important, as this greatly influences the nature of the first galaxies. This knowledge will also allow us to resolve the formation conditions of hyper-metal-poor stars such as SDSS J102915+172927 (Caffau et al. 2011). This star is thought to be the first 'truly' low-metallicity star, as it possesses a total metallicity between 10^-5 and 10^-7 Z⊙ (Caffau et al. 2012). Hence its formation was probably triggered by a single primordial supernova (SN) event. Fragmentation studies that only include metal-line cooling cannot reproduce the conditions in which such a star could form (Klessen et al. 2012). Therefore it is critical to simulate the physical processes that occur on the small scale as reliably as possible, as they impact large-scale dynamics. At present, early-universe supernova shock models only include non-equilibrium chemistry and its associated cooling for temperatures below 10^4 K (Machida et al. 2005; Kitayama & Yoshida 2005; Nagakura et al. 2009; Chiaki et al. 2013). The metal-line cooling is often calculated separately assuming collisional equilibrium. If we want to obtain realistic results, it is important to incorporate a complete chemistry (which includes metals, molecules and dust) and therefore evaluate the non-equilibrium cooling that occurs at all temperatures. In Chapter 2, we first try to understand the chemistry that would occur in a low-metallicity gas. We investigate the chemical evolution of a metal-free cloud that has been mixed with ejecta from a single supernova. The very first stars would have been massive, and simulations predict a range of masses (Bromm & Yoshida 2011). The initial mass of the star dictates the type of supernova explosion it will undergo.
As each type of SN produces a different elemental yield, we would like to ascertain whether it is possible to constrain molecular tracers of progenitor mass from a primordial cloud that interacted with that particular SN ejecta. A metal-free chemical network with its associated cooling is coupled to a hydrodynamics code in Chapter 3. Previous studies (Machida et al. 2005; Kitayama & Yoshida 2005; Nagakura et al. 2009; Chiaki et al. 2013) have focused on the fragmentation of the shell that forms as an early-universe supernova remnant interacts with an interstellar medium of uniform density. Our model has improved upon these studies by modelling a multiphase medium with multidimensional simulations, with the goal of investigating the shock-driven fragmentation of a metal-free clump. We also investigate the effect of cosmic rays, CMB ionisation and deuterium chemistry on the clumping and fragmentation of a neutral clump. Vasiliev et al. (2008) highlighted an important link between the formation of extremely metal-poor stars and the radial distribution of primordial gas within a first galaxy, prior to the supernova explosion. This distribution is heavily dependent on the size of the metal-free star and its HII region prior to the explosion. Hence in Chapter 4 we extend our metal-free model by simulating the formation of an HII region around a 40 M⊙ star in a number of gas clouds with differing density profiles. As the explosion mechanism for this star is not well understood, we explore a range of explosion energies and their impact on the compression and fragmentation of the clump. The impact of metal and dust cooling on the fragmentation of low-metallicity gas has been highlighted by a number of studies (e.g. Bromm & Loeb 2003; Santoro & Shull 2006; Omukai et al. 2005; Schneider et al. 2012). In Chapter 5 we consider the effect of cooling from metals, metal-bearing molecules and dust. The nature and production of high-redshift dust is not well constrained.
Assuming the dust-to-gas ratio scales with metallicity, a simple dust model is implemented in which cooling induced by gas-grain collisions is evaluated at all temperatures. The observational properties of dust and the physical consequences of its presence in the interstellar medium are extremely well known and well documented (Draine 2003). However, its composition, structure and size distribution are still subjects of much discussion. In Chapter 6 we have carried out an investigation of the chemical evolution of gas in different carbon-rich circumstellar environments. We pay careful attention to the accurate calculation of the molecular photoreaction rate coefficients to ascertain whether there is a universal formation mechanism for carbon dust in strongly irradiated astrophysical environments.
APA, Harvard, Vancouver, ISO, and other styles
28

Macias, Lloret Mario. "Business-driven resource allocation and management for data centres in cloud computing markets." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/144562.

Full text
Abstract:
Cloud Computing markets arise as an efficient way to allocate resources for the execution of tasks and services within a set of geographically dispersed providers from different organisations. Client applications and service providers meet in a market and negotiate for the sales of services by means of the signature of a Service Level Agreement that contains the Quality of Service terms that the Cloud provider has to guarantee by managing properly its resources. Current implementations of Cloud markets suffer from a lack of information flow between the negotiating agents, which sell the resources, and the resource managers that allocate the resources to fulfil the agreed Quality of Service. This thesis establishes an intermediate layer between the market agents and the resource managers. In consequence, agents can perform accurate negotiations by considering the status of the resources in their negotiation models, and providers can manage their resources considering both the performance and the business objectives. This thesis defines a set of policies for the negotiation and enforcement of Service Level Agreements. Such policies deal with different Business-Level Objectives: maximisation of the revenue, classification of clients, trust and reputation maximisation, and risk minimisation. This thesis demonstrates the effectiveness of such policies by means of fine-grained simulations. A pricing model may be influenced by many parameters. The weight of such parameters within the final model is not always known, or it can change as the market environment evolves. This thesis models and evaluates how the providers can self-adapt to changing environments by means of genetic algorithms. Providers that rapidly adapt to changes in the environment achieve higher revenues than providers that do not. Policies are usually conceived for the short term: they model the behaviour of the system by considering the current status and the expected immediate effects of their application.
This thesis defines and evaluates a trust and reputation system that forces providers to consider the impact of their decisions in the long term. The trust and reputation system expels providers and clients with dishonest behaviour, and providers that consider the impact of their reputation in their actions improve on the achievement of their Business-Level Objectives. Finally, this thesis studies risk as the effect of uncertainty over the expected outcomes of cloud providers. The particularities of cloud appliances as a set of interconnected resources are studied, as well as how risk is propagated through the linked nodes. Incorporating risk models helps providers differentiate Service Level Agreements according to their risk, take preventive actions focused on the risk, and price accordingly. Applying risk management raises the fulfilment rate of the Service Level Agreements and increases the profit of the provider.
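The genetic-algorithm self-adaptation of pricing described above can be sketched with a tiny GA that searches for the revenue-maximising price under a toy demand curve (the curve, population sizes and mutation rate are all invented; the thesis's models are far richer):

```python
import random

def revenue(price):
    """Invented demand curve: demand shrinks linearly as price rises."""
    demand = max(0.0, 100 * (1 - 0.15 * price))
    return price * demand

def evolve_price(generations=60, pop_size=20, seed=3):
    """Tiny genetic algorithm: selection, crossover and mutation over prices."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.1, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=revenue, reverse=True)
        parents = pop[: pop_size // 4]                 # keep the fittest quarter
        pop = parents[:]                               # elitism
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2 + rng.gauss(0, 0.2)    # crossover + mutation
            pop.append(min(10.0, max(0.1, child)))
    return max(pop, key=revenue)

best = evolve_price()  # analytic optimum of this toy curve is 10/3, about 3.33
```

If the demand curve drifts (the "changing environment"), rerunning the evolution re-adapts the price, which is the mechanism the thesis evaluates.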
APA, Harvard, Vancouver, ISO, and other styles
29

Alkandari, Fatima A. A. A. "Model-driven engineering for analysing, modelling and comparing cloud computing service level agreements." Thesis, University of York, 2014. http://etheses.whiterose.ac.uk/11690/.

Full text
Abstract:
In cloud computing, service level agreements (SLAs) are critical, and underpin a pay-per-consumption business model. Different cloud providers offer different services (of different qualities) on demand, and their pre-defined SLAs are used both to advertise services and to define contracts with consumers. However, different providers express their SLAs using their own vocabularies, typically defined in terms of their own technologies and unique selling points. This can make it difficult for consumers to compare cloud SLAs systematically and precisely. We propose a modelling framework that provides mechanisms that can be used systematically and semi-automatically to model and compare cloud SLAs and consumer requirements. Using MDE principles and tools, we propose a metamodel for cloud provider SLAs and cloud consumer requirements, and thereafter demonstrate how to use model comparison technology for automating different matching processes, thus helping consumers to choose between different providers. We also demonstrate how the matching process can be interactively configured to take into account consumer preferences, via weighting models. The resulting framework can thus be used to better automate high-level consumer interactions with disparate cloud computing technology platforms.
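A minimal sketch of the weighted matching idea: score a provider SLA as the weighted fraction of consumer requirements it satisfies (the SLA terms, values and weights are invented; the thesis operates on full metamodel instances, not flat dictionaries):

```python
def match_score(requirements, sla, weights):
    """Weighted fraction of consumer requirements met by a provider SLA."""
    met = sum(weights[k] for k, v in requirements.items() if sla.get(k, 0) >= v)
    return met / sum(weights.values())

# Hypothetical consumer requirements, provider SLA and preference weights
reqs = {"availability": 99.9, "throughput": 100}
sla = {"availability": 99.95, "throughput": 80}
w = {"availability": 3, "throughput": 1}  # availability matters 3x more
score = match_score(reqs, sla, w)  # availability met, throughput not: 3/4
```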
APA, Harvard, Vancouver, ISO, and other styles
30

Perera, Jayasuriya Kuranage Menuka. "AI-driven Zero-Touch solutions for resource management in cloud-native 5G networks." Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2024. http://www.theses.fr/2024IMTA0427.

Full text
Abstract:
The deployment of 5G networks has introduced cloud-native architectures and automated management systems, offering communication service providers scalable, flexible, and agile infrastructure. These advancements enable dynamic resource allocation, scaling resources up during high demand and down during low usage, optimizing CapEx and OpEx.
However, limited observability and poor workload characterization hinder resource management. Overprovisioning during off-peak periods raises costs, while underprovisioning during peak demand degrades QoS. Despite industry solutions, the trade-off between cost efficiency and QoS remains unresolved. This thesis addresses these challenges by proposing proactive autoscaling solutions for network functions in cloud-native 5G. It focuses on accurately forecasting resource usage, intelligently differentiating scaling events (scaling up, down, or none), and optimizing timing to achieve a balance between cost and QoS. Additionally, CPU throttling, a significant barrier to this balance, is mitigated through a novel approach. The developed framework ensures efficient resource allocation, reducing operational costs while maintaining high QoS. These contributions establish a foundation for sustainable and efficient 5G network operations, setting a benchmark for future cloud-native architectures
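The proactive autoscaling loop described above can be sketched as forecast-then-threshold: predict the next load from recent history and scale when the forecast crosses utilisation bounds (the moving-average forecaster and the 0.8/0.3 thresholds are invented stand-ins for the thesis's learned models):

```python
def scale_decision(history, capacity, up=0.8, down=0.3):
    """Proactive autoscaling sketch: forecast the next load as a moving
    average and scale when forecast utilisation crosses a threshold."""
    window = history[-3:]
    forecast = sum(window) / len(window)
    util = forecast / capacity
    if util > up:
        return "scale_up"
    if util < down:
        return "scale_down"
    return "none"

decision = scale_decision([80, 90, 95], capacity=100)  # rising load: scale up
```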
APA, Harvard, Vancouver, ISO, and other styles
31

Chowdhury, Naser. "Factors Influencing the Adoption of Cloud Computing Driven by Big Data Technology| A Quantitative Study." Thesis, Capella University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10846772.

Full text
Abstract:
A renewed interest in cloud computing adoption has occurred in academic and industry settings because emerging technologies have strong links to cloud computing and Big Data technology. Big Data technology is driving cloud computing adoption in large business organizations. For cloud computing adoption to increase, cloud computing must transition from low-level technology to high-level business solutions. The purpose of this study was to develop a predictive model for cloud computing adoption that included Big Data technology-related variables, along with other variables from two widely used technology adoption theories: the technology acceptance model (TAM) and technology-organization-environment (TOE). The inclusion of Big Data technology-related variables extended cloud computing's mixed-theory adoption approach. The six independent variables were perceived usefulness, perceived ease of use, security effectiveness, cost-effectiveness, intention to use Big Data technology, and the need for Big Data technology. Data collected from 182 U.S. IT professionals or managers were analyzed using binary logistic regression. The results showed that the model involving six independent variables was statistically significant for predicting cloud computing adoption with 92.1% accuracy. Independently, perceived usefulness was the only predictor variable that can increase cloud computing adoption. These results indicate that cloud computing may grow if it can be leveraged into emerging Big Data technology trends to make cloud computing more useful for its users.
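The binary logistic regression used above fits a sigmoid mapping from predictors to adoption probability. A single-predictor sketch trained by gradient descent on made-up survey data (the study itself used six predictors and real responses):

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Single-predictor logistic regression fit by batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted adoption probability
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Invented data: perceived-usefulness score (1-7 scale) vs. adoption (0/1)
xs = [1, 2, 2, 3, 5, 6, 6, 7]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = train_logistic(xs, ys)
predict = lambda x: 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0
accuracy = sum(predict(x) == y for x, y in zip(xs, ys)) / len(xs)
```

A positive fitted weight mirrors the study's finding that perceived usefulness increases adoption likelihood.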
APA, Harvard, Vancouver, ISO, and other styles
32

Ribas, Maristella. "A Petri net decision model for cloud services adoption." Universidade Federal do CearÃ, 2015. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=15609.

Full text
Abstract:
Cloud services are now widely used, especially in Infrastructure as a Service (IaaS), with big players offering several purchasing options and expanding the range of offered services almost daily. Cost reduction is a major factor promoting cloud services adoption. However, qualitative factors need to be evaluated as well, making the decision process of cloud services adoption a non-trivial task for managers. In this work, we propose a Petri net-based multi-criteria decision-making (MCDM) framework to evaluate a cloud service against a similar on-premises offer. The evaluation of both options considers cost and qualitative issues in a novel and simple method that incorporates best practices from academia and IT specialists. Furthermore, the use of Petri net models allows powerful extensions to perform deeper analysis of specific factors as needed. The framework can help IT managers decide between the two options and can be used for any type of cloud service (IaaS, SaaS, PaaS). Since cost is one of the most important factors promoting cloud adoption, we proceed with a deeper analysis of one important cost factor: we propose a Petri net to model cost savings using the spot instances purchasing option of public clouds. Through extensive simulations in several scenarios, we conclude that spot instances can be a very interesting option for savings in the auto-scaling process, even in simple business applications using only a few servers. Exploring different purchasing options for cloud services can make the difference in the decision-making process.
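The savings mechanism this thesis analyses with Petri nets can be illustrated, much more crudely, with a toy hourly cost simulation. The prices, the 5% hourly interruption rate, and the fallback-to-on-demand policy below are all hypothetical assumptions, not figures from the work:

```python
import random

def simulate_cost(hours, on_demand_price, spot_price, interruption_prob, seed=0):
    """Toy model: run one server for `hours`, paying the spot price when the
    instance survives the hour and the on-demand price for an hour after an
    interruption (a crude stand-in for re-provisioning)."""
    rng = random.Random(seed)
    cost = 0.0
    for _ in range(hours):
        if rng.random() < interruption_prob:
            cost += on_demand_price  # interrupted: fall back to on-demand
        else:
            cost += spot_price
    return cost

# Hypothetical prices: spot at ~30% of on-demand, 5% hourly interruption rate.
on_demand = 720 * 0.10                       # 30 days, on-demand only
mixed = simulate_cost(720, 0.10, 0.03, 0.05)  # 30 days, spot with fallback
savings = 1 - mixed / on_demand
```

Even with frequent interruptions priced at the full on-demand rate, the mixed strategy stays far cheaper, which is consistent with the abstract's conclusion that spot instances pay off even for small deployments.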
APA, Harvard, Vancouver, ISO, and other styles
33

Tan, Yue. "Stochastic Modeling, Optimization and Data-Driven Adaptive Control with Applications in Cloud Computing and Cyber Security." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1431098853.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Nagrath, Vineet. "Software architectures for cloud robotics : the 5 view Hyperactive Transaction Meta-Model (HTM5)." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS005/document.

Full text
Abstract:
Software development for cloud-connected robotic systems is a complex software engineering endeavour. These systems are often an amalgamation of one or more robotic platforms, standalone computers, mobile devices, server banks, virtual machines, cameras, network elements and ambient intelligence. An agent-oriented approach represents robots and other auxiliary systems as agents in the system. Software development for distributed and diverse systems like cloud robotic systems requires special software modelling processes and tools. Model-driven software development for such complex systems will increase flexibility, reusability, cost effectiveness and the overall quality of the end product. The proposed 5-view meta-model has separate meta-models for specifying structure, relationships, trade, system behaviour and hyperactivity in a cloud robotic system. The thesis describes the anatomy of the 5-view Hyperactive Transaction Meta-Model (HTM5) in computation-independent, platform-independent and platform-specific layers. The thesis also describes a domain-specific language for computation-independent modelling in HTM5. It presents a complete meta-model for agent-oriented cloud robotic systems, together with several simulated and real experiment-projects justifying HTM5 as a feasible meta-model.
APA, Harvard, Vancouver, ISO, and other styles
35

Shei, Shaun. "A model-driven approach towards designing and analysing secure systems for multi-clouds." Thesis, University of Brighton, 2018. https://research.brighton.ac.uk/en/studentTheses/53c11a93-3d8d-4cbe-82df-deb34be6ab1f.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Silva, Elias Adriano Nogueira da. "Uma abordagem dirigida por modelos para desenvolvimento de aplicações multi-paas." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-08022018-103528/.

Full text
Abstract:
Cloud computing is a computational paradigm that has increasingly been used in various sectors of industry and academia. Researchers have been studying how cloud technologies can influence several areas of science and research. In the context of Software Engineering, research related to the cloud is steadily increasing: researchers are studying how to develop better cloud service offerings and how to find a strategy for combining existing resources to build improved services and solve problems. Building cloud systems is a complex task; in the context of the Platform-as-a-Service (PaaS) cloud service model, this activity is further aggravated by cloud platform specificities that can make development repetitive, costly, and platform-specific. Model-driven approaches (MDE) solve some of these issues; they propose that the modeling and transformation mechanisms used to generate code from models are a better way to develop software systems than pure coding. Development with MDE is a comprehensive and relevant research area and needs to be better explored in a wide range of contexts. Therefore, in order to investigate how to combine the benefits of multi-cloud applications aligned with MDE, we developed a model-driven approach to build multi-PaaS applications. Toward this objective, we performed a case study in collaboration with an industry company. This collaboration allowed the creation of reference implementations that enabled the development of a Domain Specific Language (DSL) and the metaprograms that constitute the approach. To evaluate the approach, we performed a case study. The results show that MDE can not only solve the problem, but also bring additional benefits over traditional systems development approaches. This work explores these benefits and presents a way to combine heterogeneous cloud resources through a model-driven approach and service-oriented applications.
APA, Harvard, Vancouver, ISO, and other styles
37

Farneti, Thomas. "Design and Deployment of an Execution Platform based on Microservices for Aggregate Computing in the Cloud." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/12948/.

Full text
Abstract:
The term Internet of Things is often used to define intelligent objects, services, and applications connected through the Internet. A study by Cisco states that the number and variety of devices from which data is collected is growing extremely fast. As the number of devices increases, so does complexity, raising problems such as lack of modularity and reusability, and difficulties in testing, maintenance and release. Aggregate Programming provides an alternative to traditional software development methods that dramatically simplifies the design, creation, and maintenance of complex IoT systems. With this technique, the basic unit of computation is no longer a single device but a cooperating collection of devices. This thesis describes the design and development of an execution platform for Aggregate Programming based on microservices in the cloud. Unlike the distributed model of Aggregate Programming, cloud computing represents a further opportunity for building scalable systems and can be thought of as an alternative execution strategy where computations are carried out in the cloud. To get the most out of the scalability and reliability typically provided by the cloud model, a suitable architecture must be adopted. This work describes how to exploit the microservices architecture by building the required inter-process communication infrastructure from the ground up. Given the greater technological complexity of microservice architectures, the thesis describes how to adopt a container-based approach, easing management difficulties through a container orchestrator.
APA, Harvard, Vancouver, ISO, and other styles
38

Zuñiga, Prieto Miguel Ángel. "Reconfiguración Dinámica e Incremental de Arquitecturas de Servicios Cloud Dirigida por Modelos." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/86288.

Full text
Abstract:
Cloud computing represents a fundamental change in the way organizations acquire technological resources (e.g., hardware, development and execution environments, applications): instead of buying them, they acquire remote access to them in the form of cloud services supplied through the Internet. Among the main characteristics of cloud computing is the allocation of resources in an agile and elastic way, reserved or released depending on the demand of users or applications, enabling a payment model based on consumption metrics. The development of cloud applications mostly follows an incremental approach, where the incremental delivery of functionalities to the client successively changes, or reconfigures, the current architecture of the application. Cloud providers have their own standards for both implementation technologies and service management mechanisms, requiring solutions that facilitate: building, integrating and deploying portable services; interoperability between services deployed across different cloud providers; and continuity in the execution of the application while its architecture is reconfigured as a result of integrating successive increments. The principles of the model-driven development approach, the service-oriented architectural style, and dynamic reconfiguration play an important role in this context. The hypothesis of this doctoral thesis is that model-driven development methods provide cloud service developers with abstraction and automation mechanisms for the systematic application of the principles of model engineering during the design, implementation, and incremental deployment of cloud services, facilitating the dynamic reconfiguration of the service-oriented architecture of cloud applications.
The main objective of this doctoral thesis is therefore to define and empirically validate DIARy, a method for the dynamic and incremental reconfiguration of service-oriented architectures for cloud applications. This method allows specifying the architectural integration of an increment with the current cloud application, and with this information automates the derivation of implementation artifacts that facilitate the integration and dynamic reconfiguration of the application's service architecture. This dynamic reconfiguration is achieved by running reconfiguration artifacts that not only deploy/undeploy the increment's services and the orchestration services between the increment and the current cloud application, but also change the links between services at runtime. A software infrastructure that supports the activities of the proposed method has also been designed and implemented. It includes the following components: i) a set of DSLs, with their respective graphical editors, that describe aspects related to the architectural integration, implementation and provisioning of increments in cloud environments; ii) transformations that generate platform-specific implementation and provisioning models; iii) transformations that generate artifacts implementing the integration logic and orchestration of services, plus provisioning, deployment, and dynamic reconfiguration scripts for different cloud vendors. This doctoral thesis contributes to the field of service-oriented architectures and in particular to the dynamic reconfiguration of cloud service architectures in an iterative and incremental development context.
The main contribution is a well-defined method, based on the principles of model-driven development, which raises the level of abstraction and automates, through transformations, the generation of the artifacts that perform the dynamic reconfiguration of cloud applications.
Zuñiga Prieto, MÁ. (2017). Reconfiguración Dinámica e Incremental de Arquitecturas de Servicios Cloud Dirigida por Modelos [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86288
APA, Harvard, Vancouver, ISO, and other styles
39

Guillaume, Fumeaux. "Public Software as a Service a Business-Driven Guidance for Risk Control." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-60510.

Full text
Abstract:
Because cloud computing adoption grows day by day, it is essential for the executives of a company to be able to rely on risk management guidance to fully grasp all the aspects concerning cloud computing security. The concerns of the industry, the security standards, the official guidelines, and the European laws about security when using cloud services have been analyzed. The risks, the measures, and the obligations have been gathered. With all this information collected, this paper describes how to run risk management for public SaaS security while keeping a business-driven mindset. While running the risk assessment, management should look at the impact a threat may have on company activities, image, and finances. It will decide on the measures that should be implemented by the administration or by IT. Following this guidance should minimize the risk of using public SaaS cloud computing and allow a company to align its security goals with its business goals.
APA, Harvard, Vancouver, ISO, and other styles
40

Maiyama, Kabiru M. "Performance Analysis of Virtualisation in a Cloud Computing Platform. An application driven investigation into modelling and analysis of performance vs security trade-offs for virtualisation in OpenStack infrastructure as a service (IaaS) cloud computing platform architectures." Thesis, University of Bradford, 2019. http://hdl.handle.net/10454/18587.

Full text
Abstract:
Virtualisation is one of the underlying technologies that led to the success of cloud computing platforms (CCPs). The technology, along with other features such as multitenancy, allows delivering computing resources in the form of services through efficient sharing of physical resources. As these resources are provided through virtualisation, a robust agreement is outlined for both the quantity and quality of service (QoS) in service level agreement (SLA) documents. QoS is one of the essential components of an SLA, and performance is one of its primary aspects. As the technology progressively matures and receives massive acceptance, researchers from industry and academia continue to carry out novel theoretical and practical studies of various essential aspects of CCPs with significant levels of success. This thesis starts with an assessment of the current level of knowledge in the literature on cloud computing in general and CCPs in particular. In this context, a substantive literature review was carried out focusing on performance modelling, testing, analysis and evaluation methodologies for Infrastructure as a Service (IaaS). To this end, a systematic mapping study (SMS) of the literature was conducted; the SMS guided the choice and direction of this research. The SMS was followed by the development of a novel open queueing network model (QNM) at equilibrium for the performance modelling and analysis of an OpenStack IaaS CCP. It was assumed that the external arrival pattern is Poisson while the queueing stations provide exponentially distributed service times. Based on Jackson's theorem, the model was exactly decomposed into individual M/M/c (c ≥ 1) stations. Each of these queueing stations was analysed in isolation, and closed-form expressions for key performance metrics, such as mean response time, throughput, server (resource) utilisation, as well as the bottleneck device, were determined.
Moreover, the research was extended with a proposed open QNM with a bursty external arrival pattern represented by a Compound Poisson Process (CPP) with geometrically distributed batches, or equivalently, variable Generalised Exponential (GE) interarrival and service times. Each queueing station had c (c ≥ 1) GE-type servers. Based on a generic maximum entropy (ME) product-form approximation, the proposed open GE-type QNM was decomposed into individual GE/GE/c queueing stations with GE-type interarrival and service times. The performance metrics and bottleneck analysis of the QNM were evaluated, providing vital insights for the capacity planning of existing CCP architectures as well as the design and development of new ones. The results also revealed that the burstiness of the interarrival and service time processes has a significant impact on performance, resulting in worst-case performance bound scenarios, as appropriate. Finally, an investigation was carried out into the modelling and analysis of performance and security trade-offs for a CCP architecture, based on a proposed generalised stochastic Petri net (GSPN) model with a security-detection control model (SDCM). In this context, 'optimal' combined performance and security metrics were defined with both M-type and GE-type arrival and service times, and the impact of security incidents on performance was assessed. Typical numerical experiments on the GSPN model were conducted and implemented using the Möbius package, and 'optimal' trade-offs were determined between performance and security, which are crucial in the SLA of cloud computing services.
Petroleum Technology Development Fund (PTDF) of the Government of Nigeria; Usmanu Danfodiyo University, Sokoto.
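The closed-form M/M/c metrics mentioned in the abstract follow from the standard Erlang C formula. The sketch below uses those textbook formulas only, not the thesis's actual OpenStack model; the arrival and service rates are made-up numbers:

```python
import math

def erlang_c(c, rho):
    """Probability that an arriving job has to wait in an M/M/c queue,
    with offered load a = c * rho (requires rho < 1 for stability)."""
    a = c * rho
    s = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / (math.factorial(c) * (1 - rho))
    return top / (s + top)

def mmc_mean_response(lam, mu, c):
    """Mean response time W = Wq + 1/mu for an M/M/c station."""
    rho = lam / (c * mu)
    assert rho < 1, "station must be stable"
    wq = erlang_c(c, rho) / (c * mu - lam)  # mean waiting time in queue
    return wq + 1 / mu

# Sanity check against the M/M/1 special case, where W = 1 / (mu - lam).
w1 = mmc_mean_response(lam=2.0, mu=5.0, c=1)
w2 = mmc_mean_response(lam=2.0, mu=5.0, c=2)  # adding a server lowers W
```

In a Jackson-decomposed network like the one described above, each station can be evaluated this way in isolation, and the station with the highest utilisation identifies the bottleneck device.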
APA, Harvard, Vancouver, ISO, and other styles
41

Åkers, Josephine. "Driving fashion with data : A qualitative study of how buying firms in the buyer-driven fashion supply chain can benefit from a digitized supply chain reconfiguration." Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-15745.

Full text
Abstract:
Future customers will demand personalized goods and services. Value creation must therefore place a larger focus on product development and design, supply chain management and after-sales services. The key to success in the future fashion industry is a reduction of the reliance on traditional demand forecasting. Firms should instead put a larger focus on adopting shorter lead times and agile supply chain designs. Industry 4.0 will require an evolution of how clothing is designed and produced. It requires the implementation of new technologies able to identify data for expanding consumer-driven design and product development, combined with new technologies for flexible, local on-demand production. The purpose of the study is to explore how buying firms in the buyer-driven fashion supply chain utilize digitization and digital linking technology to create benefits for the firm. The study is of a qualitative character and the reasoning is abductive, as theory on supply chain configuration is applied to the fashion supply chain. The empirical data was generated through in-depth, semi-structured expert interviews with a purposive sample of seven fashion industry professionals. In order to answer the research question, the empirical data was thematically analyzed, and a main overarching theme and five subthemes emerged. The themes were compared to the theoretical framework of supply chain configuration. The elementary business opportunity in a digitized supply chain is the combination of digital and physical resources to raise performance and support business innovation. The configuration between physical units, virtual units and information processing service supply chain units is crucial to creating added value for a service or a product. The empirical data revealed clear examples of how the configuration between the units is applied to create benefits for the firm. The findings elaborate on the theory of supply chain configuration and contribute to the research field of strategic management and organizational theory.
APA, Harvard, Vancouver, ISO, and other styles
42

Ikken, Sonia. "Efficient placement design and storage cost saving for big data workflow in cloud datacenters." Thesis, Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0020/document.

Full text
Abstract:
Typical cloud big data systems are workflow-based, including MapReduce, which has emerged as the paradigm of choice for developing large-scale data-intensive applications. Data generated by such systems are huge, valuable, and stored at multiple geographical locations for reuse. Indeed, workflow systems, composed of jobs using collaborative task-based models, present new dependency and intermediate data exchange needs. This gives rise to new issues when selecting distributed data and storage resources, so that the execution of tasks or jobs is on time and resource usage is cost-efficient. Furthermore, the performance of task processing is governed by the efficiency of intermediate data management. In this thesis we tackle the problem of intermediate data management in cloud multi-datacenters by considering the requirements of the workflow applications generating them. To this end, we design and develop models and algorithms for the big data placement problem in the underlying geo-distributed cloud infrastructure, so that the data management cost of these applications is minimized. The first problem addressed is the study of the intermediate data access behavior of tasks running in a MapReduce-Hadoop cluster. Our approach develops and explores a Markov model that uses the spatial locality of intermediate data blocks and analyzes spill-file sequentiality through a prediction algorithm. Secondly, the thesis deals with storage cost minimization for intermediate data placement in federated cloud storage. Through a federation mechanism, we propose an exact ILP algorithm to assist multiple cloud datacenters in hosting the generated intermediate data dependencies between pairs of files; the algorithm takes into account scientific user requirements, data dependency, and data size. Finally, a more generic problem is addressed involving two variants of the placement problem: splittable and unsplittable intermediate data dependencies. The main goal is to minimize the operational data cost according to inter- and intra-job dependencies.
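The storage-versus-transfer trade-off at the heart of the placement problem can be illustrated with a toy exact search. This is only a sketch of the objective, not the thesis's ILP formulation: the datacenter prices, file sizes, and single flat transfer penalty below are invented for illustration.

```python
from itertools import product

def cheapest_placement(files, datacenters, deps, transfer_cost):
    """Exhaustively search assignments of files to datacenters.

    files: dict name -> size in GB
    datacenters: dict name -> storage price per GB
    deps: pairs of files that exchange intermediate data
    transfer_cost: flat penalty charged when a dependent pair is
    split across two datacenters
    """
    names = list(files)
    best, best_assign = float("inf"), None
    for combo in product(datacenters, repeat=len(names)):
        assign = dict(zip(names, combo))
        # storage cost of every file at its chosen datacenter
        cost = sum(files[f] * datacenters[assign[f]] for f in names)
        # penalty for each dependency crossing datacenter boundaries
        cost += sum(transfer_cost for a, b in deps if assign[a] != assign[b])
        if cost < best:
            best, best_assign = cost, assign
    return best, best_assign
```

With a large transfer penalty, dependent files are pulled into the same (cheapest) datacenter, which is exactly the tension an exact ILP resolves at scale.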
APA, Harvard, Vancouver, ISO, and other styles
43

Ikken, Sonia. "Efficient placement design and storage cost saving for big data workflow in cloud datacenters." Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0020.

Full text
Abstract:
Typical cloud big data systems are workflow-based, including MapReduce, which has emerged as the paradigm of choice for developing large-scale data-intensive applications. Data generated by such systems are huge, valuable, and stored at multiple geographical locations for reuse. Indeed, workflow systems, composed of jobs using collaborative task-based models, present new dependency and intermediate data exchange needs. This gives rise to new issues when selecting distributed data and storage resources, so that the execution of tasks or jobs is on time and resource usage is cost-efficient. Furthermore, the performance of task processing is governed by the efficiency of intermediate data management. In this thesis we tackle the problem of intermediate data management in cloud multi-datacenters by considering the requirements of the workflow applications generating them. To this end, we design and develop models and algorithms for the big data placement problem in the underlying geo-distributed cloud infrastructure, so that the data management cost of these applications is minimized. The first problem addressed is the study of the intermediate data access behavior of tasks running in a MapReduce-Hadoop cluster. Our approach develops and explores a Markov model that uses the spatial locality of intermediate data blocks and analyzes spill-file sequentiality through a prediction algorithm. Secondly, the thesis deals with storage cost minimization for intermediate data placement in federated cloud storage. Through a federation mechanism, we propose an exact ILP algorithm to assist multiple cloud datacenters in hosting the generated intermediate data dependencies between pairs of files; the algorithm takes into account scientific user requirements, data dependency, and data size. Finally, a more generic problem is addressed involving two variants of the placement problem: splittable and unsplittable intermediate data dependencies. The main goal is to minimize the operational data cost according to inter- and intra-job dependencies.
APA, Harvard, Vancouver, ISO, and other styles
44

Principini, Gianluca. "Data Mesh: decentralizzare l'ownership dei dati mantenendo una governance centralizzata attraverso l'adozione di standard di processo e di interoperabilità." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
Over the last two decades, advances in cloud technologies have allowed enterprises to adopt new implementation paradigms for Data Platforms. However, these paradigms are characterized by centralization and monolithicity, by tight coupling between pipeline stages, and by data ownership centralized in teams of highly specialized data engineers who are far removed from the domain. As data sources and consumers multiply, these characteristics create a bottleneck that can jeopardize the success of projects that often involve large investments. Similar problems have been tackled by software engineering through the adoption of Domain-Driven Design and the shift from monolithic architectures to service-oriented architectures and microservice-based systems, which are well suited to cloud environments. This thesis, carried out in the corporate context of Agile Lab, shows how the same improvements can be applied to the design of Data Platforms by adopting the Data Mesh paradigm, in which each domain exposes analytical data through Data Products. To demonstrate how friction in provisioning a Data Product's infrastructure can be reduced by adopting process and interoperability standards that guide the interaction between the different components of the platform, the thesis presents the design and implementation of an Infrastructure as Code mechanism for the Data Product's observability resources.
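The idea of deriving observability infrastructure from a standardized Data Product descriptor can be sketched as follows. The descriptor fields (`id`, `domain`, `outputPorts`, `sla_minutes`) and resource kinds are hypothetical names chosen for illustration, not the schema used in the thesis.

```python
def render_observability(descriptor):
    """Derive observability resources from a (hypothetical) data
    product descriptor, so every domain team gets the same monitoring
    setup without hand-writing infrastructure code."""
    resources = [{
        "kind": "Dashboard",
        "name": f"{descriptor['id']}-overview",
        "owner": descriptor["domain"],
    }]
    # one freshness alert per output port, honoring its declared SLA
    for port in descriptor.get("outputPorts", []):
        resources.append({
            "kind": "FreshnessAlert",
            "name": f"{descriptor['id']}-{port['name']}-freshness",
            "threshold_minutes": port.get("sla_minutes", 60),
        })
    return resources
```

Because the resources are generated from the descriptor, governance stays centralized in the generator while ownership of the descriptor stays with the domain team.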
APA, Harvard, Vancouver, ISO, and other styles
45

Ozturk, Karahan. "Modeling Of Software As A Service Architectures And Investigation On Their Design Alternatives." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612119/index.pdf.

Full text
Abstract:
In general, a common reference architecture can be derived for Software as a Service (SaaS). However, when designing particular applications, one may derive various application design alternatives from the same reference SaaS architecture specification. To meet the functional and nonfunctional requirements of different enterprise applications, it is important to model the possible designs so that a feasible alternative can be selected. In this thesis, we propose a systematic approach, with corresponding tool support, for guiding the design of SaaS application architectures. The approach defines a SaaS reference architecture, a family feature model, and a set of reference design rules. Based on the business requirements, an application feature model is defined using the family feature model. Selected features are related to design decisions, and a SaaS application architecture design is derived. By defining multiple application architectures based on different application feature models, we can compare alternatives and select the most feasible one.
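The step of deriving a valid application feature model from the family feature model can be sketched as a constraint check. The sketch below is a minimal illustration under common feature-modeling assumptions (mandatory features, requires and excludes constraints); the feature names are invented.

```python
def valid_selection(selected, mandatory, requires, excludes):
    """Check an application feature selection against a family feature model.

    mandatory: set of features every application must include
    requires: dict feature -> set of features it depends on
    excludes: set of frozenset pairs that cannot coexist
    """
    s = set(selected)
    if not mandatory <= s:          # all mandatory features present?
        return False
    for feat in s:                  # every dependency satisfied?
        if not requires.get(feat, set()) <= s:
            return False
    # no mutually exclusive pair selected together
    return all(not pair <= s for pair in excludes)
```

Only selections that pass such a check would be mapped to design decisions and turned into a concrete SaaS architecture.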
APA, Harvard, Vancouver, ISO, and other styles
46

RABELO, FILHO Gerson Lobato. "Uma abordagem em engenharia dirigida por modelos e computação em nuvem para suportar o teste de modelos SAAS de código aberto." Universidade Federal do Maranhão, 2015. http://tedebc.ufma.br:8080/jspui/handle/tede/305.

Full text
Abstract:
Cloud Computing is a computational paradigm built on the idea of cloud-based services, resources, and functionalities offered by enterprises to end users through service delivery models. The main models are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The offered services and resources are commonly tested by testing and maintenance teams in order to detect and eliminate failures. For SaaS models, the most common test types are functional, performance, scalability, component-based, and tenant-based tests. However, more user-oriented tests, such as reliability, availability, usability, and acceptance tests, are performed less often. In the field of Model-Driven Engineering (MDE), tests on SaaS, PaaS, and IaaS models are performed through approaches such as model transformation and the generation of test cases according to test type, allowing services and resources to be tested with higher quality and efficiency. This dissertation presents an approach based on Model-Driven Engineering to support the generation of availability, reliability, usability (after the user uses the open-source SaaS model and answers a questionnaire about their profile and about the model), and acceptance test cases for SaaS models with open source code (Open Software as a Service, Open SaaS). A framework and metamodels are proposed to this end, along with quantitative metrics for analyzing these criteria for the proposed Open SaaS model.
APA, Harvard, Vancouver, ISO, and other styles
47

Stella, Federico. "Learning a Local Reference Frame for Point Clouds using Spherical CNNs." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20197/.

Full text
Abstract:
One of the most important problems in 3D computer vision is surface matching, i.e., finding correspondences between three-dimensional objects. The problem is currently addressed by computing compact local features, called descriptors, which must be recognized and matched as the object's pose changes in space and must therefore be invariant to orientation. The most common way to obtain this property is to use Local Reference Frames (LRFs): local coordinate systems that provide a canonical orientation to the portions of 3D objects used to compute the descriptors. Several ways of computing LRFs exist in the literature, but they all rely on handcrafted algorithms. A recent proposal uses neural networks, but trains them on features specifically engineered for the task, which prevents fully exploiting the benefits of modern end-to-end learning strategies. The goal of this work is to use a data-driven approach to teach a neural network to compute a Local Reference Frame from raw point clouds, thus producing the first example of end-to-end learning applied to LRF estimation. To do so, we exploit a recent innovation called Spherical Convolutional Neural Networks, which generate and process signals in the SO(3) space and are therefore naturally suited to representing and estimating orientations and LRFs. We compare the performance obtained with that of existing methods on standard benchmarks, obtaining promising results.
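For context, the handcrafted baselines that the learned approach competes with typically build an LRF from the eigenvectors of the neighborhood covariance matrix. A dependency-free sketch of that classic ingredient (not the thesis's method), using power iteration to extract the principal axis:

```python
def principal_axis(points, iters=100):
    """Dominant eigenvector of a 3D neighborhood's covariance matrix:
    the classic handcrafted ingredient of an LRF. Note the sign of the
    axis stays ambiguous, one known weakness of handcrafted LRFs."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    q = [(x - cx, y - cy, z - cz) for x, y, z in points]
    # 3x3 covariance matrix of the centered points
    C = [[sum(a[i] * a[j] for a in q) / n for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):  # power iteration converges to the top eigenvector
        w = [sum(C[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

An end-to-end learned LRF replaces this fixed recipe with a network trained directly on raw point clouds, so the canonical orientation is optimized for matching rather than fixed by geometry alone.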
APA, Harvard, Vancouver, ISO, and other styles
48

Sahli, Nabil. "Contribution au problème de la sécurité sémantique des systèmes : approche basée sur l'ingénierie dirigée par les modèles." Electronic Thesis or Diss., Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0699.

Full text
Abstract:
Critical, modern, and even future industrial infrastructures will be equipped with many intelligent embedded devices. They exploit complex, embedded, intelligent, and semantic systems for their operations, locally and remotely, in a context of development of smart cities and the web of things. They increasingly use SCADA and DCS control systems to monitor critical industrial platforms in real time. Critical infrastructures will communicate more and more, in the framework of alarm exchanges and the establishment of Euro-Mediterranean electricity markets, and will also be increasingly vulnerable to classic and even semantic attacks, viruses, and Trojan horses. The cybernetics of critical platforms is growing day by day, mainly through the use of complex embedded intelligent semantic systems, web services, ontologies, and file formats (XML, OWL, RDF, etc.), all embedded in the intelligent instruments that make up semantic SCADA systems. Intelligent wired and wireless telecommunication networks, called hybrid networks, are developing and represent a great challenge for the security of future communicating systems. In a context of development of the web of things and smart cities, our research aims to strengthen the foundations of security and semantic cybernetics for communicating systems. In our global solution for the semantic security of critical infrastructures, we propose several sub-solutions, such as metamodels and models, as well as an end-to-end security strategy operating on a global, hybrid, and secure cloud network.
APA, Harvard, Vancouver, ISO, and other styles
49

Le, Nhan Tam. "Ingénierie dirigée par les modèles pour le provisioning d'images de machines virtuelles pour l'informatique en nuage." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00926228.

Full text
Abstract:
Context and problem statement. Nowadays, cloud computing is ubiquitous in research and industry. It is considered a new generation of computing in which dynamically scalable virtual computing resources are provided as services over the Internet. Users can access cloud systems through different interfaces on their various devices, and only pay for what they use, in accordance with the Service-Level Agreement established between them and the cloud service providers. One of the main characteristics of cloud computing is virtualization, through which all resources become transparent to users, who no longer need to control and maintain the computing infrastructure. Virtualization in cloud computing combines virtual machine images (VMIs) and the physical machines on which these images are deployed. Typically, deploying a VMI involves booting the image and installing and configuring the packages it defines. In traditional approaches, VMIs are created by the cloud provider's technical experts: pre-packaged VMIs that come with pre-installed and pre-configured components. To answer a client request, the provider selects an appropriate VMI to clone and deploy on a cloud node; if no such VMI exists, a new one is created for the request, either derived from the closest existing VMI or built entirely from scratch. A standard VMI normally contains many packages, some of which will never be used, because the VMI is created at design time with the intention of being cloned later. This approach has drawbacks, such as the large amount of resources needed to store and deploy VMIs, and the need to start many components, including unused ones. In particular, from a service-management point of view, it is difficult to manage the complexity of the interdependencies between components in order to maintain and evolve the deployed VMIs. To solve these problems, cloud providers could automate the provisioning process and let users choose VMIs flexibly while preserving the provider's gains in time, resources, and cost. To this end, providers should consider several concerns: (1) Which packages and dependencies will be deployed? (2) How can a configuration be optimized in terms of cost, time, and resource consumption? (3) How can the most similar VMI be found and adapted into a new VMI? (4) How can the errors that often arise from manual operations be avoided? (5) How can the evolution of a deployed VMI be managed and adapted to the needs of automatic reconfiguration and scaling? Because of these requirements, building a Platform as a Service (PaaS) management system is difficult, particularly for the VMI provisioning process, and calls for an appropriate approach to managing VMIs in cloud systems that provides solutions for reconfiguration and automatic scaling.
Challenges and key problems. From this problem statement, we identified seven challenges for developing a provisioning process in cloud computing. C1: modeling the variability of VMI configuration options in order to manage the interdependencies between software packages. Different software components may require specific packages or operating-system libraries for correct configuration; these dependencies must be arranged, selected, and resolved manually for each copy of the standard VMI. Moreover, VMIs are created to meet user requirements that may share common sub-needs, so modeling the commonality and variability of VMIs with respect to these requirements is necessary. C2: reducing the data transferred over the network during provisioning. To be ready to answer client requests, many packages are installed on the standard virtual machine, including packages that will never be used; these should be limited to minimize VMI size. C3: optimizing resource consumption at runtime. In the traditional approach, creating and updating VMIs requires time-consuming manual operations, and all packages in the VMIs, including unused ones, are started and therefore consume resources; this consumption should be optimized. C4: providing an interactive tool that helps users choose VMIs. Cloud providers normally want to give users flexibility in their choice of VMIs, but users lack deep technical knowledge, so tools facilitating these choices are needed. C5: automating VMI deployment. Many operations of the provisioning process are very complex; automating them can reduce deployment time and errors. C6: supporting VMI reconfiguration at runtime. An important characteristic of cloud computing is providing services on demand; since demands evolve while VMIs are running, cloud systems should adapt to these evolutions. C7: managing the VMI deployment topology. Deployment must account not only for multiple VMIs with the same configuration but also for multiple VMIs with different configurations; moreover, VMIs may be deployed on different cloud platforms when a provider accepts another provider's infrastructure.
To address these challenges, we consider three key problems for the VMI provisioning process: (1) the need for a level of abstraction for managing VMI configurations: an appropriate approach should provide a high-level abstraction for modeling and managing VMI configurations with their packages and inter-package dependencies; this abstraction lets the provider's expert engineers specify the product family of VMI configurations, and facilitates the analysis and modeling of the commonality and variability of VMI configurations as well as the creation of valid and consistent VMIs; (2) the need for a level of abstraction for the VMI deployment process; and (3) the need for an automatic deployment and reconfiguration process: this abstraction facilitates the specification, analysis, and modeling of the modularity of the process.
Furthermore, the approach should support automation in order to reduce manual tasks, which are costly in terms of performance and potentially error-prone.
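The package-interdependency concern raised in challenge C1 boils down to ordering installations so that every package follows its dependencies. A minimal sketch using Kahn's topological sort, with invented package names for illustration:

```python
from collections import deque

def install_order(deps):
    """deps: dict package -> list of packages it requires.
    Returns an order in which each package is installed after its
    dependencies, or raises ValueError on a dependency cycle."""
    pkgs = set(deps) | {d for reqs in deps.values() for d in reqs}
    indeg = {p: 0 for p in pkgs}
    users = {p: [] for p in pkgs}   # dependency -> packages that need it
    for pkg, reqs in deps.items():
        for d in reqs:
            indeg[pkg] += 1
            users[d].append(pkg)
    ready = deque(sorted(p for p in pkgs if indeg[p] == 0))
    order = []
    while ready:
        p = ready.popleft()
        order.append(p)
        for u in users[p]:          # releasing p may unblock its users
            indeg[u] -= 1
            if indeg[u] == 0:
                ready.append(u)
    if len(order) != len(pkgs):
        raise ValueError("dependency cycle")
    return order
```

Automating this ordering, instead of resolving it by hand for each VMI copy, is one ingredient of the automated provisioning the thesis argues for.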
APA, Harvard, Vancouver, ISO, and other styles
50

Norcini, Simone. "From data to applications in the Internet of Things." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/11128/.

Full text
Abstract:
With the growing complexity of IT infrastructures and the pervasiveness of Internet of Things (IoT) scenarios, there is a need for new computational models based on autonomous entities capable of achieving high-level goals by interacting with each other, supported by infrastructures such as Fog Computing, for proximity to data sources, and Cloud Computing, which offers complex back-end analytics services able to deliver results to millions of users. These new scenarios call for rethinking how software is designed and developed from an agile perspective. The activities of development teams (Dev) should be tightly coupled with those of the teams supporting the Cloud (Ops), following new methodologies now known as DevOps. However, given the lack of adequate abstractions at the programming-language level, IoT developers are often led to follow bottom-up development approaches that are ill-suited to the complexity of applications in this field and to the heterogeneity of the software components that compose them. Since the monolithic applications of the past are hard to scale and manage in a multi-tenant Cloud environment, many consider it necessary to adopt a new architectural style in which an application is seen as a composition of microservices, each dedicated to a specific application functionality and each under the responsibility of a small team of developers, from problem analysis to deployment and management.
Since no unique, shared definition has yet emerged for microservices and the other concepts arising from the IoT and the Cloud, nor for specialized languages for this field, defining custom metamodels combined with the automatic generation of the glue code toward the infrastructure could help a development team raise the level of abstraction, encapsulating implementation details in a company software factory. Thanks to software production systems based on Model-Driven Software Development (MDSD), the currently missing top-down approach can be recovered, allowing attention to focus on the business logic of applications. The thesis presents an example of this possible approach, starting from the idea that an IoT application is first and foremost a distributed software system in which the interaction between active components (modeled as actors) plays a fundamental role.
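The actor view of an IoT application mentioned above can be sketched in a few lines: each actor has private state and a mailbox, and reacts to one message at a time. This is a toy single-threaded scheduler for illustration, not the thesis's runtime; the sensor/aggregator roles are invented.

```python
from collections import deque

class Actor:
    """A minimal actor: a mailbox plus a behavior invoked per message."""
    def __init__(self, behavior):
        self.mailbox = deque()
        self.behavior = behavior

    def send(self, msg):
        self.mailbox.append(msg)

def run(actors):
    """Cooperative scheduler: deliver messages until all mailboxes drain."""
    while any(a.mailbox for a in actors):
        for a in actors:
            if a.mailbox:
                a.behavior(a.mailbox.popleft())

# Example: a sensor actor transforms readings and forwards them
# to an aggregator actor that collects them.
readings = []
aggregator = Actor(lambda msg: readings.append(msg))
sensor = Actor(lambda msg: aggregator.send(msg * 2))
```

Because actors only interact through messages, the same model maps naturally onto distributed Fog and Cloud nodes, which is what makes it a good fit for IoT systems.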
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography