
Dissertations / Theses on the topic 'Kubernetes'



Consult the top 50 dissertations / theses for your research on the topic 'Kubernetes.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Baldini, Umberto. "Kubernetes." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20910/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Клименок, В. О. "Відмовостійка система на основі Kubernetes." Thesis, Чернігів, 2021. http://ir.stu.cn.ua/123456789/24848.

Full text
Abstract:
Klymenok, V. O. Fault-tolerant system based on Kubernetes : graduation qualification thesis : 125 "Cybersecurity" / V. O. Klymenok ; supervisor M. A. Synenko ; Chernihiv Polytechnic National University, Department of Cybersecurity and Mathematical Modelling. – Chernihiv, 2021. – 48 p.
The aim of the work is to develop a fault-tolerant system for protecting web applications based on Kubernetes. The object of research is the process of creating a fault-tolerant and secure system for web applications. The subject of the research is the created and configured fault-tolerant system using the Linux container orchestrator. The calculation of economic efficiency was not performed. The results and novelty obtained in the research process can be used to create a fault-tolerant and secure system for web applications, online stores and more.
3

Gasimli, Elkhan. "Container Network Interface Management with Kubernetes." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
The orchestration of containers and cloud-native computing has gained a lot of attention in recent years. The level of adoption has risen to such an extent that even businesses in finance, banking and the public sector are involved. Containers are deployed in both public and private cloud environments and are typically run in virtual machine (VM) environments for versatility. Microservices demand a large number of containers that require orchestration, and Kubernetes is one of the most common solutions. However, Kubernetes does not provide a networking solution itself; the CNI (Container Network Interface) and its plugins do. Their capabilities must be measured in order to select the best plugin. In this thesis, the most widely used Kubernetes CNI plugins, such as Flannel, Calico and Cilium, are deployed and tested. Finally, the main differences, advantages and disadvantages of the plugins are discussed so that an appropriate choice can be made based on project requirements.
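Whichever plugin a comparison like this selects, kubelet discovers it through a configuration file under /etc/cni/net.d/. As a point of reference, a minimal Flannel-style conflist (a sketch; names and options vary by deployment) looks like:

```json
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

Swapping Calico or Cilium for Flannel means installing a different conflist and its companion daemons; the Kubernetes API surface seen by workloads stays the same.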
4

Semprini, Gloria. "Resilienza di applicazioni e Cluster Kubernetes." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23103/.

Full text
Abstract:
The thesis aims to build a tool for students of the Systems Integration course of the degree programme in Computer Science and Engineering. The tool provides students with a Kubernetes cluster whose nodes are virtual machines all running on a single physical machine. Compared to pre-existing teaching tools, this Kubernetes cluster is composed of several nodes and allows direct experimentation with the cluster's resilience features. In fact, the tool implements Kubernetes' HA (High Availability) technology, replicating the nodes that act as masters and exposing, as the external access point, a node called the load balancer that balances the load among the masters. The tool uses the VirtualBox hypervisor running on a physical Windows machine and employs 7 virtual machines running Ubuntu Server 20.10.
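An HA control plane of the kind described is typically fronted by a TCP load balancer. A minimal HAProxy sketch (IP addresses are hypothetical) that spreads API-server traffic across three replicated masters might read:

```
frontend kubernetes-api
    bind *:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 192.168.56.11:6443 check
    server master2 192.168.56.12:6443 check
    server master3 192.168.56.13:6443 check
```

Worker nodes and kubectl then point at the balancer's address instead of any single master, so losing one master does not take down the API endpoint.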
5

Медведєв, В. І. "Контейнеризований python додаток в кластері Kubernetes." Master's thesis, Сумський державний університет, 2020. https://essuir.sumdu.edu.ua/handle/123456789/82325.

Full text
6

Baiardi, Martina. "Controllo e scalabilità automatizzati in cluster Kubernetes." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21629/.

Full text
Abstract:
Modern applications that want to adapt well to distributed settings are built from containers. Kubernetes (K8s) is a piece of software that orchestrates containers across a cluster of interacting machines. Initially developed by Google and open-sourced in 2014 precisely to help developers run this kind of complex system correctly, K8s makes it possible to deploy containerized applications, simplifying their configuration, and to dynamically manage workloads that vary over time. The goal of this thesis is therefore to give students of the Computer Science and Engineering programme a tool to practise with these software infrastructures directly on their own machines, fully virtualizing the whole infrastructure on a single host by means of multiple virtual machines.
7

Comandini, Alessio. "Kubernetes su GCP: Un approccio top down." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
The thesis analyses the microservice architecture and, specifically, the Kubernetes platform and its installation topologies. It then presents Google Cloud Platform, evaluating the services it offers for deploying Kubernetes clusters and applications on top of them. After the analysis phase, a Kubernetes cluster is set up and then tested by deploying a small application.
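On GCP, the deployment flow just described can be sketched with standard gcloud/kubectl commands (cluster name, zone, and sizes are illustrative):

```shell
# Create a small GKE cluster and fetch credentials for kubectl
gcloud container clusters create demo-cluster --zone europe-west1-b --num-nodes 3
gcloud container clusters get-credentials demo-cluster --zone europe-west1-b

# Deploy a sample application and expose it outside the cluster
kubectl create deployment hello --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment hello --type=LoadBalancer --port 80 --target-port 8080
kubectl get service hello   # wait for an external IP to appear
```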
8

Gunda, Pavan, and Sri Datta Voleti. "Performance evaluation of wireguard in kubernetes cluster." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21167.

Full text
Abstract:
Containerization has gained popularity for deploying applications in a lightweight environment. Kubernetes and Docker have gained a lot of dominance for scalable deployments of applications in containers. Usually, Kubernetes clusters are deployed within a single shared network. For high availability of the application, multiple Kubernetes clusters are deployed in multiple regions, so the number of Kubernetes clusters keeps increasing over time. Maintaining and managing multiple Kubernetes clusters is a challenging and time-consuming process for system administrators or DevOps engineers. These issues can be addressed by deploying a Kubernetes cluster in a multi-region environment. A multi-region Kubernetes deployment reduces the hassle of handling multiple Kubernetes masters by having only one master, with worker nodes spread across multiple regions. In this thesis, we investigated the network performance of a multi-region Kubernetes cluster by deploying one with worker nodes across multiple OpenStack regions, tunneled using WireGuard (a VPN protocol). A literature review on the common factors that influence network performance in a multi-region deployment was conducted for the network performance metrics. Then, we compared the request-response time of this multi-region Kubernetes cluster with that of a regular Kubernetes cluster to evaluate its performance. The results show that a Kubernetes cluster with worker nodes in a single shared network has an average request-response time of 2 ms. In contrast, the Kubernetes cluster with worker nodes in different OpenStack projects and regions has an average request-response time of 14.804 ms. This thesis aims to provide a performance comparison of the Kubernetes cluster with and without WireGuard, the factors affecting performance, and an in-depth understanding of concepts related to Kubernetes and WireGuard.
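The tunnel between regions in such a setup is described by a WireGuard configuration on each node. A sketch of a worker-side wg0.conf (keys, addresses and endpoint are placeholders) might be:

```ini
[Interface]
PrivateKey = <worker-private-key>   ; placeholder
Address = 10.8.0.2/24
ListenPort = 51820

[Peer]
PublicKey = <master-public-key>     ; placeholder
Endpoint = 203.0.113.10:51820       ; master node in the other region
AllowedIPs = 10.8.0.0/24
PersistentKeepalive = 25            ; keep NAT mappings alive
```

The extra encryption and the inter-region hop are the main suspects for the latency gap the thesis reports.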
9

Malina, Peter. "Řadič postupného nasazení software nad platformou Kubernetes." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403190.

Full text
Abstract:
The need to deliver value to users grows every day in the competitive IT market. Agility and DevOps are becoming critical aspects of software development, which seeks tools that support an agile culture. Software projects in an agile culture often turn to deployment strategies that reduce the risk of deploying new changes into an existing system. However, environments intended for development and testing almost always differ from production. Using an appropriate deployment strategy such as canary improves overall system stability by testing new changes on a small sample of production traffic. Several experiments were carried out to demonstrate that the canary strategy can positively influence deployment stability and reduce the risk introduced by new changes.
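A basic canary can be approximated in plain Kubernetes by running two Deployments behind one Service, splitting traffic roughly by replica count (names and images are illustrative):

```yaml
# Stable and canary Deployments share the label `app: web`,
# so the Service balances across them ~90/10 by replica count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-stable
spec:
  replicas: 9
  selector:
    matchLabels: { app: web, track: stable }
  template:
    metadata:
      labels: { app: web, track: stable }
    spec:
      containers:
        - name: web
          image: example.com/web:v1   # hypothetical image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1
  selector:
    matchLabels: { app: web, track: canary }
  template:
    metadata:
      labels: { app: web, track: canary }
    spec:
      containers:
        - name: web
          image: example.com/web:v2   # hypothetical new version
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: { app: web }   # matches both tracks
  ports:
    - port: 80
      targetPort: 8080
```

A dedicated rollout controller such as the one this thesis builds automates the promotion or rollback of the canary track based on observed metrics.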
10

Andersson, Johan, and Fredrik Norrman. "Container Orchestration : the Migration Path to Kubernetes." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97744.

Full text
Abstract:
As IT platforms grow larger and more complex, so does the underlying infrastructure. Virtualization is an essential factor for more efficient resource allocation, improving both management and environmental impact. It allows more robust solutions and facilitates the use of IaC (infrastructure as code). Many systems developed today consist of containerized microservices. Considered the standard for container orchestration, Kubernetes is the natural next step for many companies. But how do we move on from previous solutions to a Kubernetes cluster? We found that there are few sufficiently detailed guidelines available, and set out to gain more knowledge by diving into the subject, implementing prototypes that would act as a foundation for a resulting guideline of how it can be done.
11

Fabbri, Enrico Maria. "Kubernetes e Docker vs Virtual Private Server." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
12

Larsson, Olle. "Running Databases in a Kubernetes Cluster : An Evaluation." Thesis, Umeå universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-165164.

Full text
Abstract:
A recent trend in software engineering is to build applications composed of a set of small independent services – microservices. Kubernetes has become the common denominator for hosting stateless microservices. It offers foundational features such as deployment and replication of microservices as well as cluster resource management. Whereas stateless microservices are well suited to being hosted in Kubernetes, stateful microservices such as databases are generally hosted outside of Kubernetes and managed by domain experts. It is desirable to run stateful services such as databases in Kubernetes to leverage its features and ease of operation, and to harmonize the environment across the entire application stack. The purpose of this thesis is to investigate and evaluate the current support for hosting stateful applications in the form of databases in Kubernetes, and to show how different databases are able to operate in Kubernetes. An experimental setup was used where a set of databases – MySQL, TiDB, and CockroachDB – were deployed in a Kubernetes cluster. For each of these databases, a set of operational tasks was performed concerning backup, upgrading, and capacity re-scaling. During the operations, a number of server-side and client-side metrics related to the performance and resource efficiency of the databases were captured. The results showed that Kubernetes has the native capabilities necessary to deploy and run databases, but not to fully operate them correctly. Furthermore, it was concluded that the operations had widely different performance impacts depending on the database solution.
13

Markstedt, Olof. "Kubernetes as an approach for solving bioinformatic problems." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-330217.

Full text
Abstract:
The cluster orchestration tool Kubernetes enables easy deployment and reproducibility of life science research by utilizing the advantages of container technology. Container technology allows for easy tool creation and sharing, and a container runs on any Linux system once it has been built. The applicability of Kubernetes as an approach to run bioinformatic workflows was evaluated, resulting in some examples of how Kubernetes and containers could be used within the field of life science, and how they should not be used. The resulting examples serve as proofs of concept and convey the general idea of how an implementation is done. Kubernetes allows for easy resource management and includes automatic scheduling of workloads. It scales rapidly and has some interesting components that are beneficial when conducting life science research.
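A containerized life-science tool is typically run in Kubernetes as a batch Job. A sketch of such a workflow step (image tag, paths and resource sizes are illustrative) could be:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: fastqc-run                # hypothetical analysis step
spec:
  template:
    spec:
      containers:
        - name: fastqc
          image: biocontainers/fastqc:v0.11.9_cv8   # illustrative image tag
          command: ["fastqc", "/data/sample.fastq", "-o", "/data/out"]
          resources:
            requests: { cpu: "1", memory: "2Gi" }
      restartPolicy: Never
  backoffLimit: 2                 # retry a failed run at most twice
```

The scheduler places the Job on any node with free resources, which is exactly the automatic workload scheduling the abstract highlights.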
14

Заблоцький, К. В., С. В. Малик, and Б. С. Чумаченко. "Дослідження можливостей використання kubernetes для потреб телекомунікаційних систем." Thesis, Національний авіаційний університет, 2021. https://er.nau.edu.ua/handle/NAU/50535.

Full text
Abstract:
1. Kubernetes documentation. URL: https://kubernetes.io/ru/docs/home/ (Last accessed: 17.02.2021). 2. Publication about Kubernetes. URL: https://habr.com/ru/post/258443/ (Last accessed: 10.02.2021). 3. Publication about Huawei. URL: https://kubernetes.io/case-studies/huawei/ (Last accessed: 01.02.2021). 4. Publication about Nokia. URL: https://kubernetes.io/case-studies/nokia/ (Last accessed: 01.02.2021).
Traditionally, organizations used physical machines and servers to run applications. This caused many problems with resource allocation, complicating the operation of applications and leaving resources underused. The next step forward was to use virtualization to provide application isolation. This enabled better scaling and separated access between applications to ensure information security.
15

Ходаківський, М. А., В. М. Безрук, and О. П. Малінін. "Впровадження хмарної мікросервісної архітектури на прикладі оркестратора KUBERNETES." Thesis, ХНУРЕ, 2021. https://openarchive.nure.ua/handle/document/16489.

Full text
Abstract:
Today there is a problem with the speed, complexity and efficiency of developing the final software product. For many years developers have written large web applications in the so-called monolithic style, in which a single large application holds all the modules and pieces of code. This was, and remains, quite convenient to write, but the method has significant disadvantages: first of all, if one module or another fails, the whole application ceases to function, which is critical from a business standpoint, and debugging and updating the application is a difficult process. Microservice architecture is the architecture behind the lion's share of today's most popular resources and projects. Its principle is that the whole application is broken down into services; authorization, for example, can be made a separate module. The advantages of this principle are the stability of the application (if one service fails, it is easy to repair) and easier updating of application versions; a remaining disadvantage is the difficult process of debugging and updating the application.
16

Ju, Li. "Proactive auto-scaling for edge computing systems with Kubernetes." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-453643.

Full text
Abstract:
With the emergence of the Internet of Things and 5G technologies, the edge computing paradigm is playing increasingly important roles with better availability, latency control, and performance. However, existing autoscaling tools for edge computing applications do not utilize the heterogeneous resources of edge systems efficiently, leaving scope for performance improvement. In this work, we propose a Proactive Pod Autoscaler (PPA) for edge computing applications on Kubernetes. The proposed PPA is able to forecast workloads in advance using multiple user-defined/customized metrics and to scale edge computing applications up and down correspondingly. The PPA is further optimized and evaluated on an example CPU-intensive edge computing application. It can be concluded that the proposed PPA outperforms the default pod autoscaler of Kubernetes in both resource-utilization efficiency and application performance. The article also highlights possible future improvements to the proposed PPA.
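The scaling rule such an autoscaler applies can be sketched in a few lines: forecast the metric, then apply the Kubernetes HPA formula, replicas = ceil(current_replicas * metric / target). The moving-average forecast below is a deliberately naive stand-in for the PPA's actual forecasting model:

```python
import math

def forecast_load(history, window=3):
    """Naive forecast: mean of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def desired_replicas(current_replicas, forecast_metric, target_per_replica,
                     max_replicas=10):
    """Kubernetes HPA-style rule: replicas = ceil(current * metric / target),
    clamped to [1, max_replicas]."""
    raw = math.ceil(current_replicas * forecast_metric / target_per_replica)
    return max(1, min(raw, max_replicas))

cpu_history = [40, 55, 70]              # observed CPU% per replica
forecast = forecast_load(cpu_history)   # mean of the window: 55.0
# 2 replicas at a forecast 55% against a 50% target -> scale up to 3
print(desired_replicas(current_replicas=2, forecast_metric=forecast,
                       target_per_replica=50))   # -> 3
```

Scaling on the *forecast* rather than the current reading is what makes the autoscaler proactive: new pods are already running when the predicted load arrives.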
17

Коханевич, Є. Г., and О. І. Федюшин. "Автоматизація аналізу безпеки програмного коду за допомогою платформи Kubernetes." Thesis, НТУ «ХПІ», 2020. http://openarchive.nure.ua/handle/document/14295.

Full text
Abstract:
The aim of this report is to implement a Kubernetes Operator for the following code-testing tools: SpotBugs (SAST) and OWASP ZAP (DAST). The result is an effective tool for automating security scanning of code developed in a Kubernetes environment. This tool enables a fast and effective code-security analysis process without the need to configure and continuously maintain the scanning tools.
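A Kubernetes Operator of this kind starts from a CustomResourceDefinition for the scan object it manages. A hypothetical sketch (the group, kind and fields are invented for illustration) might be:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: codescans.security.example.com   # hypothetical group/kind
spec:
  group: security.example.com
  scope: Namespaced
  names:
    kind: CodeScan
    plural: codescans
    singular: codescan
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                scanner:
                  type: string   # e.g. "spotbugs" or "owasp-zap"
                target:
                  type: string   # repository or URL to scan
```

The operator's controller then watches CodeScan objects and launches the corresponding SAST or DAST job, which is what removes the manual configuration burden the abstract mentions.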
18

Olivi, Matteo. "Design of a Kubernetes-based Software-Defined Network Control Plane." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
In recent years, Kubernetes has emerged as the dominant orchestrator of containerized applications. Its design is based on an API that describes the desired state of applications declaratively, and on a control plane that works to make the actual state of the applications converge towards the desired state, achieving fault tolerance, self-healing and high scalability. This design pattern has proved extremely effective for container management, but it is general enough to successfully orchestrate any kind of virtual resource traditionally offered through the IaaS cloud paradigm. We tested this idea by extending Kubernetes to manage virtual networks in addition to the usual containerized applications, in effect building a prototype of a Software-Defined Network control plane. In doing so, both strengths and weaknesses of the Kubernetes design pattern and of the open-source libraries that support it emerged. To verify that the resulting system scales as required by modern cloud data centers, we conducted a performance study.
19

Lundgren, Jonas. "Kubernetes for Game Development : Evaluation of the Container-Orchestration Software." Thesis, Uppsala universitet, Institutionen för speldesign, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444739.

Full text
Abstract:
Kubernetes is a software for managing clusters of containerized applications and has recently risen in popularity in the tech industry. However, this popularity seems to not have spread to the game development industry, prompting the author to investigate if the reason is a technical limitation. The investigation is done by creating a proof-of-concept of a simple system setup for running a game server in Kubernetes, consisting of the Kubernetes-cluster itself, the game server to be run in the cluster, and a matchmaker server for managing client requests and creation of game server instances. Thanks to the successful proof-of-concept, a conclusion can be made that there is no inherent technical limitation causing its infrequent use in game development, but most likely habitual reasons in combination with how new Kubernetes is.
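The matchmaker's core job, grouping waiting clients into game-server-sized batches, can be sketched independently of Kubernetes. The first-come-first-served pairing below is a deliberately simple stand-in for the thesis's matchmaker logic:

```python
from collections import deque

def matchmake(queue, match_size=2):
    """Group waiting players into matches of `match_size`.
    Players left over stay in the queue for the next round."""
    matches = []
    while len(queue) >= match_size:
        matches.append([queue.popleft() for _ in range(match_size)])
    return matches

waiting = deque(["alice", "bob", "carol", "dave", "erin"])
print(matchmake(waiting))   # -> [['alice', 'bob'], ['carol', 'dave']]
print(list(waiting))        # -> ['erin']
```

In the full setup, each returned match would trigger the creation of a dedicated game-server pod in the cluster, and the clients would be handed that pod's address.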
20

Yström, Clara, and Alfred Stenborg. "Performance comparison between a Kubernetes cluster and an embedded system." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176816.

Full text
Abstract:
It is essential to know how the system used in a company performs and at what point the system starts performing visibly worse. This thesis paper examines the performance of a Kubernetes cluster and an embedded system, and challenges the two systems with different amounts of load to detect possible changes in performance. A program was developed that could measure performance by monitoring the latencies of three different metrics: inter-process communication (IPC), memory access, and network. The results showed that, for these two versions of the systems, Kubernetes generally displayed lower latencies than the embedded system. Another aspect that can be interpreted from the results is that the memory latency was constant, except for a few spikes, while the IPC and network latencies were more erratic. Since this thesis paper resulted in a program that can measure performance, it can be of use for a company like Ericsson.
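One of the three metrics, IPC latency, can be measured with a pattern like the following: bounce a one-byte message across a local socket pair and time the round trip. This is a simplified stand-in for the thesis's measurement program:

```python
import socket
import time

def ipc_round_trip_latency(samples=1000):
    """Median round-trip time (seconds) of a 1-byte message
    over a local socket pair."""
    a, b = socket.socketpair()
    timings = []
    for _ in range(samples):
        t0 = time.perf_counter()
        a.sendall(b"x")   # ping
        b.recv(1)
        b.sendall(b"x")   # pong
        a.recv(1)
        timings.append(time.perf_counter() - t0)
    a.close()
    b.close()
    timings.sort()
    return timings[samples // 2]   # median is robust against spikes

print(f"IPC round-trip: {ipc_round_trip_latency() * 1e6:.1f} us")
```

Taking the median rather than the mean filters out the occasional scheduling spikes the results describe, while repeated runs under increasing load expose the erratic behaviour of the IPC metric.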
21

Saini, Shivam. "Spark on Kubernetes using HopsFS as a backing store : Measuring performance of Spark with HopsFS for storing and retrieving shuffle files while running on Kubernetes." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-285561.

Full text
Abstract:
Data is a raw list of facts and details, such as numbers, words, measurements or observations, that is not useful by itself. Data processing is a technique that helps to process the data in order to get useful information out of it. Today, the world produces huge amounts of data that cannot be processed using traditional methods. Apache Spark (Spark) is an open-source distributed general-purpose cluster computing framework for large scale data processing. In order to fulfill its task, Spark uses a cluster of machines to process the data in a parallel fashion. The external shuffle service is a distributed component of an Apache Spark cluster that provides resilience in case of a machine failure. A cluster manager helps Spark to manage the cluster of machines and provides Spark with the resources required to run the application. Kubernetes is a new cluster manager that enables Spark to run in a containerized environment. However, running the external shuffle service is not possible while running Spark with Kubernetes as the resource manager. This greatly impacts the performance of Spark applications due to the failed tasks caused by machine failures. As a solution to this problem, the open-source Spark community has developed a plugin that can provide resiliency similar to that of the external shuffle service. When used with Spark applications, the plugin asynchronously backs up the data onto an external storage. In order not to compromise Spark application performance, it is important that the external storage provides Spark with minimal latency. HopsFS is a next-generation distribution of the Hadoop Distributed File System (HDFS) that provides special support for small files (<64 KB) by storing them in a NewSQL database, enabling it to provide lower client latencies. The thesis work shows that HopsFS provides 16% higher performance to Spark applications for small files as compared to larger ones.
The work also shows that using the plugin to back up Spark data on HopsFS can reduce the total execution time of Spark applications by 20%-30% as compared to recalculation of tasks in case of a node failure.
22

Бойко, Г. О. "Аналіз технологій контейнеризації та оптимізація розгортання масштабованого додатку на платформі Kubernetes." Master's thesis, Сумський державний університет, 2020. https://essuir.sumdu.edu.ua/handle/123456789/82123.

Full text
Abstract:
Some of the capabilities provided by cloud technologies are analyzed; the approaches of the largest cloud providers to implementing one such system, Kubernetes, are compared; an optimized scheme for working with containerized applications is developed, and the mechanism for deploying an application on a container-orchestration platform is considered.
23

Habbal, Nadin. "Enhancing Availability of Microservice Architecture : A Case Study on Kubernetes Security Configurations." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-79185.

Full text
24

Mrowczynski, Piotr. "Scaling cloud-native Apache Spark on Kubernetes for workloads in external storages." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-237455.

Full text
Abstract:
The CERN Scalable Analytics Section currently offers shared YARN clusters to its users for monitoring, security and experiment operations. YARN clusters with data in HDFS are difficult to provision and complex to manage and resize. This imposes new data and operational challenges to satisfy future physics data processing requirements. As of 2018, there were over 250 PB of physics data stored in CERN's mass storage, called EOS. The Hadoop-XRootD Connector allows reading data stored in CERN EOS over the network. CERN's on-premise private cloud, based on OpenStack, allows provisioning compute resources on demand. The emergence of technologies such as Containers-as-a-Service in OpenStack Magnum, and support for Kubernetes as a native resource scheduler for Apache Spark, gives the opportunity to increase workflow reproducibility on different compute infrastructures through containers, reduce the operational effort of maintaining a computing cluster, and increase resource utilization via elastic cloud resource provisioning. This trades off the operational features against the data locality known from traditional systems such as Spark/YARN with data in HDFS. In the proposed architecture of cloud-managed Spark/Kubernetes with data stored in external storage systems such as EOS, Ceph S3 or Kafka, physicists and other CERN communities can spawn and resize a Spark/Kubernetes cluster on demand, with fine-grained control of Spark applications. This work focuses on a Kubernetes CRD Operator for idiomatically defining and running Apache Spark applications on Kubernetes, with automated scheduling and on-failure resubmission of long-running applications. The Spark Operator was introduced with the design principle of making Spark on Kubernetes easy to deploy, scale and maintain, with usability similar to Spark/YARN. An analysis of concerns related to non-cluster-local persistent storage and memory handling has been performed.
The architecture's scalability has been evaluated on the use case of a sustained workload, physics data reduction, with files in ROOT format stored in CERN's mass storage, EOS. A series of microbenchmarks has been performed to evaluate the architecture's properties compared to the state-of-the-art Spark/YARN cluster at CERN. Finally, Spark on Kubernetes workload use cases have been classified, and possible bottlenecks and requirements identified.
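Running such an application with Kubernetes as the resource scheduler boils down to a spark-submit invocation against the cluster's API server. A sketch (master URL, image, class and jar path are placeholders):

```shell
spark-submit \
  --master k8s://https://kube-apiserver.example.com:6443 \
  --deploy-mode cluster \
  --name data-reduction \
  --conf spark.executor.instances=10 \
  --conf spark.kubernetes.container.image=registry.example.com/spark:3.0.0 \
  --class org.example.DataReduction \
  local:///opt/spark/jars/app.jar
```

The Spark Operator discussed in the abstract wraps exactly this kind of submission in a declarative SparkApplication resource, adding scheduling and on-failure resubmission on top.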
APA, Harvard, Vancouver, ISO, and other styles
25

Leoni, Luca. "Utilizzo di Google Kubernetes Engine per un'applicazione IoT di gestione dell'acqua in agricoltura." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21008/.

Full text
Abstract:
This thesis aims to study the Kubernetes environment and to port the application developed for the SWAMP project to Google Kubernetes Engine (GKE). SWAMP (Smart Water Management Platform) is a European H2020 project whose goal is to build a platform for water management in agriculture, with the primary purpose of minimizing water consumption. It is based on IoT ("Internet of Things") and other advanced technologies such as sensors and drones. Kubernetes is an open-source platform for orchestrating containerized applications, which have the advantage of being easy to use and distribute. Docker was used to create the containers holding the various parts of the overall application. As a first step, Kubernetes was studied on a test system containing the Virtuoso graph database and the SEPA engine (which also underlies SWAMP). The test system was then run on Google Kubernetes Engine to understand how it works and how it relates to plain Kubernetes. Finally, the SWAMP application was ported to GKE. For the application to work correctly on GKE, a number of supporting concepts were also studied, such as persistent volumes and exposing services on the network.
APA, Harvard, Vancouver, ISO, and other styles
26

Mondani, Lorenzo. "Deployment e scaling automatici di un cluster Kubernetes low cost su architettura Arm." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23132/.

Full text
Abstract:
Construction of a Kubernetes cluster based on Raspberry Pi 4 B through automated procedures. Canonical Metal As A Service (MAAS) and Canonical Juju are used for the automatic installation and configuration of the operating systems and the Kubernetes components. The Raspberry Pis are adapted to ensure compatibility with MAAS by means of TianoCore EDKII, the open-source implementation of the UEFI firmware for the Raspberry Pi 4 B. A custom BMC (Baseboard Management Controller) is built on an ESP32 to let MAAS remotely power the various Raspberry Pis on and off. A "power button" device is defined in the ACPI tables of the UEFI firmware so that the BMC can shut down the Raspberry Pi through the GPIO interface. The distributed storage service Ceph is installed via Juju to allow the allocation of persistent volumes on Kubernetes. The Kubernetes cluster autoscaler is extended so that it can add or remove physical nodes automatically, based on the available resources, by communicating with Juju. The IaC (Infrastructure as Code) tool Terraform is used to deploy and configure the Kubernetes applications, among them MetalLB, which is needed to allocate services of type "LoadBalancer". The work describes the operations performed and evaluates the results obtained.
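The MetalLB setup used to allocate "LoadBalancer" services on bare metal can be illustrated with a minimal configuration sketch. The address range and resource names below are hypothetical, and the manifest assumes MetalLB's CRD-based configuration (v0.13 and later), not necessarily the version used in the thesis:

```yaml
# Hypothetical address pool for bare-metal LoadBalancer services;
# the range must lie on the cluster's local network.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: pi-cluster-pool        # assumed name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
# Announce the pool on the local L2 segment via ARP.
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: pi-cluster-l2          # assumed name
  namespace: metallb-system
spec:
  ipAddressPools:
    - pi-cluster-pool
```

With such a pool in place, a Service of type "LoadBalancer" is assigned an external IP from the range instead of remaining in the "pending" state typical of clusters without a cloud provider.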
APA, Harvard, Vancouver, ISO, and other styles
27

Abdelmassih, Christian. "Container Orchestration in Security Demanding Environments at the Swedish Police Authority." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-228531.

Full text
Abstract:
The adoption of containers and container orchestration in cloud computing is motivated by many aspects, from technical and organizational to economic gains. In this climate, even security demanding organizations are interested in such technologies but need reassurance that their requirements can be satisfied. The purpose of this thesis was to investigate how separation of applications could be achieved with Docker and Kubernetes such that it may satisfy the demands of the Swedish Police Authority. The investigation consisted of a literature study of research papers and official documentation as well as a technical study of iterative creation of Kubernetes clusters with various changes. A model was defined to represent the requirements for the ideal separation. In addition, a system was introduced to classify the separation requirements of the applications. The result of this thesis consists of three architectural proposals for achieving segmentation of Kubernetes cluster networking, two proposed systems to realize the segmentation, and one strategy for providing host-based separation between containers. Each proposal was evaluated and discussed with regard to suitability and risks for the Authority and parties with similar demands. The thesis concludes that a versatile application isolation can be achieved in Docker and Kubernetes. Therefore, the technologies can provide a sufficient degree of separation to be used in security demanding environments.
Populariteten av containers och container-orkestrering inom molntjänster motiveras av många aspekter, från tekniska och organisatoriska till ekonomiska vinster. I detta klimat är även säkerhetskrävande organisationer intresserade av sådana teknologier men söker försäkran att deras kravbild går att möta. Syftet med denna avhandling var att utreda hur separation mellan applikationer kan nås vid användning av Docker och Kubernetes så att Polismyndighetens krav kan uppfyllas. Undersökningen omfattade en litterär studie av vetenskapliga publikationer och officiell dokumentation samt en teknisk studie med iterativt skapande av Kubernetes kluster med diverse variationer. En modell definierades för att representera kravbilden för ideal separation. Vidare så introducerades även ett system för klassificering av separationskrav hos applikationer. Resultatet omfattar tre förslag på arkitekturer för att uppnå segmentering av klusternätverk i Kubernetes, två föreslagna systemkomponenter för att uppfylla segmenteringen, och en strategi för att erbjuda värd-baserad separation mellan containers. Varje förslag evaluerades med hänsyn till lämplighet och risker för myndigheten och parter med liknande kravbild. Avhandlingens slutsats är att en mångsidig applikationsisolering kan uppnås i Docker och Kubernetes. Därmed kan teknologierna uppnå en lämplig grad av separation för att kunna användas för säkerhetskrävande miljöer.
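One common building block for the kind of cluster-network segmentation proposed here is a Kubernetes NetworkPolicy. The sketch below is illustrative only (namespace and label names are assumptions, and enforcement requires a CNI plugin that supports policies, e.g. Calico); it restricts ingress so that pods in a namespace can only be reached from namespaces carrying a matching label:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-to-own-zone    # assumed name
  namespace: team-a             # assumed namespace
spec:
  podSelector: {}               # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              security-zone: team-a   # assumed label
```

Because an empty `podSelector` selects all pods, any traffic not explicitly allowed by the `ingress` rules is dropped for the whole namespace.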
APA, Harvard, Vancouver, ISO, and other styles
28

Muresu, Daniel. "Investigating the security of a microservices architecture : A case study on microservice and Kubernetes Security." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302579.

Full text
Abstract:
The concept of breaking a bigger application down into smaller components is not a new idea, but it has been adopted more widely in recent years due to the rise of the microservice application architecture. What has not been elaborated on enough, however, is the security of the microservice architecture and how it differs from a monolithic application architecture. This raises the question of which security vulnerabilities are most relevant when integrating and using a microservice architecture, and which correlating metrics can be used to detect intrusions based on those vulnerabilities. In this report, the security of the microservice architecture is elaborated on in a case study of the system at Skatteverket, the Swedish tax agency, which is a microservice-based architecture running on Kubernetes. Interviews are conducted with people who have experience in Kubernetes and microservices separately, both employed at Skatteverket and elsewhere. In the interviews, vulnerabilities and intrusion detection metrics are identified, which are then analyzed with respect to a use case in the Skatteverket system. A survey is also done on the existing technologies that can mitigate the identified vulnerabilities related to a microservice architecture. The vulnerabilities present in the use case are concluded to be the most relevant, the identified intrusion detection metrics are elaborated on, and the service mesh technology Istio is found to mitigate the largest number of the identified vulnerabilities.
Konceptet att bryta ner en större applikation i mindre komponenter är inte en ny idé, men den har blivit vanligare under de senaste åren på grund av växten i användning av mikrotjänstsarkitekturer. Vad som dock inte har utforskats tillräckligt är säkerheten för mikrotjänstarkitekturen och hur den skiljer sig från en monolitisk applikationsarkitektur. Detta leder till att fråga vilka de mest relevanta säkerhetsriskerna med att integrera och använda en mikrotjänstarkitektur är, och vilka mätvärden som kan användas för att upptäcka intrång baserat på riskerna kan vara. I denna rapport utforskas säkerheten för mikrotjänstarkitekturer genom en fallstudie av systemet hos Skatteverket, som är en mikrotjänstbaserad arkitektur som körs på Kubernetes. Intervjuer genomförs med personer som har erfarenhet av Kubernetes och mikrotjänster separat, både med anställda på Skatteverket och på annat håll. I intervjuerna identifieras risker och mätvärden för att märka av intrång som sedan analyseras med avseende på ett användningsfall i Skatteverketssystemet. En undersökning görs också om befintlig teknik som kan mildra de identifierade riskerna som är relaterade till en mikrotjänstarkitektur. De risker som förekommer i användningsfallet anses sedan till att vara mest relevanta i slutsatserna, de identifierade mätvärdena för att märka av intrång diskuteras och service mesh teknologin Istio anses mitigera störst antal av de identifierade riskerna.
APA, Harvard, Vancouver, ISO, and other styles
29

Łaskawiec, Sebastian. "Effective solutions for high performance communication in the cloud." Rozprawa doktorska, Uniwersytet Technologiczno-Przyrodniczy w Bydgoszczy, 2020. http://dlibra.utp.edu.pl/Content/2268.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

MOGALLAPU, RAJA. "Scalability of Kubernetes Running Over AWS - A Performance Study while deploying CPU intensive application containers." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18841.

Full text
Abstract:
Background: Nowadays many companies enjoy the benefits of Kubernetes by running their containerized applications on it. AWS is one of the leading cloud computing service providers, and many well-known companies are its clients. Much research has been conducted on Kubernetes, Docker containers, and cloud computing platforms, but confusion remains about how to deploy applications on Kubernetes. There is a research gap concerning the impact of CPU limits and requests when deploying Kubernetes applications. Through this thesis I therefore analyze the performance of a CPU-intensive containerized application, which will help many companies avoid this confusion when deploying their applications on Kubernetes. Objectives: We measure the scalability of Kubernetes under a CPU-intensive containerized application running on AWS, and we study the impact of changing CPU limits and requests when deploying the application in Kubernetes. Methods: We chose a blend of literature study and experimentation to conduct the research. Results and Conclusion: The experiments show that the application performs better when we allocate higher CPU limits and lower CPU requests than when CPU requests and limits are set equal in the deployment file. CPU metrics collected from SAR and the Kubernetes metrics server are similar. For better performance, it is preferable to allocate pods with CPU limits higher than CPU requests rather than with equal requests and limits. Keywords: Kubernetes, CPU intensive containerized application, AWS, Stress-ng.
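The CPU requests and limits compared in the abstract are set per container in the Deployment manifest. A minimal sketch, with assumed names and values (this configuration illustrates the "limits higher than requests" case; the equal case would simply set both fields to the same value):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cpu-intensive-app        # assumed name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cpu-intensive-app
  template:
    metadata:
      labels:
        app: cpu-intensive-app
    spec:
      containers:
        - name: worker
          image: example/stress-ng:latest   # hypothetical image
          resources:
            requests:
              cpu: "250m"   # what the scheduler guarantees to the pod
            limits:
              cpu: "1"      # hard cap, enforced by CFS throttling
```

The request determines scheduling and bin-packing; the limit determines how much the container may burst before being throttled, which is exactly the trade-off measured in the thesis.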
APA, Harvard, Vancouver, ISO, and other styles
31

Jendi, Khaled. "Evaluation and Improvement of Application Deployment in Hybrid Edge Cloud Environment : Using OpenStack, Kubernetes, and Spinnaker." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275714.

Full text
Abstract:
Traditional mechanisms for deploying different applications can be costly in terms of time and resources, especially when an application requires a specific environment to run in and has various kinds of dependencies; setting up such an application would require an expert to identify all of them. In addition, it is difficult to deploy applications with efficient usage of the resources available in the distributed environment of the cloud, and deploying different projects on the same resources is a challenge. To address this problem, we evaluated different deployment mechanisms using heterogeneous infrastructure-as-a-service (IaaS) offerings, OpenStack and Microsoft Azure. We also used the platform-as-a-service Kubernetes. Finally, to automate and integrate deployments, we used Spinnaker as the continuous delivery framework. The goal of this thesis work is to evaluate and improve different deployment mechanisms in terms of edge cloud performance. Performance depends on achieving efficient usage of cloud resources, reducing latency, scalability, replication and rolling upgrades, load balancing between data nodes, high availability, and zero downtime for deployed applications. These problems are solved by designing and deploying an infrastructure and platform in which Kubernetes (PaaS) is deployed on top of OpenStack (IaaS). In addition, the usage of Docker containers rather than regular virtual machines (container orchestration) has a huge impact. The conclusion of the report demonstrates and discusses the results along with various test cases regarding the usage of different deployment methods, and presents the deployment process. It also includes suggestions for developing more reliable and secure deployments in the future with a heterogeneous container orchestration infrastructure.
Traditionella mekanismer för utplacering av olika applikationer kan vara kostsamma när det gäller tid och resurser, särskilt när applikationen kräver en specifik miljö att köras i och har olika typer av beroenden; för att sätta upp en sådan applikation skulle det behövas en expert för att hitta alla nödvändiga beroenden. Dessutom är det svårt att distribuera applikationer med effektiv användning av de resurser som är tillgängliga i molnets distribuerade Edge Cloud-miljö. Att distribuera olika projekt på samma resurser är en utmaning. För att lösa detta problem utvärderade jag olika implementeringsmekanismer genom att använda heterogen infrastruktur-som-tjänst (IaaS) i form av OpenStack och Microsoft Azure. Jag använde också plattform-som-tjänst i form av Kubernetes. För att automatisera och automatiskt integrera implementeringar användes Spinnaker som ramverk för kontinuerlig leverans. Målet med detta avhandlingsarbete är att utvärdera och förbättra olika implementeringsmekanismer när det gäller Edge Cloud-prestanda. Prestanda beror på att uppnå effektiv användning av molnresurser, reducerad latens, skalbarhet, replikering och rullande uppgradering, lastbalansering mellan datanoder, hög tillgänglighet och noll stilleståndstid för distribuerade applikationer. Dessa problem löses i grunden genom att designa och distribuera infrastruktur och plattform där Kubernetes (PaaS) körs ovanpå OpenStack (IaaS). Dessutom har användningen av Docker-behållare i stället för vanliga virtuella maskiner (behållarorkestrering) en stor inverkan. Slutsatsen av rapporten visar och diskuterar resultaten tillsammans med olika testfall angående användningen av olika implementeringsmetoder samt presenterar installationsprocessen. Den innehåller också förslag på att utveckla mer tillförlitliga och säkra implementeringar i framtiden med en heterogen infrastruktur för behållarorkestrering.
APA, Harvard, Vancouver, ISO, and other styles
32

Mara, Jösch Ronja. "Managing Microservices with a Service Mesh : An implementation of a service mesh with Kubernetes and Istio." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280407.

Full text
Abstract:
The adoption of microservices facilitates extending computer systems in size, complexity, and distribution. Alongside their benefits, they introduce the possibility of partial failures. Besides focusing on the business logic, developers have to tackle the cross-cutting concerns of service-to-service communication, which now define the applications' reliability and performance. Currently, developers use libraries embedded in the application code to address these concerns. However, this increases the complexity of the code and requires the maintenance and management of various libraries. The service mesh is a relatively new technology that may enable developers to stay focused on their business logic. This thesis investigates one of the available service meshes, Istio, to identify its benefits and limitations. The main benefits found are that Istio adds resilience and security, allows features that are currently difficult to implement, and enables a cleaner structure and a standard implementation of features within and across teams. The drawbacks are that it decreases performance by increasing CPU usage, memory usage, and latency. Furthermore, the main disadvantage of Istio is its limited testing tools. Based on these findings, the company's Webcore Infra team can make a more informed decision on whether or not to introduce Istio.
Tillämpningen av microservices underlättar utvidgningen av datorsystem i storlek, komplexitet och distribution. Utöver fördelarna introducerar de möjligheten till partiella misslyckanden. Förutom att fokusera på affärslogiken måste utvecklare hantera övergripande problem med kommunikation mellan olika tjänster som nu definierar applikationernas pålitlighet och prestanda. För närvarande använder utvecklare bibliotek inbäddade i programkoden för att hantera dessa problem. Detta ökar dock kodens komplexitet och kräver underhåll och hantering av olika bibliotek. Service mesh är en relativt ny teknik som kan möjliggöra för utvecklare att hålla fokus på sin affärslogik. Denna avhandling undersöker ett av de tillgängliga service mesh som kallas Istio för att identifiera dess fördelar och begränsningar. De viktigaste fördelarna som hittas är att Istio lägger till resistens och säkerhet, tillåter funktioner som för närvarande är svåra att implementera och möjliggör en renare struktur och en standardimplementering av funktioner inom och över olika team. Nackdelarna är att det minskar prestandan genom att öka CPU-användning, minnesanvändning och latens. Dessutom är Istios största nackdel dess begränsade testverktyg. Baserat på resultaten kan Webcore Infra-teamet i företaget fatta ett mer informerat beslut om Istio ska införas eller inte.
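As an illustration of the security features Istio adds without any application code changes, mesh-wide mutual TLS can be enforced with a single resource. A minimal sketch (for a mesh-wide policy the resource must be named `default` and live in Istio's root namespace, here assumed to be `istio-system`):

```yaml
# Require mTLS for all service-to-service traffic in the mesh;
# plaintext connections to sidecar-injected workloads are rejected.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # assumed root namespace
spec:
  mtls:
    mode: STRICT
```

This is one concrete example of the "features currently difficult to implement" mentioned above: certificate issuance and rotation are handled by the sidecar proxies, not by each application team.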
APA, Harvard, Vancouver, ISO, and other styles
33

Esposto, Matteo. "Guacamole, un sistema web per l'accesso remoto ai PC dei laboratori, implementato su un cluster kubernetes." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23437/.

Full text
Abstract:
Through this work, a Kubernetes cluster was created to manage the services of Guacamole, the application used by the University's users to connect remotely to the laboratory PCs. The goals set at the outset were achieved, and in the future, after appropriate modifications, the system will be put into production.
APA, Harvard, Vancouver, ISO, and other styles
34

Сослуєв, О. В. "Система автоматизованого балансування навантаження для мікросервісної архітектурі на базі Docker." Thesis, Чернігів, 2020. http://ir.stu.cn.ua/123456789/23531.

Full text
Abstract:
Сослуєв, О. В. Система автоматизованого балансування навантаження для мікросервісної архітектурі на базі Docker : випускна кваліфікаційна робота : 123 "Комп’ютерна інженерія" / О. В. Сослуєв ; керівник роботи Є. В. Риндич ; НУ "Чернігівська політехніка", кафедра інформаційних та комп’ютерних систем. – Чернігів, 2020. – 106 с.
Дипломна робота має полягає в написанні конфігурації для веб-сайту. Конфігурація повинна працювати на всіх платформах як Linux, Windows, Mac OS. Docker - це дуже зручний інструмент для управління ізольованими Linux-контейнерами. За допомогою цього інструменту можна операційній системи запускати процеси в ізольованому оточенні на базі спеціально створених образів. Метою роботи є : -Проектування системи з використанням системи віртуалізації за допомогою інструменту Docker, що включає розробку и налаштування конфігурації. -Розробка конфігурації для автоматизованого балансування навантаження. -Можливість налаштування проценту переходу користувачів на сайт. Наприклад в системі банкінгу, є дві версії, одна з яких бета-версія сайту, друга це стара стабільна версія. Це необхідно для того щоб тестувальники на великих проєктах, витрачали менше ресурсів та часу на виконання тестів. Середній час відвідування сайту банку займає близько 30 хвилин. За останні роки розробники додатків та сайтів, хотіли щоб їх продукт стабільно працював на декількох платформах. Швидко запускався та розгортався за короткий час. Один раз витрачавши час на конфігурацію, а потім її використовуєш в декількох проектах де вона необхідна, значно спрощує розробку.
The subject of the thesis is to write the configuration for a website. The configuration must work on all platforms: Linux, Windows, and Mac OS. Docker is a very handy tool for managing isolated Linux containers. With this tool, the operating system can run processes in an isolated environment based on specially created images. The purpose of the work is: to design the system using virtualization with the Docker tool, including the development and tuning of the configuration; to develop a configuration for automated load balancing; and to make it possible to adjust the percentage of users directed to each version of the site. For example, in a banking system there are two versions, one of which is a beta version of the site and the other the old stable version. This is necessary so that testers on large projects spend fewer resources and less time on tests; the average visit to a bank's website takes about 30 minutes. In recent years, application and website developers have wanted their products to run stably on multiple platforms and to start up and deploy quickly. Spending time on a configuration once and then reusing it in several projects where it is needed greatly simplifies development.
APA, Harvard, Vancouver, ISO, and other styles
35

Ernfridsson, Alexander. "Sammansättning av ett privat moln som infrastruktur för utveckling." Thesis, Linköpings universitet, Programvara och system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-142891.

Full text
Abstract:
Today it is common to manage, describe, and configure data infrastructure such as processes, servers, and environments in machine-readable configuration files instead of through physical hardware or interactive configuration tools. Automated infrastructure is becoming more and more common as a way to focus more on development while obtaining a more stable system. As a result, the number of tools for infrastructure automation has soared over the last decade. Solutions for automating different kinds of infrastructure have become more complex and often involve many tools interacting with each other. This bachelor thesis compares, selects, and combines existing platforms and tools to create a private cloud as an infrastructure for development, in order to make the life cycle of a server-based runtime environment more efficient. A literature-based comparison of the cloud platforms OpenStack, OpenNebula, CloudStack, and Eucalyptus lays the foundation of the cloud. The cloud platform is then complemented with other tools and solutions to complete the life-cycle automation of runtime environments. A prototype of the solution was created to analyze practical problems. The work shows that a combination of OpenStack, Docker, container orchestration, and configuration tools is a promising solution that scales on demand, automates, and manages the organization's configurations for runtime environments.
APA, Harvard, Vancouver, ISO, and other styles
36

Ceroni, Ruben. "Kubernetes su OpenStack: deployment automatizzato su un cluster ARM di un private cloud per l’orchestrazione di container." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24968/.

Full text
Abstract:
Kubernetes is by now the industry's de facto standard for container orchestration. It is typically built on infrastructure provided by public cloud providers, which abstract away from the user all the low-level aspects needed to bring up the cluster. The goal of this thesis is to reproduce, at a reduced scale, a private cloud infrastructure on low-cost hardware and to use it to build a Kubernetes cluster. This makes it possible to analyze in depth all the aspects involved in building a private cloud and in using it through Kubernetes. To achieve these goals it was necessary, after identifying the hardware, to build a bare-metal cloud managed through MAAS. On top of this first layer, the OpenStack cloud was installed using Kolla-Ansible. On this basis, Kubernetes was installed, including cluster autoscaling mechanisms. Wherever possible, the provisioning and configuration of resources were automated with IaC tools: Terraform and Ansible. The results obtained demonstrated the feasibility of the stated goals, yielding a system with adequate performance and characteristics while providing insight into the process of managing a private cloud.
APA, Harvard, Vancouver, ISO, and other styles
37

Midigudla, Dhananjay. "Performance Analysis of the Impact of Vertical Scaling on Application Containerized with Docker : Kubernetes on Amazon Web Services - EC2." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18189.

Full text
Abstract:
Containers are widely used as a base technology to package applications, and the microservice architecture is gaining popularity for deploying large-scale applications, with containers running different aspects of the application. Due to the dynamic load on a service, the compute resources allocated to containerized applications need to be scaled up or down in order to maintain the performance of the application. Objectives: To evaluate the impact of vertical scaling on the performance of a containerized application deployed with Docker containers and Kubernetes, including identification of the performance metrics that are most affected, and hence to characterize the eventual negative effect of vertical scaling. Method: A literature study on Kubernetes and Docker containers, followed by a proposed vertical scaling solution that can add or remove compute resources such as CPU and memory for the containerized application. Results and Conclusions: Latency and connect times were the performance metrics analyzed for the containerized application. From the obtained results, it was concluded that vertical scaling has no significant impact on the performance of a containerized application in terms of latency and connect times.
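The thesis proposes its own vertical scaling mechanism; in stock Kubernetes the same idea is packaged as the Vertical Pod Autoscaler add-on (installed separately, not part of the core distribution). A minimal sketch, with an assumed target Deployment name:

```yaml
# VPA add-on resource: observes usage and adjusts the pods'
# CPU/memory requests, recreating them when updateMode is Auto.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: app-vpa               # assumed name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: containerized-app   # assumed Deployment name
  updatePolicy:
    updateMode: "Auto"
```

Because pods are recreated when their resources change, vertical scaling of this kind briefly interrupts connections, which is one reason latency and connect times are the natural metrics to study.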
APA, Harvard, Vancouver, ISO, and other styles
38

Gangalic, Catalin. "Improving Queuing Time in a Pull Based Containerized Continuous Integration Build System." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-303114.

Full text
Abstract:
Most medium and large software companies around the world now use some form of continuous automatic build system, with smaller companies following suit. This push towards a more continuous flow has driven more innovation in the domain and the adoption of various orchestration tools for these builds. At the same time, most continuous integration build systems do not leverage their data to improve total build time. This thesis intends to decrease the overall build time in a pull-based build system named Blazar. This is achieved by decreasing the average time a build waits before being allocated a resource by the orchestration tool, Kubernetes. The average queuing time is improved by leveraging past data on the queue load of the system to predict the amount of resources needed and preemptively allocate them. In the thesis, various time series prediction models are explored in order to find the most relevant one with regard to the available data. The final choice is Facebook's Prophet, due to its ability to leverage multiple seasonalities, handle outliers, accommodate holidays, and provide fast predictions. By tuning various model parameters, it was possible to achieve satisfactory results: for some of the tested periods, the average queuing time decreased by up to 20% compared to running without any prediction model, while maintaining reasonable resource usage. Finally, this thesis represents a practical approach that can be applied to other applications and systems. It also details its limitations while discussing other solutions and ideas to further improve the results.
De flesta medelstora och större mjukvaruföretag runt om i världen använder idag någon form av kontinuerliga automatiska byggsystem, något som mindre företag även har börjat efterfölja. Detta tillvägagångssätt mot ett mer kontinuerligt flöde har drivit för mer innovation inom domänen och adopteringen av olika orkestreringsverktyg för dessa byggda program. Samtidigt utnyttjar de flesta kontinuerliga integrationssystem inte den data de samlar in för att förbättra den totala byggtiden. Denna uppsats avser att minska den totala byggtiden i ett pull-baserat byggsystem som heter Blazar. Detta uppnås genom att minska den genomsnittliga tid som ett byggt program väntar innan den tilldelas en resurs av orkestreringsverktyget, Kubernetes. Förbättringen av den genomsnittliga kötiden fås genom att utnyttja tidigare data om systemets köbelastning med omfattningen att förutsäga mängden resurser och fördela dem förebyggande. I avhandlingen undersöks olika tidsserieprognosmodeller för att hitta den mest relevanta med avseende på tillgänglig data. Det slutliga valet av modellen är Facebooks Prophet på grund av dess förmåga att utnyttja flera säsongsbestämmelser, hantera avvikelser, helgdagar och ge snabba förutsägelser. Genom att ställa in olika modellparametrar var det möjligt att uppnå tillfredsställande resultat. Under några av de testade perioderna minskade således den genomsnittliga kötiden med upp till 20%, samtidigt som en rimlig resursanvändning bibehölls, jämfört med tiden som ficks utan att använda någon förutsägelsemodell. Slutligen avser denna avhandling inte att ge en toppmodern lösning. Således slutar det med att beskriva sina begränsningar samtidigt som de tillhandahåller andra lösningar och idéer som kan förbättra resultaten.
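One standard way to turn such queue-load predictions into preemptively allocated capacity (not necessarily the mechanism used in Blazar) is the overprovisioning pattern: low-priority placeholder pods reserve room on nodes and are preempted the moment real build pods arrive. A sketch with assumed names and sizes, where the prediction model would drive the `replicas` count:

```yaml
# Priority lower than any real workload, so placeholders lose first.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -10
globalDefault: false
description: "Placeholder pods preempted when real build pods arrive"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: build-capacity-reserve  # assumed name
spec:
  replicas: 3                   # the prediction model would set this
  selector:
    matchLabels:
      app: capacity-reserve
  template:
    metadata:
      labels:
        app: capacity-reserve
    spec:
      priorityClassName: overprovisioning
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: "1"          # assumed size of one build slot
              memory: 1Gi
```

When a build pod is scheduled, the kube-scheduler evicts a placeholder to make room immediately, so the build skips the queue while the cluster autoscaler replaces the evicted placeholder in the background.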
APA, Harvard, Vancouver, ISO, and other styles
39

Fahs, Ali Jawad. "Proximity-aware replicas management in geo-distributed fog computing platforms." Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S076.

Full text
Abstract:
L'architecture géo-distribuée de fog computing fournit aux utilisateurs des ressources accessibles avec une faible latence. Cependant, exploiter pleinement cette architecture nécessite une distribution similaire de l'application par l'utilisation de techniques de réplication. Par conséquent, la gestion de ces répliques doit intégrer des algorithmes prenant en compte la proximité aux différents niveaux de gestion des ressources du système. Dans cette thèse, nous avons abordé ce problème à travers trois contributions. Premièrement, nous avons conçu un système de routage des requêtes entre les utilisateurs et les ressources prenant en compte la proximité. Deuxièmement, nous avons proposé des algorithmes dynamiques pour le placement des répliques prenant en compte les derniers percentiles de la latence. Enfin, nous avons développé un système de mise à l'échelle automatique qui ajuste le nombre des répliques de l'application en fonction de la charge subie par les applications fog computing.
Geo-distributed fog computing architectures provide users with resources reachable with low latency. However, fully exploiting the fog architecture requires a similar distribution of the application by means of replication. As a result, fog application replica management should implement proximity-aware algorithms at the different levels of resource management. In this thesis, we addressed this problem through three contributions. First, we designed a proximity-aware user-to-replica routing mechanism. Second, we proposed dynamic tail-latency-aware replica placement algorithms. Finally, we developed autoscaling algorithms to dynamically scale the application resources according to the non-stationary workload experienced by fog platforms.
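A rough illustration of proximity-aware user-to-replica routing: send each request to the lowest-latency replica that is not overloaded. The data layout, load threshold, and names below are invented for the example, not taken from the thesis.

```python
def route_request(user, replicas, max_load=0.8):
    """Pick the closest replica (lowest measured latency to this user) that
    is not overloaded; fall back to all replicas if every one is hot."""
    candidates = [r for r in replicas if r["load"] < max_load]
    pool = candidates or replicas
    return min(pool, key=lambda r: r["latency_ms"][user])

replicas = [
    {"name": "edge-a", "load": 0.9, "latency_ms": {"u1": 5,  "u2": 40}},
    {"name": "edge-b", "load": 0.3, "latency_ms": {"u1": 12, "u2": 8}},
    {"name": "core",   "load": 0.2, "latency_ms": {"u1": 35, "u2": 30}},
]
print(route_request("u1", replicas)["name"])  # edge-a is overloaded → edge-b
```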
APA, Harvard, Vancouver, ISO, and other styles
40

Falkman, Oscar, and Moa Thorén. "Improving Software Deployment and Maintenance : Case study: Container vs. Virtual Machine." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234930.

Full text
Abstract:
Setting up one's software environment and ensuring that all dependencies and settings are the same across the board when deploying an application can nowadays be a time-consuming and frustrating experience. To solve this, the industry has come up with an alternative deployment environment called software containers, or simply containers. These are supposed to help eliminate the current troubles with virtual machines and create a more streamlined deployment experience. The aim of this study was to compare this deployment technique, containers, against the currently most popular method, virtual machines. This was done using a case study where an already developed application was migrated to a container and deployed online using a cloud provider's services. The application could then be deployed via the same cloud service but onto a virtual machine directly, enabling a comparison of the two techniques. During these processes, information was gathered concerning the usability of the two environments. To gain a broader perspective on usability, an interview was conducted as well, resulting in more well-founded conclusions. The conclusion is that containers are more efficient regarding the use of resources. This could improve the service provided to customers by improving its quality through more reliable uptimes and speed of service. However, containers also grant more freedom and transfer most of the responsibility over to the developers. This is not always a benefit in larger companies, where regulations must be followed, where a certain level of control over development processes is necessary, and where quality control is very important. Further research could be done to see whether containers can be adapted to a company's current environment, and how different cloud providers' services differ.
Att sätta upp och konfigurera sin utvecklingsmiljö, samt att försäkra sig om att alla beroenden och inställningar är lika överallt när man distribuerar en applikation, kan numera vara en tidskrävande och frustrerande process. För att förbättra detta, har industrin utvecklat en alternativ distributionsmiljö som man kallar “software containers” eller helt enkelt “containers”. Dessa är ämnade att eliminera de nuvarande problemen med virtuella maskiner och skapa en mer strömlinjeformad distributionsupplevlese. Målet med denna studie var att jämföra denna nya distributionsteknik, containrar, med den mest använda tekniken i dagsläget, virtuella maskiner. Detta genomfördes med hjälp av en fallstudie, där en redan färdigutvecklad applikation migrerades till en container, och sedan distribuerades publikt genom en molnbaserad tjänst. Applikationen kunde sedan distribueras via samma molnbaserade tjänst men på en virtuell maskin istället, vilket möjliggjorde en jämförelse av de båda teknikerna. Under denna process, samlades även information in kring användbarheten av de båda teknikerna. För att få ett mer nyanserat perspektiv vad gäller användbarheten, så hölls även en intervju, vilket resulterade i något mer välgrundade slutsatser. Slutsatsen som nåddes var att containrar är mer effektiva resursmässigt. Detta kan förbättra den tjänst som erbjuds kunder genom att förbättra kvalitén på tjänsten genom pålitliga upp-tider och hastigheten av tjänsten. Däremot innebär en kontainerlösning att mer frihet, och därmed även mer ansvar, förflyttas till utvecklarna. Detta är inte alltid en fördel i större företag, där regler och begränsningar måste följas, en viss kontroll över utvecklingsprocesser är nödvändig och där det ofta är mycket viktigt med strikta kvalitetskontroller. Vidare forskning kan utföras för att undersöka huruvida containers kan anpassas till ett företags nuvarande utvecklingsmiljö. 
Olika molntjänster för distribuering av applikationer, samt skillnaderna mellan dessa, är också ett område där vidare undersökning kan bedrivas.
APA, Harvard, Vancouver, ISO, and other styles
41

Åsberg, Niklas. "Optimized Autoscaling of Cloud Native Applications." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-84641.

Full text
Abstract:
Software containers are changing the way distributed applications are executed and managed on cloud computing resources. Autoscaling allows containerized applications and services to run resiliently with high availability without the demand of user intervention. However, specifying an autoscaling policy that can guarantee that no performance violations will take place is an extremely hard task, and doomed to fail unless considerable care is taken. Existing autoscaling solutions try to solve this problem but fail to consider application-specific parameters when doing so, thus causing poor resource utilization and/or unsatisfactory quality of service in certain dynamic workload scenarios. This thesis proposes an autoscaling solution that enables cloud native applications to autoscale based on application-specific parameters. The proposed solution consists of a profiling strategy that detects key parameters that affect the performance of autoscaling, and an autoscaling algorithm that automatically enforces autoscaling decisions based on parameters derived from the profiling strategy. The proposed solution is compared and evaluated against the default autoscaling feature in Kubernetes during different realistic user scenarios. Results from the testing scenarios indicate that the proposed solution, which uses application-specific parameters, outperforms the default autoscaling feature of Kubernetes in resource utilization while keeping SLO violations at a minimum.
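For reference, the default Kubernetes Horizontal Pod Autoscaler that work like this is typically compared against scales replicas on a target-utilization ratio. A simplified sketch of that rule follows, ignoring the tolerance window and stabilization behavior of the real controller; the clamping bounds here are arbitrary example values.

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric,
                         min_replicas=1, max_replicas=10):
    """Simplified form of the HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to the configured replica bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 3 pods averaging 180m CPU against a 60m target → scale out to 9
print(hpa_desired_replicas(3, current_metric=180, target_metric=60))  # → 9
```

An application-aware autoscaler, as proposed in the thesis, would replace the single generic metric ratio with parameters derived from profiling the specific application.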
APA, Harvard, Vancouver, ISO, and other styles
42

Johansson, Erik. "Lookaside Load Balancing in a Service Mesh Environment." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286002.

Full text
Abstract:
As more online services are migrated from monolithic systems into decoupled distributed microservices, the need for efficient internal load balancing solutions increases. Today, there exist two main approaches for load balancing internal traffic between microservices. One approach uses either a central or sidecar proxy to load balance queries over all available server endpoints. The other approach lets clients themselves decide which of all available endpoints to send queries to. This study investigates a new approach called lookaside load balancing. This approach consists of a load balancer that uses the control plane to gather a list of service endpoints and their current load. The load balancer can then dynamically provide clients with a subset of suitable endpoints they connect to directly. The endpoint distribution is controlled by a lookaside load balancing algorithm. This study presents such an algorithm, which works by changing the endpoint assignment in order to keep current load between an upper and a lower bound. In order to compare each of these three load balancing approaches, a test environment in Kubernetes is constructed and modeled to be similar to a real service mesh. With this test environment, we perform four experiments. The first experiment aims at finding suitable settings for the lookaside load balancing algorithm as well as a baseline load configuration for clients and servers. The second experiment evaluates the underlying network infrastructure to test for possible bias in latency measurements. The final two experiments evaluate each load balancing approach in both high- and low-load scenarios. Results show that lookaside load balancing can achieve similar performance to client-side load balancing in terms of latency and load distribution, but with a smaller CPU and memory footprint. When load is high and uneven, or when compute resource usage should be minimized, the centralized proxy approach is better. 
With regards to traffic flow control and failure resilience, we can show that lookaside load balancing is better than client-side load balancing. We draw the conclusion that lookaside load balancing can be an alternative approach to client-side load balancing as well as proxy load balancing for some scenarios.
Då fler online tjänster flyttas från monolitsystem till uppdelade distribuerade mikrotjänster, ökas behovet av intern lastbalansering. Idag existerar det två huvudsakliga tillvägagångssätt för intern lastbalansering mellan interna mikrotjänster. Ett sätt använder sig antingen utav en central- eller sido-proxy för att lastbalansera trafik över alla tillgängliga serverinstanser. Det andra sättet låter klienter själva välja vilken utav alla serverinstanser att skicka trafik till. Denna studie undersöker ett nytt tillvägagångssätt kallat extern lastbalansering. Detta tillvägagångssätt består av en lastbalanserare som använder kontrollplanet för att hämta en lista av alla serverinstanser och deras aktuella last. Lastbalanseraren kan då dynamiskt tillsätta en delmängd av alla serverinstanser till klienter och låta dom skapa direktkopplingar. Tillsättningen av serverinstanser kontrolleras av en extern lastbalanseringsalgoritm. Denna studie presenterar en sådan algoritm som fungerar genom att ändra på tillsättningen av serverinstanser för att kunna hålla lasten mellan en övre och lägre gräns. För att kunna jämföra dessa tre tillvägagångssätt för lastbalansering konstrueras och modelleras en testmiljö i Kubernetes till att vara lik ett riktigt service mesh. Med denna testmiljö utför vi fyra experiment. Det första experimentet har som syfte att hitta passande inställningar till den externa lastbalanseringsalgoritmen, samt att hitta en baskonfiguration för last hos klienter och servrar. Det andra experimentet evaluerar den underliggande nätverksinfrastrukturen för att testa efter potentiell partiskhet i latensmätningar. De sista två experimenten evaluerar varje tillvägagångssätt av lastbalansering i både scenarier med hög och låg belastning. Resultaten visar att extern lastbalansering kan uppnå liknande prestanda som klientlastbalansering avseende latens och lastdistribution, men med lägre CPU- och minnesanvändning. 
När belastningen är hög och ojämn, eller när beräkningsresurserna borde minimeras, är den centraliserade proxy-metoden bättre. Med hänsyn till kontroll över trafikflöde och resistans till systemfel kan vi visa att extern lastbalansering är bättre än klientlastbalansering. Vi drar slutsatsen att extern lastbalansering kan vara ett alternativ till klientlastbalansering samt proxylastbalansering i vissa fall.
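The bounded-load idea in the abstract above — grow the client-visible endpoint subset when assigned endpoints run hot, shrink it when they all run cold — can be illustrated roughly as follows. The thresholds, data shapes, and single-step policy are assumptions for the sketch, not the thesis's actual algorithm.

```python
def adjust_assignment(endpoints, assigned, low=0.3, high=0.7):
    """One step of a lookaside-style rebalancing loop (illustrative only):
    endpoints maps endpoint name → current load in [0, 1];
    assigned is the subset currently handed out to clients."""
    loads = [endpoints[e] for e in assigned]
    spare = [e for e in endpoints if e not in assigned]
    if any(l > high for l in loads) and spare:
        # some assigned endpoint is hot: add the least-loaded spare endpoint
        assigned = assigned + [min(spare, key=endpoints.get)]
    elif all(l < low for l in loads) and len(assigned) > 1:
        # everything is cold: drop one endpoint to consolidate traffic
        assigned = assigned[:-1]
    return assigned

endpoints = {"s1": 0.9, "s2": 0.5, "s3": 0.1, "s4": 0.2}
print(adjust_assignment(endpoints, ["s1", "s2"]))  # s1 is hot → s3 is added
```

A real lookaside balancer would run this kind of step continuously against load reports gathered from the control plane.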
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Thomas. "Predictive vertical CPU autoscaling in Kubernetes based on time-series forecasting with Holt-Winters exponential smoothing and long short-term memory." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-294163.

Full text
Abstract:
Private and public clouds require users to specify requests for resources such as CPU and memory (RAM) to be provisioned for their applications. The values of these requests do not necessarily relate to the application’s run-time requirements, but only help the cloud infrastructure resource manager to map requested virtual resources to physical resources. If an application exceeds these values, it might be throttled or even terminated. Consequently, requested values are often overestimated, resulting in poor resource utilization in the cloud infrastructure. Autoscaling is a technique used to overcome these problems. In this research, we formulated two new predictive CPU autoscaling strategies for Kubernetes containerized applications, using time-series analysis based on Holt-Winters exponential smoothing and long short-term memory (LSTM) artificial recurrent neural networks. The two approaches were analyzed, and their performance was compared to that of the default Kubernetes Vertical Pod Autoscaler (VPA). Efficiency was evaluated in terms of CPU resource wastage and insufficient CPU percentage and amount, for container workloads from Alibaba Cluster Trace 2018 and others. In our experiments, we observed that the Kubernetes Vertical Pod Autoscaler (VPA) tended to perform poorly on workloads that change periodically. Our results showed that, compared to VPA, predictive methods based on Holt-Winters exponential smoothing (HW) and long short-term memory (LSTM) can decrease CPU wastage by over 40% while avoiding CPU insufficiency for various CPU workloads. Furthermore, LSTM was shown to generate stabler predictions than HW, which allowed for more robust scaling decisions.
Privata och offentliga moln kräver att användare begär mängden CPU och minne (RAM) som ska fördelas till sina applikationer. Mängden resurser är inte nödvändigtvis relaterat till applikationernas körtidskrav, utan är till för att molninfrastrukturresurshanteraren ska kunna kartlägga begärda virtuella resurser till fysiska resurser. Om en applikation överskrider dessa värden kan den saktas ner eller till och med krascha. För att undvika störningar överskattas begärda värden oftast, vilket kan resultera i ineffektiv resursutnyttjande i molninfrastrukturen. Autoskalning är en teknik som används för att överkomma dessa problem. I denna forskning formulerade vi två nya prediktiva CPU autoskalningsstrategier för containeriserade applikationer i Kubernetes, med hjälp av tidsserieanalys baserad på metoderna Holt-Winters exponentiell utjämning och långt korttidsminne (LSTM) återkommande neurala nätverk. De två metoderna analyserades, och deras prestationer jämfördes med Kubernetes Vertical Pod Autoscaler (VPA). Prestation utvärderades genom att observera under- och överutilisering av CPU-resurser, för diverse containerarbetsbelastningar från bl. a. Alibaba Cluster Trace 2018. Vi observerade att Kubernetes Vertical Pod Autoscaler (VPA) i våra experiment tenderade att prestera dåligt på arbetsbelastningar som förändras periodvist. Våra resultat visar att jämfört med VPA kan prediktiva metoder baserade på Holt-Winters exponentiell utjämning (HW) och långt korttidsminne (LSTM) minska överflödig CPU-användning med över 40 % samtidigt som de undviker CPU-brist för olika arbetsbelastningar. Ytterligare visade sig LSTM generera stabilare prediktioner jämfört med HW, vilket ledde till mer robusta autoskalningsbeslut.
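A minimal pure-Python version of additive Holt-Winters smoothing, one of the two forecasting methods compared in this thesis, might look as follows. The initialization scheme and smoothing parameters are simplified assumptions (a real implementation would fit them to the data), and the series must span at least two seasonal periods.

```python
def holt_winters_additive(series, period, alpha=0.5, beta=0.3, gamma=0.4):
    """Additive Holt-Winters smoothing; returns a one-step-ahead forecast.
    Assumes len(series) >= 2 * period; initialization is deliberately simple."""
    level = sum(series[:period]) / period
    trend = (sum(series[period:2 * period]) - sum(series[:period])) / period ** 2
    season = [x - level for x in series[:period]]
    for i, x in enumerate(series[period:], start=period):
        last_level = level
        level = alpha * (x - season[i % period]) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[i % period] = gamma * (x - level) + (1 - gamma) * season[i % period]
    n = len(series)
    return level + trend + season[n % period]

cpu = [10, 30, 20, 11, 31, 21, 12, 32, 22]  # period-3 pattern, slight upward drift
forecast = holt_winters_additive(cpu, period=3)
print(round(forecast, 1))  # next value in the pattern would be ~13
```

A predictive autoscaler would feed such a forecast of CPU demand into its resource-request decision instead of reacting to current usage only.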
APA, Harvard, Vancouver, ISO, and other styles
44

Patera, Lorenzo. "Performance analysis of Kafka distributed streaming platform for Industry 4.0." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17519/.

Full text
Abstract:
The transition to Industry 4.0 is producing a notable change in the management and organization of factories. Robots, sensors, and smart devices work together to provide advanced and modern services to the industries. Manufacturing industries have highly specialized machines worth thousands of euros that are often updated very slowly and do not have advanced interfaces to export the data they contain. These are extremely useful when analyzed with advanced tools, allowing the predictive maintenance of the machines and the discovery of any fragility. To collect and process data coming from hundreds of machines, located in many company offices, cutting-edge technologies such as message exchange middleware are used. These, when correctly configured, guarantee the scalability of the system as the number of messages increases. The use of containers and supporting infrastructures allowed the components to be easily and virtually detached from the lease. Finally, a layered architecture allows a logical and physical separation within the company, guaranteeing levels of security necessary to avoid damages deriving from possible cyber-attacks on the machines. The proposed architecture was finally validated and tested thanks to the collaboration with a company in the Emilia-Romagna territory.
APA, Harvard, Vancouver, ISO, and other styles
45

Haar, Christoph, and Erik Buchmann. "Securing Orchestrated Containers with BSI Module SYS.1.6." Hochschule für Telekommunikation, 2021. https://slub.qucosa.de/id/qucosa%3A73371.

Full text
Abstract:
Orchestrated container virtualization, such as Docker/Kubernetes, is an attractive option to transfer complex IT ecosystems into the cloud. However, this is associated with new challenges for IT security. Containers store sensitive data with the code. The orchestration decides at run-time which containers are executed on which host. Application code is obtained as images from external sources at run-time. Typically, the operator of the cloud is not the owner of the data. Therefore, the configuration of the orchestration is critical, and an attractive target for attackers. A prominent option to secure IT infrastructures is to use security guidelines from agencies, such as Germany's Federal Office for Information Security. In this work, we analyze the module "SYS.1.6 Container" from this agency. We want to find out how suitable this module is to secure a typical Kubernetes scenario. Our scenario is a classical 3-tier architecture with front end, business logic and database back end. We show that with orchestration, the protection needs for the entire Kubernetes cluster in terms of confidentiality, integrity and availability automatically become "high" as soon as a sensitive data object is processed or stored in any container. Our analysis has shown that the SYS.1.6 module is generally suitable. However, we have identified three additional threats. Two of them could be exploited automatically, as soon as a respective vulnerability in Docker/Kubernetes appears.
APA, Harvard, Vancouver, ISO, and other styles
46

Leo, Zacharias. "Achieving a Reusable Reference Architecture for Microservices in Cloud Environments." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44601.

Full text
Abstract:
Microservices are a new trend in application development. They allow for breaking down big monolithic applications into smaller parts that can be updated and scaled independently. However, there are still many uncertainties when it comes to the standards of microservices, which can lead to costly and time-consuming creations or migrations of system architectures. One of the more common ways of deploying microservices is through the use of containers and a container orchestration platform, most commonly the open-source platform Kubernetes. In order to speed up the creation or migration, it is possible to use a reference architecture that acts as a blueprint to follow when designing and implementing the architecture. Using a reference architecture will lead to more standardized architectures, which in turn are more time- and cost-effective. This thesis proposes such a reference architecture to be used when designing microservice architectures. The goal of the reference architecture is to provide a product that meets the needs and expectations of companies that already use microservices or might adopt microservices in the future. In order to achieve the goal of the thesis, the work was divided into three main phases. First, a questionnaire was conducted and sent out to be answered by experts in the area of microservices or system architectures. Second, literature studies were made on the state of the art and practice of reference architectures and microservice architectures. Third, studies were made on the Kubernetes components found in the Kubernetes documentation, which were evaluated and chosen depending on how well they reflected the needs of the companies. This thesis finally proposes a reference architecture with components chosen according to the needs and expectations of the companies identified by the questionnaire.
APA, Harvard, Vancouver, ISO, and other styles
47

Malpezzi, Paolo. "O-RAN software deployment and orchestration on virtualized infrastructures." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24128/.

Full text
Abstract:
Future Radio Access Networks will need to cope with an increasing complexity and new challenges due to the introduction of heterogeneous application scenarios, from enhanced Mobile Broadband to Network Slicing service diversification. In this context, the new trend of Open RAN is gaining importance, envisioning a transformation of traditional Radio Access Networks toward softwarization, virtualization and disaggregation of network functionalities, leveraging open and programmable protocols and interfaces. Embracing this new trend, this thesis aims to evaluate the software-based solutions promoted by the O-RAN Alliance, one of the major contributors toward an open and intelligent RAN architectural framework, aligned with 5G standards. The work will focus on the practical challenges which have been encountered while deploying the available O-RAN software over a virtualized infrastructure, investigating the compatibility of its integration within a containerized environment orchestrated by Kubernetes.
APA, Harvard, Vancouver, ISO, and other styles
48

Hornický, Michal. "Návrh a implementace distribuovaného systému pro algoritmické obchodování." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-399197.

Full text
Abstract:
Innovation in financial markets creates new opportunities. Algorithmic trading is a suitable way to exploit these opportunities. This thesis deals with the design and implementation of a system that allows its users to create their own trading strategies and use them to trade on exchanges. The thesis emphasizes the design of a distributed system that is scalable, using cloud computing technologies.
APA, Harvard, Vancouver, ISO, and other styles
49

Widerberg, Anton, and Erik Johansson. "Observability of Cloud Native Systems: : An industrial case study of system comprehension with Prometheus & knowledge transfer." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-22019.

Full text
Abstract:
Background: Acquiring comprehension and observability of software systems is a vital and necessary activity for testing and maintenance; however, these tasks are time-consuming for engineers. Concurrently, cloud computing requires microservices to enhance the utilization of cloud-native deployment, which simultaneously introduces a high degree of complexity. Further, codifying and distributing technical knowledge within the organization has been proven to be vital for both competitiveness and financial performance. However, doing so successfully has proven difficult, and transitioning to working virtually and in DevOps brings new potential challenges for software firms.
Objective: The objective of this study is to explore how system comprehension of a microservice architecture can be improved from performance metrics through an exploratory data analysis approach. To further enhance the practical business value, the thesis also aims to explore the effects that transitioning to virtual work and DevOps have had on knowledge sharing in software firms.
Method: A case study is conducted at Ericsson with performance data generated from testing of a system deployed in Kubernetes. Data is extracted with Prometheus, and the performance behavior of four interacting pods is explored with correlation analysis and visualization tools. Furthermore, to explore the effects of virtual work and DevOps on intra-organizational sharing of technical knowledge, semi-structured interviews were cross-analyzed with literature. 
Results: An overall high correlation between performance metrics could be observed, with deviations between test cases. Also, we were able to generate propositions regarding the performance behavior as well as bring forward possible candidates for predictive modeling. Four new potential decisive factors driving the choice of activities and transfer mechanisms for knowledge transfer are identified: accessibility, dynamicity, established processes, and efficiency. The transition to virtual work showed five positive factors and three negative ones. Effects from DevOps were mostly connected to the frequency of sharing and the potential for automation.
Conclusions: Our findings suggest that correlation analysis, when used along with visualization tools, can improve system comprehension of cloud-native systems. While it shows promise for analyzing individual services and hypothesis creation, the method utilized in the study showcased some drawbacks, which are covered in the discussion. The findings also point towards performance metrics being a rich information source for knowledge, and thus deserving further investigation. Findings also suggest that knowledge sharing is not only considered an important element by academia but is also deliberately practiced by industry agents. Looking at the transition to virtual work and DevOps, the results imply that they affect knowledge transfer, both in combination and in isolation. However, the case study findings do point towards the transition to working virtually potentially exerting the larger influence. Interviewees expressed both positive and negative aspects of virtual knowledge sharing. Simultaneously, the positive influences of DevOps were followed by extensive challenges.
Bakgrund: Att erhålla förståelse och observerbarhet av mjukvarusystem är en vital och nödvändig aktivitet, speciellt för testning och underhåll. Samtidigt så är dessa uppgifter både komplexa och tidskrävande för ingenjörer. Mikroservicearkitekturen som utnyttjas för att bygga molnintegrerade lösningar introducerar en hög grad av komplexitet. Fortsättningsvis, att kodifiera och distribuera teknisk kunskap har visats vara kritiskt för organisationers konkurrenskraft och finansiella resultat. Att göra det framgångsrikt har dock flera utmaningar, och när flera mjukvarubolag under senare tid övergått till att arbeta virtuellt samt skiftat till DevOps har flertalet nya potentiella utmaningar uppdagats.
Syfte: Målet med denna studie är att utforska hur systemförståelse av mjukvarusystem baserade på en mikroservicearkitektur kan förbättras utifrån prestandamätningar med hjälp av undersökande dataanalysmetoder. För att ytterligare utöka det praktiska affärsvärdet så avser avhandlingen även att undersöka effekterna som övergången till virtuellt arbete och DevOps har haft på den interna kunskapsspridningen inom mjukvarubolag.
Metod: En fallstudie utförs på Ericsson AB med prestandadata som genererats under testkörningar av ett system som kör på Kubernetes. Data extraheras med Prometheus, och prestationsbeteendet utav fyra interagerande ”pods” utforskas genom korrelationsanalys och visualiseringsverktyg. För att undersöka effekterna som virtuellt arbete samt DevOps har på intraorganisatorisk kunskapsdelning av teknisk kunskap så utförs semi-strukturerade intervjuer som sedan korsanalyseras med litteratur.
Resultat: Överlag så uppvisas hög korrelation mellan prestandamätvärden samtidigt som tydliga avvikelser observerades mellan testfall. Utöver detta så genererades propositioner angående prestationsbeteendet samtidigt som potentiella kandidater för prediktiv modellering framhävs. 
Fyra nya potentiella determinanter identifieras för valet av aktiviteter samt överföringsmekanism, nämligen tillgänglighet, dynamik, etablerade processer, och effektivitet. Övergången till virtuellt arbete uppvisade främst fem positiva faktorer och tre negativa. Effekterna utav DevOps var särskilt kopplade till frekvensen av delning samt potential för automation. Slutsatser: Våra resultat tyder på att korrelationsanalys i kombination med visualiseringsverktyg kan användas för att skapa systemförståelse av molnbaserade system. Samtidigt som metoden visar potential för att analysera individuella tjänster och generera hypoteser så påvisar metoden i vår studie vissa nackdelar vilket tas upp i diskussionen. Resultatet tyder dessutom på att prestandadata kan vara en rik informationskälla för kunskapsskapande och bör vara av intresse för ytterligare studier.Resultaten av den kvalitativa undersökning indikerar att kunskapshantering inte bara är ett viktigt element ur akademins perspektiv men även något som omsorgsfullt praktiseras av industrin. Resultatet angående övergången till virtuellt arbete samt DevOps antyder på att båda har inflytande på hur kunskapsspridning bedrivs, både var för sig och i kombination. Samtidigt pekar våra undersökningsresultat på att övergången till att arbeta virtuellt potentiellt har påverkat kunskapshantering i betydligt större utsträckning än DevOps. Intervjuerna uppvisade både positiva och negativa aspekter utav den virtuella påverkan samtidigt som de positiva effekter som uppmättes av DevOps uppföljdes av omfattande utmaningar.
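The correlation analysis used in the Method above can be illustrated with a plain Pearson coefficient over two synthetic metric series; the pod names and sample values below are invented for the example.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length metric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# e.g. CPU usage of two interacting pods sampled at the same instants
# (in practice scraped from Prometheus as aligned range-query vectors)
cpu_pod_a = [0.20, 0.35, 0.50, 0.65, 0.80]
cpu_pod_b = [0.10, 0.18, 0.26, 0.33, 0.41]
print(round(pearson(cpu_pod_a, cpu_pod_b), 3))  # → 1.0 (strongly correlated)
```

High pairwise correlations like this, computed over many metric pairs and test cases, are what drive the hypothesis generation described in the results.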
APA, Harvard, Vancouver, ISO, and other styles
50

Das, Ruben. "Framework to set up a generic environment for applications." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-175674.

Full text
Abstract:
Infrastructure is a common word used to express the basic equipment and structures that are needed, e.g. for a country or organisation to function properly. The same concept applies in the field of computer science; without infrastructure one would have problems operating software at scale. Provisioning and maintaining infrastructure through manual labour is a common occurrence in the "iron age" of IT. As the world progresses towards the "cloud age" of IT, systems are decoupled from physical hardware, enabling anyone who is software savvy to automate the provisioning and maintenance of infrastructure. This study aims to determine how a generic environment can be created for applications that run on Unix platforms and how that underlying infrastructure can be provisioned effectively. The results show that by utilising OS-level virtualisation, also known as "containers", one can deploy and serve any application that can use the Linux kernel in the sense that is needed. To further support realising the generic environment, hardware virtualisation was applied to provide the infrastructure needed to be able to use containers. This was done by provisioning a set of virtual machines on different cloud providers with a lightweight operating system that could support the required container runtime. To manage these containers at scale, a container orchestration tool was installed onto the cluster of virtual machines. To provision said environment in an effective manner, the principles of infrastructure as code (IaC) were used to create a "blueprint" of the desired infrastructure. Using the metric mean time to environment (MTTE), it was noted that a cluster of virtual machines with a container orchestration tool installed onto it could be provisioned in under 10 minutes for four different cloud providers.
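The mean-time-to-environment (MTTE) metric mentioned in this abstract can be sketched as the wall-clock time taken by an ordered sequence of provisioning steps. The step names and placeholder actions below are hypothetical illustrations, not the thesis' actual tooling:

```python
import time

def provision_environment(steps):
    """Run provisioning steps in order; return total elapsed seconds (MTTE)."""
    start = time.monotonic()
    for name, action in steps:
        action()  # in practice: call out to an IaC tool or cloud API
    return time.monotonic() - start

# Placeholder actions standing in for real provisioning work
steps = [
    ("create virtual machines", lambda: time.sleep(0.01)),
    ("install container runtime", lambda: time.sleep(0.01)),
    ("install orchestration tool", lambda: time.sleep(0.01)),
]
mtte = provision_environment(steps)
print(f"MTTE: {mtte:.2f} s")
```

Measuring MTTE the same way across cloud providers makes the provisioning times directly comparable, which is how the thesis arrives at its under-10-minutes result.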
APA, Harvard, Vancouver, ISO, and other styles
