Dissertations / Theses on the topic 'Application scalability'

Consult the top 50 dissertations / theses for your research on the topic 'Application scalability.'

1

Topilski, Nickolay. "Improving scalability and fault tolerance in an application management infrastructure." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p1457314.

Abstract:
Thesis (M.S.)--University of California, San Diego, 2008. Title from first page of PDF file (viewed November 6, 2008). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 39-41).
2

Shah, Rohan D. "Improving the Scalability and Usability of the Public Information Officer Monitoring Application." DigitalCommons@USU, 2015. https://digitalcommons.usu.edu/etd/4407.

Abstract:
This thesis work addresses the limitations of a web application called the Public Information Officer Monitoring Application (PMA). This application helps Public Information Officers (PIOs) to gather, monitor, sort, store, and report social media data during a crisis event. Before this work, PMA was unable to handle large data sets and as a result, it had not been adequately tested with potential users of the application. This thesis describes changes made to PMA to improve its ability to handle large data sets. After these changes were made, the application was then tested with target users. All test participants found the application useful and relevant to their work. Testing also revealed many ways to improve the usefulness of the application, which were subsequently implemented. The thesis concludes with suggestions for future work and distribution of PMA.
3

Mogallapu, Raja. "Scalability of Kubernetes Running Over AWS - A Performance Study while deploying CPU intensive application containers." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18841.

Abstract:
Background: Nowadays many companies enjoy the benefits of Kubernetes by running their containerized applications on it. AWS is one of the leading cloud computing service providers, and many well-known companies are its clients. Much research has been conducted on Kubernetes, Docker containers, and cloud computing platforms, but confusion remains about how to deploy applications in Kubernetes, and there is a research gap concerning the impact of the CPU limits and requests set when deploying a Kubernetes application. Through this thesis I therefore analyze the performance of a CPU-intensive containerized application, to help companies avoid this confusion when deploying their applications on Kubernetes. Objectives: We measure the scalability of Kubernetes under a CPU-intensive containerized application running over AWS, and we study the impact of changing the CPU limits and requests in the deployment. Methods: We chose a blend of literature study and experimentation to conduct the research. Results and Conclusion: The experiments show that the application performs better when we allocate a higher CPU limit and a lower CPU request than when the deployment file sets equal CPU requests and limits. CPU metrics collected from SAR and the Kubernetes metrics server are similar. For better performance, it is preferable to allocate pods with CPU limits above their CPU requests rather than with equal values. Keywords: Kubernetes, CPU intensive containerized application, AWS, Stress-ng.
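The CPU requests/limits trade-off this thesis measures is set per container in the deployment specification. As a minimal sketch of such a configuration, the snippet below builds a Deployment with the official Kubernetes Python client; the image name, labels, replica count, and the specific request/limit values are illustrative assumptions, not the thesis's own setup.

```python
# Sketch: a Deployment whose container requests less CPU than its limit,
# the configuration the thesis found to perform better than equal values.
# All names and values here are hypothetical.
from kubernetes import client, config

def make_deployment() -> client.V1Deployment:
    container = client.V1Container(
        name="cpu-worker",
        image="example/stress-app:latest",  # hypothetical image
        resources=client.V1ResourceRequirements(
            requests={"cpu": "250m"},  # lower request ...
            limits={"cpu": "1"},       # ... than limit
        ),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "cpu-worker"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name="cpu-worker"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "cpu-worker"}),
            template=template,
        ),
    )

if __name__ == "__main__":
    config.load_kube_config()  # assumes a configured kubeconfig
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(namespace="default", body=make_deployment())
```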
4

Eriksson, Daniel. "Resource Allocation Guidelines : Configuring a large telecommunication application." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5606.

Abstract:
Changing the architecture of the Ericsson Billing Gateway application proved to solve a problem with dynamic memory management that had been degrading performance. The new architecture, which is focused on processes instead of threads, showed increased performance. It also allowed the process/thread configuration to be adjusted to the network topology and hardware. Measurements of different configurations showed the importance of an accurate configuration, and certain guidelines could be established based on the results.
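The process-versus-thread choice the thesis measures can be made a configuration knob. The sketch below is a generic illustration in Python's `concurrent.futures` (not Billing Gateway code, which is not public): the same workload runs under either a thread pool or a process pool, so the pool type and worker count can be tuned to the hardware.

```python
# Sketch: one workload, two pool types; the executor kind and worker count
# become configuration parameters, echoing the thesis's guidelines.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import os

def process_record(record: int) -> int:
    # Stand-in for per-record billing work (CPU-bound here).
    return sum(i * i for i in range(record % 1000))

def run(records, use_processes: bool, workers=None):
    workers = workers or os.cpu_count()
    pool_cls = ProcessPoolExecutor if use_processes else ThreadPoolExecutor
    with pool_cls(max_workers=workers) as pool:
        return list(pool.map(process_record, records))

if __name__ == "__main__":
    # Processes give each worker its own heap, avoiding contention on one
    # shared allocator (and, in Python, the GIL).
    results = run(range(10_000), use_processes=True)
    print(len(results))
```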
5

Ben Hamida, Amal. "Vers une nouvelle architecture de videosurveillance basée sur la scalabilité orientée vers l'application." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0144/document.

Abstract:
The work presented in this thesis aims to develop a new architecture for video surveillance systems. First, a literature review led us to classify existing systems by their application level, which depends directly on the analytical functions they perform. We also noticed that the usual systems process all captured data, while in practice only a small part of the scenes is useful for analysis. Hence, we extended the common architecture of surveillance systems with a pre-analysis phase that extracts and simplifies the regions of interest while keeping their important characteristics. Two different pre-analysis methods are proposed: a spatio-temporal filtering and a modeling technique for moving objects. We also contribute the concept of application-oriented scalability through a multi-level application architecture for surveillance systems, in which the different application levels can be reached incrementally to meet the progressive needs of the end user. An example of a video surveillance system following this architecture and using the proposed pre-analysis methods is presented.
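A pre-analysis stage that keeps only moving regions can be prototyped with standard background subtraction. The sketch below uses OpenCV and is only a generic illustration of the idea of extracting regions of interest before analysis; it is not the thesis's filtering or modeling method, and the area threshold is an arbitrary assumption.

```python
# Sketch: extract regions of interest (moving objects) from a surveillance
# stream before any higher-level analysis. Generic illustration only.
import cv2

def extract_rois(video_path: str, min_area: int = 500):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)  # foreground = moving pixels
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Keep only regions large enough to matter for analysis.
        rois = [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]
        yield frame, rois
    cap.release()
```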
6

Awad, Ashraf A. "Scalable application-aware router mechanisms." Diss., Georgia Institute of Technology, 2003. Available online: http://etd.gatech.edu/theses/available/etd-04052004-180005/unrestricted/awad%5Fashraf%5Fa%5F200312%5Fphd.pdf.

7

Parry, Jack. "Enabling large-scale dataset analysis in resource-constrained environments through application-aware preprocessing." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/201328/1/Jack_Parry_Thesis.pdf.

Abstract:
In many computing applications, such as system monitoring, fault diagnostics, and bioinformatics, large datasets must be analysed. As the volume of such datasets increases inexorably, industry-standard analysis tools struggle to produce meaningful results in a reasonable amount of time or may fail to work at all. More efficient analysis software may not exist, and higher-performance computing environments may be prohibitively expensive or unsuitable for use in the field. We develop and present the technique of Application-Aware Preprocessing, incorporating user requirements directly into the data analysis process. The technique proves successful enough to allow large-scale dataset analysis in resource-constrained environments.
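The core idea, filtering a dataset against user requirements before the expensive analysis step, can be sketched as a streaming filter. The snippet below is an illustrative sketch of that general idea, not the thesis's technique; the file name, field names, and predicate are hypothetical.

```python
# Sketch: stream a large CSV and keep only the rows the downstream analysis
# actually needs, so the full dataset never has to fit in memory.
import csv
from typing import Callable, Iterator

def preprocess(path: str, keep: Callable[[dict], bool]) -> Iterator[dict]:
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if keep(row):  # user requirement applied at read time
                yield row

# Hypothetical requirement: only error events from one subsystem.
wanted = preprocess("events.csv",
                    lambda r: r["level"] == "ERROR" and r["subsystem"] == "disk")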
8

Dunér, Daniel, and Marcus Nilsson. "Scalability of push and pull based event notification : A comparison between webhooks and polling." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279546.

Abstract:
Today's web applications make extensive use of APIs between server and client, or server to server, to deliver new information in the form of events. The question is whether the available methods for procuring events differ in how they scale. This study compares the performance of webhooks and polling, the most commonly used push- and pull-based methods for event notification, as traffic scales up. The purpose is to give developers a basis for choosing an event notification method. The comparison was developed through measurements of typical indicators of good performance for web applications: CPU usage, memory usage, and response time. The tests indicate that webhooks perform better in most circumstances, but further testing in a more well-defined environment is needed to draw a confident conclusion.
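The two notification styles reduce to a few lines each. A sketch with Flask and `requests`, where the endpoint paths, URLs, and payload fields are placeholders rather than anything from the study:

```python
# Sketch: push (webhook receiver) vs. pull (polling loop). Placeholders only.
import time
import requests
from flask import Flask, request

app = Flask(__name__)

@app.post("/webhook")          # push: the producer calls us on each event
def webhook():
    handle(request.get_json())
    return "", 204

def poll(url: str, interval: float = 5.0):  # pull: we ask on a fixed cadence
    last_seen = None
    while True:
        events = requests.get(url, params={"since": last_seen}, timeout=10).json()
        for event in events:
            handle(event)
            last_seen = event["id"]        # hypothetical event schema
        time.sleep(interval)

def handle(event):
    print("event:", event)
```

The scaling difference the study measures falls out of this shape: polling costs one request per client per interval whether or not anything happened, while a webhook costs one request per event.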
9

Park, Junhee. "Performance scalability of n-tier application in virtualized cloud environments: Two case studies in vertical and horizontal scaling." Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/55018.

Abstract:
The prevalence of multi-core processors, together with recent advances in virtualization technologies, has enabled horizontal and vertical scaling within a physical node, achieving economical sharing of computing infrastructures as computing clouds. Through hardware virtualization, consolidated servers, each with a specific core allotment, run on the same physical node in dedicated Virtual Machines (VMs) to increase overall node utilization, which increases profit by reducing operational costs. Unfortunately, despite the conceptual simplicity of vertical and horizontal scaling in virtualized cloud environments, leveraging the full potential of this technology has presented significant scalability challenges in practice. One of the fundamental problems is performance unpredictability in virtualized cloud environments (ranked fifth among the top 10 obstacles to the growth of cloud computing). In this dissertation, we present two case studies, in vertical and horizontal scaling, of this challenging problem. In the first case study, we describe concrete experimental evidence of an important source of performance variation: the mapping of virtual CPUs to physical cores. We then conduct an experimental comparative study of three major hypervisors (VMware, KVM, and Xen) with regard to their support of n-tier applications running on multi-core processors. In the second case study, we present an empirical study showing that memory thrashing caused by interference among consolidated VMs is a significant source of performance interference that hampers the horizontal scalability of n-tier application performance. We then execute transient event analyses of fine-grained experimental data that link very short bottlenecks caused by memory thrashing to very long response time (VLRT) requests. Furthermore, we provide three practical techniques (VM migration, memory reallocation, and soft resource allocation) and show that they can mitigate the effects of performance interference among consolidated VMs.
10

Reva, Roman. "Design and Implementation of a Next Generation Web Application SaaS prototype." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-35005.

11

Hossain, Md Iqbal, and Md Iqbal Hossain. "Dynamic scaling of a web-based application in a Cloud Architecture." Thesis, KTH, Radio Systems Laboratory (RS Lab), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142361.

Abstract:
With the constant growth of internet applications, such as social networks, online media, various online communities, and mobile applications, website user traffic has grown, is very dynamic, and is oftentimes unpredictable. This unpredictable nature of the traffic has led to many new and unique challenges which must be addressed by solution architects, application developers, and technology researchers; all of these actors must continually innovate to create attractive new applications and new system architectures to support their users. In addition, increased traffic increases the demand for resources, while users demand ever faster response times despite the ever-growing datasets underlying many of these new applications. Several concepts and best practices have been introduced to build highly scalable applications by exploiting cloud computing, as no one who expects to be or remain a leader in business today can afford to ignore it. Cloud computing has emerged as a platform upon which innovation, flexibility, availability, and faster time-to-market can be supported by new small and medium sized enterprises, enabling these businesses to create massively scalable applications, some of which handle tens of millions of active users daily. This thesis concerns the design, implementation, demonstration, and evaluation of a highly scalable cloud-based architecture designed for high performance and rapid evolution for new businesses, such as Ifoodbag AB, in order to meet the requirements of their web-based application. It examines how to scale resources both up and down dynamically, since there is no reason to allocate more or fewer resources than actually needed. Apart from implementing and testing the proposed design, this thesis presents several guidelines, best practices, and recommendations for optimizing the auto-scaling process, including cost analysis. The test results and analysis presented in this thesis clearly show that the proposed architecture model is capable of supporting high-demand applications, provides greater flexibility, and enables rapid market share growth for new businesses, without their needing to invest in an expensive infrastructure.
12

Bytyn, Andreas [author], Gerd [academic supervisor] Ascheid, and Rainer [academic supervisor] Leupers. "Efficiency and scalability exploration of an application-specific instruction-set processor for deep convolutional neural networks / Andreas Bytyn ; Gerd Ascheid, Rainer Leupers." Aachen : Universitätsbibliothek der RWTH Aachen, 2020. http://d-nb.info/1230325506/34.

13

Narasimha, Rajesh. "Application of Information Theory and Learning to Network and Biological Tomography." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19889.

Abstract:
Studying the internal characteristics of a network using measurements obtained from end hosts is known as network tomography. The foremost challenge in measurement-based approaches is the large size of a network, where only a subset of measurements can be obtained because of the inaccessibility of the entire network. As the network becomes larger, a question arises as to how rapidly the monitoring resources (number of measurements or number of samples) must grow to obtain a desired monitoring accuracy. Our work studies the scalability of the measurements with respect to the size of the network. We investigate the issues of scalability and performance evaluation in IP networks, specifically focusing on fault and congestion diagnosis. We formulate network monitoring as a machine learning problem using probabilistic graphical models that infer network states using path-based measurements. We consider the theoretical and practical management resources needed to reliably diagnose congested/faulty network elements and provide fundamental limits on the relationships between the number of probe packets, the size of the network, and the ability to accurately diagnose such network elements. We derive lower bounds on the average number of probes per edge using the variational inference technique proposed in the context of graphical models under noisy probe measurements, and then propose an entropy lower (EL) bound by drawing similarities between the coding problem over a binary symmetric channel and the diagnosis problem. Our investigation is supported by simulation results. For the congestion diagnosis case, we propose a solution based on decoding linear error control codes on a binary symmetric channel for various probing experiments. To identify the congested nodes, we construct a graphical model and infer congestion using the belief propagation algorithm. In the second part of the work, we focus on the development of methods to automatically analyze the information contained in electron tomograms, which is a major challenge since tomograms are extremely noisy. Advances in automated data acquisition in electron tomography have led to an explosion in the amount of data that can be obtained about the spatial architecture of a variety of biologically and medically relevant objects with sizes in the range of 10-1000 nm. A fundamental step in the statistical inference of large amounts of data is to segment relevant 3D features in cellular tomograms. Procedures for segmentation must work robustly and rapidly in spite of the low signal-to-noise ratios inherent in biological electron microscopy. This work evaluates various denoising techniques and then extracts relevant features of biological interest in tomograms of HIV-1 in infected human macrophages and in Bdellovibrio bacterial tomograms recorded at room and cryogenic temperatures. Our approach represents an important step in automating the efficient extraction of useful information from large datasets in biological tomography and in speeding up the process of reducing gigabyte-sized tomograms to relevant byte-sized data. Next, we investigate automatic techniques for segmentation and quantitative analysis of mitochondria in MNT-1 cells imaged using an ion-abrasion scanning electron microscope, and of tomograms of Liposomal Doxorubicin formulations (Doxil), an anticancer nanodrug, imaged at cryogenic temperatures. A machine learning approach is formulated that exploits texture features; joint image block-wise classification and segmentation is performed by histogram matching, using a nearest neighbor classifier with the chi-squared statistic as the distance measure.
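The classifier described at the end, nearest neighbor over texture histograms with the chi-squared statistic as the distance, is compact enough to write out. A NumPy sketch under the assumption of pre-computed, normalized block histograms; the bin count, labels, and data are invented for illustration:

```python
# Sketch: 1-NN classification of image blocks by chi-squared distance
# between normalized texture histograms (as in the described approach).
import numpy as np

def chi2_distance(h1: np.ndarray, h2: np.ndarray, eps: float = 1e-10) -> float:
    # Standard chi-squared statistic between two histograms.
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def classify_block(block_hist, train_hists, train_labels):
    dists = [chi2_distance(block_hist, h) for h in train_hists]
    return train_labels[int(np.argmin(dists))]

# Toy usage with random 32-bin histograms and hypothetical labels.
rng = np.random.default_rng(0)
train = rng.random((20, 32)); train /= train.sum(axis=1, keepdims=True)
labels = ["mitochondrion" if i < 10 else "background" for i in range(20)]
query = rng.random(32); query /= query.sum()
print(classify_block(query, train, labels))
```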
14

Morros Rubió, Josep Ramon. "Optimization of Segmentation-Based Video Sequence Coding Techniques. Application to content based functionalities." Doctoral thesis, Universitat Politècnica de Catalunya, 2004. http://hdl.handle.net/10803/6888.

Abstract:
This work addresses the problem of video compression with content-based functionalities in the framework of segmentation-based video coding systems. Two major problems are considered. The first is coding optimality in segmentation-based coding systems; regarding this subject, the feasibility of a rate-distortion approach for a complete region-based coding system is shown. The second is how to address content-based functionalities in the coding system proposed as a solution to the first problem. Optimality, as defined in the framework of rate-distortion theory, deals with obtaining a representation of the video sequence that leads to a minimum distortion of the coded signal for a given bit budget. In the case of segmentation-based coding systems this means obtaining an 'optimal' partition together with the best coding technique for each region of this partition, so that the result is optimal in an operational rate-distortion sense. The problem is formalized for independent, non-scalable coding, and an algorithm to solve it is provided. This algorithm is applied to a specific segmentation-based coding system, the so-called SESAME. In SESAME, each frame is segmented into a set of regions that are coded independently. Segmentation involves both spatial and motion homogeneity criteria. To exploit temporal redundancy, a prediction for both the partition and the texture of the current frame is created using motion information. The time evolution of each region is defined along the sequence (time tracking). The results are optimal (or near-optimal) for the given framework in a rate-distortion sense. The definition of the coding strategy involves a global optimization of the partition as well as of the coding technique/quality level for each region. The investigation is then extended to the problem of video coding optimization in the framework of a scalable video coding system that can address content-based functionalities, focusing on the various types of content-based scalability and on object tracking. The generality of the problem has also been extended by including the spatial and temporal dependencies between frames and scalability layers in the optimization schema. In this case the solution implies finding the optimal partition and set of quantizers for both the base and the enhancement layers. Due to the coding dependencies of the enhancement layer with respect to the base layer, the partition and the set of quantizers of the enhancement layer depend on the decisions made on the base layer. A solution for the independent optimization problem (i.e., without taking into account dependencies between different frames or scalability layers) has also been proposed to reduce the computational complexity. These solutions are used to extend the SESAME coding system. The extended coding system, named XSESAME, supports different types of scalability (PSNR, spatial, and temporal) as well as content-based functionalities, such as content-based scalability and object tracking. Two different operating modes for region selection in the enhancement layer are presented: one (supervised) aimed at providing content-based functionalities at the enhancement layer, and the other (unsupervised) aimed at coding efficiency without content-based functionalities. Integration of object tracking into the segmentation-based coding system is also investigated. In the general case, tracking is a very complex problem; if this capability has to be integrated into a coding system, additional problems arise due to conflicting requirements between coding efficiency and tracking accuracy. This is solved by using a double-partition approach, where pure spatial criteria are used to re-segment the partition used for coding. The projection of the re-segmented partition results in more precise adaptation to object contours. A merging step is performed a posteriori to eliminate the excess of regions originated by the re-segmentation.
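The per-region optimization described here is commonly solved as a Lagrangian selection: for a multiplier λ, each region independently picks the operating point minimizing D + λR, and λ is adjusted until the bit budget is met. A generic sketch of that textbook procedure, with invented operating points; it is not the thesis's exact algorithm.

```python
# Sketch: Lagrangian bit allocation across regions. Each region has a list
# of (rate, distortion) operating points; bisection on lambda meets the budget.
def allocate(regions, budget, iters=50):
    def pick(lmbda):
        # Each region independently minimizes distortion + lambda * rate.
        choice = [min(pts, key=lambda p: p[1] + lmbda * p[0]) for pts in regions]
        return choice, sum(r for r, _ in choice)

    lo, hi = 0.0, 1e9   # small lambda favors quality; large lambda favors rate
    for _ in range(iters):
        mid = (lo + hi) / 2
        _, rate = pick(mid)
        if rate > budget:
            lo = mid        # too many bits: penalize rate harder
        else:
            hi = mid
    return pick(hi)[0]      # feasible side of the bisection

# Toy example: two regions, three (rate, distortion) points each.
regions = [[(10, 90.0), (20, 40.0), (40, 10.0)],
           [(5, 50.0), (15, 20.0), (30, 5.0)]]
print(allocate(regions, budget=40))
```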
15

Vestman, Simon. "Cloud application platform - Virtualization vs Containerization : A comparison between application containers and virtual machines." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14590.

Abstract:
Context. As the number of organizations using cloud application platforms to host their applications increases, the priority of distributing physical resources within those platforms increases simultaneously. The goal is to host a higher quantity of applications per physical server while retaining a satisfying rate of performance combined with certain scalability. The modern needs of customers occasionally also imply an assurance of certain privacy for their applications. Objectives. In this study two types of instances for hosting applications in cloud application platforms, virtual machines and application containers, are comparatively analyzed. This investigation has the goal of exposing advantages and disadvantages between the instances in order to determine which is more appropriate for use in cloud application platforms, in terms of performance, scalability, and user isolation. Methods. The comparison is done on a server running Linux Ubuntu 16.04. The virtual machine is created using Devstack, a development environment of Openstack, while the application container is hosted by Docker. Each instance runs an Apache web server for handling HTTP requests. The comparison is done by using different benchmark tools for different key usage scenarios and simultaneously observing the resource usage in the respective instance. Results. The results are produced by investigating the user isolation and resource occupation of each instance, by examining the file system, active process handling, and resource allocation after creation. Benchmark tools are executed locally on each instance for a performance comparison of the usage of physical resources. The amount of CPU operations executed within a given time is measured in order to determine processor performance, while the speed of read and write operations to the main memory is measured in order to determine RAM performance. A file is also transmitted between host server and application in order to compare network performance between the instances, by examining the transfer speed of the file. Lastly, a set of benchmark tools is executed on the host server to measure the HTTP request handling performance and scalability of each instance. The amount of requests handled per second is observed, as well as the resource usage for the request handling at an increasing rate of served requests and clients. Conclusions. The virtual machine is a better choice for applications where privacy is a higher priority, due to the complete isolation and abstraction from the rest of the physical server. Virtual machines perform better in handling a higher quantity of requests per second, while application containers are faster at transferring files over the network. The container requires a significantly lower amount of resources than the virtual machine in order to run and execute tasks, such as responding to HTTP requests. When it comes to scalability, the preferred type of instance depends on the priority of key usage scenarios. Virtual machines have quicker response times for HTTP requests, but application containers occupy less physical resources, which makes it logically possible to run a higher quantity of containers than virtual machines simultaneously on the same physical server.
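The container side of such a comparison is straightforward to instrument. The sketch below starts a web-server container with the Docker SDK for Python and reads a one-shot resource snapshot, the kind of per-instance measurement the study performs; the image name and port mapping are arbitrary assumptions, not the study's setup.

```python
# Sketch: launch an Apache container and sample its CPU/memory usage.
# Illustrative only; image and port are arbitrary choices.
import docker

client = docker.from_env()
container = client.containers.run(
    "httpd:2.4",             # arbitrary web-server image
    detach=True,
    ports={"80/tcp": 8080},
)
try:
    stats = container.stats(stream=False)  # one-shot snapshot
    mem = stats["memory_stats"]["usage"]
    cpu = stats["cpu_stats"]["cpu_usage"]["total_usage"]
    print(f"memory: {mem} bytes, cumulative CPU: {cpu} ns")
finally:
    container.remove(force=True)
```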
16

Preißler, Steffen. "Skalierbare Ausführung von Prozessanwendungen in dienstorientierten Umgebungen." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-99727.

Abstract:
The structuring and use of company-internal IT infrastructures based on service-oriented architectures (SOA) and established XML technologies has grown steadily in recent years. While early SOA implementations focused on the flexible execution of classical, business-relevant processes, timely data analyses and the monitoring of business-relevant events now form further important application classes, both to identify problems in business operations in the short term and to detect medium- and long-term changes in the market and flexibly adapt the company's business processes to them. Because of the historically independent development of these three application classes, the respective application processes are currently modeled and executed in separate systems. This, however, results in a number of disadvantages, which this thesis identifies and discusses in detail. Against this background, this thesis derives a consolidated execution platform that makes it possible to model processes of all three application classes together and to execute them efficiently in an SOA-based infrastructure. The thesis addresses the problems of such a consolidated execution platform at three levels: service communication, process execution, and the optimal distribution of SOA components in an infrastructure.
17

Hangwei, Qian. "Dynamic Resource Management of Cloud-Hosted Internet Applications." Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1338317801.

18

Marfia, Gustavo. "P2P vehicular applications: mobility, fairness and scalability." Diss., Restricted to subscribing institutions, 2009. http://proquest.umi.com/pqdweb?did=1998391911&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

19

Tchappi Haman, Igor. "Dynamic Multilevel and Holonic Model for the Simulation of a Large-Scale Complex System with Spatial Environment : Application to Road Traffic Simulation." Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCA004.

Abstract:
Nowadays, with the emergence of connected objects and cars, road traffic systems become more and more complex and exhibit hierarchical behaviours at several levels of detail. The multilevel modeling approach is an appropriate way to represent traffic from several perspectives, and multilevel models are also well suited to modeling large-scale complex systems such as road traffic. However, most of the multilevel traffic models proposed in the literature are static, because they use a set of predefined levels of detail that cannot change during simulation, and they generally consider only two levels of detail. Few works have addressed dynamic multilevel traffic modeling. This thesis proposes a holonic, multilevel, and dynamic traffic model for large-scale traffic systems. Dynamically switching the level of detail during the execution of the simulation allows the model to adapt to constraints on the quality of the results or on the available computing resources. The proposal extends the DBSCAN algorithm in the context of holonic multi-agent systems. In addition, a methodology allowing a dynamic transition between the different levels of detail is proposed, and multilevel indicators based on standard deviation are introduced in order to assess the consistency of the simulation results.
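The clustering building block the thesis extends is standard DBSCAN. A scikit-learn sketch grouping vehicle positions into candidate groups (holons), with made-up coordinates and parameters; the holonic extension itself is the thesis's contribution and is not reproduced here.

```python
# Sketch: plain DBSCAN over vehicle positions; each cluster is a candidate
# group (holon) to be represented at a coarser level of detail.
import numpy as np
from sklearn.cluster import DBSCAN

positions = np.array([   # (x, y) in meters, made-up values
    [0.0, 0.0], [3.0, 1.0], [5.0, 2.0],   # platoon A
    [120.0, 40.0], [123.0, 41.0],         # platoon B
    [400.0, 10.0],                        # isolated vehicle -> noise
])
labels = DBSCAN(eps=10.0, min_samples=2).fit_predict(positions)
for cluster in set(labels):
    members = positions[labels == cluster]
    name = "noise" if cluster == -1 else f"holon {cluster}"
    print(name, "->", len(members), "vehicles")
```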
APA, Harvard, Vancouver, ISO, and other styles
21

Dawoud, Wesam. "Scalability and performance management of internet applications in the cloud." Phd thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6818/.

Full text
Abstract:
Cloud computing is a model for enabling on-demand access to a shared pool of computing resources. With virtually limitless on-demand resources, a cloud environment enables a hosted Internet application to cope quickly with increases in workload. However, the overhead of provisioning resources exposes the Internet application to periods of under-provisioning and performance degradation. Moreover, performance interference, due to consolidation in the cloud environment, complicates the performance management of Internet applications. In this dissertation, we propose two approaches to mitigate the impact of the resource provisioning overhead. The first approach employs control theory to scale resources vertically and cope quickly with workload changes. This approach assumes that the provider has knowledge of and control over the platform running in the virtual machines (VMs), which limits it to Platform as a Service (PaaS) and Software as a Service (SaaS) providers. The second approach is a customer-side one that deals with horizontal scalability in an Infrastructure as a Service (IaaS) model. It addresses the trade-off between cost and performance with a multi-goal optimization solution that finds the scaling thresholds achieving the highest performance with the lowest increase in cost. Moreover, the second approach employs a proposed time-series forecasting algorithm to scale the application proactively and avoid under-utilization periods. Furthermore, to mitigate the impact of interference on Internet application performance, we developed a system which finds and eliminates the VMs suffering from performance interference. The developed system is a lightweight solution which does not imply provider involvement. To evaluate our approaches and the designed algorithms at a large scale, we developed a simulator called ScaleSim. In the simulator, we implemented scalability components acting like the scalability components of Amazon EC2; the current scalability implementation in Amazon EC2 is used as a reference point for evaluating the improvement in scalable application performance. ScaleSim is fed with realistic models of the RUBiS benchmark extracted from a real environment, and the workload is generated from the access logs of the 1998 World Cup website. The results show that optimizing the scalability thresholds and adopting proactive scalability can mitigate 88% of the impact of the resource provisioning overhead with only a 9% increase in cost.
APA, Harvard, Vancouver, ISO, and other styles
22

Lindberg, Lars. "Remote rendering of physic simulations and scalability aspects in web applications." Thesis, Umeå universitet, Institutionen för datavetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-90042.

Full text
Abstract:
This thesis, assigned by Algoryx Simulation AB, explores the concept of implementing a web application for managing Algoryx-based physics simulations. The application enables users to access the Algoryx simulation software by providing a user interface for clients to submit scene files to be simulated and to view a 3D visualization of the finished simulations rendered directly in the web browser. This allows clients to do away with the work of performing compute-intensive physics simulations locally and instead hand over that responsibility to the web service, while the client only needs to handle the actual rendering. Applications made available through a web browser give users easy access to applications that do not require any installation procedures; for that reason, clients do not need any simulation software or plugin installed to access the service. This also makes it easy to share simulation results with customers by simply giving out a link that can be accessed through a browser. The thesis further includes a theoretical study on scalability in web applications, which explains different ways of scaling and common techniques and methods used to help achieve scalability, useful when designing and building scalable web systems.
APA, Harvard, Vancouver, ISO, and other styles
23

Magapu, Akshay Kumar, and Nikhil Yarlagadda. "Performance, Scalability, and Reliability (PSR) challenges, metrics and tools for web testing : A Case Study." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12801.

Full text
Abstract:
Context. Testing of web applications is an important task, as it ensures the functionality and quality of web applications. The quality of a web application falls under non-functional testing. There are many quality attributes, such as performance, scalability, reliability, usability, accessibility and security. Among these, the PSR attributes (performance, scalability and reliability) are the most important and most commonly considered in practice. However, very few empirical studies have been conducted on these three attributes. Objectives. The purpose of this study is to identify the metrics and tools that are available for testing these three attributes, and to identify the challenges faced while testing them, both in the literature and in practice. Methods. In this research, a systematic mapping study was conducted in order to collect information regarding the metrics, tools, challenges and mitigations related to the PSR attributes; the required information was gathered by searching five scientific databases. We also conducted a case study to identify the metrics, tools and challenges of the PSR attributes in practice. The case study was conducted at Ericsson, India, where eight subjects were interviewed, and four subjects working at other companies (in India) were also interviewed in order to validate the results obtained from the case company. In addition, a few documents from previous projects at the case company were collected for data triangulation. Results. A total of 69 metrics, 54 tools and 18 challenges were identified from the systematic mapping study, and 30 metrics, 18 tools and 13 challenges were identified from the interviews. Data was also collected from documents, yielding a further 16 metrics, 4 tools and 3 challenges. Based on the analysis of these data, we compiled a consolidated list of tools, metrics and challenges. Conclusions. We found that the metrics available in the literature overlap with the metrics used in practice. However, the tools found in the literature overlap with practice only to some extent; the main reason for this deviation is the limitations identified for the tools, which led the case company to develop its own in-house tool. We also found that the challenges partially overlap between the state of the art and practice. We were unable to collect mitigations for all of these challenges from the literature, so further research is needed. Among the PSR attributes, most of the literature concerns the performance attribute, and most interviewees were comfortable answering questions related to performance; we therefore conclude that there is a lack of empirical research on the scalability and reliability attributes. Our research deals with the PSR attributes in particular, leaving scope for further work: the study could be extended to other quality attributes and conducted at a larger scale, considering more companies.
APA, Harvard, Vancouver, ISO, and other styles
24

Cai, Xiaowei Ph D. Massachusetts Institute of Technology. "InGaAs MOSFETs for logic and RF applications : reliability, scalability and transport studies." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122683.

Full text
Abstract:
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the student-submitted PDF version; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (pages 133-141). InGaAs has emerged as an extraordinary n-channel material due to its superb electron transport properties and low-voltage operation. With tremendous advancements over the years, InGaAs MOSFETs have attracted much attention as promising device candidates for both logic and THz applications. However, many challenges remain. This thesis addresses some of the critical issues facing InGaAs MOSFETs and advances the understanding of the limiting factors confronting InGaAs MOSFET technology. First, it identifies a new instability mechanism in self-aligned InGaAs MOSFETs caused by fluorine migration and passivation of Si dopants in n-InAlAs. This problem is successfully mitigated by eliminating n-InAlAs from the device structure; the new device design achieves improved stability and record device performance. Second, it evaluates the impact of oxide trapping in InGaAs MOSFETs. A comprehensive PBTI study shows that oxide trapping deteriorates device stability, resulting in threshold voltage shifts and degraded device performance; oxide trapping also compromises DC device performance. High-frequency and fast-pulse measurements reveal a rich spectrum of oxide traps with different capture/emission times. Furthermore, oxide trapping complicates the extraction of fundamental parameters in InGaAs MOSFETs and leads to an underestimation of channel mobility. Thus, a new method, immune to the impact of oxide traps, has been developed to evaluate the intrinsic charge-control relationship of the device and accurately estimate mobility. Third, this thesis re-evaluates the impact of channel scaling on device performance and transport in InGaAs planar MOSFETs and FinFETs. In both cases, mobility degradation with channel thickness or fin width scaling is observed to be much less than suggested by conventional CV methods. When the impact of oxide trapping is avoided, mitigated scaling-induced degradation is observed and promising intrinsic transistor performance is revealed. Notably, InGaAs FinFETs exhibit a g_m,max at 1 GHz competitive with current silicon FinFET technology and high mobility even in very narrow fins (μ_peak ≈ 570 cm²/V·s at W_f = 7 nm). This thesis highlights the importance of mitigating oxide trapping; in light of the results obtained here, the prospects of InGaAs MOSFET technology merit a reassessment. This work was sponsored by DTRA, Lam Research, SRC and MIT MISTI.
APA, Harvard, Vancouver, ISO, and other styles
25

Remeika, Mantas, and Jovydas Urbanavicius. "Microservices in data intensive applications." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-88822.

Full text
Abstract:
The volumes of data which Big Data applications have to process are constantly increasing, which requires the development of highly scalable systems. The microservices architecture is considered one of the solutions to this scalability problem; however, the literature on practices for building scalable data-intensive systems is still lacking. This thesis aims to investigate and present the benefits and drawbacks of using a microservices architecture in big data systems. Moreover, it presents other practices used to increase scalability, including containerization, shared-nothing architecture, data sharding, load balancing, clustering, and stateless design. Finally, an experiment comparing the performance of a monolithic application and a microservices-based application was performed. The results show that with an increasing load, the microservices-based application performs better than the monolith. However, to cope with the constantly increasing amount of data, additional techniques should be used together with microservices.
APA, Harvard, Vancouver, ISO, and other styles
26

Didelot, Sylvain. "Improving memory consumption and performance scalability of HPC applications with multi-threaded network communications." Thesis, Versailles-St Quentin en Yvelines, 2014. http://www.theses.fr/2014VERS0029/document.

Full text
Abstract:
A recent trend in high performance computing shows a rising number of cores per compute node, while the total amount of memory per compute node remains constant. To scale parallel applications on such large machines, one of the major challenges is to keep memory consumption low. This thesis develops a multi-threaded communication layer over Infiniband which provides both good communication performance and low memory consumption. We target scientific applications parallelized using the MPI standard, in pure mode or combined with a shared memory programming model. Starting with the observation that network endpoints and communication buffers are critical for the scalability of MPI runtimes, the first contribution proposes three approaches to control their usage. We introduce a scalable and fully-connected virtual topology for connection-oriented high-speed networks. In the context of multirail configurations, we then detail a runtime technique which reduces the number of network connections. We finally present a protocol for dynamically resizing network buffers over the RDMA technology. The second contribution proposes a runtime optimization to enforce the overlap potential of MPI communications, showing a 2x improvement factor on communications. The third contribution evaluates the performance of several MPI runtimes running a seismic modeling application in a hybrid context. On large compute nodes with up to 128 cores, introducing OpenMP into the MPI application saves up to 17% of memory. Moreover, we show a performance improvement with our multi-threaded communication layer when OpenMP threads concurrently participate in the MPI communications.
APA, Harvard, Vancouver, ISO, and other styles
27

Dawoud, Wesam [Verfasser], and Christoph [Akademischer Betreuer] Meinel. "Scalability and performance management of internet applications in the cloud / Wesam Dawoud. Betreuer: Christoph Meinel." Potsdam : Universitätsbibliothek der Universität Potsdam, 2013. http://d-nb.info/1043379266/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Dawoud, Wesam [Verfasser], and Christoph [Akademischer Betreuer] Meinel. "Scalability and performance management of internet applications in the cloud / Wesam Dawoud. Betreuer: Christoph Meinel." Potsdam : Universitätsbibliothek der Universität Potsdam, 2013. http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-68187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Wang, Qingyang. "A study of transient bottlenecks: understanding and reducing latency long-tail problem in n-tier web applications." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/54002.

Full text
Abstract:
An essential requirement of cloud computing or data centers is to simultaneously achieve good performance and high utilization for cost efficiency. High utilization through virtualization and hardware resource sharing is critical for both cloud providers and cloud consumers to reduce management and infrastructure costs (e.g., energy cost, hardware cost) and to increase cost-efficiency. Unfortunately, achieving good performance (e.g., low latency) for web applications at high resource utilization remains an elusive goal. Both practitioners and researchers have experienced the latency long-tail problem in clouds during periods of even moderate utilization (e.g., 50%). In this dissertation, we show that transient bottlenecks are an important contributing factor to the latency long-tail problem. Transient bottlenecks are bottlenecks with a short lifespan, on the order of tens of milliseconds. Though short-lived, a transient bottleneck can cause a long-tail response time distribution that spans a spectrum of 2 to 3 orders of magnitude, from tens of milliseconds to tens of seconds, due to the propagation and amplification of queuing effects caused by complex inter-tier resource dependencies in the system. Transient bottlenecks can arise from a wide range of factors at different system layers. For example, we have identified transient bottlenecks caused by CPU dynamic voltage and frequency scaling (DVFS) control at the CPU architecture layer, Java garbage collection (GC) at the system software layer, and virtual machine (VM) consolidation at the application layer. These factors interact with naturally bursty workloads from clients, often leading to transient bottlenecks that cause overall performance degradation even when all system resources are far from saturated (e.g., less than 50% utilization). By combining fine-grained monitoring tools and a sophisticated analytical method to generate and analyze monitoring data, we are able to detect and study transient bottlenecks in a systematic way.
APA, Harvard, Vancouver, ISO, and other styles
30

Awadallah, Amr A. "The vMatrix : a backward-compatible solution for improving the interactivity, scalability, and reliability of internet applications." May be available electronically, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Soua, Ridha. "Wireless sensor networks in industrial environment : energy efficiency, delay and scalability." Electronic Thesis or Diss., Paris 6, 2014. http://www.theses.fr/2014PA066029.

Full text
Abstract:
Some industrial applications require deterministic and bounded gathering delays. We focus on the joint time-slot and channel assignment that minimizes the time of data collection and provides conflict-free schedules. This assignment allows nodes to sleep in any slot where they are not involved in transmissions; hence, these schedules save the energy budget of the sensors. We calculate the minimum number of time slots needed to complete raw-data convergecast for a sink equipped with multiple radio interfaces and heterogeneous node traffic demands, and we give optimal schedules that achieve these bounds. We then propose MODESA, a centralized joint slot and channel assignment algorithm, and prove its optimality in specific topologies. Through simulations, we show that MODESA outperforms TMCP, a centralized subtree-based scheduling algorithm. We improve MODESA with different strategies for channel allocation. In addition, we show that the use of multi-path routing reduces the time of data collection. Nevertheless, the joint time-slot and channel assignment must be able to adapt to the changing traffic demands of the nodes (alarms, additional requests for temporary traffic). We propose AMSA, an adaptive joint time-slot and channel assignment based on an incremental technique. To address the issue of scalability, we propose WAVE, a joint slot and channel assignment solution for convergecast that operates in either centralized or distributed mode, and we show the equivalence of the schedules provided by the two modes.
APA, Harvard, Vancouver, ISO, and other styles
32

Palmiter, Russell. "A UNIFIED RESOURCE PLATFORM FOR THE RAPID DEVELOPMENT OF SCALABLE WEB APPLICATIONS." DigitalCommons@CalPoly, 2009. https://digitalcommons.calpoly.edu/theses/680.

Full text
Abstract:
This thesis presents Web Utility Kit (WUT): a platform that helps to simplify the process of creating modern web applications. It addresses the need to simplify the web development process through the creation of a hosted service that provides access to a unified set of resources. The resources are made available through a variety of protocols and formats to help simplify their consumption. It also provides a uniform model across all of its resources, making multi-resource development an easier and more familiar task. WUT saves the time and cost associated with the deployment, maintenance, and hosting of the hardware and software on which resources depend. It has a relatively low overhead, averaging 123 ms per request, and has been shown to be capable of linear scaling, with each application server able to handle 120+ requests per minute. This important property of seamlessly scaling to developers' needs helps to eliminate an otherwise expensive scaling process. Initial users of the platform have found it to be extremely easy to use and have paved the way for future developments.
APA, Harvard, Vancouver, ISO, and other styles
33

Ruan, Ning. "Network Backbone with Applications in Reachability and Shortest Path Computation." Kent State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=kent1334516240.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Ramilli, Elisabetta. "Architetture a microservizi con Node.js: design e sviluppo di una web application." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
Abstract:
In today's web applications it is fundamentally important to guarantee satisfactory access performance regardless of the request load directed at the system. Web application servers must therefore be able to "scale" their capacity according to business needs. A good solution is to build web applications not with traditional monolithic architectures but by splitting application data into independent data services, so as to reduce the load complexity of each service. Moreover, the need for web applications with good performance, which can therefore respond quickly to user requests, is essential for an optimal User Experience. This thesis project addresses the problem of designing and developing web applications with scalability and high-performance characteristics. The thesis internship in industry made it possible to study this problem and seek a solution; the study materialized in the design and development of a web application for managing spare-parts catalogues for industrial machinery. For the client side, Vue.js was chosen, a progressive framework for building modern and elegant web interfaces. Being based on JavaScript, which runs in the browser, Vue makes it possible to build applications that no longer have to wait on server requests, improving responsiveness and the User Experience. For the server side, the Node.js platform was chosen, whose single-threaded architecture guarantees scalability and responsiveness. To overcome the limitations of the event-driven model, which binds applications to single-core performance, a distributed microservices architecture was adopted, with the goal of promoting scalability and efficiency while keeping the various parts of the system independent.
APA, Harvard, Vancouver, ISO, and other styles
35

Kadlubiec, Jakub. "Mobilní systém pro sběr zpětné vazby zákazníků." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236177.

Full text
Abstract:
This thesis describes the development of a mobile system for monitoring customer satisfaction and collecting feedback from visitors in restaurants, called Huerate. All phases of the system's development are described comprehensively. The first part of the thesis analyses existing solutions and the state of the market. Then, based on communication with restaurant owners, the system requirements are compiled. Finally, the thesis covers the design of the system itself, its implementation, and its deployment in restaurants. The Huerate system runs as a web application and is available at http://huerate.cz.
APA, Harvard, Vancouver, ISO, and other styles
36

Leuzzi, Valerio. "Progettazione ed Implementazione di un Applicativo per la Generazione Automatica dell’Orario delle Lezioni tramite il Linguaggio MiniZinc." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/22194/.

Full text
Abstract:
This thesis is based on the design of a web application which, by integrating several technologies, provides an interface to the MiniZinc modelling language. The purpose of the application is the automatic generation of the weekly lecture timetable of a degree programme, allowing constraints on individual courses to be entered in a way that is clear and intuitive for the user. In this study, without loss of generality, the degree programme I attended was considered, namely the Computer Science degree programme of the University of Bologna.
APA, Harvard, Vancouver, ISO, and other styles
37

de, Gooijer Thijmen. "Performance Modeling of ASP.Net Web Service Applications: an industrial case study." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-12804.

Full text
Abstract:
During the last decade, the gap between software modeling and performance modeling has been closing. For example, UML annotations have been developed to enable the transformation of UML software models into performance models, thereby making performance modeling more accessible. However, as of yet few of these tools are ready for industrial application. In this thesis we explore the current state of performance modeling tooling, describe the selection of a performance modeling tool for industrial application, and present a performance modeling case study on one of ABB's remote diagnostics systems (RDS). The case study shows the search for the best architectural alternative during a multi-million dollar redesign project of the ASP.Net web-services-based RDS back-end. The performance model is integrated with a cost model to provide valuable decision support for the construction of an architectural roadmap. Despite our success, we suggest that the stability of software performance modeling tooling and the semantic gap between performance modeling and software architecture concepts are major hurdles to widespread industrial adoption. Future work may use the experiences recorded in this thesis to continue improving performance modeling processes and tools for industrial use.
APA, Harvard, Vancouver, ISO, and other styles
38

Viotti, Paolo. "Cohérence dans les systèmes de stockage distribués : fondements théoriques avec applications au cloud storage." Thesis, Paris, ENST, 2017. http://www.theses.fr/2017ENST0016/document.

Full text
Abstract:
Engineering distributed systems is an onerous task: the design goals of performance, correctness and reliability are intertwined in complex tradeoffs, which have been outlined by multiple theoretical results. These tradeoffs have become increasingly important as computing and storage have shifted towards distributed architectures. Additionally, the general lack of systematic approaches to tackle distribution in modern programming tools has worsened these issues, especially as nowadays most programmers have to take on the challenges of distribution. As a result, there exists an evident divide between programming abstractions, application requirements and storage semantics, which hinders the work of designers and developers. This thesis presents a set of contributions towards the overarching goal of designing reliable distributed storage systems, by examining these issues through the prism of consistency. We begin by providing a uniform, declarative framework to formally define consistency semantics. We use this framework to describe and compare over fifty non-transactional consistency semantics proposed in previous literature. The declarative and composable nature of this framework allows us to build a partial order of consistency models according to their semantic strength. We show the practical benefits of composability by designing and implementing Hybris, a storage system that leverages different models and semantics to improve over the weak consistency generally offered by public cloud storage platforms. We demonstrate Hybris' efficiency and show that it can tolerate arbitrary faults of cloud stores at the cost of tolerating outages. Finally, we propose a novel technique to verify the consistency guarantees offered by real-world storage systems. This technique leverages our declarative approach to consistency: we consider consistency semantics as invariants over graph representations of storage system executions. A preliminary implementation proves this approach practical and useful in improving over the state of the art on consistency verification.
APA, Harvard, Vancouver, ISO, and other styles
39

Yeom, Jae-seung. "Optimizing Data Accesses for Scaling Data-intensive Scientific Applications." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/64180.

Full text
Abstract:
Data-intensive scientific applications often process an enormous amount of data. The scalability of such applications depends critically on how to manage the locality of data. Our study explores two common types of applications that are vastly different in terms of memory access pattern and workload variation. One includes those with multi-stride accesses in regular nested parallel loops. The other is for processing large-scale irregular social network graphs. In the former case, the memory location or the data item accessed in a loop is predictable and the load on processing a unit work (an array element) is relatively uniform with no significant variation. On the other hand, in the latter case, the data access per unit work (a vertex) is highly irregular in terms of the number of accesses and the locations being accessed. This property is further tied to the load and presents significant challenges in the scalability of the application performance. Designing platforms to support extreme performance scaling requires understanding of how application specific information can be used to control the locality and improve the performance. Such insights are necessary to determine which control and which abstraction to provide for interfacing an underlying system and an application as well as for designing a new system. Our goal is to expose common requirements of data-intensive scientific applications for scalability. For the former type of applications, those with regular accesses and uniform workload, we contribute new methods to improve the temporal locality of software-managed local memories, and optimize the critical path of scheduling data transfers for multi-dimensional arrays in nested loops. In particular, we provide a runtime framework allowing transparent optimization by source-to-source compilers or automatic fine tuning by programmers. Finally, we demonstrate the effectiveness of the approach by comparing against a state-of-the-art language-based framework. For the latter type, those with irregular accesses and non-uniform workload, we analyze how the heavy-tailed property of input graphs limits the scalability of the application. Then, we introduce an application-specific workload model as well as a decomposition method that allows us to optimize locality with the custom load balancing constraints of the application. Finally, we demonstrate unprecedented strong scaling of a contagion simulation on two state-of-the-art high performance computing platforms.
APA, Harvard, Vancouver, ISO, and other styles
40

Alghamdi, Turki A. "Novel localised quality of service routing algorithms : performance evaluation of some new localised quality of service routing algorithms based on bandwidth and delay as the metrics for candidate path selection." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5420.

Full text
Abstract:
The growing demand for a variety of Internet applications requires the management of large-scale networks through efficient Quality of Service (QoS) routing, which contributes considerably to the QoS architecture. The biggest contemporary drawback in the maintenance and distribution of the global state is the increase in communication overheads. Network imbalance, due to the frequent use of the links assigned to the shortest path, which carry most of the network load, is regarded as a major problem for best-effort service. Localised QoS routing, where the source nodes use statistics collected locally, is already described in contemporary sources as more advantageous. Scalability, however, is still one of the main concerns of existing localised QoS routing algorithms. The main aim of this thesis is to present and validate new localised algorithms in order to improve the scalability of QoS routing. Existing localised routing algorithms, Credit Based Routing (CBR) and Proportional Sticky Routing (PSR), use the blocking probability as a factor in selecting the routing paths and work with either credit or flow proportion respectively, which makes it impossible to have up-to-date information. Therefore our proposed Highest Minimum Bandwidth (HMB) and Highest Average Bottleneck Bandwidth History (HABBH) algorithms utilise bandwidth as the direct QoS criterion to select routing paths. We introduce an Integrated Delay Based Routing and Admission Control mechanism. Using this technique, Minimum Total Delay (MTD), Low Fraction Failure (LFF) and Low Path Failure (LPF) were compared against the global QoS routing scheme, Dijkstra, and the localised High Path Credit (HPC) scheme, and showed superior performance. Simulation with non-uniformly distributed traffic reduced the blocking probability of the proposed algorithms. We therefore advocate the algorithms presented in the thesis as a scalable approach to controlling large networks. We strongly suggest that bandwidth and mean delay are feasible QoS constraints for selecting optimal paths from locally collected information. We have demonstrated that a few good candidate paths can be selected to balance the load in the network and minimise communication overhead by applying the disjoint-paths method, recalculation of the candidate path set, and a dynamic path selection method. Thus, localised QoS routing can be used as a load balancing tool in order to improve network resource utilization. A delay and bandwidth combination is one of the future prospects of our work, and the positive results presented in the thesis suggest that further development of a distributed approach to candidate path selection may enhance the proposed localised algorithms.
APA, Harvard, Vancouver, ISO, and other styles
41

Guo, Jia. "Trust-based Service Management of Internet of Things Systems and Its Applications." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82854.

Full text
Abstract:
A future Internet of Things (IoT) system will consist of a huge quantity of heterogeneous IoT devices, each capable of providing services upon request. It is of utmost importance for an IoT device to know if another IoT service is trustworthy when requesting it to provide a service. In this dissertation research, we develop trust-based service management techniques applicable to distributed, centralized, and hybrid IoT environments. For distributed IoT systems, we develop a trust protocol called Adaptive IoT Trust. The novelty lies in the use of distributed collaborative filtering to select trust feedback from owners of IoT nodes sharing similar social interests. We develop a novel adaptive filtering technique to adjust trust protocol parameters dynamically to minimize trust estimation bias and maximize application performance. Our adaptive IoT trust protocol is scalable to large IoT systems in terms of storage and computational costs. We perform a comparative analysis of our adaptive IoT trust protocol against contemporary IoT trust protocols to demonstrate its effectiveness. For centralized or hybrid cloud-based IoT systems, we propose the notion of Trust as a Service (TaaS), allowing an IoT device to query the service trustworthiness of another IoT device and also report its service experiences to the cloud. TaaS preserves the notion that trust is subjective despite the fact that trust computation is performed by the cloud. We use social similarity for filtering recommendations and a dynamic weighted sum to combine self-observations and recommendations, minimizing trust bias and convergence time against opportunistic service and false recommendation attacks. For large-scale IoT cloud systems, we develop a scalable trust management protocol called IoT-TaaS to realize TaaS. For hybrid IoT systems, we develop a new 3-layer hierarchical cloud structure for integrated mobility, service, and trust management. This architecture supports scalability, reconfigurability, fault tolerance, and resiliency against cloud node failure and network disconnection. We develop a trust protocol called IoT-HiTrust leveraging this 3-layer hierarchical structure to realize TaaS. We validate our trust-based IoT service management techniques with real-world IoT applications, including smart city air pollution detection, augmented map travel assistance, and travel planning, and demonstrate that they outperform contemporary non-trusted and trust-based IoT service management solutions.
APA, Harvard, Vancouver, ISO, and other styles
42

Tawiah, Thomas Andzi-Quainoo. "Video content analysis for automated detection and tracking of humans in CCTV surveillance applications." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/7344.

Full text
Abstract:
The problems of achieving a high detection rate with a low false alarm rate for human detection and tracking in video sequences, performance scalability, and improving response time are addressed in this thesis. The underlying causes are the effects of scene complexity, human-to-human interactions, scale changes, and scene background-human interactions. A two-stage processing solution, namely human detection and human tracking, with two novel pattern classifiers is presented. Scale-independent human detection is achieved by processing in the wavelet domain using square wavelet features. These features, used to characterise human silhouettes at different scales, are similar to the rectangular features used in [Viola 2001]. At the detection stage, two detectors are combined to improve the detection rate. The first detector is based on the shape outline of humans extracted from the scene using a reduced-complexity outline extraction algorithm; a shape mismatch measure is used to differentiate between the human and the background class. The second detector uses rectangular features as primitives for silhouette description in the wavelet domain. The marginal distribution of features collocated at a particular position on a candidate human (a patch of the image) is used to describe the silhouette statistically. Two similarity measures are computed between a candidate human and the model histograms of the human and non-human classes, and the similarity measure is used to discriminate between the two classes. At the tracking stage, a tracker based on the joint probabilistic data association filter (JPDAF) for data association and motion correspondence is presented. Track clustering is used to reduce hypothesis enumeration complexity. To improve response time as frame dimensions, scene complexity, and the number of channels increase, a scalable algorithmic architecture and an operating-accuracy prediction technique are presented. A scheduling strategy for improving response time and throughput through parallel processing is also presented.
APA, Harvard, Vancouver, ISO, and other styles
43

Viotti, Paolo. "Cohérence dans les systèmes de stockage distribués : fondements théoriques avec applications au cloud storage." Electronic Thesis or Diss., Paris, ENST, 2017. http://www.theses.fr/2017ENST0016.

Full text
Abstract:
Engineering distributed systems is an onerous task: the design goals of performance, correctness and reliability are intertwined in complex tradeoffs, which have been outlined by multiple theoretical results. These tradeoffs have become increasingly important as computing and storage have shifted towards distributed architectures. Additionally, the general lack of systematic approaches to tackle distribution in modern programming tools has worsened these issues, especially as nowadays most programmers have to take on the challenges of distribution. As a result, there exists an evident divide between programming abstractions, application requirements and storage semantics, which hinders the work of designers and developers. This thesis presents a set of contributions towards the overarching goal of designing reliable distributed storage systems, by examining these issues through the prism of consistency. We begin by providing a uniform, declarative framework to formally define consistency semantics. We use this framework to describe and compare over fifty non-transactional consistency semantics proposed in previous literature. The declarative and composable nature of this framework allows us to build a partial order of consistency models according to their semantic strength. We show the practical benefits of composability by designing and implementing Hybris, a storage system that leverages different models and semantics to improve over the weak consistency generally offered by public cloud storage platforms. We demonstrate Hybris' efficiency and show that it can tolerate arbitrary faults of cloud stores at the cost of tolerating outages. Finally, we propose a novel technique to verify the consistency guarantees offered by real-world storage systems. This technique leverages our declarative approach to consistency: we consider consistency semantics as invariants over graph representations of storage system executions. A preliminary implementation proves this approach practical and useful in improving over the state of the art on consistency verification.
APA, Harvard, Vancouver, ISO, and other styles
44

Dricot, Antoine. "Light-field image and video compression for future immersive applications." Electronic Thesis or Diss., Paris, ENST, 2017. http://www.theses.fr/2017ENST0008.

Full text
Abstract:
Evolutions in video technologies tend to offer increasingly immersive experiences. However, currently available 3D technologies are still very limited and only provide uncomfortable and unnatural viewing situations to the users. The next generation of immersive video technologies therefore appears as a major technical challenge, particularly with the promising light-field (LF) approach. The light-field represents all the light rays (i.e. in all directions) in a scene. New devices for sampling/capturing the light-field of a scene are emerging fast, such as camera arrays or plenoptic cameras based on lenticular arrays. Several kinds of display systems target immersive applications, like head-mounted displays and projection-based light-field display systems, and promising target applications already exist (e.g. 360° video, virtual reality, etc.). For several years now this light-field representation has been drawing a lot of interest from many companies and institutions, for example in the MPEG and JPEG groups. Light-field contents have specific structures and use a massive amount of data, which represents a challenge for setting up future services. One of the main goals of this work is first to assess which technologies and formats are realistic or promising. The study is done through the scope of image/video compression, as compression efficiency is a key factor for enabling these services on the consumer market. Secondly, improvements and new coding schemes are proposed to increase compression performance and enable efficient light-field content transmission on future networks.
APA, Harvard, Vancouver, ISO, and other styles
45

Mlawanda, Joyce. "A comparative study of cloud computing environments and the development of a framework for the automatic deployment of scaleable cloud based applications." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/19994.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2012
ENGLISH ABSTRACT: Modern-day online applications are required to deal with an ever-increasing number of users without decreasing in performance, which implies that the applications should be scalable. Applications hosted on static servers are inflexible in terms of scalability. Cloud computing is an alternative to the traditional paradigm of static application hosting and offers an illusion of infinite compute and storage resources. It is a way of computing whereby computing resources are provided by a large pool of virtualised servers hosted on the Internet. By virtually removing scalability, infrastructure and installation constraints, cloud computing provides a very attractive platform for hosting online applications. This thesis compares the cloud computing infrastructures Google App Engine and Amazon Web Services for hosting web applications, and assesses their scalability performance compared to traditionally hosted servers. After the comparison of the three application hosting solutions, a proof-of-concept software framework for the provisioning and deployment of automatically scaling applications is built on Amazon Web Services, which is shown to be best suited for the development of such a framework.
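A minimal sketch of the kind of threshold-based scaling decision such a framework might automate is given below; the class names, thresholds, and step size are illustrative assumptions, not the thesis's actual code or the AWS Auto Scaling API.

```python
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    min_instances: int = 1
    max_instances: int = 10
    scale_out_cpu: float = 70.0   # add capacity above this average CPU %
    scale_in_cpu: float = 25.0    # remove capacity below this average CPU %

def desired_capacity(current: int, avg_cpu: float, p: ScalingPolicy) -> int:
    """Return the new instance count for one monitoring interval."""
    if avg_cpu > p.scale_out_cpu and current < p.max_instances:
        return current + 1
    if avg_cpu < p.scale_in_cpu and current > p.min_instances:
        return current - 1
    return current

policy = ScalingPolicy()
print(desired_capacity(3, 85.0, policy))  # -> 4: scale out under load
print(desired_capacity(3, 10.0, policy))  # -> 2: scale in when idle
```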
APA, Harvard, Vancouver, ISO, and other styles
46

Sordo, Guido. "Novel multi-modal wideband vibrations MEMS energy harvesting concepts for self-powered Internet of Things (IoT) applications, with focus on converter’s size and power scalability." Doctoral thesis, University of Trento, 2016. http://eprints-phd.biblio.unitn.it/1784/2/PhD-Thesis.pdf.

Full text
Abstract:
This doctoral thesis is focused on the design, fabrication and characterization of Micro Electro Mechanical System (MEMS) Vibrational Energy Harvesters (VEHs). The targeted field of application of such devices is the emerging Internet of Things (IoT), in particular Ultra Low Power (ULP) autonomous applications. In order to realize this ubiquitous paradigm, remote and distributed nodes have to be small and deployed in large numbers. The power requirement of such nodes is generally satisfied by means of batteries, which require periodic replacement and so are not desirable in an autonomous system. To overcome this limitation, devices able to harvest energy from the surrounding environment have been investigated. Among the different sources of energy that could be harvested, vibrational energy is promising due to its high power density and its availability in most environments of interest. The devices developed convert the vibrational energy scattered in the environment into electrical energy by means of a piezoelectric material. The thesis presents studies on both the mechanical and the electrical design of a MEMS piezoelectric VEH, with particular attention to multi-modal design. The thesis presents a novel multi-modal device able to extract energy from multiple resonances in a wider bandwidth. Such a design offers two enabling features for IoT applications, a wider working band and compactness, making it more attractive than cantilever-like devices.
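For a rough sense of scale, the classic Williams-Yates linear resonator model gives the maximum harvestable power at resonance as P_max = m * a^2 / (4 * zeta * omega_n). The sketch below uses this textbook single-mode approximation for illustration; it is an assumption for context, not the thesis's multi-modal model.

```python
import math

def resonant_power(m_kg: float, a_ms2: float, f_hz: float, zeta: float) -> float:
    """Maximum power [W] of a linear resonant harvester driven at resonance
    (Williams-Yates model): P = m * a^2 / (4 * zeta * omega_n)."""
    omega_n = 2 * math.pi * f_hz
    return m_kg * a_ms2**2 / (4 * zeta * omega_n)

# A MEMS-scale example: 1 mg proof mass, 0.5 g excitation at 500 Hz,
# total damping ratio 0.01 -> roughly 0.2 microwatts.
p = resonant_power(1e-6, 0.5 * 9.81, 500.0, 0.01)
print(f"{p * 1e9:.0f} nW")
```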
APA, Harvard, Vancouver, ISO, and other styles
47

Schnorr, Lucas Mello. "Some visualization models applied to the analysis of parallel applications." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2009. http://hdl.handle.net/10183/37179.

Full text
Abstract:
Highly distributed systems such as grids are used today for the execution of large-scale parallel applications. Some characteristics of these systems are the complex resource interconnection that might be present and the scalability. The interconnection complexity comes from the varying number of hops needed for communication among application processes and from differences in network latencies and bandwidth. Scalability means that resources can be added indefinitely just by connecting them to the existing infrastructure. These characteristics directly influence the way the performance of parallel applications must be analyzed. Traditional visualization schemes for this analysis are usually based on Gantt charts, with one dimension listing the monitored entities and the other dedicated to time. These visualizations are generally not suited to parallel applications executed in grids. The first reason is that they were not built to offer the developer an analysis that also shows the network topology of the resources. The second reason is that traditional visualization techniques do not scale well when thousands of monitored entities must be analyzed together. This thesis tries to overcome the issues encountered in traditional visualization techniques for parallel applications. The main idea behind our efforts is to explore techniques from the information visualization research area and to apply them in the context of parallel application analysis. Based on this idea, the thesis proposes two visualization models: the three-dimensional model and the visual aggregation model. The former can be used to analyze parallel applications taking into account the network topology of the resources. The visualization itself is composed of three dimensions, where two of them render the topology and the third represents time. The latter model can be used to analyze parallel applications composed of several thousands of processes. It uses a hierarchical organization of monitoring data and an information visualization technique called Treemap to represent that hierarchy visually. Both models represent a novel way to visualize the behavior of parallel applications, since they are conceived for large-scale and complex distributed systems, such as grids. The implications of this thesis are directly related to the analysis and understanding of parallel applications executed in distributed systems. It enhances the comprehension of communication patterns among processes and improves the possibility of matching these patterns with the real network topology of grids. Although we extensively use the network topology example, the approach could be adapted with almost no changes to the logical interconnection provided by a communication middleware. With the scalable visualization technique, developers are able to look for patterns and observe the behavior of large-scale applications.
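To illustrate the treemap-based aggregation mentioned in this abstract, here is a generic slice-and-dice treemap layout over a hierarchy of monitored entities (site -> host -> process), with rectangle areas proportional to a metric such as CPU time. This is a hypothetical sketch of the general technique, not the thesis's implementation.

```python
def treemap(node, x, y, w, h, depth=0, out=None):
    """Lay out (name, value, children) trees; returns [(name, x, y, w, h)]."""
    if out is None:
        out = []
    name, value, children = node
    out.append((name, x, y, w, h))
    if children:
        total = sum(c[1] for c in children)
        offset = 0.0
        for child in children:
            frac = child[1] / total
            if depth % 2 == 0:   # alternate split direction per level
                treemap(child, x + offset * w, y, w * frac, h, depth + 1, out)
            else:
                treemap(child, x, y + offset * h, w, h * frac, depth + 1, out)
            offset += frac
    return out

# Toy hierarchy: one site, two hosts, three processes with CPU-time weights.
tree = ("site", 10.0, [
    ("host-a", 6.0, [("p0", 4.0, []), ("p1", 2.0, [])]),
    ("host-b", 4.0, [("p2", 4.0, [])]),
])
for name, x, y, w, h in treemap(tree, 0, 0, 100, 100):
    print(f"{name}: {w:.0f} x {h:.0f} at ({x:.0f}, {y:.0f})")
```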
APA, Harvard, Vancouver, ISO, and other styles
48

Rajbhandari, Samyam. "Locality Optimizations for Regular and Irregular Applications." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1469033289.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Demaret, Laurent. "Etude de la scalabilité et de la représentation d'images fixes par maillages hiérarchiques exploitant les éléments finis et les ondelettes bidimensionnelles. Application au codage vidéo." Rennes 1, 2002. http://www.theses.fr/2002REN10147.

Full text
Abstract:
This thesis is dedicated to the application of triangular meshes to still-image compression. We aim to prove the efficiency of hierarchical representations for lossy coding. We begin with two studies on approximation models based on Hermite finite elements and on barycentric subdivision, respectively. We then address the main topic: the multiresolution offered by hierarchical finite elements. The proposed coding methods exploit the heterogeneity of the statistical distribution of high-amplitude coefficients. We then developed original methods for constructing orthogonal pre-wavelets based on linear finite elements. Overall, this work demonstrates the potential of mesh-based coding schemes and their good performance when compared to the best current compression standards.
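The basic building block of such mesh-based image approximation is linear (P1) interpolation over a triangle: each pixel inside a triangle is reconstructed from the intensity values stored at its three vertices via barycentric coordinates. The sketch below is illustrative only and not taken from the thesis.

```python
def barycentric(px, py, a, b, c):
    """Barycentric coordinates of point (px, py) in triangle (a, b, c)."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return l1, l2, 1.0 - l1 - l2

def interpolate(px, py, tri, values):
    """Linear (P1) reconstruction of an image sample inside one triangle."""
    l1, l2, l3 = barycentric(px, py, *tri)
    return l1 * values[0] + l2 * values[1] + l3 * values[2]

tri = ((0.0, 0.0), (10.0, 0.0), (0.0, 10.0))
print(interpolate(2.0, 2.0, tri, (100.0, 200.0, 50.0)))  # -> 110.0
```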
APA, Harvard, Vancouver, ISO, and other styles
50

Sordo, Guido. "Novel multi-modal wideband vibrations MEMS energy harvesting concepts for self-powered Internet of Things (IoT) applications, with focus on converter’s size and power scalability." Doctoral thesis, Università degli studi di Trento, 2016. https://hdl.handle.net/11572/367898.

Full text
Abstract:
This doctoral thesis is focused on the design, fabrication and characterization of Micro Electro Mechanical System (MEMS) Vibrational Energy Harvesters (VEHs). The targeted field of application of such devices is the emerging Internet of Things (IoT), in particular Ultra Low Power (ULP) autonomous applications. In order to realize this ubiquitous paradigm, remote and distributed nodes have to be small and deployed in large numbers. The power requirement of such nodes is generally satisfied by means of batteries, which require periodic replacement and so are not desirable in an autonomous system. To overcome this limitation, devices able to harvest energy from the surrounding environment have been investigated. Among the different sources of energy that could be harvested, vibrational energy is promising due to its high power density and its availability in most environments of interest. The devices developed convert the vibrational energy scattered in the environment into electrical energy by means of a piezoelectric material. The thesis presents studies on both the mechanical and the electrical design of a MEMS piezoelectric VEH, with particular attention to multi-modal design. The thesis presents a novel multi-modal device able to extract energy from multiple resonances in a wider bandwidth. Such a design offers two enabling features for IoT applications, a wider working band and compactness, making it more attractive than cantilever-like devices.
APA, Harvard, Vancouver, ISO, and other styles