
Dissertations / Theses on the topic 'Clusters and Grid Computing'



Consult the top 50 dissertations / theses for your research on the topic 'Clusters and Grid Computing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Stewart, Sean. "Deploying a CMS Tier-3 Computing Cluster with Grid-enabled Computing Infrastructure." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2564.

Full text
Abstract:
The Large Hadron Collider (LHC), whose experiments include the Compact Muon Solenoid (CMS), produces over 30 million gigabytes of data annually, and implements a distributed computing architecture—a tiered hierarchy, from Tier-0 through Tier-3—in order to process and store all of this data. Out of all of the computing tiers, Tier-3 clusters allow scientists the most freedom and flexibility to perform their analyses of LHC data. Tier-3 clusters also provide local services such as login and storage services, provide a means to locally host and analyze LHC data, and allow both remote and local users to submit grid-based jobs. Using the Rocks cluster distribution software version 6.1.1, along with the Open Science Grid (OSG) roll version 3.2.35, a grid-enabled CMS Tier-3 computing cluster was deployed at Florida International University’s Modesto A. Maidique campus. Validation metric results from Ganglia, MyOSG, and CMS Dashboard verified a successful deployment.
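A deployment like this is typically validated by confirming that every node reports into Ganglia. As a hedged illustration (not part of the thesis), a minimal Python sketch that pulls the cluster-state XML a Ganglia gmond daemon serves on its default TCP port 8649 and lists the reporting hosts; the head-node name is hypothetical:

```python
import socket
import xml.etree.ElementTree as ET

def ganglia_hosts(gmond_host, port=8649):
    """Read the cluster-state XML that gmond dumps to any TCP client
    and return the names of all hosts currently reporting."""
    chunks = []
    with socket.create_connection((gmond_host, port), timeout=5) as s:
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    root = ET.fromstring(b"".join(chunks))
    return [h.get("NAME") for h in root.iter("HOST")]

# Hypothetical head node; every deployed worker should appear here.
# print(ganglia_hosts("tier3-head.example.edu"))
```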
APA, Harvard, Vancouver, ISO, and other styles
2

Petersen, Karsten. "Management-Elemente für mehrdimensional heterogene Cluster." Master's thesis, Universitätsbibliothek Chemnitz, 2003. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200300851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Watkins, Lanier A. "Using Network Traffic to Infer CPU and Memory Utilization for Cluster Grid Computing Applications." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/cs_diss/52.

Full text
Abstract:
In this body of work, we present the details of a novel method for passive resource discovery in cluster grid environments where resources constantly utilize inter-node communication. Our method offers the ability to non-intrusively identify resources that have available memory or CPU cycles; this is critical for lowering queue wait times in large cluster grid networks, and for memory-intensive cluster grid applications such as Gaussian (a computational chemistry package) and the Weather Research and Forecasting (WRF) modeling package. The benefits include: (1) low message complexity, (2) scalability, (3) load-balancing support, and (4) low maintenance overhead. Using several test-beds (i.e., a small local test-bed and a 50-node Deterlab test-bed), we demonstrate the feasibility of our method with experiments utilizing TCP, UDP and ICMP network traffic. Using this technique, we observed a correlation between memory or CPU load and the timely response of network traffic. We have observed that highly utilized (due to multi-programming) nodes run numerous active processes which require context switching or paging. The latency associated with numerous context switches or paging manifests as a delay signature within the packet transmission process. Our method detects this delay signature to determine the utilization of network resources. The aforementioned delay signature is the keystone that provides a correlation between network traffic and the internal state of the source node. We characterize this delay signature due to CPU utilization by (1) identifying the different types of assembly language instructions that source this delay and (2) describing how performance-enhancing techniques (e.g., instruction pipelining, caching) impact this delay signature, using the LEON3 implemented on a 40 MHz development board. At the software level, results for medium-sized networks show that our method can consistently and accurately identify nodes with available memory or CPU cycles (< 70% availability). At the hardware level, our results show that excessive context switching in active applications increases the average memory access time, thus adding additional delay to the execution of LD instructions. Additionally, internal use of these instructions in heavily utilized situations to send network packets induces the delay signature into network traffic.
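The core signal here is that latency inflation in a node's network responses correlates with its CPU and memory load. A rough Python sketch of that idea (not the author's implementation; the host, port, and thresholds are made up for illustration):

```python
import socket
import statistics
import time

def probe_rtts(host, port, samples=20, timeout=1.0):
    """Collect TCP connect round-trip times to a node, in seconds."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append(time.perf_counter() - start)
        except OSError:
            rtts.append(timeout)   # count failures as worst-case delay
        time.sleep(0.05)           # space probes out so they do not queue
    return rtts

def looks_loaded(rtts, idle_rtt, factor=3.0):
    """Flag a node whose median RTT is well above its idle baseline:
    the 'delay signature' the thesis attributes to context switching
    and paging on busy hosts."""
    return statistics.median(rtts) > factor * idle_rtt

# Hypothetical usage against a worker's SSH port:
# print(looks_loaded(probe_rtts("node17.cluster.local", 22), idle_rtt=0.002))
```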
APA, Harvard, Vancouver, ISO, and other styles
4

Schuch, Silke. "Konzeption und Umsetzung einer Plattform zur Rechnerverwaltung und Auftragsentwicklung für heterogene Clustersysteme." Aachen Shaker, 2009. http://d-nb.info/1000958655/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Barreto, Marcos Ennes. "MultiCluster : um modelo de integração baseado em rede peer-to-peer para a concepção de grades locais." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2010. http://hdl.handle.net/10183/22807.

Full text
Abstract:
Grid computing and peer-to-peer (P2P) computing emerged as distinct areas with different purposes, models and tools. Over the last years, these areas have been converging, since the infrastructure and the execution model used in peer-to-peer networks have proven to be a suitable way to treat some problems related to the maintenance of large-scale grids, such as scalability, monitoring, and resource discovery and allocation. The MultiCluster model addresses the convergence of grids and peer-to-peer networks in a more restricted way: the problems related to scalability and to resource discovery and allocation are minimized by considering only locally available resources for the conception of a small-scale grid, which can be used to run applications with different characteristics of granularity and communication. This work presents the MultiCluster architecture and its functional aspects, as well as a first implementation, carried out by adapting the DECK programming library to use the JXTA protocols, and its evaluation based on applications with different characteristics.
APA, Harvard, Vancouver, ISO, and other styles
6

Oliveira, Juliano Amorim de. "Um estudo comparativo de cargas de trabalho e políticas de escalonamento para aplicações paralelas em clusters e grids computacionais." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-12012007-143257/.

Full text
Abstract:
Several scheduling policies for parallel applications in distributed computing environments have been proposed. Although such policies present good results, they are generally evaluated in specific scenarios. When the scenario changes, with different distributed environments and workload conditions, the performance of these policies can degrade. In this context, this work presents a comparative study of ten scheduling policies evaluated in different scenarios. Each policy was submitted to a combination of four CPU-occupation workloads and three variations of the average interprocess communication rate over the network. Three different distributed systems were considered: two clusters, with different numbers of nodes, and one computational grid. Simulations used environments close to real ones and workloads derived from realistic models. The results showed that, although the policies target parallel and distributed environments, when the scenario changes the performance drops and the ranking among the policies changes as well. The results also demonstrated the need to take interprocess communication into account when scheduling on computational grids.
APA, Harvard, Vancouver, ISO, and other styles
7

Donassolo, Bruno Luis de Moura. "Análise do comportamento não cooperativo em computação voluntária." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/32857.

Full text
Abstract:
Advances in internetworking technology and computing components have enabled Volunteer Computing (VC) systems, which allow volunteers to donate their computers' idle CPU cycles to a given project. BOINC is the most popular VC infrastructure today, with over 5,900,000 hosts that deliver over 4,003 TeraFLOPS per day. BOINC projects usually have hundreds of thousands of independent tasks and are interested in overall throughput. Each project has its own server, which is responsible for distributing work units to clients, recovering results and validating them. The BOINC scheduling algorithms are complex and have been in use for many years now. Their efficiency and fairness have been assessed in the context of throughput-oriented projects. Yet, recently, burst projects, with fewer tasks and an interest in response time, have emerged. Many works have proposed new scheduling algorithms to optimize individual response time, but their use may be problematic in the presence of other projects. In this text, we study the consequences of non-cooperative behavior in volunteer computing environments. In order to perform our study, we needed to modify the SimGrid simulator to improve its performance when simulating VC systems. The first contribution is thus a set of improvements to SimGrid's simulation core that remove its performance bottlenecks. The result is a simulator considerably faster than previous versions and able to run VC experiments. As a second major contribution, we show that the commonly used BOINC scheduling algorithms are unable to enforce fairness and project isolation: burst projects may dramatically impact the performance of all other projects (burst or non-burst). To study such interactions, we perform a detailed multi-player and multi-objective game-theoretic study. Our analysis and experiments provide a good understanding of the impact of the different scheduling parameters and show that non-cooperative optimization may result in an inefficient and unfair share of the resources.
APA, Harvard, Vancouver, ISO, and other styles
8

Lacks, Daniel Jonathan. "MODELING, DESIGN AND EVALUATION OF NETWORKING SYSTEMS AND PROTOCOLS THROUGH SIMULATION." Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3792.

Full text
Abstract:
Computer modeling and simulation is a practical way to design and test a system without actually having to build it. Simulation has benefits in many different domains: it reduces the cost of building prototypes for mechanical engineers, increases the safety of chemical engineers exposed to dangerous chemicals, speeds up the modeling of physical reactions, and trains soldiers to prepare for battle. The motivation behind this work is to build a common software framework that can be used to create new networking simulators on top of an HLA-based federation for distributed simulation. The goals are to model and simulate networking architectures and protocols by developing a common underlying simulation infrastructure, and to reduce the time a developer has to spend learning the semantics of message passing and time management, freeing more time for experimentation, data collection and reporting. This is accomplished by evolving the simulation engine through three different applications that model three different types of network protocols. Computer networking is a good candidate for simulation because the Internet's rapid growth has spawned the need for new protocols and algorithms and the desire for a common infrastructure to model them. One simulation, the 3DInterconnect simulator, simulates data transmission through a hardware k-ary n-cube network interconnect. Performance results show that k-ary n-cube topologies can sustain higher traffic load than the currently used interconnects. The second simulator, the Cluster Leader Logic Algorithm Simulator, simulates an ad-hoc wireless routing protocol that uses a data distribution methodology based on the GPS-QHRA routing protocol. The CLL algorithm can realize a maximum of 45% power savings and a maximum of 25% reduced queuing delay compared to GPS-QHRA. The third simulator simulates a grid resource discovery protocol that helps Virtual Organizations find resources on a grid network on which to compute or store data. Results show that, in the worst case, 99.43% of the discovery messages are able to find a resource provider to use for computation. The simulation engine was then built to perform basic HLA operations. Results show successful HLA functions, including creating, joining, and resigning from a federation, time management, and event publication and subscription.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Engineering PhD
APA, Harvard, Vancouver, ISO, and other styles
9

Navarro, Martín Joan. "From cluster databases to cloud storage: Providing transactional support on the cloud." Doctoral thesis, Universitat Ramon Llull, 2015. http://hdl.handle.net/10803/285655.

Full text
Abstract:
Over the past three decades, technology constraints (e.g., capacity of storage devices, communication network bandwidth) and an ever-increasing set of user demands (e.g., information structures, data volumes) have driven the evolution of distributed databases. Since the flat-file data repositories developed in the early eighties, there have been important advances in concurrency control algorithms, replication protocols, and transaction management. However, modern concerns in data storage posed by Big Data and cloud computing—related to overcoming the scalability and elasticity limitations of classic databases—are pushing practitioners to relax some important properties featured by transactions, which excludes several applications that are unable to fit this strategy due to their intrinsic transactional nature. The purpose of this thesis is to address two important challenges still latent in distributed databases: (1) the scalability limitations of transactional databases and (2) providing transactional support on cloud-based storage repositories. Analyzing the traditional concurrency control and replication techniques used by classic databases to support transactions is critical to identify the reasons that make these systems degrade their throughput when the number of nodes and/or the amount of data rockets. Besides, this analysis is devoted to justifying the design rationale behind cloud repositories, in which transactions have been generally neglected. Furthermore, enabling applications that are strongly dependent on transactions to take advantage of the cloud storage paradigm is crucial for their adaptation to current data demands and business models. This dissertation starts by proposing a custom protocol simulator for static distributed databases, which serves as a basis for revising and comparing the performance of existing concurrency control protocols and replication techniques. As this thesis is especially concerned with transactions, the effects of different transaction profiles under different conditions on database scalability are studied. This analysis is followed by a review of existing cloud storage repositories—which claim to be highly dynamic, scalable, and available—leading to an evaluation of the parameters and features that these systems have sacrificed in order to meet current large-scale data storage demands. To further explore the possibilities of the cloud computing paradigm in a real-world scenario, a cloud-inspired approach to store data from Smart Grids is presented. More specifically, the proposed architecture combines classic database replication techniques and epidemic update propagation with the design principles of cloud-based storage. The key insights collected when prototyping the replication and concurrency control protocols in the database simulator, together with the experiences derived from building a large-scale storage repository for Smart Grids, are wrapped up into what we have coined Epidemia: a storage infrastructure conceived to provide transactional support on the cloud. In addition to inheriting the benefits of highly scalable cloud repositories, Epidemia includes a transaction management layer that forwards client transactions to a hierarchical set of data partitions, which allows the system to offer different consistency levels and elastically adapt its configuration to incoming workloads. Finally, experimental results highlight the feasibility of our contribution and encourage practitioners to further research in this area.
APA, Harvard, Vancouver, ISO, and other styles
10

Peretti-Pezzi, Guilherme. "Simulations hydrauliques d'haute performance dans la Grille avec Java et ProActive." Phd thesis, Université Nice Sophia Antipolis, 2011. http://tel.archives-ouvertes.fr/tel-00977574.

Full text
Abstract:
Optimizing water distribution is a crucial challenge that has already been addressed by numerous modeling tools. Useful models, implemented decades ago, need to evolve toward more recent formalisms and computing environments. This thesis presents the redesign of an old hydraulic simulation software package (IRMA), written in FORTRAN, which has been used for over 30 years by the Société du Canal de Provence to design and maintain water distribution networks. IRMA was developed mainly for irrigation networks, using Clément's probabilistic demand-estimation model, and today it manages over 6,000 km of pressurized water networks. The growing complexity and size of these networks highlight the need to modernize IRMA and rewrite it in a more current language (Java). This thesis presents the simulation model implemented in IRMA, including the head-loss equations, the linearization methods, the topology analysis algorithms, the equipment modeling, and the construction of the linear system. Some new types of simulation are presented: peak demand with probabilistic consumption estimation (Clément flow), pump sizing (indexed characteristics), pipe diameter optimization, and pressure-dependent consumption. The new solution adopted for solving the linear system is described, and a comparison with existing Java solvers is presented. The results are validated first by comparing the output of the old FORTRAN version with the new solution for all networks maintained by the Société du Canal de Provence, and second by comparing results against a standard, well-known simulation tool (EPANET). Concerning performance, sequential timing measurements are presented and compared with the old FORTRAN version. Finally, two use cases demonstrate the ability to run distributed simulations on a grid infrastructure using ProActive. The new solution has already been deployed in a production environment and clearly demonstrates its efficiency, with a significant reduction in computation time, improved result quality, and easier integration into the Société du Canal de Provence's information system, in particular its spatial database.
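As a concrete illustration of the Clément demand model mentioned above, a minimal Python sketch of Clément's first formula for the probabilistic peak flow of an on-demand irrigation network (a textbook rendering under stated assumptions, not code from IRMA; requires Python 3.8+ for statistics.NormalDist):

```python
from math import sqrt
from statistics import NormalDist

def clement_peak_flow(hydrants, quality=0.95):
    """Clément's first formula: expected total discharge plus a
    normal-quantile safety margin on its standard deviation.

    hydrants: (p, d) pairs, p = probability a hydrant is open during
              the peak period, d = its nominal discharge.
    quality:  operating quality, the probability the estimate holds.
    """
    u = NormalDist().inv_cdf(quality)        # standard normal quantile
    mean = sum(p * d for p, d in hydrants)   # expected total discharge
    variance = sum(p * (1 - p) * d * d for p, d in hydrants)
    return mean + u * sqrt(variance)

# Example: 50 identical hydrants of 5 L/s, each open 30% of the time.
print(clement_peak_flow([(0.3, 5.0)] * 50))  # about 102 L/s
```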
APA, Harvard, Vancouver, ISO, and other styles
11

Petersen, Karsten. "Grid Computing - Eine Einführung." Universitätsbibliothek Chemnitz, 2003. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200301292.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Morel, Matthieu. "Components for grid computing." Nice, 2006. http://www.theses.fr/2006NICE4086.

Full text
Abstract:
This thesis aims at facilitating the design and deployment of distributed applications on Grids, using a component-based programming approach. The specific issues in Grid computing addressed by this proposal are: complexity of design, deployment, flexibility, and high performance. We propose and justify a component model and an implementation framework. Our component model is grounded in the Fractal component model and the active object model. It takes advantage of, first, the hierarchical structure, well-defined semantics and extensibility of the Fractal model and, second, the identification of components as configurable activities. We define a deployment model based on the concept of virtual architectures, and we propose primitives for collective communications through the specification of collective interfaces. Collective interfaces handle data distribution, parallelism and synchronization of invocations. They establish a basis for defining complex interactions between multiple components. We realized an implementation of this model on top of the ProActive Grid middleware, thereby benefiting from its underlying features. We demonstrate the scalability and efficiency of the framework by developing a compute- and communication-intensive application and deploying it on several hundred nodes, and we take advantage of the collective interfaces to develop and benchmark a component-based SPMD application.
APA, Harvard, Vancouver, ISO, and other styles
13

Copaja, Cornejo Richard Nivaldo. "Grid computing para propósitos científicos." Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2007. https://hdl.handle.net/20.500.12672/14091.

Full text
Abstract:
This research seeks to highlight the benefits of Grid Computing in the development of scientific research projects. The evolution of high-speed communication networks has created an ideal scenario for the development of this technology, which will provide functionality analogous to that of electric power grids: a single access point to a set of geographically distributed resources such as supercomputers, clusters, storage systems, information sources, instruments and personnel. Current Grid Computing technology offers the minimum functionality needed to transparently and securely share and simultaneously exploit resources belonging to different organizations, while respecting their own security and resource-management policies and procedures. The proposal constitutes a viable solution for the dissemination and creation of a university Grid, covering metropolitan Lima in a first stage and, in the future, the whole country, thereby contributing to raising the level of Peruvian scientific research.
Trabajo de suficiencia profesional
APA, Harvard, Vancouver, ISO, and other styles
14

Wang, Lizhe. "Virtual environments for Grid computing." Karlsruhe : Universitätsverlag, 2008. http://digbib.ubka.uni-karlsruhe.de/volltexte/1000009892.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Avila, George Himer. "Constructing Covering Arrays using Parallel Computing and Grid Computing." Doctoral thesis, Universitat Politècnica de València, 2012. http://hdl.handle.net/10251/17027.

Full text
Abstract:
A good strategy to test a software component involves the generation of the whole set of cases that participate in its operation. While testing only individual values may not be enough, exhaustive testing of all possible combinations is not always feasible. An alternative technique to accomplish this goal is called combinatorial testing. Combinatorial testing is a method that can reduce cost and increase the effectiveness of software testing for many applications. It is based on constructing functional test-suites of economical size, which provide coverage of the most prevalent configurations. Covering arrays are combinatorial objects that have been applied to perform functional tests of software components. The use of covering arrays allows testing all the interactions of a given size among the input parameters using the minimum number of test cases. For software testing, the fundamental problem is finding a covering array with the minimum possible number of rows, thus reducing the number of tests, the cost, and the time expended on the software testing process. Because of the importance of constructing (near) optimal covering arrays, much research has been carried out into developing effective methods for building them. There are several reported methods for constructing these combinatorial models, among them: (1) algebraic methods, (2) recursive methods, (3) greedy methods, and (4) metaheuristic methods. Metaheuristic methods, particularly the application of simulated annealing, have provided the most accurate results in several instances to date. Simulated annealing is a general-purpose stochastic optimization method that has proved to be an effective tool for approximating globally optimal solutions to many optimization problems. However, one of its major drawbacks is the time it requires to obtain good solutions. In this thesis, we propose the development of an improved simulated annealing algorithm.
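To make the covering-array notion concrete, a short Python check (illustrative only, unrelated to the thesis' simulated annealing code) that verifies the defining property: every choice of t columns must exhibit all v**t symbol combinations:

```python
from itertools import combinations

def is_covering_array(rows, t, v):
    """True if every choice of t columns covers all v**t value tuples."""
    k = len(rows[0])
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in rows}
        if len(seen) < v ** t:
            return False
    return True

# CA(4; 2, 3, 2): four tests cover every pair of values across
# three binary parameters (versus 2**3 = 8 exhaustive tests).
ca = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]
print(is_covering_array(ca, t=2, v=2))  # True
```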
APA, Harvard, Vancouver, ISO, and other styles
16

Doan, Trung-Tung. "Epidémiologie moléculaire et métagénomique à haut débit sur la grille." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00778073.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Mehdi, Malika. "PARALLEL HYBRID OPTIMIZATION METHODS FOR PERMUTATION BASED PROBLEMS." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2011. http://tel.archives-ouvertes.fr/tel-00841962.

Full text
Abstract:
Efficiently solving large permutation-based optimization problems requires the development of complex hybrid methods combining different classes of optimization algorithms. Hybridizing metaheuristics with exact tree-based methods, such as the branch-and-bound (B&B) algorithm, yields a new class of algorithms that is more efficient than either class of methods used separately. The main challenge in developing such methods is to find links between the divergent search strategies used in the two classes of methods. Genetic Algorithms (GAs) are very popular population-based metaheuristics built on stochastic operators inspired by the theory of evolution. Unlike GAs and metaheuristics in general, B&B algorithms are based on the implicit enumeration of the search space, represented as a so-called search tree. Our hybridization approach consists in defining a common encoding of solutions and of the search space, together with suitable search operators, to enable an efficient low-level coupling of the two classes of methods, GAs and B&B. Representing the search space as a tree is traditional in B&B algorithms; in this thesis, this representation has been adapted to metaheuristics. Encoding permutations as natural numbers that refer to the lexicographic enumeration order of the permutations in the B&B tree is proposed as a new way of representing the search space of permutation problems in metaheuristics. This encoding is based on mathematical properties of permutations, namely Lehmer codes, inversion tables, and the factorial number system. Transformation functions for converting between the two representations (permutations and numbers), as well as search operators adapted to the encoding, are defined for generalized permutation problems. This representation, now common to metaheuristics and B&B algorithms, allowed us to design efficient hybridization and collaboration strategies between GAs and B&B. Indeed, two hybridization approaches between GAs and B&B algorithms (HGABB and COBBIGA), based on this common representation, are proposed in this thesis. For validation, an implementation was carried out for the three-dimensional quadratic assignment problem (Q3AP). In order to solve large instances of this problem, we also proposed a parallelization of the two hybrid algorithms based on search-space decomposition techniques (interval decomposition) previously used for parallelizing B&B algorithms. On the implementation side, to ease future design and implementation of hybrid methods combining metaheuristics and exact tree-based methods, we developed a hybridization platform integrated into the metaheuristics framework ParadisEO. The new platform was used to carry out intensive experiments on the Grid'5000 computing grid.
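The permutation-as-number encoding described above is essentially permutation ranking in the factorial number system. A compact Python sketch of the idea (the thesis' actual operators are richer than this):

```python
def perm_to_rank(perm):
    """Lehmer code of a permutation of 0..n-1, read as factoradic digits:
    rank 0 is the identity, rank n!-1 the fully reversed permutation."""
    n, rank = len(perm), 0
    for i in range(n):
        smaller_right = sum(perm[j] < perm[i] for j in range(i + 1, n))
        rank = rank * (n - i) + smaller_right
    return rank

def rank_to_perm(rank, n):
    """Inverse mapping: peel off factoradic digits, then pick symbols."""
    digits = []
    for base in range(1, n + 1):        # least significant digit first
        rank, d = divmod(rank, base)
        digits.append(d)
    symbols = list(range(n))
    return [symbols.pop(d) for d in reversed(digits)]

perm = [2, 0, 3, 1]
r = perm_to_rank(perm)
print(r, rank_to_perm(r, 4))            # 13 [2, 0, 3, 1]
```

With such a bijection, a B&B interval of lexicographic ranks and a GA individual can denote the same region of the search space, which is what makes the low-level coupling and the interval-based decomposition possible.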
APA, Harvard, Vancouver, ISO, and other styles
18

Koehler, Stephan. "Video Streams in a Computing Grid." Thesis, KTH, School of Information and Communication Technology (ICT), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-24271.

Full text
Abstract:
The growth of online video services such as YouTube has enabled a new broadcasting medium for video. Similarly, consumer television is moving from analog to digital distribution of video content. Being able to manipulate the video stream by integrating a video or image overlay while streaming could enable a personalized video stream for each viewer. This master thesis explores the digital video domain to understand how streaming video can be efficiently modified, and designs and implements a prototype system for distributed video modification and streaming.

The thesis starts by examining standards and protocols related to video coding, formats and network distribution. To support multiple concurrent video streams to users, a distributed data and compute grid is used to create a scalable system for video streaming. Several (commercial) products are examined, and GigaSpaces is found to provide the best features for implementing the prototype. Furthermore, third-party libraries such as libavcodec by FFMPEG and JBoss Netty are selected for video coding and network streaming, respectively. The prototype design is then formulated, including the design choices, the functionality in terms of user stories, the components that make up the system, and the flow of events in the system. Finally, the implementation is described, followed by an evaluation of fault tolerance, throughput, scalability and configuration. The evaluation shows that the prototype is fault tolerant and that its throughput scales both vertically and horizontally.

Intended audience: This thesis focuses on topics in the area of general computer science and network technology. It is therefore assumed that the reader has knowledge of basic concepts and techniques in these areas. More specifically, this report focuses on topics related to digital video and distributed computer systems. Knowledge in these areas is helpful but not required.
APA, Harvard, Vancouver, ISO, and other styles
19

Polze, Andreas, and Bettina Schnor. "Grid-Computing : [Seminar im Sommersemester 2003]." Universität Potsdam, 2005. http://opus.kobv.de/ubp/volltexte/2009/3316/.

Full text
Abstract:
1. Applications for widely distributed computing (Dennis Klemann, Lars Schmidt-Bielicke, Philipp Seuring)
2. The Globus Toolkit (Dietmar Bremser, Alexis Krepp, Tobias Rausch)
3. Open Grid Services Architecture (Lars Trieloff)
4. Condor, Condor-G, Classad (Stefan Henze, Kai Köhne)
5. The Cactus Framework (Thomas Hille, Martin Karlsch)
6. High-performance scheduling with Maui/PBS (Ole Weidner, Jörg Schummer, Benedikt Meuthrath)
7. Bandwidth monitoring with NWS (Alexander Ritter, Gregor Höfert)
8. The Paradyn Parallel Performance Measurement Tool (Jens Ulferts, Christian Liesegang)
9. Grid applications in practice (Steffen Bach, Michael Blume, Helge Issel)
APA, Harvard, Vancouver, ISO, and other styles
20

Cai, Wei. "Reconfigurable resource management in grid computing." Thesis, Lancaster University, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.507276.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Ong, Sze Hwei 1979. "Grid computing : business and policy implications." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/30035.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2003. Includes bibliographical references (leaves 84-86).
The Grid is a distributed computing infrastructure that facilitates the exchange of expertise and resources. It is somewhat analogous to the electric power grid in that it can potentially provide a universal source of IT resources that can have a huge impact on human capabilities and on society as a whole. Currently the Grid is being deployed (in limited ways) in some research and academic institutions. As Grid computing technologies mature further, the commercial sector can also benefit. With Grid technologies enabling utility computing, enterprises will be able to access IT resources on demand in a utility-like way. This thesis gives a brief introduction to Grids and looks back at the history of power grids for lessons learned. It suggests that the Grid and the power grid are both infrastructures, and that reliability, standardization, universal access and affordability are necessary to ensure the success of any infrastructure. Once the Grid is successful, it can open up new opportunities in the field of utility computing and change IT provision in the commercial sector. The new utility computing ecosystem would consist of five major players: the Grid resource supplier, the Grid infrastructure supplier, the utility service provider, the re-seller, and the end user. Further industry analysis reveals that there are new roles for current players in the traditional IT provision industry and opportunities for new entrants in this new ecosystem. The thesis attempts to identify the characteristics of each of the five major players to help the IT industry better understand the requirements of these new roles. Current players in the IT provision industry would have to decide which of the above roles to play in this new utility computing ecosystem and re-define their market strategies accordingly. New entrants would likely be players in the telecommunication sector who want a share of this growing pie and whose existing relationships with bandwidth subscribers can be leveraged. The thesis concludes with recommendations on several policy issues: Grid standardization for interoperability, decentralized Grid governance to encourage optimal resource sharing, and mechanisms for transcending the cultural and organizational barriers inhibiting the commercial adoption of Grid computing.
by Sze Hwei Ong.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
22

Constantinescu-Fuløp, Zoran. "A Desktop Grid Computing Approach for Scientific Computing and Visualization." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-2191.

Full text
Abstract:
Scientific Computing is the collection of tools, techniques, and theories required to solve, on a computer, mathematical models of problems from science and engineering; its main goal is to gain insight into such problems. It is generally difficult to understand or communicate information from the complex or large datasets generated by Scientific Computing methods and techniques (computational simulations, complex experiments, observational instruments, etc.). The support of Scientific Visualization is therefore needed to provide the techniques, algorithms, and software tools required to extract and appropriately display the important information in numerical data.

Complex computational and visualization algorithms usually require large amounts of computational power. The computing power of a single desktop computer is insufficient for running such complex algorithms, and, traditionally, large parallel supercomputers or dedicated clusters were used for this job. However, very high initial investments and maintenance costs limit the availability of such systems. A more convenient solution, which is becoming increasingly popular, is based on the use of non-dedicated desktop PCs in a Desktop Grid Computing environment. This is done by harnessing the idle CPU cycles, storage space and other resources of networked computers to work together on a particularly computation-intensive application. The increasing power and communication bandwidth of desktop computers make this solution practical.

In a desktop grid system, the execution of an application is orchestrated by a central scheduler node, which distributes the tasks amongst the worker nodes and awaits the workers' results. An application finishes only when all tasks have been completed. The attractiveness of exploiting desktop grids is further reinforced by the fact that costs are highly distributed: every volunteer contributes her own resources (hardware, power costs and Internet connections), while the benefiting entity provides the management infrastructure, namely network bandwidth, servers and management services, receiving in exchange a massive and otherwise unaffordable computing power. The usefulness of desktop grid computing is not limited to major high-throughput public computing projects. Many institutions, ranging from academia to enterprises, hold vast numbers of desktop machines and could benefit from exploiting the idle cycles of their local machines.

In the work presented in this thesis, the central idea has been to provide a desktop grid computing framework and to prove its viability by testing it in some Scientific Computing and Visualization experiments. We present QADPZ, an open source system for desktop grid computing that has been developed to meet the needs presented above. QADPZ enables users from a local network or the Internet to share their resources. It is a multi-platform, heterogeneous system in which different computing resources from inside an organization can be used. It can also be used for volunteer computing, where the communication infrastructure is the Internet. QADPZ natively supports the following operating systems: Linux, Windows, MacOS and Unix variants. The reason behind natively supporting multiple operating systems, and not only one (Unix or Windows, as other systems do), is that in real life this kind of limitation often severely restricts the usability of desktop grid computing.

QADPZ provides a flexible object-oriented software framework that makes it easy for programmers to write various applications, and for researchers to address issues such as adaptive parallelism, fault tolerance, and scalability. The framework also supports the execution of legacy applications, which for various reasons cannot be rewritten, and this makes it suitable for other domains such as business. It supports both low-level programming languages such as C/C++ and high-level language applications (e.g. Lisp, Python, and Java), and provides the necessary mechanisms to use such applications in a computation. Consequently, users with various backgrounds can benefit from using QADPZ. The flexible object-oriented structure and the modularity allow easy improvements and further extensions to other programming languages.

We have developed a general-purpose runtime and an API to support new kinds of high performance computing applications, and thereby to benefit from the advantages offered by desktop grid computing. This API directly supports the C/C++ programming language. We have shown how distributed computing extends beyond the master-worker paradigm (typical for such systems) and provided QADPZ with an extended API that additionally supports lightweight tasks and parallel computing (using the message passing paradigm, MPI). This extends the range of supported applications to already existing MPI-based applications, e.g. parallel numerical solvers used in computational science, or parallel visualization algorithms.

Another restriction of existing systems, especially middleware-based ones, is that each resource provider needs to install a runtime module with administrator privileges. This raises issues regarding data integrity and accessibility on providers' computers. The QADPZ system overcomes this by allowing the middleware module to run as a non-privileged user, even with restricted access to the local system.

QADPZ also provides low-level optimizations, such as on-the-fly compression and encryption for communication. The user can choose from different algorithms, depending on the application, both reducing the communication overhead imposed by large data transfers and keeping the data private. The system goes further by providing an experimental, adaptive compression algorithm, which can transparently choose different algorithms to improve application performance. QADPZ supports two different protocols (UDP and TCP/IP) in order to improve the efficiency of communication.

Free source code allows flexible installation and modification based on the particular needs of research projects and institutions. In addition to being a very powerful tool for computationally intensive research, its open-source nature makes QADPZ a flexible educational platform for numerous small-size student projects in the areas of operating systems, distributed systems, mobile agents, parallel algorithms, etc. Open source software is also a natural choice for modern research, because it encourages integration, cooperation and the fostering of new ideas.

This thesis also proposes an improved conceptual model (based on the master-worker paradigm), which makes contributions in several directions: pull vs. push work-units, pipelining of work-units, sending more work-units at a time, an adaptive number of workers, an adaptive time-out interval for work-units, and multithreading. We have also demonstrated, by performing specific experiments, that the use of desktop grids should not be limited to master-worker applications, but can extend to more fine-grained parallel Scientific Computing and Visualization applications. This thesis makes supplementary contributions: a hierarchical taxonomy of the main existing desktop grids, and an adaptive compression algorithm for remote visualization. QADPZ has also pioneered the autonomic computing approach for desktop grids and presents specific self-management features: self-knowledge, self-configuration, self-optimization and self-healing. It is worth mentioning that QADPZ has to date over a thousand users who have downloaded it (since July 2001, when it was uploaded to sourceforge.net), and many of them use it for their daily tasks (see the appendix). Many of the results have been published or are in the course of publication, as can be seen from the references.
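To make the pull-based master-worker pattern described above concrete, the following is a minimal, self-contained sketch in Python; the class and method names are illustrative assumptions, not QADPZ's actual C/C++ API.

```python
import queue
import threading

class Master:
    """Central scheduler node holding pending work-units and collected results."""
    def __init__(self, tasks):
        self.pending = queue.Queue()
        for t in tasks:
            self.pending.put(t)
        self.results = {}
        self.lock = threading.Lock()

    def pull(self):
        """A worker pulls its next work-unit (the 'pull' model discussed above)."""
        try:
            return self.pending.get_nowait()
        except queue.Empty:
            return None

    def submit(self, task_id, result):
        with self.lock:
            self.results[task_id] = result

def worker(master):
    while True:
        unit = master.pull()
        if unit is None:
            break                       # no more work-units: worker retires
        task_id, payload = unit
        master.submit(task_id, payload ** 2)  # stand-in for the real computation

master = Master([(i, i) for i in range(100)])
threads = [threading.Thread(target=worker, args=(master,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert len(master.results) == 100       # application finishes when all tasks complete
```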
APA, Harvard, Vancouver, ISO, and other styles
23

Fuad, Syed Ahmed. "Consensus Based Distributed Control in Micro-Grid Clusters." Thesis, Michigan Technological University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10267526.

Full text
Abstract:
With the increasing use of renewable energy generators such as photovoltaic (PV) cells and wind turbines, power systems are transforming from a centralized power grid structure into clusters of smart micro-grids with more autonomous power sharing capabilities. Even though decentralized control of power systems is a reliable and cost-effective solution, the inherently heterogeneous nature of micro-grids makes optimal and efficient power sharing among distributed generators (DGs) a major issue, which calls for advanced control techniques for voltage stabilization of the entire micro-grid cluster. The consensus-based algorithm proposed in this thesis is a solution to these control challenges: it only requires each DG to exchange information with its directly connected neighboring DGs in order to maintain the power balance and voltage stability of the entire micro-grid cluster. The proposed method is simulated in PSCAD and its effectiveness is demonstrated in several realistic and practical cases, including micro-grid topology changes, communication delays, and load changes.
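As a rough illustration of the neighbor-only information exchange the abstract describes, the sketch below runs a discrete-time average-consensus update over a small hypothetical DG topology; the real controller also handles power balance and PSCAD-level dynamics, which are omitted here.

```python
# Discrete-time average consensus: each DG nudges its value toward its
# directly connected neighbors' values. Topology and numbers are made up.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # hypothetical DG graph
v = [0.98, 1.02, 1.01, 0.97]  # per-unit voltage set-points of the four DGs
epsilon = 0.2                 # step size; must stay below 1/max_degree

for _ in range(50):
    v = [v[i] + epsilon * sum(v[j] - v[i] for j in neighbors[i])
         for i in range(len(v))]

print([round(x, 4) for x in v])  # all entries converge to the average (~0.995)
```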
APA, Harvard, Vancouver, ISO, and other styles
24

Burgess, David A. "Parallel computing for unstructured mesh algorithms." Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318758.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Shum, Kam Hong. "Adaptive parallelism for computing on heterogeneous clusters." Thesis, University of Cambridge, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.627563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

HEMMATPOUR, MASOUD. "High Performance Computing using Infiniband-based clusters." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2750549.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Chakroun, Imen. "Algorithmes Branch and Bound parallèles hétérogènes pour environnements multi-coeurs et multi-GPU." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2013. http://tel.archives-ouvertes.fr/tel-00841965.

Full text
Abstract:
Branch and Bound (B&B) algorithms are attractive for the exact resolution of combinatorial optimization problems (COPs) through the exploration of a tree-shaped search space. However, these algorithms are extremely time-consuming for large problem instances (e.g. Taillard's benchmarks for the Flow-Shop problem), even when computational grids are used [Mezmaz et al., IEEE IPDPS'2007]. The massive parallelism provided by today's heterogeneous computing platforms [TOP500] is required to process such instances efficiently. The challenge is then to exploit all the underlying levels of parallelism and therefore to rethink the parallel models of B&B algorithms accordingly. In this thesis, we revisit the design and implementation of these algorithms for solving large COPs on (large) multi-core and multi-GPU computing platforms. The Flow-Shop scheduling problem (FSP) is considered as a case study. A preliminary experimental study on some large FSP instances revealed that the search tree is highly irregular (in shape and size) and very large (billions of billions of nodes), and that the bound-evaluation operator is extremely time-consuming (about 97% of the B&B running time). Our first contribution is therefore a GPU approach with a single CPU core (GB&B) in which only the evaluation operator is executed on the GPU. The approach addresses two challenges: thread divergence and the optimization of the GPU's hierarchical memory management. Compared to a sequential version, speedups of up to x100 are obtained on an Nvidia Tesla C2050. The performance analysis of GB&B showed that the overhead induced by data transfers between the CPU and the GPU is high. The objective of the second contribution is therefore to extend the approach (LL-GB&B) in order to minimize the CPU-GPU communication latency. This is achieved through a fine-grained GPU parallelization of the branching and pruning operators. The major challenge addressed here is the thread divergence caused by the highly irregular nature of the explored tree mentioned above. Compared to a sequential execution, LL-GB&B reaches speedups of up to x160 for the largest instances. The third contribution studies the combined use of GPUs and multi-core processors. Two scenarios were explored, leading to two approaches: a concurrent one (RLL-GB&B) and a cooperative one (PLL-GB&B). In the first, the exploration process is performed simultaneously by the GPU and the CPU cores; in the cooperative approach, the CPU cores prepare and transfer the subproblems using CUDA streaming while the GPU performs the exploration. The combined use of multi-core and GPU showed that RLL-GB&B is not beneficial, whereas PLL-GB&B yields an improvement of up to 36% over LL-GB&B. Since computational grids such as Grid5000 have recently been equipped with GPUs (on some sites), the fourth contribution of this thesis addresses the combination of GPU and multi-core computing with large-scale distributed computing. To this end, the different proposed approaches were combined into a heterogeneous meta-algorithm which automatically selects the algorithm to deploy according to the target hardware configuration. This meta-algorithm is coupled with the B&B@Grid approach proposed in [Mezmaz et al., IEEE IPDPS'2007]. B&B@Grid distributes the work units (search subspaces encoded as intervals) among the grid nodes, while the meta-algorithm selects and locally deploys a parallel B&B algorithm on the received intervals. The combined approach allowed us to solve the 20x20 Taillard instances to optimality and efficiently.
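To make the role of the bound-evaluation operator concrete, here is a generic best-first Branch and Bound skeleton in Python with the lower-bound evaluation left pluggable (in the thesis, this is the operator that consumes ~97% of the runtime and gets offloaded to the GPU); the toy demo and all names are illustrative assumptions, not the thesis's code.

```python
import heapq

def branch_and_bound(root, lower_bound, branch, is_leaf, cost):
    """Best-first B&B: prune any node whose bound cannot beat the incumbent."""
    best_cost, best = float("inf"), None
    heap = [(lower_bound(root), root)]
    while heap:
        bound, node = heapq.heappop(heap)
        if bound >= best_cost:
            continue                      # pruning step
        if is_leaf(node):
            c = cost(node)
            if c < best_cost:
                best_cost, best = c, node # new incumbent solution
        else:
            for child in branch(node):
                b = lower_bound(child)    # the expensive operator in the thesis
                if b < best_cost:
                    heapq.heappush(heap, (b, child))
    return best, best_cost

# Toy demo: choose a subset of at least 2 items minimising total weight.
items = [4, 1, 3]
root = (0, ())  # (next item index, chosen items so far)
print(branch_and_bound(
    root,
    lower_bound=lambda n: sum(n[1]),
    branch=lambda n: [(n[0] + 1, n[1]), (n[0] + 1, n[1] + (items[n[0]],))],
    is_leaf=lambda n: n[0] == len(items),
    cost=lambda n: sum(n[1]) if len(n[1]) >= 2 else float("inf"),
))  # -> ((3, (1, 3)), 4)
```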
APA, Harvard, Vancouver, ISO, and other styles
28

Toch, Lamiel. "Contributions aux techniques d’ordonnancement sur plates-formes parallèles ou distribuées." Electronic Thesis or Diss., Besançon, 2012. http://www.theses.fr/2012BESA2045.

Full text
Abstract:
The work presented in this document tackles the scheduling of parallel applications on either parallel (cluster) or distributed (computing grid) platforms. In our research we concentrated on the scheduling, for computing grids, of applications modeled by a DAG (directed acyclic graph), and on the scheduling of parallel programs (parallel jobs) represented by a rectangular shape whose two dimensions are the number of requested processors and the execution time. The research follows three main topics. The first topic concerns the scheduling of a set of instances of an application on a computing grid. The second topic deals with the scheduling of parallel jobs in clusters. The third tackles the scheduling of parallel jobs in multiprocessor machines. This thesis makes contributions on all three topics. The first contribution, under the first topic, consists of an advanced experimental study of three algorithms for scheduling a set of instances of an application on a heterogeneous platform without communication costs: a list-based algorithm, a steady-state algorithm and a genetic algorithm. Moreover, we integrated communications into this genetic algorithm. The second contribution, under the second topic, is the design of a new technique for scheduling parallel jobs in clusters: job folding, which uses the virtualization of processors. The third contribution deals with a new technique, borrowed from statistics and signal processing, applied to the scheduling of parallel jobs in a multiprocessor machine. Finally, we describe some research work that was carried out but did not lead to significant scheduling results.
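As a sketch of the "rectangle" job model and the job-folding idea described above, the following Python fragment greedily list-schedules jobs of shape (processors, runtime) and folds over-wide jobs by halving processors and doubling runtime; the perfect-malleability assumption and all names are ours, not the thesis's.

```python
import heapq

def schedule(jobs, P):
    """Greedy list scheduling: start each job as soon as enough processors free up."""
    free, t, starts = P, 0.0, {}
    running = []                              # min-heap of (finish_time, procs)
    for jid, (procs, runtime) in enumerate(jobs):
        while free < procs:                   # wait for earliest-finishing job
            t, done = heapq.heappop(running)
            free += done
        starts[jid] = t
        free -= procs
        heapq.heappush(running, (t + runtime, procs))
    return starts

def fold(job, P):
    """Fold a job requesting more processors than available: halve the
    processor count and double the runtime (assumes perfect malleability)."""
    procs, runtime = job
    while procs > P:
        procs, runtime = (procs + 1) // 2, runtime * 2
    return procs, runtime

jobs = [fold(j, 8) for j in [(16, 10.0), (4, 5.0), (8, 2.0)]]
print(schedule(jobs, 8))  # job 0 is folded to (8, 20.0) and runs first
```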
APA, Harvard, Vancouver, ISO, and other styles
29

Belli, Stefano. "Tecniche di resource discovery nel grid computing." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9528/.

Full text
Abstract:
This thesis analyzes the main Resource Discovery techniques used in Grid Computing systems, evaluating the principal advantages and disadvantages of each solution. Particular attention is given to agent-based Resource Discovery, which is proposed as an architecture capable of definitively solving the classic problems of these networks. Within the thesis, each technique presented is also accompanied by a practical implementation: among these are MDS, Chord and the Kang implementation.
APA, Harvard, Vancouver, ISO, and other styles
30

Kaya, Ozgur. "Efficient Scheduling In Distributed Computing On Grid." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607928/index.pdf.

Full text
Abstract:
Today, many geographically distributed computing resources are idle much of the time. The aim of grid computing is to collect these resources into a single system, which helps to solve problems that are too complex for a single PC. Scheduling plays a critical role in the efficient and effective management of resources to achieve high performance in a grid computing environment. Due to the heterogeneity and highly dynamic nature of grids, developing scheduling algorithms for grid computing involves some challenges. In this work, we concentrate on the efficient scheduling of distributed tasks on a grid. We propose a novel scheduling heuristic for bag-of-tasks applications. The proposed algorithm primarily makes use of history-based runtime estimation. The history stores information about applications whose runtimes and other specific properties were recorded during previous executions. Scheduling decisions are made according to the similarity between applications. The definition of similarity is an important aspect of this approach, apart from the best resource allocation. The aim of this scheduling algorithm (HISA, History Injected Scheduling Algorithm) is to define and find the similarity, and to assign the job to the most suitable resource, making use of that similarity. In our evaluation, we use the grid simulation tool GridSim. A number of intensive experiments with various simulation settings have been conducted. Based on the experimental results, the effectiveness of the HISA scheduling heuristic is studied and compared to the other scheduling algorithms embedded in GridSim. The results show that history injection improves the performance of future job submissions on a grid.
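A rough sketch of history-based runtime estimation in the spirit of HISA follows: a new job's runtime is estimated from the recorded runtimes of similar past jobs, and the job is then assigned to the resource with the earliest estimated completion. The similarity measure (Jaccard over job properties), the 0.5 cut-off and the field names are illustrative assumptions, not the algorithm's actual definitions.

```python
def similarity(a, b):
    """Jaccard similarity over job property sets (one possible definition)."""
    pa, pb = set(a.items()), set(b.items())
    return len(pa & pb) / len(pa | pb)

def estimate_runtime(job, history, default=100.0):
    """Similarity-weighted mean of past runtimes; fall back to a default."""
    scored = [(similarity(job, past), runtime) for past, runtime in history]
    scored = [(s, r) for s, r in scored if s > 0.5]  # keep close matches only
    if not scored:
        return default
    total = sum(s for s, _ in scored)
    return sum(s * r for s, r in scored) / total

history = [({"app": "render", "size": "big"}, 120.0),
           ({"app": "render", "size": "small"}, 40.0)]
job = {"app": "render", "size": "big"}
resource_ready = {"r1": 30.0, "r2": 0.0}   # time at which each resource frees up
rt = estimate_runtime(job, history)
best = min(resource_ready, key=lambda r: resource_ready[r] + rt)
print(best, rt)                            # -> r2, 120.0
```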
APA, Harvard, Vancouver, ISO, and other styles
31

Popuri, Vamsi. "Intrusion detection for grid and cloud computing." Thesis, Linköpings universitet, Institutionen för systemteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70364.

Full text
Abstract:
Providing security has become more difficult today because of all the malicious possibilities in data transmission, so we need a system that makes data transmission secure beyond encryption, passwords and digital signatures. The system discussed in this thesis is an Intrusion Detection System, a platform that provides security in distributed systems. The thesis also explains the drawbacks of conventional system designs, which result in low performance due to network congestion and poor data efficiency. We consider cloud and grid computing systems to improve the performance of the system. Cloud systems are characterized by a main server and other connected servers which provide certain services. Cloud systems, especially public cloud systems, are prone to intrusions, and care must be taken to secure them. The emphasis in this thesis is on making cloud systems secure using an intrusion detection system. Intrusion detection can be performed using behaviour-based techniques, knowledge-based techniques, or both. We use UML as a design tool, which helps to reduce the design complexity.
APA, Harvard, Vancouver, ISO, and other styles
32

Wang, Lizhe [Verfasser]. "Virtual environments for grid computing / Lizhe Wang." Karlsruhe : KIT Scientific Publishing, 2009. http://www.ksp.kit.edu.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Smith, Andrew Cameron. "LHCb data management on the computing grid." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3018.

Full text
Abstract:
The LHCb detector is one of the four experiments being built to harness the proton-proton collisions provided by the Large Hadron Collider (LHC) at the European Organisation for Nuclear Research (CERN). The data rate expected when the LHC experiments are fully operational eclipses that of any previous scientific experiment and has motivated the adoption of a grid computing paradigm to store and process the data. Managing petabytes of data in a distributed environment provides a rich set of challenges related to scalability, reliability and performance. This thesis presents the data management requirements for executing the workload of the LHCb collaboration. We present the systems designed to support all aspects of grid data management for LHCb, from data transfer, to data integrity, to efficient data access. The distributed computing environment is inherently unstable, and much focus has been placed on providing systems that are robust and resilient to observed failures.
APA, Harvard, Vancouver, ISO, and other styles
34

Cao, Junwei. "Agent-based resource management for grid computing." Thesis, University of Warwick, 2001. http://wrap.warwick.ac.uk/4172/.

Full text
Abstract:
A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capability. An ideal grid environment should provide access to the available resources in a seamless manner. Resource management is an important infrastructural component of a grid computing environment. The overall aim of resource management is to efficiently schedule applications that need to utilise the available resources in the grid environment. Such goals within the high performance community will rely on accurate performance prediction capabilities. An existing toolkit, known as PACE (Performance Analysis and Characterisation Environment), is used to provide quantitative data concerning the performance of sophisticated applications running on high performance resources. In this thesis an ASCI (Accelerated Strategic Computing Initiative) kernel application, Sweep3D, is used to illustrate the PACE performance prediction capabilities. The validation results show that a reasonable accuracy can be obtained, cross-platform comparisons can be easily undertaken, and the process benefits from a rapid evaluation time. While extremely well-suited for managing a locally distributed multi-computer, the PACE functions do not map well onto a wide-area environment, where heterogeneity, multiple administrative domains, and communication irregularities dramatically complicate the job of resource management. Scalability and adaptability are two key challenges that must be addressed. In this thesis, an A4 (Agile Architecture and Autonomous Agents) methodology is introduced for the development of large-scale distributed software systems with highly dynamic behaviours. An agent is considered to be both a service provider and a service requestor. Agents are organised into a hierarchy with service advertisement and discovery capabilities. There are four main performance metrics for an A4 system: service discovery speed, agent system efficiency, workload balancing, and discovery success rate. Coupling the A4 methodology with PACE functions results in an Agent-based Resource Management System (ARMS), which is implemented for grid computing. The PACE functions supply accurate performance information (e.g. execution time) as input to a local resource scheduler on the fly. At a meta-level, agents advertise their service information and cooperate with each other to discover available resources for grid-enabled applications. A Performance Monitor and Advisor (PMA) is also developed in ARMS to optimise the performance of the agent behaviours. The PMA is capable of performance modelling and simulation of the agents in ARMS and can be used to improve overall system performance. The PMA can monitor agent behaviours in ARMS and reconfigure them with optimised strategies, which include the use of ACTs (Agent Capability Tables), limited service lifetime, limited scope for service advertisement and discovery, agent mobility and service distribution, etc. The main contribution of this work is that it provides a methodology and prototype implementation of a grid Resource Management System (RMS). The system includes a number of original features that cannot be found in existing research solutions.
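As a toy illustration of hierarchical service advertisement and discovery of the kind ARMS performs, the sketch below pushes advertisements up an agent hierarchy (a stand-in for ACTs) and escalates unresolved discovery requests; the structure and names are assumptions, not the ARMS implementation.

```python
class Agent:
    """An agent is both a service provider and a service requestor."""
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.act = {}                  # service name -> provider (a stand-in ACT)

    def advertise(self, service, provider):
        self.act[service] = provider
        if self.parent:                # push the advertisement up the hierarchy
            self.parent.advertise(service, provider)

    def discover(self, service):
        if service in self.act:
            return self.act[service]
        if self.parent:                # escalate unresolved requests
            return self.parent.discover(service)
        return None

root = Agent("root")
site = Agent("site-a", parent=root)
leaf = Agent("node-1", parent=site)
leaf.advertise("sweep3d/256cpu", "node-1")
print(Agent("node-2", parent=site).discover("sweep3d/256cpu"))  # -> node-1
```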
APA, Harvard, Vancouver, ISO, and other styles
35

Alfawair, Mai. "A framework for evolving grid computing systems." Thesis, De Montfort University, 2009. http://hdl.handle.net/2086/3423.

Full text
Abstract:
Grid computing was born in the 1990s, when researchers were looking for a way to share expensive computing resources and experiment equipment. Grid computing is becoming increasingly popular because it promotes the sharing of distributed resources that may be heterogeneous in nature, and it enables scientists and engineering professionals to solve large scale computing problems. In reality, there are already huge numbers of grid computing facilities distributed around the world, each one having been created to serve a particular group of scientists, such as weather forecasters, or a group of users, such as stock markets. However, the need to extend the functionalities of current grid systems lends itself to the consideration of grid evolution. This allows the combination of many disjoint grids into a single powerful grid that can operate as one vast computational resource, and allows grid environments to be flexible, to change and to evolve. The rationale for grid evolution is the rapid and continuing advances in both software and hardware. Evolution means adding or removing capabilities. This research defines grid evolution as adding new functions and/or equipment and removing unusable resources that affect the performance of some nodes. This thesis produces a new technique for grid evolution, allowing it to be seamless and to operate at run time. Within grid computing, evolution is an integration of software and hardware and can be of two distinct types, internal and external. Internal evolution occurs inside the grid boundary by migrating special resources such as application software from node to node inside the grid, while external evolution occurs between grids. This thesis develops a framework for grid evolution that insulates users from the complexities of grids. The framework has at its core a resource broker together with a grid monitor, to cope with internal and external evolution, advance reservation, fault tolerance, the monitoring of the grid environment, increased resource utilisation and the high availability of grid resources. The starting point for the present framework is when the grid receives a job whose requirements do not exist on the required node; this triggers grid evolution. If the grid has all the requirements scattered across its nodes, internal evolution ensues, enabling the grid to migrate the required resources to the required node in order to satisfy the job requirements; if the grid does not have these resources, external evolution enables it either to collect them from other grids (permanent evolution) or to send the job to other grids for execution (just-in-time evolution). Finally, a simulation tool called EVOSim has been designed, developed and tested. It is written in Oracle 10g and has been used for the creation of four grids, each of which has a different setup including different nodes, application software, data and policies. Experiments were performed by submitting jobs to the grid at run time, then comparing the results and analysing the performance of the grids that use the evolution approach against those that do not. The results of these experiments have demonstrated that these features significantly improve the performance of grid environments and provide excellent scheduling results, with a decreasing number of rejected jobs.
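The decision flow described above can be sketched as follows; the data model and function names are illustrative assumptions, not EVOSim's design.

```python
def handle_job(job, grid, other_grids):
    """Run locally, evolve internally, evolve externally, or forward the job."""
    node = grid["target_node"]
    missing = job["requires"] - node["resources"]
    if not missing:
        return "run"
    if missing <= grid["all_resources"]:
        node["resources"] |= missing          # internal evolution: migrate resources
        return "run after internal evolution"
    for other in other_grids:
        if missing <= other["all_resources"]:
            node["resources"] |= missing      # external evolution (permanent)
            return "run after external evolution (permanent)"
    return "forwarded to another grid (just-in-time evolution)"

grid = {"target_node": {"resources": {"matlab"}},
        "all_resources": {"matlab", "gcc"}}
job = {"requires": {"matlab", "gcc"}}
print(handle_job(job, grid, other_grids=[]))  # -> run after internal evolution
```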
APA, Harvard, Vancouver, ISO, and other styles
36

Cardenas, Baron Yonni Brunie Lionel Pierson Jean-Marc. "Grid caching specification and implementation of collaborative cache services for grid computing /." Villeurbanne : Doc'INSA, 2008. http://docinsa.insa-lyon.fr/these/pont.php?id=cardenas_baron.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Cardenas, Baron Yonny. "Grid caching : specification and implementation of collaborative cache services for grid computing." Lyon, INSA, 2007. http://theses.insa-lyon.fr/publication/2007ISAL0107/these.pdf.

Full text
Abstract:
This thesis proposes an approach for the design and implementation of collaborative cache systems in grids that supports capabilities for monitoring and controlling cache interactions. Our approach makes it possible to compose and evaluate high-level collaborative cache functions in a flexible way. Our proposal is based on a multilayer model that defines the main functions of a collaborative grid cache system. This model and the provided specification are used to build a flexible and generic software infrastructure for the operation and control of collaborative caches. This infrastructure is composed of a group of autonomous cache elements called Grid Cache Services (GCS). A GCS is a local administrator of temporary storage and data, implemented as a grid service that provides the cache capabilities defined by the model. We study a possible configuration for a group of GCS that constitutes a basic management system for temporary data, called the Temporal Storage Service (TSS).
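A minimal sketch of the collaborative-cache idea follows: each GCS-like element manages a local store, and a group of them resolves misses by asking peers, which is one way a TSS-style group could be composed. The interfaces are assumptions, not the GCS specification.

```python
class CacheService:
    """A local administrator of temporary data, loosely modelled on a GCS."""
    def __init__(self, name, capacity=3):
        self.name, self.capacity, self.store = name, capacity, {}

    def put(self, key, value):
        if len(self.store) >= self.capacity:   # naive FIFO-ish eviction policy
            self.store.pop(next(iter(self.store)))
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)

class CacheGroup:
    """Collaborative lookup across a group of cache services."""
    def __init__(self, services):
        self.services = services

    def get(self, key):
        for svc in self.services:              # ask each peer in turn on a miss
            value = svc.get(key)
            if value is not None:
                return value, svc.name
        return None, None

a, b = CacheService("gcs-a"), CacheService("gcs-b")
b.put("dataset-42", b"...bytes...")
print(CacheGroup([a, b]).get("dataset-42"))    # served from gcs-b
```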
APA, Harvard, Vancouver, ISO, and other styles
38

Killian, Rudi. "Dynamic superscalar grid for technical debt reduction." Thesis, Cape Peninsula University of Technology, 2018. http://hdl.handle.net/20.500.11838/2726.

Full text
Abstract:
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2018. Organizations and private individuals look to technology advancements to increase their ability to make informed decisions, the motivation for technology adoption sprouting from an innate need for value generation. The technology currently heralded as the future platform to facilitate value addition is popularly termed cloud computing. The move to cloud computing may, however, conceivably shorten the obsolescence cycle for currently retained Information Technology (IT) assets. The term obsolescence is applied here to mean the inability to repurpose or scale an information system resource for needed functionality. The incapacity to reconfigure, grow or shrink an IT asset, be it hardware or software, is a well-known narrative of technical debt. The notion of emergent technical debt realities is professed to be all but inevitable when informed by Moore's Law, as technology must inexorably advance. Of more imminent concern, however, is that the major accelerating factors of technical debt are deemed to be non-holistic conceptualization and design conventions. Should the management of IT assets fail to address technical debt continually, the technology platform will predictably require replacement. The unrealized value, the functional and fiscal loss, and the resultant e-waste generated by technical debt are meaningfully unattractive. Historically, the cloud milieu evolved from the grid and clustering paradigms, which allowed for information sourcing across multiple and often dispersed computing platforms. The parallel operations in distributed computing environments are inherently value adding, as enhanced effective use of resources and efficiency in data handling may be achieved. The predominant information processing solutions that implement parallel operations in distributed environments are abstracted constructs, styled as High Performance Computing (HPC) or High Throughput Computing (HTC). Regardless of the underlying distributed environment, the archetypes of HPC and HTC differ radically in standard implementation. The foremost contrasting factors of parallelism granularity, failover and locality in data handling have recently been the subject of greater academic discourse towards possible fusion of the two technologies. In this research we uncover probable platforms of future technical debt and subsequently recommend redeployment alternatives. The suggested alternatives take the form of scalable grids, which should provide alignment with the contemporary nature of individual information processing needs. The potential of grids, as efficient and effective information sourcing solutions across geographically dispersed heterogeneous systems, is envisioned to reduce or delay aspects of technical debt. As part of an experimental investigation to test the plausibility of these concepts, artefacts were designed to generically implement HPC and HTC. The design features exposed by the experimental artefacts could provide insights towards the amalgamation of HPC and HTC.
APA, Harvard, Vancouver, ISO, and other styles
39

Batool, Munira. "Performance Optimisation of Standalone and Grid Connected Microgrid Clusters." Thesis, Curtin University, 2018. http://hdl.handle.net/20.500.11937/73584.

Full text
Abstract:
Remote areas are usually supplied by isolated electricity systems known as microgrids, which can operate in standalone and grid-connected mode. This research focuses on the reliable operation of microgrids with minimal fuel consumption and maximal renewables penetration, while ensuring minimal voltage and frequency deviations. These problems can be solved by an optimisation-based technique: an objective function is formulated and solved with a Genetic Algorithm approach, and the performance of the proposal is evaluated by exhaustive numerical analyses in Matlab.
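As an illustration of solving such an objective with a Genetic Algorithm, the toy sketch below minimises a stand-in cost (fuel use plus a demand-balance penalty); the actual objective, constraints and PSCAD/Matlab coupling of the thesis are not reproduced here.

```python
import random

def objective(x):
    """Hypothetical cost: fuel consumption plus a penalty for not meeting
    a 1.0 p.u. demand; minimised when renewables cover the load."""
    fuel, renewables = x
    return fuel + 10 * abs(1.0 - (fuel + renewables))

def ga(pop_size=30, generations=100):
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = tuple((ai + bi) / 2 + random.gauss(0, 0.05)
                          for ai, bi in zip(a, b))  # crossover + mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=objective)

print(ga())  # converges toward fuel ~0, renewables ~1 (max renewable share)
```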
APA, Harvard, Vancouver, ISO, and other styles
40

Cooper, Andrew. "Towards a trusted grid architecture." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:42268964-c1db-4599-9dbc-a1ceb1015ef1.

Full text
Abstract:
The malicious host problem is challenging in distributed systems such as grids and clouds. Rival organisations may share the same physical infrastructure. Administrators might deliberately or accidentally compromise users' data. The thesis concerns the development of a security architecture that allows users to place a high degree of trust in remote systems to process their data securely. The problem is tackled through a new security layer that ensures users' data can only be accessed within a trusted execution environment. Access to encrypted programs and data is authorised by a key management service using trusted computing attestation. Strong data integrity and confidentiality protection on remote hosts is provided by the job security manager virtual machine. The trusted grid architecture supports the enforcement of digital rights management controls. Subgrids allow users to define a strong trusted boundary for delegated grid jobs. Recipient keys enforce a trusted return path for job results to help users create secure grid workflows. Mandatory access controls allow stakeholders to mandate the software that is available to grid users. A key goal of the new architecture is backwards compatibility with existing grid infrastructure and data. This is achieved using a novel virtualisation architecture where the security layer is pushed down to the remote host, so it does not need to be pre-installed by the service provider. A new attestation scheme, called origin attestation, supports the execution of unmodified, legacy grid jobs. These features will ease the transition to a trusted grid and help make it practical for deployment on a global scale.
APA, Harvard, Vancouver, ISO, and other styles
41

Aji, Ashwin M. "Programming High-Performance Clusters with Heterogeneous Computing Devices." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/52366.

Full text
Abstract:
Today's high-performance computing (HPC) clusters are seeing an increase in the adoption of accelerators like GPUs, FPGAs and co-processors, leading to heterogeneity in the computation and memory subsystems. To program such systems, application developers typically employ a hybrid programming model of MPI across the compute nodes in the cluster and an accelerator-specific library (e.g., CUDA, OpenCL, OpenMP, OpenACC) across the accelerator devices within each compute node. Such explicit management of disjointed computation and memory resources leads to reduced productivity and performance. This dissertation focuses on designing, implementing and evaluating a runtime system for HPC clusters with heterogeneous computing devices. This work also explores extending existing programming models to make use of our runtime system for easier code modernization of existing applications. Specifically, we present MPI-ACC, an extension to the popular MPI programming model and runtime system for efficient data movement and automatic task mapping across the CPUs and accelerators within a cluster, and discuss the lessons learned. MPI-ACC's task-mapping runtime subsystem performs fast and automatic device selection for a given task. MPI-ACC's data-movement subsystem includes careful optimizations for end-to-end communication among CPUs and accelerators, which are seamlessly leveraged by the application developers. MPI-ACC provides a familiar, flexible and natural interface for programmers to choose the right computation or communication targets, while its runtime system achieves efficient cluster utilization.
APA, Harvard, Vancouver, ISO, and other styles
42

Ulmer, Craig D. "Extensible message layers for resource-rich cluster computers." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/13306.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

In, Jang-Uk. "Efficient scheduling techniques and systems for grid computing." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0013834.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Cederström, Andreas. "On using Desktop Grid Computing in software industry." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5800.

Full text
Abstract:
Context. When dealing with large data sets and heavy calculations, the common solution is clusters, supercomputers or Grids of these two. However, there are ways of gaining large computational power by utilizing the unused cycles of regular home or office computers; these are referred to as Desktop Grids. Objectives. In this study we review the current field of solutions for open source Desktop Grid computing capable of dealing with a heterogeneous set of clients and a dynamically sized Desktop Grid. We investigate current use, interest in use and the priority of key attributes of Desktop Grids. Finally, we want to show how time-effective Desktop Grids are compared to execution on a single machine, and in the process show the effort needed to set up a Desktop Grid and start computing. The overall purpose of this study is to provide a path for industry organizations to take when making the first step into Desktop Grid computing. Methods. We use a systematic review to collect information on existing open source Desktop Grid solutions. Studies are selected based on inclusion criteria and a quality assessment. A survey questionnaire is used to assess industry usage, interest and prioritization of attributes of Desktop Grids. We conduct an experiment to show execution speedup as well as setup effort. Results. We found ten open source Desktop Grids fulfilling our requirements. The survey shows that Desktop Grids are used to a very small extent within industry, while a majority of the participants state that there is an interest in Desktop Grids. As a result of the experiment, we can say that we achieved very high speedup and that the effort needed to set up a Desktop Grid is about 40 hours for one person with no prior experience with the selected Desktop Grid system. Conclusions. We conclude that industry organizations have a possible need for Desktop Grids, but in order to be more successful, Desktop Grid developers must put more effort into areas such as automated testing and code compilation.
APA, Harvard, Vancouver, ISO, and other styles
45

Andrade, Jorge. "Grid and High-Performance Computing for Applied Bioinformatics." Doctoral thesis, Stockholm : Bioteknologi, Kungliga Tekniska högskolan, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4573.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Bsoul, Mohammad. "Economic scheduling in Grid computing using Tender models." Thesis, Loughborough University, 2007. https://dspace.lboro.ac.uk/2134/3094.

Full text
Abstract:
Economic scheduling needs to be considered for Grid computing environments, because it gives resource providers an incentive to supply their resources. Moreover, it enforces the efficient use of resources, because users have to pay for their use. Tendering is a suitable model for Grid scheduling because users start the negotiations for finding suitable resources for executing their jobs. Furthermore, the users specify their job requirements with their requests, and the resources therefore reply with bids that are based on the cost of taking on the job and the availability of their processors. In this thesis, a framework for economic Grid scheduling using tendering is proposed. The framework entities, such as users, brokers and resources, employ the tender/contract-net model to negotiate prices and deadlines. The brokers' role is to act on behalf of users. During the negotiations, the entities aim to maximise their performance, which is measured by a number of metrics. In order to evaluate the entities' performance under different scenarios, a Java-based simulator called MICOSim, supporting event-driven simulation of economic Grid scheduling, is presented. MICOSim can perform a simulation of more than one hundred entities faster than real time. It is concluded from the evaluation that users who are interested in increasing the job success rate and paying less for executing their jobs have to consider the received prices to select the most appropriate bids, while users who are interested in improving the average job satisfaction rate have to consider either the received completion time, or both price and completion time, to select the most suitable bids when the submission of jobs is static. The best broker strategy is the one that does not take into account meeting the job deadlines in the bids it sends to job owners. Finally, the resource strategy that considers the price to determine whether or not to reply to a request is superior to other resource strategies; the only exception is employing this strategy with a price that is too low. However, there is only a tiny difference between the performances of the different user strategies under dynamic submission. It is also concluded from the evaluation that broker strategies perform best when the revenue they target from the users is reasonable. Thus, the broker's aim has to be to receive reasonable revenue (neither too low nor too high) from acting on behalf of users. It is observed from the results that strategy performance is influenced by the behaviour of other entities, such as the submission time of user jobs. Finally, it is observed that the characteristics of entities have an effect on the performance of strategies. For example, the two user strategies that consider the received completion time, and both price and completion time, to determine whether to accept a broker bid have similar performance, because of the existence of resources with various prices from cheap to expensive and of resources which do not care about the price paid for the execution. So, the price threshold does not have a large effect on the performance.
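One round of the tender/contract-net negotiation described above might be sketched as follows; the bidding rule (resources reply only when the offered budget covers their asking price) and the combined selection strategy are plausible readings of the abstract, not the simulator's actual code.

```python
def collect_bids(job, resources, now=0.0):
    """User announces a job; each resource decides whether to bid."""
    bids = []
    for r in resources:
        asking = r["rate"] * job["length"]
        if asking > job["max_price"]:
            continue  # resource strategy: consider the price before replying
        finish = now + r["queue"] + job["length"] / r["speed"]
        bids.append({"resource": r["name"], "price": asking, "finish": finish})
    return bids

def select(bids, strategy):
    if strategy == "cheapest":           # minimise the price paid
        return min(bids, key=lambda b: b["price"])
    if strategy == "earliest":           # minimise the completion time
        return min(bids, key=lambda b: b["finish"])
    # "combined": weight normalised price and completion time equally
    pmax = max(b["price"] for b in bids)
    fmax = max(b["finish"] for b in bids)
    return min(bids, key=lambda b: b["price"] / pmax + b["finish"] / fmax)

resources = [{"name": "fast", "speed": 2.0, "queue": 5.0, "rate": 4.0},
             {"name": "slow", "speed": 1.0, "queue": 0.0, "rate": 1.0}]
job = {"length": 10.0, "max_price": 50.0}
print(select(collect_bids(job, resources), "combined"))  # -> the "slow" bid
```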
APA, Harvard, Vancouver, ISO, and other styles
47

Paterson, Stuart Keble. "LHCb distributed data analysis on the computing grid." Thesis, University of Glasgow, 2006. http://theses.gla.ac.uk/1077/.

Full text
Abstract:
LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.
APA, Harvard, Vancouver, ISO, and other styles
48

Mustafee, Navonil. "A grid computing framework for commercial simulation packages." Thesis, Brunel University, 2007. http://bura.brunel.ac.uk/handle/2438/4009.

Full text
Abstract:
An increased need for collaborative research among different organizations, together with continuing advances in communication technology and computer hardware, has facilitated the development of distributed systems that can provide users non-trivial access to geographically dispersed computing resources (processors, storage, applications, data, instruments, etc.) that are administered in multiple computer domains. The term grid computing or grids is popularly used to refer to such distributed systems. A broader definition of grid computing includes the use of computing resources within an organization for running organization-specific applications. This research is in the context of using grid computing within an enterprise to maximize the use of available hardware and software resources for processing enterprise applications. Large scale scientific simulations have traditionally been the primary benefactor of grid computing. The application of this technology to simulation in industry has, however, been negligible. This research investigates how grid technology can be effectively exploited by simulation practitioners using Windows-based commercially available simulation packages to model simulations in industry. These packages are commonly referred to as Commercial Off-The-Shelf (COTS) Simulation Packages (CSPs). The study identifies several higher level grid services that could be potentially used to support the practice of simulation in industry. It proposes a grid computing framework to investigate these services in the context of CSP-based simulations. This framework is called the CSP-Grid Computing (CSP-GC) Framework. Each identified higher level grid service in this framework is referred to as a CSP-specific service. A total of six case studies are presented to experimentally evaluate how grid computing technologies can be used together with unmodified simulation packages to support some of the CSP-specific services. The contribution of this thesis is the CSP-GC framework, which identifies how simulation practice in industry may benefit from the use of grid technology. A further contribution is the recognition of specific grid computing software (grid middleware) that can possibly be used together with existing CSPs to provide grid support. With its focus on end-users and end-user tools, it is intended that this research will encourage wider adoption of grid computing in the workplace and that simulation users will derive benefit from using this technology.
APA, Harvard, Vancouver, ISO, and other styles
49

Omar, Wail M. "Self-management middleware services for autonomic grid computing." Thesis, Liverpool John Moores University, 2006. http://researchonline.ljmu.ac.uk/5784/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Van, Le. "Gridsec : une architecture sécurisée pour le grid computing." Besançon, 2003. http://www.theses.fr/2003BESA2028.

Full text
Abstract:
Computing grids are increasingly widespread and increasingly powerful. The concept is used both for intensive computation and for running large-scale applications. Access to resources must not be limited by the heterogeneity of the interconnected machines and networks, and the data crossing these many networks must be secured. In this document, we present GRIDSec, a security architecture that allows a site to be identified and authenticated by the other sites without impeding the authority of local entities. GRIDSec is based on site (i.e. participant) authentication and protects exchanges without affecting the compatibility of existing protocols or their performance. GRIDSec uses the security functions of DNSSec in an unusual way: we turned a DNS server into a key distribution server. We used SSH to protect the authentication and key exchange phase. Once the servers are authenticated, IPSec is used to protect the data. Our security system is validated both by simulating a distributed application with SimGrid and through experimental results. Performance tests show the low cost of the mechanisms employed.
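As a hedged sketch of the key-retrieval idea only (a DNS server acting as a key-distribution point), the fragment below fetches a domain's DNSKEY RRset with the dnspython library; GRIDSec's actual protocol (the SSH-protected exchange and the IPSec data path) is not reproduced here.

```python
# Retrieval step only: fetch a site's DNSSEC public keys via DNS.
import dns.resolver  # pip install dnspython

def fetch_site_keys(domain):
    """Return the textual DNSKEY records published for a domain."""
    answer = dns.resolver.resolve(domain, "DNSKEY")
    return [rr.to_text() for rr in answer]

# Network-dependent; uncomment to try against a DNSSEC-signed zone:
# for key in fetch_site_keys("example.org."):
#     print(key)
```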
APA, Harvard, Vancouver, ISO, and other styles
