
Dissertations / Theses on the topic 'Grid Computing. Unvollkommene Information'


Consult the top 23 dissertations / theses for your research on the topic 'Grid Computing. Unvollkommene Information.'


1

Schneider, Jörg. "Grid workflow scheduling based on incomplete information /." kostenfrei, 2010. http://opus.kobv.de/tuberlin/volltexte/2010/2574/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Radwan, Ahmed M. "Information Integration in a Grid Environment: Applications in the Bioinformatics Domain." Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_dissertations/509.

Full text
Abstract:
Grid computing emerged as a framework for supporting complex operations over large datasets; it enables the harnessing of large numbers of processors working in parallel to solve computing problems that typically spread across various domains. We focus on the problems of data management in a grid/cloud environment. The broader context of designing a service-oriented architecture (SOA) for information integration is studied, identifying the main components for realizing this architecture. The BioFederator is a web services-based data federation architecture for bioinformatics applications. Based on collaborations with bioinformatics researchers, several domain-specific data federation challenges and needs are identified. The BioFederator addresses these challenges and provides an architecture that incorporates a series of utility services; these address issues like automatic workflow composition, domain semantics, and the distributed nature of the data. The design also incorporates a series of data-oriented services that facilitate the actual integration of data. Schema integration is a core problem in the BioFederator context. Previous methods for schema integration rely on the exploration, implicit or explicit, of the multiple design choices that are possible for the integrated schema. Such exploration relies heavily on user interaction; thus, it is time-consuming and labor-intensive. Furthermore, previous methods have ignored the additional information that typically results from the schema matching process, that is, the weights and, in some cases, the directions associated with the correspondences. We propose a more automatic approach to schema integration that is based on the use of directed and weighted correspondences between the concepts that appear in the source schemas. A key component of our approach is a ranking mechanism for the automatic generation of the best candidate schemas.
The algorithm gives more weight to schemas that combine the concepts with higher similarity or coverage. Thus, the algorithm makes certain decisions that otherwise would likely be taken by a human expert. We show that the algorithm runs in polynomial time and, moreover, has good performance in practice. The proposed methods and algorithms are compared to state-of-the-art approaches. The BioFederator design, services, and usage scenarios are discussed. We demonstrate how our architecture can be leveraged in real-world bioinformatics applications. We performed a whole-human-genome annotation for nucleosome exclusion regions. The resulting annotations were studied and correlated with tissue specificity, gene density, and other important gene regulation features. We also study data processing models in grid environments. MapReduce is a popular parallel programming model that is proven to scale. However, using low-level MapReduce for general data processing tasks poses the problem of developing, maintaining, and reusing custom low-level user code. Several frameworks have emerged to address this problem; these frameworks share a top-down approach, where a high-level language is used to describe the problem semantics, and the framework takes care of translating this problem description into MapReduce constructs. We highlight several issues in the existing approaches and propose instead a novel refined MapReduce model that addresses the maintainability and reusability issues without sacrificing the low-level controllability offered by directly writing MapReduce code. We present MapReduce-LEGOS (MR-LEGOS), an explicit model for composing MapReduce constructs from simpler components, namely "Maplets", "Reducelets", and optionally "Combinelets". Maplets and Reducelets are standard MapReduce constructs that can be composed to define aggregated constructs describing the problem semantics.
This composition can be viewed as defining a micro-workflow inside the MapReduce job. Using the proposed model, complex problem semantics can be defined in the encompassing micro-workflow provided by MR-LEGOS while keeping the building blocks simple. We discuss the design details, its main features and usage scenarios. Through experimental evaluation, we show that the proposed design is highly scalable and has good performance in practice.
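The Maplet/Reducelet composition described in this abstract can be sketched in miniature. The following is an illustrative reconstruction under assumed semantics, not the authors' actual MR-LEGOS API; all function and helper names here are invented:

```python
# Illustrative sketch of MR-LEGOS-style composition (function names and
# composition semantics are assumptions, not the authors' actual API).
# "Maplets" are chained so each consumes the previous one's key-value
# pairs; a "Reducelet" then folds the grouped values for each key.

def tokenize_maplet(key, value):
    """Emit (word, 1) for every word in the input line."""
    for word in value.split():
        yield word, 1

def lowercase_maplet(key, value):
    """Normalize keys so 'Grid' and 'grid' aggregate together."""
    yield key.lower(), value

def sum_reducelet(key, values):
    return key, sum(values)

def compose_maplets(*maplets):
    """Chain maplets into one map function -- a micro-workflow inside the job."""
    def mapper(key, value):
        pairs = [(key, value)]
        for maplet in maplets:
            pairs = [out for k, v in pairs for out in maplet(k, v)]
        return pairs
    return mapper

def run_job(records, mapper, reducelet):
    """A tiny in-process stand-in for the MapReduce runtime."""
    groups = {}
    for key, value in records:
        for k, v in mapper(key, value):
            groups.setdefault(k, []).append(v)
    return dict(reducelet(k, vs) for k, vs in groups.items())

mapper = compose_maplets(tokenize_maplet, lowercase_maplet)
print(run_job([(None, "Grid grid computing")], mapper, sum_reducelet))
# {'grid': 2, 'computing': 1}
```

The point of the composition is that each building block stays a plain MapReduce construct, while the aggregated mapper expresses the micro-workflow.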
APA, Harvard, Vancouver, ISO, and other styles
3

Altowaijri, Saleh. "Grid and cloud computing : technologies, applications, market sectors, and workloads." Thesis, Swansea University, 2013. https://cronfa.swan.ac.uk/Record/cronfa42944.

Full text
Abstract:
Developments in electronics, computing, and communication technologies have transformed IT systems from the desktop and tightly coupled mainframe computers of the past to modern-day, highly complex distributed systems. These ICT systems interact with humans at a far more advanced level than was envisaged during the early years of computer development. The ICT systems of today have gone through various phases of development, absorbing intermediate and modern-day concepts such as networked computing; utility, on-demand, and autonomic computing; virtualisation; and so on. We now live in a ubiquitous computing and digital economy era where computing systems have penetrated human lives to a degree where these systems are becoming invisible. The price of these developments is increased cost, higher risk, and greater complexity. There is a compelling need to study these emerging systems, their applications, and the emerging market sectors that they are penetrating. Motivated by the challenges and opportunities offered by modern-day ICT technologies, we aim in this thesis to explore the major technological developments that have happened in ICT systems during this century, with a focus on developing techniques to manage applied ICT systems in the digital economy. In the process, we also touch on the evolution of ICT systems and discuss these in the context of state-of-the-art technologies and applications. We have identified the two most transformative technologies of this century, grid computing and cloud computing, and two application areas, intelligent healthcare and transportation systems. The contribution of this thesis is multidisciplinary in four broad areas. Firstly, a workload model of a grid-based ICT system in the healthcare sector is proposed and analysed using multiple healthcare organisations and applications.
Secondly, an innovative intelligent system for the management of disasters in urban environments using cloud computing is proposed and analysed. Thirdly, cloud computing market sectors, applications, and workloads are analysed using over 200 real-life case studies. Fourthly, a detailed background and literature review is provided on grid computing and cloud computing. Finally, directions for future work are given. The work contributes to multidisciplinary fields involving healthcare, transportation, mobile computing, vehicular networking, grid, cloud, and distributed computing. The discussions presented in this thesis on the historical developments, technology, and architectural details of grid computing help explain how and why grid computing was seen in the past as the global infrastructure of the future. These discussions on grid computing also provide the basis that we subsequently use to explain the background, motivations, technological details, and ongoing developments in cloud computing. The introductory chapters on grid and cloud computing collectively provide an insight into the evolution of ICT systems over the last 50+ years - from mainframes to microcomputers, the Internet, distributed computing, cluster computing, and computing as a utility and service. The existing and proposed applications of grid and cloud computing in healthcare and transport are used to further elaborate the two technologies and the ongoing ICT developments in the digital economy. The workload models and analyses of grid and cloud computing systems can be used by practitioners for the design and resource management of ICT systems.
APA, Harvard, Vancouver, ISO, and other styles
4

Tewelde, Yigzaw Samuel. "A generic campus grid computing framework for tertiary institutions : the case of the University of Stellenbosch." Thesis, Stellenbosch : Stellenbosch University, 2005. http://hdl.handle.net/10019.1/50248.

Full text
Abstract:
Thesis (MPhil)--Stellenbosch University, 2005.
ENGLISH ABSTRACT: Prior to the invention of personal computers, the scope of research activities was limited by the pre-existing capabilities of problem-solving mechanisms. However, with the advent of PCs and the inter-networking thereof, the new tools (hardware and software) enabled the scientific community to tackle more complex research challenges, and this led to a better understanding of our environment. The development of the Internet also enabled research communities to communicate and share information in real time. However, even the Internet has limitations of its own when it comes to the need to share not only information but also massive storage, processing power, huge databases and applications, expensive and delicate scientific instruments, knowledge, and expertise. This led to the need for a networking system that includes the above-mentioned services, using the Internet infrastructure, semantic web technologies, and pervasive computing devices - the approach known as Grid Computing. This research study deals with a Generic Campus Grid Computing framework, which mobilizes the available idle/extra computing resources residing in faculty computing centres for use by the e-community on CPU-intensive or data-intensive jobs. This unused computing capacity could be utilized for Grid computing services; hence, the already available resources could be exploited more efficiently. Moreover, this could represent a huge saving when compared to the cost of acquiring supercomputers by these institutions. Therefore, this research study intends to establish a simple and functional Generic Campus Grid Computing Framework at this stage, with the understanding that subsequent research studies could deal with further assessment from a more detailed perspective and with its practical implementation.
APA, Harvard, Vancouver, ISO, and other styles
5

Bach, Eric J., and Mark G. Fickel. "An analysis of the feasibility and applicability of IEEE 802.X wireless mesh networks within the Global Information Grid /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Sep%5FBach.pdf.

Full text
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, Sept. 2004.
Thesis advisor(s): Alexander Bordetsky. Includes bibliographical references (p. 81-91). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
6

Fickel, Mark G., and Eric J. Bach. "An analysis of the feasibility and applicability of IEEE 802.X wireless mesh networks within the Global Information Grid." Thesis, Monterey California. Naval Postgraduate School, 2004. http://hdl.handle.net/10945/1462.

Full text
Abstract:
Approved for public release; distribution is unlimited
This thesis analyzes the feasibility, functionality, efficacy, and usability of IEEE 802.x wireless mesh networks in multiple DoD contexts. Through multiple field and lab experiments and hardware investigations, an assessment is performed of the realistic implementation issues of wireless mesh networks and their possible applications. A detailed examination is conducted of the variable elements, operational constraints, and possible decision points for developing a usable, robust, self-organizing wireless mesh network that can be leveraged for maximum usability and shared situational awareness in network-centric operations. The research investigates the suitability of currently available COTS hardware and software wireless mesh networking components for geographically distributed networks. Additionally, a product-line software architecture and a common data-interchange XML vocabulary are proposed as the enabling technology elements to carry application-layer mesh networking forward for the integration of collaborative sensor-decision maker adaptive networks within the Global Information Grid. The thesis includes the design and implementation of the first Naval Postgraduate School testbed for tactical-level mesh networking with unmanned vehicles, unattended sensors, and warrior networking nodes. This thesis also lays the groundwork for further research into lower-OSI-layer routing protocols for DoD mesh networks, the development of mesh-aware applications, and a GIG-wide mesh network architecture.
Lieutenant Commander, Supply Corps, United States Navy
Lieutenant Commander, United States Navy
APA, Harvard, Vancouver, ISO, and other styles
7

Yadav, Pavan Kumar, and Kosuri Naga Krishna Kalyan. "Support for Information Management in Virtual Organizations." Thesis, Blekinge Tekniska Högskola, Avdelningen för interaktion och systemdesign, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1709.

Full text
Abstract:
Globalization and innovation are revolutionizing higher education, creating new market trends. Different nations have their own patterns and frameworks for delivering educational services. Educational institutions are also undergoing organizational and behavioural changes to secure their future as they hunt for new financial resources, face new competition, and seek greater prestige domestically and internationally. The coming years will decide which universities survive the market trends, the competition, and the expectations of their students (clients). The survival-of-the-fittest paradigm plays a prominent role in ideas of how higher education will be delivered to students in the future through instructional technology and distance education. In our view, the delivery of educational services has shifted from the management's point of view to the student's point of view, leading to the delivery of educational services that have more impact on a student's education, knowledge, and experience within the institution. In this thesis we provide information about how to support and manage information in Virtual Organizations. We also explore the frameworks of the university and discuss a case study about different ways of providing better support for information management, resulting in the delivery of the best student-driven services and unique facilities. We look at different aspects of the university's workflows and procedures and gain insight into students' expectations of the organization. This investigation should help students know what services to expect from their universities, and help management better understand students' needs and develop a framework for the proper execution of these services.
Pavan Kumar Yadav, S/o: B.R.Basant Kumar Yadav, Hno: 291,292, Lalbazar, Trimulgherry, Secunderabad, Andhra Pradesh, India 500015. PH: (+91)(040)27793414
APA, Harvard, Vancouver, ISO, and other styles
8

Milicic, Gregory J. "An analysis of tactical mesh networking hardware requirements for airborne mobile nodes /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Mar%5FMilicic.pdf.

Full text
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, March 2005.
Thesis Advisor(s): Alexander Bordetsky. Includes bibliographical references (p. 39-40). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
9

Weishäupl, Thomas. "Business and the grid : economic and transparent utilization of virtual resources /." Berlin : Akademische Verl.-Ges, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2854951&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Al-Shishtawy, Ahmad. "Self-Management for Large-Scale Distributed Systems." Doctoral thesis, KTH, Programvaruteknik och Datorsystem, SCS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101661.

Full text
Abstract:
Autonomic computing aims at making computing systems self-managing by using autonomic managers in order to reduce obstacles caused by management complexity. This thesis presents results of research on self-management for large-scale distributed systems. This research was motivated by the increasing complexity of computing systems and their management. In the first part, we present our platform, called Niche, for programming self-managing component-based distributed applications. In our work on Niche, we have faced and addressed the following four challenges in achieving self-management in a dynamic environment characterized by volatile resources and high churn: resource discovery, robust and efficient sensing and actuation, management bottleneck, and scale. We present results of our research on addressing the above challenges. Niche implements the autonomic computing architecture, proposed by IBM, in a fully decentralized way. Niche supports a network-transparent view of the system architecture simplifying the design of distributed self-management. Niche provides a concise and expressive API for self-management. The implementation of the platform relies on the scalability and robustness of structured overlay networks. We proceed by presenting a methodology for designing the management part of a distributed self-managing application. We define design steps that include partitioning of management functions and orchestration of multiple autonomic managers. In the second part, we discuss robustness of management and data consistency, which are necessary in a distributed system. Dealing with the effect of churn on management increases the complexity of the management logic and thus makes its development time consuming and error prone. We propose the abstraction of Robust Management Elements, which are able to heal themselves under continuous churn. 
Our approach is based on replicating a management element using finite state machine replication with a reconfigurable replica set. Our algorithm automates the reconfiguration (migration) of the replica set in order to tolerate continuous churn. For data consistency, we propose a majority-based distributed key-value store supporting multiple consistency levels that is based on a peer-to-peer network. The store enables a trade-off between high availability and data consistency. Using majorities allows avoiding potential drawbacks of master-based consistency control, namely a single point of failure and a potential performance bottleneck. In the third part, we investigate self-management for Cloud-based storage systems with a focus on elasticity control using elements of control theory and machine learning. We have conducted research on a number of different designs of an elasticity controller, including a state-space feedback controller and a controller that combines feedback and feedforward control. We describe our experience in designing an elasticity controller for a Cloud-based key-value store using a state-space model that enables trading off performance for cost. We describe the steps in designing an elasticity controller. We continue by presenting the design and evaluation of ElastMan, an elasticity controller for Cloud-based elastic key-value stores that combines feedforward and feedback control.
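The majority-based store mentioned in this abstract can be illustrated with a small quorum sketch. This is a generic majority-quorum scheme under assumed semantics, not the thesis implementation: because any two majorities of the replica set intersect, a read quorum always contains at least one replica that saw the latest write.

```python
import random

class Replica:
    """One peer holding key -> (version, value)."""
    def __init__(self):
        self.store = {}

class MajorityStore:
    def __init__(self, n=5):
        self.replicas = [Replica() for _ in range(n)]
        self.quorum = n // 2 + 1  # majority size

    def write(self, key, value):
        # Contact a majority; the highest version seen there is the global
        # latest, because this quorum intersects the previous write quorum.
        sample = random.sample(self.replicas, self.quorum)
        latest = max(r.store.get(key, (0, None))[0] for r in sample)
        for r in sample:
            r.store[key] = (latest + 1, value)

    def read(self, key):
        # Any majority intersects the last write quorum, so the
        # highest-versioned reply carries the most recent value.
        sample = random.sample(self.replicas, self.quorum)
        replies = [r.store.get(key, (0, None)) for r in sample]
        return max(replies, key=lambda pair: pair[0])[1]

store = MajorityStore()
store.write("config", "v1")
store.write("config", "v2")
print(store.read("config"))  # v2
```

Avoiding a designated master is what removes the single point of failure: every write and read goes to some majority, and no single replica is on the critical path of every operation.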


APA, Harvard, Vancouver, ISO, and other styles
11

Wei, Longfei. "Game-Theoretic and Machine-Learning Techniques for Cyber-Physical Security and Resilience in Smart Grid." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3850.

Full text
Abstract:
The smart grid is the next-generation electrical infrastructure utilizing Information and Communication Technologies (ICTs), whose architecture is evolving from a utility-centric structure to a distributed Cyber-Physical System (CPS) integrated with large-scale renewable energy resources. However, meeting reliability objectives in the smart grid becomes increasingly challenging owing to the high penetration of renewable resources and changing weather conditions. Moreover, cyber-physical attacks targeting the smart grid have become a major threat because millions of electronic devices interconnected via communication networks expose unprecedented vulnerabilities, thereby increasing the potential attack surface. This dissertation is aimed at developing novel game-theoretic and machine-learning techniques for addressing the reliability and security issues residing at multiple layers of the smart grid, including power distribution system reliability forecasting, risk assessment of cyber-physical attacks targeted at the grid, and cyber attack detection in the Advanced Metering Infrastructure (AMI) and renewable resources. The dissertation first comprehensively investigates the combined effect of various weather parameters on the reliability performance of the smart grid, and proposes a multilayer perceptron (MLP)-based framework to forecast the daily number of power interruptions in the distribution system using time series of common weather data. To evaluate the risk of cyber-physical attacks faced by the smart grid, a stochastic budget allocation game is proposed to analyze the strategic interactions between a malicious attacker and the grid defender. A reinforcement learning algorithm is developed to enable the two players to reach a game equilibrium, where the optimal budget allocation strategies of the two players, in terms of attacking/protecting the critical elements of the grid, can be obtained.
In addition, the risk of a cyber-physical attack can be derived from the probability of successful attacks on the various grid elements. Furthermore, this dissertation develops a multimodal data-driven framework for cyber attack detection in the power distribution system integrated with renewable resources. This approach introduces sparse feature learning into an ensemble classifier to improve detection efficiency, and implements spatiotemporal correlation analysis to differentiate attacked renewable energy measurements from fault scenarios. Numerical results based on the IEEE 34-bus system show that the proposed framework achieves the most accurate detection of cyber attacks reported in the literature. To address electricity theft in the AMI, a Distributed Intelligent Framework for Electricity Theft Detection (DIFETD) is proposed, which is equipped with Benford's analysis for initial diagnostics on large smart meter data. A Stackelberg game between the utility and multiple electricity thieves is then formulated to model electricity theft actions. Finally, a Likelihood Ratio Test (LRT) is utilized to detect potentially fraudulent meters.
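The Benford's-analysis screening step mentioned above can be illustrated with a generic first-digit test; this is not the DIFETD implementation, and the sample data are invented. A meter's first-digit histogram is compared against Benford's expected frequencies with a chi-squared statistic, and a large value flags the meter for closer inspection:

```python
import math
from collections import Counter

# Benford's law: P(first digit = d) = log10(1 + 1/d), for d = 1..9.
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    digits = str(abs(x)).lstrip("0.")
    return int(digits[0]) if digits else None

def benford_chi2(readings):
    """Chi-squared distance of the first-digit histogram from Benford's law."""
    observed = Counter(d for d in map(first_digit, readings) if d)
    n = sum(observed.values())
    return sum((observed.get(d, 0) - n * p) ** 2 / (n * p)
               for d, p in BENFORD.items())

# Invented sample data: a multiplicative growth process tends to follow
# Benford's law; readings pinned near one value (as in crude tampering) do not.
honest = [1.2 * 1.05 ** k for k in range(200)]
tampered = [5.0 + 0.001 * k for k in range(200)]
print(benford_chi2(honest) < benford_chi2(tampered))  # True
```

The test is only a cheap first-pass diagnostic, which is exactly the role the abstract assigns it before the game-theoretic and LRT stages.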
APA, Harvard, Vancouver, ISO, and other styles
12

Ayoubi, Tarek. "Distributed Data Management Supporting Healthcare Workflow from Patients’ Point of View." Thesis, Blekinge Tekniska Högskola, Avdelningen för interaktion och systemdesign, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-6030.

Full text
Abstract:
A patient’s mobility throughout his lifetime leaves a trail of information scattered across laboratories, clinical institutes, primary care units, and other hospitals. Hence, the medical history of a patient is valuable when the patient is referred to specialised healthcare units or undergoes home care or personal care in elderly-stage cases. Despite the rhetoric about patient-centred care, few attempts have been made to measure and improve it in this arena. In this thesis, we describe and implement a high-level view of Patient-Centric information management, deploying, at a preliminary stage, Agent Technologies and Grid Computing. We thus develop and propose an infrastructure that allows us to monitor and survey the patient from the doctor’s point of view, and investigate a Persona, from the patient’s side, that functions and collaborates among different medical information structures. The Persona attempts to interconnect all the major agents (human and software) and realize a distributed grid info-structure that directly affects the patient, thereby revealing an adequate and cost-effective solution for the most critical information needs. The results of the literature survey, consolidating Healthcare Information Management with emerging intelligent Multi-Agent System (MAS) Technologies and Grid Computing, are intended to provide a solid basis for further advancements and assessments in this field, by proposing a bridging framework between the home-care sector and the flexible agent architecture throughout the healthcare domain.
APA, Harvard, Vancouver, ISO, and other styles
13

Milicic, Gregory J. "Analysis of hardware requirements for airborne tactical mesh networking nodes." Thesis, Monterey California. Naval Postgraduate School, 2005. http://hdl.handle.net/10945/2218.

Full text
Abstract:
Approved for public release; distribution is unlimited
Wireless mesh mobile ad hoc networks (MANETs) provide the military with the opportunity to extend information superiority to the tactical battlespace in support of network-centric warfare (NCW). These mesh networks provide a tactical networking framework for improved situational awareness through the ubiquitous sharing of information, including remote sensor and targeting data. The Naval Postgraduate School's Tactical Network Topology (TNT) project, sponsored by US Special Operations Command, seeks to adapt commercial off-the-shelf (COTS) information technology for use in military operational environments. These TNT experiments rely on a variety of airborne nodes, including tethered balloons and UAVs such as the Tern, to provide reachback from nodes on the ground to the Tactical Operations Center (TOC), as well as to simulate the information and traffic streams expected from UAVs conducting surveillance missions and from fixed persistent sensor nodes. Airborne mesh nodes have unique requirements that can be met with COTS technology, including single-board computers and CompactFlash storage.
Lieutenant, United States Navy
APA, Harvard, Vancouver, ISO, and other styles
14

Hanyk, Tomáš. "Výběr informačního systému." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2016. http://www.nusl.cz/ntk/nusl-241520.

Full text
Abstract:
This diploma thesis deals with the selection of an appropriate information system for the company PORTABELL, which should help make company processes more effective, save time, and enable faster adaptation to new changes. The first part describes the essential theory to acquaint the reader with the issues, the second part is dedicated to the analysis of the corporate environment and its trouble spots, and the last part deals with the selection of the information system, which is chosen based on predetermined criteria and requirements.
APA, Harvard, Vancouver, ISO, and other styles
15

Mrkvičková, Pavlína. "Výběr a implementace informačního systému." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2017. http://www.nusl.cz/ntk/nusl-318354.

Full text
Abstract:
This master’s thesis deals with the selection and implementation of the new information system for managing orders in the company. Theoretical knowledge, procedures and methods are used as a basis for elaboration of the practical part. The analytical part focuses on the analysis of the current situation in the company and includes the assessment of the information system. The proposal part contains a two-round selection of a new information system based on established criteria and the implementation of the selected best solution. There is also the economic assessment of the costs and description of the expected benefits of the selected solution for the company.
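A criteria-based selection like the one described above can be sketched as a weighted scoring matrix. The criteria, weights, and candidate scores below are hypothetical illustrations, not data from the thesis:

```python
# Hypothetical criteria and weights (weights sum to 1.0); each candidate
# system gets a 0-10 score per criterion, and the weighted total ranks them.
CRITERIA = {"price": 0.3, "functionality": 0.4, "support": 0.2, "references": 0.1}

candidates = {
    "System A": {"price": 7, "functionality": 9, "support": 6, "references": 8},
    "System B": {"price": 9, "functionality": 6, "support": 8, "references": 7},
}

def weighted_score(scores):
    return sum(weight * scores[criterion]
               for criterion, weight in CRITERIA.items())

ranking = sorted(candidates, key=lambda name: weighted_score(candidates[name]),
                 reverse=True)
print(ranking[0], round(weighted_score(candidates[ranking[0]]), 2))
# System A 7.7
```

A two-round selection, as described in the abstract, would apply such a matrix first as a coarse filter over many systems and then, with refined criteria, over the shortlist.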
APA, Harvard, Vancouver, ISO, and other styles
16

Chung, Wu-Chun [鍾武君]. "An Information Retrieving Protocol for Resource Monitoring in Grid Computing." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/56810469516954939246.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
2005 (ROC academic year 94)
Grid computing is a technology for distributed computing. To manage the large scale of Grid resources with dynamic access, resource management is a key component of Grid computing. Its basic task is to manage Grid resource services and their constantly changing information. In this thesis, a Grid Resource Information Monitoring (GRIM) prototype is introduced to accomplish this task. To support the constantly changing resource states in the GRIM prototype, a push-based data delivery protocol named Grid Resource Information Retrieving (GRIR) is provided. There is a trade-off between information fidelity and update transmissions: the more frequent the reports, the more precise the information, but the more overhead occurs. The offset-sensitive mechanism, time-sensitive mechanism, and hybrid mechanism in GRIR are used to achieve a high degree of data accuracy with fewer useless update messages. In the offset-sensitive mechanism, a tolerable threshold determines whether to report the present status: if the difference between the currently monitored value and the last announced value is greater than or equal to the threshold, the present status is reported as an update. In the time-sensitive mechanism, a dynamic time interval calculated from historic timestamps ensures the cached information is updated within an adaptable period. The hybrid mechanism updates the present status using a dynamic time interval calculated according to a dynamic threshold. The updating overhead and bandwidth consumption are thereby reduced. Experimental results show that the proposed mechanisms not only alleviate update transmissions but also achieve less loss of data accuracy than prior work.
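The offset-sensitive mechanism described in this abstract lends itself to a short sketch; the threshold, metric, and sample values below are illustrative, not from the thesis. An update is pushed only when the monitored value drifts from the last announced value by at least the tolerable threshold, trading a bounded reporting error for fewer update messages:

```python
# Minimal sketch of an offset-sensitive monitor (values illustrative):
# push an update only when |current - last announced| >= threshold.

class OffsetSensitiveMonitor:
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_announced = None
        self.updates_sent = 0

    def observe(self, value):
        """Return True (i.e. push an update) only when the offset is large enough."""
        if (self.last_announced is None
                or abs(value - self.last_announced) >= self.threshold):
            self.last_announced = value
            self.updates_sent += 1
            return True
        return False

monitor = OffsetSensitiveMonitor(threshold=10)   # e.g. percentage CPU load
samples = [50, 52, 49, 65, 66, 80, 79, 78]
pushed = [s for s in samples if monitor.observe(s)]
print(pushed, monitor.updates_sent)  # [50, 65, 80] 3
```

Eight observations yield only three update messages, while the consumer's cached value never deviates from the true value by more than the threshold.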
APA, Harvard, Vancouver, ISO, and other styles
17

Lin, Cheng-Fang, and 林正芳. "A Workflow-based Resource Broker Portal with Information Monitoring on Grid Computing Environments." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/56298767263917164925.

Full text
Abstract:
Master's thesis
Tunghai University
Department of Computer Science and Information Engineering
94
The computational Grid serves as a beacon to scientists solving large-scale problems over the Internet on what amounts to a tremendous virtual computer. As Grid computing becomes a reality, a resource broker is needed to manage and monitor available resources. This thesis presents a workflow-based computational resource broker whose main function is to match available resources with user requests, taking network status information into account during matchmaking. The resource broker provides a uniform interface for accessing the available and appropriate resources via user credentials. We utilize the NWS tool to monitor network-related information and resource status. In order to identify and schedule jobs suitable for the determined resources, an execution time estimation model is required. This thesis describes a chronological history-based execution time estimation model that predicts the current execution time from previous execution results. The experimental results show that our model can accurately predict the execution time of embarrassingly parallel applications. We also constructed a grid platform using the Globus Toolkit that integrates the distributed resources of five universities in Taichung, Taiwan, under the TIGER project, on which the resource broker was developed. As a result, the proposed broker provides secure and updated information about available resources and serves as a link to the diverse systems available in the Grid.
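A chronological history-based estimate of the kind this abstract describes can be sketched as a recency-weighted average over past run times; the linear weighting below is an assumption for illustration, not the thesis's exact model.

```python
def estimate_execution_time(history):
    """Predict the next execution time from chronologically ordered
    past run times (oldest first), weighting recent runs more heavily."""
    if not history:
        raise ValueError("need at least one previous run")
    # Linearly increasing weights: the i-th run (oldest = 1) gets weight i.
    weights = range(1, len(history) + 1)
    total = sum(w * t for w, t in zip(weights, history))
    return total / sum(weights)

# Three previous runs in seconds, oldest first:
print(estimate_execution_time([120.0, 100.0, 110.0]))  # ≈ 108.33
```

Feeding the estimate back into the matchmaking step lets the broker rank candidate resources by predicted completion time rather than raw load alone.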
APA, Harvard, Vancouver, ISO, and other styles
18

Lin, Chih-Hao, and 林志豪. "A Heuristic QoS Measurement with Domain-based Network Information Model on Grid Computing Environments." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/56099933927151242900.

Full text
Abstract:
Master's thesis
Tunghai University
In-service Master's Program, Department of Computer Science and Information Engineering
97
Grid computing has recently become increasingly common and widespread, which raises a common issue: how to manage and monitor the numerous resources of grid computing environments. In most cases, Ganglia and NWS are used to monitor Grid nodes' status and network-related information, respectively. With the support of Ganglia and NWS services, we can effectively monitor and manage the available resources of our grid environments; comprehensive monitoring and effective management are prerequisites for achieving higher grid computation performance. Ganglia is often adopted to monitor resource status in grid environments, such as hosts' liveness and CPU or memory utilization, and it can also monitor network-related information. More often, however, NWS services are used to measure network-related information, such as end-to-end TCP/IP performance. Compared to Ganglia, NWS services offer more flexibility and more measurement mechanisms to choose from. Moreover, NWS services can be deployed in a non-intrusive manner, which helps us roll services out to each grid node rapidly and easily; network-related information becomes available shortly after deployment. NWS services can also measure CPU or memory utilization, but provide less functionality than Ganglia in this dimension. Therefore, we mostly combine the services provided by Ganglia and NWS to meet our requirements for effectively monitoring and managing the available resources of our grid environments. Unfortunately, owing to diverse user requirements, the information provided by Ganglia and NWS services is not sufficient in real cases, especially for application developers. For example, with Ganglia or NWS alone, users cannot directly retrieve the utilization or allocation of resources in grid environments through a proper interface or channel.
In addition, NWS services deployed according to the Domain-based Network Information Model can greatly reduce the overhead caused by unnecessary measurements. Therefore, in this thesis, we propose a heuristic QoS measurement built on the domain-based information model. This measurement provides more effective information to meet user requirements, especially for application developers. We hope it enables users to manage and monitor the numerous resources of grid environments more effectively and efficiently.
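The overhead reduction a domain-based deployment offers can be illustrated with a small sketch: measure a full mesh only inside each domain and probe a single representative pair between domains, rather than every cross-domain pair. The function and domain names below are hypothetical, not taken from the thesis.

```python
from itertools import combinations

def measurement_pairs(domains):
    """Return the node pairs to probe under a domain-based model:
    a full mesh inside each domain, plus one representative pair
    between every two domains (instead of all cross-domain pairs)."""
    pairs = []
    for nodes in domains.values():
        pairs.extend(combinations(nodes, 2))      # intra-domain mesh
    reps = {name: nodes[0] for name, nodes in domains.items()}
    for a, b in combinations(sorted(reps), 2):
        pairs.append((reps[a], reps[b]))          # one inter-domain probe
    return pairs

domains = {"siteA": ["a1", "a2", "a3"], "siteB": ["b1", "b2"], "siteC": ["c1", "c2"]}
print(len(measurement_pairs(domains)))  # 8: (3 + 1 + 1) intra + 3 inter
# A full mesh over all 7 nodes would need C(7,2) = 21 probes.
```

The gap widens quickly with scale, since the full mesh grows quadratically in the number of nodes while the inter-domain part grows only with the number of domains.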
APA, Harvard, Vancouver, ISO, and other styles
19

Chen, Tsui-Ting, and 陳翠婷. "Implementation of Information and Monitoring Services for Resource Broker on Cross Grid Computing Environments." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/81425649421082159836.

Full text
Abstract:
Master's thesis
Tunghai University
Department of Computer Science and Information Engineering
96
In solving large-scale computation problems using open standards over networks, grid computing must deal with geographically distributed heterogeneous resources, including differing computing platforms, hardware, software, architectures, and languages owned by various administrative domains. As the number of Grids worldwide increases, multi-institution collaborations grow rapidly as well. However, to realize the full potential of Grid computing, Grid participants will have to be able to use one another's resources. This work presents a Cross Grid Information Service (CGIS) that enables Resource Brokers to obtain information from cross-grid environments on behalf of other components, and proposes an adaptive query information algorithm whose information update time is analyzed experimentally. We implemented the Automatic Backup model and the Single Sign-On model on the Resource Broker web portal. The proposed Resource Broker provides secure, updated information about available resources and serves as a link to the diverse systems available in the Grid.
APA, Harvard, Vancouver, ISO, and other styles
20

Ertaç, Özgür [Verfasser]. "Integrating grid computing and server-based geographical information systems to facilitate a disaster management system / Özgür Ertaç." 2010. http://d-nb.info/1007748788/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Nureni, Ayofe Azeez. "Towards ensuring scalability, interoperability and efficient access control in a triple-domain grid-based environment." Thesis, 2012. http://hdl.handle.net/11394/3809.

Full text
Abstract:
Philosophiae Doctor - PhD
The high rate of grid computing adoption, in both academe and industry, has posed challenges regarding efficient access control, interoperability and scalability. Although several methods have been proposed to address these grid computing challenges, none has proven to be completely efficient and dependable. To tackle these challenges, a novel access control architecture framework, a triple-domain grid-based environment modelled on role-based access control, was developed. The architecture's framework assumes three domains, each with an independent Local Security Monitoring Unit, and a Central Security Monitoring Unit that monitors security for the entire grid. The architecture was evaluated and implemented using the G3S grid security services simulator, a meta-query language for "cross-domain" queries, and Java Runtime Environment 1.7.0.5 for implementing the workflows that define the model's tasks. The simulation results show that the developed architecture is reliable and efficient when measured against the observed parameters and entities. The proposed framework for access control also proved to be interoperable and scalable within the parameters tested.
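The division of labour between per-domain Local Security Monitoring Units and a Central Security Monitoring Unit can be sketched as a role-based authorization check; all role names, permission sets and class names below are illustrative, not drawn from the thesis.

```python
# Role-based access control in the spirit of the triple-domain model:
# each domain's local unit checks role permissions, while a central
# unit routes cross-domain requests to the right local unit.
PERMISSIONS = {
    "researcher": {"read"},
    "admin": {"read", "write", "delete"},
}

class LocalSecurityUnit:
    def __init__(self, domain, roles):
        self.domain = domain
        self.roles = roles              # user -> role within this domain

    def authorize(self, user, action):
        role = self.roles.get(user)
        return role is not None and action in PERMISSIONS.get(role, set())

class CentralSecurityUnit:
    """Routes a request to the target domain's local monitoring unit."""
    def __init__(self, local_units):
        self.local_units = {u.domain: u for u in local_units}

    def authorize(self, user, action, target_domain):
        unit = self.local_units.get(target_domain)
        return unit is not None and unit.authorize(user, action)

grid = CentralSecurityUnit([
    LocalSecurityUnit("domain1", {"alice": "admin"}),
    LocalSecurityUnit("domain2", {"alice": "researcher"}),
])
print(grid.authorize("alice", "write", "domain1"))  # True
print(grid.authorize("alice", "write", "domain2"))  # False: researcher lacks write
```

Because the same user may hold different roles in different domains, the access decision depends on the target domain, which is what makes the central unit's routing step necessary.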
APA, Harvard, Vancouver, ISO, and other styles
22

Coetzee, Serena Martha. "An analysis of a data grid approach for spatial data infrastructures." Thesis, 2009. http://hdl.handle.net/2263/28232.

Full text
Abstract:
The concept of grid computing has permeated all areas of distributed computing, changing the way in which distributed systems are designed, developed and implemented. At the same time ‘geobrowsers’, such as Google Earth, NASA World Wind and Virtual Earth, along with in-vehicle navigation, handheld GPS devices and maps on mobile phones, have made interactive maps and geographic information an everyday experience. Behind these maps lies a wealth of spatial data that is collated from a vast number of different sources. A spatial data infrastructure (SDI) aims to make spatial data from multiple sources available to as wide an audience as possible. Current research indicates that, due to a number of reasons, data sharing in these SDIs is still not common. This dissertation presents an analysis of the data grid approach for SDIs. Starting off, two imaginary scenarios spell out for the first time how data grids can be applied to enable the sharing of address data in an SDI. The work in this dissertation spans two disciplines: Computer Science (CS) and Geographic Information Science (GISc). A study of related work reveals that the data grid approach in SDIs is both a novel application for data grids (CS), as well as a novel technology in SDI environments (GISc), and this dissertation advances mutual understanding between the two disciplines. The novel evaluation framework for national address databases in an SDI is used to evaluate existing information federation models against the data grid approach. This evaluation, as well as an analysis of address data in an SDI, confirms that there are similarities between the data grid approach and the requirement for consolidated address data in an SDI. 
The evaluation further shows that where a large number of organizations are involved, such as for a national address database, and where there is a lack of a single organization tasked with the management of a national address database, the data grid is an attractive alternative to other models. The Compartimos (Spanish for ‘we share’) reference model was developed to identify the components with their capabilities and relationships that are required to grid-enable address data sharing in an SDI. The definition of an address in the broader sense (i.e. not only for postal delivery), the notion of an address as a reference and the definition of an addressing system and its comparison to a spatial reference system contribute towards the understanding of what an address is. A novel address data model shows that it is possible to design a data model for sharing and exchange of address data, despite diverse addressing systems and without impacting on, or interfering with, local laws for address allocation. The analysis in this dissertation confirms the need for standardization of domain specific geographic information, such as address data, and their associated services in order to integrate data from distributed heterogeneous sources. In conclusion, results are presented and recommendations for future work, drawn from the experience on the work in this dissertation, are made.
Thesis (PhD)--University of Pretoria, 2009.
Computer Science
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
23

Prem, Hema. "Architecting Resource Management Services For Computational Grids : Patterns And Performance Models." Thesis, 2005. http://etd.iisc.ernet.in/handle/2005/1419.

Full text
APA, Harvard, Vancouver, ISO, and other styles