Dissertations / Theses on the topic 'Networking : Distributed computer systems'

Consult the top 50 dissertations / theses for your research on the topic 'Networking : Distributed computer systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Knight, Jon. "Supporting distributed computation over wide area gigabit networks." Thesis, Loughborough University, 1995. https://dspace.lboro.ac.uk/2134/7329.

Full text
Abstract:
The advent of high bandwidth fibre optic links that may be used over very large distances has led to much research and development in the field of wide area gigabit networking. One problem that needs to be addressed is how loosely coupled distributed systems may be built over these links, allowing many computers worldwide to take part in complex calculations in order to solve "Grand Challenge" problems. The research conducted as part of this PhD has looked at the practicality of implementing a communication mechanism proposed by Craig Partridge called Late-binding Remote Procedure Calls (LbRPC). LbRPC is intended to export both code and data over the network to remote machines for evaluation, as opposed to traditional RPC mechanisms that only send parameters to pre-existing remote procedures. The ability to send code as well as data means that LbRPC requests can overcome one of the biggest problems in Wide Area Distributed Computer Systems (WADCS): the fixed latency due to the speed of light. As machines get faster, the fixed multi-millisecond round trip delay equates to ever increasing numbers of CPU cycles. For a WADCS to be efficient, programs should minimise the number of network transits they incur. By allowing the application programmer to export arbitrary code to the remote machine, this may be achieved. This research has looked at the feasibility of supporting secure exportation of arbitrary code and data in heterogeneous, loosely coupled, distributed computing environments. It has investigated techniques for making placement decisions for the code in cases where there are a large number of widely dispersed remote servers that could be used. The latter has resulted in the development of a novel prototype LbRPC using multicast IP for implicit placement and a sequenced, multi-packet saturation multicast transport protocol. These prototypes show that it is possible to export code and data to multiple remote hosts, thereby removing the need to perform complex and error-prone explicit process placement decisions.
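The late-binding idea is easy to see in miniature. The Go sketch below is our illustration only (LbRPC itself is a network protocol; the mini-language and struct here are invented): a request carries a small "program" together with its operands, so one round trip replaces a sequence of conventional RPCs.

```go
package main

import "fmt"

// A toy "late-binding" request: code (a list of operations) travels with the
// data, so the whole computation costs one network round trip instead of one
// round trip per pre-installed procedure call.
type Request struct {
	Code []string  // hypothetical mini-language: "square", "sum", ...
	Data []float64
}

// evaluate plays the role of the remote server applying the shipped code.
func evaluate(req Request) float64 {
	acc := 0.0
	for _, op := range req.Code {
		switch op {
		case "square":
			for i, v := range req.Data {
				req.Data[i] = v * v
			}
		case "sum":
			acc = 0
			for _, v := range req.Data {
				acc += v
			}
		}
	}
	return acc
}

func main() {
	// One message carries both program and operands.
	fmt.Println(evaluate(Request{Code: []string{"square", "sum"}, Data: []float64{1, 2, 3}})) // 14
}
```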
2

Jiang, Qiangfeng. "ALGORITHMS FOR FAULT TOLERANCE IN DISTRIBUTED SYSTEMS AND ROUTING IN AD HOC NETWORKS." UKnowledge, 2013. http://uknowledge.uky.edu/cs_etds/16.

Full text
Abstract:
Checkpointing and rollback recovery are well-known techniques for coping with failures in distributed systems. Future generation supercomputers will be message passing distributed systems consisting of millions of processors. As the number of processors grows, the failure rate also grows. Thus, designing efficient checkpointing and recovery algorithms for coping with failures in such large systems is important for these systems to be fully utilized. We presented a novel communication-induced checkpointing algorithm which helps in reducing contention for accessing stable storage to store checkpoints. Under our algorithm, a process involved in a distributed computation can independently initiate consistent global checkpointing by saving its current state, called a tentative checkpoint. Other processes involved in the computation come to know about the consistent global checkpoint initiation through information piggy-backed with the application messages or through limited control messages if necessary. When a process comes to know about a new consistent global checkpoint initiation, it takes a tentative checkpoint after processing the message. The tentative checkpoints taken can be flushed to stable storage when there is no contention for accessing stable storage. The tentative checkpoints together with the message logs stored in the stable storage form a consistent global checkpoint. Ad hoc networks consist of a set of nodes that can form a network for communication with each other without the aid of any infrastructure or human intervention. Nodes are energy-constrained, and hence routing algorithms designed for these networks should take this into consideration. We proposed two routing protocols for mobile ad hoc networks which prevent nodes from broadcasting route requests unnecessarily during the route discovery phase and hence conserve energy and prevent contention in the network. One is called the Triangle Based Routing (TBR) protocol. The other routing protocol we designed is called the Routing Protocol with Selective Forwarding (RPSF). Both of the routing protocols greatly reduce the number of control packets needed to establish routes between pairs of source and destination nodes. As a result, they reduce the energy consumed for route discovery. Moreover, these protocols reduce congestion and collision of packets due to the limited number of nodes retransmitting the route requests.
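For illustration, here is a minimal Go sketch of the piggybacking step described above; the field names and the in-memory checkpoint are our own simplifications, not the thesis's algorithm.

```go
package main

import "fmt"

// Every application message carries the highest checkpoint initiation number
// its sender knows of; a receiver that learns of a newer initiation takes a
// tentative checkpoint after processing the message.
type Message struct {
	CkptSeq int    // piggybacked checkpoint initiation number
	Payload string
}

type Process struct {
	id      int
	ckptSeq int
	state   string
}

func (p *Process) receive(m Message) {
	p.state = m.Payload // process the application message first
	if m.CkptSeq > p.ckptSeq {
		p.ckptSeq = m.CkptSeq
		// Tentative checkpoint: kept locally and flushed to stable storage
		// later, when there is no contention for the storage.
		fmt.Printf("process %d: tentative checkpoint #%d, state=%q\n", p.id, p.ckptSeq, p.state)
	}
}

func main() {
	p := &Process{id: 2}
	p.receive(Message{CkptSeq: 1, Payload: "x=42"}) // new initiation: checkpoint taken
	p.receive(Message{CkptSeq: 1, Payload: "x=43"}) // already known: no new checkpoint
}
```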
3

Chung, Edward Chi-Fai. "Quality of service analysis for distributed multimedia systems in a local area networking environment." Ohio : Ohio University, 1996. http://www.ohiolink.edu/etd/view.cgi?ohiou1174610545.

Full text
4

Berglund, Anders. "Learning computer systems in a distributed project course : The what, why, how and where." Doctoral thesis, Uppsala universitet, Avdelningen för datorteknik, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-5754.

Full text
Abstract:
Senior university students taking an internationally distributed project course in computer systems find themselves in a complex learning situation. To understand how they experience computer systems and act in their learning situation, the what, the why, the how and the where of their learning have been studied from the students’ perspective. The what aspect concerns the students’ understanding of concepts within computer systems: network protocols. The why aspect concerns the students’ objectives in learning computer systems. The how aspect concerns how the students go about learning. The where aspect concerns the students’ experience of their learning environment. These metaphorical entities are then synthesised to form a whole. The emphasis on the students’ experience of their learning motivates a phenomenographic research approach as the core of a study that is extended with elements of activity theory. The methodological framework that is developed from these research approaches enables the researcher to retain a focus on learning, and specifically the learning of computer systems, throughout. By applying the framework, the complexity in the learning is unpacked and conclusions are drawn on the students’ learning of computer systems. The results are structural, qualitative, and empirically derived from interview data. They depict the students’ experience of their learning of computer systems in their experienced learning situation and highlight factors that facilitate learning. The results comprise sets of qualitatively different categories that describe how the students relate to their learning in their experienced learning environment. The sets of categories, grouped by the four components (what, why, how and where), are synthesised to describe the whole of the students’ experience of learning computer systems. This study advances the discussion about learning computer systems and demonstrates how theoretically anchored research contributes to teaching and learning in the field. Its multi-faceted, multi-disciplinary character invites further debate, and thus advances the field.
5

Lacks, Daniel Jonathan. "MODELING, DESIGN AND EVALUATION OF NETWORKING SYSTEMS AND PROTOCOLS THROUGH SIMULATION." Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3792.

Full text
Abstract:
Computer modeling and simulation is a practical way to design and test a system without actually having to build it. Simulation has many benefits which apply to many different domains: it reduces the cost of creating different prototypes for mechanical engineers, increases the safety of chemical engineers exposed to dangerous chemicals, speeds up the time to model physical reactions, and trains soldiers to prepare for battle. The motivation behind this work is to build a common software framework that can be used to create new networking simulators on top of an HLA-based federation for distributed simulation. The goals are to model and simulate networking architectures and protocols by developing a common underlying simulation infrastructure, and to reduce the time a developer has to spend learning the semantics of message passing and time management, freeing more time for experimentation, data collection and reporting. This is accomplished by evolving the simulation engine through three different applications that model three different types of network protocols. Computer networking is a good candidate for simulation because the Internet's rapid growth has spawned the need for new protocols and algorithms and the desire for a common infrastructure to model these protocols and algorithms. One simulation, the 3DInterconnect simulator, simulates data transmission through a hardware k-ary n-cube network interconnect. Performance results show that k-ary n-cube topologies can sustain higher traffic load than the currently used interconnects. The second simulator, the Cluster Leader Logic Algorithm Simulator, simulates an ad-hoc wireless routing protocol that uses a data distribution methodology based on the GPS-QHRA routing protocol. The CLL algorithm can realize a maximum of 45% power savings and a maximum of 25% reduced queuing delay compared to GPS-QHRA. The third simulator simulates a grid resource discovery protocol that helps Virtual Organizations find resources on a grid network on which to compute or store data. Results show that in the worst case 99.43% of the discovery messages are able to find a resource provider to use for computation. The simulation engine was then built to perform basic HLA operations. Results show successful HLA functions including creating, joining, and resigning from a federation, time management, and event publication and subscription.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Engineering PhD
6

PRABHU, SHALAKA K. "NETWORKING ISSUES IN DEFER CACHE- IMPLEMENTATION AND ANALYSIS." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1069850377.

Full text
7

Ruan, Jianhua, Han-Shen Yuh, and Koping Wang. "Spider III: A multi-agent-based distributed computing system." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2249.

Full text
Abstract:
The project, Spider III, presents the architecture and protocol of a multi-agent-based Internet distributed computing system, which provides a convenient development and execution environment for transparent task distribution, load balancing, and fault tolerance. Spider is an ongoing distributed computing project in the Department of Computer Science, California State University, San Bernardino. It was first proposed as an object-oriented distributed system by Han-Sheng Yuh in his master's thesis in 1997. It has been further developed by Koping Wang in his master's project, where he made a large contribution and implemented the Spider II system.
8

Butterfield, Ellis H. "Fog Computing with Go: A Comparative Study." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/cmc_theses/1348.

Full text
Abstract:
The Internet of Things is a recent computing paradigm, defined by networks of highly connected things – sensors, actuators and smart objects – communicating across networks of homes, buildings, vehicles, and even people. The Internet of Things brings with it a host of new problems, from managing security on constrained devices to processing never before seen amounts of data. While cloud computing might be able to keep up with current data processing and computational demands, it is unclear whether it can be extended to the requirements brought forth by the Internet of Things. Fog computing provides an architectural solution to address some of these problems by providing a layer of intermediary nodes within what is called an edge network, separating the local object networks and the Cloud. These edge nodes provide interoperability, real-time interaction, routing, and, if necessary, computational delegation to the Cloud. This paper attempts to evaluate Go, a distributed systems language developed by Google, in the context of the requirements set forth by Fog computing. Methodologies similar to those of previous literature are simulated and benchmarked against in order to assess the viability of Go in the edge nodes of a Fog computing architecture.
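As a flavour of the edge-node role described above, here is a minimal Go sketch; the cloud URL and the "heavy" heuristic are invented for illustration and are not from the thesis. Light requests are served at the edge, heavy ones are delegated to the Cloud.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// Hypothetical Cloud endpoint that heavy work is delegated to.
const cloudURL = "https://cloud.example.com/compute"

func handler(w http.ResponseWriter, r *http.Request) {
	if r.URL.Query().Get("heavy") == "true" {
		// Computational delegation: forward the request upstream.
		resp, err := http.Get(cloudURL + "?" + r.URL.RawQuery)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()
		io.Copy(w, resp.Body)
		return
	}
	// Real-time interaction stays local to the edge node.
	fmt.Fprintln(w, "handled at the edge")
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```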
9

Ikusan, Ademola A. "Collaboratively Detecting HTTP-based Distributed Denial of Service Attack using Software Defined Network." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1515067456228498.

Full text
10

Wright, Chantal E. (Chantal Elise). "Information networking for distributed semiconductor technology development." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/40205.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 57-58).
by Chantal E. Wright.
M.Eng.
11

Felker, Keith A. "Security and efficiency concerns with distributed collaborative networking environments /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03sep%5FFelker.pdf.

Full text
12

Felker, Keith A. "Security and efficiency concerns with distributed collaborative networking environments." Thesis, Monterey, California. Naval Postgraduate School, 2009. http://hdl.handle.net/10945/852.

Full text
Abstract:
Approved for public release, distribution unlimited
The progression of technology is continuous, and the technology that drives interpersonal communication is no exception. Recent technology advancements in the areas of multicast, firewalls, encryption techniques, and bandwidth availability have made the next level of interpersonal communication possible. This thesis answers why collaborative environments are important to today's online productivity. In doing so, it gives the reader a comprehensive background in distributed collaborative environments, answers how collaborative environments are employed in the Department of Defense and industry, details the effects network security has on multicast protocols, and compares collaborative solutions with a focus on security. The thesis ends by providing a recommendation for collaborative solutions to be utilized by NPS/DoD-type networks. Efficient multicast collaboration, in the framework of security, is a secondary focus of this research. As such, it takes security and firewall concerns into consideration while comparing and contrasting both multicast-based and non-multicast-based collaborative solutions.
13

Jahromi, M. Z. "Low level networking for distributed monitoring and control." Thesis, University of Bradford, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.380393.

Full text
14

Da, Silva Silvestre Guthemberg. "Designing Adaptive Replication Schemes for Efficient Content Delivery in Edge Networks." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00931562.

Full text
Abstract:
The availability of content shared online is becoming an essential element of the entire video distribution chain. To provide content to users with excellent availability and to meet their ever-growing requirements, content delivery network (CDN) operators must ensure a high quality of service, defined by metrics such as transfer rate or latency included in Service Level Agreement (SLA) contracts. Adaptive replication stands out as a very promising storage mechanism for reaching this goal. However, an important question remains open: how can these SLAs be enforced while avoiding the waste of resources? The subject of this thesis is precisely the study and evaluation of data replication systems for the new generation of hybrid CDNs, in which part of the network and storage resources come from users' equipment. To this end, we propose (i) a user resource management architecture named Caju, and (ii) three new adaptive replication systems: AREN, Hermes, and WiseReplica. Detailed simulations with Caju show that our adaptive replication systems perform very well and can easily be extended to other types of architecture. As future work, we plan to develop and evaluate a proof-of-concept prototype on PlanetLab.
15

Lurain, Sher. "Networking security : risk assessment of information systems /." Online version of thesis, 1990. http://hdl.handle.net/1850/10587.

Full text
16

Afzal, Tahir Mahmood. "Load sharing in distributed computer systems." Thesis, University of Newcastle Upon Tyne, 1987. http://hdl.handle.net/10443/2066.

Full text
Abstract:
In this thesis the problem of load sharing in distributed computer systems is investigated. Fundamental issues that need to be resolved in order to implement a load sharing scheme in a distributed system are identified and possible solutions suggested. A load sharing scheme has been designed and implemented on an existing Unix United system. The performance of this load sharing scheme is then measured for different types of programs. It is demonstrated that a load sharing scheme can be implemented on Unix United systems using the existing mechanisms provided by the Newcastle Connection, and without making any significant changes to the existing software. It is concluded that under some circumstances a substantial improvement in system performance can be obtained by the load sharing scheme.
17

Bennett, John K. "Distributed Smalltalk : inheritance and reactiveness in distributed systems /." Thesis, Connect to this title online; UW restricted, 1988. http://hdl.handle.net/1773/6923.

Full text
18

Detmold, Henry. "Communication in worldwide distributed object systems /." Title page, contents and abstract only, 2000. http://web4.library.adelaide.edu.au/theses/09PH/09phd481.pdf.

Full text
19

O'Daniel, Graham M. "HTTP 1.2: DISTRIBUTED HTTP FOR LOAD BALANCING SERVER SYSTEMS." DigitalCommons@CalPoly, 2010. https://digitalcommons.calpoly.edu/theses/302.

Full text
Abstract:
Content hosted on the Internet must appear robust and reliable to clients relying on such content. As more clients come to rely on content from a source, that source can be subjected to high levels of load. There are a number of solutions, collectively called load balancers, which try to solve the load problem through various means. All of these solutions are workarounds for dealing with problems inherent in the medium by which content is served, thereby limiting their effectiveness. HTTP, or Hypertext Transfer Protocol, is the dominant mechanism behind hosting content on the Internet through websites. The entirety of the Internet has changed drastically over its history, with the invention of new protocols, distribution methods, and technological improvements. However, HTTP has undergone only three versions since its inception in 1991, and all three versions serve content as a text stream that cannot be interrupted to allow for load balancing decisions. We propose a solution that takes existing portions of HTTP, augments them, and includes some new features in order to increase usability and management of serving content over the Internet by allowing redirection of content in-stream. This in-stream redirection introduces a new step into the client-server connection where servers can make decisions while continuing to serve content to the client. Load balancing methods can then use the new version of HTTP to make better decisions when applied to multi-server systems, making load balancing more robust, with more control over the client-server interaction.
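To make the in-stream redirection idea concrete, here is a purely hypothetical Go sketch; the marker syntax is invented and is not the wire format proposed in the thesis. A client consuming a stream sees a control line mid-transfer and would resume from the server it names.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Invented control marker: a line starting with this token tells the client
// to continue the transfer from another server.
const redirectMarker = "#REDIRECT "

func consume(stream string) {
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, redirectMarker) {
			// In a real protocol the client would reconnect here and
			// continue the download from the named server.
			fmt.Println("would resume from:", strings.TrimPrefix(line, redirectMarker))
			return
		}
		fmt.Println("content:", line)
	}
}

func main() {
	consume("part 1\npart 2\n#REDIRECT http://mirror.example.com/rest\n")
}
```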
20

Chang, Jaewoong. "A modelling and networking architecture for distributed virtual environments with multiple servers." Thesis, University of Hull, 1999. http://hydra.hull.ac.uk/resources/hull:8383.

Full text
Abstract:
Virtual Environments (VEs) attempt to give people the illusion of immersion, that they are in a computer generated world. VEs allow people to actively participate in a synthetic environment. They range from a single person running on a single computer to multiple people running on several computers connected through a network. When VEs are distributed on multiple computers across a network, we call this a Distributed Virtual Environment (DVE). Virtual Environments can benefit greatly from distributed strategies. A networked VE system based on the Client-Server model is the most commonly used paradigm in constructing DVE systems. In a Client-Server model, data can be distributed on several server computers. The server computers provide services to their own clients via networks. In some client-server models, however, a powerful server is required, or it will become a bottleneck. To reduce the amount of data and traffic maintained by a single server, the servers themselves can be distributed, and the virtual environment can be divided over a network of servers. The system described in this thesis, therefore, is based on the client-server model with multiple servers. This grouping is called a Distributed Virtual Environment System with Multiple Servers (DVM). A DVM system represents a new paradigm of distributed virtual environments based on shared 3D synthetic environments. A variety of network elements are required to support large-scale DVM systems. The network is currently the most constrained resource of the DVM system. Development of networking architectures is the key to solving the DVM challenge. Therefore, a networking architecture for implementing a DVM model is proposed. Finally, a DVM prototype system is described to demonstrate the validity of the modelling and network architecture of a DVM model.
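One simple way to picture dividing a virtual environment over a network of servers is a hash-based partition of world regions; the Go fragment below is our own illustration, not the thesis's architecture, and the server names are invented.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// serverFor hashes a named region of the virtual world to the server
// responsible for it, spreading the environment across multiple servers.
func serverFor(region string, servers []string) string {
	h := fnv.New32a()
	h.Write([]byte(region))
	return servers[int(h.Sum32())%len(servers)]
}

func main() {
	servers := []string{"vmserver-a", "vmserver-b", "vmserver-c"}
	for _, region := range []string{"forest", "castle", "harbour"} {
		fmt.Println(region, "->", serverFor(region, servers))
	}
}
```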
21

Coffield, D. T. "Network and distributed systems management." Thesis, Lancaster University, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.380320.

Full text
22

Wiseman, Simon Robert. "Garbage collection in distributed systems." Thesis, University of Newcastle Upon Tyne, 1988. http://hdl.handle.net/10443/1980.

Full text
Abstract:
The provision of system-wide heap storage has a number of advantages. However, when the technique is applied to distributed systems, automatically recovering inaccessible variables becomes a serious problem. This thesis presents a survey of such garbage collection techniques but finds that no existing algorithm is entirely suitable. A new, general purpose algorithm is developed and presented which allows individual systems to garbage collect largely independently. The effects of these garbage collections are combined, using recursively structured control mechanisms, to achieve garbage collection of the entire heap with the minimum of overheads. Experimental results show that the new algorithm recovers most inaccessible variables more quickly than a straightforward garbage collection, giving improved memory utilisation.
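A baseline that shows both what independent local collection can and cannot do is a per-node mark phase that treats remotely referenced objects as extra roots. The Go sketch below is our illustration of that standard baseline, not the thesis's algorithm; the cross-node combination it omits (for example, reclaiming cycles that span nodes) is precisely what the recursive control mechanisms above must address.

```go
package main

import "fmt"

// A heap object with local references to other objects.
type Object struct {
	id   string
	refs []*Object
}

// mark recursively flags everything reachable from o as live.
func mark(o *Object, live map[string]bool) {
	if live[o.id] {
		return
	}
	live[o.id] = true
	for _, r := range o.refs {
		mark(r, live)
	}
}

// collect marks from local roots plus an export table of objects that remote
// nodes reference; anything unmarked on this node could be swept locally.
func collect(roots, exports []*Object) map[string]bool {
	live := map[string]bool{}
	for _, o := range append(roots, exports...) {
		mark(o, live)
	}
	return live
}

func main() {
	a := &Object{id: "a"}
	b := &Object{id: "b", refs: []*Object{a}}
	_ = &Object{id: "c"} // neither rooted nor exported: local garbage
	fmt.Println(collect([]*Object{b}, nil)) // map[a:true b:true]
}
```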
23

Thanh-Son, Nguyen. "Adaptive routing for distributed multi-computer systems." Thesis, University of Westminster, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306495.

Full text
24

Butt, Wajeeh U. N. "Load balancing strategies for distributed computer systems." Thesis, Loughborough University, 1993. https://dspace.lboro.ac.uk/2134/14162.

Full text
Abstract:
The study investigates various load balancing strategies to improve the performance of distributed computer systems. A static task allocation scheme and a number of dynamic load balancing algorithms are proposed, and their performance evaluated through simulations. First, in the case of static load balancing, a precedence constrained scheduling heuristic is defined to effectively allocate task systems with high communication to computation ratios onto a given set of processors. Second, the dynamic load balancing algorithms are studied using a queueing theoretic model. For each algorithm, a different load index has been used to estimate the host loads. These estimates are utilized in simple task placement heuristics to determine the probabilities for transferring tasks between every two hosts in the system. The probabilities determined in this way are used to perform dynamic load balancing in a distributed computer system. Later, these probabilities are adjusted to include the effects of inter-host communication costs. Finally, network partitioning strategies are proposed to reduce the communication overhead of load balancing algorithms in a large distributed system environment. Several host-grouping strategies are suggested to improve the performance of load balancing algorithms. This is achieved by limiting the exchange of load information messages to within smaller groups of hosts while restricting the transfer of tasks to long distance remote hosts which would involve high communication costs. The effectiveness of the above-mentioned algorithms is evaluated by simulations. The model developed in this study for such simulations can be used in both static and dynamic load balancing environments.
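As a flavour of probabilistic task placement, the Go sketch below derives transfer probabilities from load estimates; the particular formula (probability proportional to the load difference) is our own simplification, not one of the thesis's heuristics.

```go
package main

import "fmt"

// transferProbs returns, for host i, a probability of transferring a task to
// each other host, proportional to how much more loaded i is than that host,
// so work flows from busy hosts towards idle ones.
func transferProbs(loads []float64, i int) []float64 {
	probs := make([]float64, len(loads))
	total := 0.0
	for j, lj := range loads {
		if j != i && loads[i] > lj {
			probs[j] = loads[i] - lj
			total += probs[j]
		}
	}
	if total == 0 {
		return probs // i is not overloaded relative to anyone: keep the task
	}
	for j := range probs {
		probs[j] /= total
	}
	return probs
}

func main() {
	// Host 0 is busiest: it favours the most lightly loaded host 1.
	fmt.Println(transferProbs([]float64{0.9, 0.2, 0.5}, 0)) // [0 0.636... 0.363...]
}
```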
25

Veillard, Daniel. "Conception et réalisation d'un protocole de diffusion fiable pour réseaux locaux." Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00005020.

Full text
Abstract:
This thesis addresses the problem of support for cooperative distributed applications. The notion of process groups to which messages are delivered is one of the fundamental mechanisms for building such applications. The state of the art presents the different semantics for such protocols and the main implementations. The protocol chosen for implementation is a version derived from the Amoeba protocol, modified to support opaque groups. Various optimizations have also been added. The implementation is based on a generic layer that abstracts away system dependencies and eases the development of new protocols. The initial implementation was done in user mode on the Mach 3.0 microkernel and was followed by ports to various Unix platforms. The thesis analyses in detail the performance of the protocol and its evolution according to numerous criteria. Finally, a fine-grained study of the execution time of the protocol implemented in user mode validates the implementation choices.
26

Zhang, Honglei. "BYZANTINE FAULT TOLERANCE FOR DISTRIBUTED SYSTEMS." Cleveland State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=csu1402168557.

Full text
27

Shivaratri, Niranjan G. "Adaptive load distributing in distributed systems /." The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu148785431487161.

Full text
28

Lambiri, Cristian. "Temporal logic models for distributed systems." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/10056.

Full text
Abstract:
Since the beginning of the 1980s, the way computer systems are conceived has changed dramatically. This is a direct result of the appearance, on a large scale, of personal computers and engineering workstations. As a result, networks of independent systems have appeared. This thesis presents a formal specification framework that can be used in the design of distributed systems. The abstract models that are presented are based on a systemic view of distributed systems and discrete event systems. Two base abstract models, called deterministic discrete event systems (DDES) and discrete event automata (DEA), are presented. For the DEA, the series and parallel compositions as well as the feedback connection are defined. Universal algebra is employed to study the parallel composition of DEAs. From the DDES/DEA an abstract model for distributed systems is obtained. Subsequently, linear time temporal logic is modified for use with the chosen abstract model of distributed systems. The logic is described in three aspects: syntax, semantics and axiomatics. The syntax is modified by the addition of two operators. The semantics of the logic is given over the abstract models. Five axioms are added to the axiomatic system for the two new operators. A programming language called TLL, based on the theoretical framework, links the theory with practice. The syntax and semantics of the programming language are presented. Finally, an example of modeling in the framework is given.
29

Crane, John Stephen. "Dynamic binding for distributed systems." Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.484185.

Full text
30

Allison, Colin. "Systems support for distributed learning environments." Thesis, University of St Andrews, 2003. http://hdl.handle.net/10023/14519.

Full text
Abstract:
This thesis contends that the growing phenomena of multi-user networked "learning environments" should be treated as distributed interactive systems and that their developers should be aware of the systems and networks issues involved in their construction and maintenance. Such environments are henceforth referred to as distributed learning environments, or DLEs. Three major themes are identified as part of systems support: i) shared resource coherence in DLEs; ii) Quality of Service for the end-users of DLEs; and iii) the need for an integrating framework to develop, deploy and manage DLEs. The thesis reports on several distinct implementations and investigations that are each linked by one or more of those themes. Initially, responsiveness and coherence emerged as potentially conflicting requirements, and although a system was built that successfully resolved this conflict it proved difficult to move from the "clean room" conditions of a research project into a real world learning context. Accordingly, subsequent systems adopted a web-based approach to aid deployment in realistic settings. Indeed, production versions of these systems have been used extensively in credit-bearing modules in several Scottish Universities. Interactive responsiveness then emerged as a major Quality of Service issue in its own right, and motivated a series of investigations into the sources of delay, as experienced by end users of web-oriented distributed learning environments. Investigations into this issue provided insight into the nature of web-oriented interactive distributed learning and highlighted the need to be QoS-aware. As the volume and the range of usage of distributed learning applications increased the need for an integrating framework emerged. This required identifying and supporting a wide variety of educational resource types and also the key roles occupied by users of the system, such as tutors, students, supervisors, service providers, administrators, examiners. The thesis reports on the approaches taken and lessons learned from researching, designing and implementing systems which support distributed learning. As such, it constitutes a documented body of work that can inform the future design and deployment of distributed learning environments.
31

Meth, Halli Elaine. "DecaFS: A Modular Distributed File System to Facilitate Distributed Systems Education." DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1206.

Full text
Abstract:
Data quantity, speed requirements, reliability constraints, and other factors encourage industry developers to build distributed systems and use distributed services. Software engineers are therefore exposed to distributed systems and services daily in the workplace. However, distributed computing is hard to teach in Computer Science courses due to the complexity distribution brings to all problem spaces. This presents a gap in education where students may not fully understand the challenges introduced with distributed systems. Teaching students distributed concepts would help better prepare them for industry development work. DecaFS, Distributed Educational Component Adaptable File System, is a modular distributed file system designed for educational use. The goal of the system is to teach distributed computing concepts to undergraduate and graduate level students by allowing them to develop small, digestible portions of the system. The system is broken up into layers, and each layer is broken up into modules so that students can build or modify different components in small, assignment-sized portions. Students can replace modules or entire layers by following the DecaFS APIs and recompiling the system. This allows the behavior of the DFS (Distributed File System) to change based on student implementation, while providing base functionality for students to work from. Our implementation includes a code base of core DecaFS Modules that students can work from and basic implementations of non-core DecaFS Modules. Our basic non-core modules can be modified to implement more complex distribution techniques without modifying core modules. We have shown the feasibility of developing a modular DFS, while adhering to requirements such as configurable sizes (file, stripe, chunk) and support of multiple data replication strategies.
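The configurable-size requirement reduces to simple offset arithmetic. The Go sketch below is illustrative only (the constants and names are ours, not the DecaFS API): it maps a byte offset in a file to its stripe, its chunk within the stripe, and the offset within that chunk.

```go
package main

import "fmt"

// Illustrative sizes; in DecaFS these would be configurable.
const (
	stripeSize = 4096
	chunkSize  = 1024
)

// locate maps a file byte offset to (stripe, chunk-within-stripe,
// offset-within-chunk), the arithmetic a striped DFS performs on every access.
func locate(offset int64) (stripe, chunk, within int64) {
	stripe = offset / stripeSize
	rest := offset % stripeSize
	return stripe, rest / chunkSize, rest % chunkSize
}

func main() {
	fmt.Println(locate(5300)) // stripe 1, chunk 1, byte 180
}
```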
32

Chapman, Martin David. "Access to services in distributed systems." Thesis, Open University, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.328018.

Full text
33

Phelps, Andrew Jacob. "ink - An HTTP Benchmarking Tool." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98918.

Full text
Abstract:
The Hypertext Transfer Protocol (HTTP) is one of the foundations of the modern Internet. Because HTTP servers may be subject to unexpected periods of high load, developers use HTTP benchmarking utilities to simulate the load generated by users. However, many of these tools do not report performance details at a per-client level, which deprives developers of crucial insights into a server's performance capabilities. In this work, we present ink, an HTTP benchmarking tool that enables developers to better understand server performance. ink provides developers with a way of visualizing the level of service that each individual client receives. It does this by recording a trace of events for each individual simulated client. We also present a GUI that enables users to explore and visualize the data that is generated by an HTTP benchmark. Lastly, we present a method for running HTTP benchmarks that uses a set of distributed machines to scale up the achievable load on the benchmarked server. We evaluate ink by performing a series of case studies to show that ink is both performant and useful. We validate ink's load generation abilities within the context of a single machine and when using a set of distributed machines. ink is shown to be capable of simulating hundreds of thousands of HTTP clients and presenting per-client results through the ink GUI. We also perform a set of HTTP benchmarks where ink is able to highlight performance issues and differences between server implementations. We compare servers like NGINX and Apache and highlight their differences using ink.
Master of Science
The World Wide Web (WWW) uses the Hypertext Transfer Protocol to send web content such as HTML pages or video to users. The servers providing this content are called HTTP servers. Sometimes, the performance of these HTTP servers is compromised because a large number of users request documents at the same time. To prepare for this, server maintainers test how many simultaneous users a server can handle by using benchmarking utilities. These benchmarking utilities work by simulating a set of clients. Currently, these tools focus only on the number of requests that a server can process per second. Unfortunately, this coarse-grained metric can hide important information, such as the level of service that individual clients received. In this work, we present ink, an HTTP benchmarking utility we developed that focuses on reporting information for each simulated client. Reporting data in this way allows the developer to see how well each client was served during the benchmark. We achieve this by constructing data visualizations that include a set of client timelines. Each of these timelines represents the service that one client received. We evaluated ink through a series of case studies. These focus on the performance of the utility and the usefulness of the visualizations produced by ink. Additionally, we deployed ink in Virginia Tech's Computer Systems course. The students were able to use the tool and took a survey pertaining to their experience with the tool.
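The per-client idea can be sketched in a few lines of Go; this is our illustration of the concept, not ink's actual implementation, and the target URL is an assumed local test server. Each simulated client keeps its own latency timeline instead of folding everything into one requests-per-second number.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	const clients, requests = 10, 5
	target := "http://localhost:8080/" // assumed local server under test
	traces := make([][]time.Duration, clients)
	var wg sync.WaitGroup
	for c := 0; c < clients; c++ {
		wg.Add(1)
		go func(c int) { // one goroutine per simulated client
			defer wg.Done()
			for r := 0; r < requests; r++ {
				start := time.Now()
				if resp, err := http.Get(target); err == nil {
					resp.Body.Close()
				}
				traces[c] = append(traces[c], time.Since(start))
			}
		}(c)
	}
	wg.Wait()
	for c, t := range traces {
		fmt.Printf("client %d: %v\n", c, t) // one latency timeline per client
	}
}
```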
34

Rizvanovic, Larisa. "Resource Management Framework for Distributed Heterogeneous Systems." Licentiate thesis, Västerås : School of Innovation, Design and Engineering [Akademin för innovation, design och teknik], Mälardalen University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-585.

Full text
35

Tosun, Ali Saman. "Security mechanisms for multimedia networking." Columbus, OH : Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1054700514.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2003.
Title from first page of PDF file. Document formatted into pages; contains xvi, 135 p.: ill. Includes abstract and vita. Co-advisors: Wu-Chi Feng, Dong Xuan, Dept. of Computer and Information Science. Includes bibliographical references (p. 129-135).
36

Merritt, John W. "Distributed file systems in an authentication system." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9938.

Full text
37

Lu, Rong 1969. "Detecting race conditions in distributed concurrent systems." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33422.

Full text
Abstract:
Nondeterminism makes distributed concurrent systems difficult to test, monitor and control. An execution of a message-passing system is nondeterministic when message races exist. A message race occurs when, for example, multiple conflicting requests from different clients compete to be executed within a server entity. Therefore, techniques for race detection by tracing messages are an essential part of a test and debugging tool for distributed concurrent systems. In this thesis, race conditions are investigated in the context of the Testing and Monitoring Tool (TMT). TMT is a monitoring and testing tool for distributed software systems, developed by the Department of Software Engineering, Corporate Technology (ZT SE), Siemens AG, Munich, Germany.
An existing method is used to determine the race set for each receive event in the trace from a single execution. Race sets indicate the potential races that may happen in a system run. The execution of the program is deterministic if and only if all race sets of the program execution are empty. The method is detailed, implemented in Java and integrated in the TMT tool.
A trace comparison method is developed that determines whether races actually occurred during two particular executions of the same system. If the race set for a receive event in the first trace is equal to the race set of the matching receive event in the second trace, a race did not happen for this receive event; otherwise, a race happened. The method is also implemented in Java and integrated in the TMT tool.
The GUI of the developed prototype tool is presented and the tool is illustrated on an example.
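The happened-before test underlying this kind of race detection is commonly implemented with vector clocks; the Go sketch below shows that standard test (the TMT race-set computation itself is more involved, and this is our illustration, not its code): two events can race only if neither happened before the other.

```go
package main

import "fmt"

// concurrent reports whether two events with vector timestamps a and b are
// incomparable under the happened-before order, i.e. neither precedes the
// other, which is the precondition for a message race.
func concurrent(a, b []int) bool {
	aBefore, bBefore := false, false
	for i := range a {
		if a[i] < b[i] {
			aBefore = true
		}
		if b[i] < a[i] {
			bBefore = true
		}
	}
	return aBefore && bBefore
}

func main() {
	fmt.Println(concurrent([]int{2, 0, 1}, []int{1, 1, 1})) // true: a race is possible
	fmt.Println(concurrent([]int{1, 0, 0}, []int{2, 1, 0})) // false: first happened before second
}
```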
38

Vijayakumar, Nithya Nirmal. "Data management in distributed stream processing systems." [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3278228.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2007.
Source: Dissertation Abstracts International, Volume: 68-09, Section: B, page: 6093. Adviser: Beth Plale. Title from dissertation home page (viewed May 9, 2008).
39

He, Jun. "Customizable multi-dimensional QoS in distributed systems." Diss., The University of Arizona, 2004. http://hdl.handle.net/10150/280705.

Full text
Abstract:
As computer-based services become pervasive, their attributes related to quality of service (QoS) such as reliability, security, and timeliness become both more important and more difficult to achieve. This is especially true for distributed services, where the service is accessed across a network. In such scenarios, the execution environment is dynamic, and a service must often support a diverse set of users, each with different multi-dimensional QoS requirements. Although many new distributed service platforms such as CORBA, Java RMI, and Web Services have emerged, and many QoS techniques have been developed, no existing approach provides a complete solution to these issues. This dissertation addresses these challenges by introducing two novel QoS architectures that facilitate customizable multi-dimensional QoS in distributed service platforms. The first, called CQoS, is designed for platforms in which most of the functionality is implemented by the endpoints, such as CORBA and Java RMI. CQoS consists of two parts: application- and platform-dependent interceptors and generic QoS components. The generic QoS components are implemented using Cactus, a system for building highly configurable protocols and services in distributed systems. The CQoS architecture is described, along with experimental results for a prototype constructed on Linux. Compared with other approaches, CQoS emphasizes portability across different object platforms, while the use of Cactus allows custom combinations of attributes to be realized on a per-session basis in a straightforward way. The second architecture, called the QBox architecture, is designed for platforms in which most of the functionality is implemented in the network, such as mobile service platforms where endpoints may be resource-limited mobile devices. In addition to QoS components implemented using Cactus, the architecture includes policy components that evaluate each request's requirements and dynamically determines an appropriate execution strategy. A specific strategy based on QoS-aware request ordering and replica selection algorithms that manage the tradeoffs between the reliability and timeliness requirements of different requests is presented and evaluated. The architecture has been integrated into an experimental version of iMobile, a mobile service platform from AT&T. The design and implementation of the architecture are described, together with experimental results from the iMobile prototype.
40

Pang, Gene. "Scalable Transactions for Scalable Distributed Database Systems." Thesis, University of California, Berkeley, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3733329.

Full text
Abstract:

With the advent of the Internet and Internet-connected devices, modern applications can experience very rapid growth of users from all parts of the world. A growing user base leads to greater usage and large data sizes, so scalable database systems capable of handling the great demands are critical for applications. With the emergence of cloud computing, a major movement in the industry, modern applications depend on distributed data stores for their scalable data management solutions. Many large-scale applications utilize NoSQL systems, such as distributed key-value stores, for their scalability and availability properties over traditional relational database systems. By simplifying the design and interface, NoSQL systems can provide high scalability and performance for large data sets and high volume workloads. However, to provide such benefits, NoSQL systems sacrifice traditional consistency models and support for transactions typically available in database systems. Without transaction semantics, it is harder for developers to reason about the correctness of the interactions with the data. Therefore, it is important to support transactions for distributed database systems without sacrificing scalability.

In this thesis, I present new techniques for scalable transactions for scalable database systems. Distributed data stores need scalable transactions to take advantage of cloud computing, and to meet the demands of modern applications. Traditional techniques for transactions may not be appropriate in a large, distributed environment, so in this thesis, I describe new techniques for distributed transactions, without having to sacrifice traditional semantics or scalability.

I discuss three facets to improving transaction scalability and support in distributed database systems. First, I describe a new transaction commit protocol that reduces the response times for distributed transactions. Second, I propose a new transaction programming model that allows developers to better deal with the unexpected behavior of distributed transactions. Lastly, I present a new scalable view maintenance algorithm for convergent join views. Together, the new techniques in this thesis contribute to providing scalable transactions for modern, distributed database systems.

41

Shands, Deborah Ann. "A formal method for classifying distributed systems /." The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487854314871212.

Full text
42

Ruiz, Gerard. "Distributed Data Management in Internet of Things Networking Environments : IOTA Tangle and Bitcoin Blockchain Distributed Ledger Technologies." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-77359.

Full text
Abstract:
Distributed ledger technology (DLT) is one of the latest in a long list of digital technologies, which appear to be heading towards a new industrial revolution. DLT has become very popular with the publication of the Bitcoin Blockchain in 2008. However, when we consider its suitability for dynamic networking environments, such as the Internet of Things, issues like transaction fees, scalability, and offline accessibility have not been resolved. The IOTA Foundation has designed the IOTA protocol, which is the data and value transfer layer for the Machine Economy. The IOTA protocol uses an alternative blockless Blockchain which claims to solve the previous problems: the Tangle. This thesis first inquires into the theoretical concepts of both technologies, Tangle and Blockchain, to understand them and identify the reasons they are or are not compatible with Internet of Things networking environments. After the analysis, the thesis focuses on the proposed implementation as a solution to address the connectivity issue suffered by the IOTA network. The answer to the problem is the development of a Neighbor Discovery algorithm, which has been designed to fulfill the requirements demanded by the IOTA application. Dealing with the IOTA network setup can be very interesting for the community, which is looking for new improvements at each release. Testing the solution in a peer-to-peer protocol simulator (PeerSim), with different networking scenarios, allowed us to get valuable and more realistic information. Thus, after analyzing the results, we were able to determine the appropriate IOTA network configuration to build a more reliable and long-lasting network.
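As a flavour of what one neighbour-discovery step might look like, the Go sketch below merges a peer's neighbour list into a node's own, up to a fixed cap; it is a toy illustration of the general idea, not the thesis's algorithm or IOTA's implementation, and the cap of 7 is an arbitrary assumption.

```go
package main

import "fmt"

// Assumed cap on how many neighbours a node keeps.
const maxNeighbors = 7

// merge adds previously unknown peers from another node's neighbour list,
// skipping ourselves and duplicates, until the cap is reached.
func merge(mine, theirs []string, self string) []string {
	seen := map[string]bool{self: true}
	for _, n := range mine {
		seen[n] = true
	}
	for _, n := range theirs {
		if len(mine) >= maxNeighbors {
			break
		}
		if !seen[n] {
			mine = append(mine, n)
			seen[n] = true
		}
	}
	return mine
}

func main() {
	// Node "a" learns of "d"; "a" is itself and "c" is already known.
	fmt.Println(merge([]string{"b", "c"}, []string{"a", "c", "d"}, "a")) // [b c d]
}
```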
43

Saia, Jared. "Algorithms for managing data in distributed systems /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/6941.

Full text
44

Fossa, Halldor. "Interactive configuration management for distributed systems." Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265615.

Full text
45

De, Prisco Roberto. "On building blocks for distributed systems." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/87155.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2000.
"December 1999."
Includes bibliographical references (p. 174-180).
by Roberto De Prisco.
Ph.D.
46

Ajmani, Sameer 1976. "Automatic software upgrades for distributed systems." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28717.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 156-164).
Upgrading the software of long-lived, highly-available distributed systems is difficult. It is not possible to upgrade all the nodes in a system at once, since some nodes may be unavailable and halting the system for an upgrade is unacceptable. Instead, upgrades may happen gradually, and there may be long periods of time when different nodes are running different software versions and need to communicate using incompatible protocols. We present a methodology and infrastructure that address these challenges and make it possible to upgrade distributed systems automatically while limiting service disruption. Our methodology defines how to enable nodes to interoperate across versions, how to preserve the state of a system across upgrades, and how to schedule an upgrade so as to limit service disruption. The approach is modular: defining an upgrade requires understanding only the new software and the version it replaces. The upgrade infrastructure is a generic platform for distributing and installing software while enabling nodes to interoperate across versions. The infrastructure requires no access to the system source code and is transparent: node software is unaware that different versions even exist. We have implemented a prototype of the infrastructure called Upstart that intercepts socket communication using a dynamically-linked C++ library. Experiments show that Upstart has low overhead and works well for both local-area and Internet systems.
by Sameer Ajmani.
Ph.D.
47

Long, Brian S. "Implementation of a distributed time based simulation of underwater acoustic networking using Java." Thesis, Monterey, California. Naval Postgraduate School, 2006. http://hdl.handle.net/10945/2571.

Full text
Abstract:
Approved for public release, distribution unlimited
Underwater Acoustic Networks (UANs) have two immutable obstacles to overcome: the hostile environment in which they must operate, and the combination of the propagation speed of sound in water, the communication latency this produces, and the dynamic nature of the water column with respect to its attenuation of the sound signal. These combined issues make it very costly and time-consuming to set up a UAN just to test new protocols that may or may not be able to mitigate the limitations of this environment. There exists, then, a need for an ability to test a new protocol without the overhead of creating a physical UAN. The goal of this thesis is to provide a more hospitable, adaptable, flexible, and easily usable tool with which to test new protocols for UANs, as well as providing the ability for the Physics field to test new physical-layer encodings. This simulation environment will provide the glue, or bridge, between the two disciplines by working as a common tool for both.
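The latency obstacle is easy to quantify: sound travels in seawater at roughly 1500 m/s, so propagation delay dominates any acoustic link. A small Go helper makes the point (the nominal speed is an approximation; the real value varies with the water column, as the abstract notes).

```go
package main

import "fmt"

// propagationDelay returns the one-way acoustic delay in seconds over the
// given distance, using a nominal speed of sound in seawater.
func propagationDelay(distanceMeters float64) float64 {
	const soundSpeed = 1500.0 // m/s, nominal; varies with depth and temperature
	return distanceMeters / soundSpeed
}

func main() {
	fmt.Printf("one-way delay over 3 km: %.1f s\n", propagationDelay(3000)) // 2.0 s
}
```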
48

Baba, Mohd Dani. "Fault tolerance in distributed real-time computer systems." Thesis, University of Sussex, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307238.

Full text
Abstract:
A distributed real-time computer system consists of several processing nodes interconnected by communication channels. In a safety-critical application, the real-time system should maintain timely and dependable services despite component failures or transient overloads due to changes in the application environment. When a component fails or an overload occurs, the hard real-time tasks may miss their timing constraints, and it is desired that the system degrade in a graceful, predictable manner. The approach adopted to the problem in this thesis is to integrate resource scheduling with a fault tolerance mechanism. This thesis provides a basis for the modelling and design of an adaptive fault-tolerant distributed real-time computer system. The main issue is to determine a priori the worst-case timing response of the given hard real-time tasks. In this thesis the worst-case timing responses of the given hard real-time tasks of a distributed system using the Controller Area Network (CAN) communication protocol are evaluated to determine whether they can satisfy their timing deadlines. In a hard real-time system, task scheduling is the most critical problem since the scheduling strategy ensures that tasks meet their deadlines. In this thesis several fixed-priority scheduling schemes are evaluated to select the most efficient scheduler in terms of bus utilisation and access time. Static scheduling is used as it can be considered the most appropriate for safety-critical applications, since schedulability can easily be verified. Furthermore, for a typical industrial application, the hard real-time system has to be adaptable to accommodate changes in the system or application requirements. This goal of flexibility can be achieved by integrating the static scheduler, using an imprecise computation technique, with the fault tolerant mechanism which uses active redundant components.
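The worst-case timing question is classically answered by a fixed-point iteration. The Go sketch below implements the standard fixed-priority response-time recurrence in a simplified form (a single blocking term, no jitter, deadline equal to period); the thesis's exact CAN equations may differ, so treat this as an illustration of the technique only.

```go
package main

import (
	"fmt"
	"math"
)

type Task struct {
	C float64 // worst-case transmission/computation time
	T float64 // period
}

// wcrt iterates r = C_i + B + sum over higher-priority tasks of
// ceil(r/T_j)*C_j until it converges, returning the worst-case response time
// of tasks[i] (tasks[0..i-1] have higher priority, B is blocking time),
// or -1 if the task cannot meet its deadline.
func wcrt(tasks []Task, i int, B float64) float64 {
	r := tasks[i].C + B
	for {
		next := tasks[i].C + B
		for j := 0; j < i; j++ {
			next += math.Ceil(r/tasks[j].T) * tasks[j].C
		}
		if next == r {
			return r
		}
		if next > tasks[i].T { // exceeds the period, taken here as the deadline
			return -1
		}
		r = next
	}
}

func main() {
	tasks := []Task{{C: 1, T: 4}, {C: 2, T: 10}}
	fmt.Println(wcrt(tasks, 1, 0.5)) // 3.5: C(2) + B(0.5) + one 1-unit preemption
}
```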
49

Hansen, André Skoglund. "Distributed Hosting of Systems using donated Computer Resources." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-24453.

Full text
Abstract:
To host a value-added Internet service, like a web page with a large user base, an organization either has to rely on cash donations or it has to monetize the service. The monetization of the service often means degrading the quality of the service or making it less appealing. This is why this project introduces a new business model where services can be run by the users themselves, by letting them donate computer resources. This in turn should lower the operating cost of the service. The new business model is introduced by developing a framework that allows developers to implement their services in a way that lets dedicated users participate in hosting the service. First the framework was developed, and then the framework was used to develop an example implementation of a distributed web page. For it to be realistic that users would be able to partake in an operation like this, a project goal was to make sure that the technical demands on users are low. The framework is written with this in mind, and the simplicity achieved is presented at the end of the report.
50

Bass, Julian M. "Voting in real-time distributed computer control systems." Thesis, University of Sheffield, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364312.

Full text