
Dissertations / Theses on the topic 'Computer Networking'



Consult the top 50 dissertations / theses for your research on the topic 'Computer Networking.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Alsebae, Alaa. "Network coding for computer networking." Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/72647/.

Full text
Abstract:
Conventional communication networks route data packets in a store-and-forward mode. A router buffers received packets and forwards them intact towards their intended destination. Network Coding (NC), however, generalises this method by allowing the router to perform algebraic operations on the packets before forwarding them. The purpose of NC is to improve network performance so that the network achieves its maximum capacity, also known as the max-flow min-cut bound. NC has become very well established in the field of information theory; however, practical implementations in real-world networks are yet to be explored. In this thesis, new implementations of NC are brought forward. The effect of NC on flow and error control protocols and on queuing over computer networks is investigated by establishing and designing a mathematical and simulation framework. One goal of this investigation is to understand how the NC technique can reduce the number of packets required to acknowledge the reception of those sent over the network while error-control schemes are employed. Another goal is to control network queuing stability by reducing the number of packets required to convey a set of information. A custom-built simulator based on SimEvents® has been developed in order to model several scenarios within this approach.

The work in this thesis is divided into two key parts. The objective of the first part is to study the performance of communication networks employing error control protocols when NC is adopted. In particular, two main Automatic Repeat reQuest (ARQ) schemes are invoked, namely Stop-and-Wait (SW) and Selective Repeat (SR) ARQ. Results show that in unicast point-to-point communication, the proposed NC scheme offers an increase in throughput over traditional SW ARQ of between 2.5% and 50.5% at each link, with negligible decoding delay. Additionally, in a Butterfly network, SR ARQ employing NC achieves a throughput gain of between 22% and 44% over traditional SR ARQ when the number of incoming links to the intermediate node varies between 2 and 5. Moreover, in an extended Butterfly network, NC offered a throughput increase of up to 48% under an error-free scenario and 50% in the presence of errors.

Despite the extensive research on synchronous NC performance in various fields, little has been said about its queuing behaviour. One assumption is that packets are served following a Poisson distribution. The packets from different streams are coded prior to being served and then exit through only one stream. This study determines the arrival distribution that coded packets follow at the serving node. In general, this leads to the study of queuing systems of the G/M/1 type. Hence, the objective of the second part of this study is twofold: to determine the distribution of the coded packets and to estimate the waiting time coded packets face before being completely served. Results show that NC brings a new solution for queuing stability, as evidenced by the small waiting time the coded packets spend in the intermediate node's queue before being served. This work is further enhanced by studying the server utilization in the traditional routing and NC scenarios. An NC-based M/M/1 queue with finite capacity K is also analysed to investigate the packet loss probability for both scenarios. Based on the results achieved, the utilization of NC in error-prone and long-propagation-delay networks is recommended. Additionally, since the work provides an insightful prediction of particular networks' queuing behaviour, employing synchronous NC can bring a solution for the stability of systems with packet-controlled sources and limited input buffers.
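To make the coding operation concrete, here is a minimal illustration of the NC idea on the classic Butterfly topology (our own sketch, not code from the thesis; the packet contents are invented): an intermediate node XORs two packets so that a single coded transmission on the shared bottleneck link lets each receiver recover the packet it did not hear directly, which is where the throughput gains over plain store-and-forward ARQ reported above come from.

# Illustrative sketch of XOR network coding on the butterfly topology (not from the thesis).
def xor_bytes(x: bytes, y: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(p ^ q for p, q in zip(x, y))

pkt_a = b"packet-A"                      # heard directly by receiver 1
pkt_b = b"packet-B"                      # heard directly by receiver 2

coded = xor_bytes(pkt_a, pkt_b)          # single transmission on the bottleneck link

recovered_b = xor_bytes(coded, pkt_a)    # receiver 1: coded XOR a = b
recovered_a = xor_bytes(coded, pkt_b)    # receiver 2: coded XOR b = a
assert recovered_a == pkt_a and recovered_b == pkt_b

With plain store-and-forward, the bottleneck link would have to carry both packets instead of one.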
2

Wright, Chantal E. (Chantal Elise). "Information networking for distributed semiconductor technology development." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/40205.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 57-58).
by Chantal E. Wright.
M.Eng.
3

Lurain, Sher. "Networking security : risk assessment of information systems /." Online version of thesis, 1990. http://hdl.handle.net/1850/10587.

Full text
4

Yang, Teng. "Connected Car Networking." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1544728665967784.

Full text
5

Kamppi, Tomi. "ICT System for Courses in Computer Networking." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-53605.

Full text
Abstract:
The project focuses on renewing the current ICT-system in the 8th floor server room at KTH, Kista. The current ICT-system, its surrounding administrative tasks and its user functionality are described, and a new, improved ICT-system proposal is given. The current and proposed systems are compared. The current ICT-system gives users access to 16 Intel E7501 servers with 2.4 GHz Xeon processors and 1.5-2 GB of RAM, and 16 SUN Fire v120 servers. In the proposed ICT-system these servers are replaced with hardware capable of running 64-bit software. The future ICT-system proposal is based on VMware vSphere 4 and surrounding VMware management software. The solution focuses on providing more flexible and easier administration of the environment, as well as more possibilities for the users, for example in the form of virtual networking configurations. The server room has networking equipment, most notably HP switches, which are kept in the proposed system. The servers that support the server room are also incorporated into the proposed system; these supporting servers provide the server room with all surrounding services. Due to hardware incompatibilities, the proposed ICT-system has not yet been implemented.
6

Muhammad, Arshad. "A gateway solution for accessing networking appliances." Thesis, Liverpool John Moores University, 2009. http://researchonline.ljmu.ac.uk/5946/.

Full text
7

Tosun, Ali Saman. "Security mechanisms for multimedia networking." Columbus, OH : Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1054700514.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2003.
Title from first page of PDF file. Document formatted into pages; contains xvi, 135 p.: ill. Includes abstract and vita. Co-advisors: Wu-Chi Feng, Dong Xuan, Dept. of Computer and Information Science. Includes bibliographical references (p. 129-135).
8

Paradis, Thomas. "Software-Defined Networking." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-143882.

Full text
Abstract:
Software-Defined Networking (SDN) is a paradigm in which routing decisions are taken by a control layer. In contrast to conventional network structures, the control plane and the forwarding plane are separated and communicate through standard protocols such as OpenFlow. Historically, network management was based on a layered approach, with each layer isolated from the others. SDN proposes a radically different approach by bringing the management of all these layers together into a single controller. It is therefore easy to obtain a unified management policy despite the complexity of current network requirements, while ensuring performance through the use of dedicated devices for the forwarding plane. Such an upheaval can meet the current challenges of managing an increasingly dynamic network, imposed by the development of cloud computing and the increased mobility of everyday devices. Many solutions have emerged, but not all of them address the same issues, and they are not necessarily usable in a real environment. The purpose of this thesis is to study and report on existing solutions and technologies, as well as to conceive a demonstration prototype that presents the benefits of this approach. This project also focuses on an analysis of the risks posed by these technologies and the possible solutions.
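As a rough illustration of the control-plane/forwarding-plane split described above (a toy model of our own, not tied to OpenFlow or any real controller API; switch names and addresses are invented), the sketch below shows a controller installing match-on-destination rules into per-switch tables, after which forwarding is a purely local lookup.

# Toy model of SDN rule installation and lookup (illustrative only).
from typing import Dict, Optional

flow_tables: Dict[str, Dict[str, int]] = {"s1": {}, "s2": {}}  # switch -> {dst: out_port}

def install_rule(switch: str, dst: str, out_port: int) -> None:
    """Control plane: push a match(dst) -> output(port) entry to a switch."""
    flow_tables[switch][dst] = out_port

def forward(switch: str, dst: str) -> Optional[int]:
    """Data plane: look up the output port; None models a table miss sent to the controller."""
    return flow_tables[switch].get(dst)

# Controller decision: reach 10.0.0.2 via s1 port 2, then s2 port 1.
install_rule("s1", "10.0.0.2", 2)
install_rule("s2", "10.0.0.2", 1)

print(forward("s1", "10.0.0.2"))   # 2 -> forwarded without contacting the controller
print(forward("s1", "10.0.0.9"))   # None -> table miss, punted to the controller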
9

Ren, Zhen. "Towards Confident Body Sensor Networking." W&M ScholarWorks, 2012. https://scholarworks.wm.edu/etd/1539623606.

Full text
Abstract:
With recent technological advances in wireless communication and lightweight low-power sensors, the Body Sensor Network (BSN) has become possible. More and more researchers are interested in developing numerous novel BSN applications, such as remote health/fitness monitoring, military and sport training, interactive gaming, personal information sharing, and secure authentication. Despite the unstable wireless communication, various confidence requirements are placed on the BSN networking service. This thesis aims to provide Quality of Service (QoS) solutions for BSN communication in order to achieve the required confidence goals. We develop communication quality solutions to satisfy confidence requirements at both the communication and application levels, in single and multiple BSNs. First, we build communication QoS, aiming to provide service quality guarantees in terms of throughput and time delay at the communication level. More specifically, considering the heterogeneous BSN platform in a real deployment, we develop a radio-agnostic solution for wireless resource scheduling in the BSN. Second, we provide a QoS solution for both inter- and intra-BSN communications when more than one BSN is involved. Third, we define application fidelity for two neurometric applications as examples, and bridge the connection between communication QoS and application QoS.
10

Yi, Qing. "Power-Aware Datacenter Networking and Optimization." PDXScholar, 2017. https://pdxscholar.library.pdx.edu/open_access_etds/3474.

Full text
Abstract:
Present-day datacenter networks (DCNs) are designed to achieve full bisection bandwidth in order to provide high network throughput and server agility. However, the average utilization of typical DCN infrastructure is below 10% for significant time intervals. As a result, energy is wasted during these periods. In this thesis we analyze the traffic behavior of datacenter networks using traces as well as simulated models. Based on the insight developed, we present techniques to reduce energy waste by making energy use scale linearly with load. The solutions developed are analyzed via simulations, formal analysis, and prototyping. The impact of our work is significant because the energy savings we obtain for the networking infrastructure of DCNs are near optimal.

A key finding of our traffic analysis is that network switch ports within the DCN are grossly under-utilized. Therefore, the first solution we study is to modify the routing within the network to force most traffic to the smallest of switches. This increases the hop count for the traffic but enables the powering off of many switch ports. The exact extent of energy savings is derived and validated using simulations. An alternative strategy we explore in this context is to replace about half the switches with fewer switches that have higher port density. This has the effect of enabling even greater traffic consolidation, thus enabling even more ports to sleep. Finally, we explore a third approach in which we begin with end-to-end traffic models and incrementally build a DCN topology that is optimized for that model. In other words, the network topology is optimized for the potential use of the datacenter. This approach makes sense because, as other researchers have observed, the traffic in a datacenter is heavily dependent on the primary use of the datacenter.

A second line of research we undertake is to merge traffic in the analog domain prior to feeding it to switches. This is accomplished by use of a passive device we call a merge network. Using a merge network enables us to attain linear scaling of energy use with load regardless of datacenter traffic models. The challenge in using such a device is that layer 2 and layer 3 protocols require a one-to-one mapping of hardware addresses to IP (Internet Protocol) addresses. We overcome this problem by building a software shim layer that hides the fact that traffic is being merged. In order to validate the idea of a merge network, we build a simple merge network for gigabit optical interfaces and demonstrate correct operation of layer 2 and layer 3 protocols at line speed. We also conducted measurements to study how traffic gets mixed in the merge network prior to being fed to the switch. We also show that the merge network uses only a fraction of a watt of power, which makes this a very attractive solution for energy efficiency.

In this research we have developed solutions that enable linear scaling of energy with load in datacenter networks. The different techniques developed have been analyzed via modeling and simulations as well as prototyping. We believe that these solutions can be easily incorporated into future DCNs with little effort.
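As a toy illustration of the consolidation idea (our own example, not the algorithm developed in the thesis), the sketch below packs flow demands onto as few equal-capacity links as possible with a greedy first-fit pass; any link left carrying no traffic can then be put to sleep, which is how consolidation converts low average utilization into energy savings.

# Toy first-fit consolidation: pack flow demands onto few links so idle links can sleep.
from typing import List

def consolidate(flows: List[float], link_capacity: float, n_links: int) -> List[float]:
    """Return per-link load after greedy first-fit packing (illustrative only)."""
    loads = [0.0] * n_links
    for demand in sorted(flows, reverse=True):
        for i, load in enumerate(loads):
            if load + demand <= link_capacity:
                loads[i] = load + demand
                break
        else:
            raise ValueError("demand does not fit; a real scheme would reroute or refuse it")
    return loads

flows = [0.2, 0.1, 0.05, 0.3, 0.15, 0.1]        # normalised demands (fractions of link capacity)
loads = consolidate(flows, link_capacity=1.0, n_links=6)
active = sum(1 for load in loads if load > 0)
print(f"{active} of {len(loads)} links active; the rest can sleep")   # here: 1 of 6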
11

Jun, Jangeun. "Networking in Wireless Ad Hoc Networks." NCSU, 2006. http://www.lib.ncsu.edu/theses/available/etd-08172006-150002/.

Full text
Abstract:
In modern communication systems, wireless ad hoc networking has become an irreplaceable technology where communication infrastructure is insufficient or unavailable. An ad hoc network is a collection of self-organizing nodes that are rapidly deployable and adaptable to frequent topology changes. In this dissertation, the key problems related to the network layer (i.e., forwarding, routing, and network-layer topology control) are addressed. The problem of unfair forwarding in ad hoc nodes is identified and cross-layer solutions are proposed. Because a typical ad hoc node functions both as a router and a host, severe unfairness occurs between originated and forwarded packets, which eventually leads to a serious starvation problem. The results show that, to restore fairness and enhance capacity efficiency, non-traditional queueing schemes are required in which both the network and the MAC layers are considered together. Routing is a critical protocol which directly affects the scalability and reliability of wireless ad hoc networks. A good routing protocol for wireless ad hoc networks should overcome the dynamic nature of the topology arising from unreliable wireless links and node mobility. In ad hoc networks, it is very important to balance route accuracy and overhead efficiency. A number of routing protocols have been proposed for wireless ad hoc networks, but it is well known that current routing protocols scale poorly with the number of nodes, the number of traffic flows, and the intensity of mobility. The main objective of this dissertation is to provide efficient routing protocols for different types of wireless ad hoc networks, including wireless mesh networks (WMNs), mobile ad hoc networks (MANETs), and wireless sensor networks (WSNs). Since each category has different assumptions and constraints, different solutions should be considered. WMNs and WSNs have low mobility and centralized (one-to-any) traffic patterns, while MANETs have relatively high mobility and uniform (any-to-any) traffic patterns. WSNs are highly resource-constrained while WMNs are not. A new routing protocol specially designed for WMNs is proposed. Simulation experiments show that the protocol outperforms existing generic ad hoc routing protocols. This improvement is enabled by the essential characteristics of WMNs, and as a result, the protocol does not rely on a bandwidth-greedy flooding mechanism. For MANET routing, an existing de facto standard Internet intra-AS (autonomous system) routing protocol is extended to enhance its scalability in ad hoc environments. When extended for MANETs, Open Shortest Path First (OSPF) is expected to provide the benefits of maturity, interoperability, and scalability. The scalability extension is two-fold: the notions of the distance effect and multiple areas are explored as extensions. Both approaches provide significant gains in scalability by efficiently reducing flooding overhead without compromising routing or forwarding performance. Finally, a new scalable and reliable sensor network routing protocol is proposed. Since WSNs are the most resource-constrained type of ad hoc networks, the protocol should be very simple yet reliable. The proposed WSN routing protocol is designed to provide reliability (via multi-path redundancy), scalability (with efficiently contained flooding), and flexibility (source-tunable per-packet priority), which are achieved without adding protocol complexity or resource consumption. The protocol is implemented on real sensor motes and its performance is tested through outdoor sensor field deployments. The results show that the protocol outperforms even sophisticated link-estimation-based sensor network routing protocols.
12

Karim, Hawkar. "IoT Networking Using MPTCP Protocol." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48424.

Full text
Abstract:
Technology is progressing at a rapid pace, with new solutions and improvements being developed each year. The Internet of Things (IoT) is one area of computer science that has seen growing interest from the public, leading to more deployments of the technology. IoT devices often operate in low-power lossy networks, making them depend upon low energy consumption but also high reliability. As the devices become more mobile, this also exposes several challenges, one being connectivity with regard to mobility. Our proposed solution to this problem uses the Multipath Transmission Control Protocol (MPTCP) as a way of delivering a high level of performance and connectivity, and thereby high reliability. There has been research on, and implementations of, MPTCP in different networks; however, in low-power radio networks, such as the ones IoT devices reside in, it is still a novel idea. We reproduced and tested an implementation of MPTCP against a similar network using regular TCP and compared the results. The MPTCP network showed higher throughput and data transfer, proving to be more efficient while also providing a higher level of reliability with regard to connectivity. However, MPTCP showed a higher rate of packet retransmission compared to regular TCP. To be able to fully deploy MPTCP in low-energy IoT devices, more improvements are needed to accommodate the needs that such networks depend upon. There are use cases, such as mobile cellular devices, where MPTCP would make an impactful difference.
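For readers who want to try MPTCP directly, the following hedged sketch shows one way to request an MPTCP socket on a recent Linux kernel (5.6 or later with net.mptcp.enabled=1); this is a generic Linux illustration rather than the setup used in the thesis, and the host name is only a placeholder.

# Hedged example: request MPTCP from the Linux kernel; report an error otherwise.
import socket

IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)  # 262 is Linux's MPTCP protocol number

try:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    # From here the socket is used like ordinary TCP; the kernel manages subflows
    # over the available interfaces and falls back to plain TCP if the peer does
    # not support MPTCP.
    s.connect(("example.com", 80))
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(s.recv(128))
    s.close()
except OSError as exc:
    print("MPTCP socket not available here:", exc)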
13

Liu, Wei. "Generic models for performance evaluation of computer networking protocols." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/8209.

Full text
14

Eljarn, Hatana Hannan. "Computer mediated communication, social networking sites & maintaining relationships." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/computer-mediated-communication-social-networking-sites-and-maintaining-relationships(14a3c8f9-a6a7-4acd-833f-42b4c9b9bc7d).html.

Full text
Abstract:
The past decade has witnessed a proliferation of internet use for socialising with dedicated websites such as Facebook, and also for maintaining relationships using computer mediated communication. Individuals can extend the boundary associated with traditional forms of communication, and use technology to meet strangers online to share interests, or maintain existing relationships remotely. One of the most significant functions of computer-mediated communication (CMC) is its contribution to the evolution of social communication. CMC is “communication that takes place between human beings via the instrumentality of computers” (Thurlow, Lengel, & Tomic, 2004). As a consequence of the convenience and flexibility that this channel provides, CMC can be effectively used to orchestrate a variety of communication situations. Furthermore, social networks sites are becoming the choice in which individuals are maintaining relationships or meeting new people. The potential distinctions between these relationships and their offline counterparts remain contradictory. Online relationships may face different challenges, such as anonymity, restricted interaction (Walther, 1992), and the lack of physical presence. For example, sharing activities online such as playing games or visiting Web sites together differs from offline activities, such as going to the movies or dining together. These observations question whether CMC relationships have any parallels with real world relationships. Dunbar (1992) structured real world relationship by strength of ties and formulated the social brain hypothesis (SBH). This work uses the SBH as an interpretive lens in analyzing CMC relationship ties. Thus, a major focus of this work is to investigate implications of the SBH (Dunbar, 1992) within the context of CMC usage. It is recognised that CMC allows for the maintenance of a large number of friendships. Thus potentially, the use of CMC could alter the SBH ratios. Within the main findings consistency with SBH was found. Furthermore, CMC has many parallels with real world communication methods. Face-to-face communications were strongly preferred for maintenance of strong ties. Also phone usage was analysed and identified as an indicator of strong tie relationships, for both local and distant communications. The findings also address questions on displaced communities communication habits and their use of CMC. The phone was found to be most popular media and culture had a strong influence on communication content. The research used a mixed method approach, combining data collection via questionnaires, semi structured interviews and a diary study completed by participants. Based on the findings, a framework is proposed categorising groups on their level of real world socialising and CMC use. There are four essential contributions impacting on current theory. The findings offer new knowledge within the research of CMC and relationship maintenance theory. In our understanding these exploratory questions have not yet been addressed and therefore the findings of this research project are significant in their contributions.
15

Hu, Jie. "Mobile social networking aided content dissemination in heterogeneous networks." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/386988/.

Full text
Abstract:
Thanks to the rapid development of the wireless Internet, numerous mobile applications provide platforms for Mobile Users (MUs) to share any Information of Common Interest (IoCI) with their friends. For example, the mobile applications of Facebook and Twitter enable MUs to share information via posts and status updates. Similarly, the mobile application of Waze (http://www.waze.com/) enables drivers to share the real-time traffic information they collect themselves. However, at the time of writing, the dissemination process of the IoCI is predominantly supported by Centralised Infrastructure (CI) based communication. In order to receive the IoCI, the mobile devices of MUs have to be connected either to a Base Station (BS) or to a Wi-Fi hotspot. However, CI-based information dissemination faces the following three limitations: i) intermittent connectivity in rural areas; ii) an overloaded CI-based network; iii) inefficient data service in densely populated areas. Due to the rapid development of powerful mobile computing techniques, mobile devices are typically equipped with large storage capacity, say dozens or possibly hundreds of Gigabytes. Furthermore, they support multiple communication standards, such as Infrared, Bluetooth and Wi-Fi modules, in support of direct peer-to-peer communications. As a result, this treatise contributes towards mitigating the above-mentioned design problems in conventional CI-based information dissemination by seeking assistance both from opportunistic contacts and from opportunistic multicast amongst MUs who share a common interest in the same information. This results in an integrated cellular and opportunistic network. Since mobile devices are carried by MUs, exploring the social behaviours exhibited by individuals and the social relationships amongst MUs may assist us in enhancing the communication experience. We firstly study an integrated cellular and large-scale opportunistic network, where the MUs are sparsely distributed within a large area. Due to the large size of the area and the sparse distribution of the MUs, the connectivity between a BS and a MU, as well as that between a pair of MUs, exhibits an intermittent nature. As a result, the information delivery has to be realised by the opportunistic contact between a transmitter and receiver pair, when the receiver enters the range of the transmitter. However, successful information delivery requires the duration of this opportunistic contact to be longer than the downloading period of the information. This integrated network is relied upon for disseminating delay-tolerant IoCI amongst the MUs. We model the information dissemination in this integrated network by a Continuous-Time-Pure-Birth-Markov-Chain (CT-PBMC) and further derive its relevant delay metrics and information delivery ratio. With the assistance of large-scale opportunistic networks, the information delivery ratio before the IoCI expires may be doubled when compared to conventional CI-based information dissemination. Furthermore, upon modelling the contact history of the MUs as a social network, social centrality based schemes are proposed for the sake of off-loading tele-traffic from the potentially congested CI to the large-scale opportunistic network. As demonstrated by our simulation results, in the scenario considered, as many as 58% of the MUs can be served by the large-scale opportunistic network before the IoCI expires. In the above-mentioned large-scale networks, the MUs tend to be dispersed.
By contrast, in the densely populated scenario, where numerous MUs can be found within a small area, classic BS-aided multicast is often invoked as a traditional measure of disseminating the IoCI by relying on the broadcast nature of the wireless channels. However, BS-aided multicasting becomes inefficient, when the number of MUs is high. If we efficiently exploit the redundant copies of the IoCI held by the already served MUs and activate these IoCI holders as potential relays for the next stage of cooperative multicast, the resultant diversity gain beneficially accelerates the information dissemination process. This approach may be regarded as opportunistic cooperative multicast, since no deterministic relay selection scheme is required. Its promising advantages experienced during disseminating the delay-sensitive IoCI across the densely populated area considered motivate us to study an integrated cellular and small-scale opportunistic network. By jointly considering both the effects of the channel model in the physical layer, as well as the resource scheduling in the Medium Access-Control (MAC) layer and the information dissemination protocol in the network layer, we model the information dissemination process by a Discrete-Time-Pure-Birth-Markov-Chain (DT PBMC). Apart from the above-mentioned factors related to wireless transmission, we also consider various MUs’ social characteristics, such as their altruistic behaviours and their geographic social relationships. Relying upon the above-mentioned DT-PBMC, we are capable of studying the delay versus energy dissipation trade-off during the information dissemination process. As demonstrated by our simulation results, the integrated cellular and small-scale opportunistic network considered is capable of substantially reducing the total information dissemination delay and the total energy dissipation of the classic BS-aided multicast. However, these benefits are achieved at the cost of the additional energy dissipated by the individual MUs. In order to further reduce both the information dissemination delay and the energy dissipation, Social Network Analysis (SNA) tools are relied upon for proposing a range of efficient resource scheduling approaches in the MAC layer. As demonstrated by our simulation results, the so-called shortest-shortest-distance scheduling regime outperforms its counterparts in terms of both its delay and energy metrics. Since the distance-related path-loss predetermines the successful information delivery in the scenario of the integrated cellular and small-scale opportunistic network, it is crucial for us to study the statistical properties of the random distance between a transmitter and receiver pair. As a result, we derive the closedform distributions of both the random distance between a pair of MUs and that between a BS and MU pair for different scenarios. Apart from assisting us in analysing the information dissemination process, these results may be further relied upon for evaluating the path loss, the throughput, the spectral efficiency and the outage probability in mobile networks. Although mobile communication techniques evolved from the well known analog ‘1G’ mobile networks to the emerging heterogeneous ‘5G’ mobile networks, the operational systems still rely on the CI-dominated ‘Comm 1.0’ era. Explicitly, direct interaction amongst the MUs without any aid of the CI is rare. 
Since powerful mobile devices and pervasive social networking services are popular amongst the MUs, more direct interaction amongst the MUs would be advocated for the sake of achieving a more reliable, more prompt, and more energy-efficient communication experience. This treatise may contribute in a modest way towards the new ‘Comm 2.0’ era and may inspire further efforts from both the industrial and academic communities so as to embrace both the opportunities and the challenges of this new era from both the technical and economic perspectives.
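To illustrate the kind of pure-birth modelling referred to above, here is a simplified sketch of our own (not the thesis's model or its parameters): a continuous-time pure-birth chain in which the rate of reaching the next IoCI holder depends on how many MUs already hold the content, simulated to estimate the mean dissemination delay.

# Simplified continuous-time pure-birth simulation of content dissemination.
# Assumption (illustration only): with k holders among n MUs, the next delivery
# occurs at rate lambda_k = beta * k * (n - k).
import random

def dissemination_time(n: int, beta: float, seed: int) -> float:
    rng = random.Random(seed)
    t, holders = 0.0, 1                      # start with one MU holding the IoCI
    while holders < n:
        rate = beta * holders * (n - holders)
        t += rng.expovariate(rate)           # exponential sojourn time in state k
        holders += 1
    return t

times = [dissemination_time(n=50, beta=0.01, seed=s) for s in range(200)]
print(sum(times) / len(times))               # Monte Carlo estimate of the mean delay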
16

Wallach, Deborah A. (Deborah Anne). "High-performance application-specific networking." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10261.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (p. 107-112).
by Deborah Anne Wallach.
Ph.D.
17

Voellmy, Andreas Richard. "Programmable and Scalable Software-Defined Networking Controllers." Thesis, Yale University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3580888.

Full text
Abstract:

A major recent development in computer networking is the notion of Software-Defined Networking (SDN), which allows a network to customize its behaviors through centralized policies at a conceptually centralized network controller. The SDN architecture replaces closed, vertically-integrated, and fixed-function appliances with general-purpose packet processing devices, programmed through open, vendor-neutral APIs by control software executing on centralized servers. This open design exposes the capabilities of network devices and provides consumers with increased flexibility.

Although several elements of the SDN architecture, notably the OpenFlow standards, have been developed, writing an SDN controller remains highly difficult. Existing programming frameworks require either explicit or restricted declarative specification of flow patterns and provide little support for maintaining consistency between controller and distributed switch state, thereby introducing a major source of complexity in SDN programming.

In this dissertation, we demonstrate that it is feasible to use arguably the simplest possible programming model for centralized SDN policies, in which the programmer specifies the forwarding behavior of a network by defining a packet-processing function as an ordinary algorithm in a general-purpose language. This function, which we call an algorithmic policy, is conceptually executed on every packet in the network and has access to centralized network and policy state. This programming model eliminates the complex and performance-critical task of generating and maintaining sets of rules on individual, distributed switches.

To implement algorithmic policies efficiently, we introduce Maple, an SDN programming framework that can be embedded into any programming language with appropriate support. We have implemented Maple in both Java and Haskell, including an optimizing compiler and runtime system with three novel components. First, Maple's optimizer automatically discovers reusable forwarding decisions from a generic running control program. Specifically, the optimizer observes algorithm execution traces, organizes these traces to develop a partial decision tree for the algorithm, called a trace tree, and incrementally compiles these trace trees into optimized flow tables for distributed switches. Second, Maple introduces state dependency localization and fast repair techniques to efficiently maintain consistency between algorithmic policy and distributed flow tables. Third, Maple includes the McNettle OpenFlow network controller that efficiently executes user-defined OpenFlow event handlers written in Haskell on multicore CPUs, supporting the execution of algorithmic policies that require the central controller to process many packets. Through efficient message processing and enhancements to the Glasgow Haskell Compiler runtime system, McNettle network controllers can scale to handle over 20 million OpenFlow events per second on 40 CPU cores.
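As a neutral illustration of the "algorithmic policy" idea (written in Python rather than Maple's Java or Haskell embeddings, with invented state and helper names), the sketch below expresses forwarding behaviour as an ordinary function from a packet to a decision; a framework in the spirit of Maple would trace executions of such a function and compile the traces into switch flow tables.

# Illustrative algorithmic policy: an ordinary function over centralized state.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    tcp_dst_port: int

BLOCKED_PORTS = {23}                                   # centralized ACL state (invented)
HOST_LOCATION = {"10.0.0.2": ("s1", 2)}                # learned host -> (switch, port)

def policy(pkt: Packet):
    """Conceptually run on every packet; returns 'drop', a (switch, port), or 'flood'."""
    if pkt.tcp_dst_port in BLOCKED_PORTS:
        return "drop"
    location: Optional[Tuple[str, int]] = HOST_LOCATION.get(pkt.dst_ip)
    return location if location is not None else "flood"

print(policy(Packet("10.0.0.1", "10.0.0.2", 80)))      # ('s1', 2)
print(policy(Packet("10.0.0.1", "10.0.0.2", 23)))      # 'drop'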

18

Kemper, Marlyn. "Networking: Choosing A Lan Path to Interconnection." NSUWorks, 1986. http://nsuworks.nova.edu/gscis_etd/628.

Full text
Abstract:
A combination of evolving technologies, economic circumstances, and the need to manage the increased flow of information culminated in the utilization of computer networks in libraries to enhance information retrieval and document delivery and facilitate access to resources. Computer networks have enabled librarians to streamline support services and reduce the costs of labor intensive operations. A computer network is a structure that makes available to an end user at one place some service performed at another place. Ever since computer users started accessing central processor resources from remote terminals three decades ago, computer networks have become complex, powerful, and versatile. Technically, computer networks have evolved from dedicated private networks to those utilizing multiplexed links or accessing hosts from multiple vendors. Geographically, computer networks have the capability to link several buildings, a few states or span the globe. Advanced telecommunications technology in concert with computer technology triggered the emergence of all sizes, shapes, and types of computer networks to enable terminals and/or users connected to the networks to communicate with each other. While libraries' needs are changing in the fields of video and voice communications, the most profound alterations are in the realm of data communications or telecommunications among computers and between terminals and computers. To cope with the mix of requirements for sharing resources ranging from bibliographic citations and cataloging and circulation records to grant proposals and publications, librarians in the 1980s are turning to a special form of computer network, the extended local area network or linked LAN, as a tool for connecting diverse communications equipment. A local area network is a facility providing data communications within a geographically limited area. Essentially, a local area network can bind together proliferating personal computers, terminals, host computers, and other communications equipment for information interchange. These local area networks can be connected into linked LANs or extended local area networks through fiber optic transmission in a dispersed geographical area for maximum communications efficiency. The dissertation, NETWORKING: CHOOSING A LAN PATH TO INTERCONNECTION, is about the mortar and bricks out of which computer networks are built. Special emphasis is placed on the process of designing, installing, and implementing local area networks and on the linkage of these LANs as a strategy for information management for multi type library consortia such as SEFLIN (SouthEast Library Information Network). Consisting of public and academic libraries in Southeast Florida, SEFLIN is responsible for promoting resource sharing and the dissemination of new knowledge in all areas of study for the 3.2 million residents of Florida's Broward, Dade, and Palm Beach Counties. SEFLIN participants include the Broward Community College Library, the Broward County Library System, the Florida Atlantic University Library, the Florida International University Library, the Miami – Dade Community College Library, the Miami-Dade Public Library, and the University of Miami Library. Scarcity of space, shrinking budgets, growing user demands and expectations, spiraling costs of materials and services, and price reductions in communications and computing equipment contributed to the emergence of multi type library networks such as SEFLIN for resource sharing. 
SEFLIN was formed to serve the needs of a mixed community of library users by supplying access to a full range of information resources and offering sophisticated support for a host of library and management functions including online processing and cataloging, circulation control, serials control, fund accounting, statistical reporting, word processing and electronic mail. SEFLIN's primary mission is to link libraries in a common pattern of information exchange through the creation of an extended local area network or linked LAN. Sharing decision making data among public and academic libraries broadens the scope of sources and services available to the user community. Few areas in data communications have seen as much recent technological innovation and as many new commercial offerings as local area networks. Local area network development has responded to the users' demand for greater transmission speed and capacity. Among user advantages provided by a LAN are enhanced reliability; faster response time; flexibility in applications programming; better supported facilities; and internetworking capabilities for multiple remote locations. Spanning short distances ranging from a few meters to several kilometers and involving high data rates and short propagation delays, local area networks are characterized by a variable number of devices requiring interconnection. As more and more materials become available and the costs of operations skyrocket, library professionals have to come to terms with the fact that not only is there not enough money around to be self-sufficient but often any semblance of self-sufficiency has become an impossibility. As a consequence, computer networks such as linked LANs are playing an increasingly important role in library activities. NETWORKING: CHOOSING A LAN PATH TO INTERCONNECTION is the result of efforts to explore the processes involved in developing a framework for interconnecting disparate computer systems in use by SEFLIN members in an extended local area network (LAN) or linked LAN based on the Open Systems Interconnection (OSI) Reference Model promulgated by the International Organization for Standardization (ISO). Within a multi-type library consortium such as SEFLIN, an extended local area network or linked LAN facilitates access to decision making data. A networking system such as Network Systems Corporation's HYPERbus which can be extended via private or public communications facilities such as Bell South T1 transmission technology, compatible Timeplex devices, and Microtel's LaserNet, a fiber optic transmission system, into an extended local area network or linked LAN is a mechanism for facilitating the accomplishment of this mission. Inasmuch as HYPERbus has been utilized since 1984 by the Broward County Main Library, the major reference and research facility of the Broward County Library System, and can provide maximum performance networking capabilities for high speed digital data communications applications, NETWORKING: CHOOSING A LAN PATH TO INTERCONNECTION examines the feasibility of using HYPERbus as the basis for an extended local area network linking two SEFLIN participants, namely, the Broward County Main Library and the Florida Atlantic University Library. Developed by Network Systems Corporation, HYPERbus is a local area network implemented by Broward County's Information Resources Management Division in downtown Fort Lauderdale.
Presently, HYPERbus is used to link the Broward County Main Library, the Broward County Governmental Center, the Broward County Courthouse, and the new jail facility. A multi-drop coaxial cable based system which transmits data at speeds up to 10M bps, HYPERbus features a flexible architecture capable of handling simultaneously diverse data rate, traffic types, and protocols and incorporating a variety of transmission media within the network. Moreover, HYPERbus can support extended span geographic distances using communication links. HYPERbus provides a data communications resource that is transparent to differences in communications media and equipment. As a consequence, network reconfiguration and expansion can be readily accomplished as new technologies, protocols, and user requirements emerge. The generic model developed in this dissertation for interconnecting two SEFLIN participants based on HYPERbus technology, however, is by no means exhaustive of all existing schemes; the field is presently so wide open that new schemes are being introduced constantly. Technological advances and economic pressures have stimulated interest in resource sharing through computer networking as an option for overcoming barriers in accessing information. The linking of computerized systems to enable one system to exchange data with another system is an essential and basic goal to any effective cooperative intersystem resource sharing effort. In this world of electronic communications, the future of computer networking which provides tailored data communications services to the user community is definitely on an upward climb as a consequence of such factors as competitive pricing, improved and faster transmission speeds, and conformance to standards devised by the International Organization for Standardization (ISO). NETWORKING: CHOOSING A LAN PATH TO INTERCONNECTION deals with designing and implementing a computer network for resource sharing and is intended for technical service and public service staff of the Broward County Library System who are using the building blocks of a local area network and still appear to be somewhat mystified by its information capabilities as well as for those who are not yet doing so. This dissertation examines the history of library involvement with networking, briefly reviews the history of library networking and automation, presents examples of library use of networking, and provides the user with the basic information needed for understanding networking technology and terminology. Further, NETWORKING: CHOOSING A LAN PATH TO INTERCONNECTION proposes an enhanced role for the Broward County Library System as a community information provider by using an extended local area network or linked LAN to actively pursue, organize, and make available to present and potential library patrons a range of information resources never before offered in today's information based society.
19

Druschel, Peter. "Operating system support for high-speed networking." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186828.

Full text
Abstract:
The advent of high-speed networks may soon increase the network bandwidth available to workstation class computers by two orders of magnitude. Combined with the dramatic increase in microprocessor speed, these technological advances make possible new kinds of applications, such as multimedia and parallel computing on networks of workstations. At the same time, the operating system, in its role as mediator and multiplexor of computing resources, is threatening to become a bottleneck. The underlying cause is that main memory performance has not kept up with the growth of CPU and I/O speed, thus opening a bandwidth gap between CPU and main memory, while closing the old gap between main memory and I/O. Current operating systems fail to properly take into account the performance characteristics of the memory subsystem. The trend towards server-based operating systems exacerbates this problem, since a modular OS structure tends to increase pressure on the memory system. This dissertation is concerned with the I/O bottleneck in operating systems, with particular focus on high-speed networking. We start by identifying the causes of this bottleneck, which are rooted in a mismatch of operating system behavior with the performance characteristics of modern computer hardware. Then, traditional approaches to supporting I/O in operating systems are re-evaluated in light of current hardware performance tradeoffs. This re-evaluation gives rise to a set of novel techniques that eliminate the I/O bottleneck.
20

Strong, Cynthia D. "Addressing the gender gap : teaching preadolescent girls computer networking concepts /." Online version of thesis, 2010. http://hdl.handle.net/1850/12239.

Full text
21

Felker, Keith A. "Security and efficiency concerns with distributed collaborative networking environments /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03sep%5FFelker.pdf.

Full text
22

Marin, Nogueras Gerard. "Federation of Community Networking Testbeds." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177198.

Full text
23

Berndtsson, Andreas. "VPN Mesh in Industrial Networking." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-18160.

Full text
Abstract:
This thesis report describes the process and presents the results gained while evaluating available VPN mesh solutions and equipment for integration into industrial systems. The task was divided into several sub-steps: summarize the previous work done in the VPN mesh area, evaluate the available VPN mesh solutions, verify that the equipment of interest complies with the criteria set by ABB, and lastly verify that the equipment can be integrated transparently into already running systems. The result shows that there is equipment that complies with the criteria, which can also be integrated transparently into running systems. The result also shows that IPSec should be used as the VPN protocol since IPSec can make use of the crypto hardware whereas TLS-based VPNs currently cannot. Even though the implementation of secure gateways would provide authentication and authorization to the network, the cost of implementing these gateways would be great. The best solution would be to present the evaluated equipment as an optional feature instead of making it standard equipment in each system.
24

Sibanda, Phathisile. "Connection management applications for high-speed audio networking." Thesis, Rhodes University, 2008. http://hdl.handle.net/10962/d1006532.

Full text
Abstract:
Traditionally, connection management applications (referred to as patchbays) for high-speed audio networking are predominantly developed using third-generation languages such as C, C# and C++. Due to the rapid increase in distributed audio/video network usage in the world today, connection management applications that control signal routing over these networks have also evolved in complexity to accommodate more functionality. As a result, high-speed audio networking application developers require a tool that will enable them to develop complex connection management applications easily and within the shortest possible time. In addition, this tool should provide them with the reliability and flexibility required to develop applications controlling signal routing in networks carrying real-time data. High-speed audio networks are used for various purposes that include audio/video production and broadcasting. This investigation evaluates the possibility of using Adobe Flash Professional 8, with ActionScript 2.0, for developing connection management applications. Three patchbays, namely the Broadcast patchbay, the Project studio patchbay, and the Hospitality/Convention Centre patchbay, were developed and tested for connection management in three sound installation networks, namely the Broadcast network, the Project studio network, and the Hospitality/Convention Centre network. Findings indicate that complex connection management applications can effectively be implemented using the Adobe Flash IDE and ActionScript 2.0.
25

Shumard, Sally L. "A Collaborative PDS Project About Computer Networking in Art Education /." The Ohio State University, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487928649988749.

Full text
26

Ho, Tracey 1976. "Networking from a network coding perspective." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/87910.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 143-149).
Network coding generalizes network operation beyond traditional routing, or store-and-forward, approaches, allowing for mathematical operations across data streams within a network. This thesis considers a number of theoretical and practical networking issues from network coding perspectives. We describe a new distributed randomized coding approach to multi-source multicast network scenarios, in which one or more, possibly correlated, sources transmit common information to one or more receivers. This approach substantially widens the scope of applicability of network coding to three new areas. Firstly, we show that it achieves robust network operation in a decentralized fashion, approaching optimal capacity with error probability decreasing exponentially in the length of the codes. Secondly, in the area of network security, we show how to extend this approach to obtain a low-overhead scheme for detecting the presence of faulty or malicious nodes exhibiting Byzantine (arbitrary) behavior. Thirdly, we show that this approach compresses information where necessary in a network, giving error bounds in terms of network parameters. Another area of our work develops an information theoretic framework for network management for recovery from non-ergodic link failures, based on the very general network coding concept of network behavior as a code. This provides a way to quantify essential management information as that needed to switch among different codes (behaviors) for different failure scenarios. We compare two different recovery approaches, and give bounds, many of which are tight, on management requirements for various network connection problems in terms of network parameters.
by Tracey Ho.
Ph.D.
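The randomized coding approach described in this entry can be sketched, under simplifying assumptions of our own (coding over GF(2) with XOR instead of the larger finite fields used in practice, and a single hop), as follows: each coded packet carries a random coefficient vector together with the corresponding XOR combination of the source packets, and a receiver decodes by Gaussian elimination once it holds enough linearly independent combinations.

# Toy random linear network coding over GF(2) (illustrative only; packet contents invented).
import random

def encode(sources, n_coded, rng):
    """Produce n_coded random XOR combinations, each tagged with its GF(2) coefficients."""
    k, length = len(sources), len(sources[0])
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        payload = bytearray(length)
        for c, pkt in zip(coeffs, sources):
            if c:
                payload = bytearray(a ^ b for a, b in zip(payload, pkt))
        coded.append((coeffs, bytes(payload)))
    return coded

def decode(coded, k):
    """Gaussian elimination over GF(2); returns the k source packets, or None if rank < k."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col] == 1), None)
        if pivot is None:
            return None                       # not enough independent combinations yet
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col] == 1:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(a ^ b for a, b in zip(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(k)]

rng = random.Random(7)
sources = [b"alpha-pkt", b"bravo-pkt", b"delta-pkt"]
received = encode(sources, n_coded=6, rng=rng)
print(decode(received, k=3) == sources)       # True whenever the 6 combinations have rank 3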
27

Sturgeon, Thomas. "Exploratory learning for wireless networking." Thesis, University of St Andrews, 2010. http://hdl.handle.net/10023/1702.

Full text
Abstract:
This dissertation highlights the importance of computer networking education and the challenges in engaging and educating students. An exploratory learning approach is discussed with reference to other learning models and taxonomies. It is felt that an exploratory learning approach to wireless networks improves student engagement and perceived educational value. In order to support exploratory learning and improve the effectiveness of computer networking education the WiFi Virtual Laboratory (WiFiVL) has been developed. This framework enables students to access a powerful network simulator without the barrier of learning a specialised systems programming language. The WiFiVL has been designed to provide “anytime anywhere” access to a self-paced or guided exploratory learning environment. The initial framework was designed to enable users to access a network simulator using an HTML form embedded in a web page. Users could construct a scenario wherein multiple wireless nodes were situated. Traffic links between the nodes were also specified using the form interface. The scenario is then translated into a portable format, a URL, and simulated using the WiFiVL framework detailed in this dissertation. The resulting simulation is played back to the user on a web page, via a Flash animation. This initial approach was extended to exploit the greater potential for interaction afforded by a Rich Internet Application (RIA), referred to as WiFiVL II. The dissertation also details the expansion of WiFiVL into the realm of 3-dimensional, immersive, virtual worlds. It is shown how these virtual worlds can be exploited to create an engaging and educational virtual laboratory for wireless networks. Throughout each development the supporting framework has been re-used and has proved capable of supporting multiple interfaces and views. Each of the implementations described in this dissertation has been evaluated with learners in undergraduate and postgraduate degrees at the University of St Andrews. The results validate the efficacy of a virtual laboratory approach for supporting exploratory learning for wireless networks.
APA, Harvard, Vancouver, ISO, and other styles
28

Felker, Keith A. "Security and efficiency concerns with distributed collaborative networking environments." Thesis, Monterey, California. Naval Postgraduate School, 2009. http://hdl.handle.net/10945/852.

Full text
Abstract:
Approved for public release, distribution unlimited
The progression of technology is continuous, and the technology that drives interpersonal communication is no exception. Recent technology advancements in the areas of multicast, firewalls, encryption techniques, and bandwidth availability have made the next level of interpersonal communication possible. This thesis explains why collaborative environments are important to today's online productivity. In doing so, it gives the reader a comprehensive background in distributed collaborative environments, explains how collaborative environments are employed in the Department of Defense and industry, details the effects network security has on multicast protocols, and compares collaborative solutions with a focus on security. The thesis ends by providing a recommendation for collaborative solutions to be utilized by NPS/DoD-type networks. Efficient multicast collaboration, in the framework of security, is a secondary focus of this research. As such, it takes security and firewall concerns into consideration while comparing and contrasting both multicast-based and non-multicast-based collaborative solutions.
APA, Harvard, Vancouver, ISO, and other styles
29

Al-Malki, Dana Mohammed. "Development of virtual network computing (VNC) environment for networking and enhancing user experience." Thesis, City, University of London, 2006. http://openaccess.city.ac.uk/18319/.

Full text
Abstract:
Virtual Network Computing (VNC) is a thin client developed by RealVNC Ltd, formerly of Olivetti Research Ltd/AT&T Labs Cambridge, and can be used as a collaborative environment; it has therefore been chosen as the basis of this research study. The purpose of this thesis is to investigate and develop a VNC-based environment over the network and to improve the Quality of Experience (QoE) of using VNC between networked groups, by incorporating videoconferencing with VNC and by enhancing QoE in mobile environments where the network status is far from ideal and prone to disconnection. This thesis investigates the operation of VNC in different environments and scenarios, such as wireless environments, by examining user and device mobility and ways to sustain a seamless connection while in motion. As part of the study, I also surveyed the groups that implement VNC, such as universities, research groups, laboratories and virtual laboratories. In addition, I identified the features and security measures needed to create a secure environment, by pinpointing the points of strength and weakness in VNC compared with popular thin clients and remote control applications and by analysing how well VNC conforms to several security measures. Furthermore, it is reasonable to say that the success of any scheme that attempts to deliver desirable levels of Quality of Service (QoS) for an effective application on the future Internet must be based not only on the progress of technology but on users' requirements. For instance, a collaborative environment has not yet reached the desired expectations of its users, since it is not capable of handling unexpected events such as the sudden disconnection of a nomadic user engaged in an ongoing collaborative session, which breaks the social dynamics of the group collaborating in that session. Therefore, I have concluded that knowing the social dynamics of an application's users as a group, and their requirements and expectations of a successful experience, can lead an application designer to exploit technology to autonomously support the initiation and maintenance of social interaction. Moreover, I successfully developed a VNC-based environment for networked groups that facilitates the administration of different remote VNC sessions, together with a prototype that uses videoconferencing in parallel with VNC to provide a better user QoE. The last part of the thesis is concerned with designing a framework to improve and assess the QoE of all users in a collaborative environment, which is especially applicable in the presence of nomadic clients with their frequent disconnections. I designed a conceptual algorithm called Improved Collaborative Quality of Experience (IC-QoE), which aims to eliminate frustration and improve the QoE of users in a collaborative session in the case of disconnections, examined its use and benefits in real-world scenarios such as research teams, and implemented a prototype to present its concepts. Finally, I designed a framework suggesting ways to evaluate this algorithm.
APA, Harvard, Vancouver, ISO, and other styles
30

He, Chunzhi, and 何春志. "Load-balanced switch design and data center networking." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/198826.

Full text
Abstract:
High-speed routers and high-performance data centers share a common system-level architecture in which multiple processing nodes are connected by an interconnection network for high-speed communications. Load balancing is an important technique for maximizing throughput and minimizing delay of the interconnection network. In this thesis, efficient load balancing schemes are designed and analyzed for next-generation routers and data centers. In high-speed router design, two preferred switch architectures are the input-queued switch and the load-balanced switch. In an input-queued switch, time-domain load balancing can be carried out by an iterative algorithm that schedules packets for sending in different time slots. The complexity of an iterative algorithm increases rapidly with the number of scheduling iterations. To address this problem, a single-iteration scheduling algorithm called D-LQF is designed, in which an exhaustive service policy is adopted for reusing the matched input-output pairs of previous time slots to grow the match size. Unlike an input-queued switch, a load-balanced switch consists of two stages of crossbar switch fabrics, where load balancing is carried out in both the time and space domains. Among various load-balanced switches, the feedback-based switch gives the best delay-throughput performance. In this thesis, the feedback-based switch is enhanced in three aspects. Firstly, we focus on reducing its switch fabric complexity. Instead of using crossbars, a dual-banyan network is proposed. The complexity of the dual-banyan can be further reduced by merging the two banyans to form a Clos network, resulting in a Clos-banyan network. Secondly, we aim to improve the delay performance of the feedback-based switch. A Clos-feedback switch architecture is devised in which each switch module in the Clos network is a small feedback-based switch. With application-flow-based load balancing, packet order is ensured and the average packet delay is reduced from O(N) to O(n), where N and n are the switch and switch module sizes, respectively. Thirdly, we extend the feedback-based switch to support multicast traffic. Based on the notion of a pointer-based multicast VOQ, an efficient multicast scheduling algorithm with packet replication at the middle-stage ports only is proposed. In order to provide close-to-100% throughput for any admissible multicast traffic pattern, a three-stage implementation of the feedback-based switch is also designed. In designing load balancing schemes for data centers, we focus on the most popular fat-tree based data centers. Notably, packet-based load balancing is widely considered infeasible for data centers, because the associated packet out-of-order problem will cause unnecessary TCP fast retransmits and, as a result, severely undermine TCP performance. In this thesis, we show that if packet-based load balancing is performed properly, the packet out-of-order problem can be easily addressed by slightly increasing the number of duplicate ACKs required for triggering fast retransmit. Admittedly, in the case of a real packet loss, the loss recovery time will be increased, but our simulation results show that such an increase is far less than the reduction in the network queueing delay (due to a better load-balanced network). Compared to a flow-based load balancing scheme, our packet-based scheme consistently provides significantly higher goodput and noticeably smaller delay.
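The duplicate-ACK adjustment mentioned at the end of the abstract can be illustrated with a short, hypothetical sketch: packets of a flow are sprayed over all equal-cost uplinks instead of being hashed to a single one, and the TCP fast-retransmit trigger is raised slightly to absorb the resulting reordering. The threshold value and function names below are illustrative, not the thesis implementation.

```python
import itertools

_round_robin = itertools.count()

def pick_uplink(num_uplinks):
    """Per-packet round-robin spraying over equal-cost uplinks,
    instead of hashing every packet of a flow onto the same path."""
    return next(_round_robin) % num_uplinks

def should_fast_retransmit(dupacks, threshold=5):
    """Standard TCP fires fast retransmit after 3 duplicate ACKs; a slightly
    larger threshold absorbs the mild reordering that per-packet load
    balancing introduces, at the cost of slower recovery on a real loss."""
    return dupacks >= threshold

# Packets of one flow take core uplinks 0, 1, 2, 3, 0, 1, ...
print([pick_uplink(4) for _ in range(8)])
print(should_fast_retransmit(dupacks=4))   # False: tolerated as reordering
```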
published_or_final_version
Electrical and Electronic Engineering
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
31

Abdul, Azeez Khan Mohamed Shoaib Khan. "Redefining Insteon home control networking protocol." Thesis, California State University, Long Beach, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1583654.

Full text
Abstract:

The two main purposes of developing a home control networking protocol are to offer indoor lifestyle sophistication and to secure our residences. There are numerous protocols based on ZigBee, Z-Wave, Wavenis, X10 and Insteon technologies. These technologies offer good indoor lifestyle sophistication features; Insteon provides a wide range of products in this respect and is the latest and most improved of them. However, the existing Insteon protocol is functional only in smaller Power Line Communication networks. There will be demand for implementing the Insteon home control networking protocol in larger residential areas and in industrial areas due to its steady growth and popularity. Implementing the existing protocol in larger networks is infeasible because of data collisions due to flooding. Therefore, there is a need to redefine and expand the protocol so that the network can accommodate many devices and grow in size. To achieve this, gradient-based routing is implemented, which helps choose a particular path to reach a particular end device. This reduces flooding, and useful data packets are saved from collision. After implementation of the gradient-based protocol, collisions were reduced by 56.63%, delay decreased by 65%, and throughput increased by 105.6%.
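The gradient-based routing idea can be sketched as follows (a hypothetical illustration, not the protocol as redefined in the thesis): each node learns its hop-count "gradient" towards the destination, and a packet is relayed only to neighbours with a strictly smaller gradient, which suppresses blind flooding.

```python
from collections import deque

def build_gradient(neighbours, sink):
    """Breadth-first search from the sink assigns each node its hop count."""
    gradient = {sink: 0}
    queue = deque([sink])
    while queue:
        node = queue.popleft()
        for nxt in neighbours[node]:
            if nxt not in gradient:
                gradient[nxt] = gradient[node] + 1
                queue.append(nxt)
    return gradient

def forward(node, neighbours, gradient):
    """Relay only through neighbours that are closer to the sink."""
    return [n for n in neighbours[node] if gradient[n] < gradient[node]]

# Example topology: A-B-C-D chain plus a side link A-C.
topo = {"A": ["B", "C"], "B": ["A", "C"], "C": ["B", "D", "A"], "D": ["C"]}
g = build_gradient(topo, "D")
print(forward("A", topo, g))   # ['C'], the neighbour nearer the sink
```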

APA, Harvard, Vancouver, ISO, and other styles
32

Mahood, Christian. "Data center design & enterprise networking /." Online version of thesis, 2009. http://hdl.handle.net/1850/8699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Holman, Jason (Jason William) 1974. "Optical networking equipment manufacturing." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/44603.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; in conjunction with the Leaders for Manufacturing Program at MIT, 2001.
Includes bibliographical references (leaf 70).
Celestica, a global contract manufacturer specializing in printed circuit board assembly and computer assembly, has recently begun manufacturing equipment for the optical networking equipment (ONE) industry. The expansion to include ONE manufacturing requires the development of new skills in handling optical fiber and components, a new supply chain strategy, and a new approach to manufacturing systems control. Celestica is developing a set of standards for ONE manufacturing that will support the rapid development of the new skills required for this industry. This work outlines the standards and explores the specific issues related to manufacturing with optical fiber, including the mechanical reliability and optical performance of various types of optical fibers. An overview of the telecommunications industry is provided, including an analysis of its supply chain structure. Observations are made on trends in the industry and the ways that these trends have affected Celestica in the past, and could impact Celestica in the future. Finally, Celestica's current approach to manufacturing systems control is evaluated, and suggestions are made for improving systems control and project management when manufacturing for such a rapidly evolving industry.
by Jason Holman.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
34

Yu, Weikuan. "Enhancing MPI with modern networking mechanisms in cluster interconnects." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1150470374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Cruz-Zendejas, Rogelio. "IslaNet| An isolated and secure approach to implementing computer networking laboratories." Thesis, California State University, Long Beach, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1527541.

Full text
Abstract:

The SIGITE Computing Curricula suggest that a hands-on laboratory component is essential in teaching networking courses. However, there are some drawbacks and limitations, including high costs of implementation and maintenance, security risks to the campus network, and a limited number of practical guides that feature both design and implementation. Furthermore, with the advancement of other approaches such as virtualization and simulation, it has become increasingly difficult to justify funding a hands-on laboratory.

IslaNet is an isolated and secure approach to implementing computer networking laboratories that produces a low-cost model focused on hands-on implementation. IslaNet uses various components from other approaches to mitigate, or in some cases completely eliminate, the risks and deficiencies that traditional hands-on laboratories introduce. The laboratory objectives are derived from the SIGITE Computing Curriculum to provide a solid, well-developed foundation. IslaNet covers concept, design, and implementation using a unique multi-layer approach.

APA, Harvard, Vancouver, ISO, and other styles
36

Yulga, James. "Implementation of Microsoft's Virtual PC in networking curriculum." [Denver, Colo.] : Regis University, 2006. http://165.236.235.140/lib/JYulgaPartI2006.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Bihari, Jeevan Jyoti. "Software emulation of networking components." Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/935942.

Full text
Abstract:
Software emulation of local area and wide area networks provides an alternative method for designing such networks and analyzing their performance. Emulation of the bridges and routers that link networks together may provide valuable information regarding network congestion, network storms and the like before expensive hardware is put into place. Such an emulation also enables students taking a networking course to develop their own client-server applications and to visualize the basic functioning of the UDP/IP and RIP protocols. This thesis builds on the emulated local area network, Metanet, created by a previous graduate student. It adds the capability of attaching routers and bridges to multiple local and non-local emulated networks so that data may be transferred between two hosts on different segments of the same LAN (via an emulated bridge) or on two different networks altogether (via an emulated router). The machines running the Metanet software must support UNIX with Berkeley's socket interface, since emulated networks on different physical machines use this interface to communicate. A comparison of the new networking capabilities of Metanet with other experimental systems such as XINU and MINIX is also presented.
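For readers unfamiliar with the protocols mentioned, the following is a small sketch of the RIP-style distance-vector update that such an emulated router would perform; it is illustrative only and not taken from the Metanet code base.

```python
INFINITY = 16  # RIP treats 16 hops as unreachable

def rip_update(table, neighbour, neighbour_table):
    """Merge a neighbour's advertised routes into our routing table.

    `table` maps destination -> (metric, next_hop); routes learned from the
    neighbour cost one extra hop through it (Bellman-Ford relaxation).
    """
    changed = False
    for dest, (metric, _) in neighbour_table.items():
        new_metric = min(metric + 1, INFINITY)
        if dest not in table or new_metric < table[dest][0]:
            table[dest] = (new_metric, neighbour)
            changed = True
    return changed

# Example: router A learns about network N2 from neighbour B.
a = {"N1": (1, "direct")}
b = {"N2": (1, "direct")}
rip_update(a, "B", b)
print(a)   # {'N1': (1, 'direct'), 'N2': (2, 'B')}
```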
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
38

Chetty, Marshini. "Making infrastructure visible: a case study of home networking." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41152.

Full text
Abstract:
In this dissertation, I examine how making infrastructure visible affects users' engagement with that infrastructure, through the case study of home networking. I present empirical evidence of the visibility issues that home networks present to users and how these results informed the design of a prototype called Kermit to visualize aspects of the home network. Through my implementation and evaluation of Kermit, I derive implications for making infrastructure visible in ways that enable end-users to manage and understand the systems they use everyday. I conclude with suggestions for future work for making home networks, and infrastructure more generally, more visible.
APA, Harvard, Vancouver, ISO, and other styles
39

Mercer, Logan (Logan James McClure). "Deployment of a next generation networking protocol." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106740.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, June 2016.
Cataloged from PDF version of thesis. "May 2016."
Includes bibliographical references (pages 72-73).
This thesis presents experimental verification of the performance of Group Centric Networking (GCN), a next generation networking protocol developed for robust and scalable communications in lossy networks where users are localized to geographic areas, such as military tactical networks. In previous work, initial simulations in NS3 showed that GCN offers high delivery with low network overhead in the presence of high packet loss and high mobility. We extend this prior work to verify GCN's performance in actual over-the-air experimentation. In the experiments, we deployed GCN on a 90-node Android phone test bed that was distributed across an office building, allowing us to evaluate its performance over-the-air on real-world hardware in a realistic environment. GCN's performance is compared against multiple popular wireless routing protocols, which we also run on our testbed. These tests yield two notable results: (1) the seemingly benign environment of an office is in fact quite lossy, with high packet error rates between users that are geographically close to one another, and (2) that GCN does indeed offer high delivery with low network overhead, which is in contrast to traditional wireless routing schemes that offer either high delivery or low overhead, or sometimes neither.
by Logan Mercer.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
40

Reblitz-Richardson, Orion Aubrey 1976. "Architecture for biological model and database networking." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86495.

Full text
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (leaves 73-75).
by Orion Aubrey Reblitz-Richardson.
S.B. and M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
41

Erazo, Miguel A. "Leveraging Symbiotic Relationships for Emulation of Computer Networks." FIU Digital Commons, 2013. http://digitalcommons.fiu.edu/etd/827.

Full text
Abstract:
The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios, 2) flexibility, for testing new protocols or applications in diverse settings, and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real-time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating the traffic metadata from real applications in the emulation system to reproduce the realistic traffic conditions. On the other hand, the emulation system benefits from receiving the continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
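The time-dilated synchronization used by SVEET can be illustrated with a tiny sketch: real hosts are slowed by a time dilation factor (TDF) so that a real-time simulator can keep pace with a much larger virtual network. The names and the factor below are assumptions for illustration, not the actual SVEET interface.

```python
import time

TDF = 10  # one virtual second corresponds to TDF real seconds

def to_virtual(real_elapsed, tdf=TDF):
    """Convert elapsed wall-clock time on a dilated host to virtual time."""
    return real_elapsed / tdf

def virtual_sleep(virtual_seconds, tdf=TDF):
    """Sleeping for one virtual second takes tdf real seconds."""
    time.sleep(virtual_seconds * tdf)

start = time.time()
# ... run one step of a dilated experiment here ...
print("virtual seconds elapsed:", to_virtual(time.time() - start))
```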
APA, Harvard, Vancouver, ISO, and other styles
42

Ma, Yongsen. "Improving Wifi Sensing And Networking With Channel State Information." W&M ScholarWorks, 2019. https://scholarworks.wm.edu/etd/1593091976.

Full text
Abstract:
In recent years, WiFi has seen very rapid growth due to its high throughput, high efficiency, and low costs. Multiple-Input Multiple-Output (MIMO) and Orthogonal Frequency-Division Multiplexing (OFDM) are two key technologies for providing high throughput and efficiency in WiFi systems. MIMO-OFDM provides Channel State Information (CSI), which represents the amplitude attenuation and phase shift of each transmit-receive antenna pair at each carrier frequency. CSI helps WiFi achieve the high throughput needed to meet the growing demands of wireless data traffic. CSI also captures how wireless signals travel through the surrounding environment, so it can be used for wireless sensing purposes as well. This dissertation presents how to improve WiFi sensing and networking with CSI. More specifically, it proposes deep learning models to improve the performance and capability of WiFi sensing and presents network protocols to reduce CSI feedback overhead for high-efficiency WiFi networking. For WiFi sensing, many applications in recent years have used CSI as their input. To get a better understanding of existing WiFi sensing technologies and future WiFi sensing trends, this dissertation presents a survey of signal processing techniques, algorithms, applications, performance results, challenges, and future trends of CSI-based WiFi sensing. CSI is widely used for gesture recognition and sign language recognition, but existing methods for WiFi-based sign language recognition have low accuracy and high costs when there are more than 200 sign gestures. The dissertation presents SignFi for sign language recognition using CSI and Convolutional Neural Networks (CNNs). SignFi provides high accuracy and low run-time testing costs for 276 sign gestures in lab and home environments. For WiFi networking, although CSI provides high throughput for WiFi networks, it also introduces high overhead. WiFi transmitters need CSI feedback for transmit beamforming and rate adaptation. CSI packets are very large, and their size grows quickly with the number of antennas and the channel width. CSI feedback therefore introduces high overhead, which reduces the performance and efficiency of WiFi systems, especially mobile and hand-held WiFi devices. This dissertation presents RoFi to reduce CSI feedback overhead based on the mobility status of WiFi receivers. CSI feedback compression reduces overhead, but WiFi receivers still need to send CSI feedback to the WiFi transmitter. The dissertation therefore also presents EliMO for eliminating CSI feedback without sacrificing beamforming gains.
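As a rough illustration of the CSI-plus-CNN approach, the sketch below builds a small convolutional classifier over CSI tensors in PyTorch. The 276-class output matches the number of gestures mentioned in the abstract, but the tensor shape and layer sizes are assumptions for illustration and do not reproduce the SignFi architecture.

```python
import torch
import torch.nn as nn

class CsiCnn(nn.Module):
    """A toy CNN over CSI amplitude/phase tensors (antennas x subcarriers x time)."""
    def __init__(self, antennas=3, classes=276):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(antennas, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, classes)

    def forward(self, x):               # x: (batch, antennas, subcarriers, time)
        return self.classifier(self.features(x).flatten(1))

model = CsiCnn()
dummy_csi = torch.randn(8, 3, 30, 200)  # a batch of 8 CSI measurements
print(model(dummy_csi).shape)           # torch.Size([8, 276])
```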
APA, Harvard, Vancouver, ISO, and other styles
43

Kandula, Dheeraj. "End-to-end Behavior of Delay Tolerant Networks with Message Ferries." NCSU, 2008. http://www.lib.ncsu.edu/theses/available/etd-02272008-201512/.

Full text
Abstract:
Delay Tolerant Networks (DTNs) are high-delay networks with intermittent connectivity. Transport protocols developed either for high-bandwidth networks or for low-delay networks suffer significantly on these types of networks. We have studied the impact of various transport protocols and application-level protocols on a specific type of DTN, namely message ferry networks. At present there is no transport protocol that adapts well to the characteristics of message ferry networks, so we developed a protocol that is well suited to them. Our protocol ensures the major characteristics of a reliable transport protocol, such as in-order delivery and reliable transfer of data, without compromising throughput. We simulated our protocol by modifying the TCP process model in OPNET and compared it with standard TCP. The simulation results show a drastic improvement over the standard TCP protocol.
APA, Harvard, Vancouver, ISO, and other styles
44

Tanwir, Savera. "Network Resource Scheduling and Management of Optical Grids." NCSU, 2007. http://www.lib.ncsu.edu/theses/available/etd-05082007-194445/.

Full text
Abstract:
Advance reservation of lightpaths in an optical network has become a popular concept of reserving network resources in support of Grid applications. In this thesis, we have evaluated and compared several algorithms for dynamic scheduling of lightpaths using a flexible advance reservation model. The main aim is to find the best scheduling policy that improves network utilization and minimizes blocking. The scheduling of lightpaths involve both routing and wavelength assignment. Our simulation results show that minimum cost adaptive routing where link costs are determined by the current and future usage of the link provides the minimum blocking. Moreover, searching for k alternate paths within the scheduling window significantly improves the performance. For wavelength assignment, we have used a scheme that reduces fragmentation by minimizing unused leading or trailing gaps. We have also analyzed approaches for failure recovery and lightpath re-optimization. Finally, an advance reservation scheme needs timely information regarding the status of the optical links. To this end, we have surveyed various monitoring tools and techniques and we have proposed a monitoring framework to support fast restoration.
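The core admission check behind flexible advance reservation can be sketched as follows (a simplified illustration; the thesis uses a fragmentation-reducing wavelength assignment rather than the plain first-fit shown here): a request for a time window is granted on a wavelength only if it overlaps no existing booking on that wavelength.

```python
def is_free(reservations, start, end):
    """`reservations` is a list of (start, end) intervals already booked on
    one wavelength of one link; the request [start, end) is admissible only
    if it overlaps none of them."""
    return all(end <= s or start >= e for s, e in reservations)

def first_fit_wavelength(link_reservations, start, end):
    """Return the lowest-indexed wavelength whose schedule can accept the
    request, or None if the request must be blocked."""
    for w, booked in enumerate(link_reservations):
        if is_free(booked, start, end):
            booked.append((start, end))
            return w
    return None

link = [[(0, 10)], [(5, 15)]]               # two wavelengths, existing bookings
print(first_fit_wavelength(link, 10, 20))   # 0: wavelength 0 is free from t=10
```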
APA, Harvard, Vancouver, ISO, and other styles
45

McKinney, Steven. "Insider Threat: User Identification Via Process Profiling." NCSU, 2008. http://www.lib.ncsu.edu/theses/available/etd-05092008-154325/.

Full text
Abstract:
The issue of insider threat is one that organizations have dealt with for many years. Insider threat research began in the early 1980s but has yet to provide satisfactory results, despite the fact that insiders pose a greater threat to organizations than external attackers. One of the key issues is that the amount of collectable data is enormous, and it is currently impossible to analyze all of it, for each insider, in a timely manner. The purpose of this research is to analyze a portion of this collectable data, process usage, and determine whether it is useful in identifying insiders. Identification of the person controlling the workstation is useful in environments where workstations are left unattended, even for a short amount of time. To do this, we developed an insider threat detection system based on the Naive Bayes method, which examines process usage data and creates individual profiles for users. By comparing collected data to these profiles we are able to determine who is controlling the workstation with high accuracy. We are able to achieve true positive rates of 96% while maintaining fewer than 0.5% false positives.
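A minimal sketch of the profiling idea, assuming a bag-of-processes model with Laplace smoothing and a uniform prior (details that are illustrative rather than taken from the thesis): each user's history of process usage defines a Naive Bayes profile, and a new observation window is attributed to the most likely user.

```python
import math
from collections import Counter, defaultdict

class ProcessProfiler:
    def __init__(self):
        self.counts = defaultdict(Counter)   # user -> process -> count
        self.totals = Counter()              # user -> total observations

    def train(self, user, processes):
        self.counts[user].update(processes)
        self.totals[user] += len(processes)

    def identify(self, processes):
        """Return the user whose profile gives the observed processes the
        highest log-likelihood (uniform prior, Laplace smoothing)."""
        vocab = {p for c in self.counts.values() for p in c}
        best_user, best_score = None, -math.inf
        for user in self.counts:
            denom = self.totals[user] + len(vocab)
            score = sum(
                math.log((self.counts[user][p] + 1) / denom) for p in processes
            )
            if score > best_score:
                best_user, best_score = user, score
        return best_user

profiler = ProcessProfiler()
profiler.train("alice", ["matlab.exe", "chrome.exe", "matlab.exe"])
profiler.train("bob", ["excel.exe", "outlook.exe", "chrome.exe"])
print(profiler.identify(["matlab.exe", "chrome.exe"]))   # alice
```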
APA, Harvard, Vancouver, ISO, and other styles
46

Jun, Jangeun. "Capacity Estimation of Wireless Mesh Networks." NCSU, 2002. http://www.lib.ncsu.edu/theses/available/etd-11062002-163505/.

Full text
Abstract:
The goal of this research is to estimate the capacity of wireless mesh networks (WMNs). WMNs have unique topology and traffic patterns when compared to conventional wireless Internet access networks. In WMNs, user nodes act as a host and a router simultaneously and form a meshed topology. Traffic is forwarded towards a gateway connected to the Internet by cooperating user nodes in a multihop fashion. Since the considered WMNs use IEEE 802.11 for medium access control and physical layer implementation, theoretical maximum throughput and fairness issues in IEEE 802.11 networks are investigated as a preliminary framework for the capacity estimation of WMN. Due to a centralized traffic pattern and meshed topology, forwarded traffic becomes heavier as it gets closer to the gateway. The characteristics of the traffic behavior in WMNs are thoroughly examined and an analytical solution for capacity estimation is presented. The analytical solution is derived for various topologies and validated using simulations.
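A back-of-the-envelope calculation, not the analytical model of the thesis, conveys why per-node capacity in such gateway-centred WMNs degrades with the number of users: all traffic must cross the collision domain around the gateway, so the effective MAC-layer throughput there is shared by everyone.

```python
def per_node_capacity(mac_throughput_mbps, num_users):
    """Rough upper bound on fair per-user throughput when all traffic
    traverses the bottleneck collision domain at the gateway."""
    return mac_throughput_mbps / num_users

# e.g. ~6 Mb/s of effective 802.11b MAC throughput shared by 30 mesh users
print(round(per_node_capacity(6.0, 30), 2), "Mb/s per user")   # 0.2 Mb/s
```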
APA, Harvard, Vancouver, ISO, and other styles
47

Iyer, Vijay R. "A Simulation Study of Wavelength Assignment and Reservation Policies with Signaling Delays." NCSU, 2002. http://www.lib.ncsu.edu/theses/available/etd-11072002-192327/.

Full text
Abstract:
This thesis studies the effect of non-negligible signaling delays on the performance of wavelength-assignment heuristics, wavelength reservation schemes, routing schemes, holding time (average 1/μ) of the lightpaths, and traffic loads (average λ/μ) in second-generation optical wide area networks (WANs). A network simulator was developed in C++ for this study. The simulator supports any input topology with single- or multi-fiber links, several routing schemes (static, alternate and dynamic), and dynamic traffic loads, and it may be modified easily to accommodate different wavelength-assignment policies. The signaling messages used in our study to establish lightpaths follow the Constrained-Routing Label Distribution Protocol (CR-LDP) semantics. The problem studied here falls under the general category of the Routing and Wavelength Assignment (RWA) problem, which has been proved to be NP-hard. Previous studies have mostly considered static routing (with static or dynamic traffic demand) and static traffic demand (with static or alternate routing) under zero propagation delays. A few papers in the recent past have studied the effect of signaling delays but have been limited in scope. We study the effect of varying holding times, compare random versus first-fit wavelength assignment policies, compare fixed versus alternate routing, compare backward wavelength reservation schemes to forward reservation schemes, and lastly study the effect of traffic loads. We find that, in general, the random wavelength assignment policy performs better than the first-fit policy and that, under certain conditions, the alternate routing scheme performs worse than the fixed routing scheme.
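The two wavelength-assignment policies being compared can be sketched in a few lines (an illustration under the wavelength-continuity assumption; the data structures and numbers below are hypothetical, not the thesis simulator):

```python
import random

def free_wavelengths(route, usage, num_wavelengths):
    """A wavelength is usable only if it is free on every link of the route
    (the wavelength-continuity constraint)."""
    return [w for w in range(num_wavelengths)
            if all(w not in usage[link] for link in route)]

def assign(route, usage, num_wavelengths, policy="first_fit"):
    """First-fit picks the lowest-indexed free wavelength; random picks
    uniformly among the free ones. Returns None if the call is blocked."""
    candidates = free_wavelengths(route, usage, num_wavelengths)
    if not candidates:
        return None
    return candidates[0] if policy == "first_fit" else random.choice(candidates)

# Two links, 4 wavelengths; wavelength 0 busy on link "a-b".
usage = {"a-b": {0}, "b-c": set()}
print(assign(["a-b", "b-c"], usage, 4, policy="first_fit"))   # 1
```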
APA, Harvard, Vancouver, ISO, and other styles
48

Nguyen, Ngoc Tan. "A Security Monitoring Plane for Information Centric Networking : application to Named Data Networking." Thesis, Troyes, 2018. http://www.theses.fr/2018TROY0020.

Full text
Abstract:
The current architecture of the Internet was designed to connect remote hosts, but the evolution of its usage, which now resembles that of a global platform for content distribution, undermines its original communication model. In order to bring the Internet's architecture into line with its use, new content-oriented network architectures have been proposed, and these are now ready to be deployed. The issues of their management, deployment, and security now arise as essential challenges for Internet operators to address. In this thesis, we propose a security monitoring plane for Named Data Networking (NDN), the most advanced of these architectures and one that also benefits from a functional implementation. In this context, we have characterized the most important NDN attacks, the Interest Flooding Attack (IFA) and the Content Poisoning Attack (CPA), under real deployment conditions. These results led to the development of attack detection solutions based on micro-detectors that leverage hypothesis testing theory. The approach allows the design of an optimal (AUMP) test capable of providing a desired false alarm probability (PFA) while maximizing the detection power. We have integrated these micro-detectors into a security monitoring plane that detects abnormal changes and correlates them through a Bayesian network, which can identify events impacting security in an NDN node. This proposal has been validated by simulation and experimentation on IFA and CPA attacks.
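As a simplified illustration only, and not the optimal AUMP detector developed in the thesis, an IFA micro-detector can be thought of as a test on the fraction of Interests that expire unsatisfied in a monitoring window, raising an alarm when that fraction deviates too far from its learned baseline:

```python
def ifa_alarm(unsatisfied, total, baseline_ratio, tolerance=3.0):
    """Flag an Interest Flooding alarm for one monitoring window.

    `baseline_ratio` is the normal fraction of Interests that expire
    unsatisfied; `tolerance` scales the binomial standard deviation so the
    false-alarm probability can be tuned."""
    if total == 0:
        return False
    expected = baseline_ratio * total
    std = (total * baseline_ratio * (1 - baseline_ratio)) ** 0.5
    return (unsatisfied - expected) > tolerance * max(std, 1.0)

# Normally ~2% of Interests expire; 400 of 1000 expiring is clearly anomalous.
print(ifa_alarm(400, 1000, baseline_ratio=0.02))   # True
```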
APA, Harvard, Vancouver, ISO, and other styles
49

Svantesson, Björn. "Software Defined Networking : Virtual Router Performance." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-13417.

Full text
Abstract:
Virtualization is becoming more and more popular, since the hardware available today often has the ability to run more than just a single machine. The hardware is too powerful in relation to the requirements of the software that is supposed to run on it, making it inefficient to run too little software on overly powerful machines. With virtualization, a lot of different software can run on the same hardware, thereby increasing the efficiency of hardware usage. Virtualization does not stop at operating systems or commodity software; it can also be used to virtualize networking components. These networking components include everything from routers to switches and can be set up on any kind of virtualized system. When discussing virtualization of networking components, the expression "Software Defined Networking" is hard to miss. Software Defined Networking is a term that covers all of these virtualized networking components and is the expression that should be used when researching further into this subject. Interest in these virtualized networking components has increased in recent years, because company networks have become much more complex than they were a few years back: more services need to run inside the network, and many people believe that Software Defined Networking can help in this regard. This thesis aims to find out what kind of differences there are between multiple software routers, for example which router offers the highest network speed for the least hardware cost. It also looks at several other aspects of the routers' performance relative to one another in order to establish whether any kind of "best" router exists in different areas. The idea is to build a virtualized network that resembles how a typical network looks in smaller companies today. This network is then used for different types of testing, with the software-based router placed in the middle and taking care of routing between different local virtual networks. All of the routers are placed on the same server, their configuration is very basic, and each router gets access to the same amount of hardware. After initial testing, routers that perform badly are excluded from additional testing, to avoid unnecessary testing of routers that cannot keep up with the others. The results of these tests are compared to the results of a hardware router subjected to the same kind of tests in the same position. The results were fairly surprising: only a single router was eliminated early on, while the remaining ones continued to compete with one another in further tests. When these results were compared to those of the hardware router, the outcome was also quite surprising, with much better performance in many areas from the software routers' perspective.
APA, Harvard, Vancouver, ISO, and other styles
50

Beckler, Kendra K. "Improved caching strategies for publish/subscribe internet networking." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/100334.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February 2015.
Cataloged from PDF version of thesis. "September 2014."
Includes bibliographical references (pages 70-73).
The systemic structure of TCP/IP is outdated; a new scheme for data transportation is needed in order to make the internet more adaptive to modern demands of mobility, information-driven demand, an ever-increasing quantity of users and data, and performance requirements. While an information-centric networking system addresses these issues, one component required for publish/subscribe or content-addressed internet networking systems to work properly is an improved caching system. This allows publish/subscribe internet networking to dynamically route packets to mobile users, as an improvement over pure hierarchical or pure distributed caching systems. To this end, I proposed, implemented, and analyzed the workings of a superdomain caching system. The superdomain caching system is a hybrid of hierarchical and dynamic caching systems designed to retain the benefits of caching for mobile users (who may move between neighboring domains in the midst of a network transaction) while minimizing the latency inherent in any distributed caching system, thereby improving upon the content-addressed system.
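A hedged sketch of the lookup order implied by such a superdomain cache (the names and the population policy below are assumptions for illustration): a request is served from the local domain cache when possible, then from a shared superdomain cache covering neighbouring domains, and only then from the origin publisher.

```python
def fetch(content_id, local_cache, superdomain_cache, origin):
    """Resolve a content request through the two cache tiers, falling back
    to the origin publisher and populating the caches on the way back."""
    if content_id in local_cache:
        return local_cache[content_id], "local"
    if content_id in superdomain_cache:
        # populate the local cache so a mobile user's next request is faster
        local_cache[content_id] = superdomain_cache[content_id]
        return local_cache[content_id], "superdomain"
    data = origin(content_id)
    superdomain_cache[content_id] = data
    local_cache[content_id] = data
    return data, "origin"

local, superd = {}, {"video/42": b"cached-bytes"}
print(fetch("video/42", local, superd, origin=lambda cid: b"payload")[1])  # superdomain
```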
by Kendra K. Beckler.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
