Dissertations / Theses on the topic 'Proxy Caching'

Consult the top 26 dissertations / theses for your research on the topic 'Proxy Caching.'


1

Bouazizi, Imed. "Proxy caching for robust video delivery over lossy networks." 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=972714316.

2

Logren, Dély Tobias. "Caching HTTP : A comparative study of caching reverse proxies Varnish and Nginx." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-9679.

Abstract:
With the number of users on the web steadily increasing, websites must at times endure heavy loads and risk grinding to a halt beneath the flood of visitors. One solution to this problem is HTTP reverse proxy caching, which acts as an intermediary between the web application and the user. Content from the application is stored and passed on, avoiding the need for the application to produce it anew for every request. One popular application designed solely for this task is Varnish; another interesting application for the task is Nginx, which is primarily designed as a web server. This thesis compares the performance of the two applications in terms of the number of requests served in relation to response time, as well as system load and free memory. With both applications using their default configuration, the experiments find that Nginx performs better in the majority of tests performed. The difference is, however, very slight in tests with low request rates.
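For readers who want to reproduce this kind of comparison on their own setup, a minimal latency probe is enough to collect requests-versus-response-time numbers. The Python sketch below is illustrative only and is not the instrumentation used in the thesis; the proxy address (localhost:8080) and request count are assumptions.

```python
# Minimal response-time probe for a caching reverse proxy (a sketch, not the
# thesis's actual test rig). The proxy URL and request count are assumptions.
import time
import urllib.request

PROXY_FRONTEND = "http://localhost:8080/"   # hypothetical Varnish or Nginx frontend
REQUESTS = 200

def measure(url, n):
    """Return a sorted list of per-request latencies in milliseconds."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()                      # drain the body so timing covers the full transfer
        latencies.append((time.perf_counter() - start) * 1000.0)
    return sorted(latencies)

if __name__ == "__main__":
    lat = measure(PROXY_FRONTEND, REQUESTS)
    print(f"median {lat[len(lat)//2]:.1f} ms, "
          f"95th percentile {lat[int(len(lat)*0.95)]:.1f} ms")
```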
3

Doswell, Felicia. "Improving Network Performance and Document Dissemination by Enhancing Cache Consistency on the Web Using Proxy and Server Negotiation." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/28735.

Abstract:
Use of proxy caches in the World Wide Web is beneficial to the end user, network administrator, and server administrator, since it reduces the amount of redundant traffic that circulates through the network. In addition, end users get quicker access to documents that are cached. However, the use of proxies introduces additional issues that need to be addressed. In particular, there is growing concern over how to maintain cache consistency and coherency among cached versions of documents. The existing consistency protocols used in the Web are proving to be insufficient to meet the growing needs of the Internet population. For example, too many messages sent over the network are due to caches guessing when their copy is inconsistent. One option is to apply the cache coherence strategies already in use for many other distributed systems, such as parallel computers. However, these methods are not satisfactory for the World Wide Web due to its larger size and more diverse access patterns. Many decisions must be made when exploring World Wide Web coherency, such as whether to provide consistency at the proxy level (client pull) or to allow the server to handle it (server push). What trade-offs are inherent in each of these decisions? The relevant usage of any method strongly depends upon the conditions of the network (e.g., document types that are frequently requested or the state of the network load) and the resources available (e.g., disk space and type of cache available). Version 1.1 of HTTP is the first protocol version to give explicit rules for consistency on the Web. Many proposed algorithms require changes to HTTP/1.1; however, this is not necessary to provide a suitable solution. One goal of this dissertation is to study the characteristics of document retrieval and modification to determine their effect on proposed consistency mechanisms. A set of effective consistency policies is identified from the investigation. The main objective of this dissertation is to use these findings to design and implement a consistency algorithm that provides improved performance over the current mechanisms proposed in the literature. Optimistically, we want an algorithm that provides strong consistency; however, we do not want to further degrade the network or cause undue burden on the server to gain this advantage. We propose a system based on the notion of soft state and on server push. In this system, the proxy has some influence on what state information is maintained at the server (spatial consideration) as well as how long the information is maintained (temporal consideration). We perform a benchmark study of the performance of the new algorithm in comparison with existing proposed algorithms. Our results show that the Synchronous Nodes for Consistency (SINC) framework provides an average of 20% control-message savings by limiting how much polling occurs with the current Web cache consistency mechanism, Adaptive Client Polling. In addition, the algorithm shows 30% savings on state-space overhead at the server by limiting the amount of per-proxy and per-document state information required at the server.
Ph.D. dissertation.
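The core idea of a soft-state, server-push consistency scheme can be sketched in a few lines: the server remembers, per proxy and per document, a lease that ages out on its own (the temporal consideration) and pushes invalidations only to proxies with live leases (the spatial consideration). The Python toy below illustrates that general idea under an assumed lease length; it is not the SINC algorithm itself.

```python
# A toy soft-state, server-push invalidation model (an illustration of the
# general idea, not the SINC algorithm itself). The lease length is an assumption.
import time

LEASE_SECONDS = 60.0

class OriginServer:
    def __init__(self):
        # (proxy_id, url) -> lease expiry time: the per-proxy, per-document soft state
        self.leases = {}

    def register_interest(self, proxy_id, url):
        """A proxy asks the server to remember it for this document (spatial state)."""
        self.leases[(proxy_id, url)] = time.time() + LEASE_SECONDS

    def document_modified(self, url, notify):
        """Push invalidations only to proxies holding an unexpired lease."""
        now = time.time()
        for (proxy_id, leased_url), expiry in list(self.leases.items()):
            if leased_url != url:
                continue
            if expiry < now:
                del self.leases[(proxy_id, leased_url)]   # soft state silently ages out
            else:
                notify(proxy_id, url)

if __name__ == "__main__":
    server = OriginServer()
    server.register_interest("proxy-A", "/index.html")
    server.document_modified("/index.html",
                             lambda p, u: print(f"invalidate {u} at {p}"))
```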
4

Neves, Bruno Silveira. "Proposta de algoritmo de cacheamento para proxies VoD e sua avaliação usando um novo conjunto de métricas." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/119427.

Abstract:
Today, Video on Demand (VoD) is a digital service on the rise that requires a significant amount of resources for its implementation. To reduce the costs of running this service, one commonly used alternative is to employ proxies that cache the most important portions of the collection in order to meet the demand for this content in place of the primary server of the VoD system. In this context, to improve the efficiency of the proxy, we propose a novel caching algorithm that exploits the positions of the active clients to determine the density of clients inside a time window in front of each video chunk. By caching the video chunks with the greatest density in front of them, the algorithm is able to achieve high performance, in terms of the hit ratio for the requests received by the proxy, during periods of high workload. To evaluate this approach, the new algorithm was compared with others of a similar nature, using both traditional metrics, such as hit rate, and physical metrics, such as the use of processing resources. The results show that the new algorithm better exploits the processing bandwidth available in the underlying architecture of the proxy to obtain a higher hit rate than the other algorithms used in the comparative analysis. Finally, to provide the necessary tools for this analysis, we produced another important contribution in this work: the implementation of a VoD proxy simulator that, to the best of our knowledge, is the first to enable evaluation of the hardware used to implement this application.
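The density-driven selection described above can be sketched directly: count how many active clients sit inside a time window "in front of" each chunk and keep the densest chunks. The Python sketch below uses assumed chunk, window, and cache sizes and is only an illustration of the idea, not the thesis's algorithm.

```python
# A sketch of density-driven chunk selection: count how many active clients sit
# inside a time window "in front of" each chunk and cache the densest chunks.
# Window length, chunk length, and cache size are illustrative assumptions.
CHUNK_SECONDS = 10          # duration covered by one chunk
WINDOW_CHUNKS = 6           # how far ahead of a chunk we look for approaching clients
CACHE_CAPACITY = 4          # number of chunks the proxy can hold

def chunk_densities(client_positions_s, total_chunks):
    """client_positions_s: current playback offsets (seconds) of active clients."""
    density = [0] * total_chunks
    for pos in client_positions_s:
        client_chunk = int(pos // CHUNK_SECONDS)
        # every chunk within WINDOW_CHUNKS ahead of this client gains one "approaching" client
        for c in range(client_chunk + 1,
                       min(client_chunk + 1 + WINDOW_CHUNKS, total_chunks)):
            density[c] += 1
    return density

def chunks_to_cache(client_positions_s, total_chunks):
    density = chunk_densities(client_positions_s, total_chunks)
    ranked = sorted(range(total_chunks), key=lambda c: density[c], reverse=True)
    return sorted(ranked[:CACHE_CAPACITY])

if __name__ == "__main__":
    print(chunks_to_cache([5, 12, 14, 95], total_chunks=12))
```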
5

Liu, Binzhang M. S. "Characterizing Web Response Time." Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36741.

Abstract:
It is critical to understand WWW latency in order to design better HTTP protocols. In this study we characterize Web response time and examine the effects of proxy caching, network bandwidth, traffic load, persistent connections for a page, and periodicity. Based on studies with four workloads, we show that at least a quarter of the total elapsed time is spent on establishing TCP connections with HTTP/1.0. The distributions of connection time and elapsed time can be modeled using Pearson, Weibull, or Log-logistic distributions. We also characterize the effect of a user's network bandwidth on response time: the average connection time from a client via a 33.6 Kbps modem is twice as long as that from a client via switched Ethernet. We estimate the elapsed-time savings from using persistent connections for a page to vary from about a quarter to a half. Response times display strong daily and weekly patterns. This study finds that a proxy caching server is sensitive to traffic loads. Contrary to the typical thinking about Web proxy caching, this study also finds that a single stand-alone Squid proxy cache does not always reduce response time for our workloads. Implications of these results for future versions of the HTTP protocol and for Web application design are also discussed.
Master of Science thesis.
6

Wu, Haoming. "Least relative benefit algorithm for caching continuous media data at the Web proxy." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ59413.pdf.

7

Abdulla, Ghaleb. "Analysis and Modeling of World Wide Web Traffic." Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30470.

Abstract:
This dissertation deals with monitoring, collecting, analyzing, and modeling of World Wide Web (WWW) traffic and client interactions. The rapid growth of WWW usage has not been accompanied by an overall understanding of models of information resources and their deployment strategies. Consequently, the current Web architecture often faces performance and reliability problems. Scalability, latency, bandwidth, and disconnected operations are some of the important issues that should be considered when attempting to adjust for the growth in Web usage. The WWW Consortium launched an effort to design a new protocol that will be able to support future demands. Before doing that, however, we need to characterize current users' interactions with the WWW and understand how it is being used. We focus on proxies since they provide a good medium for caching, filtering information, payment methods, and copyright management. We collected proxy data from our environment over a period of more than two years. We also collected data from other sources such as schools, information service providers, and commercial sites. Sampling times range from days to years. We analyzed the collected data looking for important characteristics that can help in designing a better HTTP protocol. We developed a modeling approach that considers Web traffic characteristics such as self-similarity and long-range dependency. We developed an algorithm to characterize users' sessions. Finally, we developed a high-level Web traffic model suitable for sensitivity analysis. As a result of this work, we develop statistical models of parameters such as arrival times, file sizes, file types, and locality of reference. We describe an approach to model long-range dependent Web traffic and we characterize activities of users accessing a digital library courseware server or Web search tools. Temporal and spatial locality of reference within the examined user communities is high, so caching can be an effective tool to help reduce network traffic and to help solve the scalability problem. We recommend utilizing our findings to promote a smart distribution or push model to cache documents when there is a likelihood of repeat accesses.
Ph.D. dissertation.
8

Holl, David J. "A performance analysis of TCP and STP implementations and proposals for new QoS classes for TCP/IP." Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0501103-111419.

Abstract:
Thesis (M.S.), Worcester Polytechnic Institute.
Keywords: TCP; RED; satellite; PEP; STP; performance enhancing proxy; segment caching; IP-ABR; Internet; bandwidth reservation; IP-VBR; congestion avoidance; bandwidth sharing. Includes bibliographical references (p. 98-99).
9

Chen-Lung, Chan. "Proxy Caching Strategies for Streaming Video Applications." 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0016-1303200709261075.

10

吳楊盛. "Design of Caching Proxy with QoS Enhancement." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/69716789851811708206.

Abstract:
Master's thesis (academic year 90), Chung Hua University, Department of Computer Science and Information Engineering.
As Internet technology develops and the number of Internet users grows, the WWW has become one of the most heavily used services on the network. In order to reduce duplicated transmission of information over the Internet, most web users retrieve web pages through a regional caching proxy server, so the caching server becomes an entry point to the WWW. However, existing caching proxy servers are not QoS-aware and offer only a typical best-effort delivery service; no differentiated service is supported. Therefore, how to use the limited bandwidth more effectively and provide differentiated service levels for various requirements is a critical issue. In this thesis, we develop a new QoS-enhanced framework for caching proxy servers. The proposed framework aims to give more control over how a proxy server manages its bandwidth resources. Protection against overuse is enforced to address the overload problem, and a resource management mechanism is developed to allocate bandwidth dynamically. Differentiated service can then be established to provide better service to critical users, improving the overall performance of the caching proxy as perceived by users.
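The differentiated-service part of such a framework amounts to dividing the proxy's outgoing bandwidth among service classes by weight while capping each class at its demand. The following Python sketch shows one way to do that (a water-filling style allocation); the class names, weights, and demands are assumptions, and it is not the framework implemented in the thesis.

```python
# A sketch of differentiated bandwidth allocation in a caching proxy: each
# service class gets a share proportional to its weight, capped by its demand,
# with leftover capacity redistributed. Class names and weights are assumptions.
def allocate_bandwidth(total_kbps, classes):
    """classes: {name: (weight, demand_kbps)} -> {name: allocated_kbps}"""
    alloc = {name: 0.0 for name in classes}
    active = dict(classes)
    remaining = float(total_kbps)
    # keep splitting leftover capacity by weight until no class exceeds its demand
    while active and remaining > 1e-9:
        weight_sum = sum(w for w, _ in active.values())
        share = {n: remaining * w / weight_sum for n, (w, _) in active.items()}
        capped = {n for n, (w, d) in active.items() if alloc[n] + share[n] >= d}
        if not capped:
            for n in active:
                alloc[n] += share[n]
            break
        for n in capped:
            remaining -= active[n][1] - alloc[n]   # give capped classes exactly their demand
            alloc[n] = active[n][1]
            del active[n]
    return alloc

if __name__ == "__main__":
    print(allocate_bandwidth(10_000, {
        "premium": (3, 6_000),   # weight 3, wants 6 Mbps
        "standard": (1, 5_000),  # weight 1, wants 5 Mbps
    }))
```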
11

Chan, Chen-Lung, and 詹振隆. "Proxy Caching Strategies for Streaming Video Applications." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/98103837133672870062.

Abstract:
Ph.D. dissertation (academic year 94), National Tsing Hua University, Department of Computer Science.
Proxy caching strategies, especially prefix caching and interval caching, are commonly used in video-on-demand (VOD) systems to improve both system performance and the playback experience of users. However, because these caching strategies are designed for homogeneous clients, they do not perform well in the real world, where clients are heterogeneous (i.e., have different available network bandwidths and different sizes of client-side buffers). This thesis investigates the problems caused by heterogeneous client-side buffers and proposes two mechanisms for performance optimization. The first mechanism is a caching strategy for minimizing the input bandwidth of an individual proxy while serving heterogeneous clients. We analyze the theoretical performance of prefix caching and interval caching, and then derive cost functions to formulate the corresponding performance gains. Based on these analytical results, we propose a hybrid caching strategy that employs both prefix caching and interval caching to minimize the input bandwidth of a proxy. An optimal cache allocation algorithm is also presented to determine the best ratio of prefix caches to interval caches in a proxy. The second mechanism coordinates cooperating proxies to construct an overlay multicast infrastructure, called Buffer-Assisted On-Demand Multicast (BAODM). In BAODM, the receivers in a multicast group can access the multicast stream asynchronously, so the server load of a VOD system can be further reduced. We prove that determining an optimal routing path and the corresponding buffer allocations for each request over general graph networks is NP-complete. In addition, we propose an optimal routing algorithm for fully connected overlay networks and a heuristic routing algorithm for general graph networks. The simulation results show that both mechanisms can significantly reduce the server load and the network bandwidth consumption of a proxy-assisted VOD system.
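The prefix/interval distinction at the heart of the hybrid strategy can be illustrated with a simple classifier: a request arriving close enough behind an ongoing stream of the same video is an interval hit, otherwise the cached prefix masks start-up delay while the suffix comes from the server. The sketch below uses assumed prefix and interval-buffer lengths and does not implement the thesis's optimal cache allocation algorithm.

```python
# A sketch of the prefix/interval distinction (not the thesis's allocation
# algorithm): a new request is served from an interval cache when it arrives
# close enough behind an ongoing stream of the same video; otherwise the cached
# prefix hides start-up latency while the suffix is fetched from the server.
INTERVAL_BUFFER_S = 30      # assumed per-stream interval buffer, in seconds of video
PREFIX_S = 15               # assumed cached prefix length, in seconds of video

def classify_request(video_id, arrival_s, active_streams):
    """active_streams: {video_id: [start times of ongoing playbacks (seconds)]}"""
    for start in active_streams.get(video_id, []):
        gap = arrival_s - start
        if 0 < gap <= INTERVAL_BUFFER_S:
            # the preceding stream's tail is still buffered: piggyback on it
            return "interval-hit (no extra server bandwidth)"
    return f"prefix-hit for first {PREFIX_S}s, suffix streamed from server"

if __name__ == "__main__":
    streams = {"movie-1": [100.0]}
    print(classify_request("movie-1", 120.0, streams))   # within the interval window
    print(classify_request("movie-1", 200.0, streams))   # too far behind: prefix + server
```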
12

Chen, Kuei-Hui, and 陳桂慧. "Partial Caching Replacement Policies for Web Proxy." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/95988729777781586978.

Abstract:
Master's thesis (academic year 88), Yuan Ze University, Institute of Computer Science and Engineering.
The performance of information access is crucial to the success of the Web, and the Web proxy server plays an important role in improving performance and quality of service. Access latency can be reduced if users obtain objects from nearby proxy servers; in addition, the load on Web servers and network traffic can thus be reduced. From our studies, we find that the cache replacement algorithm is a key issue in Web proxy server design: it decides which object will be evicted from the cache to make enough space for a new object, and its design influences the re-usability of the cached information. In this thesis, two novel cache replacement policies are proposed: Partial Caching LRU (PC-LRU) and Partial Replacement LRU-THOLD (PR-LRU-THOLD). Trace-driven simulations are also performed. The experimental results show that our schemes improve the cache hit rate, the byte hit rate, and the access latency, while the complexity of the schemes is near O(1) on average. Compared with LRU, PC-LRU improves the hit rate by 26%, the byte hit rate by 32%, and the reduced-latency rate by 50% in the respective best cases, while PR-LRU-THOLD improves the hit rate by 12%, the byte hit rate by 18%, and the reduced-latency rate by 19% in the best case. Compared with LRU-THOLD, PC-LRU improves the hit rate by 17%, the byte hit rate by 114%, and the reduced-latency rate by 48% in the best case, while PR-LRU-THOLD improves the hit rate by 12%, the byte hit rate by 30%, and the reduced-latency rate by 20% in the best case. We conclude that partial-caching replacement policies indeed improve Web proxy performance. Furthermore, the concept behind our schemes can potentially be applied to other categories of replacement algorithms.
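The flavour of partial-caching replacement can be conveyed with a small LRU variant that first shrinks the least-recently-used object to a retained head before evicting it entirely. The Python sketch below is inspired by, but not identical to, PC-LRU; the 25% retention ratio is an assumption, and each URL is assumed to be inserted only once.

```python
# A sketch of partial-caching LRU eviction (inspired by, but not identical to,
# the thesis's PC-LRU): instead of evicting the least-recently-used object
# outright, shrink it to a retained head fraction first. The 25% retention
# ratio is an illustrative assumption.
from collections import OrderedDict

RETAIN_FRACTION = 0.25

class PartialLRUCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.objects = OrderedDict()   # url -> cached size in bytes (LRU order)

    def _make_room(self, needed):
        while self.used + needed > self.capacity and self.objects:
            url, size = next(iter(self.objects.items()))   # least recently used
            kept = int(size * RETAIN_FRACTION)
            if kept > 0 and size > kept:
                self.objects[url] = kept        # keep only the object's head
                self.used -= size - kept
                # keep the shrunk object at the LRU end so it goes next if needed
                self.objects.move_to_end(url, last=False)
            else:
                del self.objects[url]           # too small to shrink further: evict it entirely
                self.used -= size

    def insert(self, url, size):
        """Assumes url is not already cached."""
        self._make_room(size)
        self.objects[url] = size
        self.used += size

if __name__ == "__main__":
    cache = PartialLRUCache(capacity_bytes=100)
    cache.insert("/a", 60)
    cache.insert("/b", 50)       # forces /a to shrink to its 15-byte head
    print(dict(cache.objects), cache.used)
```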
13

Gao, Mulong. "Proxy-based adaptive push-pull web caching." Thesis, 2004. http://hdl.handle.net/2429/15503.

Abstract:
Although the volume of Web traffic on the Internet is staggering, a large percentage of the traffic is redundant: multiple users at any given site request much of the same content. This means that a significant percentage of the WAN infrastructure carries identical content (and identical requests for it) day after day. Web caching performs local storage of Web content to serve these redundant user requests more quickly, without sending the requests and the resulting content over the WAN. There are two major categories of Web caching mechanisms: client pull and origin-server or parent-proxy push. This thesis investigates a proxy-based, adaptive push-pull mechanism to enhance the user experience by selecting some of the most frequently accessed objects to push to a proxy, instead of letting the proxy request them later. Web servers or parent proxies collect object access frequencies for specific proxies rather than on a global scope, and adopt a push policy to distribute hot objects to cooperating proxies; other objects are pulled by the proxies as usual. As time passes, frequently requested objects may become cold in one region and hot in another; Web servers and caching proxies can thus learn the change pattern and adjust their distribution policy accordingly, avoiding pushing objects to proxies that may not request them in the near future. By using the adaptive push-pull distribution mechanism, the most frequently updated and accessed objects, which form the major chunk of Internet traffic, are pushed to proxies, saving many If-Modified-Since GET requests. Hence, Web traffic is more productive and user-perceived latency is reduced.
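The push/pull decision itself is simple to sketch: the origin (or parent proxy) counts per-proxy accesses and, when an object changes, pushes the new version only where the object has recently been hot. The Python toy below illustrates that decision under an assumed hotness threshold; it is not the thesis's exact policy.

```python
# A sketch of the push/pull decision (an illustration of the idea, not the
# thesis's exact policy): the origin keeps per-proxy access counts and, when an
# object changes, pushes the new version only to proxies where the object has
# been "hot" recently; everyone else pulls it on demand. The threshold is an
# assumption.
from collections import defaultdict

HOT_THRESHOLD = 10     # accesses per accounting period that make an object push-worthy

class PushPullOrigin:
    def __init__(self):
        # per-proxy, per-object access counts for the current period
        self.counts = defaultdict(lambda: defaultdict(int))

    def record_request(self, proxy_id, url):
        self.counts[proxy_id][url] += 1

    def on_object_update(self, url, push):
        """Push the fresh object to proxies where it is hot; others will pull."""
        for proxy_id, per_object in self.counts.items():
            if per_object.get(url, 0) >= HOT_THRESHOLD:
                push(proxy_id, url)

    def end_period(self):
        """Objects go cold again unless re-requested: popularity adapts over time."""
        self.counts.clear()

if __name__ == "__main__":
    origin = PushPullOrigin()
    for _ in range(12):
        origin.record_request("proxy-east", "/news.html")
    origin.record_request("proxy-west", "/news.html")
    origin.on_object_update("/news.html",
                            lambda p, u: print(f"push {u} to {p}"))
```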
14

Pang-Shin, Shih, and 施邦欣. "Performance Study and Implementation for Segment-based Proxy Caching." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/13150478082921236777.

Abstract:
Master's thesis (academic year 90), National Chiao Tung University, Department of Communication Engineering.
Proxy partial caching, e.g., segment-based proxy caching or proxy prefix caching, partitions an object into non-overlapping pieces. The proxy cache then treats each piece as an individual file when performing caching and replacement. Upon receipt of a request for the object, the proxy cache delivers the cached portion to the client immediately to mask the start-up delay. In this thesis, we examine the viability of applying this technique to a web proxy cache. First, we propose an architecture for caching multimedia streams via HTTP. Within this architecture, the start-up delay of video playout is easily masked, interactive VCR functions can be realized, and caching multimedia streams in the web proxy cache is greatly simplified. Second, proxy partial caching introduces the dirty-first-segment problem: the cached portion in the proxy cache may be inconsistent with the original on the server. When this happens, the cached portion cannot be concatenated with the later portion from the server; the user receives the cached portion first and then perceives an undesirable situation, such as a discontinuity in video playout. To relieve this problem, we propose a novel algorithm called validating upon partial replacement, which synchronizes the operations of proactive consistency validation, partitioning, and replacement performed on an object. We show that the proposed algorithm offers a controllable chance of disturbing users, can be embedded into existing web proxy cache software with minimal modification, and significantly improves cache performance in terms of start-up delay reduction for multimedia stream accesses.
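The "validate when you partially replace" idea can be illustrated by coupling the eviction of tail segments with a freshness check on the retained head, so a stale first segment is never served. In the Python sketch below, a version tag stands in for whatever validator the origin provides; this is an illustration of the idea, not the thesis's implementation.

```python
# A sketch of validating upon partial replacement: when a proxy evicts the tail
# segments of a cached stream, that is a natural moment to check whether the
# retained head is still consistent with the origin, so the first segment
# handed to the next client is never stale. The version-tag check stands in
# for a real validator such as ETag or Last-Modified.
def partially_replace(cached, origin_version, segments_to_drop):
    """cached: {'version': str, 'segments': [segment payloads]}"""
    # 1. replacement: drop tail segments to free space
    if segments_to_drop > 0:
        cached["segments"] = cached["segments"][:-segments_to_drop]
    # 2. validation, piggybacked on the same event
    if cached["version"] != origin_version:
        cached["segments"] = []            # head is dirty: discard rather than serve stale data
        cached["version"] = origin_version
    return cached

if __name__ == "__main__":
    entry = {"version": "v1", "segments": [b"seg0", b"seg1", b"seg2", b"seg3"]}
    print(partially_replace(entry, origin_version="v1", segments_to_drop=2))
    print(partially_replace(entry, origin_version="v2", segments_to_drop=1))
```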
15

Tseng, Jann-Perng, and 曾展鵬. "The Study of Proxy Caching Server and Deployment Experience." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/83315063934657572792.

Abstract:
Ph.D. dissertation (academic year 90), Tatung University, Institute of Computer Science and Engineering.
The explosive growth of the web has the consequence that users perceive ever greater latency while waiting for web pages to be retrieved. In recent years, web proxy servers have been widely deployed to improve this situation; they have proved to be an effective way to reduce the bandwidth needed, alleviate latency, and reduce server load, and they have become a popular research topic. In this dissertation, we explore the proxy caching server and study its related applications and deployment experiences. Besides the background theory of web proxy servers, we focus on three areas of interest: the implementation and evaluation of the performance impact of proxy-server-based information filtering, the design and implementation of visual assistance to simplify proxy server management, and the deployment experience and survey of cooperative proxy servers. In Chapter 4, we survey the available public-domain software for proxy-server-based information filtering. For performance reasons, we choose Squirm and SquidGuard, which are implemented in the C programming language. We tailored the source code and carried out both standalone and operational tests of each package. The results show that the response time of SquidGuard remains nearly constant as the number of entries in the URL block-list increases, so this software can be applied to simplify maintenance work in environments where information filtering is necessary. Managing a proxy server is complicated if it is expected not only to keep working but also to perform well. In Chapter 5, we design and implement three visual interfaces, with different features and capabilities, to facilitate proxy server management. To support this visual assistance, we integrate MRTG [3] with these interfaces. The three interfaces are an SNMP agent, interception of the run-time internal objects, and instantaneous parsing of the access log file. Each approach has its own limitations and suitability for specific circumstances or objectives. This assistance hides the difficult and complicated details, greatly improves maintenance work, and relieves the management workload. In Chapter 6, we survey cooperative proxy servers. The objective of this chapter is twofold: to report the experience of deploying cooperative proxy servers on TANet, and to survey current state-of-the-art cooperative proxy caching schemes proposed in the literature or in proprietary products. An analytical model is proposed and verified against the real traffic and packet flows of a regional-network proxy server. The analysis of total packet gain shows that the increased load on the border router makes ICP ineffective; this should be taken into consideration when deploying cooperative proxy servers. Recent years have seen rapidly growing demand for audio and video streaming over the Internet. Although web caching gives much hope for improving user latency, a number of ongoing issues remain to be studied further; the feasibility of handling dynamic and real-time data (video or streaming) is an exciting challenge addressed by both researchers and vendors.
16

Chang, Chun-Cheng, and 江家宏. "An Adaptive Suffix-Window Caching Scheme for CM Proxy Server." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/08209779807930718092.

Abstract:
Master's thesis (academic year 92), National Pingtung University of Science and Technology, Department of Management Information Systems.
A proxy server is usually deployed to serve a set of clients accessing remote information from the Internet. This thesis focuses on designing such a proxy server for accessing continuous media, such as video streams. In principle, two kinds of segments are stored in the proxy: (1) the prefix, the beginning segment of a video, and (2) the suffix window, the video segment just transmitted. Different videos usually show different popularity, and that popularity may change with time. Ideally, the system should allow more popular videos to gain more proxy storage than less popular ones. For this purpose, we propose an adaptive suffix-window scheme that self-tunes the storage allocation for suffix windows in response to changes in video popularity in order to gain optimal performance. The simulation results reveal that our scheme can quickly adapt to video popularity and significantly improve system performance, particularly when the client buffer is very limited.
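The adaptive part of the scheme, redistributing suffix-window storage as popularity shifts, can be sketched as a periodic re-tuning step driven by recent request counts. The Python sketch below uses an illustrative space budget and request log and does not reproduce the thesis's exact allocation rule.

```python
# A sketch of popularity-driven storage re-tuning (the adaptive flavour of the
# scheme, not its exact allocation formula): each video keeps its prefix, and
# the remaining proxy space for suffix windows is redistributed in proportion
# to recent request counts. Sizes and the request log are illustrative.
def retune_suffix_windows(total_suffix_space_mb, recent_requests):
    """recent_requests: {video_id: number of requests in the last period}"""
    total = sum(recent_requests.values())
    if total == 0:
        even = total_suffix_space_mb / max(len(recent_requests), 1)
        return {v: even for v in recent_requests}
    return {v: total_suffix_space_mb * n / total
            for v, n in recent_requests.items()}

if __name__ == "__main__":
    # popularity shifts between periods, and the allocation follows it
    print(retune_suffix_windows(900, {"A": 60, "B": 30, "C": 10}))
    print(retune_suffix_windows(900, {"A": 10, "B": 30, "C": 60}))
```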
17

學冠昇. "A packet-based caching proxy with loss recovery for video streaming." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/76650271199556739092.

18

Ou, Hui-Ling, and 區慧齡. "A Caching Mechanism of Proxy Server Using Weighted Multi-Attribute Settings." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/33423795409892240484.

Abstract:
Master's thesis (academic year 91), National Chiao Tung University, Institute of Information Management.
The World Wide Web (WWW) has grown rapidly in recent years, and network communication has become a part of daily life for many people. Although the Internet population increases year by year, Internet resources and bandwidth do not grow correspondingly, which leads to serious network congestion. Therefore, how to improve the efficiency of data access and reduce query response time has become a critical issue. Caching at proxy servers is one method of increasing transmission efficiency. To improve the efficiency of the proxy server, we need to replace rarely used objects so that there is enough space to store frequently accessed objects. This thesis proposes a replacement policy for the proxy server caching mechanism that uses weighted multi-attribute settings. Observing the relation between the various attributes and the records in the proxy server log files, we group related attribute values to generate high-support rules using association-rule mining. We then set weights according to the finishing time of the transaction, the size of the file, and the depth of the browsing path. In addition, user behaviors (searching, roaming among different web sites, and surfing within a single web site) are also considered in calculating the web weight that determines replacement priority. The proxy server can therefore replace an object simply according to its weight, reducing waiting time. The simulation results show that the proposed method increases the hit rate of the proxy server while decreasing traffic on the Internet.
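A weight-based replacement policy of this kind boils down to scoring each cached object from its attributes and evicting the lowest score. The Python sketch below combines recency, size, and browsing-path depth with illustrative weights; the actual rules in the thesis are mined from log files, so the formula here is only an assumption.

```python
# A sketch of weight-based replacement: each cached object carries a score
# combining several attributes, and the lowest-scoring object is evicted first.
# The attribute set mirrors the thesis (recency, size, browsing depth), but the
# particular weighting below is an illustrative assumption.
import time

W_RECENCY, W_SIZE, W_DEPTH = 0.5, 0.3, 0.2

def replacement_weight(obj, now=None):
    """obj: dict with 'finished_at' (epoch s), 'size_bytes', 'path_depth'."""
    now = now or time.time()
    recency = 1.0 / (1.0 + (now - obj["finished_at"]) / 3600.0)  # newer -> closer to 1
    smallness = 1.0 / (1.0 + obj["size_bytes"] / 1_000_000.0)    # smaller -> closer to 1
    shallowness = 1.0 / (1.0 + obj["path_depth"])                # shallower -> closer to 1
    return W_RECENCY * recency + W_SIZE * smallness + W_DEPTH * shallowness

def pick_victim(cache, now=None):
    """Return the URL of the lowest-weight object, i.e. the replacement victim."""
    return min(cache, key=lambda url: replacement_weight(cache[url], now))

if __name__ == "__main__":
    now = time.time()
    cache = {
        "/deep/old/big": {"finished_at": now - 7200, "size_bytes": 5_000_000, "path_depth": 4},
        "/fresh/small":  {"finished_at": now - 60,   "size_bytes": 20_000,    "path_depth": 1},
    }
    print("evict:", pick_victim(cache, now))
```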
19

Lin, Chin, and 林青. "Proxy Caching Strategies for Delivery of Continuous Media in a Heterogeneous Environment." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/75552534681839170936.

Abstract:
Master's thesis (academic year 91), National Pingtung University of Science and Technology, Department of Management Information Systems.
The large storage and high bandwidth requirements of video impose a grand challenge on large-scale video distribution over the Internet. Wide-area networks (WANs) normally provide much lower bandwidth and longer response times than local-area networks (LANs), and the network contains heterogeneous devices with different buffering capabilities. Designing a video-streaming service that adapts itself to such environments is a critical task. Normally, a proxy server is deployed in the LAN as an intermediary between the remote server and the clients to reduce WAN traffic and speed up responses. Based on such a three-tier architecture, this thesis proposes a novel proxy-caching strategy termed suffix-window caching. In principle, the proxy server caches two kinds of video segments: an initial video segment (the prefix) and a middle video segment, the one most recently received from the remote server (the suffix window). A sub-batching scheme for serving video streams is presented; it exploits LAN bandwidth to improve the system acceptance rate. The key idea is to utilize the suffix window to create new video channels to serve clients who would otherwise be rejected for lack of client buffer. Through simulation, we compare the performance of our scheme with that of several previous schemes in which only prefix caching was assumed. The important finding is that, with the assistance of the suffix window, the rejection rate can be reduced significantly, particularly under tight client buffer constraints, while incurring only a moderate increase in LAN bandwidth.
20

Lan, Chen-Shang, and 藍振商. "The Study of a Caching Proxy Server with Real-Time Upgrade Ability." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/89743466354833409121.

Abstract:
Master's thesis (academic year 97), Da-Yeh University, Department of Information Management.
As client equipment and bandwidth are upgraded, web services must provision matching server capacity to deliver a corresponding level of service. The goal of this study is to decrease server load and reduce the response time of web page services. This research builds a caching proxy server to reduce the heavy load that database access and data operations place on a typical dynamic website, and to increase the cache server's hit rate by processing content in advance according to user behavior. To address the common problem on dynamic websites of serving stale data, a web page change notification function is introduced. By combining advance processing based on user behavior with web page change notification, the system ensures that the displayed data is current, reducing user waiting time, decreasing CPU load, increasing the hit rate, and improving information consistency.
21

Tu, Wei. "Proxy-based video transmission: error resiliency, resource allocation, and dynamic caching." 2009. http://d-nb.info/993283950/34.

22

Bouazizi, Imed. "Proxy caching for robust video delivery over lossy networks." 2004. http://d-nb.info/972714316/34.

23

Ravindranath, C. K. "A Peer To Peer Web Proxy Cache For Enterprise Networks." Thesis, 2007. http://hdl.handle.net/2005/612.

Abstract:
In this thesis, we propose a decentralized peer-to-peer (P2P) Web proxy cache for enterprise networks (ENs). Currently, enterprises use a centralized proxy-based Web cache, where a dedicated proxy server does the caching. A dedicated proxy Web cache has to be over-provisioned to handle peak loads; it is expensive, a single point of failure, and a bottleneck. In a P2P Web cache, the clients themselves cooperate in caching the Web objects without any dedicated proxy cache. The resources of the client machines are pooled together to form a Web cache. This eliminates the need for extra hardware and the single point of failure, and improves the average response time, since all the machines serve the request queue. The most important attraction of the P2P scheme is its inherent scalability. Squirrel was the earliest P2P Web cache. Squirrel is built upon a structured P2P protocol called Pastry. Pastry is based on consistent hashing, a special form of hashing that performs well in the presence of client membership changes. Consistent-hashing-based protocols are designed for Internet-wide environments, to handle very large membership sizes and high rates of membership change. To minimize the protocol bandwidth, the membership state maintained at each peer is very small; it consists of information about the peer's immediate neighbours and about a few other P2P members, to achieve faster lookup. This scheme has the following drawbacks: (i) since peers do not maintain information about all the other peers in the system, any peer needing an object has to find the peer responsible for the object through a multi-hop lookup, thereby increasing the latency, and (ii) the number of objIds assigned to a peer depends on the hashing used and can be skewed, which affects the load distribution. The popular applications of the P2P paradigm have been file-sharing systems deployed across the Internet; hence, existing P2P protocols were designed to operate within the constraints of Internet environments. The P2P proxy Web cache is a more recent application of the P2P paradigm. P2P Web proxy caches operate across the entire network of an enterprise. An enterprise network (EN) comprises all the computing and communications capabilities of an institution. Institutions typically consist of many departments, with each department having and managing its own local area network (LAN). The available bandwidth in LANs is very high, and LANs have low latency and low error rates. EN environments have smaller membership sizes, less frequent membership changes, and more available bandwidth; in such environments, a P2P protocol can afford to store more membership information. This thesis explores the significant differences between EN and Internet environments. It proposes a new P2P protocol designed to exploit these differences, and a P2P Web proxy caching scheme based on this new protocol. Specifically, it shows that it is possible to maintain complete and consistent membership information in ENs. The thesis then presents a load distribution policy for a P2P system with complete and consistent membership information that achieves (i) load balance and (ii) a minimum number of object migrations after each node join or node leave event. The proposed system incurs extra storage and bandwidth costs; however, the necessary storage is available on typical workstations and the required bandwidth is feasible in modern networks.
We then evaluated the improvement in performance achieved by the system over existing consistent-hashing-based systems. We show that, without investing in any special hardware, the P2P system can match the performance of dedicated proxy caches. We further show that the buddy-based P2P scheme has a better load distribution, especially under heavy loads when load balancing becomes critical, and that for large P2P systems the buddy-based scheme has lower latency than consistent-hashing-based schemes. We also compare the costs of the proposed scheme and the existing consistent-hashing-based scheme for different loads (i.e., rates of Web object requests), and identify the situations in which the proposed scheme is likely to perform best. In summary, the thesis shows that (i) the membership dynamics of P2P systems on ENs are different from those of Internet file-sharing systems and (ii) it is feasible, in ENs, to maintain a complete and consistent view of the P2P membership at all the peers. We have designed a structured P2P protocol for LANs that maintains such a complete and consistent view of membership information at all peers. P2P Web caches achieve single-hop routing and a better-balanced load distribution using this scheme; the complete and consistent view of membership information enables single-hop lookup and flexible load assignment.
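The single-hop lookup enabled by complete membership knowledge can be sketched in a few lines: since every peer holds the same member list, any peer can map an object key to its owner locally, with no multi-hop routing. The Python sketch below uses a plain hash-modulo mapping for illustration; it does not model the thesis's buddy-based load distribution or its migration-minimizing behaviour.

```python
# A sketch of single-hop lookup under complete membership knowledge: because
# every peer holds the same sorted member list, any peer can map an object key
# to its owner locally, with no multi-hop routing. The hash and the member list
# are illustrative assumptions.
import hashlib

def owner_of(object_key, members):
    """members: list of peer ids, identical and identically ordered at every peer."""
    digest = hashlib.sha1(object_key.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(members)
    return sorted(members)[index]

if __name__ == "__main__":
    peers = ["ws-01", "ws-02", "ws-03", "ws-04"]
    for url in ["http://example.com/a", "http://example.com/b"]:
        print(url, "->", owner_of(url, peers))
```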
24

"A cooperative and incentive-based proxy-and-client caching system for on-demand media streaming." 2005. http://library.cuhk.edu.hk/record=b5892574.

Abstract:
Ip Tak Shun. Thesis (M.Phil.), Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 95-101); abstracts in English and Chinese. The abstract field of this record contains only the table of contents, which covers: an introduction to media streaming and incentive mechanisms; the cooperative proxy-and-client caching (COPACC) system, including the optimal cache allocation (CAP) problem, the cooperative proxy-client caching protocol, and its performance evaluation; a revenue-rewarding mechanism formulated as resource allocation games (profit-maximizing and utility-maximizing), with its own performance evaluation; and appendices on the NP-hardness of the CAP problem and the optimality of a greedy algorithm.
25

Chang, Kun-Yuan, and 張坤源. "An Effective Proxy Caching and Rate Control Mechanism for Transporting Multimedia Streams Over The Internet." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/18753771939193710502.

Abstract:
Master's thesis (academic year 90), National Chiao Tung University, Department of Computer Science and Information Engineering.
Advances in the Internet and related services have changed people's lives, and multimedia-on-demand applications have grown dramatically. However, this kind of service may become an Internet bottleneck due to the bandwidth requirements and storage constraints of multimedia data. To address this problem, we propose a streaming proxy architecture in this thesis. We place a streaming proxy close to the clients; the proxy forwards and caches frequently requested, popular video programs according to the viewing behavior of clients. The cache hit rate should be high in order to reduce both the network load and the client response time. To achieve this goal, we store a portion of each video program according to its popularity. Additionally, we use a Fibonacci function to divide the video data into variable-sized segments, which speeds up data allocation and cache replacement. For the transmission of video data in each session, we also propose a new transmission rate control mechanism based on an exponential function, which achieves higher bandwidth utilization and a lower data loss rate. Our preliminary analysis and simulation-based performance evaluations show that our techniques compare favorably with other similar methods; detailed information is given in the thesis.
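The Fibonacci segmentation idea can be sketched directly: segment sizes grow along the Fibonacci sequence, so the beginning of a video is split finely while later portions form large segments. The Python sketch below assumes a base segment length and leaves the rate-control mechanism out.

```python
# A sketch of Fibonacci-style segmentation: segment sizes grow following the
# Fibonacci sequence, so the early portion of a video is split finely (cheap to
# cache and quick to serve) while later portions form large, coarse segments.
# The base segment size is an assumption; the thesis's rate control is not modelled.
BASE_SEGMENT_S = 10      # seconds of video covered by the first segment

def fibonacci_segments(video_length_s):
    """Return a list of (start_second, duration_seconds) segments covering the video."""
    segments, start = [], 0
    a, b = 1, 1
    while start < video_length_s:
        duration = min(a * BASE_SEGMENT_S, video_length_s - start)
        segments.append((start, duration))
        start += duration
        a, b = b, a + b
    return segments

if __name__ == "__main__":
    for seg in fibonacci_segments(600):    # a 10-minute video
        print(seg)
```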
26

Suresha. "Caching Techniques For Dynamic Web Servers." Thesis, 2006. http://hdl.handle.net/2005/438.

Abstract:
Websites are shifting from static model to dynamic model, in order to deliver their users with dynamic, interactive, and personalized experiences. However, dynamic content generation comes at a cost – each request requires computation as well as communication across multiple components within the website and across the Internet. In fact, dynamic pages are constructed on the fly, on demand. Due to their construction overheads and non-cacheability, dynamic pages result in substantially increased user response times, server load and increased bandwidth consumption, as compared to static pages. With the exponential growth of Internet traffic and with websites becoming increasingly complex, performance and scalability have become major bottlenecks for dynamic websites. A variety of strategies have been proposed to address these issues. Many of these solutions perform well in their individual contexts, but have not been analyzed in an integrated fashion. In our work, we have carried out a study of combining a carefully chosen set of these approaches and analyzed their behavior. Specifically, we consider solutions based on the recently-proposed fragment caching technique, since it ensures both correctness and freshness of page contents. We have developed mechanisms for reducing bandwidth consumption and dynamic page construction overheads by integrating fragment caching with various techniques such as proxy-based caching of dynamic contents, pre-generating pages, and caching program code. We start with presenting a dynamic proxy caching technique that combines the benefits of both proxy-based and server-side caching approaches, without suffering from their individual limitations. This technique concentrates on reducing the bandwidth consumption due to dynamic web pages. Then, we move on to presenting mechanisms for reducing dynamic page construction times -- during normal loading, this is done through a hybrid technique of fragment caching and page pre-generation, utilizing the excess capacity with which web servers are typically provisioned to handle peak loads. During peak loading, this is achieved by integrating fragment-caching and code-caching, optionally augmented with page pre-generation. In summary, we present a variety of methods for integrating existing solutions for serving dynamic web pages with the goal of achieving reduced bandwidth consumption from the web infrastructure perspective, and reduced page construction times from user perspective.
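The fragment-caching technique around which the thesis's combinations are built can be sketched as a page assembled from independently cached pieces, where only expired fragments are regenerated. The Python sketch below uses hypothetical fragment names, TTLs, and generators; it illustrates plain fragment caching, not the integrated schemes studied in the thesis.

```python
# A sketch of fragment caching for dynamic pages: the page is assembled from
# independently cached fragments, and only fragments whose TTL has expired are
# regenerated. Fragment names, TTLs, and generators are illustrative assumptions.
import time

class FragmentCache:
    def __init__(self):
        self.store = {}           # name -> (html, expiry time)

    def get(self, name, ttl_s, generate):
        html, expiry = self.store.get(name, (None, 0.0))
        if time.time() >= expiry:                 # stale or missing: rebuild just this piece
            html = generate()
            self.store[name] = (html, time.time() + ttl_s)
        return html

def render_page(cache):
    header = cache.get("header", ttl_s=3600, generate=lambda: "<header>Site</header>")
    news = cache.get("news", ttl_s=60,
                     generate=lambda: f"<ul><li>item @ {time.time():.0f}</li></ul>")
    return f"<html>{header}{news}</html>"

if __name__ == "__main__":
    cache = FragmentCache()
    print(render_page(cache))
    print(render_page(cache))     # second render reuses both cached fragments
```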