Dissertations / Theses on the topic 'Caching'
Consult the top 50 dissertations / theses for your research on the topic 'Caching.'
Miller, Jason Eric 1976. "Software instruction caching." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40317.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 185-193).
As microprocessor complexities and costs skyrocket, designers are looking for ways to simplify their designs to reduce costs, improve energy efficiency, or squeeze more computational elements on each chip. This is particularly true for the embedded domain where cost and energy consumption are paramount. Software instruction caches have the potential to provide the required performance while using simpler, more efficient hardware. A software cache consists of a simple array memory (such as a scratchpad) and a software system that is capable of automatically managing that memory as a cache. Software caches have several advantages over traditional hardware caches. Without complex cache-management logic, the processor hardware is cheaper and easier to design, verify and manufacture. The reduced access energy of simple memories can result in a net energy savings if management overhead is kept low. Software caches can also be customized to each individual program's needs, improving performance or eliminating unpredictable timing for real-time embedded applications. The greatest challenge for a software cache is providing good performance using general-purpose instructions for cache management rather than specially-designed hardware. This thesis designs and implements a working system (Flexicache) on an actual embedded processor and uses it to investigate the strengths and weaknesses of software instruction caches. Although both data and instruction caches can be implemented in software, very different techniques are used to optimize performance; this work focuses exclusively on software instruction caches. The Flexicache system consists of two software components: a static off-line preprocessor to add caching to an application and a dynamic runtime system to manage memory during execution. Key interfaces and optimizations are identified and characterized. The system is evaluated in detail from the standpoints of both performance and energy consumption. The results indicate that software instruction caches can perform comparably to hardware caches in embedded processors. On most benchmarks, the overhead relative to a hardware cache is less than 12% and can be as low as 2.4%. At the same time, the software cache uses up to 6% less energy. This is achieved using a simple, directly-addressed memory and without requiring any complex, specialized hardware structures.
by Jason Eric Miller.
Ph.D.
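To make the mechanism described in the abstract above concrete, here is a minimal Python model of a directly-addressed software instruction cache: code is split into fixed-size blocks, and a tiny runtime loads blocks into scratchpad slots on demand. This is an illustrative sketch only, not Flexicache's code; the block size, slot count, and names are invented.

    BLOCK_SIZE = 64   # instructions per cache block (assumed)
    NUM_SLOTS = 8     # scratchpad capacity in blocks (assumed)

    class SoftwareICache:
        """Tiny runtime that manages a scratchpad as a direct-mapped I-cache."""

        def __init__(self):
            self.slots = [None] * NUM_SLOTS   # block id resident in each slot
            self.hits = self.misses = 0

        def fetch(self, pc):
            """Return the slot holding the block for address pc, loading it on a miss."""
            block = pc // BLOCK_SIZE          # which code block pc falls into
            slot = block % NUM_SLOTS          # direct-mapped placement
            if self.slots[slot] == block:
                self.hits += 1                # block already resident
            else:
                self.misses += 1              # evict the old block, load the new one
                self.slots[slot] = block      # stands in for a copy into the scratchpad
            return slot

    cache = SoftwareICache()
    for pc in [0, 4, 68, 0, 512, 4]:          # a toy instruction-address trace
        cache.fetch(pc)
    print(cache.hits, cache.misses)           # prints: 2 4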
Gu, Wenzheng. "Ubiquitous Web caching." [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0002406.
Logren Dély, Tobias. "Caching HTTP: A comparative study of caching reverse proxies Varnish and Nginx." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-9679.
Caheny, Paul. "Runtime-assisted coherent caching." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/670564.
In the mid-2000s, computer architecture underwent a fundamental shift as techniques such as frequency scaling and instruction-level parallelism stopped delivering significant improvements. Since then, performance gains have come from exploiting parallelism by increasing the number of cores per processor, which has exacerbated the pre-existing memory-wall problem. In response, ever more complex cache and memory hierarchies have been developed, while preserving the shared-memory paradigm from the software's point of view. As a consequence of this trend towards greater parallelism and heterogeneity, the importance of the memory hierarchy to overall system performance has kept growing. Another consequence of the growth in core counts since the mid-2000s is a deterioration in programmability. Among the most important advances in programming models are task-based parallel programming models, which simplify programming for the user and offer a level of abstraction that allows their runtime libraries to optimize parallel execution for the hardware on which applications run. The goal of this thesis is to exploit the information available to the runtimes of task-based parallel programming models in order to optimize memory hierarchies, in a hardware/software co-design approach. The first contribution of this thesis studies the ability of task-based runtimes to restrict data transfers in a real large-scale shared-memory system. It characterizes, directly and in detail, the runtime's capacity to minimize data traffic in the hardware. The analysis shows that runtimes can maximize locality between tasks and the data they use, minimizing cache-coherence traffic on the interconnection network. The second and third contributions propose hardware/software co-designs to improve the efficiency of cache hierarchies. Both exploit the information available in task-based runtimes, communicate it to the hardware, and let the hardware use it to improve energy consumption and performance in the cache hierarchy. The second contribution targets the scalability of cache-coherence protocols, a fundamental problem in architectures with large core counts. It demonstrates the benefits of co-designing the runtime and the hardware, drastically reducing the pressure on the coherence directory, one of the main obstacles to scaling coherence protocols. The third contribution optimizes shared caches with non-uniform access time (NUCA) to increase their effectiveness against the memory wall. NUCA caches keep growing in size and importance, since they are the last line of defense before costly memory accesses.
This contribution shows that a co-design of the runtime and the NUCA cache can improve cache management and reduce the cost of memory transfers on the interconnection network. Together, the three contributions demonstrate the potential of task-based programming-model runtimes to optimize key aspects of the memory hierarchy and improve scalability.
Irwin, James Patrick John. "Systems with predictable caching." Thesis, University of Bristol, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288213.
Kimbrel, Tracy. "Parallel prefetching and caching." Thesis, University of Washington, 1997. http://hdl.handle.net/1773/6943.
Sarkar, Prasenjit 1970. "Hint-based cooperative caching." Diss., The University of Arizona, 1998. http://hdl.handle.net/10150/288892.
Recayte, Estefania 1988. "Caching in Heterogeneous Networks." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amsdottorato.unibo.it/8974/1/0_Thesis.pdf.
Ou, Yi. "Caching for flash-based databases and flash-based caching for databases." München : Verlag Dr. Hut, 2012. http://d-nb.info/1028784120/34.
Full textPohl, Christoph. "Adaptive Caching of Distributed Components." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2005. http://nbn-resolving.de/urn:nbn:de:swb:14-1117701363347-79965.
Locality of reference is an important property of distributed applications. Caching is typically employed during the development of such applications to exploit this property by locally storing queried data: subsequent accesses can be accelerated by serving their results immediately from the local store. Current middleware architectures, however, hardly support this non-functional aspect. The thesis at hand thus tries to outsource caching as a separate, configurable middleware service. Integration into the software development lifecycle provides for early capturing, modeling, and later reuse of caching-related metadata. At runtime, the implemented system can adapt to caching access characteristics with respect to data cacheability properties, thus healing misconfigurations and optimizing itself to an appropriate configuration. Speculative prefetching of data probably queried in the immediate future complements the presented approach.
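A minimal Python sketch of the core idea above, a middleware-level cache that serves repeated queries from a local store and is invalidated on updates, follows; the class and function names are invented for illustration, and the sketch omits the adaptive and prefetching machinery the thesis adds.

    class CachingProxy:
        def __init__(self, remote_call):
            self.remote_call = remote_call    # function performing the remote query
            self.store = {}                   # local result store

        def query(self, key):
            if key not in self.store:         # first access goes to the remote side
                self.store[key] = self.remote_call(key)
            return self.store[key]            # subsequent accesses are served locally

        def invalidate(self, key):
            self.store.pop(key, None)         # drop stale entries on updates

    proxy = CachingProxy(lambda k: f"result-for-{k}")   # stand-in remote component
    proxy.query("a"); proxy.query("a")                  # second call served locally
    proxy.invalidate("a")                               # an update evicts the entry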
Liu, Wei. "Distributed Collaborative Caching for WWW." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0014/MQ53180.pdf.
Dev, Kashinath. "Concurrency control in distributed caching." NCSU, 2005. http://www.lib.ncsu.edu/theses/available/etd-10112005-172329/.
Fineman, Jeremy T. "Algorithms incorporating concurrency and caching." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/55110.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 189-203).
This thesis describes provably good algorithms for modern large-scale computer systems, including today's multicores. Designing efficient algorithms for these systems involves overcoming many challenges, including concurrency (dealing with parallel accesses to the same data) and caching (achieving good memory performance). This thesis includes two parallel algorithms that focus on testing for atomicity violations in a parallel fork-join program. These algorithms augment a parallel program with a data structure that answers queries about the program's structure, on the fly. Specifically, one data structure, called SP-ordered-bags, maintains the series-parallel relationships among threads, which is vital for uncovering race conditions (bugs) in the program. Another data structure, called XConflict, aids in detecting conflicts in a transactional-memory system with nested parallel transactions. For a program with work T1 and span T∞, maintaining either data structure adds an overhead of O(P·T∞) to the running time of the parallel program when executed on P processors using an efficient scheduler, yielding a total runtime of O(T1/P + P·T∞). For each of these data structures, queries can be answered in O(1) time. This thesis also introduces the compressed sparse blocks (CSB) storage format for sparse matrices, which allows both Ax and Aᵀx to be computed efficiently in parallel, where A is an n × n sparse matrix with nnz ≥ n nonzeros and x is a dense n-vector. The parallel multiplication algorithm uses Θ(nnz) work and ... span, yielding a parallelism of ..., which is amply high for virtually any large matrix.
(cont.) Also addressing concurrency, this thesis considers two scheduling problems. The first scheduling problem, motivated by transactional memory, considers randomized backoff when jobs have different lengths. I give an analysis showing that binary exponential backoff achieves makespan V·2^Θ(√lg n) with high probability, where V is the total length of all n contending jobs. This bound is significantly larger than when jobs are all the same size. A variant of exponential backoff, however, achieves makespan of ... with high probability. I also present the size-hashed backoff protocol, specifically designed for jobs having different lengths, that achieves makespan ... with high probability. The second scheduling problem considers scheduling n unit-length jobs on m unrelated machines, where each job may fail probabilistically. Specifically, an input consists of a set of n jobs, a directed acyclic graph G describing the precedence constraints among jobs, and a failure probability q_ij for each job j and machine i. The goal is to find a schedule that minimizes the expected makespan. I give an O(log log(min{m, n}))-approximation for the case of independent jobs (when there are no precedence constraints) and an O(log(n + m) log log(min{m, n}))-approximation algorithm when precedence constraints form disjoint chains. This chain algorithm can be extended into one that supports precedence constraints that are trees, which worsens the approximation by another log(n) factor. To address caching, this thesis includes several new variants of cache-oblivious dynamic dictionaries.
(cont.) A cache-oblivious dictionary fills the same niche as a classic B-tree, but it does so without tuning for particular memory parameters. Thus, cache-oblivious dictionaries optimize for all levels of a multilevel hierarchy and are more portable than traditional B-trees. I describe how to add concurrency to several previously existing cache-oblivious dictionaries. I also describe two new data structures that achieve significantly cheaper insertions with a small overhead on searches. The cache-oblivious lookahead array (COLA) supports insertions/deletions and searches in O((1/B) log N) and O(log N) memory transfers, respectively, where B is the block size, M is the memory size, and N is the number of elements in the data structure. The xDict supports these operations in O((1/(εB^(1−ε))) log_B(N/M)) and O((1/ε) log_B(N/M)) memory transfers, respectively, where 0 < ε < 1 is a tunable parameter. Also on caching, this thesis answers the question: what is the worst possible page-replacement strategy? The goal of this whimsical chapter is to devise an online strategy that achieves the highest possible fraction of page faults / cache misses as compared to the worst offline strategy. I show that there is no deterministic strategy that is competitive with the worst offline. I also give a randomized strategy based on the most-recently-used heuristic and show that it is the worst possible page-replacement policy. On a more serious note, I also show that direct mapping is, in some sense, a worst possible page-replacement policy. Finally, this thesis includes a new algorithm, following a new approach, for the problem of maintaining a topological ordering of a dag as edges are dynamically inserted.
(cont.) The main result included here is an O(n² log n) algorithm for maintaining a topological ordering in the presence of up to m < n(n − 1)/2 edge insertions. In contrast, the previously best algorithm has a total running time of O(min{m^(3/2), n^(5/2)}). Although these algorithms are not parallel and do not exhibit particularly good locality, some of the data structural techniques employed in my solution are similar to others in this thesis.
by Jeremy T. Fineman.
Ph.D.
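Of the data structures in this abstract, the cache-oblivious lookahead array (COLA) is compact enough to sketch. The toy Python version below keeps level k as a sorted array of at most 2^k keys and merges a level into the next on overflow; it illustrates the structure only and omits the fractional-cascading lookahead pointers the real COLA uses to speed up searches.

    from bisect import bisect_left

    class COLA:
        """Toy cache-oblivious lookahead array: level k holds <= 2**k sorted keys."""

        def __init__(self):
            self.levels = []                  # levels[k] is a sorted list

        def insert(self, key):
            carry = [key]                     # a carry of size 2**k reaches level k
            for k in range(len(self.levels) + 1):
                if k == len(self.levels):
                    self.levels.append([])    # grow the structure when needed
                if not self.levels[k]:
                    self.levels[k] = carry    # an empty level absorbs the carry
                    return
                # full level: merge it into the carry and push down one level
                # (a real implementation would merge the two sorted runs linearly)
                carry = sorted(self.levels[k] + carry)
                self.levels[k] = []

        def contains(self, key):
            # binary-search every level; lookahead pointers would speed this up
            for level in self.levels:
                i = bisect_left(level, key)
                if i < len(level) and level[i] == key:
                    return True
            return False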
Stafford, Matthew. "Online searching and connection caching." The University of Texas at Austin, 2000. http://wwwlib.umi.com/cr/utexas/main.
Chen, Li. "Semantic caching for XML queries." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0129104-174457.
Keywords: Replacement strategy; Query rewriting; Query containment; Semantic caching; Query; XML. Includes bibliographical references (p. 210-222).
Buerli, Michael. "Radiance Caching with Environment Maps." DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/991.
Ben Mazziane, Younes. "Analyse probabiliste pour le caching." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4014.
Caches are small memories that speed up data retrieval. Caching policies may aim to choose cache content so as to minimize the latency of responding to item requests. A more general problem permits a request to be approximately answered by a similar cached item. This concept, referred to as "similarity caching," proves valuable for content-based image retrieval and recommendation systems. The objective is to further minimize latency while delivering satisfactory answers. Theoretical understanding of cache memory management algorithms under specific assumptions on the requests provides guidelines for choosing a suitable algorithm. The Least-Frequently-Used (LFU) and Least-Recently-Used (LRU) policies are popular caching eviction policies. LFU is efficient when the request process is stationary, while LRU adapts to changes in request patterns. Online learning algorithms applied to caching, such as the randomized Follow-the-Perturbed-Leader (FPL) algorithm, enjoy worst-case guarantees. Both LFU and FPL rely on items' request counts. However, counting is challenging in memory-constrained scenarios. To overcome this problem, caching policies operate with approximate counting schemes, such as the Count-Min Sketch with Conservative Updates (CMS-CU), to balance counting accuracy against memory usage. In the similarity caching setting, RND-LRU is a modified LRU in which a request is probabilistically answered by the most similar cached item. Unfortunately, a theoretical analysis of an LFU cache utilizing CMS-CU, of an FPL cache with an approximate counting algorithm, and of RND-LRU remains difficult. This thesis investigates three randomized algorithms: CMS-CU, FPL with noisy estimates of items' request counts (NFPL), and RND-LRU. For CMS-CU, we propose a novel approach to derive new upper bounds on the expected value and the complementary cumulative distribution function of the estimation error under a renewal request process. Additionally, we prove that NFPL behaves as well as the optimal omniscient static caching policy for any request sequence under specific conditions on the noisy counts. Finally, we introduce a new analytically tractable similarity caching policy and show that it can approximate RND-LRU.
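The Count-Min Sketch with Conservative Updates (CMS-CU) named in this abstract is small enough to sketch. The Python below is an illustrative implementation under common definitions of CMS-CU; the width, depth, and hashing scheme are arbitrary choices, not the thesis's. A conservative update raises only the counters currently at the minimum, which tightens the overestimate while preserving the no-underestimate guarantee.

    import random

    class CMSConservative:
        """Count-Min Sketch with Conservative Updates (illustrative parameters)."""

        def __init__(self, width=1024, depth=4, seed=0):
            rng = random.Random(seed)
            self.width = width
            self.salts = [rng.getrandbits(64) for _ in range(depth)]  # one hash per row
            self.rows = [[0] * width for _ in range(depth)]

        def _cells(self, item):
            return [(r, hash((salt, item)) % self.width)
                    for r, salt in enumerate(self.salts)]

        def update(self, item):
            cells = self._cells(item)
            est = min(self.rows[r][c] for r, c in cells)
            for r, c in cells:
                if self.rows[r][c] == est:    # conservative: raise only the minima
                    self.rows[r][c] = est + 1

        def estimate(self, item):
            # never underestimates the true count; may overestimate on collisions
            return min(self.rows[r][c] for r, c in self._cells(item))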
Herber, Robert. "Distributed Caching in a Multi-Server Environment : A study of Distributed Caching mechanisms and an evaluation of Distributed Caching Platforms available for the .NET Framework." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-10394.
Mahdavi, Mehregan. "Caching dynamic data for web applications." Thesis, University of New South Wales, Computer Science and Engineering, 2006. http://handle.unsw.edu.au/1959.4/32316.
Chupisanyarote, Sanpetch. "Content Caching in Opportunistic Wireless Networks." Thesis, KTH, Kommunikationsnät, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91875.
Gaspar, Cristian. "Variations on the Theme of Caching." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/1048.
In the first variation we define different cost models involving page sizes and page costs. We also present the cost framework introduced by Torng in [29]. Next, we analyze the competitive ratio of online deterministic marking algorithms in the BIT cost model combined with the Torng framework. We show that, given some specific restrictions on the set of possible request sequences, any marking algorithm is 2-competitive.
The second variation consists in using the relative competitiveness ratio on an access graph as a complexity measure. We use the concept of access graphs introduced by Borodin [11] to define our own concept of relative competitive ratio. We demonstrate results regarding the relative competitiveness of two cache eviction policies in both the basic and the Torng framework combined with the CLASSICAL cost model.
The third variation is caching with request reordering. Two reordering models are defined. We prove some important results about the value of a move and number of orderings, then demonstrate results about the approximation factor and competitive ratio of offline and online reordering schemes, respectively.
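For readers unfamiliar with the marking algorithms analyzed above, the toy Python routine below shows the family in the plain (CLASSICAL-style) setting, without the page sizes, costs, or reordering the thesis adds: a hit marks the page, a miss evicts some unmarked page, and a new phase begins once every cached page is marked. The eviction choice and names are illustrative.

    import random

    def marking_policy(requests, k, rng=random.Random(0)):
        """Serve requests with a k-page cache; returns the number of faults."""
        cache, marked, faults = set(), set(), 0
        for page in requests:
            if page in cache:
                marked.add(page)              # hit: mark the page
                continue
            faults += 1                       # miss
            if len(cache) == k:
                if marked == cache:           # every page marked: new phase begins
                    marked.clear()
                victim = rng.choice(sorted(cache - marked))
                cache.remove(victim)          # evict some unmarked page
            cache.add(page)
            marked.add(page)                  # a newly loaded page is marked
        return faults

    print(marking_policy([1, 2, 3, 1, 4, 1, 2], k=3))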
Arteaga Clavijo, Dulcardo Ariel. "Flash Caching for Cloud Computing Systems." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2496.
Ho, Henry, and Axel Odelberg. "Efficient caching of rich data sets." Thesis, KTH, Data- och elektroteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-145765.
The importance of a fast user experience is growing in new applications. To get more performance when users interact with resource-heavy data, it is important to implement an efficient caching method. The goal of this work is to investigate how to implement an efficient cache in an Android application. The use case is downloading metadata and images for movies from a Web API provided by June AB. To determine which caching method is most efficient, a pre-study of some of today's most common caching methods was carried out. Based on its results, two caching algorithms were selected for testing and evaluation: First-In First-Out (FIFO) and Least Recently Used (LRU). These two algorithms were implemented in an Android application. The resulting prototype has a responsive user interface and can cache large amounts of data without noticeable performance loss compared to a non-caching version. The prototype showed that LRU is the better strategy for our use case, but also revealed that the cache's buffer size has the greatest impact on performance, not the caching strategy.
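The two policies compared in this thesis differ in a single step, as the illustrative Python sketch below shows; capacity is counted in entries rather than bytes, and the FIFO variant is simplified. LRU refreshes an entry's position on a hit, while FIFO evicts strictly in insertion order.

    from collections import OrderedDict

    class Cache:
        """One buffer, two policies: lru=True refreshes recency on hits."""

        def __init__(self, capacity, lru=True):
            self.capacity, self.lru = capacity, lru
            self.buf = OrderedDict()          # oldest entry sits at the front

        def get(self, key):
            if key in self.buf:
                if self.lru:
                    self.buf.move_to_end(key) # LRU: a hit makes the entry newest
                return self.buf[key]          # FIFO ignores hits entirely
            return None

        def put(self, key, value):
            if key in self.buf:
                self.buf.move_to_end(key)     # simplification: updates refresh order
            elif len(self.buf) >= self.capacity:
                self.buf.popitem(last=False)  # evict the oldest (front) entry
            self.buf[key] = value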
Liang, Zhengang. "Transparent Web caching with load balancing." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ59383.pdf.
Full textGupta, Priya S. M. Massachusetts Institute of Technology. "Providing caching abstractions for web applications." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62453.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 99-101).
Web-based applications are used by millions of users daily, and as a result a key challenge facing web application designers is scaling their applications to handle this load. A crucial component of this challenge is scaling the data storage layer, especially for the newer class of social networking applications that have huge amounts of shared data. Caching is an important scaling technique and is a critical part of the storage layer for such high-traffic web applications. Usually, building caching mechanisms involves significant effort from the application developer to maintain and invalidate data in the cache. In this work we present CacheGenie, a system which aims to make it easy for web application developers to build caching mechanisms in their applications. It achieves this by proposing high-level caching abstractions for frequently observed query patterns in web applications. These abstractions take the form of declarative query objects, and once the developer defines them, she does not have to worry about managing the cache (i.e., insertion and deletion) or maintaining consistency (e.g., invalidation or updates) when writing application code. We designed and implemented CacheGenie in the popular Django web application framework, with PostgreSQL as the database backend and memcached as the caching layer. We use triggers inside the database to automatically invalidate or keep the cache synchronized, as desired by the developer. We have not made any modifications to PostgreSQL or memcached. To evaluate our prototype, we ported several Pinax web applications to use our caching abstractions and performed several experiments. Our results show that it takes little effort for application developers to use CacheGenie, and that caching provides a throughput improvement by a factor of 2-2.5 for read-mostly workloads.
by Priya Gupta.
S.M.
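A schematic Python sketch of the abstraction described in this abstract follows; the names and API are invented, not CacheGenie's. A declared query object caches its result and is invalidated by a write hook, which stands in for the database triggers the system uses.

    class CachedQuery:
        """Declarative cached query: recomputed lazily, invalidated on writes."""

        registry = {}                          # table name -> dependent queries

        def __init__(self, name, table, compute):
            self.name, self.compute = name, compute
            self.value, self.valid = None, False
            CachedQuery.registry.setdefault(table, []).append(self)

        def get(self):
            if not self.valid:                 # recompute only after invalidation
                self.value, self.valid = self.compute(), True
            return self.value

        @classmethod
        def on_write(cls, table):              # stands in for a database trigger
            for q in cls.registry.get(table, []):
                q.valid = False

    friends = CachedQuery("friends:42", "friendship",
                          compute=lambda: ["alice", "bob"])  # stand-in for a SQL query
    friends.get()                       # computed once, then served from the cache
    CachedQuery.on_write("friendship")  # a write invalidates dependent queries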
Ports, Dan R. K. (Dan Robert Kenneth). "Application-level caching with transactional consistency." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/75448.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 147-159).
Distributed in-memory application data caches like memcached are a popular solution for scaling database-driven web sites. These systems increase performance significantly by reducing load on both the database and application servers. Unfortunately, such caches present two challenges for application developers. First, they cannot ensure that the application sees a consistent view of the data within a transaction, violating the isolation properties of the underlying database. Second, they leave the application responsible for locating data in the cache and keeping it up to date, a frequent source of application complexity and programming errors. This thesis addresses both of these problems in a new cache called TxCache. TxCache is a transactional cache: it ensures that any data seen within a transaction, whether from the cache or the database, reflects a slightly stale but consistent snapshot of the database. TxCache also offers a simple programming model. Application developers simply designate certain functions as cacheable, and the system automatically caches their results and invalidates the cached data as the underlying database changes. Our experiments found that TxCache can substantially increase the performance of a web application: on the RUBiS benchmark, it increases throughput by up to 5.2x relative to a system without caching. More importantly, on this application, TxCache achieves performance comparable (within 5%) to that of a non-transactional cache, showing that consistency does not have to come at the price of performance.
by Dan R. K. Ports.
Ph.D.
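The programming model in this abstract can be sketched as a decorator; the toy Python below is illustrative, not TxCache's implementation, and all names are invented. Results are keyed by a database version, so a transaction pinned to one snapshot never mixes results computed at different versions (the toy's recomputation against the current state glosses over real snapshot reads).

    db_version = 0       # bumped on every committed write (assumed)
    _cache = {}          # (function name, args, version) -> result

    def cacheable(fn):
        """Mark a function cacheable; results are keyed by a database version."""
        def wrapper(*args, snapshot=None):
            v = db_version if snapshot is None else snapshot
            key = (fn.__name__, args, v)   # version-tagged cache key
            if key not in _cache:
                # a real system would evaluate fn against snapshot v;
                # this toy just computes against the current state
                _cache[key] = fn(*args)
            return _cache[key]
        return wrapper

    @cacheable
    def top_bids(item_id):
        return [("u1", 10)]  # stand-in for an expensive database query

    top_bids(7)              # computed once per (args, version)
    top_bids(7)              # served from the cache at the same version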
Johnson, Edwin N. (Edwin Neil) 1975. "A protocol for network level caching." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9860.
Includes bibliographical references (p. 44-45).
by Edwin N. Johnson.
S.B. and M.Eng.
Mertz, Jhonny Marcos Acordi. "Understanding and automating application-level caching." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/156813.
Latency and cost of Internet-based services are encouraging the use of application-level caching to continue satisfying users' demands and to improve the scalability and availability of origin servers. Application-level caching, in which developers manually control cached content, has been adopted when traditional forms of caching are insufficient to meet such requirements. Despite its popularity, this level of caching is typically addressed in an ad-hoc way, given that it depends on specific details of the application. Furthermore, it forces application developers to reason about a crosscutting concern that is unrelated to the application's business logic. As a result, application-level caching is a time-consuming and error-prone task, and a common source of bugs. This dissertation advances work on application-level caching by providing an understanding of its state of practice and by automating the decision about cacheable content, thus giving developers substantial support to design, implement and maintain application-level caching solutions. More specifically, we provide three key contributions: structured knowledge derived from a qualitative study, a survey of the state of the art in static and adaptive caching approaches, and a technique and framework that automate the challenging task of identifying caching opportunities. The qualitative study, which involved the investigation of ten web applications (open-source and commercial) with different characteristics, allowed us to determine the state of practice of application-level caching, along with practical guidance to developers in the form of patterns and guidelines. Based on these patterns and guidelines, we also propose an approach to automate the identification of cacheable methods, which is usually done manually and is not supported by existing approaches to application-level caching. We implemented a caching framework that can be seamlessly integrated into web applications to automatically identify caching opportunities and exploit them at runtime, by monitoring system execution and adaptively managing caching decisions. We evaluated our approach empirically with three open-source web applications, and the results indicate that we can identify adequate caching opportunities, improving application throughput by up to 12.16%. Furthermore, our approach can prevent code tangling and raise the abstraction level of caching.
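A simplified Python sketch of the adaptive idea in the abstract follows; the thresholds and names are invented, not the dissertation's framework. The wrapper monitors how often a method is re-invoked with inputs it has already seen and switches on memoization once repeats dominate.

    import functools

    def adaptive_cache(threshold=0.5, warmup=20):
        """Memoize a function only once repeated inputs dominate its calls."""
        def deco(fn):
            stats = {"calls": 0, "repeats": 0}
            seen, memo = set(), {}

            @functools.wraps(fn)
            def wrapper(*args):
                stats["calls"] += 1
                if args in seen:
                    stats["repeats"] += 1
                seen.add(args)
                hot = (stats["calls"] >= warmup and
                       stats["repeats"] / stats["calls"] >= threshold)
                if hot:                        # caching decision made at runtime
                    if args not in memo:
                        memo[args] = fn(*args)
                    return memo[args]
                return fn(*args)               # not (yet) judged cache-worthy
            return wrapper
        return deco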
Chiang, Cho-Yu. "On building dynamic web caching hierarchies." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488199501403111.
Arshinov, Alex. "Building high-performance web-caching servers." Thesis, De Montfort University, 2004. http://hdl.handle.net/2086/13257.
Xu, Ji. "Data caching in wireless mobile networks." Thesis, Hong Kong University of Science and Technology, 2004. http://library.ust.hk/cgi/db/thesis.pl?COMP%202004%20XU.
Full textIncludes bibliographical references (leaves 57-60). Also available in electronic version. Access restricted to campus users.
Rajani, Meena. "Application Server Caching with Freshness Guarantees." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9730.
Pervej, Md Ferdous. "Edge Caching for Small Cell Networks." DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7580.
Jurk, Steffen. "A simultaneous execution scheme for database caching." [S.l.] : [s.n.], 2005. http://se6.kobv.de:8000/btu/volltexte/2006/22.
Zou, Qing. "Transparent Web caching with minimum response time." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2002. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ65661.pdf.
Thadani, Prakash. "Local caching architecture for RFID tag resolution." Thesis, Wichita State University, 2007. http://hdl.handle.net/10057/1559.
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical and Computer Engineering.
"December 2007."
Race, Nicholas John Paul. "Support for video distribution through multimedia caching." Thesis, Lancaster University, 2000. http://eprints.lancs.ac.uk/11801/.
Larsson, Carl-Johan. "User-Based Predictive Caching of Streaming Media." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-246044.
Streaming media is a globally growing market that places strict demands on mobile networks. The basis for a good user experience when providing a streaming-media service is guaranteed access to the requested content. Because of the varying availability of mobile networks, it is important that such services eliminate this strict dependence on the mobile network connection. This master's thesis investigates the machine-learning model Long Short-Term Memory for predicting future geographic positions of a mobile device. The predicted position is then used, in combination with information about the connection speed in the predicted area, to fetch streaming media in advance, before the mobile device enters an area with a poor connection. This serves two purposes: to improve the user experience, and to reduce the amount of data stored in advance without risking interruption of the device's media stream. The evaluated model achieved a mean accuracy of 85.15% measured over 20,000 different geographic routes. Prefetching of streaming media maintained the user experience while reducing the data consumed.
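A small Python sketch of the buffering decision studied in this thesis follows; the LSTM predictor itself is out of scope here, and the function, map format, and numbers are invented. Given a predicted next map cell and its measured bandwidth, the client buffers enough media to cover the bandwidth deficit for the expected gap.

    def prefetch_seconds(predicted_cell, coverage_kbps, bitrate_kbps, gap_seconds):
        """Seconds of media to buffer before entering the predicted cell."""
        bandwidth = coverage_kbps.get(predicted_cell, 0.0)
        if bandwidth >= bitrate_kbps:
            return 0.0                         # streaming keeps up; no prefetch needed
        deficit = bitrate_kbps - bandwidth     # playback drains faster than it refills
        return gap_seconds * deficit / bitrate_kbps

    coverage = {"cell-17": 120.0}              # measured kbit/s per map cell (toy data)
    print(prefetch_seconds("cell-17", coverage, bitrate_kbps=800.0, gap_seconds=30.0))
    # prints 25.5: buffer 25.5 seconds of media before the 30-second coverage gap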
Law, Ching 1975. "A new competitive analysis of randomized caching." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80094.
Jing, Yuxin. "Evaluating caching mechanisms in future Internet architectures." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106120.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 54-57).
This thesis tests and evaluates the performance effects of in-network storage in newly proposed Internet architectures. In a world where more and more people are mobile and connected to the Internet, we look at how the added variable of user mobility can affect how these architectures perform under different loads. Evaluating the effects of in-network storage and caching in these novel architectures provides another facet to understanding how viable an alternative they would be to the current TCP/IP paradigm of today's Internet. In Named Data Networking, where the storage is used to directly cache content, we see its use of storage impact the locality of where things are, while in MobilityFirst, where storage is used to cache chunks to provide robust delivery, we look at how its different layers work together in a mobility event.
by Yuxin Jing.
M. Eng.
Håkansson, Fredrik, and Carl-Johan Larsson. "User-Based Predictive Caching of Streaming Media." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-151008.
This thesis is written as a joint thesis between two students from different universities. This means the exact same thesis is published at two universities (LiU and KTH) but with different style templates. The other report has identification number TRITA-EECS-EX-2018:403.
Sherman, Alexander 1975. "Distributed web caching system with consistent hashing." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80121.
Includes bibliographical references (p. 63-64).
by Alexander Sherman.
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Marlow, Gregory. "Week 11, Video 06: Caching The Simulation." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/digital-animation-videos-oer/75.
AlHassoun, Yousef. "Towards Efficient Coded Caching for Wireless Networks." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1575632062797432.
Mohamed, Rozlina. "Implications of query caching for JXTA peers." Thesis, Aston University, 2014. http://publications.aston.ac.uk/24370/.
Kaplan, Scott Frederick. "Compressed caching and modern virtual memory simulation." The University of Texas at Austin, 1999. http://wwwlib.umi.com/cr/utexas/main.
Loukopoulos, Athanasios. "Caching and replication schemes on the Internet." Thesis, Hong Kong University of Science and Technology, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20LOUKOP.
Full textIncludes bibliographical references (leaves 149-163). Also available in electronic version. Access restricted to campus users.
Wolman, Alastair. "Sharing and caching characteristics of Internet content." Thesis, University of Washington, 2002. http://hdl.handle.net/1773/6918.
Full textObalappa, Dinesh Tretiak Oleh J. "Optimal caching of large multi-dimensional datasets /." Philadelphia, Pa. : Drexel University, 2004. http://dspace.library.drexel.edu/handle/1860/307.