Journal articles on the topic 'Caching services'

Consult the top 50 journal articles for your research on the topic 'Caching services.'

Where available in the metadata, you can download the full text of each publication as a PDF and read its abstract online.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Premkumar, Vandana, and Vinil Bhandari. "Caching in Amazon Web Services." International Journal of Computer Trends and Technology 69, no. 4 (April 25, 2021): 1–5. http://dx.doi.org/10.14445/22312803/ijctt-v69i4p101.

2

Kim, Yunkon, and Eui-Nam Huh. "EDCrammer: An Efficient Caching Rate-Control Algorithm for Streaming Data on Resource-Limited Edge Nodes." Applied Sciences 9, no. 12 (June 23, 2019): 2560. http://dx.doi.org/10.3390/app9122560.

Abstract:
This paper explores data caching as a key factor of edge computing. State-of-the-art research on data caching on edge nodes mainly considers reactive and proactive caching, as well as machine-learning-based caching, which can be a heavy task for edge nodes. However, edge nodes usually have relatively lower computing resources than cloud datacenters, as they are geo-distributed away from the administrator. Therefore, a caching algorithm should be lightweight to save computing resources on edge nodes. In addition, the data caching should be agile because it has to support high-quality services on edge nodes. Accordingly, this paper proposes a lightweight, agile caching algorithm, EDCrammer (Efficient Data Crammer), which performs agile operations to control the caching rate for streaming data by using an enhanced PID (Proportional-Integral-Differential) controller. Experimental results using this lightweight, agile caching algorithm show its significant value in each scenario. In four common scenarios, the desired cache utilization was reached in 1.1 s on average and then maintained within a 4–7% deviation. The cache hit ratio is about 96%, and the optimal cache capacity is around 1.5 MB. Thus, EDCrammer can help distribute streaming data traffic to the edge nodes, mitigate the uplink load on the central cloud, and ultimately provide users with high-quality video services. We also hope that EDCrammer can improve overall service quality in 5G environments, Augmented Reality/Virtual Reality (AR/VR), Intelligent Transportation Systems (ITS), the Internet of Things (IoT), etc.
3

Xu, Xiaolong, Zijie Fang, Jie Zhang, Qiang He, Dongxiao Yu, Lianyong Qi, and Wanchun Dou. "Edge Content Caching with Deep Spatiotemporal Residual Network for IoV in Smart City." ACM Transactions on Sensor Networks 17, no. 3 (June 21, 2021): 1–33. http://dx.doi.org/10.1145/3447032.

Abstract:
Internet of Vehicles (IoV) enables numerous in-vehicle applications for smart cities, driving increasing service demands for processing various contents (e.g., videos). Generally, for efficient service delivery, the contents from the service providers are processed on the edge servers (ESs), as edge computing offers vehicular applications low-latency services. However, due to the reusability of the same contents required by different distributed vehicular users, processing the copies of the same contents repeatedly in an edge server leads to a waste of resources (e.g., storage, computation, and bandwidth) in ESs. Therefore, it is a challenge to provide high-quality services while guaranteeing the resource efficiency with edge content caching. To address the challenge, an edge content caching method for smart cities with service requirement prediction, named E-Cache, is proposed. First, the future service requirements from the vehicles are predicted based on the deep spatiotemporal residual network (ST-ResNet). Then, preliminary content caching schemes are elaborated based on the predicted service requirements, which are further adjusted by a many-objective optimization aiming at minimizing the execution time and the energy consumption of the vehicular services. Eventually, experimental evaluations prove the efficiency and effectiveness of E-Cache with spatiotemporal traffic trajectory big data.
4

Terry, Douglas B., and Venugopalan Ramasubramanian. "Caching XML Web Services for Mobility." Queue 1, no. 3 (May 2003): 70–78. http://dx.doi.org/10.1145/846057.864024.

5

Edris, Ed Kamya Kiyemba, Mahdi Aiash, and Jonathan Loo. "DCSS Protocol for Data Caching and Sharing Security in a 5G Network." Network 1, no. 2 (July 7, 2021): 75–94. http://dx.doi.org/10.3390/network1020006.

Abstract:
Fifth Generation mobile networks (5G) promise to make network services provided by various Service Providers (SP) such as Mobile Network Operators (MNOs) and third-party SPs accessible from anywhere by the end-users through their User Equipment (UE). These services will be pushed closer to the edge for quick, seamless, and secure access. After being granted access to a service, the end-user will be able to cache and share data with other users. However, security measures should be in place for SPs not only to secure the provisioning and access of those services but also to restrict what the end-users can do with the accessed data, whether in or out of coverage. This can be facilitated by federated service authorization and access control mechanisms that restrict the caching and sharing of data accessed by the UE in different security domains. In this paper, we propose a Data Caching and Sharing Security (DCSS) protocol that leverages federated authorization to provide secure caching and sharing of data from multiple SPs in multiple security domains. We formally verify the proposed DCSS protocol using ProVerif and applied pi-calculus. Furthermore, a comprehensive security analysis of the security properties of the proposed DCSS protocol is conducted.
6

Zhao, Hongna, Chunxi Li, Yongxiang Zhao, Baoxian Zhang, and Cheng Li. "Transcoding Based Video Caching Systems: Model and Algorithm." Wireless Communications and Mobile Computing 2018 (August 1, 2018): 1–8. http://dx.doi.org/10.1155/2018/1818690.

Abstract:
The explosive demand for online video watching brings huge bandwidth pressure to cellular networks. Efficient video caching is critical for providing high-quality streaming Video-on-Demand (VoD) services that satisfy the rapidly increasing demand for online video watching from mobile users. Traditional caching algorithms typically treat individual video files separately and tend to keep the most popular video files in cache. However, in reality, one video typically corresponds to multiple different files (versions) with different sizes and different video resolutions. Thus, caching such files for one video leads to a lot of redundancy, since one version of a video can be used to produce other versions of the video by using certain video coding techniques. Recently, fog computing has pushed computing power to the edge of the network to reduce the distance between service providers and users. In this paper, we take advantage of fog computing and deploy the cache system at the network edge. Specifically, we study transcoding-based video caching in cellular networks where cache servers are deployed at the edge of the cellular network to provide improved quality of online VoD services to mobile users. By using transcoding, a cached video can be converted to different lower-quality versions of the video as needed by different users in real time. We first formulate the transcoding-based caching problem as an integer linear programming problem. Then we propose a Transcoding based Caching Algorithm (TCA), which iteratively finds the placement leading to the maximal delay gain among all possible choices. We derive the computational complexity of TCA. Simulation results demonstrate that TCA significantly outperforms the traditional greedy caching algorithm, with a decrease of up to 40% in average delivery delay.
7

Stollberg, Michael, Jörg Hoffmann, and Dieter Fensel. "A Caching Technique for Optimizing Automated Service Discovery." International Journal of Semantic Computing 5, no. 1 (March 2011): 1–31. http://dx.doi.org/10.1142/s1793351x11001146.

Abstract:
The development of sophisticated technologies for service-oriented architectures (SOA) is a grand challenge. A promising approach is the employment of semantic technologies to better support the service usage cycle. Most existing solutions show significant deficits in the computational performance, which hampers the applicability in large-scale SOA systems. We present an optimization technique for automated service discovery — one of the central operations in semantically enabled SOA environments — that can ensure a sophisticated performance while maintaining a high retrieval accuracy. The approach is based on goals that formally describe client objectives, and it employs a caching mechanism for enhancing the computational performance of a two-phased discovery framework. At design time, the suitable services for generic and reusable goal descriptions are determined by semantic matchmaking. The result is captured in a continuously updated graph structure that organizes goals and services with respect to the requested and provided functionalities. This is exploited at runtime in order to detect the suitable services for concrete client requests with minimal effort. We formalize the approach within a first-order logic framework, and define the graph structure along with the associated storage and retrieval algorithms. An empirical evaluation shows that significant performance improvements can be achieved.
8

Li, Yan, and Zheng Wan. "Blockchain-Enabled Intelligent Video Caching and Transcoding in Clustered MEC Networks." Security and Communication Networks 2021 (September 7, 2021): 1–17. http://dx.doi.org/10.1155/2021/7443260.

Abstract:
In recent years, the number of smart devices has exploded, leading to an unprecedented increase in demand for live video and video-on-demand (VoD) services. At the same time, the privacy of video providers and requesters and the security of requested video data are increasingly threatened. In order to solve these issues, in this paper, a blockchain-enabled CMEC video transmission model (Bl-CMEC) for intelligent video caching and transcoding is proposed to ensure transaction transparency, system security, user information privacy, and integrity of the video data, enhance the ability of servers to actively cache popular video content in the CMEC system, and realize the transcoding function at network edge nodes. Furthermore, we choose a scheme based on deep reinforcement learning (DRL) to intelligently obtain the intracluster joint caching and transcoding decisions. Then, the joint video caching and transcoding decision smart contract is specially designed to automatically manage the transaction process of the joint caching and transcoding service, which records key information of joint caching and transcoding transactions and payment information on a continuous blockchain. The simulation results demonstrate that the proposed Bl-CMEC framework not only can provide users with better QoE performance for video streaming services but also can ensure the security, integrity, and consistency for the video providers, video requesters, and video data.
9

Leng, Tao, Yuanyuan Xu, Gaofeng Cui, and Weidong Wang. "Caching-Aware Intelligent Handover Strategy for LEO Satellite Networks." Remote Sensing 13, no. 11 (June 7, 2021): 2230. http://dx.doi.org/10.3390/rs13112230.

Abstract:
Recently, many Low Earth Orbit (LEO) satellite networks are being implemented to provide seamless communication services for global users. Owing to the high mobility of LEO satellites, the handover strategy has become one of the most important topics for LEO satellite systems. However, the limited on-board caching resources of satellites make it difficult to guarantee handover performance. In this paper, we propose a multiple-attribute decision handover strategy that jointly considers three factors: the caching capacity, the remaining service time, and the remaining idle channels of the satellites. Furthermore, a caching-aware intelligent handover strategy based on deep reinforcement learning (DRL) is given to maximize the long-term benefits of the system. Compared with the traditional strategies, the proposed strategy reduces the handover failure rate by up to nearly 81% when the system caching occupancy reaches 90%, and it has a lower call blocking rate in high user-arrival scenarios. Simulation results show that this strategy can effectively mitigate the handover failure rate due to caching resource occupation, as well as flexibly allocate channel resources to reduce call blocking.
10

Qin, Yana, Danye Wu, Zhiwei Xu, Jie Tian, and Yujun Zhang. "Adaptive In-Network Collaborative Caching for Enhanced Ensemble Deep Learning at Edge." Mathematical Problems in Engineering 2021 (September 25, 2021): 1–14. http://dx.doi.org/10.1155/2021/9285802.

Abstract:
To enhance the quality and speed of data processing and protect the privacy and security of the data, edge computing has been extensively applied to support data-intensive intelligent processing services at the edge. Among these data-intensive services, ensemble learning-based services can naturally leverage the distributed computation and storage resources at edge devices to achieve efficient data collection, processing, and analysis. Collaborative caching has been applied in edge computing to support services close to the data source, so that the limited resources at edge devices can support high-performance ensemble learning solutions. To achieve this goal, we propose an adaptive in-network collaborative caching scheme for ensemble learning at the edge. First, an efficient data representation structure is proposed to record cached data among different nodes. In addition, we design a collaboration scheme that helps edge nodes cache valuable data for local ensemble learning, by scheduling local caching according to a summarization of data representations from different edge nodes. Our extensive simulations demonstrate the high performance of the proposed collaborative caching scheme, which significantly reduces the learning latency and the transmission overhead.
11

Spiga, Daniele, Diego Ciangottini, Mirco Tracolli, Tommaso Tedeschi, Daniele Cesini, Tommaso Boccali, Valentina Poggioni, Marco Baioletti, and Valentin Y. Kuznetsov. "Smart Caching at CMS: applying AI to XCache edge services." EPJ Web of Conferences 245 (2020): 04024. http://dx.doi.org/10.1051/epjconf/202024504024.

Abstract:
The projected Storage and Compute needs for the HL-LHC will be a factor of up to 10 above what can be achieved by the evolution of current technology within a flat budget. The WLCG community is studying possible technical solutions to evolve the current computing in order to cope with the requirements; one of the main focuses is resource optimization, with the ultimate aim of improving performance and efficiency, as well as simplifying and reducing operation costs. As of today, storage consolidation based on a Data Lake model is considered a good candidate for addressing HL-LHC data access challenges. The Data Lake model under evaluation can be seen as a logical system that hosts a distributed working set of analysis data. Compute power can be “close” to the lake, but also remote and thus completely external. In this context, we expect data caching to play a central role as a technical solution to reduce the impact of latency and reduce network load. A geographically distributed caching layer will serve many satellite computing centers that might appear and disappear dynamically. In this talk we propose a system of caches, distributed at the national level, describing both the deployment and the results of the studies made to measure the impact on CPU efficiency. In this contribution, we also present early results on a novel caching strategy beyond the standard XRootD approach, whose results will be a baseline for an AI-based smart caching system.
12

Shukla, Samiksha, D. K. Mishra, and Kapil Tiwari. "Performance Enhancement of Soap Via Multi Level Caching." Mapana - Journal of Sciences 9, no. 2 (November 30, 2010): 47–52. http://dx.doi.org/10.12723/mjs.17.6.

Abstract:
Due to the complex infrastructure of web applications, the response times for the different services requested by clients can be significantly large. The Simple Object Access Protocol (SOAP) is a recent and emerging technology in the field of web services which aims at replacing traditional methods of remote communication. The basic aim of designing SOAP was to increase interoperability among a broad range of programs and environments; SOAP allows applications written in different languages and installed on different platforms to communicate with each other over the network. Web services demand security, high performance, and extensibility. SOAP provides various benefits for interoperability, but at the price of performance degradation and security. This makes SOAP a poor choice for high-performance web services. In this paper we present a new approach that enables multi-level caching at the client side as well as the server side. The paper describes an implementation based on the Apache Java SOAP client, which gives radically enhanced performance.
13

Lee, Dik Lun, Manli Zhu, and Haibo Hu. "When Location-Based Services Meet Databases." Mobile Information Systems 1, no. 2 (2005): 81–90. http://dx.doi.org/10.1155/2005/941816.

Abstract:
As location-based services (LBSs) grow to support a larger and larger user community and to provide more and more intelligent services, they must face a few fundamental challenges, including the ability to not only accept coordinates as location data but also manipulate high-level semantics of the physical environment. They must also handle a large amount of location updates and client requests and be able to scale up as their coverage increases. This paper describes some of our research in location modeling and updates and techniques for enhancing system performance by caching and batch processing. It can be observed that the challenges facing LBSs share a lot of similarity with traditional database research (i.e., data modeling, indexing, caching, and query optimization) but the fact that LBSs are built into the physical space and the opportunity to exploit spatial locality in system design shed new light on LBS research.
14

Hasan, Kamrul, and Seong-Ho Jeong. "Efficient Caching for Data-Driven IoT Applications and Fast Content Delivery with Low Latency in ICN." Applied Sciences 9, no. 22 (November 6, 2019): 4730. http://dx.doi.org/10.3390/app9224730.

Abstract:
Edge computing is a key paradigm for the various data-intensive Internet of Things (IoT) applications where caching plays a significant role at the edge of the network. This paradigm provides data-intensive services, computational activities, and application services to the proximity devices and end-users for fast content retrieval with a very low response time that fulfills the ultra-low latency goal of the 5G networks. Information-centric networking (ICN) is being acknowledged as an important technology for the fast content retrieval of multimedia content and content-based IoT applications. The main goal of ICN is to change the current location-dependent IP network architecture to location-independent and content-centric network architecture. ICN can fulfill the needs for caching to the vicinity of the edge devices without further storage deployment. In this paper, we propose an architecture for efficient caching at the edge devices for data-intensive IoT applications and a fast content access mechanism based on new clustering and caching procedures in ICN. The proposed cluster-based efficient caching mechanism provides the solution to the problem of the existing hash and on-path caching mechanisms, and the proposed content popularity mechanism increases the content availability at the proximity devices for reducing the content transfer time and packet loss ratio. We also provide the simulation results and mathematical analysis to prove that the proposed mechanism is better than other state-of-the-art caching mechanisms and the overall network efficiencies are increased.
15

Yu, Ying. "Application of Mobile Edge Computing Technology in Civil Aviation Express Marketing." Wireless Communications and Mobile Computing 2021 (May 31, 2021): 1–11. http://dx.doi.org/10.1155/2021/9932977.

Abstract:
With the popularization of mobile terminals and the rapid development of mobile communication technology, many PC-based services have placed high demands on data processing and storage functions. Cloud laptops that transfer data processing tasks to the cloud cannot meet users' needs for low latency and high-quality services. In view of this, some researchers have proposed the concept of mobile edge computing. Mobile edge computing (MEC) is based on the 5G evolution architecture. By deploying multiple service servers on the base station side near the edge of the user's mobile core network, it provides nearby computing and processing services for user business. This article is aimed at studying the use of caching and MEC processing functions to design an effective caching and distribution mechanism across the network edge and apply it to civil aviation express marketing. This paper proposes to focus on mobile edge computing technology, combining it with data warehouse technology, clustering algorithms, and other methods to build an experimental model of a MEC-based caching mechanism applied to civil aviation express marketing. The experimental results in this paper show that when the cache space and the number of service contents are constant, among the five cache mechanisms compared, LECC is more effective than LENC, LRU, and RR in terms of cache hit rate, average content transmission delay, and transmission overhead. For example, with the same cache space, the ATC under the LECC mechanism is about 4%~9%, 8%~13%, and 18%~22% lower than that of LENC, LRU, and RR, respectively.
16

Zhanikeev, Marat. "Fog Caching and a Trace-Based Analysis of its Offload Effect." International Journal of Information Technologies and Systems Approach 10, no. 2 (July 2017): 50–68. http://dx.doi.org/10.4018/ijitsa.2017070104.

Abstract:
Many years of research on Content Delivery Networks (CDNs) offers a number of effective methods for caching of content replicas or forwarding requests. However, recently CDNs have aggressively started migrating to clouds. Clouds present a new kind of distribution environment as each location can support multiple caching options varying in the level of persistence of stored content. A subclass of clouds located at network edge is referred to as fog clouds. Fog clouds help by allowing CDNs to offload popular content to network edge, closer to end users. However, due to the fact that fog clouds are extremely heterogeneous and vary wildly in network and caching performance, traditional caching technology is no longer applicable. This paper proposes a multi-level caching technology specific to fog clouds. To deal with the heterogeneity problem and, at the same time, avoid centralized control, this paper proposes a function that allows CDN services to discover local caching facilities dynamically, at runtime. Using a combination of synthetic models and real measurement dataset, this paper analyzes efficiency of offload both at the local level of individual fog locations and at the global level of the entire CDN infrastructure. Local analysis shows that the new method can reduce inter-cloud traffic by between 16 and 18 times while retaining less than 30% of total content in a local cache. Global analysis further shows that, based on existing measurement datasets, centralized optimization is preferred to distributed coordination among services.
17

Lu, Ying, T. F. Abdelzaher, and A. Saxena. "Design, Implementation, and Evaluation of Differentiated Caching Services." IEEE Transactions on Parallel and Distributed Systems 15, no. 5 (May 2004): 440–52. http://dx.doi.org/10.1109/tpds.2004.1278101.

18

Haraty, Ramzi A. "Innovative Mobile E-Healthcare Systems: A New Rule-Based Cache Replacement Strategy Using Least Profit Values." Mobile Information Systems 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/6141828.

Abstract:
Providing and managing e-health data from heterogeneous and ubiquitous e-health service providers in a content distribution network (CDN) for providing e-health services is a challenging task. A content distribution network is normally utilized to cache e-health media contents such as real-time medical images and videos. Efficient management, storage, and caching of distributed e-health data in a CDN or in a cloud computing environment of mobile patients facilitate that doctors, health care professionals, and other e-health service providers have immediate access to e-health information for efficient decision making as well as better treatment. Caching is one of the key methods in distributed computing environments to improve the performance of data retrieval. To find which item in the cache can be evicted and replaced, cache replacement algorithms are used. Many caching approaches are proposed, but the SACCS—Scalable Asynchronous Cache Consistency Scheme—has proved to be more scalable than the others. In this work, we propose a new cache replacement algorithm—Profit SACCS—that is based on the rule-based least profit value. It replaces the least recently used strategy that SACCS uses. A comparison with different cache replacement strategies is also presented.
19

Wei, Hua, Hong Luo, and Yan Sun. "Mobility-Aware Service Caching in Mobile Edge Computing for Internet of Things." Sensors 20, no. 3 (January 22, 2020): 610. http://dx.doi.org/10.3390/s20030610.

Abstract:
The mobile edge computing architecture successfully solves the problem of high latency in cloud computing. However, current research focuses on computation offloading and lacks research on service caching issues. To solve the service caching problem, especially for scenarios with high mobility in the Sensor Networks environment, we study the mobility-aware service caching mechanism. Our goal is to maximize the number of users who are served by the local edge-cloud, and we need to make predictions about the user’s target location to avoid invalid service requests. First, we propose an idealized geometric model to predict the target area of a user’s movement. Since it is difficult to obtain all the data needed by the model in practical applications, we use frequent patterns to mine local moving track information. Then, by using the results of the trajectory data mining and the proposed geometric model, we make predictions about the user’s target location. Based on the prediction result and existing service cache, the service request is forwarded to the appropriate base station through the service allocation algorithm. Finally, to be able to train and predict the most popular services online, we propose a service cache selection algorithm based on back-propagation (BP) neural network. The simulation experiments show that our service cache algorithm reduces the service response time by about 13.21% on average compared to other algorithms, and increases the local service proportion by about 15.19% on average compared to the algorithm without mobility prediction.
20

Chiu, Hsuan, Chi-He Chang, Chao-Wei Tseng, and Chi-Shi Liu. "Window-Based Popularity Caching for IPTV On-Demand Services." ISRN Communications and Networking 2011 (October 12, 2011): 1–11. http://dx.doi.org/10.5402/2011/201314.

Abstract:
In recent years, many telecommunication companies have regarded the IP network as a new delivery platform for providing TV services, because IP networks are equipped with two-way, high-speed communication abilities that are appropriate for providing on-demand services and linear TV programs. However, in such IPTV systems, the requests for VOD (video on demand) are usually aggregated intensively in a short period and user preferences fluctuate dynamically. Moreover, the VOD content is updated frequently under the management of IPTV providers. Thus, an accurate popularity prediction method and an effective cache system are vital because they affect IPTV performance directly. This paper proposes a new window-based popularity mechanism which automatically responds to the fluctuation of user interests and instantly adjusts the popularity of VOD. Further, we applied our method to a commercial IPTV system, and the results illustrate that our mechanism indeed offers a significant improvement.
21

De Vleeschauwer, D., and K. Laevens. "Performance of Caching Algorithms for IPTV On-Demand Services." IEEE Transactions on Broadcasting 55, no. 2 (June 2009): 491–501. http://dx.doi.org/10.1109/tbc.2009.2015983.

22

Chockler, G., G. Laden, and Y. Vigfusson. "Design and implementation of caching services in the cloud." IBM Journal of Research and Development 55, no. 6 (November 2011): 9:1–9:11. http://dx.doi.org/10.1147/jrd.2011.2171649.

23

Das, Sajal K., Zohar Naor, and Mayank Raj. "Popularity-based caching for IPTV services over P2P networks." Peer-to-Peer Networking and Applications 10, no. 1 (October 29, 2015): 156–69. http://dx.doi.org/10.1007/s12083-015-0414-3.

24

Wang, Kan, Ruijie Wang, Junhuai Li, and Meng Li. "Joint V2V-Assisted Clustering, Caching, and Multicast Beamforming in Vehicular Edge Networks." Wireless Communications and Mobile Computing 2020 (November 19, 2020): 1–12. http://dx.doi.org/10.1155/2020/8837751.

Abstract:
As an emerging type of Internet of Things (IoT), Internet of Vehicles (IoV) denotes the vehicle network capable of supporting diverse types of intelligent services and has attracted great attention in the 5G era. In this study, we consider the multimedia content caching with multicast beamforming in IoV-based vehicular edge networks. First, we formulate a joint vehicle-to-vehicle- (V2V-) assisted clustering, caching, and multicasting optimization problem, to minimize the weighted sum of flow cost and power cost, subject to the quality-of-service (QoS) constraints for each multicast group. Then, with the two-timescale setup, the intractable and stochastic original problem is decoupled at separate timescales. More precisely, at the large timescale, we leverage the sample average approximation (SAA) technique to solve the joint V2V-assisted clustering and caching problem and then demonstrate the equivalence of optimal solutions between the original problem and its relaxed linear programming (LP) counterpart; and at the small timescale, we leverage the successive convex approximation (SCA) method to solve the nonconvex multicast beamforming problem, whereby a series of convex subproblems can be acquired, with the convergence also assured. Finally, simulations are conducted with different system parameters to show the effectiveness of the proposed algorithm, revealing that the network performance can benefit from not only the power saving from wireless multicast beamforming in vehicular networks but also the content caching among vehicles.
25

Sathiyamoorthi, V. "A Novel Cache Replacement Policy for Web Proxy Caching System Using Web Usage Mining." International Journal of Information Technology and Web Engineering 11, no. 2 (April 2016): 1–13. http://dx.doi.org/10.4018/ijitwe.2016040101.

Abstract:
Network congestion remains one of the main barriers to the continuing success of the internet and Web-based services. Against this background, proxy caching is one of the most successful solutions for improving the performance of the Web, since it reduces network traffic and Web server load and improves user-perceived response time. Here, the most popular Web objects that are likely to be revisited in the near future are stored in the proxy server, thereby improving the Web response time and saving network bandwidth. The main component of Web caching is its cache replacement policy. It plays a key role in replacing existing objects when there is no room for a new one, especially when the cache is full. Moreover, the conventional replacement policies used in Web caching environments provide poor network performance. These policies are suitable for memory caching since it involves fixed-size objects, but Web caching involves objects of varying size, and hence there is a need for an efficient policy that works better in the Web cache environment. Moreover, most of the existing Web caching policies consider only a few factors and ignore other factors that have an impact on the efficiency of Web proxy caching. Hence, a novel policy for the Web cache environment is proposed. The proposed policy includes the size, cost, frequency, ageing, time of entry into the cache, and popularity of Web objects in the cache removal decision. It uses Web usage mining as a technique to improve the Web caching policy. Empirical analyses show that the proposed policy performs better than existing policies in terms of various performance metrics such as hit rate and byte hit rate.
26

Ma, Chunguang, Lei Zhang, Songtao Yang, and Xiaodong Zheng. "Hiding Yourself Behind Collaborative Users When Using Continuous Location-Based Services." Journal of Circuits, Systems and Computers 26, no. 07 (March 17, 2017): 1750119. http://dx.doi.org/10.1142/s0218126617501195.

Abstract:
The prosperity of location-based services (LBSs) makes more and more people pay close attention to personal privacy. In order to preserve users' privacy, several schemes utilized a trusted third party (TTP) to obfuscate users, but these schemes were suspect because the TTP may become a single point of failure or a service performance bottleneck. To alleviate this suspicion, schemes with collaborative users to achieve k-anonymity were proposed. In these schemes, users equipped with short-range communication devices can communicate with adjacent users to establish an anonymous group. With this group, a user can obfuscate and hide herself behind at least k − 1 other users. However, these schemes are usually more efficient in snapshot services than in continuous ones. To cope with this inadequacy, with the help of caching in mobile devices, we propose a query information block random exchange and results caching scheme (CaQBE for short). In this scheme, a particular user is hidden behind collaborative users in the snapshot service, and the caches further preserve privacy in the continuous service. In case an active adversary launches the query correlation attack or a passive adversary launches the impersonation attack, a random collaborative user selection algorithm and a random block exchange algorithm are also utilized. Then, based on the feature of entropy, a metric to measure the privacy of the user against attacks from the active and passive adversaries is proposed. Finally, security analysis and experimental comparison with other similar schemes further verify the advantages of our scheme in terms of effectiveness of privacy preservation and efficiency of performance.
27

Papageorgiou, Apostolos, Marius Schatke, Stefan Schulte, and Ralf Steinmetz. "Lightweight Wireless Web Service Communication Through Enhanced Caching Mechanisms." International Journal of Web Services Research 9, no. 2 (April 2012): 42–68. http://dx.doi.org/10.4018/jwsr.2012040103.

Abstract:
Reducing the size of the wirelessly transmitted data during the invocation of third-party Web services is a worthwhile goal of many mobile application developers. Among many adaptation mechanisms that can be used for the mediation of such Web service invocations, the automated enhancement of caching mechanisms is a promising approach that can spare the re-transmission of entire content fields of the exchanged messages. However, it is usually impeded by technological constraints and by various other factors, such as the inherent risk of using responses that are not fresh, i.e., are not up-to-date. This paper presents the roadmap, the most important technical and algorithmic details, and a thorough evaluation of the first solution for generically and automatically enriching the communication with any third-party Web service in a way that cached responses can be exploited while a freshness of 100% is maintained.
28

Hosanagar, Kartik, Ramayya Krishnan, John Chuang, and Vidyanand Choudhary. "Pricing and Resource Allocation in Caching Services with Multiple Levels of Quality of Service." Management Science 51, no. 12 (December 2005): 1844–59. http://dx.doi.org/10.1287/mnsc.1050.0420.

29

Huang, Mingfeng, Yuxin Liu, Ning Zhang, Neal N. Xiong, Anfeng Liu, Zhiwen Zeng, and Houbing Song. "A Services Routing Based Caching Scheme for Cloud Assisted CRNs." IEEE Access 6 (2018): 15787–805. http://dx.doi.org/10.1109/access.2018.2815039.

30

Abdelhamid, Sherin, Hossam S. Hassanein, and Glen Takahara. "On-Road Caching Assistance for Ubiquitous Vehicle-Based Information Services." IEEE Transactions on Vehicular Technology 64, no. 12 (December 2015): 5477–92. http://dx.doi.org/10.1109/tvt.2015.2480711.

31

Wauters, Tim, Wim Van De Meerssche, Peter Backx, Filip De Turck, Bart Dhoedt, Piet Demeester, Tom Van Caenegem, and Erwin Six. "Proxy caching algorithms and implementation for time-shifted TV services." European Transactions on Telecommunications 19, no. 2 (2008): 111–22. http://dx.doi.org/10.1002/ett.1181.

32

Tran, Anh-Tien, Nhu-Ngoc Dao, and Sungrae Cho. "Bitrate Adaptation for Video Streaming Services in Edge Caching Systems." IEEE Access 8 (2020): 135844–52. http://dx.doi.org/10.1109/access.2020.3011517.

33

Liu, Ran, Edmund Yeh, and Atilla Eryilmaz. "Proactive Caching for Low Access-Delay Services under Uncertain Predictions." ACM SIGMETRICS Performance Evaluation Review 47, no. 1 (December 17, 2019): 89–90. http://dx.doi.org/10.1145/3376930.3376987.

34

Tadrous, John, Atilla Eryilmaz, and Hesham El Gamal. "Joint Smart Pricing and Proactive Content Caching for Mobile Services." IEEE/ACM Transactions on Networking 24, no. 4 (August 2016): 2357–71. http://dx.doi.org/10.1109/tnet.2015.2453793.

35

Liu, Ran, Edmund Yeh, and Atilla Eryilmaz. "Proactive Caching for Low Access-Delay Services under Uncertain Predictions." Proceedings of the ACM on Measurement and Analysis of Computing Systems 3, no. 1 (March 26, 2019): 1–46. http://dx.doi.org/10.1145/3322205.3311073.

36

Kim, Taekook, and Eui-Jik Kim. "Hybrid storage-based caching strategy for content delivery network services." Multimedia Tools and Applications 74, no. 5 (August 16, 2014): 1697–709. http://dx.doi.org/10.1007/s11042-014-2215-8.

37

Liu, Fang, Zhenyuan Zhang, Zunfu Wang, and Yuting Xing. "ECC: Edge Collaborative Caching Strategy for Differentiated Services Load-Balancing." Computers, Materials & Continua 69, no. 2 (2021): 2045–60. http://dx.doi.org/10.32604/cmc.2021.018303.

38

Raghunathan, A., and K. Murugesan. "Performance-Enhanced Caching Scheme for Web Clusters for Dynamic Content." International Journal of Business Data Communications and Networking 7, no. 3 (July 2011): 16–36. http://dx.doi.org/10.4018/jbdcn.2011070102.

Abstract:
In order to improve the QoS of applications, clusters of web servers are increasingly used in web services. Caching helps improve performance in web servers, but is largely exploited only for static web content. With more web applications using backend databases today, caching of dynamic content has a crucial role in web performance. This paper presents a set of cache management schemes for handling dynamic data in web clusters by sharing cached contents. These schemes use either automatic or expiry-based cache validation, and work with any type of request distribution. The techniques improve response by utilizing the caches efficiently and reducing redundant database accesses by web servers while ensuring cache consistency. The authors present caching schemes for both horizontal and vertical cluster architectures. Simulations show an appreciable performance rise in response times of queries in clustered web servers.
39

Sun, Yunlei, Xiuquan Qiao, Wei Tan, Bo Cheng, Ruisheng Shi, and Junliang Chen. "A Low-Delay, Light-Weight Publish/Subscribe Architecture for Delay-Sensitive IOT Services." International Journal of Web Services Research 10, no. 3 (July 2013): 60–81. http://dx.doi.org/10.4018/ijwsr.2013070104.

Abstract:
In order to build a low-latency and light-weight publish/subscribe (pub/sub) system for delay-sensitive IOT services, the authors propose an efficient and scalable broker architecture named Grid Quorum-based pub/sub (GQPS). As a key component in Event-Driven Service-Oriented Architecture (EDSOA) for IOT services, this architecture organizes multiple pub/sub brokers into a Quorum-based peer-to-peer topology for efficient topic searching. It also leverages a topic searching algorithm and a one-hop caching strategy to minimize the search latency. Light-weight RESTful interfaces make the authors’ GQPS more suitable for IOT services. Cost analysis and experimental study demonstrate that the GQPS achieves a significant performance gain in search satisfaction without compromising search cost. The authors apply the proposed GQPS in the District Heating Control and Information Service System in Beijing, China. This system validates the effectiveness of GQPS.
40

Lei, Fangyuan, Jun Cai, Qingyun Dai, and Huimin Zhao. "Deep Learning Based Proactive Caching for Effective WSN-Enabled Vision Applications." Complexity 2019 (May 2, 2019): 1–12. http://dx.doi.org/10.1155/2019/5498606.

Abstract:
Wireless Sensor Networks (WSNs) have a wide range of application scenarios in computer vision, from pedestrian detection to robotic visual navigation. In response to the growing visual data services in WSNs, we propose a proactive caching strategy based on a Stacked Sparse Autoencoder (SSAE) to predict content popularity (PCDS2AW). Firstly, based on Software Defined Network (SDN) and Network Function Virtualization (NFV) technologies, a distributed deep learning SSAE network is constructed in the sink nodes and control nodes of the WSN. Then, the SSAE network structure parameters and network model parameters are optimized through training. The proactive cache strategy implementation procedure is divided into four steps. (1) The SDN controller is responsible for dynamically collecting user request data packet information in the WSN. (2) The SSAEs predict packet popularity based on the user request data obtained by the SDN controller. (3) The SDN controller generates a corresponding proactive cache strategy according to the popularity prediction result. (4) The proactive caching strategy is implemented at the WSN cache nodes. In the simulation, we compare the influence of spatiotemporal data on the SSAE network structure. Compared with the classic caching strategies Hash + LRU and Betw + LRU and the classic prediction algorithms SVM and BPNN, the proposed PCDS2AW proactive caching strategy can significantly improve WSN performance.
41

Lee, Seung-Won, Hwa-Sei Lee, Seong-Ho Park, and Ki-Dong Chung. "A Cooperative Proxy Caching for Continuous Media Services in Mobile Environments." KIPS Transactions: Part B 11B, no. 6 (October 1, 2004): 691–700. http://dx.doi.org/10.3745/kipstb.2004.11b.6.691.

42

Romero, Pablo, Franco Robledo, Pablo Rodríguez-Bocca, and Claudia Rostagnol. "Mathematical Analysis of caching policies and cooperation in YouTube-like services." Electronic Notes in Discrete Mathematics 41 (June 2013): 221–28. http://dx.doi.org/10.1016/j.endm.2013.05.096.

43

Hasnain, Muhammad, Muhammad Fermi Pasha, and Imran Ghani. "Drupal core 8 caching mechanism for scalability improvement of web services." Software Impacts 3 (February 2020): 100014. http://dx.doi.org/10.1016/j.simpa.2020.100014.

44

Zhang, Wei, Jian Xiong, Lin Gui, Bo Liu, Meikang Qiu, and Zhiping Shi. "Distributed Caching Mechanism for Popular Services Distribution in Converged Overlay Networks." IEEE Transactions on Broadcasting 66, no. 1 (March 2020): 66–77. http://dx.doi.org/10.1109/tbc.2019.2902818.

45

Burckhardt, Sebastian, and Tim Coppieters. "Reactive caching for composed services: polling at the speed of push." Proceedings of the ACM on Programming Languages 2, OOPSLA (October 24, 2018): 1–28. http://dx.doi.org/10.1145/3276522.

46

Linder, H., H. D. Clausen, and B. Collini-Nocker. "Satellite Internet services using DVB/MPEG-2 and multicast Web caching." IEEE Communications Magazine 38, no. 6 (June 2000): 156–61. http://dx.doi.org/10.1109/35.846088.

47

Chan, Chen-Lung, Shih-Yu Huang, and Jia-Shung Wang. "Performance Analysis of Proxy Caching for VOD Services With Heterogeneous Clients." IEEE Transactions on Communications 55, no. 11 (November 2007): 2142–51. http://dx.doi.org/10.1109/tcomm.2007.908524.

48

Noh, Hyunmin, and Hwangjun Song. "Progressive Caching System for Video Streaming Services Over Content Centric Network." IEEE Access 7 (2019): 47079–89. http://dx.doi.org/10.1109/access.2019.2909563.

49

Zhang, Shaobo, Kim-Kwang Raymond Choo, Qin Liu, and Guojun Wang. "Enhancing privacy through uniform grid and caching in location-based services." Future Generation Computer Systems 86 (September 2018): 881–92. http://dx.doi.org/10.1016/j.future.2017.06.022.

50

Zhao, Hui, Qinghua Zheng, Weizhan Zhang, and Haifei Li. "MSC: a multi-version shared caching for multi-bitrate VoD services." Multimedia Tools and Applications 75, no. 4 (November 26, 2014): 1923–45. http://dx.doi.org/10.1007/s11042-014-2380-9.
