Journal articles on the topic "Caching services"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Consult the top 50 journal articles for your research on the topic "Caching services".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf file and read the online abstract of the work if it is available in the metadata.

Browse journal articles on a wide variety of academic disciplines and compile your bibliography correctly.

1

Premkumar, Vandana, and Vinil Bhandari. "Caching in Amazon Web Services." International Journal of Computer Trends and Technology 69, no. 4 (April 25, 2021): 1–5. http://dx.doi.org/10.14445/22312803/ijctt-v69i4p101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kim, Yunkon, and Eui-Nam Huh. "EDCrammer: An Efficient Caching Rate-Control Algorithm for Streaming Data on Resource-Limited Edge Nodes." Applied Sciences 9, no. 12 (June 23, 2019): 2560. http://dx.doi.org/10.3390/app9122560.

Full text
Abstract:
This paper explores data caching as a key factor of edge computing. State-of-the-art research of data caching on edge nodes mainly considers reactive and proactive caching, and machine learning based caching, which could be a heavy task for edge nodes. However, edge nodes usually have relatively lower computing resources than cloud datacenters as those are geo-distributed from the administrator. Therefore, a caching algorithm should be lightweight for saving computing resources on edge nodes. In addition, the data caching should be agile because it has to support high-quality services on edge nodes. Accordingly, this paper proposes a lightweight, agile caching algorithm, EDCrammer (Efficient Data Crammer), which performs agile operations to control caching rate for streaming data by using the enhanced PID (Proportional-Integral-Differential) controller. Experimental results using this lightweight, agile caching algorithm show its significant value in each scenario. In four common scenarios, the desired cache utilization was reached in 1.1 s on average and then maintained within a 4–7% deviation. The cache hit ratio is about 96%, and the optimal cache capacity is around 1.5 MB. Thus, EDCrammer can help distribute the streaming data traffic to the edge nodes, mitigate the uplink load on the central cloud, and ultimately provide users with high-quality video services. We also hope that EDCrammer can improve overall service quality in 5G environment, Augmented Reality/Virtual Reality (AR/VR), Intelligent Transportation System (ITS), Internet of Things (IoT), etc.
APA, Harvard, Vancouver, ISO, and other styles
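
The abstract above describes EDCrammer's use of an enhanced PID (Proportional-Integral-Differential) controller to steer the caching rate toward a target cache utilization. As a rough illustration of that general idea only (not the algorithm from the paper), the sketch below runs a toy PID loop in which the gains, setpoint, and simulated drain rate are all invented values:

```python
# Minimal PID-style rate control of cache utilization (illustrative only;
# the gains, setpoint, and simulated drain are hypothetical, not EDCrammer's).

class PIDController:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint          # desired cache utilization (0..1)
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, dt):
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def simulate(steps=50, dt=0.1, capacity_mb=1.5):
    pid = PIDController(kp=2.0, ki=0.5, kd=0.05, setpoint=0.9)
    cached_mb = 0.0
    for t in range(steps):
        utilization = cached_mb / capacity_mb
        rate = max(0.0, pid.update(utilization, dt))   # MB/s pushed into the edge cache
        drained = 0.2 * dt                             # toy consumption by streaming clients
        cached_mb = min(capacity_mb, max(0.0, cached_mb + rate * dt - drained))
        print(f"t={t * dt:4.1f}s  util={cached_mb / capacity_mb:5.2f}  rate={rate:5.2f} MB/s")


if __name__ == "__main__":
    simulate()
```
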
3

Xu, Xiaolong, Zijie Fang, Jie Zhang, Qiang He, Dongxiao Yu, Lianyong Qi, and Wanchun Dou. "Edge Content Caching with Deep Spatiotemporal Residual Network for IoV in Smart City." ACM Transactions on Sensor Networks 17, no. 3 (June 21, 2021): 1–33. http://dx.doi.org/10.1145/3447032.

Full text
Abstract:
Internet of Vehicles (IoV) enables numerous in-vehicle applications for smart cities, driving increasing service demands for processing various contents (e.g., videos). Generally, for efficient service delivery, the contents from the service providers are processed on the edge servers (ESs), as edge computing offers vehicular applications low-latency services. However, due to the reusability of the same contents required by different distributed vehicular users, processing the copies of the same contents repeatedly in an edge server leads to a waste of resources (e.g., storage, computation, and bandwidth) in ESs. Therefore, it is a challenge to provide high-quality services while guaranteeing the resource efficiency with edge content caching. To address the challenge, an edge content caching method for smart cities with service requirement prediction, named E-Cache, is proposed. First, the future service requirements from the vehicles are predicted based on the deep spatiotemporal residual network (ST-ResNet). Then, preliminary content caching schemes are elaborated based on the predicted service requirements, which are further adjusted by a many-objective optimization aiming at minimizing the execution time and the energy consumption of the vehicular services. Eventually, experimental evaluations prove the efficiency and effectiveness of E-Cache with spatiotemporal traffic trajectory big data.
APA, Harvard, Vancouver, ISO, and other styles
4

Terry, Douglas D., and Venugopalan Ramasubramanian. "Caching XML Web Services for Mobility." Queue 1, no. 3 (May 2003): 70–78. http://dx.doi.org/10.1145/846057.864024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Edris, Ed Kamya Kiyemba, Mahdi Aiash, and Jonathan Loo. "DCSS Protocol for Data Caching and Sharing Security in a 5G Network." Network 1, no. 2 (July 7, 2021): 75–94. http://dx.doi.org/10.3390/network1020006.

Full text
Abstract:
Fifth Generation mobile networks (5G) promise to make network services provided by various Service Providers (SP) such as Mobile Network Operators (MNOs) and third-party SPs accessible from anywhere by the end-users through their User Equipment (UE). These services will be pushed closer to the edge for quick, seamless, and secure access. After being granted access to a service, the end-user will be able to cache and share data with other users. However, security measures should be in place for SP not only to secure the provisioning and access of those services but also, should be able to restrict what the end-users can do with the accessed data in or out of coverage. This can be facilitated by federated service authorization and access control mechanisms that restrict the caching and sharing of data accessed by the UE in different security domains. In this paper, we propose a Data Caching and Sharing Security (DCSS) protocol that leverages federated authorization to provide secure caching and sharing of data from multiple SPs in multiple security domains. We formally verify the proposed DCSS protocol using ProVerif and applied pi-calculus. Furthermore, a comprehensive security analysis of the security properties of the proposed DCSS protocol is conducted.
APA, Harvard, Vancouver, ISO, and other styles
6

Zhao, Hongna, Chunxi Li, Yongxiang Zhao, Baoxian Zhang, and Cheng Li. "Transcoding Based Video Caching Systems: Model and Algorithm." Wireless Communications and Mobile Computing 2018 (August 1, 2018): 1–8. http://dx.doi.org/10.1155/2018/1818690.

Full text
Abstract:
The explosive demand of online video watching brings huge bandwidth pressure to cellular networks. Efficient video caching is critical for providing high-quality streaming Video-on-Demand (VoD) services to satisfy the rapid increasing demands of online video watching from mobile users. Traditional caching algorithms typically treat individual video files separately and they tend to keep the most popular video files in cache. However, in reality, one video typically corresponds to multiple different files (versions) with different sizes and also different video resolutions. Thus, caching of such files for one video leads to a lot of redundancy since one version of a video can be utilized to produce other versions of the video by using certain video coding techniques. Recently, fog computing pushes computing power to edge of network to reduce distance between service provider and users. In this paper, we take advantage of fog computing and deploy cache system at network edge. Specifically, we study transcoding based video caching in cellular networks where cache servers are deployed at the edge of cellular network for providing improved quality of online VoD services to mobile users. By using transcoding, a cached video can be used to convert to different low-quality versions of the video as needed by different users in real time. We first formulate the transcoding based caching problem as integer linear programming problem. Then we propose a Transcoding based Caching Algorithm (TCA), which iteratively finds the placement leading to the maximal delay gain among all possible choices. We deduce the computational complexity of TCA. Simulation results demonstrate that TCA significantly outperforms traditional greedy caching algorithm with a decrease of up to 40% in terms of average delivery delay.
APA, Harvard, Vancouver, ISO, and other styles
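
According to the abstract above, TCA iteratively adds to the cache whichever video version yields the largest delay gain, and a cached high-quality version can be transcoded down to serve requests for lower ones. The sketch below illustrates only that greedy loop; the delay-gain values, sizes, and the gain-discounting rule for transcodable versions are invented stand-ins for the paper's model:

```python
# Greedy placement in the spirit of transcoding-based caching: repeatedly cache
# the video version with the best delay gain that still fits. Illustrative only.

def greedy_placement(candidates, capacity_mb):
    """candidates: list of dicts with 'video', 'version', 'size_mb', 'delay_gain'."""
    cache, used = [], 0.0
    remaining = list(candidates)
    while True:
        feasible = [c for c in remaining if used + c["size_mb"] <= capacity_mb]
        if not feasible:
            break
        best = max(feasible, key=lambda c: c["delay_gain"])
        cache.append(best)
        used += best["size_mb"]
        remaining.remove(best)
        # Once a high-quality version is cached, lower versions of the same video
        # can be produced by transcoding, so their standalone gain drops (toy rule).
        for c in remaining:
            if c["video"] == best["video"] and c["version"] < best["version"]:
                c["delay_gain"] *= 0.2
    return cache, used


if __name__ == "__main__":
    candidates = [
        {"video": "A", "version": 1080, "size_mb": 600, "delay_gain": 90.0},
        {"video": "A", "version": 480,  "size_mb": 200, "delay_gain": 55.0},
        {"video": "B", "version": 720,  "size_mb": 350, "delay_gain": 70.0},
        {"video": "C", "version": 480,  "size_mb": 180, "delay_gain": 30.0},
    ]
    placed, used = greedy_placement(candidates, capacity_mb=1000)
    for c in placed:
        print(f"cache {c['video']}@{c['version']}p ({c['size_mb']} MB)")
    print(f"used {used} MB")
```
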
7

Stollberg, Michael, Jörg Hoffmann, and Dieter Fensel. "A Caching Technique for Optimizing Automated Service Discovery." International Journal of Semantic Computing 05, no. 01 (March 2011): 1–31. http://dx.doi.org/10.1142/s1793351x11001146.

Full text
Abstract:
The development of sophisticated technologies for service-oriented architectures (SOA) is a grand challenge. A promising approach is the employment of semantic technologies to better support the service usage cycle. Most existing solutions show significant deficits in the computational performance, which hampers the applicability in large-scale SOA systems. We present an optimization technique for automated service discovery — one of the central operations in semantically enabled SOA environments — that can ensure a sophisticated performance while maintaining a high retrieval accuracy. The approach is based on goals that formally describe client objectives, and it employs a caching mechanism for enhancing the computational performance of a two-phased discovery framework. At design time, the suitable services for generic and reusable goal descriptions are determined by semantic matchmaking. The result is captured in a continuously updated graph structure that organizes goals and services with respect to the requested and provided functionalities. This is exploited at runtime in order to detect the suitable services for concrete client requests with minimal effort. We formalize the approach within a first-order logic framework, and define the graph structure along with the associated storage and retrieval algorithms. An empirical evaluation shows that significant performance improvements can be achieved.
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Yan, and Zheng Wan. "Blockchain-Enabled Intelligent Video Caching and Transcoding in Clustered MEC Networks." Security and Communication Networks 2021 (September 7, 2021): 1–17. http://dx.doi.org/10.1155/2021/7443260.

Full text
Abstract:
In recent years, the number of smart devices has exploded, leading to an unprecedented increase in demand for video live and video-on-demand (VoD) services. Also, the privacy of video providers and requesters and the security of requested video data are much more threatened. In order to solve these issues, in this paper, a blockchain-enabled CMEC video transmission model (Bl-CMEC) for intelligent video caching and transcoding will be proposed to ensure the transactions’ transparency, system security, user information privacy, and integrity of the video data, enhance the ability of severs in actively caching popular video content in the CMEC system, and realize transcoding function at network edge nodes. Furthermore, we chose a scheme based on deep reinforcement learning (DRL) to intelligently access the intracluster joint caching and transcoding decisions. Then, the joint video caching and transcoding decision smart contract is specially designed to automatically manage the transaction process of the joint caching and transcoding service, which records key information of joint caching and transcoding transactions and payment information on a continuous blockchain. The simulation results demonstrate that the proposed Bl-CMEC framework not only can provide users with better QoE performance for video streaming service but also can ensure the security, integrity, and consistency for the video providers, video requesters, and video data.
APA, Harvard, Vancouver, ISO, and other styles
9

Leng, Tao, Yuanyuan Xu, Gaofeng Cui, and Weidong Wang. "Caching-Aware Intelligent Handover Strategy for LEO Satellite Networks." Remote Sensing 13, no. 11 (June 7, 2021): 2230. http://dx.doi.org/10.3390/rs13112230.

Full text
Abstract:
Recently, many Low Earth Orbit (LEO) satellite networks are being implemented to provide seamless communication services for global users. Since the high mobility of LEO satellites, handover strategy has become one of the most important topics for LEO satellite systems. However, the limited on-board caching resource of satellites make it difficult to guarantee the handover performance. In this paper, we propose a multiple attributes decision handover strategy jointly considering three factors, which are caching capacity, remaining service time and the remaining idle channels of the satellites. Furthermore, a caching-aware intelligent handover strategy is given based on the deep reinforcement learning (DRL) to maximize the long-term benefits of the system. Compared with the traditional strategies, the proposed strategy reduces the handover failure rate by up to nearly 81% when the system caching occupancy reaches 90%, and it has a lower call blocking rate in high user arrival scenarios. Simulation results show that this strategy can effectively mitigate handover failure rate due to caching resource occupation, as well as flexibly allocate channel resources to reduce call blocking.
APA, Harvard, Vancouver, ISO, and other styles
10

Qin, Yana, Danye Wu, Zhiwei Xu, Jie Tian, and Yujun Zhang. "Adaptive In-Network Collaborative Caching for Enhanced Ensemble Deep Learning at Edge." Mathematical Problems in Engineering 2021 (September 25, 2021): 1–14. http://dx.doi.org/10.1155/2021/9285802.

Full text
Abstract:
To enhance the quality and speed of data processing and protect the privacy and security of the data, edge computing has been extensively applied to support data-intensive intelligent processing services at edge. Among these data-intensive services, ensemble learning-based services can, in natural, leverage the distributed computation and storage resources at edge devices to achieve efficient data collection, processing, and analysis. Collaborative caching has been applied in edge computing to support services close to the data source, in order to take the limited resources at edge devices to support high-performance ensemble learning solutions. To achieve this goal, we propose an adaptive in-network collaborative caching scheme for ensemble learning at edge. First, an efficient data representation structure is proposed to record cached data among different nodes. In addition, we design a collaboration scheme to facilitate edge nodes to cache valuable data for local ensemble learning, by scheduling local caching according to a summarization of data representations from different edge nodes. Our extensive simulations demonstrate the high performance of the proposed collaborative caching scheme, which significantly reduces the learning latency and the transmission overhead.
APA, Harvard, Vancouver, ISO, and other styles
11

Spiga, Daniele, Diego Ciangottini, Mirco Tracolli, Tommaso Tedeschi, Daniele Cesini, Tommaso Boccali, Valentina Poggioni, Marco Baioletti, and Valentin Y. Kuznetsov. "Smart Caching at CMS: applying AI to XCache edge services." EPJ Web of Conferences 245 (2020): 04024. http://dx.doi.org/10.1051/epjconf/202024504024.

Full text
Abstract:
The projected Storage and Compute needs for the HL-LHC will be a factor up to 10 above what can be achieved by the evolution of current technology within a flat budget. The WLCG community is studying possible technical solutions to evolve the current computing in order to cope with the requirements; one of the main focus is resource optimization, with the ultimate aim of improving performance and efficiency, as well as simplifying and reducing operation costs. As of today the storage consolidation based on a Data Lake model is considered a good candidate for addressing HL-LHC data access challenges. The Data Lake model under evaluation can be seen as a logical system that hosts a distributed working set of analysis data. Compute power can be “close” to the lake, but also remote and thus completely external. In this context we expect data caching to play a central role as a technical solution to reduce the impact of latency and reduce network load. A geographically distributed caching layer will be functional to many satellite computing centers that might appear and disappear dynamically. In this talk we propose a system of caches, distributed at national level, describing both deployment and results of the studies made to measure the impact on the CPU efficiency. In this contribution, we also present the early results on novel caching strategy beyond the standard XRootD approach whose results will be a baseline for an AI-based smart caching system.
APA, Harvard, Vancouver, ISO, and other styles
12

Shukla, Samiksha, D. K. Mishra, and Kapil Tiwari. "Performance Enhancement of Soap Via Multi Level Caching." Mapana - Journal of Sciences 9, no. 2 (November 30, 2010): 47–52. http://dx.doi.org/10.12723/mjs.17.6.

Full text
Abstract:
Due to complex infrastructure of web application response time for different service request by client requires significantly larger time. Simple Object Access Protocol (SOAP) is a recent and emerging technology in the field of web services, which aims at replacing traditional methods of remote communications. Basic aim of designing SOAP was to increase interoperability among broad range of programs and environment, SOAP allows applications from different languages, installed on different platforms to communicate with each other over the network. Web services demand security, high performance and extensibility. SOAP provides various benefits for interoperability but we need to pay price of performance degradation and security for that. This formulates SOAP a poor preference for high performance web services. In this paper we present a new approach by enabling multi-level caching at client side as well as server side. Reference describes implementation based on the Apache Java SOAP client, which gives radically enhanced performance.
APA, Harvard, Vancouver, ISO, and other styles
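
The entry above proposes caching SOAP responses at both the client and the server side. The following sketch illustrates only the general two-tier idea; it is not based on the paper's Apache Java SOAP implementation, and the request hashing and TTL values are arbitrary assumptions:

```python
# Two-level response cache (client tier + shared server tier) for an RPC-style
# service call. Illustrative sketch only.

import hashlib
import time

def request_key(payload: str) -> str:
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

class TieredCache:
    def __init__(self, client_ttl=5.0, server_ttl=60.0):
        self.client = {}   # small, per-process cache
        self.server = {}   # shared tier (a real system might use memcached or Redis)
        self.client_ttl = client_ttl
        self.server_ttl = server_ttl

    def _get(self, tier, ttl, key):
        hit = tier.get(key)
        if hit and time.time() - hit[1] < ttl:
            return hit[0]
        return None

    def call(self, payload, backend):
        key = request_key(payload)
        for tier, ttl in ((self.client, self.client_ttl), (self.server, self.server_ttl)):
            cached = self._get(tier, ttl, key)
            if cached is not None:
                return cached
        response = backend(payload)                 # actual (slow) service invocation
        now = time.time()
        self.client[key] = self.server[key] = (response, now)
        return response


if __name__ == "__main__":
    slow_service = lambda p: f"<response for {p}>"
    cache = TieredCache()
    print(cache.call("<getQuote symbol='IBM'/>", slow_service))  # miss, hits the backend
    print(cache.call("<getQuote symbol='IBM'/>", slow_service))  # client-cache hit
```
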
13

Lee, Dik Lun, Manli Zhu, and Haibo Hu. "When Location-Based Services Meet Databases." Mobile Information Systems 1, no. 2 (2005): 81–90. http://dx.doi.org/10.1155/2005/941816.

Full text
Abstract:
As location-based services (LBSs) grow to support a larger and larger user community and to provide more and more intelligent services, they must face a few fundamental challenges, including the ability to not only accept coordinates as location data but also manipulate high-level semantics of the physical environment. They must also handle a large amount of location updates and client requests and be able to scale up as their coverage increases. This paper describes some of our research in location modeling and updates and techniques for enhancing system performance by caching and batch processing. It can be observed that the challenges facing LBSs share a lot of similarity with traditional database research (i.e., data modeling, indexing, caching, and query optimization) but the fact that LBSs are built into the physical space and the opportunity to exploit spatial locality in system design shed new light on LBS research.
APA, Harvard, Vancouver, ISO, and other styles
14

Hasan, Kamrul, and Seong-Ho Jeong. "Efficient Caching for Data-Driven IoT Applications and Fast Content Delivery with Low Latency in ICN." Applied Sciences 9, no. 22 (November 6, 2019): 4730. http://dx.doi.org/10.3390/app9224730.

Full text
Abstract:
Edge computing is a key paradigm for the various data-intensive Internet of Things (IoT) applications where caching plays a significant role at the edge of the network. This paradigm provides data-intensive services, computational activities, and application services to the proximity devices and end-users for fast content retrieval with a very low response time that fulfills the ultra-low latency goal of the 5G networks. Information-centric networking (ICN) is being acknowledged as an important technology for the fast content retrieval of multimedia content and content-based IoT applications. The main goal of ICN is to change the current location-dependent IP network architecture to location-independent and content-centric network architecture. ICN can fulfill the needs for caching to the vicinity of the edge devices without further storage deployment. In this paper, we propose an architecture for efficient caching at the edge devices for data-intensive IoT applications and a fast content access mechanism based on new clustering and caching procedures in ICN. The proposed cluster-based efficient caching mechanism provides the solution to the problem of the existing hash and on-path caching mechanisms, and the proposed content popularity mechanism increases the content availability at the proximity devices for reducing the content transfer time and packet loss ratio. We also provide the simulation results and mathematical analysis to prove that the proposed mechanism is better than other state-of-the-art caching mechanisms and the overall network efficiencies are increased.
APA, Harvard, Vancouver, ISO, and other styles
15

Yu, Ying. "Application of Mobile Edge Computing Technology in Civil Aviation Express Marketing." Wireless Communications and Mobile Computing 2021 (May 31, 2021): 1–11. http://dx.doi.org/10.1155/2021/9932977.

Full text
Abstract:
With the popularization of mobile terminals and the rapid development of mobile communication technology, many PC-based services have placed high demands on data processing and storage functions. Cloud laptops that transfer data processing tasks to the cloud cannot meet the needs of users due to low latency and high-quality services. In view of this, some researchers have proposed the concept of mobile edge computing. Mobile edge computing (MEC) is based on the 5G evolution architecture. By deploying multiple service servers on the base station side near the edge of the user’s mobile core network, it provides nearby computing and processing services for user business. This article is aimed at studying the use of caching and MEC processing functions to design an effective caching and distribution mechanism across the network edge and apply it to civil aviation express marketing. This paper proposes to focus on mobile edge computing technology, combining it with data warehouse technology, clustering algorithm, and other methods to build an experimental model of MEC-based caching mechanism applied to civil aviation express marketing. The experimental results in this paper show that when the cache space and the number of service contents are constant, the LECC mechanism among the five cache mechanisms is more effective than LENC, LRU, and RR in cache hit rate, average content transmission delay, and transmission overhead. For example, with the same cache space, ATC under the LECC mechanism is about 4%~9%, 8%~13%, and 18%~22% lower than that of LENC, LRU, and RR, respectively.
APA, Harvard, Vancouver, ISO, and other styles
16

Zhanikeev, Marat. "Fog Caching and a Trace-Based Analysis of its Offload Effect." International Journal of Information Technologies and Systems Approach 10, no. 2 (July 2017): 50–68. http://dx.doi.org/10.4018/ijitsa.2017070104.

Full text
Abstract:
Many years of research on Content Delivery Networks (CDNs) offers a number of effective methods for caching of content replicas or forwarding requests. However, recently CDNs have aggressively started migrating to clouds. Clouds present a new kind of distribution environment as each location can support multiple caching options varying in the level of persistence of stored content. A subclass of clouds located at network edge is referred to as fog clouds. Fog clouds help by allowing CDNs to offload popular content to network edge, closer to end users. However, due to the fact that fog clouds are extremely heterogeneous and vary wildly in network and caching performance, traditional caching technology is no longer applicable. This paper proposes a multi-level caching technology specific to fog clouds. To deal with the heterogeneity problem and, at the same time, avoid centralized control, this paper proposes a function that allows CDN services to discover local caching facilities dynamically, at runtime. Using a combination of synthetic models and real measurement dataset, this paper analyzes efficiency of offload both at the local level of individual fog locations and at the global level of the entire CDN infrastructure. Local analysis shows that the new method can reduce inter-cloud traffic by between 16 and 18 times while retaining less than 30% of total content in a local cache. Global analysis further shows that, based on existing measurement datasets, centralized optimization is preferred to distributed coordination among services.
APA, Harvard, Vancouver, ISO, and other styles
17

Ying Lu, T. F. Abdelzaher, and A. Saxena. "Design, implementation, and evaluation of differentiated caching services." IEEE Transactions on Parallel and Distributed Systems 15, no. 5 (May 2004): 440–52. http://dx.doi.org/10.1109/tpds.2004.1278101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Haraty, Ramzi A. "Innovative Mobile E-Healthcare Systems: A New Rule-Based Cache Replacement Strategy Using Least Profit Values." Mobile Information Systems 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/6141828.

Full text
Abstract:
Providing and managing e-health data from heterogeneous and ubiquitous e-health service providers in a content distribution network (CDN) for providing e-health services is a challenging task. A content distribution network is normally utilized to cache e-health media contents such as real-time medical images and videos. Efficient management, storage, and caching of distributed e-health data in a CDN or in a cloud computing environment of mobile patients facilitate that doctors, health care professionals, and other e-health service providers have immediate access to e-health information for efficient decision making as well as better treatment. Caching is one of the key methods in distributed computing environments to improve the performance of data retrieval. To find which item in the cache can be evicted and replaced, cache replacement algorithms are used. Many caching approaches are proposed, but the SACCS—Scalable Asynchronous Cache Consistency Scheme—has proved to be more scalable than the others. In this work, we propose a new cache replacement algorithm—Profit SACCS—that is based on the rule-based least profit value. It replaces the least recently used strategy that SACCS uses. A comparison with different cache replacement strategies is also presented.
APA, Harvard, Vancouver, ISO, and other styles
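
The abstract above replaces SACCS's least-recently-used eviction with a rule-based least-profit-value choice. The paper's profit rules are not reproduced here; the snippet below is a generic sketch in which each cached object simply carries a precomputed profit score and the lowest-scoring object is evicted when space is needed:

```python
# Generic least-profit-value eviction (illustrative; the actual Profit SACCS
# rules for computing an item's profit are not shown).

class ProfitCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}            # key -> (value, profit)

    def put(self, key, value, profit):
        if key not in self.items and len(self.items) >= self.capacity:
            victim = min(self.items, key=lambda k: self.items[k][1])
            del self.items[victim]            # evict the least profitable entry
        self.items[key] = (value, profit)

    def get(self, key):
        entry = self.items.get(key)
        return entry[0] if entry else None


if __name__ == "__main__":
    cache = ProfitCache(capacity=2)
    cache.put("mri_scan_17", "<image bytes>", profit=0.9)   # hypothetical e-health objects
    cache.put("vitals_feed", "<stream chunk>", profit=0.4)
    cache.put("xray_03", "<image bytes>", profit=0.7)       # evicts 'vitals_feed'
    print(list(cache.items))
```
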
19

Wei, Hua, Hong Luo, and Yan Sun. "Mobility-Aware Service Caching in Mobile Edge Computing for Internet of Things." Sensors 20, no. 3 (January 22, 2020): 610. http://dx.doi.org/10.3390/s20030610.

Full text
Abstract:
The mobile edge computing architecture successfully solves the problem of high latency in cloud computing. However, current research focuses on computation offloading and lacks research on service caching issues. To solve the service caching problem, especially for scenarios with high mobility in the Sensor Networks environment, we study the mobility-aware service caching mechanism. Our goal is to maximize the number of users who are served by the local edge-cloud, and we need to make predictions about the user’s target location to avoid invalid service requests. First, we propose an idealized geometric model to predict the target area of a user’s movement. Since it is difficult to obtain all the data needed by the model in practical applications, we use frequent patterns to mine local moving track information. Then, by using the results of the trajectory data mining and the proposed geometric model, we make predictions about the user’s target location. Based on the prediction result and existing service cache, the service request is forwarded to the appropriate base station through the service allocation algorithm. Finally, to be able to train and predict the most popular services online, we propose a service cache selection algorithm based on back-propagation (BP) neural network. The simulation experiments show that our service cache algorithm reduces the service response time by about 13.21% on average compared to other algorithms, and increases the local service proportion by about 15.19% on average compared to the algorithm without mobility prediction.
APA, Harvard, Vancouver, ISO, and other styles
20

Chiu, Hsuan, Chi-He Chang, Chao-Wei Tseng, and Chi-Shi Liu. "Window-Based Popularity Caching for IPTV On-Demand Services." ISRN Communications and Networking 2011 (October 12, 2011): 1–11. http://dx.doi.org/10.5402/2011/201314.

Full text
Abstract:
In recent years, many telecommunication companies have regarded IP network as a new delivery platform for providing TV services because IP network is equipped with two-way and high-speed communication abilities which are appropriate to provide on-demand services and linear TV programs. However, in this IPTV system, the requests of VOD (video on demand) are usually aggregated in a short period intensively and user preferences are fluctuated dynamically. Moreover, the VOD content is updated frequently under the management of IPTV providers. Thus, an accurate popularity prediction method and an effective cache system are vital because they affect the IPTV performance directly. This paper proposed a new window-based popularity mechanism which automatically responds to the fluctuation of user interests and instantly adjusts the popularity of VOD. Further, we applied our method to a commercial IPTV system and the results illustrated that our mechanism indeed offers a significant improvement.
APA, Harvard, Vancouver, ISO, and other styles
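
The window-based mechanism described above re-estimates VoD popularity from recent requests so that the cache can follow shifting user interests. A minimal sketch of that general idea follows; the window length, ranking rule, and top-k cache size are arbitrary choices, not the paper's:

```python
# Sliding-window popularity ranking for on-demand content (illustrative only).

from collections import Counter, deque

class WindowedPopularity:
    def __init__(self, window_size=1000, top_k=3):
        self.window = deque(maxlen=window_size)   # most recent requests
        self.counts = Counter()
        self.top_k = top_k

    def record(self, title):
        if len(self.window) == self.window.maxlen:
            oldest = self.window[0]
            self.counts[oldest] -= 1               # expire the oldest request
            if self.counts[oldest] == 0:
                del self.counts[oldest]
        self.window.append(title)
        self.counts[title] += 1

    def cache_set(self):
        return [t for t, _ in self.counts.most_common(self.top_k)]


if __name__ == "__main__":
    pop = WindowedPopularity(window_size=5, top_k=2)
    for req in ["news", "movieA", "movieA", "series1", "movieA", "series1", "series1"]:
        pop.record(req)
    print(pop.cache_set())   # the recent window favours 'series1' and 'movieA'
```
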
21

De Vleeschauwer, D., and K. Laevens. "Performance of Caching Algorithms for IPTV On-Demand Services." IEEE Transactions on Broadcasting 55, no. 2 (June 2009): 491–501. http://dx.doi.org/10.1109/tbc.2009.2015983.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Chockler, G., G. Laden, and Y. Vigfusson. "Design and implementation of caching services in the cloud." IBM Journal of Research and Development 55, no. 6 (November 2011): 9:1–9:11. http://dx.doi.org/10.1147/jrd.2011.2171649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Das, Sajal K., Zohar Naor, and Mayank Raj. "Popularity-based caching for IPTV services over P2P networks." Peer-to-Peer Networking and Applications 10, no. 1 (October 29, 2015): 156–69. http://dx.doi.org/10.1007/s12083-015-0414-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Wang, Kan, Ruijie Wang, Junhuai Li, and Meng Li. "Joint V2V-Assisted Clustering, Caching, and Multicast Beamforming in Vehicular Edge Networks." Wireless Communications and Mobile Computing 2020 (November 19, 2020): 1–12. http://dx.doi.org/10.1155/2020/8837751.

Full text
Abstract:
As an emerging type of Internet of Things (IoT), Internet of Vehicles (IoV) denotes the vehicle network capable of supporting diverse types of intelligent services and has attracted great attention in the 5G era. In this study, we consider the multimedia content caching with multicast beamforming in IoV-based vehicular edge networks. First, we formulate a joint vehicle-to-vehicle- (V2V-) assisted clustering, caching, and multicasting optimization problem, to minimize the weighted sum of flow cost and power cost, subject to the quality-of-service (QoS) constraints for each multicast group. Then, with the two-timescale setup, the intractable and stochastic original problem is decoupled at separate timescales. More precisely, at the large timescale, we leverage the sample average approximation (SAA) technique to solve the joint V2V-assisted clustering and caching problem and then demonstrate the equivalence of optimal solutions between the original problem and its relaxed linear programming (LP) counterpart; and at the small timescale, we leverage the successive convex approximation (SCA) method to solve the nonconvex multicast beamforming problem, whereby a series of convex subproblems can be acquired, with the convergence also assured. Finally, simulations are conducted with different system parameters to show the effectiveness of the proposed algorithm, revealing that the network performance can benefit from not only the power saving from wireless multicast beamforming in vehicular networks but also the content caching among vehicles.
APA, Harvard, Vancouver, ISO, and other styles
25

Sathiyamoorthi, V. "A Novel Cache Replacement Policy for Web Proxy Caching System Using Web Usage Mining." International Journal of Information Technology and Web Engineering 11, no. 2 (April 2016): 1–13. http://dx.doi.org/10.4018/ijitwe.2016040101.

Full text
Abstract:
Network congestion remains one of the main barriers to the continuing success of the internet and Web based services. In this background, proxy caching is one of the most successful solutions for civilizing the performance of Web since it reduce network traffic, Web server load and improves user perceived response time. Here, the most popular Web objects that are likely to be revisited in the near future are stored in the proxy server thereby it improves the Web response time and saves network bandwidth. The main component of Web caching is it cache replacement policy. It plays a key role in replacing existing objects when there is no room for new one especially when cache is full. Moreover, the conventional replacement policies are used in Web caching environments which provide poor network performance. These policies are suitable for memory caching since it involves fixed sized objects. But, Web caching which involves objects of varying size and hence there is a need for an efficient policy that works better in Web cache environment. Moreover, most of the existing Web caching policies have considered few factors and ignored the factors that have impact on the efficiency of Web proxy caching. Hence, it is decided to propose a novel policy for Web cache environment. The proposed policy includes size, cost, frequency, ageing, time of entry into the cache and popularity of Web objects in cache removal policy. It uses the Web usage mining as a technique to improve Web caching policy. Also, empirical analyses shows that proposed policy performs better than existing policies in terms of various performance metrics such as hit rate and byte hit rate.
APA, Harvard, Vancouver, ISO, and other styles
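
The policy summarised above scores Web objects on size, cost, frequency, ageing, entry time, and popularity rather than recency alone, with the weighting informed by Web usage mining. Neither the weights nor the mining step are reproduced here; the snippet below is a generic GDSF-style (Greedy-Dual-Size-Frequency) scoring sketch that combines a few of those factors:

```python
# Multi-factor eviction score combining frequency, retrieval cost, size, and an
# aging term. Illustrative stand-in, not the paper's policy.

import time

class ScoredProxyCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.objects = {}          # url -> dict(size, cost, freq, entered)

    def _score(self, meta, now):
        age = now - meta["entered"] + 1.0
        return meta["freq"] * meta["cost"] / (meta["size"] * age)

    def admit(self, url, size, cost):
        now = time.time()
        if url in self.objects:
            self.objects[url]["freq"] += 1
            return
        while self.used + size > self.capacity and self.objects:
            victim = min(self.objects, key=lambda u: self._score(self.objects[u], now))
            self.used -= self.objects.pop(victim)["size"]
        if self.used + size <= self.capacity:
            self.objects[url] = {"size": size, "cost": cost, "freq": 1, "entered": now}
            self.used += size


if __name__ == "__main__":
    cache = ScoredProxyCache(capacity_bytes=100_000)
    cache.admit("/a.html", size=40_000, cost=1.0)
    cache.admit("/b.jpg", size=50_000, cost=2.0)
    cache.admit("/c.mp4", size=30_000, cost=1.0)   # evicts /a.html, the lowest-scoring object
    print(sorted(cache.objects))                   # ['/b.jpg', '/c.mp4']
```
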
26

Ma, Chunguang, Lei Zhang, Songtao Yang, and Xiaodong Zheng. "Hiding Yourself Behind Collaborative Users When Using Continuous Location-Based Services." Journal of Circuits, Systems and Computers 26, no. 07 (March 17, 2017): 1750119. http://dx.doi.org/10.1142/s0218126617501195.

Full text
Abstract:
The prosperity of location-based services (LBSs) makes more and more people pay close attention to personal privacy. In order to preserve users privacy, several schemes utilized a trusted third party (TTP) to obfuscate users, but these schemes were suspected as the TTP may become the single point of failure or service performance bottleneck. To alleviate the suspicion, schemes with collaborative users to achieve [Formula: see text]-anonymity were proposed. In these schemes, users equipped with short-range communication devices could communicate with adjacent users to establish an anonymous group. With this group, the user can obfuscate and hide herself behind at least [Formula: see text] other users. However, these schemes are usually more efficient in snapshot services than continuous ones. To cope with the inadequacy, with the help of caching in mobile devices, we propose a query information blocks random exchange and results caching scheme (short for CaQBE). In this scheme, a particular user is hidden behind collaborative users in snapshot service, and then the caches further preserve the privacy in continuous service. In case of the active adversary launching the query correlation attack and the passive adversary launching the impersonation attack, a random collaborative user selection and a random block exchange algorithm are also utilized. Then based on the feature of entropy, a metric to measure the privacy of the user against attacks from the active and passive adversaries is proposed. Finally, security analysis and experimental comparison with other similar schemes further verify the optimal of our scheme in effectiveness of preservation and efficiency of performance.
APA, Harvard, Vancouver, ISO, and other styles
27

Papageorgiou, Apostolos, Marius Schatke, Stefan Schulte, and Ralf Steinmetz. "Lightweight Wireless Web Service Communication Through Enhanced Caching Mechanisms." International Journal of Web Services Research 9, no. 2 (April 2012): 42–68. http://dx.doi.org/10.4018/jwsr.2012040103.

Full text
Abstract:
Reducing the size of the wirelessly transmitted data during the invocation of third-party Web services is a worthwhile goal of many mobile application developers. Among many adaptation mechanisms that can be used for the mediation of such Web service invocations, the automated enhancement of caching mechanisms is a promising approach that can spare the re-transmission of entire content fields of the exchanged messages. However, it is usually impeded by technological constraints and by various other factors, such as the inherent risk of using responses that are not fresh, i.e., are not up-to-date. This paper presents the roadmap, the most important technical and algorithmic details, and a thorough evaluation of the first solution for generically and automatically enriching the communication with any third-party Web service in a way that cached responses can be exploited while a freshness of 100% is maintained.
APA, Harvard, Vancouver, ISO, and other styles
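
The abstract above is about reusing cached Web service responses while still guaranteeing 100% freshness. One common way to obtain that behaviour is validation: the full response body is re-sent only when its digest has changed. The sketch below shows that generic pattern under assumed names and is not the mediation mechanism actually built in the paper:

```python
# Validation-based caching of service responses: the full body crosses the
# network only when its digest changes, so cached copies are reused without
# ever serving stale data. Generic illustration only.

import hashlib

def digest(body: str) -> str:
    return hashlib.md5(body.encode("utf-8")).hexdigest()

class ValidatingClient:
    def __init__(self, service):
        self.service = service
        self.cache = {}            # request -> (body, digest)

    def invoke(self, request):
        cached = self.cache.get(request)
        known = cached[1] if cached else None
        status, body = self.service(request, known)      # the mediator checks the digest
        if status == "not_modified":
            return cached[0]                             # reuse cached body, still fresh
        self.cache[request] = (body, digest(body))
        return body


def toy_service(request, client_digest):
    body = f"<weather city='{request}'>sunny</weather>"  # pretend this is expensive to build
    if client_digest == digest(body):
        return "not_modified", None
    return "ok", body


if __name__ == "__main__":
    client = ValidatingClient(toy_service)
    print(client.invoke("Vienna"))   # full response transferred
    print(client.invoke("Vienna"))   # only the 'not modified' signal is needed
```
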
28

Hosanagar, Kartik, Ramayya Krishnan, John Chuang, and Vidyanand Choudhary. "Pricing and Resource Allocation in Caching Services with Multiple Levels of Quality of Service." Management Science 51, no. 12 (December 2005): 1844–59. http://dx.doi.org/10.1287/mnsc.1050.0420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Huang, Mingfeng, Yuxin Liu, Ning Zhang, Neal N. Xiong, Anfeng Liu, Zhiwen Zeng, and Houbing Song. "A Services Routing Based Caching Scheme for Cloud Assisted CRNs." IEEE Access 6 (2018): 15787–805. http://dx.doi.org/10.1109/access.2018.2815039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Abdelhamid, Sherin, Hossam S. Hassanein, and Glen Takahara. "On-Road Caching Assistance for Ubiquitous Vehicle-Based Information Services." IEEE Transactions on Vehicular Technology 64, no. 12 (December 2015): 5477–92. http://dx.doi.org/10.1109/tvt.2015.2480711.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Wauters, Tim, Wim Van De Meerssche, Peter Backx, Filip De Turck, Bart Dhoedt, Piet Demeester, Tom Van Caenegem, and Erwin Six. "Proxy caching algorithms and implementation for time-shifted TV services." European Transactions on Telecommunications 19, no. 2 (2008): 111–22. http://dx.doi.org/10.1002/ett.1181.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Tran, Anh-Tien, Nhu-Ngoc Dao, and Sungrae Cho. "Bitrate Adaptation for Video Streaming Services in Edge Caching Systems." IEEE Access 8 (2020): 135844–52. http://dx.doi.org/10.1109/access.2020.3011517.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Liu, Ran, Edmund Yeh, and Atilla Eryilmaz. "Proactive Caching for Low Access-Delay Services under Uncertain Predictions." ACM SIGMETRICS Performance Evaluation Review 47, no. 1 (December 17, 2019): 89–90. http://dx.doi.org/10.1145/3376930.3376987.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Tadrous, John, Atilla Eryilmaz, and Hesham El Gamal. "Joint Smart Pricing and Proactive Content Caching for Mobile Services." IEEE/ACM Transactions on Networking 24, no. 4 (August 2016): 2357–71. http://dx.doi.org/10.1109/tnet.2015.2453793.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Ran, Edmund Yeh, and Atilla Eryilmaz. "Proactive Caching for Low Access-Delay Services under Uncertain Predictions." Proceedings of the ACM on Measurement and Analysis of Computing Systems 3, no. 1 (March 26, 2019): 1–46. http://dx.doi.org/10.1145/3322205.3311073.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Kim, Taekook, and Eui-Jik Kim. "Hybrid storage-based caching strategy for content delivery network services." Multimedia Tools and Applications 74, no. 5 (August 16, 2014): 1697–709. http://dx.doi.org/10.1007/s11042-014-2215-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Liu, Fang, Zhenyuan Zhang, Zunfu Wang, and Yuting Xing. "ECC: Edge Collaborative Caching Strategy for Differentiated Services Load-Balancing." Computers, Materials & Continua 69, no. 2 (2021): 2045–60. http://dx.doi.org/10.32604/cmc.2021.018303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Raghunathan, A., and K. Murugesan. "Performance-Enhanced Caching Scheme for Web Clusters for Dynamic Content." International Journal of Business Data Communications and Networking 7, no. 3 (July 2011): 16–36. http://dx.doi.org/10.4018/jbdcn.2011070102.

Full text
Abstract:
In order to improve the QoS of applications, clusters of web servers are increasingly used in web services. Caching helps improve performance in web servers, but is largely exploited only for static web content. With more web applications using backend databases today, caching of dynamic content has a crucial role in web performance. This paper presents a set of cache management schemes for handling dynamic data in web clusters by sharing cached contents. These schemes use either automatic or expiry-based cache validation, and work with any type of request distribution. The techniques improve response by utilizing the caches efficiently and reducing redundant database accesses by web servers while ensuring cache consistency. The authors present caching schemes for both horizontal and vertical cluster architectures. Simulations show an appreciable performance rise in response times of queries in clustered web servers.
APA, Harvard, Vancouver, ISO, and other styles
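
The scheme above caches dynamic, database-backed content in a Web cluster and uses automatic or expiry-based validation to avoid redundant database accesses. The following is a minimal single-node sketch of expiry-based caching of query results; the paper's cluster-wide sharing and consistency handling are not reproduced, and the TTL is an arbitrary assumption:

```python
# Expiry-based caching of database query results for one web-server node.
# Illustrative sketch of the general idea only.

import time

class QueryResultCache:
    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self.entries = {}          # sql text -> (rows, cached_at)

    def fetch(self, sql, run_query):
        hit = self.entries.get(sql)
        if hit and time.time() - hit[1] < self.ttl:
            return hit[0]                      # fresh enough: skip the database
        rows = run_query(sql)                  # redundant accesses avoided until expiry
        self.entries[sql] = (rows, time.time())
        return rows

    def invalidate(self, sql):
        self.entries.pop(sql, None)            # explicit invalidation on writes


if __name__ == "__main__":
    db_calls = 0

    def run_query(sql):
        global db_calls
        db_calls += 1
        return [("item", 42)]

    cache = QueryResultCache(ttl_seconds=5.0)
    cache.fetch("SELECT * FROM products", run_query)
    cache.fetch("SELECT * FROM products", run_query)   # served from the cache
    print("database hits:", db_calls)                  # -> 1
```
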
39

Sun, Yunlei, Xiuquan Qiao, Wei Tan, Bo Cheng, Ruisheng Shi, and Junliang Chen. "A Low-Delay, Light-Weight Publish/Subscribe Architecture for Delay-Sensitive IOT Services." International Journal of Web Services Research 10, no. 3 (July 2013): 60–81. http://dx.doi.org/10.4018/ijwsr.2013070104.

Full text
Abstract:
In order to build a low-latency and light-weight publish/subscribe (pub/sub) system for delay-sensitive IOT services, the authors propose an efficient and scalable broker architecture named Grid Quorum-based pub/sub (GQPS). As a key component in Event-Driven Service-Oriented Architecture (EDSOA) for IOT services, this architecture organizes multiple pub/sub brokers into a Quorum-based peer-to-peer topology for efficient topic searching. It also leverages a topic searching algorithm and a one-hop caching strategy to minimize the search latency. Light-weight RESTful interfaces make the authors’ GQPS more suitable for IOT services. Cost analysis and experimental study demonstrate that the GQPS achieves a significant performance gain in search satisfaction without compromising search cost. The authors apply the proposed GQPS in the District Heating Control and Information Service System in Beijing, China. This system validates the effectiveness of GQPS.
APA, Harvard, Vancouver, ISO, and other styles
40

Lei, Fangyuan, Jun Cai, Qingyun Dai, and Huimin Zhao. "Deep Learning Based Proactive Caching for Effective WSN-Enabled Vision Applications." Complexity 2019 (May 2, 2019): 1–12. http://dx.doi.org/10.1155/2019/5498606.

Full text
Abstract:
Wireless Sensor Networks (WSNs) have a wide range of applications scenarios in computer vision, from pedestrian detection to robotic visual navigation. In response to the growing visual data services in WSNs, we propose a proactive caching strategy based on Stacked Sparse Autoencoder (SSAE) to predict content popularity (PCDS2AW). Firstly, based on Software Defined Network (SDN) and Network Function Virtualization (NFV) technologies, a distributed deep learning network SSAE is constructed in the sink nodes and control nodes of the WSN network. Then, the SSAE network structure parameters and network model parameters are optimized through training. The proactive cache strategy implementation procedure is divided into four steps. (1) The SDN controller is responsible for dynamically collecting user request data package information in the WSNs network. (2) The SSAEs predicts the packet popularity based on the SDN controller obtaining user request data. (3) The SDN controller generates a corresponding proactive cache strategy according to the popularity prediction result. (4) Implement the proactive caching strategy at the WSNs cache node. In the simulation, we compare the influence of spatiotemporal data on the SSAE network structure. Compared with the classic caching strategy Hash + LRU, Betw + LRU, and classic prediction algorithms SVM and BPNN, the proposed PCDS2AW proactive caching strategy can significantly improve WSN performance.
APA, Harvard, Vancouver, ISO, and other styles
41

Lee, Seung-Won, Hwa-Sei Lee, Seong-Ho Park, and Ki-Dong Chung. "A Cooperative Proxy Caching for Continuous Media Services in Mobile Environments." KIPS Transactions:PartB 11B, no. 6 (October 1, 2004): 691–700. http://dx.doi.org/10.3745/kipstb.2004.11b.6.691.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Romero, Pablo, Franco Robledo, Pablo Rodríguez-Bocca, and Claudia Rostagnol. "Mathematical Analysis of caching policies and cooperation in YouTube-like services." Electronic Notes in Discrete Mathematics 41 (June 2013): 221–28. http://dx.doi.org/10.1016/j.endm.2013.05.096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Hasnain, Muhammad, Muhammad Fermi Pasha, and Imran Ghani. "Drupal core 8 caching mechanism for scalability improvement of web services." Software Impacts 3 (February 2020): 100014. http://dx.doi.org/10.1016/j.simpa.2020.100014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Zhang, Wei, Jian Xiong, Lin Gui, Bo Liu, Meikang Qiu, and Zhiping Shi. "Distributed Caching Mechanism for Popular Services Distribution in Converged Overlay Networks." IEEE Transactions on Broadcasting 66, no. 1 (March 2020): 66–77. http://dx.doi.org/10.1109/tbc.2019.2902818.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Burckhardt, Sebastian, and Tim Coppieters. "Reactive caching for composed services: polling at the speed of push." Proceedings of the ACM on Programming Languages 2, OOPSLA (October 24, 2018): 1–28. http://dx.doi.org/10.1145/3276522.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Linder, H., H. D. Clausen, and B. Collini-Nocker. "Satellite Internet services using DVB/MPEG-2 and multicast Web caching." IEEE Communications Magazine 38, no. 6 (June 2000): 156–61. http://dx.doi.org/10.1109/35.846088.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Chan, Chen-Lung, Shih-Yu Huang, and Jia-Shung Wang. "Performance Analysis of Proxy Caching for VOD Services With Heterogeneous Clients." IEEE Transactions on Communications 55, no. 11 (November 2007): 2142–51. http://dx.doi.org/10.1109/tcomm.2007.908524.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Noh, Hyunmin, and Hwangjun Song. "Progressive Caching System for Video Streaming Services Over Content Centric Network." IEEE Access 7 (2019): 47079–89. http://dx.doi.org/10.1109/access.2019.2909563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Zhang, Shaobo, Kim-Kwang Raymond Choo, Qin Liu, and Guojun Wang. "Enhancing privacy through uniform grid and caching in location-based services." Future Generation Computer Systems 86 (September 2018): 881–92. http://dx.doi.org/10.1016/j.future.2017.06.022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Zhao, Hui, Qinghua Zheng, Weizhan Zhang, and Haifei Li. "MSC: a multi-version shared caching for multi-bitrate VoD services." Multimedia Tools and Applications 75, no. 4 (November 26, 2014): 1923–45. http://dx.doi.org/10.1007/s11042-014-2380-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles