Selected scientific literature on the topic "Caching services"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Browse the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Caching services".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online if it is present in the metadata.

Journal articles on the topic "Caching services"

1

Premkumar, Vandana, e Vinil Bhandari. "Caching in Amazon Web Services". International Journal of Computer Trends and Technology 69, n. 4 (25 aprile 2021): 1–5. http://dx.doi.org/10.14445/22312803/ijctt-v69i4p101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kim, Yunkon, e Eui-Nam Huh. "EDCrammer: An Efficient Caching Rate-Control Algorithm for Streaming Data on Resource-Limited Edge Nodes". Applied Sciences 9, n. 12 (23 giugno 2019): 2560. http://dx.doi.org/10.3390/app9122560.

Full text
Abstract:
This paper explores data caching as a key factor of edge computing. State-of-the-art research of data caching on edge nodes mainly considers reactive and proactive caching, and machine learning based caching, which could be a heavy task for edge nodes. However, edge nodes usually have relatively lower computing resources than cloud datacenters as those are geo-distributed from the administrator. Therefore, a caching algorithm should be lightweight for saving computing resources on edge nodes. In addition, the data caching should be agile because it has to support high-quality services on edge nodes. Accordingly, this paper proposes a lightweight, agile caching algorithm, EDCrammer (Efficient Data Crammer), which performs agile operations to control caching rate for streaming data by using the enhanced PID (Proportional-Integral-Differential) controller. Experimental results using this lightweight, agile caching algorithm show its significant value in each scenario. In four common scenarios, the desired cache utilization was reached in 1.1 s on average and then maintained within a 4–7% deviation. The cache hit ratio is about 96%, and the optimal cache capacity is around 1.5 MB. Thus, EDCrammer can help distribute the streaming data traffic to the edge nodes, mitigate the uplink load on the central cloud, and ultimately provide users with high-quality video services. We also hope that EDCrammer can improve overall service quality in 5G environment, Augmented Reality/Virtual Reality (AR/VR), Intelligent Transportation System (ITS), Internet of Things (IoT), etc.
APA, Harvard, Vancouver, ISO, and other styles
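
The mechanism this abstract describes, a PID loop that steers cache utilization toward a target by adjusting the caching rate, can be pictured with a short sketch. The Python snippet below is a minimal, hypothetical illustration under assumed gains, target, and a toy consumption model; it is not the enhanced PID controller or the evaluation setup of EDCrammer.

```python
# Minimal sketch of PID-style caching rate control (illustrative only,
# not the EDCrammer algorithm itself). Gains and targets are made up.

class CachingRateController:
    def __init__(self, kp=0.8, ki=0.2, kd=0.05, target_utilization=0.9):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target_utilization      # desired fraction of cache in use
        self.integral = 0.0
        self.prev_error = 0.0

    def next_rate(self, utilization, dt=0.1):
        """Return a caching (fill) rate in [0, 1] for the current utilization."""
        error = self.target - utilization     # positive: cache is under-filled
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, min(1.0, output))     # clamp to a valid rate


if __name__ == "__main__":
    controller = CachingRateController()
    utilization = 0.0
    for step in range(20):
        rate = controller.next_rate(utilization)
        # Toy plant model: utilization grows with the caching rate and decays
        # as cached chunks are consumed by streaming clients.
        utilization = max(0.0, min(1.0, utilization + 0.3 * rate - 0.02))
        print(f"step={step:2d} rate={rate:.2f} utilization={utilization:.2f}")
```
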
3

Xu, Xiaolong, Zijie Fang, Jie Zhang, Qiang He, Dongxiao Yu, Lianyong Qi e Wanchun Dou. "Edge Content Caching with Deep Spatiotemporal Residual Network for IoV in Smart City". ACM Transactions on Sensor Networks 17, n. 3 (21 giugno 2021): 1–33. http://dx.doi.org/10.1145/3447032.

Full text
Abstract:
Internet of Vehicles (IoV) enables numerous in-vehicle applications for smart cities, driving increasing service demands for processing various contents (e.g., videos). Generally, for efficient service delivery, the contents from the service providers are processed on the edge servers (ESs), as edge computing offers vehicular applications low-latency services. However, due to the reusability of the same contents required by different distributed vehicular users, processing the copies of the same contents repeatedly in an edge server leads to a waste of resources (e.g., storage, computation, and bandwidth) in ESs. Therefore, it is a challenge to provide high-quality services while guaranteeing the resource efficiency with edge content caching. To address the challenge, an edge content caching method for smart cities with service requirement prediction, named E-Cache, is proposed. First, the future service requirements from the vehicles are predicted based on the deep spatiotemporal residual network (ST-ResNet). Then, preliminary content caching schemes are elaborated based on the predicted service requirements, which are further adjusted by a many-objective optimization aiming at minimizing the execution time and the energy consumption of the vehicular services. Eventually, experimental evaluations prove the efficiency and effectiveness of E-Cache with spatiotemporal traffic trajectory big data.
APA, Harvard, Vancouver, ISO, and other styles
4

Terry, Douglas B., e Venugopalan Ramasubramanian. "Caching XML Web Services for Mobility". Queue 1, n. 3 (maggio 2003): 70–78. http://dx.doi.org/10.1145/846057.864024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Edris, Ed Kamya Kiyemba, Mahdi Aiash e Jonathan Loo. "DCSS Protocol for Data Caching and Sharing Security in a 5G Network". Network 1, n. 2 (7 luglio 2021): 75–94. http://dx.doi.org/10.3390/network1020006.

Full text
Abstract:
Fifth Generation mobile networks (5G) promise to make network services provided by various Service Providers (SP) such as Mobile Network Operators (MNOs) and third-party SPs accessible from anywhere by the end-users through their User Equipment (UE). These services will be pushed closer to the edge for quick, seamless, and secure access. After being granted access to a service, the end-user will be able to cache and share data with other users. However, security measures should be in place for SP not only to secure the provisioning and access of those services but also, should be able to restrict what the end-users can do with the accessed data in or out of coverage. This can be facilitated by federated service authorization and access control mechanisms that restrict the caching and sharing of data accessed by the UE in different security domains. In this paper, we propose a Data Caching and Sharing Security (DCSS) protocol that leverages federated authorization to provide secure caching and sharing of data from multiple SPs in multiple security domains. We formally verify the proposed DCSS protocol using ProVerif and applied pi-calculus. Furthermore, a comprehensive security analysis of the security properties of the proposed DCSS protocol is conducted.
APA, Harvard, Vancouver, ISO, and other styles
6

Zhao, Hongna, Chunxi Li, Yongxiang Zhao, Baoxian Zhang e Cheng Li. "Transcoding Based Video Caching Systems: Model and Algorithm". Wireless Communications and Mobile Computing 2018 (1 agosto 2018): 1–8. http://dx.doi.org/10.1155/2018/1818690.

Full text
Abstract:
The explosive demand of online video watching brings huge bandwidth pressure to cellular networks. Efficient video caching is critical for providing high-quality streaming Video-on-Demand (VoD) services to satisfy the rapid increasing demands of online video watching from mobile users. Traditional caching algorithms typically treat individual video files separately and they tend to keep the most popular video files in cache. However, in reality, one video typically corresponds to multiple different files (versions) with different sizes and also different video resolutions. Thus, caching of such files for one video leads to a lot of redundancy since one version of a video can be utilized to produce other versions of the video by using certain video coding techniques. Recently, fog computing pushes computing power to edge of network to reduce distance between service provider and users. In this paper, we take advantage of fog computing and deploy cache system at network edge. Specifically, we study transcoding based video caching in cellular networks where cache servers are deployed at the edge of cellular network for providing improved quality of online VoD services to mobile users. By using transcoding, a cached video can be used to convert to different low-quality versions of the video as needed by different users in real time. We first formulate the transcoding based caching problem as integer linear programming problem. Then we propose a Transcoding based Caching Algorithm (TCA), which iteratively finds the placement leading to the maximal delay gain among all possible choices. We deduce the computational complexity of TCA. Simulation results demonstrate that TCA significantly outperforms traditional greedy caching algorithm with a decrease of up to 40% in terms of average delivery delay.
APA, Harvard, Vancouver, ISO, and other styles
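
The greedy idea behind TCA can be illustrated compactly: caching one high-quality copy lets lower-quality requests be served by real-time transcoding, and candidates are added in order of the delay gain they bring. The Python sketch below is a hypothetical illustration (the gain values, the per-megabyte ranking, and all names are assumptions); it does not reproduce the paper's ILP formulation or the TCA iteration itself.

```python
# Illustrative greedy placement for transcoding-based caching (not the
# paper's TCA implementation). A cached high-quality version can also
# serve requests for lower qualities via real-time transcoding.

def greedy_transcoding_cache(videos, capacity_mb):
    """videos: list of dicts with 'name', 'size_mb', 'delay_gain', where
    delay_gain already accounts for all lower-quality requests that the
    cached (highest-quality) version can serve after transcoding."""
    placement, used = [], 0.0
    # Rank candidates by delay gain per MB of cache space they occupy.
    for video in sorted(videos, key=lambda v: v["delay_gain"] / v["size_mb"],
                        reverse=True):
        if used + video["size_mb"] <= capacity_mb:
            placement.append(video["name"])
            used += video["size_mb"]
    return placement, used


if __name__ == "__main__":
    catalog = [
        {"name": "clip-A-1080p", "size_mb": 800, "delay_gain": 120.0},
        {"name": "clip-B-1080p", "size_mb": 500, "delay_gain": 90.0},
        {"name": "clip-C-720p",  "size_mb": 300, "delay_gain": 40.0},
    ]
    print(greedy_transcoding_cache(catalog, capacity_mb=1000))
```
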
7

STOLLBERG, MICHAEL, JÖRG HOFFMANN e DIETER FENSEL. "A CACHING TECHNIQUE FOR OPTIMIZING AUTOMATED SERVICE DISCOVERY". International Journal of Semantic Computing 05, n. 01 (marzo 2011): 1–31. http://dx.doi.org/10.1142/s1793351x11001146.

Full text
Abstract:
The development of sophisticated technologies for service-oriented architectures (SOA) is a grand challenge. A promising approach is the employment of semantic technologies to better support the service usage cycle. Most existing solutions show significant deficits in the computational performance, which hampers the applicability in large-scale SOA systems. We present an optimization technique for automated service discovery — one of the central operations in semantically enabled SOA environments — that can ensure a sophisticated performance while maintaining a high retrieval accuracy. The approach is based on goals that formally describe client objectives, and it employs a caching mechanism for enhancing the computational performance of a two-phased discovery framework. At design time, the suitable services for generic and reusable goal descriptions are determined by semantic matchmaking. The result is captured in a continuously updated graph structure that organizes goals and services with respect to the requested and provided functionalities. This is exploited at runtime in order to detect the suitable services for concrete client requests with minimal effort. We formalize the approach within a first-order logic framework, and define the graph structure along with the associated storage and retrieval algorithms. An empirical evaluation shows that significant performance improvements can be achieved.
APA, Harvard, Vancouver, ISO, and other styles
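
The design-time/runtime split described in the abstract above amounts to caching matchmaking results keyed by goal descriptions so that runtime discovery becomes a cheap lookup. The Python sketch below is a hypothetical, heavily simplified illustration: the goal "normalization", the registry, and the invalidation rule are assumptions, and the paper's goal graph and first-order-logic machinery are not modelled.

```python
# Hypothetical sketch: cache discovery results keyed by a normalized goal,
# falling back to (expensive) semantic matchmaking on a miss.

class DiscoveryCache:
    def __init__(self, matchmaker):
        self.matchmaker = matchmaker     # callable: goal -> list of service ids
        self.cache = {}

    @staticmethod
    def normalize(goal):
        # Toy normalization: order-insensitive set of requested capabilities.
        return frozenset(goal)

    def discover(self, goal):
        key = self.normalize(goal)
        if key not in self.cache:                      # miss: run matchmaking once
            self.cache[key] = self.matchmaker(goal)
        return self.cache[key]                         # hit: cheap lookup

    def invalidate(self):
        """Call when the service registry changes."""
        self.cache.clear()


if __name__ == "__main__":
    registry = {"svcA": {"book-flight"}, "svcB": {"book-flight", "book-hotel"}}
    match = lambda goal: [s for s, caps in registry.items() if set(goal) <= caps]
    dc = DiscoveryCache(match)
    print(dc.discover(["book-flight"]))          # computed, then cached
    print(dc.discover(["book-flight"]))          # served from the cache
```
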
8

Li, Yan, e Zheng Wan. "Blockchain-Enabled Intelligent Video Caching and Transcoding in Clustered MEC Networks". Security and Communication Networks 2021 (7 settembre 2021): 1–17. http://dx.doi.org/10.1155/2021/7443260.

Full text
Abstract:
In recent years, the number of smart devices has exploded, leading to an unprecedented increase in demand for live video and video-on-demand (VoD) services. Also, the privacy of video providers and requesters and the security of requested video data are much more threatened. In order to solve these issues, in this paper, a blockchain-enabled CMEC video transmission model (Bl-CMEC) for intelligent video caching and transcoding will be proposed to ensure the transactions’ transparency, system security, user information privacy, and integrity of the video data, enhance the ability of servers to actively cache popular video content in the CMEC system, and realize transcoding functions at network edge nodes. Furthermore, we chose a scheme based on deep reinforcement learning (DRL) to intelligently access the intracluster joint caching and transcoding decisions. Then, the joint video caching and transcoding decision smart contract is specially designed to automatically manage the transaction process of the joint caching and transcoding service, which records key information of joint caching and transcoding transactions and payment information on a continuous blockchain. The simulation results demonstrate that the proposed Bl-CMEC framework not only can provide users with better QoE performance for video streaming services but also can ensure the security, integrity, and consistency for the video providers, video requesters, and video data.
APA, Harvard, Vancouver, ISO, and other styles
9

Leng, Tao, Yuanyuan Xu, Gaofeng Cui e Weidong Wang. "Caching-Aware Intelligent Handover Strategy for LEO Satellite Networks". Remote Sensing 13, n. 11 (7 giugno 2021): 2230. http://dx.doi.org/10.3390/rs13112230.

Full text
Abstract:
Recently, many Low Earth Orbit (LEO) satellite networks are being implemented to provide seamless communication services for global users. Owing to the high mobility of LEO satellites, the handover strategy has become one of the most important topics for LEO satellite systems. However, the limited on-board caching resources of satellites make it difficult to guarantee the handover performance. In this paper, we propose a multiple-attribute decision handover strategy jointly considering three factors, which are caching capacity, remaining service time, and the remaining idle channels of the satellites. Furthermore, a caching-aware intelligent handover strategy is given based on deep reinforcement learning (DRL) to maximize the long-term benefits of the system. Compared with the traditional strategies, the proposed strategy reduces the handover failure rate by up to nearly 81% when the system caching occupancy reaches 90%, and it has a lower call blocking rate in high user arrival scenarios. Simulation results show that this strategy can effectively mitigate the handover failure rate due to caching resource occupation, as well as flexibly allocate channel resources to reduce call blocking.
APA, Harvard, Vancouver, ISO, and other styles
10

Qin, Yana, Danye Wu, Zhiwei Xu, Jie Tian e Yujun Zhang. "Adaptive In-Network Collaborative Caching for Enhanced Ensemble Deep Learning at Edge". Mathematical Problems in Engineering 2021 (25 settembre 2021): 1–14. http://dx.doi.org/10.1155/2021/9285802.

Full text
Abstract:
To enhance the quality and speed of data processing and protect the privacy and security of the data, edge computing has been extensively applied to support data-intensive intelligent processing services at the edge. Among these data-intensive services, ensemble learning-based services can naturally leverage the distributed computation and storage resources at edge devices to achieve efficient data collection, processing, and analysis. Collaborative caching has been applied in edge computing to support services close to the data source, in order to use the limited resources at edge devices to support high-performance ensemble learning solutions. To achieve this goal, we propose an adaptive in-network collaborative caching scheme for ensemble learning at the edge. First, an efficient data representation structure is proposed to record cached data among different nodes. In addition, we design a collaboration scheme to facilitate edge nodes to cache valuable data for local ensemble learning, by scheduling local caching according to a summarization of data representations from different edge nodes. Our extensive simulations demonstrate the high performance of the proposed collaborative caching scheme, which significantly reduces the learning latency and the transmission overhead.
APA, Harvard, Vancouver, ISO, and other styles
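
The collaboration scheme summarized above rests on nodes exchanging compact descriptions of what they hold so that neighbours can be consulted before going to the origin. The sketch below is a hypothetical Python illustration of that pattern using plain sets as summaries; the actual data representation structure and scheduling policy proposed in the paper are not reproduced.

```python
# Hypothetical sketch of in-network collaborative caching: each node keeps
# a local cache plus a directory of neighbour summaries, and looks sideways
# before fetching from the origin.

class EdgeNode:
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity
        self.cache = {}                    # data_id -> payload
        self.neighbour_summaries = {}      # neighbour name -> set of data_ids

    def summary(self):
        return set(self.cache)

    def share_summary(self, neighbours):
        for nb in neighbours:
            nb.neighbour_summaries[self.name] = self.summary()

    def get(self, data_id, neighbours, origin_fetch):
        if data_id in self.cache:                          # local hit
            return self.cache[data_id], "local"
        for nb in neighbours:                              # collaborative hit
            if data_id in self.neighbour_summaries.get(nb.name, set()):
                payload = nb.cache.get(data_id)
                if payload is not None:                    # summary may be stale
                    return payload, f"neighbour:{nb.name}"
        payload = origin_fetch(data_id)                    # miss: go to origin
        if len(self.cache) < self.capacity:
            self.cache[data_id] = payload
        return payload, "origin"


if __name__ == "__main__":
    a, b = EdgeNode("a", 10), EdgeNode("b", 10)
    b.cache["model-shard-3"] = b"..."
    b.share_summary([a])
    print(a.get("model-shard-3", [b], origin_fetch=lambda _: b"from-cloud"))
```
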
More sources

Theses on the topic "Caching services"

1

Drolia, Utsav. "Adaptive Distributed Caching for Scalable Machine Learning Services". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1004.

Full text
Abstract:
Applications for Internet-enabled devices use machine learning to process captured data to make intelligent decisions or provide information to users. Typically, the computation to process the data is executed in cloud-based backends. The devices are used for sensing data, offloading it to the cloud, receiving responses and acting upon them. However, this approach leads to high end-to-end latency due to communication over the Internet. This dissertation proposes reducing this response time by minimizing offloading, and pushing computation close to the source of the data, i.e. to edge servers and devices themselves. To adapt to the resource constrained environment at the edge, it presents an approach that leverages spatiotemporal locality to push subparts of the model to the edge. This approach is embodied in a distributed caching framework, Cachier. Cachier is built upon a novel caching model for recognition, and is distributed across edge servers and devices. The analytical caching model for recognition provides a formulation for expected latency for recognition requests in Cachier. The formulation incorporates the effects of compute time and accuracy. It also incorporates network conditions, thus providing a method to compute expected response times under various conditions. This is utilized as a cost function by Cachier, at edge servers and devices. By analyzing requests at the edge server, Cachier caches relevant parts of the trained model at edge servers, which is used to respond to requests, minimizing the number of requests that go to the cloud. Then, Cachier uses context-aware prediction to prefetch parts of the trained model onto devices. The requests can then be processed on the devices, thus minimizing the number of offloaded requests. Finally, Cachier enables cooperation between nearby devices to allow exchanging prefetched data, reducing the dependence on remote servers even further. The efficacy of Cachier is evaluated by using it with an art recognition application. The application is driven using real world traces gathered at museums. By conducting a large-scale study with different control variables, we show that Cachier can lower latency, increase scalability and decrease infrastructure resource usage, while maintaining high accuracy.
APA, Harvard, Vancouver, ISO, and other styles
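
The analytical caching model mentioned in this abstract can be reduced, at its simplest, to an expected-latency expression: every request pays the edge lookup, and only misses additionally pay the offload to the cloud. The function below is a hypothetical rendering of that idea in Python; the dissertation's formulation also folds in recognition accuracy and network conditions, which are omitted here.

```python
def expected_latency(hit_rate, t_edge_lookup, t_cloud_offload):
    """Hypothetical expected request latency for an edge recognition cache.

    hit_rate        : probability the cached sub-model answers the request
    t_edge_lookup   : time to search the cached model at the edge (seconds)
    t_cloud_offload : extra network + compute time when falling back to cloud
    """
    return t_edge_lookup + (1.0 - hit_rate) * t_cloud_offload


if __name__ == "__main__":
    # Toy comparison: a warmer edge cache lowers the expected latency.
    for hit_rate in (0.2, 0.5, 0.9):
        print(hit_rate, round(expected_latency(hit_rate, 0.05, 0.40), 3))
```
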
2

Gouta, Ali. "Caching and prefetching for efficient video services in mobile networks". Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S001/document.

Full text
Abstract:
Les réseaux cellulaires ont connu une croissance phénoménale du trafic alimentée par les nouvelles technologies d'accès cellulaire. Cette croissance est en grande partie tirée par l'émergence du trafic HTTP adaptatif streaming (HAS) comme une nouvelle technologie de diffusion des contenus vidéo. Le principe du HAS est de rendre disponible plusieurs qualités de la même vidéo en ligne et que les clients choisissent la meilleure qualité qui correspond à leur bande passante. Chaque niveau d'encodage est segmenté en des chunks, qui dont la durée varie de 2 à 10 secondes. L'émergence du HAS a introduit des nouvelles contraintes sur les systèmes de livraison des contenus vidéo en particulier sur les systèmes de caches. Dans ce contexte, nous menons une analyse détaillée des données du trafic HAS collecté en France et fournie par le plus grand opérateur de téléphonie mobile du pays. Tout d'abord, nous analysons et modélisons le comportement des clients qui demandent des contenus VoD et live. Ces analyses nous ont permis d'identifier les facteurs qui impactent la performance des systèmes de cache et de proposer un nouveau algorithme de remplacement de contenus qu'on appelle WA-LRU. WA-LRU exploite la localité temporelle des chunks dans le contenu et la connaissance de la charge du trafic dans le réseau afin d'améliorer la performance du cache. Ensuite, nous analysons et modélisons la logique d'adaptation entre les qualités vidéo basés sur des observations empiriques. Nous montrons que le changement fréquent entre les encodages réduit considérablement la performance des systèmes de cache. Dans ce contexte, nous présentons CF-DASH une implémentation libre d'un player DASH qui vise à réduire les changements fréquents entre qualités, assure une bonne QoE des clients et améliore la performance des systèmes de caches. La deuxième partie de la thèse est dédié à la conception, simulation et implémentation d'une solution de préchargement des contenus vidéo sur terminaux mobiles. Nous concevons un système que nous appelons «Central Predictor System (CPsys)" qui prédit le comportement des clients mobiles et leurs consommations des vidéos. Nous évaluons CPSys avec des traces de trafic réel. Enfin, nous développons une preuve de concept de notre solution de préchargement
Recently, cellular networks have witnessed a phenomenal growth of traffic fueled by new high speed broadband cellular access technologies. This growth is in large part driven by the emergence of the HTTP Adaptive Streaming (HAS) as a new video delivery method. In HAS, several qualities of the same videos are made available in the network so that clients can choose the quality that best fits their bandwidth capacity. This strongly impacts the viewing pattern of the clients, their switching behavior between video qualities, and thus beyond on content delivery systems. In this context, we provide an analysis of a real HAS dataset collected in France and provided by the largest French mobile operator. Firstly, we analyze and model the viewing patterns of VoD and live streaming HAS sessions and we propose a new cache replacement strategy, named WA-LRU. WA-LRU leverages the time locality of video segments within the HAS content. We show that WA-LRU improves the performance of the cache. Second, we analyze and model the adaptation logic between the video qualities based on empirical observations. We show that high switching behaviors lead to sub optimal caching performance, since several versions of the same content compete to be cached. In this context we investigate the benefits of a Cache Friendly HAS system (CF-DASH) which aims at improving the caching efficiency in mobile networks and to sustain the quality of experience of mobile clients. Third, we investigate the mobile video prefetching opportunities. We show that CPSys can achieve high performance as regards prediction correctness and network utilization efficiency. We further show that CPSys outperforms other prefetching schemes from the state of the art. At the end, we provide a proof-of-concept implementation of our prefetching system
APA, Harvard, Vancouver, ISO, and other styles
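
For readers who want the baseline that WA-LRU (described in the English abstract above) builds on, the following is a plain LRU chunk cache in Python with an admission hook where a workload-aware rule could be plugged in. The hook and the example rule are purely illustrative assumptions; the thesis's actual WA-LRU weighting by chunk position and network load is not reproduced here.

```python
from collections import OrderedDict

# Plain LRU chunk cache with an admission hook. WA-LRU (per the thesis)
# additionally weights decisions by chunk locality and network load; that
# logic is only stubbed out here as the hypothetical `admit` callable.

class LRUChunkCache:
    def __init__(self, capacity, admit=lambda chunk_id: True):
        self.capacity = capacity
        self.admit = admit                      # workload-aware admission hook
        self.store = OrderedDict()              # chunk_id -> chunk bytes

    def get(self, chunk_id):
        if chunk_id in self.store:
            self.store.move_to_end(chunk_id)    # refresh recency on a hit
            return self.store[chunk_id]
        return None

    def put(self, chunk_id, data):
        if not self.admit(chunk_id):
            return                              # e.g. skip rarely re-used chunks
        self.store[chunk_id] = data
        self.store.move_to_end(chunk_id)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)      # evict the least recently used


if __name__ == "__main__":
    # Example admission rule (illustrative): only cache the first chunks of a
    # video, which are requested by almost every session.
    cache = LRUChunkCache(2, admit=lambda cid: cid.endswith(("_0", "_1")))
    cache.put("videoA_0", b"...")
    cache.put("videoA_7", b"...")   # rejected by the toy admission rule
    print(list(cache.store))
```
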
3

Van Wyk, David. "The effects of micro data centres for multi-service access nodes on latency and services". Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/61342.

Full text
Abstract:
Latency is becoming a significant factor in many Internet applications such as P2P sharing and online gaming. Coupled with the fact that an increasing number of people are using online services for backup and replication purposes and it is clear that congestion increases exponentially on the network. One of the ways in which the latency problem can be solved is to remove core network congestion or to limit it in such a way that it does not pose a problem. In South Africa, Telkom rolled out MSAN cabinets as part of their Fibre-to-the-curb (FTTC) upgrades. This created an unique opportunity to provide new services, like BaRaaS, by implementing micro data centres within the MSAN to reduce congestion on the core network. It is important to have background knowledge on what exactly latency is and what causes it on a network. It is also essential to have an understanding of how congestion (and thus latency) can be avoided on a network. The background literature covered helps to determine which tools are available to do this, as well as to highlight any possible gaps that exist for new congestion control mechanisms. A simulation study was performed to determine whether implementing micro data centres inside the MSAN will in fact reduce latency. Simulations must be done as realistically as possible to ensure that the results can be correlated to a real-world problem. Two different simulations were performed to model the behaviour of the network when backup and replication data is sent to the Internet and when it is sent to a local MSAN. In both models the core network throughput as well as the Round Trip Times (RTTs) from the client to the Internet and the MSAN cabinets, were recorded. The RTT results were then used to determine whether latency had been reduced. Once it was established that micro data centres will indeed help in reducing congestion and latency on the network, the design of a storage server, for inclusion inside the MSAN cabinet, was done. A cost benefit analysis was also performed to ensure that the project will be financially viable in the long term. The cost analysis took into account all the costs associated with the project and then expanded them over a certain period of time to determine initial expenses. Extra information was then taken into consideration to determine the possible income per year as well as extra expenditure. It was found that the inclusion of a micro data centre reduces latency on the core network due to the removal of large backup data traffic from the core network, which reduces congestion and improves latency. From the Cost Benefit Analysis (CBA) it was found that the BaRaaS service is viable from a subscription point of view. Finally, the relevant conclusions with regard to the effects of data centres in MSAN cabinets on latency and services were drawn.
Vertraagtyd word 'n belangrike faktor in baie Internet toepassings soos P2P-deel en aanlyn-speletjies. Gekoppel met die feit dat 'n toenemende getal mense internetdienste gebruik vir rugsteun en replisering, word opeenhoping in die datanetwerk eksponensieel verhoog. Een van die maniere waarop die vertraagtydsprobleem opgelos kan word, is om opeenhoping in die kern-datanetwerk te verwyder of om dit op so 'n manier te beperk dat dit nie 'n probleem veroorsaak nie. In Suid Afrika het Telkom MSAN-kaste uitgerol as deel van hulle "Fibre-to-the-Curb" (FTTC) opgraderings. Dit het 'n unieke geleentheid geskep om nuwe dienste te skep, soos BaRaaS, deur mikro-datasentrums in die MSAN-kas te implementeer om opeenhoping in die kernnetwerk te verminder. Dit is belangrik om agtergrondkennis te hê van presies wat vertraagtyd is en waardeur dit op die netwerk veroorsaak word. Dit is ook belangrik om 'n begrip te hê van hoe opeenhoping (en dus vertraagtyd) op die netwerk vermy kan word. Die agtergrondsliteratuur wat gedek is help om te bepaal watter instrumente beskikbaar is, asook om moontlikhede na vore te bring vir nuwe meganismes om opeenhoping te beheer. 'n Simulasiestudie is uitgevoer om vas te stel of die insluiting van datasentrums in die MSAN-kaste inderdaad 'n verskil sal maak aan die vertraagtyd in die datanetwerk. Twee simulasies is uitgevoer om die gedrag van die netwerk te modelleer wanneer rugsteun- en repliseringsdata na onderskeidelik die Internet en die plaaslike MSAN gestuur word. In altwee is die deurset van die kernnetwerk sowel as die sogenaamde Round Trip Times (RTTs) van die kliënt na die Internet en die MSAN-kaste aangeteken. Die RTTs-resultate sal gebruik word om te bepaal of vertraagtyd verminder is. Nadat dit bepaal is dat mikro-datasentrums wel die opeenhoping in die netwerk sal verminder, is die ontwerp van 'n stoorbediener gedoen, vir insluiting in die MSAN-kas. 'n Koste-ontleding neem alle koste wat met die projek verband hou in ag en versprei dit dan oor 'n bepaalde tydperk om die aanvanklike kostes te bepaal. Verdere inligting word voorts in ag geneem om die moontlike inkomste per jaar sowel as addisionele uitgawes te bepaal. Daar is bevind dat die insluiting van 'n mikro-datasentrum vertraagtyd verminder deur groot rugsteen-dataverkeer van die kernnetwerk af te verwyder. Die koste-ontleding het gewys dat uit 'n subskripsie-oogpunt, die BaRaaS diens lewensvatbaar is. Uiteindelik word relevante gevoltrekkings gemaak oor die effek van datasentrums in MSAN-kaste op vertraagtyd en dienste.
Dissertation (MEng)--University of Pretoria, 2017.
Electrical, Electronic and Computer Engineering
MEng
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
4

Cardenas, Baron Yonni Brunie Lionel Pierson Jean-Marc. "Grid caching specification and implementation of collaborative cache services for grid computing /". Villeurbanne : Doc'INSA, 2008. http://docinsa.insa-lyon.fr/these/pont.php?id=cardenas_baron.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cardenas, Baron Yonny. "Grid caching : specification and implementation of collaborative cache services for grid computing". Lyon, INSA, 2007. http://theses.insa-lyon.fr/publication/2007ISAL0107/these.pdf.

Full text
Abstract:
This thesis proposes an approach for the design and implementation of collaborative cache systems in grids that supports capabilities for monitoring and controlling cache interactions. Our approach permits to compose and evaluate high-level collaborative cache functions in a flexible way. Our proposal is based on a multilayer model that defines the main functions of a collaborative grid cache system. This model and the provided specification are used to build a flexible and generic software infrastructure for the operation and control of collaborative caches. This infrastructure is composed of a group of autonomous cache elements called Grid Cache Services (GCS). The GCS is a local administrator of temporary storage and data which is implemented as a grid service that provides the cache capabilities defined by the model. We study a possible configuration for a group of GCS that constitutes a basic management system of temporary data called Temporal Storage Service (TSS)
Cette thèse propose une approche de la conception et de l'implémentation de systèmes de cache collaboratif dans les grilles de données. Notre approche permet la composition et l'évaluation des fonctions d‘un système de cache collaboratif de haut niveau de façon flexible. Notre proposition est basée sur un modèle multicouche qui définit les fonctions principales d'un système de cache collaboratif pour les grilles. Ce modèle et la spécification fournie sont utilisés pour construire une infrastructure logicielle flexible et générique pour l'opération et le contrôle du cache collaboratif. Cette infrastructure est composée d'un groupe d’éléments autonomes de cache appelés "Grid Cache Services" (GCS). Le GCS est un administrateur local de moyens de stockage et de données temporaires. Nous étudions une possible configuration d’un groupe de GCS qui constitue un système basique d'administration de données temporaires appelé "Temporal Storage Service" (TSS)
APA, Harvard, Vancouver, ISO, and other styles
6

Ye, Zakaria. "Analyse de Performance des Services de Vidéo Streaming Adaptatif dans les Réseaux Mobiles". Thesis, Avignon, 2017. http://www.theses.fr/2017AVIG0219/document.

Full text
Abstract:
Le trafic vidéo a subi une augmentation fulgurante sur Internet ces dernières années. Pour pallier à cette importante demande de contenu vidéo, la technologie du streaming adaptatif sur HTTP est utilisée. Elle est devenue par ailleurs très populaire car elle a été adoptée par les différents acteurs du domaine de la vidéo streaming. C’est une technologie moins couteuse qui permet aux fournisseurs de contenu, la réutilisation des serveurs web et des caches déjà déployés. En plus, elle est exempt de tout blocage car elle traverse facilement les pare-feux et les translations d’adresses sur Internet. Dans cette thèse, nous proposons une nouvelle méthode de vidéo streaming adaptatif appelé “Backward-Shifted Coding (BSC)”. Il se veut être une solution complémentaire au standard DASH, le streaming adaptatif et dynamique utilisant le protocole HTTP. Nous allons d’abord décrire ce qu’est la technologie BSC qui se base sur le codec (encodeur décodeur) à multi couches SVC, un algorithme de compression extensible ou évolutif. Nous détaillons aussi l’implémentation de BSC dans un environnement DASH. Ensuite,nous réalisons une évaluation analytique de BSC en utilisant des résultats standards de la théorie des files d’attente. Les résultats de cette analyse mathématique montrent que le protocole BSC permet de réduire considérablement le risque d’interruption de la vidéo pendant la lecture, ce dernier étant très pénalisant pour les utilisateurs. Ces résultats vont nous permettre de concevoir des algorithmes d’adaptation de qualité à la bande passante en vue d’améliorer l’expérience utilisateur. Ces algorithmes permettent d’améliorer la qualité de la vidéo même étant dans un environnement où le débit utilisateur est très instable.La dernière étape de la thèse consiste à la conception de stratégies de caching pour optimiser la transmission de contenu vidéo utilisant le codec SVC. En effet, dans le réseau, des serveurs de cache sont déployés dans le but de rapprocher le contenu vidéo auprès des utilisateurs pour réduire les délais de transmission et améliorer la qualité de la vidéo. Nous utilisons la programmation linéaire pour obtenir la solution optimale de caching afin de le comparer avec nos algorithmes proposés. Nous montrons que ces algorithmes augmentent la performance du système tout en permettant de décharger les liens de transmission du réseau cœur
Due to the growth of video traffic over the Internet in recent years, HTTP AdaptiveStreaming (HAS) solution becomes the most popular streaming technology because ithas been succesfully adopted by the different actors in Internet video ecosystem. Itallows the service providers to use traditional stateless web servers and mobile edgecaches for streaming videos. Further, it allows users to access media content frombehind Firewalls and NATs.In this thesis we focus on the design of a novel video streaming delivery solutioncalled Backward-Shifted Coding (BSC), a complementary solution to Dynamic AdaptiveStreaming over HTTP (DASH), the standard version of HAS. We first describe theBackward-Shifted Coding scheme architecture based on the multi-layer Scalable VideoCoding (SVC). We also discuss the implementation of BSC protocol in DASH environment.Then, we perform the analytical evaluation of the Backward-Sihifted Codingusing results from queueing theory. The analytical results show that BSC considerablydecreases the video playback interruption which is the worst event that users can experienceduring the video session. Therefore, we design bitrate adaptation algorithms inorder to enhance the Quality of Experience (QoE) of the users in DASH/BSC system.The results of the proposed adaptation algorithms show that the flexibility of BSC allowsus to improve both the video quality and the variations of the quality during thestreaming session.Finally, we propose new caching policies to be used with video contents encodedusing SVC. Indeed, in DASH/BSC system, cache servers are deployed to make contentsclosed to the users in order to reduce network latency and improve user-perceived experience.We use Linear Programming to obtain optimal static cache composition tocompare with the results of our proposed algorithms. We show that these algorithmsincrease the system overall hit ratio and offload the backhaul links by decreasing thefetched content from the origin web servers
APA, Harvard, Vancouver, ISO, and other styles
7

Ait, Chellouche Soraya. "Délivrance de services média suivant le contexte au sein d'environnements hétérogènes pour les réseaux médias du futur". Thesis, Bordeaux 1, 2011. http://www.theses.fr/2011BOR14415/document.

Full text
Abstract:
La généralisation de l’usage de l’Internet, ces dernières années, a été marquée par deux tendances importantes. Nous citerons en premier, l’enthousiasme de plus en plus grand des utilisateurs pour les services médias. Cette tendance est particulièrement accentuée par l’avènement des contenus générés par les utilisateurs qui amènent dans les catalogues des fournisseurs de services un choix illimité de contenus. L’autre tendance est la diversification et l’hétérogénéité en ressources des terminaux et réseaux d’accès. Seule la valeur du service lui-même compte aujourd’hui pour les utilisateurs et non le moyen d’y accéder. Cependant, offrir aux utilisateurs un accès ubiquitaire à de plus en plus de services Internet, impose des exigences très rigoureuses sur l’infrastructure actuelle de l’Internet. En effet, L’évolution de l’Internet devient aujourd’hui une évidence et cette évolution est d’autant plus nécessaire dans un contexte de services multimédias qui sont connus pour leur sensibilité au contexte dans lequel ils sont consommés et pour générer d’énormes quantités de trafic. Dans le cadre de cette thèse, nous nous focalisons sur deux enjeux importants dans l’évolution de l’Internet. A savoir, faciliter le déploiement de services médias personnalisés et adaptatifs et améliorer les plateformes de distribution de ces derniers afin de permettre leur passage à l’échelle tout en gardant la qualité de service à un niveau satisfaisant pour les utilisateurs finaux. Afin de permettre ceci, nous introduisons en premier, une nouvelle architecture multi environnements et multi couches permettant un environnement collaboratif pour le partage et la consommation des services médias dans un cadre des réseaux média du futur. Puis, nous proposons deux contributions majeures que nous déployons sur la couche virtuelle formés par les Home-Boxes (passerelles résidentielles évoluées) introduite dans l’architecture précédente. Dans notre première contribution, nous proposons un environnement permettant le déploiement à grande échelle de services sensibles au contexte. Deux approches ont été considérées dans la modélisation et la gestion du contexte. La première approche est basée sur les langages de balisage afin de permettre un traitement du contexte plus léger et par conséquent des temps de réponse très petits. La seconde approche, quant à elle est basée sur les ontologies et les règles afin de permettre plus d’expressivité et un meilleur partage et réutilisation des informations de contexte. Les ontologies étant connues pour leur complexité, le but de cette proposition et de prouver la faisabilité d’une telle approche dans un contexte de services multimédias par des moyen de distribution de la gestion du contexte. Concernant notre deuxième contribution, l’idée et de tirer profit des ressources (disque et connectivité) des Home-Boxes déjà déployées, afin d’améliorer les plateformes de distribution des services médias et d’améliorer ainsi le passage à l’échelle, la performance et la fiabilité de ces derniers et ce, à moindre coût. 
Pour cela, nous proposons deux solutions pour deux problèmes communément traités dans la réplication des contenus : (1) la redirection de requêtes pour laquelle nous proposons un algorithme de sélection à deux niveaux de filtrage, un premier filtrage basé sur les règles afin de personnaliser les services en fonction du contexte de leur consommation suivi d’un filtrage basé sur des métriques réseaux (charges des serveurs et délais entre les serveurs et les clients) ; et (2) le placement et la distribution des contenus sur les caches pour lesquels on a proposé une stratégie de mise en cache online, basée sur la popularité des contenus
Users’ willingness to consume media services along with the compelling proliferation of mobile devices interconnected via multiple wired and wireless networking technologies place high requirements on the Future Internet. It is a common belief today that Internet should evolve towards providing end users with ubiquitous and high quality media services and this, in a scalable, reliable, efficient and interoperable way. However, enabling such a seamless media delivery raises a number of challenges. On one hand, services should be more context-aware to enable their delivery to a large and disparate computational context. On another hand, current Internet media delivery infrastructures need to scale in order to meet the continuously growing number of users while keeping quality at a satisfying level. In this context, we introduce a novel architecture, enabling a novel collaborative framework for sharing and consuming Media Services within Future Internet (FI). The introduced architecture comprises a number of environments and layers aiming to improve today’s media delivery networks and systems towards a better user experience. In this thesis, we are particulary interested in enabling context-aware multimedia services provisioning that meets on one hand, the users expectations and needs and on another hand, the exponentially growing users’ demand experienced by these services. Two major and demanding challenges are then faced in this thesis (1) the design of a context-awareness framework that allows adaptive multimedia services provisioning and, (2) the enhancement of the media delivery platform to support large-scale media services. The proposed solutions are built on the newly introduced virtual Home-Box layer in the latter proposed architecture.First, in order to achieve context-awareness, two types of frameworks are proposed based on the two main models for context representation. The markup schemes-based framework aims to achieve light weight context management to ensure performance in term of responsiveness. The second framework uses ontology and rules to model and manage context. The aim is to allow higher formality and better expressiveness and sharing. However, ontology is known to be complex and thus difficult to scale. The aim of our work is then to prove the feasibility of such a solution in the field of multimedia services provisioning when the context management is distributed among the Home-Box layer. Concerning the media services delivery enhancement, the idea is to leverage the participating and already deployed Home-Boxes disk storage and uploading capabilities to achieve service performance, scalability and reliability. Towards this, we have addressed two issues that are commonly induced by the content replication: (1) the server selection for which we have proposed a two-level anycast-based request redirection strategy that consists in a preliminary filtering based on the clients’ contexts and in a second stage provides accurate network distance information, using not only the end-to-end delay metric but also the servers’ load one and, (2) the content placement and replacement in cache for which we have designed an adaptive online popularity-based video caching strategy among the introduced HB overlay
APA, Harvard, Vancouver, ISO, and other styles
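
The two-level request redirection strategy described above (a rule-based filter on the client's consumption context followed by a ranking on server load and client-server delay) can be sketched briefly. The Python snippet below is a hypothetical illustration; the rule language, metrics, and weights used in the thesis are assumptions.

```python
# Hypothetical two-stage server selection: context-based filtering followed
# by ranking on server load and client-server delay. Weights are made up.

def select_server(servers, client_context, w_delay=0.7, w_load=0.3):
    """servers: list of dicts with 'name', 'formats', 'load' (0..1) and
    'delay_ms' (to this client). client_context: dict with 'format'."""
    # Stage 1: rule-based filtering on the consumption context.
    candidates = [s for s in servers
                  if client_context["format"] in s["formats"]]
    if not candidates:
        return None
    # Stage 2: pick the candidate with the best combined network score.
    return min(candidates,
               key=lambda s: w_delay * s["delay_ms"] + w_load * 100 * s["load"])


if __name__ == "__main__":
    servers = [
        {"name": "hb-1", "formats": {"480p", "720p"}, "load": 0.9, "delay_ms": 12},
        {"name": "hb-2", "formats": {"720p"}, "load": 0.2, "delay_ms": 25},
    ]
    print(select_server(servers, {"format": "720p"})["name"])
```
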
8

Bladin, Kalle, e Erik Broberg. "Design and Implementation of an Out-of-Core Globe Rendering System Using Multiple Map Services". Thesis, Linköpings universitet, Medie- och Informationsteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-137671.

Full text
Abstract:
This thesis focuses on the design and implementation of a software system enabling out-of-core rendering of multiple map datasets mapped on virtual globes around our solar system. Challenges such as precision, accuracy, curvature and massive datasets were considered. The result is a globe visualization software using a chunked level of detail approach for rendering. The software can render texture layers of various sorts to aid in scientific visualization on top of height mapped geometry, yielding accurate visualizations rendered at interactive frame rates. The project was conducted at the American Museum of Natural History (AMNH), New York and serves the goal of implementing a planetary visualization software to aid in public presentations and bringing space science to the public. The work is part of the development of the software OpenSpace, which is the result of a collaboration between Linköping University, AMNH and the National Aeronautics and Space Administration (NASA) among others.
APA, Harvard, Vancouver, ISO, and other styles
9

Spik, Charlotta, e Isabel Ghourchian. "Improving Back-End Service Data Collection". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210646.

Full text
Abstract:
This project was done for a company called Anchr that develops a location based mobile application for listing nearby hangouts in a specified area. For this, they integrate a number of services which they send requests to in order to see if there are any nearby locations listed for these services. One of these services is Meetup, which is an application where users can create social events and gatherings. The problem this project aims to solve is that a large number of requests are sent to Meetup’s service in order to get information about the events, so that they then can be displayed in the application. This is a problem since only a limited number of requests can be sent within a specified time period before the service is locked. This means that Meetup’s service cannot be integrated into the application as it is now implemented, as the feature will become useless if no requests can be sent. The purpose of this project is therefore to find an alternative way of collecting the events from the service without it locking. This would enable the service to be integrated into the application. The hypothesis is that instead of using the current method of sending requests to get events, implement a listener that listens for incoming events from Meetup’s stream, to directly get updates whenever an event is created or updated. The result of the project is that there now exists a system which listens for events instead of repeatedly sending requests. The issue with the locking of the service does not exist anymore since no requests are sent to Meetup’s service.
Detta projekt genomfördes för ett företag som heter Anchr som utvecklar en platsbaserad mobilapplikation för att lista närliggande sociala platser inom ett specificerat område. För detta integrerade de ett antal tjänster som de skickar förfrågningar till för att se om det finns några närliggande platser listade för dessa tjänster. En av dessa tjänster är Meetup som är en applikation där användare kan skapa sociala evenemang. Problemet detta examensarbete syftar till att lösa är att ett stort antal förfrågningar skickas till Meetups tjänst för att få information om evenemangen så att de kan visas i applikationen. Detta är ett problem då endast ett begränsat antal förfrågningar kan skickas till deras tjänst inom ett visst tidsintervall innan tjänsten spärras. Detta betyder att Meetups tjänst inte kan integreras in i applikationen såsom den är implementerad i nuläget, eftersom funktionen kommer bli oanvändbar om inga förfrågningar kan skickas. Syftet med detta projekt är därför att hitta ett alternativt sätt att samla in evenemang från tjänsten utan att den spärras. Detta skulle göra så tjänsten kan integreras in i applikationen. Hypotesen är att istället för att använda den nuvarande metoden som går ut på att skicka förfrågningar för att få nya händelser, implementera en lyssnare som lyssnar efter inkommande händelser från Meetups stream, för att direkt få uppdateringar när ett evenemang skapas eller uppdateras. Resultatet av detta är att det nu finns ett system som lyssnar efter evenemang istället för att upprepningsvis skicka förfrågningar. Problemet med låsningen av tjänsten existerar inte längre då inga förfrågningar skickas till Meetup’s tjänst.
APA, Harvard, Vancouver, ISO, and other styles
10

Gouge, Jeffrey B. "A Targeted Denial of Service Attack on Data Caching Networks". UNF Digital Commons, 2015. http://digitalcommons.unf.edu/etd/575.

Full text
Abstract:
With the rise of data exchange over the Internet, information-centric networks have become a popular research topic in computing. One major research topic on Information Centric Networks (ICN) is the use of data caching to increase network performance. However, research in the security concerns of data caching networks is lacking. One example of a data caching network can be seen using a Mobile Ad Hoc Network (MANET). Recently, a study has shown that it is possible to infer military activity through cache behavior which is used as a basis for a formulated denial of service attack (DoS) that can be used to attack networks using data caching. Current security issues with data caching networks are discussed, including possible prevention techniques and methods. A targeted data cache DoS attack is developed and tested using an ICN as a simulator. The goal of the attacker would be to fill node caches with unpopular content, thus making the cache useless. The attack would consist of a malicious node that requests unpopular content in intervals of time where the content would have been just purged from the existing cache. The goal of the attack would be to corrupt as many nodes as possible without increasing the chance of detection. The decreased network throughput and increased delay would also lead to higher power consumption on the mobile nodes, thus increasing the effects of the DoS attack. Various caching polices are evaluated in an ICN simulator program designed to show network performance using three common caching policies and various cache sizes. The ICN simulator is developed using Java and tested on a simulated network. Baseline data are collected and then compared to data collected after the attack. Other possible security concerns with data caching networks are also discussed, including possible smarter attack techniques and methods.
APA, Harvard, Vancouver, ISO, and other styles
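
The attack sketched in this abstract works by filling caches with unpopular, never-reused content so that legitimate requests stop hitting. The following toy simulation illustrates that effect on a single LRU cache in Python; it is not the ICN simulator used in the thesis, and the workload, cache size, and attack fractions are arbitrary assumptions.

```python
import random
from collections import OrderedDict

# Toy illustration of cache pollution: mix a skewed legitimate workload with
# attacker requests for never-reused items and compare LRU hit ratios.

def lru_hit_ratio(requests, capacity):
    cache, hits = OrderedDict(), 0
    for item in requests:
        if item in cache:
            hits += 1
            cache.move_to_end(item)
        else:
            cache[item] = True
            if len(cache) > capacity:
                cache.popitem(last=False)
    return hits / len(requests)

def workload(n, attack_fraction):
    popular = [f"content-{i}" for i in range(50)]
    weights = [1.0 / (i + 1) for i in range(50)]          # Zipf-like popularity
    reqs = []
    for i in range(n):
        if random.random() < attack_fraction:
            reqs.append(f"junk-{i}")                      # one-off attacker item
        else:
            reqs.append(random.choices(popular, weights)[0])
    return reqs

if __name__ == "__main__":
    random.seed(7)
    for frac in (0.0, 0.3, 0.6):
        print(f"attack traffic {frac:.0%}:",
              round(lru_hit_ratio(workload(20_000, frac), capacity=20), 3))
```
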
More sources

Books on the topic "Caching services"

1

Vasiliev, Yuli. PHP Oracle web development: Data processing, security, caching, XML, web services and AJAX : a practical guide to combining the power, performance, scalability, and reliability of Oracle Database with the ease of use, short development time, and high performance of PHP. Birmingham, U.K: Packt Pub., 2007.

Search for the full text
APA, Harvard, Vancouver, ISO, and other styles
2

PHP Oracle Web Development: Data processing, Security, Caching, XML, Web Services, and Ajax. Packt Publishing, 2007.

Search for the full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Caching services"

1

Seltzsam, Stefan, Roland Holzhauser e Alfons Kemper. "Semantic Caching for Web Services". In Service-Oriented Computing – ICSOC 2005, 324–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11596141_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Vancea, Andrei, Guilherme Sperb Machado, Laurent d’Orazio e Burkhard Stiller. "Cooperative Database Caching within Cloud Environments". In Dependable Networks and Services, 14–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-30633-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Vancea, Andrei, e Burkhard Stiller. "Answering Queries Using Cooperative Semantic Caching". In Scalability of Networks and Services, 203–6. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02627-0_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wan, Hai, Xiao-Wei Hao, Tao Zhang e Lei Li. "Semantic Caching Services for Data Grids". In Lecture Notes in Computer Science, 959–62. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30208-7_148.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Peng, Huailiang, Majing Su, Qiong Dai e Jianlong Tan. "Enhancing Security and Robustness of P2P Caching System". In Trustworthy Computing and Services, 56–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-47401-3_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kim, Iksoo, Yoseop Woo, Hyunchul Kang, Backhyun Kim e Jinsong Ouyang. "Layered Web-Caching Technique for VOD Services". In Computational Science and Its Applications – ICCSA 2004, 345–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24707-4_43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lee, Ken C. K., Wang-Chien Lee, Baihua Zheng e Jianliang Xu. "Caching Complementary Space for Location-Based Services". In Lecture Notes in Computer Science, 1020–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11687238_59.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dabran, Itai, e Danny Raz. "On the Power of Cooperation in Multimedia Caching". In Autonomic Management of Mobile Multimedia Services, 85–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11907381_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Böttcher, Stefan, e Adelhard Türling. "XML Fragment Caching for Small Mobile Internet Devices". In Web, Web-Services, and Database Systems, 268–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36560-5_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Dammicco, Giacinto, e Ugo Mocci. "Program caching and multicasting techniques in VoD networks". In Interactive Distributed Multimedia Systems and Telecommunication Services, 65–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0000340.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Caching services"

1

Wu, Budan, Rongheng Lin e Hua Zou. "SNS Based Web Caching Algorithm for PaaS SNS Hosting". In 2012 IEEE World Congress on Services (SERVICES). IEEE, 2012. http://dx.doi.org/10.1109/services.2012.50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Schapranow, Matthieu-P., Jens Krueger, Vadym Borovskiy, Alexander Zeier e Hasso Plattner. "Data Loading and Caching Strategies in Service-Oriented Enterprise Applications". In 2009 IEEE Congress on Services (SERVICES). IEEE, 2009. http://dx.doi.org/10.1109/services-i.2009.92.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sun, Meng, Haopeng Chen e Buqing Shu. "Predict-then-Prefetch Caching Strategy to Enhance QoE in 5G Networks". In 2018 IEEE World Congress on Services (SERVICES). IEEE, 2018. http://dx.doi.org/10.1109/services.2018.00047.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Goebbels, Stephan. "Wireless Broadband Services using Smart Caching". In 2008 IEEE 68th Vehicular Technology Conference (VTC 2008-Fall). IEEE, 2008. http://dx.doi.org/10.1109/vetecf.2008.288.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Das, Sajal K., Mayank Raj e Naor Zohar. "Popularity-based caching for IPTV services". In GLOBECOM 2012 - 2012 IEEE Global Communications Conference. IEEE, 2012. http://dx.doi.org/10.1109/glocom.2012.6503430.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Shekhawat, Virendra Singh, Ankur Vineet e Avinash Gautam. "Efficient content caching for named data network nodes". In MobiQuitous: Computing, Networking and Services. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3360774.3360804.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

He, Dongbiao, Jinlei Jiang, Guangwen Yang e Cedric Westphal. "Pushing smart caching to the edge with BayCache". In MobiQuitous: Computing, Networking and Services. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3360774.3360823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Elbashir, K., e R. Deters. "Transparent caching for nomadic WS clients". In IEEE International Conference on Web Services (ICWS'05). IEEE, 2005. http://dx.doi.org/10.1109/icws.2005.123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Chaudhari, Dharmishtha L., e Sankita Patel. "Investigating caching strategies in Personal Communication Services". In 2010 3rd IEEE International Conference on Computer Science and Information Technology (ICCSIT 2010). IEEE, 2010. http://dx.doi.org/10.1109/iccsit.2010.5563671.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Lei, Chunlei Niu, Haoran Zheng e Jun Wei. "An Adaptive Caching Mechanism for Web Services". In 2006 Sixth International Conference on Quality Software (QSIC'06). IEEE, 2006. http://dx.doi.org/10.1109/qsic.2006.9.

Full text
APA, Harvard, Vancouver, ISO, and other styles