To see other types of publications on this topic, follow the link: Memcached.

Journal articles on the topic 'Memcached'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 26 journal articles for your research on the topic 'Memcached.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Kusuma, Mandahadi. "Metode Optimasi Memcached sebagai NoSQL Key-value Memory Cache." JISKA (Jurnal Informatika Sunan Kalijaga) 3, no. 3 (August 30, 2019): 14. http://dx.doi.org/10.14421/jiska.2019.33-02.

Abstract:
Memcached is an application used to store the results of client queries to a web application in server memory as temporary storage (a cache). The goal is for the web application to remain responsive even when many users access it. Memcached stores data as key-value pairs and evicts entries with the LRU (Least Recently Used) algorithm. In its default configuration Memcached handles web-based applications properly, but in real situations, where the volume of transferred data and cached objects swells to thousands or even millions of items, optimization steps are needed so that the Memcached service stays optimal, avoids Input/Output (I/O) overhead, and keeps latency low. In this review paper, we present some of the latest research on Memcached optimization. Methods that can be used include clustering, memory partitioning, Graphics Processing Unit (GPU) hashing, User Datagram Protocol (UDP) transmission, Solid State Drive (SSD) hybrid memory, and Memcached over the Hadoop Distributed File System (HDFS). Keywords: memcached, optimization, web-app, overhead, latency
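For readers new to the pattern these optimizations build on, here is a minimal cache-aside sketch using the pymemcache client. It assumes a Memcached instance on localhost:11211; the key scheme, TTL, and fetch_from_database helper are hypothetical stand-ins, not details from the paper.

import json
from pymemcache.client.base import Client

client = Client(("localhost", 11211))

def fetch_from_database(user_id):
    # Hypothetical stand-in for the slow backing-store query.
    return {"id": user_id, "name": "example"}

def get_user(user_id, ttl=300):
    key = f"user:{user_id}"
    cached = client.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: no database round trip
    value = fetch_from_database(user_id)   # cache miss: query the database
    client.set(key, json.dumps(value), expire=ttl)  # repopulate the cache
    return value

On a hit, the request never touches the database; the optimizations surveyed above (UDP transport, GPU hashing, partitioned memory) all aim to make that hit path cheaper.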
2

Mishra, Nivedita, Sharnil Pandya, Chirag Patel, Nagaraj Cholli, Kirit Modi, Pooja Shah, Madhuri Chopade, Sudha Patel, and Ketan Kotecha. "Memcached: An Experimental Study of DDoS Attacks for the Wellbeing of IoT Applications." Sensors 21, no. 23 (December 2, 2021): 8071. http://dx.doi.org/10.3390/s21238071.

Abstract:
Distributed denial-of-service (DDoS) attacks are significant threats to the cyber world because of their potential to quickly bring down victims. Memcached vulnerabilities have been targeted by attackers using DDoS amplification attacks. GitHub and Arbor Networks were the victims of Memcached DDoS attacks with 1.3 Tbps and 1.8 Tbps attack strengths, respectively. The bandwidth amplification factor of nearly 50,000 makes Memcached the deadliest DDoS attack vector to date. In recent times, fellow researchers have made specific efforts to analyze and evaluate Memcached vulnerabilities; however, the solutions provided for security are based on best practices by users and service providers. This study is the first attempt at modifying the architecture of Memcached servers in the context of improving security against DDoS attacks. This study discusses the Memcached protocol, the vulnerabilities associated with it, the future challenges for different IoT applications associated with caches, and the solutions for detecting Memcached DDoS attacks. The proposed solution is a novel identification-pattern mechanism using a threshold scheme for detecting volume-based DDoS attacks. In this study, the solution acts as a pre-emptive measure for detecting DDoS attacks while maintaining low latency and high throughput.
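The abstract names a threshold scheme for volume-based detection but not its parameters; purely as a rough illustration, the sketch below counts per-source requests in a time window and flags sources that exceed an assumed budget (both the window length and the threshold are invented values, not the paper's).

import time
from collections import defaultdict

WINDOW_SECONDS = 1.0      # assumed observation window
THRESHOLD_PACKETS = 1000  # assumed per-source packet budget per window

class VolumeDetector:
    def __init__(self):
        self.window_start = time.monotonic()
        self.counts = defaultdict(int)

    def observe(self, src_ip):
        # Returns True when a source exceeds the volume threshold.
        now = time.monotonic()
        if now - self.window_start > WINDOW_SECONDS:
            self.counts.clear()            # start a fresh window
            self.window_start = now
        self.counts[src_ip] += 1
        return self.counts[src_ip] > THRESHOLD_PACKETS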
3

Xu, Yuehai, Eitan Frachtenberg, Song Jiang, and Mike Paleczny. "Characterizing Facebook's Memcached Workload." IEEE Internet Computing 18, no. 2 (March 2014): 41–49. http://dx.doi.org/10.1109/mic.2013.80.

4

Wang, X., B. Zhou, and W. Li. "A Streaming Protocol for Memcached." Information Technology Journal 11, no. 12 (November 15, 2012): 1776–80. http://dx.doi.org/10.3923/itj.2012.1776.1780.

5

Hidayat, Eka Wahyu, and Alam Rahmatulloh. "Optimasi Server SIMAK Menggunakan Memcached dan Mirror Server Untuk Meningkatkan Kecepatan Akses Layanan Akademik Universitas Siliwangi." S@CIES 5, no. 2 (April 30, 2015): 69–78. http://dx.doi.org/10.31598/sacies.v5i2.57.

Abstract:
The Academic Information System of Universitas Siliwangi (SIMAK) ensures transparent use of internet-based information resources for day-to-day academic services and operations. A classic problem in operating internet-based systems is that the number of users affects the performance and speed with which the system delivers its services. The growing number of students at Universitas Siliwangi has increased the number of users accessing the academic system and lowered the system's response speed in handling user requests. This condition can be addressed by optimizing the system with Memcached and a mirror server. Memcached is an additional script placed on a mirror server as a bridge (bridge system) between the main server and the web service. In operation, every time data communication occurs, a user request is screened: if the requested data is available in the cached memory on the mirror server, it is returned directly to the user interface without executing a database query, so the load on the main system and the database is reduced. Testing Memcached and the mirror server with JMeter under a standard scenario showed a throughput increase to about 490 transactions per minute, compared with 171 transactions per minute before implementation.
6

Carra, Damiano, and Pietro Michiardi. "Memory Partitioning and Management in Memcached." IEEE Transactions on Services Computing 12, no. 4 (July 1, 2019): 564–76. http://dx.doi.org/10.1109/tsc.2016.2613048.

7

Lavasani, Maysam, Hari Angepat, and Derek Chiou. "An FPGA-based In-Line Accelerator for Memcached." IEEE Computer Architecture Letters 13, no. 2 (July 15, 2014): 57–60. http://dx.doi.org/10.1109/l-ca.2013.17.

8

Issa, Joseph, and Silvia Figueira. "Hadoop and memcached: Performance and power characterization and analysis." Journal of Cloud Computing: Advances, Systems and Applications 1, no. 1 (2012): 10. http://dx.doi.org/10.1186/2192-113x-1-10.

9

Berezecki, Mateusz, Eitan Frachtenberg, Mike Paleczny, and Kenneth Steele. "Power and performance evaluation of Memcached on the TILEPro64 architecture." Sustainable Computing: Informatics and Systems 2, no. 2 (June 2012): 81–90. http://dx.doi.org/10.1016/j.suscom.2012.01.006.

10

Fukuda, Eric S., Hiroaki Inoue, Takashi Takenaka, Dahoo Kim, Tsunaki Sadahisa, Tetsuya Asai, and Masato Motomura. "Enhancing Memcached by Caching Its Data and Functionalities at Network Interface." Journal of Information Processing 23, no. 2 (2015): 143–52. http://dx.doi.org/10.2197/ipsjjip.23.143.

11

Xue, Xian-peng, Ming-tian Peng, and Huai-qing He. "Optimal design and implementation of calendar shopping system based on Memcached." Journal of Computer Applications 31, no. 3 (May 18, 2011): 865–68. http://dx.doi.org/10.3724/sp.j.1087.2011.00865.

12

Chen, Wei, Songping Yu, and Zhiying Wang. "Fast In-Memory Key–Value Cache System with RDMA." Journal of Circuits, Systems and Computers 28, no. 05 (May 2019): 1950074. http://dx.doi.org/10.1142/s0218126619500749.

Abstract:
The quick advances of Cloud and the advent of Fog computing impose more and more critical demand for computing and data transfer of low latency onto the underlying distributed computing infrastructure. Remote direct memory access (RDMA) technology has been widely applied for its low latency of remote data access. However, RDMA gives rise to a host of challenges in accelerating in-memory key–value stores, such as direct remote memory writes, making the remote system more vulnerable. This study presents an in-memory key–value system based on RDMA, named Craftscached, which enables: (1) buffering remote memory writes into a communication cache memory to eliminate direct remote memory writes to the data memory area; (2) dividing the communication cache memory into RDMA-writable and RDMA-readable memory zones to reduce the possibility of data corruption due to stray memory writes and caching data into an RDMA-readable memory zone to improve the remote memory read performance; and (3) adopting remote out-of-place direct memory write to achieve high performance of remote read and write. Experimental results in comparison with Memcached indicate that Craftscached provides far better performance: (1) in the case of read-intensive workloads, the data access of Craftscached is about 7–43× and 18–72.4% better than those of TCP/IP-based and RDMA-based Memcached, respectively; (2) the memory utilization of small objects is more efficient, with only about 3.8% memory compaction overhead.
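Craftscached itself operates on RDMA-registered memory via verbs; purely as a conceptual illustration of the zone split described above, the sketch below confines incoming writes to one region of a buffer and publishes validated data into a separate readable region with an out-of-place copy. Zone sizes and method names are assumptions for illustration only.

WRITE_ZONE = 64 * 1024  # assumed size of the RDMA-writable zone
READ_ZONE = 64 * 1024   # assumed size of the RDMA-readable zone

class CommCache:
    def __init__(self):
        self.buf = bytearray(WRITE_ZONE + READ_ZONE)

    def remote_write(self, offset, data):
        # Remote writes are confined to the writable zone, so a stray
        # write cannot corrupt data being served to readers.
        if offset < 0 or offset + len(data) > WRITE_ZONE:
            raise ValueError("write outside the RDMA-writable zone")
        self.buf[offset:offset + len(data)] = data

    def publish(self, src_offset, dst_offset, length):
        # Out-of-place copy into the readable zone once validated, so
        # remote reads never observe a partially written object.
        if dst_offset < 0 or dst_offset + length > READ_ZONE:
            raise ValueError("publish outside the RDMA-readable zone")
        dst = WRITE_ZONE + dst_offset
        self.buf[dst:dst + length] = self.buf[src_offset:src_offset + length]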
13

Liu, Chengjian, Kai Ouyang, Xiaowen Chu, Hai Liu, and Yiu-Wing Leung. "R-memcached: A reliable in-memory cache for big key-value stores." Tsinghua Science and Technology 20, no. 6 (December 2015): 560–73. http://dx.doi.org/10.1109/tst.2015.7349928.

14

Liao, Jianwei, and Xiaoning Peng. "A Data-Consistency Scheme for the Distributed-Cache Storage of the Memcached System." Journal of Computing Science and Engineering 11, no. 3 (September 30, 2017): 92–99. http://dx.doi.org/10.5626/jcse.2017.11.3.92.

15

Huang, Yuan Qiang. "Design and Implementation of Distributed Cache System in Cluster Environment." Applied Mechanics and Materials 635-637 (September 2014): 1530–34. http://dx.doi.org/10.4028/www.scientific.net/amm.635-637.1530.

Abstract:
Within a layered enterprise application architecture, the database is usually the bottleneck of the system. Cache technology can significantly improve system performance and scalability by caching data in the application layer. This paper first discusses the significance and importance of a distributed cache system. Our system provides two kinds of cache mechanism, replicated cache and partitioned cache, each with its own advantages and disadvantages. The design and implementation of the system cover problems including the consistent hashing algorithm. This paper describes and analyzes data distribution, data update, and the procedure of operations under the different cache mechanisms. Finally, we ran comparative tests between our system and a similar product, Memcached. We found that each of the two products has advantages and disadvantages, which points out the direction for our future improvement.
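Since the abstract cites the consistent hashing algorithm without detail, here is a minimal hash-ring sketch of the idea (the virtual-node count and MD5 hash are common choices assumed here, not taken from the paper): keys map to the first virtual node clockwise on the ring, so adding or removing a cache server only remaps a small fraction of keys.

import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=100):
        # Each physical node gets vnodes points on the ring for balance.
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes for i in range(vnodes)
        )

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def get_node(self, key):
        # First virtual node clockwise from the key's position.
        idx = bisect.bisect(self.ring, (self._hash(key),)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["cache1:11211", "cache2:11211", "cache3:11211"])
print(ring.get_node("user:42"))  # one of the three servers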
16

Kulkarni, Chinmay, Badrish Chandramouli, and Ryan Stutsman. "Achieving high throughput and elasticity in a larger-than-memory store." Proceedings of the VLDB Endowment 14, no. 8 (April 2021): 1427–40. http://dx.doi.org/10.14778/3457390.3457406.

Abstract:
Millions of sensors, mobile applications and machines now generate billions of events. Specialized many-core key-value stores (KVSs) can ingest and index these events at high rates (over 100 Mops/s on one machine) if events are generated on the same machine; however, to be practical and cost-effective they must ingest events over the network and scale across cloud resources elastically. We present Shadowfax, a new distributed KVS based on FASTER, that transparently spans DRAM, SSDs, and cloud blob storage while serving 130 Mops/s/VM over commodity Azure VMs using conventional Linux TCP. Beyond high single-VM performance, Shadowfax uses a unique approach to distributed reconfiguration that avoids any server-side key ownership checks or cross-core coordination both during normal operation and migration. Hence, Shadowfax can shift load in 17 s to improve system throughput by 10 Mops/s with little disruption. Compared to the state-of-the-art, it has 8x better throughput (than Seastar+memcached) and avoids costly I/O to move cold data during migration. On 12 machines, Shadowfax retains its high throughput to perform 930 Mops/s, which, to the best of our knowledge, is the highest reported throughput for a distributed KVS used for large-scale data ingestion and indexing.
17

Zhao, Wenze, Yajuan Du, Mingzhe Zhang, Mingyang Liu, Kailun Jin, and Rachata Ausavarungnirun. "Application-Oriented Data Migration to Accelerate In-Memory Database on Hybrid Memory." Micromachines 13, no. 1 (December 29, 2021): 52. http://dx.doi.org/10.3390/mi13010052.

Abstract:
With the advantage of faster data access than traditional disks, in-memory database systems, such as Redis and Memcached, have been widely applied in data centers and embedded systems. The performance of an in-memory database greatly depends on the access speed of memory. To meet the requirements of high bandwidth and low energy, die-stacked memory (e.g., High Bandwidth Memory (HBM)) has been developed to extend the channel number and width. However, the capacity of die-stacked memory is limited due to the interposer challenge. Thus, hybrid memory systems combining traditional Dynamic Random Access Memory (DRAM) with die-stacked memory have emerged. Existing works have proposed placing and managing data on hybrid memory architectures from the hardware perspective. This paper considers managing in-memory database data in hybrid memory from the application perspective. We first perform a preliminary study on the hotness distribution of client requests on Redis. From the results, we observe that most requests touch a small portion of the data objects in the in-memory database. We then propose Application-oriented Data Migration (ADM) to accelerate in-memory databases on hybrid memory. We design a hotness management method and two migration policies to migrate data into or out of HBM. We take Redis under comprehensive benchmarks as a case study for the proposed method. The experimental results verify that our proposed method effectively improves performance and reduces energy consumption compared with the existing Redis database.
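The paper's hotness management and migration policies are not spelled out in the abstract; the sketch below shows the general shape of such a scheme, with invented promotion/demotion thresholds and plain dictionaries standing in for the HBM and DRAM tiers.

from collections import Counter

PROMOTE_AT = 100   # assumed access count that marks an object hot
DEMOTE_BELOW = 10  # assumed decayed count below which an object is cold

class HotnessManager:
    def __init__(self):
        self.hits = Counter()
        self.hbm = {}   # stands in for fast die-stacked memory
        self.dram = {}  # stands in for conventional DRAM

    def put(self, key, value):
        self.dram[key] = value  # new objects start in DRAM

    def access(self, key):
        self.hits[key] += 1
        if key in self.dram and self.hits[key] >= PROMOTE_AT:
            self.hbm[key] = self.dram.pop(key)      # migrate into HBM
        return self.hbm.get(key, self.dram.get(key))

    def decay(self):
        # Called periodically: halve counters and demote cold objects.
        for key in list(self.hbm):
            self.hits[key] //= 2
            if self.hits[key] < DEMOTE_BELOW:
                self.dram[key] = self.hbm.pop(key)  # migrate out of HBM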
18

Sharma, Navin, Dilip Krishnappa, Sean Barker, David Irwin, and Prashant Shenoy. "Managing server clusters on intermittent power." PeerJ Computer Science 1 (December 9, 2015): e34. http://dx.doi.org/10.7717/peerj-cs.34.

Abstract:
Reducing the energy footprint of data centers continues to receive significant attention due to both its financial and environmental impact. There are numerous methods that limit the impact of both factors, such as expanding the use of renewable energy or participating in automated demand-response programs. To take advantage of these methods, servers and applications must gracefully handle intermittent constraints in their power supply. In this paper, we propose blinking—metered transitions between a high-power active state and a low-power inactive state—as the primary abstraction for conforming to intermittent power constraints. We design Blink, an application-independent hardware–software platform for developing and evaluating blinking applications, and define multiple types of blinking policies. We then use Blink to design both a blinking version of memcached (BlinkCache) and a multimedia cache (GreenCache) to demonstrate how application characteristics affect the design of blink-aware distributed applications. Our results show that for BlinkCache, a load-proportional blinking policy combines the advantages of both activation and synchronous blinking for realistic Zipf-like popularity distributions and wind/solar power signals by achieving near optimal hit rates (within 15% of an activation policy), while also providing fairer access to the cache (within 2% of a synchronous policy) for equally popular objects. In contrast, for GreenCache, due to multimedia workload patterns, we find that a staggered load proportional blinking policy with replication of the first chunk of each video reduces the buffering time at all power levels, as compared to activation or load-proportional blinking policies.
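As a back-of-the-envelope illustration of a load-proportional blinking policy, the sketch below splits a power budget into per-server active time within one blink interval, giving popular shards longer active periods. The interval length and power model are assumptions for illustration, not Blink's actual mechanism.

BLINK_INTERVAL_S = 10.0  # assumed length of one blink cycle

def blink_schedule(available_watts, full_watts, popularity):
    # Total active server-seconds the current power budget can sustain.
    budget = (available_watts / full_watts) * BLINK_INTERVAL_S * len(popularity)
    total = sum(popularity)
    # Load-proportional: each server's active time tracks the popularity
    # of the objects it hosts, capped at the full interval.
    return [min(BLINK_INTERVAL_S, budget * p / total) for p in popularity]

# Example: three servers at half power with a Zipf-like popularity skew.
print(blink_schedule(150.0, 300.0, [0.6, 0.3, 0.1]))  # [9.0, 4.5, 1.5]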
19

Cicotti, Pietro, Manu Shantharam, and Laura Carrington. "Reducing communication in parallel graph search algorithms with software caches." International Journal of High Performance Computing Applications 33, no. 2 (April 15, 2018): 384–96. http://dx.doi.org/10.1177/1094342018762510.

Abstract:
In many scientific and computational domains, graphs are used to represent and analyze data. Such graphs often exhibit the characteristics of small-world networks: few high-degree vertexes connect many low-degree vertexes. Despite the randomness in a graph search, it is possible to capitalize on the characteristics of small-world networks and cache relevant information about high-degree vertexes. We applied this idea by caching remote vertex ids in a parallel breadth-first search benchmark. Our experiments with different implementations demonstrated significant performance improvements over the reference implementation in several configurations, using 64 to 1024 cores. We proposed a system design in which resources are dedicated exclusively to caching and shared among a set of nodes. Our evaluation demonstrates that this design reduces communication and has the potential to improve performance on large-scale systems in which the communication cost increases significantly with the distance between nodes. We also tested a memcached system as the cache server, finding that its generic protocol, which does not match our usage semantics, significantly hinders the potential performance improvements; we suggest that a generic system should also support a basic and lightweight communication protocol to meet the needs of high-performance computing applications. Finally, we explored different configurations to find efficient ways to utilize the resources allocated to solve a given problem size; to this end, we found that utilizing half of the compute cores per allocated node improves performance, and even in this case, caching variants always outperform the reference implementation.
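The paper's caching layer sits inside a parallel BFS; as a simplified single-process illustration, the sketch below caches the results of remote vertex lookups so repeated hits on high-degree hubs avoid communication. The capacity bound and the remote_lookup callback are hypothetical.

class VertexCache:
    def __init__(self, capacity=100_000):
        self.capacity = capacity
        self.cache = {}

    def lookup(self, vertex, remote_lookup):
        if vertex in self.cache:
            return self.cache[vertex]       # hit: no communication
        value = remote_lookup(vertex)       # miss: one remote round trip
        if len(self.cache) < self.capacity:
            # In a small-world graph the few high-degree hubs are hit
            # repeatedly, so even a simple bounded cache absorbs most
            # of the remote traffic.
            self.cache[vertex] = value
        return value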
20

Matev, Rosen. "Fast distributed compilation and testing of large C++ projects." EPJ Web of Conferences 245 (2020): 05001. http://dx.doi.org/10.1051/epjconf/202024505001.

Abstract:
High energy physics experiments traditionally have large software codebases primarily written in C++, and the LHCb physics software stack is no exception. Compiling from scratch can easily take 5 hours or more for the full stack, even on an 8-core VM. In a development workflow, incremental builds often do not significantly speed up compilation because even just a change of the modification time of a widely used header leads to many compiler and linker invocations. Using powerful shared servers is not practical, as users have no control and maintenance is an issue. Even though support for building partial checkouts on top of published project versions exists, by far the most practical development workflow involves full project checkouts because of off-the-shelf tool support (git, intellisense, etc.). This paper details a deployment of distcc, a distributed compilation server, on opportunistic resources such as development machines. The best performance operation mode is achieved when preprocessing remotely and profiting from the shared CernVM File System. A 10 (30) fold speedup of elapsed (real) time is achieved when compiling Gaudi, the base of the LHCb stack, when comparing local compilation on a 4 core VM to remote compilation on 80 cores, where the bottleneck becomes non-distributed work such as linking. Compilation results are cached locally using ccache, allowing for even faster rebuilding. A recent distributed memcached-based shared cache is tested, as well as a more modern distributed system by Mozilla, sccache, backed by S3 storage. These allow for global sharing of compilation work, which can speed up both central CI builds and local development builds. Finally, we explore remote caching and execution services based on Bazel, and how they apply to Gaudi-based software for distributing not only compilation but also linking and even testing.
21

Pan, Cheng, Xiaolin Wang, Yingwei Luo, and Zhenlin Wang. "Penalty- and Locality-aware Memory Allocation in Redis Using Enhanced AET." ACM Transactions on Storage 17, no. 2 (May 28, 2021): 1–45. http://dx.doi.org/10.1145/3447573.

Abstract:
Due to large data volume and low latency requirements of modern web services, the use of an in-memory key-value (KV) cache often becomes an inevitable choice (e.g., Redis and Memcached). The in-memory cache holds hot data, reduces request latency, and alleviates the load on background databases. Inheriting from the traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., least recently used or its approximations. However, the diversity of miss penalty distinguishes a KV cache from a hardware cache. Inadequate consideration of penalty can substantially compromise space utilization and request service time. KV accesses also demonstrate locality, which needs to be coordinated with miss penalty to guide cache management. In this article, we first discuss how to enhance the existing cache model, the Average Eviction Time model, so that it can adapt to modeling a KV cache. After that, we apply the model to Redis and propose pRedis, Penalty- and Locality-aware Memory Allocation in Redis, which synthesizes data locality and miss penalty, in a quantitative manner, to guide memory allocation and replacement in Redis. At the same time, we also explore the diurnal behavior of a KV store and exploit long-term reuse. We replace the original passive eviction mechanism with an automatic dump/load mechanism, to smooth the transition between access peaks and valleys. Our evaluation shows that pRedis effectively reduces the average and tail access latency with minimal time and space overhead. For both real-world and synthetic workloads, our approach delivers an average of 14.0%∼52.3% latency reduction over a state-of-the-art penalty-aware cache management scheme, Hyperbolic Caching (HC), and shows more quantitative predictability of performance. Moreover, we can obtain even lower average latency (1.1%∼5.5%) when dynamically switching policies between pRedis and HC.
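pRedis's actual model builds on the Average Eviction Time (AET) theory; purely to illustrate how penalty and locality can jointly drive eviction, the sketch below scores each item by miss penalty divided by idle time and evicts the lowest score. The scoring formula is an invented simplification, not pRedis's quantitative model.

import time

class PenaltyAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}  # key -> (value, last_access, miss_penalty_s)

    def put(self, key, value, miss_penalty_s):
        if key not in self.items and len(self.items) >= self.capacity:
            self._evict()
        self.items[key] = (value, time.monotonic(), miss_penalty_s)

    def get(self, key):
        value, _, penalty = self.items[key]
        self.items[key] = (value, time.monotonic(), penalty)
        return value

    def _evict(self):
        # Keep items that are recently used AND expensive to miss;
        # evict the key with the lowest penalty-per-idle-second.
        now = time.monotonic()
        victim = min(
            self.items,
            key=lambda k: self.items[k][2] / (now - self.items[k][1] + 1e-9),
        )
        del self.items[victim]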
22

Chang, Bao-Rong, Hsiu-Fen Tsai, Yun-Che Tsai, Chin-Fu Kuo, and Chi-Chung Chen. "Integration and optimization of multiple big data processing platforms." Engineering Computations 33, no. 6 (August 1, 2016): 1680–704. http://dx.doi.org/10.1108/ec-08-2015-0247.

Abstract:
Purpose – The purpose of this paper is to integrate and optimize a multiple big data processing platform with the features of high performance, high availability and high scalability in a big data environment. Design/methodology/approach – First, the integration of Apache Hive, Cloudera Impala and BDAS Shark makes the platform support SQL-like queries. Next, users access a single interface, and the proposed optimizer automatically selects the best-performing big data warehouse platform. Finally, the distributed memory storage system Memcached, incorporated into the distributed file system Apache HDFS, is employed for fast caching of query results. Therefore, if users issue the same SQL command again, the result is returned rapidly from the cache system instead of repeating the search in the big data warehouse and taking longer to retrieve. Findings – As a result, the proposed approach significantly improves overall performance and dramatically reduces search time when querying a database, especially for highly repeated SQL commands in multi-user mode. Research limitations/implications – Currently, Shark’s latest stable version 0.9.1 does not support the latest versions of Spark and Hive. In addition, this series of software only supports Oracle JDK7. Using Oracle JDK8 or OpenJDK causes serious errors, and some software will be unable to run. Practical implications – One problem with this system is that some blocks are missing when too many blocks are stored in one result (about 100,000 records). Another problem is that sequential writing into the in-memory cache wastes time. Originality/value – When the remaining memory capacity is 2 GB or less on each server, Impala and Shark will have a lot of page swapping, causing extremely low performance. When the data scale is larger, it may cause a JVM I/O exception and make the program crash. However, when the remaining memory capacity is sufficient, Shark is faster than Hive and Impala. Impala’s consumption of memory resources is between those of Shark and Hive. This amount of remaining memory is sufficient for Impala’s maximum performance. In this study, each server allocates 20 GB of memory for cluster computing and sets the amount of remaining memory at Level 1: 3 percent (0.6 GB), Level 2: 15 percent (3 GB) and Level 3: 75 percent (15 GB) as the critical points. The program automatically selects Hive when memory is less than 15 percent, Impala at 15 to 75 percent and Shark at more than 75 percent.
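The query-result caching step maps naturally onto Memcached's API; as a minimal sketch (again with pymemcache, and with run_warehouse_query as a hypothetical stand-in for the Hive/Impala/Shark optimizer), identical SQL texts hash to the same cache key, so repeated queries skip the warehouse entirely.

import hashlib
import json
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def run_warehouse_query(sql):
    # Hypothetical dispatcher to Hive, Impala, or Shark.
    raise NotImplementedError

def cached_query(sql, ttl=600):
    key = "sql:" + hashlib.sha1(sql.encode()).hexdigest()
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)           # repeated SQL: served from cache
    rows = run_warehouse_query(sql)      # first occurrence: full search
    cache.set(key, json.dumps(rows), expire=ttl)
    return rows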
23

Cai, Tao, Qingjian He, Dejiao Niu, Fuli Chen, Jie Wang, and Lei Li. "A New Embedded Key–Value Store for NVM Device Simulator." Micromachines 11, no. 12 (December 2, 2020): 1075. http://dx.doi.org/10.3390/mi11121075.

Abstract:
The non-volatile memory (NVM) device is a useful way to overcome the memory wall in computers. However, the current I/O software stack in operating systems becomes a performance bottleneck for applications based on NVM devices, especially for key–value stores. We analyzed the characteristics of key–value stores and NVM devices and designed a new embedded key–value store for an NVM device simulator named PMEKV. The embedded processor in NVM devices is used to manage key–value pairs, reducing data transfer between NVM devices and key–value applications. Meanwhile, it also cuts down data copying between user space and kernel space in the operating system, alleviating the effect of the I/O software stack on the efficiency of key–value stores. The architecture, data layout, management strategy, new interface and log strategy of PMEKV are given. Finally, a prototype of PMEKV was implemented based on PMEM. We used YCSB to test and compare it with Redis, MongoDB, and Memcached. Meanwhile, PMEM-Redis (Redis for PMEM) and PMEM-KV were also tested and compared with PMEKV. The results show that PMEKV has the advantage in throughput and adaptability over current key–value stores.
24

Cheng, Wenxue, Wanchun Jiang, Tong Zhang, and Fengyuan Ren. "Optimizing the Response Time of Memcached Systems via Model and Quantitative Analysis." IEEE Transactions on Computers, 2020, 1. http://dx.doi.org/10.1109/tc.2020.3011619.

25

Arul, E., and A. Punidha. "Supervised Deep Learning Vector Quantization to Detect MemCached DDOS Malware Attack on Cloud." SN Computer Science 2, no. 2 (February 10, 2021). http://dx.doi.org/10.1007/s42979-021-00477-z.

26

Chen, Lei, Jiacheng Zhao, Chenxi Wang, Ting Cao, John Zigman, Haris Volos, Onur Mutlu, et al. "Unified Holistic Memory Management Supporting Multiple Big Data Processing Frameworks over Hybrid Memories." ACM Transactions on Computer Systems, February 4, 2022. http://dx.doi.org/10.1145/3511211.

Abstract:
To process real-world datasets, modern data-parallel systems often require extremely large amounts of memory, which are both costly and energy-inefficient. Emerging non-volatile memory (NVM) technologies offer high capacity compared to DRAM and low energy compared to SSDs. Hence, NVMs have the potential to fundamentally change the dichotomy between DRAM and durable storage in Big Data processing. However, most Big Data applications are written in managed languages and executed on top of a managed runtime that already performs various dimensions of memory management. Supporting hybrid physical memories adds a new dimension, creating unique challenges in data replacement. This paper proposes Panthera, a semantics-aware, fully automated memory management technique for Big Data processing over hybrid memories. Panthera analyzes user programs on a Big Data system to infer their coarse-grained access patterns, which are then passed to the Panthera runtime for efficient data placement and migration. For Big Data applications, the coarse-grained data division information is accurate enough to guide the GC for data layout, which hardly incurs overhead in data monitoring and moving. We implemented Panthera in OpenJDK and Apache Spark. Based on Big Data applications’ memory access patterns, we also implemented a new profiling-guided optimization strategy, which is transparent to applications. With this optimization, our extensive evaluation demonstrates that Panthera reduces energy by 32–53% at less than 1% time overhead on average. To show Panthera’s applicability, we extend it to QuickCached, a pure Java implementation of Memcached. Our evaluation results show that Panthera reduces energy by 28.7% at 5.2% time overhead on average.