To see the other types of publications on this topic, follow the link: Prefetch techniques.

Journal articles on the topic 'Prefetch techniques'

Consult the top 37 journal articles for your research on the topic 'Prefetch techniques.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Verma, Santhosh, and David M. Koppelman. "The Interaction and Relative Effectiveness of Hardware and Software Data Prefetch." Journal of Circuits, Systems and Computers 21, no. 02 (2012): 1240002. http://dx.doi.org/10.1142/s0218126612400026.

Full text
Abstract:
A major performance limiter in modern processors is the long latencies caused by data cache misses. Both compiler- and hardware-based prefetching schemes help hide these latencies and so improve performance. Compiler techniques infer memory access patterns through code analysis, and insert appropriate prefetch instructions. Hardware prefetching techniques work independently from the compiler by monitoring an access stream, detecting patterns in this stream and issuing prefetches based on these patterns. This paper looks at the interplay between compiler and hardware architecture-based prefetching techniques. Does either technique make the other one unnecessary? First, compilers' ability to achieve good results without extreme expertise is evaluated by preparing binaries with no prefetch, one-flag prefetch (no tuning), and expertly tuned prefetch. From runs of SPECcpu2006 binaries, we find that expertise avoids minor slowdown in a few benchmarks and provides substantial speedup in others. We compare software schemes to hardware prefetching schemes and our simulations show software alone substantially outperforms hardware alone on about half of a selection of benchmarks. While hardware matches or exceeds software in a few cases, software is better on average. Analysis reveals that in many cases hardware is not prefetching access patterns that it is capable of recognizing, due to some irregularities in the observed miss sequence. Hardware outperforms software on address sequences that the compiler would not guess. In general, while software is better at prefetching individual loads, hardware partly compensates for this by identifying more loads to prefetch. Using the two schemes together provides further benefits, but less than the sum of the contributions of each alone.
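To make the hardware side of this comparison concrete, the toy sketch below (in Python, purely illustrative and unrelated to the paper's simulator) shows the kind of pattern detection a simple hardware prefetcher performs: it watches the per-PC access stream, confirms a repeating stride, and issues prefetches ahead of the stream. The table structure, confidence threshold, and prefetch degree are arbitrary assumptions.

```python
class StridePrefetcher:
    """Toy per-PC stride prefetcher: learns address deltas from the access
    stream and issues prefetches once the same stride repeats."""

    def __init__(self, degree=2):
        self.table = {}       # pc -> (last_addr, last_stride, confidence)
        self.degree = degree  # how many blocks ahead to prefetch

    def observe(self, pc, addr):
        prefetches = []
        last_addr, last_stride, conf = self.table.get(pc, (None, 0, 0))
        if last_addr is not None:
            stride = addr - last_addr
            conf = conf + 1 if stride == last_stride and stride != 0 else 0
            if conf >= 2:     # stride confirmed twice -> prefetch ahead of the stream
                prefetches = [addr + stride * i for i in range(1, self.degree + 1)]
            self.table[pc] = (addr, stride, conf)
        else:
            self.table[pc] = (addr, 0, 0)
        return prefetches

# A regular streaming load (PC 0x400) is picked up after a few accesses:
pf = StridePrefetcher()
for a in range(0x1000, 0x1400, 64):
    issued = pf.observe(0x400, a)
    if issued:
        print(hex(a), "->", [hex(x) for x in issued])
```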
APA, Harvard, Vancouver, ISO, and other styles
2

Srivastava, Swapnita, and P. K. Singh. "ADDP: The Data Prefetching Protocol for Monitoring Capacity Misses." ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal 14 (April 11, 2025): e31782. https://doi.org/10.14201/adcaij.31782.

Full text
Abstract:
Prefetching is essential to minimizing the number of cache misses and improving processor performance. Many prefetchers have been proposed, including simple but highly effective stream-based prefetchers and prefetchers that predict complex access patterns based on structures such as history buffers and bit vectors. However, many cache misses still occur in many applications. After analyzing various instruction- and data-prefetching techniques, several key features that impact system performance were extracted. Data prefetching is an essential technique used in all commercial processors, and data prefetchers aim at hiding the long data access latency. In this paper, we present the design of an Adaptive Delta-based Data Prefetching (ADDP) scheme that employs four different tables organized in a hierarchical manner to address the diversity of access patterns. Firstly, the Entry Table is a queue that tracks recent cache fills. Secondly, the Predict Table has trigger program counters (PCs) as tags. Thirdly, the Address Difference Table (ADT) has target PCs as tags. Lastly, the Prefetch Table is divided into two parts, i.e., the Prefetch Filter and the actual Prefetch Table: the Prefetch Filter removes unnecessary prefetch accesses, and the Prefetch Table tracks additional information for each prefetch. ADDP has been implemented in a multi-cache-level prefetching system under the 3rd Data Prefetching Championship (DPC-3) framework. ADDP is an effective solution for data-intensive applications, showing notable gains in cache hit rates and latency reduction. The simulation results show that ADDP outperforms the top three data prefetchers MLOP, SPP, and BINGO by 5.312%, 13.213%, and 10.549%, respectively.
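As a rough illustration of the delta-based idea that ADDP builds on (recording differences between successive addresses seen by a trigger PC and replaying them to predict future lines), here is a heavily simplified sketch. It does not model ADDP's four-table hierarchy or its prefetch filter; the history length, confirmation rule, and prefetch degree are assumptions.

```python
from collections import defaultdict, deque

class DeltaPrefetcher:
    """Toy delta prefetcher: per trigger PC, remember recent address deltas
    and predict the next addresses by replaying the last observed delta."""

    def __init__(self, history=4, degree=2):
        self.last_addr = {}                                        # pc -> last address
        self.deltas = defaultdict(lambda: deque(maxlen=history))   # pc -> recent deltas
        self.degree = degree

    def access(self, pc, addr):
        preds = []
        if pc in self.last_addr:
            delta = addr - self.last_addr[pc]
            if delta != 0:
                self.deltas[pc].append(delta)
            hist = self.deltas[pc]
            # simple policy: if the last two deltas agree, replay that delta
            if len(hist) >= 2 and hist[-1] == hist[-2]:
                preds = [addr + hist[-1] * i for i in range(1, self.degree + 1)]
        self.last_addr[pc] = addr
        return preds

pf = DeltaPrefetcher()
for addr in [100, 103, 106, 109, 112]:   # a +3 pattern from one PC
    print(addr, pf.access(pc=0x7f0, addr=addr))
```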
APA, Harvard, Vancouver, ISO, and other styles
3

Deb, Dipika, and John Jose. "ZPP: A Dynamic Technique to Eliminate Cache Pollution in NoC based MPSoCs." ACM Transactions on Embedded Computing Systems 22, no. 5s (2023): 1–25. http://dx.doi.org/10.1145/3609113.

Full text
Abstract:
Data prefetching efficiently reduces the memory access latency in NUCA architectures, where the Last Level Cache (LLC) is shared and distributed across multiple cores. However, cache pollution generated by the prefetcher reduces its efficiency by causing contention for shared resources such as the LLC and the underlying network. The paper proposes the Zero Pollution Prefetcher (ZPP), which eliminates cache pollution for NUCA architectures. For this purpose, ZPP uses an L1 prefetcher and places the prefetched blocks in LLC data locations where modified blocks are stored. Since modified blocks in the LLC are stale and requests for such blocks are served from the exclusively owned private cache, keeping such stale data in the LLC unnecessarily consumes space and power. The benefits of ZPP are: (a) it eliminates cache pollution in the L1 and LLC by storing prefetched blocks in LLC locations that hold stale blocks; (b) it addresses insufficient cache space by placing prefetched blocks in the LLC, which is much larger than the L1 cache, allowing more cache blocks to be prefetched and thereby increasing prefetch aggressiveness; (c) increasing prefetch aggressiveness increases coverage; and (d) it maintains a lookup latency for prefetched blocks equivalent to that of the L1 cache. Experimentally, ZPP increases weighted speedup by 2.19x compared to a system with no prefetching, while prefetch coverage and prefetch accuracy increase by 50% and 12%, respectively, compared to the baseline.
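The placement idea can be pictured with a small toy model: LLC ways holding blocks whose up-to-date copies live modified in a private cache are effectively stale, so prefetched blocks can be parked there instead of evicting useful lines. The sketch below is only an intuition aid under simplified assumptions (one set, no coherence protocol modeled); it is not the ZPP design.

```python
class ToyLLCSet:
    """One LLC set where prefetched blocks reuse ways that hold stale data,
    i.e. blocks whose up-to-date copy lives modified in a private L1."""

    def __init__(self, ways=4):
        # each way: tag, stale flag (modified elsewhere), prefetched flag
        self.ways = [dict(tag=None, stale=False, prefetched=False) for _ in range(ways)]

    def mark_stale(self, tag):
        for w in self.ways:
            if w["tag"] == tag:
                w["stale"] = True    # a private L1 now owns the modified copy

    def insert_prefetch(self, tag):
        # prefer a stale way so demand-fetched (useful) lines are not evicted
        victim = next((w for w in self.ways if w["stale"]), None)
        if victim is None:
            return False             # no stale way: drop the prefetch, no pollution
        victim.update(tag=tag, stale=False, prefetched=True)
        return True

llc = ToyLLCSet()
for i, t in enumerate(["A", "B", "C", "D"]):
    llc.ways[i]["tag"] = t           # set initially filled by demand misses
llc.mark_stale("B")                  # core 1 wrote B; the LLC copy is now stale
print(llc.insert_prefetch("P1"))     # True: P1 lands in B's way, nothing useful evicted
print(llc.insert_prefetch("P2"))     # False: no stale way left, prefetch is dropped
```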
APA, Harvard, Vancouver, ISO, and other styles
4

Alves, Ricardo, Stefanos Kaxiras, and David Black-Schaffer. "Early Address Prediction." ACM Transactions on Architecture and Code Optimization 18, no. 3 (2021): 1–22. http://dx.doi.org/10.1145/3458883.

Full text
Abstract:
Achieving low load-to-use latency with low energy and storage overheads is critical for performance. Existing techniques either prefetch into the pipeline (via address prediction and validation) or provide data reuse in the pipeline (via register sharing or L0 caches). These techniques provide a range of tradeoffs between latency, reuse, and overhead. In this work, we present a pipeline prefetching technique that achieves state-of-the-art performance and data reuse without additional data storage, data movement, or validation overheads by adding address tags to the register file. Our addition of register file tags allows us to forward (reuse) load data from the register file with no additional data movement, keep the data alive in the register file beyond the instruction’s lifetime to increase temporal reuse, and coalesce prefetch requests to achieve spatial reuse. Further, we show that we can use the existing memory order violation detection hardware to validate prefetches and data forwards without additional overhead. Our design achieves the performance of existing pipeline prefetching while also forwarding 32% of the loads from the register file (compared to 15% in state-of-the-art register sharing), delivering a 16% reduction in L1 dynamic energy (1.6% total processor energy), with an area overhead of less than 0.5%.
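A minimal sketch of the general mechanism of tagging register-file entries with the address they were loaded from, so that a later load to the same address can be forwarded from the register file instead of accessing the cache, is shown below. It is illustrative only; the paper's lifetime extension, prefetch coalescing, and validation via the memory-order-violation hardware are not modeled.

```python
class TaggedRegisterFile:
    """Registers annotated with the address their value was loaded from.
    A later load to a tagged address is forwarded instead of reading the cache."""

    def __init__(self):
        self.regs = {}       # register name -> value
        self.addr_tag = {}   # register name -> source address

    def write_load(self, reg, addr, value):
        self.regs[reg] = value
        self.addr_tag[reg] = addr

    def load(self, reg, addr, read_cache):
        # forward from any register already holding data for this address
        source = next((r for r, a in self.addr_tag.items() if a == addr), None)
        if source is not None:
            self.write_load(reg, addr, self.regs[source])
            return "forwarded"
        self.write_load(reg, addr, read_cache(addr))
        return "cache"

memory = {0x100: 42}
rf = TaggedRegisterFile()
print(rf.load("r1", 0x100, memory.get))  # 'cache'     (first access misses the RF)
print(rf.load("r2", 0x100, memory.get))  # 'forwarded' (reused from r1, no cache access)
```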
APA, Harvard, Vancouver, ISO, and other styles
5

Roy, Bishwa Ranjan, Purnendu Das, and Nurulla Mansur Barbhuiya. "PP-Bridge: Establishing a Bridge between the Prefetching and Cache Partitioning." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (2023): 897–906. http://dx.doi.org/10.17762/ijritcc.v11i9.8982.

Full text
Abstract:
Modern computer processors are equipped with multiple cores, each boasting its own dedicated cache memory, while collectively sharing a generously sized Last Level Cache (LLC). To ensure equitable utilization of the LLC space and bolster system security, partitioning techniques have been introduced to allocate the shared LLC space among the applications running on different cores. This partition dynamically adapts to the requirements of these applications. Prefetching plays a vital role in enhancing cache performance by proactively loading data into the cache before it gets requested explicitly by a core. Each core employs prefetch engines to decide which data blocks to fetch preemptively. However, a haphazard prefetcher may bring in more data blocks than necessary, leading to cache pollution and a subsequent degradation in system performance. To maximize the benefits of prefetching, it is essential to keep cache pollution to a minimum. Intriguingly, our research has uncovered that when existing prefetching techniques are combined with partitioning methods, they tend to exacerbate cache pollution within the LLC, resulting in a noticeable decline in system performance. In this paper, we present a novel approach aimed at mitigating cache pollution when combining prefetching with partitioning techniques.
APA, Harvard, Vancouver, ISO, and other styles
6

Hariharan, I., and M. Kannan. "Efficient Use of On-Chip Memories and Scheduling Techniques to Eliminate the Reconfiguration Overheads in Reconfigurable Systems." Journal of Circuits, Systems and Computers 28, no. 14 (2019): 1950246. http://dx.doi.org/10.1142/s0218126619502463.

Full text
Abstract:
Modern embedded systems are packed with dedicated Field Programmable Gate Arrays (FPGAs) to accelerate the overall system performance. However, the FPGAs are susceptible to reconfiguration overheads. The reconfiguration overheads are mainly because of the configuration data being fetched from the off-chip memory at run-time and also due to the improper management of tasks during execution. To reduce these overheads, our proposed methodology mainly focuses on the prefetch heuristic, reuse technique, and the available memory hierarchy to provide an efficient mapping of tasks over the available memories. Our paper includes a new replacement policy which reduces the overall time and energy reconfiguration overheads for static systems in their subsequent iterations. It is evident from the result that most of the reconfiguration overheads are eliminated when the applications are managed and executed based on our methodology.
APA, Harvard, Vancouver, ISO, and other styles
7

Liang, Ye. "Big Data Storage Method in Wireless Communication Environment." Advanced Materials Research 756-759 (September 2013): 899–904. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.899.

Full text
Abstract:
The big data phenomenon refers to the collection and processing of very large data sets, together with the associated systems and algorithms used to analyze them. Big data services are very attractive in wireless communication environments, especially for spatial applications, which are typical big data applications. Because of the complexity of ingesting, storing, and analyzing geographical information data, this paper reflects on a few of the technical problems presented by the exploration of big data and puts forward an effective storage method for wireless communication environments. The method is based on measuring moving regularity and rests on three key techniques: a partition technique, an index technique, and a prefetch technique. Experimental results show that the big data storage method using these new techniques outperforms other storage methods in managing large volumes of big data in a wireless communication environment.
APA, Harvard, Vancouver, ISO, and other styles
8

Natarajan, Ragavendra, Vineeth Mekkat, Wei-Chung Hsu, and Antonia Zhai. "Effectiveness of Compiler-Directed Prefetching on Data Mining Benchmarks." Journal of Circuits, Systems and Computers 21, no. 02 (2012): 1240006. http://dx.doi.org/10.1142/s0218126612400063.

Full text
Abstract:
For today's increasingly power-constrained multicore systems, integrating simpler and more energy-efficient in-order cores becomes attractive. However, since in-order processors lack complex hardware support for tolerating long-latency memory accesses, developing compiler technologies to hide such latencies becomes critical. Compiler-directed prefetching has been demonstrated to be effective on some applications. On the application side, a large class of data-centric applications has emerged to explore the underlying properties of explosively growing data. These applications, in contrast to traditional benchmarks, are characterized by substantial thread-level parallelism, complex and unpredictable control flow, and intensive and irregular memory access patterns, and they are expected to be the dominating workloads on future microprocessors. Thus, in this paper, we investigate the effectiveness of compiler-directed prefetching on data mining applications in in-order multicore systems. Our study reveals that although properly inserted prefetch instructions can often effectively reduce memory access latencies for data mining applications, the compiler is not always able to exploit this potential. Compiler-directed prefetching can become inefficient in the presence of complex control flow, irregular memory access patterns, and architecture-dependent behaviors. The integration of multithreaded execution onto a single die makes it even more difficult for the compiler to insert prefetch instructions, since optimizations that are effective for single-threaded execution may or may not be effective in multithreaded execution. Thus, compiler-directed prefetching must be judiciously deployed to avoid creating performance bottlenecks that otherwise do not exist. Our experiences suggest that dynamic performance tuning techniques that adjust to the behaviors of a program can potentially facilitate the deployment of aggressive optimizations in data mining applications.
APA, Harvard, Vancouver, ISO, and other styles
9

Veeragangadhara Swamy, T. M., and G. T. Raju. "A Novel Prefetching Technique through Frequent Sequential Patterns from Web Usage Data." COMPUSOFT: An International Journal of Advanced Computer Technology 04, no. 06 (2015): 1826–36. https://doi.org/10.5281/zenodo.14785813.

Full text
Abstract:
Frequent sequential patterns (FSPs) from web usage data (WUD) are very important for analyzing and understanding users' behavior in order to improve the quality of services offered by the World Wide Web (WWW). Web prefetching is one of the techniques for reducing web latency and thereby improving the web retrieval process. This technique makes use of prefetching rules that are derived from FSPs. In this paper, we explore different FSP mining algorithms, such as SPM, FP-Growth, and SPADE, for the extraction of FSPs from the WUD of an academic website over periods ranging from weekly to quarterly. The performance of all of these FSP algorithms has been analyzed against the number of FSPs they generate for a given minimum support. Experimental results show that the SPADE FSP mining algorithm performs better than the SPM and FP-Growth algorithms. Based on the FSPs, we propose a novel prefetching technique that generates prefetching rules from the FSPs and prefetches the web pages so as to reduce the users' perceived latency.
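The overall flow of turning frequent page sequences into prefetch rules can be sketched as follows: count frequent consecutive page pairs across user sessions, and when a page is requested, prefetch its most frequent successor. This toy example is not the SPM/FP-Growth/SPADE mining or the exact rule-generation procedure of the paper; the minimum support and session data are made up.

```python
from collections import Counter

def mine_rules(sessions, min_support=2):
    """Count consecutive page pairs across sessions and keep frequent ones
    as prefetch rules: 'after page a, prefetch page b'."""
    pair_counts = Counter()
    for pages in sessions:
        pair_counts.update(zip(pages, pages[1:]))
    rules = {}
    for (a, b), count in pair_counts.items():
        # keep the strongest frequent consequent per antecedent page
        if count >= min_support and count >= rules.get(a, (None, 0))[1]:
            rules[a] = (b, count)
    return {a: b for a, (b, _) in rules.items()}

sessions = [
    ["index", "courses", "exams"],
    ["index", "courses", "staff"],
    ["news", "index", "courses", "exams"],
]
rules = mine_rules(sessions)
print(rules.get("index"))    # 'courses' -> prefetch it whenever index is requested
print(rules.get("courses"))  # 'exams'   (seen twice, above min_support)
```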
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Haoyu, Qizhi Chen, Yixin Zhang, Tong Yang, and Bin Cui. "Stingy sketch." Proceedings of the VLDB Endowment 15, no. 7 (2022): 1426–38. http://dx.doi.org/10.14778/3523210.3523220.

Full text
Abstract:
Recording the frequency of items in highly skewed data streams has been a fundamental and actively studied problem in recent years. The literature demonstrates that sketches are the most promising solution. The typical metrics to measure a sketch are accuracy and speed, but existing sketches only trade off between the two dimensions. Our proposed solution is a new sketch framework called Stingy sketch with two key techniques: Bit-pinching Counter Tree (BCTree) and Prophet Queue (PQueue), which optimize both accuracy and speed. The key idea of BCTree is to split a large fixed-size counter into many small nodes of a tree structure and to use a precise encoding to perform carry-in operations with low processing overhead. The key idea of PQueue is to use a pipelined prefetch technique to make most memory accesses happen in the L2 cache without losing precision. Importantly, the two techniques are cooperative, so Stingy sketch can improve accuracy and speed simultaneously. Extensive experimental results show that Stingy sketch is up to 50% more accurate than the SOTA of accuracy-oriented sketches and up to 33% faster than the SOTA of speed-oriented sketches.
APA, Harvard, Vancouver, ISO, and other styles
11

Sasongko, Muhammad Aditya, Milind Chabbi, Mandana Bagheri Marzijarani, and Didem Unat. "ReuseTracker: Fast Yet Accurate Multicore Reuse Distance Analyzer." ACM Transactions on Architecture and Code Optimization 19, no. 1 (2022): 1–25. http://dx.doi.org/10.1145/3484199.

Full text
Abstract:
One widely used metric that measures data locality is reuse distance: the number of unique memory locations that are accessed between two consecutive accesses to a particular memory location. State-of-the-art techniques that measure reuse distance in parallel applications rely on simulators or binary instrumentation tools that incur large performance and memory overheads. Moreover, the existing sampling-based tools are limited to measuring reuse distances of a single thread and discard interactions among threads in multi-threaded programs. In this work, we propose ReuseTracker, a fast and accurate reuse distance analyzer that leverages existing hardware features in commodity CPUs. ReuseTracker is designed for multi-threaded programs and takes cache-coherence effects into account. By utilizing hardware features such as performance monitoring units and debug registers, ReuseTracker can accurately profile reuse distance in parallel applications with much lower overheads than existing tools. It introduces only 2.9× runtime and 2.8× memory overheads. Our tool achieves 92% accuracy when verified against a newly developed configurable benchmark that can generate a variety of different reuse distance patterns. We demonstrate the tool's functionality with two use-case scenarios using the PARSEC, Rodinia, and Synchrobench benchmark suites, where ReuseTracker guides code refactoring by detecting spatial reuses in shared caches that also cause false sharing, and successfully predicts whether some benchmarks in these suites can benefit from adjacent-cache-line prefetch optimization.
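For readers unfamiliar with the metric, the snippet below computes reuse distance directly from an address trace, following the definition above. It is a naive, single-threaded illustration only; ReuseTracker's contribution is measuring this cheaply for multi-threaded programs via PMUs and debug registers, which a sketch like this does not capture.

```python
def reuse_distances(trace):
    """For each access, report the number of distinct other addresses touched
    since the previous access to the same address (None on first use)."""
    last_index = {}           # address -> position of its previous access
    out = []
    for i, addr in enumerate(trace):
        if addr in last_index:
            between = trace[last_index[addr] + 1 : i]
            out.append((addr, len(set(between))))
        else:
            out.append((addr, None))   # cold access: infinite reuse distance
        last_index[addr] = i
    return out

trace = ["A", "B", "C", "B", "A", "A"]
print(reuse_distances(trace))
# [('A', None), ('B', None), ('C', None), ('B', 1), ('A', 2), ('A', 0)]
```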
APA, Harvard, Vancouver, ISO, and other styles
12

Eldeeb, Tamer, Sebastian Burckhardt, Reuben Bond, Asaf Cidon, Junfeng Yang, and Philip A. Bernstein. "Cloud Actor-Oriented Database Transactions in Orleans." Proceedings of the VLDB Endowment 17, no. 12 (2024): 3720–30. http://dx.doi.org/10.14778/3685800.3685801.

Full text
Abstract:
Microsoft Orleans is a popular open source distributed programming framework and platform which invented the virtual actor model, and has since evolved into an actor-oriented database system with the addition of database abstractions such as ACID transactions. Properties of Orleans' virtual actor model imply that any ACID transaction mechanism for operations spanning multiple actors must support distributed transactions on top of pluggable cloud storage drivers. Unfortunately, distributed transactions usually perform poorly in this environment, partly because of the high performance and contention overhead of performing two-phase commit (2PC) on slow cloud storage systems. In this paper we describe the design and implementation of ACID transactions in Orleans. The system uses two primary techniques to mask the high latency of cloud storage and enable high transaction throughput. First, Orleans pioneered the use of a distributed form of early lock release by releasing all of a transaction's locks during phase one of 2PC, and by tracking commit dependencies to implement cascading abort. This avoids blocking transactions while running 2PC and enables a distributed form of group commit. Second, Orleans leverages reconnaissance queries to prefetch the state of all actors involved in a transaction from cloud storage prior to running the transaction and acquiring any locks, thus ensuring no locks are held while blocking on high latency cloud storage in most cases.
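The reconnaissance idea, reading the state of every actor a transaction will touch from slow storage before any locks are taken so that the lock-holding window never overlaps a storage round trip, can be pictured with the toy sketch below. The class names and storage/lock interfaces are hypothetical and are not the Orleans API.

```python
import threading

class Actor:
    """Toy actor with slow persistent state and an in-memory prefetched copy."""
    def __init__(self, key, storage):
        self.key, self.storage = key, storage
        self.lock = threading.Lock()
        self.cached_state = None

    def prefetch(self):                       # reconnaissance read: no lock held
        self.cached_state = self.storage.read(self.key)   # slow cloud read

def run_transaction(actors, apply_update):
    # Phase 1: reconnaissance - prefetch all state before acquiring any locks
    for a in actors:
        a.prefetch()
    # Phase 2: short critical section - locks never held across storage reads
    ordered = sorted(actors, key=lambda act: act.key)     # fixed order avoids deadlock
    for a in ordered:
        a.lock.acquire()
    try:
        for a in actors:
            a.cached_state = apply_update(a.cached_state)
    finally:
        for a in ordered:
            a.lock.release()

class SlowStorage:
    def __init__(self):
        self.data = {"acct:1": 100, "acct:2": 50}
    def read(self, key):
        return self.data[key]                 # imagine a high-latency cloud read here

storage = SlowStorage()
accounts = [Actor("acct:1", storage), Actor("acct:2", storage)]
run_transaction(accounts, lambda bal: bal + 10)
print([a.cached_state for a in accounts])     # [110, 60]
```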
APA, Harvard, Vancouver, ISO, and other styles
13

Cao, Ronghui, Julong Wang, Liming Zheng, et al. "Optimizing Lattice Basis Reduction Algorithm on ARM V8 Processors." Applied Sciences 15, no. 4 (2025): 2021. https://doi.org/10.3390/app15042021.

Full text
Abstract:
The LLL (Lenstra–Lenstra–Lovász) algorithm is an important method for lattice basis reduction and has broad applications in computer algebra, cryptography, number theory, and combinatorial optimization. However, current LLL algorithms face challenges such as inadequate adaptation to domestic supercomputers and low efficiency. To enhance the efficiency of the LLL algorithm in practical applications, this research focuses on parallel optimization of the LLL_FP (double-precision floating-point) algorithm from the NTL library on the domestic Tianhe supercomputer with the Phytium ARM V8 processor. The optimization begins with the vectorization of the Gram–Schmidt coefficient calculation and row transformation using the SIMD instruction set of the Phytium chip, which significantly improves computational efficiency. Further assembly-level optimization fully utilizes the low-level instructions of the Phytium processor and increases execution speed. In terms of memory access, data prefetch techniques are employed to load necessary data in advance of computation, reducing cache misses and accelerating data processing. To further enhance performance, loop unrolling was applied to the core loop, allowing more operations per loop iteration. Experimental results show that the optimized LLL_FP algorithm achieves up to a 42% performance improvement, with a minimum improvement of 34% and an average improvement of 38% in single-core efficiency compared to the serial LLL_FP algorithm. This study provides a more efficient solution for large-scale lattice basis reduction and demonstrates the potential of the LLL algorithm in ARM V8 high-performance computing environments.
APA, Harvard, Vancouver, ISO, and other styles
14

Lee, Minsuk, Sang Lyul Min, and Chong Sang Kim. "A worst case timing analysis technique for instruction prefetch buffers." Microprocessing and Microprogramming 40, no. 10-12 (1994): 681–84. http://dx.doi.org/10.1016/0165-6074(94)90017-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Bok, Kyoungsoo, Seunghun Yoo, Dojin Choi, Jongtae Lim, and Jaesoo Yoo. "In-Memory Caching for Enhancing Subgraph Accessibility." Applied Sciences 10, no. 16 (2020): 5507. http://dx.doi.org/10.3390/app10165507.

Full text
Abstract:
Graphs have been utilized in various fields because of the development of social media and mobile devices. Various studies have also been conducted on caching techniques to reduce input and output costs when processing a large amount of graph data. In this paper, we propose a two-level caching scheme that considers the past usage pattern of subgraphs and graph connectivity, which are features of graph topology. The proposed caching is divided into a used cache and a prefetched cache to manage previously used subgraphs and subgraphs that will be used in the future. When the memory is full, a strategy that replaces a subgraph inside the memory with a new subgraph is needed. Subgraphs in the used cache are managed by a time-to-live (TTL) value, and subgraphs with a low TTL value are targeted for replacement. Subgraphs in the prefetched cache are managed by the queue structure. Thus, first-in subgraphs are targeted for replacement as a priority. When a cache hit occurs in the prefetched cache, the subgraphs are migrated and managed in the used cache. As a result of the performance evaluation, the proposed scheme takes into account subgraph usage patterns and graph connectivity, thus improving cache hit rates and data access speeds compared to conventional techniques. The proposed scheme can quickly process and analyze large graph queries in a computing environment with small memory. The proposed scheme can be used to speed up in-memory-based processing in applications where relationships between objects are complex, such as the Internet of Things and social networks.
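A minimal sketch of the two-level structure described above: a used cache whose entries carry TTL values (the lowest-TTL subgraph is evicted first) and a FIFO prefetched cache whose entries migrate to the used cache on a hit. Capacities, the aging policy, and the prefetch trigger are assumptions made for illustration.

```python
from collections import OrderedDict

class TwoLevelSubgraphCache:
    """Used cache managed by TTL plus a prefetched cache managed as a FIFO queue."""

    def __init__(self, used_cap=3, prefetch_cap=3, initial_ttl=3):
        self.used = {}                   # subgraph id -> remaining TTL
        self.prefetched = OrderedDict()  # FIFO of prefetched subgraph ids
        self.used_cap = used_cap
        self.prefetch_cap = prefetch_cap
        self.initial_ttl = initial_ttl

    def prefetch(self, sg):
        if sg in self.used or sg in self.prefetched:
            return
        if len(self.prefetched) >= self.prefetch_cap:
            self.prefetched.popitem(last=False)   # first-in subgraph replaced first
        self.prefetched[sg] = True

    def access(self, sg):
        for other in self.used:                   # age every other used subgraph
            if other != sg:
                self.used[other] -= 1
        if sg in self.used:                       # hit in used cache: refresh TTL
            self.used[sg] = self.initial_ttl
            return "hit:used"
        hit = sg in self.prefetched
        if hit:
            del self.prefetched[sg]               # migrate to the used cache on a hit
        if len(self.used) >= self.used_cap:       # evict the lowest-TTL subgraph
            del self.used[min(self.used, key=self.used.get)]
        self.used[sg] = self.initial_ttl
        return "hit:prefetched" if hit else "miss"

cache = TwoLevelSubgraphCache()
cache.prefetch("g2")
print(cache.access("g1"))   # miss           (g1 loaded into the used cache)
print(cache.access("g2"))   # hit:prefetched (g2 migrates from the prefetched cache)
print(cache.access("g2"))   # hit:used
```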
APA, Harvard, Vancouver, ISO, and other styles
16

Mutua, Mercy, Henry K. Kiplangat, and Frederick B. J. A. Ngala. "Relationship between classroom management practice and students' disruptive behaviour in mixed secondary schools in Kisauni Sub-county, Mombasa County, Kenya." Editon Consortium Journal of Educational Management and Leadership 4, no. 1 (2023): 239–52. http://dx.doi.org/10.51317/ecjeml.v4i1.436.

Full text
Abstract:
This study sought to assess and recommend ways of solving the problem of students' disruptive behaviour in the classroom in mixed secondary schools in Kisauni Sub-county, Mombasa County, Kenya. The objective of this study was to find out the relationship between classroom control practices by prefects and student disruptive behaviour in the classroom in mixed secondary schools in Kisauni Sub-county, Mombasa County, Kenya. The data was collected and analysed using a descriptive design, and the study's target population included 24 mixed secondary schools, 96 class teachers, and 840 form four students in Kisauni Sub-county. The study sampled 8 schools and 24 class teachers using both purposive and simple random sampling techniques. A simple random sampling procedure was employed in order to select the actual students/respondents to participate in the study. Descriptive statistics computed included means, frequencies, standard deviations and percentages. In order to test hypotheses, F- and t-statistics were computed to test for statistically significant differences at a 95 per cent significance level. Data were presented in diagrams, charts and tables. There is a moderate positive correlation between prefects' classroom control practices and students' disruptive behaviour (r = .269, p = .000 < .05). Prefects' classroom control practices are an important predictor of students' disruptive behaviour (β = .269, p = .000 < .05, t = 4.286). The study is significant in that it will help teachers understand different student disruptive behaviours in secondary school, which will give direction on how to curb such behaviours.
APA, Harvard, Vancouver, ISO, and other styles
17

MAČEK, Jože. "Delovanje dunajskega Terezianuma in Theodorja Kravine na področju ekonomije in agronomije." Acta agriculturae Slovenica 115, no. 2 (2020): 485. http://dx.doi.org/10.14720/aas.2020.115.2.1664.

Full text
Abstract:
Theodor Kravina von Kronstein (1720–1789) was born in Slovenska Bistrica. As a Jesuit he became prefect and later rector of the Vienna Military Academy, later the general academy Theresianum. The contribution deals with his work entitled Entwurf der oekonomischen Kenntnisse, published in 1773, which represents a systematic outline of the economic sciences taught at the Theresianum. It was predominantly concerned with practical expertise in knowing soils, plants, minerals, and raw materials, and with the techniques for processing them into final products. In this monograph Kravina also described the "Economic garden", the school agricultural enterprise, and the mineral collections, all of which, under his leadership, significantly improved the quality of the schooling process.
APA, Harvard, Vancouver, ISO, and other styles
18

Raj, M. P., F. P. Savaliya, and A. B. Patel. "Selective Breeding under a Hierarchical Mating Using Osborne Index Web App." Asian Journal of Research in Computer Science 16, no. 4 (2023): 1–7. http://dx.doi.org/10.9734/ajrcos/2023/v16i4365.

Full text
Abstract:
The poultry industry has targets to meet consumption trends and thus to produce genetically superior birds with high egg productivity. Better egg production techniques are recommended to satisfy in-house and export demand. The correlation of egg production with various parameters is considered by various breeders. In their efforts to satisfy demand, poultry breeders have introduced individual feed conversion testing, the Osborne index, pedigreeing, hybridization, selection indices, artificial insemination, mass selection, etc. The most reliable and proven Osborne index states that the maximum efficiency of egg production can be obtained by selection on the basis of a combination of family average and individual record. Technological advances have fostered the poultry sector in the last few decades, and ICT-led transmutation of processes and practices is apparent in almost all aspects of human activity. Knowledge about a particular breeding technique is required for its perfect implementation, and these techniques require a kind of data mining and statistical analysis for mating sires and dams. In the era of 5G, web apps can provide better options for delivering timely, precisely analyzed information to poultry owners or breeders. This paper proposes a device-responsive web app for the Osborne Index for hierarchical mating using selective breeding.
APA, Harvard, Vancouver, ISO, and other styles
19

Nalajala, Anusha, T. Ragunathan, Ranesh Naha, and Sudheer Kumar Battula. "HRFP: Highly Relevant Frequent Patterns-Based Prefetching and Caching Algorithms for Distributed File Systems." Electronics 12, no. 5 (2023): 1183. http://dx.doi.org/10.3390/electronics12051183.

Full text
Abstract:
Data-intensive applications are generating massive amounts of data which is stored on cloud computing platforms where distributed file systems are utilized for storage at the back end. Most users of those applications deployed on cloud computing systems read data more often than they write. Hence, enhancing the performance of read operations is an important research issue. Prefetching and caching are used as important techniques in the context of distributed file systems to improve the performance of read operations. In this research, we introduced a novel highly relevant frequent patterns (HRFP)-based algorithm that prefetches content from the distributed file system environment and stores it in the client-side caches that are present in the same environment. We have also introduced a new replacement policy and an efficient migration technique for moving the patterns from the main memory caches to the caches present in the solid-state devices based on a new metric namely the relevancy of the patterns. According to the simulation results, the proposed approach outperformed other algorithms that have been suggested in the literature by a minimum of 15% and a maximum of 53%.
APA, Harvard, Vancouver, ISO, and other styles
20

Nurhidayanti, Nurhidayanti, Dedy Cahyadi, and Zainal Arifin. "Sistem Pendukung Keputusan Pemilihan Pustakawan Berprestasi Terbaik Menggunakan Metode Technique For Order Prefence By Similarity To Ideal Solution (TOPSIS) di Dinas Perpustakaan dan Kearsipan Daerah Provinsi Kalimantan Timur." Informatika Mulawarman : Jurnal Ilmiah Ilmu Komputer 15, no. 2 (2020): 120. http://dx.doi.org/10.30872/jim.v15i2.1591.

Full text
Abstract:
The best outstanding librarian is a library worker with superior achievements in the field of librarianship. One effort to improve librarians' performance in managing a library is to give recognition through the Best Outstanding Librarian Selection. The problem faced to date is the difficulty of unifying the jurors' perceptions when assessing participants; this arises because there is no standard system for carrying out the selection of the best outstanding librarian, which adds to the jury's working hours and makes the assessment less objective. Based on this problem, a computerized Decision Support System for the Selection of the Best Outstanding Librarian is needed to reach a decision that is accurate, effective, and efficient. The system aims to help the jury determine the winner of the best outstanding librarian selection. The method used in the Decision Support System is the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The score data entered into the system are computed with the TOPSIS method by finding the farthest and nearest distances from the positive and negative ideal solutions. The participant with the highest score is ranked first by the system. A comparison of the actual results with those of the implemented system yields an accuracy of 80%.
APA, Harvard, Vancouver, ISO, and other styles
21

Malik, Umair Shafqat, Andrea Garzulino, and Davide Del Curto. "3D Heterogeneous Database for Structural Analysis of Historic Buildings. A Discussion on Process Pipelines." Studies in Digital Heritage 7, no. 1 (2023): 17–46. http://dx.doi.org/10.14434/sdh.v7i1.36296.

Full text
Abstract:
This paper presents a methodology for creating a comprehensive heterogeneous 3D database for the structural evaluation of a historic building by using both non-destructive and destructive surveys combined with historical information. The availability of adequate data on the actual conditions is crucial when assessing the seismic vulnerability and structural behavior of a historic building and validating the results. A reliable 3D database must accept different kinds of data, e.g., the results of destructive/non-destructive surveys, historical information, etc. It must also be interrogated and enriched at any time. Therefore, creating such a 3D database may present several challenges in terms of data-gathering pipeline, comprehensiveness/redundancy, interpretation, organization, and integration with other heterogeneous data. The methodology we present in this paper includes 3D laser scanning, thermal imaging, and endoscopy combined with information regarding the state of conservation, construction history, materials, and techniques. We tested such methodology to create a database that was later used for Finite Element Modeling (FEM) to assess the seismic vulnerability of Diotti Palace, a neoclassical building that has been the seat of the Prefect of Milan since 1859. The results are analytically presented here. In conclusion, we highlight the pros and cons of the proposed methodology by means of a comparative discussion with the state of the art about 3D documentation pipelines for historic buildings and sites.
APA, Harvard, Vancouver, ISO, and other styles
22

Koech, Betty Chemutai. "Structure and Functions of Student Councils in Secondary Schools in Kericho County, Kenya." SCIENCE MUNDI 4, no. 1 (2024): 36–51. http://dx.doi.org/10.51867/scimundi.4.1.4.

Full text
Abstract:
In Kenyan secondary schools, student conflicts pose significant challenges in the 21st century. This study aimed to evaluate the structure and functions of student councils in secondary schools in Kericho County, Kenya, based on the functionalism theory. The evaluation research design was employed, targeting students, teachers, and school principals, school boards of management, County director of education, and sub-county directors of education in Kericho County. A combination of probability and non-probability sampling techniques was used to select 568 respondents, including 384 students, 120 teachers, and various school administrators. Data was collected through questionnaires, interviews, and focus group discussions. Quantitative data was analyzed using descriptive statistics, presenting frequencies, percentages, mean, and standard deviation. Qualitative data was analyzed through data coding and narrative analysis, presented using graphs, charts, and tables. Results showed that 81% of respondents indicated the student council was elected by students, although the administration had significant input. Only 16% agreed that there was no administration interference in the council formation process. Most student councils (47%) comprised 21-40 prefects, with only a few (10%) having 1-20 students. Regarding effectiveness, 68% of respondents were comfortable with the council's performance, while 20% believed it was too large to be effective, and 12% considered it too small. In conclusion, the student council structure allows for effective discipline management and conflict resolution. However, the administration's influence in council formation affects its perceived efficacy, leading to student perception of loyalty to the administration. The study recommends government intervention to limit administrative interference in student council formation, fostering true democratic processes in school governance.
APA, Harvard, Vancouver, ISO, and other styles
23

M., Kelani Khadiga, Wafaa Nassar Ahmed M., Wael Talaat, and Samir Morshedy. "Comparative Chomatographic Study for Determination of Dalfampridine and its Derivative in Pharmaceutical Formulations." Der Pharma Chemica 13, no. 1 (2021): 8. https://doi.org/10.5281/zenodo.13643967.

Full text
Abstract:
Two sensitive and precise methods were developed and validated for the simultaneous estimation of dalfampridine and its oxidative degradation product in pharmaceutical formulations without noticeable interference. The techniques adopted were chromatographic (coupled TLC-densitometry and HPLC). Method I: for the chromatographic separation, a Spheri-5 RP C8 (220 × 4.6 mm, 5 μm particle size) column was used with a mobile phase of acetonitrile and 0.05 M KH2PO4 (pH = 5) (65:35 v/v), a flow rate of 1 mL min−1, and UV detection at 298 nm. Method II: densitometric separation of the drugs was performed on aluminium plates precoated with silica gel 60 F254, with a mobile phase of chloroform, acetonitrile and methanol (6:3:1 by volume) and densitometric measurement at 254 nm. The HPLC calibration range was 0.2–6 μg mL−1, and the retention time (Rt) values for the oxidative dalfampridine degradation product and intact dalfampridine were 2.0 ± 0.03 and 3.5 ± 0.02 minutes, respectively. The TLC calibration range was 0.5–6 μg/spot, and the retention values were 0.84 for intact dalfampridine and 0.26 and 0.72 for its degradation products, respectively. The LOD (μg mL−1) was 0.054 and 0.098, and the LOQ (μg mL−1) was 0.179 and 0.325 for HPLC and TLC, respectively. The improved procedures were verified to have perfect precision and consistency in addition to being cost efficient, according to the guideline requirements of the International Conference on Harmonisation (ICH). Statistically, the results were compared with those of the published method, with no considerable difference.
APA, Harvard, Vancouver, ISO, and other styles
24

Demirsoy, Nilüfer, Aysun Türe Yılmaz, and Ömür Şaylıgil. "Nurses' approaches to ethical dilemmas: An example of a public hospital Hemşirelerin etik ikilemlere yaklaşımları: Bir kamu hastanesi örneği." Journal of Human Sciences 15, no. 3 (2018): 1568. http://dx.doi.org/10.14687/jhs.v15i3.5354.

Full text
Abstract:
Aim: In this study, the ethical conflicts faced by nurses working in a secondary health care facility in Eskişehir were investigated with the Nursing Ethical Dilemma Test. Method: This was a descriptive study; 233 nurses working in a secondary health care facility in Eskişehir were reached. The data were evaluated in the SPSS 21.00 statistical program, and Pearson correlation and descriptive statistical techniques were used in the analysis. Findings: The average age was 32.53 ± 6.23. The nurses' mean Principled Thinking score (17.64 ± 11.34), mean Practical Thinking score (6.16 ± 5.07), and mean Familiarity score (13.86 ± 3.91) were all found to be well below average. Conclusion: When the obtained data were analyzed, it was determined that the nurses frequently encountered situations involving ethical problems during their professional lives, but that they were not sufficiently able to take ethical principles into account when deciding on ethical issues, the most basic reason being that, under the influence of environmental factors, their ethical reasoning was not at the desired level. An extended English summary is at the end of the Full Text PDF (Turkish) file.
APA, Harvard, Vancouver, ISO, and other styles
25

Limo, Chepkawai R., Zachary K. Kosgei, and Joseph K. Lelan. "Influence of Student Councils' Involvement in Communication on Management of Public Secondary Schools in Kisii County." International Journal of Research in Education Humanities and Commerce 04, no. 03 (2023): 182–202. http://dx.doi.org/10.37602/ijrehc.2023.4316.

Full text
Abstract:
In the recent past, there have been a large number of secondary school unrests and other forms of indiscipline, and Kisii County in Kenya has had its share. This happens despite the inclusion of student councils in secondary school management. The specific objective of the study was to establish the influence of student councils' involvement in communication between students and the administration on the management of public secondary schools in Kisii County. The study was anchored on functional leadership theory and adopted a mixed-method design. The target population was 140948 respondents comprising 104 principals, 2080 teachers, 1040 student leaders, 137713 students, and 11 Sub-County Directors of Education. The sample size was 1066 respondents comprising 31 principals, 336 teachers, 289 student leaders, 399 students, and 11 Sub-County Directors of Education. Stratified, simple random, and purposive sampling techniques were used to select respondents. Data collection was done through the administration of questionnaires, interviews, and document analysis. Validity was established using expert judgment, while reliability was determined using Cronbach's alpha coefficient. Data analysis was done using descriptive and inferential statistics, such as correlation analysis and multiple regression, with the aid of SPSS V26. From the linear regression model, R² = .525 shows that student councils' involvement in communication accounts for 52.5% of the variation in the management of public schools. The study findings depicted a positive significant effect of student councils' involvement in communication on the management of public schools (β1 = 0.780, p-value < 0.05). Therefore, an increase in student councils' involvement in communication led to an increase in the quality of management of public secondary schools. It was concluded that creating networks and involving student councils in school administration reduces conflicts. School administrations should put in place good communication systems in schools to ensure a smooth two-way flow of information to all students' council members (prefects), students, teachers, and support staff. It is recommended that a sustainable communication link between the students' council and the school administration be developed.
APA, Harvard, Vancouver, ISO, and other styles
26

Fu, Chen, Heming Sun, Zhiqiang Zhang, and Jinjia Zhou. "A Highly Pipelined and Highly Parallel VLSI Architecture of CABAC Encoder for UHDTV Applications." Sensors 23, no. 9 (2023): 4293. http://dx.doi.org/10.3390/s23094293.

Full text
Abstract:
Recently, specifically designed video codecs have been preferred due to the expansion of video data in Internet of Things (IoT) devices. Context Adaptive Binary Arithmetic Coding (CABAC) is the entropy coding module widely used in recent video coding standards such as HEVC/H.265 and VVC/H.266. CABAC is a well known throughput bottleneck due to its strong data dependencies. Because the required context model of the current bin often depends on the results of the previous bin, the context model cannot be prefetched early enough and then results in pipeline stalls. To solve this problem, we propose a prediction-based context model prefetching strategy, effectively eliminating the clock consumption of the contextual model for accessing data in memory. Moreover, we offer multi-result context model update (MCMU) to reduce the critical path delay of context model updates in multi-bin/clock architecture. Furthermore, we apply pre-range update and pre-renormalize techniques to reduce the multiplex BAE’s route delay due to the incomplete reliance on the encoding process. Moreover, to further speed up the processing, we propose to process four regular and several bypass bins in parallel with a variable bypass bin incorporation (VBBI) technique. Finally, a quad-loop cache is developed to improve the compatibility of data interactions between the entropy encoder and other video encoder modules. As a result, the pipeline architecture based on the context model prefetching strategy can remove up to 45.66% of the coding time due to stalls of the regular bin, and the parallel architecture can also save 29.25% of the coding time due to model update on average under the condition that the Quantization Parameter (QP) is equal to 22. At the same time, the throughput of our proposed parallel architecture can reach 2191 Mbin/s, which is sufficient to meet the requirements of 8 K Ultra High Definition Television (UHDTV). Additionally, the hardware efficiency (Mbins/s per k gates) of the proposed architecture is higher than that of existing advanced pipeline and parallel architectures.
APA, Harvard, Vancouver, ISO, and other styles
27

Abhishek Das, Sivaprasad Nadukuru, Saurabh Ashwini kumar Dave, Om Goel, Prof. (Dr.) Arpit Jain, and Dr. Lalit Kumar. "Optimizing Multi-Tenant DAG Execution Systems for High-Throughput Inference." Darpan International Research Analysis 12, no. 3 (2024): 1007–36. http://dx.doi.org/10.36676/dira.v12.i3.139.

Full text
Abstract:
In large-scale data processing and machine learning systems, Directed Acyclic Graphs (DAGs) serve as the backbone for orchestrating complex workflows that involve multiple dependent stages. Multi-tenant DAG execution systems are increasingly being used to handle concurrent workloads from multiple users and applications. However, these systems face significant challenges when it comes to achieving high-throughput inference, particularly in shared environments where resource contention, scheduling efficiency, and tenant isolation become critical concerns. High-throughput inference is a necessity in use cases such as real-time recommendation engines, large-scale data processing pipelines, and cloud-based AI services, where latency and throughput are vital to maintaining system performance. This research paper aims to address the primary challenges associated with optimizing multi-tenant DAG execution systems for high-throughput inference. We begin by analyzing the limitations of existing frameworks such as Apache Airflow, Luigi, and Prefect in multi-tenant environments, focusing on issues like resource contention, inefficient scheduling, and lack of dynamic scalability. To tackle these issues, we propose a set of optimization strategies that include adaptive resource allocation, tenant-aware scheduling, and hybrid execution models that balance between real-time and batch inference. Our first strategy involves dynamic partitioning of resources to prevent contention and ensure fair allocation among tenants based on workload priority and expected resource utilization. This approach is supplemented by intelligent scheduling techniques that leverage cost-based heuristics and priority queues, reducing overall latency and improving system throughput. Additionally, we introduce a hybrid execution model that supports both real-time and batch processing pipelines, enabling flexible execution of diverse workload types in the same shared environment. This allows the system to dynamically switch between real-time and batch modes based on workload characteristics, thereby optimizing resource utilization. To further enhance performance, we propose incorporating memory-aware caching mechanisms that prioritize data locality and reduce redundant data movements between nodes in the DAG. This not only decreases execution time for individual DAG stages but also minimizes I/O overhead, a critical factor in high-throughput systems. These strategies are integrated into a multi-tenant DAG execution framework designed to support various machine learning and data analytics workloads in a cloud-native environment. The effectiveness of our optimizations is evaluated through comprehensive experiments using real-world datasets and synthetic benchmarks, comparing our approach against baseline systems. Our results demonstrate significant improvements in throughput, latency, and scalability, validating the proposed techniques for real-world adoption in multi-tenant DAG execution systems. We also present a case study of applying these optimizations to a large-scale AI inference platform, highlighting the practical benefits and potential challenges of deploying such systems in a production environment. Ultimately, this research provides valuable insights into optimizing DAG execution for high-throughput inference, offering a blueprint for building scalable, efficient, and tenant-aware DAG systems capable of handling diverse and dynamic workloads.
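One of the ingredients mentioned above, tenant-aware scheduling with priority queues, can be illustrated with a small sketch: ready DAG tasks are ordered by tenant priority and then by estimated cost, so a latency-sensitive tenant's work is dequeued before bulk batch tasks. This is a toy model with hypothetical tenant names, not the framework proposed in the paper.

```python
import heapq
import itertools

class TenantAwareScheduler:
    """Toy ready-queue: orders runnable DAG tasks by (tenant priority, cost estimate)
    so one tenant's bulk work cannot starve a latency-sensitive tenant."""

    def __init__(self, tenant_priority):
        self.tenant_priority = tenant_priority   # lower number = more important tenant
        self.heap = []
        self.counter = itertools.count()         # tie-breaker keeps ordering stable

    def submit(self, tenant, task, est_cost):
        key = (self.tenant_priority.get(tenant, 10), est_cost, next(self.counter))
        heapq.heappush(self.heap, (key, tenant, task))

    def next_task(self):
        if not self.heap:
            return None
        _, tenant, task = heapq.heappop(self.heap)
        return tenant, task

sched = TenantAwareScheduler({"realtime-recs": 0, "batch-etl": 5})
sched.submit("batch-etl", "join-orders", est_cost=40)
sched.submit("realtime-recs", "score-user", est_cost=2)
sched.submit("batch-etl", "load-logs", est_cost=10)
print(sched.next_task())   # ('realtime-recs', 'score-user') runs before the batch work
```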
APA, Harvard, Vancouver, ISO, and other styles
28

Katendra, Priyanka, D. Kalaichelvi, and Hemlata Sapha. "A Pre-Experimental Study to Assess the Effectiveness of Structured Teaching Programme on Knowledge Regarding Awareness of POCSO Act Among Adolescent Girls 16-18 Years in a Selected School at Abhanpur, Chhattisgarh." International Journal of Recent Advances in Multidisciplinary Topics 5, no. 5 (2024): 107–8. https://doi.org/10.5281/zenodo.11243785.

Full text
Abstract:
Background of the Study: The POCSO Act, 2012 is a comprehensive law providing for the protection of children from the offences of sexual assault, sexual harassment, and pornography, while safeguarding the interest of the child at every stage of the judicial process by incorporating child-f […]. "A pre-experimental study to assess the effectiveness of structured teaching programme on knowledge regarding awareness of POCSO Act among adolescent girls 16-18 years in a selected school at Abhanpur, Chhattisgarh." The aim of this study was to enhance the knowledge of adolescent girls regarding awareness of the POCSO Act. The study established three objectives: first, to assess the pre-test and post-test knowledge scores regarding awareness of the POCSO Act among adolescent girls aged 16-18 years in a selected school at Abhanpur, Chhattisgarh; second, to evaluate the effectiveness of the structured teaching programme on knowledge regarding awareness of the POCSO Act among adolescent girls (16-18 years) in the selected school; and third, to find out the association between the pre-test knowledge score regarding awareness of the POCSO Act and the socio-demographic variables of the adolescent girls. The research approach used for the study was a quantitative approach with a pre-experimental research design; the research setting was a higher secondary school at Abhanpur, and the target population was adolescent girls aged 16-18 years. Using non-probability purposive sampling, a sample of 60 was selected. A self-structured questionnaire with nine socio-demographic items and fourteen content items was formulated, and the tool was validated by seven experts. Data were collected with the self-structured questionnaire and analyzed using descriptive and inferential statistics. The reliability of the tool, determined using Karl Pearson's test-retest method, was 0.84, which indicates perfect reliability. The findings of the study revealed a marked increase in post-test knowledge: the post-test mean score was 19.23 and the pre-test mean score was 13.13, a mean difference of 6.1, which reflects that the structured teaching programme was effective. Hence it is concluded that the overall post-test mean knowledge score (19.25) is greater than the overall pre-test knowledge score (13.13), with a knowledge gain of 15.25 after the structured teaching programme on the POCSO Act. The calculated "t" value of 11.17 was greater than the table value of 3.46 at the p<0.001 level of significance, which shows the effectiveness of the structured teaching programme. On applying the chi-square test, the demographic variables family monthly income, occupation of father, and number of family members had chi-square values of 38.49, 14.66, and 9.55, greater than the table value of 12.59 at the 0.05 level of significance. Hence hypothesis H2 was accepted with regard to these variables, i.e., family monthly income, occupation of father, and number of family members.
APA, Harvard, Vancouver, ISO, and other styles
29

Ghazali, Rana, and Douglas G. Down. "Smart Data Prefetching Using KNN to Improve Hadoop Performance." ICST Transactions on Scalable Information Systems 12, no. 3 (2025). https://doi.org/10.4108/eetsis.9110.

Full text
Abstract:
Hadoop is an open-source framework that enables the parallel processing of large data sets across a cluster of machines. It faces several challenges that can lead to poor performance, such as I/O operations, network data transmission, and high data access time. In recent years, researchers have explored prefetching techniques to reduce the data access time as a potential solution to these problems. Nevertheless, several issues must be considered to optimize the prefetching mechanism. These include launching the prefetch at an appropriate time to avoid conflicts with other operations and minimize waiting time, determining the amount of prefetched data to avoid overload and underload, and placing the prefetched data in locations that can be accessed efficiently when required. In this paper, we propose a smart prefetch mechanism that consists of three phases designed to address these issues. First, we enhance the task progress rate to calculate the optimal time for triggering prefetch operations. Next, we utilize K-Nearest Neighbor clustering to identify which data blocks should be prefetched in each round, employing the data locality feature to determine the placement of prefetched data. Our experimental results demonstrate that our proposed smart prefetch mechanism improves job execution time by an average of 28.33% by increasing the rate of local tasks.
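As a rough picture of the block-selection step, the sketch below ranks candidate blocks by their distance to the k nearest recently accessed blocks in a simple feature space and prefetches the closest ones. The features, distance metric, and parameters are invented for illustration; the paper's actual KNN-based mechanism, trigger-time calculation, and placement policy are not reproduced here.

```python
import math

def knn_prefetch_candidates(accessed, candidates, k=3, top_n=2):
    """Toy selection: rank candidate blocks by their average distance to the
    k nearest recently-accessed blocks, and pick the closest ones to prefetch."""
    scored = []
    for block_id, feat in candidates.items():
        nearest = sorted(math.dist(feat, f) for f in accessed.values())[:k]
        scored.append((sum(nearest) / len(nearest), block_id))
    return [block_id for _, block_id in sorted(scored)[:top_n]]

# features: (file offset in GB, last-access recency in s) -- purely illustrative
accessed   = {"b12": (1.00, 5), "b13": (1.06, 4), "b14": (1.12, 2)}
candidates = {"b15": (1.18, 0), "b40": (4.50, 0), "b16": (1.25, 0)}
print(knn_prefetch_candidates(accessed, candidates))   # ['b15', 'b16']
```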
APA, Harvard, Vancouver, ISO, and other styles
30

Chtourou, Sofien, Mohamed Chtourou, and Omar Hammami. "Performance Evaluation of Neural Network Prediction for Data Prefetching in Embedded Applications." International Journal of Information, Control and Computer Sciences 1, no. 12 (2007). https://doi.org/10.5281/zenodo.1331439.

Full text
Abstract:
Embedded systems need to respect stringent real-time constraints. Various hardware components included in such systems, such as cache memories, exhibit variability and therefore affect execution time. Indeed, a cache memory access from an embedded microprocessor might result in a cache hit, where the data is available, or in a cache miss, in which case the data must be fetched from an external memory with an additional delay. It is therefore highly desirable to predict future memory accesses during execution in order to prefetch data appropriately without incurring delays. In this paper, we evaluate the potential of several artificial neural networks for the prediction of instruction memory addresses. Neural networks have the potential to tackle the nonlinear behavior observed in memory accesses during program execution, and their numerous demonstrated hardware implementations favor this choice over traditional forecasting techniques for inclusion in embedded systems. However, embedded applications execute millions of instructions and therefore produce millions of addresses to be predicted. This very challenging problem of neural-network-based prediction of large time series is approached in this paper by evaluating various neural network architectures based on the recurrent neural network paradigm, with pre-processing based on the Self-Organizing Map (SOM) classification technique.
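The paper evaluates recurrent networks with SOM pre-processing; as a much simpler point of comparison, the sketch below implements a classic delta-history table predictor over an instruction-address stream. It is a conventional baseline, not the neural approach above, and the trace is synthetic.

```python
# Simple delta-history address predictor (a conventional baseline, not the
# SOM/recurrent-network approach evaluated in the paper). It learns which
# address delta tends to follow the two most recent deltas.
from collections import defaultdict, Counter

def predict_stream(addresses):
    table = defaultdict(Counter)      # (delta[-2], delta[-1]) -> next-delta counts
    deltas, correct = [], 0
    for prev, curr in zip(addresses, addresses[1:]):
        delta = curr - prev
        if len(deltas) >= 2:
            hist = (deltas[-2], deltas[-1])
            predicted = table[hist].most_common(1)
            if predicted and prev + predicted[0][0] == curr:
                correct += 1
            table[hist][delta] += 1
        deltas.append(delta)
    return correct / max(1, len(addresses) - 3)

# Synthetic access pattern: a strided loop with an occasional jump.
trace, addr = [], 0x1000
for i in range(200):
    trace.append(addr)
    addr += 64 if i % 10 else 4096
print(f"prediction accuracy: {predict_stream(trace):.2f}")
```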
APA, Harvard, Vancouver, ISO, and other styles
31

Huang, Dong, Dan Feng, Qiankun Liu, et al. "SplitZNS: Towards an Efficient LSM-tree on Zoned Namespace SSDs." ACM Transactions on Architecture and Code Optimization, July 10, 2023. http://dx.doi.org/10.1145/3608476.

Full text
Abstract:
The Zoned Namespace (ZNS) Solid State Drive (SSD) is a nascent form of storage device that offers novel prospects for the Log Structured Merge Tree (LSM-tree). ZNS exposes erase blocks in SSD as append-only zones, enabling the LSM-tree to gain awareness of the physical layout of data. Nevertheless, LSM-tree on ZNS SSDs necessitates Garbage Collection (GC) owing to the mismatch between the gigantic zones and relatively small Sorted String Tables (SSTables). Through extensive experiments, we observe that a smaller zone size can reduce data migration in GC at the cost of a significant performance decline owing to inadequate parallelism exploitation. In this paper, we present SplitZNS, which introduces small zones by tweaking the zone-to-chip mapping to maximize GC efficiency for LSM-tree on ZNS SSDs. Following the multi-level peculiarity of LSM-tree and the inherent parallel architecture of ZNS SSDs, we propose a number of techniques to leverage and accelerate small zones to alleviate the performance impact due to underutilized parallelism. (1) First, we use small zones selectively to prevent exacerbating write slowdowns and stalls due to their suboptimal performance. (2) Second, to enhance parallelism utilization, we propose SubZone Ring, which employs a per-chip FIFO buffer to imitate a large zone writing style; (3) Read Prefetcher, which prefetches data concurrently through multiple chips during compactions; (4) and Read Scheduler, which assigns query requests the highest priority. We build a prototype integrated with SplitZNS to validate its efficiency and efficacy. Experimental results demonstrate that SplitZNS achieves up to 2.77x performance and reduces data migration considerably compared to the lifetime-based data placement.
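As a purely conceptual sketch of the SubZone Ring idea summarised above, the snippet below buffers writes in a per-chip FIFO and flushes them to that chip's append-only "small zones" in order. The chip count, flush threshold, and page-to-chip mapping are invented for illustration and do not reflect the SplitZNS code.

```python
# Conceptual sketch of a per-chip FIFO write buffer imitating a large-zone
# write style over many small zones (illustrative; not the SplitZNS code).
from collections import deque

NUM_CHIPS = 4
FLUSH_THRESHOLD = 8          # pages buffered per chip before flushing

class SubZoneRing:
    def __init__(self):
        self.fifos = [deque() for _ in range(NUM_CHIPS)]
        self.zones = [[] for _ in range(NUM_CHIPS)]   # append-only "small zones"

    def write(self, page_id, payload):
        chip = page_id % NUM_CHIPS                    # static page-to-chip mapping
        self.fifos[chip].append((page_id, payload))
        if len(self.fifos[chip]) >= FLUSH_THRESHOLD:
            self._flush(chip)

    def _flush(self, chip):
        # Drain the FIFO sequentially so each chip sees large, ordered appends.
        while self.fifos[chip]:
            self.zones[chip].append(self.fifos[chip].popleft())

ring = SubZoneRing()
for p in range(32):
    ring.write(p, b"data")
print([len(z) for z in ring.zones])  # e.g. [8, 8, 8, 8]
```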
APA, Harvard, Vancouver, ISO, and other styles
32

P., Sengottuvelan, and Gopalakrishnan T. "Efficient Web Usage Mining Based on K-Medoids Clustering Technique." November 2, 2015. https://doi.org/10.5281/zenodo.1110664.

Full text
Abstract:
Web usage mining is the application of data mining techniques to discover usage patterns from web log data, so as to capture the required patterns and serve the requirements of web-based applications. Users' experience on the internet may be improved by minimizing web access latency; this can be done by predicting the pages a user is likely to request next so that they can be prefetched and cached. Therefore, to enhance the quality of web services, it is necessary to study users' web navigation behaviour, which is achieved by modelling web navigation history. We propose a technique that clusters user sessions based on the K-medoids algorithm.
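To make the clustering step concrete, here is a minimal, naive k-medoids pass over toy session vectors. The session encoding, distance measure, and data are assumptions made for the example and are not taken from the paper.

```python
# Naive k-medoids clustering of user-session vectors (toy example; the session
# encoding and data below are invented, not taken from the paper).
from math import dist

sessions = [  # e.g. per-session visit counts for three page categories
    (5, 0, 1), (4, 1, 0), (6, 0, 2),      # "product browsing" like sessions
    (0, 7, 1), (1, 6, 0), (0, 8, 2),      # "news reading" like sessions
]

def kmedoids(points, k=2, iters=10):
    medoids = list(points[:k])
    for _ in range(iters):
        # Assign each point to its nearest medoid.
        clusters = {m: [] for m in medoids}
        for p in points:
            clusters[min(medoids, key=lambda m: dist(m, p))].append(p)
        # In each cluster, pick the member minimising total distance to the rest.
        new_medoids = [
            min(members, key=lambda c: sum(dist(c, p) for p in members))
            for members in clusters.values() if members
        ]
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return medoids

print(kmedoids(sessions))  # two medoid sessions, one per behaviour group
```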
APA, Harvard, Vancouver, ISO, and other styles
33

Gellert, Arpad. "Web Usage Mining by Neural Hybrid Prediction with Markov Chain Components." Journal of Web Engineering, July 19, 2021. http://dx.doi.org/10.13052/jwe1540-9589.2053.

Full text
Abstract:
This paper presents and evaluates a two-level web usage prediction technique, consisting of a neural network in the first level and contextual component predictors in the second level. We used Markov chains of different orders as contextual predictors to anticipate the next web access based on the specific web access history. The role of the neural network is to decide, based on previous behaviour, which predictor's output to use. The predicted web resources are then prefetched into the cache of the browser. In this way, we considerably increase the hit rate of the web browser, which shortens load times. We have determined the optimal configuration of the proposed hybrid predictor on a real dataset and compared it with other existing web prefetching techniques in terms of prediction accuracy. The best configuration of the proposed neural hybrid method provides an average web access prediction accuracy of 86.95%.
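As a small illustration of the two-level idea, the sketch below trains order-1 and order-2 Markov predictors over a toy page sequence and, in place of the neural selector described in the paper, uses a simple "prefer the higher-order predictor when its context has been seen" rule. The data and the selection rule are assumptions for the example.

```python
# Order-1 and order-2 Markov next-page predictors with a trivial selector
# (the paper uses a neural network to choose between predictors; here a simple
# "use the higher order if its context was seen" rule stands in for it).
from collections import defaultdict, Counter

pages = ["home", "news", "sports", "news", "sports", "home", "news", "sports"]

order1 = defaultdict(Counter)   # last page      -> next-page counts
order2 = defaultdict(Counter)   # last two pages -> next-page counts
for i in range(len(pages) - 1):
    order1[pages[i]][pages[i + 1]] += 1
    if i >= 1:
        order2[(pages[i - 1], pages[i])][pages[i + 1]] += 1

def predict_next(history):
    """Prefer the order-2 predictor when its context has been observed."""
    ctx2 = tuple(history[-2:])
    if len(history) >= 2 and order2[ctx2]:
        return order2[ctx2].most_common(1)[0][0]
    if order1[history[-1]]:
        return order1[history[-1]].most_common(1)[0][0]
    return None

print(predict_next(["home", "news"]))   # likely 'sports' -> candidate to prefetch
```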
APA, Harvard, Vancouver, ISO, and other styles
34

Zhang, Yucheng, Wenbin Zeng, Hong Jiang, et al. "An Efficient Delta Compression Framework Seamlessly Integrated into Inline Deduplication." ACM Transactions on Storage, March 5, 2025. https://doi.org/10.1145/3721485.

Full text
Abstract:
Delta compression can complement data deduplication by further minimizing redundancy through the compression of non-duplicate data chunks. When adding delta compression to deduplication-based backup systems, however, two primary challenges arise that degrade performance of inline deduplication. First, extra I/Os are introduced along the critical paths of backup and restoration for retrieving base chunks, slowing the system. Second, rewriting techniques prohibit specific data chunks from serving as base chunks for delta compression to improve restore performance, resulting in a loss of compression efficiency. In this paper, we introduce LoopDelta, a framework that seamlessly integrates delta compression into inline deduplication for backup storage, addressing the aforementioned challenges by using three techniques: (1) dual-locality-based similarity tracking leverages both logical and physical locality to detect most of the similar chunks, which, due to their locality, can be prefetched by piggybacking on routine operations during deduplication, thereby eliminating extra I/Os during backup; (2) cache-aware filter identifies base chunks requiring extra I/Os during restore and prevents their referencing, thus eliminating extra restore I/Os; and (3) inversed delta compression, which reverses the roles of base and target chunks in the traditional delta compression approach, thereby allowing for the delta compression of data chunks that are otherwise prohibited as base chunks due to rewriting techniques. Experiments show that LoopDelta increases the compression ratio by 1.28 to 11.33 times over basic deduplication, without significantly affecting backup throughput, and enhances restore performance by up to 3.57 times.
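To ground the notion of delta compression against a base chunk, the snippet below derives a copy/insert delta between two byte chunks with Python's difflib and reconstructs the target from the base plus the delta. It is a didactic stand-in, not LoopDelta's actual encoder, and the chunk contents are made up.

```python
# Didactic copy/insert delta between a base chunk and a similar target chunk
# (a stand-in for a real delta encoder used alongside deduplication).
from difflib import SequenceMatcher

def make_delta(base: bytes, target: bytes):
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, base, target).get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))           # copy base[i1:i2]
        else:                                      # 'replace', 'insert', 'delete'
            ops.append(("insert", target[j1:j2]))  # store literal target bytes
    return ops

def apply_delta(base: bytes, ops) -> bytes:
    out = bytearray()
    for op in ops:
        if op[0] == "copy":
            out += base[op[1]:op[2]]
        else:
            out += op[1]
    return bytes(out)

base = b"block-header|payload-AAAA|trailer"
target = b"block-header|payload-BBBB|trailer|extra"
delta = make_delta(base, target)
assert apply_delta(base, delta) == target
print(delta)
```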
APA, Harvard, Vancouver, ISO, and other styles
35

Meyer, Alexander, Alex Mullen, and Roger Tomlin. "Slavery on the Northern Frontier: A Stylus Tablet from Vindolanda." Britannia, January 15, 2025, 1–23. https://doi.org/10.1017/s0068113x24000230.

Full text
Abstract:
A Roman stylus tablet discovered at Vindolanda in 2014 preserves the partial text of a deed-of-sale for an enslaved person, only the second such document from Britain. This article presents the results of multiple techniques used to reveal the almost illegible text and proposes a restoration of the format of the document and its lost content, based on more complete examples from Italy and around the Empire. We examine the late first-century archaeological and historical context and suggest that the purchaser is probably the prefect Iulius Verecundus. We consider other possible evidence for the servi of the commanders at Vindolanda, for example in another hard-to-decipher stylus tablet which may be related to their travel. The deed-of-sale provides a new type of testimony for slavery at Vindolanda and adds to knowledge of enslavement in the Roman military.
APA, Harvard, Vancouver, ISO, and other styles
36

Dr., Sara Talal Mohammed Musallam. "AIR POLISHING FOR DECONTAMINATION IN PERI-IMPLANTITIS." June 16, 2020. https://doi.org/10.5281/zenodo.3897209.

Full text
Abstract:
Background: Peri-implant disease is a condition of pathological inflammation that develops in the tissue surrounding a load-bearing implant and can lead to loss of the implant itself. Anaerobic plaque bacteria are considered a risk factor with negative impacts on peri-implant tissue health, leading to peri-implantitis. Air polishing was first introduced as an alternative technique for removing biofilm and supragingival extrinsic stain. Aims: To define the air polishing technique, indicate the differences between peri-implant mucositis and peri-implantitis and the risk factors related to them, and evaluate the effectiveness of glycine powder air polishing in the treatment of peri-implantitis. Methodology: A systematic review exploring different medical online databases for studies on peri-implantitis and air polishing techniques published from 2000 until 2019. Results: 10 articles were included in the qualitative synthesis of the present review: three were systematic reviews, two were articles, one was an observational clinical trial, and four were randomized clinical trials. Conclusion: Peri-implantitis is one of the most common complications of implants and has a biological source, plaque being its most important risk factor. Its occurrence is a critical risk to the success of the implant and can cause implant loss if untreated. Different techniques are used to decontaminate the implant to prevent infection and to treat peri-implantitis through complete debridement and removal of the bacterial biofilm. Air polishing appears to have a positive effect on improving the oral hygiene of the tissue surrounding the implant and on decontaminating the implant of further plaque. Moreover, air polishing with glycine powder was found to be an effective way to remove plaque, treat inflammation, improve oral hygiene, and reduce bleeding on probing (BOP), with minimal side effects and excellent patient compliance. However, we recommend further investigation to evaluate powders other than glycine and their effect on peri-implantitis, as well as the different factors affecting the efficiency of air polishing and the factors that affect the success of the implant itself.
APA, Harvard, Vancouver, ISO, and other styles
37

Sabina, Maharjan. "FACTORS AFFECTING CAREER PROMOTION OF EMPLOYEES IN BANKING SECTOR OF NEPAL." August 19, 2021. https://doi.org/10.5281/zenodo.8358044.

Full text
Abstract:
Introduction: Banking careers are limelight opportunities in the Nepalese commercial sector. The banking sector is the economic backbone of Nepal, and banking employees are the pillars of this foundation, so their needs have to be addressed in a timely and proper manner. Among the various necessities of a banking job, career promotion is one essential factor. However, the large volume of employee turnover reveals a lack of attention to employee skill development. This limited focus on banking employees' career development has increased the banking turnover ratio, which ultimately reduces banking performance. Thus, an in-depth study of the factors affecting the career promotion of employees in the banking sector of Nepal is needed. Attention to career promotion can help retain employees for longer tenures in the corporate sector, which helps maximize the profitability of the banking sector. Objectives: A persistent problem for performance enhancement in the banking sector has been the movement of employees in and out of banks. To overcome this problem, employees have to be motivated and promoted in a timely fashion. Thus, the main aim of the research is to explain the factors affecting the career promotion of employees in the banking sector of Nepal. Design: This research concerns the factors influencing the career promotion of employees in the Nepalese banking sector, a primary topic of discussion in the aftermath of the global pandemic. The research adopts a quantitative research design. To establish the ground reality, the researcher used a self-designed questionnaire, and the data were analysed with various statistical tools to examine the findings from the employees' perspective; the results were presented through charts and graphs. Findings: Banks play a crucial role in driving economic progress in developing countries. The banking sector, being a limelight sector for new job seekers, has to maintain its standing through proper employee development practices; the need for skill development and career promotion is therefore inherent in the sector. This research reveals a significant relationship between employees' career growth and the enhancement of banking performance, and it identifies the core factors that support the career promotion of banking employees and can thereby strengthen banking success in Nepal. Practical Implication: The research helps define the means for enhancing employee efficiency in the banking sector through the identification of core factors of career growth. It takes a wide angle on understanding employee needs in the banking sector of Nepal, is relevant to banking growth in the current aftermath of the pandemic, and helps draw the attention of policy makers to the necessity of understanding employees in the current corporate world. Originality/Value: This paper is based on primary research carried out through field-based work in which data were collected with a self-structured questionnaire.
Since the research was conducted through a field survey of the banking sector, it is original and highly reliable.
APA, Harvard, Vancouver, ISO, and other styles