Academic literature on the topic 'Key-value database'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Key-value database.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Key-value database"

1

Osemwegie, Omoruyi, Kennedy Okokpujie, Nsikan Nkordeh, Charles Ndujiuba, Samuel John, and Uzairue Stanley. "Performance Benchmarking of Key-Value Store NoSQL Databases." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 6 (2018): 5333. http://dx.doi.org/10.11591/ijece.v8i6.pp5333-5341.

Full text
Abstract:
Increasing requirements for scalability and elasticity of data storage for web applications have made Not Structured Query Language (NoSQL) databases increasingly valuable to web developers. One such NoSQL database solution is Redis. A budding alternative to Redis is the SSDB database, which is also a key-value store but is disk-based. The aim of this research work is to benchmark both databases (Redis and SSDB) using the Yahoo Cloud Serving Benchmark (YCSB). YCSB is a platform that has been used to compare and benchmark similar NoSQL database systems. Both databases were given variable workloads to identify the throughput of all given operations. The results obtained show that SSDB gives better throughput than Redis for the majority of operations.
APA, Harvard, Vancouver, ISO, and other styles
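As a rough, hedged illustration of the kind of measurement the benchmark above reports, the sketch below times simple set/get operations against a local Redis instance using the redis-py client. It is not YCSB, and the host, port, key format, and operation counts are assumptions made for the example.

```python
# Minimal key-value throughput sketch (not YCSB): times N set/get operations
# against a local Redis server. Assumes redis-py is installed and a Redis
# instance is listening on localhost:6379.
import time
import redis

def measure_throughput(client, n_ops=10_000):
    """Return approximate operations per second for n_ops writes then n_ops reads."""
    start = time.perf_counter()
    for i in range(n_ops):
        client.set(f"user:{i}", f"value-{i}")   # write workload
    for i in range(n_ops):
        client.get(f"user:{i}")                 # read workload
    elapsed = time.perf_counter() - start
    return (2 * n_ops) / elapsed

if __name__ == "__main__":
    r = redis.Redis(host="localhost", port=6379, db=0)
    print(f"approx. throughput: {measure_throughput(r):.0f} ops/s")
```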
2

Zhou, Peng, Mei Li, Jing Huang, and Hua Fang. "Research on Database Schema Comparison of Relational Databases and Key-Value Stores." Advanced Materials Research 1049-1050 (October 2014): 1860–63. http://dx.doi.org/10.4028/www.scientific.net/amr.1049-1050.1860.

Full text
Abstract:
With the rapid development of Internet technology, traditional relational databases become relatively inefficient when facing the access and processing of big data. As a kind of non-relational database, key-value stores, with their high scalability, provide an efficient solution to the problem. This article introduces the concept and features of key-value stores, compares them with traditional relational databases, illustrates a typical application with an example, and finally summarizes the existing problems of key-value stores.
APA, Harvard, Vancouver, ISO, and other styles
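To make the schema comparison described above concrete, the hedged sketch below stores the same record once as a row in an SQLite table and once as a key-value pair holding an opaque JSON value. The table name, key format, and fields are illustrative assumptions, not taken from the article.

```python
# Same record modeled two ways: a relational row (SQLite) and a key-value pair.
import json
import sqlite3

record = {"id": 42, "name": "Ada", "city": "Beijing"}

# Relational: fixed schema, queried with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?, ?)",
             (record["id"], record["name"], record["city"]))
row = conn.execute("SELECT name, city FROM users WHERE id = ?", (42,)).fetchone()

# Key-value: schema-less, the whole record is an opaque value behind one key.
kv_store = {}
kv_store[f"user:{record['id']}"] = json.dumps(record)
value = json.loads(kv_store["user:42"])

print(row, value)
```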
3

Iliakis, Konstantinos, Konstantina Koliogeorgi, Antonios Litke, Theodora Varvarigou, and Dimitrios Soudris. "GPU accelerated blockchain over key‐value database transactions." IET Blockchain 2, no. 1 (2022): 1–12. http://dx.doi.org/10.1049/blc2.12011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dourhri, Ahmed, Mohamed Hanine, and Hassan Ouahmane. "KVMod—A Novel Approach to Design Key-Value NoSQL Databases." Information 14, no. 10 (2023): 563. http://dx.doi.org/10.3390/info14100563.

Full text
Abstract:
The growth of structured, semi-structured, and unstructured data produced by new applications is a result of the development and expansion of social networks, the Internet of Things, web technology, mobile devices, and other technologies. However, as traditional databases became less suitable for managing the rapidly growing quantity of data and variety of data structures, a new class of database management systems named NoSQL was required to satisfy the new requirements. Although NoSQL databases are generally schema-less, significant research has been conducted on their design. The literature review presented in this paper supports the need for modeling techniques that describe how to structure data in NoSQL databases. Key-value is one of the NoSQL families that has received little attention, especially in terms of its design methodology; most studies have focused on the other families, like column-oriented and document-oriented. This paper aims to present a design approach named KVMod (key-value modeling) specific to key-value databases. The purpose is to provide the scientific community and engineers with a methodology for the design of key-value stores using the maximum automation and therefore the minimum human intervention, which means the minimum number of errors. A software tool called KVDesign has been implemented to automate the proposed methodology and, thus, the most time-consuming database modeling tasks. The complexity is also discussed to assess the efficiency of our proposed algorithms.
APA, Harvard, Vancouver, ISO, and other styles
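The sketch below gives a hedged flavor of query-driven key design for a key-value store: keys are composed from the access paths the application needs, so each frequent query becomes a single lookup. The key format and entities are illustrative assumptions; the actual KVMod methodology and the KVDesign tool are described in the paper itself.

```python
# Hedged illustration of query-driven key design for a key-value store:
# each frequently executed query is served by a single, precomputed key.
store = {}

def put_order(customer_id, order_id, order):
    # Primary access path: fetch one order directly by id.
    store[f"order:{order_id}"] = order
    # Secondary access path: list a customer's order ids without scanning.
    store.setdefault(f"customer:{customer_id}:orders", []).append(order_id)

def orders_for_customer(customer_id):
    ids = store.get(f"customer:{customer_id}:orders", [])
    return [store[f"order:{oid}"] for oid in ids]

put_order("c1", "o100", {"total": 25.0})
put_order("c1", "o101", {"total": 12.5})
print(orders_for_customer("c1"))
```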
5

송경태 and Sanghyun Park. "A Recent Trend of Database for Big Data Handling using Key-value database." Journal of Knowledge Information Technology and Systems 12, no. 1 (2017): 47–57. http://dx.doi.org/10.34163/jkits.2017.12.1.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Jiangning, Congtian Lin, Yan Han, and Liqiang Ji. "Discussion of the Method for Constructing Animal Traits." Biodiversity Information Science and Standards 2 (April 25, 2018): e26168. https://doi.org/10.3897/biss.2.26168.

Full text
Abstract:
Trait data in biology can be extracted from text and structured for reuse within and across taxa. For example, body length is one trait applicable to many species and "body length is about 170 cm" is one trait data point for the human species. Trait data can be used in more detailed analyses to describe species evolution and development processes, so it has begun to be valued by more than taxonomists. The EOL (Encyclopedia of Life) TraitBank provides an example of a trait database. Current trait databases are in their infancy. Most are based on morphological data such as shape, color, structural and sexual characteristics. In fact, some data such as behavioral and biological characteristics may be similarly included in trait databases. To build a trait database we constructed a list of controlled vocabulary to record the states of various terms. These terms may exhibit common characteristics: they can be grouped as conceptual (subject) and descriptive (delimiter) terms. For example, in "the shoulder height is 65–70 cm", "shoulder height" is the conceptual term and "65–70 cm" is the descriptive term. Conceptual terms may be part of an interdependent hierarchical structure. Examples in morphology, physiology and conservation or protection status demonstrate how parts or systems may be broken into smaller measurable (quantifiable) or enumerable pieces. Descriptive terms will modify or delimit parameters of conceptual terms. These may be numerical with distinguishing units, counts, or other adjectives, or enumerable with special nouns. Although controlled vocabularies about animals are complex, they can be normalized using RDF (Resource Description Framework) and OWL (Web Ontology Language) standards. Next, we extract traits from two main types of existing descriptions: tabular data, which is more easily digested by machine, and descriptive text, which is complex. Pure text often needs to be extracted manually or by NLP (computerized natural language processing); sometimes machine learning methods can be used. Moreover, different human languages may demand different extraction methods. Because the number of recordable traits exceeds current collection records, the database structure should be optimized for retrieval speed. For this reason, key-value databases are more suitable for storage of trait data than relational databases. EOL used the database Virtuoso for TraitBank, which is a non-relational database. Using existing mature tools and standards of ontology, we can construct a preliminary work-flow for animal trait data, but some tools and specifications for data analysis and use need to await additional data accumulation.
APA, Harvard, Vancouver, ISO, and other styles
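As a hedged illustration of why key-value storage suits sparse trait data, the sketch below keys each trait observation by taxon and conceptual term, so only recorded traits occupy space and lookups need no fixed schema. The key layout and example values are assumptions made for the example, not TraitBank's actual model.

```python
# Sparse trait data in a key-value layout: one key per (taxon, conceptual term),
# with the descriptive term (value plus unit) stored as the value.
traits = {}

def record_trait(taxon, conceptual_term, descriptive_term):
    traits[f"{taxon}:{conceptual_term}"] = descriptive_term

record_trait("Homo sapiens", "body length", "about 170 cm")
record_trait("Canis lupus", "shoulder height", "65–70 cm")

# Direct lookup; taxa without a given trait simply have no key.
print(traits.get("Homo sapiens:body length"))
print(traits.get("Homo sapiens:shoulder height"))  # -> None
```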
7

Liu, Qian. "A High Performance Memory Key-Value Database Based on Redis." Journal of Computers 14, no. 3 (2019): 170–83. http://dx.doi.org/10.17706/jcp.14.3.170-183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kelly, Terence. "Crashproofing the Original NoSQL Key-Value Store." Queue 19, no. 4 (2021): 5–18. http://dx.doi.org/10.1145/3487019.3487353.

Full text
Abstract:
This episode of Drill Bits unveils a new crash-tolerance mechanism that vaults the venerable gdbm database into the league of transactional NoSQL data stores. We'll motivate this upgrade by tracing gdbm's history. We'll survey the subtle science of crashproofing, navigating a minefield of traps for the unwary. We'll arrive at a compact and rugged design that leverages modern file-system features, and we'll tour the production-ready implementation of this design and its ergonomic interface.
APA, Harvard, Vancouver, ISO, and other styles
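For readers unfamiliar with gdbm's interface, the hedged sketch below shows the basic key-value API through Python's standard dbm module, which uses gdbm where available. It only illustrates ordinary reads and writes; the crash-tolerance mechanism the article introduces is not demonstrated here.

```python
# Basic gdbm-style key-value usage via Python's stdlib dbm module.
# Shows the ordinary API only; the article's crash-tolerance mechanism
# is a separate, gdbm-specific feature not demonstrated in this sketch.
import dbm

with dbm.open("example_store", "c") as db:   # 'c' creates the file if needed
    db[b"user:1"] = b"alice"                 # keys and values are bytes
    db[b"user:2"] = b"bob"
    print(db[b"user:1"])                     # b'alice'
    print(b"user:3" in db)                   # False
```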
9

Fruth, Michael, and Stefanie Scherzinger. "Live Patching for Distributed In-Memory Key-Value Stores." Proceedings of the ACM on Management of Data 2, no. 6 (2024): 1–26. https://doi.org/10.1145/3698816.

Full text
Abstract:
Providers of high-availability data stores need to roll out software updates without causing noticeable downtimes. For distributed data stores like Redis Cluster, the state-of-the-art is a rolling update, where the nodes are restarted in sequence. This requires preserving, restoring, and resynchronizing the database state, which can significantly prolong updates for larger memory states, and thus delay critical security fixes. In this article, we propose applying software updates directly in memory without restarting any nodes. We present the first fully operational live patching solution for Redis Cluster on Linux. We support both push- and pull-based distribution of patches, trading dissemination speed against cluster elasticity, the ability to allow nodes to dynamically join or leave the cluster. Our integration is very lightweight, as it piggybacks on the cluster-internal gossip protocol. Our experiments benchmark live patching against state-of-the-art rolling updates. In one scenario, live patching updates the entire cluster orders of magnitude faster, without unfavorable trade-offs regarding throughput, tail latencies, or network consumption. To showcase generalizability, we provide general guidelines on integrating live patching for distributed database systems and successfully apply them to a primary-replica PostgreSQL setup. Given our overall promising results, we discuss the opportunities of live patching in database DevOps.
APA, Harvard, Vancouver, ISO, and other styles
10

Malykh, Mikhail D., Anton L. Sevastianov, and Leonid A. Sevastianov. "About Symbolic Integration in the Course of Mathematical Analysis." Computer tools in education, no. 4 (December 28, 2019): 94–106. http://dx.doi.org/10.32603/2071-2340-2019-4-94-106.

Full text
Abstract:
The work of transforming a database from one format to another periodically arises in different organizations for various reasons. Today, the mechanism for changing the format of relational databases is well developed. But with the advent of new types of databases such as NoSQL, this problem was exacerbated due to the radical difference in the way data is organized. This article discusses a formalized method, based on set theory, for choosing the number and composition of collections for a key-value type database. The initial data are the properties of the objects, information about which is stored in the database, and the set of queries that are most frequently executed or whose speed should be maximized. The considered method can be applied not only when creating a new key-value database, but also when transforming an existing one, when moving from relational databases to NoSQL, or when consolidating databases.
APA, Harvard, Vancouver, ISO, and other styles
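The sketch below is a hedged, simplified illustration of the idea described above: starting from object properties and the most frequent queries, properties that are requested together are grouped into one collection so that each query touches a single key. The grouping rule here is a naive stand-in for the article's set-theoretic method, and the property names and queries are assumptions for the example.

```python
# Naive illustration: group properties into key-value "collections" so that
# each frequent query can be answered from a single collection.
from collections import defaultdict

properties = {"name", "email", "address", "last_login", "preferences"}
frequent_queries = [
    {"name", "email"},             # e.g. rendering a contact card
    {"last_login", "preferences"}  # e.g. loading a session
]

collections = []
remaining = set(properties)
for query in frequent_queries:
    group = query & remaining      # properties not yet assigned elsewhere
    if group:
        collections.append(group)
        remaining -= group
if remaining:                      # properties no frequent query asks for
    collections.append(remaining)

print(collections)
```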

Dissertations / Theses on the topic "Key-value database"

1

Rose, Kyle R. (Kyle Robert) 1976. "Asynchronous generic key/value database." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86633.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hsiue, Kevin D. "FPGA-based hardware acceleration for a key-value store database." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91829.

Full text
Abstract:
As the modern world processes larger amounts of data and demand increases for better transaction performance, a growing number of innovative solutions in the field of database technologies have been discovered and implemented. In software and systems design, the development and deployment of the NoSQL (Not Only SQL) database challenged yesterday's relational database and answered the demand for exceedingly higher volumes, accesses, and types of data. However, one less investigated route to bolster current database performance is the use of dedicated hardware to effectively complement or replace software to 'accelerate' the overall system. This thesis investigates the use of a Field-Programmable Gate Array (FPGA) as a hardware accelerator for a key-value database. Utilized as a platform of reconfigurable logic, the FPGA offers massively parallel usability at a much faster pace than a traditional software-enabled database system. This project implements a key-value store database hardware accelerator in order to investigate the potential improvements in performance. Furthermore, as new technologies in materials science and computer architecture arise, a revision in database design welcomes the use of hardware for maximizing key-value database performance.
APA, Harvard, Vancouver, ISO, and other styles
3

Jansson, Jens, Alexandar Vukosavljevic, and Ismet Catovic. "Performance comparison between multi-model, key-value and documental NoSQL database management systems." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19857.

Full text
Abstract:
This study conducted an experiment that compares the multi-model NoSQL DBMS ArangoDB with other NoSQL DBMS, in terms of the average response time of queries. The DBMS compared in this experiment are the following: Redis, MongoDB, Couchbase, and OrientDB. The hypothesis that is answered in this study is the following: “There is a significant difference between ArangoDB, OrientDB, Couchbase, Redis, MongoDB in terms of the average response time of queries”. This is examined by comparing the average response time of 1 000, 100 000, and 1 000 000 queries between these database systems. The results show that ArangoDB performs worse compared to the other DBMS. Examples of future work include using additional DBMS in the same experiment and replacing ArangoDB with another multi-model DBMS to decide whether such a DBMS, in general, performs worse than single-model DBMS.
APA, Harvard, Vancouver, ISO, and other styles
4

Klapač, Milan. "Výhody a nevýhody relačních a nerelačních (noSQL) databází pro analytické úlohy." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-193931.

Full text
Abstract:
This work focuses on NoSQL databases, their use for analytical tasks, and their comparison with relational and OLAP databases. The aim is to analyse the benefits of NoSQL databases and their use for analytical purposes. The first part presents the basic principles of Business Intelligence, Data Warehousing, and Big Data. The second part deals with the key features of relational and NoSQL databases. The last part of the thesis describes the properties of four basic types of NoSQL databases and analyses their advantages, disadvantages and areas of application. The end of this part includes specific examples of the use of NoSQL databases, together with the reasons for the selection of those solutions.
APA, Harvard, Vancouver, ISO, and other styles
5

Balmau, Oana Maria. "Redesigning Persistent Key-Value Stores for Future Workloads, Hardware, and Performance Requirements." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/22913.

Full text
Abstract:
Cloud storage stacks are being challenged by new workloads, new hardware, and new performance requirements. First, workloads evolved from following a read-heavy pattern (e.g., a static web-page) to a write-heavy profile where the read:write ratio is closer to 1:1 (e.g., as in the Internet of Things). Second, the hardware is undergoing rapid changes. The divide between fine-grained volatile memory and slow block-level storage is rapidly being bridged by the emerging byte-addressable non-volatile memory devices and the fast block-addressable NVMe SSDs (e.g., Intel Optane NVMe SSDs). Third, performance requirements in storage systems now emphasize low tail latency, in addition to high throughput. This dissertation argues that existing storage systems, in particular persistent key-value stores (KVs), have fundamental limitations that do not allow them to fully meet these challenges. This dissertation proposes four new KVs designed for future hardware, workloads, and performance requirements. FloDB shows how to scale the throughput of KVs on servers with ample memory sizes of up to hundreds of GBs. TRIAD introduces novel techniques to reduce write amplification and to increase throughput in log-structured merge based KVs running on SSDs. SILK presents an I/O bandwidth scheduler to decrease tail latency in log-structured merge based KVs. Finally, KVell demonstrates that NVMe SSDs shift the performance bottleneck from I/O to CPU, invalidating an assumption that has underpinned all past storage system design. In line with this observation KVell then presents a new design for KVs that departs from the conventional wisdom of optimizing disk usage and instead optimizes CPU usage.
APA, Harvard, Vancouver, ISO, and other styles
6

Zhu, Sainan. "Creating a NoSQL database for the Internet of Things : Creating a key-value store on the SensibleThings platform." Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-25525.

Full text
Abstract:
Due to the requirements of Web 2.0 applications and the limitation of relational databases in horizontal scalability, NoSQL databases have become more and more popular in recent years. However, it is not easy to select a database that is suitable for a specific use. This thesis describes the detailed design, implementation and final performance evaluation of a key-value NoSQL database for the SensibleThings platform, which is an Internet of Things platform. The thesis starts by comparing the different types of NoSQL databases to select the most appropriate one. During the implementation of the database, the algorithms for data partitioning, data access, replication, addition and removal of nodes, and failure detection and handling are dealt with. The final results for the load distribution and the performance evaluation are also presented in this paper. At the end of the thesis, some problems and improvements that need to be taken into consideration in the future are discussed.
APA, Harvard, Vancouver, ISO, and other styles
7

Smailji, Liridon. "Performance comparison of differentNoSQL structure orientations." Thesis, Högskolan Kristianstad, Fakulteten för naturvetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-20971.

Full text
Abstract:
This study proposes a performance comparison between the different structures of NoSQL databases: document, key-value, column and graph. A second study is also conducted, looking at a performance comparison between three different NoSQL databases of the same structure (document based): MongoDB, OrientDB and Couchbase. Performance tests are conducted using the benchmarking tool YCSB (Yahoo! Cloud Serving Benchmark), looking at time to execute and throughput (operations/second). Besides benchmarking, literature reviews are conducted in order to understand the different NoSQL structures and to elaborate on our benchmarking results. Every NoSQL structure and database in our benchmark is tested in the same way: a loading phase of 1k, 10k and 100k entries, and a running phase with a workload of approximately 50% reads and 50% updates with 1k, 10k and 100k operations. The finding of this study is that there are differences in performance, both between different structures and between NoSQL databases of the same structure. Document-based OrientDB was the highest performing database at high volumes of data, and the key-value store Redis performed best at low volumes of data. Reasons for the performance differences are linked to specific trademarks of the structural orientation, the usage of specific attributes of the CAP theorem, storage type and development language.
APA, Harvard, Vancouver, ISO, and other styles
8

Mormone, Giovanni. "Sistemi NoSQL:motivazioni, tecnologie e possibili impieghi." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15724/.

Full text
Abstract:
For over forty years the relational model has been the reference point for data management within information systems. With the development of the Internet and the subsequent advent of Web 2.0, the need to manage large volumes of data on the network has grown exponentially, in a dynamic environment in which every user becomes not only a consumer but also a generator of content. The demands for flexibility and independence from schemas, favoured by the lack of structure and consistency of the data generated on the network, have exposed the difficulties, which will be examined in depth in this work, of relational systems in adapting to modern distributed applications, where high system availability must be guaranteed in order to allow millions of users to access the data. To meet these demands, NoSQL DBMS were born; they offer an alternative to classic relational systems and attempt to solve problems such as availability and partitioning of data on the network, taking full advantage of the technological progress of recent years, which has considerably reduced the costs of managing and producing hardware and has exponentially increased the computing capacity of machines while reducing their production and development costs. This work analyses the various types of NoSQL DBMS currently available, examining their characteristics and possible usage scenarios and presenting the fundamental concepts of the non-relational approach to data management.
APA, Harvard, Vancouver, ISO, and other styles
9

Persson, Ragnvald. "NoSQL-databaser i socialt nätverk." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20692.

Full text
Abstract:
The purpose of the study is to take a deeper look at NoSQL databases and investigate which tasks the different NoSQL groups fit best in a social network such as Facebook or Twitter. The data concerns, for example, the storage of personal data or social connections. There are four different types of NoSQL databases: column databases, graph databases, key-value databases and document databases. The question is which NoSQL database should be chosen for a particular task in a given social network. When developing a social network that requires data storage, it is important to know what kind of database should be used for a certain type of task. In order to answer these questions, an investigation has been made of the findings of previous research. There has also been a practical study of all four NoSQL groups in an experiment with storing user information, messages and friends.
APA, Harvard, Vancouver, ISO, and other styles
10

Toney, Ethan. "Improving Table Scans for Trie Indexed Databases." UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/76.

Full text
Abstract:
We consider a class of problems characterized by the need for a string-based identifier that reflects the ontology of the application domain. We present rules for string-based identifier schemas that facilitate fast filtering in databases used for this class of problems. We provide a runtime analysis of our schema and experimentally compare it with another solution. We also discuss the performance of our solution in a game engine. The string-based identifier schema can be used in additional scenarios such as cloud computing. An identifier schema adds metadata about an element, so the solution hinges on additional memory; but as long as queries operate only on the included metadata, there is no need to load the element from disk, which leads to huge performance gains.
APA, Harvard, Vancouver, ISO, and other styles
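The hedged sketch below illustrates the general idea of string-based identifiers that reflect the domain ontology: because related elements share a key prefix, a sorted key space can be range-scanned to filter by category without loading the elements themselves. The identifier layout and data are assumptions for the example, not the thesis's schema rules.

```python
# Prefix filtering over ontology-style string identifiers: metadata encoded in
# the key lets us filter without loading the underlying elements.
from bisect import bisect_left, bisect_right

keys = sorted([
    "world:forest:enemy:goblin:01",
    "world:forest:enemy:wolf:02",
    "world:forest:item:herb:07",
    "world:cave:enemy:bat:03",
])

def scan_prefix(sorted_keys, prefix):
    """Return all keys starting with prefix using two binary searches."""
    lo = bisect_left(sorted_keys, prefix)
    hi = bisect_right(sorted_keys, prefix + "\uffff")  # upper bound for the prefix range
    return sorted_keys[lo:hi]

print(scan_prefix(keys, "world:forest:enemy:"))
```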

Books on the topic "Key-value database"

1

Xu, Chen, and Aoying Zhou. Quality-Aware Scheduling for Key-value Data Stores. Springer London, Limited, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Quality-aware Scheduling for Key-value Data Stores. Springer, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Key-value database"

1

Sacco, Andres. "Redis: Key/Value Database." In Beginning Spring Data. Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8764-4_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Xu, Chen, Mohamed A. Sharaf, Minqi Zhou, Aoying Zhou, and Xiaofang Zhou. "Adaptive Query Scheduling in Key-Value Data Stores." In Database Systems for Advanced Applications. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37487-6_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ooley, Nathan, Nick Tichawa, and Brian Miller. "Basic Setup of iCloud and Key-Value Storage." In Beginning iOS Cloud and Database Development. Apress, 2014. http://dx.doi.org/10.1007/978-1-4302-4114-0_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Takeuchi, Susumu, Jun Shinomiya, Toru Shiraki, et al. "A Large Scale Key-Value Store Based on Range-Key Skip Graph and Its Applications." In Database Systems for Advanced Applications. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12098-5_43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Jiaoyang, Yinliang Yue, and Weiping Wang. "GHStore: A High Performance Global Hash Based Key-Value Store." In Database Systems for Advanced Applications. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-00123-9_39.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Jiaoyang, Yinliang Yue, and Weiping Wang. "GHStore: A High Performance Global Hash Based Key-Value Store." In Database Systems for Advanced Applications. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-00123-9_39.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhao, Zihao, Chuan Hu, Zhihong Shen, Along Mao, and Hao Ren. "A Key-Value Based Approach to Scalable Graph Database." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-39847-6_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chang, Dong, Yanfeng Zhang, and Ge Yu. "MaiterStore: A Hot-Aware, High-Performance Key-Value Store for Graph Processing." In Database Systems for Advanced Applications. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-43984-5_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Jiangtao, Zhiliang Guo, and Xiaofeng Meng. "SASS: A High-Performance Key-Value Store Design for Massive Hybrid Storage." In Database Systems for Advanced Applications. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-18120-2_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Huang, Chenchen, Huiqi Hu, Xuecheng Qi, Xuan Zhou, and Aoying Zhou. "RS-store: A SkipList-Based Key-Value Store with Remote Direct Memory Access." In Database Systems for Advanced Applications. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59410-7_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Key-value database"

1

Piyawarapong, Piraboon, and Somchart Fugkeaw. "Optimizing Database Image Retrieval with In-Memory Key-Value Store, Image Compression, and Smart Cache Updater Algorithm." In 2025 17th International Conference on Knowledge and Smart Technology (KST). IEEE, 2025. https://doi.org/10.1109/kst65016.2025.11003319.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hu, Hao, Kai Lu, Gen Li, Xiaoping Wang, and Tianye Xu. "CAAC: A Key-Value Database Performance Boosting Algorithm." In 2012 Fourth International Conference on Computational and Information Sciences (ICCIS). IEEE, 2012. http://dx.doi.org/10.1109/iccis.2012.97.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Moreno, Julio, Eduardo B. Fernandez, Eduardo Fernandez-Medina, and Manuel A. Serrano. "A Security Pattern for Key-Value NoSQL Database Authorization." In EuroPLoP '18: 23rd European Conference on Pattern Languages of Programs. ACM, 2018. http://dx.doi.org/10.1145/3282308.3282321.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Polyakov, Artem Y., Alexandr V. Efimov, Konstantin E. Kramarenko, and Kirill V. Pavsky. "Key-Value Database Access Optimization For PMIx Standard Implementation." In 2021 IEEE Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT). IEEE, 2021. http://dx.doi.org/10.1109/usbereit51232.2021.9455075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Thanh, Ta Minh, Nguyen Huu Thuy, and Ngoc-Tu Huynh. "Key-value based data hiding method for NoSQL database." In 2018 10th International Conference on Knowledge and Systems Engineering (KSE). IEEE, 2018. http://dx.doi.org/10.1109/kse.2018.8573334.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hui-jun, Wu, Lu Kai, Jiang Jing-fei, and Wang Shuang-xi. "Agent-Based Fault Tolerance Mechanism for Distributed Key-Value Database." In 2014 5th International Conference on Digital Home (ICDH). IEEE, 2014. http://dx.doi.org/10.1109/icdh.2014.58.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhao, Xin, Kai Lu, Xiao-Ping Wang, and Gen Li. "GSkiplist: A GPU-method to accelerate key-value database search." In International Conference on Computer Science, Technology and Application (CSTA2016). WORLD SCIENTIFIC, 2016. http://dx.doi.org/10.1142/9789813200449_0045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Puangsaijai, Wittawat, and Sutheera Puntheeranurak. "A comparative study of relational database and key-value database for big data applications." In 2017 International Electrical Engineering Congress (iEECON). IEEE, 2017. http://dx.doi.org/10.1109/ieecon.2017.8075813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Alami, Alae El, Mohamed Bahaj, and Younes Khourdifi. "Supply of a key value database redis in-memory by data from a relational database." In 2018 19th IEEE Mediterranean Electrotechnical Conference (MELECON). IEEE, 2018. http://dx.doi.org/10.1109/melcon.2018.8379066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kruber, Nico, Florian Schintke, and Michael Berlin. "A relational database schema on the transactional key-value store scalaris." In 2014 IEEE International Conference on Big Data (Big Data). IEEE, 2014. http://dx.doi.org/10.1109/bigdata.2014.7004441.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Key-value database"

1

Cogan, Dan. Vegetation mapping inventory project: Death Valley National Park. National Park Service, 2024. https://doi.org/10.36967/2306941.

Full text
Abstract:
This study presents a comprehensive vegetation mapping inventory project undertaken in Death Valley National Park (DEVA) within the National Park Service's Mojave Desert Network (MOJN). Spanning 3.4 million acres across California and Nevada, DEVA is the largest national park in the contiguous United States. Renowned for its harsh environment, characterized by intense heat, aridity, and low elevation, DEVA encompasses a variety of landscapes, including playas, alluvial fans, sand dunes, and mountain ranges. Despite its desolate appearance, the park harbors a diverse array of vegetation, with plant communities adapted to the park's varying elevation, moisture, salinity, and substrate conditions. The project, which began in 2011, was initiated by the National Park Service's (NPS) Vegetation Mapping Inventory (VMI) and aimed to document and classify the plant communities within DEVA. The ten-year project, divided into six phases, began with a thorough review of legacy data and a summary of plant communities. In collaboration with the California Native Plant Society (CNPS) and the University of Nevada, Las Vegas (UNLV), field data were collected across the park, including 111 classification plots and 518 observation points. These data, along with 1,242 samples from previous studies, were entered into the NPS VMI-specific PLOTS database. The CNPS analyzed the collected data to classify 85 plant alliances according to the revised US National Vegetation Classification (rUSNVC) standard, leading to the identification of 186 plant associations within DEVA. Cogan Technology, Inc., then developed a digital vegetation map layer covering the entire park, using a combination of manual and automated mapping techniques. This map was based on imagery from the National Agriculture Imagery Program (NAIP) and field data, resulting in the delineation of 90 map units (74 vegetated and 16 land-use/land-cover units). The map's overall thematic accuracy was assessed at 82%, with a Kappa value of 89%. The final products, including the spatial geodatabase, digital vegetation map layer, field photos, metadata, classification report, and a field key to the vegetation alliances, were delivered to the NPS VMI. These resources provide a comprehensive overview of DEVA's vegetation and support ongoing conservation and management efforts within this unique and challenging landscape.
APA, Harvard, Vancouver, ISO, and other styles
2

Iammarino, Simona, Sumontheany MUTH, and Kosal NITH. 20 Years of FDI in Cambodia: Towards Upper Middle-Income Status and Beyond. Cambodia Development Resource Institute, 2024. https://doi.org/10.64202/wp.149.202410.

Full text
Abstract:
Although Cambodia has a strong ambition to become an upper middle-income nation by 2030 and a high income by 2050, it faces various challenges in achieving these goals. This study investigates Cambodia’s progress and potential in this regard by analysing its position and trajectory relative to Greenfield Foreign Direct Investment (FDI) inflows and outflows – where foreign firms establish new operations in Cambodia and Cambodian investors set up businesses abroad. This study also provides preliminary insights on Cambodia's integration into Global and Regional Value Chains (GVCs), always using FDI as a proxy, considering sectoral, functional, and geographical trends and comparing them with those of its neighbouring countries – Lao People’s Democratic Republic and Vietnam – over the 20 years between 2003 and 2022. The research employs a desk review, SWOT analysis, and descriptive statistics by using academic literature, policy documents, stakeholder policy dialogues, and the fDiMarkets database by Financial Times. The analysis shows that FDI has been instrumental in reshaping Cambodia’s economic structure, significantly contributing to economic development and job creation. Key sectors attracting FDI include real estate, financial services, and alternative/renewable energy, while textiles, real estate, and consumer products are notable for generating employment opportunities. However, most FDI projects are concentrated in the capital and coastal areas, and have focused on low-tech manufacturing, which offer limited opportunities for spillovers and industrial upgrading. Cambodia’s outward FDI began in 2008, mainly targeting ASEAN countries. This paper highlights that Cambodia has developed a robust policy framework to attract and re-orient inward FDI, including a provision of various incentives for Qualified Investment Projects. Recent FDI inflow trends indicate growing interest in sectors such as alternative and renewable energy, rubber, automotive OEM, leisure and entertainment, food, tobacco, beverages, and paper, printing, and packaging industries. These sectors could be pivotal for Cambodia’s future growth. While Cambodia is making progress in addressing business challenges, there is a critical need to accelerate efforts to fully leverage FDI and support local firms’ development. This paper offers policy recommendations to address these challenges and maximise the potential benefits of FDI and regional GVCs, supporting Cambodia’s economic transition towards its income targets.
APA, Harvard, Vancouver, ISO, and other styles
3

LaBonte, Don, Etan Pressman, Nurit Firon, and Arthur Villordon. Molecular and Anatomical Characterization of Sweetpotato Storage Root Formation. United States Department of Agriculture, 2011. http://dx.doi.org/10.32747/2011.7592648.bard.

Full text
Abstract:
Original objectives: (1) anatomical study of storage root initiation and formation; (2) induction of storage root formation; (3) isolation and characterization of genes involved in storage root formation, both during the normal course of storage root development and following stress-induced storage root formation. Background: Sweetpotato is a high value vegetable crop in Israel and the U.S., acreage is expanding in both countries, and the research herein represents an important backstop to improving quality, consistency, and yield. This research has two broad objectives, both relating to sweetpotato storage root formation. The first objective is to understand storage root inductive conditions and describe the anatomical and physiological stages of storage root development. Sweetpotato is propagated through vine cuttings. These vine cuttings form adventitious roots, from pre-formed primordiae, at each node underground, and it is these small adventitious roots which serve as initials for storage and fibrous (non-storage) "feeder" roots. What perplexes producers is the tremendous variability in storage roots produced from plant to plant. The marketable root number may vary from none to five per plant. What has intrigued us is the dearth of research on sweetpotato during the early growth period, which we hypothesize has a tremendous impact on ultimate consistency and yield. The second objective is to identify genes that change the root physiology towards either a fleshy storage root or a fibrous "feeder" root. Understanding which genes affect the ultimate outcome is central to our research. Major conclusions: For objective one, we have determined that the majority of adventitious roots that are initiated within 5-7 days after transplanting possess the anatomical features associated with storage root initiation and account for 86% of storage root count at 65 days after transplanting. These data underscore the importance of optimizing the growing environment during the critical storage root initiation period. Water deprivation during this phenological stage led to substantial reduction in storage root number and yield as determined through growth chamber, greenhouse, and field experiments. Morphological characterization of adventitious roots showed adjustments in root system architecture, expressed as lateral root count and density, in response to water deprivation. For objective two, we generated a transcriptome of storage and lignified (non-storage) adventitious roots. This transcriptome database consists of 55,296 contigs and contains data on differential expression between initiating and lignified adventitious roots. The molecular data provide evidence that a key regulatory mechanism in storage root initiation involves the switch between lignin biosynthesis and cell division and starch accumulation. We extended this research to identify genes upregulated in adventitious roots under drought stress. A subset of these genes was expressed in salt-stressed plants.
APA, Harvard, Vancouver, ISO, and other styles
4

Langlais, Pierre-Carl. Open Scientific Data. Comité pour la science ouverte, 2023. https://doi.org/10.52949/69.

Full text
Abstract:
Not opening scientific data is costly. It has been estimated that a significant share of scientific knowledge disappears every year. In a 2014 study, less than half of biological datasets from the 1990s could be recovered, and when recovery was possible it required significant time and effort. In comparison, 98% of datasets published on PLOS with unique identifiers (data DOIs) are still available for future research. Open scientific data are fundamental resources for a large variety of scientific activities: meta-analysis, replication of research results, or accessibility to primary sources. They also bring significant economic and social value, as scientific data is commonly used by non-academic professionals as well as public agencies and non-profit organizations. Yet open scientific data is not costless. Ensuring that data is not only downloadable but usable requires significant investment in documentation, data cleaning, licensing and indexation. Not all scientific data can be shared, and verifications are frequently necessary to ensure that they do not incorporate copyrighted contents or personal information. To be effective, data sharing has to be anticipated throughout the entire research lifecycle. New principles of scientific data management aim to formalize the preexisting cultures of data in scientific communities and apply common standards. First published in 2016, the FAIR Guiding Principles (findability, accessibility, interoperability, and reusability) are an influential framework for opening scientific data. Policies in support of data sharing have moved from general and broad encouragement to the concrete development of data sharing services. Early initiatives go back to the first computing infrastructures: in 1957 the World Data Center system aimed to make a large range of scientific data readily available. Open data programs were nevertheless severely limited by the lack of technical support and compatibility for data transfer. After 1991, the web created a universal framework for data exchange and entailed a massive expansion of scientific databases. Yet numerous projects ran into critical issues of long-term sustainability. Open science infrastructures have recently become key stakeholders in the diffusion and management of open scientific data. Data repositories ensure the preservation of scientific resources as well as their discoverability. Data hosted on repositories are more frequently used and quoted than data published in a supplementary file.
APA, Harvard, Vancouver, ISO, and other styles
5

Burak, Leonid Ch, and Nataliya L. Ovsyannikova. Modern methods of storage and packaging of garden strawberries (Fragaria × ananassa Duch.) (review). Contemporary horticulture, 2024. https://doi.org/10.12731/2312-6701-266171.

Full text
Abstract:
Postharvest treatment of garden strawberries and the development of effective storage methods are crucial to increase the shelf life and preserve their quality until consumption. Although some reviews on certain treatment technologies have been published, we have not found studies that considered and compared common and advanced methods of storing garden strawberries. Therefore, the goal of this study is to review modern postharvest methods of strawberry storage (Fragaria × ananassa Duch). The review includes reports published in English and Russian in 2014–2024. PubMed, Scopus, Web of Science, Elibrary and Google Scholar databases were used to search by keywords. Fifty scientific publications have been studied. In the first part of our study, the metabolic and biochemical processes that underlie the ripening process of strawberries are considered, the factors that cause spoilage of strawberry berries are analyzed, and modern methods of strawberry treatment are presented. The preservation of garden strawberries using radiation, light or heat treatment can prevent the development of microorganisms and increase the resistance of berries to diseases. However, these methods can have a negative impact on the nutritional value, color and taste of berries over time. Cold storage is the most commonly used method of storing garden strawberries after harvest throughout the supply chain. In addition to cold storage, post-harvest treatment methods, including thermal, cold plasma and chemical treatments, have been carefully studied and individually applied to further increase the strawberry shelf life. These treatments help to prevent fungal infection, activate the metabolic protection system and improve the structural integrity of strawberry berries, thereby maintaining their quality over time, especially during cold storage. In addition to treatment methods, storage in a modified atmosphere, the application of active packaging and functional coatings have been recognized as effective ways to preserve the quality of berries and prevent spoilage after harvest. In addition, the combined use of two or more of these methods has proven to be the most effective for improving the shelf life of garden strawberries. The analysis of the antifungal effectiveness of modern storage methods, study of the synergy between different methods and the development of solutions based on biopolymers represent a key path for future research.
APA, Harvard, Vancouver, ISO, and other styles
6

Downes, Jane, ed. Chalcolithic and Bronze Age Scotland: ScARF Panel Report. Society for Antiquaries of Scotland, 2012. http://dx.doi.org/10.9750/scarf.09.2012.184.

Full text
Abstract:
The main recommendations of the panel report can be summarised under five key headings. (1) Building the Scottish Bronze Age: Narratives should be developed to account for the regional and chronological trends and diversity within Scotland at this time. A chronology based upon Scottish as well as external evidence, combining absolute dating (and the statistical modelling thereof) with re-examined typologies based on a variety of sources – material cultural, funerary, settlement, and environmental evidence – is required to construct a robust and up to date framework for advancing research. (2) Bronze Age people: How society was structured and demographic questions need to be imaginatively addressed, including the degree of mobility (both short and long-distance communication), hierarchy, and the nature of the ‘family’ and the ‘individual’. A range of data and methodologies need to be employed in answering these questions, including harnessing experimental archaeology systematically to inform archaeologists of the practicalities of daily life, work and craft practices. (3) Environmental evidence and climate impact: The opportunity to study the effects of climatic and environmental change on past society is an important feature of this period, as both palaeoenvironmental and archaeological data can be of suitable chronological and spatial resolution to be compared. Palaeoenvironmental work should be more effectively integrated within Bronze Age research, and inter-disciplinary approaches promoted at all stages of research and project design. This should be a two-way process, with environmental science contributing to interpretation of prehistoric societies, and in turn, the value of archaeological data to broader palaeoenvironmental debates emphasised. Through effective collaboration questions such as the nature of settlement and land-use and how people coped with environmental and climate change can be addressed. (4) Artefacts in Context: The Scottish Chalcolithic and Bronze Age provide good evidence for resource exploitation and the use, manufacture and development of technology, with particularly rich evidence for manufacture. Research into these topics requires the application of innovative approaches in combination. This could include biographical approaches to artefacts or places, ethnographic perspectives, and scientific analysis of artefact composition. In order to achieve this there is a need for data collation, robust and sustainable databases and a review of the categories of data. (5) Wider Worlds: Research into the Scottish Bronze Age has a considerable amount to offer other European pasts, with a rich archaeological data set that includes intact settlement deposits, burials and metalwork of every stage of development that has been the subject of a long history of study. Research should operate over different scales of analysis, tracing connections and developments from the local and regional to the international context. In this way, Scottish Bronze Age studies can contribute to broader questions relating both to the Bronze Age and to human society in general.
APA, Harvard, Vancouver, ISO, and other styles
7

Rankin, Nicole, Deborah McGregor, Candice Donnelly, et al. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, 2019. http://dx.doi.org/10.57022/clzt5093.

Full text
Abstract:
Background Lung cancer is the number one cause of cancer death worldwide.(1) It is the fifth most commonly diagnosed cancer in Australia (12,741 cases diagnosed in 2018) and the leading cause of cancer death.(2) The number of years of potential life lost to lung cancer in Australia is estimated to be 58,450, similar to that of colorectal and breast cancer combined.(3) While tobacco control strategies are most effective for disease prevention in the general population, early detection via low dose computed tomography (LDCT) screening in high-risk populations is a viable option for detecting asymptomatic disease in current (13%) and former (24%) Australian smokers.(4) The purpose of this Evidence Check review is to identify and analyse existing and emerging evidence for LDCT lung cancer screening in high-risk individuals to guide future program and policy planning. Evidence Check questions This review aimed to address the following questions: 1. What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? 2. What is the evidence of potential harms from lung cancer screening for higher-risk individuals? 3. What are the main components of recent major lung cancer screening programs or trials? 4. What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Summary of methods The authors searched the peer-reviewed literature across three databases (MEDLINE, PsycINFO and Embase) for existing systematic reviews and original studies published between 1 January 2009 and 8 August 2019. Fifteen systematic reviews (of which 8 were contemporary) and 64 original publications met the inclusion criteria set across the four questions. Key findings Question 1: What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? There is sufficient evidence from systematic reviews and meta-analyses of combined (pooled) data from screening trials (of high-risk individuals) to indicate that LDCT examination is clinically effective in reducing lung cancer mortality. In 2011, the landmark National Lung Cancer Screening Trial (NLST, a large-scale randomised controlled trial [RCT] conducted in the US) reported a 20% (95% CI 6.8% – 26.7%; P=0.004) relative reduction in mortality among long-term heavy smokers over three rounds of annual screening. High-risk eligibility criteria was defined as people aged 55–74 years with a smoking history of ≥30 pack-years (years in which a smoker has consumed 20-plus cigarettes each day) and, for former smokers, ≥30 pack-years and have quit within the past 15 years.(5) All-cause mortality was reduced by 6.7% (95% CI, 1.2% – 13.6%; P=0.02). Initial data from the second landmark RCT, the NEderlands-Leuvens Longkanker Screenings ONderzoek (known as the NELSON trial), have found an even greater reduction of 26% (95% CI, 9% – 41%) in lung cancer mortality, with full trial results yet to be published.(6, 7) Pooled analyses, including several smaller-scale European LDCT screening trials insufficiently powered in their own right, collectively demonstrate a statistically significant reduction in lung cancer mortality (RR 0.82, 95% CI 0.73–0.91).(8) Despite the reduction in all-cause mortality found in the NLST, pooled analyses of seven trials found no statistically significant difference in all-cause mortality (RR 0.95, 95% CI 0.90–1.00).(8) However, cancer-specific mortality is currently the most relevant outcome in cancer screening trials. 
These seven trials also demonstrated a significantly greater proportion of early-stage cancers in LDCT groups compared with controls (RR 2.08, 95% CI 1.43–3.03). Thus, when considering results across mortality outcomes and early-stage cancers diagnosed, LDCT screening is considered to be clinically effective.

Question 2: What is the evidence of potential harms from lung cancer screening for higher-risk individuals?
The harms of LDCT lung cancer screening include false-positive tests and the consequences of unnecessary invasive follow-up procedures for conditions that are eventually diagnosed as benign. While LDCT screening leads to an increased frequency of invasive procedures, it does not result in greater mortality soon after an invasive procedure (in trial settings when compared with the control arm).(8) Overdiagnosis, exposure to radiation, psychological distress and an impact on quality of life are other known harms. Systematic review evidence indicates the benefits of LDCT screening are likely to outweigh the harms. The potential harms are likely to be reduced as refinements are made to LDCT screening protocols through: i) the application of risk prediction models (e.g. the PLCOm2012), which enable a more accurate selection of the high-risk population through the use of specific criteria (beyond age and smoking history); ii) the use of nodule management algorithms (e.g. Lung-RADS, PanCan), which assist in the diagnostic evaluation of screen-detected nodules and cancers (e.g. more precise volumetric assessment of nodules); and iii) more judicious selection of patients for invasive procedures.

Recent evidence suggests a positive LDCT result may transiently increase psychological distress but does not have long-term adverse effects on psychological distress or health-related quality of life (HRQoL). With regard to smoking cessation, there is no evidence that screening participation gives smokers a false sense of reassurance or reduces their motivation to quit. The NELSON and Danish trials found no difference in smoking cessation rates between LDCT screening and control groups. Higher net cessation rates compared with the general population suggest that those who participate in screening trials may already be motivated to quit.

Question 3: What are the main components of recent major lung cancer screening programs or trials?
There are no systematic reviews that capture the main components of recent major lung cancer screening trials and programs. We extracted evidence from original studies and clinical guidance documents and organised it into key groups to form a concise set of components for potential implementation of a national lung cancer screening program in Australia:
1. Identifying the high-risk population: recruitment, eligibility, selection and referral
2. Educating the public, people at high risk and healthcare providers; this includes creating awareness of lung cancer, the benefits and harms of LDCT screening, and shared decision-making
3. Components necessary for health services to deliver a screening program:
a. Planning phase: e.g. human resources to coordinate the program, and electronic data systems that integrate medical records information and link to an established national registry
b. Implementation phase: e.g. human and technological resources required to conduct LDCT examinations, interpretation of reports and communication of results to participants
c. Monitoring and evaluation phase: e.g. monitoring outcomes across patients, radiological reporting, compliance with established standards and a quality assurance program
4. Data reporting and research: e.g. audit and feedback to multidisciplinary teams, and reporting outcomes to enhance international research into LDCT screening
5. Incorporation of smoking cessation interventions: e.g. specific programs designed for LDCT screening, or referral to existing community or hospital-based services that deliver cessation interventions.

Most original studies are single-institution evaluations that contain descriptive data about the processes required to establish and implement a high-risk population-based screening program. Across all studies there is a consistent message about the challenges and complexities of establishing LDCT screening programs that attract the people at high risk who stand to gain the greatest benefit from participation. With regard to smoking cessation, evidence from one systematic review indicates that the optimal strategy for incorporating smoking cessation interventions into an LDCT screening program is unclear. There is widespread agreement that LDCT screening attendance presents a 'teachable moment' for cessation advice, especially among people who receive a positive scan result. Smoking cessation is an area of significant research investment; for instance, eight US-based clinical trials are now underway that aim to address how best to design and deliver cessation programs within large-scale LDCT screening programs.(9)

Question 4: What is the cost-effectiveness of lung cancer screening programs (including studies of cost–utility)?
Assessing the value or cost-effectiveness of LDCT screening involves a complex interplay of factors, including data on effectiveness and costs, and institutional context. A key input is data about the effectiveness of potential and current screening programs with respect to case detection, and the likely outcomes of treating those cases sooner (in the presence of LDCT screening) as opposed to later (in the absence of LDCT screening). Evidence about the cost-effectiveness of LDCT screening programs has been summarised in two systematic reviews. We identified a further 13 studies (five modelling studies, one discrete choice experiment and seven articles) that used a variety of methods to assess cost-effectiveness. Three modelling studies indicated LDCT screening was cost-effective in US and European settings. Two studies, one from Australia and one from New Zealand, reported that LDCT screening would not be cost-effective using NLST-like protocols. We anticipate that, following the full publication of the NELSON trial, cost-effectiveness studies will be updated with new data that reduce uncertainty about factors that influence modelling outcomes, including the findings of indeterminate nodules.

Gaps in the evidence
There is a large and accessible body of evidence on the effectiveness (Q1) and harms (Q2) of LDCT screening for lung cancer. Nevertheless, there are significant gaps in the evidence about the program components required to implement an effective LDCT screening program (Q3). Questions about LDCT screening acceptability and feasibility were not explicitly included in the scope. However, as the evidence is based primarily on US programs and UK pilot studies, its relevance to the local setting requires careful consideration. The Queensland Lung Cancer Screening Study provides feasibility data about clinical aspects of LDCT screening but little about program design.
The International Lung Screening Trial is still in the recruitment phase and findings are not yet available for inclusion in this Evidence Check. The Australian Population Based Screening Framework was developed to “inform decision-makers on the key issues to be considered when assessing potential screening programs in Australia”.(10) As the Framework is specific to population-based, rather than high-risk, screening programs, there is a lack of clarity about the transferability of its criteria. However, the Framework criteria do stipulate that a screening program must be acceptable to “important subgroups such as target participants who are from culturally and linguistically diverse backgrounds, Aboriginal and Torres Strait Islander people, people from disadvantaged groups and people with a disability”.(10) An extensive search of the literature highlighted that there is very little information about the acceptability of LDCT screening to these population groups in Australia, even though they form part of the high-risk population.(10) There are also considerable gaps in the evidence about the cost-effectiveness of LDCT screening in different settings, including Australia. The evidence base in this area is rapidly evolving and is likely to include new data from the NELSON trial and to incorporate data about the costs of targeted therapies and immunotherapies as these treatments become more widely available in Australia.
APA, Harvard, Vancouver, ISO, and other styles
8

Honey authenticity: collaborative data sharing feasibility study. Food Standards Agency, 2023. http://dx.doi.org/10.46756/sci.fsa.fbt231.

Full text
Abstract:
According to the UN,(1) there are more than 90 million managed beehives around the world, producing about 1.9 million tonnes of honey worth more than £5 billion a year. That honey is then packaged, as single origin or as a blend of honey from different sources, and sold for consumption. Given the size of the market and the immense environmental benefits of beekeeping (three out of four crops depend on pollination by bees), it is an industry on which both livelihoods and lives depend.

Target for adulteration
As a labour-intensive, high-value product with an often complex supply chain, honey is subject to internationally and nationally agreed definitions, and it is a target for adulteration. Testing honey is therefore critical, but no single universal analytical method is capable of detecting all types of adulteration with adequate sensitivity. A variety of methods are used to detect honey adulteration; each test has strengths and weaknesses, and there are issues with interpretation.

NMR analysis
Testing for honey adulterated with added sugars may be based on analytical techniques such as nuclear magnetic resonance (NMR) spectroscopy. NMR is especially helpful in detecting certain types of adulteration, such as the addition of cane or beet sugars. Bees generally forage on plants that use the same photosynthetic pathway as sugar beet, which makes it difficult for traditional tests based on isotopic differences to provide effective results. The ‘chemical fingerprint’ provided by NMR is specific to the sample that has been tested and can be compared with the fingerprints from other samples, enabling the user to assess consistency.

Reference databases
Interpretation of results depends on comparison against a reference database of authenticated samples. The reference database needs to be representative of the variation that can occur, including differing beekeeping practices, origins, seasonality and variations in climate. Information is also needed on the collection of reference samples, the curation of databases, and the interpretation and reporting of data. The nature of the reference databases is key to understanding how results have been interpreted. However, these reference databases are owned by, and commercially sensitive for, the testing laboratories that have developed them. How can such data be shared in a trustworthy way between key stakeholders along the honey and analytical supply chain so that all parties can have confidence in honey authenticity test results? This research looks into the implications of these hidden databases, especially the trust placed in validation certificates and the value they carry in the honey supply chain.
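To make the comparison step described in this abstract concrete, below is a minimal, purely illustrative sketch (not the FSA study's methodology or any laboratory's actual procedure) in which the reference database is modelled as a simple key-value mapping from sample identifiers to simplified fingerprint vectors, and a query sample is scored against it by cosine similarity. The sample identifiers, vector values and the 0.95 threshold are all invented for illustration.

```python
# Illustrative sketch only: a toy key-value "reference database" of authenticated
# fingerprints and a cosine-similarity check of a query sample against it.
# Sample IDs, fingerprint values and the threshold are hypothetical.
from math import sqrt

# Reference database: sample ID (key) -> simplified fingerprint vector (value).
reference_db = {
    "authentic-acacia-2022-hu": [0.91, 0.42, 0.13, 0.07],
    "authentic-manuka-2023-nz": [0.85, 0.51, 0.22, 0.05],
    "authentic-multifloral-2023-uk": [0.88, 0.47, 0.18, 0.09],
}

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length fingerprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def best_match(query, db, threshold=0.95):
    """Return the closest authenticated reference and whether it clears the threshold."""
    sample_id, score = max(
        ((sid, cosine_similarity(query, fp)) for sid, fp in db.items()),
        key=lambda pair: pair[1],
    )
    return sample_id, score, score >= threshold

if __name__ == "__main__":
    query_fingerprint = [0.89, 0.45, 0.16, 0.08]  # hypothetical test result
    sample_id, score, consistent = best_match(query_fingerprint, reference_db)
    print(f"closest reference: {sample_id}, similarity: {score:.3f}, consistent: {consistent}")
```

In practice, NMR spectra, sample metadata (origin, season, beekeeping practice) and the statistical models used for matching are far more elaborate, and, as the abstract notes, the authenticated reference data themselves are commercially sensitive, which is precisely why trustworthy sharing is the open question.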
APA, Harvard, Vancouver, ISO, and other styles