A selection of scholarly literature on the topic "Data freshness and consistency"


Browse the lists of up-to-date articles, books, dissertations, conference papers, and other scholarly sources on the topic "Data freshness and consistency".

Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and the bibliographic reference to the selected work will be formatted automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever such data are available in the metadata.

Journal articles on the topic "Data freshness and consistency":

1

Kazantsev, E. V., N. B. Kondratyev, M. V. Osipov, and O. S. Rudenko. "Influence of different types of hydrocolloids on the structure and preservation of sugary confectionery with a jelly structure consistency: a Review." Proceedings of the Voronezh State University of Engineering Technologies 82, no. 2 (September 18, 2020): 107–15. http://dx.doi.org/10.20914/2310-1202-2020-2-107-115.

Abstract:
Quality is a time-varying, complex property of a confectionery that shows a measure of acceptability for the customer and rapidly or slowly deteriorates after the manufacture of foodstuffs. The safety of raw materials and finished products during storage is the most important task of global importance, according to the WHO, in 2020. One of the important problems in the confectionery industry is to ensure a long shelf life of confectionery products without reducing their taste properties, as exemplified by jelly marmalade. The task of preserving the freshness of the product is to preserve its consistency, taste, smell and appearance by retaining moisture and preventing damage by microorganisms. The freshness criterion for long shelf life is one of the main factors affecting the sales and competitiveness of sugary confectionery. The aspects of the influence of the properties of structure-forming agents (pectins, agars, modified starches) on the formation of a gelatinous consistency and the storage of marmalade are considered. The physical and chemical indicators characterizing the process of moisture transfer in the body of the marmalade during storage are indicated. To assess the migration of moisture during storage, the graphical dependence of water activity (aw) on the mass fraction of moisture in the marmalade is used: the moisture sorption isotherm. Analysis of the obtained desorption isotherm data can serve as a useful tool that shows what proportion of moisture a product is capable of receiving or giving away without losing the properties that characterize the quality of a particular confectionery product. Modern methods for assessing the quality function of marmalade using a mathematical equation to predict its storage capacity are indicated. An integrated approach to ensuring the safety of marmalade is considered, which allows predicting its shelf life.
2

Vinçon, Tobias, Christian Knödler, Leonardo Solis-Vasquez, Arthur Bernhardt, Sajjad Tamimi, Lukas Weber, Florian Stock, Andreas Koch, and Ilia Petrov. "Near-data processing in database systems on native computational storage under HTAP workloads." Proceedings of the VLDB Endowment 15, no. 10 (June 2022): 1991–2004. http://dx.doi.org/10.14778/3547305.3547307.

Abstract:
Today's Hybrid Transactional and Analytical Processing (HTAP) systems tackle ever-growing data volumes in combination with a mixture of transactional and analytical workloads. While optimizing for aspects such as data freshness and performance isolation, they build on the traditional data-to-code principle and may trigger massive cold data transfers that impair the overall performance and scalability. First, in this paper we show that Near-Data Processing (NDP) naturally fits in the HTAP design space. Second, we propose an NDP database architecture, allowing transactionally consistent in-situ execution of analytical operations in HTAP settings. We evaluate the proposed architecture in state-of-the-art key/value stores and multi-versioned DBMS. In contrast to traditional setups, our approach yields robust, resource- and cost-efficient performance.
3

Agiwal, Ankur, Kevin Lai, Gokul Nath Babu Manoharan, Indrajit Roy, Jagan Sankaranarayanan, Hao Zhang, Tao Zou, et al. "Napa." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2986–97. http://dx.doi.org/10.14778/3476311.3476377.

Abstract:
Google services continuously generate vast amounts of application data. This data provides valuable insights to business users. We need to store and serve these planet-scale data sets under the extremely demanding requirements of scalability, sub-second query response times, availability, and strong consistency; all this while ingesting a massive stream of updates from applications used around the globe. We have developed and deployed in production an analytical data management system, Napa, to meet these requirements. Napa is the backend for numerous clients in Google. These clients have a strong expectation of variance-free, robust query performance. At its core, Napa's principal technologies for robust query performance include the aggressive use of materialized views, which are maintained consistently as new data is ingested across multiple data centers. Our clients also demand flexibility in being able to adjust their query performance, data freshness, and costs to suit their unique needs. Robust query processing and flexible configuration of client databases are the hallmark of Napa design. Most of the related work in this area takes advantage of full flexibility to design the whole system without the need to support a diverse set of preexisting use cases. In comparison, a particular challenge we faced is that Napa needs to deal with hard constraints from existing applications and infrastructure, so we could not do a "green field" system, but rather had to satisfy existing constraints. These constraints led us to make particular design decisions and also devise new techniques to meet the challenges. In this paper, we share our experiences in designing, implementing, deploying, and running Napa in production with some of Google's most demanding applications.
4

Liu, Sheng, Qiyang Chen, and Linlin You. "Fed2A: Federated Learning Mechanism in Asynchronous and Adaptive Modes." Electronics 11, no. 9 (April 27, 2022): 1393. http://dx.doi.org/10.3390/electronics11091393.

Abstract:
Driven by emerging technologies such as edge computing and Internet of Things (IoT), recent years have witnessed the increasing growth of data processing in a distributed way. Federated Learning (FL), a novel decentralized learning paradigm that can unify massive devices to train a global model without compromising privacy, is drawing much attention from both academics and industries. However, the performance dropping of FL running in a heterogeneous and asynchronous environment hinders its wide applications, such as in autonomous driving and assistive healthcare. Motivated by this, we propose a novel mechanism, called Fed2A: Federated learning mechanism in Asynchronous and Adaptive Modes. Fed2A supports FL by (1) allowing clients and the collaborator to work separately and asynchronously, (2) uploading shallow and deep layers of deep neural networks (DNNs) adaptively, and (3) aggregating local parameters by weighing on the freshness of information and representational consistency of model layers jointly. Moreover, the effectiveness and efficiency of Fed2A are also analyzed based on three standard datasets, i.e., FMNIST, CIFAR-10, and GermanTS. Compared with the best performance among three baselines, i.e., FedAvg, FedProx, and FedAsync, Fed2A can reduce the communication cost by over 77%, as well as improve model accuracy and learning speed by over 19% and 76%, respectively.
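To make the aggregation rule described in this abstract concrete, the short sketch below weighs each client's layer update by a staleness decay and by its cosine similarity to the current global layer. It is only an illustrative approximation under those assumptions, not the authors' Fed2A code; every function name, hyperparameter, and the small demo at the end are invented for the example.

```python
# Illustrative sketch (not the authors' code): asynchronous aggregation that
# weighs each client's layer update by (a) freshness of its information and
# (b) representational consistency with the current global layer.
import numpy as np

def staleness_weight(staleness, decay=0.5):
    """Older updates (larger staleness, in rounds) get smaller weights."""
    return (1.0 + staleness) ** (-decay)

def consistency_weight(global_layer, local_layer):
    """Cosine similarity between flattened layer parameters, clipped at 0."""
    g, l = global_layer.ravel(), local_layer.ravel()
    denom = np.linalg.norm(g) * np.linalg.norm(l) + 1e-12
    return float(max(0.0, np.dot(g, l) / denom))

def aggregate_layer(global_layer, updates, alpha=0.5):
    """Blend asynchronously received (local_layer, staleness) pairs into the global layer."""
    weights, layers = [], []
    for local_layer, staleness in updates:
        weights.append(staleness_weight(staleness) * consistency_weight(global_layer, local_layer))
        layers.append(local_layer)
    if not weights or sum(weights) == 0.0:
        return global_layer                     # nothing useful arrived this round
    weights = np.array(weights) / sum(weights)  # normalize to a convex combination
    local_mix = sum(w * l for w, l in zip(weights, layers))
    return (1.0 - alpha) * global_layer + alpha * local_mix

# Tiny demo: a fresh update and a 3-round-stale update for a 2x2 "layer".
g = np.eye(2)
print(aggregate_layer(g, [(np.ones((2, 2)), 0), (2 * np.ones((2, 2)), 3)]))
```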
5

Etuk, Aniebiet, Joseph A. Anyadighibe, Christian Amadi, and Edim James James. "Service quality delivery and consumers’ choice of fast-food outlets." International research journal of management, IT and social sciences 9, no. 2 (February 8, 2022): 264–73. http://dx.doi.org/10.21744/irjmis.v9n2.2038.

Abstract:
This study examined the effect of service quality delivery on consumer’s choice of fast foods outlets. Cross-sectional survey research design was adopted. Primary data was collected from respondents using structured questionnaire. Simple regression in the Statistical Package for Social Science (SPSS) was adopted to analyze the data collected. Consequently, it was found that service tangibility, reliability, responsiveness, assurance and empathy had significant effects on consumer’s choice of fast foods. Thus, it was recommended amongst others, that fast food outlets should be more responsive to consumers’ service requirements by rapidly eliciting and resolving consumers’ enquiries and complaints; consistently deliver fast, strong and reliable service; ensure their personnel treat consumers with politeness and consideration at every point of service encounter and constantly seek ways to offer freshness in order to remain relevant in the market place.
6

Drejeris, Rolandas. "New Approach to a Modeling Actions of New Dietary Meals Creation." Current Developments in Nutrition 4, Supplement_2 (May 29, 2020): 709. http://dx.doi.org/10.1093/cdn/nzaa051_006.

Abstract:
Objectives: The objective of the research is to provide a reasonable model for designing new meals for hospital food departments. Methods: After summarizing the information presented in a wide spectrum of specialized scientific literature and assessing it from the perspective of practical adaptability, an original model for designing new dietary meals was presented. The model was tested in the two biggest clinical hospitals of Lithuania, then a patient survey was conducted and appropriate decisions were made. Results: The model consists of the following key components: research and assessment of the patients' needs (customs, traditions or hobbies); processing of survey results (generalizing them in order to identify unified and general trends for different population groups and health disorders); selection and adaptation of appropriate resources according to the nature of the patient's disease (according to the requirements of dietary nutrition); choice of suitable processing procedures corresponding to the sufferings of the patients; calculation of the portion size (amounts of ingredients); planning of quality (decoration, component arrangement, equipment selection); and technology description and approval by the head of the department. The model was tested in Kaunas clinical hospital. Patients aged 60–70 in the pulmonology department were interviewed about nutrition and asked to assess quality on a 10-point scale. Freshness of the salads scored only 7.45, although freshness was checked very carefully. Using the model we found that crispness of any food always adds to the impression of freshness, so salads (beets, carrots, parsnips, celery, etc.) were supplemented with a dried vegetable ingredient after a conformity assessment of the products' energy value. Patients evaluated the newly created meal very positively. Conclusions: Use of the model reduces the chance of failure and informs decisions in new dietary meal creation. Application of the suggested model will allow hospital food production departments to be consistent in creating new dietary meals and increase the likelihood of their patients' recovery. Funding Sources: The funders had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
7

Guridi Lopategui, Ibai, Julen Castellano Paulis, and Ibon Echeazarra Escudero. "Physical Demands and Internal Response in Football Sessions According to Tactical Periodization." International Journal of Sports Physiology and Performance 16, no. 6 (June 1, 2021): 858–64. http://dx.doi.org/10.1123/ijspp.2019-0829.

Abstract:
Purpose: The objectives of the present study were (1) to analyze the internal and external load profile of training and competition carried out by semiprofessional football players during a 27-week period and (2) to examine the possible link between this type of periodization and players’ fitness status and their readiness to compete. Methods: Training and match data were obtained from 26 semiprofessional football players belonging to the reserve squad of a Spanish La Liga club during the 2018/19 season. For the purpose of this study, the distribution of external and internal load during a typical training microcycle, with 6 or 7 days between matches, was analyzed. Five types of sessions were considered: strength, duration, velocity, preofficial match, and official match. Results: The results showed a different internal and external load profile for each type of session, with the load being consistently higher during matches when compared with training sessions (28.9%–94% higher), showing significant differences in all the variables. There was a clear tapering strategy in the last days of the week to arrive with enough freshness to compete, shown by the decrease of the values in the 2 days before the match (15%–83% reduction, depending on the variable). Furthermore, the horizontal alternation of the load allowed the players to maintain their fitness level during the 27-week period. Conclusions: Our findings suggest that this weekly periodization approach could help achieve a double conditional target, allowing a short tapering strategy to face the match with enough freshness and serving as a strategy for maintaining or optimizing players’ physical performance during the season.
8

Houhamdi, Zina, and Belkacem Athamena. "Data freshness evaluation in data integration systems." International Journal of Economics and Business Research 11, no. 2 (2016): 132. http://dx.doi.org/10.1504/ijebr.2016.075306.

9

Murray, Nicholas J., Emma V. Kennedy, Jorge G. Álvarez-Romero, and Mitchell B. Lyons. "Data Freshness in Ecology and Conservation." Trends in Ecology & Evolution 36, no. 6 (June 2021): 485–87. http://dx.doi.org/10.1016/j.tree.2021.03.005.

10

Errickson, Lauren, and Douglas Zemeckis. "Industry Insights on Consumer Receptivity to Aquaculture Products in the Retail Marketplace: Considerations for Increasing Seafood Intake." Current Developments in Nutrition 5, Supplement_2 (June 2021): 553. http://dx.doi.org/10.1093/cdn/nzab043_005.

Abstract:
Objectives: Americans consistently fail to meet dietary guidelines for seafood intake. Efforts are needed to increase consumption, especially of sustainable seafood that can be supplied by domestic aquaculture. However, consumer receptivity to aquaculture products is mixed. The objective of this study was to elicit industry perspectives regarding influences on consumer purchases of aquaculture products. Methods: Key informant interviews (n = 12) were conducted in late 2020 with U.S. salmon, shrimp, and oyster producers, marketers, and industry interest groups. Participants were recruited via snowball sampling. Virtual interviews were conducted by a trained moderator and assistant moderator/notetaker using a semi-structured interview guide. Qualitative data analysis included a thematic review of interview recordings and notes, with key concepts coded according to a priori themes derived from the literature. Results: Interviews yielded important insights into consumer receptivity to aquaculture products. Participants believe that outdated misperceptions of aquaculture persist, noting that despite advances in domestic aquaculture production practices to comply with U.S. standards, some consumers perceive aquaculture as environmentally detrimental and unsustainable. Further, participants believe negative attitudes toward genetically modified organisms, corn and soy-based feeds, antibiotics, and chemicals are misplaced, yet contribute to hesitancy among some consumers. Industry opinions on what is important to consumers reflect strong valuation of seafood quality, freshness, local harvest, and sustainability. Participants suggest product labeling efforts be developed accordingly, and that innovative marketing strategies be undertaken, such as aquaculture product promotion through "know your farmer" campaigns, chef education initiatives, and home delivery programs. Conclusions: For domestic aquaculture products to have a meaningful impact on U.S. seafood intake, positive consumer receptivity is key. Industry perspectives will inform marketing and educational efforts toward addressing consumer hesitancy to purchase aquaculture products by resolving misguided concerns, with important implications for consumer health and sustainability of the domestic seafood supply. Funding Sources: United States Department of Agriculture.

Dissertations on the topic "Data freshness and consistency":

1

Bedewy, Ahmed M. "OPTIMIZING DATA FRESHNESS IN INFORMATION UPDATE SYSTEMS." The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1618573325086709.

2

Mueller, G. "Data Consistency Checks on Flight Test Data." International Foundation for Telemetering, 2014. http://hdl.handle.net/10150/577405.

Abstract:
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA
This paper reflects the principal results of a study performed internally by Airbus's flight test centers. The purpose of this study was to share the body of knowledge concerning data consistency checks between all Airbus business units. An analysis of the test process is followed by the identification of the process stakeholders involved in ensuring data consistency. In the main part of the paper several different possibilities for improving data consistency are listed; it is left to the discretion of the reader to determine the appropriateness of these methods.
3

Tran, Sy Nguyen. "Consistency techniques for test data generation." Université catholique de Louvain, 2005. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-05272005-173308/.

Abstract:
This thesis presents a new approach for automated test data generation of imperative programs containing integer, boolean and/or float variables. A test program (with procedure calls) is represented by an Interprocedural Control Flow Graph (ICFG). The classical testing criteria (statement, branch, and path coverage), widely used in unit testing, are extended to the ICFG. Path coverage is the core of our approach. Given a specified path of the ICFG, a path constraint is derived and solved to obtain a test case. The constraint solving is carried out based on a consistency notion. For statement (and branch) coverage, paths reaching a specified node or branch are dynamically constructed. The search for suitable paths is guided by the interprocedural control dependences of the program. The search is also pruned by our consistency filter. Finally, test data are generated by the application of the proposed path coverage algorithm. A prototype system implements our approach for C programs. Experimental results, including complex numerical programs, demonstrate the feasibility of the method and the efficiency of the system, as well as its versatility and flexibility to different classes of problems (integer and/or float variables; arrays, procedures, path coverage, statement coverage).
4

Yu, Wenyuan. "Improving data quality : data consistency, deduplication, currency and accuracy." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8899.

Abstract:
Data quality is one of the key problems in data management. An unprecedented amount of data has been accumulated and has become a valuable asset of an organization. The value of the data relies greatly on its quality. However, data is often dirty in real life. It may be inconsistent, duplicated, stale, inaccurate or incomplete, which can reduce its usability and increase the cost of businesses. Consequently the need for improving data quality arises, which comprises of five central issues of improving data quality, namely, data consistency, data deduplication, data currency, data accuracy and information completeness. This thesis presents the results of our work on the first four issues with regards to data consistency, deduplication, currency and accuracy. The first part of the thesis investigates incremental verifications of data consistencies in distributed data. Given a distributed database D, a set S of conditional functional dependencies (CFDs), the set V of violations of the CFDs in D, and updates ΔD to D, it is to find, with minimum data shipment, changes ΔV to V in response to ΔD. Although the problems are intractable, we show that they are bounded: there exist algorithms to detect errors such that their computational cost and data shipment are both linear in the size of ΔD and ΔV, independent of the size of the database D. Such incremental algorithms are provided for both vertically and horizontally partitioned data, and we show that the algorithms are optimal. The second part of the thesis studies the interaction between record matching and data repairing. Record matching, the main technique underlying data deduplication, aims to identify tuples that refer to the same real-world object, and repairing is to make a database consistent by fixing errors in the data using constraints. These are treated as separate processes in most data cleaning systems, based on heuristic solutions. However, our studies show that repairing can effectively help us identify matches, and vice versa. To capture the interaction, a uniform framework that seamlessly unifies repairing and matching operations is proposed to clean a database based on integrity constraints, matching rules and master data. The third part of the thesis presents our study of finding certain fixes that are absolutely correct for data repairing. Data repairing methods based on integrity constraints are normally heuristic, and they may not find certain fixes. Worse still, they may even introduce new errors when attempting to repair the data, which may not work well when repairing critical data such as medical records, in which a seemingly minor error often has disastrous consequences. We propose a framework and an algorithm to find certain fixes, based on master data, a class of editing rules and user interactions. A prototype system is also developed. The fourth part of the thesis introduces inferring data currency and consistency for conflict resolution, where data currency aims to identify the current values of entities, and conflict resolution is to combine tuples that pertain to the same real-world entity into a single tuple and resolve conflicts, which is also an important issue for data deduplication. We show that data currency and consistency help each other in resolving conflicts. We study a number of associated fundamental problems, and develop an approach for conflict resolution by inferring data currency and consistency. 
The last part of the thesis reports our study of data accuracy on the longstanding relative accuracy problem which is to determine, given tuples t1 and t2 that refer to the same entity e, whether t1[A] is more accurate than t2[A], i.e., t1[A] is closer to the true value of the A attribute of e than t2[A]. We introduce a class of accuracy rules and an inference system with a chase procedure to deduce relative accuracy, and the related fundamental problems are studied. We also propose a framework and algorithms for inferring accurate values with users’ interaction.
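As a side note on the conditional functional dependencies (CFDs) used in the first part of this thesis, the toy sketch below checks a single CFD over an in-memory relation. It is only a minimal illustration of what a CFD violation is, not the incremental, distributed algorithms the thesis develops; the relation, attribute names, and pattern are invented for the example.

```python
# Illustrative sketch of a conditional functional dependency (CFD) check.
# A CFD (X -> Y, tp) requires that tuples matching the pattern tp and agreeing
# on the X attributes also agree on Y.
from collections import defaultdict

def cfd_violations(rows, lhs, rhs, pattern):
    """Return pairs of row indices that violate the CFD (lhs -> rhs, pattern).

    rows:    list of dicts (the relation)
    lhs/rhs: attribute names
    pattern: dict of attribute -> required constant ('_' means wildcard)
    """
    def matches(row):
        return all(v == '_' or row.get(a) == v for a, v in pattern.items())

    groups = defaultdict(list)
    for i, row in enumerate(rows):
        if matches(row):
            groups[tuple(row[a] for a in lhs)].append(i)

    violations = []
    for idxs in groups.values():
        for i in idxs:
            for j in idxs:
                if i < j and rows[i][rhs] != rows[j][rhs]:
                    violations.append((i, j))
    return violations

# Example: within country code "44", equal zip codes must imply equal cities.
rows = [
    {"cc": "44", "zip": "EH8", "city": "Edinburgh"},
    {"cc": "44", "zip": "EH8", "city": "London"},   # violates the CFD
    {"cc": "01", "zip": "EH8", "city": "Chicago"},  # pattern does not apply
]
print(cfd_violations(rows, lhs=["zip"], rhs="city", pattern={"cc": "44"}))  # [(0, 1)]
```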
5

Ntaryamira, Evariste. "Une méthode asynchrone généralisée préservant la qualité des données des systèmes temps réel embarqués : cas de l’autopilote PX4-RT." Thesis, Sorbonne université, 2021. https://tel.archives-ouvertes.fr/tel-03789654.

Abstract:
Real-time embedded systems, despite their limited resources, are evolving very quickly. For such systems, it is not enough to ensure that no job misses its deadline; it is also mandatory to ensure the good quality of the data being transmitted from task to task. Data quality constraints are expressed by the maintenance of a set of properties that a data sample must exhibit to be considered relevant, and trade-offs must be found between the system scheduling constraints and those applied to the data. To ensure such properties, we consider the wait-free mechanism. The size of each communication buffer is based on the lifetime-bound method, and access to the shared resources follows the single-writer, many-readers principle. To capture all the communication particularities brought by the uORB communication mechanism, we model the interactions between the tasks by a bipartite graph, called the communication graph, which is composed of sets of so-called domain messages. To enhance the predictability of inter-task communication, we extend the Liu and Layland model with a communication-state parameter used to control writing/reading points. We considered two types of data constraints: local data constraints and global data constraints. To verify the local data constraints, we rely on a sub-sampling mechanism. Regarding the global data constraints, we introduce two new mechanisms: the last-reader-tags mechanism and the scroll-or-overwrite mechanism. These two mechanisms are to some extent complementary: the first operates at the beginning of the spindle while the second operates at the end of the spindle.
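A minimal sketch of the single-writer, multiple-readers idea mentioned above is given below: the writer cycles through a few slots and flips a "latest" index, so readers always pick up the freshest completely written sample without blocking the writer. This only illustrates the general wait-free pattern, not the PX4-RT/uORB implementation or its lifetime-bound buffer sizing; the class and method names are invented.

```python
# Minimal single-writer / multiple-readers exchange over a small circular
# buffer: the writer never blocks, and readers see the freshest fully written
# sample. The number of slots must be large enough that a slot still being
# read is not recycled (the role played by lifetime-bound sizing).
import itertools

class SingleWriterBuffer:
    def __init__(self, slots=3):
        self._slots = [None] * slots           # samples indexed by slot
        self._latest = -1                      # index of the last fully written slot
        self._cursor = itertools.cycle(range(slots))

    def publish(self, sample):                 # called only by the single writer task
        slot = next(self._cursor)
        self._slots[slot] = sample             # write into a slot readers are not yet pointed at
        self._latest = slot                    # single reference switch exposes the new sample

    def read_latest(self):                     # called by any reader task
        slot = self._latest
        return None if slot < 0 else self._slots[slot]

buf = SingleWriterBuffer(slots=3)
buf.publish({"seq": 1, "gyro": (0.0, 0.1, 0.0)})
print(buf.read_latest())                       # the freshest published sample
```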
6

湯志輝 and Chi-fai Tong. "On checking the temporal consistency of data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31211914.

7

Tong, Chi-fai. "On checking the temporal consistency of data /." [Hong Kong : University of Hong Kong], 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13570353.

8

Shah, Nikhil Jeevanlal. "A simulation framework to ensure data consistency in sensor networks." Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/541.

9

Gustafsson, Thomas. "Maintaining data consistency in embedded databases for vehicular systems." Licentiate thesis, Linköping : Univ, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5681.

10

Khan, Tareq Jamal. "Robust, fault-tolerant majority based key-value data store supporting multiple data consistency." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-42474.

Abstract:
Web 2.0 has significantly transformed the way modern society works nowadays. In today's Web, information not only flows top down from web sites to readers, but also flows bottom up, contributed by the mass of users. Hugely popular Web 2.0 applications like wikis, social applications (e.g. Facebook, MySpace), media sharing applications (e.g. YouTube, Flickr), blogging and numerous others generate large amounts of user-generated content and make heavy use of the underlying storage. The data storage system is the heart of these applications, as all user activities are translated to read and write requests and directed to the database for further action. Hence the focus is on the storage that serves data to support the applications, and its reliable and efficient design is instrumental for applications to perform in line with expectations. Large-scale storage systems are being used by popular social networking services like Facebook and MySpace, where millions of users' data have been stored and fully accessed by these companies. However, from the users' point of view there has been justified concern about user data ownership and lack of control over personal data. For example, on more than one occasion Facebook has exercised its control over users' data without respecting users' rights to ownership of their own content and has manipulated data for its own business interest without users' knowledge or consent. The thesis proposes, designs and implements a large-scale, robust and fault-tolerant key-value data storage prototype that is peer-to-peer based and intends to back away from the client-server paradigm, with a view to relieving companies from data storage and management responsibilities and letting users control their own personal data. Several read and write APIs (similar to Yahoo!'s PNUTS but different in terms of underlying design and the environment they are targeted for) with various data consistency guarantees are provided, from which a wide range of web applications would be able to choose the APIs according to their data consistency, performance and availability requirements. An analytical comparison is also made against the PNUTS system, which targets a more stable environment. For evaluation, simulation has been carried out to test the system availability, scalability and fault-tolerance in a dynamic environment. The results are then analyzed and the conclusion is drawn that the system is scalable, available and shows acceptable performance.
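The majority-based idea can be illustrated with the toy quorum sketch below: writes and reads each contact a majority of replicas, and because any two majorities intersect, a read returns the newest majority-acknowledged version. This is only a sketch of the general principle, not the thesis prototype or its APIs; the replica count, names, and the randomly chosen "reachable majority" are invented for the example.

```python
# Toy sketch of majority (quorum) reads and writes over N replicas; version
# numbers let a majority read pick the newest acknowledged value.
import random

class Replica:
    def __init__(self):
        self.store = {}                         # key -> (version, value)

N = 5
QUORUM = N // 2 + 1                             # any two quorums of 3 out of 5 overlap
replicas = [Replica() for _ in range(N)]

def majority_read(key):
    # Ask a majority of replicas and keep the answer with the highest version.
    answers = [r.store.get(key) for r in random.sample(replicas, QUORUM)]
    answers = [a for a in answers if a is not None]
    return max(answers, key=lambda a: a[0]) if answers else None

def majority_write(key, value):
    # Learn the newest version from a majority, bump it, push to a majority.
    current = majority_read(key)
    version = current[0] + 1 if current else 1
    for r in random.sample(replicas, QUORUM):   # pretend only these are reachable
        r.store[key] = (version, value)
    return version

majority_write("profile:42", {"name": "Alice"})
print(majority_read("profile:42"))              # (1, {'name': 'Alice'})
```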

Books on the topic "Data freshness and consistency":

1

Gustafsson, Thomas. Maintaining data consistency in embedded databases for vehicular systems. Linko ping: Department of Computer and Information Science, Linko pings universitet, 2004.

2

Dahmen, E. R. Screening of hydrological data: Tests for stationarity and relative consistency. Wageningen, The Netherlands: International Institute for Land Reclamation and Improvement, 1990.

3

Kufoniyi, Olajide. Spatial coincidence modelling, automated database updating, and data consistency in vector GIS. Enschede: International Institute for Aerospace Survey and Earth Sciences, 1995.

4

Pellechio, Anthony J. Data consistency in IMF publications: Country Staff Reports versus International Financial Statistics. Washington, D.C: International Monetary Fund, Statistics Dept., 2005.

5

Rao, M. J. Manohar. Indian macroeconomic data base in a consistency accounting framework, 1950-51 to 1997-98: Identifying empirical patterns and regularities. Mumbai: Dr. Babasaheb Ambedkar Chair: RBI Unit in Political Economy, Dept. of Economics, University of Mumbai, 1999.

6

CIRP International Seminar on Computer-Aided Tolerancing (6th 1999 Universiteit Twente). Global consistency of tolerances: Proceedings of the 6th CIRP International Seminar on Computer-Aided Tolerancing, University of Twente, Enschede, The Netherlands, 22-24 March, 1999. Dordrecht: Kluwer Academic, 1999.

7

Houten, Fred. Global Consistency of Tolerances: Proceedings of the 6 th CIRP International Seminar on Computer-Aided Tolerancing, University of Twente, Enschede, The Netherlands, 22-24 March, 1999. Dordrecht: Springer Netherlands, 1999.

8

Office, United States Government Accountability. Small business innovation research: Agencies need to strengthen efforts to improve the completeness, consistency, and accuracy of awards data : report to congressional committees. Washington, D.C: GAO, 2006.

9

Macrez, Franck. Créations informatiques: Bouleversement des droits de la propriété intellectuelle? : essai sur la cohérence des droits = Computer creation : disruption of intellectual property rights? : essay on law consistency. Paris: LexisNexis, 2011.

10

Office, General Accounting. Immigration enforcement: Better data and controls are needed to assure consistency with the Supreme Court decision on long-term alien detention : report to congressional requesters. Washington, D.C: U.S. General Accounting Office, 2004.


Book chapters on the topic "Data freshness and consistency":

1

Terry, Douglas B. "Data Consistency." In Replicated Data Management for Mobile Computing, 23–26. Cham: Springer International Publishing, 2008. http://dx.doi.org/10.1007/978-3-031-02477-1_3.

2

Wang, Guiling, and Shuo Zhang. "Freshness-Aware Data Service Mashups." In Lecture Notes in Computer Science, 435–49. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-49178-3_33.

3

Nawab, Faisal. "Weaker Consistency Models/Eventual Consistency." In Encyclopedia of Big Data Technologies, 1793–99. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-77525-8_181.

4

Nawab, Faisal. "Weaker Consistency Models/Eventual Consistency." In Encyclopedia of Big Data Technologies, 1–7. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63962-8_181-1.

5

Meier, Andreas, and Michael Kaufmann. "Ensuring Data Consistency." In SQL & NoSQL Databases, 123–42. Wiesbaden: Springer Fachmedien Wiesbaden, 2019. http://dx.doi.org/10.1007/978-3-658-24549-8_4.

6

Nahler, Gerhard. "consistency of data." In Dictionary of Pharmaceutical Medicine, 38. Vienna: Springer Vienna, 2009. http://dx.doi.org/10.1007/978-3-211-89836-9_279.

7

Shapiro, Marc, and Pierre Sutra. "Database Consistency Models." In Encyclopedia of Big Data Technologies, 591–601. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-77525-8_203.

8

Shapiro, Marc, and Pierre Sutra. "Database Consistency Models." In Encyclopedia of Big Data Technologies, 1–11. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63962-8_203-1.

9

Suresh, Anandhu, Arathi Vinayachandran, Chinju Philip, Jithu George Velloor, and Anju Pratap. "Fresko Pisces: Fish Freshness Identification Using Deep Learning." In Innovative Data Communication Technologies and Application, 843–56. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-9651-3_68.

10

Martins, Pedro, Maryam Abbasi, and Pedro Furtado. "AScale: Big/Small Data ETL and Real-Time Data Freshness." In Communications in Computer and Information Science, 315–27. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-34099-9_25.


Conference papers on the topic "Data freshness and consistency":

1

Bouzeghoub, Mokrane. "A framework for analysis of data freshness." In the 2004 international workshop. New York, New York, USA: ACM Press, 2004. http://dx.doi.org/10.1145/1012453.1012464.

2

Golomb, Dagaen, Deepak Gangadharan, Sanjian Chen, Oleg Sokolsky, and Insup Lee. "Data Freshness Over-Engineering: Formulation and Results." In 2018 IEEE 21st International Symposium on Real-Time Distributed Computing (ISORC). IEEE, 2018. http://dx.doi.org/10.1109/isorc.2018.00034.

3

Behrouzi-Far, Amir, Emina Soljanin, and Roy D. Yates. "Data Freshness in Leader-Based Replicated Storage." In 2020 IEEE International Symposium on Information Theory (ISIT). IEEE, 2020. http://dx.doi.org/10.1109/isit44484.2020.9174411.

4

Takatsuka, Yasunari, Hiroya Nagao, Takashi Yaguchi, Masatoshi Hanai, and Kazuyuki Shudo. "A caching mechanism based on data freshness." In 2016 International Conference on Big Data and Smart Computing (BigComp). IEEE, 2016. http://dx.doi.org/10.1109/bigcomp.2016.7425940.

5

Neumaier, Sebastian, and Jurgen Umbrich. "Measures for Assessing the Data Freshness in Open Data Portals." In 2016 2nd International Conference on Open and Big Data (OBD). IEEE, 2016. http://dx.doi.org/10.1109/obd.2016.10.

6

Wang, Guiling, and Feng Zhang. "Freshness-Aware Sensor Mashups Based on Data Services." In 2013 IEEE International Conference on Green Computing and Communications (GreenCom) and IEEE Internet of Things(iThings) and IEEE Cyber, Physical and Social Computing(CPSCom). IEEE, 2013. http://dx.doi.org/10.1109/greencom-ithings-cpscom.2013.378.

7

Jaber, Ghada, Rahim Kacimi, and Thierry Gayraud. "Data Freshness Aware Content-Centric Networking in WSNs." In 2017 Wireless Days (WD). IEEE, 2017. http://dx.doi.org/10.1109/wd.2017.7918152.

8

Gustafsson, T., and J. Hansson. "Data Freshness and Overload Handling in Embedded Systems." In 12th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA'06). IEEE, 2006. http://dx.doi.org/10.1109/rtcsa.2006.25.

9

Broadhead, James Scott, and Przemyslaw Pawelczak. "Data Freshness in Mixed-Memory Intermittently-Powered Systems." In 2021 IEEE International Symposium on Information Theory (ISIT). IEEE, 2021. http://dx.doi.org/10.1109/isit45174.2021.9518156.

10

Zaza, Nosheen, and Nathaniel Nystrom. "Data-centric Consistency Policies." In PMLDC '16: Programming Models and Languages for Distributed Computing. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2957319.2957377.


Reports of organizations on the topic "Data freshness and consistency":

1

Copping, Andrea E., Alicia M. Gorton, and Mikaela C. Freeman. Data Transferability and Collection Consistency in Marine Renewable Energy. Office of Scientific and Technical Information (OSTI), September 2018. http://dx.doi.org/10.2172/1491572.

2

Goodson, Garth R., Jay J. Wylie, Gregory R. Ganger, and Micahel K. Reiter. Efficient Consistency for Erasure-Coded Data via Versioning Servers. Fort Belvoir, VA: Defense Technical Information Center, March 2003. http://dx.doi.org/10.21236/ada461126.

3

Lu, Qi, and M. Satyanarayanan. Improving Data Consistency in Mobile Computing using Isolation-Only Transactions. Fort Belvoir, VA: Defense Technical Information Center, March 1995. http://dx.doi.org/10.21236/ada293110.

4

Fox, K. M. Product Consistency Test Leachate Data for Nepheline Scoping Study Glasses. Office of Scientific and Technical Information (OSTI), April 2019. http://dx.doi.org/10.2172/1508735.

5

Cole, Nancy, and Janet Currie. Reported Income in the NLSY: Consistency Checks and Methods for Cleaningthe Data. Cambridge, MA: National Bureau of Economic Research, July 1994. http://dx.doi.org/10.3386/t0160.

6

Gray, H. L., Wayne A. Woodward, and Suojin Wang. Testing the Consistency of Soviet Data Using a Sequence of Hypothesis Tests. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada241711.

7

Mayega, Jova, Ronald Waiswa, Jane Nabuyondo, and Milly Nalukwago Isingoma. How Clean Are Our Taxpayer Returns? Data Management in Uganda Revenue Authority. Institute of Development Studies (IDS), April 2021. http://dx.doi.org/10.19088/ictd.2021.007.

Abstract:
The paper assesses the cleanliness of taxpayer returns at the Uganda Revenue Authority (URA) in terms of: (a) completeness – the extent to which taxpayers submit all the required information as specified in the return forms; (b) accuracy – the extent to which the submitted information is correct; (c) consistency – the extent to which taxpayers submit similar information in cases where the same information is required in different types of tax returns, or submitted in the same type of tax return, but for different time periods; and (d) permanence – the extent to which the returns are likely to be later modified by taxpayers.
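As an illustration of how two of these dimensions could be scored mechanically, the sketch below computes a completeness share over required fields and a cross-return consistency check. The field names and rules are invented examples, not the URA's actual validation logic.

```python
# Illustrative sketch (not the URA's method): scoring returns on two of the
# four data-quality dimensions named above. Field names and rules are invented.
REQUIRED_FIELDS = ["tin", "period", "gross_income", "tax_due"]

def completeness(ret):
    """Share of required fields that are present and non-empty."""
    filled = sum(1 for f in REQUIRED_FIELDS if ret.get(f) not in (None, ""))
    return filled / len(REQUIRED_FIELDS)

def consistent_across_returns(vat_return, income_return):
    """Same taxpayer and period should report the same turnover in both returns."""
    return vat_return.get("turnover") == income_return.get("turnover")

vat = {"tin": "1001", "period": "2020-03", "turnover": 5_000_000}
inc = {"tin": "1001", "period": "2020-03", "turnover": 4_750_000,
       "gross_income": 4_750_000, "tax_due": 200_000}
print(completeness(inc), consistent_across_returns(vat, inc))  # 1.0 False
```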
8

Wallgren, Anders, and Britt Wallgren. Toward an Integrated Statistical System Based on Registers. Inter-American Development Bank, April 2021. http://dx.doi.org/10.18235/0003204.

Abstract:
This note describes how Latin American and Caribbean countries can join a revolution in statistical systems, moving from data collection based on geographic frames to one based on administrative registers, and the advantages of making this change. Northern European countries have already shifted from a traditional area frame-based statistical system to a register-based system, in which all surveys are based on statistical registers. Among the key advantages of the shift are: i) lower production costs; ii) potential for higher levels of geographic disaggregation and greater frequency; and iii) reduce the burden on informants by following the maxim of “ask once, use many times”. Evidence from Colombia, Ecuador, Mexico, and Peru points to the viability of this transition in the region. However, to take better advantage of the new strategy, countries should invest to improve the quality and coverage of their administrative systems and should create an integrated register system, allowing for efficient data use, and ensuring consistency and coherence across statistical registries.
9

Cheng, Wen, Yongping Zhang, and Edward Clay. Comprehensive Performance Assessment of Passive Crowdsourcing for Counting Pedestrians and Bikes. Mineta Transportation Institute, February 2022. http://dx.doi.org/10.31979/mti.2022.2025.

Abstract:
Individuals who walk and cycle experience a variety of health and economic benefits while simultaneously benefiting their local environments and communities. It is essential to correctly obtain pedestrian and bicyclist counts for better design and planning of active transportation-related facilities. In recent years, crowdsourcing has seen a rise in popularity due to the multiple advantages relative to traditional methods. Nevertheless, crowdsourced data have been applied in fewer studies, and their reliability and performance relative to other conventional methods are rarely documented. To this end, this research examines the consistency between crowdsourced and traditionally collected count data. Additionally, the research aims to develop the adjustment factor between the crowdsourced and permanent counter data and to estimate the annual average daily traffic (AADT) data based on hourly volume and other predictor variables such as time, day, weather, land use, and facility type. With some caveats, the results demonstrate that the Street Light crowdsourcing count data for pedestrians and bicyclists appear to be a promising alternative to the permanent counters.
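The two computations mentioned above, an adjustment factor relating crowdsourced counts to permanent-counter counts and an AADT estimate from adjusted daily volumes, can be sketched as follows. The numbers and function names are made up for illustration and are not the study's actual models, which also draw on weather, land use, and facility type as predictors.

```python
# Illustrative sketch of an adjustment factor between crowdsourced and
# permanent-counter volumes, and of AADT as the mean of adjusted daily counts.
def adjustment_factor(permanent_counts, crowdsourced_counts):
    """Ratio of ground-truth to crowdsourced volume over matched periods."""
    return sum(permanent_counts) / sum(crowdsourced_counts)

def estimate_aadt(daily_crowdsourced_counts, factor):
    """Annual average daily traffic from adjusted crowdsourced daily volumes."""
    adjusted = [c * factor for c in daily_crowdsourced_counts]
    return sum(adjusted) / len(adjusted)

factor = adjustment_factor(permanent_counts=[120, 135, 128],
                           crowdsourced_counts=[80, 90, 85])
print(round(factor, 2), round(estimate_aadt([75, 92, 110, 88], factor), 1))
```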
10

LaBonte, Don, Etan Pressman, Nurit Firon, and Arthur Villordon. Molecular and Anatomical Characterization of Sweetpotato Storage Root Formation. United States Department of Agriculture, December 2011. http://dx.doi.org/10.32747/2011.7592648.bard.

Abstract:
Original objectives: Anatomical study of storage root initiation and formation. Induction of storage root formation. Isolation and characterization of genes involved in storage root formation. During the normal course of storage root development. Following stress-induced storage root formation. Background:Sweetpotato is a high value vegetable crop in Israel and the U.S. and acreage is expanding in both countries and the research herein represents an important backstop to improving quality, consistency, and yield. This research has two broad objectives, both relating to sweetpotato storage root formation. The first objective is to understand storage root inductive conditions and describe the anatomical and physiological stages of storage root development. Sweetpotato is propagated through vine cuttings. These vine cuttings form adventitious roots, from pre-formed primordiae, at each node underground and it is these small adventitious roots which serve as initials for storage and fibrous (non-storage) “feeder” roots. What perplexes producers is the tremendous variability in storage roots produced from plant to plant. The marketable root number may vary from none to five per plant. What has intrigued us is the dearth of research on sweetpotato during the early growth period which we hypothesize has a tremendous impact on ultimate consistency and yield. The second objective is to identify genes that change the root physiology towards either a fleshy storage root or a fibrous “feeder” root. Understanding which genes affect the ultimate outcome is central to our research. Major conclusions: For objective one, we have determined that the majority of adventitious roots that are initiated within 5-7 days after transplanting possess the anatomical features associated with storage root initiation and account for 86 % of storage root count at 65 days after transplanting. These data underscore the importance of optimizing the growing environment during the critical storage root initiation period. Water deprivation during this phenological stage led to substantial reduction in storage root number and yield as determined through growth chamber, greenhouse, and field experiments. Morphological characterization of adventitious roots showed adjustments in root system architecture, expressed as lateral root count and density, in response to water deprivation. For objective two, we generated a transcriptome of storage and lignified (non-storage) adventitious roots. This transcriptome database consists of 55,296 contigs and contains data as regards to differential expression between initiating and lignified adventitious roots. The molecular data provide evidence that a key regulatory mechanism in storage root initiation involves the switch between lignin biosynthesis and cell division and starch accumulation. We extended this research to identify genes upregulated in adventitious roots under drought stress. A subset of these genes was expressed in salt stressed plants.
