Academic literature on the topic 'Replication stre'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Replication stre.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Replication stre"

1

Merritt, Ronald. "Utilizing the Generalized Linear Mixed Model for Specification and Simulation of Transient Vibration Environments." Journal of the IEST 53, no. 2 (2010): 35–49. http://dx.doi.org/10.17764/jiet.53.2.y7291022622225x3.

Abstract:
Transient vibration environments are an important consideration in qualification of aircraft store components — particularly for aircraft with internal storage bays. Generally, these transient vibration environments provide high stimulus input to a store via aerodynamic forces for up to 15 seconds on numerous occasions during training. With the recent introduction of the technique of Time Waveform Replication (TWR) to laboratory testing (MIL-STD-810G Method 525), store components can be readily tested to replications of field-measured transient vibration environments. This paper demonstrates the use of the Generalized Linear Mixed Model (GLMM) on a collection of measured field responses for specification of transient vibration environments. The paper establishes a basis for moving from (1) transient vibration measured field response to (2) transient vibration stochastic specification of the measured field response to (3) laboratory simulation of transient vibration environments.
2

Compain, Fabrice, Lionel Frangeul, Laurence Drieux, et al. "Complete Nucleotide Sequence of Two Multidrug-Resistant IncR Plasmids from Klebsiella pneumoniae." Antimicrobial Agents and Chemotherapy 58, no. 7 (2014): 4207–10. http://dx.doi.org/10.1128/aac.02773-13.

Abstract:
We report here the complete nucleotide sequence of two IncR replicons encoding multidrug resistance determinants, including β-lactam (blaDHA-1, blaSHV-12), aminoglycoside (aphA1, strA, strB), and fluoroquinolone (qnrB4, aac(6′)-Ib-cr) resistance genes. The plasmids have backbones that are similar to each other, including the replication and stability systems, and contain a wide variety of transposable elements carrying known antibiotic resistance genes. This study confirms the increasing clinical importance of IncR replicons as resistance gene carriers.
3

Madi, Mohammed K., Yuhanis Yusof, and Suhaidi Hassan. "Replica Placement Strategy for Data Grid Environment." International Journal of Grid and High Performance Computing 5, no. 1 (2013): 70–81. http://dx.doi.org/10.4018/jghpc.2013010105.

Abstract:
A Data Grid is an infrastructure that manages huge amounts of data files and provides intensive computational resources across geographically distributed collaborations. To increase resource availability and to ease resource sharing in such an environment, there is a need for replication services. Data replication is one of the methods used to improve the performance of data access in distributed systems by placing multiple copies of data files at distributed sites. A replica placement mechanism is the process of identifying where to place copies of replicated data files in a Grid system. Existing work identifies suitable sites based on the number of requests and the read cost of the required file. Such approaches consume large amounts of bandwidth and increase the computational time. The authors propose a replica placement strategy (RPS) that finds the best locations to store replicas based on four criteria, namely: 1) read cost, 2) file transfer time, 3) sites' workload, and 4) replication sites. OptorSim is used to evaluate the performance of this replica placement strategy. The simulation results show that RPS requires less execution time and consumes less network bandwidth than the existing Simple Optimizer and LFU (Least Frequently Used) approaches.
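The multi-criteria site ranking described in this abstract can be illustrated with a small sketch. The `Site` fields, the weights, and the scoring function below are hypothetical stand-ins for illustration, not the paper's actual RPS formulation:

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    read_cost: float       # cost of reading the required file from this site
    transfer_time: float   # time to ship a new replica to this site
    workload: float        # current load on the site (0..1)
    has_replica: bool      # whether the site already holds a copy

def best_placement(sites, w_read=0.4, w_transfer=0.3, w_load=0.3):
    """Rank candidate sites that lack a replica; lower weighted score = better."""
    candidates = [s for s in sites if not s.has_replica]
    return min(candidates,
               key=lambda s: w_read * s.read_cost
                             + w_transfer * s.transfer_time
                             + w_load * s.workload)

sites = [
    Site("A", read_cost=5.0, transfer_time=2.0, workload=0.9, has_replica=False),
    Site("B", read_cost=3.0, transfer_time=1.0, workload=0.2, has_replica=False),
    Site("C", read_cost=1.0, transfer_time=0.5, workload=0.1, has_replica=True),
]
print(best_placement(sites).name)  # B: lowest combined score among non-holders
```

Site C is never chosen because it already holds a replica; among the rest, the weighted sum favors the lightly loaded, cheap-to-reach site.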
4

Fang, Kuo-Chi, Husnu S. Narman, Ibrahim Hussein Mwinyi, and Wook-Sung Yoo. "PPHA-Popularity Prediction Based High Data Availability for Multimedia Data Center." International Journal of Interdisciplinary Telecommunications and Networking 11, no. 1 (2019): 17–29. http://dx.doi.org/10.4018/ijitn.2019010102.

Abstract:
Due to the growth of internet-connected devices and extensive data analysis applications in recent years, cloud computing systems are heavily utilized. Because of this high utilization of cloud storage systems, the demand for data center management has increased. Data center management has several crucial requirements, such as increasing data availability, enhancing durability, and decreasing latency. In previous works, a replication technique is mostly used to meet those needs according to consistency requirements. However, most of those works consider full-data, popular-data, and geo-distance-based replication while accounting for storage and replication cost. Moreover, previous popularity-based techniques rely on historical and current data access frequencies for replication. In this article, the authors approach the problem from a distinct angle by developing replication techniques for a multimedia data center management system that can dynamically adapt the servers of a data center by considering popularity prediction in each data access location. They first label data objects from one to ten to track access frequencies of data objects. Then, they use those data access frequencies from each location to predict the future access frequencies of data objects, determine the replication levels and locations for replicating the data objects, and store the related data objects on nearby storage servers. To show the efficiency of the proposed methods, the authors conduct an extensive simulation using real data. The results show that the proposed method has an advantage over previous works and increases data availability by up to 50%. The proposed method and related analysis can assist multimedia service providers in enhancing their service quality.
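The popularity-prediction step can be sketched roughly as follows. The exponential-smoothing predictor and the replica-count thresholds are illustrative assumptions; the article's own predictor and level mapping may differ:

```python
def predict_popularity(history, alpha=0.5):
    """Exponentially smoothed forecast of the next access count
    from a per-location history of access counts."""
    forecast = history[0]
    for count in history[1:]:
        forecast = alpha * count + (1 - alpha) * forecast
    return forecast

def replication_level(history, thresholds=(10, 50, 200)):
    """Map the forecast onto a replica count: hotter objects get more copies."""
    f = predict_popularity(history)
    return 1 + sum(f >= t for t in thresholds)

# An object whose accesses are trending upward earns an extra replica,
# while a cold object keeps the single-copy baseline.
print(replication_level([5, 8, 12, 20]), replication_level([1, 1, 1]))
```

The point of predicting rather than reacting is that replicas are staged near a location before its demand peaks, which is the distinction the abstract draws against purely historical frequency counting.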
5

de Groot, Jasper H. B., Charly Walther, and Rob W. Holland. "A Fresh Look on Old Clothes: Laundry Smell Boosts Second-Hand Store Sales." Brain Sciences 12, no. 11 (2022): 1526. http://dx.doi.org/10.3390/brainsci12111526.

Abstract:
The clothing industry is one of the biggest polluters impacting the environment. Set in a sustainability context, this study addresses whether certain ambient odors can influence the purchase of second-hand clothing. The study fulfilled three aims, increasing methodological, statistical, and theoretical rigor. First, replicating the finding that fresh laundry odor can boost purchasing behavior in a second-hand store, this time in a larger sample, using a fully counterbalanced design, in a pre-registered study. Second, assessing the effectiveness of another cleanliness-priming control condition (citrus odor) unrelated to the products at hand, to test hypotheses from a hedonic vs. utilitarian model. Third, combining questionnaire data tapping into psychological processes with registered sales. The results (316 questionnaires, 6781 registered transactions) showed that fresh laundry odor significantly increased the amount of money spent by customers compared to the no-smell condition (replication) and compared to citrus odor (extension). Arguably, fresh laundry odor boosts the utilitarian value of the product at (second) hand by making it smell like non-used clothing, ultimately causing customers to purchase far greater amounts in this sustainable setting.
6

Puranik, Sunil, Mahesh Barve, Swapnil Rodi, and Rajendra Patrikar. "FPGA-Based High-Throughput Key-Value Store Using Hashing and B-Tree for Securities Trading System." Electronics 12, no. 1 (2022): 183. http://dx.doi.org/10.3390/electronics12010183.

Abstract:
Field-Programmable Gate Array (FPGA) technology is extensively used in finance. This paper describes a high-throughput key-value store (KVS) for securities trading system applications using an FPGA. The design uses a combination of hashing and B-tree techniques and supports the large number of keys (40 million) required by the trading system. We have used a novel technique of buckets with different capacities to reduce the amount of Block RAM (BRAM) and perform high-speed lookups. The design uses high-bandwidth memory (HBM), an on-chip memory available in Virtex UltraScale+ FPGAs, to support the large number of keys. Another feature of this design is the replication of the database and lookup logic to increase overall throughput. By implementing multiple lookup engines in parallel and replicating the database, we could achieve the high throughput (up to 6.32 million search operations/second) specified by our client, a major stock exchange. The design has been implemented with a combination of Verilog and high-level synthesis (HLS) flow to reduce the implementation time.
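The hash-buckets-plus-overflow idea can be modeled in software. This Python sketch is only an analogy for the paper's FPGA design: the sorted-array fallback stands in for the B-tree path, and the bucket counts and capacities are made up:

```python
import bisect

class BucketKVS:
    """Fixed-capacity hash buckets with a sorted-array fallback that
    stands in for a B-tree overflow path."""
    def __init__(self, n_buckets=8, capacity=2):
        self.capacity = capacity
        self.buckets = [dict() for _ in range(n_buckets)]
        self.overflow_keys, self.overflow_vals = [], []  # kept sorted

    def put(self, key, value):
        b = self.buckets[hash(key) % len(self.buckets)]
        if key in b or len(b) < self.capacity:
            b[key] = value
        else:  # bucket full: route to the tree-like overflow store
            i = bisect.bisect_left(self.overflow_keys, key)
            if i < len(self.overflow_keys) and self.overflow_keys[i] == key:
                self.overflow_vals[i] = value
            else:
                self.overflow_keys.insert(i, key)
                self.overflow_vals.insert(i, value)

    def get(self, key):
        b = self.buckets[hash(key) % len(self.buckets)]
        if key in b:
            return b[key]
        i = bisect.bisect_left(self.overflow_keys, key)
        if i < len(self.overflow_keys) and self.overflow_keys[i] == key:
            return self.overflow_vals[i]
        return None
```

The design intuition mirrors the abstract: most keys hit a small, fast bucket (cheap BRAM on the FPGA), and only the overflow fraction pays for the slower, larger structure.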
7

Rambabu, D. "Enhancing replica management in a cloud environment using data mining based dynamic replication algorithm." i-manager’s Journal on Cloud Computing 9, no. 1 (2022): 17. http://dx.doi.org/10.26634/jcc.9.1.18566.

Abstract:
Cloud computing has recently gained a lot of popularity and attention from the research community. One of the many on-demand services that large-scale applications provide to cloud customers is storage; the data these applications generate keeps accumulating and subsequently drives the need for more storage. Although users can rely on the cloud to store data in the form they desire, storing and retrieving data still takes a significant amount of time because of the large accumulation of data. The need to improve data availability, response time, reliability, and migration costs means the current storage must be replicated across multiple sites. When copies are properly distributed, data replication speeds up execution. The biggest challenges in data replication are choosing which data to replicate, where to put it, how to manage replication, and how many replicas are needed. Therefore, various studies have been carried out on data mining-based data replication systems to evaluate replication issues and manage cloud storage. In most cases, data replication in such an environment combines data mining with a replication algorithm and a data grid policy. In addition, this paper addresses replica management issues and proposes affordable data replication in the cloud that satisfies all Quality of Service (QoS) requirements.
8

Alshammari, Mohammad M., Ali A. Alwan, Azlin Nordin, and Abedallah Zaid Abualkishik. "Data Backup and Recovery With a Minimum Replica Plan in a Multi-Cloud Environment." International Journal of Grid and High Performance Computing 12, no. 2 (2020): 102–20. http://dx.doi.org/10.4018/ijghpc.2020040106.

Abstract:
Cloud computing has become a desirable choice to store and share large amounts of data among several users. The two main concerns with cloud storage are data recovery and the cost of storage. This article discusses the issue of data recovery in case of a disaster in a multi-cloud environment. This research proposes a preventive approach for data backup and recovery aiming at minimizing the number of replicas and ensuring high data reliability during disasters. The approach, named Preventive Disaster Recovery Plan with Minimum Replica (PDRPMR), aims at reducing the number of replicas in the cloud without compromising data reliability: it preventively checks the availability of replicas and monitors denial-of-service attacks to maintain data reliability. Several experiments were conducted to evaluate the effectiveness of PDRPMR, and the results demonstrated that it uses only one-third to two-thirds of the storage space required by typical 3-replica replication strategies.
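A minimal way to reason about a "minimum replica plan" is to pick the fewest replicas that still meet a reliability target. The survival probabilities and target below are hypothetical illustrations, not figures from the article:

```python
def min_replicas(p_survive, target=0.999, max_replicas=10):
    """Smallest replica count n such that the probability that at least
    one copy survives a disaster, 1 - (1 - p_survive)**n, meets the target."""
    for n in range(1, max_replicas + 1):
        if 1 - (1 - p_survive) ** n >= target:
            return n
    return max_replicas

# With 90%-reliable sites, three copies are needed to reach 99.9%;
# with 99%-reliable sites, two suffice - which is why trimming the usual
# 3-replica default is safe only when per-site reliability is high.
print(min_replicas(0.9))    # 3
print(min_replicas(0.99))   # 2
```

This independence assumption is the simplest possible model; a real plan like PDRPMR also has to account for correlated failures and attack monitoring, as the abstract notes.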
9

Kim, Gyuyeong, and Wonjun Lee. "In-network leaderless replication for distributed data stores." Proceedings of the VLDB Endowment 15, no. 7 (2022): 1337–49. http://dx.doi.org/10.14778/3523210.3523213.

Abstract:
Leaderless replication allows any replica to handle any type of request to achieve read scalability and high availability for distributed data stores. However, this entails burdensome coordination overhead in replication protocols, degrading write throughput. In addition, the data store still requires coordination for membership changes, making it hard to resolve server failures quickly. To this end, we present NetLR, a replicated data store architecture that supports high performance, fault tolerance, and linearizability simultaneously. The key idea of NetLR is moving the entire replication function into the network by leveraging the switch as an on-path in-network replication orchestrator. Specifically, NetLR performs consistency-aware read scheduling, high-performance write coordination, and active fault adaptation in the network switch. Our in-network replication eliminates inter-replica coordination for writes and membership changes, providing high write performance and fast failure handling. NetLR can be implemented using programmable switches at line rate with only 5.68% additional memory usage. We implement a prototype of NetLR on an Intel Tofino switch and conduct extensive testbed experiments. Our evaluation results show that NetLR is the only solution that achieves high throughput and low latency and is robust to server failures.
10

Heryanto, Ahmad, and Albert Albert. "Implementasi Sistem Database Terdistribusi Dengan Metode Multi-Master Database Replication." JURNAL MEDIA INFORMATIKA BUDIDARMA 3, no. 1 (2019): 30. http://dx.doi.org/10.30865/mib.v3i1.1098.

Abstract:
Databases are the main means by which computer applications store, process, and modify data. One important problem faced with databases is the availability of adequate information technology infrastructure for managing and securing the data they contain. Data stored in a database must be protected against threats and disturbances, which can result from a variety of causes, such as maintenance, data damage, and natural disasters. To guard against data loss and damage, the database system needs to be replicated. The replication mechanism used by the researchers is multi-master replication. The replication technique is able to form a database cluster with a replication time of less than 0.2 seconds.

Dissertations / Theses on the topic "Replication stre"

1

ROSSI, SILVIA EMMA. "INTERPLAY BETWEEN THE DNA HELICASES PIF1 AND RRM3, THE NUCLEASE DNA2 AND THE CHECKPOINT PATHWAYS IN THE MAINTENANCE OF THE DNA REPLICATION FORK INTEGRITY." Doctoral thesis, Università degli Studi di Milano, 2017. http://hdl.handle.net/2434/471797.

Abstract:
Eukaryotic cells have evolved the ATR/hCHK1 and MEC1/RAD53 kinase-mediated signal transduction pathway, known as the replication checkpoint, to protect and stabilize stalled replication forks in human cells and budding yeast, respectively. rad53 mutants exposed to high doses of the DNA replication inhibitor hydroxyurea (HU) accumulate hemireplicated, gapped, and reversed forks, while treatments with low HU doses induce massive chromosome fragmentation. The aim of my work was to better understand the molecular mechanisms through which Rad53 prevents unusual alterations of the architecture of stalled replication forks and chromosome fragility under replication stress. We revealed that Rrm3 and Pif1, DNA helicases assisting fork progression across pausing sites in unperturbed conditions, are detrimental in rad53 mutants experiencing HU-induced replication stress. Rrm3 and Pif1 ablation synergistically rescues cell lethality, chromosome fragmentation, replisome dissociation, fork reversal, and ssDNA gap formation at the forks of rad53 cells exposed to replication stress. We provide evidence that Pif1 and Rrm3 associate with stalled DNA replication forks and are regulated through Rad53-mediated phosphorylation. Our findings uncover a new replication-stress-induced regulatory loop in which Rad53 downregulates the Pif1-family DNA helicases at stalled replication forks. In the second part of this thesis we examined the crosstalk between Rrm3, Pif1, the DNA damage checkpoint mediator Rad9, and the nuclease Dna2 during unperturbed DNA replication. The experimental evidence collected in this second part of the project, together with pioneering work previously reported by other laboratories, strongly suggests that Dna2, Pif1, and Rrm3 cooperate to finalize late stages of DNA replication.
2

GNOCCHI, ANDREA. "UNDERSTANDING THE IMPACT OF REPLICATION STRESS ON THE EXPRESSION OF EARLY GENES IN MOUSE EMBRYONIC STEM CELLS." Doctoral thesis, Università degli Studi di Milano, 2021. http://hdl.handle.net/2434/814703.

Abstract:
Embryonic stem cells (ESCs) are characterized by a rapid cell cycle, which leads to high replication stress (RS) in otherwise unperturbed conditions. The mechanisms that ESCs adopt to cope with their endogenous RS, however, remain elusive to this day. In our recent work we demonstrated that the activation of the checkpoint kinase ATR in response to RS leads to a broad activation of 2-cell-stage-specific genes in mouse ESCs. This response relies on the up-regulation of Dux, a transcription factor encoded in a macrosatellite sequence repeated in tandem. Dux is repressed by the variant Polycomb repressive complex 1 (vPRC1) in unperturbed ESCs, independently of PRC2. Here we demonstrate that RS causes a major rearrangement of both PRC1 and PRC2 in ESC nuclei, resulting in a major loss of both repressive marks at target promoters. Surprisingly, Dux undergoes an increase in vPRC1 occupancy upon RS in an ATR-dependent manner, possibly due to PRC1 involvement in the replication of highly repeated DNA sequences. More interestingly, Dux activation upon RS requires the presence of PRC2. This is possibly due to PRC2's proven role in the processing of stalled replication forks, which are the main structure signaling RS. In agreement with these data, the fork-remodeling translocases HLTF and ZRANB3 also affected Dux activation following RS. Taken together, our results show that the up-regulation of 2-cell genes following RS requires not only ATR activation but also downstream remodeling processes.
3

Sousa, Valter Balegas de. "Key-CRDT stores." Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/7802.

Abstract:
Dissertation for the Master's degree in Informatics Engineering.

The Internet has opened opportunities to create world-scale services. These systems require high availability and fault tolerance, while preserving low latency. Replication is a widely adopted technique to provide these properties. Different replication techniques have been proposed through the years, but to support these properties for world-scale services it is necessary to trade consistency for availability, fault tolerance, and low latency. In weak consistency models, it is necessary to deal with possible conflicts arising from concurrent updates. We propose the use of conflict-free replicated data types (CRDTs) to address this issue. Cloud computing systems support world-scale services, often relying on key-value stores for storing data. These systems partition and replicate data over multiple nodes that can be geographically dispersed over the network. For handling conflicts, these systems either rely on solutions that lose updates (e.g., last-write-wins) or require the application to handle concurrent updates. Additionally, these systems provide little support for transactions, a widely used abstraction for data access. In this dissertation, we present the design and implementation of SwiftCloud, a Key-CRDT store that extends a key-value store by incorporating CRDTs in the system's data model. The system provides automatic conflict resolution relying on the properties of CRDTs. We also present a version of SwiftCloud that supports transactions. Unlike traditional transactional systems, transactions never abort due to write/write conflicts, as the system leverages CRDT properties to merge concurrent transactions. In implementing SwiftCloud, we have introduced a set of new techniques, including versioned CRDTs, composition of CRDTs, and alternative serialization methods.
The evaluation of the system, with both micro-benchmarks and the TPC-W benchmark, shows that SwiftCloud imposes little overhead over a key-value store. Allowing clients to access a data center close to them with SwiftCloud can reduce latency without requiring any complex reconciliation mechanism. The experience of using SwiftCloud has shown that adapting an existing application to use SwiftCloud requires little effort.

Funding: Project PTDC/EIA-EIA/108963/2008.
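The conflict-free merge property that Key-CRDT stores rely on can be shown with the simplest CRDT, a grow-only counter. This is a generic textbook sketch, not SwiftCloud's implementation:

```python
class GCounter:
    """Grow-only counter CRDT: per-replica counts merged by element-wise max."""
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        """Commutative, associative, idempotent join: no conflicts, no aborts."""
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

a, b = GCounter("a"), GCounter("b")
a.increment(3); b.increment(2)
a.merge(b); b.merge(a)        # replicas converge regardless of merge order
print(a.value(), b.value())   # both read 5
```

Because merge is a join on per-replica maxima, concurrent updates never need coordination or rollback, which is exactly the property the dissertation exploits to make transactions abort-free under write/write conflicts.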
4

Rodier, Denise N. "Degenerate Oligonucleotide Primed-PCR: Thermalcycling Modifications and Comparison Studies." VCU Scholars Compass, 2006. http://scholarscompass.vcu.edu/etd/1496.

Abstract:
Degenerate Oligonucleotide Primed-PCR (DOP-PCR) can potentially enhance analysis of low copy number DNA samples. Theoretically, this procedure replicates fragments of the genome that can then be used for downstream multiplex STR analysis. The objective of this study is to optimize DOP-PCR by examining ramp/elongation times and cycle numbers in the non-specific amplification portion of DOP-PCR, and by modifying the degenerate primer. Additionally, other methods such as Multiple Displacement Amplification (MDA) and Low Copy Number PCR (LCN PCR) were examined for their ability to create accurate DNA profiles from low DNA input amounts. Increasing the ramp/elongation times showed no effect on downstream STR amplification success. An increase in cycle number increased DNA yield, but STR amplification success was undetermined. Although modifying the degenerate primer to one with a higher degeneracy decreased DNA yield, it ultimately improved STR amplification success. In comparison studies, LCN PCR produced higher STR amplification success than MDA.
5

BALZANO, ELISA. "Common fragile sites: a new tool to study chromosome instability diseases." Doctoral thesis, 2022. http://hdl.handle.net/11573/1633607.

Abstract:
Replication stress is a major cause of Chromosomal Instability (CIN), which manifests as chromosome rearrangements, gaps, and breaks, including those cytologically expressed within specific chromosome regions named Common Fragile Sites (CFSs). The molecular mechanisms of CFS instability have not been completely elucidated yet. In the first part of my work, I characterized the expression and the replication timing of human CFSs upon treatment with aphidicolin (APH), a DNA polymerase α inhibitor, in three cell lines: the Glioblastoma Multiforme U-251 MG cell line and two isogenic Fanconi Anemia lymphoblastoid lines (the mutated HSC72 FA-A and the corrected HSC72 FANCA). GBM and FA cell lines are both associated with high physiological levels of CIN and thus are good genetic models for understanding the causes underlying CFS instability. Glioblastoma Multiforme (GBM) is a tumor of the Central Nervous System (CNS), and Fanconi Anemia (FA) is a rare multigenic disorder caused by mutations in FA DNA repair genes. I identified CFSs that showed a frequency equal to at least 1% of the total gaps/breaks: 17 CFSs in GBM, 16 in HSC72 FA-A, and 19 in HSC72 FANCA. Only a few of them were found to be cell-type-specific. In the last part of my work, CFSs induced by 4′,6-diamidino-2-phenylindole hydrochloride (DAPI), a DNA dye binding to AT-rich sequences and acting as an under-condensing agent in G2 phase, were analyzed in a pathological background, FA cells (which are characterized by a prolonged G2 phase upon DNA damage), to understand how post-replicative chromatin compaction is essential to their integrity. The presence of long genes, incomplete replication, improper chromatin condensation, and DNA synthesis during mitosis (MiDAS) after APH and DAPI treatment suggest that an impaired replication process and defective chromatin compaction may contribute to the locus-specific fragility in U-251 MG cells and in both HSC72 FA lymphoblastoid cell lines.
Altogether, my work offers a comprehensive characterization of the CFSs expressed in GBM and FA cells, which may be further exploited in cytogenetic and clinical studies to advance our understanding of these genomic and genetic disorders.
6

Chen, Yan-shi (陳元熙). "The Relationship of Organization Routine, Knowledge transfer and Replication: The Case of Chain Store." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/57442495162727899850.

Abstract:
Master's thesis, National Dong Hwa University, Department of Business Administration, academic year 97. There are many chain stores along streets and in popular business areas, for example 7-Eleven, Watsons, Pizza Hut, and Mentor hairdressing, ranging from retailing to hairdressing. These chain stores are everywhere in our lives. After closer observation of these chain-store businesses, however, we noticed that not all of them perform effectively and some even fail to survive. To investigate what factors affect the outcome of replication, this research builds a framework, uses statistical methods to analyze survey data, and conducts a case study to illustrate the model. Of 500 survey questionnaires distributed alongside store interviews, 171 valid questionnaires were collected. The statistical results indicate that: (1) organizational routines positively affect an organization's replication; (2) knowledge transfer partially and negatively affects an organization's replication. However, the hypothesis that complexity would affect an organization's replication is not supported. We also found that an appropriate level of complexity in some industries may positively affect an organization's replication. This differs from studies arguing that low complexity enhances replication, a difference that invites further research.
7

"Identification of host factors in swine respiratory epithelial cells that contribute to host anti-viral defense and influenza virus replication." Thesis, 2016. http://hdl.handle.net/10388/ETD-2016-02-2444.

Abstract:
Swine influenza viruses (SIV) are a common and an important cause of respiratory disease in pigs. Pigs can serve as mixing vessels for the evolution of reassortment viruses containing both avian and human signatures, which have the potential to cause pandemics. NS1 protein of influenza A viruses is a major antagonist of host defence and it regulates multiple functions during infection by interacting with a variety of host proteins. Therefore, it is important to study swine viruses and NS1-interacting host factors in order to understand the mechanisms by which NS1 regulates virus replication and exerts its host defense functions. Influenza A viruses enter the host through the respiratory tract and infect epithelial cells in the respiratory tract, which form the primary sites of virus replication in the host. Thus, studying SIV infection in primary swine respiratory epithelial cells (SRECs) would resemble conditions similar to natural infection. The objectives of this study were to identify NS1-interacting host factors in the virus-infected SRECs and to understand the physiological role of at least one of the factors in influenza virus infection. The approaches to meet this objective were to generate a recombinant SIV carrying a Strep-tag in the NS1 protein, infect SRECs with the Strep-tag virus, purify NS1-interacting host protein complex from the infected cells by pull-down using strep-tactin resin and then study the physiological role of one of the NS1-interacting partners during influenza infection. Using a reverse-genetics strategy, a recombinant virus carrying the Strep-tag NS1 was successfully rescued and the SRECs were infected with this recombinant virus. The Strep-tag in the NS1 protein facilitated the isolation of an intact NS1-interacting protein complex and the proteins present in the complex were identified by liquid chromatography-tandem mass spectrometry. The identified proteins were grouped to enrich for different functions using bioinformatics. 
This gave an insight into the different functions that NS1 may regulate during infection and the potential host partners involved in these functions. Among the host proteins identified as potential interaction partners, RNA helicases were particularly of interest to study. Influenza being an RNA virus, RNA helicases could have important functions in the virus life cycle. Among the identified RNA helicases, DDX3 has been shown to regulate IFNβ induction and affect the life cycle of a number of viruses. However, its function in influenza A virus life cycle has not been studied. Hence, this study explored whether DDX3 has any role in the influenza A virus life cycle. Immunoprecipitation studies revealed viral proteins NP and NS1 as direct interaction partners with DDX3. DDX3 is a known component of stress granules (SGs) and influenza A virus lacking the NS1 gene is reported to induce SG formation. Therefore, the role of DDX3 in SG formation, induced by PR8 influenza A virus lacking NS1 (PR8 del NS1) was explored. The results from this study showed that DDX3 co-localized with NP in SGs indicating that DDX3 may interact with NP in the SGs. NS1 protein was found to inhibit virus-induced SGs and DDX3 downregulation impaired virus-induced SG formation. The contribution of the different domains of DDX3 to viral protein interaction and virus-induced SG formation was also studied. While DDX3 helicase domain did not interact with NS1 and NP, it was essential for DDX3 localization in virus induced SGs. Moreover, DDX3 downregulation resulted in the increased replication of PR8 del NS1virus, accompanied by an impairment of SG induction in infected cells. Since DDX3 is reported to regulate IFNβ induction, the role of DDX3 in influenza A virus induced IFNβ induction was also examined. 
Using small molecule inhibitors and siRNA-mediated gene knockdown, the RIG-I pathway was identified as the major contributor to influenza-induced IFNβ induction in newborn porcine tracheal epithelial (NPTr) cells. DDX3 downregulation and overexpression also showed that DDX3 has an inhibitory effect on IFNβ expression induced by both influenza infection and low molecular weight (LMW) poly I:C treatment, which is also a RIG-I ligand. An RNA competition assay to identify the mechanism of DDX3-mediated inhibition showed that RIG-I's binding affinity for its ligands LMW poly I:C and influenza viral RNA (vRNA) is much higher than that of DDX3. Furthermore, DDX3 downregulation enhanced titers of the PR8 del NS1 virus, while it did not affect the titers of the wild-type PR8 and SIV/SK strains. Overall, the results show that DDX3 has an antiviral role and that the SG regulatory function of DDX3 has a more profound effect on virus replication than the IFNβ regulatory function.
APA, Harvard, Vancouver, ISO, and other styles
8

Naščáková, Zuzana. "Genomová nestabilita spojená se vznikem RNA:DNA hybridů a mechanismy jejího potlačení." Doctoral thesis, 2021. http://www.nusl.cz/ntk/nusl-437804.

Full text
Abstract:
One of the most common infections of the human organism is an infection of the stomach caused by the pathogenic bacterium Helicobacter pylori (H. pylori). It is estimated that every second person is infected, with even higher prevalence in developing countries. As a quiet enemy, H. pylori can colonise the human stomach for decades without manifestation of infection-associated symptoms. However, chronic infection may cause severe damage to the stomach tissue, subsequently leading to the development of gastric diseases, including gastritis and ulcer disease. H. pylori infection is also a driving cause of gastric cancer, with 80% of gastric cancers being associated with chronic infection. H. pylori ensures its life-long persistence in the human host via the action of its virulence factors, which have a pleiotropic effect on multiple systems, mostly acting to attenuate the human immune system and to induce atrophy of the stomach tissue. The irreversible changes of the stomach epithelium are induced by activation of an innate immune response in H. pylori-exposed epithelial cells through stimulation of the ALPK1/TIFA/NF-κB signalling pathway upon recognition of β-ADP heptose, an intermediate product of bacterial lipopolysaccharide biosynthesis, consequently leading to the formation of DNA...
APA, Harvard, Vancouver, ISO, and other styles
9

Fryzelková, Jana. "Úloha helikázy RECQ5 při stabilizaci a opravě replikačních vidlic po jejich kolizi s transkripčním komplexem." Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-355971.

Full text
Abstract:
The progression of replication forks can be slowed down or paused by various external and internal factors during DNA replication. This phenomenon is referred to as replication stress and contributes substantially to genomic instability, a hallmark of cancer. The transcription complex belongs to the internal replication-interfering factors and represents a barrier to progression of the replication complex. Replication forks are slowed down or paused while passing through transcriptionally active regions of the genome, which can lead to subsequent collapse of stalled forks and formation of DNA double-strand breaks, especially under conditions of increased replication stress. The DNA helicase RECQ5 is significantly involved in the maintenance of genomic stability during replication stress, but the mechanisms of its action are not clear. In this diploma thesis, we have shown that RECQ5 helicase, in collaboration with the BRCA1 protein, participates in the resolution of collisions between replication and transcription complexes. The BRCA1 protein is a key factor in homologous recombination, which is essential for the restart of stalled replication forks. Furthermore, we have shown that RECQ5 helicase is involved in the ubiquitination of PCNA at stalled replication forks. Key words DNA...
APA, Harvard, Vancouver, ISO, and other styles
10

Rocha, Luís Miguel Dias. "Ginger: A Transactional Middleware with Data and Operation Centric Mixed Consistency." Master's thesis, 2020. http://hdl.handle.net/10362/119342.

Full text
Abstract:
Many modern digital services need to offer high availability and low response times to meet user demand. To that end, many digital services resort to geo-replicated distributed systems. These systems are deployed closer to users, splitting latency across multiple servers and allowing for faster access and communication. However, to accommodate these systems, the data stores are also split up across multiple locations. Committing an operation in such systems requires coordination among the multiple replicas. These systems must allow data to be stored as fast as possible without breaking the safety constraints of the developers' systems. There are three main approaches to defining the level of consistency to be guaranteed when accessing data: over data, over operations, or over transactions. The problem with approaches such as consistency over data or consistency over transactions is that they are very limited, as they can result in operations that could be executed at lower consistency levels being executed at higher consistency levels. Our approach to this problem is to reconcile the execution of transactions with consistency expressed over both data and operations. We instantiate this proposition in a middleware system, called Ginger, that is deployed between the user and the data stores. Ginger combines the benefits of the other approaches, allowing the execution of transactions that include operations with different levels of consistency over data with different levels of consistency. This provides the isolation benefits of transactions while also providing the performance and control that consistency defined over operations and consistency defined over data provide. Our experimental results show that Ginger, compared to the previously mentioned approaches, such as consistency over data and consistency over transactions, provides faster transaction commit speeds.
Ginger serves as proof of concept that using consistency defined over both data and operations while using transactions is possible and may be a viable approach. Further development of the system will provide more functionality, further evaluation, and a more in-depth comparison to other systems.
APA, Harvard, Vancouver, ISO, and other styles
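The abstract above describes expressing consistency over both data and operations within one transaction. A minimal sketch of that idea (all names here are hypothetical illustrations, not Ginger's actual API): each operation runs at the strongest level demanded by either the operation itself or the data item it touches.

```python
from enum import Enum

class Consistency(Enum):
    EVENTUAL = 1
    CAUSAL = 2
    STRONG = 3

def effective_level(op_level: Consistency, data_level: Consistency) -> Consistency:
    """An operation must satisfy the stronger of the two annotations."""
    return max(op_level, data_level, key=lambda c: c.value)

class Transaction:
    def __init__(self):
        self.ops = []  # list of (key, operation-level annotation)

    def add(self, key: str, op_level: Consistency) -> None:
        self.ops.append((key, op_level))

    def plan(self, data_levels: dict) -> list:
        # Decide, per operation, the level at which it must execute,
        # combining the operation annotation with the data annotation.
        return [
            (key, effective_level(lvl, data_levels.get(key, Consistency.EVENTUAL)))
            for key, lvl in self.ops
        ]

tx = Transaction()
tx.add("cart", Consistency.EVENTUAL)
tx.add("balance", Consistency.EVENTUAL)
plan = tx.plan({"balance": Consistency.STRONG})
# "cart" stays eventual; "balance" is promoted to strong by its data annotation.
```

This illustrates the abstract's point that per-operation levels avoid forcing every operation in a transaction up to the strictest level, as consistency-over-transactions schemes would.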

Books on the topic "Replication stre"

1

Redbooks, IBM. Building the Operational Data Store on DB2 Udb Using IBM Data Replication, Websphere Mq Family, and DB2 Warehouse Manager. Ibm, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Replication stre"

1

Wang, Donghui, Peng Cai, Weining Qian, Aoying Zhou, Tianze Pang, and Jing Jiang. "Fast Log Replication in Highly Available Data Store." In Web and Big Data. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63564-4_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Junhu, Dongqing Yang, and Shiwei Tang. "ACB-R: An Adaptive Clustering-Based Data Replication Algorithm on a P2P Data-Store." In Advances in Computer Science – ASIAN 2005. Data Management on the Web. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11596370_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Poongodi, C., and A. M. Natarajan. "Buffer Managed Multiple Replication Strategy Using Knapsack Policy for Intermittently Connected Mobile Networks." In Advances in Wireless Technologies and Telecommunication. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4715-2.ch001.

Full text
Abstract:
Intermittently Connected Mobile Networks (ICMNs) are a kind of wireless network where, due to the mobility of nodes and lack of connectivity, there may be disconnections among the nodes for a long time. To deal with such networks, a store-carry-forward method is adopted for routing. This method buffers messages in each node for a long time until a forwarding opportunity arises. Multiple replicas are made of each message, which results in increased network overhead and high resource consumption because of uncontrolled replication. Uncontrolled replication occurs due to a lack of global knowledge about the messages and the forwarding nodes. The authors introduce a new, simple scheme that applies a knapsack-policy-based replication strategy when replicating the messages residing in a node's buffer. The number of replications is controlled by appropriately selecting messages based on the total count of replications already made and the message size. In addition, messages are selected for forwarding based on the relay node's goodness in contacting the destination and the remaining buffer size of that relay node. Therefore, useful replications are made based on the dynamic environment of the network, which reduces network overhead, resource consumption, and delivery delay, and in turn increases the delivery ratio.
APA, Harvard, Vancouver, ISO, and other styles
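The knapsack selection described in the abstract can be sketched as a standard 0/1 knapsack over the messages in a node's buffer. This is a simplified stand-in, not the chapter's exact policy: the value function below favours messages with few existing replicas, and the capacity stands for the relay node's remaining buffer size.

```python
def select_for_replication(messages, capacity):
    """0/1 knapsack over candidate messages.

    messages: list of (msg_id, size, copies_already_made).
    capacity: relay node's remaining buffer size.
    Returns the set of msg_ids chosen for replication.
    """
    # dp[c] = (best total value, chosen ids) achievable within capacity c
    dp = [(0.0, frozenset())] * (capacity + 1)
    for msg_id, size, copies in messages:
        value = 1.0 / (1 + copies)  # fewer existing copies -> higher value
        # Iterate capacities downward so each message is used at most once.
        for c in range(capacity, size - 1, -1):
            cand_val = dp[c - size][0] + value
            if cand_val > dp[c][0]:
                dp[c] = (cand_val, dp[c - size][1] | {msg_id})
    return dp[capacity][1]

# m2 has already been replicated five times, so it is worth little;
# m1 and m2 together fit the buffer and maximise total value.
chosen = select_for_replication(
    [("m1", 4, 0), ("m2", 3, 5), ("m3", 5, 1)], capacity=8)
```

The chapter additionally weighs relay-node "goodness" toward the destination; that factor could be folded into the same value function.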
4

Alshammari, Mohammad M., Ali A. Alwan, Azlin Nordin, and Abedallah Zaid Abualkishik. "Data Backup and Recovery With a Minimum Replica Plan in a Multi-Cloud Environment." In Research Anthology on Privatizing and Securing Data. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-8954-0.ch036.

Full text
Abstract:
Cloud computing has become a desirable choice to store and share large amounts of data among several users. The two main concerns with cloud storage are data recovery and cost of storage. This article discusses the issue of data recovery in case of a disaster in a multi-cloud environment. This research proposes a preventive approach for data backup and recovery aimed at minimizing the number of replicas while ensuring high data reliability during disasters. This approach, named Preventive Disaster Recovery Plan with Minimum Replica (PDRPMR), aims to reduce the number of replications in the cloud without compromising data reliability. PDRPMR relies on preventive checking of the availability of replicas and monitoring for denial-of-service attacks to maintain data reliability. Several experiments were conducted to evaluate the effectiveness of PDRPMR, and the results demonstrated that the storage space used was one-third to two-thirds of that required by typical 3-replica replication strategies.
APA, Harvard, Vancouver, ISO, and other styles
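The preventive check described in the abstract can be sketched as follows (a hypothetical simplification, not the paper's algorithm): periodically count the replicas of each item that are still reachable and create new copies only when the count falls below a chosen minimum, rather than always keeping a fixed three.

```python
def replicas_needed(alive_replicas: int, min_replicas: int = 2) -> int:
    """How many new copies must be created so the item keeps at least
    min_replicas available. Returns 0 when no action is needed."""
    return max(0, min_replicas - alive_replicas)

def preventive_scan(replica_map: dict, min_replicas: int = 2) -> dict:
    """replica_map: item -> number of currently reachable replicas.
    Returns only the items that need re-replication, with the deficit."""
    return {
        item: deficit
        for item, alive in replica_map.items()
        if (deficit := replicas_needed(alive, min_replicas)) > 0
    }

# One cloud hosting a copy of "invoices" is unreachable; only that item
# triggers action, so storage stays below a blanket 3-replica plan.
actions = preventive_scan({"invoices": 1, "logs": 2, "photos": 3})
```

With a minimum of 2 instead of a fixed 3, steady-state storage is roughly two-thirds of the 3-replica baseline, consistent with the savings range the abstract reports.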
5

Maynard Smith, John, and Eors Szathmary. "The chicken and egg problem." In The Major Transitions in Evolution. Oxford University Press, 1997. http://dx.doi.org/10.1093/oso/9780198502944.003.0009.

Full text
Abstract:
The most fundamental distinction in biology is between nucleic acids, with their role as carriers of information, and proteins, which generate the phenotype. In existing organisms, nucleic acids and proteins mutually presume one another. The former, owing to their template activity, store the heritable information; the latter, by enzymatic activity, read and express this information. It seems that neither can function without the other. Which came first, nucleic acids or proteins? There are three possible answers: (1) nucleic acids; (2) proteins; (3) neither: they coevolved. In this chapter, we discuss various possible answers to this 'chicken or egg?' problem. In section 5.2, we discuss what seems to us the most likely answer, that at first RNA performed both functions, as replicator and enzyme. In section 5.3, we consider an alternative view, in which protein enzymes existed either before, or alongside, the first nucleic acids. In section 5.4, we ask whether, perhaps, the first replicators were not nucleic acids. Finally, in section 5.5, we ask why, given that the genetic message is carried by nucleic acids, there are only four nucleotides and two base pairs. So far, we have tacitly assumed nucleic acids preceded proteins, without stating the main reason. Nucleic acids came first because they can perform both functions: they are replicable, and they can have enzymatic activity. For many years, a common opinion was that to be replicable almost amounted to self-replicative ability, but that it was far-fetched to assume enzymatic activity. Today, there is increasing evidence that RNA can act as an enzyme, but we are more aware of the difficulty of self-replication. It should have been expected on theoretical grounds that RNA could act as an enzyme: the possibility was discussed by Woese (1967), Crick (1968) and Orgel (1968). Consider first why proteins can act as enzymes.
An enzyme has a well-determined three-dimensional structure of chemical groups that, in most cases, arises automatically from the primary structure. Substrates of the enzyme are bound by the chemical groups on the surface. This means that the reactants will be kept in close proximity, and hence experience a much higher local concentration of each other than in solution. This by itself increases the rate of the reaction.
APA, Harvard, Vancouver, ISO, and other styles
6

Ball, Philip. "7. The chemical computer: molecular information." In Molecules: A Very Short Introduction. Oxford University Press, 2003. http://dx.doi.org/10.1093/actrade/9780192854308.003.0007.

Full text
Abstract:
‘The chemical computer: molecular information’ outlines the ways that molecules can store and transmit information. Genetics is living proof that complex information can be encoded through systems using molecular recognition. Genetic systems have a vast array of copying, proof-reading and editing tools available to them to prevent errors when replicating, transcribing and translating data (although occasional errors, mutations, are essential for evolutionary progress). These tools can be commandeered by scientists to manipulate the genome. Moore's law states that computer power will double every two years. New technologies, such as genetic and molecular computers, are needed to ensure this law holds true.
APA, Harvard, Vancouver, ISO, and other styles
7

Mavrogeorgi, Nikoletta, Spyridon V. Gogouvitis, Athanasios Voulodimos, and Vasilios Alexandrou. "SLA Management in Storage Clouds." In Data Intensive Storage Services for Cloud Environments. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-3934-8.ch006.

Full text
Abstract:
The need for online storage and backup of data constantly increases. Many domains, such as media, enterprises, healthcare, and telecommunications, need to store large amounts of data and access them rapidly at any time and from any geographic location. Storage Cloud environments satisfy these requirements and can therefore provide an adequate solution for these needs. Customers of Cloud environments do not need to own any hardware for storing their data or to handle management tasks, such as backups, replication levels, etc. In order for customers to be willing to move their data to Cloud solutions, proper Service Level Agreements (SLAs) should be offered and guaranteed. An SLA is a contract between the customer and the service provider, in which the terms and conditions of the offered service are agreed upon. In this chapter, the authors present existing SLA schemas and SLA management mechanisms and compare various features that Cloud providers support with existing SLAs. Finally, they address the problem of managing SLAs in cloud computing environments by exploiting the content term that concerns the stored objects, in order to provide more efficient capabilities to the customer.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Replication stre"

1

Li, Yuzhe, Jiang Zhou, Weiping Wang, and Yong Chen. "RE-Store: Reliable and Efficient KV-Store with Erasure Coding and Replication." In 2019 IEEE International Conference on Cluster Computing (CLUSTER). IEEE, 2019. http://dx.doi.org/10.1109/cluster.2019.8891013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kumalakov, Bolatzhan, and Timur Bakibayev. "Distributed Data Store Architecture Towards Colonial Data Replication." In 2017 IEEE 11th International Conference on Application of Information and Communication Technologies (AICT). IEEE, 2017. http://dx.doi.org/10.1109/icaict.2017.8686925.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fazul, Rhauani W., and Patricia Pitthan Barcelos. "Efetividade da Política de Posicionamento de Blocos no Balanceamento de Réplicas do HDFS." In XX Workshop de Testes e Tolerância a Falhas. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/wtf.2019.7716.

Full text
Abstract:
The Hadoop Distributed File System (HDFS) is designed to store and transfer data at large scale. To ensure availability and reliability, it uses data replication as a fault-tolerance mechanism. However, this strategy can significantly affect replica balancing in the cluster. This paper provides an analysis of the default data replication policy used by HDFS and measures its impact on system behavior, while presenting different strategies for cluster balancing and rebalancing. In order to highlight the requirements for efficient replica placement, a comparative study of HDFS performance was conducted considering a variety of factors that may result in cluster imbalance.
APA, Harvard, Vancouver, ISO, and other styles
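The balance criterion at issue in this paper can be sketched as follows (a simplified model: the HDFS Balancer compares each DataNode's utilization to the cluster-wide average within a configurable threshold; node names and numbers below are invented for illustration):

```python
def imbalance_report(node_used: dict, node_capacity: dict, threshold: float = 0.10) -> dict:
    """Flag DataNodes whose utilization deviates from the cluster average
    by more than `threshold` (fraction of capacity).

    node_used / node_capacity: node name -> bytes used / total bytes.
    Returns node -> "over" or "under" for unbalanced nodes only.
    """
    cluster_avg = sum(node_used.values()) / sum(node_capacity.values())
    report = {}
    for node, used in node_used.items():
        util = used / node_capacity[node]
        if abs(util - cluster_avg) > threshold:
            report[node] = "over" if util > cluster_avg else "under"
    return report

# Cluster average utilization is 50%: dn1 (90%) is over-utilized,
# dn2 (10%) is under-utilized, dn3 (50%) is balanced.
report = imbalance_report(
    {"dn1": 90, "dn2": 10, "dn3": 50},
    {"dn1": 100, "dn2": 100, "dn3": 100},
)
```

A rebalancer would then move block replicas from "over" nodes to "under" nodes until the report is empty, which is the kind of behavior the paper's comparative study measures.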
4

Jang, Minwoo, Woochur Kim, Yookun Cho, and Jiman Hong. "Impacts of delayed replication on the key-value store." In SAC 2014: Symposium on Applied Computing. ACM, 2014. http://dx.doi.org/10.1145/2554850.2559927.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Schiper, Nicolas, Pierre Sutra, and Fernando Pedone. "P-Store: Genuine Partial Replication in Wide Area Networks." In 2010 IEEE International Symposium on Reliable Distributed Systems (SRDS). IEEE, 2010. http://dx.doi.org/10.1109/srds.2010.32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Han, Sangyun, and Sungwon Lee. "Persistent store-based dual replication system for distributed SDN controller." In 2016 International Conference on Selected Topics in Mobile & Wireless Networking (MoWNeT). IEEE, 2016. http://dx.doi.org/10.1109/mownet.2016.7496618.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Brahneborg, Daniel, Wasif Afzal, Adnan Čaušević, and Mats Björkman. "Superlinear and Bandwidth Friendly Geo-replication for Store-and-forward Systems." In 15th International Conference on Software Technologies. SCITEPRESS - Science and Technology Publications, 2020. http://dx.doi.org/10.5220/0009835403280338.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Accardi, Mario Alberto, and Daniele Dini. "Modelling of the Mechanical Behaviour of Human Joints Cartilage." In STLE/ASME 2008 International Joint Tribology Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/ijtc2008-71308.

Full text
Abstract:
A significant component of our understanding of cartilage mechanical behaviour is the ability to model its response to various types of mechanical loading, for which we require detailed knowledge of cartilage material properties. The Finite Element Analysis software ABAQUS is renowned for its ability to model poroelastic materials using the soil consolidation theory. In this research, ABAQUS has been used to model and investigate the mechanical behaviour of articular cartilage, mainly using indentation and unconfined compression techniques. A biphasic model of articular cartilage was first created and subsequently modified to incorporate more detailed material descriptions. Various material constitutive laws (and mechanical properties), accounting for the strain-dependent permeability of the porous matrix, solid viscoelasticity and transverse isotropy, have been adopted to produce increasingly sophisticated models. The presence of collagen fibril networks embedded in the solid has also been considered, and Fibril Reinforced Elastic and Viscoelastic models produced. A salient feature of these models is their ability to simulate fibril stiffening by replicating the nonlinear fibrillar response. In this paper, we provide an overview of the state-of-the-art modelling techniques adopted to simulate cartilage behaviour. The comparative study performed by the authors provides a critical assessment of the effectiveness of such techniques.
APA, Harvard, Vancouver, ISO, and other styles
9

Asaka, Yusuke, Keiichi Watanuki, Shuichi Fukuda, Keiichi Muramatsu, and Lei Hou. "Analysis of Brain Activity Influenced by Replication Accuracy in Imitation Learning in Manufacturing Industries." In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/detc2016-60452.

Full text
Abstract:
Here, we investigate and discuss the effect of the accuracy of imitation on brain activity during skill improvement. To improve their skills, learners combine and accumulate information about the skills through practice. Thus, we used near-infrared spectroscopy (NIRS) to investigate brain activity during the process of improvement. Evaluation of the level of knowledge acquisition through monitoring of brain activity can be an indicator of the learner's degree of skill progression. Therefore, our final goal is to construct a new learning model based on brain activity monitoring and to improve learning efficiency. Building on a previous example, we conducted experiments on an assembly operation learned by imitation, modeled on work in the manufacturing industries. As a result, we showed the possibility of a brain activity shift with improvement of the skill. In this article, we targeted task accuracy and investigated whether the brain activity shift is caused by progress in task accuracy, the act of practice, or some other factor. As a result, we showed the possibility that the trend shift in the right and left dorsolateral prefrontal area and frontal pole was caused not by the simple improvement in task accuracy but by the act of practice, which helped subjects store the information.
APA, Harvard, Vancouver, ISO, and other styles
10

Caudell, Thomas P., John Sharpe, and Kristina M. Johnson. "Ferroelectric liquid crystal optoelectronic ART1 neural processor." In OSA Annual Meeting. Optica Publishing Group, 1992. http://dx.doi.org/10.1364/oam.1992.tud1.

Full text
Abstract:
Adaptive resonance theory (ART) neural networks are becoming an important component of neural network industrial applications. Electronic implementation of these network models has proven difficult to scale to practical input dimensionality. In this paper, a new implementation of ART1 is proposed that efficiently combines optical and electronic devices. Global computations are performed by the optics, while local operations are performed in electronics. A physical implementation of this architecture that uses ferroelectric liquid crystal modulators integrated with VLSI circuitry is presented. The system has the capacity to learn in real time and to store the neural weights in a two-dimensional optically addressed smart spatial light modulator (SSLM). The implementation can be packaged in a multi-chip module of small physical dimensions. Macro-circuits can be constructed from these modules to perform complex logical functions. This system can also be modified to allow read-out and read-in of the learned weights stored in the SSLM, for replication of trained systems. The sensitivity of the ART1 network algorithm to variations in modulator and detector device characteristics is discussed.
APA, Harvard, Vancouver, ISO, and other styles