Academic literature on the topic 'Next Bit Tests'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Next Bit Tests.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Next Bit Tests"

1

Wu, Qi. "A Dependent Variable Harmonically Coupled Chaotic System for a Pseudorandom bit Generator." MATEC Web of Conferences 173 (2018): 03074. http://dx.doi.org/10.1051/matecconf/201817303074.

Abstract:
Coupling is a common approach for constructing new chaotic systems. In this paper, we present a novel way of coupling, which is utilized to construct a new chaotic system. Afterwards, a pseudorandom bit generator is proposed based on it. Next, we employ five statistical tests to evaluate the pseudorandomness of the generated sequences. Linear complexity and cipher space are analyzed last. All the results demonstrate that the proposed generator possesses excellent properties.
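
The abstract above does not name its five statistical tests. As a rough, hedged illustration of the kind of check typically run on a generator's output bits, here is a minimal monobit (frequency) test sketch in Python; the function name, the 0.01 significance level, and the use of Python's random module as a stand-in generator are illustrative assumptions, not the authors' code.

```python
import math
import random

def monobit_test(bits, alpha=0.01):
    """Frequency (monobit) test: checks whether the counts of 0s and 1s
    in the sequence are balanced enough to be consistent with randomness."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)      # +1 for each 1, -1 for each 0
    s_obs = abs(s) / math.sqrt(n)
    p_value = math.erfc(s_obs / math.sqrt(2))  # two-sided tail probability
    return p_value, p_value >= alpha           # True = no evidence against randomness

# Example: test 10,000 bits from a stand-in generator.
bits = [random.getrandbits(1) for _ in range(10_000)]
print(monobit_test(bits))
```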
2

Seo, J., and T. Kim. "COMPARISON OF PIXEL-BASED AND FEATURE-BASED APPROACH FOR SMALL OBJECT CHANGE DETECTION." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2021 (June 28, 2021): 353–57. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2021-353-2021.

Abstract:
Satellite image resolution has evolved to daily revisit and sub-meter GSD. The main targets of earlier remote sensing were forests, vegetation, disaster damage areas, land use, and land cover. Developments in satellite imagery have raised expectations of more sophisticated and varied change detection of objects. Accordingly, we focused on unsupervised change detection of small objects, such as vehicles and ships. In this paper, existing change detection methods were applied to analyze their performance for pixel-based and feature-based change of small objects. We used KOMPSAT-3A images for tests. Firstly, we applied two change detection algorithms, MAD and IR-MAD, which are the most well-known pixel-based change detection algorithms, to the images. We created a change magnitude map using the change detection methods. Thresholding was applied to determine change and non-change pixels. Next, the satellite images were transformed into 8-bit images for extracting feature points. We extracted feature points using SIFT and SURF methods to analyze feature-based change detection. We aimed to remove false alarms by eliminating feature points of unchanged objects. Therefore, we applied a feature-based matcher, and matched feature points at identical image locations were eliminated. We used the non-matched feature points for change/non-change analysis. We observed changes by creating a 5x5 ROI around each extracted feature point in the change/non-change map. We determined that change had occurred at a feature point if the rate of change pixels within the ROI was more than 50%. We analyzed the performance of pixel-based and feature-based change detection using ground truths. The F1-score, AUC value, and ROC were used to compare the performance of change detection. The results showed that feature-based approaches performed better than pixel-based approaches.
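
As a concrete reading of the decision rule in this abstract (a 5x5 ROI around each non-matched feature point, declared changed when more than 50% of its pixels are change pixels), the sketch below is an illustrative Python/NumPy fragment; the array names and the synthetic change map are assumptions, not the authors' implementation.

```python
import numpy as np

def feature_point_changed(change_map, row, col, half=2, threshold=0.5):
    """Decide change at a feature point from a binary change/non-change map.

    change_map: 2-D array of 0/1 pixels (1 = change), e.g. from thresholding
                a MAD/IR-MAD change magnitude map.
    (row, col): feature point location (e.g. a non-matched SIFT/SURF keypoint).
    """
    r0, r1 = max(row - half, 0), min(row + half + 1, change_map.shape[0])
    c0, c1 = max(col - half, 0), min(col + half + 1, change_map.shape[1])
    roi = change_map[r0:r1, c0:c1]             # 5x5 window, clipped at image borders
    return roi.mean() > threshold              # changed if >50% of ROI pixels changed

# Example with a synthetic 10x10 change map and one keypoint.
change_map = np.zeros((10, 10), dtype=np.uint8)
change_map[3:8, 3:8] = 1
print(feature_point_changed(change_map, 5, 5))   # True
```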
3

Clery, Daniel. "Magnet tests kick off bid for net fusion energy." Science 371, no. 6534 (March 11, 2021): 1091. http://dx.doi.org/10.1126/science.371.6534.1091.

4

Singer, Jay, Raymond M. Hurley, and John P. Preece. "Effectiveness of Central Auditory Processing Tests With Children." American Journal of Audiology 7, no. 2 (October 1998): 73–84. http://dx.doi.org/10.1044/1059-0889(1998/015).

Abstract:
The purpose of this investigation was to determine central auditory processing (CAP) individual test efficacy and test battery efficacy and to estimate the costs that are associated with the identification of a targeted sample. Ninety-one children with normal learning (NL) abilities and 147 children with a classroom learning disability (CLD) and presumed CAP disorders (CAPDs) ranging in age from 7 to 13 years were given a battery of seven CAP tests. The test battery consisted of: (1) Binaural Fusion Test (BFT), (2) Masking Level Difference (MLD) test, (3) Filtered Speech Test (FST), (4) Time Compressed Speech (TCS) test, (5) Dichotic Digits Test (DDT), (6) Staggered Spondaic Word (SSW) test, and (7) Pitch Pattern Test (PPT). We believe that this investigation is the first report regarding the assessment of the utility of CAP tests using clinical decision analysis (CDA). We determined that the BFT separated the two samples most effectively and that the FST was the next most effective. A test protocol with BFT and FST or BFT and MLD represented the best battery approach when hit rate, false positive rate, and cost factors were considered. However, if the intent is to be certain that a child with CLD has CAPD given a positive test result, then the BFT and MLD would be the test battery of choice.
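
The clinical decision analysis referred to above reduces, for each test or battery, to a hit (true-positive) rate and a false-positive rate. The sketch below shows that basic calculation in Python on hypothetical counts; the specific numbers are made up for illustration and are not the study's data.

```python
def cda_rates(hits, misses, false_alarms, correct_rejections):
    """Hit rate (sensitivity) and false-positive rate from a 2x2 decision table."""
    hit_rate = hits / (hits + misses)                              # CLD children flagged by the test
    fp_rate = false_alarms / (false_alarms + correct_rejections)   # NL children flagged by the test
    return hit_rate, fp_rate

# Hypothetical example: a battery flags 120 of 147 CLD children
# and 9 of 91 normal-learning children.
print(cda_rates(hits=120, misses=27, false_alarms=9, correct_rejections=82))
```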
5

Wright, Blake. "Automated Mud Skid Holds Promise of Safety, Efficiency Improvements With Real-Time Fluid Monitoring." Journal of Petroleum Technology 73, no. 06 (June 1, 2021): 31–33. http://dx.doi.org/10.2118/0621-0031-jpt.

Abstract:
As industry buzzwords go, “automation” has spent its time in oilfield vernacular climbing the ranks of widely used terms. It now resides as one of the go-to designations for signs of advancement in any number of disciplines. Its use has been tied most frequently with drilling operations as contractors look to keep employees out of harm’s way via a robotic take-over of most motion-intensive jobs on the rig’s drill floor—basically anything that grips, clamps, or spins. More recently, the term has moved away from the drill floor and into other well construction operations allowing for things such as remote, real-time measurements without the need for boots on the ground. For areas like west Texas and the Permian Basin shales, having the option for remote readouts and a component of automation that can allow for corrective actions should the need arise can go a long way in terms of safety and efficiency gains as well as better manpower application. Unsurprisingly, the area has become a solid testing ground for new, expanding efforts in automation. With dreams of new drilling-fluid-monitoring automation, Eric van Oort, a professor at The University of Texas at Austin and former Shell research scientist, and select students came up with a new way to automatically measure mud parameters such as viscosity without the use of a traditional viscometer. “The fact that we still use manual measurements, some of them now 90 years old, is quite puzzling in this day and age,” van Oort said. “The Marsh funnel, for instance, was introduced in the 1930s, and other mud tests go back to the 1950s and 1960s. These API measurements have served us well, but the question is, can you do something more now with modern measurement techniques and sensors? So, I started working on new ways of measuring the viscosity and density, and then later fluid loss and even solids and salinity in muds. That proved to be all very successful and promising.” Construction of a mud skid to house the equipment and sensors needed to conduct these tests in real time was the next step in the evolution of van Oort’s concept. That initial skid was a cannibalized and reworked version of a unit that was employed on Shell’s Rig 1, which the supermajor built for its in-house rig-automation research based in Pennsylvania. This early mud skid, considered the prototype of van Oort’s design, was abandoned before it was properly tested. “We generated quite a bit of IP [intellectual property], my students and I at UT,” he said. “The Shell skid hadn’t seen a significant amount of service, and it had some nice components that we could reuse. We took that skid apart and reconfigured it and put it out in the field with Pioneer Natural Resources for a set of field trials in the Permian. Those went well.” The field trial results were shared in a paper presented at the 2019 Unconventional Resources Technology Conference (URTEC 2019-964). The paper concluded that the pipe viscometer employed by the skid allows for the characterization of additional rheology parameters, which cannot be obtained with Couette-type viscometers, such as the critical Reynolds number, characterizing the transition from laminar into turbulent flow, and the friction factor in the turbulent flow regime.
6

Velásquez-Zapata, Valeria, J. Mitch Elmore, Sagnik Banerjee, Karin S. Dorman, and Roger P. Wise. "Next-generation yeast-two-hybrid analysis with Y2H-SCORES identifies novel interactors of the MLA immune receptor." PLOS Computational Biology 17, no. 4 (April 2, 2021): e1008890. http://dx.doi.org/10.1371/journal.pcbi.1008890.

Abstract:
Protein-protein interaction networks are one of the most effective representations of cellular behavior. In order to build these models, high-throughput techniques are required. Next-generation interaction screening (NGIS) protocols that combine yeast two-hybrid (Y2H) with deep sequencing are promising approaches to generate interactome networks in any organism. However, challenges remain in mining reliable information from these screens, which limits their broader implementation. Here, we present a computational framework, designated Y2H-SCORES, for analyzing high-throughput Y2H screens. Y2H-SCORES considers key aspects of NGIS experimental design and important characteristics of the resulting data that distinguish it from RNA-seq expression datasets. Three quantitative ranking scores were implemented to identify interacting partners, comprising: 1) significant enrichment under selection for positive interactions, 2) degree of interaction specificity among multi-bait comparisons, and 3) selection of in-frame interactors. Using simulation and an empirical dataset, we provide a quantitative assessment to predict interacting partners under a wide range of experimental scenarios, facilitating independent confirmation by one-to-one bait-prey tests. Simulation of Y2H-NGIS enabled us to identify conditions that maximize detection of true interactors, which can be achieved with protocols such as prey library normalization, maintenance of larger culture volumes and replication of experimental treatments. Y2H-SCORES can be implemented in different yeast-based interaction screenings, with performance equivalent or superior to existing methods. Proof-of-concept was demonstrated by the discovery and validation of novel interactions between the barley nucleotide-binding leucine-rich repeat (NLR) immune receptor MLA6 and fourteen proteins, including those that function in signaling, transcriptional regulation, and intracellular trafficking.
7

McVay, Michael C., Ralph D. Ellis, Bjorn Birgisson, Gary R. Consolazio, Sastry Putcha, and Sang Min Lee. "Load and Resistance Factor Design, Cost, and Risk: Designing a Drilled-Shaft Load Test Program in Florida Limestone." Transportation Research Record: Journal of the Transportation Research Board 1849, no. 1 (January 2003): 98–106. http://dx.doi.org/10.3141/1849-12.

Abstract:
Currently there are few if any guidelines on estimating the number of load tests in the design of drilled-shaft foundations in Florida limestone. For instance, for many sites there may be a similar number of field load tests but a significantly different number of design shafts. Moreover, little if any information exists on risk or reliability versus cost of drilled-shaft foundations or on the cost of field load testing. The collection of a large database of drilled-shaft tests (more than 25 with Osterberg and Statnamic devices), in situ laboratory data, drilled-shaft construction costs, and field load testing costs for Florida limestone is reported on. From the field load tests, the average unit skin friction for various sites was found, as well as the predicted values based on the Florida Department of Transportation recommended design approach. Next, using load and resistance factor design (LRFD), the resistance (ϕ) values were found for various reliabilities (risk or probability of failure). Once the factored design loads were known (from plans), drilled-shaft lengths were estimated on the basis of the computed LRFD ϕ-values for different reliabilities (i.e., risk). From the linear length of the designed shaft as well as the expected cost per meter, a plot of total foundation cost versus reliability (risk) was generated for each site. On the basis of the latter plot, acceptable risk, and the cost of field load testing (bid and itemized), the designer can identify the cost savings of load testing and the appropriate number of tests to be performed.
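
To make the LRFD step concrete: once a resistance factor phi has been selected for a target reliability, the required shaft length follows from setting factored resistance equal to the factored load. The sketch below is a generic, side-friction-only illustration in Python under assumed loads and soil parameters; it is not the paper's design procedure.

```python
import math

def required_shaft_length(factored_load_kn, phi, unit_skin_friction_kpa, diameter_m):
    """Shaft length L such that phi * (f_s * perimeter * L) >= factored load.
    Side friction only; end bearing is ignored for simplicity."""
    perimeter = math.pi * diameter_m
    return factored_load_kn / (phi * unit_skin_friction_kpa * perimeter)

# Illustrative values: 8 MN factored load, 1.2 m shaft diameter, 150 kPa average
# unit skin friction in limestone, and two reliability levels with different phi.
for phi in (0.5, 0.7):
    L = required_shaft_length(8000.0, phi, 150.0, 1.2)
    print(f"phi = {phi}: required length ~ {L:.1f} m")
```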
8

Cheng, Na, Menglu Li, Le Zhao, Bo Zhang, Yuhua Yang, Chun-Hou Zheng, and Junfeng Xia. "Comparison and integration of computational methods for deleterious synonymous mutation prediction." Briefings in Bioinformatics 21, no. 3 (June 3, 2019): 970–81. http://dx.doi.org/10.1093/bib/bbz047.

Abstract:
Synonymous mutations do not change the encoded amino acids but may alter the structure or function of an mRNA in ways that impact gene function. Advances in next generation sequencing technologies have detected numerous synonymous mutations in the human genome. Several computational models have been proposed to predict deleterious synonymous mutations, which have greatly facilitated the development of this important field. Consequently, there is an urgent need to assess the state-of-the-art computational methods for deleterious synonymous mutation prediction to further advance the existing methodologies and to improve performance. In this regard, we systematically compared a total of 10 computational methods (including specific methods for deleterious synonymous mutations and general methods for single nucleotide mutations) in terms of the algorithms used, calculated features, performance evaluation and software usability. In addition, we constructed two carefully curated independent test datasets and accordingly assessed the robustness and scalability of these different computational methods for the identification of deleterious synonymous mutations. In an effort to improve predictive performance, we established an ensemble model, named Prediction of Deleterious Synonymous Mutation (PrDSM), which averages the ratings generated by the three most accurate predictors. Our benchmark tests demonstrated that the ensemble model PrDSM outperformed the reviewed tools for the prediction of deleterious synonymous mutations. Using the ensemble model, we developed an accessible online predictor, PrDSM, available at http://bioinfo.ahu.edu.cn:8080/PrDSM/. We hope that this comprehensive survey and the proposed strategy for building more accurate models can serve as a useful guide for inspiring future developments of computational methods for deleterious synonymous mutation prediction.
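
The ensemble step described above (averaging the ratings of the three most accurate predictors) is straightforward to express in code. The sketch below is a generic Python illustration with made-up score dictionaries and an assumed 0.5 decision threshold; it is not the actual PrDSM pipeline.

```python
def ensemble_score(mutation, predictors, threshold=0.5):
    """Average normalized ratings from several predictors for one mutation
    and call it deleterious if the mean exceeds a chosen threshold."""
    scores = [p[mutation] for p in predictors]   # each p maps mutation -> score in [0, 1]
    mean = sum(scores) / len(scores)
    return mean, mean > threshold

# Hypothetical scores for one synonymous mutation from three predictors.
p1 = {"chr1:12345 C>T": 0.81}
p2 = {"chr1:12345 C>T": 0.64}
p3 = {"chr1:12345 C>T": 0.72}
print(ensemble_score("chr1:12345 C>T", [p1, p2, p3]))
```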
9

Harima, Hayato, Michihito Sasaki, Yasuko Orba, Kosuke Okuya, Yongjin Qiu, Christida E. Wastika, Katendi Changula, et al. "Attenuated infection by a Pteropine orthoreovirus isolated from an Egyptian fruit bat in Zambia." PLOS Neglected Tropical Diseases 15, no. 9 (September 7, 2021): e0009768. http://dx.doi.org/10.1371/journal.pntd.0009768.

Abstract:
Background Pteropine orthoreovirus (PRV) is an emerging bat-borne zoonotic virus that causes severe respiratory illness in humans. Although PRVs have been identified in fruit bats and humans in Australia and Asia, little is known about the prevalence of PRV infection in Africa. Therefore, this study performed PRV surveillance in fruit bats in Zambia. Methods Egyptian fruit bats (Rousettus aegyptiacus, n = 47) and straw-colored fruit bats (Eidolon helvum, n = 33) captured in Zambia in 2017–2018 were screened for PRV infection using RT-PCR and serum neutralization tests. The complete genome sequence of an isolated PRV strain was determined by next generation sequencing and subjected to BLAST and phylogenetic analyses. Replication capacity and pathogenicity of the strain were investigated using Vero E6 cell cultures and BALB/c mice, respectively. Results A PRV strain, tentatively named Nachunsulwe-57, was isolated from one Egyptian fruit bat. Serological assays demonstrated that 98% of sera (69/70) collected from Egyptian fruit bats (n = 37) and straw-colored fruit bats (n = 33) had neutralizing antibodies against PRV. Genetic analyses revealed that all 10 genome segments of Nachunsulwe-57 were closely related to a bat-derived Kasama strain found in Uganda. Nachunsulwe-57 showed less efficient viral growth and lower pathogenicity in mice than another PRV strain, Miyazaki-Bali/2007, isolated from a patient. Conclusions A high proportion of Egyptian fruit bats and straw-colored fruit bats were found to be seropositive for PRV in Zambia. Importantly, a new PRV strain (Nachunsulwe-57) was isolated from an Egyptian fruit bat in Zambia, which had relatively weak pathogenicity in mice. Taken together, our findings provide new epidemiological insights into PRV infection in bats and report the first isolation of a PRV strain that may have low pathogenicity in humans.
10

Li, Fuyi, Yanan Wang, Chen Li, Tatiana T. Marquez-Lago, André Leier, Neil D. Rawlings, Gholamreza Haffari, et al. "Twenty years of bioinformatics research for protease-specific substrate and cleavage site prediction: a comprehensive revisit and benchmarking of existing methods." Briefings in Bioinformatics 20, no. 6 (August 29, 2018): 2150–66. http://dx.doi.org/10.1093/bib/bby077.

Abstract:
The roles of proteolytic cleavage have been intensively investigated and discussed during the past two decades. This irreversible chemical process has been frequently reported to influence a number of crucial biological processes (BPs), such as cell cycle, protein regulation and inflammation. A number of advanced studies have been published aiming at deciphering the mechanisms of proteolytic cleavage. Given its significance and the large number of functionally enriched substrates targeted by specific proteases, many computational approaches have been established for accurate prediction of protease-specific substrates and their cleavage sites. Consequently, there is an urgent need to systematically assess the state-of-the-art computational approaches for protease-specific cleavage site prediction to further advance the existing methodologies and to improve the prediction performance. With this goal in mind, in this article, we carefully evaluated a total of 19 computational methods (including 8 scoring function-based methods and 11 machine learning-based methods) in terms of their underlying algorithm, calculated features, performance evaluation and software usability. Then, extensive independent tests were performed to assess the robustness and scalability of the reviewed methods using our carefully prepared independent test data sets with 3641 cleavage sites (specific to 10 proteases). The comparative experimental results demonstrate that PROSPERous is the most accurate generic method for predicting eight protease-specific cleavage sites, while GPS-CCD and LabCaS outperformed other predictors for calpain-specific cleavage sites. Based on our review, we then outlined some potential ways to improve the prediction performance and ease the computational burden by applying ensemble learning, deep learning, positive unlabeled learning and parallel and distributed computing techniques. We anticipate that our study will serve as a practical and useful guide for interested readers to further advance next-generation bioinformatics tools for protease-specific cleavage site prediction.

Dissertations / Theses on the topic "Next Bit Tests"

1

Cardinal, Robert W. "DATA REDUCTION AND PROCESSING SYSTEM FOR FLIGHT TEST OF NEXT GENERATION BOEING AIRPLANES." International Foundation for Telemetering, 1993. http://hdl.handle.net/10150/608878.

Abstract:
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada
This paper describes the recently developed Loral Instrumentation ground-based equipment used to select and process post-flight test data from the Boeing 777 airplane as it is played back from a digital tape recorder (e.g., the Ampex DCRSi II) at very high speeds. Gigabytes (GB) of data, stored on recorder cassettes in the Boeing 777 during flight testing, are played back on the ground at a 15-30 MB/sec rate into ten multiplexed Loral Instrumentation System 500 Model 550s for high-speed decoding, processing, time correlation, and subsequent storage or distribution. The ten Loral 550s are multiplexed for independent data path processing from ten separate tape sources simultaneously. This system features a parallel multiplexed configuration that allows Boeing to perform critical 777 flight test processing at unprecedented speeds. Boeing calls this system the Parallel Multiplexed Processing Data (PMPD) System. The key advantage of the ground station's design is that Boeing engineers can add their own application-specific control and setup software. The Loral 550 VMEbus allows Boeing to add VME modules when needed, ensuring system growth with the addition of other LI-developed products, Boeing-developed products or purchased VME modules. With hundreds of third-party VME modules available, system expansion is unlimited. The final system has the capability to input data at 15 MB/sec. The present aggregate throughput capability of all ten 24-bit Decoders is 150 MB/sec from ten separate tape sources. A 24-bit Decoder was designed to support the 30 MB/sec DCRSi III so that the system can eventually support a total aggregate throughput of 300 MB/sec. Clearly, such high-speed data selection, rejection, and processing will significantly accelerate flight certification and production testing of today's state-of-the-art aircraft. This system was supplied with low-level software interfaces so that the customer could develop their own application-specific code and displays. The Loral 550 lends itself to this kind of application due to its VME chassis, VxWorks operating system and the modularity of the software.
2

Raddo, Thiago Roberto. "Next generation access networks: flexible OCDMA systems and cost-effective chaotic VCSEL sources for secure communications." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18155/tde-31082017-093005/.

Abstract:
The significant advances in fiber-optic technology have broadened the optical network's reach into end-user business premises and even homes, allowing new services and technologies to be delivered to the customers. The next wave of innovation will certainly generate numerous opportunities provided by the widespread popularity of emerging solutions and applications such as tactile Internet, telemedicine and real-time 3-D content generation, making them part of everyday life. Nevertheless, to support such an unprecedented and insatiable demand for data traffic, higher capacity and security, flexible bandwidth allocation and cost-efficiency have become crucial requirements for candidate technologies for future optical access networks. To this aim, optical code-division multiple-access (OCDMA) technology is considered a prospective candidate, particularly due to features like asynchronous transmissions, flexible as well as conscious bandwidth resource distribution, and support for differentiated services at the physical layer, to name but a few. In this context, this thesis proposes new mathematical formalisms for bit error rate, packet throughput and packet delay to assess the performance of flexible OCDMA networks capable of providing multiservice multirate transmissions according to users' requirements. The proposed analytical formalisms do not require a priori knowledge of the users' code sequences, which means that the network performance can be addressed in a simple and straightforward manner using the code parameters only. In addition, the developed analytical formalisms account for a general number of distinct user classes as well as a general probability of interference among users. Hence, these formalisms can be successfully applied to the performance evaluation of flexible OCDMA networks not only under any number of user classes in a network, but also for most spreading codes with good correlation properties. The packet throughput expression is derived assuming Poisson, binomial and Markov chain approaches for the composite packet arrivals, with the latter defined as the benchmark. Then, it is shown via numerical simulation that the Poisson-based expression is not appropriate for a reliable throughput estimate when compared to the benchmark (Markov) results. The binomial-based throughput equation, in turn, provides results as accurate as the benchmark. In addition, the binomial-based throughput is numerically more convenient and computationally more efficient than the Markov chain approach, whereas the Markov-based one is computationally expensive, particularly if the number of users is large. The bit error rate (BER) expressions are derived considering Gaussian and binomial distributions for the multiple-access interference, and it is shown via numerical simulations that accurate performance of flexible OCDMA networks is only obtained with the binomial-based BER expression. This thesis also proposes and investigates a network architecture for Internet protocol traffic over flexible OCDMA with support for multiservice multirate transmissions, which is independent of the employed spreading code and does not require any new optical processing technology. In addition, the network performance assumes users transmitting asynchronously using receivers based on intensity-modulation direct-detection schemes. Numerical simulations show that the proposed network performs well when its users are assigned high-weight codes or when the channel utilization is low.
The BER and packet throughput performance of an OCDMA network that provides multirate transmissions via a multicode technique, with two codes assigned to each user, is also addressed. Numerical results show that this technique outperforms classical techniques based on multilength codes. Finally, this thesis addresses a new breakthrough technology that might lead to higher levels of security at the physical layer of optical networks. This technology consists of the generation of deterministic chaos from a commercial free-running vertical-cavity surface-emitting laser (VCSEL). The chaotic dynamics are generated by means of mechanical strains loaded onto an off-the-shelf quantum-well VCSEL using a simple and easily replicable holder. Deterministic chaos is then achieved, for the first time, without any additional complexity of optical feedback, parameter modulation or optical injection. The simplicity of the proposed system, which is based entirely on low-cost and easily available components, opens the way to the widespread use of commercial, free-running VCSEL devices for chaos-based applications. This off-the-shelf and cost-effective optical chaos generator has the potential not only to pave the way towards new security platforms in optical networks, for example by hiding the user information in an unpredictable, random-like signal against eventual eavesdroppers, but also to enable emerging chaos applications that were initially limited or infeasible due to the lack of low-cost solutions. Furthermore, it leads the way to the future realization of emerging applications with high integrability and scalability, such as two-dimensional arrays of chaotic devices comprising hundreds of individual sources, to meet increasing requirements for random bit generation, cryptography or large-scale quantum networks.
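
The binomial-versus-Poisson point in this abstract can be illustrated with the distribution of the number of interfering users: if each of the other N active users interferes independently with probability p, the exact model is binomial, and the Poisson form is only its large-N, small-p limit. The Python sketch below compares the two under assumed parameters; it is not the thesis's formalism.

```python
from math import comb, exp, factorial

def p_interferers_binomial(k, n_users, p):
    """Exact probability that k of the other users interfere."""
    return comb(n_users, k) * p**k * (1 - p)**(n_users - k)

def p_interferers_poisson(k, n_users, p):
    """Poisson approximation with the same mean, lam = n_users * p."""
    lam = n_users * p
    return exp(-lam) * lam**k / factorial(k)

# Assumed example: 30 other active users, each interfering with probability 0.2.
for k in range(5):
    print(k, round(p_interferers_binomial(k, 30, 0.2), 4),
             round(p_interferers_poisson(k, 30, 0.2), 4))
```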
3

Nižnanský, Petr. "Testování náhodnosti a použití statistických testů v kryptografii." Master's thesis, 2013. http://www.nusl.cz/ntk/nusl-329723.

Abstract:
Pseudorandom generators are a primary focus of cryptology. The key to every cipher has to be generated at random, otherwise the security of the whole cipher is threatened. Another point of importance is the pseudorandom generators' close relationship to stream ciphers. In this work, we first introduce statistical theory related to randomness testing. Then, we describe 8 classical statistical tests. We introduce the concept of next-bit testing and derive variants of the previous tests. Moreover, with this new battery of tests we examine the randomness of SHA-3 second-round candidates and present the results. The sensitivity of the tests is also investigated and several useful transformations are shown.
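
Since this entry is the closest match to the topic of this page, a minimal sketch of the next-bit idea may help: a sequence is considered non-random if some efficient predictor can guess bit i from the preceding bits significantly better than 50% of the time. The Python sketch below uses a simple order-k context (frequency) predictor and a crude normal-approximation check; the predictor choice, k = 3, and the z-score rule of thumb are illustrative assumptions, not the test battery developed in the thesis.

```python
from collections import defaultdict
import math
import random

def next_bit_test(bits, k=3):
    """Predict each bit from the preceding k bits using running context counts,
    then check whether the prediction accuracy deviates suspiciously from 1/2."""
    counts = defaultdict(lambda: [0, 0])      # context -> [count of 0s, count of 1s]
    correct, total = 0, 0
    for i in range(k, len(bits)):
        ctx = tuple(bits[i - k:i])
        c0, c1 = counts[ctx]
        guess = 1 if c1 > c0 else 0           # majority vote seen so far for this context
        correct += (guess == bits[i])
        total += 1
        counts[ctx][bits[i]] += 1             # update after predicting (no look-ahead)
    acc = correct / total
    z = (acc - 0.5) * 2 * math.sqrt(total)    # approx. N(0,1) under the randomness hypothesis
    return acc, z                             # |z| much larger than 3 suggests predictability

bits = [random.getrandbits(1) for _ in range(20_000)]
print(next_bit_test(bits))
```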

Books on the topic "Next Bit Tests"

1

Western Electronic Show and Convention (1998 Anaheim, Calif.). Wescon/98: Systems-on-a-chip - next generation IP networks, chip-level design, system design, embedded systems, aerospace applications, quality/reliability/test, EDA, system environment, system interface, wireless system design, network system design, bio-medical systems : conference proceedings : Anaheim Convention Center, Anaheim, California, September 15-17, 1998. [New York]: Institute of Electrical and Electronics Engineers, 1998.

2

Redmon, Allen H., ed. Next Generation Adaptation. University Press of Mississippi, 2021. http://dx.doi.org/10.14325/mississippi/9781496832603.001.0001.

Abstract:
Next Generation Adaptation: Spectatorship and Process explores the ways in which cross-cultural adaptations often stage a collusion between competing cultural capital. The collusion conceals and reveals commonalities and differences between these cultural traditions before giving way to the differences that can distinguish one textual expression from another, just as it ultimately distinguishes one set of readers from another. An adaptation of any sort, but especially those that cross accepted stereotypes, or geographic or political boundaries, provide spectators space to negotiate attitudes and ideas that might otherwise lay latent in the text. Spectators are left to parse through each, often with special attention to the differences that exist between two expressions. Each new set of readers, each generation, distinguishes itself from an earlier set of readers, even as they exist along the same family tree. Given enough time, some new shared organizing strategy emerges until a new encounter or new expression of a text restarts the adaptational process every adaptation can trigger. Taken together, the chapters in Next Generation Adaptation each argue that the texts they consider foreground the kinds of space that exists between texts, between political commitments, between ethical obligations that every filmic text can open when the text is experienced as an adaptation. The chapters esteem the expansive dialogue adaptations accelerate when they realize their capacity to bring together two or more texts, two or more peoples, two or more ideologies without allowing one expression to erase another.
3

Ingles, Jodie, Charlotte Burns, and Laura Yeates. Genetic counselling. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780198784906.003.0145.

Abstract:
Cardiac genetic counselling is an emerging but important subspecialty. The qualifications of cardiac genetic counsellors depend on the country of practice, but at a minimum they are Master’s-level trained health professionals with expertise in genetics, and are integral members of the multidisciplinary inherited cardiovascular disease clinic. Though the framework is diverse in different countries, key roles include investigation and confirmation of family history details, discussion of inheritance risks and facilitation of cardiac genetic testing, communication with at-risk relatives, and increasingly, curation of genetic test results. The use of next-generation sequencing technologies has seen a recent shift in the uptake of genetic testing, due to greater availability and lowered costs. As these gene tests become more comprehensive, including large panels of genes and even whole exome or whole genome sequencing, the need for cardiac genetic counsellors to provide informed consent, appropriate pre- and post-test genetic counselling, and ongoing curation of the variants identified is evident. Finally, given the improved understanding of the psychological implications of living with a cardiovascular genetic disease, cardiac genetic counsellors are integral in delivering psychosocial care and identifying patients requiring intervention with a clinical psychologist.
4

Graves, Tracey. Neurogenetic disease. Edited by Patrick Davey and David Sprigings. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780199568741.003.0223.

Abstract:
There are many genetic diseases which affect the nervous system. Although some of these are extremely rare, several are quite common and, as a group, they comprise a significant proportion of neurological disease. Almost all clinical neurological syndromes can have a genetic cause. Not all of these have been genetically elucidated, but some have been extensively characterized in terms of clinical phenotype, molecular genetics, and cellular pathophysiology. Given the improvement in laboratory techniques and subsequent reduction in the cost of direct DNA sequencing, there is likely to be a rapid expansion over the next decade in the identification of causative genes and hence the availability of genetic tests. Thus, all clinicians should have a basic understanding about genetic disease; inheritance patterns; availability of genetic tests; genetic counselling; and ethics. Particular subspeciality areas where neurogenetic disease is common include neuromuscular disease and movement disorders.
5

Pattenden, Miles. Choosing Candidates. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198797449.003.0005.

Abstract:
This chapter analyses the difficulty cardinals faced in deciding for whom to vote. All cardinals, and those whom they represented, entered each election with preferences for what they wanted the new pope to be like—and perhaps even for particular candidates. But they lacked the information to know how to express those preferences and the process of gathering it and assessing it was arduous. Much information was available through Rome’s burgeoning public sphere but its reliability was hard to test. Several treatises also offered advice to cardinals but it was not always easy to follow. Many cardinals in most elections seem with hindsight to have assessed the situation poorly. However, most importantly, their primary concern when selecting the next pope seems to have been to take as few risks as possible.
6

Scadding, Alys. Terminal care in respiratory illness. Edited by Patrick Davey and David Sprigings. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780199568741.003.0146.

Abstract:
The terminal phase is the period of time between living with a reasonable quality of life, and the process of dying. While lung cancer and pulmonary fibrosis have the potential to deteriorate rapidly, the majority of lung diseases worsen over years. Every exacerbation of the condition leads to a decline in both lung function and performance status, and often the pre-exacerbation level of functioning is never regained. There is not a defining point to indicate whether a patient is entering the terminal stages of their illness, but practice shows that the following signs are suggestive: increasing breathlessness and thus becoming increasingly housebound; increasing oxygen requirements; declining pulmonary function test results; increasingly frequent exacerbations requiring hospital admission and/or non-invasive ventilation; developing cor pulmonale; weight loss and difficulty maintaining weight; anxiety and depression; if the death of the patient within the next year would not be a surprise.
7

Thompson, William R., and Leila Zakhirova. Rome as the Pinnacle of the Western Ancient World. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190699680.003.0004.

Abstract:
Ancient Rome is an important turning point in Eurasian history. But it is also an interesting test case in its own right. In the past, the standard view was that ancient economies operated on a subsistence production basis and lacked technological innovation, and that therefore, economic growth in the modern sense was highly unlikely. Yet we have the sense that Rome was wealthy. Was this wealth simply another example of the efflorescence that has been possible at various times in history without regard to innovations in either technology or energy? Our answer is yes. In that respect, it provides a useful baseline with which to compare the evolving nature of global power—military, political, and economic. It belongs in the story for what it did not do: it made little effort to escape the constraints of an agrarian political economy. As such, it is something of a negative template for what was to come in the next one and a half millennia.
8

Grosse Ruse-Khan, Henning. Conflict Rules and Integration Principles in the International IP System. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780199663392.003.0012.

Abstract:
This chapter first looks at the (few) formal conflict rules in the international intellectual property (IP) system. It focusses on those found in the Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement. It then assesses those rules and principles that are not about directly allowing other international norms to prevail, but rather indirectly allow states implementing IP treaties and adjudicative bodies interpreting them to take into account external norms and the interests and objectives they protect. Next, the chapter provides an overview of three different areas where the international IP system has either provided specific responses to its intersections with other areas of international law, or where such a response is under negotiation in international IP fora. Finally, this chapter turns to the main horizontal tool in TRIPS and other IP treaties that allows states to take into account other interests and objectives that coincide or conflict with IP protection: the so-called ‘three-step test’.
9

Bisseling, Rob H. Parallel Scientific Computation. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198788348.001.0001.

Abstract:
This book explains how to use the bulk synchronous parallel (BSP) model to design and implement parallel algorithms in the areas of scientific computing and big data. Furthermore, it presents a hybrid BSP approach towards new hardware developments such as hierarchical architectures with both shared and distributed memory. The book provides a full treatment of core problems in scientific computing and big data, starting from a high-level problem description, via a sequential solution algorithm to a parallel solution algorithm and an actual parallel program written in the communication library BSPlib. Numerical experiments are presented for parallel programs on modern parallel computers ranging from desktop computers to massively parallel supercomputers. The introductory chapter of the book gives a complete overview of BSPlib, so that the reader already at an early stage is able to write his/her own parallel programs. Furthermore, it treats BSP benchmarking and parallel sorting by regular sampling. The next three chapters treat basic numerical linear algebra problems such as linear system solving by LU decomposition, sparse matrix-vector multiplication (SpMV), and the fast Fourier transform (FFT). The final chapter explores parallel algorithms for big data problems such as graph matching. The book is accompanied by a software package BSPedupack, freely available online from the author’s homepage, which contains all programs of the book and a set of test programs.
10

Spiegel, Avi Max. Young Islam. Princeton University Press, 2017. http://dx.doi.org/10.23943/princeton/9780691159843.001.0001.

Abstract:
Today, two-thirds of all Arab Muslims are under the age of thirty. This book takes readers inside the evolving competition for their support—a competition not simply between Islamism and the secular world, but between different and often conflicting visions of Islam itself. Drawing on extensive ethnographic research among rank-and-file activists in Morocco, the book shows how Islamist movements are encountering opposition from an unexpected source—each other. In vivid detail, the book describes the conflicts that arise as Islamist groups vie with one another for new recruits, and the unprecedented fragmentation that occurs as members wrangle over a shared urbanized base. Looking carefully at how political Islam is lived, expressed, and understood by young people, the book moves beyond the top-down focus of current research. Instead, it makes the compelling case that Islamist actors are shaped more by their relationships to each other than by their relationships to the state or even to religious ideology. By focusing not only on the texts of aging elites but also on the voices of diverse and sophisticated Muslim youths, the book exposes the shifting and contested nature of Islamist movements today—movements that are being reimagined from the bottom up by young Islam. This book, the first to shed light on this new and uncharted era of Islamist pluralism in the Middle East and North Africa, uncovers the rivalries that are redefining the next generation of political Islam.

Book chapters on the topic "Next Bit Tests"

1

El-Said, M. "Bio-Inspired Approach for the Next Generation of Cellular Systems." In Encyclopedia of Mobile Computing and Commerce, 63–67. IGI Global, 2007. http://dx.doi.org/10.4018/978-1-59904-002-8.ch011.

Abstract:
In the current 3G systems and the upcoming 4G wireless systems, missing neighbor pilot refers to the condition of receiving a high-level pilot signal from a Base Station (BS) that is not listed in the mobile receiver’s neighbor list (LCC International, 2004; Agilent Technologies, 2005). This pilot signal interferes with the existing ongoing call, possibly causing the call to be dropped and increasing the handoff call dropping probability. Figure 1 describes the missing pilot scenario where BS1 provides the highest pilot signal compared to BS1 and BS2’s signals. Unfortunately, this pilot is not listed in the mobile user’s active list. The horizontal and vertical handoff algorithms are based on continuous measurements made by the user equipment (UE) on the Primary Scrambling Code of the Common Pilot Channel (CPICH). In 3G systems, the UE attempts to measure the quality of all received CPICH pilots using the Ec/Io and picks a dominant one from a cellular system (Chiung & Wu, 2001; El-Said, Kumar, & Elmaghraby, 2003). The UE interacts with any of the available radio access networks based on its memorization of the neighboring BSs. As the UE moves throughout the network, the serving BS must constantly update it with neighbor lists, which tell the UE which CPICH pilots it should be measuring for handoff purposes. In 4G systems, CPICH pilots would be generated from any wireless system, including the 3G systems (Bhashyam, Sayeed, & Aazhang, 2000). Due to the complex heterogeneity of the 4G radio access network environment, the UE is expected to suffer from various carrier interoperability problems. Among these problems, the missing neighbor pilot is considered to be the most dangerous one that faces the 4G industry. The wireless industry responded to this problem with an inefficient traditional solution relying on antenna downtilt, as shown in Figure 2. This solution requires shifting the antenna’s radiation pattern using a mechanical adjustment, which is very expensive for the cellular carrier. In addition, this solution is permanent and is not adaptive to the cellular network status (Agilent Technologies, 2005; Metawave, 2005). Therefore, a self-managing solution approach is necessary to solve this critical problem. Whisnant, Kalbarczyk, and Iyer (2003) introduced a system model for dynamically reconfiguring application software. Their model relies on considering the application’s static structure and run-time behaviors to construct a workable version of a reconfigurable software application. Self-managing applications are hard to test and validate because they increase system complexity (Clancy, 2002). The ability to reconfigure a software application requires the ability to deploy a dynamically reconfigurable hardware infrastructure, in systems in general and in cellular systems in particular (Jann, Browning, & Burugula, 2003).
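
The failure mode described above is easy to state as a check: the UE ranks measured CPICH pilots by Ec/Io, and a missing-neighbor condition arises when the dominant pilot is absent from the neighbor list supplied by the serving BS. The Python sketch below is an illustrative model with made-up identifiers, levels, and a 3 dB margin; it does not implement any 3GPP-specified procedure.

```python
def detect_missing_neighbor(pilot_ec_io_db, neighbor_list, margin_db=3.0):
    """Flag a missing-neighbor-pilot condition.

    pilot_ec_io_db: dict mapping base station / scrambling code -> measured Ec/Io in dB.
    neighbor_list:  pilots the serving BS told the UE to monitor.
    margin_db:      how far above the best listed pilot the unlisted one must be.
    """
    dominant, dominant_level = max(pilot_ec_io_db.items(), key=lambda kv: kv[1])
    listed = {sc: lvl for sc, lvl in pilot_ec_io_db.items() if sc in neighbor_list}
    best_listed = max(listed.values()) if listed else float("-inf")
    missing = dominant not in neighbor_list and dominant_level >= best_listed + margin_db
    return dominant, missing

# Assumed measurements: BS3 is strongest but absent from the neighbor list.
measurements = {"BS1": -12.0, "BS2": -14.5, "BS3": -7.0}
print(detect_missing_neighbor(measurements, neighbor_list={"BS1", "BS2"}))
```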
2

Pal, Kamalendu. "Quality Assurance Issues for Big Data Applications in Supply Chain Management." In Predictive Intelligence Using Big Data and the Internet of Things, 51–76. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-6210-8.ch003.

Abstract:
Heterogeneous data types, widely distributed data sources, huge data volumes, and large-scale business-alliance partners describe typical global supply chain operational environments. Mobile and wireless technologies are adding an extra layer of data sources to this technology-enriched supply chain operation. This environment also needs to provide access to data anywhere, anytime to its end-users. This new type of data set originating from the global retail supply chain is commonly known as big data because of its huge volume, resulting from the velocity with which it arrives in the global retail business environment. Such environments empower and necessitate decision makers to act or react more quickly to all decision tasks. Academics and practitioners are researching and building the next generation of big-data-based application software systems. This new generation of software applications is based on complex data analysis algorithms (i.e., on data that does not adhere to standard relational data models). Traditional software testing methods are insufficient for big-data-based applications. Testing big-data-based applications is one of the biggest challenges faced by modern software design and development communities because of a lack of knowledge of what to test and how much data to test. Developers of big-data-based applications have been facing a daunting task in defining the best strategies for structured and unstructured data validation, setting up an optimal test environment, and working with non-relational database testing approaches. This chapter focuses on big-data-based software testing and quality-assurance-related issues in the context of Hadoop, an open source framework. It includes discussion of several challenges with respect to massively parallel data generation from multiple sources, testing methods for validation of pre-Hadoop processing, software application quality factors, and some of the software testing mechanisms for this new breed of applications.
3

LoBrutto, Vincent. "Mythology." In Ridley Scott, 78–83. University Press of Kentucky, 2019. http://dx.doi.org/10.5810/kentucky/9780813177083.003.0008.

Abstract:
In the Orwellian year of 1984, during Super Bowl XVIII, a commercial for Apple’s Macintosh computer ran and became one of the most eye-catching and provocative sixty-second spots ever made. It was never shown again on television. As directed by Ridley Scott, the commercial portrays the grim world of the future dominated by Big Brother until a beautiful, athletic woman liberates everyone. For his next feature film Scott embraced the fantasy genre with Legend, a good-versus-evil tale set in a mythical land. Disaster hit the production when the entire elaborate set burned down. Miraculously, no one was injured, and the fairy tale environment was quickly rebuilt. The original version of Legend did poorly in front of test audiences and Scott cut it down radically, which hurt the film even more at the box office. In 1986 Ridley Scott Associates was expanded with the addition of a New York office, with more to come in the future.
4

Baecker, Ronald M. "Free speech, politics, and government." In Computers and Society. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198827085.003.0010.

Abstract:
Politics and government are undergoing dramatic changes through the advent of new technology. The early developers of community networks (mentioned in Section 1.2) had hopeful visions of information technology (IT) facilitating participatory democracy. Yet the most memorable visions have been literary dystopias, where surveillance is omnipresent and governments have absolute control. We shall begin by highlighting some of these important writings. We shall then consider a current topic—the cultural and legal frameworks governing free speech and other forms of expression on the internet. We review several kinds of ‘undesirable’ speech that test our commitment to free speech—messages that are viewed as obscene, hateful, seditious, or encouraging of terrorism. Next, we examine methods governments worldwide use to censor web content and prevent digital transmission of messages of which they disapprove, as well as a similar role for social media firms in what is now known as content moderation. We shall also mention one new form of rampant and very harmful internet speech—fake news. Fake news becomes especially troubling when it is released into and retransmitted widely into filter bubbles that select these messages and echo chambers that focus and sensationalize such points of view to the exclusion of other contradictory ideas. The prevalence and dangers of fake news became obvious during post facto analyses of the 2016 US presidential campaign. The internet and social media enable greater civic participation, which is usually called e-democracy or civic tech. Most such uses of social media are relatively benign, as in online deliberations about the desired size of a bond issue, or internet lobbying to get libraries to stay open longer during the summer. However, for more significant issues, such as violations of fundamental human rights, or unpopular political decisions that incite public unrest, social media communications may facilitate political protest that can lead to political change. IT also plays a role in elections—social media can be used to mobilize the electorate and build enthusiasm for a candidate. Correspondingly, surveys and big data are used to target potential voters during political campaigns and to tailor specific messages to key voters.
APA, Harvard, Vancouver, ISO, and other styles
5

"far, far cry from the broad swathe beaten to the British market by soaps ranging from The Sullivans to Flying Doctors and from Prisoner: Cell Block H to Country Practice which preceded the Neighbours phenomenon there. “The accents” were constantly cited as a crucial point of resistance. KCOP: “People couldn’t understand the Australian accent” (Inouye 1992). WWOR: “We received some complaints about accents, but maybe that’s not the real issue” (Darby 1992). KCOP: “The actors are unknown, and it takes place in a country that few people know about” (Inouye 1992). WWOR: “One problem with anything from out of this country is making the transition from one country to the next. We’re all chauvinists, I guess. We want to see American actors in American stuff” (Leibert 1992). The tenor of these reflections in fact gainsays the New York Daily News’s own report five days prior to Neighbours’s first New York transmission: The program was test-marketed in both cities, and viewers were asked whether they prefer [sic] the original Australian version or the same plots with American actors. “All of them chose the Australian program over the US version,” Pinne said. It won’t hurt, he added, that a program from Australia will be perceived as “a little bit of exotica” without subtitles. (Alexander 1991: 23) The station’s verdict within three months was clearly less sanguine. Australian material did not stay the course, even as exotica. Two additional factors militated against Neighbours’s US success: scheduling, and the length of run required to build up a soap audience. Scheduling was a key factor of the US “mediascape” which contributed to the foundering of Neighbours. Schedule competition tends to squeeze the untried and unknown into the 9–5 time slots. Whatever its British track-record, the Australian soap had no chance of a network sale in the face of the American soaps already locked in mortal combat over the ratings. The best time for Neighbours on US television, between 6:00 p.m. and 7:00 p.m., could be met no better by the independent stations. For the 6:00–8:00 p.m. period, when the networks run news, are the independents’ most competitive time slots, representing their best opportunity to attract viewers away from the networks – principally by rerunning network sitcoms such as The Cosby Show and Cheers. An untried foreign show, Neighbours simply would not, in executives’ views, have pleased advertisers enough; it was too great a risk. Even the 5:00–6:00 p.m. hour, which well suited Neighbours’s youth audience, was denied it in Los Angeles after its first month, with its ratings dropping from 4 per cent to 1 per cent as a consequence. Cristal lamented most the fourth factor contributing to Neighbours’s demise: the stations’ lack of perseverance with it, giving it only three-month runs either side of the States. This is the crucial respect in which public service broadcasting might have benefited it, by probably giving it a longer run. Until the late 1980s, when networks put on a daytime soap, they would." In To Be Continued..., 121. Routledge, 2002. http://dx.doi.org/10.4324/9780203131855-23.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Next Bit Tests"

1

Banerjee, Subharthi, Michael Hempel, Pejman Ghasemzadeh, Hamid Sharif, and Tarek Omar. "Wireless Communication for High-Speed Passenger Rail Services: A Study on the Design and Evaluation of a Unified Architecture." In 2020 Joint Rail Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/jrc2020-8068.

Full text
Abstract:
High-speed trains, though prevalent in Europe and Asia, are not yet a reality in the US. But interest and industry engagement are growing, especially around commercial hubs close to commuter homes for alleviating commute times. With support from the Federal Railroad Administration in the United States, the authors are exploring the design requirements, challenges, and technology capabilities for wireless communication between passenger cars, on-board systems and with trackside infrastructure, all using next-generation radio access technologies. Key aspects of this work focus on interoperability, modularity of the architecture to facilitate a future-proof design, high-performance operations for passenger services and ultra-low latency capabilities for train control operations. This paper presents the theoretical studies and computer simulations of the proposed network architectures, as well as the results of an LTE/5G field test framework using an OpenAir-Interface (OAI)-based software-defined radio (SDR) approach. Through various test scenarios the OAI LTE/5G implementation is first evaluated in a lab environment and through field tests. These tests provide ground-truth data that can be leveraged to refine the computer simulation model for evaluating large-scale environments with high fidelity and high accuracy. Of particular focus in this evaluation are performance aspects related to delay, handover, bit error rate, frequency offset and achievable uplink/downlink throughput.
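The link-level metrics named at the end of this abstract, such as bit error rate and throughput, reduce to simple comparisons of transmitted and received bit streams. The sketch below is a minimal, self-contained illustration in Python; the simulated bit streams, error probability, and 10 ms transmission window are assumptions for illustration and are not taken from the OAI field tests.

```python
# Minimal sketch: bit error rate and throughput from simulated tx/rx bit streams.
import numpy as np

rng = np.random.default_rng(0)
tx_bits = rng.integers(0, 2, size=1_000_000)            # transmitted bits (simulated)
noise_flips = rng.random(tx_bits.size) < 1e-4            # flip roughly 0.01% of bits to mimic channel errors
rx_bits = tx_bits ^ noise_flips.astype(tx_bits.dtype)    # received bits

ber = np.mean(tx_bits != rx_bits)                        # bit error rate
throughput_mbps = (rx_bits.size / 1e6) / 0.010           # bits delivered in a hypothetical 10 ms window, in Mbit/s
print(f"BER ≈ {ber:.2e}, throughput ≈ {throughput_mbps:.0f} Mbit/s (illustrative)")
```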
APA, Harvard, Vancouver, ISO, and other styles
2

Ahmed, Daiyan, Yingjian Xiao, Jeronimo de Moura, and Stephen D. Butt. "Drilling Cutting Analysis to Assist Drilling Performance Evaluation in Hard Rock Hole Widening Operation." In ASME 2020 39th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/omae2020-19286.

Full text
Abstract:
Optimum production from vein-type deposits requires the Narrow Vein Mining (NVM) process where excavation is accomplished by drilling larger diameter holes. To drill into the veins to successfully extract the ore deposits, a conventional rotary drilling rig is mounted on the ground. These operations are generally conducted by drilling a pilot hole in a narrow vein followed by a hole widening operation. Initially, a pilot hole is drilled for exploration purposes, to guide the larger diameter hole and to control the trajectory, and the next step in the excavation proceeds with a hole widening operation. Drilling cutting properties, such as particle size distribution, volume, and shape, may expose a significant drilling problem or may provide justification for performance enhancement decisions. In this study, a laboratory hole widening drilling process performance was evaluated by drilling cutting analysis. Drill-off Tests (DOT) were conducted in the Drilling Technology Laboratory (DTL) by dint of a Small Drilling Simulator (SDS) to generate the drilling parameters and to collect the cuttings. Different drilling operations were assessed based on Rate of Penetration (ROP), Weight on Bit (WOB), Rotation per Minute (RPM), Mechanical Specific Energy (MSE) and Drilling Efficiency (DE). A conducive schedule for achieving the objectives was developed, in addition to cuttings for further interpretation. A comprehensive study of the hole widening operation was conducted, involving intensive drilling cutting analysis, drilling parameters, and drilling performance, leading to recommendations for full-scale drilling operations.
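Mechanical specific energy (MSE), one of the metrics listed above, is commonly estimated with Teale's formulation: an axial term WOB/A plus a rotary term 2π·N·T/(A·ROP), where A is the bit cross-sectional area, N the rotary speed, and T the torque. The sketch below is a minimal SI-unit illustration; the function name and the example numbers are assumptions, not values from the DOT experiments.

```python
# Minimal sketch: Teale-style mechanical specific energy (MSE) in SI units.
import math

def mechanical_specific_energy(wob_n, torque_nm, rpm, rop_m_per_hr, bit_diameter_m):
    """Return MSE in pascals (J/m^3) from WOB [N], torque [N*m], rotary speed [rev/min],
    ROP [m/h], and bit diameter [m]."""
    area = math.pi * (bit_diameter_m / 2.0) ** 2                     # bit cross-sectional area [m^2]
    rop_m_per_s = rop_m_per_hr / 3600.0                              # convert ROP to m/s
    thrust_term = wob_n / area                                       # axial (thrust) contribution
    rotary_term = (2.0 * math.pi * (rpm / 60.0) * torque_nm) / (area * rop_m_per_s)
    return thrust_term + rotary_term

# Example with made-up lab-scale numbers (not values from the study):
mse = mechanical_specific_energy(wob_n=2000, torque_nm=15, rpm=300,
                                 rop_m_per_hr=1.2, bit_diameter_m=0.096)
print(f"MSE ≈ {mse / 1e6:.1f} MPa")
```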
APA, Harvard, Vancouver, ISO, and other styles
3

Levinger, M., A. Ziv, B. Bailey, J. Abraham, B. Bentley, B. Joyner, and Y. Kashai. "What's the next 'big thing' in simulation-based verification?" In Eighth IEEE International High-Level Design Validation and Test Workshop. IEEE, 2003. http://dx.doi.org/10.1109/hldvt.2003.1252493.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Marculescu, R., J. Rabaey, and A. Sangiovanni-Vincentelli. "Is “Network” the Next “Big Idea” in Design?" In 2006 Design, Automation and Test in Europe. IEEE, 2006. http://dx.doi.org/10.1109/date.2006.244112.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Nakayama, Mitusyuki, and Hideto Suzuki. "Analysis on Acoustic Emission Produced by Strength Test of Bio-Ceramics: Dependency of AE Parameters on the Material Property." In ASME 2003 International Electronic Packaging Technical Conference and Exhibition. ASMEDC, 2003. http://dx.doi.org/10.1115/ipack2003-35012.

Full text
Abstract:
Recently, advanced materials for use as bio-ceramics have been developed. An important feature of these materials is their considerably improved biocompatibility. However, they also present a problem: the strength of the bio-ceramic decreases as biocompatibility improves. The main purpose of this paper is to clarify the acoustic emission (AE) characteristics of bio-ceramics during the breaking test, which have not been discussed sufficiently. Therefore, AE parameters are calculated from the waveforms of the AE signals emitted during the breaking test. Next, the relation between the AE parameters and the material properties of the bio-ceramics is discussed in order to clarify how micro-structure elements affect the mechanical characteristics. As a result, the AE parameters are found to depend strongly on the micro-structure elements in the body of the bio-ceramic. Consequently, it is clarified that the acoustic emission method agrees well with the mechanical characteristics.
APA, Harvard, Vancouver, ISO, and other styles
6

Beal, Aaron, Dave Dae-Wook Kim, Kyung-Hee Park, and Patrick Kwon. "A Comparative Study of Carbide Tools in Drilling of CFRP and CFRP-Ti Stacks." In ASME 2011 International Manufacturing Science and Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/msec2011-50114.

Full text
Abstract:
A comparative study was conducted to investigate drilling of a titanium (Ti) plate stacked on a carbon fiber reinforced plastic panel. The effects on tool wear and hole quality in drilling using micrograin tungsten carbide (WC) tools were analyzed. The experiments were designed to first drill CFRP alone to create 20 holes. Then CFRP-Ti stacks were drilled for the next 20 holes with the same drill bit. This process was repeated until drill failure. The drilling was done with tungsten carbide (WC) twist drills at two different speeds (high and low). The feed rate was kept the same for each test, but differs for each material drilled. A Scanning Electron Microscope (SEM), and a Confocal Laser Scanning Microscope (CLSM), were used for tool wear analysis. Hole size and profile, surface roughness, and Ti burrs were analyzed using a coordinate measuring system, profilometer, and an optical microscope with a digital measuring device. The experimental results indicate that the Ti drilling accelerated WC flank wear while CFRP drilling deteriorated the cutting edge. Entry delamination, hole diameter errors, and surface roughness of the CFRP plate became more pronounced during drilling of CFRP-Ti stacks, when compared with the results from CFRP only drilling. Damage to CFRP holes during CFRP-Ti stack drilling may be caused by Ti chips, Ti adhesion on the tool outer edge, and increased instability as the drill bits wear.
APA, Harvard, Vancouver, ISO, and other styles
7

Joshi, Deep R., Alfred W. Eustes, Jamal Rostami, and Christopher Dreyer. "Evaluating Data-Driven Techniques to Optimize Drilling on the Moon." In SPE/IADC International Drilling Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/204108-ms.

Full text
Abstract:
Several companies and countries have announced plans to drill at the lunar South Pole in the next five years. The drilling process on the Moon or any other planetary body is similar to other exploration drilling that uses rotary drills, for example oil and gas drilling. However, the key performance indicators (KPIs) for this type of drilling are significantly different. This work aimed to develop algorithms to optimize drilling on the Moon based on experience with terrestrial drilling in related industries. A test drilling unit was designed and fabricated under a NASA Early Stage Innovation (ESI) grant; a high-frequency data acquisition system was used to record drilling responses at 1000 Hz. Parameters like weight on bit (WOB), torque, RPM, rate of penetration (ROP), mechanical specific energy (MSE), field penetration index (FPI), and uniaxial compressive strength (UCS) were recorded for 40 boreholes in the analog formations. This work utilizes the large dataset, comprising more than 1 billion data points recorded while drilling into various lunar analog and cryogenic lunar formations, to optimize power consumption and bit wear during drilling operations. The dataset was processed to minimize noise, and the effect of drilling dysfunctions like auger choking and bit wear was also removed. Extensive feature engineering was performed to identify how each parameter affects power consumption and bit wear. The data was then used to train various regression algorithms based on machine learning approaches such as random forest, gradient boosting, support vector machines, logistic regression, polynomial regression, and artificial neural networks, to evaluate the applicability of each approach in optimizing power consumption using control variables like RPM and penetration rate. The best-performing algorithm, judged on ease of application, runtime, and accuracy, was selected to provide recommendations for ROP and RPM that would result in minimum power consumption and bit wear for a specific bit design. Since the target location for most lunar expeditions is in permanently shadowed regions, the power available for a drilling operation is extremely limited, and bit wear will significantly affect mission life. The algorithms developed here would be vital in ensuring efficient and successful operations on the Moon, leading to more robust exploration of the targeted lunar regions.
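As a rough sketch of the kind of regression comparison the abstract describes, the snippet below trains two of the named model families with scikit-learn and compares their held-out accuracy at predicting power consumption from drilling parameters. The file name, column names, and hyperparameters are assumptions for illustration; the paper's actual dataset and feature engineering are not reproduced here.

```python
# Minimal sketch: compare two regression models for predicting drilling power consumption.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("drilling_log.csv")                      # hypothetical high-frequency drilling log
features = ["wob_n", "rpm", "rop_mm_per_s", "torque_nm"]  # hypothetical control/response columns
target = "power_w"                                        # hypothetical power-consumption column

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=0)

models = {
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "R^2 on held-out data:", round(r2_score(y_test, model.predict(X_test)), 3))
```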
APA, Harvard, Vancouver, ISO, and other styles
8

Kabassi, Koudous, and Yong K. Cho. "BLCC Analysis Derived from BIM and Energy Data of Zero Net Energy Test Home." In International Conference on Sustainable Design and Construction (ICSDC) 2011. Reston, VA: American Society of Civil Engineers, 2012. http://dx.doi.org/10.1061/41204(426)37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yu, John Luis, and Edwin N. Quiros. "Performance Characteristics of Philippine Hydrous Ethanol-Gasoline Blends: Preliminary Findings." In ASME 2019 13th International Conference on Energy Sustainability collocated with the ASME 2019 Heat Transfer Summer Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/es2019-3824.

Full text
Abstract:
To reduce dependence on imported fossil fuels and develop indigenous biofuels, the Philippines enacted the Biofuels Act of 2006, which currently mandates a 10% by volume blend of 99.6% anhydrous bio-ethanol for commercially sold Unleaded and Premium gasolines. To urge a regulatory review of the anhydrous requirement and examine the suitability of hydrous bioethanol (HBE) blends for automotive use, preliminary engine dynamometer tests at 1400–4400 rpm were conducted to measure specific fuel consumption (SFC) and power. In this study, HBE (95% ethanol and 5% water by volume), produced from sweet sorghum using a locally developed process, was blended volumetrically with three base gasoline fuels: Neat, Unleaded, and Premium. The four HBE blends tested were 10% and 20% with Neat gasoline, 20% with Unleaded gasoline, and 20% with Premium gasoline. For blends with Neat gasoline, the SFC of the 10% HBE blend ranged from comparable with to slightly higher than that of Neat gasoline. The SFC of the 20% HBE blend was comparable with Neat gasoline up to 2800 rpm and lower beyond this speed, making it better overall than the 10% HBE blend. Compared to their respective commercial base fuels, the HBE-Unleaded blend showed lower SFC while the HBE-Premium blend yielded slightly higher SFC over most of the engine speed range. Between the commercial fuel blends, the HBE-Unleaded blend gave better SFC than the HBE-Premium blend. Power was practically similar for the fuels tested. No engine operational problems or fuel blend phase separation were encountered during the tests. This preliminary study indicated the suitability of, and possible optimum, hydrous bio-ethanol blends for automotive use under Philippine conditions.
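Specific fuel consumption from dynamometer runs is conventionally reported as fuel mass flow divided by brake power, in g/kWh. The short calculation below is only an illustration of that definition; the function and the numbers in it are placeholders, not measurements from this study.

```python
# Minimal sketch: brake-specific fuel consumption (g/kWh) from dynamometer readings.
import math

def brake_specific_fuel_consumption(fuel_g_per_s, torque_nm, rpm):
    """Return BSFC in g/kWh from fuel mass flow [g/s], torque [N*m], and engine speed [rev/min]."""
    power_kw = torque_nm * rpm * 2 * math.pi / 60 / 1000   # brake power [kW]
    return fuel_g_per_s * 3600 / power_kw                  # grams of fuel per kWh delivered

# Example with placeholder numbers:
print(round(brake_specific_fuel_consumption(fuel_g_per_s=2.0, torque_nm=90, rpm=2800), 1), "g/kWh")
```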
APA, Harvard, Vancouver, ISO, and other styles
10

Versen, Martin, and Michael Hayn. "Introduction to Verification and Test Using a 4-Bit Arithmetic Logic Unit Including a Failure Module in a Xilinx XC9572XL CPLD." In ISTFA 2014. ASM International, 2014. http://dx.doi.org/10.31399/asm.cp.istfa2014p0533.

Full text
Abstract:
In order to educate students in a practical way, a test object for a lab course is created: shorts and opens in an electrical model of physical defects are injected into a netlist of a 4-bit arithmetic logic unit and are implemented in a Xilinx CPLD 9572XL. The fails are electrically controllable and observable in verification and electrical hardware test. By using a Test Access Port (TAP), the fails are analyzed in terms of their root cause. The arithmetic logic unit is used as a key component for lab exercises that complement the test part of an Integrated Circuit System Design and Test course in the master's program in Electrical Engineering and Information Technology at the University of Applied Sciences in Rosenheim. The labs include an introduction to a HILEVEL Griffin III test system, creation of pin and test setup, the import of vector files from verification test benches, control of a scan test engine and analysis of scan test data.
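The fault-injection idea in this abstract, comparing a fault-free circuit against a copy with an injected defect to see which test vectors expose it, can be illustrated in a few lines of software. The sketch below models a stuck-at-0 fault on one output bit of a 4-bit adder and enumerates the detecting vectors; it is a teaching illustration under assumed fault and bit positions, not the authors' CPLD netlist or HILEVEL test flow.

```python
# Minimal sketch: detect an injected stuck-at-0 fault on a 4-bit adder output.
def adder4(a, b, stuck_at0_bit=None):
    """4-bit adder returning a 5-bit result; optionally force one result bit to 0."""
    s = (a + b) & 0x1F
    if stuck_at0_bit is not None:
        s &= ~(1 << stuck_at0_bit)          # model a stuck-at-0 defect on that output bit
    return s

# Exhaustively find input vectors whose fault-free and faulty outputs differ (fault on bit 1 is hypothetical).
detecting_vectors = [
    (a, b) for a in range(16) for b in range(16)
    if adder4(a, b) != adder4(a, b, stuck_at0_bit=1)
]
print(f"{len(detecting_vectors)} of 256 input vectors detect the injected fault")
print("example detecting vector:", detecting_vectors[0])
```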
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Next Bit Tests"

1

Clausen, Jay, D. Moore, L. Cain, and K. Malinowski. VI preferential pathways : rule or exception. Engineer Research and Development Center (U.S.), July 2021. http://dx.doi.org/10.21079/11681/41305.

Full text
Abstract:
Trichloroethylene (TCE) releases from leaks and spills next to a large government building occurred over several decades, with the most recent event occurring 20 years ago. In response to a perceived conventional vapor intrusion (VI) issue, a sub-slab depressurization system (SSDS) was installed 6 years ago. The SSDS is operating within design limits and has achieved building TCE vapor concentration reductions. However, subsequent periodic TCE vapor spikes based on daily HAPSITE™ measurements indicate additional source(s). Two rounds of smoke tests conducted in 2017 and 2018 involved introduction of smoke into sanitary sewer and storm drain manholes located on effluent lines coming from the building until smoke was observed exiting system vents on the roof. Smoke testing revealed many leaks in both the storm sewer and sanitary sewer systems within the building. Sleuthing of the VI source term using a portable HAPSITE™ indicates that elevated TCE vapor levels correspond with observed smoke emanation from utility lines. In some instances, smoke odors were perceived but no leak or suspect pipe was identified, suggesting the odor originates from an unidentified pipe located behind or enclosed in a wall. Sleuthing activities also found that building roof materials explain some of the elevated TCE levels on the 2nd floor. A relationship was found between TCE concentrations in the roof truss area, the plenum space above 2nd floor offices, and the breathing zone of 2nd floor offices. Installation of an external blower in the roof truss space has greatly reduced TCE levels in the plenum and office spaces. Preferential VI pathways and unexpected source terms may be overlooked mechanisms as compared to conventional VI.
APA, Harvard, Vancouver, ISO, and other styles
2

Holland, Darren, and Nazmina Mahmoudzadeh. Foodborne Disease Estimates for the United Kingdom in 2018. Food Standards Agency, January 2020. http://dx.doi.org/10.46756/sci.fsa.squ824.

Full text
Abstract:
In February 2020 the FSA published two reports which produced new estimates of foodborne norovirus cases. These were the ‘Norovirus Attribution Study’ (NoVAS study) (O’Brien et al., 2020) and the accompanying internal FSA technical review ‘Technical Report: Review of Quantitative Risk Assessment of foodborne norovirus transmission’ (NoVAS model review) (Food Standards Agency, 2020). The NoVAS study produced a Quantitative Microbiological Risk Assessment model (QMRA) to estimate foodborne norovirus. The NoVAS model review considered the impact of using alternative assumptions and other data sources on these estimates. From these two pieces of work, a revised estimate of foodborne norovirus was produced. The FSA has therefore updated its estimates of annual foodborne disease to include these new results and also to take account of more recent data related to other pathogens. The estimates produced include:
• Estimates of GP presentations and hospital admissions for foodborne norovirus based on the new estimates of cases. The NoVAS study only produced estimates for cases.
• Estimates of foodborne cases, GP presentations and hospital admissions for 12 other pathogens
• Estimates of unattributed cases of foodborne disease
• Estimates of total foodborne disease from all pathogens
Previous estimates
An FSA funded research project ‘The second study of infectious intestinal disease in the community’, published in 2012 and referred to as the IID2 study (Tam et al., 2012), estimated that there were 17 million cases of infectious intestinal disease (IID) in 2009. These include illness caused by all sources, not just food. Of these 17 million cases, around 40% (around 7 million) could be attributed to 13 known pathogens. These pathogens included norovirus. The remaining 60% of cases (equivalent to 10 million cases) were unattributed cases. These are cases where the causal pathogen is unknown. Reasons for this include the causal pathogen was not tested for, the test was not sensitive enough to detect the causal pathogen, or the pathogen is unknown to science. A second project ‘Costed extension to the second study of infectious intestinal disease in the community’, published in 2014 and known as the IID2 extension (Tam, Larose and O’Brien, 2014), estimated that there were 566,000 cases of foodborne disease per year caused by the same 13 known pathogens. Although a proportion of the unattributed cases would also be due to food, no estimate was provided for this in the IID2 extension.
New estimates
We estimate that there were 2.4 million cases of foodborne disease in the UK in 2018 (95% credible intervals 1.8 million to 3.1 million), with 222,000 GP presentations (95% Cred. Int. 150,000 to 322,000) and 16,400 hospital admissions (95% Cred. Int. 11,200 to 26,000). Of the estimated 2.4 million cases, 0.9 million (95% Cred. Int. 0.7 million to 1.2 million) were from the 13 known pathogens included in the IID2 extension and 1.4 million (95% Cred. Int. 1.0 million to 2.0 million) were unattributed cases. Norovirus was the pathogen with the largest estimate with 383,000 cases a year. However, this estimate is within the 95% credible interval for Campylobacter of 127,000 to 571,000. The pathogen with the next highest number of cases was Clostridium perfringens with 85,000 (95% Cred. Int. 32,000 to 225,000). While the methodology used in the NoVAS study does not lend itself to producing credible intervals for cases of norovirus, this does not mean that there is no uncertainty in these estimates. There were a number of parameters used in the NoVAS study which, while based on the best science currently available, were acknowledged to have uncertain values. Sensitivity analysis undertaken as part of the study showed that changes to the values of these parameters could make big differences to the overall estimates. Campylobacter was estimated to have the most GP presentations with 43,000 (95% Cred. Int. 19,000 to 76,000), followed by norovirus with 17,000 (95% Cred. Int. 11,000 to 26,000) and Clostridium perfringens with 13,000 (95% Cred. Int. 6,000 to 29,000). For hospital admissions, Campylobacter was estimated to have 3,500 (95% Cred. Int. 1,400 to 7,600), followed by norovirus with 2,200 (95% Cred. Int. 1,500 to 3,100) and Salmonella with 2,100 admissions (95% Cred. Int. 400 to 9,900). As many of these credible intervals overlap, any ranking needs to be undertaken with caution. While the estimates provided in this report are for 2018, the methodology described can be applied to future years.
APA, Harvard, Vancouver, ISO, and other styles
3

Financial Stability Report - First Semester of 2020. Banco de la República de Colombia, March 2021. http://dx.doi.org/10.32468/rept-estab-fin.1sem.eng-2020.

Full text
Abstract:
In the face of the multiple shocks currently experienced by the domestic economy (resulting from the drop in oil prices and the appearance of a global pandemic), the Colombian financial system is in a position of sound solvency and adequate liquidity. At the same time, credit quality has been recovering and the exposure of credit institutions to firms with currency mismatches has declined relative to previous episodes of sudden drops in oil prices. These trends are reflected in the recent fading of red and blue tonalities in the performance and credit risk segments of the risk heatmaps in Graphs A and B. Naturally, the sudden, unanticipated change in macroeconomic conditions has caused the appearance of vulnerabilities for short-term financial stability. These vulnerabilities require close and continuous monitoring on the part of economic authorities. The main vulnerability is the response of credit and credit risk to a potential, temporarily extreme macroeconomic situation in the context of: (i) recently increased exposure of some banks to the household sector, and (ii) reductions in net interest income that have led to a decline in the profitability of the banking business in the recent past. Furthermore, as a consequence of greater uncertainty and risk aversion, occasional problems may arise in the distribution of liquidity between agents and financial markets. With regard to local markets, spikes have been registered in the volatility of public and private fixed income securities in recent weeks that are consistent with the behavior of the international markets and have had a significant impact on the liquidity of those instruments (red portions in the most recent past of some market risk items on the map in Graph A). In order to adopt a forward-looking approach to those vulnerabilities, this Report presents a stress test that evaluates the resilience of credit institutions in the event of a hypothetical scenario that seeks to simulate an extreme version of current macroeconomic conditions. The scenario assumes a hypothetical negative growth that is temporarily strong but recovers going into the middle of the coming year and has extreme effects on credit quality. The results suggest that credit institutions have the ability to withstand a significant deterioration in economic conditions in the short term. Even though there could be a strong impact on credit, liquidity, and profitability under the scenario being considered, aggregate capital ratios would probably remain above their regulatory limits over the horizon of a year. In this context, the recent measures taken by both Banco de la República and the Office of the Financial Superintendent of Colombia that are intended to help preserve the financial stability of the Colombian economy become highly relevant. In compliance with its constitutional objectives and in coordination with the financial system’s security network, Banco de la República will continue to closely monitor the outlook for financial stability at this juncture and will make the decisions that are necessary to ensure the proper functioning of the economy, facilitate the flow of sufficient credit and liquidity resources, and further the smooth functioning of the payment system. Juan José Echavarría, Governor
APA, Harvard, Vancouver, ISO, and other styles