
Dissertations / Theses on the topic 'Sample rate'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Sample rate.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Rothacher, Fritz Markus. "Sample-rate conversion : algorithms and VLSI implementation /." [Konstanz] : Hartung-Gorre, 1995. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=10980.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Walke, Richard Lewis. "High sample-rate Givens rotations for recursive least squares." Thesis, University of Warwick, 1997. http://wrap.warwick.ac.uk/36283/.

Full text
Abstract:
The design of an application-specific integrated circuit of a parallel array processor is considered for recursive least squares by QR decomposition using Givens rotations, applicable in adaptive filtering and beamforming applications. Emphasis is on high sample-rate operation, which, for this recursive algorithm, means that the time to perform arithmetic operations is critical. The algorithm, architecture and arithmetic are considered in a single integrated design procedure to achieve optimum results. A realisation approach using the standard arithmetic operators add, multiply and divide is adopted. The design of high-throughput operators with low delay is addressed for fixed- and floating-point number formats, and the application of redundant arithmetic considered. New redundant multiplier architectures are presented enabling reductions in area of up to 25%, whilst maintaining low delay. A technique is presented enabling the use of a conventional tree multiplier in recursive applications, allowing savings in area and delay. Two new divider architectures are presented showing benefits compared with the radix-2 modified SRT algorithm. Givens rotation algorithms are examined to determine their suitability for VLSI implementation. A novel algorithm, based on the Squared Givens Rotation (SGR) algorithm, is developed enabling the sample-rate to be increased by a factor of approximately 6 and offering area reductions up to a factor of 2 over previous approaches. An estimated sample-rate of 136 MHz could be achieved using a standard cell approach and 0.35 µm CMOS technology. The enhanced SGR algorithm has been compared with a CORDIC approach and shown to benefit by a factor of 3 in area and over 11 in sample-rate. When compared with a recent implementation on a parallel array of general purpose (GP) DSP chips, it is estimated that a single application-specific chip could offer up to 1,500 times the computation obtained from a single GP DSP chip.
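As a point of reference for the core operation here, the sketch below shows a plain NumPy Givens-rotation update that folds one new data row into an upper-triangular factor, the per-sample step of QR-decomposition-based RLS; it is illustrative only and not the dissertation's enhanced SGR algorithm or its fixed-point VLSI arithmetic.

import numpy as np

def givens(a, b):
    # Return (c, s) such that [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T.
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def qr_update(R, x):
    # Fold a new data row x into the upper-triangular factor R by a sequence of
    # Givens rotations (the per-sample step of QRD-based recursive least squares).
    R = R.copy()
    x = np.asarray(x, dtype=float).copy()
    for i in range(len(x)):
        c, s = givens(R[i, i], x[i])
        Ri = R[i, :].copy()
        R[i, :] = c * Ri + s * x
        x = -s * Ri + c * x
    return R

# Toy usage: accumulate 20 random data rows into a 4x4 triangular factor.
rng = np.random.default_rng(0)
R = np.zeros((4, 4))
for row in rng.standard_normal((20, 4)):
    R = qr_update(R, row)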
APA, Harvard, Vancouver, ISO, and other styles
3

Liu, Kit-ying Ida. "Empirical exchange rate models : out-of-sample forecasts for the HK$/Yen exchange rate /." Hong Kong : University of Hong Kong, 1997. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20666895.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lee, Jeffrey C. "Design Considerations for a Variable Sample Rate Signal Conditioning Module." International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606212.

Full text
Abstract:
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California
Modern telemetry systems require flexible sampling rates for analog signal conditioning within telemetry encoders in order to optimize mission formats for varying data acquisition needs and data rate constraints. Implementing a variable sample rate signal conditioning module for a telemetry encoder requires consideration of several possible architectural topologies that place different system requirements on data acquisition modules within the encoder in order to maintain adequate signal fidelity of sensor information. This paper focuses on the requirements, design considerations and tradeoffs associated with differing architectural topologies for implementing a variable sample rate signal conditioning module and the resulting implications on the encoder system's data acquisition units.
APA, Harvard, Vancouver, ISO, and other styles
5

De, Boyrie Maria Eugenia. "Out-of-sample exchange rate forecasting structural and non-structural nonlinear approaches." FIU Digital Commons, 1994. http://digitalcommons.fiu.edu/etd/2727.

Full text
Abstract:
Forecasting foreign exchange rates is a perennial dilemma for exporters, importers, foreign exchange rate traders, and the business community as a whole. Foreign exchange rate models using popular linear and non-linear specifications do not produce particularly accurate forecasts. In point of fact, these models have not improved much upon the random walk model, especially in out-of-sample forecasting. Given these results, this dissertation constructs and evaluates new forecasting models to generate as accurate as possible out-of-sample forecasts of foreign exchange rates. The information content of futures contracts on foreign exchange rates is investigated and used to forecast future exchange rates using alternative techniques, both structural (econometric) and non-structural (fuzzy) models. The results of two specifications of a structural model are compared against the well-known random walk model. The first specification assumes future exchange rates are determined by futures prices and a lagged structure of spot rates. The second specification assumes that future spot rates are a function of only a lagged structure of the futures prices. The forecasting accuracy of the models is tested for both in-sample and out-of-sample periods; out-of-sample tests range from the short term to the long term (30- to 180-day forecasts). The results indicate that the random walk model remains a competitive alternative. In out-of-sample predictions, however, we can improve upon it in certain cases. The results also show that the predictive accuracy of the models is better in the short term (30 to 60 days) than in the longer term (180 days).
APA, Harvard, Vancouver, ISO, and other styles
6

Guo, Ruijuan. "Sample comparisons using microarrays -- application of false discovery rate and quadratic logistic regression." Worcester, Mass. : Worcester Polytechnic Institute, 2007. http://www.wpi.edu/Pubs/ETD/Available/etd-010808-173747/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Guo, Ruijuan. "Sample comparisons using microarrays: - Application of False Discovery Rate and quadratic logistic regression." Digital WPI, 2008. https://digitalcommons.wpi.edu/etd-theses/28.

Full text
Abstract:
In microarray analysis, people are interested in those features that behave differently in diseased samples compared to normal samples. The usual p-value method of selecting significant genes either gives too many false positives or cannot detect all the significant features. The False Discovery Rate (FDR) method controls false positives and at the same time selects significant features. We introduced Benjamini's method and Storey's method to control FDR and applied the two methods to human Meningioma data. We found that Benjamini's method is more conservative and that, once the number of tests exceeds a threshold, an increase in the number of tests leads to a decrease in the number of significant genes. In the second chapter, we investigate ways to find interesting gene expressions that cannot be detected by linear models such as the t-test or ANOVA. We propose a novel approach using quadratic logistic regression to detect genes in the Meningioma data that have a non-linear relationship with their phenotypes. By using quadratic logistic regression, we can find genes whose expression correlates with their phenotypes both linearly and quadratically. Whether these genes have clinical significance is a very interesting question, since such genes would most likely be neglected by the traditional linear approach.
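As a concrete reference for Benjamini's method mentioned above, here is a minimal Python sketch of the Benjamini-Hochberg step-up procedure applied to simulated p-values; the mixture of nulls and signals is an illustrative assumption, and statsmodels' multipletests with method='fdr_bh' makes the same decisions.

import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    # Benjamini-Hochberg step-up procedure: return a boolean mask of rejected
    # hypotheses, controlling the false discovery rate at level q.
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    below = ranked <= (np.arange(1, m + 1) / m) * q   # largest k with p_(k) <= k/m * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

# Toy usage with simulated p-values (most nulls true, a few strong signals).
rng = np.random.default_rng(1)
pvals = np.concatenate([rng.uniform(size=950), rng.uniform(0, 1e-4, size=50)])
print(benjamini_hochberg(pvals, q=0.05).sum(), "features declared significant")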
APA, Harvard, Vancouver, ISO, and other styles
8

Peng, Song. "RESTORE PCM TELEMETRY SIGNAL WAVEFORM BY MAKING USE OF MULTI-SAMPLE RATE INTERPOLATION TECHNOLOGY." International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/607318.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada
There are two common misconceptions about PCM telemetry systems in the conventional view: that a waveform cannot be restored accurately, and that, to be restored accurately, a measured signal must be sampled at a higher sample rate. This paper shows that, by making use of multi-sample-rate DSP technology, the sample rate of a measured signal can be reduced in the transmission equipment, or system precision can be retained even if the performance of the low-pass filter declines.
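A minimal SciPy sketch of the multirate idea (reduce the sample rate before transmission, then interpolate back with a polyphase filter) is given below; the rates and the test signal are assumptions for illustration, not the paper's telemetry configuration.

import numpy as np
from scipy.signal import resample_poly

fs = 8000.0                                  # assumed original sample rate
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 450 * t)

x_low = resample_poly(x, up=1, down=4)       # reduce the rate for "transmission"
x_rec = resample_poly(x_low, up=4, down=1)   # interpolate back to the original rate

# Compare away from the edges, where the anti-aliasing filter transient dominates.
err = np.max(np.abs(x_rec[50:-50] - x[50:-50]))
print(f"max reconstruction error: {err:.4f}")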
APA, Harvard, Vancouver, ISO, and other styles
9

Ghanavati, Goodarz. "Statistical Analysis of High Sample Rate Time-series Data for Power System Stability Assessment." ScholarWorks @ UVM, 2015. http://scholarworks.uvm.edu/graddis/333.

Full text
Abstract:
The motivation for this research is to leverage the increasing deployment of the phasor measurement unit (PMU) technology by electric utilities in order to improve situational awareness in power systems. PMUs provide unprecedentedly fast and synchronized voltage and current measurements across the system. Analyzing the big data provided by PMUs may prove helpful in reducing the risk of blackouts, such as the Northeast blackout in August 2003, which have resulted in huge costs in past decades. In order to provide deeper insight into early warning signs (EWS) of catastrophic events in power systems, this dissertation studies changes in statistical properties of high-resolution measurements as a power system approaches a critical transition. The EWS under study are increases in variance and autocorrelation of state variables, which are generic signs of a phenomenon known as critical slowing down (CSD). Critical slowing down is the result of slower recovery of a dynamical system from perturbations when the system approaches a critical transition. CSD has been observed in many stochastic nonlinear dynamical systems such as ecosystems, the human body and power systems. Although CSD signs can be useful as indicators of proximity to critical transitions, their characteristics vary for different systems and different variables within a system. The dissertation provides evidence for the occurrence of CSD in power systems using a comprehensive analytical and numerical study of this phenomenon in several power system test cases. Together, the results show that it is possible to extract information regarding not only the proximity of a power system to critical transitions but also the location of the stress in the system from autocorrelation and variance of measurements. Also, a semi-analytical method for fast computation of expected variance and autocorrelation of state variables in large power systems is presented, which allows one to quickly identify locations and variables that are reliable indicators of proximity to instability.
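The two early-warning indicators named here are simple to compute; the sketch below tracks rolling-window variance and lag-1 autocorrelation of a synthetic AR(1) series whose recovery rate slows over time, a stand-in for a PMU measurement rather than any of the dissertation's test cases.

import numpy as np

def rolling_ews(x, window=500):
    # Rolling variance and lag-1 autocorrelation: the generic signs of
    # critical slowing down discussed in the abstract.
    var, ac1 = [], []
    for i in range(window, len(x) + 1):
        w = x[i - window:i]
        var.append(np.var(w, ddof=1))
        w0 = w - w.mean()
        ac1.append(np.dot(w0[:-1], w0[1:]) / np.dot(w0, w0))
    return np.array(var), np.array(ac1)

# Synthetic AR(1) series whose coefficient drifts toward 1 (slower recovery).
rng = np.random.default_rng(2)
n = 5000
phi = np.linspace(0.5, 0.99, n)
x = np.zeros(n)
for k in range(1, n):
    x[k] = phi[k] * x[k - 1] + rng.standard_normal()

v, a = rolling_ews(x)
print(v[0], v[-1], a[0], a[-1])   # both indicators rise as the transition nears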
APA, Harvard, Vancouver, ISO, and other styles
10

Brownlow, Briana Nicole. "Patterns of Heart Rate Variability Predictive of Internalizing Symptoms in a Non-Clinical Youth Sample." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1518179804584445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Zhao, Songnian. "The impact of sample size re-estimation on the type I error rate in the analysis of a continuous end-point." Kansas State University, 2017. http://hdl.handle.net/2097/35326.

Full text
Abstract:
Master of Science
Department of Statistics
Christopher Vahl
Sample size estimation is generally based on assumptions made during the planning stage of a clinical trial. Often, there is limited information available to estimate the initial sample size. This may result in a poor estimate. For instance, an insufficient sample size may not have the capability to produce statistically significant results, while an over-sized study will lead to a waste of resources or even ethical issues in that too many patients are exposed to potentially ineffective treatments. Therefore, an interim analysis in the middle of a trial may be worthwhile to assure that the significance level is at the nominal level and/or the power is adequate to detect a meaningful treatment difference. In this report, the impact of sample size re-estimation on the type I error rate for the continuous end-point in a clinical trial with two treatments is evaluated through a simulation study. Two sample size estimation methods are taken into consideration: blinded and partially unblinded. For the blinded method, all collected data for two groups are used to estimate the variance, while only data from the control group are used to re-estimate the sample size for the partially unblinded method. The simulation study is designed with different combinations of assumed variance, assumed difference in treatment means, and re-estimation methods. The end-point is assumed to follow a normal distribution and the variance for both groups is assumed to be identical. In addition, equal sample size is required for each group. According to the simulation results, the type I error rates are preserved for all settings.
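A stripped-down version of the blinded re-estimation simulation looks as follows; the planning values (sigma, delta, 80% power) and the normal-approximation sample-size formula are illustrative assumptions, not the report's exact design.

import numpy as np
from scipy import stats

def one_trial(n_plan=50, sigma=10.0, delta=5.0, alpha=0.05, rng=None):
    # One two-arm trial under the null with a blinded interim sample-size
    # re-estimation based on the pooled variance of both groups.
    n_interim = n_plan // 2
    a = rng.normal(0, sigma, n_interim)
    b = rng.normal(0, sigma, n_interim)
    s2 = np.var(np.concatenate([a, b]), ddof=1)                   # blinded: groups pooled
    z_a, z_b = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(0.8)
    n_new = int(np.ceil(2 * s2 * (z_a + z_b) ** 2 / delta ** 2))  # per-group n for 80% power
    n_final = max(n_plan, n_new)
    a = np.concatenate([a, rng.normal(0, sigma, n_final - n_interim)])
    b = np.concatenate([b, rng.normal(0, sigma, n_final - n_interim)])
    return stats.ttest_ind(a, b).pvalue < alpha

rng = np.random.default_rng(3)
rejections = sum(one_trial(rng=rng) for _ in range(2000))
print("empirical type I error:", rejections / 2000)               # should stay near 0.05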
APA, Harvard, Vancouver, ISO, and other styles
12

Lindeberg, Johan. "Design and Implementation of a Low-Power SAR-ADC with Flexible Sample-Rate and Internal Calibration." Thesis, Linköpings universitet, Elektroniksystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-103229.

Full text
Abstract:
The objective of this Master's thesis was to design and implement a low-power Analog to Digital Converter (ADC) used for sensor measurements. In the complete measurement unit, of which the ADC is a part, different sensors will be measured. One set of these sensors consists of three strain gauges with weak output signals, which are to be pre-amplified before being converted. The focus of the application for the ADC has been these sensors, as they were considered a limiting factor. The report describes theory for the algorithmic and incremental converters as well as a hybrid converter utilizing both converter structures. All converters are based on one operational amplifier and operate in a repetitive fashion to obtain power-efficient designs on a small chip area, although at low conversion rates. Two converters have been designed and implemented to different degrees of completeness. One is a 13-bit algorithmic (or cyclic) converter which uses a switching scheme to reduce the problem of capacitor mismatch. This converter was implemented at transistor level and evaluated separately, and to some extent also with sub-components. The second converter is a hybrid using the operation of both the algorithmic and the incremental converter to obtain 16 bits of resolution while still having a fairly high sample rate.
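For readers unfamiliar with the algorithmic (cyclic) principle, the idealized 1-bit-per-cycle conversion can be modelled behaviourally in a few lines; this sketch ignores the capacitor mismatch, switching scheme and amplifier non-idealities that the thesis actually deals with.

def cyclic_adc(vin, vref=1.0, nbits=13):
    # Idealized algorithmic (cyclic) conversion: compare the residue with zero,
    # then double it and subtract or add the reference. Offset-binary output.
    bits, x = [], vin
    for _ in range(nbits):
        if x >= 0:
            bits.append(1)
            x = 2 * x - vref
        else:
            bits.append(0)
            x = 2 * x + vref
    return bits

def to_voltage(bits, vref=1.0):
    # Reconstruct the analogue value represented by the bit sequence.
    return sum((2 * b - 1) * vref / 2 ** (i + 1) for i, b in enumerate(bits))

code = cyclic_adc(0.3217)
print(code, to_voltage(code))   # recovers ~0.3217 to within vref / 2**13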
APA, Harvard, Vancouver, ISO, and other styles
13

Hunter, Matthew. "DESIGN OF POLYNOMIAL-BASED FILTERS FOR CONTINUOUSLY VARIABLE SAMPLE RATE CONVERSION WITH APPLICATIONS IN SYNTHETIC INSTRUMENTATION." Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2120.

Full text
Abstract:
In this work, the design and application of Polynomial-Based Filters (PBF) for continuously variable Sample Rate Conversion (SRC) is studied. The major contributions of this work are summarized as follows. First, an explicit formula for the Fourier Transform of both a symmetrical and nonsymmetrical PBF impulse response with variable basis function coefficients is derived. In the literature only one explicit formula is given, and that for a symmetrical even length filter with fixed basis function coefficients. The frequency domain optimization of PBFs via linear programming has been proposed in the literature, however, the algorithm was not detailed nor were explicit formulas derived. In this contribution, a minimax optimization procedure is derived for the frequency domain optimization of a PBF with time-domain constraints. Explicit formulas are given for direct input to a linear programming routine. Additionally, accompanying Matlab code implementing this optimization in terms of the derived formulas is given in the appendix. In the literature, it has been pointed out that the frequency response of the Continuous-Time (CT) filter decays as frequency goes to infinity. It has also been observed that when implemented in SRC, the CT filter is sampled resulting in CT frequency response aliasing. Thus, for example, the stopband sidelobes of the Discrete-Time (DT) implementation rise above the CT designed level. Building on these observations, it is shown how the rolloff rate of the frequency response of a PBF can be adjusted by adding continuous derivatives to the impulse response. This is of great advantage, especially when the PBF is used for decimation as the aliasing band attenuation can be made to increase with frequency. It is shown how this technique can be used to dramatically reduce the effect of alias build up in the passband. In addition, it is shown that as the number of continuous derivatives of the PBF increases the resulting DT implementation more closely matches the Continuous-Time (CT) design. When implemented for SRC, samples from a PBF impulse response are computed by evaluating the polynomials using a so-called fractional interval, µ. In the literature, the effect of quantizing µ on the frequency response of the PBF has been studied. Formulas have been derived to determine the number of bits required to keep frequency response distortion below prescribed bounds. Elsewhere, a formula has been given to compute the number of bits required to represent µ to obtain a given SRC accuracy for rational factor SRC. In this contribution, it is shown how these two apparently competing requirements are quite independent. In fact, it is shown that the wordlength required for SRC accuracy need only be kept in the µ generator which is a single accumulator. The output of the µ generator may then be truncated prior to polynomial evaluation. This results in significant computational savings, as polynomial evaluation can require several multiplications and additions. Under the heading of applications, a new Wideband Digital Downconverter (WDDC) for Synthetic Instruments (SI) is introduced. DDCs first tune to a signal's center frequency using a numerically controlled oscillator and mixer, and then zoom-in to the bandwidth of interest using SRC. The SRC is required to produce continuously variable output sample rates from a fixed input sample rate over a large range. Current implementations accomplish this using a pre-filter, an arbitrary factor resampler, and integer decimation filters. 
In this contribution, the SRC of the WDDC is simplified, reducing the computational requirements by a factor of three or more. In addition to this, it is shown how this system can be used to develop a novel computationally efficient FFT-based spectrum analyzer with continuously variable frequency spans. Finally, after giving the theoretical foundation, a real Field Programmable Gate Array (FPGA) implementation of a novel Arbitrary Waveform Generator (AWG) is presented. The new approach uses a fixed Digital-to-Analog Converter (DAC) sample clock in combination with an arbitrary factor interpolator. Waveforms created at any sample rate are interpolated to the fixed DAC sample rate in real-time. As a result, the additional lower performance analog hardware required in current approaches, namely, multiple reconstruction filters and/or additional sample clocks, is avoided. Measured results are given confirming the performance of the system predicted by the theoretical design and simulation.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering PhD
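The generic mechanism behind polynomial-based continuously variable SRC is an interpolating polynomial evaluated at a fractional interval µ supplied by an accumulator; the sketch below uses a plain cubic Lagrange interpolator in Python and is only a schematic stand-in for the optimized PBFs designed in the dissertation.

import numpy as np

def lagrange_cubic(x, k, mu):
    # Evaluate the cubic Lagrange polynomial through x[k-1..k+2] at offset mu in [0, 1).
    xm1, x0, x1, x2 = x[k - 1], x[k], x[k + 1], x[k + 2]
    c0 = x0
    c1 = x1 - xm1 / 3.0 - x0 / 2.0 - x2 / 6.0
    c2 = (xm1 + x1) / 2.0 - x0
    c3 = (x2 - xm1) / 6.0 + (x0 - x1) / 2.0
    return ((c3 * mu + c2) * mu + c1) * mu + c0

def resample(x, ratio):
    # Continuously variable SRC: output rate = input rate / ratio.
    # A mu accumulator supplies the fractional interval for every output sample.
    out, t = [], 1.0
    while t < len(x) - 2:
        k = int(t)
        out.append(lagrange_cubic(x, k, t - k))
        t += ratio
    return np.array(out)

fs = 48000.0
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t)
y = resample(x, ratio=48000.0 / 44100.0)   # 48 kHz -> 44.1 kHz, an arbitrary factor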
APA, Harvard, Vancouver, ISO, and other styles
14

Sommer, Josef. "Analýza predikční schopnosti vybraných fundamentálních modelů měnového kurzu na základě statistických metod." Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-264299.

Full text
Abstract:
This diploma thesis evaluates the out-of-sample predictive ability of exchange rate models. The first part of the thesis summarizes existing empirical findings about exchange rate predictability and describes the exchange rate models chosen to be evaluated. The second part of the thesis evaluates the predictive ability of purchasing power parity, uncovered interest parity, the monetary model and the Taylor rule model. The exchange rate models are evaluated on the CZK/EUR and CZK/USD currency pairs. The analysis is made using quarterly data from 1999 to 2013, while the 2009 to 2013 period is reserved for forecast evaluation. The predictive ability of the exchange rate models is evaluated at one-quarter, one-year and three-year horizons. The exchange rate models are specified in first differences and estimated by the ordinary least squares method. The forecasts are made using rolling regression. The exchange rate models are evaluated using RMSE, Theil's U, the CW test and a direction-of-change criterion. The diploma thesis concludes with a description of the author's own empirical findings.
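The rolling-regression evaluation loop described here follows a standard pattern; the sketch below uses synthetic data (the fundamental series and the window length are placeholders) to show one-step-ahead forecasts, the RMSE ratio against the random walk and the direction-of-change criterion.

import numpy as np

def rolling_forecasts(ds, x, window=40):
    # One-step-ahead rolling-window OLS forecasts of the exchange-rate change ds
    # from a predetermined fundamental x (e.g. an interest or inflation differential).
    preds, actual = [], []
    for t in range(window, len(ds)):
        X = np.column_stack([np.ones(window), x[t - window:t]])
        beta, *_ = np.linalg.lstsq(X, ds[t - window:t], rcond=None)
        preds.append(beta[0] + beta[1] * x[t])
        actual.append(ds[t])
    return np.array(preds), np.array(actual)

rng = np.random.default_rng(4)
n = 120                                   # quarterly-sized sample, purely synthetic
x = rng.standard_normal(n)                # fundamental known at forecast time
ds = 0.1 * x + rng.standard_normal(n)     # exchange-rate change (first difference)
pred, actual = rolling_forecasts(ds, x)

rmse_model = np.sqrt(np.mean((actual - pred) ** 2))
rmse_rw = np.sqrt(np.mean(actual ** 2))                 # the random walk predicts no change
theil_u = rmse_model / rmse_rw                          # below 1 means the model beats the RW
hit_rate = np.mean(np.sign(pred) == np.sign(actual))    # direction-of-change criterion
print(theil_u, hit_rate)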
APA, Harvard, Vancouver, ISO, and other styles
15

Kim, Keunpyo. "Process Monitoring with Multivariate Data:Varying Sample Sizes and Linear Profiles." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/29741.

Full text
Abstract:
Multivariate control charts are used to monitor a process when more than one quality variable associated with the process is being observed. The multivariate exponentially weighted moving average (MEWMA) control chart is one of the most commonly recommended tools for multivariate process monitoring. The standard practice, when using the MEWMA control chart, is to take samples of fixed size at regular sampling intervals for each variable. In the first part of this dissertation, MEWMA control charts based on sequential sampling schemes with two possible stages are investigated. When sequential sampling with two possible stages is used, observations at a sampling point are taken in two groups, and the number of groups actually taken is a random variable that depends on the data. The basic idea is that sampling starts with a small initial group of observations, and no additional sampling is done at this point if there is no indication of a problem with the process. But if there is some indication of a problem with the process then an additional group of observations is taken at this sampling point. The performance of the sequential sampling (SS) MEWMA control chart is compared to the performance of standard control charts. It is shown that the SS MEWMA chart is substantially more efficient in detecting changes in the process mean vector than standard control charts that do not use sequential sampling. Also, the situation is considered where different variables may have different measurement costs. MEWMA control charts with unequal sample sizes based on differing measurement costs are investigated in order to improve the performance of process monitoring. Sequential sampling plans are applied to MEWMA control charts with unequal sample sizes and compared to the standard MEWMA control charts with a fixed sample size. The steady-state average time to signal (SSATS) is computed using simulation and compared for some selected sets of sample sizes. When different variables have significantly different measurement costs, using unequal sample sizes can be more cost effective than using the same fixed sample size for each variable. In the second part of this dissertation, control chart methods are proposed for process monitoring when the quality of a process or product is characterized by a linear function. In the historical analysis of Phase I data, methods including the use of a bivariate T² chart to check for stability of the regression coefficients in conjunction with a univariate Shewhart chart to check for stability of the variation about the regression line are recommended. The use of three univariate control charts in Phase II is recommended. These three charts are used to monitor the Y-intercept, the slope, and the variance of the deviations about the regression line, respectively. A simulation study shows that this type of Phase II method can detect sustained shifts in the parameters better than competing methods in terms of average run length (ARL) performance. The monitoring of linear profiles is also related to the control charting of regression-adjusted variables and other methods.
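For reference, the MEWMA statistic underlying these charts is compact; the sketch below uses the asymptotic covariance of the EWMA vector and one observation per sampling point, so it leaves out the sequential-sampling and unequal-sample-size schemes that are the dissertation's actual subject.

import numpy as np

def mewma(X, Sigma, lam=0.1):
    # MEWMA statistics T2_i = Z_i' Cov(Z)^{-1} Z_i with Z_i = lam*X_i + (1-lam)*Z_{i-1}.
    # A signal is flagged when T2 exceeds a control limit chosen for the desired
    # in-control average run length.
    p = X.shape[1]
    Sz_inv = np.linalg.inv((lam / (2 - lam)) * Sigma)   # asymptotic covariance of Z
    z, t2 = np.zeros(p), []
    for x in X:
        z = lam * x + (1 - lam) * z
        t2.append(z @ Sz_inv @ z)
    return np.array(t2)

rng = np.random.default_rng(5)
in_control = rng.multivariate_normal(np.zeros(3), np.eye(3), size=100)
shifted = rng.multivariate_normal([0.5, 0.5, 0.0], np.eye(3), size=100)
stats = mewma(np.vstack([in_control, shifted]), np.eye(3))
print(stats[:100].mean(), stats[100:].mean())           # the statistic grows after the shift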
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
16

Gref, Margareta. "Glomerular filtration rate in adults : a single sample plasma clearance method based on the mean sojourn time." Licentiate thesis, Umeå universitet, Klinisk fysiologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-42319.

Full text
Abstract:
Glomerular filtration rate (GFR) is a key parameter in evaluating kidney function. After a bolus injection of an exogenous GFR marker in plasma an accurate determination of GFR can be made by measuring the marker concentration in plasma during the excretion. Simplified methods have been developed to reduce the number of plasma samples needed and yet still maintain a high accuracy in the GFR determination. Groth previously developed a single sample GFR method based on the mean sojourn time of a GFR marker in its distribution volume. This method applied in adults using the marker 99mTc-DTPA is recommended for use when GFR is estimated to be ≥ 30 mL/min. The aim of the present study was to further develop the single plasma sample GFR method by Groth to include patients with severely reduced renal function and different GFR markers. Three different GFR markers, 51Cr-EDTA, 99mTc-DTPA and iohexol, were investigated. Formulas were derived for the markers 51Cr-EDTA and iohexol when GFR is estimated to be ≥ 30 mL/min. For patients with an estimated GFR < 30 mL/min a special low clearance formula with a single sample obtained about 24 h after marker injection was developed. The low clearance formula was proven valid for use with all three markers. The sources of errors and their influence on the calculated single sample clearance were investigated. The estimated distribution volume is the major source of error but its influence can be reduced by choosing a suitable sampling time. The optimal time depends on the level of GFR; the lower the GFR, the later the single sample should be obtained. For practical purposes, a 270 min sample is recommended when estimated GFR ≥ 30 mL/min and a 24 h sample when estimated GFR < 30 mL/min. Sampling at 180 min after marker injection may be considered if GFR is estimated to be essentially normal.
APA, Harvard, Vancouver, ISO, and other styles
17

Claflin, Ray Jr., and Ray Claflin III. "DATA ACQUISITION AND THE ALIASING PHENOMENON." International Foundation for Telemetering, 2001. http://hdl.handle.net/10150/607663.

Full text
Abstract:
International Telemetering Conference Proceedings / October 22-25, 2001 / Riviera Hotel and Convention Center, Las Vegas, Nevada
In current practice sensor data is digitized and input into computers, displays, and recorders. In an attempt to reduce the volume of digitized data, our original hypothesis was that, by selecting a subset of digital values from an over-sampled signal, we could improve signal identification and perhaps improve Nyquist performance. Our investigations did not lead to significant improvements but did clarify our thinking regarding the usage of digitized data.
APA, Harvard, Vancouver, ISO, and other styles
18

Anderson, Jesse. "An examination of the effects of accuracy+rate versus accuracy+observing response training methods on matching-to-sample performance." [Denton, Tex.] : University of North Texas, 2007. http://digital.library.unt.edu/permalink/meta-dc-3708.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Yang, Hui. "Adjusting for Bounding and Time-in-Sample Effects in the National Crime Victimization Survey (NCVS) Property Crime Rate Estimation." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1452167047.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Anderson, Jesse. "An examination of the effects of accuracy+rate versus accuracy+observing response training methods on matching-to-sample performance." Thesis, University of North Texas, 2007. https://digital.library.unt.edu/ark:/67531/metadc3708/.

Full text
Abstract:
The relative efficacy of training procedures emphasizing accuracy versus those which add a rate criterion is a topic of debate. The desired learning outcome is fluent responding, assessed by measures of retention, endurance, stability, and application. The current study examined the effects of these two procedures on fluency outcomes using a matching-to-sample paradigm to train participants to match English to Japanese characters. An explicit FR-3 observing response was added to an accuracy-only condition to assess the extent to which it may facilitate learning. Total time spent responding in practice drills in accuracy-only conditions was yoked to total time spent in drills achieving rate aims in accuracy+rate (AR) conditions. One participant clearly demonstrated superior fluency outcomes after AR training while another displayed superior endurance and stability outcomes after such training. The remaining two participants did not demonstrate significantly different fluency outcomes across conditions.
APA, Harvard, Vancouver, ISO, and other styles
21

Wheetley, Brook. "The Effects of Rate of Responding on Retention, Endurance, Stability, and Application of Performance on a Match-to-sample Task." Thesis, University of North Texas, 2005. https://digital.library.unt.edu/ark:/67531/metadc4923/.

Full text
Abstract:
Fluent performance has been described as the retention, endurance, stability, and application of the material learned. Fluent performers not only respond quickly during training, they also make many correct responses during training. The current study used a within-subject design to analyze the effects of increased response rates on Retention, Endurance, Stability, and Application tests. Number of correct responses and number of unprompted, correct responses in error correction procedures were yoked for individual participants across an Accuracy-plus-Rate training condition and an Accuracy-Only training condition. One participant scored better in tests that followed the Accuracy-Only condition. One participant showed results that slightly favor the Accuracy-plus-Rate training condition. The two participants whose response rates were successfully reduced in the Accuracy-Only condition performed better on all tests that followed the Accuracy-plus-Rate condition.
APA, Harvard, Vancouver, ISO, and other styles
22

Reed, Trudy L. (Trudy Lenore) Carleton University Dissertation Psychology. "What the MacAndrew alcoholism Scale-R measures in a sample of criminal offenders with a high base rate of substance abuse." Ottawa, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
23

Murtagh, Kurowski Eileen M. D. "Evaluation of Differences Between Pediatric and General Emergency Departments in Rate of Admission and Resource Utilization for Visits by Children and Young Adults with Complex Chronic Conditions." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1353950161.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Rahman, Fahmida. "EVALUATE PROBE SPEED DATA QUALITY TO IMPROVE TRANSPORTATION MODELING." UKnowledge, 2019. https://uknowledge.uky.edu/ce_etds/80.

Full text
Abstract:
Probe speed data are widely used to calculate performance measures for quantifying state-wide traffic conditions. Estimation of accurate performance measures requires adequate speed data observations. However, probe vehicles reporting the speed data may not be available all the time on each road segment. Agencies need to develop a good understanding of the adequacy of these reported data before using them in different transportation applications. This study attempts to systematically assess the quality of the probe data by proposing a method that determines the minimum sample rate for checking data adequacy. The minimum sample rate is defined as the minimum amount of speed data required for a segment to ensure that the speed estimates lie within a defined error range. The proposed method adopts a bootstrapping approach to determine the minimum sample rate within a pre-defined acceptance level. After applying the method to the speed data, the results from the analysis show a minimum sample rate of 10% for Kentucky's roads. This cut-off value for Kentucky's roads helps to identify the segments where the availability is greater than the minimum sample rate. This study also shows two applications of the minimum sample rates resulting from the bootstrapping. Firstly, the results are utilized to identify the geometric and operational factors that contribute to the minimum sample rate of a facility. Using a random forests regression model as a tool, functional class, section length, and speed limit are found to be the significant variables for uninterrupted facilities. In contrast, for interrupted facilities, signal density, section length, speed limit, and intersection density are the significant variables. Lastly, the speed data associated with the segments are applied to improve the Free Flow Speed estimation of the traditional model.
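The bootstrapping idea can be illustrated in a few lines; the tolerance, acceptance level and synthetic segment speeds below are assumptions for the sketch, not the study's calibration.

import numpy as np

def minimum_sample_rate(speeds, tol=2.0, acceptance=0.95, n_boot=1000, rng=None):
    # Smallest sampling proportion p such that, in at least `acceptance` of the
    # bootstrap subsamples of size p*N, the mean speed lies within +/- tol of
    # the full-sample mean.
    speeds = np.asarray(speeds, dtype=float)
    full_mean = speeds.mean()
    for p in np.arange(0.05, 1.01, 0.05):
        k = max(2, int(round(p * len(speeds))))
        means = np.array([rng.choice(speeds, size=k, replace=True).mean()
                          for _ in range(n_boot)])
        if np.mean(np.abs(means - full_mean) <= tol) >= acceptance:
            return p
    return 1.0

rng = np.random.default_rng(6)
segment_speeds = rng.normal(55, 6, size=2000)   # synthetic probe speeds for one segment
print(minimum_sample_rate(segment_speeds, rng=rng))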
APA, Harvard, Vancouver, ISO, and other styles
25

Pereira, Vinicius Vale. "Uma abordagem GVAR de previsões de taxas de câmbio." Repositório Institucional do FGV, 2016. http://hdl.handle.net/10438/15643.

Full text
Abstract:
This paper proposes a model that simultaneously forecasts foreign exchange rates for several countries using the GVAR framework and analyzes the quality of these forecasts. For this purpose, monthly data from 10 countries or regions on exchange rates, interest rates and price levels between 2003 and 2015 were used. The forecasting was performed using a 60-month moving window, and the forecasts were evaluated by comparing root mean square errors against the standard benchmark, the random walk, and by means of the Pesaran-Timmermann and Diebold-Mariano tests. Out-of-sample forecasts were estimated for horizons of 1, 3, 12 and 18 months. The results show that the model cannot systematically outperform the random walk, although it has some predictive power in some specific cases.
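Since the Diebold-Mariano test does much of the work in this kind of comparison, a one-step-ahead version on squared-error loss is sketched below; the "GVAR" forecast errors are synthetic placeholders, and with h = 1 no HAC correction of the loss-differential variance is applied.

import numpy as np
from scipy import stats

def diebold_mariano(e_model, e_benchmark):
    # DM test on squared-error loss: negative values favour the model,
    # positive values favour the benchmark.
    d = e_model ** 2 - e_benchmark ** 2
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    pval = 2 * (1 - stats.norm.cdf(abs(dm)))
    return dm, pval

rng = np.random.default_rng(7)
actual = rng.standard_normal(120)                 # synthetic exchange-rate changes
e_rw = actual - 0.0                               # random walk: forecast of no change
e_gvar = actual - 0.1 * rng.standard_normal(120)  # stand-in for model forecast errors
print(diebold_mariano(e_gvar, e_rw))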
APA, Harvard, Vancouver, ISO, and other styles
26

Rao, Youlan. "Statistical Analysis of Microarray Experiments in Pharmacogenomics." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1244756072.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Zarrouk, Pauline. "Clustering Analysis in Configuration Space and Cosmological Implications of the SDSS-IV eBOSS Quasar Sample." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS297/document.

Full text
Abstract:
The ΛCDM model of cosmology assumes the existence of an exotic component, called dark energy, to explain the late-time acceleration of the expansion of the universe at redshift z < 0.7. Alternative scenarios to this cosmological constant suggest to modify the theory of gravitation based on general relativity at cosmological scales. Since fall 2014, the SDSS-IV eBOSS multi-object spectrograph has undertaken a survey of quasars in the almost unexplored redshift range 0.8 ≤ z ≤ 2.2 with the key science goal to complement the constraints on dark energy and extend the test of general relativity at higher redshifts by using quasars as direct tracers of the matter field.In this thesis work, we measure and analyse the two-point correlation function of the two-year data taking of eBOSS quasar sample to constrain the cosmic distances, i.e. the angular diameter distance DA and the expansion rate H, and the growth rate of structure fσ8 at an effective redshift Zeff = 1.52. First, we build large-scale structure catalogues that account for the angular and radial incompleteness of the survey. Then to obtain robust results, we investigate several potential systematics, in particular modeling and observational systematics are studied using dedicated mock catalogs which are fictional realizations of the data sample. These mocks are created with known cosmological parameters such that they are used as a benchmark to test the analysis pipeline. The results on the evolution of distances are consistent with the predictions for ΛCDM with Planck parameters assuming a cosmological constant. The measurement of the growth of structure is consistent with general relativity and hence extends its validity to higher redshift. We also provide updated constraints on extensions of ΛCDM and models of modified gravity. This study is a first use of eBOSS quasars as tracers of the matter field and will be included in the analysis of the final eBOSS sample at the end of 2019 with an expected improvement on the statistical precision of a factor 2. Together with BOSS, eBOSS will pave the way for future programs such as the ground-based Dark Energy Spectroscopic Instrument (DESI) and the space-based mission Euclid. Both programs will extensively probe the intermediate redshift range 1 < z < 2 with millions of spectra, improving the cosmological constraints by an order of magnitude with respect to current measurements
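At its core the two-point correlation measurement is a pair-counting exercise; the brute-force toy estimator below (Landy-Szalay form, uniform toy catalogues, no survey geometry, weights or redshift-space treatment) only shows the mechanics, whereas real analyses rely on optimized pair counters.

import numpy as np

def pair_counts(a, b, edges):
    # Histogram of pairwise separations between two 3D point sets (brute force).
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return np.histogram(d.ravel(), bins=edges)[0]

def landy_szalay(data, randoms, edges):
    # xi(r) = (DD - 2DR + RR) / RR with pair counts normalised per (ordered) pair.
    nd, nr = len(data), len(randoms)
    dd = pair_counts(data, data, edges) / (nd * (nd - 1))
    rr = pair_counts(randoms, randoms, edges) / (nr * (nr - 1))
    dr = pair_counts(data, randoms, edges) / (nd * nr)
    return (dd - 2 * dr + rr) / rr

rng = np.random.default_rng(8)
data = rng.uniform(0, 100, size=(300, 3))       # toy "quasar" positions in a box
randoms = rng.uniform(0, 100, size=(900, 3))    # unclustered random catalogue
edges = np.linspace(1, 30, 11)
print(landy_szalay(data, randoms, edges))       # ~0 in every bin for unclustered data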
APA, Harvard, Vancouver, ISO, and other styles
28

Franck, Andreas [Verfasser], Karlheinz [Akademischer Betreuer] Brandenburg, III Julius Orion [Akademischer Betreuer] Smith, and Vesa [Akademischer Betreuer] Välimäki. "Efficient Algorithms for Arbitrary Sample Rate Conversion with Application to Wave Field Synthesis / Andreas Franck. Gutachter: Julius Orion Smith III ; Vesa Välimäki. Betreuer: Karlheinz Brandenburg." Ilmenau : Universitätsbibliothek Ilmenau, 2012. http://d-nb.info/1022376489/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Piskin, Hatice. "Design And Implementation Of Fir Digital Filters With Variable Frequency Characteristics." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12606853/index.pdf.

Full text
Abstract:
Variable digital filters (VDF) find many application areas in communication, audio, speech and image processing. This thesis analyzes the design and implementation of FIR digital filters with variable frequency characteristics and introduces two design methods. The design and implementation of the proposed methods are realized in the Matlab software environment. Various filter design examples and comparisons are also outlined. One of the major application areas of VDFs is software defined radio (SDR). The interpolation problem in the sample rate converter (SRC) unit of the SDR is solved by using these filters. Realizations of VDFs in the SRC are outlined and described. Simulations in Simulink and on specific hardware are examined.
APA, Harvard, Vancouver, ISO, and other styles
30

Ekström, Peter, and Fredrik Hoel. "Audio over Bluetooth and MOST." Thesis, Linköping University, Department of Science and Technology, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1200.

Full text
Abstract:

In this Master Thesis the possibility of connecting standard products wirelessly to MOST, a multimedia network for vehicles, is investigated. The wireless technique analysed is Bluetooth. The report theoretically describes how MOST could be integrated with Bluetooth via a gateway. Future scenarios that are made possible by this gateway are also described. The solution describes how a connection could be established and how the synchronous audio is transferred from a Bluetooth sound source to the MOST network.



APA, Harvard, Vancouver, ISO, and other styles
31

Katskov, DA, and N. Darangwa. "Application of Langmuir theory of evaporation to the simulation of sample vapor composition and release rate in graphite tube atomizers. Part 1. The model and calculation algorithm." Journal of Analytical Atomic Spectrometry, 2010. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1001252.

Full text
Abstract:
A method is suggested for simulation of transient sample vapor composition and release rate during vaporization of analytes in electrothermal (ET) atomizers for AAS. The approach is based on the Langmuir theory of evaporation of metals in the presence of a gas at atmospheric pressure, which advocates formation of mass equilibrium in the boundary layer next to the evaporation surface. It is suggested in this work that in ET atomizers the release of atoms and molecules from the boundary layer next to the dry residue of the analyte is accompanied by spreading of the layer around the sample droplets or crystals. Thus, eventually, the vapor source forms an effective area associated with a monolayer of the analyte. In particular, for the case of a metal oxide analyte as discussed in the work, the boundary layer contains the species present in thermodynamic equilibrium with oxide, which are metal atoms and dimers, oxide molecules and oxygen. Because of an excess of Ar, the probability of mass and energy exchange between the evolved gaseous species is low; this substantiates independent mass transport of each type of species from the boundary layer and through the absorption volume. Diffusion, capture by the Ar flow and gas thermal expansion are considered to control vapor transport and release rate. Each specific flow is affected by secondary processes occurring in collisions of the evolved molecules and atoms with the walls of the graphite tube. Diffusion of oxygen-containing species out of the boundary layer is facilitated by annihilation of oxygen and reduction of oxide on the graphite surface, while interaction of metal vapor with graphite slows down transport of atomic vapor out of the atomizer. These assumptions are used as the basis for the presentation of the problem as a system of first order differential equations describing mass and temperature balance in the atomizer. Numerical solution of the system of equations provides the simulation of temporal composition of the sample constituents in condensed and gas phase in the atomizer according to chemical properties of the analyte and experimental conditions. The suggested approach avoids the description of atomization processes via kinetic parameters such as activation energy, frequency factor, surface coverage or reaction order.
APA, Harvard, Vancouver, ISO, and other styles
32

Widerberg, Carl. "The Two-Sample t-test and the Influence of Outliers : - A simulation study on how the type I error rate is impacted by outliers of different magnitude." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-375767.

Full text
Abstract:
This study investigates how outliers of different magnitude impact the robustness of the two-sample t-test. A simulation study approach is used to analyze the behavior of type I error rates when outliers are added to generated data. Outliers may distort parameter estimates such as the mean and variance and cause misleading test results. Previous research has shown that Welch's t-test performs better than the traditional Student's t-test when group variances are unequal. Therefore, these two alternative statistics are compared in terms of type I error rates when outliers are added to the samples. The results show that control of type I error rates can be maintained in the presence of a single outlier. Depending on the magnitude of the outlier and the sample size, there are scenarios where the t-test is robust. However, the sensitivity of the t-test is illustrated by deteriorating type I error rates when more than one outlier is included. The comparison between Welch's t-test and Student's t-test shows that the former is marginally more robust against outlier influence.
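The simulation design is easy to reproduce in outline; the sketch below contaminates one group and compares the empirical rejection rates of Student's and Welch's tests, with the sample size, outlier magnitude and replication count chosen only for illustration.

import numpy as np
from scipy import stats

def type1_rates(n=30, n_outliers=1, magnitude=5.0, reps=5000, alpha=0.05, rng=None):
    # Empirical rejection rates of Student's and Welch's t-tests when
    # `n_outliers` observations in one group are shifted by `magnitude` SDs.
    rej_student = rej_welch = 0
    for _ in range(reps):
        a = rng.standard_normal(n)
        b = rng.standard_normal(n)
        b[:n_outliers] += magnitude                               # contaminate one group
        rej_student += stats.ttest_ind(a, b, equal_var=True).pvalue < alpha
        rej_welch += stats.ttest_ind(a, b, equal_var=False).pvalue < alpha
    return rej_student / reps, rej_welch / reps

rng = np.random.default_rng(9)
print(type1_rates(rng=rng))      # compare both rates with the nominal 0.05 level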
APA, Harvard, Vancouver, ISO, and other styles
33

Yount, William R. "A Monte Carlo Analysis of Experimentwise and Comparisonwise Type I Error Rate of Six Specified Multiple Comparison Procedures When Applied to Small k's and Equal and Unequal Sample Sizes." Thesis, North Texas State University, 1985. https://digital.library.unt.edu/ark:/67531/metadc332059/.

Full text
Abstract:
The problem of this study was to determine the differences in experimentwise and comparisonwise Type I error rate among six multiple comparison procedures when applied to twenty-eight combinations of normally distributed data. These were the Least Significant Difference, the Fisher-protected Least Significant Difference, the Student Newman-Keuls Test, the Duncan Multiple Range Test, the Tukey Honestly Significant Difference, and the Scheffe Significant Difference. The Spjøtvoll-Stoline and Tukey-Kramer HSD modifications were used for unequal n conditions. A Monte Carlo simulation was used for twenty-eight combinations of k and n. The scores were normally distributed (µ=100; σ=10). Specified multiple comparison procedures were applied under two conditions: (a) all experiments and (b) experiments in which the F-ratio was significant (0.05). Error counts were maintained over 1000 repetitions. The FLSD held experimentwise Type I error rate to nominal alpha for the complete null hypothesis. The FLSD was more sensitive to sample mean differences than the HSD while protecting against experimentwise error. The unprotected LSD was the only procedure to yield comparisonwise Type I error rate at nominal alpha. The SNK and MRT error rates fell between the FLSD and HSD rates. The SSD error rate was the most conservative. Use of the harmonic mean of the two unequal sample n's (HSD-TK) yielded uniformly better results than use of the minimum n (HSD-SS). Bernhardson's formulas controlled the experimentwise Type I error rate of the LSD and MRT to nominal alpha, but pushed the HSD below the 0.95 confidence interval. Use of the unprotected HSD produced fewer significant departures from nominal alpha. The formulas had no effect on the SSD.
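To make the experimentwise error rate concrete, the sketch below estimates it under the complete null for an unprotected versus a Fisher-protected procedure; plain pairwise t-tests stand in for the LSD (which properly uses the pooled ANOVA error term), and k, n and the replication count are illustrative.

import numpy as np
from itertools import combinations
from scipy import stats

def experimentwise_error(k=4, n=15, reps=2000, alpha=0.05, rng=None):
    # Monte Carlo experimentwise type I error under the complete null:
    # unprotected pairwise tests vs. pairwise tests gated by a significant omnibus F.
    err_unprotected = err_protected = 0
    for _ in range(reps):
        groups = [rng.normal(100, 10, n) for _ in range(k)]
        any_pair = any(stats.ttest_ind(a, b).pvalue < alpha
                       for a, b in combinations(groups, 2))
        err_unprotected += any_pair
        if stats.f_oneway(*groups).pvalue < alpha:
            err_protected += any_pair
    return err_unprotected / reps, err_protected / reps

rng = np.random.default_rng(10)
print(experimentwise_error(rng=rng))   # the protected rate stays near alpha, the other inflates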
APA, Harvard, Vancouver, ISO, and other styles
34

Wyatt, Stefanie Michele. "A Retrospective Chart Review: Are Gastrointestinal Complications Associated With Formula Brand and Rate Changes Outside of the Standard Protocol in a Random Sample of Pediatric Burn and Trauma Patients?" The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1352900178.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Huselius Gylling, Kira. "Quadratic sample entropy as a measure of burstiness : A study in how well Rényi entropy rate and quadratic sample entropy can capture the presence of spikes in time-series data." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-130593.

Full text
Abstract:
Requests to internet servers do not in general behave in a manner which can be easily modelled and forecast with typical time-series methods, but often have a significant presence of spikes in the data, a property we call “burstiness”. In this thesis we study various entropy measures and their properties for different distributions, both theoretically and via simulation, in order to better find out how these measures could be used to characterise the predictability and burstiness of time series. We find that a low entropy can indicate a heavy-tailed distribution, which for time series corresponds to a high burstiness. Using a previous result that connects the quadratic sample entropy for a time series with the Rényi entropy rate of order 2, we suggest a way of detecting burstiness by comparing the quadratic sample entropy of the time series with the Rényi entropy rate of order 2 for a symmetric and a heavy-tailed distribution.
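A compact way to compute the quantity in the title is sketched below; the form QSE = SampEn + ln(2r), and the comparison of a light-tailed with a heavy-tailed series, are assumptions that follow the definitions such work builds on rather than code from the thesis.

import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # SampEn(m, r) = -ln(A/B): B counts pairs of length-m templates within
    # Chebyshev tolerance r, A the corresponding pairs of length m+1.
    x = np.asarray(x, dtype=float)
    N = len(x)
    def match_pairs(length):
        t = np.array([x[i:i + length] for i in range(N - m)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=-1)
        return (np.sum(d <= r) - len(t)) / 2          # drop self-matches
    B, A = match_pairs(m), match_pairs(m + 1)
    return -np.log(A / B)

def quadratic_sample_entropy(x, m=2, r=0.2):
    # Assumed form QSE = SampEn + ln(2r), which makes values comparable across
    # tolerances and links SampEn to an order-2 (Renyi) entropy rate.
    return sample_entropy(x, m, r) + np.log(2 * r)

rng = np.random.default_rng(11)
gauss = rng.standard_normal(500)                 # symmetric, light-tailed series
bursty = rng.standard_t(df=2, size=500)          # heavy-tailed, spiky series
for s in (gauss, bursty):
    z = (s - s.mean()) / s.std()
    print(quadratic_sample_entropy(z))           # the bursty series typically scores lower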
APA, Harvard, Vancouver, ISO, and other styles
36

Yang, Xiangui. "Effects of digital audio quality on students' performance in LAN delivered English listening comprehension tests." Ohio : Ohio University, 2009. http://www.ohiolink.edu/etd/view.cgi?ohiou1236796324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ericsson, Ola. "An Experimental Study of Liquid Steel Sampling." Licentiate thesis, Stockholm : Skolan för industriell teknik och management, Kungliga Tekniska högskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10518.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Nåtman, Jonatan. "The performance of inverse probability of treatment weighting and propensity score matching for estimating marginal hazard ratios." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385502.

Full text
Abstract:
Propensity score methods are increasingly being used to reduce the effect of measured confounders in observational research. In medicine, censored time-to-event data is common. Using Monte Carlo simulations, this thesis evaluates the performance of nearest neighbour matching (NNM) and inverse probability of treatment weighting (IPTW) in combination with Cox proportional hazards models for estimating marginal hazard ratios. Focus is on the performance for different sample sizes and censoring rates, aspects which have not been fully investigated in this context before. The results show that, in the absence of censoring, both methods can reduce bias substantially. IPTW consistently had better performance in terms of bias and MSE compared to NNM. For the smallest examined sample size with 60 subjects, the use of IPTW led to estimates with bias below 15 %. Since the data were generated using a conditional parametrisation, the estimation of univariate models violates the proportional hazards assumption. As a result, censoring the data led to an increase in bias.
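A minimal end-to-end sketch of the IPTW-plus-Cox workflow evaluated here is given below on synthetic data; scikit-learn for the propensity model and lifelines for the weighted Cox fit are assumed library choices, and the data-generating values are arbitrary.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(12)
n = 1000
x = rng.standard_normal((n, 2))                        # measured confounders
p_treat = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.5 * x[:, 1])))
z = rng.binomial(1, p_treat)                           # treatment depends on confounders
hazard = np.exp(0.7 * z + 0.5 * x[:, 0] + 0.5 * x[:, 1])
event_time = rng.exponential(1 / hazard)
censor_time = rng.exponential(2.0, n)                  # independent censoring
df = pd.DataFrame({"T": np.minimum(event_time, censor_time),
                   "E": (event_time <= censor_time).astype(int),
                   "z": z})

# Inverse probability of treatment weights from a propensity score model.
ps = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]
df["w"] = np.where(z == 1, 1 / ps, 1 / (1 - ps))

# Weighted Cox model with treatment as the only covariate -> marginal hazard ratio.
cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E", weights_col="w", robust=True)
print(cph.hazard_ratios_)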
APA, Harvard, Vancouver, ISO, and other styles
39

Shykula, Mykola. "Quantization of Random Processes and Related Statistical Problems." Doctoral thesis, Umeå : Department of Mathematics and Mathematical Statistics, Umeå University, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-883.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

BOURLEGAT, FERNANDA M. LE. "Disponibilidade de metais em amostras de fosfogesso e fertilizantes fosfatados utilizados na agricultura." Repositório Institucional do IPEN, 2010. http://repositorio.ipen.br:8080/xmlui/handle/123456789/9582.

Full text
Abstract:
Dissertation (Master's degree)
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
APA, Harvard, Vancouver, ISO, and other styles
41

Mello, Eduardo Morato. "In search of exchange rate predictability: a study about accuracy, consistency, and Granger causality of forecasts generated by a Taylor Rule Model." Repositório Institucional do FGV, 2015. http://hdl.handle.net/10438/13308.

Full text
Abstract:
This study investigates whether a Taylor rule-based model provides short-term, one-month-ahead, out-of-sample exchange-rate predictability. We review important research that concludes that macroeconomic models are able to forecast exchange rates over short horizons. We also present studies that are skeptical about the forecast predictability of exchange rates with fundamental models. In order to provide our own evidence and contribution to the discussion, we implement the model that presents the strongest results in Molodtsova and Papell’s (2009) influential paper, the 'symmetric Taylor rule model with heterogeneous coefficients, smoothing, and a constant.' We use a sample of 14 currencies vis-à-vis the US dollar to make out-of-sample monthly forecasts from January 2000 to March 2014. As with the work of Galimberti and Moura (2012), we focus on free-floating exchange rate and inflation-targeting economies, but we use a sample of both developed and developing countries. In line with Rogoff and Stavrakeva (2008), we find that the conclusion about a model’s out-of-sample exchange-rate forecast capability largely depends on the test statistics used: it is necessary to use stringent and robust test statistics to properly evaluate the model. After concluding that it is not possible to claim that the forecasts of the implemented model are more accurate than those of a random walk, we inquire as to whether the fundamental model is at least capable of providing 'rational,' or 'consistent,' predictions. To test this, we adopt the theoretical and procedural framework laid out by Cheung and Chinn (1998). We find that the implemented Taylor rule model’s forecasts do not meet the 'consistent' criteria. Finally, we implement Granger causality tests to verify whether lagged predicted returns are able to partially explain, or anticipate, the actual returns. Once again, the performance of the structural model disappoints, and we are unable to confirm that the lagged forecasted returns antedate the actual returns.
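To make the out-of-sample exercise concrete, here is a rough sketch of a rolling one-month-ahead forecast from Taylor-rule fundamentals, compared against a driftless random walk by RMSE. The column names ('ds', 'infl_diff', 'gap_diff', 'i_diff_lag') are hypothetical stand-ins for the regressors of the symmetric Taylor rule specification, and the window length is an assumption; this is not the model or the test battery used in the study.

```python
import numpy as np
import statsmodels.api as sm

def rolling_oos_rmse(df, window=120):
    """df columns (hypothetical): 'ds' = next-month log exchange-rate change,
    'infl_diff', 'gap_diff', 'i_diff_lag' = Taylor-rule fundamentals assumed
    known at forecast time. Returns (model RMSE, random-walk RMSE)."""
    X = sm.add_constant(df[["infl_diff", "gap_diff", "i_diff_lag"]])
    y = df["ds"]
    preds, actuals = [], []
    for t in range(window, len(df)):
        fit = sm.OLS(y.iloc[t - window:t], X.iloc[t - window:t]).fit()
        preds.append(fit.predict(X.iloc[[t]])[0])   # one-step-ahead forecast
        actuals.append(y.iloc[t])
    preds, actuals = np.array(preds), np.array(actuals)
    rmse_model = np.sqrt(np.mean((actuals - preds) ** 2))
    rmse_rw = np.sqrt(np.mean(actuals ** 2))        # driftless random walk forecasts 0
    return rmse_model, rmse_rw
```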
APA, Harvard, Vancouver, ISO, and other styles
42

Buschmann, Tilo. "The Systematic Design and Application of Robust DNA Barcodes." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-209812.

Full text
Abstract:
High-throughput sequencing technologies are improving in quality, capacity, and costs, providing versatile applications in DNA and RNA research. For small genomes or fraction of larger genomes, DNA samples can be mixed and loaded together on the same sequencing track. This so-called multiplexing approach relies on a specific DNA tag, index, or barcode that is attached to the sequencing or amplification primer and hence accompanies every read. After sequencing, each sample read is identified on the basis of the respective barcode sequence. Alterations of DNA barcodes during synthesis, primer ligation, DNA amplification, or sequencing may lead to incorrect sample identification unless the error is revealed and corrected. This can be accomplished by implementing error correcting algorithms and codes. This barcoding strategy increases the total number of correctly identified samples, thus improving overall sequencing efficiency. Two popular sets of error-correcting codes are Hamming codes and codes based on the Levenshtein distance. Levenshtein-based codes operate only on words of known length. Since a DNA sequence with an embedded barcode is essentially one continuous long word, application of the classical Levenshtein algorithm is problematic. In this thesis we demonstrate the decreased error correction capability of Levenshtein-based codes in a DNA context and suggest an adaptation of Levenshtein-based codes that is proven of efficiently correcting nucleotide errors in DNA sequences. In our adaptation, we take any DNA context into account and impose more strict rules for the selection of barcode sets. In simulations we show the superior error correction capability of the new method compared to traditional Levenshtein and Hamming based codes in the presence of multiple errors. We present an adaptation of Levenshtein-based codes to DNA contexts capable of guaranteed correction of a pre-defined number of insertion, deletion, and substitution mutations. Our improved method is additionally capable of correcting on average more random mutations than traditional Levenshtein-based or Hamming codes. As part of this work we prepared software for the flexible generation of DNA codes based on our new approach. To adapt codes to specific experimental conditions, the user can customize sequence filtering, the number of correctable mutations and barcode length for highest performance. However, not every platform is susceptible to a large number of both indel and substitution errors. The Illumina “Sequencing by Synthesis” platform shows a very large number of substitution errors as well as a very specific shift of the read that results in inserted and deleted bases at the 5’-end and the 3’-end (which we call phaseshifts). We argue in this scenario that the application of Sequence-Levenshtein-based codes is not efficient because it aims for a category of errors that barely occurs on this platform, which reduces the code size needlessly. As a solution, we propose the “Phaseshift distance” that exclusively supports the correction of substitutions and phaseshifts. Additionally, we enable the correction of arbitrary combinations of substitution and phaseshift errors. Thus, we address the lopsided number of substitutions compared to phaseshifts on the Illumina platform. To compare codes based on the Phaseshift distance to Hamming Codes as well as codes based on the Sequence-Levenshtein distance, we simulated an experimental scenario based on the error pattern we identified on the Illumina platform. 
Furthermore, we generated a large number of different sets of DNA barcodes using the Phaseshift distance and compared codes of different lengths and error correction capabilities. We found that codes based on the Phaseshift distance can correct a number of errors comparable to codes based on the Sequence-Levenshtein distance while offering the number of DNA barcodes comparable to Hamming codes. Thus, codes based on the Phaseshift distance show a higher efficiency in the targeted scenario. In some cases (e.g., with PacBio SMRT in Continuous Long Read mode), the position of the barcode and DNA context is not well defined. Many reads start inside the genomic insert so that adjacent primers might be missed. The matter is further complicated by coincidental similarities between barcode sequences and reference DNA. Therefore, a robust strategy is required in order to detect barcoded reads and avoid a large number of false positives or negatives. For mass inference problems such as this one, false discovery rate (FDR) methods are powerful and balanced solutions. Since existing FDR methods cannot be applied to this particular problem, we present an adapted FDR method that is suitable for the detection of barcoded reads as well as suggest possible improvements.
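As a toy illustration of the distance-based barcode design discussed in this abstract, the sketch below greedily selects DNA barcodes whose pairwise Levenshtein distance is at least 2k+1, the classical condition for correcting k edits. It is only a sketch: the thesis's Sequence-Levenshtein and Phaseshift distances, DNA-context rules and sequence-filtering criteria are not reproduced here.

```python
from itertools import product

def levenshtein(a, b):
    # standard dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def greedy_barcodes(length=6, k=1):
    """Greedily accept candidates whose distance to every accepted barcode
    is at least 2*k + 1, so up to k edits can be corrected."""
    min_dist = 2 * k + 1
    accepted = []
    for cand in ("".join(p) for p in product("ACGT", repeat=length)):
        if all(levenshtein(cand, b) >= min_dist for b in accepted):
            accepted.append(cand)
    return accepted

codes = greedy_barcodes()
print(len(codes), codes[:5])
```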
APA, Harvard, Vancouver, ISO, and other styles
43

Chadima, Bedřich. "Návrh shozové laboratoře pro testy balistických záchranných systémů." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-232089.

Full text
Abstract:
This thesis focuses on the design of an air-drop laboratory for testing ballistic recovery systems. The first part describes ballistic recovery systems, their components, and the methods used to test these devices. The second chapter describes an earlier version of the testing device and a simple test of its electronics. The next step is the design of a new concept for the testing device and a structural analysis of its frame. The result of the thesis is a modular, automated testing device capable of testing ballistic recovery systems for aircraft weighing between 230 and 1700 kilograms.
APA, Harvard, Vancouver, ISO, and other styles
44

Zeineddine, Ali. "Design of a generic digital front-end for the internet of things." Thesis, CentraleSupélec, 2019. http://www.theses.fr/2019CSUP0001.

Full text
Abstract:
The number of wireless communication technologies and standards is constantly increasing to provide communication solutions for today's technological needs. This is particularly relevant in the domain of the Internet of Things (IoT), where many standards are available and many others are expected. To deploy the IoT network efficiently, interoperability between the different solutions is critical. Interoperability at the physical level is achieved through multi-standard modems. These modems are made possible by the digital front-end (DFE), which offers a flexible radio front-end capable of processing a wide range of signal types. This thesis first develops generic architectures for both transmission and reception DFEs that can be easily adapted to support different IoT standards. These architectures highlight the central role of sample rate conversion (SRC) in the DFE and the importance of optimizing the SRC implementation. This optimization is then achieved through an in-depth study of the SRC functions and the development of new structures with improved efficiency in terms of implementation complexity and power consumption, while offering equivalent or better performance. The final part of the thesis addresses the optimization of the DFE hardware implementation, achieved by developing an optimal quantization method that minimizes the use of hardware resources while guaranteeing a given performance constraint. The results are finally highlighted by implementing and comparing different implementation strategies on both field programmable gate array (FPGA) and application specific integrated circuit (ASIC) targets.
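As a quick illustration of the sample rate conversion function highlighted above, the following sketch performs a rational rate change with an off-the-shelf polyphase resampler. The rates and test signal are arbitrary examples; the thesis develops its own, more hardware-efficient SRC structures rather than this library call.

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 1_920_000, 1_000_000   # example rates (assumed, not from the thesis)
up, down = 25, 48                       # fs_out / fs_in = 25 / 48

t = np.arange(0, 1e-3, 1 / fs_in)
x = np.cos(2 * np.pi * 50e3 * t)        # 50 kHz test tone sampled at fs_in

y = resample_poly(x, up, down)          # polyphase FIR: interpolate by 25, decimate by 48
print(len(x), len(y))                   # output length is roughly 25/48 of the input
```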
APA, Harvard, Vancouver, ISO, and other styles
45

U, Seng-Pan. "Impulse sampled switched-capacitor sampling rate converters." Thesis, University of Macau, 1997. http://umaclib3.umac.mo/record=b1445562.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Djaoui, Léo. "Analyse des performances physiques, des incidences physiologiques d’un match de football de haut niveau et des facteurs d’influence : mention spéciale au contexte d’enchaînement des matchs." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1266/document.

Full text
Abstract:
Modern football is characterized by very high-intensity intermittent efforts. During a match, players perform technical and physical tasks related to their specific positions on the field. A high-level football match induces heart-rate variations, a depletion of energy stores, increased muscle damage and oxidative stress, and an altered immune status. These physiological changes are accompanied by changes in perceived fatigue, muscle soreness, wellness, sleep quality, psychological stress and overall mood. All of these effects can be measured, quantified and analyzed in direct relation to contextual factors such as match location, time of day, playing system, and others, as well as congested periods of matches (e.g. two to three matches per week). The present thesis reviews the ways in which match load and fatigue can be monitored and analyzes the influence of playing matches during congested periods on physical activity and on post-match physiological kinetics in high-level football players.
APA, Harvard, Vancouver, ISO, and other styles
47

Williams, James Dickson. "Contributions to Profile Monitoring and Multivariate Statistical Process Control." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/30032.

Full text
Abstract:
The content of this dissertation is divided into two main topics: 1) nonlinear profile monitoring and 2) an improved approximate distribution for the T^2 statistic based on the successive differences covariance matrix estimator. (Part 1) In an increasing number of cases the quality of a product or process cannot adequately be represented by the distribution of a univariate quality variable or the multivariate distribution of a vector of quality variables. Rather, a series of measurements are taken across some continuum, such as time or space, to create a profile. The profile determines the product quality at that sampling period. We propose Phase I methods to analyze profiles in a baseline dataset where the profiles can be modeled through either a parametric nonlinear regression function or a nonparametric regression function. We illustrate our methods using data from Walker and Wright (2002) and from dose-response data from DuPont Crop Protection. (Part 2) Although the T^2 statistic based on the successive differences estimator has been shown to be effective in detecting a shift in the mean vector (Sullivan and Woodall (1996) and Vargas (2003)), the exact distribution of this statistic is unknown. An accurate upper control limit (UCL) for the T^2 chart based on this statistic depends on knowing its distribution. Two approximate distributions have been proposed in the literature. We demonstrate the inadequacy of these two approximations and derive useful properties of this statistic. We give an improved approximate distribution and recommendations for its use.
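For the second topic, a small sketch of the T^2 statistic built on the successive-differences covariance estimator is given below. It follows the standard definitions (S = V'V / (2(n-1)), where V is the matrix of successive differences); it is illustrative only and not the dissertation's code.

```python
import numpy as np

def t2_successive_differences(X):
    """X: (n, p) array of individual multivariate observations.
    Returns the T^2 value for each observation, using the
    successive-differences covariance matrix estimator."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    xbar = X.mean(axis=0)
    D = np.diff(X, axis=0)              # successive differences, shape (n-1, p)
    S = D.T @ D / (2.0 * (n - 1))       # successive-differences covariance estimator
    S_inv = np.linalg.inv(S)
    dev = X - xbar
    # quadratic form dev_i' S^{-1} dev_i for every observation i
    return np.einsum("ij,jk,ik->i", dev, S_inv, dev)

rng = np.random.default_rng(1)
X = rng.multivariate_normal(mean=[0.0, 0.0, 0.0], cov=np.eye(3), size=50)
print(t2_successive_differences(X)[:5])
```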
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
48

Ericksen, Glenda Joy. "Psychiatric sequelae of rape: a hospital sample." Master's thesis, University of Cape Town, 1993. http://hdl.handle.net/11427/26265.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lenez, Thierry. "Synchronisation et égalisation en communication numérique : application au modem VDSL." Grenoble INPG, 2001. http://www.theses.fr/2001INPG0058.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Dorléans, Vincent. "Caractérisation et modélisation du comportement et de la rupture de thermoplastiques pour une large gamme de vitesse de déformation et de température." Thesis, Valenciennes, Université Polytechnique Hauts-de-France, 2020. http://www.theses.fr/2020UPHF0031.

Full text
Abstract:
Nowadays, polymers are widely used for the interior parts of vehicles, particularly for components such as dashboards and door panels. These elements must meet requirements imposed by international regulations in order to minimize passenger injuries in the event of a car crash. It is therefore essential to characterize the mechanical properties of these polymeric materials for several load cases over a wide range of strain rates and temperatures. The collected data feed numerical behavior models that are expected to accurately predict the whole behavior of a polymer, if possible up to failure, taking all of its specific features into account. Indeed, with a view to optimizing development costs, numerical simulation is currently a key tool in the design of engineering components. Thus, in this thesis, the complete behavior of a semi-crystalline polymer is characterized up to failure over a wide range of strain rates and temperatures. First, DMA and tensile tests are carried out in order to characterize the viscoelastic and viscoplastic properties of the material. Then, the time-temperature superposition principle is introduced and validated in both domains. A model based on the constitutive equations developed by Balieu et al., enriched with this principle, is identified, implemented in the LS-DYNA code, and validated by comparison with experimental results. Secondly, the work focuses on the experimental characterization and modelling of failure. Several specimen geometries are designed to reach specific triaxiality ratios and are tested at different strain rates and at two temperatures, +23 and -30 °C. A failure surface is thus identified and introduced into the GISSMO failure model. The complete model, i.e. the constitutive laws combined with the failure criterion, is then validated by comparison with experimental results from tests on specimens as well as on an industrial demonstrator.
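To make the time-temperature superposition step concrete, here is a small sketch using the WLF form of the shift factor, one common way of expressing the principle. The constants, reference temperature and even the choice of the WLF form are illustrative assumptions, not the values identified in the thesis.

```python
import numpy as np

def wlf_shift_factor(T, Tref=23.0, C1=17.4, C2=51.6):
    """WLF equation: log10(a_T) = -C1*(T - Tref) / (C2 + T - Tref).
    Valid only over a limited temperature range; C1 and C2 here are the
    'universal' textbook values, not those identified in the thesis."""
    dT = np.asarray(T, dtype=float) - Tref
    return 10.0 ** (-C1 * dT / (C2 + dT))

# Building a master curve at Tref: data measured at temperature T are shifted
# horizontally by plotting them against the reduced frequency a_T * f.
freq = np.logspace(-1, 2, 50)        # test frequencies in Hz
a_T = wlf_shift_factor(60.0)         # shift factor for a sweep measured at 60 °C
reduced_freq = a_T * freq            # frequencies on the Tref master curve
print(a_T, reduced_freq[:3])
```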
APA, Harvard, Vancouver, ISO, and other styles
