
Dissertations / Theses on the topic 'Standard Error of Indices'


Consult the top 50 dissertations / theses for your research on the topic 'Standard Error of Indices.'


1

Ratzer, Edward Alexander. "Error-correction on non-standard communication channels." Thesis, University of Cambridge, 2004. https://www.repository.cam.ac.uk/handle/1810/237471.

Abstract:
Many communication systems are poorly modelled by the standard channels assumed in the information theory literature, such as the binary symmetric channel or the additive white Gaussian noise channel. Real systems suffer from additional problems including time-varying noise, cross-talk, synchronization errors and latency constraints. In this thesis, low-density parity-check codes and codes related to them are applied to non-standard channels. First, we look at time-varying noise modelled by a Markov channel. A low-density parity-check code decoder is modified to give an improvement of over 1 dB. Secondly, novel codes based on low-density parity-check codes are introduced which produce transmissions with Pr(bit = 1) ≠ Pr(bit = 0). These non-linear codes are shown to be good candidates for multi-user channels with crosstalk, such as optical channels. Thirdly, a channel with synchronization errors is modelled by random uncorrelated insertion or deletion events at unknown positions. Marker codes formed from low-density parity-check codewords with regular markers inserted within them are studied. It is shown that a marker code with iterative decoding has performance close to the bounds on the channel capacity, significantly outperforming other known codes. Finally, coding for a system with latency constraints is studied. For example, if a telemetry system involves a slow channel, some error correction is often needed quickly whilst the code should be able to correct remaining errors later. A new code is formed from the intersection of a convolutional code with a high rate low-density parity-check code. The convolutional code has good early decoding performance and the high rate low-density parity-check code efficiently cleans up remaining errors after receiving the entire block. Simulations of the block code show a gain of 1.5 dB over a standard NASA code.
2

Durney, Ann Wells. "Truncation and its effect on standard error of correlation coefficients." Thesis, The University of Arizona, 1990. http://hdl.handle.net/10150/277950.

Abstract:
A Monte Carlo study was conducted to investigate the effect of truncation of score distributions on systematic bias and random error of correlation coefficient distributions. The findings were twofold: Correlation decreases systematically due to increasing truncation; and the standard error of the correlation coefficient, which is a measure of random error, increases due to increasing truncation.
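
As a quick illustration of the design summarized above, here is a minimal sketch (our own, not the thesis code; the population correlation, sample size and cutoffs are arbitrary assumptions) that simulates bivariate normal scores, truncates one variable at increasingly severe cutoffs, and reports the mean and empirical standard error of the resulting correlation coefficients:

```python
# A minimal sketch, assuming a population correlation of 0.6 and n = 100.
import numpy as np

rng = np.random.default_rng(0)
rho, n, reps = 0.6, 100, 2000

def sample_r(cut):
    # Draw pairs, keeping only cases with x above `cut` (truncation in z-units).
    xs, ys = [], []
    while len(xs) < n:
        x, y = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n).T
        keep = x > cut
        xs.extend(x[keep])
        ys.extend(y[keep])
    x, y = np.array(xs[:n]), np.array(ys[:n])
    return np.corrcoef(x, y)[0, 1]

for cut in (-np.inf, -0.5, 0.0, 0.5):   # increasingly severe truncation
    rs = np.array([sample_r(cut) for _ in range(reps)])
    print(f"cut={cut:>5}: mean r = {rs.mean():.3f}, empirical SE = {rs.std(ddof=1):.4f}")
```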
3

Irani, Ramin. "Error Detection for DMB Video Streams." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5086.

Abstract:
The purpose of this thesis is to detect errors in the Digital Multimedia Broadcasting (DMB) transport stream. DMB uses the MPEG-4 standard for encapsulating Packetized Elementary Streams (PES) and the MPEG-2 standard for assembling them into transport stream packets. Much recent research has addressed video-stream error detection, mostly by focusing on decoding parameters related to individual frames; processing complexity can be a disadvantage of such methods. In this thesis, we investigated syntax errors caused by corruption in the header of the video transport stream, focusing on video streams that cannot be decoded. The proposed model is implemented by filtering video and audio packets in order to find the errors; the filters inspect fields that can affect video stream playback. The output of this method gives the type, location and duration of each error. The simplicity of the structure is one advantage of this model: it can be implemented with three simple filters for detecting errors and a "calculation unit" for computing the duration of an error. Fast processing is another benefit of the proposed model.
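
The header-level filtering idea can be sketched directly. The following is a hedged illustration, not the thesis implementation: it scans an MPEG-2 transport stream for three errors detectable from packet headers (lost sync, the transport_error_indicator flag, and continuity-counter gaps). Field offsets follow ISO/IEC 13818-1; adaptation-field subtleties are ignored for brevity.

```python
# A hedged sketch of header-level error filtering (not the thesis code).
def scan_ts(data: bytes):
    errors, last_cc = [], {}
    for off in range(0, len(data) - 187, 188):      # 188-byte TS packets
        pkt = data[off:off + 188]
        if pkt[0] != 0x47:                          # sync byte lost
            errors.append((off, "sync byte lost"))
            continue
        if pkt[1] & 0x80:                           # transport_error_indicator
            errors.append((off, "transport_error_indicator set"))
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        cc = pkt[3] & 0x0F                          # continuity counter
        # Flag gaps in the 4-bit counter (adaptation-field rules ignored here).
        if pid in last_cc and (last_cc[pid] + 1) % 16 != cc:
            errors.append((off, f"continuity gap on PID {pid:#x}"))
        last_cc[pid] = cc
    return errors                                   # (byte offset, error type)

# Usage (hypothetical capture file):
# print(scan_ts(open("capture.ts", "rb").read()))
```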
4

Miura, Tomoaki. "Evaluation and characterization of vegetation indices with error/uncertainty analysis for EOS-MODIS." Diss., The University of Arizona, 2000. http://hdl.handle.net/10150/284157.

Abstract:
A set of error/uncertainty analyses were performed on several "improved" vegetation indices (VIs) planned for operational use in the Moderate Resolution Imaging Spectroradiometer (MODIS) VI products onboard the Terra (EOS AM-1) and Aqua (EOS PM-1) satellite platforms. The objective was to investigate the performance and accuracy of the satellite-derived VI products under improved sensor characteristics and algorithms. These include the "atmospheric resistant" VIs that incorporate the "blue" band for normalization of aerosol effects and the most widely-used, normalized difference vegetation index (NDVI). The analyses were conducted to evaluate specifically: (1) the impact of sensor calibration uncertainties on VI accuracies, (2) the capabilities of the atmospheric resistant VIs and various middle-infrared (MIR) derived VIs to minimize smoke aerosol contamination, and (3) the performances of the atmospheric resistant VIs under "residual" aerosol effects resulting from the assumptions in the MODIS aerosol correction algorithm. The results of these studies showed both the advantages and disadvantages of using the atmospheric resistant VIs for operational vegetation monitoring. The atmospheric resistant VIs successfully minimized optically thin aerosol smoke contamination (aerosol optical thickness (AOT) at 0.67 μm < 1.0) but not optically thick smoke (AOT at 0.67 μm > 1.0). On the other hand, their resistances to "residual" aerosol effects were greater when the effects resulted from the correction of optically-thick aerosol atmosphere. The atmospheric resistant VIs did not successfully minimize the residual aerosol effects from optically-thin aerosol atmosphere (AOT at 0.67 μm ≤ ∼0.15), which was caused mainly by the possible wrong choice of aerosol model used for the AOT estimation and correction. The resultant uncertainties of the atmospheric resistant VIs associated with calibration, which were twice as large as that of the NDVI, increased with increasing AOT. These results suggest that the atmospheric resistant VIs be computed from partially (Rayleigh/O₃) corrected reflectances under normal atmospheric conditions (e.g., visibility > 10 km). Aerosol corrections should only be performed when biomass burning, urban/industrial pollution, and dust storms (larger AOT) are detected.
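
For readers unfamiliar with the index families compared here, the sketch below gives textbook definitions of the NDVI and of two blue-band "atmospheric resistant" indices (ARVI after Kaufman and Tanré 1992, and EVI with its usual coefficients). The exact MODIS product formulations may differ, and the reflectance values are invented:

```python
# Textbook index definitions; MODIS product formulations may differ.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def arvi(nir, red, blue, gamma=1.0):
    # Self-correct the red band with the blue band (Kaufman & Tanre, 1992).
    rb = red - gamma * (blue - red)
    return (nir - rb) / (nir + rb)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Toy surface reflectances for a vegetated pixel:
nir, red, blue = 0.45, 0.08, 0.04
print(f"NDVI={ndvi(nir, red):.3f}  ARVI={arvi(nir, red, blue):.3f}  "
      f"EVI={evi(nir, red, blue):.3f}")
```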
5

Cirineo, Tony, and Bob Troublefield. "STANDARD INTEROPERABLE DATALINK SYSTEM, ENGINEERING DEVELOPMENT MODEL." International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/608398.

Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada. This paper describes an Engineering Development Model (EDM) for the Standard Interoperable Datalink System (SIDS). The EDM represents an attempt to design and build a programmable system that can be used to test and evaluate various aspects of a modern digital datalink. First, commercial wireless components and standards that could be used to construct the SIDS datalink were investigated. This investigation led to the construction of an engineering development model, which presently consists of wire-wrap and prototype circuits that implement many aspects of a modern digital datalink.
6

Schlessman, Bradley R. "Type I Error Rates and Power Estimates for Several Item Response Theory Fit Indices." Wright State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=wright1261404833.

7

Dutschke, Cynthia F. (Cynthia Fleming). "The Characteristics and Properties of the Threshold and Squared-Error Criterion-Referenced Agreement Indices." Thesis, North Texas State University, 1988. https://digital.library.unt.edu/ark:/67531/metadc331415/.

Abstract:
Educators who use criterion-referenced measurement to ascertain the current level of performance of an examinee, in order that the examinee may be classified as either a master or a nonmaster, need to know the accuracy and consistency of their decisions regarding assignment of mastery states. This study examined the sampling distribution characteristics of two reliability indices that use the squared-error agreement function: Livingston's k²(X,Tx) and Brennan and Kane's M(C). The sampling distribution characteristics of five indices that use the threshold agreement function were also examined: Subkoviak's Pc, Huynh's p and k, and Swaminathan's p and k. These seven methods of calculating reliability were also compared under varying conditions of sample size, test length, and criterion or cutoff score. Computer-generated data provided randomly parallel test forms for N = 2000 cases. From this, 1000 samples were drawn, with replacement, and each of the seven reliability indices was calculated. Descriptive statistics were collected for each sample set and examined for distribution characteristics. In addition, the mean value for each index was compared to the population parameter value of consistent mastery/nonmastery classifications. The results indicated that the sampling distribution characteristics of all seven reliability indices approach normal characteristics with increased sample size. The results also indicated that Huynh's p was the most accurate estimate of the population parameter, with the smallest degree of negative bias. Swaminathan's p was the next best estimate of the population parameter, but it has the disadvantage of requiring two test administrations, while Huynh's p index requires only one administration.
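
To make the two families of agreement indices concrete, here is a hedged sketch (ours, not the dissertation's code): it simulates randomly parallel forms, computes a threshold index (agreement p and kappa, which need two administrations) and Livingston's squared-error index from the usual single-administration formula. Verify the formulas against the dissertation before reuse.

```python
# A hedged sketch with simulated randomly parallel forms (not the study's data).
import numpy as np

rng = np.random.default_rng(1)
n_examinees, n_items, cutoff = 500, 40, 24

true_p = rng.beta(6, 4, n_examinees)          # ability as proportion correct
form1 = rng.binomial(n_items, true_p)         # two randomly parallel forms
form2 = rng.binomial(n_items, true_p)

m1, m2 = form1 >= cutoff, form2 >= cutoff     # mastery decisions
p_o = np.mean(m1 == m2)                       # threshold agreement index p
p_c = m1.mean() * m2.mean() + (1 - m1.mean()) * (1 - m2.mean())
kappa = (p_o - p_c) / (1 - p_c)               # chance-corrected agreement

r_xx = np.corrcoef(form1, form2)[0, 1]        # parallel-forms reliability
s2, d2 = form1.var(ddof=1), (form1.mean() - cutoff) ** 2
k2 = (r_xx * s2 + d2) / (s2 + d2)             # Livingston's k2(X, Tx)

print(f"p = {p_o:.3f}, kappa = {kappa:.3f}, k2 = {k2:.3f}")
```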
8

Natu, Ambarish Shrikrishna (Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW). "Error resilience in JPEG2000." Awarded by: University of New South Wales, Electrical Engineering and Telecommunications, 2003. http://handle.unsw.edu.au/1959.4/18835.

Abstract:
The rapid growth of wireless communication and widespread access to information has resulted in a strong demand for robust transmission of compressed images over wireless channels. The challenge of robust transmission is to protect the compressed image data against loss, in such a way as to maximize the received image quality. This thesis addresses this problem and provides an investigation of a forward error correction (FEC) technique that has been evaluated in the context of the emerging JPEG2000 standard. Not much effort has been made in the JPEG2000 project regarding error resilience. The only standardized techniques are based on the insertion of marker codes in the code-stream, which may be used to restore high-level synchronization between the decoder and the code-stream. This helps to localize errors and prevent them from propagating through the entire code-stream. Once synchronization is achieved, additional tools aim to exploit as much of the remaining data as possible. Although these techniques help, they cannot recover lost data. FEC adds redundancy into the bit-stream in exchange for increased robustness to errors. We investigate unequal protection schemes for JPEG2000 by applying different levels of protection to different quality layers in the code-stream. More particularly, the results reported in this thesis provide guidance concerning the selection of JPEG2000 coding parameters and appropriate combinations of Reed-Solomon (RS) codes for typical wireless bit error rates. We find that unequal protection schemes, together with the use of resynchronization markers and some additional tools, can significantly improve the image quality in deteriorating channel conditions. The proposed channel coding scheme is easily incorporated into the existing JPEG2000 code-stream structure, and experimental results clearly demonstrate the viability of our approach.
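
A back-of-envelope calculation shows why the stronger Reed-Solomon codes would be assigned to the more important quality layers. The code rates below are illustrative assumptions, not the combinations recommended in the thesis:

```python
# Survival probability of an RS(n, k) codeword when symbol errors are i.i.d.
from math import comb

def rs_block_ok(n, k, p_sym):
    t = (n - k) // 2                 # symbol errors the code can correct
    return sum(comb(n, i) * p_sym**i * (1 - p_sym)**(n - i) for i in range(t + 1))

p = 0.02                             # assumed raw symbol-error rate
for n, k, layer in [(255, 239, "enhancement"), (255, 191, "middle"), (255, 127, "base")]:
    print(f"{layer:>11} layer: RS({n},{k}) decodes with prob {rs_block_ok(n, k, p):.6f}")
```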
9

Kamaras, Konstantinos. "JPEG2000 image compression and error resilience for transmission over wireless channels." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://sirsi.nps.navy.mil/uhtbin/hyperion-image/02Mar%5FKamaras.pdf.

10

Silvia, E. Suyapa M. "Effects of sampling error and model misspecification on goodness of fit indices for structural equation models /." The Ohio State University, 1988. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487597424138163.

11

Li, Wei, and Zhiyuan Guo. "On Forward Error Correction in IEEE 802.15.4 Wireless Sensor Networks." Thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16614.

Abstract:
Wireless Sensor Networks (WSN) are used in many applications, for example industrial applications, automatic control applications and monitoring applications, to name but a few. Although WSNs can employ different standards to achieve short-range wireless communication, the mainstream of the market is to adopt the low-power, low-rate IEEE 802.15.4 standard. However, this standard does not specify any block codes on the physical layer (PHY) or the MAC sublayer. Reliability and energy efficiency are two important metrics used to evaluate WSN performance. In order to enhance reliability, schemes such as Forward Error Correction (FEC) and Hybrid Automatic Repeat-reQuest (HARQ) can be introduced on the PHY and MAC sublayer when transmitting signals; however, this reduces the energy efficiency of the WSN. To investigate what affects reliability and energy efficiency, this thesis was conducted with the assistance of Matlab simulations of different transmission schemes proposed by the authors. Based on the simulations, both reliability and energy efficiency are evaluated and the results are illustrated for both metrics. The objective of this thesis is to determine a scheme that is able to meet these metric requirements.
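
As a minimal illustration of the FEC trade-off discussed above (not the authors' Matlab code), consider a Hamming(7,4) block code on a binary symmetric channel: it corrects any single bit error per codeword at the cost of a 7/4 transmit-energy overhead per information bit:

```python
# Analytic comparison on a binary symmetric channel (illustrative only).
from math import comb

def hamming74_word_error(p):
    # P(more than 1 of 7 bits flipped): beyond the decoder's correction power.
    return 1 - sum(comb(7, k) * p**k * (1 - p)**(7 - k) for k in (0, 1))

for p in (1e-3, 1e-2, 5e-2):
    print(f"p={p:.0e}: uncoded BER = {p:.2e}, "
          f"coded word-error rate = {hamming74_word_error(p):.2e} "
          f"(energy overhead 7/4 per info bit)")
```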
12

Moore, Joann Lynn. "Estimating standard errors of estimated variance components in generalizability theory using bootstrap procedures." Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/860.

Abstract:
This study investigated the extent to which rules proposed by Tong and Brennan (2007) for estimating standard errors of estimated variance components held up across a variety of G theory designs, variance component structures, sample size patterns, and data types. Simulated data was generated for all combinations of conditions, and point estimates, standard error estimates, and coverage for three types of confidence intervals were calculated for each estimated variance component and relative and absolute error variance across a variety of bootstrap procedures for each combination of conditions. It was found that, with some exceptions, Tong and Brennan's (2007) rules produced adequate standard error estimates for normal and polytomous data, while some of the results differed for dichotomous data. Additionally, some refinements to the rules were suggested with respect to nested designs. This study provides support for the use of bootstrap procedures for estimating standard errors of estimated variance components when data are not normally distributed.
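
The bootstrap idea is simple to sketch for a one-facet persons-by-items design. The toy example below (our own, with arbitrary variance components) estimates the components from mean squares and takes the standard deviation of "bootstrap-p" replicates as the standard error estimate, one of several resampling schemes discussed by Tong and Brennan (2007):

```python
# Bootstrap-p SEs for a persons x items G-study (toy data, arbitrary components).
import numpy as np

rng = np.random.default_rng(2)
n_p, n_i = 100, 20
X = (rng.normal(0, 1.0, (n_p, 1))        # person effects
     + rng.normal(0, 0.5, (1, n_i))      # item effects
     + rng.normal(0, 1.0, (n_p, n_i)))   # residual / interaction

def components(data):
    rows, cols = data.shape
    pm, im, gm = data.mean(1), data.mean(0), data.mean()
    ms_p = cols * np.sum((pm - gm) ** 2) / (rows - 1)
    ms_i = rows * np.sum((im - gm) ** 2) / (cols - 1)
    resid = data - pm[:, None] - im[None, :] + gm
    ms_pi = np.sum(resid ** 2) / ((rows - 1) * (cols - 1))
    # Expected-mean-square solutions: sigma2_p, sigma2_i, sigma2_pi
    return (ms_p - ms_pi) / cols, (ms_i - ms_pi) / rows, ms_pi

boot = np.array([components(X[rng.integers(0, n_p, n_p), :])
                 for _ in range(1000)])  # resample persons with replacement
print("estimates:    ", np.round(components(X), 3))
print("bootstrap SEs:", np.round(boot.std(axis=0, ddof=1), 3))
```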
13

Dubbert, Dale F. "The RMS phase error of a phase-locked loop FM demodulator for standard NTSC video." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9911.

14

Blad, Wiktor, and Vilim Nedic. "GARCH models applied on Swedish Stock Exchange Indices." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-386185.

Abstract:
In the financial industry, it has become increasingly popular to measure risk. One of the most common quantitative measures for assessing risk is Value-at-Risk (VaR), which helps to measure the extreme risks an investor is exposed to. In addition to providing information about the expected loss, VaR was introduced in the regulatory frameworks of Basel I and II as a standardized measure of market risk. Given the necessity of measuring VaR accurately, this thesis aims to contribute to the research field of applying GARCH models to financial time series in order to forecast the conditional variance and find accurate VaR estimations. The findings of this thesis are that GARCH models which incorporate the asymmetric effect of positive and negative returns perform better than a standard GARCH. Furthermore, leptokurtic distributions have been found to outperform the normal distribution. In addition to various models and distributions, various rolling windows have been used to examine how the forecasts differ given the window lengths.
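
A minimal sketch of the VaR machinery described here, with fixed illustrative GARCH(1,1) parameters in place of maximum likelihood estimates, comparing normal and Student-t innovations:

```python
# A minimal sketch; parameters are assumed, not estimated.
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(3)
r = rng.standard_t(df=6, size=1000) * 0.01        # toy daily return series

omega, alpha, beta = 1e-6, 0.08, 0.90             # assumed GARCH(1,1) parameters
h = np.empty_like(r)
h[0] = r.var()
for i in range(1, len(r)):                        # conditional-variance recursion
    h[i] = omega + alpha * r[i - 1] ** 2 + beta * h[i - 1]

sigma = np.sqrt(omega + alpha * r[-1] ** 2 + beta * h[-1])   # one-step-ahead vol
for name, q in [("normal", norm.ppf(0.01)),
                ("unit-variance Student-t(6)", t.ppf(0.01, df=6) / np.sqrt(6 / 4))]:
    print(f"99% one-day VaR ({name} innovations): {-q * sigma:.4%}")
```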
15

Briggs, Nancy Elizabeth. "Estimation of the standard error and confidence interval of the indirect effect in multiple mediator models." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1158693880.

16

Nelson, N. Thomas. "Probability of Bit Error on a Standard IRIG Telemetry Channel Using the Aeronautical Fading Channel Model." International Foundation for Telemetering, 1994. http://hdl.handle.net/10150/611662.

Abstract:
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California. This paper analyzes the probability of bit error for PCM-FM over a standard IRIG channel subject to multipath interference modeled by the aeronautical fading channel. The aeronautical channel model assumes a mobile transmitter and a stationary receiver and specifies the correlation of the fading component. This model describes fading which is typical of that encountered at military test ranges. An expression for the bit error rate on the fading channel with a delay line demodulator is derived and compared with the error rate for the Gaussian channel. The increase in bit error rate over that of the Gaussian channel is determined along with the power penalty caused by the fading. In addition, the effects of several channel parameters on the probability of bit error are determined.
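
The qualitative effect of fading on the error rate can be seen with textbook formulas for a simpler modem. Noncoherent binary FSK is used below purely as a stand-in for the paper's PCM-FM analysis:

```python
# Textbook BERs for noncoherent BFSK (a stand-in modem; illustrative only).
import numpy as np

snr_db = np.arange(0, 21, 5)
g = 10 ** (snr_db / 10)                 # (average) SNR per bit, linear
p_awgn = 0.5 * np.exp(-g / 2)           # Gaussian channel
p_rayleigh = 1.0 / (2.0 + g)            # averaged over flat Rayleigh fading

for db, pa, pr in zip(snr_db, p_awgn, p_rayleigh):
    print(f"{db:2d} dB: AWGN {pa:.2e}   Rayleigh {pr:.2e}")
```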
17

Tataryn, Douglas Joseph 1960. "Standard errors of measurement, confidence intervals, and the distribution of error for the observed score curve." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277223.

Abstract:
This paper reviews the basic literature on the suggested applications of the standard error of measurement (SEM), and points out that there are discrepancies in its suggested application. In the process of determining the efficacy and appropriateness of each of the proposals, a formula to determine the distribution of error for the observed score curve is derived. The final recommendation, which is congruent with Cronbach, Gleser, Nanda & Rajaratnam's (1972) recommendations, is to not use the SEM to create confidence intervals around the observed score: The predicted true score and the standard error of the prediction are better suited (non-biased and more efficient) for the task of estimating a confidence interval which will contain an individual's true score. Finally, the distribution of future observed scores around the expected true score is derived.
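
The paper's recommendation is easy to demonstrate numerically with classical test theory formulas (notation as in Lord and Novick; the test statistics below are invented):

```python
# Invented test statistics; classical test theory formulas.
import math

mean, sd, rel = 100.0, 15.0, 0.90       # test mean, SD, reliability
x, z = 130.0, 1.96                      # observed score, 95% multiplier

sem = sd * math.sqrt(1 - rel)                  # standard error of measurement
se_est = sd * math.sqrt(rel * (1 - rel))       # standard error of estimation
t_hat = mean + rel * (x - mean)                # predicted (regressed) true score

print(f"naive:     {x - z * sem:.1f} .. {x + z * sem:.1f}   (centered on X)")
print(f"preferred: {t_hat - z * se_est:.1f} .. {t_hat + z * se_est:.1f}   (centered on predicted T)")
```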
18

Justino, Júlia Maria da Rocha Vilaverde. "Nonstandard linear algebra with error analysis." Doctoral thesis, Universidade de Évora, 2013. http://hdl.handle.net/10174/16316.

Abstract:
Systems of linear equations, called flexible systems, with coefficients having uncertainties of type o(.) or O(.) are studied from the point of view of nonstandard analysis. Uncertainties of the aforementioned kind are given in the form of so-called neutrices, for instance the set of all infinitesimals. In some cases an exact solution of a flexible system may not exist. In this work, conditions are presented that guarantee the existence of an admissible solution, in terms of inclusion, and also conditions that guarantee the existence of a maximal solution. These conditions concern restrictions on the size of the uncertainties appearing in the matrix of coefficients and in the constant-term vector of the system. Applying Cramer's rule under these conditions, one obtains at least an admissible solution of the system. When a maximal solution is produced by Cramer's rule, one proves that it is the same solution produced by Gauss-Jordan elimination.
19

Meki, Brian. "Examining long-run relationships of the BRICS stock market indices to identify opportunities for implementation of statistical arbitrage strategies." Thesis, University of the Western Cape, 2012. http://hdl.handle.net/11394/4348.

Abstract:
Magister Scientiae - MSc. Purpose: This research investigates the existence of long-term equilibrium relationships among the stock market indices of Brazil, Russia, India, China and South Africa (BRICS). It further investigates cointegrated stock pairs for possible implementation of statistical arbitrage trading techniques. Design: We utilize standard multivariate time series analysis procedures to inspect unit roots to assess stationarity of the series. Thereafter, cointegration is tested by the Johansen and Juselius (1990) procedure and the variables are interpreted by a Vector Error Correction Model (VECM). Statistical arbitrage is investigated through the pairs trading technique. Findings: The five stock indices are found to be cointegrated. Analysis shows that the cointegration rank among the variables is significantly influenced by structural breaks. Two pairs of stock variables are also found to be cointegrated. This guaranteed the mean reversion property necessary for the successful execution of the pairs trading technique. Determining the optimal spread threshold also proved to be highly significant with respect to the success of this trading technique. Value: This research seeks to expand on the literature covering long-run co-movements of the volatile emerging market indices. Based on the cointegration relation shared by the BRICS, the research also seeks to encourage risk taking when investing. We achieve this by showing the potential rewards that can be realized through employing appropriate statistical arbitrage trading techniques in these markets.
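
A compressed sketch of the trading logic described above, using the two-step Engle-Granger test for brevity where the thesis uses the Johansen procedure and a VECM, and synthetic prices in place of BRICS indices:

```python
# Synthetic cointegrated pair; Engle-Granger for brevity.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(4)
common = np.cumsum(rng.normal(0, 1, 1000))        # shared stochastic trend
p1 = common + rng.normal(0, 1, 1000)
p2 = 0.8 * common + rng.normal(0, 1, 1000)

tstat, pvalue, _ = coint(p1, p2)
print(f"Engle-Granger p-value: {pvalue:.4f}")     # small => cointegrated

beta = np.polyfit(p2, p1, 1)[0]                   # hedge ratio
spread = p1 - beta * p2
z = (spread - spread.mean()) / spread.std(ddof=1)
entry = 2.0                                       # threshold choice matters, per the thesis
signals = np.where(z > entry, -1, np.where(z < -entry, 1, 0))
print(f"long / short entries: {np.sum(signals == 1)} / {np.sum(signals == -1)}")
```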
20

DE, AMICIS AMEDEO. "Serially concatenated low-density parity-check codes for error correction." Doctoral thesis, Università Politecnica delle Marche, 2011. http://hdl.handle.net/11566/241943.

Abstract:
This thesis elaborates on the design of Multiple Serially Concatenated Multiple Parity-Check (M-SC-MPC) codes, a class of structured low-density parity-check (LDPC) codes characterized by very simple encoding, and studies how their design can be optimized for wireless applications. Irregular LDPC codes have been proved to be better than regular ones, especially at low code rates. Particular attention is therefore devoted to a simple modification of the inner structure of M-SC-MPC codes that can improve their error-correction performance by introducing irregularity in the parity-check matrix and increasing the length of local cycles in the associated Tanner graph. Furthermore, this thesis presents a modified version of the Progressive Edge Growth (PEG) algorithm to improve the design of M-SC-MPC codes in terms of local cycle length. The proposed codes can be seen as M-SC-MPC codes where an interleaver is added between each pair of component codes, so they are denoted Permuted Serially Concatenated Multiple Parity-Check (P-SC-MPC) codes. Numerical simulations show that the proposed codes perform comparably to, or even better than, both regular and irregular M-SC-MPC codes and the Quasi-Cyclic (QC) codes included in the IEEE 802.16e standard.
21

Lohrmann, Carol A. "An analysis of four error detection and correction schemes for the proposed Federal Standard 1024 (Land Mobile Radio)." Thesis, Monterey, California. Naval Postgraduate School, 1990. http://hdl.handle.net/10945/30698.

Abstract:
Approved for public release; distribution is unlimited. Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs; namely, the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, quadratic residue, hard-decision Golay and soft-decision Golay), tested on three Fortran-programmed channel simulations (INMARSAT, Gaussian and constant burst width), compared and analyzed (based on bit error rates and percent of error-free super-frame runs) so that a best code can be recommended. Of the four codes under study, the soft-decision Golay code (24,12) is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementation of the algorithm.
22

Johnson, Natalie. "A Comparative Simulation Study of Robust Estimators of Standard Errors." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1937.pdf.

23

Якім'юк, Анна Дмитрівна, Мар'яна Іванівна Кривчанська, and Мар'яна Іванівна Грицюк. "The indices of ion-regulating renal function at melatonin administration on the bankground of anaprilinum action under condition of standard lighting." Thesis, Abstracts journal: 13-th Edition of Craiova International Medical Students Conference. - 10-th-13-th November 2011. - Craiova, Romania/, 2011. http://dspace.bsmu.edu.ua:8080/xmlui/handle/123456789/2181.

24

Kanyongo, Gibbs Y. "Using Large-Scale Datasets to Teach Abstract Statistical Concepts: Sampling Distribution." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-82613.

25

Wang, Chunxin. "An investigation of bootstrap methods for estimating the standard error of equating under the common-item nonequivalent groups design." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/1188.

Abstract:
The purpose of this study was to investigate the performance of the parametric bootstrap method and to compare the parametric and nonparametric bootstrap methods for estimating the standard error of equating (SEE) under the common-item nonequivalent groups (CINEG) design with the frequency estimation (FE) equipercentile method under a variety of simulated conditions. When the performance of the parametric bootstrap method was investigated, bivariate polynomial log-linear models were employed to fit the data. With the consideration of the different polynomial degrees and two different numbers of cross-product moments, a total of eight parametric bootstrap models were examined. Two real datasets were used as the basis to define the population distributions and the "true" SEEs. A simulation study was conducted reflecting three levels for group proficiency differences, three levels of sample sizes, two test lengths and two ratios of the number of common items and the total number of items. Bias of the SEE, standard errors of the SEE, root mean square errors of the SEE, and their corresponding weighted indices were calculated and used to evaluate and compare the simulation results. The main findings from this simulation study were as follows: (1) The parametric bootstrap models with larger polynomial degrees generally produced smaller bias but larger standard errors than those with lower polynomial degrees. (2) The parametric bootstrap models with a higher order cross product moment (CPM) of two generally yielded more accurate estimates of the SEE than the corresponding models with the CPM of one. (3) The nonparametric bootstrap method generally produced less accurate estimates of the SEE than the parametric bootstrap method. However, as the sample size increased, the differences between the two bootstrap methods became smaller. When the sample size was equal to or larger than 3,000, the differences between the nonparametric bootstrap method and the parametric bootstrap model that produced the smallest RMSE were very small. (4) Of all the models considered in this study, parametric bootstrap models with the polynomial degree of four performed better under most simulation conditions. (5) Aside from method effects, sample size and test length had the most impact on estimating the SEE. Group proficiency differences and the ratio of the number of common items to the total number of items had little effect on a short test, but had slight effect on a long test.
26

Tanner, Whitney Ford. "Improved Standard Error Estimation for Maintaining the Validities of Inference in Small-Sample Cluster Randomized Trials and Longitudinal Studies." UKnowledge, 2018. https://uknowledge.uky.edu/epb_etds/20.

Abstract:
Data arising from Cluster Randomized Trials (CRTs) and longitudinal studies are correlated and generalized estimating equations (GEE) are a popular analysis method for correlated data. Previous research has shown that analyses using GEE could result in liberal inference due to the use of the empirical sandwich covariance matrix estimator, which can yield negatively biased standard error estimates when the number of clusters or subjects is not large. Many techniques have been presented to correct this negative bias; However, use of these corrections can still result in biased standard error estimates and thus test sizes that are not consistently at their nominal level. Therefore, there is a need for an improved correction such that nominal type I error rates will consistently result. First, GEEs are becoming a popular choice for the analysis of data arising from CRTs. We study the use of recently developed corrections for empirical standard error estimation and the use of a combination of two popular corrections. In an extensive simulation study, we find that nominal type I error rates can be consistently attained when using an average of two popular corrections developed by Mancl and DeRouen (2001, Biometrics 57, 126-134) and Kauermann and Carroll (2001, Journal of the American Statistical Association 96, 1387-1396) (AVG MD KC). Use of this new correction was found to notably outperform the use of previously recommended corrections. Second, data arising from longitudinal studies are also commonly analyzed with GEE. We conduct a simulation study, finding two methods to attain nominal type I error rates more consistently than other methods in a variety of settings: First, a recently proposed method by Westgate and Burchett (2016, Statistics in Medicine 35, 3733-3744) that specifies both a covariance estimator and degrees of freedom, and second, AVG MD KC with degrees of freedom equaling the number of subjects minus the number of parameters in the marginal model. Finally, stepped wedge trials are an increasingly popular alternative to traditional parallel cluster randomized trials. Such trials often utilize a small number of clusters and numerous time intervals, and these components must be considered when choosing an analysis method. A generalized linear mixed model containing a random intercept and fixed time and intervention covariates is the most common analysis approach. However, the sole use of a random intercept applies assumptions that will be violated in practice. We show, using an extensive simulation study based on a motivating example and a more general design, alternative analysis methods are preferable for maintaining the validity of inference in small-sample stepped wedge trials with binary outcomes. First, we show the use of generalized estimating equations, with an appropriate bias correction and a degrees of freedom adjustment dependent on the study setting type, will result in nominal type I error rates. Second, we show the use of a cluster-level summary linear mixed model can also achieve nominal type I error rates for equal cluster size settings.
27

Yuksekkaya, Mehmet. "Implementation And Performance Analysis Of The Dvb-t Standard System." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12606904/index.pdf.

Abstract:
Terrestrial Digital Video Broadcasting (DVB-T) is a standard for the wireless broadcast of MPEG-2 video. DVB-T is based on channel coding algorithms and uses Orthogonal Frequency Division Multiplexing (OFDM) as its modulation scheme. In this thesis, we have implemented the ETSI EN 300 744 standard for Digital Video Broadcasting in MATLAB. The system is composed of blocks including OFDM modulation, channel estimation, channel equalization, frame synchronization and error-protection coding, to name a few. We have investigated the performance of the complete system under different wireless broadcast impairments. In this performance analysis, we have considered Rayleigh fading multipath channels with Doppler shift and framing synchronization errors, and obtained the bit error rate (BER) and channel minimum square error performance versus different maximum Doppler shift values, channel equalization techniques and channel estimation algorithms. Furthermore, we have investigated different methods for interpolating the channel response. It is shown that minimum mean-square error (MMSE) equalization estimates symbols better than zero-forcing (ZF) equalization, and that linear interpolation in time combined with low-pass interpolation in frequency can be used in practical applications for time-frequency interpolation of the channel response.
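
The ZF-versus-MMSE comparison reported here can be reproduced in miniature on a toy per-subcarrier channel (a sketch, not the thesis' DVB-T chain; the channel response is assumed perfectly known):

```python
# Toy per-subcarrier equalization with a perfectly known channel.
import numpy as np

rng = np.random.default_rng(5)
N = 64                                             # subcarriers
snr = 10 ** (10 / 10)                              # 10 dB
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N) / np.sqrt(2)   # QPSK
H = (rng.normal(0, 1, N) + 1j * rng.normal(0, 1, N)) / np.sqrt(2)    # Rayleigh
noise = (rng.normal(0, 1, N) + 1j * rng.normal(0, 1, N)) / np.sqrt(2 * snr)
Y = H * X + noise

X_zf = Y / H                                           # ZF: noise blows up on weak tones
X_mmse = np.conj(H) * Y / (np.abs(H) ** 2 + 1 / snr)   # MMSE: regularized inverse

for name, Xh in (("ZF  ", X_zf), ("MMSE", X_mmse)):
    print(f"{name}: mean squared symbol error = {np.mean(np.abs(Xh - X) ** 2):.4f}")
```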
28

West, Miriam Anne. "Comparison of H-alpha and H-beta Temperature Indices in the Hyades and Coma Star Clusters and Selected H-beta Standard Stars." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2327.

Abstract:
Using the Dominion Astrophysical Observatory's 1.2-m McKellar Telescope, we have obtained spectra on 81 stars from the Hyades Cluster, the Coma Cluster, and selected H-beta standard stars. These spectra cover from 4500 Å to 6900 Å which includes both the H-β and H-α absorption lines. The H-β absorption line has a long history of being used as a temperature index and more recently, calibration of an H-α index has been established for photometric observations. Through spectrophotometric comparison of temperature indices from the H-α and H-β absorption lines we find the expected strong correlation between photometric indices based on the strength of these two lines. This result confirms that the H-α index is a strong indicator of temperature.
29

Maerten-Rivera, Jaime. "A Comparison of Modern Longitudinal Change Models with an Examination of Alternative Error Covariance Structures." Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_dissertations/376.

Abstract:
The purpose of this research was to compare results from two approaches to measuring change over time. The multilevel model (MLM) and latent growth model (LGM) were imposed and the parameter estimates were compared, along with model fit. The study came out of education and used data collected from 191 teachers as part of a professional development intervention in science, which took place over four years; there were missing data as a result of teacher attrition. Teachers' reported use of reform-oriented practices (ROP) was the outcome, and teacher-level variables were examined for their impact on initial ROP and change in ROP from baseline to one year after the intervention. Change in ROP was examined using a piecewise change model with two linear slopes: the first estimated the change from baseline to T1, or the initial change after the intervention, while the second estimated the change from T1 to T3, or the secondary change. Parameter estimates obtained from MLM and LGM for a model using the error covariance structure commonly assumed in MLM (i.e., random slopes, homogeneous level-1 variance) were nearly identical. Models with various alternative covariance structures (commonly associated with the LGM framework) were examined, and results were again nearly identical. Most of the model fit information agreed that the best-fitting model was the one assuming the typical MLM error covariance structure, with the exception of the standardized root mean square residual (SRMR) fit index. The results demonstrated that ROP increased after the first year of the intervention and this level was sustained, though it did not increase significantly in subsequent years. There was more variation in ROP at baseline. This tells us that the intervention was successful: after participating, teachers used ROP more frequently. The success of the intervention did not depend on any of the predictors assessed, and, as a group, the teachers became more similar in their use of reform-oriented practices over time.
30

Pucci, Lorenzo. "Analisi sperimentale dell’interferenza in sistemi UWB basati sullo standard IEEE 802.15.4a." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016.

Abstract:
This work presents an experimental analysis of interference in Ultra-Wide Band (UWB) systems, using a system made by DecaWave, an Irish company that develops integrated circuits for UWB-based indoor localization. The EVK1000 development kit includes two EVB1000 evaluation boards, each incorporating the DW1000 transceiver based on the IEEE 802.15.4-2011 UWB standard. The kit is optimized for indoor localization and data exchange for applications such as Real-Time Locating Systems (RTLS) and Wireless Sensor Networks (WSN). The aim of this work is therefore to analyze the communication performance of the system in the presence of a UWB interferer, paying particular attention to the most critical scenarios.
31

Knowles, Rachel. "Factors contributing to the commission of errors and omission of standard nursing practice among new nurses." Honors in the Major Thesis, University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/860.

Abstract:
Every year, millions of medical errors are committed, costing not only patient health and satisfaction, but thousands of lives and billions of dollars. Errors occur in many areas of the healthcare environment, including the profession of nursing. Nurses provide and delegate patient care and consequently, standard nursing responsibilities such as medication administration, charting, patient education, and basic life support protocol may be incorrect, inadequate, or omitted. Although there is much literature about errors among the general nurse population and there is indication that new nurses commit more errors than experienced nurses, not much literature asks the following question: What are the factors contributing to the commission of errors, including the omission of standard nursing care, among new nurses? Ten studies (quantitative, qualitative, and mixed-mode) were examined to identify these factors. From the 10 studies, the researcher identified the three themes of lack of experience, stressful working conditions, and interpersonal and intrapersonal factors. New nurses may not have had enough clinical time, may develop poor habits, may not turn to more experienced nurses and other professionals, may be fatigued from working too many hours with not enough staffing, may not be able to concentrate at work, and may not give or receive adequate communication. Based on these findings and discussion, suggested implications for nursing practice include extended clinical experience, skills practice, adherence to the nursing process, adherence to medications standards such as the five rights and independent double verification, shorter working hours, adequate staffing, no-interruption and no-phone zones, creating a culture of support, electronically entered orders, translation phones, read-backs, and standardized handoff reports.
32

Incorvaia, Nicolas. "L'enseignement-apprentissage de l'arabe standard moderne aux-par les apprenants français." Electronic Thesis or Diss., Toulouse 2, 2020. http://www.theses.fr/2020TOU20036.

Abstract:
The relationships between the inhabitants of France and the speakers of the Arab language-culture are very old, as they date back at least to the Crusades and the first Latin translations of the Koran. After reviewing the main historical stages of the teaching of Arabic in France (officially organised since the 16th century), we look into the current situation of the language in France, where it is the second most spoken language. This language presents a remarkable situation of pluriglossia, with five coexisting varieties: Classical Arabic, Modern Standard Arabic (MSA), Middle Arabic, the Arabic dialects and "francarabe". These historical and sociolinguistic elements, completed by a comparative study of MSA and Standard French, allow us to approach our research question, which falls within the didactics of languages-cultures: what are the main difficulties an adult French learner encounters when starting to study MSA? The analysis of a corpus of errors allows us to answer this question and to offer some didactic proposals to facilitate learning to communicate in MSA. To deepen this reflection, we also sought to know the motivations that led the respondents to learn MSA, as well as the uses they made of their skills in Arabic. Our research question is also set in the social context of contemporary France, where intercultural communication is of paramount importance.
33

Davidson, Fiona. "Predicting Glass Sponge (Porifera, Hexactinellida) Distributions in the North Pacific Ocean and Spatially Quantifying Model Uncertainty." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40028.

Abstract:
Predictions of species’ ranges from distribution modeling are often used to inform marine management and conservation efforts, but few studies justify the model selected or quantify the uncertainty of the model predictions in a spatial manner. This thesis employs a multi-model, multi-area SDM analysis to develop a higher certainty in the predictions where similarities exist across models and areas. Partial dependence plots and variable importance rankings were shown to be useful in producing further certainty in the results. The modeling indicated that glass sponges (Hexactinellida) are most likely to exist within the North Pacific Ocean where alkalinity is greater than 2.2 μmol l-1 and dissolved oxygen is lower than 2 ml l-1. Silicate was also found to be an important environmental predictor. All areas, except Hecate Strait, indicated that high glass sponge probability of presence coincided with silicate values of 150 μmol l-1 and over, although lower values in Hecate Strait confirmed that sponges can exist in areas with silicate values of as low as 40 μmol l-1. Three methods of showing spatial uncertainty of model predictions were presented: the standard error (SE) of a binomial GLM, the standard deviation of predictions made from 200 bootstrapped GLM models, and the standard deviation of eight commonly used SDM algorithms. Certain areas with few input data points or extreme ranges of predictor variables were highlighted by these methods as having high uncertainty. Such areas should be treated cautiously regardless of the overall accuracy of the model as indicated by accuracy metrics (AUC, TSS), and such areas could be targeted for future data collection. The uncertainty metrics produced by the multi-model SE varied from the GLM SE and the bootstrapped GLM. The uncertainty was lowest where models predicted low probability of presence and highest where the models predicted high probability of presence and these predictions differed slightly, indicating high confidence in where the models predicted the sponges would not exist.
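
The second of the three uncertainty measures, the standard deviation of predictions across bootstrap refits of a binomial GLM, can be sketched as follows. Synthetic data stand in for the sponge records, and statsmodels is our assumption, not necessarily the author's toolchain:

```python
# Bootstrap SD of binomial-GLM predictions on synthetic presence/absence data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 300
X = np.column_stack([np.ones(n), rng.normal(0, 1, n)])   # intercept + predictor
p = 1 / (1 + np.exp(-(0.5 + 1.2 * X[:, 1])))             # e.g. silicate -> presence
y = rng.binomial(1, p)

preds = []
for _ in range(200):                                     # 200 bootstrap refits
    idx = rng.integers(0, n, n)
    fit = sm.GLM(y[idx], X[idx], family=sm.families.Binomial()).fit()
    preds.append(fit.predict(X))                         # predict on the original grid
preds = np.array(preds)
print("mean prediction SD (uncertainty proxy):",
      round(float(preds.std(axis=0, ddof=1).mean()), 4))
```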
34

Hillefors, Hanna, and Nathalie Isaksson. "De svenska hushållens sparande : Vilka faktorer påverkar sparkvoten? En reflektion under den rådande Corona-pandemin." Thesis, Högskolan Väst, Avd för juridik, ekonomi, statistik och politik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-17329.

Abstract:
The savings ratio of Swedish households is record-high, and Sweden, together with the rest of the world, is currently in the middle of a pandemic. What drives individuals to save rests on a number of factors identified by previous research. The purpose of this study is, with previous research as a basis, to investigate which factors affect the savings ratio of Swedish households. Quarterly data for the years 1982-2020 are analyzed as a time series, first tested for unit roots and then for cointegration. The data are then estimated in a multiple linear regression in the form of an error correction model, with the intention of investigating both the short-run and long-run relationships. The results indicate that the variables with a significant impact on the change in the household savings ratio are GDP per capita, inflation, unemployment and consumption, while public savings and the development of the stock market have a significant but smaller effect. The economic theories the study finds support for are the theory of precautionary saving and the standard buffer-stock model.
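
A minimal sketch of the estimation strategy described above: a two-step Engle-Granger error correction model with one toy regressor standing in for the study's macroeconomic variables:

```python
# Two-step Engle-Granger ECM on toy integrated series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 156                                          # roughly quarterly, 1982-2020
x = np.cumsum(rng.normal(0, 1, n))               # toy I(1) regressor
y = 0.6 * x + rng.normal(0, 1, n)                # shares its stochastic trend

theta = sm.OLS(y, sm.add_constant(x)).fit().params[1]   # long-run coefficient
ect = (y - theta * x)[:-1]                              # lagged error-correction term
dy, dx = np.diff(y), np.diff(x)
ecm = sm.OLS(dy, sm.add_constant(np.column_stack([dx, ect]))).fit()
print(ecm.params)   # [const, short-run effect, adjustment speed (should be < 0)]
```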
35

Choi, Kai-san. "Automatic source camera identification by lens aberration and JPEG compression statistics." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B38902345.

36

Choi, Kai-san, and 蔡啟新. "Automatic source camera identification by lens aberration and JPEG compression statistics." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B38902345.

37

Csörgö, Tomáš. "Meranie výkonnosti portfólia." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-195516.

Abstract:
The goal of this master's thesis is to analyze portfolio performance. The theoretical part describes risk, portfolio performance measurement, investment funds and portfolio theory. Portfolio performance is then analyzed with different performance measurement tools.
38

Coraggio, James Thomas. "A Monte Carlo approach for exploring the generalizability of performance standards." [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002508.

39

Bose, Biswojit. "Bit error rate estimation in WiMAX communications at vehicular speeds using Nakagami-m fading model." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2013. https://ro.ecu.edu.au/theses/530.

Abstract:
The wireless communication industry has experienced a rapid technological evolution from its basic first generation (1G) wireless systems to the latest fourth generation (4G) wireless broadband systems. Wireless broadband systems are becoming increasingly popular with consumers and the technological strength of 4G has played a major role behind the success of wireless broadband systems. The IEEE 802.16m standard of the Worldwide Interoperability for Microwave Access (WiMAX) has been accepted as a 4G standard by the Institute of Electrical and Electronics Engineers in 2011. The IEEE 802.16m is fully optimised for wireless communications in fixed environments and can deliver very high throughput and excellent quality of service. In mobile communication environments however, WiMAX consumers experience a graceful degradation of service as a direct function of vehicular speeds. At high vehicular speeds, the throughput drops in WiMAX systems and unless proactive measures such as forward error control and packet size optimisation are adopted and properly adjusted, many applications cannot be facilitated at high vehicular speeds in WiMAX communications. For any proactive measure, bit error rate estimation as a function of vehicular speed, serves as a useful tool. In this thesis, we present an analytical model for bit error rate estimation in WiMAX communications using the Nakagami-m fading model. We also show, through an analysis of the data collected from a practical WiMAX system, that the Nakagami-m model can be made adaptive as a function of speed, to represent fading in fixed environments as well as mobile environments.
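
The modeling idea can be sketched by numerically averaging the AWGN bit-error rate over a gamma-distributed instantaneous SNR, which is what Nakagami-m amplitude fading implies (an illustration, not the thesis' estimator; BPSK is assumed):

```python
# Average BPSK BER over Nakagami-m fading via numerical integration.
import numpy as np
from scipy.stats import gamma
from scipy.special import erfc
from scipy.integrate import quad

def ber_nakagami(m, snr_db):
    g_bar = 10 ** (snr_db / 10)
    pdf = gamma(a=m, scale=g_bar / m).pdf            # instantaneous-SNR density
    f = lambda g: 0.5 * erfc(np.sqrt(g)) * pdf(g)    # Q(sqrt(2g)) = erfc(sqrt(g))/2
    return quad(f, 0, np.inf)[0]

for m in (1.0, 2.0, 4.0):                            # m = 1 is Rayleigh fading
    print(f"m={m}: BER at 10 dB average SNR = {ber_nakagami(m, 10):.2e}")
```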
APA, Harvard, Vancouver, ISO, and other styles
40

Witzky, Marcus. "Three essays on accounting standard setting, corporate governance and investor behavior." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2015. http://dx.doi.org/10.18452/17358.

Full text
Abstract:
This cumulative doctoral thesis consists of three papers within the field of empirical financial accounting research. The first paper examines the role of personal characteristics of accounting standard setters in the development of the International Financial Reporting Standards (IFRS). It documents that the full set of IFRS exhibited a decrease in the importance of principles relative to rules and an increase in its fair value orientation over time. Changes in IFRS properties are found to be associated with the professional and cultural background of International Accounting Standards Board (IASB) members. The second paper investigates determinants and consequences of erroneous financial reporting under the German financial reporting enforcement regime. The corporate governance of firms detected with erroneous financial reporting is found to differ systematically from that of control firms. Further results suggest that error detection might trigger improvements in firm-level accounting oversight. The third paper uses large-scale survey evidence from German individual investors to explore the determinants of their monitoring behavior. Investors who are less trusting in their fellow stakeholders are found to engage in less monitoring. Furthermore, trust and monitoring are documented to be associated with the stock market exposure and the educational background of investors.
APA, Harvard, Vancouver, ISO, and other styles
41

Křižka, Adam. "Diverzifikace portfolia prostřednictvím investic do burzovních indexů." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2020. http://www.nusl.cz/ntk/nusl-414481.

Full text
Abstract:
The diploma thesis focuses on the selection of suitable stock exchange indices for portfolio diversification. The essence and functioning of financial markets and investment funds are presented. Stock exchange indices are analyzed according to suitable indicators and compared with the market. Suitable indices are verified by means of correlation analysis and subsequently recommended for diversifying the portfolios of investment funds managed through the investment company.
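A minimal illustration of the correlation screen such an analysis rests on, with hypothetical index return series; the tickers and the 0.7 threshold are invented for the example.

```python
# Sketch: correlation screen on hypothetical index returns; pairs above the
# threshold add little diversification.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
base = rng.normal(0, 0.01, 250)
returns = pd.DataFrame({
    "IDX_A": base,
    "IDX_B": 0.9 * base + rng.normal(0, 0.004, 250),   # tracks IDX_A closely
    "IDX_C": rng.normal(0, 0.01, 250),
    "IDX_D": rng.normal(0, 0.01, 250),
})

corr = returns.corr()
print(corr.round(2))

threshold = 0.7
flagged = [(a, b) for a in corr.columns for b in corr.columns
           if a < b and corr.loc[a, b] > threshold]
print("weak diversifiers:", flagged)   # expect [('IDX_A', 'IDX_B')]
```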
APA, Harvard, Vancouver, ISO, and other styles
42

Yockey, Ron David. "An investigation of the type I error rates and power of standard and alternative multivariate tests on means under homogeneous and heterogeneous covariance matrices and multivariate normality and nonnormality /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p9992945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Lucero, Aldo. "Compressing scientific data with control and minimization of the L-infinity metric under the JPEG 2000 framework." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Wu, Huahui. "ARMOR - adjusting repair and media scaling with operations research for streaming video." Link to electronic dissertation, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-050406-144021/.

Full text
Abstract:
Dissertation (Ph.D.)--Worcester Polytechnic Institute. Keywords: Streaming MPEG, User Study, Video Quality, Forward Error Correction, Temporal Scaling, Quality Scaling. Includes bibliographical references (p. 186-198).
APA, Harvard, Vancouver, ISO, and other styles
45

Chungbaek, Youngyun. "Impacts of Ignoring Nested Data Structure in Rasch/IRT Model and Comparison of Different Estimation Methods." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/77086.

Full text
Abstract:
This study investigates the impacts of ignoring nested data structure in the Rasch/1PL item response theory (IRT) model via two-level and three-level hierarchical generalized linear models (HGLM). Rasch/IRT models are frequently used in educational and psychometric research for data obtained from multistage cluster samplings, which are likely to violate the assumption of independent observations of examinees required by these models. This violation, however, is ignored in current standard practice, which applies the standard Rasch/IRT model to large-scale testing data. A simulation study (Study Two) was conducted to address the effects of ignoring nested data structure in Rasch/IRT models under various conditions, following a simulation study (Study One) comparing the accuracy and efficiency of three estimation methods commonly used in HGLM: Penalized Quasi-Likelihood (PQL), Laplace approximation, and Adaptive Gaussian Quadrature (AGQ). As expected, PQL tended to produce seriously biased item difficulty and ability variance estimates, whereas Laplace and AGQ were almost unbiased in both two-level and three-level analyses. In terms of root mean squared error (RMSE), the three methods performed without substantive differences for item difficulty and ability variance estimates in both two-level and three-level analyses, except for level-2 ability variance estimates in three-level analysis. Generally, Laplace and AGQ performed similarly well in terms of bias and RMSE of parameter estimates; however, Laplace exhibited a much lower convergence rate than AGQ in three-level analyses. The results from AGQ, which produced the most accurate and stable results among the three computational methods, demonstrated that the theoretical standard errors (SE), i.e., asymptotic information-based SEs, were underestimated by up to 34% when two-level analyses were used for data generated from a three-level model, implying that the Type I error rate is inflated when nested data structures are ignored in Rasch/IRT models. The underestimation of the theoretical standard errors grew substantively more severe as the true ability variance increased or the number of students within schools increased, regardless of test length or the number of schools.
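The core phenomenon, model-based standard errors that are too small when clustering is ignored, can be reproduced with a few lines of simulation. This sketch fits a single-level (intercept-only) logistic model to data generated with a school-level random effect and compares the average model-based SE with the empirical spread of the estimates; it is a simplified illustration under invented parameters, not the study's HGLM design.

```python
# Sketch: ignoring clustering understates standard errors. Students are
# nested in schools; a single-level (intercept-only) logistic fit reports a
# model-based SE that is too small relative to the true sampling spread.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_schools, n_students = 30, 50

estimates, model_se = [], []
for _ in range(200):
    school = rng.normal(0, 1.0, n_schools)                       # level-2 effects
    theta = rng.normal(0, 1.0, (n_schools, n_students)) + school[:, None]
    y = rng.binomial(1, 1 / (1 + np.exp(-theta))).ravel()        # item difficulty 0
    fit = sm.Logit(y, np.ones((y.size, 1))).fit(disp=0)
    estimates.append(fit.params[0])
    model_se.append(fit.bse[0])

print("mean model-based SE      :", round(np.mean(model_se), 3))
print("empirical SD of estimates:", round(np.std(estimates), 3))  # notably larger
```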
APA, Harvard, Vancouver, ISO, and other styles
46

Salmon, Brian P. "Optimizing LDPC codes for a mobile WiMAX system with a saturated transmission amplifier." Pretoria : [s.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-01262009-160431/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Filipe, Vitor. "Cinemática tri-dimensional do tronco durante uma tarefa de lifting: estudo da fiabilidade teste-reteste e diferença mínima detetável em indivíduos saudáveis." Master's thesis, Instituto Politécnico de Setúbal. Escola Superior de Saúde, 2017. http://hdl.handle.net/10400.26/19914.

Full text
Abstract:
Research project report submitted in fulfillment of the requirements for the degree of Master in Physiotherapy, specialization in Musculoskeletal Conditions. INTRODUCTION AND AIM: Low back pain (LBP) is one of the musculoskeletal conditions that leads to high levels of disability. Due to the established relationship between movement, pain and disability, the assessment of lumbo-pelvic movement is extremely important during the examination of an LBP patient. Thus, the study of healthy individuals' movement patterns is important in order to create an empirical basis for differentiating between normal and pathological movement. Although knowledge exists regarding lumbo-pelvic patterns in healthy individuals during different daily activities, information regarding the psychometric properties of the measurement tools used in their assessment is lacking. This study therefore aims to evaluate the test-retest reliability, standard error of measurement (SEM) and minimal detectable change (MDC) of 3D kinematic analysis of the trunk and lower limb during a lifting task in healthy individuals. METHODS: The study used a sample of 14 healthy individuals, who participated in two measurement sessions separated by 7 days. Each session consisted of the collection and analysis of trunk and lower limb 3D kinematics during a lifting task. Intraclass correlation coefficient (ICC) values with their respective 95% CIs were computed, as well as the SEM values, SEM%, and the 95% limits of agreement (95% LOA). Finally, the absolute and percentage values of the MDC were computed. RESULTS: High test-retest reliability (ICC > 0.80) and low SEM values (< 4°) were obtained for most of the peak joint angles. SEM% ranged from 1.7 to 619% for the maximum and minimum joint angles, and from 6.9 to 37.8% for range of motion (ROM) in the different movement planes. Absolute MDC for the maximum and minimum joint angles ranged from 2.2 to 23°, with MDC% ranging from 4.7 to 1715.7%; absolute MDC for ROM in the different planes ranged from 1.5 to 19.7°, with MDC% ranging between 419 and 104.7%. DISCUSSION AND CONCLUSION: The results of this study show high test-retest reliability and low measurement error for trunk and lower limb joint angles, particularly for the ROM parameters. High SEM% and MDC% values were also found, especially for horizontal-plane parameters. Despite this, the results seem to support the use of 3D analysis of the trunk and lower limb during a lifting task, particularly in research contexts.
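The agreement statistics reported above follow standard formulas: SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM. A small sketch with invented joint-angle numbers:

```python
# Sketch: SEM and MDC95 from test-retest summary statistics (invented values).
import numpy as np

def sem_from_icc(sd, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd * np.sqrt(1 - icc)

def mdc95(sem):
    """Minimal detectable change (95%): 1.96 * sqrt(2) * SEM."""
    return 1.96 * np.sqrt(2) * sem

sd, icc, mean_score = 8.0, 0.85, 40.0   # hypothetical joint angle (degrees)
sem = sem_from_icc(sd, icc)
print(f"SEM   = {sem:.2f} deg  (SEM%   = {100 * sem / mean_score:.1f}%)")
print(f"MDC95 = {mdc95(sem):.2f} deg  (MDC95% = {100 * mdc95(sem) / mean_score:.1f}%)")
```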
APA, Harvard, Vancouver, ISO, and other styles
48

Galvão, Ailton Fonseca. "Um modelo inteligente para seleção de itens em testes adaptativos computadorizados." Universidade Federal de Juiz de Fora (UFJF), 2013. https://repositorio.ufjf.br/jspui/handle/ufjf/4781.

Full text
Abstract:
Computerized Adaptive Tests (CAT) are assessments administered by computer whose main feature is the adaptation of the test questions to the performance of each examinee. The two main elements of a CAT are: (i) the item pool, the set of questions available for testing; and (ii) the selection model, which picks out the questions, here called items, applied to the examinees. The item selection model is the core of a CAT: its main task is to identify the examinee's knowledge level as items are applied and to adapt the test, selecting the most appropriate items to produce an accurate measure. This thesis proposes a model for item selection based on goals for test precision, using the estimate of the standard error of the proficiency, with a specific control of these goals for each stage of the test. Using simulated tests, the results are compared with those of two traditional item selection models, evaluating the performance of the proposed model in terms of measurement accuracy and the level of item exposure in the pool. Finally, a specific analysis is performed on the accomplishment of the goals over the course of the tests and their possible influence on the final result, in addition to considerations on the behavior of the model in relation to the characteristics of the item pool.
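As background, the conventional baseline such a model builds on, a maximum-information Rasch CAT with a fixed standard-error stopping rule, can be sketched as follows. The item bank, true proficiency and 0.30 SE target are invented for the illustration, and the thesis's per-stage goal control is not reproduced here.

```python
# Sketch: maximum-information CAT for the Rasch (1PL) model with an SE-based
# stopping rule. Bank, true proficiency and SE target are invented.
import numpy as np

rng = np.random.default_rng(7)
bank = rng.uniform(-3, 3, 300)          # hypothetical item difficulties
true_theta, theta = 0.8, 0.0
administered, responses = [], []

def item_info(theta, b):
    p = 1 / (1 + np.exp(-(theta - b)))
    return p * (1 - p)                  # Rasch item information

for _ in range(60):
    # select the unused item most informative at the current estimate
    avail = [i for i in range(len(bank)) if i not in administered]
    item = max(avail, key=lambda i: item_info(theta, bank[i]))
    administered.append(item)
    p_true = 1 / (1 + np.exp(-(true_theta - bank[item])))
    responses.append(rng.binomial(1, p_true))

    # one Newton step toward the ML proficiency estimate
    b = bank[administered]
    p = 1 / (1 + np.exp(-(theta - b)))
    theta += np.sum(np.array(responses) - p) / max(np.sum(p * (1 - p)), 1e-9)

    se = 1 / np.sqrt(np.sum(item_info(theta, b)))
    if se < 0.30:                       # stop once target precision is reached
        break

print(f"items: {len(administered)}  theta: {theta:.2f}  SE: {se:.2f}")
```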
APA, Harvard, Vancouver, ISO, and other styles
49

Zbránek, Jakub. "Měření horizontálních a vertikálních posunů gabionové zdi." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2014. http://www.nusl.cz/ntk/nusl-226754.

Full text
Abstract:
The main subject of this diploma thesis is the monitoring of horizontal and vertical displacements of the supporting wall in the village of Smědčice. The thesis describes the whole measurement process, from the construction of the reference network and the network of observed points to the final evaluation, and also presents the main theoretical foundations. The final outputs of the thesis are charts, graphical sketches, tables and a written summary.
APA, Harvard, Vancouver, ISO, and other styles
50

Domrow, Nathan Craig. "Design, maintenance and methodology for analysing longitudinal social surveys, including applications." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16518/1/Nathan_Domrow_Thesis.pdf.

Full text
Abstract:
This thesis describes the design, maintenance and statistical analysis involved in undertaking a longitudinal survey. A longitudinal survey (or study) obtains observations or responses from individuals at several times over a defined period. This enables the direct study of changes in an individual's response over time. In particular, it distinguishes an individual's change over time from the baseline differences among individuals within the initial panel (or cohort), which is not possible in a cross-sectional study. As such, longitudinal surveys give correlated responses within individuals, and therefore require different considerations for sample design, selection and analysis than standard cross-sectional studies. This thesis looks at the methodology for analysing social surveys, most of which comprise categorical variables. It outlines the process of sample design and selection, interviewing and analysis for a longitudinal study, with emphasis on the categorical response data typical of a survey. Included are examples relating to the Goodna Longitudinal Survey and the Longitudinal Survey of Immigrants to Australia (LSIA), and the analysis utilises data collected from these surveys. The Goodna Longitudinal Survey was conducted by the Queensland Office of Economic and Statistical Research (a portfolio office within Queensland Treasury) and began in 2002. It ran for two years, during which two waves of responses were collected.
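One standard way to respect the within-individual correlation mentioned above, for a binary survey response, is a generalized estimating equations (GEE) fit with an exchangeable working correlation; this is a generic sketch on simulated data, not the method or data of the thesis.

```python
# Sketch: GEE with exchangeable working correlation for a binary response
# observed over three waves (simulated data, not the surveys in the thesis).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_people, n_waves = 200, 3
person = np.repeat(np.arange(n_people), n_waves)
wave = np.tile(np.arange(n_waves), n_people).astype(float)
u = np.repeat(rng.normal(0, 1.0, n_people), n_waves)   # person-level effect
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.3 * wave + u))))

X = sm.add_constant(wave)
fit = sm.GEE(y, X, groups=person,
             family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable()).fit()
print(fit.summary())   # robust SEs reflect within-person correlation
```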
APA, Harvard, Vancouver, ISO, and other styles