
Dissertations / Theses on the topic 'Test de Monte Carlo'

Consult the top 50 dissertations / theses for your research on the topic 'Test de Monte Carlo.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ding, Jie. "Monte Carlo Pedigree Disequilibrium Test with Missing Data and Population Structure." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218475579.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sheytanova, Teodora. "The Accuracy of the Hausman Test in Panel Data: a Monte Carlo Study." Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-44288.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Yeung, Alan B. (Alan Brian). "A Monte Carlo study of the Sudbury Neutrino Observatory small test detector experiment." Carleton University dissertation (Physics), Ottawa, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Xie, Wen. "A Monte Carlo Simulation Study for Poly-k Test in Animal Carcinogenicity Studies." Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10638898.

Full text
Abstract:

An objective of animal carcinogenicity studies is to identify a tumorigenic potential in animals and to assess relevant risks in humans. When cause-of-death information is not used, the Cochran-Armitage test is applied to detect a linear trend in the incidence of a tumor of interest across dose groups. The survival-adjusted Cochran-Armitage test, known as the poly-k test, is investigated for animals not at equal risk of tumor development, reflecting the shapes of the tumor onset distributions. In this thesis, we validate the poly-k test through a Monte Carlo simulation study. The simulation study is designed to assess the size and power of the poly-k test using a wide range of k values for various tumor onset rates, competing-risk rates, and tumor lethality rates. The poly-k testing approach is investigated to evaluate a dose-related linear trend of a test substance on tumor incidence, and it is implemented in an R package for wide use amongst toxicologists.
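For readers unfamiliar with the survival adjustment, the sketch below illustrates the poly-k idea in Python: tumor-bearing animals and survivors to terminal sacrifice count fully toward the adjusted group size, while a tumor-free animal dying early contributes (t/T)^k, and a Cochran-Armitage-type trend statistic is then computed on the adjusted denominators. The dose scores, toy data, and k = 3 are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np
from scipy.stats import norm

def poly_k_trend_test(dose_scores, tumor, death_time, study_length, k=3.0):
    """Survival-adjusted (poly-k) Cochran-Armitage trend test.

    dose_scores : score for each dose group
    tumor       : list of 0/1 arrays, tumor indicator per animal in each group
    death_time  : list of arrays, time on study per animal in each group
    """
    x = np.array([t.sum() for t in tumor], dtype=float)          # tumor counts per group
    # poly-k weights: tumor bearers and terminal-sacrifice survivors count 1,
    # early tumor-free deaths count (t / T)^k
    n_adj = np.array([
        np.where(tum == 1, 1.0, (tim / study_length) ** k).sum()
        for tum, tim in zip(tumor, death_time)
    ])
    d = np.asarray(dose_scores, dtype=float)
    p = x.sum() / n_adj.sum()                                    # pooled tumor rate
    num = np.sum(d * (x - p * n_adj))
    den = p * (1 - p) * (np.sum(n_adj * d**2) - np.sum(n_adj * d)**2 / n_adj.sum())
    z = num / np.sqrt(den)
    return z, 1 - norm.cdf(z)                                    # one-sided p-value

# toy example: 4 dose groups of 10 animals each (illustrative data only)
rng = np.random.default_rng(1)
tumor = [rng.binomial(1, p, 10) for p in (0.1, 0.2, 0.3, 0.5)]
death_time = [rng.uniform(50, 104, 10) for _ in range(4)]
print(poly_k_trend_test([0, 1, 2, 3], tumor, death_time, study_length=104))
```

A size and power study in the spirit of the abstract would wrap this function in a loop that regenerates the tumor and death-time data under chosen onset, competing-risk, and lethality rates and records the rejection frequency.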

APA, Harvard, Vancouver, ISO, and other styles
5

Leite, Nelson Paiva Oliveira, and Lucas Benedito dos Reis Sousa. "Uncertainty Determination with Monte-Carlo Based Algorithm." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595756.

Full text
Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
A measurement result is complete only if it contains the value of the measurand and its units, the uncertainty and the coverage factor. The uncertainty estimation for the parameters acquired by the FTI is a known process. To execute this task the Institute of Research and Flight Test (IPEV) developed the SALEV© system, which is fully compliant with the applicable standards. But the measurement set also includes derived parameters. The uncertainty evaluation of these parameters can be solved by cumbersome partial derivatives. The search for a simpler solution leads us to a Monte-Carlo based algorithm. The results of using this approach are presented and discussed.
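As a generic illustration of the Monte Carlo alternative to partial derivatives (not the SALEV implementation), the sketch below propagates the standard uncertainties of two measured inputs through a derived parameter by random sampling; the input values and uncertainties are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000                                # number of Monte Carlo draws

# measured inputs: best estimate and standard uncertainty (illustrative values)
voltage = rng.normal(28.0, 0.05, N)        # V
current = rng.normal(3.50, 0.02, N)        # A

power = voltage * current                  # derived parameter, no partial derivatives needed

mean = power.mean()
u = power.std(ddof=1)                               # standard uncertainty
lo, hi = np.percentile(power, [2.5, 97.5])          # 95 % coverage interval
print(f"P = {mean:.3f} W, u(P) = {u:.3f} W, 95 % interval [{lo:.3f}, {hi:.3f}] W")
```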
APA, Harvard, Vancouver, ISO, and other styles
6

Georgii, Hellberg Kajsa-Lotta, and Andreas Estmark. "Fisher's Randomization Test versus Neyman's Average Treatment Test." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385069.

Full text
Abstract:
The following essay describes and compares Fisher's randomization test and Neyman's average treatment test, with the intention of providing an easily understood blueprint for the practical execution of the tests and the conditions surrounding them. Focus is also directed towards the tests' different implications for statistical inference and how the design of a study, in relation to its assumptions, affects the external validity of the results. The essay is structured so that the tests are first presented and evaluated, then their advantages and limitations are weighed against each other before they are applied to a data set as a practical example. Lastly, the results obtained from the data set are compared in the Discussion section. The example used in this paper, which compares cigarette consumption after treating one group with nicotine patches and another with fake nicotine patches, shows a decrease in cigarette consumption for both tests. The tests differ, however, in that the result from the Neyman test can be made valid for the population of interest. Fisher's test, on the other hand, only identifies the effect within the sample; consequently, it cannot draw conclusions about the population of heavy smokers. In short, the findings of this paper suggest that a combined use of the two tests would be the most appropriate way to test for a treatment effect: one could first use the Fisher test to check whether any effect at all exists in the experiment, and then use the Neyman test to complement the findings of the Fisher test, for example by estimating an average treatment effect.
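A minimal sketch of Fisher's randomization test for the kind of two-group comparison described above is given below; the cigarette-consumption numbers are made up, and the full enumeration of treatment assignments is approximated by random permutations of the labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative outcomes: cigarettes per day after treatment
treated = np.array([4, 0, 6, 2, 3, 1, 5, 2])    # nicotine patch
control = np.array([7, 5, 9, 4, 6, 8, 3, 7])    # fake patch

observed = treated.mean() - control.mean()
pooled = np.concatenate([treated, control])
n_t = len(treated)

B = 10_000
null_diffs = np.empty(B)
for b in range(B):
    perm = rng.permutation(pooled)              # re-randomize the treatment labels
    null_diffs[b] = perm[:n_t].mean() - perm[n_t:].mean()

# two-sided randomization p-value for the sharp null of no effect for any unit
p = (np.abs(null_diffs) >= abs(observed)).mean()
print(f"observed difference = {observed:.2f}, randomization p = {p:.4f}")
```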
APA, Harvard, Vancouver, ISO, and other styles
7

Lindahl, John, and Douglas Persson. "Data-driven test case design of automatic test cases using Markov chains and a Markov chain Monte Carlo method." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43498.

Full text
Abstract:
Large and complex software that is frequently changed leads to testing challenges. It is well established that the later a fault is detected in software development, the more it costs to fix. This thesis aims to research and develop a method of generating relevant and non-redundant test cases for a regression test suite, to catch bugs as early in the development process as possible. The research was executed at Axis Communications AB with their products and systems in mind. The approach utilizes user data to dynamically generate a Markov chain model and, with a Markov chain Monte Carlo method, strengthen that model. The model generates test case proposals, detects test gaps, and identifies redundant test cases based on the user data and data from a test suite. The sampling in the Markov chain Monte Carlo method can be modified to bias the model towards test coverage or relevance. The model is generated generically and can therefore be implemented in other API-driven systems. The model was designed with scalability in mind, and further implementations can be made to increase the complexity and further specialize the model for individual needs.
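As a simplified sketch of the underlying idea, the code below estimates a first-order Markov chain from hypothetical user sessions (sequences of API calls) and samples one candidate test case from it; the MCMC re-weighting step developed in the thesis is not reproduced here, and the endpoint names are invented.

```python
import random
from collections import defaultdict

# hypothetical user sessions as sequences of API calls
sessions = [
    ["login", "list_devices", "get_stream", "logout"],
    ["login", "list_devices", "set_config", "get_stream", "logout"],
    ["login", "get_stream", "get_stream", "logout"],
]

# estimate first-order transition probabilities from the logs
counts = defaultdict(lambda: defaultdict(int))
for s in sessions:
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1
transitions = {
    a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
    for a, nxt in counts.items()
}

def sample_test_case(start="login", end="logout", max_len=20, seed=None):
    """Random walk over the estimated chain -> one candidate test case."""
    rng = random.Random(seed)
    state, path = start, [start]
    while state != end and state in transitions and len(path) < max_len:
        nxt = transitions[state]
        state = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        path.append(state)
    return path

print(sample_test_case(seed=7))
```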
APA, Harvard, Vancouver, ISO, and other styles
8

Luo, Zhisui. "A Bayesian Analysis of a Multiple Choice Test." Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-theses/269.

Full text
Abstract:
In a multiple choice test, examinees gain points based on the number of correct responses they give. However, in this traditional grading it is assumed that the questions in the test are replications of each other. We apply an item response theory model to estimate students' abilities, characterized by item features, in a midterm test. Our Bayesian logistic item response theory model studies the relation between the probability of a correct response and three parameters: one parameter measures the student's ability, and the other two measure an item's difficulty and its discriminatory feature. In this model the ability and the discrimination parameters are not identifiable. To address this issue, we construct a hierarchical Bayesian model to nullify the effects of non-identifiability. A Gibbs sampler is used to make inference and to obtain posterior distributions of the three parameters. For a "nonparametric" approach, we implement the item response theory model using a Dirichlet process mixture model. This new approach enables us to grade and cluster students based on their "ability" automatically. Although the Dirichlet process mixture model has very good clustering properties, it involves expensive and complicated computations. A slice sampling algorithm has been proposed to address this issue. We apply our methodology to a real dataset obtained from a multiple choice test in WPI's Applied Statistics I (Spring 2012), which illustrates how a student's ability relates to the observed scores.
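For context, here is a small sketch of the two-parameter logistic item response model implied by the three parameters mentioned above (ability, difficulty, discrimination); the parameter values and simulated responses are illustrative, and the Gibbs sampler and Dirichlet process machinery are not shown.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL IRT: probability of a correct response given ability theta,
    item discrimination a and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(3)
n_students, n_items = 200, 20
theta = rng.normal(0, 1, n_students)          # abilities
a = rng.lognormal(0, 0.3, n_items)            # discriminations (positive)
b = rng.normal(0, 1, n_items)                 # difficulties

# simulate a response matrix for the multiple-choice test
P = p_correct(theta[:, None], a[None, :], b[None, :])
responses = rng.binomial(1, P)
print("average raw score:", responses.sum(axis=1).mean())
```

The scale trade-off between theta and the discrimination parameters in this formula is exactly the non-identifiability that the hierarchical prior in the abstract is meant to handle.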
APA, Harvard, Vancouver, ISO, and other styles
9

Senteney, Michael H. "A Monte Carlo Study to Determine Sample Size for Multiple Comparison Procedures in ANOVA." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou160433478343909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Nguyen, Diep Thi. "Statistical Models to Test Measurement Invariance with Paired and Partially Nested Data: A Monte Carlo Study." Scholar Commons, 2019. https://scholarcommons.usf.edu/etd/7869.

Full text
Abstract:
When assessing the emotions, behaviors, or performance of preschoolers and young children, scores from adults such as parent, psychiatrist, and teacher ratings are used rather than scores from the children themselves. Data from parent ratings, or from parents and teachers, are often nested: students are within teachers and a child is within their parents. This common nested structure of data in the educational, social, and behavioral sciences makes measurement invariance (MI) testing across informants of children methodologically challenging. There was a lack of studies that take the nested structure of the data into account in MI testing for multiple adult informants, and in particular no simulation study that examines the performance of different models used to test MI across different raters. This dissertation focused on two specific nesting data types in testing MI between adult raters of children: paired and partially nested data. For paired data, the independence assumption of regular MI testing is often violated because the two informants (e.g., father and mother) rate the same child, and their scores are anticipated to be related or dependent. Partially nested data refers to the research situation where teacher and parent ratings are compared. In this scenario, it is common that each parent has only one child to rate while each teacher has multiple children in their classroom. Thus, in the case of teacher and parent ratings of the same children, the data are repeated measures and also partially nested. Because of these features of the data, MI testing between adult informants of children requires statistical models that take different types of data dependency into account. I proposed and evaluated the performance of two statistical models that can handle repeated measures and partial nesting, in addition to one commonly used and one potentially appropriate statistical model, across several simulated research scenarios. Results of the two simulation studies in this dissertation showed that for paired data, both the multiple-group confirmatory factor analysis (CFA) and the repeated-measure CFA models were able to detect scalar invariance most of the time using the Δχ² test and ΔCFI. Although the multiple-group CFA (Model 2) was able to detect scalar invariance better than the repeated-measure CFA model (Model 1), the detection rates of Model 1 were still high (88%-91% using the Δχ² test and 84%-100% using ΔCFI or ΔRMSEA). For the configural and metric invariance conditions with paired data, Model 1 had a higher detection rate than Model 2 in almost every research scenario examined in this dissertation. In particular, while Model 1 could detect noninvariance (either in intercepts only or in both intercepts and factor loadings) most of the time for paired data, Model 2 could rarely catch it when the suggested cut-off of 0.01 for RMSEA differences was used. For paired data, although both Models 1 and 2 could be a good choice to test measurement invariance, Model 1 might be favored if researchers are more interested in detecting noninvariance, due to its overall high detection rates for all three levels of measurement invariance (i.e., configural, metric, and scalar). For scalar invariance with partially nested data, both the multilevel repeated-measure CFA and the design-based multilevel CFA could detect invariance most of the time (from 81% to 100% of examined cases), with a slightly higher detection rate for the former model than for the latter.
The multiple-group CFA model hardly detected scalar invariance except when the ICC was small. The detection rates for configural invariance using the Δχ² test or the Satorra-Bentler LRT were also highest for Model 3 (82% to 100%, except for two conditions with detection rates of 61%), followed by Model 5, and lowest for Model 4. Models 4 and 5 could reach these rates only with the largest sample sizes (i.e., a large number of clusters, a large cluster size, or both) when the magnitude of noninvariance was small. Unlike scalar and configural invariance, the ability to detect metric invariance was highest for Model 4, followed by Model 5, and lowest for Model 3 across many conditions using all three performance criteria. Given its higher detection rates for all configural and scalar invariance conditions, and moderate detection rates for many metric invariance conditions (except cases with a small number of clusters combined with a large ICC), Model 3 could be a good candidate for testing measurement invariance with partially nested data when there is a sufficient number of clusters, or a small number of clusters with a small ICC. Model 5 might also be a reasonable option for this type of data if both the number of clusters and the cluster size are large (i.e., 80 and 20, respectively), or if either of these two factors is large coupled with a small ICC. If the ICC is not small, it is recommended to have a large number of clusters, or a combination of a large number of clusters and a large cluster size, to ensure high detection rates of measurement invariance for partially nested data. Because the multiple-group CFA model had better and reasonable detection rates than the design-based and multilevel repeated-measure CFA models across configural, metric, and scalar invariance under conditions of small cluster size (10) and small ICC (0.13), researchers can consider using this model to test measurement invariance when they can only collect 10 participants within a cluster (e.g., students within a classroom) and there is a small degree of data dependency (e.g., small variance between clusters) in the data.
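For readers unfamiliar with the decision criteria used above, the sketch below applies the nested-model chi-square difference test and the common ΔCFI ≤ .01 rule of thumb to hypothetical fit results; the numbers are invented and no CFA model is actually estimated here.

```python
from scipy.stats import chi2

# hypothetical fit results for nested invariance models (illustrative numbers)
metric = {"chisq": 312.4, "df": 164, "cfi": 0.951}   # less constrained model
scalar = {"chisq": 330.1, "df": 176, "cfi": 0.948}   # more constrained model

d_chisq = scalar["chisq"] - metric["chisq"]
d_df    = scalar["df"] - metric["df"]
p_value = chi2.sf(d_chisq, d_df)          # Δχ² test of the added equality constraints
d_cfi   = metric["cfi"] - scalar["cfi"]

print(f"Δχ² = {d_chisq:.1f} on {d_df} df, p = {p_value:.3f}, ΔCFI = {d_cfi:.3f}")
print("scalar invariance rejected" if (p_value < 0.05 or d_cfi > 0.01)
      else "scalar invariance retained")
```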
APA, Harvard, Vancouver, ISO, and other styles
11

Schrepfer, Thomas. "Swiss Solvency Test und Solvency II: Theorie und Praxis in der Ausgestaltung von Solvabilitätsvorschriften für Lebensversicherungen in Europa." St. Gallen, 2007. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/01653963002/$FILE/01653963002.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Khedri, Shiler. "Markov chain Monte Carlo methods for exact tests in contingency tables." Thesis, Durham University, 2012. http://etheses.dur.ac.uk/5579/.

Full text
Abstract:
This thesis is mainly concerned with conditional inference for contingency tables, where the MCMC method is used to take a sample from the conditional distribution. One of the most common models investigated for contingency tables is the independence model. The classic test statistics for testing the independence hypothesis, the Pearson and likelihood-ratio chi-square statistics, rely on large-sample distributions. The large-sample distribution does not provide a good approximation when the sample size is small. The Fisher exact test is an alternative method which enables us to compute the exact p-value for testing the independence hypothesis. For contingency tables of large dimension, the Fisher exact test is not practical as it requires counting all tables in the sample space. We review some enumeration methods which do not require counting all tables in the sample space. However, these methods also fail to compute the exact p-value for contingency tables of large dimensions. Diaconis and Sturmfels (1998) introduced a method based on the Gröbner basis. It is quite complicated to compute the Gröbner basis for contingency tables, as it is different for each individual table and not only for different sizes of table. We also review the method introduced by Aoki and Takemura (2003) using the minimal Markov basis for some particular tables. Bunea and Besag (2000) provided an algorithm using the most fundamental moves to make the Markov chain irreducible over the sample space by defining an extra space. The algorithm was only introduced for 2 × J × K tables using the Rasch model. We introduce a direct proof of the irreducibility of the Markov chain achieved by the Bunea and Besag algorithm. This is then used to prove that the Bunea and Besag (2000) approach can be applied to some tables of higher dimensions, such as 3 × 3 × K and 3 × 4 × 4. The efficiency of the Bunea and Besag approach is extensively investigated for many different settings, such as tables of low/moderate/large dimensions, tables with special zero patterns, etc. The efficiency of the algorithms is measured based on the effective sample size of the MCMC sample. We use two different metrics to penalise the effective sample size: the running time of the algorithm and the total number of bits used. These measures are also used to compute the efficiency of an adjustment of the Bunea and Besag algorithm, which shows that it outperforms the original algorithm in some settings.
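A minimal sketch of this kind of MCMC exact test is shown below for a plain two-way table: random ±1 moves on 2×2 subtables preserve the margins, a Metropolis step targets the conditional (hypergeometric) distribution under independence, and the exact p-value is estimated as the proportion of sampled tables whose chi-square statistic is at least the observed one. This is the textbook two-way construction, not the Bunea and Besag algorithm for three-way tables analysed in the thesis, and the example table is invented.

```python
import numpy as np

def chisq_stat(tab, expected):
    return ((tab - expected) ** 2 / expected).sum()

def mcmc_exact_pvalue(table, n_steps=50_000, burn=5_000, seed=0):
    rng = np.random.default_rng(seed)
    tab = np.array(table, dtype=float)
    r, c = tab.shape
    expected = np.outer(tab.sum(1), tab.sum(0)) / tab.sum()   # fixed, margins never change
    t_obs = chisq_stat(tab, expected)
    hits, kept = 0, 0
    for step in range(n_steps):
        i1, i2 = rng.choice(r, 2, replace=False)
        j1, j2 = rng.choice(c, 2, replace=False)
        # propose a +1/-1 checkerboard move; it keeps all row/column totals fixed
        if tab[i1, j2] > 0 and tab[i2, j1] > 0:
            # Metropolis ratio for the hypergeometric target (proportional to 1/prod n_ij!)
            ratio = (tab[i1, j2] * tab[i2, j1]) / ((tab[i1, j1] + 1) * (tab[i2, j2] + 1))
            if rng.random() < min(1.0, ratio):
                tab[i1, j1] += 1; tab[i2, j2] += 1
                tab[i1, j2] -= 1; tab[i2, j1] -= 1
        if step >= burn:
            kept += 1
            hits += chisq_stat(tab, expected) >= t_obs
    return hits / kept

table = [[3, 1, 4],
         [2, 6, 1],
         [1, 2, 5]]
print("estimated exact p-value:", mcmc_exact_pvalue(table))
```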
APA, Harvard, Vancouver, ISO, and other styles
13

Parham, Jonathan Brent. "Physically consistent boundary conditions for free-molecular satellite aerodynamics." Thesis, Boston University, 2014. https://hdl.handle.net/2144/21230.

Full text
Abstract:
Thesis (M.Sc.Eng.)
To determine satellite trajectories in low earth orbit, engineers need to adequately estimate aerodynamic forces. But to this day, such a task suffers from inexact values of drag forces acting on complicated shapes that form modern spacecraft. While some of the complications arise from the uncertainty in the upper atmosphere, this work focuses on the problems in modeling the flow interaction with the satellite geometry. The only numerical approach that accurately captures effects in this flow regime—like self-shadowing and multiple molecular reflections—is known as Test Particle Monte Carlo. This method executes a ray-tracing algorithm to follow particles that pass through a control volume containing the spacecraft and accumulates the momentum transfer to the body surfaces. Statistical fluctuations inherent in the approach demand particle numbers on the order of millions, often making this scheme too costly to be practical. This work presents a parallel Test Particle Monte Carlo method that takes advantage of both graphics processing units and multi-core central processing units. The speed at which this model can run with millions of particles enabled the exploration of regimes where a flaw was revealed in the model’s initial particle seeding. A new model introduces an analytical fix to this flaw—consisting of initial position distributions at the boundary of a spherical control volume and an integral for the correct number flux—which is used to seed the calculation. This thesis includes validation of the proposed model using analytical solutions for several simple geometries and demonstrates uses of the method for the aero-stabilization of the Phobos-Grunt Martian probe and pose-estimation for the ICESat mission.
APA, Harvard, Vancouver, ISO, and other styles
14

Al-Abdullatif, Fatimah. "Discriminant Function Analysis Versus Univariate ANOVAs as Post Hoc Procedures Following Significant MANOVA Test: A Monte Carlo Study." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1585072063453955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Öhman, Marie-Louise. "Aspects of analysis of small sample right censored data using generalized Wilcoxon rank tests /." Umeå : Univ, 1994. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=006873072&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Ibrahim, Abdul K. "Distribution and power of selected item bias indices: A Monte Carlo study." Thesis, University of Ottawa (Canada), 1992. http://hdl.handle.net/10393/7831.

Full text
Abstract:
This study examines the following DIF procedures: Transformed Item Difficulty (TID), Full Chi-Square, Mantel-Haenszel chi-square, Mantel-Haenszel delta, Logistic Regression, SOS2, SOS4, and Lord's chi-square, under three sample sizes, two test lengths, four cases of item discrimination arrangement, and three item difficulty levels. The study is in two parts: the first part examines the distributions of the indices under null (no bias) conditions, and the second part deals with the power of the procedures to detect known bias in simulated test data. Agreement among procedures is also addressed. Lord's chi-square certainly appears to perform very well. Its detection rates were very good, and its percentiles were not affected by discrimination level or test length. In retrospect, one would like to know how well it might do at smaller sample sizes. When the tabled values were used, it performed equally well in detecting bias and improved in reducing false positive rates. Of the other indices, the Mantel-Haenszel and the logistic regression indices seemed the best. The Camilli chi-square had a number of problems: its tabled values were not at all useful for detection of bias. The TID was somewhat better but does not have a significance test associated with it; one would need to rely on baseline studies if one were to use it. For uniform bias either the Mantel-Haenszel chi-square or logistic regression would be recommended, while for nonuniform bias logistic regression would be appropriate. It is interesting to note that Lord's chi-square was effective for detecting either kind of bias. We have been told that sample size is related to chi-square values. For each of the chi-square indices the observed values were considerably lower than the tabled values. Of course, these were conditions where no bias was present except that which might be randomly induced in data generation. Perhaps it is in those instances where bias is truly present that larger sample sizes allow us to more easily identify biased items. Certainly the proportions of biased items detected were greater for large sample sizes for the Camilli chi-square, Mantel-Haenszel chi-square, and logistic regression chi-squares.
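As a small illustration of one of the compared procedures, the sketch below computes the continuity-corrected Mantel-Haenszel DIF chi-square from 2×2 tables (group by correct/incorrect) stratified by score level; the counts are invented.

```python
import numpy as np
from scipy.stats import chi2

def mantel_haenszel_chi2(strata):
    """strata: list of 2x2 arrays [[A, B], [C, D]] per score level,
    rows = reference/focal group, columns = correct/incorrect."""
    A = E = V = 0.0
    for t in strata:
        t = np.asarray(t, dtype=float)
        n = t.sum()
        r1, r2 = t[0].sum(), t[1].sum()        # group sizes in this stratum
        c1, c2 = t[:, 0].sum(), t[:, 1].sum()  # correct / incorrect totals
        A += t[0, 0]
        E += r1 * c1 / n
        V += r1 * r2 * c1 * c2 / (n ** 2 * (n - 1))
    stat = (abs(A - E) - 0.5) ** 2 / V          # continuity-corrected MH chi-square
    return stat, chi2.sf(stat, df=1)

# toy strata for one item at three score levels (hypothetical counts)
strata = [[[20, 10], [12, 18]],
          [[30,  8], [22, 16]],
          [[25,  5], [20, 10]]]
print(mantel_haenszel_chi2(strata))
```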
APA, Harvard, Vancouver, ISO, and other styles
17

Lehmann, Rüdiger, and Anja Voß-Böhme. "On the statistical power of Baarda’s outlier test and some alternative." Hochschule für Technik und Wirtschaft, 2017. https://htw-dresden.qucosa.de/id/qucosa%3A31756.

Full text
Abstract:
Baarda's outlier test is one of the best-established theories in geodetic practice. The optimal test statistic of the local model test for a single outlier is known as the normalized residual. Other model disturbances can also be detected and identified with this test. It enjoys the property of being a uniformly most powerful invariant (UMPI) test, but it is not a uniformly most powerful (UMP) test. In this contribution we prove that, in the class of test statistics following a common central or non-central χ² distribution, Baarda's solution is also uniformly most powerful, UMPχ² for short. It turns out that UMPχ² is identical to UMPI, such that this proof can be seen as another proof of the UMPI property of the test. We demonstrate by an example that, by means of the Monte Carlo method, it is even possible to construct test statistics which are regionally more powerful than Baarda's solution. They follow a so-called generalized χ² distribution. Due to high computational costs we do not yet propose this as a "new outlier detection method", but only as a proof that it is in principle possible to outperform the statistical power of Baarda's test.
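A minimal sketch of the normalized residual (w-test) and a Monte Carlo estimate of its detection rate for a single gross error in a toy Gauss-Markov model is given below; the design matrix, outlier sizes, and significance level α0 = 0.001 are illustrative choices and do not reproduce the generalized χ² construction of the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

A = np.column_stack([np.ones(8), np.arange(8)])   # toy design: 8 observations, 2 parameters
sigma0 = 1.0                                      # a priori standard deviation
k = norm.ppf(1 - 0.001 / 2)                       # critical value of the w-test (about 3.29)

# cofactor matrix of the residuals Q_vv = I - A (A^T A)^-1 A^T (weight matrix P = I)
Q_vv = np.eye(len(A)) - A @ np.linalg.solve(A.T @ A, A.T)
diag_q = np.diag(Q_vv)

def detection_rate(outlier, n_sim=5_000):
    """Fraction of runs in which the w-test flags observation 0 carrying the outlier."""
    hits = 0
    for _ in range(n_sim):
        e = rng.normal(0, sigma0, len(A))
        e[0] += outlier                            # gross error on the first observation
        v = Q_vv @ e                               # least-squares residuals
        w = v / (sigma0 * np.sqrt(diag_q))         # normalized residuals
        hits += abs(w[0]) > k
    return hits / n_sim

for grad in (0.0, 3.0, 5.0):                       # 0 gives the empirical size
    print(f"outlier = {grad:.1f} sigma  ->  detection rate {detection_rate(grad):.3f}")
```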
APA, Harvard, Vancouver, ISO, and other styles
18

Wang, Xiangrong. "Effect of Sample Size on IRT Equating of Uni-Dimensional Tests in Common Item Non-Equivalent Group Design: A Monte Carlo Simulation Study." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/37555.

Full text
Abstract:
Test equating is important to large-scale testing programs for two reasons: strict test security is a key concern for high-stakes tests, and fairness of test equating is important for test takers. The question of the adequacy of sample size often arises in test equating. However, most recommendations in the existing literature are based on classical test equating. Very few research studies have systematically investigated the minimal sample size which leads to reasonably accurate equating results based on item response theory (IRT). The main purpose of this study was to examine the minimal sample size for desired IRT equating accuracy for the common-item nonequivalent groups design under various conditions. Accuracy was determined by examining the relative magnitude of six accuracy statistics. Two IRT equating methods were carried out on simulated tests with combinations of test length, test format, group ability difference, similarity of form difficulty, and parameter estimation method, for 14 sample sizes, using Monte Carlo simulations with 1,000 replications per cell. Observed score equating and true score equating were compared to the criterion equating to obtain the accuracy statistics. The results suggest that different sample size requirements exist for different test lengths, test formats and parameter estimation methods. Additionally, the results show the following: first, the results for true score equating and observed score equating are very similar. Second, the longer test has less accurate equating than the shorter one at the same sample size level, and as the sample size decreases the gap grows. Third, the concurrent parameter estimation method produced less equating error than separate estimation at the same sample size level, and as the sample size decreases the difference increases. Fourth, the cases with different group ability have larger and less stable errors compared to the base case and the cases with different test difficulty, especially when using the separate parameter estimation method with sample sizes less than 750. Last, the mixed-format test is equated more accurately than the single-format one at the same sample size level.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
19

Gogokhia, Lia. "Simulative Untersuchung der Eigenschaften von Testverfahren auf Elliptizität mit Anwendung auf Finanzmarktdaten." Aachen Shaker, 2007. http://d-nb.info/98807737X/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Jiang, Niantao. "Three tests of dimensionality in structural equation modeling: a Monte Carlo simulation study." Chapel Hill, N.C.: University of North Carolina at Chapel Hill, 2006. http://dc.lib.unc.edu/u?/etd,476.

Full text
Abstract:
Thesis (M.A.)--University of North Carolina at Chapel Hill, 2006.
Title from electronic title page (viewed Oct. 10, 2007). "... in partial fulfillment of the requirements for the degree of Master of Arts in the Department of Sociology." Discipline: Sociology; Department/School: Sociology.
APA, Harvard, Vancouver, ISO, and other styles
21

Bergmair, Richard. "Monte Carlo semantics : robust inference and logical pattern processing with natural language text." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Ernesto, Dulcidia Carlos Guezimane. "Teste monte carlo para a tendência estocástica em modelos de espaços de estados." Universidade Federal de Viçosa, 2016. http://www.locus.ufv.br/handle/123456789/9188.

Full text
Abstract:
Nyblom and Mäkeläinen proposed a test statistic that leads to a locally most powerful test of whether the noise variance of the level term in a linear structural model is deterministic or stochastic. The Nyblom-Mäkeläinen (NM) test is a formal test for deciding whether a time series should be modeled with stochastic or with deterministic components. The predictions produced by these two approaches (stochastic versus deterministic) can be very different, and the interpretation of the behavior of the phenomenon under study also changes considerably between the two cases. Nyblom and Mäkeläinen proposed using an asymptotic approximation to the NM statistic. They concluded, however, that this practice is not very friendly: besides being difficult to handle in practice, it does not control the occurrence of type I error. An alternative way of applying the NM test, instead of the asymptotic approximation, is to use Monte Carlo simulation under the null hypothesis. This work presents an exact hypothesis-testing method for testing whether the trend term in a given linear structural model is deterministic or stochastic. By exact, we mean that the probability of type I error is analytically under control for any sample size. The proposed test is valid for any time series with a distribution in the location-scale family. The procedure does not require estimating the model. In addition, under the alternative hypothesis of a random trend term, the disturbance and the trend term may have any distributional form. The well-known problem of tests on the boundary of the parameter space is also resolved. Intensive numerical investigations for several members of the location-scale family show that the method has good statistical performance even for short time series. Keywords: Monte Carlo test, stochastic trend, local level model.
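The generic Monte Carlo testing recipe referred to above can be sketched as follows: simulate the test statistic under the null hypothesis and use the rank of the observed value, which keeps the type I error at the nominal level for any sample size provided the null distribution can be simulated exactly. The statistic and data below are placeholders, not the NM statistic or the local level model.

```python
import numpy as np

def monte_carlo_pvalue(t_obs, simulate_stat_under_h0, B=999, seed=0):
    """Exact Monte Carlo p-value: p = (1 + #{T_b >= t_obs}) / (B + 1)."""
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat_under_h0(rng) for _ in range(B)])
    return (1 + np.sum(sims >= t_obs)) / (B + 1)

# placeholder example: statistic = sample variance of n iid N(0,1) observations
n = 50
def sim_stat(rng):
    return rng.normal(0, 1, n).var(ddof=1)

observed_series = np.random.default_rng(123).normal(0, 1.3, n)   # "data"
p = monte_carlo_pvalue(observed_series.var(ddof=1), sim_stat)
print("Monte Carlo p-value:", p)
```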
APA, Harvard, Vancouver, ISO, and other styles
23

Akiri, Tarek. "Test des Flash-ADCs, optimisation de la conception du détecteur et développement d'un nouveau concept de reconstruction spatiale dans l'expérience d'oscillation de neutrinos Double Chooz." Paris 7, 2010. http://www.theses.fr/2010PA077138.

Full text
Abstract:
Double Chooz (DC) is a reactor neutrino oscillation experiment whose purpose is the measurement of the last unknown mixing angle θ13. It inherits from the past CHOOZ experiment, which was limited by statistical and systematic errors at a similar level of about 2.8%. To lower the statistical error, the DC detector target mass has been increased and a longer exposure is foreseen, while the lowering of the systematic error is ensured by the use of two identical detectors. One will be located in the vicinity of the reactor cores to monitor the flux and spectrum of the emitted electron antineutrinos (ν̄e), whereas the other will be located where the effect of the oscillation is expected to be maximal. These are the so-called 'near' and 'far' detectors, respectively. The expected errors are 0.5% (stat.) and 0.6% (syst.) for a measurement down to sin²(2θ13) = 0.05 (θ13 = 6.5°) at three standard deviations after three years of data taking. The far detector is expected to start in November 2010, while the near detector will be operational in mid-2012. This thesis first presents hardware work consisting of testing the Flash-ADCs that form the core of the main acquisition system of the experiment. Subsequently, it presents analyses performed on Monte Carlo simulations towards the optimization of the detector design. This work was composed of analyses to choose detector components with the appropriate natural radioactivity contamination, analyses for the best achievable energy resolution, and analyses for the most stable and robust way of triggering. The work on the optimization of the detector, together with the acquired knowledge of the Flash-ADCs, led us to envisage a new spatial reconstruction based on the photon time of flight. All these contributions to the experiment are described in detail throughout this manuscript.
APA, Harvard, Vancouver, ISO, and other styles
24

Yang, Qing. "A computational fluid dynamic approach and Monte Carlo simulation of phantom mixing techniques for quality control testing of gamma cameras." Thesis, University of Canterbury. Physics and Astronomy, 2013. http://hdl.handle.net/10092/8742.

Full text
Abstract:
In order to reduce unnecessary radiation exposure for clinical personnel, the optimization of procedures in the quality control testing of gamma cameras was investigated. A significant component of the radiation dose incurred in performing the quality control testing is the handling of radioactive phantoms, especially mixing them to obtain a uniform activity concentration. Improving the phantom mixing techniques therefore appeared to be a means of reducing the radiation dose to personnel. However, this is difficult to study without a continuous dynamic tomographic acquisition system to observe mixing of the phantom. In the first part of this study a computational fluid dynamics model was investigated to simulate the mixing procedure. Mixing techniques of shaking and spinning were simulated using the computational fluid dynamics tool FLUENT. In the second part of this study a Siemens E.Cam gamma camera was simulated using the Monte Carlo software SIMIND. A series of validation experiments demonstrated the reliability of the Monte Carlo simulation. In the third part of this study the simulated mixing data from FLUENT were used as the source distribution in SIMIND to simulate a tomographic acquisition of the phantom. The planar data from the simulation were reconstructed using filtered back projection to produce a tomographic data set for the activity distribution in the phantom. This completed the simulation routine for phantom mixing and verified the proof of concept that the phantom mixing problem can be studied using a combination of computational fluid dynamics and nuclear medicine radiation transport simulations.
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Chwen-Huan. "Prediction of the residual strength of liquefied soils /." Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/10138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Bouquerel, Elian J. A. "Atomic beam merging and suppression of Alkali Contaminants in multi body high power targets : design and test of target and ion source prototypes at ISOLDE." Paris 11, 2009. http://www.theses.fr/2009PA112253.

Full text
Abstract:
Two key development issues for the forthcoming generation of ISOL target and ion-source units are assessed and demonstrated in this thesis. The purity of short-lived or rare radioisotopes suffers from isobaric contaminants, notably alkalis, which are highly volatile and easily ionized elements. Therefore, relying on their chemical nature, temperature-controlled transfer lines were equipped with a quartz tube aimed at trapping these unwanted elements before they reached the ion source. The successful application yields high alkali-suppression factors for several elements (e.g. 80,82mRb, 126,142Cs, 8Li, 46K, 25Na, 114In, 77Ga, 95,96Sr) for quartz temperatures between 300°C and 1100°C. The enthalpies of adsorption on quartz were measured for rubidium and caesium. For a proton beam power of the order of 100 kW (EURISOL-DS), multi-body target units connected to a single ion source are proposed. The so-called "Bi-Valve" target prototype aims to benchmark the engineering tools required to simulate effusion-related decay losses and to validate the multi-body target concept. Four isotopes were investigated online: 34,35Ar and 18,19Ne. The efficiency of the double-line merging was found to be in the range of 75 to 95%. The diffusion (analytical) and effusion (Monte Carlo) code RIBO provided the profile of the effusion distribution of the isotopes within the Bi-Valve unit for the different operation modes. A mathematical expression for the probability, p(t), that an isotope diffuses and effuses through the system is proposed.
APA, Harvard, Vancouver, ISO, and other styles
27

Alhadabi, Amal Mohammed. "Automated Growth Mixture Model Fitting and Classes Heterogeneity Deduction: Monte Carlo Simulation Study." Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1615986232296185.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Wolf, Katja. "Vergleich von Schätz- und Testverfahren unter alternativen Spezifikationen linearer Panelmodelle /." Lohmar ; Köln : Eul, 2005. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=013220938&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Thompson, Jane. "A Monte Carlo comparison of tests for the number of factors under alternative factor models /." For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2004. http://uclibs.org/PID/11984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Ly, Boucar. "Simulations Monte Carlo et tests de score sur les matrices nulles : approche par inférence exacte." Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/37854.

Full text
Abstract:
This document proposes tools for simulating null matrices based on the conditional distribution of a presence-absence matrix given its sufficient statistics. These tools are based on logistic regression and, moreover, they take into account the heterogeneity of the sites as well as the interaction that can exist between the variables defining this heterogeneity. In this work, we treat the case where the variables characterizing site heterogeneity are binary and number at most two. Two tools were thus developed: an algorithm based on logistic regression with interaction between the two site variables, and one without interaction between the site variables. From a simulation study on 10,000 presence-absence matrices, we were able not only to describe the properties of the implemented algorithms, but also to compare them with other null-matrix simulation algorithms. These comparisons showed that score tests with the logistic-regression-based algorithms, with or without interaction between the site variables, give acceptable results regardless of the impact of the site variables. On the other hand, the 'fixed-fixed' algorithm becomes vulnerable to type I errors when the site variables have alternating effects. With the algorithm based on the independence model, the results obtained are not reliable because the test is very vulnerable to type I errors. For the Peres-Neto algorithm, the score test is very conservative, but it improves with alternating-effect site variables. Finally, these different algorithms were used to simulate null matrices from a real dataset, which allowed us to compare the structure of the matrices simulated by the different algorithms with that of the observed matrix.
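For context, here is a small sketch of the 'fixed-fixed' comparison algorithm mentioned above, which preserves both row and column sums of a presence-absence matrix through random 2×2 checkerboard swaps; the logistic-regression-based algorithms developed in the thesis are more involved and are not reproduced here, and the example matrix is invented.

```python
import numpy as np

def fixed_fixed_null(matrix, n_swaps=10_000, seed=0):
    """Generate one null matrix with the same row and column sums
    as `matrix` by random 2x2 checkerboard swaps (swap algorithm)."""
    rng = np.random.default_rng(seed)
    m = np.array(matrix, dtype=int)
    n_rows, n_cols = m.shape
    for _ in range(n_swaps):
        i1, i2 = rng.choice(n_rows, 2, replace=False)
        j1, j2 = rng.choice(n_cols, 2, replace=False)
        sub = m[np.ix_([i1, i2], [j1, j2])]
        # swap only checkerboard configurations [[1,0],[0,1]] or [[0,1],[1,0]];
        # row and column margins stay fixed
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            m[np.ix_([i1, i2], [j1, j2])] = 1 - sub
    return m

observed = np.array([[1, 1, 0, 0],
                     [1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [0, 0, 1, 1]])
null = fixed_fixed_null(observed)
print(null, null.sum(axis=0), null.sum(axis=1), sep="\n")
```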
APA, Harvard, Vancouver, ISO, and other styles
31

Engström, Henrik, and Jens Raine. "Finite Element based Parametric Studies of a Truck Cab subjected to the Swedish Pendulum Test." Thesis, Linköping University, Department of Management and Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8318.

Full text
Abstract:

Scania has a policy to attain a high crashworthiness standard and their trucks have to conform to Swedish cab safety standards. The main objective of this thesis is to clarify which parameter variations, present during the second part of the Swedish cab crashworthiness test on a Scania R-series cab, that have significance on the intrusion response. An LS-DYNA FE-model of the test case is analysed where parameter variations are introduced through the use of the probabilistic analysis tool LS-OPT.

Examples of analysed variations are the sheet thickness and material variations such as the stress-strain curves of the structural components, but variations in the test setup, such as the pendulum velocity and the angle of approach at impact, are also taken into account. The effect of including the component forming in the analysis is investigated, where the variations of the material parameters are implemented prior to forming. An additional objective is to analyse the influence of simulation- and model-dependent variations and weigh their respective effects on intrusion against the above-stated physical variations.

A submodel is created due to the necessity to speed up the simulations since the numerous parameter variations yield a large number of different designs, resulting in multiple analyses.

Important structural component sensitivities are taken from the results and should be used as a pointer to where to focus attention when trying to increase the robustness of the cab. Also, the results show that the placement of the pendulum in the y direction (sideways, seen from the driver's perspective) is the most significant physical parameter variation during the Swedish pendulum test. It is concluded that, to achieve a fair comparison of structural performance from repeated crash testing, this pendulum variation must be kept to a minimum.

Simulation- and model-dependent parameters were in general shown to have large effects on the intrusion. It is concluded that further investigations of individual simulation- or model-dependent parameters should be performed to establish which description to use.

Mapping material effects from the forming simulation into the crash model gave a slightly stiffer response compared to the mean pre-stretch approximations currently used by Scania. This is, however, still a significant result considering that Scania's approximations also included bake-hardening effects from the painting process.

APA, Harvard, Vancouver, ISO, and other styles
32

von, Eye Alexander, Patrick Mair, and Michael Schauerhuber. "Significance Tests for the Measure of Raw Agreement." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2006. http://epub.wu.ac.at/1336/1/document.pdf.

Full text
Abstract:
Significance tests for the measure of raw agreement are proposed. First, it is shown that the measure of raw agreement can be expressed as a proportionate reduction-in-error measure, sharing this characteristic with Cohen's Kappa and Brennan and Prediger's Kappa_n. Second, it is shown that the coefficient of raw agreement is linearly related to Brennan and Prediger's Kappa_n. Therefore, using the same base model for the estimation of expected cell frequencies as Brennan and Prediger's Kappa_n, one can devise significance tests for the measure of raw agreement. Two tests are proposed. The first uses Stouffer's Z, a probability pooler. The second test is the binomial test. A data example analyzes the agreement between two psychiatrists' diagnoses. The covariance structure of the agreement cells in a rater by rater table is described. Simulation studies show the performance and power functions of the test statistics. (author's abstract)
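As a small illustration of the second proposed test, the sketch below computes raw agreement from a rater-by-rater table and applies a binomial test to the number of agreement cells; following a Brennan-Prediger-style base model, chance agreement is taken here as one over the number of categories. The table is invented and the Stouffer's Z variant is not shown.

```python
import numpy as np
from scipy.stats import binomtest

# hypothetical 3-category agreement table (rows: rater A, columns: rater B)
table = np.array([[15,  3,  2],
                  [ 4, 20,  1],
                  [ 2,  2, 11]])

n_total = table.sum()
n_agree = np.trace(table)                 # diagonal = agreement cells
raw_agreement = n_agree / n_total

# base model in the spirit of Brennan & Prediger's kappa_n:
# chance agreement = 1 / number of categories
p_chance = 1 / table.shape[0]
result = binomtest(int(n_agree), int(n_total), p_chance, alternative="greater")

print(f"raw agreement = {raw_agreement:.3f}")
print(f"binomial test vs chance ({p_chance:.2f}): p = {result.pvalue:.3g}")
```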
Series: Research Report Series / Department of Statistics and Mathematics
APA, Harvard, Vancouver, ISO, and other styles
33

Araujo, Jose Iranildo da Silva. "Performance of unit root tests with change in level cross-section dependence." Universidade Federal do Ceará, 2013. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=9872.

Full text
Abstract:
Unit root tests have been widely used to validate or reject the hypotheses of economic models. For this reason, many authors have created different versions of this kind of test in order to generate statistics that are more precise in identifying the presence of a unit root. Some authors have increased the power of these statistics by using panel data. However, the use of panel data introduces the possibility of dependence between cross-sections, which was initially handled through the assumption of cross-section independence. Only the second-generation tests allow for dependence between cross-sections. Nevertheless, there is no test in the literature that allows the degree of dependence between cross-sections to change over time. Thus, this paper uses Monte Carlo experiments to analyse the small-sample properties of some statistics used to identify the presence of a unit root. The size of these statistics is found to be severely distorted when the level of dependence between cross-sections changes.
Teste de raiz unitária tem sido muito importante no sentido de validar ou rejeitar as hipóteses dos modelos econômicos. Devido essa importância, diversos autores têm criado diferentes versões desse teste, a fim de gerar estatísticas que sejam mais precisas em identificar a presença de raiz unitária. Usando dados em painel, alguns autores conseguiram aumentar o poder dessas estatísticas. No entanto, o uso de dados em painel traz a possibilidade de dependência cross-section nos dados, fato esse inicialmente tratado pela hipótese de independência cross-section. Somente nos testes chamados de segunda geração é que se trata dependência cross-section. Entretanto, não há na literatura nenhum teste que permita mudanças nesse nível de dependência ao longo do tempo. Com isso, esse trabalho pretende avaliar, por meio de um experimento de Monte Carlo, as propriedades de pequenas amostras de algumas estatísticas usadas para identificar a presença de raiz unitária. Percebe-se que o tamanho dessas estatísticas sofre uma grande distorção para as situações de mudança no nível de dependência cross-section.
APA, Harvard, Vancouver, ISO, and other styles
34

Liu, Hangcheng. "Comparing Welch's ANOVA, a Kruskal-Wallis test and traditional ANOVA in case of Heterogeneity of Variance." VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/3985.

Full text
Abstract:
Analysis of variance (ANOVA) is robust against violations of the normality assumption, but it may be inappropriate when the assumption of homogeneity of variance is violated. Welch's ANOVA and the Kruskal-Wallis test (a non-parametric method) are applicable in this case. In this study we compare the three methods in terms of empirical type I error rate and power when heterogeneity of variance occurs, and determine which method is the most suitable for each case, including balanced/unbalanced designs, small/large sample sizes, and normal/non-normal distributions.
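A minimal Monte Carlo sketch of the kind of comparison described above might look as follows (Python with SciPy; the group sizes, standard deviations and replication count are assumed scenario values, and Welch's ANOVA is computed from the standard Welch formula rather than taken from the thesis):

# Minimal Monte Carlo sketch (assumed design, not the author's code): empirical
# type I error of classical ANOVA, Welch's ANOVA and Kruskal-Wallis when group
# variances differ but all group means are equal.
import numpy as np
from scipy import stats

def welch_anova_pvalue(groups):
    """Welch's one-way ANOVA p-value for a list of 1-D samples."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v
    mw = np.sum(w * m) / np.sum(w)
    a = np.sum(w * (m - mw) ** 2) / (k - 1)
    c = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    b = 1 + 2 * (k - 2) / (k ** 2 - 1) * c
    f_stat = a / b
    df1 = k - 1
    df2 = (k ** 2 - 1) / (3 * c)
    return stats.f.sf(f_stat, df1, df2)

rng = np.random.default_rng(0)
n_rep, alpha = 5000, 0.05
sizes = [10, 20, 30]          # unbalanced groups (assumed scenario)
sds = [1.0, 2.0, 4.0]         # heterogeneous standard deviations
rejections = {"anova": 0, "welch": 0, "kruskal": 0}

for _ in range(n_rep):
    groups = [rng.normal(0.0, s, size=n) for n, s in zip(sizes, sds)]
    if stats.f_oneway(*groups).pvalue < alpha:
        rejections["anova"] += 1
    if welch_anova_pvalue(groups) < alpha:
        rejections["welch"] += 1
    if stats.kruskal(*groups).pvalue < alpha:
        rejections["kruskal"] += 1

for name, count in rejections.items():
    print(f"{name:8s} empirical type I error: {count / n_rep:.3f}")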
APA, Harvard, Vancouver, ISO, and other styles
35

Zhang, Huaqiao. "Analyse ttH, H->WW(*) avec ATLAS au LHC et étude des électrons à très basse énergie dans le test faisceau combiné 2004." Aix-Marseille 2, 2008. http://theses.univ-amu.fr.lama.univ-amu.fr/2008AIX22105.pdf.

Full text
Abstract:
Utilisant des données Monte-Carlo ATLAS en simulation complète du canal ttH;H->WW(*), cette thèse présente une étude de la mesure du couplage de Yukawa du quark top pour une luminosité intégrée de 30 fb(-1) dans la gamme de masses du Higgs allant de 120 à 200 GeV. Les effets du système de déclenchement, de l'empilement des événements, ainsi que de toutes les erreurs systématiques possibles ont été étudiés. Pour une masse du Higgs de 160 GeV, en incluant les erreurs systématiques, la significance obtenue du signal est à 2 sigma en combinant les états finaux à deux et trois leptons. Le rapport d'embranchement combiné sigma_ttH*Br(H->WW(*)) peut atteindre une précision de 47 %. Cette thèse comprend également une étude de la linéarité de la réponse à des électrons VLE avec des données de tests en faisceau combiné d'un secteur du détecteur ATLAS. Les résultats MC utilisant une graine multiple 5X5 permettent d'envisager une amélioration future de la linéarité de réponse à des électrons VLE
Using ATLAS Computing System Commissioning Monte Carlo (MC) full simulation data, this thesis studies the feasibility of measuring the top-quark Yukawa coupling with 30 fb(-1) of integrated luminosity in the ttH;H->WW(*) channel, within the Higgs mass range from 120 to 200 GeV. The trigger, pileup effects and all possible systematic uncertainties are studied. For a Higgs mass of 160 GeV, with detailed systematic uncertainties, the signal significance is shown to exceed 2 sigma by combining the two-lepton and three-lepton final states. The combined branching ratio sigma_ttH*BR(H->WW(*)) can reach an accuracy of 47%. This thesis also presents a study of the linearity of the VLE electron response from the 2004 ATLAS Combined Test Beam data, and an MC study shows a possible improvement by using 5*5 multiple-seed clustering
APA, Harvard, Vancouver, ISO, and other styles
36

Liu, Tianyi. "Power Comparison of Some Goodness-of-fit Tests." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2572.

Full text
Abstract:
There are several commonly used goodness-of-fit tests, such as the Kolmogorov-Smirnov test, the Cramer-von Mises test, and the Anderson-Darling test. In addition, a new goodness-of-fit test named the G test was proposed by Chen and Ye (2009). The purpose of this thesis is to compare the performance of some goodness-of-fit tests by comparing their power. A goodness-of-fit test is usually used to judge whether or not the underlying population distribution differs from a specific distribution. This research focuses on testing whether the underlying population distribution is an exponential distribution. SAS/IML is used to conduct the statistical simulation. Alternative distributions such as the triangle distribution and the V-shaped triangle distribution are used. By applying Monte Carlo simulation, it can be concluded that the performance of the Kolmogorov-Smirnov test is better than that of the G test in many cases, while the G test performs well in some cases.
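The thesis uses SAS/IML; purely as a hedged illustration of such a power study, the Python sketch below draws samples from a triangular alternative with the same mean as the exponential null and records the rejection rate of the Kolmogorov-Smirnov test (the G test itself is not reproduced here, and all settings are assumed):

# Hedged sketch of a goodness-of-fit power simulation: data come from a
# triangular alternative and are tested against a fully specified Exp(1) null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_rep, n, alpha = 2000, 50, 0.05

rejections = 0
for _ in range(n_rep):
    # Triangular alternative on [0, 2] with mode 1 (mean 1, same as Exp(1))
    x = rng.triangular(left=0.0, mode=1.0, right=2.0, size=n)
    # KS test against the exponential with known scale 1
    p = stats.kstest(x, "expon", args=(0, 1.0)).pvalue
    rejections += p < alpha

print(f"empirical power of the KS test: {rejections / n_rep:.3f}")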
APA, Harvard, Vancouver, ISO, and other styles
37

Costa, Paulo Renato Freitas. "Aproximações assimptóticas na análise da heterogeneidade negligenciada em modelos de duração." Master's thesis, Instituto Superior de Economia e Gestão, 2005. http://hdl.handle.net/10400.5/18849.

Full text
Abstract:
Mestrado em Econometria Aplicada e Previsão
Na análise econométrica de modelos de duração, é bastante comum depararmo-nos com a existência de heterogeneidade negligenciada. Este problema deve-se, em grande parte, ao facto de não ser possível observar todas as características individuais que afectam a duração de um evento. Neste trabalho, utilizando aproximações assimptóticas, vamos estudar o impacto da heterogeneidade em modelos de duração com parâmetro de escala. Como as aproximações são sensíveis ao efeito da heterogeneidade, irão constituir a base para a construção de um teste para a sua detecção. Uma expressão que caracterize o enviesamento causado pela heterogeneidade será derivada utilizando as aproximações, sendo essa expressão a base para a construção de um estimador GMM que corrige o enviesamento na condição de momentos do modelo que ignora a heterogeneidade. Utilizando duas distribuições paramétricas, Weibull e Log-logistic, e através de simulações de Monte Carlo, iremos analisar o comportamento do estimador GMM proposto.
In the econometric analysis of duration models, it is common to come across the problem of neglected heterogeneity. This problem is largely due to the fact that it is not possible to observe all the individual characteristics that affect the duration of an event. In this work, small-parameter approximations are used to study the impact of heterogeneity in duration models with a scale parameter. Since the approximations are sensitive to the effect of the heterogeneity, they form the basis for constructing a test for its detection. An expression that characterizes the bias caused by neglecting heterogeneity is derived using the approximations, and this expression is the basis for the construction of a GMM estimator that corrects the bias in the moment conditions of the model that ignores heterogeneity. Using two parametric distributions, Weibull and log-logistic, and through Monte Carlo simulation, we analyze the performance of the proposed GMM estimator.
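The following small simulation (an illustration only, not the dissertation's GMM estimator; the frailty variance and Weibull shape are assumed values) shows the bias that neglected multiplicative gamma heterogeneity induces in a Weibull duration model, which is the kind of bias the proposed estimator is designed to correct:

# Illustrative simulation: neglected multiplicative gamma heterogeneity biases
# the estimated Weibull shape parameter downwards, creating spurious negative
# duration dependence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, true_shape, theta = 5000, 1.5, 1.0   # theta = frailty variance (assumed)

# Gamma frailty with mean 1 and variance theta
v = rng.gamma(shape=1.0 / theta, scale=theta, size=n)
# Conditional on v, S(t|v) = exp(-v * t**true_shape), so t = (E/v)**(1/true_shape)
e = rng.exponential(size=n)
t = (e / v) ** (1.0 / true_shape)

# Fit a Weibull that ignores the heterogeneity
shape_hat, loc_hat, scale_hat = stats.weibull_min.fit(t, floc=0)
print(f"true shape = {true_shape}, estimated shape ignoring frailty = {shape_hat:.3f}")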
info:eu-repo/semantics/publishedVersion
APA, Harvard, Vancouver, ISO, and other styles
38

Baron, Standley-Réginald. "Cycle économique et modélisation de la structure à terme du spread CDS : implémentation de la simulation de Monte Carlo au modèle dynamique de Nelson-Siegel." Master's thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/27368.

Full text
Abstract:
In recent years, the Credit Default Swap (CDS) spread has become one of the most widely used instruments for measuring credit risk. Modelling the term structure curve of the CDS spread is therefore necessary and of key importance for quantitative credit risk management. We propose an extension of the dynamic Nelson-Siegel model, modelling the changes in the beta factors by an autoregressive process with heteroscedastic errors that accounts for the business cycle. Using the Monte Carlo technique with a Student copula, we simulate one- and four-quarter-ahead forecasts of the CDS spread at different maturities. Our model suggests that the degree of influence of the business cycle on the CDS spread depends on maturity: its impact on long-maturity spreads is more significant than on short-maturity spreads. Our AR-GARCH model performs better than the vector autoregressive (VAR) model and the random walk model at every forecast horizon analysed in this work.
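For reference, a minimal sketch of the Nelson-Siegel curve that underlies the dynamic model is given below (Python; the beta values and the decay parameter lam are illustrative, not estimates from the thesis):

# Standard Nelson-Siegel factor loadings: level, slope and curvature.
import numpy as np

def nelson_siegel(tau, beta1, beta2, beta3, lam):
    """Nelson-Siegel curve evaluated at maturities tau (in years)."""
    x = lam * tau
    slope_loading = (1 - np.exp(-x)) / x
    curvature_loading = slope_loading - np.exp(-x)
    return beta1 + beta2 * slope_loading + beta3 * curvature_loading

maturities = np.array([0.5, 1, 2, 3, 5, 7, 10])   # years
spreads = nelson_siegel(maturities, beta1=120.0, beta2=-40.0, beta3=30.0, lam=0.6)
print(np.round(spreads, 1))   # fitted CDS spread curve in basis points (illustrative)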
APA, Harvard, Vancouver, ISO, and other styles
39

Martiniani, Stefano. "On the complexity of energy landscapes : algorithms and a direct test of the Edwards conjecture." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/266695.

Full text
Abstract:
When the states of a system can be described by the extrema of a high-dimensional function, the characterisation of its complexity, i.e. the enumeration of the accessible stable states, can be reduced to a sampling problem. In this thesis a robust numerical protocol is established, capable of producing numerical estimates of the total number of stable states for a broad class of systems, and of computing the a-priori probability of observing any given state. The approach is demonstrated within the context of the computation of the configurational entropy of two and three-dimensional jammed packings. By means of numerical simulation we show the extensivity of the granular entropy as proposed by S.F. Edwards for three-dimensional jammed soft-sphere packings and produce a direct test of the Edwards conjecture for the equivalent two-dimensional systems. We find that Edwards’ hypothesis of equiprobability of all jammed states holds only at the (un)jamming density, which is precisely the point of practical significance for many granular systems. Furthermore, two new recipes for the computation of high-dimensional volumes are presented that improve on the established approach by either providing more statistically robust estimates of the volume or by exploiting the trajectories of the paths of steepest descent. Both methods also produce as a natural by-product unprecedented details on the structures of high-dimensional basins of attraction. Finally, we present a novel Monte Carlo algorithm to tackle problems with fluctuating weight functions. The method is shown to improve accuracy in the computation of the ‘volume’ of high-dimensional ‘fluctuating’ basins of attraction and to be able to identify transition states along known reaction coordinates. We argue that the approach can be extended to the optimisation of the experimental conditions for observing certain phenomena, for which individual measurements are stochastic and provide little guidance.
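As a toy illustration of why naive sampling is hopeless for such problems (and not a description of the thesis's algorithms), the sketch below estimates the volume of a unit ball by brute-force hit-or-miss Monte Carlo and shows the hit rate collapsing as the dimension grows:

# Toy hit-or-miss Monte Carlo volume estimation in increasing dimension.
import numpy as np

rng = np.random.default_rng(3)
n_samples = 100_000
for d in (2, 5, 10, 20):
    # Fraction of the enclosing cube [-1, 1]^d occupied by the unit ball
    x = rng.uniform(-1.0, 1.0, size=(n_samples, d))
    hits = np.sum(np.sum(x * x, axis=1) <= 1.0)
    volume_estimate = hits / n_samples * 2.0 ** d
    print(f"d={d:2d}: hit rate={hits / n_samples:.5f}, ball volume estimate={volume_estimate:.4f}")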
APA, Harvard, Vancouver, ISO, and other styles
40

Shi, Weiling. "An Alternative Goodness-of-fit Test for Normality with Unknown Parameters." FIU Digital Commons, 2014. http://digitalcommons.fiu.edu/etd/1623.

Full text
Abstract:
Goodness-of-fit tests have been studied by many researchers. Among them, an alternative statistical test for uniformity was proposed by Chen and Ye (2009). The test was used by Xiong (2010) to test normality for the case in which both the location parameter and the scale parameter of the normal distribution are known. The purpose of the present thesis is to extend the result to the case in which the parameters are unknown. A table of critical values of the test statistic is obtained using Monte Carlo simulation. The performance of the proposed test is compared with the Shapiro-Wilk test and the Kolmogorov-Smirnov test. Monte Carlo simulation results show that the proposed test performs better than the Kolmogorov-Smirnov test in many cases. The Shapiro-Wilk test is still the most powerful test, although in some cases the test proposed in the present research performs better.
APA, Harvard, Vancouver, ISO, and other styles
41

Ben, Hdech Yassine. "Contrôle de qualité dosimétrique des systèmes de planification des traitements par radiothérapie externe à l’aide d’objets-tests numériques calculés par simulations Monte-Carlo PENELOPE." Nantes, 2011. http://archive.bu.univ-nantes.fr/pollux/show.action?id=4c446483-cc21-4cf9-a704-4500df2828ab.

Full text
Abstract:
En raison des besoins de précision et de l’importance des risques associés, les traitements anticancéreux par radiothérapie externe sont simulés sur des Systèmes de Planification des Traitements ou TPS avant d’être réalisés, afin de s’assurer que la prescription médicale est respectée tant du point de vue de l’irradiation des volumes cibles que de la protection des tissus sains. Ainsi les TPS calculent des distributions de dose prévisionnelles dans le patient et les temps de traitement par faisceau qui seront nécessaires pour délivrer la dose prescrite. Le TPS occupe une position clé dans le processus décisionnel des traitements par radiothérapie. Il est donc impératif qu’il fasse l’objet d’un contrôle approfondi de ses performances (contrôle de qualité ou CQ) et notamment de ses capacités à calculer avec précision les distributions de dose dans les patients pour l’ensemble des situations cliniques qu’il est possible de rencontrer. Les méthodes « traditionnelles » recommandées pour le CQ dosimétrique des algorithmes de calcul de dose implémentés dans les TPS s’appuient sur des comparaisons entre des distributions de dose calculées par le TPS et des distributions de dose mesurées sous l’appareil de traitement dans des Objets-Tests Physiques (OTP). Dans ce travail de thèse nous proposons de substituer aux mesures de dose de référence réalisées dans des OTP des calculs de distribution de dose dans des Objets-Tests Numériques, calculs réalisés grâce au code Monte-Carlo PENELOPE. L’avantage de cette méthode est de pouvoir d’une part, simuler des conditions proches de la clinique, conditions très souvent inaccessibles par la mesure du fait de leur complexité, et d’autre part, de pouvoir automatiser le processus de CQ du fait de la nature d’emblée numérique des données de référence. Enfin une telle méthode permet de réaliser un CQ du TPS extrêmement complet sans monopoliser les appareils de traitement prioritairement utilisés pour traiter les patients. Cette nouvelle méthode de CQ a été testée avec succès sur le TPS Eclipse de la société Varian Medical Systems
To ensure the required accuracy and prevent misadministration, cancer treatments by external radiation therapy are simulated on a Treatment Planning System or TPS before radiation delivery, in order to ensure that the prescription is achieved both in terms of target volume coverage and healthy tissue protection. The TPS calculates the patient dose distribution and the treatment time per beam required to deliver the prescribed dose. The TPS is a key system in the decision process of treatment by radiation therapy. It is therefore essential that the TPS be subject to a thorough check of its performance (quality control or QC) and in particular of its ability to accurately compute dose distributions for patients in all clinical situations that may be met. The "traditional" methods recommended for carrying out dosimetric QC of the algorithms implemented in the TPS are based on comparisons between dose distributions calculated with the TPS and doses measured in physical test objects (PTO) using the treatment machine. In this thesis we propose to substitute for the reference dosimetric measurements performed in PTO benchmark dose calculations in Digital Test Objects using the PENELOPE Monte Carlo code. This method has three advantages: (i) it allows simulation of situations close to the clinic that are often too complex to be experimentally feasible; (ii) owing to the digital form of the reference data, the QC process may be automated; (iii) it allows a comprehensive TPS QC without hindering the use of equipment devoted primarily to patient treatments. This new QC method has been tested successfully on the Eclipse TPS from the Varian Medical Systems company
APA, Harvard, Vancouver, ISO, and other styles
42

Cole, Gary. "A Monte Carlo study of the effects of four factors on the effectiveness of the LZ and ECIZ4 appropriateness indices." Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/6538.

Full text
Abstract:
While a test score may be valid for a group, there may sometimes be reason to suspect its validity for an individual. Unusual examinee response patterns may indicate that the test is invalid for the individual, and quantitative measures called appropriateness indices have been developed to detect these unusual patterns. For a number of reasons, Lz and ECIZ4 have so far proven to be two of the most useful of these indices. There were three purposes for this study. The first purpose was to investigate the effects of four variables on the cutoff values of the indices: the range of the distribution of the b parameter (Diff), the level of the a parameter (Disc), IRT model (Model), and sample size used to estimate item parameters (Sampsiz). The second purpose was to investigate the effects of these same variables on the detection rates for response vectors that were made spuriously high (i.e., high aberrance) and for response vectors that were made spuriously low (i.e., low aberrance). The third purpose was to determine the extent to which detection rates obtained by using cutoff values from the standard normal distribution were similar to those obtained by using cutoff values obtained by simulating non-aberrant response vectors. Two levels were set for each of the four variables. For Diff, a broad and a narrow range of the b parameter were used. For Disc, a high and a low level of the a parameter of the test items were used. For Model, the 2PL and 3PL models were used. For Sampsiz, sample sizes of 1000 and 2500 were used to estimate item parameters. For each of the 16 combinations of these variables, non-aberrant as well as aberrant response vectors were simulated for a 60-item test. For the aberrant response vectors, both high and low aberrance were created by modifying ten of the test items. Detection rates were obtained at the .01, .05, and .10 false-positive rates using cutoff values based on the distribution of the non-aberrant response vectors and using cutoff values based on a standard normal distribution. The simulation of each combination of conditions was replicated 90 times. (Abstract shortened by UMI.)
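A minimal sketch of a standardized log-likelihood person-fit index of the lz type is shown below (Python; the item parameters, the 1.7 scaling constant and the aberrant response pattern are assumed for illustration and are not the study's simulated values):

# Minimal sketch of the lz person-fit index under an assumed 3PL model.
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL item response probabilities for a single ability value theta."""
    return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

def lz_index(u, theta, a, b, c):
    """lz = (l0 - E[l0]) / sqrt(Var[l0]) for a 0/1 response vector u."""
    p = p_3pl(theta, a, b, c)
    q = 1 - p
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(q))
    e_l0 = np.sum(p * np.log(p) + q * np.log(q))
    v_l0 = np.sum(p * q * np.log(p / q) ** 2)
    return (l0 - e_l0) / np.sqrt(v_l0)

# Hypothetical 10-item test with assumed parameters and a suspicious pattern
a = np.full(10, 1.0)
b = np.linspace(-2, 2, 10)
c = np.full(10, 0.15)
u = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])   # easy items missed, hard items right
print(f"lz = {lz_index(u, theta=0.0, a=a, b=b, c=c):.3f}")   # large negative values suggest aberrance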
APA, Harvard, Vancouver, ISO, and other styles
43

Mâsse, Louise C. "A presentation and comparison of some new statistical techniques in the analysis of polytomous differential item functioning: A Monte Carlo investigation." Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/9904.

Full text
Abstract:
There is a need to develop and investigate methods which can assess the Item Response Differences (IRD) found in all the options of an item. In this study, such an investigation was referred to as Polytomous Differential Item Functioning (PDIF). The purpose of this study was to present and investigate the performance of four new approaches to the assessment of PDIF. The four approaches are a MANOVA (MCO) and a MANCOVA (MCA) approach applied to categorical dependent variables, a Polytomous Logistic Regression (PLR) approach, and an ANOVA analysis based on the item responses quantified by Dual Scaling (DS). In this study, the effectiveness of these approaches (MCA, MCO, PLR, and DS) as well as the Log-Linear (LOG) approach of Mellenbergh (1982) was assessed under various conditions of test length, sample size, item difficulty, and the amount and location of PDIF. A two-parameter polytomous logistic regression model was used to generate the data. In this study, only uniform PDIF was introduced in the alternatives of the item. The type of PDIF simulated (i.e., uniform) did not allow for a direct comparison of the nonuniform test of hypothesis between the Logistic (LOG and PLR) approaches and the MAN(C)OVA (MCA and MCO) approaches, because the Logistic approaches test for a difference in logits while the MAN(C)OVA approaches test for a difference in proportions. It was shown in this study that varying the probability of choosing the alternatives resulted in uniform logit differences which translated not only into uniform differences in proportions but also into nonuniform differences in proportions. These differences affected the interpretation of the PDIF results, because the test of nonuniform PDIF for the Logistic procedures corresponded to a valid test of the null hypothesis, while the MAN(C)OVA results for nonuniform PDIF had to be adjusted in order to yield a test which approximated a true test of the null hypothesis. The results of this study lend some optimism to the employment of the MCA and PLR approaches. (Abstract shortened by UMI.)
APA, Harvard, Vancouver, ISO, and other styles
44

St, Aubyn Miguel Pedro Brito. "Evaluating tests for convergence of economic series using Monte Carlo methods with an application to real GDP's per head." Thesis, London Business School (University of London), 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338427.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Hatzinger, Reinhold, and Walter Katzenbeisser. "A Combination of Nonparametric Tests for Trend in Location." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/1298/1/document.pdf.

Full text
Abstract:
A combination of some well known nonparametric tests to detect trend in location is considered. Simulation results show that the power of this combination is remarkably increased. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
APA, Harvard, Vancouver, ISO, and other styles
46

LIMA, Leonardo Bomfim de. "Um teste de especificação correta em modelos de regressão beta." Universidade Federal de Pernambuco, 2007. https://repositorio.ufpe.br/handle/123456789/6311.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
A useful tool for modelling data in which the response variable takes values continuously in the interval (0,1) is the beta regression model proposed by Ferrari & Cribari-Neto (2004). The model is based on the assumption that the response variable is beta-distributed, using a parameterization of the beta law indexed by the mean and by a precision parameter. In this model, the response variable is also assumed to be related to other variables through a regression structure. The goal of this dissertation is to propose a test of correct specification for beta regression models, based on the RESET test proposed by Ramsey (1969). The numerical evaluation revealed that the proposed test is useful for detecting the use of an incorrect link function as well as nonlinearities in the linear predictor. We show that the proposed test, carried out via the score test, displayed in general the best results in terms of size and power. Additionally, we show that the best performance is achieved when a power of the fitted linear predictor or a power of the estimated mean response is used as the test variable. The proposed test also performs well for small sample sizes, despite being based on asymptotic approximations
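As a hedged illustration of the RESET principle on which the proposed test builds, the sketch below applies a RESET-type F test to an ordinary linear model (this is an analogue only; the dissertation's test is a score test within the beta regression model, and all data and names here are synthetic):

# RESET-type specification test for a linear model: powers of the fitted
# linear predictor are added as test variables and tested jointly with an F test.
import numpy as np
from scipy import stats

def reset_test(y, X, powers=(2, 3)):
    """RESET-type F test; X is the n x k design matrix including a constant."""
    n, k = X.shape
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    yhat = X @ beta
    rss0 = np.sum((y - yhat) ** 2)
    Z = np.column_stack([X] + [yhat ** p for p in powers])
    gamma, _, _, _ = np.linalg.lstsq(Z, y, rcond=None)
    rss1 = np.sum((y - Z @ gamma) ** 2)
    q = len(powers)
    f_stat = ((rss0 - rss1) / q) / (rss1 / (n - k - q))
    return f_stat, stats.f.sf(f_stat, q, n - k - q)

# Small synthetic example: a quadratic relationship fitted with a linear model
rng = np.random.default_rng(5)
x = rng.uniform(0, 2, size=200)
y = 1.0 + 0.5 * x + 0.8 * x ** 2 + rng.normal(0, 0.3, size=200)
X = np.column_stack([np.ones_like(x), x])
print(reset_test(y, X))   # a small p-value flags the neglected nonlinearity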
APA, Harvard, Vancouver, ISO, and other styles
47

Boulet, John R. "A Monte Carlo comparison of the Type I error rates of the likelihood ratio chi-square test statistic and Hotelling's two-sample T2 on testing the differences between group means." Thesis, University of Ottawa (Canada), 1990. http://hdl.handle.net/10393/5708.

Full text
Abstract:
The present paper demonstrates how Structural Equation Modelling (SEM) can be used to formulate a test of the difference in means between groups on a number of dependent variables. A Monte Carlo study compared the Type I error rates of the Likelihood Ratio (LR) chi-square ($\chi^2$) statistic (SEM test criterion) and Hotelling's two-sample $T^2$ statistic (MANOVA test criterion) in detecting differences in means between two independent samples. Seventy-two conditions pertaining to average sample size ($(n_1 + n_2)/2$), extent of inequality of sample sizes ($n_1$:$n_2$), number of variables (p), and degree of inequality of variance-covariance matrices ($\Sigma_1$:$\Sigma_2$) were modelled. Empirical sampling distributions of the LR $\chi^2$ statistic and Hotelling's $T^2$ statistic consisted of 2000 samples drawn from multivariate normal parent populations. The actual proportions of values that exceeded the nominal levels are presented. The results indicated that, in terms of maintaining Type I error rates close to the nominal levels, the LR $\chi^2$ statistic and Hotelling's $T^2$ statistic were comparable when $\Sigma_1 = \Sigma_2$ and $(n_1 + n_2)/2$:p was relatively large (i.e., 30:1). However, when $\Sigma_1 = \Sigma_2$ and $(n_1 + n_2)/2$:p was small (i.e., 10:1), Hotelling's $T^2$ statistic was preferred. When $\Sigma_1 \neq \Sigma_2$ the LR $\chi^2$ statistic provided more appropriate Type I error rates under all of the simulated conditions. The results are related to earlier findings, and implications for the appropriate use of the SEM method of testing for group mean differences are noted.
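A short sketch of the MANOVA test criterion used in the comparison is given below (Python; the data are synthetic and the function is a generic implementation of Hotelling's two-sample T^2, not the study's simulation code):

# Generic two-sample Hotelling T^2 statistic and its F reference distribution.
import numpy as np
from scipy import stats

def hotelling_t2(x1, x2):
    """Two-sample Hotelling T^2 test; rows are observations, columns variables."""
    n1, p = x1.shape
    n2, _ = x2.shape
    d = x1.mean(axis=0) - x2.mean(axis=0)
    s_pooled = ((n1 - 1) * np.cov(x1, rowvar=False) +
                (n2 - 1) * np.cov(x2, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(s_pooled, d)
    f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    p_value = stats.f.sf(f_stat, p, n1 + n2 - p - 1)
    return t2, f_stat, p_value

rng = np.random.default_rng(4)
x1 = rng.multivariate_normal(mean=[0, 0, 0], cov=np.eye(3), size=40)
x2 = rng.multivariate_normal(mean=[0, 0, 0], cov=np.eye(3), size=40)
print(hotelling_t2(x1, x2))   # under the null, the p-value is uniform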
APA, Harvard, Vancouver, ISO, and other styles
48

Hadley, Patrick. "The performance of the Mantel-Haenszel and logistic regression dif detection procedures across sample size and effect size: A Monte Carlo study." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/10019.

Full text
Abstract:
In recent years, public attention has become focused on the issue of test and item bias in standardized tests. Since the 1980s, the Mantel-Haenszel (Holland & Thayer, 1986) and Logistic Regression procedures (Swaminathan & Rogers, 1990) have been developed to detect item bias, or differential item functioning (dif). In this study the effectiveness of the MH and LR procedures was compared under a variety of conditions, using simulated data. The ability of the MH and LR to detect dif was tested at sample sizes of 100/100, 200/200, 400/400, 600/600, and 800/800. The simulated test had 66 items, the first 33 items with item discrimination ("a") set at 0.80, the second 33 items with "a" set at 1.20. The pseudo-guessing parameter ("c") was 0.15 for all items. The item difficulty ("b") parameter ranged from -2.00 to 2.00 in increments of 0.125 for the first 33 items, and again for the second 33 items. Both the MH and LRU detected dif with a high degree of success whenever the sample size was large (600 or more), especially when the effect size, no matter how measured, was also large. The LRU outperformed the MH marginally under almost every condition of the study. However, the LRU also had a higher false-positive rate than the MH, a finding consistent with previous studies (Pang et al., 1994, Tian et al., 1994a, 1994b). Since the "a" and "b" parameters which underlie the computation of the three measures of effect size used in the study are not always determinable in data derived from real-world test administrations, it may be that $\Delta_{\mathrm{MH}}$ is the best available measure of effect size for real-world test items. (Abstract shortened by UMI.)
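For concreteness, the sketch below computes the Mantel-Haenszel common odds ratio, the ETS delta metric, and the MH chi-square for one item from a set of hypothetical 2x2 tables stratified by matching score (an assumed data layout, not the study's code):

# Mantel-Haenszel DIF statistics for one item from K score-level 2x2 tables.
import numpy as np

def mantel_haenszel_dif(tables):
    """tables: array of shape (K, 2, 2) with rows (reference, focal) and
    columns (correct, incorrect) at each matching-score level."""
    tables = np.asarray(tables, dtype=float)
    A, B = tables[:, 0, 0], tables[:, 0, 1]   # reference correct / incorrect
    C, D = tables[:, 1, 0], tables[:, 1, 1]   # focal correct / incorrect
    N = tables.sum(axis=(1, 2))
    # Common odds ratio and the ETS delta metric
    alpha_mh = np.sum(A * D / N) / np.sum(B * C / N)
    delta_mh = -2.35 * np.log(alpha_mh)
    # MH chi-square with continuity correction
    n_ref, n_foc = A + B, C + D
    m1, m0 = A + C, B + D
    e_a = n_ref * m1 / N
    var_a = n_ref * n_foc * m1 * m0 / (N ** 2 * (N - 1))
    chi2 = (abs(np.sum(A) - np.sum(e_a)) - 0.5) ** 2 / np.sum(var_a)
    return alpha_mh, delta_mh, chi2

# Three hypothetical score strata for a single item
tables = [[[30, 10], [20, 15]],
          [[25, 5], [18, 10]],
          [[20, 2], [15, 5]]]
print(mantel_haenszel_dif(tables))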
APA, Harvard, Vancouver, ISO, and other styles
49

Rogers, Catherine Jane. "Power comparisons of four post-MANOVA tests under variance-covariance heterogeneity and non-normality in the two group case." Diss., Virginia Tech, 1994. http://hdl.handle.net/10919/40171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Kwon, Hyukje. "A Monte Carlo Study of Missing Data Treatments for an Incomplete Level-2 Variable in Hierarchical Linear Models." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1303846627.

Full text
APA, Harvard, Vancouver, ISO, and other styles
