
Dissertations / Theses on the topic 'Testing hypothesis'

Consult the top 50 dissertations / theses for your research on the topic 'Testing hypothesis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Zhang, Zhongfa. "Multiple hypothesis testing for finite and infinite number of hypotheses." online version, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=case1121461130.

2

Chwialkowski, K. P. "Topics in kernel hypothesis testing." Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1519607/.

Abstract:
This thesis investigates some unaddressed problems in kernel nonparametric hypothesis testing. The contributions are grouped around three main themes.

Wild bootstrap for degenerate kernel tests. A wild bootstrap method for nonparametric hypothesis tests based on kernel distribution embeddings is proposed. This bootstrap method is used to construct provably consistent tests that apply to random processes. It applies to a large group of kernel tests based on V-statistics, which are degenerate under the null hypothesis and non-degenerate elsewhere. In experiments, the wild bootstrap gives strong performance on synthetic examples, on audio data, and in performance benchmarking for the Gibbs sampler.

A kernel test of goodness of fit. A nonparametric statistical test for goodness of fit is proposed: given a set of samples, the test determines how likely it is that these were generated from a target density function. The measure of goodness of fit is a divergence constructed via Stein's method using functions from a reproducing kernel Hilbert space. Construction of the test is based on the wild bootstrap method. We apply our test to quantifying convergence of approximate Markov chain Monte Carlo methods, to statistical model criticism, and to evaluating quality of fit versus model complexity in nonparametric density estimation.

A fast two-sample test based on analytic functions. A class of nonparametric two-sample tests with a cost linear in the sample size is proposed. Two tests are given, both based on an ensemble of distances between analytic functions representing each of the distributions. Experiments on artificial benchmarks and on challenging real-world testing problems demonstrate a good power/time tradeoff that is retained even in high-dimensional problems.

The main contributions to science are the following. We prove that the kernel tests based on the wild bootstrap method tightly control the type one error at the desired level and are consistent, i.e. the type two error drops to zero with an increasing number of samples. We construct a kernel goodness-of-fit test that requires knowledge of the density only up to a normalizing constant. We use this test to construct the first consistent test for convergence of Markov chains, and use it to quantify properties of approximate MCMC algorithms. Finally, we construct a linear-time two-sample test that uses a new, finite-dimensional feature representation of probability measures.
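The goodness-of-fit construction above (a Stein divergence built from RKHS functions, which needs the target density only up to its normalizing constant) lends itself to a short illustration. The sketch below computes a kernel Stein discrepancy V-statistic for a one-dimensional standard normal target; the Gaussian kernel, its bandwidth, and the example target are illustrative assumptions, not the thesis's exact construction, and in the thesis the null distribution of such statistics is calibrated with the wild bootstrap rather than read off directly.

```python
import numpy as np

def stein_kernel(x, score, h=1.0):
    """Stein kernel u_p(x_i, x_j) for a Gaussian RBF base kernel of bandwidth h."""
    d = x[:, None] - x[None, :]               # pairwise differences
    k = np.exp(-d**2 / (2 * h**2))            # base kernel k(x_i, x_j)
    kx = -d / h**2 * k                        # derivative of k in its first argument
    ky = d / h**2 * k                         # derivative of k in its second argument
    kxy = (1 / h**2 - d**2 / h**4) * k        # mixed second derivative
    s = score(x)                              # score d/dx log p(x) of the target
    return np.outer(s, s) * k + s[:, None] * ky + s[None, :] * kx + kxy

def ksd_vstat(x, score, h=1.0):
    """V-statistic estimate of the squared kernel Stein discrepancy."""
    return stein_kernel(x, score, h).mean()

rng = np.random.default_rng(0)
score_std_normal = lambda x: -x               # score of N(0, 1); the normalizer never enters
x_good = rng.normal(0.0, 1.0, 200)
x_bad = rng.normal(1.0, 1.0, 200)             # a shifted sample should score worse
print(ksd_vstat(x_good, score_std_normal), ksd_vstat(x_bad, score_std_normal))
```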
3

Varshney, Kush R. (Kush Raj). "Frugal hypothesis testing and classification." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/60182.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 157-175).
The design and analysis of decision rules using detection theory and statistical learning theory is important because decision making under uncertainty is pervasive. Three perspectives on limiting the complexity of decision rules are considered in this thesis: geometric regularization, dimensionality reduction, and quantization or clustering. Controlling complexity often reduces resource usage in decision making and improves generalization when learning decision rules from noisy samples. A new margin-based classifier with decision boundary surface area regularization and optimization via variational level set methods is developed. This novel classifier is termed the geometric level set (GLS) classifier. A method for joint dimensionality reduction and margin-based classification with optimization on the Stiefel manifold is developed. This dimensionality reduction approach is extended for information fusion in sensor networks. A new distortion is proposed for the quantization or clustering of prior probabilities appearing in the thresholds of likelihood ratio tests. This distortion is given the name mean Bayes risk error (MBRE). The quantization framework is extended to model human decision making and discrimination in segregated populations.
by Kush R. Varshney.
Ph.D.
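The quantization idea in this abstract (clustering the prior probabilities that enter likelihood ratio test thresholds, with mean Bayes risk error as the distortion) can be made concrete in a toy detection problem. The sketch below is an assumption-laden illustration, not the thesis's framework: two unit-variance Gaussian hypotheses, equal error costs, and a single quantized prior value standing in for a quantizer cell.

```python
import numpy as np
from scipy.stats import norm

MU = 2.0  # H0: N(0, 1) vs H1: N(MU, 1); equal error costs (illustrative assumptions)

def threshold(p0):
    """Bayes-optimal LRT threshold on the observation, as a function of P(H0) = p0."""
    return MU / 2 + np.log(p0 / (1 - p0)) / MU

def bayes_risk(p0_true, p0_used):
    """Error probability when the test is designed for prior p0_used
    while the true prior is p0_true."""
    tau = threshold(p0_used)
    return p0_true * norm.sf(tau) + (1 - p0_true) * norm.cdf(tau - MU)

p_true, p_quantized = 0.37, 0.5   # e.g. a coarse quantizer maps 0.37 to the cell centre 0.5
bayes_risk_error = bayes_risk(p_true, p_quantized) - bayes_risk(p_true, p_true)
# Averaging this excess risk over the distribution of the true prior gives the
# mean Bayes risk error used in the thesis as the quantizer distortion.
```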
4

Vilela, Lucas Pimentel. "Hypothesis testing in econometric models." Repositório Institucional do FGV, 2015. http://hdl.handle.net/10438/18249.

Abstract:
This thesis contains three chapters. The first chapter considers tests of the parameter of an endogenous variable in an instrumental variables regression model. The focus is on one-sided conditional t-tests. Theoretical and numerical work shows that the conditional 2SLS and Fuller t-tests perform well even when instruments are weakly correlated with the endogenous variable. When the population F-statistic is as small as two, the power is reasonably close to the power envelopes for similar and non-similar tests which are invariant to rotation transformations of the instruments. This finding is surprising considering the poor performance of two-sided conditional t-tests found in Andrews, Moreira, and Stock (2007). Those tests have poor power because the conditional null distributions of t-statistics are asymmetric when instruments are weak. Taking this asymmetry into account, we propose two-sided tests based on t-statistics. These novel tests are approximately unbiased and can perform as well as the conditional likelihood ratio (CLR) test.

The second and third chapters consider maxmin and minimax regret tests for broader hypothesis testing problems. In the second chapter, we present maxmin and minimax regret tests satisfying more general restrictions than the alpha-level constraint and the power control over all alternative hypotheses. More general restrictions enable us to eliminate trivial known tests and obtain tests with desirable properties, such as unbiasedness, local unbiasedness and similarity. We then prove that both tests always exist and that, under sufficient assumptions, they are Bayes tests with priors that are solutions of an optimization problem, the dual problem. In the last part of the second chapter, we consider testing problems that are invariant to some group of transformations. When the hypothesis testing problem is invariant, the Hunt-Stein Theorem shows that the search for maxmin and minimax regret tests can be restricted to invariant tests. We prove that the Hunt-Stein Theorem still holds under the general constraints proposed.

In the last chapter we develop a numerical method to implement the maxmin and minimax regret tests proposed in the second chapter. The parameter space is discretized in order to obtain testing problems with a finite number of restrictions. We prove that, as the discretization becomes finer, the maxmin and minimax regret tests satisfying the finite number of restrictions attain the same power against the alternative as the maxmin and minimax regret tests satisfying the general constraints. Hence, we can numerically implement the tests for a finite number of restrictions as an approximation to the tests satisfying the general constraints. The results in the second and third chapters extend and complement the maxmin and minimax regret literature interested in characterizing and implementing both tests.
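For readers unfamiliar with the setting of the first chapter, the sketch below sets up a weak-instrument IV regression and computes the 2SLS estimate, its t-statistic, and the first-stage F-statistic; the data-generating values are illustrative assumptions. The thesis's conditional t-tests replace the usual normal critical value with critical values computed from the conditional null distribution, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=(n, 2))                    # two instruments
pi = np.array([0.2, 0.1])                      # weak-ish first stage (illustrative)
u = rng.normal(size=n)
v = 0.5 * u + rng.normal(size=n)               # endogeneity: corr(u, v) != 0
x = z @ pi + v
y = 1.0 * x + u                                # true beta = 1

# 2SLS: project x on the instruments, then regress y on the projection
x_hat = z @ np.linalg.lstsq(z, x, rcond=None)[0]
b_2sls = (x_hat @ y) / (x_hat @ x)
resid = y - b_2sls * x
se = np.sqrt((resid @ resid) / (n - 1) / (x_hat @ x_hat))
t_stat = (b_2sls - 1.0) / se                   # t-statistic at the true value

# First-stage F, the usual gauge of instrument strength
e1 = x - x_hat
F = ((x @ x - e1 @ e1) / z.shape[1]) / ((e1 @ e1) / (n - z.shape[1]))
```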
5

Lapenta, Elia. "Three Essays in Hypothesis Testing." Thesis, Toulouse 1, 2020. http://www.theses.fr/2020TOU10053.

6

Donmez, Ayca. "Adaptive Estimation and Hypothesis Testing Methods." PhD thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611724/index.pdf.

Abstract:
For statistical estimation of population parameters, Fisher's maximum likelihood estimators (MLEs) are commonly used. They are consistent, unbiased and efficient, at any rate for large n. In most situations, however, MLEs are elusive because of computational difficulties. To alleviate these difficulties, Tiku's modified maximum likelihood estimators (MMLEs) are used. They are explicit functions of sample observations and easy to compute. They are asymptotically equivalent to MLEs and, for small n, are equally efficient. Moreover, MLEs and MMLEs are numerically very close to one another. For calculating MLEs and MMLEs, the functional form of the underlying distribution has to be known. For machine data processing, however, such is not the case. Instead, what is reasonable to assume for machine data processing is that the underlying distribution is a member of a broad class of distributions. Huber assumed that the underlying distribution is long-tailed symmetric and developed the so-called M-estimators. It is very desirable for an estimator to be robust and have a bounded influence function. M-estimators, however, implicitly censor certain sample observations, which most practitioners do not appreciate. Tiku and Surucu suggested a modification to Tiku's MMLEs. The new MMLEs are robust and have bounded influence functions. In fact, these new estimators are overall more efficient than M-estimators for long-tailed symmetric distributions. In this thesis, we have proposed a new modification to MMLEs. The resulting estimators are robust and have bounded influence functions. We have also shown that they can be used not only for long-tailed symmetric distributions but for skew distributions as well. We have used the proposed modification in the context of experimental design and linear regression. We have shown that the resulting estimators and the hypothesis testing procedures based on them are indeed superior to earlier such estimators and tests.
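As a point of contrast for the M-estimators discussed above, here is a minimal Huber location M-estimator computed by iteratively reweighted means; the tuning constant and the MAD-based scale are standard choices but are illustrative assumptions here, and the thesis's MMLEs (closed-form functions of the ordered observations) are not reproduced.

```python
import numpy as np

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted means.
    Observations far from the current estimate are down-weighted,
    which bounds the influence of outliers."""
    mu = np.median(x)
    scale = np.median(np.abs(x - mu)) / 0.6745                  # MAD estimate of scale
    for _ in range(max_iter):
        r = (x - mu) / scale
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))   # Huber weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

x = np.concatenate([np.random.default_rng(2).normal(0, 1, 95), [8, 9, 10, 11, 12]])
print(huber_location(x), x.mean())   # the M-estimate resists the outliers; the mean does not
```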
7

Allison, James Samuel. "Bootstrap-based hypothesis testing / J.S. Allison." Thesis, North-West University, 2008. http://hdl.handle.net/10394/3701.

Abstract:
One of the main objectives of this dissertation is the development of a new method of evaluating the performance of bootstrap-based tests. The evaluation method currently in use in the literature has some major shortcomings; for example, it does not allow one to determine the robustness of a bootstrap estimator of a critical value, because the evaluation and the estimation are based on the same data. This traditional method of evaluation often leads to overly optimistic type I error probabilities when bootstrap critical values are used. We show how this new, more robust, method can detect defects of bootstrap-estimated critical values which cannot be observed if one uses the current evaluation method. Based on the new evaluation method, some theoretical properties regarding the bootstrap critical value are derived when testing for the mean in a univariate population. These theoretical findings again highlight the importance of the two guidelines proposed by Hall and Wilson (1991) for bootstrap-based testing, namely that resampling must be done in a way that reflects the null hypothesis, and that bootstrap tests should be based on test statistics that are pivotal (or asymptotically pivotal). We also developed a new nonparametric bootstrap test for Spearman's rho and, based on the results obtained from a Monte Carlo study, we recommend that this new test be used when testing for Spearman's rho. A semiparametric test based on copulas was also developed as a useful benchmark tool for measuring the performance of the nonparametric test. Other research objectives of this dissertation include, among others, a brief overview of the nonparametric bootstrap and a general formulation of methods which can be used to apply the bootstrap correctly when conducting hypothesis testing.
Thesis (Ph.D. (Statistics))--North-West University, Potchefstroom Campus, 2009.
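The two Hall and Wilson (1991) guidelines cited above translate directly into code. The sketch below is a minimal one-sample bootstrap t-test written to those guidelines (resample under the null by recentring the data, and bootstrap a pivotal statistic); it is a generic textbook construction, not the dissertation's new evaluation method.

```python
import numpy as np

def bootstrap_t_test(x, mu0, B=2000, seed=0):
    """One-sample bootstrap test of H0: mean = mu0.
    Guideline 1: resample in a way that reflects the null (centre the data at mu0).
    Guideline 2: bootstrap an (asymptotically) pivotal statistic, the t-ratio."""
    rng = np.random.default_rng(seed)
    n = len(x)
    t_obs = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
    x0 = x - x.mean() + mu0                 # impose the null on the resampling population
    t_boot = np.empty(B)
    for b in range(B):
        xb = rng.choice(x0, size=n, replace=True)
        t_boot[b] = (xb.mean() - mu0) / (xb.std(ddof=1) / np.sqrt(n))
    p_value = np.mean(np.abs(t_boot) >= abs(t_obs))   # two-sided bootstrap p-value
    return t_obs, p_value

x = np.random.default_rng(3).exponential(1.0, 30)     # skewed sample, true mean 1
print(bootstrap_t_test(x, mu0=1.0))
```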
8

Lewsey, James Daniel. "Hypothesis testing in unbalanced experimental designs." Thesis, Glasgow Caledonian University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322213.

9

Vu, Hung Thi Hong. "Testing the individual effective dose hypothesis." Connect to this title online, 2009. http://etd.lib.clemson.edu/documents/1247508549/.

10

Sestok, Charles K. (Charles Kasimer). "Data selection in binary hypothesis testing." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/16613.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2004.
Includes bibliographical references (p. 119-123).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Traditionally, statistical signal processing algorithms are developed from probabilistic models for data. The design of the algorithms and their ultimate performance depend upon these assumed models. In certain situations, collecting or processing all available measurements may be inefficient or prohibitively costly. A potential technique to cope with such situations is data selection, where a subset of the measurements that can be collected and processed in a cost-effective manner is used as input to the signal processing algorithm. Careful evaluation of the selection procedure is important, since the probabilistic description of distinct data subsets can vary significantly. An algorithm designed for the probabilistic description of a poorly chosen data subset can lose much of the potential performance available to a well-chosen subset. This thesis considers algorithms for data selection combined with binary hypothesis testing. We develop models for data selection in several cases, considering both random and deterministic approaches. Our considerations are divided into two classes depending upon the amount of information available about the competing hypotheses. In the first class, the target signal is precisely known, and data selection is done deterministically. In the second class, the target signal belongs to a large class of random signals, selection is performed randomly, and semi-parametric detectors are developed.
by Charles K. Sestok, IV.
Ph.D.
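For the first problem class in this abstract (a precisely known target signal with deterministic selection), a compact illustration is possible: with i.i.d. Gaussian noise, the likelihood ratio statistic on a subset of measurements is a matched filter on that subset, so the deterministic rule that maximizes post-selection SNR keeps the coordinates where the known signal has the most energy. The sketch below assumes that setting; the sizes and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
N, k = 64, 8                        # N available measurements, budget of k
s = rng.normal(size=N)              # known target signal (illustrative)

# Deterministic selection: with y = s + w (H1) or y = w (H0), w ~ N(0, I),
# the LRT statistic on a subset S is sum_{i in S} s_i * y_i, and its SNR is
# proportional to sum_{i in S} s_i**2, so keep the k largest |s_i|.
idx = np.argsort(-np.abs(s))[:k]

def detect(y, threshold):
    """Matched filter restricted to the selected measurements."""
    return s[idx] @ y[idx] > threshold

tau = 0.5 * np.sum(s[idx] ** 2)     # midpoint threshold between the two hypotheses
print(detect(s + rng.normal(size=N), tau),   # signal present
      detect(rng.normal(size=N), tau))       # noise only
```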
11

Tarighati, Alla. "Decentralized Hypothesis Testing in Sensor Networks." Doctoral thesis, KTH, Signalbehandling, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-195173.

Abstract:
Wireless sensor networks (WSNs) play an important role in the future of Internet of Things (IoT) systems, in which an entire physical infrastructure will be coupled with communication and information technologies. Smart grids, smart homes, and intelligent transportation systems are examples of infrastructure that will be connected with sensors for intelligent monitoring and management. Thus, sensing, information gathering, and efficient processing at the sensors are essential. An important problem in wireless sensor networks is that of decentralized detection. In a decentralized detection network, spatially separated sensors make observations on the same phenomenon and send information about the state of the phenomenon towards a central processor. The central processor (or fusion center, FC) makes a decision about the state of the phenomenon, based on the aggregate received messages from the sensors. In the context of decentralized detection, the objective is often to make the best decision at the FC. Since this decision is made based on the received messages from the sensors, it is of interest to optimally design the decision rules at the remote sensors.

This dissertation deals mainly with the problem of designing decision rules at the remote sensors and at the FC, while the network is subject to some limitation on the communication between nodes (sensors and the FC). The contributions of this dissertation can be divided into three (overlapping) parts. First, we consider the case where the network is subject to communication rate constraints on the links connecting different nodes. Concretely, we propose an algorithm for the design of decision rules at the sensors and the FC in an arbitrary network in a person-by-person (PBP) methodology. We first introduce a network of two sensors, labeled the restricted model. We then prove that designing the sensors' decision rules in the PBP methodology in an arbitrary network is equivalent to designing the sensors' decision rules in the corresponding restricted model. We also propose an efficient algorithm for the design of the sensors' decision rules in the restricted model.

Second, we consider the case where remote sensors share a common multiple access channel (MAC) to send their messages towards the FC, and where the MAC is subject to a sum rate constraint. In this situation, the sensors compete for communication rate to send their messages. We find sufficient conditions under which allocating equal rate to the sensors, so-called rate balancing, is an optimal strategy. We study the structure of the optimal rate allocation in terms of the Chernoff information and the Bhattacharyya distance.

Third, we consider a decentralized detection network where not only are the links between nodes subject to some communication constraints, but the sensors are also subject to some energy constraints. In particular, we study the network under the assumption that the sensors are energy harvesting devices that acquire all the energy they need to transmit their messages from their surrounding environment. We formulate a decentralized detection problem with system costs, due to the random behavior of the energy available at the sensors, in terms of the Bhattacharyya distance.
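A toy version of the pipeline described above can clarify the setup: each sensor compresses its observation to one bit, and the fusion center combines the bits through a log-likelihood ratio. The sketch below assumes identical Gaussian observations and identical one-bit quantizers at every sensor; it illustrates the decentralized structure only, not the dissertation's person-by-person design algorithm or its rate-constrained settings.

```python
import numpy as np
from scipy.stats import norm

MU = 1.0                              # per sensor: H0 gives N(0, 1), H1 gives N(MU, 1)
TAU = MU / 2                          # identical one-bit local quantizers (illustrative)
PD = norm.sf(TAU - MU)                # P(bit = 1 | H1) at each sensor
PF = norm.sf(TAU)                     # P(bit = 1 | H0) at each sensor

def fusion_llr(bits):
    """Log-likelihood ratio of the received bits at the fusion center."""
    ones = bits.sum()
    zeros = bits.size - ones
    return ones * np.log(PD / PF) + zeros * np.log((1 - PD) / (1 - PF))

rng = np.random.default_rng(5)
obs = rng.normal(MU, 1.0, size=10)    # ten sensors, phenomenon in state H1
bits = (obs > TAU).astype(int)        # local one-bit decisions
decide_h1 = fusion_llr(bits) > 0      # optimal fusion for equal priors
```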

12

Leonard, Anthony Charles. "Hypothesis Testing with the Similarity Index." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1005680996.

13

Punzi, Maria Teresa, and Karlo Kauko. "Testing the Global Banking Glut Hypothesis." WU Vienna University of Economics and Business, 2015. http://epub.wu.ac.at/4494/1/wp194.pdf.

Abstract:
This paper presents VAR results on the recent economic history of the U.S. and focuses on the dependence of U.S. macrofinancial variables on international capital flows. Both gross and net flows are included in the analysis. The results indicate that cross-border funding has affected the build-up in the U.S. housing market irrespective of how these flows are defined and measured. Both the savings glut hypothesis and the banking glut hypothesis are supported by these findings. However, net banking flows appear to explain the higher volatility in house price growth as well as the mortgage loan boom. (authors' abstract)
Series: Department of Economics Working Paper Series
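The paper's evidence comes from VAR models of U.S. macrofinancial variables and cross-border flows. As a stand-in for that analysis, the sketch below fits a two-variable VAR to synthetic data with statsmodels; the variable names, lag order, and data-generating process are assumptions for illustration, not the paper's specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(6)
T = 200
flows = np.zeros(T)                  # synthetic stand-in for net banking inflows
prices = np.zeros(T)                 # synthetic stand-in for house-price growth
for t in range(1, T):
    flows[t] = 0.5 * flows[t - 1] + rng.normal()
    prices[t] = 0.6 * prices[t - 1] + 0.3 * flows[t - 1] + rng.normal()

data = pd.DataFrame({"flows": flows, "house_prices": prices})
res = VAR(data).fit(maxlags=4, ic="aic")   # lag length chosen by AIC
irf = res.irf(10)                          # impulse responses of prices to flow shocks
print(res.summary())
```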
14

Sazak, Hakan Savas. "Estimation and Hypothesis Testing in Stochastic Regression." PhD thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/3/724294/index.pdf.

Abstract:
Regression analysis is very popular among researchers in various fields, but almost all researchers use the classical methods, which assume that X is non-stochastic and that the error is normally distributed. In real-life problems, however, X is generally stochastic and the error can be non-normal. The maximum likelihood (ML) estimation technique, which is known to have optimal features, is very problematic in situations where the distribution of X (the marginal part) or of the error (the conditional part) is non-normal. The modified maximum likelihood (MML) technique, which asymptotically gives estimators equivalent to the ML estimators, gives us the opportunity to conduct estimation and hypothesis testing procedures under non-normal marginal and conditional distributions. In this study we show that MML estimators are highly efficient and robust. Moreover, the test statistics based on the MML estimators are much more powerful and robust than the test statistics based on the least squares (LS) estimators that are mostly used in the literature. Theoretically, MML estimators are asymptotically minimum variance bound (MVB) estimators, but simulation results show that they are highly efficient even for small sample sizes. In this thesis, the Weibull and Generalized Logistic distributions are used for illustration, and the results given are based on these distributions. As a future study, the MML technique can be utilized for other types of distributions, and the procedures based on bivariate data can be extended to multivariate data.
15

LaPointe, Mitchell, and University of Lethbridge Faculty of Arts and Science. "Testing the animate monitoring hypothesis / Mitchell LaPointe." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Psychology, c2010, 2010. http://hdl.handle.net/10133/3054.

Abstract:
The detection of human and non-human animals and their unique (and potentially dangerous) “animation” would have been important to our ancestors’ survival. It seems plausible that our ancestors would have required a vigilance above and beyond that dedicated to other, inanimate, objects. Considering the millions of years of expending extra energy to monitor these objects, it would also seem likely, at least as advocated by New, Cosmides, and Tooby (2007), that the human visual system would have developed mechanisms to allocate attention automatically and quickly to these objects. We tested the New et al. (2007) “animate monitoring” hypothesis by presenting viewers with a group of animate objects and a group of inanimate objects using the flicker task, a task that is assumed to measure automatic visual attention. These objects were presented on a variety of backgrounds of natural scenes, including some backgrounds that were contextually inconsistent with the target objects. These objects were also presented either in a consistent location within each scene or in a location that violated that consistency. Only people objects were consistently more readily detected, not animate objects in general. Detection in this task was affected by more than just the information provided by the target object. Both results provide a serious challenge to the “animate monitoring” hypothesis. Furthermore, the results were shown not to be due to peculiarities of our stimulus set or to how interesting the members of each object category were.
xii, 108 leaves ; 29 cm
16

Hirche, Christoph. "From asymptotic hypothesis testing to entropy inequalities." Doctoral thesis, Universitat Autònoma de Barcelona, 2018. http://hdl.handle.net/10803/565901.

Abstract:
This thesis addresses the interplay between asymptotic hypothesis testing and entropy inequalities in quantum information theory. In the first part of the thesis we focus on hypothesis testing. Here, we consider two main settings: one can either fix quantum states while optimizing over possible measurements, or fix a measurement and evaluate its capability to discriminate quantum states by optimizing over such states. With regard to the former setting, we prove a general result on the optimal error rate in asymmetric composite hypothesis testing, which leads to a composite quantum Stein's Lemma. We also discuss how this gives an operational interpretation to several quantities of interest, such as the relative entropy of coherence, and how to transfer the result to symmetric hypothesis testing. For the latter, we give the optimal asymptotic error rates in several symmetric and asymmetric settings, as well as discuss properties and examples of these rates.

In the second part, the focus is shifted to entropy inequalities. We start with recoverability inequalities, which have gained much attention recently. As it turns out, they are closely related to the first part of the thesis. Using tools which we developed to prove the composite Stein's Lemma, we further prove a strengthened lower bound on the conditional quantum mutual information in terms of a regularized relative entropy featuring an explicit and universal recovery map. Next, we show two a priori different approaches to give an operational interpretation to the relative entropy of recovery via composite hypothesis testing. Then, we discuss and extend some recent counterexamples, which show that the non-regularized relative entropy of recovery is not a lower bound on the conditional quantum mutual information; additionally we provide more counterexamples where some of the involved systems are classical, showing that also in this restricted setting the same bound does not hold. Ultimately we employ the connection between hypothesis testing and recoverability to show that the regularization in our composite Stein's Lemma is indeed needed. We then turn to a seemingly different type of entropy inequalities called bounds on information combining, which are concerned with the conditional entropy of the sum of random variables with associated side information. Using a particular recoverability inequality, we show a non-trivial lower bound and additionally conjecture optimal lower and upper bounds. Furthermore, we discuss implications of our bounds for the finite blocklength behavior of Polar codes used to attain optimal communication capacities over quantum channels. Finally, we discuss Rényi-2 entropy inequalities for Gaussian states on infinite-dimensional systems, by exploiting their formulation as log-det inequalities to find recoverability-related bounds on several interesting quantities. We apply this to Gaussian steerability and entanglement measures, proving their monogamy and several other features.
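In the state-discrimination setting that underlies the symmetric results above, the optimal error exponent is the quantum Chernoff bound, -log min over 0 <= s <= 1 of Tr[rho^s sigma^(1-s)]. The sketch below evaluates it numerically for two full-rank qubit states; the states are illustrative choices, and the channel (rather than state) setting of the thesis is not captured here.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power
from scipy.optimize import minimize_scalar

# Two full-rank qubit density matrices (illustrative choices)
rho = np.array([[0.8, 0.1], [0.1, 0.2]])
sigma = np.array([[0.4, -0.2], [-0.2, 0.6]])

def q(s):
    """Tr[rho^s sigma^(1-s)], the quantity minimized in the Chernoff bound."""
    return np.real(np.trace(
        fractional_matrix_power(rho, s) @ fractional_matrix_power(sigma, 1 - s)))

opt = minimize_scalar(q, bounds=(0.0, 1.0), method="bounded")
chernoff_exponent = -np.log(opt.fun)   # optimal symmetric error exponent
```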
17

Johansson, Erik. "Testing the Explanation Hypothesis using Experimental Methods." Thesis, Linköping University, Department of Computer and Information Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57308.

Abstract:

The Explanation Hypothesis is a psychological hypothesis about how people attribute moral responsibility. The hypothesis makes general claims about everyday thinking about moral responsibility and is also said to have important consequences for related philosophical issues. Since arguments in favor of the hypothesis are largely based on a number of intuitive cases, there is a need to investigate whether it is supported by empirical evidence. In this study, the hypothesis was tested by means of quantitative experimental methods. The data were collected by conducting online surveys in which participants were introduced to a number of different scenarios. For each scenario, questions about moral responsibility were asked. The results provide general support for the Explanation Hypothesis, and there are therefore more reasons to take its proposed consequences seriously.

18

Santos, Andrés. "Essays in hypothesis testing with instrumental variables." May be available electronically, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

19

Datar, Satyajit V. "Hypothesis Testing for the Process Capability Ratio." Ohio University / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1040054409.

20

Guo, Wenge. "Generalized Error Control in Multiple Hypothesis Testing." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1186500727.

21

Glore, Mary Lee. "The Threshold Prior in Bayesian Hypothesis Testing." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1416570546.

22

Qumsiyeh, Sahar Botros. "Non-normal Bivariate Distributions: Estimation and Hypothesis Testing." PhD thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608941/index.pdf.

Abstract:
When using data for estimating the parameters in a bivariate distribution, the tradition is to assume that data comes from a bivariate normal distribution. If the distribution is not bivariate normal, which often is the case, the maximum likelihood (ML) estimators are intractable and the least square (LS) estimators are inefficient. Here, we consider two independent sets of bivariate data which come from non-normal populations. We consider two distinctive distributions: the marginal and the conditional distributions are both Generalized Logistic, and the marginal and conditional distributions both belong to the Student's t family. We use the method of modified maximum likelihood (MML) to find estimators of various parameters in each distribution. We perform a simulation study to show that our estimators are more efficient and robust than the LS estimators even for small sample sizes. We develop hypothesis testing procedures using the LS and the MML estimators. We show that the latter are more powerful and robust. Moreover, we give a comparison of our tests with another well known robust test due to Tiku and Singh (1982) and show that our test is more powerful. The latter is based on censored normal samples and is quite prominent (Lehmann, 1986). We also use our MML estimators to find a more efficient estimator of Mahalanobis distance. We give real life examples.
23

Ulgen, Burcin Emre. "Robust Estimation and Hypothesis Testing in Microarray Analysis." PhD thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612352/index.pdf.

Abstract:
Microarray technology allows the measurement of thousands of gene expressions simultaneously. As a result, many statistical methods have emerged for identifying differentially expressed genes. Kerr et al. (2001) proposed an analysis of variance (ANOVA) procedure for the analysis of gene expression data. Their estimators are based on the assumption of normality; however, as they noted, the parameter estimates and residuals from this analysis are notably heavier-tailed than normal. Since non-normality complicates the data analysis and results in inefficient estimators, it is very important to develop statistical procedures which are efficient and robust. For this reason, in this work we use the Modified Maximum Likelihood (MML) and Adaptive Modified Maximum Likelihood (AMML) estimation methods (Tiku and Suresh, 1992) and show that the MML and AMML estimators are more efficient and robust. In our study we compare the MML and AMML methods with widely used statistical analysis methods via simulations and real microarray data sets.
24

Denis, Daniel J. "Null hypothesis significance testing, history, criticisms and alternatives." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ59127.pdf.

25

Thulin, Måns. "On Confidence Intervals and Two-Sided Hypothesis Testing." Doctoral thesis, Uppsala universitet, Tillämpad matematik och statistik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-229399.

Abstract:
This thesis consists of a summary and six papers, dealing with confidence intervals and two-sided tests of point-null hypotheses. In Paper I, we study Bayesian point-null hypothesis tests based on credible sets. A decision-theoretic justification for tests based on central credible intervals is presented. Paper II is concerned with a new two-sample test for the difference of mean vectors in the high-dimensional setting, where the number of variables is greater than the sample size. A simulation study indicates that the proposed test yields higher power when the variables are correlated. Computational aspects of the test are discussed. In Paper III, we discuss randomized confidence intervals for a binomial proportion. Some classical intervals are compared with a recently proposed interval in terms of coverage, length and sensitivity to the randomization. In Paper IV, a level-adjustment of the Clopper-Pearson interval for a binomial proportion is proposed. The adjusted interval is shown to have good coverage properties and short expected length. In Paper V we study the cost of using the exact Clopper-Pearson interval rather than shorter approximate intervals, in terms of the increase in expected length and the increase in sample size required to obtain a given length. Comparisons are made using asymptotic expansions. Paper VI deals with exact confidence intervals and point-null hypothesis tests for parameters of a class of discrete distributions. A large class of intervals is shown to lack strict nestedness and to have bounds that are not strictly monotone and typically also discontinuous. The p-values of the corresponding hypothesis tests are shown to lack desirable continuity properties, and typically also to lack certain monotonicity properties.
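Papers IV and V revolve around the Clopper-Pearson interval, which has a standard closed form via beta quantiles. Here is a minimal sketch of the exact interval (the classical construction, not the thesis's level-adjusted variant):

```python
from scipy.stats import beta

def clopper_pearson(x, n, conf=0.95):
    """Exact (Clopper-Pearson) two-sided confidence interval for a binomial
    proportion, from the usual beta-quantile representation."""
    alpha = 1 - conf
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

print(clopper_pearson(7, 50))   # e.g. 7 successes in 50 trials
```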
26

Gabriel, Joseph R. "Invariant hypothesis testing with applications in signal processing /." View online ; access limited to URI, 2004. http://0-wwwlib.umi.com.helin.uri.edu/dissertations/dlnow/3135904.

27

Gilliland, Gene Clay. "Field testing Bakke and Roberts' 'Old First' hypothesis." Theological Research Exchange Network (TREN), 1997. http://www.tren.com.

28

Papastavrou, Jason D. "Decentralized decision making in a hypothesis testing environment." Thesis, Massachusetts Institute of Technology, 1990. http://hdl.handle.net/1721.1/14023.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1990.
Includes bibliographical references (leaves 225-231).
by Jason D. Papastavrou.
Ph.D.
29

Wissinger, John W. (John Weakley). "Distributed nonparametric training algorithms for hypothesis testing networks." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/12006.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (p. 495-502).
by John W. Wissinger.
Ph.D.
30

Bauer, Laura L. "Hypothesis testing procedures for non-nested regression models." Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/74755.

Abstract:
Theory often indicates that a given response variable should be a function of certain explanatory variables, yet fails to provide meaningful information as to the specific form of this function. To test the validity of a given functional form with sensitivity toward the feasible alternatives, a procedure is needed for comparing non-nested families of hypotheses. Two hypothesized models are said to be non-nested when one model is neither a restricted case nor a limiting approximation of the other. These non-nested hypotheses cannot be tested using conventional likelihood ratio procedures. In recent years, however, several new approaches have been developed for testing non-nested regression models. A comprehensive review of the procedures for the case of two linear regression models was presented. Comparisons between these procedures were made on the basis of asymptotic distributional properties, simulated finite-sample performance and computational ease. A modification to the Fisher and McAleer JA-test was proposed and its properties investigated. As a compromise between the JA-test and the Orthodox F-test, it was shown to have an exact non-null distribution. Its properties, both analytically and empirically derived, exhibited the practical worth of such an adjustment. A Monte Carlo study of the testing procedures involving non-nested linear regression models in small-sample situations (n ≤ 40) provided information necessary for the formulation of practical guidelines. It was evident that the modified Cox procedure, N̄, was most powerful for providing correct inferences. In addition, there was strong evidence to support the use of the adjusted J-test (AJ) (Davidson and MacKinnon's test with small-sample modifications due to Godfrey and Pesaran), the modified JA-test (NJ) and the Orthodox F-test for supplemental information. Similar results were obtained under non-normal disturbances. An empirical study of spending patterns for household food consumption provided a practical application of the non-nested procedures in a large-sample setting. The study provided not only an example of non-nested testing situations but also the opportunity to draw sound inferences from the test results.
Ph.D.
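Among the procedures compared in this dissertation is Davidson and MacKinnon's J-test, whose mechanics are simple enough to sketch: fit the rival model, add its fitted values as an extra regressor in the maintained model, and t-test that coefficient. The sketch below is the plain J-test, without the small-sample Godfrey-Pesaran adjustment that defines the AJ variant discussed above.

```python
import numpy as np
from scipy import stats

def ols(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, X @ b

def j_test(X1, X2, y):
    """Davidson-MacKinnon J-test of model 1 (y on X1) against the non-nested
    rival model 2 (y on X2): augment X1 with the rival's fitted values and
    t-test that coefficient. Rejection counts against model 1."""
    _, yhat2 = ols(X2, y)
    Xa = np.column_stack([X1, yhat2])
    b, fitted = ols(Xa, y)
    resid = y - fitted
    n, p = Xa.shape
    s2 = (resid @ resid) / (n - p)
    cov = s2 * np.linalg.inv(Xa.T @ Xa)
    t = b[-1] / np.sqrt(cov[-1, -1])
    return t, 2 * stats.t.sf(abs(t), n - p)
```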
31

Zhang, Zhongfa. "Multiple Hypothesis Testing for Finite and Infinite Number of Hypotheses." Case Western Reserve University School of Graduate Studies / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=case1121461130.

32

Namavari, Hamed. "Essays on Objective Procedures for Bayesian Hypothesis Testing." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563872718411158.

33

Fonseca, Miguel dos Santos. "Estimation and hypothesis testing in mixed linear models." Doctoral thesis, FCT - UNL, 2007. http://hdl.handle.net/10362/1989.

Abstract:
Dissertation presented to obtain the degree of Doctor in Mathematics (Statistics) at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
This thesis focuses on inference and hypothesis testing for parameters in orthogonal mixed linear models. A canonical form for such models is obtained using the matrices in the principal basis of the commutative Jordan algebra associated with the model. UMVUEs are derived and used for hypothesis tests. When the usual F tests cannot be used, generalized F tests arise, and the distribution of the statistic of such tests is obtained under some mild conditions. An application of these results is made to cross-nested models.
34

Grabitzky, Vera Katharina. "Vulnerable language areas in attriting L1 German : testing the interface hypothesis and structural overlap hypothesis." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1328.

Abstract:
Linguists studying language acquisition often assume that once a first language is fully acquired, its mental linguistic representation remains constant and stable. Observations of native language attrition due to the influence of a dominant second language have led researchers to rethink the nature of the first language and consider the possibility that the mental representation of our first language may not be completely stable. The purpose of this dissertation is to investigate specific areas of the first language that may be particularly vulnerable to L1 attrition if exposed to a dominant L2. I test Sorace's (2003) Vulnerable Interface Hypothesis, and propose and test the Structural Overlap Hypothesis. The Vulnerable Interface Hypothesis for first language attrition claims that linguistic properties located in the interfaces between the linguistic computational system and external domains (e.g. discourse or pragmatics) are particularly vulnerable to attrition, while internal interfaces (e.g. the syntax-semantics interface) are only somewhat vulnerable to attrition. The domain of narrow syntax is assumed to remain stable unless the L1 begins to attrite in childhood (Montrul, 2008). The Structural Overlap Hypothesis assumes that properties which exhibit structural overlap between the L1 and L2 are more vulnerable to L1 attrition. The predictions of both hypotheses are tested using 15 L1 German adult attriters whose dominant L2 is English, in order to observe the degree of stability of the linguistic system in adult onset bilinguals. Four linguistic properties of German are examined, which are grouped in two pairings of a purely syntactic property with a grammatically related interface property. 15 monolingual L1 German speakers and 15 monolingual L1 English speakers serve as controls. The data obtained also shed light on a frequently debated question of attrition research, viz. whether L1 attrition is due to transfer from the L2, or a decrease in the linguistic processing capacity due to competition of a dominant L2, or both.
35

Strikholm, Birgit. "Essays on nonlinear time series modelling and hypothesis testing." Doctoral thesis, Handelshögskolan i Stockholm, Ekonomisk Statistik (ES), 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:hhs:diva-535.

Abstract:
There seems to be a common understanding nowadays that the economy is nonlinear. Economic theory suggests features that cannot be incorporated into linear frameworks, and over the decades a solid body of empirical evidence of nonlinearities in economic time series has been gathered. This thesis consists of four essays that have to do with various forms of nonlinear statistical inference.

In the first chapter the problem of determining the number of regimes in a threshold autoregressive (TAR) model is considered. Typically, the number of regimes (or thresholds) is assumed unknown and has to be determined from the data. The solution provided in the chapter first uses the smooth transition autoregressive (STAR) model with a fixed and rapid transition to approximate the TAR model. The number of thresholds is then determined using sequential misspecification tests developed for the STAR model. The main characteristic of the proposed method is that only standard statistical inference is used, as opposed to non-standard inference or computation-intensive bootstrap-based methods.

In the second chapter a similar idea is employed, and the structural break model is approximated with a smoothly time-varying autoregressive model. By making the smooth changes in parameters rapid, the model is able to closely approximate the corresponding model with breaks in the parameter structure. This approximation makes the misspecification tests developed for the STR modelling framework available, and they can be used for sequentially determining the number of breaks. Again, the method is computationally simple, as all tests rely on standard statistical inference.

There exists literature suggesting that business cycle fluctuations affect the pattern of seasonality in macroeconomic series. A question asked in the third chapter is whether other factors, such as changes in institutions or technological change, may have this effect as well. The time-varying smooth transition autoregressive (TV-STAR) models, which can incorporate both types of change, are used to model the (possible) changes in seasonal patterns and shed light on the hypothesis that institutional and technological changes (proxied by time) may have a stronger effect on seasonal patterns than the business cycle. The TV-STAR testing framework is applied to nine quarterly industrial production series from the G7 countries, Finland and Sweden. These series display strong seasonal patterns and also contain business cycle fluctuations. The empirical results of the chapter suggest that seasonal patterns in these series have been changing over time and, furthermore, that business cycle fluctuations do not seem to be the main cause of this change.

The last chapter of the thesis considers the possibility of testing for Granger causality in bivariate nonlinear systems when the exact form of the nonlinear relationship between variables is not known. The idea is to linearize the testing problem by approximating the nonlinear system by its Taylor expansion. The expansion is linear in parameters, and one gets round the difficulty caused by the unknown functional form of the relationship under investigation.
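The last chapter's linearization idea admits a compact illustration: replace the unknown nonlinear relationship by a low-order Taylor polynomial in the lags, and test the Granger noncausality restriction with a standard F-test on the expansion terms involving the other variable. The sketch below assumes one lag of each series and a second-order expansion; the thesis's actual expansion order and lag structure may differ.

```python
import numpy as np
from scipy import stats

def taylor_granger_test(y, x, order=2):
    """Test H0: x does not Granger-cause y, approximating the unknown nonlinear
    relation by a Taylor expansion (one lag of each variable, polynomial terms
    up to `order`, plus a cross term). An F-test compares the restricted model
    (own lags only) with the augmented model."""
    y0, y1, x1 = y[1:], y[:-1], x[:-1]
    n = len(y0)
    # Restricted model: polynomial in the own lag only
    Xr = np.column_stack([np.ones(n)] + [y1**p for p in range(1, order + 1)])
    # Unrestricted model: add polynomial and cross terms involving the x lag
    extra = [x1**p for p in range(1, order + 1)] + [y1 * x1]
    Xu = np.column_stack([Xr] + extra)
    rss = lambda X: np.sum((y0 - X @ np.linalg.lstsq(X, y0, rcond=None)[0])**2)
    q = Xu.shape[1] - Xr.shape[1]
    F = ((rss(Xr) - rss(Xu)) / q) / (rss(Xu) / (n - Xu.shape[1]))
    return F, stats.f.sf(F, q, n - Xu.shape[1])
```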

Diss. Stockholm : Handelshögskolan, 2004

36

Salek, Shishavan Farzin. "From hypothesis testing of quantum channels to secret sharing." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/670890.

Abstract:
The present thesis has three major thrusts. The first thrust presents a broad investigation of asymptotic binary hypothesis testing when each hypothesis represents asymptotically many independent instances of a quantum channel. Unlike the familiar setting of quantum states as hypotheses, there is a fundamental distinction between adaptive and non-adaptive strategies with respect to the channel uses, and we introduce a number of further variants of the discrimination tasks by imposing different restrictions on the test strategies. The following results are obtained: (1) the first separation between adaptive and non-adaptive symmetric hypothesis testing exponents for quantum channels, which we derive from a general lower bound on the error probability for non-adaptive strategies; (2) we prove that for classical-quantum channels, adaptive and non-adaptive strategies lead to the same error exponents in both the symmetric (Chernoff) and asymmetric (Hoeffding) settings; (3) we prove that, in some sense generalizing the previous statement, for general channels adaptive strategies restricted to classical feed-forward and product state channel inputs are not superior to non-adaptive product state strategies; (4) as an application of our findings, we address the discrimination power of quantum channels and show that neither adaptive strategies nor input quantum memory can increase the discrimination power of an entanglement-breaking channel.

In the second thrust, we construct new protocols for the tasks of converting noisy multipartite quantum correlations into noiseless classical and quantum ones using local operations and classical communication (LOCC). For the former, known as common randomness (CR) distillation, two new lower bounds are obtained. Our proof relies on a generalization of communication for omniscience (CO). Our contribution here is a novel simultaneous decoder for the compression of correlated classical sources by random binning with quantum side information at the decoder. For the latter, we derive two new lower bounds on the rate at which Greenberger-Horne-Zeilinger (GHZ) states can be asymptotically distilled from any given pure state under LOCC. Our approach consists in "making coherent" the proposed CR distillation protocols and recycling the resources.

The final thrust studies communication over a single-serving two-receiver quantum broadcast channel with a legitimate receiver and an eavesdropper. We find inner and outer bounds on the region describing the tradeoff between common, individualized and confidential messages, as well as the rate of the dummy randomness used for obfuscation. As applications, we find one-shot capacity bounds on the simultaneous transmission of classical and quantum information and re-derive a number of asymptotic results in the literature.
APA, Harvard, Vancouver, ISO, and other styles
37

Sriananthakumar, Sivagowry 1968. "Contributions to the theory and practice of hypothesis testing." Monash University, Dept. of Econometrics and Business Statistics, 2000. http://arrow.monash.edu.au/hdl/1959.1/8836.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Holm, Linus. "Predictive eyes precede retrieval : visual recognition as hypothesis testing." Doctoral thesis, Umeå : Department of Psychology, Umeå University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1179.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Santamaria, Amy. "Testing and modeling a two-component hypothesis of timing." Diss., Connect to online resource, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3212111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Norris, Sasha. "Signals, steroids and sparrows : testing the immunocompetence handicap hypothesis." Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Horton, Dean. "Testing the SUSY hypothesis through naturalness and spin measurements." Thesis, University of Oxford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.543461.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Holtgraves, Marnell M. "Diagnosis and schemata : counselors' perceptions and hypothesis-testing strategies." Virtual Press, 1991. http://liblink.bsu.edu/uhtbin/catkey/832991.

Full text
Abstract:
The Diagnostic and Statistical Manual of Mental Disorders, Third Edition, Revised (DSM-III-R), published by the American Psychiatric Association (APA) in 1987, is currently the primary tool used by counselors in clinical settings for diagnosing clients' psychological and behavioral problems. Beginning with the third edition of the manual (DSM-III; APA, 1980), a multiaxial process for diagnosis was introduced to encourage a biopsychosocial perspective of clients' problems. This study was designed to investigate whether alterations in diagnosis on Axis IV and V could further encourage a biopsychosocial perspective. It was designed to imitate the rapid diagnostic process that takes place in many clinical settings. In the present study, the criterion for a biopsychosocial perspective was the maintenance of neutral perceptions and unbiased hypothesis-testing strategies following diagnosis of a client. Twenty-four counseling psychology trainees participated in the study. The counselor trainees diagnosed a client after listening to approximately 20 minutes of an audio tape of an initial assessment interview. The 12 counselor trainees in the control group diagnosed the client using the standard multiaxial format for diagnosis. The 12 counselor trainees in the experimental group diagnosed the client using an alternative format which encouraged a focus on positive aspects of the client and the client's environment. The counselor trainees completed the Impression Formation Questionnaire (IFQ) to assess their perceptions of the client. They then wrote 12 questions they would ask the client in the next counseling session. These questions constituted their hypothesis-testing strategies for their diagnoses on Axis I and Axis II. The counselor trainees in both groups maintained primarily neutral perceptions of the client as measured by the IFQ. The counselor trainees in both groups favored confirmatory hypothesis-testing strategies when assigned to groups based on their hypothesis-testing strategy score (p < .05). The results of this study indicated that the standard and alternative multiaxial formats for diagnosis may encourage neutral perceptions of a client when counselors must diagnose the client based on very little information. Neither format for diagnosis was successful, however, in discouraging a biased, confirmatory search for information.
Department of Counseling Psychology and Guidance Services
APA, Harvard, Vancouver, ISO, and other styles
43

Smith, Toni Michelle. "An investigation into student understanding of statistical hypothesis testing." College Park, Md.: University of Maryland, 2008. http://hdl.handle.net/1903/8565.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2008.
Thesis research directed by: Dept. of Curriculum and Instruction. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
44

Hahn, Georg. "Statistical methods for Monte-Carlo based multiple hypothesis testing." Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/25279.

Full text
Abstract:
Statistical hypothesis testing is a key technique for performing statistical inference. The main focus of this work is to investigate multiple testing under the assumption that the analytical p-values underlying the tests of all hypotheses are unknown. Instead, we assume that they can be approximated by drawing Monte Carlo samples under the null. The first part of this thesis focuses on the computation of test results with a guarantee on their correctness, that is, decisions on multiple hypotheses which are identical to the ones that would be obtained with the unknown p-values. We present MMCTest, an algorithm implementing a multiple testing procedure which yields correct decisions on all hypotheses (up to a pre-specified error probability) based solely on Monte Carlo simulation. MMCTest offers novel ways to evaluate multiple hypotheses, as it allows one to obtain the (previously unknown) correct decisions on hypotheses (for instance, genes) in real data studies, again up to an error probability pre-specified by the user. The ideas behind MMCTest are generalised in a framework for Monte Carlo based multiple testing, demonstrating that existing methods giving no guarantees on their test results can be modified to yield certain theoretical guarantees on the correctness of their outputs. The second part deals with multiple testing from a practical perspective. In practice, it might be desirable to forgo the additional computational effort needed for guaranteed decisions and to invest it instead in the computation of a more accurate ad hoc test result. This is attempted by QuickMMCTest, an algorithm which adaptively allocates more samples to hypotheses whose decisions are more prone to random fluctuations, thereby achieving improved accuracy. This work also derives the optimal allocation of a finite number of samples to finitely many hypotheses under a normal approximation, where the optimal allocation is understood as the one minimising the expected number of erroneously classified hypotheses (with respect to the classification based on the analytical p-values). An empirical comparison of the optimal allocation of samples to the one computed by QuickMMCTest indicates that the behaviour of QuickMMCTest may not be far from optimal.
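The starting point of the thesis, replacing an unknown analytical p-value by Monte Carlo samples drawn under the null, can be sketched as follows. This is not the MMCTest algorithm itself (which uses sequential sampling with tighter guarantees); it is a fixed-sample illustration with hypothetical constants:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_pvalue_band(t_obs, null_sampler, n=10_000):
    """Monte Carlo p-value estimate with a crude ~99.9% normal confidence band."""
    exceed = sum(null_sampler() >= t_obs for _ in range(n))
    phat = (exceed + 1) / (n + 1)                  # add-one estimator
    half = 3.29 * np.sqrt(phat * (1 - phat) / n)
    return max(phat - half, 0.0), min(phat + half, 1.0)

# Hypothetical test: statistic |Z| with Z ~ N(0,1) under H0, observed t = 2.5.
lo, hi = mc_pvalue_band(2.5, lambda: abs(rng.standard_normal()), n=5000)
print(lo, hi)  # a decision is certain only if the whole band lies on one
               # side of the (possibly multiplicity-adjusted) threshold
```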
APA, Harvard, Vancouver, ISO, and other styles
45

Escamilla, Pierre. "On cooperative and concurrent detection in distributed hypothesis testing." Electronic Thesis or Diss., Institut polytechnique de Paris, 2019. http://www.theses.fr/2019IPPAT007.

Full text
Abstract:
Statistical inference plays a major role in the development of new technologies and inspires a large number of algorithms dedicated to detection, identification and estimation tasks. However, there is no theoretical guarantee for the performance of these algorithms. In this thesis, we try to understand how sensors can best share their information in a network with communication constraints in order to detect the same or distinct events. We investigate different aspects of detector cooperation and how conflicting needs can best be met in the case of detection tasks. More specifically, we study a hypothesis testing problem where each detector must maximize the decay exponent of the Type II error under a given Type I error constraint. As the detectors are interested in different information, a trade-off between the achievable decay exponents of the Type II errors appears. Our goal is to characterize the region of possible trade-offs between Type II error decay exponents. In massive sensor networks, the amount of information is often limited due to energy consumption and network saturation risks. We therefore study the zero-rate compression regime (i.e., the message size increases sub-linearly with the number of observations). In this case, we fully characterize the region of Type II error decay exponents, in configurations where the detectors do or do not share the same purposes. We also study the case of a network with positive compression rates (i.e., the message size increases linearly with the number of observations), for which we present subsets of the region of Type II error decay exponents. Finally, in the case of a single-sensor, single-detector scenario with a positive compression rate, we propose a complete characterization of the optimal Type II error decay exponent for a family of Gaussian hypothesis testing problems.
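As a point of reference for the exponents studied here: in the centralized, unconstrained case, Stein's lemma says the best Type II error exponent under any fixed Type I level is the Kullback-Leibler divergence D(P0||P1). A short sketch with hypothetical distributions (the distributed trade-off regions characterized in the thesis refine this benchmark):

```python
import numpy as np

def kl_divergence(p, q):
    """D(p || q) in nats for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p0 = [0.5, 0.3, 0.2]   # hypothetical null distribution
p1 = [0.2, 0.3, 0.5]   # hypothetical alternative
print(kl_divergence(p0, p1))  # Type II error decays ~ e^(-n * D(p0||p1))
```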
APA, Harvard, Vancouver, ISO, and other styles
46

Borel-Saladin, Jacqueline. "Testing the social polarization hypothesis in Johannesburg, South Africa." Doctoral thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/10098.

Full text
Abstract:
Includes bibliographical references.
This study assesses both the social polarisation hypothesis and the role migrants play in this process, using survey and population census data for the Johannesburg region of South Africa from 1970 to 2010. The manufacturing sector, once a major source of urban employment consisting of a large proportion of skilled and semi-skilled, middle-income jobs, has declined, while the service sector, argued to consist predominantly of either high-skill, high-pay or low-skill, low-pay jobs, has grown. The decline of manufacturing and the growth of the service sector are thus argued to result in a more polarised society. Low-wage, low-skill service sector jobs are also argued to attract poorly educated, unskilled immigrants unable to compete in the urban labour market for anything other than low-skill, low-pay jobs. Thus, the contention is that immigration contributes to social polarisation.
APA, Harvard, Vancouver, ISO, and other styles
47

Jenkins, Bradlee A., and L. Lee Glenn. "Cystic Fibrosis Carrier Screening Attitudes and Multiple Hypothesis Testing." Digital Commons @ East Tennessee State University, 2014. https://dc.etsu.edu/etsu-works/7466.

Full text
Abstract:
The recent study by Cunningham, Lewis, Curnow, Glazner, and Massie [1] on the attitudes of respiratory physicians and clinic coordinators towards cystic fibrosis (CF) carrier screening drew several unsupported conclusions because the α level of 0.05 was not corrected for the large number of hypothesis tests conducted, inflating the Type I error rate and leading to the acceptance of hypotheses that were likely false [2].
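The arithmetic behind this critique is simple: with m independent tests each run at α = 0.05, the family-wise chance of at least one false positive grows rapidly, and a Bonferroni-style correction restores control. A minimal sketch (m chosen arbitrarily):

```python
# Family-wise error rate (FWER) for m independent tests at alpha = 0.05.
m = 20
fwer_uncorrected = 1 - 0.95 ** m                    # ~0.64: a false positive is likely
alpha_bonferroni = 0.05 / m                         # corrected per-test level
fwer_bonferroni = 1 - (1 - alpha_bonferroni) ** m   # ~0.049: controlled
print(fwer_uncorrected, fwer_bonferroni)
```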
APA, Harvard, Vancouver, ISO, and other styles
48

Sechidis, Konstantinos. "Hypothesis testing and feature selection in semi-supervised data." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/hypothesis-testing-and-feature-selection-in-semisupervised-data(97f5f950-f020-4ace-b6cd-49cb2f88c730).html.

Full text
Abstract:
A characteristic of most real-world problems is that collecting unlabelled examples is easier and cheaper than collecting labelled ones. As a result, learning from partially labelled data is a crucial and demanding area of machine learning, and extending techniques from fully to partially supervised scenarios is a challenging problem. Our work focuses on two types of partially labelled data that can occur in binary problems: semi-supervised data, where the labelled set contains both positive and negative examples, and positive-unlabelled data, a more restricted version of partial supervision where the labelled set consists of only positive examples. In both settings, it is very important to explore a large number of features in order to derive useful and interpretable information about our classification task, and to select a subset of features that contains most of the useful information. In this thesis, we address three fundamental and tightly coupled questions concerning feature selection in partially labelled data; all three relate to the highly controversial issue of when additional unlabelled data does and does not improve performance in partially labelled learning environments. The first question is: what are the properties of statistical hypothesis testing in such data? Second, given the widespread criticism of significance testing, what can we do in terms of effect size estimation, that is, quantifying how strong the dependency is between a feature X and the partially observed label Y? Finally, in the context of feature selection, how well can features be ranked by estimated measures when the population values are unknown? The answers to these questions provide a comprehensive picture of feature selection in partially labelled data. Interesting applications include the estimation of mutual information quantities, structure learning in Bayesian networks, and the investigation of how human-provided prior knowledge can overcome the restrictions of partial labelling. One direct contribution of our work is to enable valid statistical hypothesis testing and estimation in positive-unlabelled data. Focusing on a generalised likelihood ratio test and on estimating mutual information, we provide five key contributions. (1) We prove that assuming all unlabelled examples are negative cases is sufficient for independence testing, but not for power analysis activities. (2) We suggest a new methodology that compensates for this and enables power analysis, allowing sample size determination for observing an effect with a desired power by incorporating the user's prior knowledge of the prevalence of positive examples. (3) We show a new capability, supervision determination, which can determine a priori the number of labelled examples the user must collect before being able to observe a desired statistical effect. (4) We derive an estimator of the mutual information in positive-unlabelled data, and its asymptotic distribution. (5) Finally, we show how to rank features with and without prior knowledge. We also derive extensions of these results to semi-supervised data. In a further extension, we investigate how our results can be used for Markov blanket discovery in partially labelled data. While there are many different algorithms for deriving the Markov blanket of fully supervised nodes, the partially labelled problem is far more challenging, and there is a lack of principled approaches in the literature.
Our work constitutes a generalization of the conditional tests of independence for partially labelled binary target variables, which can handle the two main partially labelled scenarios: positive-unlabelled and semi-supervised. The result is a significantly deeper understanding of how to control false negative errors in Markov blanket discovery procedures and of how unlabelled data can help. Finally, we present how our results can be used for information-theoretic feature selection in partially labelled data. Our work naturally extends feature selection criteria suggested for fully supervised data to partially labelled scenarios. These criteria can capture both the relevancy and the redundancy of the features and can be used for semi-supervised and positive-unlabelled data.
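Contribution (1), that treating all unlabelled examples as negatives preserves the independence test while distorting effect sizes, can be simulated directly. A hypothetical sketch with a plug-in mutual information estimate (the thesis derives the proper estimator and its asymptotic distribution):

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Plug-in MI estimate (in nats) between two discrete arrays."""
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum((c / n) * np.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 5000)                      # feature
z = (x ^ (rng.random(5000) < 0.3)).astype(int)    # true label, depends on x
y = z * (rng.random(5000) < 0.4)                  # positive-unlabelled label:
                                                  # only ~40% of positives labelled
print(mutual_information(x, y))   # > 0: the independence test still fires,
print(mutual_information(x, z))   # but the effect size against y is attenuated
```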
APA, Harvard, Vancouver, ISO, and other styles
49

Kibler, Robyn M. "Testing the Medical Arms Race Hypothesis: a Spatial Approach." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6879.

Full text
Abstract:
The surgical robot experienced rapid uptake throughout hospitals in the US despite a lack of clinical evidence that it is superior to existing methods, undeterred by its high cost. This type of technology may be a "weapon" in the sense of the medical arms race hypothesis, which asserts that competition among hospitals may be welfare-reducing in that it encourages resource use that is not commensurate with beneficial health outcomes. This paper is a case study of the diffusion of the surgical robot among hospitals in Florida. We address the medical arms race hypothesis directly by investigating whether a hospital's decision to adopt a robot is a function of neighboring, competing hospitals' decisions to do so. Using a spatial autoregressive probit model, we find that the spatial coefficient is significant and negative. That is, when neighboring hospitals operate a robot, a given hospital is less likely to operate one. Hospitals do appear to consider the behavior of rival hospitals, but not in a way that is consistent with a medical arms race. Support is instead lent to the hypothesis that as more hospitals become providers of robotic-assisted surgery, the less profitable it becomes to enter the market.
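The model's structure can be illustrated, very loosely, by a probit with a spatially lagged adoption term. The sketch below is a naive plug-in, not a consistent estimator of the spatial autoregressive probit used in the dissertation (the lag of the outcome is endogenous), and the toy data contain no true spatial effect, so the lag coefficient should hover near zero here:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
coords = rng.random((n, 2))                         # hypothetical hospital sites
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
W = (d > 0) & (d < 0.15)                            # neighbours within a radius
W = W / np.maximum(W.sum(1, keepdims=True), 1)      # row-standardised weights

size = rng.normal(size=n)                           # hypothetical covariate
y = (size + rng.normal(size=n) > 0).astype(int)     # toy adoption outcome
X = sm.add_constant(np.column_stack([size, W @ y])) # spatial lag of adoption
print(sm.Probit(y, X).fit(disp=0).params)           # in the paper, the spatial
                                                    # term is significantly negative
```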
APA, Harvard, Vancouver, ISO, and other styles
50

Pan, Jianhong. "A unified theory of hypothesis testing based on rankings." Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/9716.

Full text
Abstract:
A unified theory of hypothesis testing based on the ranks of the data is proposed. A hypothesis testing problem often gives rise to two separate permutation sets, one corresponding to the data and one to the alternative. By defining the distance between permutation sets as the average of all distances between pairs of permutations, one from each set, it is possible to obtain various test statistics. The limiting distributions of the test statistics derived by this unified approach are obtained under both the null hypothesis and contiguous alternatives. The unified approach produces not only some well-known test statistics but also some new yet plausible ones. The corresponding results extend the simple linear rank statistics defined by Hajek and Sidak (1967) to generalized linear rank statistics, and the two-sample case to the multi-sample case. Furthermore, a combined method is developed for the case of composite alternatives.
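The set-to-set distance defined in the abstract is easy to compute for small problems. A sketch using the Spearman footrule as the permutation distance (the thesis admits various distances; the alternative set below is a toy choice):

```python
from itertools import permutations

def footrule(p, q):
    """Spearman footrule: sum of absolute positional displacements."""
    return sum(abs(a - b) for a, b in zip(p, q))

def set_distance(P, Q):
    """Average of all pairwise distances, one permutation from each set."""
    pairs = [(p, q) for p in P for q in Q]
    return sum(footrule(p, q) for p, q in pairs) / len(pairs)

data_set = [(0, 1, 2, 3)]                                   # observed ranking
alt_set = [p for p in permutations(range(4)) if p[0] == 3]  # toy alternative set
print(set_distance(data_set, alt_set))
```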
APA, Harvard, Vancouver, ISO, and other styles