
Dissertations / Theses on the topic 'Distributed Sequential Hypothesis Testing'

Consult the top 24 dissertations / theses for your research on the topic 'Distributed Sequential Hypothesis Testing.'


1

Wissinger, John W. (John Weakley). "Distributed nonparametric training algorithms for hypothesis testing networks." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/12006.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (p. 495-502).
by John W. Wissinger.
Ph.D.
2

Durazo-Arvizu, Ramon Angel. "Bias-adjusted estimates of survival following group sequential hypothesis testing." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186830.

Abstract:
It is known that repeated use of fixed-sample hypothesis testing inflates the type I error. Since in a group sequential design a study is stopped preferentially when extreme data have been observed, we expect the usual estimators to be biased toward the extremes. We demonstrate through simulations that the fixed-sample survival estimators are biased if computed after stopping a group sequential trial in which stopping is based on survival statistics. The problem is first studied with uncensored data and then in the more realistic setting of censoring. In particular, we studied the two-sample problem, that is, the estimation of survival following repeated testing of the equality of two survival distributions. We first investigated some potential measures of bias. Bias-adjusted estimators of survival of the two distributions were first suggested based on the Whitehead (1986, (33), (34)) bias-adjusted estimator. This was done for the proportional hazards model. We found that this approach is not well behaved for the unconditional measures of bias, whereas it behaves well for the conditional measures, provided that the power of the test to detect the true value of the parameter is not too large (< 94%). However, for large powers, it has a tendency to overestimate the absolute difference between the two distributions. A generalized Whitehead bias-adjusted semiparametric estimator was suggested. This is applicable not only to the proportional hazards model but also to other survival data models. The conditional semiparametric estimator was found to behave well for all the data models and powers considered. The unconditional semiparametric estimator produced adjustments of survival in the right direction for the proportional hazards model, but not for the others. It was also seen to be applicable to the two main group sequential designs, namely the Pocock and O'Brien-Fleming designs. Finally, the findings were applied to a leukemia clinical trial.
The control treatment (Daunorubicin) was compared to the new treatment (Idarubicin) based on a 0.05-level O'Brien-Fleming group sequential design with a maximum of 4 analyses and equal number of observations per group.
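The stopping-bias phenomenon this abstract describes is easy to reproduce in a toy simulation (a hedged sketch, not the dissertation's survival estimators: a normal mean with a single interim look, and all thresholds are illustrative). Estimates from trials that stop early at the interim boundary are pulled toward the extremes even though the true mean is zero.

```python
import random
import statistics

random.seed(0)

def one_trial(n1=50, n2=100, crit=2.0):
    """One two-look trial: stop at the interim look if |z| exceeds crit."""
    xs = [random.gauss(0.0, 1.0) for _ in range(n2)]
    m1 = statistics.fmean(xs[:n1])
    if abs(m1) * n1 ** 0.5 > crit:      # interim boundary crossed
        return m1, True                  # naive estimate at early stop
    return statistics.fmean(xs), False   # estimate at the final look

results = [one_trial() for _ in range(4000)]
early = [abs(est) for est, stopped in results if stopped]
final = [abs(est) for est, stopped in results if not stopped]

# True mean is 0, yet estimates from early-stopped trials sit far from it.
bias_early = statistics.fmean(early)
bias_final = statistics.fmean(final)
```

Conditional on crossing the boundary, the naive estimate must exceed crit/sqrt(n1) ≈ 0.28 in absolute value, so the bias appears exactly in the trials that stop early, which is the situation the bias-adjusted estimators above are designed to correct.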
3

Wang, Yan. "Asymptotic theory for decentralized sequential hypothesis testing problems and sequential minimum energy design algorithm." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41082.

Abstract:
The dissertation investigates the asymptotic theory of decentralized sequential hypothesis testing problems as well as the asymptotic behavior of the Sequential Minimum Energy Design (SMED). The main results are summarized as follows. 1. We develop the first-order asymptotic optimality theory for decentralized sequential multi-hypothesis testing under a Bayes framework. Asymptotically optimal tests are obtained from the class of "two-stage" procedures, and the optimal local quantizers are shown to be the "maximin" quantizers, characterized as a randomization of at most M-1 Unambiguous Likelihood Quantizers (ULQ) when testing M >= 2 hypotheses. 2. We generalize the classical Kullback-Leibler inequality to investigate the effects of quantization on the second-order and other general-order moments of log-likelihood ratios. It is shown that quantization may increase these quantities, but such an increase is bounded by a universal constant that depends on the order of the moment. This result provides a simpler sufficient condition for the asymptotic theory of decentralized sequential detection. 3. We propose a class of multi-stage tests for decentralized sequential multi-hypothesis testing problems and show that, with suitably chosen thresholds at the different stages, they attain second-order asymptotic optimality when the hypothesis testing problem is "asymmetric." 4. We characterize the asymptotic behavior of the SMED algorithm, particularly the denseness and distributions of the design points. In addition, we propose a simplified version of SMED that is computationally more efficient.
4

Escamilla, Pierre. "On cooperative and concurrent detection in distributed hypothesis testing." Electronic Thesis or Diss., Institut polytechnique de Paris, 2019. http://www.theses.fr/2019IPPAT007.

Abstract:
Statistical inference plays a major role in the development of new technologies and inspires a large number of algorithms dedicated to detection, identification and estimation tasks. However, there is no theoretical guarantee for the performance of these algorithms. In this thesis we try to understand how sensors can best share their information in a network with communication constraints in order to detect the same or distinct events. We investigate different aspects of detector cooperation and how conflicting needs can best be met in the case of detection tasks. More specifically, we study a hypothesis testing problem where each detector must maximize the decay exponent of the Type II error under a given Type I error constraint. As the detectors are interested in different information, a trade-off between the achievable Type II error decay exponents appears. Our goal is to characterize the region of possible trade-offs between Type II error decay exponents. In massive sensor networks, the amount of information is often limited due to energy consumption and network saturation risks. We therefore study the zero-rate compression regime (i.e., the message size grows sub-linearly with the number of observations). In this case, we fully characterize the region of Type II error decay exponents in configurations where the detectors may or may not share the same purpose. We also study the case of a network with positive compression rates (i.e., the message size grows linearly with the number of observations). In this case, we present subparts of the region of Type II error decay exponents. Finally, in the case of a single-sensor single-detector scenario with a positive compression rate, we propose a complete characterization of the optimal Type II error decay exponent for a family of Gaussian hypothesis testing problems.
5

Krantz, Elizabeth. "Sharpening the Boundaries of the Sequential Probability Ratio Test." TopSCHOLAR®, 2012. http://digitalcommons.wku.edu/theses/1169.

Abstract:
In this thesis, we present an introduction to Wald's Sequential Probability Ratio Test (SPRT) for binary outcomes. Previous researchers have investigated ways to modify the stopping boundaries so as to reduce the expected sample size of the test. In this research, we investigate ways to further improve these boundaries. For a given maximum allowable sample size, we develop a method intended to generate all possible sets of boundaries. We then find the one set of boundaries that minimizes the maximum expected sample size while still preserving the nominal error rates. Once these boundaries have been constructed, we present the results of simulation studies conducted on them as a means of analyzing both the expected number of observations and the amount of variability in the sample size required to reach a decision in the test.
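For reference, the classical SPRT for binary outcomes whose boundaries this thesis sharpens can be sketched in a few lines. This is a minimal illustration using Wald's approximate straight-line boundaries, not the modified boundaries developed in the thesis:

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on a Bernoulli stream."""
    lower = math.log(beta / (1 - alpha))   # accept-H0 boundary
    upper = math.log((1 - beta) / alpha)   # accept-H1 boundary
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # add the log-likelihood ratio of this Bernoulli observation
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "no decision", len(samples)

# A run of successes crosses the upper boundary after only 9 observations.
decision, n = sprt([1] * 100, p0=0.5, p1=0.7)  # -> ("accept H1", 9)
```

The thesis's program can then be read against this baseline: for a capped maximum sample size, search over boundary sets that preserve the nominal error rates while minimizing the maximum expected sample size, rather than using Wald's fixed thresholds.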
6

Pereira, Pratap 1969. "Digitizing Technique with Sequential Hypothesis Testing For Reverse Engineering Using Coordinate Measuring Machines." The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487934589976702.

7

Ramdas, Aaditya Kumar. "Computational and Statistical Advances in Testing and Learning." Research Showcase @ CMU, 2015. http://repository.cmu.edu/dissertations/790.

Abstract:
This thesis makes fundamental computational and statistical advances in testing and estimation, making critical progress in theory and application of classical statistical methods like classification, regression and hypothesis testing, and understanding the relationships between them. Our work connects multiple fields in often counter-intuitive and surprising ways, leading to new theory, new algorithms, and new insights, and ultimately to a cross-fertilization of varied fields like optimization, statistics and machine learning. The first of three thrusts has to do with active learning, a form of sequential learning from feedback-driven queries that often has a provable statistical advantage over passive learning. We unify concepts from two seemingly different areas—active learning and stochastic first-order optimization. We use this unified view to develop new lower bounds for stochastic optimization using tools from active learning and new algorithms for active learning using ideas from optimization. We also study the effect of feature noise, or errors-in-variables, on the ability to actively learn. The second thrust deals with the development and analysis of new convex optimization algorithms for classification and regression problems. We provide geometrical and convex analytical insights into the role of the margin in margin-based classification, and develop new greedy primal-dual algorithms for non-linear classification. We also develop a unified proof for convergence rates of randomized algorithms for the ordinary least squares and ridge regression problems in a variety of settings, with the purpose of investigating which algorithm should be utilized in different settings. Lastly, we develop fast state-of-the-art numerically stable algorithms for an important univariate regression problem called trend filtering with a wide variety of practical extensions.
The last thrust involves a series of practical and theoretical advances in nonparametric hypothesis testing. We show that a smoothed Wasserstein distance allows us to connect many vast families of univariate and multivariate two-sample tests. We clearly demonstrate the decreasing power of the families of kernel-based and distance-based two-sample tests and independence tests with increasing dimensionality, challenging existing folklore that they work well in high dimensions. Surprisingly, we show that these tests are automatically adaptive to simple alternatives and achieve the same power as other direct tests for detecting mean differences. We discover a computation-statistics tradeoff, where computationally more expensive two-sample tests have a provable statistical advantage over cheaper tests. We also demonstrate the practical advantage of using Stein shrinkage for kernel independence testing at small sample sizes. Lastly, we develop a novel algorithmic scheme for performing sequential multivariate nonparametric hypothesis testing using the martingale law of the iterated logarithm to near-optimally control both type-1 and type-2 errors. One perspective connecting everything in this thesis involves the closely related and fundamental problems of linear regression and classification. Every contribution in this thesis, from active learning to optimization algorithms, to the role of the margin, to nonparametric testing, fits into this picture. An underlying theme that repeats itself in this thesis is the computational and/or statistical advantages of sequential schemes with feedback. This arises in our work through comparing active with passive learning, through iterative algorithms for solving linear systems instead of direct matrix inversions, and through comparing the power of sequential and batch hypothesis tests.
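A concrete instance of the kernel-based two-sample tests discussed in this abstract is the (biased) squared Maximum Mean Discrepancy with a Gaussian kernel, sketched below for univariate data. This is an illustrative sketch only; the thesis's dimensionality and power analyses go far beyond it, and the bandwidth and sample sizes here are arbitrary.

```python
import math
import random

def gauss_k(x, y, sigma=1.0):
    """Gaussian kernel on the real line."""
    return math.exp(-(x - y) ** 2 / (2.0 * sigma ** 2))

def mmd2(xs, ys, sigma=1.0):
    """Biased squared-MMD estimate between two univariate samples."""
    kxx = sum(gauss_k(a, b, sigma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(gauss_k(a, b, sigma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(gauss_k(a, b, sigma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2.0 * kxy

random.seed(0)
# Same distribution: statistic near zero.  Shifted mean: statistic clearly larger.
same = mmd2([random.gauss(0, 1) for _ in range(200)],
            [random.gauss(0, 1) for _ in range(200)])
diff = mmd2([random.gauss(0, 1) for _ in range(200)],
            [random.gauss(1, 1) for _ in range(200)])
```

In practice the null threshold is calibrated by permutation; the thesis's point is precisely how the power of such statistics behaves as dimension grows and how sequential variants can improve on batch ones.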
8

Atta-Asiamah, Ernest. "Distributed Inference for Degenerate U-Statistics with Application to One and Two Sample Test." Diss., North Dakota State University, 2020. https://hdl.handle.net/10365/31777.

Abstract:
In many hypothesis testing problems, such as one-sample and two-sample test problems, the test statistics are degenerate U-statistics. One of the challenges in practice is the computation of U-statistics for a large sample size. Besides, for degenerate U-statistics, the limiting distribution is a mixture of weighted chi-squares involving the eigenvalues of the kernel of the U-statistics. As a result, it is not straightforward to construct the rejection region based on this asymptotic distribution. In this research, we aim to reduce the computational complexity of degenerate U-statistics and propose an easy-to-calibrate test statistic by using the divide-and-conquer method. Specifically, we randomly partition the full n data points into k_n disjoint groups of equal size, compute the U-statistic on each group, and combine them by averaging to get a statistic T_n. We prove that the statistic T_n has the standard normal distribution as its limiting distribution. In this way, the running time is reduced from O(n^m) to O(n^m / k_n^m), where m is the order of the one-sample U-statistic. Besides, for a given significance level, it is easy to construct the rejection region. We apply our method to the goodness-of-fit test and the two-sample test. The simulation and real data analysis show that the proposed test can achieve high power and fast running time for both one- and two-sample tests.
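The divide-and-conquer construction described above can be sketched directly. This is a toy illustration with a hypothetical order-2 product kernel h(x, y) = xy, which is degenerate when E[X] = 0; the group count k stands in for k_n, and none of the names come from the dissertation.

```python
import random

def u_stat(xs):
    """Order-2 one-sample U-statistic with product kernel h(x, y) = x*y."""
    n = len(xs)
    s = sum(xs)
    ss = sum(x * x for x in xs)
    # average of x_i * x_j over all pairs i != j
    return (s * s - ss) / (n * (n - 1))

def dac_u_stat(xs, k):
    """Divide-and-conquer: average the U-statistic over k equal groups."""
    m = len(xs) // k
    return sum(u_stat(xs[i * m:(i + 1) * m]) for i in range(k)) / k

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]
full = u_stat(data)          # O(n) here thanks to the product kernel,
dac = dac_u_stat(data, k=10)  # but O((n/k)^m) per group for a general kernel
```

For a general order-m kernel each group costs O((n/k)^m), so the per-group work drops sharply with k, and averaging the group statistics recovers a statistic whose null distribution is easy to calibrate, which is the point of the abstract's T_n.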
9

Tout, Karim. "Automatic Vision System for Surface Inspection and Monitoring : Application to Wheel Inspection." Thesis, Troyes, 2018. http://www.theses.fr/2018TROY0008.

Abstract:
Visual inspection of finished products has always been one of the basic and most recognized applications of quality control in any industry. This inspection remains largely a manual process conducted by operators, and thus faces considerable limitations that make it unreliable. It is therefore necessary to automate this inspection for better efficiency. The main goal of this thesis is to design an automatic visual inspection system for surface inspection and monitoring. The specific application of wheel inspection is considered to study the design and installation setup of the imaging system. Then, two inspection methods are developed: a defect detection method for the product's surface and a change-point detection method for the parameters of the non-stationary inspection process. Because in an industrial context it is necessary to control the false alarm rate, the two proposed methods are cast into the framework of hypothesis testing theory. A parametric approach is proposed to model the non-anomalous part of the observations. The model parameters are estimated in order to design a statistical test whose performance is analytically known. Finally, the impact of illumination degradation on defect detection performance is studied in order to predict the maintenance needs of the imaging system. Numerical results on a large set of real images highlight the relevance of the proposed approach.
10

Kang, Shin-jae. "Korea's export performance : three empirical essays." Diss., Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/767.

11

Cheifetz, Nicolas. "Détection et classification de signatures temporelles CAN pour l’aide à la maintenance de sous-systèmes d’un véhicule de transport collectif." Thesis, Paris Est, 2013. http://www.theses.fr/2013PEST1077/document.

Abstract:
This thesis is mainly dedicated to the fault detection step of an industrial diagnosis process. The work is motivated by the monitoring of two complex subsystems of a transit bus which impact the availability of the vehicles and their maintenance costs: the brake and door systems. This thesis describes several tools that monitor the operation of these systems. We choose a pattern recognition approach based on the analysis of data collected from a new IT architecture on board the buses. The proposed methods detect a structural change in a data stream sequentially, and take advantage of prior knowledge of the monitored systems. The detector applied to the brakes is based on the output variables (related to the brake system) of a physical dynamic model of the vehicle, which is experimentally validated in this work. The detection step is then performed by multivariate control charts on multidimensional data. The detection strategy dedicated to the doors deals directly with data collected by embedded sensors during opening and closing cycles, with no need for a physical model. We propose a sequential testing approach using a generative model to describe the functional data. This regression model segments multidimensional curves into several regimes. The model parameters are estimated via a specific EM algorithm in a semi-supervised mode. The results obtained from simulated and real data highlight the effectiveness of the proposed methods for both the brakes and the doors.
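The multivariate control-chart step mentioned above can be illustrated with a Hotelling T²-style monitor for 2-D observations with known in-control parameters. This is a generic sketch, not the thesis's charts; the 10.597 limit is the chi-square(2) upper 0.5% point commonly used for such charts, and the data are invented.

```python
def t2(x, mean, cov):
    """Hotelling T^2 statistic for one 2-D observation, known parameters."""
    a, b = x[0] - mean[0], x[1] - mean[1]
    (s11, s12), (_, s22) = cov
    det = s11 * s22 - s12 * s12
    # quadratic form (x - mu)' Sigma^{-1} (x - mu) via the explicit 2x2 inverse
    return (s22 * a * a - 2.0 * s12 * a * b + s11 * b * b) / det

LIMIT = 10.597  # chi-square(2) upper 0.5% quantile, a common chart limit

def monitor(stream, mean, cov):
    """Return the index of the first out-of-control signal, else None."""
    for i, x in enumerate(stream):
        if t2(x, mean, cov) > LIMIT:
            return i
    return None

mean, cov = (0.0, 0.0), ((1.0, 0.0), (0.0, 1.0))
signal = monitor([(0.1, -0.2), (1.0, 1.0), (4.0, 0.5)], mean, cov)  # flags index 2
```

Controlling the chart limit is exactly the false-alarm-rate control the abstract emphasizes: under the in-control model, the T² statistic follows a chi-square distribution, so the limit fixes the per-observation false alarm probability.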
12

Sun, Lan. "Essays on two-player games with asymmetric information." Thesis, Paris 1, 2016. http://www.theses.fr/2016PA01E056/document.

Abstract:
This thesis contributes to the economic theory literature in three aspects: price dynamics in financial markets with asymmetric information, belief updating and equilibrium refinements in signaling games, and the introduction of ambiguity into limit pricing theory. In chapter 2, we formulate a zero-sum trading game between a better informed sector and a less informed sector to endogenously determine the underlying price dynamics. In this model, player 1 is informed of the state (L) but is uncertain about player 2's belief about the state, because player 2 is informed through some message (M) related to the state. If L and M are independent, then the price process will be a Continuous Martingale of Maximal Variation (CMMV), and player 1 can benefit from his informational advantage. However, if L and M are not independent, player 1 will not reveal his information during the trading process and therefore does not benefit from his informational advantage. In chapter 3, I propose a definition of Hypothesis Testing Equilibrium (HTE) for general signaling games with non-Bayesian players endowed with an updating rule according to the Hypothesis Testing model characterized by Ortoleva (2012). An HTE may differ from a sequential Nash equilibrium because of dynamic inconsistency. However, in the case in which player 2 only treats a zero-probability message as unexpected news, an HTE is a refinement of sequential Nash equilibrium and survives the Intuitive Criterion in general signaling games, but not vice versa. We provide an existence theorem covering a broad class of signaling games often studied in economics. In chapter 4, I introduce ambiguity into a standard industrial organization model, in which the established firm is either informed of the true state of aggregate demand or is under classical measurable uncertainty about the state, while the potential entrant is under Knightian uncertainty (ambiguity) about the state. I characterize the conditions under which limit pricing emerges in equilibria, and thus ambiguity decreases the probability of entry. Welfare analysis shows that limit pricing is more harmful in a market with higher expected demand than in a market with lower expected demand.
13

Jithin, K. S. "Spectrum Sensing in Cognitive Radios using Distributed Sequential Detection." Thesis, 2013. http://hdl.handle.net/2005/3278.

Abstract:
Cognitive Radios are emerging communication systems which efficiently utilize the unused licensed radio spectrum, called spectral holes. They run spectrum sensing algorithms to identify these spectral holes. The holes need to be identified at very low SNR (<= -20 dB) under multipath fading and unknown channel gains and noise power. Cooperative spectrum sensing, which exploits spatial diversity, has been found to be particularly effective in this rather daunting endeavor. However, despite many recent studies, several open issues need to be addressed for such algorithms. In this thesis we provide some novel cooperative distributed algorithms and study their performance. We develop an energy-efficient detector with low detection delay using decentralized sequential hypothesis testing. Our algorithm employs an asynchronous transmission scheme at the Cognitive Radios which takes into account the noise at the fusion center. We have developed a distributed algorithm, DualSPRT, in which Cognitive Radios (secondary users) sequentially collect observations, make local decisions and send them to the fusion center. The fusion center sequentially processes these received local decisions, corrupted by Gaussian noise, to arrive at a final decision. Asymptotically, this algorithm is shown to achieve the performance of the optimal centralized test, which does not consider fusion center noise. We also theoretically analyze its probability of error and average detection delay. Even though DualSPRT performs well asymptotically, a modification at the fusion node provides more control over the design of the algorithm parameters and performs better at the probabilities of error usual in Cognitive Radio systems. We also analyze the modified algorithm theoretically. DualSPRT requires full knowledge of the channel gains; thus we extend the algorithm to account for imperfections in the channel gain estimates.
We also consider the case where knowledge of the noise power and channel gain statistics is not available at the Cognitive Radios. This problem is framed as a universal sequential hypothesis testing problem. We use easily implementable universal lossless source codes to propose simple algorithms for such a setup, and present the asymptotic performance of these algorithms. A cooperative algorithm is also designed for this scenario. Finally, we consider decentralized multihypothesis sequential tests, which are relevant when the interest is in detecting not only the presence of primary users but also their identity among multiple primary users. Using the insight gained from the binary hypothesis case, two new algorithms are proposed.
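The DualSPRT pipeline this abstract describes can be caricatured in a few lines: each sensor runs a local SPRT and, once decided, repeatedly transmits +1 or -1, while the fusion center accumulates the (possibly noisy) superposed transmissions and stops at a threshold. This is a toy sketch under assumed Gaussian observations, N(0.5, 1) vs N(0, 1); the observation source `obs`, the thresholds, and the noise model are all illustrative, not the thesis's exact algorithm.

```python
import random

def dualsprt(obs, n_sensors=5, a=2.0, b=20.0, noise=0.0, seed=0):
    """Toy DualSPRT-style run: local SPRTs feed a fusion-center threshold test."""
    rng = random.Random(seed)
    llr = [0.0] * n_sensors      # local log-likelihood ratios
    decided = [0] * n_sensors    # 0 = undecided, otherwise +1 / -1
    fusion = 0.0
    for t in range(1, 10_000):
        for i in range(n_sensors):
            if decided[i] == 0:
                x = obs(rng)
                llr[i] += 0.5 * x - 0.125   # LLR step for N(0.5,1) vs N(0,1)
                if llr[i] >= a:
                    decided[i] = 1           # local decision: H1
                elif llr[i] <= -a:
                    decided[i] = -1          # local decision: H0
        # fusion center sees the superposed decisions plus Gaussian noise
        fusion += sum(decided) + rng.gauss(0.0, noise)
        if fusion >= b:
            return "H1", t
        if fusion <= -b:
            return "H0", t
    return "undecided", t

# Gaussian observations under H1, with fusion-center noise:
decision, t = dualsprt(lambda rng: rng.gauss(0.5, 1.0), noise=0.5)
```

The separation of local thresholds (a) from the fusion threshold (b) mirrors the abstract's point: the fusion center must integrate noisy local decisions over time rather than trust any single transmission.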
APA, Harvard, Vancouver, ISO, and other styles
14

Li, Shang. "Cooperative Sequential Hypothesis Testing in Multi-Agent Systems." Thesis, 2017. https://doi.org/10.7916/D8CJ8RV9.

Full text
Abstract:
Since the sequential inference framework determines the total number of samples in real time based on the history of the data, it yields quicker decisions than its fixed-sample-size counterpart, provided an appropriate early termination rule. This advantage is particularly appealing in systems where data is acquired in sequence and both decision accuracy and latency are of primary interest. Meanwhile, the Internet of Things (IoT) technology has created all types of connected devices, which can potentially enhance inference performance by providing information diversity. For instance, a smart home network deploys multiple sensors to perform climate control, security surveillance, and personal assistance. Therefore, it has become highly desirable to pursue solutions that can efficiently integrate the classic sequential inference methodologies into networked multi-agent systems. In brief, this thesis investigates the sequential hypothesis testing problem in multi-agent networks, aiming to overcome the constraints of communication bandwidth, energy capacity, and network topology so that the networked system can perform sequential tests cooperatively to its full potential. Multi-agent networks are generally categorized into two main types. The first features a hierarchical structure, where the agents transmit messages based on their observations to a fusion center that performs the data fusion and sequential inference on behalf of the network. One such example is the network formed by wearable devices connected to a smartphone. The central challenges in the hierarchical network arise from the instantaneous transmission of the distributed data to the fusion center, which is constrained by battery capacity and communication bandwidth in practice. Therefore, the first part of this thesis is dedicated to addressing these two constraints for the hierarchical network.
Specifically, aiming to preserve agent energy, Chapter 2 devises the optimal sequential test that selects the "most informative" agent online at each sampling step while leaving the others idle. To overcome the communication bottleneck, Chapter 3 proposes a scheme that allows distributed agents to send only one-bit messages asynchronously to the fusion center without compromising performance. In contrast, the second type of network does not assume the presence of a fusion center, and each agent performs the sequential test based on its own samples together with the messages shared by its neighbours. The communication links can be represented by an undirected graph. A variety of applications conform to such a distributed structure, for instance, social networks that connect individuals through online friendship and the vehicular network formed by connected cars. However, the distributed network is prone to sub-optimal performance since each agent can only access information from its local neighbourhood. Hence the second part of this thesis focuses on optimizing the distributed performance through local message exchanges. In Chapter 4, we put forward a distributed sequential test based on a consensus algorithm, where agents exchange and aggregate real-valued local statistics with neighbours at every sampling step. In order to further lower the communication overhead, Chapter 5 develops a distributed sequential test that only requires the exchange of quantized messages (i.e., integers) between agents. The cluster-based network, which is a hybrid of the hierarchical and distributed networks, is also investigated in Chapter 5.
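The consensus-style test described for Chapter 4, where agents add local log-likelihood-ratio increments and average statistics with neighbours each step, can be sketched as follows. This is an illustrative simplification, not the thesis's exact algorithm; the weight matrix, thresholds, and stopping rule are assumptions:

```python
import numpy as np

def consensus_sequential_test(llr_increments, W, upper, lower):
    """Consensus-based distributed sequential test (illustrative sketch).

    llr_increments: T x N array; row t holds each of the N agents'
        local log-likelihood-ratio increments at step t.
    W: N x N doubly stochastic weight matrix for the neighbour graph.
    Each agent adds its new increment, then averages with neighbours;
    the test stops as soon as any agent's statistic exits (lower, upper).
    """
    T, N = llr_increments.shape
    stats = np.zeros(N)
    for t in range(T):
        stats = W @ (stats + llr_increments[t])   # exchange + aggregate
        if np.any(stats >= upper):
            return +1, t + 1                      # decide H1
        if np.any(stats <= lower):
            return -1, t + 1                      # decide H0
    return 0, T                                   # undecided
```

The averaging step spreads each agent's evidence through the graph, so an agent effectively tests on more information than its own samples provide.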
APA, Harvard, Vancouver, ISO, and other styles
15

Chou, Sheng-Hsien, and 周聖晛. "Application of Sequential Multi-Hypothesis Tests to Computerized Testing." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/27426391643072636704.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Lee, Sung-Yen, and 李松晏. "Adaptive Sequential Hypothesis Testing for Accurate Detection of Scanning Worms." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/00995973763617731301.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Communication Engineering
97
Early detection techniques for scanning worms are based on simple observations of high port/address scanning rates of malicious hosts. Such approaches are not able to detect stealthy scanners and can be easily evaded once the threshold of scanning rate for generating alerts is known to the attackers. To overcome this problem, sequential hypothesis testing was developed as an alternative detection technique. Techniques based on sequential hypothesis testing can detect scanning worms faster than those based on scanning rates, in the sense that they need fewer observations of the outcomes of connection attempts. However, the performance of detection based on sequential hypothesis testing is sensitive to the probabilities of success for the first-contact connection attempts sent by benign and malicious hosts. The false positive and false negative probabilities could be much larger than the desired values if these probabilities are not known. In this paper, we present a simple adaptive algorithm which provides accurate estimates of these probabilities. Numerical results show that the proposed adaptive estimation algorithm is an important enhancement of sequential hypothesis testing because it makes the technique robust for detection of scanning worms.
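The sequential test underlying this line of work updates a log-likelihood ratio with each first-contact connection outcome, in the style of threshold-random-walk scan detection. A hedged sketch; the success probabilities and error targets below are illustrative values, not from the thesis:

```python
import math

def scan_detector(outcomes, p_benign=0.8, p_scanner=0.2,
                  alpha=0.01, beta=0.01):
    """Sequential hypothesis test on first-contact connection outcomes.

    outcomes: iterable of booleans (True = connection succeeded).
    p_benign / p_scanner: assumed success probabilities under the
    benign and scanner hypotheses (illustrative values).
    Returns 'scanner', 'benign', or 'undecided'.
    """
    upper = math.log((1 - beta) / alpha)    # cross it: declare scanner
    lower = math.log(beta / (1 - alpha))    # cross it: declare benign
    llr = 0.0
    for success in outcomes:
        if success:
            llr += math.log(p_scanner / p_benign)
        else:
            llr += math.log((1 - p_scanner) / (1 - p_benign))
        if llr >= upper:
            return 'scanner'
        if llr <= lower:
            return 'benign'
    return 'undecided'
```

The thesis's point is that when `p_benign` and `p_scanner` are wrong, the realized error rates can be far from `alpha` and `beta`, motivating adaptive estimation of these probabilities.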
APA, Harvard, Vancouver, ISO, and other styles
17

Lin, Jian-Cheng, and 林建成. "Adaptive Sequential Hypothesis Testing for Fast Detection of Port/Address Scan." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/51125404519318488596.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Communication Engineering
95
As more and more network applications and services are provided, network security becomes more and more important. The behavioral anomaly of port/address scans is a way to intrude on hosts on the Internet. Early detection techniques for port/address scans are based on the observation that malicious hosts send scans at high scanning rates, but such approaches are not suitable for detecting scanners with lower scanning rates. Once the threshold of scanning rate for generating alerts is known to the attackers, the detection can be easily evaded. To overcome these problems, sequential hypothesis testing offers an alternative detection technique. Based on the probabilities of success for the first-contact connection attempts sent by the hosts, sequential hypothesis testing can classify the senders as benign or malicious. If these probabilities are unknown, the false positive and false negative rates could be much larger than the desired values. In this thesis, we compare several techniques based on sequential hypothesis testing and find that these techniques are inadequate for a real network. Therefore, we propose a simple adaptive algorithm which provides accurate estimation of these probabilities. Simulation results show that the proposed adaptive estimation algorithm provides a great improvement for sequential hypothesis testing.
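The adaptive element, estimating the unknown success probabilities online so they can be plugged into the sequential test, can be illustrated with a generic exponentially weighted estimator. This is a sketch under assumed smoothing and clamping parameters, not the thesis's algorithm:

```python
class AdaptiveRateEstimator:
    """Exponentially weighted running estimate of the success
    probability of first-contact connection attempts.

    The smoothing factor and the clamping bounds are illustrative
    choices; clamping keeps the estimate away from 0 and 1 so that
    log-likelihood-ratio increments stay finite.
    """
    def __init__(self, initial=0.5, weight=0.1,
                 floor=0.05, ceiling=0.95):
        self.estimate = initial
        self.weight = weight
        self.floor, self.ceiling = floor, ceiling

    def update(self, success):
        obs = 1.0 if success else 0.0
        # move the estimate a fraction of the way toward the new outcome
        self.estimate += self.weight * (obs - self.estimate)
        self.estimate = min(self.ceiling, max(self.floor, self.estimate))
        return self.estimate
```

A detector would keep one such estimator per host class and feed the current estimates into the likelihood-ratio test as its assumed success probabilities.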
APA, Harvard, Vancouver, ISO, and other styles
18

"A distributed hypothesis-testing team decision problem with communications cost." Laboratory for Information and Decision Systems, Massachusetts Institute of Technology], 1986. http://hdl.handle.net/1721.1/2919.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

"On optimal distributed decision architectures in a hypothesis testing environment." Massachusetts Institute of Technology, Laboratory for Information and Decision Systems], 1990. http://hdl.handle.net/1721.1/3167.

Full text
Abstract:
Jason D. Papastavrou and Michael Athans.
Cover title.
Includes bibliographical references (p. 35-37).
Research supported by the National Science Foundation. NSF/IRI-8902755 Research supported by the Office of Naval Research. ONR/N00014-84-K-0519
APA, Harvard, Vancouver, ISO, and other styles
20

(7491243), Wenyu Wang. "Sequential Procedures for the "Selection" Problems in Discrete Simulation Optimization." Thesis, 2019.

Find full text
Abstract:
Simulation optimization problems are nonlinear optimization problems whose objective function can be evaluated through stochastic simulation. We study two significant discrete simulation optimization problems in this thesis: Ranking and Selection (R&S) and Factor Screening (FS). Both R&S and FS are "selection" problems defined on a finite set of candidate systems or factors. They vary mainly in their objectives: the R&S problem is to find the "best" system(s) among all alternatives, whereas FS is to select the factors that are critical to the stochastic systems.

In this thesis, we develop efficient sequential procedures for these two problems. For the R&S problem, we propose fully-sequential procedures for selecting the "best" systems with a guaranteed probability of correct selection (PCS). The main features of the stated methods are: (1) a Bonferroni-free model, these procedures overcome the conservativeness of the Bonferroni correction and deliver the exact probabilistic guarantee without overshooting; (2) asymptotic optimality, these procedures achieve the lower bound of average sample size asymptotically; (3) an indifference-zone-flexible formulation, these procedures bridge the gap between the indifference-zone formulation and the indifference-zone-free formulation so that the indifference-zone parameter is not indispensable but could be helpful if provided. We establish the validity and asymptotic efficiency of the proposed procedures and conduct numerical studies to investigate their performance under multiple configurations.

We also consider the multi-objective R&S (MOR&S) problem. To the best of our knowledge, the procedures proposed here are the first frequentist approach to MOR&S. These procedures identify the Pareto front with a guaranteed probability of correct selection (PCS). In particular, these procedures are fully sequential, using test statistics built upon the Generalized Sequential Probability Ratio Test (GSPRT). The main features are: (1) an objective-dimension-free model, the performance of these procedures does not deteriorate as the number of objectives increases, and they achieve the same efficiency as KN-family procedures for the single-objective ranking-and-selection problem; (2) an indifference-zone-flexible formulation, the new methods eliminate the necessity of the indifference-zone parameter while making use of the indifference-zone information if provided. A numerical evaluation demonstrates the validity and efficiency of the new procedures.

For the FS problem, our objective is to identify important factors for simulation experiments with controlled family-wise error rate. We assume a multi-objective first-order linear model where the responses follow a multivariate normal distribution. We offer three fully-sequential procedures: Sum Intersection Procedure (SUMIP), Sort Intersection Procedure (SORTIP), and Mixed Intersection Procedure (MIP). SUMIP uses the Bonferroni correction to adjust for multiple comparisons; SORTIP uses the Holm procedure to overcome the conservativeness of the Bonferroni method; and MIP combines both SUMIP and SORTIP to work efficiently in a parallel computing environment. Numerical studies are provided to demonstrate validity and efficiency, and a case study is presented.
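As a toy illustration of the fully-sequential flavor of such R&S procedures, the elimination-style sketch below drops systems whose running sum trails the current leader. The fixed gap threshold is an assumption for illustration only; real procedures scale it with variance and step count, and this sketch does not deliver the PCS guarantees developed in the thesis:

```python
def sequential_selection(streams, gap=5.0, max_steps=10_000):
    """Toy fully-sequential ranking-and-selection by elimination.

    streams: list of iterators, one per system, yielding noisy
    performance observations (larger is better).
    A system is eliminated once its running sum trails the current
    best running sum by more than `gap`.
    Returns the index of the surviving (selected) system.
    """
    alive = set(range(len(streams)))
    sums = [0.0] * len(streams)
    for _ in range(max_steps):
        if len(alive) == 1:
            break
        for i in alive:
            sums[i] += next(streams[i])        # one more observation each
        best = max(sums[i] for i in alive)
        alive = {i for i in alive if best - sums[i] <= gap}
    return max(alive, key=lambda i: sums[i])
```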
APA, Harvard, Vancouver, ISO, and other styles
21

(9154928), Aritra Mitra. "New Approaches to Distributed State Estimation, Inference and Learning with Extensions to Byzantine-Resilience." Thesis, 2020.

Find full text
Abstract:
In this thesis, we focus on the problem of estimating an unknown quantity of interest, when the information required to do so is dispersed over a network of agents. In particular, each agent in the network receives sequential observations generated by the unknown quantity, and the collective goal of the network is to eventually learn this quantity by means of appropriately crafted information diffusion rules. The abstraction described above can be used to model a variety of problems ranging from environmental monitoring of a dynamical process using autonomous robot teams, to statistical inference using a network of processors, to social learning in groups of individuals. The limited information content of each agent, coupled with dynamically changing networks, the possibility of adversarial attacks, and constraints imposed by the communication channels, introduce various unique challenges in addressing such problems. We contribute towards systematically resolving some of these challenges.

In the first part of this thesis, we focus on tracking the state of a dynamical process, and develop a distributed observer for the most general class of LTI systems, linear measurement models, and time-invariant graphs. To do so, we introduce the notion of a multi-sensor observable decomposition - a generalization of the Kalman observable canonical decomposition for a single sensor. We then consider a scenario where certain agents in the network are compromised based on the classical Byzantine adversary model. For this worst-case adversarial setting, we identify certain fundamental necessary conditions that are a blend of system- and network-theoretic requirements. We then develop an attack-resilient, provably-correct, fully distributed state estimation algorithm. Finally, by drawing connections to the concept of age-of-information for characterizing information freshness, we show how our framework can be extended to handle a broad class of time-varying graphs. Notably, in each of the cases above, our proposed algorithms guarantee exponential convergence at any desired convergence rate.

In the second part of the thesis, we turn our attention to the problem of distributed hypothesis testing/inference, where each agent receives a stream of stochastic signals generated by an unknown static state that belongs to a finite set of hypotheses. To enable each agent to uniquely identify the true state, we develop a novel distributed learning rule that employs a min-protocol for data-aggregation, as opposed to the large body of existing techniques that rely on "belief-averaging". We establish consistency of our rule under minimal requirements on the observation model and the network structure, and prove that it guarantees exponentially fast convergence to the truth with probability 1. Most importantly, we establish that the learning rate of our algorithm is network-independent, and a strict improvement over all existing approaches. We also develop a simple variant of our learning algorithm that can account for misbehaving agents. As the final contribution of this work, we develop communication-efficient rules for distributed hypothesis testing. Specifically, we draw on ideas from event-triggered control to reduce the number of communication rounds, and employ an adaptive quantization scheme that guarantees exponentially fast learning almost surely, even when just 1 bit is used to encode each hypothesis.
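The min-based aggregation described above can be sketched as a local Bayesian update followed by an elementwise minimum over the neighbourhood, renormalized. The exact learning rule in the thesis differs; treat this as an illustrative simplification:

```python
import numpy as np

def min_rule_step(beliefs, likelihoods, neighbors):
    """One round of a min-protocol distributed learning rule (sketch).

    beliefs: N x M array, agent i's belief over M hypotheses.
    likelihoods: N x M array, likelihood of agent i's newest private
        observation under each hypothesis.
    neighbors: list of index lists; neighbors[i] includes i itself.
    """
    N, M = beliefs.shape
    local = beliefs * likelihoods
    local /= local.sum(axis=1, keepdims=True)   # local Bayes update
    new = np.empty_like(local)
    for i in range(N):
        m = local[neighbors[i]].min(axis=0)     # min over neighbourhood
        new[i] = m / m.sum()                    # renormalize
    return new
```

The minimum lets an agent discard a false hypothesis as soon as any neighbour has ruled it out, which is the intuition behind the network-independent learning rate claimed above.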
APA, Harvard, Vancouver, ISO, and other styles
22

Alizamir, Saed. "Essays on Optimal Control of Dynamic Systems with Learning." Diss., 2013. http://hdl.handle.net/10161/8066.

Full text
Abstract:

This dissertation studies the optimal control of two different dynamic systems with learning: (i) diagnostic service systems, and (ii) green incentive policy design. In both cases, analytical models have been developed to improve our understanding of the system, and managerial insights are gained on its optimal management.

We first consider a diagnostic service system in a queueing framework, where the service is in the form of sequential hypothesis testing. The agent should dynamically weigh the benefit of performing an additional test on the current task to improve the accuracy of her judgment against the incurred delay cost for the accumulated workload. We analyze the accuracy/congestion tradeoff in this setting and fully characterize the structure of the optimal policy. Further, we allow for admission control (dismissing tasks from the queue without processing) in the system, and derive its implications on the structure of the optimal policy and system's performance.

We then study Feed-in-Tariff (FIT) policies, which are incentive mechanisms by governments to promote renewable energy technologies. We focus on two key network externalities that govern the evolution of a new technology in the market over time: (i) technological learning, and (ii) social learning. By developing an intertemporal model that captures these dynamics, we investigate how lawmakers should leverage on such effects to make FIT policies more efficient. We contrast our findings against the current practice of FIT-implementing jurisdictions, and also determine how the FIT regimes should depend on specific technology and market characteristics.


Dissertation
APA, Harvard, Vancouver, ISO, and other styles
23

Vaidhiyan, Nidhin Koshy. "Neuronal Dissimilarity Indices that Predict Oddball Detection in Behaviour." Thesis, 2016. http://etd.iisc.ernet.in/handle/2005/2669.

Full text
Abstract:
Our vision is as yet unsurpassed by machines because of the sophisticated representations of objects in our brains. This representation is vastly different from the pixel-based representation used in machine storage. It is this sophisticated representation that enables us to perceive two faces as very different, i.e., far apart in the "perceptual space", even though they are close to each other in their pixel-based representations. Neuroscientists have proposed distances between the responses of neurons to images (as measured in macaque monkeys) as a quantification of the "perceptual distance" between the images; let us call these neuronal dissimilarity indices. They have also proposed behavioural experiments to quantify these perceptual distances: human subjects are asked to identify, as quickly as possible, an oddball image embedded among multiple distractor images, and the reciprocal of the search time for identifying the oddball is taken as a measure of the perceptual distance between the oddball and the distractor; let us call such estimates behavioural dissimilarity indices. In this thesis, we describe a decision-theoretic model for visual search that suggests a connection between these two notions of perceptual distance. In the first part of the thesis, we model visual search as an active sequential hypothesis testing problem. Our analysis suggests an appropriate neuronal dissimilarity index which correlates strongly with the reciprocal of search times. We also consider a number of alternative possibilities such as relative entropy (Kullback-Leibler divergence), the Chernoff entropy and the L1-distance associated with the neuronal firing rate profiles. We then come up with a means to rank the various neuronal dissimilarity indices based on how well they explain the behavioural observations. Our proposed dissimilarity index does better than the other three, followed by relative entropy, then Chernoff entropy and then L1 distance.
In the second part of the thesis, we consider a scenario where the subject has to find an oddball image, but without any prior knowledge of the oddball and distractor images. Equivalently, in the neuronal space, the task for the decision maker is to find the image that elicits firing rates different from the others. Here, the decision maker has to “learn” the underlying statistics and then make a decision on the oddball. We model this scenario as one of detecting an odd Poisson point process having a rate different from the common rate of the others. The revised model suggests a new neuronal dissimilarity index. The new dissimilarity index is also strongly correlated with the behavioural data. However, the new dissimilarity index performs worse than the dissimilarity index proposed in the first part on existing behavioural data. The degradation in performance may be attributed to the experimental setup used for the current behavioural tasks, where search tasks associated with a given image pair were sequenced one after another, thereby possibly cueing the subject about the upcoming image pair, and thus violating the assumption of this part on the lack of prior knowledge of the image pairs to the decision maker. In conclusion, the thesis provides a framework for connecting the perceptual distances in the neuronal and the behavioural spaces. Our framework can possibly be used to analyze the connection between the neuronal space and the behavioural space for various other behavioural tasks.
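One of the candidate indices compared above, relative entropy between Poisson firing-rate profiles, has a simple closed form. A sketch (the symmetrization used here is an illustrative choice, and this is one of the benchmark indices, not the thesis's proposed index):

```python
import math

def poisson_kl(l1, l2):
    """KL divergence KL(Poisson(l1) || Poisson(l2)) per unit time,
    for firing rates l1, l2 > 0."""
    return l1 * math.log(l1 / l2) + l2 - l1

def relative_entropy_index(rates_a, rates_b):
    """Relative-entropy dissimilarity between two images, computed
    from the firing rates they elicit across a neuron population
    (symmetrized by averaging the two KL directions)."""
    return sum(0.5 * (poisson_kl(a, b) + poisson_kl(b, a))
               for a, b in zip(rates_a, rates_b))
```

Under the framework above, a larger index between two images should predict faster oddball detection, i.e., a shorter search time.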
APA, Harvard, Vancouver, ISO, and other styles
24

Carland, Matthew A. "A theoretical and experimental dissociation of two models of decision‐making." Thèse, 2014. http://hdl.handle.net/1866/12038.

Full text
Abstract:
La prise de décision est un processus computationnel fondamental dans de nombreux aspects du comportement animal. Le modèle le plus souvent rencontré dans les études portant sur la prise de décision est appelé modèle de diffusion. Depuis longtemps, il explique une grande variété de données comportementales et neurophysiologiques dans ce domaine. Cependant, un autre modèle, le modèle d’urgence, explique tout aussi bien ces mêmes données et ce de façon parcimonieuse et davantage encrée sur la théorie. Dans ce travail, nous aborderons tout d’abord les origines et le développement du modèle de diffusion et nous verrons comment il a été établi en tant que cadre de travail pour l’interprétation de la plupart des données expérimentales liées à la prise de décision. Ce faisant, nous relèveront ses points forts afin de le comparer ensuite de manière objective et rigoureuse à des modèles alternatifs. Nous réexaminerons un nombre d’assomptions implicites et explicites faites par ce modèle et nous mettrons alors l’accent sur certains de ses défauts. Cette analyse servira de cadre à notre introduction et notre discussion du modèle d’urgence. Enfin, nous présenterons une expérience dont la méthodologie permet de dissocier les deux modèles, et dont les résultats illustrent les limites empiriques et théoriques du modèle de diffusion et démontrent en revanche clairement la validité du modèle d'urgence. Nous terminerons en discutant l'apport potentiel du modèle d'urgence pour l'étude de certaines pathologies cérébrales, en mettant l'accent sur de nouvelles perspectives de recherche.
Decision‐making is a computational process of fundamental importance to many aspects of animal behavior. The prevailing model in the experimental study of decision‐making is the drift‐diffusion model, which has a long history and accounts for a broad range of behavioral and neurophysiological data. However, an alternative model – called the urgency‐gating model – has been offered which can account equally well for much of the same data in a more parsimonious and theoretically‐sound manner. In what follows, we will first trace the origins and development of the DDM, as well as give a brief overview of the manner in which it has supplied an explanatory framework for a large number of behavioral and physiological studies in the domain of decision‐making. In so doing, we will attempt to build a strong and clear case for its strengths so that it can be fairly and rigorously compared to potential alternative models. We will then re‐examine a number of the implicit and explicit theoretical assumptions made by the drift‐diffusion model, as well as highlight some of its empirical shortcomings. This analysis will serve as the contextual backdrop for our introduction and discussion of the urgency‐gating model. Finally, we present a novel experiment, the methodological design of which uniquely affords a decisive empirical dissociation of the models, the results of which illustrate the empirical and theoretical shortcomings of the drift‐diffusion model and instead offer clear support for the urgency‐gating model. We finish by discussing the potential for the urgency gating model to shed light on a number of clinical disorders, highlighting a number of future directions for research.
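A textbook drift-diffusion trial, evidence integrating drift plus noise until it reaches a bound, can be sketched as follows. All parameter values here are illustrative, not fitted values from the experiment, and the urgency-gating alternative would additionally multiply the evidence by a growing urgency signal:

```python
import random

def simulate_ddm(drift, bound, dt=0.001, noise=1.0, max_t=10.0,
                 rng=None):
    """Simulate one trial of a textbook drift-diffusion model.

    Evidence x integrates drift plus Gaussian noise until it hits
    +bound (choice 1) or -bound (choice 0).
    Returns (choice, reaction_time), or (None, max_t) on timeout.
    """
    rng = rng or random.Random(0)
    x, t = 0.0, 0.0
    sqrt_dt = dt ** 0.5
    while t < max_t:
        x += drift * dt + noise * sqrt_dt * rng.gauss(0.0, 1.0)
        t += dt
        if x >= bound:
            return 1, t
        if x <= -bound:
            return 0, t
    return None, max_t
```

With a strong drift the model almost always reaches the correct bound; experiments like the one in this thesis manipulate the within-trial evidence to tease apart whether behavior reflects such integration or an urgency-gated readout.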
APA, Harvard, Vancouver, ISO, and other styles
