To see the other types of publications on this topic, follow the link: Distance de covariance.

Dissertations / Theses on the topic 'Distance de covariance'

Consult the top 19 dissertations / theses for your research on the topic 'Distance de covariance.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Youssef, Pierre. "Invertibilité restreinte, distance au cube et covariance de matrices aléatoires." PhD thesis, Université Paris-Est, 2013. http://tel.archives-ouvertes.fr/tel-00952297.

Full text
Abstract:
In this thesis, we address three themes: column subset selection in a matrix, the Banach-Mazur distance to the cube, and the estimation of the covariance of random matrices. Although the three themes seem distant, the techniques used are similar throughout the thesis. First, we generalize the restricted invertibility principle of Bourgain-Tzafriri. This result allows us to extract a "large" block of linearly independent columns inside a matrix and to estimate the smallest singular value of the extracted matrix. We then propose a deterministic algorithm to extract an almost isometric block inside a matrix, i.e. a submatrix whose singular values are close to 1. This result allows us to recover the best known result on the celebrated Kadison-Singer conjecture. Applications to the local theory of Banach spaces as well as to harmonic analysis are deduced. We give an estimate of the Banach-Mazur distance from a convex body in Rn to the n-dimensional cube. We propose a more elementary approach, based on the restricted invertibility principle, to improve and simplify the previous results on this problem. Several works have been devoted to approximating the covariance matrix of a random vector by the empirical covariance matrix. We extend this problem to a matrix setting and answer the question. Our result can be interpreted as a quantified law of large numbers for symmetric positive semidefinite random matrices. The estimate obtained applies to a large class of random matrices.
APA, Harvard, Vancouver, ISO, and other styles
2

Youssef, Pierre. "Invertibilité restreinte, distance au cube et covariance de matrices aléatoires." Thesis, Paris Est, 2013. http://www.theses.fr/2013PEST1022/document.

Full text
Abstract:
In this thesis, we address three themes: column subset selection in a matrix, the Banach-Mazur distance to the cube, and the estimation of the covariance of random matrices. Although the three themes seem distant, the techniques used are similar throughout the thesis. First, we generalize the restricted invertibility principle of Bourgain-Tzafriri. This result allows us to extract a "large" block of linearly independent columns inside a matrix and to estimate the smallest singular value of the restricted matrix. We also propose a deterministic algorithm to extract an almost isometric block inside a matrix, i.e. a submatrix whose singular values are close to 1. This result allows us to recover the best known result on the celebrated Kadison-Singer conjecture. Applications to the local theory of Banach spaces as well as to harmonic analysis are deduced. We give an estimate of the Banach-Mazur distance between a symmetric convex body in Rn and the cube of dimension n. We propose an elementary approach, based on the restricted invertibility principle, in order to improve and simplify the previous results dealing with this problem. Several studies have been devoted to approximating the covariance matrix of a random vector by its sample covariance matrix. We extend this problem to a matrix setting and answer the question. Our result can be interpreted as a quantified law of large numbers for positive semidefinite random matrices. The estimate we obtain applies to a large class of random matrices.
APA, Harvard, Vancouver, ISO, and other styles
3

Lescornel, Hélène. "Covariance estimation and study of models of deformations between distributions with the Wasserstein distance." Toulouse 3, 2014. http://www.theses.fr/2014TOU30045.

Full text
Abstract:
The first part of this thesis concerns the covariance estimation of non-stationary processes. We estimate the covariance of the process in different vector spaces of matrices. In Chapter 3, we give a model selection procedure by minimizing a penalized criterion and using concentration inequalities, and Chapter 4 presents an unbiased risk estimation method. In both cases we obtain oracle inequalities. The second part deals with the study of models of deformation between distributions. We assume that we observe a random quantity epsilon through a deformation function. The importance of the deformation is represented by a parameter theta that we aim to estimate. We present several estimation methods based on the Wasserstein distance, aligning the distributions of the observations to recover the deformation parameter. In the case of real random variables, Chapter 7 presents consistency properties for an M-estimator and its asymptotic distribution. We use Hadamard differentiability techniques to apply a functional Delta method. Chapter 8 concerns a Robbins-Monro estimator for the deformation parameter and presents convergence properties for a kernel estimator of the density of the variable epsilon obtained from the observations. The model is generalized to random variables in complete metric spaces in Chapter 9. Then, with the aim of building a goodness-of-fit test, Chapter 10 gives results on the asymptotic distribution of a test statistic.
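The alignment idea behind these estimators can be sketched for the simplest deformation family, a pure location shift Y = theta + epsilon. The deformation family, sample sizes, and optimizer below are illustrative assumptions, not the thesis's general setting:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
theta_true = 2.5
eps = rng.normal(size=2000)                # reference sample of epsilon
obs = theta_true + rng.normal(size=2000)   # observations Y = theta + epsilon

# Align the observed distribution with the reference one by minimizing
# the 1-Wasserstein distance over the candidate deformation parameter.
res = minimize_scalar(lambda t: wasserstein_distance(obs - t, eps),
                      bounds=(-10, 10), method="bounded")
print(f"estimated theta: {res.x:.3f} (true value {theta_true})")
```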
APA, Harvard, Vancouver, ISO, and other styles
4

Gunay, Melih. "Representation Of Covariance Matrices In Track Fusion Problems." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12609026/index.pdf.

Full text
Abstract:
The covariance matrix in target tracking algorithms has a critical role in multi-sensor track fusion systems. This matrix reveals the uncertainty of the state estimates that are obtained from different sensors, so many subproblems of track fusion usually utilize this matrix to get more accurate results. That is why this matrix should be exchanged between the nodes of the multi-sensor tracking system. This thesis mainly deals with the analysis of approximations of the covariance matrix that can best represent it, in order to effectively transmit it to the demanding site. The Kullback-Leibler (KL) distance is exploited to derive some of the representations for the Gaussian case. Comparison of these representations is another objective of this work; it is based on the fusion performance of the representations, and the performance is measured on a 2-radar track fusion system.
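For the Gaussian case, the KL distance between two Gaussians has a closed form, which is what makes it a convenient score for covariance representations. A minimal sketch follows; the diagonal approximation scored at the end is an illustrative choice of representation, not necessarily one from the thesis:

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """KL divergence KL(N(mu0, S0) || N(mu1, S1)) between two Gaussians."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    dm = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + dm @ S1_inv @ dm
                  - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Example: score how well a diagonal approximation represents a full matrix.
S_full = np.array([[4.0, 1.2], [1.2, 2.0]])
S_diag = np.diag(np.diag(S_full))
mu = np.zeros(2)
print(f"KL(full || diagonal approx) = {kl_gaussian(mu, S_full, mu, S_diag):.4f}")
```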
APA, Harvard, Vancouver, ISO, and other styles
5

Pierre, Cyrille. "Localisation coopérative robuste de robots mobiles par mesure d’inter-distance." Thesis, Université Clermont Auvergne (2017-2020), 2020. http://www.theses.fr/2020CLFAC045.

Full text
Abstract:
There is an increasing number of applications in mobile robotics involving several robots able to communicate with each other and navigate cooperatively. The aim of this work is to exploit the communication and the detection of nearby robots in order to achieve cooperative localization. The perception tool used here relies on ultra-wideband technology, which allows precise range measurements between two sensors. The approach we have developed focuses on the robustness and consistency of robot state estimation. It takes into account scenarios where the localization task is difficult to handle due to the limited data available. In that respect, our solution for cooperative localization by range measurements addresses two important problems: the correlation of data exchanged between robots and the non-linearity of the observation model. To solve these issues, we chose to develop a decentralized approach in which the cooperative aspect is taken into account by a specific robot observation model. In this context, an observation corresponds to a range measurement with a beacon (that is, a robot or a static object) whose position is represented by a normal distribution. After several observations of the same beacon, the correlation between the robot state and the beacon position increases. Our approach is based on the fusion method of the Split Covariance Intersection Filter in order to avoid the over-convergence induced by data correlation. In addition, the robot state estimates are modeled by Gaussian mixtures, allowing a better representation of the distributions obtained after fusing a range measurement. Our localization algorithm is also able to dynamically adjust the number of Gaussians in the mixture models and can be reduced to a simple Gaussian filter when conditions are favorable. Our cooperative localization approach is studied using basic situations that highlight important characteristics of the algorithm. The manuscript ends with the presentation of three cooperative localization scenarios involving several robots and static objects. The first two take advantage of a realistic simulator able to simulate the physics of the robots. The third is a real-world experiment using a platform for urban experimentation with vehicles. The aim of these scenarios is to show that our approach remains consistent in difficult situations.
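The Split Covariance Intersection Filter builds on the standard covariance intersection rule, which fuses two estimates whose cross-correlation is unknown without over-converging. A minimal sketch of that building block follows (the trace-minimizing weight search and the toy inputs are assumptions for illustration; the split variant further separates dependent and independent covariance parts):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    """Fuse two estimates with unknown cross-correlation (standard CI rule)."""
    def fused_cov(w):
        return np.linalg.inv(w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2))
    # Pick the weight minimizing the fused covariance trace (a common choice).
    res = minimize_scalar(lambda w: np.trace(fused_cov(w)),
                          bounds=(1e-6, 1 - 1e-6), method="bounded")
    w = res.x
    P = fused_cov(w)
    x = P @ (w * np.linalg.inv(P1) @ x1 + (1 - w) * np.linalg.inv(P2) @ x2)
    return x, P

x, P = covariance_intersection(np.array([0.0, 0.0]), np.diag([4.0, 1.0]),
                               np.array([1.0, 0.5]), np.diag([1.0, 4.0]))
print(x, "\n", P)
```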
APA, Harvard, Vancouver, ISO, and other styles
6

Kashlak, Adam B. "A concentration inequality based statistical methodology for inference on covariance matrices and operators." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/267833.

Full text
Abstract:
In the modern era of high and infinite dimensional data, classical statistical methodology is often rendered inefficient and ineffective when confronted with such big data problems as arise in genomics, medical imaging, speech analysis, and many other areas of research. Many problems manifest when the practitioner is required to take into account the covariance structure of the data during his or her analysis, which takes on the form of either a high dimensional low rank matrix or a finite dimensional representation of an infinite dimensional operator acting on some underlying function space. Thus, novel methodology is required to estimate, analyze, and make inferences concerning such covariances. In this manuscript, we propose using tools from the concentration of measure literature, a theory that arose in the latter half of the 20th century from connections between geometry, probability, and functional analysis, to construct rigorous descriptive and inferential statistical methodology for covariance matrices and operators. A variety of concentration inequalities are considered, which allow for the construction of nonasymptotic dimension-free confidence sets for the unknown matrices and operators. Given such confidence sets, a wide range of estimation and inferential procedures can be and are subsequently developed. For high dimensional data, we propose a method to search a concentration inequality based confidence set using a binary search algorithm for the estimation of large sparse covariance matrices. Both sub-Gaussian and sub-exponential concentration inequalities are considered and applied to both simulated data and to a set of gene expression data from a study of small round blue-cell tumours. For infinite dimensional data, which is also referred to as functional data, we use a celebrated result, Talagrand's concentration inequality, in the Banach space setting to construct confidence sets for covariance operators. From these confidence sets, three different inferential techniques emerge: the first is a k-sample test for equality of covariance operators; the second is a functional data classifier, which makes its decisions based on the covariance structure of the data; the third is a functional data clustering algorithm, which incorporates the concentration inequality based confidence sets into the framework of an expectation-maximization algorithm. These techniques are applied to simulated data and to speech samples from a set of spoken phoneme data. Lastly, we take a closer look at a key tool used in the construction of concentration based confidence sets: Rademacher symmetrization. The symmetrization inequality, which arises in the probability in Banach spaces literature, is shown to be connected with optimal transport theory and specifically the Wasserstein distance. This insight is used to improve the symmetrization inequality, resulting in tighter concentration bounds to be used in the construction of nonasymptotic confidence sets. A variety of other applications are considered, including tests for data symmetry and tightening inequalities in Banach spaces. An R package for inference on covariance operators is briefly discussed in an appendix chapter.
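The Rademacher symmetrization step can be sketched numerically: the symmetrization inequality bounds the expected deviation of the sample covariance by twice the expected norm of the sign-randomized sum. The toy estimate of that bound below assumes Gaussian data and is an illustration of the inequality, not the thesis's construction:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 20
X = rng.normal(size=(n, d))              # assumed centered data

# Monte Carlo estimate of E || (1/n) sum_i eps_i x_i x_i^T ||_op, the
# Rademacher-symmetrized deviation that upper-bounds E || Sigma_hat - Sigma ||.
norms = []
for _ in range(200):
    eps = rng.choice([-1.0, 1.0], size=n)
    S = (X * eps[:, None]).T @ X / n     # signed sum of rank-one terms
    norms.append(np.linalg.norm(S, ord=2))
print(f"symmetrized bound proxy: {2 * np.mean(norms):.3f}")
```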
APA, Harvard, Vancouver, ISO, and other styles
7

Gliga, Lavinius ioan. "Diagnostic d'une Turbine Eolienne à Distance à l'aide du Réseau de Capteurs sans Fil." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR063/document.

Full text
Abstract:
Direct Drive Wind Turbines (DDWTs) are equipped with Permanent Magnet Synchronous Generators (PMSGs). Their three most common failures are demagnetization, eccentricity (static, dynamic, and mixed), and inter-turn short circuits. Machine Current Signature Analysis is often used to look for generator problems, as these impairments introduce additional harmonics into the generated currents. The Fast Fourier Transform (FFT) is utilized to compute the spectrum of the currents. However, the FFT calculates the whole spectrum, while the number of possible faults and the number of introduced harmonics are low. The Goertzel algorithm, implemented as a filter (the Goertzel filter), is presented as a more efficient alternative to the FFT. The spectrum of the currents changes with the wind speed, which makes detection more difficult. The Extended Kalman Filter (EKF) is proposed as a solution. The spectrum of the residuals, computed between the estimated and the generated currents, is constant regardless of the wind speed, yet the effect of the faults remains visible in it. When using the EKF, one challenge is to determine the covariance matrix of the process noise. A new method was developed in this regard, which does not use any of the matrices of the filter. DDWTs are placed either in remote areas or in cities. For the monitoring of a DDWT, tens or hundreds of kilometers of cables are necessary. Wireless Sensor Networks (WSNs) are well suited for use in the communication infrastructure of DDWTs. WSNs have lower initial and maintenance costs, and they are quickly installed. Moreover, they can complement wired networks. Different wireless technologies are compared: both wide-area technologies and short-range technologies that support high data rates.
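The Goertzel algorithm evaluates a single DFT bin with a second-order recursion, which is why it beats a full FFT when only a few fault harmonics matter. A minimal sketch, with an illustrative signal and frequencies assumed for the example:

```python
import numpy as np

def goertzel_power(x, k):
    """Squared magnitude of DFT bin k of x, via the Goertzel recursion."""
    N = len(x)
    coeff = 2.0 * np.cos(2.0 * np.pi * k / N)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

# Example: a 50 Hz current with a small fault harmonic at 150 Hz.
fs, N = 1000, 1000
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 150 * t)
for f in (50, 150):
    k = int(f * N / fs)                  # bin index of frequency f
    print(f"{f} Hz bin power: {goertzel_power(x, k):.1f}")
```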
APA, Harvard, Vancouver, ISO, and other styles
8

Young, Barrington R. St A. "Efficient Algorithms for Data Mining with Federated Databases." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1179332091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Han, Zhi. "Applications of stochastic control and statistical inference in macroeconomics and high-dimensional data." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54401.

Full text
Abstract:
This dissertation studies the modeling of drift control in foreign exchange reserves management and designs a fast algorithm for statistical inference, with application to high-dimensional data analysis. The thesis has two parts. The first topic involves modeling foreign exchange reserve management as a drift control problem. We show that, under certain conditions, control band policies are optimal for the discounted cost drift control problem, and we develop an algorithm to calculate the optimal thresholds of the optimal control band policy. The second topic involves a fast algorithm for the partial distance covariance statistic, with application to feature screening in high-dimensional data. We show that an O(n log n) algorithm exists for a version of the partial distance covariance, compared with the O(n^2) algorithm implemented directly according to its definition. We further propose an iterative feature screening procedure in high-dimensional data based on the partial distance covariance. This procedure enjoys two advantages over correlation learning. First, an important predictor that is marginally uncorrelated but jointly correlated with the response can be picked up by our procedure and thus enter the estimation model. Second, our procedure is robust to model mis-specification.
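For reference, the direct O(n^2) sample distance covariance that such fast algorithms accelerate looks as follows. This is a textbook implementation of the Székely-Rizzo definition, given as a hedged sketch rather than the dissertation's O(n log n) partial distance covariance algorithm:

```python
import numpy as np

def distance_covariance(x, y):
    """Sample distance covariance of two 1-D samples, direct O(n^2) form."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])          # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # Double-center each distance matrix.
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return np.sqrt(np.mean(A * B))

rng = np.random.default_rng(3)
x = rng.normal(size=500)
print(distance_covariance(x, x**2))                  # nonlinear dependence
print(distance_covariance(x, rng.normal(size=500)))  # near zero
```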
APA, Harvard, Vancouver, ISO, and other styles
10

Paler, Mary Elvi Aspiras. "On Modern Measures and Tests of Multivariate Independence." Bowling Green State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1447628176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Lundström, Tomas. "Matched Field Beamforming applied to Sonar Data." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-16338.

Full text
Abstract:
Two methods for evaluating and improving plane wave beamforming have been developed. The methods estimate the shape of the wavefront and use the information in the beamforming. One of the methods uses estimates of the time delays between the sensors to approximate the shape of the wavefront, and the other estimates the wavefront by matching the received wavefront to spherical wavefronts of different radii. The methods are compared to a third, more common method of beamforming, which assumes that the impinging wave is planar. The methods' passive ranging abilities are also evaluated and compared to a reference method based on triangulation. Both methods were evaluated with both real and simulated data. The simulated data was obtained using Raylab, a simulation program based on ray tracing. The real data was obtained through a field test performed in the Baltic Sea using a towed array sonar and a stationary source emitting tones. The performance of the matched beamformers depends on the distance to the target. At a distance of 600 m near broadside, the power received by the beamformer increases by 0.5-1 dB compared to the plane wave beamformer. At a distance of 300 m near broadside the improvement is approximately 2 dB. In general, obtaining an accurate distance estimate proved to be difficult, and highly dependent on the noise present in the environment. A moving target at a distance of 600 m at broadside can be estimated with a maximum error of 150 m, when recursive updating of the covariance matrix with an updating constant of 0.25 is used. When recursive updating is not used, the margin of error increases to 400 m.
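The spherical-matching idea can be sketched for a uniform line array: steer with the exact delays from a point source at candidate range r and keep the range whose steering vector maximizes output power. The geometry, frequency, and noise level below are illustrative assumptions, not the thesis's sonar setup:

```python
import numpy as np

rng = np.random.default_rng(4)
c, f = 1500.0, 300.0                      # sound speed (m/s), tone (Hz)
sensors = np.arange(32) * 2.0             # line array, 2 m spacing
src = np.array([31.0, 400.0])             # true source: broadside, 400 m

def steering(r):
    """Steering vector for a point source at range r, broadside of the array."""
    d = np.hypot(sensors - src[0], r)     # sensor-to-source distances
    return np.exp(-2j * np.pi * f * d / c) / np.sqrt(len(sensors))

# Simulate narrowband snapshots and a sample covariance matrix.
a_true = steering(src[1])
snaps = (a_true[:, None] * rng.normal(size=(1, 200))
         + 0.1 * (rng.normal(size=(32, 200)) + 1j * rng.normal(size=(32, 200))))
R = snaps @ snaps.conj().T / 200

ranges = np.array([100.0, 200.0, 400.0, 800.0, 1e6])  # 1e6 ~ plane wave
powers = [np.real(steering(r).conj() @ R @ steering(r)) for r in ranges]
print(dict(zip(ranges, np.round(powers, 2))))         # peaks near true range
```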
APA, Harvard, Vancouver, ISO, and other styles
12

Ahmad, Shafiq. "Process capability assessment for univariate and multivariate non-normal correlated quality characteristics." RMIT University. Mathematical and Geospatial Sciences, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20091127.121556.

Full text
Abstract:
In today's competitive business and industrial environment, it is becoming more crucial than ever to assess precisely process losses due to non-compliance with customer specifications. To assess these losses, industry is extensively using Process Capability Indices (PCIs) for performance evaluation of its processes. Determination of the performance capability of a stable process using standard process capability indices such as Cp and Cpk requires that the underlying quality characteristics data follow a normal distribution. However, it is an undisputed fact that real processes very often produce non-normal quality characteristics data, and these quality characteristics are very often correlated with each other. For such non-normal and correlated multivariate quality characteristics, application of standard capability measures using conventional methods can lead to erroneous results. The research undertaken in this PhD thesis presents several capability assessment methods to estimate process performance more precisely and accurately, based on univariate as well as multivariate quality characteristics. The proposed capability assessment methods also take into account the correlation, variance and covariance, as well as the non-normality, of the quality characteristics data. A comprehensive review of existing univariate and multivariate PCI estimations is provided. We propose fitting Burr XII distributions to continuous positively skewed data. The proportion of nonconformance (PNC) for process measurements is then obtained using the Burr XII distribution, rather than through the traditional practice of fitting different distributions to real data. The maximum likelihood method is deployed to improve the accuracy of the PCI based on the Burr XII distribution. Different numerical methods, such as Evolutionary and Simulated Annealing algorithms, are deployed to estimate the parameters of the fitted Burr XII distribution. We also introduce a new transformation method, called the Best Root Transformation approach, to transform non-normal data to normal data and then apply the traditional PCI method to estimate the proportion of non-conforming data. Another approach introduced in this thesis is to deploy the Burr XII cumulative density function for PCI estimation using the Cumulative Density Function technique. This is in contrast to the approach adopted in the research literature, i.e. the use of a best-fitting density function from known distributions for non-normal data in PCI estimation. The proposed CDF technique has also been extended to estimate process capability for bivariate non-normal quality characteristics data. A new multivariate capability index based on the Generalized Covariance Distance (GCD) is proposed. This novel approach reduces the dimension of multivariate data by transforming correlated variables into univariate ones through a metric function. This approach evaluates process capability for correlated non-normal multivariate quality characteristics. Unlike the Geometric Distance approach, the GCD approach takes into account the scaling effect of the variance-covariance matrix and produces a Covariance Distance variable that is based on the Mahalanobis distance. Another novelty introduced in this research is to approximate the distribution of these distances by a Burr XII distribution and then estimate its parameters using a numerical search algorithm. It is demonstrated that the proportion of nonconformance (PNC) using the proposed method is very close to the actual PNC value.
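The core recipe, fitting a Burr XII distribution by maximum likelihood and reading the proportion of nonconformance off its tail, can be sketched with SciPy. The lognormal data and the specification limit are assumptions for illustration; the thesis uses dedicated search algorithms for the fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = rng.lognormal(mean=0.0, sigma=0.4, size=2000)  # skewed "process" data
usl = 3.0                                             # upper spec limit (assumed)

# Maximum likelihood fit of a Burr XII distribution (location fixed at 0).
c, d, loc, scale = stats.burr12.fit(data, floc=0)

# Proportion of nonconformance: probability of exceeding the spec limit.
pnc_fit = stats.burr12.sf(usl, c, d, loc=loc, scale=scale)
pnc_emp = np.mean(data > usl)
print(f"PNC from Burr XII fit: {pnc_fit:.4f}   empirical: {pnc_emp:.4f}")
```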
APA, Harvard, Vancouver, ISO, and other styles
13

Dihl, Leandro Lorenzett. "Rastreamento de objetos usando descritores estatísticos." Universidade do Vale do Rio do Sinos, 2009. http://www.repositorio.jesuita.org.br/handle/UNISINOS/2273.

Full text
Abstract:
The low cost of image acquisition systems and the increase in the computational power of available machines have caused a growing demand for automated video analysis in several applications, such as surveillance, human-computer interfaces, analysis of sports performance, etc. Object tracking through the video sequence is part of this analysis, and it has been a challenging problem in the computer vision area. This work presents a new approach for object tracking based on fragments. Initially, the region selected for tracking is divided into rectangular subregions (patches, or fragments), and each patch is tracked independently. Moreover, the motion history of the object is used to estimate its position in the subsequent frames. The overall displacement of the object is then obtained by combining the displacements of each patch and the predicted displacement vector, in order to prioritize fragments presenting consistent displacement. An update scheme is also applied to the model, to deal with illumination and appearance changes.
APA, Harvard, Vancouver, ISO, and other styles
14

Liu, Zongyi. "Gait-Based Recognition at a Distance: Performance, Covariate Impact and Solutions." Scholar Commons, 2004. https://scholarcommons.usf.edu/etd/1134.

Full text
Abstract:
It has been noticed for a long time that humans can identify others based on their biological movement from a distance. However, it is only recently that computer vision based gait biometrics has received much attention. In this dissertation, we perform a thorough study of gait recognition from a computer vision perspective. We first present a parameterless baseline recognition algorithm, which bases similarity on spatio-temporal correlation that emphasizes gait dynamics as well as gait shapes. Our experiments are performed with three popular gait databases: the USF/NIST HumanID Gait Challenge outdoor database with 122 subjects, the UMD outdoor database with 55 subjects, and the CMU Mobo indoor database with 25 subjects. Despite its simplicity, the baseline algorithm shows strong recognition power. On the other hand, the outcome suggests that changes in surface and time have a strong impact on recognition, with a significant drop in performance. To gain insight into the effects of image segmentation on recognition, a possible cause of performance degradation, we propose a silhouette reconstruction method based on a Population Hidden Markov Model (pHMM), which models gait over one cycle, coupled with an Eigen-stance model utilizing the Principal Component Analysis (PCA) of the silhouette shapes. Both models are built from a set of manually created silhouettes of 71 subjects. Given a sequence of machine segmented silhouettes, each frame is matched into a stance by pHMM using the Viterbi algorithm, and then is projected into and reconstructed by the Eigen-stance model. We demonstrate that the system dramatically improves the silhouette quality. Nonetheless, it is of little help for recognition, indicating that segmentation is not the key factor in the covariate impacts. To improve performance, we look into other aspects. Toward this end, we propose three recognition algorithms: (i) an averaged silhouette based algorithm that deemphasizes gait dynamics, which substantially reduces computation time but achieves recognition power similar to the baseline algorithm; (ii) an algorithm that normalizes gait dynamics using pHMM and then uses the Euclidean distance between corresponding selected stances, which improves recognition over surface and time; and (iii) an algorithm that also performs gait dynamics normalization using pHMM but, instead of Euclidean distances, considers distances in a shape space based on Linear Discriminant Analysis (LDA) and measures that are invariant to morphological deformation of silhouettes. This algorithm statistically improves the recognition over all covariates. Compared with the best reported algorithm to date, it improves the top-rank identification rate (gallery size: 122 subjects) for comparison across hard covariates: briefcase, surface type and time, by 22%, 14%, and 12% respectively. In addition to better gait algorithms, we also study multi-biometric combination to improve outdoor biometric performance, specifically, fusing with face data. We choose outdoor face recognition, a "known" hard problem in face biometrics, and test four combination schemes: score sum, Bayesian rule, confidence score sum, and rank sum. We find that the recognition power after combination is significantly stronger although the individual biometrics are weak, suggesting another effective approach to improve biometric recognition.
The fundamental contributions of this work include (i) establishing the "hard" problems for gait recognition involving comparison across time, surface, and briefcase carrying conditions, (ii) revealing that their impacts cannot be explained by silhouette segmentation, (iii) demonstrating that gait shape is more important than gait dynamics in recognition, and (iv) proposing a novel gait algorithm that outperforms other gait algorithms to date.
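The averaged-silhouette idea from contribution (i) is easy to sketch: collapse a gait cycle into one mean silhouette image and compare subjects by Euclidean distance. The array shapes and synthetic data below are illustrative assumptions:

```python
import numpy as np

def averaged_silhouette(frames):
    """Mean of binary silhouette frames over a gait cycle (frames: T x H x W)."""
    return np.mean(np.asarray(frames, float), axis=0)

def silhouette_distance(frames_a, frames_b):
    """Euclidean distance between averaged silhouettes; small = similar gait."""
    return np.linalg.norm(averaged_silhouette(frames_a)
                          - averaged_silhouette(frames_b))

rng = np.random.default_rng(6)
gallery = rng.integers(0, 2, size=(30, 64, 44))   # fake 30-frame sequence
probe = gallery.copy()
probe[:, :2, :] = 0                               # slightly altered sequence
print(silhouette_distance(gallery, probe))
```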
APA, Harvard, Vancouver, ISO, and other styles
15

Zerbini, Alexandre N. "Improving precision in multiple covariate distance sampling : a case study with whales in Alaska /." Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/5391.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Ben, Abdallah Rayen. "Statistical signal processing exploiting low-rank priors with applications to detection in Heterogeneous Environment." Thesis, Paris 10, 2019. http://www.theses.fr/2019PA100076.

Full text
Abstract:
In this thesis, we first consider the problem of low-dimensional signal subspace estimation in a Bayesian context. We focus on compound Gaussian signals embedded in white Gaussian noise, which is a realistic model for various array processing applications. Following the Bayesian framework, we derive algorithms to compute both the maximum a posteriori estimator and the so-called minimum mean square distance estimator, which minimizes the average natural distance between the true range space of interest and its estimate. Such approaches have shown their interest for signal subspace estimation in small sample support and/or low signal-to-noise ratio contexts. As a byproduct, we also introduce a generalized version of the complex Bingham Langevin distribution in order to model the prior on the subspace orthonormal basis. Numerical simulations illustrate the performance of the proposed algorithms. Then, a practical example of Bayesian prior design is presented for the purpose of radar detection, with a space time adaptive processing application for an airborne radar evaluated on real data. Second, we aim to test common properties between low-rank structured covariance matrices. Indeed, this hypothesis testing has been shown to be a relevant approach for change and/or anomaly detection in synthetic aperture radar images. While the term similarity usually refers to equality or proportionality, we explore the testing of shared properties in the structure of low-rank plus identity covariance matrices, which are appropriate for radar processing. Specifically, we derive generalized likelihood ratio tests to infer i) on the equality/proportionality of the low-rank signal component of covariance matrices, and ii) on the equality of the signal subspace component of covariance matrices. The formulation of the second test involves non-trivial optimization problems, for which we tailor efficient Majorization-Minimization algorithms. Eventually, the proposed detection methods enjoy interesting properties, which are illustrated in simulations and in an application to real data for change detection.
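A natural summary statistic for test ii) compares the dominant eigenspaces of two sample covariance matrices; the sketch below measures their separation with principal angles. The rank, the toy data, and the use of principal angles as the distance are illustrative assumptions; the thesis's GLRT involves more than this comparison:

```python
import numpy as np
from scipy.linalg import subspace_angles

def dominant_subspace(S, r):
    """Orthonormal basis of the top-r eigenspace of a covariance matrix."""
    _, vecs = np.linalg.eigh(S)           # eigenvalues in ascending order
    return vecs[:, -r:]

rng = np.random.default_rng(7)
d, r, n = 10, 2, 400
U = np.linalg.qr(rng.normal(size=(d, r)))[0]      # shared low-rank subspace

def sample_cov():
    X = rng.normal(size=(n, r)) @ (3.0 * U.T) + rng.normal(size=(n, d))
    return X.T @ X / n                             # low-rank plus identity

S1, S2 = sample_cov(), sample_cov()
angles = subspace_angles(dominant_subspace(S1, r), dominant_subspace(S2, r))
print(f"largest principal angle: {np.degrees(angles.max()):.2f} degrees")
```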
APA, Harvard, Vancouver, ISO, and other styles
17

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need of extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single object scenarios where ground truth is available and in three multi object scenarios without ground truth. Results from the two single object scenarios show that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
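The filtering-versus-smoothing comparison at the heart of this approach can be sketched with a 1-D constant-velocity model: a causal Kalman filter runs forward, and a Rauch-Tung-Striebel pass refines the same estimates using future measurements. The model and noise levels are illustrative assumptions, not the thesis's sensor setup:

```python
import numpy as np

rng = np.random.default_rng(8)
dt, T, q, r = 0.1, 100, 0.5, 1.0
F = np.array([[1, dt], [0, 1]])                   # constant-velocity motion
H = np.array([[1.0, 0.0]])                        # position measurements only
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
R = np.array([[r]])

# Simulate a target and noisy position measurements.
x = np.zeros((T, 2)); x[0] = (0.0, 1.0)
for t in range(1, T):
    x[t] = F @ x[t - 1] + rng.multivariate_normal([0, 0], Q)
z = x[:, 0] + rng.normal(scale=np.sqrt(r), size=T)

# Forward (causal) Kalman filter.
xf = np.zeros((T, 2)); Pf = np.zeros((T, 2, 2))
xp = np.zeros((T, 2)); Pp = np.zeros((T, 2, 2))
xc, Pc = np.array([z[0], 0.0]), np.eye(2)
for t in range(T):
    xp[t], Pp[t] = (F @ xc, F @ Pc @ F.T + Q) if t else (xc, Pc)
    K = Pp[t] @ H.T @ np.linalg.inv(H @ Pp[t] @ H.T + R)
    xc = xp[t] + (K @ (z[t] - H @ xp[t])).ravel()
    Pc = (np.eye(2) - K @ H) @ Pp[t]
    xf[t], Pf[t] = xc, Pc

# Backward Rauch-Tung-Striebel smoother (non-causal).
xs, Ps = xf.copy(), Pf.copy()
for t in range(T - 2, -1, -1):
    G = Pf[t] @ F.T @ np.linalg.inv(Pp[t + 1])
    xs[t] = xf[t] + G @ (xs[t + 1] - xp[t + 1])
    Ps[t] = Pf[t] + G @ (Ps[t + 1] - Pp[t + 1]) @ G.T

print("filter RMSE  :", np.sqrt(np.mean((xf[:, 0] - x[:, 0])**2)))
print("smoother RMSE:", np.sqrt(np.mean((xs[:, 0] - x[:, 0])**2)))
```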
APA, Harvard, Vancouver, ISO, and other styles
18

Wan, Phyllis. "Application of Distance Covariance to Extremes and Time Series and Inference for Linear Preferential Attachment Networks." Thesis, 2018. https://doi.org/10.7916/D8Q25GQB.

Full text
Abstract:
This thesis covers four topics: i) Measuring dependence in time series through distance covariance; ii) Testing goodness-of-fit of time series models; iii) Threshold selection for multivariate heavy-tailed data; and iv) Inference for linear preferential attachment networks. Topic i) studies a dependence measure based on characteristic functions, called distance covariance, in time series settings. Distance covariance recently gathered popularity for its ability to detect nonlinear dependence. In particular, we characterize a general family of such dependence measures and use them to measure lagged serial and cross dependence in stationary time series. Assuming strong mixing, we establish the relevant asymptotic theory for the sample auto- and cross- distance correlation functions. Topic ii) proposes a goodness-of-fit test for general classes of time series model by applying the auto-distance covariance function (ADCV) to the fitted residuals. Under the correct model assumption, the limit distribution for the ADCV of the residuals differs from that of an i.i.d. sequence by a correction term. This adjustment has essentially the same form regardless of the model specification. Topic iii) considers data in the multivariate regular varying setting where the radial part $R$ is asymptotically independent of the angular part $\Theta$ as $R$ goes to infinity. The goal is to estimate the limiting distribution of $\Theta$ given $R\to\infty$, which characterizes the tail dependence of the data. A typical strategy is to look at the angular components of the data for which the radial parts exceed some threshold. We propose an algorithm to select the threshold based on distance covariance statistics and a subsampling scheme. Topic iv) investigates inference questions related to the linear preferential attachment model for network data. Preferential attachment is an appealing mechanism based on the intuition “the rich get richer” and produces the well-observed power-law behavior in net- works. We provide methods for fitting such a model under two data scenarios, when the network formation is given, and when only a single-time snapshot of the network is observed.
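The auto-distance correlation function of topic i) applies distance correlation to the lagged pairs (X_t, X_{t+h}). A compact sketch follows, using the direct O(n^2) computation and an AR(1) series assumed for illustration:

```python
import numpy as np

def dcor(x, y):
    """Sample distance correlation of two 1-D samples (direct O(n^2) form)."""
    def centered(v):
        D = np.abs(v[:, None] - v[None, :])
        return D - D.mean(0) - D.mean(1)[:, None] + D.mean()
    A, B = centered(np.asarray(x, float)), centered(np.asarray(y, float))
    denom = np.sqrt(np.mean(A * A) * np.mean(B * B))
    return np.sqrt(np.mean(A * B) / denom) if denom > 0 else 0.0

def adcf(x, max_lag):
    """Auto-distance correlation function at lags 1..max_lag."""
    return [dcor(x[:-h], x[h:]) for h in range(1, max_lag + 1)]

rng = np.random.default_rng(9)
x = np.zeros(1000)
for t in range(1, 1000):                 # AR(1) series with phi = 0.7
    x[t] = 0.7 * x[t - 1] + rng.normal()
print(np.round(adcf(x, 5), 3))           # decaying serial dependence
```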
APA, Harvard, Vancouver, ISO, and other styles
19

Guetsop, Nangue Aurélien. "Tests de permutation d’indépendance en analyse multivariée." Thèse, 2016. http://hdl.handle.net/1866/18476.

Full text
Abstract:
This thesis is organized as a collection of articles. The articles are written in English, and the rest of the thesis is written in French.
The main result establishes the equivalence in terms of power between the alpha-distance covariance test and the Hilbert-Schmidt independence criterion (HSIC) test with the characteristic kernel of a stable probability distribution of index alpha with sufficiently small scale parameters. Large-scale simulations reveal the superiority of these two tests over other tests based on the empirical independence copula process. They also establish the usefulness of the lesser known Pearson type III approximation to the exact permutation distribution. This approximation yields tests with more accurate type I error rates than the gamma approximation usually used for HSIC, especially when the dimensions of the two vectors are large. A new method for scale parameter selection in HSIC tests is proposed which improves power performance in three simulations, two of which are from machine learning. The problem of testing mutual independence between many random vectors is addressed. The closely related problem of testing serial independence of a multivariate stationary sequence is also considered. The Möbius transformation of characteristic functions is used to characterize independence. A generalization to p vectors of the alpha-distance covariance test and the HSIC test with the characteristic kernel of a stable probability distribution of index alpha is obtained. It is shown that an HSIC test with sufficiently small scale parameters is equivalent to an alpha-distance covariance test. Weak convergence of the HSIC test is established. A very fast and accurate computation of p-values uses the Pearson type III approximation, which successfully approaches the exact permutation distribution of the tests. This approximation relies on the exact first three moments of the permutation distribution of any test statistic which can be expressed as the sum of all elements of a componentwise product of p doubly-centered matrices. The alpha-distance covariance test and the HSIC test are both of this form. A new selection method is proposed for the scale parameter of the characteristic kernel of the HSIC test. It is shown in a simulation that this adaptive HSIC test has higher power than the alpha-distance covariance test when data are generated from a Student copula. Applications are given to environmental and financial data.
APA, Harvard, Vancouver, ISO, and other styles