
Dissertations / Theses on the topic 'Alpha estimation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 32 dissertations / theses for your research on the topic 'Alpha estimation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Hägglund, Kristoffer. "Symmetric alpha-Stable Adapted Demodulation and Parameter Estimation." Thesis, Luleå tekniska universitet, Signaler och system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-70719.

Full text
Abstract:
Transmission and reception of signals in wireless communication systems are affected by additive interference corrupting the signal. Traditionally, the interference is assumed to be AWGN, and system designs are usually based on that assumption. Modern military platforms consist of many electrical components and systems, so the noise affecting the signals is often a product of interference between these components and systems. This type of noise tends to be very impulsive in nature. The standard AWGN model is not suited to impulsive noise, which leaves an opportunity to investigate the performance of a demodulation scheme adapted to the current interference environment in order to increase the performance gain. To properly analyze the performance of an interference-adapted demodulator, knowledge of the characteristic parameters of the chosen noise model is required to perform the necessary calculations. This project combines adaptive demodulation with the evaluation of parameter estimation. Four parameter estimation techniques specifically customized for Symmetric alpha-Stable distributed noise were implemented and examined: the Empirical Characteristic Function (ECF) method, the Fractional Lower-Order Moments (FLOM) method, the Extreme-Order Statistics (EOS) method, and the Quantiles method. The effectiveness and performance of the methods were investigated in two Symmetric alpha-Stable processes of varying levels of impulsiveness, as well as in two Class A processes in order to monitor performance in noise not distributed according to the intended model, functioning as an arbitrary representation of non-Gaussian interference. The results were evaluated using the Kullback-Leibler divergence. The demodulator was designed for Symmetric alpha-Stable distributed noise and implemented using an LLR algorithm. The simulations were performed using an LDPC coding protocol, and the experiment was conducted in both Class A and Symmetric alpha-Stable distributed noise. The modulation schemes were 4-QAM and BPSK. The simulations showed that ECF was the most consistent parameter estimation method overall, regardless of distribution model or number of available samples. FLOM performed well in alpha-Stable noise but struggled in Class A processes. EOS and the Quantiles method both struggled when few samples were available. The experiments show that an alpha-Stable adapted demodulator coupled with a parameter estimation technique based on the empirical characteristic function (ECF) is a very competitive and viable option in impulsive interference environments, regardless of the origin of the noise distribution. The performance gain over demodulation based on the standard AWGN assumption exceeded 25 dB for impulsive noise processes.
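The ECF approach that performs best here has a compact core. For symmetric alpha-stable noise the characteristic function is phi(t) = exp(-(gamma*|t|)^alpha), so log(-log|phi(t)|) is linear in log|t| and the parameters follow from a least-squares fit to the empirical characteristic function. The Python sketch below illustrates that idea only; the function name, the grid of t-values, and the use of SciPy to generate test data are assumptions of this illustration, not details taken from the thesis.

```python
import numpy as np
from scipy.stats import levy_stable

def ecf_estimate_sas(x, t_points=(0.1, 0.2, 0.5, 1.0)):
    """Estimate (alpha, gamma) of symmetric alpha-stable data via the ECF.

    For SaS noise, phi(t) = exp(-(gamma*|t|)**alpha), so
    log(-log|phi(t)|) = alpha*log|t| + alpha*log(gamma):
    linear in log|t|, hence fit by least squares.
    """
    t = np.asarray(t_points)
    phi = np.array([np.mean(np.exp(1j * tk * np.asarray(x))) for tk in t])
    y = np.log(-np.log(np.abs(phi)))
    A = np.vstack([np.log(t), np.ones_like(t)]).T
    slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
    return slope, np.exp(intercept / slope)   # (alpha_hat, gamma_hat)

# quick check on simulated SaS noise (SciPy's S1 parameterization)
x = levy_stable.rvs(1.5, 0.0, scale=2.0, size=20000, random_state=0)
print(ecf_estimate_sas(x))   # approximately (1.5, 2.0)
```

Regressing over a few moderate |t| values keeps -log|phi(t)| away from numerical noise; a production estimator would choose this grid more carefully.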
APA, Harvard, Vancouver, ISO, and other styles
2

Jaoua, Nouha. "Estimation Bayésienne non Paramétrique de Systèmes Dynamiques en Présence de Bruits Alpha-Stables." Phd thesis, Ecole Centrale de Lille, 2013. http://tel.archives-ouvertes.fr/tel-00929691.

Full text
Abstract:
In a growing number of applications, the disturbances encountered depart strongly from the classical models that describe them as Gaussian or Gaussian-mixture noise. This is particularly the case for the impulsive noise encountered in several fields, notably telecommunications. In such settings, a better-suited model can be based on alpha-stable distributions. This thesis falls within that framework; its objective is to design new robust methods for joint state and noise estimation in impulsive environments. Inference is carried out in a Bayesian framework using sequential Monte Carlo methods. The problem is first addressed in the context of OFDM transmission systems, assuming channel distortions modeled by symmetric alpha-stable distributions. A sequential Monte Carlo algorithm is proposed for the joint estimation of the transmitted OFDM symbols and the parameters of the alpha-stable noise. The problem is then tackled in a broader applicative setting, that of nonlinear systems. A Bayesian nonparametric approach, based on modeling the alpha-stable noise by Dirichlet process mixtures, is proposed. Particle filters based on efficient importance densities are developed for the joint estimation of the signal and the probability densities of the noises.
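As background for readers unfamiliar with sequential Monte Carlo, a generic bootstrap particle filter is sketched below in Python; the thesis's contributions (importance densities adapted to alpha-stable noise, and Dirichlet process mixture models of the noise) sit on top of this skeleton rather than being reproduced here. All names and the Gaussian initial cloud are illustrative assumptions.

```python
import numpy as np

def bootstrap_particle_filter(ys, transition, likelihood, n_particles=1000, rng=None):
    """Generic bootstrap particle filter (minimal SMC skeleton).

    ys         : observations y_1..y_T
    transition : samples x_t given x_{t-1}, vectorised over particles
    likelihood : p(y_t | x_t), vectorised over particles
    Returns the filtering means E[x_t | y_1..y_t].
    """
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(n_particles)         # illustrative initial cloud
    means = []
    for y in ys:
        x = transition(x, rng)                   # propagate through the model
        w = likelihood(y, x)                     # weight by the observation
        w = w / w.sum()
        means.append(float(np.dot(w, x)))
        x = x[rng.choice(n_particles, n_particles, p=w)]   # resample
    return np.array(means)
```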
APA, Harvard, Vancouver, ISO, and other styles
3

Fries, Sébastien. "Anticipative alpha-stable linear processes for time series analysis : conditional dynamics and estimation." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLG005/document.

Full text
Abstract:
In the framework of linear time series analysis, we study a class of so-called anticipative strictly stationary processes potentially depending on all the terms of an independent and identically distributed alpha-stable error sequence. Focusing first on autoregressive (AR) processes, it is shown that conditional moments of higher order than the marginal ones exist provided the characteristic polynomial admits at least one root inside the unit circle. The forms of the first and second order moments are obtained in special cases. The least squares method is shown to provide a consistent estimator of an all-pass causal representation of the process, the validity of which can be tested by a portmanteau-type test. A method based on extreme residuals clustering is proposed to determine the original AR representation. The anticipative stable AR(1) is studied in detail in the framework of bivariate alpha-stable random vectors, and the functional forms of its first four conditional moments are obtained under any admissible parameterisation. It is shown that during extreme events, these moments become equivalent to those of a two-point distribution charging two polarly opposite future paths: exponential growth or collapse. Parallel results are obtained for the continuous-time counterpart of the AR(1), the anticipative stable Ornstein-Uhlenbeck process. For infinite alpha-stable moving averages, the conditional distribution of future paths given the observed past trajectory during extreme events is derived on the basis of a new representation of stable random vectors on unit cylinders relative to semi-norms. Contrary to the case of norms, such representations yield a multivariate regularly varying tails property appropriate for prediction purposes, but not all stable vectors admit such a representation. A characterisation is provided, and it is shown that finite-length paths of a stable moving average admit such a representation provided the process is "anticipative enough". Processes resulting from the linear combination of stable moving averages are encompassed, and the conditional distribution has a natural interpretation in terms of pattern identification.
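A purely anticipative stable AR(1) admits the forward moving-average form X_t = sum_{k>=0} rho^k eps_{t+k} with |rho| < 1, which can be simulated by truncating the forward-looking series. The sketch below is an illustration under that truncation assumption; the cut-off length and the SciPy sampler are choices of this example, not of the thesis.

```python
import numpy as np
from scipy.stats import levy_stable

def anticipative_stable_ar1(n, rho, alpha, cutoff=200, seed=0):
    """Simulate X_t = sum_{k=0}^{cutoff-1} rho**k * eps_{t+k}, |rho| < 1,
    a truncated anticipative stable AR(1) driven by i.i.d. symmetric
    alpha-stable innovations (the series converges since |rho| < 1)."""
    eps = levy_stable.rvs(alpha, 0.0, size=n + cutoff, random_state=seed)
    powers = rho ** np.arange(cutoff)
    return np.array([powers @ eps[t:t + cutoff] for t in range(n)])

path = anticipative_stable_ar1(500, rho=0.8, alpha=1.7)
```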
APA, Harvard, Vancouver, ISO, and other styles
4

Azzaoui, Nourddine. "Analyse et Estimations Spectrales des Processus alpha-Stables non-Stationnaires." Phd thesis, Université de Bourgogne, 2006. http://tel.archives-ouvertes.fr/tel-00138027.

Full text
Abstract:
This thesis introduces a new spectral representation of symmetric alpha-stable processes. It is based on a pseudo-additivity property of the covariation and on integration in the Morse-Transue sense with respect to a bimeasure constructed using this pseudo-additivity. The interest of this representation is that it resembles the covariance representation of second-order processes; it generalizes the one established for stochastic integrals with respect to a symmetric alpha-stable process with independent increments. A classification of nonstationary harmonizable processes is studied according to the structure of the bimeasure that characterizes them, and periodically covariated processes are defined. To enable the simulation of this unusual class of processes, a new LePage-type series decomposition is provided. Finally, nonparametric spectral estimation techniques are discussed. In particular, an estimator that converges almost surely under a strong mixing condition is introduced for periodically covariated processes.
APA, Harvard, Vancouver, ISO, and other styles
5

Poulin, Nicolas. "Estimation de la fonction des quantiles pour des données tronquées." Littoral, 2006. http://www.theses.fr/2006DUNK0159.

Full text
Abstract:
In the left-truncation model, the pair of random variables Y and T with respective distribution functions F and G is observed only if Y ≥ T. Let (Yi, Ti), 1 ≤ i ≤ n, be an observed sample of this pair of random variables. The quantile function of F is estimated by the quantile function of the Lynden-Bell (1971) estimator. After giving some results from the literature in the case of independent data, we consider the α-mixing framework. We obtain strong consistency with rates, a strong representation of the quantile estimator as a mean of random variables with a negligible remainder, and asymptotic normality. For the second topic of this thesis, we consider a multidimensional explanatory random variable X for Y, which plays the role of a response. We establish strong consistency and asymptotic normality of the conditional distribution function and of the conditional quantile function of Y given X when Y is subject to truncation. Simulations illustrate the results for finite samples.
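For orientation, a minimal sketch of the Lynden-Bell (1971) product-limit estimator and the induced quantile estimator is given below; n*C(s) counts the pairs whose observation window covers s, and the quantile is the generalized inverse of the resulting step function. Ties and the degenerate n*C = 1 case are ignored in this sketch, and the function names are illustrative.

```python
import numpy as np

def lynden_bell_cdf(y_obs, t_obs, y_grid):
    """Lynden-Bell (1971) product-limit estimator of F under left truncation.

    Uses 1 - F(y) = prod_{Y_i <= y} (1 - 1/(n*C(Y_i))), where
    n*C(s) = #{i : T_i <= s <= Y_i}. Sketch only: ties and the
    degenerate n*C = 1 case are not handled.
    """
    y, t = np.asarray(y_obs, float), np.asarray(t_obs, float)
    ys = np.sort(y)
    nC = np.array([np.sum((t <= s) & (s <= y)) for s in ys])
    surv = np.cumprod(1.0 - 1.0 / nC)          # survival just after each Y_i
    idx = np.searchsorted(ys, np.asarray(y_grid, float), side="right") - 1
    F = np.zeros(len(y_grid))
    F[idx >= 0] = 1.0 - surv[idx[idx >= 0]]
    return F

def lynden_bell_quantile(y_obs, t_obs, p):
    """Quantile estimator: generalized inverse of the Lynden-Bell CDF."""
    ys = np.sort(np.asarray(y_obs, float))
    F = lynden_bell_cdf(y_obs, t_obs, ys)
    return ys[min(np.searchsorted(F, p), len(ys) - 1)]
```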
APA, Harvard, Vancouver, ISO, and other styles
6

Fourt, Olivier. "Traitement des signaux à phase polynomiale dans des environnements fortement bruités : séparation et estimation des paramètres." Paris 11, 2008. http://www.theses.fr/2008PA112064.

Full text
Abstract:
This thesis deals with the processing of polynomial phase signals in heavily corrupted environments, whether the corruption is high-level noise or impulse noise, the latter modelled using alpha-stable laws. Noise robustness is a classical topic in signal processing and, while several algorithms can cope with high Gaussian noise levels, the presence of impulse noise often leads to a severe loss of performance or makes algorithms unusable. Recently, some algorithms have been designed for impulse-noise environments, but with one limitation: their performance degrades in Gaussian noise, so the appropriate method must first be selected according to the kind of noise present. One key point of this thesis was therefore to build algorithms robust to the kind of noise, meaning that they perform similarly under Gaussian or alpha-stable noise. The second key point was to build fast algorithms, a property difficult to combine with robustness.
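The alpha-stable noise model used throughout can be sampled directly with the classical Chambers-Mallows-Stuck transform; the symmetric-case sketch below is standard background, given for illustration only (the thesis itself is about the processing algorithms, not the generator).

```python
import numpy as np

def sas_noise(alpha, size, rng=None):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck
    transform; alpha in (0, 2], smaller alpha means more impulsive."""
    rng = rng or np.random.default_rng(0)
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform phase
    W = rng.exponential(1.0, size)                  # unit-mean exponential
    return (np.sin(alpha * U) / np.cos(U) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha))
```

For alpha = 1 the second factor reduces to 1 and the formula returns tan(U), i.e. Cauchy noise; alpha close to 2 approaches the Gaussian case.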
APA, Harvard, Vancouver, ISO, and other styles
7

Ferrani, Yacine. "Sur l'estimation non paramétrique de la densité et du mode dans les modèles de données incomplètes et associées." Thesis, Littoral, 2014. http://www.theses.fr/2014DUNK0370/document.

Full text
Abstract:
This thesis deals with the asymptotic properties of a kernel (Parzen-Rosenblatt) density estimator under an associated and censored model. In this setting, we first recall in detail the existing results in both the i.i.d. and strong mixing (α-mixing) cases. Under mild standard conditions, it is established that the uniform almost sure convergence rate is optimal. In the part dedicated to the results of this thesis, two main original results are presented: the first concerns the strong uniform consistency rate of the studied estimator under the association hypothesis. The main tool for achieving the optimal rate is an adaptation of the theorem of Doukhan and Neumann (2007) to the study of the fluctuation term (random part) of the gap between the estimator and the target density. As an application, the almost sure convergence of the kernel mode estimator is established. These results have been accepted for publication in Communications in Statistics - Theory and Methods. The second result establishes the asymptotic normality of the estimator under the same model, and thus extends to the censored case the result of Roussas (2000). This result has been submitted for publication.
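The object of study is the classical Parzen-Rosenblatt estimator and the mode estimator it induces. A minimal i.i.d., complete-data sketch is given below for orientation; the censored, associated setting treated in the thesis replaces the raw observations with censoring-corrected quantities, which is not reproduced here.

```python
import numpy as np

def kernel_mode(data, grid, h):
    """Parzen-Rosenblatt density estimate and the induced mode estimator.

    f_hat(x) = (1/(n*h)) * sum_i K((x - X_i)/h) with a Gaussian kernel K;
    the mode estimate is the maximiser of f_hat over the grid.
    Complete-data i.i.d. sketch (no censoring correction).
    """
    data, grid = np.asarray(data, float), np.asarray(grid, float)
    u = (grid[:, None] - data[None, :]) / h
    f_hat = np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))
    return f_hat, grid[np.argmax(f_hat)]
```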
APA, Harvard, Vancouver, ISO, and other styles
8

Silva, Francyelle de Lima e. "Estimação de cópulas via ondaletas." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-03122014-214943/.

Full text
Abstract:
Copulas are important tools for describing the dependence structure between random variables and stochastic processes. Recently, some nonparametric estimation procedures have appeared, using kernels and wavelets. In this context, knowing that a copula function can be expanded in a wavelet basis, we have proposed a nonparametric copula estimation procedure through wavelets for independent data and time series under an alpha-mixing condition. The main feature of this estimator is that it estimates the copula function directly, without assumptions about the data distribution and without prior ARMA-GARCH modeling, as is done in parametric copula estimation. Convergence rates for the estimator were computed in both cases, showing its consistency. Some simulation studies were carried out, as well as analyses of real data sets.
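The rank-based empirical copula is the usual starting point for such nonparametric estimators; the thesis expands the copula in a wavelet basis instead of evaluating it pointwise. A minimal sketch of the empirical copula itself (not of the wavelet expansion, and with illustrative names) might look like this:

```python
import numpy as np

def empirical_copula(x, y, u, v):
    """Empirical copula C_n(u, v) from paired samples.

    C_n(u, v) = (1/n) * #{i : F_n(x_i) <= u and G_n(y_i) <= v},
    with F_n, G_n the empirical marginal CDFs (ranks / n).
    """
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    rx = np.argsort(np.argsort(x)) + 1   # marginal ranks 1..n
    ry = np.argsort(np.argsort(y)) + 1
    return np.mean((rx / n <= u) & (ry / n <= v))

# under independence C(u, v) ~ u*v, e.g. C(0.5, 0.5) close to 0.25
```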
APA, Harvard, Vancouver, ISO, and other styles
9

Khrifi, Saâd. "Etude de la densité électronique précise du composé "2-amino-5-nitropyridinium-L-monohydrogènetartrate" : estimation des propriétés optiques linéaire [alpha] et non linéaire [bêta] à partir des propriétés électrostatiques." Lille 1, 1996. http://www.theses.fr/1996LIL10005.

Full text
Abstract:
This work concerns the determination, by X-ray and neutron diffraction methods, of the electron density of the 2-amino-5-nitropyridinium-L-monohydrogentartrate (2A5NPLT) molecule. This compound is an organic salt belonging to a family of new materials whose nonlinear optical properties are increasingly effective. The X-Xho difference-series method and deformation models (Hansen-Coppens) make it possible, from X-ray diffraction data, to extract valuable information on the electron density of the molecular structure. The electrostatic properties (charges, molecular moments, and electrostatic potential) are determined by various techniques such as kappa and multipolar refinements and direct integration methods. In addition, we were able to show the influence of structure-factor phases on the deformation electron density and on the electrostatic properties in the case of non-centrosymmetric structures. A semi-empirical calculation using MOPAC made it possible to evaluate the components of the linear polarizability tensor alpha and of the nonlinear hyperpolarizabilities beta and gamma. Finally, using the Unsöld approximation, we were able to establish a direct link between the charge-distribution moments and the alpha and beta tensors.
APA, Harvard, Vancouver, ISO, and other styles
10

Boulanger, Frédéric. "Modelisation et simulation de variables regionalisees par des fonctions aleatoires stables." Paris, ENMP, 1990. http://www.theses.fr/1990ENMP0195.

Full text
Abstract:
In this thesis, we propose working with the class of stable random functions for the modeling and simulation of regionalized variables. In the first part, we address the problem of realizing a covariance function. We propose two methods that make it possible, on the one hand, to establish, through a new approach, the main results concerning autoregressive moving-average processes and, on the other hand, to compute the associated parameters. Finally, we address the problem of conditional simulation, for which we propose a direct moving-neighborhood solution. In the second part, we introduce stable random functions. After a brief review of the definitions of real and vector stable laws, we address the problem of the adequacy of this (infinite-variance) model to regionalized variables (of finite experimental variance). After studying the various existing methods, it appeared to us that no efficient stability tests existed. We propose two tests based on the study of the empirical characteristic function. This study simultaneously improves the estimation methods for the parameters of a stable law, notably for small samples. Once the stability hypothesis is accepted, the structural-analysis stage is indispensable. To this end, we developed a semi-parametric estimation method for vector stable laws. While the method provides interesting results in the case of a stable law, it seems that it cannot be used to estimate all finite-dimensional laws. We therefore propose two simpler structural-analysis tools generalizing the notion of covariance: the empirical autocorrelation function and the alpha-variogram. After a general study of stable random functions…
APA, Harvard, Vancouver, ISO, and other styles
11

Karlsson, Fredrik. "Matting of Natural Image Sequences using Bayesian Statistics." Thesis, Linköping University, Department of Science and Technology, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2355.

Full text
Abstract:

Separating a non-rectangular foreground image from a background image is a classical problem in image processing and analysis, known as matting or keying. A common example is a film frame where an actor is extracted from the background to later be placed on a different background. Compositing of these objects against a new background is one of the most common operations in the creation of visual effects. When the original background is of non-constant color, the matting becomes an underdetermined problem, for which a unique solution cannot be found.

This thesis describes a framework for computing mattes from images with backgrounds of non-constant color, using Bayesian statistics. Foreground and background color distributions are modeled as oriented Gaussians and optimal color and opacity values are determined using a maximum a posteriori approach. Together with information from optical flow algorithms, the framework produces mattes for image sequences without needing user input for each frame.

The approach used in this thesis differs from previous research in a few areas. The optimal order of processing is determined in a different way and sampling of color values is changed to work more efficiently on high-resolution images. Finally a gradient-guided local smoothness constraint can optionally be used to improve results for cases where the normal technique produces poor results.
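At the core of any matting framework is the compositing equation C = alpha*F + (1-alpha)*B. Once the foreground and background colours have been estimated (here via the maximum a posteriori procedure described above), the opacity follows in closed form by least squares. The sketch below shows only that final projection step, with illustrative names:

```python
import numpy as np

def alpha_from_composite(C, F, B):
    """Least-squares opacity from the compositing equation C = a*F + (1-a)*B.

    With foreground F and background B known, the best alpha in the
    least-squares sense is a = (C - B).(F - B) / ||F - B||**2,
    clipped to [0, 1]. Works on single RGB triples or arrays of them.
    """
    C, F, B = (np.asarray(v, float) for v in (C, F, B))
    num = np.sum((C - B) * (F - B), axis=-1)
    den = np.sum((F - B) ** 2, axis=-1)
    return np.clip(num / np.maximum(den, 1e-12), 0.0, 1.0)

print(alpha_from_composite([0.5, 0.4, 0.3], [0.9, 0.8, 0.1], [0.1, 0.0, 0.5]))  # 0.5
```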

APA, Harvard, Vancouver, ISO, and other styles
12

Oksar, Yesim. "Target Tracking With Correlated Measurement Noise." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608198/index.pdf.

Full text
Abstract:
A white Gaussian measurement-noise model is widely used in target tracking problem formulations. In practice, the measurement noise may not be white; this phenomenon is due to the scintillation of the target. In many radar systems, the measurement frequency is high enough that the correlation cannot be ignored without degrading tracking performance. In this thesis, the target tracking problem with correlated measurement noise is considered. The correlated measurement noise is modeled by a first-order Markov model. The effect of correlation is treated as interference, and the Optimum Decoding Based Smoothing Algorithm is applied. For linear models, the estimation performance of the Optimum Decoding Based Smoothing Algorithm is compared with that of the Alpha-Beta Filter Algorithm. For nonlinear models, it is compared with the Extended Kalman Filter through various simulations.
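The alpha-beta filter used as the linear baseline is compact enough to state in full; a standard constant-velocity sketch follows (the gains and names are generic choices of this illustration, not the thesis's configuration):

```python
def alpha_beta_filter(measurements, dt, alpha, beta, x0=0.0, v0=0.0):
    """Classic alpha-beta tracking filter (constant-velocity model).

    Each step predicts position from the current velocity, then corrects
    position and velocity with the fixed gains alpha and beta.
    """
    x, v, estimates = x0, v0, []
    for z in measurements:
        x_pred = x + dt * v           # predict position
        r = z - x_pred                # innovation (measurement residual)
        x = x_pred + alpha * r        # correct position
        v = v + (beta / dt) * r       # correct velocity
        estimates.append(x)
    return estimates

# e.g. alpha_beta_filter([1.1, 2.3, 2.9, 4.2], dt=1.0, alpha=0.85, beta=0.005)
```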
APA, Harvard, Vancouver, ISO, and other styles
13

Yin, Ling. "Automatic Stereoscopic 3D Chroma-Key Matting Using Perceptual Analysis and Prediction." Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31851.

Full text
Abstract:
This research presents a novel framework for automatic chroma keying and its optimizations for real-time and stereoscopic 3D processing. It first simulates the process of human perception in isolating foreground elements in a given scene by perceptual analysis, and then predicts foreground colours and the alpha map based on the analysis results and a restored clean background plate, rather than on direct sampling. In addition, an object-level depth map is generated through stereo matching on a carefully determined feature map. Three prototypes on different platforms have been implemented according to their hardware capability, based on the proposed framework. To achieve real-time performance, the entire procedure is optimized for parallel processing and data paths on the GPU, as well as for heterogeneous computing between GPU and CPU. Qualitative comparisons between results generated by the proposed algorithm and existing algorithms show that the proposed one generates more acceptable alpha maps and foreground colours, especially in regions that contain translucency and detail. Quantitative evaluations also validate its advantages in both quality and speed.
APA, Harvard, Vancouver, ISO, and other styles
14

Aït, Hennani Larbi. "Comportement asymptotique du processus de vraisemblance dans le cas non régulier." Rouen, 1989. http://www.theses.fr/1989ROUES039.

Full text
Abstract:
This work takes as its starting point the results of Ibragimov and Has'minskii concerning the convergence of the marginal laws of the likelihood process in the case where the density under consideration has the form f(x-t) and possesses a finite number of singularities. Its aim is to generalize these results and to study the mean-square convergence of the likelihood process when f(x-t) is replaced by f(x,t), where t is a fixed real number belonging to an open subset of R. After defining the singularities of order alpha of f(.,t), we establish a tractable expression for the likelihood process and apply the results obtained with this expression to the estimation of an unknown parameter theta.
APA, Harvard, Vancouver, ISO, and other styles
15

Guerrero, José-Luis. "Robust Water Balance Modeling with Uncertain Discharge and Precipitation Data : Computational Geometry as a New Tool." Doctoral thesis, Uppsala universitet, Luft-, vatten och landskapslära, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-190686.

Full text
Abstract:
Models are important tools for understanding the hydrological processes that govern water transport in the landscape and for prediction at times and places where no observations are available. The degree of trust placed in models, however, should not exceed the quality of the data they are fed with. The overall aim of this thesis was to tune the modeling process to account for the uncertainty in the data, by identifying robust parameter values using methods from computational geometry. The methods were developed and tested on data from the Choluteca River basin in Honduras. Quality control of precipitation and discharge data resulted in the rejection of 22% of daily raingage data and the complete removal of one out of the seven discharge stations analyzed. The raingage network was not found sufficient to capture the spatial and temporal variability of precipitation in the Choluteca River basin. The temporal variability of discharge was evaluated through a Monte Carlo assessment of the rating-equation parameter values over a moving time window of stage-discharge measurements. All hydrometric stations showed considerable temporal variability in the stage-discharge relationship, which was largest for low flows, albeit with no common trend. The problem of limited data quality was addressed by identifying robust model parameter values within the set of well-performing (behavioral) parameter-value vectors with computational-geometry methods. The hypothesis that geometrically deep parameter-value vectors within the behavioral set were hydrologically robust was tested, and verified, using two depth functions. Deep parameter-value vectors tended to perform better than shallow ones, were less sensitive to small changes in their values, and were better suited to temporal transfer. Depth functions rank multidimensional data. Methods to visualize the multivariate distribution of behavioral parameters based on the ranked values were developed. It was shown that, by projecting along a common dimension, the multivariate distributions of behavioral parameters for models of varying complexity could be compared using the proposed visualization tools. This has the potential to aid in the selection of an adequate model structure considering the uncertainty in the data. These methods made it possible to quantify observational uncertainties. Geometric methods have only recently begun to be used in hydrology. It was shown that they can be used to identify robust parameter values, and some of their potential uses were highlighted.
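Depth functions order multidimensional vectors from central to peripheral. As a concrete illustration of the idea (not the thesis's exact estimator), Tukey's halfspace depth can be approximated with random projections:

```python
import numpy as np

def approx_halfspace_depth(point, data, n_dirs=500, rng=None):
    """Approximate Tukey (halfspace) depth of `point` in `data` by
    random projections: for each direction, take the smaller of the
    data fractions on either side of the point, then minimise over
    directions. A sketch of the idea, not the thesis's implementation."""
    rng = rng or np.random.default_rng(0)
    data = np.asarray(data, float)
    dirs = rng.standard_normal((n_dirs, data.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = data @ dirs.T                        # (n, n_dirs) projections
    proj_pt = np.asarray(point, float) @ dirs.T
    below = (proj <= proj_pt).mean(axis=0)
    return float(np.minimum(below, 1.0 - below).min())
```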
APA, Harvard, Vancouver, ISO, and other styles
16

Chaouch, Mohamed. "Contribution à l'estimation non paramétrique des quantiles géométriques et à l'analyse des données fonctionnelles." Phd thesis, Université de Bourgogne, 2008. http://tel.archives-ouvertes.fr/tel-00364538.

Full text
Abstract:
This thesis is devoted to the nonparametric estimation of geometric quantiles, conditional or not, and to functional data analysis. We are first interested in the study of geometric quantiles. We show, with several simulations, that a transformation-retransformation step is necessary to estimate the geometric quantile when one moves away from the setting of a spherical distribution. A study on real data confirmed that the data are better modeled using geometric quantiles rather than marginal quantiles, notably when the variables that constitute the random vector are correlated. We then study the estimation of geometric quantiles when the observations come from a survey sampling design. We propose an unbiased estimator of the geometric quantile and, using linearization techniques based on estimating equations, we derive the asymptotic variance of the estimator. We then show that the Horvitz-Thompson-type estimator of the variance converges in probability. We next consider the estimation of conditional geometric quantiles for dependent observations, and prove that the estimator of the conditional geometric quantile converges uniformly on every compact set. The second part of this thesis is devoted to the study of the various parameters characterizing functional PCA when the observations are drawn according to a sampling design. Linearization techniques based on the influence function provide variance estimators in the asymptotic framework. Under certain hypotheses, we show that these estimators converge in probability.
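Geometric quantiles (in the sense of Chaudhuri) solve sum_i (X_i - q)/||X_i - q|| = -n*u for a direction vector u with ||u|| < 1; u = 0 gives the spatial median. A Weiszfeld-type fixed-point sketch for the plain i.i.d. case, without the survey weights developed in the thesis:

```python
import numpy as np

def geometric_quantile(X, u, n_iter=200, tol=1e-8):
    """Geometric (spatial) quantile of multivariate data.

    Solves sum_i (X_i - q)/||X_i - q|| = -n*u by a Weiszfeld-type
    fixed-point iteration; u is a vector with ||u|| < 1 and u = 0
    gives the spatial median. Plain i.i.d. sketch, no survey weights.
    """
    X = np.asarray(X, float)
    u = np.asarray(u, float)
    q = X.mean(axis=0)
    for _ in range(n_iter):
        r = np.maximum(np.linalg.norm(X - q, axis=1), 1e-12)  # guard zeros
        w = 1.0 / r
        q_new = (X.T @ w + len(X) * u) / w.sum()
        if np.linalg.norm(q_new - q) < tol:
            break
        q = q_new
    return q_new
```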
APA, Harvard, Vancouver, ISO, and other styles
17

Chainais, Pierre. "Cascades log-infiniment divisibles et analyse multiresolution. Application à l'étude des intermittences en turbulence." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2001. http://tel.archives-ouvertes.fr/tel-00001584.

Full text
Abstract:
Log-infinitely divisible cascades provide a general framework for studying the scale-invariance property. We introduce these objects by describing the historical evolution of the various models proposed to describe the phenomenon of statistical intermittency in turbulence. We then set out a formal definition of log-infinitely divisible cascades. We also replace the increments, usual in turbulence, by the coefficients of a wavelet transform associated with a multiresolution analysis, a tool dedicated to time-scale analysis. A thorough reflection on the meaning of the formalism leads us to demonstrate its flexibility for modeling, as well as its richness in connection with multiplicative cascades, Markov processes, the Langevin equation, the Fokker-Planck equation, and so on. Through the study of compound log-Poisson cascades, we propose an original view of the phenomenon of statistical intermittency. Next, estimators of the exponents of (possibly relative) scaling laws are studied, with emphasis on bias correction and the determination of confidence intervals. We apply them to computer network traffic data. We explain why a usual procedure for estimating the multifractal spectrum, when applied to linear fractional stable motions, risks being misleading. Finally, the link between statistical intermittency and spatio-temporal intermittency (coherent structures) in turbulence is studied from recordings of velocity and pressure signals, taken jointly in space and time in a turbulent flow. Strong low-pressure events associated with filamentary vortices are detected. A statistical analysis of the wavelet coefficients of the velocity conditioned on these events allows us to describe the influence of these coherent structures at different Reynolds numbers.
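The elementary ancestor of the log-infinitely divisible cascades studied here is the dyadic multiplicative cascade, which is simple enough to sketch; the log-normal weights below (with E[W] = 1 so that mass is conserved on average) are an illustrative choice, not the thesis's construction:

```python
import numpy as np

def dyadic_cascade(levels, sigma=0.3, rng=None):
    """Dyadic multiplicative cascade with log-normal weights.

    Each cell splits in two and is multiplied by an independent weight W
    with E[W] = 1 (log-mean -sigma**2/2), conserving mass on average.
    A toy ancestor of log-infinitely divisible cascades.
    """
    rng = rng or np.random.default_rng(0)
    measure = np.ones(1)
    for _ in range(levels):
        w = rng.lognormal(-sigma**2 / 2, sigma, size=2 * len(measure))
        measure = np.repeat(measure, 2) * w
    return measure  # 2**levels cells of a multifractal measure

cells = dyadic_cascade(10)   # 1024 cells
```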
APA, Harvard, Vancouver, ISO, and other styles
18

Laitrakun, Seksan. "Distributed detection and estimation with reliability-based splitting algorithms in random-access networks." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53008.

Full text
Abstract:
We design, analyze, and optimize distributed detection and estimation algorithms in a large, shared-channel, single-hop wireless sensor network (WSN). The fusion center (FC) is allocated a shared transmission channel to collect local decisions/estimates but cannot collect all of them because of limited energy, bandwidth, or time. We propose a strategy called reliability-based splitting algorithm that enables the FC to collect local decisions/estimates in descending order of their reliabilities through a shared collision channel. The algorithm divides the transmission channel into time frames and the sensor nodes into groups based on their observation reliabilities. Only nodes with a specified range of reliabilities compete for the channel using slotted ALOHA within each frame. Nodes with the most reliable decisions/estimates attempt transmission in the first frame; nodes with the next most reliable set of decisions/estimates attempt in the next frame; etc. The reliability-based splitting algorithm is applied in three scenarios: time-constrained distributed detection; sequential distributed detection; and time-constrained estimation. Performance measures of interest - including detection error probability, efficacy, asymptotic relative efficiency, and estimator variance - are derived. In addition, we propose and analyze algorithms that exploit information from the occurrence of collisions to improve the performance of both time-constrained distributed detection and sequential distributed detection.
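The frame-assignment step of the reliability-based splitting algorithm can be sketched in a few lines; thresholds are assumed sorted in descending order, so frame 0 collects the most reliable decisions (names and threshold values are illustrative, not taken from the dissertation):

```python
def assign_frames(reliabilities, thresholds):
    """Reliability-based splitting: map each node's reliability to a
    transmission frame. thresholds must be in descending order; frame 0
    collects the most reliable local decisions, frame 1 the next band,
    and anything below the last threshold goes to the final frame."""
    frames = []
    for r in reliabilities:
        k = 0
        while k < len(thresholds) and r < thresholds[k]:
            k += 1
        frames.append(k)
    return frames

# Nodes split at reliability bands 0.9 / 0.7 / 0.5:
print(assign_frames([0.95, 0.80, 0.60, 0.30], [0.9, 0.7, 0.5]))  # [0, 1, 2, 3]
```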
APA, Harvard, Vancouver, ISO, and other styles
19

Ferdous, Arundhoti. "Comparative Analysis of Tag Estimation Algorithms on RFID EPC Gen-2 Performance." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6837.

Full text
Abstract:
In a passive radio-frequency identification (RFID) system, the reader communicates with the tags using the EPC Global UHF Class 1 Generation 2 (EPC Gen-2) protocol with dynamic framed slotted ALOHA. Due to the unique challenges presented by a low-power, random link, the channel efficiency of even the most modern passive RFID system is less than 40%. Hence, a variety of methods have been proposed to estimate the number of tags in the environment and set the optimal frame size. Some of the algorithms in the literature even claim system efficiency beyond 90%. However, these algorithms require fundamental changes to the underlying protocol framework, which makes them ineligible for use with current hardware running on the EPC Gen-2 platform, and this infrastructure change would cost the existing industry billions of dollars. Though numerous tag estimation algorithms have been proposed in the literature, none has had its performance thoroughly analyzed when incorporated into the industry-standard EPC Gen-2. In this study, we focus on some of the algorithms that can be utilized on today's hardware with minimal modifications. EPC Gen-2 already provides a dynamic platform for adjusting frame sizes based on subsequent knowledge of collision slots in a given frame. We choose some of the popular probabilistic tag estimation algorithms in the literature, such as Dynamic Frame Slotted ALOHA (DFSA)-I and DFSA-II, and rule-based algorithms such as the two conditional tag estimation (2CTE) method, and incorporate them with EPC Gen-2 using different strategies to see if they can significantly improve channel efficiency and dynamicity. The results from each algorithm are also evaluated and compared with the performance of pure EPC Gen-2. It is important to note that while integrating these algorithms with EPC Gen-2 to modify the frame size, the protocol is not altered in any substantial way. We also kept the maximum system efficiency for any MAC-layer protocol using DFSA as the upper bound, to have an impartial comparison between the algorithms. Finally, we present a novel and comprehensive analysis of the probabilistic tag estimation algorithms (DFSA-I & DFSA-II) in terms of their statistically significant correlations between channel efficiency, algorithm estimation accuracy, and algorithm utilization rate, as the existing literature only looks at channel efficiency with no auxiliary analysis. In this study, we use a scalable and flexible simulation framework and create a lightweight, verifiable Gen-2 simulation tool to measure these performance parameters, as it is very difficult, if not impossible, to calculate system performance analytically. This framework can easily be used to test and compare more algorithms in the literature with Gen-2 and other DFSA-based approaches.
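For orientation, the simplest collision-based backlog estimators in this literature fit in a few lines: a lower bound of two tags per collision slot, or Schoute's classical factor of about 2.39, with the next EPC Gen-2 frame size chosen as the power of two nearest the estimated backlog. This sketch is illustrative and is not the study's simulator:

```python
import math

def estimate_backlog(n_collision, method="schoute"):
    """Simple DFSA backlog estimators from one frame's collision count.

    "lower"   : each collision slot hides at least 2 tags.
    "schoute" : Schoute's Poisson-based factor of ~2.39 tags per collision.
    """
    return 2 * n_collision if method == "lower" else round(2.39 * n_collision)

def next_q(backlog):
    """EPC Gen-2 uses frame sizes 2**Q with Q in 0..15; pick Q so the
    frame size is closest to the estimated backlog (the DFSA optimum)."""
    return max(0, min(15, round(math.log2(max(1, backlog)))))

est = estimate_backlog(8)      # a frame that ended with 8 collision slots
print(est, 2 ** next_q(est))   # ~19 unread tags -> next frame size 16
```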
APA, Harvard, Vancouver, ISO, and other styles
20

Adam, Hassan Ali. "A solid phase microextraction/gas chromatography method for estimating the concentrations of chlorpyrifos, endosulphan-alpha, edosulphan-beta and endosulphan sulphate in water." Thesis, Peninsula Technikon, 2003. http://hdl.handle.net/20.500.11838/899.

Full text
Abstract:
Thesis (MTech (Chemical Engineering))--Peninsula Technikon, Cape Town, 2003
The monitoring of pesticide contamination in surface and groundwater is an essential aspect of an assessment of the potential environmental and health impacts of widespread pesticide use. Previous research in three Western Cape farming areas found consistent (37% to 69% of samples) pesticide contamination of rural water sources. However, despite the need, monitoring of pesticides in water is not done due to lack of analytical capacity and the cost of analysis in South Africa. The Solid Phase Microextraction (SPME) sampling method has been developed over the last decade as a replacement for solvent-based analyte extraction procedures. The method utilizes a short, thin, solid rod of fused silica coated with an absorbent polymer. The fibre is exposed to the pesticide contaminated water sample under vigorous agitation. The pesticide is absorbed into the polymer coating; the mass absorbed depends on the partition coefficient of the pesticide between the sample phase and the polymeric coating, the exposure time and factors such as agitation rate, the diffusivity of the analyte in water and the polymeric coating, and the volume and thickness of the coating. After absorption, the fibre is directly inserted into the Gas Chromatograph (GC) injection port for analysis. For extraction from a stirred solution a fibre will have a boundary region where the solution moves slowly near the fibre surface and faster further away until the analyte is practically perfectly mixed in the bulk solution by convection. The boundary region may be modelled as a layer of stationary solution surrounded by perfectly mixed solution.
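The mass absorbed at equilibrium follows from a standard two-phase partition balance (the classical SPME relation); a one-line sketch, with symbols named after the quantities in the abstract and offered as an illustration rather than the thesis's model:

```python
def spme_equilibrium_mass(K_fs, V_f, V_s, C0):
    """Equilibrium analyte mass absorbed by the SPME fibre coating:
    n = K_fs*V_f*V_s*C0 / (K_fs*V_f + V_s),
    with K_fs the fibre/sample partition coefficient, V_f the coating
    volume, V_s the sample volume and C0 the initial concentration.
    (Standard two-phase partition result; a sketch, not the thesis's model.)"""
    return K_fs * V_f * V_s * C0 / (K_fs * V_f + V_s)
```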
APA, Harvard, Vancouver, ISO, and other styles
21

Choate, Radmila. "ESTIMATING DISEASE SEVERITY, SYMPTOM BURDEN AND HEALTH-RELATED BEHAVIORS IN PATIENTS WITH CHRONIC PULMONARY DISEASES." UKnowledge, 2019. https://uknowledge.uky.edu/epb_etds/22.

Full text
Abstract:
Chronic pulmonary diseases include a wide range of illnesses that differ in etiology, prevalence, symptomatology and available therapy. A common link among these illnesses is their impact on patients’ vital function of breathing, high symptom burden and significantly impaired quality of life. This dissertation research evaluates disease severity, symptom burden and health behaviors of patients with three different chronic pulmonary conditions. First, alpha-1 antitrypsin deficiency (AATD) is an inherited condition that typically is associated with an increased risk of early onset pulmonary emphysema. This study examines differences in demographic, health, and behavioral characteristics and compares clinical outcomes and health-related behaviors and attitudes between two severe genotypes of AATD - ZZ and SZ. The findings of the study suggest that patients with the SZ genotype and less severe form of deficiency report a higher number of exacerbations and comorbidities, as well as unhealthy behaviors such as lack of exercise and current smoking. In addition, individuals with the more severely deficient ZZ genotype are more adherent to disease management and prevention program recommendations and maintain a healthier lifestyle than individuals with the SZ genotype. The second chronic lung disease examined in this research was chronic obstructive pulmonary disease (COPD), the fourth leading cause of death and second leading cause of disability in the United States. Prevalence and burden of cough and phlegm, two of the most common symptoms of COPD, were assessed among participants of the COPD Foundation’s Patient-Powered Research Network (COPD PPRN). In addition, the association between patient-reported levels of phlegm and cough, clinical outcomes and patients’ quality of life was evaluated. Participants’ quality of life was assessed using the Patient Reported Outcome Measurement Information System instrument PROMIS-29. The association between changes in symptom severity over time and patient-reported quality of life was examined. Findings of this study indicated that severity of cough and phlegm was associated with a higher number of exacerbations, greater dyspnea, and worsened patient-reported quality of life, including physical and social functioning. Improvement in cough and phlegm severity over time was associated with better patient-reported quality of life. The third pulmonary illness described in this dissertation is non-cystic fibrosis bronchiectasis (NCFB), a rare and etiologically diverse condition characterized by dilated bronchi, poor mucus clearance and susceptibility to bacterial infection. The association between the presence of Pseudomonas aeruginosa (PA), one of the most frequently isolated pathogens in patients with NCFB, and disease severity was assessed utilizing enrollment data from the Bronchiectasis and NTM Research Registry (BRR). NCFB disease severity was evaluated using modified versions of instruments validated in large international cohorts, the Bronchiectasis Severity Index (BSI) and FACED. The findings of this study indicate that PA infection is common in NCFB patients, and the presence of PA in patients’ sputum is associated with moderate and high severity of bronchiectasis. In addition, the results of this study suggest that the two severity assessment instruments classify patients with NCFB differently, which may be attributed to the greater number of severity markers utilized in the calculation of the BSI compared to FACED.
In conclusion, the proposed dissertation aims to enhance understanding of differences in health outcomes between genotypes of AATD within AlphaNet registry, and to guide future health-promoting behaviors. It highlights the burden of common symptoms such as cough and phlegm in patients with COPD within COPD PPRN and their association with patients’ quality of life. In addition, it introduces modified indices of NCFB severity and emphasizes high burden of the disease in patients with presence of PA within the US BRR.
APA, Harvard, Vancouver, ISO, and other styles
22

Hamonier, Julien. "Analyse par ondelettes du mouvement multifractionnaire stable linéaire." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2012. http://tel.archives-ouvertes.fr/tel-00753510.

Full text
Abstract:
Fractional Brownian motion (fBm) is an important modeling tool used in several fields (biology, economics, finance, geology, hydrology, telecommunications, etc.); however, this model does not always give a sufficiently faithful description of reality because of, among others, the following two limitations: on the one hand, fBm is a Gaussian process, and on the other hand, its local roughness (measured by a Hölder exponent) remains the same along its whole trajectory, since this roughness everywhere equals the Hurst parameter H, which is a constant. To remedy this, S. Stoev and M.S. Taqqu (2004 and 2005) introduced the linear multifractional stable motion (LMSM); this strictly α-stable (StαS) stochastic process, denoted {Y(t)}, is obtained by replacing the Brownian measure with a StαS measure and the Hurst parameter H with a function H(.) depending on t. We systematically place ourselves in the case where this function is continuous and takes values in the open interval (1/α, 1). It should also be noted that for all t, Y(t)=X(t,H(t)), where {X(u,v)} is the StαS stochastic field such that, for each fixed v, the process {X(u,v)} is a linear fractional stable motion. The objective of the thesis is to carry out an in-depth study of the LMSM by means of wavelet methods; it consists mainly of three parts. (1) We determine fine global and local moduli of continuity of {Y(t)}; this rests essentially on a new representation of {X(u,v)} as a random series, whose almost sure convergence in certain Hölder spaces we prove. (2) We introduce, via the Haar basis, another random-series representation of {X(u,v)}; the latter enables an efficient simulation method for the LMSM, as well as for its high- and low-frequency parts. (3) We construct wavelet estimators of the functional parameter H(.) of the LMSM, as well as of its stability parameter α.
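The idea of estimating a local regularity exponent can be illustrated with a crude structure-function regression: average absolute increments over several scales in a window around t and read an estimate of H(t) off the log-log slope. The thesis's wavelet-based estimators handle the heavy-tailed stable case properly; the sketch below is only the Gaussian-intuition version, and every choice in it (window, scales, names) is an assumption of this illustration.

```python
import numpy as np

def local_hurst(x, t_idx, window=128, scales=(1, 2, 4, 8)):
    """Crude local regularity estimate around index t_idx: regress the
    log of the mean absolute increment at several scales on the log of
    the scale; the slope plays the role of H(t). A heuristic sketch,
    not the thesis's wavelet estimator for the stable case.
    """
    lo = max(0, t_idx - window // 2)
    seg = np.asarray(x, float)[lo:t_idx + window // 2]
    log_s = [np.log(s) for s in scales]
    log_m = [np.log(np.abs(seg[s:] - seg[:-s]).mean()) for s in scales]
    return np.polyfit(log_s, log_m, 1)[0]
```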
APA, Harvard, Vancouver, ISO, and other styles
23

Lucena, Filho Walfredo da Costa. "Mecanismo de controle de potência para estimativa de etiquetas em redes de identificação por rádio frequência." Universidade Federal do Amazonas, 2015. http://tede.ufam.edu.br/handle/tede/4722.

Full text
Abstract:
An RFID system is typically composed of a reader and a set of tags. An anti-collision algorithm is necessary to avoid collisions between tags that respond simultaneously to a reader. The most widely used anti-collision algorithm is DFSA (Dynamic Framed Slotted ALOHA), due to its simplicity and low computational cost. In DFSA algorithms, the optimal TDMA (Time Division Multiple Access) frame size must equal the number of unread tags. If the exact number of tags is unknown, the DFSA algorithm needs a tag estimator to approach optimal performance. Current applications require the identification of large numbers of tags, which increases collisions and hence degrades the performance of traditional DFSA algorithms. This work proposes a power control mechanism to estimate the number of tags in radio frequency identification (RFID) networks. The mechanism divides the interrogation zone into subgroups of tags and then uses RSSI (Received Signal Strength Indicator) measurements to estimate the number of tags in each subarea. The mechanism is simulated and evaluated using a simulator developed in C/C++. In this study, we compare the number of slots and the identification time with those of the ideal DFSA algorithm and the Q algorithm of the EPCglobal standard. Simulation results show that the proposed mechanism achieves 99% of the performance of ideal DFSA in dense networks, where there are many tags. Compared with the Q algorithm, performance improves by 6.5%. It is also important to highlight that the reader's energy consumption is about 63% lower than with ideal DFSA.
APA, Harvard, Vancouver, ISO, and other styles
24

Hee, Sonke. "Computational Bayesian techniques applied to cosmology." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/273346.

Full text
Abstract:
This thesis presents work around three themes: dark energy, gravitational waves and Bayesian inference. Neither dark energy nor gravitational-wave physics is yet well constrained, and both present interesting challenges for Bayesian inference, which attempts to quantify our knowledge of the universe given our astrophysical data. A dark energy equation-of-state reconstruction analysis finds that the data favour the vacuum dark energy equation of state $w = -1$. Deviations from vacuum dark energy are shown to favour the super-negative 'phantom' dark energy regime of $w < -1$, but at low statistical significance. The constraining power of various datasets is quantified, finding that data constraints peak around redshift $z = 0.2$ due to baryon acoustic oscillation and supernova constraints, whilst cosmic microwave background and Lyman-$\alpha$ forest constraints are less significant. Specific models with a conformal time symmetry in the Friedmann equation, and with an additional dark energy component, are tested and shown to be competitive with the vacuum dark energy model by Bayesian model selection analysis; that they are not ruled out is believed to be largely due to data quality insufficient for deciding between the existing models. Recent detections of gravitational waves by the LIGO collaboration enable the first gravitational-wave tests of general relativity. An existing test from the literature is used and sped up significantly by a novel method developed in this thesis. The test computes posterior odds ratios, and the new method is shown to compute these accurately and efficiently, providing roughly a 100-fold reduction in the number of likelihood calculations required relative to computing evidences at a given accuracy. Further testing may establish a significant advance in Bayesian model selection using nested sampling, as the method is completely general and straightforward to implement. We note that efficiency gains are not guaranteed and may be problem-specific: further research is needed.
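For readers unfamiliar with posterior odds, here is a toy sketch of Bayesian model selection by brute-force Monte Carlo evidence estimation (entirely illustrative; the thesis's contribution is precisely a method that avoids this expensive evidence computation):

```python
import numpy as np

def log_evidence_mc(loglike, prior_draws):
    """Simple Monte Carlo evidence Z = E_prior[L(theta)], in log space."""
    ll = np.array([loglike(t) for t in prior_draws])
    m = ll.max()
    return m + np.log(np.mean(np.exp(ll - m)))  # log-sum-exp for stability

# Toy problem: is the mean of Gaussian data zero (M0),
# or free with a N(0,1) prior (M1)?
rng = np.random.default_rng(0)
data = rng.normal(0.3, 1.0, 50)

def loglike(mu):
    return -0.5 * np.sum((data - mu) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)

logZ0 = loglike(0.0)                                      # M0: no free parameter
logZ1 = log_evidence_mc(loglike, rng.normal(0, 1, 5000))  # M1: marginalise mu
print("log posterior odds (equal model priors):", logZ0 - logZ1)
```

Every prior draw costs one likelihood evaluation, which is why methods that compute odds ratios with far fewer likelihood calls matter for expensive cosmological likelihoods.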
APA, Harvard, Vancouver, ISO, and other styles
25

Tardivel, Patrick. "Représentation parcimonieuse et procédures de tests multiples : application à la métabolomique." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30316/document.

Full text
Abstract:
Let Y be a Gaussian vector distributed according to N(m, σ²Idn) and X a matrix of dimension n x p, with Y observed, m unknown, and σ and X known. In the linear model, m is assumed to be a linear combination of the columns of X. In small dimension, when n ≥ p and ker(X) = {0}, there exists a unique parameter β* such that m = Xβ*; we can then rewrite Y as Y = Xβ* + ε. In this small-dimensional Gaussian linear model framework, we construct a new multiple testing procedure controlling the FWER to test the null hypotheses β*i = 0 for i in [[1,p]]. This procedure is applied in metabolomics through the freeware ASICS, available online; ASICS identifies and quantifies metabolites via the analysis of NMR spectra. In high dimension, when n < p, we have ker(X) ≠ {0}, so the parameter β* described above is no longer unique. In the noiseless case, when σ = 0 and thus Y = m, we show that the solutions of the linear system of equations Y = Xβ having a minimal number of non-zero components are obtained by minimizing the ℓα "norm" with α small enough.
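For orientation, here is a minimal sketch of a standard FWER-controlling baseline for the hypotheses β*i = 0 (Holm's step-down procedure on OLS t-tests; this is a textbook baseline, not the new procedure constructed in the thesis, and it assumes X has full column rank):

```python
import numpy as np
from scipy import stats

def holm_ols_tests(X, Y, alpha=0.05):
    """Test H0: beta_i = 0 for each column of X via OLS t-tests,
    controlling the FWER with Holm's step-down procedure."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, Y, rcond=None)[0]
    resid = Y - X @ beta
    s2 = resid @ resid / (n - p)                       # residual variance
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    pvals = 2 * stats.t.sf(np.abs(beta / se), df=n - p)
    order = np.argsort(pvals)                          # most significant first
    reject = np.zeros(p, dtype=bool)
    for k, i in enumerate(order):
        if pvals[i] <= alpha / (p - k):                # step-down threshold
            reject[i] = True
        else:
            break                                      # stop at first acceptance
    return beta, pvals, reject
```

Holm dominates plain Bonferroni at the same FWER level; procedures tailored to the design, such as the one proposed here, can improve on both.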
APA, Harvard, Vancouver, ISO, and other styles
26

Gomes, André Filipe Correia. "Nonparametric estimation of Expected Shortfall." Master's thesis, 2017. http://hdl.handle.net/10316/84757.

Full text
Abstract:
Master's dissertation in Quantitative Methods in Finance, presented to the Faculdade de Ciências e Tecnologia
The Expected Shortfall is an increasingly popular risk measure in financial risk management. This work studies the asymptotic statistical properties of two nonparametric estimators of Expected Shortfall under the assumption of dependence in the time series of interest. The first estimator can be seen as an average of values that satisfy a certain property, whereas the second is a kernel-smoothed version of the first. The dependence assumption considered is one of the weakest (alpha-mixing), for which reason the control of the random variables involved (namely their variances and covariances) receives considerable emphasis in this work. Thanks to this control, we establish a Central Limit Theorem for each estimator, from which we draw relevant conclusions about the efficiency of both estimators.
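Both estimators are easy to write down; here is a minimal sketch (losses taken as positive numbers; the Gaussian-cdf smoothing and the rule-of-thumb bandwidth are our illustrative choices, not necessarily the exact forms analysed in the thesis):

```python
import numpy as np
from scipy.stats import norm

def es_empirical(losses, level=0.95):
    """Plain nonparametric ES: average of losses at or beyond the
    empirical VaR quantile."""
    losses = np.asarray(losses, float)
    var = np.quantile(losses, level)
    return losses[losses >= var].mean()

def es_kernel(losses, level=0.95, h=None):
    """Kernel-smoothed variant: the hard indicator {loss >= VaR} is
    replaced by a Gaussian-cdf weight, smoothing the tail average."""
    losses = np.asarray(losses, float)
    if h is None:
        h = 1.06 * losses.std() * len(losses) ** (-0.2)  # rule-of-thumb bandwidth
    var = np.quantile(losses, level)
    w = norm.cdf((losses - var) / h)                     # smooth tail membership
    return np.sum(w * losses) / np.sum(w)
```

Under alpha-mixing, the covariances between the terms of these averages no longer vanish, which is what makes the variance control, and hence the Central Limit Theorems, the technical heart of the work.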
APA, Harvard, Vancouver, ISO, and other styles
27

Lin, Fang-Ju, and 林芳如. "Image Super Resolution, Image Alpha Estimation, and Metric Learning for Image Classification." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/w6g95t.

Full text
Abstract:
PhD
National Chiao Tung University
Institute of Computer Science and Engineering
107 (ROC calendar, i.e., 2018)
Computer vision uses digital cameras to emulate human vision, and computer programs and algorithms to emulate human understanding of and reasoning about what is seen. Computer vision algorithms draw on a wide range of disciplines, including artificial intelligence, machine learning, image processing, and neurobiology. In this thesis, we discuss three applications that combine machine learning and pattern recognition algorithms, operating on pixel, patch, and feature units of the image to achieve better results than traditional image processing algorithms.

In the first part, image super resolution (SR) is the process of generating a high-resolution (HR) image from one or more low-resolution (LR) inputs. Many SR methods have been proposed, but recovering the small-scale structure of an SR image remains a challenging task. We hence propose a single-image SR algorithm that combines the benefits of both internal and external SR methods. First, we estimate the enhancement weight of each LR-HR image patch pair. Next, we multiply each patch by its estimated enhancement weight to generate an initial SR patch. We then recover the missing high-frequency information from the high-resolution patches to generate the final SR image, and employ iterative back-projection to further enhance visual quality. The method is compared qualitatively and quantitatively with several state-of-the-art methods, and the experimental results indicate that the proposed framework provides high contrast and better visual quality, particularly in non-smooth texture areas.

The second part presents a new approach for extracting foreground elements from an image by means of color and opacity (alpha) estimation, which considers the available samples in a searching window of variable size for each unknown pixel. Alpha-matting, conventionally defined as the task of softly extracting foreground objects from a single input image, plays a central role in image processing; the challenging case of natural image matting has received considerable research attention since there are virtually no restrictions on the characterization of background regions. Many algorithms exist for estimating foreground and background samples, along with opacity values, for all unknown pixels of an image. Given a trimap partition of an input image into background/foreground/unknown regions, a straightforward approach for determining an alpha value is to sample (collect) candidate foreground and background colors for each unknown pixel defined in the trimap. The proposed sampling method is robust in that similar sampling results are generated for input trimaps with different unknown regions. Moreover, after an initial estimation of the alpha matte, a fully-connected conditional random field (CRF) is adopted to correct the predicted matte at the pixel level.

In the third part, we develop effective weather features and solve the problem of weather recognition using metric learning. Recognizing weather conditions from a single image in large datasets is a challenging problem in computer vision; although previous approaches classify weather conditions into classes such as sunny and cloudy, their performance is still far from satisfactory. Based on observations of outdoor images under different weather conditions, we define several categories of more robust weather features and improve classification accuracy using metric learning approaches. The results indicate that our method provides much better performance than previous methods. The proposed method is also straightforward to implement and computationally inexpensive, demonstrating the effectiveness of metric learning methods for computer vision problems.
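The iterative back-projection step mentioned in the first part is easy to prototype; here is a minimal sketch under the assumption of a bicubic resampling model (the zoom-based blur/decimation stand-in and the function names are our illustrative choices, not the thesis's implementation):

```python
import numpy as np
from scipy.ndimage import zoom

def iterative_back_projection(lr, scale=2, iters=10, step=1.0):
    """Refine an upsampled image so that downsampling it reproduces
    the LR input: classic IBP with bicubic zoom standing in for the
    camera's blur/decimation model."""
    sr = zoom(lr, scale, order=3)                    # initial SR guess
    for _ in range(iters):
        simulated_lr = zoom(sr, 1 / scale, order=3)  # re-observe the SR image
        err = lr - simulated_lr                      # reconstruction error
        sr = sr + step * zoom(err, scale, order=3)   # back-project the error
    return sr
```

Each iteration pushes the SR estimate toward consistency with the observed LR image, which is why IBP is a common final polishing step in SR pipelines like the one described above.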
APA, Harvard, Vancouver, ISO, and other styles
28

Achim, Alin. "Novel Bayesian multiscale methods for image denoising using alpha-stable distributions." 2003. http://nemertes.lis.upatras.gr/jspui/handle/10889/1265.

Full text
Abstract:
Before launching into ultrasound research, it is important to recall that the ultimate goal is to provide the clinician with the best possible information with which to make an accurate diagnosis. Ultrasound images are inherently affected by speckle noise, which is due to image formation under coherent waves. It therefore seems sensible to reduce speckle artifacts before performing image analysis, provided that the image texture that might distinguish one tissue from another is preserved. The main goal of this thesis was the development of novel speckle suppression methods for medical ultrasound images in the multiscale wavelet domain. We first showed, through extensive modeling, that the subband decompositions of ultrasound images have significantly non-Gaussian statistics that are best described by families of heavy-tailed distributions such as the alpha-stable. We then developed Bayesian estimators that exploit these statistics: using the alpha-stable model, we designed both minimum absolute error (MAE) and maximum a posteriori (MAP) estimators for alpha-stable signals mixed with Gaussian noise. The resulting noise-removal processors perform non-linear operations on the data, and we relate this non-linearity to the degree of non-Gaussianity of the data. We compared our techniques with classical speckle filters and current state-of-the-art soft- and hard-thresholding methods on actual medical ultrasound images and quantified the performance improvement achieved. Finally, we showed that the proposed processors can find application in other areas of interest as well, choosing synthetic aperture radar (SAR) images as an illustrative example.
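For context, here is a minimal sketch of the classical wavelet soft-thresholding baseline that such Bayesian processors are compared against (universal threshold with a MAD noise estimate; this is the standard baseline, not the thesis's alpha-stable MAP/MAE estimators), using the PyWavelets package:

```python
import numpy as np
import pywt

def wavelet_soft_denoise(img, wavelet="db4", level=3):
    """Baseline denoiser: soft-threshold all detail subbands at the
    universal threshold, with sigma estimated from the MAD of the
    finest diagonal subband."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745  # robust noise estimate
    t = sigma * np.sqrt(2 * np.log(img.size))           # universal threshold
    out = [coeffs[0]]                                   # keep approximation as-is
    for detail in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, t, mode="soft") for d in detail))
    return pywt.waverec2(out, wavelet)
```

The Bayesian alpha-stable estimators replace this fixed shrinkage rule with a data-adapted non-linearity, which is where the reported performance gains come from.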
APA, Harvard, Vancouver, ISO, and other styles
29

Chang, Yen-Chieh, and 張彥介. "Estimation of Treatment Effects without Monotonicity Assumption in Dose-Finding Studies — The Application of alpha-Splitting Procedure." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/45927947851332165268.

Full text
Abstract:
Master's
National Cheng Kung University
Department of Statistics (Master's and PhD Program)
96 (ROC calendar, i.e., 2007)
During the process of drug development, dose-response studies are conducted to evaluate the treatment effects at various doses of the test drug. In such clinical trials, subjects or patients are randomly allocated to separate groups to receive either one of several increasing dose levels of the test drug or a placebo. The primary focus of dose-response studies is usually on identifying the minimal effective dose (MED) and on estimating the treatment effect at each dose level. Following the suggestion of the International Conference on Harmonization (ICH) E9 guideline that confidence intervals are the preferable way to present treatment effects, we propose a method that constructs simultaneous confidence intervals for the treatment effects and defines the MED accordingly. As a rule of thumb, the relation between dose level and response is assumed to be monotone; sometimes, however, the response may drop when the dose exceeds a certain level, a phenomenon known as non-monotone dose-response. In this work, we apply the alpha-splitting approach (Tu, 2006), which divides the pre-specified significance level into a testing part and an estimating part, to the method proposed by Stefansson, Kim, and Hsu (1988), with a view to obtaining more precise confidence bounds when the dose-response relation is non-monotone. Through simulations, our extended method demonstrates the ability to construct more informative confidence intervals for the treatment effects whether the dose-response relation is monotone or non-monotone.
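As a simplified illustration of confidence-interval-based MED identification, here is a sketch using a plain Bonferroni adjustment (standing in for the alpha-splitting and Stefansson-Kim-Hsu machinery; the function, the zero-effect threshold, and the degrees-of-freedom choice are our assumptions):

```python
import numpy as np
from scipy import stats

def med_lower_bounds(placebo, dose_groups, alpha=0.05):
    """Bonferroni-adjusted one-sided lower confidence bounds for each
    dose-minus-placebo effect; the MED is declared at the lowest dose
    whose bound exceeds zero."""
    k = len(dose_groups)
    bounds = []
    for g in dose_groups:
        diff = np.mean(g) - np.mean(placebo)
        se = np.sqrt(np.var(g, ddof=1) / len(g)
                     + np.var(placebo, ddof=1) / len(placebo))
        df = len(g) + len(placebo) - 2            # simple df choice
        bounds.append(diff - stats.t.ppf(1 - alpha / k, df) * se)
    med = next((i + 1 for i, b in enumerate(bounds) if b > 0), None)
    return bounds, med
```

Splitting alpha between testing and estimation, as in the thesis, aims to spend less of the error budget on doses that are clearly effective and thereby tighten the bounds where they matter.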
APA, Harvard, Vancouver, ISO, and other styles
30

Rauf, Awais. "Estimation of Pile Capacity by Optimizing Dynamic Pile Driving Formulae." Thesis, 2012. http://hdl.handle.net/10012/6651.

Full text
Abstract:
Piles have been used since prehistoric times in areas with weak subsurface conditions, either to reinforce existing ground, to create new ground for habitation or trade, or to support bridges and buildings. Originally, piles were made of timber and driven with drop hammers using very heavy ram weights; as technology improved, so did both the materials piles are made of and the equipment itself. Piling is now a multibillion-dollar-a-year industry, so more accurate prediction methods can represent significant savings in cost, material, and manpower. Multiple methods have been developed to predict pile capacity. These range from static theoretical formulae, based on geotechnical investigation carried out before any driving with specific pile and hammer types; to semi-empirical dynamic formulae used during actual driving operations; to more recently developed computer modeling and signal-matching programs, calibrated against site conditions during initial geotechnical investigations or test piling; to full-scale static load tests, in which piles are loaded to a predetermined value or to failure. In this thesis, dynamic formulae are used to predict the capacity of piles installed with drop and diesel hammers, and the predictions are compared to the results of pile load tests, which are taken as the true measure of developed bearing capacity. The dynamic formulae examined are the Engineering News Record (ENR), Gates, Federal Highway Administration (FHWA) modified Gates, Hiley, and Ontario Ministry of Transportation (MTO) modified Hiley formulae. The methods of investigation include calculating pile capacities from the formulae as published, omitting the factors of safety, revising the formulae with averaged coefficients, and conducting multiple regression analysis to solve for one or two coefficients simultaneously, in order to determine whether more accurate bearing capacity predictions are possible. To determine objectively which formulae provide the most accurate bearing capacities, the predicted capacities are compared to results obtained from static pile load tests, and simple statistics are computed on the resulting data set, including regression analyses, standard deviations, coefficients of variation, coefficients of determination, and correlation values.
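As a worked example of a dynamic formula, here is a sketch of the Engineering News formula in its commonly quoted textbook form (the coefficients and units are widely cited values, not taken from this thesis, and should be verified against the source before any real use):

```python
def enr_capacity(w_lbs, h_in, s_in, hammer="drop", fs=6.0):
    """Engineering News formula, textbook form: ultimate resistance
    R = W*h / (s + c), with ram weight W [lb], drop height h [in],
    set per blow s [in], and loss constant c = 1.0 in for drop
    hammers or 0.1 in for steam hammers; the traditional factor of
    safety is 6."""
    c = 1.0 if hammer == "drop" else 0.1
    r_ult = w_lbs * h_in / (s_in + c)
    return r_ult, r_ult / fs

# e.g., a 5,000 lb ram dropped 96 in with an average set of 0.4 in/blow:
r_ult, r_allow = enr_capacity(5000, 96, 0.4)   # ~343 kip ultimate, ~57 kip allowable
```

The thesis's regression-based revision of such formulae amounts to re-fitting constants like c (and any leading coefficient) against measured static load test capacities.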
APA, Harvard, Vancouver, ISO, and other styles
31

Chiang, Che-Yuan, and 江哲元. "Performance Analysis of Multi-channel Slotted ALOHA System using Iterative Contending-user Estimation Method." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/75144211451774381194.

Full text
Abstract:
Master's
National Taiwan University of Science and Technology
Department of Electronic Engineering
101 (ROC calendar, i.e., 2012)
Multi-channel slotted ALOHA is the main channel access scheme for the random access channels of next-generation cellular networks. Seo and Leung [1] proposed a novel Markov-chain-based analytical model to analyze the performance of multi-channel slotted ALOHA under uniform backoff and exponential backoff policies. Their model assumes that each terminal adopts delay-first transmission (i.e., performs a random backoff before transmitting a new packet). However, this assumption is not enforced in all networks. This work presents a difference-equation-based analytical model for the same problem considered in [1]. The proposed model is developed on the basis of an iterative contending-user estimation method and can be applied to general networks with or without delay-first transmission. Computer simulations were conducted to verify the accuracy of the analysis; the results show that the proposed model can accurately estimate the system throughput, average access delay, and packet-dropping probability of the networks.
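The underlying contention model is easy to simulate; here is a minimal Monte Carlo sketch of one multi-channel slotted ALOHA slot (illustrative only, not the difference-equation model itself):

```python
import numpy as np

def aloha_successes(n_users, n_channels, trials=10_000, rng=None):
    """Monte Carlo mean number of successful transmissions per slot:
    each contending user picks a channel uniformly at random and
    succeeds only if it is alone on that channel."""
    rng = np.random.default_rng() if rng is None else rng
    total = 0
    for _ in range(trials):
        picks = rng.integers(0, n_channels, n_users)
        counts = np.bincount(picks, minlength=n_channels)
        total += int(np.sum(counts == 1))      # singleton channels succeed
    return total / trials

# Closed-form check: E[successes] = n * (1 - 1/m) ** (n - 1)
print(aloha_successes(20, 16), 20 * (1 - 1 / 16) ** 19)
```

Analytical models like the one proposed here essentially iterate this expectation over slots, tracking how many users remain in contention after backoff and retransmission.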
APA, Harvard, Vancouver, ISO, and other styles
32

Ndongo, Mor. "Les processus à mémoire longue saisonniers avec variance infinie des innovations et leurs applications." PhD thesis, 2011. http://tel.archives-ouvertes.fr/tel-00947321.

Full text
Abstract:
In this work, we carry out a thorough study of seasonal long-memory processes with infinite-variance innovations. In the first chapter, we recall the various properties of univariate α-stable laws (stability, computation of moments, simulation, etc.). We then introduce two infinite-variance models widely used in the statistical literature: α-stable ARMA models and α-stable ARFIMA models, developed respectively by Mikosch et al. [57] and by Kokoszka and Taqqu [45]. Noting the limitations of these models, in the second chapter we construct new models called symmetric α-stable ARFISMA processes. These models allow us to account, within a single model, for the possible presence of three characteristic features often encountered in finance, telecommunications and hydrology: long memory, seasonality, and infinite variance. After concluding the chapter with a simulation study of the asymptotic behaviour of the model, in the third chapter we address the problem of estimating the parameters of an α-stable ARFISMA process. We present several estimation methods: a semiparametric method developed by Reisen et al. [67], a classical Whittle method used by Mikosch et al. [57] and by Kokoszka and Taqqu [45], and another approach to the Whittle method based on evaluating the Whittle likelihood by Markov chain Monte Carlo (MCMC). Numerous simulations, carried out with the R software [64], allow these estimation methods to be compared. However, these methods do not estimate the innovation parameter α. We therefore introduce, in the fourth chapter, two estimation methods: the empirical characteristic function method developed by Knight and Yu [43], and the generalized method of moments based on continuous conditional moments, suggested by Carrasco and Florens [16]. Monte Carlo simulations are carried out to compare the asymptotic properties of the estimators. Finally, in the fifth chapter, we apply this model to flow data of the Senegal River at the Bakel station. For comparison, we consider the classical linear model of Box and Jenkins [11], and we compare their predictive abilities.
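As an illustration of the empirical-characteristic-function idea used for estimating the innovation parameter, here is a minimal regression-type sketch for i.i.d. symmetric α-stable data (a simplified variant in the spirit of Knight and Yu, not their exact estimator; the t-grid is an arbitrary choice and matters in practice):

```python
import numpy as np

def ecf_alpha(x, t_grid=None):
    """Estimate (alpha, sigma) of SaS data by regressing
    log(-log|ecf(t)|) on log t, since for SaS laws
    -log phi(t) = (sigma * |t|) ** alpha."""
    x = np.asarray(x, float)
    t = np.linspace(0.1, 1.0, 10) if t_grid is None else t_grid
    ecf = np.array([np.abs(np.mean(np.exp(1j * tk * x))) for tk in t])
    y = np.log(-np.log(ecf))
    slope, intercept = np.polyfit(np.log(t), y, 1)  # slope = alpha
    alpha = slope
    sigma = np.exp(intercept / alpha)               # intercept = alpha*log(sigma)
    return alpha, sigma
```

For ARFISMA data the innovations must first be recovered (e.g., from model residuals) before such an estimator is applied, which is part of what makes the joint estimation problem studied here delicate.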
APA, Harvard, Vancouver, ISO, and other styles