To see the other types of publications on this topic, follow the link: Optimal estimator.

Dissertations / Theses on the topic 'Optimal estimator'

Consult the top 50 dissertations / theses for your research on the topic 'Optimal estimator.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Laurich, P. H. (Peter Hermann). "Modeling of a wave generator and the design of an optimal estimator." Dissertation in Electrical Engineering, Carleton University, Ottawa, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sun, Xusheng. "Optimal distributed detection and estimation in static and mobile wireless sensor networks." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44825.

Full text
Abstract:
This dissertation develops optimal algorithms for distributed detection and estimation in static and mobile sensor networks. In distributed detection or estimation scenarios in clustered wireless sensor networks, sensor motes observe their local environment, make decisions or quantize these observations into local estimates of finite length, and send/relay them to a Cluster-Head (CH). For event detection tasks that are subject to both measurement errors and communication errors, we develop an algorithm that combines a Maximum a Posteriori (MAP) approach for local and global decisions with low-complexity channel codes and processing algorithms. For event estimation tasks that are subject to measurement errors, quantization errors and communication errors, we develop an algorithm that uses dithered quantization and channel compensation to ensure that each mote's local estimate received by the CH is unbiased, and then lets the CH fuse these estimates into a global one using a Best Linear Unbiased Estimator (BLUE). We then determine the minimum energy required for the network to produce an estimate with a prescribed error variance and show how this energy must be allocated amongst the motes in the network. In mobile wireless sensor networks, the mobility model governing each node affects the detection accuracy at the CH and the energy consumption needed to achieve this level of accuracy. Correlated Random Walks (CRWs) have been proposed as mobility models that account for time dependency, geographical restrictions and nonzero drift. Hence, the solution to the continuous-time, 1-D, finite-state-space CRW is provided and its statistical behavior is studied both analytically and numerically. The impact of sensor motion on the network's performance is also studied.
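For uncorrelated, unbiased local estimates with known error variances, the BLUE fusion step described above reduces to inverse-variance weighting. A minimal sketch (variable names are illustrative, not taken from the dissertation):

```python
import numpy as np

def blue_fuse(estimates, variances):
    """Fuse unbiased, uncorrelated local estimates into a global one by
    inverse-variance weighting, which is the BLUE in this setting."""
    w = 1.0 / np.asarray(variances, dtype=float)   # weight = 1 / error variance
    est = np.asarray(estimates, dtype=float)
    fused = np.sum(w * est) / np.sum(w)
    fused_var = 1.0 / np.sum(w)                    # variance of the fused estimate
    return fused, fused_var

# Example: three motes report local estimates with different error variances.
theta_hat, var_hat = blue_fuse([2.1, 1.9, 2.4], [0.5, 0.2, 1.0])
```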
APA, Harvard, Vancouver, ISO, and other styles
3

Xiu, Wanjing. "FAULT LOCATION ALGORITHMS, OBSERVABILITY AND OPTIMALITY FOR POWER DISTRIBUTION SYSTEMS." UKnowledge, 2014. http://uknowledge.uky.edu/ece_etds/48.

Full text
Abstract:
Power outages usually lead to customer complaints and revenue losses. Consequently, fast and accurate fault location on electric lines is needed so that repair work can be carried out as quickly as possible. Chapter 2 describes novel fault location algorithms for radial and non-radial ungrounded power distribution systems. For both types of systems, fault location approaches using line-to-neutral or line-to-line measurements are presented. It is assumed that the network structure and parameters are known, so that the during-fault bus impedance matrix of the system can be derived. Functions of the bus impedance matrix and the available substation measurements are formulated, from which the unknown fault location can be estimated. Evaluation studies on fault location accuracy and on the robustness of the fault location methods to load variations and measurement errors have been performed. Most existing fault location methods rely on measurements obtained from meters installed in power systems. To get the most from a limited number of available meters, optimal meter placement methods are needed. Chapter 3 presents a novel optimal meter placement algorithm to keep the system observable in terms of fault location determination. The observability of a fault location in power systems is defined first. Then, fault location observability analysis of the whole system is performed to determine the least number of meters needed, and their best locations, to achieve fault location observability. Case studies on fault location observability with limited meters are presented. Optimal meter deployment results for the studied system with equal and varying monitoring costs per meter are displayed. To enhance fault location accuracy, an optimal fault location estimator for power distribution systems with distributed generation (DG) is described in Chapter 4. Voltages and currents at locations with power generation are adopted to give the best estimate of the variables, including measurements, fault location and fault resistances. A chi-square test is employed to detect and identify bad measurements. Evaluation studies are carried out to validate the effectiveness of the optimal fault location estimator. A set of measurements containing one bad measurement is used to test whether bad data can be identified successfully by the presented method.
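In standard state-estimation practice, the chi-square bad-data check mentioned for Chapter 4 is a test on the weighted least-squares objective; a small sketch under that assumption (the residuals, noise levels and significance level are placeholders, not values from the dissertation):

```python
import numpy as np
from scipy.stats import chi2

def bad_data_detected(residuals, sigmas, n_states, alpha=0.01):
    """Chi-square test on the weighted-least-squares objective J(x_hat):
    flag bad data if J exceeds the chi-square quantile with
    (n_measurements - n_states) degrees of freedom."""
    r = np.asarray(residuals, float) / np.asarray(sigmas, float)
    J = float(np.sum(r ** 2))
    dof = len(r) - n_states
    return J > chi2.ppf(1.0 - alpha, dof), J

# Example: 10 measurement residuals, 4 estimated state variables.
flag, J = bad_data_detected(np.random.normal(size=10), np.ones(10), n_states=4)
```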
APA, Harvard, Vancouver, ISO, and other styles
4

Schiavon, Francesca <1984>. "An optimal estimator for the correlation of CMB anisotropies with Large Scale Structures and its application to WMAP-7year and NVSS." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amsdottorato.unibo.it/4793/1/schiavon_francesca_tesi.pdf.

Full text
Abstract:
In this thesis we present the implementation of the quadratic maximum likelihood (QML) method, suited to estimating the angular power spectrum of the cross-correlation between cosmic microwave background (CMB) and large scale structure (LSS) maps as well as their individual auto-spectra. Such a tool is an optimal method (unbiased and with minimum variance) in pixel space and goes beyond all the previous harmonic analyses present in the literature. We describe the implementation of the QML method in the BolISW code and demonstrate its accuracy on simulated maps through a Monte Carlo. We apply this optimal estimator to WMAP 7-year and NRAO VLA Sky Survey (NVSS) data and explore the robustness of the angular power spectrum estimates obtained by the QML method. Taking into account the shot noise and one of the systematics (declination correction) in NVSS, we can safely use most of the information contained in this survey. On the contrary, we neglect the noise in temperature since WMAP is already cosmic variance dominated on large scales. Because of a discrepancy in the galaxy auto-spectrum between the estimates and the theoretical model, we use two different galaxy distributions: the first one with a constant bias $b$ and the second one with a redshift-dependent bias $b(z)$. Finally, we make use of the angular power spectrum estimates obtained by the QML method to derive constraints on the dark energy critical density in a flat $\Lambda$CDM model by different likelihood prescriptions. When using just the cross-correlation between WMAP7 and NVSS maps with 1.8° resolution, we show that $\Omega_\Lambda$ is about 70% of the total energy density, disfavouring an Einstein-de Sitter Universe at more than 2$\sigma$ CL (confidence level).
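For reference, the pixel-space quadratic estimator underlying QML analyses of this kind can be sketched as follows (a generic form in the spirit of Tegmark's method; the notation is ours, not taken from the thesis):

```latex
% x : pixelised data vector,  C = S(C_\ell) + N : modelled covariance (signal + noise)
y_\ell = \tfrac{1}{2}\, x^{\mathsf T} C^{-1} \frac{\partial C}{\partial C_\ell} C^{-1} x
       - \tfrac{1}{2}\,\mathrm{tr}\!\left( C^{-1} \frac{\partial C}{\partial C_\ell} C^{-1} N \right),
\qquad
\hat{C}_\ell = \sum_{\ell'} \left( F^{-1} \right)_{\ell\ell'} y_{\ell'},
\qquad
F_{\ell\ell'} = \tfrac{1}{2}\,\mathrm{tr}\!\left( C^{-1} \frac{\partial C}{\partial C_\ell}
                C^{-1} \frac{\partial C}{\partial C_{\ell'}} \right).
```

The Fisher-matrix inversion in the second equation is what makes the estimates unbiased with minimum variance in pixel space.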
APA, Harvard, Vancouver, ISO, and other styles
5

Schiavon, Francesca <1984>. "An optimal estimator for the correlation of CMB anisotropies with Large Scale Structures and its application to WMAP-7year and NVSS." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amsdottorato.unibo.it/4793/.

Full text
Abstract:
In this thesis we present the implementation of the quadratic maximum likelihood (QML) method, suited to estimating the angular power spectrum of the cross-correlation between cosmic microwave background (CMB) and large scale structure (LSS) maps as well as their individual auto-spectra. Such a tool is an optimal method (unbiased and with minimum variance) in pixel space and goes beyond all the previous harmonic analyses present in the literature. We describe the implementation of the QML method in the BolISW code and demonstrate its accuracy on simulated maps through a Monte Carlo. We apply this optimal estimator to WMAP 7-year and NRAO VLA Sky Survey (NVSS) data and explore the robustness of the angular power spectrum estimates obtained by the QML method. Taking into account the shot noise and one of the systematics (declination correction) in NVSS, we can safely use most of the information contained in this survey. On the contrary, we neglect the noise in temperature since WMAP is already cosmic variance dominated on large scales. Because of a discrepancy in the galaxy auto-spectrum between the estimates and the theoretical model, we use two different galaxy distributions: the first one with a constant bias $b$ and the second one with a redshift-dependent bias $b(z)$. Finally, we make use of the angular power spectrum estimates obtained by the QML method to derive constraints on the dark energy critical density in a flat $\Lambda$CDM model by different likelihood prescriptions. When using just the cross-correlation between WMAP7 and NVSS maps with 1.8° resolution, we show that $\Omega_\Lambda$ is about 70% of the total energy density, disfavouring an Einstein-de Sitter Universe at more than 2$\sigma$ CL (confidence level).
APA, Harvard, Vancouver, ISO, and other styles
6

El, Heda Khadijetou. "Choix optimal du paramètre de lissage dans l'estimation non paramétrique de la fonction de densité pour des processus stationnaires à temps continu." Thesis, Littoral, 2018. http://www.theses.fr/2018DUNK0484/document.

Full text
Abstract:
This thesis focuses on the choice of the smoothing parameter in the context of non-parametric estimation of the density function for stationary ergodic continuous-time processes. The accuracy of the estimation depends greatly on the choice of this parameter. The main goal of this work is to build an automatic bandwidth selection procedure and to establish its asymptotic properties under a general dependency framework that can easily be used in practice. The contribution consists of three parts. The first part reviews the literature on the subject, sets out the state of the art and places our contribution within it. In the second part, we design an automatic method for selecting the smoothing parameter when the density is estimated by the kernel method; this choice, stemming from the cross-validation method, is asymptotically optimal. In the third part, we establish asymptotic properties of the bandwidth obtained by cross-validation, given by almost sure convergence results.
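For intuition, the i.i.d. version of least-squares cross-validation bandwidth selection (a much simpler setting than the ergodic continuous-time processes treated in the thesis) can be sketched as follows; the Gaussian kernel and the grid search are illustrative choices:

```python
import numpy as np

def lscv_score(h, x):
    """Least-squares cross-validation criterion for a Gaussian-kernel
    density estimate; smaller is better."""
    x = np.asarray(x, float)
    n = len(x)
    d = x[:, None] - x[None, :]                      # pairwise differences
    # integral of the squared estimate (closed form for Gaussian kernels)
    term1 = np.exp(-d**2 / (4 * h**2)).sum() / (n**2 * h * np.sqrt(4 * np.pi))
    # leave-one-out term: exclude the diagonal (i == j)
    k = np.exp(-d**2 / (2 * h**2)) / np.sqrt(2 * np.pi)
    loo = (k.sum() - n * k[0, 0]) / ((n - 1) * h)    # k[i, i] = 1/sqrt(2*pi)
    term2 = 2.0 * loo / n
    return term1 - term2

def cv_bandwidth(x, grid):
    """Pick the bandwidth minimising the LSCV score over a grid."""
    return min(grid, key=lambda h: lscv_score(h, x))

# Example usage on synthetic data
rng = np.random.default_rng(0)
sample = rng.normal(size=200)
h_opt = cv_bandwidth(sample, np.linspace(0.05, 1.0, 40))
```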
APA, Harvard, Vancouver, ISO, and other styles
7

Vollant, Antoine. "Evaluation et développement de modèles sous-maille pour la simulation des grandes échelles du mélange turbulent basés sur l'estimation optimale et l'apprentissage supervisé." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAI118/document.

Full text
Abstract:
This work develops subgrid-model techniques and proposes diagnostic methods for large eddy simulation (LES) of turbulent mixing. Several models arising from these strategies are presented to illustrate the methods. The principle of LES is to resolve the largest scales of the turbulent flow, which are responsible for the main transfers, and to model the action of the small scales of the flow on the resolved scales. Formally, this operation amounts to filtering the equations describing turbulent mixing; subgrid terms then appear and must be modeled to close the equations. In this work, we rely on the classification of subgrid models into two categories: "functional" models, which reproduce the energy transfers between the resolved and modeled scales, and "structural" models, which seek to reproduce the exact subgrid term itself. The first major challenge is to evaluate the performance of subgrid models taking into account both their functional behavior (ability to reproduce the energy transfers) and their structural behavior (ability to reproduce the exact subgrid term). Diagnostics of subgrid models are enabled by the optimal estimator theory, which allows the potential for structural improvement of a model to be evaluated. These methods were first used to develop a family of algebraic subgrid models called DRGM, for "Dynamic Regularized Gradient Model". This family is based on the structural diagnostic of the terms given by the regularization of the gradient-model family. According to the tests performed, this new structural model family has better functional and structural performance than the original gradient-model family. The improved functional performance comes from the vanishing of the inverse energy transfers (backscatter) observed in models of the gradient family, which removes the unstable behavior typically observed for that family of models. We then propose to use the optimal estimator directly as a subgrid-scale model. Since the optimal estimator provides the model with the best structural performance for a given set of variables, we looked for the set of variables that optimizes this performance. Because this set of variables is large, we use surrogate functions of the artificial-neural-network type to approximate the optimal estimator, leading to the "Artificial Neural Network Model" (ANNM). These surrogate functions are built from databases that emulate the exact terms needed to determine the optimal estimator. Tests of this model show that it performs very well for simulation configurations close to the database used for its training, but that it may lack universality. To overcome this difficulty, we propose a hybrid approach combining algebraic models and neural-network-based surrogate models. This new family of models, ACM for "Adaptive Coefficient Model", is based on vector and tensor decompositions of the exact subgrid terms. These decompositions require the calculation of dynamic coefficients, which are modeled by artificial neural networks. The networks are trained with a learning method designed to directly optimize the structural and functional performance of the ACM. These hybrid models combine the universality of algebraic models with the high but specialized performance of the surrogate functions, resulting in models that are more universal than the ANNM.
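The "optimal estimator" used for these diagnostics is the conditional expectation of the exact subgrid term given a chosen set of input variables, which bounds the accuracy of any model built on those variables. A rough one-dimensional sketch of how it can be approximated from a database by binning (variable names are illustrative, not from the thesis):

```python
import numpy as np

def optimal_estimator_1d(inputs, exact_term, n_bins=64):
    """Approximate the optimal estimator E[exact_term | inputs] by averaging
    the exact subgrid term inside bins of the input variable; the residual
    variance is the irreducible (structural) error of any model that uses
    only this input set."""
    inputs = np.asarray(inputs, float)
    exact_term = np.asarray(exact_term, float)
    edges = np.quantile(inputs, np.linspace(0.0, 1.0, n_bins + 1))
    which = np.clip(np.searchsorted(edges, inputs, side="right") - 1, 0, n_bins - 1)
    cond_mean = np.array([exact_term[which == b].mean() if np.any(which == b)
                          else np.nan for b in range(n_bins)])
    prediction = cond_mean[which]                  # estimator evaluated on the data
    irreducible = np.nanmean((exact_term - prediction) ** 2)
    return edges, cond_mean, irreducible
```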
APA, Harvard, Vancouver, ISO, and other styles
8

Teixeira, Marcos Vinícius. "Estudos sobre a implementação online de uma técnica de estimação de energia no calorímetro hadrônico do atlas em cenários de alta luminosidade." Universidade Federal de Juiz de Fora (UFJF), 2015. https://repositorio.ufjf.br/jspui/handle/ufjf/4169.

Full text
Abstract:
This work studies techniques for online energy estimation in the ATLAS hadronic calorimeter (TileCal) at the LHC under high-luminosity scenarios. In high luminosity, signals coming from adjacent collisions are observed within the same window, producing signal pile-up. In this environment, the energy reconstruction method COF (Constrained Optimal Filter) outperforms the algorithm currently implemented in the system. However, COF requires matrix inversions to compute the pseudo-inverse of a convolution matrix, which makes its online implementation difficult. To avoid such matrix inversions, this work presents iterative methods, based on gradient descent, that adapt COF using only simple mathematical operations. The results demonstrate that the algorithms are capable of estimating the amplitudes of the piled-up signals, as well as the signal of interest, with efficiency similar to COF. Aiming at an online implementation, this work analyses the complexity of the iterative methods and proposes a processing architecture for FPGAs. Based on a sequential structure and fixed-point arithmetic, the results demonstrate that the developed architecture can execute the iterative method while meeting the processing-time requirements of TileCal.
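A minimal sketch of this kind of matrix-inversion-free amplitude estimation: gradient descent on the least-squares deconvolution problem, using only matrix-vector products per iteration. The pulse-shape matrix and sample vector names are illustrative, not the actual TileCal code:

```python
import numpy as np

def estimate_amplitudes(pulse_matrix, samples, n_iter=20, mu=None):
    """Iteratively solve the least-squares deconvolution
    min_a ||pulse_matrix @ a - samples||^2 by gradient descent, avoiding the
    explicit pseudo-inverse used by COF; each iteration needs only
    matrix-vector products and additions."""
    A = np.asarray(pulse_matrix, float)        # columns: shifted pulse shapes
    y = np.asarray(samples, float)             # digitised samples in the window
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size (chosen off-line)
    a = np.zeros(A.shape[1])
    for _ in range(n_iter):
        a -= mu * A.T @ (A @ a - y)            # one gradient step
    return a
```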
APA, Harvard, Vancouver, ISO, and other styles
9

Bringmann, Philipp. "Adaptive least-squares finite element method with optimal convergence rates." Doctoral thesis, Humboldt-Universität zu Berlin, 2021. http://dx.doi.org/10.18452/22350.

Full text
Abstract:
The least-squares finite element methods (LSFEMs) are based on the minimisation of the least-squares functional consisting of the squared norms of the residuals of first-order systems of partial differential equations. This functional provides a reliable and efficient built-in a posteriori error estimator and allows for adaptive mesh refinement. The established convergence analysis with rates for adaptive algorithms, as summarised in the axiomatic framework by Carstensen, Feischl, Page, and Praetorius (Comp. Math. Appl., 67(6), 2014), fails for two reasons. First, the least-squares estimator lacks prefactors in terms of the mesh size, which seemingly prevents a reduction under mesh refinement. Second, the first-order divergence LSFEMs measure the flux or stress errors in the H(div) norm and thus involve a data resolution error of the right-hand side f. These difficulties led to a twofold paradigm shift in the convergence analysis with rates for adaptive LSFEMs in Carstensen and Park (SIAM J. Numer. Anal., 53(1), 2015) for the lowest-order discretisation of the 2D Poisson model problem with homogeneous Dirichlet boundary conditions. Accordingly, a novel explicit residual-based a posteriori error estimator accomplishes the reduction property. Furthermore, a separate marking strategy in the adaptive algorithm ensures sufficient data resolution. This thesis presents the generalisation of these techniques to three linear model problems, namely the Poisson problem, the Stokes equations, and the linear elasticity problem. It verifies the axioms of adaptivity with separate marking by Carstensen and Rabus (SIAM J. Numer. Anal., 55(6), 2017) in three spatial dimensions. The analysis covers discretisations with arbitrary polynomial degree and inhomogeneous Dirichlet and Neumann boundary conditions. Numerical experiments confirm the theoretically proven optimal convergence rates of the h-adaptive algorithm.
APA, Harvard, Vancouver, ISO, and other styles
10

Ngwenya, Mzabalazo Z. "Investigating 'optimal' kriging variance estimation: analytic and bootstrap estimators." Master's thesis, University of Cape Town, 2011. http://hdl.handle.net/11427/11265.

Full text
Abstract:
Kriging is a widely used group of techniques for predicting unobserved responses at specified locations using a set of observations obtained from known locations. Kriging predictors are best linear unbiased predictors (BLUPs) and the precision of predictions obtained from them are assessed by the mean squared prediction error (MSPE), commonly termed the kriging variance.
APA, Harvard, Vancouver, ISO, and other styles
11

Waqar, Mohsin. "Robust nonlinear observer for a non-collocated flexible motion system." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22696.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Zhu, Zi. "Estimation of the Optimal Threshold Using Kernel Estimate and ROC Curve Approaches." Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/math_theses/107.

Full text
Abstract:
Credit line analysis plays a very important role in the housing market, especially given the large number of frozen loans during the current financial crisis. In this thesis, we apply kernel estimation and the Receiver Operating Characteristic (ROC) curve to the credit loan application process in order to help banks select the optimal threshold that differentiates good customers from bad customers. A better choice of threshold is essential for banks to prevent losses and maximize profit from loans. One of the main advantages of our study is that the method does not require us to specify the distribution of the latent risk score. We apply the bootstrap method to construct a confidence interval for the estimate.
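One common way to pick such a threshold from kernel-smoothed class densities is to maximise Youden's J statistic over a grid; a small sketch under that assumption (the thesis may use a different optimality criterion, and the code assumes higher scores indicate riskier applicants):

```python
import numpy as np
from scipy.stats import gaussian_kde

def optimal_threshold(scores_good, scores_bad, grid):
    """Pick the score threshold maximising Youden's J = sensitivity +
    specificity - 1, with the two class densities smoothed by kernel
    estimates (no parametric assumption on the latent risk score)."""
    f_good = gaussian_kde(scores_good)
    f_bad = gaussian_kde(scores_bad)
    best_t, best_j = None, -np.inf
    for t in grid:
        sens = f_bad.integrate_box_1d(t, np.inf)     # bad customers above t
        spec = f_good.integrate_box_1d(-np.inf, t)   # good customers below t
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```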
APA, Harvard, Vancouver, ISO, and other styles
13

Dvořáček, Martin. "Estimátor v systému regulace s proměnlivou strukturou." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217558.

Full text
Abstract:
The thesis deals with linear discrete-time incremental estimators. These are used to choose the best controller in a control system with variable structure, and further for direct control with a state controller. The approach is applied to a physical system. PID-type controllers are also discussed and optimized using the Nelder-Mead simplex method. Feedback control with the optimal PID is compared with control using linear discrete incremental estimators and a state regulator.
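A minimal sketch of PID tuning by the Nelder-Mead simplex method, with a toy first-order plant and an integral-of-squared-error cost standing in for the actual system used in the thesis:

```python
import numpy as np
from scipy.optimize import minimize

def closed_loop_cost(gains, n_steps=200, dt=0.01):
    """Integral-of-squared-error cost of a discrete PID loop around a toy
    first-order plant (a stand-in for the thesis's physical system)."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 0.0, 0.0
    for _ in range(n_steps):
        err = 1.0 - y                              # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = float(np.clip(kp * err + ki * integ + kd * deriv, -100.0, 100.0))
        y += dt * (-y + u)                         # plant: dy/dt = -y + u
        prev_err = err
        cost += err**2 * dt
    return cost

# Nelder-Mead (simplex) search over the PID gains
result = minimize(closed_loop_cost, x0=[1.0, 0.5, 0.0], method="Nelder-Mead")
kp_opt, ki_opt, kd_opt = result.x
```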
APA, Harvard, Vancouver, ISO, and other styles
14

Mamduhi, Mohammadhossein. "Optimal Distributed Estimation-Deterministic Framework." Thesis, KTH, Reglerteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-105126.

Full text
Abstract:
Estimation theory has been a very important and necessary tool for dealing with complex systems and control engineering since its birth in the 18th century. In recent decades, with the rise of distributed systems, estimation over networks has attracted great interest, and much effort has been made to solve the various aspects of this problem. An important question in estimation problems, whether over networks or for a single system, is how reliable the obtained estimate is, or in other words, how close our estimate is to the quantity being estimated. Undoubtedly, a good estimate is one which produces the least error. This leads us to combine estimation theory with optimization techniques to obtain the best estimate of a given variable, which is called optimal estimation. In control systems theory, the optimal estimation problem can be posed for a static system, which does not evolve in time, and for a dynamic system, which changes with time. Moreover, from another point of view, we can divide the common problems into two frameworks, the stochastic estimation problem and the deterministic estimation problem, of which the latter has received less attention. Treating a problem in the deterministic framework is in fact harder than in the stochastic case, since we cannot use the convenient properties of stochastic random variables. In this Master's thesis, the optimal estimation problem over distributed systems consisting of a finite number of players is treated in the deterministic framework and in a static setting. We assume a special case of the estimation problem in which the measurements available to the different players are completely decoupled from each other; in other words, no player has access to the other players' information spaces. We derive the mathematical conditions for this problem as well as the optimal estimate minimizing the given cost function. For ease of understanding, some numerical examples are provided, and the performance of the given approach is derived. The thesis consists of five chapters. Chapter 1 gives a brief introduction to the problems considered and their history. Chapter 2 introduces the mathematical tools used in the thesis by solving a classic problem in estimation theory. Chapter 3 treats the main part of this thesis, the static team estimation problem. Chapter 4 examines the performance of the derived estimators and compares our results with the available numerical solutions. Chapter 5 is a short conclusion stating the main results and summarizing the main points of the thesis.
APA, Harvard, Vancouver, ISO, and other styles
15

Müller, Werner, and Dale L. Zimmerman. "Optimal Design for Variogram Estimation." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1997. http://epub.wu.ac.at/756/1/document.pdf.

Full text
Abstract:
The variogram plays a central role in the analysis of geostatistical data. A valid variogram model is selected and the parameters of that model are estimated before kriging (spatial prediction) is performed. These inference procedures are generally based upon examination of the empirical variogram, which consists of average squared differences of data taken at sites lagged the same distance apart in the same direction. The ability of the analyst to estimate variogram parameters efficiently is affected significantly by the sampling design, i.e., the spatial configuration of sites where measurements are taken. In this paper, we propose design criteria that, in contrast to some previously proposed criteria oriented towards kriging with a known variogram, emphasize the accurate estimation of the variogram. These criteria are modifications of design criteria that are popular in the context of (nonlinear) regression models. The two main distinguishing features of the present context are, first, that the addition of a single site to the design produces as many new lags as there are existing sites, and hence also produces that many new squared differences from which the variogram is estimated; and second, that those squared differences are generally correlated, which inhibits the use of many standard design methods that rest upon the assumption of uncorrelated errors. Several approaches to design construction that account for these features are described and illustrated with two examples. We compare their efficiency to simple random sampling and to regular and space-filling designs and find considerable improvements. (Author's abstract.) Series: Forschungsberichte / Institut für Statistik.
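For context, the empirical variogram that these design criteria target is half the average squared difference of observations within each lag bin; a small isotropic sketch (the paper also considers direction-dependent lags):

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Classical (Matheron) empirical variogram: half the average squared
    difference of observations, grouped into distance bins."""
    coords = np.asarray(coords, float)     # shape (n_sites, n_dims)
    values = np.asarray(values, float)
    n = len(values)
    i, j = np.triu_indices(n, k=1)         # all site pairs
    lags = np.linalg.norm(coords[i] - coords[j], axis=1)
    sq_diff = (values[i] - values[j]) ** 2
    gamma = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(bin_edges) - 1):
        in_bin = (lags >= bin_edges[b]) & (lags < bin_edges[b + 1])
        if in_bin.any():
            gamma[b] = 0.5 * sq_diff[in_bin].mean()
    return gamma
```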
APA, Harvard, Vancouver, ISO, and other styles
16

Garcia, Luz Mery González. "Modelos baseados no planejamento para análise de populações finitas." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-19062008-183609/.

Full text
Abstract:
We consider optimal estimation of finite population parameters with data obtained via simple random samples. In this context, we extend a finite population mixed model proposed by Stanek, Singer & Lencina (2004, Journal of Statistical Planning and Inference) by including measurement errors (endogenous or exogenous) and auxiliary information. Assuming that the variance components are known, we show that the proposed estimators/predictors have the smallest mean squared error in the class of linear unbiased estimators. Using simulation studies, we compare the performance of the empirical estimators/predictors, obtained by replacing the variance components with estimates, with that of traditional competitors. We also extend the finite population mixed model to data obtained via pretest-posttest designs. Through simulation studies, we compare the performance of the empirical estimator of the difference in gain between groups with that of the usual repeated measures estimator and of the usual analysis of covariance estimator obtained via ordinary least squares. The empirical estimator has smaller mean squared error and bias than the alternative estimators under consideration. In general, we recommend the use of the proposed estimators/predictors for either asymmetric response distributions or small samples.
APA, Harvard, Vancouver, ISO, and other styles
17

Charlier, Isabelle. "Conditional quantile estimation through optimal quantization." Thesis, Universite Libre de Bruxelles, 2015. http://www.theses.fr/2015BORD0274/document.

Full text
Abstract:
One of the most common applications of nonparametric techniques has been the estimation of a regression function (i.e., a conditional mean). However, it is often of interest to model conditional quantiles, particularly when it is felt that the conditional mean is not representative of the impact of the covariates on the dependent variable. Moreover, the quantile regression function provides a much more comprehensive picture of the conditional distribution of a dependent variable than the conditional mean function. Quantization originated in signal and information theory in the fifties, where it was devoted to the discretization of a continuous signal by a finite number of "quantizers". In mathematics, the problem of optimal quantization is to find the best approximation of the continuous distribution of a random variable by a discrete law with a fixed number of charged points. First used for one-dimensional signals, the method was then developed in the multi-dimensional case and extensively used as a tool to solve problems arising in numerical probability. The goal of this thesis is to study how optimal quantization in Lp-norm can be applied to conditional quantile estimation. Various cases are studied: one-dimensional or multidimensional covariates, univariate or multivariate dependent variables. The convergence of the proposed estimators is studied from a theoretical point of view. The proposed estimators were implemented and an R package, called QuantifQuantile, was developed. The numerical behavior of the estimators is evaluated through simulation studies and real data applications.
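To give a flavour of the approach, here is a rough univariate sketch: an L2 Lloyd-type quantizer of the covariate, followed by the empirical quantile of the responses in the selected cell. The thesis treats general Lp quantization, multivariate cases and convergence rates, and its QuantifQuantile R package implements the actual estimators; the Python code below is only an illustration:

```python
import numpy as np

def lloyd_quantizer(x, n_centers, n_iter=50):
    """One-dimensional Lloyd algorithm: alternate nearest-centre assignment
    and centre update, approximating an optimal L2 quantizer of x."""
    centers = np.quantile(x, np.linspace(0.05, 0.95, n_centers))
    cells = np.zeros(len(x), dtype=int)
    for _ in range(n_iter):
        cells = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for k in range(n_centers):
            if np.any(cells == k):
                centers[k] = x[cells == k].mean()
    return centers, cells

def conditional_quantile(x, y, x0, alpha, n_centers=20):
    """Quantization-based conditional quantile estimate: project x0 onto the
    quantizer of the covariate and take the empirical alpha-quantile of the
    responses falling in that cell."""
    centers, cells = lloyd_quantizer(np.asarray(x, float), n_centers)
    k0 = int(np.argmin(np.abs(centers - x0)))
    return np.quantile(np.asarray(y, float)[cells == k0], alpha)
```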
APA, Harvard, Vancouver, ISO, and other styles
18

Iolov, Alexandre V. "Parameter Estimation, Optimal Control and Optimal Design in Stochastic Neural Models." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34866.

Full text
Abstract:
This thesis solves estimation and control problems in computational neuroscience, mathematically dealing with the first-passage times of diffusion stochastic processes. We first derive estimation algorithms for model parameters from first-passage time observations, and then we derive algorithms for the control of first-passage times. Finally, we solve an optimal design problem which combines elements of the first two: we ask how to elicit first-passage times such as to facilitate model estimation based on said first-passage observations. The main mathematical tools used are the Fokker-Planck partial differential equation for evolution of probability densities, the Hamilton-Jacobi-Bellman equation of optimal control and the adjoint optimization principle from optimal control theory. The focus is on developing computational schemes for the solution of the problems. The schemes are implemented and are tested for a wide range of parameters.
APA, Harvard, Vancouver, ISO, and other styles
19

Rastelli, Riccardo, and Nial Friel. "Optimal Bayesian estimators for latent variable cluster models." Springer Nature, 2018. http://dx.doi.org/10.1007/s11222-017-9786-y.

Full text
Abstract:
In cluster analysis interest lies in probabilistically capturing partitions of individuals, items or observations into groups, such that those belonging to the same group share similar attributes or relational profiles. Bayesian posterior samples for the latent allocation variables can be effectively obtained in a wide range of clustering models, including finite mixtures, infinite mixtures, hidden Markov models and block models for networks. However, due to the categorical nature of the clustering variables and the lack of scalable algorithms, summary tools that can interpret such samples are not available. We adopt a Bayesian decision theoretical approach to define an optimality criterion for clusterings and propose a fast and context-independent greedy algorithm to find the best allocations. One important facet of our approach is that the optimal number of groups is automatically selected, thereby solving the clustering and the model-choice problems at the same time. We consider several loss functions to compare partitions and show that our approach can accommodate a wide range of cases. Finally, we illustrate our approach on both artificial and real datasets for three different clustering models: Gaussian mixtures, stochastic block models and latent block models for networks.
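One concrete instance of this decision-theoretic idea is to score candidate partitions against the posterior similarity matrix under Binder's loss. The sketch below simply picks the best partition among the MCMC samples, whereas the paper's greedy algorithm searches over arbitrary allocations and selects the number of groups automatically:

```python
import numpy as np

def posterior_similarity(allocs):
    """Posterior similarity matrix: fraction of MCMC samples in which each
    pair of observations shares a cluster."""
    allocs = np.asarray(allocs)               # shape (n_samples, n_obs)
    n_samples, n_obs = allocs.shape
    psm = np.zeros((n_obs, n_obs))
    for z in allocs:
        psm += (z[:, None] == z[None, :])
    return psm / n_samples

def best_sampled_partition(allocs):
    """Among the sampled partitions, return the one with smallest expected
    Binder loss against the posterior similarity matrix (a simple stand-in
    for the paper's greedy search over arbitrary partitions)."""
    allocs = np.asarray(allocs)
    psm = posterior_similarity(allocs)
    def binder(z):
        same = (z[:, None] == z[None, :])
        return np.sum(np.abs(same - psm))
    return allocs[np.argmin([binder(z) for z in allocs])]
```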
APA, Harvard, Vancouver, ISO, and other styles
20

Hellberg, Joakim, and Axel Sundkvist. "Comparing Control Strategies for a Satcom on the Move Antenna." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279331.

Full text
Abstract:
Satellite communication is a widely known method for communicating with remote or disaster-stricken places. Sometimes the communication can be a matter of life and death, and it is thus vital that it works well. For two-way communication (such as internet) it is necessary for the antenna on Earth to point towards the satellite with a pointing error no larger than a few tenths of a degree; for example, regulations decided by the authorities in the U.S. forbid pointing errors larger than 0.5°. In some cases the antenna on Earth has to be moving while satellite communication is maintained, for instance when the antenna is mounted on a vehicle and has to compensate for the vehicle's movement in order to point at the satellite. This application of satellite communication is called Satcom on the Move (SOTM). By constructing a Simulink model of an entire SOTM system, including vehicle dynamics, satellite position, signal behavior, sensors, and actuators, different control strategies can be compared. This thesis compares the performance of an H2 controller and an LQG controller for a static initial acquisition case and a dynamic inertial stabilization case. The static initial acquisition case uses a search algorithm (spiral search) aiming to find the satellite signal in the shortest possible time for a given initial pointing error. The dynamic inertial stabilization case is performed by letting the simulated vehicle drive in a slalom pattern and over uneven ground. The controllers are designed based on modern control theory. The conclusion of this thesis is that the H2 controller performs slightly better in the static test case, whereas the LQG controller performs slightly better in the dynamic test cases. However, the results are greatly influenced by the tuning of the controllers, meaning that the comparison may reflect the chosen tuning parameters rather than the controllers themselves.
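The LQG controller compared above combines a Kalman filter with an LQR state-feedback gain; a minimal sketch of the LQR part on a toy double-integrator axis model of the antenna (the model and weights are illustrative, not the thesis's Simulink plant):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR state-feedback gain K (u = -K x): solve the
    algebraic Riccati equation and form K = R^{-1} B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Toy double-integrator model of one antenna axis: state = [angle, rate]
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = lqr_gain(A, B, Q=np.diag([100.0, 1.0]), R=np.array([[0.01]]))
```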
APA, Harvard, Vancouver, ISO, and other styles
21

Fode, Adamou M. "A Discontinuous Galerkin - Front Tracking Scheme and its Optimal -Optimal Error Estimation." Bowling Green State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1399902459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Cheung, Man-Fung. "On optimal algorithms for parameter set estimation." The Ohio State University, 1991. http://rave.ohiolink.edu/etdc/view?acc_num=osu1302628544.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Uyar, Olcay. "Sequential estimation of optimal age replacement policies." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA238696.

Full text
Abstract:
Thesis (M.S. in Operations Research), Naval Postgraduate School, September 1990. Thesis Advisor: Lyn R. Whitaker; Second Reader: Robert R. Read. DTIC identifiers: Replacement Theory, Cost Analysis, Weibull Distribution Function, Gamma Distribution Functions, Theses. Author's subject terms: Sequential Estimation Procedure, Age Replacement Policy, Optimal Replacement, Preventive Maintenance. Includes bibliographical references (p. 72-73). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
24

Wirfält, Petter, Guillaume Bouleux, Magnus Jansson, and Petre Stoica. "Optimal prior knowledge-based direction of arrival estimation." KTH, Signalbehandling, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-109489.

Full text
Abstract:
In certain applications involving direction of arrival (DOA) estimation the operator may have a-priori information on some of the DOAs. This information could refer to a target known to be present at a certain position or to a reflection. In this study, the authors investigate a methodology for array processing that exploits the information on the known DOAs for estimating the unknown DOAs as accurately as possible. Algorithms are presented that can efficiently handle the case of both correlated and uncorrelated sources when the receiver is a uniform linear array. The authors find a major improvement in estimator accuracy in feasible scenarios, and they compare the estimator performance to the corresponding theoretical stochastic Cramer-Rao bounds as well as to the performance of other methods capable of exploiting such prior knowledge. In addition, real data from an ultra-sound array is applied to the investigated estimators.
APA, Harvard, Vancouver, ISO, and other styles
25

Tian, Yuandong. "Theory and Practice of Globally Optimal Deformation Estimation." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/269.

Full text
Abstract:
Nonrigid deformation modeling and estimation from images is a technically challenging task due to its nonlinear, nonconvex and high-dimensional nature. Traditional optimization procedures often rely on good initializations and give locally optimal solutions. On the other hand, learning-based methods that directly model the relationship between deformed images and their parameters either cannot handle complicated forms of mapping, or suffer from the Nyquist limit and the curse of dimensionality due to the high degrees of freedom in the deformation space. In particular, to achieve a worst-case guarantee of ε error for a deformation with d degrees of freedom, the required sample complexity is O(1/ε^d). In this thesis, a generative model for deformation is established and analyzed using a unified theoretical framework. Based on the framework, three algorithms, Data-Driven Descent and the top-down and bottom-up hierarchical models, are designed and constructed to solve the generative model. Under Lipschitz conditions that rule out unsolvable cases (e.g., deformation of a blank image), all algorithms achieve globally optimal solutions to the specific generative model. The sample complexity of these methods is substantially lower than that of learning-based approaches, which are agnostic to deformation modeling. To achieve global optimality guarantees with lower sample complexity, the structure embedded in the deformation model is exploited. In particular, Data-Driven Descent relates two deformed images that are far apart in the parameter space through the compositional structure of deformations and reduces the sample complexity to O(C^d log(1/ε)). The top-down hierarchical model factorizes the local deformation into patches once the global deformation has been estimated approximately, further reducing the sample complexity to O(C^{d/(1+C2)} log(1/ε)). Finally, the bottom-up hierarchical model builds representations that are invariant to local deformation; with these representations, the global deformation can be estimated independently of the local deformation, reducing the sample complexity to O((C/ε)^{d0}) with d0 ≪ d. From the analysis, this thesis shows the connections between approaches that are traditionally considered to be of very different nature. New theoretical conjectures on approaches like deep learning are also provided. In practice, broad applications of the proposed approaches are demonstrated for estimating water distortion, air turbulence, cloth deformation and human pose with state-of-the-art results. Some approaches even achieve near real-time performance. Finally, application-dependent physics-based models are built with good performance in document rectification and scene depth recovery in turbulent media.
APA, Harvard, Vancouver, ISO, and other styles
26

Rey, Daniel. "Chaos, Observability and Symplectic Structure in Optimal Estimation." Thesis, University of California, San Diego, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10281245.

Full text
Abstract:
Observation, estimation and prediction are universal challenges that become especially difficult when the system under consideration is dynamical and chaotic. Chaos injects dynamical noise into the estimation process that must be suppressed to satisfy the necessary conditions for success: namely, synchronization of the estimate and the observed data. The ability to control the growth of errors is constrained by the spatiotemporal resolution of the observations, and often exhibits critical thresholds below which the probability of success becomes effectively zero. This thesis examines the connections between these limits and basic issues of complexity, conditioning, and instability in the observation and forecast models. The results suggest several new ideas to improve the collaborative design of combined observation, analysis, and forecast systems. Among these, the most notable is perhaps the fundamental role that symplectic structure plays in the remarkable observational efficiency of Kalman-based estimation methods.
APA, Harvard, Vancouver, ISO, and other styles
27

Helin, Mikael. "Inverse Parameter Estimation using Hamilton-Jacobi Equations." Thesis, KTH, Numerisk analys, NA, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-123092.

Full text
Abstract:
In this degree project, a solution on a coarse grid is recovered by fitting a partial differential equation to a few known data points. The PDEs considered are the heat equation and Dupire's equation with their synthetic data, including synthetic data from the Black-Scholes formula. The approach to fitting a PDE is to use optimal control to derive discrete approximations of regularized Hamilton characteristic equations, for which discrete stepping schemes and smoothness parameters are examined. The derived method is tested with a non-parametric numerical implementation, and a few suggestions for possible improvements are given.
APA, Harvard, Vancouver, ISO, and other styles
28

Liu, Shuo. "Object Trajectory Estimation Using Optical Flow." DigitalCommons@USU, 2009. https://digitalcommons.usu.edu/etd/462.

Full text
Abstract:
Object trajectory tracking is an important topic in many different areas. It is widely used in robot technology, traffic, the movie industry, and others. Optical flow is a useful method in the object tracking branch: it can calculate the motion of each pixel between two frames and thus provides a possible way to obtain the trajectories of objects. There are numerous papers describing the implementation of optical flow. Some results are acceptable, but in many projects there are limitations. In most previous applications, because the camera is usually static, it is easy to apply optical flow to identify the moving targets in a scene and get their trajectories. When the camera moves, a global motion is added to the local motion, which complicates the issue. In this thesis we use a combination of optical flow and image correlation to deal with this problem, with good experimental results. For trajectory estimation, we incorporate a Kalman Filter with the optical flow. Not only can we smooth the motion history, but we can also predict the motion into the next frame. The addition of a spatial-temporal filter further improves the results of the later processing.
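As a rough illustration of the kind of pipeline this abstract describes, the following minimal Python sketch (not the author's implementation) combines dense optical flow with a constant-velocity Kalman filter to smooth and predict a tracked point's trajectory. The synthetic moving blob, the OpenCV Farneback routine, and all noise and gain values are assumptions chosen only for the example.

import numpy as np
import cv2

def make_frame(cx, cy, size=128):
    """Synthetic frame: a bright Gaussian blob on a dark background."""
    y, x = np.mgrid[0:size, 0:size]
    img = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * 5.0 ** 2))
    return (img * 255).astype(np.uint8)

# Constant-velocity Kalman filter, state = [x, y, vx, vy] (all values assumed).
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)          # process noise (assumed)
R = 1.0 * np.eye(2)           # measurement noise (assumed)
x = np.array([20.0, 20.0, 0.0, 0.0])
P = 10.0 * np.eye(4)

prev = make_frame(20, 20)
pos = np.array([20.0, 20.0])  # current measured blob position
for t in range(1, 15):
    true = np.array([20.0 + 3 * t, 20.0 + 2 * t])   # ground-truth motion
    curr = make_frame(*true)
    # Dense optical flow between consecutive frames (Farneback).
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    i, j = int(round(pos[1])), int(round(pos[0]))
    pos = pos + flow[i, j]                           # displaced measurement
    # Kalman predict / update.
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (pos - H @ x)
    P = (np.eye(4) - K @ H) @ P
    prev = curr
    print(t, "measured", pos.round(1), "filtered", x[:2].round(1),
          "predicted next", (F @ x)[:2].round(1))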
APA, Harvard, Vancouver, ISO, and other styles
29

Huebsch, Jesse. "Optimal Online Tuning of an Adaptive Controller." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/853.

Full text
Abstract:
A novel adaptive controller, suitable for linear and non-linear systems, was developed. The controller is a discrete algorithm suitable for computer implementation and is based on gradient descent adaptation rules. Traditional recursive least squares based algorithms suffer from performance deterioration due to the continuous reduction of a covariance matrix used for adaptation. When this covariance matrix becomes too small, recursive least squares algorithms respond slowly to changes in model parameters. Gradient descent adaptation was used to avoid the performance deterioration over time associated with regression-based adaptation such as Recursive Least Squares methods. Stability was proven with Lyapunov stability theory, using an error filter designed to fulfill stability requirements. Similarities between the proposed controller and PI control have been found. A framework for on-line tuning was developed using the concept of estimation tracks. Estimation tracks allow the estimation gains to be selected from a finite set of possible values while meeting Lyapunov stability requirements. The trade-off between sufficient excitation for learning and controller performance, typical of dual adaptive control techniques, is met by properly tuning the adaptation and filter gains to drive the rate of adaptation in response to a fixed excitation signal. Two methods for selecting the estimation track were developed. The first method uses simulations to predict the value of a bicriteria cost function, a combination of prediction and feedback errors, to generate a performance score for each estimation track. The second method uses a linear matrix inequality formulation to find an upper bound on feedback error within the range of uncertainty of the plant parameters and acceptable reference signals. The linear matrix inequality approach was derived from a robust control approach. Numerical simulations were performed to systematically evaluate the performance and computational burden of configuration parameters, such as the number of estimation tracks used for tuning. Both tuning methods were compared with an adaptive controller using arbitrarily selected tuning parameters, as well as with a common adaptive control algorithm.
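For readers unfamiliar with gradient-descent adaptation, the short sketch below shows a normalized gradient-descent (LMS-style) parameter update for a simple discrete plant, the kind of rule the abstract contrasts with covariance-based recursive least squares. The plant model, gain, and signals are assumptions for illustration only, not the controller developed in the thesis.

import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5          # unknown plant parameters (assumed example)
theta = np.zeros(2)                # estimate of [a, b]
gamma = 0.5                        # adaptation gain (assumed)

y_prev, u_prev = 0.0, 0.0
for k in range(200):
    u = rng.standard_normal()                    # exciting input signal
    y = a_true * y_prev + b_true * u_prev        # plant output y[k] = a*y[k-1] + b*u[k-1]
    phi = np.array([y_prev, u_prev])             # regressor
    e = y - phi @ theta                          # prediction error
    # Normalized gradient-descent update: unlike RLS, there is no covariance
    # matrix that can shrink over time and freeze the adaptation.
    theta += gamma * e * phi / (1.0 + phi @ phi)
    y_prev, u_prev = y, u

print("estimated [a, b]:", theta.round(3))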
APA, Harvard, Vancouver, ISO, and other styles
30

Mahey, Guillaume. "Unbalanced and linear optimal transport for reliable estimation of the Wasserstein distance." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMIR29.

Full text
Abstract:
In the context of machine learning, several problems can be formulated as distribution comparison problems. The mathematical theory of optimal transport allows for a comparison between two probability measures. Although very elegant in theory, optimal transport (OT) suffers from several practical drawbacks, notably the computational burden, the risk of overfitting, and its sensitivity to sampling artifacts. All of this has motivated the introduction of variants of the loss function associated with OT in the machine learning community. In this thesis, we propose such variants in order, on the one hand, to reduce the computational and statistical burden and, on the other hand, to reduce the sensitivity of the OT loss to sampling artifacts. To achieve this, we rely on the intermediate distributions introduced by both the linear OT and unbalanced OT variants.
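To make the object under discussion concrete, here is a minimal sketch (not taken from the thesis) of the exact Wasserstein distance between two equal-size empirical samples, computed by solving the underlying assignment problem. The Gaussian samples and the sample size are assumptions; the cubic cost of this exact computation is one instance of the computational burden that motivates the linear and unbalanced variants mentioned above.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
n = 200
X = rng.normal(loc=0.0, scale=1.0, size=(n, 2))   # samples from the source measure
Y = rng.normal(loc=1.0, scale=1.0, size=(n, 2))   # samples from the target measure

# For two uniform empirical measures of equal size, the exact 2-Wasserstein
# distance reduces to an optimal assignment over squared Euclidean costs.
C = cdist(X, Y, metric="sqeuclidean")
row, col = linear_sum_assignment(C)
w2 = np.sqrt(C[row, col].mean())
print("empirical W2 estimate:", round(float(w2), 3))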
APA, Harvard, Vancouver, ISO, and other styles
31

Dahmen, Wolfgang, Helmut Harbrecht, and Reinhold Schneider. "Compression Techniques for Boundary Integral Equations - Optimal Complexity Estimates." Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200600464.

Full text
Abstract:
In this paper matrix compression techniques in the context of wavelet Galerkin schemes for boundary integral equations are developed and analyzed that exhibit optimal complexity in the following sense. The fully discrete scheme produces approximate solutions within the discretization error accuracy offered by the underlying Galerkin method, at a computational expense that is proven to stay proportional to the number of unknowns. Key issues are the second compression, which reduces the near-field complexity significantly, and an additional a-posteriori compression. The latter is based on a general result concerning an optimal work balance that applies, in particular, to the quadrature used to compute the compressed stiffness matrix with sufficient accuracy in linear time. The theoretical results are illustrated by a 3D example on a nontrivial domain.
APA, Harvard, Vancouver, ISO, and other styles
32

Maloney, Alan. "Optimal (Adaptive) Design and Estimation Performance in Pharmacometric Modelling." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-182284.

Full text
Abstract:
The pharmaceutical industry now recognises the importance of the newly defined discipline of pharmacometrics. Pharmacometrics uses mathematical models to describe and then predict the performance of new drugs in clinical development. To ensure these models are useful, the clinical studies need to be designed such that the data generated allow the model predictions to be sufficiently accurate and precise. The capability of the available software to reliably estimate the model parameters must also be well understood. This thesis investigated two important areas in pharmacometrics: optimal design and software estimation performance. The three optimal design papers advanced significant areas of optimal design research, especially relevant to phase II dose response designs. The use of exposure, rather than dose, was investigated within an optimal design framework. In addition to using both optimal design and clinical trial simulation, this work employed a wide range of metrics for assessing design performance, and illustrated how optimal designs for exposure response models may yield dose selections quite different from those based on standard dose response models. The investigation of optimal designs for Poisson dose response models demonstrated a novel mathematical approach to the necessary matrix calculations for non-linear mixed effects models. Finally, the enormous potential of using optimal adaptive designs over fixed optimal designs was demonstrated. The results showed how the adaptive designs were robust to initial parameter misspecification, with the capability to "learn" the true dose response using the accruing subject data. The two estimation performance papers investigated the relative performance of a number of different algorithms and software programs for two complex pharmacometric models. In conclusion, these papers, in combination, cover a wide spectrum of study designs for non-linear dose/exposure response models, covering normal/non-normal data, fixed/mixed effect models, single/multiple design criteria metrics, optimal design/clinical trial simulation, and adaptive/fixed designs.
APA, Harvard, Vancouver, ISO, and other styles
33

Mathur, Shailendra. "Edge localization in surface reconstruction using optimal estimation theory." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23750.

Full text
Abstract:
In this thesis the problem of localizing discontinuities while smoothing noisy data is solved for the surface reconstruction method known as Curvature Consistency. In this algorithm, noisy initial estimates of surface patches are refined according to a continuity model, using a relaxation process. The interaction between neighbouring pixels in local neighbourhoods during relaxation is shown to be equivalent to a multiple measurement fusion process, where each pixel acts as a measurement source. Using optimal estimation theory as a basis, an adaptive weighting technique is developed to estimate interpolant surface patch parameters from neighbouring pixels. By applying the weighting process iteratively within local neighbourhoods, discontinuities are localized and a piecewise-continuous surface description is achieved. The resulting discontinuity localization algorithm is adaptive over different signal-to-noise ratios, robust over discontinuities of different scales, and independent of user-set parameters.
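The "multiple measurement fusion" interpretation above corresponds, in its simplest scalar form, to inverse-variance (BLUE-style) weighting of independent estimates. The short sketch below illustrates that weighting on made-up numbers; the estimate values and variances are assumptions, not the thesis's surface patch model or its adaptive weights.

import numpy as np

# Independent noisy estimates of the same patch parameter from
# neighbouring pixels, each with its own error variance (assumed values).
estimates = np.array([1.10, 0.95, 1.30, 1.02])
variances = np.array([0.04, 0.01, 0.25, 0.02])

# Minimum-variance unbiased linear fusion weights each measurement
# by the inverse of its variance.
w = 1.0 / variances
fused = np.sum(w * estimates) / np.sum(w)
fused_var = 1.0 / np.sum(w)
print("fused estimate:", round(fused, 4), "fused variance:", round(fused_var, 4))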
APA, Harvard, Vancouver, ISO, and other styles
34

Jonsson, Robin. "Optimal Linear Combinations of Portfolios Subject to Estimation Risk." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-28524.

Full text
Abstract:
The combination of two or more portfolio rules is theoretically convex in return-risk space, which provides for a new class of portfolio rules that gives purpose to the Mean-Variance framework out-of-sample. The author investigates the performance loss from estimation risk between the unconstrained Mean-Variance portfolio and the out-of-sample Global Minimum Variance portfolio. A new two-fund rule is developed within a specific class of combined rules, between the equally weighted portfolio and a mean-variance portfolio whose covariance matrix is estimated by linear shrinkage. The study shows that this rule performs well out-of-sample when covariance estimation error and bias are balanced. The rule performs at least as well as its peer group in this class of combined rules.
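As a rough illustration of the kind of two-fund combination described above, the sketch below mixes the equally weighted rule with a mean-variance rule whose covariance matrix is estimated by Ledoit-Wolf linear shrinkage. The synthetic return data, the 50/50 mixing weight, and the particular normalization of the mean-variance weights are assumptions for illustration only, not the rule developed in the thesis.

import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
T, N = 120, 10                                   # 120 periods, 10 assets (assumed)
returns = rng.normal(0.005, 0.04, size=(T, N))   # synthetic excess returns

mu = returns.mean(axis=0)
sigma = LedoitWolf().fit(returns).covariance_    # linear shrinkage covariance estimate

# Mean-variance weights (up to scaling) and the equally weighted rule.
w_mv = np.linalg.solve(sigma, mu)
w_mv /= w_mv.sum()
w_ew = np.full(N, 1.0 / N)

alpha = 0.5                                      # mixing weight between the two rules (assumed)
w_combined = alpha * w_ew + (1 - alpha) * w_mv
print("combined portfolio weights:", np.round(w_combined, 3))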
APA, Harvard, Vancouver, ISO, and other styles
35

Magnúsdóttir, Bergrún Tinna. "Estimation and optimal designs for multi-response Emax models." Doctoral thesis, Stockholms universitet, Statistiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-102888.

Full text
Abstract:
This thesis concerns optimal designs and estimation approaches for a class of nonlinear dose response models, namely multi-response Emax models. These models describe the relationship between the dose of a drug and two or more efficacy and/or safety variables. In order to obtain precise parameter estimates it is important to choose efficient estimation approaches and to use optimal designs to control the level of the doses administered to the patients in the study. We provide some optimal designs that are efficient for estimating the parameters, a subset of the parameters, and a function of the parameters in multi-response Emax models. The function of interest is an estimate of the best dose to administer to a group of patients. More specifically, it is the dose that maximizes the Clinical Utility Index (CUI), which assesses the net benefit of a drug taking both effects and side-effects into account. The designs derived in this thesis are locally optimal, that is, they depend upon the true parameter values. An important part of this thesis is to study how sensitive the optimal designs are to misspecification of prior parameter values. For multi-response Emax models it is possible to derive maximum likelihood (ML) estimates separately for the parameters in each dose response relation. However, ML estimation can also be carried out simultaneously for all response profiles by making use of dependencies between the profiles (system estimation). In this thesis we compare the performance of these two approaches by using a simulation study where a bivariate Emax model is fitted and by fitting a four dimensional Emax model to real dose response data. The results are that system estimation can substantially increase the precision of parameter estimates, especially when the correlation between response profiles is strong or when the study has not been designed in an efficient way. At the time of the doctoral defence, the following papers were unpublished and had a status as follows: Paper 1: Manuscript; Paper 2: Manuscript; Paper 3: Manuscript; Paper 4: Manuscript.
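A small sketch of the dose-selection step mentioned above: given fitted Emax curves for an efficacy and a safety response, the dose maximizing a simple Clinical Utility Index (here a weighted difference of the two responses) is found by a grid search. The parameter values, the weight, and the form of the CUI are assumptions for illustration, not the thesis's models or data.

import numpy as np

def emax(dose, e0, emax_, ed50):
    """Standard Emax dose-response curve."""
    return e0 + emax_ * dose / (ed50 + dose)

# Assumed fitted parameters for an efficacy and a side-effect response.
eff = dict(e0=0.0, emax_=1.0, ed50=25.0)
tox = dict(e0=0.0, emax_=0.6, ed50=80.0)

doses = np.linspace(0.0, 200.0, 2001)
cui = emax(doses, **eff) - 1.5 * emax(doses, **tox)   # net benefit (weight assumed)
best = doses[np.argmax(cui)]
print("dose maximizing the assumed CUI:", round(best, 1))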
APA, Harvard, Vancouver, ISO, and other styles
36

Povey, Adam Charles. "The application of optimal estimation retrieval to lidar observations." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:eb94de02-ad92-4eeb-b15c-094b05fa11c6.

Full text
Abstract:
Optimal estimation retrieval is a nonlinear regression scheme to determine the conditions statistically most likely to produce a given measurement, weighted against any a priori knowledge. The technique is applied to three problems within the field of lidar data analysis. A retrieval of the aerosol backscatter and either the extinction or lidar ratio from two-channel Raman lidar data is developed using the lidar equations as a forward model. It produces profiles consistent with existing techniques at a resolution of 10-1000 m and uncertainty of 5-20%, dependent on the quality of data. It is effective even when applied to noisy, daytime data but performs poorly in the presence of cloud. Two of the most significant sources of uncertainty in that retrieval are the nonlinearity of the detectors and the instrument's calibration (known as the dead time and overlap function). Attempts to retrieve a nonlinear correction from a pair of lidar profiles, one attenuated by a neutral density filter, are not successful as uncertainties in the forward model eliminate any information content in the measurements. The technique of Whiteman et al. [1992] is found to be the most accurate. More successful is a retrieval of the overlap function of a Raman channel using a forward model combining an idealised extinction profile and an adaptation of the equations presented in Halldórsson and Langerholc [1978]. After refinement, the retrieval is shown to be at least as accurate as, and often superior to, existing methods of calibration from routine measurements, presenting uncertainties of 5-15%. These techniques are then applied to observations of ash over southern England from the Eyjafjallajökull eruption of April 2010. Lidar ratios of 50-60 sr were observed when the plume first appeared, which reduced to 20-30 sr after several days within the planetary boundary layer, indicating an alteration of the particles over time.
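For readers unfamiliar with optimal estimation retrieval, the sketch below shows the prior-weighted Gauss-Newton (maximum a posteriori) iteration that underlies such schemes, applied to a toy two-parameter forward model. The forward model, the prior, and both covariances are assumptions, not the lidar equations or error budgets used in the thesis.

import numpy as np

def forward(x):
    """Toy nonlinear forward model mapping a 2-parameter state to 3 'measurements'."""
    a, b = x
    return np.array([a * np.exp(-0.1 * b), a + b, a * b])

def jacobian(x, eps=1e-6):
    """Finite-difference Jacobian of the forward model."""
    K = np.zeros((3, 2))
    for j in range(2):
        dx = np.zeros(2)
        dx[j] = eps
        K[:, j] = (forward(x + dx) - forward(x - dx)) / (2 * eps)
    return K

x_a = np.array([1.0, 1.0])             # a priori state (assumed)
S_a = np.diag([1.0, 1.0])              # prior covariance (assumed)
S_e = np.diag([0.01, 0.01, 0.01])      # measurement-error covariance (assumed)
y = forward(np.array([2.0, 0.5])) + np.array([0.05, -0.02, 0.01])  # noisy data

x = x_a.copy()
for _ in range(10):
    K = jacobian(x)
    A = K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a)
    b = K.T @ np.linalg.inv(S_e) @ (y - forward(x) + K @ (x - x_a))
    x = x_a + np.linalg.solve(A, b)    # Gauss-Newton step toward the MAP state

S_hat = np.linalg.inv(A)               # posterior covariance of the retrieval
print("retrieved state:", x.round(3))
print("1-sigma uncertainty:", np.sqrt(np.diag(S_hat)).round(3))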
APA, Harvard, Vancouver, ISO, and other styles
37

Soobhug, Divij. "Optimal state estimation for a power line inspection robot." Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29474.

Full text
Abstract:
Following a paper published by E. Boje [1], this thesis discusses the design and off-line testing of different types of Kalman filters to estimate the attitude, position and velocity of a robotic platform moving along a power line. The nature of this problem limits the use of magnetometers. Magnetic field interference from the steel pylons and steel-cored conductors will affect the local magnetic field. Moreover, high frequency signals from on-board power electronic drives and induced magnetic fields due to ferromagnetic components of the robot, along with aliasing, quantization effects and a low signal to noise ratio, make notch filtering at 50 Hz impractical. Thus, a GPS/IMU filter solution, which uses the power line curvature and horizontal direction in measurements to constrain the robot to the line, was designed. Different types of filters were implemented: the Extended Kalman filter (EKF), the Unscented Kalman filter (UKF) and the Error State Kalman filter (ErKF). Measurements were recorded and the filters were tested offline. While all the filters tracked properly, it was found that the EKF was best in computational speed, completing an iteration in 87 µs; the ErKF was second best with an average time of 120 µs per iteration; and the UKF was last with an average time of 1040 µs per iteration. Errors between the true state and estimated state for the simulation were quantified using root mean square (RMS) values. The RMS values were almost the same for the EKF and ErKF, with the error for the x position at 0.81 m and the z position at 0.038 m. The UKF produced RMS errors of 0.79 m for the x position and 0.11 m for the z position. It can be seen that the UKF is slightly better for the x position but much worse for the z position. Overall, the GPS measurement RMS values used were 4 m and 20 m for the horizontal and vertical positions respectively, so the filters brought a big improvement. The recommended filter, however, is the EKF, as it produced comparable or better results than the other filters and expends the least computational effort. A state estimator was also developed for J. Patel's PLIR project [2], where a brachiating version of a power line robot was modeled. The brachiation mechanism was approximated by a double pendulum and a kinematics-based Kalman filter was designed. Simulations of the EKF and UKF were made. The EKF is still recommended as its estimates are closer to the true values and its computation time is about five times faster.
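As a compact illustration of the predict/linearize/update cycle behind the EKF recommended above, here is a minimal extended Kalman filter for a simple pendulum with noisy angle measurements. The pendulum dynamics, time step and noise levels are assumptions for illustration; the thesis's filters estimate attitude, position and velocity on a power line, not a pendulum.

import numpy as np

rng = np.random.default_rng(1)
dt, g, L = 0.01, 9.81, 1.0                   # time step and pendulum constants (assumed)
Q = np.diag([1e-6, 1e-4])                    # process noise covariance (assumed)
R = np.array([[0.02 ** 2]])                  # angle-measurement noise covariance (assumed)
H = np.array([[1.0, 0.0]])                   # only the angle is measured

def f(x):
    """Discretized pendulum dynamics, state = [angle, angular rate]."""
    th, om = x
    return np.array([th + om * dt, om - (g / L) * np.sin(th) * dt])

def F_jac(x):
    """Jacobian of f, re-linearized at every step (the 'extended' part)."""
    th, _ = x
    return np.array([[1.0, dt], [-(g / L) * np.cos(th) * dt, 1.0]])

x_true = np.array([0.5, 0.0])
x_est, P = np.array([0.0, 0.0]), np.eye(2)
for k in range(500):
    x_true = f(x_true)
    z = x_true[0] + rng.normal(0.0, 0.02)    # noisy angle measurement
    # Predict.
    F = F_jac(x_est)
    x_est = f(x_est)
    P = F @ P @ F.T + Q
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (np.array([z]) - H @ x_est)
    P = (np.eye(2) - K @ H) @ P

print("true state:", x_true.round(3), "estimated state:", x_est.round(3))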
APA, Harvard, Vancouver, ISO, and other styles
38

Ahmed, Shakil. "Robust estimation and sub-optimal predictive control for satellites." Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/10553.

Full text
Abstract:
This thesis explores the attitude estimation and control problem of a magnetically controlled small satellite in the initial acquisition phase. During this phase, large data uncertainties pose estimation challenges, while highly nonlinear dynamics and inherent limitations of the magnetic actuation are the primary issues in control. We aim to design algorithms which can improve performance compared to state-of-the-art techniques and remain tractable for practical applications. Static attitude estimation, which is an essential part of a satellite control system, uses vector information and solves a constrained weighted least-squares problem. With large data uncertainties, this technique results in large errors, causing divergence or infeasibility in dynamic filtering and control. When static estimation is the primary source of attitude, these errors become critical; for example, in low-budget small satellites. To address this issue, we formulate a robust static estimation problem with norm-bounded uncertainties, which is a difficult optimization problem due to its unfavorable convexity properties and nonlinear constraints. By deriving an analytical upper bound for the convex maximization, the robust min-max problem is approximated by a minimization problem with quadratic cost and constraints (a QCQP), which is non-convex. Semidefinite relaxation is used to upper bound the non-convex QCQP with a semidefinite program, which can be solved efficiently in polynomial time. Furthermore, it is shown that the derived upper bound has no gap in solving the robust problem in practice. Semidefinite relaxations are also applied to solve the robust formulations of a more general class of problems known as the orthogonal Procrustes problem (OPP). It is shown that the solution of the relaxed OPP is exact when no uncertainties are considered; however, for the robust case, only a sub-optimal solution can be obtained. Finally, satellite rate damping in the initial acquisition phase is addressed by using nonlinear model predictive control (NMPC). Standard NMPC schemes with guaranteed stability show superior performance to existing techniques; however, they are computationally expensive. With large initial rates, the computational burden of NMPC becomes prohibitive. For these cases, an algorithm is presented with an additional constraint on the cost reduction that allows an early termination of the optimizer based on the available computational resources. The presented algorithm significantly reduces the de-tumbling time due to the imposed cost reduction constraint.
APA, Harvard, Vancouver, ISO, and other styles
39

Gan, Liping. "Optimal traffic counting location for origin-destination matrix estimation /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?CIVL%202002%20GAN.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002. Includes bibliographical references (leaves 104-106). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
40

Zhang, Junruo. "Optimal detection with imperfect channel estimation for wireless communications." Thesis, University of York, 2009. http://etheses.whiterose.ac.uk/1640/.

Full text
Abstract:
In communication systems transmitting data through unknown fading channels, traditional detection techniques are based on channel estimation (e.g., by using pilot signals), and then treating the estimates as perfect in a minimum distance detector. In this thesis, we derive and investigate an optimal detector that does not estimate the channel explicitly but jointly processes the received pilot and data symbols to recover the data. This optimal detector outperforms the traditional detectors (mismatched detectors). In order to approximate correlated fading channels, such as fast fading channels and frequency-selective fading channels, basis expansion models (BEMs) are used due to high accuracy and low complexity. There are various BEMs used to represent the time-variant channels, such as Karhunen-Loeve (KL) functions, discrete prolate spheroidal (DPS) functions, generalized complex exponential (GCE) functions, B-splines (BS), and others. We derive the mean square error (MSE) of a generic BEM-based linear channel estimator with perfect or imperfect knowledge of the Doppler spread in time-variant channels. We compare the performance and complexity of minimum mean square error (MMSE) and maximum likelihood (ML) channel estimators using the four BEMs, for the case with perfect Doppler spread. Although all BEM-based MMSE estimators allow the optimal performance of the Wiener solution to be achieved, the complexity of estimators using KL and DPS BEMs is significantly higher than that of estimators using BS and GCE BEMs. We then investigate the sensitivity of BEM-based estimators to the mismatched Doppler spread. All the estimators are sensitive to underestimation of the Doppler spread but may be robust to overestimation. The results show that the traditional way of estimating the fading statistics and generating the KL and DPS basis functions by using the maximum Doppler spread will lead to a degradation of the performance. A better performance can be obtained by using an overestimate of the Doppler spread instead of using the maximum Doppler spread. For this case, due to the highest robustness and the lowest complexity, the best practical choice of BEM is the B-splines. We derive a general expression for optimal detection for pilot-assisted transmission in Rayleigh fading channels with imperfect channel estimation. The optimal detector is specified for single-input single-output (SISO) Rayleigh fading channels. Both slow (time-invariant) fading channels and fast (time-variant) fading channels following Jakes' model are considered. We use the B-splines to approximate the channel gain time variations and compare the detection performance of the optimal detector with that of different mismatched detectors using ML or MMSE channel estimates. Furthermore, we investigate the detection performance of an iterative receiver implementing the optimal detector in the initial iteration and mismatched detectors in following iterations in a system transmitting turbo-encoded data. Simulation results show that the optimal detection outperforms the mismatched detection with ML channel estimation. However, the improvement in the detection performance compared to the mismatched detection with the MMSE channel estimation is modest. We then extend the optimal detector to channels with more unknown parameters, such as spatially correlated MIMO Rayleigh fading channels, and compare the performance of the optimal detector with that of mismatched detectors.
Simulation results show that the benefit in detection performance caused by using the optimal detector is not affected by the spatial correlation between antennas, but becomes more significant when the number of antennas increases. This optimal detector is extended to the case of orthogonal frequency-division multiplexing (OFDM) signals in frequency-selective fading channels. We compare the performance and complexity of this optimal detector with that of mismatched detectors using ML and MMSE channel estimates in SISO and MIMO channels. In SISO systems, the performance of the optimal detector is close to that of the mismatched detector with MMSE channel estimates. However, the optimal detector significantly outperforms the mismatched detectors in MIMO channels.
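The baseline that the abstract calls a "mismatched" detector can be illustrated with a few lines of simulation: the channel gain is estimated from pilot symbols by least squares and then treated as if it were perfect in a minimum-distance BPSK detector. The flat Rayleigh fading model, block structure, and SNR below are assumptions for illustration, not the systems analysed in the thesis, and the joint optimal detector itself is not implemented here.

import numpy as np

rng = np.random.default_rng(0)
n_blocks, n_pilots, n_data = 2000, 4, 20
snr_db = 10.0
sigma = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))   # complex noise std per dimension

errors_mis, errors_perfect = 0, 0
for _ in range(n_blocks):
    h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)   # flat Rayleigh fading gain
    pilots = np.ones(n_pilots)                             # known pilot symbols
    data = rng.choice([-1.0, 1.0], size=n_data)            # BPSK data symbols
    noise = sigma * (rng.normal(size=n_pilots + n_data)
                     + 1j * rng.normal(size=n_pilots + n_data))
    r = h * np.concatenate([pilots, data]) + noise
    # Least-squares (ML) channel estimate from the pilot part of the block.
    h_hat = np.mean(r[:n_pilots] / pilots)
    # Mismatched detection: treat h_hat as the true channel gain.
    d_mis = np.sign(np.real(np.conj(h_hat) * r[n_pilots:]))
    d_perfect = np.sign(np.real(np.conj(h) * r[n_pilots:]))
    errors_mis += np.sum(d_mis != data)
    errors_perfect += np.sum(d_perfect != data)

print("BER, mismatched detector :", errors_mis / (n_blocks * n_data))
print("BER, perfect channel CSI :", errors_perfect / (n_blocks * n_data))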
APA, Harvard, Vancouver, ISO, and other styles
41

Skamangas, Emmanuel Epaminondas. "New Optimal-Control-Based Techniques for Midcourse Guidance of Gun-Launched Guided Projectiles." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/102752.

Full text
Abstract:
The following is an exploration into the optimal guidance and control of gun-launched guided projectiles. Unlike their early counterparts, modern-day gun-launched projectiles are capable of considerable accuracy. This ability is enabled through the use of control surfaces, such as fins or wings, which allow the projectile to maneuver towards a target. These aerodynamic features are part of a control system which lets the projectile achieve some effect at the target. With the advent of very high velocity guns, such as the Navy's electromagnetic railgun, these systems are a necessary part of the projectile design. This research focuses on a control scheme that uses the projectile's angle of attack as the single control in the development of an optimal control methodology that maximizes impact velocity, which is directly related to the amount of damage inflicted on the target. This novel approach, which utilizes a reference trajectory as a seed for an iterative optimization scheme, results in an optimal control history for a projectile. The investigation is geared towards examining how poor an approximation of the true optimal solution that reference trajectory can be and still lead to the determination of an optimal control history. Several different types of trajectories are examined for their applicability as a reference trajectory. Although the use of aerodynamic control surfaces enables control of the projectile, there is a potential down side. With steady development of guns with longer ranges and higher launch velocities, it becomes increasingly likely that a projectile will fly into a region of the atmosphere (and beyond) in which there is not sufficient airflow over the control surfaces to maintain projectile control. This research is extended to include a minimum dynamic pressure constraint in the problem; the imposition of such a constraint is not examined in the literature. Several methods of adding the constraint are discussed and a number of cases with varying dynamic pressure limits are evaluated. As a result of this research, a robust methodology exists to quickly obtain an optimal control history, with or without constraints, based on a rough reference trajectory as input. This methodology finds its applicability not only for gun-launched weapons, but also for missiles and hypersonic vehicles.
Doctor of Philosophy
As the name implies, optimal control problems involve determining a control history for a system that optimizes some aspect of the system's behavior. In aerospace applications, optimal control problems often involve finding a control history that minimizes time of flight, uses the least amount of fuel, maximizes final velocity, or meets some constraint imposed by the designer or user. For very simple problems, this optimal control history can be analytically derived; for more practical problems, such as the ones considered here, numerical methods are required to determine a solution. This research focuses on the optimal control problem of a gun-launched guided projectile. Guided projectiles have the potential to be significantly more accurate than their unguided counterparts; this improvement is achieved through the use of a control mechanism. For this research, the projectile is modeled using a single control approach, namely using the angle of attack as the only control for the projectile. The angle of attack is the angle formed between the direction the projectile is pointing and the direction it is moving (i.e., between the main body axis and the velocity vector of the projectile). An approach is then developed to determine an optimal angle of attack history that maximizes the projectile's final impact velocity. While this problem has been extensively examined by other researchers, the current approach results in the analytical determination of the costate estimates, which eliminates the need to iterate on their solutions. Subsequently, a minimum dynamic pressure constraint is added to the problem. While extensive investigation has been conducted in the examination of a maximum dynamic pressure constraint for aerospace applications, the imposition of a minimum represents a novel body of work. For an aerodynamically controlled projectile (i.e., one controlled with movable surfaces that interact with the air stream), dropping below a minimum dynamic pressure may result in loss of sufficient control. As such, developing a control history that accommodates this constraint and prevents the loss of aerodynamic control is critical to the ongoing development of very long range, gun-launched guided projectiles. This new methodology is applied with the minimum dynamic pressure constraint imposed and the resulting optimal control histories are then examined. In addition, the possibility of implementing other constraints is also discussed.
APA, Harvard, Vancouver, ISO, and other styles
42

Yan, Hongshi. "Robust optical flow estimation and motion segmentation." Thesis, University of Warwick, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.422137.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Zayouna, Ammar. "Optical flow estimation using steered-L1 norm." Thesis, Middlesex University, 2016. http://eprints.mdx.ac.uk/21273/.

Full text
Abstract:
Motion is a very important part of understanding the visual picture of the surrounding environment. In image processing it involves the estimation of displacements for image points in an image sequence. In this context dense optical flow estimation is concerned with the computation of pixel displacements in a sequence of images, and it has therefore been used widely in the fields of image processing and computer vision. A great deal of research has been dedicated to enabling accurate and fast motion computation in image sequences. Despite the recent advances in the computation of optical flow, there is still room for improvement, and optical flow algorithms still suffer from several issues, such as motion discontinuities, occlusion handling, and robustness to illumination changes. This thesis includes an investigation of the topic of optical flow and its applications. It addresses several issues in the computation of dense optical flow and proposes solutions. Specifically, this thesis is divided into two main parts dedicated to addressing two main areas of interest in optical flow. In the first part, image registration using optical flow is investigated. Both local and global optical flow methods have been used for image registration. An image registration based on an improved version of the combined Local-global method of optical flow computation is proposed. A bi-lateral filter was used in this optical flow method to improve the edge-preserving performance. It is shown that image registration via this method gives more robust results compared to the local and the global optical flow methods previously investigated. The second part of this thesis encompasses the main contribution of this research, which is an improved total variation L1 norm. A smoothness term is used in the optical flow energy function to regularise this function. The L1 norm is a plausible choice for such a term because of its performance in preserving edges; however, this term is known to be isotropic and hence decreases the penalisation near motion boundaries in all directions. The proposed improved L1 smoothness term (termed here the steered-L1 norm) demonstrates similar performance across motion boundaries but improves the penalisation performance along such boundaries.
APA, Harvard, Vancouver, ISO, and other styles
44

Morris, Russell A. "Optimal state estimation for the optimal control of far-field acoustic radiation pressure from submerged plates." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-06232009-063036/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Staerman, Guillaume. "Functional anomaly detection and robust estimation." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT021.

Full text
Abstract:
Enthusiasm for Machine Learning is spreading to nearly all fields such as transportation, energy, medicine, banking or insurance, as the ubiquity of sensors through IoT makes more and more data available at an ever finer granularity. The abundance of new applications for the monitoring of complex infrastructures (e.g. aircraft, energy networks), together with the availability of massive data samples, has put pressure on the scientific community to develop new reliable Machine Learning methods and algorithms. The work presented in this thesis focuses on two axes: unsupervised functional anomaly detection and robust learning, both from practical and theoretical perspectives. The first part of this dissertation is dedicated to the development of efficient functional anomaly detection approaches. More precisely, we introduce Functional Isolation Forest (FIF), an algorithm based on randomly splitting the functional space in a flexible manner in order to progressively isolate specific function types. Also, we propose the novel notion of functional depth based on the area of the convex hull of sampled curves, capturing gradual departures from centrality, even beyond the envelope of the data, in a natural fashion. Estimation and computational issues are addressed and various numerical experiments provide empirical evidence of the relevance of the approaches proposed. In order to provide recommendation guidance for practitioners, the performance of recent functional anomaly detection techniques is evaluated using two real-world data sets related to the monitoring of helicopters in flight and to the spectrometry of construction materials. The second part describes the design and analysis of several robust statistical approaches relying on robust mean estimation and statistical data depth. The Wasserstein distance is a popular metric between probability distributions based on optimal transport. Although the latter has shown promising results in many Machine Learning applications, it suffers from a high sensitivity to outliers. To that end, we investigate how to leverage Medians-of-Means (MoM) estimators to robustify the estimation of the Wasserstein distance with provable guarantees. Thereafter, a new statistical depth function, the Affine-Invariant Integrated Rank-Weighted (AI-IRW) depth, is introduced. Beyond the theoretical analysis carried out, numerical results are presented, providing strong empirical confirmation of the relevance of the proposed depth function. The upper-level sets of statistical depths, the depth-trimmed regions, give rise to a definition of multivariate quantiles. We propose a new discrepancy measure between probability distributions that relies on the average of the Hausdorff distance between the depth-based quantile regions with respect to each distribution, and demonstrate that it benefits from attractive properties of data depths such as robustness and interpretability. All algorithms developed in this thesis are open-sourced and available online.
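The Medians-of-Means idea referred to above can be stated in a few lines: split the sample into blocks, average each block, and return the median of the block means, which limits the influence of gross outliers on the estimate. The sketch below is a generic illustration with made-up contaminated data; the block count and contamination model are assumptions, and it does not reproduce the MoM-Wasserstein construction of the thesis.

import numpy as np

def median_of_means(x, n_blocks=25, seed=0):
    """Split the sample into blocks, average each block, return the median of block means."""
    rng = np.random.default_rng(seed)
    x = rng.permutation(np.asarray(x, dtype=float))
    blocks = np.array_split(x, n_blocks)
    return np.median([b.mean() for b in blocks])

rng = np.random.default_rng(1)
clean = rng.normal(loc=2.0, scale=1.0, size=990)
outliers = rng.normal(loc=100.0, scale=1.0, size=10)   # 1% gross contamination
sample = np.concatenate([clean, outliers])

print("empirical mean :", round(float(sample.mean()), 3))           # pulled by outliers
print("median-of-means:", round(float(median_of_means(sample)), 3)) # close to 2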
APA, Harvard, Vancouver, ISO, and other styles
46

Zhang, Wenlang. "Optimal monetary policy rules theory and estimation for OECD countries /." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=971939020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Liu, Qing. "Optimal experimental designs for hyperparameter estimation in hierarchical linear models." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1154042775.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Fraleigh, Lisa Marie. "Optimal sensor selection and parameter estimation for real-time optimization." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ40050.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Fang, Haian. "Optimal estimation of head scan data with generalized cross validation." Ohio : Ohio University, 1995. http://www.ohiolink.edu/etd/view.cgi?ohiou1179344603.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Stanis, Deepak Michael. "Estimation of optimal node degree for ad-hoc sensor networks." Thesis, Wichita State University, 2009. http://hdl.handle.net/10057/2416.

Full text
Abstract:
Recent developments in sensor networks have led to a number of routing schemes that use the limited resources available at sensor nodes more efficiently. Control traffic analysis plays a major role in ad hoc sensor networks in optimizing the energy used in the network. In this research, we present an effective method to estimate the number of nodes to be deployed in a given area. We analyze the optimal node degree under different levels of mobility. We consider two parameters, packet delivery ratio and control traffic, for the estimation of the optimal node degree. The quantitative and simulation results provide a detailed analysis of the working of the protocols and can be used to design efficient routing protocols for ad hoc networks. Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical and Computer Engineering.
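The relationship between deployed node count and node degree that this kind of study builds on can be illustrated with the standard random-geometric-graph approximation: the expected degree is the node density times the coverage area of the radio range. The sketch below compares that formula with a small Monte Carlo check; the deployment area, radio range and node count are assumed values, not those used in the thesis.

import numpy as np

rng = np.random.default_rng(0)
side, radius, n_nodes = 1000.0, 100.0, 200        # metres and node count (assumed)

# Analytical approximation (ignoring border effects):
# expected degree = (n - 1) * pi * r^2 / A
expected = (n_nodes - 1) * np.pi * radius ** 2 / side ** 2

# Monte Carlo check with uniformly deployed nodes.
pts = rng.uniform(0.0, side, size=(n_nodes, 2))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
degrees = (d < radius).sum(axis=1) - 1            # exclude each node's zero self-distance
print("analytical expected degree:", round(expected, 2))
print("simulated mean degree     :", round(float(degrees.mean()), 2))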
APA, Harvard, Vancouver, ISO, and other styles