
Dissertations / Theses on the topic 'Gradient Estimation'



Consult the top 50 dissertations / theses for your research on the topic 'Gradient Estimation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and the bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Swapp, David. "Estimation of visual textural gradient using Gabor functions." Thesis, University of Strathclyde, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320238.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lee, Choon. "Interframe image coding with three-dimensional gradient motion estimation." Diss., This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-08252008-162144/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zotov, Alexander. "Models of disparity gradient estimation in the visual cortex." Birmingham, Ala. : University of Alabama at Birmingham, 2007. https://www.mhsl.uab.edu/dt/2008r/zotov.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mehrparvar, Arash. "ATTITUDE ESTIMATION FOR A GRAVITY GRADIENT MOMENTUM BIASED NANOSATELLITE." DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/1097.

Full text
Abstract:
Attitude determination and estimation algorithms are developed and implemented in simulation for the Exocube satellite currently under development by PolySat at Cal Poly. A mission requirement of ±5° of attitude knowledge has been flowed down from the NASA Goddard-developed payload, and this requirement is to be met with a basic sensor suite and the appropriate algorithms. The algorithms selected in this work are TRIAD and an Extended Kalman Filter, both of which are placed in a simulation structure along with models for orbit propagation, spacecraft kinematics and dynamics, and sensor and reference vector models. Errors inherent from sensors, orbit position knowledge, and reference vector generation are modeled as well. Simulations are then run for anticipated dynamic states of Exocube while varying parameters for the spacecraft, attitude algorithms, and level of error. The nominal case shows steady-state convergence to within 1° of attitude knowledge, with sensor errors set to 3.5° and reference vector errors set to 2°. The algorithms employed have their functionality confirmed with the use of STK, and the simulations have been structured to be used as tools to help evaluate attitude knowledge capabilities for the Exocube mission and future PolySat missions.
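The TRIAD step of the attitude pipeline described above can be sketched in a few lines. This is a generic textbook formulation, not the thesis's implementation, and the vectors are purely illustrative:

```python
import numpy as np

def triad(v1_b, v2_b, v1_i, v2_i):
    """TRIAD: attitude matrix from two vector observations.

    v1_b, v2_b: unit vectors measured in the body frame (e.g. sun, magnetic field).
    v1_i, v2_i: the same vectors expressed in the inertial reference frame.
    Returns R such that v_b = R @ v_i.
    """
    def triad_frame(a, b):
        # Orthonormal triad anchored on the more accurate observation (a).
        t1 = a / np.linalg.norm(a)
        t2 = np.cross(a, b)
        t2 /= np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack([t1, t2, t3])

    Mb = triad_frame(v1_b, v2_b)   # triad in body coordinates
    Mi = triad_frame(v1_i, v2_i)   # same triad in inertial coordinates
    return Mb @ Mi.T
```

With exact, consistent measurements TRIAD reproduces the true rotation; in practice the result is only as good as the less accurate of the two sensors, which is why an EKF is layered on top.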
APA, Harvard, Vancouver, ISO, and other styles
5

Miyoshi, Naoto. "Studies on Gradient Estimation for Stationary Single-Server Queues." Kyoto University, 1997. http://hdl.handle.net/2433/202289.

Full text
Abstract:
Kyoto University (京都大学). Doctor of Engineering (course doctorate, new system), degree no. 甲第6839号 (工博第1590号), Graduate School of Engineering, Department of Applied Systems Science. Examining committee: Prof. Toshiharu Hasegawa, Prof. Norio Okino, Prof. Toshihide Ibaraki. Conferred under Article 4, Paragraph 1 of the Degree Regulations.
APA, Harvard, Vancouver, ISO, and other styles
6

Shahnaz, Sabina. "Gas flux estimation from surface gas concentrations." Thesis, Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/55073.

Full text
Abstract:
A gradient-independent model of gas fluxes was formulated and tested. The model is built on the relationship between gas flux and the time history of surface gas concentration, known as the half-order derivative (HOD), which holds when the transport of the gas in the boundary layer is described by a diffusion equation. The eddy diffusivity of the gas is parameterized based on the similarity theory of boundary-layer turbulence combined with the MEP model of surface heat fluxes. Tests of the new model using in-situ data of CO2 concentration and fluxes at several locations with diverse vegetation cover, geographic and climatic conditions confirm its usefulness and potential for monitoring and modeling greenhouse gases. The proposed model may also be used for estimating other GHG fluxes, such as methane (CH4) and water vapor. This proof-of-concept study justifies the proposed model as a practical solution for monitoring and modeling the global GHG budget over remote areas and oceans where ground observations of GHG fluxes are limited or non-existent. One focus of the ongoing research is to apply the model to producing regional and global distributions of carbon fluxes, identifying sinks and sources of carbon, and re-evaluating the regional and global carbon budget at monthly and annual time scales.
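The half-order derivative (HOD) at the heart of the model above can be illustrated with a Grünwald–Letnikov approximation of the order-1/2 time derivative. This is a generic numerical scheme, not the thesis's flux model, and it is checked against the known analytic result D^{1/2} t = 2·sqrt(t/π):

```python
import numpy as np

def half_order_derivative(f, h):
    """Grünwald–Letnikov approximation of the order-1/2 derivative of the
    sampled signal f (f[0] at t = 0, uniform spacing h).
    Returns the derivative estimate at the final sample."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        # Recurrence for the GL binomial weights with alpha = 1/2.
        w[j] = w[j - 1] * (j - 1 - 0.5) / j
    # D^{1/2} f(t_n) ~ h^{-1/2} * sum_j w_j * f(t_n - j*h)
    return h ** -0.5 * np.dot(w, f[::-1])
```

In the flux model, an expression of this form (applied to the surface concentration history, with the eddy diffusivity folded into the prefactor) replaces the vertical concentration gradient.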
APA, Harvard, Vancouver, ISO, and other styles
7

Yang, Junjun. "Seafloor Topography Estimation from Gravity Gradients." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1512048462472145.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Siow, Bernard, Ivana Drobnjak, Andrada Ianus, Isabel N. Christie, Mark F. Lythgoe, and Daniel C. Alexander. "Axon radius estimation with Oscillating Gradient Spin Echo (OGSE) diffusion MRI." Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-184163.

Full text
Abstract:
The estimation of axon radius provides insights into brain function [1] and could provide progression and classification biomarkers for a number of white matter diseases [2-4]. A recent in silico study [5] has shown that optimised gradient waveforms (GEN) and oscillating gradient waveform spin echo (OGSE) have increased sensitivity to small axon radius compared to pulsed gradient spin echo (PGSE) diffusion MR sequences. In a follow-up study [6], experiments with glass capillaries show the practical feasibility of GEN sequences and verify improved pore-size estimates. Here, we compare PGSE with sine, sine with arbitrary phase, and square wave OGSE (SNOGSE, SPOGSE, SWOGSE, respectively) for axon radius mapping in the corpus callosum of a rat, ex-vivo. Our results suggest improvements in pore size estimates from OGSE over PGSE, with greatest improvement from SWOGSE, supporting theoretical results from [5] and other studies [7-9].
APA, Harvard, Vancouver, ISO, and other styles
9

Siow, Bernard, Ivana Drobnjak, Andrada Ianus, Isabel N. Christie, Mark F. Lythgoe, and Daniel C. Alexander. "Axon radius estimation with Oscillating Gradient Spin Echo (OGSE) diffusion MRI." Diffusion fundamentals 18 (2013) 1, S. 1-6, 2013. https://ul.qucosa.de/id/qucosa%3A13707.

Full text
Abstract:
The estimation of axon radius provides insights into brain function [1] and could provide progression and classification biomarkers for a number of white matter diseases [2-4]. A recent in silico study [5] has shown that optimised gradient waveforms (GEN) and oscillating gradient waveform spin echo (OGSE) have increased sensitivity to small axon radius compared to pulsed gradient spin echo (PGSE) diffusion MR sequences. In a follow-up study [6], experiments with glass capillaries show the practical feasibility of GEN sequences and verify improved pore-size estimates. Here, we compare PGSE with sine, sine with arbitrary phase, and square wave OGSE (SNOGSE, SPOGSE, SWOGSE, respectively) for axon radius mapping in the corpus callosum of a rat, ex-vivo. Our results suggest improvements in pore size estimates from OGSE over PGSE, with greatest improvement from SWOGSE, supporting theoretical results from [5] and other studies [7-9].
APA, Harvard, Vancouver, ISO, and other styles
10

Valstad, Bård Arve. "Parameter Estimation and Control of a Dual Gradient Managed Pressure Drilling System." Thesis, Norwegian University of Science and Technology, Department of Engineering Cybernetics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9025.

Full text
Abstract:
The increasing demand for oil and gas in the world, and the fact that most of the easily accessible reservoirs are in production or already abandoned, result in a need to develop new resources. These may be reservoirs that have previously been considered uneconomical or impossible to develop, or extended operation of existing fields. Developing smaller reservoirs means that more wells have to be drilled per barrel, which brings ever greater challenges as the wells have to be drilled further and into more difficult formations. Mature fields are drained, which lowers reservoir pressure and therefore tightens the pressure margins for drilling. Because of the challenges of deep-water drilling and depleted reservoirs, there is a need to precisely control the pressure profile in the well while drilling in such formations. Some of the parameters needed to control the well precisely are not easily obtained during drilling, so estimating them is crucial if a model is to be used to control the well. The transmission of measurements from a well is also often either delayed or absent during periods of drilling, which causes problems for control. An estimation scheme must therefore be able to estimate the pressure in the well over the interval between measurement updates. The conventional method for transmitting measurements from the bottom of the well is mud pulse telemetry, i.e. pressure waves transmitted through the drilling mud; these measurements are delayed, so accurate real-time measurements are never available. To estimate the bottom-hole pressure, an extended Kalman filter was evaluated. The filter is based on a simple mathematical model derived for the drilling process; its states are the height of mud in the riser, the mud weight, and different friction factors for the well.
The filter is tested with continuously available measurements, with delayed updates of the bottom-hole measurement, and for cases where one of the measurements is absent. A simple controller for the bottom-hole pressure is implemented for reference tracking and during a simulated pipe connection. During simulations it was not possible to achieve convergence of the friction factor at normal flow rates, and this led to errors in the other states. The friction factor would only converge to its true value at very high flows during nominal testing, at which point the other states also reached their correct values. The difficulty in estimating the friction factor applied to all forms of friction parameters. The Kalman filter was tested against an artificial well simulated in WeMod and gave decent estimates of the bottom-hole pressure, except at low flows.
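A minimal scalar Kalman filter illustrates the estimation idea. The model below (pressure = hydrostatic + friction × flow², with entirely hypothetical values) is a toy stand-in for the thesis's drilling model; it also shows why the friction factor is only well observed when the flow term is large, since the measurement Jacobian is q²:

```python
import numpy as np

# Toy scalar EKF: estimate an unknown friction factor k from bottom-hole
# pressure measurements modelled as p = p0 + k*q^2 (illustrative values only).
p0, k_true = 200.0, 0.8        # hydrostatic pressure [bar], true friction factor
Q, R = 1e-6, 0.25              # process / measurement noise covariances
k_hat, P = 0.1, 1.0            # initial estimate and its variance

rng = np.random.default_rng(0)
for t in range(200):
    q = 2.0 + np.sin(0.1 * t)                        # time-varying flow rate
    P = P + Q                                        # predict: k as a random walk
    H = q ** 2                                       # measurement Jacobian dh/dk
    z = p0 + k_true * q ** 2 + rng.normal(0.0, 0.1)  # noisy measurement
    S = H * P * H + R
    K = P * H / S                                    # Kalman gain
    k_hat = k_hat + K * (z - (p0 + k_hat * q ** 2))  # update
    P = (1 - K * H) * P
```

When q is small, H = q² is small and the measurement carries almost no information about k, mirroring the thesis's finding that the friction factor only converges at high flows.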
APA, Harvard, Vancouver, ISO, and other styles
11

Ades, Michel. "Topics in stochastic systems, cumulative renewal processes, stochastic control and gradient estimation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq44336.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Levine, N. D. "Superconvergent estimation of the gradient from linear finite element approximations on triangular elements." Thesis, University of Reading, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.353472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Mangan, S. J. "Development of an intelligent road gradient estimation method using vehicle CAN bus data." Thesis, University of Liverpool, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.406640.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Lawrence, Joseph Scott. "Use of Phase and Amplitude Gradient Estimation for Acoustic Source Characterization and Localization." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/6969.

Full text
Abstract:
Energy-based acoustic quantities provide vital information about acoustic fields and the characterization of acoustic sources. Recently, the phase and amplitude gradient estimator (PAGE) method has been developed to reduce error and extend bandwidth of energy-based quantity estimates. To inform uses and applications of the method, analytical and experimental characterizations of the method are presented. Analytical PAGE method bias errors are compared with those of traditional estimation for two- and three-microphone one-dimensional probes. For a monopole field when phase unwrapping is possible, zero bias error is achieved for active intensity using three-microphone PAGE and for specific acoustic impedance using two-microphone PAGE. A method for higher-order estimation in reactive fields is developed, and it is shown that a higher-order traditional method outperforms higher-order PAGE for reactive intensity in a standing wave field. Extending the applications of PAGE, the unwrapped phase gradient is used to develop a method for directional sensing with improved bandwidth and arbitrary array response.
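The bandwidth benefit of a phase-gradient estimator can be illustrated on a plane wave: a traditional finite-difference (p-p) estimator effectively recovers sin(kd)/d instead of the wavenumber k, while differencing unwrapped phases, in the spirit of the PAGE method, recovers k exactly in this case. The values below are assumptions, and this is a sketch rather than the PAGE implementation itself (real use requires broadband phase unwrapping):

```python
import numpy as np

# Two-microphone estimate of the acoustic wavenumber k for a plane wave.
rho, c, d = 1.21, 343.0, 0.05        # air density, sound speed, mic spacing [m]
f = 3000.0                           # frequency [Hz]
k = 2 * np.pi * f / c                # true wavenumber; here k*d ~ 2.7 (large)

p1 = 1.0 + 0.0j                      # complex pressure at mic 1
p2 = np.exp(-1j * k * d)             # plane wave: phase advances by -k*d at mic 2

# Traditional finite-difference estimator: yields sin(k*d)/d, badly biased
# once the spacing is not small against the wavelength.
k_fd = -np.imag(p2 / p1) / d

# Phase-gradient estimator: difference of (unwrapped) phases over the spacing.
phase = np.unwrap([np.angle(p1), np.angle(p2)])
k_page = -(phase[1] - phase[0]) / d
```

Here k_fd underestimates k by a factor of several, while k_page matches it to machine precision, which is the mechanism behind the extended bandwidth claimed for PAGE-style estimators.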
APA, Harvard, Vancouver, ISO, and other styles
15

Baidoo-Williams, Henry Ernest. "Novel techniques for estimation and tracking of radioactive sources." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1539.

Full text
Abstract:
Radioactive source signal measurements are Poisson distributed due to the underlying radiation process. This fact, coupled with ubiquitous naturally occurring radioactive materials (NORM), makes it challenging to localize or track a radioactive source or target accurately. One must either use highly accurate sensors to minimize measurement noise, or many less accurate sensors whose measurements are averaged to minimize it. The cost of highly accurate sensors places a bound on the number that can realistically be deployed; similarly, the inaccuracy of cheap sensors places a lower bound on the number of sensors needed to estimate the location or trajectory of a radioactive source within reasonable error margins. We first consider the use of the smallest number of highly accurate sensors to localize radioactive sources. The novel ideas and algorithms we develop use no more than the minimum number of sensors required by triangulation-based algorithms, but avoid the pitfalls of those algorithms, such as multiple local minima and the slow convergence caused by algorithm reinitialization. Under the general assumption that we have a priori knowledge of the statistics of the intensity of the source, we show that if the source or target is known to lie in one open half-plane, then N sensors are enough to guarantee a unique solution, N being the dimension of the search space. If the assumptions are tightened so that the source or target lies in the open convex hull of the sensors, then N+1 sensors are required. If we do not have knowledge of the statistics of the intensity of the source, we show that N+1 sensors are still the minimum number required to guarantee a unique solution when the source is in the open convex hull of the sensors.
Second, we present tracking of a radioactive source using cheap, low-sensitivity binary proximity sensors under some general assumptions. If a source or target moves in a straight line and we have a priori knowledge of its radiation intensity, we show that three binary sensors, whose measurements indicate the presence or absence of a source within their nominal sensing range, suffice to localize the linear trajectory. Without knowledge of the intensity of the source or target, a minimum of four sensors suffices to localize the trajectory. Finally, we present fundamental limits on the estimation accuracy of a stationary radioactive source using ideal mobile measurement sensors, and provide a robust algorithm that achieves the estimation accuracy bounds asymptotically as the expected radiation count increases.
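A minimal illustration of localizing a source from Poisson-distributed counts, assuming a known source intensity and an inverse-square-plus-background model. All values are hypothetical, and a brute-force grid search over the likelihood stands in for the thesis's algorithms:

```python
import numpy as np

# Maximum-likelihood localization of a radioactive point source from
# Poisson counts at three fixed sensors (illustrative values only).
A, B = 5000.0, 10.0                               # source intensity, background
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
src = np.array([3.0, 4.0])                        # true source position

def rate(pos):
    # Expected counts: inverse-square attenuation plus uniform background.
    d2 = np.sum((sensors - pos) ** 2, axis=1)
    return A / d2 + B

rng = np.random.default_rng(1)
counts = rng.poisson(rate(src))                   # Poisson measurements

def loglik(pos):
    # Poisson log-likelihood (dropping the constant log-factorial term).
    lam = rate(pos)
    return np.sum(counts * np.log(lam) - lam)

xs = np.linspace(0.5, 9.5, 91)
grid = [(x, y) for x in xs for y in xs]
best = max(grid, key=lambda p: loglik(np.array(p)))
```

The source lies inside the open convex hull of the three sensors, matching the uniqueness condition quoted in the abstract for N = 2 with known intensity statistics.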
APA, Harvard, Vancouver, ISO, and other styles
16

Castillo, Anthony. "Contribution à l'étude de l'endommagement de matériaux composites par estimation des termes sources et des diffusivités thermiques." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0592/document.

Full text
Abstract:
Ce travail porte sur la détection de l’endommagement de matériaux composites. Une première partie concerne l’élaboration de méthodes permettant d’estimer les termes sources de chaleur d’un matériau sollicité mécaniquement. Lors de ce processus, un ensemble de défauts mécaniques mènent à des productions de chaleur. La détection des sources peut permettre la détection de ces défauts. Deux principales méthodes sont présentées : une méthode dite « directe » basée sur une discrétisation du champ de température mesuré et une méthode « itérative » basée sur la méthode du gradient conjugué. A ces méthodes sont couplées des techniques de filtrages des données comme la SVD. Les équations sont résolues par différences finies sous leur forme linéaire. Des modifications sont apportées à l’algorithme itératif pour améliorer sa convergence ainsi que les résultats. Les problématiques envisagées font partie des problèmes inverses en thermique. L’objectif de la première partie est de trouver un lien entre l’apparition de macro-fissure et la localisation de termes sources de chaleur au sein d’un matériau composite. La seconde partie consiste à élaborer des méthodes d’estimation des diffusivités thermiques directionnelles. Les méthodes reposent sur une modélisation du transfert de chaleur à l’aide des quadripôles thermiques. Les estimations de paramètres sont réalisées sur des zones ciblées à risque sur un matériau déjà endommagé. Le but est de faire le lien entre un endommagement mécanique connu diffus et une dégradation des propriétés thermiques. Ce manuscrit est présenté en deux parties : une partie de validation des méthodes. Une partie expérimentale où sont analysés les composites<br>This work deals with the damage detection of composite materials. These materials are used in the aeronautics industry. The first part concerns the development of methods to estimate the heat sources terms of a stressed material. 
During this process, a set of mechanical defects leads to heat productions. The sources detection can conduct to the detection of these defects. Two main methods are presented: a "direct" method based on a discretization of the measured temperature field and an "iterative" method based on the conjugate gradient method. These methods are coupled with data filtering techniques such as SVD. In order to optimize computation time, equations are solved by finite differences in their linear form. Modifications are also made for the iterative algorithm to improve its convergence as well as the results of the estimation. These problems are considered as thermal inverse problems. The main objective of the first part is to find an experimental link between the appearance of a macro fissure and the localization of a heat source term within a composite material. The second part consists in the elaboration of methods for estimating thermal directional diffusivities. The methods are based on a modeling of heat transfer using thermal quadrupoles. Parameter estimations are made on targeted "risked" areas on a material, which is already damaged but not under stress. The aim is to link a known mechanical damage, which is called "diffuse" to thermal properties degradation in the main directions. This manuscript is presented in two parts: a validation part of the methods, and an experimental part in which composites are analyzed
APA, Harvard, Vancouver, ISO, and other styles
17

Jia, Zhen. "Image Registration and Image Completion: Similarity and Estimation Error Optimization." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1406821875.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Beiki, Majid. "New Techniques for Estimation of Source Parameters : Applications to Airborne Gravity and Pseudo-Gravity Gradient Tensors." Doctoral thesis, Uppsala universitet, Geofysik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-143015.

Full text
Abstract:
Gravity gradient tensor (GGT) data contain the second derivatives of the Earth's gravitational potential in three orthogonal directions. GGT data can be measured from land, airborne, marine or space platforms. In the last two decades, the applications of GGT data in hydrocarbon exploration, mineral exploration and structural geology have increased considerably. This work focuses on developing new interpretation techniques for GGT data, as well as for the pseudo-gravity gradient tensor (PGGT) derived from the measured magnetic field. The applications of the developed methods are demonstrated on a GGT data set from the Vredefort impact structure, South Africa, and a magnetic data set from the Särna area, west central Sweden. The eigenvectors of the symmetric GGT can be used to estimate the position of the causative body as well as its strike direction. For a given measurement point, the eigenvector corresponding to the maximum eigenvalue points approximately toward the center of mass of the source body. For quasi-2D structures, the strike direction of the source can be estimated from the direction of the eigenvectors corresponding to the smallest eigenvalues. The same properties hold for the pseudo-gravity gradient tensor (PGGT) derived from magnetic field data, assuming that the magnetization direction is known. The analytic signal concept is applied to GGT data in three dimensions. Three analytic signal functions are introduced along the x-, y- and z-directions, called directional analytic signals. The directional analytic signals are homogeneous and satisfy Euler's homogeneity equation, so Euler deconvolution of the directional analytic signals can be used to locate causative bodies. The structural index of the gravity field is automatically identified by solving three Euler equations derived from the GGT for a set of data points located within a square window of adjustable size.
For 2D causative bodies striking in the y-direction, the measured gxz and gzz components of the GGT can be jointly inverted to estimate the parameters of infinite-dike and geological-contact models. Once the strike direction of the 2D causative body is estimated, the measured components can be transformed into the strike coordinate system. The GGT data within a set of square windows are deconvolved for both the infinite-dike and geological-contact models, and the best model is chosen based on the smallest data-fit error. (Erroneously printed as Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 730.)
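The eigenvector property described above can be checked numerically for the simplest source, a point mass, whose gravity gradient tensor is T = Gm(3uuᵀ − I)/r³ with u the unit vector from the observation point toward the source. The mass and geometry below are illustrative assumptions:

```python
import numpy as np

# Gravity gradient tensor (GGT) of a point mass, and the property that the
# eigenvector of the largest eigenvalue points toward the source.
G = 6.674e-11                               # gravitational constant [m^3 kg^-1 s^-2]
m = 1e9                                     # source mass [kg] (illustrative)
r_vec = np.array([300.0, 400.0, 1200.0])    # source minus observation point [m]
r = np.linalg.norm(r_vec)
u = r_vec / r                               # unit vector toward the source

# T_ij = d^2 V / dx_i dx_j for the potential V = G m / r:
T = G * m * (3.0 * np.outer(u, u) - np.eye(3)) / r ** 3

vals, vecs = np.linalg.eigh(T)              # eigenvalues in ascending order
v_max = vecs[:, -1]                         # eigenvector of the largest eigenvalue
```

The eigenvalues are 2Gm/r³ (along u) and −Gm/r³ (twice, perpendicular), so the tensor is trace-free as required by Laplace's equation, and v_max is parallel to u up to sign.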
APA, Harvard, Vancouver, ISO, and other styles
19

Genest, Laurent. "Optimisation de forme par gradient en dynamique rapide." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEC022/document.

Full text
Abstract:
In order to face new industrial challenges, automotive manufacturers wish to apply optimization methods at every step of the design process. Including shape parameters in the design space, increasing their number and widening their variation ranges raises new difficulties, and crashworthiness is one of them. With the long computation times, nonlinearity, instability and numerical scatter of this rapid-dynamics problem, the usual approach of design-of-experiments with response surfaces becomes too expensive for industrial use. This raises the question: how can shape optimization be carried out in rapid dynamics with a large number of parameters? Gradient-based methods are the most promising answer, because the number of parameters has little effect on the optimization cost, so they allow optimization of problems with many parameters.
However, conventional ways of computing the gradient are impractical in rapid dynamics: the computational cost and numerical noise rule out finite differences, and computing the gradient by differentiating the rapid-dynamics equations is not currently available and would be highly intrusive with respect to the software. Instead of determining the true gradient of the crash problem, we estimate it. The Equivalent Static Loads Method is a low-cost optimization method based on the construction of a linear static problem equivalent to the rapid-dynamics problem. Using the sensitivity of this equivalent problem as the estimated gradient, we were able to optimize rapid-dynamics problems with thicknesses as design variables. Moreover, if the equivalent problem is built with the secant stiffness matrix, the gradient approximation is further improved. In the same way, the gradient can also be estimated with respect to the positions of the nodes of the CAE model. Since it is more common to work with CAD parameters, the derivative of the node positions with respect to those parameters is needed; it can be obtained analytically by defining the shape with a parametric surface and taking its control points as design variables. With this estimated gradient and the link between nodes and shape parameters, shape optimization with a large number of parameters becomes possible at low cost. The method was developed for two families of crashworthiness criteria. The first is linked to a nodal displacement, an important objective when the integrity of the passenger compartment must be preserved, e.g. minimizing the intrusion into it. The second is linked to the strain energy and ensures good structural behavior during the crash.
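The core of the Equivalent Static Loads idea can be sketched on a 2-DOF spring-mass system: at selected time steps, the static load f_s = K·u(t_s) exactly reproduces the dynamic displacement field in a linear static problem, which a cheap gradient-based static optimizer can then work with. The matrices and excitation below are illustrative, not the thesis's crash model:

```python
import numpy as np

# Equivalent static loads on a 2-DOF spring system.
K = np.array([[3.0, -1.0],
              [-1.0, 2.0]])                      # stiffness matrix [N/m]
M = np.eye(2)                                    # mass matrix [kg]

# Central-difference time integration of M u'' + K u = f(t).
dt, n = 1e-3, 2000
u_prev = np.zeros(2)
u = np.zeros(2)
snapshots = []
for i in range(n):
    f = np.array([np.sin(5 * i * dt), 0.0])      # transient excitation
    acc = np.linalg.solve(M, f - K @ u)
    u_next = 2 * u - u_prev + dt ** 2 * acc
    u_prev, u = u, u_next
    if i % 500 == 0:
        snapshots.append(u.copy())               # selected time steps t_s

# One equivalent static load case per selected time step:
f_eq = [K @ us for us in snapshots]
# Solving the linear static problem K u = f_eq recovers the dynamic u(t_s).
```

The sensitivities of these static load cases are what stand in for the (unavailable) gradient of the dynamic problem; the secant-stiffness refinement mentioned in the abstract sharpens that approximation.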
APA, Harvard, Vancouver, ISO, and other styles
20

Strydom, Willem Jacobus. "Recovery based error estimation for the Method of Moments." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96881.

Full text
Abstract:
Thesis (MEng)--Stellenbosch University, 2015. The Method of Moments (MoM) is routinely used for the numerical solution of electromagnetic surface integral equations. Solution errors are inherent to any numerical computational method, and error estimators can be effectively employed to reduce and control these errors. In this thesis, gradient recovery techniques of the Finite Element Method (FEM) are formulated within the MoM context in order to recover a higher-order charge of a Rao-Wilton-Glisson (RWG) MoM solution. Furthermore, a new recovery procedure, based specifically on the properties of the RWG basis functions, is introduced by the author. These recovered charge distributions are used for a posteriori error estimation of the charge. It was found that the newly proposed charge recovery method has the highest accuracy of the considered recovery methods, and is the best suited for recovery-based error estimation. In addition to charge recovery, recovery procedures for the MoM solution current are also investigated. A technique is explored whereby a recovered charge is used to find a higher-order divergent current representation. Two newly developed methods for the subsequent recovery of the solenoidal current component, as contained in the RWG solution current, are also introduced by the author. A posteriori error estimation of the MoM current is accomplished through the use of the recovered current distributions; a mixed second-order recovered current, based on a vector recovery procedure, was found to produce the most accurate results. The error estimation techniques developed in this thesis could be incorporated into an adaptive solver scheme to optimise the solution accuracy relative to the computational cost.
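A 1-D finite-element analogy (not the MoM setting of the thesis) conveys the flavor of gradient recovery: the gradient of a piecewise-linear interpolant is piecewise constant and low-order, but averaging the two element gradients adjacent to a node yields a markedly more accurate, superconvergent nodal gradient:

```python
import numpy as np

# Gradient recovery on a uniform 1-D mesh with a smooth "solution" u = sin(pi x).
h = 0.05
x = np.arange(0.0, 1.0 + h / 2, h)
u = np.sin(np.pi * x)                    # solution sampled at the nodes

elem_grad = np.diff(u) / h               # constant gradient on each element
# Recovered nodal gradient at interior nodes: average of adjacent elements
# (equivalent to a central difference, hence one order more accurate).
recovered = 0.5 * (elem_grad[:-1] + elem_grad[1:])

exact = np.pi * np.cos(np.pi * x[1:-1])              # exact gradient, interior nodes
err_elem = np.max(np.abs(elem_grad[1:] - exact))     # raw element value at a node
err_rec = np.max(np.abs(recovered - exact))          # recovered nodal value
```

The recovered gradient is O(h²) accurate at the nodes versus O(h) for the raw element values; the thesis transfers this FEM idea to the RWG charge and current of a MoM solution, where the recovered quantity then drives a posteriori error estimation.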
APA, Harvard, Vancouver, ISO, and other styles
21

Nwagbara, Anuri Nwadimma Chiamaka. "Contribution for heat flow density estimation in the Meso-Cenozoic basins of Portugal." Master's thesis, Universidade de Évora, 2021. http://hdl.handle.net/10174/30091.

Full text
Abstract:
The evolution of temperature in sedimentary basins is a fundamental tool for the evaluation and exploration of hydrocarbons, for the evaluation of geothermal potential, for paleogeographic reconstruction, for carbon sequestration and for the hydrogeological evaluation of a given region. Estimates of heat flow density (HFD) at the surface in the Portuguese Meso-Cenozoic basins are difficult to obtain. The small number of HFD estimates in the Meso-Cenozoic basins is a consequence of the high drilling costs for determining HFD and strict drilling regulation measures. Most of the temperature data available for estimating HFD are obtained in oil exploration holes; however, the temperature data obtained from them are subject to high uncertainty. Twelve oil exploration holes carried out in Portugal, with temperature records, were considered in this work; only one hole was rejected because it did not meet the minimum quality requirements for HFD estimation. The values of thermal conductivity of the rock formations traversed by the various holes were assumed, since there are no laboratory determinations for those geological formations. Bottom-hole temperatures (BHT) were corrected with Zetaware software, which uses the Horner method and produces results with acceptable uncertainties. Only three sedimentary basins (Lusitanian, Porto, Alentejo) were identified, with regional HFD estimates ranging from 61 to 174 mWm-2. The average geothermal gradient and average HFD were found to be 33 ℃ km-1 and 113 mWm-2 for the Lusitanian basin, 24 ℃ km-1 and 78 mWm-2 for Porto, and 21 ℃ km-1 and 61 mWm-2 for Alentejo. Compared to previous geothermal and HFD values, the new estimates show a fair correspondence and point to high regional sedimentary HFD. 
Nevertheless, a heat flow density map was generated and an attempt to geothermally characterize the Portuguese Meso Cenozoic basins is made; RESUMO: CONTRIBUIÇÃO PARA A ESTIMATIVA DA DENSIDADE DO FLUXO DE CALOR NAS BACIAS MESO CENOZÓICAS DE PORTUGAL A evolução da temperatura nas bacias sedimentares é uma ferramenta fundamental para a avaliação e exploração de hidrocarbonetos, para a avaliação do potencial geotérmico, para a reconstrução paleogeográfica, para o sequestro de carbono e para a avaliação hidrogeológica de uma determinada região. Estimativas da densidade do fluxo de calor (DFC) na superfície das bacias Meso Cenozóicas Portuguesas são difíceis de obter. O pequeno número de estimativas de DFC nas bacias Meso Cenozóicas é uma consequência dos elevados custos de perfuração para a determinação do DFC e de medidas rigorosas de regulação da perfuração. A maioria dos dados de temperatura disponíveis para estimar o DFC é obtida em furos de prospeção de petróleo; no entanto, os dados de temperatura neles obtidos estão sujeitos a uma elevada incerteza. Neste trabalho foram considerados doze furos de prospeção de petróleo realizados em Portugal com registos de temperatura; apenas um furo foi rejeitado por não apresentar os requisitos mínimos de qualidade para a estimativa do DFC. Assumiram-se os valores de condutividade térmica das formações rochosas atravessadas pelos diversos furos uma vez que não existem determinações laboratoriais para essas formações geológicas. As temperaturas de fundo de furo (BHT) foram corrigidas com o software Zetaware que utiliza o método de Horner e produz resultados com incertezas aceitáveis. Apenas foram identificadas três bacias sedimentares (Lusitanianas, do Porto, do Alentejo) e com uma estimativa regional de DFC que varia entre 61 e 174 mWm-2. 
Verificou-se que o gradiente geotérmico médio e a DFC média na bacia Lusitaniana são, respectivamente, 33 ℃km-1 e 113 mWm-2 Porto (24 ℃ km-1, 78 mWm-2) e Alentejo (21 ℃ km-1, 61 mWm-2) respectivamente. Em comparação com valores geotérmicos e de DFC anteriores, as novas estimativas obtidas correspondem a uma DFC sedimentar regional elevada. Foi desenhado um mapa da densidade de fluxo de calor e é feita uma tentativa de caracterizar geotermicamente as bacias Meso Cenozóicas Portuguesas.
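The two quantitative steps named in the abstract, the Horner correction of bottom-hole temperatures and the conversion of a geothermal gradient into heat flow density, can be sketched in a few lines. This is an illustrative reconstruction, not the thesis's Zetaware workflow: the function names, the synthetic records, and the linear Horner model T(dt) = T_inf - C ln((tc + dt)/dt) are assumptions for the example.

```python
import math

def horner_correct(bht_records, circulation_time):
    """Extrapolate bottom-hole temperature (BHT) measurements taken at
    different shut-in times dt to the equilibrium formation temperature:
    regress T against the Horner time ratio ln((tc + dt)/dt) and return
    the intercept (the limit dt -> infinity)."""
    xs = [math.log((circulation_time + dt) / dt) for dt, _ in bht_records]
    ys = [t for _, t in bht_records]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx

def heat_flow_density(t_corrected, t_surface, depth_km, conductivity):
    """HFD in mW m^-2 from conductivity (W m^-1 K^-1) times the geothermal
    gradient (deg C per km); 1 W/m/K x 1 K/km = 1 mW/m^2."""
    gradient = (t_corrected - t_surface) / depth_km
    return conductivity * gradient

# Synthetic BHT records following the Horner model exactly:
# true formation temperature 120 C, circulation time 5 h.
records = [(dt, 120.0 - 20.0 * math.log((5.0 + dt) / dt)) for dt in (6.0, 12.0, 24.0)]
t_eq = horner_correct(records, 5.0)          # recovers 120.0
q = heat_flow_density(t_eq, 15.0, 3.0, 2.5)  # 35 C/km gradient -> 87.5 mW/m2
```

With real logs the extrapolation is only as good as the recorded shut-in times, which is one source of the "high uncertainty" the abstract mentions.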
APA, Harvard, Vancouver, ISO, and other styles
22

Cengiz, Acarturk. "Gradient Characteristics Of The Unaccusative/unergative Distinction In Turkish: An Experimental Investigation." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12605912/index.pdf.

Full text
Abstract:
This thesis investigates the gradient behaviour of monadic intransitive verb classes in Turkish, under an aspectual classification of the unaccusative/unergative verb types, namely the Split Intransitivity Hierarchy. This Hierarchy claims that intransitive verb types are subject to gradient acceptability in certain syntactic constructions. The methods used in judgment elicitation studies in psychophysics, such as the magnitude estimation technique, have recently been adapted to capture gradient linguistic data. The practical benefits of the Internet have also led researchers to design and conduct web-based experiments for linguistic data elicitation. Research on Human-Computer Interaction offers suggestions for the design of more usable user interfaces. Considering these developments, in this thesis a web-based experiment interface has been designed as an extension to the magnitude estimation technique to elicit acceptability judgments on two syntactic constructions, i.e. the -mIS participle (the unaccusative diagnostic) and impersonal passivization (the unergative diagnostic), for different verb types on the Split Intransitivity Hierarchy. The experiment was conducted on the Internet. The results show that in the two diagnostics the verb types receive categorical or indeterminate acceptability judgments, which allows us to specify the core or peripheral status of the verbs. Within the classes we have examined, change of state verbs constitute the core unaccusative verbs, and controlled (motional and non-motional) process verbs constitute the core unergative verbs. Stative verbs and uncontrolled process verbs are peripheral unaccusatives and unergatives, respectively. Change of location verbs (with an animate subject) are close to the unergative end.
APA, Harvard, Vancouver, ISO, and other styles
23

Lacinová, Veronika. "Odhady diskrétního rozložení pravděpodobnosti a bootstrap." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-234260.

Full text
Abstract:
This doctoral thesis focuses on unconventional methods for estimating the discrete probability distribution of a categorical quantity from its observed values. The gradient of a quasinorm and so-called line estimation were employed for these estimates. The bootstrap method was used to improve accuracy. Theoretical results for selected quasinorms are illustrated on specific examples.
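The bootstrap step mentioned in the abstract — resampling the observed categorical data to quantify the accuracy of a discrete probability estimate — can be sketched as follows. The quasinorm-gradient estimator itself is not reproduced; plain relative frequencies stand in for it, and all names are illustrative.

```python
import random
from collections import Counter

def category_probs(sample, categories):
    """Relative-frequency estimate of a discrete probability distribution."""
    counts = Counter(sample)
    n = len(sample)
    return [counts[c] / n for c in categories]

def bootstrap_ci(sample, categories, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence intervals for each category
    probability: resample the data with replacement, re-estimate, and take
    empirical quantiles of the bootstrap replicates."""
    rng = random.Random(seed)
    n = len(sample)
    draws = [category_probs([rng.choice(sample) for _ in range(n)], categories)
             for _ in range(n_boot)]
    cis = []
    for j in range(len(categories)):
        vals = sorted(d[j] for d in draws)
        cis.append((vals[int(alpha / 2 * n_boot)],
                    vals[int((1 - alpha / 2) * n_boot) - 1]))
    return cis

data = ["a"] * 60 + ["b"] * 30 + ["c"] * 10
probs = category_probs(data, ["a", "b", "c"])   # [0.6, 0.3, 0.1]
```

Any point estimator, including a quasinorm-based one, can be plugged in place of `category_probs` inside the resampling loop.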
APA, Harvard, Vancouver, ISO, and other styles
24

Yu, Jia. "Distributed parameter and state estimation for wireless sensor networks." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28929.

Full text
Abstract:
The research in distributed algorithms is linked with developments in statistical inference for wireless sensor network (WSN) applications. Typically, distributed approaches process the collected signals from networked sensor nodes. That is to say, the sensors receive local observations and transmit information between each other. Each sensor is capable of combining the collected information with its own observations to improve performance. In this thesis, we propose novel distributed methods for inference applications using wireless sensor networks. In particular, efficient algorithms which are not computationally intensive are investigated. Moreover, we present a number of novel algorithms for processing asynchronous network events and robust state estimation. In the first part of the thesis, a distributed adaptive algorithm based on the component-wise EM method for decentralized sensor networks is investigated. The distributed component-wise Expectation-Maximization (EM) algorithm has been designed for application in Gaussian density estimation. The proposed algorithm performs a component-wise EM procedure for local parameter estimation and exploits an incremental strategy for network updating, which can provide an improved convergence rate. Numerical simulation results have illustrated the advantages of the proposed distributed component-wise EM algorithm for both well-separated and overlapped mixture densities. The distributed component-wise EM algorithm can outperform other EM-based distributed algorithms in estimating overlapping Gaussian mixtures. In the second part of the thesis, a diffusion-based EM gradient algorithm for density estimation in asynchronous wireless sensor networks has been proposed. Specifically, based on the asynchronous adapt-then-combine diffusion strategy, a distributed EM gradient algorithm that can deal with asynchronous network events has been considered. 
The Bernoulli model has been exploited to approximate the asynchronous behaviour of the network. Compared with existing distributed EM-based estimation methods using a consensus strategy, the proposed algorithm can provide more accurate estimates in the presence of asynchronous network uncertainties, such as random link failures, random data arrival times, and turning sensor nodes on or off for energy conservation. Simulation experiments have demonstrated that the proposed algorithm significantly outperforms the consensus-based strategies in terms of Mean-Square-Deviation (MSD) performance in an asynchronous network setting. Finally, the challenge of distributed state estimation in power systems, which requires low complexity and high stability in the presence of bad data for a large-scale network, is addressed. A gossip-based quasi-Newton algorithm has been proposed for solving the power system state estimation problem. In particular, we have applied the quasi-Newton method for distributed state estimation under the gossip protocol. The proposed algorithm exploits the Broyden-Fletcher-Goldfarb-Shanno (BFGS) formula to approximate the Hessian matrix, thus avoiding the computation of inverse Hessian matrices for each control area. The simulation results for the IEEE 14-bus system and a large-scale 4200-bus system have shown that the distributed quasi-Newton scheme outperforms existing algorithms in terms of Mean-Square-Error (MSE) performance with bad data.
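The BFGS idea used in the final part — building (inverse) Hessian information from gradient differences so that no Hessian is ever formed or inverted — can be sketched on a stand-in quadratic cost. The gossip layer and the power-system objective are omitted; the fixed damped step and all names are assumptions for the sketch.

```python
import numpy as np

def bfgs_minimize(grad, x0, iters=100):
    """Quasi-Newton minimisation with the BFGS inverse-Hessian update:
    curvature is accumulated from gradient differences, avoiding any
    explicit Hessian computation or inversion."""
    n = len(x0)
    H = np.eye(n)                      # inverse-Hessian approximation
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    for _ in range(iters):
        s = -0.5 * H @ g               # damped quasi-Newton step
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                 # curvature condition keeps H positive definite
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Stand-in for a least-squares estimation cost 0.5 (x - x*)^T A (x - x*),
# whose gradient is A (x - x*).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
x_star = np.array([1.0, -2.0])
sol = bfgs_minimize(lambda x: A @ (x - x_star), np.zeros(2))
```

In the distributed setting of the thesis, each control area would maintain its own such approximation and exchange iterates under the gossip protocol.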
APA, Harvard, Vancouver, ISO, and other styles
25

Meftahi, Houcine. "Études théoriques et numériques de quelques problèmes inverses." Thesis, Lille 1, 2009. http://www.theses.fr/2009LIL10090/document.

Full text
Abstract:
Le travail de la thèse concerne l'étude de quelques problèmes inverses par différentes approches mathématiques. Dans la première partie, nous considérons le problème inverse géométrique consistant à retrouver une fissure ou cavité(s) inconnue à partir de mesures sur le bord d'un domaine plan. Nous traitons ce problème par des techniques d'approximation rationnelle et méromorphe dans le plan complexe. Nous étudions un autre problème inverse consistant à estimer l'aire d'une cavité. Nous donnons une majoration explicite de l'aire de la cavité. Cette majoration est basée sur une estimation de croissance dans l'espace de Hardy-Sobolev de la couronne. Nous appliquons également cette estimation pour donner la vitesse de convergence d'un schéma d'interpolation d'une fonction de l'espace de Hardy-Sobolev de la couronne. Dans la deuxième partie, nous considérons d'abord le problème inverse d'identification des paramètres de Lamé en élasticité linéaire. Nous transformons ce problème en un problème de minimisation et nous exhibons quelques exemples numériques. Nous considérons également le problème inverse d'identification d'une inclusion correspondant à une discontinuité de la conductivité. Nous utilisons la méthode du gradient topologique pour une première approximation et ensuite la méthode du gradient classique pour identifier plus précisément celles-ci. Enfin, nous étudions un problème inverse d'identification d'une inclusion en élasticité linéaire. Nous utilisons le gradient de forme pour retrouver numériquement des inclusions elliptiques<br>This work concerns the study of some inverse problems by different mathematical approaches. In the first part, we consider the geometrical inverse problem related to the identification of an unknown crack or inclusion(s) by boundary measurements. We treat this problem by techniques of rational and meromorphic approximation in the complex plane. We study another inverse problem, namely estimating the area of a cavity. 
We derive an explicit upper bound on the area of the cavity. We also apply this estimate to find an upper bound on the rate of convergence of a recovery interpolation scheme in the Hardy-Sobolev space of an annulus. In the second part, we first consider the inverse problem of recovering the Lamé parameters in linear elasticity from boundary measurements, and we perform numerical experiments. We also consider the inverse problem of identification of an inclusion corresponding to a discontinuity of the conductivity. We use the method of the topological gradient to obtain a first estimate of the location of one or several inclusions and then the method of the classical gradient to identify these more precisely. Finally, in the context of shape optimization, we study the inverse problem of identification of an inclusion in linear elasticity. We calculate the shape gradient of a functional of Kohn-Vogelius type, the minmax of a Lagrangian with respect to the deformation parameter. We use this gradient to numerically find elliptic inclusions
APA, Harvard, Vancouver, ISO, and other styles
26

Roshani, Pedram. "The Effect of Temperature on the SWCC and Estimation of the SWCC from Moisture Profile under a Controlled Thermal Gradient." Thèse, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31072.

Full text
Abstract:
In many situations, the upper layers of soil above the ground water table are in an unsaturated condition. Although unsaturated soils are found throughout the world, they are predominant in arid or semi-arid regions. In these areas, the soil water characteristic curve (SWCC), which relates the water content to the matric suction, can be used as a key tool to incorporate the mechanics of unsaturated soils into the design of geotechnical structures such as dams, embankments, pavements, canals, and foundations. Several experimental techniques are available for determining the SWCC in a laboratory environment. However, these experimental techniques are expensive and time consuming, typically requiring days or weeks depending on the soil type, and demand intricate testing equipment. For these reasons, there has been growing interest in finding other means of estimating the SWCC and encouraging the adoption of unsaturated soil mechanics in geotechnical engineering practice. Several methods exist to indirectly estimate the SWCC from basic soil properties. These include statistical estimation of the water content at selected matric suction values, correlation of soil properties with the fitting parameters of an analytical equation that represents the SWCC, estimation of the SWCC using a physics-based conceptual model, and artificial intelligence methods such as neural networks or genetic programming. However, many studies have shown that factors such as temperature, soil structure, initial water content, void ratio, stress history, compaction method, etc. can also affect the SWCC. This means that a SWCC estimated under one set of conditions may not reliably predict the SWCC under other conditions. For this reason, it is crucial for engineers working with unsaturated soils to take into account all the factors that influence the SWCC. 
The two key objectives of the present thesis are the development of a method based on first principles, using capillary rise theory, to predict the variation of the SWCC as a function of temperature, and the development of a technique for predicting the fitting parameters of a well-known function representing the SWCC from basic soil properties together with the moisture profile of a soil column subjected to a known temperature gradient. A rational approach using capillary rise theory and the effect of temperature on surface tension and liquid density is developed to study the relation between temperature and the parameters of the Fredlund and Xing (1994) equation. Several tests, using a Tempe cell submerged in a controlled temperature bath, were performed to determine the SWCC of two coarse-grained soils at different temperatures. Good agreement is achieved between the SWCC predicted at different temperatures using the proposed model and the values measured in the Tempe cell tests. Within the scope of this thesis, a separate testing program was undertaken to indirectly estimate the SWCC of the same two coarse-grained soils from the measurement of their steady-state soil-moisture profile while subjected to a fixed temperature difference. The water potential equation in the liquid and vapor phases is used to analyse the steady-state flow conditions in the unsaturated soil. Good agreement is obtained between the SWCC estimated using this technique and the SWCC measured using a Tempe cell submerged in a controlled temperature bath. The results of this study indicate that knowledge of the moisture content of a soil specimen under a constant thermal gradient, together with basic soil properties, can be used to estimate the SWCC of the soil at the desired temperature.
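For concreteness, the Fredlund and Xing (1994) curve named in the abstract, together with one capillary-theory-style temperature scaling, might be sketched as below. The linear surface-tension fit and the choice of scaling the a-parameter by the surface-tension ratio are illustrative assumptions, not necessarily the exact model developed in the thesis, and the correction factor C(psi) is omitted.

```python
import math

def fredlund_xing(psi, a, n, m, theta_s):
    """Fredlund and Xing (1994) SWCC, correction factor C(psi) omitted:
    volumetric water content as a function of matric suction psi (kPa)."""
    return theta_s / math.log(math.e + (psi / a) ** n) ** m

def surface_tension(T):
    """Rough linear fit to the surface tension of water (N/m), T in deg C."""
    return 0.0756 - 1.5e-4 * T

def swcc_at_temperature(psi, T, T_ref, a, n, m, theta_s):
    """Shift the curve with temperature by scaling the suction-related
    a-parameter with the surface-tension ratio, as capillary rise theory
    suggests (illustrative scaling only)."""
    a_T = a * surface_tension(T) / surface_tension(T_ref)
    return fredlund_xing(psi, a_T, n, m, theta_s)
```

With this scaling, a warmer soil holds slightly less water at the same suction, which matches the qualitative direction reported for coarse-grained soils.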
APA, Harvard, Vancouver, ISO, and other styles
27

Hagos, Tesfamichael Marikos. "Estimation of phases for compliant motion : Auto-regressive HMM, multi-class logistic regression, Learning from Demonstration (LfD), Gradient descent optimization." Thesis, Luleå tekniska universitet, Rymdteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-65613.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Liu, Peng. "Joint Estimation and Calibration for Motion Sensor." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286839.

Full text
Abstract:
In this thesis, a calibration method for the position of each accelerometer in an inertial measurement unit (IMU) sensor array is designed and implemented. In order to model the motion of the sensor array in the real world, we build up a state space model. Based on the model we use, the problem is to estimate the parameters within the state space model. This problem is solved using the Maximum Likelihood (ML) framework, and two methods are implemented and analyzed. One is based on Expectation Maximization (EM) and the other optimizes the cost function directly using Gradient Descent (GD). In the EM algorithm, an ill-conditioned problem exists in the M step, which degrades the performance of the algorithm, especially when the initial error is small, and the final Mean Square Error (MSE) curve will diverge in this case. The EM algorithm with enough data samples works well when the initial error is large. In the Gradient Descent method, a reformulation of the problem avoids the ill-conditioned problem. After the parameter estimation part, we analyze the MSE curves of these parameters through Monte Carlo simulation. The final MSE curves show that the Gradient Descent based method is more robust in handling the numerical issues of the parameter estimation. The simulation results also show that the Gradient Descent method is robust to the noise level.<br>I denna rapport utvecklas och implementeras en kalibreringsmethod för att skatta positionen för en grupp av accelerometrar placerade i en så kallad IMU sensor array. För att beskriva rörelsen för hela sensorgruppen, härleds en dynamisk tillståndsmodell. Problemställningen är då att skatta parametrarna i tillståndsmodellen. Detta löses med hjälp av Maximum Likelihood-metoden (ML) där två stycken algoritmer implementeras och analyseras. En baseras på Expectation Maximization (EM) och i den andra optimeras kostnadsfunktionen direkt med gradientsökning. 
I EM-algoritmen uppstår ett illa konditionerat delproblem i M-steget, vilket försämrar algoritmens prestanda, speciellt när det initiala felet är litet. Den resulterande MSE-kurvan kommer att avvika i detta fall. Däremot fungerar EM-algoritmen väl när antalet datasampel är tillräckligt och det initiala felet är större. I gradientsökningsmetoden undviks konditioneringsproblemen med hjälp av en omformulering. Slutligen analyseras medelkvadratfelet (MSE) för parameterskattningarna med hjälp av Monte Carlo-simulering. De resulterande MSE-kurvorna visar att gradientsökningsmetoden är mer robust mot numeriska problem, speciellt när det initiala felet är litet. Simuleringarna visar även att gradientsökning är robust mot brus.
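A toy analogue of the Gradient Descent route — maximum-likelihood fitting by gradient ascent, with a reparameterisation (log-variance instead of variance) playing the role of the thesis's conditioning-friendly reformulation — might look like this. The scalar Gaussian model and all names are illustrative; the thesis's state-space model is far richer.

```python
import math
import random

def gaussian_ml_gd(data, steps=500, lr=0.1):
    """Maximum-likelihood fit of a scalar Gaussian by gradient ascent on
    the log-likelihood.  Optimising the log-variance instead of the
    variance keeps the problem unconstrained and better conditioned."""
    mu, log_var = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        var = math.exp(log_var)
        g_mu = sum(x - mu for x in data) / var                            # dL/dmu
        g_lv = sum((x - mu) ** 2 for x in data) / (2.0 * var) - n / 2.0   # dL/dlog(var)
        mu += lr * g_mu / n
        log_var += lr * g_lv / n
    return mu, math.exp(log_var)

random.seed(1)
data = [random.gauss(2.0, 0.5) for _ in range(2000)]
mu_hat, var_hat = gaussian_ml_gd(data)   # close to (2.0, 0.25)
```

The same pattern scales to the array-calibration cost: replace the Gaussian log-likelihood with the state-space likelihood and the two scalars with the parameter vector.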
APA, Harvard, Vancouver, ISO, and other styles
29

Leblond, Timothée. "Calcul de gradient sur des paramètres CAO pour l’optimisation de forme." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLC017/document.

Full text
Abstract:
Dans ce manuscrit, nous présentons une méthode d’optimisation de forme qui se base sur des paramètres géométriques comme des longueurs, des angles, etc. Nous nous appuyons sur des techniques d’optimisation basées sur un gradient. La sensibilité de la fonction objectif par rapport à la position des noeuds du maillage nous est fournie par un solveur adjoint que l’on considère comme une boîte noire. Afin d’optimiser par rapport aux paramètres CAO, nous nous concentrons sur l’évaluation de la sensibilité de la position des noeuds par rapport à ces paramètres. Ainsi, nous proposons deux approches par différences finies. La première méthode s’appuie sur une projection harmonique afin de comparer dans un même espace le maillage initial et celui obtenu suite à la variation d’un paramètre CAO. Les développements présentés dans ce manuscrit permettent d’étendre l’application aux formes ayant plusieurs frontières comme les collecteurs d’échappement. Nous avons développé une méthode d’interpolation adaptée à cette comparaison. L’ensemble du processus a été automatisé et nous en montrons l’entière efficacité sur des applications industrielles en aérodynamique interne. La deuxième méthode se base directement sur les géométries CAO pour évaluer cette sensibilité. Nous utilisons la définition intrinsèque des patches dans l’espace paramétrique (u;v) pour effectuer cette comparaison. Grâce à l’utilisation des coordonnées exactes en tout point de la surface fournies par la CAO, nous évitons d’avoir recours à une interpolation afin d’avoir la meilleure précision de calcul possible. Cependant, contrairement à la première méthode, elle requiert d’identifier les correspondances entre les patches d’une forme à l’autre. Une application sur un cas académique a été faite en aérodynamique externe. La pertinence de la première méthode a été démontrée sur des cas représentatifs et multiobjectifs, ce qui permettrait de faciliter son déploiement et son utilisation dans un cadre industriel. 
Quant à la deuxième méthode, nous avons montré son fort potentiel. Cependant, des développements supplémentaires seraient nécessaires pour une application plus poussée. Du fait qu’elles sont indépendantes des solveurs mécaniques et du nombre de paramètres, ces méthodes réduisent considérablement les temps de développement des produits, notamment en permettant l’optimisation multiphysique en grande dimension<br>In this manuscript, we present a shape optimization method based on CAD parameters such as lengths, angles, etc. We rely on gradient-based optimization techniques. The sensitivity of the objective function, with respect to the mesh nodes position, is provided by an adjoint solver considered here as a black box. To optimize with respect to CAD parameters, we focus on computing the sensitivity of the nodes positions with respect to these parameters. Thus, we propose two approaches based on finite differences. The first method uses a harmonic projection to compare in the same space the initial mesh and the one obtained after a change of the set of CAD parameters. The developments presented in this manuscript open up new doors like the application to shapes with multiple borders such as exhaust manifolds. We also developed an interpolation method suitable for this comparison. The entire process is automated, and we demonstrate the entire effectiveness on internal aerodynamics industrial applications. The second method is directly based on the CAD geometries to assess this sensitivity. To perform this comparison, we use the intrinsic definition of the patches in the parametric space (u;v). Through the use of the exact coordinates at any point on the surface provided by the CAD, we avoid using an interpolation to get the best calculation accuracy possible. However, unlike the first method, it requires to identify the correspondence between patches from one shape to another. An application on an external aerodynamics academic case was made. 
The relevance of the first method is demonstrated on representative multi-objective cases, which facilitates its deployment and use in an industrial environment. Regarding the second method, we showed its great potential. However, further developments are needed to handle more advanced cases. Because they are independent of the mechanical solver and the number of parameters, these methods significantly reduce product development time, particularly by allowing large-scale, multiphysics optimization.
APA, Harvard, Vancouver, ISO, and other styles
30

LASRI, ABDELLAH. "Estimation du gradient pour les équations aux dérivées partielles paraboliques non linéaires et les équations différentielles stochastiques rétrogrades par la méthode de Bernstein." Tours, 1995. http://www.theses.fr/1995TOUR4015.

Full text
Abstract:
This thesis is organised around two themes. The first concerns continuous viscosity solutions of nonlinear parabolic PDEs governed by a Hamiltonian H. The second concerns square-integrable solutions of backward stochastic differential equations (BSDEs) generated by a driver F. Our aim, in both cases, is to obtain estimates of the solution under certain intrinsic properties of the functions H and F. To this end, we used the so-called weak version of the Bernstein method introduced by G. Barles. The properties in question are called structure conditions. In the first part, we established a generalisation of this approach to the case of parabolic PDEs. Thus, under certain structure conditions on H, we obtained a Lipschitz regularity result in x for continuous viscosity solutions. Concerning the behaviour of such solutions with respect to time, we established a regularising-effects type result when H satisfies certain growth hypotheses. In the second part, we obtained a priori estimates for square-integrable solutions of linear BSDEs whose coefficients satisfy a certain structure condition. We extended this result to the parametrised (resp. Markovian) case to obtain a Lipschitz regularity result with respect to the parameter (resp. X) when F satisfies certain structure conditions. By similar techniques, we established a uniqueness result for square-integrable solutions of BSDEs.
APA, Harvard, Vancouver, ISO, and other styles
31

Younes, Laurent. "Problèmes d'estimation paramétrique pour des champs de Gibbs Markoviens : applications au traitement d'images." Paris 11, 1988. http://www.theses.fr/1988PA112269.

Full text
Abstract:
Nous nous intéressons à l’estimation paramétrique par maximum de vraisemblance pour des champs de Gibbs markoviens. Après une introduction composée d'une part d'une discussion heuristique de l'analyse statistique des images aboutissant à une modélisation par champs aléatoires, et d'autre part d'un rappel de différentes techniques d'estimation paramétrique existant dans la littérature, nous consacrons un chapitre au rappel de quelques résultats reliés aux champs de Gibbs, et à leur étude statistique: nous introduisons la notion de potentiel, ainsi que les définitions qui s’y rattachent, puis nous rappelons des conditions d'existence et d'unicité de champs de Gibbs associés à un potentiel. Nous présentons ensuite un algorithme de gradient stochastique permettant la maximisation de la vraisemblance. Il utilise l'échantillonneur de Gibbs qui est une méthode itérative de simulation de champs markoviens. Des propriétés relatives à l'ergodicité de cet échantillonneur sont alors données. En fin de chapitre, nous rappelons des résultats de Métivier et Priouret sur les algorithmes stochastiques du type de celui que nous utilisons, qui permettent de mesurer la (tendance à la) convergence de telles procédures. Le chapitre 4 est consacré à l'étude plus précise de la convergence de l’algorithme. A réseau fixé, tout d'abord nous étudions le cas de modèle exponentiel, et nous démontrons un résultat de convergence presque sûre de l'algorithme. Nous étudions ensuite les modèles plus généraux en nous intéressant en particulier aux problèmes d’estimation basée sur des observations imparfaites (ou bruitées). Dans le chapitre 5, nous étudions le comportement asymptotique de l'estimateur de maximum de vraisemblance proprement dit, prouvant un résultat de consistance et de normalité asymptotique. 
Enfin, nous donnons quelques remarques pratiques sur l'algorithme d'estimation, suivies de résultats expérimentaux, composés d'une part de simulations et d'autre part de traitements appliqués à de vraies images<br>We study parameter estimation by maximum likelihood for Gibbs Markov random fields. We begin with a heuristic discussion of the statistical analysis of pictures, leading to a model based on random fields, and a summary of various existing parameter estimation techniques. Then, we recall some results related to Gibbs fields and to their statistical analysis: we introduce the notion of potential, and recall existence and uniqueness conditions for an associated Gibbs field. In the next chapter, we present a stochastic gradient algorithm in order to maximize the likelihood. It uses the Gibbs sampler, which is an iterative method for Markov field simulation. We give properties related to the ergodicity of this sampler. Finally, we recall some results of Métivier and Priouret about stochastic gradient algorithms, such as the one we use, that allow measurement of the (tendency towards) convergence of this kind of procedure. In chapter 4, we make a precise study of the convergence of the algorithm. First, with a fixed lattice, we deal with the case of exponential models and show almost sure convergence of the algorithm. We then study more general models, especially problems related to imperfect (or noisy) observations. In chapter 5, we study the asymptotic behaviour of the maximum likelihood estimator itself, and prove consistency and asymptotic normality. Finally, we give some practical remarks on the estimation algorithm, followed by some experiments.
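The stochastic gradient scheme described here — update the parameter with the observed sufficient statistic minus a Gibbs-sampler estimate of its expectation — can be sketched for a toy Ising field. The small lattice, free boundaries, constant gain, and function names are assumptions for the illustration, not the thesis's exact algorithm.

```python
import itertools
import math
import random

def neighbour_sum(x, i, j, L):
    """Sum of the 4-neighbourhood spins, free boundary conditions."""
    s = 0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < L and 0 <= nj < L:
            s += x[ni][nj]
    return s

def gibbs_sweep(x, beta, L, rng):
    """One raster-scan sweep of the Gibbs sampler for an Ising field:
    each site is resampled from its conditional distribution in turn."""
    for i, j in itertools.product(range(L), range(L)):
        h = beta * neighbour_sum(x, i, j, L)
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * h))
        x[i][j] = 1 if rng.random() < p_plus else -1
    return x

def pair_stat(x, L):
    """Sufficient statistic of the exponential model: sum of spin
    products over nearest-neighbour pairs."""
    t = 0
    for i, j in itertools.product(range(L), range(L)):
        if i + 1 < L:
            t += x[i][j] * x[i + 1][j]
        if j + 1 < L:
            t += x[i][j] * x[i][j + 1]
    return t

def fit_beta(x_obs, L, steps=200, gamma=0.002, seed=0):
    """Stochastic gradient ascent on the log-likelihood: for an exponential
    model the gradient is the observed statistic minus its expectation
    under the current parameter, here estimated with one sweep per step."""
    rng = random.Random(seed)
    t_obs = pair_stat(x_obs, L)
    beta, x = 0.0, [row[:] for row in x_obs]
    for _ in range(steps):
        x = gibbs_sweep(x, beta, L, rng)
        beta += gamma * (t_obs - pair_stat(x, L))
    return beta
```

One sweep per parameter update is exactly what makes the procedure practical: the expectation is never computed exactly, only tracked by the sampler.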
APA, Harvard, Vancouver, ISO, and other styles
32

Ögren, Petter. "Formations and Obstacle Avoidance in Mobile Robot Control." Doctoral thesis, KTH, Mathematics, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3555.

Full text
Abstract:
This thesis consists of four independent papers concerning the control of mobile robots in the context of obstacle avoidance and formation keeping. The first paper describes a new theoretically verifiable approach to obstacle avoidance. It merges the ideas of two previous methods, with complementary properties, by using a combined control Lyapunov function (CLF) and model predictive control (MPC) framework. The second paper investigates the problem of moving a fixed formation of vehicles through a partially known environment with obstacles. Using an input-to-state stability (ISS) formulation, the concept of configuration space obstacles is generalized to leader-follower formations. This generalization then makes it possible to convert the problem into a standard single-vehicle obstacle avoidance problem, such as the one considered in the first paper. The properties of goal convergence and safety thus carry over to the formation obstacle avoidance case. In the third paper, coordination along trajectories of a nonhomogeneous set of vehicles is considered. By using a control Lyapunov function approach, properties such as bounded formation error and finite completion time are shown. Finally, the fourth paper applies a generalized version of the control in the third paper to translate, rotate and expand a formation. It is furthermore shown how a partial decoupling of formation keeping and formation mission can be achieved. The approach is then applied to a scenario of underwater vehicles climbing gradients in search for specific thermal/biological regions of interest. The sensor data fusion problem for different formation configurations is investigated and an optimal formation geometry is proposed. Keywords: Mobile Robots, Robot Control, Obstacle Avoidance, Multirobot System, Formation Control, Navigation Function, Lyapunov Function, Model Predictive Control, Receding Horizon Control, Gradient Climbing, Gradient Estimation.
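One standard way to realise the gradient estimation mentioned in the keywords — fusing scalar-field samples taken at the formation members' positions — is a least-squares plane fit. The sketch below is illustrative and not necessarily the fusion scheme of the thesis; all names are assumptions.

```python
import numpy as np

def estimate_gradient(positions, readings):
    """Least-squares estimate of a scalar field's gradient from point
    samples taken by the formation members: fit f(p) ~ c + g . p and
    return g."""
    pts = np.asarray(positions, dtype=float)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    coef, *_ = np.linalg.lstsq(P, np.asarray(readings, dtype=float), rcond=None)
    return coef[1:]

# Field f(x, y) = 3x - 2y + 1 sampled by a four-vehicle formation.
pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vals = [3.0 * x - 2.0 * y + 1.0 for x, y in pos]
g = estimate_gradient(pos, vals)   # ~ [3.0, -2.0]
```

The conditioning of this fit depends on the formation geometry, which is one motivation for optimising the geometry as the fourth paper does.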
APA, Harvard, Vancouver, ISO, and other styles
33

Kirchner, William. "Anthropomimetic Control Synthesis: Adaptive Vehicle Traction Control." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/26620.

Full text
Abstract:
Human expert drivers have the unique ability to build complex perceptive models using correlated sensory inputs and outputs. In the case of longitudinal vehicle traction, this work will show a direct correlation between longitudinal acceleration and throttle input in a controlled laboratory environment. In fact, human experts have the ability to control a vehicle at or near the performance limits, with respect to vehicle traction, without direct knowledge of the vehicle states: speed, slip or tractive force. Traditional algorithms such as PID, full state feedback, and even sliding mode control have been very successful at handling low-level tasks where the physics of the dynamic system are known and stationary. The ability to learn and adapt to changing environmental conditions, as well as to develop perceptive models based on stimulus-response data, provides expert human drivers with significant advantages. When it comes to bandwidth, accuracy, and repeatability, automatic control systems have clear advantages over humans; however, most high performance control systems lack many of the unique abilities of a human expert. The underlying motivation for this work is that there are advantages to framing the traction control problem in a manner that more closely resembles how a human expert drives a vehicle. The fundamental idea is the belief that humans have a unique ability to adapt to uncertain environments that are both temporally and spatially varying. In this work, a novel approach to traction control is developed using an anthropomimetic control synthesis strategy. The proposed anthropomimetic traction control algorithm operates on the same correlated input signals that a human expert driver would in order to maximize traction. A gradient ascent approach is at the heart of the proposed anthropomimetic control algorithm, and a real-time implementation is described using linear operator techniques, even though the tire-ground interface is highly non-linear.
Performance of the proposed anthropomimetic traction control algorithm is demonstrated using both a longitudinal traction case study and a combined-mode traction case study, in which longitudinal and lateral accelerations are maximized simultaneously. The approach presented in this research should be considered a first step in the development of a truly anthropomimetic solution, where an advanced control algorithm has been designed to be responsive to the same limited input signals that a human expert would rely on, with the objective of maximizing traction. This work establishes the foundation for a general framework for an anthropomimetic control algorithm that is capable of learning and adapting to an uncertain, time-varying environment. The algorithms developed in this work are well suited for efficient real-time control of ground vehicles in a variety of applications, from driver-assist technology to fully autonomous applications.
Ph. D.
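The gradient-ascent idea can be sketched as a finite-difference climb of a toy traction curve. Everything below is an invented stand-in for the tire-ground interface (the curve, its peak at 15% slip, the step size), not the dissertation's model:

```python
def traction(slip):
    """Toy unimodal traction curve mu(s) = s / (a**2 + s**2),
    a hypothetical tire-ground relation peaking at s = a = 0.15."""
    a = 0.15
    return slip / (a ** 2 + slip ** 2)

def gradient_ascent(f, x0, step=0.002, h=1e-5, n_iter=2000):
    """Climb f using only sampled input/output correlations, the way an
    expert driver probes for peak grip; the gradient is estimated by a
    central difference rather than from a model."""
    x = x0
    for _ in range(n_iter):
        g = (f(x + h) - f(x - h)) / (2 * h)  # estimated gradient
        x += step * g
    return x

slip_opt = gradient_ascent(traction, 0.05)  # converges toward the peak
```

The controller never needs the vehicle states themselves, only the measured response to its own probing, which mirrors the anthropomimetic framing above.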
APA, Harvard, Vancouver, ISO, and other styles
34

Nassif, Roula. "Estimation distribuée adaptative sur les réseaux multitâches." Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4118/document.

Full text
Abstract:
Distributed adaptive learning allows a collection of interconnected agents to perform parameter estimation tasks from streaming data by relying solely on local computations and interactions with immediate neighbors. Most prior literature on distributed inference is concerned with single-task problems, where agents with separable objective functions need to agree on a common parameter vector. However, many network applications require more complex models and flexible algorithms than single-task implementations, since their agents need to estimate and track multiple objectives simultaneously. Networks of this kind, where agents need to infer multiple parameter vectors, are referred to as multitask networks. Although agents may generally have distinct though related tasks to perform, they may still be able to capitalize on inductive transfer between them to improve their estimation accuracy. This thesis is intended to bring forth advances on distributed inference over multitask networks. First, we present the well-known diffusion LMS strategies for solving single-task estimation problems and we assess their performance when they are run in multitask environments in the presence of noisy communication links. An improved strategy allowing the agents to adapt their cooperation to neighbors sharing the same objective is presented in order to attain improved learning and estimation over networks. Next, we consider the multitask diffusion LMS strategy which has been proposed to solve multitask estimation problems where the network is decomposed into clusters of agents seeking different
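As background for the diffusion LMS strategy discussed in this abstract, here is a minimal adapt-then-combine (ATC) sketch on a toy single-task network. The data, step size and combination weights are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def atc_diffusion_lms(X, d, A, mu=0.05, n_iter=200):
    """Adapt-then-combine diffusion LMS sketch.
    X[k]: regressor stream of agent k, shape (T, M)
    d[k]: desired-response stream of agent k, shape (T,)
    A   : left-stochastic combination matrix; A[l, k] is the weight
          agent k gives to neighbor l (columns sum to 1)."""
    N, M = len(X), X[0].shape[1]
    w = np.zeros((N, M))
    T = min(x.shape[0] for x in X)
    for i in range(min(T, n_iter)):
        # Adapt: each agent takes a local LMS step on its own data.
        psi = np.empty_like(w)
        for k in range(N):
            err = d[k][i] - X[k][i] @ w[k]
            psi[k] = w[k] + mu * err * X[k][i]
        # Combine: convex combination of neighbors' intermediate estimates.
        for k in range(N):
            w[k] = sum(A[l, k] * psi[l] for l in range(N))
    return w

# Toy single-task network: 3 fully connected agents, common target w*.
rng = np.random.default_rng(0)
w_star = np.array([1.0, -0.5])
X = [rng.standard_normal((400, 2)) for _ in range(3)]
d = [x @ w_star + 0.01 * rng.standard_normal(400) for x in X]
A = np.full((3, 3), 1.0 / 3.0)  # uniform combination weights
w = atc_diffusion_lms(X, d, A, mu=0.1, n_iter=400)
```

In the single-task setting all agents converge toward the common vector; the multitask variants studied in the thesis restrict the combine step to agents sharing a task.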
APA, Harvard, Vancouver, ISO, and other styles
35

Authesserre, Jean-baptiste. "Alignement paramétrique d’images : proposition d’un formalisme unifié et prise en compte du bruit pour le suivi d’objets". Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14136/document.

Full text
Abstract:
Parametric image alignment is a fundamental task in many vision applications such as object tracking, image mosaicking, video compression and augmented reality. To recover the motion parameters, direct image alignment works by optimizing a pixel-based difference measure between a moving image and a fixed image called the template. In the last decade, many efficient algorithms have been proposed for parametric object tracking. However, those approaches have not been evaluated for aligning images with a low SNR (signal-to-noise ratio), such as images captured in low-light conditions. In this thesis, we propose a new formulation of image alignment, called the bidirectional framework, that unifies existing state-of-the-art algorithms. First, this framework allows us to produce new insights on existing approaches, and in particular on the ESM (Efficient Second-order Minimization) algorithm. Subsequently, we provide a theoretical analysis of the effect of image noise on the alignment process. This yields the definition of two new approaches: the ACL (Asymmetric Composition on Lie Groups) algorithm and the BCL (Bidirectional Composition on Lie Groups) algorithm, which outperform existing approaches in the presence of images of different SNR. Finally, experiments on synthetic and real images captured under low-light conditions allow the new and existing approaches to be evaluated under various noise conditions.
APA, Harvard, Vancouver, ISO, and other styles
36

Halimi, Abdelghafour. "Modélisation et traitement statistique d'images de microscopie confocale : application en dermatologie." Phd thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/19515/1/HALIMI_Abdleghafour.pdf.

Full text
Abstract:
In this thesis, we develop statistical models and methods for processing confocal microscopy images of the skin, with the aim of detecting a skin condition called lentigo. A first contribution is a parametric statistical model for representing texture in the wavelet domain. Specifically, it is a generalized Gaussian distribution whose scale parameter is shown to be characteristic of the underlying tissues. Modeling the data in the image domain is another topic addressed in this thesis. To this end, a generalized gamma distribution is proposed. Our second contribution is then an efficient estimator of the parameters of this distribution based on a natural gradient descent. Finally, a multiplicative noise observation model is established to explain the generalized gamma distribution of the data. Parametric Bayesian inference methods are then developed with this model to enable the classification of healthy images and images showing lentigo. The proposed algorithms are applied to real images obtained from a clinical dermatology study.
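For the wavelet-domain model mentioned in the abstract, the scale of a generalized Gaussian with known shape admits a closed-form maximum-likelihood estimate. The sketch below uses synthetic data and a fixed shape; it is background for the texture model, not the thesis's natural-gradient estimator for the generalized gamma distribution:

```python
import numpy as np

def ggd_scale_mle(x, beta):
    """Closed-form MLE of the scale alpha of a generalized Gaussian
    density f(x) proportional to exp(-(|x| / alpha)**beta), with the
    shape beta held fixed: alpha**beta = beta * mean(|x|**beta)."""
    x = np.asarray(x, dtype=float)
    return (beta * np.mean(np.abs(x) ** beta)) ** (1.0 / beta)

# Synthetic check: if G ~ Gamma(1/beta, 1), then alpha * G**(1/beta)
# with a random sign is generalized-Gaussian distributed.
rng = np.random.default_rng(3)
alpha_true, beta = 2.0, 1.5
g = rng.gamma(1.0 / beta, 1.0, size=50_000)
x = alpha_true * g ** (1.0 / beta) * rng.choice([-1.0, 1.0], size=50_000)
alpha_hat = ggd_scale_mle(x, beta)
```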
APA, Harvard, Vancouver, ISO, and other styles
37

Portier, François. "Réduction de la dimension en régression." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00871049.

Full text
Abstract:
In this thesis, we study the problem of dimension reduction in the regression model Y = g(B X, e), where X is a p-dimensional vector, Y is real-valued, the function g is unknown, and the noise e is independent of X. We are interested in estimating the d×p matrix B, with d smaller than p (knowledge of which yields good convergence rates for the estimation of g). This problem is addressed using two distinct approaches. The first, called inverse regression, requires the linearity condition on X. The second, called semiparametric, does not require such a condition but only that X has a smooth density. Within the inverse regression framework, we study two families of methods based respectively on E[X f(Y)] and E[X X^T f(Y)]. For each of these families, we derive the conditions on f allowing an exhaustive estimation of B, and we compute the optimal function f by minimizing the asymptotic variance. Within the semiparametric approach, we propose a method for estimating the gradient of the regression function. Under classical semiparametric assumptions, we show the asymptotic normality of our estimator and the exhaustiveness of the estimation of B. Whichever approach is considered, a fundamental question arises: how should the dimension of B be chosen? To this end, we propose a method for estimating the rank of a matrix by a bootstrap hypothesis test.
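The family based on E[X f(Y)] includes sliced inverse regression (SIR) as its best-known instance, where f is a slice indicator. A hedged sketch on a toy single-index model (the slicing choice and all data are illustrative assumptions):

```python
import numpy as np

def sir_directions(X, Y, n_slices=10, d=1):
    """Sliced-inverse-regression sketch for Y = g(B X, e): under the
    linearity condition on X, the sliced means of the whitened X span
    (part of) the row space of B. Returns d unit direction vectors."""
    n, p = X.shape
    mu = X.mean(axis=0)
    L = np.linalg.cholesky(np.linalg.inv(np.cov(X, rowvar=False)))
    Z = (X - mu) @ L                      # whitened predictors
    slices = np.array_split(np.argsort(Y), n_slices)
    M = np.zeros((p, p))
    for idx in slices:                    # weighted covariance of slice means
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    _, vecs = np.linalg.eigh(M)           # eigenvalues in ascending order
    B_hat = L @ vecs[:, ::-1][:, :d]      # top-d, mapped back to X scale
    return B_hat / np.linalg.norm(B_hat, axis=0)

# Toy single-index model: Y depends on X only through b'X.
rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 4))
b = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2.0)
Y = (X @ b) ** 3 + 0.1 * rng.standard_normal(5000)
b_hat = sir_directions(X, Y, n_slices=20, d=1)[:, 0]
```

The recovered direction agrees with b up to sign, as expected since B is only identified up to its row space.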
APA, Harvard, Vancouver, ISO, and other styles
38

Harrane, Ibrahim El Khalil. "Estimation distribuée respectueuse de la consommation d’énergie et de la confidentialité sur les réseaux adaptatifs." Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4041.

Full text
Abstract:
Distributed estimation over adaptive networks takes advantage of the interconnections between agents to perform parameter estimation from streaming data. Compared to their centralized counterparts, distributed strategies are resilient to link and agent failures, and are scalable. However, such advantages do not come without a cost. Distributed strategies require reliable communication between neighbouring agents, which is a substantial burden, especially for agents with a limited energy budget. In addition to this high communication load, as for any distributed algorithm, there may be some privacy concerns, particularly for applications involving sensitive data. The aim of this dissertation is to address these two challenges. To reduce the communication load and consequently the energy consumption, we propose two strategies. The first one involves compression, while the second one aims at limiting the communication cost by sparsifying the network. For the first approach, we propose a compressed version of the diffusion LMS where only some random entries of the shared vectors are transmitted. We theoretically analyse the algorithm behaviour in the mean and mean-square sense. We also perform numerical simulations that confirm the accuracy of the theoretical model. As energy consumption is the main focus, we carry out simulations with a realistic scenario where agents turn on and off to save energy. The proposed algorithm outperforms its state-of-the-art counterparts. The second approach takes advantage of the multitask setting to reduce the communication cost. In a multitask setting, it is beneficial to only communicate with agents estimating similar quantities. To do so, we consider a network with two types of agents: cluster agents estimating the network structure, and regular agents tasked with estimating their respective objective vectors. We theoretically analyse the algorithm behaviour under two scenarios: one where all agents are properly clustered, and a second one where some agents are assigned to wrong clusters. We perform an extensive numerical analysis to confirm the fitness of the theoretical models and to study the effect of the algorithm parameters on its convergence. To address the privacy concerns, we take inspiration from differentially private algorithms to propose a privacy-aware version of diffusion LMS. As diffusion strategies rely heavily on communication between agents, the data are in constant jeopardy. To avoid such risk and benefit from the information exchange, we propose to use Wishart matrices to corrupt the transmitted data. Doing so, we prevent data reconstruction by adversary neighbours as well as external threats. We theoretically and numerically analyse the algorithm behaviour. We also study the effect of the rank of the Wishart matrices on the convergence speed and privacy preservation.
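The random-entry compression idea can be illustrated in a few lines. This is a hedged sketch only: the index/value transmission format and filling untransmitted entries with the receiver's own estimate are assumptions about one plausible convention, not the thesis's exact scheme:

```python
import numpy as np

def compress(w, n_keep, rng):
    """Random-entry compression sketch: transmit only n_keep randomly
    chosen entries of the shared vector, as index/value pairs."""
    idx = rng.choice(w.size, size=n_keep, replace=False)
    return idx, w[idx]

def reconstruct(idx, values, w_local):
    """Hypothetical receiver convention: fill the untransmitted entries
    with the receiver's own current estimate."""
    w_hat = w_local.copy()
    w_hat[idx] = values
    return w_hat

rng = np.random.default_rng(2)
w_neighbor = np.array([1.0, -0.5, 0.3, 0.8])  # neighbor's estimate
w_local = np.zeros(4)                          # receiver's estimate
idx, vals = compress(w_neighbor, 2, rng)       # half the entries sent
w_hat = reconstruct(idx, vals, w_local)
```

Halving the transmitted entries halves the per-iteration payload, which is the energy saving the abstract targets.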
APA, Harvard, Vancouver, ISO, and other styles
39

Bonifacio, Henry F. "Estimating particulate emission rates from large beef cattle feedlots." Diss., Kansas State University, 2013. http://hdl.handle.net/2097/15530.

Full text
Abstract:
Doctor of Philosophy
Department of Biological and Agricultural Engineering
Ronaldo G. Maghirang
Emission of particulate matter (PM) and various gases from open-lot beef cattle feedlots is becoming a concern because of the adverse effects on human health and the environment; however, scientific information on feedlot emissions is limited. This research was conducted to estimate emission rates of PM10 from large cattle feedlots. Specific objectives were to: (1) determine feedlot PM10 emission rates by reverse dispersion modeling using AERMOD; (2) compare AERMOD and WindTrax in terms of their predicted concentrations and back-calculated PM10 emission rates; (3) examine the sensitivity of both AERMOD and WindTrax to changes in meteorological parameters, source location, and receptor location; (4) determine feedlot PM10 emission rates using the flux-gradient technique; and (5) compare AERMOD and computational fluid dynamics (CFD) in simulating particulate dispersion from an area source. PM10 emission rates from two cattle feedlots in Kansas were determined by reverse dispersion modeling with AERMOD using PM10 concentration and meteorological measurements over a 2-yr period. PM10 emission rates for these feedlots varied seasonally, with overall medians of 1.60 and 1.10 g/m²-day. Warm and prolonged dry periods had significantly higher PM emissions compared to cold periods. Results also showed that the PM10 emissions had a diurnal trend; the highest PM10 emission rates were observed during the afternoon and early evening periods. Using particulate concentration and meteorological measurements from a third cattle feedlot, PM10 emission rates were back-calculated with AERMOD and WindTrax. Higher PM10 emission rates were calculated by AERMOD, but the resulting PM10 emission rates were highly linear (R² > 0.88). As such, development of conversion factors between these two models is feasible. AERMOD and WindTrax were also compared based on their sensitivity to changes in meteorological parameters and source locations. In general, AERMOD calculated lower concentrations than WindTrax; however, the two models responded similarly to changes in wind speed, surface roughness, atmospheric stability, and source and receptor locations. The flux-gradient technique was also used to estimate PM10 emission rates at the third cattle feedlot. Analyses of PM10 emission rates and meteorological parameters indicated that PM10 emissions at the feedlot were influenced by friction velocity, sensible heat flux, temperature, and surface roughness. Based on pen surface water content measurements, a water content of at least 20% (wet basis) significantly lowered PM10 emissions at the feedlot. The dispersion of particulate matter from a simulated feedlot pen was predicted using the κ-ε CFD turbulence model and AERMOD. Compared to CFD, AERMOD responded differently to the wind speed setting, and was not able to provide detailed vertical concentration profiles, such that the vertical concentration gradients in the first few meters from the ground were negligible. This demonstrates some limitations of AERMOD in simulating dispersion from area sources such as cattle feedlots and suggests the need to further evaluate its performance for area source modeling.
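The flux-gradient technique infers a surface flux from concentrations measured at two heights. A minimal sketch assuming neutral atmospheric stability (the constant-flux-layer relation and all numbers below are illustrative, not the feedlot study's parameterization, which also accounts for stability corrections):

```python
import math

def flux_gradient_emission(c_low, c_high, z_low, z_high, u_star, k=0.4):
    """Aerodynamic flux-gradient sketch under neutral stability:
    upward flux F = k * u_star * (c_low - c_high) / ln(z_high / z_low),
    with k the von Karman constant and u_star the friction velocity
    (m/s). Concentrations in ug/m^3 give F in ug/m^2/s."""
    return k * u_star * (c_low - c_high) / math.log(z_high / z_low)

# Illustrative numbers: PM10 of 120 ug/m^3 at 2 m and 80 ug/m^3 at 8 m,
# friction velocity 0.3 m/s -> an upward (positive) emission flux.
F = flux_gradient_emission(120.0, 80.0, 2.0, 8.0, 0.3)
```

A positive F (concentration decreasing with height) indicates the surface is a source, which is why friction velocity appears among the drivers of the measured emissions.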
APA, Harvard, Vancouver, ISO, and other styles
40

Ramadoss, Balaji. "Vector Flow Model in Video Estimation and Effects of Network Congestion in Low Bit-Rate Compression Standards." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Gaspar, Jonathan. "Fluxmétrie et caractérisation thermiques instationnaires des dépôts des composants face au plasma du Tokamak JET par techniques inverses." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4739/document.

Full text
Abstract:
This work deals with the successive resolution of two inverse heat transfer problems: the estimation of the surface heat flux on a material and then of the equivalent thermal conductivity of a surface layer on that material. The direct formulation is bidimensional, orthotropic (real geometry of a composite material), unsteady, non-linear and solved by finite elements. The studied materials are plasma-facing components (carbon-carbon composite tiles) from the Tokamak JET. The searched heat flux density varies with time and one dimension in space. The surface layer's conductivity varies spatially and can vary with time during the experiment (the other thermophysical properties are temperature dependent). The two inverse problems are solved by the conjugate gradient method with the adjoint-state method for the exact gradient calculation. The experimental data used for the first inverse problem resolution (surface heat flux estimation) is the thermogram provided by an embedded thermocouple. The second inverse problem uses the space and time variations of the surface temperature of the unknown surface layer (infrared thermography) for the conductivity identification. The confidence calculations associated with the estimated values are done by the Monte Carlo approach. The methods developed during this thesis help in understanding the dynamics of the plasma-wall interaction, as well as the kinetics of the surface carbon layer formation on the plasma-facing components, and will be helpful for the design of the components of future machines (WEST, ITER).
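The conjugate gradient method at the core of both inverse problems can be sketched on a generic quadratic functional J(x) = 0.5 x'Ax - b'x; the matrix and right-hand side below are toy stand-ins, not the discretized heat-transfer operator (whose gradient the thesis obtains via the adjoint state):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    """Plain conjugate-gradient sketch for minimizing
    J(x) = 0.5 x'Ax - b'x with A symmetric positive definite."""
    x = x0.astype(float).copy()
    r = b - A @ x          # residual = negative gradient of J
    p = r.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)     # exact line search along p
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p           # new A-conjugate direction
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, np.zeros(2))
```

For an n-dimensional SPD system, exact arithmetic terminates in at most n iterations; in the inverse-problem setting the iteration count also acts as a regularization parameter.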
APA, Harvard, Vancouver, ISO, and other styles
42

Santos, Jailson França dos. "Análise dos erros na estimação de gradientes em malhas de Voronoi." Universidade do Estado do Rio de Janeiro, 2013. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=4876.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This work presents a theoretical and numerical study of the errors that occur in the calculation of gradients on unstructured meshes of the Voronoi type; such meshes are also formed by Delaunay triangulation. The meshes adopted in this work were cartesian and triangular meshes, the latter formed by dividing a square into two or four equal triangles. For this analysis, we adopt three different methodologies for the calculation of gradients: the Green-Gauss method, the weighted least-squares method and the mean value of the projected gradients method. The text is based on two main approaches: to show that the error equations given by the gradients may be similar, but with opposite signs, for calculation points in neighboring volumes, and to show that the order of the error of the analytical equations can be improved on uniform meshes when compared with non-uniform ones in the one-dimensional case, and when analyzed at the face of such neighboring volumes in the two-dimensional case.
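The Green-Gauss method evaluates a cell gradient from face values of the field. A minimal sketch on a unit square cell (the geometry and field below are illustrative); the estimate is exact for a linear field:

```python
import numpy as np

def green_gauss_gradient(phi_faces, areas, normals, volume):
    """Green-Gauss cell-gradient sketch (2D here):
    grad(phi) ~ (1/V) * sum_f phi_f * A_f * n_f, with phi_f the face
    value, A_f the face area (length in 2D) and n_f the outward unit
    normal of face f."""
    grad = np.zeros(2)
    for phi_f, A_f, n_f in zip(phi_faces, areas, normals):
        grad += phi_f * A_f * np.asarray(n_f, dtype=float)
    return grad / volume

# Unit square cell centered at the origin, linear field
# phi(x, y) = 3x + 2y evaluated at the four face centroids.
normals = [(1, 0), (-1, 0), (0, 1), (0, -1)]
centroids = [(0.5, 0.0), (-0.5, 0.0), (0.0, 0.5), (0.0, -0.5)]
phi_faces = [3 * x + 2 * y for x, y in centroids]
grad = green_gauss_gradient(phi_faces, [1.0] * 4, normals, 1.0)
```

On non-uniform meshes the face value phi_f must be interpolated between the two adjacent cells, and that interpolation is where the error terms analyzed in the thesis arise.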
APA, Harvard, Vancouver, ISO, and other styles
43

Nikolaou, Nikolaos. "Cost-sensitive boosting : a unified approach." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/costsensitive-boosting-a-unified-approach(ae9bb7bd-743e-40b8-b50f-eb59461d9d36).html.

Full text
Abstract:
In this thesis we provide a unifying framework for two decades of work in an area of Machine Learning known as cost-sensitive Boosting algorithms. This area is concerned with the fact that most real-world prediction problems are asymmetric, in the sense that different types of errors incur different costs. Adaptive Boosting (AdaBoost) is one of the most well-studied and utilised algorithms in the field of Machine Learning, with a rich theoretical depth as well as practical uptake across numerous industries. However, its inability to handle asymmetric tasks has been the subject of much criticism. As a result, numerous cost-sensitive modifications of the original algorithm have been proposed. Each of these has its own motivations, and its own claims to superiority. With a thorough analysis of the literature from 1997 to 2016, we find 15 distinct cost-sensitive Boosting variants - discounting minor variations. We critique the literature using four powerful theoretical frameworks: Bayesian decision theory, the functional gradient descent view, margin theory, and probabilistic modelling. From each framework, we derive a set of properties which must be obeyed by boosting algorithms. We find that only 3 of the published AdaBoost variants are consistent with the rules of all the frameworks - and even they require their outputs to be calibrated to achieve this. Experiments on 18 datasets, across 21 degrees of cost asymmetry, all support the hypothesis - showing that once calibrated, the three variants perform equivalently, outperforming all others. Our final recommendation - based on theoretical soundness, simplicity, flexibility and performance - is to use the original AdaBoost algorithm, albeit with a shifted decision threshold and calibrated probability estimates. The conclusion is that novel cost-sensitive boosting algorithms are unnecessary if proper calibration is applied to the original.
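The recommended shifted decision threshold follows directly from Bayesian decision theory: with a calibrated probability estimate, predict the positive class when p >= c_FP / (c_FP + c_FN). A minimal sketch (the cost values are illustrative):

```python
def shifted_threshold(p, cost_fp, cost_fn):
    """Bayes-optimal decision from a calibrated estimate p = P(y=1|x):
    predict positive when the expected cost of predicting negative,
    p * cost_fn, exceeds that of predicting positive, (1-p) * cost_fp,
    i.e. when p >= cost_fp / (cost_fp + cost_fn)."""
    return int(p >= cost_fp / (cost_fp + cost_fn))

y_sym = shifted_threshold(0.6, 1.0, 1.0)        # symmetric costs: threshold 0.5
y_fn_costly = shifted_threshold(0.2, 1.0, 9.0)  # costly misses: threshold 0.1
y_fp_costly = shifted_threshold(0.2, 9.0, 1.0)  # costly false alarms: threshold 0.9
```

The cost asymmetry moves only the threshold, not the learned model, which is why calibration of the probability estimates is the crucial step.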
APA, Harvard, Vancouver, ISO, and other styles
44

Roland, Christophe Brezinski Claude. "Méthodes d'accélération de convergence en analyse numérique et en statistique." Villeneuve d'Ascq : Université des sciences et technologies de Lille, 2007. https://iris.univ-lille1.fr/dspace/handle/1908/633.

Full text
Abstract:
Reproduction of a doctoral thesis in Applied Mathematics, Lille 1, 2005. Order number (Lille 1): 3627. One article in English is integrated into the text. Title taken from the title page of the digitised document. Bibliography p. [125]-132.
APA, Harvard, Vancouver, ISO, and other styles
45

Gastaud, Muriel. "Modèles de contours actifs pour la segmentation d'images et de vidéos." Phd thesis, Université de Nice Sophia-Antipolis, 2005. http://tel.archives-ouvertes.fr/tel-00089384.

Full text
Abstract:
Segmenting an image into objects consists in extracting regions of interest from the image according to a defined criterion. We segment the image with an active-contour algorithm within a variational framework. Starting from an arbitrary initial contour, the active contour evolves according to a partial differential equation. The evolution equation of the active contour is derived by differentiating the criterion. Since the criterion depends on the region under consideration, differentiating it with respect to the region is not straightforward. We use differentiation tools borrowed from domain optimisation: shape gradients. The contribution of this thesis lies in the design and study of several region descriptors. For each criterion, we compute its derivative using shape gradients and deduce the evolution equation of the active contour. The first descriptor defines a geometric prior without parametric constraints: it minimises the distance between the active contour and a reference contour. We applied it to curve deformation, segmentation, and target tracking. The second descriptor characterises the motion of the object through a motion model. The associated criterion jointly defines a region and its motion over several consecutive frames. We applied this criterion to joint motion estimation and segmentation, and to the tracking of moving objects.
APA, Harvard, Vancouver, ISO, and other styles
46

Flammarion, Nicolas. "Stochastic approximation and least-squares regression, with applications to machine learning." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE056/document.

Full text
Abstract:
Many problems in machine learning are naturally cast as the minimization of a smooth function defined on a Euclidean space. For supervised learning, this includes least-squares regression and logistic regression. While small problems are efficiently solved by classical optimization algorithms, large-scale problems are typically solved with first-order techniques based on gradient descent. In this manuscript, we consider the particular case of the quadratic loss. In the first part, we are interested in its minimization when its gradients are only accessible through a stochastic oracle. In the second part, we consider two applications of the quadratic loss in machine learning: clustering and estimation with shape constraints. The first main contribution provides a unified framework for optimizing non-strongly convex quadratic functions, which encompasses accelerated gradient descent and averaged gradient descent. This new framework suggests an alternative algorithm that exhibits the positive behavior of both averaging and acceleration. The second main contribution aims at obtaining the optimal prediction error rates for least-squares regression, both in terms of dependence on the noise of the problem and of forgetting the initial conditions. Our new algorithm rests upon averaged accelerated gradient descent. The third main contribution deals with the minimization of composite objective functions composed of the expectation of quadratic functions and a convex function. We extend earlier results on least-squares regression to any regularizer and any geometry represented by a Bregman divergence. As a fourth contribution, we consider the discriminative clustering framework. We propose its first theoretical analysis, a novel sparse extension, a natural extension for the multi-label scenario, and an efficient iterative algorithm with better running-time complexity than existing methods. The fifth main contribution deals with the seriation problem. We adopt a statistical approach in which the matrix is observed with noise, study the corresponding minimax rate of estimation, and suggest a computationally efficient estimator whose performance is analysed both theoretically and experimentally.
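The averaging idea that recurs in this abstract can be sketched in its simplest form. The snippet below is an illustrative assumption, not the thesis's algorithm: it implements constant-step stochastic gradient descent with Polyak-Ruppert averaging for the quadratic (least-squares) loss, the baseline that the thesis's accelerated variants build on.

```python
import numpy as np

def averaged_sgd_least_squares(X, y, step=0.05, n_passes=100, seed=0):
    """Constant-step SGD with Polyak-Ruppert averaging for 0.5*E[(x'w - y)^2].

    Returns the running average of the iterates, which is what enjoys the
    favourable statistical rates for non-strongly convex quadratic problems.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)        # current iterate
    w_bar = np.zeros(d)    # Polyak-Ruppert average of the iterates
    t = 0
    for _ in range(n_passes):
        for i in rng.permutation(n):
            g = (X[i] @ w - y[i]) * X[i]   # stochastic gradient of the quadratic loss
            w = w - step * g
            t += 1
            w_bar += (w - w_bar) / t       # incremental running mean
    return w_bar
```

The individual iterates oscillate around the optimum under a constant step size; averaging smooths those oscillations out, which is the "positive behavior of averaging" the abstract refers to.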
APA, Harvard, Vancouver, ISO, and other styles
47

Zahnd, Guillaume. "Estimation du mouvement bi-dimensionnel de la paroi artérielle en imagerie ultrasonore par une approche conjointe de segmentation et de speckle tracking." Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00835828.

Full text
Abstract:
This thesis lies in the field of biomedical image processing. The objective of our study is the in vivo estimation of parameters describing the mechanical properties of the carotid artery in ultrasound imaging, with a view to the early detection of cardiovascular disease. The analysis of the longitudinal motion of the arterial-wall tissues, i.e. in the same direction as the blood flow, is the main motivation of this work. The three main contributions of this work are i) the development of an original, semi-automatic methodological framework dedicated to the segmentation and motion estimation of the arterial wall in in vivo sequences of B-mode ultrasound images, ii) the description of a protocol for generating a reference, involving the manual operations of several experts, in order to quantify the accuracy of our method's results despite the absence of ground truth inherent to the ultrasound modality, and iii) the clinical evaluation of the association between the mechanical and dynamical parameters of the carotid wall and cardiovascular risk factors for the early detection of atherosclerosis. We propose a semi-automatic method based on a joint approach of wall-contour segmentation and tissue-motion estimation. The extraction of the interface positions is performed via an approach specific to the morphological structure of the carotid, based on a dynamic programming strategy exploiting a matched filter. The motion estimation is performed via a robust block-matching method, based on a priori knowledge of the displacement as well as on the temporal update of the reference block by a dedicated Kalman filter. The accuracy of our method, evaluated in vivo, is of the same order of magnitude as that of the manual operations performed by experts, and remains appreciably better than that obtained with two other traditional methods (i.e. a standard implementation of the block-matching technique and the commercial software Velocity Vector Imaging). We also present four clinical studies carried out in a hospital setting, in which we evaluate the association between longitudinal motion and cardiovascular risk factors. We suggest that longitudinal motion, an emerging and still little-studied risk marker, constitutes a relevant index complementary to traditional markers in the characterisation of arterial physiopathology, reflects the overall level of cardiovascular risk, and could be well suited to the early detection of atherosclerosis.
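The block-matching step at the core of this abstract can be illustrated with a minimal exhaustive-search sketch. This is only the baseline technique under a sum-of-squared-differences criterion; the thesis's a priori displacement and Kalman-filtered reference block are deliberately omitted, and all names here are illustrative.

```python
import numpy as np

def block_match(prev, curr, top_left, block=8, search=4):
    """Exhaustive SSD block matching.

    Finds the displacement (dr, dc) of one `block` x `block` patch from frame
    `prev` to frame `curr`, searched over a +/- `search` pixel window.
    """
    r, c = top_left
    ref = prev[r:r + block, c:c + block]   # reference block in the previous frame
    best = (0, 0)
    best_ssd = np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            # skip candidates that fall outside the current frame
            if rr < 0 or cc < 0 or rr + block > curr.shape[0] or cc + block > curr.shape[1]:
                continue
            cand = curr[rr:rr + block, cc:cc + block]
            ssd = np.sum((ref - cand) ** 2)
            if ssd < best_ssd:
                best_ssd, best = ssd, (dr, dc)
    return best
```

In the thesis's setting, the reference block would additionally be updated over time by a Kalman filter rather than taken verbatim from the previous frame, which makes the tracking robust to speckle decorrelation.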
APA, Harvard, Vancouver, ISO, and other styles
48

Lang, Humblot Karine. "Etude et réalisation d'un magnétomètre à résonance magnétique nucléaire à polarisation dynamique pour les applications en forage pétrolier." Université Joseph Fourier (Grenoble), 1997. http://www.theses.fr/1997GRE10023.

Full text
Abstract:
Nuclear magnetic resonance magnetometers based on dynamic nuclear polarization are well suited to measuring the Earth's magnetic field in oil-drilling wells, which allows a fairly precise dating of the geological layers concerned. The magnetometer contains a protonated solvent with free radicals in solution in order to achieve the dynamic polarization that amplifies the proton nuclear signal, which would be unobservable without this device. The aim of this work is to improve the performance of the magnetometer in the presence of strong magnetic field gradients and to allow its operation up to relatively high temperatures. The choice of the radical solution is crucial. The key criteria are a stable radical exhibiting a large internal hyperfine structure and a narrow EPR linewidth. We developed a more accurate model of this linewidth, which allowed us to define precisely the characteristics required for the radical and the solvent. We showed that the gradient tolerance of the solution is conditioned by short nuclear relaxation times, and the correlation between these two effects was formulated with a theoretical model in agreement with the experimental results. All these criteria led to the choice of a triglyme solution containing the nitroxide radical TMIO. This stable solution, relatively resistant to magnetic field gradients, is the best-performing one for our application. A new oscillator based on this solution was designed and characterised. A complementary and original fundamental study of the intermolecular relaxation between the solvent protons and the electron spins of radicals with internal hyperfine structure was also carried out. The behaviour of the spectral densities, following a law proportional to the square root of the nuclear resonance frequency, was observed over several frequency ranges, which allowed a precise determination of the relative diffusion constant of the radical-solvent pair.
APA, Harvard, Vancouver, ISO, and other styles
49

Salva, Karol T. "A Hybrid Approach to Aerial Video Image Registration." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1483524722687971.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Valenta, Václav. "Řešení parciálních diferenciálních rovnic s využitím aposteriorního odhadu chyby." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-412549.

Full text
Abstract:
This thesis deals with the calculation of gradients at triangulation nodes using a weighted average of the gradients of the neighbouring elements. The recovered gradient is then used for a posteriori error estimation, which yields a better solution of partial differential equations. The work presents two common discretisation methods: the finite element method and the finite difference method.
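The gradient-recovery idea in this abstract can be sketched for linear (P1) triangular elements: the gradient of the interpolant is constant on each triangle, and a nodal gradient is obtained by averaging over the triangles meeting at the node. The area weighting below is an illustrative choice, since the abstract does not specify which weights are used.

```python
import numpy as np

def element_gradient(coords, u):
    """Gradient of the linear interpolant on one triangle.

    coords: (3, 2) vertex coordinates; u: (3,) nodal values.
    Returns (gradient, triangle area).
    """
    (x1, y1), (x2, y2), (x3, y3) = coords
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)  # twice the signed area
    area = 0.5 * abs(det)
    # Gradients of the three P1 basis (barycentric) functions.
    b = np.array([[y2 - y3, x3 - x2],
                  [y3 - y1, x1 - x3],
                  [y1 - y2, x2 - x1]]) / det
    return u @ b, area

def recovered_node_gradients(nodes, tris, u):
    """Area-weighted average of neighbouring element gradients at each node."""
    grad = np.zeros_like(nodes, dtype=float)
    weight = np.zeros(len(nodes))
    for tri in tris:
        g, a = element_gradient(nodes[tri], u[tri])
        for v in tri:
            grad[v] += a * g      # accumulate area-weighted element gradient
            weight[v] += a
    return grad / weight[:, None]
```

For a globally linear field the element gradients all agree, so the recovered nodal gradients reproduce the exact gradient; the a posteriori error estimate of the thesis compares this recovered gradient with the raw element gradients.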
APA, Harvard, Vancouver, ISO, and other styles