
Dissertations / Theses on the topic 'Inverse estimator'


Consult the top 50 dissertations / theses for your research on the topic 'Inverse estimator.'


1

Liu, Yang. "Analysis of Dependently Truncated Sample Using Inverse Probability Weighted Estimator." Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/math_theses/110.

Abstract:
Many statistical methods for truncated data rely on the assumption that the failure and truncation times are independent, which can be unrealistic in applications. The study cohorts obtained from bone marrow transplant (BMT) registry data are commonly recognized as truncated samples, in which the time-to-failure is truncated by the transplant time. There is clinical evidence that a longer transplant waiting time is associated with worse survivorship. Therefore, it is reasonable to assume dependence between the transplant and failure times. To better analyze BMT registry data, we utilize a Cox analysis in which the transplant time serves both as a truncation variable and as a predictor of the time-to-failure. An inverse-probability-weighted (IPW) estimator is proposed to estimate the distribution of the transplant time. The usefulness of the IPW approach is demonstrated through a simulation study and a real application.
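
To make the weighting idea concrete, a minimal Python sketch of a generic inverse-probability-weighted empirical distribution follows (illustrative only, not the author's estimator; the selection probabilities p_hat are assumed to come from some fitted model of the truncation mechanism, and the data are synthetic):

import numpy as np

def ipw_ecdf(times, p_hat):
    # Observations that were unlikely to enter the sample get larger
    # weights, correcting the selection bias induced by truncation.
    w = 1.0 / p_hat
    order = np.argsort(times)
    t_sorted, w_sorted = times[order], w[order]
    cdf = np.cumsum(w_sorted) / np.sum(w_sorted)
    return t_sorted, cdf

# Hypothetical data and selection probabilities, for illustration only.
rng = np.random.default_rng(0)
times = rng.exponential(2.0, size=200)
p_hat = rng.uniform(0.2, 1.0, size=200)
t, F = ipw_ecdf(times, p_hat)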
2

Swain, David James. "Enhancing and Reconstructing Digitized Handwriting." Thesis, Virginia Tech, 1997. http://hdl.handle.net/10919/36904.

Abstract:
This thesis involves restoration, reconstruction, and enhancement of a digitized library of handwritten documents. Imaging systems that perform this digitization often degrade the quality of the original documents. Many techniques exist for reconstructing, restoring, and enhancing digital images; however, many require a priori knowledge of the imaging system. In this study, only partial a priori knowledge is available, and therefore unknown parameters must be estimated before restoration, reconstruction, or enhancement is possible. The imaging system used to digitize the document library has degraded the images in several ways. First, it has introduced a ringing that is apparent around each stroke. Second, the system has eliminated strokes of narrow widths. To restore these images, the imaging system is modeled by estimating the point spread function from sample impulse responses, and the image noise is estimated in an attempt to apply standard linear restoration techniques. The applicability of these techniques is investigated in the first part of this thesis. Then nonlinear filters, structural techniques, and enhancement techniques are applied to obtain substantial improvements in image quality.
3

Pingel, Ronnie. "Some Aspects of Propensity Score-based Estimators for Causal Inference." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-229341.

Abstract:
This thesis consists of four papers that are related to commonly used propensity score-based estimators for average causal effects. The first paper starts with the observation that researchers often have access to data containing many covariates that are correlated. We therefore study the effect of correlation on the asymptotic variance of an inverse probability weighting estimator and a matching estimator. Under the assumptions of normally distributed covariates, a constant causal effect, and potential outcomes and a logit that are linear in the parameters, we show that the correlation influences the asymptotic efficiency of the estimators differently, both with regard to direction and magnitude. Further, the strength of the confounding towards the outcome and the treatment plays an important role. The second paper extends the first in that the estimators are studied under the more realistic setting of using the estimated propensity score. We also relax several assumptions made in the first paper, and include the doubly robust estimator. Again, the results show that the correlation may increase or decrease the variances of the estimators, but we also observe that several aspects influence how correlation affects the variance, such as the choice of estimator, the strength of the confounding towards the outcome and the treatment, and whether a constant or non-constant causal effect is present. The third paper concerns estimation of the asymptotic variance of a propensity score matching estimator. Simulations show that large gains can be made in mean squared error by properly selecting the smoothing parameters of the variance estimator, and that a residual-based local linear estimator may be a more efficient estimator of the asymptotic variance. The specification of the variance estimator is shown to be crucial when evaluating the effect of right heart catheterisation: we show either a negative effect on survival or no significant effect, depending on the choice of smoothing parameters. In the fourth paper, we provide an analytic expression for the covariance matrix of logistic regression with normally distributed regressors. This paper is related to the others in that logistic regression is commonly used to estimate the propensity score.
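
As background for the estimators compared in these papers, a textbook inverse probability weighting estimator of the average causal effect, with the propensity score estimated by logistic regression, can be sketched as follows (a generic illustration on simulated data, not the thesis code):

import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, treat, y):
    # Estimate the propensity score, then form the Horvitz-Thompson-type
    # weighted difference of outcome means.
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    ps = np.clip(ps, 1e-3, 1 - 1e-3)   # guard against extreme weights
    return np.mean(treat * y / ps) - np.mean((1 - treat) * y / (1 - ps))

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
treat = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
y = 2.0 * treat + X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=500)
print(ipw_ate(X, treat, y))   # should land near the true effect of 2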
4

Coudret, Raphaël. "Stochastic modelling using large data sets : applications in ecology and genetics." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00865867.

Abstract:
There are two main parts in this thesis. The first one concerns valvometry, here the study of the distance between the two parts of an oyster's shell over time. The health status of oysters can be characterized using valvometry in order to obtain insights about the quality of their environment. We consider that a renewal process with four states underlies the behaviour of the studied oysters. Such a hidden process can be retrieved from a valvometric signal by assuming that a probability density function linked with this signal is bimodal. We then compare several estimators which take this assumption into account, including kernel density estimators. In another chapter, we compare several regression approaches aimed at analysing transcriptomic data. To understand which explanatory variables have an effect on gene expression, we apply a multiple testing procedure to these data through the linear model FAMT. The SIR method may find nonlinear relations in such a context, but it is more commonly used when the response variable is univariate. A multivariate version of SIR was therefore developed. Procedures to measure gene expression can be expensive, so the sample size n of the corresponding datasets is often small. That is why we also studied SIR when n is less than the number of explanatory variables p.
5

Portier, François. "Réduction de la dimension en régression." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00871049.

Abstract:
In this thesis, we study the dimension reduction problem in the regression model Y = g(B X, e), where X is a p-dimensional vector, Y is real-valued, the function g is unknown, and the noise e is independent of X. We focus on estimating the d×p matrix B, where d is smaller than p (knowing B makes it possible to obtain good convergence rates for the estimation of g). The problem is addressed using two distinct approaches. The first, called inverse regression, requires the linearity condition on X. The second, a semiparametric approach, does not require such a condition but only that X has a smooth density. Within the inverse regression framework, we study two families of methods based respectively on E[X f(Y)] and E[X X^T f(Y)]. For each family, we derive the conditions on f that allow an exhaustive estimation of B, and we compute the optimal function f by minimizing the asymptotic variance. Within the semiparametric framework, we propose a method for estimating the gradient of the regression function. Under classical semiparametric assumptions, we prove the asymptotic normality of our estimator and the exhaustiveness of the estimation of B. Whichever approach is considered, a fundamental question arises: how should the dimension of B be chosen? To this end, we propose a method for estimating the rank of a matrix by bootstrap hypothesis testing.
6

Alghamdi, Amani Saeed. "Study of Generalized Lomax Distribution and Change Point Problem." Bowling Green State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1526387579759835.

7

Solís, Maikol. "Conditional covariance estimation for dimension reduction and sensitivity analysis." Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2354/.

Abstract:
This thesis focuses on the estimation of conditional covariance matrices and their applications, in particular in dimension reduction and sensitivity analysis. In Chapter 2, we work in a context of high-dimensional nonlinear regression, with the main objective of using the sliced inverse regression methodology. Using a functional operator depending on the joint density, we apply a Taylor decomposition around a preliminary estimator. We prove two things: our estimator is asymptotically normal with a variance depending only on the linear part, and this variance is efficient from the Cramér-Rao point of view. In Chapter 3, we study the estimation of conditional covariance matrices, first coordinate-wise, where those parameters depend on the unknown joint density, which we replace by a kernel estimator. We prove that the mean squared error of the nonparametric estimator has a parametric rate of convergence if the joint distribution belongs to some class of smooth functions. Otherwise, we get a slower rate depending on the regularity of the model. For the estimator of the whole matrix, we apply a regularization of the "banding" type. Finally, in Chapter 4, we apply our results to estimate Sobol (sensitivity) indices. These indices measure the influence of the inputs with respect to the output in complex models. The advantage of our implementation is that we can estimate the Sobol indices without resorting to computationally expensive Monte-Carlo methods. Some illustrations are presented in the chapter, showing the capabilities of our estimator.
8

Nguyen, Huu Du. "System Reliability : Inference for Common Cause Failure Model in Contexts of Missing Information." Thesis, Lorient, 2019. http://www.theses.fr/2019LORIS530.

Abstract:
The effective operation of an entire industrial system is sometimes strongly dependent on the reliability of its components. A failure of one of these components can lead to the failure of the whole system, with consequences that can be catastrophic, especially in the nuclear or aeronautics industries. To reduce this risk of catastrophic failures, a redundancy policy, consisting in duplicating the sensitive components in the system, is often applied. When one of these components fails, another takes over and the normal operation of the system can be maintained. However, situations that lead to simultaneous failures of components in the system can be observed; these are called common cause failures (CCF). Analyzing, modeling, and predicting this type of failure event is therefore an important issue and is the subject of the work presented in this thesis. We investigate several methods for the statistical analysis of CCF events. Different algorithms to estimate the parameters of the models and to make predictive inference based on various types of missing data are proposed. We treat confounded data using a BFR (Binomial Failure Rate) model. An EM algorithm is developed to obtain the maximum likelihood estimates (MLE) of the parameters of the model. We introduce the modified-Beta distribution to develop a Bayesian approach. The alpha-factors model is considered to analyze uncertainties in CCF. We suggest a new formalism to describe uncertainty and consider Dirichlet distributions (nested, grouped) for a Bayesian analysis.
Recording of CCF cause data leads to incomplete contingency tables. For a Bayesian analysis of this type of tables, we propose an algorithm relying on the inverse Bayes formula (IBF) and the Metropolis-Hastings algorithm. We compare our results with those obtained with the alpha-decomposition method, a recent method proposed in the literature. Prediction of catastrophic events is addressed, and mapping strategies are described to suggest upper bounds of prediction intervals with a pivotal method and Bayesian techniques. Recent events have highlighted the importance of the reliability of redundant systems, and we hope that our work will contribute to a better understanding and prediction of the risks of major CCF events.
9

Krämer, Romy, Matthias Richter, and Bernd Hofmann. "Parameter estimation in a generalized bivariate Ornstein-Uhlenbeck model." Universitätsbibliothek Chemnitz, 2005. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200501307.

Abstract:
In this paper, we consider the inverse problem of calibrating a generalization of the bivariate Ornstein-Uhlenbeck model introduced by Lo and Wang. Even though the generalized Black-Scholes option pricing formula still holds, option prices change in comparison to the classical Black-Scholes model. The time-dependent volatility function and the other (real-valued) parameters in the model are calibrated simultaneously from option price data and from some empirical moments of the logarithmic returns. This gives an ill-posed inverse problem, which requires a regularization approach. Applying the theory of Engl, Hanke and Neubauer concerning Tikhonov regularization, we show convergence of the regularized solution to the true data and study the form of source conditions which ensure convergence rates.
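
As background, the Tikhonov-regularized least-squares step that such calibration problems reduce to after discretization has a simple closed form; a generic numerical sketch (not the calibration code of the paper):

import numpy as np

def tikhonov(A, b, lam):
    # Minimizes ||A x - b||^2 + lam^2 ||x||^2; for lam > 0 the normal
    # equations are well conditioned even when A itself is not.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Ill-conditioned toy problem (a Hilbert matrix), for illustration only.
n = 12
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
b = A @ np.ones(n)
x_reg = tikhonov(A, b, lam=1e-5)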
10

Saracco, Jérôme. "Contributions à la régression inverse par tranchage : sliced inverse regression (S.I.R.)." Toulouse 3, 1996. http://www.theses.fr/1996TOU30185.

Abstract:
Sliced inverse regression (SIR) is a semiparametric regression method based on a geometric argument. Unlike other semiparametric regression methods, it requires only very short computation times. In this thesis, after a survey of the current state of work on SIR, we consider two aspects of this method, as well as an application to the simplified estimation of a selection model. (1) We develop an asymptotic theory based on a non-random slicing, concerning the asymptotic distribution of the parametric part of the model. (2) A semiparametric extension of the tobit selection model can be interpreted geometrically within the SIR framework. Exploiting this observation, we introduce a simplified estimator for such a model and study its convergence in probability and in distribution. Simulations, including cases where some theoretical assumptions are not satisfied by the data, confirm the good behaviour of our estimator. (3) For small samples, the slicing estimator turns out to be sensitive to the choice of slices. We propose two alternatives to a particular slicing fixed by the user: one based on a nonparametric argument, the other based on smoothing over several slicings. We establish various asymptotic properties of these methods and compare them to existing SIR methods by simulations on samples of 25 and 50 observations. The proposed methods turn out to be noticeably better than the earlier ones. We have implemented all the SIR methods in Splus. We provide an illustration and a description of the software implementation; the various procedures and functions are available by ftp.
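
For readers new to SIR, the basic slicing estimator that this thesis refines can be sketched in a few lines of Python (a generic implementation, not the author's Splus code; the number of slices is precisely the tuning choice the thesis studies):

import numpy as np

def sir_directions(X, y, n_slices=10, d=1):
    # Whiten X, slice the data on the order of y, average the whitened
    # covariates within slices, and take the top eigenvectors of the
    # between-slice covariance of those means.
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    L = np.linalg.cholesky(np.linalg.inv(cov))   # L L' = cov^{-1}
    Z = (X - mu) @ L                             # Cov(Z) = I
    M = np.zeros((p, p))
    for s in np.array_split(np.argsort(y), n_slices):
        m = Z[s].mean(axis=0)
        M += (len(s) / n) * np.outer(m, m)
    vals, vecs = np.linalg.eigh(M)               # ascending eigenvalues
    B = L @ vecs[:, ::-1][:, :d]                 # top-d directions, X scale
    return B / np.linalg.norm(B, axis=0)

# Illustrative use on synthetic single-index data.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = np.sin(X[:, 0] + 0.5 * X[:, 1]) + 0.1 * rng.normal(size=300)
B_hat = sir_directions(X, y)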
11

Helin, Mikael. "Inverse Parameter Estimation using Hamilton-Jacobi Equations." Thesis, KTH, Numerisk analys, NA, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-123092.

Abstract:
In this degree project, a solution on a coarse grid is recovered by fitting a partial differential equation to a few known data points. The PDEs considered are the heat equation and Dupire's equation, with their synthetic data, including synthetic data from the Black-Scholes formula. The approach to fitting a PDE is to use optimal control to derive discrete approximations to regularized Hamilton characteristic equations, for which discrete stepping schemes and parameters for smoothness are examined. The derived method is tested in a non-parametric numerical implementation, and a few suggestions for possible improvements are given.
12

Mbah, Alfred Kubong. "On the theory of records and applications." [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0002216.

13

Orchard, Peter Raymond. "Sparse inverse covariance estimation in Gaussian graphical models." Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/9955.

Abstract:
One of the fundamental tasks in science is to find explainable relationships between observed phenomena. Recent work has addressed this problem by attempting to learn the structure of graphical models - especially Gaussian models - by the imposition of sparsity constraints. The graphical lasso is a popular method for learning the structure of a Gaussian model. It uses regularisation to impose sparsity. In real-world problems, there may be latent variables that confound the relationships between the observed variables. Ignoring these latents, and imposing sparsity in the space of the visibles, may lead to the pruning of important structural relationships. We address this problem by introducing an expectation maximisation (EM) method for learning a Gaussian model that is sparse in the joint space of visible and latent variables. By extending this to a conditional mixture, we introduce multiple structures, and allow side information to be used to predict which structure is most appropriate for each data point. Finally, we handle non-Gaussian data by extending each sparse latent Gaussian to a Gaussian copula. We train these models on a financial data set; we find the structures to be interpretable, and the new models to perform better than their existing competitors. A potential problem with the mixture model is that it does not require the structure to persist in time, whereas this may be expected in practice. So we construct an input-output HMM with sparse Gaussian emissions. But the main result is that, provided the side information is rich enough, the temporal component of the model provides little benefit, and reduces efficiency considerably. The GWishart distribution may be used as the basis for a Bayesian approach to learning a sparse Gaussian. However, sampling from this distribution often limits the efficiency of inference in these models. We make a small change to the state-of-the-art block Gibbs sampler to improve its efficiency. We then introduce a Hamiltonian Monte Carlo sampler that is much more efficient than block Gibbs, especially in high dimensions. We use these samplers to compare a Bayesian approach to learning a sparse Gaussian with the (non-Bayesian) graphical lasso. We find that, even when limited to the same time budget, the Bayesian method can perform better. In summary, this thesis introduces practically useful advances in structure learning for Gaussian graphical models and their extensions. The contributions include the addition of latent variables, a non-Gaussian extension, (temporal) conditional mixtures, and methods for efficient inference in a Bayesian formulation.
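
The graphical lasso that serves as the non-Bayesian baseline above is available off the shelf; a minimal scikit-learn sketch on simulated chain-graph data (illustrative settings, not the experiments of the thesis):

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)
p = 10
# Sparse ground-truth precision matrix: a chain graph.
prec = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec), size=500)

model = GraphicalLasso(alpha=0.05).fit(X)
est_prec = model.precision_   # zeros encode conditional independences
print(np.round(est_prec, 2))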
14

Zhu, Sha. "A Bayesian Approach for Inverse Problems in Synthetic Aperture Radar Imaging." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00844748.

Abstract:
Synthetic Aperture Radar (SAR) imaging is a well-known technique in the domain of remote sensing, aerospace surveillance, geography and mapping. To obtain high-resolution images under noise, it becomes very important to account for the characteristics of the targets in the observed scene, the various measurement uncertainties and the modeling errors. Conventional imaging methods are based on i) over-simplified scene models, ii) a simplified linear forward model (mathematical relations between the transmitted signals, the received signals and the targets) and iii) a very simplified Inverse Fast Fourier Transform (IFFT) for the inversion, resulting in low-resolution, noisy images with unsuppressed speckle and high side-lobe artifacts. In this thesis, we propose a Bayesian approach to SAR imaging, which overcomes many drawbacks of classical methods and brings higher resolution, more stable images and more accurate parameter estimation for target recognition. The proposed unifying approach is used for inverse problems in mono-, bi- and multi-static SAR imaging, as well as for micromotion target imaging. Appropriate priors for modeling different target scenes, in terms of target feature enhancement during imaging, are proposed. Fast and effective estimation methods with simple and hierarchical priors are developed. The problem of hyperparameter estimation is also handled in this Bayesian framework. Results on synthetic, experimental and real data demonstrate the effectiveness of the proposed approach.
15

Gill, Jennifer. "AN INVERSE ALGORITHM TO ESTIMATE THERMAL CONTACT RESISTANCE." Master's thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2546.

Abstract:
Thermal systems often feature composite regions that are mechanically mated. In general, there exists a significant temperature drop across the interface between such regions, which may be composed of similar or different materials. The parameter characterizing this temperature drop is the thermal contact resistance, which is defined as the ratio of the temperature drop to the heat flux normal to the interface. The thermal contact resistance is due to roughness effects between mating surfaces which cause certain regions of the mating surfaces to lose contact, thereby creating gaps. In these gap regions, the principal modes of heat transfer are conduction across the contacting regions of the interface, conduction or natural convection in the fluid filling the gap regions of the interface, and radiation across the gap surfaces. Moreover, the contact resistance is a function of contact pressure, as this can significantly alter the topology of the contact region. The thermal contact resistance is a phenomenologically complex function and can significantly alter predictions of thermal models of complex multi-component structures. Accurate estimates of thermal contact resistances are important in engineering calculations and find application in thermal analyses ranging from relatively simple layered and composite materials to more complex biomaterials. There have been many studies devoted to the theoretical prediction of thermal contact resistance, and although general theories have been somewhat successful in predicting thermal contact resistances, the most reliable results have been obtained experimentally. This is due to the fact that the nature of thermal contact resistance is quite complex and depends on many parameters, including the types of mating materials, surface characteristics of the interfacial region such as roughness and hardness, and the contact pressure distribution. In experiments, temperatures are measured at a certain number of locations, usually close to the contact surface, and these measurements are used as inputs to a parameter estimation procedure to arrive at the sought-after thermal contact resistance. Most studies seek a single value for the contact resistance, while the resistance may in fact also vary spatially. In this thesis, an inverse problem (IP) is formulated to estimate the spatial variation of the thermal contact resistance along an interface in a two-dimensional configuration. Temperatures measured at discrete locations using embedded sensors appropriately placed in proximity to the interface provide the additional information required to solve the inverse problem. A superposition method serves to determine sensitivity coefficients and provides guidance in the location of the measuring points. Temperature measurements are then used to define a regularized quadratic functional that is minimized to yield the contact resistance between the two mating surfaces. A boundary element method (BEM) analysis provides the temperature field under current estimates of the contact resistance in the solution of the inverse problem when the geometry of interest is not regular, while an analytical solution can be used for regular geometries. Minimization of the IP functional is carried out by the Levenberg-Marquardt method or by a genetic algorithm, depending on the problem under consideration. The L-curve method of Hansen is used to choose the optimal regularization parameter. A series of numerical examples is provided to demonstrate and validate the approach.
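
The L-curve criterion of Hansen mentioned above picks the regularization parameter at the corner of the curve (log residual norm, log solution norm); a rough generic sketch for a linear(ized) forward operator A (not the BEM code of the thesis):

import numpy as np

def l_curve_lambda(A, b, lambdas):
    # Trace the L-curve over a grid of parameters and return the one
    # maximizing the curvature of the parametric curve (rho, eta).
    rho, eta = [], []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)
        rho.append(np.log(np.linalg.norm(A @ x - b)))
        eta.append(np.log(np.linalg.norm(x)))
    rho, eta = np.array(rho), np.array(eta)
    dr, de = np.gradient(rho), np.gradient(eta)
    ddr, dde = np.gradient(dr), np.gradient(de)
    kappa = (dr * dde - ddr * de) / (dr**2 + de**2) ** 1.5
    return lambdas[np.argmax(kappa)]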
16

Willer, Thomas. "Estimation non paramétrique et problèmes inverses." Phd thesis, Université Paris-Diderot - Paris VII, 2006. http://tel.archives-ouvertes.fr/tel-00121197.

Abstract:
We consider nonparametric estimation for inverse problems, where an unknown function is transformed by an ill-posed linear operator and one observes a version of it corrupted by an additive random error. In this type of problem, wavelet methods are very useful and have been widely studied. The methods developed in this thesis are inspired by them, but depart from the "classical" wavelet bases, which opens new theoretical and practical perspectives. In most of the thesis, a white-noise-type model is used. We construct estimators using bases that, on the one hand, are adapted to the operator and, on the other hand, have properties analogous to those of wavelets. We study their minimax properties in a broad setting, and we implement these methods in order to study their practical performance. In a final part, we use a regression model with random design and study the numerical performance of an estimator based on warping the wavelet bases.
17

Sakamoto, Julia. "Inverse Optical Design and Its Applications." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/216969.

Abstract:
We present a new method for determining the complete set of patient-specific ocular parameters, including surface curvatures, asphericities, refractive indices, tilts, decentrations, thicknesses, and index gradients. The data consist of the raw detector outputs of one or more Shack-Hartmann wavefront sensors (WFSs); unlike conventional wavefront sensing, we do not perform centroid estimation, wavefront reconstruction, or wavefront correction. Parameters in the eye model are estimated by maximizing the likelihood. Since a purely Gaussian noise model is used to emulate electronic noise, maximum-likelihood (ML) estimation reduces to nonlinear least-squares fitting between the data and the output of our optical design program. Bounds on the estimate variances are computed with the Fisher information matrix (FIM) for different configurations of the data-acquisition system, thus enabling system optimization. A global search algorithm called simulated annealing (SA) is used for the estimation step, due to multiple local extrema in the likelihood surface. The ML approach to parameter estimation is very time-consuming, so rapid processing techniques are implemented with the graphics processing unit (GPU). We are leveraging our general method of reverse-engineering optical systems in optical shop testing for various applications. For surface profilometry of aspheres, which involves the estimation of high-order aspheric coefficients, we generated a rapid ray-tracing algorithm that is well-suited to the GPU architecture. Additionally, reconstruction of the index distribution of GRIN lenses is performed using analytic solutions to the eikonal equation. Another application is parameterized wavefront estimation, in which the pupil phase distribution of an optical system is estimated from multiple irradiance patterns near focus. The speed and accuracy of the forward computations are emphasized, and our approach has been refined to handle large wavefront aberrations and nuisance parameters in the imaging system.
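
The simulated annealing step can be sketched generically (a bare-bones Metropolis-type minimizer for a multimodal negative log-likelihood; step size and cooling schedule here are arbitrary illustrations, and no GPU acceleration is shown):

import numpy as np

def anneal(neg_log_lik, x0, step=0.1, t0=1.0, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = neg_log_lik(x)
    best_x, best_f = x.copy(), fx
    for k in range(n_iter):
        t = t0 * 0.999**k                        # geometric cooling
        cand = x + step * rng.normal(size=x.shape)
        fc = neg_log_lik(cand)
        # Metropolis rule: accept improvements, sometimes accept worse.
        if fc < fx or rng.uniform() < np.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f

# Illustrative use on a double-well surface with two local minima.
x_best, f_best = anneal(lambda x: (x[0]**2 - 1.0)**2 + 0.3 * x[0], x0=[2.0])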
18

Syllebranque, Cédric. "Estimation de propriétés mécaniques d'objets complexes à partir de séquences d'images." Thesis, Lille 1, 2010. http://www.theses.fr/2010LIL10175/document.

Abstract:
Many computer simulation algorithms can reproduce the physical behaviour of three-dimensional objects. Phenomena from many different domains, such as classical mechanics (cloth simulation, for example), fluid mechanics (fire or smoke) or photometry (photo-realistic visualisation of 3D scenes), can be recreated virtually. However, these algorithms usually require many parameters, whose tuning is often delicate and not faithful to reality. To use such algorithms in surgical simulation or for the stability computations of a bridge, it is essential to have the right parameters. Unfortunately, it is hard (sometimes impossible), even for a domain expert, to find the right parameter values to produce a desired effect, especially for complex real objects such as an eye or a liver. Indeed, even with a very powerful simulator, the adequacy with the real world is far from obvious, and in the best case one still has to proceed by trial and error to obtain the desired result, sometimes for hours. This PhD thesis therefore aims to determine these parameters directly from real image sequences, in order to reproduce in virtual reality, as faithfully as possible, the observed behaviour of real-world objects. To this end, we design a low-cost hardware and software solution, proposing a new force-capture device and an inverse estimation algorithm based on several error metrics.
19

Ingle, William Nathan. "Stability estimates for inverse problems of some elliptic equations." Diss., Wichita State University, 2011. http://hdl.handle.net/10057/5149.

Abstract:
In this dissertation we obtain new Carleman formulas for the solution of the Cauchy problem for equations P u = h in Ω, u|_E = f, where E ⊂ ∂Ω and |E| > 0. Our elliptic operator is of the form P = diag(2∂̄, 2∂) + A(x), where A is a 2 × 2 matrix. We also obtain estimates for the solution of the equation P u = 0 when u is given at a finite number of points, and we prove that non-trivial solutions to the equation cannot be small on large portions of the boundary: |E_δ| ≤ c / ln(δ⁻¹), δ ∈ (0, 1), where E_δ = {z ∈ ∂Ω : |u(z)| < δ} and |E_δ| is the Lebesgue measure of E_δ. Finding the boundary condition from only a finite number of interior measurements of a domain is interesting both theoretically and practically. For example, when the boundary is physically inaccessible, all measurements must be made within the domain itself, and the conditions on the boundary must be reconstructed. We investigate the problem of recovering a boundary condition of the third kind for the Laplace operator defined on a simply connected domain in the complex plane, when the value of the solution and its gradient are known only for a finite number of interior points.
20

Sehlstedt, Niklas. "Hybrid methods for inverse force estimation in structural dynamics." Doctoral thesis, KTH, Vehicle Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3528.

21

Leacock, Garry R. "Helicopter inverse simulation for workload and handling qualities estimation." Thesis, University of Glasgow, 2000. http://theses.gla.ac.uk/4947/.

Abstract:
Helicopter handling qualities are investigated using inverse simulation as the method of providing state and control information for the appropriate quantitative metrics. The main aim of the work was to develop a more comprehensive and versatile method of quantifying handling qualities levels using the available inverse algorithm "Helinv". Subsequently, the assessment of the helicopter model inherent in Helinv, the "Helicopter Generic Simulation" (HGS), for its suitability to handling qualities studies was paramount. Since the Helinv inverse algorithm operates by initially defining a mathematical flight test manoeuvre for the vehicle to "fly", considerable time was given to modelling suitable handling qualities assessment manoeuvres. So-called "attitude quickness" values were then calculated, thus providing an initial objective insight into the handling qualities level of the vehicle under test. Validation of the tasks formed an integral part of successfully fulfilling the flight test manoeuvre development objective. The influence of the human is captured by the inclusion of a pilot model, and the development of a novel method of parameter estimation supplements the overall objective of modifying Helinv results to achieve potentially more realistic responses and thus correspondingly more realistic handling qualities. A comparative study of two helicopters, one based on the Westland Lynx battlefield/utility type and the other a hypothetically superior configuration, effectively demonstrates the capability of inverse simulation to deliver results adequate for initial handling qualities studies. Several examples are used to illustrate the point. Helinv has been shown to be versatile and efficient and can be used in initial handling qualities studies. The advantages of such a technique are clear when it is seen that actual flight testing, ground-based or airborne, is extremely costly, as the flight test manoeuvres must be representative of real life, reproducible and, of course, as risk-free as possible. Many inverse simulation runs and handling qualities calculations have been carried out for different helicopter configurations and manoeuvres, thus illustrating the advantages of the technique and fulfilling all the aims mentioned above.
22

Jung, Ylva. "Estimation of Inverse Models Applied to Power Amplifier Predistortion." Licentiate thesis, Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-97355.

Abstract:
Mathematical models are commonly used in technical applications to describe the behavior of a system. These models can be estimated from data, which is known as system identification. Usually the models are used to calculate the output for a given input, but in this thesis, the estimation of inverse models is investigated. That is, we want to find a model that can be used to calculate the input for a given output. In this setup, the goal is to minimize the difference between the input and the output from the cascaded systems (system and inverse). A good model would be one that reconstructs the original input when used in series with the original system. Different methods for estimating a system inverse exist. The inverse model can be based on a forward model, or it can be estimated directly by reversing the use of input and output in the identification procedure. The models obtained using the different approaches capture different aspects of the system, and the choice of method can have a large impact. Here, it is shown in a small linear example that a direct estimation of the inverse can be advantageous when the inverse is supposed to be used in cascade with the system to reconstruct the input. Inverse systems turn up in many different applications, such as sensor calibration and power amplifier (PA) predistortion. PAs used in communication devices can be nonlinear, and this causes interference in adjacent transmitting channels, which will be noise to anyone that transmits in these channels. Therefore, linearization of the amplifier is needed, and a prefilter, called a predistorter, is used. In this thesis, the predistortion problem has been investigated for a type of PA called the outphasing power amplifier, where the input signal is decomposed into two branches that are amplified separately by highly efficient nonlinear amplifiers and then recombined. If the decomposition and summation of the two parts are not perfect, nonlinear terms will be introduced in the output, and predistortion is needed. Here, a predistorter has been constructed based on a model of the PA. In a first method, the structure of the outphasing amplifier has been used to model the distortion, and from this model a predistorter can be estimated. However, this involves solving two nonconvex optimization problems, with the risk of obtaining a suboptimal solution. Exploring the structure of the PA, the problem can be reformulated such that the PA modeling basically reduces to solving two least-squares (LS) problems, which are convex. In a second step, an analytical description of an ideal predistorter can be used to obtain a predistorter estimate. Another approach is to compute the predistorter without a PA model by estimating the inverse directly. The methods have been evaluated in simulations and in measurements, and it is shown that the predistortion improves the linearity of the overall power amplifier system.
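
In the simplest memoryless, real-valued case, the direct approach (reversing input and output in the identification step) reduces to an ordinary least-squares fit; a toy sketch (real predistorters work on complex baseband signals with memory, so this is only illustrative):

import numpy as np

def fit_direct_inverse(u, y, degree=5):
    # Regress the system input u on its measured output y; the fitted
    # polynomial then maps a desired output to the input producing it.
    # For an invertible memoryless system this post-inverse coincides
    # with the pre-inverse that a predistorter needs.
    Psi = np.vander(y, degree + 1)
    c = np.linalg.lstsq(Psi, u, rcond=None)[0]
    return lambda y_new: np.vander(np.atleast_1d(y_new), degree + 1) @ c

# Toy amplifier with mild cubic compression, for illustration only.
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 400)
y = u - 0.2 * u**3 + 0.01 * rng.normal(size=400)
predistorter = fit_direct_inverse(u, y)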
23

Hutchinson, Derek Charles Glenn. "Manipulator inverse kinematics based on recursive least squares estimation." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/27890.

Abstract:
The inverse kinematics problem for six degree of freedom robots having a separable structure with the wrist equivalent to a spherical joint is considered, and an iterative solution based on estimating the inverse Jacobian by recursive least squares estimation is proposed. This solution is found to have properties similar to Wampler's Damped Least Squares method and provides a stable result when the manipulator is in singular regions. Furthermore, the solution is more computationally efficient than Wampler's method; however, its best performance is obtained when the distances between the current end effector pose and the target pose are small. No knowledge of the manipulator's geometry is required provided that the end effector and joint position data are obtained from sensor information. This permits the algorithm to be readily transferable among manipulators and circumvents detailed analysis of the manipulator's structure.
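
A generic recursive least squares (RLS) tracker for an inverse Jacobian B, mapping end-effector increments dx to joint increments dq, might look as follows (an illustrative sketch with a forgetting factor; the thesis algorithm may differ in detail):

import numpy as np

class RLSInverseJacobian:
    def __init__(self, n_joints, n_pose, lam=0.98):
        self.B = np.zeros((n_joints, n_pose))   # inverse-Jacobian estimate
        self.P = 1e3 * np.eye(n_pose)           # large initial covariance
        self.lam = lam                          # forgetting factor

    def update(self, dx, dq):
        # Standard RLS: gain from the regressor dx, then correct each row
        # of B by its prediction error on dq.
        Px = self.P @ dx
        k = Px / (self.lam + dx @ Px)
        self.B += np.outer(dq - self.B @ dx, k)
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return self.B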
24

Kang, Xiaoning. "Contributions to Large Covariance and Inverse Covariance Matrices Estimation." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/82150.

Abstract:
Estimation of the covariance matrix and its inverse is of great importance in multivariate statistics, with broad applications such as dimension reduction, portfolio optimization, linear discriminant analysis and gene expression analysis. However, accurate estimation of covariance or inverse covariance matrices is challenging due to the positive definiteness constraint and the large number of parameters, especially in high-dimensional cases. In this thesis, I develop several approaches for estimating large covariance and inverse covariance matrices with different applications. In Chapter 2, I consider the estimation of time-varying covariance matrices in the analysis of multivariate financial data. An order-invariant Cholesky-log-GARCH model is developed for estimating the time-varying covariance matrices based on the modified Cholesky decomposition. This decomposition provides a statistically interpretable parametrization of the covariance matrix. The key idea of the proposed model is to consider an ensemble estimation of the covariance matrix based on multiple permutations of the variables. Chapter 3 investigates the sparse estimation of the inverse covariance matrix for high-dimensional data. This problem has attracted wide attention, since zero entries in the inverse covariance matrix imply conditional independence among variables. I propose an order-invariant sparse estimator based on the modified Cholesky decomposition. The proposed estimator is obtained by assembling a set of estimates from multiple permutations of the variables. Hard thresholding is imposed on the ensemble Cholesky factor to encourage sparsity in the estimated inverse covariance matrix. The proposed method is able to catch the correct sparse structure of the inverse covariance matrix. Chapter 4 focuses on the sparse estimation of a large covariance matrix. The traditional estimation approach is known to perform poorly in high dimensions. I propose a positive-definite estimator for the covariance matrix using the modified Cholesky decomposition. Such a decomposition provides the flexibility to obtain a set of covariance matrix estimates. The proposed method considers an ensemble estimator as the "center" of these available estimates with respect to the Frobenius norm. The proposed estimator is not only guaranteed to be positive definite, but is also able to catch the underlying sparse structure of the true matrix.
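
The modified Cholesky construction at the heart of these chapters is easy to sketch for one fixed variable ordering (the thesis then assembles such estimates over many permutations; generic illustrative code, with hard thresholding on the Cholesky factor as described):

import numpy as np

def mod_cholesky_precision(X, threshold=0.0):
    # Regress each variable on its predecessors; with T unit lower
    # triangular holding the negated coefficients and D the residual
    # variances, the inverse covariance is Omega = T' D^{-1} T.
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Xc[:, 0].var()
    for j in range(1, p):
        phi = np.linalg.lstsq(Xc[:, :j], Xc[:, j], rcond=None)[0]
        T[j, :j] = -phi
        d[j] = np.mean((Xc[:, j] - Xc[:, :j] @ phi) ** 2)
    T[np.abs(T) < threshold] = 0.0   # hard thresholding for sparsity
    np.fill_diagonal(T, 1.0)
    return T.T @ np.diag(1.0 / d) @ T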
25

Choi, Kerkil. "Minimum I-divergence Methods for Inverse Problems." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7543.

Abstract:
Problems of estimating nonnegative functions from nonnegative data induced by nonnegative mappings are ubiquitous in science and engineering. We address such problems by minimizing an information-theoretic discrepancy measure, namely Csiszar's I-divergence, between the collected data and hypothetical data induced by an estimate. Our applications can be summarized along the following three lines: 1) Deautocorrelation: Deautocorrelation involves recovering a function from its autocorrelation. Deautocorrelation can be interpreted as phase retrieval, in that recovering a function from its autocorrelation is equivalent to retrieving Fourier phases from just the corresponding Fourier magnitudes. Schulz and Snyder invented a minimum I-divergence algorithm for phase retrieval. We perform a numerical study concerning the convergence of their algorithm to local minima. X-ray crystallography is a method for finding the interatomic structure of a crystallized molecule. X-ray crystallography problems can be viewed as deautocorrelation problems from aliased autocorrelations, due to the periodicity of the crystal structure. We derive a modified version of the Schulz-Snyder algorithm for application to crystallography. Furthermore, we prove that our tweaked version can theoretically preserve special symmorphic group symmetries that some crystals possess. We quantify noise impact via several error metrics as the signal-to-noise ratio changes. Furthermore, we propose penalty methods using Good's roughness and total variation for alleviating roughness in estimates caused by noise. 2) Deautoconvolution: Deautoconvolution involves finding a function from its autoconvolution. We derive an iterative algorithm that attempts to recover a function from its autoconvolution via minimizing I-divergence. Various theoretical properties of our deautoconvolution algorithm are derived. 3) Linear inverse problems: Various linear inverse problems can be described by the Fredholm integral equation of the first kind. We address two such problems via minimum I-divergence methods, namely the inverse blackbody radiation problem, and the problem of estimating an input distribution to a communication channel (particularly Rician channels) that would create a desired output. Penalty methods are proposed for dealing with the ill-posedness of the inverse blackbody problem.
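
The classical fixed point for minimizing Csiszar's I-divergence under a nonnegative linear model, the building block behind the algorithms above, is a multiplicative (EM/Richardson-Lucy-type) update; a generic sketch, not the thesis code:

import numpy as np

def min_idivergence(A, b, n_iter=200):
    # Update x <- x * A'(b / (A x)) / A'1: it preserves nonnegativity
    # and decreases the I-divergence between b and A x at each step.
    x = np.ones(A.shape[1])
    col_sums = A.sum(axis=0)
    for _ in range(n_iter):
        x *= (A.T @ (b / (A @ x))) / col_sums
    return x

# Tiny nonnegative toy problem, for illustration only.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(30, 10))
b = A @ rng.uniform(0.5, 2.0, size=10)
x_hat = min_idivergence(A, b)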
26

Maksymenko, Kostiantyn. "Nouvelles approches algorithmiques pour les problèmes directs et inverses en M/EEG." Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4112.

Abstract:
Magneto- and electro-encephalography (M/EEG) are two non-invasive functional imaging modalities which measure the electromagnetic activity of the brain. These tools are used in cognitive studies as well as in clinical applications such as epilepsy. Besides the presentation of some background material about the M/EEG modalities, this thesis describes two main contributions. The first one is a method for a fast approximation of a set of EEG forward problem solutions, parametrized by tissue conductivity values. This forward problem consists in computing how a specific cortical activity would be measured by EEG sensors. The main advantage of our method is that it significantly accelerates the computation time, while controlling the approximation error. Head tissue conductivity values vary across subjects and it might be interesting to estimate them from the EEG data. Our method is an important step towards an efficient solution of such a head tissue conductivity estimation problem. The second contribution is a novel source reconstruction method, which estimates extended cortical sources explaining the M/EEG measurements. The main originality of the method is that instead of providing a unique reconstruction, as the majority of the state-of-the-art methods do, it proposes several equally valid candidates. We validated both our contributions on simulated and real M/EEG data.
27

Nickless, Alecia. "Regional CO₂ flux estimates for South Africa through inverse modelling." Doctoral thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29703.

Abstract:
Bayesian inverse modelling provides a top-down technique for verifying emissions and uptake of carbon dioxide (CO₂) from both natural and anthropogenic sources. It relies on accurate measurements of CO₂ concentrations at appropriately placed sites and on "best-guess" initial estimates of the biogenic and anthropogenic emissions, together with uncertainty estimates. The Bayesian framework improves current estimates of CO₂ fluxes based on independent measurements of CO₂ concentrations while being constrained by the initial estimates of these fluxes. Monitoring, reporting and verification (MRV) is critical for establishing whether emission-reducing activities to mitigate the effects of climate change are being effective, and the Bayesian inverse modelling approach of correcting CO₂ flux estimates provides one of the tools regulators and researchers can use to refine these emission estimates. South Africa is known to be the largest emitter of CO₂ on the African continent. The first major objective of this research project was to carry out an optimal network design for South Africa, i.e. to determine where CO₂ monitoring sites should be placed so that the network best constrains the fluxes. This study used fossil fuel emission estimates from the Fossil Fuel Data Assimilation System (FFDAS), a satellite product based on observations of night-time lights and the locations of power stations, and biogenic productivity estimates from a carbon assessment carried out for South Africa, to provide the initial CO₂ flux estimates and their uncertainties. Sensitivity analyses considered changes to the covariance matrix and the spatial scale of the inversion, as well as different optimisation algorithms, to assess the impact of these specifications on the optimal network solution. This question is addressed in Chapters 2 and 3. The second major objective of this project was to use the Bayesian inverse modelling approach to obtain estimates of CO₂ fluxes over Cape Town and the surrounding area. I collected measurements of atmospheric CO₂ concentrations from March 2012 until July 2013 at the Robben Island and Hangklip lighthouses. CABLE (Community Atmosphere Biosphere Land Exchange), a land-atmosphere exchange model, provided the biogenic estimates of CO₂ fluxes and their uncertainties. Fossil fuel estimates and uncertainties were obtained by means of an inventory analysis for Cape Town. As an inventory analysis was not available for Cape Town, this exercise formed an additional objective of the project, presented in Chapter 4. A spatially and temporally explicit, high-resolution surface of fossil fuel emission estimates was derived from road vehicle, aviation and shipping vessel count data, population census data, and industrial fuel use statistics, making use of well-established emission factors. The city-scale inversion for Cape Town solved for weekly fluxes of CO₂ on a 1 km × 1 km grid, keeping fossil fuel and biogenic emissions as separate sources. I present these results for the Cape Town inversion under the proposed best available configuration of the Bayesian inversion framework in Chapter 5. Due to the large number of CO₂ sources at this spatial and temporal resolution, the reference inversion solved for weekly fluxes in blocks of four weeks at a time. As the uncertainties around the biogenic flux estimates were large, the inversion corrected the prior fluxes predominantly through changes to the biogenic fluxes. I demonstrated the benefit of using a control vector with separate terms for the fossil fuel and biogenic flux components. Sensitivity analyses were performed, both solving for average weekly fluxes within a monthly inversion and solving for separate weekly fluxes (i.e. solving in one-week blocks), and focusing on how changes to the prior information, the prior uncertainty estimates and the error correlations of the fluxes would impact the Bayesian inversion solution. The sensitivity tests are presented in Chapter 6. These analyses indicated that refining the estimates of biogenic fluxes and reducing their uncertainties, as well as taking advantage of spatial correlation between areas of homogeneous biota, would lead to the greatest improvement in the accuracy and precision of the posterior fluxes from the Cape Town metropolitan area.
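For the Gaussian case, the Bayesian synthesis step underlying inversions of this kind has a standard closed form. The sketch below is a toy illustration with invented dimensions and values, not code from the thesis; it shows how prior fluxes, a transport operator and concentration observations combine into posterior fluxes and a posterior covariance. In network design, candidate networks are typically ranked by a scalar summary of that posterior covariance, such as its trace.

    # Illustrative Gaussian Bayesian synthesis inversion (not the thesis code).
    # Posterior fluxes: s_post = s0 + B H^T (H B H^T + R)^-1 (c - H s0)
    import numpy as np

    rng = np.random.default_rng(0)
    n_obs, n_flux = 50, 20                  # hypothetical problem size
    H = rng.normal(size=(n_obs, n_flux))    # transport (sensitivity) matrix, assumed given
    s_true = rng.normal(2.0, 1.0, n_flux)   # "true" fluxes used to simulate data
    c = H @ s_true + rng.normal(0.0, 0.1, n_obs)   # CO2 concentration observations

    s0 = np.full(n_flux, 1.5)               # prior ("best-guess") flux estimates
    B = np.eye(n_flux) * 1.0**2             # prior flux error covariance
    R = np.eye(n_obs) * 0.1**2              # observation error covariance

    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    s_post = s0 + K @ (c - H @ s0)                 # posterior fluxes
    A_post = B - K @ H @ B                         # posterior covariance

    print("prior RMSE:    ", np.sqrt(np.mean((s0 - s_true) ** 2)))
    print("posterior RMSE:", np.sqrt(np.mean((s_post - s_true) ** 2)))
    print("posterior uncertainty (trace):", np.trace(A_post))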
APA, Harvard, Vancouver, ISO, and other styles
28

Korats, Gundars. "Estimation de sources corticales : du montage laplacian aux solutions parcimonieuses." Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0027/document.

Full text
Abstract:
Cortical source imaging plays an important role in understanding functional and pathological brain mechanisms. It links the activation of certain cortical areas in response to a given cognitive stimulus, and allows one to study the co-activation of the underlying functional networks. Among the available acquisition modalities, electroencephalographic measurements (EEG) have the great advantage of providing a time resolution of the order of the millisecond, at the scale of the dynamics of the studied process, while being a non-invasive technique often used in clinical routine. However, the identification of the activated sources from EEG recordings remains an extremely difficult task because of the low spatial resolution this modality provides, the strong filtering effect of the cranial bones, and errors inherent in the propagation model used. In this work, different approaches for the estimation of cortical activity from surface EEG have been explored. The simplest cortical imaging methods are based only on the geometrical characteristics of the head. The computational load is greatly reduced and the models used are easy to implement. However, such approaches do not provide accurate information about the neural generators and their spatiotemporal properties. To overcome such limitations, more sophisticated techniques can be used to build a realistic propagation model, and thus to reach a better source reconstruction through its inversion. However, such an inverse problem is severely ill-posed, and constraints have to be imposed to reduce the solution space. We began by reconsidering the cortical source imaging problem by relying mostly on the observations provided by the EEG measurements, when no anatomical modelling is available. The developed methods are based on simple but universal considerations about the head geometry as well as the physiological propagation of the sources. Full-rank matrix operators are applied to the data, similarly to Surface Laplacian methods, based on the assumption that the surface data can be explained by a mixture of linear radial basis functions produced by the underlying sources. In the second part of the thesis, we relax the full-rank constraint by adopting a distributed dipole model constellating the cortical surface. The inversion is constrained by a hypothesis of sparsity, based on the physiological assumption that only a few cortical sources are active simultaneously. Such a hypothesis is particularly valid in the context of epileptic sources or in the case of cognitive tasks. To apply this regularization, we consider the spatial and temporal domains simultaneously. We propose two combined dictionaries of spatio-temporal atoms, the first based on a principal component analysis of the data, the second using a wavelet decomposition, more robust to noise and well suited to the non-stationary nature of these electrophysiological data. All of the proposed methods have been tested on simulated data and compared to conventional approaches from the literature. The obtained performances are satisfactory and show good robustness to the addition of noise. We have also validated our approach on real data, namely interictal spikes of epileptic patients reviewed by neurologists of the University Hospital of Nancy affiliated with the project. The estimated locations are consistent with the epileptogenic zone identification obtained by intracerebral exploration based on stereo-EEG measurements.
APA, Harvard, Vancouver, ISO, and other styles
29

Fowler, William Mark. "Experimental validation of the inverse structural filter force estimation technique." Thesis, Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/17264.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Konstantinidis, Spyridon. "Inverse problems in channel estimation for MIMO-OFDM communication systems." Thesis, University of Leeds, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.530827.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Woo, Wai Lok. "Blind inverse channel estimation using second and higher order statistics." Thesis, University of Newcastle Upon Tyne, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.421174.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Boudineau, Mégane. "Vers la résolution "optimale" de problèmes inverses non linéaires parcimonieux grâce à l'exploitation de variables binaires sur dictionnaires continus : applications en astrophysique." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30020/document.

Full text
Abstract:
This thesis deals with the solution of nonlinear inverse problems using a sparsity prior; more specifically, problems where the data can be modelled as a linear combination of a few functions which depend non-linearly on a "location" parameter, e.g. frequencies for spectral analysis or time delays for spike train deconvolution. These problems are generally reformulated as linear sparse approximation problems, thanks to an evaluation of the nonlinear functions at location parameters discretised on a fine grid, building a "discrete dictionary". However, such an approach has two major drawbacks. On the one hand, the discrete dictionary is highly correlated; classical sub-optimal methods such as L1-penalisation or greedy algorithms can then fail. On the other hand, the estimated location parameter, which belongs to the discretisation grid, is necessarily discrete, and this leads to model errors. To deal with these issues, we propose in this work an exact sparsity model, thanks to the introduction of binary variables, and an optimal solution of the problem with a "continuous dictionary" allowing a continuous estimation of the location parameter. We focus on two research axes, which we illustrate with problems such as spike train deconvolution and spectral analysis of unevenly sampled data. The first axis focusses on the "dictionary interpolation" principle, which consists in a linearisation of the continuous dictionary in order to get a constrained linear sparse approximation problem. The introduction of binary variables allows us to reformulate this problem as a "Mixed Integer Program" (MIP) and to model the sparsity exactly through the "L0 pseudo-norm". We study different kinds of dictionary interpolation and constraint relaxations, in order to solve the problem optimally with classical MIP algorithms. For the second axis, in a Bayesian framework, the binary variables are supposed random with a Bernoulli distribution and we model the sparsity through a Bernoulli-Gaussian prior. This model is extended to take into account continuous location parameters (the BGE model). We then estimate the parameters from samples drawn using Markov chain Monte Carlo algorithms. In particular, we show that marginalising the amplitudes allows us to improve the sampling of a Gibbs algorithm in the supervised case (when the model's hyperparameters are known). In the unsupervised case, we propose to take advantage of such a marginalisation through a "Partially Collapsed Gibbs Sampler". Finally, we adapt the BGE model and the associated samplers to a topical science case in astrophysics: the detection of exoplanets from radial velocity measurements. The efficiency of our method is illustrated on simulated data, as well as on actual astrophysical data.
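To fix ideas about the Bernoulli-Gaussian machinery, the following minimal single-site Gibbs sampler treats the supervised case on an ordinary discrete dictionary; the continuous location parameters, amplitude marginalisation and partially collapsed sampling developed in the thesis are deliberately omitted, and all values are illustrative.

    # Minimal single-site Gibbs sampler for a Bernoulli-Gaussian sparse model
    # y = D x + noise, x_i = q_i * a_i, q_i ~ Bernoulli(pi), a_i ~ N(0, sa2).
    # Supervised case (hyperparameters known); a sketch, not the thesis code.
    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 100, 40
    D = rng.normal(size=(n, p)) / np.sqrt(n)        # stand-in discrete dictionary
    x_true = np.zeros(p); x_true[[5, 17, 30]] = [3.0, -2.5, 2.0]
    sigma2, sa2, pi = 0.01, 4.0, 0.1                # noise var, amplitude var, P(q=1)
    y = D @ x_true + np.sqrt(sigma2) * rng.normal(size=n)

    x, freq = np.zeros(p), np.zeros(p)
    for it in range(300):                           # Gibbs sweeps
        for i in range(p):
            d = D[:, i]
            r = y - D @ x + d * x[i]                # residual with atom i removed
            v = 1.0 / (d @ d / sigma2 + 1.0 / sa2)  # posterior amplitude variance
            m = v * (d @ r) / sigma2                # posterior amplitude mean
            log_odds = (np.log(pi / (1 - pi))       # prior odds of activation
                        + 0.5 * np.log(v / sa2) + 0.5 * m**2 / v)
            active = rng.random() < 1.0 / (1.0 + np.exp(-log_odds))
            x[i] = m + np.sqrt(v) * rng.normal() if active else 0.0
        if it >= 200:                               # accumulate after burn-in
            freq += x != 0
    print("posterior activation prob > 0.5:", np.nonzero(freq / 100 > 0.5)[0])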
APA, Harvard, Vancouver, ISO, and other styles
33

Earl, Simeon J. "Estimation of subsurface electrical resistivity values in 3D." Thesis, University of Bristol, 1998. http://hdl.handle.net/1983/e7842879-bf35-43eb-86d6-d4624fae9c3c.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Xue, Qi. "Etude mathématique et numérique du problème inverse de l'électro-sismique en milieu poreux." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM084/document.

Full text
Abstract:
In this thesis, we study the inverse problem for the coupling phenomenon of electromagnetic (EM) and seismic waves. The partial differential equations governing the coupling phenomenon are composed of the Maxwell and Biot equations. Since the coupling phenomenon is rather weak, at low frequency we only consider the transformation from EM waves to seismic waves. We use the electroseismic model to refer to this transformation. In the model, the electric field becomes the source of the Biot equations. A coupling coefficient denotes the efficiency of the transformation. In Chapter 2, we consider the existence and uniqueness of the forward problem in both the frequency domain and the time domain. In the frequency domain, we propose a suitable Sobolev space in which to consider the electrokinetic problem. We prove that the weak formulation satisfies a Garding inequality using the Helmholtz decomposition. The Fredholm alternative can then be applied, which shows that existence is equivalent to uniqueness. In the time domain, the weak solution is defined and the existence and uniqueness of the weak solution is proved. The stability of the inverse problem is considered in Chapter 3. We first prove Carleman estimates for both the Biot equations and the electroseismic equations. Based on the Carleman estimates for the electroseismic equations, we prove a Hölder stability for inverting all the parameters in the Maxwell equation and the coupling coefficient. To simplify the problem, we use electrostatic equations to replace the Maxwell equations. The inverse problem is decomposed into two steps: the inverse source problem for the Biot equations and the inverse parameter problem for the electrostatic equation. We prove the stability of the inverse source problem for the Biot equations based on the Carleman estimate for the Biot equations. The conductivity and the coupling coefficient can then be reconstructed with the information from the first step. In Chapter 4, we solve the electroseismic equations numerically. The electrostatic equation is solved with the Matlab PDE toolbox. The Biot equations are solved with a staggered finite difference method. To reduce the computational cost, we only deal with the two-dimensional problem. To simulate waves propagating in an unbounded domain, we use a perfectly matched layer (PML) to absorb waves reaching the cut-off boundary. Chapter 5 deals with the numerical inverse source problem for the Biot equations. The method we use is a variant of the time reversal method. The first step of the method is to transform the source problem into an initial value problem without any source. Applying the time reversal method then recovers the initial value. Numerical examples demonstrate that this method works well even for Biot equations with a small damping term. But if the damping term is too large, the inverse process is no longer symmetric with the forward process and the reconstruction results degenerate.
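The essence of the time reversal idea is easy to demonstrate on a much simpler model than the Biot equations. In the toy sketch below, a 1-D scalar wave is simulated forward from a localized source, the trace recorded at one boundary is re-injected backwards in time, and the field approximately refocuses at the source position at the end of the reversal; damping, poroelasticity and the PML used in the thesis are left out, and all values are invented.

    # Toy time reversal on a 1-D scalar wave equation; not the thesis's solver.
    import numpy as np

    nx, nt = 200, 400
    c, dx = 1.0, 1.0
    dt = 0.5 * dx / c                         # CFL-stable time step
    lam = (c * dt / dx) ** 2

    def step(u, um):                          # leapfrog update, Dirichlet ends
        un = np.zeros_like(u)
        un[1:-1] = 2*u[1:-1] - um[1:-1] + lam*(u[2:] - 2*u[1:-1] + u[:-2])
        return un

    x = np.arange(nx)
    u = np.exp(-0.5 * ((x - 60) / 3.0) ** 2)  # localized initial "source"
    um = u.copy()
    rec = np.zeros(nt)
    for k in range(nt):                       # forward simulation + recording
        u, um = step(u, um), u
        rec[k] = u[-2]                        # receiver near the right boundary

    u, um = np.zeros(nx), np.zeros(nx)
    for k in range(nt):                       # reversal: re-inject reversed record
        u, um = step(u, um), u
        u[-2] = rec[nt - 1 - k]
    print("refocused peak near x =", np.argmax(np.abs(u)), "(source was at 60)")

Refocusing is only approximate here because a single boundary record captures part of the wave energy; with full boundary data the reconstruction sharpens.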
APA, Harvard, Vancouver, ISO, and other styles
35

Salomond, Jean-Bernard. "Propriétés fréquentistes des méthodes Bayésiennes semi-paramétriques et non paramétriques." Thesis, Paris 9, 2014. http://www.theses.fr/2014PA090034/document.

Full text
Abstract:
Research on Bayesian nonparametric methods has received growing interest over the past twenty years, especially since the development of powerful simulation algorithms that make the implementation of complex Bayesian methods possible. From that point on, it is necessary to understand, from a theoretical point of view, the behaviour of Bayesian nonparametric methods. This thesis presents various contributions to the study of the frequentist properties of Bayesian nonparametric procedures. Although studying these methods from an asymptotic angle may seem restrictive, it allows one to grasp the operation of the Bayesian machinery in extremely complex models. Furthermore, this approach is particularly useful for detecting the characteristics of the prior that strongly influence the inference. Many general results have been proposed in the literature in this setting; however, the more complex and realistic the models become, the further they depart from the usual assumptions and the theory no longer covers them. Thus, many models that are of great interest in practice are not covered by the general theory. While the study of a model that does not fall under the general theory is of interest in its own right, it also allows for a better understanding of the mechanisms that govern the behaviour of Bayesian nonparametric methods in a general setting.
APA, Harvard, Vancouver, ISO, and other styles
36

Herman, Michael [Verfasser], and Wolfram [Akademischer Betreuer] Burgard. "Simultaneous estimation of rewards and dynamics in inverse reinforcement learning problems." Freiburg : Universität, 2020. http://d-nb.info/1204003297/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Karlsson, Johan. "Inverse Problems in Analytic Interpolation for Robust Control and Spectral Estimation." Doctoral thesis, Stockholm : Matematik, Mathematics, Kungliga Tekniska högskolan, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9248.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Fox-Neff, Kristen. "Inverse Methods in Parameter Estimation for High Intensity Focused Ultrasound (HIFU)." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1459155373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Kaperick, Bryan James. "Diagonal Estimation with Probing Methods." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/90402.

Full text
Abstract:
Probing methods for trace estimation of large, sparse matrices have been studied for several decades. In recent years, there has been some work to extend these techniques to estimate the diagonal entries of these systems directly. We extend some analysis of trace estimators to their corresponding diagonal estimators, propose a new class of deterministic diagonal estimators which are well-suited to parallel architectures along with heuristic arguments for the design choices in their construction, and conclude with numerical results on diagonal estimation and ordering problems, demonstrating the strengths of our newly-developed methods alongside existing methods.
Master of Science
In the past several decades, as computational resources have increased, a recurring problem has been that of estimating certain properties of very large linear systems (matrices containing real or complex entries). One particularly important quantity is the trace of a matrix, defined as the sum of the entries along its diagonal. In this thesis, we explore a problem that has only recently been studied: estimating the diagonal entries of a matrix explicitly. For these methods to be computationally more efficient than existing methods, and to have favorable convergence properties, we require the matrix in question to have a majority of its entries be zero (the matrix is sparse), with the largest-magnitude entries clustered on and near its diagonal, and to be very large in size. In fact, this thesis focuses on a class of methods called probing methods, which are particularly efficient when the matrix is not known explicitly, but rather can only be accessed through matrix-vector multiplications with arbitrary vectors. Our contribution is new analysis of these diagonal probing methods which extends the heavily-studied trace estimation problem, new applications for which probing methods are a natural choice for diagonal estimation, and a new class of deterministic probing methods which have favorable properties for large parallel computing architectures, which are becoming ever more necessary as problem sizes continue to increase beyond the scope of single-processor architectures.
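The stochastic probing baseline that work of this kind builds on can be stated in a few lines: probe the matrix with random sign vectors and divide the accumulated componentwise products. A minimal sketch, with an invented diagonally dominant test matrix, is:

    # Stochastic probing estimate of diag(A) using only matrix-vector products
    # (the classical Bekas/Kurzak/Saad-style estimator that deterministic
    # probing methods are measured against); the test matrix is invented.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 500
    A = np.diag(rng.uniform(1, 2, n)) + 0.01 * rng.normal(size=(n, n))

    num, den = np.zeros(n), np.zeros(n)
    for _ in range(100):                     # number of probe vectors
        v = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe
        num += v * (A @ v)                   # accumulate v .* (A v)
        den += v * v                         # accumulate v .* v
    d_est = num / den                        # componentwise diagonal estimate

    err = np.max(np.abs(d_est - np.diag(A)) / np.diag(A))
    print("max relative error:", err)        # shrinks as off-diagonal mass shrinks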
APA, Harvard, Vancouver, ISO, and other styles
40

Metzger, Thomas. "Dispersion thermique en milieux poreux : caractérisation expérimentale par technique inverse." Vandoeuvre-les-Nancy, INPL, 2002. http://www.theses.fr/2002INPL065N.

Full text
Abstract:
Thermal dispersion, i.e. heat transfer in a porous medium through which a fluid flows, is a complex phenomenon. For it to be taken into account in industrial applications, a simple model is needed, with parameters that are physically reasonable and experimentally accessible. This work presents an experimental technique for estimating the parameters of the so-called "one-temperature" model, in particular the coefficients of the thermal dispersion tensor, which depend on the Darcy velocity. The experimental set-up is a fixed bed of glass beads with water as the fluid. A line heat source, perpendicular to the flow direction, is used (temporal step excitation); the temperature signal, measured by thermocouples downstream of the source, is used for the parameter estimation. In this situation, the quality of the estimation by ordinary least squares is limited by the uncertainties on the velocity and on the thermocouple positions. Optimising the experimental geometry and using the Gauss-Markov method make it possible to estimate these parameters as well. Monte Carlo inversion simulations show that this approach yields an estimate of the longitudinal thermal dispersion coefficient with good accuracy; other heating geometries and results from the literature confirm these findings. The estimation of the transverse dispersion coefficient is less precise; however, the results obtained for its velocity dependence call into question the established model from the literature. The excellent temperature residuals obtained suggest that the one-temperature model is reasonable, even in the case of thermal non-equilibrium between the two phases.
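The contrast drawn here between ordinary least squares and the Gauss-Markov method comes down to weighting the residuals by the inverse of the measurement error covariance. A generic sketch, with an invented linear model rather than the dispersion model of the thesis, is:

    # Ordinary least squares vs. Gauss-Markov (generalized least squares):
    # GLS weights the residuals by the inverse measurement covariance, which
    # matters when errors are correlated or have unequal variances.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 60
    X = np.column_stack([np.ones(n), np.linspace(0, 5, n)])  # toy design matrix
    beta_true = np.array([1.0, 2.5])
    Sigma = np.diag(np.linspace(0.05, 1.0, n) ** 2)          # heteroscedastic errors
    y = X @ beta_true + rng.multivariate_normal(np.zeros(n), Sigma)

    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)             # ordinary least squares
    W = np.linalg.inv(Sigma)                                 # Gauss-Markov weights
    beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)     # generalized least squares
    print("OLS:", beta_ols, " GLS:", beta_gls, " true:", beta_true)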
APA, Harvard, Vancouver, ISO, and other styles
41

Walker, Don Gregory Jr. "Estimation of Unsteady Nonuniform Heating Rates from Surface Temperature Measurements." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/40387.

Full text
Abstract:
Shock wave interactions, such as those that occur during atmospheric re-entry, can produce extreme thermal loads on aerospace structures. These interactions are reproduced experimentally in hypersonic wind tunnels to study how the flow structures relate to the deleterious heat fluxes. In these studies, localized fluid jets created by shock interactions impinge on a test cylinder, where the temperature due to the heat flux is measured. These measurements are used to estimate the heat flux on the surface as a result of the shock interactions. The nature of the incident flux usually involves dynamic transients and severe nonuniformities. Finding this boundary flux from discrete unsteady temperature measurements is characterized by instabilities in the solution. The purpose of this work is to evaluate existing methodologies for the determination of the unsteady heat flux and to introduce a new approach based on an inverse technique. The performance of these methods was measured first in terms of accuracy and of their ability to handle inherently "unstable" or highly dynamic data such as step fluxes and high-frequency oscillating fluxes. The approach was then extended to estimate unsteady and nonuniform heat fluxes. The inverse methods proved to be the most accurate and stable of the methods examined, with the proposed method being preferable.
Ph. D.
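A common way to pose such an inverse heat conduction problem is as a regularised linear least-squares problem: by Duhamel superposition, the temperature history is a convolution of the unknown flux with the sensor's impulse response, and Tikhonov regularisation stabilises the inversion of that convolution. The sketch below uses a made-up step response rather than the cylinder model of the thesis, and a step flux as the "unstable" test case.

    # Whole-domain Tikhonov-regularised estimate of an unsteady surface heat
    # flux from noisy temperature data; kernel and values are illustrative.
    import numpy as np

    rng = np.random.default_rng(4)
    nt = 100
    t = np.arange(nt) * 0.1
    phi = np.sqrt(t + 1e-3)                  # assumed step response of the sensor
    dphi = np.diff(phi, prepend=0.0)         # impulse response (sensitivity)

    # lower-triangular convolution matrix: T = X q (Duhamel superposition)
    X = np.array([[dphi[i - j] if i >= j else 0.0 for j in range(nt)]
                  for i in range(nt)])

    q_true = np.where((t > 2) & (t < 6), 10.0, 0.0)   # step heat flux
    T = X @ q_true + 0.05 * rng.normal(size=nt)       # noisy temperatures

    alpha = 1e-2                              # Tikhonov regularisation parameter
    q_est = np.linalg.solve(X.T @ X + alpha * np.eye(nt), X.T @ T)
    print("flux error (L2):", np.linalg.norm(q_est - q_true) / np.linalg.norm(q_true))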
APA, Harvard, Vancouver, ISO, and other styles
42

Costa, Facundo Hernan. "Bayesian M/EEG source localization with possible joint skull conductivity estimation." Thesis, Toulouse, INPT, 2017. http://www.theses.fr/2017INPT0016/document.

Full text
Abstract:
M/EEG techniques allow one to determine changes in brain activity, which is useful in diagnosing brain disorders such as epilepsy. They consist of measuring the electric potential at the scalp and the magnetic field around the head. The measurements are related to the underlying brain activity by a linear model that depends on the lead-field matrix. Localizing the sources, or dipoles, of M/EEG measurements consists of inverting this linear model. However, the non-uniqueness of the solution (due to the fundamental law of physics) and the low number of dipoles make the inverse problem ill-posed. Solving such a problem requires some sort of regularization to reduce the search space. The literature abounds with methods and techniques to solve this problem, especially with variational approaches. This thesis develops Bayesian methods to solve ill-posed inverse problems, with application to M/EEG. The main idea underlying this work is to constrain the sources to be sparse. This hypothesis is valid in many applications, such as certain types of epilepsy. We develop different hierarchical models to account for the sparsity of the sources. Theoretically, enforcing sparsity is equivalent to minimizing a cost function penalized by the l0 pseudo-norm of the solution. However, since l0 regularization leads to NP-hard problems, the l1 approximation is usually preferred. Our first contribution consists of combining the two norms in a Bayesian framework, using a Bernoulli-Laplace prior. A Markov chain Monte Carlo (MCMC) algorithm is used to estimate the parameters of the model jointly with the source locations and intensities. Comparing the results, in several scenarios, with those obtained with sLoreta and the weighted l1-norm regularization shows interesting performance, at the price of a higher computational complexity. Our Bernoulli-Laplace model solves the source localization problem at one instant of time. However, it is biophysically well known that brain activity follows spatiotemporal patterns. Exploiting the temporal dimension is therefore interesting to further constrain the problem. Our second contribution consists of formulating a structured sparsity model to exploit this biophysical phenomenon. Precisely, a multivariate Bernoulli-Laplacian distribution is proposed as an a priori distribution for the dipole locations. A latent variable is introduced to handle the resulting complex posterior, and an original Metropolis-Hastings sampling algorithm is developed. The results show that the proposed sampling technique significantly improves convergence. A comparative analysis of the results is performed between the proposed model, an l21 mixed-norm regularization and the Multiple Sparse Priors (MSP) algorithm. Various experiments are conducted with synthetic and real data. The results show that our model has several advantages, including a better recovery of the dipole locations. The previous two algorithms consider a fully known lead-field matrix. However, this is seldom the case in practical applications. Instead, this matrix is the result of approximation methods that lead to significant uncertainties. Our third contribution consists of handling the uncertainty of the lead-field matrix. The proposed method expresses this matrix as a function of the skull conductivity using a polynomial matrix interpolation technique. The conductivity is considered the main source of uncertainty of the lead-field matrix. Our multivariate Bernoulli-Laplacian model is then extended to estimate the skull conductivity jointly with the brain activity. The resulting model is compared to other methods, including the techniques of Vallaghé et al. and Guttierez et al. Our method provides results of better quality without requiring knowledge of the active dipole positions and is not limited to a single dipole activation.
APA, Harvard, Vancouver, ISO, and other styles
43

Khlaifi, Anis. "Estimation des sources de pollution atmosphérique par modélisation inversée." Paris 12, 2007. http://www.theses.fr/2007PA120067.

Full text
Abstract:
The identification of pollution sources and their contributions using measurements in their environment was treated by two approaches, adapted to two different problems. In the first case, the objective is blind identification of the sources through their profiles (fingerprints), possibly followed by the estimation of their contributions. They are complex sources, whose emission profiles are unknown and include several species. It is through the measurements of the various species in the environment, using statistical pattern recognition methods (PCA, PMF, HC, KPCA, ICA), that we determined the source profiles. The general interest of this problem lies within the evaluation of the impact of aerosol sources. In the second case, the separation among the sources is no longer done by their profiles, because there is only one chemical species; in this case, the purpose is to estimate the contributions (in terms of emissions) of chronic, known sources. We developed an original coupling between the Pasquill Gaussian model and genetic algorithms to solve the inverse problem: estimating source emissions from the measurements of an air quality monitoring network. This estimation can be carried out with the aim of source monitoring or of emission inventory and analysis. Our results revealed various configurations related to the inversion of a physical model and led to the development of a methodology allowing the optimal design of a measurement network from which source emissions can be retrieved.
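The coupling of a forward dispersion model with a stochastic optimiser can be illustrated compactly. The toy sketch below pairs a ground-level Gaussian plume formula (with crude, invented dispersion coefficients and geometry) with a bare-bones select-and-mutate evolutionary loop to recover two emission rates from four receptor concentrations; it stands in for, but does not reproduce, the Pasquill-model/genetic-algorithm coupling developed in the thesis.

    # Toy inverse source estimation: Gaussian plume forward model + a minimal
    # evolutionary search. All parameter values are invented.
    import numpy as np

    rng = np.random.default_rng(5)

    def plume(q, xs, ys, xr, yr, u=3.0, H=20.0):
        # ground-level concentration (with ground reflection) downwind of a stack
        dx, dy = xr - xs, yr - ys
        if dx <= 0:
            return 0.0                        # receptor upwind: no contribution
        sy, sz = 0.08 * dx, 0.06 * dx         # crude sigma_y, sigma_z growth laws
        return (q / (np.pi * u * sy * sz)
                * np.exp(-dy**2 / (2 * sy**2)) * np.exp(-H**2 / (2 * sz**2)))

    sources = [(0.0, 0.0), (50.0, 120.0)]     # known chronic source locations
    receptors = [(400.0, y) for y in (-100.0, 0.0, 100.0, 200.0)]
    q_true = np.array([8.0, 3.0])             # unknown emission rates
    forward = lambda q: np.array([sum(plume(qi, *s, *r)
                                      for qi, s in zip(q, sources))
                                  for r in receptors])
    obs = forward(q_true)                     # synthetic network measurements

    cost = lambda q: np.sum((forward(q) - obs) ** 2)
    pop = rng.uniform(0, 20, size=(40, 2))    # initial population of rate pairs
    for _ in range(300):                      # select the fittest, then mutate
        pop = pop[np.argsort([cost(q) for q in pop])][:20]
        pop = np.vstack([pop, np.abs(pop + rng.normal(0, 0.3, pop.shape))])
    best = pop[np.argmin([cost(q) for q in pop])]
    print("estimated rates:", best, " true rates:", q_true)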
APA, Harvard, Vancouver, ISO, and other styles
44

Hoang, Van Hà. "Estimation adaptative pour des problèmes inverses avec des applications à la division cellulaire." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10096/document.

Full text
Abstract:
This thesis is divided into two independent parts. In the first one, we consider a stochastic individual-based model in continuous time describing a size-structured population of dividing cells. The random point measure describing the cell population evolves as a piecewise deterministic Markov process. We address the problem of nonparametric estimation of the kernel governing the divisions, under two observation schemes. First, we observe the evolution of cells up to a fixed time T and obtain the whole division tree. We construct an adaptive kernel estimator of the division kernel with a fully data-driven bandwidth selection. We obtain an oracle inequality and optimal exponential rates of convergence. Second, when the division tree is not completely observed, we show that, in a large-population limit, the renormalized microscopic process describing the evolution of the cells converges to the weak solution of a partial differential equation. We propose an estimator of the division kernel using Fourier techniques and prove its consistency. In the second part, we consider the nonparametric regression with errors-in-variables model in the multidimensional setting. We estimate the multivariate regression function by an adaptive estimator based on projection kernels defined with multi-indexed wavelets and a deconvolution operator. The wavelet resolution level is selected by the Goldenshluger-Lepski method. We obtain an oracle inequality and optimal rates of convergence over anisotropic Hölder classes.
APA, Harvard, Vancouver, ISO, and other styles
45

Koch, Michael Conrad. "Inverse analysis in geomechanical problems using Hamiltonian Monte Carlo." Kyoto University, 2020. http://hdl.handle.net/2433/253350.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Das, Sarit Kumar. "Monitoring and Inverse Dispersion Modeling to Quantify VOCs from MSW Landfill." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/1093.

Full text
Abstract:
In the USA, Municipal Solid Waste (MSW) landfills accumulate about 130 million tons of solid waste every year. A significant amount of the biodegradable solid waste is converted to landfill gas through anaerobic stabilization by bacteria. These biochemical reactions produce volatile organic compounds (VOCs) such as methane. Due to the heterogeneity of refuse composition, the unpredictable distribution of environmental conditions favorable for bacterial action, and the highly uncertain pathways of the gases, estimating landfill gas emissions for a particular landfill is complex. However, it is important to quantify landfill gases for health risk assessment and energy recovery purposes. This thesis reports research based on the monitoring and inverse dispersion modeling methodology proposed by researchers at the University of Central Florida. The River Birch Subtitle D landfill in Westwego, LA, was selected as the study area. The total emission calculated using the mathematical model, run in MATLAB, is comparable with the result obtained from the EPA LandGEM model using historical waste deposition records.
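For context, the EPA LandGEM model mentioned here estimates annual methane generation with a first-order decay sum over past waste deposits. A simplified annual-time-step version is sketched below; the acceptance history is hypothetical, and k and L0 are set to commonly used default values rather than site-specific ones.

    # First-order decay model of the kind used by EPA's LandGEM, simplified to
    # annual time steps; parameter values are illustrative, not site-specific.
    import numpy as np

    k = 0.05          # decay rate constant (1/yr), a common LandGEM default
    L0 = 170.0        # methane generation potential (m^3 CH4 per Mg of waste)

    waste = {2000 + i: 100_000.0 for i in range(10)}   # hypothetical Mg/yr accepted

    def methane_generation(year):
        """Annual CH4 generation (m^3/yr) in `year` from all past deposits."""
        return sum(k * L0 * m * np.exp(-k * (year - y))
                   for y, m in waste.items() if year >= y)

    for yr in (2005, 2010, 2020):
        print(yr, f"{methane_generation(yr):,.0f} m^3 CH4/yr")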
APA, Harvard, Vancouver, ISO, and other styles
47

Roberti, Débora Regina. "Problemas inversos em física da atmosfera." Universidade Federal de Santa Maria, 2005. http://repositorio.ufsm.br/handle/1/3940.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Techniques for estimating unknown terms - such as the eddy diffusivity and the counter-gradient term - in atmospheric flows are presented in this study. The method is also used to identify the source term in atmospheric pollution. The scheme adopted is based on inverse problem methodology. The inverse problem is formulated as a non-linear optimization problem, where the objective function is defined as the squared difference between observational data and data from a mathematical transport model. For estimating the properties of the atmospheric flow, an implicit inversion strategy was used, with an Eulerian model as the forward model. The estimation of the pollutant source term was tested in several physical scenarios employing a source-receptor technique. In the pollutant dispersion simulations, a Lagrangian model was applied. For some inversions, regularized solutions had to be sought; Tikhonov and entropic regularizations were employed when necessary. Three different optimization methods were used: Levenberg-Marquardt and quasi-Newton (deterministic), and simulated annealing (stochastic). The results show the good performance and robustness of the proposed inversion methodology in the various situations tested.
APA, Harvard, Vancouver, ISO, and other styles
48

Neviackas, Andrew. "Inverse fire modeling to estimate the heat release rate of compartment fires." College Park, Md.: University of Maryland, 2007. http://hdl.handle.net/1903/7290.

Full text
Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2007. Thesis research directed by: Dept. of Fire Protection Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
49

Clairon, Quentin. "New regularization methods for the inverse problem of parameter estimation in Ordinary differential equations." Thesis, Evry-Val d'Essonne, 2015. http://www.theses.fr/2015EVRY0024/document.

Full text
Abstract:
We present in this thesis two regularization methods for the parameter estimation problem in ordinary differential equations (ODEs). The first one is an extension of the two-step method; its initial motivations are to obtain an expression of the asymptotic variance and to avoid the use of the derivative form of the nonparametric estimator. It relies on the notion of weak ODE solution and considers a variational characterisation of the solution. By doing so, it identifies the true parameter set as the one satisfying a set of moments, which are smoother functions of the parameters than the least squares criterion. This general formulation defines an estimator which applies to a broad range of differential equations and can incorporate prior knowledge on the ODE solution. These arguments, supported by the numerical results obtained, make this method competitive compared to least squares. Nonetheless, this estimator requires observing all state variables. The second method also applies to the partially observed case. It regularizes the inverse problem by relaxing the constraint imposed by the ODE, replacing the initial model by a perturbed one. The estimator is then defined as the minimizer of a cost profiled over the set of possible perturbations, penalizing the distance between the initial model and the perturbed one. This approach requires the introduction of an infinite-dimensional optimization problem, solved by means of a fundamental result from optimal control theory, the Pontryagin maximum principle. Its application turns the resolution of the optimization problem into the integration of an ODE with boundary constraints. Thus, we have obtained an implementable estimator for which we have proven consistency. We dedicate a thorough analysis to the case of linear ODEs, for which we have derived the parametric convergence rate and the asymptotic normality of our estimator. In addition, we have access to a simpler expression for the profiled cost, which makes the numerical implementation of our estimator easier. These results are due to linear-quadratic theory, derived from the Pontryagin maximum principle in the linear case; it gives existence, uniqueness and a simple expression of the solution of the optimization problem defining our estimator. We have shown, through numerical examples, that our estimator is competitive with least squares and generalized smoothing, in particular in the presence of model misspecification, thanks to the model relaxation introduced in our approach. Furthermore, our estimation method based on optimal control theory offers a relevant framework for dealing with functional data analysis problems, which is emphasized through an example in the linear case.
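The least-squares baseline against which both proposed estimators are compared can be written in a few lines: integrate the candidate ODE and minimise the squared mismatch with the noisy observations. The sketch below does this for an illustrative logistic model (all values invented), using SciPy.

    # Classical (non-regularised) least-squares ODE parameter estimation:
    # the baseline the thesis compares against. Model and values are invented.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)

    def logistic(t, x, r, K):
        return [r * x[0] * (1 - x[0] / K)]

    t_obs = np.linspace(0, 10, 25)
    theta_true = (0.8, 10.0)                    # true (r, K)
    sol = solve_ivp(logistic, (0, 10), [0.5], t_eval=t_obs, args=theta_true)
    y_obs = sol.y[0] + 0.2 * rng.normal(size=t_obs.size)   # noisy observations

    def sse(theta):                              # least-squares criterion
        s = solve_ivp(logistic, (0, 10), [0.5], t_eval=t_obs, args=tuple(theta))
        return np.sum((s.y[0] - y_obs) ** 2)

    fit = minimize(sse, x0=[0.3, 5.0], method="Nelder-Mead")
    print("estimated (r, K):", fit.x, " true:", theta_true)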
APA, Harvard, Vancouver, ISO, and other styles
50

Guelton, Kevin. "Estimation des caractéristiques du mouvement humain en station debout. Mise en œuvre d'observateurs flous sous forme descripteur." Phd thesis, Université de Valenciennes et du Hainaut-Cambresis, 2003. http://tel.archives-ouvertes.fr/tel-00007960.

Full text
Abstract:
Neurophysiology and biomechanics are two complementary approaches to a global understanding of postural regulation. The human body is considered as a multi-articulated mechanical system regulated by the central nervous system. Its inputs are the joint torques and its outputs the positions of the body segments. In view of the postural strategies employed in upright stance, a nonlinear double-inverted-pendulum model is adopted. It depends on parameters that can be estimated from standard anthropometric tables. To fit the model to a particular individual, simulated annealing is used. The joint torques are classically estimated by inverse dynamics techniques, which are sensitive to measurement uncertainties. An alternative method from the field of control engineering, based on unknown-input observers, is proposed. A class of Takagi-Sugeno fuzzy observers based on a descriptor form is studied. The convergence conditions are obtained through a quadratic Lyapunov function and may be conservative. Four approaches, which can be written as LMI or BMI problems, are then proposed to relax these conditions. The implementation of a fuzzy unknown-input observer in the case of a human in upright stance is presented. The results obtained are compared with those of the various inverse dynamics approaches. The fuzzy observer appears better suited to estimating the characteristics of movement in upright stance. Moreover, its real-time capability is emphasized, which opens numerous perspectives in rehabilitation.
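For readers unfamiliar with state observers, the sketch below shows the simplest possible relative of the approach: a linear Luenberger observer on a linearised single inverted pendulum, with the observer gain placed by pole assignment. The thesis's Takagi-Sugeno fuzzy observers in descriptor form, with unknown torque inputs and LMI/BMI convergence conditions, are substantially more involved; all parameter values here are illustrative.

    # Linear Luenberger observer on a linearised inverted pendulum; a didactic
    # stand-in, not the fuzzy descriptor observer developed in the thesis.
    import numpy as np
    from scipy.signal import place_poles

    g, l = 9.81, 1.0
    A = np.array([[0.0, 1.0], [g / l, 0.0]])      # state: [angle, angular velocity]
    B = np.array([[0.0], [1.0]])                  # input: joint torque (normalised)
    C = np.array([[1.0, 0.0]])                    # measured output: angle only

    L = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T   # observer gain

    dt, n = 0.001, 5000
    x = np.array([0.1, 0.0])                      # true state
    xh = np.zeros(2)                              # observer estimate
    for _ in range(n):                            # forward-Euler simulation
        u = -np.array([25.0, 7.0]) @ xh           # stabilising feedback from estimate
        y = C @ x
        x = x + dt * (A @ x + B.flatten() * u)
        xh = xh + dt * (A @ xh + B.flatten() * u + (L @ (y - C @ xh)).flatten())
    print("final estimation error:", x - xh)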
APA, Harvard, Vancouver, ISO, and other styles