To see the other types of publications on this topic, follow the link: Estimation by Method of Moments.

Dissertations / Theses on the topic 'Estimation by Method of Moments'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Estimation by Method of Moments.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Strydom, Willem Jacobus. "Recovery based error estimation for the Method of Moments." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96881.

Full text
Abstract:
Thesis (MEng)--Stellenbosch University, 2015.
ENGLISH ABSTRACT: The Method of Moments (MoM) is routinely used for the numerical solution of electromagnetic surface integral equations. Solution errors are inherent to any numerical computational method, and error estimators can be effectively employed to reduce and control these errors. In this thesis, gradient recovery techniques of the Finite Element Method (FEM) are formulated within the MoM context, in order to recover a higher-order charge representation from a Rao-Wilton-Glisson (RWG) MoM solution. Furthermore, a new recovery procedure, based specifically on the properties of the RWG basis functions, is introduced by the author. These recovered charge distributions are used for a posteriori error estimation of the charge. It was found that the newly proposed charge recovery method has the highest accuracy of the considered recovery methods, and is best suited for applications within recovery-based error estimation. In addition to charge recovery, the possibility of recovery procedures for the MoM solution current is also investigated. A technique is explored whereby a recovered charge is used to find a higher-order divergent current representation. Two newly developed methods for the subsequent recovery of the solenoidal current component, as contained in the RWG solution current, are also introduced by the author. A posteriori error estimation of the MoM current is accomplished through the use of the recovered current distributions. A mixed second-order recovered current, based on a vector recovery procedure, was found to produce the most accurate results. The error estimation techniques developed in this thesis could be incorporated into an adaptive solver scheme to optimise the solution accuracy relative to the computational cost.
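The recovery idea carries over to simpler settings, which makes it easy to illustrate. Below is a minimal sketch (an assumed 1D analogue with a piecewise-constant solution, not the thesis's RWG formulation): a smoother field is recovered by nodal averaging, and the gap between the raw and recovered solutions serves as a per-element a posteriori error indicator.

    import numpy as np

    def recovery_error_indicator(q_elem):
        """q_elem: piecewise-constant solution values, one per mesh element.
        Returns a per-element error indicator built from a recovered
        (nodally averaged, Zienkiewicz-Zhu style) smoother representation."""
        n = len(q_elem)
        q_node = np.empty(n + 1)
        q_node[1:-1] = 0.5 * (q_elem[:-1] + q_elem[1:])   # patch averaging
        q_node[0], q_node[-1] = q_elem[0], q_elem[-1]     # one-sided at boundaries
        q_rec = 0.5 * (q_node[:-1] + q_node[1:])          # recovered element values
        return np.abs(q_rec - q_elem)                     # local error indicator

    q = np.array([1.0, 1.1, 1.5, 0.7, 0.8])
    print(recovery_error_indicator(q))

Elements where the indicator is large are candidates for refinement, which is exactly how such estimators feed an adaptive solver scheme.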
APA, Harvard, Vancouver, ISO, and other styles
2

Menshikova, M. "Uncertainty estimation using the moments method facilitated by automatic differentiation in Matlab." Thesis, Department of Engineering Systems and Management, 2010. http://hdl.handle.net/1826/4328.

Full text
Abstract:
Computational models have long been used to predict the performance of some baseline design given its design parameters. Given inconsistencies in manufacturing, the manufactured product always deviates from the baseline design. There is currently much interest in both evaluating the effects of variability in design parameters on a design's performance (uncertainty estimation), and robust optimization of the baseline design such that near optimal performance is obtained despite variability in design parameters. Traditionally, uncertainty analysis is performed by expensive Monte Carlo methods. This work considers the alternative moments method for uncertainty propagation and its implementation in Matlab. In computational design it is assumed a computational model gives a sufficiently accurate approximation to a design's performance. As such it can be used for estimating statistical moments (expectation, variance, etc.) of the design due to known statistical variation of the model's parameters, e.g., by the Monte Carlo approach. In the moments method we further assume the model is sufficiently differentiable that a Taylor series approximation to the model may be constructed, and the moments of the Taylor series may be taken analytically to yield approximations to the model's moments. In this thesis we generalise techniques considered within the engineering community, and design and document associated software to generate arbitrary order Taylor series approximations to arbitrary order statistical moments of computational models implemented in Matlab; Taylor series coefficients are calculated using automatic differentiation. This approach is found to be more efficient than a standard Monte Carlo method for the small-scale model test problems we consider. Previously, Christianson and Cox (2005) indicated that the moments method will be non-convergent in the presence of complex poles of the computational model and suggested a partitioning method to overcome this problem. We implement a version of the partitioning method and demonstrate that it does result in convergence of the moments method. Additionally, we consider what we term the branch detection problem, in order to ascertain if our Taylor series approximation might only be valid piecewise.
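The moments method itself is compact. A minimal first/second-order sketch (illustrative Python rather than the thesis's arbitrary-order Matlab software): for x ~ (μ, σ²), use E[f(x)] ≈ f(μ) + ½f''(μ)σ² and Var[f(x)] ≈ f'(μ)²σ², with f'(μ) obtained by forward-mode automatic differentiation on dual numbers and f'' by differencing f'.

    import math

    class Dual:
        """Minimal forward-mode AD: carries (value, first derivative)."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
        __rmul__ = __mul__

    def exp(x):
        if isinstance(x, Dual):
            return Dual(math.exp(x.val), math.exp(x.val) * x.dot)
        return math.exp(x)

    def fprime(f, x):                      # f'(x) from one AD sweep
        return f(Dual(x, 1.0)).dot

    def taylor_moments(f, mu, sigma, h=1e-4):
        d1 = fprime(f, mu)                                      # f'(mu) via AD
        d2 = (fprime(f, mu + h) - fprime(f, mu - h)) / (2 * h)  # f'' by differencing f'
        mean = f(Dual(mu)).val + 0.5 * d2 * sigma**2            # second-order mean
        var = d1**2 * sigma**2                                  # first-order variance
        return mean, var

    f = lambda x: exp(x) * x               # example model
    print(taylor_moments(f, mu=1.0, sigma=0.1))

The same Taylor-series logic extends to higher orders and higher moments, which is where automatic differentiation pays off over hand-coded derivatives.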
APA, Harvard, Vancouver, ISO, and other styles
3

Ginos, Brenda Faith. "Parameter Estimation for the Lognormal Distribution." Diss., Brigham Young University, 2009. http://contentdm.lib.byu.edu/ETD/image/etd3205.pdf.

Full text
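For context, the method of moments named in the topic has a closed form for the lognormal (a textbook result shown for illustration, not material from this dissertation): with E[X] = exp(μ + σ²/2) and Var[X]/E[X]² = exp(σ²) − 1, the sample mean m and variance s² invert directly.

    import numpy as np

    def lognormal_mom(x):
        m, s2 = np.mean(x), np.var(x)
        sigma2 = np.log(1.0 + s2 / m**2)   # from Var/Mean^2 = exp(sigma^2) - 1
        mu = np.log(m) - 0.5 * sigma2      # from E[X] = exp(mu + sigma^2/2)
        return mu, np.sqrt(sigma2)

    rng = np.random.default_rng(0)
    print(lognormal_mom(rng.lognormal(mean=0.3, sigma=0.5, size=100_000)))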
APA, Harvard, Vancouver, ISO, and other styles
4

Owen, Claire Elayne Bangerter. "Parameter Estimation for the Beta Distribution." Diss., Brigham Young University, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2670.pdf.

Full text
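The beta distribution admits an equally short moment fit (again a textbook illustration, not this dissertation's content): with mean m = α/(α+β) and variance v = αβ/((α+β)²(α+β+1)), one solves for α and β.

    import numpy as np

    def beta_mom(x):
        m, v = np.mean(x), np.var(x)
        c = m * (1.0 - m) / v - 1.0        # common factor alpha + beta
        return m * c, (1.0 - m) * c        # (alpha_hat, beta_hat)

    rng = np.random.default_rng(1)
    print(beta_mom(rng.beta(2.0, 5.0, size=100_000)))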
APA, Harvard, Vancouver, ISO, and other styles
5

Cunha, João Marco Braga da. "Estimating Artificial Neural Networks with Generalized Method of Moments." Pontifícia Universidade Católica do Rio de Janeiro, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=26922@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
Artificial Neural Networks (ANN) started being developed in the 1940s. However, it was during the 1980s that ANNs became relevant, pushed by the popularization and increasing power of computers. Also in the 1980s, there were two other academic events closely related to the present work: (i) a large increase of interest in nonlinear models from econometricians, culminating in the econometric approaches for ANN by the end of that decade; and (ii) the introduction of the Generalized Method of Moments (GMM) for parameter estimation in 1982. In econometric approaches for ANNs, estimation by Quasi Maximum Likelihood (QML) always prevailed. Despite its good asymptotic properties, QML is very prone to an issue in finite sample estimations known as overfitting. This thesis expands the state of the art in econometric approaches for ANNs by presenting an alternative to QML estimation that keeps its good asymptotic properties and is less prone to overfitting. The presented approach relies on GMM estimation. As a byproduct, GMM estimation allows the use of the so-called J test to verify the existence of neglected nonlinearity. The Monte Carlo studies performed indicate that the estimates from GMM are more accurate than those generated by QML in situations with high noise, especially in small samples. This result supports the hypothesis that GMM is less susceptible to overfitting. Exchange rate forecasting experiments reinforced these findings. A second Monte Carlo study revealed satisfactory finite sample properties of the J test applied to neglected nonlinearity, compared with a widely known and used reference test. Overall, the results indicate that estimation by GMM is a recommendable alternative, especially for data with a high noise level.
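The GMM estimator invoked here follows a generic recipe: stack moment conditions, average them over the sample, and minimize a quadratic form in the averaged moments. A minimal sketch (a toy just-identified example, not the thesis's ANN moment conditions):

    import numpy as np
    from scipy.optimize import minimize

    def gmm_objective(theta, x, W):
        mu, sig2 = theta
        g = np.column_stack([x - mu, (x - mu) ** 2 - sig2])  # moment functions
        gbar = g.mean(axis=0)                                # sample moments
        return gbar @ W @ gbar

    rng = np.random.default_rng(0)
    x = rng.normal(2.0, 1.5, size=5_000)
    W = np.eye(2)                                            # one-step identity weighting
    res = minimize(gmm_objective, x0=[0.0, 1.0], args=(x, W), method="Nelder-Mead")
    print(res.x)                                             # close to (2.0, 2.25)

With more moment conditions than parameters, the minimized objective times the sample size yields Hansen's J statistic for the over-identifying restrictions, the basis of the J test mentioned above.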
APA, Harvard, Vancouver, ISO, and other styles
6

Pant, Mohan Dev. "Simulating Univariate and Multivariate Burr Type III and Type XII Distributions Through the Method of L-Moments." OpenSIUC, 2011. https://opensiuc.lib.siu.edu/dissertations/401.

Full text
Abstract:
The Burr families (Type III and Type XII) of distributions are traditionally used in the context of statistical modeling and for simulating non-normal distributions with moment-based parameters (e.g., Skew and Kurtosis). In educational and psychological studies, the Burr families of distributions can be used to simulate extremely asymmetrical and heavy-tailed non-normal distributions. Conventional moment-based estimators (i.e., the mean, variance, skew, and kurtosis) are traditionally used to characterize the distribution of a random variable or in the context of fitting data. However, conventional moment-based estimators can (a) be substantially biased, (b) have high variance, or (c) be influenced by outliers. In view of these concerns, a characterization of the Burr Type III and Type XII distributions through the method of L-moments is introduced. Specifically, systems of equations are derived for determining the shape parameters associated with user specified L-moment ratios (e.g., L-Skew and L-Kurtosis). A procedure is also developed for the purpose of generating non-normal Burr Type III and Type XII distributions with arbitrary L-correlation matrices. Numerical examples are provided to demonstrate that L-moment based Burr distributions are superior to their conventional moment based counterparts in the context of estimation, distribution fitting, and robustness to outliers. Monte Carlo simulation results are provided to demonstrate that L-moment-based estimators are nearly unbiased, have relatively small variance, and are robust in the presence of outliers for any sample size. Simulation results are also provided to show that the methodology used for generating correlated non-normal Burr Type III and Type XII distributions is valid and efficient. Specifically, Monte Carlo simulation results are provided to show that the empirical values of L-correlations among simulated Burr Type III (and Type XII) distributions are in close agreement with the specified L-correlation matrices.
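For reference, the sample L-moments that drive such fitting can be computed from the order statistics via probability-weighted moments. A short sketch of the standard formulas (not the dissertation's Burr-specific systems of equations):

    import numpy as np
    from math import comb

    def sample_l_moments(x):
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        # Unbiased probability-weighted moments: b_r = mean of C(i,r)/C(n-1,r) * x_(i)
        b = [np.mean([comb(i, r) / comb(n - 1, r) * x[i] for i in range(n)])
             for r in range(4)]
        l1 = b[0]
        l2 = 2 * b[1] - b[0]
        l3 = 6 * b[2] - 6 * b[1] + b[0]
        l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
        return l1, l2, l3 / l2, l4 / l2    # mean, L-scale, L-skew, L-kurtosis

    rng = np.random.default_rng(0)
    print(sample_l_moments(rng.exponential(2.0, size=50_000)))

For an exponential sample the printout should be close to the theoretical values: L-skew 1/3 and L-kurtosis 1/6.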
APA, Harvard, Vancouver, ISO, and other styles
7

Ragusa, Giuseppe. "Essays on moment conditions models econometrics." Diss., University of California, San Diego, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3170252.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Katyal, Bhavana. "Multiple current dipole estimation in a realistic head model using signal subspace methods." Thesis, Washington State University, 2004. http://www.dissertations.wsu.edu/Thesis/Summer2004/b%5Fkatyal%5F072904.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Babichev, Dmitry. "On efficient methods for high-dimensional statistical estimation." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE032.

Full text
Abstract:
In this thesis we consider several aspects of parameter estimation for statistics and machine learning, as well as optimization techniques applicable to these problems. The goal of parameter estimation is to find the unknown hidden parameters which govern the data, for example parameters of an unknown probability density. The construction of estimators through optimization problems is only one side of the coin; finding the optimal value of the parameter is often itself an optimization problem that needs to be solved using various optimization techniques. Fortunately, these optimization problems are convex for a wide class of problems, and we can exploit their structure to get fast convergence rates. The first main contribution of the thesis is to develop moment-matching techniques for multi-index non-linear regression problems. We consider the classical non-linear regression problem, which is infeasible in high dimensions due to the curse of dimensionality. We combine two existing techniques, ADE and SIR, to develop a hybrid method without some of the weak sides of its parents. In the second main contribution we use a special type of averaging for stochastic gradient descent. We consider conditional exponential families (such as logistic regression), where the goal is to find the unknown value of the parameter. Classical approaches, such as SGD with constant step size, are known to converge only to some neighborhood of the optimal value of the parameter, even with averaging. We propose the averaging of moment parameters, which we call prediction functions. For finite-dimensional models this type of averaging can lead to negative error, i.e., this approach provides us with an estimator better than any linear estimator can ever achieve. The third main contribution of this thesis deals with Fenchel-Young losses. We consider multi-class linear classifiers with losses of a certain type, such that their dual conjugate has a direct product of simplices as support. We show that for multi-class SVM losses, with smart matrix-multiplication sampling techniques, our approach has a sublinear iteration complexity, i.e., we need to pay only three times O(n+d+k): for the number of classes k, the number of features d and the number of samples n, whereas all existing techniques have higher complexity.
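A minimal sketch of the prediction-function averaging idea for logistic regression (a schematic reconstruction from this abstract, with illustrative step size and data; not the thesis's algorithm or its guarantees): run constant-step SGD and average the moment parameters σ(x'θ_t) instead of the iterates θ_t.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sgd_prediction_averaging(X, y, x_query, step=0.5, epochs=5, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        theta = np.zeros(d)
        pred_sum, t = 0.0, 0
        for _ in range(epochs):
            for i in rng.permutation(n):
                grad = (sigmoid(X[i] @ theta) - y[i]) * X[i]  # logistic-loss gradient
                theta -= step * grad                          # constant step size
                pred_sum += sigmoid(x_query @ theta)          # average the prediction,
                t += 1                                        # not the parameters
        return pred_sum / t

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 3))
    y = (rng.random(1000) < sigmoid(X @ np.array([1.0, -2.0, 0.5]))).astype(float)
    print(sgd_prediction_averaging(X, y, x_query=np.array([0.5, 0.5, 0.5])))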
APA, Harvard, Vancouver, ISO, and other styles
10

Menichini, Amilcar Armando. "Financial Frictions and Capital Structure Choice: A Structural Dynamic Estimation." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/145397.

Full text
Abstract:
This thesis studies different aspects of firm decisions by using a dynamic model. I estimate a dynamic model of the firm based on the trade-off theory of capital structure that endogenizes investment, leverage, and payout decisions. For the estimation of the model I use the Efficient Method of Moments (EMM), which allows me to recover the structural parameters that best replicate the characteristics of the data. I start by analyzing the question of whether target leverage varies over time. While this is a central issue in finance, there is no consensus in the literature on this point. I propose an explanation that reconciles some of the seemingly contradictory empirical evidence. The dynamic model generates a target leverage that changes over time and consistently reproduces the results of Lemmon, Roberts, and Zender (2008). These findings suggest that the time-varying target leverage assumption of the bulk of the previous literature is not incompatible with the evidence presented by Lemmon, Roberts, and Zender (2008). Then I study how corporate income tax payments vary with the corporate income tax rate. The dynamic model produces a bell-shaped relationship between tax revenue and the tax rate that is consistent with the notion of the Laffer curve. The dynamic model generates the maximum tax revenue for a tax rate between 36% and 41%. Finally, I investigate the impact of financial constraints on investment decisions by firms. Model results show that investment-cash flow sensitivity is higher for less financially constrained firms. This result is consistent with Kaplan and Zingales (1997). The dynamic model also rationalizes why large and mature firms have a positive and significant investment-cash flow sensitivity.
APA, Harvard, Vancouver, ISO, and other styles
11

Augustine-Ohwo, Odaro. "Estimating break points in linear models : a GMM approach." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/estimating-break-points-in-linear-models-a-gmm-approach(804d83e3-dad8-4cda-b1e1-fbfce7ef41b8).html.

Full text
Abstract:
In estimating econometric time series models, it is assumed that the parameters remain constant over the period examined. This assumption may not always be valid when using data which span an extended period, as the underlying relationships between the variables in these models are exposed to various exogenous shifts. It is therefore imperative to examine the stability of models, as failure to identify any changes could result in wrong predictions or inappropriate policy recommendations. This research proposes a method of estimating the location of break points in linear econometric models with endogenous regressors, estimated using the Generalised Method of Moments (GMM). The proposed estimation method is based on Wald, Lagrange Multiplier and Difference type test statistics of parameter variation. In this study, the equation which sets out the relationship between the endogenous regressor and the instruments is referred to as the Jacobian Equation (JE). The thesis is presented along two main categories: Stable JE and Unstable JE. Under the Stable JE, models with a single break and with multiple breaks in the Structural Equation (SE) are examined. The break fraction estimators obtained are shown to be consistent for the true break fraction in the model. Additionally, using the fixed break approach, their T-convergence rates are established. Monte Carlo simulations which support the asymptotic properties are presented. Two main types of Unstable JE models are considered: a model with a single break only in the JE, and another with a break in both the JE and SE. The asymptotic properties of the estimators obtained from these models are intractable under the fixed break approach, hence the thesis provides essential steps towards establishing the properties using the shrinking breaks approach. Nonetheless, a series of Monte Carlo simulations conducted provide strong support for the consistency of the break fraction estimators under the Unstable JE. A combined procedure for testing and estimating significant break points is detailed in the thesis. This method yields a consistent estimator of the true number of breaks in the model, as well as their locations. Lastly, an empirical application of the proposed methodology is presented using the New Keynesian Phillips Curve (NKPC) model for U.S. data. A previous study has found this NKPC model to be unstable, having two endogenous regressors with an Unstable JE. Using the combined testing and estimation approach, similar break points were estimated at 1975:2 and 1981:1. Therefore, using the GMM estimation approach proposed in this study, the presence of a Stable or Unstable JE does not affect estimation of breaks in the SE. A researcher can focus directly on estimating potential break points in the SE without having to pre-estimate the breaks in the JE, as is currently done using Two Stage Least Squares.
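The search underlying such break-point estimators is easy to sketch. The toy version below uses OLS with exogenous regressors and a single break (the thesis's GMM treatment of endogenous regressors and the Jacobian Equation is substantially more involved): scan candidate break points, compute a Wald/Chow-type statistic at each, and take the maximizer.

    import numpy as np

    def estimate_break(y, X, trim=0.15):
        """Index maximizing a Chow/Wald-type statistic for one break."""
        n, p = X.shape
        def ssr(yy, XX):
            beta = np.linalg.lstsq(XX, yy, rcond=None)[0]
            e = yy - XX @ beta
            return e @ e
        ssr_full = ssr(y, X)
        stats = {}
        for k in range(int(trim * n), int((1 - trim) * n)):
            ssr_split = ssr(y[:k], X[:k]) + ssr(y[k:], X[k:])
            stats[k] = (n - 2 * p) * (ssr_full - ssr_split) / ssr_split
        return max(stats, key=stats.get)

    rng = np.random.default_rng(0)
    n = 200
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    beta1, beta2 = np.array([0.0, 1.0]), np.array([1.0, 2.0])
    y = np.where(np.arange(n) < 120, X @ beta1, X @ beta2) + rng.normal(scale=0.5, size=n)
    print(estimate_break(y, X))           # close to the true break at 120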
APA, Harvard, Vancouver, ISO, and other styles
12

Sevilla, David. "Computerized method for finding the ideal patient-specific location to place an equivalent electric dipole to derive an estimation of the electrical activity of the heart." ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Saeed, Usman. "Adaptive numerical techniques for the solution of electromagnetic integral equations." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41173.

Full text
Abstract:
Various error estimation and adaptive refinement techniques for the solution of electromagnetic integral equations were developed. Residual-based error estimators and h-refinement implementations were carried out for the Method of Moments (MoM) solution of electromagnetic integral equations for a number of different problems. Due to the high computational cost associated with the MoM, a cheaper solution technique known as the Locally-Corrected Nyström (LCN) method was explored. Several explicit and implicit techniques for error estimation in the LCN solution of electromagnetic integral equations were proposed and implemented for different geometries to successfully identify high-error regions. A simple p-refinement algorithm was developed and implemented for a number of prototype problems using the proposed estimators. Numerical error was found to be significantly reduced in the high-error regions after the refinement. A simple computational cost analysis was also presented for the proposed error estimation schemes. Various cost-accuracy trade-offs and problem-specific limitations of different techniques for error estimation were discussed. Finally, an important problem of slope mismatch between the global error rates of the solution and the residual was identified. A few methods to compensate for that mismatch using scale factors based on matrix norms were developed.
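The adaptive loop that such estimators enable has a standard outline. A generic sketch (solver-agnostic; solve, estimate_error and refine are user-supplied callbacks, and the 20% marking rule is an assumption):

    def adaptive_solve(mesh, solve, estimate_error, refine, tol=1e-3, max_iter=10):
        """Generic estimate -> mark -> refine loop."""
        for _ in range(max_iter):
            solution = solve(mesh)
            eta = estimate_error(mesh, solution)    # one indicator per element
            if max(eta) < tol:
                break
            # Mark the worst ~20% of elements for refinement.
            cutoff = sorted(eta, reverse=True)[max(1, len(eta) // 5) - 1]
            marked = [i for i, e in enumerate(eta) if e >= cutoff]
            mesh = refine(mesh, marked)
        return mesh, solution

Each pass solves, estimates per-element error, refines the worst elements, and stops once the indicator drops below tolerance.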
APA, Harvard, Vancouver, ISO, and other styles
14

Tao, Ji. "Spatial econometrics: models, methods and applications." Ph.D. diss., Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1118957992.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains x, 140 p. Includes bibliographical references (p. 137-140). Available online via OhioLINK's ETD Center
APA, Harvard, Vancouver, ISO, and other styles
15

Dürr, Robert [author], Achim Kienle and Dominique Thévenin [academic supervisors]. "Parameter estimation and method of moments for multi dimensional population balance equations with application to vaccine production processes." Magdeburg: Universitätsbibliothek, 2016. http://d-nb.info/1123631476/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Ruzibuka, John S. "The impact of fiscal deficits on economic growth in developing countries : Empirical evidence and policy implications." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/16282.

Full text
Abstract:
This study examines the impact of fiscal deficits on economic growth in developing countries. Based on deduction from the relevant theoretical and empirical literature, the study tests the following hypotheses regarding the impact of fiscal deficits on economic growth. First, fiscal deficits have a significant positive or negative impact on economic growth in developing countries. Second, the impact of fiscal deficits on economic growth depends on the size of deficits as a percentage of GDP – that is, there is a non-linear relationship between fiscal deficits and economic growth. Third, the impact of fiscal deficits on economic growth depends on the ways in which deficits are financed. Fourth, the impact of fiscal deficits on economic growth depends on what deficit financing is used for. The study also examines whether there are any significant regional differences in terms of the relationship between fiscal deficits and economic growth in developing countries. The study uses panel data for thirty-one developing countries covering the period 1972-2001, which is analysed based on the econometric estimation of a dynamic growth model using the Arellano and Bond (1991) generalised method of moments (GMM) technique. Overall, the results suggest the following. First, fiscal deficits per se have no significant positive or negative impact on economic growth. Second, by contrast, when the deficit is substituted by domestic and foreign financing, we find that both domestic and foreign financing of fiscal deficits exert a negative and statistically significant impact on economic growth with a lag. Third, we find that both categories of economic classification of government expenditure, namely capital and current expenditure, have no significant impact on economic growth. When government expenditure is disaggregated on the basis of a functional classification, the results suggest that spending on education, defence and economic services has a positive but insignificant impact on growth, while spending on health and general public services has a positive and significant impact. Fourth, in terms of regional differences with regard to the estimated relationships, the study finds that, while there are some regional differences between the four regions represented in our sample of thirty-one developing countries – namely, Asia and the Pacific, Latin America and the Caribbean, Middle East and North Africa, and Sub-Saharan Africa – these differences are not statistically significant. On the basis of these findings, the study concludes that fiscal deficits per se are not necessarily good or bad for economic growth in developing countries; what matters is how the deficits are financed and what they are used for. In addition, the study concludes that there are no statistically significant regional differences in terms of the relationship between fiscal deficits and economic growth in developing countries.
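The panel GMM estimator named here rests on simple moment conditions. A sketch of the just-identified special case (the Anderson-Hsiao instrument E[y_{i,t-2} Δε_it] = 0 for an AR(1) panel with fixed effects; Arellano-Bond extends this with all available lags and an optimal weighting matrix):

    import numpy as np

    def anderson_hsiao(y):
        """y: (N, T) panel. Estimate rho in y_it = rho*y_i,t-1 + eta_i + eps_it
        from the differenced model, instrumenting dy_i,t-1 with y_i,t-2."""
        dy = np.diff(y, axis=1)
        dep = dy[:, 2:].ravel()            # dy_it
        lag = dy[:, 1:-1].ravel()          # dy_i,t-1
        z = y[:, 1:-2].ravel()             # instrument y_i,t-2
        return (z @ dep) / (z @ lag)       # just-identified IV estimate

    rng = np.random.default_rng(0)
    N, T, rho = 500, 8, 0.5
    eta = rng.normal(size=N)
    y = np.zeros((N, T))
    y[:, 0] = eta + rng.normal(size=N)
    for t in range(1, T):
        y[:, t] = rho * y[:, t - 1] + eta + rng.normal(size=N)
    print(anderson_hsiao(y))               # roughly 0.5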
APA, Harvard, Vancouver, ISO, and other styles
17

Hattaway, James T. "Parameter Estimation and Hypothesis Testing for the Truncated Normal Distribution with Applications to Introductory Statistics Grades." Diss., Brigham Young University, 2010. http://contentdm.lib.byu.edu/ETD/image/etd3412.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Liu, Xiaodong. "Econometrics on interactions-based models: methods and applications." Columbus, Ohio: Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180283230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Ruzibuka, John Shofel. "The impact of fiscal deficits on economic growth in developing countries : empirical evidence and policy implications." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/16282.

Full text
Abstract:
This study examines the impact of fiscal deficits on economic growth in developing countries. Based on deduction from the relevant theoretical and empirical literature, the study tests the following hypotheses regarding the impact of fiscal deficits on economic growth. First, fiscal deficits have a significant positive or negative impact on economic growth in developing countries. Second, the impact of fiscal deficits on economic growth depends on the size of deficits as a percentage of GDP - that is, there is a non-linear relationship between fiscal deficits and economic growth. Third, the impact of fiscal deficits on economic growth depends on the ways in which deficits are financed. Fourth, the impact of fiscal deficits on economic growth depends on what deficit financing is used for. The study also examines whether there are any significant regional differences in terms of the relationship between fiscal deficits and economic growth in developing countries. The study uses panel data for thirty-one developing countries covering the period 1972-2001, which is analysed based on the econometric estimation of a dynamic growth model using the Arellano and Bond (1991) generalised method of moments (GMM) technique. Overall, the results suggest the following. First, fiscal deficits per se have no significant positive or negative impact on economic growth. Second, by contrast, when the deficit is substituted by domestic and foreign financing, we find that both domestic and foreign financing of fiscal deficits exert a negative and statistically significant impact on economic growth with a lag. Third, we find that both categories of economic classification of government expenditure, namely capital and current expenditure, have no significant impact on economic growth. When government expenditure is disaggregated on the basis of a functional classification, the results suggest that spending on education, defence and economic services has a positive but insignificant impact on growth, while spending on health and general public services has a positive and significant impact. Fourth, in terms of regional differences with regard to the estimated relationships, the study finds that, while there are some regional differences between the four regions represented in our sample of thirty-one developing countries - namely, Asia and the Pacific, Latin America and the Caribbean, Middle East and North Africa, and Sub-Saharan Africa - these differences are not statistically significant. On the basis of these findings, the study concludes that fiscal deficits per se are not necessarily good or bad for economic growth in developing countries; what matters is how the deficits are financed and what they are used for. In addition, the study concludes that there are no statistically significant regional differences in terms of the relationship between fiscal deficits and economic growth in developing countries.
APA, Harvard, Vancouver, ISO, and other styles
20

Burk, David Morris. "Estimating the Effect of Disability on Medicare Expenditures." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/2127.

Full text
Abstract:
We consider the effect of disability status on Medicare expenditures. The disabled elderly have historically accounted for a significant portion of Medicare expenditures. Recent demographic trends exhibit a decline in the size of this population, causing some observers to predict declines in Medicare expenditures. There are, however, reasons to be suspicious of this rosy forecast. To better understand the effect of disability on Medicare expenditures, we develop and estimate a model using the generalized method of moments technique. We find that the newly disabled elderly generally spend more than those who have been disabled for longer periods of time. Also, we find that expenditures have risen much more quickly for those disabled Medicare beneficiaries who were at the higher end of the expenditure distribution before the increases.
APA, Harvard, Vancouver, ISO, and other styles
21

Sanjab, Anibal Jean. "Statistical Analysis of Electric Energy Markets with Large-Scale Renewable Generation Using Point Estimate Methods." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/74356.

Full text
Abstract:
The restructuring of the electric energy market and the proliferation of intermittent renewable-energy based power generation have introduced serious challenges to power system operation, emanating from the uncertainties introduced to the system variables (electricity prices, congestion levels, etc.). In order to economically operate the system and efficiently run the energy market, a statistical analysis of the system variables under uncertainty is needed. Such statistical analysis can be performed through an estimation of the statistical moments of these variables. In this thesis, the Point Estimate Methods (PEMs) are applied to the optimal power flow (OPF) problem to estimate the statistical moments of the locational marginal prices (LMPs) and total generation cost under system uncertainty. An extensive mathematical examination and risk analysis of existing PEMs are performed and a new PEM scheme is introduced. The applied PEMs consist of two schemes introduced by H. P. Hong, namely, the 2n and 2n+1 schemes, and a proposed combination of Hong's and M. E. Harr's schemes. The accuracy of the applied PEMs in estimating the statistical moments of system LMPs is illustrated and the performance of the suggested combination of Harr's and Hong's PEMs is shown. Moreover, the risks of applying Hong's 2n scheme to the OPF problem are discussed by showing that it can potentially yield inaccurate LMP estimates or run into infeasibility of the OPF problem. In addition, a new PEM configuration is introduced. This configuration is derived from a PEM introduced by E. Rosenblueth. It can accommodate asymmetry and correlation of input random variables in a more computationally efficient manner than its Rosenblueth counterpart.
Master of Science
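Hong's 2n+1 scheme is short enough to state in full. A sketch from the published formulas (independent inputs assumed; λ3 and λ4 denote each input's skewness and kurtosis):

    import numpy as np

    def hong_2n_plus_1(h, mu, sigma, lam3, lam4):
        """Mean and std of Y = h(X) for independent inputs X_k.
        mu, sigma, lam3, lam4: 1D arrays of means, std devs, skewness, kurtosis."""
        n = len(mu)
        m1 = m2 = 0.0
        w0 = 1.0
        for k in range(n):
            root = np.sqrt(lam4[k] - 0.75 * lam3[k] ** 2)
            for sign in (+1.0, -1.0):
                xi = lam3[k] / 2.0 + sign * root       # standardized location
                w = sign / (xi * 2.0 * root)           # concentration weight
                x = mu.copy()
                x[k] = mu[k] + xi * sigma[k]
                yk = h(x)
                m1 += w * yk
                m2 += w * yk * yk
            w0 -= 1.0 / (lam4[k] - lam3[k] ** 2)
        y0 = h(mu)                                     # shared central point
        m1 += w0 * y0
        m2 += w0 * y0 * y0
        return m1, np.sqrt(m2 - m1**2)

    # Example: Y = X1^2 + 2*X2 with standard normal inputs (lam3 = 0, lam4 = 3)
    mu, sigma = np.zeros(2), np.ones(2)
    lam3, lam4 = np.zeros(2), 3.0 * np.ones(2)
    print(hong_2n_plus_1(lambda x: x[0] ** 2 + 2 * x[1], mu, sigma, lam3, lam4))

For Gaussian inputs the locations reduce to μ ± √3σ with weights 1/6, the familiar three-point Gauss-Hermite rule, which is a useful sanity check.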
APA, Harvard, Vancouver, ISO, and other styles
22

Munasib, Abdul B. A. "Lifecycle of social networks: A dynamic analysis of social capital accumulation." The Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=osu1121441394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Nguyen, Ngoc B. "Estimation of Technical Efficiency in Stochastic Frontier Analysis." Bowling Green State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1275444079.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

De la Rey, Tanja. "Two statistical problems related to credit scoring." Thesis, North-West University, 2007. http://hdl.handle.net/10394/3689.

Full text
Abstract:
This thesis focuses on two statistical problems related to credit scoring. In credit scoring of individuals, two classes are distinguished, namely low and high risk individuals (the so-called "good" and "bad" risk classes). Firstly, we suggest a measure which may be used to study the nature of a classifier for distinguishing between the two risk classes. Secondly, we derive a new method, DOUW (detecting outliers using weights), which may be used to fit logistic regression models robustly and to detect outliers. In the first problem, the focus is on a measure which may be used to study the nature of a classifier. This measure transforms a random variable so that it has the same distribution as another random variable. Assuming a linear form of this measure, three methods for estimating the parameters (slope and intercept) and for constructing confidence bands are developed and compared by means of a Monte Carlo study. The application of these estimators is illustrated on a number of datasets. We also construct statistical hypothesis tests for this linearity assumption. In the second problem, the focus is on providing a robust logistic regression fit and the identification of outliers. It is well known that maximum likelihood estimators of logistic regression parameters are adversely affected by outliers. We propose a robust approach that also serves as an outlier detection procedure, called DOUW. The approach is based on associating high and low weights with the observations as a result of the likelihood maximization. It turns out that the outliers are those observations to which low weights are assigned. This procedure depends on two tuning constants. A simulation study is presented to show the effects of these constants on the performance of the proposed methodology. The results are presented in terms of four benchmark datasets as well as a large new dataset from the application area of retail marketing campaign analysis. In the last chapter we apply the techniques developed in this thesis to a practical credit scoring dataset. We show that the DOUW method improves the classifier performance and that the measure developed to study the nature of a classifier is useful in a credit scoring context and may be used for assessing whether the distribution of the good and the bad risk individuals is from the same translation-scale family.
Thesis (Ph.D. (Risk Analysis))--North-West University, Potchefstroom Campus, 2008.
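The weight-based idea can be illustrated generically (a plain Huber-type reweighting scheme for exposition only; DOUW's weights arise from the likelihood maximization itself and depend on two tuning constants, as described above):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def robust_logistic(X, y, c=2.0, iters=200, step=0.5):
        """Weighted logistic fit: observations with large Pearson residuals
        are downweighted; low final weights flag potential outliers."""
        n, d = X.shape
        beta, w = np.zeros(d), np.ones(n)
        for _ in range(iters):
            p = sigmoid(X @ beta)
            r = (y - p) / np.sqrt(p * (1 - p) + 1e-12)        # Pearson residuals
            w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))  # Huber-type weights
            beta += step * X.T @ (w * (y - p)) / n            # weighted score ascent
        return beta, w

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(200), rng.normal(size=200)])
    y = (rng.random(200) < sigmoid(X @ np.array([-0.5, 2.0]))).astype(float)
    y[:5] = 1 - y[:5]                       # contaminate a few labels
    beta, w = robust_logistic(X, y)
    print(beta, np.argsort(w)[:5])          # lowest-weight (flagged) observations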
APA, Harvard, Vancouver, ISO, and other styles
25

Kelbick, Nicole DePriest. "Detecting underlying emotional sensitivity in bereaved children via a multivariate normal mixture distribution." Columbus, Ohio : Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc%5fnum=osu1064331329.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2003.
Title from first page of PDF file. Document formatted into pages; contains xiv, 122 p.; also contains graphics. Includes abstract and vita. Advisor: Joseph, Dept. of Statistics. Includes bibliographical references (p. 119-122).
APA, Harvard, Vancouver, ISO, and other styles
26

Badinger, Harald, and Peter Egger. "Spacey Parents and Spacey Hosts in FDI." WU Vienna University of Economics and Business, 2013. http://epub.wu.ac.at/3924/2/wp154.pdf.

Full text
Abstract:
Empirical trade economists have found that shocks on foreign direct investment (FDI) of some parent country in a host country affect the same parent country's FDI in other hosts (interdependent hosts). Independent of this, there is evidence that shocks on a parent country's FDI in some host economy affect other parent countries' FDI in the same host (interdependent parents). In general equilibrium, shocks on FDI between any country pair will affect all country-pairs' FDI in the world, including either of the two countries in a pair as well as third countries (interdependent third countries). No attempt has been made so far to allow simultaneously for all three modes of interdependence of FDI. Using cross-sectional data on FDI among 22 OECD countries in 2000, we employ a spatial feasible generalized two-stage least squares and generalized moments estimation framework to allow for all three modes of interdependence across all parent and host countries, thereby distinguishing between market-size-related and remainder interdependence. Our results highlight the complexity of multinational enterprises' investment strategies and the interconnectedness of the world investment system (authors' abstract).
Series: Department of Economics Working Paper Series
APA, Harvard, Vancouver, ISO, and other styles
27

Tamagnini, Filippo. "EKF based State Estimation in a CFI Copolymerization Reactor including Polymer Quality Information." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20235/.

Full text
Abstract:
State estimation is an integral part of modern control techniques, as it allows the state of complex plants to be characterized based on a limited number of measurements and knowledge of the process model. The benefit is twofold: on one hand it has the potential to rationalize the number of measurements required to monitor the plant, thus reducing costs; on the other hand it enables information to be extracted about variables that have an effect on the system but would otherwise be inaccessible to direct measurement. The scope of this thesis is to design a state estimator for a tubular copolymerization reactor, with the aim of providing the full state information of the plant and characterizing the quality of the product. Due to the fact that, with the existing set of measurements, only a small number of state variables can be observed, a new differential pressure sensor is installed in the plant to provide the missing information, and a model for the pressure measurement is developed. Next, the state estimation problem is approached rigorously, and a comprehensive method for analyzing, tuning and implementing the state estimator is assembled from the scientific literature, using a variety of tools from graph theory, linear observability theory and matrix algebra. Data reduction and visualization techniques are also employed to make sense of high-dimensional information. The proposed method is then tested in simulations to assess the effect of the tuning parameters and of the measurement set on the estimator's performance during initialization and in the case of estimation with plant-model mismatch. Finally, the state estimator is tested with plant data.
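For orientation, the EKF recursion at the core of such an estimator fits in a dozen lines (a generic sketch with user-supplied model functions and Jacobians, not the reactor model):

    import numpy as np

    def ekf_step(x, P, u, z, f, h, F, H, Q, R):
        """One extended Kalman filter iteration. f, h: process and measurement
        models; F, H: their Jacobians at the current estimate; Q, R: process
        and measurement noise covariances."""
        x_pred = f(x, u)                           # predict
        Fx = F(x, u)
        P_pred = Fx @ P @ Fx.T + Q
        Hx = H(x_pred)                             # update
        S = Hx @ P_pred @ Hx.T + R                 # innovation covariance
        K = P_pred @ Hx.T @ np.linalg.inv(S)       # Kalman gain
        x_new = x_pred + K @ (z - h(x_pred))
        P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
        return x_new, P_new

In the thesis's setting, h would include the new differential-pressure measurement model, and the choices of Q, R and the measurement set are precisely the tuning questions studied.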
APA, Harvard, Vancouver, ISO, and other styles
28

Xu, Xingbai. "Asymptotic Analysis for Nonlinear Spatial and Network Econometric Models." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1461249529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Ren, Kai. "Physics-Based Near-Field Microwave Imaging Algorithms for Dense Layered Media." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1511273574098455.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Araújo, Raphaela Lima Belchior de. "Família composta Poisson-Truncada: propriedades e aplicações." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/16315.

Full text
Abstract:
CAPES
This work analyzes properties of the Compound N family of probability distributions and proposes the Compound Poisson-Truncated sub-family as a means of composing probability distributions. Its properties were studied and a new distribution was investigated: the Compound Poisson-Truncated Normal distribution. This distribution has three parameters and has the flexibility to model multimodal data. We demonstrated that its density is given by an infinite mixture of normal densities in which the weights are given by the Poisson-Truncated probability mass function. Among the explored properties of this distribution are the characteristic function and expressions for the calculation of moments. Three estimation methods were analyzed for the parameters of the Compound Poisson-Truncated Normal distribution, namely, the method of moments, the empirical characteristic function (ECF) and the method of maximum likelihood (ML) via the EM algorithm. Simulations comparing these three methods were performed and, finally, to illustrate the potential of the proposed distribution, numerical results from modeling real data are presented.
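The mixture representation translates directly into code. A sketch assuming the standard compound construction X = Y1 + ... + Y_N with Y_i ~ N(μ, σ²) i.i.d. and N zero-truncated Poisson(λ), so that X | N = n ~ N(nμ, nσ²) (the dissertation's exact parameterization may differ):

    import numpy as np
    from scipy.stats import norm, poisson

    def ctp_normal_pdf(x, lam, mu, sigma, n_max=100):
        """Density of the compound zero-truncated-Poisson sum of normals,
        truncating the infinite mixture at n_max terms."""
        dens = np.zeros_like(np.asarray(x, dtype=float))
        p_pos = 1.0 - np.exp(-lam)                  # P(N >= 1)
        for n in range(1, n_max + 1):
            w = poisson.pmf(n, lam) / p_pos         # zero-truncated Poisson weight
            dens += w * norm.pdf(x, loc=n * mu, scale=np.sqrt(n) * sigma)
        return dens

    x = np.linspace(-5.0, 25.0, 7)
    print(ctp_normal_pdf(x, lam=3.0, mu=2.0, sigma=1.0))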
APA, Harvard, Vancouver, ISO, and other styles
31

Naylor, Guilherme Lima. "O impacto das instituições na renda dos países : uma abordagem dinâmica para dados em painel." Master's thesis, Instituto Superior de Economia e Gestão, 2021. http://hdl.handle.net/10400.5/21704.

Full text
Abstract:
Master's in Applied Econometrics and Forecasting
Differences in income levels between countries have long been studied in economics. Human capital, productivity, institutions and other factors have been taken as determinants of the discrepancies found. This work follows the institutionalist line in seeking to measure and relate how institutions impact the income level of countries. First, it is necessary to briefly review the literature on economic growth models. Subsequently, the concept of institution is delimited and its process of evolution over time is described. This preamble is important because it provides a theoretical basis for the estimated econometric models, which aim to measure the effects of different characteristics of institutions on the income level of countries. The method chosen for the analysis is the estimation of dynamic models, using the Blundell and Bond system Generalized Method of Moments (GMM) estimator.
APA, Harvard, Vancouver, ISO, and other styles
32

Ramalho, Joaquim J. S. "Alternative estimation methods and specification tests for moment condition models." Thesis, University of Bristol, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.393329.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Thekkudan, Dennis. "Multidimensional Methods: Applications in Drug-Enzyme Intrinsic Clearance Determination and Comprehensive Two-Dimensional Liquid Chromatography Peak Volume Determination." VCU Scholars Compass, 2009. http://scholarscompass.vcu.edu/etd/2005.

Full text
Abstract:
The goal of the first project was to evaluate strategies for determining the in vitro intrinsic clearance (CLint) of dextrorphan (DR) as metabolized by the UGT2B7 enzyme to obtain dextrorphan glucuronide (DR-G). A direct injection liquid chromatography-mass spectrometry (LC-MS) method was used to monitor products using the pseudo-first-order (PFO) model. Standard enzymatic incubations were also quantified using LC-MS. These data were fit utilizing both PFO and Michaelis-Menten (MM) models to determine estimates of kinetic parameters. The CLint was determined to be 0.28 (± 0.08) µL/min/mg protein for a baculovirus insect cell-expressed UGT2B7 enzyme. This is the first confirmation that dextrorphan is specifically metabolized by UGT2B7 and the first report of these kinetic parameters. Simulated chromatographic data were used to determine the precision and accuracy in the estimation of peak volumes in comprehensive two-dimensional liquid chromatography (2D-LC). Volumes were determined both by summing the areas in the second dimension chromatograms via the moments method and by fitting the second dimension areas to a Gaussian peak. When only two second dimension signals are substantially above baseline, the accuracy and precision are poor because the solution to the Gaussian fitting algorithm is indeterminate. The fit of a Gaussian peak to the areas of the second dimension peaks is better at predicting the peak volume when there are at least three second dimension injections above the limit of detection. Based on simulations where the sampling interval and sampling phase were varied, we conclude for well-resolved peaks that the optimum precision in peak volumes in 2D separations will be obtained when the sampling ratio is approximately two. This provides an RSD of approximately 2 % for the signal-to-noise (S/N) used in this work. The precision of peak volume estimation for experimental data was also assessed, and RSD values were in the 4-5 % range. We conclude that the poorer precision found in the 2D-LC experimental data as compared to 1D-LC is due to a combination of factors, including variations in the first dimension peak shape related to undersampling and loss in S/N due to the injection of multiple smaller peaks onto the second dimension column.
APA, Harvard, Vancouver, ISO, and other styles
34

Arvidsson, Elina, and Rickard Brunskog. "Methods for Estimating the Magnetic Dipole Moment of Small Objects." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-293890.

Full text
Abstract:
A small satellite can be adversely affected by Earth's magnetic field due to the torque the magnetic field exerts on the satellite's magnetic dipole moment. Therefore, this dipole moment needs to be estimated during the development of the satellite to make sure that the torque does not become a problem once the satellite is in orbit. This needs to be done for the MIniature STudent satellite (MIST), built at KTH Royal Institute of Technology. Two methods that use different techniques to measure the magnetic dipole moment of objects are evaluated through simulations. The first method holds the promise of being able to accurately estimate the dipole moment of components and the whole satellite alike, but has the downside of a more complex setup. The second method can be set up easily and can quickly produce an estimate of the dipole moment of a single object. However, the method is more susceptible to external disturbances in the magnetic field and to the placement of the object. Due to time constraints, only the second method is evaluated experimentally. To understand how the second method performs, reference measurements are made on a coil with a known dipole moment. The results from the reference measurements show that the second method works well enough to produce values accurate enough for this project. Measurements are thereafter made on components similar to the flight hardware, which are put through a set of tests to see how easily magnetised they are. The resulting values show that the magnetic field from magnetic tools can magnetise the components to the extent that the satellite's dipole moment exceeds the set limit. A more thorough investigation of MIST's magnetic dipole moment should be conducted to determine whether the satellite's total magnetic dipole moment runs the risk of exceeding the set limit.
Bachelor's degree project in electrical engineering, 2020, KTH, Stockholm
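Both measurement methods ultimately convert a magnetometer reading into a dipole moment. A minimal sketch of that conversion (my own illustration with assumed numbers, not the thesis setup): for a sensor on the dipole axis in the far field, B = μ0·m/(2πr³) can be inverted for m, and a coil of known moment m = N·I·A serves as the kind of reference check the abstract describes.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def dipole_moment_from_axial_field(B_axial, r):
    """Invert the on-axis dipole field model B = mu0*m / (2*pi*r^3)
    for the dipole moment m [A*m^2]. Valid in the far field, i.e.
    when r is large compared to the size of the measured object."""
    return 2.0 * np.pi * r**3 * B_axial / MU0

# Illustrative reference check with a small coil of known moment
# (all values below are assumptions, not measurements from the thesis):
N, I, radius = 20, 0.5, 0.05             # turns, current [A], coil radius [m]
m_known = N * I * np.pi * radius**2      # known dipole moment [A*m^2]

r = 0.5                                  # sensor distance [m]
B_measured = MU0 * m_known / (2 * np.pi * r**3)   # ideal, noise-free reading
print(dipole_moment_from_axial_field(B_measured, r))  # recovers m_known
```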
APA, Harvard, Vancouver, ISO, and other styles
35

Lai, Yanzhao. "Generalized method of moments exponential distribution family." View electronic thesis (PDF), 2009. http://dl.uncw.edu/etd/2009-2/laiy/yanzhaolai.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Li, Tao. "3D Capacitance Extraction With the Method of Moments." Digital WPI, 2010. https://digitalcommons.wpi.edu/etd-theses/86.

Full text
Abstract:
In this thesis, the Method of Moments has been applied to calculate the capacitance between two arbitrary 3D metal conductors, or the capacitance matrix of a 3D multi-conductor system. Capacitance extraction has found extensive use for systems involving sets of long parallel transmission lines in a multi-dielectric environment, as well as integrated circuit packages including three-dimensional conductors located on parallel planes. The thesis starts by reviewing fundamental aspects of transient electromagnetics, followed by the governing differential and integral equations, to motivate the application of numerical methods such as the Method of Moments (MoM), the Finite Element Method (FEM), etc. Among these numerical tools, the surface-based integral-equation methodology, MoM, is ideally suited to address the problem: it leads to a well-conditioned system of reduced size, as compared to volumetric methods. In this dissertation, the MoM Surface Integral Equation (SIE)-based modeling approach is developed to realize electrostatic capacitance extraction for 3D geometries. MATLAB is employed to validate its efficiency and effectiveness, along with the design of a friendly GUI. As a base example, a parallel-plate capacitor is considered. We evaluate the accuracy of the method by comparison with FEM simulations as well as the corresponding quasi-analytical solution. We apply this method to the parallel-plate square capacitor and demonstrate how far the undergraduate result C = εA/d can be from reality. To complete the solver, the same method is applied to the calculation of the line capacitance of two- and multi-conductor 2D transmission lines.
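The parallel-plate example invites a compact sketch. The following is a minimal point-collocation MoM capacitance solver (an illustration under simplifying assumptions, uniform square patches and the free-space Green's function, not the thesis's MATLAB code):

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def plate_patches(n, side, z):
    """Centres of n*n square patches tiling a side x side plate at height z."""
    h = side / n
    c = (np.arange(n) + 0.5) * h - side / 2
    x, y = np.meshgrid(c, c)
    return np.column_stack([x.ravel(), y.ravel(), np.full(n * n, z)]), h

def capacitance(n=16, side=1.0, gap=0.1):
    top, h = plate_patches(n, side, +gap / 2)
    bot, _ = plate_patches(n, side, -gap / 2)
    pts = np.vstack([top, bot])
    m = len(pts)
    # Potential coefficients P[i, j] = 1 / (4 pi eps0 |r_i - r_j|).
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    P = 1.0 / (4 * np.pi * EPS0 * np.where(d == 0, 1.0, d))
    # Self term of a uniformly charged square patch of side h:
    # V_centre = q * ln(1 + sqrt(2)) / (pi * eps0 * h).
    np.fill_diagonal(P, np.log(1 + np.sqrt(2)) / (np.pi * EPS0 * h))
    v = np.r_[np.full(m // 2, 0.5), np.full(m // 2, -0.5)]  # +/- 0.5 V plates
    q = np.linalg.solve(P, v)
    return q[: m // 2].sum()  # C = Q_top / (1 V plate-to-plate difference)

print(capacitance())  # noticeably above eps0*A/d = 8.85e-11 F (fringing)
```

Because the collocation solution captures fringing fields, the computed value sits visibly above the undergraduate estimate εA/d, which is exactly the point the abstract makes.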
APA, Harvard, Vancouver, ISO, and other styles
37

Esterhuizen, Gerhard. "Generalised density function estimation using moments and the characteristic function." Thesis, Link to the online version, 2003. http://hdl.handle.net/10019.1/1001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Eicholtz, Matthew R. "Design and analysis of an inertial properties measurement device for manual wheelchairs." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34677.

Full text
Abstract:
The dynamics of rigid body motion are dependent on the inertial properties of the body - that is, the mass and moment of inertia. For complex systems, it may be necessary to derive these results empirically. Such is the case for manual wheelchairs, which can be modeled as a rigid body frame connected to four wheels. While 3D modeling software is capable of estimating inertial parameters, modeling inaccuracies and ill-defined material properties may introduce significant errors in this estimation technique and necessitate experimental measurements. To that end, this thesis discusses the design of a device called the iMachine that empirically determines the mass, location of the center of mass, and moment of inertia about the vertical (yaw) axis passing through the center of mass of the wheelchair. The iMachine is a spring-loaded rotating platform that freely oscillates about an axis passing through its center due to an initial angular velocity. The mass and location of the center of mass can be determined using a static analysis of a triangular configuration of load cells. An optical encoder records the dynamic angular displacement of the platform, and the natural frequency of free vibration is calculated using several techniques. Finally, the moment of inertia is determined from the natural frequency of the system. In this thesis, test results are presented for the calibration of the load cells and spring rate. In addition, objects with known mass properties were tested and comparisons are made between the analytical and empirical inertia results. In general, the mass measurement of the test object had greater than 99% accuracy. The average relative error for the x and y-coordinates of the center of mass was 0.891% and 1.99%, respectively. For the moment of inertia, a relationship was established between relative error and the ratio of the test object inertia to the inertia of the system. The results suggest that 95% accuracy can be achieved if the test object accounts for at least 25% of the total inertia of the system. Finally, the moment of inertia of a manual wheelchair is determined using the device (I = 1.213 kg-m²), and conclusions are made regarding the reliability and validity of results. The results of this project will feed into energy calculations for the Anatomical Model Propulsion System (AMPS), a wheelchair-propelling robot used to measure the mechanical efficiency of manual wheelchairs.
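The inertia step described above reduces to one formula: for a torsional spring-platform system, I = k_t/ω_n². A small sketch of that reduction (all numbers are invented for illustration; the thesis derives the spring rate and natural frequency from calibration and encoder data):

```python
import numpy as np

def inertia_from_period(k_t, period):
    """Moment of inertia of a freely oscillating torsional system:
    I = k_t / omega_n^2 with omega_n = 2*pi / T (undamped approximation)."""
    omega_n = 2.0 * np.pi / period
    return k_t / omega_n**2

# Assumed values, not measurements from the thesis:
k_t = 20.0        # torsional spring rate [N*m/rad]
T_empty = 1.10    # oscillation period of the empty platform [s]
T_loaded = 1.45   # period with the test object on the platform [s]

# Subtract the platform's own inertia; this assumes the object's centre
# of mass sits on the rotation axis (else apply the parallel-axis term).
I_platform = inertia_from_period(k_t, T_empty)
I_object = inertia_from_period(k_t, T_loaded) - I_platform
print(f"object yaw inertia ~ {I_object:.3f} kg*m^2")
```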
APA, Harvard, Vancouver, ISO, and other styles
39

Liang, Yitian. "Generalized method of moments : theoretical, econometric and simulation studies." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/36866.

Full text
Abstract:
The GMM estimator is widely used in the econometrics literature. This thesis mainly focuses on three aspects of the GMM technique. First, I derive proofs of the asymptotic properties of the GMM estimator under certain conditions. To the best of my knowledge, the original complete proofs proposed by Hansen (1982) are not easily available. In this thesis, I provide complete proofs of consistency and asymptotic normality of the GMM estimator under somewhat stronger assumptions than those in Hansen (1982). Second, I illustrate the application of the GMM estimator in linear models. Specifically, I emphasize the economic reasoning underlying the linear statistical models where the GMM estimator (also referred to as the Instrumental Variable estimator) is widely used. Third, I perform several simulation studies to investigate the performance of the GMM estimator in different situations.
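For the linear models mentioned here, the GMM estimator has a closed form: with moment conditions E[z(y − xβ)] = 0 and weighting matrix W, β̂ = (X′ZWZ′X)⁻¹X′ZWZ′y. A short sketch (my own simulation, not code from the thesis):

```python
import numpy as np

def gmm_linear_iv(y, X, Z, W=None):
    """Linear GMM / IV estimator for moment conditions E[z'(y - X b)] = 0.

    With W = (Z'Z / n)^-1 this is two-stage least squares; a second step
    with W = inverse of the estimated moment covariance gives efficient GMM.
    """
    n = len(y)
    if W is None:
        W = np.linalg.inv(Z.T @ Z / n)      # first-step weighting matrix
    A = X.T @ Z @ W @ Z.T
    return np.linalg.solve(A @ X, A @ y)

# Simulated endogenous regressor: x is correlated with the error u,
# while z is a valid instrument (correlated with x, independent of u).
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=(n, 2))
u = rng.normal(size=n)
x = z @ np.array([1.0, 0.5]) + 0.8 * u + rng.normal(size=n)
y = 2.0 * x + u
print(gmm_linear_iv(y, x[:, None], z))  # close to beta = 2.0; OLS would not be
```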
APA, Harvard, Vancouver, ISO, and other styles
40

McLeod, James William. "An investigation of the CDF-based method of moments." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0009/MQ34121.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Shin, Changmock. "Entropy Based Moment Selection in Generalized Method of Moments." NCSU, 2005. http://www.lib.ncsu.edu/theses/available/etd-06072005-112026/.

Full text
Abstract:
GMM provides a computationally convenient estimation method, and the resulting estimator can be shown to be consistent and asymptotically normal under fairly moderate regularity conditions. It is widely known that the information content of the population moment conditions affects the quality of the asymptotic approximation to finite sample behavior. This dissertation focuses on a moment selection procedure that leads us to choose relevant (asymptotically efficient and non-redundant) moment conditions in the presence of weak identification. The contributions of this dissertation can be characterized as follows: in the framework of the linear model, (i) the concept of nearly redundant moment conditions is introduced and the connection between near redundancy and weak identification is explored; (ii) the performance of RMSC(c) is evaluated when weak identification is a possibility but the parameter vector to be estimated is not weakly identified by the candidate set of moment conditions; (iii) the performance of RMSC(c) is also evaluated when the parameter vector is weakly identified by the candidate set; (iv) a combined strategy of Stock and Yogo's (2002) test for weak identification and RMSC(c) is introduced and evaluated; (v) points (i) and (ii) are extended to allow for nonlinear dynamic models. The subsequent simulation results support the analytical findings: when only some of the candidate instruments are relevant and the others are redundant given all or some of the relevant ones, RMSC(c) chooses all the relevant instruments with high probability and improves the quality of the post-selection inferences; when the candidates are in order of their importance, a combined strategy of Stock and Yogo's (2002) pretest and RMSC(c) improves the post-selection inferences, although it tends to select parsimonious models; when all the possible candidates are equally important, RMSC(c) does not appear to provide any benefit. In that last case, however, asymptotic efficiency and non-redundancy can be achieved by basing the estimation and inference on all the possible candidates.
APA, Harvard, Vancouver, ISO, and other styles
42

Arvas, Serhend. "A method of moments analysis of microstructured optical fibers." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2009. http://wwwlib.umi.com/cr/syr/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Kluskens, Michael S. "Method of moments analysis of scattering by chiral media /." The Ohio State University, 1991. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487688507504775.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Khlifi, Rachid. "Hybrid space discretizing method -method of moments for numerical modeling of transient interference." kostenfrei, 2007. http://mediatum2.ub.tum.de/doc/620327/document.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Volkin, Ronald S. "Estimation of rotor blade torsional deformations from measured blade torsion moments." Thesis, Monterey, California. Naval Postgraduate School, 2002. http://hdl.handle.net/10945/6003.

Full text
Abstract:
Approved for public release; distribution is unlimited
The strain pattern analysis (SPA) method is applied to estimate rotor blade torsional deflections. The SPA technique requires calculated mode shapes for the tested rotor blade and strain measurements from the rotor's wind tunnel or flight tests. The Holzer method is developed to calculate the required mode shapes from rotor blade stiffness and mass properties and the torsional equation of motion. The Holzer method is tested against numerous theoretical and experimental cases and is shown to be accurate. The strain measurements are from wind tunnel tests conducted by the Army, NASA, United Technologies Research Center (UTRC) and Sikorsky at DNW with a 1:5.73 model-scale UH-60A rotor blade at an advance ratio of 0.301, an advancing-tip Mach number of 0.8224 and an average Reynolds number of 1,278,729. The SPA method predicts slightly larger torsional deflections that compare well with the overall trend and range of the UTRC static-method integrated deflections. The SPA method was evaluated to determine its tolerance to changes in the number of measurements and modes applied, errors in the measurements, and errors in rotor blade stiffness and mass properties. The method is tolerant to all of these effects except a decrease in the number of measurements and modes.
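The Holzer recursion the abstract develops is classical: a trial frequency is propagated station to station along a lumped torsional chain, and the residual boundary torque vanishes at a natural frequency. A minimal sketch (assumed inertias and stiffnesses, not UH-60A blade data):

```python
import numpy as np
from scipy.optimize import brentq

def holzer_residual(omega, J, k):
    """Residual end torque of a free-free lumped torsional chain.

    J : station inertias [kg*m^2], length n
    k : stiffnesses of the connecting shafts [N*m/rad], length n-1
    A natural frequency makes the residual vanish.
    """
    theta, torque = 1.0, 0.0
    for i in range(len(J)):
        torque += omega**2 * J[i] * theta   # accumulate inertia torque
        if i < len(k):
            theta -= torque / k[i]          # twist across shaft i
    return torque

# Illustrative 3-station chain (assumed values):
J = np.array([2.0, 1.0, 1.5])
k = np.array([4.0e3, 6.0e3])

# Scan for sign changes of the residual, then bracket the roots.
grid = np.linspace(1.0, 200.0, 2000)
res = [holzer_residual(w, J, k) for w in grid]
roots = [brentq(holzer_residual, a, b, args=(J, k))
         for a, b, ra, rb in zip(grid, grid[1:], res, res[1:]) if ra * rb < 0]
print(roots)  # elastic natural frequencies [rad/s]; theta gives mode shapes
```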
APA, Harvard, Vancouver, ISO, and other styles
46

Russant, Stuart. "A-posteriori error estimation using higher moments in computational fluid dynamics." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/aposteriori-error-estimation-using-higher-moments-in-computational-fluid-dynamics(77bdb9c6-e99a-490d-9624-fdc61525d039).html.

Full text
Abstract:
In industrial situations time is expensive, and simulation accuracy is not always investigated because doing so requires grid refinement studies or other time-consuming methods. With this in mind, the goal of this research is to develop a method for assessing the errors and uncertainties of computational fluid dynamics (CFD) simulations that can be adopted by industry within its requirements and time constraints. In a CFD calculation there are a number of sources of errors and uncertainties. An uncertainty is a potential deficiency that is due to a lack of knowledge of an activity of the modelling process, for example turbulence modelling. An error is a recognisable deficiency that is not due to a lack of knowledge, for example numerical discretisation error. The process of determining the level of errors and uncertainties is termed verification and validation. The work aims to define an error estimation method for verification of numerical errors that can be produced during one simulation on a single grid. The second moment solution error estimate for scalar and vector quantities is proposed to meet these requirements. Where the governing equations of CFD, termed the first moments, represent the transport of primary variables such as the velocity, the second moments represent the transport of the primary variables squared, such as the total kinetic energy. The second moments are formed by a rearrangement of the first moments. Based on a mathematical justification, an error estimate for vector or scalar quantities is defined from combinations of the solutions to the first and second moments. The error estimate was highly successful when applied to six test cases using laminar flow and scalar transport. These test cases used either central differencing with Gaussian elimination, or the finite volume method with the CFD solver Code_Saturne, demonstrating the applicability of the error estimate across solution methods. Comparisons were made with the numerical simulation errors, which were found using either analytical or refined solutions. The comparisons were aided by the normalised cross-correlation coefficient, which compares the similarity of the shape prediction, and the averaged summation coefficients, which compare the scale prediction. When using the first-order upwind scheme the method consistently produced good predictions of the locations of error. The second-order centred and second-order linear upwind schemes showed similar success, but were limited by influences from solution unboundedness, non-resolution of the boundary layer, the near-wall gradient approximation, and numerical pressure error. At high Reynolds numbers these caused the prediction of the location of error to degrade. This effect was made worse when using the second-order schemes in conjunction with the constant-value boundary condition. This was the case for both the scalar and velocity simulations, and is caused by the unavoidable drop to first-order accuracy in the near-wall gradient approximation that is required for the second moment source term. The prediction of the scale showed a dependence on the cell Peclet number. Below a cell Peclet number of 4, the increase of the estimate scale was linearly related to the increase of the error scale, with the estimate scale consistently over-predicting by up to a factor of 3; this gives confidence that the true error level lies below that predicted by the error estimate. At cell Peclet numbers greater than 4 the relationship between the scales remained linear, but the estimate began to under-predict the error. The exact relation becomes case-dependent, and the largest under-prediction was by a factor of 10. In such circumstances a computationally inexpensive calibration can be performed.
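The normalised cross-correlation coefficient used for the shape comparison is the standard one. A small sketch, assuming the estimated and true error fields are sampled on the same set of cells:

```python
import numpy as np

def normalised_cross_correlation(estimate, error):
    """Normalised cross-correlation of two sampled fields.

    Returns 1 for identical shapes (up to scale and offset),
    0 for uncorrelated fields, -1 for anti-correlated ones.
    """
    a = estimate - estimate.mean()
    b = error - error.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

# An estimate that over-predicts the error by a constant factor of 3
# everywhere still has perfect shape agreement:
err = np.random.default_rng(1).random(100)
print(normalised_cross_correlation(3.0 * err, err))  # ~ 1.0
```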
APA, Harvard, Vancouver, ISO, and other styles
47

Chester, David A. III. "Using the method of moments and Robin Hood method to solve electromagnetic scattering problems." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/78502.

Full text
Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Physics, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 67-69).
This thesis project was to write a program in C++ that solves electromagnetic scattering problems for arbitrarily shaped scatterers. This was implemented using a surface integral formulation of Maxwell's equations, which discretizes the surface of the scatterer into thousands of triangles. The method of moments (MoM) was applied, which calculates the Green's function interactions between the triangle elements. A matrix equation is obtained and solved using the Robin Hood (RH) method; the solution to this equation gives the scattered electromagnetic field. The program is first tested on a sphere, for which the results can be compared to the analytic solution known as Mie scattering. Once these results are confirmed, the program can be used for the KATRIN experiment to ensure that no Penning traps occur in the electron spectrometer.
by David A. Chester.
S.B.
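The Robin Hood method referenced here is an iterative charge-equilibration scheme (Lazić et al.). A schematic sketch of its pairwise-transfer core (my own illustration, not the thesis's C++ code; real implementations also adjust the total charge for fixed-potential problems):

```python
import numpy as np

def robin_hood_solve(P, v_target, steps=20000, tol=1e-10):
    """Equilibrate P @ q = v_target by pairwise charge transfers.

    P        : potential coefficient matrix from an MoM discretisation
    v_target : prescribed potential on each surface element
    Each step moves charge between the elements with the largest and
    smallest potential residuals, updating the residual vector in O(n)
    instead of refactorising the full matrix.
    """
    n = len(v_target)
    q = np.zeros(n)
    r = -v_target.astype(float)          # residual r = P @ q - v_target
    for _ in range(steps):
        hi, lo = int(np.argmax(r)), int(np.argmin(r))
        if r[hi] - r[lo] < tol:
            break
        # Transfer dq from element hi to lo so their residuals meet.
        dq = (r[hi] - r[lo]) / (P[hi, hi] + P[lo, lo] - P[hi, lo] - P[lo, hi])
        q[hi] -= dq
        q[lo] += dq
        r += dq * (P[:, lo] - P[:, hi])  # rank-1 residual update, O(n)
    return q
```

With a potential matrix P assembled from a surface mesh, robin_hood_solve(P, np.ones(len(P))) would approximate the unit-potential charge distribution without ever storing an LU factorisation, which is the method's main attraction for large meshes.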
APA, Harvard, Vancouver, ISO, and other styles
48

Virk, Bikram. "Implementing method of moments on a GPGPU using Nvidia CUDA." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33980.

Full text
Abstract:
This thesis concentrates on the algorithmic aspects of the Method of Moments (MoM) and Locally Corrected Nyström (LCN) numerical methods in electromagnetics. The data dependency in each step of the algorithm is analyzed in order to implement a parallel version that can harness the processing power of a General Purpose Graphics Processing Unit (GPGPU). The GPGPU programming model provided by NVIDIA's Compute Unified Device Architecture (CUDA) is described to introduce the software tools that enable C code to be implemented on the GPGPU. Various optimizations, such as partial updates at every iteration, inter-block synchronization, and the use of shared memory, enable an overall speedup of approximately 10. The study also brings out the strengths and weaknesses of implementing different methods, such as Crout's LU decomposition and triangular matrix inversion, on a GPGPU architecture. The results suggest future directions of study for different algorithms and their effectiveness in a parallel processor environment. The performance data collected show how different features of the GPGPU architecture can be exploited to yield higher speedup.
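For reference, the serial data flow of Crout's decomposition, one of the methods whose GPGPU mapping the thesis evaluates (a plain sketch of the textbook algorithm, not the CUDA implementation):

```python
import numpy as np

def crout_lu(A):
    """Crout's LU decomposition: A = L @ U with U unit upper triangular.

    The column-by-column data flow is what limits parallelism: column j
    of L and row j of U depend on all previously computed columns/rows.
    No pivoting is done, so A must have nonzero leading minors.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L, U = np.zeros((n, n)), np.eye(n)
    for j in range(n):
        for i in range(j, n):                       # column j of L
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        for i in range(j + 1, n):                   # row j of U
            U[j, i] = (A[j, i] - L[j, :j] @ U[:j, i]) / L[j, j]
    return L, U

A = np.array([[4.0, 3.0, 0.0], [6.0, 3.0, 1.0], [0.0, 2.0, 5.0]])
L, U = crout_lu(A)
print(np.allclose(L @ U, A))  # -> True
```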
APA, Harvard, Vancouver, ISO, and other styles
49

Tham, C. Y. "Electromagnetic transient analysis using the frequency domain method of moments." Thesis, Swansea University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.639176.

Full text
Abstract:
The relative merits of frequency domain (FD) electromagnetic transient analysis against the time domain approach are discussed. When used on highly resonant systems, conventional FD methods, which rely on the FFT, can yield erroneous results. This is shown to be due to inadequate sampling resolution, which is determined empirically. The collection of analytical tools for FD analysis is reviewed with emphasis on the control of errors. From these principles, a systematic and objective methodology to extract a system's transient response from FD data is proposed. The methodology is extended with a further proposal using dynamic adaptive sampling to obtain an accurate frequency response spectrum efficiently. The proposal is based on the adaptive integration principle but uses a relative convergence limit based on the most recently computed value of the integral. The frequency samples obtained are non-uniformly spaced, and a modified inverse DFT formula is developed. The sampling strategy results in a very substantial reduction in computational demand over the conventional FFT technique: accurate transient results can be obtained with typically fewer than 10% of the samples of the conventional approach. The sampling strategy also enables highly resonant structures to be analysed in the frequency domain. To sample an extremely resonant spectrum accurately, a resolution of the order of 1 in 10⁶ is required, a level of computational demand beyond the practical limit of conventional FFT methods. The methodology has been used to model transients on resonant wires and transmission lines of various configurations in both antenna and scattering modes. A particular case studied uses an equivalent antenna model of a human body standing on a perfect ground and exposed to low frequency radiation. The resulting currents flowing at the feet into the ground, which are adopted as a measure of exposure level, are predicted accurately. The methodology's relative performance against a thin-wire time domain integral equation formulation, in terms of efficiency, accuracy, utility and ease of use, is discussed in some detail.
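Non-uniform frequency samples rule out a plain FFT, so the inverse transform must integrate over the grid directly. A sketch of one such reconstruction (a trapezoid-rule illustration of the idea, not the thesis's modified inverse DFT formula):

```python
import numpy as np

def inverse_ft_nonuniform(omega, F, t):
    """Time response from non-uniformly spaced frequency samples.

    Approximates f(t) = (1/pi) * Re( integral_0^inf F(w) e^{jwt} dw )
    with the trapezoidal rule on the given sorted, non-negative grid.
    Valid for a real, causal f(t); the spectrum is truncated at omega[-1].
    """
    kernel = F[None, :] * np.exp(1j * np.outer(t, omega))   # (nt, nw)
    dw = np.diff(omega)
    vals = 0.5 * (kernel[:, 1:] + kernel[:, :-1]) @ dw      # trapezoid rule
    return vals.real / np.pi

# Check with a damped exponential f(t) = e^{-a t}, whose spectrum is
# F(w) = 1 / (a + j w); the grid is dense where F varies fast.
a = 2.0
omega = np.unique(np.concatenate([np.linspace(0.0, 10.0, 300),
                                  np.geomspace(10.0, 200.0, 600)]))
F = 1.0 / (a + 1j * omega)
t = np.array([0.25, 0.5, 1.0, 2.0])
print(inverse_ft_nonuniform(omega, F, t))  # close to np.exp(-a * t)
```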
APA, Harvard, Vancouver, ISO, and other styles
50

Vered, Nissan. "Method of moments analysis of displaced-axis dual reflector antennas." Thesis, Monterey, Calif. : Naval Postgraduate School, 1992. http://handle.dtic.mil/100.2/ADA247970.

Full text
APA, Harvard, Vancouver, ISO, and other styles