Dissertations / Theses on the topic 'Generalized method of moments estimation'

Consult the top 50 dissertations / theses for your research on the topic 'Generalized method of moments estimation.'

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Cunha, Joao Marco Braga da. "Estimating Artificial Neural Networks with Generalized Method of Moments." Pontifícia Universidade Católica do Rio de Janeiro, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=26922@1.

Abstract:
Artificial Neural Networks (ANNs) started being developed in the 1940s. However, it was during the 1980s, pushed by the popularization and increasing power of computers, that ANNs became widely relevant. Also in the 1980s, there were two other academic events closely related to the present work: (i) a large increase in econometricians' interest in nonlinear models, culminating in the econometric approaches for ANNs at the end of that decade; and (ii) the introduction of the Generalized Method of Moments (GMM) for parameter estimation in 1982. In econometric approaches for ANNs, estimation by Quasi Maximum Likelihood (QML) has always prevailed. Despite its good asymptotic properties, QML is very prone to a problem in finite-sample estimation known as overfitting. This thesis extends the state of the art in econometric approaches for ANNs by presenting an alternative to QML estimation that preserves its good asymptotic properties and is less prone to overfitting. The proposed approach relies on GMM estimation. As a byproduct, GMM estimation allows the use of the so-called J test to check for neglected nonlinearity. The Monte Carlo studies performed indicate that GMM estimates are more accurate than those generated by QML in high-noise settings, especially in small samples. This result supports the hypothesis that GMM is less susceptible to overfitting. Exchange rate forecasting experiments reinforced these findings. A second Monte Carlo study showed good finite-sample properties of the J test applied to neglected nonlinearity, compared with a widely known and used reference test. Overall, the results indicate that GMM estimation is a recommendable alternative, especially for data with a high noise level.
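The two building blocks this abstract leans on, two-step GMM estimation and Hansen's J test for over-identifying restrictions, can be sketched in a few lines. The sketch below is illustrative only, not the author's estimator: it fits a hypothetical one-hidden-unit network by GMM with polynomial instruments and reads the J statistic off the minimized objective.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Toy data from a one-hidden-unit "network": y = b*tanh(g0 + g1*x) + noise.
n = 500
x = rng.normal(size=n)
y = 1.5 * np.tanh(0.5 + 2.0 * x) + 0.5 * rng.normal(size=n)

Z = np.column_stack([np.ones(n), x, x**2, x**3])   # 4 instruments, 3 parameters

def residual(theta):
    b, g0, g1 = theta
    return y - b * np.tanh(g0 + g1 * x)

def objective(theta, W):
    g = Z.T @ residual(theta) / n                  # sample moment vector
    return n * g @ W @ g

# Step 1: identity weighting; step 2: efficient weighting W = S^{-1}.
step1 = minimize(objective, x0=[1.0, 0.0, 1.0], args=(np.eye(Z.shape[1]),))
u = residual(step1.x)
S = (Z * u[:, None]).T @ (Z * u[:, None]) / n
step2 = minimize(objective, x0=step1.x, args=(np.linalg.inv(S),))

# With the efficient weight, the minimized objective is Hansen's J statistic;
# under correct specification it is chi-squared with (moments - parameters) df.
J = step2.fun
print(step2.x, J, chi2.sf(J, df=Z.shape[1] - len(step2.x)))
```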
2

Burk, David Morris. "Estimating the Effect of Disability on Medicare Expenditures." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/2127.

Abstract:
We consider the effect of disability status on Medicare expenditures. The disabled elderly have historically accounted for a significant portion of Medicare expenditures. Recent demographic trends exhibit a decline in the size of this population, leading some observers to predict declines in Medicare expenditures. There are, however, reasons to be suspicious of this rosy forecast. To better understand the effect of disability on Medicare expenditures, we develop and estimate a model using the generalized method of moments technique. We find that the newly disabled elderly generally spend more than those who have been disabled for longer periods of time. We also find that expenditures have risen much more quickly for those disabled Medicare beneficiaries who were at the higher end of the expenditure distribution before the increases.
3

Liu, Xiaodong. "Econometrics on interactions-based models: methods and applications." Columbus, Ohio: Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180283230.

4

Ruzibuka, John S. "The impact of fiscal deficits on economic growth in developing countries : Empirical evidence and policy implications." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/16282.

Abstract:
This study examines the impact of fiscal deficits on economic growth in developing countries. Based on deduction from the relevant theoretical and empirical literature, the study tests the following hypotheses regarding the impact of fiscal deficits on economic growth. First, fiscal deficits have a significant positive or negative impact on economic growth in developing countries. Second, the impact of fiscal deficits on economic growth depends on the size of deficits as a percentage of GDP – that is, there is a non-linear relationship between fiscal deficits and economic growth. Third, the impact of fiscal deficits on economic growth depends on the ways in which deficits are financed. Fourth, the impact of fiscal deficits on economic growth depends on what deficit financing is used for. The study also examines whether there are any significant regional differences in the relationship between fiscal deficits and economic growth in developing countries. The study uses panel data for thirty-one developing countries covering the period 1972-2001, analysed through the econometric estimation of a dynamic growth model using the Arellano and Bond (1991) generalised method of moments (GMM) technique. Overall, the results suggest the following. First, fiscal deficits per se have no significant positive or negative impact on economic growth. Second, by contrast, when the deficit is substituted by domestic and foreign financing, we find that both domestic and foreign financing of fiscal deficits exert a negative and statistically significant impact on economic growth with a lag. Third, we find that both categories of the economic classification of government expenditure, namely capital and current expenditure, have no significant impact on economic growth. When government expenditure is disaggregated on the basis of a functional classification, the results suggest that spending on education, defence and economic services has a positive but insignificant impact on growth, while spending on health and general public services has a positive and significant impact. Fourth, in terms of regional differences with regard to the estimated relationships, the study finds that, while there are some differences between the four regions represented in our sample of thirty-one developing countries – namely Asia and the Pacific, Latin America and the Caribbean, the Middle East and North Africa, and Sub-Saharan Africa – these differences are not statistically significant. On the basis of these findings, the study concludes that fiscal deficits per se are not necessarily good or bad for economic growth in developing countries; how the deficits are financed and what they are used for matter. In addition, the study concludes that there are no statistically significant regional differences in the relationship between fiscal deficits and economic growth in developing countries.
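The Arellano-Bond idea the abstract invokes is easy to state: first-difference the dynamic panel model to remove the country fixed effect, then use sufficiently lagged levels as instruments for the differenced lagged dependent variable. Below is a minimal sketch on simulated data using only the single closest valid lag as instrument (the Anderson-Hsiao special case; the full Arellano-Bond GMM estimator stacks all available lags and applies an optimal weighting matrix).

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated dynamic panel: y_it = rho*y_{i,t-1} + u_i + e_it.
N, T, rho = 200, 7, 0.5
u = rng.normal(size=N)
y = np.zeros((N, T))
y[:, 0] = u + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + u + rng.normal(size=N)

# First-differencing removes u_i; the level y_{i,t-2} is then a valid
# instrument for dy_{i,t-1} because E[y_{i,t-2} * d e_{it}] = 0.
dy   = np.concatenate([y[:, t] - y[:, t - 1]     for t in range(2, T)])
dy_1 = np.concatenate([y[:, t - 1] - y[:, t - 2] for t in range(2, T)])
z    = np.concatenate([y[:, t - 2]               for t in range(2, T)])

rho_hat = (z @ dy) / (z @ dy_1)   # just-identified IV estimate of rho
print(rho_hat)                    # close to 0.5 for large N
```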
5

Badinger, Harald, and Peter Egger. "Spacey Parents and Spacey Hosts in FDI." WU Vienna University of Economics and Business, 2013. http://epub.wu.ac.at/3924/2/wp154.pdf.

Abstract:
Empirical trade economists have found that shocks on foreign direct investment (FDI) of some parent country in a host country affect the same parent country's FDI in other hosts (interdependent hosts). Independent of this, there is evidence that shocks on a parent country's FDI in some host economy affect other parent countries' FDI in the same host (interdependent parents). In general equilibrium, shocks on FDI between any country pair will affect all country-pairs' FDI in the world, including any one of the two countries in a pair as well as third countries (interdependent third countries). No attempt has been made so far to allow simultaneously for all three modes of interdependence of FDI. Using cross-sectional data on FDI among 22 OECD countries in 2000, we employ a spatial feasible generalized two-stage least squares and generalized moments estimation framework to allow for all three modes of interdependence across all parent and host countries, thereby distinguishing between market-size-related and remainder interdependence. Our results highlight the complexity of multinational enterprises' investment strategies and the interconnectedness of the world investment system (authors' abstract).
Series: Department of Economics Working Paper Series
6

Naylor, Guilherme Lima. "O impacto das instituições na renda dos países : uma abordagem dinâmica para dados em painel." Master's thesis, Instituto Superior de Economia e Gestão, 2021. http://hdl.handle.net/10400.5/21704.

Abstract:
Master's in Applied Econometrics and Forecasting
Differences in income levels between countries have long been studied in economics. Human capital, productivity, institutions and other factors have been taken as determinants of the discrepancies found. This work follows the institutionalist line in seeking to measure and relate how institutions impact the income level of countries. First, it briefly reviews the literature on economic growth models. Subsequently, the concept of institution is delimited and its process of evolution over time is described. This preamble is important because it provides a theoretical basis for the estimated econometric models, which aim to measure the effects of different characteristics of institutions on the income level of countries. The method chosen for the analysis is the estimation of dynamic models, using the Blundell and Bond system Generalized Method of Moments estimator approach.
7

Lai, Yanzhao. "Generalized method of moments exponential distribution family." View electronic thesis (PDF), 2009. http://dl.uncw.edu/etd/2009-2/laiy/yanzhaolai.pdf.

8

Augustine-Ohwo, Odaro. "Estimating break points in linear models : a GMM approach." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/estimating-break-points-in-linear-models-a-gmm-approach(804d83e3-dad8-4cda-b1e1-fbfce7ef41b8).html.

Abstract:
In estimating econometric time series models, it is assumed that the parameters remain constant over the period examined. This assumption may not always be valid when using data which span an extended period, as the underlying relationships between the variables in these models are exposed to various exogenous shifts. It is therefore imperative to examine the stability of models, as failure to identify any changes could result in wrong predictions or inappropriate policy recommendations. This research proposes a method of estimating the location of break points in linear econometric models with endogenous regressors, estimated using the Generalised Method of Moments (GMM). The proposed estimation method is based on Wald, Lagrange Multiplier and Difference-type test statistics of parameter variation. In this study, the equation which sets out the relationship between the endogenous regressor and the instruments is referred to as the Jacobian Equation (JE). The thesis is presented along two main categories: Stable JE and Unstable JE. Under the Stable JE, models with a single break and with multiple breaks in the Structural Equation (SE) are examined. The break fraction estimators obtained are shown to be consistent for the true break fraction in the model. Additionally, using the fixed break approach, their T-convergence rates are established. Monte Carlo simulations which support the asymptotic properties are presented. Two main types of Unstable JE models are considered: a model with a single break only in the JE, and another with a break in both the JE and SE. The asymptotic properties of the estimators obtained from these models are intractable under the fixed break approach, hence the thesis provides essential steps towards establishing the properties using the shrinking breaks approach. Nonetheless, a series of Monte Carlo simulations conducted provide strong support for the consistency of the break fraction estimators under the Unstable JE. A combined procedure for testing and estimating significant break points is detailed in the thesis. This method yields a consistent estimator of the true number of breaks in the model, as well as their locations. Lastly, an empirical application of the proposed methodology is presented using the New Keynesian Phillips Curve (NKPC) model for U.S. data. A previous study found this NKPC model to be unstable, having two endogenous regressors with an Unstable JE. Using the combined testing and estimation approach, similar break points were estimated at 1975:2 and 1981:1. Therefore, using the GMM estimation approach proposed in this study, the presence of a Stable or Unstable JE does not affect estimation of breaks in the SE. A researcher can focus directly on estimating potential break points in the SE without having to pre-estimate the breaks in the JE, as is currently done using Two Stage Least Squares.
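The core mechanics of break-fraction estimation can be illustrated without the endogeneity and GMM machinery of the thesis: split the sample at each candidate break, fit each subsample, and take the split minimizing the pooled objective (equivalently, maximizing a sup-Wald-type statistic). A deliberately simplified OLS sketch, not the thesis's estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear model with one break in the slope at true fraction 0.6.
n = 400
x = rng.normal(size=n)
beta = np.where(np.arange(n) < 0.6 * n, 1.0, 2.0)
y = beta * x + rng.normal(size=n)

def ssr(lo, hi):
    # OLS sum of squared residuals on observations [lo, hi).
    b = (x[lo:hi] @ y[lo:hi]) / (x[lo:hi] @ x[lo:hi])
    r = y[lo:hi] - b * x[lo:hi]
    return r @ r

# Estimate the break fraction by minimizing the split-sample objective
# over a trimmed grid of candidate break points.
grid = range(int(0.15 * n), int(0.85 * n))
k_hat = min(grid, key=lambda k: ssr(0, k) + ssr(k, n))
print(k_hat / n)   # close to 0.6
```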
9

Liang, Yitian. "Generalized method of moments : theoretical, econometric and simulation studies." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/36866.

Abstract:
The GMM estimator is widely used in the econometrics literature. This thesis mainly focuses on three aspects of the GMM technique. First, I study the asymptotic properties of the GMM estimator under certain conditions. To the best of my knowledge, the original complete proofs proposed by Hansen (1982) are not easily available. In this thesis, I provide complete proofs of consistency and asymptotic normality of the GMM estimator under some stronger assumptions than those in Hansen (1982). Second, I illustrate the application of the GMM estimator in linear models. Specifically, I emphasize the economic reasoning underneath the linear statistical models where the GMM estimator (also referred to as the Instrumental Variable estimator) is widely used. Third, I perform several simulation studies to investigate the performance of the GMM estimator under different situations.
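For the linear models the abstract refers to, the GMM estimator has a closed form: with instruments Z and weighting matrix W, beta = (X'Z W Z'X)^{-1} X'Z W Z'y, with 2SLS as the first-step special case W proportional to (Z'Z)^{-1}. A small simulated sketch (illustrative, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear model with an endogenous regressor: x is correlated with the
# error through v, and the two columns of z are valid instruments.
n = 1000
z = rng.normal(size=(n, 2))
v = rng.normal(size=n)
x = z @ np.array([1.0, 0.5]) + v
e = 0.8 * v + rng.normal(size=n)
y = 2.0 * x + e

X = x[:, None]

def linear_gmm(y, X, Z, W):
    # beta = (X'Z W Z'X)^{-1} X'Z W Z'y  -- the linear GMM estimator.
    A = X.T @ Z @ W @ Z.T @ X
    b = X.T @ Z @ W @ Z.T @ y
    return np.linalg.solve(A, b)

# Step 1: W = (Z'Z/n)^{-1} gives 2SLS; step 2: efficient W = S^{-1}.
b1 = linear_gmm(y, X, z, np.linalg.inv(z.T @ z / n))
u = y - X @ b1
S = (z * u[:, None]).T @ (z * u[:, None]) / n
b2 = linear_gmm(y, X, z, np.linalg.inv(S))
print(b1, b2)   # both close to 2.0
```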
10

Shin, Changmock. "Entropy Based Moment Selection in Generalized Method of Moments." NCSU, 2005. http://www.lib.ncsu.edu/theses/available/etd-06072005-112026/.

Abstract:
GMM provides a computationally convenient estimation method, and the resulting estimator can be shown to be consistent and asymptotically normal under fairly moderate regularity conditions. It is widely known that the information content in the population moment condition has an impact on the quality of the asymptotic approximation to finite sample behavior. This dissertation focuses on a moment selection procedure that leads us to choose relevant (asymptotically efficient and non-redundant) moment conditions in the presence of weak identification. The contributions of this dissertation can be characterized as follows: in the framework of the linear model, (i) the concept of nearly redundant moment conditions is introduced and the connection between near redundancy and weak identification is explored; (ii) the performance of RMSC(c) is evaluated when weak identification is a possibility but the parameter vector to be estimated is not weakly identified by the candidate set of moment conditions; (iii) the performance of RMSC(c) is also evaluated when the parameter vector is weakly identified by the candidate set; (iv) a combined strategy of Stock and Yogo's (2002) test for weak identification and RMSC(c) is introduced and evaluated; (v) (i) and (ii) are extended to allow for nonlinear dynamic models. The subsequent simulation results support the analytical findings: when only a part of the instruments in the set of possible candidates are relevant and the others are redundant given all or some of the relevant ones, RMSC(c) chooses all the relevant instruments with high probability and improves the quality of the post-selection inferences; when the candidates are in order of their importance, a combined strategy of Stock and Yogo's (2002) pretest and RMSC(c) improves the post-selection inferences, although it tends to select parsimonious models; when all the possible candidates are equally important, RMSC(c) does not appear to provide any merits. However, in the last case, asymptotic efficiency and non-redundancy can be achieved by basing the estimation and inference on all the possible candidates.
11

Koci, Eni. "The stochastic discount factor and the generalized method of moments." Digital WPI, 2006. https://digitalcommons.wpi.edu/etd-theses/873.

Abstract:
"The fundamental theorem of asset pricing in finance states that the price of any asset is its expected discounted payoff. Ideally, the payoff is discounted by a factor, which depends on parameters present in the market, and it should be unique, in the sense that financial derivatives should be able to be priced using the same discount factor. In theory, risk neutral valuation implies the existence of a positive random variable, which is called the stochastic discount factor and is used to discount the payoffs of any asset. Apart from asset pricing another use of stochastic discount factor is to evaluate the performance of the of hedge fund managers. Among many methods used to evaluate the stochastic discount factor, generalized method of moments has become very popular. In this paper we will see how generalized method of moments is used to evaluate the stochastic discount factor on linear models and the calculation of stochastic discount factor using generalized method of moments for the popular model in finance CAPM. "
12

Strydom, Willem Jacobus. "Recovery based error estimation for the Method of Moments." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96881.

Abstract:
Thesis (MEng)--Stellenbosch University, 2015.
The Method of Moments (MoM) is routinely used for the numerical solution of electromagnetic surface integral equations. Solution errors are inherent to any numerical computational method, and error estimators can be effectively employed to reduce and control these errors. In this thesis, gradient recovery techniques of the Finite Element Method (FEM) are formulated within the MoM context, in order to recover a higher-order charge of a Rao-Wilton-Glisson (RWG) MoM solution. Furthermore, a new recovery procedure, based specifically on the properties of the RWG basis functions, is introduced by the author. These recovered charge distributions are used for a posteriori error estimation of the charge. It was found that the newly proposed charge recovery method has the highest accuracy of the considered recovery methods, and is the most suited for applications within recovery based error estimation. In addition to charge recovery, the possibility of recovery procedures for the MoM solution current are also investigated. A technique is explored whereby a recovered charge is used to find a higher-order divergent current representation. Two newly developed methods for the subsequent recovery of the solenoidal current component, as contained in the RWG solution current, are also introduced by the author. A posteriori error estimation of the MoM current is accomplished through the use of the recovered current distributions. A mixed second-order recovered current, based on a vector recovery procedure, was found to produce the most accurate results. The error estimation techniques developed in this thesis could be incorporated into an adaptive solver scheme to optimise the solution accuracy relative to the computational cost.
13

Menshikova, M. "Uncertainty estimation using the moments method facilitated by automatic differentiation in Matlab." Thesis, Department of Engineering Systems and Management, 2010. http://hdl.handle.net/1826/4328.

Abstract:
Computational models have long been used to predict the performance of some baseline design given its design parameters. Given inconsistencies in manufacturing, the manufactured product always deviates from the baseline design. There is currently much interest both in evaluating the effects of variability in design parameters on a design's performance (uncertainty estimation), and in robust optimization of the baseline design such that near optimal performance is obtained despite variability in design parameters. Traditionally, uncertainty analysis is performed by expensive Monte Carlo methods. This work considers the alternative moments method for uncertainty propagation and its implementation in Matlab. In computational design it is assumed a computational model gives a sufficiently accurate approximation to a design's performance. As such it can be used for estimating statistical moments (expectation, variance, etc.) of the design due to known statistical variation of the model's parameters, e.g., by the Monte Carlo approach. In the moments method we further assume the model is sufficiently differentiable that a Taylor series approximation to a model may be constructed, and the moments of the Taylor series may be taken analytically to yield approximations to the model's moments. In this thesis we generalise techniques considered within the engineering community and design and document associated software to generate arbitrary order Taylor series approximations to arbitrary order statistical moments of computational models implemented in Matlab; Taylor series coefficients are calculated using automatic differentiation. This approach is found to be more efficient than a standard Monte Carlo method for the small-scale model test problems we consider. Previously, Christianson and Cox (2005) indicated that the moments method will be non-convergent in the presence of complex poles of the computational model and suggested a partitioning method to overcome this problem. We implement a version of the partitioning method and demonstrate that it does result in convergence of the moments method. Additionally, we consider what we term the branch detection problem, in order to ascertain whether our Taylor series approximation might only be valid piecewise.
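The lowest-order version of the moments method is compact enough to state inline: for X ~ N(mu, sigma^2), a second-order Taylor expansion of f about mu gives E f(X) ≈ f(mu) + f''(mu) sigma^2 / 2 and Var f(X) ≈ f'(mu)^2 sigma^2. The thesis automates the derivatives to arbitrary order with automatic differentiation in Matlab; the Python sketch below simply hard-codes them for f = exp and checks against Monte Carlo:

```python
import numpy as np

# Moments method: propagate the mean and variance of X ~ N(mu, sigma^2)
# through f via a Taylor expansion about mu, then compare with Monte Carlo.
f   = np.exp
df  = np.exp          # f'  (for f = exp all derivatives coincide)
d2f = np.exp          # f''

mu, sigma = 0.5, 0.2

mean_taylor = f(mu) + 0.5 * d2f(mu) * sigma**2   # second-order mean approx.
var_taylor  = df(mu)**2 * sigma**2               # first-order variance approx.

rng = np.random.default_rng(5)
xs = rng.normal(mu, sigma, size=1_000_000)
print(mean_taylor, f(xs).mean())   # both close to exp(0.52) ~ 1.682
print(var_taylor,  f(xs).var())
```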
14

Ahmed, Mohamed Salem. "Contribution à la statistique spatiale et l'analyse de données fonctionnelles." Thesis, Lille 3, 2017. http://www.theses.fr/2017LIL30047/document.

Abstract:
This thesis is about statistical inference for spatial and/or functional data. Indeed, we are interested in estimation of unknown parameters of some models from random or nonrandom (stratified) samples composed of independent or spatially dependent variables. The specificity of the proposed methods lies in the fact that they take into consideration the nature of the sample considered (stratified or spatial sample). We begin by studying data valued in a space of infinite dimension, or so-called "functional data". First, we study a functional binary choice model explored in a case-control or choice-based sample design context. The specificity of this study is that the proposed method takes into account the sampling scheme. We describe a conditional likelihood function under the sampling distribution and a reduction of dimension strategy to define a feasible conditional maximum likelihood estimator of the model. Asymptotic properties of the proposed estimates as well as their application to simulated and real data are given. Secondly, we explore a functional linear autoregressive spatial model whose particularity lies in the functional nature of the explanatory variable and the structure of the spatial dependence. The estimation procedure consists of reducing the infinite dimension of the functional variable and maximizing a quasi-likelihood function. We establish the consistency and asymptotic normality of the estimator. The usefulness of the methodology is illustrated via simulations and an application to some real data. In the second part of the thesis, we address some estimation and prediction problems of real random spatial variables. We start by generalizing the k-nearest neighbors method, namely k-NN, to predict a spatial process at non-observed locations using some covariates. The specificity of the proposed k-NN predictor lies in the fact that it is flexible and allows for some heterogeneity in the covariate. We establish the almost complete convergence with rates of the spatial predictor, whose performance is ensured by an application over simulated and environmental data. In addition, we generalize the partially linear probit model of independent data to the spatial case. We use a linear process for disturbances, allowing various spatial dependencies, and propose a semiparametric estimation approach based on weighted likelihood and generalized method of moments methods. We establish the consistency and asymptotic distribution of the proposed estimators and investigate the finite sample performance of the estimators on simulated data. We end with an application of spatial binary choice models to identify UADT (upper aerodigestive tract) cancer risk factors in the north region of France, which displays the highest rates of such cancer incidence and mortality in the country.
15

Ginos, Brenda Faith. "Parameter Estimation for the Lognormal Distribution." Diss., 2009. http://contentdm.lib.byu.edu/ETD/image/etd3205.pdf.

16

Owen, Claire Elayne Bangerter. "Parameter Estimation for the Beta Distribution." Diss., 2008. http://contentdm.lib.byu.edu/ETD/image/etd2670.pdf.

17

Pant, Mohan Dev. "Simulating Univariate and Multivariate Burr Type III and Type XII Distributions Through the Method of L-Moments." OpenSIUC, 2011. https://opensiuc.lib.siu.edu/dissertations/401.

Abstract:
The Burr families (Type III and Type XII) of distributions are traditionally used in the context of statistical modeling and for simulating non-normal distributions with moment-based parameters (e.g., Skew and Kurtosis). In educational and psychological studies, the Burr families of distributions can be used to simulate extremely asymmetrical and heavy-tailed non-normal distributions. Conventional moment-based estimators (i.e., the mean, variance, skew, and kurtosis) are traditionally used to characterize the distribution of a random variable or in the context of fitting data. However, conventional moment-based estimators can (a) be substantially biased, (b) have high variance, or (c) be influenced by outliers. In view of these concerns, a characterization of the Burr Type III and Type XII distributions through the method of L-moments is introduced. Specifically, systems of equations are derived for determining the shape parameters associated with user specified L-moment ratios (e.g., L-Skew and L-Kurtosis). A procedure is also developed for the purpose of generating non-normal Burr Type III and Type XII distributions with arbitrary L-correlation matrices. Numerical examples are provided to demonstrate that L-moment based Burr distributions are superior to their conventional moment based counterparts in the context of estimation, distribution fitting, and robustness to outliers. Monte Carlo simulation results are provided to demonstrate that L-moment-based estimators are nearly unbiased, have relatively small variance, and are robust in the presence of outliers for any sample size. Simulation results are also provided to show that the methodology used for generating correlated non-normal Burr Type III and Type XII distributions is valid and efficient. Specifically, Monte Carlo simulation results are provided to show that the empirical values of L-correlations among simulated Burr Type III (and Type XII) distributions are in close agreement with the specified L-correlation matrices.
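Sample L-moments, the estimators this abstract builds on, are linear combinations of order statistics and can be computed directly from probability-weighted moments. A short sketch of the generic formulas (not the author's simulation code):

```python
import numpy as np
from math import comb

def sample_lmoments(x, nmom=4):
    # Unbiased sample L-moments via probability-weighted moments:
    # b_r = mean over sorted x_(j) of C(j-1, r) / C(n-1, r) * x_(j).
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b = [np.mean([comb(j, r) / comb(n - 1, r) * x[j] for j in range(n)])
         for r in range(nmom)]
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3 / l2, l4 / l2   # mean, L-scale, L-skew, L-kurtosis

rng = np.random.default_rng(6)
print(sample_lmoments(rng.exponential(size=5000)))
# exponential(1) has L-moments roughly (1, 0.5, 1/3, 1/6)
```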
18

Gajic, Ruzica, and Isabelle Söder. "Arbetslöshetsförsäkringens finansiering : Hur påverkas arbetslöshetskassornas medlemsantal av en förhöjd grad av avgiftsfinansiering?" Thesis, Uppsala University, Department of Economics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-126711.

Abstract:

Since the turn of the year 2006/2007, the number of members in the Swedish unemployment insurance funds has fallen drastically. During the same period, several reforms of the unemployment insurance system have been implemented, which among other things resulted in higher membership fees for most of the funds. The purpose of this essay is to examine whether any relationship over time can be found between changes in membership numbers and membership fees. To investigate this, one must, in addition to the fees, also take into account other variables connected to the unemployment insurance: the basic benefit amount, the maximum daily benefit, the replacement rate and unemployment. We formulate a model for the relationship between membership numbers and these variables and estimate it by the Generalized Method of Moments using data from 2000-2009. In line with theory and previous research, our results show a negative relationship between membership fees and the number of fund members. This relationship turns out to be strong, especially in the long run. To see more clearly how fee changes affect different types of individuals to different degrees, we have also examined whether the membership numbers of funds linked to white-collar and blue-collar federations are differently sensitive to changes in the fee. In contrast to previous studies, our results show that the funds linked to the white-collar federations (TCO and Saco) are more sensitive to changes than those of the blue-collar federation (LO). This gives reason to believe that there are factors other than the fees and the other variables included in our model which affect the membership rate and which can explain the difference between the groups.

19

Ragusa, Giuseppe. "Essays on moment conditions models econometrics." Diss., 2005. Access restricted to UC campuses. http://wwwlib.umi.com/cr/ucsd/fullcit?p3170252.

20

Badinger, Harald, and Peter Egger. "Estimation and Testing of Higher-Order Spatial Autoregressive Panel Data Error Component Models." Springer, 2013. http://epub.wu.ac.at/5468/1/JoGS_2012.pdf.

Abstract:
This paper develops an estimator for higher-order spatial autoregressive panel data error component models with spatial autoregressive disturbances, SARAR(R,S). We derive the moment conditions and optimal weighting matrix without distributional assumptions for a generalized moments (GM) estimation procedure of the spatial autoregressive parameters of the disturbance process and define a generalized two-stage least squares estimator for the regression parameters of the model. We prove consistency of the proposed estimators, derive their joint asymptotic distribution, and provide Monte Carlo evidence on their small sample performance.
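The regression-parameter half of such procedures, spatial two-stage least squares, is easy to sketch: the spatial lag Wy is endogenous, so it is instrumented with spatial lags of the exogenous regressors. Below is a single-W toy version; the paper itself handles higher-order SARAR(R,S) processes and adds a GM step for the disturbance parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

# Spatial autoregressive model y = lam*W y + X beta + u, estimated by
# spatial 2SLS with instruments H = [X, WX, W^2 X].
n = 400
W = (rng.random((n, n)) < 0.02).astype(float)
np.fill_diagonal(W, 0.0)
W /= np.maximum(W.sum(axis=1, keepdims=True), 1.0)   # row-standardize

X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta, lam = np.array([1.0, 2.0]), 0.4
y = np.linalg.solve(np.eye(n) - lam * W, X @ beta + rng.normal(size=n))

Z = np.column_stack([W @ y, X])                             # Wy is endogenous
H = np.column_stack([X, W @ X[:, 1:], W @ (W @ X[:, 1:])])  # instruments

P = H @ np.linalg.solve(H.T @ H, H.T @ Z)   # first-stage fitted regressors
delta = np.linalg.solve(P.T @ Z, P.T @ y)   # [lam, beta0, beta1]
print(delta)
```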
21

Demirbağ, Mustafa Emin. "Estimation of seismic parameters from multifold reflection seismic data by generalized linear inversion of Zoeppritz equations." Diss., Virginia Tech, 1990. http://hdl.handle.net/10919/37224.

22

Zhou, Zhuzhu. "Essays in Social Choice and Econometrics:." Thesis, Boston College, 2021. http://hdl.handle.net/2345/bc-ir:109181.

Abstract:
Thesis advisor: Uzi Segal
The dissertation studies the property of transitivity in social choice theory. I explain why we should care about transitivity in decision theory. I propose two social decision theories, redistribution regret and ranking regret, study their properties of transitivity, and discuss the possibility of finding a best choice for the social planner. Additionally, in joint work, we propose a general method to construct a consistent estimator given two parametric models, one of which could be incorrectly specified. In "Why Transitivity", to explain behaviors violating transitivity, e.g., preference reversals, some models, like regret theory and salience theory, were developed. However, these models naturally violate transitivity, which may not lead to a best choice for the decision maker. This paper discusses the consequences and possible extensions to deal with it. In "Redistribution Regret and Transitivity", a social planner wants to allocate resources, e.g., the government allocates fiscal revenue or parents distribute toys to children. The social planner cares about individuals' feelings, which depend both on their assigned resources and on the alternatives they might have been assigned. As a result, there could be intransitive cycles. This paper shows that the preference orders are generally non-transitive, but there are two exceptions: a fixed total resource and one extremely sensitive individual, or only two individuals with the same non-linear individual regret function. In "Ranking Regret", a social planner wants to rank people, e.g., assign airline passengers a boarding order. A natural ranking is to order people from most to least sensitive to their rank. But people's feelings can depend both on their assigned rank and on the alternatives they might have been assigned. As a result, there may be no best ranking, due to intransitive cycles. This paper shows how to tell when a best ranking exists, and that when it exists, it is indeed the natural ranking. When this best ranking does not exist, an alternative second-best group ranking strategy is proposed, which resembles actual airline boarding policies. In "Over-Identified Doubly Robust Identification and Estimation", joint with Arthur Lewbel and Jinyoung Choi, we consider two parametric models. At least one is correctly specified, but we don't know which. Both models include a common vector of parameters. An estimator for this common parameter vector is called Doubly Robust (DR) if it is consistent no matter which model is correct. We provide a general technique for constructing DR estimators (assuming the models are over-identified). Our Over-identified Doubly Robust (ODR) technique is a simple extension of the Generalized Method of Moments. We illustrate our ODR technique with a variety of models. Our empirical application is instrumental variables estimation, where either one of two instrument vectors might be invalid.
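A flavor of the instrumental-variables application can be given in a few lines: with two candidate instrument sets for the same parameter, each set is over-identified, and Hansen's J test tends to reject for the invalid one. This is only a loose illustration of the setting, not the ODR estimator of the paper:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(8)

# y = 2*x + e.  Instrument set z1 is valid; one column of z2 ("bad")
# is invalid because it is correlated with the error e.
n = 2000
z1 = rng.normal(size=(n, 2))
bad = rng.normal(size=n)
e = rng.normal(size=n) + 0.8 * bad
x = z1 @ np.array([1.0, 0.7]) + 0.5 * bad + rng.normal(size=n)
y = 2.0 * x + e
z2 = np.column_stack([z1[:, 0], bad])

def j_test(Z):
    # 2SLS for scalar beta, then Hansen's J statistic with df = 2 - 1.
    num = (Z.T @ x) @ np.linalg.solve(Z.T @ Z, Z.T @ y)
    den = (Z.T @ x) @ np.linalg.solve(Z.T @ Z, Z.T @ x)
    b = num / den
    u = y - b * x
    S = (Z * u[:, None]).T @ (Z * u[:, None]) / n
    g = Z.T @ u / n
    J = n * g @ np.linalg.solve(S, g)
    return b, J, chi2.sf(J, df=Z.shape[1] - 1)

print(j_test(z1))   # beta near 2, large p-value
print(j_test(z2))   # biased beta, tiny p-value
```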
Thesis (PhD) — Boston College, 2021
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Economics
23

Thurston, David Curtis. "A generalized method of moments comparison of several discrete time stochastic models of the term structure in the Heath-Jarrow-Morton arbitrage-based framework." Diss., The University of Arizona, 1992. http://hdl.handle.net/10150/185902.

Abstract:
This paper tests a new methodology: the discrete time no-arbitrage-based model of Heath, Jarrow and Morton (HJM). From within Ho and Lee's framework, HJM's model is shown to encompass Ho and Lee's AR model as a special case. Several discrete stochastic models of the term structure, based on restrictions placed on the variance of the forward rate process, are discussed. These models are tested in HJM's no-arbitrage-based framework. For testing, it is necessary to use current bond prices to substitute out the market price of risk implied in the initial term structure. In this way, additional current bond prices appear in the pricing formulas, but the market price of risk does not. Several sets of forward rate models are tested. To avoid measurement errors associated with fitting splines to coupon-bearing bonds, coupon-free data are used. Weekly T-bill quotes over a twenty-three year period starting in 1968 are split into two equal sets about the structural break of October 7, 1979, following the shift in the Federal Reserve's monetary policy. These two data sets are split in half for further testing. Hansen's Generalized Method of Moments (GMM) is employed to estimate the models' parameters with a minimum of assumptions. Because the models are not nested, the resulting J statistics are not suitable for model comparisons. As an alternative, "simulated residuals", resulting from the imposition of the parameter values obtained from the GMM estimation, are calculated. The model generating the set of simulated residuals with the smallest variance is assumed to have the best fit. The F test is used for pairwise comparisons of the models. The sets of simulated residuals are not normally distributed. However, unless two samples are from radically different distributions, the F test is quite robust to the assumption of sample normality and can still be used to perform an informal comparison of two similar samples.
24

Lins, Rafael Marques. "A posteriori error estimations for the generalized finite element method and modified versions." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/18/18134/tde-03092015-083839/.

Abstract:
This thesis investigates two a posteriori error estimators, based on gradient recovery, aiming to fill the gap in error estimation for the Generalized FEM (GFEM) and, mainly, its modified versions called the Corrected XFEM (C-XFEM) and the Stable GFEM (SGFEM). In order to reach this purpose, firstly, brief reviews regarding the GFEM and its modified versions are presented, where the main advantages attributed to each numerical method are highlighted. Then, some important concepts related to the study of errors are presented. Furthermore, some contributions involving a posteriori error estimation for the GFEM are shortly described. Afterwards, the two error estimators hereby proposed are addressed, focusing on linear elastic fracture mechanics problems. The first estimator was originally proposed for the C-XFEM and is hereby extended to the SGFEM framework. The second one is based on a splitting of the recovered stress field into two distinct parts: singular and smooth. The singular part is computed with the help of the J integral, whereas the smooth one is calculated from a combination of the Superconvergent Patch Recovery (SPR) and Singular Value Decomposition (SVD) techniques. Finally, various numerical examples are selected to assess the robustness of the error estimators considering different enrichment types, versions of the GFEM, loading modes and element types. Relevant aspects such as effectivity indexes, error distribution and convergence rates are used for describing the error estimators. The main contributions of this thesis are: the development of two efficient a posteriori error estimators for the GFEM and its modified versions; a comparison between the GFEM and its modified versions; the identification of the positive features of each error estimator; and a detailed study concerning blending element issues.
25

Menichini, Amilcar Armando. "Financial Frictions and Capital Structure Choice: A Structural Dynamic Estimation." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/145397.

Abstract:
This thesis studies different aspects of firm decisions by using a dynamic model. I estimate a dynamic model of the firm based on the trade-off theory of capital structure that endogenizes investment, leverage, and payout decisions. For the estimation of the model I use the Efficient Method of Moments (EMM), which allows me to recover the structural parameters that best replicate the characteristics of the data. I start by analyzing the question of whether target leverage varies over time. While this is a central issue in finance, there is no consensus in the literature on this point. I propose an explanation that reconciles some of the seemingly contradictory empirical evidence. The dynamic model generates a target leverage that changes over time and consistently reproduces the results of Lemmon, Roberts, and Zender (2008). These findings suggest that the time-varying target leverage assumption of the bulk of the previous literature is not incompatible with the evidence presented by Lemmon, Roberts, and Zender (2008). Then I study how corporate income tax payments vary with the corporate income tax rate. The dynamic model produces a bell-shaped relationship between tax revenue and the tax rate that is consistent with the notion of the Laffer curve. The dynamic model generates the maximum tax revenue for a tax rate between 36% and 41%. Finally, I investigate the impact of financial constraints on investment decisions by firms. Model results show that investment-cash flow sensitivity is higher for less financially constrained firms. This result is consistent with Kaplan and Zingales (1997). The dynamic model also rationalizes why large and mature firms have a positive and significant investment-cash flow sensitivity.
26

Sevilla, David. "Computerized method for finding the ideal patient-specific location to place an equivalent electric dipole to derive an estimation of the electrical activity of the heart." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

27

Badinger, Harald, and Peter Egger. "Fixed Effects and Random Effects Estimation of Higher-Order Spatial Autoregressive Models with Spatial Autoregressive and Heteroskedastic Disturbances." WU Vienna University of Economics and Business, 2014. http://epub.wu.ac.at/4126/1/wp173.pdf.

Abstract:
This paper develops a unified framework for fixed and random effects estimation of higher-order spatial autoregressive panel data models with spatial autoregressive disturbances and heteroskedasticity of unknown form in the idiosyncratic error component. We derive the moment conditions and optimal weighting matrix without distributional assumptions for a generalized moments (GM) estimation procedure of the spatial autoregressive parameters of the disturbance process, and define both a random effects and a fixed effects spatial generalized two-stage least squares estimator for the regression parameters of the model. We prove consistency of the proposed estimators and derive their joint asymptotic distribution, which is robust to heteroskedasticity of unknown form in the idiosyncratic error component. Finally, we derive a robust Hausman test of the spatial random effects model against the spatial fixed effects model. (authors' abstract)
Series: Department of Economics Working Paper Series
APA, Harvard, Vancouver, ISO, and other styles
29

Dürr, Robert [Verfasser], Achim [Akademischer Betreuer] Kienle, and Dominique [Akademischer Betreuer] Thevenin. "Parameter estimation and method of moments for multi dimensional population balance equations with application to vaccine production processes / Robert Dürr ; Achim Kienle, Dominique Thévenin." Magdeburg : Universitätsbibliothek, 2016. http://d-nb.info/1123631476/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Babichev, Dmitry. "On efficient methods for high-dimensional statistical estimation." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE032.

Full text
Abstract:
In this thesis we consider several aspects of parameter estimation for statistics and machine learning, together with optimization techniques applicable to these problems. The goal of parameter estimation is to find the unknown hidden parameters which govern the data, for example the parameters of an unknown probability density. The construction of estimators through optimization problems is only one side of the coin; finding the optimal value of the parameter is often itself an optimization problem that needs to be solved using various techniques. Fortunately, these optimization problems are convex for a wide class of problems, and we can exploit their structure to get fast convergence rates. The first main contribution of the thesis is to develop moment-matching techniques for multi-index non-linear regression problems. We consider the classical non-linear regression problem, which is infeasible in high dimensions due to the curse of dimensionality, and we combine two existing techniques, ADE and SIR, into a hybrid method without some of the weak sides of its parents. In the second main contribution we use a special type of averaging for stochastic gradient descent. We consider conditional exponential families (such as logistic regression), where the goal is to find the unknown value of the parameter. Classical approaches, such as SGD with constant step-size, are known to converge only to some neighborhood of the optimal value of the parameter, even with averaging. We propose averaging of the moment parameters, which we call prediction functions. For finite-dimensional models this type of averaging can lead to a negative error; that is, this approach provides an estimator better than any linear estimator can ever achieve. The third main contribution of this thesis deals with Fenchel-Young losses. We consider multi-class linear classifiers with losses of a certain type, such that their dual conjugate has a direct product of simplices as support. The corresponding convex-concave saddle-point formulation has a special form with a bilinear matrix term, and classical approaches suffer from time-consuming matrix multiplications. We show that for multi-class SVM losses, with efficient sampling techniques, our approach has sublinear iteration complexity; we need to pay only three times O(n + d + k), for the number of classes k, the number of features d and the number of samples n, whereas all existing techniques have higher complexity.
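Since sliced inverse regression is one of the two building blocks named above, a compact SIR sketch on synthetic single-index data may be helpful; this is plain SIR, not the thesis's ADE/SIR hybrid, and all data choices are invented.

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(2)
    n, d = 5_000, 10
    X = rng.standard_normal((n, d))
    w = np.zeros(d); w[0] = 1.0                       # true index direction
    y = np.sin(X @ w) + 0.1 * rng.standard_normal(n)

    def sir_direction(X, y, n_slices=10):
        Xc = X - X.mean(axis=0)
        slices = np.array_split(np.argsort(y), n_slices)  # slice on the response
        means = np.stack([Xc[idx].mean(axis=0) for idx in slices])
        wts = np.array([len(idx) for idx in slices]) / len(y)
        M = (means * wts[:, None]).T @ means     # covariance of slice means
        Sigma = np.cov(Xc, rowvar=False)
        evals, evecs = eigh(M, Sigma)            # generalized eigenproblem
        return evecs[:, -1]                      # top direction, up to sign/scale

    b_hat = sir_direction(X, y)                  # should align with w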
APA, Harvard, Vancouver, ISO, and other styles
31

Saeed, Usman. "Adaptive numerical techniques for the solution of electromagnetic integral equations." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41173.

Full text
Abstract:
Various error estimation and adaptive refinement techniques for the solution of electromagnetic integral equations were developed. Residual-based error estimators and h-refinement were implemented for the Method of Moments (MoM) solution of electromagnetic integral equations for a number of different problems. Due to the high computational cost associated with the MoM, a cheaper solution technique known as the Locally-Corrected Nyström (LCN) method was explored. Several explicit and implicit techniques for error estimation in the LCN solution of electromagnetic integral equations were proposed and implemented for different geometries to successfully identify high-error regions. A simple p-refinement algorithm was developed and implemented for a number of prototype problems using the proposed estimators; numerical error was found to decrease significantly in the high-error regions after refinement. A simple computational cost analysis was also presented for the proposed error estimation schemes, and various cost-accuracy trade-offs and problem-specific limitations of the different techniques were discussed. Finally, the important problem of slope mismatch between the global error rates of the solution and of the residual was identified, and methods to compensate for that mismatch using scale factors based on matrix norms were developed.
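The estimate-mark-refine loop described here is easy to demonstrate on a toy one-dimensional Fredholm equation of the second kind. The Python sketch below uses a midpoint Nystrom discretization and a crude pointwise residual indicator, which is much simpler than the electromagnetic MoM/LCN setting but follows the same adaptive pattern; the kernel, right-hand side and marking fraction are arbitrary choices.

    import numpy as np

    K = lambda x, t: 0.5 * np.exp(-np.abs(x - t))   # toy kernel
    f = lambda x: np.cos(np.pi * x)                 # toy right-hand side

    def solve(edges):
        # midpoint Nystrom solve of phi(x) - int_0^1 K(x,t) phi(t) dt = f(x)
        mids, h = 0.5 * (edges[:-1] + edges[1:]), np.diff(edges)
        A = np.eye(len(mids)) - K(mids[:, None], mids[None, :]) * h[None, :]
        return mids, h, np.linalg.solve(A, f(mids))

    edges = np.linspace(0.0, 1.0, 9)
    for sweep in range(5):
        mids, h, phi = solve(edges)
        # residual at panel edges, using nearest-midpoint values of phi
        x = edges
        integral = (K(x[:, None], mids[None, :]) * h[None, :]) @ phi
        phi_x = phi[np.clip(np.searchsorted(mids, x), 0, len(phi) - 1)]
        res = np.abs(phi_x - integral - f(x))
        err = 0.5 * (res[:-1] + res[1:]) * h               # per-panel indicator
        worst = np.argsort(err)[-max(1, len(err) // 5):]   # mark worst ~20%
        edges = np.sort(np.concatenate([edges, mids[worst]]))  # h-refine them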
APA, Harvard, Vancouver, ISO, and other styles
32

Srinivas, L. "FIR System Identification Using Higher Order Cumulants -A Generalized Approach." Thesis, Indian Institute of Science, 1994. http://hdl.handle.net/2005/637.

Full text
Abstract:
The thesis presents algorithms, based on a linear algebraic solution, for identifying the parameters of an FIR system using only higher-order statistics, when only the output of the system, corrupted by additive Gaussian noise, is observed. Traditional parametric methods of estimating the parameters of the system have been based on the 2nd-order statistics of the output. These methods suffer from the deficiency that they do not preserve the phase response of the system and hence cannot identify non-minimum phase systems. To circumvent this problem, higher-order statistics, which preserve the phase characteristics of a process and hence are able to identify a non-minimum phase system, and which are also insensitive to additive Gaussian noise, have been used in recent years. Existing algorithms for the identification of the FIR parameters based on higher-order cumulants use the autocorrelation sequence as well, and give erroneous results in the presence of additive colored Gaussian noise. This problem can be overcome by obtaining algorithms that do not utilize the 2nd-order statistics. An existing relationship between the 2nd-order and any lth-order cumulants is generalized to a relationship between any two arbitrary kth- and lth-order cumulants. This new relationship is used to obtain new algorithms for FIR system identification that use only cumulants of order greater than 2, with no restriction other than the Gaussian nature of the additive noise sequence. Simulation studies demonstrate the failure of the existing algorithms when the imposed constraints on the 2nd-order statistics of the additive noise are violated, while the proposed algorithms perform very well and give consistent results. Recently, a new algebraic approach for parameter estimation, denoted the Linear Combination of Slices (LCS) method, was proposed, based on expressing the FIR parameters as a linear combination of cumulant slices. The rank-deficient cumulant matrix S formed in the LCS method can be expressed as a product of matrices with a certain structure. The orthogonality between the subspace orthogonal to S and the range space of S has been exploited to obtain a new class of algorithms for estimating the parameters of an FIR system. Numerical simulation studies demonstrate the good behaviour of the proposed algorithms. Analytical expressions for the covariance of the estimates of the FIR parameters for the different algorithms presented in the thesis have been obtained, and numerical comparisons have been made for specific cases. Numerical examples demonstrate the application of the proposed algorithms to channel equalization in data communication and as an initial solution for cumulant-matching nonlinear optimization methods.
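A classical special case gives the flavor of cumulant-based FIR identification: Giannakis' formula h(k) = c3(q, k) / c3(q, 0) recovers an FIR system with h(0) = 1 and known order q from third-order output cumulants alone, with no second-order statistics. The sketch below assumes a noise-free output and a skewed i.i.d. input; the thesis's algorithms generalize well beyond this.

    import numpy as np

    rng = np.random.default_rng(3)
    h = np.array([1.0, -2.3, 1.5])            # FIR taps, h[0] = 1, order q = 2
    q = len(h) - 1
    x = rng.exponential(1.0, 400_000) - 1.0   # i.i.d., zero-mean, skewed input
    y = np.convolve(x, h)[: len(x)]           # noise-free output for clarity

    def c3(y, i, j):
        # sample third-order cumulant E[y(n) y(n+i) y(n+j)] for zero-mean y
        n = len(y) - max(i, j)
        return np.mean(y[:n] * y[i:n + i] * y[j:n + j])

    h_hat = np.array([c3(y, q, k) / c3(y, q, 0) for k in range(q + 1)])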
APA, Harvard, Vancouver, ISO, and other styles
33

Hattaway, James T. "Parameter Estimation and Hypothesis Testing for the Truncated Normal Distribution with Applications to Introductory Statistics Grades." Diss., 2010. http://contentdm.lib.byu.edu/ETD/image/etd3412.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Tao, Ji. "Spatial econometrics models, methods and applications." 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1118957992.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains x, 140 p. Includes bibliographical references (p. 137-140). Available online via OhioLINK's ETD Center
APA, Harvard, Vancouver, ISO, and other styles
35

Hokayem, Charles. "ESSAYS ON HUMAN CAPITAL, HEALTH CAPITAL, AND THE LABOR MARKET." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_diss/23.

Full text
Abstract:
This dissertation consists of three essays concerning the effects of human capital and health capital on the labor market. Chapter 1 presents a structural model that incorporates a health capital stock into the traditional learning-by-doing model. The model allows health to affect future wages by interrupting current labor supply and on-the-job human capital accumulation. Using data on sick time from the Panel Study of Income Dynamics, the model is estimated with a nonlinear Generalized Method of Moments estimator. The results show that human capital production exhibits diminishing returns. Health capital production increases with the current stock of health capital; that is, better current health improves future health. Among prime-age working men, the effect of health on human capital accumulation is relatively small. Chapter 2 explores the role of another form of human capital, noncognitive skills, in explaining racial gaps in wages. It adds two noncognitive skills, locus of control and self-esteem, to a simple wage specification to determine the effect of these skills on the racial wage gap (white, black, and Hispanic) and the return to these skills across the wage distribution. The wage specifications are estimated using pooled, between, and quantile estimators. Results using the National Longitudinal Survey of Youth 1979 show these skills account for differing portions of the racial wage gap depending on race and gender. Chapter 3 synthesizes the idea of health and on-the-job human capital accumulation from Chapter 1 with the noncognitive skills of Chapter 2 to examine the influence of these skills on human capital and health capital accumulation in adult life. It introduces noncognitive skills into a life cycle labor supply model with endogenous health and human capital accumulation. Noncognitive skills, measured by degree of future orientation, self-efficacy, trust-hostility, and aspirations, exogenously affect human capital and health production. The model uses noncognitive skills assessed in the early years of the Panel Study of Income Dynamics and relates these skills to health and human capital accumulation during adult life. The main findings suggest individuals with high self-efficacy receive higher future wages.
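The mechanics of a nonlinear GMM estimator like the one used in Chapter 1 can be sketched generically. The moment condition E[z(y - exp(theta*x))] = 0 and the data below are invented for illustration and have nothing to do with the dissertation's health-capital moments; only the two-step estimator structure is the point.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    n = 2_000
    z = rng.standard_normal((n, 3))                  # instruments
    x = z @ np.array([1.0, 0.5, -0.5]) + rng.standard_normal(n)
    y = np.exp(0.7 * x) + rng.standard_normal(n)     # true theta = 0.7

    def g_i(theta):
        return z * (y - np.exp(theta * x))[:, None]  # moment contributions

    def Q(theta, W):
        g_bar = g_i(theta[0]).mean(axis=0)
        return n * g_bar @ W @ g_bar                 # GMM quadratic form

    step1 = minimize(Q, x0=[0.1], args=(np.eye(3),), method="Nelder-Mead")
    G = g_i(step1.x[0])
    W_opt = np.linalg.inv(G.T @ G / n)               # efficient weighting
    step2 = minimize(Q, x0=step1.x, args=(W_opt,), method="Nelder-Mead")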
APA, Harvard, Vancouver, ISO, and other styles
36

Loum, Mor Absa. "Modèle de mélange et modèles linéaires généralisés, application aux données de co-infection (arbovirus & paludisme)." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS299/document.

Full text
Abstract:
This thesis is concerned with the study of mixture models and generalized linear models, with an application to co-infection data between arboviruses and malaria parasites. After a first part dedicated to the study of co-infection using a multinomial logistic model, we propose in a second part a study of mixtures of generalized linear models. The proposed method for estimating the parameters of the mixture combines a method of moments with a spectral method. Finally, we propose a last part dedicated to mixtures of extreme values in the presence of random censoring. The estimation method proposed in this part proceeds in two steps based on the maximization of a likelihood.
APA, Harvard, Vancouver, ISO, and other styles
37

Asad, Humaira. "Effective financial development, inequality and poverty." Thesis, University of Exeter, 2012. http://hdl.handle.net/10036/3583.

Full text
Abstract:
This thesis addresses the question of whether the impact of financial development on relative and absolute indicators of poverty depends on the level of human capital present in an economy. To answer this question, we first develop a theoretical framework to explain the growth process in the context of financial development, assuming that human capital is heterogeneous in terms of the skills and education people have. Then, using data sets based on five-year averages over 1960-2010 and 1980-2010, covering 107 developed and developing countries, we empirically investigate extensions of the theoretical framework. These extensions cover the relationships between:
1. income inequality and economic growth;
2. financial development, human capital and income inequality; and
3. financial development, human capital and poverty.
We provide empirical evidence using modern dynamic and static panel GMM techniques (a toy version is sketched after this abstract). The findings show that income inequality and economic growth are interdependent: there is an inverse relationship between initial inequality and economic growth, and changes in income inequality follow the pattern identified by Kuznets (1955), known as the Kuznets hypothesis. The results also show that financial development helps reduce income inequality and alleviate poverty only when a sufficient level of human capital is available. On the basis of these findings we introduce the term "effective financial development", meaning that financial development is effective in accelerating growth, reducing income inequality and alleviating poverty only if a sufficient level of human capital is available. The empirical study covers multiple aspects of financial development, such as private credit extended by banks and other financial institutions, liquid liabilities and stock market capitalization. The results are robust to multiple data sets and various indicators of income inequality, financial development, poverty and human capital. The study also provides a marginal analysis, which helps in understanding the impact of financial development on inequality and poverty at different levels of human capital. This research can be a useful learning paradigm for academics and researchers interested in growth economics and in how poverty and income inequality can be reduced effectively. It can also be useful for policy makers in financial institutions, because it provides robust empirical evidence that financial development cannot help alleviate poverty or reduce inequality unless a sufficient level of human capital is available. The findings are particularly relevant for developing countries, where high levels of income inequality and poverty are serious problems. The study explains how effective financial development can be used to reduce income inequality and alleviate poverty, and describes the inter-linkages between financial development, human capital, inequality, economic growth and financial instability. Policy makers can also take advantage of the marginal analyses, which illustrate the minimum levels of private credit and of primary and secondary schooling above which the effects of financial development and human capital become significant in reducing inequality and poverty.
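As a toy counterpart to the dynamic panel GMM estimators used in the thesis, the sketch below implements the simpler Anderson-Hsiao idea: first-difference away the fixed effect and instrument the lagged difference with the second lag in levels. Full Arellano-Bond or Blundell-Bond GMM stacks many more instruments per period; the data here are simulated for illustration.

    import numpy as np

    rng = np.random.default_rng(5)
    N, T, rho = 500, 8, 0.6
    alpha = rng.standard_normal(N)                    # fixed effects
    y = np.zeros((N, T))
    for t in range(1, T):
        y[:, t] = rho * y[:, t - 1] + alpha + rng.standard_normal(N)

    dy = np.diff(y, axis=1)               # Delta y, columns are t = 1..T-1
    lhs = dy[:, 2:].ravel()               # Delta y_it
    rhs = dy[:, 1:-1].ravel()             # Delta y_{i,t-1} (endogenous)
    inst = y[:, 1:-2].ravel()             # y_{i,t-2} in levels as instrument
    rho_hat = (inst @ lhs) / (inst @ rhs) # simple IV ratio, close to rho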
APA, Harvard, Vancouver, ISO, and other styles
38

Brandão, Jose Wellington. "Os efeitos da estrutura de propriedade sobre a politica de dividendos da empresa brasileira." Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=14523.

Full text
Abstract:
Despite decades of findings on dividend policy, the decision to pay dividends is still a subject under debate. Several factors have been proposed as capable of explaining dividend policy, such as profit/profitability, the previous dividend (persistence of the dividend policy), firm size, leverage, and growth opportunities. More recently, the literature has explored the interference that the ownership structure may have on the distribution of dividends. In this context arise propositions such as the hypotheses related to the use of dividend policy as an instrument for monitoring executive management, and the possible expropriation of minority shareholders by controlling shareholders. The objective of this research is to evaluate, under the theoretical framework of agency theory, whether dividend policy is used in the Brazilian market as an instrument of executive monitoring or of expropriation of minority shareholders. The sample is a data panel composed of 1,890 annual observations of 223 companies in the period 1996-2012, based on data collected from the Economatica system for companies with shares traded on the São Paulo Stock Exchange. From the estimation of a set of explanatory models of dividend policy, the results indicate that the presence of a majority shareholder has a negative effect on dividend policy, in line with the expropriation hypothesis. Another relevant result is the positive effect of the presence of another non-financial company, as majority or principal shareholder, on the level of dividend distribution, which is consistent with the executive-monitoring hypothesis.
APA, Harvard, Vancouver, ISO, and other styles
39

Ribarczyk, Bruna Gabriela. "Os efeitos da integração financeira sobre a competitividade externa dos países da União Monetária Europeia." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/132893.

Full text
Abstract:
Adopting a single currency significantly changes the economic policy of the countries involved. The objective of this dissertation, prepared in the form of an article, is to study the effects of the adoption of the euro on the external competitiveness of member countries of the European Monetary Union (EMU), based on the theoretical framework of the theory of optimum currency areas. The econometric analysis comprises a dynamic panel of 12 EMU countries over the period 2002-2013, used to infer whether capital inflows had a negative impact on the external competitiveness of the peripheral EMU countries and how different types of capital flows affected the real effective exchange rates of euro-area countries. It is concluded that not only the crisis allowed competitiveness gains among EMU countries, but so did more desirable factors, such as the inflow of other investments in the financial account of the balance of payments, trade openness, and government spending. In addition, it appears that the impact of capital mobility on competitiveness is influenced not only by the type of capital but also by the country that receives the flow.
APA, Harvard, Vancouver, ISO, and other styles
40

Salerno, André. "A velocidade de ajuste das necessidades de capital de giro: um estudo sobre amostra de empresas listadas na BM&FBovespa." reponame:Repositório Institucional do FGV, 2014. http://hdl.handle.net/10438/13116.

Full text
Abstract:
The main objective of this study, to our knowledge unprecedented in Brazil, is to evaluate some determinants of working capital requirements commonly studied in the literature and to analyze how companies move toward a target Net Trade Cycle (NTC, similar to the cash conversion cycle). Working capital still lacks robust theories within the finance field, and few studies are found in the literature. Those who study the subject note that, given its current stage, it has been researched with the support of more consolidated theoretical bases, such as capital structure; that literature has made wide use of the concept of a target to determine the optimal capital structure and of the speed of adjustment with which firms move toward it in order to optimize their resources. The absence of more established definitions and theories on the topic motivated this study, which estimates the speed of adjustment toward a working capital target using the Partial Adjustment Model (PAM) and the Generalized Method of Moments (GMM) as supporting techniques. With this combination, new to the Brazilian market where working capital is concerned, we hope to contribute to the academic and business communities. The data for this quantitative study come from Economatica® and the Central Bank of Brazil (BCB): quarterly financial statements, adjusted for inflation (IPCA), between December 31, 2007 and June 30, 2014, for companies listed on the BM&FBovespa with at least 15 consecutive quarters of data, yielding a little over 2,000 observations on 105 companies. As for the method, we use an (unbalanced) dynamic data panel and the following techniques to answer the main question of the study ('What is the speed of adjustment of working capital requirements?'): the Partial Adjustment Model for the analysis of the determinants of working capital requirements and of the movement toward a target, and the Generalized Method of Moments (GMM) to control for possible endogeneity effects (Blundell and Bond, 1998) and to solve residual autocorrelation problems (Pires, Zani and Nakamura, 2013, p. 19).
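The mechanics of the partial adjustment model are easy to see in a simulated panel: the coefficient on the lagged NTC identifies one minus the adjustment speed. Plain least squares is used below only to expose the algebra; the dissertation relies on GMM precisely because the lagged dependent variable is endogenous in short panels.

    import numpy as np

    rng = np.random.default_rng(6)
    N, T, delta, b = 300, 15, 0.35, 2.0    # delta = true adjustment speed
    x = rng.standard_normal((N, T))        # determinant of the NTC target
    ntc = np.zeros((N, T))
    for t in range(1, T):
        target = b * x[:, t]
        ntc[:, t] = (1 - delta) * ntc[:, t - 1] + delta * target \
                    + 0.1 * rng.standard_normal(N)

    Y = ntc[:, 1:].ravel()
    Z = np.column_stack([ntc[:, :-1].ravel(), x[:, 1:].ravel()])
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    delta_hat = 1.0 - coef[0]              # recovered adjustment speed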
APA, Harvard, Vancouver, ISO, and other styles
41

Otunuga, Olusegun Michael. "Stochastic Modeling and Analysis of Energy Commodity Spot Price Processes." Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5289.

Full text
Abstract:
Supply and demand in the world oil market are balanced through responses to price movements, with considerable complexity in the evolution of the underlying supply-demand expectation process. To understand the price balancing process, it is important to know the economic forces and the behavior of energy commodity spot price processes. The relationship between the different energy sources and their utility, together with uncertainty, also plays a role in many important energy issues. The qualitative and quantitative behavior of energy commodities, in which the price trend of one commodity coincides with the price trends of others, has always raised questions regarding their interactions and, where interaction exists, the extent of their influence on each other. This work undertakes a study that sheds light on the processes and issues highlighted above. It deals systematically with the development of stochastic dynamic models and with the mathematical, statistical and computational analysis of energy commodity spot price and interaction processes. The main components of the research carried out in this dissertation are as follows. (1) Employing basic economic principles, interconnected deterministic and stochastic models of linear log-spot and expected log-spot price processes, coupled with a non-linear volatility process, are initiated. (2) Closed-form solutions of the models are analyzed. (3) Introducing a change of probability measure, a risk-neutral interconnected stochastic model is derived. (4) Furthermore, under the risk-neutral measure, the expectation of the square of volatility is reduced to a continuous-time deterministic delay differential equation. (5) A by-product of this exhibits the hereditary effects on the mean-square volatility process. (6) Using a numerical scheme, a time-series model is developed and utilized to estimate the states and parameters of the dynamic model; the developed time-series model includes the extended GARCH model as a special case. (7) Using the Henry Hub natural gas data set, the usefulness of the linear interconnected stochastic models is outlined. (8) Using natural and basic economic ideas, the interconnected deterministic and stochastic models in (1) are extended to non-linear log-spot price, expected log-spot price and volatility processes. (9) The presented extended models are validated. (10) Closed-form solutions and risk-neutral models of (8) are outlined. (11) To exhibit the usefulness of the non-linear interconnected stochastic model, to increase efficiency, and to reduce the magnitude of error, it was essential to develop a modified version of the extended Kalman filtering approach; the modified approach exhibits a reduction in the magnitude of error, and the Henry Hub natural gas data set is used to show the advantages of the non-linear interconnected stochastic model. (12) Parameter and state estimation problems for continuous-time non-linear stochastic dynamic processes motivated an alternative, innovative approach, leading to the concept of statistic processes, namely the local sample mean and sample variance. (13) This in turn led to the development of an interconnected discrete-time dynamic system of local statistic processes and (14) its mathematical model. (15) This paved the way for an innovative approach referred to as Local Lagged Adapted Generalized Method of Moments (LLGMM). This approach exhibits the balance between model specification and model prescription of continuous-time dynamic processes. (16) In addition, it motivated conceptual computational state and parameter estimation and simulation schemes that generate a mean-square sub-optimal procedure. (17) The usefulness of this approach is illustrated by applying the technique to four energy commodity data sets, the U.S. Treasury Bill Yield Interest Rate and the U.S. Eurocurrency Exchange Rate data sets for state and parameter estimation problems. (18) Moreover, the forecasting and confidence-interval problems are also investigated. (19) The non-linear interconnected stochastic model in (8) is further extended to multivariate interconnected energy commodities and sources, with and without external random intervention processes. (20) Moreover, the interconnected discrete-time dynamic system of local sample mean and variance processes is extended to a multivariate discrete-time dynamic system. (21) Extending the LLGMM approach in (15) to a multivariate interconnected stochastic dynamic model under an intervention process, the parameters of the multivariate interconnected stochastic model are estimated. These estimated parameters help in analyzing the short-term and long-term relationships between the energy commodities. The developed results are applied to the Henry Hub natural gas, crude oil and coal data sets.
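The local statistic processes at the heart of LLGMM are rolling sample moments computed over a lagged window. A fixed-window version is sketched below; the actual method chooses the lag window adaptively at each time point, which is the essential (and harder) ingredient.

    import numpy as np

    def local_moments(y, m):
        # local sample mean and variance from the m most recent observations
        means = np.array([y[max(0, k - m):k].mean() for k in range(1, len(y))])
        varis = np.array([y[max(0, k - m):k].var(ddof=1) if k > 1 else 0.0
                          for k in range(1, len(y))])
        return means, varis

    rng = np.random.default_rng(7)
    spot = 100.0 + np.cumsum(rng.standard_normal(1_000))  # stand-in spot series
    m_t, s2_t = local_moments(spot, m=20)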
APA, Harvard, Vancouver, ISO, and other styles
42

Sanjab, Anibal Jean. "Statistical Analysis of Electric Energy Markets with Large-Scale Renewable Generation Using Point Estimate Methods." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/74356.

Full text
Abstract:
The restructuring of the electric energy market and the proliferation of intermittent renewable-energy-based power generation have introduced serious challenges to power system operation, emanating from the uncertainties introduced into the system variables (electricity prices, congestion levels, etc.). In order to operate the system economically and run the energy market efficiently, a statistical analysis of the system variables under uncertainty is needed. Such an analysis can be performed by estimating the statistical moments of these variables. In this thesis, Point Estimate Methods (PEMs) are applied to the optimal power flow (OPF) problem to estimate the statistical moments of the locational marginal prices (LMPs) and of the total generation cost under system uncertainty. An extensive mathematical examination and risk analysis of existing PEMs are performed and a new PEM scheme is introduced. The applied PEMs consist of two schemes introduced by H. P. Hong, namely the 2n and 2n+1 schemes, and a proposed combination of Hong's and M. E. Harr's schemes. The accuracy of the applied PEMs in estimating the statistical moments of system LMPs is illustrated, and the performance of the suggested combination of Harr's and Hong's PEMs is shown. Moreover, the risks of applying Hong's 2n scheme to the OPF problem are discussed by showing that it can potentially yield inaccurate LMP estimates or run into infeasibility of the OPF problem. In addition, a new PEM configuration is introduced, derived from a PEM introduced by E. Rosenblueth; it can accommodate asymmetry and correlation of input random variables in a more computationally efficient manner than its Rosenblueth counterpart.
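For reference, Hong's 2n+1 scheme evaluates the response function at two shifted points per input plus once at the mean vector, with locations and weights built from each input's first four moments. The formulas below follow the usual published statement of Hong's method and should be checked against the thesis before reuse; the response surface and input moments are invented.

    import numpy as np

    def hong_2n_plus_1(h, mu, sigma, skew, kurt):
        xi1 = skew / 2 + np.sqrt(kurt - 0.75 * skew**2)
        xi2 = skew / 2 - np.sqrt(kurt - 0.75 * skew**2)
        w1 = 1.0 / (xi1 * (xi1 - xi2))
        w2 = -1.0 / (xi2 * (xi1 - xi2))
        w0 = 1.0 - np.sum(1.0 / (kurt - skew**2))   # aggregated central weight
        est = w0 * h(mu)                            # estimate of E[h(X)]
        for k in range(len(mu)):
            for xi, w in ((xi1[k], w1[k]), (xi2[k], w2[k])):
                x = mu.copy()
                x[k] = mu[k] + xi * sigma[k]        # shift one input at a time
                est += w * h(x)
        return est

    # toy OPF-like response with three Gaussian inputs (skew 0, kurtosis 3)
    h = lambda x: x[0]**2 + 3 * x[1] + np.exp(0.1 * x[2])
    mu = np.array([1.0, 2.0, 0.5]); sigma = np.array([0.2, 0.5, 0.1])
    mean_h = hong_2n_plus_1(h, mu, sigma, np.zeros(3), np.full(3, 3.0))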
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
43

Al, Masry Zeina. "Processus gamma étendus en vue des applications à la fiabilité." Thesis, Pau, 2016. http://www.theses.fr/2016PAUU3020/document.

Full text
Abstract:
This thesis is dedicated to the study of the functioning of an industrial system. It proposes and develops a new model for the accumulative degradation of a system. The standard gamma process is widely used to model the evolution of system deterioration. However, a notable restriction of the standard gamma process is that its variance-to-mean ratio is constant over time, which can be too restrictive within an applicative context. To overcome this drawback, we propose to use an extended gamma process, introduced by Cinlar (1980), which does not suffer from this restriction. There is a cost, however: the use of an extended gamma process presents some technical difficulties. For example, there is no explicit formula for the probability distribution of an extended gamma process. These technical difficulties led Guida et al. (2012) to use a discrete version of an extended gamma process. We work here with the original continuous-time version. The aim of this work is to develop numerical methods to compute the related reliability quantities and statistical methods to estimate the parameters of the model. Another part of this work consists of proposing a maintenance policy within the context of an extended gamma process.
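A naive discretization conveys what an extended gamma process Z(t) = int_0^t b(s) dY(s) looks like: freeze the scale function b on each small interval and draw independent gamma increments with shape a(t_{i+1}) - a(t_i). The shape and scale functions below are arbitrary choices; the thesis develops sharper numerical methods than this Euler-type scheme.

    import numpy as np

    rng = np.random.default_rng(8)
    a = lambda t: 2.0 * t          # shape function of the underlying gamma process
    b = lambda t: 1.0 + 0.5 * t    # time-varying scale: the variance-to-mean
                                   # ratio is no longer constant over time

    t = np.linspace(0.0, 10.0, 1_001)
    dY = rng.gamma(shape=np.diff(a(t)), scale=1.0)  # independent gamma increments
    Z = np.concatenate([[0.0], np.cumsum(b(t[:-1]) * dY)])  # degradation path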
APA, Harvard, Vancouver, ISO, and other styles
44

De la Rey, Tanja. "Two statistical problems related to credit scoring / Tanja de la Rey." Thesis, North-West University, 2007. http://hdl.handle.net/10394/3689.

Full text
Abstract:
This thesis focuses on two statistical problems related to credit scoring. In credit scoring of individuals, two classes are distinguished, namely low and high risk individuals (the so-called "good" and "bad" risk classes). Firstly, we suggest a measure which may be used to study the nature of a classifier for distinguishing between the two risk classes. Secondly, we derive a new method, DOUW (detecting outliers using weights), which may be used to fit logistic regression models robustly and to detect outliers. In the first problem, the focus is on a measure which may be used to study the nature of a classifier. This measure transforms a random variable so that it has the same distribution as another random variable. Assuming a linear form of this measure, three methods for estimating the parameters (slope and intercept) and for constructing confidence bands are developed and compared by means of a Monte Carlo study. The application of these estimators is illustrated on a number of data sets, and we also construct statistical tests of the linearity assumption. In the second problem, the focus is on providing a robust logistic regression fit and identifying outliers. It is well known that maximum likelihood estimators of logistic regression parameters are adversely affected by outliers. We propose a robust approach, called DOUW, that also serves as an outlier detection procedure. The approach is based on associating high and low weights with the observations as a result of the likelihood maximization; it turns out that the outliers are those observations to which low weights are assigned. The procedure depends on two tuning constants, and a simulation study is presented to show the effects of these constants on the performance of the proposed methodology. The results are presented for four benchmark data sets as well as a large new data set from the application area of retail marketing campaign analysis. In the last chapter we apply the techniques developed in this thesis to a practical credit scoring data set. We show that the DOUW method improves the classifier performance, and that the measure developed to study the nature of a classifier is useful in a credit scoring context and may be used for assessing whether the distributions of the good and the bad risk individuals are from the same translation-scale family.
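The sketch below is not DOUW itself, whose weights emerge from the likelihood maximization and depend on two specific tuning constants; it only illustrates the shared idea that a weighted logistic fit can downweight ill-fitting observations, whose low final weights then flag candidate outliers. The deviance cutoff c is a hypothetical stand-in.

    import numpy as np

    def sigmoid(s):
        return 1.0 / (1.0 + np.exp(-s))

    def weighted_logistic(X, y, c=2.0, iters=20):
        beta, w = np.zeros(X.shape[1]), np.ones(len(y))
        for _ in range(iters):
            p = sigmoid(X @ beta)
            Wd = w * p * (1 - p)
            H = X.T @ (Wd[:, None] * X) + 1e-8 * np.eye(X.shape[1])
            beta = beta + np.linalg.solve(H, X.T @ (w * (y - p)))  # IRLS step
            p = sigmoid(X @ beta)
            dev = -2 * (y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
            w = np.where(dev > c, c / dev, 1.0)   # downweight large residuals
        return beta, w                            # low w flags outliers

    rng = np.random.default_rng(9)
    X = np.column_stack([np.ones(500), rng.standard_normal(500)])
    y = (rng.random(500) < sigmoid(X @ np.array([-1.0, 2.0]))).astype(float)
    y[:10] = 1 - y[:10]                           # contaminate a few labels
    beta_hat, weights = weighted_logistic(X, y)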
Thesis (Ph.D. (Risk Analysis))--North-West University, Potchefstroom Campus, 2008.
APA, Harvard, Vancouver, ISO, and other styles
45

Forrester, Andrew C. "Equity Returns and Economic Shocks: A Survey of Macroeconomic Factors and the Co-movement of Asset Returns." Miami University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=miami1512128483719638.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Kebewar, Mazen. "La structure du capital et son impact sur la profitabilité et sur la demande de travail : analyses théoriques et empiriques sur données de panel françaises." Phd thesis, Université d'Orléans, 2012. http://tel.archives-ouvertes.fr/tel-00762748.

Full text
Abstract:
This thesis contributes to the literature along three main lines of research related to capital structure: the determinants of capital structure, profitability, and labor demand. (i) The theoretical foundation of the determinants of capital structure shows that three models can explain capital structure: the optimal debt ratio (trade-off) theory, the pecking order theory, and, more recently, the market timing theory. The empirical evaluation shows a positive effect of adjustment costs and of collateral, whereas growth opportunities, non-debt tax shields and profitability are negatively correlated with debt. (ii) The impact of capital structure on profitability can be explained by three essential theories: signaling theory, the influence of taxation, and agency theory. The empirical analysis distinguishes three different groups of sectors: in the first group, capital structure has no impact on profitability; in the second, debt affects profitability negatively in a linear way; the last group is characterized by a negative effect that is both linear and non-linear. (iii) Theoretically, a negative impact of capital structure on labor demand is expected. The empirical application shows heterogeneous behavior across sectors regarding the effect of debt on labor demand, with the same three groups of sectors emerging (no effect, a linear negative effect, and a linear and non-linear negative effect). Moreover, the magnitude of the effect of debt on labor demand and on profitability depends not only on the sector but also on firm size.
APA, Harvard, Vancouver, ISO, and other styles
47

Tamagnini, Filippo. "EKF based State Estimation in a CFI Copolymerization Reactor including Polymer Quality Information." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20235/.

Full text
Abstract:
State estimation is an integral part of modern control techniques, as it allows the state information of complex plants to be characterized from a limited number of measurements and knowledge of the process model. The benefit is twofold: on one hand it has the potential to rationalize the number of measurements required to monitor the plant, thus reducing costs; on the other hand it enables information to be extracted about variables that affect the system but would otherwise be inaccessible to direct measurement. The scope of this thesis is to design a state estimator for a tubular copolymerization reactor, with the aim of providing the full state information of the plant and characterizing the quality of the product. Because only a small number of state variables can be observed with the existing set of measurements, a new differential pressure sensor is installed in the plant to provide the missing information, and a model for the pressure measurement is developed. Next, the state estimation problem is approached rigorously, and a comprehensive method for analyzing, tuning and implementing the state estimator is assembled from the scientific literature, using a variety of tools from graph theory, linear observability theory and matrix algebra. Data reduction and visualization techniques are also employed to make sense of high-dimensional information. The proposed method is then tested in simulations to assess the effect of the tuning parameters and of the measurement set on estimator performance during initialization and in the case of estimation with plant-model mismatch. Finally, the state estimator is tested with plant data.
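At its core the estimator runs the standard EKF predict/update cycle; a generic sketch follows, with illustrative process and measurement models standing in for the reactor and pressure-sensor models developed in the thesis.

    import numpy as np

    def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
        # prediction through the (nonlinear) process model
        x_pred = f(x, u)
        F = F_jac(x, u)
        P_pred = F @ P @ F.T + Q
        # correction with the (nonlinear) measurement model
        H = H_jac(x_pred)
        S = H @ P_pred @ H.T + R                    # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
        x_new = x_pred + K @ (z - h(x_pred))
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # one-dimensional toy usage: random-walk state, direct noisy observation
    f = lambda x, u: x
    h = lambda x: x
    F_jac = lambda x, u: np.eye(1)
    H_jac = lambda x: np.eye(1)
    x, P = np.zeros(1), np.eye(1)
    x, P = ekf_step(x, P, None, np.array([0.7]), f, h, F_jac, H_jac,
                    0.01 * np.eye(1), 0.1 * np.eye(1))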
APA, Harvard, Vancouver, ISO, and other styles
48

Munasib, Abdul B. A. "Lifecycle of social networks: A dynamic analysis of social capital accumulation." The Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=osu1121441394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Xu, Xingbai. "Asymptotic Analysis for Nonlinear Spatial and Network Econometric Models." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1461249529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Lima, André Fernandes. "Estudo da relação causal entre os níveis organizacionais de folga, o risco e o desempenho financeiro de empresas manufatureiras." Universidade Presbiteriana Mackenzie, 2009. http://tede.mackenzie.br/jspui/handle/tede/848.

Full text
Abstract:
Fundo Mackenzie de Pesquisa
This dissertation investigates the existence of a causal relationship between levels of organizational slack, firm risk, and performance. The point of departure is the conjecture that the magnitude of organizational slack is a determinant of the risk represented by the firm as well as of its performance. The importance of this research lies in the empirical fact that the owners of a firm are willing to take risks based on the prospect of returns. To test this causal relationship, data from 218 manufacturing companies over the period 2001-2007 are collected, part of which is combined through factor analysis so as to compose the three types of organizational slack considered: available, recoverable and potential. The data are then arranged as a panel and analyzed with the generalized method of moments (GMM), which constitutes an original contribution. The results support the validity of the two proposed models, the first taking risk as the dependent variable and the second future performance, corroborating the hypothesis that organizational slack has a nonlinear influence on risk and performance. In addition, the future performance model proves more robust, which is the second contribution of this research: most of the literature emphasizes the influence of organizational slack on risk while neglecting its significance for performance. We argue that this limited attention to performance has contributed to the inconclusive empirical results available in the literature.
APA, Harvard, Vancouver, ISO, and other styles