
Dissertations / Theses on the topic 'Generalized Method of Moments (GMM)'



Consult the top 50 dissertations / theses for your research on the topic 'Generalized Method of Moments (GMM).'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Koci, Eni. "The stochastic discount factor and the generalized method of moments." Digital WPI, 2006. https://digitalcommons.wpi.edu/etd-theses/873.

Full text
Abstract:
"The fundamental theorem of asset pricing in finance states that the price of any asset is its expected discounted payoff. Ideally, the payoff is discounted by a factor, which depends on parameters present in the market, and it should be unique, in the sense that financial derivatives should be able to be priced using the same discount factor. In theory, risk neutral valuation implies the existence of a positive random variable, which is called the stochastic discount factor and is used to discount the payoffs of any asset. Apart from asset pricing another use of stochastic discount factor is to evaluate the performance of the of hedge fund managers. Among many methods used to evaluate the stochastic discount factor, generalized method of moments has become very popular. In this paper we will see how generalized method of moments is used to evaluate the stochastic discount factor on linear models and the calculation of stochastic discount factor using generalized method of moments for the popular model in finance CAPM. "
2

Augustine-Ohwo, Odaro. "Estimating break points in linear models : a GMM approach." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/estimating-break-points-in-linear-models-a-gmm-approach(804d83e3-dad8-4cda-b1e1-fbfce7ef41b8).html.

Full text
Abstract:
In estimating econometric time series models, it is assumed that the parameters remain constant over the period examined. This assumption may not always be valid when using data which span an extended period, as the underlying relationships between the variables in these models are exposed to various exogenous shifts. It is therefore imperative to examine the stability of models as failure to identify any changes could result in wrong predictions or inappropriate policy recommendations. This research proposes a method of estimating the location of break points in linear econometric models with endogenous regressors, estimated using Generalised Method of Moments (GMM). The proposed estimation method is based on Wald, Lagrange Multiplier and Difference type test statistics of parameter variation. In this study, the equation which sets out the relationship between the endogenous regressor and the instruments is referred to as the Jacobian Equation (JE). The thesis is presented along two main categories: Stable JE and Unstable JE. Under the Stable JE, models with a single and multiple breaks in the Structural Equation (SE) are examined. The break fraction estimators obtained are shown to be consistent for the true break fraction in the model. Additionally, using the fixed break approach, their $T$-convergence rates are established. Monte Carlo simulations which support the asymptotic properties are presented. Two main types of Unstable JE models are considered: a model with a single break only in the JE and another with a break in both the JE and SE. The asymptotic properties of the estimators obtained from these models are intractable under the fixed break approach, hence the thesis provides essential steps towards establishing the properties using the shrinking breaks approach. Nonetheless, a series of Monte Carlo simulations conducted provide strong support for the consistency of the break fraction estimators under the Unstable JE. A combined procedure for testing and estimating significant break points is detailed in the thesis. This method yields a consistent estimator of the true number of breaks in the model, as well as their locations. Lastly, an empirical application of the proposed methodology is presented using the New Keynesian Phillips Curve (NKPC) model for U.S. data. A previous study has found this NKPC model is unstable, having two endogenous regressors with Unstable JE. Using the combined testing and estimation approach, similar break points were estimated at 1975:2 and 1981:1. Therefore, using the GMM estimation approach proposed in this study, the presence of a Stable or Unstable JE does not affect estimations of breaks in the SE. A researcher can focus directly on estimating potential break points in the SE without having to pre-estimate the breaks in the JE, as is currently performed using Two Stage Least Squares.
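As a purely illustrative aside, and not the GMM-based estimator with endogenous regressors developed in the thesis: the basic logic of locating a break by maximizing a parameter-variation test statistic over candidate break dates can be sketched for a simple OLS regression as follows (simulated data, hypothetical trimming fraction).

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400
x = rng.standard_normal(T)
beta = np.where(np.arange(T) < 0.6 * T, 1.0, 2.0)     # true break at fraction 0.6
y = beta * x + rng.standard_normal(T)

def wald_at(k):
    """Wald statistic for equality of slopes before and after candidate break k (OLS)."""
    def ols(xs, ys):
        b = (xs @ ys) / (xs @ xs)
        e = ys - b * xs
        return b, (e @ e) / (len(ys) - 1) / (xs @ xs)  # slope and its variance
    b1, v1 = ols(x[:k], y[:k])
    b2, v2 = ols(x[k:], y[k:])
    return (b1 - b2) ** 2 / (v1 + v2)

trim = int(0.15 * T)                                   # trim 15% at each end
k_hat = max(range(trim, T - trim), key=wald_at)
print("estimated break fraction:", k_hat / T)
```

The estimated break fraction is the candidate date at which the statistic peaks; the thesis extends this logic to Wald, LM and Difference statistics in a GMM setting with endogenous regressors.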
3

Gajic, Ruzica, and Isabelle Söder. "Arbetslöshetsförsäkringens finansiering : Hur påverkas arbetslöshetskassornas medlemsantal av en förhöjd grad av avgiftsfinansiering?" Thesis, Uppsala University, Department of Economics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-126711.

Full text
Abstract:

Since the turn of the year 2006/2007, the number of members of the unemployment insurance funds has fallen drastically. Over the same period, several reforms have been carried out in the area of unemployment insurance, which among other things have resulted in higher membership fees for most of the funds. The purpose of this thesis is to examine whether any relationship can be found over time between changes in membership numbers and membership fees. To investigate this, one must, in addition to the fees, also take into account other variables linked to the unemployment insurance. These other variables are the basic benefit amount, the maximum daily benefit, the replacement rate and unemployment. We formulate a model for the relationship between membership numbers and these variables and estimate it with the Generalized Method of Moments using data from 2000-2009. In line with theory and previous research, our results show a negative relationship between membership fees and the number of members of the unemployment funds. This relationship turns out to be strong, especially in the long run. To see more clearly how fee changes affect different types of individuals to different degrees, we have also examined whether membership in funds linked to white-collar and blue-collar unions is differently sensitive to changes in the fee. In contrast to earlier studies, our results show that the funds linked to the white-collar unions (TCO and Saco) are more sensitive to changes than those linked to the blue-collar unions (LO). This gives reason to believe that there are factors other than the fees and the other variables included in our model that affect the membership rate and that can explain the difference between the groups.

4

Tan, David Tatwei (Banking & Finance, Australian School of Business, UNSW). "Corporate governance and firm outcomes: causation or spurious correlation?" Awarded by: University of New South Wales, Banking & Finance, 2009. http://handle.unsw.edu.au/1959.4/43371.

Full text
Abstract:
The rapid growth of financial markets and the increasing diffusion of corporate ownership have placed tremendous emphasis on the effectiveness of corporate governance in resolving agency conflicts within the firm. This study investigates the corporate governance and firm performance/failure relation by implementing various econometric modelling methods to disaggregate causal relations and spurious correlations. Using a panel dataset of Australian firms, a comprehensive suite of corporate governance mechanisms is considered, including the ownership, remuneration, and board structures of the firm. Initial ordinary least squares (OLS) and fixed-effects panel specifications report significant causal relations between various corporate governance measures and firm outcomes. However, the dynamic generalised method of moments (GMM) results indicate that no causal relations exist when taking into account the effects of simultaneity, dynamic endogeneity, and unobservable heterogeneity. Moreover, these results remain robust when accounting for the firm's propensity for fraud. The findings support the equilibrium theory of corporate governance and the firm, suggesting that a firm's corporate governance structure is an endogenous characteristic determined by other firm factors, and that any observed relations between governance and firm outcomes are spurious in nature. Chapter 2 examines the corporate governance and firm performance relation. Using a comprehensive suite of corporate governance measures, this chapter finds no evidence of a causal relation between corporate governance and firm performance when accounting for the biases introduced by simultaneity, dynamic endogeneity, and unobservable heterogeneity. This result is consistent across all firm performance measures. Chapter 3 explores the corporate governance and likelihood of firm failure relation by implementing the Merton (1974) model of firm valuation. Similarly, no significant causal relations between a firm's corporate governance structure and its likelihood of failure are detected when accounting for the influence of endogeneity on the parameter estimates. Chapter 4 re-examines the corporate governance and firm performance/failure relation within the context of corporate fraud. Using KPMG and ASIC fraud databases, the corporate governance and firm outcome relations are estimated whilst accounting for the firms' vulnerability to corporate fraud. This chapter finds no evidence of a causal relation between corporate governance and firm outcomes when conditioning on a firm's propensity for fraud.
5

Lima, André Fernandes. "Estudo da relação causal entre os níveis organizacionais de folga, o risco e o desempenho financeiro de empresas manufatureiras." Universidade Presbiteriana Mackenzie, 2009. http://tede.mackenzie.br/jspui/handle/tede/848.

Full text
Abstract:
Fundo Mackenzie de Pesquisa
This dissertation aims to investigate the existence of a causal relationship between levels of organizational slack, the risk of the company and its performance. The point of departure is the conjecture that the magnitude of organizational slack is a determinant factor of the risk as well as the performance of the company. The importance of this piece of research lies in the empirical fact that owners of a company are willing to take risks based on the prospect of returns. In order to test the causal relationship, it proceeds as follows. First, it collects data from 218 manufacturing companies in the period 2001-2007 and combines part of the data through factor analysis so as to compose the three types of organizational slack: available, recoverable and potential. Second, the data is arranged in the form of a panel and is then assessed by the generalized method of moments (GMM). The results support the validity of the two proposed models: the first takes risk as the dependent variable, while the second takes future performance. The findings corroborate the hypothesis that organizational slack has a nonlinear influence on risk and performance. In addition, they shed light on the increased robustness of the second model relative to the first one. This is regarded as the second contribution of the dissertation, given that most of the literature emphasizes the influence of organizational slack on risk while neglecting its role in performance. We go on to claim that the little attention paid to performance contributes to the inconclusive empirical results available in the literature.
6

Asad, Humaira. "Effective financial development, inequality and poverty." Thesis, University of Exeter, 2012. http://hdl.handle.net/10036/3583.

Full text
Abstract:
This thesis addresses the question of whether the impact of financial development on the relative and absolute indicators of poverty depends on the level of human capital present in an economy. To answer this question, we first develop a theoretical framework to explain the growth process in the context of financial development, assuming that human capital is heterogeneous in terms of the skills and education people have. Then, using data sets based on five-year averages over 1960-2010 and 1980-2010, covering 107 developed and developing countries, we empirically investigate the extensions of the theoretical framework developed earlier. These extensions cover the relationships between: (1) income inequality and economic growth; (2) financial development, human capital and income inequality; and (3) financial development, human capital and poverty. We provide empirical evidence using modern panel data techniques of dynamic and static GMM. The findings elucidate that income inequality and economic growth are interdependent. There exists an inverse relationship between initial inequality and economic growth. The changes in income inequality follow the pattern identified by Kuznets (1955), known as the Kuznets hypothesis. The results also show that financial development helps in reducing income inequalities and in alleviating poverty only when there is a sufficient level of human capital available. On the basis of our findings we develop the term "effective financial development", which means that financial development is effective in accelerating growth, reducing income inequalities and alleviating poverty only if there is a sufficient level of human capital available. The empirical study covers multiple aspects of financial development, such as private credit extended by banks and other financial institutions, liquid liabilities and stock market capitalization. The results of the empirical investigations are robust to multiple data sets and various indicators of income inequality, financial development, poverty and human capital. The study also provides a marginal analysis, which helps in understanding the impact of financial development on inequality and poverty at different levels of human capital. This research study of effective financial development can be a useful learning paradigm for academics and researchers interested in growth economics and keen to learn how poverty and income inequality can be reduced effectively. This study can also be useful for policy makers in financial institutions, because it provides robust empirical evidence that financial development cannot help in alleviating poverty and in reducing inequalities unless there is a sufficient level of human capital available. The findings can be useful for policy makers, particularly in developing countries where high levels of income inequality and poverty are serious problems. This study explains the mechanism of how effective financial development can be used to reduce income inequalities and to alleviate poverty. It also explains the process of inter-linkages between financial development, human capital, inequality, economic growth and financial instability. Policy makers can also take advantage of the marginal analyses that illustrate the minimum levels of private credit and primary and secondary schooling above which the effects of financial development and human capital become significant in reducing inequalities and poverty.
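The marginal analysis mentioned in this abstract is typically obtained from an interaction specification; as a hedged illustration in generic notation (the thesis's exact specification may differ), with financial development FD interacted with human capital HC:

```latex
y_{it} = \alpha + \beta_1\, FD_{it} + \beta_2\, HC_{it} + \beta_3\,(FD_{it}\times HC_{it}) + \gamma' x_{it} + \eta_i + \varepsilon_{it},
\qquad
\frac{\partial\, \mathbb{E}[y_{it}]}{\partial\, FD_{it}} = \beta_1 + \beta_3\, HC_{it}
```

so the effect of financial development on inequality or poverty changes sign at HC_{it} = -\beta_1/\beta_3, which is the kind of threshold level of schooling or credit that a marginal analysis reports.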
7

Ruzibuka, John S. "The impact of fiscal deficits on economic growth in developing countries : Empirical evidence and policy implications." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/16282.

Full text
Abstract:
This study examines the impact of fiscal deficits on economic growth in developing countries. Based on deduction from the relevant theoretical and empirical literature, the study tests the following hypotheses regarding the impact of fiscal deficits on economic growth. First, fiscal deficits have a significant positive or negative impact on economic growth in developing countries. Second, the impact of fiscal deficits on economic growth depends on the size of deficits as a percentage of GDP - that is, there is a non-linear relationship between fiscal deficits and economic growth. Third, the impact of fiscal deficits on economic growth depends on the ways in which deficits are financed. Fourth, the impact of fiscal deficits on economic growth depends on what deficit financing is used for. The study also examines whether there are any significant regional differences in terms of the relationship between fiscal deficits and economic growth in developing countries. The study uses panel data for thirty-one developing countries covering the period 1972-2001, which is analysed based on the econometric estimation of a dynamic growth model using the Arellano and Bond (1991) generalised method of moments (GMM) technique. Overall, the results suggest the following. First, fiscal deficits per se have no significant positive or negative impact on economic growth. Second, by contrast, when the deficit is substituted by domestic and foreign financing, we find that both domestic and foreign financing of fiscal deficits exert a negative and statistically significant impact on economic growth with a lag. Third, we find that both categories of economic classification of government expenditure, namely capital and current expenditure, have no significant impact on economic growth. When government expenditure is disaggregated on the basis of a functional classification, the results suggest that spending on education, defence and economic services has a positive but insignificant impact on growth, while spending on health and general public services has a positive and significant impact. Fourth, in terms of regional differences with regard to the estimated relationships, the study finds that, while there are some regional differences between the four different regions represented in our sample of thirty-one developing countries - namely, Asia and the Pacific, Latin America and the Caribbean, Middle East and North Africa, and Sub-Saharan Africa - these differences are not statistically significant. On the basis of these findings, the study concludes that fiscal deficits per se are not necessarily good or bad for economic growth in developing countries; how the deficits are financed and what they are used for matters. In addition, the study concludes that there are no statistically significant regional differences in terms of the relationship between fiscal deficits and economic growth in developing countries.
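For readers unfamiliar with the Arellano and Bond (1991) estimator cited here, a hedged sketch of the idea in generic notation (not the thesis's exact specification): the dynamic growth model with a country fixed effect is first-differenced to remove the effect, and lagged levels of the series serve as instruments for the differenced equation,

```latex
y_{it} = \alpha\, y_{i,t-1} + \beta' x_{it} + \eta_i + \varepsilon_{it}
\;\;\Longrightarrow\;\;
\Delta y_{it} = \alpha\, \Delta y_{i,t-1} + \beta' \Delta x_{it} + \Delta\varepsilon_{it},
\qquad
\mathbb{E}\!\left[\, y_{i,t-s}\, \Delta\varepsilon_{it} \,\right] = 0 \quad \text{for } s \ge 2,
```

where the moment conditions are valid provided \varepsilon_{it} is serially uncorrelated; the resulting set of moments is estimated by GMM.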
9

Alsaraireh, Ahmad. "Firm's value, financing constraints and dividend policy in relation to firm's political connections." Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/15824.

Full text
Abstract:
The relationship between politicians and firms has attracted a considerable amount of research, especially in developing countries, where firms' political links are a widespread phenomenon. However, the existing literature offers contradictory views about this relationship, especially regarding the impact of firms' political connections on firms' market performance. Furthermore, there is limited evidence on the impact of firms' political connections on some of the important corporate decisions, including firms' investment- and dividend-policies. Therefore, this thesis seeks to fill these gaps by offering three empirical essays with Jordan as a case study. The first essay examines the impact of firms' political links on their values by controlling for macroeconomic conditions. Also, in the extended models, by specifying three major events which occurred after 2008, namely the establishment of the Anti-Corruption Commission (ACC), the Global Financial Crisis, and the Arab Uprisings, we investigate the effects of these events on the relationship between firms' political ties and their value. The findings of this essay indicate that politically connected firms have higher values compared to their non-connected counterparts in Jordan. Moreover, it is found that firms with stronger political ties have higher values than firms with weaker ties. Furthermore, the positive effect of political connections continues even after controlling for the macroeconomic conditions, though the latter are considered to be more important than political connections for firm valuation due to their impact on the share price. Interestingly, findings show that the events occurring after 2008 do not seem to have affected the relationship between political connections and firm value, since the significant positive impact of political ties on firm value persists during the post-event period. The second empirical essay studies the role of political connections in mitigating firms' financing constraints. Moreover, it investigates the effect of the strength of political connections in alleviating these constraints. Finally, it looks at the impact of the above-mentioned three events which occurred after 2008, notwithstanding the new banking Corporate Governance Code issued in 2007. Findings of this essay reveal that firms' political connections are important in mitigating their financing constraints. Furthermore, the results show that stronger political connections seem to reduce financing constraints more than weaker connections. Finally, findings show that the impact of firms' political connections diminished during the post-event period (2008-2014). The third essay examines how a firm's political connections can affect its dividend policy. It also considers the impact of the strength of political connections on dividend policy. Finally, we extend the empirical analysis by investigating any shift in the relationship between political connections and dividends due to the events of the Global Financial Crisis, the Arab Uprisings, and the adoption of the International Financial Reporting Standards (IFRS). Results of this essay reveal that a firm's political connections have a significant positive impact on both the propensity to pay dividends and the dividend-payout ratio. Regarding the impact of the strength of political connections on dividends, it is found that firms with weaker political connections pay out more in dividends than firms with stronger connections. In terms of the impact of the events which occurred after 2008 on the relationship between political connections and dividends, the findings show that the impact of these connections on dividends is eliminated.
10

Forrester, Andrew C. "Equity Returns and Economic Shocks: A Survey of Macroeconomic Factors and the Co-movement of Asset Returns." Miami University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=miami1512128483719638.

Full text
11

Kebewar, Mazen. "La structure du capital et son impact sur la profitabilité et sur la demande de travail : analyses théoriques et empiriques sur données de panel françaises." Phd thesis, Université d'Orléans, 2012. http://tel.archives-ouvertes.fr/tel-00762748.

Full text
Abstract:
This thesis contributes to the literature on three main lines of research relating to capital structure: the determinants of capital structure, profitability, and labour demand. (i) The theoretical foundation of the determinants of capital structure shows that there are three models that can explain capital structure: the trade-off theory, the pecking-order theory and, more recently, the market-timing theory. Moreover, the empirical evaluation shows a positive effect of adjustment costs and of collateral. By contrast, growth opportunities, the non-debt tax shield and profitability are negatively correlated with debt. (ii) The impact of capital structure on profitability can be explained by three essential theories: signalling theory, the influence of taxation, and agency theory. The empirical analysis distinguishes three different groups of sectors: for the first group, capital structure has no effect on profitability; the second is the group in which debt affects profitability negatively in a linear way; the last group is characterized by a negative effect that is both linear and non-linear. (iii) Theoretically, a negative impact of capital structure on labour demand is expected. The empirical application shows heterogeneous behaviour across sectors with respect to the effect of debt on labour demand, so there are again three different groups of sectors (no effect, a linear negative effect, and a linear and non-linear negative effect). Moreover, the magnitude of the effect of debt on labour demand and on profitability depends not only on the sector but also on firm size.
12

Kebewar, Mazen. "La structure du capital et son impact sur la profitabilité et sur la demande de travail : analyses théoriques et empiriques sur données de panel françaises." Electronic Thesis or Diss., Orléans, 2012. http://www.theses.fr/2012ORLE0501.

Full text
Abstract:
This thesis contributes to the literature in three main areas of research about capital structure: the determinants of capital structure, profitability, and labour demand. (i) The theoretical basis of the determinants of capital structure shows that there are three models that explain capital structure: trade-off theory, pecking-order theory and market-timing theory. Further, the empirical evaluation shows a positive effect of adjustment costs and tangibility. On the other hand, growth opportunities, the non-debt tax shield and profitability are negatively correlated with debt. (ii) The impact of capital structure on profitability can be explained by three essential theories: signalling theory, tax theory and the agency-cost theory. The empirical analysis allowed us to distinguish three different groups of sectors: for the first group, capital structure has no impact on profitability; the second is the group where debt affects profitability negatively in a linear way; the last group is characterized by the presence of a negative effect in both a linear and a non-linear way. (iii) Theoretically, a negative impact of capital structure on labour demand is expected. The empirical application shows heterogeneity of behaviour between sectors regarding the impact of debt on the demand for labour; therefore, there are three different groups of sectors (i.e. no effect, a negative linear effect, and a linear and non-linear negative effect). Furthermore, the magnitude of the effect of debt on labour demand and on profitability depends not only on the sector but also on the size of the company.
13

Talukdar, Muhammad Bakhtear U. "CFO Turnover, Firm’s Debt-Equity Choice and Information Environment." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2618.

Full text
Abstract:
The CEO and CFO are the two key executives of a firm. They work cohesively to ensure the growth of the firm. After the adoption of the Sarbanes Oxley Act (SOX) in 2002, the importance of CFOs has increased due to their personal legal obligation in certifying the accuracy of financial statements. Only a few papers such as Mian (2001), Fee and Hadlock (2004), and Geiger and North (2006) focus on CFOs in the pre-SOX era. However, a vacuum exists in research focusing exclusively on CFOs in the post-SOX era. The purpose of this dissertation is to delve into a comprehensive investigation of the CFOs. More specifically, I answer three questions: a) does the CEO change lead to the CFO change? b) does the CFO appointment type affect the firm’s debt-equity choice? and c) does the CFO appointment affect the firm’s information environment? I use Shumway’s (2001) dynamic hazard model in answering question ‘a’. For question ‘b’, I use instrumental variable (IV) regression under various estimation techniques to control for endogeneity. For part ‘c’, I use the cross sectional difference-in-difference (DND) methodology by pairing treatment firms with control firms chosen by the propensity scores matching (PSM). I find there is about a 70% probability of CFO replacement after the CEO replacement. Both of their replacements are affected by prior year’s poor performance. In addition, as a custodian of the firm’s financial reporting, the CFO is replaced proactively due to a probability of restatement of earnings. I find firms with internal CFO hires issue more equity in the year of appointment than firms with external hires. The promoted CFO significantly improves the firm’s overall governance which helps the firm obtain external financing from equity issue. However, I find that CFO turnover does not significantly affect the firm’s information environment. To ensure that my finding is not due to mixing up of samples of good and distressed firms together, I separated distressed firms and re-ran my models and my finding still holds. This dissertation fills the gap in the literature with regards to CFOs and their post SOX relationship with the firm.
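To illustrate the matching step mentioned above, here is a hedged, self-contained sketch of propensity-score matching followed by a difference-in-differences comparison; it is not the dissertation's data or exact specification, and all variable names, the covariates and the simulated effect size are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
X = rng.standard_normal((n, 3))                            # hypothetical firm covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # "treatment" (e.g. CFO turnover)
pre = X @ np.array([0.5, 0.2, -0.1]) + rng.standard_normal(n)
post = pre + 0.3 * treated + 0.1 + rng.standard_normal(n)  # true treatment effect = 0.3

# 1) Propensity scores from observed covariates
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2) Nearest-neighbour matching of each treated firm to a control on the propensity score
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# 3) Difference-in-differences on the matched sample
did = np.mean((post - pre)[t_idx]) - np.mean((post - pre)[matches])
print("matched DiD estimate of the treatment effect:", round(float(did), 3))
```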
14

Lai, Yanzhao. "Generalized method of moments exponential distribution family." View electronic thesis (PDF), 2009. http://dl.uncw.edu/etd/2009-2/laiy/yanzhaolai.pdf.

Full text
15

Shin, Changmock. "Entropy Based Moment Selection in Generalized Method of Moments." NCSU, 2005. http://www.lib.ncsu.edu/theses/available/etd-06072005-112026/.

Full text
Abstract:
GMM provides a computationally convenient estimation method and the resulting estimator can be shown to be consistent and asymptotically normal under the fairly moderate regularity conditions. It is widely known that the information content in the population moment condition has impacts on the quality of the asymptotic approximation to finite sample behavior. This dissertation focuses on a moment selection procedure that leads us to choose relevant (asymptotically efficient and non-redundant) moment conditions in the presence of weak identification. The contributions of this dissertation can be characterized as follows: in the framework of linear model, (i) the concept of nearly redundant moment conditions is introduced and the connection between near redundancy and weak identification is explored; (ii) performance of RMSC(c) is evaluated when weak identification is a possibility but the parameter vector to be estimated is not weakly identified by the candidate set of moment conditions; (iii) performance of RMSC(c) is also evaluated when the parameter vector is weakly identified by the candidate set; (iv) a combined strategy of Stock and Yogo's (2002) test for weak identification and RMSC(c) is introduced and evaluated; (v) (i) and (ii) are extended to allow for nonlinear dynamic models. The subsequent simulation results support the analytical findings: when only a part of instruments in the set of possible candidates for instruments are relevant and the others are redundant given all or some of the relevant ones, RMSC(c) chooses all the relevant instruments with high probabilities and improves the quality of the post-selection inferences; when the candidates are in order of their importance, a combined strategy of Stock and Yogo's (2002) pretest and RMSC(c) improves the post-selection inferences, however it tends to select parsimonious models; when all the possible candidates are equally important, it seems that RMSC(c) does not provide any merits. However, in the last case, asymptotic efficiency and non-redundancy can be achieved by basing the estimation and inference on all the possible candidates.
16

Liang, Yitian. "Generalized method of moments : theoretical, econometric and simulation studies." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/36866.

Full text
Abstract:
The GMM estimator is widely used in the econometrics literature. This thesis mainly focuses on three aspects of the GMM technique. First, I derive proofs of the asymptotic properties of the GMM estimator under certain conditions. To the best of my knowledge, the original complete proofs proposed by Hansen (1982) are not easily available. In this thesis, I provide complete proofs of consistency and asymptotic normality of the GMM estimator under some stronger assumptions than those in Hansen (1982). Second, I illustrate the application of the GMM estimator in linear models. Specifically, I emphasize the economic reasoning underneath the linear statistical models where the GMM estimator (also referred to as the instrumental variable estimator) is widely used. Third, I perform several simulation studies to investigate the performance of the GMM estimator under different situations.
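A minimal simulation sketch of the kind of consistency check mentioned above (hypothetical data-generating process, not the thesis's design): for a linear model with one endogenous regressor and one instrument, the just-identified GMM/IV estimator should concentrate around the true coefficient as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(3)
beta_true = 2.0

def iv_estimate(n):
    z = rng.standard_normal(n)                              # instrument
    u = rng.standard_normal(n)                              # structural error
    x = 0.8 * z + 0.5 * u + 0.3 * rng.standard_normal(n)    # endogenous regressor
    y = beta_true * x + u
    # Just-identified GMM: solve the sample moment (1/n) * sum z_i (y_i - b x_i) = 0
    return (z @ y) / (z @ x)

for n in (100, 1_000, 10_000, 100_000):
    draws = [iv_estimate(n) for _ in range(200)]
    print(n, round(float(np.mean(draws)), 3), round(float(np.std(draws)), 3))
```

The mean of the draws approaches 2.0 and their spread shrinks as n grows, which is the finite-sample counterpart of the consistency and asymptotic-normality results the thesis proves.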
17

CUNHA, JOAO MARCO BRAGA DA. "ESTIMATING ARTIFICIAL NEURAL NETWORKS WITH GENERALIZED METHOD OF MOMENTS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=26922@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
Artificial Neural Networks (ANN) started being developed in the 1940s. However, it was during the 1980s that ANNs became relevant, pushed by the popularization and increasing power of computers. Also in the 1980s, there were two other academic events closely related to the present work: (i) a large increase of interest in nonlinear models from econometricians, culminating in the econometric approaches for ANNs by the end of that decade; and (ii) the introduction of the Generalized Method of Moments (GMM) for parameter estimation in 1982. In econometric approaches for ANNs, estimation by Quasi Maximum Likelihood (QML) has always prevailed. Despite its good asymptotic properties, QML is very prone to an issue in finite-sample estimation known as overfitting. This thesis expands the state of the art in econometric approaches for ANNs by presenting an alternative to QML estimation that keeps its good asymptotic properties and is less prone to overfitting. The presented approach relies on GMM estimation. As a byproduct, GMM estimation allows the use of the so-called J test to verify the existence of neglected nonlinearity. The Monte Carlo studies performed indicate that the estimates from GMM are more accurate than those generated by QML in situations with high noise, especially in small samples. This result supports the hypothesis that GMM is less susceptible to overfitting. Exchange rate forecasting experiments reinforced these findings. A second Monte Carlo study revealed satisfactory finite-sample properties of the J test applied to neglected nonlinearity, compared with a widely known and used reference test. Overall, the results indicated that estimation by GMM is a better alternative, especially for data with a high noise level.
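For readers who want to see what the J test involves, here is a hedged sketch of Hansen's over-identification J statistic in a generic over-identified linear GMM problem (simulated data; this is not the neural-network setting of the thesis).

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)
n = 2_000
z = rng.standard_normal((n, 3))                      # 3 instruments for 1 coefficient
x = z @ np.array([0.6, 0.4, 0.2]) + rng.standard_normal(n)
y = 1.5 * x + rng.standard_normal(n)

zx, zy = z.T @ x / n, z.T @ y / n                    # sample cross-moments

def gmm_b(W):
    """Linear GMM estimate of b minimizing gbar(b)' W gbar(b), gbar(b) = E_n[z (y - b x)]."""
    return (zx @ W @ zy) / (zx @ W @ zx)

b1 = gmm_b(np.eye(3))                                # first step, identity weights
g = z * (y - b1 * x)[:, None]                        # moment contributions at b1
W2 = np.linalg.inv((g - g.mean(0)).T @ (g - g.mean(0)) / n)
b2 = gmm_b(W2)                                       # efficient second step

gbar = (z * (y - b2 * x)[:, None]).mean(axis=0)
J = n * gbar @ W2 @ gbar                             # Hansen's J, chi2(q - p) under H0
print("b =", round(float(b2), 3), " J =", round(float(J), 2),
      " p =", round(float(chi2.sf(J, df=3 - 1)), 3))
```

A large J (small p-value) signals that the moment conditions are jointly rejected, which in the thesis's setting is interpreted as evidence of neglected nonlinearity.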
18

Lahlou, Mehdi, and Sebastian Sandstedt. "Where There’s Smoke, There’s Fire : An Analysis of the Riksbank’s Interest Setting Policy." Thesis, Stockholms universitet, Nationalekonomiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-143163.

Full text
Abstract:
We analyse the Swedish central bank, the Riksbank’s, interest setting policy in a Taylor rule framework. In particular, we examine whether or not the Riksbank has reacted to fluctuations in asset prices during the period 1995:Q1 to 2016:Q2. This is done by estimating a forward-looking Taylor rule with interest rate smoothing, augmented with stock prices, house prices and the real exchange rate, using IV GMM. In general, we find that the Riksbank’s interest setting policy is well described by a forward-looking Taylor rule with interest rate smoothing and that the use of factors as instruments, derived from a PCA, serves to alleviate the weak-identification problem that tend to plague GMM. Moreover, apart from finding evidence that the Riksbank exhibit a substantial degree of policy rate inertia and has acted so as to stabilize inflation and the real economy, we also find evidence that the Riksbank has been reacting to fluctuations in stock prices, house prices, and the real exchange rate.
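A hedged sketch, in generic notation, of what a forward-looking Taylor rule with interest-rate smoothing augmented with asset prices typically looks like (the thesis's exact variables, timing and instruments may differ):

```latex
i_t = (1-\rho)\left[\, \bar{r} + \beta\,\mathbb{E}_t\pi_{t+1} + \gamma\, x_t + \delta' a_t \,\right] + \rho\, i_{t-1} + \varepsilon_t ,
```

where x_t is a measure of real activity and a_t collects the asset-price terms (stock prices, house prices, the real exchange rate). Replacing the expectation with the realized value of \pi_{t+1} and imposing orthogonality conditions E[\varepsilon_t z_t] = 0 for instruments z_t dated t or earlier yields the moment conditions estimated by IV GMM.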
19

Zhou, Zhuzhu. "Essays in Social Choice and Econometrics:." Thesis, Boston College, 2021. http://hdl.handle.net/2345/bc-ir:109181.

Full text
Abstract:
Thesis advisor: Uzi Segal
The dissertation studies the property of transitivity in the social choice theory. I explain why we should care about transitivity in decision theory. I propose two social decision theories: redistribution regret and ranking regret, study their properties of transitivity, and discuss the possibility to find a best choice for the social planner. Additionally, in the joint work, we propose a general method to construct a consistent estimator given two parametric models, one of which could be incorrectly specified. In “Why Transitivity”, to explain behaviors violating transitivity, e.g., preference reversals, some models, like regret theory, salience theory were developed. However, these models naturally violate transitivity, which may not lead to a best choice for the decision maker. This paper discusses the consequences and the possible extensions to deal with it. In “Redistribution Regret and Transitivity”, a social planner wants to allocate resources, e.g., the government allocates fiscal revenue or parents distribute toys to children. The social planner cares about individuals' feelings, which depend both on their assigned resources, and on the alternatives they might have been assigned. As a result, there could be intransitive cycles. This paper shows that the preference orders are generally non-transitive but there are two exceptions: fixed total resource and one extremely sensitive individual, or only two individuals with the same non-linear individual regret function. In “Ranking Regret”, a social planner wants to rank people, e.g., assign airline passengers a boarding order. A natural ranking is to order people from most to least sensitive to their rank. But people's feelings can depend both on their assigned rank, and on the alternatives they might have been assigned. As a result, there may be no best ranking, due to intransitive cycles. This paper shows how to tell when a best ranking exists, and that when it exists, it is indeed the natural ranking. When this best does not exist, an alternative second-best group ranking strategy is proposed, which resembles actual airline boarding policies. In “Over-Identified Doubly Robust Identification and Estimation”, joint with Arthur Lewbel and Jinyoung Choi, we consider two parametric models. At least one is correctly specified, but we don't know which. Both models include a common vector of parameters. An estimator for this common parameter vector is called Doubly Robust (DR) if it's consistent no matter which model is correct. We provide a general technique for constructing DR estimators (assuming the models are over identified). Our Over-identified Doubly Robust (ODR) technique is a simple extension of the Generalized Method of Moments. We illustrate our ODR with a variety of models. Our empirical application is instrumental variables estimation, where either one of two instrument vectors might be invalid
Thesis (PhD) — Boston College, 2021
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Economics
20

Thurston, David Curtis. "A generalized method of moments comparison of several discrete time stochastic models of the term structure in the Heath-Jarrow-Morton arbitrage-based framework." Diss., The University of Arizona, 1992. http://hdl.handle.net/10150/185902.

Full text
Abstract:
This paper tests a new methodology; the discrete time no arbitrage-based model of Heath, Jarrow and Morton (HJM). From within Ho and Lee's framework, HJM's model is shown to encompass Ho and Lee's AR model as a special case. Several discrete stochastic models of the term structure based on restrictions placed on the variance of the forward rate process are discussed. These models are tested in HJM's no arbitrage-based framework. For testing, it is necessary to use current bond prices to substitute out for the market price of risk implied in the initial term structure. In this way, additional current bond prices appear in the pricing formulas, but the market price of risk does not. Several sets of forward rate models are tested. To avoid measurement errors associated with fitting splines to coupon-bearing bonds, coupon-free data are used. Weekly T-bill quotes over a twenty-three year period, starting in 1968 are split into two equal sets about the structural break of October 7, 1979 following the shift in the Federal Reserve's monetary policy. These two data sets are split in half for further testing. Hansen's Generalized Method of Moments (GMM) is employed to estimate the models' parameters with a minimum of assumptions. Because the models are not nested, the resulting J statistics are not suitable for model comparisons. As an alternative, "simulated residuals" resulting from the imposition of the parameter values obtained from the GMM estimation are calculated. The model generating the set of simulated residuals with the smallest variance is assumed to have the best fit. The F test is used for pairwise comparisons of the models. The sets of simulated residuals are not normally distributed. However, unless two samples are from radically different distributions, the F test is quite robust to the assumption of sample normality and can still be used to perform an informal comparison of two similar samples.
21

Burk, David Morris. "Estimating the Effect of Disability on Medicare Expenditures." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/2127.

Full text
Abstract:
We consider the effect of disability status on Medicare expenditures. Disabled elderly historically have accounted for a significant portion of Medicare expenditures. Recent demographic trends exhibit a decline in the size of this population, causing some observers to predict declines in Medicare expenditures. There are, however, reasons to be suspicious of this rosy forecast. To better understand the effect of disability on Medicare expenditures, we develop and estimate a model using the generalized method of moments technique. We find that newly disabled elderly generally spend more than those who have been disabled for longer periods of time. Also, we find that increases in expenditures have risen much more quickly for those disabled Medicare beneficiaries who were at the higher ends of the expenditure distribution before the increases.
22

Hokayem, Charles. "ESSAYS ON HUMAN CAPITAL, HEALTH CAPITAL, AND THE LABOR MARKET." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_diss/23.

Full text
Abstract:
This dissertation consists of three essays concerning the effects of human capital and health capital on the labor market. Chapter 1 presents a structural model that incorporates a health capital stock into the traditional learning-by-doing model. The model allows health to affect future wages by interrupting current labor supply and on-the-job human capital accumulation. Using data on sick time from the Panel Study of Income Dynamics, the model is estimated using a nonlinear Generalized Method of Moments estimator. The results show human capital production exhibits diminishing returns. Health capital production increases with the current stock of health capital, or better current health improves future health. Among prime-age working men, the effect of health on human capital accumulation is relatively small. Chapter 2 explores the role of another form of human capital, noncognitive skills, in explaining racial gaps in wages. Chapter 2 adds two noncognitive skills, locus of control and self-esteem, to a simple wage specification to determine the effect of these skills on the racial wage gap (white, black, and Hispanic) and the return to these skills across the wage distribution. The wage specifications are estimated using pooled, between, and quantile estimators. Results using the National Longitudinal Survey of Youth 1979 show these skills account for differing portions of the racial wage gap depending on race and gender. Chapter 3 synthesizes the idea of health and on-the-job human capital accumulation from Chapter 1 with the idea of noncognitive skills in Chapter 2 to examine the influence of these skills on human capital and health capital accumulation in adult life. Chapter 3 introduces noncognitive skills to a life cycle labor supply model with endogenous health and human capital accumulation. Noncognitive skills, measured by degree of future orientation, self-efficacy, trust-hostility, and aspirations, exogenously affect human capital and health production. The model uses noncognitive skills assessed in the early years of the Panel Study of Income Dynamics and relates these skills to health and human capital accumulation during adult life. The main findings suggest individuals with high self-efficacy receive higher future wages.
23

AMATI, VIVIANA. "New statistics for the parameters estimation of the stochastic actor-oriented model for network change." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/19389.

Full text
Abstract:
The Stochastic actor-oriented model (SAO) is a statistical model for longitudinal network data. The most often used procedure for the estimation of the parameter of the SAO model is the Method of Moments (MoM), which estimates the parameters using one observed statistic for each estimated parameter. A new set of statistics is defined taking into account the different ways of creating and deleting ties to which a certain effect can contribute. This definition leads to having more than one statistic for a single parameter, i.e. to an over-determined system of equations. Thus, the ordinary MoM cannot be applied. A suitable method then is the Generalized Method of Moments (GMM), an estimation technique mainly used in econometrics, and potentially more efficient than the MoM. Like the regular MoM, the GMM is based on the differences between the expected values of the statistics and their sample counterparts, but the GMM involves the minimization of a quadratic function of these differences rather than setting all differences to 0. This means that an extra problem arises: the determination of a matrix of weights reflecting the different importance and correlations of the statistics involved. An optimization-simulation algorithm is used, following the approach suggested by Gelman (1995) and based on the Newton-Raphson algorithm, to compare the estimators deriving from the MoM and the GMM. Simulation results suggest that the new set of statistics performs better when network observations are close. In fact, in this context the standard errors of the GMM estimators are lower than those of the MoM.
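To fix ideas for the contrast drawn in this abstract between the ordinary MoM and the GMM, the standard textbook formulation (generic notation, not specific to the SAO model) is:

```latex
\hat{\theta}_{GMM} = \arg\min_{\theta}\; \big[\, s - \mathbb{E}_{\theta}(s) \,\big]' \, W \, \big[\, s - \mathbb{E}_{\theta}(s) \,\big],
\qquad
W_{\mathrm{opt}} \propto \operatorname{Cov}_{\theta}(s)^{-1},
```

where s is the vector of observed statistics. With exactly as many statistics as parameters, any positive-definite W yields the ordinary method-of-moments solution that sets all deviations to zero; with more statistics than parameters, the weight matrix determines how the deviations are traded off, which is the weighting problem discussed above.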
APA, Harvard, Vancouver, ISO, and other styles
24

Loum, Mor Absa. "Modèle de mélange et modèles linéaires généralisés, application aux données de co-infection (arbovirus & paludisme)." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS299/document.

Full text
Abstract:
Nous nous intéressons, dans cette thèse, à l'étude des modèles de mélange et des modèles linéaires généralisés, avec une application aux données de co-infection entre les arbovirus et les parasites du paludisme. Après une première partie consacrée à l'étude de la co-infection par un modèle logistique multinomial, nous proposons dans une deuxième partie l'étude des mélanges de modèles linéaires généralisés. La méthode proposée pour estimer les paramètres du mélange est une combinaison d'une méthode des moments et d'une méthode spectrale. Nous proposons à la fin une dernière partie consacrée aux mélanges de valeurs extrêmes en présence de censure. La méthode d'estimation proposée dans cette partie se fait en deux étapes basées sur la maximisation d'une vraisemblance
In this thesis, we are interested in the study of mixture models and generalized linear models, with an application to co-infection data between arboviruses and malaria parasites. After a first part dedicated to the study of co-infection using a multinomial logistic model, we propose in a second part to study mixtures of generalized linear models. The proposed method to estimate the parameters of the mixture is a combination of a moment method and a spectral method. Finally, we propose a final section dedicated to studying extreme value mixtures under random censoring. The estimation method proposed in this section proceeds in two steps based on the maximization of a likelihood.
APA, Harvard, Vancouver, ISO, and other styles
25

Brandão, Jose Wellington. "Os efeitos da estrutura de propriedade sobre a politica de dividendos da empresa brasileira." Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=14523.

Full text
Abstract:
A despeito dos diversos achados, ao longo de décadas, sobre a política de dividendos, a decisão de pagar dividendos ainda é um tema que segue em debate. Diversos fatores têm sido propostos como capazes de explicar a política de dividendos, como por exemplo, o lucro/rentabilidade, o dividendo prévio (manutenção na política de dividendos), tamanho da empresa, alavancagem, e oportunidades de crescimento. Mais recentemente, a literatura tem explorado a interferência que a estrutura de propriedade pode ter sobre a distribuição de dividendos. Neste contexto surgem as proposições como a das hipóteses relacionadas ao uso da política de dividendos como instrumento de controle da direção executiva, e à possível expropriação de acionistas minoritários por parte dos controladores. O objetivo desta pesquisa é avaliar, sob o marco teórico da Teoria da Agência, se há uso da política de dividendos como instrumento de monitoração executiva ou de expropriação de acionistas minoritários no mercado brasileiro. A amostra é um painel de dados composto por 1890 observações anuais de 223 empresas no período 1996-2012 a partir de dados coletados no sistema Economática de empresas com ações negociadas na Bolsa de Valores de São Paulo. A partir da estimação de um conjunto de modelos explicativos da política de dividendos os resultados indicam que a presença de um acionista majoritário tem um efeito negativo sobre a política de dividendos, em linha com a hipótese de expropriação. Outro resultado relevante é o efeito positivo da presença de outra empresa não financeira, como acionista majoritário ou principal, sobre o nível de distribuição de dividendos, o que está em sintonia com a hipótese de monitoramento da direção executiva.
Despite decades of findings on dividend policy, the decision to pay dividends is still a subject of debate. Several factors have been proposed as able to explain dividend policy, such as profit/profitability, the prior dividend (persistence of the dividend policy), firm size, leverage, and growth opportunities. More recently, the literature has explored the influence that ownership structure may have on the distribution of dividends. In this context arise propositions such as the hypotheses related to the use of dividend policy as an instrument for controlling executive management, and to the possible expropriation of minority shareholders by controlling shareholders. The objective of this research is to evaluate, under the theoretical framework of Agency Theory, whether dividend policy is used as an instrument for executive monitoring or for the expropriation of minority shareholders in the Brazilian market. The sample is a data panel of 1,890 annual observations of 223 companies in the period 1996-2012, built from data collected in the Economática system for companies with shares traded on the São Paulo Stock Exchange. Based on the estimation of a set of explanatory models of dividend policy, the results indicate that the presence of a majority shareholder has a negative effect on dividend policy, in line with the expropriation hypothesis. Another relevant result is the positive effect of the presence of another non-financial company as the majority or main shareholder on the level of dividend distribution, which is in line with the hypothesis of monitoring of executive management.
APA, Harvard, Vancouver, ISO, and other styles
26

Ribarczyk, Bruna Gabriela. "Os efeitos da integração financeira sobre a competitividade externa dos países da União Monetária Europeia." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/132893.

Full text
Abstract:
A adoção de uma moeda única por diferentes países muda significativamente a política econômica desses países. O objetivo desta dissertação, elaborada em forma de artigo, é estudar os efeitos da adoção do euro sobre a competitividade internacional dos países-membros da União Monetária Europeia (UME) com base no arcabouço teórico da teoria das áreas monetárias ótimas. A análise econométrica irá compreender um painel dinâmico com 12 países da UME nos períodos de 2002 a 2013 para inferir se a entrada de capitais teve impacto negativo na competitividade externa dos países periféricos da UME e como que os diferentes tipos de capitais interferiram sobre a taxa de câmbio real efetiva dos países da Zona do Euro. Conclui-se assim que não só a crise é capaz de permitir ganhos de competitividade entre os países da UME, como outros fatores mais desejáveis também, tal como a entrada de outros investimentos da conta financeira do balanço de pagamentos, a abertura comercial e os gastos do governo. Além disso, constata-se que o impacto da mobilidade de capital na competitividade é influenciado não só pelo tipo de capital como também pelo país que recebe esse fluxo.
Adopting a single currency significantly changes the economic policy of the countries involved. The objective of this dissertation, prepared in the form of an article, is to study the effects of the adoption of the euro on the external competitiveness of member countries of the European Monetary Union (EMU), based on the theoretical framework of the theory of optimum currency areas. The econometric analysis comprises a dynamic panel of 12 EMU countries over the period 2002-2013 to infer whether capital inflows had a negative impact on the external competitiveness of the peripheral EMU countries and how different types of capital flows affected the real effective exchange rate of euro-zone countries. It is concluded that not only the crisis allows competitiveness gains among EMU countries; other, more desirable factors do as well, such as the inflow of other investments in the financial account of the balance of payments, trade liberalization and government expenditure. In addition, the impact of capital flows on competitiveness is influenced not only by the type of capital but also by the country that receives the flow.
APA, Harvard, Vancouver, ISO, and other styles
27

Badinger, Harald, and Peter Egger. "Spacey Parents and Spacey Hosts in FDI." WU Vienna University of Economics and Business, 2013. http://epub.wu.ac.at/3924/2/wp154.pdf.

Full text
Abstract:
Empirical trade economists have found that shocks on foreign direct investment (FDI) of some parent country in a host country affect the same parent country's FDI in other hosts (interdependent hosts). Independent of this, there is evidence that shocks on a parent country's FDI in some host economy affect other parent countries' FDI in the same host (interdependent parents). In general equilibrium, shocks on FDI between any country pair will affect all country-pairs' FDI in the world, including any one of the two countries in a pair as well as third countries (interdependent third countries). No attempt has been made so far to allow simultaneously for all three modes of interdependence of FDI. Using cross-sectional data on FDI among 22 OECD countries in 2000, we employ a spatial feasible generalized two-stage least squares and generalized moments estimation framework to allow for all three modes of interdependence across all parent and host countries, thereby distinguishing between market-size-related and remainder interdependence. Our results highlight the complexity of multinational enterprises' investment strategies and the interconnectedness of the world investment system (authors' abstract).
Series: Department of Economics Working Paper Series
APA, Harvard, Vancouver, ISO, and other styles
28

Salerno, André. "A velocidade de ajuste das necessidades de capital de giro: um estudo sobre amostra de empresas listadas na BM&FBovespa." reponame:Repositório Institucional do FGV, 2014. http://hdl.handle.net/10438/13116.

Full text
Abstract:
The main objective of this study is to evaluate some determinants of working capital needs commonly studied in the literature and to analyze how companies move toward a goal (target) of NTC. Such a study is, as far as we know, unprecedented in Brazil. In fact, there is a lack of substantial theories on working capital in the finance area and very few studies can be found. Those who choose to study this subject may see that, given its current stage, it has been researched with the support of more consolidated theoretical bases, such as capital structure. These studies have widely used the concept of a goal/target to determine the optimal capital structure and the speed at which this structure adjusts itself in order to optimize its resources. The fact that such definitions and/or more established theories on the topic do not exist yet set this study in motion. It uses the speed of adjustment towards a working capital goal, with the Partial Adjustment Model (PAM) and the Generalized Method of Moments (GMM) as techniques to support this goal. With this combination, unprecedented in the Brazilian market when it comes to working capital, we hope to bring new contributions to the academic and business communities. In order to obtain the data for this quantitative study, we used existing information from Economatica® and BCB - Central Bank of Brazil. These databases provide the quarterly financial statements between December 31st, 2007 and June 30th, 2014 (adjusted by inflation - IPCA) of companies listed on the BM&FBovespa which have at least 15 consecutive periods (quarters) of data. A total of just over 2,000 observations and 105 companies were studied. As for the method, a dynamic (unbalanced) data panel was used, along with the following techniques, in order to reach the main goal of the study ('What is the speed of adjustment of the Working Capital Requirement?'): the Partial Adjustment Model for the analysis of the determinants of working capital needs and movement towards a goal, and the Generalized Method of Moments (GMM) to control for possible effects of endogeneity (BLUNDELL and BOND, 1998) and to solve problems with residual autocorrelation (PIRES, ZANI and NAKAMURA, 2013, p. 19).
O presente estudo - até onde se sabe inédito no Brasil – possui como principal objetivo avaliar alguns determinantes das necessidades de capital de giro comumente estudados na literatura e analisar de que forma as empresas se movimentam em direção a uma meta (target) de Net Trade Cycle (similar ao Ciclo de Caixa - CCC). Sabemos que o tema capital de giro ainda carece de teorias mais robustas dentro da área de finanças, e poucos estudos ainda são encontrados na literatura. Aqueles que decidem estudá-lo, observam que dado o seu atual estágio, ele tem sido pesquisado com o suporte de bases teóricas mais consolidadas, como por exemplo estrutura de capitais. Esses estudos têm se utilizado muito do conceito de meta para determinar a estrutura ótima de capitais, e com qual velocidade de ajuste procura-se adequar essa estrutura como forma de otimizar seus recursos. O fato de ainda não existir definições e/ou teorias mais definidas sobre o tema foi o grande motivador para a realização desse estudo, que emprega a velocidade de ajuste em direção a uma meta de capital de giro, utilizando como técnica para suporte a esse objetivo o Modelo de Ajustamento Parcial (MAP) e o Generalized Method of Moments (GMM). Com essa combinação inédita no mercado brasileiro quando o assunto é capital de giro, esperamos trazer novas contribuições para as comunidades acadêmicas e empresariais. Para a obtenção dos dados que compõem esse estudo de caráter quantitativo, utilizamos informações existentes na Economatica® e BCB – Banco Central do Brasil. Nessas bases de dados utilizamos os demonstrativos financeiros trimestrais entre os períodos de 31/Dez./2007 a 30/Jun./2014 (ajustados por inflação – IPCA) das empresas listadas na BM&FBovespa que possuíssem pelos menos 15 períodos (trimestres) consecutivos de dados, com isso chegamos a um total de um pouco mais de 2 mil observações e 105 empresas. Quanto ao método, utilizamos Painel de Dados Dinâmico (desbalanceado) e as seguintes técnicas foram empregadas como forma de atender ao principal objetivo do estudo ('Qual é a velocidade de ajuste das Necessidades de Capital de Giro?'): Modelo de Ajustamento Parcial para a análise dos determinantes das necessidades de capital de giro e movimentação em direção a uma meta e; Generalized Method of Moments (GMM) como técnica de controle aos possíveis efeitos de endogeneidade (BLUNDELL e BOND, 1998) e solução para o problema de autocorrelação residual (PIRES, ZANI e NAKAMURA, 2013, p. 19).
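The partial-adjustment logic can be sketched in a few lines: simulate a series that closes a fixed fraction of the gap to its target each period, then recover that fraction from the autoregressive coefficient. The sketch below uses plain OLS on simulated data purely for illustration; the dissertation relies on GMM precisely because OLS is biased in short dynamic panels, and the target and adjustment speed used here are invented.

```python
import numpy as np
import statsmodels.api as sm

# Toy partial-adjustment process: NTC_t = (1 - lam) * NTC_{t-1} + lam * target + noise.
rng = np.random.default_rng(2)
lam_true, target, T = 0.4, 10.0, 300
y = np.empty(T); y[0] = 4.0
for t in range(1, T):
    y[t] = (1 - lam_true) * y[t - 1] + lam_true * target + rng.normal(scale=0.5)

# Regress y_t on y_{t-1}; the adjustment speed is 1 minus the autoregressive coefficient.
X = sm.add_constant(y[:-1])
fit = sm.OLS(y[1:], X).fit()
lam_hat = 1 - fit.params[1]
print("estimated speed of adjustment:", lam_hat)
```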
APA, Harvard, Vancouver, ISO, and other styles
29

Otunuga, Olusegun Michael. "Stochastic Modeling and Analysis of Energy Commodity Spot Price Processes." Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5289.

Full text
Abstract:
Supply and demand in the World oil market are balanced through responses to price movement with considerable complexity in the evolution of underlying supply-demand expectation process. In order to be able to understand the price balancing process, it is important to know the economic forces and the behavior of energy commodity spot price processes. The relationship between the different energy sources and its utility together with uncertainty also play a role in many important energy issues. The qualitative and quantitative behavior of energy commodities in which the trend in price of one commodity coincides with the trend in price of other commodities, have always raised the questions regarding their interactions. Moreover, if there is any interaction, then one would like to know the extent of influence on each other. In this work, we undertake the study to shed a light on the above highlighted processes and issues. The presented study systematically deals with the development of stochastic dynamic models and mathematical, statistical and computational analysis of energy commodity spot price and interaction processes. Below we list the main components of the research carried out in this dissertation. (1) Employing basic economic principles, interconnected deterministic and stochastic models of linear log-spot and expected log-spot price processes coupled with non-linear volatility process are initiated. (2) Closed form solutions of the models are analyzed. (3) Introducing a change of probability measure, a risk-neutral interconnected stochastic model is derived. (4) Furthermore, under the risk-neutral measure, expectation of the square of volatility is reduced to a continuous-time deterministic delay differential equation. (5) The by-product of this exhibits the hereditary effects on the mean-square volatility process. (6) Using a numerical scheme, a time-series model is developed and utilized to estimate the state and parameters of the dynamic model. In fact, the developed time-series model includes the extended GARCH model as special case. (7) Using the Henry Hub natural gas data set, the usefulness of the linear interconnected stochastic models is outlined. (8) Using natural and basic economic ideas, interconnected deterministic and stochastic models in (1) are extended to non-linear log-spot price, expected log-spot price and volatility processes. (9) The presented extended models are validated. (10) Closed form solution and risk-neutral models of (8) are outlined. (11) To exhibit the usefulness of the non-linear interconnected stochastic model, to increase the efficiency and to reduce the magnitude of error, it was essential to develop a modified version of extended Kalman filtering approach. The modified approach exhibits the reduction of magnitude of error. Furthermore, Henry Hub natural gas data set is used to show the advantages of the non-linear interconnected stochastic model. (12) Parameter and state estimation problems of continuous time non-linear stochastic dynamic process is motivated to initiate an alternative innovative approach. This led to introduce the concept of statistic processes, namely, local sample mean and sample variance. (13) Then it led to the development of an interconnected discrete-time dynamic system of local statistic processes and (14) its mathematical model. (15) This paved the way for developing an innovative approach referred as Local Lagged adapted Generalized Method of Moments (LLGMM). 
This approach exhibits the balance between model specification and model prescription of continuous time dynamic processes. (16) In addition, it motivated to initiate conceptual computational state and parameter estimation and simulation schemes that generates a mean square sub-optimal procedure. (17) The usefulness of this approach is illustrated by applying this technique to four energy commodity data sets, the U. S. Treasury Bill Yield Interest Rate and the U.S. Eurocurrency Exchange Rate data sets for state and parameter estimation problems. (18) Moreover, the forecasting and confidence-interval problems are also investigated. (19) The non-linear interconnected stochastic model (8) was further extended to multivariate interconnected energy commodities and sources with and without external random intervention processes. (20) Moreover, it was essential to extend the interconnected discrete-time dynamic system of local sample mean and variance processes to multivariate discrete-time dynamic system. (21) Extending the LLGMM approach in (15) to a multivariate interconnected stochastic dynamic model under intervention process, the parameters in the multivariate interconnected stochastic model are estimated. These estimated parameters help in analyzing the short term and long term relationship between the energy commodities. These developed results are applied to the Henry Hub natural gas, crude oil and coal data sets.
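As a purely conceptual illustration of the "local statistic processes" mentioned in item (12), the sketch below computes rolling (local) sample means and variances of a simulated mean-reverting path and backs out drift parameters by a crude moment match. It is not the LLGMM algorithm itself; the process, window length and parameter values are assumptions made only for the example.

```python
import numpy as np
import pandas as pd

# A toy mean-reverting (Ornstein-Uhlenbeck-like) path stands in for an energy spot-price series.
rng = np.random.default_rng(3)
n, kappa, theta, sigma, dt = 1000, 0.05, 4.0, 0.3, 1.0
x = np.empty(n); x[0] = 4.0
for t in range(1, n):
    x[t] = x[t-1] + kappa * (theta - x[t-1]) * dt + sigma * np.sqrt(dt) * rng.normal()

s = pd.Series(x)
m = 30                                  # local window (fixed here; "lagged adapted" sizing is not attempted)
local_mean = s.rolling(m).mean()        # local sample mean process
local_var  = s.rolling(m).var()         # local sample variance process
print("last local mean/variance:", local_mean.iloc[-1], local_var.iloc[-1])

# Crude moment match for the drift: regress increments on the lagged level.
dx = s.diff()
drift_fit = np.polyfit(s.shift(1).iloc[m:], dx.iloc[m:], 1)   # slope ~ -kappa*dt, intercept ~ kappa*theta*dt
print("implied kappa:", -drift_fit[0] / dt, "implied theta:", drift_fit[1] / (-drift_fit[0]))
```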
APA, Harvard, Vancouver, ISO, and other styles
30

Liu, Xiaodong. "Econometrics on interactions-based models methods and applications /." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180283230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Al, Masry Zeina. "Processus gamma étendus en vue des applications à la fiabilité." Thesis, Pau, 2016. http://www.theses.fr/2016PAUU3020/document.

Full text
Abstract:
La thèse s’intéresse à l’étude du fonctionnement d’un système industriel. Il s’agit de proposer et de développer un nouveau modèle pour modéliser la dégradation accumulative d’un système. Le processus gamma standard est fréquemment utilisé pour étudier l’évolution de la détérioration d’un système. Toutefois, ce processus peut s’avérer inadapté pour décrire le phénomène de dégradation car le rapport variance sur moyenne est constant dans le temps, ce qui est relativement restrictif en pratique. Afin de surmonter cette restriction, nous proposons d’utiliser un processus gamma étendu introduit par Cinlar (1980), qui ne souffre plus de cette restriction. Mais ce dernier présente quelques difficultés techniques. A titre d’exemple, la loi d’un processus gamma étendu n’est pas connue sous une forme explicite. Ces difficultés techniques ont conduit Guida et al. (2012) à utiliser une version discrète d’un processus gamma étendu. Nous travaillons ici avec la version originale d’un processus gamma étendu en temps continu. Le but de ce mémoire est de développer des méthodes numériques permettant de quantifier les quantités fiabilistes associées et de développer des méthodes statistiques d’estimation des paramètres du modèle. Aussi, une autre partie de ce travail consiste à proposer une politique de maintenance dans le contexte d’un processus gamma étendu
This thesis is dedicated to studying the functioning of an industrial system. It is about proposing and developing a new model for modelling the accumulative degradation of a system. The standard gamma process is widely used to model the evolution of system degradation. A notable restriction of a standard gamma process is that its variance-to-mean ratio is constant over time. This may be restrictive within an applicative context. To overcome this drawback, we propose to use an extended gamma process, which was introduced by Cinlar (1980). However, there is a cost, and the use of an extended gamma process presents some technical difficulties. For example, there is no explicit formula for the probability distribution of an extended gamma process. These technical difficulties have led Guida et al. (2012) to use a discrete version of an extended gamma process. We here propose to deal with the original continuous-time version. The aim of this work is to develop numerical methods in order to compute the related reliability function and to develop statistical methods to estimate the parameters of the model. Also, another part of this work consists of proposing a maintenance policy within the context of an extended gamma process.
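A standard gamma degradation process is easy to simulate through independent gamma-distributed increments, which also makes the restriction mentioned above visible: with a constant scale, the variance-to-mean ratio of the path stays constant. The sketch below adds a time-varying scale as a naive discrete approximation in the spirit of the discrete version attributed to Guida et al. (2012); it is not Cinlar's continuous-time construction studied in the thesis, and the shape and scale functions are invented.

```python
import numpy as np

# Discretized sketch of gamma-type degradation processes on a time grid.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 201)
dt = np.diff(t)

a, b = 1.2, 0.8                          # shape measure a*t and constant scale b: standard gamma process
increments = rng.gamma(shape=a * dt, scale=b)
path_standard = np.concatenate([[0.0], np.cumsum(increments)])

# Naive "extended" variant: let the scale vary with time, b(t), so variance-to-mean is no longer constant.
b_t = 0.5 + 0.1 * t[1:]                  # hypothetical scale function, for illustration only
path_extended = np.concatenate([[0.0], np.cumsum(rng.gamma(shape=a * dt, scale=b_t))])
print("terminal degradation levels:", path_standard[-1], path_extended[-1])
```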
APA, Harvard, Vancouver, ISO, and other styles
32

Lin, Xu. "Essays on theories and applications of spatial econometric models." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1147892372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Kiefer, Hua. "Essays on applied spatial econometrics and housing economics." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180467420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Naylor, Guilherme Lima. "O impacto das instituições na renda dos países : uma abordagem dinâmica para dados em painel." Master's thesis, Instituto Superior de Economia e Gestão, 2021. http://hdl.handle.net/10400.5/21704.

Full text
Abstract:
Mestrado em Econometria Aplicada e Previsão
As diferenças nos níveis de renda entre os países vêm sendo estudadas há muito tempo na economia. O capital humano, a produtividade, as instituições e outros fatores foram tidos como determinantes para as discrepâncias verificadas. Este trabalho segue a linha institucionalista ao procurar medir e relacionar o modo como as instituições impactam o nível de renda dos países. Primeiro, faz-se necessário rever brevemente a literatura sobre os modelos de crescimento econômico. Posteriormente, delimita-se o conceito de instituição e descreve-se seu processo de evolução ao longo do tempo. Esse preâmbulo é importante, pois fornece base teórica para os modelos econométricos estimados, que visam a medir os efeitos de diferentes características das instituições sobre o nível de renda dos países. O método escolhido para a análise é a estimação de modelos dinâmicos, por meio da abordagem do estimador do Método dos Momentos Generalizados de Sistema de Blundell e Bond.
Differences in income levels between countries have long been studied in economics. Human capital, productivity, institutions and other factors have been taken as determinants of the discrepancies found. This work follows the institutionalist line in seeking to measure and relate how institutions impact the income level of countries. First, it is necessary to briefly review the literature on economic growth models. Subsequently, the concept of institution is delimited and its evolution over time is described. This preamble is important because it provides a theoretical basis for the estimated econometric models, which aim to measure the effects of different characteristics of institutions on the income level of countries. The method chosen for the analysis is the estimation of dynamic models, using the Blundell and Bond system Generalized Method of Moments estimator approach.
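For readers unfamiliar with dynamic-panel GMM, the following sketch shows the simplest ancestor of the Blundell and Bond system estimator: first-difference the model to remove country fixed effects and instrument the lagged difference with an earlier level (Anderson-Hsiao). The panel is simulated and the single-instrument estimator is a deliberate simplification, not the system GMM specification estimated in the dissertation.

```python
import numpy as np

# Toy dynamic panel y_it = rho*y_{i,t-1} + alpha_i + e_it, estimated by an Anderson-Hsiao type
# IV on first differences -- a simpler relative of the Blundell-Bond system GMM named above.
rng = np.random.default_rng(5)
N, T, rho, sig_e = 1000, 10, 0.6, 1.0
alpha = rng.normal(scale=0.5, size=N)
y = np.zeros((N, T))
y[:, 0] = alpha / (1 - rho) + rng.normal(scale=sig_e / np.sqrt(1 - rho**2), size=N)  # start near stationarity
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(scale=sig_e, size=N)

# Differencing removes alpha_i: dy_t = rho*dy_{t-1} + de_t; the level y_{t-2} instruments dy_{t-1}.
dy = np.diff(y, axis=1)
dep, endog, instr = dy[:, 2:].ravel(), dy[:, 1:-1].ravel(), y[:, 1:-2].ravel()
rho_iv = (instr @ dep) / (instr @ endog)     # just-identified IV; system GMM would add more moments
print("true rho:", rho, "IV estimate:", round(rho_iv, 3))
```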
APA, Harvard, Vancouver, ISO, and other styles
35

Louis, Maryse. "Migration-development nexus : macro and micro empirical evidence." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM1084/document.

Full text
Abstract:
Cette thèse examine la relation complexe et la causalité entre la migration et le développement, sujet d'actualité vus les flux croissants de migrants et les transferts de fonds privés. La revue de la littérature théorique et empirique montre la complexité de cette relation, et l’absence de consensus dégagé par les travaux menés sur les causes et les effets de la migration sur le développement. Sur les causes de migration, une première estimation empirique montre qu’elle fait partie intégrante du processus de développement et n’est donc pas une simple conséquence de faibles niveaux de développement: le niveau de développement des pays d'origine s’accroissant, les aspirations et les capabilités des populations augmentent et si celles-ci font face à l'absence d’opportunités, elles vont migrer à condition d’en avoir les capabilités (compétences requises, moyens financiers, politiques de migration, etc.). Concernant l’impact de la migration, une deuxième estimation empirique montre un effet positif sur le développement via les transferts privés. Les modèles indiquent que leur contribution se fait à travers deux principaux canaux, l'investissement en capital et le capital humain (éducation et santé), lesquels sont susceptibles de permettre un développement à long terme des pays d'origine. Au niveau micro, une troisième série de modèles étudie le mécanisme de cet impact au niveau de ménages, à partir du cas de l'Egypte. Ces modèles confirment l'importance des transferts privés sur les niveaux d'éducation et de santé dans les ménages qui les reçoivent. Ces résultats sont censés contribuer à la compréhension de cette relation complexe entre migration et développement
This thesis is concerned with the causal and complex relation between migration and development. This is a timely subject, especially with increasing flows of migrants and the remittances these migrants send home. Both the theoretical and empirical literature reviews address the complexity of this relation, but consensuses on the causes and impacts of migration on development are generally inconclusive. On the causes of migration, our first empirical estimation shows that migration is part of the development process and not a simple result of its low levels: the increasing development level of the home countries increases the aspirations and capabilities of their populations, and if these are faced with a lack of opportunities at home, individuals seek migration provided they have the right capabilities (skills required, financial means, migration policies, etc.). On the impact of migration, our second empirical estimation gives evidence of a positive impact through remittances on the development of the home countries. The models show the positive contributions of remittances towards development through two main channels: capital investment and human capital (education and health). These two channels are believed to achieve long-term development of the home countries. At the micro level, we look at the mechanism of this impact at the household level, addressing the case study of Egypt. Our third set of models gives evidence of the importance of these remittances in increasing both the education and health status of the recipient households. These findings are believed to make a contribution towards the understanding of this complex relation between migration and development.
APA, Harvard, Vancouver, ISO, and other styles
36

Costa, Rafael Carneiro da. "A relação entre receitas e despesas nos Municípios Brasileiros: uma análise sob as Técnicas de Bootstrap." Universidade Federal do Ceará, 2010. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=5308.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Trabalhos recentes mostraram que a teoria assintótica traz resultados equivocados nos testes de causalidade quando o Método de Momentos Generalizados (MGM) é utilizado. Este estudo re-examina a relação dinâmica entre receitas próprias, despesas correntes e transferências correntes para os governos municipais brasileiros no período de 2000 a 2008. A estimação do modelo de dados em painel dinâmico é feita através do MGM, mas os testes de especificação utilizam valores críticos gerados por bootstrap para fornecer melhor aproximação à distribuição da estatística de teste. Uma defasagem de dois anos é encontrada na equação de despesas, mas nenhuma dinâmica é observada nas equações de receitas próprias e de transferências, sugerindo a hipótese de que receitas passadas afetam despesas correntes.
Recent work has shown that asymptotic theory provides misleading results in causality tests when the Generalized Method of Moments (GMM) is used. This study re-examines the dynamic relationship between own revenues, current expenditures and current grants for Brazilian municipal governments in the period 2000 to 2008. The dynamic panel data model is estimated by GMM, but the specification tests use bootstrap critical values to provide a better approximation to the distribution of the test statistic. A lag of two years is found in the expenditure equation, but no dynamics are observed in the own-revenues and transfers equations, suggesting the hypothesis that past revenues affect current expenditures.
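The bootstrap-critical-value idea is generic and can be sketched independently of the panel model: impose the null, resample, and read the test's critical value from the resampled distribution instead of the asymptotic one. The toy below does this for a simple t-type statistic with invented heavy-tailed data; the thesis applies the same principle to GMM specification tests.

```python
import numpy as np

# Toy illustration: bootstrap critical values for a t-type statistic instead of relying on
# the asymptotic normal approximation.
rng = np.random.default_rng(6)
resid = rng.standard_t(df=4, size=120)          # heavy-tailed "residuals" where asymptotics can mislead

def t_stat(sample):
    return np.sqrt(len(sample)) * sample.mean() / sample.std(ddof=1)

B = 2000
centered = resid - resid.mean()                  # impose the null (zero mean) before resampling
boot = np.array([t_stat(rng.choice(centered, size=len(centered), replace=True)) for _ in range(B)])
crit_boot = np.quantile(np.abs(boot), 0.95)      # bootstrap 5% two-sided critical value
print("bootstrap critical value:", crit_boot, "vs. asymptotic 1.96")
```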
APA, Harvard, Vancouver, ISO, and other styles
37

Chaves, Leonardo Salim Saker. "Asymptotic efficiency in an instrumental variable model." reponame:Repositório Institucional do FGV, 2015. http://hdl.handle.net/10438/13874.

Full text
Abstract:
This work studies hypothesis testing based on generalized method of moments (GMM) estimation given by instrument conditions. Its importance for the development of economics lies in the fact that when identification is weak, the standard tests can be misleading. Therefore, a review is made of the tests proposed to overcome this problem, and two useful frameworks of study are presented, from Moreira (2002), Moreira and Moreira (2013) and Kleibergen (2005). This work reconciles these frameworks by writing the score statistic initially proposed in Kleibergen (2005) using the statistics of Moreira and Moreira (2013), and presents the optimal score test based on the asymptotic theory of Newey and McFadden (1984). Moreover, the study shows the equivalence between GMM and maximum likelihood estimation in dealing with the weak instruments problem.
Esta dissertação se propõe ao estudo de inferência usando estimação por método generalizado dos momentos (GMM) baseado no uso de instrumentos. A motivação para o estudo está no fato de que sob identificação fraca dos parâmetros, a inferência tradicional pode levar a resultados enganosos. Dessa forma, é feita uma revisão dos mais usuais testes para superar tal problema e uma apresentação dos arcabouços propostos por Moreira (2002) e Moreira & Moreira (2013), e Kleibergen (2005). Com isso, o trabalho concilia as estatísticas utilizadas por eles para realizar inferência e reescreve o teste score proposto em Kleibergen (2005) utilizando as estatísticas de Moreira & Moreira (2013), e é obtido usando a teoria assintótica em Newey & McFadden (1984) a estatística do teste score ótimo. Além disso, mostra-se a equivalência entre a abordagem por GMM e a que usa sistema de equações e verossimilhança para abordar o problema de identificação fraca.
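One concrete weak-instrument-robust procedure from this literature is the Anderson-Rubin test, which only requires regressing y − β₀x on the instruments under the hypothesized β₀. The sketch below is a generic illustration with simulated data and a deliberately weak first stage; it is not the optimal score test derived in the dissertation.

```python
import numpy as np
import statsmodels.api as sm

# Sketch of the Anderson-Rubin test, a standard weak-instrument-robust procedure.
rng = np.random.default_rng(7)
n, beta = 400, 1.0
z = rng.normal(size=(n, 2))                       # two instruments
u = rng.normal(size=n); v = 0.5 * u + rng.normal(size=n)
x = z @ np.array([0.25, 0.15]) + v                # weak-ish first stage
y = beta * x + u

def ar_fstat(beta0):
    """Regress y - beta0*x on the instruments; under H0: beta = beta0 the F-stat is robust to weak IV."""
    resid = y - beta0 * x
    fit = sm.OLS(resid, sm.add_constant(z)).fit()
    return fit.fvalue

# The statistic is typically small at the true value and larger at a wrong hypothesized value.
print("AR F at true beta:", ar_fstat(1.0), " AR F at beta=0:", ar_fstat(0.0))
```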
APA, Harvard, Vancouver, ISO, and other styles
38

Silva, Dany Rogers. "Associações entre rating de crédito e estrutura de capitais de empresas listadas na América Latina." reponame:Repositório Institucional do FGV, 2012. http://hdl.handle.net/10438/10247.

Full text
Abstract:
A credit rating of low (or high) risk enables a reduction (or increase) in the spread paid by the issuer at the time of a debt issue, as well as in obtaining financing and bank loans. The rating therefore appears as a relevant aspect in a company's capital structure decisions, mostly because of its possible influence on debt levels. However, despite the importance attributed by market players and the existence of empirical evidence of the effect of ratings on capital structure, the few existing studies on the associations between credit rating reclassification trends and a firm's capital structure decisions have not addressed the Latin American markets. Studies are not common in Latin American markets examining whether companies internally evaluate the imminence of a rating reclassification and, based on this, alter the composition of their capital structure so as to avoid a downgrade, or even to encourage an upgrade, of their credit risk classification. Accordingly, the purpose of this research is to analyze the impact of credit rating reclassification trends on the capital structure decisions of listed Latin American companies. To verify the existence of this association, data were used on all non-financial listed companies in Latin America holding ratings issued by the three major international rating agencies (i.e. Standard & Poor's, Moody's and Fitch) in January 2010. In this way, all listed companies in six different Latin American countries took part in the research, over the period 2001-2010. The main empirical results suggest that: (i) credit rating reclassifications have no informational content for the capital structure decisions of listed Latin American companies, that is, no association was observed between credit rating reclassification trends and decisions about the composition of the capital structure of listed Latin American companies; (ii) among the companies considered in the research, those at the worst risk levels and on the verge of a credit rating reclassification tended to use more debt than the other companies analyzed.
Um rating de crédito de baixo (ou alto) risco possibilita uma redução (ou elevação) do spread pago pelo emissor na ocasião da emissão de títulos de crédito, bem como na captação de financiamentos e empréstimos bancários. Assim, o rating apresenta-se como um aspecto relevante nas decisões de estrutura de capitais de uma empresa, sobretudo pela possibilidade de influenciar nos seus níveis de dívidas. Todavia, apesar da importância atribuída pelos agentes de mercado e a existência de indícios empíricos do efeito do rating sobre a estrutura de capitais de uma empresa, os poucos estudos já realizados acerca das associações entre as tendências de reclassificações dos ratings de crédito e as decisões de estrutura de capitais de uma firma não têm abordado os mercados latino-americanos. Não são comuns nos mercados da América Latina estudos analisando se as empresas avaliam internamente a iminência de uma reclassificação do seu rating e, a partir disso, alteram a sua composição de estrutura de capitais de modo a evitar que ocorra um downgrade, ou mesmo para estimular a ocorrência de um upgrade, em sua classificação de risco de crédito. Nesse sentido, o objetivo desta pesquisa é analisar o impacto das tendências de reclassificações do rating de crédito sobre as decisões de estrutura de capitais de empresas listadas da América Latina. Para verificar a existência dessa associação foram empregados dados pertencentes a todas as empresas não-financeiras listadas da América Latina, possuidoras de ratings emitidos pelas três principais agências de ratings internacionais (i.e. Stardand & Poor´s, Moody´s e Fitch) em janeiro de 2010. Desse modo, fizeram parte da pesquisa todas as empresas listadas em seis diferentes países latino-americanos, no período 2001-2010. Os principais resultados empíricos obtidos sugerem que: (i) as reclassificações dos ratings de crédito não possuem conteúdo informacional para as decisões de estrutura de capitais das empresas listadas da América Latina, ou seja, não foi observada associação entre as tendências de reclassificações do ratings de crédito e as decisões sobre composição das estruturas de capitais das empresas listadas da América Latina; (ii) entre as empresas consideradas na pesquisa, aquelas que se encontravam em níveis piores de riscos e na iminência de reclassificações do rating de crédito, tenderam a utilizar mais dívidas do que as outras empresas analisadas na pesquisa.
APA, Harvard, Vancouver, ISO, and other styles
39

ANDREATTA, DANIELA. "Un’analisi esplorativa delle determinanti della gestione illegale dei rifiuti: il caso italiano." Doctoral thesis, Università Cattolica del Sacro Cuore, 2019. http://hdl.handle.net/10280/55868.

Full text
Abstract:
Negli ultimi anni, la gestione illegale dei rifiuti ha attirato l’attenzione pubblica e dell’accademia. A causa delle sue conseguenze negative non solo per l’ambiente, ma anche per la salute pubblica e la crescita economica, gli esperti hanno cominciato ad esplorare le dinamiche del fenomeno e le possibilità di prevenzione. Alcuni studi hanno evidenziato l’esistenza di diversi fattori che possono determinare la gestione illegale dei rifiuti, ma pochi di essi hanno empiricamente testato la validità dei fattori stessi. Di conseguenza, si avverte la necessità di produrre nuova conoscenza sull’argomento. Il presente studio consiste in un’analisi esplorativa di fattori socio-economici, fattori di policy e di performance, e fattori criminali che influenzano la gestione illegale dei rifiuti in Italia. Dopo aver identificato le determinanti considerate rilevanti dalla letteratura, l’obiettivo è quello di testarle empiricamente. Per prima cosa, grazie all’unicità di un dataset creato sul contesto italiano, nello studio si indaga quantitativamente l’effetto di diversi fattori sul fenomeno attraverso un’analisi econometrica. Successivamente, lo studio prosegue con un’analisi “crime script” al fine di esplorare quali fattori suggeriti dalla letteratura e testati nella parte quantitativa emergono anche da casi studio e come effettivamente intervengono nel ciclo dei rifiuti italiano. I risultati indicano che la gestione illegale dei rifiuti è determinata da: i) uno scarso sviluppo economico e demografico, un alto livello d’istruzione nel territorio, la presenza di turisti; ii) l'inefficienza della normativa ambientale, delle forze dell’ordine e delle prestazioni sui rifiuti; iii) la presenza di criminalità organizzata e la diffusione di crimini economici e fiscali. Prendendo spunto da questi risultati, lo studio non solo aumenta la conoscenza sul fenomeno, ma è anche in grado di avanzare alcuni suggerimenti di policy per contrastare efficacemente le condotte illegali legate alla gestione dei rifiuti.
In the last several decades, illegal waste management (IWM) has attracted great academic and public attention. Due to its negative consequences not only for the environment, but also for public health and economic growth, scholars started to be interested in the dynamics of IWM and in how to prevent it. Some studies stressed the existence of different factors that can determine the phenomenon, but very few of them have empirically tested their validity. Consequently, developing new research on the topic is still necessary. The present study conducts an explorative analysis of the socio-economic, policy and performance-driven and criminal factors influencing IWM in Italy. After the identification of the most relevant determinants according to the literature, the objective is to empirically test them. First, thanks to a unique dataset focused on the Italian context, the study quantitatively investigates the effect of different factors on the phenomenon through an econometric analysis. Second, the study realises a crime script analysis to explore which factors suggested by the literature and tested in the quantitative part emerge also in concrete case studies and how they effectively intervene in the Italian waste cycle. Results indicate that IWM is determined by: i) a low level of economic development and population density, a high level of education and tourists’ presence; ii) inefficiency in environmental regulation, enforcement and waste performances; iii) the presence of organised crime and the diffusion of economic and fiscal crimes. According to these findings, the study not only deepens the knowledge of the phenomenon, but it is also able to provide some policy suggestions to efficiently hinder illegal conducts related to waste management.
APA, Harvard, Vancouver, ISO, and other styles
41

Cincera, Michele. "Economic and technological performances of international firms." Doctoral thesis, Universite Libre de Bruxelles, 1998. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212081.

Full text
Abstract:
The research performed throughout this dissertation aims at implementing quantitative methods in order to assess economic and technological performances of firms, i.e. it tries to assess the impacts of the determinants of technological activity on the results of this activity. For this purpose, a representative sample of the most important R&D firms in the world is constituted. The micro-economic nature of the analysis, as well as its international dimension are two main features of this research at the empirical level.

The second chapter illustrates the importance of R&D investments, patenting activities and other measures of technological activities performed by firms over the last 10 years.

The third chapter describes the main features as well as the construction of the database. The raw data sample consists of comparable detailed micro-level data on 2676 large manufacturing firms from several countries. These firms have reported important R&D expenditures over the period 1980-1994.

The fourth chapter explores the dynamic structure of the patent-R&D relationship by considering the number of patent applications as a function of present and lagged levels of R&D expenditures. R&D spillovers as well as technological and geographical opportunities are taken into account as additional determinants in order to explain patenting behaviours. The estimates are based on recently developed econometric techniques that deal with the discrete non-negative nature of the dependent patent variable as well as the simultaneity that can arise between the R&D decisions and patenting. The results show evidence of a rather contemporaneous impact of R&D activities on patenting. As far as R&D spillovers are concerned, these externalities have a significantly higher impact on patenting than own R&D. Furthermore, these effects appear to take more time, three years on average, to show up in patents.
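The count-data nature of the patent variable mentioned here is usually handled with Poisson-type regressions. As a hedged illustration only, the sketch below fits a Poisson GLM of simulated patent counts on log R&D; the dissertation's models are richer (dynamic, with spillovers and simultaneity corrections), and the elasticity used to generate the data is invented.

```python
import numpy as np
import statsmodels.api as sm

# Toy count-data regression in the spirit of patent-R&D models: Poisson GLM of patent counts on log R&D.
rng = np.random.default_rng(8)
n = 500
log_rd = rng.normal(loc=3.0, scale=1.0, size=n)
lam = np.exp(-1.0 + 0.7 * log_rd)                 # assumed elasticity of 0.7, for illustration only
patents = rng.poisson(lam)

X = sm.add_constant(log_rd)
poisson_fit = sm.GLM(patents, X, family=sm.families.Poisson()).fit()
print(poisson_fit.params)                         # slope ~ elasticity of patenting w.r.t. R&D
```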

The fifth chapter explores the contribution of own stock of R&D capital to productivity performance of firms. To this end the usual productivity residual methodology is implemented. The empirical section presents a first set of results which replicate the analysis of previous studies and tries to assess the robustness of the findings with regard to the above issues. Then, further results, based on different sub samples of the data set, investigate to what extent the R&D contribution on productivity differs across firms of different industries and geographic areas or between small and large firms and low and high-tech firms. The last section explores more carefully the simultaneity issue. On the whole, the estimates indicate that R&D has a positive impact on productivity performances. Yet, this contribution is far from being homogeneous across the different dimensions of data or according to the various assumptions retained in the productivity model.

The last empirical chapter goes deeper into the analysis of firms' productivity increases, by considering, besides own R&D activities, the impact of technological spillovers. The chapter begins by surveying the alternative ways proposed in the literature in order to assess the effect of R&D spillovers on productivity. The main findings reported by some studies at the micro level are then outlined. Then, the framework to formalize technological externalities and other technological determinants is presented. This framework is based on a positioning of firms in a technological space using their patent distribution across technological fields. The question of whether the externalities generated by technological and geographic neighbours have a different effect on the recipient's productivity is also addressed by splitting the spillover variable into a local and a national component. Then, alternative measures of technological proximity are examined. Some interesting observations emerge from the empirical results. First, the impact of spillovers on productivity increases is positive and much more important than the contribution of own R&D. Second, spillover effects are not the same according to whether they emanate from firms specialized in similar technological fields or firms more distant in the technological space. Finally, the magnitude and direction of these effects are radically different within and between the pillars of the Triad. While European firms do not appear to particularly benefit from either national or international sources of spillovers, US firms are mainly receptive to their national stock and Japanese firms take advantage of the international stock.


Doctorat en sciences économiques, Orientation économie

APA, Harvard, Vancouver, ISO, and other styles
42

Ahmed, Mohamed Salem. "Contribution à la statistique spatiale et l'analyse de données fonctionnelles." Thesis, Lille 3, 2017. http://www.theses.fr/2017LIL30047/document.

Full text
Abstract:
Ce mémoire de thèse porte sur la statistique inférentielle des données spatiales et/ou fonctionnelles. En effet, nous nous sommes intéressés à l’estimation de paramètres inconnus de certains modèles à partir d’échantillons obtenus par un processus d’échantillonnage aléatoire ou non (stratifié), composés de variables indépendantes ou spatialement dépendantes.La spécificité des méthodes proposées réside dans le fait qu’elles tiennent compte de la nature de l’échantillon étudié (échantillon stratifié ou composé de données spatiales dépendantes).Tout d’abord, nous étudions des données à valeurs dans un espace de dimension infinie ou dites ”données fonctionnelles”. Dans un premier temps, nous étudions les modèles de choix binaires fonctionnels dans un contexte d’échantillonnage par stratification endogène (échantillonnage Cas-Témoin ou échantillonnage basé sur le choix). La spécificité de cette étude réside sur le fait que la méthode proposée prend en considération le schéma d’échantillonnage. Nous décrivons une fonction de vraisemblance conditionnelle sous l’échantillonnage considérée et une stratégie de réduction de dimension afin d’introduire une estimation du modèle par vraisemblance conditionnelle. Nous étudions les propriétés asymptotiques des estimateurs proposées ainsi que leurs applications à des données simulées et réelles. Nous nous sommes ensuite intéressés à un modèle linéaire fonctionnel spatial auto-régressif. La particularité du modèle réside dans la nature fonctionnelle de la variable explicative et la structure de la dépendance spatiale des variables de l’échantillon considéré. La procédure d’estimation que nous proposons consiste à réduire la dimension infinie de la variable explicative fonctionnelle et à maximiser une quasi-vraisemblance associée au modèle. Nous établissons la consistance, la normalité asymptotique et les performances numériques des estimateurs proposés.Dans la deuxième partie du mémoire, nous abordons des problèmes de régression et prédiction de variables dépendantes à valeurs réelles. Nous commençons par généraliser la méthode de k-plus proches voisins (k-nearest neighbors; k-NN) afin de prédire un processus spatial en des sites non-observés, en présence de co-variables spatiaux. La spécificité du prédicteur proposé est qu’il tient compte d’une hétérogénéité au niveau de la co-variable utilisée. Nous établissons la convergence presque complète avec vitesse du prédicteur et donnons des résultats numériques à l’aide de données simulées et environnementales.Nous généralisons ensuite le modèle probit partiellement linéaire pour données indépendantes à des données spatiales. Nous utilisons un processus spatial linéaire pour modéliser les perturbations du processus considéré, permettant ainsi plus de flexibilité et d’englober plusieurs types de dépendances spatiales. Nous proposons une approche d’estimation semi paramétrique basée sur une vraisemblance pondérée et la méthode des moments généralisées et en étudions les propriétés asymptotiques et performances numériques. Une étude sur la détection des facteurs de risque de cancer VADS (voies aéro-digestives supérieures)dans la région Nord de France à l’aide de modèles spatiaux à choix binaire termine notre contribution
This thesis is about statistical inference for spatial and/or functional data. Indeed, we are interested in estimation of unknown parameters of some models from random or non-random (stratified) samples composed of independent or spatially dependent variables. The specificity of the proposed methods lies in the fact that they take into consideration the nature of the sample considered (stratified or spatial sample). We begin by studying data valued in a space of infinite dimension, or so-called "functional data". First, we study a functional binary choice model explored in a case-control or choice-based sample design context. The specificity of this study is that the proposed method takes into account the sampling scheme. We describe a conditional likelihood function under the sampling distribution and a reduction of dimension strategy to define a feasible conditional maximum likelihood estimator of the model. Asymptotic properties of the proposed estimates as well as their application to simulated and real data are given. Secondly, we explore a functional linear autoregressive spatial model whose particularity lies in the functional nature of the explanatory variable and the structure of the spatial dependence. The estimation procedure consists of reducing the infinite dimension of the functional variable and maximizing a quasi-likelihood function. We establish the consistency and asymptotic normality of the estimator. The usefulness of the methodology is illustrated via simulations and an application to some real data. In the second part of the thesis, we address some estimation and prediction problems of real random spatial variables. We start by generalizing the k-nearest neighbors method, namely k-NN, to predict a spatial process at non-observed locations using some covariates. The specificity of the proposed k-NN predictor lies in the fact that it is flexible and allows for heterogeneity in the covariate. We establish the almost complete convergence with rates of the spatial predictor, whose performance is illustrated by an application to simulated and environmental data. In addition, we generalize the partially linear probit model of independent data to the spatial case. We use a linear process for disturbances allowing various spatial dependencies and propose a semiparametric estimation approach based on weighted likelihood and generalized method of moments methods. We establish the consistency and asymptotic distribution of the proposed estimators and investigate the finite sample performance of the estimators on simulated data. We end with an application of spatial binary choice models to identify UADT (upper aerodigestive tract) cancer risk factors in the north region of France, which displays the highest rates of such cancer incidence and mortality in the country.
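The k-NN spatial prediction idea can be sketched with off-the-shelf tools: treat coordinates plus the covariate as features and predict held-out sites from their nearest observed neighbours. This is a generic illustration on simulated data, not the heterogeneity-adapted predictor developed in the thesis; the surface and noise level are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Minimal k-NN spatial prediction sketch: predict a spatial process at unobserved sites from
# coordinates plus one covariate.
rng = np.random.default_rng(9)
n = 600
coords = rng.uniform(0, 10, size=(n, 2))
covar = rng.normal(size=n)
y = np.sin(coords[:, 0]) + 0.5 * coords[:, 1] + 0.8 * covar + rng.normal(scale=0.2, size=n)

features = np.column_stack([coords, covar])
train, test = slice(0, 500), slice(500, None)
knn = KNeighborsRegressor(n_neighbors=10, weights="distance").fit(features[train], y[train])
pred = knn.predict(features[test])
print("RMSE on held-out sites:", float(np.sqrt(np.mean((pred - y[test]) ** 2))))
```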
APA, Harvard, Vancouver, ISO, and other styles
43

Hamdi, Bilel. "Modélisation des circuits périodiques et quasi-périodiques alimentés par des sources arbitraires." Thesis, Toulouse, INPT, 2015. https://oatao.univ-toulouse.fr/25877/1/HAMDI_Bilel.pdf.

Full text
Abstract:
Planar antenna arrays are renowned for their high directivity and ease of implementation, which make it possible to obtain a steerable radiation pattern. However, a global study that accounts for the various EM couplings through a rigorous electromagnetic theory requires a large amount of memory and considerable computation time. To overcome these drawbacks, we introduce a new theoretical approach based on the Floquet theorem, which reduces the EM analysis volume to a single elementary (unit) cell. We focus on the determination of the coupling terms in an almost-periodic configuration, and in particular for periodic and quasi-periodic antenna arrays fed by arbitrary sources. In this case, we use Floquet mode decompositions (suited to periodic structures) to extract the coupling matrix [S]. These decompositions are long-established concepts supported by solid theoretical foundations. This modal analysis therefore considerably simplifies the resolution of the problem, especially when the radiating elements are strongly coupled. A single numerical method is adopted to model the proposed structure: the method of moments combined with the generalized equivalent circuit (MoM-GEC). The approach is validated by comparison with other exact numerical methods
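
For a finite array of N identical, equally spaced cells, a standard periodic-array relation recovers element-to-element coupling coefficients as the inverse discrete Fourier transform of the per-phase (Floquet) scattering coefficients computed on a single unit cell. The sketch below only illustrates that spectral-to-spatial mapping; it is not the MoM-GEC formulation of the thesis, and the function name and DFT convention are assumptions.

    import numpy as np

    def coupling_from_floquet(S_floquet):
        """Inverse DFT of the Floquet-mode scattering coefficients S(psi_m), sampled
        at the N uniform phase shifts psi_m = 2*pi*m/N, giving the coupling
        coefficient c[p] between elements separated by p cells (same result as
        np.fft.ifft applied to the Floquet spectrum)."""
        S_floquet = np.asarray(S_floquet, dtype=complex)
        N = S_floquet.size
        m = np.arange(N)
        p = np.arange(N).reshape(-1, 1)
        # c[p] = (1/N) * sum_m S(psi_m) * exp(+1j * p * psi_m)
        return (S_floquet * np.exp(2j * np.pi * p * m / N)).sum(axis=1) / N

    # Toy check: a flat Floquet spectrum corresponds to no inter-element coupling;
    # coupling_from_floquet(0.3 * np.ones(8)) gives c[0] = 0.3 and c[p != 0] = 0.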
APA, Harvard, Vancouver, ISO, and other styles
44

"Correlated GMM Logistic Regression Models with Time-Dependent Covariates and Valid Estimating Equations." Master's thesis, 2012. http://hdl.handle.net/2286/R.I.15098.

Full text
Abstract:
When analyzing longitudinal data it is essential to account both for the correlation inherent in the repeated measures of the responses and for the correlation realized on account of the feedback created between the responses at a particular time and the predictors at other times. A generalized method of moments (GMM) for estimating the coefficients in longitudinal data is presented. The appropriate and valid estimating equations associated with the time-dependent covariates are identified, thus providing substantial gains in efficiency over generalized estimating equations (GEE) with the independent working correlation. Identifying the estimating equations for computation is of utmost importance. This thesis provides a technique for identifying the relevant estimating equations through a general method of moments. I develop an approach that makes use of all the valid estimating equations necessary with each time-dependent and time-independent covariate. Moreover, my approach does not assume that feedback is always present over time, or present to the same degree. I fit the GMM correlated logistic regression model in SAS with PROC IML. I examine two datasets for illustrative purposes: rehospitalization in a Medicare database, and data on the relationship between body mass index and future morbidity among children in the Philippines. These datasets allow me to compare my results with those of some earlier methods of analysis.
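
The core building block of this approach can be sketched in a few lines of Python: logistic moment conditions that pair the residual at response time t with the covariate at time s, stacked only over the (s, t) combinations judged valid, and combined through a GMM quadratic form. The data layout, names and the list of valid pairs below are illustrative assumptions, not the author's SAS PROC IML implementation.

    import numpy as np

    def logistic_mu(X, beta):
        return 1.0 / (1.0 + np.exp(-X @ beta))

    def gmm_moments(beta, X, y, valid_pairs):
        """Stack g_{s,t}(beta) = x_{i,s} * (y_{i,t} - mu_{i,t}) over the valid
        (covariate time s, response time t) pairs.
        X: (n, T, p) time-dependent covariates; y: (n, T) binary responses."""
        g = []
        for (s, t) in valid_pairs:                       # e.g. [(0, 0), (0, 1), (1, 1), ...]
            resid = y[:, t] - logistic_mu(X[:, t, :], beta)
            g.append(X[:, s, :] * resid[:, None])        # covariate at time s as instrument
        return np.concatenate(g, axis=1)                 # (n, p * len(valid_pairs))

    def gmm_objective(beta, X, y, valid_pairs, W):
        gbar = gmm_moments(beta, X, y, valid_pairs).mean(axis=0)
        return gbar @ W @ gbar

The objective would be minimised over beta (for example with scipy.optimize.minimize), first with W equal to the identity and then with W re-estimated as the inverse sample covariance of the moments, in the usual two-step fashion.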
Dissertation/Thesis
Arizona Medicare Data on Rehospitalization
Philippine Data on Children's Morbidity
M.S. Statistics 2012
APA, Harvard, Vancouver, ISO, and other styles
45

Akbar, Saeed, J. Poletti-Hughes, R. El-Faitouri, and S. Z. A. Shah. "More on the relationship between corporate governance and firm performance in the UK: Evidence from the application of generalized method of moments estimation." 2016. http://hdl.handle.net/10454/17135.

Full text
Abstract:
This study examines the relationship between corporate governance compliance and firm performance in the UK. We develop a Governance Index and investigate its impact on corporate performance after controlling for potential endogeneity through the use of a more robust methodology, Generalized Method of Moments (GMM) estimation. Our evidence is based on a sample of 435 non-financial publicly listed firms over the period 1999–2009. In contrast to earlier findings in the UK literature, our results suggest that compliance with corporate governance regulations is not a determinant of corporate performance in the UK. We argue that results from prior studies showing a positive impact of corporate governance on firm performance may be biased because they fail to control for potential endogeneity. Reverse causality may also be present in those results, whereby changes in the internal characteristics of firms drive both corporate governance compliance and performance. Our findings are based on GMM, which controls for the effects of unobservable heterogeneity, simultaneity and dynamic endogeneity, and thus present more robust conclusions than the findings of previously published studies in this area.
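
The endogeneity argument rests on instrumented moment conditions of the form E[Z'u] = 0. A minimal sketch of linear two-step GMM in closed form is given below; it is not the paper's exact dynamic-panel system estimator, and the variable names are illustrative.

    import numpy as np

    def two_step_gmm(y, X, Z):
        """Two-step GMM for y = X b + u with instruments Z and E[Z'u] = 0.
        Step 1 uses W = (Z'Z/n)^{-1} (equivalent to 2SLS); step 2 re-weights with
        the inverse covariance of the step-1 moments (efficient under
        heteroskedasticity)."""
        n = y.shape[0]
        W1 = np.linalg.inv(Z.T @ Z / n)
        b1 = np.linalg.solve(X.T @ Z @ W1 @ Z.T @ X, X.T @ Z @ W1 @ Z.T @ y)
        u = y - X @ b1                                   # step-1 residuals
        g = Z * u[:, None]                               # per-observation moments
        W2 = np.linalg.inv(g.T @ g / n)                  # efficient weight estimate
        return np.linalg.solve(X.T @ Z @ W2 @ Z.T @ X, X.T @ Z @ W2 @ Z.T @ y)

In a dynamic panel setting, X would contain the lagged dependent variable and the governance measures in first differences, with suitably lagged levels as instruments in Z (and, for the system variant, lagged differences instrumenting the levels equation).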
APA, Harvard, Vancouver, ISO, and other styles
46

Dovonon, Prosper. "Common factors in stochastic volatility of asset returns and new developments of the generalized method of moments." Thèse, 2007. http://hdl.handle.net/1866/1962.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Enkhbold, Buuruljin. "Finance and Growth Nexus: CEE & Central Asia and Beyond." Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-352487.

Full text
Abstract:
This thesis investigates the effect of financial development on economic growth using both a global sample and regional samples focusing on Central and Eastern Europe (CEE) and Central Asia over the period 1960-2013. The results of fixed-effects panel and system GMM estimators suggest that the effect of private credit on growth was neutral until 2007 and turns negative when the sample is extended to 2013. The negative effect of private credit on growth is largest for CEE and Central Asia, particularly for non-EU countries in the region. Stock market capitalisation and the lending-deposit spread have consistent effects regardless of the choice of time frame, which implies that economies benefit from larger stock markets and a lower lending-deposit spread. Keywords: financial development, credit, stock market, spread, growth, CEE and Central Asia, generalized method of moments (GMM)
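
For reference, the fixed-effects (within) benchmark used alongside system GMM in studies like this amounts to demeaning each country's series before OLS. A minimal sketch, with illustrative names:

    import numpy as np

    def within_estimator(y, X, ids):
        """Fixed-effects (within) estimator: demean y and X within each panel
        unit, then run OLS on the demeaned data.
        y: (n,) outcome; X: (n, k) regressors; ids: (n,) unit identifiers."""
        yd, Xd = y.astype(float).copy(), X.astype(float).copy()
        for u in np.unique(ids):
            m = ids == u
            yd[m] -= yd[m].mean()
            Xd[m] -= Xd[m].mean(axis=0)
        beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
        return beta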
APA, Harvard, Vancouver, ISO, and other styles
48

"Generalized Empirical Likelihood Estimators." Doctoral diss., 2013. http://hdl.handle.net/2286/R.I.18718.

Full text
Abstract:
Schennach (2007) has shown that the Empirical Likelihood (EL) estimator may not be asymptotically normal when a misspecified model is estimated. This problem occurs because the empirical probabilities of individual observations are restricted to be positive. I find that even the EL estimator computed without the restriction can fail to be asymptotically normal for misspecified models if the sample moments weighted by unrestricted empirical probabilities do not have finite population moments. As a remedy for this problem, I propose a group of alternative estimators which I refer to as modified EL (MEL) estimators. For correctly specified models, these estimators have the same higher-order asymptotic properties as the EL estimator. The MEL estimators are obtained by the Generalized Method of Moments (GMM) applied to an exactly identified model. The simulation results provide promising evidence for these estimators. In the second chapter, I introduce an alternative group of estimators to the Generalized Empirical Likelihood (GEL) family. The new group is constructed by employing demeaned moment functions in the objective function while using the original moment functions in the constraints. This modifies the higher-order properties of the estimators. I refer to these new estimators as Demeaned Generalized Empirical Likelihood (DGEL) estimators. Although Newey and Smith (2004) show that, within the GEL family, it is the EL estimator that has fewer sources of bias and is higher-order efficient after bias correction, within the DGEL group it is the demeaned exponential tilting (DET) estimator that has these superior properties. In addition, if data are symmetrically distributed, every estimator in the DGEL family shares the same higher-order properties as the best member.
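
To fix ideas on the GEL machinery referred to here, the exponential-tilting member can be sketched as follows: for a candidate theta, the inner problem chooses a tilting vector so that the exponentially reweighted observations satisfy the moment condition, and the estimator is the saddle point over theta (Kitamura and Stutzer's formulation). The sketch below shows only that inner step, with illustrative names; it is not the MEL or DGEL estimator proposed in the dissertation.

    import numpy as np
    from scipy.optimize import minimize

    def et_inner(theta, g_fn, data):
        """Exponential tilting inner problem: find lam minimising the convex dual
        (1/n) * sum_i exp(lam' g_i(theta)); the implied probabilities
        p_i proportional to exp(lam' g_i) then satisfy sum_i p_i g_i = 0."""
        g = g_fn(theta, data)                            # (n, m) moment functions
        dual = lambda lam: np.mean(np.exp(g @ lam))
        res = minimize(dual, np.zeros(g.shape[1]), method="BFGS")
        w = np.exp(g @ res.x)
        return w / w.sum(), res.fun                      # implied probabilities, dual value

The DGEL idea described above would replace g in the objective by its demeaned version while keeping the original g in the constraint that defines the implied probabilities.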
Dissertation/Thesis
Ph.D. Economics 2013
APA, Harvard, Vancouver, ISO, and other styles
49

Rinauro, Stefano. "Generalized method of moments estimation for QAM carrier acquisition: an image processing approach." Doctoral thesis, 2010. http://hdl.handle.net/11573/918840.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Xiao, Zhiguo. "Topics in generalized method of moments estimation with application to panel data with measurement error." 2008. http://www.library.wisc.edu/databases/connect/dissertations.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
