Academic literature on the topic 'System GMM estimator'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'System GMM estimator.'


Journal articles on the topic "System GMM estimator"

1

Moyo, Vusani. "Dynamic Capital Structure Adjustment: Which estimator yields consistent and efficient estimates?" Journal of Economic and Financial Sciences 9, no. 1 (December 18, 2017): 209–27. http://dx.doi.org/10.4102/jef.v9i1.38.

Abstract:
The partial adjustment model is key to a number of corporate finance research areas. The model is by its nature an autoregressive-distributed lag model that is characterised by heterogeneity among individuals and autocorrelation due to the presence of the lagged dependent variable. Finding a suitable estimator to fit the model can be challenging, as the existing estimators differ significantly in their consistency and bias. This study used data drawn from 143 non-financial firms listed on the Johannesburg Stock Exchange (JSE) to test for the consistency and efficiency of the leading partial adjustment model estimators. The study results confirm the bias-corrected least squares dummy variable (LSDVC) initialised by the system generalised method of moments (GMM) estimator, the random effects Tobit estimator and the system GMM estimator as the most suitable estimators for the partial adjustment model. The difference GMM estimator and the Anderson-Hsiao instrumental variables estimator are inconsistent and biased in the context of the partial adjustment model.
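The estimator rankings discussed in this abstract stem from two textbook biases that are easy to reproduce. The sketch below is an illustration added here, not code from the paper; all parameter values (n = 500, T = 6, rho = 0.5, unit variances) are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_panel(n=500, T=6, rho=0.5):
    """Simulate y_it = rho * y_i,t-1 + mu_i + eps_it with fixed effects mu_i."""
    mu = rng.normal(size=(n, 1))
    y = np.empty((n, T + 1))
    y[:, [0]] = mu / (1 - rho) + rng.normal(size=(n, 1))  # start near stationarity
    for t in range(1, T + 1):
        y[:, [t]] = rho * y[:, [t - 1]] + mu + rng.normal(size=(n, 1))
    return y

y = simulate_panel()
y_lag, y_cur = y[:, :-1], y[:, 1:]

# Pooled OLS ignores mu_i, which is positively correlated with the lag:
# the estimate is biased upward.
ols = (y_lag * y_cur).sum() / (y_lag ** 2).sum()

# Within (LSDV) demeans by firm, but for small T the demeaned lag is
# correlated with the demeaned error: biased downward (Nickell bias).
demean = lambda x: x - x.mean(axis=1, keepdims=True)
within = (demean(y_lag) * demean(y_cur)).sum() / (demean(y_lag) ** 2).sum()

print(f"true rho = 0.5, pooled OLS = {ols:.3f}, within (LSDV) = {within:.3f}")
```

Difference and system GMM were developed to avoid both biases; a common informal check is that a consistent estimate of the autoregressive coefficient should lie between the downward-biased within estimate and the upward-biased pooled OLS estimate.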
2

Shina, Arya Fendha Ibnu. "ESTIMASI PARAMETER PADA SISTEM MODEL PERSAMAAN SIMULTAN DATA PANEL DINAMIS DENGAN METODE 2 SLS GMM-AB." MEDIA STATISTIKA 11, no. 2 (December 30, 2018): 79–91. http://dx.doi.org/10.14710/medstat.11.2.79-91.

Abstract:
Single equation models ignore interdependencies or two-way relationships between response variables. The simultaneous equation model accommodates this two-way relationship. Two-Stage Least Squares Generalized Method of Moments Arellano and Bond (2 SLS GMM-AB) is used to estimate the parameters in the simultaneous system model of dynamic panel data if each structural equation is exactly identified or over-identified. In the simultaneous equation system model with dynamic panel data, each structural equation and reduced form is a dynamic panel data regression equation. Estimating the structural equations and reduced form by Ordinary Least Squares (OLS) yields biased and inconsistent estimators. The Arellano and Bond GMM method (GMM-AB) produces unbiased, consistent, and efficient estimators. The purpose of this paper is to explain the steps of the 2 SLS GMM-AB method for estimating parameters in a simultaneous equation model with dynamic panel data. Keywords: 2 SLS GMM-AB, Arellano and Bond estimator, Dynamic Panel Data, Simultaneous Equations
3

Tan, Yong, and John Anchor. "Stability and profitability in the Chinese banking industry: evidence from an auto-regressive-distributed linear specification." Investment Management and Financial Innovations 13, no. 4 (December 15, 2016): 120–29. http://dx.doi.org/10.21511/imfi.13(4).2016.10.

Abstract:
The important role played by the Chinese commercial banks in the development of China’s economy has made the government and banking regulatory authority concerned about the performance of these banks. Indeed, the stability of the banking sector has attracted greater attention since the financial crisis of 2007-2009. The principal objective of this study is to investigate the inter-relationships between profitability and stability in the Chinese banking industry. Using a sample of Chinese commercial banks over the period 2003-2013, the study examines the inter-relationships under an auto-regressive-distributed linear model. Both Z-score and stability inefficiency were used as measures of stability, while Return on Assets (ROA) was used as the indicator of profitability. Different types of Generalized Method of Moments (GMM) estimators were used, including difference GMM, one-step system GMM, two-step system GMM and two-step robust GMM. In order to check the robustness of the results, alternative econometric techniques were used, such as the ordinary least squares (OLS) estimator, the between-effects estimator and the fixed-effects estimator. The results show that higher insolvency risk/lower bank stability leads to higher profitability of Chinese commercial banks and also that higher profitability leads to higher bank fragility. Keywords: bank profitability, bank risk, China. JEL classification: G21, C23
4

Hayakawa, Kazuhiko. "THE ASYMPTOTIC PROPERTIES OF THE SYSTEM GMM ESTIMATOR IN DYNAMIC PANEL DATA MODELS WHEN BOTH N AND T ARE LARGE." Econometric Theory 31, no. 3 (September 15, 2014): 647–67. http://dx.doi.org/10.1017/s0266466614000449.

Abstract:
In this paper, we derive the asymptotic properties of the system generalized method of moments (GMM) estimator in dynamic panel data models with individual and time effects when both N and T, the dimensions of cross-section and time series, are large. Specifically, we show that the two-step system GMM estimator is consistent when a suboptimal weighting matrix where off-diagonal blocks are set to zero is used. Such consistency results theoretically support the use of the system GMM estimator in large N and T contexts even though it was originally developed for large N and small T panels. Simulation results indicate that the large N and large T asymptotic results approximate the finite sample behavior reasonably well unless persistency of data is strong and/or the variance ratio of individual effects to the disturbances is large.
5

Leitão, Nuno Carlos. "GMM Estimator: An Application to Intraindustry Trade." Journal of Applied Mathematics 2012 (2012): 1–12. http://dx.doi.org/10.1155/2012/857824.

Abstract:
This paper investigates the determinants of intraindustry trade (IIT), horizontal IIT (HIIT), and vertical IIT (VIIT) in the automobile industry in Portugal. Trade in this sector between Portugal and the European Union (EU-27) was examined for the period 1995 to 2008, using dynamic panel data. We apply the system GMM estimator to address serial correlation and the endogeneity of some explanatory variables. The findings are consistent with the literature. The differences in per capita incomes and factor endowments carry positive signs, in line with Heckscher-Ohlin predictions. Economic dimension has a positive impact on trade. A negative effect of distance on bilateral trade was expected and the results confirm this, underlining the importance of neighbour partnerships for all trade.
6

Ataünal, Levent, and Aslı Aybars. "Testing Target-Adjustment and Pecking Order Models of Capital Structure and Estimating Speed of Adjustment." International Journal of Corporate Finance and Accounting 4, no. 1 (January 2017): 1–15. http://dx.doi.org/10.4018/ijcfa.2017010101.

Abstract:
This article examines the explanatory power of the pecking order and target-adjustment models on the capital structure of 148 Borsa Istanbul (BIST) firms over the period 2005 to 2015. The article also estimates the speed of adjustment (SOA) to the targeted leverage level. Although a firm's capital structure is jointly determined by both theories, the target-adjustment model appears to have relatively higher power in explaining the capital structures of BIST firms. Estimates of the adjustment speeds suggest that firms move toward their target debt ratios at a fast pace. Adjustment speeds estimated with market leverage were significantly higher (44%-83%). Share price volatility was found to have a rather short-term impact on market leverage: firms rapidly revert to their targets and offset these fluctuations within a few years. Adjustment speed estimates vary with the estimation method. The system generalized method of moments estimator (GMM-SYS) provided the slowest SOA estimate, whereas firm fixed-effects estimators imparted the fastest adjustment speed.
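As a quick aid for interpreting speed-of-adjustment estimates like those above, the half-life of a deviation from target leverage under the partial-adjustment model follows directly from the SOA. A minimal sketch (the function name is ours, not from the article):

```python
import math

def half_life(soa):
    """Years needed to close half the gap to target leverage,
    given a yearly speed of adjustment soa in (0, 1)."""
    return math.log(0.5) / math.log(1.0 - soa)

# The market-leverage SOA range reported in the abstract:
for soa in (0.44, 0.83):
    print(f"SOA {soa:.0%}: half-life of a leverage gap = {half_life(soa):.2f} years")
```

At an SOA of 44%, half the gap to target closes in roughly 1.2 years; at 83%, in under five months, which is why such estimates are read as a fast pace of adjustment.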
7

Kruiniger, Hugo. "GMM ESTIMATION AND INFERENCE IN DYNAMIC PANEL DATA MODELS WITH PERSISTENT DATA." Econometric Theory 25, no. 5 (October 2009): 1348–91. http://dx.doi.org/10.1017/s0266466608090531.

Abstract:
In this paper we consider generalized method of moments–based (GMM-based) estimation and inference for the panel AR(1) model when the data are persistent and the time dimension of the panel is fixed. We find that the nature of the weak instruments problem of the Arellano–Bond (Arellano and Bond, 1991, Review of Economic Studies 58, 277–297) estimator depends on the distributional properties of the initial observations. Subsequently, we derive local asymptotic approximations to the finite-sample distributions of the Arellano–Bond estimator and the System estimator, respectively, under a variety of distributional assumptions about the initial observations and discuss the implications of the results we obtain for doing inference. We also propose two Lagrange multiplier–type (LM-type) panel unit root tests.
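The moment conditions at issue can be illustrated with the simplest member of the Arellano–Bond family, the Anderson–Hsiao IV estimator: first-difference away the fixed effect, then instrument the lagged difference with a twice-lagged level. The sketch below is an added illustration on simulated data with arbitrary parameters, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, rho = 2000, 6, 0.5

# Simulate y_it = rho * y_i,t-1 + mu_i + eps_it
mu = rng.normal(size=n)
y = np.empty((n, T + 1))
y[:, 0] = mu / (1 - rho) + rng.normal(size=n)
for t in range(1, T + 1):
    y[:, t] = rho * y[:, t - 1] + mu + rng.normal(size=n)

# First differencing removes mu_i but makes Delta y_i,t-1 endogenous
# (it contains eps_i,t-1, which also enters Delta eps_it).
dy = np.diff(y, axis=1)            # Delta y at t = 1..T
dy_cur = dy[:, 2:].ravel()         # Delta y_it      for t = 3..T
dy_lag = dy[:, 1:-1].ravel()       # Delta y_i,t-1
z = y[:, 1:-2].ravel()             # instrument y_i,t-2, uncorrelated with Delta eps_it

iv = (z * dy_cur).sum() / (z * dy_lag).sum()
print(f"Anderson-Hsiao IV estimate of rho: {iv:.3f} (true value 0.5)")
```

As the autoregressive coefficient approaches one, y_i,t-2 barely predicts Delta y_i,t-1 and the instrument becomes weak, which is the persistent-data problem the abstract studies and one motivation for the additional level-equation moment conditions of the System estimator.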
8

Han, Chirok, Peter C. B. Phillips, and Donggyu Sul. "X-DIFFERENCING AND DYNAMIC PANEL MODEL ESTIMATION." Econometric Theory 30, no. 1 (August 7, 2013): 201–51. http://dx.doi.org/10.1017/s0266466613000170.

Abstract:
This paper introduces a new estimation method for dynamic panel models with fixed effects and AR(p) idiosyncratic errors. The proposed estimator uses a novel form of systematic differencing, called X-differencing, that eliminates fixed effects and retains information and signal strength in cases where there is a root at or near unity. The resulting “panel fully aggregated” estimator (PFAE) is obtained by pooled least squares on the system of X-differenced equations. The method is simple to implement, consistent for all parameter values, including unit root cases, and has strong asymptotic and finite sample performance characteristics that dominate other procedures, such as bias corrected least squares, generalized method of moments (GMM), and system GMM methods. The asymptotic theory holds as long as the cross section (n) or time series (T) sample size is large, regardless of the n/T ratio, which makes the approach appealing for practical work. In the time series AR(1) case (n = 1), the FAE estimator has a limit distribution with smaller bias and variance than the maximum likelihood estimator (MLE) when the autoregressive coefficient is at or near unity and the same limit distribution as the MLE in the stationary case, so the advantages of the approach continue to hold for fixed and even small n. Some simulation results are reported, giving comparisons with other dynamic panel estimation methods.
9

Chávez, Carlos Cesar. "The Impact of Macroeconomics Factors on Real Exchange Rate in Latin America:." Latin American Journal of Trade Policy 3, no. 8 (December 31, 2020): 6. http://dx.doi.org/10.5354/0719-9368.2020.57342.

Abstract:
This paper studies the determinants of the real exchange rate using macroeconomic variables, and whether they can predict it. Panel data are used with a system GMM estimator, which allows controlling for the endogeneity of the variables. In turn, we transformed the variables with forward orthogonal deviations (FOD) and first differences (FD), which allows us to eliminate unobserved effects that are invariant over time. To check the robustness of the estimates, different periods were used: 1980-2019, 2000-2019 and 2010-2019. For the period 1980-2019, it is found that past values of the real exchange rate and current values of inflation, economic growth, and fiscal and monetary policy have positive effects on current values of the real exchange rate, while the money supply and the terms of trade have negative impacts on the real exchange rate. For the period 2000-2019 the results were similar, and for the period 2010-2019 we found that economic growth has a negative impact on the real exchange rate. The Arellano-Bond test and the Sargan test are also presented to assess model over-identification. Using the Pedroni test, we estimated the cointegration of the variables with respect to the real exchange rate, finding cointegration with inflation in the long run. The originality of this paper is its focus on Latin American countries, analyzing short-term relationships with the system GMM estimator and long-term relationships with the Pedroni test.
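The forward orthogonal deviations transform mentioned in the abstract replaces each observation with its rescaled deviation from the mean of the remaining future observations, removing time-invariant effects while keeping i.i.d. errors homoskedastic. A minimal sketch added here for illustration, assuming the standard Arellano–Bover scaling:

```python
import numpy as np

def forward_orthogonal_deviations(x):
    """Apply FOD to an (n, T) panel array: column t becomes
    sqrt(m/(m+1)) * (x_t - mean of the m later columns), and the
    last period is dropped. Time-invariant effects are eliminated."""
    n, T = x.shape
    out = np.empty((n, T - 1))
    for t in range(T - 1):
        m = T - 1 - t                      # number of future observations
        future_mean = x[:, t + 1:].mean(axis=1)
        out[:, t] = np.sqrt(m / (m + 1)) * (x[:, t] - future_mean)
    return out

# A purely individual-specific (time-invariant) series is wiped out:
fixed = np.repeat(np.array([[1.0], [2.0], [3.0]]), 4, axis=1)  # (3, 4) panel
print(forward_orthogonal_deviations(fixed))  # all zeros
```

Unlike first differencing, FOD uses only future values, so lagged levels remain valid instruments without introducing the MA(1) error structure that differencing creates.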
10

Al Farooque, Omar, Wonlop Buachoom, and Lan Sun. "Board, audit committee, ownership and financial performance – emerging trends from Thailand." Pacific Accounting Review 32, no. 1 (December 21, 2019): 54–81. http://dx.doi.org/10.1108/par-10-2018-0079.

Abstract:
Purpose – This study aims to investigate the effects of corporate board and audit committee characteristics and ownership structures on market-based financial performance of listed firms in Thailand.
Design/methodology/approach – It applies system GMM (generalized method of moments) as the baseline estimator approach, and ordinary least squares and fixed effects for robustness checks, on a sample of 452 firms listed on the Thai Stock Exchange for the period 2000-2016.
Findings – Relying mainly on the system GMM estimator, the empirical results indicate some emerging trends in the Thai economy. Contrary to expectations for an emerging market and prior research findings, ownership structures, particularly ownership concentration and family ownership, appear to have no significant influence on market-based firm performance, while managerial ownership exerts a positive effect on performance. Moreover, as expected, board structure variables such as board independence; size; meeting and dual role; and audit committee meeting show significant explanatory power on market-based firm performance in Thai firms.
Practical implications – These findings are important for policymakers in constructing an appropriate set of governance mechanisms in an emerging market context, and for corporate entities and investors in shaping their understanding of corporate governance in the Thai institutional context.
Originality/value – Unlike previous literature on the Thai market, this study is the first to use the more advanced econometric method known as the system GMM estimator for addressing causality/endogeneity issues in governance–performance relationships. The findings indicate new trends in the explanatory power of ownership structure variables on market-based firm performance in Thai-listed firms.

Dissertations / Theses on the topic "System GMM estimator"

1

Souza Junior, Celso Vila Nova de. "Tournaments in the public sector." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22538.

2

PACE, MARIA LUCIA. "La diseguaglianza di opportunità in Italia." Doctoral thesis, Università Cattolica del Sacro Cuore, 2017. http://hdl.handle.net/10280/35716.

Abstract:
[Translated from Italian:] Income inequality is commonly analysed and measured using a variety of indices, such as the Gini index, the coefficient of variation, the Theil index, and the variance of logarithms (Sen, 1970). Since the 1990s, decomposition techniques applied, for example, to the Theil index have made it possible to quantify two distinct components of inequality: inequality linked to individual effort and inequality due to unequal opportunities. The second component depends exclusively on exogenous factors beyond the individual's control and is therefore rightly considered "unfair" inequality. The residual component of the decomposition is usually interpreted as inequality of effort, that is, how hard each individual has worked to achieve a given level of economic success. Applying this approach to inequality measures makes it possible to study which type of inequality prevails within a country and, above all, which exogenous circumstances increase disparities in opportunity. This work follows this line of research, proposing a method to test the relative weight of the two components and their significance. The coefficient of variation is chosen as the inequality measure, so that the test reduces to a multi-way analysis of variance (ANOVA) problem. The test is presented using ISTAT data and the Bank of Italy survey on household incomes. After this preliminary analysis of the determinants of inequality of opportunity in Italy, the decomposition of inequality into its two components, inequality of opportunity and effort-related inequality, is used to identify the effect of inequality on economic growth.
The econometric analysis is carried out on the Bank of Italy survey data on household income and wealth. The effect is estimated using a dynamic panel model with the GMM estimation method.
While the analysis of inequality has been central to economic studies for centuries, in recent years many studies have concentrated on the distinction between inequality of opportunity (IO) and inequality of returns to effort (IE) and attempted empirical estimates of the two components, e.g. in the US and in Europe. The decomposition of a general inequality index into these two components allows analysis of the prevalence of fair or unfair income inequality within a country. This paper suggests testing the differences between the two sources of inequality in a simple way, using the ANOVA framework adapted to decompose the coefficient of variation so as to better suit the requirements of an inequality index. The proposed procedure is applied to the Italian Survey on Income and Living Conditions (IT-SILC data, waves 2005 and 2011). The analysis of the results helps identify the circumstances that foster the rise of inequality of opportunity in Italy. Our analysis shows, in particular, that father's education, region of residence and gender are the most relevant circumstances determining inequality of opportunity. On the other side, mother's education, starting from a lower level, is increasing its influence over time as an inequality-of-opportunity factor. The decomposition of the inequality index into two components allows not only analysis of the prevalence of fair or unfair income inequality in a country, but also a clearer view of the relation between inequality and growth. In fact, an analysis of the relation between inequality of opportunity and economic growth in Italy is still missing. This paper aims at filling that gap, using Italian data from the Bank of Italy's Survey on Income and Wealth from 1998 to 2014. We choose the coefficient of variation to measure inequality of opportunity at the regional level and then study its relation with economic growth using dynamic panel data models estimated through system GMM.
Finally, in order to check whether the coefficient of variation is as good a measure as the entropy index, we compare the results of the estimated panel models with the two different inequality-of-opportunity indices. We evaluate the effect of inequality of opportunity on growth rates of different lengths, from short term (2 years) to very long term (10 years). Our results show that, in Italy, inequality of opportunity has a negative effect in the short period, but no effect on long-run growth.
4

Evaldsson, Matilda. "Has EMU Led to Higher Debt Levels? A Dynamic Panel Data Estimation." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-120396.

Abstract:
Europe is in the midst of its deepest crisis since the 1930s, where unsustainable debt-to-GDP levels are among the most alarming issues. The situation is so critical that it is uncertain whether the Euro can be saved. The risk of moral hazard increases within EMU when governments take too much risk in their public debt policies in anticipation that the ECB or other Member States would eventually bail them out. Moreover, the SGP imposes restrictions on government deficits and debts but has previously failed to enforce them. The weakness seen in the past is that no sanctions were imposed once the limits were breached, so the SGP lacks credibility. Previous research on common-pool problems and debt spillovers in a monetary union points to an upward drift of public debt as countries join the EMU. Does this argument hold true? To find out, 25 OECD countries between 1995 and 2010 are analyzed using the one-step Arellano-Bover/Blundell-Bond system GMM estimator. The primary balance, interest payments, and GDP growth are each regressed in order to see through which channel EMU exerts its effect. One regression covers the entire time period and another covers only the years 1995 to 2007, in order to isolate the effects of the current crisis. The results based on the entire time period (including the crisis) suggest that the effect of EMU membership goes via interest payments, to which it is positively related. Using the equation of debt dynamics, the fact that net debt interest payments are higher for a country within EMU indicates, all else equal, that such countries have on average higher levels of debt. Nevertheless, this may be a crisis phenomenon, and its implication is not clear. More importantly, the regressions based on the years 1995 to 2007 (prior to the crisis) did not display any significant results.
These results indicate that there is no significant relationship between a country's membership in EMU and its level of debt prior to the crisis.
APA, Harvard, Vancouver, ISO, and other styles
5

Pinter, Julien. "Essays on two new central banking debates : central bank financial strength and monetary policy outcome : the instability of the transmission of monetary policy to deposit rates after the global financial crisis." Thesis, Paris 1, 2017. http://www.theses.fr/2017PA01E051.

Abstract:
[Translated from French:] This thesis deals with two new debates on central banking that emerged after the 2008 financial crisis: the debate on financial losses on central bank balance sheets, and the debate on the high level of bank rates relative to market rates after the crisis. The first two chapters address the first debate. The link between the financial strength of central banks and inflation is studied empirically in the first chapter, based on a large panel of 82 countries. Theoretically, this link is potentially present when the government does not financially support the central bank, which must then rely only on itself to improve its financial situation. The results of the first chapter show that this is indeed the case in practice: deteriorations in central bank balance sheets are accompanied by higher inflation when the central bank has no fiscal support. The results show no link in a general context, as theory suggests. The second chapter analyses and conceptualises the argument that a central bank may end a fixed or quasi-fixed exchange rate regime for fear of future financial losses. The analysis is then applied to the floor on the euro exchange rate maintained by the Swiss National Bank (SNB) between 2011 and 2015. This argument was advanced by many to explain the end of the floor policy in Switzerland, although no research before this one had assessed its relevance. The empirical estimates of Chapter 2 show that this argument had some credibility: under plausible scenarios, by breaking the peg with the euro 17 months later, the SNB would have suffered a considerable loss, exceeding a threshold perceived as a limit by many central bankers.
The final chapter of this thesis examines the spread between deposit rates and the market rate in the euro area (the EURIBOR), which became significantly positive after the crisis, leading some to speak of "over-remuneration" of deposits. This chapter argues that most of this spread is explained not by abnormal behaviour of deposits, as some have claimed, but rather by a loss of relevance of the EURIBOR. Constructing an alternative to the EURIBOR, this chapter concludes that bank risk had a primary influence on the level of deposit remuneration in the post-crisis world.
This thesis deals with the new debates on central banking which arose after the 2008 global financial crisis. More particularly, two of such debates are addressed: the debates on the financial losses in central banks’ balance sheets, and the debates on the high level of bank rates compared to market interest rates following the financial crisis. The two first chapters are related to the first debate. The link between central bank financial strength and inflation is empirically examined in a large sample of 82 countries. Theoretically, this link is potentially present when the government does not fiscally support the central bank, so that the central bank can only rely on itself to improve its financial situation. The results show that in practice central bank balance sheet deteriorations indeed lead to higher inflation when fiscal support is absent. The results, based on a particularly meticulous and consistent sample selection, do not show the presence of a link between the two variables in a general context, as the theory suggests. In the second chapter, I analyze and conceptualize the argument according to which a central bank can end a peg exchange rate regime by fear of making significant losses in the future, and I apply this analysis to the Swiss franc peg between 2011 and 2015. This argument was brought forward by many commentators to explain the Swiss move, while no research before this one did study the relevance of this argument. The empirical estimates in Chapter 2 show that this argument indeed had some credibility: under some credible scenarios the Swiss central bank would have incurred significant losses by breaking its peg 17 months later, with losses exceeding a threshold judged as relevant by many central bankers. 
The last chapter of this thesis focuses on the spread between deposit rates and market interest rates in the Eurozone (more specifically, the EURIBOR), which became significantly positive after the financial crisis, leading some commentators to claim that deposits were over-remunerated. This chapter argues that the major part of this spread is not due to an "abnormal" behavior of deposits but rather to the fact that the EURIBOR became irrelevant after the global financial crisis. Building an alternative to the EURIBOR, the chapter concludes that banking risks have had a major influence on the level of deposit remuneration.
APA, Harvard, Vancouver, ISO, and other styles
6

Chen, Anna. "Kletba přírodních zdrojů a stínová ekonomika: empirická evidence." Master's thesis, 2021. http://www.nusl.cz/ntk/nusl-438017.

Full text
Abstract:
The study aims to investigate the impact of natural resource wealth on the shadow economy. The theoretical section provides the basis for understanding the nature of the two phenomena and discusses the possible transmission channels through which natural resources might influence the shadow economy. The key determinants of the shadow economy are then examined with static and dynamic models. Natural resource abundance is proxied by natural resource rents. We employ a panel data set of 109 countries for the period from 1996 to 2006. The results reveal that resource wealth is associated with a decrease in the shadow economy. This result is robust across resource types (durable and non-durable), and the effect is more pronounced for countries with a low income level. JEL Classification: C33, E26, O13. Keywords: natural resources, shadow economy, dynamic panel data models, system GMM estimator. Title: Natural Resource Curse and Shadow Economy: Empirical Evidence.
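As an illustration of the moment conditions that dynamic panel GMM estimators exploit, the sketch below simulates an AR(1) panel with fixed effects and computes a just-identified instrumental-variable estimate in the Anderson-Hsiao spirit (simulated data and all names are assumptions, not the study's actual estimation; system GMM additionally stacks level equations instrumented with lagged differences):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, rho = 2000, 8, 0.5

# Simulate a dynamic panel: y_it = rho * y_{i,t-1} + mu_i + eps_it
mu = rng.normal(size=N)
y = np.zeros((N, T))
y[:, 0] = mu + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + mu + rng.normal(size=N)

# First-differencing removes mu_i; Delta y_{i,t-1} is endogenous, so it is
# instrumented with the level y_{i,t-2} (the key GMM moment condition)
dy = np.diff(y, axis=1)
dy_dep = dy[:, 1:].ravel()   # Delta y_{i,t}
dy_lag = dy[:, :-1].ravel()  # Delta y_{i,t-1}
z = y[:, :-2].ravel()        # instrument y_{i,t-2}

rho_hat = (z @ dy_dep) / (z @ dy_lag)  # just-identified IV estimate
print(round(float(rho_hat), 2))
```

With a large cross-section the estimate lands close to the true ρ = 0.5; full difference or system GMM uses many such instruments per period and weights them with an optimal weighting matrix.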
7

Baptista, Sara Reis Gomes. "Evaluation of the efficiency of innovation: evidence for European Union regions (NUT-II)." Master's thesis, 2018. http://hdl.handle.net/10773/24520.

Full text
Abstract:
At a time when innovation is seen as one of the main drivers of regional economic growth, this study aims to assess the innovation efficiency of 104 regions (NUT-II) of the European Union from 2006 to 2012. The study creates a ranking of the most efficient regions based on innovation indicators and seeks to understand which factors lie behind these ranking results. The global financial crisis of 2008 also shook all prospects of sustained growth for Europe, so the impact of the crisis on the innovation and efficiency of the regions is taken into account. For this purpose, the DEA methodology was used in a first stage to determine efficiency levels and score the regions; in a second stage, the PCSE and GMM methodologies were used to analyse the factors that influence innovation efficiency as measured by the proposed indicator. The results show large disparities between regions, partly due to the crisis, with the most efficient regions belonging to Romania, Belgium, and Bulgaria. The results also point to human resources as the most significant factor in the positive evolution of innovation efficiency.
At a time when innovation is seen as one of the main drivers of regional economic growth, this work assesses the innovation efficiency of 104 European Union regions (NUT-II) from 2006 to 2012. The study builds a ranking of the most efficient regions based on innovation indicators and seeks to identify the factors behind these ranking results. The 2008 global financial crisis also shook all prospects of sustained growth for Europe, so its impact on the innovation and efficiency of the regions is taken into account. To this end, the DEA methodology was used in a first stage to determine the efficiency levels found and to score the regions, and in a second stage the PCSE and GMM methodologies were used to analyse the factors that influence innovation efficiency as measured by the proposed indicator. The results reveal large disparities between regions, partly due to the crisis, with the most efficient regions belonging to Romania, Belgium, and Bulgaria. The results also point to human resources as the most significant factor in the positive evolution of innovation efficiency.
Master's in Economics
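The first-stage efficiency scoring described above can be sketched with a minimal input-oriented CCR DEA linear program (toy regions, inputs and outputs are assumptions, not the study's indicator set):

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 regions, 2 inputs (e.g. R&D spend, researchers), 1 output (patents)
X = np.array([[2.0, 3.0], [4.0, 2.0], [4.0, 6.0], [6.0, 7.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.5]])                      # outputs
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(k):
    """Input-oriented CCR score of unit k:
    min theta  s.t.  X'lam <= theta * x_k,  Y'lam >= y_k,  lam >= 0."""
    c = np.r_[1.0, np.zeros(n)]              # variables: [theta, lam_1..lam_n]
    A_in = np.c_[-X[k].reshape(-1, 1), X.T]  # sum_j lam_j x_ij - theta x_ik <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]    # -sum_j lam_j y_rj <= -y_rk
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

scores = [round(ccr_efficiency(k), 3) for k in range(n)]
print(scores)
```

Each region's score is the smallest factor θ by which its inputs could be shrunk while a convex combination of peer regions still matches its outputs; frontier units score 1.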
8

Berkimbayeva, Aliya. "Přechod k bezhotovostní společnosti: dopady na ekonomickou aktivitu." Master's thesis, 2019. http://www.nusl.cz/ntk/nusl-398156.

Full text
Abstract:
The present study takes a broad perspective on physical currency, on the assumption that the global conversion to digital payment instruments, affecting stakeholders at different scales, alters a number of aspects. The theoretical section discusses the process of transition to a cashless society by identifying the transformation stages and the barriers faced in undertaking the shift. Subsequently, the links between factors such as the business environment, globalization, and the shadow economy in relation to physical currency in circulation are examined by static and dynamic panel data analyses, applying annual panel data for 70 countries for the period from 2013 to 2017. The conclusive inference is formulated based on outputs from the Blundell and Bond (1998) system GMM estimator. The empirical results provide significant evidence of a negative relationship between the business environment and physical currency in circulation and, conversely, a positive link for the shadow economy. Further, the greater impact of the business environment on physical money among the variables included implies that promoting electronic money solutions alone is not sufficient for the transition to a cashless economy. We also construct a transformation score ranking for the last five years to capture the transition stage among the countries included in the study with...
9

Lakhani, Rishi Ajitkumar. "The finance inequality nexus in Portugal: An empirical study." Master's thesis, 2021. http://hdl.handle.net/10071/23838.

Full text
Abstract:
This paper conducts a time-series econometric analysis in order to assess the effect of finance on the distribution of income in Portugal from 1977 to 2016 using annual data. A battery of bank- and market-based proxies is used to measure financial indicators (private credit, foreign direct investment, money supply, stock market capitalisation, financial value added and financial openness) to provide a holistic representation of the financial system in terms of depth, access, and efficiency (Svirydzenka, 2016). To ensure robustness, two different measures of the Gini coefficient are employed to proxy income inequality (gross and net). School enrolment (as a proxy for human capital), inflation, real GDP, government spending and trade openness were used as control variables. Additionally, the estimations were conducted using the Generalised Method of Moments (GMM) estimator to control for endogeneity. Linear and non-linear estimations were performed to test the different hypotheses available in the literature concerning the relationship between finance and income inequality. Our results suggest that bank-based financial indicators have reduced income inequality while market-based financial indicators have worsened income distribution in Portugal. Additionally, financial liberalisation has increased income inequality in Portugal. The majority of our models support a concave relationship between finance and inequality in Portugal.
This study assesses the effect of financialisation on income inequality in Portugal using an annual time series from 1977 to 2016. A set of financial proxies is used for this purpose, namely private credit, foreign direct investment, money supply, stock market capitalisation, financial value added and the degree of financial openness, so as to ensure an adequate representation of the financial sector in terms of depth, access and efficiency (Svirydzenka, 2016). For robustness, two measures of the Gini coefficient are employed, specifically the net and gross Gini coefficients. The school enrolment rate (as a proxy for human capital), inflation, the real GDP growth rate, public spending and the degree of trade openness are used as control variables. The regressions were estimated using the Generalised Method of Moments (GMM) to control for endogeneity. Linear and non-linear models were estimated in order to test the different hypotheses available in the literature on the relationship between income inequality and financial development, namely the linear hypothesis (positive or negative) and the non-linear hypothesis (concave or convex). The results suggest that bank-based financial development reduced income inequality while market-based financial development produced the opposite effect. Additionally, financial liberalisation had a harmful effect on the distribution of income in Portugal. Most of the estimated models support the concave hypothesis.
10

Chang, Lun-Kai (張倫愷). "Modified GML Algorithm with Simulated Annealing for Estimation of Signal Arrival Time in WPAN Systems." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/62953612085545307834.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Graduate Institute of Electrical Engineering
94
The main purpose of this thesis is to estimate the signal arrival time in low-rate wireless personal area network systems. In a dense multipath environment, the generalized maximum-likelihood (GML) algorithm can be used for time-of-arrival (TOA) estimation. However, the GML algorithm is computationally expensive and sometimes fails to converge, so a simplified scheme that improves the algorithm is investigated. In the simplified scheme, the search is executed sequentially, and two threshold parameters determine the stopping condition: one on the arrival time of the estimated path and the other on its fading amplitude. The thresholds can be chosen to minimize the error probability, defined as the sum of the false-alarm probability and the miss probability, and root-mean-square error statistics are used to refine the threshold setting. Because evaluating the root-mean-square error for every candidate threshold pair over a large search range is impractical, simulated annealing is adopted to search for the best pair. Simulation results show that, when the signal-to-noise ratio is greater than or equal to 4 dB, the proposed scheme achieves better performance than the root-mean-square error statistics scheme.
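A minimal sketch of the threshold-search idea (the toy error surface and parameter names are assumptions; the real objective would come from TOA estimation trials):

```python
import math
import random

random.seed(1)

def rmse(th_time, th_amp):
    """Toy stand-in for the RMSE surface over the two stop thresholds
    (a smooth bowl; the real surface comes from TOA simulation runs)."""
    return (th_time - 3.0) ** 2 + (th_amp - 0.2) ** 2 + 0.1

def anneal(t0=5.0, cooling=0.95, steps=400):
    x = [random.uniform(0, 10), random.uniform(0, 1)]  # (time, amplitude) thresholds
    best, best_err, temp = list(x), rmse(*x), t0
    for _ in range(steps):
        cand = [x[0] + random.gauss(0, 0.5), x[1] + random.gauss(0, 0.05)]
        d = rmse(*cand) - rmse(*x)
        # Accept improvements always; accept worse moves with prob exp(-d/temp)
        if d < 0 or random.random() < math.exp(-d / temp):
            x = cand
            if rmse(*x) < best_err:
                best, best_err = list(x), rmse(*x)
        temp *= cooling
    return best, best_err

best, err = anneal()
print([round(v, 2) for v in best], round(err, 3))
```

Instead of evaluating every threshold pair on a grid, the annealer samples the surface and cools toward the low-error region, which is exactly what makes the exhaustive search avoidable.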

Books on the topic "System GMM estimator"

1

Wang, Bin. Intraseasonal Modulation of the Indian Summer Monsoon. Oxford University Press, 2018. http://dx.doi.org/10.1093/acrefore/9780190228620.013.616.

Full text
Abstract:
The Indian summer monsoon (ISM), the strongest monsoon on the planet, features prolonged clustered spells of wet and dry conditions often lasting for two to three weeks, known as active and break monsoons. The active and break monsoons are attributed to a quasi-periodic intraseasonal oscillation (ISO), an extremely important form of ISM variability bridging weather and climate variation. The ISO over India is part of the ISO in the global tropics, one of the most important meteorological phenomena discovered during the 20th century (Madden & Julian, 1971, 1972). The extreme dry and wet events are regulated by the boreal summer ISO (BSISO). The BSISO over the Indian monsoon region consists of a northward-propagating 30–60 day mode and a westward-propagating 10–20 day mode. The "clustering" of synoptic activity is separately modulated by both the 30–60 day and 10–20 day BSISO modes in approximately equal amounts, and is particularly strong when the enhancement effect from both modes acts in concert. The northward propagation of the BSISO originates primarily from the easterly vertical shear (increasing easterly winds with height) of the monsoon flows, which, by interacting with the BSISO convective system, can generate boundary layer convergence to the north of the convective system that promotes its northward movement. BSISO-ocean interaction through wind-evaporation feedback and cloud-radiation feedback can also contribute to the northward propagation of the BSISO from the equator. The 10–20 day oscillation is primarily produced by convectively coupled Rossby waves modified by the monsoon mean flows. The use of coupled general circulation models (GCMs) for ISO prediction is an important advance in subseasonal forecasting. The major modes of the ISO over the Indian monsoon region are potentially predictable up to 40–45 days, as estimated by multiple GCM ensemble hindcast experiments.
The current dynamical models’ prediction skills for the large initial amplitude cases are approximately 20–25 days, but the prediction of developing BSISO disturbance is much more difficult than the prediction of the mature BSISO disturbances. This article provides a synthesis of our current knowledge on the observed spatial and temporal structure of the ISO over India and the important physical processes through which the BSISO regulates the ISM active-break cycles and severe weather events. Our present capability and shortcomings in simulating and predicting the monsoon ISO and outstanding issues are also discussed.

Book chapters on the topic "System GMM estimator"

1

Windmeijer, Frank. "Efficiency Comparisons for a System GMM Estimator in Dynamic Panel Data Models." In Innovations in Multivariate Statistical Analysis, 175–84. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4615-4603-0_11.

Full text
2

Inkmann, Joachim. "Computation of GMM Estimators." In Lecture Notes in Economics and Mathematical Systems, 28–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/978-3-642-56571-7_4.

Full text
3

Inkmann, Joachim. "Asymptotic Properties of GMM Estimators." In Lecture Notes in Economics and Mathematical Systems, 20–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/978-3-642-56571-7_3.

Full text
4

Inkmann, Joachim. "GMM Estimation with Optimal Weights." In Lecture Notes in Economics and Mathematical Systems, 67–106. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/978-3-642-56571-7_7.

Full text
5

Inkmann, Joachim. "GMM Estimation with Optimal Instruments." In Lecture Notes in Economics and Mathematical Systems, 107–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/978-3-642-56571-7_8.

Full text
6

Inkmann, Joachim. "The Conditional Moment Approach to GMM Estimation." In Lecture Notes in Economics and Mathematical Systems, 6–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/978-3-642-56571-7_2.

Full text
7

Michieletto, Stefano, Luca Tonin, Mauro Antonello, Roberto Bortoletto, Fabiola Spolaor, Enrico Pagello, and Emanuele Menegatti. "GMM-Based Single-Joint Angle Estimation Using EMG Signals." In Intelligent Autonomous Systems 13, 1173–84. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-08338-4_85.

Full text
8

Gbode, Imoleayo E., Vincent O. Ajayi, Kehinde O. Ogunjobi, Jimy Dudhia, and Changhai Liu. "Impacts of Global Warming on West African Monsoon Rainfall." In African Handbook of Climate Change Adaptation, 2469–83. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-45106-6_93.

Full text
Abstract:
The impacts of global warming on rainfall in West Africa were examined using a numerical framework for 5 monsoon years (2001, 2007, 2008, 2010, and 2011). Rainfall characteristics over the three climatic zones, Guinea coast, Savannah, and Sahel, were analyzed. The potential changes associated with global warming were assessed by the pseudo-global warming (PGW) downscaling method. Multiple PGW runs were conducted using climate perturbations from the 40-member large ensemble of the Community Earth System Model version 1 (CESM1) coupled with the Community Atmospheric Model version 5.2 (CAM5.2). The model output was compared with Tropical Rainfall Measuring Mission and Global Precipitation Climatology Project rainfall, alongside surface temperature from the European Centre for Medium-Range Weather Forecasts reanalysis. Results show that the estimated rainfall amount in the future climate of the 2070s increases slightly compared with the current climate. The total rainfall amount simulated for the current climate is 16% and 63% less than that of the PGW runs and observations, respectively. Also found is an increase (decrease) in heavy (light and moderate) rainfall amount in the PGW runs. These results are, however, contingent on the global circulation model (GCM) that provides the boundary conditions for the regional climate model. CESM1.0-CAM5.2, the GCM employed in this study, tends to provide a greater surface temperature change of about 4 °C. This projected temperature change consequently caused the increase in the simulated precipitation in the PGW experiments, thus highlighting the advantage of using the PGW method to estimate the likely difference between the present and future climate with reduced large-scale model differences and computational resources.
The findings of this study are, however, useful to inform decision-making in climate-related activities and guide the design of climate change adaptation projects for the West African region.
9

Jaouad Malzi, Mohamed. "The Dynamic of Residential Energy Demand Function: Evidence from Natural Gas." In Natural Gas - New Perspectives and Future Developments [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.102451.

Full text
Abstract:
This analysis uses annual data on residential gas use for 29 Organization for Economic Cooperation and Development (OECD) countries from 2005 to 2016 to examine per capita energy demand. The effects of price and income on natural gas demand elasticities have been studied in the past, but most research has ignored demographic aspects. The goal of this study is to incorporate these characteristics into natural gas demand modeling. A dynamic panel system Generalized Method of Moments (GMM) estimator was used to address the endogeneity issue. The study's main findings are as follows: First, the residential sector consumes more natural gas per capita as the population grows. Second, per capita residential natural gas consumption in OECD countries decreases as the population ages. Finally, as population density rises, so does per capita gas consumption.
10

Mansour Nomran, Naji, and Razali Haron. "Relevance of Shari’ah Governance in Driving Performance of Islamic Banks during the Financial Crisis: International Evidence." In Banking and Finance. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.92368.

Full text
Abstract:
This study aims to examine the impact of the Shari'ah governance (SG) mechanism on the performance of Islamic banks (IBs) during the financial crisis of 2008. Data were collected from 66 IBs across 18 countries covering the period 2007–2015 and analyzed using the System-GMM estimator. The findings indicate that an increase in Shari'ah supervisory board (SSB) effectiveness increases IBs' performance even during crisis periods. A possible justification for this positive effect is related to the SG structure of IBs, which allows them to undertake higher risks to achieve a high efficiency level. IBs, policymakers and practitioners should therefore consider these findings when aiming to improve SG practices in the Islamic banking industry, which in turn may help protect IBs during crisis and non-crisis periods. More specifically, they should give due importance to the SSB (size, cross-membership, educational qualification, reputation and expertise) in enhancing the performance of IBs during crisis and non-crisis periods. This study provides additional evidence on how IBs can sustain their performance during either crisis or non-crisis periods by adopting an appropriate SG structure. However, the study focuses on a small sample of only 66 IBs due to lack of data.

Conference papers on the topic "System GMM estimator"

1

Ahmad, Aftab, Kjell Andersson, and Ulf Sellgren. "A Comparative Study of Friction Estimation and Compensation Using Extended, Iterated, Hybrid, and Unscented Kalman Filters." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-12997.

Full text
Abstract:
Transparency is a key performance evaluation criterion for haptic devices; it describes how realistically the haptic force/torque feedback mimics a virtual environment or, in the case of a master-slave haptic device, the remote environment. Transparency in haptic devices is affected by disturbance forces such as friction between moving parts. Accurate friction estimates for observer-based compensation require estimation techniques that are computationally efficient and reduce the error between measured and estimated friction. In this work, different estimation techniques based on the Kalman filter, namely the extended Kalman filter (EKF), iterated extended Kalman filter (IEKF), hybrid extended Kalman filter (HEKF) and unscented Kalman filter (UKF), are investigated to find which estimation technique gives the most efficient and realistic compensation using online estimation. The friction observer is based on a newly developed smooth generalized Maxwell slip (S-GMS) friction model. Each studied estimation technique is demonstrated by numerical simulation and by experiments with sinusoidal position tracking. The performance of the system is quantified with the normalized root-mean-square error (NRMSE) and the computation time. The results of the comparative analyses suggest that friction estimation and compensation based on the iterated extended Kalman filter gives both reduced tracking error and computational advantages compared to the EKF, HEKF, and UKF, as well as to no friction compensation.
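The predict/update cycle underlying all of these Kalman variants can be sketched with a scalar filter tracking a slowly drifting friction bias (a toy random-walk model with assumed noise values, not the S-GMS-based observer of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# True friction: a slowly drifting bias observed through noisy force readings
steps, q, r = 200, 1e-4, 0.01          # process / measurement noise variances
true_f = 0.5 + 0.001 * np.arange(steps)
z = true_f + rng.normal(0.0, np.sqrt(r), steps)

# Scalar Kalman filter with a random-walk model: f_k = f_{k-1} + w_k
f_hat, p = 0.0, 1.0
for k in range(steps):
    p += q                             # predict: variance grows by q
    kgain = p / (p + r)                # Kalman gain
    f_hat += kgain * (z[k] - f_hat)    # update with the innovation
    p *= 1.0 - kgain                   # posterior variance
print(round(float(f_hat), 2))
```

The EKF, IEKF, HEKF and UKF differ in how they handle a nonlinear friction model inside this same predict/update loop (linearization, iterated relinearization, or sigma points), not in the loop itself.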
2

Watanabe, Hidenori, Shogo Muramatsu, and Hisakazu Kikuchi. "Interval calculation of EM algorithm for GMM parameter estimation." In 2010 IEEE International Symposium on Circuits and Systems - ISCAS 2010. IEEE, 2010. http://dx.doi.org/10.1109/iscas.2010.5537044.

Full text
3

Wang, Wei, Todd Klein, and James Collins. "Giant Magnetoresistive Based Handheld System for Rapid Detection of Human NT-proBNP." In 2019 Design of Medical Devices Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/dmd2019-3264.

Full text
Abstract:
In this work, we developed a giant magnetoresistive (GMR) based handheld biosensing system that serves as a platform for detecting human NT-proBNP. The assay takes advantage of the high sensitivity and real-time signal readout of the GMR biosensor. The limit of detection was estimated to be less than 0.01 ng/mL, and a detection range from 0.01 ng/mL to 5 ng/mL was obtained. The assay can be completed within 20 min, which is very important for the further development of point-of-care testing. The proposed GMR handheld system was also successfully used for the detection of NT-proBNP in real human samples. It can be foreseen that this handheld detection system could become a robust contender in in vitro biomarker diagnostics.
4

Li, Junde, Navyata Gattu, and Swaroop Ghosh. "FAuto: An Efficient GMM-HMM FPGA Implementation for Behavior Estimation in Autonomous Systems." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9207313.

Full text
5

Chun-guo, Liu, Li Li-zhong, and Zhang Xiao-yong. "A robust frequency offset estimation scheme for OFDM systems." In 2010 Global Mobile Congress (GMC). IEEE, 2010. http://dx.doi.org/10.1109/gmc.2010.5634579.

Full text
6

Ahmad, Aftab, Kjell Andersson, Ulf Sellgren, and Max Boegli. "Evaluation of Friction Models for Haptic Devices." In ASME 2013 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/dscc2013-3982.

Full text
Abstract:
In this work, different friction models are evaluated to determine how well they are suited for performance simulation and control of a 6-DOF haptic device. The studied models include the Dahl model, the LuGre model, the generalized Maxwell slip (GMS) model, the smooth generalized Maxwell slip (S-GMS) model and the differential algebraic multistate (DAM) friction model. These models are evaluated both numerically and experimentally on an existing 6-DOF haptic device based on a Stewart platform. To evaluate how well these models compensate friction, a model-based feedback friction compensation strategy together with a PID controller was used to assess position tracking accuracy. The accuracy of the friction compensation models is examined separately for low-velocity and high-velocity motions of the system. The models are assessed by criteria based on fidelity in predicting realistic friction phenomena, ease of implementation, computational efficiency and ease of estimating the model parameters. Experimental results show that friction compensated with the GMS, S-GMS and DAM models gives better accuracy in terms of the standard deviation, root-mean-square error, and maximum error between the reference and measured trajectories. Based on the criteria of fidelity, ease of implementation and ease of estimating model parameters, the S-GMS model, which represents a smooth transition between the sliding and pre-sliding regimes through an analytical set of differential equations, is suggested.
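To give a flavor of this model class, the sketch below integrates the Dahl model, the simplest of the studied friction models, under sinusoidal motion (parameter values are assumptions for illustration):

```python
import numpy as np

# Dahl model: dF/dt = sigma * (1 - (F/Fc) * sign(v)) * v
sigma, Fc, dt = 1e4, 1.0, 1e-4          # stiffness [N/m], Coulomb level [N], step [s]
t = np.arange(0.0, 2.0, dt)
x = 1e-3 * np.sin(2 * np.pi * t)        # 1 mm, 1 Hz sinusoidal motion
v = np.gradient(x, dt)

F = np.zeros_like(t)
for k in range(1, len(t)):              # explicit Euler integration
    dF = sigma * (1.0 - (F[k - 1] / Fc) * np.sign(v[k])) * v[k]
    F[k] = F[k - 1] + dF * dt

peak = float(np.max(np.abs(F)))         # friction saturates near the Coulomb level Fc
print(round(peak, 2))
```

The LuGre, GMS and S-GMS models extend this basic pre-sliding/sliding hysteresis with internal bristle states and velocity-dependent effects, which is what the paper's fidelity criterion discriminates between.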
7

Anand, Vishnu, Durgakant Pushp, Rishin Raj, and Kaushik Das. "Gaussian Mixture Model (GMM) Based Object Detection and Tracking using Dynamic Patch Estimation." In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019. http://dx.doi.org/10.1109/iros40897.2019.8968275.

Full text
8

Rutberg, Michael J., Anna Delgado, Howard J. Herzog, and Ahmed F. Ghoniem. "A System-Level Generic Model of Water Use at Power Plants and its Application to Regional Water Use Estimation." In ASME 2011 International Mechanical Engineering Congress and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/imece2011-63786.

Full text
Abstract:
The withdrawal and consumption of water at electricity generation plants, mainly for cooling purposes, is a significant component of the energy water nexus in the US. The existing field data on US power plant water use, however, is of limited granularity and poor quality, hampering efforts to track industry trends and project future scenarios. Furthermore, there is a need for a common quantitative framework on which to evaluate the potential of the many technologies that have been proposed to reduce water use at power plants. To address these deficiencies, we have created a system-level generic model (S-GEM) of water use at power plants that applies to fossil, nuclear, geothermal and solar thermal plants, using either steam or combined cycles. The S-GEM is a computationally inexpensive analytical model that approximately reflects the physics of the key processes involved and requires a small number of input parameters; the outputs are water withdrawal and consumption intensity in liters per kilowatt-hour. Data from multiple sources are combined to characterize value distributions of S-GEM input parameters across the US, resulting in refined estimates of water use with quantified uncertainties. These estimates are then validated against typical values from the literature and against an existing field data set. By adjusting S-GEM input values or value distributions, any number of hypothetical scenarios can be rapidly evaluated. As an example, we focus here on technology evaluation, expressing proposed technological improvements in terms of S-GEM input parameters, then comparing their projected effects on overall water withdrawal and consumption intensities.
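As a rough illustration of what such a system-level analytical model computes (an assumed back-of-envelope energy balance for a wet recirculating cooling tower, not the actual S-GEM equations or parameter set):

```python
# Illustrative water-consumption intensity for a wet recirculating cooling
# tower. All parameter names and default values are assumptions for the
# sketch, not the S-GEM inputs.
def consumption_intensity(eta=0.40, flue_loss=0.10, evap_frac=0.85,
                          latent_kj_per_kg=2260.0):
    """Litres of cooling water evaporated per kWh of electricity."""
    heat_in = 3600.0 / eta                        # kJ of fuel heat per kWh out
    heat_rejected = heat_in - 3600.0 - flue_loss * heat_in
    evaporated_kg = evap_frac * heat_rejected / latent_kj_per_kg
    return evaporated_kg                          # ~1 kg of water is ~1 L

print(round(consumption_intensity(), 2))
```

With these assumed inputs the sketch yields on the order of 1.7 L/kWh, and varying the handful of inputs (efficiency, flue loss, evaporated fraction) reproduces the kind of cheap scenario sweeps the abstract describes.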
9

Lee, Dennis W. K., Y. K. Yan, and W. M. Leung. "Uncertainty Estimation for Force Measurements." In NCSL International Workshop & Symposium. NCSL International, 2014. http://dx.doi.org/10.51843/wsproceedings.2014.15.

Full text
Abstract:
Load cells are force measurement transducers which form integral parts of systems for measuring weight, torque, impact, acceleration and other quantities. In the construction industry in particular, load cells are extensively used for calibrating force machines and determining the strengths of construction materials. Load cells are calibrated against standard masses using standard force machines based on principles of deadweight or hydraulic amplification. For international recognition purposes, load cells are calibrated in accordance with international or national standards such as ISO 376, BS 1610 and EN 10002-3. However, these standards do not provide guidelines for the evaluation of measurement data and the expression of measurement uncertainty. There is another complication. Load cells are transducers that output deflection values in response to the applied forces. A load cell is calibrated at specific test points only, and the behavior of the test unit is expressed graphically by plotting the indicated output value against the applied force (known as a response curve). Hence, measurement results for load cells are expressed in terms of calibration coefficients, which are used to reproduce the response curve. This makes the evaluation and expression of measurement uncertainty a complicated process. The document JCGM 100, "Evaluation of measurement data - Guide to the expression of uncertainty in measurement (GUM)," provides a framework for uncertainty evaluation. However, the GUM does not provide specific guidelines for uncertainty estimates for load cells, in particular for dealing with errors concerning curve fitting and interpolation. It is also known that the GUM has certain limitations which render it unreliable when there is prominent nonlinearity in the model or there are dominant uncertainty contributions.
In this paper, we not only demonstrate how to use the GUM framework to estimate uncertainties of a load cell but also apply the method stipulated in the "Supplement 1 to the GUM - Propagation of distributions using a Monte Carlo method (JCGM 101)" to validate the GUM uncertainty framework.
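The JCGM 101 approach the authors apply can be sketched as follows: draw from the input distributions, push the draws through the measurement model, and read off the mean, standard uncertainty and coverage interval (a toy model F = m·g with assumed distributions, not the paper's load-cell model):

```python
import numpy as np

rng = np.random.default_rng(3)
M = 200_000                                   # number of Monte Carlo trials

# Toy measurement model F = m * g: propagate the input distributions
# through the model and summarise the output, per JCGM 101
m = rng.normal(10.0, 0.002, M)                # mass: normal, u = 2 g
g = rng.uniform(9.80560, 9.80570, M)          # local gravity: rectangular
F = m * g

mean = F.mean()
u = F.std(ddof=1)                             # standard uncertainty of F
lo, hi = np.percentile(F, [2.5, 97.5])        # 95 % coverage interval
print(round(mean, 3), round(u, 3))
```

Because the output distribution is built empirically, this route remains valid where the GUM's linearized propagation breaks down, which is exactly the validation role the paper assigns to it.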
APA, Harvard, Vancouver, ISO, and other styles
10

Zha, Tangdi, Caixiang Wang, and Xiaowen Li. "Frequency offset and channel estimation in OFDM systems by blind adaptive filter." In 2010 Global Mobile Congress (GMC). IEEE, 2010. http://dx.doi.org/10.1109/gmc.2010.5634621.

Full text

Reports on the topic "System GMM estimator"

1

Morgan, Miranda, and Alastair Stewart. Making Market Systems Work for Women Farmers in Tajikistan: A final evaluation of Oxfam's Gendered Enterprise and Markets programme in Tajikistan. Oxfam GB, December 2019. http://dx.doi.org/10.21201/2019.5372.

Full text
Abstract:
Gendered Enterprise and Markets (GEM) is Oxfam GB’s approach to market systems development. The GEM approach facilitates change in market systems and social norms, with the aim of ensuring more sustainable livelihood opportunities for marginalized women and men. The GEM DFID AidMatch Programme (June 2014–February 2018) worked within the soya, milk and vegetable value chains targeting women smallholder farmers in areas of poverty. The programme aimed to benefit 63,600 people (10,600 smallholder households) living in Zambia, Tajikistan and Bangladesh through increases in household income, women having greater influence over key livelihood decisions within their households and communities, and engaging in livelihoods more resilient to shocks, such as natural disasters and market volatility. In Tajikistan, the Gendered Enterprise and Markets (GEM) programme has been implemented in five districts of Khatlon Province by Oxfam in partnership with local public organizations, League of Women Lawyers of Tajikistan (LWL) and Neksigol Mushovir. The GEM programme in Tajikistan sought to directly improve the livelihoods of an estimated 3,000 smallholder farmers (60 percent women) in fruit and vegetable value chains through improved production skills, resilience to climate risks, access to market opportunities and greater engagement with market players, and strengthened ability to influence private sector and government actors. The evaluation was designed to investigate if and how the GEM programme might have contributed to its intended outcomes – not only in the lives of individual women smallholder farmers targeted by the programme but also to changes in their communities and the larger market system. It also sought to capture any potential unintended outcomes of the programme.
2

Morgan, Miranda, Alastair Stewart, and Simone Lombardini. Making Market Systems Work for Women Farmers in Zambia: A final evaluation of Oxfam's Gendered Enterprise and Markets programme in the Copperbelt region of Zambia. Oxfam GB, December 2019. http://dx.doi.org/10.21201/2019.5389.

Full text
Abstract:
Gendered Enterprise and Markets (GEM) is Oxfam GB’s approach to market systems development. The GEM approach facilitates change in market systems and social norms, with the aim of ensuring more sustainable livelihood opportunities for marginalized women and men. The GEM DFID AidMatch Programme (June 2014–February 2018) worked within the soya, milk and vegetable value chains targeting women smallholder farmers in areas of poverty. The programme aimed to benefit 63,600 people (10,600 smallholder households) living in Zambia, Tajikistan and Bangladesh through increases in household income, women having greater influence over key livelihood decisions within their households and communities, and engaging in livelihoods more resilient to shocks, such as natural disasters and market volatility. In Zambia, the GEM programme has been implemented in four districts of the Copperbelt Province in coordination with implementing partners Heifer Programmes International and the Sustainable Agricultural Programme (SAP). The GEM programme in the Copperbelt sought to directly improve the livelihoods of an estimated 4,000 smallholder farmers (75 percent women) in the dairy and soya value chains through improved production skills, resilience to climate risks, access to market opportunities, greater engagement with market players and strengthened ability to influence private sector and government actors. The evaluation was designed to investigate if and how the GEM programme might have contributed to its intended outcomes – not only in the lives of individual women smallholder farmers targeted by the programme but also to changes in their communities and the larger market system. It also sought to capture any potential unintended outcomes of the programme.
3

Gunay, Selim, Fan Hu, Khalid Mosalam, Arpit Nema, Jose Restrepo, Adam Zsarnoczay, and Jack Baker. Blind Prediction of Shaking Table Tests of a New Bridge Bent Design. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, November 2020. http://dx.doi.org/10.55461/svks9397.

Full text
Abstract:
Considering the importance of the transportation network and bridge structures, the associated seismic design philosophy is shifting from the basic collapse prevention objective to maintaining functionality on the community scale in the aftermath of moderate to strong earthquakes (i.e., resiliency). In addition to performance, the associated construction philosophy is also being modernized, with the utilization of accelerated bridge construction (ABC) techniques to reduce impacts of construction work on traffic, society, economy, and on-site safety during construction. Recent years have seen several developments towards the design of low-damage bridges and ABC. According to the results of conducted tests, these systems have significant potential to achieve the intended community resiliency objectives. Taking advantage of such potential in the standard design and analysis processes requires proper modeling that adequately characterizes the behavior and response of these bridge systems. To evaluate the current practices and abilities of the structural engineering community to model this type of resiliency-oriented bridges, the Pacific Earthquake Engineering Research Center (PEER) organized a blind prediction contest of a two-column bridge bent consisting of columns with enhanced response characteristics achieved by a well-balanced contribution of self-centering, rocking, and energy dissipation. The parameters of this blind prediction competition are described in this report, and the predictions submitted by different teams are analyzed. In general, forces are predicted better than displacements. The post-tension bar forces and residual displacements are predicted with the best and least accuracy, respectively. Some of the predicted quantities are observed to have coefficient of variation (COV) values larger than 50%; however, in general, the scatter in the predictions amongst different teams is not significantly large. 
Applied ground motions (GM) in shaking table tests consisted of a series of naturally recorded earthquake acceleration signals, where GM1 is found to be the largest contributor to the displacement error for most of the teams, and GM7 is the largest contributor to the force (hence, the acceleration) error. The large contribution of GM1 to the displacement error is due to the elastic response in GM1 and the errors stemming from the incorrect estimation of the period and damping ratio. The contribution of GM7 to the force error is due to the errors in the estimation of the base-shear capacity. Several teams were able to predict forces and accelerations with only moderate bias. Displacements, however, were systematically underestimated by almost every team. This suggests that there is a general problem either in the assumptions made or the models used to simulate the response of this type of bridge bent with enhanced response characteristics. Predictions of the best-performing teams were consistently and substantially better than average in all response quantities. The engineering community would benefit from learning details of the approach of the best teams and the factors that caused the models of other teams to fail to produce similarly good results. Blind prediction contests provide: (1) very useful information regarding areas where current numerical models might be improved; and (2) quantitative data regarding the uncertainty of analytical models for use in performance-based earthquake engineering evaluations. Such blind prediction contests should be encouraged for other experimental research activities and are planned to be conducted annually by PEER.
4

Lubowa, Nasser, Zita Ekeocha, Stephen Robert Byrn, and Kari L. Clase. Pharmaceutical Industry in Uganda: A Review of the Common GMP Non-conformances during Regulatory Inspections. Purdue University, December 2021. http://dx.doi.org/10.5703/1288284317442.

Full text
Abstract:
The prevalence of substandard medicines in Africa is high but not well documented. Low- and middle-income countries (LMICs) are likely to face considerable challenges with substandard medications. Africa faces inadequate drug regulatory practices, and in general, compliance with Good Manufacturing Practices (GMP) in most of the pharmaceutical industries is lacking. The majority of pharmaceutical manufacturers in developing countries are often overwhelmed by the GMP requirements and are therefore unable to operate in line with internationally acceptable standards. Non-conformances observed during regulatory inspections indicate the status of compliance with GMP requirements. The study aimed to identify the GMP non-conformances during regulatory inspections and gaps in the production of locally manufactured pharmaceuticals in Uganda by reviewing the 50 available GMP reports of 21 local pharmaceutical companies in Uganda from 2016. The binary logistic generalized estimating equations (GEE) model was applied to estimate the association between the odds of a company failing to comply with the GMP requirements and non-conformances under each GMP inspection parameter. A linear regression analysis with dummy variables was used to determine the relationship between the selected variables (GMP inspection parameters) and the production capacity of the local pharmaceutical industry. Oral liquids, external liquid preparations, powders, creams, and ointments were the main categories of products manufactured locally. The results indicated that 86% of the non-conformances were major, 11% minor, and 3% critical. The majority of the non-conformances related to production (30.1%), documentation (24.5%), and quality control (17.6%). Regression results indicated that for every non-conformance under premises, equipment, and utilities, there was a 7-fold likelihood of the manufacturer failing to comply with the GMP standards (aOR=6.81, P=0.001).
The results showed that major non-conformances were significantly higher in industries of small scale (B=6.77, P=0.02) and medium scale (B=8.40, P=0.04), as compared to those of large scale. This study highlights the failures in quality assurance systems and stagnated GMP improvements in these industries that need to be addressed by the manufacturers with support from the regulator. The addition of risk assessment to critical production and quality control operations and establishment of appropriate corrective and preventive actions as part of quality management systems are required to ensure that quality pharmaceuticals are manufactured locally.
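The adjusted odds ratio reported above is obtained by exponentiating a GEE log-odds coefficient. A minimal sketch of that conversion follows; the coefficient is back-calculated from the reported aOR = 6.81, and the standard error is hypothetical, since the paper's value is not reproduced here.

```python
import math

# Back-calculate the GEE log-odds coefficient from the reported
# adjusted odds ratio (aOR = 6.81 for premises/equipment/utilities
# non-conformances) and form a Wald 95% confidence interval.
beta = math.log(6.81)   # log-odds change per additional non-conformance
se = 0.55               # hypothetical standard error of beta

aor = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)    # CI is computed on the log scale,
ci_high = math.exp(beta + 1.96 * se)   # then exponentiated
print(f"aOR = {aor:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```

The interval is asymmetric around the odds ratio because the Wald interval is symmetric on the log-odds scale, not the odds scale.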
5

O'Neill, H. B., S. A. Wolfe, and C. Duchesne. Preliminary modelling of ground ice abundance in the Slave Geological Province, Northwest Territories and Nunavut. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/329815.

Full text
Abstract:
New infrastructure corridors within the Slave Geological Province could provide transportation, electric, and communications links to mineral-rich areas of northern Canada, and connect southern highway systems and Arctic shipping routes. Relatively little information on permafrost and ground ice is available compared to other regions, particularly in the north of the corridor. Improved knowledge of permafrost and ground ice conditions is required to inform planning and management of infrastructure. Work within the Geological Survey of Canada's (GSC) GEM-GeoNorth program includes mapping periglacial terrain features, synthesizing existing permafrost and surficial data, and modelling ground ice conditions along the Yellowknife-Grays Bay corridor. Here we present initial modelling of ground ice abundance in the region using a methodology developed for the national scale Ground ice map of Canada (GIMC), and higher resolution surficial geology mapping. The results highlight the increased estimated abundance of potentially ice-rich deposits compared to the GIMC when using more detailed surficial geology as model inputs.
6

Mathew, Sonu, Srinivas S. Pulugurtha, and Sarvani Duvvuri. Modeling and Predicting Geospatial Teen Crash Frequency. Mineta Transportation Institute, June 2022. http://dx.doi.org/10.31979/mti.2022.2119.

Full text
Abstract:
This research project 1) evaluates the effect of road network, demographic, and land use characteristics on road crashes involving teen drivers and 2) develops and compares the predictability of local and global regression models in estimating teen crash frequency. The team considered data for 201 spatially distributed road segments in Mecklenburg County, North Carolina, USA and obtained data on teen crashes from the Highway Safety Information System (HSIS) database. The team extracted demographic and land use characteristics using two buffer widths (0.25 miles and 0.5 miles) around each selected road segment, with the number of crashes on each road segment used as the dependent variable. Generalized linear models with negative binomial distribution (GLM-based NB models) as well as geographically weighted negative binomial regression (GWNBR) and geographically weighted negative binomial regression with global dispersion (GWNBRg) models were developed and compared. The research relied on data for 147 geographically distributed road segments for modeling and data for 49 segments for validation. The annual average daily traffic (AADT), light commercial land use, light industrial land use, number of household units, and number of pupils enrolled in public or private high schools are significant explanatory variables influencing teen crash frequency. Both methods have good predictive capabilities and can be used to estimate teen crash frequency. However, GWNBR and GWNBRg better capture the spatial dependency and spatial heterogeneity among teen road crashes and the associated risk factors.
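The negative binomial crash-frequency models described above use a log link, and a dispersion parameter lets the variance exceed the Poisson mean. A minimal sketch with invented coefficients follows; the paper's fitted values are not reproduced here.

```python
import math

# Hypothetical coefficients for a log-link negative binomial crash model.
# These values are illustrative only, not the paper's estimates.
b0, b_aadt, b_hh = -2.0, 0.00004, 0.0008   # intercept, AADT, households
alpha = 0.6                                 # NB dispersion parameter

def expected_crashes(aadt, households):
    """Predicted teen crash frequency on a segment (log link)."""
    return math.exp(b0 + b_aadt * aadt + b_hh * households)

mu = expected_crashes(aadt=25_000, households=900)
variance = mu + alpha * mu ** 2   # NB2 variance exceeds the Poisson mean
print(f"mu = {mu:.3f} crashes, Var = {variance:.3f}")
```

In the geographically weighted variants (GWNBR, GWNBRg), coefficients such as `b_aadt` are estimated separately at each segment's location, which is how those models capture the spatial heterogeneity the study reports.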