
Dissertations / Theses on the topic 'Regression analysis – Econometric models'


Consult the top 50 dissertations / theses for your research on the topic 'Regression analysis – Econometric models.'


1

Yeasmin, Mahbuba 1965. "Multiple maxima of likelihood functions and their implications for inference in the general linear regression model." Monash University, Dept. of Econometrics and Business Statistics, 2003. http://arrow.monash.edu.au/hdl/1959.1/5821.

2

Pitrun, Ivet 1959. "A smoothing spline approach to nonlinear inference for time series." Monash University, Dept. of Econometrics and Business Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/8367.

3

Marchenko, Maria. "Econometric analysis of quantile regression models and networks: With empirical applications." Supervised by Enno Mammen. Mannheim: Universitätsbibliothek Mannheim, 2016. http://d-nb.info/1114661287/34.

4

Volgina, Vera. "Postmerger financial performance: econometric analysis." Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-16850.

Abstract:
Numerous studies over the last few decades have examined the impact of mergers and acquisitions on company performance. The topic remains current, as there is still no common approach to evaluating the benefits a merger brings to the newly established entity. This thesis investigates post-merger financial performance using the example of three of the largest energy companies in Europe: RWE, E.ON and Vattenfall. The aim of the thesis is to find out whether the financial performance of the chosen companies improves after the merger occurs. This is done by analysing financial ratios commonly used in corporate finance and by constructing two regression models that relate a basic indicator of the company's growth (net income) to the fact of the merger and selected financial ratios. The research yields several findings: financial performance worsens three to five years after the merger, with continuing improvement in later years; financial indicators are quite stable before the merger; and there is a positive relationship between the fact of the merger and net income. These outcomes may be considered significant, though further research and elaboration of the topic can be performed in the future.
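A minimal sketch of the kind of model the abstract describes: net income regressed on a post-merger dummy and financial ratios. All data and column names here are hypothetical illustrations, not figures from the thesis.

```python
# Hypothetical panel: net income, a post-merger indicator and two ratios.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "net_income":  [1.2, 1.4, 1.1, 0.9, 1.0, 1.3, 1.6, 1.8],
    "post_merger": [0, 0, 0, 0, 1, 1, 1, 1],   # 1 = year after the merger
    "roa":         [0.06, 0.07, 0.05, 0.04, 0.05, 0.06, 0.07, 0.08],
    "leverage":    [0.50, 0.55, 0.60, 0.60, 0.70, 0.65, 0.60, 0.55],
})

fit = smf.ols("net_income ~ post_merger + roa + leverage", data=df).fit()
print(fit.params)  # sign of post_merger suggests the direction of the effect
```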
5

Wesso, Gilbert R. "The econometrics of structural change: statistical analysis and forecasting in the context of the South African economy." University of the Western Cape, 1994. http://hdl.handle.net/11394/7907.

Abstract:
One of the assumptions of conventional regression analysis is that the parameters are constant over all observations. It has often been suggested that this may not be a valid assumption to make, particularly if the econometric model is to be used for economic forecasting. Apart from this, econometric models in particular are used to investigate the underlying interrelationships of the system under consideration in order to understand and explain relevant phenomena in structural analysis. The prerequisite of such use of econometrics is that the regression parameters of the model are assumed to be constant over time or across different cross-sectional units.
6

Huh, Ji Young. "Applications of Monte Carlo Methods in Statistical Inference Using Regression Analysis." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/cmc_theses/1160.

Abstract:
This paper studies the use of Monte Carlo simulation techniques in the field of econometrics, specifically statistical inference. First, I examine several estimators by deriving properties explicitly and generate their distributions through simulations. Here, simulations are used to illustrate and support the analytical results. Then, I look at test statistics where derivations are costly because of the sensitivity of their critical values to the data generating processes. Simulations here establish significance and necessity for drawing statistical inference. Overall, the paper examines when and how simulations are needed in studying econometric theories.
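A small illustration of the first use of simulation described above, offered as a hedged sketch with arbitrary settings: generate the sampling distribution of the OLS slope estimator and compare it with its analytical standard error.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, beta = 50, 10_000, 2.0
x = rng.uniform(0, 1, n)                    # fixed design across replications

slopes = np.empty(reps)
for r in range(reps):
    y = 1.0 + beta * x + rng.normal(0, 1, n)
    slopes[r] = np.cov(x, y, bias=True)[0, 1] / x.var()

analytic_se = 1.0 / np.sqrt(n * x.var())    # sigma / sqrt(sum (x - xbar)^2)
print(slopes.mean(), slopes.std(), analytic_se)  # simulation matches theory
```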
7

Araújo, Ana Maria Maurício. "Analysis of environmental management practices and their impacts on the productivity of shrimp farming in Ceará State." Universidade Federal do Ceará, 2015. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=14305.

Abstract:
Shrimp farming has consolidated itself as one of the most promising economic activities of the Brazilian Northeast, where it has also been held responsible for large negative impacts on the coastal environment. The research problem was to determine how productivity is affected by the adoption of environmental management practices, by analysing the interaction between productivity and the other environmental management factors. To do so, a linear regression analysis was estimated to obtain a mathematical equation quantifying the relationship between productivity and the other variables. The survey was conducted in 60 shrimp farms located in Ceará, on farms devoted only to the fattening phase. The environmental management practices adopted by producers were surveyed and management indices were created; these indices were aggregated into a single index that, together with variables describing the productive characteristics and location of the farms, gave rise to semi-logarithmic (lin-log) econometric models. The regression analysis showed that productivity is best explained by stocking density, an intensive production system and periodic technical assistance. Environmental management does not emerge as a factor that influences productivity, which helps explain the low level of environmental management among shrimp farmers.
8

Colagrossi, Marco. "Meta-analysis and meta-regression analysis in economics: Methodology and applications." Doctoral thesis, Università Cattolica del Sacro Cuore, 2017. http://hdl.handle.net/10280/19697.

Abstract:
Starting in the late 1980s, improved computing performance and the spread of statistical methods allowed researchers to put their theories to the test. Formerly constrained economists became able to run millions of regressions before lunch without leaving their desks. Unfortunately, this led to an accumulation of often conflicting evidence. To address this issue, this thesis provides an overview of the meta-analysis methods available in economics. The first paper explains the intuitions behind fixed and random effects models in such a framework. It then details how multilevel modelling can help overcome hierarchical dependence issues. Finally, it addresses the problem of publication bias in the presence of high between-study heterogeneity. These methods are then applied, in the second and third papers, to two different areas of the economics literature: the effect of relationship banking on firm performance and the democracy and growth conundrum. Results are far-reaching. While in the first case the documented negative relation is not driven by country-specific characteristics, the opposite is true for the (statistically insignificant) impact of democratic institutions on economic growth. What these characteristics are is, however, less clear. Scholars have not yet found the covariates, or their suitable proxies, that matter to explain this much-debated relationship.
9

Cowley, Mervyn Wellesley. "Property market forecasts and their valuation implications: a study of the Brisbane central business district office market." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16563/.

Abstract:
Property market forecasts play a crucial role in modern real estate valuation methodologies and, consequently, flawed forecasts can have adverse impacts on the accuracy of valuations. This thesis identifies property industry inconsistencies in the formulation and application of office rent forecasts adopted in discounted cash flow (DCF) studies used to assess the value of commercial properties and the viability of proposed projects. Existing research on commercial property cycles and office property market modelling is examined in order to identify the dominant market drivers adopted by researchers. Forecasting techniques are also explored towards specifying space and rent models for the Brisbane CBD office market using the perceived dominant drivers as explanatory variables. Surveys of property valuers and developers are undertaken to underpin the selection of these variables. The implications of varying rent forecasts applied in DCF based valuation assessments are tested through the use of a case study involving four Brisbane office buildings. Innovative research is conducted through adopting geographic information system supported land use and historical valuation studies to delineate market precincts within the Brisbane CBD. The rent model is then re-estimated using precinct based office rent data to allow the generation of forecasts for the individual precincts. Out-of-sample accuracy test results for the precinct forecasts are compared with the results produced by the model specified using whole-of-city data. The literature reviews, surveys and model testing determine a relatively consistent range of dominant explanatory variables applicable to office markets. The case study, in a local context, confirms that varying forecasts do have a significant impact on property valuations. Tests of the forecast results generated by the Brisbane CBD model provide some evidence that more plausible office rent forecasts stem from the use of market models as compared with solely applying professional judgement based forecasts. Subject to data availability limitations, the precinct based rent model is found to produce rent forecasts superior to those generated by the whole-of-city model. Finally, the thesis makes a range of industry recommendations towards enhancing forecasts and recommendations are also made for potential future research projects.
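The case-study point, that valuations react strongly to the rent forecast fed into a DCF, can be seen in a toy calculation; the cash flows, growth rates and yields below are hypothetical.

```python
# Ten-year DCF with a terminal value; value responds sharply to rent growth.
def dcf_value(rent, growth, discount, years=10, exit_yield=0.07):
    pv = sum(rent * (1 + growth) ** t / (1 + discount) ** t
             for t in range(1, years + 1))
    terminal = rent * (1 + growth) ** years / exit_yield
    return pv + terminal / (1 + discount) ** years

for g in (0.02, 0.03, 0.04):                 # alternative rent forecasts
    print(g, round(dcf_value(rent=100.0, growth=g, discount=0.09), 1))
```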
10

Miskolczi, Martina. "Vícestavová analýza nezaměstnanosti a další statistické metody pro modelování nezaměstnanosti [Multistate analysis of unemployment and other statistical methods for modelling unemployment]." Doctoral thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-201115.

Abstract:
Unemployment modelling requires both a view of the labour market and the economy as they are and knowledge of mathematics, statistics and, thus, econometrics. The importance of unemployment has become even more evident since the crisis: high unemployment is not only an economic burden but a serious social risk and psychological problem as well. The dissertation introduces selected models used for modelling unemployment and, in some cases, for predicting it. Being able to predict the future trend of the labour market reliably makes it possible to plan the tools of active and passive employment policy effectively, or to search for programmes and supports that help reduce unemployment. Specific applications of the models to the Czech labour market involve a multistate life-table model, simultaneous econometric models and the Phillips curve. The Phillips curve, the mutual trade-off between unemployment and inflation, is confirmed over short periods; over longer periods it rather fails and is not reliable. It cannot be used for prediction at all, since inflation itself would have to be predicted; the Beveridge curve has analogous characteristics. Simultaneous econometric models for the number of economically active persons and for unemployment and inflation de facto fail, even though they demonstrate the range of possibilities, including point and interval forecasts. Periods of economic crisis, when the principles of the labour market change, are usually a problem for such models, which work well in periods of stable growth or decline. Moreover, it is difficult to specify these models correctly given the threat of multicollinearity. Multistate models, aimed at the calculation of multistate life tables or even multistate projections, are extremely demanding in terms of input data, but they make it possible to understand the relations, or transitions, among states. This is a very beneficial tool for comprehension and policy planning in the area of labour market and social affairs in the process of lowering unemployment. Forecasts with this type of model are possible but difficult, because the probabilities of transition among states must themselves be predicted.
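In its simplest form, the short-run Phillips curve tested above reduces to a regression of inflation on unemployment; the sketch below uses made-up numbers purely to show the shape of the model, not Czech data.

```python
import numpy as np
import statsmodels.api as sm

unemployment = np.array([4.2, 5.1, 6.3, 7.0, 7.9, 6.8, 6.1, 5.4])
inflation    = np.array([3.1, 2.5, 1.9, 1.2, 0.8, 1.5, 1.8, 2.4])

fit = sm.OLS(inflation, sm.add_constant(unemployment)).fit()
print(fit.params)   # a negative slope is the classical short-run trade-off
```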
11

Gualdani, C. "Econometric analysis of network formation models." Thesis, University College London (University of London), 2017. http://discovery.ucl.ac.uk/1566643/.

Abstract:
This dissertation addresses topics in the econometrics of network formation models. Chapter 1 provides a review of the literature. Statistical models focus on the specification of the probability distribution of the network. Examples include models in which nodes are born sequentially and meet existing vertices through random meetings and network-based meetings. Within this group of models, special attention is reserved for the milestone work by Jackson and Rogers (2007): after discussing and replicating the main results of the paper, an extension of the original model is examined and fitted to a dataset of Google Plus users. Even if statistical models can reproduce the main characteristics of real networks relatively well, they usually lack microfoundations, which are essential for counterfactual analysis. The chapter hence moves on to the econometrics of economic models of network formation, where agents form links in order to maximise a payoff function. Within this framework, Chapter 2 studies identification of the parameters governing agents' preferences in a static game of network formation, where links represent asymmetric relations between players. After showing existence of an equilibrium, partial identification arguments are provided without restrictions on equilibrium selection. The usual computational difficulties are attenuated by restricting attention to some local games of the network formation game and giving up on sharpness. Chapter 3 applies the methodology developed in Chapter 2 to investigate empirically which preferences lie behind firms' decisions to appoint competitors' directors as executives. Using data on Italian companies, it is found that a firm i prefers its executives to sit on the board of a rival j when executives of other competitors are hosted too, possibly because this enables i to engage with them in "cheap talk" communications, besides having the opportunity to learn about j's decision-making process.
12

Izadi, Hooshang. "Censored regression and the Pearson system of distributions : an estimation method and application to demand analysis." Thesis, University of Essex, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252929.

13

Azam, Mohammad Nurul 1957. "Modelling and forecasting in the presence of structural change in the linear regression model." Monash University, Dept. of Econometrics and Business Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/9152.

14

Vilela, Lucas Pimentel. "Hypothesis testing in econometric models." Repositório Institucional do FGV, 2015. http://hdl.handle.net/10438/18249.

Abstract:
This thesis contains three chapters. The first chapter considers tests of the parameter of an endogenous variable in an instrumental variables regression model. The focus is on one-sided conditional t-tests. Theoretical and numerical work shows that the conditional 2SLS and Fuller t-tests perform well even when instruments are weakly correlated with the endogenous variable. When the population F-statistic is as small as two, the power is reasonably close to the power envelopes for similar and non-similar tests which are invariant to rotation transformations of the instruments. This finding is surprising considering the poor performance of two-sided conditional t-tests found in Andrews, Moreira, and Stock (2007). Those tests have bad power because the conditional null distributions of t-statistics are asymmetric when instruments are weak. Taking this asymmetry into account, we propose two-sided tests based on t-statistics. These novel tests are approximately unbiased and can perform as well as the conditional likelihood ratio (CLR) test. The second and third chapters are concerned with maxmin and minimax regret tests for broader hypothesis testing problems. In the second chapter, we present maxmin and minimax regret tests satisfying more general restrictions than the alpha-level and the power control over all alternative hypothesis constraints. More general restrictions enable us to eliminate trivial known tests and obtain tests with desirable properties, such as unbiasedness, local unbiasedness and similarity. We then prove that both tests always exist and that, under sufficient assumptions, they are Bayes tests with priors that are solutions of an optimization problem, the dual problem. In the last part of the second chapter, we consider testing problems that are invariant to some group of transformations. Under invariance of the hypothesis testing problem, the Hunt-Stein Theorem proves that the search for maxmin and minimax regret tests can be restricted to invariant tests. We prove that the Hunt-Stein Theorem still holds under the general constraints proposed. In the last chapter we develop a numerical method to implement the maxmin and minimax regret tests proposed in the second chapter. The parameter space is discretized in order to obtain testing problems with a finite number of restrictions. We prove that, as the discretization becomes finer, the maxmin and minimax regret tests satisfying the finite number of restrictions have the same power under the alternative as the maxmin and minimax regret tests satisfying the general constraints. Hence, we can numerically implement the tests for a finite number of restrictions as an approximation to the tests satisfying the general constraints. The results in the second and third chapters extend and complement the maxmin and minimax regret literature interested in characterizing and implementing both tests.
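The weak-instrument asymmetry driving the first chapter can be reproduced in a few lines: simulate the null distribution of the 2SLS t-statistic in a just-identified model with a small first-stage coefficient. The settings below are illustrative assumptions, not the thesis's design.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
n, reps, beta0, pi = 200, 5_000, 0.0, 0.1   # small pi -> weak instrument
tstats = np.empty(reps)
for r in range(reps):
    z = rng.normal(size=n)
    u = rng.normal(size=n)
    v = 0.8 * u + 0.6 * rng.normal(size=n)  # endogeneity: corr(u, v) > 0
    x = pi * z + v
    y = beta0 * x + u
    b = (z @ y) / (z @ x)                   # just-identified 2SLS estimate
    e = y - b * x
    se = np.sqrt((e @ e) / (n - 1) * (z @ z)) / abs(z @ x)
    tstats[r] = (b - beta0) / se

print(skew(tstats))  # markedly non-zero: the null distribution is asymmetric
```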
15

Kapetanios, George. "Essays on the econometric analysis of threshold models." Thesis, University of Cambridge, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286704.

16

Fezzi, Carlo <1980>. "Econometric models for the analysis of electricity markets." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2007. http://amsdottorato.unibo.it/433/.

17

Pole, A. M. "Bayesian analysis of some threshold switching models." Thesis, University of Nottingham, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.356040.

18

Lu, Xuewen. "Semiparametric regression models in survival analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0030/NQ27458.pdf.

19

Mitchell, Napoleon. "Outliers and Regression Models." Thesis, University of North Texas, 1992. https://digital.library.unt.edu/ark:/67531/metadc279029/.

Abstract:
The mitigation of outliers serves to increase the strength of a relationship between variables. This study defined outliers in three different ways and used five regression procedures to describe the effects of outliers on 50 data sets. This study also examined the relationship among the shape of the distribution, skewness, and outliers.
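One common way to make "outlier" operational in a regression setting, offered only as a hedged illustration of the abstract's theme: flag observations whose externally studentized residual exceeds 2 in absolute value. Data here are synthetic.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 40)
y = 3.0 + 0.5 * x + rng.normal(0, 1, 40)
y[5] += 8.0                                 # plant one gross outlier

res = sm.OLS(y, sm.add_constant(x)).fit()
stud = OLSInfluence(res).resid_studentized_external
print(np.where(np.abs(stud) > 2)[0])        # observation 5 should be flagged
```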
20

Liu, Qingfeng. "Econometric methods for market risk analysis : GARCH-type models and diffusion models." Kyoto University, 2007. http://hdl.handle.net/2433/136053.

21

Xu, Xingbai. "Asymptotic Analysis for Nonlinear Spatial and Network Econometric Models." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1461249529.

22

Li, Yang, and 李杨. "Statistical inference for some econometric time series models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/195984.

Abstract:
With increasing economic activity, people have become more and more interested in econometric models. Two mainstream econometric models have been very popular in recent decades. One is the quantile autoregressive (QAR) model, which allows varying coefficients in linear time series and greatly broadens the scope of regression research. The first topic of this thesis is the modelling of QAR models. We propose two important measures, quantile correlation (QCOR) and quantile partial correlation (QPCOR). We then apply them to QAR models and introduce two valuable quantities, the quantile autocorrelation function (QACF) and the quantile partial autocorrelation function (QPACF). This allows us to extend the Box-Jenkins three-stage procedure (model identification, model parameter estimation, and model diagnostic checking) from classical autoregressive models to quantile autoregressive models. Specifically, the QPACF of an observed time series can be employed to identify the autoregressive order, while the QACF of residuals obtained from the model can be used to assess the model adequacy. We not only demonstrate the asymptotic properties of QCOR, QPCOR, QACF and QPACF, but also present the large-sample results for the QAR estimates and a quantile version of the Ljung-Box test. Moreover, we obtain bootstrap approximations to the distributions of the parameter estimators and the proposed measures. Simulation studies indicate that the proposed methods perform well in finite samples, and an empirical example is presented to illustrate the usefulness of the QAR model. The other important econometric model is the autoregressive conditional duration (ACD) model, developed to describe ultra-high-frequency (UHF) financial time series data. The second topic of this thesis incorporates the ACD model with one of the extreme value distributions, the Fréchet distribution. We apply maximum likelihood estimation (MLE) to Fréchet ACD models and derive their generalized residuals for model adequacy checking. It is noteworthy that simulations show a relatively greater sensitivity of the linear parameters to sampling errors. This phenomenon reflects the skewness of the Fréchet distribution and suggests a way for practitioners to assess model accuracy. Furthermore, we present the empirical sizes and powers of the Box-Pierce, Ljung-Box and modified Box-Pierce statistics as comparisons for the proposed portmanteau statistic. In addition to the Fréchet ACD model, we also systematically analyze the Weibull ACD model, the Weibull distribution being the other nonnegative extreme value distribution. The last topic of the thesis covers estimation and diagnostic checking for the Weibull ACD model. The MLE in this model exhibits a slight sensitivity in the linear parameters; the simulations, however, show an obvious trade-off between the skewness of the Weibull distribution and the sampling error. Moreover, asymptotic properties are studied for the generalized residuals, and a goodness-of-fit test is employed to obtain a portmanteau statistic. The simulation results on size and power show that the Weibull ACD is superior to the Fréchet ACD in detecting a wrongly specified model, which is meaningful in practice.
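A sketch of the quantile autocorrelation idea, following one published definition of quantile correlation (the indicator-based quantile residual correlated with the lagged series). This is an illustration of the concept, not the thesis's own code.

```python
import numpy as np

def qacf(y, tau, lag):
    """Sample quantile autocorrelation of y at quantile tau and a given lag."""
    y = np.asarray(y, dtype=float)
    yt, ylag = y[lag:], y[:-lag]
    psi = tau - (yt < np.quantile(yt, tau))   # tau minus an indicator
    return np.cov(psi, ylag, bias=True)[0, 1] / np.sqrt(
        tau * (1 - tau) * ylag.var())

rng = np.random.default_rng(3)
y = np.zeros(500)
for t in range(1, 500):                       # toy AR(1) series
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()
print([round(qacf(y, 0.5, k), 3) for k in (1, 2, 3)])  # decays like an ACF
```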
23

You, Jiazhong 1968. "Robust estimation and testing : finite-sample properties and econometric applications." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=36739.

Abstract:
High breakdown point, bounded influence and high efficiency at the Gaussian model are desired properties of robust regression estimators. Robustness of validity, robustness of efficiency and high breakdown-point size and power are the fundamental goals in robust testing. The objective of this dissertation is to examine the finite-sample properties of robust estimators and tests, and to find some useful applications for them. This is accomplished by extensive Monte Carlo experiments and other inference techniques in various contamination situations. In the linear regression model with an outlying regressor and deviations from the normal error distribution, robust estimators demonstrate noticeable advantages over the standard LS and maximum likelihood (ML) estimators. Our findings reveal that the finite-sample behavior of the robust estimators is very different from their asymptotic properties. The robust properties of the estimators carry over to test statistics based on them. The robust tests we propose achieve, to a large extent, the fundamental goals of robust testing. Economic applications to modelling household consumption behavior and testing for (G)ARCH effects show that one can reap large gains from appropriate use of robust methods, even in very simple models.
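A minimal sketch of the robust-versus-LS comparison described above: a Huber M-estimator next to OLS on data with heavy-tailed errors and a few gross outliers. Data and settings are synthetic illustrations, not the dissertation's experiments.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.normal(size=100)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=100)  # non-normal errors
y[:3] += 15.0                                # a few gross outliers
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()
rob = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print(ols.params, rob.params)                # the M-estimates move far less
```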
24

Zhang, Zhigang. "Nonproportional hazards regression models for survival analysis." University of Missouri, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p3144473.

25

Mo, Lijia. "Examining the reliability of logistic regression estimation software." Diss., Kansas State University, 2010. http://hdl.handle.net/2097/7059.

Abstract:
The reliability of nine software packages using the maximum likelihood estimator for the logistic regression model was examined using generated benchmark datasets and models. Software packages tested included: SAS (Procs Logistic, Catmod, Genmod, Surveylogistic, Glimmix, and Qlim), Limdep (Logit, Blogit), Stata (Logit, GLM, Binreg), Matlab, Shazam, R, Minitab, Eviews, and SPSS, for all available algorithms, none of which had been previously tested. This study expands the existing literature in this area by examining Minitab 15 and SPSS 17. The findings indicate that Matlab, R, Eviews, Minitab, Limdep (BFGS), and SPSS provided consistently reliable results for both parameter and standard error estimates across the benchmark datasets. While some packages performed admirably, shortcomings did exist. The SAS maximum log-likelihood estimators do not always converge to the optimal solution and can stop prematurely depending on starting values, issuing a "flat" error message. This drawback can be dealt with by rerunning the maximum log-likelihood estimator from a closer starting point to see whether the convergence criteria are actually satisfied. Although Stata-Binreg provides reliable parameter estimates, there is as yet no way to obtain standard error estimates in Stata-Binreg. Limdep performs relatively well but did not always converge, due to a weakness of the algorithm. The results show that solely trusting the default settings of statistical software packages may lead to non-optimal, biased or erroneous results, which may affect the quality of empirical results obtained by applied economists. Reliability tests indicate severe weaknesses in SAS Procs Glimmix and Genmod. Some software packages fail reliability tests under certain conditions. The findings indicate the need to use multiple software packages to solve econometric models.
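The abstract's practical advice, to verify convergence rather than trust the defaults, is easy to script. The hedged sketch below fits a logistic MLE from several starting points and checks the gradient norm at each solution; it is an illustration, not the benchmark procedure used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(5)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = rng.random(200) < expit(X @ np.array([-0.5, 1.0, -2.0]))

def negll(b):                                # negative log-likelihood
    eta = X @ b
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

def grad(b):
    return X.T @ (expit(X @ b) - y)

for start in (np.zeros(3), np.full(3, 5.0), rng.normal(size=3)):
    fit = minimize(negll, start, jac=grad, method="BFGS")
    # estimates should agree across starts, with gradient norm near zero
    print(fit.x.round(3), np.linalg.norm(grad(fit.x)))
```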
26

Lye, J. N. "Some contributions to finite-sample analysis in three econometric models." Thesis, University of Canterbury. Economics, 1990. http://hdl.handle.net/10092/4367.

Abstract:
In the standard classical regression model the most commonly used procedures for estimation are based on the ordinary least squares method, which is justified on the basis of well-known finite-sample properties. However, this model rests on a number of assumptions, such as homoskedastic, serially independent and normally distributed disturbances and nonstochastic regressors. By changing these assumptions in one way or another, different estimating situations are created, in many of which the OLS estimator may have no statistical justification at all. Further, alternative estimation methods have often been justified only on the basis of their asymptotic properties, although in practice economists frequently have to base their statistical analysis on a relatively small number of observations. This suggests that the particular estimator to use in any situation should be chosen on the basis of finite-sample considerations. The analysis of the finite-sample properties of commonly used estimators in three well-known econometric models is the focus of this thesis. In particular, the three models considered are: the limited-information simultaneous equations model, the nonnormal linear regression model and the nonnormal limited-information simultaneous equations model. The techniques used include derivation of the estimators' exact distributions; where this is analytically intractable, Monte Carlo methods are employed. The limited-information simultaneous equations model is analyzed in two stages. First, a useful method of numerically evaluating many of the commonly used estimators, including the two-stage least squares estimator, is presented. Second, this method is combined with Monte Carlo analysis to compare the distributions of the limited-information maximum likelihood and two-stage least squares estimators in misspecified simultaneous equations models. The result of this comparison indicates the superior performance of the limited-information maximum likelihood estimator over the two-stage least squares estimator in both correctly specified and misspecified simultaneous equations models. Recently, models with possibly nonnormally distributed disturbances have attracted more attention. For such models, independence and uncorrelatedness of the disturbance terms are not equivalent. Using the nonnormal regression model, the statistical consequences of distinguishing between independence and uncorrelatedness are considered when the disturbances are Student-t distributed. The results obtained demonstrate that the distinction between the two assumptions is an important one and that the consequences of making the wrong assumption can be serious. Consequently, specification tests are also presented which test for uncorrelatedness versus independence in the elliptically symmetric family. The nonnormal limited-information simultaneous equations model provides a relatively new area of analysis, as there are few published results available on the effects of nonnormal disturbances in the limited-information simultaneous equations model. The objective here is to combine the themes pursued separately in the other two models. To narrow the range of possible models, attention is focused on the exactly identified simultaneous equations model. This model has a number of interesting features when the reduced-form disturbances are normally distributed. These features are illustrated, and comparisons are then made with the same model when the distribution of the disturbances is widened to include the Student-t family. In this case, as for the nonnormal linear regression model, a distinction needs to be made between independently distributed and jointly distributed disturbances. The consequences of these different assumptions are shown to be important; specification tests relating to this distinction are therefore also presented.
27

Billah, Baki 1965. "Model selection for time series forecasting models." Monash University, Dept. of Econometrics and Business Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/8840.

28

Li, Ke 1969. "A general equilibrium analysis of the division of labour : violation and enforcement of property rights, impersonal networking decisions and bundling sale." Monash University, School of Asian Languages and Studies, 2001. http://arrow.monash.edu.au/hdl/1959.1/9256.

29

Li, Lingzhu. "Model checking for general parametric regression models." HKBU Institutional Repository, 2019. https://repository.hkbu.edu.hk/etd_oa/654.

Abstract:
Model checking for regressions has drawn considerable attention in the last three decades. Compared with global smoothing tests, local smoothing tests, which are more sensitive to high-frequency alternatives, can only detect local alternatives distinct from the null model at a much slower rate when the dimension of the predictor is high. When the number of covariates is large, the nonparametric estimations used in local smoothing tests lack efficiency. The corresponding tests then have trouble maintaining the significance level and detecting the alternatives. To tackle the issue, we propose two methods under a high but fixed dimension framework. Further, we investigate a model checking test under divergent dimension, where the numbers of covariates and unknown parameters grow with the sample size n. The first proposed test is constructed upon a typical kernel-based local smoothing test using the projection method. Employing projection and integration, the resulting test statistic has a closed form that depends only on the residuals and distances of the sample points. A merit of the developed test is that the distance is easy to implement compared with kernel estimation, especially when the dimension is high. Moreover, the test inherits some features of local smoothing tests owing to its construction. Although it is ultimately similar in spirit to an Integrated Conditional Moment test, it leads to a test with a weight function that helps to collect more information from the samples than the Integrated Conditional Moment test. Simulations and real data analysis justify the power of the test. The second test, which is a synthesis of local and global smoothing tests, aims at solving the slow convergence rate caused by nonparametric estimation in local smoothing tests. A significant feature of this approach is that it allows nonparametric estimation-based tests, under the alternatives, to share the merits of existing empirical process-based tests. The proposed hybrid test can detect local alternatives at the fastest possible rate, like the empirical process-based tests, and simultaneously retains the sensitivity to high-frequency alternatives of the nonparametric estimation-based tests. This feature is achieved by utilizing an indicative dimension in the field of dimension reduction. As a by-product, we present a systematic study of a residual-related central subspace for model adaptation, showing when alternative models can be indicated and when they cannot. Numerical studies are conducted to verify its application. Since data volumes are increasing, the numbers of predictors and unknown parameters may diverge as the sample size n goes to infinity. Model checking under divergent dimension, however, is almost uncharted in the literature. In this thesis, an adaptive-to-model test is proposed to handle the divergent dimension, based on the two previously introduced tests. Theoretical results show that, to obtain the asymptotic normality of the parameter estimator, the number of unknown parameters should be of order o(n^{1/3}). Also, as a spinoff, we demonstrate the asymptotic properties of the estimators of the residual-related central subspace and the central mean subspace under different hypotheses.
30

Yuen, Wai-kee, and 袁偉基. "A historical event analysis of the variability in the empirical uncovered interest parity (UIP) coefficient." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B36424201.

31

Neugebauer, Shawn Patrick. "Robust Analysis of M-Estimators of Nonlinear Models." Thesis, Virginia Tech, 1996. http://hdl.handle.net/10919/36557.

Abstract:
Estimation of nonlinear models finds applications in every field of engineering and the sciences. Much work has been done to build solid statistical theories for its use and interpretation. However, there has been little analysis of the tolerance of nonlinear model estimators to deviations from assumptions and normality. We focus on analyzing the robustness properties of M-estimators of nonlinear models by studying the effects of deviations from assumptions and normality on these estimators. We discuss St. Laurent and Cook's Jacobian Leverage and identify the relationship of the technique to the robustness concept of influence. We derive influence functions for M-estimators of nonlinear models and show that influence of position becomes, more generally, influence of model. The result shows that, for M-estimators, we must bound not only influence of residual but also influence of model. Several examples highlight the unique problems of nonlinear model estimation and demonstrate the utility of the influence function.
32

Kempf, Simon P. "The office property market of Hong Kong: an econometric analysis." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B2963166X.

33

Sullwald, Wichard. "Grain regression analysis." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86526.

Abstract:
Grain regression analysis forms an essential part of solid rocket motor simulation. In this thesis a numerical grain regression analysis module is developed as an alternative to cumbersome and time-consuming analytical methods. The surface regression is performed by the level-set method, a numerical interface advancement scheme. A novel approach is proposed for integrating the surface area and volume of a numerical interface, defined implicitly in a level-set framework, by means of Monte Carlo integration. The grain regression module is directly coupled to a quasi-1D internal ballistics solver in an on-line fashion, in order to take into account the effects of spatially varying burn-rate distributions. A multi-timescale approach is proposed for the direct coupling of the two solvers.
34

Zhou, Qi Jessie. "Inferential methods for extreme value regression models." McMaster University, 2002.

35

Huang, Jian. "Estimation in regression models with interval censoring." Thesis, University of Washington, 1994. http://hdl.handle.net/1773/8950.

36

Möls, Märt. "Linear mixed models with equivalent predictors." Online version, 2004. http://dspace.utlib.ee/dspace/bitstream/10062/1339/5/Mols.pdf.

37

Shami, Roland G. (Roland George) 1960. "Bayesian analysis of a structural model with regime switching." Monash University, Dept. of Econometrics and Business Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/9277.

38

Dai, Wenlin. "Difference-based methods in nonparametric regression models." HKBU Institutional Repository, 2014. https://repository.hkbu.edu.hk/etd_oa/40.

Abstract:
This thesis develops some new difference-based methods for nonparametric regression models. The first part of the thesis focuses on variance estimation for nonparametric models in various settings. In Chapter 2, a unified framework of variance estimators is proposed for a model with smooth mean function. This framework combines higher-order difference sequences with the least squares method and greatly extends the literature, including most of the existing methods as special cases. We derive the asymptotic mean squared errors and make both theoretical and numerical comparisons for various estimators within the system. Based on the dramatic interaction of ordinary difference sequences and the least squares method, we eventually find a uniformly satisfactory estimator for all the settings, solving the challenging problem of sequence selection. In Chapter 3, three methods are developed for variance estimation in the repeated measurement setting. Both their asymptotic properties and finite sample performance are explored. The sequencing method is shown to be the most adaptive, while the sample variance method and the partitioning method are shown to outperform in certain cases. In Chapter 4, we propose a pairwise regression method for estimating the residual variance. Specifically, we regress the squared difference between observations on the squared distance between design points, and then estimate the residual variance as the intercept. Unlike most existing difference-based estimators that require a smooth regression function, our method applies to regression models with jump discontinuities. It also applies to situations where the design points are unequally spaced. The smoothness assumption on the nonparametric regression function is quite critical for curve fitting and residual variance estimation. The second part (Chapter 5) concentrates on discontinuity detection for the mean function. In particular, we revisit the difference-based method in Müller and Stadtmüller (1999) and propose to improve it. To achieve this goal, we first reveal that their method is less efficient due to an inappropriate choice of the response variable in their linear regression model. We then propose a new regression model for estimating the residual variance and the total amount of discontinuities simultaneously. In both theory and simulations, we show that the proposed variance estimator has a smaller MSE than their estimator, whereas the efficiency of the estimators of the total amount of discontinuities remains unchanged. Finally, we construct a new test procedure for detection using the newly proposed estimators; via simulation studies, we demonstrate that our new test procedure outperforms the existing one in most settings. At the beginning of Chapter 6, a series of new difference sequences is defined to span the gap between the optimal sequence and the ordinary sequence. The variance estimators using the proposed sequences are shown to be quite robust and to achieve the smallest mean squared errors in most general settings. Difference-based methods for variance function estimation are then discussed in general. Keywords: Asymptotic normality, Difference-based estimator, Difference sequence, Jump point, Least squares, Nonparametric regression, Pairwise regression, Repeated measurement, Residual variance
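A hedged sketch of the pairwise regression idea from Chapter 4, simplified here to a smooth mean function: regress half the squared difference between responses on the squared distance between (unequally spaced) design points, and read the residual variance off the intercept. The thesis extends this to jump discontinuities; this sketch does not.

```python
import numpy as np

rng = np.random.default_rng(6)
n, sigma = 300, 0.5
x = np.sort(rng.uniform(0, 1, n))            # unequally spaced design
y = np.sin(2 * np.pi * x) + rng.normal(0, sigma, n)

i, j = np.triu_indices(n, k=1)
near = np.abs(x[i] - x[j]) < 0.05            # keep only nearby pairs
d2 = (x[i][near] - x[j][near]) ** 2
s2 = 0.5 * (y[i][near] - y[j][near]) ** 2    # E[s2] = sigma^2 + smooth term

A = np.column_stack([np.ones(d2.size), d2])
intercept, slope = np.linalg.lstsq(A, s2, rcond=None)[0]
print(intercept, sigma ** 2)                 # intercept is close to 0.25
```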
39

Kang, Sungjun. "Forecasting inflation with probit and regression models." University of Missouri, 1999. http://wwwlib.umi.com/cr/mo/fullcit?p9946268.

40

Venter, Daniel Jacobus Lodewyk. "The consolidation of forecasts with regression models." Thesis, Nelson Mandela Metropolitan University, 2014. http://hdl.handle.net/10948/d1020964.

Abstract:
The primary objective of this study was to develop a dashboard for the consolidation of multiple forecasts utilising a range of multiple linear regression models. The term dashboard describes, in a single word, the characteristics of the forecast-consolidation application that was developed to provide the required functionalities via a graphical user interface structured as a series of interlinked screens. Microsoft Excel© was used as the platform to develop the dashboard, named ConFoRM (an acronym for Consolidate Forecasts with Regression Models). The major steps of the consolidation process incorporated in ConFoRM are:
1. Input historical data.
2. Select appropriate analysis and holdout samples.
3. Specify regression models to be considered as candidates for the final model to be used for the consolidation of forecasts.
4. Perform regression analysis and holdout analysis for each of the models specified in step 3.
5. Perform post-holdout testing to assess the performance of the model with the best holdout validation results on out-of-sample data.
6. Consolidate forecasts.
Two data transformations are available: the removal of growth and time-period effects from the time series, and a translation of the time series by subtracting, from the variable being predicted and its related forecasts, the mean of all the forecasts for each data record i. The pre-defined ordinary least squares linear regression models (LRMs) available are:
a. a set of k simple LRMs, one for each of the k forecasts;
b. a multiple LRM that includes all the forecasts;
c. a multiple LRM that includes all the forecasts and as many of the first-order interactions between the input forecasts as allowed by the sample size and the maximum number of predictors provided by the dashboard, with the interactions included in the model being those with the highest individual correlation with the variable being predicted;
d. a multiple LRM that includes as many of the forecasts and first-order interactions between the input forecasts as allowed by the sample size and the maximum number of predictors provided by the dashboard, with the forecasts and interactions included in the model being those with the highest individual correlation with the variable being predicted;
e. a simple LRM with the predictor variable being the mean of the forecasts;
f. a set of simple LRMs with the predictor variable in each case being the weighted mean of the forecasts, with different formulas for the weights.
Also available is an ad hoc user-specified model in terms of the forecasts and the predictor variables generated by the dashboard for the pre-defined models. Provision is made in the regression analysis for both forward-entry and backward-removal regression. Weighted least squares (WLS) regression can optionally be performed based on the age of the forecasts, with smaller weights for older forecasts.
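The core consolidation step (model b above: a multiple LRM over all input forecasts, fitted on an analysis sample and validated on a holdout) might look as follows. The data and forecaster error levels are hypothetical assumptions; this is not ConFoRM's own code.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 120, 3
actual = rng.normal(10, 2, n)
F = actual[:, None] + rng.normal(0.0, [1.0, 1.5, 2.0], (n, k))  # 3 forecasters

X = np.column_stack([np.ones(n), F])
train, hold = slice(0, 90), slice(90, None)            # analysis / holdout
w = np.linalg.lstsq(X[train], actual[train], rcond=None)[0]

rmse = lambda f: np.sqrt(np.mean((f - actual[hold]) ** 2))
print(rmse(X[hold] @ w), [rmse(F[hold, j]) for j in range(k)])
# the consolidated forecast typically beats each input forecast on the holdout
```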
41

Webster, Gregg. "Bayesian logistic regression models for credit scoring." Thesis, Rhodes University, 2011. http://hdl.handle.net/10962/d1005538.

Abstract:
The Bayesian approach to logistic regression modelling for credit scoring is useful when there are data quantity issues. Data quantity issues might occur when a bank is opening in a new location or there is a change in the scoring procedure. Making use of prior information (available from the coefficients estimated on other data sets, or expert knowledge about the coefficients), a Bayesian approach is proposed to improve the credit scoring models. To achieve this, a data set is split into two sets, "old" data and "new" data. Priors are obtained from a model fitted on the "old" data. This model is assumed to be a scoring model used by a financial institution in its current location. The financial institution is then assumed to expand into a new economic location where there is limited data. The priors from the model on the "old" data are then combined in a Bayesian model with the "new" data to obtain a model which represents all the available information. The predictive performance of this Bayesian model is compared to a model which does not make use of any prior information. It is found that the use of relevant prior information improves the predictive performance when the size of the "new" data is small. As the size of the "new" data increases, the importance of including prior information decreases.
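A hedged sketch of the idea: a MAP (posterior mode) estimate for logistic regression on a small "new" dataset, with independent normal priors centred on coefficients taken from the "old" model. All values are illustrative assumptions, not the thesis's data or priors.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(8)
beta_old = np.array([-1.0, 0.8])    # coefficients from the "old" data model
prior_sd = 0.5                      # how much the old coefficients are trusted

X = np.column_stack([np.ones(30), rng.normal(size=30)])   # small "new" sample
y = rng.random(30) < expit(X @ np.array([-0.9, 1.0]))

def neg_log_posterior(b):
    eta = X @ b
    negll = np.sum(np.logaddexp(0.0, eta) - y * eta)
    penalty = 0.5 * np.sum(((b - beta_old) / prior_sd) ** 2)  # normal prior
    return negll + penalty

print(minimize(neg_log_posterior, beta_old).x)  # shrunk toward beta_old
```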
APA, Harvard, Vancouver, ISO, and other styles
42

Gandy, Axel. "Directed model checks for regression models from survival analysis." Berlin Logos-Ver, 2005. http://deposit.ddb.de/cgi-bin/dokserv?id=2766731&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Gandy, Axel. "Directed model checks for regression models from survival analysis /." Berlin : Logos-Ver, 2006. http://deposit.ddb.de/cgi-bin/dokserv?id=2766731&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Yin, Jiang Ling. "Financial time series analysis." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2492929.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

O'LEARY, CHRISTOPHER JOSEPH. "AN ECONOMETRIC ANALYSIS OF UNEMPLOYMENT INSURANCE BENEFIT ADEQUACY (RATIONING CONSTRAINTS, TOBIT MODELS)." Diss., The University of Arizona, 1986. http://hdl.handle.net/10150/183901.

Full text
Abstract:
Explicit parameterizations of labor supply are specified and estimated on a sample of single unattached individuals using data from the Panel Study of Income Dynamics and a generalized Tobit maximum likelihood method which is consistent under the assumption that employed hours are exogenous. Results of these estimations are then used to compute triangle approximation and direct closed form solutions for labor market constraint compensation. Underemployment compensation estimates are generated and compared to actual and hypothetical payments which would accrue under the UI systems of representative states. Certain compensation results for overemployment are also offered. Where they are directly comparable, results from Tobit estimation of the basic labor supply relations are found to strictly dominate ordinary least squares (OLS) results in terms of efficiency. While the OLS and Tobit parameter estimates differ dramatically in most cases, the latter are consistent with the bulk of recent empirical labor supply research. A corollary purpose of estimating the several labor supply specifications is the search for an appropriate structure of preferences to be used in modeling the labor-leisure choice problem. Direct likelihood ratio tests yielded no best form, but suggested that more flexible parameterizations are to be desired. Results on compensation amounts tend to support accepted standards of UI benefit adequacy. For all levels of unemployment the direct compensation results suggested that "one-half gross wage replacement" would slightly overcompensate individuals from a utility based perspective.
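For readers unfamiliar with Tobit estimation, the sketch below spells out the censoring logic of a basic Tobit log-likelihood on simulated data; it is a simplified stand-in with a zero censoring point, not the generalized Tobit specification estimated in the dissertation.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Simulated labor-supply-style data: hours are censored at zero.
rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
latent = 0.5 + 1.2 * x + rng.normal(scale=1.0, size=n)
y = np.maximum(latent, 0.0)                 # observed hours, censored at zero

def neg_loglik(theta):
    b0, b1, log_sigma = theta
    sigma = np.exp(log_sigma)               # keeps sigma positive
    mu = b0 + b1 * x
    cens = y <= 0.0
    # Censored records contribute P(y* <= 0); uncensored records
    # contribute the normal density of the residual.
    ll = np.where(cens,
                  norm.logcdf(-mu / sigma),
                  norm.logpdf(y, mu, sigma))
    return -ll.sum()

fit = minimize(neg_loglik, x0=np.array([0.0, 1.0, 0.0]), method='BFGS')
print(fit.x)   # estimates of (b0, b1, log sigma)
```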
APA, Harvard, Vancouver, ISO, and other styles
46

Eadie, Edward Norman. "Small resource stock share price behaviour and prediction." Title page, contents and abstract only, 2002. http://web4.library.adelaide.edu.au/theses/09CM/09cme11.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Chen, Donghui 1970. "Median-unbiased estimation in linear autoregressive time series models." Monash University, Dept. of Econometrics and Business Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/9044.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Silvestrini, Andrea. "Essays on aggregation and cointegration of econometric models." Doctoral thesis, Université Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210304.

Full text
Abstract:
This dissertation can be broadly divided into two independent parts. The first three chapters analyse issues related to temporal and contemporaneous aggregation of econometric models. The fourth chapter contains an application of Bayesian techniques to investigate whether the post-transition fiscal policy of Poland is sustainable in the long run and consistent with an intertemporal budget constraint.

Chapter 1 surveys the econometric methodology of temporal aggregation for a wide range of univariate and multivariate time series models.

A unified overview of temporal aggregation techniques for this broad class of processes is presented in the first part of the chapter and the main results are summarized. In each case, assuming the underlying process at the disaggregate frequency is known, the aim is to find the appropriate model for the aggregated data. Additional topics concerning temporal aggregation of ARIMA-GARCH models (see Drost and Nijman, 1993) are discussed and several examples presented. Systematic sampling schemes are also reviewed.

Multivariate models, which show interesting features under temporal aggregation (Breitung and Swanson, 2002, Marcellino, 1999, Hafner, 2008), are examined in the second part of the chapter. In particular, the focus is on temporal aggregation of VARMA models and on the related concept of spurious instantaneous causality, which is not a time series property invariant to temporal aggregation. On the other hand, as pointed out by Marcellino (1999), other important time series features such as cointegration and the presence of unit roots are invariant to temporal aggregation and are not induced by it.

Some empirical applications based on macroeconomic and financial data illustrate all the techniques surveyed and the main results.
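As an example of the kind of classical result such a survey covers: systematically sampling every k-th observation of an AR(1) process yields another AR(1) whose coefficient is the original coefficient raised to the power k. A quick simulation check, with all parameter values illustrative:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Skip-sampling an AR(1): y_t = phi * y_{t-1} + e_t observed every k-th
# period is again an AR(1), with coefficient phi ** k.
rng = np.random.default_rng(7)
phi, k, T = 0.9, 3, 30000
e = rng.normal(size=T)
y = np.empty(T)
y[0] = e[0]
for t in range(1, T):
    y[t] = phi * y[t - 1] + e[t]

sampled = y[::k]                       # systematic sampling scheme
fit = AutoReg(sampled, lags=1).fit()
print(fit.params[1], phi ** k)         # estimated AR coefficient vs 0.9**3
```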

Chapter 2 is an attempt to monitor fiscal variables in the Euro area, building an early warning signal indicator for assessing the development of public finances in the short-run and exploiting the existence of monthly budgetary statistics from France, taken as "example country".

The application is conducted focusing on the cash State deficit, looking at components from the revenue and expenditure sides. For each component, monthly ARIMA models are estimated and then temporally aggregated to the annual frequency, as the policy makers are interested in yearly predictions.

The short-run forecasting exercises carried out for years 2002, 2003 and 2004 highlight the fact that the one-step-ahead predictions based on the temporally aggregated models generally outperform those delivered by standard monthly ARIMA modeling, as well as the official forecasts made available by the French government, for each of the eleven components and thus for the whole State deficit. More importantly, by the middle of the year, very accurate predictions for the current year are made available.

The proposed method could be extremely useful, providing policy makers with a valuable indicator when assessing the development of public finances in the short-run (one year horizon or even less).
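The chapter aggregates the monthly ARIMA models themselves to the annual frequency; as a loose, simplified sketch of why monthly information helps, the snippet below combines observed year-to-date months with monthly ARIMA forecasts into a current-year prediction. The toy series, its assumed June endpoint and the ARIMA order are all illustrative.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Toy monthly series for one deficit component, assumed to end in June
# of the current year.
rng = np.random.default_rng(3)
monthly = 100 + np.cumsum(rng.normal(0, 2, 120))

fit = ARIMA(monthly, order=(1, 1, 1)).fit()
jul_dec = fit.forecast(steps=6)                # remaining months of the year
annual_prediction = monthly[-6:].sum() + jul_dec.sum()
print(annual_prediction)                       # mid-year current-year estimate
```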

Chapter 3 deals with the issue of forecasting contemporaneous time series aggregates. The performance of "aggregate" and "disaggregate" predictors in forecasting contemporaneously aggregated vector ARMA (VARMA) processes is compared. An aggregate predictor is built by forecasting directly the aggregate process, as it results from contemporaneous aggregation of the data generating vector process. A disaggregate predictor is a predictor obtained from aggregation of univariate forecasts for the individual components of the data generating vector process.

The econometric framework is broadly based on Lütkepohl (1987). The necessary and sufficient condition for the equality of mean squared errors associated with the two competing methods in the bivariate VMA(1) case is provided. It is argued that the condition of equality of predictors as stated in Lütkepohl (1987), although necessary and sufficient for the equality of the predictors, is sufficient (but not necessary) for the equality of mean squared errors.

Furthermore, it is shown that the same forecasting accuracy for the two predictors can be achieved using specific assumptions on the parameters of the VMA(1) structure.

Finally, an empirical application that involves the problem of forecasting the Italian monetary aggregate M1 on the basis of annual time series ranging from 1948 until 1998, prior to the creation of the European Economic and Monetary Union (EMU), is presented to show the relevance of the topic. In the empirical application, the framework is further generalized to deal with heteroskedastic and cross-correlated innovations.
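The contrast between the two predictors can be sketched as follows, with simple AR approximations standing in for the exact models implied by aggregation; the simulated VMA(1) parameters and lag choices are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Simulate a bivariate VMA(1): y_t = e_t + Theta e_{t-1}
rng = np.random.default_rng(4)
Theta = np.array([[0.6, 0.2], [0.1, 0.4]])
e = rng.normal(size=(401, 2))
y = e[1:] + e[:-1] @ Theta.T
z = y.sum(axis=1)                              # contemporaneous aggregate

# Aggregate predictor: model and forecast the aggregate series directly
agg_next = AutoReg(z[:-1], lags=2).fit().forecast(1)[0]

# Disaggregate predictor: sum of univariate forecasts of the components
disagg_next = sum(AutoReg(y[:-1, i], lags=2).fit().forecast(1)[0]
                  for i in range(2))

print(agg_next, disagg_next, z[-1])            # compare with the outcome
```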

Chapter 4 deals with a cointegration analysis applied to the empirical investigation of fiscal sustainability. The focus is on a particular country: Poland. The choice of Poland is not random. First, the motivation stems from the fact that fiscal sustainability is a central topic for most of the economies of Eastern Europe. Second, this is one of the first countries to start the transition process to a market economy (since 1989), providing a relatively favorable institutional setting within which to study fiscal sustainability (see Green, Holmes and Kowalski, 2001). The emphasis is on the feasibility of a permanent deficit in the long-run, meaning whether a government can continue to operate under its current fiscal policy indefinitely.

The empirical analysis to examine debt stabilization is made up of two steps.

First, a Bayesian methodology is applied to conduct inference about the cointegrating relationship between budget revenues and expenditures (inclusive of interest) and to select the cointegrating rank. This task is complicated by the conceptual difficulty linked to the choice of the prior distributions for the parameters relevant to the economic problem under study (Villani, 2005).

Second, Bayesian inference is applied to the estimation of the normalized cointegrating vector between budget revenues and expenditures. With a single cointegrating equation, some known results concerning the posterior density of the cointegrating vector may be used (see Bauwens, Lubrano and Richard, 1999).

The priors used in the paper lead to straightforward posterior calculations which can be easily performed.

Moreover, the posterior analysis leads to a careful assessment of the magnitude of the cointegrating vector. Finally, it is shown to what extent the likelihood of the data is important in revising the available prior information, relying on numerical integration techniques based on deterministic methods.
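A minimal frequentist stand-in for the sustainability logic (the chapter itself selects the cointegrating rank and estimates the vector by Bayesian methods): an Engle-Granger style test of whether revenues and expenditures share a common stochastic trend, on simulated data.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

# Simulated fiscal series driven by one common stochastic trend, so that
# revenues and expenditures are cointegrated by construction.
rng = np.random.default_rng(5)
trend = np.cumsum(rng.normal(size=200))
revenues = trend + rng.normal(scale=0.5, size=200)
expenditures = 0.2 + trend + rng.normal(scale=0.5, size=200)

t_stat, p_value, _ = coint(revenues, expenditures)
print(p_value)   # small p-value: cointegration, consistent with sustainability
```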



APA, Harvard, Vancouver, ISO, and other styles
49

Venditti, Fabrizio. "Essays on models with time-varying parameters for forecasting and policy analysis." Thesis, Queen Mary, University of London, 2017. http://qmro.qmul.ac.uk/xmlui/handle/123456789/24868.

Full text
Abstract:
The aim of this thesis is the development and the application of econometric models with time-varying parameters in a policy environment. The popularity of these methods has run in parallel with advances in computing power, which has made feasible estimation methods that until the late '90s would have been infeasible. Bayesian methods, in particular, benefitted from these technological advances, as sampling from complicated posterior distributions of the model parameters became less and less time-consuming. Building on the seminal work by Carter and Kohn (1994) and Jacquier, Polson, and Rossi (1994), Bayesian algorithms for estimating Vector Autoregressions (VARs) with drifting coefficients and volatility were independently derived by Cogley and Sargent (2005) and Primiceri (2005). Despite their increased popularity, Bayesian methods still suffer from some limitations, from both a theoretical and a practical viewpoint. First, they typically assume that parameters evolve as independent driftless random walks. It is therefore unclear whether the output that one obtains from these estimators is accurate when the model parameters are generated by a different stochastic process. Second, some computational limitations remain, as only a limited number of time series can be jointly modeled in this environment. These shortcomings have prompted a new line of research that uses non-parametric methods to estimate random time-varying coefficient models. Giraitis, Kapetanios, and Yates (2014) develop kernel estimators for autoregressive models with random time-varying coefficients and derive the conditions under which such estimators consistently recover the true path of the model coefficients. The method has been suitably adapted by Giraitis, Kapetanios, and Yates (2012) to a multivariate context. In this thesis I make use of both Bayesian and non-parametric methods, adapting them (and in some cases extending them) to answer some of the research questions that, as a Central Bank economist, I have been tackling in the past five years. The variety of empirical exercises proposed throughout the work testifies to the wide range of applicability of these models, be it in the area of macroeconomic forecasting (both at short and long horizons) or in the investigation of structural change in the relationship among macroeconomic variables.

The first chapter develops a mixed frequency dynamic factor model in which the disturbances of both the latent common factor and of the idiosyncratic components have time-varying stochastic volatility. The model is used to investigate business cycle dynamics in the euro area, and to perform point and density forecasts. The main result is that introducing stochastic volatility in the model contributes to an improvement in both point and density forecast accuracy.

Chapter 2 introduces a nonparametric estimation method for a large Vector Autoregression (VAR) with time-varying parameters. The estimators and their asymptotic distributions are available in closed form. This makes the method computationally efficient and capable of handling information sets as large as those typically handled by factor models and Factor Augmented VARs (FAVAR). When applied to the problem of forecasting key macroeconomic variables, the method outperforms constant parameter benchmarks and large Bayesian VARs with time-varying parameters. The tool is also used for structural analysis to study the time-varying effects of oil price innovations on sectoral U.S. industrial output.

Chapter 3 uses a Bayesian VAR to provide novel evidence on changes in the relationship between the real price of oil and real exports in the euro area. By combining robust predictions on the sign of the impulse responses obtained from a theoretical model with restrictions on the slope of the oil demand and oil supply curves, oil supply and foreign productivity shocks are identified. The main finding is that from the 1980s onwards the relationship between oil prices and euro area exports has become less negative conditional on oil supply shortfalls and more positive conditional on foreign productivity shocks. A general equilibrium model is used to shed some light on the plausible reasons for these changes.

Chapter 4 investigates the failure of conventional constant parameter models in anticipating the sharp fall in inflation in the euro area in 2013-2014. This forecasting failure can be partly attributed to a break in the elasticity of inflation to the output gap. Using structural break tests and non-parametric time-varying parameter models, this study shows that this elasticity has indeed increased substantially after 2013. Two structural interpretations of this finding are offered. The first is that the increase in the cyclicality of inflation has stemmed from lower nominal rigidities or weaker strategic complementarity in price setting. A second possibility is that real-time output gap estimates are understating the amount of spare capacity in the economy. I estimate that, in order to reconcile the observed fall in inflation with the historical correlation between consumer prices and the business cycle, the output gap should be wider by around one third.
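In the spirit of the nonparametric kernel estimators referred to above, the sketch below recovers a drifting coefficient by Gaussian-kernel-weighted least squares; the simulated data, the bandwidth choice and the single-regressor setup are illustrative assumptions, not the thesis's exact specification.

```python
import numpy as np

# Kernel-weighted least squares for a random-coefficient regression
# y_t = beta_t * x_t + u_t, where beta_t drifts like a random walk.
rng = np.random.default_rng(6)
T = 400
beta_path = np.cumsum(rng.normal(scale=0.05, size=T))   # true drifting path
x = rng.normal(size=T)
y = beta_path * x + rng.normal(scale=0.5, size=T)

H = T ** 0.5                     # bandwidth; rate choices matter in theory
t_grid = np.arange(T)
beta_hat = np.empty(T)
for t in range(T):
    # Gaussian kernel weights that concentrate on observations near t
    w = np.exp(-0.5 * ((t_grid - t) / H) ** 2)
    beta_hat[t] = np.sum(w * x * y) / np.sum(w * x * x)

print(np.corrcoef(beta_hat, beta_path)[0, 1])   # estimate tracks the true path
```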
APA, Harvard, Vancouver, ISO, and other styles
50

Leigh, Lamin. "Financial development, economic growth and the effect of financial innovation on the demand for money in an open economy : an econometric analysis for Singapore." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.282018.

Full text
APA, Harvard, Vancouver, ISO, and other styles