To see the other types of publications on this topic, follow the link: Statistics|Economics.

Dissertations / Theses on the topic 'Statistics|Economics'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Statistics|Economics.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Van Tassel, Peter. "Essays on financial economics." Thesis, Princeton University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3729751.

Full text
Abstract:
Asset prices aggregate information and reflect market expectations about real outcomes. In this dissertation, I examine the informational content of prices and investigate the implications for forecasting returns, volatility, and the successful completion of corporate events, with applications to popular hedge fund trading strategies.

The first chapter introduces a structural model for stock and option pricing in mergers and acquisitions. I show theoretically and empirically that option prices contain significant information for forecasting deal outcomes. Additionally, I employ my model to study the risks and returns of merger arbitrage strategies. Consistent with the data, my model predicts that merger arbitrage exhibits low volatility and large Sharpe ratios when deals are likely to succeed. To implement this observation, I construct the returns from a buy-and-hold strategy that overweights deals with a high implied probability of success. The high-probability strategy nearly doubles the monthly Sharpe ratio of an equal-weighted strategy that invests in all of the active deals in the economy.

The second chapter, which incorporates material from a joint paper with Yacine Aït-Sahalia and Jiangmin Xu, examines the relationship between high-frequency machine-readable news and asset prices. Within the trading day, I show that positive news sentiment forecasts high returns and low volatility, and that large quantities of news forecast high volatility and high volumes. In an application of these observations, I use intraday news sentiment to improve the performance of contrarian trading strategies. Additionally, I demonstrate that intraday patterns in the arrival of news are contemporaneous with patterns in realized volatility and volume, and I document examples of large price movements that lead and lag the news.

The third chapter concludes by proposing a new test of dynamic asset pricing models whose expected returns satisfy a conditional beta relationship. The test applies recent developments from the financial econometrics literature to estimate time-varying betas with high-frequency data, thereby providing a nonparametric alternative to traditional asset pricing tests. Empirically, I find the conditional CAPM is rejected by the data.
APA, Harvard, Vancouver, ISO, and other styles
2

Safonov, Taras Aleksandrovich. "A New Measure of Quality of Life and Its Application in the Regions of the Russian Federation." Thesis, Western Illinois University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1597385.

Full text
Abstract:
The search for a reliable measure of the quality of life (QOL) has long been a heavily discussed issue for researchers and policy analysts. Despite much debate in theoretical and practical circles, there is still no consensus on what exactly constitutes QOL or on a universal formula to quantify the concept. For instance, the United Nations' Human Development Index is arguably one of the best-known indicators of people's welfare that allows one to compare QOL across countries and regional blocs.

However, the practice of regional policy development and administration shows that government authorities at all levels require a comprehensive tool that would allow them to assess QOL in different regions of a country in order to develop improved policies aimed at increasing people's well-being. This is of particular relevance for large federative countries, such as Russia, where official statistics do not employ a spatially comparable aggregate indicator that measures QOL in the regions.

Therefore, this study attempts to address the issue by pursuing three research objectives. First, a spatially comparable, standardized, comprehensive Quality of Life Index (QOLI) is derived for each of the 83 Russian constituent territories based on data provided by the Russian Federal State Statistics Service for the years 2009-2013. The index is calculated across five dimensions (Physical Well-Being, Decent Standard of Living, Social Security & Inequality, Hospitable Environment, and Education), incorporates 16 indicators such as mortality rates, average income, the Gini coefficient, and educational attainment, and has no analogue in Russian official statistics.

The QOLI does not assign weights to its components, letting the data "tell their story": ceteris paribus effects of each of the 16 elements of the QOLI are estimated through a series of techniques, which makes it possible to determine which indicator or group of indicators carries the most weight in determining the aggregate index and which parameters affect it only slightly. For instance, the study finds that health and affordability of living might be the key factors shaping people's well-being, and that people might be more interested in how their income compares to the overall affordability of life in a region than in what their income amounts to in absolute terms. The need for this type of analysis is suggested by practice: policy administrators at various levels usually need to know which factors are most important for reaching higher levels of well-being in order to formulate their policy plans.

The study goes on to elaborate on the derived index methodology, studying spatial and temporal trends in the QOLI that might prevail, for example, in such natural regional groupings as the capital regions or the northern territories. The study also suggests an algorithm for estimating the sustainability of changes in the QOLI, which may prove useful for policy analysts at the feasibility-research stage.
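As an editorial illustration of the kind of unweighted, standardized composite index the abstract describes, here is a minimal sketch in Python. The indicator names, their signs, and the z-score-then-average aggregation rule are illustrative assumptions, not the author's exact methodology.

```python
# Sketch of a standardized composite quality-of-life index across dimensions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
regions = [f"region_{i}" for i in range(83)]
data = pd.DataFrame({
    "mortality_rate": rng.normal(13, 2, 83),          # lower is better
    "average_income": rng.normal(25000, 6000, 83),    # higher is better
    "gini": rng.normal(0.40, 0.04, 83),               # lower is better
    "education_attainment": rng.normal(0.6, 0.1, 83), # higher is better
}, index=regions)

# Each dimension is a list of (indicator, direction) pairs; the direction flips
# "bad" indicators so that larger always means better.
dimensions = {
    "physical_well_being": [("mortality_rate", -1)],
    "standard_of_living": [("average_income", +1)],
    "social_security_inequality": [("gini", -1)],
    "education": [("education_attainment", +1)],
}

def zscore(s):
    return (s - s.mean()) / s.std(ddof=0)

dim_scores = {
    dim: np.mean([sign * zscore(data[col]) for col, sign in cols], axis=0)
    for dim, cols in dimensions.items()
}
qoli = pd.DataFrame(dim_scores, index=regions).mean(axis=1)  # unweighted aggregate
print(qoli.sort_values(ascending=False).head())
```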
APA, Harvard, Vancouver, ISO, and other styles
3

Chu, Chi-Yang. "Applied Nonparametric Density and Regression Estimation with Discrete Data: Plug-In Bandwidth Selection and Non-Geometric Kernel Functions." Thesis, The University of Alabama, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10262364.

Full text
Abstract:
Bandwidth selection plays an important role in kernel density estimation. Least-squares cross-validation and plug-in methods are commonly used as bandwidth selectors in the continuous data setting. The former is a data-driven approach and the latter requires a priori assumptions about the unknown distribution of the data. A benefit of the plug-in method is its relatively quick computation, and hence it is often used for preliminary analysis. However, much less is known about the plug-in method in the discrete data setting, and this motivates us to propose a plug-in bandwidth selector. A related issue is undersmoothing in kernel density estimation. Least-squares cross-validation is a popular bandwidth selector, but in many applied situations it tends to select a relatively small bandwidth, or undersmooths. The literature suggests several methods to solve this problem, but most of them are modifications of extant error criteria for continuous variables. Here we discuss this problem in the discrete data setting and propose non-geometric discrete kernel functions as a possible solution. This issue also occurs in kernel regression estimation. Our proposed bandwidth selector and kernel functions perform well on simulated and real data.
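For readers unfamiliar with discrete-data bandwidth selection, the sketch below illustrates the data-driven benchmark the abstract contrasts with its plug-in proposal: an Aitchison-Aitken discrete kernel with a least-squares cross-validation bandwidth. The data, category count, and search grid are illustrative; the dissertation's plug-in selector and non-geometric kernels are not reproduced.

```python
# Discrete kernel density estimation with LSCV bandwidth choice (a sketch).
import numpy as np

rng = np.random.default_rng(1)
c = 4                                   # number of unordered categories
x_obs = rng.choice(c, size=200, p=[0.5, 0.3, 0.15, 0.05])
support = np.arange(c)

def aa_kernel(x, xi, lam):
    """Aitchison-Aitken kernel: weight 1-lam on a match, lam/(c-1) otherwise."""
    return np.where(x == xi, 1.0 - lam, lam / (c - 1))

def density(points, sample, lam):
    return np.array([aa_kernel(p, sample, lam).mean() for p in points])

def lscv(lam, sample):
    """Least-squares cross-validation criterion for the discrete kernel."""
    n = len(sample)
    p_hat = density(support, sample, lam)
    term1 = np.sum(p_hat ** 2)
    # leave-one-out density estimates evaluated at the observed points
    loo = np.array([
        aa_kernel(sample[i], np.delete(sample, i), lam).mean()
        for i in range(n)
    ])
    return term1 - 2.0 * loo.mean()

grid = np.linspace(0.0, (c - 1) / c, 51)
lam_cv = grid[np.argmin([lscv(l, x_obs) for l in grid])]
print("LSCV bandwidth:", lam_cv)
print("estimated pmf:", density(support, x_obs, lam_cv).round(3))
```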
APA, Harvard, Vancouver, ISO, and other styles
4

Cox, Gregory Fletcher. "Advances in Weak Identification and Robust Inference for Generically Identified Models." Thesis, Yale University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10633240.

Full text
Abstract:
This dissertation establishes tools for valid inference in models that are only generically identified, with a special focus on factor models.

Chapter one considers inference for models under a general form of identification failure by studying microeconometric applications of factor models. Factor models postulate unobserved variables (factors) that explain the covariation between observed variables. For example, school quality can be modeled as a common factor to a variety of school characteristics. Observed variables depend on factors linearly, with coefficients that are called factor loadings. Identification in factor models is determined by a rank condition on the factor loadings. The rank condition guarantees that the observed variables are sufficiently related to the factors that the parameters in the distribution of the factors can be identified. When the rank condition fails, for example when the observed school characteristics are weakly related to school quality, the asymptotic distribution of test statistics is nonstandard, so that chi-squared critical values no longer control size.

Calculating new critical values that do control size requires characterizing the asymptotic distribution of the test statistic along sequences of parameters that converge to points of rank condition failure. This paper presents new theorems for this characterization which overcome two technical difficulties: (1) non-differentiability of the boundary of the identified set and (2) degeneracy in the limit stochastic process for the objective function. These difficulties arise in factor models, as well as in a wider class of generically identified models, which these theorems cover. Non-differentiability of the boundary of the identified set is solved by squeezing the distribution of the estimator between a nonsmooth, fixed boundary and a smooth, drifting boundary. Degeneracy in the limit stochastic process is solved by restandardizing the objective function to a higher order so that the resulting limit satisfies a unique minimum condition. Robust critical values, calculated by taking the supremum over quantiles of the asymptotic distributions of the test statistic, result in a valid robust inference procedure.

Chapter one demonstrates the robust inference procedure in two examples. In the first example, there is only one factor, for which the factor loadings may be zero or close to zero. This simple example highlights the aforementioned theoretical difficulties. For the second example, Cunha, Heckman, and Schennach (2010), as well as other papers in the literature, use a factor model to estimate the production of skills in children as a function of parental investments. Their empirical specification includes two types of skills, cognitive and noncognitive, but only one type of parental investment, out of a concern for identification failure. We formulate and estimate a factor model with two types of parental investment, which may not be identified because of rank condition failure. We find that for one of the four age categories, 6-9 year olds, the factors are close to being unidentified, and therefore standard inference results are misleading. For all other age categories, the distribution of the factors is identified.

Chapter two provides a higher-order stochastic expansion of M- and Z-estimators. Stochastic expansions are useful for a wide variety of problems, including bootstrap refinements, Edgeworth expansions, and identification failure. Without identification, the higher-order terms in the expansion may become relevant for the limit theory. Stochastic expansions above fourth order are rarely used because the expressions in the expansion become intractable. For M- and Z-estimators, a wide class of estimators that maximize an objective function or set an objective function to zero, this paper provides smoothness conditions and a closed-form expression for a stochastic expansion up to an arbitrary order.

Chapter three provides sufficient conditions for a random function to have a unique global minimum almost surely. Many important statistical objects can be defined as the global minimizing set of a function, including identified sets, extremum estimators, and the limit of a sequence of random variables (due to the argmax theorem). Whether this minimum is achieved at a unique point or on a larger set is often practically and/or theoretically relevant. This paper considers a class of functions indexed by a vector of parameters and provides simple transversality-type conditions which are sufficient for the minimizing set to be a unique point for almost every function.
APA, Harvard, Vancouver, ISO, and other styles
5

Hwang, Jungbin. "Fixed smoothing asymptotic theory in over-identified econometric models in the presence of time-series and clustered dependence." Thesis, University of California, San Diego, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10128431.

Full text
Abstract:
In the widely used over-identified econometric model, the two-step Generalized Method of Moments (GMM) estimator and inference, first suggested by Hansen (1982), require estimation of an optimal weighting matrix at the initial stage. For time series data and clustered dependent data, which are our focus here, the optimal weighting matrix is usually referred to as the long run variance (LRV) of the (scaled) sample moment conditions. To maintain generality and avoid misspecification, standard practice nowadays does not model serial dependence and within-cluster dependence parametrically but instead uses heteroscedasticity and autocorrelation robust (HAR) variance estimators. These estimators are nonparametric in nature, with high variation in finite samples, but the conventional increasing-smoothing asymptotics, the so-called small-bandwidth asymptotics, completely ignores the finite sample variation of the estimated GMM weighting matrix. As a consequence, empirical researchers are often in danger of making unreliable inferences and false assessments of the (efficient) two-step GMM methods. Motivated by this issue, my dissertation consists of three papers which explore the efficiency and approximation issues in two-step GMM methods by developing new, more accurate, and easy-to-use approximations to the GMM weighting matrix.

The first chapter, "Simple and Trustworthy Cluster-Robust GMM Inference," explores new asymptotic theory for two-step GMM estimation and inference in the presence of clustered dependence. Clustering is a common phenomenon in many cross-sectional and panel data sets in applied economics, where individuals in the same cluster are interdependent while those from different clusters are more likely to be independent. The core of the new approximation scheme is that we treat the number of clusters G as fixed as the sample size increases. Under the new fixed-G asymptotics, the centered two-step GMM estimator and two continuously-updating estimators have the same asymptotic mixed normal distribution. Also, the t statistic, the J statistic, and the trinity of two-step GMM statistics (QLR, LM and Wald) are all asymptotically pivotal, and each can be modified to have an asymptotic standard F distribution or t distribution. We also suggest a finite sample variance correction to further improve the accuracy of the F or t approximation. Our proposed asymptotic F and t tests are very appealing to practitioners, as the test statistics are simple modifications of the usual test statistics, and the F or t critical values are readily available from standard statistical tables. We also apply our methods to an empirical study of the causal effect of access to domestic and international markets on household consumption in rural China.

The second paper, "Should We Go One Step Further? An Accurate Comparison of One-step and Two-step Procedures in a Generalized Method of Moments Framework" (coauthored with Yixiao Sun), focuses on the GMM procedure in a time-series setting and provides an accurate comparison of one-step and two-step GMM procedures in a fixed-smoothing asymptotics framework. The theory developed in this paper shows that the two-step procedure outperforms the one-step method only when the benefit of using the optimal weighting matrix outweighs the cost of estimating it. We also provide clear guidance on how to choose a more efficient (or powerful) GMM estimator (or test) in practice.

While our fixed-smoothing asymptotic theory accurately describes the sampling distribution of the two-step GMM test statistic, the limiting distribution of conventional GMM statistics is non-standard, and its critical values need to be simulated or approximated by standard distributions in practice. In the last chapter, "Asymptotic F and t Tests in an Efficient GMM Setting" (coauthored with Yixiao Sun), we propose a simple and easy-to-implement modification to the trinity (QLR, LM, and Wald) of two-step GMM statistics and show that the modified test statistics are all asymptotically F distributed under the fixed-smoothing asymptotics. The modification is multiplicative and only involves the J statistic for testing over-identifying restrictions. In fact, what we propose can be regarded as a multiplicative variance correction for two-step GMM statistics that takes into account the additional asymptotic variance term under the fixed-smoothing asymptotics. The results in this paper can be immediately generalized to the GMM setting in the presence of clustered dependence.
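The object at the center of this abstract is the estimated GMM weighting matrix built from cluster-level moments. The sketch below shows a minimal two-step linear GMM with a cluster-robust weighting matrix on simulated data; the fixed-G F and t corrections proposed in the dissertation are not implemented, and all parameter values are illustrative.

```python
# Two-step linear GMM with a cluster-robust weighting matrix (a sketch).
import numpy as np

rng = np.random.default_rng(2)
G, n_g, m = 20, 30, 2                   # clusters, cluster size, instruments
n = G * n_g
cluster = np.repeat(np.arange(G), n_g)
alpha = rng.normal(0, 1, G)[cluster]    # cluster effect inducing within-cluster dependence
z = rng.normal(size=(n, m))             # instruments, independent of alpha
x = z @ np.array([1.0, 0.5]) + alpha + rng.normal(size=n)
y = 0.7 * x + alpha + rng.normal(size=n)
X, Z = x.reshape(-1, 1), z

def gmm_beta(W):
    A = X.T @ Z @ W @ Z.T @ X
    b = X.T @ Z @ W @ Z.T @ y
    return np.linalg.solve(A, b)

# Step 1: (Z'Z/n)^{-1} weighting (the 2SLS weight).
beta1 = gmm_beta(np.linalg.inv(Z.T @ Z / n))

# Cluster-level moment sums based on first-step residuals.
u1 = y - X @ beta1
g_bar = np.vstack([(Z[cluster == g] * u1[cluster == g, None]).sum(axis=0)
                   for g in range(G)]) / n_g
omega = (g_bar - g_bar.mean(axis=0)).T @ (g_bar - g_bar.mean(axis=0)) / G

# Step 2: efficient weighting with the clustered variance estimate.
beta2 = gmm_beta(np.linalg.inv(omega))
print("one-step:", beta1.round(3), " two-step:", beta2.round(3))
```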
APA, Harvard, Vancouver, ISO, and other styles
6

Serra, Gerardo. "From scattered data to ideological education : economics, statistics and the state in Ghana, 1948-1966." Thesis, London School of Economics and Political Science (University of London), 2015. http://etheses.lse.ac.uk/3188/.

Full text
Abstract:
This thesis analyses the contribution of economics and statistics to the transformation of Ghana from colonial dependency to a socialist one-party state. The narrative begins in 1948, extending through the years of decolonization, and ends in 1966, when the first postcolonial government led by Kwame Nkrumah was overthrown by a military coup d'état. Drawing on insights from political economy, the history of economics and the sociology of science, the study is constructed as a series of microhistories of public institutions, social scientists, statistical enquiries and development plans. In the period under consideration economics and statistics underwent a radical transformation in their political use. This transformation is epitomised by the two extremes mentioned in the title: the 'scattered data' of 1950s household budget surveys were an expression of the limited will and capacity of the colonial state to exercise control over different areas of the country. In contrast, the 1960s dream of a monolithic one-party state led the political rulers to use Marxist-Leninist political economy as a cornerstone of the ideological education aimed at creating the ideal citizen of the socialist regime. Based on research in British and Ghanaian archives, the study claims that economists and statisticians provided important cognitive tools to imagine competing alternatives to the postcolonial nation state, finding its most extreme version in the attempt to fashion a new type of economics supporting Nkrumah's dream of a Pan-African political and economic union. At a more general level, the thesis provides a step towards a deeper incorporation of Sub-Saharan Africa in the history of economics and statistics, by depicting it not simply as an importer of ideas and scientific practices, but as a site in which the interaction of local and foreign political and scientific visions turned economics and statistics into powerful tools of social engineering. These tools created new spaces for political support and dissent, and shifted the boundaries between the possible and the utopian.
APA, Harvard, Vancouver, ISO, and other styles
7

Yao, Jiawei. "Factor models: Testing and forecasting." Thesis, Princeton University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3682786.

Full text
Abstract:
This dissertation focuses on two aspects of factor models: testing and forecasting. For testing, we investigate a more general high-dimensional testing problem, with an emphasis on panel data models. Specifically, we propose a novel technique to boost the power of testing a high-dimensional vector against sparse alternatives. Existing tests based on quadratic forms, such as the Wald statistic, often suffer from low power, whereas more powerful tests such as thresholding and extreme-value tests require either stringent conditions or the bootstrap to derive the null distribution, and often suffer from size distortions. Based on a screening technique, we introduce a "power enhancement component," which is zero under the null hypothesis with high probability but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. As a byproduct, the power enhancement component also consistently identifies the elements that violate the null hypothesis.

Next, we consider forecasting a single time series using many predictors when nonlinearity is present. We develop a new methodology called sufficient forecasting by connecting sliced inverse regression with factor models. Sufficient forecasting correctly estimates projections of the underlying factors and provides multiple predictive indices for further investigation. We derive asymptotic results for the estimate of the central space spanned by these projection directions. Our method allows the number of predictors to be larger than the sample size, and therefore extends the applicability of inverse regression. Numerical experiments demonstrate that the proposed method improves upon a linear forecasting model. Our results are further illustrated in an empirical study of macroeconomic variables, where sufficient forecasting is found to deliver additional predictive power over conventional methods.
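A minimal sketch of the screening idea behind a "power enhancement component" may help fix ideas: a quadratic-form statistic is augmented by a term that is zero with high probability under the null but diverges under sparse alternatives. The threshold and scaling below are illustrative choices, not the dissertation's exact construction.

```python
# Screened power enhancement added to a quadratic-form statistic (a sketch).
import numpy as np

rng = np.random.default_rng(3)
p, n = 500, 200
theta = np.zeros(p)
theta[:3] = 0.5                          # sparse alternative: 3 nonzero entries
theta_hat = theta + rng.normal(0, 1 / np.sqrt(n), p)
se = np.full(p, 1 / np.sqrt(n))

# Standard (normalized) quadratic-form statistic.
wald = (np.sum((theta_hat / se) ** 2) - p) / np.sqrt(2 * p)

# Screening step: keep only components exceeding a high threshold,
# chosen so the component is zero with high probability under the null.
threshold = np.sqrt(2 * np.log(p) * np.log(np.log(n)))
keep = np.abs(theta_hat / se) > threshold
power_enhancement = np.sqrt(p) * np.sum((theta_hat[keep] / se[keep]) ** 2)

test_stat = wald + power_enhancement
print(f"wald={wald:.2f}, PE={power_enhancement:.2f}, selected={np.flatnonzero(keep)}")
```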
APA, Harvard, Vancouver, ISO, and other styles
8

Martinez-Cruz, Adan L. "Implications of heterogeneity in discrete choice analysis." Thesis, University of Maryland, College Park, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3587273.

Full text
Abstract:
This dissertation carries out a series of Monte Carlo simulations examining the implications for welfare estimates of three research practices commonly implemented in empirical applications of mixed logit and latent class logit.

Chapter 3 compares welfare measures across conditional logit, mixed logit, and latent class logit. The practice of comparing welfare estimates is widely used in the field. However, this chapter shows that comparisons of welfare estimates seem unable to provide reliable information about the differences in welfare estimates that result from controlling for unobserved heterogeneity. The reason is that estimates from mixed logit and latent class logit are inherently inefficient and inaccurate.

Researchers tend to use their own judgement to select the number of classes in a latent class logit. Chapter 4 studies the reliability of welfare estimates obtained under two scenarios in which an empirical researcher using his or her judgement would arguably choose fewer classes than the true number. Results show that models with fewer classes than the true number tend to yield downward-biased and inaccurate estimates. The latent class logit with the true number of classes always yields unbiased estimates, but their accuracy may be worse than that of models with fewer classes.

Studies implementing discrete choice experiments commonly obtain estimates of preference parameters from latent class logit models. This practice, however, implies a mismatch: discrete choice experiments are designed under the assumption of homogeneity in preferences, while latent class logit searches for heterogeneous preferences. Chapter 5 studies whether welfare estimates are robust to this mismatch and checks whether the number of choice tasks affects the reliability of welfare estimates. The findings show that welfare estimates are unbiased regardless of the number of choice tasks, and their accuracy increases with the number of choice tasks. However, some of the welfare estimates are inefficient to the point that they cannot be statistically distinguished from zero, regardless of the number of choice tasks.

Implications of these findings for the empirical literature are discussed.
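In the spirit of the Monte Carlo exercises described above, the sketch below simulates choices from a two-class (heterogeneous) binary logit and shows how a homogeneous logit blends the class-specific parameters, distorting the implied willingness-to-pay. All parameter values are illustrative and the design is far simpler than the dissertation's experiments.

```python
# Pooled logit fitted to choices generated from two latent classes (a sketch).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n = 5000
price = rng.uniform(1, 5, n)
quality = rng.uniform(0, 1, n)
cls = rng.random(n) < 0.5                       # latent class membership
b_price = np.where(cls, -1.0, -0.2)             # class-specific preferences
b_quality = np.where(cls, 2.0, 0.5)
util = b_price * price + b_quality * quality
choice = (rng.random(n) < 1 / (1 + np.exp(-util))).astype(float)

def negloglik(beta):
    # Homogeneous binary logit log-likelihood, ignoring the latent classes.
    xb = beta[0] * price + beta[1] * quality
    return -np.sum(choice * xb - np.log1p(np.exp(xb)))

fit = minimize(negloglik, x0=np.zeros(2), method="BFGS")
bp, bq = fit.x
print("pooled estimates:", fit.x.round(2), " pooled WTP:", round(-bq / bp, 2))
print("true class WTPs:", 2.0 / 1.0, 0.5 / 0.2)
```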
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Yonghui. "Three essays on large panel data models with cross-sectional dependence." Thesis, Singapore Management University (Singapore), 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3601351.

Full text
Abstract:
My dissertation consists of three essays which contribute new theoretical results to large panel data models with cross-sectional dependence. These essays try to answer, or partially answer, some prominent questions, such as how to detect the presence of cross-sectional dependence and how to capture the latent structure of cross-sectional dependence and estimate parameters efficiently by removing its effects.

Chapter 2 introduces a nonparametric test for cross-sectional contemporaneous dependence in large dimensional panel data models based on the squared distance between the pair-wise joint density and the product of the marginals. The test can be applied either to raw observable data or to residuals from local polynomial time series regressions for each individual, which are used to estimate the joint and marginal probability density functions of the error terms. In either case, we establish the asymptotic normality of our test statistic under the null hypothesis by permitting both the cross-section dimension n and the time series dimension T to pass to infinity simultaneously and relying upon the Hoeffding decomposition of a two-fold U-statistic. We also establish the consistency of our test. A small set of Monte Carlo simulations is conducted to evaluate the finite sample performance of our test and compare it with that of Pesaran (2004) and Chen, Gao, and Li (2009).

Chapter 3 analyzes nonparametric dynamic panel data models with interactive fixed effects, where the predetermined regressors enter the models nonparametrically and the common factors enter the models linearly but with individual-specific factor loadings. We consider the issues of estimation and specification testing when both the cross-sectional dimension N and the time dimension T are large. We propose sieve estimation for the nonparametric function by extending Bai's (2009) principal component analysis (PCA) to our nonparametric framework. Following Moon and Weidner's (2010, 2012) asymptotic expansion of the Gaussian quasi-log-likelihood function, we derive the convergence rate for the sieve estimator and establish its asymptotic normality. The sources of asymptotic bias are discussed and a consistent bias-corrected estimator is provided. We also propose a consistent specification test for the linearity of the nonparametric functional form by comparing the linear and sieve estimators. We establish the asymptotic distributions of the test statistic under both the null hypothesis and a sequence of Pitman local alternatives. To improve the finite sample performance of the test, we also propose a bootstrap procedure to obtain bootstrap p-values and justify its validity. Monte Carlo simulations are conducted to investigate the finite sample performance of our estimator and test. We apply our model to an economic growth data set to study the relationship between capital accumulation and the real GDP growth rate.

Chapter 4 proposes a nonparametric test for common trends in semiparametric panel data models with fixed effects based on a measure of nonparametric goodness-of-fit (R²). We first estimate the model under the null hypothesis of common trends by the method of profile least squares and obtain the augmented residual, which consistently estimates the sum of the fixed effect and the disturbance under the null. Then we run a local linear regression of the augmented residuals on a time trend and calculate the nonparametric R² for each cross-section unit. The proposed test statistic is obtained by averaging all cross-sectional nonparametric R²'s, which is close to 0 under the null and deviates from 0 under the alternative. We show that after appropriate standardization the test statistic is asymptotically normally distributed under both the null hypothesis and a sequence of Pitman local alternatives. We prove test consistency and propose a bootstrap procedure to obtain p-values. Monte Carlo simulations indicate that the test performs well in finite samples. Empirical applications are conducted exploring the commonality of spatial trends in UK climate change data and idiosyncratic trends in OECD real GDP growth data. Both applications reveal the fragility of the widely adopted common trends assumption.
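The abstract benchmarks its density-based test against Pesaran's (2004) CD statistic. A minimal sketch of that benchmark on simulated panel residuals follows (the dissertation's own statistic is not reproduced here):

```python
# Pesaran (2004) CD statistic on a simulated panel with a common shock.
import numpy as np

rng = np.random.default_rng(4)
N, T = 30, 100
common = rng.normal(size=T)                       # common shock -> cross-sectional dependence
e = 0.6 * common[None, :] + rng.normal(size=(N, T))

rho = np.corrcoef(e)                              # N x N pairwise correlations over time
iu = np.triu_indices(N, k=1)
cd = np.sqrt(2 * T / (N * (N - 1))) * rho[iu].sum()
print(f"CD = {cd:.2f}  (approximately N(0,1) under cross-sectional independence)")
```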
APA, Harvard, Vancouver, ISO, and other styles
10

Witte, Hugh Douglas. "Markov chain Monte Carlo and data augmentation methods for continuous-time stochastic volatility models." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/283976.

Full text
Abstract:
In this paper we exploit some recent computational advances in Bayesian inference, coupled with data augmentation methods, to estimate and test continuous-time stochastic volatility models. We augment the observable data with a latent volatility process which governs the evolution of the data's volatility. The level of the latent process is estimated at finer increments than the data are observed in order to derive a consistent estimator of the variance over each time period the data are measured. The latent process follows a law of motion which has either a known transition density or an approximation to the transition density that is an explicit function of the parameters characterizing the stochastic differential equation. We analyze several models which differ with respect to both their drift and diffusion components. Our results suggest that for two size-based portfolios of U.S. common stocks, a model in which the volatility process is characterized by nonstationarity and constant elasticity of instantaneous variance (with respect to the level of the process) greater than 1 best describes the data. We show how to estimate the various models, undertake the model selection exercise, update posterior distributions of parameters and functions of interest in real time, and calculate smoothed estimates of within sample volatility and prediction of out-of-sample returns and volatility. One nice aspect of our approach is that no transformations of the data or the latent processes, such as subtracting out the mean return prior to estimation, or formulating the model in terms of the natural logarithm of volatility, are required.
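A minimal simulation sketch of the augmentation idea described above: the latent volatility is generated on a finer grid than the data are observed, and the integrated variance over each observation interval is accumulated from the fine increments. The drift, diffusion, and elasticity parameters are illustrative, not estimates from the paper, and no Bayesian estimation step is shown.

```python
# Latent volatility on a fine grid, integrated variance per observation period.
import numpy as np

rng = np.random.default_rng(5)
days, m = 250, 20                # observation periods, fine steps per period
dt = 1.0 / m
kappa, theta, sigma_v, gamma = 2.0, 0.02, 0.3, 1.2   # illustrative CEV volatility dynamics

v = np.empty(days * m + 1)
v[0] = theta
for t in range(days * m):
    dv = kappa * (theta - v[t]) * dt + sigma_v * v[t] ** gamma * np.sqrt(dt) * rng.normal()
    v[t + 1] = max(v[t] + dv, 1e-8)   # keep the variance positive

# Integrated variance over each observed period, built from the latent fine grid.
integrated_var = v[1:].reshape(days, m).sum(axis=1) * dt
returns = rng.normal(0.0, np.sqrt(integrated_var))
print(integrated_var[:5].round(5), returns[:5].round(4))
```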
APA, Harvard, Vancouver, ISO, and other styles
11

Yevstihnyeyev, Roman. "Estimation of Asset Volatility and Correlation Over Market Microstructure Noise in High-Frequency Data." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:14398547.

Full text
Abstract:
Accurate measurement of asset return volatility and correlation is an important problem in financial econometrics. The presence of market microstructure noise in high-frequency data complicates such estimations. This study extends a prior application of a model-based volatility estimator with autocorrelated market microstructure noise to estimation of correlation. The model is applied to a high-frequency dataset including a stock and an index, and the results are compared to some existing models. This study supports previous findings that including an autocorrelation factor produces an estimator potentially less vulnerable to market microstructure noise, and finds that the same is true about the extended correlation estimator that is introduced here.
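The sketch below illustrates, on simulated prices, why microstructure noise complicates the estimation problem the abstract addresses: realized variance computed from noisy high-frequency prices is dominated by noise at the finest sampling frequencies, while sparser sampling reduces the bias. The noise level and price process are assumptions for illustration; the model-based estimator studied in the thesis is not reproduced.

```python
# Realized variance under i.i.d. microstructure noise at several sampling gaps.
import numpy as np

rng = np.random.default_rng(7)
n = 23400                                    # one trading day of 1-second observations
sigma = 0.2 / np.sqrt(252)                   # daily volatility of the efficient price
efficient = np.cumsum(rng.normal(0, sigma / np.sqrt(n), n))
noise = rng.normal(0, 5e-4, n)               # i.i.d. microstructure noise
observed = efficient + noise

def realized_variance(prices, every):
    r = np.diff(prices[::every])
    return np.sum(r ** 2)

print("true integrated variance:", round(sigma ** 2, 6))
for step in (1, 5, 60, 300):
    print(f"RV sampled every {step:>3} obs:", round(realized_variance(observed, step), 6))
```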
APA, Harvard, Vancouver, ISO, and other styles
12

Nadel, Sara B. "Essays in Optimizing Social Policy for Different Populations: Education, Targeting, and Impact Evaluation." Thesis, Harvard University, 2016. http://nrs.harvard.edu/urn-3:HUL.InstRepos:33493361.

Full text
Abstract:
In the first chapter of this dissertation, I look at the relationship between preference sets among students in similar majors, compared with different majors, in Peru. I find that students within majors share preference sets that differ from those of students in other majors. I further find that students from households without a formal labor market participant have made decisions that are more consistent with predicted professional opportunities compared with students with a formal labor market participant. These differences are systematic and not related to the general industrialization level of the city where the student lives. This research suggests that the differences between students and workers from households with formal labor-market familiarity and those from households without it are not accidental or due to lack of familiarity. In the second chapter, I evaluate whether proxy-means testing as a method of targeting for Mexico's Conditional Cash Transfer program caused spending distortions among (potential) recipients. The income and wealth effects of participating in Progresa complicate a simple comparison of members of the control and treatment group in the acquisition of assets. To resolve this, I look at reduced asset acquisition just above the cutoff point. Because an imperfect implementation of the eligibility evaluation may have reduced treatment villagers' perceived benefit of distorting, I also look for evidence of increased spending in non-assets and of increasing the number of eligible-aged children in the home to increase the size of the transfer. I do not find evidence of lack of investment in assets along the eligibility cutoff, but I do find evidence of increased spending as a percentage of income on items not included in the PMT, as well as evidence of increases in eligible-aged children among the poorest families in treatment villages. In the final chapter, which is joint with Lant Pritchett, we propose that many development programs, projects and policies are characterized by a high-dimensional design space with a rugged fitness function over that space. In nearly any project/program/policy there are many design elements, each design element has a number of possible choices, and the combination produces a high-dimensional design space. If different program designs produce large changes to outcomes/impact, this implies that the "fitness function" or "response surface," the mapping from program design to outcomes/impact, is rugged. We motivate this investigation using as an example a skill-set signaling program for new entrants to the labor market in Peru. We present a simulation model which compares two alternative learning strategies: "crawling the design space" (CDS) and a standard randomized control trial (RCT) approach. In this artificial world, we demonstrate that with even modest dimensionality of the design space and even modest degrees of ruggedness, the CDS learning strategy substantially outperforms the RCT learning strategy. Moreover, we show that the greater the ruggedness of the fitness function, the higher the variance of the RCT results relative to CDS, and hence the lower the reliability of RCT results even with "external validity" across contexts. We suggest that RCT results to date are consistent with a world in which social programs exist in a high-dimensional design space with rugged fitness functions and hence in which the standard RCT approach has limited direct practical application.
APA, Harvard, Vancouver, ISO, and other styles
13

Gober, Jon M. "A Points Per Game Rating For NFL Quarterbacks." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1242922731.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Shen, Zhangxin. "Modeling Farm-Retail Price Spread in the U.S. Pork Industry." NCSU, 2010. http://www.lib.ncsu.edu/theses/available/etd-04012010-133623/.

Full text
Abstract:
The farm-retail price spread is the difference between the retail price of a product and its farm value. It changes with changes in factor prices, the efficiency of providing services, and the quantity and quality of services embodied in the final product. A model was derived via a Box-Cox transformation based on the relative price spread model for the U.S. pork industry. The new model analyzes the determinants of margins more accurately. The results indicate that the log of the farm-retail price spread is significantly and positively related to increases in the log of the retail price and the log of the quantity of farm input, while the relationship between the price spread and industry costs is indeterminate. The results point to the strong possibility of spurious correlation between the price spread and concentration variables, suggesting that other, possibly unobserved, variables correlated with the trend spuriously indicate that the concentration ratio has a significant effect on the price spread. Another major implication of this study is that variables used in the regression need to be detrended in estimation.
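A minimal sketch of the modelling step described above, under a simple assumed specification: the farm-retail price spread is Box-Cox transformed and regressed on the log of retail price and the log of farm input quantity. The data are simulated and the variable list is illustrative, not the thesis's exact model.

```python
# Box-Cox transformed price spread regressed on log price and log quantity.
import numpy as np
from scipy.stats import boxcox

rng = np.random.default_rng(8)
n = 120
log_retail = rng.normal(1.0, 0.2, n)
log_qty = rng.normal(4.0, 0.3, n)
spread = np.exp(0.5 + 0.8 * log_retail + 0.3 * log_qty + rng.normal(0, 0.05, n))

y, lam = boxcox(spread)                     # maximum-likelihood Box-Cox parameter
X = np.column_stack([np.ones(n), log_retail, log_qty])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Box-Cox lambda = {lam:.2f}, coefficients = {beta.round(3)}")
```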
APA, Harvard, Vancouver, ISO, and other styles
15

Montiel, Olea Jose Luis. "Essays on Econometrics and Decision Theory." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:10940.

Full text
Abstract:
This dissertation presents three essays. The first essay, coauthored with Tomasz Strzalecki, is a classical exercise in axiomatic decision theory. We propose a simple and novel axiomatization of quasi-hyperbolic discounting, a tractable model of present bias preferences that has found many applications in economics. Our axiomatization imposes consistency restrictions directly on the intertemporal tradeoffs faced by the decision maker, without relying on auxiliary calibration devices such as lotteries. Such axiomatization is useful for experimental work since it renders the short-run and long-run discount factor elicitation independent of assumptions on the decision maker's utility function. The second essay, coauthored with Carolin Pflueger, belongs to the field of econometric theory. We develop a test for weak identification in the context of linear instrumental variables regression. The central feature of our test is its robustness to heteroskedasticity, autocorrelation, and clustering. We define identification to be weak when the Two-Stage Least Squares (TSLS) or the Limited Information Maximum Likelihood (LIML) Nagar bias is large relative to a benchmark. To test the null hypothesis of weak identification we propose a scaled non-robust first-stage F statistic: the effective F. The test rejects for large values of the effective F. The critical values depend on an estimate of the covariance matrix of the OLS reduced form regression coefficients and on the covariance matrix of the reduced form errors. The third essay—the main chapter of this dissertation—belongs to the intersection of econometric theory and statistical decision theory. I present a new class of tests for hypothesis testing problems with a special feature: a boundary-sufficient statistic. The new tests minimize a weighted sum of the average rates of Type I and Type II error (average risk), while controlling the conditional rejection probability on the boundary of the null hypothesis; in this sense they are efficient conditionally similar on the boundary (ecs). The ecs tests emerge from an axiomatic approach: they essentially characterize admissibility—an important finite-sample optimality property—and similarity on the boundary in the class of all tests, provided the boundary-sufficient statistic is boundedly complete.
APA, Harvard, Vancouver, ISO, and other styles
16

Ortega, Hesles Maria Elena. "School Choice and Educational Opportunities: The Upper-Secondary Student-Assignment Process in Mexico City." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:16461054.

Full text
Abstract:
Many education systems around the world use a centralized admission process to assign students to schools. By definition, some applicants to oversubscribed schools are not offered admission to their most-preferred school. Thus, one naturally asks whether it makes a difference to applicants' educational opportunities and outcomes which schools they apply to, are offered admission to, and eventually enroll in. Each year in Mexico City, about 300,000 teenagers apply for a seat at one of the nearly 650 public upper-secondary schools. In this centralized, merit-based admission process, applicants are assigned to a school based on their entrance examination score and their ranked list of school choices, subject to school capacity constraints. In this dissertation, I include two papers assessing data from the upper-secondary application cohorts in Mexico City from 2005 to 2009. In the first paper, I find evidence of socio-economic stratification across schools. I also find dissimilarities in the application behavior of individuals according to their socio-economic background, even for those with high achievement levels. Based on qualitative and quantitative data from a small sample of applicants, I suggest that in addition to differences in economic resources, asymmetries in access to information might help to explain disparities in the application behavior of individuals from different socio-economic backgrounds. In the second paper, I capitalize on the natural experiment created at each oversubscribed public upper-secondary school in Mexico City by the imposition of exogenous admission cut-off scores. Using a regression-discontinuity design, I estimate that, on average, upper-secondary applicants who score just above the admission threshold for a more competitive school (i.e., a school with a higher cut-off score and higher average examination scores) have a lower probability of graduating on time and within 5 years than do applicants who scored just below the admission threshold. Given the high take-up rates of the offers of admission, I find that the effects of enrollment in a more competitive school are only slightly larger than the analogous reduced-form estimates. In addition, I show that effects differ across the distribution of admission cut-off scores and for applicants with selected socio-demographic characteristics who scored just above the admission threshold.
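A minimal sketch of the regression-discontinuity comparison used in the second paper: a local linear fit on each side of an admission cut-off within a bandwidth, with the jump at the cut-off as the estimated effect. The outcome, effect size, bandwidth, and uniform kernel are illustrative assumptions, not the dissertation's data or tuning choices.

```python
# Local linear regression-discontinuity estimate at an admission cut-off.
import numpy as np

rng = np.random.default_rng(9)
n = 4000
score = rng.uniform(-50, 50, n)              # exam score centered at the cut-off
treated = score >= 0                         # offered a seat at the competitive school
y = 0.6 + 0.002 * score - 0.08 * treated + rng.normal(0, 0.3, n)  # on-time graduation

def local_linear_at_cutoff(x, y, h):
    """Fit y ~ 1 + x within |x| < h, separately by side; return the jump at 0."""
    est = {}
    for side, mask in (("left", (x < 0) & (x > -h)), ("right", (x >= 0) & (x < h))):
        X = np.column_stack([np.ones(mask.sum()), x[mask]])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        est[side] = beta[0]                  # intercept = limit at the cut-off
    return est["right"] - est["left"]

print("RD estimate:", round(local_linear_at_cutoff(score, y, h=15.0), 3))
```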
APA, Harvard, Vancouver, ISO, and other styles
17

Cuartas, Beatriz. "Essays on Well-being and Quality of Life in Latin America." Thesis, George Mason University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10685733.

Full text
Abstract:
Elevated Latin American well-being rankings are controversial. The dissertation explores the relationship between well-being and other performance measures covering 134 countries. A correlation analysis tests the relationship across country rankings such as the Happy Planet Index, the World Development Indicators, the Global Peace Index, and the Corruption Perception Index. The empirical findings suggest that life satisfaction becomes statistically insignificant for the region when correlated with other measures, including peace-security and corruption. The findings also indicate that increases in per-capita income, war, and corruption tend to have little to no effect on a country's HPI ranking.
APA, Harvard, Vancouver, ISO, and other styles
18

Bailey, James. "Three essays on health insurance regulation and the labor market." Thesis, Temple University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3623108.

Full text
Abstract:
This dissertation continues the tradition of identifying the unintended consequences of the US health insurance system. Its main contribution is to estimate the size of the distortions caused by the employer-based system and by regulations intended to fix it, using methods that are more novel and appropriate than those of previous work.

Chapter 1 examines the effect of state-level health insurance mandates, which are regulations intended to expand access to health insurance. It finds that these regulations have the unintended consequence of increasing insurance premiums, and that they have been responsible for 9–23% of premium increases since 1996. The main contribution of the chapter is that its results are more general than previous work, since it considers many more years of data and studies the employer-based plans that cover most Americans rather than the much less common individual plans.

Whereas Chapter 1 estimates the effect of the average mandate on premiums, Chapter 2 focuses on a specific mandate, one that requires insurers to cover prostate cancer screenings. The focus on a single mandate allows a broader and more careful analysis that demonstrates how health policies spill over to affect the labor market. I find that the mandate has a significant negative effect on the labor market outcomes of the very group it was intended to help. The mandate expands the treatments health insurance covers for men over age 50, but by doing so it makes them more expensive to insure and employ. Employers respond to this added expense by lowering wages and hiring fewer men over age 50. According to the theoretical model put forward in the chapter, this suggests the mandate reduces total welfare.

Chapter 3 shows that the employer-based health insurance system has deterred entrepreneurship. It takes advantage of the natural experiment provided by the Affordable Care Act's dependent coverage mandate, which de-linked insurance from employment for many 19–25 year olds. Difference-in-difference estimates show that the mandate increased self-employment among the treated group by 13–24%. Instrumental variables estimates show that those who actually received parental health insurance as a result of the mandate were drastically more likely to start their own business. This suggests that concerns over health insurance are a major barrier to entrepreneurship in the United States.
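A minimal sketch of the difference-in-differences comparison described in Chapter 3: self-employment for the mandate-eligible age group versus an older group, before and after the dependent-coverage mandate, estimated with the usual interaction regression. All numbers are simulated and illustrative; the instrumental-variables step is not shown.

```python
# Difference-in-differences via the standard interaction regression.
import numpy as np

rng = np.random.default_rng(10)
n = 20000
young = rng.random(n) < 0.5                  # aged 19-25 (treated group)
post = rng.random(n) < 0.5                   # observed after the mandate
base = 0.04 + 0.01 * (~young) + 0.002 * post
p_selfemp = base + 0.008 * (young & post)    # assumed true effect: +0.8 percentage points
selfemp = rng.random(n) < p_selfemp

X = np.column_stack([np.ones(n), young, post, young & post]).astype(float)
beta, *_ = np.linalg.lstsq(X, selfemp.astype(float), rcond=None)
print("DiD estimate of the mandate effect:", round(beta[3], 4))
```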
APA, Harvard, Vancouver, ISO, and other styles
19

Richard, Patrick. "Sieve bootstrap unit root tests." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103285.

Full text
Abstract:
We consider the use of a sieve bootstrap based on moving average (MA) and autoregressive moving average (ARMA) approximations to test the unit root hypothesis when the true Data Generating Process (DGP) is a general linear process. We provide invariance principles for these bootstrap DGPs and we prove that the resulting ADF tests are asymptotically valid. Our simulations indicate that these tests sometimes outperform those based on the usual autoregressive (AR) sieve bootstrap. We study the reasons for the failure of the AR sieve bootstrap tests and propose some solutions, including a modified version of the fast double bootstrap.

We also argue that using biased estimators to build bootstrap DGPs may result in less accurate inference. Some simulations confirm this in the case of ADF tests. We show that one can use the GLS transformation matrix to obtain equations that can be used to estimate bias in general ARMA(p,q) models. We compare the resulting bias-reduced estimator to a widely used bootstrap-based bias-corrected estimator. Our simulations indicate that the former has better finite sample properties than the latter in the case of MA models. Finally, our simulations show that using bias-corrected or bias-reduced estimators to build bootstrap DGPs sometimes provides accuracy gains.
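A minimal sketch of the autoregressive sieve bootstrap for the ADF test, the benchmark against which the thesis's MA and ARMA sieve bootstraps are compared. The AR order, number of replications, and burn-in are illustrative choices, and the MA/ARMA variants and fast double bootstrap are not shown.

```python
# AR sieve bootstrap p-value for the ADF unit root test (a sketch).
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(11)
T = 200
y = np.cumsum(rng.normal(size=T))            # a true unit-root process

stat_obs = adfuller(y, regression="c", autolag="AIC")[0]

# Fit an AR sieve to the first differences (imposing the unit-root null).
p = 4
ar_fit = AutoReg(np.diff(y), lags=p).fit()
resid = ar_fit.resid
phi = ar_fit.params[1:]                       # AR coefficients (constant comes first)

B, count = 199, 0
for _ in range(B):
    e = rng.choice(resid, size=T + 50, replace=True)   # resample residuals
    du = np.zeros(T + 50)
    for t in range(p, T + 50):
        du[t] = ar_fit.params[0] + phi @ du[t - p:t][::-1] + e[t]
    y_b = np.cumsum(du[50:])                  # re-integrate to impose the unit root
    count += adfuller(y_b, regression="c", autolag="AIC")[0] <= stat_obs
print("bootstrap p-value:", round(count / B, 3))
```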
APA, Harvard, Vancouver, ISO, and other styles
20

Wan, Ke. "Estimation of Travel Time Distribution and Travel Time Derivatives." Thesis, Princeton University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3642164.

Full text
Abstract:
Given the complexity of transportation systems, generating optimal routing decisions is a critical issue. This thesis focuses on how routing decisions can be computed by considering the distribution of travel time and associated risks. More specifically, the routing decision process is modeled in a way that explicitly considers the dependence between the travel times of different links and the risks associated with the volatility of travel time. Furthermore, the computation of this volatility allows for the development of the travel time derivative, which is a financial derivative based on travel time. It serves as a value or congestion pricing scheme based not only on the level of congestion but also on its uncertainties. In addition to the introduction (Chapter 1), the literature review (Chapter 2), and the conclusion (Chapter 6), the thesis consists of two major parts:

In part one (Chapters 3 and 4), the travel time distribution for transportation links and paths, conditioned on the latest observations, is estimated to enable routing decisions based on risk. Chapter 3 sets up the basic decision framework by modeling the dependence structure between the travel time distributions of nearby links using the copula method. In Chapter 4, the framework is generalized to estimate the travel time distribution for a given path using Gaussian copula mixture models (GCMM). To explore the data from fundamental traffic conditions, a scenario-based GCMM is studied. A distribution of the path scenario representing path traffic status is first defined; then, the dependence structure between the constituent links of the path is modeled as a Gaussian copula for each path scenario, and the scenario-wise path travel time distribution is obtained based on this copula. The final estimates are calculated by integrating the scenario-wise path travel time distributions over the distribution of the path scenario; in a discrete setting, this is a weighted sum of the conditional travel time distributions. Different estimation methods are employed based on whether or not the path scenarios are observable: an explicit two-step maximum likelihood method is used for the GCMM based on observable path scenarios, while for the GCMM based on unobservable path scenarios, extended Expectation-Maximization algorithms are designed to estimate the model parameters, introducing innovative copula-based machine learning methods.

In part two (Chapter 5), travel time derivatives are introduced as financial derivatives based on road travel times, a non-tradable underlying asset. This is proposed as a more fundamental approach to value pricing. The chapter addresses (a) the motivation for introducing such derivatives (that is, the demand for hedging), (b) the potential market, and (c) the product design and pricing schemes. Pricing schemes are designed based on travel time data captured by real-time sensors, which are modeled as Ornstein-Uhlenbeck processes and, more generally, continuous-time autoregressive moving average (CARMA) models. The risk-neutral pricing principle is used to generate the derivative price, with reasonably designed procedures to identify the market value of risk.
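A minimal sketch of the mean-reverting building block mentioned above: a discretely sampled Ornstein-Uhlenbeck travel-time series is an AR(1), so its parameters can be backed out from a regression of x_{t+1} on x_t. The sampling interval and parameter values are illustrative; the CARMA extension and the risk-neutral pricing step are not shown.

```python
# Simulate an OU travel-time series and recover its parameters via AR(1).
import numpy as np

rng = np.random.default_rng(12)
dt, T = 1.0, 2000                     # 1-minute sampling, number of observations
kappa, mu, sigma = 0.1, 12.0, 0.8     # mean reversion, long-run travel time (min), volatility

x = np.empty(T)
x[0] = mu
for t in range(T - 1):
    x[t + 1] = x[t] + kappa * (mu - x[t]) * dt + sigma * np.sqrt(dt) * rng.normal()

# AR(1) regression x_{t+1} = a + b x_t + e, then map back to OU parameters:
# b = exp(-kappa*dt), mu = a/(1-b), var(e) = sigma^2 (1-b^2)/(2 kappa).
X = np.column_stack([np.ones(T - 1), x[:-1]])
(a, b), *_ = np.linalg.lstsq(X, x[1:], rcond=None)
resid = x[1:] - X @ np.array([a, b])
kappa_hat = -np.log(b) / dt
mu_hat = a / (1 - b)
sigma_hat = resid.std(ddof=2) * np.sqrt(2 * kappa_hat / (1 - b ** 2))
print(f"kappa={kappa_hat:.3f}, mu={mu_hat:.2f}, sigma={sigma_hat:.3f}")
```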
APA, Harvard, Vancouver, ISO, and other styles
21

Zhao, Dan. "Measuring Technical Efficiency of the Japanese Professional Football (Soccer) League (J1 and J2)." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4972.

Full text
Abstract:
This is the first paper to measure the efficiency of Japan Professional Football League clubs in both the first and second divisions. In Chapter 1, a non-parametric method, Data Envelopment Analysis (DEA), is used, and the data cover six seasons from 2005 to 2010. The input variables are payroll, costs other than payroll, and total assets. The output variables are attendance, revenue, and points awarded. I use different output combinations in order to check the sensitivity of the clubs' efficiency relative to the original composition. This is also the first study to include more than one division of the professional football league, and hence the impact of promotion and relegation on efficiency can be analyzed using unique cases such as Tokyo Verdy 1969. Tokyo Verdy 1969 operated inefficiently in the second division because it spent so much on inputs hoping for promotion; it was efficient when in the first division. The results indicate that athletic rank in the league is not correlated with the efficiency scores. The efficient clubs in the second division are all ranked at the bottom of the league, because they have limited resource inputs, no expectation of promotion, and because the expansion policy of the league precludes relegation. Chapter 2 is an extension of Chapter 1. In this chapter I examine exogenous factors that affect the efficiency scores but are not included in the DEA analysis as input variables. I use an ordinary least squares (OLS) model to estimate the relationship between the input-oriented DEA efficiency scores, computed under the constant returns to scale assumption, and the exogenous variables collected from various sources over the sample period. Chapter 3 estimates the productivity and efficiency of the football clubs in the Japan Professional Football League. This chapter is an extension of the first chapter. In this chapter I examine the dynamic change in Total Factor Productivity (TFP) based on the calculation of the Malmquist Index, which consists of efficiency change and technical change between two time periods. The production frontier used in this chapter is built with the non-parametric input-oriented CRS DEA approach applied in the first chapter. Based on the results of the Malmquist Index, we determine whether TFP growth is increasing, declining, or remaining the same.
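A minimal sketch of the input-oriented, constant-returns-to-scale DEA efficiency score used in Chapter 1, solved as a linear program for each club. The three inputs and three outputs follow the abstract, but the data are simulated and the scale is illustrative.

```python
# Input-oriented CRS DEA efficiency score for each decision-making unit (club).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(13)
n = 18                                           # clubs (DMUs)
X = rng.uniform(1, 10, size=(3, n))              # inputs: payroll, other costs, total assets
Y = X.sum(axis=0, keepdims=True) * rng.uniform(0.5, 1.0, size=(3, n))  # outputs: attendance, revenue, points

def dea_efficiency(o):
    """min theta s.t. a convex cone of peers uses <= theta * inputs and >= outputs of DMU o."""
    c = np.r_[1.0, np.zeros(n)]                  # decision vars: [theta, lambda_1..lambda_n]
    A_in = np.hstack([-X[:, [o]], X])            # sum_j lambda_j x_ij <= theta * x_io
    A_out = np.hstack([np.zeros((3, 1)), -Y])    # sum_j lambda_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(3), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

scores = np.array([dea_efficiency(o) for o in range(n)])
print(scores.round(3))
```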
APA, Harvard, Vancouver, ISO, and other styles
22

Donnelly, James P. "NFL Betting Market: Using Adjusted Statistics to Test Market Efficiency and Build a Betting Model." Scholarship @ Claremont, 2013. http://scholarship.claremont.edu/cmc_theses/721.

Full text
Abstract:
The use of statistical analysis has been prevalent in the sports gambling industry for years. More recently, we have seen the emergence of "adjusted statistics", a more sophisticated way to examine each play and each result (further explanation below). And while adjusted statistics have become commonplace for professional and recreational bettors alike, little research has been done to justify their use. In this paper the effectiveness of this data is tested on the most heavily wagered sport in the world – the National Football League (NFL). The results are studied with two central questions in mind: Does the market account for the information provided by adjusted statistics? And, can this data be interpreted to create a profitable betting strategy? First, the Efficient Market Hypothesis is introduced and tested using these new variables. Then, a betting model is built and tested.
APA, Harvard, Vancouver, ISO, and other styles
23

Zajonc, Tristan. "Essays on Causal Inference for Public Policy." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10163.

Full text
Abstract:
Effective policymaking requires understanding the causal effects of competing proposals. Relevant causal quantities include proposals' expected effect on different groups of recipients, the impact of policies over time, the potential trade-offs between competing objectives, and, ultimately, the optimal policy. This dissertation studies causal inference for public policy, with an emphasis on applications in economic development and education. The first chapter introduces Bayesian methods for time-varying treatments that commonly arise in economics, health, and education. I present methods that account for dynamic selection on intermediate outcomes and can estimate the causal effect of arbitrary dynamic treatment regimes, recover the optimal regime, and characterize the set of feasible outcomes under different regimes. I demonstrate these methods through an application to optimal student tracking in ninth and tenth grade mathematics. The proposed estimands characterize outcomes, mobility, equity, and efficiency under different tracking regimes. The second chapter studies regression discontinuity designs with multiple forcing variables. Leading examples include education policies where treatment depends on multiple test scores and spatial treatment discontinuities arising from geographic borders. I give local linear estimators for both the conditional effect along the boundary and the average effect over the boundary. For two-dimensional RD designs, I derive an optimal, data-dependent, bandwidth selection rule for the conditional effect. I demonstrate these methods using a summer school and grade retention example. The third chapter illustrates the central role of persistence in estimating and interpreting value-added models of learning. Using data from Pakistani public and private schools, I apply dynamic panel methods that address three key empirical challenges: imperfect persistence, unobserved student heterogeneity, and measurement error. After correcting for these difficulties, the estimates suggest that only a fifth to a half of learning persists between grades and that private schools increase average achievement by 0.25 standard deviations each year. In contrast, value-added models that assume perfect persistence yield severely downwardly biased and occasionally wrong-signed estimates of the private school effect.
APA, Harvard, Vancouver, ISO, and other styles
24

Xi, Xiaomin. "Challenges in Electric Vehicle Adoption and Vehicle-Grid Integration." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1366106454.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Lu, Zhen Cang. "Price forecasting models in online flower shop implementation." Thesis, University of Macau, 2017. http://umaclib3.umac.mo/record=b3691395.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Fonseca, Camila Veneo Campos 1988. "A influência da adesão aos níveis diferenciados de governança corporativa sobre a estrutura de capital das empresas brasileiras de capital aberto (2000 – 2013)." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/286529.

Full text
Abstract:
Advisors: Rodrigo Lanna Franco da Silveira, Celio Hiratuka. Master's dissertation (Ciências Econômicas), Universidade Estadual de Campinas, Instituto de Economia, 2015.
The reduced access to finance, particularly the long-term one, is one of the main problems of the Brazilian corporate environment. Agency conflicts and the existence of asymmetric information in financial markets result in a higher cost of capital and credit rationing, which are proportional to the degree of investor distrust. The adoption of best practices of corporate governance by enterprises - such as better disclosure and efficient systems of investor protection - results in a reduced cost of capital, expands the role played by the market in raising funds for investment, and mitigates the problem of business financing. The objective of this study is to verify the possible influence of adherence to the differentiated corporate governance levels of the BM&FBOVESPA on the amount and profile of the debt contracted by Brazilian public companies during the 2000-2013 period. The study adopts econometric methods based on panel data models to explore the impact of corporate governance on the capital structure of Brazilian companies. The parameters of the models were estimated using the System Generalized Method of Moments (GMM-Sys). The results of the econometric tests corroborate the hypotheses of the research: corporate governance is relevant in determining the level of indebtedness of Brazilian companies. In addition, companies recognized for adopting best corporate governance practices have changed the profile of their debt, being privileged in long-term fundraising. In conclusion, corporate governance is a key factor in the debate about the determinants of capital structure in Brazil since it modifies not only the amount but also the profile of indebtedness of companies committed to implementing best practices.
APA, Harvard, Vancouver, ISO, and other styles
27

Lazariciu, Irina. "Economic evaluations of Alzheimer's disease medications-review and an application." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=80315.

Full text
Abstract:
In the past decade, the increasing costs incurred as a result of caring for Alzheimer's disease (AD) patients have led to the recognition of AD economic research as an important area of study in most industrialized countries. This thesis contains an overview of the AD economic literature published in recent years, focusing in particular on four cost-effectiveness analyses (CEAs), which employ Markov models to simulate disease progression through different health states. Methodological issues and key study assumptions related to modelling disease progression are identified and critically evaluated. Two of these issues, namely the assumption that transition probabilities are independent of a patient's age and the implications of MMSE (Mini-Mental State Examination) score misclassification, are investigated through re-analysis of MMSE data from two longitudinal cohorts of probable AD patients (N = 106). Simulations are carried out to assess the impact of score misclassification on transition probabilities. Our findings suggest that younger age may be associated with a higher likelihood of progressing into more advanced stages of AD. Additionally, we conclude that, in the presence of score misclassification, the use of the MMSE in the context of CEAs would lead to underestimating disease progression and the time spent in the more severe stages of AD. Recommendations are made for future research.
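The misclassification issue raised here can be illustrated with a small simulation: generate true stage transitions from a Markov chain, perturb the observed stage with classification error, and compare the transition matrices estimated from each. The stages, error rate, and transition probabilities below are hypothetical, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 3-stage progression model (mild -> moderate -> severe); the
# transition probabilities are illustrative only.
P_true = np.array([[0.80, 0.15, 0.05],
                   [0.00, 0.75, 0.25],
                   [0.00, 0.00, 1.00]])

def simulate_paths(P, n_patients=500, n_visits=6):
    states = np.zeros((n_patients, n_visits), dtype=int)
    for t in range(1, n_visits):
        for i in range(n_patients):
            states[i, t] = rng.choice(3, p=P[states[i, t - 1]])
    return states

def estimate_P(states):
    counts = np.zeros((3, 3))
    for i, j in zip(states[:, :-1].ravel(), states[:, 1:].ravel()):
        counts[i, j] += 1
    return counts / counts.sum(axis=1, keepdims=True)

true_paths = simulate_paths(P_true)

# Misclassify the observed stage 15% of the time (shifted to an adjacent stage).
noise = rng.random(true_paths.shape) < 0.15
observed = np.clip(true_paths + rng.choice([-1, 1], true_paths.shape) * noise, 0, 2)

print("Estimated from true stages:\n", estimate_P(true_paths).round(2))
print("Estimated from misclassified stages:\n", estimate_P(observed).round(2))
```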
APA, Harvard, Vancouver, ISO, and other styles
28

Li, Qian. "Studies of choice behaviors in the Medicare market." [Bloomington, Ind.] : Indiana University, 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3386697.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Economics, 2009. Title from PDF t.p. (viewed on Jul 15, 2010). Source: Dissertation Abstracts International, Volume: 70-12, Section: A, page: 4783. Adviser: Pravin K. Trivedi.
APA, Harvard, Vancouver, ISO, and other styles
29

Zhang, Jie. "Co-integration : a review." Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/763.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Figueroa-López, José Enrique. "Nonparametric estimation of Lévy processes with a view towards mathematical finance." Available online, Georgia Institute of Technology, 2003. http://etd.gatech.edu/theses/available/etd-04072004-122020/unrestricted/figueroa-lopez%5Fjose%5Fe%5F200405%5Fphd.pdf.

Full text
Abstract:
Thesis (Ph. D.)--Mathematics, Georgia Institute of Technology, 2004. Marcus C. Spruill, Committee Member; Richard Serfozo, Committee Member; Shijie Deng, Committee Member; Christian Houdre, Committee Chair; Robert P. Kertz, Committee Member. Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
31

Black, Katie Jo. "Is Being Satisfied Making You Wealthy and Wise? A Study of the Effects of Well-Being at the City-Level." Youngstown State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1329242946.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Zhang, Yanwei. "A hierarchical Bayesian approach to model spatially correlated binary data with applications to dental research." Diss., Connect to online resource - MSU authorized users, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
33

Yi, Kang-Oh. "Three essays on quantal response equilibrium model /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9938589.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Ma, Jun. "Essays on applications of majorization : robust inference, market demand elasticity, and constrained optimization." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610836.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Chamberlain, Lauren. "The Power Law Distribution of Agricultural Land Size." DigitalCommons@USU, 2018. https://digitalcommons.usu.edu/etd/7400.

Full text
Abstract:
This paper demonstrates that the distribution of county-level agricultural land size in the United States is best described by a power-law distribution, a distribution that displays extremely heavy tails. This indicates that the majority of farmland exists in the upper tail. Our analysis indicates that the top 5% of agricultural counties accounted for about 25% of agricultural land between 1997 and 2012. The power-law distribution of farm size has important implications for the design of more efficient regional and national agricultural policies, as counties close to the mean account for little of the cumulative distribution of total agricultural land. It also has consequences for more efficient management and government oversight, as a disruption in one of the counties containing a large amount of farmland (due to natural disasters, for instance) could have nationwide consequences for agricultural production and prices. In particular, policymakers and government agencies can monitor about 25% of total agricultural land by overseeing just 5% of counties.
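Two of the quantities discussed in the abstract, the share of farmland held by the top 5% of counties and the heaviness of the tail, can be computed directly once county totals are in hand. The sketch below uses simulated Pareto draws and a Hill estimator of the tail index; the shape parameter and sample size are illustrative, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative Pareto-distributed "county farmland" totals (shape chosen arbitrarily;
# the paper estimates the tail from county-level Census of Agriculture data).
alpha, n = 1.5, 3000
land = (rng.pareto(alpha, n) + 1) * 10_000   # acres

# Share of total farmland held by the top 5% of counties
x_sorted = np.sort(land)
top5 = x_sorted[-int(0.05 * n):]
print("Top-5% share:", top5.sum() / land.sum())

# Hill estimator of the tail index over the largest k observations
k = 300
threshold = x_sorted[-(k + 1)]
hill_alpha = 1 / np.mean(np.log(x_sorted[-k:] / threshold))
print("Hill tail-index estimate:", hill_alpha)
```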
APA, Harvard, Vancouver, ISO, and other styles
36

Wright, Jeffrey. "A Tournament Approach to Price Discovery in the US Cattle Market." DigitalCommons@USU, 2017. https://digitalcommons.usu.edu/etd/6252.

Full text
Abstract:
Cattle price discovery is the process of determining the market price through the interactions of cattle buyers (packers) and sellers (ranchers). Locating the price discovery center, and estimating price interactions among the regional fed cattle markets and among the feeder cattle markets, can help define a relevant fed cattle procurement market. This research finds that price discovery for U.S. cattle occurs in the futures markets, in both feeder cattle futures and fed cattle futures.
APA, Harvard, Vancouver, ISO, and other styles
37

Liu, Xijia. "On Non Parametric Regression and Panel Unit Root Testing." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-222242.

Full text
Abstract:
In this thesis, two different issues in econometrics are studied: the estimation of regression coefficients and non-stationarity analysis in a panel setting. Regarding the first topic, we study a set of measure of location-based estimators (MLBEs) for the slope parameter in a linear regression model with a single stochastic regressor. The median-unbiased MLBEs are interesting as they can be robust to heavy-tailed errors and, hence, preferable to the ordinary least squares estimator (LSE) in such situations. Two cases, symmetric stable regression and contaminated normal regression, are considered as we investigate the statistical properties of the MLBEs. In addition, we illustrate how our results can be extended to include certain heteroscedastic regressions. Three papers concern the second part. In the first paper, we propose a novel way to test for unit roots in the panel setting. The new tests are based on the observation that the trajectory of the cross-sectional sample variance behaves differently for stationary than for non-stationary processes. Three different test statistics are proposed. The limiting distributions are derived and the small-sample properties are studied by simulations. In the remaining papers, we focus on the block bootstrap panel unit root tests proposed by Palm, Smeekes and Urbain (2011), which aim at dealing with a rather general cross-sectional dependency structure. One paper studies the robustness of the PSU tests through a comparison with two representative tests from the second generation of panel unit root tests. In the other paper, we generalize the block bootstrap panel unit root tests by allowing for deterministic terms in the model. Two different methods to deal with the deterministic terms are proposed, and the asymptotic validity of the bootstrap tests under the main null hypothesis is theoretically checked. The small-sample properties are studied by simulations.
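The observation behind the proposed panel unit root tests, that the cross-sectional sample variance stays roughly flat for stationary panels but grows with time under a unit root, is easy to see in simulation. The sketch below only illustrates that behaviour; it does not implement the thesis's test statistics or their limiting distributions:

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 50, 200   # cross-section units and time periods

# Stationary AR(1) panel versus random-walk panel
eps = rng.normal(size=(N, T))
stationary = np.zeros((N, T))
for t in range(1, T):
    stationary[:, t] = 0.5 * stationary[:, t - 1] + eps[:, t]
random_walk = np.cumsum(eps, axis=1)

# Cross-sectional sample variance at each date: roughly constant for the stationary
# panel, growing roughly linearly in t under a unit root.
var_stat = stationary.var(axis=0, ddof=1)
var_rw = random_walk.var(axis=0, ddof=1)
print("stationary panel, variance at t=10 vs t=190:", var_stat[10].round(2), var_stat[190].round(2))
print("random-walk panel, variance at t=10 vs t=190:", var_rw[10].round(2), var_rw[190].round(2))
```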
APA, Harvard, Vancouver, ISO, and other styles
38

Savard, Katherine J. "Disaster Capitalism: Impact on the Great Flood of 1993." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/cmc_theses/1256.

Full text
Abstract:
This thesis attempts to analyze the impact of disaster capitalism on the areas affected by the Great Flood of 1993. Using Naomi Klein's book The Shock Doctrine, I selected three variables that can serve as indicators of disaster capitalism: unemployment rates, new private housing units authorized by permit, and employment in the mining, logging, and construction industry. I use a comparison-of-means test and a difference-in-differences estimate to determine whether these variables changed as a result of the flood. Unemployment rates appear to have been affected by the crisis and strongly support Klein's theories of disaster capitalism.
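The difference-in-differences estimate mentioned in the abstract corresponds to the coefficient on the interaction of a treatment indicator (flooded county) with a post-1993 indicator. A minimal sketch with simulated county data, not the thesis's actual series:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Illustrative county-level panel: flooded vs. unaffected counties, before/after 1993.
# All values are simulated; the coefficients below are not the thesis's estimates.
n = 200
df = pd.DataFrame({
    "flooded": np.repeat([1, 0], n // 2),
    "post": np.tile([0, 1], n // 2),
})
df["unemployment"] = (5.0 + 0.5 * df["post"] + 0.3 * df["flooded"]
                      + 1.2 * df["flooded"] * df["post"] + rng.normal(0, 0.8, n))

# The coefficient on flooded:post is the difference-in-differences estimate.
print(smf.ols("unemployment ~ flooded * post", data=df).fit().summary().tables[1])
```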
APA, Harvard, Vancouver, ISO, and other styles
39

Henderson, Neil James Kerr. "Extending the clinical and economic evaluations of a randomised controlled trial the IONA study /." Connect to e-thesis, 2008. http://theses.gla.ac.uk/418/.

Full text
Abstract:
Thesis (Ph.D.) - University of Glasgow, 2008. Ph.D. thesis submitted to the Department of Statistics, Faculty of Information and Mathematical Sciences, University of Glasgow, 2008. Includes bibliographical references. Print version also available.
APA, Harvard, Vancouver, ISO, and other styles
40

Gehman, Andrew J. "The Effects of Spatial Aggregation on Spatial Time Series Modeling and Forecasting." Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/382669.

Full text
Abstract:
Statistics, Ph.D. Spatio-temporal data analysis involves modeling a variable observed at different locations over time. A key component of space-time modeling is determining the spatial scale of the data. This dissertation addresses the following three questions: 1) How does spatial aggregation impact the properties of the variable and its model? 2) What spatial scale of the data produces more accurate forecasts of the aggregate variable? 3) What properties lead to the smallest information loss due to spatial aggregation? Answers to these questions involve a thorough examination of two common space-time models: the STARMA and GSTARMA models. These results are helpful to researchers seeking to understand the impact of spatial aggregation on temporal and spatial correlation as well as to modelers interested in determining a spatial scale for the data. Two data examples are included to illustrate the findings, and they concern states' annual labor force totals and monthly burglary counts for police districts in the city of Philadelphia.
APA, Harvard, Vancouver, ISO, and other styles
41

Machacny, Michaela, and Ismael Hallbäck. "Students saving : What are the governing factors of students saving?" Thesis, Mälardalens högskola, Akademin för ekonomi, samhälle och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-39783.

Full text
Abstract:
Problem: Few studies of saving among students have been conducted, particularly studies looking at students in Sweden, as the general assumption is that students cannot save.
Purpose: To investigate which factors may affect students' saving behavior.
Method: The thesis takes a quantitative, deductive approach. To investigate which factors affect students' saving, we conducted several hypothesis tests through regression analysis.
Result and Conclusion: Our estimates identified three variables significant at the 5% level - Income, Worked Before, and Saving Before - with an overall model fit of 16.36%. One possible explanation for the poor fit is the complexity of human behavior, which is hard to explain.
Keywords: students, saving, consumption theory, statistics, age, behavioral, finance
APA, Harvard, Vancouver, ISO, and other styles
42

Medvedeff, Alexander Mark. "On the Interpolation of Missing Dependent Variable Observations." University of Akron / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=akron1208185971.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Romanus, Dorothy. "The Value of Targeted Therapies in Lung Cancer." Thesis, Harvard University, 2014. http://nrs.harvard.edu/urn-3:HUL.InstRepos:13070030.

Full text
Abstract:
The goal of this dissertation was to examine the realized value of targeted therapies in routine care and to identify opportunities for improving the return on medical spending for these technologies. Chapter 1 investigated the value of targeted therapies in lung cancer patients who were treated in routine care. This observational, claims-based analysis used propensity score and instrumental variable methods, combined with a Kaplan-Meier Sample Average estimator, to calculate lifetime costs and life expectancy. An incremental comparison showed that the realized value of targeted therapies in routine care was unfavorable relative to chemotherapy treatment. Subgroup analyses revealed that initial erlotinib therapy yielded effectiveness results that are substantially lower than efficacy survival outcomes in molecularly guided trials. Our results indicated that in routine care, chemotherapy was the most cost-effective strategy. The unexpectedly low outcomes with first-line erlotinib suggested that some of the value of this treatment was not being realized in practice. Chapter 2 examined the practice patterns of targeted therapies and utilization of predictive biomarker testing in routine care to better understand the observed gaps between trial-based and 'real-world' outcomes with these agents. In our nationally representative cohort of lung cancer patients, we found that the vast majority of patients did not undergo molecular testing to inform first-line therapy. Our prediction models for biomarker screening and first-line treatment suggested that phenotypic enrichment criteria guided selection for testing and initiation of erlotinib therapy. Since clinical characteristics do not adequately discriminate between mutation-positive and wild-type tumors, these practices signal the need for wider dissemination of biomarker screening to accurately target patients towards improving therapeutic gains with erlotinib. Chapter 3 assessed the cost-effectiveness of multiplexed predictive biomarker screening to inform treatment decisions in lung cancer patients. Using a micro-simulation model to evaluate the incremental value of molecularly guided therapy compared to chemotherapy in unselected patients, we found that personalized therapy is a cost-effective strategy. Our results indicated that better value of targeted therapies in lung cancer is achievable through molecularly guided treatment.
APA, Harvard, Vancouver, ISO, and other styles
44

Kessler, Lawrence. "Bayesian Estimation of Panel Data Fractional Response Models with Endogeneity: An Application to Standardized Test Rates." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4518.

Full text
Abstract:
In this paper I propose Bayesian estimation of a nonlinear panel data model with a fractional dependent variable (bounded between 0 and 1). Specifically, I estimate a panel data fractional probit model which takes into account the bounded nature of the fractional response variable. I outline estimation under the assumption of strict exogeneity as well as when allowing for potential endogeneity. Furthermore, I illustrate how transitioning from the strictly exogenous case to the case of endogeneity only requires slight adjustments. For comparative purposes I also estimate linear specifications of these models and show how quantities of interest such as marginal effects can be calculated and compared across models. Using data from the state of Florida, I examine the relationship between school spending and student achievement, and find that increased spending has a positive and statistically significant effect on student achievement. Furthermore, this effect is roughly 50% larger in the model which allows for endogenous spending. Specifically, a $1,000 increase in per-pupil spending is associated with an increase in standardized test pass rates ranging from 6.2-10.1%.
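A simple cross-sectional analogue of the model described here is the fractional probit estimated by Bernoulli quasi-maximum likelihood (in the spirit of Papke and Wooldridge), with the average partial effect recovered from the fitted index. The sketch below is that simplified building block only, with simulated data; it omits the panel structure, the Bayesian estimation, and the endogeneity correction developed in the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)

# Illustrative cross-section: a pass rate in (0, 1) driven by per-pupil spending.
# Data and coefficients are simulated, not the Florida results in the paper.
n = 1000
spending = rng.normal(7.0, 1.0, n)           # thousands of dollars per pupil
X = np.column_stack([np.ones(n), spending])
y = norm.cdf(-2.0 + 0.3 * spending + rng.normal(0, 0.5, n))   # fractional outcome

def neg_bernoulli_quasi_loglik(beta):
    p = np.clip(norm.cdf(X @ beta), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(neg_bernoulli_quasi_loglik, x0=np.zeros(X.shape[1]), method="BFGS")
beta_hat = res.x

# Average partial effect of spending on the expected pass rate
ape = np.mean(norm.pdf(X @ beta_hat)) * beta_hat[1]
print("coefficients:", beta_hat.round(3), "APE of spending:", ape.round(4))
```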
APA, Harvard, Vancouver, ISO, and other styles
45

Sullivan, Ryan Michael. "Data-snooping in financial markets : a re-examination of empirical evidence on predictability /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1998. http://wwwlib.umi.com/cr/ucsd/fullcit?p9908496.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Dufour, Alfonso. "Essays on the econometrics of inter-trade durations and market liquidity /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9944222.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Lavidas, George. "Wave energy resource modelling and energy pattern identification using a spectral wave model." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/25506.

Full text
Abstract:
The benefits of the oceans and seas have been exploited by societies for many centuries; the marine offshore and naval sectors have been the predominant users of the waters. It was overlooked until recently that significant amounts of energy can be harnessed from waves, providing an additional abundant resource for renewable energy generation. The increasing energy needs of current societies have led to the consideration of waves as an exploitable renewable resource. During the past decades, advancements have been made towards commercialising wave energy converters (WECs), though a significant knowledge gap exists on the accurate estimation of the potential energy that can be harnessed. To enhance our understanding of opportunities within wave energy, highly resolved long-term resource assessments of potential sites are necessary; these allow not only a detailed energy estimation methodology but also information on extreme waves that are expected to affect the survivability and reliability of future wave energy converters. This research work aims to contribute the necessary knowledge for the estimation of wave energy resources in both highly energetic and milder sea environments, exhibiting the opportunities that lie within these environments. A numerical model, SWAN (Simulating WAves Nearshore), based on a spectral wave formulation, was utilised for wave hindcasting, driven by wind data of high temporal and spatial resolution. The capabilities of the model allow a detailed representation of several coastal areas which are not usually accurately resolved by larger ocean models. The outcome of this research provides long-term data and characterisation of the wave environment and its extremes for the Scottish region. Moreover, investigation of the applicability of wave energy in the Mediterranean Sea, an area which was often overlooked, showed that wave energy is more versatile than expected. The outcomes provide robust estimations of extreme wave values for coastal waters, alongside valuable information about the use of numerical modelling and WECs to establish energy production patterns. Several key tuning factors and inputs, such as boundary wind conditions and computational domain parameters, are tested. This was done in a systematic way in order to establish a customised solution and detect parameters that may hinder the process and lead to erroneous results. The uncertainty of power production by WECs is reduced by the introduction of utilisation rates based on the long-term data, which include annual and seasonal variability. This will help minimise assumptions in energy estimates and financial returns in business plans. Finally, the importance of continuous improvements in resource assessment is stressed in order to enhance our understanding of the wave environment.
APA, Harvard, Vancouver, ISO, and other styles
48

Holmes, Beth. "Assessing regional volatility and estimating regional cotton acres in the United States." Thesis, Kansas State University, 2013. http://hdl.handle.net/2097/15534.

Full text
Abstract:
Master of Agribusiness, Department of Agricultural Economics, Vincent Amanor-Boadu. The objective of the research is to understand the volatility of cotton acres and estimate planted acres based on the factors that drive volatility in the United States at a regional level. Estimating cotton acres is important so that demand for cotton seed and technology can be anticipated and the appropriate investments in cotton seed production can be made. Post Multi-Fiber Arrangement, the US cotton economy has entered a state of imperfect competition, which makes cotton price, ending stocks, and the relationship of cotton to other crops important in understanding volatility in cotton acres. Linear Regression, Random Forest, and Partial Least Squares Neural Networks (PLS NN) were used to estimate cotton acres at the US and regional level. The modeling approaches used to estimate change in acres yielded similar performance for the US total, Southwest, and West. The PLS NN was slightly better for the Delta and Southeast, where more crop alternatives exist. Random Forest offered a different perspective on variable importance in all regions.
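Comparing a linear regression with a random forest on the same drivers, as described above, can be set up in a few lines. The features and coefficients below are simulated stand-ins that merely echo the abstract's variables (cotton price, ending stocks, competing-crop prices); they are not the thesis's data or results:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Simulated drivers of the change in planted cotton acres (illustrative only)
n = 400
X = np.column_stack([
    rng.normal(0.70, 0.10, n),   # cotton price
    rng.normal(4.5, 1.0, n),     # ending stocks
    rng.normal(10.0, 2.0, n),    # competing-crop (e.g. soybean) price
])
y = 300 * X[:, 0] - 20 * X[:, 1] - 15 * X[:, 2] + rng.normal(0, 25, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LinearRegression(), RandomForestRegressor(n_estimators=300, random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 1))
```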
APA, Harvard, Vancouver, ISO, and other styles
49

Mo, Lijia. "Examining the reliability of logistic regression estimation software." Diss., Kansas State University, 2010. http://hdl.handle.net/2097/7059.

Full text
Abstract:
Doctor of Philosophy, Department of Agricultural Economics, Allen M. Featherstone, Bryan W. Schurle. The reliability of nine software packages using the maximum likelihood estimator for the logistic regression model was examined using generated benchmark datasets and models. Software packages tested included SAS (Procs Logistic, Catmod, Genmod, Surveylogistic, Glimmix, and Qlim), Limdep (Logit, Blogit), Stata (Logit, GLM, Binreg), Matlab, Shazam, R, Minitab, Eviews, and SPSS, for all available algorithms, none of which have been previously tested. This study expands on the existing literature in this area by examining Minitab 15 and SPSS 17. The findings indicate that Matlab, R, Eviews, Minitab, Limdep (BFGS), and SPSS provided consistently reliable results for both parameter and standard error estimates across the benchmark datasets. While some packages performed admirably, shortcomings did exist. SAS maximum log-likelihood estimators do not always converge to the optimal solution and stop prematurely depending on starting values, issuing a "flat" error message. This drawback can be dealt with by rerunning the maximum log-likelihood estimator from a closer starting point to see if the convergence criteria are actually satisfied. Although Stata-Binreg provides reliable parameter estimates, there is as yet no way to obtain standard error estimates in Stata-Binreg. Limdep performs relatively well, but did not converge due to a weakness of the algorithm. The results show that solely trusting the default settings of statistical software packages may lead to non-optimal, biased, or erroneous results, which may impact the quality of empirical results obtained by applied economists. Reliability tests indicate severe weaknesses in SAS Procs Glimmix and Genmod. Some software packages fail reliability tests under certain conditions. These findings indicate the need to use multiple software packages to solve econometric models.
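One practical version of the check recommended here, rerunning the maximum likelihood estimator from different starting points and verifying that the score (gradient) is essentially zero at the reported optimum, can be scripted. The sketch uses simulated data and a single Python package rather than the nine packages benchmarked in the dissertation:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)

# Simulated benchmark-style data: the "truth" is just the data-generating coefficients.
n, beta_true = 5000, np.array([-1.0, 2.0, -0.5])
X = sm.add_constant(rng.normal(size=(n, 2)))
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

# Re-fit from several starting points and compare both the estimates and the
# gradient norm at the reported optimum.
model = sm.Logit(y, X)
for start in (np.zeros(3), np.full(3, 0.5), beta_true):
    fit = model.fit(start_params=start, disp=False)
    grad_norm = np.abs(model.score(fit.params)).max()
    print(fit.params.round(3), "max |score| =", f"{grad_norm:.2e}")
```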
APA, Harvard, Vancouver, ISO, and other styles
50

Stottlemyre, Sonia M. "The Effect of Country-Level Income on Domestic Terrorism| A Worldwide Analysis of the Difference Between Lone-Wolf and Group Affiliated Domestic Terrorism." Thesis, Georgetown University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1554444.

Full text
Abstract:
Despite a vast literature examining the causes of terrorism, domestic terrorism has only recently begun to be studied as an entity unto itself. It has long been postulated that a country's wealth influences its domestic terrorism rates, but very little research has backed that claim. Preliminary data suggest that there may be important differences between what leads to domestic attacks conducted by terrorist organizations and attacks conducted by people acting alone. The current study hypothesizes that the relationship between a country's wealth, as measured by GDP per capita, and its domestic terrorism rate may be different for lone-wolf terrorism than for group-affiliated terrorism. Results support this hypothesis but not in the expected way: per-capita GDP appears to have a non-linear relationship with lone-wolf terrorism and a linear relationship with group-affiliated terrorism. The data were highly sensitive to changes in model specification, so caution must be taken when drawing conclusions based on these findings. Although these results are preliminary, they should encourage future researchers to examine the differences between lone-wolf and group-affiliated domestic terrorism to best understand and prevent both phenomena.
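One simple way to probe a non-linear relationship of the kind reported for lone-wolf terrorism is to include a squared GDP term in a count model and inspect its significance. The sketch below uses simulated country-level counts, not the study's data, and a plain Poisson regression rather than the study's exact specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)

# Simulated country-year observations; coefficients are illustrative only.
n = 600
gdp = rng.lognormal(9, 1, n) / 1000                     # GDP per capita, thousands USD
lone_wolf = rng.poisson(np.exp(0.5 + 0.15 * gdp - 0.004 * gdp**2))
df = pd.DataFrame({"lone_wolf": lone_wolf, "gdp": gdp})

# A significant squared term is one simple flag for a non-linear relationship.
fit = smf.poisson("lone_wolf ~ gdp + I(gdp ** 2)", data=df).fit(disp=False)
print(fit.summary().tables[1])
```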
APA, Harvard, Vancouver, ISO, and other styles