Dissertations / Theses on the topic 'Empirical regression'

Consult the top 50 dissertations / theses for your research on the topic 'Empirical regression.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Lin, Hui-Ling. "Jackknife Empirical Likelihood for the Variance in the Linear Regression Model." Digital Archive @ GSU, 2013. http://digitalarchive.gsu.edu/math_theses/129.

Full text
Abstract:
The variance is a measure of the spread of data about the center, so estimating it accurately is an important problem. In this paper, we consider the linear regression model, the most popular model in practice. We use the jackknife empirical likelihood method to obtain an interval estimate of the variance in the regression model. The proposed jackknife empirical likelihood ratio converges to the standard chi-squared distribution. A simulation study is carried out to compare the jackknife empirical likelihood method and the standard method in terms of coverage probability and interval length for the confidence interval of the variance from linear regression models; the proposed jackknife empirical likelihood method performs better. We also illustrate the proposed methods using two real data sets.
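The jackknife construction that underlies this kind of method can be sketched without the empirical likelihood machinery. Below is a minimal illustration (not the thesis's procedure) of leave-one-out jackknife pseudo-values for the residual-variance estimator in a linear regression; the data and the true error variance (1.5² = 2.25) are invented for the example:

```python
import numpy as np

def jackknife_pseudo_values(x, y):
    """Leave-one-out jackknife pseudo-values for the residual-variance
    estimator of a simple linear regression fitted by least squares."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])  # design matrix with intercept

    def sigma2_hat(Xd, yd):
        beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
        resid = yd - Xd @ beta
        return resid @ resid / (len(yd) - Xd.shape[1])

    full = sigma2_hat(X, y)
    pseudo = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        # Standard jackknife pseudo-value: n*T - (n-1)*T_(-i)
        pseudo[i] = n * full - (n - 1) * sigma2_hat(X[mask], y[mask])
    return pseudo

# Hypothetical data: true error variance is 1.5**2 = 2.25.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(scale=1.5, size=200)
pv = jackknife_pseudo_values(x, y)
print(round(float(pv.mean()), 2))  # jackknife point estimate of the variance
```

The empirical likelihood step of the thesis would then treat these pseudo-values as approximately independent observations.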
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Xue. "Empirical Bayes block shrinkage for wavelet regression." Thesis, University of Nottingham, 2006. http://eprints.nottingham.ac.uk/13516/.

Abstract:
There has been great interest in recent years in the development of wavelet methods for estimating an unknown function observed in the presence of noise, following the pioneering work of Donoho and Johnstone (1994, 1995) and Donoho et al. (1995). In this thesis, a novel empirical Bayes block (EBB) shrinkage procedure is proposed and the performance of this approach with both independent identically distributed (IID) noise and correlated noise is thoroughly explored. The first part of this thesis develops a Bayesian methodology involving the non-central χ² distribution to simultaneously shrink wavelet coefficients in a block, based on the block sum of squares. A useful (and to the best of our knowledge, new) identity satisfied by the non-central χ² density is exploited. This identity leads to tractable posterior calculations for suitable families of prior distributions. Also, the families of prior distributions we work with are sufficiently flexible to represent various forms of prior knowledge. Furthermore, an efficient method for finding the hyperparameters is implemented and simulations show that this method has a considerable computational advantage. The second part relaxes the assumption of IID noise considered in the first part of this thesis. A semi-parametric model including a parametric component and a nonparametric component is presented to deal with correlated noise situations. In the parametric component, attention is paid to the covariance structure of the noise. Two distinct parametric methods (maximum likelihood estimation and time series model identification techniques) for estimating the parameters in the covariance matrix are investigated. Both methods have been successfully implemented and are believed to be new additions to smoothing methods.
3

Arafeen, Md Junaid. "Adaptive Regression Testing Strategy: An Empirical Study." Thesis, North Dakota State University, 2012. https://hdl.handle.net/10365/26525.

Abstract:
When software systems evolve, different amounts of code modifications can be involved in different versions. These factors can affect the costs and benefits of regression testing techniques, and thus, there may be no single regression testing technique that is the most cost-effective technique to use on every version. To date, many regression testing techniques have been proposed, but no research has been done on the problem of helping practitioners systematically choose appropriate techniques on new versions as systems evolve. To address this problem, we propose adaptive regression testing (ART) strategies that attempt to identify the regression testing techniques that will be the most cost-effective for each regression testing session, considering the organization's situation and testing environment. To assess our approach, we conducted an experiment focusing on test case prioritization techniques. Our results show that prioritization techniques selected by our approach can be more cost-effective than those used by the control approaches.
4

Ketkar, Nikhil S. "Empirical comparison of graph classification and regression algorithms." Pullman, Wash. : Washington State University, 2009. http://www.dissertations.wsu.edu/Dissertations/Spring2009/n_ketkar_042409.pdf.

Abstract:
Thesis (Ph. D.)--Washington State University, May 2009.
Title from PDF title page (viewed on June 3, 2009). "School of Electrical Engineering and Computer Science." Includes bibliographical references (p. 101-108).
5

Wu, Ying-keh. "Empirical Bayes procedures in time series regression models." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/76089.

Abstract:
In this dissertation empirical Bayes estimators for the coefficients in time series regression models are presented. Due to the uncontrollability of time series observations, explanatory variables in each stage do not remain unchanged. A generalization of the results of O'Bryan and Susarla is established and shown to be an extension of the results of Martz and Krutchkoff. Alternatively, as the distribution function of sample observations is hard to obtain except asymptotically, the results of Griffin and Krutchkoff on empirical linear Bayes estimation are extended and then applied to estimating the coefficients in time series regression models. Comparisons between the performance of these two approaches are also made. Finally, predictions in time series regression models using empirical Bayes estimators and empirical linear Bayes estimators are discussed.
Ph. D.
6

Jinnah, Ali. "Inference for Cox's regression model via a new version of empirical likelihood." unrestricted, 2007. http://etd.gsu.edu/theses/available/etd-11272007-223933/.

Abstract:
Thesis (M.S.)--Georgia State University, 2007.
Title from file title page. Yichuan Zhao, committee chair; Yu-Sheng Hsu, Xu Zhang, Yuanhui Xiao, committee members. Electronic text (54 p.) : digital, PDF file. Description based on contents viewed Feb. 25, 2008. Includes bibliographical references (p. 30-32).
7

Zhang, Yi. "Empirical minimum distance lack-of-fit tests for Tobit regression models." Kansas State University, 2011. http://hdl.handle.net/2097/12123.

Abstract:
Master of Science
Department of Statistics
Weixing Song
The purpose of this report is to propose and evaluate two lack-of-fit test procedures to check the adequacy of the regression functional forms in the standard Tobit regression models. It is shown that testing the null hypothesis for the standard Tobit regression models amounts to testing a new equivalent null hypothesis for the classic regression models. Both procedures are constructed based on the empirical variants of a minimum distance, which measures the squared difference between a nonparametric estimator and a parametric estimator of the regression functions fitted under the null hypothesis for the new regression models. The asymptotic null distributions of the test statistics are investigated, as well as the power for some fixed alternatives and some local hypotheses. Simulation studies are conducted to assess the finite sample power performance and the robustness of the tests. Comparisons between these two test procedures are also made.
8

Li, Yang. "An Empirical Analysis of Family Cost of Children : A Comparison of Ordinary Least Square Regression and Quantile Regression." Thesis, Uppsala University, Department of Statistics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-126660.

Abstract:

Quantile regression has advantages over OLS regression: it gives a fuller picture of the effects of a covariate on the response, and it enjoys robustness and an equivariance property. In this paper, I use survey data from Belgium and apply a linear model to illustrate these advantages of quantile regression. I then fit a quantile regression model to the raw data to analyze how family costs differ with the number of children, and apply a Wald test. The results show that for most family types and living standards, from the lower to the upper quantiles, the family cost of children increases with the number of children, and the cost of each child is the same. We also find a common pattern: the cost of the second child is significantly higher than the cost of the first child for nonworking families and for families at all living standards, at the upper quantiles (0.75 to 0.9) of the conditional distribution.
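As background to the comparison above, quantile regression replaces OLS's squared-error loss with the pinball (check) loss, whose expected value is minimised by the τ-quantile. A self-contained sketch (simulated skewed data, not the Belgian survey) verifying this numerically:

```python
import numpy as np

def pinball_loss(u, tau):
    """Check (pinball) loss; its expected value is minimised by the
    tau-quantile, which is what quantile regression estimates."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

rng = np.random.default_rng(1)
y = rng.exponential(scale=1.0, size=20000)  # skewed response, no covariates

tau = 0.75
grid = np.linspace(0.0, 5.0, 2001)
risk = [float(pinball_loss(y - c, tau).mean()) for c in grid]
c_star = grid[int(np.argmin(risk))]

# The minimiser of the empirical pinball risk matches the sample quantile.
print(round(float(c_star), 2), round(float(np.quantile(y, tau)), 2))
```

With covariates, minimising this loss over a linear predictor gives the conditional-quantile fits that the Wald test in the paper compares across quantiles.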

9

Boshoff, Lusilda. "Boosting, bagging and bragging applied to nonparametric regression : an empirical approach / Lusilda Boshoff." Thesis, North-West University, 2009. http://hdl.handle.net/10394/4337.

10

Mo, Zheng. "An Empirical Evaluation of OLS Hedonic Pricing Regression on Singapore Private Housing Market." Thesis, KTH, Byggvetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-150401.

Abstract:
This empirical paper studies the relationship between property value and hedonic attributes. To identify the characteristics that determine private real estate prices and their degrees of significance, and to aid the valuation procedure, 8,870 private residential property transactions with caveats lodged across the country are selected from the Urban Redevelopment Authority of Singapore. Forty models are tested, and RMSE, R-square, adjusted R-square, and F-value tests are performed to assess the overall fit of the models. A Breusch-Pagan test is performed to detect heteroskedasticity, a VIF test to check multicollinearity, and a Z-score to check spatial autocorrelation. Three findings emerge. First, size, age, floor level, population density, latitude, and construction status are core attributes in the regression. Second, new district zones classified by function are detected instead of the 28 administrative districts. Third, government policies and local customs (Feng Shui) also prove to be determinant variables. Two suggestions for regulating the market are given at the end of this article.
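One of the diagnostics named above, the VIF check for multicollinearity, is easy to sketch. The following illustrates the computation on simulated data with hypothetical hedonic variables (area, age, floor level); the near-collinear pair is invented to show how large VIFs flag the problem:

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R2_j), where R2_j is
    from regressing column j on the remaining columns (plus intercept)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        tss = np.sum((X[:, j] - X[:, j].mean()) ** 2)
        r2 = 1.0 - (resid @ resid) / tss
        out[j] = 1.0 / (1.0 - r2)
    return out

# Hypothetical hedonic covariates; 'level' is nearly collinear with 'area'.
rng = np.random.default_rng(2)
n = 500
area = rng.normal(size=n)
age = rng.normal(size=n)
level = 0.9 * area + 0.1 * rng.normal(size=n)
X = np.column_stack([area, age, level])
print(np.round(vif(X), 1))  # large values flag the area/level pair
```

A common rule of thumb treats VIF above 5 or 10 as a sign that a coefficient's variance is badly inflated by collinearity.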
11

Zhang, Xi. "Empirical Properties of Functional Regression Models and Application to High-Frequency Financial Data." DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/1973.

Abstract:
Functional data analysis (FDA) has grown into a substantial field of statistical research, with new methodology, numerous useful applications and interesting novel theoretical developments. My dissertation focuses on the empirical properties of functional regression models and their application to financial data. We start by testing the empirical properties of forecasts from functional autoregressive models on simulated and real data. We define intraday returns and consider their prediction from such returns on a market index, an extension of the Capital Asset Pricing Model to intraday data. Finally, we investigate multifactor functional models and assess their suitability for the prediction of intraday returns for various financial assets, including stock and commodity futures.
12

Wang, Fan. "Penalised regression for high-dimensional data : an empirical investigation and improvements via ensemble learning." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/289419.

Abstract:
In a wide range of applications, datasets are generated for which the number of variables p exceeds the sample size n. Penalised likelihood methods are widely used to tackle regression problems in these high-dimensional settings. In this thesis, we carry out an extensive empirical comparison of the performance of popular penalised regression methods in high-dimensional settings and propose new methodology that uses ensemble learning to enhance the performance of these methods. The relative efficacy of different penalised regression methods in finite-sample settings remains incompletely understood. Through a large-scale simulation study, consisting of more than 1,800 data-generating scenarios, we systematically consider the influence of various factors (for example, sample size and sparsity) on method performance. We focus on three related goals (prediction, variable selection and variable ranking) and consider six widely used methods. The results are supported by a semi-synthetic data example. Our empirical results complement existing theory and provide a resource to compare performance across a range of settings and metrics. We then propose a new ensemble learning approach for improving the performance of penalised regression methods, called STructural RANDomised Selection (STRANDS). The approach, which builds and improves upon the Random Lasso method, consists of two steps. In both steps, we reduce dimensionality by repeated subsampling of variables. We apply a penalised regression method to each subsampled dataset and average the results. In the first step, subsampling is informed by variable correlation structure, and in the second step, by variable importance measures from the first step. STRANDS can be used with any sparse penalised regression approach as the "base learner".
In simulations, we show that STRANDS typically improves upon its base learner, and demonstrate that taking account of the correlation structure in the first step can help to improve the efficiency with which the model space may be explored. We propose another ensemble learning method to improve the prediction performance of Ridge Regression in sparse settings. Specifically, we combine Bayesian Ridge Regression with a probabilistic forward selection procedure, where inclusion of a variable at each stage is probabilistically determined by a Bayes factor. We compare the prediction performance of the proposed method to penalised regression methods using simulated data.
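The subsample-fit-and-average skeleton described in the abstract can be sketched generically. This illustration uses ridge regression as a stand-in base learner on simulated sparse data; it is not the STRANDS algorithm itself (which informs the subsampling by correlation structure and importance measures), only the repeated variable-subsampling idea:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution, used here as a stand-in base learner."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def subsample_ensemble(X, y, lam=1.0, n_sub=200, frac=0.5, seed=0):
    """Fit the base learner on random variable subsets and average the
    coefficients, counting a variable as zero when it is left out."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    k = max(1, int(frac * p))
    coefs = np.zeros((n_sub, p))
    for b in range(n_sub):
        idx = rng.choice(p, size=k, replace=False)
        coefs[b, idx] = ridge_fit(X[:, idx], y, lam)
    return coefs.mean(axis=0)

# Simulated sparse truth: only the first three coefficients are nonzero.
rng = np.random.default_rng(3)
n, p = 100, 40
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + rng.normal(size=n)
avg = subsample_ensemble(X, y)
print(np.round(avg[:5], 2))  # signal variables dominate the averaged fit
```

Averaging over subsets shrinks all coefficients (each variable is only included in a fraction of the fits), but the signal variables still stand out clearly from the noise variables.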
13

Luo, Zairen. "Flexible Pavement Condition Model Using Clusterwise Regression and Mechanistic-Empirical Procedure for Fatigue Cracking Modeling." See Full Text at OhioLINK ETD Center (Requires Adobe Acrobat Reader for viewing), 2005. http://www.ohiolink.edu/etd/view.cgi?toledo1133560069.

Abstract:
Dissertation (Ph.D.)--University of Toledo, 2005.
Typescript. "A dissertation [submitted] as partial fulfillment of the requirements of the Doctor of Philosophy degree in Engineering." Bibliography: leaves 90-99.
14

Fu, Shuting. "Bayesian Logistic Regression Model with Integrated Multivariate Normal Approximation for Big Data." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/451.

Abstract:
The analysis of big data is of great interest today, and this comes with challenges of improving precision and efficiency in estimation and prediction. We study binary data with covariates from numerous small areas, where direct estimation is not reliable, and there is a need to borrow strength from the ensemble. This is generally done using Bayesian logistic regression, but because there are numerous small areas, the exact computation for the logistic regression model becomes challenging. Therefore, we develop an integrated multivariate normal approximation (IMNA) method for binary data with covariates within the Bayesian paradigm, and this procedure is assisted by the empirical logistic transform. Our main goal is to provide the theory of IMNA and to show that it is many times faster than the exact logistic regression method with almost the same accuracy. We apply the IMNA method to the health status binary data (excellent health or otherwise) from the Nepal Living Standards Survey with more than 60,000 households (small areas). We estimate the proportion of Nepalese in excellent health condition for each household. For these data IMNA gives estimates of the household proportions as precise as those from the logistic regression model and it is more than fifty times faster (20 seconds versus 1,066 seconds), and clearly this gain is transferable to bigger data problems.
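The empirical logistic transform that assists the IMNA procedure is a standard device for binomial counts; a minimal sketch, with the usual half-unit continuity correction and its approximate variance:

```python
import numpy as np

def empirical_logit(successes, trials):
    """Empirical logistic transform log((y + 1/2) / (n - y + 1/2)).
    The half-unit correction keeps it finite at y = 0 and y = n; the
    usual approximate variance is 1/(y + 1/2) + 1/(n - y + 1/2)."""
    y = np.asarray(successes, dtype=float)
    n = np.asarray(trials, dtype=float)
    z = np.log((y + 0.5) / (n - y + 0.5))
    var = 1.0 / (y + 0.5) + 1.0 / (n - y + 0.5)
    return z, var

# Three hypothetical small areas with 10 respondents each.
z, v = empirical_logit([0, 3, 10], [10, 10, 10])
print(np.round(z, 2))  # finite even for the all-0 and all-10 areas
```

Transforming the counts this way lets a normal approximation be used on the logit scale, which is what makes the multivariate normal approximation in the abstract tractable for numerous small areas.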
15

Jian, Wen. "Analysis of Longitudinal Data in the Case-Control Studies via Empirical Likelihood." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/math_theses/8.

Abstract:
Case-control studies are primary tools for the study of risk factors (exposures) related to a disease of interest. Case-control studies using longitudinal data are cost- and time-efficient when the disease is rare and assessing the exposure level of risk factors is difficult. Instead of the GEE method, Park and Kim (2004) proposed using a prospective logistic model for analyzing case-control longitudinal data and explored a semiparametric inference procedure. In this thesis, we apply an empirical likelihood ratio method to derive the limiting distribution of the empirical likelihood ratio and obtain a likelihood-ratio-based confidence region for the unknown regression parameters. Our approach does not require estimating the covariance matrices of the parameters. Moreover, the proposed confidence region adapts to the data set and is not necessarily symmetric; it thus reflects the nature of the underlying data and gives a more representative way to make inferences about the parameter of interest. We compare the empirical likelihood method with the normal-approximation-based method; simulation results show that the proposed empirical likelihood ratio method performs well in terms of coverage probability.
16

Ham, Roger, University of Western Sydney, and School of Economics and Finance. "The urban residential economic model : theoretical and empirical developments." THESIS_XXX_EFI_HAM_R.xml, 1999. http://handle.uws.edu.au:8081/1959.7/447.

Abstract:
The aim of this thesis is to analyse the economic model of urban residential location through the application of duality methods. Whilst some dual methods have been used in urban economic modelling in the past, this paper proposes alternative dual approaches which appear to be novel, but are complementary to existing approaches to the urban model. As part of the application of dual techniques, the paper proposes a method of application which is general enough to be applied to all Von Thunen type models and tests this proposition on the fundamental agrarian model of Von Thunen. As part of the dual analysis of the urban residential model, the conditions for the traditional lot size hypothesis are examined in the light of conditional demand functions stemming from the dual analysis. The work also empirically tests the traditional residential lot size hypothesis for various Australian cities. The empirical method adopted involves estimation of density gradients utilising competing non-nested flexible form models and discrimination between these alternative models utilising semi-parametric non-nested tests based on an artificial regression model. Two of the three competing models have not been used in this context before, one of them being completely novel. Moreover, the artificial regression model has not been previously used in this context, requiring some modification to deal with the problem of competing models with dependent variable transformation.
Doctor of Philosophy (PhD)
17

Park, Kunsoon. "User Acceptance of the Intranet in Restaurant Franchise Systems: An Empirical Study." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/30234.

Abstract:
This research study examined the acceptance of the intranet in restaurant franchise systems. The widely accepted Technology Acceptance Model (TAM) developed by Davis (1986, 1989) was the basis for this study. TAM is an excellent model for predicting information technology (IT) usage and is based on the Theory of Reasoned Action (TRA). Therefore, TAM was adopted in this study of intranet acceptance. Furthermore, this study attempted to see if the earlier results of TAM are still valid. The original model was modified to include one external variable, franchise support. Data were collected from franchise restaurant systems throughout the United States, excluding Alaska and Hawaii. Of 3,500 questionnaires distributed to individual users of the intranet, 161 contained usable responses. The results of regression analysis confirm that TAM is valid for additional applications such as evaluating the intranet in restaurant franchise systems.
Ph. D.
18

Lu, Min. "A Study of the Calibration Regression Model with Censored Lifetime Medical Cost." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/math_theses/14.

Abstract:
Medical cost has received increasing interest recently in biostatistics and public health. Statistical analysis and inference for lifetime medical cost are made challenging by the fact that survival times are censored for some study subjects, whose subsequent costs are unknown. Huang (2002) proposed the calibration regression model, a semiparametric regression tool for studying medical cost in relation to covariates. In this thesis, an inference procedure is investigated using the empirical likelihood ratio method. Unadjusted and adjusted empirical likelihood confidence regions are constructed for the regression parameters. We compare the proposed empirical likelihood methods with the normal-approximation-based method. Simulation results show that the proposed empirical likelihood ratio method outperforms the normal-approximation-based method in terms of coverage probability; in particular, the adjusted empirical likelihood performs best and overcomes the undercoverage problem.
19

Marchenko, Maria [Verfasser], and Enno [Akademischer Betreuer] Mammen. "Econometric analysis of quantile regression models and networks : With empirical applications / Maria Marchenko ; Betreuer: Enno Mammen." Mannheim : Universitätsbibliothek Mannheim, 2016. http://d-nb.info/1114661287/34.

20

Ülkü, Tolga. "Empirical analyses of airport efficiency and costs." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2015. http://dx.doi.org/10.18452/17117.

Abstract:
Small and regional airports often have insufficient revenues to cover their costs. The question is how such airports could be efficiently structured, managed and financially supported. Some airports are operated individually and receive direct subsidies from the local and federal governments. Others survive through cross-subsidizations. This dissertation first deals with the efficiency of 85 small regional European airports for the years 2002-2009 by applying a data envelopment analysis. Estimates show the potential savings and revenue opportunities to be 50 percent and 25 percent respectively. Belonging to an airport system reduces efficiency by about 5 percent. The average break-even passenger throughput over the last decade more than doubled to 464 thousand passengers. However airports behaving efficiently could have covered their operational costs with a mere 166 thousand passengers annually. The second part addresses the comparison of airports belonging to AENA and DHMI for the years between 2009 and 2011. The majority of airports operate under increasing returns to scale. A Russell measure of data envelopment analysis is implemented. Results indicate higher average efficiency levels at Spanish airports, but private involvement enhances efficiency at Turkish ones. Certain policy options including a greater decentralization of airport management and the restructuring of the airport network (by closing some inefficient airports) should be considered to increase the airport systems’ efficiency. In the final part of the dissertation, we have studied how the airport specific characteristics drive the unit costs. In order to capture the spatial interdependence of airport costs, a spatial regression methodology is applied. Two separate datasets of subsidized French and Norwegian airports are used to test various hypotheses. The results show a negative effect of subsidies on airport cost efficiency. Furthermore, the significance of scale economies is illustrated.
21

Ercisli, Safak. "Development of Enhanced Pavement Deterioration Curves." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/56599.

Abstract:
Modeling pavement deterioration and predicting pavement performance is crucial for optimum pavement network management. Currently only a few models incorporate the structural capacity of pavements into deterioration modeling. This thesis develops pavement deterioration models that take into account, along with the age of the pavement, the pavement structural condition expressed in terms of the Modified Structural Index (MSI). Using the Akaike Information Criterion (AIC), the research found MSI to be a significant input parameter affecting the rate of deterioration of a pavement section. The AIC method suggests that a model that includes the MSI is at least 10^21 times more likely to be closer to the true model than one that does not. The developed models display the average deterioration of pavement sections for specific ages and MSI values. The Virginia Department of Transportation (VDOT) annually collects pavement condition data on road sections of various lengths. Due to the nature of data collection practices, this data contains many biased measurements and influential outliers. After investigating data quality and characteristics, the models were built on filtered and cleansed data. Following the regression models, an empirical Bayesian approach was employed to reduce the variance between observed and predicted conditions and to deliver a more accurate prediction model.
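The "10^21 times more likely" figure quoted in the abstract is the standard AIC evidence ratio exp(ΔAIC/2). A minimal sketch; the AIC values here are made up to reproduce a ratio of that order:

```python
import math

def aic_evidence_ratio(aic_better, aic_worse):
    """Relative likelihood of the better model: exp((AIC_w - AIC_b) / 2)."""
    return math.exp((aic_worse - aic_better) / 2.0)

# Hypothetical AIC values: a gap of about 96.7 units yields a ratio on the
# order of 10**21, the magnitude quoted in the abstract.
print(f"{aic_evidence_ratio(1000.0, 1096.7):.2e}")
```

Equivalently, a model whose AIC is lower by 2·21·ln(10) ≈ 96.7 units is about 10^21 times more plausible under the AIC's relative-likelihood interpretation.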
Master of Science
22

Yuzbasioglu, Asim. "An empirical analysis of takeover predictions in the UK : application of artificial neural networks and logistic regression." Thesis, University of Plymouth, 2002. http://hdl.handle.net/10026.1/2219.

Abstract:
This study undertakes an empirical analysis of takeover predictions in the UK. The objectives of this research are twofold: first, to test whether it is possible to predict or identify takeover targets before they receive any takeover bid; second, to test whether prediction outcomes can be improved by extending firm-specific characteristics to corporate governance variables and by employing a different technique that has become an established analytical tool through its extensive application in the corporate finance field. To address the first objective, Logistic Regression (LR) and Artificial Neural Networks (ANNs) have been applied as modelling techniques for predicting target companies in the UK. By applying ANNs to takeover prediction, their ability to classify targets is tested and the results are compared to the LR results. For the second objective, in addition to company financial variables, non-financial characteristics of companies (corporate governance characteristics) are employed. For the first time, ANNs are applied to corporate governance variables for takeover prediction. In the final section, the two groups of variables are combined to test whether the previous outcomes from financial and non-financial variables could be improved. However, the results suggest that predicting takeovers by employing publicly available information that is already reflected in the share price of the companies is not feasible, at least with the current techniques of LR and ANNs. These results are consistent with the semi-strong form of the efficient market hypothesis.
23

Hari, Vijaya. "Empirical Investigation of CART and Decision Tree Extraction from Neural Networks." Ohio University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1235676338.

24

Zhitina, Anna. "The economic benefits of EU membership: an Empirical analysis." Master's thesis, Vysoká škola ekonomická v Praze, 2016. http://www.nusl.cz/ntk/nusl-262236.

Full text
Abstract:
This master thesis is devoted to an empirical analysis of the economic benefits of EU membership. The analysis investigates the impact of EU membership on real GDP growth (in constant prices), the unemployment rate and the inflation rate for 16 states that entered the EU after 1995 (the analysed period is 1991-2014). The method applied is an econometric analysis of panel data. The first part of the thesis is devoted to the literature review. The second part describes the data and variables used for the analysis, the development of these variables over time, and stationarity testing. The third part is dedicated to the regression analysis and includes models for GDP growth, unemployment growth and inflation. The last part sums up the results and findings of the previous parts. The main source of the data used in this work is the statistical database of the World Bank (2016).
APA, Harvard, Vancouver, ISO, and other styles
25

Almeida, Leonardo Viana de. "Short selling recall option pricing: empirical and theoretical approaches." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/12/12138/tde-22112016-114644/.

Full text
Abstract:
Short selling is important for price efficiency as it helps negative information become incorporated into prices. As short selling requires borrowing stock in advance, the equity lending market plays a central role in price efficiency. For instance, when the costs of borrowing certain equities are high, these stocks are likely to be overpriced. Unfortunately, not much is known about the equity lending market, particularly the Brazilian market. Here, we have investigated a particular feature of the equity lending contract, namely the lender recall option. Lending contracts either i) allow the lender to recall the stock at an earlier date than initially agreed, or ii) allow no early recall, that is, they are fixed-term contracts. We have derived a simple model for recall option pricing and confirmed the model empirically.
APA, Harvard, Vancouver, ISO, and other styles
26

Herath, Shanaka. "The Size of the Government and Economic Growth: An Empirical Study of Sri Lanka." WU Vienna University of Economics and Business, 2010. http://epub.wu.ac.at/2962/1/sre%2Ddisc%2D2010_05.pdf.

Full text
Abstract:
The new growth theory establishes, among other things, that government expenditure can influence the economic growth of a country. This study attempts to explain whether government expenditure increases or decreases economic growth in the context of Sri Lanka. Results obtained by applying an analytical framework based on time series and second-degree polynomial regressions are generally consistent with previous findings: government expenditure and economic growth are positively correlated; excessive government expenditure is negatively correlated with economic growth; and an open economy promotes growth. In a separate section, the paper examines Armey's (1995) idea of a quadratic curve relating the level of government expenditure in an economy to the corresponding level of economic growth. The findings confirm the possibility of constructing the Armey curve for Sri Lanka and estimate the optimal level of government expenditure at approximately 27 per cent. This paper adds to the literature indicating that the Armey curve is a reality not only for developed economies but also for developing economies.
Series: SRE - Discussion Papers
APA, Harvard, Vancouver, ISO, and other styles
27

Ahmad, Abd-Razak. "Modelling corporate failure with financial and 'event' information : an empirical study using logistic regression and artificial neural networks." Thesis, University of Leeds, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.422086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Åkesson, Nils, and Ludvig Harting. "Valuing firms within the utilities sector using regression analysis: : An empirical study of the US and European market." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275681.

Full text
Abstract:
Valuing a company is an important task in finance, especially before a potential merger or acquisition. It is of great importance for both parties in a deal to make an accurate estimate of the company's value. The goal of this paper is to investigate how well regression analysis can be applied to this problem and whether it can perform on par with, or better than, the methods more frequently used in the industry today. The study was conducted within the utilities sector in the US and Europe, with data collected from historical public transactions dating back to 2009. The study concludes that a regression model as a valuation tool can offer several advantages, as it identifies key value drivers and is based on core mathematical concepts. However, the model created in this thesis underperforms compared with the prominent methods in place today. For further research, this thesis may provide useful insight into areas to consider when creating a valuation model.
APA, Harvard, Vancouver, ISO, and other styles
29

Pihl, Svante, and Leonardo Olivetti. "An Empirical Comparison of Static Count Panel Data Models: the Case of Vehicle Fires in Stockholm County." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412014.

Full text
Abstract:
In this paper we study the occurrences of outdoor vehicle fires recorded by the Swedish Civil Contingencies Agency (MSB) for the period 1998-2019, and build static panel data models to predict future occurrences of fire in Stockholm County. Through comparing the performance of different models, we look at the effect of different distributional assumptions for the dependent variable on predictive performance. Our study concludes that treating the dependent variable as continuous does not hamper performance, with the exception of models meant to predict more uncommon occurrences of fire. Furthermore, we find that assuming that the dependent variable follows a Negative Binomial Distribution, rather than a Poisson Distribution, does not lead to substantial gains in performance, even in cases of overdispersion. Finally, we notice a slight increase in the number of vehicle fires shown in the data, and reflect on whether this could be related to the increased population size.
APA, Harvard, Vancouver, ISO, and other styles
30

Sjölin, Carin. "The impact of governance on inequality : An empirical study." Thesis, Södertörns högskola, Nationalekonomi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-31304.

Full text
Abstract:
This paper examines the effect of governance on inequality, specifically whether improvements in the World Bank's Worldwide Governance Indicators affect inequality as measured by two Gini coefficients: the Market Gini, before taxes and redistribution, and the Net Gini, after taxes and redistribution. The data for the Gini measurements were taken from the Standardized World Income Inequality Database (SWIID) and the data for the Worldwide Governance Indicators from the World Bank. Data for fifteen (15) years, from the start of the Worldwide Governance Indicators until 2013, were combined with SWIID data for the same years. In all, data from one hundred and fifty-six (156) countries with a full set of six (6) indicators, for the years with at least one corresponding Gini measurement, were used in this study: in total, one thousand seven hundred and forty-seven (1747) observations. In a pooled OLS regression, controlling for growth with GDP per capita expressed as an annual percentage change, the individual indicators gave the following results, where a positive sign indicates increased inequality and vice versa: Control of Corruption and Regulatory Quality showed a positive sign for both Gini measures; Rule of Law, Government Effectiveness, and Political Stability and Absence of Violence/Terrorism gave a negative sign for both Gini measures; Voice and Accountability showed a positive sign for the Market Gini and a negative sign for the Net Gini. The finding that an improvement in Control of Corruption increased inequality both before and after taxes and redistribution was unexpected and should be researched further.
APA, Harvard, Vancouver, ISO, and other styles
31

Arya, Sanjeev. "Empirical Modeling of Regional Stream Habitat Quality Using GIS-Derived Watersheds of Flexible Scale." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1023200635.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Rodriguez-Castro, Monica. "ELEMENTS OF TASK, JOB, AND PROFESSIONAL SATISFACTION IN THE LANGUAGE INDUSTRY: AN EMPIRICAL MODEL." Kent State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=kent1322006349.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Luoma, Alem. "Tax competition among municipalities in the central part of Sweden : An empirical study: Does municipal taxation decisions depend on taxations in neighboring municipalities?" Thesis, Södertörns högskola, Institutionen för samhällsvetenskaper, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-26518.

Full text
Abstract:
The primary task of this paper is to test the interactive relations between tax rates at the municipality level. We include 96 municipalities over the years 2006 to 2013. The relations are estimated by a panel data instrumental variable method with fixed effects, to overcome the possible simultaneity problem. In addition, we choose a set of control variables to strengthen our analysis. The main finding of this study suggests that a one per cent tax cut in the neighboring municipalities leads to a 0.62 per cent decrease in the tax rate in the home municipality, ceteris paribus. This result is in line with theory and similar to findings in previous studies such as Edmark and Åhgren (2008).
APA, Harvard, Vancouver, ISO, and other styles
34

Lu, Yinghua. "Empirical Likelihood Inference for the Accelerated Failure Time Model via Kendall Estimating Equation." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/math_theses/76.

Full text
Abstract:
In this thesis, we study two methods of inference for the parameters of the accelerated failure time model with right-censored data. One is the Wald-type method, which involves parameter estimation; the other is the empirical likelihood method, which is based on the asymptotic distribution of the likelihood ratio. We employ a monotone censored-data version of the Kendall estimating equation and construct confidence intervals from both methods. In the simulation studies, we compare the empirical likelihood (EL) and the Wald-type procedure in terms of coverage accuracy and average confidence interval length, and conclude that the empirical likelihood method performs better. We also compare the EL for Kendall's rank regression estimator with the EL for other well-known estimators and find advantages of the EL for the Kendall estimator in small samples. Finally, real clinical trial data are used for the purpose of illustration.
APA, Harvard, Vancouver, ISO, and other styles
35

Alluri, Anjaneya Varma. "Empirical Study On Key Attributes of Yelp dataset which Account for Susceptibility of a user to Social Influence." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439281364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Galdino, Carlos Henrique Pereira Assunção. "Previsão de longo prazo de níveis no sistema hidrológico do TAIM." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/142492.

Full text
Abstract:
Population growth and the degradation of water bodies have been pressuring modern agriculture to provide more efficient responses regarding the rational use of water. For a better use of water resources, it is necessary to understand the movement of water in nature, where prior knowledge of atmospheric phenomena is an important tool in planning activities that use water as the primary source of supply. In this study, long-term forecasts of water levels (seven-month horizon, monthly time step) in the Taim Hydrological System were performed, using rainfall forecasts generated by a global circulation model as input. To perform the predictions, an empirical hydrological regression model was developed, based on statistical techniques for the analysis and manipulation of historical data to correlate the available input data with the levels (volumes) of water in the wetland. Assuming that weather forecasts are a major source of uncertainty in hydrological forecasting, an ensemble forecast from the COLA 2.2 model with 30 members was used to quantify the uncertainties involved. An algorithm was developed to generate all possible multiple linear regression models from the available data, from which eight candidate equations were selected for hydrological forecasting. A preliminary analysis of the precipitation forecasts showed that the global circulation model did not represent extreme values well, so a bias-removal procedure was carried out. The empirical model was then run continuously, generating water-level forecasts for the next seven months, for each month of the period June 2004 to December 2011. The results show that the methodology performs satisfactorily up to the third month ahead, with performance declining at longer lead times; even so, the model is a support tool that can assist decision making in the management of water resources in the study area.
APA, Harvard, Vancouver, ISO, and other styles
37

Aljaid, Mohammad, and Mohammed Diaa Zakaria. "Implied Volatility and Historical Volatility : An Empirical Evidence About The Content of Information And Forecasting Power." Thesis, Umeå universitet, Företagsekonomi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-172756.

Full text
Abstract:
This study examines whether an implied volatility index can provide additional information for forecasting volatility, beyond historical volatility, using GARCH-family models. For this purpose, the research forecasts volatility in two main markets: the United States, through its widely used Standard and Poor's 500 index and the corresponding volatility index VIX, and Europe, through the Euro Stoxx 50 and the corresponding volatility index VSTOXX. To evaluate the in-sample information content, the conditional variance equations of GARCH(1,1) and EGARCH(1,1) are augmented by integrating implied volatility as an explanatory variable. Realized volatility, generated from daily squared returns, is employed as a proxy for true volatility. To examine out-of-sample forecast performance, one-day-ahead rolling forecasts are generated, and Mincer-Zarnowitz and encompassing regressions are utilized. The predictive power of implied volatility is assessed using the Mean Square Error (MSE). Findings suggest that integrating implied volatility as an exogenous variable in the conditional variance of GARCH models enhances model fit and decreases volatility persistence. Furthermore, the significance of the implied volatility coefficient suggests that implied volatility contains pertinent information for explaining the variation of the conditional variance. Implied volatility is found to be a biased forecast of realized volatility. The empirical findings of the encompassing regression tests imply that the implied volatility index does not surpass historical volatility in forecasting future realized volatility.
APA, Harvard, Vancouver, ISO, and other styles
38

Mikaelsson, Alex, and Saliou Sall. "Does corruption have a significant effect on economic growth? : An empirical analysis examining the relationship between corruption and economic growth in developing countries." Thesis, Södertörns högskola, Institutionen för samhällsvetenskaper, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-26139.

Full text
Abstract:
Corruption is a major cause and result of poverty around the globe. It arises at all levels of society, from national governments and the military to small businesses and sports. Corruption affects all elements of society in some way, as it undermines democracy and economic growth as well as the environment and people's health. The main purpose of this thesis is to examine whether corruption has a significant effect on economic growth in developing countries. The empirical analysis is conducted with regression analysis, using data from recognized institutions. Other variables that can affect GDP per capita growth are also examined, such as the level of democracy, the fertility rate, life expectancy, education and initial GDP per capita (to test for conditional convergence). In our main model, the empirical results show that corruption does not have a significant effect on economic growth, but this is largely because the model exhibits multicollinearity. In our second model, in which the variables Democracy, Initial GDP and Life expectancy were omitted, we found that corruption has a significant negative effect on economic growth. This is in accordance with previous empirical results, which hold that more corruption in a nation leads to less economic growth.
APA, Harvard, Vancouver, ISO, and other styles
39

Andersson, Gustaf, and Nora Lindvall. "Trust and Turnout : An Empirical Study of South African Voters." Thesis, Uppsala universitet, Statistiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-352688.

Full text
Abstract:
Scholars have proposed the idea that trust influences individuals' choice to vote or abstain. However, there is uncertainty about the composition of trust and its effect on voter turnout. The aim of this study is to explore the relationship between interpersonal and institutional trust and voter turnout in South Africa. Examining presently unused South African data from the World Values Survey 2006 through exploratory and confirmatory factor analysis, the argument is advanced that trust is a multidimensional concept that may be modelled by multivariate measurements. A logistic factor score regression model shows that a one-unit increase in trust in public institutions on average increases the odds of voting by 9%, whereas trust in private institutions and interpersonal trust have no significant effects. The results imply that trust-strengthening actions may be of interest to South African public institutions seeking to increase electoral participation and legitimise election outcomes.
APA, Harvard, Vancouver, ISO, and other styles
40

Van, Deventer Megan. "The development and empirical evaluation of an work engagement structural model." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96784.

Full text
Abstract:
Thesis (MComm)--Stellenbosch University, 2015.
ENGLISH ABSTRACT: Work Engagement is one construct of many that forms part of the complex nomological network of constructs underlying the behaviour of working man. Work Engagement is an important construct from both an individual and an organisational perspective. Human resource management interventions aimed at enhancing Work Engagement aspire to contribute to the achievement of the organisation's primary objective and the well-being of the organisation's employees. Such interventions will most likely also be valued by individuals within the workplace, as individuals will be able to experience a sense of personal fulfilment through self-expression at work. It is therefore essential to gain a valid understanding of the Work Engagement construct and the psychological mechanism that underpins it, in order to design human resource interventions that will successfully enhance Work Engagement. The current study raises the question of why variance in Work Engagement exists amongst different employees working in different organisational contexts. The research objective of the current study is to develop and empirically test an explanatory Work Engagement structural model that will provide a valid answer to this question. In this study, a comprehensive Work Engagement structural model was proposed. An ex post facto correlational design with structural equation modelling (SEM) as the statistical analysis technique was used to test the substantive research hypotheses as represented by the Work Engagement structural model. Furthermore, the current study tested two additional narrow-focus structural models describing the impact of value congruence on Work Engagement, using an ex post facto correlational design with polynomial regression as the statistical analysis technique. A convenience sample of 227 teachers working in public sector schools falling under the jurisdiction of the Western Cape Education Department (WCED) participated in the study.
The comprehensive Work Engagement model achieved reasonably close fit. Support was found for all of the hypothesised theoretical relationships in the Work Engagement structural model, except for the influence of the PsyCap*Job Characteristics interaction effect on Meaningfulness and for three of the five latent polynomial regression terms added to the model in an attempt to derive response surface test values. The response surface analysis findings were mixed. Based on the obtained results, meaningful practical recommendations were derived.
APA, Harvard, Vancouver, ISO, and other styles
41

Dias, Sónia Manuela Mendes. "Linear regression with empirical distributions." Tese, 2014. https://repositorio-aberto.up.pt/handle/10216/74191.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Dias, Sónia Manuela Mendes. "Linear regression with empirical distributions." Doctoral thesis, 2014. https://repositorio-aberto.up.pt/handle/10216/74191.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Hsu, Pai-Hung, and 徐百宏. "Empirical study on strategy for Regression Testing." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/26560876409792786516.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Graduate Institute of Information Management
94 (ROC academic year)
Software testing plays a necessary role in software development and maintenance, and is performed to support quality assurance. It is very common for test engineers to design a number of test suites to exercise their programs manually. Designing test data manually is an expensive, labor-intensive process, which is why generating software test data automatically has become a hot research issue. Most research uses meta-heuristic search methods such as genetic algorithms or simulated annealing to obtain the test data. In most circumstances, test engineers generate a test suite when they first receive a new program; when they later debug or change some code, they design another new test suite for the revised version. Hardly anyone retains the original test data and reuses it. In this research, we examine whether it is useful to store the original test data.
APA, Harvard, Vancouver, ISO, and other styles
44

Mei-Hui, Lin, and 林美慧. "Meta-Regression Analysis : Theory and an Empirical Application." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/11150409754455696269.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

März, Alexander. "Applications of modern regression techniques in empirical economics." Doctoral thesis, 2016. http://hdl.handle.net/11858/00-1735-0000-0028-87DE-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Ho, Fang-Rou, and 何枋柔. "An Empirical Study for Stabilization of Spatial Regression Modeling." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/5wkc9j.

Full text
Abstract:
Master's thesis
National Changhua University of Education
Graduate Institute of Statistics and Information Science
105 (ROC academic year)
Model selection or model averaging is essential to statistical modeling, but how to determine which of the two is more appropriate has not received much attention. In this thesis, we focus on spatial regression models and propose an intuitive criterion for assessing the stability of a model selection procedure. Further, a data perturbation technique is applied to estimate this criterion in practice. Empirical comments and suggestions are given through simulation studies.
APA, Harvard, Vancouver, ISO, and other styles
47

LIU, WEI-CHIN, and 劉偉欽. "An Empirical Analysis of Risk Measures using Quantile Regression." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/77579431065630308496.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Master's Program in Financial Information, Department of Finance
104 (ROC academic year)
This paper presents a comprehensive empirical analysis of a set of left-tail measures (LTMs): the mean and standard deviation of a loss larger than the VaR (MLL and SDLL), and the VaR itself. The data used for estimating the LTMs are the daily returns of the S&P 500 Index from January 1963 to December 2015 and of the T50 Index from July 2003 to December 2015. We estimate value-at-risk using a quantile regression model and two other traditional methods, and compare the resulting VaR estimates from two angles: risk prediction and investment. In risk prediction, the empirical results indicate that the quantile regression LTMs are overestimated, while the variance-covariance method underestimates risk, as its back-test count exceeds 12.5. In investment, the daily Sharpe ratio of investment using the LTMs is higher than that of the traditional methods.
APA, Harvard, Vancouver, ISO, and other styles
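The left-tail measures described in the abstract can be sketched from a return series with plain NumPy: the VaR at level alpha, plus the mean (MLL) and standard deviation (SDLL) of losses beyond it. This is a minimal historical-simulation sketch; the thesis's quantile-regression estimate of the VaR is not reproduced here.

```python
# Minimal sketch of the left-tail measures (LTMs): VaR, MLL, SDLL.
# Historical-simulation version; the quantile-regression VaR is omitted.
import numpy as np

def left_tail_measures(returns, alpha=0.05):
    """VaR as a positive loss, plus mean/std of losses beyond the VaR."""
    returns = np.asarray(returns, dtype=float)
    var = -np.quantile(returns, alpha)      # VaR at level alpha
    losses = -returns
    tail = losses[losses > var]             # losses larger than VaR
    mll = tail.mean() if tail.size else var
    sdll = tail.std(ddof=1) if tail.size > 1 else 0.0
    return var, mll, sdll

# Synthetic daily returns for illustration (not S&P 500 or T50 data).
rng = np.random.default_rng(0)
var, mll, sdll = left_tail_measures(rng.normal(0.0, 0.01, 5000))
```

By construction the MLL always exceeds the VaR, since it averages only the losses beyond the VaR threshold.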
48

Shi, Yi Ling, and 施依伶. "An empirical study on some break point in nonparametric regression." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/30672507669026780989.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Chun, Chen Hung, and 陳俊宏. "Building Multi-factor Stock Return Models Using Regression Analysis and Regression Trees─ Empirical Study in USA Stock Market." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/78737459524670281525.

Full text
Abstract:
Master's thesis
Chung Hua University
In-service Master's Program, Department of Information Management
100
Combining multiple factors may yield a more accurate stock return prediction model, but a trial-and-error approach is clearly inefficient for discovering the best multi-factor model. In this study, sort normalization was employed to normalize the independent variables and the dependent variable, and regression analysis and regression trees were employed to establish rate-of-return prediction models and identify the most important variables affecting the rate of return. The U.S. stock market was used for the empirical analysis. The results showed that (1) principal component analysis and variable clustering analysis divided the stock selection factors into five components: value (such as B/P), growth (such as ROE), scale (such as total market capitalization), inertia (such as the quarterly stock price rate of change), and others (such as S/P). The univariate sorting method showed that value stocks earn high returns by bearing high risk, whereas growth stocks do not. It also showed that value-oriented selection tends to pick small-cap stocks, whereas growth-oriented selection has no such tendency. (2) When using regression analysis to establish the rate-of-return prediction models, adopting rank values for the independent and dependent variables is better than adopting original values. With backward elimination, the four most important variables are the stock price, total market value, EPS, and debt-equity ratio; with forward selection, they are the stock price, total market value, GVI (0.06), and EPS. Using stepwise regression to construct a regression model containing only a small number of independent variables can improve predictive ability. (3) When using regression trees to establish the rate-of-return prediction models, GVI (0.125) is the best factor in terms of predictive ability except for stock price. The RMSE and error rate in the test period of an ensemble of multiple regression trees are lower than those of a single regression tree. (4) The results of the weighted scoring stock selection model showed that combining multiple factors can increase the rate of return, though the ability of the ROE weight to increase the rate of return is much lower than that of the B/P and S/P factors. Combining multiple factors can reduce risk, and the ability of the ROE weight to reduce risk is much higher than that of the other two factors. Combining multiple factors can also increase the Sharpe ratio; in particular, the effects of the interactions B/P*ROE and ROE*S/P were very pronounced.
APA, Harvard, Vancouver, ISO, and other styles
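The rank-normalize-then-fit idea in the abstract can be sketched as follows: map each factor and the return to ranks, then fit a regression tree on the ranked data. The factor names (B/P, ROE) come from the abstract, but the data and parameters below are made up for illustration.

```python
# Sketch of sort (rank) normalization followed by a regression tree.
# Synthetic B/P and ROE factors; data and tree depth are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def to_rank(x):
    """Map values to ranks scaled into (0, 1]."""
    order = np.argsort(np.argsort(x))
    return (order + 1) / len(x)

rng = np.random.default_rng(1)
bp = rng.normal(size=500)                   # hypothetical B/P factor
roe = rng.normal(size=500)                  # hypothetical ROE factor
ret = 0.6 * bp + 0.2 * roe + rng.normal(scale=0.5, size=500)

X = np.column_stack([to_rank(bp), to_rank(roe)])
y = to_rank(ret)
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
```

With the stronger synthetic B/P signal, the tree's feature importances favor the first column, mirroring the study's use of trees to rank factor importance.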
50

Callas, Peter W. "Empirical comparisons of logistic regression, Poisson regression, and Cox proportional hazards modeling in analysis of occupational cohort data." 1994. https://scholarworks.umass.edu/dissertations/AAI9510451.

Full text
Abstract:
Three multiplicative models commonly used in the analysis of occupational cohort studies are logistic, Poisson, and Cox proportional hazards regression. Although the underlying theories behind these are well known, this has not always led to clear decisions for selecting which to use in practice. This research was conducted to examine the effect model choice has on the epidemiologic interpretation of occupational cohort data. The three models were applied to a National Cancer Institute historical cohort of formaldehyde-exposed workers. Samples were taken from this dataset to create scenarios for model comparisons, varying the study size (n = 600, 3000, 6000), proportion of subjects experiencing the outcome (2.5%, 10%, 50%), strength of association between exposure and outcome (weak, moderate, strong), follow-up length (5, 15, 30 years), and proportion of subjects lost to follow-up (0%, 10%, 17.5%). Other factors investigated included how to handle subjects lost to follow-up in logistic regression. Models were compared on risk estimates, confidence intervals, and practical issues such as ease of use. The Poisson and Cox models yielded nearly identical relative risks and confidence intervals in all situations except when confounding by age could not be closely controlled in the Poisson analysis, which occurred when the sample size was small or the outcome was rare. Logistic regression findings were more variable, with risk estimates differing most from the Cox results when there was a common outcome or a strong relative risk. Logistic regression was also less precise than the others. Thus, although logistic was the easiest model to implement, it should only be used in occupational cohort studies when the outcome is rare (5% or less) and the relative risk is less than about 2. Even then, since it does not account for follow-up time differences between subjects or changes in risk factor values over time, the Cox or Poisson models are better choices.
Selecting between these can usually be based on convenience, except when confounding cannot be closely controlled in the Poisson model but can in the Cox model, or when the Poisson assumption of exponential baseline survival is not met. In these cases Cox should be used.
APA, Harvard, Vancouver, ISO, and other styles
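For reference, the three multiplicative models compared in this dissertation have the following standard textbook forms (with a person-time offset in the Poisson model, as is usual for cohort data; these equations are not taken from the abstract itself):

```latex
% Logistic regression: log-odds of the outcome for subject i
\log\frac{p_i}{1-p_i} = \beta_0 + \beta^\top x_i
% Poisson regression: log event rate, with person-time offset t_i
\log \mu_i = \log t_i + \beta_0 + \beta^\top x_i
% Cox proportional hazards: hazard relative to an unspecified baseline
h(t \mid x_i) = h_0(t)\,\exp\!\left(\beta^\top x_i\right)
```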