
Journal articles on the topic 'Homoskedasticita'


Consult the top 34 journal articles for your research on the topic 'Homoskedasticita.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Erb, Jack, and Douglas G. Steigerwald. "Accurately sized test statistics with misspecified conditional homoskedasticity." Journal of Statistical Computation and Simulation 81, no. 6 (June 2011): 729–47. http://dx.doi.org/10.1080/00949650903463574.

2

Djalic, Irena, and Svetlana Terzic. "Violation of the assumption of homoscedasticity and detection of heteroscedasticity." Decision Making: Applications in Management and Engineering 4, no. 1 (March 15, 2021): 1–18. http://dx.doi.org/10.31181/dmame2104001d.

Abstract:
In this paper, it is assumed that the homoskedasticity assumption is violated in a certain classical linear regression model, and this is checked with several methods. The model describes the dependence of savings on income. The hypothesis was verified by data simulation. The aim of this paper is to develop a methodology for testing a given model for the presence of heteroskedasticity. We used the graphical method in combination with four tests (Goldfeld-Quandt, Glejser, White, and Breusch-Pagan). The methodology used in this paper showed that the assumption of homoskedasticity was violated, confirming the presence of heteroskedasticity.
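
As a rough illustration of the kind of check this abstract describes, the sketch below runs three of the four tests mentioned (Breusch-Pagan, White, and Goldfeld-Quandt; Glejser is not shipped with statsmodels) on a simulated savings-on-income regression. The data-generating process, variable names, and sample size are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed data, not the paper's): detect heteroskedasticity in a
# savings-on-income regression with tests available in statsmodels.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white, het_goldfeldquandt

rng = np.random.default_rng(0)
income = rng.uniform(1_000, 10_000, size=500)
# Error spread grows with income, so homoskedasticity is violated by construction.
savings = 50 + 0.1 * income + rng.normal(scale=0.02 * income)

X = sm.add_constant(income)
ols = sm.OLS(savings, X).fit()

bp_lm, bp_pval, _, _ = het_breuschpagan(ols.resid, X)
w_lm, w_pval, _, _ = het_white(ols.resid, X)
gq_f, gq_pval, _ = het_goldfeldquandt(savings, X)

print(f"Breusch-Pagan   p = {bp_pval:.4f}")
print(f"White           p = {w_pval:.4f}")
print(f"Goldfeld-Quandt p = {gq_pval:.4f}")  # small p-values reject homoskedasticity
```

A plot of `ols.resid` against `income` would give the graphical check mentioned in the abstract.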
3

Hodoshima, Jiro, and Masakazu Ando. "Bootstrapping stochastic regression models under homoskedasticity: wild bootstrap vs. pairs bootstrap." Journal of Statistical Computation and Simulation 80, no. 11 (November 2010): 1225–35. http://dx.doi.org/10.1080/00949650903014971.

4

Baltagi, Badi H., Georges Bresson, and Alain Pirotte. "Joint LM test for homoskedasticity in a one-way error component model." Journal of Econometrics 134, no. 2 (October 2006): 401–17. http://dx.doi.org/10.1016/j.jeconom.2005.06.029.

5

Jun, Sung Jae, and Joris Pinkse. "ADDING REGRESSORS TO OBTAIN EFFICIENCY." Econometric Theory 25, no. 1 (February 2009): 298–301. http://dx.doi.org/10.1017/s0266466608090567.

Abstract:
It is well known that in standard linear regression models with independent and identically distributed data and homoskedasticity, adding “irrelevant regressors” hurts (asymptotic) efficiency unless such irrelevant regressors are orthogonal to the remaining regressors. But we have found that under (conditional) heteroskedasticity “irrelevant regressors” can always be found such that one can achieve the asymptotic variance of the generalized least squares estimator by adding the “irrelevant regressors” to the model.
6

Kew, Hsein, and David Harris. "HETEROSKEDASTICITY-ROBUST TESTING FOR A FRACTIONAL UNIT ROOT." Econometric Theory 25, no. 6 (December 2009): 1734–53. http://dx.doi.org/10.1017/s0266466609990314.

Abstract:
This paper shows how fractional unit root tests originally derived under stationarity can be made robust to heteroskedasticity. This is done by using existing tests nested in a regression framework and then implementing these tests using White’s heteroskedasticity consistent standard errors (White, 1980). We show this approach is effective both asymptotically and in finite samples. We also provide some evidence on the asymptotic local power of different implementations of the tests, under both homoskedasticity and heteroskedasticity.
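
The key mechanism here, replacing conventional OLS standard errors with White's heteroskedasticity-consistent covariance in a test regression, is easy to illustrate. The snippet below is a generic sketch on simulated data, not the paper's fractional unit root regression; the data-generating process is an assumption.

```python
# Generic sketch: the same OLS fit with conventional and with White (1980)
# heteroskedasticity-consistent (HC0) standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = 0.5 * x + rng.normal(scale=1 + np.abs(x))     # heteroskedastic errors

X = sm.add_constant(x)
conventional = sm.OLS(y, X).fit()                 # covariance assumes homoskedasticity
robust = sm.OLS(y, X).fit(cov_type="HC0")         # White's HC covariance

print("conventional t-stat:", round(conventional.tvalues[1], 2))
print("White-robust t-stat:", round(robust.tvalues[1], 2))
```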
7

Smeekes, Stephan, and A. M. Robert Taylor. "BOOTSTRAP UNION TESTS FOR UNIT ROOTS IN THE PRESENCE OF NONSTATIONARY VOLATILITY." Econometric Theory 28, no. 2 (September 13, 2011): 422–56. http://dx.doi.org/10.1017/s0266466611000387.

Abstract:
Three important issues surround testing for a unit root in practice: uncertainty as to whether or not a linear deterministic trend is present in the data; uncertainty as to whether the initial condition of the process is (asymptotically) negligible or not, and the possible presence of nonstationary volatility in the data. Assuming homoskedasticity, Harvey, Leybourne, and Taylor (2011, Journal of Econometrics, forthcoming) propose decision rules based on a four-way union of rejections of quasi-differenced (QD) and ordinary least squares (OLS) detrended tests, both with and without a linear trend, to deal with the first two problems. In this paper we first discuss, again under homoskedasticity, how these union tests may be validly bootstrapped using the sieve bootstrap principle combined with either the independent and identically distributed (i.i.d.) or wild bootstrap resampling schemes. This serves to highlight the complications that arise when attempting to bootstrap the union tests. We then demonstrate that in the presence of nonstationary volatility the union test statistics have limit distributions that depend on the form of the volatility process, making tests based on the standard asymptotic critical values or, indeed, the i.i.d. bootstrap principle invalid. We show that wild bootstrap union tests are, however, asymptotically valid in the presence of nonstationary volatility. The wild bootstrap union tests therefore allow for a joint treatment of all three of the aforementioned issues in practice.
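
For readers unfamiliar with the wild bootstrap scheme the authors rely on, here is a minimal sketch of the idea in a plain regression setting: residuals are flipped with Rademacher signs, so each observation's squared residual, and hence any nonstationary volatility pattern, is preserved. This only illustrates the resampling principle on assumed data; it is not the paper's union-of-rejections unit root procedure.

```python
# Minimal sketch of the wild bootstrap (Rademacher multipliers) for a t-test of
# H0: slope = 0 when error volatility is non-constant. Assumed DGP, not the paper's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, B = 200, 999
x = rng.normal(size=n)
y = 1.0 + 0.4 * x + rng.normal(scale=1 + x**2)    # non-constant volatility

X = sm.add_constant(x)
t_obs = sm.OLS(y, X).fit(cov_type="HC0").tvalues[1]

# Impose the null (slope = 0) by regressing y on the constant only.
null_fit = sm.OLS(y, np.ones(n)).fit()
t_boot = np.empty(B)
for b in range(B):
    w = rng.choice([-1.0, 1.0], size=n)                    # Rademacher weights
    y_star = null_fit.fittedvalues + w * null_fit.resid    # keeps each squared residual
    t_boot[b] = sm.OLS(y_star, X).fit(cov_type="HC0").tvalues[1]

p_value = (np.abs(t_boot) >= np.abs(t_obs)).mean()
print(f"wild bootstrap p-value: {p_value:.3f}")
```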
8

Mubyarjati, Dhea Kurnia, Abdul Hoyyi, and Hasbi Yasin. "PEMODELAN REGRESI ROBUST S-ESTIMATOR UNTUK PENANGANAN PENCILAN MENGGUNAKAN GUI MATLAB (Studi Kasus : Faktor-Faktor yang Mempengaruhi Produksi Ikan Tangkap di Jawa Tengah)." [Robust regression modeling with the S-estimator for handling outliers using a MATLAB GUI (case study: factors affecting capture fisheries production in Central Java).] Jurnal Gaussian 8, no. 1 (February 28, 2019): 81–92. http://dx.doi.org/10.14710/j.gauss.v8i1.26616.

Abstract:
Multiple linear regression can be estimated using Ordinary Least Squares (OLS). Some classical assumptions must be fulfilled, namely normality, homoskedasticity, non-multicollinearity, and non-autocorrelation. However, violations of these assumptions can occur due to outliers, so the estimator obtained is biased and inefficient. In statistics, robust regression is one method that can be used to deal with outliers. Robust regression has several estimators; one of them, the scale estimator (S-estimator), is used in this research. The case for this research is fish production per district/city in Central Java in 2015-2016, which is influenced by the number of fishermen, number of vessels, number of trips, number of fishing units, and number of households/fishing companies. Estimation with Ordinary Least Squares violates the assumptions of normality, autocorrelation, and homoskedasticity; this occurs because there are outliers. Based on the t-test at the 5% significance level, it can be concluded that several predictor variables, namely the number of fishermen, the number of ships, the number of trips, and the number of fishing units, have a significant effect on fish production. The influence of the predictor variables on fish production is 88.006%, and the MSE value is 7109.519. GUI Matlab is a program for S-estimator robust regression that makes it easier for users to do the calculations. Keywords: Ordinary Least Squares (OLS), Outliers, Robust Regression, Fish Production, GUI Matlab.
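
Statsmodels does not implement S-estimators, so the sketch below substitutes an M-estimator (Huber) via `sm.RLM` purely to illustrate the broader point of the abstract: a robust fit resists outliers that drag OLS off target. The data and the choice of Huber's norm are assumptions for illustration; the paper itself uses an S-estimator implemented in a MATLAB GUI.

```python
# Illustration only: robust regression down-weights outliers that bias OLS.
# Note this is M-estimation (Huber), not the paper's S-estimator.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, size=100)
y = 2.0 + 1.5 * x + rng.normal(size=100)
y[:5] += 40                                        # inject a few gross outliers

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                           # estimates dragged by the outliers
robust = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()

print("OLS (const, slope)   :", ols.params.round(2))
print("robust (const, slope):", robust.params.round(2))   # closer to the true (2.0, 1.5)
```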
9

Hsiao, Cheng, and Qi Li. "A CONSISTENT TEST FOR CONDITIONAL HETEROSKEDASTICITY IN TIME-SERIES REGRESSION MODELS." Econometric Theory 17, no. 1 (February 2001): 188–221. http://dx.doi.org/10.1017/s0266466601171069.

Abstract:
We show that the standard consistent test for testing the null of conditional homoskedasticity (against conditional heteroskedasticity) can be generalized to a time-series regression model with weakly dependent data and with generated regressors. The test statistic is shown to have an asymptotic normal distribution under the null hypothesis of conditional homoskedastic error. We also discuss extension of our test to the case of testing the null of a parametrically specified conditional variance. We advocate using a bootstrap method to overcome the issue of slow convergence of this test statistic to its limiting distribution.
10

Attfield, C. L. F. "A Bartlett adjustment to the likelihood ratio test for homoskedasticity in the linear model." Economics Letters 37, no. 2 (October 1991): 119–23. http://dx.doi.org/10.1016/0165-1765(91)90118-5.

11

Moussa, Richard Kouamé. "Heteroskedasticity in One-Way Error Component Probit Models." Econometrics 7, no. 3 (August 11, 2019): 35. http://dx.doi.org/10.3390/econometrics7030035.

Abstract:
This paper introduces an estimation procedure for a random effects probit model in presence of heteroskedasticity and a likelihood ratio test for homoskedasticity. The cases where the heteroskedasticity is due to individual effects or idiosyncratic errors or both are analyzed. Monte Carlo simulations show that the test performs well in the case of high degree of heteroskedasticity. Furthermore, the power of the test increases with larger individual and time dimensions. The robustness analysis shows that applying the wrong approach may generate misleading results except for the case where both individual effects and idiosyncratic errors are modelled as heteroskedastic.
12

Robinson, P. M. "INFERENCE ON NONPARAMETRICALLY TRENDING TIME SERIES WITH FRACTIONAL ERRORS." Econometric Theory 25, no. 6 (December 2009): 1716–33. http://dx.doi.org/10.1017/s0266466609990302.

Abstract:
The central limit theorem for nonparametric kernel estimates of a smooth trend, with linearly generated errors, indicates asymptotic independence and homoskedasticity across fixed points, irrespective of whether disturbances have short memory, long memory, or antipersistence. However, the asymptotic variance depends on the kernel function in a way that varies across these three circumstances, and in the latter two it involves a double integral that cannot necessarily be evaluated in closed form. For a particular class of kernels, we obtain analytic formulas. We discuss extensions to more general settings, including ones involving possible cross-sectional or spatial dependence.
13

Cattaneo, Matias D., Michael Jansson, and Whitney K. Newey. "ALTERNATIVE ASYMPTOTICS AND THE PARTIALLY LINEAR MODEL WITH MANY REGRESSORS." Econometric Theory 34, no. 2 (October 19, 2016): 277–301. http://dx.doi.org/10.1017/s026646661600013x.

Abstract:
Many empirical studies estimate the structural effect of some variable on an outcome of interest while allowing for many covariates. We present inference methods that account for many covariates. The methods are based on asymptotics where the number of covariates grows as fast as the sample size. We find a limiting normal distribution with variance that is larger than the standard one. We also find that with homoskedasticity this larger variance can be accounted for by using degrees-of-freedom-adjusted standard errors. We link this asymptotic theory to previous results for many instruments and for small bandwidth(s) distributional approximations.
14

Shin, Myungho, Unkyung No, and Sehee Hong. "Comparing the Robustness of Stepwise Mixture Modeling With Continuous Nonnormal Distal Outcomes." Educational and Psychological Measurement 79, no. 6 (April 12, 2019): 1156–83. http://dx.doi.org/10.1177/0013164419839770.

Abstract:
The present study aims to compare the robustness under various conditions of latent class analysis mixture modeling approaches that deal with auxiliary distal outcomes. Monte Carlo simulations were employed to test the performance of four approaches recommended by previous simulation studies: maximum likelihood (ML) assuming homoskedasticity (ML_E), ML assuming heteroskedasticity (ML_U), BCH, and LTB. For all investigated simulation conditions, the BCH approach yielded the most unbiased estimates of class-specific distal outcome means. This study has implications for researchers looking to apply recommended latent class analysis mixture modeling approaches in that nonnormality, which has not been fully considered in previous studies, was taken into account to address the distributional form of distal outcomes.
15

Curto, José Dias. "To keep faith with homoskedasticity or to go back to heteroskedasticity? The case of FATANG stocks." Nonlinear Dynamics 104, no. 4 (May 28, 2021): 4117–47. http://dx.doi.org/10.1007/s11071-021-06535-8.

16

Jun, Sung Jae, and Joris Pinkse. "TESTING UNDER WEAK IDENTIFICATION WITH CONDITIONAL MOMENT RESTRICTIONS." Econometric Theory 28, no. 6 (May 21, 2012): 1229–82. http://dx.doi.org/10.1017/s0266466612000138.

Abstract:
We propose a semiparametric test for the value of coefficients in models with conditional moment restrictions that has correct size regardless of identification strength. The test is in essence an Anderson-Rubin (AR) test using nonparametrically estimated instruments to which we apply a standard error correction. We show that the test is (1) always size-correct, (2) consistent when identification is not too weak, and (3) asymptotically equivalent to an infeasible AR test when identification is sufficiently strong. We moreover prove that under homoskedasticity and strong identification our test has a limiting noncentral chi-square distribution under a sequence of local alternatives, where the noncentrality parameter is given by a quadratic form of the inverse of the semiparametric efficiency bound.
17

Chang, Dongfeng, and Apostolos Serletis. "THE DEMAND FOR LIQUID ASSETS: EVIDENCE FROM THE MINFLEX LAURENT DEMAND SYSTEM WITH CONDITIONALLY HETEROSKEDASTIC ERRORS." Macroeconomic Dynamics 23, no. 07 (March 16, 2018): 2941–58. http://dx.doi.org/10.1017/s1365100517001006.

Abstract:
We investigate the demand for money and the degree of substitutability among monetary assets in the United States using the generalized Leontief and the Minflex Laurent (ML) models as suggested by Serletis and Shahmoradi (2007). In doing so, we merge the demand systems literature with the recent financial econometrics literature, relaxing the homoskedasticity assumption and instead assuming that the covariance matrix of the errors of flexible demand systems is time-varying. We also pay explicit attention to theoretical regularity, treating the curvature property as a maintained hypothesis. Our findings indicate that only the curvature constrained ML model with a Baba, Engle, Kraft, and Kroner (BEKK) specification for the conditional covariance matrix is able to generate inference consistent with theoretical regularity.
18

Wang, Wenjie, and Firmin Doko Tchatoka. "On Bootstrap inconsistency and Bonferroni-based size-correction for the subset Anderson–Rubin test under conditional homoskedasticity." Journal of Econometrics 207, no. 1 (November 2018): 188–211. http://dx.doi.org/10.1016/j.jeconom.2018.07.003.

19

Marshall, P., T. Szikszai, V. LeMay, and A. Kozak. "Testing the distributional assumptions of least squares linear regression." Forestry Chronicle 71, no. 2 (April 1, 1995): 213–18. http://dx.doi.org/10.5558/tfc71213-2.

Abstract:
The error terms in least squares linear regression are assumed to be normally distributed with equal variance (homoskedastic), and independent of one another. If any of these distributional assumptions are violated, several of the desirable properties of a least squares fit may not hold. A variety of statistical tests of the assumptions is available. The following are recommended for reasons of ease of use and discriminating power: the K2 test for testing for non-normality, either the Durbin-Watson test or the Q-test for testing for autocorrelation, and either Szroeter's or White's test for testing for heteroskedasticity. The assumptions should be tested in this order; violating one of the assumptions may invalidate the results of subsequent tests. A microcomputer-based software package for least squares linear regression that incorporates the above tests is introduced. Key words: normality, homoskedasticity, independence
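
A rough Python counterpart of the recommended checklist is sketched below: D'Agostino's K² test for normality, the Durbin-Watson statistic for autocorrelation, and White's test for heteroskedasticity. The regression and data are assumed for illustration, and Szroeter's test and the Q-test are omitted because they are not readily available in statsmodels/scipy.

```python
# Sketch of residual checks on an assumed, well-behaved regression:
# K^2 (D'Agostino-Pearson) for normality, Durbin-Watson for autocorrelation,
# and White's test for heteroskedasticity.
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(4)
x = rng.normal(size=150)
y = 1.0 + 2.0 * x + rng.normal(size=150)

X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid

k2_stat, k2_pval = stats.normaltest(resid)      # D'Agostino's K^2
dw = durbin_watson(resid)                       # values near 2 suggest no autocorrelation
_, white_pval, _, _ = het_white(resid, X)

print(f"K^2 p = {k2_pval:.3f}, Durbin-Watson = {dw:.2f}, White p = {white_pval:.3f}")
```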
20

Rust, Roland T. "Flexible Regression." Journal of Marketing Research 25, no. 1 (February 1988): 10–24. http://dx.doi.org/10.1177/002224378802500102.

Abstract:
Marketing researchers who want to do multiple regression often have data in which some of the standard regression assumptions are violated. Flexible regression is a new method for performing a nonparametric multiple regression while relaxing several of the standard assumptions of regression. In particular the assumptions of linearity, normal errors, and homoskedasticity are relaxed. The approach is based on nonparametric density estimation, which results in a more synergistic and less parametrically constrained method of analysis. Asymptotic properties of estimators are explored and necessary conditions are established for the rejection of significance tests that correspond to the major tests of regression. In addition, a necessary condition for rejection of a significance test is provided to determine whether or not to use flexible regression instead of conventional multiple regression. The advantages of the method are illustrated with several examples.
21

Mechenov, A. S. "Least distance method for a confluent model with homoskedasticity in the matrix columns and the right-hand side." Computational Mathematics and Modeling 11, no. 3 (July 2000): 299–304. http://dx.doi.org/10.1007/bf02361135.

22

Guggenberger, Patrik, Frank Kleibergen, and Sophocles Mavroeidis. "A more powerful subvector Anderson Rubin test in linear instrumental variables regression." Quantitative Economics 10, no. 2 (2019): 487–526. http://dx.doi.org/10.3982/qe1116.

Abstract:
We study subvector inference in the linear instrumental variables model assuming homoskedasticity but allowing for weak instruments. The subvector Anderson and Rubin (1949) test that uses chi square critical values with degrees of freedom reduced by the number of parameters not under test, proposed by Guggenberger, Kleibergen, Mavroeidis, and Chen (2012), controls size but is generally conservative. We propose a conditional subvector Anderson and Rubin test that uses data‐dependent critical values that adapt to the strength of identification of the parameters not under test. This test has correct size and strictly higher power than the subvector Anderson and Rubin test by Guggenberger et al. (2012). We provide tables with conditional critical values so that the new test is quick and easy to use. Application of our method to a model of risk preferences in development economics shows that it can strengthen empirical conclusions in practice.
23

Bărbulescu, Alina, and Cristian Ștefan Dumitriu. "On the Connection between the GEP Performances and the Time Series Properties." Mathematics 9, no. 16 (August 5, 2021): 1853. http://dx.doi.org/10.3390/math9161853.

Abstract:
Artificial intelligence (AI) methods are interesting alternatives to classical approaches for modeling financial time series since they relax the assumptions imposed on the data generating process by the parametric models and do not impose any constraint on the model’s functional form. Even though many studies have employed these techniques for modeling financial time series, the connection of the models’ performances with the statistical characteristics of the data series has not yet been investigated. Therefore, this research aims to study the performances of Gene Expression Programming (GEP) for modeling monthly and weekly financial series that present trend and/or seasonality and after the removal of each component. It is shown that series normality and homoskedasticity do not influence the models’ quality. The trend removal increases the models’ performance, whereas the seasonality elimination results in diminishing the goodness of fit. Comparisons with fitted ARIMA models are also provided.
24

Petersen, Trond. "Multiplicative Models For Continuous Dependent Variables: Estimation on Unlogged versus Logged Form." Sociological Methodology 47, no. 1 (August 2017): 113–64. http://dx.doi.org/10.1177/0081175017730108.

Abstract:
In regression analysis with a continuous and positive dependent variable, a multiplicative relationship between the unlogged dependent variable and the independent variables is often specified. It can then be estimated on its unlogged or logged form. The two procedures may yield major differences in estimates, even opposite signs. The reason is that estimation on the unlogged form yields coefficients for the relative arithmetic mean of the unlogged dependent variable, whereas estimation on the logged form gives coefficients for the relative geometric mean for the unlogged dependent variable (or for absolute differences in the arithmetic mean of the logged dependent variable). Estimated coefficients from the two forms may therefore vary widely, because of their different foci, relative arithmetic versus relative geometric means. The first goal of this article is to explain why major divergencies in coefficients can occur. Although well understood in the statistical literature, this is not widely understood in sociological research, and it is hence of significant practical interest. The second goal is to derive conditions under which divergencies will not occur, where estimation on the logged form will give unbiased estimators for relative arithmetic means. First, it derives the necessary and sufficient conditions for when estimation on the logged form will give unbiased estimators for the parameters for the relative arithmetic mean. This requires not only that there is arithmetic mean independence of the unlogged error term but that there is also geometric mean independence. Second, it shows that statistical independence of the error terms on regressors implies that there is both arithmetic and geometric mean independence for the error terms, and it is hence a sufficient condition for absence of bias. Third, it shows that although statistical independence is a sufficient condition, it is not a necessary one for lack of bias. Fourth, it demonstrates that homoskedasticity of error terms is neither a necessary nor a sufficient condition for absence of bias. Fifth, it shows that in the semi-logarithmic specification, for a logged error term with the same qualitative distributional shape at each value of independent variables (e.g., normal), arithmetic mean independence, but heteroskedasticity, estimation on the logged form will give biased estimators for the parameters for the arithmetic mean (whereas with homoskedasticity, and for this case thus statistical independence, estimators are unbiased, from the second result above).
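
The core point, that the logged and unlogged forms target geometric and arithmetic means respectively and can therefore give different slopes under heteroskedastic multiplicative errors, can be reproduced in a few lines. The simulation below is an assumed illustration (binary regressor, log-normal errors whose variance depends on the regressor); the unlogged form is estimated here by Poisson quasi-maximum likelihood, which is just one convenient way to fit E[y|x] = exp(Xb).

```python
# Assumed illustration: OLS on log(y) recovers the geometric-mean slope (0.8),
# while fitting E[y|x] = exp(Xb) directly targets the arithmetic mean, whose slope
# also absorbs the x-dependent variance of the multiplicative error.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 50_000
x = rng.binomial(1, 0.5, size=n)
sigma = np.where(x == 1, 1.0, 0.3)                 # heteroskedastic log-error
log_y = 0.5 + 0.8 * x + rng.normal(scale=sigma)
y = np.exp(log_y)

X = sm.add_constant(x)
logged = sm.OLS(np.log(y), X).fit()
unlogged = sm.GLM(y, X, family=sm.families.Poisson()).fit()   # Poisson QML for exp(Xb)

print("logged-form slope  :", round(logged.params[1], 3))     # ~0.80
print("unlogged-form slope:", round(unlogged.params[1], 3))   # ~0.80 + (1.0**2 - 0.3**2) / 2
```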
25

MAZIYYA, PUTU AYU, I. KOMANG GDE SUKARSA, and NI MADE ASIH. "MENGATASI HETEROSKEDASTISITAS PADA REGRESI DENGAN MENGGUNAKAN WEIGHTED LEAST SQUARE." [Overcoming heteroscedasticity in regression using Weighted Least Squares.] E-Jurnal Matematika 4, no. 1 (January 30, 2015): 20. http://dx.doi.org/10.24843/mtk.2015.v04.i01.p083.

Abstract:
In regression analysis we need an estimation method whose parameter estimates have the BLUE property. Certain assumptions must be fulfilled; one of them is homoscedasticity, the condition that the error variance is constant. Violation of the homoscedasticity assumption is called heteroscedasticity. Under heteroscedasticity the OLS estimators remain unbiased, but their variance is no longer efficient. We therefore need a method to solve this problem, such as Weighted Least Squares (WLS). The purpose of this study is to find out how to overcome heteroscedasticity in regression with WLS. The steps of this research were to run the OLS analysis, test whether a heteroscedasticity problem exists using the Breusch-Pagan-Godfrey (BPG) method, repair the initial model by weighting the data with an appropriate multiplier factor, re-apply the OLS procedure to the weighted data, and finally re-test with the BPG heteroscedasticity method, so that the resulting model fulfills the homoscedasticity assumption. The estimates indicate that the WLS method can resolve the heteroscedasticity, with appropriate weighting factors chosen from the observed distribution pattern of the data.
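
The workflow in this abstract (OLS, a Breusch-Pagan-Godfrey check, weighting, WLS, and a re-check) maps directly onto statsmodels. The sketch below uses assumed data in which the error standard deviation is proportional to the regressor, so the natural weights are 1/x²; that weighting rule is part of the assumption, not a general recipe.

```python
# Assumed example of the OLS -> BPG test -> WLS -> re-test workflow.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(6)
x = rng.uniform(1, 10, size=400)
y = 3 + 2 * x + rng.normal(scale=x)               # error s.d. proportional to x

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
print("BPG p-value before:", round(het_breuschpagan(ols.resid, X)[1], 4))

w = 1.0 / x**2                                    # weights = 1 / Var(error), assuming Var ∝ x^2
wls = sm.WLS(y, X, weights=w).fit()

# Re-test on the weighted (transformed) residuals, which should look homoskedastic.
weighted_resid = (y - wls.fittedvalues) * np.sqrt(w)
print("BPG p-value after :", round(het_breuschpagan(weighted_resid, X)[1], 4))
```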
26

Oberfichtner, Michael, and Harald Tauchmann. "Stacked linear regression analysis to facilitate testing of hypotheses across OLS regressions." Stata Journal: Promoting communications on statistics and Stata 21, no. 2 (June 2021): 411–29. http://dx.doi.org/10.1177/1536867x211025801.

Abstract:
In empirical work, researchers frequently test hypotheses of parallel form in several regressions, which raises concerns about multiple testing. One way to address the multiple-testing issue is to jointly test the hypotheses (for example, Pei, Pischke, and Schwandt [2019, Journal of Business & Economic Statistics 37: 205–216] and Lee and Lemieux [2010, Journal of Economic Literature 48: 281–355]). While the existing commands suest (Weesie, 1999, Stata Technical Bulletin Reprints 9: 231–248) and mvreg enable Stata users to follow this approach, both are limited in several dimensions. For instance, mvreg assumes homoskedasticity and uncorrelatedness across sampling units, and neither command is designed to be used with panel data. In this article, we introduce the new community-contributed command stackreg, which overcomes the aforementioned limitations and allows for some settings and features that go beyond the capabilities of the existing commands. To achieve this, stackreg runs an ordinary least-squares regression in which the regression equations are stacked as described, for instance, in Wooldridge (2010, Econometric Analysis of Cross Section and Panel Data, p. 166–173, MIT Press) and applies cluster–robust variance–covariance estimation.
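
The stacking idea itself is not Stata-specific. As a rough Python analogue (assumed data; this is not the stackreg command), two equations can be stacked into one long regression with equation-specific coefficients, the covariance matrix clustered on the sampling unit, and the parallel hypotheses tested jointly with a Wald test.

```python
# Assumed sketch of stacked estimation with cluster-robust inference, mirroring
# the idea behind stackreg (this is not the Stata command itself).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)
ids = np.arange(n)
y1 = 0.5 * x + rng.normal(size=n)                 # outcome of equation 1
y2 = 0.6 * x + rng.normal(size=n)                 # outcome of equation 2

# Stack: one row per (unit, equation), with equation-specific intercepts and slopes.
y = np.concatenate([y1, y2])
d1 = np.concatenate([np.ones(n), np.zeros(n)])    # equation-1 indicator
d2 = 1.0 - d1
X = np.column_stack([d1, d2, d1 * x, d2 * x])     # columns: [a1, a2, b1, b2]
groups = np.concatenate([ids, ids])               # cluster on the sampling unit

stacked = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": groups})

# Joint Wald test of the parallel hypotheses b1 = 0 and b2 = 0.
R = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
print(stacked.wald_test(R))
```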
27

Baltagi, Badi H. "Pooling Under Misspecification: Some Monte Carlo Evidence on the Kmenta and the Error Components Techniques." Econometric Theory 2, no. 3 (December 1986): 429–40. http://dx.doi.org/10.1017/s0266466600011695.

Abstract:
Two different methods for pooling time series of cross section data are used by economists. The first method, described by Kmenta, is based on the idea that pooled time series of cross sections are plagued with both heteroskedasticity and serial correlation. The second method, made popular by Balestra and Nerlove, is based on the error components procedure where the disturbance term is decomposed into a cross-section effect, a time-period effect, and a remainder. Although these two techniques can be easily implemented, they differ in the assumptions imposed on the disturbances and lead to different estimators of the regression coefficients. Not knowing what the true data generating process is, this article compares the performance of these two pooling techniques under two simple settings. The first is when the true disturbances have an error components structure and the second is where they are heteroskedastic and time-wise autocorrelated. First, the strengths and weaknesses of the two techniques are discussed. Next, the loss from applying the wrong estimator is evaluated by means of Monte Carlo experiments. Finally, Bartlett's test for homoskedasticity and the generalized Durbin-Watson test for serial correlation are recommended for distinguishing between the two error structures underlying the two pooling techniques.
28

Kuhe, DA, and J. Akor. "An Empirical Investigation of the Random Walk Hypothesis in the Nigerian Stock Market." NIGERIAN ANNALS OF PURE AND APPLIED SCIENCES 4, no. 1 (August 21, 2021): 62–77. http://dx.doi.org/10.46912/napas.229.

Abstract:
The Random Walk Hypothesis (RWH) states that stock prices move randomly in the stock market without following any regular or particular pattern, and as such historical information contained in the past prices of stocks cannot be used to predict current or future stock prices. Hence, stock prices are unpredictable, and investors cannot use any available information in the market to manipulate the market and make abnormal profits. This study empirically examines the random walk hypothesis in the Nigerian stock market using the daily quotations of the Nigerian stock exchange from 2nd January, 1998 to 31st December, 2019. The study employs the Augmented Dickey-Fuller unit root test, the random walk model, the Ljung-Box Q-statistic test for serial dependence, the runs test of randomness, and the robust variance ratio test as methods of analysis. The results rejected the null hypotheses of a unit root and a random walk in the stock returns. The null hypothesis of no serial correlation in the residuals of stock returns was also rejected, indicating the presence of serial correlation/autocorrelation in the residual series. The result of the runs test rejected the null hypothesis of randomness in the Nigerian stock returns. The results of the variance ratio test under homoskedasticity and heteroskedasticity assumptions both strongly rejected the null hypothesis of a random walk for both the joint tests and the tests of individual periods. Based on the results of the four tests applied in this study, it is concluded that the Nigerian daily stock returns under the period of investigation do not follow a random walk, and hence the null hypothesis of a random walk is rejected. The results further revealed that the Nigerian stock market is weak-form inefficient, indicating that prices in the Nigerian stock market are predictable, dependent, consistently mispriced, inflated, liable to arbitraging, and left unprotected against speculation and market manipulation. The study provides some policy recommendations.
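
For orientation, the battery of tests the study applies is available in Python; the sketch below runs them on a simulated random walk, so none of the tests should reject. The data are assumed, and the variance ratio test comes from the third-party arch package, which is assumed to be installed.

```python
# Assumed data: the abstract's test battery applied to a simulated random walk.
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.sandbox.stats.runs import runstest_1samp
from arch.unitroot import VarianceRatio             # third-party arch package

rng = np.random.default_rng(8)
returns = rng.normal(0, 0.01, size=2000)            # i.i.d. increments
log_prices = np.log(100) + np.cumsum(returns)       # hence a random walk in logs

adf_stat, adf_pval, *_ = adfuller(log_prices)
lb_pval = acorr_ljungbox(returns, lags=[10], return_df=True)["lb_pvalue"].iloc[0]
_, runs_pval = runstest_1samp(returns)
vr_pval = VarianceRatio(log_prices, lags=2).pvalue   # heteroskedasticity-robust by default

print(f"ADF p = {adf_pval:.3f}, Ljung-Box p = {lb_pval:.3f}, "
      f"runs p = {runs_pval:.3f}, variance ratio p = {vr_pval:.3f}")
```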
29

Doğan, Osman, and Suleyman Taspinar. "Testing Homoskedasticity in Cross-sectional Spatial Autoregressive Models." Pamukkale University Journal of Social Sciences Institute, March 29, 2021. http://dx.doi.org/10.30794/pausbed.858658.

30

Baltagi, Badi H., Alain Pirotte, and Zhenlin Yang. "Diagnostic tests for homoskedasticity in spatial cross-sectional or panel models." Journal of Econometrics, December 2020. http://dx.doi.org/10.1016/j.jeconom.2020.10.002.

31

Baltagi, Badi H., Georges Bresson, and Alain Pirotte. "Joint LM Test for Homoskedasticity in a One-Way Error Component Model." SSRN Electronic Journal, 2005. http://dx.doi.org/10.2139/ssrn.1815223.

32

Li, Jing, and Walter Enders. "Flexible Fourier form for volatility breaks." Studies in Nonlinear Dynamics & Econometrics 22, no. 1 (September 26, 2017). http://dx.doi.org/10.1515/snde-2016-0039.

Abstract:
This paper proposes a class of models: TRIARCH/TRIGARCH models that account for structural breaks in conditional variance using a variant of the flexible Fourier form. Based on the new model, three likelihood multiplier tests are proposed for the null hypotheses of (1) homoskedasticity in the presence of unknown structural breaks; (2) no structural changes in conditional variance; and (3) Integrated GARCH effect. The in-sample fit and out-of-sample forecasts of the TRIGARCH model and GARCH model are compared by simulations. We apply the new models to the SP500 returns. Our finding indicates level shifts in variance, and therefore, the almost integration indicated by the GARCH(1,1) model may be spurious.
33

Kyriacou, Maria, Peter C. B. Phillips, and Francesca Rossi. "CONTINUOUSLY UPDATED INDIRECT INFERENCE IN HETEROSKEDASTIC SPATIAL MODELS." Econometric Theory, September 22, 2021, 1–39. http://dx.doi.org/10.1017/s0266466621000384.

Abstract:
Spatial units typically vary over many of their characteristics, introducing potential unobserved heterogeneity which invalidates commonly used homoskedasticity conditions. In the presence of unobserved heteroskedasticity, methods based on the quasi-likelihood function generally produce inconsistent estimates of both the spatial parameter and the coefficients of the exogenous regressors. A robust generalized method of moments estimator as well as a modified likelihood method have been proposed in the literature to address this issue. The present paper constructs an alternative indirect inference (II) approach which relies on a simple ordinary least squares procedure as its starting point. Heteroskedasticity is accommodated by utilizing a new version of continuous updating that is applied within the II procedure to take account of the parameterization of the variance–covariance matrix of the disturbances. Finite-sample performance of the new estimator is assessed in a Monte Carlo study. The approach is implemented in an empirical application to house price data in the Boston area, where it is found that spatial effects in house price determination are much more significant under robustification to heterogeneity in the equation errors.
34

Giacalone, Massimiliano. "Optimal forecasting accuracy using Lp-norm combination." METRON, August 7, 2021. http://dx.doi.org/10.1007/s40300-021-00218-5.

Abstract:
A well-known result in statistics is that a linear combination of two point forecasts has a smaller Mean Square Error (MSE) than the two competing forecasts themselves (Bates and Granger in J Oper Res Soc 20(4):451–468, 1969). The only case in which no improvements are possible is when one of the single forecasts is already the optimal one in terms of MSE. The kinds of combination methods are various, ranging from the simple average (SA) to more robust methods such as those based on the median, the Trimmed Average (TA), Least Absolute Deviations, or optimization techniques (Stock and Watson in J Forecast 23(6):405–430, 2004). Standard regression-based combination approaches may fail to give a realistic result if the forecasts show high collinearity in several situations or the data distribution is not Gaussian. Therefore, we propose a forecast combination method based on Lp-norm estimators. These estimators are based on the Generalized Error Distribution, which is a generalization of the Gaussian distribution, and they can be used to solve the cases of multicollinearity and non-Gaussianity. In order to demonstrate the potential of Lp-norms, we conducted a simulated and an empirical study, comparing its performance with other standard regression-based combination approaches. We carried out the simulation study with different values of the autoregressive parameter, by alternating heteroskedasticity and homoskedasticity. On the other hand, the real data application is based on the daily Bitfinex historical series of bitcoins (2014–2020); the 25 historical series relating to companies included in the Dow Jones were subsequently considered. We showed that, by combining different GARCH and ARIMA models, assuming both Gaussian and non-Gaussian distributions, the Lp-norm scheme improves the forecasting accuracy with respect to other regression-based combination procedures.
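
To make the combination idea concrete, the sketch below fits combination weights by minimizing an Lp-norm of the combination residuals (p = 2 is OLS, p = 1 is LAD, intermediate p interpolates). It is a generic illustration on assumed data, not the paper's GED-based procedure, and the choice p = 1.5 is arbitrary.

```python
# Assumed illustration: combine two point forecasts with weights chosen by
# minimizing the Lp-norm of the combination errors (here p = 1.5).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
n = 300
y = np.cumsum(rng.normal(size=n))                  # target series
f1 = y + rng.normal(scale=1.0, size=n)             # forecast with Gaussian errors
f2 = y + rng.standard_t(df=3, size=n)              # forecast with heavy-tailed errors

def lp_loss(beta, p):
    resid = y - (beta[0] + beta[1] * f1 + beta[2] * f2)
    return np.sum(np.abs(resid) ** p)

res = minimize(lp_loss, x0=np.zeros(3), args=(1.5,), method="Nelder-Mead")
combined = res.x[0] + res.x[1] * f1 + res.x[2] * f2

print("Lp weights         :", res.x.round(3))
print("MSE simple average :", round(np.mean((y - 0.5 * (f1 + f2)) ** 2), 3))
print("MSE Lp combination :", round(np.mean((y - combined) ** 2), 3))
```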