Academic literature on the topic 'Matrix regression'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Matrix regression.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Matrix regression"

1

Zhou, Hua, and Lexin Li. "Regularized matrix regression." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76, no. 2 (August 12, 2013): 463–83. http://dx.doi.org/10.1111/rssb.12031.

2

Viroli, Cinzia. "On matrix-variate regression analysis." Journal of Multivariate Analysis 111 (October 2012): 296–309. http://dx.doi.org/10.1016/j.jmva.2012.04.005.

3

Luo, Changtong, and Shao-Liang Zhang. "Parse-matrix evolution for symbolic regression." Engineering Applications of Artificial Intelligence 25, no. 6 (September 2012): 1182–93. http://dx.doi.org/10.1016/j.engappai.2012.05.015.

4

Koláček, Jan, and Ivana Horová. "Bandwidth matrix selectors for kernel regression." Computational Statistics 32, no. 3 (January 16, 2017): 1027–46. http://dx.doi.org/10.1007/s00180-017-0709-3.

5

Mukha, V. S. "The best polynomial multidimensional-matrix regression." Cybernetics and Systems Analysis 43, no. 3 (May 2007): 427–32. http://dx.doi.org/10.1007/s10559-007-0065-3.

6

Zhang, Jianguang, and Jianmin Jiang. "Rank-Optimized Logistic Matrix Regression toward Improved Matrix Data Classification." Neural Computation 30, no. 2 (February 2018): 505–25. http://dx.doi.org/10.1162/neco_a_01038.

Abstract:
While existing logistic regression suffers from overfitting and often fails in considering structural information, we propose a novel matrix-based logistic regression to overcome the weakness. In the proposed method, 2D matrices are directly used to learn two groups of parameter vectors along each dimension without vectorization, which allows the proposed method to fully exploit the underlying structural information embedded inside the 2D matrices. Further, we add a joint [Formula: see text]-norm on two parameter matrices, which are organized by aligning each group of parameter vectors in columns. This added co-regularization term has two roles—enhancing the effect of regularization and optimizing the rank during the learning process. With our proposed fast iterative solution, we carried out extensive experiments. The results show that in comparison to both the traditional tensor-based methods and the vector-based regression methods, our proposed solution achieves better performance for matrix data classifications.
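A hedged orientation to the model class sketched in the abstract above: the snippet below fits a generic low-rank bilinear logistic model in which the coefficient matrix is constrained to U V^T, so 2D inputs enter without vectorization. It is an illustrative sketch only, not the authors' algorithm, regularizer, or solver, and all names are made up.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bilinear_logistic_fit(Xs, ys, rank=2, lr=0.01, lam=0.1, iters=500, seed=0):
    # Minimal sketch: logit(y=1 | X) = <U V^T, X> with a rank-`rank` coefficient
    # matrix and plain ridge penalties; gradient descent on the logistic loss.
    rng = np.random.default_rng(seed)
    p, q = Xs[0].shape
    U = 0.01 * rng.standard_normal((p, rank))
    V = 0.01 * rng.standard_normal((q, rank))
    for _ in range(iters):
        gU, gV = lam * U, lam * V
        for X, y in zip(Xs, ys):
            err = sigmoid(np.sum((U @ V.T) * X)) - y   # scalar residual for this sample
            gU += err * (X @ V)                        # gradient of the logit w.r.t. U is X V
            gV += err * (X.T @ U)                      # gradient of the logit w.r.t. V is X^T U
        U -= lr * gU / len(Xs)
        V -= lr * gV / len(Xs)
    return U, V

The matrix structure enters only through the bilinear form; the paper's joint-norm co-regularization and rank optimization are not reproduced here.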
7

Zeebari, Zangin, B. M. Golam Kibria, and Ghazi Shukur. "Seemingly unrelated regressions with covariance matrix of cross-equation ridge regression residuals." Communications in Statistics - Theory and Methods 47, no. 20 (November 13, 2017): 5029–53. http://dx.doi.org/10.1080/03610926.2017.1383431.

8

Chitsaz, Shabnam, and S. Ejaz Ahmed. "Shrinkage estimation for the regression parameter matrix in multivariate regression model." Journal of Statistical Computation and Simulation 82, no. 2 (February 2012): 309–23. http://dx.doi.org/10.1080/00949655.2011.648938.

9

Chitsaz, S., and S. Ejaz Ahmed. "An Improved Estimation in Regression Parameter Matrix in Multivariate Regression Model." Communications in Statistics - Theory and Methods 41, no. 13-14 (July 2012): 2305–20. http://dx.doi.org/10.1080/03610926.2012.664672.

10

Turner, David L. "Matrix Calculator and Stepwise Interactive Regression Programs." American Statistician 41, no. 4 (November 1987): 329. http://dx.doi.org/10.2307/2684760.


Dissertations / Theses on the topic "Matrix regression"

1

Fischer, Manfred M., and Philipp Piribauer. "Model uncertainty in matrix exponential spatial growth regression models." WU Vienna University of Economics and Business, 2013. http://epub.wu.ac.at/4013/1/wp158.pdf.

Abstract:
This paper considers the problem of model uncertainty associated with variable selection and specification of the spatial weight matrix in spatial growth regression models in general and growth regression models based on the matrix exponential spatial specification in particular. A natural solution, supported by formal probabilistic reasoning, is the use of Bayesian model averaging which assigns probabilities on the model space and deals with model uncertainty by mixing over models, using the posterior model probabilities as weights. This paper proposes to adopt Bayesian information criterion model weights since they have computational advantages over fully Bayesian model weights. The approach is illustrated for both identifying model covariates and unveiling spatial structures present in pan-European growth data. (authors' abstract)
Series: Department of Economics Working Paper Series
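The Bayesian information criterion model weights mentioned in the abstract can be computed directly. A minimal sketch under the standard approximation with equal prior model probabilities (the function name and the example numbers are illustrative):

import numpy as np

def bic_model_weights(bics):
    # w_m is proportional to exp(-BIC_m / 2); subtracting the minimum BIC
    # first keeps the exponentials numerically stable.
    bics = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (bics - bics.min()))
    return w / w.sum()

# e.g. three candidate growth regressions with BIC 210.3, 212.1, 218.7:
# bic_model_weights([210.3, 212.1, 218.7])  ->  approx. [0.70, 0.29, 0.01]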
2

Piribauer, Philipp, and Manfred M. Fischer. "Model uncertainty in matrix exponential spatial growth regression models." Wiley-Blackwell, 2015. http://dx.doi.org/10.1111/gean.12057.

Abstract:
This paper considers the most important aspects of model uncertainty for spatial regression models, namely the appropriate spatial weight matrix to be employed and the appropriate explanatory variables. We focus on the spatial Durbin model (SDM) specification in this study that nests most models used in the regional growth literature, and develop a simple Bayesian model averaging approach that provides a unified and formal treatment of these aspects of model uncertainty for SDM growth models. The approach expands on the work by LeSage and Fischer (2008) by reducing the computational costs through the use of Bayesian information criterion model weights and a matrix exponential specification of the SDM model. The spatial Durbin matrix exponential model has theoretical and computational advantages over the spatial autoregressive specification due to the ease of inversion, differentiation and integration of the matrix exponential. In particular, the matrix exponential has a simple matrix determinant which vanishes for the case of a spatial weight matrix with a trace of zero (LeSage and Pace 2007). This allows for a larger domain of spatial growth regression models to be analysed with this approach, including models based on different classes of spatial weight matrices. The working of the approach is illustrated for the case of 32 potential determinants and three classes of spatial weight matrices (contiguity-based, k-nearest neighbor and distance-based spatial weight matrices), using a dataset of income per capita growth for 273 European regions. (authors' abstract)
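A short note on the identity behind the computational advantage described above (standard linear algebra, stated here for orientation): for the matrix exponential specification S(\alpha) = e^{\alpha W},

\det\!\left(e^{\alpha W}\right) = e^{\alpha \operatorname{tr}(W)} = 1 \quad \text{whenever } \operatorname{tr}(W) = 0,

so the log-determinant term of the likelihood drops out, whereas spatial autoregressive specifications require evaluating \log\det(I - \rho W).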
3

Li, Yihua M. Eng Massachusetts Institute of Technology. "Blind regression : understanding collaborative filtering from matrix completion to tensor completion." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105983.

Abstract:
Neighborhood-based Collaborative filtering (CF) methods have proven to be successful in practice and are widely applied in commercial recommendation systems. Yet theoretical understanding of their performance is lacking. In this work, we introduce a new framework of Blind Regression which assumes that there are latent features associated with input variables, and we observe outputs of some Lipschitz continuous function over those unobserved features. We apply our framework to the problem of matrix completion and give a nonparametric method which, similar to CF, combines the local estimates according to the distance between the neighbors. We use the sample variance of the difference in ratings between neighbors as the proximity of the distance. Through error analysis, we show that the minimum sample variance is a good proxy of the prediction error in the estimates. Experiments on real-world datasets suggest that our matrix completion algorithm outperforms classic user-user and item-item CF approaches. Finally, our framework easily extends to the setting of higher-order tensors and we present our algorithm for tensor completion. The result from a real-world application of image inpainting demonstrates that our method is competitive with the state-of-the-art tensor factorization approaches in terms of predictive performance.
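A minimal sketch in the spirit of the neighborhood estimator described in the abstract, using the sample variance of rating differences as an (inverse) proximity weight. This is an illustrative variant under simplifying assumptions, not the thesis's exact estimator; R is a user-by-item matrix with np.nan marking missing ratings.

import numpy as np

def predict_rating(R, u, i):
    num, den = 0.0, 0.0
    for v in range(R.shape[0]):
        if v == u or np.isnan(R[v, i]):
            continue
        common = ~np.isnan(R[u]) & ~np.isnan(R[v])   # items rated by both users
        common[i] = False
        if common.sum() < 2:
            continue
        diff = R[u, common] - R[v, common]
        w = 1.0 / (diff.var(ddof=1) + 1e-6)          # low variance -> close neighbor
        num += w * (R[v, i] + diff.mean())           # shift v's rating toward u's scale
        den += w
    return num / den if den > 0 else np.nanmean(R[:, i])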
4

Fallowfield, Jonathan Andrew. "The role of matrix metalloproteinase-13 in the regression of liver fibrosis." Thesis, University of Southampton, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.443059.

5

Albertson, K. V. "Pre-test estimation in a regression model with a mis-specified error covariance matrix." Thesis, University of Canterbury. Economics, 1993. http://hdl.handle.net/10092/4315.

Abstract:
This thesis considers some finite sample properties of a number of preliminary test (pre-test) estimators of the unknown parameters of a linear regression model that may have been mis-specified as a result of incorrectly assuming that the disturbance term has a scalar covariance matrix, and/or as a result of the exclusion of relevant regressors. The pre-test itself is a test for exact linear restrictions and is conducted using the usual Wald statistic, which provides a Uniformly Most Powerful Invariant test of the restrictions in a well specified model. The parameters to be estimated are the coefficient vector, the prediction vector (i.e. the expectation of the dependent variable conditional on the regressors), and the regression scale parameter. Note that while the problem of estimating the prediction vector is merely a special case of estimating the coefficient vector when the model is well specified, this is not the case when the model is mis-specified. The properties of each of these estimators in a well specified regression model have been examined in the literature, as have the effects of a number of different model mis-specifications, and we survey these results in Chapter Two. We will extend the existing literature by generalising the error covariance matrix in conjunction with allowing for possibly excluded regressors. To motivate the consideration of a nonscalar error covariance matrix in the context of a pre-test situation we briefly examine the literature on autoregressive and heteroscedastic error processes in Chapter Three. In Chapters Four, Five, Six, and Seven we derive the cumulative distribution function of the test statistic, and exact formulae for the bias and risk (under quadratic loss) of the unrestricted, restricted and pre-test estimators, in a model with a general error covariance matrix and possibly excluded relevant regressors. These formulae are data dependent and, to illustrate the results, are evaluated for a number of regression models and forms of error covariance matrix. In particular we determine the effects of autoregressive errors and heteroscedastic errors on each of the regression models under consideration. Our evaluations confirm the known result that the presence of a non scalar error covariance matrix introduces a distortion into the pre-test power function and we show the effects of this on the pre-test estimators. In addition to this we show that one effect of the mis-specification may be that the pre-test and restricted estimators may be strictly dominated by the corresponding unrestricted estimator even if there are no relevant regressors excluded from the model. If there are relevant regressors excluded from the model it appears that the additional mis-specification of the error covariance matrix has little qualitative impact unless the coefficients on the excluded regressors are small in magnitude or the excluded regressors are not correlated with the included regressors. As one of the effects of the mis-specification is to introduce a distortion into the pre-test power function, in Chapter Eight we consider the problem of determining the optimal critical value (under the criterion of minimax regret) for the pre-test when estimating the regression coefficient vector. 
We show that the mis-specification of the error covariance matrix may have a substantial impact on the optimal critical value chosen for the pre-test under this criterion, although, generally, the actual size of the pre-test is relatively unaffected by increasing degrees of mis-specification. Chapter Nine concludes this thesis and provides a summary of the results obtained in the earlier chapters. In addition, we outline some possible future research topics in this general area.
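For orientation, a minimal sketch of a classical pre-test estimator of the coefficient vector under the linear restrictions R b = r, written for the well-specified case with a scalar error covariance matrix (the thesis studies what happens when exactly that assumption fails). Names and the significance level are illustrative.

import numpy as np
from scipy import stats

def pretest_estimator(X, y, R, r, alpha=0.05):
    n, k = X.shape
    q = R.shape[0]
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y                         # unrestricted OLS estimate
    s2 = (y - X @ b) @ (y - X @ b) / (n - k)      # error variance estimate
    M = R @ XtX_inv @ R.T
    d = R @ b - r
    F = (d @ np.linalg.solve(M, d)) / (q * s2)    # Wald statistic in F form
    b_restricted = b - XtX_inv @ R.T @ np.linalg.solve(M, d)
    # keep the restricted estimate if the pre-test does not reject H0: R b = r
    return b_restricted if F <= stats.f.ppf(1 - alpha, q, n - k) else b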
6

Mei, Jiali. "Time series recovery and prediction with regression-enhanced nonnegative matrix factorization applied to electricity consumption." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS578/document.

Abstract:
We are interested in the recovery and prediction of multiple time series from partially observed and/or aggregated data. Motivated by applications in electricity network management, we investigate tools from several fields that can deal with such data issues. After examining kriging from spatio-temporal statistics and a hybrid method based on clustering individuals, we propose a general framework based on nonnegative matrix factorization. This framework takes advantage of the intrinsic correlation between the multivariate time series to greatly reduce the dimension of the parameter space. Once the estimation problem is formalized in the nonnegative matrix factorization framework, two extensions are proposed to improve the standard approach. The first extension takes into account the individual temporal autocorrelation of each time series, which increases the precision of the recovery. The second extension adds a regression layer into nonnegative matrix factorization; this allows exogenous variables known to be linked with electricity consumption to be used in estimation, makes the factors obtained by the method more interpretable, and further improves the recovery precision. Moreover, it makes the method applicable to prediction. We provide a theoretical analysis of the framework concerning the identifiability of the model and the convergence of the proposed algorithms. The performance of the proposed methods in recovering and forecasting time series is tested on several multivariate electricity consumption datasets at different aggregation levels.
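A minimal sketch of the baseline the thesis builds on: masked nonnegative matrix factorization used to recover missing entries of a nonnegative matrix V (np.nan marks unobserved values). The autocorrelation and regression extensions described in the abstract are not reproduced; names are illustrative.

import numpy as np

def nmf_recover(V, rank=5, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    mask = ~np.isnan(V)                      # observed entries
    Vf = np.where(mask, V, 0.0)
    W = rng.random((V.shape[0], rank)) + 1e-3
    H = rng.random((rank, V.shape[1])) + 1e-3
    for _ in range(iters):
        # multiplicative updates restricted to observed entries
        H *= (W.T @ Vf) / (W.T @ (mask * (W @ H)) + 1e-9)
        W *= (Vf @ H.T) / ((mask * (W @ H)) @ H.T + 1e-9)
    return W @ H                             # the low-rank reconstruction fills the gaps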
7

Bownds, Christopher D. "Updating the Navy's recruit quality matrix : an analysis of educational credentials and the success of first-term sailors /." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Mar%5FBownds.pdf.

8

Bogren, Patrik, and Isak Kristola. "Exploring the use of call stack depth limits to reduce regression testing costs." Thesis, Mittuniversitetet, Institutionen för data- och systemvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-43166.

Abstract:
Regression testing is performed after existing source code has been modified to verify that no new faults have been introduced by the changes. Test case selection can be used to reduce the effort of regression testing by selecting a smaller subset of the test suite for later execution. Several criteria and objectives can be used as constraints that should be satisfied by the selection process. One common criterion is function coverage, which can be represented by a coverage matrix that maps test cases to methods under test. The process of generating and evaluating these matrices can be very time consuming for large matrices since their complexity increases exponentially with the number of tests included. To the best of our knowledge, no techniques for reducing execution matrix size have been proposed. This thesis develops a matrix-reduction technique based on analysis of call stack data. It studies the effects of limiting the call stack depth in terms of coverage accuracy, matrix size, and generation costs. Further, it uses a tool that can instrument Java projects using Java’s instrumentation API to collect coverage information on open-source Java projects for varying depth limits of the call stack. Our results show that the stack depth limit can be significantly reduced while retaining high coverage and that matrix size can be decreased by up to 50%. The metric we used to indicate the difficulty of splitting up the matrix closely resembled the curve for coverage. However, we did not see any significant differences in execution time for lower depth limits.
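A minimal sketch of the data structure under study, assuming call stacks have already been recorded per test (all names are illustrative): only frames within the depth limit count as covered, which is what shrinks the matrix.

def coverage_matrix(stack_traces, depth_limit):
    # stack_traces: {test_name: [call_stack, ...]}, each call stack being a
    # list of method names with the entry point first.
    methods = sorted({m for stacks in stack_traces.values()
                      for stack in stacks for m in stack[:depth_limit]})
    col = {m: j for j, m in enumerate(methods)}
    rows = {}
    for test, stacks in stack_traces.items():
        row = [0] * len(methods)
        for stack in stacks:
            for m in stack[:depth_limit]:
                row[col[m]] = 1
        rows[test] = row
    return methods, rows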
9

Kuljus, Kristi. "Rank Estimation in Elliptical Models : Estimation of Structured Rank Covariance Matrices and Asymptotics for Heteroscedastic Linear Regression." Doctoral thesis, Uppsala universitet, Matematisk statistik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9305.

Abstract:
This thesis deals with univariate and multivariate rank methods in making statistical inference. It is assumed that the underlying distributions belong to the class of elliptical distributions. The class of elliptical distributions is an extension of the normal distribution and includes distributions with both lighter and heavier tails than the normal distribution. In the first part of the thesis the rank covariance matrices defined via the Oja median are considered. The Oja rank covariance matrix has two important properties: it is affine equivariant and it is proportional to the inverse of the regular covariance matrix. We employ these two properties to study the problem of estimating the rank covariance matrices when they have a certain structure. The second part, which is the main part of the thesis, is devoted to rank estimation in linear regression models with symmetric heteroscedastic errors. We are interested in asymptotic properties of rank estimates. Asymptotic uniform linearity of a linear rank statistic in the case of heteroscedastic variables is proved. The asymptotic uniform linearity property enables to study asymptotic behaviour of rank regression estimates and rank tests. Existing results are generalized and it is shown that the Jaeckel estimate is consistent and asymptotically normally distributed also for heteroscedastic symmetric errors.
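For reference, the rank-based objective behind the Jaeckel estimate discussed in the abstract (standard form; the thesis's contribution concerns its behaviour under heteroscedastic errors):

D(\beta) = \sum_{i=1}^{n} a\bigl(R_i(\beta)\bigr)\,\bigl(y_i - x_i^{\top}\beta\bigr),

where R_i(\beta) is the rank of the i-th residual and a(\cdot) is a nondecreasing score function with \sum_i a(i) = 0, e.g. Wilcoxon scores a(i) = \sqrt{12}\,\bigl(i/(n+1) - 1/2\bigr); the Jaeckel estimate minimizes D(\beta).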
10

Wang, Shuo. "An Improved Meta-analysis for Analyzing Cylindrical-type Time Series Data with Applications to Forecasting Problem in Environmental Study." Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-theses/386.

Abstract:
This thesis provides a case study on how the wind direction plays an important role in the amount of rainfall in the village of Somió. The primary goal is to illustrate how a meta-analysis, together with circular data analytic methods, helps in analyzing certain environmental issues. The existing GLS meta-analysis combines the merits of usual meta-analysis that yields a better precision and also accounts for covariance among coefficients. But it is quite limited since information about the covariance among coefficients is not utilized. Hence, in my proposed meta-analysis, I take the correlations between adjacent studies into account when employing the GLS meta-analysis. Besides, I also fit a time series linear-circular regression as a comparable model. By comparing the confidence intervals of parameter estimates, covariance matrix, AIC, BIC and p-values, I discuss an improvement on the GLS meta-analysis model in its application to the forecasting problem in environmental study.
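For reference, a textbook GLS meta-analysis pooling step (not necessarily the exact model used in the thesis): with \hat{\theta} stacking the study-level coefficient estimates, X the design matrix mapping them to the pooled parameters, and \Sigma their covariance matrix,

\hat{\beta}_{\mathrm{GLS}} = \bigl(X^{\top}\Sigma^{-1}X\bigr)^{-1} X^{\top}\Sigma^{-1}\hat{\theta}.

The proposal above amounts to enriching \Sigma with correlations between adjacent studies rather than treating the studies as independent.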

Books on the topic "Matrix regression"

1

Puntanen, Simo, George P. H. Styan, and Jarkko Isotalo. Formulas Useful for Linear Regression Analysis and Related Matrix Theory. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-32931-9.

2

Ni un paso atrás: La prohibición de regresividad en materia de derechos sociales. Ciudad Autónoma de Buenos Aires: Del Puerto, 2006.

3

Formulas Useful for Linear Regression Analysis and Related Matrix Theory: It's Only Formulas but We Like Them. Springer, 2012.

4

Franzese, Robert J., and Jude C. Hays. Empirical Models of Spatial Inter‐Dependence. Edited by Janet M. Box-Steffensmeier, Henry E. Brady, and David Collier. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780199286546.003.0025.

Abstract:
This article discusses the role of ‘spatial interdependence’ between units of analysis by using a symmetric weighting matrix for the units of observation whose elements reflect the relative connectivity between unit i and unit j. It starts by addressing spatial interdependence in political science. There are two workhorse regression models in empirical spatial analysis: spatial lag and spatial error models. The article then addresses OLS estimation and specification testing under the null hypothesis of no spatial dependence. It turns to the topic of assessing spatial lag models, and a discussion of spatial error models. Moreover, it reports the calculation of spatial multipliers. Furthermore, it presents several newer applications of spatial techniques in empirical political science research: SAR models with multiple lags, SAR models for binary dependent variables, and spatio-temporal autoregressive (STAR) models for panel data.
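For reference, the two workhorse specifications named in the abstract, with W the spatial weight matrix:

y = \rho W y + X\beta + \varepsilon \quad \text{(spatial lag)}, \qquad y = X\beta + u,\; u = \lambda W u + \varepsilon \quad \text{(spatial error)}.

OLS on the lag model is inconsistent because W y is endogenous, which is why the specification tests discussed in the chapter matter.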
5

Cheng, Russell. The Skew Normal Distribution. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198505044.003.0012.

Abstract:
This chapter considers the univariate skew-normal distribution, a generalization of the normal that includes the normal as a special case. The most natural parametrization is non-standard. This is because the Fisher information matrix is then singular at the true parameter value when the true model is the normal special case. The log-likelihood is then particularly flat in a certain coordinate direction. Standard theory cannot then be used to calculate the asymptotic distribution of all the parameter estimates. This problem can be handled using an alternative parametrization. There is another special case: the half/folded normal distribution. This occurs in the usual parametrization when the shape parameter is infinite. This is not a problem computationally and is easily handled. There are many generalizations to skew-t distributions and to tractable multivariate forms and regression versions. A very brief review is included of these.
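For reference, the direct parametrization of the skew-normal density discussed above, with \phi and \Phi the standard normal pdf and cdf:

f(x;\xi,\omega,\alpha) = \frac{2}{\omega}\,\phi\!\left(\frac{x-\xi}{\omega}\right)\Phi\!\left(\alpha\,\frac{x-\xi}{\omega}\right).

Here \alpha = 0 recovers the normal distribution and |\alpha| \to \infty gives the half (folded) normal; it is at \alpha = 0 that the Fisher information matrix becomes singular in this parametrization.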
6

Vizzardelli, Silvia, ed. La regressione dell'ascolto: Forma e materia sonora nell'estetica musicale contemporanea. Macerata: Quodlibet, 2002.


Book chapters on the topic "Matrix regression"

1

Groß, Jürgen. "Matrix Algebra." In Linear Regression, 331–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-642-55864-1_7.

2

Schmidt, Karsten, and Götz Trenkler. "Lineare Regression." In Moderne Matrix-Algebra, 181–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/978-3-662-08806-7_12.

3

von Frese, Ralph R. B. "Matrix Linear Regression." In Basic Environmental Data Analysis for Scientists and Engineers, 127–40. Boca Raton, FL: CRC Press, 2019. http://dx.doi.org/10.1201/9780429291210-7.

4

Brown, Jonathon D. "Polynomial Regression." In Linear Models in Matrix Form, 341–75. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11734-8_10.

5

Brown, Jonathon D. "Multiple Regression." In Linear Models in Matrix Form, 105–45. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11734-8_4.

6

Groß, Jürgen. "The Covariance Matrix of the Error Vector." In Linear Regression, 259–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-642-55864-1_5.

7

Schmidt, Karsten, and Götz Trenkler. "LINEARE REGRESSION." In Einführung in die Moderne Matrix-Algebra, 181–95. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-46773-2_10.

8

Brown, Jonathon D. "Simple Linear Regression." In Linear Models in Matrix Form, 39–67. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11734-8_2.

9

Adachi, Kohei. "Regression Analysis." In Matrix-Based Introduction to Multivariate Data Analysis, 47–62. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-2341-5_4.

10

Adachi, Kohei. "Regression Analysis." In Matrix-Based Introduction to Multivariate Data Analysis, 49–64. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4103-2_4.


Conference papers on the topic "Matrix regression"

1

Papalexakis, Evangelos E., Nicholas D. Sidiropoulos, and Minos N. Garofalakis. "Reviewer Profiling Using Sparse Matrix Regression." In 2010 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2010. http://dx.doi.org/10.1109/icdmw.2010.87.

2

Zhang, Liang, Deepak Agarwal, and Bee-Chung Chen. "Generalizing matrix factorization through flexible regression priors." In Proceedings of the Fifth ACM Conference on Recommender Systems (RecSys '11). New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2043932.2043940.

3

Tang, Yi, and Hong Chen. "Matrix-value regression for single-image super-resolution." In 2013 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR). IEEE, 2013. http://dx.doi.org/10.1109/icwapr.2013.6599319.

4

Honeine, Paul, Cedric Richard, Mehdi Essoloh, and Hichem Snoussi. "Localization in sensor networks - A matrix regression approach." In 2008 IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM). IEEE, 2008. http://dx.doi.org/10.1109/sam.2008.4606873.

5

Li, Junyu, Haoliang Yuan, Loi Lei Lai, Houqing Zheng, Wenzhong Qian, and Xiaoming Zhou. "Graph-Based Sparse Matrix Regression for 2D Feature Selection." In 2018 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR). IEEE, 2018. http://dx.doi.org/10.1109/icwapr.2018.8521279.

6

Cao, Guangzhi, Yandong Guo, and Charles A. Bouman. "High dimensional regression using the sparse matrix transform (SMT)." In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2010. http://dx.doi.org/10.1109/icassp.2010.5495359.

7

Joshi, Swapna, S. Karthikeyan, B. S. Manjunath, Scott Grafton, and Kent A. Kiehl. "Anatomical parts-based regression using non-negative matrix factorization." In 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2010. http://dx.doi.org/10.1109/cvpr.2010.5540022.

8

Miao, Xiaoyu, Aimin Jiang, and Ning Xu. "Gaussian Processes Regression with Joint Learning of Precision Matrix." In 2020 28th European Signal Processing Conference (EUSIPCO). IEEE, 2021. http://dx.doi.org/10.23919/eusipco47968.2020.9287742.

9

Xie, Jianchun, Jian Yang, Jianjun Qian, and Ying Tai. "Robust Matrix Regression for Illumination and Occlusion Tolerant Face Recognition." In 2015 IEEE International Conference on Computer Vision Workshop (ICCVW). IEEE, 2015. http://dx.doi.org/10.1109/iccvw.2015.118.

10

Song, Yiliao, Guangquan Zhang, Haiyan Lu, and Jie Lu. "A Fuzzy Drift Correlation Matrix for Multiple Data Stream Regression." In 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). IEEE, 2020. http://dx.doi.org/10.1109/fuzz48607.2020.9177566.
