Academic literature on the topic 'Large dimensional learning'
Journal articles on the topic "Large dimensional learning"
Khan, Usman A., Soummya Kar, and José M. F. Moura. "Higher Dimensional Consensus: Learning in Large-Scale Networks." IEEE Transactions on Signal Processing 58, no. 5 (May 2010): 2836–49. http://dx.doi.org/10.1109/tsp.2010.2042482.
Lin, Zhiping, Jiuwen Cao, Tao Chen, Yi Jin, Zhan-Li Sun, and Amaury Lendasse. "Extreme Learning Machine on High Dimensional and Large Data Applications." Mathematical Problems in Engineering 2015 (2015): 1–2. http://dx.doi.org/10.1155/2015/624903.
Peng, Chong, Jie Cheng, and Qiang Cheng. "A Supervised Learning Model for High-Dimensional and Large-Scale Data." ACM Transactions on Intelligent Systems and Technology 8, no. 2 (January 18, 2017): 1–23. http://dx.doi.org/10.1145/2972957.
Terol, Rafael Munoz, Alejandro Reina Reina, Saber Ziaei, and David Gil. "A Machine Learning Approach to Reduce Dimensional Space in Large Datasets." IEEE Access 8 (2020): 148181–92. http://dx.doi.org/10.1109/access.2020.3012836.
Keriven, Nicolas, Anthony Bourrier, Rémi Gribonval, and Patrick Pérez. "Sketching for large-scale learning of mixture models." Information and Inference: A Journal of the IMA 7, no. 3 (December 22, 2017): 447–508. http://dx.doi.org/10.1093/imaiai/iax015.
Panos, Aristeidis, Petros Dellaportas, and Michalis K. Titsias. "Large scale multi-label learning using Gaussian processes." Machine Learning 110, no. 5 (April 14, 2021): 965–87. http://dx.doi.org/10.1007/s10994-021-05952-5.
Cao, Jiuwen, and Zhiping Lin. "Extreme Learning Machines on High Dimensional and Large Data Applications: A Survey." Mathematical Problems in Engineering 2015 (2015): 1–13. http://dx.doi.org/10.1155/2015/103796.
Ju, Cheng, Susan Gruber, Samuel D. Lendle, Antoine Chambaz, Jessica M. Franklin, Richard Wyss, Sebastian Schneeweiss, and Mark J. van der Laan. "Scalable collaborative targeted learning for high-dimensional data." Statistical Methods in Medical Research 28, no. 2 (September 22, 2017): 532–54. http://dx.doi.org/10.1177/0962280217729845.
Loyola R, Diego G., Mattia Pedergnana, and Sebastián Gimeno García. "Smart sampling and incremental function learning for very large high dimensional data." Neural Networks 78 (June 2016): 75–87. http://dx.doi.org/10.1016/j.neunet.2015.09.001.
Tran, Loc, Debrup Banerjee, Jihong Wang, Ashok J. Kumar, Frederic McKenzie, Yaohang Li, and Jiang Li. "High-dimensional MRI data analysis using a large-scale manifold learning approach." Machine Vision and Applications 24, no. 5 (April 19, 2013): 995–1014. http://dx.doi.org/10.1007/s00138-013-0499-8.
Dissertations / Theses on the topic "Large dimensional learning"
Bussy, Simon. "Introduction of high-dimensional interpretable machine learning models and their applications." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS488.
This dissertation focuses on the introduction of new interpretable machine learning methods in a high-dimensional setting. We first developed the C-mix, a mixture model of censored durations that automatically detects subgroups based on the risk that the event under study occurs early; then the binarsity penalty, which combines a weighted total variation penalty with a linear constraint per block and applies to one-hot encodings of continuous features; and finally the binacox model, which uses the binarsity penalty within a Cox model to automatically detect cut-points in continuous features. For each method, theoretical properties are established (algorithm convergence, non-asymptotic oracle inequalities), and comparison studies with state-of-the-art methods are carried out on both simulated and real data. All proposed methods give good results in terms of prediction performance, computing time, and interpretability.
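The one-hot encoding of continuous features that the binarsity penalty operates on can be illustrated with a minimal sketch: quantile-bin each feature, then one-hot encode the bin memberships. The bin count and data below are illustrative, not taken from the thesis.

```python
import numpy as np

def one_hot_binning(x, n_bins=4):
    """Quantile-bin a continuous feature and one-hot encode the bins.

    Quantile edges give each bin roughly equal mass; the resulting
    binary design matrix is what a binarsity-style penalty acts on.
    """
    # interior quantile edges, e.g. [0.25, 0.5, 0.75] for 4 bins
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    idx = np.digitize(x, edges)      # bin index (0..n_bins-1) per sample
    return np.eye(n_bins)[idx]       # one-hot rows

x = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.05, 0.6, 0.2])
B = one_hot_binning(x, n_bins=4)     # shape (8, 4), one 1 per row
```

With quantile edges, each of the four bins here receives exactly two of the eight samples.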
Rawald, Tobias. "Scalable and Efficient Analysis of Large High-Dimensional Data Sets in the Context of Recurrence Analysis." Doctoral thesis, Humboldt-Universität zu Berlin, 2018. http://dx.doi.org/10.18452/18797.
Recurrence quantification analysis (RQA) is a method from nonlinear time series analysis. It relies on the identification of line structures within so-called recurrence matrices and comprises a set of scalar measures. Existing computing approaches to RQA are either not capable of processing recurrence matrices exceeding a certain size or suffer from long runtimes on time series containing hundreds of thousands of data points. This thesis introduces scalable recurrence analysis (SRA), an alternative computing approach that subdivides a recurrence matrix into multiple sub-matrices. Each sub-matrix is processed individually, in a massively parallel manner, by a single compute device; this is implemented exemplarily using the OpenCL framework. This approach is shown to deliver considerable performance improvements over state-of-the-art RQA software by exploiting the computing capabilities of many-core hardware architectures, in particular graphics cards. The use of OpenCL allows identical SRA implementations to be executed on a variety of hardware platforms with different architectural properties. An extensive evaluation analyses the impact of applying concepts from database technology, such as memory storage layouts, to the RQA processing pipeline, and investigates how different realisations of these concepts affect the performance of the computations on different types of compute devices. Finally, an approach based on automatic performance tuning is introduced that automatically selects well-performing RQA implementations for a given analytical scenario on specific computing hardware. Among other results, the customised auto-tuning approach is shown to considerably increase the efficiency of the processing by adapting the implementation selection.
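The recurrence matrix at the heart of RQA can be sketched in a few lines: entry (i, j) is 1 when observations i and j are closer than a threshold. This dense NumPy version is only an illustration of the object SRA decomposes into sub-matrices; the series and threshold are made up.

```python
import numpy as np

def recurrence_matrix(series, eps):
    """Binary recurrence matrix R[i, j] = 1 iff |x_i - x_j| < eps.

    RQA measures are derived from diagonal/vertical line structures
    in this matrix; SRA processes it block-wise on many-core hardware.
    """
    x = np.asarray(series, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # all pairwise distances
    return (dist < eps).astype(np.uint8)

R = recurrence_matrix([0.0, 0.1, 1.0, 1.05], eps=0.2)
# symmetric, with ones on the diagonal; the two pairs of nearby
# points form two 2x2 blocks of ones
```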
Mai, Xiaoyi. "Méthodes des matrices aléatoires pour l’apprentissage en grandes dimensions." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC078/document.
The BigData challenge induces a need for machine learning algorithms to evolve towards large-dimensional and more efficient learning engines. Recently, a new direction of research has emerged that consists in analyzing learning methods in the modern regime where the number n and the dimension p of data samples are commensurately large. Compared to the conventional regime where n >> p, the regime with large and comparable n, p is particularly interesting, as the learning performance in this regime remains sensitive to the tuning of hyperparameters, thus opening a path to the understanding and improvement of learning techniques for large-dimensional datasets. The technical approach employed in this thesis draws on several advanced tools of high-dimensional statistics, allowing us to conduct more elaborate analyses beyond the state of the art. The first part of this dissertation is devoted to the study of semi-supervised learning on high-dimensional data. Motivated by our theoretical findings, we propose a superior alternative to the standard semi-supervised method of Laplacian regularization. Methods involving implicit optimizations, such as SVMs and logistic regression, are next investigated under realistic mixture models, providing exhaustive details on the learning mechanism. Several important consequences are thus revealed, some of which even contradict common belief.
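The Laplacian-regularization baseline that this thesis analyzes and improves on can be sketched in its simplest "harmonic" form: scores on unlabeled nodes solve a linear system in the graph Laplacian. This is only the textbook baseline, not the thesis's improved method; the 4-node chain graph and labels are illustrative.

```python
import numpy as np

def harmonic_labels(W, labels):
    """Propagate +/-1 labels over a similarity graph.

    `labels` holds +1/-1 for labeled nodes and 0 for unlabeled ones.
    Unlabeled scores f_u solve L_uu f_u = -L_ul y_l, the classical
    harmonic solution of Laplacian-regularized semi-supervised learning.
    """
    W = np.asarray(W, dtype=float)
    y = np.asarray(labels, dtype=float)
    L = np.diag(W.sum(axis=1)) - W           # graph Laplacian
    u = np.flatnonzero(y == 0)               # unlabeled nodes
    l = np.flatnonzero(y != 0)               # labeled nodes
    f = y.copy()
    f[u] = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, l)] @ y[l])
    return f

W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # a 4-node chain graph
f = harmonic_labels(W, [1, 0, 0, -1])        # endpoints labeled +1 / -1
```

On a chain, the harmonic scores interpolate linearly between the two labeled endpoints.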
Chinot, Geoffrey. "Localization methods with applications to robust learning and interpolation." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAG002.
This PhD thesis deals with supervised machine learning and statistics. The main goal is to use localization techniques to derive fast rates of convergence, with a particular focus on robust learning and interpolation problems. Localization methods analyze localized properties of an estimator to obtain fast rates of convergence, that is, rates of order O(1/n), where n is the number of observations. Under assumptions such as the Bernstein condition, such rates are attainable. A robust estimator is an estimator with good theoretical guarantees under as few assumptions as possible. This question is becoming more and more important in the current era of big data: large datasets are very likely to be corrupted, and one would like to build reliable estimators in such a setting. We show that the well-known regularized empirical risk minimizer (RERM) with a Lipschitz loss function is robust with respect to heavy-tailed noise and outliers in the labels. When the class of predictors is heavy-tailed, RERM is not reliable; in this setting, we show that minmax median-of-means (MOM) estimators can be a solution. By construction, minmax-MOM estimators are also robust to adversarial contamination. Interpolation problems study learning procedures with zero training error. Surprisingly, in large dimension, interpolating the data does not necessarily imply over-fitting. We study a high-dimensional Gaussian linear model and show that over-fitting can sometimes be benign.
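The median-of-means building block behind the minmax-MOM estimators mentioned above can be sketched in one dimension: split the sample into blocks, average each block, and take the median of the block means, so a few corrupted points can spoil at most a few blocks. The block count and data are illustrative; in practice the blocks are formed by random partitioning.

```python
import numpy as np

def median_of_means(sample, n_blocks=5):
    """Median-of-means estimate of a mean.

    More blocks buys more robustness (tolerates up to ~n_blocks/2
    corrupted blocks) at the price of a larger variance per block.
    """
    x = np.asarray(sample, dtype=float)
    blocks = np.array_split(x, n_blocks)
    return float(np.median([b.mean() for b in blocks]))

data = np.array([1.0] * 10 + [1000.0])   # one gross outlier
est = median_of_means(data, n_blocks=5)  # the outlier spoils one block only
```

Here the empirical mean is dragged above 90 by the single outlier, while the MOM estimate stays at the uncorrupted value.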
Hmamouche, Youssef. "Prédiction des séries temporelles larges." Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0480.
Nowadays, storage and data processing systems are expected to store and process large time series. As the number of observed variables increases very rapidly, prediction becomes more and more complicated, and using all the variables poses problems for classical prediction models. Univariate prediction models were among the first prediction models; to improve on them, the use of multiple variables became common, and multivariate models are increasingly used because they take more information into account. With the growth of interrelated data, however, the application of multivariate models is itself questionable, because using all available information does not necessarily lead to the best predictions. The challenge in this situation is therefore to find, among all available data, the factors most relevant to a target variable. In this thesis, we study this problem, presenting a detailed analysis of the approaches proposed in the literature. We address the problem of prediction and dimensionality reduction for massive data, and also discuss these approaches in the context of Big Data. The proposed approaches show promising and very competitive results compared to well-known algorithms, and lead to an improvement in the accuracy of the predictions on the data used. We then present our contributions and propose a complete methodology for the prediction of wide time series. We also extend this methodology to big data via distributed computing and parallelism, with an implementation of the proposed prediction process in the Hadoop/Spark environment.
Dang, Quang Vinh. "Évaluation de la confiance dans la collaboration à large échelle." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0002/document.
Large-scale collaborative systems, wherein a large number of users collaborate to perform a shared task, attract a lot of attention from both academia and industry. Trust is an important factor in the success of a large-scale collaboration, and it is difficult for end-users to manually assess the trust level of each partner. We study the trust assessment problem and aim to design a computational trust model for collaborative systems, focusing on three research questions. 1. What is the effect of deploying a trust model and showing partners' trust scores to users? We designed and organized a user experiment based on the trust game, a well-known money-exchange lab-controlled protocol, into which we introduced user trust scores. Our comprehensive analysis of user behavior showed that: (i) showing trust scores to users encourages collaboration between them significantly, at a level similar to showing nicknames, and (ii) users follow the trust score in decision-making. The results suggest that a trust model can be deployed in collaborative systems to assist users. 2. How can a trust score be calculated between users who have already collaborated? We designed a trust model for the repeated trust game that computes user trust scores based on their past behavior. We validated our trust model against: (i) simulated data, (ii) human opinion, and (iii) real-world experimental data. We extended our trust model to Wikipedia, based on user contributions to the quality of the edited Wikipedia articles, and proposed three machine learning approaches to assess the quality of Wikipedia articles: the first based on random forests with manually designed features, the other two based on deep learning methods. 3. How can trust relations be predicted between users who have not interacted in the past? Given a network in which the links represent trust/distrust relations between users, we aim to predict future relations. We proposed an algorithm that takes into account the time at which links in the network were established in order to predict future user trust/distrust relationships. Our algorithm outperforms state-of-the-art approaches on real-world signed directed social network datasets.
Yang, Xiaoke. "Regularized Discriminant Analysis: A Large Dimensional Study." Thesis, 2018. http://hdl.handle.net/10754/627734.
Chajnacki, Gregory M. "Characteristics of learning organizations and multi-dimensional organizational performance indicators: a survey of large, publicly-owned companies." 2007. http://www.etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-1775/index.html.
Books on the topic "Large dimensional learning"
Walker, Stephen G., and Mark Schafer. Operational Code Theory: Beliefs and Foreign Policy Decisions. Oxford University Press, 2018. http://dx.doi.org/10.1093/acrefore/9780190846626.013.411.
Sullivan, Mark, Nilay Patel, and Inderbir Gill. Principles of laparoscopic and robotic urological surgery. Edited by John Reynard. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199659579.003.0033.
Liang, Percy, Michael Jordan, and Dan Klein. Probabilistic grammars and hierarchical Dirichlet processes. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.27.
Dobson, James E. Critical Digital Humanities. University of Illinois Press, 2019. http://dx.doi.org/10.5622/illinois/9780252042270.001.0001.
Austin, Kenneth. The Jews and the Reformation. Yale University Press, 2020. http://dx.doi.org/10.12987/yale/9780300186291.001.0001.
Book chapters on the topic "Large dimensional learning"
Lee, Sangkyun, and Andreas Holzinger. "Knowledge Discovery from Complex High Dimensional Data." In Solving Large Scale Learning Tasks. Challenges and Algorithms, 148–67. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41706-6_7.
Rimal, Yagyanath. "Regression Analysis of Large Research Data: Dimensional Reduction Techniques." In Learning and Analytics in Intelligent Systems, 296–306. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-42363-6_35.
Shenoy, P. Deepa, K. G. Srinivasa, M. P. Mithun, K. R. Venugopal, and L. M. Patnaik. "Dynamic Subspace Clustering for Very Large High-Dimensional Databases." In Intelligent Data Engineering and Automated Learning, 850–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45080-1_117.
Wang, Dongxia, and Yongmei Lei. "Asynchronous Distributed ADMM for Learning with Large-Scale and High-Dimensional Sparse Data Set." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 259–74. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36405-2_27.
Zhang, Lijun, Tianbao Yang, Rong Jin, and Zhi-Hua Zhou. "Sparse Learning for Large-Scale and High-Dimensional Data: A Randomized Convex-Concave Optimization Approach." In Lecture Notes in Computer Science, 83–97. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46379-7_6.
Qiu, Waishan, Wenjing Li, Xun Liu, and Xiaokai Huang. "Subjectively Measured Streetscape Qualities for Shanghai with Large-Scale Application of Computer Vision and Machine Learning." In Proceedings of the 2021 DigitalFUTURES, 242–51. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-5983-6_23.
Goncalves, André R., Arindam Banerjee, Vidyashankar Sivakumar, and Soumyadeep Chatterjee. "Structured Estimation in High Dimensions." In Large-Scale Machine Learning in the Earth Sciences, 13–32. Boca Raton: Chapman and Hall/CRC, 2017. http://dx.doi.org/10.4324/9781315371740-2.
Behuet, Sabrina, Sebastian Bludau, Olga Kedo, Christian Schiffer, Timo Dickscheid, Andrea Brandstetter, Philippe Massicotte, Mona Omidyeganeh, Alan Evans, and Katrin Amunts. "A High-Resolution Model of the Human Entorhinal Cortex in the ‘BigBrain’ – Use Case for Machine Learning and 3D Analyses." In Lecture Notes in Computer Science, 3–21. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82427-3_1.
Habyarimana, Ephrem, and Sofia Michailidou. "Genomics Data." In Big Data in Bioeconomy, 69–76. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71069-9_6.
Reimers, Fernando M. "Conclusions. Seven Lessons to Build an Education Renaissance After the Pandemic." In Implementing Deeper Learning and 21st Education Reforms, 171–98. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-57039-2_8.
Conference papers on the topic "Large dimensional learning"
Tiomoko, Malik, Cosme Louart, and Romain Couillet. "Large Dimensional Asymptotics of Multi-Task Learning." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053557.
Zarrouk, Tayeb, Romain Couillet, Florent Chatelain, and Nicolas Le Bihan. "Performance-Complexity Trade-Off in Large Dimensional Statistics." In 2020 IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2020. http://dx.doi.org/10.1109/mlsp49062.2020.9231568.
Mai, Xiaoyi, and Romain Couillet. "Revisiting and Improving Semi-supervised Learning: A Large Dimensional Approach." In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. http://dx.doi.org/10.1109/icassp.2019.8683378.
Couillet, Romain, and Matthew McKay. "Robust covariance estimation and linear shrinkage in the large dimensional regime." In 2014 IEEE 24th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2014. http://dx.doi.org/10.1109/mlsp.2014.6958867.
Tsymbal, Alexey, Sonja Zillner, and Martin Huber. "Feature Ontology for Improved Learning from Large-Dimensional Disease-Specific Heterogeneous Data." In Twentieth IEEE International Symposium on Computer-Based Medical Systems. IEEE, 2007. http://dx.doi.org/10.1109/cbms.2007.50.
Liao, Zhenyu, and Romain Couillet. "Random matrices meet machine learning: A large dimensional analysis of LS-SVM." In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017. http://dx.doi.org/10.1109/icassp.2017.7952586.
Couillet, Romain. "A Random Matrix Analysis and Optimization Framework to Large Dimensional Transfer Learning." In 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP). IEEE, 2019. http://dx.doi.org/10.1109/camsap45676.2019.9022482.
Yu, Wenjian, Yu Gu, Jian Li, Shenghua Liu, and Yaohang Li. "Single-Pass PCA of Large High-Dimensional Data." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/468.
Hongqing, Zhang, Wan Wuyi, Bao Zhongjin, Hu Jinchun, and Ye Long. "Three-dimensional Numerical Simulation for a Spillway Tunnel with High Head and Large Discharge." In The 1st EAI International Conference on Multimedia Technology and Enhanced Learning. EAI, 2017. http://dx.doi.org/10.4108/eai.28-2-2017.152332.
Wang, Haobo, Weiwei Liu, Yang Zhao, Tianlei Hu, Ke Chen, and Gang Chen. "Learning From Multi-Dimensional Partial Labels." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/407.
Reports on the topic "Large dimensional learning"
Bednar, Amy. Topological data analysis: an overview. Engineer Research and Development Center (U.S.), June 2021. http://dx.doi.org/10.21079/11681/40943.
Pritchett, Lant, and Martina Viarengo. Learning Outcomes in Developing Countries: Four Hard Lessons from PISA-D. Research on Improving Systems of Education (RISE), April 2021. http://dx.doi.org/10.35489/bsg-rise-wp_2021/069.
McKenna, Patrick, and Mark Evans. Emergency Relief and complex service delivery: Towards better outcomes. Queensland University of Technology, June 2021. http://dx.doi.org/10.5204/rep.eprints.211133.