Academic literature on the topic 'High dimensional data'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'High dimensional data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "High dimensional data"

1. Geethika, Paruchuri, and Voleti Prasanthi. "Booster in High Dimensional Data Classification." International Journal of Trend in Scientific Research and Development 2, no. 3 (April 30, 2018): 1186–90. http://dx.doi.org/10.31142/ijtsrd11368.
2. Gayathri, Tata, and N. Durga. "Privacy Preserving Approaches for High Dimensional Data." International Journal of Trend in Scientific Research and Development 1, no. 5 (August 31, 2017): 1120–25. http://dx.doi.org/10.31142/ijtsrd2430.
3. G, Vasanthi. "Nearest Neighbors Search Algorithm for High Dimensional Data." Journal of Advanced Research in Dynamical and Control Systems 12, no. SP8 (July 30, 2020): 1215–18. http://dx.doi.org/10.5373/jardcs/v12sp8/20202636.
4. Amaratunga, Dhammika, and Javier Cabrera. "High-dimensional data." Journal of the National Science Foundation of Sri Lanka 44, no. 1 (March 31, 2016): 3. http://dx.doi.org/10.4038/jnsfsr.v44i1.7976.
5. Geubbelmans, Melvin, Axel-Jan Rousseau, Dirk Valkenborg, and Tomasz Burzykowski. "High-dimensional data." American Journal of Orthodontics and Dentofacial Orthopedics 164, no. 3 (September 2023): 453–56. http://dx.doi.org/10.1016/j.ajodo.2023.06.012.
6. Yuan, Xupeng, Miao Zhao, Xinjun Guo, Yao Li, Zongsong Gan, and Hao Ruan. "Optical tape for high capacity three-dimensional optical data storage." Chinese Optics Letters 18, no. 1 (2020): 012001. http://dx.doi.org/10.3788/col202018.012001.
7. Khot, Tejas. "Visualizing high-dimensional data." XRDS: Crossroads, The ACM Magazine for Students 23, no. 2 (December 15, 2016): 66–67. http://dx.doi.org/10.1145/3021604.
8. Kriegel, Hans-Peter, and Eirini Ntoutsi. "Clustering high dimensional data." ACM SIGKDD Explorations Newsletter 15, no. 2 (June 16, 2014): 1–8. http://dx.doi.org/10.1145/2641190.2641192.
9. Tang, Lin. "High-dimensional data visualization." Nature Methods 17, no. 2 (February 2020): 129. http://dx.doi.org/10.1038/s41592-020-0750-y.
10. Kriegel, Hans-Peter, Peer Kröger, and Arthur Zimek. "Clustering high-dimensional data." ACM Transactions on Knowledge Discovery from Data 3, no. 1 (March 2009): 1–58. http://dx.doi.org/10.1145/1497577.1497578.

Dissertations / Theses on the topic "High dimensional data"

1. Wauters, John. "Independence Screening in High-Dimensional Data." Thesis, The University of Arizona, 2016. http://hdl.handle.net/10150/623083.

Abstract:
High-dimensional data, data in which the number of dimensions exceeds the number of observations, are increasingly common in statistics. The term "ultra-high dimensional" is defined by Fan and Lv (2008) as describing the situation where log(p) is of order O(n^a) for some a in the interval (0, ½). Such data arise in many contexts, including gene expression data, proteomic data, imaging data, tomography, and finance. High-dimensional data present a challenge to traditional statistical techniques. In traditional settings, models have a small number of features, chosen based on an assumption of which features may be relevant to the response of interest. In the high-dimensional setting, many traditional feature-selection techniques become computationally intractable or do not yield unique solutions. Current research in modeling high-dimensional data is heavily focused on methods that screen the features before modeling, that is, methods that eliminate noise features as a pre-modeling dimension reduction. Typically, noise features are identified by exploiting properties of independent random variables, hence the term "independence screening." There are methods for modeling high-dimensional data without screening features first (e.g., LASSO or SCAD), but simulation studies show that screen-first methods perform better as dimensionality increases. Many proposals for independence screening exist, but in my literature review certain themes recurred: A) The assumption of sparsity: all the useful information in the data is actually contained in a small fraction of the features (the "active" features), the rest being essentially random noise (the "inactive" features). B) In many newer methods, initial dimension reduction by feature screening reduces the problem from the high-dimensional case to a classical case; feature selection then proceeds by a classical method. C) In the initial screening, removal of features independent of the response is highly desirable, as such features literally give no information about the response. D) For the initial screening, some statistic is applied pairwise to each feature in combination with the response; the specific statistic is chosen so that when the two random variables are independent, a specific known value is expected for the statistic. E) Features are ranked by the absolute difference between the calculated statistic and the expected value of that statistic in the independent case; that is, features most different from the independent case are most preferred. F) Proof is typically offered that, asymptotically, the method retains the true active features with probability approaching one. G) Where possible, an iterative version of the process is explored, as iterative versions do much better at identifying features that are active in their interactions but not active individually.
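To illustrate the screening recipe in themes D) and E) above, here is a minimal Python sketch of correlation-based independence screening in the spirit of Fan and Lv's sure independence screening. It is not code from the thesis: the function name, the choice of marginal correlation as the pairwise statistic, and the toy data are illustrative assumptions.

```python
import numpy as np

def sis_screen(X, y, keep):
    """Rank features by |marginal correlation| with y and keep the top `keep`.

    Under independence of a feature and the response, the correlation is
    expected to be 0, so features farthest from 0 are preferred (themes D, E).
    """
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
    )
    scores = np.abs(corr)  # absolute deviation from the independent-case value 0
    return np.argsort(scores)[::-1][:keep]

# Toy run: p >> n, with only the first 3 of 2000 features active.
rng = np.random.default_rng(0)
n, p = 100, 2000
X = rng.standard_normal((n, p))
y = 2 * X[:, 0] - 3 * X[:, 1] + X[:, 2] + rng.standard_normal(n)
print(sis_screen(X, y, keep=10))  # usually contains features 0, 1, 2
```

An iterative variant (theme G) would rescore features against the residuals after each pass, giving features that are active only through interactions a second chance.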
2. Zeugner, Stefan. "Macroeconometrics with high-dimensional data." Doctoral thesis, Université Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209640.

Abstract:
CHAPTER 1:

The default g-priors predominant in Bayesian Model Averaging tend to over-concentrate posterior mass on a tiny set of models, a feature we denote the 'supermodel effect'. To address it, we propose a 'hyper-g' prior specification whose data-dependent shrinkage adapts posterior model distributions to data quality. We demonstrate the asymptotic consistency of the hyper-g prior and its interpretation as a goodness-of-fit indicator. Moreover, we highlight the similarities between hyper-g and 'Empirical Bayes' priors, and introduce closed-form expressions essential to computational feasibility. The robustness of the hyper-g prior is demonstrated via simulation analysis and by comparing four vintages of economic growth data.

CHAPTER 2:

Ciccone and Jarocinski (2010) show that inference in Bayesian Model Averaging (BMA) can be highly sensitive to small data perturbations. In particular, they demonstrate that the importance attributed to potential growth determinants varies tremendously over different revisions of international income data, and they conclude that 'agnostic' priors appear too sensitive for this strand of growth empirics. In response, we show that the reported instability owes much to a specific BMA set-up: first, comparing the same countries across data revisions improves robustness; second, much of the remaining variation can be reduced by applying an equally 'agnostic' but flexible prior.

CHAPTER 3:

This chapter explores the link between the leverage of the US financial sector, of households and of non-financial businesses, and real activity. We document that leverage is negatively correlated with the future growth of real activity, and positively linked to the conditional volatility of future real activity and of equity returns.

The joint information in sectoral leverage series is more relevant for predicting future real activity than the information contained in any individual leverage series. Using in-sample regressions and out-of-sample forecasts, we show that the predictive power of leverage is roughly comparable to that of the macro and financial predictors commonly used by forecasters.

Leverage information, however, would not have allowed forecasters to predict the 'Great Recession' of 2008-2009 any better than conventional macro/financial predictors.

CHAPTER 4:

Model averaging has proven popular for inference with many potential predictors in small samples. However, it is frequently criticized for a lack of robustness with respect to prediction and inference. This chapter explores the reasons for such robustness problems and proposes to address them by transforming the subset of potential 'control' predictors into principal components in suitable datasets. A simulation analysis shows that this approach yields robustness advantages over both standard model averaging and principal-component-augmented regression. Moreover, we devise a prior framework that extends model averaging to uncertainty over the set of principal components, and show that it offers considerable improvements in the robustness of estimates and of inference about the importance of covariates. Finally, we empirically benchmark our approach against popular model averaging and PC-based techniques in evaluating financial indicators as alternatives to established macroeconomic predictors of real economic activity.

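To make the model-averaging machinery of Chapter 1 concrete, here is a minimal Python sketch of Bayesian Model Averaging under a fixed Zellner g-prior, using the standard closed-form null-based Bayes factor BF = (1+g)^((n-k-1)/2) * (1+g(1-R^2))^(-(n-1)/2). It is a sketch under stated assumptions, not the thesis's code: the hyper-g prior places a hyperprior on g, which is omitted here, and the unit-information choice g = n and the toy data are illustrative.

```python
import itertools
import numpy as np

def gprior_log_bf(X, y, cols, g):
    """Log Bayes factor of the model with regressors `cols` (plus intercept)
    against the intercept-only null, under Zellner's g-prior."""
    n, k = len(y), len(cols)
    if k == 0:
        return 0.0  # the null model itself
    Z = np.column_stack([np.ones(n), X[:, list(cols)]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return 0.5 * (n - k - 1) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2))

# Toy run: posterior probabilities over all 2^4 models built from 4 regressors.
rng = np.random.default_rng(1)
n, p = 60, 4
X = rng.standard_normal((n, p))
y = 1.5 * X[:, 0] + rng.standard_normal(n)
models = [m for r in range(p + 1) for m in itertools.combinations(range(p), r)]
log_bf = np.array([gprior_log_bf(X, y, m, g=n) for m in models])
post = np.exp(log_bf - log_bf.max())
post /= post.sum()  # uniform prior over models
print("top model:", models[int(post.argmax())], "posterior prob %.2f" % post.max())
```

A 'supermodel effect' in the sense of Chapter 1 would show up here as nearly all posterior mass landing on a single model; making g data-dependent, as the hyper-g prior does, damps that over-concentration.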
3. Boulesteix, Anne-Laure. "Dimension Reduction and Classification with High-Dimensional Microarray Data." Diss., LMU München, 2005. http://nbn-resolving.de/urn:nbn:de:bvb:19-28017.
4. Samko, Oksana. "Low dimension hierarchical subspace modelling of high dimensional data." Thesis, Cardiff University, 2009. http://orca.cf.ac.uk/54883/.

Abstract:
Building models of high-dimensional data in a low-dimensional space has become extremely popular in recent years. Motion tracking, facial animation, stock market tracking, digital libraries, and many other models have been built and tuned to specific application domains. However, when the underlying structure of the original data is unknown, modelling such data remains an open question. The problem is of interest because capturing and storing large amounts of high-dimensional data has become trivial, yet the capability to process, interpret, and use these data is limited. In this thesis, we introduce novel algorithms for modelling high-dimensional data with an unknown structure, which allow us to represent the data accurately and compactly. This work presents a novel, fully automated dynamic hierarchical algorithm, together with a novel automatic data-partitioning method that works alongside existing application-specific models (talking head, human motion). Our algorithm is applicable to hierarchical data visualisation and classification, meaningful pattern extraction and recognition, and new data sequence generation. We also investigated problems related to low-dimensional data representation: automatic estimation of optimal input parameters, and robustness against noise and outliers. We show the potential of our modelling in many data domains (talking head, motion, audio, etc.), and we believe that it has good potential in adapting to other domains.
5. Ruan, Lingyan. "Statistical analysis of high dimensional data." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37135.

Abstract:
This century is surely the century of data (Donoho, 2000). Data analysis has been a growing activity over the last few decades, and high dimensional data in particular are increasingly pervasive with the advance of massive data collection systems, such as microarrays, satellite imagery, and financial data. However, analysis of high dimensional data is challenging because of the so-called curse of dimensionality (Bellman, 1961). This dissertation presents several methodologies for high dimensional data analysis. The first part discusses a joint analysis of multiple microarray gene expression data sets. Microarray analysis dates back to Golub et al. (1999) and has drawn much attention since. One common goal of microarray analysis is to determine which genes are differentially expressed, i.e., behave significantly differently between groups of individuals. However, in microarray analysis there are thousands of genes but few arrays (samples, individuals), so reproducibility remains relatively low. It is therefore natural to consider joint analyses that combine microarrays from different experiments effectively in order to achieve improved accuracy. In particular, we present a model-based approach for better identification of differentially expressed genes by incorporating data from different studies. The model can accommodate in a seamless fashion a wide range of studies, including those performed on different platforms and/or under different but overlapping biological conditions. Model-based inference can be done in an empirical Bayes fashion. Because of the information sharing among studies, the joint analysis dramatically improves inference relative to individual analyses. Simulation studies and real data examples demonstrate the effectiveness of the proposed approach under a variety of complications that often arise in practice. The second part concerns covariance matrix estimation in high dimensional data. First, we propose a penalised likelihood estimator for the high dimensional t-distribution. The Student t-distribution is of increasing interest in mathematical finance, education, and many other applications, but its use is limited by the difficulty of estimating the covariance matrix for high dimensional data. We show that by imposing a LASSO penalty on the Cholesky factors of the covariance matrix, an EM algorithm can efficiently compute the estimator, which performs much better than other popular estimators. Second, we propose an estimator for high dimensional Gaussian mixture models. Finite Gaussian mixture models are widely used in statistics thanks to their great flexibility, but parameter estimation with high dimensionality can be rather challenging because of the huge number of parameters to be estimated. We propose a penalised likelihood estimator to specifically address these difficulties. The LASSO penalty we impose on the inverse covariance matrices encourages sparsity in their entries and thus helps reduce the dimensionality of the problem. We show that the proposed estimator can be efficiently computed via an Expectation-Maximization algorithm. To illustrate the practical merits of the proposed method, we consider its application in model-based clustering and mixture discriminant analysis. Numerical experiments with both simulated and real data show that the new method is a valuable tool for handling high dimensional data.
Finally, we present structured estimators for high dimensional Gaussian mixture models. The graphical representation of every cluster in a Gaussian mixture model may have the same or a similar structure, an important feature in many applications such as image processing, speech recognition, and gene network analysis. Failure to exploit this shared structure deteriorates estimation accuracy. To address this, we propose two structured estimators: a hierarchical Lasso estimator and a group Lasso estimator. An EM algorithm can be applied to conveniently solve the estimation problem. We show that when clusters share similar structures, the proposed estimators perform much better than the separate Lasso estimator.
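As a toy illustration of the l1-penalised inverse-covariance idea in the second part of this abstract, here is a minimal sketch, not the dissertation's EM code, that fits a sparse precision matrix with scikit-learn's GraphicalLasso. The simulated tridiagonal precision matrix and the penalty alpha=0.2 are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Simulate Gaussian data whose true precision matrix is sparse (tridiagonal).
rng = np.random.default_rng(2)
p = 20
prec = np.eye(p) + np.diag(0.4 * np.ones(p - 1), 1) + np.diag(0.4 * np.ones(p - 1), -1)
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=50)  # n only modestly above p

# L1-penalised maximum likelihood: larger alpha gives a sparser precision matrix.
model = GraphicalLasso(alpha=0.2).fit(X)
est_prec = model.precision_
nnz = int((np.abs(est_prec[np.triu_indices(p, 1)]) > 1e-6).sum())
print("nonzero off-diagonal entries kept:", nnz, "of", p * (p - 1) // 2)
```

In a mixture model, the same penalty would be applied to each cluster's precision matrix inside every M-step, which is the role the dissertation's penalised EM plays.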
6. Shen, Xilin. "Multiscale analysis of high dimensional data." PhD diss., 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3284443.
7. Wang, Wanjie. "Clustering Problems for High Dimensional Data." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/384.

Abstract:
We consider a clustering problem where we observe feature vectors Xi ∈ Rp, i = 1, 2, ..., n, from several possible classes. The class labels are unknown, and the main interest is to estimate them. We propose a three-step clustering procedure: we first evaluate the significance of each feature by the Kolmogorov-Smirnov statistic, then select the small fraction of features whose Kolmogorov-Smirnov scores exceed a preselected threshold t > 0, and then use only the selected features for clustering by a version of Principal Component Analysis (PCA). One of the main challenges in this procedure is how to set the threshold t. We propose a new approach to setting the threshold, whose core is the so-called Signal-to-Noise Ratio (SNR) in post-selection PCA. The SNR is reminiscent of the recent innovation of Higher Criticism; for this reason, we call the proposed threshold the Higher Criticism Threshold (HCT), although it is significantly different from the HCT proposed earlier by [Donoho 2008] in the context of classification. Motivated by many examples in Big Data, we study spectral clustering with the HCT in a model where the signals are both rare and weak, for the two-class clustering case. Through delicate PCA, we forge a close link between the HCT and the ideal threshold choice, and show that the HCT yields optimal results in the spectral clustering approach. The approach is successfully applied to three gene microarray data sets, where it compares favorably with existing clustering methods. Our analysis is subtle and requires new developments in Random Matrix Theory (RMT). One challenge we face is that most results in RMT cannot be applied directly to our case: existing results are usually for matrices with i.i.d. entries, but the object of interest here is the post-selection data matrix, where (due to feature selection) the columns are non-independent and have hard-to-track distributions. We develop intricate new RMT to overcome this problem. We also find theoretical approximations for the tail distribution of the Kolmogorov-Smirnov statistic under the null and alternative hypotheses; with these approximations, we can establish the effectiveness of the KS statistic. In addition, we find the fundamental limits for the clustering, signal recovery, and detection problems under the Asymptotic Rare and Weak model: we identify the boundary such that when the model parameters are beyond it, inference is impossible, and otherwise there are methods (usually exhaustive search) that achieve it.
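Here is a minimal sketch of the three-step procedure described above: score features with the Kolmogorov-Smirnov statistic, keep those above a threshold, and cluster on the leading principal component of the retained columns. It is an illustration under stated assumptions, not the author's code: the threshold is passed in by hand rather than set by Higher Criticism, and the toy data are far stronger than the rare/weak regime the thesis analyses.

```python
import numpy as np
from scipy.stats import kstest
from sklearn.cluster import KMeans

def ks_screen_cluster(X, t, k=2):
    """KS-score each standardised feature against N(0,1), keep scores > t,
    then run k-means on the top principal components of the kept columns."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)  # null features roughly N(0, 1)
    scores = np.array([kstest(Z[:, j], "norm").statistic for j in range(Z.shape[1])])
    kept = scores > t  # the thesis chooses t via the Higher Criticism Threshold
    U, _, _ = np.linalg.svd(Z[:, kept], full_matrices=False)
    return kept, KMeans(n_clusters=k, n_init=10).fit_predict(U[:, : k - 1])

# Toy run: two classes that differ only in the first 20 of 500 features.
rng = np.random.default_rng(3)
n, p, s = 1000, 500, 20
labels = rng.integers(0, 2, n)
X = rng.standard_normal((n, p))
X[:, :s] += 3.0 * labels[:, None]
kept, pred = ks_screen_cluster(X, t=0.045)
print("signal features kept:", int(kept[:s].sum()), "noise kept:", int(kept[s:].sum()))
print("label agreement:", max(np.mean(pred == labels), np.mean(pred != labels)))
```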
8. Wang, Wanjie. "Clustering Problems for High Dimensional Data." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/443.
9. Csikós, Mónika. "Efficient Approximations of High-Dimensional Data." Thesis, Université Gustave Eiffel, 2022. http://www.theses.fr/2022UEFL2004.

Abstract:
In this thesis, we study approximations of set systems (X, S), where X is a base set and S consists of subsets of X called ranges. Given a finite set system, our goal is to construct a small subset A of X such that each range is 'well approximated'. In particular, for a given parameter epsilon in (0, 1), we say that a subset A of X is an epsilon-approximation of (X, S) if for any range R in S, the fractions |A ∩ R|/|A| and |R|/|X| are epsilon-close. Research on such approximations started in the 1950s, with random sampling being the key tool for showing their existence. Since then, the notion of approximations has become a fundamental structure across several communities: learning theory, statistics, combinatorics, and algorithms. A breakthrough in the study of approximations dates back to 1971, when Vapnik and Chervonenkis studied set systems with finite VC dimension, which turned out to be a key parameter for characterising their complexity. For instance, if a set system (X, S) has VC dimension d, then a uniform sample of O(d/epsilon^2) points is an epsilon-approximation of (X, S) with high probability. Importantly, the size of the approximation depends only on epsilon and d; it is independent of the input sizes |X| and |S|! In the first part of this thesis, we give a modular, self-contained, intuitive proof of the above uniform sampling guarantee. In the second part, we give an improvement on a 30-year-old algorithmic bottleneck: constructing matchings with low crossing number. This can be applied to build approximations with improved guarantees. Finally, we answer a 30-year-old open problem of Blumer et al. by proving tight lower bounds on the VC dimension of unions of half-spaces, a geometric set system that appears in several applications, e.g., coreset constructions.
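The uniform-sampling guarantee quoted above is easy to check empirically. The sketch below, an illustration rather than anything from the thesis, draws a uniform sample of size proportional to d/epsilon^2 from a large one-dimensional base set and measures the epsilon-approximation property over random intervals (a range space of small VC dimension); the constant 8 in the sample size is an arbitrary illustrative choice.

```python
import numpy as np

def max_deviation(points, sample, ranges):
    """Largest | |A∩R|/|A| - |R|/|X| | over the given ranges."""
    return max(abs(np.mean(r(points)) - np.mean(r(sample))) for r in ranges)

rng = np.random.default_rng(4)
X = rng.random((100_000, 1))        # base set: 100k points on the unit interval
eps, d = 0.05, 2                    # intervals on a line have VC dimension 2
m = int(np.ceil(8 * d / eps ** 2))  # O(d/eps^2) sample size
A = X[rng.choice(len(X), size=m, replace=False)]

# Ranges: 200 random intervals [a, b].
intervals = np.sort(rng.random((200, 2)), axis=1)
ranges = [lambda P, a=a, b=b: (P[:, 0] >= a) & (P[:, 0] <= b) for a, b in intervals]
print("max deviation:", max_deviation(X, A, ranges), "vs eps =", eps)
```

The printed deviation is typically well below eps, and, as the abstract stresses, the required sample size m does not grow with |X|.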
10. Qin, Yingli. "Statistical inference for high-dimensional data." [Ames, Iowa: Iowa State University], 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3389139.

Books on the topic "High dimensional data"

1. Masulli, Francesco, Alfredo Petrosino, and Stefano Rovetta, eds. Clustering High-Dimensional Data. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-48577-4.
2. Shinmura, Shuichi. High-dimensional Microarray Data Analysis. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-5998-9.
3. Bühlmann, Peter, and Sara van de Geer. Statistics for High-Dimensional Data. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20192-9.
4. Bolón-Canedo, Verónica, Noelia Sánchez-Maroño, and Amparo Alonso-Betanzos. Feature Selection for High-Dimensional Data. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-21858-8.
5. Frigessi, Arnoldo, Peter Bühlmann, Ingrid K. Glad, Mette Langaas, Sylvia Richardson, and Marina Vannucci, eds. Statistical Analysis for High-Dimensional Data. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-27099-9.
6. Li, Xiaochun, and Ronghui Xu, eds. High-Dimensional Data Analysis in Cancer Research. New York, NY: Springer New York, 2009. http://dx.doi.org/10.1007/978-0-387-69765-9.
7. Li, Xiaochun, and Ronghui Xu, eds. High-dimensional data analysis in cancer research. New York, NY: Springer, 2009.
8. Landgrebe, D. A., and United States National Aeronautics and Space Administration, eds. Spectral feature design in high dimensional multispectral data. West Lafayette, Ind.: School of Electrical Engineering, Purdue University, 1988.
9. Garcia de la Garza, Angel. Functional Data Analysis and Machine Learning for High-Dimensional Structured Data. [New York, N.Y.?]: [publisher not identified], 2022.

Book chapters on the topic "High dimensional data"

1. Forsyth, David. "High Dimensional Data." In Applied Machine Learning, 69–91. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-18114-7_4.
2. Schintler, Laurie A. "High Dimensional Data." In Encyclopedia of Big Data, 1–3. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-319-32001-4_552-2.
3. Schintler, Laurie A. "High Dimensional Data." In Encyclopedia of Big Data, 546–48. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-319-32010-6_552.
4. Schintler, Laurie A. "High Dimensional Data." In Encyclopedia of Big Data, 1–3. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-319-32001-4_552-1.
5. Marron, J. S., and Ian L. Dryden. "High Dimensional Asymptotics." In Object Oriented Data Analysis, 275–92. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781351189675-14.
6. Marron, J. S., and Ian L. Dryden. "High-Dimensional Inference." In Object Oriented Data Analysis, 257–74. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781351189675-13.
7. Zou, Hui. "High-Dimensional Classification." In Handbook of Big Data Analytics, 225–61. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-18284-1_9.
8. Masulli, Francesco, and Stefano Rovetta. "Clustering High-Dimensional Data." In Clustering High-Dimensional Data, 1–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-48577-4_1.
9. Klawonn, Frank, Frank Höppner, and Balasubramaniam Jayaram. "What Are Clusters in High Dimensions and Are They Difficult to Find?" In Clustering High-Dimensional Data, 14–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-48577-4_2.
10. Assent, Ira. "Efficient Density-Based Subspace Clustering in High Dimensions." In Clustering High-Dimensional Data, 34–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-48577-4_3.

Conference papers on the topic "High dimensional data"

1. Agarwal, Deepak, Datong Chen, Long-ji Lin, Jayavel Shanmugasundaram, and Erik Vee. "Forecasting high-dimensional data." In Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data. New York, NY: ACM Press, 2010. http://dx.doi.org/10.1145/1807167.1807277.
2. Gadepally, Vijay, and Jeremy Kepner. "Big data dimensional analysis." In 2014 IEEE High Performance Extreme Computing Conference (HPEC). IEEE, 2014. http://dx.doi.org/10.1109/hpec.2014.7040944.
3. Sharma, Varun Kumar, and Anju Bala. "Clustering for high dimensional data." In 2014 International Conference on Networks & Soft Computing (ICNSC). IEEE, 2014. http://dx.doi.org/10.1109/cnsc.2014.6906700.
4. Tahmoush, Dave, and Hanan Samet. "High-dimensional similarity retrieval using dimensional choice." In 2008 IEEE 24th International Conference on Data Engineering Workshop (ICDE Workshop 2008). IEEE, 2008. http://dx.doi.org/10.1109/icdew.2008.4498342.
5. Georgakopoulos, Spiros V., Sotiris K. Tasoulis, and Vassilis P. Plagianakos. "Efficient change detection for high dimensional data streams." In 2015 IEEE International Conference on Big Data (Big Data). IEEE, 2015. http://dx.doi.org/10.1109/bigdata.2015.7364010.
6. Yamamoto, Yoshitaka, and Koji Iwanuma. "Online pattern mining for high-dimensional data streams." In 2015 IEEE International Conference on Big Data (Big Data). IEEE, 2015. http://dx.doi.org/10.1109/bigdata.2015.7364109.
7. Voicu, Iulian, and Denis Kouame. "High dimensional data processing for fetal activity evaluation." In 2017 IEEE International Conference on Big Data (Big Data). IEEE, 2017. http://dx.doi.org/10.1109/bigdata.2017.8258397.
8. Peng, Hankui, Nicos Pavlidis, Idris Eckley, and Ioannis Tsalamanis. "Subspace Clustering of Very Sparse High-Dimensional Data." In 2018 IEEE International Conference on Big Data (Big Data). IEEE, 2018. http://dx.doi.org/10.1109/bigdata.2018.8622472.
9. Schleif, Frank-Michael, Thomas Villmann, and Xibin Zhu. "High Dimensional Matrix Relevance Learning." In 2014 IEEE International Conference on Data Mining Workshop (ICDMW). IEEE, 2014. http://dx.doi.org/10.1109/icdmw.2014.15.
10. Szekely, Eniko, Eric Bruno, and Stephane Marchand-Maillet. "High-Dimensional Multimodal Distribution Embedding." In 2010 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2010. http://dx.doi.org/10.1109/icdmw.2010.194.

Reports on the topic "High dimensional data"

1. Ding, Chris, Xiaofeng He, Hongyuan Zha, and Horst Simon. Adaptive dimension reduction for clustering high dimensional data. Office of Scientific and Technical Information (OSTI), October 2002. http://dx.doi.org/10.2172/807420.
2. Hansen, Christian, Ivan Fernandez-Val, and Victor Chernozhukov. Program evaluation with high-dimensional data. Institute for Fiscal Studies, November 2013. http://dx.doi.org/10.1920/wp.cem.2013.5713.
3. Fernandez-Val, Ivan, Alexandre Belloni, Victor Chernozhukov, and Christian Hansen. Program evaluation with high-dimensional data. Institute for Fiscal Studies, December 2013. http://dx.doi.org/10.1920/wp.cem.2013.7713.
4. Hansen, Christian, Ivan Fernandez-Val, Victor Chernozhukov, and Alexandre Belloni. Program evaluation with high-dimensional data. Institute for Fiscal Studies, August 2014. http://dx.doi.org/10.1920/wp.cem.2014.3314.
5. Fernandez-Val, Ivan, Christian Hansen, Victor Chernozhukov, and Alexandre Belloni. Program evaluation with high-dimensional data. Institute for Fiscal Studies, September 2015. http://dx.doi.org/10.1920/wp.cem.2015.5515.
6. Wasserman, Larry, and John Lafferty. Statistical Machine Learning for Structured and High Dimensional Data. Fort Belvoir, VA: Defense Technical Information Center, September 2014. http://dx.doi.org/10.21236/ada610544.
7. Wegman, Edward J. Visualization Methods for the Exploration of High Dimensional Data. Fort Belvoir, VA: Defense Technical Information Center, August 1998. http://dx.doi.org/10.21236/ada358165.
8. Belloni, Alexandre, Victor Chernozhukov, Ivan Fernandez-Val, and Christian Hansen. Program evaluation and causal inference with high-dimensional data. Institute for Fiscal Studies, March 2016. http://dx.doi.org/10.1920/wp.cem.2016.1316.
9. Meng, Zhaoyi. High Performance Computing and Real Time Software for High Dimensional Data Classification. Office of Scientific and Technical Information (OSTI), May 2018. http://dx.doi.org/10.2172/1485604.
10. Meinshausen, Nicolai, and Bin Yu. Lasso-type recovery of sparse representations for high-dimensional data. Fort Belvoir, VA: Defense Technical Information Center, December 2006. http://dx.doi.org/10.21236/ada472998.