Academic literature on the topic 'Very large data sets'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Very large data sets.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Very large data sets"

1

Zhang, Kui, Linlin Ge, Zhe Hu, Alex Hay-Man Ng, Xiaojing Li, and Chris Rizos. "Phase Unwrapping for Very Large Interferometric Data Sets." IEEE Transactions on Geoscience and Remote Sensing 49, no. 10 (October 2011): 4048–61. http://dx.doi.org/10.1109/tgrs.2011.2130530.

2

Kettaneh, Nouna, Anders Berglund, and Svante Wold. "PCA and PLS with very large data sets." Computational Statistics & Data Analysis 48, no. 1 (January 2005): 69–85. http://dx.doi.org/10.1016/j.csda.2003.11.027.

3

Bottou, Léon, and Yann Le Cun. "On-line learning for very large data sets." Applied Stochastic Models in Business and Industry 21, no. 2 (2005): 137–51. http://dx.doi.org/10.1002/asmb.538.

4

Cressie, Noel, and Gardar Johannesson. "Fixed rank kriging for very large spatial data sets." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 70, no. 1 (January 4, 2008): 209–26. http://dx.doi.org/10.1111/j.1467-9868.2007.00633.x.

5

Harrison, L. M., and G. G. R. Green. "A Bayesian spatiotemporal model for very large data sets." NeuroImage 50, no. 3 (April 2010): 1126–41. http://dx.doi.org/10.1016/j.neuroimage.2009.12.042.

6

Kazar, Baris. "High performance spatial data mining for very large data-sets." ACM SIGPLAN Notices 38, no. 10 (October 2003): 1. http://dx.doi.org/10.1145/966049.781509.

7

Angiulli, F., and G. Folino. "Distributed Nearest Neighbor-Based Condensation of Very Large Data Sets." IEEE Transactions on Knowledge and Data Engineering 19, no. 12 (December 2007): 1593–606. http://dx.doi.org/10.1109/tkde.2007.190665.

8

Maarel, Eddy van der, Ileana Espejel, and Patricia Moreno-Casasola. "Two-step vegetation analysis based on very large data sets." Vegetatio 68, no. 3 (January 1987): 139–43. http://dx.doi.org/10.1007/bf00114714.

9

Hathaway, Richard J., and James C. Bezdek. "Extending fuzzy and probabilistic clustering to very large data sets." Computational Statistics & Data Analysis 51, no. 1 (November 2006): 215–34. http://dx.doi.org/10.1016/j.csda.2006.02.008.

10

Wang, Liang, James C. Bezdek, Christopher Leckie, and Ramamohanarao Kotagiri. "Selective sampling for approximate clustering of very large data sets." International Journal of Intelligent Systems 23, no. 3 (2008): 313–31. http://dx.doi.org/10.1002/int.20268.


Dissertations / Theses on the topic "Very large data sets"

1

Quddus, Syed. "Accurate and efficient clustering algorithms for very large data sets." Thesis, Federation University Australia, 2017. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/162586.

Abstract:
The ability to mine and extract useful information from large data sets is a common concern for organizations. Data on the internet is growing rapidly, and the importance of developing new approaches to collect, store and mine large amounts of data is increasing significantly. Clustering is one of the main tasks in data mining. Many clustering algorithms have been proposed, but there are still clustering problems that have not been addressed in depth, especially clustering problems in large data sets. Clustering in large data sets is important in many applications, including network intrusion detection systems, fraud detection in banking systems, air traffic control, web logs, sensor networks, social networks and bioinformatics. Data sets in these applications contain from hundreds of thousands to hundreds of millions of data points, and they may contain hundreds or thousands of attributes. Recent developments in computer hardware allow data sets with hundreds of thousands and even millions of data points to be stored in random access memory and read repeatedly, which makes it possible to apply existing clustering algorithms to such data sets. However, these algorithms require prohibitively large CPU time and fail to produce an accurate solution. It is therefore important to develop clustering algorithms that are accurate and can provide real-time clustering in such data sets; this is especially important in the big data era. The aim of this PhD study is to develop accurate and real-time algorithms for clustering very large data sets containing hundreds of thousands and millions of data points. Such algorithms are developed by combining heuristic algorithms with an incremental approach. They also involve a special procedure to identify dense areas in a data set and to compute a subset of the most informative representative data points in order to decrease the size of a data set. The study develops center-based clustering algorithms, whose success strongly depends on the choice of starting cluster centers; different procedures are proposed to generate such centers, and special procedures are designed to identify the most promising starting cluster centers and to restrict their number. The new clustering algorithms are evaluated using large data sets available in the public domain, and their results are compared with those obtained using several existing center-based clustering algorithms.
Doctor of Philosophy
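For readers skimming this entry, the sketch below illustrates the general idea of center-based clustering with an informed choice of starting centers. It is only a rough Python/NumPy illustration under our own assumptions, not the algorithms developed in the thesis; the function names, the sample size and the density heuristic are all hypothetical.

```python
# Illustrative sketch only (not the thesis's method): center-based clustering
# where the starting centers are drawn from dense regions of a random sample.
import numpy as np

def informed_centers(data, k, sample=2000, seed=None):
    """Pick k starting centers from a random sample, preferring points that
    have many close neighbours in the sample (a crude density heuristic)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(data), size=min(sample, len(data)), replace=False)
    s = data[idx]
    d = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=-1)
    radius = np.median(d)
    density = (d < radius).sum(axis=1)            # neighbours within the radius
    return s[np.argsort(density)[::-1][:k]].copy()

def center_based_clustering(data, k, iters=50, seed=None):
    """Lloyd-style refinement starting from the density-informed centers."""
    centers = informed_centers(data, k, seed=seed)
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        new = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```

A production version aimed at data sets of the size discussed in the thesis would avoid the full point-by-center distance matrix and process the data incrementally.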
2

Harrington, Justin. "Extending linear grouping analysis and robust estimators for very large data sets." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/845.

Abstract:
Cluster analysis is the study of how to partition data into homogeneous subsets so that the partitioned data share some common characteristic. In one to three dimensions, the human eye can distinguish well between clusters of data if clearly separated. However, when there are more than three dimensions and/or the data is not clearly separated, an algorithm is required which needs a metric of similarity that quantitatively measures the characteristic of interest. Linear Grouping Analysis (LGA, Van Aelst et al. 2006) is an algorithm for clustering data around hyperplanes, and is most appropriate when: 1) the variables are related/correlated, which results in clusters with an approximately linear structure; and 2) it is not natural to assume that one variable is a “response”, and the remainder the “explanatories”. LGA measures the compactness within each cluster via the sum of squared orthogonal distances to hyperplanes formed from the data. In this dissertation, we extend the scope of problems to which LGA can be applied. The first extension relates to the linearity requirement inherent within LGA, and proposes a new method of non-linearly transforming the data into a Feature Space, using the Kernel Trick, such that in this space the data might then form linear clusters. A possible side effect of this transformation is that the dimension of the transformed space is significantly larger than the number of observations in a given cluster, which causes problems with orthogonal regression. Therefore, we also introduce a new method for calculating the distance of an observation to a cluster when its covariance matrix is rank deficient. The second extension concerns the combinatorial problem for optimizing a LGA objective function, and adapts an existing algorithm, called BIRCH, for use in providing fast, approximate solutions, particularly for the case when data does not fit in memory. We also provide solutions based on BIRCH for two other challenging optimization problems in the field of robust statistics, and demonstrate, via simulation study as well as application on actual data sets, that the BIRCH solution compares favourably to the existing state-of-the-art alternatives, and in many cases finds a more optimal solution.
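As a pointer for readers unfamiliar with LGA's compactness measure, here is a minimal sketch, assuming NumPy, of the sum of squared orthogonal distances from points to a best-fit hyperplane. The helper names are invented for this illustration, and the dissertation's own extensions (kernelization, rank-deficient covariances, BIRCH) are not shown.

```python
# Illustrative sketch: LGA-style compactness = sum of squared orthogonal
# distances of each cluster's points to its best-fit hyperplane.
import numpy as np

def hyperplane_fit(points):
    """Fit a hyperplane through the centroid; the normal is the direction of
    least variance (last right singular vector of the centred data)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

def orthogonal_ss(points, centroid, normal):
    """Sum of squared orthogonal distances from the points to the hyperplane."""
    return float(np.sum(((points - centroid) @ normal) ** 2))

def partition_compactness(data, labels):
    """LGA-style objective: total orthogonal sum of squares over all clusters."""
    total = 0.0
    for lab in np.unique(labels):
        cluster = data[labels == lab]
        centroid, normal = hyperplane_fit(cluster)
        total += orthogonal_ss(cluster, centroid, normal)
    return total
```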
3

Sandhu, Jatinder Singh. "Combining exploratory data analysis and scientific visualization in the study of very large, space-time data sets /." The Ohio State University, 1990. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487683401443166.

4

Geppert, Leo Nikolaus. "Bayesian and frequentist regression approaches for very large data sets." Thesis (supervisor: Katja Ickstadt; examiner: Andreas Groll), Universitätsbibliothek Dortmund, 2018. http://d-nb.info/1181427479/34.

5

McNeil, Vivienne Heather. "Assessment methodologies for very large, irregularly collected water quality data sets with special reference to the natural waters of Queensland." Thesis, Queensland University of Technology, 2001.

6

Cordeiro, Robson Leonardo Ferreira. "Data mining in large sets of complex data." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-22112011-083653/.

Abstract:
Due to the increasing amount and complexity of the data stored in enterprises' databases, the task of knowledge discovery is nowadays vital to support strategic decisions. However, the mining techniques used in the process usually have high computational costs that come from the need to explore several alternative solutions, in different combinations, to obtain the desired knowledge. The most common mining tasks include data classification, labeling and clustering, outlier detection and missing data prediction. Traditionally, the data are represented by numerical or categorical attributes in a table that describes one element in each tuple. Although the same tasks applied to traditional data are also necessary for more complex data, such as images, graphs, audio and long texts, the complexity and the computational costs associated with handling large amounts of these complex data increase considerably, making most of the existing techniques impractical. Therefore, special data mining techniques for this kind of data need to be developed. This Ph.D. work focuses on the development of new data mining techniques for large sets of complex data, especially for the task of clustering, tightly associated with other data mining tasks that are performed together. Specifically, this doctoral dissertation presents three novel, fast and scalable data mining algorithms well suited to analyzing large sets of complex data: the method Halite for correlation clustering; the method BoW for clustering Terabyte-scale datasets; and the method QMAS for labeling and summarization. Our algorithms were evaluated on real, very large datasets with up to billions of complex elements, and they always presented highly accurate results, being at least one order of magnitude faster than the fastest related works in almost all cases. The real data used come from the following applications: automatic breast cancer diagnosis, satellite imagery analysis, and graph mining on a large web graph crawled by Yahoo! as well as on the graph of all users and their connections from the Twitter social network. Such results indicate that our algorithms allow the development of real-time applications that, potentially, could not be developed without this Ph.D. work, such as software to aid the diagnosis process on the fly in a worldwide healthcare information system, or a system to look for deforestation within the Amazon Rainforest in real time.
7

Chaudhary, Amitabh. "Applied spatial data structures for large data sets." Available to US Hopkins community, 2002. http://wwwlib.umi.com/dissertations/dlnow/3068131.

8

Arvidsson, Johan. "Finding delta difference in large data sets." Thesis, Luleå tekniska universitet, Datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74943.

Abstract:
Finding out what differs between two versions of a file can be done with several different techniques and programs. These techniques and programs usually focus on finding differences in text files, documents, or class files for programming; an example is the popular git tool, which displays the differences between versions of files in a project. A common way to find these differences is the longest common subsequence (LCS) algorithm, which finds the longest common subsequence of the two files as a measure of their similarity. By excluding everything the files have in common, all remaining text is the difference between them, and the LCS approach usually finds these differences in acceptable time. When two corresponding lines are compared to see whether they differ, hashing is used: hashing a line gives its content a unique value, and the hash values of the corresponding lines in both files are compared. If as little as one character on a line differs between the versions, the hash values for those lines differ as well. These techniques are very useful when comparing two versions of a file with text content. With data from a database, some, but not all, of these techniques remain useful; a big difference is that database content is not just added and deleted but also updated. This thesis studies how to apply these techniques to find differences between large data sets, rather than between documents and files, in reasonable time. Three different methods are studied in theory, and their time and space complexities are given. Finally, one of the methods is selected for further study through implementation and testing; only one of the three is implemented because of time constraints. The chosen method offers easy maintainability, a straightforward implementation, and good execution time.
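Since the abstract above walks through the line-hashing and longest-common-subsequence idea, a minimal sketch may help. This is a generic Python illustration of that idea, not code from the thesis; the function name and example data are made up.

```python
# Illustrative sketch: diff two versions by hashing lines and removing the
# longest common subsequence (LCS); whatever remains differs between versions.
def line_diff(old_lines, new_lines):
    a = [hash(line) for line in old_lines]   # hashing gives each line's content a value
    b = [hash(line) for line in new_lines]

    # Dynamic-programming table: lcs[i][j] = LCS length of a[:i] and b[:j].
    lcs = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ha in enumerate(a, 1):
        for j, hb in enumerate(b, 1):
            lcs[i][j] = lcs[i - 1][j - 1] + 1 if ha == hb else max(lcs[i - 1][j], lcs[i][j - 1])

    # Backtrack through the table: lines not on a common path are the differences.
    removed, added, i, j = [], [], len(a), len(b)
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            i, j = i - 1, j - 1
        elif lcs[i - 1][j] >= lcs[i][j - 1]:
            removed.append(old_lines[i - 1]); i -= 1
        else:
            added.append(new_lines[j - 1]); j -= 1
    removed.extend(reversed(old_lines[:i]))
    added.extend(reversed(new_lines[:j]))
    return list(reversed(removed)), list(reversed(added))

print(line_diff(["a", "b", "c"], ["a", "x", "c"]))   # (['b'], ['x'])
```

As the abstract notes, this works line by line for files; database rows that are updated in place need extra handling beyond plain insert/delete detection.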
9

Tricker, Edward A. "Detecting anomalous aggregations of data points in large data sets." Thesis, Imperial College London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.512050.

10

Romig, Phillip R. "Parallel task processing of very large datasets." [Lincoln, Neb. : University of Nebraska-Lincoln], 1999. http://international.unl.edu/Private/1999/romigab.pdf.


Books on the topic "Very large data sets"

1

International Conference on Very Large Data Bases (16th 1990 Brisbane, Qld.). Very large data bases. Edited by McLeod Dennis, Sacks-Davis Ron, and Schek H. -J. Palo Alto, Ca: Morgan Kaufmann, 1990.

2

Cordeiro, Robson L. F., Christos Faloutsos, and Caetano Traina Júnior. Data Mining in Large Sets of Complex Data. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-4890-6.

3

Cordeiro, Robson L. F. Data Mining in Large Sets of Complex Data. London: Springer London, 2013.

4

International Conference on Very Large Data Bases (12th 1986 Kyoto, Japan). Very large data bases: Proceedings. Edited by VLDB Endowment. Los Altos, CA, USA: Distributed by Morgan Kaufmann Publishers, 1986.

5

Keenan, Maryanne P., and United States Agency for Health Care Policy and Research, eds. Measuring cognitive impairment with large data sets. Rockville, MD (18-12 Parklawn Bldg., Rockville 20857): U.S. Dept. of Health and Human Services, Public Health Service, Agency for Health Care Policy and Research, 1990.

6

Wang, Wei, and Jiong Yang, eds. Mining sequential patterns from large data sets. New York: Springer, 2005.

7

Stock, James H. Estimating turning points using large data sets. Cambridge, MA: National Bureau of Economic Research, 2010.

8

International Conference on Very Large Data Bases (11th 1985 Stockholm, Sweden). Very large data bases, Stockholm 1985: 11th International Conference on Very Large Data Bases, Stockholm, August 21-23, 1985. Palo Alto, Calif: [distributed by] Morgan Kaufmann Publishers, 1985.

9

International Conference on Very Large Data Bases (13th 1987 Brighton, England). Very large data bases, Brighton 1987: 13th International Conference on Very Large Data Bases, Brighton, September 1-4, 1987. Edited by P. M. Stocker, William Kent, and P. Hammersley. Los Altos, Calif.: Morgan Kaufmann, 1987.

10

International Conference on Very Large Data Bases (11th 1985 Stockholm, Sweden). Very large data bases, Stockholm 1985: 11th International Conference on Very Large Data Bases, Stockholm, August 21-23, 1985. [S.l.: s.n.], 1985.


Book chapters on the topic "Very large data sets"

1

Johnson, Theodore, and Damianos Chatziantoniou. "Joining Very Large Data Sets." In Databases in Telecommunications, 118–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/10721056_9.

2

Hammer, Barbara, and Alexander Hasenfuss. "Clustering Very Large Dissimilarity Data Sets." In Artificial Neural Networks in Pattern Recognition, 259–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12159-3_24.

3

McNabb, David E. "Researching With Very Large Data Sets." In Research Methods for Public Administration and Nonprofit Management, 4th ed., 251–63. New York; London: Routledge, 2017. http://dx.doi.org/10.4324/9781315181158-20.

4

Strobel, Norbert, Christian Gosch, Jürgen Hesser, and Christoph Poliwoda. "Multiresolution Data Handling for Visualization of Very Large Data Sets." In Informatik aktuell, 106–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-642-18993-7_22.

5

Krogh, Benjamin, Ove Andersen, and Kristian Torp. "Analyzing Electric Vehicle Energy Consumption Using Very Large Data Sets." In Database Systems for Advanced Applications, 471–87. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-18123-3_28.

6

Fu, Lixin. "Querying and Clustering Very Large Data Sets Using Dynamic Bucketing Approach." In Advances in Web-Age Information Management, 279–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45703-8_26.

7

Angiulli, Fabrizio, Stefano Basta, Stefano Lodi, and Claudio Sartori. "A Distributed Approach to Detect Outliers in Very Large Data Sets." In Euro-Par 2010 - Parallel Processing, 329–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15277-1_32.

8

Angiulli, Fabrizio, Clara Pizzuti, and Massimo Ruffolo. "DESCRY: A Density Based Clustering Algorithm for Very Large Data Sets." In Lecture Notes in Computer Science, 203–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28651-6_30.

9

Lerman, Israel, Joaquim Pinto da Costa, and Helena Silva. "Validation of Very Large Data Sets Clustering by Means of a Nonparametric Linear Criterion." In Classification, Clustering, and Data Analysis, 147–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/978-3-642-56181-8_16.

10

Braverman, Amy. "A Strategy for Compression and Analysis of Very Large Remote Sensing Data Sets." In Nonlinear Estimation and Classification, 429–41. New York, NY: Springer New York, 2003. http://dx.doi.org/10.1007/978-0-387-21579-2_29.


Conference papers on the topic "Very large data sets"

1

Almeida, Virgilio. "Exploring very large data sets from online social networks." In the 22nd International Conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2487788.2488143.

2

Sung, E., Zhu Yan, and Li Xuchun. "Accelerating the SVM Learning for Very Large Data Sets." In 18th International Conference on Pattern Recognition (ICPR'06). IEEE, 2006. http://dx.doi.org/10.1109/icpr.2006.201.

3

Kazar, Baris. "High performance spatial data mining for very large data-sets." In the ninth ACM SIGPLAN symposium. New York, New York, USA: ACM Press, 2003. http://dx.doi.org/10.1145/781498.781509.

4

Littau, David, and Daniel Boley. "Using Low-Memory Representations to Cluster Very Large Data Sets." In Proceedings of the 2003 SIAM International Conference on Data Mining. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2003. http://dx.doi.org/10.1137/1.9781611972733.42.

5

Ekpar, Frank, Masaaki Yoneda, and Hiroyuki Hase. "On the Interactive Visualization of Very Large Image Data Sets." In 7th IEEE International Conference on Computer and Information Technology (CIT 2007). IEEE, 2007. http://dx.doi.org/10.1109/cit.2007.80.

6

Owens, A. J. "Empirical modeling of very large data sets using neural networks." In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium. IEEE, 2000. http://dx.doi.org/10.1109/ijcnn.2000.859413.

7

Marks, D., E. Ioup, J. Sample, M. Abdelguerfi, and F. Qaddoura. "Spatio-temporal Knowledge Discovery in Very Large METOC Data Sets." In 2010 4th International Conference on Network and System Security (NSS). IEEE, 2010. http://dx.doi.org/10.1109/nss.2010.61.

8

Cudre-Mauroux, Philippe, Eugene Wu, and Samuel Madden. "TrajStore: An adaptive storage system for very large trajectory data sets." In 2010 IEEE 26th International Conference on Data Engineering (ICDE 2010). IEEE, 2010. http://dx.doi.org/10.1109/icde.2010.5447829.

9

Zeng, Zhi-Qiang, Hua-Rong Xu, Yan-Qi Xie, and Ji Gao. "A geometric approach to train SVM on very large data sets." In 2008 3rd International Conference on Intelligent System and Knowledge Engineering (ISKE 2008). IEEE, 2008. http://dx.doi.org/10.1109/iske.2008.4731074.

10

Chan, Chien-Chung, and Sivaraj Selvaraj. "Distributed Approach to Feature Selection From Very Large Data Sets Using BLEM2." In 2006 Annual Meeting of the North American Fuzzy Information Processing Society. IEEE, 2006. http://dx.doi.org/10.1109/nafips.2006.365470.


Reports on the topic "Very large data sets"

1

Ramnarayan, R., C. Baker, H. Lu, K. Mikkilineni, and J. Richardson. Very Large Parallel Data Flow. Fort Belvoir, VA: Defense Technical Information Center, March 1988. http://dx.doi.org/10.21236/ada196205.

2

Stock, James, and Mark Watson. Estimating Turning Points Using Large Data Sets. Cambridge, MA: National Bureau of Economic Research, November 2010. http://dx.doi.org/10.3386/w16532.

3

Carr, D. B. Looking at large data sets using binned data plots. Office of Scientific and Technical Information (OSTI), April 1990. http://dx.doi.org/10.2172/6930282.

4

Lenat, Douglas B., Keith Goolsbey, Kevin Knight, and Pace Smith. Efficient Pathfinding in Very Large Data Spaces. Fort Belvoir, VA: Defense Technical Information Center, November 2007. http://dx.doi.org/10.21236/ada475328.

5

DeVore, Ronald A., Peter G. Binev, and Robert C. Sharpley. Advanced Mathematical Methods for Processing Large Data Sets. Fort Belvoir, VA: Defense Technical Information Center, October 2008. http://dx.doi.org/10.21236/ada499985.

6

Gertz, E. M., and J. D. Griffin. Support vector machine classifiers for large data sets. Office of Scientific and Technical Information (OSTI), January 2006. http://dx.doi.org/10.2172/881587.

7

Hammond, William E., Vivian West, David Borland, Igor Akushevich, and Eugenia M. Heinz. Novel Visualization of Large Health Related Data Sets. Fort Belvoir, VA: Defense Technical Information Center, March 2014. http://dx.doi.org/10.21236/ada614184.

8

Hammond, William E., Vivian L. West, David Borland, Igor Akushevich, and Eugenia M. Heinz. Novel Visualization of Large Health Related Data Sets. Fort Belvoir, VA: Defense Technical Information Center, March 2015. http://dx.doi.org/10.21236/ada624744.

9

Hammond, William E., Vivian West, David Borland, Igor Akushevich, and Eugenia M. Heinz. Novel Visualization of Large Health Related Data Sets - NPHRD. Fort Belvoir, VA: Defense Technical Information Center, November 2015. http://dx.doi.org/10.21236/ada624632.

10

Hodson, Stephen W., Stephen W. Poole, Thomas Ruwart, and Bradley W. Settlemyer. Moving Large Data Sets Over High-Performance Long Distance Networks. Office of Scientific and Technical Information (OSTI), April 2011. http://dx.doi.org/10.2172/1016604.
