Academic literature on the topic 'Partitional clustering'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Partitional clustering.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Partitional clustering"

1

Aparna K. and Mydhili K. Nair. "Comprehensive Study and Analysis of Partitional Data Clustering Techniques." International Journal of Business Analytics 2, no. 1 (2015): 23–38. http://dx.doi.org/10.4018/ijban.2015010102.

Abstract:
Data clustering has found significant applications in various domains such as bioinformatics, medical data, imaging, marketing studies, and crime analysis. There are several types of data clustering, such as partitional, hierarchical, spectral, density-based, and mixture-model clustering, to name a few. Among these, partitional clustering suits most applications well because of its lower computational requirements. Analyzing the available literature on partitional clustering not only provides good background knowledge but also helps identify open problems in the field. Accordingly, this paper presents a comprehensive study of the literature on partitional data clustering techniques. Thirty-three research articles published by standard publishers from 2005 to 2013 are surveyed under two aspects, namely the technical aspect and the application aspect. The technical aspect is further classified into partitional clustering, constraint-based partitional clustering, and evolutionary-programming-based clustering techniques. Furthermore, an analysis is carried out to assess the importance of the different approaches, so that new developments in partitional data clustering can be carried out more easily by researchers.
2

Ozdemir, Ozer, and Simgenur Cerman. "Performance Comparison with Hierarchical and Partitional Clustering Methods." WSEAS TRANSACTIONS ON COMMUNICATIONS 20 (December 27, 2021): 177–84. http://dx.doi.org/10.37394/23204.2021.20.23.

Abstract:
Clustering is one of the most commonly used techniques in data mining. It can be performed by different families of algorithms, such as hierarchical, partitioning, grid-based, density-based, and graph-based algorithms. This study first explains the concept of data mining, the aims of using it, and its application areas, and then reviews clustering and the clustering algorithms used in data mining. Within the scope of the study, the "Mall Customers" data set from the Kaggle database is clustered with partitional and hierarchical algorithms in order to separate customers into clusters according to their features. In the clusters obtained by partitional algorithms, within-cluster similarity is maximal and between-cluster similarity is minimal, while hierarchical algorithms are based on successively gathering similar observations (or the reverse). The partitional clustering algorithms used are k-means and PAM; the hierarchical algorithms are AGNES and DIANA. The R statistical programming language was used to apply the algorithms, and the analysis results obtained on the data set are interpreted.
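The partitional algorithms the study applies (k-means and PAM) share one idea: iteratively refine a partition by alternating an assignment step and a center-update step. As a rough sketch of that idea (plain Python rather than the study's R code, with illustrative toy points):

```python
# Minimal Lloyd-style k-means on toy 2-D points (k = 2, naive initialization).
def kmeans(points, k, iters=20):
    centers = points[:k]                       # naive init: first k points
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        # Update step: each center moves to its cluster mean.
        centers = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

pts = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3), (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
centers, clusters = kmeans(pts, k=2)
print(sorted(len(c) for c in clusters))        # the two toy blobs split 3 / 3
```

PAM follows the same alternation but restricts centers to actual data points (medoids), which makes it less sensitive to outliers.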
3

Zuo, Yong Xia, Guo Qiang Wang, and Chun Cheng Zuo. "The Segmentation Algorithm for Pavement Cracking Images Based on the Improved Fuzzy Clustering." Applied Mechanics and Materials 319 (May 2013): 362–66. http://dx.doi.org/10.4028/www.scientific.net/amm.319.362.

Abstract:
The segmentation of pavement cracking images is critical for identifying, quantifying, and classifying pavement cracks. An improved fuzzy clustering algorithm is introduced to segment pavement cracking images. The algorithm makes no assumptions about the initial positions of the clusters. For each value of the multiscale parameter it obtains a corresponding hard partition, and the partitions obtained for different parameter values reveal the structure of the image at different partitional scales. The algorithm was tested on actual pavement cracking images, and comparisons with FCM and OTSU show that the improved fuzzy clustering algorithm provides better crack edges.
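The FCM baseline referred to in the comparison alternates soft-membership and center updates until the partition stabilizes. A minimal sketch of standard fuzzy c-means (not the paper's improved algorithm; the fuzzifier m = 2 and the toy data are illustrative):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, eps=1e-9, seed=0):
    """Standard fuzzy c-means: alternate fuzzy-membership and center updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                           # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # Squared distances from every center to every point.
        d2 = ((centers[:, None, :] - X[None, :, :]) ** 2).sum(axis=2) + eps
        inv = d2 ** (-1.0 / (m - 1))             # u_ik proportional to d2_ik^(-1/(m-1))
        U = inv / inv.sum(axis=0)
    return centers, U

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
              [10.0, 10.0], [10.0, 11.0], [11.0, 10.0]])
centers, U = fcm(X)
labels = U.argmax(axis=0)                        # harden the fuzzy partition
```

Thresholding the membership matrix U (rather than taking the argmax) is what yields the hard partitions the abstract mentions.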
4

Alamgir, Zareen, and Hina Naveed. "Efficient Density-Based Partitional Clustering Algorithm." Computing and Informatics 40, no. 6 (2021): 1322–44. http://dx.doi.org/10.31577/cai_2021_6_1322.

5

Kim, J., J. Yang, and S. Ólafsson. "An optimization approach to partitional data clustering." Journal of the Operational Research Society 60, no. 8 (2009): 1069–84. http://dx.doi.org/10.1057/jors.2008.195.

6

Menéndez, Héctor D., David F. Barrero, and David Camacho. "A Genetic Graph-Based Approach for Partitional Clustering." International Journal of Neural Systems 24, no. 03 (2014): 1430008. http://dx.doi.org/10.1142/s0129065714300083.

Abstract:
Clustering is one of the most versatile tools for data analysis. In recent years, clustering that seeks the continuity of data (in opposition to classical centroid-based approaches) has attracted increasing research interest. It is a challenging problem with remarkable practical interest. The most popular continuity clustering method is the spectral clustering (SC) algorithm, which is based on graph cuts: it first generates a similarity graph using a distance measure and then studies the graph spectrum to find the best cut. This approach is sensitive to the parameters of the metric, and a correct parameter choice is critical to cluster quality. This work proposes a new algorithm, inspired by SC, that reduces the parameter dependency while maintaining the quality of the solution. The new algorithm, named genetic graph-based clustering (GGC), takes an evolutionary approach, introducing a genetic algorithm (GA) to cluster the similarity graph. The experimental validation shows that GGC increases the robustness of SC and performs competitively against classical clustering methods, at least on the synthetic and real datasets used in the experiments.
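The SC pipeline that GGC starts from can be sketched compactly for two clusters: build a Gaussian similarity graph, normalize the Laplacian, and cut along the sign of the second eigenvector. The kernel width sigma below is exactly the kind of metric parameter the abstract calls sensitive; GGC replaces the spectral cut with a genetic search over the similarity graph:

```python
import numpy as np

def spectral_bisect(X, sigma=1.0):
    """Baseline spectral bisection: Gaussian similarity graph, symmetric
    normalized Laplacian, sign split on the second-smallest eigenvector."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-d2 / (2 * sigma ** 2))        # similarity graph (sigma: metric parameter)
    np.fill_diagonal(W, 0.0)
    deg = W.sum(axis=1)
    L = np.diag(deg) - W                      # graph Laplacian
    Lsym = L / np.sqrt(np.outer(deg, deg))    # D^{-1/2} L D^{-1/2}
    _, vecs = np.linalg.eigh(Lsym)            # eigenpairs in ascending order
    return vecs[:, 1] >= 0                    # cut along the Fiedler direction

X = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5],
              [5.0, 5.0], [5.5, 5.0], [5.0, 5.5]])
labels = spectral_bisect(X)
```

Changing sigma reshapes W and can flip the cut entirely, which is the parameter sensitivity the paper targets.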
7

Zhu, Erzhou, and Ruhui Ma. "An effective partitional clustering algorithm based on new clustering validity index." Applied Soft Computing 71 (October 2018): 608–21. http://dx.doi.org/10.1016/j.asoc.2018.07.026.

8

Gullo, Francesco, and Andrea Tagarelli. "Uncertain centroid based partitional clustering of uncertain data." Proceedings of the VLDB Endowment 5, no. 7 (2012): 610–21. http://dx.doi.org/10.14778/2180912.2180914.

9

Prakash, Jay, and P. K. Singh. "An effective multiobjective approach for hard partitional clustering." Memetic Computing 7, no. 2 (2015): 93–104. http://dx.doi.org/10.1007/s12293-014-0147-5.

10

Wang, Jiahai, and Yalan Zhou. "Stochastic optimal competitive Hopfield network for partitional clustering." Expert Systems with Applications 36, no. 2 (2009): 2072–80. http://dx.doi.org/10.1016/j.eswa.2007.12.017.


Dissertations / Theses on the topic "Partitional clustering"

1

Hahsler, Michael, and Kurt Hornik. "Dissimilarity Plots. A Visual Exploration Tool for Partitional Clustering." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2009. http://epub.wu.ac.at/1244/1/document.pdf.

Abstract:
For hierarchical clustering, dendrograms provide a convenient and powerful visualization. Although many visualization methods have been suggested for partitional clustering, their usefulness deteriorates quickly with increasing dimensionality of the data, and/or they fail to represent structure between and within clusters simultaneously. In this paper we extend (dissimilarity) matrix shading with several reordering steps based on seriation. Both methods, matrix shading and seriation, have been well known for a long time; however, only recent algorithmic improvements allow seriation to be used for larger problems. Furthermore, seriation is used in a novel stepwise process (within each cluster and between clusters), which leads to a visualization technique that is independent of the dimensionality of the data. A big advantage is that it presents the structure between clusters and the micro-structure within clusters in one concise plot. This not only allows cluster quality to be judged but also makes mis-specification of the number of clusters apparent. We give a detailed discussion of the construction of dissimilarity plots and demonstrate their usefulness with several examples.
Series: Research Report Series / Department of Statistics and Mathematics
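The reordering idea can be approximated in a few lines: group the dissimilarity matrix by cluster, then order objects within each cluster by dissimilarity to the cluster medoid, so that shading the matrix reveals between- and within-cluster structure at once. This is a crude stand-in for the seriation algorithms the paper actually uses:

```python
import numpy as np

def coarse_dissimilarity_order(D, labels):
    """Reorder a dissimilarity matrix: cluster blocks first, then a simple
    within-cluster ordering by dissimilarity to the cluster medoid."""
    labels = np.asarray(labels)
    order = []
    for c in sorted(set(labels.tolist())):
        idx = np.flatnonzero(labels == c)
        sub = D[np.ix_(idx, idx)]
        medoid = idx[sub.sum(axis=1).argmin()]   # object minimizing total dissimilarity
        order.extend(sorted(idx.tolist(), key=lambda i: D[i, medoid]))
    order = np.array(order)
    return D[np.ix_(order, order)], order

x = np.array([0.0, 10.0, 1.0, 11.0, 2.0, 12.0])   # two 1-D clusters, interleaved
D = np.abs(x[:, None] - x[None, :])
Dr, order = coarse_dissimilarity_order(D, [0, 1, 0, 1, 0, 1])
```

In the reordered matrix Dr, low values concentrate in diagonal blocks (within-cluster structure) and high values off the diagonal (between-cluster structure), which is what the shaded plot shows.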
2

Malm, Patrik. "Development of a hierarchical k-selecting clustering algorithm – application to allergy." Thesis, Linköping University, The Department of Physics, Chemistry and Biology, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10273.

Abstract:
The objective of this Master's thesis was to develop, implement and evaluate an iterative procedure for hierarchical clustering with good overall performance, which also merges features of certain already-described algorithms into a single integrated package. The resulting tool was then applied to an allergen IgE-reactivity data set. The implemented algorithm uses a hierarchical approach that illustrates the emergence of patterns in the data. At each level of the hierarchical tree, a partitional clustering method divides the data into k groups, where the number k is decided through cluster validation techniques. The cross-reactivity analysis by means of the new algorithm largely arrives at the anticipated cluster formations in the allergen data, which strengthens results obtained through previous studies on the subject. Notably, though, certain unexpected findings presented in the former analysis were aggregated differently, and more in line with phylogenetic and protein family relationships, by the novel clustering package.
3

Nordqvist, My. "Classify part of day and snow on the load of timber stacks : A comparative study between partitional clustering and competitive learning." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42238.

Abstract:
In today's society, companies are trying to find ways to utilize all the data they have, since it contains valuable information and insights for making better decisions. This includes data used to keep track of timber that flows between forest and industry. The growth of Artificial Intelligence (AI) and Machine Learning (ML) has enabled the development of ML models that automate the measurement of timber on timber trucks based on images. However, improving the results requires extracting information from unlabeled images in order to determine weather and lighting conditions. The objective of this study is to perform an extensive study of classifying unlabeled images into the categories daylight, darkness, and snow on the load. A comparative study between partitional clustering and competitive learning is conducted to investigate which method gives the best results in terms of different clustering performance metrics. It also examines how dimensionality reduction affects the outcome. The algorithms K-means and the Kohonen Self-Organizing Map (SOM) are selected for the clustering. Each model is investigated with respect to the number of clusters, size of the dataset, clustering time, clustering performance, and manual samples from each cluster. The results indicate a noticeable clustering performance discrepancy between the algorithms concerning the number of clusters, dataset size, and manual samples. The use of dimensionality reduction led to shorter clustering time but slightly worse clustering performance. The evaluation results further show that the clustering time of the Kohonen SOM is significantly higher than that of K-means.
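The competitive-learning side of the comparison, the Kohonen SOM, can be sketched as a tiny 1-D map on toy 2-D data (grid size, learning rate, and neighbourhood radius below are illustrative choices, not the thesis settings):

```python
import numpy as np

def som_1d(X, n_units=4, iters=300, lr=0.5, radius=0.7, seed=0):
    """Tiny 1-D Kohonen SOM: draw a sample, find its best-matching unit (BMU),
    and pull the BMU and its grid neighbours toward the sample."""
    rng = np.random.default_rng(seed)
    W = X[rng.integers(0, len(X), n_units)].astype(float)     # init units from data
    grid = np.arange(n_units)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = ((W - x) ** 2).sum(axis=1).argmin()             # competition step
        h = np.exp(-((grid - bmu) ** 2) / (2 * radius ** 2))  # grid neighbourhood
        W += (lr * (1 - t / iters) * h)[:, None] * (x - W)    # decaying update
    return W

X = np.array([[0.0, 0.0], [0.3, 0.1], [0.1, 0.2],
              [10.0, 10.0], [10.2, 9.9], [9.9, 10.1]])
units = som_1d(X)
```

Unlike K-means, each sample updates several units (weighted by the grid neighbourhood), which is the per-iteration overhead behind the SOM's longer clustering times noted above.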
4

Queiroga, Eduardo Vieira. "Abordagens meta-heurísticas para clusterização de dados e segmentação de imagens." Universidade Federal da Paraíba, 2017. http://tede.biblioteca.ufpb.br:8080/handle/tede/9249.

Abstract:
Many computational problems are considered hard due to their combinatorial nature, and the use of exhaustive search techniques for solving medium- and large-size instances becomes unfeasible. Some data clustering and image segmentation problems belong to the NP-hard class and require adequate treatment by heuristic techniques such as metaheuristics. Data clustering is a set of problems in the fields of pattern recognition and unsupervised machine learning that aims at finding groups (clusters) of similar objects in a dataset, using a predetermined measure of similarity. The partitional clustering problem aims at completely separating the data into disjoint, non-empty clusters. For center-based clustering methods, the minimal intracluster distance criterion is one of the most employed. This work proposes an approach based on the Continuous Greedy Randomized Adaptive Search Procedure (C-GRASP) metaheuristic; high-quality results were obtained through comparative experiments between the proposed method and other metaheuristics from the literature. In the computational vision field, image segmentation is the process of partitioning an image into regions of interest (sets of pixels) without overlap. Histogram thresholding is one of the simplest types of segmentation for grayscale images, and Otsu's method, one of the most popular, searches for the thresholds that maximize the variance between the segments. For images with deep gray levels, exhaustive search demands a high computational cost, since the number of possible solutions grows exponentially with the number of thresholds; metaheuristics have therefore been playing an important role in finding good-quality thresholds. In this work, an approach based on Quantum-behaved Particle Swarm Optimization (QPSO) is investigated for multilevel thresholding of images available in the literature, and a local search based on Variable Neighborhood Descent (VND) is proposed to improve the convergence of the search for the thresholds. A specific application of thresholding to electron microscopy images for the microstructural analysis of cementitious materials is also investigated, along with graph algorithms for crack detection and feature extraction.
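Otsu's criterion mentioned above picks thresholds that maximize the between-class variance of the resulting segments; the multilevel search handled by QPSO generalizes the single-threshold case, which is small enough to sketch exhaustively (synthetic two-level image, 256 gray levels):

```python
import numpy as np

def otsu_threshold(gray):
    """Single-threshold Otsu: pick the gray level maximizing the
    between-class variance of the two resulting segments."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # gray-level probabilities
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

t = otsu_threshold(np.array([50] * 100 + [200] * 100))
```

With n thresholds the candidate set grows as roughly 256^n, which is why the thesis turns to QPSO plus VND instead of exhaustive search.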
5

Kong, Tian Fook. "Multilevel spectral clustering : graph partitions and image segmentation." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45275.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. Includes bibliographical references (p. 145-146).
While the spectral graph partitioning method gives high-quality segmentation, segmenting large graphs by the spectral method is computationally expensive. Numerous multilevel graph partitioning algorithms have been proposed to reduce the segmentation time for the spectral partition of large graphs. However, the greedy local refinement used in these multilevel schemes has a tendency to trap the partition in poor local minima. In this thesis, I develop a multilevel graph partitioning algorithm that incorporates the inverse powering method with greedy local refinement. This combination ensures that the partition quality of the multilevel method is as good as, if not better than, segmenting the large graph by the spectral method. In addition, I present a scheme to construct the adjacency matrix W and degree matrix D for the coarse graphs. The proposed multilevel graph partitioning algorithm is able to bisect a graph (k = 2) in significantly shorter time than segmenting the original graph without the multilevel implementation, while achieving the same normalized cut (Ncut) value. The starting eigenvector, obtained by solving a generalized eigenvalue problem on the coarsest graph, is close to the Fiedler vector of the original graph; hence the inverse iteration needs only a few iterations to converge. In the k-way multilevel graph partition, the larger the graph, the greater the reduction in segmentation time. For multilevel image segmentation, the multilevel scheme gives better segmentation than segmenting the original image and has a higher chance of preserving the salient parts of an object. In this work, I also show that the Ncut value is not the ultimate yardstick for the segmentation quality of an image: finding a partition with a lower Ncut value does not necessarily mean better segmentation quality. Segmenting large images by the multilevel method offers both speed and quality.
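The coarse-graph construction the abstract mentions (building W and D for smaller graphs) is commonly done by heavy-edge matching; a sketch of one coarsening level under that common scheme (not necessarily the thesis's own construction):

```python
import numpy as np

def coarsen(W):
    """One level of heavy-edge matching: greedily pair each unmatched vertex
    with its heaviest unmatched neighbour, then sum edge weights between pairs
    to build the coarse adjacency matrix Wc."""
    n = W.shape[0]
    match = -np.ones(n, dtype=int)
    for v in range(n):
        if match[v] != -1:
            continue
        nbrs = [(W[v, u], u) for u in range(n)
                if u != v and W[v, u] > 0 and match[u] == -1]
        match[v] = max(nbrs)[1] if nbrs else v       # heaviest free neighbour, else self
        match[match[v]] = v
    # Map each matched pair (or singleton) to one coarse vertex.
    coarse_id, cmap = 0, -np.ones(n, dtype=int)
    for v in range(n):
        if cmap[v] == -1:
            cmap[v] = cmap[match[v]] = coarse_id
            coarse_id += 1
    Wc = np.zeros((coarse_id, coarse_id))
    for i in range(n):
        for j in range(n):
            if cmap[i] != cmap[j]:
                Wc[cmap[i], cmap[j]] += W[i, j]
    return Wc, cmap

# Weighted path 0-1-2-3: heavy edges (0,1) and (2,3) collapse into two coarse vertices.
W = np.array([[0, 3, 0, 0], [3, 0, 1, 0], [0, 1, 0, 3], [0, 0, 3, 0]], dtype=float)
Wc, cmap = coarsen(W)
```

The coarse degree matrix then follows as `np.diag(Wc.sum(axis=1))`, and the eigenproblem is solved on the small graph before refining back up the hierarchy.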
6

Dimitriadou, Evgenia, Andreas Weingessel, and Kurt Hornik. "Fuzzy voting in clustering." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 1999. http://epub.wu.ac.at/742/1/document.pdf.

Abstract:
In this paper we present a fuzzy voting scheme for cluster algorithms. This fuzzy voting method allows us to combine several runs of cluster algorithms, resulting in a common fuzzy partition. This helps us to overcome instabilities of the cluster algorithms and results in a better clustering.
Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
7

Macrì, Silvia Maria. "Unsupervised Mondrian forest: a space partition method for clustering." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24844/.

Abstract:
Cluster analysis is an ensemble of techniques whose aim is to divide an unlabeled dataset into groups, so that samples with similar features are assigned to the same group and dissimilar samples to different ones. It has been applied in several fields and comprises a wide variety of techniques, each designed for a specific type of dataset and requiring different prior information about its structure. In this thesis we discuss a new clustering technique whose structure is similar to that of an unsupervised random forest. It is based on the Mondrian stochastic process and consists of a hierarchical partition of the space on which the given dataset is defined. It outputs an estimate, defined on the whole underlying space, of the probability of belonging to a certain class, and it shows some interesting properties, such as the automatic determination of the number of clusters and the capability to deal with datasets of different shapes. After a brief theoretical introduction to clustering, the Mondrian stochastic process and its main mathematical properties are defined. The Mondrian clustering algorithm is then described, and results of its application to two- and three-dimensional toy datasets are presented. The discussion focuses on the role of the algorithm's parameters, which characterize the method and can be tuned by the user to obtain better performance on different datasets, and on comparing the results with those of some notable clustering algorithms. Finally, some aspects that could be further investigated are discussed.
8

Dimitriadou, Evgenia, Andreas Weingessel, and Kurt Hornik. "A voting-merging clustering algorithm." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 1999. http://epub.wu.ac.at/94/1/document.pdf.

Abstract:
In this paper we propose an unsupervised voting-merging scheme that is capable of clustering data sets and of finding the number of clusters existing in them. The voting part of the algorithm allows us to combine several runs of clustering algorithms, resulting in a common partition. This helps us to overcome instabilities of the clustering algorithms and to improve the ability to find structures in a data set. Moreover, we develop a strategy to understand, analyze and interpret these results. In the second part of the scheme, a merging procedure starts from the clusters resulting from voting, in order to find the number of clusters in the data set.
Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
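One simple way to realize the voting idea is a co-association matrix that counts how often two objects are clustered together across runs; a merging step can then operate on this matrix, e.g. by thresholding. A sketch (deliberately simpler than the authors' scheme):

```python
import numpy as np

def coassociation(partitions):
    """Co-association 'voting' matrix: entry (i, j) is the fraction of runs
    in which objects i and j land in the same cluster."""
    partitions = np.asarray(partitions)          # shape: (n_runs, n_objects)
    r, n = partitions.shape
    C = np.zeros((n, n))
    for labels in partitions:
        C += labels[:, None] == labels[None, :]  # 1 where i, j share a cluster
    return C / r

# Three runs over four objects; runs disagree about object 2.
C = coassociation([[0, 0, 1, 1], [0, 0, 0, 1], [1, 1, 0, 0]])
```

Pairs with co-association near 1 are stable across runs; cutting C at some agreement level yields the merged partition and, implicitly, the number of clusters.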
9

Raßer, Günter. "Clustering Partition Models for Discrete Structures with Applications in Geographical Epidemiology." Diss., lmu, 2003. http://nbn-resolving.de/urn:nbn:de:bvb:19-12932.

10

Liverani, Silvia. "Bayesian clustering of curves and the search of the partition space." Thesis, University of Warwick, 2009. http://wrap.warwick.ac.uk/2774/.

Abstract:
This thesis is concerned with the study of a Bayesian clustering algorithm, proposed by Heard et al. (2006), used successfully for microarray experiments over time. It focuses not only on the development of new ways of setting hyperparameters so that inferences both reflect the scientific needs and contribute to the inferential stability of the search, but also on the design of new fast algorithms for the search over the partition space. First we use the explicit forms of the associated Bayes factors to demonstrate that such methods can be unstable under common settings of the associated hyperparameters. We then prove that the regions of instability can be removed by setting the hyperparameters in an unconventional way. Moreover, we demonstrate that MAP (maximum a posteriori) search is satisfied when a utility function is defined according to the scientific interest of the clusters. We then focus on the search over the partition space. In model-based clustering a comprehensive search for the highest scoring partition is usually impossible, due to the huge number of partitions of even a moderately sized dataset. We propose two methods for the partition search. One method encodes the clustering as a weighted MAX-SAT problem, while the other views clusterings as elements of the lattice of partitions. Finally, this thesis includes the full analysis of two microarray experiments for identifying circadian genes.

Books on the topic "Partitional clustering"

1

Celebi, M. Emre, ed. Partitional Clustering Algorithms. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-09259-1.

2

Bagirov, Adil M., Napsu Karmitsa, and Sona Taheri. Partitional Clustering via Nonsmooth Optimization. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37826-4.

3

Rothblum, Uriel G., ed. Partitions: Optimality and Clustering. World Scientific, 2011.

4

Celebi, M. Emre. Partitional Clustering Algorithms. Springer, 2014.

5

Celebi, M. Emre. Partitional Clustering Algorithms. Springer, 2016.

6

Karmitsa, Napsu, Adil M. Bagirov, and Sona Taheri. Partitional Clustering via Nonsmooth Optimization: Clustering via Optimization. Springer, 2020.

7

Rothblum, Uriel G. Partitions: Optimality and Clustering, Vol. II: Multi-Parameter. World Scientific Publishing Co Pte Ltd, 2013.


Book chapters on the topic "Partitional clustering"

1

Jin, Xin, and Jiawei Han. "Partitional Clustering." In Encyclopedia of Machine Learning and Data Mining. Springer US, 2016. http://dx.doi.org/10.1007/978-1-4899-7502-7_637-1.

2

Jin, Xin, and Jiawei Han. "Partitional Clustering." In Encyclopedia of Machine Learning and Data Mining. Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_637.

3

Zeugmann, Thomas, Pascal Poupart, James Kennedy, et al. "Partitional Clustering." In Encyclopedia of Machine Learning. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_631.

4

Melnykov, Volodymyr, Semhar Michael, and Igor Melnykov. "Recent Developments in Model-Based Clustering with Applications." In Partitional Clustering Algorithms. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09259-1_1.

5

Aidos, Helena, and Ana Fred. "Consensus of Clusterings Based on High-Order Dissimilarities." In Partitional Clustering Algorithms. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09259-1_10.

6

Tomašev, Nenad, Miloš Radovanović, Dunja Mladenić, and Mirjana Ivanović. "Hubness-Based Clustering of High-Dimensional Data." In Partitional Clustering Algorithms. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09259-1_11.

7

Barouti, Maria, Daniel Keren, Jacob Kogan, and Yaakov Malinovsky. "Clustering for Monitoring Distributed Data Streams." In Partitional Clustering Algorithms. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09259-1_12.

8

Hamerly, Greg, and Jonathan Drake. "Accelerating Lloyd’s Algorithm for k-Means Clustering." In Partitional Clustering Algorithms. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09259-1_2.

9

Celebi, M. Emre, and Hassan A. Kingravi. "Linear, Deterministic, and Order-Invariant Initialization Methods for the K-Means Clustering Algorithm." In Partitional Clustering Algorithms. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09259-1_3.

10

Bagirov, Adil M., and Ehsan Mohebi. "Nonsmooth Optimization Based Algorithms in Cluster Analysis." In Partitional Clustering Algorithms. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09259-1_4.


Conference papers on the topic "Partitional clustering"

1

Zhao, Ying, and George Karypis. "Soft clustering criterion functions for partitional document clustering." In Proceedings of the Thirteenth ACM Conference on Information and Knowledge Management (CIKM '04). ACM Press, 2004. http://dx.doi.org/10.1145/1031171.1031225.

2

Duarte, Flávio Gabriel, and Leandro Nunes Castro. "Asset Allocation based on a Partitional Clustering Algorithm." In Congresso Brasileiro de Inteligência Computacional. SBIC, 2021. http://dx.doi.org/10.21528/cbic2021-49.

Abstract:
This paper proposes a method for asset allocation based on partitional clustering. This method is different from the approaches already proposed in the literature, which essentially use either an optimization-based approach or a hierarchical clustering algorithm to allocate resources in assets. After finding the clusters, the method uniformly allocates the resources over the clusters and then within the clusters, thus guaranteeing that all assets are allocated. The method was tested using data from the Brazilian Stock Exchange (B3) and the assets eligible to enter the allocation were those that were part of the Ibovespa Index at the time of portfolio rebalancing. The results were compared with the Ibovespa index for different metrics, such as volatility, return, sharpe ratio, turnover and drawdown. The proposed approach illustrates the potential of machine learning techniques in portfolio allocation.
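The allocation rule described, uniform across clusters and then uniform within each cluster, fits in a few lines (the tickers below are illustrative B3 symbols, not the paper's actual universe):

```python
def cluster_equal_weights(asset_clusters):
    """Uniform allocation first across clusters, then within each cluster,
    so every asset receives a weight and the weights sum to 1."""
    k = len(asset_clusters)
    weights = {}
    for cluster in asset_clusters:
        for asset in cluster:
            weights[asset] = (1 / k) * (1 / len(cluster))  # 1/k per cluster, split evenly
    return weights

w = cluster_equal_weights([["PETR4", "VALE3"], ["ITUB4", "BBDC4", "ABEV3"]])
```

Each of the two clusters receives half the capital regardless of its size, so assets in the smaller cluster end up with larger individual weights, which is how the rule diversifies across clusters rather than across assets.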
3

Dehideniya, D. M. M. B., and A. S. Karunananda. "Dynamic partitional clustering using multi-agent technology." In 2013 International Conference on Advances in ICT for Emerging Regions (ICTer). IEEE, 2013. http://dx.doi.org/10.1109/icter.2013.6761183.

4

Ojeda-Magana, B., R. Ruelas, J. Quintanilla-Dominguez, and D. Andina. "Color image segmentation by partitional clustering algorithms." In IECON 2010 - 36th Annual Conference of IEEE Industrial Electronics. IEEE, 2010. http://dx.doi.org/10.1109/iecon.2010.5675072.

5

Pakhira, Malay K., and Prasenjit Das. "Partitional Clustering using a Generalized Visual Stochastic Optimizer." In 2009 IEEE International Advance Computing Conference (IACC 2009). IEEE, 2009. http://dx.doi.org/10.1109/iadcc.2009.4809030.

6

Kermani, Fatemeh Hojati, and Shirin Ghanbari. "A Partitional Clustering Approach to Persian Spell Checking." In 2019 5th Conference on Knowledge Based Engineering and Innovation (KBEI). IEEE, 2019. http://dx.doi.org/10.1109/kbei.2019.8734932.

7

Abrishami, Vahid, Ghamarnaz Tadayon Tabrizi, Hossein Deldari, and Maryam Sabzevari. "A fuzzy based Hopfield network for partitional clustering." In 2010 IEEE 9th International Conference on Cybernetic Intelligent Systems (CIS). IEEE, 2010. http://dx.doi.org/10.1109/ukricis.2010.5898145.

8

Zhan, Tangsen, and Yuanguo Zhou. "Clustering algorithm on high-dimension data partitional mended attribute." In 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD). IEEE, 2012. http://dx.doi.org/10.1109/fskd.2012.6234074.

9

Prakash, Jay, and P. K. Singh. "Evolutionary and Swarm Intelligence Methods for Partitional Hard Clustering." In 2014 International Conference on Information Technology (ICIT). IEEE, 2014. http://dx.doi.org/10.1109/icit.2014.67.

10

Naija, Yosr, Salem Chakhar, Kaouther Blibech, and Riadh Robbana. "Extension of Partitional Clustering Methods for Handling Mixed Data." In 2008 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2008. http://dx.doi.org/10.1109/icdmw.2008.85.

Reports on the topic "Partitional clustering"

1

Zhao, Ying, and George Karypis. Soft Clustering Criterion Functions for Partitional Document Clustering. Defense Technical Information Center, 2004. http://dx.doi.org/10.21236/ada439425.

2

Zhao, Ying, and George Karypis. Comparison of Agglomerative and Partitional Document Clustering Algorithms. Defense Technical Information Center, 2002. http://dx.doi.org/10.21236/ada439503.

3

Cordeiro de Amorim, Renato. A survey on feature weighting based K-Means algorithms. Web of Open Science, 2020. http://dx.doi.org/10.37686/ser.v1i2.79.

Abstract:
In a real-world data set there is always the possibility, rather high in our opinion, that different features may have different degrees of relevance. Most machine learning algorithms deal with this fact by either selecting or deselecting features in the data preprocessing phase. However, we maintain that even among relevant features there may be different degrees of relevance, and this should be taken into account during the clustering process. With over 50 years of history, K-Means is arguably the most popular partitional clustering algorithm there is. The first K-Means-based clustering algorithm to compute feature weights was designed just over 30 years ago. Various such algorithms have been designed since, but there has not been, to our knowledge, a survey integrating empirical evidence of cluster recovery ability, common flaws, and possible directions for future research. This paper elaborates on the concept of feature weighting and addresses these issues by critically analysing some of the most popular, or innovative, feature weighting mechanisms based on K-Means.
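As a rough illustration of the family of algorithms the survey covers, the sketch below runs Lloyd's K-Means iterations under a weighted squared-Euclidean distance. It is a simplified assumption-laden example, not any surveyed method: the per-feature weight vector is held fixed here, whereas the surveyed algorithms additionally learn the weights during clustering, and all names are hypothetical.

```python
import random

def weighted_kmeans(points, k, feat_weights, iters=20, seed=0):
    """Lloyd's algorithm under the weighted metric
    d(p, c) = sum_j w_j * (p_j - c_j)**2, with fixed feature weights."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]

    def dist(p, c):
        return sum(w * (x - m) ** 2 for w, x, m in zip(feat_weights, p, c))

    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre
        # under the feature-weighted distance.
        labels = [min(range(k), key=lambda c: dist(p, centers[c]))
                  for p in points]
        # Update step: feature-wise mean of each cluster
        # (the fixed weights cancel out of this minimisation).
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members)
                              for col in zip(*members)]
    return labels, centers
```

With `feat_weights = [1.0, 0.0]`, for instance, the second (irrelevant, noisy) feature is ignored entirely, so points are grouped by the first feature alone, which is the effect feature weighting is meant to achieve in the presence of features of unequal relevance.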