
Journal articles on the topic 'Clustering Applications'


Consult the top 50 journal articles for your research on the topic 'Clustering Applications.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Vega-Pons, Sandro, and José Ruiz-Shulcloper. "A Survey of Clustering Ensemble Algorithms." International Journal of Pattern Recognition and Artificial Intelligence 25, no. 03 (2011): 337–72. http://dx.doi.org/10.1142/s0218001411008683.

Abstract:
Cluster ensemble has proved to be a good alternative when facing cluster analysis problems. It consists of generating a set of clusterings from the same dataset and combining them into a final clustering. The goal of this combination process is to improve the quality of individual data clusterings. Due to the increasing appearance of new methods, their promising results and the great number of applications, we consider that it is necessary to make a critical analysis of the existing techniques and future projections. This paper presents an overview of clustering ensemble methods that can be …
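
The generation-plus-consensus pipeline described in this abstract is often illustrated with an evidence-accumulation (co-association) matrix. Below is a minimal sketch under that assumption; the dataset, the choice of k-means as the base clusterer, and the average-linkage consensus step are illustrative choices, not the survey's own method.

```python
# Minimal cluster-ensemble sketch: evidence accumulation over several k-means runs.
# Dataset, base clusterer, and consensus step are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
n, n_runs = len(X), 20

# Generation step: base clusterings with varying k and seeds.
rng = np.random.default_rng(0)
co_assoc = np.zeros((n, n))
for run in range(n_runs):
    labels = KMeans(n_clusters=int(rng.integers(3, 7)), n_init=10,
                    random_state=run).fit_predict(X)
    co_assoc += (labels[:, None] == labels[None, :]).astype(float)
co_assoc /= n_runs  # fraction of runs in which two points share a cluster

# Consensus step: cluster the co-association matrix (1 - similarity as distance).
# Note: metric="precomputed" requires scikit-learn >= 1.2 (older releases use affinity=).
consensus = AgglomerativeClustering(n_clusters=4, metric="precomputed",
                                    linkage="average").fit_predict(1.0 - co_assoc)
print("consensus cluster sizes:", np.bincount(consensus))
```

Pairs with a co-association value close to 1 were grouped together in almost every base run, which is what makes the consensus partition more stable than any single run.
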
2

Gan, Guojun, and Emiliano A. Valdez. "Data Clustering with Actuarial Applications." North American Actuarial Journal 24, no. 2 (2019): 168–86. http://dx.doi.org/10.1080/10920277.2019.1575242.

3

Derntl, Alexandra, and Claudia Plant. "Clustering techniques for neuroimaging applications." Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 6, no. 1 (2015): 22–36. http://dx.doi.org/10.1002/widm.1174.

4

Qian, Yue, Shixin Yao, Tianjun Wu, You Huang, and Lingbin Zeng. "Improved Selective Deep-Learning-Based Clustering Ensemble." Applied Sciences 14, no. 2 (2024): 719. http://dx.doi.org/10.3390/app14020719.

Abstract:
Clustering ensemble integrates multiple base clustering results to improve the stability and robustness of the single clustering method. It consists of two principal steps: a generation step, which is about the creation of base clusterings, and a consensus function, which is the integration of all clusterings obtained in the generation step. However, most of the existing base clustering algorithms used in the generation step are shallow clustering algorithms such as k-means. These shallow clustering algorithms do not work well or even fail when dealing with large-scale, high-dimensional …
5

Dasgupta, Sajib, Richard Golden, and Vincent Ng. "Clustering Documents Along Multiple Dimensions." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2021): 879–85. http://dx.doi.org/10.1609/aaai.v26i1.8325.

Abstract:
Traditional clustering algorithms are designed to search for a single clustering solution despite the fact that multiple alternative solutions might exist for a particular dataset. For example, a set of news articles might be clustered by topic or by the author's gender or age. Similarly, book reviews might be clustered by sentiment or comprehensiveness. In this paper, we address the problem of identifying alternative clustering solutions by developing a Probabilistic Multi-Clustering (PMC) model that discovers multiple, maximally different clusterings of a data sample. Empirical results on …
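
One quick way to see that a dataset admits more than one reasonable partition, which is the premise of this paper, is to cluster disjoint feature views and compare the results. The sketch below is only that rough illustration, not the probabilistic multi-clustering (PMC) model; the dataset and the feature split are assumptions.

```python
# Two alternative clusterings from disjoint feature views (illustrative only,
# not the PMC model); the dataset and the feature split are assumptions.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import adjusted_rand_score

X = load_iris().data
view_a, view_b = X[:, :2], X[:, 2:]   # sepal measurements vs. petal measurements

labels_a = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(view_a)
labels_b = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(view_b)

# Low agreement suggests the two views support genuinely different partitions.
print("agreement between views (ARI):",
      round(adjusted_rand_score(labels_a, labels_b), 3))
```
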
6

Dutta, Arjun. "Clustering Techniques and Their Applications: A Review." American Journal of Advanced Computing 1, no. 4 (2020): 1–6. http://dx.doi.org/10.15864/ajac.1404.

Abstract:
This paper presents a concise study of clustering: existing methods and developments made at various times. Clustering is defined as unsupervised learning in which objects are grouped on the basis of some similarity inherent among them. In recent times, we deal with large masses of data including images, video, social text, DNA, gene information, etc. Data clustering analysis has emerged as an efficient technique for accurately categorizing information into sensible groups. Clustering has a deep association with research in several scientific fields. …
7

Zhi, Weifeng, Xiang Wang, Buyue Qian, Patrick Butler, Naren Ramakrishnan, and Ian Davidson. "Clustering with Complex Constraints — Algorithms and Applications." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (2013): 1056–62. http://dx.doi.org/10.1609/aaai.v27i1.8663.

Abstract:
Clustering with constraints is an important and developing area. However, most work is confined to conjunctions of simple together and apart constraints which limit their usability. In this paper, we propose a new formulation of constrained clustering that is able to incorporate not only existing types of constraints but also more complex logical combinations beyond conjunctions. We first show how any statement in conjunctive normal form (CNF) can be represented as a linear inequality. Since existing clustering formulations such as spectral clustering cannot easily incorporate these linear inequalities …
8

Velunachiyar, S., and K. Sivakumar. "Some Clustering Methods, Algorithms and their Applications." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 6s (2023): 401–10. http://dx.doi.org/10.17762/ijritcc.v11i6s.6946.

Abstract:
Clustering is a type of unsupervised learning [15]. In an unsupervised learning task, no target values, or "supervisors," are known, and the purpose is to produce training data from the inputs themselves. Data mining and machine learning would be useless without clustering. If you utilize it to categorize your datasets according to their similarities, you'll be able to predict user behavior more accurately. The purpose of this research is to compare and contrast three widely used data-clustering methods. Clustering techniques include partitioning, hierarchical, density, grid, and fuzzy clustering …
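
The families named at the end of this abstract (partitioning, hierarchical, density-based) can be run side by side with off-the-shelf implementations. A hedged sketch follows; the toy dataset and all parameter values are assumptions and do not reproduce the paper's comparison.

```python
# One representative from each clustering family on a toy dataset;
# parameters and data are assumptions for illustration only.
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=400, noise=0.06, random_state=0)

partitioning = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
hierarchical = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)
density = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)   # label -1 marks noise

for name, labels in [("k-means", partitioning),
                     ("agglomerative", hierarchical),
                     ("DBSCAN", density)]:
    n_clusters = len(set(labels) - {-1})
    print(name, "found", n_clusters, "clusters")
```
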
9

Li, Hong-Dong, Yunpei Xu, Xiaoshu Zhu, Quan Liu, Gilbert S. Omenn, and Jianxin Wang. "ClusterMine: A knowledge-integrated clustering approach based on expression profiles of gene sets." Journal of Bioinformatics and Computational Biology 18, no. 03 (2020): 2040009. http://dx.doi.org/10.1142/s0219720020400090.

Abstract:
Clustering analysis of gene expression data is essential for understanding complex biological data, and is widely used in important biological applications such as the identification of cell subpopulations and disease subtypes. In commonly used methods such as hierarchical clustering (HC) and consensus clustering (CC), holistic expression profiles of all genes are often used to assess the similarity between samples for clustering. While these methods have been proven successful in identifying sample clusters in many areas, they do not provide information about which gene sets (functions) contribute …
10

Zangana, Hewa Majeed, and Adnan M. Abdulazeez. "Developed Clustering Algorithms for Engineering Applications: A Review." International Journal of Informatics, Information System and Computer Engineering (INJIISCOM) 4, no. 2 (2023): 147–69. http://dx.doi.org/10.34010/injiiscom.v4i2.11636.

Abstract:
Clustering algorithms play a pivotal role in the field of engineering, offering valuable insights into complex datasets. This review paper explores the landscape of developed clustering algorithms with a focus on their applications in engineering. The introduction provides context for the significance of clustering algorithms, setting the stage for an in-depth exploration. The overview section delineates fundamental clustering concepts and elucidates the workings of these algorithms. Categorization of clustering algorithms into partitional, hierarchical, and density-based forms lays the groundwork …
11

Pham, D. T., and A. A. Afify. "Clustering techniques and their applications in engineering." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 221, no. 11 (2007): 1445–59. http://dx.doi.org/10.1243/09544062jmes508.

Abstract:
Clustering is an important data exploration technique with many applications in different areas of engineering, including engineering design, manufacturing system design, quality assurance, production planning and process planning, modelling, monitoring, and control. The clustering problem has been addressed by researchers from many disciplines. However, efforts to perform effective and efficient clustering on large data sets only started in recent years with the emergence of data mining. The current paper presents an overview of clustering algorithms from a data mining perspective. Attention …
12

Aggarwal, C. C., and P. S. Yu. "Redefining clustering for high-dimensional applications." IEEE Transactions on Knowledge and Data Engineering 14, no. 2 (2002): 210–25. http://dx.doi.org/10.1109/69.991713.

13

Zhuang, Weiwei, Yanfang Ye, Yong Chen, and Tao Li. "Ensemble Clustering for Internet Security Applications." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 42, no. 6 (2012): 1784–96. http://dx.doi.org/10.1109/tsmcc.2012.2222025.

14

Nguyen, Quynh, and V. J. Rayward-Smith. "CLAM: Clustering Large Applications Using Metaheuristics." Journal of Mathematical Modelling and Algorithms 10, no. 1 (2010): 57–78. http://dx.doi.org/10.1007/s10852-010-9141-1.

15

Peters, Georg, Richard Weber, and René Nowatzke. "Dynamic rough clustering and its applications." Applied Soft Computing 12, no. 10 (2012): 3193–207. http://dx.doi.org/10.1016/j.asoc.2012.05.015.

16

Segal, M. R., Y. Xiao, and F. W. Huffer. "Clustering with exclusion zones: genomic applications." Biostatistics 12, no. 2 (2010): 234–46. http://dx.doi.org/10.1093/biostatistics/kxq066.

17

Panagiotakis, Costas, Emmanuel Ramasso, Paraskevi Fragopoulou, and Daniel Aloise. "Theory and Applications of Data Clustering." Mathematical Problems in Engineering 2016 (2016): 1–2. http://dx.doi.org/10.1155/2016/5427923.

18

Bordogna, Gloria, and Gabriella Pasi. "Soft clustering for information retrieval applications." Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 1, no. 2 (2011): 138–46. http://dx.doi.org/10.1002/widm.3.

19

Upadhye, Akshata. "A Survey of Text Clustering Techniques: Algorithms, Applications, and Challenges." International Journal of Science and Research (IJSR) 10, no. 9 (2021): 1749–52. http://dx.doi.org/10.21275/sr24304163737.

20

Baghernia, Ali, Hamid Pavin, Miresmail Mirnabibaboli, and Hamid Alinejad-Rokny. "Clustering High-Dimensional Data Stream: A Survey on Subspace Clustering, Projected Clustering on Bioinformatics Applications." Advanced Science, Engineering and Medicine 8, no. 9 (2016): 749–57. http://dx.doi.org/10.1166/asem.2016.1915.

21

Wang, Dong. "Educational data mining: Methods and applications." Applied and Computational Engineering 16, no. 1 (2023): 205–9. http://dx.doi.org/10.54254/2755-2721/16/20230892.

Abstract:
Educational data mining is a rapidly growing field that applies various statistical and data mining techniques to analyze educational data. This paper provides a general review of the literature on educational data mining, focusing on the methods and applications. Methods used in educational data mining include classification and clustering. A classification algorithm is a supervised learning technique that seeks to categorize a given set of data objects into specified categories, build a classification model using the input data that already exists, and then apply the model to categorize new data …
22

Gautam, Kudale, and Singh Rajpoot Sandeep. "CLUSTERING IN DATA MINING: TECHNIQUES, ADVANTAGES, APPLICATIONS, AND CHALLENGES." International Journal of Engineering Sciences & Emerging Technologies 11, no. 2 (2023): 62–70. https://doi.org/10.5281/zenodo.10434263.

Abstract:
Clustering is a technique that groups similar data points together for analysis and pattern discovery across various fields like machine learning, data mining, and image analysis. Its main purpose is to group similar objects together based on a defined distance measure. Essentially, clustering involves partitioning a data set into subsets, with each subset containing data points that are similar to each other. This research paper aims to provide a comprehensive understanding of clustering in data mining. It discusses the concept of clustering, its advantages and disadvantages, applications …
23

Yan, Chen Guang, Yu Jing Liu, and Jin Hui Fan. "An Improving Algorithm Based on SOM Clustering and its Applications." Advanced Materials Research 655-657 (January 2013): 1000–1004. http://dx.doi.org/10.4028/www.scientific.net/amr.655-657.1000.

Abstract:
SOM (Self-Organizing Map) is an unsupervised clustering method. The paper introduces an improved algorithm based on SOM neural network clustering and presents SOM's basic theory of data clustering. To address SOM's practical problems in applications, the algorithm also improves the selection of initial weights and the scope of the neighborhood parameters. Finally, simulation results in Matlab show that the improved clustering algorithm increases the accuracy and computational efficiency of data clustering and speeds up convergence.
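
For readers unfamiliar with SOM training, the sketch below shows the standard loop: find the best-matching unit, then pull nearby map units toward the input with a decaying neighbourhood. It is a generic NumPy illustration under assumed grid size and decay schedules, not the improved initialization and neighbourhood scheme proposed in this paper.

```python
# Minimal NumPy sketch of standard SOM training (not the paper's improved variant);
# grid size, decay schedules, and the toy data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))                 # toy 3-D inputs
grid_h, grid_w = 6, 6
weights = rng.uniform(-1, 1, size=(grid_h, grid_w, 3))
ys, xs = np.mgrid[0:grid_h, 0:grid_w]            # neuron coordinates on the map

n_iters = 2000
for t in range(n_iters):
    x = data[rng.integers(len(data))]
    # Best-matching unit: neuron whose weight vector is closest to the input.
    dists = np.linalg.norm(weights - x, axis=2)
    wy, wx = np.unravel_index(np.argmin(dists), dists.shape)
    # Decaying learning rate and neighbourhood radius.
    lr = 0.5 * (1 - t / n_iters)
    sigma = max(0.5, 3.0 * (1 - t / n_iters))
    # Gaussian neighbourhood centred on the winner, applied to all neurons.
    h = np.exp(-((ys - wy) ** 2 + (xs - wx) ** 2) / (2 * sigma ** 2))[:, :, None]
    weights += lr * h * (x - weights)

# Map each input to its best-matching unit (flattened neuron index).
bmus = [int(np.argmin(np.linalg.norm(weights - v, axis=2))) for v in data]
print("occupied map units:", len(set(bmus)))
```
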
24

Agogino, Adrian, and Kagan Tumer. "A Multiagent Coordination Approach to Robust Consensus Clustering." Advances in Complex Systems 13, no. 02 (2010): 165–97. http://dx.doi.org/10.1142/s0219525910002499.

Abstract:
In many distributed modeling, control or information processing applications, clustering patterns that share certain similarities is the critical first step. However, many traditional clustering algorithms require centralized processing, reliable data collection and the availability of all the raw data in one place at one time. None of these requirements can be met in many complex real-world problems. In this paper, we present an agent-based method for combining multiple base clusterings into a single unified "consensus" clustering that is robust against many types of failures and does not require …
25

Wang, Zhenggang, Xuantong Li, Jin Jin, Zhong Liu, and Wei Liu. "Unsupervised Clustering of Neighborhood Associations and Image Segmentation Applications." Algorithms 13, no. 12 (2020): 309. http://dx.doi.org/10.3390/a13120309.

Abstract:
Irregular-shape clustering has always been a difficult problem in clustering analysis. In this paper, by analyzing the advantages and disadvantages of existing clustering analysis algorithms, a new neighborhood density correlation clustering (NDCC) algorithm is proposed for quickly discovering arbitrarily shaped clusters. Because the density of the center region of any cluster sample dataset is greater than that of the edge region, the data points can be divided into core, edge, and noise data points, and then the density correlation of the core data points in their neighborhood can be used to form a cluster. …
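
The core/edge/noise decomposition described here parallels the terminology of density-based methods such as DBSCAN. Since NDCC itself is not available in common libraries, the sketch below uses scikit-learn's DBSCAN only to show how core samples and noise points are exposed; eps and min_samples are assumed values.

```python
# Core / border / noise decomposition with DBSCAN (a stand-in for NDCC);
# eps, min_samples, and the toy dataset are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=500, noise=0.08, random_state=1)
db = DBSCAN(eps=0.15, min_samples=8).fit(X)

core_mask = np.zeros(len(X), dtype=bool)
core_mask[db.core_sample_indices_] = True
noise_mask = db.labels_ == -1              # DBSCAN marks noise with label -1
border_mask = ~core_mask & ~noise_mask     # assigned to a cluster but not dense enough

print("core:", int(core_mask.sum()),
      "border:", int(border_mask.sum()),
      "noise:", int(noise_mask.sum()))
```
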
26

Memarsadeghi, Nargess, David M. Mount, Nathan S. Netanyahu, and Jacqueline Le Moigne. "A Fast Implementation of the ISODATA Clustering Algorithm." International Journal of Computational Geometry & Applications 17, no. 01 (2007): 71–103. http://dx.doi.org/10.1142/s0218195907002252.

Abstract:
Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense …
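
The speed-up described in this abstract comes from answering nearest-center queries with a tree structure rather than brute force. The fragment below is a generic illustration of that idea inside a k-means/ISODATA-style loop using SciPy's cKDTree; it is not the authors' filtering algorithm, and the data, k, and iteration count are assumptions.

```python
# Generic kd-tree-accelerated nearest-center assignment inside a k-means /
# ISODATA-style loop (not the paper's filtering algorithm); data and k are assumptions.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 4))
k = 16
centers = X[rng.choice(len(X), size=k, replace=False)].copy()

for _ in range(10):
    tree = cKDTree(centers)          # rebuild the tree over the current centers
    _, assign = tree.query(X)        # nearest-center index for every point
    for j in range(k):               # recompute each center as the mean of its points
        members = X[assign == j]
        if len(members):
            centers[j] = members.mean(axis=0)

print("cluster sizes:", np.bincount(assign, minlength=k))
```
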
27

Hua, Ming, and Jian Pei. "Clustering in applications with multiple data sources—A mutual subspace clustering approach." Neurocomputing 92 (September 2012): 133–44. http://dx.doi.org/10.1016/j.neucom.2011.08.032.

28

Zhang, Dan, Fei Wang, Luo Si, and Tao Li. "Maximum Margin Multiple Instance Clustering With Applications to Image and Text Clustering." IEEE Transactions on Neural Networks 22, no. 5 (2011): 739–51. http://dx.doi.org/10.1109/tnn.2011.2109011.

29

Abualigah, Laith, Ali Diabat, and Zong Woo Geem. "A Comprehensive Survey of the Harmony Search Algorithm in Clustering Applications." Applied Sciences 10, no. 11 (2020): 3827. http://dx.doi.org/10.3390/app10113827.

Abstract:
The Harmony Search Algorithm (HSA) is a swarm intelligence optimization algorithm which has been successfully applied to a broad range of clustering applications, including data clustering, text clustering, fuzzy clustering, image processing, and wireless sensor networks. We provide a comprehensive survey of the literature on HSA and its variants, analyze its strengths and weaknesses, and suggest future research directions.
30

Ibrahim, Omar A., Yiqing Wang, and James M. Keller. "Analysis of Incremental Cluster Validity for Big Data Applications." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 26, Suppl. 2 (2018): 47–62. http://dx.doi.org/10.1142/s0218488518400111.

Abstract:
Online clustering has attracted attention due to the explosion of ubiquitous continuous sensing. Streaming clustering algorithms need to look for new structures and adapt as the data evolves, such that outliers are detected, and that new emerging clusters are automatically formed. The performance of a streaming clustering algorithm needs to be monitored over time to understand the behavior of the streaming data in terms of new emerging clusters and number of outlier data points. Small datasets with 2 or 3 dimensions can be monitored by plotting the clustering results as data evolves. However, …
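
As context for the streaming setting discussed above, the sketch below processes data chunk by chunk with incremental center updates using scikit-learn's MiniBatchKMeans.partial_fit. It only illustrates streaming clustering; it does not implement the incremental validity indices the paper analyses, and the stream, chunk size, and k are assumptions.

```python
# Chunk-by-chunk clustering of a stream with incremental center updates.
# Illustrates the streaming setting only; not the paper's incremental validity indices.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_blobs

stream, _ = make_blobs(n_samples=10000, centers=5, random_state=0)
model = MiniBatchKMeans(n_clusters=5, random_state=0)

chunk_size = 500
for start in range(0, len(stream), chunk_size):
    model.partial_fit(stream[start:start + chunk_size])   # update centers per chunk

print("final centers:\n", np.round(model.cluster_centers_, 2))
```
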
31

Fan, Xue Dong. "Clustering Analysis Based on Data Mining Applications." Applied Mechanics and Materials 303-306 (February 2013): 1026–29. http://dx.doi.org/10.4028/www.scientific.net/amm.303-306.1026.

Abstract:
In this paper, a clustering algorithm based on data mining technology is applied: noise characteristic quantities are extracted, pattern recognition algorithms are used to extract and select the characteristic quantities of three types of modes, and data mining clustering analysis carried out under the same working conditions ultimately achieves satisfactory recognition.
32

Lukauskas, Mantas, and Tomas Ruzgas. "Data clustering and its applications in medicine." New Trends in Mathematical Science 10, ISAME2022-Proceedings (2022): 067–70. http://dx.doi.org/10.20852/ntmsci.2022.465.

33

Lindsay, Bruce, G. L. McLachlan, K. E. Basford, and Marcel Dekker. "Mixture Models: Inference and Applications to Clustering." Journal of the American Statistical Association 84, no. 405 (1989): 337. http://dx.doi.org/10.2307/2289892.

34

Elhamifar, E., and R. Vidal. "Sparse Subspace Clustering: Algorithm, Theory, and Applications." IEEE Transactions on Pattern Analysis and Machine Intelligence 35, no. 11 (2013): 2765–81. http://dx.doi.org/10.1109/tpami.2013.57.

35

Jolion, J. M., P. Meer, and S. Bataouche. "Robust clustering with applications in computer vision." IEEE Transactions on Pattern Analysis and Machine Intelligence 13, no. 8 (1991): 791–802. http://dx.doi.org/10.1109/34.85669.

36

Kosmidis, Ioannis, and Dimitris Karlis. "Model-based clustering using copulas with applications." Statistics and Computing 26, no. 5 (2015): 1079–99. http://dx.doi.org/10.1007/s11222-015-9590-5.

37

Wang, Xiang, Buyue Qian, and Ian Davidson. "On constrained spectral clustering and its applications." Data Mining and Knowledge Discovery 28, no. 1 (2012): 1–30. http://dx.doi.org/10.1007/s10618-012-0291-9.

38

Hand, D. J., G. J. McLachlan, and K. E. Basford. "Mixture Models: Inference and Applications to Clustering." Applied Statistics 38, no. 2 (1989): 384. http://dx.doi.org/10.2307/2348072.

39

Geary, D. N., G. J. McLachlan, and K. E. Basford. "Mixture Models: Inference and Applications to Clustering." Journal of the Royal Statistical Society. Series A (Statistics in Society) 152, no. 1 (1989): 126. http://dx.doi.org/10.2307/2982840.

40

Lin, Shunfu, Fangxing Li, Erwei Tian, Yang Fu, and Dongdong Li. "Clustering Load Profiles for Demand Response Applications." IEEE Transactions on Smart Grid 10, no. 2 (2019): 1599–607. http://dx.doi.org/10.1109/tsg.2017.2773573.

41

Singh, Vikas, Lopamudra Mukherjee, Jiming Peng, and Jinhui Xu. "Ensemble clustering using semidefinite programming with applications." Machine Learning 79, no. 1-2 (2009): 177–200. http://dx.doi.org/10.1007/s10994-009-5158-y.

42

Binhimd, Sulafah M. Saleh, and Zakiah I. Kalantan. "Bootstrap approach for clustering method with applications." International Journal of ADVANCED AND APPLIED SCIENCES 10, no. 3 (2023): 189–95. http://dx.doi.org/10.21833/ijaas.2023.03.023.

Abstract:
Discovering patterns in big data is an important step toward actionable insights. The clustering method is used to identify the data pattern by splitting the data set into clusters with associated variables. Various research works have proposed a bootstrap method for clustering array data, but there is little in the way of statistical or theoretical results and measures of model consistency or stability. The purpose of this paper is to assess model stability and cluster consistency of the K number of clusters by using bootstrap sampling patterns with replacement. In addition, we present a …
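
A stripped-down version of the bootstrap idea sketched in this abstract: resample with replacement, recluster, and score agreement with a reference partition on the resampled points. The use of k-means, the adjusted Rand index, and the number of resamples are assumptions, not the paper's exact procedure.

```python
# Minimal bootstrap-stability check for k-means clustering (illustrative only;
# the scoring choice, k, and dataset are assumptions).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, _ = make_blobs(n_samples=400, centers=3, random_state=0)
rng = np.random.default_rng(0)

reference = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

scores = []
for b in range(50):
    idx = rng.choice(len(X), size=len(X), replace=True)     # bootstrap sample
    boot_labels = KMeans(n_clusters=3, n_init=10,
                         random_state=b).fit_predict(X[idx])
    # Agreement between the bootstrap partition and the reference partition,
    # evaluated on the resampled points only.
    scores.append(adjusted_rand_score(reference[idx], boot_labels))

print("mean stability (ARI):", round(float(np.mean(scores)), 3))
```

Stability values close to 1 across resamples suggest the chosen number of clusters is consistent; widely varying scores suggest the partition is an artifact of the particular sample.
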
43

Wang, Yan, Jian-tao Zhou, Xinyuan Li, and Xiaoyu Song. "Effective User Preference Clustering in Web Service Applications." Computer Journal 63, no. 11 (2019): 1633–43. http://dx.doi.org/10.1093/comjnl/bxz090.

Abstract:
The research on personalized recommendation of Web services plays an important role in the field of Web services technology applications. Fortunately, not all users have completely different service preferences. Due to the same application scenarios and personal interests, some users have the same preferences for certain types of Web services. This paper explores the problem of user clustering in the service environment, grouping users according to their service preferences. It helps service providers to identify and characterize the preferences of similar users and provide them with …
44

Abdullah, Manal, Ahlam Al-Thobaity, Afnan Bawazir, and Nouf Al-Harbe. "Energy Efficient Ensemble K-means and SVM for Wireless Sensor Network." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 11, no. 9 (2013): 3034–42. http://dx.doi.org/10.24297/ijct.v11i9.3409.

Abstract:
A wireless sensor network (WSN) consists of a large number of small sensors with limited energy. For many WSN applications, a prolonged network lifetime is an important requirement. Different techniques have already been proposed to improve the energy consumption rate, such as clustering, efficient routing, and data aggregation. In this paper, we present a novel technique using clustering. The different clustering algorithms also differ in their objectives. Sometimes clustering suffers from overlapping and redundant data since a sensor node's position is in a critical position …
45

Di Nuzzo, Cinzia. "Advancing Spectral Clustering for Categorical and Mixed-Type Data: Insights and Applications." Mathematics 12, no. 4 (2024): 508. http://dx.doi.org/10.3390/math12040508.

Abstract:
This study focuses on adapting spectral clustering, a numeric data-clustering technique, for categorical and mixed-type data. The method enhances spectral clustering for categorical and mixed-type data with novel kernel functions, showing improved accuracy in real-world applications. Despite achieving better clustering for datasets with mixed variables, challenges remain in identifying suitable kernel functions for categorical relationships.
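
One simple way to feed categorical data into spectral clustering, in the spirit of this abstract, is a precomputed similarity matrix based on attribute matches (a plain overlap measure). The kernel below is an assumption for illustration and is not one of the kernel functions proposed in the paper; the toy data and k are also assumptions.

```python
# Spectral clustering of categorical data via a precomputed overlap similarity.
# The overlap kernel, toy data, and k are assumptions, not the paper's kernels.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# Toy categorical data: three latent groups, five attributes with four levels each;
# within a group, one "preferred" level appears more often.
groups = rng.integers(0, 3, size=200)
X = np.where(rng.random((200, 5)) < 0.6,
             groups[:, None],                       # preferred level per group
             rng.integers(0, 4, size=(200, 5)))

# Overlap similarity: fraction of attributes on which two objects agree.
affinity = (X[:, None, :] == X[None, :, :]).mean(axis=2)

labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print("cluster sizes:", np.bincount(labels))
```
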
46

Ma, Jungmok. "Multi-Attribute Utility Theory Based K-Means Clustering Applications." International Journal of Data Warehousing and Mining 13, no. 2 (2017): 1–12. http://dx.doi.org/10.4018/ijdwm.2017040101.

Abstract:
One of the major obstacles in the application of the k-means clustering algorithm is the selection of the number of clusters k. The multi-attribute utility theory (MAUT)-based k-means clustering algorithm is proposed to tackle the problem by incorporating user preferences. Using MAUT, the decision maker's value structure for the number of clusters and other attributes can be quantitatively modeled, and it can be used as an objective function of the k-means. A target clustering problem for military targeting process is used to demonstrate the MAUT-based k-means and provide a comparative study. …
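
The key idea of trading clustering quality against a preference over the number of clusters can be sketched as a weighted utility over candidate k. The attributes (silhouette and a simplicity score), the weights, and the utility shapes below are invented for illustration; they are not the MAUT model from the paper.

```python
# Toy utility-weighted choice of k: combine clustering quality (silhouette) with a
# preference for fewer clusters. Weights and utilities are invented, not the paper's MAUT model.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
k_candidates = range(2, 9)
w_quality, w_simplicity = 0.7, 0.3            # assumed decision-maker weights

best_k, best_utility = None, -1.0
for k in k_candidates:
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    quality = (silhouette_score(X, labels) + 1) / 2          # rescale to [0, 1]
    simplicity = 1 - (k - min(k_candidates)) / (max(k_candidates) - min(k_candidates))
    utility = w_quality * quality + w_simplicity * simplicity
    if utility > best_utility:
        best_k, best_utility = k, utility

print("selected k:", best_k, "utility:", round(best_utility, 3))
```
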
47

Crase, Simon, and Suresh N. Thennadil. "An analysis framework for clustering algorithm selection with applications to spectroscopy." PLOS ONE 17, no. 3 (2022): e0266369. http://dx.doi.org/10.1371/journal.pone.0266369.

Abstract:
Cluster analysis is a valuable unsupervised machine learning technique that is applied in a multitude of domains to identify similarities or clusters in unlabelled data. However, its performance is dependent on the characteristics of the data it is being applied to. There is no universally best clustering algorithm, and hence, there are numerous clustering algorithms available with different performance characteristics. This raises the problem of how to select an appropriate clustering algorithm for the given analytical purposes. We present and validate an analysis framework to address this problem …
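
A reduced version of an algorithm-selection loop in the spirit of this abstract: run several candidate algorithms and compare them with internal validity indices. The candidate set, indices, preprocessing, and data are assumptions, not the published framework or its spectroscopy use case.

```python
# Rank candidate clustering algorithms by internal validity indices.
# Candidates, indices, and data are assumptions, not the published framework.
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, davies_bouldin_score
from sklearn.preprocessing import StandardScaler

X, _ = make_blobs(n_samples=600, centers=4, cluster_std=1.2, random_state=0)
X = StandardScaler().fit_transform(X)

candidates = {
    "k-means": KMeans(n_clusters=4, n_init=10, random_state=0),
    "agglomerative": AgglomerativeClustering(n_clusters=4),
    "dbscan": DBSCAN(eps=0.3, min_samples=10),
}

for name, model in candidates.items():
    labels = model.fit_predict(X)
    if len(set(labels)) < 2:               # validity indices need at least two clusters
        print(name, ": degenerate result, skipped")
        continue
    print(name,
          "silhouette =", round(silhouette_score(X, labels), 3),
          "davies-bouldin =", round(davies_bouldin_score(X, labels), 3))
```
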
48

Qi, Hui, Jinqing Li, Xiaoqiang Di, Weiwu Ren, and Fengrong Zhang. "Improved K-means Clustering Algorithm and its Applications." Recent Patents on Engineering 13, no. 4 (2019): 403–9. http://dx.doi.org/10.2174/1872212113666181203110611.

Abstract:
Background: The K-means algorithm is implemented through two steps: initialization and subsequent iterations. Initialization is to select the initial cluster center, while subsequent iterations are to continuously change the cluster center until it no longer changes or the number of iterations reaches its maximum. The K-means algorithm is so sensitive to the cluster center selected during initialization that the selection of a different initial cluster center will influence the algorithm performance. Therefore, improving the initialization process has become an important means of K-means performance …
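
The initialization sensitivity described here can be seen directly by comparing random seeding with k-means++ seeding. The snippet uses scikit-learn's built-in options rather than the improved initializer proposed in the paper; the dataset and k are assumptions.

```python
# Initialization sensitivity: random seeding vs. k-means++ seeding.
# Uses scikit-learn's built-in options, not the paper's improved initializer.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=1000, centers=8, cluster_std=1.5, random_state=0)

for init in ("random", "k-means++"):
    # n_init=1 so the effect of a single initialization is visible.
    km = KMeans(n_clusters=8, init=init, n_init=1, random_state=0).fit(X)
    print(f"{init:10s} inertia={km.inertia_:.1f} iterations={km.n_iter_}")
```
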
49

Shutaywi, Meshal, and Nezamoddin N. Kachouie. "Silhouette Analysis for Performance Evaluation in Machine Learning with Applications to Clustering." Entropy 23, no. 6 (2021): 759. http://dx.doi.org/10.3390/e23060759.

Abstract:
Grouping the objects based on their similarities is an important common task in machine learning applications. Many clustering methods have been developed; among them, k-means-based clustering methods have been broadly used, and several extensions have been developed to improve the original k-means clustering method, such as k-means++ and kernel k-means. K-means is a linear clustering method; that is, it divides the objects into linearly separable groups, while kernel k-means is a non-linear technique. Kernel k-means projects the elements to a higher-dimensional feature space using a kernel function …
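
As a minimal companion to this abstract, the loop below scans k and reports the mean silhouette coefficient for plain k-means. Kernel k-means is not part of scikit-learn and is omitted here; the dataset and the k range are assumptions.

```python
# Minimal silhouette scan over k for plain k-means; dataset and k range are assumptions.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=42)

for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
    print(f"k={k}  mean silhouette={silhouette_score(X, labels):.3f}")
```
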
50

Titus Zira Fate, Dogo Siyani Ezra, and Ijandir Isaac Samuel. "Comparative Analysis of Clustering Techniques for Customer Segmentation: Evaluating K-Means, Hierarchical, and DBSCAN Models alongside RFM Frameworks to Enhance Marketing Strategies through Behavioral, Demographic, and Transactional Insights." International Journal of Advances in Engineering and Management 7, no. 4 (2025): 34–43. https://doi.org/10.35629/5252-07043443.

Abstract:
The study aims at customer clustering and segmentation using the K-means clustering model, the hierarchical clustering model, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) model, and the customer segmentation framework RFM. These unsupervised machine learning (ML) clustering models and the RFM framework are used to identify distinct and actionable customer segments based on their behavioral, demographic, and transactional characteristics. …
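
A compact sketch of the RFM-plus-clustering pipeline evaluated in this study: build recency, frequency, and monetary features per customer, standardize them, and segment with k-means. The synthetic transactions, the feature definitions, and the number of segments are assumptions, not the study's data or final models.

```python
# Compact RFM + k-means segmentation sketch; synthetic transactions, feature
# construction, and k are assumptions, not the study's dataset.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
tx = pd.DataFrame({
    "customer_id": rng.integers(0, 200, size=5000),
    "days_ago": rng.integers(1, 365, size=5000),      # days since each purchase
    "amount": rng.gamma(shape=2.0, scale=30.0, size=5000),
})

# Recency (most recent purchase), Frequency (purchase count), Monetary (total spend).
rfm = tx.groupby("customer_id").agg(
    recency=("days_ago", "min"),
    frequency=("days_ago", "size"),
    monetary=("amount", "sum"),
)

segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(rfm))
rfm["segment"] = segments
print(rfm.groupby("segment").mean().round(1))   # average RFM profile per segment
```
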