
Journal articles on the topic 'Possibilistic partition'

Consult the top 16 journal articles for your research on the topic 'Possibilistic partition.'


1

Romdhane, Lotfi Ben, Hechmi Shili, and Bechir Ayeb. "P3M—Possibilistic Multi-Step Maxmin and Merging Algorithm with Application to Gene Expression Data Mining." International Journal on Artificial Intelligence Tools 18, no. 04 (August 2009): 545–67. http://dx.doi.org/10.1142/s0218213009000263.

Abstract:
Gene expression data generated by DNA microarray experiments provide a vast resource for medical diagnosis and disease understanding. Unfortunately, the sheer amount of data makes it hard, and sometimes impossible, to understand the correct behavior of genes. In this work, we develop a possibilistic approach for mining gene microarray data. Our model consists of two steps. In the first step, we use possibilistic clustering to partition the data into groups (or clusters). The optimal number of clusters is determined automatically from the data using the Partition Information Entropy as a validity measure. In the second step, we select from each computed cluster the most representative genes and model them as a graph called a proximity graph. This set of graphs (or hyper-graph) is then used to predict the function of new, previously unknown genes. Benchmark results on real-world data sets show that our model computes good partitions even in the presence of noise and achieves high prediction accuracy on unknown genes.
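
As a rough illustration of the validity-measure idea mentioned in this abstract (not the paper's exact Partition Information Entropy), the sketch below scores candidate cluster counts with Bezdek's partition entropy computed from a membership matrix; the helper `memberships_for_k` is a hypothetical callback that runs the clustering for a given number of clusters.

```python
import numpy as np

def partition_entropy(U, eps=1e-12):
    """Bezdek's partition entropy of a membership matrix U (n_samples x n_clusters).

    Lower values indicate a crisper, usually more trustworthy partition.
    """
    U = np.clip(U, eps, 1.0)
    return -np.mean(np.sum(U * np.log(U), axis=1))

def select_cluster_count(memberships_for_k, k_range):
    """Pick the k whose partition minimizes the entropy-based validity index.

    `memberships_for_k(k)` is assumed to run the clustering with k clusters
    and return its membership matrix.
    """
    scores = {k: partition_entropy(memberships_for_k(k)) for k in k_range}
    return min(scores, key=scores.get), scores
```
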
2

Szilágyi, László. "Robust Spherical Shell Clustering Using Fuzzy-Possibilistic Product Partition." International Journal of Intelligent Systems 28, no. 6 (March 20, 2013): 524–39. http://dx.doi.org/10.1002/int.21591.

3

Chowdhary, Chiranji Lal, and D. P. Acharjya. "Clustering Algorithm in Possibilistic Exponential Fuzzy C-Mean Segmenting Medical Images." Journal of Biomimetics, Biomaterials and Biomedical Engineering 30 (January 2017): 12–23. http://dx.doi.org/10.4028/www.scientific.net/jbbbe.30.12.

Abstract:
Fuzzy segmentation methods have been used in medical imaging for the last two decades to improve accuracy in tasks such as tumour detection. Well-known fuzzy methods such as fuzzy c-means (FCM) assign every data point to every cluster, which is not realistic in some circumstances. Our paper proposes a novel possibilistic exponential fuzzy c-means (PEFCM) clustering algorithm for segmenting medical images. The new algorithm retains the advantages of the possibilistic fuzzy c-means (PFCM) and exponential fuzzy c-means (EFCM) clustering algorithms in order to maximize their benefits and reduce the influence of noise and outliers. In our proposed hybrid possibilistic exponential fuzzy c-means segmentation approach, the exponential FCM objective functions are recomputed to assign data to the clusters. The traditional FCM clustering process cannot handle noise and outliers because its probabilistic constraint forces the membership degrees of each point to sum to 1 across all clusters. Our revised possibilistic exponential fuzzy clustering (PEFCM) hybridizes the possibilistic method with exponential fuzzy c-means segmentation, so that the resulting partition filters noisy data or detects such points as outliers. In our experiments, PEFCM segmentation attains an average accuracy of 97.4% compared with existing algorithms. We conclude that the possibilistic exponential fuzzy c-means segmentation algorithm is more efficient for the accurate detection of breast tumours and can assist early detection.
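
The abstract does not give the PEFCM update rules; for context only, here is a minimal sketch of the standard possibilistic (typicality) membership update from Krishnapuram and Keller's PCM, the ingredient such possibilistic hybrids use to relax the sum-to-one constraint. The fuzzifier `m` and the per-cluster reference distances `eta` are assumed inputs.

```python
import numpy as np

def typicality(X, centers, eta, m=2.0):
    """Possibilistic (typicality) memberships t_ik = 1 / (1 + (d_ik^2 / eta_k)^(1/(m-1))).

    X: (n, d) data; centers: (c, d) prototypes; eta: (c,) per-cluster reference distances.
    Unlike probabilistic FCM, rows need not sum to 1, so outliers receive low
    typicality in every cluster instead of being forced into one.
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # squared distances (n, c)
    return 1.0 / (1.0 + (d2 / eta[None, :]) ** (1.0 / (m - 1.0)))
```
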
4

Szilágyi, László, Szidónia Lefkovits, and Sándor M. Szilágyi. "Self-Tuning Possibilistic c-Means Clustering Models." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 27, Supp01 (November 5, 2019): 143–59. http://dx.doi.org/10.1142/s0218488519400075.

Abstract:
The relaxation of the probabilistic constraint of the fuzzy c-means clustering model was proposed to provide robust algorithms that are insensitive to strong noise and outlier data. These goals were achieved by the possibilistic c-means (PCM) algorithm, but the advantages came together with a sensitivity to cluster prototype initialization. According to the original recommendations, the probabilistic fuzzy c-means (FCM) algorithm should be applied to establish the cluster initialization and possibilistic penalty terms for PCM. However, when FCM fails to provide valid cluster prototypes due to the presence of noise, PCM has no chance to recover and produce a fine partition. This paper proposes a two-stage c-means clustering algorithm to tackle most of the problems enumerated above. In the first stage, called initialization, FCM is performed with two modifications: (1) an extra cluster is added for noisy data; (2) an extra variable and constraint are added to handle clusters of various diameters. In the second stage, a modified PCM algorithm is carried out that also contains a cluster width tuning mechanism, based on which it adaptively updates the possibilistic penalty terms. The proposed algorithm has fewer parameters than PCM when the number of clusters is [Formula: see text]. Numerical evaluation involving synthetic and standard test data sets proved the advantages of the proposed clustering model.
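
The "original recommendations" referred to above can be illustrated with the classic estimate of the possibilistic penalty terms from an FCM partition; this sketch shows that textbook formula, not the adaptive width-tuning mechanism the paper proposes in its second stage.

```python
import numpy as np

def estimate_eta(X, centers, U, m=2.0, K=1.0):
    """Per-cluster possibilistic penalty terms derived from an FCM partition.

    eta_k = K * sum_i u_ik^m * ||x_i - v_k||^2 / sum_i u_ik^m
    (Krishnapuram-Keller recommendation; the paper replaces this with an
    adaptive cluster-width tuning step.)
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (n, c)
    Um = U ** m
    return K * (Um * d2).sum(axis=0) / Um.sum(axis=0)
```
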
5

Yang, Xulei, Qing Song, and Yue Wang. "A Weighted Support Vector Machine for Data Classification." International Journal of Pattern Recognition and Artificial Intelligence 21, no. 05 (August 2007): 961–76. http://dx.doi.org/10.1142/s0218001407005703.

Abstract:
This paper presents a weighted support vector machine (WSVM) to alleviate the outlier sensitivity problem of the standard support vector machine (SVM) for two-class data classification. The basic idea is to assign different weights to different data points so that the WSVM training algorithm learns the decision surface according to the relative importance of data points in the training data set. The weights used in WSVM are generated by a robust fuzzy clustering algorithm, the kernel-based possibilistic c-means (KPCM) algorithm, whose partition assigns relatively high values to important data points and low values to outliers. Experimental results indicate that the proposed method reduces the effect of outliers and yields a higher classification rate than standard SVM when outliers exist in the training data set.
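
The weighting scheme maps naturally onto the per-sample weights accepted by off-the-shelf SVM implementations. The sketch below assumes scikit-learn and substitutes a crude typicality-style weight for the paper's KPCM memberships, so it only illustrates the mechanism, not the reported method.

```python
import numpy as np
from sklearn.svm import SVC

def typicality_weights(X, eta=None):
    """Crude stand-in for KPCM memberships: weight points by closeness to the
    overall data centroid, so far-away outliers get small weights (illustrative only)."""
    center = X.mean(axis=0)
    d2 = ((X - center) ** 2).sum(axis=1)
    eta = np.median(d2) if eta is None else eta
    return 1.0 / (1.0 + d2 / eta)

X = np.random.randn(200, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(int)
w = typicality_weights(X)

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y, sample_weight=w)   # down-weights likely outliers during training
```

Points with small weights contribute less to margin violations during training, which is the mechanism the paper exploits.
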
6

Lazli, Boukadoum, and Ait Mohamed. "Computer-Aided Diagnosis System of Alzheimer's Disease Based on Multimodal Fusion: Tissue Quantification Based on the Hybrid Fuzzy-Genetic-Possibilistic Model and Discriminative Classification Based on the SVDD Model." Brain Sciences 9, no. 10 (October 22, 2019): 289. http://dx.doi.org/10.3390/brainsci9100289.

Abstract:
An improved computer-aided diagnosis (CAD) system is proposed for the early diagnosis of Alzheimer's disease (AD) based on the fusion of anatomical (magnetic resonance imaging (MRI)) and functional (18F-fluorodeoxyglucose positron emission tomography (FDG-PET)) multimodal images, which helps address the strong ambiguity and uncertainty present in brain images. The merit of this fusion is that it provides anatomical information for the accurate detection of pathological areas characterized in functional imaging by physiological abnormalities. First, quantification of brain tissue volumes is proposed based on a fusion scheme with three successive steps: modeling, fusion and decision. (1) Modeling consists of three sub-steps: initialization of the centroids of the tissue clusters by applying the bias-corrected Fuzzy C-Means (FCM) clustering algorithm; optimization of the initial partition using genetic algorithms; and creation of white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) tissue maps by applying the Possibilistic FCM clustering algorithm. (2) Fusion uses a possibilistic operator to merge the maps of the MRI and PET images, highlighting redundancies and managing ambiguities. (3) Decision offers more representative anatomo-functional fusion images. Second, a support vector data description (SVDD) classifier is used that must reliably distinguish AD from normal aging and automatically detect outliers. A "divide and conquer" strategy is then used, which speeds up the SVDD process and reduces the computational load and cost. The robustness of the tissue quantification process is demonstrated against noise (20% level), partial volume effects and high spatial intensity inhomogeneities. The superiority of the SVDD classifier over competing conventional systems is also demonstrated, with 10-fold cross-validation, on synthetic datasets (Alzheimer's Disease Neuroimaging Initiative (ADNI) and Open Access Series of Imaging Studies (OASIS)) and real images. Classification performance in terms of accuracy (%), sensitivity (%), specificity (%) and area under the ROC curve was 93.65%, 90.08%, 92.75% and 0.973 for ADNI; 91.46%, 92%, 91.78% and 0.967 for OASIS; and 85.09%, 86.41%, 84.92% and 0.946 for the real images, respectively.
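
As a small illustration of the fusion step (2), the sketch below combines two possibilistic tissue-membership maps with the normalized conjunctive (min) rule that is common in possibility theory; the paper's actual operator may differ.

```python
import numpy as np

def fuse_possibilistic(u_mri, u_pet):
    """Conjunctive fusion of two possibilistic tissue-membership maps (same shape).

    min() keeps only what both modalities support; dividing by the global maximum
    restores a possibility distribution. This is one standard possibilistic
    combination rule, not necessarily the operator used in the paper.
    """
    fused = np.minimum(u_mri, u_pet)
    h = fused.max()
    return fused / h if h > 0 else fused
```
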
7

Ubukata, Seiki, Katsuya Koike, Akira Notsu, and Katsuhiro Honda. "MMMs-Induced Possibilistic Fuzzy Co-Clustering and its Characteristics." Journal of Advanced Computational Intelligence and Intelligent Informatics 22, no. 5 (September 20, 2018): 747–58. http://dx.doi.org/10.20965/jaciii.2018.p0747.

Abstract:
In the field of cluster analysis, fuzzy theory, including the concept of fuzzy sets, has been actively utilized to realize flexible and robust clustering methods. Fuzzy c-means (FCM), the most representative fuzzy clustering method, has been extended to achieve more robust clustering. For example, noise FCM (NFCM) performs noise rejection by introducing a noise cluster that absorbs noise objects, and possibilistic c-means (PCM) performs the independent extraction of possibilistic clusters by introducing cluster-wise noise clusters. Similarly, in the field of co-clustering, fuzzy co-clustering induced by multinomial mixture models (FCCMM) was proposed and extended to noise FCCMM (NFCCMM) in an analogous fashion to NFCM. Ubukata et al. have proposed noise clustering-based possibilistic co-clustering induced by multinomial mixture models (NPCCMM) in an analogous fashion to PCM. In this study, we develop an NPCCMM scheme that considers variable cluster volumes and the fuzziness degree of item memberships in order to investigate the specifically fuzzy, rather than probabilistic, aspects of co-clustering tasks. We investigated the characteristics of the proposed NPCCMM by applying it to an artificial data set and conducted document clustering experiments using real-life data sets. As a result, we found that the proposed method can derive more flexible possibilistic partitions than the probabilistic model by adjusting the fuzziness degrees of object and item memberships. The document clustering experiments also indicated the effectiveness of tuning the fuzziness degrees of object and item memberships and of optimizing cluster volumes to improve classification performance.
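
For orientation, a minimal sketch of one alternating update in entropy-regularized fuzzy co-clustering of the FCCM family is given below; the NFCCMM/NPCCMM variants discussed in the abstract modify this scheme (noise cluster, relaxed object-membership constraint, cluster volumes), so this is background rather than the proposed algorithm. The temperatures `Tu` and `Tw` control the fuzziness degrees of object and item memberships.

```python
import numpy as np

def fccm_step(R, W, Tu=1.0, Tw=1.0):
    """One alternating update of entropy-regularized fuzzy co-clustering (FCCM-style).

    R: (n_objects, n_items) co-occurrence matrix; W: (c, n_items) item memberships
    whose rows sum to 1. Returns updated object memberships U (columns sum to 1)
    and item memberships W. Possibilistic variants relax the constraint on U.
    """
    # Object memberships: softmax over clusters of the aggregated item relevance.
    A = (W @ R.T) / Tu                                   # (c, n_objects)
    U = np.exp(A - A.max(axis=0, keepdims=True))
    U /= U.sum(axis=0, keepdims=True)
    # Item memberships: softmax over items within each cluster.
    B = (U @ R) / Tw                                     # (c, n_items)
    W_new = np.exp(B - B.max(axis=1, keepdims=True))
    W_new /= W_new.sum(axis=1, keepdims=True)
    return U, W_new
```
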
8

Li, Haonan, Xu Wu, Yinghui Liang, and Chen Zhang. "A Multistakeholder Approach to the Airport Gate Assignment Problem: Application of Fuzzy Theory for Optimal Performance Indicator Selection." Computational Intelligence and Neuroscience 2021 (September 3, 2021): 1–15. http://dx.doi.org/10.1155/2021/2675052.

Abstract:
Airport gate assignment performance indicator selection is a complicated decision-making problem with strong subjectivity and difficulty in measuring the importance of each indicator. A better selection of performance indicators (PIs) can greatly increase the airport's overall benefit. We adopt a multicriteria decision-making approach to quantify qualitative PIs and conduct the subsequent selection using a fuzzy clustering method. First, we identified 21 commonly used PIs through a literature review and survey. Subsequently, the fuzzy analytic hierarchy process technique was employed to obtain the selection criteria weights based on the relative importance of significance, availability, and generalisability. Further, we aggregated the selection criteria weights and the experts' scores to evaluate each PI for the clustering process. The fuzzy-possibilistic product partition c-means algorithm was applied to divide the PIs into different groups, using the three selection criteria as partitioning features. The cluster whose centre carried the highest weights was identified as the very-high-influence cluster, and 10 PIs were selected as a result. This study revealed that the passenger-oriented objective is the most important performance criterion; however, the relevance of the airport/airline-oriented and robustness-oriented performance objectives was highlighted as well. It also offers a scientific approach to determining the objective functions for future gate assignment research. We believe that, with slight modifications, this model can be used at other airports, for other indicator selection problems, or in other scenarios at the same airport to facilitate policy making and practice, and hence support the airport's management system.
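
A crisp (non-fuzzy) AHP weighting step followed by weighted aggregation of expert scores is sketched below as a simplified stand-in for the fuzzy AHP procedure described in the abstract; the pairwise judgments and scores are made-up illustrative values.

```python
import numpy as np

def ahp_weights(pairwise):
    """Criteria weights from a reciprocal pairwise-comparison matrix via the
    principal eigenvector (classical AHP; the paper uses a fuzzy AHP variant)."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return np.abs(w) / np.abs(w).sum()

# Significance vs. availability vs. generalisability (illustrative judgments only).
P = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights = ahp_weights(P)                     # roughly [0.65, 0.23, 0.12]
scores = np.array([[7, 8, 6], [5, 9, 4]])    # expert scores for two hypothetical PIs
pi_values = scores @ weights                 # weighted aggregates used as clustering features
```
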
9

Anderson, Derek T., James C. Bezdek, Mihail Popescu, and James M. Keller. "Comparing Fuzzy, Probabilistic, and Possibilistic Partitions." IEEE Transactions on Fuzzy Systems 18, no. 5 (October 2010): 906–18. http://dx.doi.org/10.1109/tfuzz.2010.2052258.

10

Honda, K., A. Notsu, T. Matsui, and H. Ichihashi. "Fuzzy Cluster Validation Based on Fuzzy PCA-Guided Procedure." International Journal of Fuzzy System Applications 1, no. 1 (January 2011): 49–60. http://dx.doi.org/10.4018/ijfsa.2011010104.

Abstract:
Cluster validation is an important issue in fuzzy clustering research, and many validity measures have been developed, most of which are motivated by intuitive justifications based on geometrical features. This paper proposes a new validation approach that evaluates the validity degree of cluster partitions from the viewpoint of the optimality of the objective functions in FCM-type clustering. This approach makes it possible to evaluate the validity degree of robust cluster partitions, for which geometrical features are not available because of their possibilistic nature.
11

Hu, Ya Ting, Fu Heng Qu, Yao Hong Xue, and Yong Yang. "An Efficient and Robust Kernelized Possibilistic C-Means Clustering Algorithm." Advanced Materials Research 962-965 (June 2014): 2881–85. http://dx.doi.org/10.4028/www.scientific.net/amr.962-965.2881.

Abstract:
To avoid the initialization sensitivity and low computational efficiency of the kernelized possibilistic c-means clustering algorithm (KPCM), a new clustering algorithm called the efficient and robust kernelized possibilistic c-means clustering algorithm (ERKPCM) is proposed in this paper. ERKPCM improves KPCM in two ways. First, the data are refined by a data reduction technique, which preserves the structure of the original data and yields higher efficiency. Second, a weighted clustering algorithm is executed several times to estimate the cluster centers accurately, which makes the method more robust to initialization and produces more reasonable partitions. As a by-product, ERKPCM overcomes KPCM's problem of generating coincident clusters. Comparative experimental results against conventional algorithms show that ERKPCM is more robust to initialization and has relatively high precision and efficiency.
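
The weighted clustering ingredient can be illustrated with a generic weighted prototype update over a reduced data set, where each retained point carries the weight of the original points it represents; the paper's exact reduction and weighting scheme is not specified in the abstract.

```python
import numpy as np

def weighted_center_update(Xr, T, w, m=2.0):
    """Prototype update over a reduced data set Xr with per-point weights w.

    Each retained point stands in for w_i original points, so its typicality
    contribution is scaled accordingly:
        v_k = sum_i w_i t_ik^m x_i / sum_i w_i t_ik^m
    (Generic weighted update, shown for illustration only.)
    """
    Tm = (T ** m) * w[:, None]                    # (n_reduced, c)
    return (Tm.T @ Xr) / Tm.sum(axis=0)[:, None]  # (c, d) prototypes
```
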
12

Beldjehem, Mokhtar. "A Granular Unified Min-Max Fuzzy-Neuro Framework for Learning Fuzzy Systems." Journal of Advanced Computational Intelligence and Intelligent Informatics 13, no. 5 (September 20, 2009): 520–28. http://dx.doi.org/10.20965/jaciii.2009.p0520.

Abstract:
We propose a novel, cognitively motivated, granular unified computational framework for learning weighted fuzzy if-then rules, using a hybrid neuro-fuzzy or fuzzy-neuro possibilistic model crafted to automatically extract or learn fuzzy rules from input-output examples only, integrating useful concepts from human cognitive processes and adding interesting granular functionalities. The learning scheme uses an exhaustive search over the fuzzy partitions of the involved variables, automatic fuzzy hypothesis generation, formulation and testing, and an approximation procedure for Min-Max relational equations. The main idea is to start learning from coarse fuzzy partitions of the involved variables (both input and output) and proceed progressively toward fine-grained partitions until partitions that fit the data are found. According to the complexity of the problem at hand, the framework learns the whole structure of the fuzzy system, i.e., jointly the appropriate fuzzy partitions, the appropriate fuzzy rules, their number and their associated membership functions.
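
The coarse-to-fine partition search can be pictured with uniformly spaced triangular membership functions of increasing granularity, as in the sketch below (a generic strong fuzzy partition, not the paper's exact construction); `n_sets` is assumed to be at least 2.

```python
import numpy as np

def triangular_partition(lo, hi, n_sets):
    """Uniform triangular fuzzy partition of [lo, hi] with n_sets membership functions.

    Returns a function mapping x to its membership vector; adjacent memberships
    sum to 1, the usual 'strong' fuzzy partition used when searching from coarse
    to fine granularities.
    """
    centers = np.linspace(lo, hi, n_sets)
    step = centers[1] - centers[0]

    def memberships(x):
        return np.clip(1.0 - np.abs(x - centers) / step, 0.0, 1.0)

    return memberships

# Coarse-to-fine search: try 3, 5, 7 fuzzy sets per variable until the rules fit the data.
for granularity in (3, 5, 7):
    mu = triangular_partition(0.0, 1.0, granularity)
    print(granularity, mu(0.4))   # memberships of x = 0.4 at this granularity
    # ... generate and test candidate min-max rules on the training examples ...
```
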
13

Ceccarelli, Michele, and Antonio Maratea. "Concordance indices for comparing fuzzy, possibilistic, rough and grey partitions." International Journal of Knowledge Engineering and Soft Data Paradigms 1, no. 4 (2009): 331. http://dx.doi.org/10.1504/ijkesdp.2009.028986.

14

Anderson, Derek T., Alina Zare, and Stanton Price. "Comparing Fuzzy, Probabilistic, and Possibilistic Partitions Using the Earth Mover’s Distance." IEEE Transactions on Fuzzy Systems 21, no. 4 (August 2013): 766–75. http://dx.doi.org/10.1109/tfuzz.2012.2230181.

15

"Implementation of Fuzzy Possibilistic Product Partition C-Means and Modified Fuzzy Possibilistic C-Means Clustering To Pick the Low Performers Using R-Tool." International Journal of Recent Technology and Engineering 8, no. 2 (July 30, 2019): 5942–46. http://dx.doi.org/10.35940/ijrte.b3580.078219.

Abstract:
Data mining offers techniques such as clustering, classification, association rules and regression to deal with the large datasets available in the education field. The main purpose of educational data mining is to extract useful information that will have a positive impact on educational institutions. Identifying at-risk students and improving graduation rates and placement opportunities are ways of assessing institutional performance. Clustering is one of the best-known techniques for dealing with noisy and disjoint groups. A clustering technique measures the distance between data objects of the same group and recomputes the cluster centers in each iteration. Placement creates the opportunity to learn specific skills in a subject or industry and improves knowledge in various sectors. In this paper, we discuss the performance of Fuzzy Possibilistic Product Partition C-Means (FPPPCM) and Modified Fuzzy Possibilistic C-Means (MFPCM) clustering when dealing with student placement performance data. Improving the educational system depends on reducing the rate of low-performing students. The main aim of this paper is to pick out the low performers using the FPPPCM and MFPCM algorithms. This helps academia identify low performers and provide them with proper training at an early stage. The efficiency of FPPPCM and MFPCM is also analysed with different parameters.
16

"Parallel Semi-Supervised Big Data Clustering Based on Mapreduce Technology." International Journal of Recent Technology and Engineering 8, no. 4 (November 30, 2019): 1657–64. http://dx.doi.org/10.35940/ijrte.c5206.118419.

Abstract:
In the area of information technology, big data is a rapidly developing phenomenon. Big data brings tremendous challenges for extracting valuable hidden knowledge. Data mining techniques can be applied to big data to extract valuable knowledge for decision making. Big data exhibits high heterogeneity because it consists of various inter-related kinds of objects such as audio, texts, and images, and these inter-related objects carry different information. In this paper, clustering techniques are therefore introduced to separate objects into several clusters, which also reduces the computational complexity of classifiers. A Possibilistic c-Means (PCM) algorithm was introduced to group the objects in big data. PCM effectively reflects the characteristics of each object across different clusters and is able to avoid the corrupting effect of noise in the clustering process. However, PCM is not efficient enough for big data and cannot capture the complex correlations across the multiple modalities of heterogeneous data objects. Therefore, a Parallel Semi-supervised Multi-Ant Colonies Clustering (PSMACC) approach is introduced for big data clustering. Initially, PSMACC splits the data into a number of partitions, and each partition is processed by a mapper. Each mapper generates a diverse collection of three clustering components using the semi-supervised ant colony clustering algorithm with various moving speeds. A hypergraph model is then used to combine the three clustering components. Finally, two kinds of constraints, Must-Link (ML) and Cannot-Link (CL), are included to form a consensus clustering, and the intermediate results of the mappers are combined in the reducer. However, the iteration overhead in PSMACC is overwhelming and affects its performance. Therefore, a Parallel Semi-supervised Multi-Imperialist Competitive Algorithm (PSMICA) is proposed to cluster big data. In PSMICA, each mapper processes the ICA, where the initial population is called countries. Some of the best countries in the population are chosen as imperialists, and the remaining countries form the colonies of these imperialists. The colonies move towards the imperialists based on the distance between them. The intermediate results of each mapper are combined in the reducer to obtain the final clustering result.
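
The split/map/reduce pattern described above can be sketched with local clustering per partition and a reducer that merges the mappers' centroids. The toy version below uses k-means and Python multiprocessing in place of the semi-supervised ant colony / imperialist competitive algorithms and Hadoop MapReduce, so it only mirrors the data flow.

```python
import numpy as np
from multiprocessing import Pool
from sklearn.cluster import KMeans

def map_cluster(chunk, k=3):
    """Mapper: cluster one data partition locally and emit its centroids."""
    return KMeans(n_clusters=k, n_init=10).fit(chunk).cluster_centers_

def reduce_centroids(all_centroids, k=3):
    """Reducer: merge the mappers' centroids into a consensus set of k centers."""
    stacked = np.vstack(all_centroids)
    return KMeans(n_clusters=k, n_init=10).fit(stacked).cluster_centers_

if __name__ == "__main__":
    X = np.random.randn(10_000, 4)
    chunks = np.array_split(X, 4)              # split the data into partitions
    with Pool(4) as pool:
        local = pool.map(map_cluster, chunks)  # one mapper per partition
    centers = reduce_centroids(local)          # combine intermediate results
```
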