To see other types of publications on this topic, follow the link: Entropy algorithms.

Journal articles on the topic "Entropy algorithms"

Consult the top 50 journal articles for your research on the topic "Entropy algorithms".

You can also download the full text of each publication as a PDF and read its abstract online when this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Li, Yancang, and Wanqing Li. "Adaptive Ant Colony Optimization Algorithm Based on Information Entropy: Foundation and Application." Fundamenta Informaticae 77, no. 3 (January 2007): 229–42. https://doi.org/10.3233/fun-2007-77303.

Abstract:
In order to solve the premature convergence problem of the basic Ant Colony Optimization algorithm, a promising modification based on information entropy is proposed. The main idea is to use information entropy to evaluate the stability of the current space of represented solutions, which is then applied to tuning the algorithm's parameters. The path selection and evolutionary strategy are controlled by the information entropy self-adaptively. A simulation study and performance comparison with other Ant Colony Optimization algorithms and other meta-heuristics on the Traveling Salesman Problem show that the improved algorithm, with high efficiency and robustness, is self-adaptive and can converge to the global optimum with high probability. The work proposes a more general approach to evolutionary-adaptive algorithms related to the population's entropy and has theoretical and practical significance for solving combinatorial optimization problems.
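
The feedback loop at the heart of this approach, measuring how concentrated the colony's current solutions are and feeding that back into the parameters, can be sketched in a few lines. The sketch below is a minimal illustration under stated assumptions: the function names and the mapping from entropy to the pheromone evaporation rate are illustrative, not the authors' exact update rule.

    import math
    from collections import Counter

    def population_entropy(solutions):
        # Shannon entropy of the distribution of distinct solutions (tours)
        counts = Counter(solutions)
        n = len(solutions)
        return -sum((c / n) * math.log(c / n) for c in counts.values())

    def adapt_evaporation(solutions, rho_min=0.1, rho_max=0.9):
        # Low entropy means the ants agree (possible premature convergence),
        # so raise the evaporation rate to force renewed exploration.
        h = population_entropy(solutions)
        h_max = math.log(len(solutions)) if len(solutions) > 1 else 1.0
        stagnation = 1.0 - h / h_max
        return rho_min + (rho_max - rho_min) * stagnation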

2

Turlykozhayeva, D. A. "Routing Metric and Protocol for Wireless Mesh Network Based on Information Entropy Theory." Eurasian Physical Technical Journal 20, no. 4 (46) (December 19, 2023): 90–98. http://dx.doi.org/10.31489/2023no4/90-98.

Abstract:
In this work, the authors propose a routing algorithm based on information entropy theory for calculating the metric, considering the probability of packet loss. Information entropy theory serves as a robust foundation for evaluating uncertainty and disorder in data transmission, facilitating the development of a more resilient and intelligent routing strategy. In contrast to existing algorithms, the proposed approach enables a more accurate assessment of data transmission quality within the network, optimizing the routing process for maximum efficiency. The experimental results demonstrate a significant enhancement in network service quality while maintaining high performance. To validate the algorithm's effectiveness, a series of experiments was conducted, evaluating key performance metrics such as throughput, delay, and packet loss. A comparative analysis with established routing algorithms was also carried out, allowing for the assessment of advantages and drawbacks in relation to well-known algorithms. The findings suggest that the proposed algorithm surpasses traditional routing methods in optimizing data transmission quality and overall network efficiency.
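
The abstract does not give the metric in closed form; as a loose illustration only, a link cost that penalizes both lossy and unpredictable links can be built from the binary entropy of the packet-loss probability. Everything below, including the additive combination, is an assumption made for the sake of the sketch.

    import math

    def binary_entropy(p):
        # Entropy (bits) of a Bernoulli packet-loss process with loss probability p
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def link_cost(loss_prob):
        # Lossy and unpredictable links cost more, so shortest-path routing
        # over this metric steers traffic away from them.
        return loss_prob + binary_entropy(loss_prob)

    print(link_cost(0.01), link_cost(0.5))  # reliable link vs. coin-flip link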

3

Zhang, Chuang, Yue-Han Pei, Xiao-Xue Wang, Hong-Yu Hou, and Li-Hua Fu. "Symmetric cross-entropy multi-threshold color image segmentation based on improved pelican optimization algorithm." PLOS ONE 18, no. 6 (June 29, 2023): e0287573. http://dx.doi.org/10.1371/journal.pone.0287573.

Abstract:
To address the problems of low accuracy and slow convergence of traditional multilevel image segmentation methods, a symmetric cross-entropy multilevel thresholding image segmentation method based on a multi-strategy improved pelican optimization algorithm (MSIPOA) is proposed for global optimization and image segmentation tasks. First, Sine chaotic mapping is used to improve the quality and distribution uniformity of the initial population. A spiral search mechanism incorporating a sine cosine optimization algorithm improves the algorithm's search diversity, local pioneering ability, and convergence accuracy. A Lévy flight strategy further improves the algorithm's ability to jump out of local minima. In this paper, 12 benchmark test functions and 8 other newer swarm intelligence algorithms are compared in terms of convergence speed and convergence accuracy to evaluate the performance of the MSIPOA algorithm. Non-parametric statistical analysis shows that MSIPOA has a clear superiority over the other optimization algorithms. The MSIPOA algorithm is then applied to symmetric cross-entropy multilevel threshold image segmentation, with eight images from BSDS300 selected as the test set. According to different performance metrics and the Friedman test, the MSIPOA algorithm outperforms similar algorithms in global optimization and image segmentation, and symmetric cross-entropy multilevel thresholding with MSIPOA can be effectively applied to multilevel threshold image segmentation tasks.

4

Manis, George, Md Aktaruzzaman, and Roberto Sassi. "Low Computational Cost for Sample Entropy." Entropy 20, no. 1 (January 13, 2018): 61. http://dx.doi.org/10.3390/e20010061.

Abstract:
Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. On the other hand, it is a computationally expensive method which may require a large amount of time when used in long series or with a large number of signals. The computationally intensive part is the similarity check between points in m-dimensional space. In this paper, we propose new algorithms or extend already proposed ones, aiming to compute Sample Entropy quickly. All algorithms return exactly the same value for Sample Entropy, and no approximation techniques are used. We compare and evaluate them using cardiac inter-beat (RR) time series. We investigate three algorithms. The first one is an extension of the kd-trees algorithm, customized for Sample Entropy. The second one is an extension of an algorithm initially proposed for Approximate Entropy, again customized for Sample Entropy, but also improved to present even faster results. The last one is a completely new algorithm, presenting the fastest execution times for specific values of m, r, time series length, and signal characteristics. These algorithms are compared with the straightforward implementation, directly resulting from the definition of Sample Entropy, in order to give a clear image of the speedups achieved. All algorithms assume the classical approach to the metric, in which the maximum norm is used. The key idea of the last two suggested algorithms is to avoid unnecessary comparisons by detecting them early. We use the term unnecessary to refer to those comparisons for which we know a priori that they will fail at the similarity check. The number of avoided comparisons is proved to be very large, resulting in a correspondingly large reduction of execution time, making them the fastest algorithms available today for the computation of Sample Entropy.
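
For reference, the straightforward O(N^2) baseline that these faster algorithms are measured against can be written directly from the definition; this sketch uses the classical maximum (Chebyshev) norm and the usual tolerance r scaled by the standard deviation of the signal.

    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        # Straightforward O(N^2) Sample Entropy with the maximum norm.
        x = np.asarray(x, dtype=float)
        tol = r * x.std()
        N = len(x)

        def count_matches(dim):
            templates = np.array([x[i:i + dim] for i in range(N - m)])
            count = 0
            for i in range(len(templates)):
                # similarity check: max-norm distance to all later templates
                dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += np.sum(dist < tol)
            return count

        B = count_matches(m)      # matched template pairs of length m
        A = count_matches(m + 1)  # matched template pairs of length m + 1
        return -np.log(A / B) if A > 0 and B > 0 else np.inf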

5

Liu, Jing, Huibin Lu, Xiuru Zhang, Xiaoli Li, Lei Wang, Shimin Yin, and Dong Cui. "Which Multivariate Multi-Scale Entropy Algorithm Is More Suitable for Analyzing the EEG Characteristics of Mild Cognitive Impairment?" Entropy 25, no. 3 (February 21, 2023): 396. http://dx.doi.org/10.3390/e25030396.

Abstract:
So far, most articles using the multivariate multi-scale entropy algorithm mainly use algorithms to analyze the multivariable signal complexity without clearly describing what characteristics of signals these algorithms measure and what factors affect these algorithms. This paper analyzes six commonly used multivariate multi-scale entropy algorithms from a new perspective. It clarifies for the first time what characteristics of signals these algorithms measure and which factors affect them. It also studies which algorithm is more suitable for analyzing mild cognitive impairment (MCI) electroencephalograph (EEG) signals. The simulation results show that the multivariate multi-scale sample entropy (mvMSE), multivariate multi-scale fuzzy entropy (mvMFE), and refined composite multivariate multi-scale fuzzy entropy (RCmvMFE) algorithms can measure intra- and inter-channel correlation and multivariable signal complexity. In the joint analysis of coupling and complexity, they all decrease with the decrease in signal complexity and coupling strength, highlighting their advantages in processing related multi-channel signals, which is a discovery in the simulation. Among them, the RCmvMFE algorithm can better distinguish different complexity signals and correlations between channels. It also performs well in anti-noise and length analysis of multi-channel data simultaneously. Therefore, we use the RCmvMFE algorithm to analyze EEG signals from twenty subjects (eight control subjects and twelve MCI subjects). The results show that the MCI group had lower entropy than the control group on the short scale and the opposite on the long scale. Moreover, frontal entropy correlates significantly positively with the Montreal Cognitive Assessment score and Auditory Verbal Learning Test delayed recall score on the short scale.

6

Ji, Binghui, Xiaona Sun, Peimiao Chen, Siyu Wang, Shangfa Song, and Bo He. "An Integrated Navigation Algorithm for Underwater Vehicles Improved by a Variational Bayesian and Minimum Mixed Error Entropy Unscented Kalman Filter." Electronics 13, no. 23 (November 29, 2024): 4727. http://dx.doi.org/10.3390/electronics13234727.

Abstract:
In complex marine environments, autonomous underwater vehicles (AUVs) rely on robust navigation and positioning. Traditional algorithms face challenges from sensor outliers and non-Gaussian noise, leading to significant prediction errors and filter divergence. Outliers in sensor observations also impact positioning accuracy. The original unscented Kalman filter (UKF) based on the minimum mean square error (MMSE) criterion suffers from performance degradation under these conditions. This paper enhances the minimum error entropy unscented Kalman filter algorithm using variational Bayesian (VB) methods and mixed entropy functions. By implementing minimum error entropy (MEE) and mixed kernel functions in the UKF, the algorithm’s robustness under complex underwater conditions is improved. The VB method adaptively fits the measurement noise covariance, enhancing adaptability to marine environments. Simulations and sea trials validate the proposed algorithm’s performance, showing significant improvements in navigation accuracy and root mean square error (RMSE). In environments with complex noise, our algorithm improves the overall navigation accuracy by at least 10% over other existing algorithms. This demonstrates the high accuracy and robustness of the algorithm.

7

Du, Xinzhi. "A Robust and High-Dimensional Clustering Algorithm Based on Feature Weight and Entropy." Entropy 25, no. 3 (March 16, 2023): 510. http://dx.doi.org/10.3390/e25030510.

Abstract:
Since the Fuzzy C-Means algorithm is incapable of considering the influence of different features and exponential constraints on high-dimensional and complex data, a fuzzy clustering algorithm based on non-Euclidean distance combining feature weights and entropy weights is proposed. The proposed algorithm is based on the Fuzzy C-Means soft clustering algorithm to deal with high-dimensional and complex data. The objective function of the new algorithm is modified with the help of two different entropy terms and a non-Euclidean way of computing the distance. The distance calculation formula enhances the efficiency of extracting the contribution of different features. The first entropy term helps to minimize the clusters’ dispersion and maximize the negative entropy to control the clustering process, which also promotes the association between the samples. The second entropy term helps to control the weights of features since different features have different weights in the clustering process. Experiments on real-world datasets indicate that the proposed algorithm gives better clustering results than other algorithms. The experiments demonstrate the proposed algorithm’s robustness by analyzing the parameters’ sensitivity and comparing the computational distance formulas. In summary, the improved algorithm improves classification performance under noisy interference and high-dimensional datasets, increases computational efficiency, performs well in real-world high-dimensional datasets, and encourages the development of robust noise-resistant high-dimensional fuzzy clustering algorithms.

8

Morozov, Denys. "Polynomial Representation of Binary Trees of Entropy Binary Codes." Mohyla Mathematical Journal 4 (May 19, 2022): 20–23. http://dx.doi.org/10.18523/2617-70804202120-23.

Abstract:
Algorithms for compressing the information flow are an important component of streaming large amounts of information. These divide into lossless (entropy) compression algorithms, such as Shannon, Huffman, and arithmetic coding, conditional compression such as LZW, and other information cone injections, and lossy compression algorithms such as mp3, jpeg, and others. It is important to follow a formal strategy when building a lossy compression algorithm. It can be formulated as follows: after describing the set of objects that are atomic elements of exchange in the information flow, it is necessary to build an abstract scheme of this description, which will determine the boundary for abstract sections of this scheme at which the allowable losses begin. Approaches to detecting an abstract scheme that generates compression algorithms with allowable losses can be obtained from the context of the subject area. For example, an audio stream compression algorithm can divide a signal into simple harmonics and keep those that lie within a certain range of perception. Thus, the output signal is a certain abstraction of the input, which contains the important information in accordance with the context of auditory perception of the audio stream and is represented by less information. A similar approach is used in the mp3 format, which is a compressed representation. Unlike lossy compression algorithms, entropy compression algorithms do not require context analysis, but can be built from the frequency picture alone. Among the known algorithms for constructing such codes are the Shannon-Fano algorithm, the Huffman algorithm, and arithmetic coding. Finding the information entropy for a given Shannon code is a trivial task. The inverse problem, namely finding the appropriate Shannon codes that have a predetermined entropy and whose probabilities are negative integer powers of two, is quite complex. It can be solved by direct search, but a significant disadvantage of this approach is its computational complexity. This article offers an alternative technique for finding such codes.
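
The "trivial direction" mentioned in the abstract is easy to make concrete: when every symbol probability is a negative integer power of two, the entropy equals the expected codeword length. A small worked example, with the code described only by its multiset of codeword lengths (which must satisfy the Kraft equality):

    from fractions import Fraction

    def dyadic_entropy(lengths):
        # Entropy (bits) of a source with probabilities 2^-k: H = sum k * 2^-k.
        probs = [Fraction(1, 2 ** k) for k in lengths]
        assert sum(probs) == 1, "lengths must satisfy the Kraft equality"
        return sum(k * p for k, p in zip(lengths, probs))

    print(dyadic_entropy([1, 2, 3, 3]))  # 7/4 bits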

9

Crysdian, Cahyo. "The Evaluation of Entropy-based Algorithm towards the Production of Closed-Loop Edge." JOIV: International Journal on Informatics Visualization 7, no. 4 (December 31, 2023): 2481. http://dx.doi.org/10.62527/joiv.7.4.1727.

Abstract:
This research concerns the common problem of edge detection producing disjointed and incomplete edges, which leads to the misdetection of visual objects. The entropy-based algorithm can potentially solve this problem by classifying which object in an image each pixel belongs to. Hence, the paper aims to evaluate the performance of entropy-based algorithms in producing the closed-loop edges that represent the formation of object boundaries. The research utilizes the concept of entropy to sense the uncertainty of pixel membership in the existing objects in order to classify pixels as edge or object. Six entropy-based algorithms are evaluated, i.e., the optimum entropy based on the Shannon formula, the optimum relative entropy based on Kullback-Leibler divergence, the maximum of the optimum entropy neighbor, the minimum of the optimum relative-entropy neighbor, the thinning of the optimum entropy neighbor, and the thinning of the optimum relative-entropy neighbor. An experiment is held to compare the developed algorithms against Canny as a benchmark by employing five performance parameters, i.e., the average number of detected objects, the average number of detected edge pixels, the average size of detected objects, the ratio of the number of edge pixels per object, and the average of the ten biggest sizes. The experiment shows that the entropy-based algorithms significantly improve the production of closed-loop edges, and the optimum relative-entropy neighbor based on Kullback-Leibler divergence is the most desirable approach among them because it produces larger closed-loop edges on average. This finding suggests that the entropy-based algorithm is the best choice for edge-based segmentation. The effectiveness of entropy in the segmentation task is addressed for further research.

10

Li, Yanjun, and Yongquan Yan. "Training Autoencoders Using Relative Entropy Constraints." Applied Sciences 13, no. 1 (December 26, 2022): 287. http://dx.doi.org/10.3390/app13010287.

Abstract:
Autoencoders are widely used for dimensionality reduction and feature extraction. The backpropagation algorithm for training the parameters of the autoencoder model suffers from problems such as slow convergence. Therefore, researchers propose forward propagation algorithms. However, the existing forward propagation algorithms do not consider the characteristics of the data itself. This paper proposes an autoencoder forward training algorithm based on relative entropy constraints, called relative entropy autoencoder (REAE). When solving the feature map parameters, REAE imposes different constraints on the average activation value of the hidden layer outputs obtained by the feature map for different data sets. In the experimental section, different forward propagation algorithms are compared by applying the features extracted from the autoencoder to an image classification task. The experimental results on three image classification datasets show that the classification performance of the classification model constructed by REAE is better than that of the classification model constructed by other forward propagation algorithms.
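
The paper tailors its constraint to each dataset, but the general mechanism, a relative-entropy penalty pulling the average hidden activation toward a target level, is the same one familiar from sparse autoencoders. The sketch below shows that standard penalty; it is an assumption about the general form, not REAE's specific objective.

    import numpy as np

    def kl_activation_penalty(hidden, target=0.05):
        # Relative entropy between the target activation level and the
        # mean activation of each hidden unit (sparse-autoencoder style).
        rho_hat = np.clip(hidden.mean(axis=0), 1e-8, 1 - 1e-8)
        rho = target
        return np.sum(rho * np.log(rho / rho_hat)
                      + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))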

11

Delgado-Bonal, Alfonso, and Alexander Marshak. "Approximate Entropy and Sample Entropy: A Comprehensive Tutorial." Entropy 21, no. 6 (May 28, 2019): 541. http://dx.doi.org/10.3390/e21060541.

Abstract:
Approximate Entropy and Sample Entropy are two algorithms for determining the regularity of series of data based on the existence of patterns. Despite their similarities, the theoretical ideas behind those techniques are different but usually ignored. This paper aims to be a complete guideline of the theory and application of the algorithms, intended to explain their characteristics in detail to researchers from different fields. While initially developed for physiological applications, both algorithms have been used in other fields such as medicine, telecommunications, economics or Earth sciences. In this paper, we explain the theoretical aspects involving Information Theory and Chaos Theory, provide simple source codes for their computation, and illustrate the techniques with a step by step example of how to use the algorithms properly. This paper is not intended to be an exhaustive review of all previous applications of the algorithms but rather a comprehensive tutorial where no previous knowledge is required to understand the methodology.
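
To make the contrast with Sample Entropy concrete, here is a compact Approximate Entropy implementation in the spirit of the tutorial's source codes (a standard formulation, not the authors' code): unlike Sample Entropy, it counts self-matches and averages the logarithms of per-template match frequencies.

    import numpy as np

    def approximate_entropy(x, m=2, r=0.2):
        x = np.asarray(x, dtype=float)
        tol = r * x.std()
        N = len(x)

        def phi(dim):
            templ = np.array([x[i:i + dim] for i in range(N - dim + 1)])
            # fraction of templates within tol of each template (max norm),
            # self-matches included, so every fraction is strictly positive
            c = [np.mean(np.max(np.abs(templ - t), axis=1) <= tol) for t in templ]
            return np.mean(np.log(c))

        return phi(m) - phi(m + 1)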

12

Cortes, Corinna, Mehryar Mohri, Ashish Rastogi, and Michael Riley. "On the Computation of the Relative Entropy of Probabilistic Automata." International Journal of Foundations of Computer Science 19, no. 1 (February 2008): 219–42. http://dx.doi.org/10.1142/s0129054108005644.

Abstract:
We present an exhaustive analysis of the problem of computing the relative entropy of two probabilistic automata. We show that the problem of computing the relative entropy of unambiguous probabilistic automata can be formulated as a shortest-distance problem over an appropriate semiring, give efficient exact and approximate algorithms for its computation in that case, and report the results of experiments demonstrating the practicality of our algorithms for very large weighted automata. We also prove that the computation of the relative entropy of arbitrary probabilistic automata is PSPACE-complete. The relative entropy is used in a variety of machine learning algorithms and applications to measure the discrepancy of two distributions. We examine the use of the symmetrized relative entropy in machine learning algorithms and show that, contrary to what is suggested by a number of publications in that domain, the symmetrized relative entropy is neither positive definite symmetric nor negative definite symmetric, which limits its use and application in kernel methods. In particular, the convergence of training for learning algorithms is not guaranteed when the symmetrized relative entropy is used directly as a kernel, or as the operand of an exponential as in the case of Gaussian kernels. Finally, we show that our algorithm for the computation of the entropy of an unambiguous probabilistic automaton can be generalized to the computation of the norm of an unambiguous probabilistic automaton by using a monoid morphism. In particular, this yields efficient algorithms for the computation of the Lp-norm of a probabilistic automaton.

13

Yi, Yang, Mei Jia Chen, and Fa Hong Yu. "An Improved Estimation of Distribution Algorithm Based on the Entropy Increment Theorem." Applied Mechanics and Materials 385-386 (August 2013): 1675–78. http://dx.doi.org/10.4028/www.scientific.net/amm.385-386.1675.

Abstract:
To systematically harmonize the conflict between selective pressure and population diversity in estimation of distribution algorithms, an improved estimation of distribution algorithm based on the entropy increment theorem (IEDAEI) is proposed in this paper. IEDAEI conforms to the entropy increment theorem in simulating the competitive mechanism between energy and entropy in the annealing process, in which population diversity is measured by the entropy increment theorem. Satisfactory results are achieved in solving some typical high-dimensional problems with multiple local optima. The results show that this algorithm can avoid premature convergence effectively and reduce the search cost to some extent.

14

Cant, Richard, Ayodeji Remi-Omosowon, Caroline Langensiepen, and Ahmad Lotfi. "An Entropy-Guided Monte Carlo Tree Search Approach for Generating Optimal Container Loading Layouts." Entropy 20, no. 11 (November 9, 2018): 866. http://dx.doi.org/10.3390/e20110866.

Abstract:
In this paper, a novel approach to the container loading problem using a spatial entropy measure to bias a Monte Carlo Tree Search is proposed. The proposed algorithm generates layouts that achieve the goals of both fitting a constrained space and also having “consistency” or neatness that enables forklift truck drivers to apply them easily to real shipping containers loaded from one end. Three algorithms are analysed. The first is a basic Monte Carlo Tree Search, driven only by the principle of minimising the length of container that is occupied. The second is an algorithm that uses the proposed entropy measure to drive an otherwise random process. The third algorithm combines these two principles and produces superior results to either. These algorithms are then compared to a classical deterministic algorithm. It is shown that where the classical algorithm fails, the entropy-driven algorithms are still capable of providing good results in a short computational time.

15

Ignatenko, Vera, Anton Surkov, and Sergei Koltcov. "Random forests with parametric entropy-based information gains for classification and regression problems." PeerJ Computer Science 10 (January 3, 2024): e1775. http://dx.doi.org/10.7717/peerj-cs.1775.

Abstract:
The random forest algorithm is one of the most popular and commonly used algorithms for classification and regression tasks. It combines the output of multiple decision trees to form a single result. Random forest algorithms demonstrate the highest accuracy on tabular data compared to other algorithms in various applications. However, random forests and, more precisely, decision trees are usually built with the application of classic Shannon entropy. In this article, we consider the potential of deformed entropies, which are successfully used in the field of complex systems, to increase the prediction accuracy of random forest algorithms. We develop and introduce information gains based on the Renyi, Tsallis, and Sharma-Mittal entropies for classification and regression random forests. We test the proposed algorithm modifications on six benchmark datasets: three for classification and three for regression problems. For classification problems, the application of Renyi entropy improves random forest prediction accuracy by 19–96% depending on the dataset, Tsallis entropy improves the accuracy by 20–98%, and Sharma-Mittal entropy improves accuracy by 22–111% compared to the classical algorithm. For regression problems, the application of deformed entropies improves the prediction by 2–23% in terms of R2, depending on the dataset.
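
The deformed entropies involved have simple closed forms, so the change to the split criterion is small. A sketch of the Renyi and Tsallis entropies and a parametric information gain for a binary split (Sharma-Mittal adds a second parameter and is omitted here):

    import numpy as np

    def renyi_entropy(p, q=2.0):
        p = p[p > 0]
        if q == 1.0:
            return -np.sum(p * np.log(p))  # Shannon limit
        return np.log(np.sum(p ** q)) / (1.0 - q)

    def tsallis_entropy(p, q=2.0):
        p = p[p > 0]
        if q == 1.0:
            return -np.sum(p * np.log(p))
        return (1.0 - np.sum(p ** q)) / (q - 1.0)

    def information_gain(parent, left, right, entropy=renyi_entropy, q=2.0):
        # Parametric information gain of a candidate split.
        def h(labels):
            _, counts = np.unique(labels, return_counts=True)
            return entropy(counts / counts.sum(), q)
        n = len(parent)
        return h(parent) - (len(left) / n) * h(left) - (len(right) / n) * h(right)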

16

Amarantidis, Lampros Chrysovalantis, and Daniel Abásolo. "Interpretation of Entropy Algorithms in the Context of Biomedical Signal Analysis and Their Application to EEG Analysis in Epilepsy." Entropy 21, no. 9 (August 27, 2019): 840. http://dx.doi.org/10.3390/e21090840.

Abstract:
Biomedical signals are measurable time series that describe a physiological state of a biological system. Entropy algorithms have been previously used to quantify the complexity of biomedical signals, but there is a need to understand the relationship of entropy to signal processing concepts. In this study, ten synthetic signals that represent widely encountered signal structures in the field of signal processing were created to interpret permutation, modified permutation, sample, quadratic sample and fuzzy entropies. Subsequently, the entropy algorithms were applied to two different databases containing electroencephalogram (EEG) signals from epilepsy studies. Transitions from randomness to periodicity were successfully detected in the synthetic signals, while significant differences in EEG signals were observed based on different regions and states of the brain. In addition, using results from one entropy algorithm as features and the k-nearest neighbours algorithm, maximum classification accuracies in the first EEG database ranged from 63% to 73.5%, while these values increased by approximately 20% when using two different entropies as features. For the second database, maximum classification accuracy reached 62.5% using one entropy algorithm, while using two algorithms as features further increased that by 10%. Embedding entropies (sample, quadratic sample and fuzzy entropies) are found to outperform the rest of the algorithms in terms of sensitivity and show greater potential by considering the fine-tuning possibilities they offer. On the other hand, permutation and modified permutation entropies are more consistent across different input parameter values and considerably faster to calculate.

17

Liu, Weifeng, Ying Jiang, and Yuesheng Xu. "A Super Fast Algorithm for Estimating Sample Entropy." Entropy 24, no. 4 (April 8, 2022): 524. http://dx.doi.org/10.3390/e24040524.

Abstract:
Sample entropy, an approximation of the Kolmogorov entropy, was proposed to characterize the complexity of a time series. It is essentially defined as −log(A/B), where B denotes the number of matched template pairs with length m and A denotes the number of matched template pairs with length m+1, for a predetermined positive integer m. It has been widely used to analyze physiological signals. As computing sample entropy is time-consuming, the box-assisted, bucket-assisted, x-sort, assisted sliding box, and kd-tree-based algorithms were proposed to accelerate its computation. These algorithms require O(N^2) or O(N^(2−1/(m+1))) computational complexity, where N is the length of the time series analyzed. When N is big, the computational costs of these algorithms are large. We propose a super fast algorithm to estimate sample entropy based on Monte Carlo, with computational costs independent of N (the length of the time series) and the estimation converging to the exact sample entropy as the number of repeated experiments becomes large. The convergence rate of the algorithm is also established. Numerical experiments are performed for electrocardiogram time series, electroencephalogram time series, cardiac inter-beat time series, mechanical vibration signals (MVS), meteorological data (MD), and 1/f noise. Numerical results show that the proposed algorithm can gain 100–1000 times speedup compared to the kd-tree and assisted sliding box algorithms while providing satisfactory approximate accuracy.
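
The core idea, estimating the match probabilities from randomly sampled template pairs instead of enumerating all of them, can be sketched as follows. The convergence-rate analysis is omitted, and the sampling scheme here is a simplified assumption, not the paper's exact procedure.

    import numpy as np

    def mc_sample_entropy(x, m=2, r=0.2, n_pairs=100_000, seed=None):
        # Monte Carlo estimate of Sample Entropy: sample template pairs
        # at random rather than checking all O(N^2) pairs.
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float)
        tol = r * x.std()
        n_templates = len(x) - m          # templates of length m+1 also fit
        i = rng.integers(0, n_templates, n_pairs)
        j = rng.integers(0, n_templates, n_pairs)
        keep = i != j
        i, j = i[keep], j[keep]
        # max-norm distances over the first m coordinates, then over m+1
        d_m = np.abs(np.stack([x[i + k] - x[j + k] for k in range(m)])).max(axis=0)
        d_m1 = np.maximum(d_m, np.abs(x[i + m] - x[j + m]))
        B = np.count_nonzero(d_m < tol)    # matches of length m
        A = np.count_nonzero(d_m1 < tol)   # matches of length m + 1
        return -np.log(A / B) if A > 0 and B > 0 else np.inf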

18

Pan, Rui, Wei Gao, Yunbo Zuo, Guoxin Wu, and Yuda Chen. "Investigation into defect image segmentation algorithms for galvanised steel sheets under texture backgrounds." Insight - Non-Destructive Testing and Condition Monitoring 65, no. 9 (September 1, 2023): 492–500. http://dx.doi.org/10.1784/insi.2023.65.9.492.

Abstract:
Image segmentation is a significant step in image analysis and computer vision. Many entropy-based approaches have been presented on this topic. Among them, Tsallis entropy is one of the best-performing methods. In this paper, the surface defect images of galvanised steel sheets were studied. A two-dimensional asymmetric Tsallis cross-entropy image segmentation algorithm based on chaotic bee colony algorithm optimisation was used to investigate the segmentation of surface defects under complex texture backgrounds. On the basis of Tsallis entropy threshold segmentation, a more concise expression form was used to define the asymmetric Tsallis cross-entropy in order to reduce the computational complexity of the algorithm. The chaotic algorithm was combined with the artificial bee colony (ABC) algorithm to construct the chaotic bee colony algorithm, so that the optimal Tsallis entropy threshold could be quickly identified. The experimental results showed that compared with the maximum Shannon entropy algorithm, the calculation time of this algorithm decreased by about 58% and the threshold value increased by about (26%, 54%). Compared with the two-dimensional Tsallis cross-entropy algorithm, the calculation time of this algorithm decreased by about 35% and the threshold improved by about 20% in the g-axis direction only. Compared with the two-dimensional asymmetric Tsallis cross-entropy algorithm, the calculation time of this algorithm decreased by about 30% and the threshold values of the two algorithms were almost the same. The algorithm proposed in this paper can rapidly and effectively segment defect targets, making it a more suitable method for detecting surface defects in factories with a rapid production pace.
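
One-dimensional Tsallis-entropy thresholding, the building block that the paper's two-dimensional asymmetric variant extends, picks the gray level maximizing the pseudo-additive combination of the foreground and background entropies. A minimal sketch (the 2D asymmetric cross-entropy and the chaotic bee colony search are not shown):

    import numpy as np

    def tsallis_threshold(hist, q=0.8):
        # hist: 256-bin gray-level histogram; q: entropic index (q != 1)
        p = np.asarray(hist, dtype=float)
        p = p / p.sum()
        best_t, best_s = 0, -np.inf
        for t in range(1, len(p) - 1):
            pa, pb = p[:t].sum(), p[t:].sum()
            if pa == 0 or pb == 0:
                continue
            sa = (1 - np.sum((p[:t] / pa) ** q)) / (q - 1)
            sb = (1 - np.sum((p[t:] / pb) ** q)) / (q - 1)
            s = sa + sb + (1 - q) * sa * sb   # pseudo-additive criterion
            if s > best_s:
                best_t, best_s = t, s
        return best_t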

19

Zemlianyi, O., and O. Baibuz. "Algorithms for data imputation based on entropy." System Technologies 6, no. 155 (February 2, 2025): 116–31. https://doi.org/10.34185/1562-9945-6-155-2024-12.

Abstract:
Recent advancements in data imputation have focused on various machine learning techniques, including methods like mean, median, and mode imputation, along with more complex approaches like k-nearest neighbors (KNN) and multiple imputation by chained equations (MICE). Research into entropy-based methods offers a promising direction. This method minimizes uncertainty by selecting imputation values that reduce the overall entropy of the dataset. The goal of this work is to develop an algorithm that imputes missing data by minimizing conditional entropy, thus ensuring that the missing values are filled in a way that preserves the relationships between the variables. The method is designed for both qualitative and quantitative data, including discrete and continuous variables, aiming to reduce uncertainty in classification tasks and enhance the performance of machine learning models. The proposed algorithm is based on conditional entropy minimization, using entropy as a measure of uncertainty in data. For each incomplete row, the algorithm computes the conditional entropy for possible imputation values. The value that minimizes conditional entropy is selected, as it reduces uncertainty in the target variable. This process is iterated for each missing value until all missing data are imputed. Three types of tests were performed on two datasets. The analysis showed that the proposed algorithms are quite slow compared to other methods and can be improved, for example, by multiprocessing, as described in our work [15]. The type 1 test showed that the proposed algorithms do not give a gain on the RMS deviation metric, but significantly reduce entropy (type 2 test). At the same time, these methods show an improvement in classification performance over the baseline models (type 3 test). Thus, the proposed entropy-based imputation methods have shown good results and can be considered by researchers as an additional tool to improve the accuracy of decision making, but further computational optimisation studies are needed to improve the performance of these methods. The algorithm shows promise in improving classification accuracy by selecting imputation values that minimize conditional entropy. Future research will focus on optimizing the method for large datasets and expanding its application to various domains.
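
For a single categorical column, the selection rule described above reduces to trying each candidate value and keeping the one that minimizes the conditional entropy of the target given the completed feature. A simplified sketch under that reading (the function names are illustrative, and the target column is assumed fully observed):

    import math
    from collections import Counter

    def conditional_entropy(pairs):
        # H(target | feature) over (feature, target) pairs
        n = len(pairs)
        by_feature = Counter(f for f, _ in pairs)
        joint = Counter(pairs)
        return -sum((c / n) * math.log((c / n) / (by_feature[f] / n))
                    for (f, t), c in joint.items())

    def impute_min_conditional_entropy(feature, target, missing_idx):
        # Fill one missing value with the candidate minimizing H(target | feature).
        candidates = set(v for v in feature if v is not None)
        best, best_h = None, math.inf
        for v in candidates:
            filled = [v if k == missing_idx else f for k, f in enumerate(feature)]
            h = conditional_entropy(list(zip(filled, target)))
            if h < best_h:
                best, best_h = v, h
        return best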

20

Bacanin, Nebojsa, and Milan Tuba. "Firefly Algorithm for Cardinality Constrained Mean-Variance Portfolio Optimization Problem with Entropy Diversity Constraint." Scientific World Journal 2014 (2014): 1–16. http://dx.doi.org/10.1155/2014/721521.

Abstract:
The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results.

21

Leng, Qi, Bo Shan, and Chong Zhou. "Reference Point and Grid Method-Based Evolutionary Algorithm with Entropy for Many-Objective Optimization Problems." Entropy 27, no. 5 (May 14, 2025): 524. https://doi.org/10.3390/e27050524.

Abstract:
Everyday scenarios present many challenges involving multi-objective optimization. As the count of objective functions rises to four or beyond, the problem's complexity intensifies considerably, often making it challenging for traditional algorithms to arrive at satisfactory solutions. The non-dominated sorting evolutionary reference point-based algorithm (NSGA-III) and the grid-based evolutionary algorithm (GrEA) are two prevalent algorithms for many-objective optimization. These two algorithms preserve population diversity by employing reference point and grid mechanisms, respectively. However, they still have limitations when addressing many-objective optimization problems. Due to the uniform distribution of reference points, reference point-based methods do not perform well on problems with an irregular Pareto front, while grid-based methods do not achieve good results on problems with a regular Pareto front because of the uneven partition of grids. To address the limitations of reference point-based and grid-based approaches in tackling both regular and irregular problems, a reference point- and grid-based evolutionary algorithm with entropy is proposed for many-objective optimization, denoted RGEA, which aims to solve both regular and irregular many-objective optimization problems. Entropy is introduced to measure the shape of the Pareto front of a many-objective optimization problem. In RGEA, a parameter α is introduced to determine the interval for calculating the entropy value. By comparing the current entropy value with the maximum entropy value, the reference point-based method or the grid-based method can be selected. In order to verify the performance of the proposed algorithm, a comprehensive experiment was designed on some popular test suites with 3 to 10 objectives. In addition, RGEA was compared against six algorithms without adaptive technology and six algorithms with adaptive technology. A large body of experimental results shows that RGEA can obtain good results.

22

Mitroi-Symeonidis, Flavia-Corina, and Eleutherius Symeonidis. "Redistributing algorithms and Shannon's Entropy." Aequationes mathematicae 96, no. 2 (January 31, 2022): 267–77. http://dx.doi.org/10.1007/s00010-022-00867-5.

23

Fyfe, Colin, Domingo Ortiz-Boyer, and Nicolas Garcia-Pedrajas. "Evolutionary algorithms and cross entropy." International Journal of Knowledge-based and Intelligent Engineering Systems 16, no. 4 (December 5, 2012): 215–21. http://dx.doi.org/10.3233/kes-2012-00244.

24

Parthasarathy, Gayatri, and G. Abhilash. "Entropy-based transform learning algorithms." IET Signal Processing 12, no. 4 (June 2018): 439–46. http://dx.doi.org/10.1049/iet-spr.2017.0337.

25

Kumar, Sumit, Garima Vig, Sapna Varshney, and Priti Bansal. "Brain Tumor Detection Based on Multilevel 2D Histogram Image Segmentation Using DEWO Optimization Algorithm." International Journal of E-Health and Medical Communications 11, no. 3 (July 2020): 71–85. http://dx.doi.org/10.4018/ijehmc.2020070105.

Abstract:
Brain tumor detection from magnetic resonance (MR) images is a tedious task but vital for early prediction of the disease, which until now has been based solely on the experience of medical practitioners. Multilevel image segmentation is a computationally simple and efficient approach for segmenting brain MR images. Conventional image segmentation does not consider the spatial correlation of image pixels and lacks better post-filtering efficiency. This study presents a Renyi entropy-based multilevel image segmentation approach using a combination of the differential evolution and whale optimization algorithms (DEWO) to detect brain tumors. Further, to validate the efficiency of the proposed hybrid algorithm, it is compared with some prominent metaheuristic algorithms of the recent past using between-class variance and the Tsallis entropy functions. The proposed hybrid algorithm for image segmentation is able to achieve better results than all the other metaheuristic algorithms in every entropy-based segmentation performed on brain MR images.

26

Kanzawa, Yuchi. "Bezdek-Type Fuzzified Co-Clustering Algorithm." Journal of Advanced Computational Intelligence and Intelligent Informatics 19, no. 6 (November 20, 2015): 852–60. http://dx.doi.org/10.20965/jaciii.2015.p0852.

Abstract:
In this study, two co-clustering algorithms based on Bezdek-type fuzzification of fuzzy clustering are proposed for categorical multivariate data. The two proposed algorithms are motivated by the fact that there are only two fuzzy co-clustering methods currently available – entropy regularization and quadratic regularization – whereas there are three fuzzy clustering methods for vectorial data: entropy regularization, quadratic regularization, and Bezdek-type fuzzification. The first proposed algorithm forms the basis of the second algorithm. The first algorithm is a variant of a spherical clustering method, with the kernelization of a maximizing model of Bezdek-type fuzzy clustering with multi-medoids. By interpreting the first algorithm in this way, the second algorithm, a spectral clustering approach, is obtained. Numerical examples demonstrate that the proposed algorithms can produce satisfactory results when suitable parameter values are selected.

27

Kaltchenko, Alexei. "Estimation of quantum entropies." i-manager's Journal on Mathematics 13, no. 1 (2024): 1. http://dx.doi.org/10.26634/jmat.13.1.20387.

Abstract:
Motivated by the importance of entropy functions in quantum data compression, entanglement theory, and various quantum information-processing tasks, this study demonstrates how classical algorithms for entropy estimation can effectively contribute to the construction of quantum algorithms for universal quantum entropy estimation. Given two quantum i.i.d. sources with completely unknown density matrices, algorithms are developed for estimating quantum cross entropy and quantum relative entropy. These estimation techniques represent a quantum generalization of the classical algorithms by Lempel, Ziv, and Merhav.
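
The classical building block being generalized, the Lempel-Ziv parsing estimate of entropy rate, fits in a few lines; the Ziv-Merhav cross-parsing used for relative entropy is analogous, and the quantum versions replace the parsing statistics with measurements on the unknown states. Only the classical estimator is sketched here.

    import math

    def lz78_entropy_rate(s):
        # Incremental (LZ78) parsing: the phrase count c(n) gives the
        # classical entropy-rate estimate c(n) * log2(c(n)) / n.
        phrases, current = set(), ""
        for ch in s:
            current += ch
            if current not in phrases:
                phrases.add(current)
                current = ""
        c = len(phrases) + (1 if current else 0)
        return c * math.log2(c) / len(s) if s else 0.0

    print(lz78_entropy_rate("abababababababab"))  # low for repetitive input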

28

Fujita, Mizuki, and Yuchi Kanzawa. "Three Fuzzy c-Shapes Clustering Algorithms for Series Data." Journal of Advanced Computational Intelligence and Intelligent Informatics 27, no. 5 (September 20, 2023): 976–85. http://dx.doi.org/10.20965/jaciii.2023.p0976.

Abstract:
Various fuzzy clustering algorithms have been proposed for vectorial data. However, most of these methods have not been applied to series data. This study presents three fuzzy clustering algorithms for series data based on shape-based distances. The first algorithm involves Shannon entropy regularization of the k-shape objective function. The second algorithm is similar to the revised Bezdek-type fuzzy c-means algorithm obtained by replacing the membership of the hard c-means objective function with its power. The third algorithm involves Tsallis entropy regularization of the objective function of the second algorithm. Theoretical observations revealed that the third algorithm is a generalization of the first and second algorithms, which was validated by numerical experiments. Furthermore, numerical experiments were performed using 11 benchmark datasets to demonstrate that the third algorithm outperforms the others in terms of accuracy.

29

Kannan, S. R., S. Ramathilagam, R. Devi, and Yueh-Min Huang. "Entropy Tolerant Fuzzy C-Means in Medical Images." Journal of Innovative Optical Health Sciences 4, no. 4 (October 2011): 447–62. http://dx.doi.org/10.1142/s179354581100168x.

Abstract:
Segmenting Dynamic Contrast-Enhanced Breast Magnetic Resonance Images (DCE-BMRI) is an extremely important task for diagnosing the disease, because such imaging has the highest specificity when acquired with high temporal and spatial resolution, yet it is corrupted by heavy noise, outliers, and other imaging artifacts. In this paper, we develop efficient robust segmentation algorithms based on a fuzzy clustering approach for segmenting DCE-BMRI. Our proposed segmentation algorithms combine an effective kernel-induced distance measure on the standard fuzzy c-means algorithm with spatial neighborhood information, an entropy term, and a tolerance vector in a fuzzy clustering structure for segmenting DCE-BMRI. The significant feature of our proposed algorithms is their capability to find optimal membership grades and obtain effective cluster centers automatically by minimizing the proposed robust objective functions. This article also demonstrates the superiority of the proposed algorithms for segmenting DCE-BMRI in comparison with other recent kernel-based fuzzy c-means techniques. Finally, the clustering accuracies of the proposed algorithms are validated using the silhouette method in comparison with existing fuzzy clustering algorithms.

30

Jayalakshmi, N., P. Padmaja, and G. Jaya Suma. "An Approach for Interesting Subgraph Mining from Web Log Data Using W-Gaston Algorithm." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 27, no. 2 (April 2019): 277–301. http://dx.doi.org/10.1142/s0218488519500132.

Abstract:
Graph-Based Data Mining (GBDM) is an emerging research topic nowadays, aimed at retrieving essential information from graph databases. Many algorithms exist that find frequent patterns in a given graph database. One such algorithm, Gaston, uses a support measure based on frequency to discover frequent patterns. The discovery phase of the Gaston algorithm is time-consuming, and the pages that capture the interest of users are ignored by the existing Gaston algorithm. This paper proposes the Weighted-Gaston (W-Gaston) algorithm, obtained by modifying the existing Gaston algorithm. Here, four interestingness measures are developed based on frequency, entropy, and page duration for the retrieval of interesting sub-graphs. The proposed measures comprise four types of support: (1) support based on page duration (W-Support); (2) support based on entropy (E-Support); (3) support based on page duration and entropy (WE-Support); and (4) support based on frequency, page duration, and entropy (FWE-Support). The simulation of the proposed work is done using the MSNBC and weblog databases. The experimental results show that the proposed algorithm performs well compared with the existing algorithms.

31

Markić, Ivan, Maja Štula, Marija Zorić, and Darko Stipaničev. "Entropy-Based Approach in Selection Exact String-Matching Algorithms." Entropy 23, no. 1 (December 28, 2020): 31. http://dx.doi.org/10.3390/e23010031.

Abstract:
The string-matching paradigm is applied in every branch of computer science and science in general. The existence of a plethora of string-matching algorithms makes it hard to choose the best one for any particular case. Expressing, measuring, and testing algorithm efficiency is a challenging task with many potential pitfalls. Algorithm efficiency can be measured based on the usage of different resources. In software engineering, algorithmic productivity is a property of an algorithm execution identified with the computational resources the algorithm consumes. Resource usage in algorithm execution can be determined, and for maximum efficiency the goal is to minimize it. Standard measures of algorithm efficiency, such as execution time, depend directly on the number of executed actions. Without touching on the problematics of computer power consumption or memory, which also depend on the algorithm type and the techniques used in algorithm development, we have developed a methodology which enables researchers to choose an efficient algorithm for a specific domain. String searching algorithm efficiency is usually observed independently of the domain texts being searched. This research paper presents the idea that algorithm efficiency depends on the properties of the searched string and the properties of the texts being searched, accompanied by a theoretical analysis of the proposed approach. In the proposed methodology, algorithm efficiency is expressed through a character comparison count metric. The character comparison count metric is a formal quantitative measure independent of algorithm implementation subtleties and computer platform differences. The model is developed for a particular problem domain by using appropriate domain data (patterns and texts) and provides, for a specific domain, a ranking of algorithms according to the patterns' entropy. The proposed approach is limited to online exact string-matching problems and is based on the information entropy of the search pattern. Meticulous empirical testing illustrates the implementation of the methodology and supports its soundness.
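
The quantity driving the ranking is the Shannon entropy of the search pattern's character distribution, computed over the domain alphabet. A minimal sketch:

    import math
    from collections import Counter

    def pattern_entropy(pattern):
        # Shannon entropy (bits per character) of a search pattern
        counts = Counter(pattern)
        n = len(pattern)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    print(pattern_entropy("AACGTAAC"))  # low-entropy, small-alphabet pattern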

32

Niu, Yan, Jie Xiang, Kai Gao, Jinglong Wu, Jie Sun, Bin Wang, Runan Ding, et al. "Multi-Frequency Entropy for Quantifying Complex Dynamics and Its Application on EEG Data." Entropy 26, no. 9 (August 27, 2024): 728. http://dx.doi.org/10.3390/e26090728.

Abstract:
Multivariate entropy algorithms have proven effective in the complexity dynamic analysis of electroencephalography (EEG) signals, with researchers commonly configuring the variables as multi-channel time series. However, the complex quantification of brain dynamics from a multi-frequency perspective has not been extensively explored, despite existing evidence suggesting interactions among brain rhythms at different frequencies. In this study, we proposed a novel algorithm, termed multi-frequency entropy (mFreEn), enhancing the capabilities of existing multivariate entropy algorithms and facilitating the complexity study of interactions among brain rhythms of different frequency bands. Firstly, utilizing simulated data, we evaluated the mFreEn’s sensitivity to various noise signals, frequencies, and amplitudes, investigated the effects of parameters such as the embedding dimension and data length, and analyzed its anti-noise performance. The results indicated that mFreEn demonstrated enhanced sensitivity and reduced parameter dependence compared to traditional multivariate entropy algorithms. Subsequently, the mFreEn algorithm was applied to the analysis of real EEG data. We found that mFreEn exhibited a good diagnostic performance in analyzing resting-state EEG data from various brain disorders. Furthermore, mFreEn showed a good classification performance for EEG activity induced by diverse task stimuli. Consequently, mFreEn provides another important perspective to quantify complex dynamics.

33

Aldemir, Erdoğan, and Hidayet Oğraş. "Increasing of Compression Efficiency for Genomic Data by Manipulating Empirical Entropy." Journal of Physics: Conference Series 2701, no. 1 (February 1, 2024): 012050. http://dx.doi.org/10.1088/1742-6596/2701/1/012050.

Abstract:
Sharing bioinformatics data is the key to constructing a mobile and effective telemedicine network, which brings various difficulties with it. A crucial challenge with this tremendous amount of information is storing it reversibly and analysing terabytes of data. Robust compression algorithms achieve high compression ratios for text and images. However, the achievement of these advanced techniques has remained within a limited range since, intrinsically, the entropy of the raw data primarily determines the efficiency of compression. To enhance the performance of a compression algorithm, the entropy of the raw data needs to be reduced before any basic compression, which reveals more effective redundancy. In this study, we use reversible sorting techniques to reduce the entropy, thus providing higher efficiency when integrated into a compression technique for raw genomic data. To that end, permutation-based reversible sorting algorithms, such as Burrows-Wheeler, are designed as a transform for entropy reduction. The algorithm achieves a low-entropy sequence by reordering raw data reversibly with low complexity and a fast approach. The empirical entropy, a quantitative analysis, shows that a significant reduction of uncertainty has been achieved.
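
One detail worth spelling out: a permutation such as the Burrows-Wheeler transform leaves the order-0 empirical entropy unchanged, so the measured reduction is realized on the transform's output by a local recoding stage. The sketch below pairs BWT with move-to-front, as in bzip2-style pipelines; this pairing is one plausible realization, not necessarily the paper's exact pipeline.

    import math
    from collections import Counter

    def bwt(s):
        # Burrows-Wheeler transform via sorted rotations ('$' as terminator)
        s += "$"
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(row[-1] for row in rotations)

    def mtf(s):
        # Move-to-front: turns BWT's local character runs into small indices
        alphabet = sorted(set(s))
        out = []
        for ch in s:
            k = alphabet.index(ch)
            out.append(k)
            alphabet.insert(0, alphabet.pop(k))
        return out

    def order0_entropy(seq):
        counts = Counter(seq)
        n = len(seq)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    text = "banana_bandana_banana"
    print(order0_entropy(text), order0_entropy(mtf(bwt(text))))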

34

Nomura, Tomoki, and Yuchi Kanzawa. "Two Fuzzy Clustering Algorithms Based on ARMA Model." Journal of Advanced Computational Intelligence and Intelligent Informatics 28, no. 6 (November 20, 2024): 1251–62. http://dx.doi.org/10.20965/jaciii.2024.p1251.

Abstract:
This study proposes two fuzzy clustering algorithms based on autoregressive moving average (ARMA) model for series data. The first, referred to as Tsallis entropy-regularized fuzzy c-ARMA model (TFCARMA), is created from k-ARMA, a conventional hard clustering algorithm for series data. TFCARMA is motivated by the relationship between the two clustering algorithms for vectorial data: k-means and Tsallis entropy-regularized fuzzy c-means. The second, referred to as q-divergence-based fuzzy c-ARMA model (QFCARMA), is created from ARMA mixtures, a conventional probabilistic clustering algorithm for series data. QFCARMA is motivated by the relationship between the two clustering algorithms for vectorial data: Gaussian mixture model and q-divergence-based fuzzy c-means. Based on numerical experiments using an artificial dataset, we observed the effects of fuzzification parameters in the proposed algorithms and relationship between the proposed and conventional algorithms. Moreover, numerical experiments using seven real datasets compared the clustering accuracy among the proposed and conventional algorithms.
36

Li, Ming Jing, Yu Bing Dong, and Jie Li. "Overview of Pixel Level Image Fusion Algorithm." Applied Mechanics and Materials 519-520 (February 2014): 590–93. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.590.

Abstract:
Pixel-level image fusion algorithms are among the basic algorithms in image fusion and are mainly divided into time-domain and frequency-domain methods. The weighted-average algorithm and principal component analysis (PCA) are popular time-domain methods, while pyramid and wavelet algorithms are usually used to fuse two or more images in the frequency domain. In this paper, pixel-level image fusion algorithms are surveyed, covering their operation, characteristics, and applications. MATLAB simulations show that frequency-domain algorithms outperform time-domain algorithms. Evaluation criteria mainly include entropy, cross entropy, mean, and standard deviation; they serve as references for judging fusion quality, and different criteria can be selected according to the fused images and the purpose of fusion.
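As a minimal illustration of the time-domain side and of the entropy-based evaluation the survey mentions, the sketch below fuses two images by weighted averaging and scores the result. This is our own toy example; the survey itself also covers PCA, pyramid, and wavelet methods.

    import numpy as np

    def fuse_weighted(img_a, img_b, w=0.5):
        # Pixel-level time-domain fusion: plain weighted average.
        return (w * img_a + (1 - w) * img_b).astype(img_a.dtype)

    def gray_entropy(img):
        # Shannon entropy of the 8-bit gray-level histogram, a common fusion metric.
        hist = np.bincount(img.ravel(), minlength=256)
        p = hist[hist > 0] / img.size
        return float(-np.sum(p * np.log2(p)))

    rng = np.random.default_rng(0)
    a = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    b = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    fused = fuse_weighted(a, b)
    print(gray_entropy(fused), fused.mean(), fused.std())  # entropy, mean, standard deviation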
37

Zhang, Shao Pu, and Tao Feng. "Reduction Algorithms of Incomplete Covering Decision Systems." Advanced Materials Research 532-533 (June 2012): 1409–13. http://dx.doi.org/10.4028/www.scientific.net/amr.532-533.1409.

Abstract:
This paper discusses reduction algorithms for incomplete covering decision systems. Firstly, we define a new pair of upper and lower approximations in an incomplete covering information system and give their axiomatic characterizations. Then, we introduce the special conditional entropy and the limitary conditional entropy of a covering decision system with multiple coverings, and we study the reduction of coverings by means of the special conditional entropy (respectively, the limitary conditional entropy) in -consistent (respectively, inconsistent) covering decision systems. Two algorithms are designed to compute reductions of -consistent and inconsistent covering decision systems, respectively.
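The paper's special and limitary conditional entropies are defined for coverings; as a baseline for intuition, the ordinary conditional entropy H(D|C) of a decision D given a condition partition C can be computed as below (our simplification: a plain partition rather than a covering, with hypothetical toy labels).

    import math
    from collections import Counter

    def conditional_entropy(condition, decision):
        # H(D|C) over a decision table given as two equally long label lists.
        n = len(condition)
        blocks = {}
        for c, d in zip(condition, decision):
            blocks.setdefault(c, []).append(d)
        h = 0.0
        for block in blocks.values():
            weight = len(block) / n
            for cnt in Counter(block).values():
                p = cnt / len(block)
                h -= weight * p * math.log2(p)
        return h

    # An attribute is redundant for the decision when H(D|C) stays unchanged after dropping it.
    print(conditional_entropy(["a", "a", "b", "b"], ["y", "y", "n", "y"]))  # 0.5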
38

Li Wei-Jia, Shen Xiao-Hong, Li Ya-An, and Zhang Kui. "Nonlinear Feature Extraction Based on Multi-Channel Dataset." Acta Physica Sinica 74, no. 4 (2025): 0. https://doi.org/10.7498/aps.74.20241512.

Abstract:
Phase space reconstruction plays a pivotal role in calculating features of nonlinear systems: by mapping a one-dimensional time series onto a high-dimensional phase space, the dynamical characteristics of nonlinear systems can be revealed. However, existing nonlinear analysis methods are primarily based on phase space reconstruction of single-channel data and cannot directly exploit the rich information contained in multi-channel array data, even though the reconstructed data matrix exhibits structural similarities with such data. The relationship between phase space reconstruction and array data structure, as well as the gain in nonlinear features brought by array data, has not been sufficiently studied. This paper employs two classical nonlinear features, multiscale sample entropy and multiscale permutation entropy, and uses multi-channel array data in place of the phase space reconstruction step to enhance algorithmic performance. Initially, the relationship between phase space reconstruction parameters and actual array structures is analyzed, and conversion relationships are established. Then, multiple sets of simulated and real-world array data are used to evaluate the performance of the two entropy algorithms. The results show that substituting array data for phase space reconstruction effectively improves the performance of both algorithms. Specifically, multiscale sample entropy applied to array data can distinguish noisy target signals from background noise at low signal-to-noise ratios, while multiscale permutation entropy on array data more accurately reveals the complexity structure of signals across time scales.
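The core substitution the paper studies can be stated compactly: delay embedding builds an m-row matrix from one channel, while an m-sensor array already provides such a matrix. A hedged sketch of the two constructions (our illustration; embedding parameters are arbitrary):

    import numpy as np

    def delay_embedding(x, m=3, tau=2):
        # Classic phase space reconstruction of a single-channel series.
        n = len(x) - (m - 1) * tau
        return np.stack([x[i * tau : i * tau + n] for i in range(m)])

    def array_embedding(channels):
        # Multi-channel array data used directly in place of the reconstruction.
        return np.asarray(channels)

    rng = np.random.default_rng(0)
    single = rng.normal(size=256)
    array = rng.normal(size=(3, 256))  # 3-sensor array, same downstream entropy code
    print(delay_embedding(single).shape, array_embedding(array).shape)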
39

Manis, George, Dimitrios Bakalis, and Roberto Sassi. "A Multithreaded Algorithm for the Computation of Sample Entropy." Algorithms 16, no. 6 (June 15, 2023): 299. http://dx.doi.org/10.3390/a16060299.

Abstract:
Many popular entropy definitions for signals, including approximate and sample entropy, are based on the idea of embedding the time series into an m-dimensional space, aiming to detect complex, deeper, and more informative relationships among samples. However, for both approximate and sample entropy, the high computational cost is a severe limitation, especially when large amounts of data are processed or when parameter tuning requires a large number of executions; fast computation algorithms thus become a necessity. In the past, our research team proposed fast algorithms for sample, approximate, and bubble entropy; in the general case, the bucket-assisted algorithm presented the lowest execution times. In this paper, we exploit multithreading to further reduce the computation time. Since even cost-effective home computers now support multithreading, the computation of these entropy measures can be significantly accelerated without special hardware requirements. The aim of this paper is threefold: (a) to extend the bucket-assisted algorithm to multithreaded processors, (b) to present updated execution times for the bucket-assisted algorithm, since advances in hardware and compiler technology affect both execution times and gains, and (c) to provide a Python library which wraps fast C implementations capable of running in parallel on multithreaded processors.
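The authors' bucket-assisted algorithm and its Python-wrapped C library are described in the paper; as a much simpler illustration of the parallelization idea, the sketch below splits the brute-force template matching of sample entropy across threads (NumPy releases the GIL inside the chunked distance computation). The names and the chunking scheme are ours.

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def _chunk_matches(templates, lo, hi, tol):
        # Chebyshev-distance matches of templates[lo:hi] against all templates.
        d = np.max(np.abs(templates[lo:hi, None, :] - templates[None, :, :]), axis=2)
        return int(np.sum(d <= tol)) - (hi - lo)  # drop self-matches

    def sample_entropy_mt(x, m=2, r=0.2, workers=4):
        x = np.asarray(x, float)
        tol = r * x.std()
        def total(k):
            t = np.lib.stride_tricks.sliding_window_view(x, k)
            edges = np.linspace(0, len(t), workers + 1, dtype=int)
            with ThreadPoolExecutor(workers) as pool:
                parts = pool.map(_chunk_matches, [t] * workers,
                                 edges[:-1], edges[1:], [tol] * workers)
            return sum(parts)
        return -np.log(total(m + 1) / total(m))

    x = np.random.default_rng(0).normal(size=2000)
    print(sample_entropy_mt(x))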
40

Abellán, Joaquin, and Serafin Moral. "An Algorithm to Compute the Upper Entropy for Order-2 Capacities." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 14, no. 02 (April 2006): 141–54. http://dx.doi.org/10.1142/s0218488506003911.

Abstract:
The upper entropy of a credal set is the maximum of the entropies of the probability distributions belonging to it. Although algorithms exist for computing the upper entropy in the particular cases of credal sets associated with belief functions and with probability intervals, none exists for a more general model. In this paper, we present an algorithm to obtain the upper entropy for order-2 capacities. Our algorithm extends the one presented for belief functions, and proofs of correctness are provided. Using a counterexample, we also prove that this algorithm is not valid for general lower probabilities, as it can compute a value strictly greater than the maximum entropy.
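In symbols, the quantity being computed is simply the maximum Shannon entropy over the credal set \( \mathcal{C} \) induced by the capacity; the algorithm's contribution is reaching this maximum efficiently when \( \mathcal{C} \) comes from an order-2 capacity:

    \[
    H^{*}(\mathcal{C}) = \max_{p \in \mathcal{C}} \Big( - \sum_{x \in X} p(x) \log_2 p(x) \Big).
    \]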
41

Kafantaris, Evangelos, Ian Piper, Tsz-Yan Milly Lo, and Javier Escudero. "Augmentation of Dispersion Entropy for Handling Missing and Outlier Samples in Physiological Signal Monitoring." Entropy 22, no. 3 (March 11, 2020): 319. http://dx.doi.org/10.3390/e22030319.

Abstract:
Entropy quantification algorithms are becoming a prominent tool for the physiological monitoring of individuals through the effective measurement of irregularity in biological signals. However, to ensure their effective adoption in monitoring applications, these algorithms need to remain robust when analysing time series containing missing and outlier samples, which are a common occurrence in physiological monitoring setups such as wearable devices and intensive care units. This paper focuses on augmenting dispersion entropy (DisEn) by introducing novel variations of the algorithm for improved performance in such applications. The original algorithm and its variations are tested under different experimental setups replicated across heart rate interval, electroencephalogram, and respiratory impedance time series. Our results indicate that the algorithmic variations of DisEn achieve considerable improvements in performance, while our analysis confirms, in agreement with previous research, that outlier samples can have a major impact on the performance of entropy quantification algorithms. Consequently, the presented variations can aid the deployment of DisEn in physiological monitoring applications by mitigating the disruptive effect of missing and outlier samples.
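For reference, a compact Python version of baseline dispersion entropy is given below, with the simplest possible handling of missing samples (dropping NaNs). The paper's augmented variations are more sophisticated than this sketch, and all parameter values here are arbitrary.

    import numpy as np
    from collections import Counter
    from scipy.stats import norm

    def dispersion_entropy(x, m=2, c=6, delay=1):
        x = np.asarray(x, float)
        x = x[~np.isnan(x)]  # naive missing-sample handling; the paper studies better options
        # Map samples to c classes through the normal CDF.
        z = np.clip(np.round(c * norm.cdf((x - x.mean()) / x.std()) + 0.5), 1, c).astype(int)
        span = (m - 1) * delay
        patterns = [tuple(z[i : i + span + 1 : delay]) for i in range(len(z) - span)]
        p = np.array(list(Counter(patterns).values()), float) / len(patterns)
        return float(-np.sum(p * np.log(p)) / np.log(c ** m))  # normalized to [0, 1]

    sig = np.random.default_rng(0).normal(size=1000)
    sig[::50] = np.nan  # simulate 2% missing samples
    print(dispersion_entropy(sig))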
42

Lee, Jaehyuk, and Kyungroul Lee. "A Method for Neutralizing Entropy Measurement-Based Ransomware Detection Technologies Using Encoding Algorithms." Entropy 24, no. 2 (February 4, 2022): 239. http://dx.doi.org/10.3390/e24020239.

Abstract:
Ransomware consists of malicious code that blocks users' access to their own files while demanding a ransom payment. Since its advent, new and variant ransomware has caused critical damage around the world, prompting the study of detection and prevention technologies. Ransomware encrypts files, and encrypted files are characterized by increased entropy. Exploiting this characteristic, defense technologies detect ransomware-infected files by measuring the entropy of clean and encrypted files against a derived entropy threshold. In response, attackers have applied methods that keep entropy from increasing even when files are encrypted, so that infected files cannot be detected through entropy changes: if an attacker applies a base64 encoding algorithm to the encrypted files, the files infected by ransomware will have a low entropy value, which can neutralize entropy-based detection. In this paper, we therefore propose a method to neutralize ransomware detection technologies, including those using more sophisticated entropy measurements, by applying various encoding algorithms beyond base64 to various file formats. To this end, we analyze the limitations and problems of existing entropy measurement-based ransomware detection technologies under encoding, and based on the results of this analysis we propose a more effective neutralization method.
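The core observation is easy to reproduce: well-encrypted bytes look uniform over 256 values (about 8 bits/byte), while base64 output uses a 64-symbol alphabet, capping entropy near 6 bits/byte and slipping under thresholds tuned for encrypted content. A minimal demonstration (our own, not the paper's measurement code):

    import base64
    import math
    import os
    from collections import Counter

    def byte_entropy(data):
        # Shannon entropy of the byte-value distribution, in bits per byte.
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

    blob = os.urandom(1 << 16)                    # stand-in for a ransomware-encrypted file
    print(byte_entropy(blob))                     # ~8.0 bits/byte
    print(byte_entropy(base64.b64encode(blob)))   # ~6.0 bits/byte, below typical thresholds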
43

Gao, Shu Tao. "The Scheduling Algorithm of Grid Task Based on Cloud Model." Key Engineering Materials 439-440 (June 2010): 1177–83. http://dx.doi.org/10.4028/www.scientific.net/kem.439-440.1177.

Abstract:
In this paper, a grid task scheduling optimization algorithm based on the cloud model is proposed. Treating candidate schedules as cloud droplets, the algorithm obtains the three characteristic values of a cloud through the backward (reverse) cloud generator: expectation, entropy, and hyper-entropy (excess entropy). It then generates new droplets with the forward cloud generator by adjusting the entropy and hyper-entropy values, and after several iterations it converges to an optimal task schedule. Theoretical analysis and simulation results show that this scheduling algorithm effectively achieves load balancing of resources and, with higher accuracy and faster convergence, avoids problems such as the local optima and premature convergence that genetic algorithms suffer from under excessive selection pressure.
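The building block referenced here is the normal cloud generator: droplets are drawn around the expectation Ex with a dispersion that is itself randomized by the hyper-entropy He. A minimal sketch of that generator follows; the surrounding scheduling loop is the paper's own and is not reproduced.

    import numpy as np

    def forward_cloud(Ex, En, He, n=1000, seed=0):
        # Each droplet x_i ~ N(Ex, En_i^2), where the dispersion En_i ~ N(En, He^2).
        rng = np.random.default_rng(seed)
        En_i = rng.normal(En, He, n)
        return rng.normal(Ex, np.abs(En_i))

    drops = forward_cloud(Ex=10.0, En=2.0, He=0.4)
    print(drops.mean(), drops.std())  # concentrated near Ex, spread governed by En and He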
44

Zhuang, Liyun, and Yepeng Guan. "Adaptive Image Enhancement Using Entropy-Based Subhistogram Equalization." Computational Intelligence and Neuroscience 2018 (August 13, 2018): 1–13. http://dx.doi.org/10.1155/2018/3837275.

Abstract:
A novel image enhancement approach called entropy-based adaptive subhistogram equalization (EASHE) is put forward in this paper. The proposed algorithm divides the histogram of the input image into four segments based on the entropy value of the histogram and adjusts the dynamic range of each subhistogram. A novel algorithm for adjusting the probability density function of the gray levels is proposed, which can adaptively control the degree of image enhancement. The final contrast-enhanced image is obtained by equalizing each subhistogram independently. The proposed algorithm is compared with state-of-the-art histogram equalization-based algorithms, and the quantitative results for a public image database named CVG-UGR-Database are statistically analyzed. The quantitative and visual assessments show that the proposed algorithm outperforms most existing contrast-enhancement algorithms, enhancing contrast more effectively while preserving mean brightness and detail.
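A hedged sketch of the segmentation idea: split the gray-level histogram where the cumulative entropy reaches quarters of the total, then equalize each sub-histogram within its own range. EASHE's actual segmentation and PDF-adjustment steps are more elaborate than this generic version.

    import numpy as np

    def entropy_quartile_cuts(hist, k=4):
        p = hist / hist.sum()
        contrib = np.where(p > 0, -p * np.log2(p), 0.0)  # per-bin entropy contributions
        cum = np.cumsum(contrib)
        return np.searchsorted(cum, cum[-1] * np.arange(1, k) / k)

    def subhistogram_equalize(img, k=4):
        hist = np.bincount(img.ravel(), minlength=256)
        cuts = [0, *entropy_quartile_cuts(hist, k), 256]
        out = img.copy()
        for lo, hi in zip(cuts[:-1], cuts[1:]):
            mask = (img >= lo) & (img < hi)
            if mask.any():
                seg = np.bincount(img[mask] - lo, minlength=hi - lo)
                cdf = np.cumsum(seg) / seg.sum()
                out[mask] = (lo + cdf[img[mask] - lo] * (hi - 1 - lo)).astype(np.uint8)
        return out

    img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
    print(subhistogram_equalize(img).min(), subhistogram_equalize(img).max())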
45

Zhang, Lei, and Liefeng Qiu. "Aerobic Exercise Fatigue Detection Based on Spatiotemporal Entropy and Label Technology." Scientific Programming 2022 (April 5, 2022): 1–9. http://dx.doi.org/10.1155/2022/8280685.

Abstract:
Exercise can strengthen the body and lift one's mood, but excessive exercise can cause physical injury. To address this issue, this paper proposes the TFD-SE (three-frame difference spatiotemporal entropy) algorithm and the LB (label propagation) algorithm, both based on spatiotemporal entropy (SE) and label technology. The TFD-SE algorithm computes difference images with the three-frame difference method, calculates the SE of pixels in the difference image, and applies morphological filtering and threshold segmentation, allowing it to detect moving objects effectively. The LB algorithm computes the significance value of unlabeled nodes in the image. In both qualitative and quantitative comparisons, the experimental results show that the two algorithms outperform other classical algorithms in detection performance.
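The three-frame difference step at the heart of TFD-SE is straightforward to sketch; the subsequent spatiotemporal-entropy, morphological-filtering, and threshold-segmentation stages are the paper's contribution and are not reproduced here (the threshold value below is arbitrary).

    import numpy as np

    def three_frame_difference(f1, f2, f3, thresh=15):
        # Motion mask: a pixel moves if it changes in both consecutive difference images.
        d1 = np.abs(f2.astype(np.int16) - f1.astype(np.int16))
        d2 = np.abs(f3.astype(np.int16) - f2.astype(np.int16))
        return ((d1 > thresh) & (d2 > thresh)).astype(np.uint8)

    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, (48, 48), dtype=np.uint8) for _ in range(3)]
    print(three_frame_difference(*frames).sum(), "moving pixels flagged")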
46

Bonidia, Robson P., Anderson P. Avila Santos, Breno L. S. de Almeida, Peter F. Stadler, Ulisses Nunes da Rocha, Danilo S. Sanches, and André C. P. L. F. de Carvalho. "Information Theory for Biological Sequence Classification: A Novel Feature Extraction Technique Based on Tsallis Entropy." Entropy 24, no. 10 (October 1, 2022): 1398. http://dx.doi.org/10.3390/e24101398.

Abstract:
In recent years, there has been exponential growth in sequencing projects due to accelerated technological advances, leading to a significant increase in the amount of data and to new challenges for biological sequence analysis. Consequently, techniques capable of analyzing large amounts of data, such as machine learning (ML) algorithms, have been explored. ML algorithms are being used to analyze and classify biological sequences, despite the intrinsic difficulty of extracting representative features suitable for them. Extracting numerical features to represent sequences makes it statistically feasible to apply universal concepts from information theory, such as Tsallis and Shannon entropy. In this study, we propose a novel Tsallis entropy-based feature extractor to provide useful information for classifying biological sequences. To assess its relevance, we prepared five case studies: (1) an analysis of the entropic index q; (2) performance testing of the best entropic indices on new datasets; (3) a comparison with Shannon entropy and (4) with generalized entropies; and (5) an investigation of Tsallis entropy in the context of dimensionality reduction. Our proposal proved effective: it was superior to Shannon entropy, robust in terms of generalization, and potentially representative for capturing information in fewer dimensions than methods such as Singular Value Decomposition and Uniform Manifold Approximation and Projection.
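The extractor's core quantity is easy to state: the Tsallis entropy of a k-mer frequency distribution, which recovers Shannon entropy as q approaches 1. A minimal sketch (k, q, and the toy sequence are arbitrary choices of ours, not the paper's settings):

    import math
    from collections import Counter

    def tsallis_entropy(sequence, k=3, q=2.0):
        # Tsallis entropy S_q = (1 - sum p^q) / (q - 1) over k-mer frequencies.
        kmers = [sequence[i : i + k] for i in range(len(sequence) - k + 1)]
        probs = [cnt / len(kmers) for cnt in Counter(kmers).values()]
        if q == 1.0:  # Shannon limit
            return -sum(p * math.log(p) for p in probs)
        return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

    seq = "ACGTACGTGGGTACCA"
    print([round(tsallis_entropy(seq, k=2, q=q), 4) for q in (0.5, 1.0, 2.0)])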
47

Cholewa, Marcin, and Bartłomiej Płaczek. "Application of Positional Entropy to Fast Shannon Entropy Estimation for Samples of Digital Signals." Entropy 22, no. 10 (October 19, 2020): 1173. http://dx.doi.org/10.3390/e22101173.

Abstract:
This paper introduces a new method of estimating Shannon entropy. The proposed method can be successfully applied to large data samples and enables fast computations for ranking data samples according to their Shannon entropy. The original definitions of positional entropy and integer entropy are discussed in detail to explain the theoretical concepts underpinning the proposed approach, and relations between positional entropy, integer entropy, and Shannon entropy are demonstrated through computational experiments. The usefulness of the introduced method was experimentally verified on data samples of various types and sizes, and the experimental results clearly show that the proposed approach can be successfully used for fast entropy estimation; the analysis also addressed the quality of the estimation. Several possible implementations of the proposed method are discussed and compared with existing solutions, demonstrating that the algorithms presented in this paper estimate Shannon entropy faster and more accurately than state-of-the-art algorithms.
48

Furlong, Ryan, Mirvana Hilal, Vincent O'Brien, and Anne Humeau-Heurtier. "Parameter Analysis of Multiscale Two-Dimensional Fuzzy and Dispersion Entropy Measures Using Machine Learning Classification." Entropy 23, no. 10 (October 3, 2021): 1303. http://dx.doi.org/10.3390/e23101303.

Abstract:
Two-dimensional fuzzy entropy, dispersion entropy, and their multiscale extensions (MFuzzyEn2D and MDispEn2D, respectively) have shown promising results for image classification. However, these results rely on the selection of key parameters that may strongly influence the entropy values obtained, and the optimal choice of these parameters has not been studied thoroughly. We propose a study of the impact of these parameters on image classification. For this purpose, the entropy-based algorithms are applied to a variety of images from different datasets, each containing multiple image classes. Several parameter combinations are used to obtain the entropy values, which are then fed to a range of machine learning classifiers, and the algorithm parameters are analyzed based on the classification results. With suitable parameters, both MFuzzyEn2D and MDispEn2D approach the state of the art in image classification for multiple image types, reaching an average maximum accuracy above 95% on all datasets tested. Moreover, MFuzzyEn2D yields better classification performance than MDispEn2D in the majority of cases, and the choice of classifier does not have a significant impact on the classification of the features extracted by either entropy algorithm. These results open new perspectives for entropy-based measures in texture analysis.
49

Zhang, Xin, and Xiu Zhang. "Cross Entropy Method Meets Local Search for Continuous Optimization Problems." International Journal on Artificial Intelligence Tools 26, no. 06 (December 2017): 1750020. http://dx.doi.org/10.1142/s0218213017500208.

Abstract:
The effectiveness of the cross entropy (CE) method has been investigated on both combinatorial and continuous optimization problems, but the method lacks an exploitative search to refine solutions. Hybridization with a local search (LS) method can greatly improve the performance of an evolutionary algorithm. This paper proposes a parameter-less framework combining CE with LS methods. Four LS methods are chosen, yielding four combination algorithms after they are coupled with the CE method. We first study the performance of the four combinations on a set of twenty-eight mathematical functions, including both unimodal and multimodal functions; the CE hybrid with Powell's method (CE-Pow) is identified as the most effective algorithm. The CE-Pow algorithm is then applied to a proportional-integral-derivative (PID) controller design problem and to the Lennard-Jones potential problem, and its performance is verified by comparison with four state-of-the-art evolutionary algorithms. Experimental results show that CE-Pow significantly outperforms the other benchmark algorithms.
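The hybridization pattern can be illustrated in a few lines: run a standard cross-entropy loop to move a sampling distribution toward good regions, then hand the incumbent to Powell's method for exploitative refinement. This is our own minimal rendition on a toy sphere function, not the paper's parameter-less framework.

    import numpy as np
    from scipy.optimize import minimize

    def ce_powell(f, dim, pop=100, elite=10, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        mu, sigma = np.zeros(dim), np.full(dim, 2.0)
        for _ in range(iters):
            samples = rng.normal(mu, sigma, size=(pop, dim))
            best = samples[np.argsort([f(x) for x in samples])[:elite]]
            mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-12  # CE distribution update
        return minimize(f, mu, method="Powell")  # local search polishes the CE incumbent

    sphere = lambda x: float(np.sum((x - 1.0) ** 2))
    print(ce_powell(sphere, dim=5).x)  # close to the optimum at [1, 1, 1, 1, 1]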
50

Lee, Hak-Su, Chang-Yong Kang, Sang-Hyung Kim, and Sung-Won Jung. "Entropy Interpretation on Flow Distribution Algorithms." Journal of Korea Water Resources Association 36, no. 2 (April 1, 2003): 263–71. http://dx.doi.org/10.3741/jkwra.2003.36.2.263.
