
Journal articles on the topic 'Conditional mutual information'


Consult the top 50 journal articles for your research on the topic 'Conditional mutual information.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.
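For orientation: every entry below revolves around the same quantity, conditional mutual information. As a standard textbook reminder (not drawn from any particular entry), for discrete random variables it is

```latex
% Conditional mutual information of X and Y given Z (discrete case)
I(X;Y \mid Z)
  = \sum_{x,y,z} p(x,y,z)\,\log \frac{p(x,y \mid z)}{p(x \mid z)\, p(y \mid z)}
  = H(X \mid Z) + H(Y \mid Z) - H(X,Y \mid Z).
```

It is non-negative and vanishes exactly when X and Y are conditionally independent given Z, which is why it recurs throughout this list in conditional independence testing, feature selection, and causal discovery.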

1

Zhang, Lin. "Conditional Mutual Information and Commutator." International Journal of Theoretical Physics 52, no. 6 (2013): 2112–17. http://dx.doi.org/10.1007/s10773-013-1505-7.

2

Suleykin, A., and R. Mammahajiyev. "MAXIMISATION OF CONDITIONAL MUTUAL INFORMATION." Norwegian Journal of development of the International Science, no. 95 (October 26, 2022): 37–41. https://doi.org/10.5281/zenodo.7258567.

3

Liu, Xingtu. "Information-Theoretic Generalization Bounds for Batch Reinforcement Learning." Entropy 26, no. 11 (2024): 995. http://dx.doi.org/10.3390/e26110995.

Abstract:
We analyze the generalization properties of batch reinforcement learning (batch RL) with value function approximation from an information-theoretic perspective. We derive generalization bounds for batch RL using (conditional) mutual information. In addition, we demonstrate how to establish a connection between certain structural assumptions on the value function space and conditional mutual information. As a by-product, we derive a high-probability generalization bound via conditional mutual information, which was left open and may be of independent interest.
4

Loeckx, D., P. Slagmolen, F. Maes, D. Vandermeulen, and P. Suetens. "Nonrigid Image Registration Using Conditional Mutual Information." IEEE Transactions on Medical Imaging 29, no. 1 (2010): 19–29. http://dx.doi.org/10.1109/tmi.2009.2021843.

5

Tezuka, Taro, and Shizuma Namekawa. "Information Bottleneck Analysis by a Conditional Mutual Information Bound." Entropy 23, no. 8 (2021): 974. http://dx.doi.org/10.3390/e23080974.

Abstract:
Task-nuisance decomposition describes why the information bottleneck loss I(z;x)−βI(z;y) is a suitable objective for supervised learning. The true category y is predicted for input x using latent variables z. When n is a nuisance independent from y, I(z;n) can be decreased by reducing I(z;x) since the latter upper bounds the former. We extend this framework by demonstrating that conditional mutual information I(z;x|y) provides an alternative upper bound for I(z;n). This bound is applicable even if z is not a sufficient representation of x, that is, I(z;y)≠I(x;y). We used mutual information neural estimation (MINE) to estimate I(z;x|y). Experiments demonstrated that I(z;x|y) is smaller than I(z;x) for layers closer to the input, matching the claim that the former is a tighter bound than the latter. Because of this difference, the information plane differs when I(z;x|y) is used instead of I(z;x).
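The inequality invoked in the abstract above follows from a standard chain-rule identity (a textbook fact, not a quotation from the paper): when the latent code z is computed from x alone, so that y → x → z forms a Markov chain, I(z;y|x) = 0 and therefore

```latex
I(z;x) \;=\; I(z;x,y) \;=\; I(z;y) + I(z;x \mid y) \;\ge\; I(z;x \mid y),
```

which is why I(z;x|y) is the tighter of the two upper bounds on the nuisance information I(z;n).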
6

He, Deng Chao, Wen Ning Hao, Gang Chen, and Da Wei Jin. "An Improved Feature Selection Algorithm Based on Parzen Window and Conditional Mutual Information." Applied Mechanics and Materials 347-350 (August 2013): 2614–19. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2614.

Abstract:
In this paper, an improved feature selection algorithm based on conditional mutual information and Parzen windows is proposed. It adopts conditional mutual information as the evaluation criterion for feature selection in order to overcome the problem of feature redundancy, and it uses Parzen windows to estimate the probability density functions and compute the conditional mutual information of continuous variables, thereby achieving feature selection for continuous data.
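This entry, like nos. 7, 28, 39 and 50 below, uses conditional mutual information as a greedy filter criterion for feature selection. As background only, and not as the algorithm of any particular paper in this list, a minimal CMIM-style sketch with plug-in estimates for discrete features might look as follows (the function names and toy data are illustrative):

```python
import numpy as np
from collections import Counter

def entropy(*cols):
    """Plug-in (empirical) joint entropy of one or more discrete columns, in nats."""
    joint = list(zip(*cols))
    counts = np.array(list(Counter(joint).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def cmi(x, y, z=None):
    """Empirical I(x; y | z); falls back to plain mutual information if z is None."""
    if z is None:
        return entropy(x) + entropy(y) - entropy(x, y)
    return entropy(x, z) + entropy(y, z) - entropy(x, y, z) - entropy(z)

def greedy_cmi_selection(X, y, k):
    """CMIM-style greedy filter: at each step pick the feature whose worst-case
    conditional relevance min_{s in selected} I(f; y | s) is largest."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < k:
        def score(f):
            if not selected:
                return cmi(X[:, f], y)
            return min(cmi(X[:, f], y, X[:, s]) for s in selected)
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy check: y depends on features 0 and 1; feature 2 is a redundant copy of 0.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(2000, 4))
X[:, 2] = X[:, 0]
y = X[:, 0] + X[:, 1]
print(greedy_cmi_selection(X, y, k=2))  # typically [0, 1]
```

Conditioning each candidate on the already selected features is what penalises redundancy; the papers in this list differ mainly in how that conditioning is approximated or estimated.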
7

Zeng, Zilin, Hongjun Zhang, Rui Zhang, and Youliang Zhang. "A Hybrid Feature Selection Method Based on Rough Conditional Mutual Information and Naive Bayesian Classifier." ISRN Applied Mathematics 2014 (March 30, 2014): 1–11. http://dx.doi.org/10.1155/2014/382738.

Abstract:
We introduce a novel hybrid feature selection method based on rough conditional mutual information and the naive Bayesian classifier. Conditional mutual information is an important metric in feature selection, but it is hard to compute. We introduce a new measure, called rough conditional mutual information, which is based on rough sets; it is shown that the new measure can substitute for Shannon's conditional mutual information. Thus rough conditional mutual information can also be used to filter out irrelevant and redundant features. Subsequently, to reduce the number of features and improve classification accuracy, a wrapper approach based on the naive Bayesian classifier is used to search for the optimal feature subset in the space of candidate feature subsets selected by the filter model. Finally, the proposed algorithms are tested on several UCI datasets and compared with other classical feature selection methods. The results show that our approach obtains not only high classification accuracy but also the smallest number of selected features.
8

Vu, Vincent Q., Bin Yu, and Robert E. Kass. "Information in the Nonstationary Case." Neural Computation 21, no. 3 (2009): 688–703. http://dx.doi.org/10.1162/neco.2008.01-08-700.

Abstract:
Information estimates such as the direct method of Strong, Koberle, de Ruyter van Steveninck, and Bialek (1998) sidestep the difficult problem of estimating the joint distribution of response and stimulus by instead estimating the difference between the marginal and conditional entropies of the response. While this is an effective estimation strategy, it tempts the practitioner to ignore the role of the stimulus and the meaning of mutual information. We show here that as the number of trials increases indefinitely, the direct (or plug-in) estimate of marginal entropy converges (with probability 1) to the entropy of the time-averaged conditional distribution of the response, and the direct estimate of the conditional entropy converges to the time-averaged entropy of the conditional distribution of the response. Under joint stationarity and ergodicity of the response and stimulus, the difference of these quantities converges to the mutual information. When the stimulus is deterministic or nonstationary the direct estimate of information no longer estimates mutual information, which is no longer meaningful, but it remains a measure of variability of the response distribution across time.
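For readers unfamiliar with the "direct method" discussed above, the quantity being estimated is the standard decomposition (a textbook identity, not specific to this paper)

```latex
I(R;S) \;=\; H(R) - H(R \mid S),
```

where R is the response and S the stimulus; the direct estimate plugs empirical entropies into the two terms, and the cited paper analyses what those plug-in quantities converge to when the stimulus is deterministic or nonstationary, in which case their difference is no longer a mutual information.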
9

Zhou, HongFang, Yao Zhang, YingJie Zhang, and HongJiang Liu. "Feature selection based on conditional mutual information: minimum conditional relevance and minimum conditional redundancy." Applied Intelligence 49, no. 3 (2018): 883–96. http://dx.doi.org/10.1007/s10489-018-1305-0.

10

Shayovitz, Shachar, and Meir Feder. "Universal Active Learning via Conditional Mutual Information Minimization." IEEE Journal on Selected Areas in Information Theory 2, no. 2 (2021): 720–34. http://dx.doi.org/10.1109/jsait.2021.3073842.

11

Wang, Xiujuan, and Yuchen Zhou. "Multi-Label Feature Selection with Conditional Mutual Information." Computational Intelligence and Neuroscience 2022 (October 8, 2022): 1–13. http://dx.doi.org/10.1155/2022/9243893.

Abstract:
Feature selection is an important way to optimize the efficiency and accuracy of classifiers. However, traditional feature selection methods cannot work with many kinds of real-world data, such as multi-label data. To overcome this challenge, multi-label feature selection has been developed. Multi-label feature selection plays an irreplaceable role in pattern recognition and data mining, and it can improve the efficiency and accuracy of multi-label classification. However, traditional multi-label feature selection based on mutual information does not fully consider the effect of redundancy among labels. This deficiency may lead to repeated computation of mutual information and leaves room to enhance the accuracy of multi-label feature selection. To deal with this challenge, this paper proposes a multi-label feature selection method based on conditional mutual information among labels (CRMIL). Firstly, we analyze how to reduce the redundancy among features based on existing papers. Secondly, we propose a new approach to diminish the redundancy among labels. This method takes label sets as conditions to calculate the relevance between features and labels, which weakens the impact of label redundancy on feature selection results. Finally, we analyze this algorithm and balance the effects of relevance and redundancy on the evaluation function. For testing CRMIL, we compare it with eight other multi-label feature selection algorithms on ten datasets and use four evaluation criteria to examine the results. Experimental results illustrate that CRMIL performs better than other existing algorithms.
12

Ahn, Chi Kyung, and Donguk Kim. "Efficient variable selection method using conditional mutual information." Journal of the Korean Data and Information Science Society 25, no. 5 (2014): 1079–94. http://dx.doi.org/10.7465/jkdi.2014.25.5.1079.

13

Berta, Mario, Kaushik P. Seshadreesan, and Mark M. Wilde. "Rényi generalizations of the conditional quantum mutual information." Journal of Mathematical Physics 56, no. 2 (2015): 022205. http://dx.doi.org/10.1063/1.4908102.

14

Papapetrou, M., and D. Kugiumtzis. "Markov chain order estimation with conditional mutual information." Physica A: Statistical Mechanics and its Applications 392, no. 7 (2013): 1593–601. http://dx.doi.org/10.1016/j.physa.2012.12.017.

15

Fawzi, Omar, and Renato Renner. "Quantum Conditional Mutual Information and Approximate Markov Chains." Communications in Mathematical Physics 340, no. 2 (2015): 575–611. http://dx.doi.org/10.1007/s00220-015-2466-x.

16

Zhang, Lin, and Junde Wu. "A lower bound of quantum conditional mutual information." Journal of Physics A: Mathematical and Theoretical 47, no. 41 (2014): 415303. http://dx.doi.org/10.1088/1751-8113/47/41/415303.

17

Lee, J., and D. W. Kim. "Efficient multivariate feature filter using conditional mutual information." Electronics Letters 48, no. 3 (2012): 161. http://dx.doi.org/10.1049/el.2011.3063.

18

Liang, Kuo-Ching, and Xiaodong Wang. "Gene Regulatory Network Reconstruction Using Conditional Mutual Information." EURASIP Journal on Bioinformatics and Systems Biology 2008 (2008): 1–14. http://dx.doi.org/10.1155/2008/253894.

19

Ahmadi, Jafar, Antonio Di Crescenzo, and Maria Longobardi. "On dynamic mutual information for bivariate lifetimes." Advances in Applied Probability 47, no. 4 (2015): 1157–74. http://dx.doi.org/10.1239/aap/1449859804.

Abstract:
We consider dynamic versions of the mutual information of lifetime distributions, with a focus on past lifetimes, residual lifetimes, and mixed lifetimes evaluated at different instants. This allows us to study multicomponent systems, by measuring the dependence in conditional lifetimes of two components having possibly different ages. We provide some bounds, and investigate the mutual information of residual lifetimes within the time-transformed exponential model (under both the assumptions of unbounded and truncated lifetimes). Moreover, with reference to the order statistics of a random sample, we evaluate explicitly the mutual information between the minimum and the maximum, conditional on inspection at different times, and show that it is distribution-free in a special case. Finally, we develop a copula-based approach aiming to express the dynamic mutual information for past and residual bivariate lifetimes in an alternative way.
20

Ahmadi, Jafar, Antonio Di Crescenzo, and Maria Longobardi. "On dynamic mutual information for bivariate lifetimes." Advances in Applied Probability 47, no. 04 (2015): 1157–74. http://dx.doi.org/10.1017/s0001867800049053.

Abstract:
We consider dynamic versions of the mutual information of lifetime distributions, with a focus on past lifetimes, residual lifetimes, and mixed lifetimes evaluated at different instants. This allows us to study multicomponent systems, by measuring the dependence in conditional lifetimes of two components having possibly different ages. We provide some bounds, and investigate the mutual information of residual lifetimes within the time-transformed exponential model (under both the assumptions of unbounded and truncated lifetimes). Moreover, with reference to the order statistics of a random sample, we evaluate explicitly the mutual information between the minimum and the maximum, conditional on inspection at different times, and show that it is distribution-free in a special case. Finally, we develop a copula-based approach aiming to express the dynamic mutual information for past and residual bivariate lifetimes in an alternative way.
21

DiGiovanni, Anthony, and Jesse Clifton. "Commitment Games with Conditional Information Disclosure." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 5 (2023): 5616–23. http://dx.doi.org/10.1609/aaai.v37i5.25697.

Abstract:
The conditional commitment abilities of mutually transparent computer agents have been studied in previous work on commitment games and program equilibrium. This literature has shown how these abilities can help resolve Prisoner’s Dilemmas and other failures of cooperation in complete information settings. But inefficiencies due to private information have been neglected thus far in this literature, despite the fact that these problems are pervasive and might also be addressed by greater mutual transparency. In this work, we introduce a framework for commitment games with a new kind of conditional commitment device, which agents can use to conditionally disclose private information. We prove a folk theorem for this setting that provides sufficient conditions for ex post efficiency, and thus represents a model of ideal cooperation between agents without a third-party mediator. Further, extending previous work on program equilibrium, we develop an implementation of conditional information disclosure. We show that this implementation forms program ε-Bayesian Nash equilibria corresponding to the Bayesian Nash equilibria of these commitment games.
22

Batra, Luckshay, and Harish Chander Taneja. "Comparison between Information Theoretic Measures to Assess Financial Markets." FinTech 1, no. 2 (2022): 137–54. http://dx.doi.org/10.3390/fintech1020011.

Abstract:
Information theoretic measures were applied to the study of the randomness associations of different financial time series. We studied the level of similarities between information theoretic measures and the various tools of regression analysis, i.e., between Shannon entropy and the total sum of squares of the dependent variable, relative mutual information and coefficients of correlation, conditional entropy and residual sum of squares, etc. We observed that mutual information and its dynamical extensions provide an alternative approach with some advantages to study the association between several international stock indices. Furthermore, mutual information and conditional entropy are relatively efficient compared to the measures of statistical dependence.
23

Łazęcka, Małgorzata, and Jan Mielniczuk. "Analysis of Information-Based Nonparametric Variable Selection Criteria." Entropy 22, no. 9 (2020): 974. http://dx.doi.org/10.3390/e22090974.

Abstract:
We consider a nonparametric Generative Tree Model and discuss the problem of selecting active predictors for the response in this scenario. We investigate two popular information-based selection criteria: Conditional Infomax Feature Extraction (CIFE) and Joint Mutual Information (JMI), which are both derived as approximations of the Conditional Mutual Information (CMI) criterion. We show that both CIFE and JMI may exhibit behavior different from CMI, resulting in different orders in which predictors are chosen in the variable selection process. Explicit formulae for CMI and its two approximations in the generative tree model are obtained. As a byproduct, we establish expressions for the entropy of a multivariate Gaussian mixture and its mutual information with the mixing distribution.
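For context, the two criteria compared in this abstract are usually written as truncations of the full CMI score I(X_k; Y | X_S) for a candidate feature X_k given the selected set S. In the forms commonly quoted in the feature-selection literature (the notation in the cited paper may differ):

```latex
J_{\mathrm{CIFE}}(X_k) = I(X_k;Y) - \sum_{j \in S} \bigl[ I(X_k;X_j) - I(X_k;X_j \mid Y) \bigr],
\qquad
J_{\mathrm{JMI}}(X_k) = I(X_k;Y) - \frac{1}{|S|} \sum_{j \in S} \bigl[ I(X_k;X_j) - I(X_k;X_j \mid Y) \bigr].
```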
24

Zan, Lei, Anouar Meynaoui, Charles K. Assaad, Emilie Devijver, and Eric Gaussier. "A Conditional Mutual Information Estimator for Mixed Data and an Associated Conditional Independence Test." Entropy 24, no. 9 (2022): 1234. http://dx.doi.org/10.3390/e24091234.

Abstract:
In this study, we focus on mixed data, which are either observations of univariate random variables that can be quantitative or qualitative, or observations of multivariate random variables in which each variable can include both quantitative and qualitative components. We first propose a novel method, called CMIh, to estimate conditional mutual information, taking advantage of previously proposed approaches for qualitative and quantitative data. We then introduce a new local permutation test, called LocAT (local adaptive test), which is well adapted to mixed data. Our experiments illustrate the good behaviour of CMIh and LocAT, and show their respective abilities to accurately estimate conditional mutual information and to detect conditional (in)dependence for mixed data.
25

Fullwood, James. "An Axiomatic Characterization of Mutual Information." Entropy 25, no. 4 (2023): 663. http://dx.doi.org/10.3390/e25040663.

Abstract:
We characterize mutual information as the unique map on ordered pairs of discrete random variables satisfying a set of axioms similar to those of Faddeev’s characterization of the Shannon entropy. There is a new axiom in our characterization, however, which has no analog for Shannon entropy, based on the notion of a Markov triangle, which may be thought of as a composition of communication channels for which conditional entropy acts functorially. Our proofs are coordinate-free in the sense that no logarithms appear in our calculations.
26

Ahlswede, Rudolf. "The final form of Tao's inequality relating conditional expectation and conditional mutual information." Advances in Mathematics of Communications 1, no. 2 (2007): 239–42. http://dx.doi.org/10.3934/amc.2007.1.239.

27

Zhai, Yuan, Bo Yang, and Zhengjun Xi. "Belavkin–Staszewski Relative Entropy, Conditional Entropy, and Mutual Information." Entropy 24, no. 6 (2022): 837. http://dx.doi.org/10.3390/e24060837.

Abstract:
Belavkin–Staszewski relative entropy can naturally characterize the effects of the possible noncommutativity of quantum states. In this paper, two new conditional entropy terms and four new mutual information terms are first defined by replacing quantum relative entropy with Belavkin–Staszewski relative entropy. Next, their basic properties are investigated, especially in classical-quantum settings. In particular, we show the weak concavity of the Belavkin–Staszewski conditional entropy and obtain the chain rule for the Belavkin–Staszewski mutual information. Finally, the subadditivity of the Belavkin–Staszewski relative entropy is established, i.e., the Belavkin–Staszewski relative entropy of a joint system is less than the sum of that of its corresponding subsystems with the help of some multiplicative and additive factors. Meanwhile, we also provide a certain subadditivity of the geometric Rényi relative entropy.
28

Liang, Jun, Liang Hou, Zhenhua Luan, and Weiping Huang. "Feature Selection with Conditional Mutual Information Considering Feature Interaction." Symmetry 11, no. 7 (2019): 858. http://dx.doi.org/10.3390/sym11070858.

Abstract:
Feature interaction is a newly proposed type of feature relevance relationship, and the unintentional removal of interactive features can result in poor classification performance. Traditional feature selection algorithms mainly focus on detecting relevant and redundant features, while interactive features are usually ignored. To deal with this problem, feature relevance, feature redundancy and feature interaction are redefined based on information theory. A new feature selection algorithm named CMIFSI (Conditional Mutual Information based Feature Selection considering Interaction) is then proposed, which makes use of conditional mutual information to estimate feature redundancy and interaction, respectively. To verify the effectiveness of our algorithm, empirical experiments are conducted to compare it with several other representative feature selection algorithms. The results on both synthetic and benchmark datasets indicate that our algorithm achieves better results than other methods in most cases. Furthermore, the results highlight the necessity of dealing with feature interaction.
29

Liping, Wang. "FEATURE SELECTION ALGORITHM BASED ON CONDITIONAL DYNAMIC MUTUAL INFORMATION." International Journal on Smart Sensing and Intelligent Systems 8, no. 1 (2015): 316–37. http://dx.doi.org/10.21307/ijssis-2017-761.

30

Guo, Dongning, Shlomo Shamai, and Sergio Verdu. "Mutual Information and Conditional Mean Estimation in Poisson Channels." IEEE Transactions on Information Theory 54, no. 5 (2008): 1837–49. http://dx.doi.org/10.1109/tit.2008.920206.

31

Ren, Jianfeng, Xudong Jiang, and Junsong Yuan. "Learning LBP structure by maximizing the conditional mutual information." Pattern Recognition 48, no. 10 (2015): 3180–90. http://dx.doi.org/10.1016/j.patcog.2015.02.001.

32

Yuanfang, Wu, and Liu Lianshou. "Conditional entropy and mutual information in random cascading processes." Physical Review D 43, no. 9 (1991): 3077–79. http://dx.doi.org/10.1103/physrevd.43.3077.

33

Zhang, Yishi, and Zigang Zhang. "Feature subset selection with cumulate conditional mutual information minimization." Expert Systems with Applications 39, no. 5 (2012): 6078–88. http://dx.doi.org/10.1016/j.eswa.2011.12.003.

34

Duong, Bao, and Thin Nguyen. "Diffeomorphic Information Neural Estimation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (2023): 7468–75. http://dx.doi.org/10.1609/aaai.v37i6.25908.

Abstract:
Mutual Information (MI) and Conditional Mutual Information (CMI) are multi-purpose tools from information theory that naturally measure statistical dependencies between random variables, and they are therefore of central interest in several statistical and machine learning tasks, such as conditional independence testing and representation learning. However, estimating CMI, or even MI, is infamously challenging due to their intractable formulation. In this study, we introduce DINE (Diffeomorphic Information Neural Estimator), a novel approach for estimating the CMI of continuous random variables, inspired by the invariance of CMI under diffeomorphic maps. We show that the variables of interest can be replaced with appropriate surrogates that follow simpler distributions, allowing the CMI to be efficiently evaluated via analytical solutions. Additionally, we demonstrate the quality of the proposed estimator in comparison with state-of-the-art methods on three important tasks: estimating MI, estimating CMI, and conditional independence testing. The empirical evaluations show that DINE consistently outperforms competitors in all tasks and is able to adapt very well to complex and high-dimensional relationships.
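The invariance DINE exploits is a general property of (conditional) mutual information: applying smooth invertible maps to the variables leaves it unchanged. Stated here only for orientation (a standard fact, not a result of the paper):

```latex
I\bigl(f(X);\, g(Y) \mid Z\bigr) \;=\; I(X;Y \mid Z)
\qquad \text{for any diffeomorphisms } f, g,
```

which is what licenses replacing X and Y with better-behaved surrogate variables before estimation.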
35

Mielniczuk, Jan. "Information Theoretic Methods for Variable Selection—A Review." Entropy 24, no. 8 (2022): 1079. http://dx.doi.org/10.3390/e24081079.

Abstract:
We review the principal information theoretic tools and their use for feature selection, with the main emphasis on classification problems with discrete features. Since it is known that empirical versions of conditional mutual information perform poorly for high-dimensional problems, we focus on various ways of constructing its counterparts and the properties and limitations of such methods. We present a unified way of constructing such measures based on truncation, or truncation and weighing, for the Möbius expansion of conditional mutual information. We also discuss the main approaches to feature selection which apply the introduced measures of conditional dependence, together with the ways of assessing the quality of the obtained vector of predictors. This involves discussion of recent results on asymptotic distributions of empirical counterparts of criteria, as well as advances in resampling.
36

Cai, Changxiao, and Sergio Verdú. "Conditional Rényi Divergence Saddlepoint and the Maximization of α-Mutual Information." Entropy 21, no. 10 (2019): 969. http://dx.doi.org/10.3390/e21100969.

Abstract:
Rényi-type generalizations of entropy, relative entropy and mutual information have found numerous applications throughout information theory and beyond. While there is consensus that the ways A. Rényi generalized entropy and relative entropy in 1961 are the “right” ones, several candidates have been put forth as possible mutual informations of order α. In this paper we lend further evidence to the notion that a Bayesian measure of statistical distinctness introduced by R. Sibson in 1969 (closely related to Gallager’s E0 function) is the most natural generalization, lending itself to explicit computation and maximization, as well as closed-form formulas. This paper considers general (not necessarily discrete) alphabets and extends the major analytical results on the saddle-point and saddle-level of the conditional relative entropy to the conditional Rényi divergence. Several examples illustrate the main application of these results, namely, the maximization of α-mutual information with and without constraints.
37

Wan, Xiaogeng, and Lanxi Xu. "A study for multiscale information transfer measures based on conditional mutual information." PLOS ONE 13, no. 12 (2018): e0208423. http://dx.doi.org/10.1371/journal.pone.0208423.

38

Aishwarya, Gautam, and Mokshay Madiman. "Conditional Rényi Entropy and the Relationships between Rényi Capacities." Entropy 22, no. 5 (2020): 526. http://dx.doi.org/10.3390/e22050526.

Abstract:
The analogues of Arimoto’s definition of conditional Rényi entropy and Rényi mutual information are explored for abstract alphabets. These quantities, although dependent on the reference measure, have some useful properties similar to those known in the discrete setting. In addition to laying out some such basic properties and the relations to Rényi divergences, the relationships between the families of mutual informations defined by Sibson, Augustin-Csiszár, and Lapidoth-Pfister, as well as the corresponding capacities, are explored.
39

Zhang, Li. "A Feature Selection Algorithm Integrating Maximum Classification Information and Minimum Interaction Feature Dependency Information." Computational Intelligence and Neuroscience 2021 (December 28, 2021): 1–10. http://dx.doi.org/10.1155/2021/3569632.

Abstract:
Feature selection is the key step in the analysis of high-dimensional, small-sample data. The core of feature selection is to analyse and quantify the correlation between features and class labels and the redundancy between features. However, most of the existing feature selection algorithms only consider the classification contribution of individual features and ignore the influence of inter-feature redundancy and correlation. Therefore, this paper proposes a nonlinear dynamic conditional relevance feature selection algorithm (NDCRFS) through the study and analysis of existing feature selection ideas and methods. Firstly, redundancy and relevance between features and between features and class labels are discriminated by mutual information, conditional mutual information, and interactive mutual information. Secondly, the selected features and candidate features are dynamically weighted using information gain factors. Finally, to evaluate the performance of this feature selection algorithm, NDCRFS was validated against 6 other feature selection algorithms on three classifiers, using 12 different data sets, comparing variability and classification metrics between the different algorithms. The experimental results show that the NDCRFS method can improve the quality of the feature subsets and obtain better classification results.
40

Molavipour, Sina, German Bassi, and Mikael Skoglund. "Neural Estimators for Conditional Mutual Information Using Nearest Neighbors Sampling." IEEE Transactions on Signal Processing 69 (2021): 766–80. http://dx.doi.org/10.1109/tsp.2021.3050564.

41

Le, Duc H., and Albert C. Reynolds. "Estimation of Mutual Information and Conditional Entropy for Surveillance Optimization." SPE Journal 19, no. 04 (2014): 648–61. http://dx.doi.org/10.2118/163638-pa.

Abstract:
Summary Given a suite of potential surveillance operations, we define surveillance optimization as the problem of choosing the operation that gives the minimum expected value of P90 minus P10 (i.e., P90 – P10) of a specified reservoir variable J (e.g., cumulative oil production) that will be obtained by conditioning J to the observed data. Two questions can be posed: (1) Which surveillance operation is expected to provide the greatest uncertainty reduction in J? and (2) What is the expected value of the reduction in uncertainty that would be achieved if we were to undertake each surveillance operation to collect the associated data and then history match the data obtained? In this work, we extend and apply a conceptual idea that we recently proposed for surveillance optimization to 2D and 3D waterflooding problems. Our method is based on information theory in which the mutual information between J and the random observed data vector Dobs is estimated by use of an ensemble of prior reservoir models. This mutual information reflects the strength of the relationship between J and the potential observed data and provides a qualitative answer to Question 1. Question 2 is answered by calculating the conditional entropy of J to generate an approximation of the expected value of the reduction in (P90 – P10) of J. The reliability of our method depends on obtaining a good estimate of the mutual information. We consider several ways to estimate the mutual information and suggest how a good estimate can be chosen. We validate the results of our proposed method with an exhaustive history-matching procedure. The methodology provides an approximate way to decide the data that should be collected to maximize the uncertainty reduction in a specified reservoir variable and to estimate the reduction in uncertainty that could be obtained. We expect this paper will stimulate significant research on the application of information theory and lead to practical methods and workflows for surveillance optimization.
42

Mesner, Octavio Cesar, and Cosma Rohilla Shalizi. "Conditional Mutual Information Estimation for Mixed, Discrete and Continuous Data." IEEE Transactions on Information Theory 67, no. 1 (2021): 464–84. http://dx.doi.org/10.1109/tit.2020.3024886.

43

Erker, Paul. "How not to Rényi-generalize the quantum conditional mutual information." Journal of Physics A: Mathematical and Theoretical 48, no. 27 (2015): 275303. http://dx.doi.org/10.1088/1751-8113/48/27/275303.

44

Tsimpiris, Alkiviadis, Ioannis Vlachos, and Dimitris Kugiumtzis. "Nearest neighbor estimate of conditional mutual information in feature selection." Expert Systems with Applications 39, no. 16 (2012): 12697–708. http://dx.doi.org/10.1016/j.eswa.2012.05.014.

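Nearest-neighbour estimation of conditional mutual information, as in this entry and in nos. 42 and 45, typically builds on the Frenzel-Pompe construction, a k-NN extension of the Kraskov-Stögbauer-Grassberger estimator. The sketch below illustrates that general construction under the Chebyshev metric; it is not the exact estimator of any cited paper, and the helper name knn_cmi is invented here:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def knn_cmi(x, y, z, k=5):
    """Frenzel-Pompe style k-NN estimate of I(X;Y|Z) in nats for continuous data.
    x, y, z are arrays of shape (n, d_x), (n, d_y), (n, d_z)."""
    xyz, xz, yz = np.hstack([x, y, z]), np.hstack([x, z]), np.hstack([y, z])
    # distance to the k-th neighbour in the joint space (index 0 is the point itself)
    eps = cKDTree(xyz).query(xyz, k=k + 1, p=np.inf)[0][:, -1]

    def n_within(data):
        tree = cKDTree(data)
        # neighbours strictly inside the joint-space radius, excluding the point itself
        return np.array([len(tree.query_ball_point(pt, r * (1 - 1e-10), p=np.inf)) - 1
                         for pt, r in zip(data, eps)])

    return float(digamma(k) - np.mean(digamma(n_within(xz) + 1)
                                      + digamma(n_within(yz) + 1)
                                      - digamma(n_within(z) + 1)))

# Toy check: x and y are conditionally independent given z, so the estimate is near 0.
rng = np.random.default_rng(1)
z = rng.normal(size=(2000, 1))
x = z + 0.5 * rng.normal(size=(2000, 1))
y = z + 0.5 * rng.normal(size=(2000, 1))
print(knn_cmi(x, y, z))
```

Counting, for each point, the neighbours that fall inside its joint-space radius in each marginal space is what turns the three digamma averages into an estimate of I(X;Y|Z).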
45

Jia, Ziyu, Youfang Lin, Zehui Jiao, Yan Ma, and Jing Wang. "Detecting Causality in Multivariate Time Series via Non-Uniform Embedding." Entropy 21, no. 12 (2019): 1233. http://dx.doi.org/10.3390/e21121233.

Abstract:
Causal analysis based on non-uniform embedding schemes is an important way to detect the underlying interactions between dynamic systems. However, there are still some obstacles to estimating high-dimensional conditional mutual information and forming optimal mixed embedding vector in traditional non-uniform embedding schemes. In this study, we present a new non-uniform embedding method framed in information theory to detect causality for multivariate time series, named LM-PMIME, which integrates the low-dimensional approximation of conditional mutual information and the mixed search strategy for the construction of the mixed embedding vector. We apply the proposed method to simulations of linear stochastic, nonlinear stochastic, and chaotic systems, demonstrating its superiority over partial conditional mutual information from mixed embedding (PMIME) method. Moreover, the proposed method works well for multivariate time series with weak coupling strengths, especially for chaotic systems. In the actual application, we show its applicability to epilepsy multichannel electrocorticographic recordings.
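The non-uniform embedding measures in this entry (and the multiscale transfer measures of entry 37) are, at bottom, conditional mutual informations between the future of a target series and the past of a candidate driver, conditioned on the target's own past, i.e. transfer-entropy-type quantities of the generic form (not the exact PMIME statistic, which additionally conditions on the other observed series)

```latex
T_{X \to Y} \;=\; I\bigl(Y_{t+1};\, X_t^{-} \mid Y_t^{-}\bigr),
```

where X_t^- and Y_t^- denote embedded past vectors; the schemes differ mainly in how those past vectors are selected.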
46

Verdú, Sergio. "Error Exponents and α-Mutual Information." Entropy 23, no. 2 (2021): 199. http://dx.doi.org/10.3390/e23020199.

Abstract:
Over the last six decades, the representation of error exponent functions for data transmission through noisy channels at rates below capacity has seen three distinct approaches: (1) Through Gallager’s E0 functions (with and without cost constraints); (2) large deviations form, in terms of conditional relative entropy and mutual information; (3) through the α-mutual information and the Augustin–Csiszár mutual information of order α derived from the Rényi divergence. While a fairly complete picture has emerged in the absence of cost constraints, there have remained gaps in the interrelationships between the three approaches in the general case of cost-constrained encoding. Furthermore, no systematic approach has been proposed to solve the attendant optimization problems by exploiting the specific structure of the information functions. This paper closes those gaps and proposes a simple method to maximize Augustin–Csiszár mutual information of order α under cost constraints by means of the maximization of the α-mutual information subject to an exponential average constraint.
47

Wei, Chenghao, Chen Li, Yingying Liu, et al. "Causal Discovery and Reasoning for Continuous Variables with an Improved Bayesian Network Constructed by Locality Sensitive Hashing and Kernel Density Estimation." Entropy 27, no. 2 (2025): 123. https://doi.org/10.3390/e27020123.

Abstract:
The structure learning of a Bayesian network (BN) is a crucial process that aims to unravel the complex dependency relationships among variables in a given dataset. This paper proposes a new BN structure learning method for data with continuous attribute values. As a non-parametric, distribution-free method, kernel density estimation (KDE) is applied in the conditional independence (CI) test. The skeleton of the BN is constructed using tests based on mutual information and conditional mutual information, delineating potential relational connections between parents and children without imposing any distributional assumptions. In the search stage of BN structure learning, the causal relationships between variables are obtained by using a conditional entropy scoring function and a hill-climbing strategy. To further enhance the computational efficiency of our method, we incorporate a locality sensitive hashing (LSH) function into the KDE process. This speeds up the KDE calculations while maintaining the precision of the estimates, leading to a notable decrease in the time required for computing mutual information, conditional mutual information, and conditional entropy. A BN classifier (BNC) is established using the computationally efficient BN learning method. Our experiments demonstrate that KDE using LSH greatly improves speed compared to traditional KDE without losing fitting accuracy, underscoring the effectiveness of our method in balancing speed and accuracy. On the benchmark networks, the structure learning accuracy of the proposed method is superior to that of other traditional structure learning methods. The BNC also demonstrates better accuracy with stronger interpretability compared to conventional classifiers on public datasets.
48

Bleuler, Cédric, Amos Lapidoth, and Christoph Pfister. "Conditional Rényi Divergences and Horse Betting." Entropy 22, no. 3 (2020): 316. http://dx.doi.org/10.3390/e22030316.

Abstract:
Motivated by a horse betting problem, a new conditional Rényi divergence is introduced. It is compared with the conditional Rényi divergences that appear in the definitions of the dependence measures by Csiszár and Sibson, and the properties of all three are studied with emphasis on their behavior under data processing. In the same way that Csiszár’s and Sibson’s conditional divergence lead to the respective dependence measures, so does the new conditional divergence lead to the Lapidoth–Pfister mutual information. Moreover, the new conditional divergence is also related to the Arimoto–Rényi conditional entropy and to Arimoto’s measure of dependence. In the second part of the paper, the horse betting problem is analyzed where, instead of Kelly’s expected log-wealth criterion, a more general family of power-mean utility functions is considered. The key role in the analysis is played by the Rényi divergence, and in the setting where the gambler has access to side information, the new conditional Rényi divergence is key. The setting with side information also provides another operational meaning to the Lapidoth–Pfister mutual information. Finally, a universal strategy for independent and identically distributed races is presented that—without knowing the winning probabilities or the parameter of the utility function—asymptotically maximizes the gambler’s utility function.
49

Zhao, Juan, Yiwei Zhou, Xiujun Zhang, and Luonan Chen. "Part mutual information for quantifying direct associations in networks." Proceedings of the National Academy of Sciences 113, no. 18 (2016): 5130–35. http://dx.doi.org/10.1073/pnas.1522586113.

Abstract:
Quantitatively identifying direct dependencies between variables is an important task in data analysis, in particular for reconstructing various types of networks and causal relations in science and engineering. One of the most widely used criteria is partial correlation, but it can only measure linearly direct association and miss nonlinear associations. However, based on conditional independence, conditional mutual information (CMI) is able to quantify nonlinearly direct relationships among variables from the observed data, superior to linear measures, but suffers from a serious problem of underestimation, in particular for those variables with tight associations in a network, which severely limits its applications. In this work, we propose a new concept, “partial independence,” with a new measure, “part mutual information” (PMI), which not only can overcome the problem of CMI but also retains the quantification properties of both mutual information (MI) and CMI. Specifically, we first defined PMI to measure nonlinearly direct dependencies between variables and then derived its relations with MI and CMI. Finally, we used a number of simulated data as benchmark examples to numerically demonstrate PMI features and further real gene expression data from Escherichia coli and yeast to reconstruct gene regulatory networks, which all validated the advantages of PMI for accurately quantifying nonlinearly direct associations in networks.
50

Cheng, Hongrong, Zhiguang Qin, Chaosheng Feng, Yong Wang, and Fagen Li. "Conditional Mutual Information-Based Feature Selection Analyzing for Synergy and Redundancy." ETRI Journal 33, no. 2 (2011): 210–18. http://dx.doi.org/10.4218/etrij.11.0110.0237.
