Journal articles on the topic 'Distance de Wasserstein'

Below are the top 50 journal articles on the topic 'Distance de Wasserstein'.

1

Vayer, Titouan, Laetitia Chapel, Remi Flamary, Romain Tavenard, and Nicolas Courty. "Fused Gromov-Wasserstein Distance for Structured Objects." Algorithms 13, no. 9 (2020): 212. http://dx.doi.org/10.3390/a13090212.

Abstract:
Optimal transport theory has recently found many applications in machine learning thanks to its capacity to meaningfully compare various machine learning objects that are viewed as distributions. The Kantorovich formulation, leading to the Wasserstein distance, focuses on the features of the elements of the objects but treats them independently, whereas the Gromov–Wasserstein distance focuses on the relations between the elements, depicting the structure of the object yet discarding its features. In this paper, we study the Fused Gromov–Wasserstein distance, which extends the Wasserstein and […]
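In one dimension, the Kantorovich formulation discussed in this abstract admits a closed form: the optimal plan between two equal-size empirical samples is the monotone matching of sorted values. A minimal sketch (the function name wasserstein_1d is my own, not from the paper):

```python
def wasserstein_1d(xs, ys, p=1):
    """Empirical p-Wasserstein distance between two equal-size 1-D samples.

    After sorting, the optimal transport plan is the monotone matching,
    so W_p reduces to a p-th mean of matched absolute differences.
    """
    if len(xs) != len(ys):
        raise ValueError("samples must have equal size")
    xs, ys = sorted(xs), sorted(ys)
    n = len(xs)
    return (sum(abs(a - b) ** p for a, b in zip(xs, ys)) / n) ** (1.0 / p)
```

For example, wasserstein_1d([0, 1, 2], [1, 2, 3]) returns 1.0, reflecting that shifting a sample by a constant moves it by exactly that constant in W_p.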
2

Çelik, Türkü Özlüm, Asgar Jamneshan, Guido Montúfar, Bernd Sturmfels, and Lorenzo Venturello. "Wasserstein distance to independence models." Journal of Symbolic Computation 104 (May 2021): 855–73. http://dx.doi.org/10.1016/j.jsc.2020.10.005.

3

Gangbo, Wilfrid, and Robert J. McCann. "Shape recognition via Wasserstein distance." Quarterly of Applied Mathematics 58, no. 4 (2000): 705–37. http://dx.doi.org/10.1090/qam/1788425.

4

Decreusefond, L. "Wasserstein Distance on Configuration Space." Potential Analysis 28, no. 3 (2008): 283–300. http://dx.doi.org/10.1007/s11118-008-9077-5.

5

Mathey-Prevot, Maxime, and Alain Valette. "Wasserstein distance and metric trees." L’Enseignement Mathématique 69, no. 3 (2023): 315–33. http://dx.doi.org/10.4171/lem/1052.

6

Harmati, István Á., Lucian Coroianu, and Robert Fullér. "Wasserstein distance for OWA operators." Fuzzy Sets and Systems 484 (May 2024): 108931. http://dx.doi.org/10.1016/j.fss.2024.108931.

7

Peyre, Rémi. "Comparison between W2 distance and Ḣ−1 norm, and Localization of Wasserstein distance." ESAIM: Control, Optimisation and Calculus of Variations 24, no. 4 (2018): 1489–501. http://dx.doi.org/10.1051/cocv/2017050.

Abstract:
It is well known that the quadratic Wasserstein distance W2(⋅, ⋅) is formally equivalent, for infinitesimally small perturbations, to some weighted Ḣ−1 homogeneous Sobolev norm. In this article I show that this equivalence can be integrated to get non-asymptotic comparison results between these distances. Then I give an application of these results to prove that the W2 distance exhibits some localization phenomenon: if μ and ν are measures on ℝn and ϕ: ℝn → ℝ+ is some bump function with compact support, then under mild hypotheses, one can bound above the Wasserstein distance between ϕ ⋅ μ and ϕ ⋅ ν […]
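The formal equivalence this abstract refers to is the standard linearization of W2 (Otto's formal Riemannian calculus). As a heuristic sketch, not the precise non-asymptotic bounds proved in the paper, for a smooth zero-mass perturbation δμ of μ:

```latex
% Linearization of the quadratic Wasserstein distance around \mu,
% perturbing by \varepsilon\,\delta\mu with \int \delta\mu = 0:
W_2\!\left(\mu,\ \mu + \varepsilon\,\delta\mu\right)
  = \varepsilon\,\lVert \delta\mu \rVert_{\dot H^{-1}(\mu)} + o(\varepsilon),
\qquad
\lVert \delta\mu \rVert_{\dot H^{-1}(\mu)}^{2}
  = \int \lvert \nabla\psi \rvert^{2}\,\mathrm{d}\mu,
\quad \text{where } -\nabla\!\cdot\!\left(\mu\,\nabla\psi\right) = \delta\mu .
```

The weight μ in the norm is what makes the equivalence only local; Peyre's contribution is to integrate it into genuine two-sided comparisons.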
8

Xu, Minkai. "Towards Generalized Implementation of Wasserstein Distance in GANs." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10514–22. http://dx.doi.org/10.1609/aaai.v35i12.17258.

Abstract:
Wasserstein GANs (WGANs), built upon the Kantorovich-Rubinstein (KR) duality of the Wasserstein distance, are among the most theoretically sound GAN models. However, in practice they do not always outperform other variants of GANs. This is mostly due to imperfect enforcement of the Lipschitz condition required by the KR duality. Extensive work has been done in the community on different implementations of the Lipschitz constraint, which, however, remains hard to satisfy exactly in practice. In this paper, we argue that the strong Lipschitz constraint might be unnecessary […]
9

Dou, Jason Xiaotian, Lei Luo, and Raymond Mingrui Yang. "An Optimal Transport Approach to Deep Metric Learning (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 12935–36. http://dx.doi.org/10.1609/aaai.v36i11.21604.

Abstract:
Capturing visual similarity among images is the core of many computer vision and pattern recognition tasks. This problem can be formulated in a paradigm called metric learning. Most research in the area has mainly focused on improving the loss functions and similarity measures. However, by ignoring geometric structure, existing methods often lead to sub-optimal results. Thus, several recent methods took advantage of the Wasserstein distance between batches of samples to characterize the spatial geometry. Although these approaches can achieve enhanced performance, the […]
10

Tong, Qijun, and Kei Kobayashi. "Entropy-Regularized Optimal Transport on Multivariate Normal and q-normal Distributions." Entropy 23, no. 3 (2021): 302. http://dx.doi.org/10.3390/e23030302.

Abstract:
The distance and divergence of probability measures play a central role in statistics, machine learning, and many other related fields. The Wasserstein distance has received much attention in recent years because of its distinctions from other distances and divergences. Although computing the Wasserstein distance is costly, entropy-regularized optimal transport was proposed as a computationally efficient approximation. The purpose of this study is to understand the theoretical aspects of entropy-regularized optimal transport. In this paper, we focus on entropy-regularized […]
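The entropy-regularized optimal transport this abstract mentions is typically computed with Sinkhorn-Knopp matrix scaling. A minimal dense sketch under my own naming and defaults (not taken from the paper), for discrete marginals a, b and a cost matrix:

```python
import math

def sinkhorn(a, b, cost, eps=0.1, iters=500):
    """Entropy-regularized OT cost: rescale K = exp(-cost/eps) by diagonal
    vectors u, v (Sinkhorn-Knopp) until the plan u_i K_ij v_j has marginals
    a and b, then return the transport cost of that plan."""
    n, m = len(a), len(b)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    plan = [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
    return sum(plan[i][j] * cost[i][j] for i in range(n) for j in range(m))
```

As eps shrinks the value approaches the unregularized Wasserstein cost, at the price of more iterations and potential numerical underflow in K.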
11

Lu, Cheng, Jiusun Zeng, Shihua Luo, and Jinhui Cai. "Detection and Isolation of Incipiently Developing Fault Using Wasserstein Distance." Processes 10, no. 6 (2022): 1081. http://dx.doi.org/10.3390/pr10061081.

Abstract:
This paper develops an incipient fault detection and isolation method using the Wasserstein distance, which measures the difference between the probability distributions of normal and faulty data sets from the perspective of optimal transport. For fault detection, a moving-window-based approach is introduced, resulting in two monitoring statistics constructed from the Wasserstein distance. From analysis of the limiting distribution in the multivariate Gaussian case, it is proved that the difference measured by the Wasserstein distance is more sensitive than conventional quadratic statistics […]
12

Wang, Zifan, Changgen Peng, Xing He, and Weijie Tan. "Wasserstein Distance-Based Deep Leakage from Gradients." Entropy 25, no. 5 (2023): 810. http://dx.doi.org/10.3390/e25050810.

Abstract:
Federated learning protects the private information in a data set by sharing only the average gradient. However, the "Deep Leakage from Gradient" (DLG) algorithm, a gradient-based feature reconstruction attack, can recover private training data from the gradients shared in federated learning, resulting in private information leakage. The algorithm, however, suffers from slow model convergence and poor accuracy of the inversely generated images. To address these issues, a Wasserstein distance-based DLG method is proposed, named WDLG. The WDLG method uses the Wasserstein distance as the training loss function […]
13

Tóth, Géza, and József Pitrik. "Quantum Wasserstein distance based on an optimization over separable states." Quantum 7 (October 16, 2023): 1143. http://dx.doi.org/10.22331/q-2023-10-16-1143.

Abstract:
We define the quantum Wasserstein distance such that the optimization of the coupling is carried out over bipartite separable states rather than bipartite quantum states in general, and examine its properties. Surprisingly, we find that the self-distance is related to the quantum Fisher information. We present a transport map corresponding to an optimal bipartite separable state. We discuss how the quantum Wasserstein distance introduced is connected to criteria detecting quantum entanglement. We define variance-like quantities that can be obtained from the quantum Wasserstein distance by replacing […]
14

Bernton, Espen, Pierre E. Jacob, Mathieu Gerber, and Christian P. Robert. "On parameter estimation with the Wasserstein distance." Information and Inference: A Journal of the IMA 8, no. 4 (2019): 657–76. http://dx.doi.org/10.1093/imaiai/iaz003.

Abstract:
Statistical inference can be performed by minimizing, over the parameter space, the Wasserstein distance between model distributions and the empirical distribution of the data. We study asymptotic properties of such minimum Wasserstein distance estimators, complementing results derived by Bassetti, Bodini and Regazzini in 2006. In particular, our results cover the misspecified setting, in which the data-generating process is not assumed to be part of the family of distributions described by the model. Our results are motivated by recent applications of minimum Wasserstein estimators […]
15

Tabak, Gil, Minjie Fan, Samuel Yang, Stephan Hoyer, and Geoffrey Davis. "Correcting nuisance variation using Wasserstein distance." PeerJ 8 (February 28, 2020): e8594. http://dx.doi.org/10.7717/peerj.8594.

Abstract:
Profiling cellular phenotypes from microscopic imaging can provide meaningful biological information resulting from various factors affecting the cells. One motivating application is drug development: morphological cell features can be captured from images, from which similarities between different drug compounds applied at different doses can be quantified. The general approach is to find a function mapping the images to an embedding space of manageable dimensionality whose geometry captures relevant features of the input images. An important known issue for such methods is separating relevant […]
16

Xu, Long, Ying Wei, Chenhe Dong, Chuaqiao Xu, and Zhaofu Diao. "Wasserstein Distance-Based Auto-Encoder Tracking." Neural Processing Letters 53, no. 3 (2021): 2305–29. http://dx.doi.org/10.1007/s11063-021-10507-9.

17

Shi, Jie, and Yalin Wang. "Hyperbolic Wasserstein Distance for Shape Indexing." IEEE Transactions on Pattern Analysis and Machine Intelligence 42, no. 6 (2020): 1362–76. http://dx.doi.org/10.1109/tpami.2019.2898400.

18

Assa, Akbar, and Konstantinos N. Plataniotis. "Wasserstein-Distance-Based Gaussian Mixture Reduction." IEEE Signal Processing Letters 25, no. 10 (2018): 1465–69. http://dx.doi.org/10.1109/lsp.2018.2865829.

19

Belili, Nacereddine, and Henri Heinich. "Approximation pour la distance de Wasserstein." Comptes Rendus Mathematique 335, no. 6 (2002): 537–40. http://dx.doi.org/10.1016/s1631-073x(02)02522-0.

20

Li, Long, Arthur Vidard, François-Xavier Le Dimet, and Jianwei Ma. "Topological data assimilation using Wasserstein distance." Inverse Problems 35, no. 1 (2018): 015006. http://dx.doi.org/10.1088/1361-6420/aae993.

21

Rüschendorf, Ludger. "The Wasserstein distance and approximation theorems." Probability Theory and Related Fields 70, no. 1 (1985): 117–29. http://dx.doi.org/10.1007/bf00532240.

22

Shi, Yong, Lei Zheng, Pei Quan, and Lingfeng Niu. "Wasserstein distance regularized graph neural networks." Information Sciences 670 (June 2024): 120608. http://dx.doi.org/10.1016/j.ins.2024.120608.

23

Kindelan Nuñez, Rolando, Mircea Petrache, Mauricio Cerda, and Nancy Hitschfeld. "A Class of Topological Pseudodistances for Fast Comparison of Persistence Diagrams." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (2024): 13202–10. http://dx.doi.org/10.1609/aaai.v38i12.29220.

Abstract:
Persistence diagrams (PDs) play a central role in topological data analysis and are used in an ever-increasing variety of applications. The comparison of PD data requires computing distances among large sets of PDs, with metrics that are accurate, theoretically sound, and fast to compute. Especially for denser multi-dimensional PDs, such comparison metrics are lacking. On the one hand, Wasserstein-type distances have high accuracy and theoretical guarantees, but they incur high computational cost. On the other hand, distances between vectorizations such as Persistence Statistics (PSs) have […]
24

Cumings-Menon, Ryan, and Minchul Shin. "Probability Forecast Combination via Entropy Regularized Wasserstein Distance." Entropy 22, no. 9 (2020): 929. http://dx.doi.org/10.3390/e22090929.

Abstract:
We propose probability and density forecast combination methods that are defined using the entropy-regularized Wasserstein distance. First, we provide a theoretical characterization of the combined density forecast based on the regularized Wasserstein distance under a Gaussian assumption. More specifically, we show that the regularized Wasserstein barycenter between multivariate Gaussian input densities is multivariate Gaussian, and provide a simple way to compute its mean and variance–covariance matrix. Second, we show how this type of regularization can improve the predictive power of the resulting […]
25

Li, Shengxi, Zeyang Yu, Min Xiang, and Danilo Mandic. "Solving General Elliptical Mixture Models through an Approximate Wasserstein Manifold." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 4658–66. http://dx.doi.org/10.1609/aaai.v34i04.5897.

Abstract:
We address the estimation problem for general finite mixture models, with a particular focus on elliptical mixture models (EMMs). Compared to the widely adopted Kullback–Leibler divergence, we show that the Wasserstein distance provides a more desirable optimisation space. We thus provide a stable solution to the EMMs that is both robust to initialisations and reaches a superior optimum by adaptively optimising along a manifold of an approximate Wasserstein distance. To this end, we first provide a unifying account of computable and identifiable EMMs, which serves as a basis to rigorously […]
26

Ponti, Andrea, Ilaria Giordani, Matteo Mistri, Antonio Candelieri, and Francesco Archetti. "The “Unreasonable” Effectiveness of the Wasserstein Distance in Analyzing Key Performance Indicators of a Network of Stores." Big Data and Cognitive Computing 6, no. 4 (2022): 138. http://dx.doi.org/10.3390/bdcc6040138.

Abstract:
Large retail companies routinely gather huge amounts of customer data, which are to be analyzed at a low granularity. To enable this analysis, several Key Performance Indicators (KPIs), acquired for each customer through different channels, are associated with the main drivers of the customer experience. Analyzing samples of customer behavior only through parameters such as the average and variance does not cope with the growing heterogeneity of customers. In this paper, we propose a different approach in which the samples from customer surveys are represented as discrete probability distributions […]
27

Kelbert, Mark. "Survey of Distances between the Most Popular Distributions." Analytics 2, no. 1 (2023): 225–45. http://dx.doi.org/10.3390/analytics2010012.

Abstract:
We present a number of upper and lower bounds for the total variation distances between the most popular probability distributions. In particular, estimates of the total variation distances are given for multivariate Gaussian distributions, Poisson distributions, binomial distributions, between a binomial and a Poisson distribution, and for negative binomial distributions. Next, estimates of the Lévy–Prohorov distance in terms of Wasserstein metrics are discussed, and the Fréchet, Wasserstein and Hellinger distances for multivariate Gaussian distributions are evaluated […]
28

De Palma, Giacomo, Milad Marvian, Dario Trevisan, and Seth Lloyd. "The Quantum Wasserstein Distance of Order 1." IEEE Transactions on Information Theory 67, no. 10 (2021): 6627–43. http://dx.doi.org/10.1109/tit.2021.3076442.

29

Frohmader, Andrew, and Hans Volkmer. "1-Wasserstein distance on the standard simplex." Algebraic Statistics 12, no. 1 (2021): 43–56. http://dx.doi.org/10.2140/astat.2021.12.43.

30

Li, Jie, Dan Xu, and Shaowen Yao. "Sliced Wasserstein Distance for Neural Style Transfer." Computers & Graphics 102 (February 2022): 89–98. http://dx.doi.org/10.1016/j.cag.2021.12.004.

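The sliced Wasserstein distance in the entry above sidesteps high-dimensional optimal transport by averaging closed-form 1-D Wasserstein distances over random or evenly spaced projection directions. A minimal 2-D sketch under my own naming (not the paper's implementation):

```python
import math

def sliced_wasserstein(xs, ys, n_proj=64):
    """Approximate sliced W1 between two equal-size 2-D point clouds by
    averaging 1-D Wasserstein distances of projections onto n_proj evenly
    spaced directions; each 1-D distance is a sorted-sample matching."""
    assert len(xs) == len(ys), "point clouds must have equal size"
    total = 0.0
    for k in range(n_proj):
        theta = math.pi * k / n_proj
        dx, dy = math.cos(theta), math.sin(theta)
        px = sorted(x * dx + y * dy for x, y in xs)
        py = sorted(x * dx + y * dy for x, y in ys)
        total += sum(abs(a - b) for a, b in zip(px, py)) / len(xs)
    return total / n_proj
```

For a unit horizontal shift of a point cloud, the result approaches the average of |cos θ| over directions, i.e. 2/π, illustrating that slicing contracts distances relative to the full W1.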
31

Wu, Wei, Guangmin Hu, and Fucai Yu. "Graph Classification Method Based on Wasserstein Distance." Journal of Physics: Conference Series 1952, no. 2 (2021): 022018. http://dx.doi.org/10.1088/1742-6596/1952/2/022018.

32

Mémoli, Facundo. "The Gromov–Wasserstein Distance: A Brief Overview." Axioms 3, no. 3 (2014): 335–41. http://dx.doi.org/10.3390/axioms3030335.

33

Carlsson, John Gunnar, Mehdi Behroozi, and Kresimir Mihic. "Wasserstein Distance and the Distributionally Robust TSP." Operations Research 66, no. 6 (2018): 1603–24. http://dx.doi.org/10.1287/opre.2018.1746.

34

Bernton, Espen, Pierre E. Jacob, Mathieu Gerber, and Christian P. Robert. "Approximate Bayesian computation with the Wasserstein distance." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 81, no. 2 (2019): 235–69. http://dx.doi.org/10.1111/rssb.12312.

35

Abdellaoui, Taoufiq. "Approximation de la L1-distance de Wasserstein." Comptes Rendus de l'Académie des Sciences - Series I - Mathematics 328, no. 12 (1999): 1203–6. http://dx.doi.org/10.1016/s0764-4442(99)80440-6.

36

Sun, Fengdong, and Wenhui Li. "Saliency detection based on aggregated Wasserstein distance." Journal of Electronic Imaging 27, no. 04 (2018): 1. http://dx.doi.org/10.1117/1.jei.27.4.043014.

37

Nichols, Jonathan M., Meredith N. Hutchinson, Nicole Menkart, Geoff A. Cranch, and Gustavo Kunde Rohde. "Time Delay Estimation Via Wasserstein Distance Minimization." IEEE Signal Processing Letters 26, no. 6 (2019): 908–12. http://dx.doi.org/10.1109/lsp.2019.2895457.

38

Ma, Ming, Na Lei, Kehua Su, et al. "Surface-based shape classification using Wasserstein distance." Geometry, Imaging and Computing 2, no. 4 (2015): 237–55. http://dx.doi.org/10.4310/gic.2015.v2.n4.a1.

39

Wang, Kedi, Ping Yi, Futai Zou, and Yue Wu. "Generating Adversarial Samples With Constrained Wasserstein Distance." IEEE Access 7 (2019): 136812–21. http://dx.doi.org/10.1109/access.2019.2942607.

40

Verdinelli, Isabella, and Larry Wasserman. "Hybrid Wasserstein distance and fast distribution clustering." Electronic Journal of Statistics 13, no. 2 (2019): 5088–119. http://dx.doi.org/10.1214/19-ejs1639.

41

Piccoli, Benedetto, and Francesco Rossi. "On Properties of the Generalized Wasserstein Distance." Archive for Rational Mechanics and Analysis 222, no. 3 (2016): 1339–65. http://dx.doi.org/10.1007/s00205-016-1026-7.

42

Robin, Yoann, Pascal Yiou, and Philippe Naveau. "Detecting changes in forced climate attractors with Wasserstein distance." Nonlinear Processes in Geophysics 24, no. 3 (2017): 393–405. http://dx.doi.org/10.5194/npg-24-393-2017.

Abstract:
The climate system can be described by a dynamical system and its associated attractor. The dynamics of this attractor depend on the external forcings that influence the climate. Such forcings can affect the mean values or variances, but regions of the attractor that are seldom visited can also be affected. It is an important challenge to measure how the climate attractor responds to different forcings. Currently, the Euclidean distance or similar measures like the Mahalanobis distance have been favored to measure discrepancies between two climatic situations. Those distances do not […]
43

Zhao, Chun Jiang. "A Modified Method to Measure Similarity of Generalized Fuzzy Numbers." Advanced Materials Research 159 (December 2010): 393–98. http://dx.doi.org/10.4028/www.scientific.net/amr.159.393.

Abstract:
A modified method to measure the similarity between generalized fuzzy numbers, based on the Wasserstein distance, is proposed. The method more fully considers the differences between generalized fuzzy numbers, especially differences in shape, via the Wasserstein distance. Comparisons on eighteen sets of generalized fuzzy numbers show that the method overcomes the drawbacks of the existing methods and calculates the similarity measure well.
44

Tao, Yang, Chunyan Li, Zhifang Liang, Haocheng Yang, and Juan Xu. "Wasserstein Distance Learns Domain Invariant Feature Representations for Drift Compensation of E-Nose." Sensors 19, no. 17 (2019): 3703. http://dx.doi.org/10.3390/s19173703.

Abstract:
The electronic nose (E-nose), an instrument that combines gas sensors with a corresponding pattern recognition algorithm, is used to detect the type and concentration of gases. However, sensor drift occurs in realistic application scenarios of the E-nose, which shifts the data distribution in feature space and causes a decrease in prediction accuracy. Therefore, studies on drift compensation algorithms are receiving increasing attention in the E-nose field. In this paper, a novel method, namely Wasserstein Distance Learned Feature Representations (WDLFR), is proposed […]
45

Xu, Bi-Cun, Kai Ming Ting, and Yuan Jiang. "Isolation Graph Kernel." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10487–95. http://dx.doi.org/10.1609/aaai.v35i12.17255.

Abstract:
A recent Wasserstein Weisfeiler-Lehman (WWL) graph kernel has a distinctive feature: representing the distribution of Weisfeiler-Lehman (WL)-embedded node vectors of a graph in a histogram that enables a dissimilarity measurement of two graphs using the Wasserstein distance. It has been shown to produce better classification accuracy than other graph kernels which do not employ such a distribution and the Wasserstein distance. This paper introduces an alternative called the Isolation Graph Kernel (IGK) that measures the similarity between two attributed graphs. IGK is unique in two aspects among existing graph […]
46

He, Shuncheng, Yuhang Jiang, Hongchang Zhang, Jianzhun Shao, and Xiangyang Ji. "Wasserstein Unsupervised Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 6884–92. http://dx.doi.org/10.1609/aaai.v36i6.20645.

Abstract:
Unsupervised reinforcement learning aims to train agents to learn a handful of policies or skills in environments without external reward. These pre-trained policies can accelerate learning when endowed with external reward, and can also be used as primitive options in hierarchical reinforcement learning. Conventional approaches to unsupervised skill discovery feed a latent variable to the agent and shape its behavior by mutual information (MI) maximization. However, the policies learned by MI-based methods cannot sufficiently explore the state space, even though they can be […]
47

Zhang, Zhonghui, Huarui Jing, and Chihwa Kao. "High-Dimensional Distributionally Robust Mean-Variance Efficient Portfolio Selection." Mathematics 11, no. 5 (2023): 1272. http://dx.doi.org/10.3390/math11051272.

Abstract:
This paper introduces a novel distributionally robust mean-variance portfolio estimator based on the projection robust Wasserstein (PRW) distance. This approach addresses the increasing conservatism of portfolio allocation strategies due to high-dimensional data. Our simulation results show the robustness of the PRW-based estimator in the presence of noisy data and its ability to achieve a higher Sharpe ratio than regular Wasserstein distances when dealing with a large number of assets. Our empirical study also demonstrates that the proposed portfolio estimator outperforms classic […]
48

Xu, Hongteng, Dixin Luo, Lawrence Carin, and Hongyuan Zha. "Learning Graphons via Structured Gromov-Wasserstein Barycenters." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10505–13. http://dx.doi.org/10.1609/aaai.v35i12.17257.

Abstract:
We propose a novel and principled method to learn a nonparametric graph model called a graphon, which is defined in an infinite-dimensional space and represents arbitrary-size graphs. Based on the weak regularity lemma from the theory of graphons, we leverage a step function to approximate a graphon. We show that the cut distance of graphons can be relaxed to the Gromov-Wasserstein distance of their step functions. Accordingly, given a set of graphs generated by an underlying graphon, we learn the corresponding step function as the Gromov-Wasserstein barycenter of the given graphs. Furthermore, […]
49

Xia, Aihua. "On the rate of Poisson process approximation to a Bernoulli process." Journal of Applied Probability 34, no. 4 (1997): 898–907. http://dx.doi.org/10.2307/3215005.

Abstract:
This note gives the rate for a Wasserstein distance between the distribution of a Bernoulli process in discrete time and that of a Poisson process, using Stein's method and Palm theory. The result here highlights the possibility that the logarithmic factor involved in the upper bounds established by Barbour and Brown (1992) and Barbour et al. (1995) may be superfluous in the true Wasserstein distance between the distributions of a point process and a Poisson process.
50

Xia, Aihua. "On the rate of Poisson process approximation to a Bernoulli process." Journal of Applied Probability 34, no. 04 (1997): 898–907. http://dx.doi.org/10.1017/s0021900200101603.
