Journal articles on the topic 'Variational tasks'

Consult the top 50 journal articles for your research on the topic 'Variational tasks.'


1

Cervantes-Barraza, Jonathan Alberto, Shelsyn Johana Moreno Calvo, and Kattia Lucia de Arce Polo. "Mathematical Modelling and Argumentation: Designing a Task to Strengthen Variational Thinking by Integrating Data Science." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 16, no. 1 (2025): 25–43. https://doi.org/10.61841/turcomat.v16i1.14975.

Abstract:
This study explores the significance of task design in fostering variational thinking through the principles of PyLVar and task design in mathematics education, particularly in the incorporation of technological and scientific tools to enhance modelling and argumentation skills among secondary school students. The aim was to develop and implement interactive mathematical tasks based on the PyLVar principle and task design, incorporating data science tools and STEAM. A qualitative-descriptive approach was employed to design, implement, and analyse interactive tasks centred on the principles of variational
2

Xiang, Guofei, Songyi Dian, Shaofeng Du, and Zhonghui Lv. "Variational Information Bottleneck Regularized Deep Reinforcement Learning for Efficient Robotic Skill Adaptation." Sensors 23, no. 2 (2023): 762. http://dx.doi.org/10.3390/s23020762.

Abstract:
Deep Reinforcement Learning (DRL) algorithms have been widely studied for sequential decision-making problems, and substantial progress has been achieved, especially in autonomous robotic skill learning. However, it is always difficult to deploy DRL methods in practical safety-critical robot systems, since the training and deployment environment gap always exists, and this issue would become increasingly crucial due to the ever-changing environment. Aiming at efficiently robotic skill transferring in a dynamic environment, we present a meta-reinforcement learning algorithm based on a variation
3

Perez, Iker, and Giuliano Casale. "Variational inference for Markovian queueing networks." Advances in Applied Probability 53, no. 3 (2021): 687–715. http://dx.doi.org/10.1017/apr.2020.72.

Abstract:
Queueing networks are stochastic systems formed by interconnected resources routing and serving jobs. They induce jump processes with distinctive properties, and find widespread use in inferential tasks. Here, service rates for jobs and potential bottlenecks in the routing mechanism must be estimated from a reduced set of observations. However, this calls for the derivation of complex conditional density representations, over both the stochastic network trajectories and the rates, which is considered an intractable problem. Numerical simulation procedures designed for this purpose do n
4

Gatopoulos, Ioannis, and Jakub M. Tomczak. "Self-Supervised Variational Auto-Encoders." Entropy 23, no. 6 (2021): 747. http://dx.doi.org/10.3390/e23060747.

Abstract:
Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework to achieve these goals. Here, we present a novel class of generative models, called self-supervised Variational Auto-Encoder (selfVAE), which utilizes deterministic and discrete transformations of data. This class of models allows both conditional and unconditional sampling while simplifying the objective function. First, we use a single self-supervised transformation as a latent variable, where the transformation is either downscalin
5

Nielsen, Frank. "On a Variational Definition for the Jensen-Shannon Symmetrization of Distances Based on the Information Radius." Entropy 23, no. 4 (2021): 464. http://dx.doi.org/10.3390/e23040464.

Abstract:
We generalize the Jensen-Shannon divergence and the Jensen-Shannon diversity index by considering a variational definition with respect to a generic mean, thereby extending the notion of Sibson’s information radius. The variational definition applies to any arbitrary distance and yields a new way to define a Jensen-Shannon symmetrization of distances. When the variational optimization is further constrained to belong to prescribed families of probability measures, we get relative Jensen-Shannon divergences and their equivalent Jensen-Shannon symmetrizations of distances that generalize the con
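For orientation, the classical Jensen-Shannon divergence averages two KL terms around the arithmetic-mean mixture,

$$\mathrm{JSD}(p,q)=\tfrac{1}{2}\,\mathrm{KL}(p\,\|\,m)+\tfrac{1}{2}\,\mathrm{KL}(q\,\|\,m),\qquad m=\tfrac{1}{2}(p+q),$$

while the variational definition described in the abstract, in the spirit of Sibson's information radius, optimizes over a centroid with respect to a generic distance $D$ (our notation, not necessarily the paper's):

$$\mathrm{JS}^{D}(p,q)=\min_{c}\;\tfrac{1}{2}\bigl(D(p,c)+D(q,c)\bigr).$$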
6

Sheng, Xin, Linli Xu, Junliang Guo, Jingchang Liu, Ruoyu Zhao, and Yinlong Xu. "IntroVNMT: An Introspective Model for Variational Neural Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 8830–37. http://dx.doi.org/10.1609/aaai.v34i05.6411.

Abstract:
We propose a novel introspective model for variational neural machine translation (IntroVNMT) in this paper, inspired by the recent successful application of introspective variational autoencoder (IntroVAE) in high quality image synthesis. Different from the vanilla variational NMT model, IntroVNMT is capable of improving itself introspectively by evaluating the quality of the generated target sentences according to the high-level latent variables of the real and generated target sentences. As a consequence of introspective training, the proposed model is able to discriminate between the gener
7

Wang, Wenlin, Hongteng Xu, Zhe Gan, et al. "Graph-Driven Generative Models for Heterogeneous Multi-Task Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (2020): 979–88. http://dx.doi.org/10.1609/aaai.v34i01.5446.

Abstract:
We propose a novel graph-driven generative model, that unifies multiple heterogeneous learning tasks into the same framework. The proposed model is based on the fact that heterogeneous learning tasks, which correspond to different generative processes, often rely on data with a shared graph structure. Accordingly, our model combines a graph convolutional network (GCN) with multiple variational autoencoders, thus embedding the nodes of the graph (i.e., samples for the tasks) in a uniform manner, while specializing their organization and usage to different tasks. With a focus on healthcare appli
8

Zhu, Hui, Shi Shu, and Jianping Zhang. "FAS-UNet: A Novel FAS-Driven UNet to Learn Variational Image Segmentation." Mathematics 10, no. 21 (2022): 4055. http://dx.doi.org/10.3390/math10214055.

Abstract:
Solving variational image segmentation problems with hidden physics is often expensive and requires different algorithms and manually tuned model parameters. The deep learning methods based on the UNet structure have obtained outstanding performances in many different medical image segmentation tasks, but designing such networks requires many parameters and training data, which are not always available for practical problems. In this paper, inspired by the traditional multiphase convexity Mumford–Shah variational model and full approximation scheme (FAS) solving the nonlinear systems, we propo
9

Dai, Weihang, Xiaomeng Li, and Kwang-Ting Cheng. "Semi-Supervised Deep Regression with Uncertainty Consistency and Variational Model Ensembling via Bayesian Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (2023): 7304–13. http://dx.doi.org/10.1609/aaai.v37i6.25890.

Abstract:
Deep regression is an important problem with numerous applications. These range from computer vision tasks such as age estimation from photographs, to medical tasks such as ejection fraction estimation from echocardiograms for disease tracking. Semi-supervised approaches for deep regression are notably under-explored compared to classification and segmentation tasks, however. Unlike classification tasks, which rely on thresholding functions for generating class pseudo-labels, regression tasks use real number target predictions directly as pseudo-labels, making them more sensitive to prediction
10

Krishnan, Ranganath, Mahesh Subedar, and Omesh Tickoo. "Specifying Weight Priors in Bayesian Deep Neural Networks with Empirical Bayes." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 4477–84. http://dx.doi.org/10.1609/aaai.v34i04.5875.

Abstract:
Stochastic variational inference for Bayesian deep neural network (DNN) requires specifying priors and approximate posterior distributions over neural network weights. Specifying meaningful weight priors is a challenging problem, particularly for scaling variational inference to deeper architectures involving high dimensional weight space. We propose MOdel Priors with Empirical Bayes using DNN (MOPED) method to choose informed weight priors in Bayesian neural networks. We formulate a two-stage hierarchical modeling, first find the maximum likelihood estimates of weights with DNN, and then set
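As a rough illustration of the two-stage recipe in this abstract, the sketch below initializes Gaussian weight priors from maximum likelihood estimates; the proportional-scale rule and the delta value are our assumptions for illustration, not necessarily MOPED's exact choices.

```python
import numpy as np

def empirical_bayes_prior(w_mle: np.ndarray, delta: float = 0.1):
    """Stage 1: w_mle are the weights of a conventionally (MLE) trained DNN.
    Stage 2: use them as the mean of a Gaussian weight prior, with a
    standard deviation proportional to each weight's magnitude."""
    mu = w_mle
    sigma = delta * np.abs(w_mle)  # assumed scale rule for illustration
    return mu, sigma
```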
11

Saeedi, Ardavan, Yuria Utsumi, Li Sun, Kayhan Batmanghelich, and Li-wei Lehman. "Knowledge Distillation via Constrained Variational Inference." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (2022): 8132–40. http://dx.doi.org/10.1609/aaai.v36i7.20786.

Abstract:
Knowledge distillation has been used to capture the knowledge of a teacher model and distill it into a student model with some desirable characteristics such as being smaller, more efficient, or more generalizable. In this paper, we propose a framework for distilling the knowledge of a powerful discriminative model such as a neural network into commonly used graphical models known to be more interpretable (e.g., topic models, autoregressive Hidden Markov Models). Posterior of latent variables in these graphical models (e.g., topic proportions in topic models) is often used as feature represent
12

Wang, Muyao, Wenchao Chen, and Bo Chen. "Considering Nonstationary within Multivariate Time Series with Variational Hierarchical Transformer for Forecasting." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (2024): 15563–70. http://dx.doi.org/10.1609/aaai.v38i14.29483.

Abstract:
The forecasting of Multivariate Time Series (MTS) has long been an important but challenging task. Due to the non-stationarity across long-distance time steps, previous studies primarily adopt stationarization methods to attenuate the non-stationarity of the original series for better predictability. However, existing methods always adopt the stationarized series, which ignores the inherent non-stationarity, and have difficulty modeling MTS with complex distributions due to the lack of stochasticity. To tackle these problems, we first develop a powerful hierarchical probabilistic gener
13

Choong, Jun Jin, Xin Liu, and Tsuyoshi Murata. "Optimizing Variational Graph Autoencoder for Community Detection with Dual Optimization." Entropy 22, no. 2 (2020): 197. http://dx.doi.org/10.3390/e22020197.

Abstract:
Variational Graph Autoencoder (VGAE) has recently gained traction for learning representations on graphs. Its inception has allowed models to achieve state-of-the-art performance for challenging tasks such as link prediction, rating prediction, and node clustering. However, a fundamental flaw exists in Variational Autoencoder (VAE)-based approaches. Specifically, merely minimizing the loss of VAE increases the deviation from its primary objective. Focusing on Variational Graph Autoencoder for Community Detection (VGAECD) we found that optimizing the loss using the stochastic gradient descent o
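For context, the reconstruction step of a VGAE is an inner-product decoder over latent node embeddings; below is a minimal PyTorch sketch of that standard component (our illustration, not the VGAECD code).

```python
import torch

def vgae_decode(z: torch.Tensor) -> torch.Tensor:
    """Inner-product decoder: the reconstructed probability of an edge
    between nodes i and j is sigmoid(z_i . z_j)."""
    return torch.sigmoid(z @ z.t())

# z has shape (num_nodes, latent_dim); the result is a dense
# (num_nodes, num_nodes) matrix of edge probabilities.
```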
14

Havasi, Marton, Jasper Snoek, Dustin Tran, Jonathan Gordon, and José Miguel Hernández-Lobato. "Sampling the Variational Posterior with Local Refinement." Entropy 23, no. 11 (2021): 1475. http://dx.doi.org/10.3390/e23111475.

Abstract:
Variational inference is an optimization-based method for approximating the posterior distribution of the parameters in Bayesian probabilistic models. A key challenge of variational inference is to approximate the posterior with a distribution that is computationally tractable yet sufficiently expressive. We propose a novel method for generating samples from a highly flexible variational approximation. The method starts with a coarse initial approximation and generates samples by refining it in selected, local regions. This allows the samples to capture dependencies and multi-modality in the p
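The objective being optimized here is the standard evidence lower bound (ELBO): for a model $p(x,z)$ and a tractable family $q_\phi(z)$,

$$\log p(x)\;\ge\;\mathbb{E}_{q_\phi(z)}\bigl[\log p(x,z)-\log q_\phi(z)\bigr]=\mathcal{L}(\phi),$$

and the gap equals $\mathrm{KL}(q_\phi(z)\,\|\,p(z\mid x))$, which is why a more expressive $q_\phi$ yields a tighter approximation.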
15

Shutin, Dmitriy, Christoph Zechner, Sanjeev R. Kulkarni, and H. Vincent Poor. "Regularized Variational Bayesian Learning of Echo State Networks with Delay&Sum Readout." Neural Computation 24, no. 4 (2012): 967–95. http://dx.doi.org/10.1162/neco_a_00253.

Abstract:
In this work, a variational Bayesian framework for efficient training of echo state networks (ESNs) with automatic regularization and delay&sum (D&S) readout adaptation is proposed. The algorithm uses a classical batch learning of ESNs. By treating the network echo states as fixed basis functions parameterized with delay parameters, we propose a variational Bayesian ESN training scheme. The variational approach allows for a seamless combination of sparse Bayesian learning ideas and a variational Bayesian space-alternating generalized expectation-maximization (VB-SAGE) algorithm for est
16

Medyakov, Daniil Olegovich, Gleb Lvovich Molodtsov, and Aleksandr Nikolaevich Beznosikov. "Effective Method with Compression for Distributed and Federated Cocoercive Variational Inequalities." Proceedings of the Institute for System Programming of the RAS 36, no. 5 (2024): 93–108. https://doi.org/10.15514/ispras-2024-36(5)-7.

Abstract:
Variational inequalities as an effective tool for solving applied problems, including machine learning tasks, have been attracting more and more attention from researchers in recent years. The use of variational inequalities covers a wide range of areas – from reinforcement learning and generative models to traditional applications in economics and game theory. At the same time, it is impossible to imagine the modern world of machine learning without distributed optimization approaches that can significantly speed up the training process on large amounts of data. However, faced with the high c
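For readers new to the setting: a variational inequality asks for a point $x^* \in \mathcal{X}$ such that

$$\langle F(x^*),\,x-x^*\rangle\;\ge\;0\qquad\text{for all }x\in\mathcal{X},$$

and the cocoercivity assumption in the title means $\langle F(x)-F(y),\,x-y\rangle \ge \tfrac{1}{L}\,\|F(x)-F(y)\|^2$ for some $L>0$ (standard definitions, stated here for context rather than taken from the paper).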
17

Lockwood, Owen, and Mei Si. "Reinforcement Learning with Quantum Variational Circuit." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 16, no. 1 (2020): 245–51. http://dx.doi.org/10.1609/aiide.v16i1.7437.

Abstract:
The development of quantum computational techniques has advanced greatly in recent years, parallel to the advancements in techniques for deep reinforcement learning. This work explores the potential for quantum computing to facilitate reinforcement learning problems. Quantum computing approaches offer important potential improvements in time and space complexity over traditional algorithms because of its ability to exploit the quantum phenomena of superposition and entanglement. Specifically, we investigate the use of quantum variational circuits, a form of quantum machine learning. We present
18

Becker, McCoy R., Alexander K. Lew, Xiaoyan Wang, et al. "Probabilistic Programming with Programmable Variational Inference." Proceedings of the ACM on Programming Languages 8, PLDI (2024): 2123–47. http://dx.doi.org/10.1145/3656463.

Abstract:
Compared to the wide array of advanced Monte Carlo methods supported by modern probabilistic programming languages (PPLs), PPL support for variational inference (VI) is less developed: users are typically limited to a predefined selection of variational objectives and gradient estimators, which are implemented monolithically (and without formal correctness arguments) in PPL backends. In this paper, we propose a more modular approach to supporting variational inference in PPLs, based on compositional program transformation. In our approach, variational objectives are expressed as programs, that
19

Liu, Fen, and Quan Qian. "Cost-Sensitive Variational Autoencoding Classifier for Imbalanced Data Classification." Algorithms 15, no. 5 (2022): 139. http://dx.doi.org/10.3390/a15050139.

Abstract:
Classification is among the core tasks in machine learning. Existing classification algorithms are typically based on the assumption of at least roughly balanced data classes. When performing tasks involving imbalanced data, such classifiers ignore the minority data in consideration of the overall accuracy. The performance of traditional classification algorithms based on the assumption of balanced data distribution is insufficient because the minority-class samples are often more important than others, such as positive samples, in disease diagnosis. In this study, we propose a cost-sensitive
20

Mezentsev, A., and Svetlana Sazonova. "The Formalization of Variational Problem Modeling in the Current Distribution in the Safe Operation of Hydraulic Systems." Modeling of Systems and Processes 8, no. 4 (2016): 46–49. http://dx.doi.org/10.12737/19522.

Abstract:
For the computational domain of the hydraulic system, a sequence for constructing mathematical models of flow distribution is considered. Solving the variational problem yields a system of equations describing the transient flow regime, supporting the protection of managed objects and the safe operation of the system.
21

Nuñez-Gutierrez, Karina, Camilo Andrés Rodríguez-Nieto, Lisseth Correa-Sandoval, and Vicenç Font Moll. "High school Colombian students’ variational thinking triggered by mathematical connections in a laboratory on linear functions." International Electronic Journal of Mathematics Education 20, no. 1 (2025): em0800. http://dx.doi.org/10.29333/iejme/15649.

Abstract:
The variational thinking of high school students based on mathematical connections was analyzed through a laboratory on linear functions. This qualitative research based on design was developed in three phases: diagnostic test, implementation of the mathematics laboratory and final test, with students from a public institution in Barranquilla, Colombia. The diagnostic test revealed difficulties in the concept of linear function and the modelling of situations. Therefore, a laboratory was implemented with eight activities focused on the concept of linear function, variation situations, generali
22

Choong, Jun Jin, Xin Liu, and Tsuyoshi Murata. "Variational Approach for Learning Community Structures." Complexity 2018 (December 13, 2018): 1–13. http://dx.doi.org/10.1155/2018/4867304.

Abstract:
Discovering and modeling community structure is a fundamentally challenging task. In domains such as biology, chemistry, and physics, researchers often rely on community detection algorithms to uncover community structures from complex systems, yet no unified definition of community structure exists. Furthermore, existing models tend to be oversimplified, leading to a neglect of richer information such as nodal features. Coupled with the surge of user-generated information on social networks, a demand for newer techniques beyond traditional approaches is inevitable. Deep learning techni
23

Wang, Fangyikang, Huminhao Zhu, Chao Zhang, Hanbin Zhao, and Hui Qian. "GAD-PVI: A General Accelerated Dynamic-Weight Particle-Based Variational Inference Framework." Entropy 26, no. 8 (2024): 679. http://dx.doi.org/10.3390/e26080679.

Abstract:
Particle-based Variational Inference (ParVI) methods have been widely adopted in deep Bayesian inference tasks such as Bayesian neural networks or Gaussian Processes, owing to their efficiency in generating high-quality samples given the score of the target distribution. Typically, ParVI methods evolve a weighted-particle system by approximating the first-order Wasserstein gradient flow to reduce the dissimilarity between the particle system’s empirical distribution and the target distribution. Recent advancements in ParVI have explored sophisticated gradient flows to obtain refined particle s
24

Eltager, Mostafa, Tamim Abdelaal, Mohammed Charrout, Ahmed Mahfouz, Marcel J. T. Reinders, and Stavros Makrodimitris. "Benchmarking variational AutoEncoders on cancer transcriptomics data." PLOS ONE 18, no. 10 (2023): e0292126. http://dx.doi.org/10.1371/journal.pone.0292126.

Abstract:
Deep generative models, such as variational autoencoders (VAE), have gained increasing attention in computational biology due to their ability to capture complex data manifolds which subsequently can be used to achieve better performance in downstream tasks, such as cancer type prediction or subtyping of cancer. However, these models are difficult to train due to the large number of hyperparameters that need to be tuned. To get a better understanding of the importance of the different hyperparameters, we examined six different VAE models when trained on TCGA transcriptomics data and evaluated
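As a reminder of where such hyperparameters enter, a standard VAE training objective combines a reconstruction term with a KL regularizer; a minimal PyTorch sketch follows (illustrative, not the benchmark's code; the weight beta is one of the hyperparameters typically swept).

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar, beta: float = 1.0):
    """Negative ELBO for a Gaussian posterior N(mu, exp(logvar))
    and a standard-normal prior."""
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```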
25

Melching, Melanie, and Otmar Scherzer. "Regularization with metric double integrals for vector tomography." Journal of Inverse and Ill-posed Problems 28, no. 6 (2020): 857–75. http://dx.doi.org/10.1515/jiip-2019-0084.

Abstract:
We present a family of non-local variational regularization methods for solving tomographic problems, where the solutions are functions with range in a closed subset of the Euclidean space, for example if the solution only attains values in an embedded sub-manifold. Recently, in [R. Ciak, M. Melching and O. Scherzer, Regularization with metric double integrals of functions with values in a set of vectors, J. Math. Imaging Vision 61 2019, 6, 824–848], such regularization methods have been investigated analytically and their efficiency has been tested for basic imaging tasks such as deno
26

Cisternas, Jaime, Marcelo Gálvez, Bram Stieltjes, and Frederik B. Laun. "Variational Principles in Image Processing and the Regularization of Orientation Fields." International Journal of Bifurcation and Chaos 19, no. 08 (2009): 2705–16. http://dx.doi.org/10.1142/s0218127409024426.

Abstract:
Variational principles and partial differential equations have proved to be fundamental elements in the mathematical modeling of extended systems in physics and engineering. Of particular interest are the equations that arise from a free energy functional. Recently, variational principles have begun to be used in Image Processing to perform basic tasks such as denoising, deblurring, etc. Great improvements can be achieved by selecting the most appropriate form for the functional. In this article, we show how these ideas can be applied not just to scalar fields (i.e. grayscale images) but also to
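A canonical example of such a functional is the Rudin-Osher-Fatemi total-variation denoising model (a standard scalar instance, given here for context; the article extends such ideas to orientation fields):

$$\min_{u}\;\int_\Omega (u-f)^2\,dx\;+\;\lambda\int_\Omega |\nabla u|\,dx,$$

where $f$ is the observed noisy image and $\lambda$ trades data fidelity against smoothness.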
27

Li, Tan, Che-Heng Fung, Him-Ting Wong, Tak-Lam Chan, and Haibo Hu. "Functional Subspace Variational Autoencoder for Domain-Adaptive Fault Diagnosis." Mathematics 11, no. 13 (2023): 2910. http://dx.doi.org/10.3390/math11132910.

Abstract:
This paper presents the functional subspace variational autoencoder, a technique addressing challenges in sensor data analysis in transportation systems, notably the misalignment of time series data and a lack of labeled data. Our technique converts vectorial data into functional data, which captures continuous temporal dynamics instead of discrete data that consist of separate observations. This conversion reduces data dimensions for machine learning tasks in fault diagnosis and facilitates the efficient removal of misalignment. The variational autoencoder identifies trends and anomalies in t
28

Hou, Dongpeng, Chao Gao, Xuelong Li, and Zhen Wang. "DAG-Aware Variational Autoencoder for Social Propagation Graph Generation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (2024): 8508–16. http://dx.doi.org/10.1609/aaai.v38i8.28694.

Abstract:
Propagation models in social networks are critical, with extensive applications across various fields and downstream tasks. However, existing propagation models are often oversimplified, scenario-specific, and lack real-world user social attributes. These limitations detaching from real-world analysis lead to inaccurate representations of the propagation process in social networks. To address these issues, we propose a User Features Attention-based DAG-Aware Variational Autoencoder (DAVA) for propagation graph generation. First, nearly 1 million pieces of user attributes data are collected. Th
29

Wang, Ke, and Gong Zhang. "SAR Target Recognition via Meta-Learning and Amortized Variational Inference." Sensors 20, no. 20 (2020): 5966. http://dx.doi.org/10.3390/s20205966.

Abstract:
The challenge of small data has emerged in synthetic aperture radar automatic target recognition (SAR-ATR) problems. Most SAR-ATR methods are data-driven and require a lot of training data that are expensive to collect. To address this challenge, we propose a recognition model that incorporates meta-learning and amortized variational inference (AVI). Specifically, the model consists of global parameters and task-specific parameters. The global parameters, trained by meta-learning, construct a common feature extractor shared between all recognition tasks. The task-specific parameters, modeled b
30

Yang, Fan, Alina Vereshchaka, Yufan Zhou, Changyou Chen, and Wen Dong. "Variational Adversarial Kernel Learned Imitation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 6599–606. http://dx.doi.org/10.1609/aaai.v34i04.6135.

Abstract:
Imitation learning refers to the problem where an agent learns to perform a task through observing and mimicking expert demonstrations, without knowledge of the cost function. State-of-the-art imitation learning algorithms reduce imitation learning to distribution-matching problems by minimizing some distance measures. However, the distance measure may not always provide informative signals for a policy update. To this end, we propose the variational adversarial kernel learned imitation learning (VAKLIL), which measures the distance using the maximum mean discrepancy with variational kernel le
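The maximum mean discrepancy used as the distance measure has the standard kernel form

$$\mathrm{MMD}^2(P,Q)=\mathbb{E}_{x,x'\sim P}[k(x,x')]+\mathbb{E}_{y,y'\sim Q}[k(y,y')]-2\,\mathbb{E}_{x\sim P,\,y\sim Q}[k(x,y)],$$

where $P$ and $Q$ are the expert and policy distributions being matched and $k$ is the kernel that VAKLIL learns variationally (our reading of the abstract).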
31

Milanés-Hermosilla, Daily, Rafael Trujillo-Codorniú, Saddid Lamar-Carbonell, et al. "Robust Motor Imagery Tasks Classification Approach Using Bayesian Neural Network." Sensors 23, no. 2 (2023): 703. http://dx.doi.org/10.3390/s23020703.

Abstract:
The development of Brain–Computer Interfaces based on Motor Imagery (MI) tasks is a relevant research topic worldwide. The design of accurate and reliable BCI systems remains a challenge, mainly in terms of increasing performance and usability. Classifiers based on Bayesian Neural Networks are proposed in this work using variational inference, aiming to analyze the uncertainty during MI prediction. An adaptive threshold scheme is proposed here for MI classification with a reject option, and its performance on both datasets 2a and 2b from BCI Competition IV is compared with other app
32

Vedadi, Elahe, Joshua V. Dillon, Philip Andrew Mansfield, Karan Singhal, Arash Afkanpour, and Warren Richard Morningstar. "Federated Variational Inference: Towards Improved Personalization and Generalization." Proceedings of the AAAI Symposium Series 3, no. 1 (2024): 323–27. http://dx.doi.org/10.1609/aaaiss.v3i1.31228.

Abstract:
Conventional federated learning algorithms train a single global model by leveraging all participating clients’ data. However, due to heterogeneity in client generative distributions and predictive models, these approaches may not appropriately approximate the predictive process, converge to an optimal state, or generalize to new clients. We study personalization and generalization in stateless cross-device federated learning setups assuming heterogeneity in client data distributions and predictive models. We first propose a hierarchical generative model and formalize it using Bayesian Inferen
33

Zuo, Xiaojing. "Deep Gaussian Mixture Variational Information Bottleneck." Advances in Engineering Technology Research 6, no. 1 (2023): 421. http://dx.doi.org/10.56028/aetr.6.1.421.2023.

Abstract:
On supervised learning tasks, introducing an information bottleneck can guide the model to focus on the more discriminative features in the input, which can effectively prevent overfitting. The deep variational information bottleneck aims to learn a global Gaussian latent variable using the neural network, which compresses mutual information with the input features and retains the most relevant information with the output features as much as possible. However, for the input containing complex semantic information, such as multi-label classification datasets, the latent variable obeying a simpl
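The deep variational information bottleneck referenced here optimizes the standard trade-off

$$\max_{\theta}\;I(Z;Y)\;-\;\beta\,I(Z;X),$$

where $X$ is the input, $Y$ the output, $Z$ the latent variable, and $\beta$ controls how aggressively the input is compressed; the paper's point is that a single Gaussian latent can be too restrictive for complex, multi-label semantics.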
34

Li, Ximing, Jiaojiao Zhang, and Jihong Ouyang. "Dirichlet Multinomial Mixture with Variational Manifold Regularization: Topic Modeling over Short Texts." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7884–91. http://dx.doi.org/10.1609/aaai.v33i01.33017884.

Abstract:
Conventional topic models suffer from a severe sparsity problem when facing extremely short texts such as social media posts. The family of Dirichlet multinomial mixture (DMM) can handle the sparsity problem, however, they are still very sensitive to ordinary and noisy words, resulting in inaccurate topic representations at the document level. In this paper, we alleviate this problem by preserving local neighborhood structure of short texts, enabling to spread topical signals among neighboring documents, so as to correct the inaccurate topic representations. This is achieved by using variation
35

Göppel, Simon, Jürgen Frikel, and Markus Haltmeier. "Feature Reconstruction from Incomplete Tomographic Data without Detour." Mathematics 10, no. 8 (2022): 1318. http://dx.doi.org/10.3390/math10081318.

Abstract:
In this paper, we consider the problem of feature reconstruction from incomplete X-ray CT data. Such incomplete data problems occur when the number of measured X-rays is restricted either due to limit radiation exposure or due to practical constraints, making the detection of certain rays challenging. Since image reconstruction from incomplete data is a severely ill-posed (unstable) problem, the reconstructed images may suffer from characteristic artefacts or missing features, thus significantly complicating subsequent image processing tasks (e.g., edge detection or segmentation). In this pape
36

VAN GENNIP, YVES, and CAROLA-BIBIANE SCHÖNLIEB. "Introduction: Big data and partial differential equations." European Journal of Applied Mathematics 28, no. 6 (2017): 877–85. http://dx.doi.org/10.1017/s0956792517000304.

Abstract:
Partial differential equations (PDEs) are expressions involving an unknown function in many independent variables and their partial derivatives up to a certain order. Since PDEs express continuous change, they have long been used to formulate a myriad of dynamical physical and biological phenomena: heat flow, optics, electrostatics and -dynamics, elasticity, fluid flow and many more. Many of these PDEs can be derived in a variational way, i.e. via minimization of an ‘energy’ functional. In this globalised and technologically advanced age, PDEs are also extensively used for modelling social sit
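A textbook instance of the variational derivation mentioned here: the heat equation is the gradient flow of the Dirichlet energy,

$$E(u)=\tfrac{1}{2}\int_\Omega |\nabla u|^2\,dx\quad\Longrightarrow\quad \partial_t u=\Delta u,$$

so minimizing the energy by steepest descent reproduces the dynamics.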
37

Bai, Wenjun, Changqin Quan, and Zhi-Wei Luo. "Improving Generative and Discriminative Modelling Performance by Implementing Learning Constraints in Encapsulated Variational Autoencoders." Applied Sciences 9, no. 12 (2019): 2551. http://dx.doi.org/10.3390/app9122551.

Abstract:
Learning latent representations of observed data that can favour both discriminative and generative tasks remains a challenging task in artificial-intelligence (AI) research. Previous attempts that ranged from the convex binding of discriminative and generative models to the semisupervised learning paradigm could hardly yield optimal performance on both generative and discriminative tasks. To this end, in this research, we harness the power of two neuroscience-inspired learning constraints, that is, dependence minimisation and regularisation constraints, to improve generative and discriminativ
38

Adebayo, Philip, Frederick Basaky, and Edgar Osaghae. "Variational Quantum-Classical Algorithms: A Review of Theory, Applications, and Opportunities." UMYU Scientifica 2, no. 4 (2023): 65–75. http://dx.doi.org/10.56919/usci.2324.008.

Abstract:
Variational Quantum-Classical Algorithm (VQCA) is a potential tool for machine learning (ML) prediction tasks, but its efficacy, adaptability to big datasets, and optimization for noise reduction on quantum hardware are not clear. We aim to accomplish three study goals in this literature review. We begin by reviewing the justifications for ML practitioners' use of VQCA. Second, we compare the accuracy and effectiveness of VQCA in diverse domains to see whether it has a performance advantage over other ML methods. Finally, we evaluate VQCA's immediate and long-term effects on quantum ML and how
39

Linzner, Dominik, and Heinz Koeppl. "A Variational Perturbative Approach to Planning in Graph-Based Markov Decision Processes." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 7203–10. http://dx.doi.org/10.1609/aaai.v34i05.6210.

Abstract:
Coordinating multiple interacting agents to achieve a common goal is a difficult task with huge applicability. This problem remains hard to solve, even when limiting interactions to be mediated via a static interaction-graph. We present a novel approximate solution method for multi-agent Markov decision problems on graphs, based on variational perturbation theory. We adopt the strategy of planning via inference, which has been explored in various prior works. We employ a non-trivial extension of a novel high-order variational method that allows for approximate inference in large networks and h
40

Xie, Jianwen, Zilong Zheng, and Ping Li. "Learning Energy-Based Model with Variational Auto-Encoder as Amortized Sampler." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10441–51. http://dx.doi.org/10.1609/aaai.v35i12.17250.

Abstract:
Due to the intractable partition function, training energy-based models (EBMs) by maximum likelihood requires Markov chain Monte Carlo (MCMC) sampling to approximate the gradient of the Kullback-Leibler divergence between data and model distributions. However, it is non-trivial to sample from an EBM because of the difficulty of mixing between modes. In this paper, we propose to learn a variational auto-encoder (VAE) to initialize the finite-step MCMC, such as Langevin dynamics that is derived from the energy function, for efficient amortized sampling of the EBM. With these amortized MCMC sampl
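One step of the Langevin dynamics mentioned here has the standard form

$$x_{t+1}=x_t-\frac{\epsilon^2}{2}\,\nabla_x E_\theta(x_t)+\epsilon\,\eta_t,\qquad \eta_t\sim\mathcal{N}(0,I),$$

with the step size $\epsilon$ in our notation; the paper's idea is to draw the initialization $x_0$ from a learned VAE rather than from noise, so only a few such steps are needed.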
41

Li, Zhongwei, Xue Zhu, Ziqi Xin, Fangming Guo, Xingshuai Cui, and Leiquan Wang. "Variational Generative Adversarial Network with Crossed Spatial and Spectral Interactions for Hyperspectral Image Classification." Remote Sensing 13, no. 16 (2021): 3131. http://dx.doi.org/10.3390/rs13163131.

Abstract:
Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) have been widely used in hyperspectral image classification (HSIC) tasks. However, the HSI virtual samples generated by VAEs are often ambiguous, and GANs are prone to mode collapse, which ultimately leads to poor generalization. Moreover, most of these models only consider the extraction of spectral or spatial features. They fail to combine the two branches interactively and ignore the correlation between them. Consequently, the variational generative adversarial network with crossed spatial and spectral
42

Xu, Zeyu, Wenbin Yu, Chengjun Zhang, and Yadang Chen. "Quantum Convolutional Long Short-Term Memory Based on Variational Quantum Algorithms in the Era of NISQ." Information 15, no. 4 (2024): 175. http://dx.doi.org/10.3390/info15040175.

Abstract:
In the era of noisy intermediate-scale quantum (NISQ) computing, the synergistic collaboration between quantum and classical computing models has emerged as a promising solution for tackling complex computational challenges. Long short-term memory (LSTM), as a popular network for modeling sequential data, has been widely acknowledged for its effectiveness. However, with the increasing demand for data and spatial feature extraction, the training cost of LSTM exhibits exponential growth. In this study, we propose the quantum convolutional long short-term memory (QConvLSTM) model. By ingeniously
43

Qin, Junhan. "Review of ansatz designing techniques for variational quantum algorithms." Journal of Physics: Conference Series 2634, no. 1 (2023): 012043. http://dx.doi.org/10.1088/1742-6596/2634/1/012043.

Abstract:
For a large number of tasks, quantum computing demonstrates the potential for exponential acceleration over classical computing. In the NISQ era, variable-component subcircuits enable applications of quantum computing. To reduce the inherent noise and qubit size limitations of quantum computers, existing research has improved the accuracy and efficiency of Variational Quantum Algorithm (VQA). In this paper, we explore the various ansatz improvement methods for VQAs at the gate level and pulse level, and classify, evaluate and summarize them.
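To make the gate-level notion of an ansatz concrete, here is a minimal hardware-efficient-style parameterized circuit sketched in PennyLane (the library choice is ours; the review itself is framework-agnostic).

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def ansatz(params):
    # single-qubit rotations followed by an entangling gate:
    # the repeating unit of a hardware-efficient ansatz
    qml.RY(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RY(params[2], wires=0)
    return qml.expval(qml.PauliZ(0))

# a classical optimizer then tunes params against this expectation value
```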
44

Lawley, Lane, Will Frey, Patrick Mullen, and Alexander D. Wissner-Gross. "Joint sparsity-biased variational graph autoencoders." Journal of Defense Modeling and Simulation: Applications, Methodology, Technology 18, no. 3 (2021): 239–46. http://dx.doi.org/10.1177/1548512921996828.

Abstract:
To bring the full benefits of machine learning to defense modeling and simulation, it is essential to first learn useful representations for sparse graphs consisting of both key entities (vertices) and their relationships (edges). Here, we present a new model, the Joint Sparsity-Biased Variational Graph AutoEncoder (JSBVGAE), capable of learning embedded representations of nodes from which both sparse network topologies and node features can be jointly and accurately reconstructed. We show that our model outperforms the previous state of the art on standard link-prediction and node-classificat
45

Zhai, Ke, Jordan Boyd-Graber, and Shay B. Cohen. "Online Adaptor Grammars with Hybrid Inference." Transactions of the Association for Computational Linguistics 2 (December 2014): 465–76. http://dx.doi.org/10.1162/tacl_a_00196.

Abstract:
Adaptor grammars are a flexible, powerful formalism for defining nonparametric, unsupervised models of grammar productions. This flexibility comes at the cost of expensive inference. We address the difficulty of inference through an online algorithm which uses a hybrid of Markov chain Monte Carlo and variational inference. We show that this inference strategy improves scalability without sacrificing performance on unsupervised word segmentation and topic modeling tasks.
46

Song, Junru, Yang Yang, Wei Peng, Weien Zhou, Feifei Wang, and Wen Yao. "MorphVAE: Advancing Morphological Design of Voxel-Based Soft Robots with Variational Autoencoders." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 9 (2024): 10368–76. http://dx.doi.org/10.1609/aaai.v38i9.28904.

Abstract:
Soft robot design is an intricate field with unique challenges due to its complex and vast search space. In the past literature, evolutionary computation algorithms, including novel probabilistic generative models (PGMs), have shown potential in this realm. However, these methods are sample inefficient and predominantly focus on rigid robots in locomotion tasks, which limit their performance and application in robot design automation. In this work, we propose MorphVAE, an innovative PGM that incorporates a multi-task training scheme and a meticulously crafted sampling technique termed ``contin
47

Wang, Zheng, and Qingbiao Wu. "An Integrated Deep Generative Model for Text Classification and Generation." Mathematical Problems in Engineering 2018 (August 19, 2018): 1–8. http://dx.doi.org/10.1155/2018/7529286.

Abstract:
Text classification and generation are two important tasks in the field of natural language processing. In this paper, we deal with both tasks via Variational Autoencoder, which is a powerful deep generative model. The self-attention mechanism is introduced to the encoder. The modified encoder extracts the global feature of the input text to produce the hidden code, and we train a neural network classifier based on the hidden code to perform the classification. On the other hand, the label of the text is fed into the decoder explicitly to enhance the categorization information, which could hel
48

Gavhale, Kiran R. "Leveraging Nonlinear Variational Inequalities with Hierarchical Graph Convolutional Networks for Adaptive Resource Management in Cloud Environments." Advances in Nonlinear Variational Inequalities 28, no. 4s (2025): 296–306. https://doi.org/10.52783/anvi.v28.3250.

Abstract:
The complexity and dynamism of cloud environments require new approaches to efficient resource provisioning and load balancing. Centralized approaches are often plagued with scalability problems, latency, and inefficient resource utilization under dynamic workloads. To overcome these shortcomings, we propose an advanced decentralized framework that utilizes deep learning and nonlinear variational inequalities for pre-emptive load balancing and cost-efficient resource provisioning. Our solution comprises three complementary methods, which are Hierarchical Graph Convolution
49

Sun, Qingyun, Jianxin Li, Hao Peng, et al. "Graph Structure Learning with Variational Information Bottleneck." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (2022): 4165–74. http://dx.doi.org/10.1609/aaai.v36i4.20335.

Abstract:
Graph Neural Networks (GNNs) have shown promising results on a broad spectrum of applications. Most empirical studies of GNNs directly take the observed graph as input, assuming the observed structure perfectly depicts the accurate and complete relations between nodes. However, graphs in the real-world are inevitably noisy or incomplete, which could even exacerbate the quality of graph representations. In this work, we propose a novel Variational Information Bottleneck guided Graph Structure Learning framework, namely VIB-GSL, in the perspective of information theory. VIB-GSL is the first atte
50

Kiselev, Igor. "Variational BEJG Solvers for Marginal-MAP Inference with Accurate Approximation of B-Conditional Entropy." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9957–58. http://dx.doi.org/10.1609/aaai.v33i01.33019957.

Abstract:
Previously proposed variational techniques for approximate MMAP inference in complex graphical models of high-order factors relax a dual variational objective function to obtain its tractable approximation, and further perform MMAP inference in the resulting simplified graphical model, where the sub-graph with decision variables is assumed to be a disconnected forest. In contrast, we developed novel variational MMAP inference algorithms and proximal convergent solvers, where we can improve the approximation accuracy while better preserving the original MMAP query by designing such a dual varia