
Journal articles on the topic 'Representation space / Latent space'



Consult the top 50 journal articles for your research on the topic 'Representation space / Latent space.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Gat, Itai, Guy Lorberbom, Idan Schwartz, and Tamir Hazan. "Latent Space Explanation by Intervention." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (2022): 679–87. http://dx.doi.org/10.1609/aaai.v36i1.19948.

Abstract:
The success of deep neural nets heavily relies on their ability to encode complex relations between their input and their output. While this property serves to fit the training data well, it also obscures the mechanism that drives prediction. This study aims to reveal hidden concepts by employing an intervention mechanism that shifts the predicted class based on discrete variational autoencoders. An explanatory model then visualizes the encoded information from any hidden layer and its corresponding intervened representation. By the assessment of differences between the original representation
2

Huang, Yulei, Ziping Ma, Huirong Li, and Jingyu Wang. "Dual Space Latent Representation Learning for Image Representation." Mathematics 11, no. 11 (2023): 2526. http://dx.doi.org/10.3390/math11112526.

Abstract:
Semi-supervised non-negative matrix factorization (NMF) has achieved successful results due to its significant ability to recognize images from a small quantity of labeled information. However, there still exist problems to be solved, such as interconnection information not being fully explored and the inevitable mixed noise in the data, which deteriorate the performance of these methods. To circumvent these problems, we propose a novel semi-supervised method named DLRGNMF. Firstly, dual latent space is characterized by the affinity matrix to explicitly reflect the interrelationship between d
3

Dai, Jin, and Zhifang Zheng. "Disentangling Representation of Variational Autoencoders Based on Cloud Models." 電腦學刊 [Journal of Computers] 34, no. 6 (2023): 001–14. http://dx.doi.org/10.53106/199115992023123406001.

Abstract:
Variational autoencoder (VAE) has the problem of an uninterpretable data generation process, because the features contained in the VAE latent space are coupled with each other and no mapping from the latent space to the semantic space is established. However, most existing algorithms cannot understand the data distribution features in the latent space semantically. In this paper, we propose a cloud model-based method for disentangling semantic features in VAE latent space by adding support vector machines (SVM) to feature transformations of latent variables, and we propose to use the clo
4

Heese, Raoul, Jochen Schmid, Michał Walczak, and Michael Bortz. "Calibrated simplex-mapping classification." PLOS ONE 18, no. 1 (2023): e0279876. http://dx.doi.org/10.1371/journal.pone.0279876.

Abstract:
We propose a novel methodology for general multi-class classification in arbitrary feature spaces, which results in a potentially well-calibrated classifier. Calibrated classifiers are important in many applications because, in addition to the prediction of mere class labels, they also yield a confidence level for each of their predictions. In essence, the training of our classifier proceeds in two steps. In a first step, the training data is represented in a latent space whose geometry is induced by a regular (n − 1)-dimensional simplex, n being the number of classes. We design this represent
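To make the geometric idea in this abstract concrete, here is a minimal sketch, assuming standard basis vectors as the simplex vertices, an MLP as the latent map, and a softmax over negative vertex distances as the confidence estimate; the paper's actual calibrated construction differs.

```python
# Sketch of a simplex-latent classifier: each of n classes gets a vertex of a
# regular (n-1)-simplex; the n standard basis vectors of R^n form one such
# simplex (all pairwise vertex distances are sqrt(2)).
import numpy as np
from sklearn.neural_network import MLPRegressor

def simplex_vertices(n_classes):
    return np.eye(n_classes)  # vertex k = one-hot basis vector e_k

def fit_latent_map(X, y, n_classes):
    # regress each training sample onto its class vertex in latent space
    targets = simplex_vertices(n_classes)[y]
    reg = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
    reg.fit(X, targets)
    return reg

def predict_proba(reg, X, n_classes):
    z = np.atleast_2d(reg.predict(X))  # latent representations of the queries
    d = np.linalg.norm(z[:, None, :] - simplex_vertices(n_classes)[None], axis=-1)
    e = np.exp(-(d - d.min(axis=1, keepdims=True)))  # stable softmax over -d
    return e / e.sum(axis=1, keepdims=True)          # crude confidence proxy
```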
5

Namatēvs, Ivars, Artūrs Ņikuļins, Anda Slaidiņa, Laura Neimane, Oskars Radziņš, and Kaspars Sudars. "Towards Explainability of the Latent Space by Disentangled Representation Learning." Information Technology and Management Science 26 (November 30, 2023): 41–48. http://dx.doi.org/10.7250/itms-2023-0006.

Abstract:
Deep neural networks are widely used in computer vision for image classification, segmentation and generation. They are also often criticised as “black boxes” because their decision-making process is often not interpretable by humans. However, learning explainable representations that explicitly disentangle the underlying mechanisms that structure observational data is still a challenge. To further explore the latent space and achieve generic processing, we propose a pipeline for discovering the explainable directions in the latent space of generative models. Since the latent space contains se
6

Toledo-Marín, J. Quetzalcóatl, and James A. Glazier. "Using deep LSD to build operators in GANs latent space with meaning in real space." PLOS ONE 18, no. 6 (2023): e0287736. http://dx.doi.org/10.1371/journal.pone.0287736.

Abstract:
Generative models rely on the idea that data can be represented in terms of latent variables which are uncorrelated by definition. Lack of correlation among the latent variable support is important because it suggests that the latent-space manifold is simpler to understand and manipulate than the real-space representation. Many types of generative model are used in deep learning, e.g., variational autoencoders (VAEs) and generative adversarial networks (GANs). Based on the idea that the latent space behaves like a vector space (Radford et al., 2015), we ask whether we can expand the latent spac
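The vector-space premise cited in this abstract (Radford et al., 2015) can be sketched in a few lines; `generator` below is a hypothetical stand-in for a pretrained GAN generator, and the attribute names are purely illustrative.

```python
# Latent arithmetic sketch: attribute directions add and subtract in z-space.
import numpy as np

rng = np.random.default_rng(0)
d = 128                                    # latent dimensionality (assumption)

def generator(z):
    return z                               # placeholder for a real G: R^d -> image

z_smiling_woman = rng.standard_normal(d)
z_neutral_woman = rng.standard_normal(d)
z_neutral_man = rng.standard_normal(d)

smile = z_smiling_woman - z_neutral_woman  # "smile" as a latent direction
img = generator(z_neutral_man + smile)     # transfer the attribute to another code

# linear interpolation between codes walks smoothly across the latent manifold
frames = [generator((1 - a) * z_neutral_woman + a * z_neutral_man)
          for a in np.linspace(0.0, 1.0, 8)]
```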
7

Sang, Neil. "Does Time Smoothen Space? Implications for Space-Time Representation." ISPRS International Journal of Geo-Information 12, no. 3 (2023): 119. http://dx.doi.org/10.3390/ijgi12030119.

Abstract:
The continuous nature of space and time is a fundamental tenet of many scientific endeavors. That digital representation imposes granularity is well recognized, but whether it is possible to address space completely remains unanswered. This paper argues Hales’ proof of Kepler’s conjecture on the packing of hard spheres suggests the answer to be “no”, providing examples of why this matters in GIS generally and considering implications for spatio-temporal GIS in particular. It seeks to resolve the dichotomy between continuous and granular space by showing how a continuous space may be emergent o
8

Shrivastava, Aditya Divyakant, and Douglas B. Kell. "FragNet, a Contrastive Learning-Based Transformer Model for Clustering, Interpreting, Visualizing, and Navigating Chemical Space." Molecules 26, no. 7 (2021): 2065. http://dx.doi.org/10.3390/molecules26072065.

Abstract:
The question of molecular similarity is core in cheminformatics and is usually assessed via a pairwise comparison based on vectors of properties or molecular fingerprints. We recently exploited variational autoencoders to embed 6M molecules in a chemical space, such that their (Euclidean) distance within the latent space so formed could be assessed within the framework of the entire molecular set. However, the standard objective function used did not seek to manipulate the latent space so as to cluster the molecules based on any perceived similarity. Using a set of some 160,000 molecules of bi
9

Banyay, Gregory A., and Andrew S. Wixom. "Latent space representation method for structural acoustic assessments." Journal of the Acoustical Society of America 155, no. 3_Supplement (2024): A141. http://dx.doi.org/10.1121/10.0027092.

Abstract:
When targeting structural acoustic objectives, engineering practitioners face epistemic uncertainties in the selection of optimal geometries and material distributions, particularly during early stages of the design process. Models built for simulating acoustic phenomena generally produce vector-valued output quantities of interest, such as autospectral density and frequency response functions. Given finite compute resources and time we seek computationally parsimonious ways to distill meaningful design information into actionable results from a limited set of model runs, and thus aim to use m
10

You, Cong-Zhe, Vasile Palade, and Xiao-Jun Wu. "Robust structure low-rank representation in latent space." Engineering Applications of Artificial Intelligence 77 (January 2019): 117–24. http://dx.doi.org/10.1016/j.engappai.2018.09.008.

11

Iraki, Tarek, and Norbert Link. "Generative models for capturing and exploiting the influence of process conditions on process curves." Journal of Intelligent Manufacturing 33, no. 2 (2021): 473–92. http://dx.doi.org/10.1007/s10845-021-01846-4.

Abstract:
Variations of dedicated process conditions (such as workpiece and tool properties) yield different process state evolutions, which are reflected by different time series of the observable quantities (process curves). A novel method is presented, which firstly allows to extract the statistical influence of these conditions on the process curves and its representation via generative models, and secondly represents their influence on the ensemble of curves by transformations of the representation space. A latent variable space is derived from sampled process data, which represents the cur
12

Chen, Man-Sheng, Ling Huang, Chang-Dong Wang, and Dong Huang. "Multi-View Clustering in Latent Embedding Space." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3513–20. http://dx.doi.org/10.1609/aaai.v34i04.5756.

Abstract:
Previous multi-view clustering algorithms mostly partition the multi-view data in their original feature space, the efficacy of which heavily and implicitly relies on the quality of the original feature presentation. In light of this, this paper proposes a novel approach termed Multi-view Clustering in Latent Embedding Space (MCLES), which is able to cluster the multi-view data in a learned latent embedding space while simultaneously learning the global structure and the cluster indicator matrix in a unified optimization framework. Specifically, in our framework, a latent embedding representat
13

Zheng, Chuankun, Ruzhang Zheng, Rui Wang, Shuang Zhao, and Hujun Bao. "A Compact Representation of Measured BRDFs Using Neural Processes." ACM Transactions on Graphics 41, no. 2 (2022): 1–15. http://dx.doi.org/10.1145/3490385.

Abstract:
In this article, we introduce a compact representation for measured BRDFs by leveraging Neural Processes (NPs). Unlike prior methods that express those BRDFs as discrete high-dimensional matrices or tensors, our technique considers measured BRDFs as continuous functions and works in corresponding function spaces. Specifically, provided the evaluations of a set of BRDFs, such as ones in MERL and EPFL datasets, our method learns a low-dimensional latent space as well as a few neural networks to encode and decode these measured BRDFs or new BRDFs into and from this space in a non-linear fashion.
14

Aseervatham, Sujeevan. "A Concept Vector Space Model for Semantic Kernels." International Journal on Artificial Intelligence Tools 18, no. 02 (2009): 239–72. http://dx.doi.org/10.1142/s0218213009000123.

Abstract:
Kernels are widely used in Natural Language Processing as similarity measures within inner-product based learning methods like the Support Vector Machine. The Vector Space Model (VSM) is extensively used for the spatial representation of the documents. However, it is purely a statistical representation. In this paper, we present a Concept Vector Space Model (CVSM) representation which uses linguistic prior knowledge to capture the meanings of the documents. We also propose a linear kernel and a latent kernel for this space. The linear kernel takes advantage of the linguistic concepts whereas t
15

Stevens, Jesse, Daniel N. Wilke, and Isaac I. Setshedi. "Enhancing LS-PIE’s Optimal Latent Dimensional Identification: Latent Expansion and Latent Condensation." Mathematical and Computational Applications 29, no. 4 (2024): 65. http://dx.doi.org/10.3390/mca29040065.

Abstract:
The Latent Space Perspicacity and Interpretation Enhancement (LS-PIE) framework enhances dimensionality reduction methods for linear latent variable models (LVMs). This paper extends LS-PIE by introducing an optimal latent discovery strategy to automate identifying optimal latent dimensions and projections based on user-defined metrics. The latent condensing (LCON) method clusters and condenses an extensive latent space into a compact form. A new approach, latent expansion (LEXP), incrementally increases latent dimensions using a linear LVM to find an optimal compact space. This study compares
16

Perianez-Pascual, Jorge, Juan D. Gutiérrez, Laura Escobar-Encinas, Álvaro Rubio-Largo, and Roberto Rodriguez-Echeverria. "Beyond Spectrograms: Rethinking Audio Classification from EnCodec’s Latent Space." Algorithms 18, no. 2 (2025): 108. https://doi.org/10.3390/a18020108.

Abstract:
This paper presents a novel approach to audio classification leveraging the latent representation generated by Meta’s EnCodec neural audio codec. We hypothesize that the compressed latent space representation captures essential audio features more suitable for classification tasks than the traditional spectrogram-based approaches. We train a vanilla convolutional neural network for music genre, speech/music, and environmental sound classification using EnCodec’s encoder output as input to validate this. Then, we compare its performance training with the same network using a spectrogram-based r
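A hedged sketch of the extraction step this abstract describes, following the documented usage pattern of Meta's `encodec` package; the classifier head, bandwidth setting, and file name are assumptions, and the paper's exact preprocessing and network may differ.

```python
# Sketch: extract EnCodec's compressed representation, then classify with a small CNN.
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)                      # assumption: 6 kbps codebooks

wav, sr = torchaudio.load("clip.wav")                # hypothetical input file
wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)
with torch.no_grad():
    frames = model.encode(wav)                       # list of (codes, scale) tuples
codes = torch.cat([c for c, _ in frames], dim=-1)    # [B, n_q, T] discrete latents

class TinyCNN(torch.nn.Module):                      # stand-in for the paper's network
    def __init__(self, n_q, n_classes):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv1d(n_q, 32, kernel_size=5, padding=2), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool1d(1), torch.nn.Flatten(),
            torch.nn.Linear(32, n_classes))
    def forward(self, x):
        return self.net(x.float())                   # quantizer streams as channels

logits = TinyCNN(codes.shape[1], n_classes=10)(codes)
```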
17

Asai, Masataro, Hiroshi Kajino, Alex Fukunaga, and Christian Muise. "Classical Planning in Deep Latent Space." Journal of Artificial Intelligence Research 74 (August 9, 2022): 1599–686. http://dx.doi.org/10.1613/jair.1.13768.

Abstract:
Current domain-independent, classical planners require symbolic models of the problem domain and instance as input, resulting in a knowledge acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, the knowledge is encoded in a subsymbolic representation which is incompatible with symbolic systems such as planners. We propose Latplan, an unsupervised architecture combining deep learning and classical planning. Given only an unlabeled set of image pairs showing a subset of transitions allowed in the environment (training inputs), Latplan learns
18

Tan, Zhen, Xiang Zhao, Yang Fang, Bin Ge, and Weidong Xiao. "Knowledge Graph Representation via Similarity-Based Embedding." Scientific Programming 2018 (July 15, 2018): 1–12. http://dx.doi.org/10.1155/2018/6325635.

Abstract:
Knowledge graph, a typical multi-relational structure, includes large-scale facts of the world, yet it is still far away from completeness. Knowledge graph embedding, as a representation method, constructs a low-dimensional and continuous space to describe the latent semantic information and predict the missing facts. Among various solutions, almost all embedding models have high time and memory-space complexities and, hence, are difficult to apply to large-scale knowledge graphs. Some other embedding models, such as TransE and DistMult, although with lower complexity, ignore inherent features
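As a pointer for readers, TransE, one of the low-complexity baselines this abstract mentions, scores a triple (h, r, t) by how closely h + r lands on t; the sketch below shows that scoring rule with randomly initialized embeddings (training under a margin-based ranking loss is omitted).

```python
# TransE scoring sketch: a fact (h, r, t) is plausible when h + r ≈ t.
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 50, 1000, 20
E = rng.normal(size=(n_entities, dim))     # entity embeddings (untrained here)
R = rng.normal(size=(n_relations, dim))    # relation embeddings

def transe_distance(h, r, t):
    # smaller distance = more plausible; training minimizes this for true triples
    return float(np.linalg.norm(E[h] + R[r] - E[t]))

print(transe_distance(0, 3, 42))
```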
19

Shang, Ronghua, Lujuan Wang, Fanhua Shang, Licheng Jiao, and Yangyang Li. "Dual space latent representation learning for unsupervised feature selection." Pattern Recognition 114 (June 2021): 107873. http://dx.doi.org/10.1016/j.patcog.2021.107873.

20

Zhou, Yihang [周翊航]. "Low-Rank Representation Algorithm Based on Latent Feature Space." Computer Science and Application 11, no. 04 (2021): 1140–48. http://dx.doi.org/10.12677/csa.2021.114117.

21

Bae, Seho, Nizam Ud Din, Hyunkyu Park, and Juneho Yi. "Exploiting an Intermediate Latent Space between Photo and Sketch for Face Photo-Sketch Recognition." Sensors 22, no. 19 (2022): 7299. http://dx.doi.org/10.3390/s22197299.

Abstract:
The photo-sketch matching problem is challenging because the modality gap between a photo and a sketch is very large. This work features a novel approach to the use of an intermediate latent space between the two modalities that circumvents the problem of modality gap for face photo-sketch recognition. To set up a stable homogenous latent space between a photo and a sketch that is effective for matching, we utilize a bidirectional (photo → sketch and sketch → photo) collaborative synthesis network and equip the latent space with rich representation power. To provide rich representation power,
22

Wang, Hao, Lu Wang, Zhongyu Wang, Lixin Ma, and Ye Luo. "SSC-VAE: Structured Sparse Coding Based Variational Autoencoder for Detail Preserved Image Reconstruction." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 7 (2025): 7665–73. https://doi.org/10.1609/aaai.v39i7.32825.

Abstract:
Discrete latent representation techniques, such as Vector Quantization (VQ) and Sparse Coding (SC), have demonstrated superior image reconstruction and generation quality compared to continuous representation methods in Variational Autoencoders (VAEs). However, existing approaches often treat the latent representations of an image independently in their discrete representation space, neglecting both the inherent structural information within each representation and the correlations among them. This oversight leads to coarse representations and suboptimal generated results. In this paper, we ad
23

Simhal, Anish K., Rena Elkin, Ross S. Firestone, Jung Hun Oh, and Joseph O. Deasy. "Abstract A031: Unsupervised graph-based visualization of variational autoencoder latent spaces reveals hidden multiple myeloma subtypes." Clinical Cancer Research 31, no. 13_Supplement (2025): A031. https://doi.org/10.1158/1557-3265.aimachine-a031.

Abstract:
Latent space representations learned through variational autoencoders (VAEs) offer a powerful, unsupervised means of capturing nonlinear structure in high-dimensional oncology data. The latent embedding spaces often encode information that differs from traditional bioinformatics methods such as t-SNE or UMAP. However, a persistent challenge remains: how to meaningfully visualize and interpret these latent variables. Common dimensionality reduction techniques like UMAP and t-SNE, while effective, can obscure graph-theoretic relationships that may underlie important biological patterns.
24

Wu, Xiang, Huaibo Huang, Vishal M. Patel, Ran He, and Zhenan Sun. "Disentangled Variational Representation for Heterogeneous Face Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9005–12. http://dx.doi.org/10.1609/aaai.v33i01.33019005.

Abstract:
Visible (VIS) to near infrared (NIR) face matching is a challenging problem due to the significant domain discrepancy between the domains and a lack of sufficient data for training cross-modal matching algorithms. Existing approaches attempt to tackle this problem by either synthesizing visible faces from NIR faces, extracting domain-invariant features from these modalities, or projecting heterogeneous data onto a common latent space for cross-modal matching. In this paper, we take a different approach in which we make use of the Disentangled Variational Representation (DVR) for crossmodal mat
25

Kim, Jaein, Juwon Lee, Ungjin Jang, Seri Lee, and Jooyoung Park. "PyTorch/Pyro Implementation for Representation of Motion in Latent Space." Journal of Korean Institute of Intelligent Systems 28, no. 6 (2018): 558–63. http://dx.doi.org/10.5391/jkiis.2018.28.6.558.

26

Win, Thinzar Aung, and Khamron Sunat. "Optimizing Latent Space Representation for Tourism Insights: A Metaheuristic Approach." Journal of Robotics and Control (JRC) 5, no. 2 (2024): 441–58. https://doi.org/10.18196/jrc.v5i2.21419.

Abstract:
In the modern digital era, social media platforms with travel reviews significantly influence the tourism industry by providing a wealth of information on consumer preferences and behaviors. However, these textual reviews' complex and varied nature poses analytical challenges. This research employs advanced Natural Language Processing (NLP) techniques to process and analyze vast amounts of travel data efficiently, tackling the challenges posed by the diverse and detailed content in the tourism field. We have developed an innovative text clustering methodology that combines BERT's deep linguist
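A minimal sketch of the embed-then-cluster step, assuming the sentence-transformers library and plain KMeans; the paper's actual pipeline couples BERT embeddings with a metaheuristic optimizer, which this sketch does not reproduce, and the model name and data are illustrative.

```python
# Cluster travel reviews in a BERT latent space (illustrative model and inputs).
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

reviews = ["Great beach and friendly staff.",
           "The museum tour felt rushed and crowded."]
embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(reviews)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
```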
27

Kirchoff, Kathryn E., Travis Maxfield, Alexander Tropsha, and Shawn M. Gomez. "SALSA: Semantically-Aware Latent Space Autoencoder." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (2024): 13211–19. http://dx.doi.org/10.1609/aaai.v38i12.29221.

Abstract:
In deep learning for drug discovery, molecular representations are often based on sequences, known as SMILES, which allow for straightforward implementation of natural language processing methodologies, one being the sequence-to-sequence autoencoder. However, we observe that training an autoencoder solely on SMILES is insufficient to learn molecular representations that are semantically meaningful, where semantics are specified by the structural (graph-to-graph) similarities between molecules. We demonstrate by example that SMILES-based autoencoders may map structurally similar molecules to di
28

Raja, Vinayak, and Bhuvi Chopra. "Fostering Privacy in Collaborative Data Sharing via Auto-encoder Latent Space Embedding." Journal of Artificial Intelligence General Science (JAIGS) 4, no. 1 (2024): 152–62. http://dx.doi.org/10.60087/jaigs.v4i1.129.

Abstract:
Securing privacy in machine learning via collaborative data sharing is essential for organizations seeking to harness collective data while upholding confidentiality. This becomes especially vital when protecting sensitive information across the entire machine learning pipeline, from model training to inference. This paper presents an innovative framework utilizing Representation Learning via autoencoders to generate privacy-preserving embedded data. As a result, organizations can distribute these representations, enhancing the performance of machine learning models in situations where multipl
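The core mechanism is easy to sketch: train an autoencoder locally and share only the latent codes z = encoder(x) rather than raw records. Layer sizes and training details below are assumptions, not the paper's configuration.

```python
# Privacy-motivated sharing sketch: downstream parties receive embeddings, not x.
import torch
from torch import nn

class AE(nn.Module):
    def __init__(self, d_in=32, d_z=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 16), nn.ReLU(), nn.Linear(16, d_z))
        self.dec = nn.Sequential(nn.Linear(d_z, 16), nn.ReLU(), nn.Linear(16, d_in))
    def forward(self, x):
        return self.dec(self.enc(x))

ae = AE()
x = torch.randn(256, 32)                       # stand-in for sensitive records
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):                           # plain reconstruction training
    opt.zero_grad()
    nn.functional.mse_loss(ae(x), x).backward()
    opt.step()

shared = ae.enc(x).detach()                    # only these embeddings leave the org
```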
29

Raja, Vinayak, and Bhuvi Chopra. "Cultivating Privacy in Collaborative Data Sharing through Auto-encoder Latent Space Embeddings." Journal of Artificial Intelligence General Science (JAIGS) 3, no. 1 (2024): 269–83. http://dx.doi.org/10.60087/jaigs.vol03.issue01.p283.

Abstract:
Ensuring privacy in machine learning through collaborative data sharing is imperative for organizations aiming to leverage collective data without compromising confidentiality. This becomes particularly crucial when sensitive information must be safeguarded throughout the entire machine learning process, spanning from model training to inference. This paper introduces a novel framework employing Representation Learning through autoencoders to produce privacy-preserving embedded data. Consequently, organizations can share these representations, fostering improved performance of machine learning
30

Raja, Vinayak, and Bhuvi Chopra. "Cultivating Privacy in Collaborative Data Sharing through Auto-encoder Latent Space Embeddings." Journal of Artificial Intelligence General Science (JAIGS) 3, no. 1 (2024): 371–91. http://dx.doi.org/10.60087/jaigs.v3i1.126.

Abstract:
Ensuring privacy in machine learning through collaborative data sharing is imperative for organizations aiming to leverage collective data without compromising confidentiality. This becomes particularly crucial when sensitive information must be safeguarded throughout the entire machine learning process, spanning from model training to inference. This paper introduces a novel framework employing Representation Learning through autoencoders to produce privacy-preserving embedded data. Consequently, organizations can share these representations, fostering improved performance of machine learning
31

Rivero, Daniel, Iván Ramírez-Morales, Enrique Fernandez-Blanco, Norberto Ezquerra, and Alejandro Pazos. "Classical Music Prediction and Composition by Means of Variational Autoencoders." Applied Sciences 10, no. 9 (2020): 3053. http://dx.doi.org/10.3390/app10093053.

Abstract:
This paper proposes a new model for music prediction based on Variational Autoencoders (VAEs). In this work, VAEs are used in a novel way to address two different issues: music representation into the latent space, and using this representation to make predictions of the future note events of the musical piece. This approach was trained with different songs of Handel. As a result, the system can represent the music in the latent space, and make accurate predictions. Therefore, the system can be used to compose new music either from an existing piece or from a random starting point. An addition
32

Zhang, Jian, Jin Yuan, Chuanzhen Li, and Bin Li. "An Inverse Design Framework for Isotropic Metasurfaces Based on Representation Learning." Electronics 11, no. 12 (2022): 1844. http://dx.doi.org/10.3390/electronics11121844.

Abstract:
A hybrid framework for solving the non-uniqueness problem in the inverse design of isomorphic metasurfaces is proposed. The framework consists of a representation learning (RL) module and a variational autoencoder-particle swarm optimization (VAE-PSO) algorithm module. The RL module is used to reduce the complex high-dimensional space into a low-dimensional space with obvious features, with the purpose of eliminating the many-to-one relationship between the original design space and response space. The VAE-PSO algorithm first encodes all meta-atoms into a continuous latent space through VAE an
33

Ahmed, Taufique, and Luca Longo. "Interpreting Disentangled Representations of Person-Specific Convolutional Variational Autoencoders of Spatially Preserving EEG Topographic Maps via Clustering and Visual Plausibility." Information 14, no. 9 (2023): 489. http://dx.doi.org/10.3390/info14090489.

Abstract:
Dimensionality reduction and producing simple representations of electroencephalography (EEG) signals are challenging problems. Variational autoencoders (VAEs) have been employed for EEG data creation, augmentation, and automatic feature extraction. In most of the studies, VAE latent space interpretation is used to detect only the out-of-order distribution latent variable for anomaly detection. However, the interpretation and visualisation of all latent space components disclose information about how the model arrives at its conclusion. The main contribution of this study is interpreting the d
34

Karimi Mamaghan, Amir Mohammad, Andrea Dittadi, Stefan Bauer, Karl Henrik Johansson, and Francesco Quinzan. "Diffusion-Based Causal Representation Learning." Entropy 26, no. 7 (2024): 556. http://dx.doi.org/10.3390/e26070556.

Abstract:
Causal reasoning can be considered a cornerstone of intelligent systems. Having access to an underlying causal graph comes with the promise of cause–effect estimation and the identification of efficient and safe interventions. However, learning causal representations remains a major challenge, due to the complexity of many real-world systems. Previous works on causal representation learning have mostly focused on Variational Auto-Encoders (VAEs). These methods only provide representations from a point estimate, and they are less effective at handling high dimensions. To overcome these problems
35

Liao, Jiayu, Xiaolan Liu, and Mengying Xie. "Inductive Latent Space Sparse and Low-rank Subspace Clustering Algorithm." Journal of Physics: Conference Series 2224, no. 1 (2022): 012124. http://dx.doi.org/10.1088/1742-6596/2224/1/012124.

Abstract:
Sparse subspace clustering (SSC) and low-rank representation (LRR) are the most popular algorithms for subspace clustering. However, SSC and LRR are transductive methods and cannot deal with new data not involved in the training data. When new data arrive, SSC and LRR must be recomputed over all the data, which is time-consuming. On the other hand, for high-dimensional data, dimensionality reduction is first performed before running the SSC and LRR algorithms, which isolates the dimensionality reduction from the subsequent subspace clustering. To overcome these shortcoming
36

Sha, Lei, and Thomas Lukasiewicz. "Text Attribute Control via Closed-Loop Disentanglement." Transactions of the Association for Computational Linguistics 12 (2024): 190–209. http://dx.doi.org/10.1162/tacl_a_00640.

Abstract:
Changing an attribute of a text without changing the content usually requires first disentangling the text into irrelevant attributes and content representations. After that, in the inference phase, the representation of one attribute is tuned to a different value, expecting that the corresponding attribute of the text can also be changed accordingly. The usual way of disentanglement is to add some constraints on the latent space of an encoder-decoder architecture, including adversarial-based constraints and mutual-information-based constraints. However, previous semi-supervised proce
37

Winter, Robin, Floriane Montanari, Andreas Steffen, Hans Briem, Frank Noé, and Djork-Arné Clevert. "Efficient multi-objective molecular optimization in a continuous latent space." Chemical Science 10, no. 34 (2019): 8016–24. http://dx.doi.org/10.1039/c9sc01928f.

Abstract:
We utilize Particle Swarm Optimization to optimize molecules in a machine-learned continuous chemical representation with respect to multiple objectives such as biological activity, structural constraints, or ADMET properties.
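A minimal PSO-in-latent-space sketch follows; `score` is a hypothetical scalarized objective, and a real pipeline would decode each particle back to a molecule before scoring, as the paper does with its learned representation.

```python
# Particle swarm optimization over a continuous latent space (toy objective).
import numpy as np

rng = np.random.default_rng(0)
n_particles, dim = 30, 64
pos = rng.standard_normal((n_particles, dim))   # particles = latent codes
vel = np.zeros_like(pos)

def score(z):                                   # placeholder scalarized objective
    return -np.sum(z**2, axis=-1)               # toy: prefer codes near the origin

pbest, pbest_val = pos.copy(), score(pos)
gbest = pbest[np.argmax(pbest_val)]
for _ in range(100):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    val = score(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]         # best latent code found so far
```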
38

Khan, Shujaat. "Deep-Representation-Learning-Based Classification Strategy for Anticancer Peptides." Mathematics 12, no. 9 (2024): 1330. http://dx.doi.org/10.3390/math12091330.

Abstract:
Cancer, with its complexity and numerous origins, continues to pose a huge challenge in medical research. Anticancer peptides are a potential treatment option, but identifying and synthesizing them on a large scale requires accurate prediction algorithms. This study presents an intuitive classification strategy, named ACP-LSE, based on representation learning, specifically, a deep latent-space encoding scheme. ACP-LSE can demonstrate notable advancements in classification outcomes, particularly in scenarios with limited sample sizes and abundant features. ACP-LSE differs from typical black-
39

Bollon, Jordy, Michela Assale, Andrea Cina, et al. "Investigating How Reproducibility and Geometrical Representation in UMAP Dimensionality Reduction Impact the Stratification of Breast Cancer Tumors." Applied Sciences 12, no. 9 (2022): 4247. http://dx.doi.org/10.3390/app12094247.

Abstract:
Advances in next-generation sequencing have provided high-dimensional RNA-seq datasets, allowing the stratification of some tumor patients based on their transcriptomic profiles. Machine learning methods have been used to reduce and cluster high-dimensional data. Recently, uniform manifold approximation and projection (UMAP) was applied to project genomic datasets in low-dimensional Euclidean latent space. Here, we evaluated how different representations of the UMAP embedding can impact the analysis of breast cancer (BC) stratification. We projected BC RNA-seq data on Euclidean, spherical, and
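For orientation, the projection step looks like the following with umap-learn; the spherical variant uses `output_metric="haversine"`, which embeds points on a sphere, while the random data and parameters here are illustrative stand-ins for the study's RNA-seq features.

```python
# UMAP projections onto Euclidean and spherical latent spaces (illustrative data).
import numpy as np
import umap

X = np.random.rand(500, 2000)                      # stand-in for RNA-seq profiles
euclidean_2d = umap.UMAP(n_components=2, random_state=42).fit_transform(X)
spherical = umap.UMAP(output_metric="haversine",   # embeds onto a sphere
                      random_state=42).fit_transform(X)
```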
40

Rousseau, Thomas, Gentiane Venture, and Vincent Hernandez. "Latent Space Representation of Human Movement: Assessing the Effects of Fatigue." Sensors 24, no. 23 (2024): 7775. https://doi.org/10.3390/s24237775.

Abstract:
Fatigue plays a critical role in sports science, significantly affecting recovery, training effectiveness, and overall athletic performance. Understanding and predicting fatigue is essential to optimize training, prevent overtraining, and minimize the risk of injuries. The aim of this study is to leverage Human Activity Recognition (HAR) through deep learning methods for dimensionality reduction. The use of Adversarial AutoEncoders (AAEs) is explored to assess and visualize fatigue in a two-dimensional latent space, focusing on both semi-supervised and conditional approaches. By transforming c
41

Zabihi, Mariam, Seyed Mostafa Kia, Thomas Wolfers, et al. "Nonlinear latent representations of high-dimensional task-fMRI data: Unveiling cognitive and behavioral insights in heterogeneous spatial maps." PLOS ONE 19, no. 8 (2024): e0308329. http://dx.doi.org/10.1371/journal.pone.0308329.

Abstract:
Finding an interpretable and compact representation of complex neuroimaging data is extremely useful for understanding brain behavioral mapping and hence for explaining the biological underpinnings of mental disorders. However, hand-crafted representations, as well as linear transformations, may inadequately capture the considerable variability across individuals. Here, we implemented a data-driven approach using a three-dimensional autoencoder on two large-scale datasets. This approach provides a latent representation of high-dimensional task-fMRI data which can account for demographic charac
42

You, Cong-Zhe, Zhen-Qiu Shu, and Hong-Hui Fan. "Non-negative sparse Laplacian regularized latent multi-view subspace clustering." Journal of Algorithms & Computational Technology 15 (January 2021): 174830262110249. http://dx.doi.org/10.1177/17483026211024904.

Abstract:
Recently, in the area of artificial intelligence and machine learning, subspace clustering of multi-view data has become a research hotspot. The goal is to divide data samples from different sources into different groups. In this paper, we propose a new subspace clustering method for multi-view data, termed Non-negative Sparse Laplacian regularized Latent Multi-view Subspace Clustering (NSL2MSC). The proposed method learns the latent space representation of multi-view data samples and performs the data reconstruction in the latent space. The algorithm can cluster data in the lat
43

Bjerrum, Esben, and Boris Sattarov. "Improving Chemical Autoencoder Latent Space and Molecular De Novo Generation Diversity with Heteroencoders." Biomolecules 8, no. 4 (2018): 131. http://dx.doi.org/10.3390/biom8040131.

Abstract:
Chemical autoencoders are attractive models as they combine chemical space navigation with possibilities for de novo molecule generation in areas of interest. This enables them to produce focused chemical libraries around a single lead compound for employment early in a drug discovery project. Here, it is shown that the choice of chemical representation, such as strings from the simplified molecular-input line-entry system (SMILES), has a large influence on the properties of the latent space. It is further explored to what extent translating between different chemical representations influence
44

Hu, Dou, Lingwei Wei, Yaxin Liu, Wei Zhou, and Songlin Hu. "Structured Probabilistic Coding." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 12491–501. http://dx.doi.org/10.1609/aaai.v38i11.29142.

Abstract:
This paper presents a new supervised representation learning framework, namely structured probabilistic coding (SPC), to learn compact and informative representations from input related to the target task. SPC is an encoder-only probabilistic coding technology with a structured regularization from the target space. It can enhance the generalization ability of pre-trained language models for better language understanding. Specifically, our probabilistic coding simultaneously performs information encoding and task prediction in one module to more fully utilize the effective information from inpu
45

Suo, Chuanzhe, Zhe Liu, Lingfei Mo, and Yunhui Liu. "LPD-AE: Latent Space Representation of Large-Scale 3D Point Cloud." IEEE Access 8 (2020): 108402–17. http://dx.doi.org/10.1109/access.2020.2999727.

46

Nguyễn, Tuấn, Nguyen Hai Hao, Dang Le Dinh Trang, Nguyen Van Tuan, and Cao Van Loi. "Robust anomaly detection methods for contamination network data." Journal of Military Science and Technology, no. 79 (May 19, 2022): 41–51. http://dx.doi.org/10.54939/1859-1043.j.mst.79.2022.41-51.

Abstract:
Recently, latent representation models, such as the Shrink Autoencoder (SAE), have been demonstrated as robust feature representations for one-class learning-based network anomaly detection. In these studies, benchmark network datasets that are processed in laboratory environments to make them completely clean are often employed for constructing and evaluating such models. In real-world scenarios, however, we cannot guarantee collecting completely pure normal data for constructing latent representation models. Therefore, this work aims to investigate the characteristics of the latent representation of
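A sketch of the SAE idea referenced above, under the assumption that the shrink term is an L2 penalty pulling latent codes of normal traffic toward the origin and that anomalies are scored by latent norm; feature sizes and the weight `lam` are illustrative.

```python
# Shrink-autoencoder-style anomaly detection sketch for network features.
import torch
from torch import nn

enc = nn.Sequential(nn.Linear(40, 16), nn.ReLU(), nn.Linear(16, 4))
dec = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 40))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
x_normal, lam = torch.randn(512, 40), 0.1      # stand-in for normal traffic

for _ in range(300):
    opt.zero_grad()
    z = enc(x_normal)
    # reconstruction + shrink penalty: normal data ends up near the origin
    loss = nn.functional.mse_loss(dec(z), x_normal) + lam * z.pow(2).sum(dim=1).mean()
    loss.backward()
    opt.step()

def anomaly_score(x):                          # larger latent norm = more anomalous
    with torch.no_grad():
        return enc(x).norm(dim=1)
```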
47

Liao, Chenxi, Masataka Sawayama, and Bei Xiao. "Unsupervised learning reveals interpretable latent representations for translucency perception." PLOS Computational Biology 19, no. 2 (2023): e1010878. http://dx.doi.org/10.1371/journal.pcbi.1010878.

Abstract:
Humans constantly assess the appearance of materials to plan actions, such as stepping on icy roads without slipping. Visual inference of materials is important but challenging because a given material can appear dramatically different in various scenes. This problem especially stands out for translucent materials, whose appearance strongly depends on lighting, geometry, and viewpoint. Despite this, humans can still distinguish between different materials, and it remains unsolved how to systematically discover visual features pertinent to material inference from natural images. Here, we develo
48

Cahani, Ilda, and Marcus Stiemer. "Mathematical optimization and machine learning to support PCB topology identification." Advances in Radio Science 21 (December 1, 2023): 25–35. http://dx.doi.org/10.5194/ars-21-25-2023.

Abstract:
In this paper, we study an identification problem for schematics with different concurring topologies. A framework is proposed that is supported by both mathematical optimization and machine learning algorithms. Through the use of Python libraries, such as scikit-rf, which allows for the emulation of network analyzer measurements, and a physical microstrip line simulation on PCBs, data for training and testing the framework are provided. In addition to an individual treatment of the concurring topologies and subsequent comparison, a method is introduced to tackle the identification
49

Lychenko, N. M., and A. V. Sorokovaja. "Comparison of Effectiveness of Word Representations Methods in Vector Space for the Text Sentiment Analysis." Mathematical structures and modeling, no. 4 (2019): 97–110. http://dx.doi.org/10.24147/2222-8772.2019.4.97-110.

Abstract:
Word representations in vector space are used for various tasks of automated natural language processing. There are many methods of vector representation of words, including the neural network methods Word2Vec and GloVe, and the classical method of latent semantic analysis (LSA). This work studies the effectiveness of applying vector representations of words in an LSTM-based neural network classifier for sentiment analysis of Russian and English texts. The features of word representation methods in vector space (LSA, Word2Vec, GloVe) are described,
50

Reis, Eduardo, Mohamed Abdelaal, and Carsten Binnig. "Generalizable Data Cleaning of Tabular Data in Latent Space." Proceedings of the VLDB Endowment 17, no. 13 (2024): 4786–98. https://doi.org/10.14778/3704965.3704983.

Abstract:
In this paper, we present a new method for learned data cleaning. In contrast to existing methods, our method learns to clean data in the latent space. The main idea is that we (1) shape the latent space such that we know the area where clean data resides and (2) learn latent operators trained on error repair (Lopster) which shift erroneous data (e.g., table rows with noise, outliers, or missing values) in their latent representation back to a "clean" region, thus abstracting the complexities of the input domain. When formulating data cleaning as a simple shift operation in latent space, we ca
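A minimal sketch of the shift-in-latent-space idea, assuming a frozen autoencoder and paired (dirty, clean) rows; the single linear operator here is an illustrative simplification of the paper's learned latent operators (Lopster), not the authors' implementation.

```python
# Learn a latent shift that moves dirty rows back toward the clean region.
import torch
from torch import nn

d_in, d_z = 20, 6
enc = nn.Linear(d_in, d_z)                 # stands in for a pretrained, frozen encoder
dec = nn.Linear(d_z, d_in)                 # ...and decoder
for p in list(enc.parameters()) + list(dec.parameters()):
    p.requires_grad_(False)

shift = nn.Linear(d_z, d_z)                # latent repair operator
opt = torch.optim.Adam(shift.parameters(), lr=1e-3)
x_clean = torch.randn(256, d_in)
x_dirty = x_clean + 0.5 * torch.randn_like(x_clean)   # synthetic errors

for _ in range(300):
    opt.zero_grad()
    # push shifted dirty latents onto the latents of their clean counterparts
    loss = nn.functional.mse_loss(shift(enc(x_dirty)), enc(x_clean))
    loss.backward()
    opt.step()

repaired = dec(shift(enc(x_dirty)))        # decode from the "clean" latent region
```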