
Journal articles on the topic 'Latent space'

Consult the top 50 journal articles for your research on the topic 'Latent space.'


1. Stevens, Jesse, Daniel N. Wilke, and Isaac I. Setshedi. "Enhancing LS-PIE’s Optimal Latent Dimensional Identification: Latent Expansion and Latent Condensation." Mathematical and Computational Applications 29, no. 4 (2024): 65. http://dx.doi.org/10.3390/mca29040065.

Abstract: The Latent Space Perspicacity and Interpretation Enhancement (LS-PIE) framework enhances dimensionality reduction methods for linear latent variable models (LVMs). This paper extends LS-PIE by introducing an optimal latent discovery strategy to automate identifying optimal latent dimensions and projections based on user-defined metrics. The latent condensing (LCON) method clusters and condenses an extensive latent space into a compact form. A new approach, latent expansion (LEXP), incrementally increases latent dimensions using a linear LVM to find an optimal compact space. This study compares …

2. Baxter, William, and Ken-ichi Anjyo. "Latent Doodle Space." Computer Graphics Forum 25, no. 3 (2006): 477–85. http://dx.doi.org/10.1111/j.1467-8659.2006.00967.x.

3. Sunde, Emilie K. "From outer space to latent space." Philosophy of Photography 15, no. 1 (2024): 123–42. http://dx.doi.org/10.1386/pop_00096_1.

Abstract: Dall-E2 and Stable Diffusion promote their text-to-image models based on their level of (photo)realism. The use of photographic language is not superficial or accidental but indicative of a broader tendency in computer science and data practice. To nuance the general application of photorealism, I position the term alongside photographic realism and computational photorealism. To contextualize important nuances between these three terms, contemporary examples from astrophotography are analysed and reconstructed using text-to-image models. From the comparative analysis, computational photoreali…

4. Nguyen, Van Khoa, Yoann Boget, Frantzeska Lavda, and Alexandros Kalousis. "GLAD: Improving Latent Graph Generative Modeling with Simple Quantization." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 18 (2025): 19695–702. https://doi.org/10.1609/aaai.v39i18.34169.

Abstract: Learning graph generative models over latent spaces has received less attention compared to models that operate on the original data space and has so far demonstrated lacklustre performance. We present GLAD, a latent space graph generative model. Unlike most previous latent space graph generative models, GLAD operates on a discrete latent space that preserves to a significant extent the discrete nature of the graph structures, making no unnatural assumptions such as latent space continuity. We learn the prior of our discrete latent space by adapting diffusion bridges to its structure. By operati…

5. Sung, Jaemo, Z. Ghahramani, and Sung-Yang Bang. "Latent-Space Variational Bayes." IEEE Transactions on Pattern Analysis and Machine Intelligence 30, no. 12 (2008): 2236–42. http://dx.doi.org/10.1109/tpami.2008.157.

6. Sidky, Hythem, Wei Chen, and Andrew L. Ferguson. "Molecular latent space simulators." Chemical Science 11, no. 35 (2020): 9459–67. http://dx.doi.org/10.1039/d0sc03635h.

7. Patmawati, Andi Sunyoto, and Emha Taufiq Luthfi. "Augmentasi Data Menggunakan DCGAN pada Gambar Tanah." TEKNIMEDIA: Teknologi Informasi dan Multimedia 4, no. 1 (2023): 45–42. http://dx.doi.org/10.46764/teknimedia.v4i1.100.

Abstract: Several studies on soil type classification have been conducted. However, each of these studies used a different dataset, and only a small number of researchers have shared soil image datasets publicly. In addition, the published datasets are imbalanced across classes, which leads to poor model performance or overfitting, particularly for deep learning. With data augmentation, new data variations can be generated to address the problem of limited dataset size. One modern augmentation model is DCGAN, which is a …

8. Gat, Itai, Guy Lorberbom, Idan Schwartz, and Tamir Hazan. "Latent Space Explanation by Intervention." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (2022): 679–87. http://dx.doi.org/10.1609/aaai.v36i1.19948.

Abstract: The success of deep neural nets heavily relies on their ability to encode complex relations between their input and their output. While this property serves to fit the training data well, it also obscures the mechanism that drives prediction. This study aims to reveal hidden concepts by employing an intervention mechanism that shifts the predicted class based on discrete variational autoencoders. An explanatory model then visualizes the encoded information from any hidden layer and its corresponding intervened representation. By the assessment of differences between the original representation…

9. He, Ping, Xiao-Hua Xu, and Ling Chen. "Latent Attribute Space Tree Classifiers." Journal of Software 20, no. 7 (2010): 1735–45. http://dx.doi.org/10.3724/sp.j.1001.2009.03319.

10. Seth, Sohan, Iain Murray, and Christopher K. I. Williams. "Model Criticism in Latent Space." Bayesian Analysis 14, no. 3 (2019): 703–25. http://dx.doi.org/10.1214/18-ba1124.

11. Xue, Zhe, Guorong Li, Shuhui Wang, Weigang Zhang, and Qingming Huang. "Bilevel Multiview Latent Space Learning." IEEE Transactions on Circuits and Systems for Video Technology 28, no. 2 (2018): 327–41. http://dx.doi.org/10.1109/tcsvt.2016.2607842.

12. Toledo-Marín, J. Quetzalcóatl, and James A. Glazier. "Using deep LSD to build operators in GANs latent space with meaning in real space." PLOS ONE 18, no. 6 (2023): e0287736. http://dx.doi.org/10.1371/journal.pone.0287736.

Abstract: Generative models rely on the idea that data can be represented in terms of latent variables which are uncorrelated by definition. Lack of correlation among the latent variable support is important because it suggests that the latent-space manifold is simpler to understand and manipulate than the real-space representation. Many types of generative model are used in deep learning, e.g., variational autoencoders (VAEs) and generative adversarial networks (GANs). Based on the idea that the latent space behaves like a vector space (Radford et al., 2015), we ask whether we can expand the latent spac…

13. Zhang, Rongchao, Yu Huang, Yiwei Lou, et al. "Exploit Your Latents: Coarse-Grained Protein Backmapping with Latent Diffusion Models." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 1 (2025): 1111–19. https://doi.org/10.1609/aaai.v39i1.32098.

Abstract: Coarse-grained (CG) molecular dynamics of proteins is a preferred approach to studying large molecules on extended time scales by condensing the entire atomic model into a limited number of pseudo-atoms and preserving the thermodynamic properties of the system. However, the significantly increased efficiency impedes the analysis of substantial physicochemical information, since high-resolution atomic details are sacrificed to accelerate simulation. In this paper, we propose LatCPB, a generative approach based on diffusion that enables high-resolution backmapping of CG proteins. Specifically, o…

14. Liu, Yang, Eunice Jun, Qisheng Li, and Jeffrey Heer. "Latent Space Cartography: Visual Analysis of Vector Space Embeddings." Computer Graphics Forum 38, no. 3 (2019): 67–78. http://dx.doi.org/10.1111/cgf.13672.

15. Stevens, Jesse, Daniel N. Wilke, and Isaac I. Setshedi. "Latent Space Perspicacity and Interpretation Enhancement (LS-PIE) Framework." Mathematical and Computational Applications 29, no. 5 (2024): 85. http://dx.doi.org/10.3390/mca29050085.

Abstract: Linear latent variable models such as principal component analysis (PCA), independent component analysis (ICA), canonical correlation analysis (CCA), and factor analysis (FA) identify latent directions (or loadings) either ordered or unordered. These data are then projected onto the latent directions to obtain their projected representations (or scores). For example, PCA solvers usually rank principal directions by explaining the most variance to the least variance. In contrast, ICA solvers usually return independent directions unordered and often with single sources spread across multiple dir…

16. Zhu, Cheng, Guangzhe Zhao, Benwang Lin, Xueping Wang, and Feihu Yan. "RSCAN: Residual Spatial Cross-Attention Network for High-Fidelity Architectural Image Editing by Fusing Multi-Latent Spaces." Electronics 13, no. 12 (2024): 2327. http://dx.doi.org/10.3390/electronics13122327.

Abstract: Image editing technology has brought about revolutionary changes in the field of architectural design, garnering significant attention in both the computer and architectural industries. However, architectural image editing is a challenging task due to the complex hierarchical structure of architectural images, which complicates the learning process for the high-dimensional features of architectural images. Some methods invert the images into the latent space of a pre-trained generative adversarial network (GAN) model, completing the editing process by manipulating this latent space. However, t…

17. Fries, William D., Xiaolong He, and Youngsoo Choi. "LaSDI: Parametric Latent Space Dynamics Identification." Computer Methods in Applied Mechanics and Engineering 399 (September 2022): 115436. http://dx.doi.org/10.1016/j.cma.2022.115436.

18. Asai, Masataro, Hiroshi Kajino, Alex Fukunaga, and Christian Muise. "Classical Planning in Deep Latent Space." Journal of Artificial Intelligence Research 74 (August 9, 2022): 1599–686. http://dx.doi.org/10.1613/jair.1.13768.

Abstract: Current domain-independent, classical planners require symbolic models of the problem domain and instance as input, resulting in a knowledge acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, the knowledge is encoded in a subsymbolic representation which is incompatible with symbolic systems such as planners. We propose Latplan, an unsupervised architecture combining deep learning and classical planning. Given only an unlabeled set of image pairs showing a subset of transitions allowed in the environment (training inputs), Latplan learns …

19. Felton, Samuel, Pascal Brault, Elisa Fromont, and Eric Marchand. "Visual Servoing in Autoencoder Latent Space." IEEE Robotics and Automation Letters 7, no. 2 (2022): 3234–41. http://dx.doi.org/10.1109/lra.2022.3144490.

20. Huh, Myung-Hoe. "Representing variables in the latent space." Korean Journal of Applied Statistics 30, no. 4 (2017): 555–66. http://dx.doi.org/10.5351/kjas.2017.30.4.555.

21. Yuan, Jirui, Ke Gao, Pengfei Zhu, and Karen Egiazarian. "Multi-view predictive latent space learning." Pattern Recognition Letters 132 (April 2020): 56–61. http://dx.doi.org/10.1016/j.patrec.2018.06.022.

22. Hong, Fu-Xing, Xiao-Lin Zheng, and Chao-Chao Chen. "Latent space regularization for recommender systems." Information Sciences 360 (September 2016): 202–16. http://dx.doi.org/10.1016/j.ins.2016.04.042.

23. Mehta, S. K., R. J. Cohrs, D. H. Gilden, et al. "Latent virus reactivation: Space to earth." Brain, Behavior, and Immunity 24 (August 2010): S13. http://dx.doi.org/10.1016/j.bbi.2010.07.042.

24. Takeda, Toshiki, Taisuke Kobayashi, and Kenji Sugimoto. "Model Predictive Control in Latent Space." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2020 (2020): 1P1-H04. http://dx.doi.org/10.1299/jsmermd.2020.1p1-h04.

25. Sewell, Daniel K., and Yuguo Chen. "Latent Space Models for Dynamic Networks." Journal of the American Statistical Association 110, no. 512 (2015): 1646–57. http://dx.doi.org/10.1080/01621459.2014.988214.

26. Kirchoff, Kathryn E., Travis Maxfield, Alexander Tropsha, and Shawn M. Gomez. "SALSA: Semantically-Aware Latent Space Autoencoder." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (2024): 13211–19. http://dx.doi.org/10.1609/aaai.v38i12.29221.

Abstract: In deep learning for drug discovery, molecular representations are often based on sequences, known as SMILES, which allow for straightforward implementation of natural language processing methodologies, one being the sequence-to-sequence autoencoder. However, we observe that training an autoencoder solely on SMILES is insufficient to learn molecular representations that are semantically meaningful, where semantics are specified by the structural (graph-to-graph) similarities between molecules. We demonstrate by example that SMILES-based autoencoders may map structurally similar molecules to di…

27. Tran, April, Xiaolong He, Daniel A. Messenger, Youngsoo Choi, and David M. Bortz. "Weak-form latent space dynamics identification." Computer Methods in Applied Mechanics and Engineering 427 (July 2024): 116998. http://dx.doi.org/10.1016/j.cma.2024.116998.

28. Lee, Sangwon, Taro Yaoyama, Masaru Kitahara, and Tatsuya Itoi. "Latent space-based stochastic model updating." Mechanical Systems and Signal Processing 235 (July 2025): 112841. https://doi.org/10.1016/j.ymssp.2025.112841.

29. Cao, Zhiyi, Shaozhang Niu, and Jiwei Zhang. "Weakly Supervised GAN for Image-to-Image Translation in the Wild." Mathematical Problems in Engineering 2020 (March 9, 2020): 1–8. http://dx.doi.org/10.1155/2020/6216048.

Abstract: Generative Adversarial Networks (GANs) have achieved significant success in unsupervised image-to-image translation between given categories (e.g., zebras to horses). Previous GAN models assume that the shared latent space between different categories will be captured from the given categories. Unfortunately, besides the well-designed datasets from given categories, many examples come from different wild categories (e.g., cats to dogs) holding special shapes and sizes (short for adversarial examples), so the shared latent space is troublesome to capture, and it will cause the collapse of thes…

30. Shrivastava, Aditya Divyakant, and Douglas B. Kell. "FragNet, a Contrastive Learning-Based Transformer Model for Clustering, Interpreting, Visualizing, and Navigating Chemical Space." Molecules 26, no. 7 (2021): 2065. http://dx.doi.org/10.3390/molecules26072065.

Abstract: The question of molecular similarity is core in cheminformatics and is usually assessed via a pairwise comparison based on vectors of properties or molecular fingerprints. We recently exploited variational autoencoders to embed 6M molecules in a chemical space, such that their (Euclidean) distance within the latent space so formed could be assessed within the framework of the entire molecular set. However, the standard objective function used did not seek to manipulate the latent space so as to cluster the molecules based on any perceived similarity. Using a set of some 160,000 molecules of bi…

31. Dai, Jin, and Zhifang Zheng. "Disentangling Representation of Variational Autoencoders Based on Cloud Models." 電腦學刊 34, no. 6 (2023): 001–14. http://dx.doi.org/10.53106/199115992023123406001.

Abstract: Variational autoencoder (VAE) has the problem of an uninterpretable data generation process, because the features contained in the VAE latent space are coupled with each other and no mapping from the latent space to the semantic space is established. However, most existing algorithms cannot understand the data distribution features in the latent space semantically. In this paper, we propose a cloud model-based method for disentangling semantic features in VAE latent space by adding support vector machines (SVM) to feature transformations of latent variables, and we propose to use the clo…

32. Mukherjee, Sudipto, Himanshu Asnani, Eugene Lin, and Sreeram Kannan. "ClusterGAN: Latent Space Clustering in Generative Adversarial Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4610–17. http://dx.doi.org/10.1609/aaai.v33i01.33014610.

Abstract: Generative Adversarial Networks (GANs) have obtained remarkable success in many unsupervised learning tasks and, unarguably, clustering is an important unsupervised learning problem. While one can potentially exploit the latent-space back-projection in GANs to cluster, we demonstrate that the cluster structure is not retained in the GAN latent space. In this paper, we propose ClusterGAN as a new mechanism for clustering using GANs. By sampling latent variables from a mixture of one-hot encoded variables and continuous latent variables, coupled with an inverse network (which projects the data to…

33. Connor, Marissa, and Christopher Rozell. "Representing Closed Transformation Paths in Encoded Network Latent Space." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3666–75. http://dx.doi.org/10.1609/aaai.v34i04.5775.

Abstract: Deep generative networks have been widely used for learning mappings from a low-dimensional latent space to a high-dimensional data space. In many cases, data transformations are defined by linear paths in this latent space. However, the Euclidean structure of the latent space may be a poor match for the underlying latent structure in the data. In this work, we incorporate a generative manifold model into the latent space of an autoencoder in order to learn the low-dimensional manifold structure from the data and adapt the latent space to accommodate this structure. In particular, we focus on…

34. Simhal, Anish K., Rena Elkin, Ross S. Firestone, Jung Hun Oh, and Joseph O. Deasy. "Abstract A031: Unsupervised graph-based visualization of variational autoencoder latent spaces reveals hidden multiple myeloma subtypes." Clinical Cancer Research 31, no. 13_Supplement (2025): A031. https://doi.org/10.1158/1557-3265.aimachine-a031.

Abstract: Latent space representations learned through variational autoencoders (VAEs) offer a powerful, unsupervised means of capturing nonlinear structure in high-dimensional oncology data. The latent embedding spaces often encode information that differs from traditional bioinformatics methods such as t-SNE or UMAP. However, a persistent challenge remains: how to meaningfully visualize and interpret these latent variables. Common dimensionality reduction techniques like UMAP and t-SNE, while effective, can obscure graph-theoretic relationships that may underlie important biological patterns.

35. Avrahami, Omri, Ohad Fried, and Dani Lischinski. "Blended Latent Diffusion." ACM Transactions on Graphics 42, no. 4 (2023): 1–11. http://dx.doi.org/10.1145/3592450.

Abstract: The tremendous progress in neural image generation, coupled with the emergence of seemingly omnipotent vision-language models, has finally enabled text-based interfaces for creating and editing images. Handling generic images requires a diverse underlying generative model, hence the latest works utilize diffusion models, which were shown to surpass GANs in terms of diversity. One major drawback of diffusion models, however, is their relatively slow inference time. In this paper, we present an accelerated solution to the task of local text-driven editing of generic images, where the desired edit…

36. Chen, Man-Sheng, Ling Huang, Chang-Dong Wang, and Dong Huang. "Multi-View Clustering in Latent Embedding Space." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3513–20. http://dx.doi.org/10.1609/aaai.v34i04.5756.

Abstract: Previous multi-view clustering algorithms mostly partition the multi-view data in their original feature space, the efficacy of which heavily and implicitly relies on the quality of the original feature presentation. In light of this, this paper proposes a novel approach termed Multi-view Clustering in Latent Embedding Space (MCLES), which is able to cluster the multi-view data in a learned latent embedding space while simultaneously learning the global structure and the cluster indicator matrix in a unified optimization framework. Specifically, in our framework, a latent embedding representat…

37. Valtazanos, Aris, D. K. Arvind, and Subramanian Ramamoorthy. "Latent space segmentation for mobile gait analysis." ACM Transactions on Embedded Computing Systems 12, no. 4 (2013): 1–22. http://dx.doi.org/10.1145/2485984.2485989.

38. Hoff, Peter D., Adrian E. Raftery, and Mark S. Handcock. "Latent Space Approaches to Social Network Analysis." Journal of the American Statistical Association 97, no. 460 (2002): 1090–98. http://dx.doi.org/10.1198/016214502388618906.

39. Jiang, Bin, Fangqiang Xu, Yun Huang, Chao Yang, Wei Huang, and Jun Xia. "Adaptive Adversarial Latent Space for Novelty Detection." IEEE Access 8 (2020): 205088–98. http://dx.doi.org/10.1109/access.2020.3037346.

40. D'Arcy, Sing. "Not Yokohama, Not Memphis: Activating Latent Space." Architectural Theory Review 1, no. 1 (1996): 135–40. http://dx.doi.org/10.1080/13264829609478271.

41. Park, Jun Sur Richard, Siu Wun Cheung, Youngsoo Choi, and Yeonjong Shin. "tLaSDI: Thermodynamics-informed latent space dynamics identification." Computer Methods in Applied Mechanics and Engineering 429 (September 2024): 117144. http://dx.doi.org/10.1016/j.cma.2024.117144.

42. Austin, Andrea, Crystal Linkletter, and Zhijin Wu. "Covariate-defined latent space random effects model." Social Networks 35, no. 3 (2013): 338–46. http://dx.doi.org/10.1016/j.socnet.2013.03.005.

43. Fulton, Lawson, Vismay Modi, David Duvenaud, David I. W. Levin, and Alec Jacobson. "Latent-space Dynamics for Reduced Deformable Simulation." Computer Graphics Forum 38, no. 2 (2019): 379–91. http://dx.doi.org/10.1111/cgf.13645.

44. Sewell, Daniel K. "Latent space models for network perception data." Network Science 7, no. 2 (2019): 160–79. http://dx.doi.org/10.1017/nws.2019.1.

Abstract: Social networks, wherein the edges represent nonbehavioral relations such as friendship, power, and influence, can be difficult to measure and model. A powerful tool to address this is cognitive social structures (Krackhardt, D. (1987). Cognitive social structures. Social Networks, 9(2), 109–134.), where the perception of the entire network is elicited from each actor. We provide a formal statistical framework to analyze informants’ perceptions of the network, implementing a latent space network model that can estimate, e.g., homophilic effects while accounting for informant error. Our…

45. Salter-Townshend, Michael, and Tyler H. McCormick. "Latent space models for multiview network data." Annals of Applied Statistics 11, no. 3 (2017): 1217–44. http://dx.doi.org/10.1214/16-aoas955.

46. Yu, Yunlong, Zhong Ji, Jichang Guo, and Zhongfei Zhang. "Zero-Shot Learning via Latent Space Encoding." IEEE Transactions on Cybernetics 49, no. 10 (2019): 3755–66. http://dx.doi.org/10.1109/tcyb.2018.2850750.

47. Nitzan, Yotam, Amit Bermano, Yangyan Li, and Daniel Cohen-Or. "Face identity disentanglement via latent space mapping." ACM Transactions on Graphics 39, no. 6 (2020): 1–14. http://dx.doi.org/10.1145/3414685.3417826.

48. Elstner, Jannes, Raoul Schönhof, Steffen Tauber, and Marco F. Huber. "Optimizing CAD Models with Latent Space Manipulation." Procedia CIRP 119 (2023): 650–55. http://dx.doi.org/10.1016/j.procir.2023.03.117.

49. Liang, Haobo, Yan Yang, and Jiajie Jing. "Latent space diffusion model for image dehazing." Applied Soft Computing 180 (August 2025): 113322. https://doi.org/10.1016/j.asoc.2025.113322.

50. Roman, Alexander, Roy T. Forestano, Konstantin T. Matchev, Katia Matcheva, and Eyup B. Unlu. "Oracle-Preserving Latent Flows." Symmetry 15, no. 7 (2023): 1352. http://dx.doi.org/10.3390/sym15071352.

Abstract: A fundamental task in data science is the discovery, description, and identification of any symmetries present in the data. We developed a deep learning methodology for the simultaneous discovery of multiple non-trivial continuous symmetries across an entire labeled dataset. The symmetry transformations and the corresponding generators are modeled with fully connected neural networks trained with a specially constructed loss function, ensuring the desired symmetry properties. The two new elements in this work are the use of a reduced-dimensionality latent space and the generalization to invari…