Table of contents

  1. Journal articles

A selection of scholarly literature on the topic "Generative sequence models"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Generative sequence models".

Next to each work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Generative sequence models"

1

Wang, Yongkang, Xuan Liu, Feng Huang, Zhankun Xiong, and Wen Zhang. "A Multi-Modal Contrastive Diffusion Model for Therapeutic Peptide Generation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (2024): 3–11. http://dx.doi.org/10.1609/aaai.v38i1.27749.

Abstract:
Therapeutic peptides represent a unique class of pharmaceutical agents crucial for the treatment of human diseases. Recently, deep generative models have exhibited remarkable potential for generating therapeutic peptides, but they utilize either sequence or structure information alone, which hinders generation performance. In this study, we propose a Multi-Modal Contrastive Diffusion model (MMCD), fusing both sequence and structure modalities in a diffusion framework to co-generate novel peptide sequences and structures. Specifically, MMCD constructs the sequence-modal and structure-modal …
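
For readers unfamiliar with the contrastive component of such multi-modal setups: matched sequence and structure embeddings of the same peptide are typically pulled together with an InfoNCE-style loss. The sketch below illustrates only that general idea; the function name, embedding dimensions, and temperature are assumptions for illustration, not details of MMCD.

```python
# Generic InfoNCE-style contrastive alignment between sequence and structure
# embeddings. Illustrative only -- not the MMCD architecture from the paper;
# batch size, embedding dimension, and temperature are placeholder assumptions.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(seq_emb, struct_emb, temperature=0.1):
    """seq_emb, struct_emb: (batch, dim) embeddings of the same peptides."""
    seq_emb = F.normalize(seq_emb, dim=-1)
    struct_emb = F.normalize(struct_emb, dim=-1)
    logits = seq_emb @ struct_emb.t() / temperature   # (batch, batch) similarities
    targets = torch.arange(seq_emb.size(0))           # matched pairs lie on the diagonal
    # Symmetric cross-entropy: sequence -> structure and structure -> sequence
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Example: 8 peptides embedded into a 64-dimensional space by two modality encoders
loss = contrastive_alignment_loss(torch.randn(8, 64), torch.randn(8, 64))
```
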
2

Wu, Zachary, Kadina E. Johnston, Frances H. Arnold, and Kevin K. Yang. "Protein sequence design with deep generative models." Current Opinion in Chemical Biology 65 (December 2021): 18–27. http://dx.doi.org/10.1016/j.cbpa.2021.04.004.

3

Akl, Hoda, Brooke Emison, Xiaochuan Zhao, Arup Mondal, Alberto Perez, and Purushottam D. Dixit. "GENERALIST: A latent space based generative model for protein sequence families." PLOS Computational Biology 19, no. 11 (2023): e1011655. http://dx.doi.org/10.1371/journal.pcbi.1011655.

Abstract:
Generative models of protein sequence families are an important tool in the repertoire of protein scientists and engineers alike. However, state-of-the-art generative approaches face inference, accuracy, and overfitting-related obstacles when modeling moderately sized to large proteins and/or protein families with low sequence coverage. Here, we present a simple-to-learn, tunable, and accurate generative model, GENERALIST: GENERAtive nonLInear tenSor-factorizaTion for protein sequences. GENERALIST accurately captures several high-order summary statistics of amino acid covariation. GENERALIST …
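
The "high-order summary statistics of amino acid covariation" mentioned above are usually the single-site and pairwise frequencies of a multiple sequence alignment, which sequences sampled from a generative model should reproduce. The NumPy sketch below computes these reference statistics for a toy alignment; it is a generic illustration, not part of the GENERALIST code, and the alphabet and helper names are assumptions.

```python
# Single-site and pairwise amino-acid statistics from an aligned sequence set.
# Toy illustration of the covariation statistics a generative sequence model
# is expected to reproduce; not the GENERALIST implementation.
import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWY-"          # 20 amino acids + gap (assumed alphabet)
A = len(ALPHABET)
IDX = {c: i for i, c in enumerate(ALPHABET)}

def one_hot(msa):
    """msa: list of equal-length aligned sequences -> (N, L, A) one-hot array."""
    N, L = len(msa), len(msa[0])
    X = np.zeros((N, L, A))
    for n, seq in enumerate(msa):
        for l, c in enumerate(seq):
            X[n, l, IDX[c]] = 1.0
    return X

def site_and_pair_frequencies(msa):
    X = one_hot(msa)                                      # (N, L, A)
    f_i = X.mean(axis=0)                                  # (L, A) single-site frequencies
    f_ij = np.einsum("nia,njb->ijab", X, X) / len(msa)    # (L, L, A, A) pair frequencies
    cov = f_ij - np.einsum("ia,jb->ijab", f_i, f_i)       # connected correlations (covariation)
    return f_i, f_ij, cov

f_i, f_ij, cov = site_and_pair_frequencies(["ACD-", "ACE-", "GCDF"])
```
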
4

Feinauer, Christoph, Barthelemy Meynard-Piganeau, and Carlo Lucibello. "Interpretable pairwise distillations for generative protein sequence models." PLOS Computational Biology 18, no. 6 (2022): e1010219. http://dx.doi.org/10.1371/journal.pcbi.1010219.

Abstract:
Many different types of generative models for protein sequences have been proposed in the literature. Their uses include the prediction of mutational effects, protein design, and the prediction of structural properties. Neural network (NN) architectures have shown great performance, commonly attributed to their capacity to extract non-trivial higher-order interactions from the data. In this work, we analyze two different NN models and assess how close they are to simple pairwise distributions, which have been used in the past for similar problems. We present an approach for extracting pairwise models …
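
The "simple pairwise distributions" that the neural-network models are compared against are, in this line of work, Potts-like models over a sequence $s = (s_1, \dots, s_L)$ with single-site fields and pairwise couplings. In generic notation (assumed here, not quoted from the paper):

\[
P(s) = \frac{1}{Z}\exp\!\Big(\sum_{i} h_i(s_i) + \sum_{i<j} J_{ij}(s_i, s_j)\Big),
\qquad
Z = \sum_{s'} \exp\!\Big(\sum_{i} h_i(s'_i) + \sum_{i<j} J_{ij}(s'_i, s'_j)\Big),
\]

where the fields $h_i$ and couplings $J_{ij}$ are fit to the sequence family; the "pairwise distillation" of the title then refers to extracting such a pairwise model from a more complex one.
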
5

Won, K. J., C. Saunders, and A. Prügel-Bennett. "Evolving Fisher Kernels for Biological Sequence Classification." Evolutionary Computation 21, no. 1 (2013): 83–105. http://dx.doi.org/10.1162/evco_a_00065.

Abstract:
Fisher kernels have been successfully applied to many problems in bioinformatics. However, their success depends on the quality of the generative model upon which they are built. For Fisher kernel techniques to be used on novel problems, a mechanism for creating accurate generative models is required. A novel framework is presented for automatically creating domain-specific generative models that can be used to produce Fisher kernels for support vector machines (SVMs) and other kernel methods. The framework enables the capture of prior knowledge and addresses the issue of domain-specific kernel …
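
For context, the Fisher-kernel construction referred to above (introduced by Jaakkola and Haussler, stated here in its standard textbook form rather than quoted from this paper) maps each sequence $x$ to the gradient of the generative model's log-likelihood and compares sequences through those gradients:

\[
U_x = \nabla_\theta \log P(x \mid \theta), \qquad
K(x, x') = U_x^{\top}\, \mathcal{I}^{-1}\, U_{x'}, \qquad
\mathcal{I} = \mathbb{E}_x\!\left[ U_x U_x^{\top} \right],
\]

where $\mathcal{I}$ is the Fisher information matrix, often approximated by the identity in practice. Because the kernel is built entirely from $P(x \mid \theta)$, its quality hinges on the generative model, which is exactly the dependence this paper's framework addresses.
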
6

Liu, Yitian, and Zhouhui Lian. "DeepCalliFont: Few-Shot Chinese Calligraphy Font Synthesis by Integrating Dual-Modality Generative Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (2024): 3774–82. http://dx.doi.org/10.1609/aaai.v38i4.28168.

Abstract:
Few-shot font generation, especially for Chinese calligraphy fonts, is a challenging and ongoing problem. With the help of prior knowledge that is mainly based on glyph consistency assumptions, some recently proposed methods can synthesize high-quality Chinese glyph images. However, glyphs in calligraphy font styles often do not meet these assumptions. To address this problem, we propose a novel model, DeepCalliFont, for few-shot Chinese calligraphy font synthesis by integrating dual-modality generative models. Specifically, the proposed model consists of image synthesis and sequence generation …
7

Safranchik, Esteban, Shiying Luo, and Stephen Bach. "Weakly Supervised Sequence Tagging from Noisy Rules." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 4 (2020): 5570–78. http://dx.doi.org/10.1609/aaai.v34i04.6009.

Abstract:
We propose a framework for training sequence tagging models with weak supervision consisting of multiple heuristic rules of unknown accuracy. In addition to supporting rules that vote on tags in the output sequence, we introduce a new type of weak supervision, called linking rules, that vote on how sequence elements should be grouped into spans with the same tag. These rules are an alternative to candidate span generators that require significantly more human effort. To estimate the accuracies of the rules and combine their conflicting outputs into training data, we introduce a new type of generative model …
8

Polceanu, Mihai, Julie Porteous, Alan Lindsay, and Marc Cavazza. "Narrative Plan Generation with Self-Supervised Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (2021): 5984–92. http://dx.doi.org/10.1609/aaai.v35i7.16747.

Abstract:
Narrative Generation has attracted significant interest as a novel application of Automated Planning techniques. However, the vast amount of narrative material available opens the way to the use of Deep Learning techniques. In this paper, we explore the feasibility of narrative generation through self-supervised learning, using sequence embedding techniques or auto-encoders to produce narrative sequences. We use datasets of well-formed plots generated by a narrative planning approach, using pre-existing, published narrative planning domains, to train generative models. Our experiments demonstrate …
9

Zhang, Zhiyuan, and Zhanshan Wang. "Multi-Objective Prediction of Integrated Energy System Using Generative Tractive Network." Mathematics 11, no. 20 (2023): 4350. http://dx.doi.org/10.3390/math11204350.

Abstract:
Accurate load forecasting can bring economic benefits and scheduling optimization. The complexity and uncertainty arising from the coupling of different energy sources in integrated energy systems pose challenges for simultaneously predicting multiple target load sequences. Existing data-driven methods for load forecasting in integrated energy systems use multi-task learning to address these challenges. When determining the input data for multi-task learning, existing research primarily relies on data correlation analysis and considers the influence of external environmental factors in terms of …
10

Hawkins-Hooker, Alex, Florence Depardieu, Sebastien Baur, Guillaume Couairon, Arthur Chen, and David Bikard. "Generating functional protein variants with variational autoencoders." PLOS Computational Biology 17, no. 2 (2021): e1008736. http://dx.doi.org/10.1371/journal.pcbi.1008736.

Abstract:
The vast expansion of protein sequence databases provides an opportunity for new protein design approaches which seek to learn the sequence-function relationship directly from natural sequence variation. Deep generative models trained on protein sequence data have been shown to learn biologically meaningful representations helpful for a variety of downstream tasks, but their potential for direct use in the design of novel proteins remains largely unexplored. Here we show that variational autoencoders trained on a dataset of almost 70,000 luciferase-like oxidoreductases can be used to generate novel …
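
As a rough illustration of the general technique (not the authors' architecture): a variational autoencoder over one-hot encoded, fixed-length protein sequences is trained with a reconstruction-plus-KL objective, and new candidate sequences are proposed by decoding samples from the prior. Everything below, including the sequence length, alphabet, layer sizes, and greedy decoding, is an assumption made for the sketch.

```python
# Generic sequence-VAE sketch: one-hot protein sequences in, reconstruction +
# KL objective, and sampling from the prior to propose new sequences.
# Illustrative assumptions throughout (lengths, sizes, architecture); this is
# not the model from the cited paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

L, A, H, Z = 360, 21, 256, 16     # sequence length, alphabet size, hidden, latent (assumed)

class SequenceVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(L * A, H), nn.ReLU())
        self.mu, self.logvar = nn.Linear(H, Z), nn.Linear(H, Z)
        self.dec = nn.Sequential(nn.Linear(Z, H), nn.ReLU(), nn.Linear(H, L * A))

    def forward(self, x):                     # x: (batch, L, A) one-hot
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
        logits = self.dec(z).view(-1, L, A)
        return logits, mu, logvar

def vae_loss(logits, x, mu, logvar):
    recon = F.cross_entropy(logits.transpose(1, 2), x.argmax(dim=-1), reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

@torch.no_grad()
def sample_sequences(model, n=5, alphabet="ACDEFGHIKLMNPQRSTVWY-"):
    z = torch.randn(n, Z)                     # draw from the standard-normal prior
    logits = model.dec(z).view(n, L, A)
    idx = logits.argmax(dim=-1)               # greedy decode per position
    return ["".join(alphabet[i] for i in row) for row in idx.tolist()]

model = SequenceVAE()
print(sample_sequences(model, n=2)[0][:60])   # untrained model: random-looking sequences
```
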