To view the other types of publications on this topic, follow the link: Generative Coherence.

Journal articles on the topic "Generative Coherence"

Consult the top 50 journal articles for research on the topic "Generative Coherence."

Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Kim, Jong Woo, Marc Messerschmidt, and William S. Graves. "Enhancement of Partially Coherent Diffractive Images Using Generative Adversarial Network." AI 3, no. 2 (2022): 274–84. http://dx.doi.org/10.3390/ai3020017.

Annotation:
We present a deep learning-based generative model for the enhancement of partially coherent diffractive images. In lensless coherent diffractive imaging, a highly coherent X-ray illumination is required to image an object at high resolution. Non-ideal experimental conditions result in a partially coherent X-ray illumination, lead to imperfections of coherent diffractive images recorded on a detector, and ultimately limit the capability of lensless coherent diffractive imaging. The previous approaches, relying on the coherence property of illumination, require preliminary experiments or expensi
2

Bounoua, Mustapha, Giulio Franzese, and Pietro Michiardi. "Multi-Modal Latent Diffusion." Entropy 26, no. 4 (2024): 320. http://dx.doi.org/10.3390/e26040320.

Annotation:
Multimodal datasets are ubiquitous in modern applications, and multimodal Variational Autoencoders are a popular family of models that aim to learn a joint representation of different modalities. However, existing approaches suffer from a coherence–quality tradeoff in which models with good generation quality lack generative coherence across modalities and vice versa. In this paper, we discuss the limitations underlying the unsatisfactory performance of existing methods in order to motivate the need for a different approach. We propose a novel method that uses a set of independently trained an
3

Severn, Stephen. "A Knot, A Network, A Thing, A World: Composition as Generative Meaning-making in Still Life Photography." tba: Journal of Art, Media, and Visual Culture 3, no. 1 (2021): 107–18. http://dx.doi.org/10.5206/tba.v3i1.13934.

Annotation:
Elements move towards, cohere, and separate. It is in this ontogenetic and generative coherence – the composition – that meaning is created. This article positions still life photography as a non-representational, ontogenetic, and generative coherence of thought, matter, and meaning: what Tim Ingold describes as a knot, what Donna Haraway describes as a network, what Martin Heidegger, Bill Brown, and Elizabeth Grosz describe as a thing, and what Kathleen Stewart describes as a world. The photographic images in A Knot, A Network, A Thing, A World present an alternative to photography as represe
4

K, Tresha, Kavya, PB Medhaa, and T. Pragathi. "Automatic Video Generator." International Journal of Innovative Science and Research Technology (IJISRT) 9, no. 12 (2024): 104–8. https://doi.org/10.5281/zenodo.14470731.

Annotation:
Text-to-video (T2V) generation is an emerging field in artificial intelligence, gaining traction with advances in deep learning models like generative adversarial networks (GANs), diffusion models, and hybrid architectures. This paper provides a comprehensive survey of recent T2V methodologies, exploring models such as GAN-based frameworks, VEGAN-CLIP, IRC-GAN, Sora OpenAI, and CogVideoX, which aim to transform textual descriptions into coherent video content. These models face challenges in maintaining semantic coherence, temporal consistency, and realistic motion across generated frames. We
5

Corchado, Juan M., Sebastian López F., Juan M. Núñez V., Raul Garcia S., and Pablo Chamoso. "Generative Artificial Intelligence: Fundamentals." ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal 12, no. 1 (2023): e31704. http://dx.doi.org/10.14201/adcaij.31704.

Annotation:
Generative language models have witnessed substantial traction, notably with the introduction of refined models aimed at more coherent user-AI interactions—principally conversational models. The epitome of this public attention has arguably been the refinement of the GPT-3 model into ChatGPT and its subsequent integration with auxiliary capabilities such as search features in Microsoft Bing. Despite voluminous prior research devoted to its developmental trajectory, the model’s performance, and applicability to a myriad of quotidian tasks remained nebulous and task specific. In terms of technol
6

Ning, Zihao. "Face Image Generation for Anime Characters based on Generative Adversarial Network." Theoretical and Natural Science 87, no. 1 (2025): 166–72. https://doi.org/10.54254/2753-8818/2025.20348.

Annotation:
With the increasing demand for digital art, animation, and games, facial generation for anime characters has attracted growing research interest in recent years, which aims to build models to automatically generate unique and high-quality character images. Thanks to the rapid advancement of deep learning techniques, particularly generative adversarial networks, GAN-based image generation methods have continuously achieved breakthroughs in generation effectiveness and speed. Focusing on generating realistic anime face images, this paper proposes an anime character face image generation model ba
7

Feng, Shaoxiong, Hongshen Chen, Kan Li, and Dawei Yin. "Posterior-GAN: Towards Informative and Coherent Response Generation with Posterior Generative Adversarial Network." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 7708–15. http://dx.doi.org/10.1609/aaai.v34i05.6273.

Annotation:
Neural conversational models learn to generate responses by taking into account the dialog history. These models are typically optimized over the query-response pairs with a maximum likelihood estimation objective. However, the query-response tuples are naturally loosely coupled, and there exist multiple responses that can respond to a given query, which makes learning burdensome for the conversational model. Besides, the general dull response problem is even worsened when the model is confronted with meaningless response training instances. Intuitively, a high-quality response not only responds t
8

Kamberaj, Valton, Arbana Kadriu, and Nuhi Besimi. "From Dataset to Melody: Enhancing Music Composition with Computational Models." Interdisciplinary Journal of Research and Development 12, no. 2 (2025): 64. https://doi.org/10.56345/ijrdv12n2007.

Annotation:
This study explores the generation of melodic lines in the ethno-fusion genre using computational models, leveraging a unique MIDI-based dataset. The dataset, designed with two primary dimensions—solos and chords—aims to ensure precise, genre-specific outputs. The core techniques employed are Markov chains and generative grammar, chosen for their suitability in generating sequences aligned with rhythmic, pitch, and structural characteristics of the dataset. Generative grammar focuses on chord generation, isolating it from solos to enhance harmonic coherence, while Markov chains facilitate the
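
For orientation only, here is a minimal sketch (not the authors' code) of the first-order Markov-chain step described above: pitch-to-pitch transition counts are learned from MIDI solo lines and then sampled to produce a new melody. The toy pitch data and function names are illustrative assumptions.

import random
from collections import defaultdict

def train_markov(solos):
    # Count pitch-to-pitch transitions over all solo lines.
    table = defaultdict(lambda: defaultdict(int))
    for seq in solos:
        for a, b in zip(seq, seq[1:]):
            table[a][b] += 1
    return table

def generate_melody(table, start, length=16):
    # Walk the transition table, sampling each next pitch by its observed frequency.
    melody = [start]
    for _ in range(length - 1):
        nxt = table.get(melody[-1])
        if not nxt:
            break
        pitches, counts = zip(*nxt.items())
        melody.append(random.choices(pitches, weights=counts)[0])
    return melody

# Toy stand-in for the MIDI solo dimension of the dataset (MIDI pitch numbers).
solos = [[60, 62, 64, 62, 60, 67], [60, 64, 67, 64, 62, 60]]
print(generate_melody(train_markov(solos), start=60))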
9

Ritchhart, Ron. "Generative Topics: Building a Curriculum around Big Ideas." Teaching Children Mathematics 5, no. 8 (1999): 462–68. http://dx.doi.org/10.5951/tcm.5.8.0462.

Annotation:
Data from the Third International Mathematics and Science Study (TIMSS) point out “a comparative lack of focus and coherence in the American mathematics curriculum” and an absence of “meaningful connections between the big ideas of mathematics” (Schmidt 1997). These findings, along with those reported in other international comparisons (e.g., Stevenson and Stigler 1992), suggest that we must carefully examine the actual content of both our written and our implemented mathematics curriculum to be sure that it gives students the focus and coherence they need to develop mathematical understanding
10

Theodorou, Brandon, Shrusti Jain, Cao Xiao, and Jimeng Sun. "ConSequence: Synthesizing Logically Constrained Sequences for Electronic Health Record Generation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (2024): 15355–63. http://dx.doi.org/10.1609/aaai.v38i14.29460.

Annotation:
Generative models can produce synthetic patient records for analytical tasks when real data is unavailable or limited. However, current methods struggle with adhering to domain-specific knowledge and removing invalid data. We present ConSequence, an effective approach to integrating domain knowledge into sequential generative neural network outputs. Our rule-based formulation includes temporal aggregation and antecedent evaluation modules, ensured by an efficient matrix multiplication formulation, to satisfy hard and soft logical constraints across time steps. Existing constraint methods often
11

Guo, Xiangji, Fei Xie, Tingkai Yang, Ming Ming, and Tao Chen. "Physics-Informed Generative Adversarial Networks for Laser Speckle Noise Suppression." Sensors 25, no. 13 (2025): 3842. https://doi.org/10.3390/s25133842.

Annotation:
In high-resolution microscopic imaging, using shorter-wavelength ultraviolet (UV) lasers as illumination sources is a common approach. However, the high spatial coherence of such lasers, combined with the surface roughness of the sample, often introduces disturbances in the received optical field, resulting in strong speckle noise. This paper presents a novel speckle noise suppression method specifically designed for coherent laser-based microscopic imaging. The proposed approach integrates statistical physical modeling and image gradient discrepancy into the training of a Cycle Generative Adv
12

Kim, Youn-Sung. "A Study on How to Secure Competitiveness through Case Analysis of Media and Contents Using Generative AI." International Journal of Religion 5, no. 12 (2024): 1795–803. https://doi.org/10.61707/9r4wdc33.

Annotation:
Owing to the ChatGPT craze, generative AI has become a topic of intense interest across industries worldwide. In particular, since OpenAI announced Sora, which turns text into video, on February 15, 2024, the media and content industries at home and abroad have expressed both expectations and a sense of crisis. By learning huge volumes of hyper-scale data, artificial intelligence technology that actively generates results according to the specific needs of generative AI users is reaching beyond creation, a realm that could be called the human domain [1][2][3]. Although, unlike image-ge
13

Krishna, Guntamukkala Gopi. "Generative AI." International Journal of Advanced Engineering and Nano Technology 10, no. 8 (2023): 1–3. http://dx.doi.org/10.35940/ijaent.g0474.0810823.

Annotation:
Recent advancements in generative artificial intelligence (AI) have made it possible for machines to independently produce a variety of creative content. In the context of producing creative content, this essay examines the developments, difficulties, and ethical issues relating to generative AI. It looks into how generative models, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), can produce realistic artwork like music, literature, and visuals. However, it is frequently discovered that GAN training is extremely unstable and frequently experiences non-converge
14

Guntamukkala, Gopi Krishna. "Generative AI." International Journal of Advanced Engineering and Nano Technology (IJAENT) 10, no. 8 (2023): 1–3. https://doi.org/10.35940/ijaent.G0474.0810823.

Annotation:
Recent advancements in generative artificial intelligence (AI) have made it possible for machines to independently produce a variety of creative content. In the context of producing creative content, this essay examines the developments, difficulties, and ethical issues relating to generative AI. It looks into how generative models, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), can produce realistic artwork like music, literature, and visuals. However, it is frequently discovered that GAN training is extremely unstable and frequentl
15

Zhou, Yuchen. "Music Generation Based on Bidirectional GRU Model." Highlights in Science, Engineering and Technology 85 (March 13, 2024): 684–90. http://dx.doi.org/10.54097/t2szjs78.

Annotation:
Lately, substantial advancements in the realm of deep learning have given rise to new approaches for autonomously generating music. This study has devised a generative framework intended to produce musical melodies. This framework capitalizes on bidirectional gated recurrent units (GRU) as its foundational architecture. To impart knowledge to the model, a collection of classical piano compositions in MIDI format has been employed as the training dataset. One implements a stacked architecture of bidirectional GRU layers to capture long-term musical patterns. The addition of dropout regularizati
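
As a rough illustration of the architecture this abstract describes (stacked bidirectional GRU layers with dropout, trained to predict the next note), a minimal Keras model might look like the following. The vocabulary size, layer widths, and random training data are assumptions, not values from the paper.

import numpy as np
from tensorflow.keras import layers, models

VOCAB = 128      # assumed MIDI pitch vocabulary
SEQ_LEN = 64     # assumed length of the input note window

model = models.Sequential([
    layers.Embedding(VOCAB, 64),
    layers.Bidirectional(layers.GRU(128, return_sequences=True)),
    layers.Dropout(0.3),   # dropout regularization, as the abstract mentions
    layers.Bidirectional(layers.GRU(128)),
    layers.Dropout(0.3),
    layers.Dense(VOCAB, activation="softmax"),  # distribution over the next note
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Random sequences standing in for tokenized classical-piano MIDI.
x = np.random.randint(0, VOCAB, size=(256, SEQ_LEN))
y = np.random.randint(0, VOCAB, size=(256,))
model.fit(x, y, epochs=1, batch_size=32, verbose=0)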
16

Vadlakonda, Ganesh. "Advancing Generative Artificial Intelligence (AI) Through Multimodal Integration and Contextual Learning." FMDB Transactions on Sustainable Computing Systems 2, no. 3 (2024): 131–39. https://doi.org/10.69888/ftscs.2024.000260.

Annotation:
A great amount of progress has been made in generative artificial intelligence, which developments in neural network topologies and large-scale pretraining have driven. Existing models, on the other hand, frequently fail to meet expectations when they are charged with integrating numerous data modalities or comprehending complicated contextual information. Through the use of multimodal integration and contextual learning, this study investigates novel methods for the advancement of generative artificial intelligence. We provide an all-encompassing framework that integrates textual, visual, and
17

Celard, Pedro, Adrián Seara Vieira, José Manuel Sorribes-Fdez, Eva Lorenzo Iglesias, and Lourdes Borrajo. "Temporal Development GAN (TD-GAN): Crafting More Accurate Image Sequences of Biological Development." Information 15, no. 1 (2023): 12. http://dx.doi.org/10.3390/info15010012.

Annotation:
In this study, we propose a novel Temporal Development Generative Adversarial Network (TD-GAN) for the generation and analysis of videos, with a particular focus on biological and medical applications. Inspired by Progressive Growing GAN (PG-GAN) and Temporal GAN (T-GAN), our approach employs multiple discriminators to analyze generated videos at different resolutions and approaches. A new Temporal Discriminator (TD) that evaluates the developmental coherence of video content is introduced, ensuring that the generated image sequences follow a realistic order of stages. The proposed TD-GAN is e
18

Seyed Aghaei, S. M. H., A. Rashno, and S. Fadaei. "Classification of Optical Coherence Tomography Images Using Generative Adversarial Networks." International Journal of Engineering 38, no. 2 (2025): 389–99. http://dx.doi.org/10.5829/ije.2025.38.02b.13.

19

Muniyandi, Venkatesh. "RAG Architecture Design Patterns Balancing Retrieval Depth and Generative Coherence." International Journal of Computer Applications 187, no. 12 (2025): 34–38. https://doi.org/10.5120/ijca2025925142.

20

S, Manimala. "GenNarrate: AI-Powered Story Synthesis with Visual and Audio Outputs." International Journal for Research in Applied Science and Engineering Technology 13, no. 5 (2025): 2352–58. https://doi.org/10.22214/ijraset.2025.70567.

Annotation:
The emergence of generative artificial intelligence has redefined the boundaries of digital content creation, particularly in the domain of computational storytelling. This paper presents GenNarrate, a modular, multi-modal generative AI system engineered to synthesize coherent narratives augmented with corresponding visual and auditory elements. The architecture leverages advanced machine learning models, including LLaMA2 for text generation, DALL·E for image synthesis, and a combination of Google Text-to-Speech (GTTS) and AudioLDM for expressive audio narration and sound design. Gen
21

Liu, Hui, Ritika Bansal, and Jinglin Liang. "Generative AI in Artistic Enterprises." International Journal on Semantic Web and Information Systems 21, no. 1 (2025): 1–24. https://doi.org/10.4018/ijswis.367444.

Annotation:
Generative AI has revolutionized the field of digital art by enabling automated image creation that mimics human artistry. Traditional approaches often lack thematic coherence and struggle to capture complex styles in artistic domains. In order to overcome these constraints, we present a Generative Adversarial Network (GAN) model integrated with a Semantic Web architecture, therefore providing automated painting creation for creative businesses. Our strategy guarantees thematic and stylistic congruence by using Semantic Web methods to include contextual information and GANs to create high-qua
22

Toktarbekov, M., and A. Sarsembayev. "An Overview of Deep Learning Techniques for Generating Musical Compositions." Scientific-Discussion, no. 75 (April 10, 2023): 27–34. https://doi.org/10.5281/zenodo.7808643.

Annotation:
Music generation is increasingly recognized as an attractive field of study in Deep Learning. This paper will focus on a review of the articles that deal with automatic music generation using deep learning methods. A number of well-known architectures, such as Recurrent Neural Network-Long Short-Term Memory (RNN-LSTM), Generative Adversarial Networks (GANs), Variational Auto-Encoders (VAEs). One aspect of Deep Learning that is currently the most popular is music. Artificial intelligence installation is a method of generating digital music using algorithms, neural networks and other performance
23

R, Geetha Rajakumari, Karthika Renuka D, and Ashok Kumar L. "Enhancing ASR Accuracy and Coherence across Indian Languages with Wav2Vec2 and GPT-2." ICTACT Journal on Data Science and Machine Learning 6, no. 2 (2025): 761–64. https://doi.org/10.21917/ijdsml.2025.0156.

Annotation:
This paper presents a comprehensive framework for automatic speech recognition (ASR) and text refinement that leverages advanced deep learning models to improve transcription accuracy and contextual coherence across multiple languages, including Tamil, Kannada, Telugu, Malayalam, and English. The framework integrates three primary models: Wav2Vec2 for ASR, Sentence Transformer for semantic retrieval, and GPT-2 for text generation. Initially, the Wav2Vec2 model is employed to convert audio inputs into text, achieving a Word Error Rate (WER) of 8% and a Character Error Rate (CER) of 5%. This mod
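
A compressed sketch of such a two-stage pipeline, assuming off-the-shelf Hugging Face checkpoints (the model names, audio file, and prompt format below are placeholders, not the setup used in the paper): Wav2Vec2 produces a raw transcript, and GPT-2 then generates a more fluent continuation of it.

from transformers import pipeline

# Stage 1: speech-to-text with a Wav2Vec2 checkpoint (placeholder model name).
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

# Stage 2: text refinement / continuation with GPT-2.
lm = pipeline("text-generation", model="gpt2")

transcript = asr("sample.wav")["text"]          # raw ASR hypothesis
prompt = f"Transcript: {transcript}\nCorrected transcript:"
refined = lm(prompt, max_new_tokens=40)[0]["generated_text"]
print(refined)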
24

Murakami, Riki, and Basabi Chakraborty. "Investigating the Efficient Use of Word Embedding with Neural-Topic Models for Interpretable Topics from Short Texts." Sensors 22, no. 3 (2022): 852. http://dx.doi.org/10.3390/s22030852.

Annotation:
With the rapid proliferation of social networking sites (SNS), automatic topic extraction from various text messages posted on SNS are becoming an important source of information for understanding current social trends or needs. Latent Dirichlet Allocation (LDA), a probabilistic generative model, is one of the popular topic models in the area of Natural Language Processing (NLP) and has been widely used in information retrieval, topic extraction, and document analysis. Unlike long texts from formal documents, messages on SNS are generally short. Traditional topic models such as LDA or pLSA (pr
25

Hu, Yijia. "Performance exploration of Generative Pre-trained Transformer-2 for lyrics generation." Applied and Computational Engineering 48, no. 1 (2024): 53–60. http://dx.doi.org/10.54254/2755-2721/48/20241154.

Annotation:
In recent years, the field of Natural Language Processing (NLP) has undergone a revolution, with text generation playing a key role in this transformation. This shift is not limited to technological areas but has also seamlessly penetrated creative domains, with a prime example being the generation of song lyrics. To be truly effective, generative models, like Generative Pre-trained Transformer (GPT)-2, require fine-tuning as a crucial step. This paper, utilizing the robustness of the widely-referenced Kaggle dataset titled "Song Lyrics", carefully explores the impacts of modulating three key
26

K, Ramesh, B. Muni Lavanya, B. Rajesh Kumar, Narayan Krishan Vyas, and Mohammed Saleh Al Ansari. "Generative Adversarial Networks for Image Synthesis and Style Transfer in Videos." ICTACT Journal on Image and Video Processing 14, no. 2 (2023): 3116–21. http://dx.doi.org/10.21917/ijivp.2023.0443.

Annotation:
In computer vision and artistic expression, the synthesis of visually compelling images and the transfer of artistic styles onto videos have gained significant attention. This research addresses the challenges in achieving realistic image synthesis and style transfer in the dynamic context of videos. Existing methods often struggle to maintain temporal coherence and fail to capture intricate details, prompting the need for innovative approaches. The conventional methods for image synthesis and style transfer in videos encounter difficulties in preserving the natural flow of motion and consiste
27

Tang, Lin. "The Lack of Other Minds as the Lack of Coherence in Human–AI Interactions." Philosophies 10, no. 4 (2025): 77. https://doi.org/10.3390/philosophies10040077.

Annotation:
As artificial intelligence (AI) undergoes rapid evolutionary advancements, two enduring queries in the philosophy of language and linguistics persist: the problem of other minds and coherence. This can be further explored by the following question: is there a fundamental difference between human-AI interactions and human–human interactions? More precisely, does an AI partner’s ability to understand discursive coherence sufficiently approximate that of the human mind? This study frames the problem of other minds as a problem in discourse analysis, positing that linguistic exchange inherently co
28

Awolesi Abolanle Ogunboyo. "Neuro-Symbolic Generative AI for Explainable Reasoning." International Journal of Science and Research Archive 16, no. 1 (2025): 121–25. https://doi.org/10.30574/ijsra.2025.16.1.2019.

Annotation:
The integration of neural and symbolic systems termed neuro-symbolic AI presents a compelling path toward explainable reasoning in Artificial Intelligence (AI). While deep learning models excel at pattern recognition and generative capabilities, their opaque decision-making process has raised concerns about transparency, interpretability, and trustworthiness. This research investigates the convergence of generative AI and neuro-symbolic architectures to enhance explainable reasoning. Employing a mixed-methods methodology grounded in empirical evaluation, knowledge representation, and symbolic
29

Chen, Baiyuan. "Exploiting Topological Priors for Boosting Point Cloud Generation." Transactions on Computer Science and Intelligent Systems Research 5 (August 12, 2024): 277–82. http://dx.doi.org/10.62051/6csenv07.

Annotation:
This paper presents an innovative enhancement to the Sphere as Prior Generative Adversarial Network (SP-GAN) model, a state-of-the-art GAN designed for point cloud generation. A novel method is introduced for point cloud generation that elevates the structural integrity and overall quality of the generated point clouds by incorporating topological priors into the training process of the generator. Specifically, this work utilizes the K-means algorithm to segment a point cloud from the repository into clusters and extract centroids, which are then used as priors in the generation process of the
30

Ge, Nan, Yixi Liu, Xiang Xu, Xuedian Zhang, and Minshan Jiang. "A Fast Generative Adversarial Network for High-Fidelity Optical Coherence Tomography Image Synthesis." Photonics 9, no. 12 (2022): 944. http://dx.doi.org/10.3390/photonics9120944.

Annotation:
(1) Background: We present a fast generative adversarial network (GAN) for generating high-fidelity optical coherence tomography (OCT) images. (2) Methods: We propose a novel Fourier-FastGAN (FOF-GAN) to produce OCT images. To improve the image quality of the synthetic images, a new discriminator with a Fourier attention block (FAB) and a new generator with fast Fourier transform (FFT) processes were redesigned. (3) Results: We synthesized normal, diabetic macular edema (DME), and drusen images from the Kermany dataset. When training with 2800 images with 50,000 epochs, our model used only 5 h
31

Conato, Fabio, and Ilaria Spasari. "META-MODULE. Contemporary modularity as generative tool for Smart Architecture." IOP Conference Series: Earth and Environmental Science 1402, no. 1 (2024): 012055. http://dx.doi.org/10.1088/1755-1315/1402/1/012055.

Annotation:
The complexity of the contemporary era challenges traditional methods and processes in architecture, exacerbated by diverse specializations, global production, and recent cultural, social, and technological shifts. Consequently, contemporary architecture responds to societal demands through innovative methods and tools. To maintain coherence in design objectives, explicit information on addressing contemporary concerns is essential. Systematizing informational flows becomes the language of Smart Architecture, utilizing informative patterns and design variations to address these challe
32

Ulatowska, Hanna K., Tricia Santos, Diane Walsh, Jilliane Lagus, Mitchell Pruett, and Sara Aguilar. "Stories of Trauma and Reconciliation of World War II Veterans." Innovation in Aging 3, Supplement_1 (2019): S768. http://dx.doi.org/10.1093/geroni/igz038.2822.

Annotation:
The present qualitative study examined the reconciliation of trauma experienced by 55 World War II veterans (22 aeronautical crew members, 27 non-pilot combatants, and 6 veterans with dementia) demonstrated via testimonial language within a semi-structured interview. The research team considered themes of language coherence as they relate to veteran experiences of trauma and reconciliation. Trauma literature documents the importance of personal narratives in both identifying and reconciling traumatic experiences. This study examined morals and values of participants, traumatic experie
33

Verma, Babita, Ani Thomas, and Rohit Kumar Verma. "Innovative abstractive Hindi text summarization model incorporating Bi-LSTM classifier, optimizer and generative AI." Intelligent Decision Technologies 19, no. 2 (2025): 585–93. https://doi.org/10.1177/18724981241289752.

Annotation:
This research explores the development and optimization of an advanced abstractive text summarization model specifically tailored for the Hindi language. The proposed model leverages the Cetacean Predator Optimization-Based Sentence Rank BiLSTM model (CPO-BiLSTM) as its core architecture, enhancing the model's ability to capture intricate dependencies in both forward and backward directions. BiLSTMs are versatile architectures that can be applied to various tasks utilising generative AI, especially ones that incorporate sequential data like speech, text, or music. The study focuses on optimizi
34

Sordo, Zineb, Eric Chagnon, Zixi Hu, et al. "Synthetic Scientific Image Generation with VAE, GAN, and Diffusion Model Architectures." Journal of Imaging 11, no. 8 (2025): 252. https://doi.org/10.3390/jimaging11080252.

Annotation:
Generative AI (genAI) has emerged as a powerful tool for synthesizing diverse and complex image data, offering new possibilities for scientific imaging applications. This review presents a comprehensive comparative analysis of leading generative architectures, ranging from Variational Autoencoders (VAEs) to Generative Adversarial Networks (GANs) on through to Diffusion Models, in the context of scientific image synthesis. We examine each model’s foundational principles, recent architectural advancements, and practical trade-offs. Our evaluation, conducted on domain-specific datasets including
35

Katyayini, P. P. S., Y. Shakina Jenissy, U. Nandini, T. Geethanjali Sree, and Y. Evanjali. "ZumbleBot - An Unified Generative AI Platform for Effortless Multimedia Creation." Industrial Engineering Journal 54, no. 03 (2025): 65–72. https://doi.org/10.36893/iej.2025.v54i3.007.

Annotation:
The rapid advancements in generative AI have led to the development of dedicated models for content, image, music, and video creation. However, customers are often faced with difficulties in switching between devices to meet multi-modal content generation. ZumbleBot bridges this gap by combining content, image, music, and video creation into one, integrated platform. Using cutting-edge Huggingface Pre-trained AI models like Qwen for content, Stable Diffusion for images, MusicGen for music, and text-to-video models, ZumbleBot uncouples creative workflows and enhances openness. The platform
36

He, Xingxin, Leyuan Fang, Hossein Rabbani, Xiangdong Chen, and Zhimin Liu. "Retinal optical coherence tomography image classification with label smoothing generative adversarial network." Neurocomputing 405 (September 2020): 37–47. http://dx.doi.org/10.1016/j.neucom.2020.04.044.

37

Guo, Anjing, Leyuan Fang, Min Qi, and Shutao Li. "Unsupervised Denoising of Optical Coherence Tomography Images With Nonlocal-Generative Adversarial Network." IEEE Transactions on Instrumentation and Measurement 70 (2021): 1–12. http://dx.doi.org/10.1109/tim.2020.3017036.

38

Trabassi, Dante, Stefano Filippo Castiglia, Fabiano Bini, et al. "Optimizing Rare Disease Gait Classification through Data Balancing and Generative AI: Insights from Hereditary Cerebellar Ataxia." Sensors 24, no. 11 (2024): 3613. http://dx.doi.org/10.3390/s24113613.

Annotation:
The interpretability of gait analysis studies in people with rare diseases, such as those with primary hereditary cerebellar ataxia (pwCA), is frequently limited by the small sample sizes and unbalanced datasets. The purpose of this study was to assess the effectiveness of data balancing and generative artificial intelligence (AI) algorithms in generating synthetic data reflecting the actual gait abnormalities of pwCA. Gait data of 30 pwCA (age: 51.6 ± 12.2 years; 13 females, 17 males) and 100 healthy subjects (age: 57.1 ± 10.4; 60 females, 40 males) were collected at the lumbar level with an
39

Zhang, Longwen, Qixuan Zhang, Haoran Jiang, et al. "BANG: Dividing 3D Assets via Generative Exploded Dynamics." ACM Transactions on Graphics 44, no. 4 (2025): 1–21. https://doi.org/10.1145/3730840.

Annotation:
3D creation has always been a unique human strength, driven by our ability to deconstruct and reassemble objects using our eyes, mind and hand. However, current 3D design tools struggle to replicate this natural process, requiring considerable artistic expertise and manual labor. This paper introduces BANG, a novel generative approach that bridges 3D generation and reasoning, allowing for intuitive and flexible part-level decomposition of 3D objects. At the heart of BANG is "Generative Exploded Dynamics", which creates a smooth sequence of exploded states for an input geometry, progressively s
40

Shani, Itay. "Cosmopsychism, Coherence, and World-Affirming Monism." Monist 105, no. 1 (2022): 6–24. http://dx.doi.org/10.1093/monist/onab020.

Annotation:
This paper explores cosmopsychism’s explanatory aspirations from a programmatic perspective. The bulk of the text consists of an argument in favor of the conclusion that cosmopsychism suffers from no insurmountable individuation problem (IND). I argue that the widespread tendency to view IND as a mirror-image of micropsychism’s combination problem (CP) is mistaken. In particular, what renders CP insolvable, namely, the commitment to the coupling of phenomenal constitution with phenomenal inclusion, is, from the standpoint of cosmopsychism, an entirely nonmandatory assumption. I procee
41

Torres, Nicolás. "CodeContrast: A Contrastive Learning Approach for Generating Coherent Programming Exercises." Education Sciences 15, no. 1 (2025): 80. https://doi.org/10.3390/educsci15010080.

Annotation:
Generating high-quality programming exercises with well-aligned problem descriptions, test cases, and code solutions is crucial for computer science education. However, current methods often lack coherence among these components, reducing their educational value. We present CodeContrast, a novel generative model that uses contrastive learning to map programming problems, test cases, and solutions into a shared feature space. By minimizing the distance between matched components and maximizing it for non-matched ones, CodeContrast learns the intricate relationships necessary to generate coheren
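
To make the contrastive objective concrete, here is an illustrative InfoNCE-style loss (a generic sketch, not the CodeContrast implementation) that pulls matched problem/solution embeddings together in a shared space and pushes mismatched pairs apart; the embedding size and temperature are arbitrary assumptions.

import torch
import torch.nn.functional as F

def contrastive_loss(problem_emb, solution_emb, temperature=0.07):
    # Cosine similarities between every problem and every solution in the batch.
    p = F.normalize(problem_emb, dim=-1)
    s = F.normalize(solution_emb, dim=-1)
    logits = p @ s.t() / temperature
    # The i-th problem matches the i-th solution; all other pairs act as negatives.
    targets = torch.arange(p.size(0))
    return F.cross_entropy(logits, targets)

# Toy batch of 4 matched pairs with 128-dimensional embeddings.
problems = torch.randn(4, 128)
solutions = torch.randn(4, 128)
print(contrastive_loss(problems, solutions).item())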
42

Khushboo Patel. "Semantic-aware Mapping for Text-to-Image Synthesis." Journal of Information Systems Engineering and Management 10, no. 2 (2025): 746–54. https://doi.org/10.52783/jisem.v10i2.3135.

Annotation:
This study explores the fast-progressing domain of Text-to-Image (T2I) synthesis, which aims to bridge the gap between language and visual comprehension. The main emphasis is on the crucial significance of Generative Adversarial Networks (GANs), which have transformed the process of image formation, with a specific emphasis on the impact of conditional GANs. The conditional models enable controlled image generation, and their influence on the production of high-quality images is extensively analyzed. We propose a novel method of generating semantically aware embeddings from the input text desc
43

Graña, Manuel, Leire Ozaeta, and Darya Chyzhyk. "Resting State Effective Connectivity Allows Auditory Hallucination Discrimination." International Journal of Neural Systems 27, no. 05 (2017): 1750019. http://dx.doi.org/10.1142/s0129065717500198.

Annotation:
Hallucinations are elusive phenomena that have been associated with psychotic behavior, but that have a high prevalence in healthy population. Some generative mechanisms of Auditory Hallucinations (AH) have been proposed in the literature, but so far empirical evidence is scarce. The most widely accepted generative mechanism hypothesis nowadays consists in the faulty workings of a network of brain areas including the emotional control, the audio and language processing, and the inhibition and self-attribution of the signals in the auditive cortex. In this paper, we consider two methods to anal
44

Kande, Nilesh A., Rupali Dakhane, Ambedkar Dukkipati, and Phaneendra Kumar Yalavarthy. "SiameseGAN: A Generative Model for Denoising of Spectral Domain Optical Coherence Tomography Images." IEEE Transactions on Medical Imaging 40, no. 1 (2021): 180–92. http://dx.doi.org/10.1109/tmi.2020.3024097.

45

Guan, Jian, Zhuoer Feng, Yamei Chen, et al. "LOT: A Story-Centric Benchmark for Evaluating Chinese Long Text Understanding and Generation." Transactions of the Association for Computational Linguistics 10 (2022): 434–51. http://dx.doi.org/10.1162/tacl_a_00469.

Annotation:
Standard multi-task benchmarks are essential for developing pretraining models that can generalize to various downstream tasks. Existing benchmarks for natural language processing (NLP) usually focus only on understanding or generating short texts. However, long text modeling requires many distinct abilities in contrast to short texts, such as the modeling of long-range discourse and commonsense relations, and the coherence and controllability of generation. The lack of standardized benchmarks makes it difficult to assess these abilities of a model and fairly compare different models,
46

Tyagi, Shourya, Olukayode Ayodele Oki, Vineet Verma, et al. "Novel Advance Image Caption Generation Utilizing Vision Transformer and Generative Adversarial Networks." Computers 13, no. 12 (2024): 305. http://dx.doi.org/10.3390/computers13120305.

Annotation:
In this paper, we propose a novel method for producing image captions through the utilization of Generative Adversarial Networks (GANs) and Vision Transformers (ViTs) using our proposed Image Captioning Utilizing Transformer and GAN (ICTGAN) model. Here we use the efficient representation learning of the ViTs to improve the realistic image production of the GAN. Using textual features from the LSTM-based language model, our proposed model combines salient information extracted from images using ViTs. This merging of features is made possible using a self-attention mechanism, which enables the
47

Hu, Zhaojun. "Research on the Application of Large Language Models in Interactive Game Storyline Generation." Applied and Computational Engineering 174, no. 1 (2025): 234–41. https://doi.org/10.54254/2755-2721/2025.po25184.

Annotation:
This study focuses on the application of Large Language Models (LLMs) in interactive game storyline generation, and systematically analyzes their performance and key challenges in the dimensions of personalized response and reasoning logic. Through literature review and case studies, the study reveals that LLMs have obvious deficiencies in long-term character memory, story logic coherence and causal reasoning, which become the main obstacles to narrative consistency and player immersion experience. Based on this, the thesis proposes several optimization paths, including memory enhancement, log
48

Wang, Yijie, Mingjian Hong, Luwen Huangfu, and Sheng Huang. "Data Distribution Distilled Generative Model for Generalized Zero-Shot Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (2024): 5695–703. http://dx.doi.org/10.1609/aaai.v38i6.28381.

Annotation:
In the realm of Zero-Shot Learning (ZSL), we address biases in Generalized Zero-Shot Learning (GZSL) models, which favor seen data. To counter this, we introduce an end-to-end generative GZSL framework called D3GZSL. This framework respects seen and synthesized unseen data as in-distribution and out-of-distribution data, respectively, for a more balanced model. D3GZSL comprises two core modules: in-distribution dual space distillation (ID2SD) and out-of-distribution batch distillation (O2DBD). ID2SD aligns teacher-student outcomes in embedding and label spaces, enhancing learning coherence. O2
49

Fanzeres, Leonardo A., and Climent Nadeu. "Sound-to-Imagination: An Exploratory Study on Cross-Modal Translation Using Diverse Audiovisual Data." Applied Sciences 13, no. 19 (2023): 10833. http://dx.doi.org/10.3390/app131910833.

Annotation:
The motivation of our research is to explore the possibilities of automatic sound-to-image (S2I) translation for enabling a human receiver to visually infer occurrences of sound-related events. We expect the computer to ‘imagine’ scenes from captured sounds, generating original images that depict the sound-emitting sources. Previous studies on similar topics opted for simplified approaches using data with low content diversity and/or supervision/self-supervision for training. In contrast, our approach involves performing S2I translation using thousands of distinct and unknown scenes, using sou
50

Abdeljaber, Hikmat A. M., Sultan Ahmad, Abdullah Alharbi, and Sudhir Kumar. "XAI-Based Reinforcement Learning Approach for Text Summarization of Social IoT-Based Content." Security and Communication Networks 2022 (August 4, 2022): 1–12. http://dx.doi.org/10.1155/2022/7516832.

Annotation:
The purpose of automatic text summarising technology is to condense a given text while properly portraying the main information in the original text in a summary. The present generative text summarising approaches, on the other hand, restructure the original language and introduce new words when constructing summary sentences, which can easily lead to incoherence and poor readability. This research proposes a XAI (explainable artificial intelligence)-based Reinforcement Learning-based Text Summarization of Social IoT-Based Content using Reinforcement Learning. Furthermore, standard supervised t