Academic literature on the topic 'Text Generation Using Neural Networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Text Generation Using Neural Networks.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Text Generation Using Neural Networks"

1

Chakravarty, Aniv, and Jagadish S. Kallimani. "Unsupervised Multi-Document Abstractive Summarization Using Recursive Neural Network with Attention Mechanism." Journal of Computational and Theoretical Nanoscience 17, no. 9 (2020): 3867–72. http://dx.doi.org/10.1166/jctn.2020.8976.

Full text
Abstract:
Text summarization is an active field of research whose goal is to provide short, meaningful gists of large amounts of text. Extractive text summarization methods, in which text is extracted from the documents to build summaries, have been studied extensively. Multi-document collections vary in format, domain, and topic. With recent advances in technology and the use of neural networks for text generation, interest in research on abstractive text summarization has increased significantly. Graph-based methods that handle semantic information have shown significant results. Given a set of English text documents, we use an abstractive method and predicate-argument structures to retrieve the necessary text information and pass it through a neural network for text generation. Recurrent neural networks are a subtype of recursive neural networks that try to predict the next sequence based on the current state while considering information from previous states. The use of neural networks also allows the generation of summaries for long sentences. This paper implements a semantics-based filtering approach using a similarity matrix while keeping all stop-words. Similarity is calculated using semantic concepts and Jiang–Conrath similarity, and a recurrent neural network with an attention mechanism is used to generate the summary. ROUGE scores are used to measure accuracy, precision, and recall.
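The ROUGE evaluation mentioned in this abstract comes down to n-gram overlap between a generated summary and a reference. A minimal unigram (ROUGE-1) sketch in plain Python follows; it is an illustration of the metric, not the paper's implementation:

```python
from collections import Counter

def rouge1(candidate, reference):
    """Unigram precision, recall and F1 between two token lists."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum((cand & ref).values())  # clipped unigram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f1

# Every candidate word appears in the reference, so precision is 1.0;
# the reference has one extra word, so recall is 3/4.
p, r, f = rouge1("the cat sat".split(), "the cat sat down".split())
```

Full ROUGE also covers bigrams (ROUGE-2) and longest common subsequences (ROUGE-L), but the counting pattern is the same.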
2

Zatsepina, Aleksandra, Galina Bardina, and Polina Shindina. "Eco-sensitive site assessment: Integrating neural networks for environmentally conscious pre-project planning." E3S Web of Conferences 614 (2025): 05003. https://doi.org/10.1051/e3sconf/202561405003.

Full text
Abstract:
The use of neural networks in the pre-design phase of architectural work is becoming increasingly prevalent among designers. This article presents a method of textual and graphical analysis of construction sites using neural networks. Using the example of two projects that won an architectural competition, the following are considered: analysis of the qualitative characteristics of the territory using Autodesk Forma software, cultural context analysis using ChatGPT, image generation using MidJourney, and 3D-model generation using the neural network Meshy.ai. With regard to ChatGPT, the "risks" method is presented as a means of achieving optimal results. The possibility of factual errors in ChatGPT text generations is noted.
3

Chary, Podakanti Satyajith. "Text Generation: Using Markov Model & LSTM Networks to Generate Realistic Text." International Journal for Research in Applied Science and Engineering Technology 11, no. 12 (2023): 1323–27. http://dx.doi.org/10.22214/ijraset.2023.57601.

Full text
Abstract:
Text generation plays a crucial role in various natural language processing applications, ranging from creative writing to chatbots. This research delves into the realm of text generation by exploring and comparing two distinct techniques: Markov models and Long Short-Term Memory (LSTM) networks. The study focuses on their ability to generate realistic text within specific styles or genres, providing valuable insights into their respective strengths and limitations. Markov models, rooted in probability theory, and LSTM networks, a type of recurrent neural network, represent contrasting approaches to text generation. The research employs these techniques on a carefully curated dataset, evaluating their performance based on coherence, style, and contextual relevance. The comparison aims to elucidate the nuanced differences in how these models capture dependencies within the data and their effectiveness in simulating authentic linguistic patterns. Through rigorous experimentation, this research investigates the intricacies of both Markov models and LSTM networks, shedding light on their individual contributions to the task of text generation. The examination extends beyond mere algorithmic efficacy, considering the impact of these techniques on the quality and diversity of the generated text. Additionally, the study explores the influence of hyperparameters, such as temperature in the context of LSTM networks, on the output's richness and variability.
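The word-level Markov model this paper compares against the LSTM can be sketched in a few lines. The corpus below is a toy stand-in, not the paper's curated dataset:

```python
import random
from collections import defaultdict

def build_chain(tokens):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for cur, nxt in zip(tokens, tokens[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain, sampling one observed successor per step."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and chain.get(out[-1]):
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran".split()
chain = build_chain(corpus)
text = generate(chain, "the", 6)
```

Because successors are sampled in proportion to their corpus frequency, repeated transitions ("the" followed by "cat" twice here) are proportionally more likely, which is exactly the first-order dependency the paper contrasts with the longer-range memory of an LSTM.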
4

Li, Zixiang, Qinfeng Wu, and Longjie Zhong. "Controllable text generation based on varied frameworks - rhyming lyrics generation technology." ITM Web of Conferences 73 (2025): 02012. https://doi.org/10.1051/itmconf/20257302012.

Full text
Abstract:
Controllable Text Generation, as a cutting-edge technology in Natural Language Processing (NLP), has significantly improved the quality of text generation. Users can customize the generated content by setting specific attributes, formats, and emotional characteristics, thereby conserving resources. However, despite notable progress in this field, several challenges remain, such as limited text diversity under multiple conditions and information disconnection during long-text generation. In light of this, the paper focuses on controllable text generation technology within a Chinese context, particularly emphasizing the key element of rhyming. The aim is to investigate an effective method for generating rhyming lyrics and poetry. By comparing the text generation performance of Recurrent Neural Networks (RNN), Bidirectional Recurrent Neural Networks (Bi-RNN), and Transformer frameworks, and evaluating the results using n-gram metrics, this study attempts to reveal which architecture is better suited to the specific controllable generation requirement of rhyming. This provides theoretical support and technical guidance for automatically creating Chinese poetry and lyrics.
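The abstract does not say which n-gram metrics were used; distinct-n, a common diversity measure over generated text, is one plausible choice and is sketched here as an assumption:

```python
def distinct_n(tokens, n):
    """Fraction of n-grams that are unique; higher means more varied output."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

# Toy generated lyric: 5 unigrams (3 unique) and 4 bigrams (3 unique).
lyric = "la la la di da".split()
d1 = distinct_n(lyric, 1)
d2 = distinct_n(lyric, 2)
```

A degenerate generator that loops on one rhyme word scores near zero, which is why diversity metrics like this are paired with rhyme checks when evaluating lyric generation.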
5

Kiseleva, Irina A., Yulia V. Borisova, and Anna Yu. Maevskaya. "Analysing the feasibility of using a neural network for generating English language assignments." Perspektivy Nauki i Obrazovania, no. 1 (February 28, 2025): 319–35. https://doi.org/10.32744/pse.2025.1.21.

Full text
Abstract:
Introduction. Today neural networks are used in various fields, improving the quality of work and significantly reducing the time spent on routine tasks. The application of neural networks in education is being actively explored, but research on task generation for classroom teaching of foreign-language students is currently scant, which accounts for the relevance of this article. The aim of the work is to analyse the possibility of using neural networks as a tool for developing tasks for current control of students' English-language knowledge. Methods and materials. The results were analysed using qualitative analysis, error analysis, and performance analysis. The YandexGPT 2 neural network's responses to queries for creating current-control assignments were analysed, in particular tasks for consolidating the studied vocabulary and test papers consisting of several parts. The survey involving 1st-year students was conducted at the Empress Catherine II Saint Petersburg Mining University. Results. Based on the results of the analysis, the queries were modified to obtain an answer that would require minimal adjustment by the teacher. Although it was not possible to create queries that would consistently yield a good result, the neural network's answers were successfully used for current control, making it possible to quickly prepare several versions of test papers aimed at consolidating the studied vocabulary and grammar. Thus, the survey explored the possibility of using neural networks for devising traditional English-language tasks that are still in demand in training sessions. Conclusion. The research showed that using neural networks to create materials for current control significantly reduces development time and allows the teacher to adapt the materials for each group of students, which improves their quality. The findings can be useful for the development of other types of teaching materials.
6

Al Aziz, Md Momin, Tanbir Ahmed, Tasnia Faequa, Xiaoqian Jiang, Yiyu Yao, and Noman Mohammed. "Differentially Private Medical Texts Generation Using Generative Neural Networks." ACM Transactions on Computing for Healthcare 3, no. 1 (2022): 1–27. http://dx.doi.org/10.1145/3469035.

Full text
Abstract:
Technological advancements in data science have offered us affordable storage and efficient algorithms to query a large volume of data. Our health records are a significant part of this data, which is pivotal for healthcare providers and can be utilized for our well-being. The clinical note in electronic health records is one such category, collecting a patient's complete medical information during different timesteps of patient care in the form of free text. Thus, these unstructured textual notes contain events from a patient's admission to discharge, which can prove significant for future medical decisions. However, since these texts also contain sensitive information about the patient and the attending medical professionals, such notes cannot be shared publicly. This privacy issue has thwarted timely discoveries on this plethora of untapped information. Therefore, in this work, we intend to generate synthetic medical texts from a private or sanitized (de-identified) clinical text corpus and analyze their utility rigorously across different metrics and levels. Experimental results promote the applicability of our generated data, as it achieves more than 80% accuracy in different pragmatic classification problems and matches (or outperforms) the original text data.
7

Aarthy, G., Shenoy K. Vibha, H. Thejashree, and S. Nithin. "Image based Spam Detection using Recurrent Neural Networks (RNN)." Recent Innovations in Wireless Network Security 5, no. 2 (2023): 20–31. https://doi.org/10.5281/zenodo.8141409.

Full text
Abstract:
Image spam initially arose as a way of bypassing text-based spam filters. It is widely used to advertise products, mislead individuals into providing personal information, or transmit hazardous viruses. Image spam is harder to detect than text-based spam. Image-based encryption methods can be used to create image spam that is even more difficult to detect than what is commonly seen in practice. Image spam has evolved over time and may now overcome various kinds of classic anti-spam methods. Spammers can utilise pictures that contain only text, sliced images, and randomly created images. Text-only images were used in the first generation of image spam. Such images are practically empty, containing only pure text. Such text can be retrieved using optical character recognition (OCR) and then processed using normal text-based filters.
8

Pitchikala, Naga Sai Krishna Mohan, Saisuhas Kodakondla, and Debargha Ghosh. "Word Generation Using Recurrent Neural Network." Journal of Scientific and Engineering Research 7, no. 1 (2020): 309–15. https://doi.org/10.5281/zenodo.14058872.

Full text
Abstract:
Computers have influenced human life to a great extent. Natural Language Processing is a field of computer science that helps humans and machines exchange information efficiently with little human effort. Text generation techniques can be applied to improve language models, machine translation, summarization, and captioning. In this project we train a Recurrent Neural Network so that it can generate new words related to the words that are fed to it.
9

Murad, Aylu. "Using Neural Network Principles for Learning the English Language." Eurasian Science Review 1, no. 3 (2025): 2262–74. https://doi.org/10.63034/esr-380.

Full text
Abstract:
The scientific project on the topic "Using Neural Network Principles for Learning the English Language" is dedicated to research on applying neural networks in educational technologies for studying English. The aim of the project is to analyze and develop methods that utilize neural networks to improve the effectiveness of English language learning. The project begins with a theoretical overview of neural networks, including their basic principles of operation and learning algorithms. Key components of neural networks, such as neurons, layers, and activation functions, as well as learning methods like the backpropagation algorithm, are discussed. Special attention is given to existing technologies and applications that use neural networks for language learning, such as machine translation, automatic error correction, and text generation. In conclusion, the project formulates key findings and recommendations for improving existing systems and developing new solutions. The project highlights the potential of neural networks in education and opens up prospects for further research in the application of artificial intelligence to foreign language learning.
10

Iparraguirre-Villanueva, Orlando, Victor Guevara-Ponce, Daniel Ruiz-Alvarado, et al. "Text prediction recurrent neural networks using long short-term memory-dropout." Indonesian Journal of Electrical Engineering and Computer Science 29, no. 3 (2023): 1758–68. https://doi.org/10.11591/ijeecs.v29.i3.pp1758-1768.

Full text
Abstract:
Long short-term memory (LSTM) is a type of recurrent neural network (RNN) whose sequence-based models are used in text generation and prediction tasks, question answering, and classification systems due to their ability to learn long-term dependencies. The present research integrates an LSTM network with the dropout technique to generate text from an input corpus; a model is developed to find the best way to extract words from the context. For training the model, the novel "La ciudad y los perros", which is composed of 128,600 words, is used as input data. The text was divided into two data sets: 38.88% for training and the remaining 61.12% for testing the model. The proposed model was tested in two variants: word importance and context. The results were evaluated in terms of the semantic proximity of the generated text to the given context.

Dissertations / Theses on the topic "Text Generation Using Neural Networks"

1

Zaghloul, Waleed A. "Text mining using neural networks." Lincoln, Neb.: University of Nebraska-Lincoln, 2005. http://0-www.unl.edu.library.unl.edu/libr/Dissertations/2005/Zaghloul.pdf.

Full text
Abstract:
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2005. Title from title screen (sites viewed on Oct. 18, 2005). PDF text: 100 p.: col. ill. Includes bibliographical references (p. 95-100 of dissertation).
2

Wang, Run Fen. "Semantic Text Matching Using Convolutional Neural Networks." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-362134.

Full text
Abstract:
Semantic text matching is a fundamental task for many applications in Natural Language Processing (NLP). Traditional methods using term frequency-inverse document frequency (TF-IDF) to match exact words in documents have one strong drawback: TF-IDF is unable to capture semantic relations between closely related words, which leads to disappointing matching results. Neural networks have recently been used for various applications in NLP and have achieved state-of-the-art performance on many tasks. Recurrent Neural Networks (RNN) have been tested on text classification and text matching but did not gain any remarkable results, because RNNs work more effectively on short texts than on long documents. In this paper, Convolutional Neural Networks (CNN) are applied to match texts in a semantic aspect. The model uses word embedding representations of two texts as inputs to the CNN construction to extract the semantic features between the two texts and gives a score as output indicating how certain the CNN model is that they match. The results show that after some tuning of the parameters the CNN model could produce accuracy, precision, recall, and F1-scores all over 80%. This is a great improvement over the previous TF-IDF results, and further improvements could be made by using dynamic word vectors, better pre-processing of the data, larger and more feature-rich data sets, and further tuning of the parameters.
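The exact-word limitation of TF-IDF described in this abstract is easy to demonstrate with a plain bag-of-words cosine (a simplified stand-in with raw counts and no IDF weighting):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity over raw term counts (no IDF weighting)."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Synonymous phrases share no terms, so exact-term matching scores them zero,
# while an embedding-based model would place them close together.
unrelated = cosine("great movie".split(), "excellent film".split())
identical = cosine("great movie".split(), "great movie".split())
```

This zero score for semantically equivalent phrases is precisely the gap that word-embedding inputs to the CNN are meant to close.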
3

Shishani, Basel. "Segmentation of connected text using constrained neural networks." Thesis, Queensland University of Technology, 1997.

Find full text
4

Lameris, Harm. "Homograph Disambiguation and Diacritization for Arabic Text-to-Speech Using Neural Networks." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446509.

Full text
Abstract:
Pre-processing Arabic text for Text-to-Speech (TTS) systems poses major challenges, as Arabic omits short vowels in writing. This omission leads to a large number of homographs, and means that Arabic text needs to be diacritized to disambiguate these homographs, in order to be matched up with the intended pronunciation. Diacritizing Arabic has generally been achieved by using rule-based, statistical, or hybrid methods that combine rule-based and statistical methods. Recently, diacritization methods involving deep learning have shown promise in reducing error rates. These deep-learning methods are not yet commonly used in TTS engines, however. To examine neural diacritization methods for use in TTS engines, we normalized and pre-processed a version of the Tashkeela corpus, a large diacritized corpus containing largely Classical Arabic texts, for TTS purposes. We then trained and tested three state-of-the-art Recurrent-Neural-Network-based models on this data set. Additionally we tested these models on the Wiki News corpus, a test set that contains Modern Standard Arabic (MSA) news articles and thus more closely resembles most TTS queries. The models were evaluated by comparing the Diacritic Error Rate (DER) and Word Error Rate (WER) achieved for each data set to one another and to the DER and WER reported in the original papers. Moreover, the per-diacritic accuracy was examined, and a manual evaluation was performed. For the Tashkeela corpus, all models achieved a lower DER and WER than reported in the original papers. This was largely the result of using more training data in addition to the TTS pre-processing steps that were performed on the data. For the Wiki News corpus, the error rates were higher, largely due to the domain gap between the data sets. We found that for both data sets the models overfit on common patterns and the most common diacritic. For the Wiki News corpus the models struggled with Named Entities and loanwords. 
Purely neural models generally outperformed the model that combined deep learning with rule-based and statistical corrections. These findings highlight the usability of deep learning methods for Arabic diacritization in TTS engines as well as the need for diacritized corpora that are more representative of Modern Standard Arabic.
5

Casini, Luca. "Automatic Music Generation Using Variational Autoencoders." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16137/.

Full text
Abstract:
The aim of the thesis is the design and evaluation of a generative model based on deep learning for creating symbolic music. Music, and art in general, pose interesting problems from a machine learning standpoint, as they have structure and coherence both locally and globally and also have semantic content that goes beyond mere structural problems. Working on such challenges can give insight into other problems in the machine learning world. Historically, algorithmic music generation focused on structure and was achieved through the use of Markov models or by defining, often manually, a set of strict rules to be followed. In recent years the availability of large amounts of data and cheap computational power has led to the resurgence of Artificial Neural Networks (ANN). Deep Learning is machine learning based on ANNs with many stacked layers and is improving the state of the art in many fields, including generative models. This thesis focuses on Variational Autoencoders (VAE), a type of neural network where the input is mapped to a lower-dimensional code fit to a Gaussian distribution, and the network then tries to reconstruct the input while minimizing the error. The distribution can be easily sampled, allowing data to be generated and interpolated in the latent space. Autoencoders can use any type of network to encode and decode the input; we use Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN). Since the quality of music and art in general is deeply subjective, and what seems pleasing to one may not be to another, we try to determine the "best" model by conducting a survey, asking participants to rate their enjoyment of the music and whether they think each sample was composed by a human or an AI.
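The sampling step at the heart of a VAE, the reparameterization trick, can be sketched without any deep-learning framework. This is a generic illustration under the Gaussian-latent assumption the abstract describes, not the thesis code:

```python
import random

def sample_latent(mu, sigma, rng):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1)."""
    return mu + sigma * rng.gauss(0.0, 1.0)

# Draw many latent samples for one (mu, sigma) pair predicted by an encoder;
# their empirical mean and variance should approach mu and sigma**2.
rng = random.Random(42)
samples = [sample_latent(1.0, 0.5, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

Writing the sample as a deterministic function of (mu, sigma) plus external noise is what lets gradients flow through the sampling step during training, and sampling the latent space directly is what enables the generation and interpolation mentioned above.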
6

Kullmann, Emelie. "Speech to Text for Swedish using KALDI." Thesis, KTH, Optimeringslära och systemteori, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189890.

Full text
Abstract:
The field of speech recognition has during the last decade left the research stage and found its way into the public market. Most computers and mobile phones sold today support dictation and transcription in a number of chosen languages. Swedish is often not one of them. In this thesis, which was carried out on behalf of Swedish Radio (Sveriges Radio), an Automatic Speech Recognition model for Swedish is trained and its performance evaluated. The model is built using the open-source toolkit Kaldi. Two approaches to training the acoustic part of the model are investigated: first, using Hidden Markov Models and Gaussian Mixture Models, and second, using Hidden Markov Models and Deep Neural Networks. The latter approach, using deep neural networks, is found to achieve better performance in terms of Word Error Rate.
7

AbuRa'ed, Ahmed Ghassan Tawfiq. "Automatic generation of descriptive related work reports." Doctoral thesis, Universitat Pompeu Fabra, 2020. http://hdl.handle.net/10803/669975.

Full text
Abstract:
A related work report is a section in a research paper which integrates key information from a list of related scientific papers, providing context to the work being presented. Related work reports can be either descriptive or integrative. Integrative related work reports provide a high-level overview and critique of the scientific papers by comparing them with each other, providing fewer details of individual studies. Descriptive related work reports, instead, provide more in-depth information about each mentioned study, such as the methods and results of the cited works. In order to write a related work report, scientists have to identify, condense/summarize, and combine relevant information from different scientific papers. However, such a task is complicated by the sheer volume of available scientific papers. In this context, the automatic generation of related work reports appears to be an important problem to tackle. It can be considered an instance of the multi-document summarization problem where, given a list of scientific papers, the main objective is to automatically summarize those papers and generate related work reports. In order to study the problem of related work generation, we have developed a manually annotated, machine-readable data-set of related work sections, cited papers (e.g. references) and sentences, together with an additional layer of papers citing the references. We have also investigated the relation between a citation context in a citing paper and the scientific paper it is citing, so as to properly model cross-document relations and inform our summarization approach. Moreover, we have investigated the identification of explicit and implicit citations to a given scientific paper, an important task in several scientific text mining activities such as citation purpose identification, scientific opinion mining, and scientific summarization.
We present both extractive and abstractive methods to summarize a list of scientific papers by utilizing their citation network. The extractive approach follows three stages: scoring the sentences of the scientific papers based on their citation network, selecting sentences from each scientific paper to be mentioned in the related work report, and generating an organized related work report by grouping sentences that belong to the same topic. The abstractive approach, on the other hand, attempts to generate citation sentences to be included in a related work report, taking advantage of current sequence-to-sequence neural architectures and resources we have created specifically for this task. The thesis also presents and discusses automatic and manual evaluation of the generated related work reports, showing the viability of the proposed approaches.
APA, Harvard, Vancouver, ISO, and other styles
8

Stein, Roger Alan. "An analysis of hierarchical text classification using word embeddings." Universidade do Vale do Rio dos Sinos, 2018. http://www.repositorio.jesuita.org.br/handle/UNISINOS/7624.

Full text
Abstract:
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior. Efficient distributed numerical word representation models (word embeddings) combined with modern machine learning algorithms have recently yielded considerable improvement on automatic document classification tasks. However, the effectiveness of such techniques has not yet been assessed for hierarchical text classification (HTC). This study investigates the application of those models and algorithms to this specific problem by means of experimentation and analysis. Classification models were trained with prominent machine learning algorithm implementations (fastText, XGBoost, and Keras' CNN) and notable word embedding generation methods (GloVe, word2vec, and fastText) on publicly available data, and were evaluated with measures specifically appropriate for the hierarchical context. FastText achieved an LCAF1 of 0.871 on a single-labeled version of the RCV1 dataset. The analysis of the results indicates that using word embeddings is a very promising approach for HTC.
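Measures "specifically appropriate for the hierarchical context", such as the LCA-F1 reported above, credit predictions that land near the true class in the hierarchy. A minimal sketch of the simpler set-based hierarchical F1, which augments each label with its ancestors before computing overlap (the parent map and label names are invented for illustration):

```python
def ancestors(label, parent):
    """A label together with all of its ancestors in the class hierarchy.
    `parent` maps each non-root label to its parent."""
    out = {label}
    while label in parent:
        label = parent[label]
        out.add(label)
    return out

def hierarchical_f1(true_label, pred_label, parent):
    """Set-based hierarchical F1 over ancestor-augmented label sets."""
    t = ancestors(true_label, parent)
    p = ancestors(pred_label, parent)
    inter = len(t & p)
    prec = inter / len(p)
    rec = inter / len(t)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

Predicting a sibling of the true class thus scores 0.5 rather than 0, because both labels share the same parent; LCA-based variants refine this idea by restricting the ancestor sets to the lowest common ancestor.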
APA, Harvard, Vancouver, ISO, and other styles
9

Bengtsson, Fredrik, and Adam Combler. "Automatic Dispatching of Issues using Machine Learning." Thesis, Linköpings universitet, Programvara och system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162837.

Full text
Abstract:
Many software companies use issue tracking systems to organize their work. However, when working on large projects across multiple teams, the problem arises of finding the correct team to solve a certain issue. One team might detect a problem that must be solved by another team. Finding the correct team takes time from employees, so automating the dispatching of these issues can bring large benefits to the company. In this thesis, machine learning methods, mainly convolutional neural networks (CNN) for text classification, have been applied to this problem. In natural language processing, both word- and character-level representations are commonly used. The results in this thesis suggest that the CNN learns different information depending on whether a word- or character-level representation is used. Furthermore, it was concluded that the CNN models performed at a level similar to the classical support vector machine for this task. When compared to a human expert working with dispatching issues, the best CNN model performed at a similar level when given the same information. The high throughput of a computer model therefore suggests that automation of this task is very much possible.
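The word- versus character-level distinction the thesis draws can be made concrete with two tiny tokenizers; the helper names are hypothetical, and a real model would feed these tokens into an embedding layer rather than use them directly:

```python
def word_features(text):
    """Word-level representation: one token per whitespace-separated word."""
    return text.lower().split()

def char_ngrams(text, n=3):
    """Character-level representation: overlapping n-grams with
    boundary markers, so word edges are visible to the model."""
    text = f"<{text.lower()}>"
    return [text[i:i + n] for i in range(len(text) - n + 1)]
```

Character n-grams remain informative for misspelled or previously unseen words (common in issue reports), while word tokens capture vocabulary-level signals, which is one plausible reason the two representations lead the CNN to learn different information.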
APA, Harvard, Vancouver, ISO, and other styles
10

Nord, Sofia. "Multivariate Time Series Data Generation using Generative Adversarial Networks : Generating Realistic Sensor Time Series Data of Vehicles with an Abnormal Behaviour using TimeGAN." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302644.

Full text
Abstract:
Large datasets are a crucial requirement for achieving high performance, accuracy, and generalisation in any machine learning task, such as prediction or anomaly detection. However, it is not uncommon for datasets to be small or imbalanced, since gathering data can be difficult, time-consuming, and expensive. In the task of collecting vehicle sensor time series data, in particular when the vehicle behaves abnormally, these struggles are present and may hinder the automotive industry in its development. Synthetic data generation has become a growing interest among researchers in several fields as a way to handle the difficulties of data gathering. Among the methods explored for generating data, generative adversarial networks (GANs) have become a popular approach due to their wide application domain and successful performance. This thesis focuses on generating multivariate time series data similar to vehicle sensor readings of the air pressures in the brake system of vehicles with an abnormal behaviour, meaning there is a leakage somewhere in the system. A novel GAN architecture called TimeGAN was trained to generate such data and was then evaluated using both qualitative and quantitative evaluation metrics. Two versions of this model were tested and compared. The results obtained showed that both models learnt the distribution and the underlying information within the features of the real data. The goal of the thesis was achieved, and the work can become a foundation for future work in this field.
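Sequence GANs such as TimeGAN consume fixed-length windows of scaled multivariate series. A stdlib-only sketch of that preprocessing; the window length and the choice of min-max scaling are generic assumptions, not details taken from the thesis:

```python
def minmax_scale(series):
    """Scale each feature of a multivariate series (list of per-timestep
    feature vectors) to [0, 1] independently."""
    cols = list(zip(*series))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)]
            for row in series]

def sliding_windows(series, seq_len):
    """Cut the series into overlapping fixed-length sequences, the usual
    training-input format for sequence GANs."""
    return [series[i:i + seq_len] for i in range(len(series) - seq_len + 1)]
```

In practice the scaling parameters are fitted on training data only and reused to invert the transform on generated samples.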
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Text Generation Using Neural Networks"

1

Zaghloul, Waleed A. Text mining using neural networks. University of Nebraska-Lincoln, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

C, Jain L., and Johnson R. P, eds. Automatic generation of neural network architecture using evolutionary computation. World Scientific, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wendling, Fabrice, and Fernando H. Lopes da Silva. Dynamics of EEGs as Signals of Neuronal Populations. Edited by Donald L. Schomer and Fernando H. Lopes da Silva. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780190228484.003.0003.

Full text
Abstract:
This chapter gives an overview of approaches used to understand the generation of electroencephalographic (EEG) signals using computational models. The basic concept is that appropriate modeling of neuronal networks, based on relevant anatomical and physiological data, allows researchers to test hypotheses about the nature of EEG signals. Here these models are considered at different levels of complexity. The first level is based on single-cell biophysical properties anchored in classic Hodgkin-Huxley theory. The second level emphasizes detailed neuronal networks and their role in generating different kinds of EEG oscillations. At the third level are models derived from the Wilson-Cowan approach, which constitutes the backbone of neural mass models. Another part of the chapter is dedicated to models of epileptiform activities. Finally, the themes of nonlinear dynamic systems and topological models in EEG generation are discussed.
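The Wilson-Cowan approach mentioned at the third level can be sketched as a pair of coupled excitatory/inhibitory rate equations integrated with forward Euler. The parameter values below are arbitrary illustrative choices, and this simplified form omits the refractory term of the original model:

```python
import math

def sigmoid(x):
    """Sigmoidal firing-rate response function."""
    return 1.0 / (1.0 + math.exp(-x))

def wilson_cowan(steps=1000, dt=0.1, E=0.1, I=0.1,
                 wEE=12.0, wEI=10.0, wIE=10.0, wII=2.0,
                 P=1.0, Q=0.0, tauE=1.0, tauI=2.0):
    """Forward-Euler integration of a simplified excitatory (E) /
    inhibitory (I) population pair; returns the (E, I) trajectory."""
    traj = []
    for _ in range(steps):
        dE = (-E + sigmoid(wEE * E - wEI * I + P)) / tauE
        dI = (-I + sigmoid(wIE * E - wII * I + Q)) / tauI
        E += dt * dE
        I += dt * dI
        traj.append((E, I))
    return traj
```

Depending on the coupling weights and external drives P and Q, such a pair settles to a fixed point or oscillates, which is how neural mass models reproduce EEG-like rhythms.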
APA, Harvard, Vancouver, ISO, and other styles
4

Automatic Generation of Neural Network Architecture Using Evolutionary Computation. World Scientific Publishing Co Pte Ltd, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jain, L. C. Automatic Generation of Neural Network Architecture Using Evolutionary Computation. World Scientific Publishing Co Pte Ltd, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Koch, Christof. Biophysics of Computation. Oxford University Press, 1998. http://dx.doi.org/10.1093/oso/9780195104912.001.0001.

Full text
Abstract:
Neural network research often builds on the fiction that neurons are simple linear threshold units, completely neglecting the highly dynamic and complex nature of synapses, dendrites, and voltage-dependent ionic currents. Biophysics of Computation: Information Processing in Single Neurons challenges this notion, using richly detailed experimental and theoretical findings from cellular biophysics to explain the repertoire of computational functions available to single neurons. The author shows how individual nerve cells can multiply, integrate, or delay synaptic inputs and how information can be encoded in the voltage across the membrane, in the intracellular calcium concentration, or in the timing of individual spikes. Key topics covered include the linear cable equation; cable theory as applied to passive dendritic trees and dendritic spines; chemical and electrical synapses and how to treat them from a computational point of view; nonlinear interactions of synaptic input in passive and active dendritic trees; the Hodgkin-Huxley model of action potential generation and propagation; phase space analysis; linking stochastic ionic channels to membrane-dependent currents; calcium and potassium currents and their role in information processing; the role of diffusion, buffering and binding of calcium, and other messenger systems in information processing and storage; short- and long-term models of synaptic plasticity; simplified models of single cells; stochastic aspects of neuronal firing; the nature of the neuronal code; and unconventional models of sub-cellular computation. Biophysics of Computation: Information Processing in Single Neurons serves as an ideal text for advanced undergraduate and graduate courses in cellular biophysics, computational neuroscience, and neural networks, and will appeal to students and professionals in neuroscience, electrical and computer engineering, and physics.
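The passive membrane behaviour underlying the linear cable equation discussed in this book can be sketched as a single leaky RC compartment under forward-Euler integration; the units and parameter values here are illustrative assumptions, not taken from the text:

```python
def passive_membrane(I_inj, dt=0.1, C=1.0, g_leak=0.1, E_leak=-70.0):
    """Forward-Euler integration of one passive compartment:
    C * dV/dt = -g_leak * (V - E_leak) + I(t).
    `I_inj` is the injected current at each time step; returns the
    membrane-voltage trace."""
    V = E_leak
    trace = []
    for I in I_inj:
        dV = (-g_leak * (V - E_leak) + I) / C
        V += dt * dV
        trace.append(V)
    return trace
```

Under a constant injected current the voltage relaxes exponentially toward E_leak + I/g_leak, the steady state predicted by the passive equation; adding voltage-dependent conductances to this compartment is what turns it into a Hodgkin-Huxley model.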
APA, Harvard, Vancouver, ISO, and other styles
7

Ahirwar, Kailash. Generative Adversarial Networks Projects: Build Next-Generation Generative Models Using TensorFlow and Keras. Packt Publishing, Limited, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Walters, Greg, and John Hany. Hands-On Generative Adversarial Networks with Pytorch 1. x: Implement Next-Generation Neural Networks to Build Powerful GAN Models Using Python. Packt Publishing, Limited, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Saidane, Zohra. Image and video text recognition using convolutional neural networks: Study of new CNNs architectures for binarization, segmentation and recognition of text images. LAP Lambert Academic Publishing, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gates, Bill. Business @ the Speed of Thought: Using a Digital Nervous System [Text in Italian]. Arnoldo Mandadori Editore S.p.A., 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Text Generation Using Neural Networks"

1

Souri, Adnan, Zakaria El Maazouzi, Mohammed Al Achhab, and Badr Eddine El Mohajir. "Arabic Text Generation Using Recurrent Neural Networks." In Communications in Computer and Information Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-96292-4_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Şulea, Octavia-Maria, Steve Young, and Liviu P. Dinu. "MorphoGen: Full Inflection Generation Using Recurrent Neural Networks." In Computational Linguistics and Intelligent Text Processing. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-24340-0_39.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Milanova, Ivona, Ksenija Sarvanoska, Viktor Srbinoski, and Hristijan Gjoreski. "Automatic Text Generation in Macedonian Using Recurrent Neural Networks." In Communications in Computer and Information Science. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33110-8_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Soni, Sonam, Praveen Kumar, and Amal Saha. "Automatic Question and Answer Generation from Text Using Neural Networks." In Emerging Technologies in Data Mining and Information Security. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-9774-9_46.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Samreen, Muhammad Javed Iqbal, Iftikhar Ahmad, Suleman Khan, and Rizwan Khan. "Language Modeling and Text Generation Using Hybrid Recurrent Neural Network." In Deep Learning for Unmanned Systems. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77939-9_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lezama-Sánchez, Ana Laura, and Mireya Tovar Vidal. "Cyberbullying Text Classification Using Neural Networks, Generative Models, and Graph Analysis." In Lecture Notes in Networks and Systems. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-87647-9_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bernier, J. L., J. J. Merelo, J. Ortega, and A. Prieto. "Test pattern generation for analog circuits using neural networks and evolutive algorithms." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-59497-3_258.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mitkov, Ruslan, Le An Ha, Halyna Maslak, Tharindu Ranasinghe, and Vilelmini Sosoni. "Automatic Generation of Multiple-Choice Test Items from Paragraphs Using Deep Neural Networks." In Advancing Natural Language Processing in Educational Assessment. Routledge, 2023. http://dx.doi.org/10.4324/9781003278658-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pawade, Dipti, Avani Sakhapara, Chaitya Shah, Jigar Wala, Ankitmani Tripathi, and Bhavikk Shah. "Text Caption Generation Based on Lip Movement of Speaker in Video Using Neural Network." In Communications in Computer and Information Science. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-9942-8_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Paduraru, Ciprian, Marius-Constantin Melemciuc, and Miruna Paduraru. "Automatic Test Data Generation for a Given Set of Applications Using Recurrent Neural Networks." In Communications in Computer and Information Science. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29157-0_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Text Generation Using Neural Networks"

1

Damayanti, Shalsahbilla Nazhiifah, Agi Prasetiadi, and Atika Ratna Dewi. "Automatic Story Text Generation Using Recurrent Neural Network Algorithm." In 2024 IEEE International Conference on Communication, Networks and Satellite (COMNETSAT). IEEE, 2024. https://doi.org/10.1109/comnetsat63286.2024.10861931.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Baoqiang, Yu Bai, Fang Cai, Shuang Xue, Na Ye, and XinYuan Ye. "Reasoning Knowledge Transfer for Logical Table-to-Text Generation." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10649960.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ayers, Megan, Luke Sanford, Margaret Roberts, and Eddie Yang. "Discovering influential text using convolutional neural networks." In Findings of the Association for Computational Linguistics ACL 2024. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-acl.714.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bernardi, Mario Luca, and Marta Cimitile. "Report Generation from X-Ray imaging by Retrieval-Augmented Generation and improved Image-Text Matching." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Qi, Qianqian, Lin Ni, Zhongsheng Wang, Libo Zhang, Jiamou Liu, and Michael Witbrock. "Epic-Level Text Generation with LLM through Auto-prompted Reinforcement Learning." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650358.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mykhailichenko, Ihor, Heorhii Ivashchenko, Tetiana Filimonchuk, and Oleksii Liashenko. "Transformative Neural Networks for Technical Text Generation: Context Length Dependence Analysis." In 2024 IEEE 5th KhPI Week on Advanced Technology (KhPIWeek). IEEE, 2024. https://doi.org/10.1109/khpiweek61434.2024.10878089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Weitian, Xu Sun, Yangxing Luo, Wei Gao, and Yanchun Zhu. "CMMQC: Cascaded Multi-Model Quality Control for Unsupervised Data-to-Text Generation." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650876.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fu, Jiajun, Yuxing Long, Xiaojie Wang, and Jianqin Yin. "LLM-Driven “Coach-Athlete” Pretraining Framework for Complex Text-To-Motion Generation." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650269.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mo, Xianjie, Yang Xiang, Youcheng Pan, Yongshuai Hou, and Ping Luo. "Mitigating Knowledge Conflicts in Data-to-Text Generation via the Internalization of Fact Extraction." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10651167.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lymperopoulos, Dimitris, Maria Lymperaiou, Giorgos Filandrianos, and Giorgos Stamou. "Optimal and efficient text counterfactuals using Graph Neural Networks." In Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.blackboxnlp-1.1.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Text Generation Using Neural Networks"

1

Al-Qadi, Imad, Jaime Hernandez, Angeli Jayme, et al. The Impact of Wide-Base Tires on Pavement—A National Study. Illinois Center for Transportation, 2021. http://dx.doi.org/10.36501/0197-9191/21-035.

Full text
Abstract:
Researchers have been studying wide-base tires for over two decades, but no evidence has been provided regarding the net benefit of this tire technology. In this study, a comprehensive approach is used to compare new-generation wide-base tires (NG-WBT) with the dual-tire assembly (DTA). Numerical modeling, prediction methods, experimental measurements, and environmental impact assessment were combined to provide recommendations about the use of NG-WBT. A finite element approach, considering variables usually omitted in the conventional analysis of flexible pavement was utilized for modeling. Five hundred seventy-six cases combining layer thickness, material properties, tire load, tire inflation pressure, and pavement type (thick and thin) were analyzed to obtained critical pavement responses. A prediction tool, known as ICT-Wide, was developed based on artificial neural networks to obtain critical pavement responses in cases outside the finite element analysis matrix. The environmental impacts were determined using life cycle assessment. Based on the bottom-up fatigue cracking, permanent deformation, and international roughness index, the life cycle energy consumption, cost, and green-house gas (GHG) emissions were estimated. To make the outcome of this research effort useful for state departments of transportation and practitioners, a modification to AASHTOWare is proposed to account for NG-WBT. The revision is based on two adjustment factors, one accounting for the discrepancy between the AASHTOware approach and the finite element model of this study, and the other addressing the impact of NG-WBT.
APA, Harvard, Vancouver, ISO, and other styles