A ready-made bibliography on the topic "Natural Language Processes (NLP)"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Browse lists of current journal articles, books, dissertations, abstracts, and other scholarly sources on the topic "Natural Language Processes (NLP)".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, provided the relevant details are available in the work's metadata.

Journal articles on the topic "Natural Language Processes (NLP)"

1

Alharbi, Mohammad, Matthew Roach, Tom Cheesman, and Robert S. Laramee. "VNLP: Visible natural language processing." Information Visualization 20, no. 4 (2021): 245–62. http://dx.doi.org/10.1177/14738716211038898.

Full text of the source
Abstract:
In general, Natural Language Processing (NLP) algorithms exhibit black-box behavior. Users input text, and output is provided with no explanation of how the results are obtained. In order to increase understanding and trust, users value transparent processing which may explain derived results and enable understanding of the underlying routines. Many NLP tools are designed to be opaque by default and do not incorporate a means to steer and manipulate the intermediate NLP steps. We present an interactive, customizable, visual framework that enables users to observe and participate in the NLP pipeline processes, explicitly manipulate the parameters of each step, and explore the result visually based on user preferences. The visible NLP (VNLP) pipeline design is then applied to a text similarity application to demonstrate the utility and advantages of a visible and transparent NLP pipeline in supporting users to understand and justify both the process and results. We also report feedback on our framework from a modern languages expert.
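To make the idea of a transparent, steerable pipeline concrete, here is a minimal Python sketch (hypothetical code, not the authors' VNLP implementation) in which every step reports its effective parameters and output so a user interface could display and override them:

```python
# Minimal sketch of a parameter-exposing NLP pipeline (hypothetical names,
# not the VNLP system itself). Each step records its parameters so a UI
# could display them and let users override them before re-running.
import re
from collections import Counter

def tokenize(text, lowercase=True, token_pattern=r"[a-z']+"):
    if lowercase:
        text = text.lower()
    return re.findall(token_pattern, text)

def remove_stopwords(tokens, stopwords=frozenset({"the", "a", "of", "and", "to", "on"})):
    return [t for t in tokens if t not in stopwords]

def term_frequencies(tokens):
    return Counter(tokens)

PIPELINE = [tokenize, remove_stopwords, term_frequencies]

def run_pipeline(text, overrides=None):
    """Run each step in turn, recording the step name, parameters and output."""
    overrides = overrides or {}
    trace, data = [], text
    for step in PIPELINE:
        params = overrides.get(step.__name__, {})
        data = step(data, **params)
        trace.append({"step": step.__name__, "params": params, "output": data})
    return data, trace

if __name__ == "__main__":
    result, trace = run_pipeline("The cat sat on the mat.",
                                 overrides={"tokenize": {"lowercase": True}})
    for record in trace:
        print(record["step"], record["params"], record["output"])
```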
2

Researcher. "PROMPT ENGINEERING: REVOLUTIONIZING NATURAL LANGUAGE PROCESSING." International Journal of Artificial Intelligence and Machine Learning (IJAIML) 3, no. 2 (2024): 195–203. https://doi.org/10.5281/zenodo.13933394.

Full text of the source
Abstract:
Natural Language Processing (NLP) has undergone a revolution with the emergence of large language models (LLMs) like GPT and BERT. Prompt engineering, the art of crafting effective inputs to guide LLMs, has become a cornerstone of modern NLP applications. This paper explores how prompt engineering is streamlining NLP processes, revolutionizing industries such as banking, and addressing the inherent risks associated with generative AI. Through practical examples and a discussion of potential pitfalls, we offer strategies for mitigating challenges and maximizing the benefits of prompt engineering in NLP.
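As a rough illustration of the prompting practice this abstract describes, the sketch below builds a reusable prompt template for a banking-style classification task; the template wording and the `call_llm` placeholder are assumptions for illustration, not code or an API from the paper:

```python
# Minimal sketch of a reusable prompt template for an instruction-following
# LLM. The wording and the call_llm hook are illustrative assumptions only.
PROMPT_TEMPLATE = """You are a banking support assistant.
Task: classify the customer message into one of {labels}.
Respond with the label only.

Customer message:
\"\"\"{message}\"\"\"
"""

def build_prompt(message, labels=("complaint", "fraud report", "general query")):
    # Fill the template with the label set and the customer's message.
    return PROMPT_TEMPLATE.format(labels=", ".join(labels), message=message)

def call_llm(prompt):
    # Placeholder: in practice this would call whichever LLM endpoint is used.
    raise NotImplementedError

if __name__ == "__main__":
    print(build_prompt("My card was charged twice for the same purchase."))
```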
3

Dallo, Khan Ali Marwani. "Natural language processing for business analytics." Advances in Engineering Innovation 3, no. 1 (2023): 37–40. http://dx.doi.org/10.54254/2977-3903/3/2023038.

Full text of the source
Abstract:
Natural Language Processing (NLP), a branch of artificial intelligence, is gaining traction as a potent tool for business analytics. With the proliferation of unstructured textual data, businesses are actively seeking methodologies to distill valuable insights from vast textual repositories. The introduction of NLP in the realm of business analytics offers a transformative approach, automating traditional manual processes and fostering real-time, data-driven decision-making. From sentiment analysis to text summarization, NLP is facilitating businesses in deciphering consumer feedback, predicting market trends, and breaking down linguistic barriers in the age of globalization. This paper sheds light on the evolution of NLP techniques in business analytics, their applications, and the inherent challenges and opportunities they present.
4

Simanullang, Gerald Shan Benediktus, and Jin Ai The. "Roles of Natural Language Processing in New Product Development Process: Literature Review." Jurnal Rekayasa Sistem Industri 13, no. 1 (2024): 117–30. http://dx.doi.org/10.26593/jrsi.v13i1.6790.117-130.

Full text of the source
Abstract:
Customer satisfaction is a key success factor for a business. To provide products that meet customer satisfaction, companies must be able to understand the customers' needs and desires. Technological developments nowadays have helped companies understand customer desires more easily, so that they can provide products that satisfy their customers. Natural Language Processing (NLP) is a technology that allows computers to process human language. NLP is also commonly referred to as text mining. NLP has been utilized in the New Product Development (NPD) process. We compiled studies related to NLP and NPD and conducted a literature review to map out how far NLP has been utilized in NPD processes. We found that in this era of Big Data, current NLP studies most often aim to process text data from online reviews on e-commerce and from social media. By using NLP, large amounts of data can produce valuable Voice of Customer (VOC) information for product development. We also found that NLP technology has been utilized in other NPD processes that do not involve VOC, such as the design stage, document processing, and extraction of requirements in the NPD process.
5

Khan, Noel, David Elizondo, Lipika Deka, and Miguel A. Molina-Cabello. "Natural Language Processing Tools and Workflows for Improving Research Processes." Applied Sciences 14, no. 24 (2024): 11731. https://doi.org/10.3390/app142411731.

Full text of the source
Abstract:
The modern research process involves refining a set of keywords until sufficiently pertinent results are obtained from acceptable sources. References and citations from the most relevant results can then be traced to related works. This process iteratively develops a set of keywords to find the most relevant literature. However, because a keyword-based search essentially samples a corpus, it may be inadequate for capturing a broad or exhaustive understanding of a topic. Further, a keyword-based search is dependent upon the underlying storage and retrieval technology and is essentially a syntactical search rather than a semantic search. To overcome such limitations, this paper explores the use of well-known natural language processing (NLP) techniques to support a semantic search and identifies where specific NLP techniques can be employed and what their primary benefits are, thus enhancing the opportunities to further improve the research process. The proposed NLP methods were tested through different workflows on different datasets and each workflow was designed to exploit latent relationships within the data to refine the keywords. The results of these tests demonstrated an improvement in the identified literature when compared to the literature extracted from the end-user-given keywords. For example, one of the defined workflows reduced the number of search results by two orders of magnitude but contained a larger percentage of pertinent results.
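One simple way to exploit latent relationships for keyword refinement, in the spirit of the workflows described above, is to expand a seed keyword set with its nearest neighbours in an embedding space. The sketch below uses invented toy vectors; the paper's actual datasets, models, and workflows are not reproduced here:

```python
# Toy sketch of embedding-based keyword expansion (illustrative vectors only).
import numpy as np

EMBEDDINGS = {  # stand-in for pretrained word vectors
    "nlp":       np.array([0.9, 0.1, 0.0]),
    "parsing":   np.array([0.8, 0.2, 0.1]),
    "tokenizer": np.array([0.7, 0.3, 0.0]),
    "finance":   np.array([0.1, 0.9, 0.2]),
    "banking":   np.array([0.0, 1.0, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand_keywords(seeds, top_k=2):
    """Return candidate keywords most similar to the centroid of the seeds."""
    centroid = np.mean([EMBEDDINGS[s] for s in seeds], axis=0)
    candidates = [(w, cosine(centroid, v)) for w, v in EMBEDDINGS.items()
                  if w not in seeds]
    return sorted(candidates, key=lambda x: x[1], reverse=True)[:top_k]

if __name__ == "__main__":
    print(expand_keywords(["nlp", "parsing"]))
```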
6

Hasan, Zohaib, Zeba Vishwakarma, and Nidhi Pateriya. "NLP and its Components: A Detailed Discussion." International Journal of Innovative Research in Computer and Communication Engineering 11, no. 09 (2023): 10798–803. http://dx.doi.org/10.15680/ijircce.2023.1109034.

Full text of the source
Abstract:
Natural Language Processing (NLP) encompasses computational techniques for processing and analyzing human language, primarily through Natural Language Understanding (NLU) and Natural Language Generation (NLG). NLU focuses on interpreting language by analyzing phonology (sounds), morphology (word structures), syntax (sentence structures), semantics (meaning), and pragmatics (context). These processes enable machines to comprehend the nuances of human language for accurate interpretation and response generation. NLG, in contrast, involves producing human-like text from structured data. It includes content determination (identifying relevant information), text planning (organizing information), sentence planning (constructing grammatically correct sentences), and surface realization (generating the final text). NLG is crucial for applications like automated report generation and chat bots, where coherent and contextually appropriate responses are essential. The synergy between NLU and NLG underpins many NLP applications. In machine translation, NLU interprets the source text, while NLG generates the translated text. In question-answering systems, NLU processes the query, and NLG formulates the response. Deep learning advancements have significantly enhanced both NLU and NLG, enabling more sophisticated and human-like interactions. This paper explores the components and processes of NLU and NLG, offering a comprehensive understanding of their mechanisms and the advancements driving modern NLP systems.
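As a toy illustration of the NLG stages named above (content determination, sentence planning, surface realization), the following sketch turns a small structured record into a sentence; the record and the rules are invented for illustration only:

```python
# Toy sketch of the NLG stages described in the abstract. The data record and
# the planning rules are invented for illustration.
RAW_RECORD = {"city": "Leipzig", "temp_c": 23, "humidity": 0.55, "sensor_id": "A17"}

def determine_content(record):
    # Content determination: keep only the fields worth reporting.
    return {k: record[k] for k in ("city", "temp_c")}

def plan_sentence(content):
    # Sentence planning: choose a simple subject-verb-complement structure.
    return ("The temperature in {city}", "is", "{temp_c} degrees Celsius")

def realize(plan, content):
    # Surface realization: produce the final text.
    return " ".join(part.format(**content) for part in plan) + "."

if __name__ == "__main__":
    content = determine_content(RAW_RECORD)
    print(realize(plan_sentence(content), content))
```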
7

Girirajan, S. "Code Generation Empowered by Natural Language Processing and Machine Learning Algorithms." Advances in Nonlinear Variational Inequalities 28, no. 1s (2024): 44–56. http://dx.doi.org/10.52783/anvi.v28.2186.

Full text of the source
Abstract:
The goal of this study is to revolutionize code creation processes by investigating the synergistic union of machine learning (ML) and natural language processing (NLP). Traditional approaches to code generation frequently demand expert-level programming expertise, presenting non-programmers with entry barriers. Development teams can communicate coding tasks in natural language by utilizing NLP techniques like language modeling and semantic parsing. This helps to close the gap between human intent and instructions that can be executed by a computer. By incorporating ML techniques, the system may also more effectively understand and produce code that is compatible with a wider range of programming languages and paradigms. This research clarifies the revolutionary potential of NLP- and ML-driven code creation and highlights its consequences for software development efficiency, accessibility, and innovation through an extensive assessment of current developments and case examples.
8

Chung, Joohyun, Sangmin Song, and Heesook Son. "Exploring Natural Language Processing through an Exemplar Using YouTube." International Journal of Environmental Research and Public Health 21, no. 10 (2024): 1357. http://dx.doi.org/10.3390/ijerph21101357.

Full text of the source
Abstract:
There has been a growing emphasis on data across various health-related fields, not just in nursing research, due to the increasing volume of unstructured data in electronic health records (EHRs). Natural Language Processing (NLP) provides a solution by transforming this unstructured data into structured formats, thereby facilitating valuable insights. This methodology paper explores the application of NLP in nursing, using an exemplar case study that analyzes YouTube data to investigate social phenomena among adults living alone. The methodology involves five steps: accessing data through YouTube’s API, data cleaning, preprocessing (tokenization, sentence segmentation, linguistic normalization), sentiment analysis using Python, and topic modeling. This study serves as a comprehensive guide for integrating NLP into nursing research, supplemented with digital content demonstrating each step. For successful implementation, nursing researchers must grasp the fundamental concepts and processes of NLP. The potential of NLP in nursing is significant, particularly in utilizing unstructured textual data from nursing documentation and social media. Its benefits include streamlining nursing documentation, enhancing patient communication, and improving data analysis.
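A dependency-free sketch of the cleaning, tokenization, and sentiment steps outlined in this methodology is shown below; the tiny hand-made lexicon merely stands in for the Python sentiment tooling used in the actual study, which is not reproduced here:

```python
# Sketch of cleaning -> tokenization -> sentiment scoring for short comments.
# The lexicon is a toy stand-in for the study's Python sentiment tooling.
import re

POSITIVE = {"good", "great", "happy", "calm"}
NEGATIVE = {"lonely", "sad", "bad", "stressful"}

def clean(comment):
    comment = re.sub(r"http\S+", "", comment)         # drop URLs
    comment = re.sub(r"[^a-zA-Z\s]", " ", comment)     # drop emojis/punctuation
    return comment.lower().strip()

def tokenize(comment):
    return comment.split()

def sentiment(tokens):
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

if __name__ == "__main__":
    raw = "Living alone is lonely sometimes, but the quiet is great! https://youtu.be/x"
    tokens = tokenize(clean(raw))
    print(tokens, "->", sentiment(tokens))
```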
9

Cambria, Erik. "A Review of Natural Language Processing Research." Engineering Computations 2017, no. 10 (2017): 10. https://doi.org/10.5281/zenodo.1000805.

Full text of the source
Abstract:
Natural language processing (NLP) is a theory-motivated range of computational techniques for the automatic analysis and representation of human language. NLP research has evolved from the era of punch cards and batch processing (in which the analysis of a sentence could take up to 7 minutes) to the era of Google and the likes of it (in which millions of webpages can be processed in less than a second). This review paper draws on recent developments in NLP research to look at the past, present, and future of NLP technology in a new light. Borrowing the paradigm of 'jumping curves' from the field of business management and marketing prediction, this survey article reinterprets the evolution of NLP research as the intersection of three overlapping curves, namely the Syntactics, Semantics, and Pragmatics curves, which will eventually lead NLP research to evolve into natural language understanding.
10

Sivakumar, R. D. "TEXT MINING AND NATURAL LANGUAGE PROCESSING FOR DECISION SUPPORT SYSTEMS." Indian Journal of Research and Development Systems in Technologization 1, no. 1 (2024): 14–23. https://doi.org/10.5281/zenodo.10847090.

Full text of the source
Abstract:
Text mining and natural language processing (NLP) techniques have proven essential to decision support systems (DSS). Due to the growing volume of organizational text data, ranging from customer feedback to social media interactions, extracting the desired findings from unstructured texts becomes imperative. This work is intended to explain the use of text mining and NLP techniques to support and improve decision-making processes in different disciplines. Employing methods like sentiment analysis, topic modeling and named entity recognition, DSS can efficiently mine textual data to identify the patterns, trends and sentiments hidden behind it. Additionally, machine learning algorithms automate the creation of text insights and put them to use in decision-making tasks. After a critical assessment of the text mining and NLP literature along with case studies, this paper reveals the opportunities of text mining and NLP systems across the range of industries where decisions are required. Moreover, it addresses the challenges and a future research agenda for incorporating these approaches into decision-making processes.
More sources

Doctoral dissertations on the topic "Natural Language Processes (NLP)"

1

Oliveira, Mirna Fernanda de. "Enfrentamento do problema das divergências de tradução por um sistema de tradução automática : um exercício exploratório /." Araraquara : [s.n.], 2006. http://hdl.handle.net/11449/103605.

Full text of the source
Abstract:
Advisor: Bento Carlos Dias da Silva. Committee: Beatriz Nunes de Oliveira Longo, Dirce Charara Monteiro, Gladis Maria de Barcellos Almeida, Heronides Maurílio de Melo Moura. This dissertation develops an exploratory linguistic and computational study of a specific problem that must be faced by machine translation systems: the problem of translation divergences, whether syntactic or lexical-semantic, that can be verified between pairs of sentences of different natural languages. To achieve this aim, the work is based on the interdisciplinary research methodology for NLP (Natural Language Processing) developed by Dias-da-Silva (1996, 1998 & 2003) and on the linguistic-computational theory behind UNITRAN, the machine translation system developed by Dorr (1993), which is in turn grounded in Chomsky's syntactic theory of Government and Binding (1981) and Jackendoff's semantic theory of Conceptual Structures (1990). As a contribution to the field of NLP, the dissertation describes the composition and operation of UNITRAN, designed to deal with part of the problem posed by translation divergences, and illustrates the possibility of including Brazilian Portuguese in the system through the examination of some types of divergences that occur between English and Brazilian Portuguese sentences.
2

Oliveira, Mirna Fernanda de [UNESP]. "Enfrentamento do problema das divergências de tradução por um sistema de tradução automática: um exercício exploratório." Universidade Estadual Paulista (UNESP), 2006. http://hdl.handle.net/11449/103605.

Full text of the source
Abstract:
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). This dissertation develops an exploratory linguistic and computational study of a specific problem that must be faced by machine translation systems: the problem of translation divergences, whether syntactic or lexical-semantic, that can be verified between pairs of sentences of different natural languages. To achieve this aim, the work is based on the interdisciplinary research methodology for NLP (Natural Language Processing) developed by Dias-da-Silva (1996, 1998 & 2003) and on the linguistic-computational theory behind UNITRAN, the machine translation system developed by Dorr (1993), which is in turn grounded in Chomsky's syntactic theory of Government and Binding (1981) and Jackendoff's semantic theory of Conceptual Structures (1990). As a contribution to the field of NLP, the dissertation describes the composition and operation of UNITRAN, designed to deal with part of the problem posed by translation divergences, and illustrates the possibility of including Brazilian Portuguese in the system through the examination of some types of divergences that occur between English and Brazilian Portuguese sentences.
3

Hellmann, Sebastian. "Integrating Natural Language Processing (NLP) and Language Resources Using Linked Data." Doctoral thesis, Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-157932.

Full text of the source
Abstract:
This thesis is a compendium of scientific works and engineering specifications that have been contributed to a large community of stakeholders to be copied, adapted, mixed, built upon and exploited in any way possible to achieve a common goal: Integrating Natural Language Processing (NLP) and Language Resources Using Linked Data The explosion of information technology in the last two decades has led to a substantial growth in quantity, diversity and complexity of web-accessible linguistic data. These resources become even more useful when linked with each other and the last few years have seen the emergence of numerous approaches in various disciplines concerned with linguistic resources and NLP tools. It is the challenge of our time to store, interlink and exploit this wealth of data accumulated in more than half a century of computational linguistics, of empirical, corpus-based study of language, and of computational lexicography in all its heterogeneity. The vision of the Giant Global Graph (GGG) was conceived by Tim Berners-Lee aiming at connecting all data on the Web and allowing to discover new relations between this openly-accessible data. This vision has been pursued by the Linked Open Data (LOD) community, where the cloud of published datasets comprises 295 data repositories and more than 30 billion RDF triples (as of September 2011). RDF is based on globally unique and accessible URIs and it was specifically designed to establish links between such URIs (or resources). This is captured in the Linked Data paradigm that postulates four rules: (1) Referred entities should be designated by URIs, (2) these URIs should be resolvable over HTTP, (3) data should be represented by means of standards such as RDF, (4) and a resource should include links to other resources. Although it is difficult to precisely identify the reasons for the success of the LOD effort, advocates generally argue that open licenses as well as open access are key enablers for the growth of such a network as they provide a strong incentive for collaboration and contribution by third parties. In his keynote at BNCOD 2011, Chris Bizer argued that with RDF the overall data integration effort can be “split between data publishers, third parties, and the data consumer”, a claim that can be substantiated by observing the evolution of many large data sets constituting the LOD cloud. As written in the acknowledgement section, parts of this thesis has received numerous feedback from other scientists, practitioners and industry in many different ways. The main contributions of this thesis are summarized here: Part I – Introduction and Background. During his keynote at the Language Resource and Evaluation Conference in 2012, Sören Auer stressed the decentralized, collaborative, interlinked and interoperable nature of the Web of Data. The keynote provides strong evidence that Semantic Web technologies such as Linked Data are on its way to become main stream for the representation of language resources. The jointly written companion publication for the keynote was later extended as a book chapter in The People’s Web Meets NLP and serves as the basis for “Introduction” and “Background”, outlining some stages of the Linked Data publication and refinement chain. Both chapters stress the importance of open licenses and open access as an enabler for collaboration, the ability to interlink data on the Web as a key feature of RDF as well as provide a discussion about scalability issues and decentralization. 
Furthermore, we elaborate on how conceptual interoperability can be achieved by (1) re-using vocabularies, (2) agile ontology development, (3) meetings to refine and adapt ontologies and (4) tool support to enrich ontologies and match schemata. Part II - Language Resources as Linked Data. “Linked Data in Linguistics” and “NLP & DBpedia, an Upward Knowledge Acquisition Spiral” summarize the results of the Linked Data in Linguistics (LDL) Workshop in 2012 and the NLP & DBpedia Workshop in 2013 and give a preview of the MLOD special issue. In total, five proceedings – three published at CEUR (OKCon 2011, WoLE 2012, NLP & DBpedia 2013), one Springer book (Linked Data in Linguistics, LDL 2012) and one journal special issue (Multilingual Linked Open Data, MLOD to appear) – have been (co-)edited to create incentives for scientists to convert and publish Linked Data and thus to contribute open and/or linguistic data to the LOD cloud. Based on the disseminated call for papers, 152 authors contributed one or more accepted submissions to our venues and 120 reviewers were involved in peer-reviewing. “DBpedia as a Multilingual Language Resource” and “Leveraging the Crowdsourcing of Lexical Resources for Bootstrapping a Linguistic Linked Data Cloud” contain this thesis’ contribution to the DBpedia Project in order to further increase the size and inter-linkage of the LOD Cloud with lexical-semantic resources. Our contribution comprises extracted data from Wiktionary (an online, collaborative dictionary similar to Wikipedia) in more than four languages (now six) as well as language-specific versions of DBpedia, including a quality assessment of inter-language links between Wikipedia editions and internationalized content negotiation rules for Linked Data. In particular the work described in created the foundation for a DBpedia Internationalisation Committee with members from over 15 different languages with the common goal to push DBpedia as a free and open multilingual language resource. Part III - The NLP Interchange Format (NIF). “NIF 2.0 Core Specification”, “NIF 2.0 Resources and Architecture” and “Evaluation and Related Work” constitute one of the main contribution of this thesis. The NLP Interchange Format (NIF) is an RDF/OWL-based format that aims to achieve interoperability between Natural Language Processing (NLP) tools, language resources and annotations. The core specification is included in and describes which URI schemes and RDF vocabularies must be used for (parts of) natural language texts and annotations in order to create an RDF/OWL-based interoperability layer with NIF built upon Unicode Code Points in Normal Form C. In , classes and properties of the NIF Core Ontology are described to formally define the relations between text, substrings and their URI schemes. contains the evaluation of NIF. In a questionnaire, we asked questions to 13 developers using NIF. UIMA, GATE and Stanbol are extensible NLP frameworks and NIF was not yet able to provide off-the-shelf NLP domain ontologies for all possible domains, but only for the plugins used in this study. After inspecting the software, the developers agreed however that NIF is adequate enough to provide a generic RDF output based on NIF using literal objects for annotations. All developers were able to map the internal data structure to NIF URIs to serialize RDF output (Adequacy). 
The development effort in hours (ranging between 3 and 40 hours) as well as the number of code lines (ranging between 110 and 445) suggest, that the implementation of NIF wrappers is easy and fast for an average developer. Furthermore the evaluation contains a comparison to other formats and an evaluation of the available URI schemes for web annotation. In order to collect input from the wide group of stakeholders, a total of 16 presentations were given with extensive discussions and feedback, which has lead to a constant improvement of NIF from 2010 until 2013. After the release of NIF (Version 1.0) in November 2011, a total of 32 vocabulary employments and implementations for different NLP tools and converters were reported (8 by the (co-)authors, including Wiki-link corpus, 13 by people participating in our survey and 11 more, of which we have heard). Several roll-out meetings and tutorials were held (e.g. in Leipzig and Prague in 2013) and are planned (e.g. at LREC 2014). Part IV - The NLP Interchange Format in Use. “Use Cases and Applications for NIF” and “Publication of Corpora using NIF” describe 8 concrete instances where NIF has been successfully used. One major contribution in is the usage of NIF as the recommended RDF mapping in the Internationalization Tag Set (ITS) 2.0 W3C standard and the conversion algorithms from ITS to NIF and back. One outcome of the discussions in the standardization meetings and telephone conferences for ITS 2.0 resulted in the conclusion there was no alternative RDF format or vocabulary other than NIF with the required features to fulfill the working group charter. Five further uses of NIF are described for the Ontology of Linguistic Annotations (OLiA), the RDFaCE tool, the Tiger Corpus Navigator, the OntosFeeder and visualisations of NIF using the RelFinder tool. These 8 instances provide an implemented proof-of-concept of the features of NIF. starts with describing the conversion and hosting of the huge Google Wikilinks corpus with 40 million annotations for 3 million web sites. The resulting RDF dump contains 477 million triples in a 5.6 GB compressed dump file in turtle syntax. describes how NIF can be used to publish extracted facts from news feeds in the RDFLiveNews tool as Linked Data. Part V - Conclusions. provides lessons learned for NIF, conclusions and an outlook on future work. Most of the contributions are already summarized above. One particular aspect worth mentioning is the increasing number of NIF-formated corpora for Named Entity Recognition (NER) that have come into existence after the publication of the main NIF paper Integrating NLP using Linked Data at ISWC 2013. These include the corpora converted by Steinmetz, Knuth and Sack for the NLP & DBpedia workshop and an OpenNLP-based CoNLL converter by Brümmer. Furthermore, we are aware of three LREC 2014 submissions that leverage NIF: NIF4OGGD - NLP Interchange Format for Open German Governmental Data, N^3 – A Collection of Datasets for Named Entity Recognition and Disambiguation in the NLP Interchange Format and Global Intelligent Content: Active Curation of Language Resources using Linked Data as well as an early implementation of a GATE-based NER/NEL evaluation framework by Dojchinovski and Kliegr. Further funding for the maintenance, interlinking and publication of Linguistic Linked Data as well as support and improvements of NIF is available via the expiring LOD2 EU project, as well as the CSA EU project called LIDER, which started in November 2013. 
Based on the evidence of successful adoption presented in this thesis, we can expect a decent to high chance of reaching critical mass of Linked Data technology as well as the NIF standard in the field of Natural Language Processing and Language Resources.
4

Nozza, Debora. "Deep Learning for Feature Representation in Natural Language Processing." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2018. http://hdl.handle.net/10281/241185.

Full text of the source
Abstract:
The huge amount of textual user-generated content on the Web has incredibly grown in the last decade, creating new relevant opportunities for different real-world applications and domains. To overcome the difficulties of dealing with this large volume of unstructured data, the research field of Natural Language Processing has provided efficient solutions developing computational models able to understand and interpret human natural language without any (or almost any) human intervention. This field has gained in further computational efficiency and performance from the advent of the recent machine learning research lines concerned with Deep Learning. In particular, this thesis focuses on a specific class of Deep Learning models devoted to learning high-level and meaningful representations of input data in unsupervised settings, by computing multiple non-linear transformations of increasing complexity and abstraction. Indeed, learning expressive representations from the data is a crucial step in Natural Language Processing, because it involves the transformation from discrete symbols (e.g. characters) to a machine-readable representation as real-valued vectors, which should encode semantic and syntactic meanings of the language units. The first research direction of this thesis is aimed at giving evidence that enhancing Natural Language Processing models with representations obtained by unsupervised Deep Learning models can significantly improve the computational abilities of making sense of large volume of user-generated text. In particular, this thesis addresses tasks that were considered crucial for understanding what the text is talking about, by extracting and disambiguating the named entities (Named Entity Recognition and Linking), and which opinion the user is expressing, dealing also with irony (Sentiment Analysis and Irony Detection). For each task, this thesis proposes a novel Natural Language Processing model enhanced by the data representation obtained by Deep Learning. As second research direction, this thesis investigates the development of a novel Deep Learning model for learning a meaningful textual representation taking into account the relational structure underlying user-generated content. The inferred representation comprises both textual and relational information. Once the data representation is obtained, it could be exploited by off-the-shelf machine learning algorithms in order to perform different Natural Language Processing tasks. As conclusion, the experimental investigations reveal that models able to incorporate high-level features, obtained by Deep Learning, show significant performance and improved generalization abilities. Further improvements can be also achieved by models able to take into account the relational information in addition to the textual content.
5

Shao, Han. "Pretraining Deep Learning Models for Natural Language Understanding." Oberlin College Honors Theses / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin158955297757398.

Full text of the source
6

Sætre, Rune. "GeneTUC: Natural Language Understanding in Medical Text." Doctoral thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-545.

Full text of the source
Abstract:
Natural Language Understanding (NLU) is a 50-year-old research field, but its application to molecular biology literature (BioNLU) is less than 10 years old. After the complete human genome sequence was published by the Human Genome Project and Celera in 2001, there has been an explosion of research, shifting the NLU focus from domains like news articles to the domain of molecular biology and medical literature. BioNLU is needed, since there are almost 2000 new articles published and indexed every day, and biologists need to know about existing knowledge regarding their own research. So far, BioNLU results are not as good as in other NLU domains, so more research is needed to solve the challenges of creating useful NLU applications for biologists. The work in this PhD thesis is a "proof of concept". It is the first to show that an existing Question Answering (QA) system can be successfully applied in the hard BioNLU domain, after the essential challenge of unknown entities is solved. The core contribution is a system that discovers and classifies unknown entities and relations between them automatically. The World Wide Web (through Google) is used as the main resource, and the performance is almost as good as other named entity extraction systems, but the advantage of this approach is that it is much simpler and requires less manual labor than any of the other comparable systems. The first paper in this collection gives an overview of the field of NLU and shows how the Information Extraction (IE) problem can be formulated with Local Grammars. The second paper uses Machine Learning to automatically recognize protein names based on features from the GSearch Engine. In the third paper, GSearch is substituted with Google, and the task is to extract all unknown names belonging to one of 273 biomedical entity classes, like genes, proteins, processes etc. After getting promising results with Google, the fourth paper shows that this approach can also be used to retrieve interactions or relationships between the named entities. The fifth paper describes an online implementation of the system, and shows that the method scales well to a larger set of entities. The final paper concludes the "proof of concept" research, and shows that the performance of the original GeneTUC NLU system has increased from handling 10% of the sentences in a large collection of abstracts in 2001, to 50% in 2006. This is still not good enough to create a commercial system, but it is believed that another 40% performance gain can be achieved by importing more verb templates into GeneTUC, just like nouns were imported during this work. Work has already begun on this, in the form of a local Master's thesis.
7

Panesar, Kulvinder. "Natural language processing (NLP) in Artificial Intelligence (AI): a functional linguistic perspective." Vernon Press, 2020. http://hdl.handle.net/10454/18140.

Full text of the source
Abstract:
This chapter encapsulates the multi-disciplinary nature that facilitates NLP in AI and reports on a linguistically orientated conversational software agent (CSA) (Panesar 2017) framework sensitive to natural language processing (NLP) and language in the agent environment. We present a novel computational approach of using the functional linguistic theory of Role and Reference Grammar (RRG) as the linguistic engine. Viewing language as action, utterances change the state of the world, and hence speakers' and hearers' mental states change as a result of these utterances. The plan-based method of discourse management (DM) using the BDI model architecture is deployed to support a greater complexity of conversation. This CSA investigates the integration, intersection and interface of the language, knowledge, speech act constructions (SAC) as a grammatical object, and the sub-model of BDI and DM for NLP. We present an investigation into the intersection and interface between our linguistic and knowledge (belief base) models for both dialogue management and planning. The architecture has three-phase models: (1) a linguistic model based on RRG; (2) an Agent Cognitive Model (ACM) with (a) a knowledge representation model employing conceptual graphs (CGs) serialised to Resource Description Framework (RDF) and (b) a planning model underpinned by BDI concepts, intentionality and rational interaction; and (3) a dialogue model employing common ground. Use of RRG as a linguistic engine for the CSA was successful. We identify the complexity of the semantic gap of internal representations with details of a conceptual bridging solution.
8

Panesar, Kulvinder. "Conversational artificial intelligence - demystifying statistical vs linguistic NLP solutions." Universitat Politécnica de Valéncia, 2020. http://hdl.handle.net/10454/18121.

Full text of the source
Abstract:
This paper aims to demystify the hype and attention around chatbots and their association with conversational artificial intelligence. Both are slowly emerging as a real presence in our lives thanks to the impressive technological developments in machine learning, deep learning and natural language understanding solutions. However, what is under the hood, and how far and to what extent chatbots/conversational artificial intelligence solutions can work, is our question. Natural language is the most easily understood knowledge representation for people, but certainly not the best for computers because of its inherently ambiguous, complex and dynamic nature. We will critique the knowledge representation of heavy statistical chatbot solutions against linguistic alternatives. In order to react intelligently to the user, natural language solutions must critically consider other factors such as context, memory, intelligent understanding, previous experience, and personalized knowledge of the user. We will delve into the spectrum of conversational interfaces and focus on a strong artificial intelligence concept. This is explored via a text-based conversational software agent with a deep strategic role to hold a conversation and enable the mechanisms needed to plan, decide what to do next, and manage the dialogue to achieve a goal. To demonstrate this, a deeply linguistically aware and knowledge-aware, text-based conversational agent (LING-CSA) presents a proof-of-concept of a non-statistical conversational AI solution.
9

Välme, Emma, and Lea Renmarker. "Accelerating Sustainability Report Assessment with Natural Language Processing." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-445912.

Full text of the source
Abstract:
Corporations are expected to be transparent about their sustainability impact and keep their stakeholders informed about how large the impact on the environment is, as well as their work on reducing the impact in question. The transparency is accounted for in a usually voluntary sustainability report, in addition to the already required financial report. With new regulations for mandatory sustainability reporting in Sweden, comprehensive and complete guidelines for corporations to follow are insufficient and the reports tend to be extensive. The reports are therefore hard to assess in terms of how well the reporting is actually done. The Sustainability Reporting Maturity Grid (SRMG) is an assessment tool introduced by Cöster et al. (2020) used for assessing the quality of sustainability reporting. Today, the assessment is performed manually, which has proven to be time-consuming and to result in varying assessments, affected by individual interpretation of the content. This thesis explores how assessment time and grading with the SRMG can be improved by applying Natural Language Processing (NLP) to sustainability documents, resulting in a compressed assessment method, the Prototype. The Prototype intends to facilitate and speed up the process of assessment. The first step towards developing the Prototype was to decide which of three Machine Learning models, Naïve Bayes (NB), Support Vector Machines (SVM), or Bidirectional Encoder Representations from Transformers (BERT), is most suitable. This decision was supported by analyzing the accuracy of each model for the respective criteria in the SRMG, where BERT proved a strong classification ability with an average accuracy of 96.8%. Results from the user evaluation of the Prototype indicated that the assessment time can be halved using the Prototype, with an initial average of 40 minutes decreased to 20 minutes. However, the results further showed a decreased average grading and an increased variation in assessment. The results indicate that applying NLP could be successful, but to get a more competitive Prototype, a more nuanced dataset must be developed, giving more space for the model to detect patterns in the data.
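For readers unfamiliar with this kind of model comparison, the sketch below contrasts a Naive Bayes and a linear SVM text classifier on TF-IDF features with scikit-learn; the toy sentences and labels are invented, and the thesis's SRMG dataset and its BERT model are not reproduced here:

```python
# Sketch of a classifier comparison on TF-IDF features with scikit-learn.
# The toy data is invented; it only illustrates the comparison workflow.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

texts = [
    "we report scope 1 and scope 2 emissions with external assurance",
    "our emission targets are verified against science based targets",
    "detailed water usage figures are audited by a third party",
    "the board reviews quantified climate risks every quarter",
    "we care deeply about the planet and our wonderful employees",
    "sustainability is part of our company spirit and culture",
    "we aim to be greener in the future in many ways",
    "our products make the world a better place",
]
labels = ["quantified", "quantified", "quantified", "quantified",
          "aspirational", "aspirational", "aspirational", "aspirational"]

for name, clf in [("Naive Bayes", MultinomialNB()), ("Linear SVM", LinearSVC())]:
    model = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(model, texts, labels, cv=2)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```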
10

Djoweini, Camran, and Henrietta Hellberg. "Approaches to natural language processing in app development." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230167.

Full text of the source
Abstract:
Natural language processing is an ongoing field that is not yet fully established. A high demand for natural language processing in applications creates a need for good development tools and different implementation approaches developed to suit the engineers behind the applications. This project approaches the field from an engineering point of view to research approaches, tools, and techniques that are readily available today for development of natural language processing support. The sub-area of information retrieval of natural language processing was examined through a case study, where prototypes were developed to get a deeper understanding of the tools and techniques used for such tasks from an engineering point of view. We found that there are two major approaches to developing natural language processing support for applications: high-level and low-level approaches. A categorization of tools and frameworks belonging to the two approaches, as well as the source code, documentation, and evaluations of two prototypes developed as part of the research, are presented. The choice of approach, tools and techniques should be based on the specifications and requirements of the final product, and both levels have their own pros and cons. The results of the report are, to a large extent, generalizable as many different natural language processing tasks can be solved using similar solutions even if their goals vary.
More sources

Books on the topic "Natural Language Processes (NLP)"

1

Christodoulakis, Dimitris N., ed. Natural Language Processing — NLP 2000. Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-45154-4.

Full text of the source
2

Herbert, Robert K. Language universals, markedness theory, and natural phonetic processes. Mouton de Gruyter, 1986.

Find the full text of the source
3

International Conference on Natural Language Processing (2nd 2000 Patrai, Greece). Natural language processing - NLP 2000: Second International Conference, Patras, Greece, June 2-4, 2000 : proceedings. Springer, 2000.

Find the full text of the source
4

Oppentocht, Anna Linnea. Lexical semantic classification of Dutch verbs: Towards constructing NLP and human-friendly definitions. LEd, 1999.

Find the full text of the source
5

Gurevych, Iryna. The People’s Web Meets NLP: Collaboratively Constructed Language Resources. Springer Berlin Heidelberg, 2013.

Find the full text of the source
6

Zong, Chengqing, Chinese Association for Artificial Intelligence, IEEE Signal Processing Society, IEEE Systems, Man, and Cybernetics Society, and Institute of Electrical and Electronics Engineers Beijing Section, eds. 2003 International Conference on Natural Language Processing and Knowledge Engineering: Proceedings: NLP-KE 2003: Beijing, China. IEEE, 2003.

Find the full text of the source
7

International Conference on Natural Language Processing and Knowledge Engineering (2007 Beijing, China). Proceedings of International Conference on Natural Language Processing and Knowledge Engineering (NLP-KE'07) : Aug. 30-Sep. 1, Beijing China. IEEE, 2007.

Find the full text of the source
8

Loftsson, Hrafn, Eiríkur Rögnvaldsson, and Sigrún Helgadóttir, eds. Advances in natural language processing: 7th International Conference on NLP, IceTAL 2010, Reykjavik, Iceland, August 16-18, 2010 : proceedings. Springer, 2010.

Find the full text of the source
9

Kanzaki, Kyoko, and SpringerLink (Online service), eds. Advances in Natural Language Processing: 8th International Conference on NLP, JapTAL 2012, Kanazawa, Japan, October 22-24, 2012. Proceedings. Springer Berlin Heidelberg, 2012.

Find the full text of the source
10

Boschetti Adamczyk, Willian, Verónica Escudero, Hannah Liepmann, Simon Boehmer, and Isaure Delaporte. Developing a new method to uncover skills trends in emerging economies using online data and NLP techniques. ILO, 2025. https://doi.org/10.54394/hqqx3200.

Full text of the source
Abstract:
This research brief outlines a novel methodology using online big data and natural language processing (NLP) to analyze transferable skills trends across diverse sectors and countries, offering insights into skills demand, supply, mismatches, and their relationship to job quality and transitions.
More sources

Book chapters on the topic "Natural Language Processes (NLP)"

1

Kubsch, Marcus, Daniela Caballero, and Pablo Uribe. "Once More with Feeling: Emotions in Multimodal Learning Analytics." In The Multimodal Learning Analytics Handbook. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-08076-0_11.

Full text of the source
Abstract:
The emotions that students experience when engaging in tasks critically influence their performance and many models of learning and competence include assumptions about affective variables and respective emotions. However, while researchers agree about the importance of emotions for learning, it remains challenging to connect momentary affect, i.e., emotions, to learning processes. Advances in automated speech recognition and natural language processing (NLP) allow real time detection of emotions in recorded language. We use NLP and machine learning techniques to automatically extract information about students' motivational states while engaging in the construction of explanations and investigate how this information can help more accurately predict students' learning over the course of a 10-week energy unit. Our results show how NLP and ML techniques allow the use of different modalities of the same data in order to better understand individual differences in students' performances. However, in realistic settings, this task remains far from trivial and requires extensive preprocessing of the data and the results need to be interpreted with care and caution. Thus, future research is needed before these methods can be deployed at scale.
2

Roy, Piyal, and Rajat Pandit. "Natural Language Processing Applications." In Advances in Computational Intelligence and Robotics. IGI Global, 2025. https://doi.org/10.4018/979-8-3693-6255-6.ch007.

Full text of the source
Abstract:
Natural language processing (NLP) stands at the forefront of innovation, revolutionizing communication between humans and machines. The chapter discusses practical applications across diverse domains such as healthcare, finance, customer service, social media analysis, e-commerce, legal, education, and journalism, emphasizing NLP's pivotal role in enhancing efficiency and decision-making processes. However, challenges like bias, data quality, and ethical concerns necessitate interdisciplinary collaboration for mitigation. Recent advances in deep learning, pre-trained language models, transfer learning, multimodal NLP, and few-shot/zero-shot learning are highlighted for their transformative impact. Looking ahead, the chapter advocates for continued research to address model fairness, interpretability, and ethical considerations.
3

Ghosh, Subhajit. "Natural Language Processing: Basics, Challenges, and Clustering Applications." In A Handbook of Computational Linguistics: Artificial Intelligence in Natural Language Processing. BENTHAM SCIENCE PUBLISHERS, 2024. http://dx.doi.org/10.2174/9789815238488124020006.

Abstract:
Natural Language Processing (NLP) involves the use of algorithms, models, and various computational techniques to analyze, process, and generate natural language data, including speech and text. NLP helps computers interact with humans in a more natural way, which has become increasingly important as more human-computer interactions take place. NLP allows machines to process and analyze voluminous unstructured data, including social media posts, newspaper articles, customer reviews, emails, and more. It helps organizations extract insights, automate tasks, and improve decision-making by enabling machines to understand and generate human-like language. A linguistic background is essential for understanding NLP: linguistic theories and models inform the development of natural language understanding (NLU) systems, so NLP specialists need to understand the structure and rules of language. NLU systems are organized into different components, including language modelling, parsing, and semantic analysis, and they may be assessed with metrics such as precision, recall, and the F1 score. Semantics and knowledge representation are central to NLU, as they involve understanding the meaning of words and sentences and representing this information in a way that machines can use. Approaches to knowledge representation include semantic networks, ontologies, and vector embeddings. Language modelling is an essential step in NLP, with applications in speech recognition, text generation, text completion, and machine translation. Ambiguity resolution remains a major challenge, as language is often ambiguous and context-dependent. Common applications of NLP include sentiment analysis, chatbots, virtual assistants, machine translation, speech recognition, text classification, text summarization, and information extraction. In this chapter, we show the applicability of a popular unsupervised learning technique, namely clustering through K-Means, and demonstrate how the efficiency of the K-Means algorithm can be improved through the use of an optimization loop. The prospects for NLP are promising, with increasing demand for AI-powered language technologies in various industries, including healthcare, finance, and e-commerce, and a growing need for ethical and responsible AI systems that are transparent and accountable.
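The chapter's experiments are not reproduced in this abstract; the short Python sketch below only illustrates the generic pattern it refers to: vectorizing a handful of documents, clustering them with K-Means, and running a simple optimization loop over the number of clusters, scored here with the silhouette coefficient. The toy documents and the choice of silhouette as the selection criterion are assumptions for illustration, not the chapter's actual setup.

# Sketch: cluster a few toy documents with K-Means and use a simple optimization
# loop over k, keeping the cluster count with the best silhouette score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

docs = [
    "stock markets fell sharply on inflation fears",
    "the central bank raised interest rates again",
    "a new vaccine shows strong results in clinical trials",
    "the hospital reports a drop in flu infections",
    "the team wins the championship after a dramatic final",
    "the star striker signs a record transfer deal",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)

best_k, best_score = None, -1.0
for k in range(2, 5):  # the "optimization loop": try several cluster counts
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    score = silhouette_score(X, km.labels_)
    if score > best_score:
        best_k, best_score = k, score

print(f"best k = {best_k}, silhouette = {best_score:.2f}")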
4

Liu, MQ. "Applying Natural Language Processing to Assessment." In Talent Assessment. Oxford University PressNew York, 2023. http://dx.doi.org/10.1093/oso/9780197611050.003.0005.

Abstract:
Natural language processing (NLP), which enables machines to automatically process and learn from human natural language data, is at the core of many real-world applications. Recent years have seen initial attempts to apply NLP in assessment. The goal of this chapter is to discuss how NLP can be applied to improve assessment in the selection context. The chapter opens with an introduction to NLP: what it is and how NLP techniques have evolved over time. It then dives into how various NLP methods can be applied across the assessment cycle, from assessment construction to analyzing assessment responses to predicting job-related outcomes. The chapter ends with a discussion of the key challenges and opportunities facing assessment researchers and practitioners when considering using NLP for assessment.
5

Jackson, G. Tanner, and Danielle S. McNamara. "Applying NLP Metrics to Students’ Self-Explanations." In Applied Natural Language Processing. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-60960-741-8.ch015.

Abstract:
Intelligent Tutoring Systems (ITSs) are becoming an increasingly common method for students to engage with and learn course material. ITSs are designed to provide students with one-on-one learning that is tailored to their own pace and needs. These systems can adapt to each user's individual knowledge and ability level to provide the most pedagogically effective learning environment. Tutoring systems have been designed to cover a variety of topics, including both well-defined and ill-defined domains. ITSs have seen great success within well-defined domains, where the topic itself allows only a limited set of responses. For example, in the domain of algebra, there is a limited set of possible actions that can be performed to solve for an unknown variable. Knowing this complete set of actions allows the tutoring system to predict all possible responses from the user. In contrast, ill-defined domains are more abstract and open-ended. Reading comprehension is an ill-defined, open-ended domain that can incorporate text from any subject and involve numerous processes and problems for the learner. The number of associations that learners can make with a given text (e.g., based on personal memories, previous courses, ideas within different parts of the same text, etc.) is virtually infinite. These associations make it almost impossible to predict how a user will respond to a text. In addition to working with more abstract concepts, ITSs within ill-defined domains often face the added challenge of interpreting natural language user input. Incorporating natural language allows learners to use their own words and ideas as they interact with the content; however, this also increases the ambiguity of the interaction and decreases the system's ability to build a precise model of the learner. Building an accurate learner model is essential for the system to adapt the interaction in a pedagogically appropriate manner.
6

Rajendran, Rajesh Kanna, Mohana Priya T., and Karthick Chitrarasu. "Natural Language Processing (NLP) in Chatbot Design." In Design and Development of Emerging Chatbot Technology. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-1830-0.ch006.

Abstract:
The creation and development of chatbots, among the most prevalent manifestations of artificial intelligence (AI) and machine learning (ML) in today's digital world, are built on Natural Language Processing (NLP), which serves as their cornerstone. This chapter investigates the significant part NLP plays in determining the development and effectiveness of chatbots, from their beginnings as personal virtual assistants through their seamless incorporation into messaging platforms and smart home devices. The study delves into the technological complexities and emphasizes the problems and improvements in NLP algorithms and natural language understanding (NLU) systems. These systems are essential in enabling chatbots to grasp context, decode user intent, and provide contextually appropriate replies in real time. In spite of the substantial progress that has been made, chatbots continue to struggle with significant constraints.
7

Singh, Archana, Girish Lakhera, Megha Ojha, Amar Kumar Mishra, and Sanjay Kaushik. "Natural Language Processing for HR Chatbots and Virtual Assistants." In Advances in Logistics, Operations, and Management Science. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-3318-1.ch020.

Abstract:
The purpose of this study is to explore fundamental NLP concepts, key techniques, and applications across various HR processes. It addresses the challenges and limitations faced by NLP-driven HR chatbots and virtual assistants while presenting real-world success stories from leading organizations. The data for the present study were collected from secondary sources, drawing on the available articles and research papers. To develop a theoretical framework, the literature on NLP, HR technology, and e-HRM operations is reviewed. The chapter underscores the significant role of NLP in transforming HR chatbots and virtual assistants; however, to fully harness the potential of NLP in HR applications, it is crucial to address these challenges and adopt ethical practices. The chapter also delves into the future prospects of NLP in HR, considering advancements in NLP technology and its potential fusion with emerging technologies such as machine learning and conversational AI.
8

Dowell, Nia, and Vitomir Kovanović. "Modeling Educational Discourse with Natural Language Processing." In The Handbook of Learning Analytics, 2nd ed. SOLAR, 2022. http://dx.doi.org/10.18608/hla22.011.

Abstract:
The broadening adoption of technology-enhanced learning environments has substantially altered the manner in which educational communication takes place, with most people engaging in some form of online asynchronous or synchronous conversation every day. The language and discourse artifacts emerging from these technological environments are a rich source of information about learning processes and outcomes. This chapter describes the current landscape of natural language processing (NLP) tools and approaches available to researchers and practitioners to computationally discern patterns in large quantities of text-based conversations that take place across a variety of educational technology platforms. The capabilities of NLP are particularly important because, in the field of learning analytics, we aim to learn effectively and efficiently about the process of learning by observing learners, and then use that information to improve learning. We conclude the chapter with a discussion of emerging applications (i.e., sensing technologies, breakthroughs in AI, and cloud computing) and the challenges of applying NLP tools to educational discourse.
9

Singh, Arun Kumar, Ashish Tripathi, Sandeep Saxena, Pushpa Choudhary, Mahesh Kumar Singh, and Arjun Singh. "Impact of Deep Learning on Natural Language Processing." In Artificial Intelligence, Machine Learning and User Interface Design. BENTHAM SCIENCE PUBLISHERS, 2024. http://dx.doi.org/10.2174/9789815179606124010005.

Abstract:
In the era of digitalization, electronic gadgets such as Google Translate, Siri, and Alexa share at least one characteristic: they are all products of natural language processing (NLP). "Natural language" refers to a human language used for daily communication, such as English, Hindi, or Bengali. Unlike artificial languages such as programming languages and mathematical notation, natural languages have evolved as they were transmitted from generation to generation and are difficult to describe with clear-cut rules. Natural language processing is related to artificial intelligence (Singh et al., 2021), linguistics, information processing, and cognitive science. NLP aims to process human language with intelligent computational techniques, and NLP technologies such as voice recognition, language comprehension, and machine translation already exist. With few obvious exceptions, however, traditional machine learning algorithms in NLP often lacked the capacity to consume massive amounts of training data, and the available algorithms, techniques, and infrastructure were not powerful enough. In traditional machine learning, features are designed by humans, and this feature engineering is a bottleneck that requires significant human expertise. At the same time, the accompanying shallow algorithms lack representational capacity and therefore the ability to build layers of reusable abstractions that would naturally disentangle the intricate factors underlying observable linguistic data. Deep learning overcomes these challenges by using deep, layered modelling architectures, typically neural networks, together with the associated end-to-end learning methods. Deep learning has recently advanced natural language processing by using artificial neural networks inspired by biological neural systems and trained with backpropagation. Deep learning approaches that use several processing layers to build hierarchical data representations have produced cutting-edge results in various areas. This chapter introduces natural language processing as a component of AI and reviews its history. Distributed language representations are at the core of NLP's deep learning revolution. After this survey, the limits of deep learning for NLP are examined, and five scientific areas of NLP are outlined.
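The chapter itself contains no code; as a small illustration of the distributed word representations it highlights, the sketch below trains a toy word2vec model with the gensim library on three invented sentences. A corpus this size cannot yield meaningful neighbours; the example only shows the mechanics of producing dense vectors, and the parameter values are arbitrary.

# Sketch: train a tiny distributed word representation (word2vec) with gensim.
# The three-sentence corpus is far too small to give meaningful neighbours;
# this only demonstrates the mechanics of producing dense word vectors.
from gensim.models import Word2Vec

corpus = [
    "natural language processing maps words to dense vectors".split(),
    "deep learning learns layered representations of language".split(),
    "neural networks process language and are trained with backpropagation".split(),
]

model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=100)

print(model.wv["language"][:5])                   # first 5 dimensions of one vector
print(model.wv.most_similar("language", topn=3))  # nearest neighbours (toy corpus)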
10

Arnone, Gioia. "NATURAL LANGUAGE PROCESSING VIA CHATBOTS." In Futuristic Trends in Blockchain Applications Volume 3 Book 1. Iterative International Publisher, Selfypage Developers Pvt Ltd, 2024. http://dx.doi.org/10.58532/v3bgba1p9ch3.

Abstract:
Chatbots are very common today, providing AI support to customers and driving further development of AI. This chapter discusses how NLP and machine learning are used to build chatbots, how these systems analyze human language and reproduce it in their replies, and how procedures for understanding human language are developed. It considers the process of analyzing human tones and expressions, how this fits into the wider world of artificial intelligence, and what the future may hold. It also addresses how to develop more intelligent and creative chatbots that can differentiate between human tones and synthesize their own creative responses, how NLP provides the intelligent virtual assistance that distinguishes one chatbot from another, and how machine learning of the human voice can be improved.

Conference abstracts on the topic "Natural Language Processes (NLP)"

1

Pabalkar, Sarika, Prashant Patel, Ramila Choudhary, Vanshika Panoch, Sahil Yadav, and Harsh Ghogale. "Resume Analyzer Using Natural Language Processing (NLP)." In 2024 International Conference on Intelligent Systems and Advanced Applications (ICISAA). IEEE, 2024. https://doi.org/10.1109/icisaa62385.2024.10828940.

2

Braun, Daniel. "Teaching Natural Language Processing in Law School." In Proceedings of the Sixth Workshop on Teaching NLP. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.teachingnlp-1.13.

3

McCrae, John. "Co-Creational Teaching of Natural Language Processing." In Proceedings of the Sixth Workshop on Teaching NLP. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.teachingnlp-1.5.

4

Singh, Dipali. "Legal documents Text Analysis using Natural Language Processing (NLP)." In 2024 2nd International Conference on Self Sustainable Artificial Intelligence Systems (ICSSAS). IEEE, 2024. https://doi.org/10.1109/icssas64001.2024.10760929.

5

Parde, Natalie. "Example-Driven Course Slides on Natural Language Processing Concepts." In Proceedings of the Sixth Workshop on Teaching NLP. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.teachingnlp-1.2.

6

Krause, Lea, and Piek T. J. M. Vossen. "The Gricean Maxims in NLP - A Survey." In Proceedings of the 17th International Natural Language Generation Conference. Association for Computational Linguistics, 2024. https://doi.org/10.18653/v1/2024.inlg-main.39.

7

Das, Satyesh, and Divyesh Das. "Natural Language Processing (NLP) Techniques: Usability in Human-Computer Interactions." In 2024 6th International Conference on Natural Language Processing (ICNLP). IEEE, 2024. http://dx.doi.org/10.1109/icnlp60986.2024.10692776.

8

Askarov, Mirzokhid, Alisher Gafforov, Adolat Darmonova, Mokhlaroyim Dadakhonova, Tokhirjon Ismailov, and Ugiljon Qushnazarova. "Preserving Historical Documents Using OCR and Natural Language Processing (NLP)." In 2025 International Conference on Computational Innovations and Engineering Sustainability (ICCIES). IEEE, 2025. https://doi.org/10.1109/iccies63851.2025.11032769.

9

Sun, Jinghua, Samuel Edwards, Nic Connelly, Andrew Bridge, and Lei Zhang. "Airborne Software Development Processes Certification Review Strategy based on RTCA/DO-178C." In 10th International Conference on Natural Language Processing (NLP 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.112321.

Abstract:
Airborne software provides increasingly complex functionality for aircraft. Highly complex software, however, cannot be exhaustively tested and can only be assured through a structured approach based on processes, activities, and objectives. This paper studies the development processes and objectives applicable to different software levels under RTCA/DO-178C, identifies 82 technical focus points across the airborne software development sub-processes, and builds a Process Technology Coverage matrix to show the technical focus of each process. It then proposes an objective-oriented, top-down and bottom-up sampling strategy for the four software Stage of Involvement (SOI) reviews, taking the frequency and depth of involvement into account. Finally, the paper provides a Technology Objective Coverage matrix that helps reviewers perform efficient, risk-based SOI reviews around the identified technical points, thereby building confidence in the level of safety of the aircraft from a software assurance perspective.
10

Sintoris, Konstantinos, and Kostas Vergidis. "Extracting Business Process Models Using Natural Language Processing (NLP) Techniques." In 2017 IEEE 19th Conference on Business Informatics (CBI). IEEE, 2017. http://dx.doi.org/10.1109/cbi.2017.41.


Organizational reports on the topic "Natural Language Processes (NLP)"

1

Xu, Chao, Walter Forkel, Stefan Borgwardt, Franz Baader, and Beihai Zhou. Automatic Translation of Clinical Trial Eligibility Criteria into Formal Queries. Technische Universität Dresden, 2019. http://dx.doi.org/10.25368/2023.224.

Abstract:
Selecting patients for clinical trials is very labor-intensive. Our goal is to develop an automated system that can support doctors in this task. This paper describes a major step towards such a system: the automatic translation of clinical trial eligibility criteria from natural language into formal, logic-based queries. First, we develop a semantic annotation process that can capture many types of clinical trial criteria. Then, we map the annotated criteria to the formal query language. We have built a prototype system based on state-of-the-art NLP tools such as Word2Vec, Stanford NLP tools, and the MetaMap Tagger, and have evaluated the quality of the produced queries on a number of criteria from clinicaltrials.gov. Finally, we discuss some criteria that were hard to translate, and give suggestions for how to formulate eligibility criteria to make them easier to translate automatically.
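The prototype described above (built on Word2Vec, the Stanford NLP tools, and the MetaMap Tagger) is not reproduced here; the toy Python sketch below only illustrates the general idea of turning a natural-language eligibility criterion into a structured query, using a single hand-written regular expression instead of the authors' annotation pipeline, and an invented dictionary-style query format.

# Toy illustration only: turn a simple age-based eligibility criterion into a
# structured query. The report's actual system uses semantic annotation,
# Word2Vec, the Stanford NLP tools, and MetaMap; none of that is shown here,
# and the query format below is invented.
import re

def parse_age_criterion(text: str):
    """Map phrases like 'patients aged 18 years or older' to a query dict."""
    m = re.search(r"aged\s+(\d+)\s+years?\s+or\s+(older|younger)", text, re.IGNORECASE)
    if m is None:
        return None
    value, direction = int(m.group(1)), m.group(2).lower()
    operator = ">=" if direction == "older" else "<="
    return {"attribute": "age", "operator": operator, "value": value}

print(parse_age_criterion("Inclusion: patients aged 18 years or older"))
# -> {'attribute': 'age', 'operator': '>=', 'value': 18}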
2

Liang, Xiao. Analyzing the Amazon Shopping Experience: A Sentiment Analysis Based on Natural Language Processing (NLP) and Model Comparison. Iowa State University, 2024. http://dx.doi.org/10.31274/cc-20240624-215.

3

Alonso-Robisco, Andres, and Jose Manuel Carbo. Analysis of CBDC Narrative OF Central Banks using Large Language Models. Banco de España, 2023. http://dx.doi.org/10.53479/33412.

Abstract:
Central banks are increasingly using verbal communication for policymaking, focusing not only on traditional monetary policy but also on a broad set of topics. One such topic is central bank digital currency (CBDC), which is attracting attention from the international community. The complex nature of this project means that it must be carefully designed to avoid unintended consequences, such as financial instability. We propose the use of different Natural Language Processing (NLP) techniques to better understand central banks' stance towards CBDC, analyzing a set of central bank discourses from 2016 to 2022. We do this using traditional techniques, such as dictionary-based methods, and two large language models (LLMs), namely BERT and ChatGPT, concluding that LLMs better reflect the stance identified by human experts. In particular, we observe that ChatGPT exhibits a higher degree of alignment because it can capture subtler information than BERT. Our study suggests that LLMs are an effective tool for improving sentiment measurements for policy-specific texts, though they are not infallible and may be subject to new risks, such as higher sensitivity to the length of texts and to prompt engineering.
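None of the paper's lexicons or prompts appear in this abstract; the fragment below is only a generic sketch of what a dictionary-based scoring pass over a central-bank sentence looks like, built around a tiny invented lexicon. The LLM-based comparisons with BERT and ChatGPT are not reproduced.

# Generic dictionary-based scoring sketch with an invented mini-lexicon; this is
# the style of baseline the paper compares against BERT and ChatGPT, not the
# authors' actual dictionary.
POSITIVE = {"benefit", "benefits", "efficiency", "innovation", "stability"}
NEGATIVE = {"risk", "risks", "instability", "disintermediation", "concern"}

def dictionary_score(sentence: str) -> float:
    """Return a score in [-1, 1]: (positive hits - negative hits) / total hits."""
    tokens = [t.strip(".,;:").lower() for t in sentence.split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    hits = pos + neg
    return 0.0 if hits == 0 else (pos - neg) / hits

print(dictionary_score("A CBDC may bring efficiency gains but raises disintermediation risks."))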
4

Alonso-Robisco, Andrés, José Manuel Carbó, et al. Empowering financial supervision: a SupTech experiment using machine learning in an early warning system. Banco de España, 2025. https://doi.org/10.53479/39320.

Abstract:
New technologies have made available a vast amount of new data in the form of text, recording an exponentially increasing share of human and corporate behavior. For financial supervisors, the information encoded in text is a valuable complement to the more traditional balance sheet data typically used to track the soundness of financial institutions. In this study, we exploit several natural language processing (NLP) techniques as well as network analysis to detect anomalies in the Spanish corporate system, identifying both idiosyncratic and systemic risks. We use sentiment analysis at the corporate level to detect sentiment anomalies for specific corporations (idiosyncratic risks), while employing a wide range of network metrics to monitor systemic risks. In the realm of supervisory technology (SupTech), anomaly detection in sentiment analysis serves as a proactive tool for financial authorities. By continuously monitoring sentiment trends, SupTech applications can provide early warnings of potential financial distress or systemic risks.
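The study's data, thresholds, and network metrics are not reproduced here; the sketch below only illustrates the basic idea of flagging a firm-level sentiment reading as anomalous when it deviates sharply from its own history, using an invented series and an arbitrary two-standard-deviation rule.

# Sketch: flag anomalous sentiment readings with a simple z-score rule.
# The monthly scores and the 2-standard-deviation threshold are invented;
# the paper additionally combines firm-level sentiment with network metrics.
import statistics

sentiment = [0.12, 0.10, 0.15, 0.11, 0.13, -0.42, 0.12, 0.14]  # toy monthly scores

mean = statistics.mean(sentiment)
std = statistics.stdev(sentiment)

anomalies = [(month, score) for month, score in enumerate(sentiment)
             if abs(score - mean) > 2 * std]
print(anomalies)  # only the sharp negative swing is flagged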
5

Burstein, Jill, Geoffrey LaFlair, Antony Kunnan, and Alina von Davier. A Theoretical Assessment Ecosystem for a Digital-First Assessment - The Duolingo English Test. Duolingo, 2022. http://dx.doi.org/10.46999/kiqf4328.

Abstract:
The Duolingo English Test is a groundbreaking, digital-first, computer-adaptive measure of English language proficiency for communication and use in English-medium settings. The test measures four key English language proficiency constructs: Speaking, Writing, Reading, and Listening (SWRL), and is aligned with the Common European Framework of Reference for Languages (CEFR) proficiency levels and descriptors. As a digital-first assessment, the test uses "human-in-the-loop AI" from end to end for test security, automated item generation, and scoring of test-taker responses. This paper presents a novel theoretical assessment ecosystem for the Duolingo English Test. It is a theoretical representation of language assessment design, measurement, and test security processes, as well as the test-taker experience factors that contribute to the test validity argument and test impact. The test validity argument is constructed with a digitally informed chain of inferences that addresses the digital affordances applied to the test. The ecosystem is composed of an integrated set of complex frameworks: (1) the Language Assessment Design Framework, (2) the Expanded Evidence-Centered Design Framework, (3) the Computational Psychometrics Framework, and (4) the Test Security Framework. Test-taker experience (TTX) is a priority throughout the test-taking pipeline, reflected in features such as low cost, anytime/anywhere delivery, and shorter testing time. The test's expected impact is aligned with Duolingo's social mission to lower barriers to education access and to offer a secure and delightful test experience while providing a valid, fair, and reliable test score. The ecosystem leverages principles from assessment theory, computational psychometrics, design, data science, language assessment theory, NLP/AI, and test security.
6

Avellán, Leopoldo, and Steve Brito. Crossroads in a Fog: Navigating Latin America's Development Challenges with Text Analytics. Inter-American Development Bank, 2023. http://dx.doi.org/10.18235/0005489.

Abstract:
Latin America and the Caribbean are facing challenging times due to a combination of worsening development gaps and limited fiscal space to address them. Furthermore, the region is contending with an unfavorable external environment. Issues such as rising poverty, climate change, inadequate infrastructure, and low-quality education and health services, among others, require immediate attention. Deciding how to prioritize efforts to address these development gaps is challenging due to their complexity and urgency, and setting priorities becomes even more difficult when resources are limited. Therefore, it is crucial to have tools that help policymakers prioritize current development challenges to guide the allocation of financial support from international financial institutions and other development partners. This paper contributes to this discussion by using Natural Language Processing (NLP) to identify the most critical development areas. It applies these techniques to detailed periodic country analysis reports (Country Development Challenges, CDCs) prepared by country economists at the Inter-American Development Bank (IDB) from 2015 to 2021. The study reveals that despite the perception that new development challenges have become more critical lately, the region continues to struggle with the same challenges from the past, particularly those related to the government's institutional capacity, fiscal policy, education, productivity and firms, infrastructure, and poverty.
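The Country Development Challenges reports themselves are not available in this listing; the sketch below only shows the generic pattern of surfacing recurring themes from short policy-style snippets with latent Dirichlet allocation in scikit-learn. The snippets, the number of topics, and the choice of LDA are assumptions for illustration, not the paper's actual pipeline.

# Sketch: surface recurring themes in short policy-style snippets with LDA.
# The snippets and the two-topic setting are invented; the paper analyzes the
# IDB's Country Development Challenges reports.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

snippets = [
    "weak institutional capacity limits public investment execution",
    "fiscal space is narrow and public debt keeps rising",
    "low quality education holds back the productivity of firms",
    "infrastructure gaps raise logistics costs for firms",
    "poverty and informality remain persistently high",
    "tax administration and institutional capacity need strengthening",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(snippets)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-4:]]
    print(f"topic {i}: {top_terms}")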
7

Pasupuleti, Murali Krishna. Decentralized Creativity: AI-Infused Blockchain for Secure and Transparent Digital Innovation. National Education Services, 2025. https://doi.org/10.62311/nesx/rrvi125.

Abstract:
The convergence of artificial intelligence (AI) and blockchain technology is transforming the creative economy by enabling secure, transparent, and decentralized innovation in digital content creation, intellectual property management, and monetization. Traditional creative industries are often constrained by centralized platforms, opaque copyright enforcement, and unfair revenue distribution, which limit the autonomy and financial benefits of creators. By leveraging blockchain's immutable ledger, smart contracts, and non-fungible tokens (NFTs), digital assets can be authenticated, tokenized, and securely traded, ensuring ownership verification and automated royalty distribution. Simultaneously, AI-driven tools such as generative adversarial networks (GANs), neural networks, and natural language processing (NLP) models facilitate content generation, curation, and adaptive recommendations, enhancing creative workflows and fostering new artistic possibilities. This research report explores the synergies between AI and blockchain in the decentralized creative economy, analyzing their impact on digital rights protection, NFT marketplaces, decentralized publishing, AI-assisted music composition, and smart licensing models. Furthermore, it examines regulatory challenges, ethical considerations, and scalability limitations that need to be addressed for mainstream adoption. By integrating AI-powered automation with blockchain's decentralized infrastructure, this study outlines a sustainable roadmap for secure, fair, and transparent digital creativity in the Web3 era.
Keywords: AI-powered creativity, blockchain-based digital ownership, decentralized innovation, generative AI, smart contracts, non-fungible tokens (NFTs), digital content authentication, AI-driven content generation, decentralized autonomous organizations (DAOs), intellectual property management, AI in art and music, Web3 creativity, tokenized digital assets, secure content monetization, ethical AI in blockchain, AI-assisted copyright protection, decentralized publishing, AI-powered music composition, blockchain scalability, AI for digital rights management.
8

JOSI, Editor. How GPT Enhances Soft Systems Methodologies in Industrial Engineering. Industrial Engineering Department, Faculty of Engineering, Universitas Andalas, 2024. https://doi.org/10.25077/11122024-2.

Abstract:
Industrial systems are rarely straightforward. They often involve complex, ill-structured problems influenced by human factors, organizational culture, and decision-making dynamics. These types of challenges, often addressed through soft systems methodologies, require innovative approaches to bridge gaps between technology, people, and processes. Generative Pre-trained Transformers (GPT), with their advanced natural language understanding capabilities, have emerged as a promising tool for optimizing soft systems in industrial engineering. In this post, we explore how GPT can support soft systems methodologies by enhancing decision-making and facilitating effective communication among stakeholders.
9

Lindo-Ocampo, Gloria Inés, and Hilda Clarena Buitrago-García. English for Business Course. Thematic Unit: Business Events. Ediciones Universidad Cooperativa de Colombia, 2022. http://dx.doi.org/10.16925/gcnc.24.

Abstract:
This didactic unit is aimed at fifth-semester students of the Business Administration, Marketing and International Business program who have already completed the four basic levels of the Open Lingua program. This proposal seeks to develop skills and competencies that allow them to perform in different fields related to private, public, and solidarity economy companies, and in various mediation and negotiation processes at the national and international levels. The instructional design of this unit contains real-life situations, focused on the world of business, that allow students to interact in various types of business events. The grammatical and lexical concepts necessary to interact successfully in these types of communicative situations are introduced and applied. The educational activities are designed to offer opportunities to interact in business conferences, international exhibitions, and seminars, among others. The contents are framed in natural and meaningful contexts, which leads to a greater understanding of the type of language used in business and the way it is used to communicate. The contents are structured in three lessons in which the level of complexity of the topics, tasks, texts, and transitions (4Ts) has been considered. Various types of activities are also included that activate and reinforce previous knowledge and subsequently evaluate students' progress.
10

Or, Etti, David Galbraith, and Anne Fennell. Exploring mechanisms involved in grape bud dormancy: Large-scale analysis of expression reprogramming following controlled dormancy induction and dormancy release. United States Department of Agriculture, 2002. http://dx.doi.org/10.32747/2002.7587232.bard.

Abstract:
The timing of dormancy induction and release is very important to the economic production of table grapes. Advances in the manipulation of dormancy induction and dormancy release depend on establishing a comprehensive understanding of the biological mechanisms involved in bud dormancy. To gain insight into these mechanisms we initiated research with two main objectives: A. analyzing the expression profiles of large subsets of genes following controlled dormancy induction and dormancy release, and assessing the role of known metabolic pathways, known regulatory genes and novel sequences involved in these processes; B. comparing expression profiles following the perception of various artificial as well as natural signals known to induce dormancy release, and searching for genes showing similar expression patterns as candidates for further study of pathways with the potential to play a central role in dormancy release. We first created targeted EST collections from V. vinifera and V. riparia mature buds. Clones were randomly selected from cDNA libraries prepared following controlled dormancy release and controlled dormancy induction and from the respective controls. The entire collection (7920 vinifera and 1194 riparia clones) was sequenced and subjected to bioinformatics analysis, including clustering, annotation and GO classification. PCR products from the entire collection were used for printing cDNA microarrays. Bud tissue in general, and the dormant bud in particular, are under-represented within the grape EST database. Accordingly, 59% of our vinifera EST collection, composed of 5516 unigenes, is not included within the current Vitis TIGR collection, and about 22% of these transcripts bear no resemblance to any known plant transcript, corroborating the current need for our targeted EST collection and the bud-specific cDNA array. Analysis of the V. riparia sequences yielded 814 unigenes, of which 140 are unique (Keilin et al., manuscript, Appendix B). Results from computational expression profiling of the vinifera collection suggest that oxidative stress, calcium signaling, intracellular vesicle trafficking and an anaerobic mode of carbohydrate metabolism play a role in the regulation and execution of grape-bud dormancy release. A comprehensive analysis confirmed the induction of transcription from several calcium-signaling related genes following HC treatment, and detected an inhibiting effect of a calcium channel blocker and a calcium chelator on HC-induced and chilling-induced bud break. It also detected the existence of HC-induced, calcium-dependent protein phosphorylation activity. These data suggest, for the first time, that calcium signaling is involved in the mechanism of dormancy release (Pang et al., in preparation). We compared the effects of heat shock (HS) to those detected in buds following HC application and found that HS led to earlier and higher bud break. We also demonstrated a similar temporary reduction in catalase expression and temporary induction of ascorbate peroxidase, glutathione reductase, thioredoxin and glutathione S-transferase expression following both treatments. These findings further support the assumption that temporary oxidative stress is part of the mechanism leading to bud break. The temporary induction of sucrose synthase, pyruvate decarboxylase and alcohol dehydrogenase indicates that temporary respiratory stress develops and suggests that mitochondrial function may be of central importance for that mechanism.
These findings, suggesting that identical mechanisms are triggered by HS and HC, justified the comparison of expression profiles of HC- and HS-treated buds as a tool for identifying pathways with a central role in dormancy release (Halaly et al., in preparation). RNA samples from buds treated with HS, HC and water were hybridized with the cDNA arrays in an interconnected loop design. Differentially expressed genes were selected using LIMMA, an R package from the Bioconductor project, and clones showing a significant change following both HS and HC treatments, compared to the control, were selected for further analysis. A total of 1541 clones show significant induction, of which 37% have no hit or unknown function and the rest represent 661 genes with an identified function. Similarly, out of 1452 clones showing significant reduction, only 53% have an identified function, and they represent 573 genes. The 661 induced genes are involved in 445 different molecular functions. About 90% of those functions were classified into 20 categories based on a careful survey of the literature. Among other things, it appears that carbohydrate metabolism and mitochondrial function may be of central importance in the mechanism of dormancy release, and studies in this direction are ongoing. Analysis of the functions of the reduced genes is ongoing (Appendix A). A second set of hybridizations was carried out with RNA samples from buds exposed to a short photoperiod, leading to the induction of bud dormancy, and to a long photoperiod treatment as a control. Analysis indicated that 42 genes differed significantly between LD and SD, and 11 of these were unique.
