Academic literature on the topic 'Natural Language Processing (NLP)'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Natural Language Processing (NLP).'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Natural Language Processing (NLP)"
Németh, Renáta, and Júlia Koltai. "Natural language processing." Intersections 9, no. 1 (April 26, 2023): 5–22. http://dx.doi.org/10.17356/ieejsp.v9i1.871.
Ali, Miss Aliya Anam Shoukat. "AI-Natural Language Processing (NLP)." International Journal for Research in Applied Science and Engineering Technology 9, no. VIII (August 10, 2021): 135–40. http://dx.doi.org/10.22214/ijraset.2021.37293.
Rohit Kumar Yadav, Aanchal Madaan, and Janu. "Comprehensive analysis of natural language processing." Global Journal of Engineering and Technology Advances 19, no. 1 (April 30, 2024): 083–90. http://dx.doi.org/10.30574/gjeta.2024.19.1.0058.
Sadiku, Matthew N. O., Yu Zhou, and Sarhan M. Musa. "Natural Language Processing in Healthcare." International Journal of Advanced Research in Computer Science and Software Engineering 8, no. 5 (June 2, 2018): 39. http://dx.doi.org/10.23956/ijarcsse.v8i5.626.
Alharbi, Mohammad, Matthew Roach, Tom Cheesman, and Robert S. Laramee. "VNLP: Visible natural language processing." Information Visualization 20, no. 4 (August 13, 2021): 245–62. http://dx.doi.org/10.1177/14738716211038898.
Al-Khalifa, Hend S., Taif AlOmar, and Ghala AlOlyyan. "Natural Language Processing Patents Landscape Analysis." Data 9, no. 4 (March 31, 2024): 52. http://dx.doi.org/10.3390/data9040052.
Sulistyo, Danang, Fadhli Ahda, and Vivi Aida Fitria. "Epistomologi dalam Natural Language Processing." Jurnal Inovasi Teknologi dan Edukasi Teknik 1, no. 9 (September 26, 2021): 652–64. http://dx.doi.org/10.17977/um068v1i92021p652-664.
Putri, Nastiti Susetyo Fanany, Prasetya Widiharso, Agung Bella Putra Utama, Maharsa Caraka Shakti, and Urvi Ghosh. "Natural Language Processing in Higher Education." Bulletin of Social Informatics Theory and Application 6, no. 1 (July 3, 2023): 90–101. http://dx.doi.org/10.31763/businta.v6i1.593.
Geetha, Dr V., Dr C. K. Gomathy, Mr P. V. Sri Ram, and Surya Prakash L N. "Novel Study on Natural Language Processing." International Journal of Scientific Research in Engineering and Management 07, no. 11 (November 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem27091.
Zhao, Liping, Waad Alhoshan, Alessio Ferrari, Keletso J. Letsholo, Muideen A. Ajagbe, Erol-Valeriu Chioasca, and Riza T. Batista-Navarro. "Natural Language Processing for Requirements Engineering." ACM Computing Surveys 54, no. 3 (June 2021): 1–41. http://dx.doi.org/10.1145/3444689.
Dissertations / Theses on the topic "Natural Language Processing (NLP)"
Hellmann, Sebastian. "Integrating Natural Language Processing (NLP) and Language Resources Using Linked Data." Doctoral thesis, Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-157932.
Nozza, Debora. "Deep Learning for Feature Representation in Natural Language Processing." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2018. http://hdl.handle.net/10281/241185.
The huge amount of textual user-generated content on the Web has grown enormously in the last decade, creating relevant new opportunities for different real-world applications and domains. To overcome the difficulties of dealing with this large volume of unstructured data, the research field of Natural Language Processing has provided efficient solutions, developing computational models able to understand and interpret human natural language without any (or almost any) human intervention. The field has gained further computational efficiency and performance from the advent of recent machine learning research on Deep Learning. In particular, this thesis focuses on a class of Deep Learning models devoted to learning high-level, meaningful representations of input data in unsupervised settings, by computing multiple non-linear transformations of increasing complexity and abstraction. Learning expressive representations from data is a crucial step in Natural Language Processing, because it involves the transformation from discrete symbols (e.g. characters) to a machine-readable representation of real-valued vectors, which should encode the semantic and syntactic meanings of language units. The first research direction of this thesis aims to give evidence that enhancing Natural Language Processing models with representations obtained by unsupervised Deep Learning models can significantly improve the ability to make sense of large volumes of user-generated text. In particular, this thesis addresses tasks considered crucial for understanding what a text is talking about, by extracting and disambiguating named entities (Named Entity Recognition and Linking), and which opinion the user is expressing, dealing also with irony (Sentiment Analysis and Irony Detection).
For each task, this thesis proposes a novel Natural Language Processing model enhanced by the data representation obtained by Deep Learning. As a second research direction, this thesis investigates the development of a novel Deep Learning model for learning a meaningful textual representation that takes into account the relational structure underlying user-generated content. The inferred representation comprises both textual and relational information. Once the data representation is obtained, it can be exploited by off-the-shelf machine learning algorithms to perform different Natural Language Processing tasks. In conclusion, the experimental investigations reveal that models able to incorporate high-level features obtained by Deep Learning show significant performance and improved generalization abilities. Further improvements can also be achieved by models that take into account relational information in addition to textual content.
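The core step this abstract describes — turning discrete symbols into real-valued vectors that encode distributional meaning — can be illustrated with a minimal, self-contained sketch. This is a toy count-based representation, not the thesis's actual Deep Learning models, and the corpus is invented for the example:

```python
from collections import Counter, defaultdict
import math

# Toy corpus standing in for large-scale user-generated text.
corpus = [
    "deep learning improves natural language processing",
    "representation learning helps natural language processing",
    "deep learning learns representation from text",
]

# Count word/context co-occurrences in a +/-2-token window:
# each word's vector is its distribution over neighbouring words.
window = 2
cooc = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                cooc[word][tokens[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse {context: count} vectors."""
    dot = sum(u[c] * v.get(c, 0) for c in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Words that appear in similar contexts receive similar vectors.
print(cosine(cooc["deep"], cooc["representation"]) > 0)   # -> True
```

Neural models replace these raw counts with dense vectors learned through stacked non-linear transformations, but the goal is the same: a representation downstream classifiers can consume.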
Panesar, Kulvinder. "Natural language processing (NLP) in Artificial Intelligence (AI): a functional linguistic perspective." Vernon Press, 2020. http://hdl.handle.net/10454/18140.
This chapter encapsulates the multi-disciplinary nature that facilitates NLP in AI, and reports on a linguistically oriented conversational software agent (CSA) framework (Panesar 2017) sensitive to natural language processing (NLP) and language in the agent environment. We present a novel computational approach that uses the functional linguistic theory of Role and Reference Grammar (RRG) as the linguistic engine. Viewing language as action, utterances change the state of the world, and hence speakers' and hearers' mental states change as a result of these utterances. The plan-based method of discourse management (DM) using the BDI model architecture is deployed to support a greater complexity of conversation. This CSA investigates the integration, intersection, and interface of language, knowledge, speech act constructions (SAC) as a grammatical object, and the sub-model of BDI and DM for NLP. We present an investigation into the intersection and interface between our linguistic and knowledge (belief base) models for both dialogue management and planning. The architecture has three phase models: (1) a linguistic model based on RRG; (2) an Agent Cognitive Model (ACM) with (a) a knowledge representation model employing conceptual graphs (CGs) serialised to the Resource Description Framework (RDF), and (b) a planning model underpinned by BDI concepts, intentionality, and rational interaction; and (3) a dialogue model employing common ground. Use of RRG as a linguistic engine for the CSA was successful. We identify the complexity of the semantic gap of internal representations, with details of a conceptual bridging solution.
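The BDI idea this abstract builds on — beliefs updated by inform acts, desires adopted from requests, intentions committed to once beliefs satisfy a goal's preconditions — can be sketched in a drastically simplified dialogue loop. The speech-act labels and the example goal here are invented; the real system parses utterances with RRG and stores knowledge as conceptual graphs:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A minimal BDI-flavoured dialogue agent: utterances change state."""
    beliefs: dict = field(default_factory=dict)
    desires: list = field(default_factory=list)
    intentions: list = field(default_factory=list)

    def hear(self, speech_act, content):
        if speech_act == "inform":      # an inform act updates beliefs
            self.beliefs.update(content)
        elif speech_act == "request":   # a request makes the agent adopt a desire
            self.desires.append(content)
        self.deliberate()

    def deliberate(self):
        # Commit to any desire whose preconditions the current beliefs satisfy.
        for goal in list(self.desires):
            preconds = goal.get("requires", {})
            if all(self.beliefs.get(k) == v for k, v in preconds.items()):
                self.intentions.append(goal["action"])
                self.desires.remove(goal)

agent = Agent()
agent.hear("inform", {"door_open": True})
agent.hear("request", {"action": "enter_room", "requires": {"door_open": True}})
print(agent.intentions)   # -> ['enter_room']
```

The point of the sketch is the ordering: a request heard before its precondition is believed stays a desire, and only becomes an intention after a later inform act changes the belief base.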
Välme, Emma, and Lea Renmarker. "Accelerating Sustainability Report Assessment with Natural Language Processing." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-445912.
Djoweini, Camran, and Henrietta Hellberg. "Approaches to natural language processing in app development." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230167.
Computational linguistics ('natural language processing' in English) is a field within computer science that is not yet fully established. High demand for natural-language support in applications creates a need for approaches and tools suited to engineers. This project approaches the field from an engineer's point of view to investigate the approaches, tools, and techniques currently available for developing natural-language support in applications. The sub-field of information retrieval was examined through a case study, in which prototypes were developed to build a deeper understanding of the tools and techniques used in the field. We found that tools and techniques can be categorized into two groups, depending on how distanced the developer is from the underlying processing of the language. A categorization of tools and techniques, together with source code, documentation, and an evaluation of the prototypes, is presented as the result. The choice of approach, techniques, and tools should be based on the requirements and specifications of the finished product. The results of the study are largely generalizable, since solutions to many problems in the field are similar even when the final goals differ.
Sætre, Rune. "GeneTUC: Natural Language Understanding in Medical Text." Doctoral thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-545.
Natural Language Understanding (NLU) is a 50-year-old research field, but its application to molecular biology literature (BioNLU) is less than 10 years old. After the complete human genome sequence was published by the Human Genome Project and Celera in 2001, there has been an explosion of research, shifting the NLU focus from domains like news articles to molecular biology and medical literature. BioNLU is needed because almost 2000 new articles are published and indexed every day, and biologists need to know about existing knowledge regarding their own research. So far, BioNLU results are not as good as in other NLU domains, so more research is needed to solve the challenges of creating useful NLU applications for biologists.
The work in this PhD thesis is a “proof of concept”. It is the first to show that an existing Question Answering (QA) system can be successfully applied in the hard BioNLU domain, after the essential challenge of unknown entities is solved. The core contribution is a system that discovers and classifies unknown entities and relations between them automatically. The World Wide Web (through Google) is used as the main resource, and the performance is almost as good as other named entity extraction systems, but the advantage of this approach is that it is much simpler and requires less manual labor than any of the other comparable systems.
The first paper in this collection gives an overview of the field of NLU and shows how the Information Extraction (IE) problem can be formulated with Local Grammars. The second paper uses Machine Learning to automatically recognize protein names based on features from the GSearch Engine. In the third paper, GSearch is substituted with Google, and the task is to extract all unknown names belonging to one of 273 biomedical entity classes, such as genes, proteins, and processes. After getting promising results with Google, the fourth paper shows that this approach can also be used to retrieve interactions or relationships between the named entities. The fifth paper describes an online implementation of the system, and shows that the method scales well to a larger set of entities.
The final paper concludes the "proof of concept" research, and shows that the performance of the original GeneTUC NLU system has increased from handling 10% of the sentences in a large collection of abstracts in 2001 to 50% in 2006. This is still not good enough to create a commercial system, but it is believed that another 40% performance gain can be achieved by importing more verb templates into GeneTUC, just as nouns were imported during this work. Work on this has already begun, in the form of a local Master's thesis.
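The entity-classification idea the abstract describes — scoring an unknown name against entity classes by how often class-indicating patterns match around it in a large text source — can be sketched offline. The patterns, classes, and documents below are invented stand-ins; the thesis used Google hit counts over the Web rather than a local document list:

```python
import re

# Surface patterns that suggest an entity class when matched with a candidate name.
CLASS_PATTERNS = {
    "protein": ["proteins such as {}", "{} protein"],
    "gene": ["genes such as {}", "the {} gene"],
}

# Tiny offline stand-in for a Web-scale text source.
documents = [
    "proteins such as p53 regulate the cell cycle",
    "mutations in the BRCA1 gene raise cancer risk",
]

def classify(name):
    """Assign the class whose patterns match the candidate name most often."""
    scores = {}
    for cls, templates in CLASS_PATTERNS.items():
        scores[cls] = sum(
            len(re.findall(re.escape(t.format(name)), doc, re.I))
            for t in templates
            for doc in documents
        )
    return max(scores, key=scores.get) if any(scores.values()) else None

print(classify("p53"))    # -> 'protein'
print(classify("BRCA1"))  # -> 'gene'
```

Replacing `documents` with hit counts from a search engine gives the low-manual-labor behaviour the abstract claims: no gazetteer is needed, only the pattern templates.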
Andrén, Samuel, and William Bolin. "NLIs over APIs : Evaluating Pattern Matching as a way of processing natural language for a simple API." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186429.
This report explores the feasibility of using pattern matching to implement a robust natural language interface (NLI) over a limited application programming interface (API). Since APIs are widely used today, often in mobile applications, it has become increasingly important to find ways to make them even more accessible to end users. A very intuitive way to access information is through natural language via an API. This report first describes the possibility of building a corpus for a particular API and creating patterns for pattern matching on that corpus. It then evaluates an implementation of an NLI based on pattern matching using the corpus. The results of the corpus construction show that although the number of unique phrases used for our API grows fairly steadily, the number of patterns over those phrases converges relatively quickly toward a constant. This suggests that it is quite possible to use these patterns to create an NLI robust enough for an API. The evaluation of the pattern-matching implementation suggests that the technique can successfully extract information from phrases, provided the pattern the phrase follows exists in the system.
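The pattern-matching approach the thesis evaluates can be illustrated with a toy sketch: each pattern maps a family of natural-language phrasings onto one API call and captures its parameters. The patterns and the weather-style endpoints (`get_weather`, `get_forecast`) are invented for the example, not the thesis's actual API:

```python
import re

# Each (pattern, endpoint) pair covers one family of phrasings.
PATTERNS = [
    (re.compile(r"(?:what(?:'s| is) the )?weather (?:like )?in (?P<city>\w+)", re.I),
     "get_weather"),
    (re.compile(r"will it rain (?:tomorrow )?in (?P<city>\w+)", re.I),
     "get_forecast"),
]

def interpret(utterance):
    """Return (endpoint, params) for the first matching pattern, else None."""
    for pattern, endpoint in PATTERNS:
        m = pattern.search(utterance)
        if m:
            return endpoint, m.groupdict()
    return None

print(interpret("What is the weather like in Stockholm?"))
# -> ('get_weather', {'city': 'Stockholm'})
```

The thesis's convergence result makes this viable: because the number of distinct patterns plateaus even as unique phrases keep growing, a fixed pattern list like this can stay robust for a bounded API.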
Wallner, Vanja. "Mapping medical expressions to MedDRA using Natural Language Processing." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-426916.
Woldemariam, Yonas Demeke. "Natural language processing in cross-media analysis." Licentiate thesis, Umeå universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-147640.
Huang, Fei. "Improving NLP Systems Using Unconventional, Freely-Available Data." Diss., Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/221031.
Ph.D.
Sentence labeling is a type of pattern recognition task that involves the assignment of a categorical label to each member of a sentence of observed words. Standard supervised sentence-labeling systems often have poor generalization: it is difficult to estimate parameters for words which appear in the test set, but seldom (or never) appear in the training set, because they only use words as features in their prediction tasks. Representation learning is a promising technique for discovering features that allow a supervised classifier to generalize from a source domain dataset to arbitrary new domains. We demonstrate that features which are learned from distributional representations of unlabeled data can be used to improve performance on out-of-vocabulary words and help the model to generalize. We also argue that it is important for a representation learner to be able to incorporate expert knowledge during its search for helpful features. We investigate techniques for building open-domain sentence labeling systems that approach the ideal of a system whose accuracy is high and consistent across domains. In particular, we investigate unsupervised techniques for language model representation learning that provide new features which are stable across domains, in that they are predictive in both the training and out-of-domain test data. In experiments, our best system with the proposed techniques reduce error by as much as 11.4% relative to the previous system using traditional representations on the Part-of-Speech tagging task. Moreover, we leverage the Posterior Regularization framework, and develop an architecture for incorporating biases from prior knowledge into representation learning. We investigate three types of biases: entropy bias, distance bias and predictive bias. Experiments on two domain adaptation tasks show that our biased learners identify significantly better sets of features than unbiased learners. 
This results in a relative reduction in error of more than 16% for both tasks with respect to existing state-of-the-art representation learning techniques. We also extend the idea of using additional unlabeled data to improve the system's performance on a different NLP task, word alignment. Traditional word alignment takes only a sentence-level aligned parallel corpus as input and generates word-level alignments. However, with the integration of different cultures, more and more people are competent in multiple languages, and they often use elements of multiple languages in conversation. Linguistic Code Switching (LCS) is such a situation, where two or more languages show up in the context of a single conversation. Traditional machine translation (MT) systems treat LCS data as noise, or just as regular sentences. However, if LCS data is processed intelligently, it can provide a useful signal for training word alignment and MT models. In this work, we first extract constraints from this code-switching data and then incorporate them into a word alignment model training procedure. We also show that by using the code-switching data, we can jointly train a word alignment model and a language model using co-training. Our techniques for incorporating LCS data improve the BLEU score by 2.64 over a baseline MT system trained using only standard sentence-aligned corpora.
Temple University--Theses
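The out-of-vocabulary mechanism this abstract relies on — backing off from pure word identity to features derived from unlabeled data, so unseen words still receive informative features — can be sketched as a feature extractor. The cluster map stands in for a learned representation (e.g. Brown clusters or embedding-derived cluster ids); its contents here are invented:

```python
def word_features(word, cluster_of):
    """Feature dict mixing lexical features with representation-derived ones.

    A word never seen in training still gets shape and (when available)
    cluster features, which is what lets a tagger generalize OOV.
    """
    feats = {
        "lower=" + word.lower(): 1.0,
        "suffix3=" + word[-3:]: 1.0,
        "is_capitalized": float(word[:1].isupper()),
        "is_digit": float(word.isdigit()),
    }
    cluster = cluster_of.get(word.lower())
    if cluster is not None:
        # Shared cluster ids tie rare words to frequent ones seen in training.
        feats["cluster=" + cluster] = 1.0
    return feats

# Tiny stand-in cluster map; in practice it is induced from unlabeled text.
clusters = {"cat": "0110", "dog": "0110", "paris": "1001"}
print(word_features("Dog", clusters))
```

A supervised tagger trained on such features can score "Dog" sensibly even if only "cat" appeared in its training data, because both fall in cluster `0110`.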
Books on the topic "Natural Language Processing (NLP)"
Christodoulakis, Dimitris N., ed. Natural Language Processing — NLP 2000. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-45154-4.
International Conference on Natural Language Processing (2nd: 2000: Patras, Greece). Natural language processing - NLP 2000: Second International Conference, Patras, Greece, June 2-4, 2000: proceedings. Berlin: Springer, 2000.
Gurevych, Iryna. The People’s Web Meets NLP: Collaboratively Constructed Language Resources. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.
Zong, Chengqing, Chinese Association for Artificial Intelligence, IEEE Signal Processing Society, IEEE Systems, Man, and Cybernetics Society, and Institute of Electrical and Electronics Engineers, Beijing Section, eds. 2003 International Conference on Natural Language Processing and Knowledge Engineering: Proceedings: NLP-KE 2003: Beijing, China. Piscataway, New Jersey: IEEE, 2003.
Oppentocht, Anna Linnea. Lexical semantic classification of Dutch verbs: Towards constructing NLP and human-friendly definitions. Utrecht: LEd, 1999.
Loftsson, Hrafn, Eiríkur Rögnvaldsson, and Sigrún Helgadóttir, eds. Advances in natural language processing: 7th International Conference on NLP, IceTAL 2010, Reykjavik, Iceland, August 16-18, 2010: proceedings. Berlin: Springer, 2010.
Kanzaki, Kyoko, and SpringerLink (Online service), eds. Advances in Natural Language Processing: 8th International Conference on NLP, JapTAL 2012, Kanazawa, Japan, October 22-24, 2012. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.
International Conference on Natural Language Processing and Knowledge Engineering (2007, Beijing, China). Proceedings of the International Conference on Natural Language Processing and Knowledge Engineering (NLP-KE'07): Aug. 30-Sep. 1, Beijing, China. Piscataway, NJ: IEEE, 2007.
Solution states: A course in solving problems in business with the power of NLP. Bancyfelin, Carmarthen, Wales: Anglo American Book Co., 1996.
Filgueiras, M., L. Damas, N. Moreira, and A. P. Tomás, eds. Natural Language Processing. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/3-540-53678-7.
Full textBook chapters on the topic "Natural Language Processing (NLP)"
Lee, Raymond S. T. "Major NLP Applications." In Natural Language Processing, 199–239. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1999-4_9.
Steedman, Mark. "Connectionist and symbolist sentence processing." In Natural Language Processing, 95–108. Amsterdam: John Benjamins Publishing Company, 2002. http://dx.doi.org/10.1075/nlp.4.07ste.
Bunt, Harry, and William Black. "The ABC of Computational Pragmatics." In Natural Language Processing, 1–46. Amsterdam: John Benjamins Publishing Company, 2000. http://dx.doi.org/10.1075/nlp.1.01bun.
Allwood, Jens. "An activity-based approach to pragmatics." In Natural Language Processing, 47–80. Amsterdam: John Benjamins Publishing Company, 2000. http://dx.doi.org/10.1075/nlp.1.02all.
Bunt, Harry. "Dialogue pragmatics and context specification." In Natural Language Processing, 81–149. Amsterdam: John Benjamins Publishing Company, 2000. http://dx.doi.org/10.1075/nlp.1.03bun.
Sabah, Gérard. "Pragmatics in language understanding and cognitively motivated architectures." In Natural Language Processing, 151–88. Amsterdam: John Benjamins Publishing Company, 2000. http://dx.doi.org/10.1075/nlp.1.04sab.
Taylor, Martin M., and David A. Waugh. "Dialogue analysis using layered protocols." In Natural Language Processing, 189–232. Amsterdam: John Benjamins Publishing Company, 2000. http://dx.doi.org/10.1075/nlp.1.05tay.
Redeker, Gisela. "Coherence and structure in text and discourse." In Natural Language Processing, 233–64. Amsterdam: John Benjamins Publishing Company, 2000. http://dx.doi.org/10.1075/nlp.1.06red.
Carter, David. "Discourse focus tracking." In Natural Language Processing, 265–92. Amsterdam: John Benjamins Publishing Company, 2000. http://dx.doi.org/10.1075/nlp.1.07car.
Ramsay, Allan. "Speech act theory and epistemic planning." In Natural Language Processing, 293–310. Amsterdam: John Benjamins Publishing Company, 2000. http://dx.doi.org/10.1075/nlp.1.08ram.
Conference papers on the topic "Natural Language Processing (NLP)"
Bianchi, Federico, Debora Nozza, and Dirk Hovy. "Language Invariant Properties in Natural Language Processing." In Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.nlppower-1.9.
Boucheham, Anouar. "Natural Language Processing for Social Media Data Mining." In II. Alanya International Congress of Social Sciences. Rimar Academy, 2023. http://dx.doi.org/10.47832/alanyacongress2-8.
Yin, Kayo, and Malihe Alikhani. "Including Signed Languages in Natural Language Processing (Extended Abstract)." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/753.
Alyafeai, Zaid, and Maged Al-Shaibani. "ARBML: Democritizing Arabic Natural Language Processing Tools." In Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS). Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.nlposs-1.2.
Gardner, Matt, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. "AllenNLP: A Deep Semantic Natural Language Processing Platform." In Proceedings of Workshop for NLP Open Source Software (NLP-OSS). Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/w18-2501.
Samaridi, Nikoletta E., Nikitas N. Karanikolas, and Evangelos C. Papakitsos. "Lexicographic Environments in Natural Language Processing (NLP)." In PCI 2020: 24th Pan-Hellenic Conference on Informatics. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3437120.3437310.
Mieskes, Margot, Karën Fort, Aurélie Névéol, Cyril Grouin, and Kevin Cohen. "NLP Community Perspectives on Replicability." In Recent Advances in Natural Language Processing. Incoma Ltd., Shoumen, Bulgaria, 2019. http://dx.doi.org/10.26615/978-954-452-056-4_089.
Dixon, Anthony, and Daniel Birks. "Improving Policing with Natural Language Processing." In Proceedings of the 1st Workshop on NLP for Positive Impact. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.nlp4posimpact-1.13.
Ellmann, Mathias. "Natural language processing (NLP) applied on issue trackers." In ESEC/FSE '18: 26th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3283812.3283825.
Flayeh, Azhar Kassem, Yaser Issam Hamodi, and Nashwan Dheyaa Zaki. "Text Analysis Based on Natural Language Processing (NLP)." In 2022 2nd International Conference on Advances in Engineering Science and Technology (AEST). IEEE, 2022. http://dx.doi.org/10.1109/aest55805.2022.10413039.
Reports on the topic "Natural Language Processing (NLP)"
Alonso-Robisco, Andres, and Jose Manuel Carbo. Analysis of CBDC Narrative of Central Banks using Large Language Models. Madrid: Banco de España, August 2023. http://dx.doi.org/10.53479/33412.
Avellán, Leopoldo, and Steve Brito. Crossroads in a Fog: Navigating Latin America's Development Challenges with Text Analytics. Inter-American Development Bank, December 2023. http://dx.doi.org/10.18235/0005489.
Steedman, Mark. Natural Language Processing. Fort Belvoir, VA: Defense Technical Information Center, June 1994. http://dx.doi.org/10.21236/ada290396.
Tratz, Stephen C. Arabic Natural Language Processing System Code Library. Fort Belvoir, VA: Defense Technical Information Center, June 2014. http://dx.doi.org/10.21236/ada603814.
Wilks, Yorick, Michael Coombs, Roger T. Hartley, and Dihong Qiu. Active Knowledge Structures for Natural Language Processing. Fort Belvoir, VA: Defense Technical Information Center, January 1991. http://dx.doi.org/10.21236/ada245893.
Firpo, M. Natural Language Processing as a Discipline at LLNL. Office of Scientific and Technical Information (OSTI), February 2005. http://dx.doi.org/10.2172/15015192.
Anderson, Thomas. State of the Art of Natural Language Processing. Fort Belvoir, VA: Defense Technical Information Center, November 1987. http://dx.doi.org/10.21236/ada188112.
Hobbs, Jerry R., Douglas E. Appelt, John Bear, Mabry Tyson, and David Magerman. Robust Processing of Real-World Natural-Language Texts. Fort Belvoir, VA: Defense Technical Information Center, January 1991. http://dx.doi.org/10.21236/ada258837.
Neal, Jeannette G., Elissa L. Feit, Douglas J. Funke, and Christine A. Montgomery. An Evaluation Methodology for Natural Language Processing Systems. Fort Belvoir, VA: Defense Technical Information Center, December 1992. http://dx.doi.org/10.21236/ada263301.
Lehnert, Wendy G. Using Case-Based Reasoning in Natural Language Processing. Fort Belvoir, VA: Defense Technical Information Center, June 1993. http://dx.doi.org/10.21236/ada273538.