Scientific literature on the topic "Multivariate analysis. Natural language processing (Computer science)"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles.

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference proceedings, and other scholarly sources on the topic "Multivariate analysis. Natural language processing (Computer science)".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Multivariate analysis. Natural language processing (Computer science)"

1. Duh, Kevin. "Bayesian Analysis in Natural Language Processing." Computational Linguistics 44, no. 1 (March 2018): 187–89. http://dx.doi.org/10.1162/coli_r_00310.
2. Zhao, Liping, Waad Alhoshan, Alessio Ferrari, Keletso J. Letsholo, Muideen A. Ajagbe, Erol-Valeriu Chioasca, and Riza T. Batista-Navarro. "Natural Language Processing for Requirements Engineering." ACM Computing Surveys 54, no. 3 (June 2021): 1–41. http://dx.doi.org/10.1145/3444689.
Abstract:
Natural Language Processing for Requirements Engineering (NLP4RE) is an area of research and development that seeks to apply natural language processing (NLP) techniques, tools, and resources to the requirements engineering (RE) process, to support human analysts to carry out various linguistic analysis tasks on textual requirements documents, such as detecting language issues, identifying key domain concepts, and establishing requirements traceability links. This article reports on a mapping study that surveys the landscape of NLP4RE research to provide a holistic understanding of the field. Following the guidance of systematic review, the mapping study is directed by five research questions, cutting across five aspects of NLP4RE research, concerning the state of the literature, the state of empirical research, the research focus, the state of tool development, and the usage of NLP technologies. Our main results are as follows: (i) we identify a total of 404 primary studies relevant to NLP4RE, which were published over the past 36 years and from 170 different venues; (ii) most of these studies (67.08%) are solution proposals, assessed by a laboratory experiment or an example application, while only a small percentage (7%) are assessed in industrial settings; (iii) a large proportion of the studies (42.70%) focus on the requirements analysis phase, with quality defect detection as their central task and requirements specification as their commonly processed document type; (iv) 130 NLP4RE tools (i.e., RE specific NLP tools) are extracted from these studies, but only 17 of them (13.08%) are available for download; (v) 231 different NLP technologies are also identified, comprising 140 NLP techniques, 66 NLP tools, and 25 NLP resources, but most of them—particularly those novel NLP techniques and specialized tools—are used infrequently; by contrast, commonly used NLP technologies are traditional analysis techniques (e.g., POS tagging and tokenization), general-purpose tools (e.g., Stanford CoreNLP and GATE) and generic language lexicons (WordNet and British National Corpus). The mapping study not only provides a collection of the literature in NLP4RE but also, more importantly, establishes a structure to frame the existing literature through categorization, synthesis and conceptualization of the main theoretical concepts and relationships that encompass both RE and NLP aspects. Our work thus produces a conceptual framework of NLP4RE. The framework is used to identify research gaps and directions, highlight technology transfer needs, and encourage more synergies between the RE community, the NLP one, and the software and systems practitioners. Our results can be used as a starting point to frame future studies according to a well-defined terminology and can be expanded as new technologies and novel solutions emerge.
3. Li, Yong, Xiaojun Yang, Min Zuo, Qingyu Jin, Haisheng Li, and Qian Cao. "Deep Structured Learning for Natural Language Processing." ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 3 (July 9, 2021): 1–14. http://dx.doi.org/10.1145/3433538.
Abstract:
The real-time and dissemination characteristics of network information make net-mediated public opinion become more and more important food safety early warning resources, but the data of petabyte (PB) scale growth also bring great difficulties to the research and judgment of network public opinion, especially how to extract the event role of network public opinion from these data and analyze the sentiment tendency of public opinion comment. First, this article takes the public opinion of food safety network as the research point, and a BLSTM-CRF model for automatically marking the role of event is proposed by combining BLSTM and conditional random field organically. Second, the Attention mechanism based on vocabulary in the field of food safety is introduced, the distance-related sequence semantic features are extracted by BLSTM, and the emotional classification of sequence semantic features is realized by using CNN. A kind of Att-BLSTM-CNN model for the analysis of public opinion and emotional tendency in the field of food safety is proposed. Finally, based on the time series, this article combines the role extraction of food safety events and the analysis of emotional tendency and constructs a net-mediated public opinion early warning model in the field of food safety according to the heat of the event and the emotional intensity of the public to food safety public opinion events.
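The description above centres on a BLSTM-based sequence labeller for event roles. As a rough, hedged sketch only (not the authors' BLSTM-CRF or Att-BLSTM-CNN models), the following Python/Keras fragment shows the general shape of such a tagger; the CRF and attention layers are omitted, and the vocabulary size, sequence length, tag set, and training data are all invented placeholders:

```python
# Simplified bidirectional-LSTM sequence tagger (illustrative only; the paper
# adds a CRF output layer and a lexicon-based attention mechanism on top).
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN, N_TAGS = 5000, 40, 5   # invented toy dimensions

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,), dtype="int32"),
    layers.Embedding(VOCAB_SIZE, 64),                                    # token embeddings
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),        # BLSTM encoder
    layers.TimeDistributed(layers.Dense(N_TAGS, activation="softmax")),  # per-token role tag
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Random integers stand in for tokenised public-opinion texts and event-role labels.
X = np.random.randint(0, VOCAB_SIZE, size=(8, MAX_LEN))
y = np.random.randint(0, N_TAGS, size=(8, MAX_LEN))
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X[:1]).shape)   # (1, 40, 5): one tag distribution per token
```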
4. Wang, Dongyang, Junli Su, and Hongbin Yu. "Feature Extraction and Analysis of Natural Language Processing for Deep Learning English Language." IEEE Access 8 (2020): 46335–45. http://dx.doi.org/10.1109/access.2020.2974101.
5. Taskin, Zehra, and Umut Al. "Natural language processing applications in library and information science." Online Information Review 43, no. 4 (August 12, 2019): 676–90. http://dx.doi.org/10.1108/oir-07-2018-0217.
Abstract:
Purpose: With the recent developments in information technologies, natural language processing (NLP) practices have made tasks in many areas easier and more practical. Nowadays, especially when big data are used in most research, NLP provides fast and easy methods for processing these data. The purpose of this paper is to identify subfields of library and information science (LIS) where NLP can be used and to provide a guide based on bibliometrics and social network analyses for researchers who intend to study this subject. Design/methodology/approach: Within the scope of this study, 6,607 publications, including NLP methods published in the field of LIS, are examined and visualized by social network analysis methods. Findings: After evaluating the obtained results, the subject categories of publications, frequently used keywords in these publications and the relationships between these words are revealed. Finally, the core journals and articles are classified thematically for researchers working in the field of LIS and planning to apply NLP in their research. Originality/value: The results of this paper draw a general framework for LIS field and guides researchers on new techniques that may be useful in the field.
6. Fairie, Paul, Zilong Zhang, Adam G. D'Souza, Tara Walsh, Hude Quan, and Maria J. Santana. "Categorising patient concerns using natural language processing techniques." BMJ Health & Care Informatics 28, no. 1 (June 2021): e100274. http://dx.doi.org/10.1136/bmjhci-2020-100274.
Abstract:
Objectives: Patient feedback is critical to identify and resolve patient safety and experience issues in healthcare systems. However, large volumes of unstructured text data can pose problems for manual (human) analysis. This study reports the results of using a semiautomated, computational topic-modelling approach to analyse a corpus of patient feedback. Methods: Patient concerns were received by Alberta Health Services between 2011 and 2018 (n=76 163), regarding 806 care facilities in 163 municipalities, including hospitals, clinics, community care centres and retirement homes, in a province of 4.4 million. Their existing framework requires manual labelling of pre-defined categories. We applied an automated latent Dirichlet allocation (LDA)-based topic modelling algorithm to identify the topics present in these concerns, and thereby produce a framework-free categorisation. Results: The LDA model produced 40 topics which, following manual interpretation by researchers, were reduced to 28 coherent topics. The most frequent topics identified were communication issues causing delays (frequency: 10.58%), community care for elderly patients (8.82%), interactions with nurses (8.80%) and emergency department care (7.52%). Many patient concerns were categorised into multiple topics. Some were more specific versions of categories from the existing framework (eg, communication issues causing delays), while others were novel (eg, smoking in inappropriate settings). Discussion: LDA-generated topics were more nuanced than the manually labelled categories. For example, LDA found that concerns with community care were related to concerns about nursing for seniors, providing opportunities for insight and action. Conclusion: Our findings outline the range of concerns patients share in a large health system and demonstrate the usefulness of using LDA to identify categories of patient concerns.
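Purely as an illustration of the LDA step this abstract describes (not the study's actual pipeline or data), a minimal topic-modelling sketch with scikit-learn might look like the following; the feedback snippets and the two-topic setting are invented:

```python
# Minimal LDA topic-modelling sketch with scikit-learn (illustration only;
# the study's corpus, preprocessing and 40-topic model are not reproduced).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

feedback = [
    "Waited hours in the emergency department before anyone spoke to us",
    "The nurse was dismissive and never explained the medication",
    "No one returned our calls about my mother's community care placement",
    "Smoking right outside the clinic entrance made the visit unpleasant",
]

# Bag-of-words counts; English stop words removed to sharpen the topics.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(feedback)

# Two topics are enough for this toy corpus; the study began with 40.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Top words per topic, ready for the manual interpretation step the authors describe.
terms = vectorizer.get_feature_names_out()
for idx, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {idx}: {', '.join(top)}")
```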
7. Wei, Wei, Jinsong Wu, and Chunsheng Zhu. "Special issue on deep learning for natural language processing." Computing 102, no. 3 (January 9, 2020): 601–3. http://dx.doi.org/10.1007/s00607-019-00788-3.
8. Georgescu, Tiberiu-Marian. "Natural Language Processing Model for Automatic Analysis of Cybersecurity-Related Documents." Symmetry 12, no. 3 (March 2, 2020): 354. http://dx.doi.org/10.3390/sym12030354.
Abstract:
This paper describes the development and implementation of a natural language processing model based on machine learning which performs cognitive analysis for cybersecurity-related documents. A domain ontology was developed using a two-step approach: (1) the symmetry stage and (2) the machine adjustment. The first stage is based on the symmetry between the way humans represent a domain and the way machine learning solutions do. Therefore, the cybersecurity field was initially modeled based on the expertise of cybersecurity professionals. A dictionary of relevant entities was created; the entities were classified into 29 categories and later implemented as classes in a natural language processing model based on machine learning. After running successive performance tests, the ontology was remodeled from 29 to 18 classes. Using the ontology, a natural language processing model based on a supervised learning model was defined. We trained the model using sets of approximately 300,000 words. Remarkably, our model obtained an F1 score of 0.81 for named entity recognition and 0.58 for relation extraction, showing superior results compared to other similar models identified in the literature. Furthermore, in order to be easily used and tested, a web application that integrates our model as the core component was developed.
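For readers unfamiliar with the named-entity-recognition step this abstract refers to, here is a minimal, generic illustration using spaCy's small English model; it is not the paper's custom ontology-driven model, and the example sentence is invented:

```python
# Generic named-entity extraction with spaCy's small English pipeline; the
# paper instead trains a custom model over an 18-class cybersecurity ontology.
# First run: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The attacker exploited CVE-2017-0144 to spread WannaCry ransomware over SMBv1.")

for ent in doc.ents:
    # Generic labels (ORG, PRODUCT, ...) stand where domain-specific classes would go.
    print(ent.text, ent.label_)
```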
9. Gong, Yunlu, Nannan Lu, and Jiajian Zhang. "Application of deep learning fusion algorithm in natural language processing in emotional semantic analysis." Concurrency and Computation: Practice and Experience 31, no. 10 (October 2, 2018): e4779. http://dx.doi.org/10.1002/cpe.4779.
10. Mills, Michael T., and Nikolaos G. Bourbakis. "Graph-Based Methods for Natural Language Processing and Understanding—A Survey and Analysis." IEEE Transactions on Systems, Man, and Cybernetics: Systems 44, no. 1 (January 2014): 59–71. http://dx.doi.org/10.1109/tsmcc.2012.2227472.
More sources

Theses on the topic "Multivariate analysis. Natural language processing (Computer science)"

1. Cannon, Paul C. "Extending the information partition function: modeling interaction effects in highly multivariate, discrete data." Diss., Brigham Young University, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2263.pdf.
2. Shepherd, David. "Natural language program analysis: combining natural language processing with program analysis to improve software maintenance tools." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 176 p., 2007. http://proquest.umi.com/pqdweb?did=1397920371&sid=6&Fmt=2&clientId=8331&RQT=309&VName=PQD.
3. Li, Wenhui. "Sentiment analysis: Quantitative evaluation of subjective opinions using natural language processing." Thesis, University of Ottawa (Canada), 2008. http://hdl.handle.net/10393/28000.
Abstract:
Sentiment Analysis consists of recognizing sentiment orientation towards specific subjects within natural language texts. Most research in this area focuses on classifying documents as positive or negative. The purpose of this thesis is to quantitatively evaluate subjective opinions of customer reviews using a five star rating system, which is widely used on on-line review web sites, and to try to make the predicted score as accurate as possible. Firstly, this thesis presents two methods for rating reviews: classifying reviews by supervised learning methods as multi-class classification does, or rating reviews by using association scores of sentiment terms with a set of seed words extracted from the corpus, i.e. the unsupervised learning method. We extend the feature selection approach used in Turney's PMI-IR estimation by introducing semantic relatedness measures based up on the content of WordNet. This thesis reports on experiments using the two methods mentioned above for rating reviews using the combined feature set enriched with WordNet-selected sentiment terms. The results of these experiments suggest ways in which incorporating WordNet relatedness measures into feature selection may yield improvement over classification and unsupervised learning methods which do not use it. Furthermore, via ordinal meta-classifiers, we utilize the ordering information contained in the scores of bank reviews to improve the performance, we explore the effectiveness of re-sampling for reducing the problem of skewed data, and we check whether discretization benefits the ordinal meta-learning process. Finally, we combine the unsupervised and supervised meta-learning methods to optimize performance on our sentiment prediction task.
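As a hedged illustration of the kind of association scoring the thesis discusses (seed words plus WordNet-based relatedness), the sketch below computes a crude polarity score with NLTK's WordNet interface; the seed lists and test words are invented, and this is not the thesis's actual method:

```python
# Illustrative sketch only: scoring candidate terms against small positive and
# negative seed sets with WordNet path similarity, a rough stand-in for the
# relatedness measures discussed in the thesis (seed and test words invented).
# Note: path similarity lives in the noun/verb hierarchies, so purely
# adjectival senses may contribute nothing.
from nltk.corpus import wordnet as wn   # first run: nltk.download("wordnet")

POS_SEEDS = ["good", "benefit"]
NEG_SEEDS = ["bad", "problem"]

def best_similarity(word: str, seed: str) -> float:
    """Maximum path similarity over all synset pairs; 0.0 if unrelated."""
    scores = [
        s1.path_similarity(s2) or 0.0
        for s1 in wn.synsets(word)
        for s2 in wn.synsets(seed)
    ]
    return max(scores, default=0.0)

def orientation(word: str) -> float:
    """Positive minus negative association; above zero suggests positive polarity."""
    pos = sum(best_similarity(word, s) for s in POS_SEEDS)
    neg = sum(best_similarity(word, s) for s in NEG_SEEDS)
    return pos - neg

for w in ["improvement", "failure", "delay"]:
    print(w, round(orientation(w), 3))
```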
4. Keller, Thomas Anderson. "Comparison and Fine-Grained Analysis of Sequence Encoders for Natural Language Processing." Thesis, University of California, San Diego, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10599339.
Abstract:

Most machine learning algorithms require a fixed length input to be able to perform commonly desired tasks such as classification, clustering, and regression. For natural language processing, the inherently unbounded and recursive nature of the input poses a unique challenge when deriving such fixed length representations. Although today there is a general consensus on how to generate fixed length representations of individual words which preserve their meaning, the same cannot be said for sequences of words in sentences, paragraphs, or documents. In this work, we study the encoders commonly used to generate fixed length representations of natural language sequences, and analyze their effectiveness across a variety of high and low level tasks including sentence classification and question answering. Additionally, we propose novel improvements to the existing Skip-Thought and End-to-End Memory Network architectures and study their performance on both the original and auxiliary tasks. Ultimately, we show that the setting in which the encoders are trained, and the corpus used for training, have a greater influence of the final learned representation than the underlying sequence encoders themselves.

5. Ramachandran, Venkateshwaran. "A temporal analysis of natural language narrative text." Thesis, Virginia Tech, 1990. http://scholar.lib.vt.edu/theses/available/etd-03122009-040648/.
6. Crocker, Matthew Walter. "A principle-based system for natural language analysis and translation." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/27863.
Abstract:
Traditional views of grammatical theory hold that languages are characterised by sets of constructions. This approach entails the enumeration of all possible constructions for each language being described. Current theories of transformational generative grammar have established an alternative position. Specifically, Chomsky's Government-Binding theory proposes a system of principles which are common to human language. Such a theory is referred to as a "Universal Grammar" (UG). Associated with the principles of grammar are parameters of variation which account for the diversity of human languages. The grammar for a particular language is known as a "Core Grammar", and is characterised by an appropriately parametrised instance of UG. Despite these advances in linguistic theory, construction-based approaches have remained the status quo within the field of natural language processing. This thesis investigates the possibility of developing a principle-based system which reflects the modular nature of the linguistic theory. That is, rather than stipulating the possible constructions of a language, a system is developed which uses the principles of grammar and language specific parameters to parse language. Specifically, a system is presented which performs syntactic analysis and translation for a subset of English and German. The cross-linguistic nature of the theory is reflected by the system which can be considered a procedural model of UG.
7. Holmes, Wesley J. "Topological Analysis of Averaged Sentence Embeddings." Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1609351352688467.
8. Lee, Wing Kuen. "Interpreting tables in text using probabilistic two-dimensional context-free grammars." Thesis, Hong Kong University of Science and Technology, 2005. http://library.ust.hk/cgi/db/thesis.pl?COMP%202005%20LEEW.
9. Zhan, Tianjie. "Semantic analysis for extracting fine-grained opinion aspects." HKBU Institutional Repository, 2010. http://repository.hkbu.edu.hk/etd_ra/1213.
10. Currin, Aubrey Jason. "Text data analysis for a smart city project in a developing nation." Thesis, University of Fort Hare, 2015. http://hdl.handle.net/10353/2227.
Abstract:
Increased urbanisation against the backdrop of limited resources is complicating city planning and management of functions including public safety. The smart city concept can help, but most previous smart city systems have focused on utilising automated sensors and analysing quantitative data. In developing nations, using the ubiquitous mobile phone as an enabler for crowdsourcing of qualitative public safety reports, from the public, is a more viable option due to limited resources and infrastructure limitations. However, there is no specific best method for the analysis of qualitative text reports for a smart city in a developing nation. The aim of this study, therefore, is the development of a model for enabling the analysis of unstructured natural language text for use in a public safety smart city project. Following the guidelines of the design science paradigm, the resulting model was developed through the inductive review of related literature, assessed and refined by observations of a crowdsourcing prototype and conversational analysis with industry experts and academics. The content analysis technique was applied to the public safety reports obtained from the prototype via computer assisted qualitative data analysis software. This has resulted in the development of a hierarchical ontology which forms an additional output of this research project. Thus, this study has shown how municipalities or local government can use CAQDAS and content analysis techniques to prepare large quantities of text data for use in a smart city.
More sources

Books on the topic "Multivariate analysis. Natural language processing (Computer science)"

1. Jones, Karen Sparck. Evaluating natural language processing systems: An analysis and review. Berlin: Springer, 1995.
2. Naive semantics for natural language understanding. Boston: Kluwer Academic Publishers, 1988.
3. Applied natural language processing and content analysis: Advances in identification, investigation, and resolution. Hershey, PA: Information Science Reference, 2012.
4. Text generation: Using discourse strategies and focus constraints to generate natural language text. Cambridge [Cambridgeshire]: Cambridge University Press, 1985.
5. Minker, Wolfgang. Stochastically-based semantic analysis. New York: Springer Science+Business Media, 1999.
6. Tache, Nicole, ed. Applied Text Analysis with Python: Enabling Language-Aware Data Products with Machine Learning. Beijing: O’Reilly Media, 2018.
7. Perez-Marin, Diana. Conversational agents and natural language interaction: Techniques and effective practices. Hershey, PA: Information Science Reference, 2011.
8. Minker, Wolfgang. Stochastically-based semantic analysis. Boston: Kluwer Academic, 1999.
9. Sabourin, Conrad. Computational speech processing: Speech analysis, recognition, understanding, compression, transmission, coding, synthesis, text to speech systems, speech to tactile displays, speaker identification, prosody processing: bibliography. Montréal: Infolingua, 1994.
10. Moisl, Hermann. Cluster analysis for corpus linguistics. Berlin: De Gruyter, 2015.
More sources

Book chapters on the topic "Multivariate analysis. Natural language processing (Computer science)"

1. Igual, Laura, and Santi Seguí. "Statistical Natural Language Processing for Sentiment Analysis." In Undergraduate Topics in Computer Science, 181–97. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50017-1_10.
2. Minato, Junko, David B. Bracewell, Fuji Ren, and Shingo Kuroiwa. "Statistical Analysis of a Japanese Emotion Corpus for Natural Language Processing." In Lecture Notes in Computer Science, 924–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/978-3-540-37275-2_116.
3. Vargas, Mónica Pineda, Octavio José Salcedo Parra, and Miguel José Espitia Rico. "Business Perception Based on Sentiment Analysis Through Deep Neuronal Networks for Natural Language Processing." In Lecture Notes in Computer Science, 365–74. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-67380-6_33.
4. Krilavičius, Tomas, Žygimantas Medelis, Jurgita Kapočiūtė-Dzikienė, and Tomas Žalandauskas. "News Media Analysis Using Focused Crawl and Natural Language Processing: Case of Lithuanian News Websites." In Communications in Computer and Information Science, 48–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33308-8_5.
5. Li, Irene, Yixin Li, Tianxiao Li, Sergio Alvarez-Napagao, Dario Garcia-Gasulla, and Toyotaro Suzumura. "What Are We Depressed About When We Talk About COVID-19: Mental Health Analysis on Tweets Using Natural Language Processing." In Lecture Notes in Computer Science, 358–70. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63799-6_27.
6. Kumar, Santosh, and Roopali Sharma. "Applications of AI in Financial System." In Natural Language Processing, 23–30. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0951-7.ch002.
Abstract:
Role of computers are widely accepted and well known in the domain of Finance. Artificial Intelligence(AI) methods are extensively used in field of computer science for providing solution of unpredictable event in a frequent changing environment with utilization of neural network. Professionals are using AI framework into every field for reducing human interference to get better result from few decades. The main objective of the chapter is to point out the techniques of AI utilized in field of finance in broader perspective. The purpose of this chapter is to analyze the background of AI in finance and its role in Finance Market mainly as investment decision analysis tool.
7. Mane, D. T., and U. V. Kulkarni. "A Survey on Supervised Convolutional Neural Network and Its Major Applications." In Natural Language Processing, 1149–61. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0951-7.ch055.
Abstract:
With the advances in the computer science field, various new data science techniques have been emerged. Convolutional Neural Network (CNN) is one of the Deep Learning techniques which have captured lots of attention as far as real world applications are considered. It is nothing but the multilayer architecture with hidden computational power which detects features itself. It doesn't require any handcrafted features. The remarkable increase in the computational power of Convolutional Neural Network is due to the use of Graphics processor units, parallel computing, also the availability of large amount of data in various variety forms. This paper gives the broad view of various supervised Convolutional Neural Network applications with its salient features in the fields, mainly Computer vision for Pattern and Object Detection, Natural Language Processing, Speech Recognition, Medical image analysis.
8. Ptaszynski, Michal, Jacek Maciejewski, Pawel Dybala, Rafal Rzepka, Kenji Araki, and Yoshio Momouchi. "Science of Emoticons." In Speech, Image, and Language Processing for Human Computer Interaction, 234–60. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0954-9.ch012.
Abstract:
Emoticons are string of symbols representing body language in text-based communication. For a long time they have been considered as unnatural language entities. This chapter argues that, in over 40-year-long history of text-based communication, emoticons have gained a status of an indispensable means of support for text-based messages. This makes them fully a part of Natural Language Processing. The fact the emoticons have been considered as unnatural language expressions has two causes. Firstly, emoticons represent body language, which by definition is nonverbal. Secondly, there has been a lack of sufficient methods for the analysis of emoticons. Emoticons represent a multimodal (bimodal in particular) type of information. Although they are embedded in lexical form, they convey non-linguistic information. To prove this argument the authors propose that the analysis of emoticons was based on a theory designed for the analysis of body language. In particular, the authors apply the theory of kinesics to develop a state of the art system for extraction and analysis of kaomoji, Japanese emoticons. The system performance is verified in comparison with other emoticon analysis systems. Experiments showed that the presented approach provides nearly ideal results in different aspects of emoticon analysis, thus proving that emoticons possess features of multimodal expressions.
9. Glad Shiya V., Belsini, and Sharmila K. "Language Processing and Python." In Advances in Computational Intelligence and Robotics, 93–119. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-7728-8.ch006.
Abstract:
Natural language processing is the communication between the humans and the computers. It is the field of computer science which incorporates artificial intelligence and linguistics where machine learning algorithms are used to analyze and process the enormous variety of data. This chapter delivers the fundamental concepts of language processing in Python such as text and word operations. It also gives the details about the preference of Python language for language processing and its advantages. It specifies the basic concept of variables, list, operators, looping statements in Python and explains how it can be implemented in language processing. It also specifies how a structured program can be written using Python, categorizing and tagging of words, how an information can be extracted from a text, syntactic and semantic analysis, and NLP applications. It also concentrates some of the research applications where NLP is applied and the challenges of NLP processing in the real-time area of applications.
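A small example of the basic text and word operations the chapter covers, sketched with NLTK; the sentence is invented, and the tokeniser and tagger models must be downloaded beforehand:

```python
# Basic text and word operations with NLTK (tokenisation and POS tagging).
# First run: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
import nltk

text = "Natural language processing connects computers with human language."
tokens = nltk.word_tokenize(text)      # word-level tokenisation
tagged = nltk.pos_tag(tokens)          # part-of-speech tagging
nouns = [word for word, tag in tagged if tag.startswith("NN")]

print(tokens)
print(tagged)
print("Nouns:", nouns)
```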
10. Mane, D. T., and U. V. Kulkarni. "A Survey on Supervised Convolutional Neural Network and Its Major Applications." In Deep Learning and Neural Networks, 1058–71. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0414-7.ch059.
Abstract:
With the advances in the computer science field, various new data science techniques have been emerged. Convolutional Neural Network (CNN) is one of the Deep Learning techniques which have captured lots of attention as far as real world applications are considered. It is nothing but the multilayer architecture with hidden computational power which detects features itself. It doesn't require any handcrafted features. The remarkable increase in the computational power of Convolutional Neural Network is due to the use of Graphics processor units, parallel computing, also the availability of large amount of data in various variety forms. This paper gives the broad view of various supervised Convolutional Neural Network applications with its salient features in the fields, mainly Computer vision for Pattern and Object Detection, Natural Language Processing, Speech Recognition, Medical image analysis.

Conference proceedings on the topic "Multivariate analysis. Natural language processing (Computer science)"

1. Zhao, Yusheng. "The Analysis of Web Page Information Processing Based on Natural Language Processing." In 2018 International Symposium on Communication Engineering & Computer Science (CECS 2018). Paris, France: Atlantis Press, 2018. http://dx.doi.org/10.2991/cecs-18.2018.79.
2. Kandasamy, Kamalanathan, and Preethi Koroth. "An integrated approach to spam classification on Twitter using URL analysis, natural language processing and machine learning techniques." In 2014 IEEE Students' Conference on Electrical, Electronics and Computer Science (SCEECS). IEEE, 2014. http://dx.doi.org/10.1109/sceecs.2014.6804508.
3. Sundararajan, V. "Constructing a Design Knowledge Base Using Natural Language Processing." In ASME 2006 International Mechanical Engineering Congress and Exposition. ASMEDC, 2006. http://dx.doi.org/10.1115/imece2006-15276.
Abstract:
Mechanical engineering, like other engineering disciplines, has witnessed maturation of various aspects of its domain, obsolescence of some areas and a resurgence of others. With a history of over 200 years of continuous research and development, both in academia and industry, the community has generated enormous amounts of design knowledge in the form of texts, articles and design drawings. With the advent of electronics and computer science, several of the classical mechanisms faced obsolescence, but with the emergence of MEMS and nanotechnology, the same designs are facing a resurrection. Research and development in mechanical engineering would derive enormous benefit from a structured knowledge-base of designs and mechanisms. This paper describes a prototype system that synthesizes a knowledge-base of mechanical designs by the processing of the text in engineering descriptions. The goal is to construct a system that stores and catalogs engineering designs, their sub-assemblies and their super-assemblies for the purposes of archiving, retrieval for launching new designs and for education of engineering design. Engineering texts have a relatively clear discourse structure with fewer ambiguities, less stylistic variations and less use of complex figures of speech. The text is first passed through a part-of-speech tagger. The concept of thematic roles is used to link different parts of the sentence. The discourse structure is then taken into account by anaphora resolution. The knowledge is gradually built up through progressive scanning and analysis of text. References, interconnections and attributes are added or deleted based upon the nature, reliability and strength of the new information. Examples of analysis and resulting knowledge structures are presented.
4. "Systematic Improvement of User Engagement with Academic Titles Using Computational Linguistics." In InSITE 2019: Informing Science + IT Education Conferences: Jerusalem. Informing Science Institute, 2019. http://dx.doi.org/10.28945/4338.
Abstract:
Aim/Purpose: This paper describes a novel approach to systematically improve information interactions based solely on its wording. Background: Providing users with information in a form and format that maximizes its effectiveness is a research question of critical importance. Given the growing competition for users’ attention and interest, it is agreed that digital content must engage. However, there are no clear methods or frameworks for evaluation, optimization and creation of such engaging content. Methodology: Following an interdisciplinary literature review, we recognized three key attributes of words that drive user engagement: (1) Novelty (2) Familiarity (3) Emotionality. Based on these attributes, we developed a model to systematically improve a given content using computational linguistics, natural language processing (NLP) and text analysis (word frequency, sentiment analysis and lexical substitution). We conducted a pilot study (n=216) in which the model was used to formalize evaluation and optimization of academic titles. A between-group design (A/B testing) was used to compare responses to the original and modified (treatment) titles. Data was collected for selection and evaluation (User Engagement Scale). Contribution: The pilot results suggest that user engagement with digital information is fostered by, and perhaps dependent upon, the wording being used. They also provide empirical support that engaging content can be systematically evaluated and produced. Findings: The preliminary results show that the modified (treatment) titles had significantly higher scores for information use and user engagement (selection and evaluation). Recommendations for Practitioners: We propose that computational linguistics is a useful approach for optimizing information interactions. The empirically based insights can inform the development of digital content strategies, thereby improving the success of information interactions. Recommendations for Researchers: By understanding and operationalizing content strategy and engagement, we can begin to focus efforts on designing interfaces which engage users with features appropriate to the task and context of their interactions. This study will benefit the information science field by enabling researchers and practitioners alike to understand the dynamic relationship between users, computer applications and tasks, how to assess whether engagement is taking place and how to design interfaces that engage users. Impact on Society: This research can be used as an important starting point for understanding the phenomenon of digital information interactions and the factors that promote and facilitates them. It can also aid in the development of a broad framework for systematic evaluation, optimization, and creation of effective digital content. Future Research: Moving forward, the validity, reliability and generalizability of our model should be tested in various contexts. In future research, we propose to include additional linguistic factors and develop more sophisticated interaction measures.
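As a rough sketch of the simplest ingredients mentioned in the methodology, word frequency and sentiment scoring, the fragment below profiles two invented candidate titles with NLTK's VADER analyser; it is only an illustration, not the study's model:

```python
# Word-frequency profile and a VADER sentiment score for two invented titles;
# a toy proxy for the "emotionality" and frequency features described above.
# First run: nltk.download("vader_lexicon")
from collections import Counter
from nltk.sentiment import SentimentIntensityAnalyzer

titles = [
    "An analysis of user engagement with academic titles",
    "Why readers click: engaging academic titles that work",
]

sia = SentimentIntensityAnalyzer()
for title in titles:
    freq = Counter(title.lower().split())             # crude word-frequency profile
    emotion = sia.polarity_scores(title)["compound"]   # emotionality proxy in [-1, 1]
    print(f"{title!r}: top words {freq.most_common(3)}, sentiment {emotion:+.2f}")
```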
5. Klokov, Aleksey, Evgenii Slobodyuk, and Michael Charnine. "Predicting the citation and impact factor of terms for scientific publications using machine learning algorithms." In International Conference "Computing for Physics and Technology - CPT2020". ANO "Scientific and Research Center for Information in Physics and Technique", 2020. http://dx.doi.org/10.30987/conferencearticle_5fd755c0ea6458.82600196.
Abstract:
The object of the research when writing the work was the body of text data collected together with the scientific advisor and the algorithms for processing the natural language of analysis. The stream of hypotheses has been tested against computer science scientific publications through a series of simulation experiments described in this dissertation. The subject of the research is algorithms and the results of the algorithms, aimed at predicting promising topics and terms that appear in the course of time in the scientific environment. The result of this work is a set of machine learning models, with the help of which experiments were carried out to identify promising terms and semantic relationships in the text corpus. The resulting models can be used for semantic processing and analysis of other subject areas.