Academic literature on the topic 'Computational linguistics ; Semantics ; Linguistic analysis (Linguistics)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computational linguistics ; Semantics ; Linguistic analysis (Linguistics).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Computational linguistics ; Semantics ; Linguistic analysis (Linguistics)"

1

[Yadav Raj Upadhyay], यादवराज उपाध्याय. "भाषा र पारिभाषिक शब्दावलीको कोशीय प्रारूपः एक विश्लेषण [Lexical Structures of Language and Linguistic Semantics: An Analysis]." Prithvi Journal of Research and Innovation 3, no. 1 (June 2, 2021): 94–111. http://dx.doi.org/10.3126/pjri.v3i1.37438.

Full text
Abstract:
This research article examines technical terminology and lexicographic modelling, introducing language, linguistics, and its branches. Linguistics is the scientific study of language, the medium for expressing and exchanging ideas across cultures. Having developed out of grammar and philology, linguistics has phonetics, phonology, grammar (morphology, morphophonemics and syntax) and semantics as its principal structural branches. By method of analysis it has three main branches: historical, comparative and descriptive linguistics. On theoretical versus practical grounds it is of two kinds, theoretical and applied; language teaching, lexicology, stylistics, sociolinguistics, psycholinguistics, translation studies, computational linguistics, contrastive linguistics and discourse analysis are among the types of applied linguistics. Across these branches there are hundreds of technical terms that must be understood through their definitions, and the article also presents a limited sample of a lexicographic model in which such terminology can be presented in dictionary form. [Abstract translated from Nepali.]
2

Karttunen, Lauri. "Word Play." Computational Linguistics 33, no. 4 (December 2007): 443–67. http://dx.doi.org/10.1162/coli.2007.33.4.443.

Full text
Abstract:
This article is a perspective on some important developments in semantics and in computational linguistics over the past forty years. It reviews two lines of research that lie at opposite ends of the field: semantics and morphology. The semantic part deals with issues from the 1970s such as discourse referents, implicative verbs, presuppositions, and questions. The second part presents a brief history of the application of finite-state transducers to linguistic analysis starting with the advent of two-level morphology in the early 1980s and culminating in successful commercial applications in the 1990s. It offers some commentary on the relationship, or the lack thereof, between computational and paper-and-pencil linguistics. The final section returns to the semantic issues and their application to currently popular tasks such as textual inference and question answering.
3

Robinson, Justyna A. "A gay paper: why should sociolinguistics bother with semantics?" English Today 28, no. 4 (December 2012): 38–54. http://dx.doi.org/10.1017/s0266078412000399.

Full text
Abstract:
The study of meaning and changes in meaning has enjoyed varying levels of popularity within linguistics. There have been periods during which the exploration of meaning was of prime importance. For instance, in the late 19th century scholars considered the exploration of the etymology of words to be crucial in their quest to find the ‘true’ meaning of lexemes (Geeraerts, 2010; Malkiel, 1993). There have also been periods where semantic analysis was considered redundant to linguistic investigation (Hockett, 1954: 152). In the past 20–30 years semantics has enjoyed a period of revival. This has been mainly led by the advances in cognitive linguistics (and to some extent, historical linguistics) as well as by the innovations associated with the development of electronic corpora and computational methods for extracting and tracing changes in the behaviour of the lexicon (cf. Geeraerts, 2010: 168ff, 261ff). However, there are still areas of linguistics which hardly involve lexis in their theoretical and epistemological considerations. One such area is sociolinguistics.
4

Stede, Manfred. "Automatic argumentation mining and the role of stance and sentiment." Journal of Argumentation in Context 9, no. 1 (May 4, 2020): 19–41. http://dx.doi.org/10.1075/jaic.00006.ste.

Full text
Abstract:
Argumentation mining is a subfield of Computational Linguistics that aims (primarily) at automatically finding arguments and their structural components in natural language text. We provide a short introduction to this field, intended for an audience with a limited computational background. After explaining the subtasks involved in this problem of deriving the structure of arguments, we describe two other applications that are popular in computational linguistics: sentiment analysis and stance detection. From the linguistic viewpoint, they concern the semantics of evaluation in language. In the final part of the paper, we briefly examine the roles that these two tasks play in argumentation mining, both in current practice and in possible future systems.
5

DONG, ANDY. "Concept formation as knowledge accumulation: A computational linguistics study." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 20, no. 1 (February 2006): 35–53. http://dx.doi.org/10.1017/s0890060406060033.

Full text
Abstract:
Language plays at least two roles in design. First, language serves as representations of ideas and concepts through linguistic behaviors that represent the structure of thought during the design process. Second, language also performs actions and creates states of affairs. Based on these two perspectives on language use in design, we apply the computational linguistics tools of latent semantic analysis and lexical chain analysis to characterize how design teams engage in concept formation as the accumulation of knowledge represented by lexicalized concepts. The accumulation is described in a data structure comprised by a set of links between elemental lexicalized concepts. The folding together of these two perspectives on language use in design with the information processing theories of the mind afforded by the computational linguistics tools applied creates a new means to evaluate concept formation in design teams. The method suggests that analysis at a linguistic level can characterize concept formation even where process-oriented critiques were limited in their ability to uncover a formal design method that could explain the phenomenon.
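The latent semantic analysis mentioned in this abstract reduces a term-document co-occurrence matrix via truncated SVD so that terms used in similar contexts receive similar vectors. A minimal sketch under invented toy data (the terms and matrix below are made up for illustration; this is not the authors' data or code):

```python
import numpy as np

# Toy term-document count matrix: rows = terms, columns = design-session transcripts.
terms = ["design", "concept", "sketch", "budget", "cost"]
X = np.array([
    [4, 3, 0],   # design
    [3, 4, 1],   # concept
    [2, 3, 0],   # sketch
    [0, 1, 3],   # budget
    [0, 0, 4],   # cost
], dtype=float)

# Latent semantic analysis: truncated SVD projects terms into a k-dimensional
# latent space where frequently co-occurring terms end up close together.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]          # term coordinates in the latent space

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# "design" and "concept" co-occur heavily; "design" and "cost" do not,
# so the first pair should be closer in the latent space.
sim_design_concept = cos(term_vecs[0], term_vecs[1])
sim_design_cost = cos(term_vecs[0], term_vecs[4])
```

Lexical chain analysis, the other tool named in the abstract, would then link such related terms across a transcript; the sketch above covers only the LSA half.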
6

Ali, Mazhar, and Asim Imdad Wagan. "An Analysis of Sindhi Annotated Corpus using Supervised Machine Learning Methods." Mehran University Research Journal of Engineering and Technology 38, no. 1 (January 1, 2019): 185–96. http://dx.doi.org/10.22581/muet1982.1901.15.

Full text
Abstract:
A linguistic corpus of Sindhi is significant for computational linguistics, machine learning, the identification and analysis of language features, semantic and sentiment analysis, information retrieval, and so on. Little computational linguistics work has been done on Sindhi text, whereas English, Arabic, Urdu and some other languages are fully resourced computationally, with their grammar and morphemes properly analysed using various machine learning methods. Development and research on computational linguistics for Sindhi is currently in progress. This study develops a Sindhi annotated corpus using the universal POS (Part of Speech) tag set and a Sindhi POS tag set for the purpose of analysing language features and variation. Features are extracted using the TF-IDF (Term Frequency-Inverse Document Frequency) technique. A supervised machine learning model is developed to assess the annotated corpus and capture the grammatical annotation of Sindhi. The model is trained on 80% of the annotated corpus and tested on the remaining 20%, and 10-fold cross-validation is used to evaluate and validate it. The results show good model performance and confirm the proper annotation of the Sindhi corpus. The study also identifies a number of research gaps for further work on topic modelling, language variation, and the sentiment and semantic analysis of Sindhi.
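The TF-IDF feature extraction the study relies on weights each term by its frequency in a document, discounted by how many documents contain it. A minimal sketch with invented English stand-in tokens (not the Sindhi corpus or the authors' pipeline):

```python
import math
from collections import Counter

# Toy corpus standing in for tokenized sentences (invented for illustration).
docs = [
    "the noun follows the verb".split(),
    "the verb follows the noun".split(),
    "adjectives modify the noun".split(),
]

def tf_idf(docs):
    """Return one {term: weight} vector per document, weight = TF * log(N/DF)."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency per term
    vectors = []
    for d in docs:
        tf = Counter(d)
        vectors.append({t: (tf[t] / len(d)) * math.log(n / df[t]) for t in tf})
    return vectors

vectors = tf_idf(docs)
# "the" occurs in every document, so its IDF (and hence its weight) is zero,
# while a term like "adjectives" appears in one document and gets positive weight.
```

These vectors would then feed the supervised classifier; the 80/20 split and 10-fold cross-validation described in the abstract operate on top of such features.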
7

BOSQUE-GIL, J., J. GRACIA, E. MONTIEL-PONSODA, and A. GÓMEZ-PÉREZ. "Models to represent linguistic linked data." Natural Language Engineering 24, no. 6 (October 4, 2018): 811–59. http://dx.doi.org/10.1017/s1351324918000347.

Full text
Abstract:
As the interest of the Semantic Web and computational linguistics communities in linguistic linked data (LLD) keeps increasing and the number of contributions that dwell on LLD rapidly grows, scholars (and linguists in particular) interested in the development of LLD resources sometimes find it difficult to determine which mechanism is suitable for their needs and which challenges have already been addressed. This review seeks to present the state of the art on the models, ontologies and their extensions to represent language resources as LLD by focusing on the nature of the linguistic content they aim to encode. Four basic groups of models are distinguished in this work: models to represent the main elements of lexical resources (group 1), vocabularies developed as extensions to models in group 1 and ontologies that provide more granularity on specific levels of linguistic analysis (group 2), catalogues of linguistic data categories (group 3) and other models such as corpora models or service-oriented ones (group 4). Contributions encompassed in these four groups are described, highlighting their reuse by the community and the modelling challenges that are still to be faced.
8

Jenset, Gard B. "Mapping meaning with distributional methods." Journal of Historical Linguistics 3, no. 2 (December 31, 2013): 272–306. http://dx.doi.org/10.1075/jhl.3.2.04jen.

Full text
Abstract:
The semantics of existential there is discussed in a diachronic, corpus-based perspective. While previous studies of there have been qualitative or relied on interpreting relative frequencies directly, the present study combines multivariate statistical techniques with linguistic theory through distributional semantics. It is argued that existential uses of there in earlier stages of English were not semantically empty, and that the original meaning was primarily deictic rather than locative. This analysis combines key insights from previous studies of existential there with a Construction Grammar perspective, and discusses some methodological concerns regarding statistical methods for creating computational semantic maps from diachronic corpus data.
9

Iomdin, Leonid. "Microsyntactic Annotation of Corpora and its Use in Computational Linguistics Tasks." Journal of Linguistics/Jazykovedný casopis 68, no. 2 (December 1, 2017): 169–78. http://dx.doi.org/10.1515/jazcas-2017-0027.

Full text
Abstract:
Microsyntax is a linguistic discipline dealing with idiomatic elements whose important properties are strongly related to syntax. In a way, these elements may be viewed as transitional entities between the lexicon and the grammar, which explains why they are often underrepresented in both of these resource types: the lexicographer fails to see such elements as full-fledged lexical units, while the grammarian finds them too specific to justify the creation of individual well-developed rules. As a result, such elements are poorly covered by linguistic models used in advanced modern computational linguistic tasks like high-quality machine translation or deep semantic analysis. A possible way to mend the situation and improve the coverage and adequate treatment of microsyntactic units in linguistic resources is to develop corpora with microsyntactic annotation, closely linked to specially designed lexicons. The paper shows how this task is solved in the deeply annotated corpus of Russian, SynTagRus.
10

Jäppinen, H., T. Honkela, H. Hyötyniemi, and A. Lehtola. "A Multilevel Natural Language Processing Model." Nordic Journal of Linguistics 11, no. 1-2 (June 1988): 69–87. http://dx.doi.org/10.1017/s033258650000175x.

Full text
Abstract:
In this paper we describe a multilevel model for natural language processing. The distinct computational strata are motivated by invariant linguistic properties which are progressively uncovered from utterances. We examine each level in detail. The processes are morphological analysis, dependency parsing, logico-semantic analysis and query adaptation. Both linguistic and computational aspects are discussed. In addition to theory, we consider certain engineering viewpoints important and discuss them briefly.
More sources

Dissertations / Theses on the topic "Computational linguistics ; Semantics ; Linguistic analysis (Linguistics)"

1

Moilanen, Karo. "Compositional entity-level sentiment analysis." Thesis, University of Oxford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.559817.

Full text
Abstract:
This thesis presents a computational text analysis tool called AFFECTiS (Affect Interpretation/Inference System) which focuses on the task of interpreting natural language text based on its subjective, non-factual, affective properties that go beyond the 'traditional' factual, objective dimensions of meaning that have so far been the main focus of Natural Language Processing and Computational Linguistics. The thesis presents a fully compositional, uniform, wide-coverage computational model of sentiment in text that builds on a number of fundamental compositional sentiment phenomena and processes discovered by detailed linguistic analysis of the behaviour of sentiment across key syntactic constructions in English. Driven by the Principle of Semantic Compositionality, the proposed model breaks sentiment interpretation down into strictly binary combinatory steps, each of which explains the polarity of a given sentiment expression as a function of the properties of the sentiment carriers contained in it and the grammatical and semantic context(s) involved. An initial implementation of the proposed compositional sentiment model is described which attempts direct logical sentiment reasoning rather than basing computational sentiment judgements on indirect data-driven evidence. Together with deep grammatical analysis and large hand-written sentiment lexica, the model is applied recursively to assign sentiment to all (sub)sentential structural constituents and to concurrently equip all individual entity mentions with gradient sentiment scores. The system was evaluated on an extensive multi-level and multi-task evaluation framework encompassing over 119,000 test cases from which detailed empirical experimental evidence is drawn. The results across entity-, phrase-, sentence-, word-, and document-level data sets demonstrate that AFFECTiS is capable of human-like sentiment reasoning and can interpret sentiment in a way that is not only coherent syntactically but also defensible logically, even in the presence of the many ambiguous extralinguistic, paralogical, and mixed sentiment anomalies that so tellingly characterise the challenges involved in non-factual classification.
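The strictly binary combinatory steps described in this abstract can be illustrated with a toy recursive polarity function (an invented mini-lexicon and tree encoding, not the AFFECTiS implementation): each step derives the polarity of an expression from its two parts, with reversers such as negation flipping their argument's polarity.

```python
# Invented sentiment carriers with prior polarities (+1 positive, -1 negative, 0 neutral).
PRIOR = {"good": 1, "terrible": -1, "film": 0, "not": 0, "plot": 0}
REVERSERS = {"not", "never", "lacks"}

def polarity(node):
    """Strictly binary composition: a leaf returns its prior polarity;
    an (operator, argument) pair combines the two, reversing the argument's
    polarity when the operator is a reverser (e.g. negation)."""
    if isinstance(node, str):
        return PRIOR.get(node, 0)
    op, arg = node
    p = polarity(arg)
    if isinstance(op, str) and op in REVERSERS:
        return -p
    return polarity(op) or p   # a non-neutral operator dominates, else percolate up

# "not (a) good film": negation reverses the positive carrier "good".
tree = ("not", ("good", "film"))
p = polarity(tree)
```

The thesis composes much richer properties than a single ±1 polarity, but the recursion over binary constituents is the same shape.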
2

Sinha, Ravi Som. "Graph-based centrality algorithms for unsupervised word sense disambiguation." [Denton, Tex.]: University of North Texas, 2008. http://digital.library.unt.edu/permalink/meta-dc-9736.

Full text
3

Konrad, Karsten. "Model generation for natural language interpretation and analysis /." Berlin [u.a.] : Springer, 2004. http://www.loc.gov/catdir/enhancements/fy0818/2004042936-d.html.

Full text
4

Davis, Nathan Scott. "An Analysis of Document Retrieval and Clustering Using an Effective Semantic Distance Measure." Diss., Brigham Young University, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2674.pdf.

Full text
5

Bihi, Ahmed. "Analysis of similarity and differences between articles using semantics." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-34843.

Full text
Abstract:
Adding semantic analysis to the process of comparing news articles enables a deeper level of analysis than traditional keyword matching. In this bachelor's thesis, we have implemented, compared, and evaluated three commonly used approaches to document-level similarity: keyword matching, TF-IDF vector distance, and Latent Semantic Indexing. Each method was evaluated on a coherent set of news articles, the majority of which were written about Donald Trump and the American election of 9 November 2016; the set also contained several control articles about random topics. TF-IDF vector distance combined with cosine similarity, and Latent Semantic Indexing, gave the best results on the set of articles by separating the control articles from the Trump articles. Keyword matching and TF-IDF distance using Euclidean distance did not separate the Trump articles from the control articles. We also implemented and performed sentiment analysis on the set of news articles with the classes positive, negative and neutral, and validated the results against human readers classifying the same articles. With this sentiment analysis implementation, we obtained a high correlation with the human readers (100%).
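The finding that cosine similarity separated the articles while Euclidean distance did not is what the geometry of TF-IDF vectors would predict: cosine compares direction only, while Euclidean distance also reflects document length. A toy illustration (the three-term vectors below are invented, not the thesis data):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Two hypothetical TF-IDF vectors about the same topic, one from a much longer
# article (same direction, larger magnitude), plus an off-topic control vector.
short_trump = [1.0, 2.0, 0.0]
long_trump  = [3.0, 6.0, 0.0]   # same topic, three times the length
control     = [0.0, 0.5, 2.0]

# Cosine ignores length: the two on-topic vectors point the same way (similarity 1).
cos_same = cosine(short_trump, long_trump)
# Euclidean conflates topic and length: the long on-topic article ends up
# *farther* from the short one than the off-topic control article is.
d_same = euclidean(short_trump, long_trump)
d_control = euclidean(short_trump, control)
```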
6

Faruque, Md Ehsanul. "A Minimally Supervised Word Sense Disambiguation Algorithm Using Syntactic Dependencies and Semantic Generalizations." Thesis, University of North Texas, 2005. https://digital.library.unt.edu/ark:/67531/metadc4969/.

Full text
Abstract:
Natural language is inherently ambiguous. For example, the word "bank" can mean a financial institution or a river shore. Finding the correct meaning of a word in a particular context is a task known as word sense disambiguation (WSD), which is essential for many natural language processing applications such as machine translation, information retrieval, and others. While most current WSD methods try to disambiguate a small number of words for which enough annotated examples are available, the method proposed in this thesis attempts to address all words in unrestricted text. The method is based on constraints imposed by syntactic dependencies and concept generalizations drawn from an external dictionary. The method was tested on standard benchmarks as used during the SENSEVAL-2 and SENSEVAL-3 WSD international evaluation exercises, and was found to be competitive.
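The "bank" example lends itself to the classic dictionary-based baseline, simplified Lesk gloss overlap, sketched here with an invented two-sense inventory; the thesis's own method additionally exploits syntactic dependencies and concept generalizations, which this sketch omits:

```python
# Invented sense inventory for "bank": each sense maps to a set of gloss words.
GLOSSES = {
    "bank/financial": {"institution", "money", "deposit", "loan", "account"},
    "bank/river": {"shore", "river", "water", "slope", "land"},
}

def disambiguate(context_words):
    """Pick the sense whose dictionary gloss overlaps most with the context."""
    context = set(context_words)
    return max(GLOSSES, key=lambda sense: len(GLOSSES[sense] & context))

# A money-related context should select the financial sense.
sense = disambiguate("she opened an account at the bank to deposit money".split())
```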
7

Sinha, Ravi Som. "Graph-based Centrality Algorithms for Unsupervised Word Sense Disambiguation." Thesis, University of North Texas, 2008. https://digital.library.unt.edu/ark:/67531/metadc9736/.

Full text
Abstract:
This thesis introduces an innovative methodology of combining some traditional dictionary based approaches to word sense disambiguation (semantic similarity measures and overlap of word glosses, both based on WordNet) with some graph-based centrality methods, namely the degree of the vertices, Pagerank, closeness, and betweenness. The approach is completely unsupervised, and is based on creating graphs for the words to be disambiguated. We experiment with several possible combinations of the semantic similarity measures as the first stage in our experiments. The next stage attempts to score individual vertices in the graphs previously created based on several graph connectivity measures. During the final stage, several voting schemes are applied on the results obtained from the different centrality algorithms. The most important contributions of this work are not only that it is a novel approach and it works well, but also that it has great potential in overcoming the new-knowledge-acquisition bottleneck which has apparently brought research in supervised WSD as an explicit application to a plateau. The type of research reported in this thesis, which does not require manually annotated data, holds promise of a lot of new and interesting things, and our work is one of the first steps, despite being a small one, in this direction. The complete system is built and tested on standard benchmarks, and is comparable with work done on graph-based word sense disambiguation as well as lexical chains. The evaluation indicates that the right combination of the above mentioned metrics can be used to develop an unsupervised disambiguation engine as powerful as the state-of-the-art in WSD.
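The centrality-scoring stage described in this abstract can be sketched with a small power-iteration PageRank over a toy sense graph (node names and edges invented): a sense sitting in a denser neighbourhood accumulates more rank, and degree gives the same ordering here; these are the per-vertex scores the voting schemes then combine.

```python
# Undirected toy sense graph: "bank#1" is the well-connected financial sense,
# "bank#2" the isolated river sense.
EDGES = [("bank#1", "deposit#1"), ("bank#1", "money#1"), ("bank#1", "loan#1"),
         ("deposit#1", "money#1"), ("bank#2", "shore#1")]

nodes = sorted({n for e in EDGES for n in e})
neighbors = {n: [] for n in nodes}
for a, b in EDGES:
    neighbors[a].append(b)
    neighbors[b].append(a)

def pagerank(damping=0.85, iters=50):
    """Plain power iteration; every node donates rank/degree to each neighbour."""
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        rank = {n: (1 - damping) / len(nodes)
                   + damping * sum(rank[m] / len(neighbors[m]) for m in neighbors[n])
                for n in nodes}
    return rank

rank = pagerank()
degree = {n: len(neighbors[n]) for n in nodes}
# The densely connected sense "bank#1" outranks the isolated "bank#2".
```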
8

Carter, David Maclean. "A shallow processing approach to anaphor resolution." Thesis, University of Cambridge, 1986. https://www.repository.cam.ac.uk/handle/1810/256804.

Full text
Abstract:
The thesis describes an investigation of the feasibility of resolving anaphors in natural language texts by means of a "shallow processing" approach which exploits knowledge of syntax, semantics and local focussing as heavily as possible; it does not rely on the presence of large amounts of world or domain knowledge, which are notoriously hard to process accurately. The ideas reported are implemented in a program called SPAR (Shallow Processing Anaphor Resolver), which resolves anaphoric and other linguistic ambiguities in simple English stories and generates sentence-by-sentence paraphrases that show what interpretations have been selected. Input to SPAR takes the form of semantic structures for single sentences constructed by Boguraev's English analyser. These structures are integrated into a network-style text representation as processing proceeds. To achieve anaphor resolution, SPAR combines and develops several existing techniques, most notably Sidner's theory of local focussing and Wilks' "preference semantics" theory of semantics and common sense inference. Consideration of the need to resolve several anaphors in the same sentence results in Sidner's framework being modified and extended to allow focus-based processing to interact more flexibly with processing based on other types of knowledge. Wilks' treatment of common sense inference is extended to incorporate a wider range of types of inference without jeopardizing its uniformity and simplicity. Further, his primitive-based formalism for word sense meanings is developed in the interests of economy, accuracy and ease of use. Although SPAR is geared mainly towards resolving anaphors, the design of the system allows many non-anaphoric (lexical and structural) ambiguities that cannot be resolved during sentence analysis to be resolved as a by-product of anaphor resolution.
9

Gränsbo, Gustav. "Word Clustering in an Interactive Text Analysis Tool." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157497.

Full text
Abstract:
A central operation of users of the text analysis tool Gavagai Explorer is to look through a list of words and arrange them in groups. This thesis explores the use of word clustering to automatically arrange the words in groups intended to help users. A new word clustering algorithm is introduced, which attempts to produce word clusters tailored to be small enough for a user to quickly grasp the common theme of the words. The proposed algorithm computes similarities among words using word embeddings, and clusters them using hierarchical graph clustering. Multiple variants of the algorithm are evaluated in an unsupervised manner by analysing the clusters they produce when applied to 110 data sets previously analysed by users of Gavagai Explorer. A supervised evaluation is performed to compare clusters to the groups of words previously created by users of Gavagai Explorer. Results show that it was possible to choose a set of hyperparameters deemed to perform well across most data sets in the unsupervised evaluation. These hyperparameters also performed among the best on the supervised evaluation. It was concluded that the choice of word embedding and graph clustering algorithm had little impact on the behaviour of the algorithm. Rather, limiting the maximum size of clusters and filtering out similarities between words had a much larger impact on behaviour.
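The two controls the abstract singles out, filtering weak similarities and capping cluster size, can be sketched as a greedy agglomeration over embedding similarities (toy two-dimensional "embeddings" and thresholds invented for illustration; this is not the thesis's algorithm):

```python
import math

EMB = {   # invented 2-d word embeddings for a customer-feedback theme
    "price": (1.0, 0.1), "cost": (0.9, 0.2), "cheap": (0.95, 0.15),
    "delivery": (0.1, 1.0), "shipping": (0.2, 0.9),
}

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def cluster(words, min_sim=0.8, max_size=2):
    """Merge the most similar pairs first, skipping weak links (min_sim)
    and merges that would exceed the per-cluster size cap (max_size)."""
    pairs = sorted(((cos(EMB[a], EMB[b]), a, b)
                    for i, a in enumerate(words) for b in words[i + 1:]),
                   reverse=True)
    cluster_of = {w: {w} for w in words}
    for sim, a, b in pairs:
        if sim < min_sim:                 # similarity filtering
            break
        ca, cb = cluster_of[a], cluster_of[b]
        if ca is not cb and len(ca) + len(cb) <= max_size:   # size cap
            ca |= cb
            for w in cb:
                cluster_of[w] = ca
    return {frozenset(c) for c in cluster_of.values()}

clusters = cluster(list(EMB))
```

With the cap at two, the price-related words cannot all merge, which mirrors the thesis's point that limiting cluster size keeps groups small enough to grasp at a glance.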
10

Prost, Jean-Philippe. "Modelling Syntactic Gradience with Loose Constraint-based Parsing." PhD thesis, Université de Provence - Aix-Marseille I, 2008. http://tel.archives-ouvertes.fr/tel-00352828.

Full text
Abstract:
The grammaticality of a sentence is usually conceived as a binary notion: a sentence is either grammatical or ungrammatical. However, a growing number of studies examine intermediate degrees of acceptability, sometimes referred to as gradience. To date, most of this work has concentrated on the study of human judgements of syntactic gradience. This study explores the possibility of building a robust model that agrees with those human judgements.
We suggest extending the concepts of Intersective Gradience and Subsective Gradience, proposed by Aarts for modelling graded judgements, to ill-formed language. Under this new model, the problem raised by gradience is that of classifying an utterance into a particular category according to criteria based on its syntactic characteristics. We extend the notion of Intersective Gradience (IG) so that it concerns the choice of the best solution among a set of candidates, and that of Subsective Gradience (SG) so that it concerns the computation of the degree of typicality of that structure within its category. IG is then modelled by means of an optimality criterion, while SG is modelled by computing a degree of grammatical acceptability. As for the syntactic characteristics required to classify an utterance, our study of different representational frameworks for natural language syntax shows that they can readily be represented in a Model-Theoretic Syntax framework. We opt for Property Grammars (PG), which offer precisely the possibility of modelling the characterisation of an utterance. We present a fully automated solution for modelling syntactic gradience, which proceeds by characterising a well- or ill-formed sentence, generating an optimal parse tree, and computing a degree of grammatical acceptability for the utterance.
Through the development of this new model, the contribution of this work is threefold.
First, we specify a logical system for PG that allows its formalisation to be revised from a model-theoretic perspective. In particular, it formalises the constraint satisfaction and constraint relaxation mechanisms at work in PG, as well as the way they licence the projection of a category during the parsing process. This new system introduces the notion of loose satisfaction, together with a first-order-logic formulation for reasoning about an utterance.
Second, we present our implementation of Loose Satisfaction Chart Parsing (LSCP), which we prove always generates a complete and optimal parse. This approach is based on a dynamic programming technique together with the mechanisms described above. Although of high complexity, this algorithmic solution performs well enough to let us experiment with our gradience model.
Third, having postulated that the prediction of human acceptability judgements can be based on factors derived from LSCP, we present a numerical model for estimating the degree of grammatical acceptability of an utterance. Its scores correlate well with human judgements of grammatical acceptability. Moreover, our model outperforms a pre-existing model that we use as a baseline and that was itself evaluated with manually generated parses. [Abstract translated from French.]
More sources

Books on the topic "Computational linguistics ; Semantics ; Linguistic analysis (Linguistics)"

1

Underspecification and resolution in discourse semantics. Saarbrücken: DFKI, 2001.

Find full text
2

Shieber, Stuart M., ed. Prolog and natural-language analysis. Stanford, CA: Center for the Study of Language and Information, 1987.

Find full text
3

Pereira, Fernando C. N. Prolog and natural language analysis. Stanford, Calif: Center for the Study of Language & Information, 1987.

Find full text
4

Gelbukh, Alexander. Semantic Analysis of Verbal Collocations with Lexical Functions. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.

Find full text
5

Lorenz, Gunter R. Adjective intensification--learners versus native speakers: A corpus study of argumentative writing. Amsterdam: Rodopi, 1999.

Find full text
6

Lagerwerf, Luuk. Causal connectives have presuppositions: Effects on coherence and discourse structure. The Hague: Holland Academic Graphics, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sabourin, Conrad. Computational lexicology and lexicography: Dictionaries, thesauri, term banks, analysis, transfer and generation dictionaries, machine readable dictionaries, lexical semantics, lexicon grammars : bibliography. Montréal: Infolingua, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Naive semantics for natural language understanding. Boston: Kluwer Academic Publishers, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mapping academic values in the disciplines: A corpus-based approach. Bern: Peter Lang, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Reppen, Randi, and Douglas Biber. Corpus linguistics. Thousand Oaks, CA: SAGE Publications, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Computational linguistics ; Semantics ; Linguistic analysis (Linguistics)"

1. Luo, Zhaohui. "Contextual Analysis of Word Meanings in Type-Theoretical Semantics." In Logical Aspects of Computational Linguistics, 159–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22221-4_11.
2. Nastase, Vivi, and Stan Szpakowicz. "Customisable Semantic Analysis of Texts." In Computational Linguistics and Intelligent Text Processing, 312–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-30586-6_34.
3. Sidorov, Grigori. "Latent Semantic Analysis (LSA): Reduction of Dimensions." In Syntactic n-grams in Computational Linguistics, 17–19. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14771-6_4.
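Entry 3 treats latent semantic analysis as dimensionality reduction; the core step, a truncated SVD of a term-document matrix, can be sketched as follows (the toy matrix is an invented example, not data from the book):

```python
import numpy as np

# Toy term-document count matrix (5 terms x 4 documents); values are illustrative.
X = np.array([
    [2.0, 1.0, 0.0, 0.0],
    [1.0, 2.0, 0.0, 0.0],
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 0.0, 2.0, 1.0],
    [0.0, 0.0, 1.0, 2.0],
])

def lsa_reduce(X, k):
    """Return k-dimensional document vectors and the rank-k reconstruction of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    doc_vecs = (np.diag(s[:k]) @ Vt[:k, :]).T    # documents in the latent space
    X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # best rank-k approximation of X
    return doc_vecs, X_k

doc_vecs, X_k = lsa_reduce(X, k=2)
```

Documents that share latent dimensions end up close in the reduced space even when they share few surface terms, which is the effect LSA exploits.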
4. Kang, Bo-Yeong, Hae-Jung Kim, and Sang-Jo Lee. "Performance Analysis of Semantic Indexing in Text Retrieval." In Computational Linguistics and Intelligent Text Processing, 433–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24630-5_52.
5. Poria, Soujanya, Basant Agarwal, Alexander Gelbukh, Amir Hussain, and Newton Howard. "Dependency-Based Semantic Parsing for Concept-Level Text Analysis." In Computational Linguistics and Intelligent Text Processing, 113–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-54906-9_10.
6. Tang, Li, Donghong Ji, Yu Nie, and Lingpeng Yang. "An Application of a Semantic Framework for the Analysis of Chinese Sentences." In Computational Linguistics and Intelligent Text Processing, 42–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24630-5_5.
7. Tomokiyo, Mutsuko, and Gérard Chollet. "VoiceUNL: A Semantic Representation of Emotions Within Universal Networking Language Formalism Based on a Dialogue Corpus Analysis." In Computational Linguistics and Intelligent Text Processing, 441–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-30586-6_49.
8. Heletka, Marharyta, Iryna Cherkashchenko, and Valentyna Kravchuk. "BUSINESS MODEL AS A SUBJECT FOR LINGUAL AND COGNITIVE ANALYSIS." In Integration of traditional and innovative scientific researches: global trends and regional as. Publishing House "Baltija Publishing", 2020. http://dx.doi.org/10.30525/978-9934-26-001-8-1-10.
Abstract: Lingual analysis allows structuring and rationalizing human perception of the real world through the primacy of semantics, the encyclopedic nature of linguistic meaning, and the perspectival nature of pure lexical meaning. Cognitive science focuses on the human mind, assuming it has mental representations similar to computer data structures and computational procedures identical to computational algorithms. Supposedly, the human mind relies on such mental representations as declarative knowledge, including logical propositions, rules, concepts, images, and analogies. Additionally, the mind uses procedural knowledge, including operations such as search, matching, retrieval and deduction. The combination of lingual and cognitive analyses turns out to be an effective tool for a comprehensive approach to the study and deep understanding of language concepts that reflect phenomena of the real world. The paper deals with BUSINESS MODEL as a complicated economic concept whose profound analysis and understanding is of great practical value for the business analysis segment. Proceeding from the above, lingual and cognitive analysis of the concept BUSINESS MODEL also requires an interdisciplinary approach, related both to linguo-cognitive and economic studies. Thus, the paper represents an attempt to clarify the mental essence of BUSINESS MODEL, which is implied by the diverse language units verbalizing this concept, and to give it a rational, structured form that can be easily understood and used by experts in the field of economics. The research also focuses on the major stages of linguo-cognitive analysis, used for establishing the relationship between mental and language representations of BUSINESS MODEL as an extralinguistic essence. The analysis offered enables determining a generalized definition of the BUSINESS MODEL in terms of cognitive linguistics and business modeling/reengineering.
At long last, the cognitive paradigm of modern linguistic studies enables linguists to discover extralinguistic reality and the mechanisms of human thinking through the lens of language data, as well as the processes of coding and objectifying knowledge about the world in language structures. The relevance of the paper stems from an important practical scientific task, namely the necessity to generalize the definition of the concept of BUSINESS MODEL in order to provide business-modeling and reengineering services to corporations. The aim of the paper is to create the conceptual interframe net of BUSINESS MODEL; to determine semantic roles (actants) as part of the propositions that form frames; and to find out the structure of the universal BUSINESS MODEL. The research focuses on the concept BUSINESS MODEL and a set of semantic roles and the connections between them that form the concept under examination. Moreover, it has been established that BUSINESS MODEL belongs to semiotic fractal systems. The lingual and cognitive analyses made it possible to identify the preconditions for specifying top-down levels of the business model as a multi-level construction with an iterative nature.
9. "Semantic analysis." In Computational Linguistics, 90–139. Cambridge University Press, 1986. http://dx.doi.org/10.1017/cbo9780511611797.004.
10. Rupp, C. J., Roderick Johnson, and Michael Rosner. "Situation schemata and linguistic representation." In Computational Linguistics and Formal Semantics, 191–222. Cambridge University Press, 1992. http://dx.doi.org/10.1017/cbo9780511611803.008.

Conference papers on the topic "Computational linguistics ; Semantics ; Linguistic analysis (Linguistics)"

1. Dong, Andy, Kevin Davies, and David McInnes. "Exploring the Relationship Between Lexical Behavior and Concept Formation in Design Conversations." In ASME 2005 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2005. http://dx.doi.org/10.1115/detc2005-84407.
Abstract: Designers bring individual knowledge and perspectives to the team. The hypothesis tested in this research is that semantic and grammatical structures (the language through which concepts are expressed) enable designers to bridge relations among ideas stored in each designer's mind and from this to generate design concepts. This paper describes a linguistic and a computational method to examine the grammatical and semantic structure of design conversations and the linguistic processes by which individuals bridge their knowledge to the group's ongoing knowledge accumulation. To test the hypothesis, we conducted a linguistic (systemic functional linguistics) and a computational linguistic (lexical chain analysis) analysis of a design team conversation. The computational analysis revealed hypernym relations as the primary lexico-syntactic pattern by which designers offer, interrelate and develop concepts. The linguistic analysis highlighted the grammatical features that actively contribute to the generation of design content by teams. These analyses point to the prospect of a functional correspondence between language use and a team's ability to construct knowledge for design. This interrelation has implications both for computational systems that assess design teams and for design teamwork education.
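The hypernym relations that this abstract identifies as the primary lexico-syntactic pattern can be illustrated with a toy, hand-coded taxonomy; the word hierarchy below is an invented placeholder, not the paper's data or method.

```python
# Toy taxonomy: each word maps to its direct hypernym (hand-coded example).
HYPERNYMS = {
    "sedan": "car",
    "car": "vehicle",
    "bicycle": "vehicle",
    "vehicle": "artifact",
}

def is_hypernym(general: str, specific: str) -> bool:
    """True if `general` appears anywhere on `specific`'s hypernym chain."""
    word = specific
    while word in HYPERNYMS:
        word = HYPERNYMS[word]
        if word == general:
            return True
    return False
```

In a lexical-chain setting, links of this kind let an utterance mentioning "sedan" be chained to an earlier mention of "vehicle", which is how a conversation's concepts can be tracked across speakers.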
2. Badryzlova, Yu. G. "EXPLORING SEMANTIC CONCRETENESS AND ABSTRACTNESS FOR METAPHOR IDENTIFICATION AND BEYOND." In International Conference on Computational Linguistics and Intellectual Technologies "Dialogue". Russian State University for the Humanities, 2020. http://dx.doi.org/10.28995/2075-7182-2020-19-33-47.
Abstract: The paper presents a method for computing indexes of semantic concreteness and abstractness in two languages (Russian and English). These indexes are used in metaphor identification experiments in both languages; the results either match or surpass previous work and the baselines. We analyze the obtained indexes of concreteness and abstractness to see how they align with linguistic intuitions about the corresponding semantic categories. The results of the analysis may have broader implications for computational studies of the semantics of concreteness and abstractness.
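One plausible reading of a concreteness index, averaging per-word ratings over a phrase, can be sketched as follows; the ratings, scale, and aggregation are assumptions for illustration and do not reproduce the paper's method.

```python
# Hypothetical per-word concreteness ratings on a 1 (abstract) .. 5 (concrete) scale.
CONCRETENESS = {"stone": 4.9, "table": 4.8, "idea": 1.6, "freedom": 1.4}

def concreteness_index(words, ratings=CONCRETENESS, default=3.0):
    """Mean rating over the words; unknown words fall back to a neutral default."""
    scores = [ratings.get(w.lower(), default) for w in words]
    return sum(scores) / len(scores) if scores else default
```

Such an index is useful for metaphor identification because metaphorical usage often pairs a concrete source-domain word with an abstract context (e.g. "grasp an idea"), so a large concreteness gap between a word and its context is a signal worth scoring.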
3. Shmelev, A. D. "LANGUAGE-SPECIFIC WORDS IN THE LIGHT OF TRANSLATION: THE RUSSIAN TOSKA." In International Conference on Computational Linguistics and Intellectual Technologies "Dialogue". Russian State University for the Humanities, 2020. http://dx.doi.org/10.28995/2075-7182-2020-19-658-669.
Abstract: This paper presents a semantic analysis of the most language-specific Russian word for 'sadness', namely toska. The analysis is based on the hypothesis that translation equivalents and paraphrases of a linguistic unit, extracted from real translated texts, can serve as a source of information about its semantics. The appearance of language-specific words in translated texts may be even more useful for studying their semantics. It turns out that тоска is not all that rare in Russian translated texts. The study of the incentives that lead Russian translators to use the word тоска often reveals important aspects of its semantics. Stimuli for the appearance of toska in translations into Russian vary greatly. In general, when the original describes some bad feelings, the word toska appears if the original speaks of a subject's unsatisfied desire, which may be vague and not well understood…
4. Rosenstein, Mark, Peter Foltz, Anja Vaskinn, and Brita Elvevåg. "Practical issues in developing semantic frameworks for the analysis of verbal fluency data: A Norwegian data case study." In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.3115/v1/w15-1215.
5. Goncharov, A. A., and O. Yu Inkova. "IMPLICIT LOGICAL-SEMANTIC RELATIONS AND A METHOD OF THEIR IDENTIFICATION IN PARALLEL TEXTS." In International Conference on Computational Linguistics and Intellectual Technologies "Dialogue". Russian State University for the Humanities, 2020. http://dx.doi.org/10.28995/2075-7182-2020-19-310-320.
Abstract: One of the main characteristics of logical-semantic relations (LSRs) between two fragments of a text is that these relations can be either explicit (expressed by some marker, e.g. a connective) or implicit (derived from the interrelation of the fragments' semantics). Since implicit LSRs have no marker, they are difficult to find in a text, whether automatically or not. In this paper, approaches to analysing implicit LSRs are compared, an original definition for them is offered, and differences between implicit LSRs and LSRs expressed by non-prototypical means are described. A method is proposed to identify implicit LSRs using a parallel corpus and a supracorpora database of connectives. Starting from the well-known observation that LSRs can be explicitated by adding connectives in translation, it is argued that by selecting pairs in which a fragment whose translation expresses an LSR with a connective corresponds to a source fragment containing any of the standard translation stimuli for that connective, one can obtain an array of contexts in which the LSR is implicit in the source text (or expressed by means other than connectives). The method is then applied to study the French causal connectives car, parce que and puisque using a Russian-French parallel corpus. The corpus data are analysed to obtain information about LSRs, particularly about cases where the causal LSR in Russian is implicit, as well as about the use of causal connectives in French. These results show that the proposed method makes it possible to quickly create a representative array of contexts with implicit LSRs, which can be useful both in text analysis and in machine learning.
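The selection step this abstract describes, keeping aligned pairs whose translation uses a causal connective while the source sentence carries none, can be sketched as a simple filter. The sentence pairs and the Russian marker list below are invented placeholders; only the three French connectives come from the abstract, and the naive substring matching is purely illustrative.

```python
# French causal connectives named in the abstract; the Russian list is a
# hypothetical set of source-side causal markers for illustration.
FR_CAUSAL = {"car", "parce que", "puisque"}
RU_CAUSAL = {"потому что", "так как", "поскольку", "ибо"}

def has_marker(sentence: str, markers) -> bool:
    # Naive substring matching; a real system would tokenize and disambiguate.
    s = sentence.lower()
    return any(m in s for m in markers)

def implicit_causal_pairs(aligned_pairs):
    """Keep (ru, fr) pairs where the French translation is explicitly causal
    but the Russian source carries no causal connective."""
    return [
        (ru, fr) for ru, fr in aligned_pairs
        if has_marker(fr, FR_CAUSAL) and not has_marker(ru, RU_CAUSAL)
    ]

pairs = [
    ("Он устал: весь день работал.", "Il est fatigué, car il a travaillé toute la journée."),
    ("Он устал, потому что работал.", "Il est fatigué, car il a travaillé."),
]
kept = implicit_causal_pairs(pairs)  # only the first pair survives the filter
```

The surviving pairs are exactly the contexts the abstract is after: the causal relation is there in the Russian source, but only the French translator made it explicit.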
6. Feinglass, Joshua, and Yezhou Yang. "SMURF: SeMantic and linguistic UndeRstanding Fusion for Caption Evaluation via Typicality Analysis." In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.acl-long.175.
7. Podlesskaya, V. I. ""A TOT PEROVSKOJ NE DAL VSLAST' POSPAT'": PROSODY AND GRAMMAR OF ANAPHORIC TOT THROUGH THE LENS OF CORPUS DATA." In International Conference on Computational Linguistics and Intellectual Technologies "Dialogue". Russian State University for the Humanities, 2020. http://dx.doi.org/10.28995/2075-7182-2020-19-628-643.
Abstract: Based on data from the Russian National Corpus and the General Internet-Corpus of Russian, the paper addresses syntactic, semantic and prosodic features of constructions with the demonstrative TOT used as an anaphor. These constructions have gained some attention in earlier studies [Paducheva 2016], [Berger, Weiss 1987], [Kibrik 2011], [Podlesskaya 2001], but their analysis (a) covered primarily their prototypical uses, and (b) was based on written data. Data from informal, especially spoken, discourse show, however, that the actual use of these constructions may deviate considerably from the known prototype. The paper aims at bridging this gap. I claim (i) that the function of TOT is to temporarily promote a referent from a less privileged discourse status to a more privileged one, and (ii) that TOT can be analyzed on a par with switch-reference devices in the languages where the latter are grammatically marked (e.g. on verb forms). The following parameters of TOT-constructions are discussed: syntactic and semantic roles of TOT and of its antecedent in their respective clauses, linear and structural distances between TOT and its antecedent, and animacy of the maintained referent. Special attention is paid to the information structure of the TOT construction: I give structural and prosodic evidence that TOT never has a rhematic status. The revealed actual distribution of TOT (a) adds to our understanding of cross-linguistic variation in the anaphoric functions of demonstratives and, hopefully, (b) may contribute to further developing computational approaches to coreference and anaphora resolution for Russian, e.g. by improving datasets necessary for this task.
8. Kim, Sanghee, Rob H. Bracewell, and Ken M. Wallace. "A Framework for Automatic Causality Extraction Using Semantic Similarity." In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-35193.
Abstract: Textual documents are the most common way of storing and distributing information within organizations. Extracting useful information from large text collections is therefore the goal of every organization that would like to take advantage of the experience encapsulated in those texts. Entering data in a free-text style is easy, as it requires no special training. However, unstructured texts pose a major challenge for automatic extraction and retrieval systems. Generally, deep levels of text analysis using advanced and complex linguistic processing are necessary, involving computational linguistics experts and domain experts. Linguistics experts are rare in engineering organizations, which therefore find it difficult to apply and exploit such advanced extraction techniques. It is thus desirable to minimize the extensive involvement of linguistics experts by learning extraction patterns automatically from example texts. In doing so, analysis of the given texts is necessary in order to identify the scope and suitable automatic methods. Focusing on causality reasoning in the field of fault diagnosis, the results of experimenting with an automatic causality extraction method using shallow linguistic processing are presented.
9. Wintner, Shuly. "Compositional semantics for linguistic formalisms." In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. Morristown, NJ, USA: Association for Computational Linguistics, 1999. http://dx.doi.org/10.3115/1034678.1034702.
10. Detkova, J., V. Novitskiy, M. Petrova, and V. Selegey. "DIFFERENTIAL SEMANTIC SKETCHES FOR RUSSIAN INTERNET-CORPORA." In International Conference on Computational Linguistics and Intellectual Technologies "Dialogue". Russian State University for the Humanities, 2020. http://dx.doi.org/10.28995/2075-7182-2020-19-211-227.
Abstract: The current paper suggests a new representation type for word collocations: the semantic sketches. It was first tested on one of the subcorpora of the General Internet-Corpus of Russian. The semantic sketches continue the idea of word sketches based on grammatical relations between words and expand it by adding semantic information: word meanings and semantic relations between words. Moreover, the sketches can additionally be provided with metatextual characteristics. Building such sketches demands semantic markup of the corpora; we have therefore used the partial semantic analysis of the Compreno parser for our purposes. The paper demonstrates examples of the sketches, provides a quality evaluation of the markup they are based on, and shows the advantages and disadvantages of the given approach.