Dissertations / Theses on the topic 'Traitement Automatique de la Langue Naturelle (TALN)'
Consult the top 50 dissertations / theses for your research on the topic 'Traitement Automatique de la Langue Naturelle (TALN).'
Ahmia, Oussama. "Veille stratégique assistée sur des bases de données d’appels d’offres par traitement automatique de la langue naturelle et fouille de textes." Thesis, Lorient, 2020. http://www.theses.fr/2020LORIS555.
This thesis, carried out under a CIFRE contract with the company OctopusMind, focuses on developing a set of automated tools dedicated to and optimized for processing call-for-tender databases, for the purpose of strategic intelligence monitoring. Our contribution is divided into three chapters. The first chapter concerns the development of a partially comparable multilingual corpus built from the European calls for tender published by TED (Tenders Electronic Daily); it contains more than 2 million documents translated into 24 languages, published over the last 9 years. The second chapter presents a study of word, sentence and document embeddings likely to capture semantic features at different scales. We propose two approaches: the first is based on a combination of a word embedding (word2vec) and latent semantic analysis (LSA); the second is based on a novel artificial neural network architecture using two-level convolutional attention mechanisms. These embedding methods are evaluated on text classification and clustering tasks. The third chapter concerns the extraction of semantic relationships in calls for tender, in particular linking buildings to areas, lots to budgets, and so on. The supervised approaches developed in this part of the thesis are essentially based on Conditional Random Fields. The end of the third chapter concerns the applied side, in particular the implementation of solutions deployed within OctopusMind's software environment, including information extraction, a recommender system, and the combination of these modules to solve more complex problems.
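As a rough illustration of the embedding combination described in Ahmia's abstract, here is a minimal Python sketch that concatenates an averaged word2vec representation with an LSA vector (truncated SVD over TF-IDF) for each document. The toy documents and all parameter values are invented; this is a sketch in the spirit of the approach, not the thesis's code.

```python
# Minimal sketch: combining word2vec and LSA document representations.
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "supply of office furniture for the municipality",
    "construction of a primary school building",
    "maintenance of road infrastructure and lighting",
]
tokenized = [d.split() for d in docs]

# Word-level component: average of word2vec vectors per document.
w2v = Word2Vec(tokenized, vector_size=50, min_count=1, epochs=20)
def avg_w2v(tokens):
    return np.mean([w2v.wv[t] for t in tokens if t in w2v.wv], axis=0)

# Document-level component: latent semantic analysis (SVD over TF-IDF).
tfidf = TfidfVectorizer().fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Combined representation: concatenation of both views.
combined = np.hstack([np.vstack([avg_w2v(t) for t in tokenized]), lsa])
print(combined.shape)  # (3, 52): 50 word2vec dims + 2 LSA dims
```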
Annouz, Hamid. "Traitement morphologique des unités linguistiques du kabyle à l’aide de logiciel NooJ : Construction d’une base de données." Thesis, Paris, INALCO, 2019. http://www.theses.fr/2019INAL0022.
This work introduces the Kabyle language to the field of Natural Language Processing by giving it a database for the NooJ software that allows the automatic recognition of linguistic units in a written corpus. We have divided the work into four parts. The first part gives a snapshot of the history of formal linguistics and presents the field of NLP, the NooJ software and the linguistic units that have been treated. The second part is devoted to the description of the process followed for the treatment and integration of Kabyle verbs in NooJ: we built a dictionary that contains 4,508 entries and 8,762 derived components, together with inflection models for each type, linked to each entry. In the third part, we explain the processing of nouns and other units. For the nouns, we built a dictionary (3,508 entries, 501 derived components) linked to inflection models, and a dictionary for the other units (870 entries, including adverbs, prepositions, conjunctions, interrogatives, personal pronouns, etc.). The second and third parts are completed by examples of application to a text; this procedure allowed us to show, through various sorts of annotations, the ambiguities involved. The last part is devoted to ambiguities: after having identified a list of various types of amalgams, we show, with the help of some examples of syntactic grammars, some of the tools used by NooJ for disambiguation.
Dziczkowski, Grzegorz. "Analyse des sentiments : système autonome d'exploration des opinions exprimées dans les critiques cinématographiques." Phd thesis, École Nationale Supérieure des Mines de Paris, 2008. http://tel.archives-ouvertes.fr/tel-00408754.
This thesis describes an autonomous system for exploring the opinions expressed in film reviews, covering three tasks:
- automatic retrieval of film reviews from the Internet;
- evaluation and rating of the opinions expressed in those reviews;
- publication of the results.
In order to improve the results of predictive algorithms, the objective of this system is to provide a support system for prediction engines that analyze user profiles. First, the system searches for and retrieves probable film reviews from the Internet, in particular those written by prolific reviewers.
The system then evaluates and rates the opinion expressed in each review so as to automatically associate a numerical score with it; this is the system's central objective.
The last step is to group the reviews (together with their scores) by the user who wrote them in order to create complete profiles, and to make these profiles available to prediction engines.
For the development of this system, the research work of this thesis focused essentially on sentiment rating; this work falls within the fields of Opinion Mining and Sentiment Analysis.
Our system uses three different methods for classifying opinions. We present two new methods: one based on linguistic knowledge and one based on the boundary between statistical and linguistic processing. The results obtained are then compared with the statistical method based on the Bayes classifier, which is widely used in this field.
It is then necessary to combine the results obtained, in order to make the final evaluation as precise as possible. For this task we used a fourth classifier based on neural networks.
Our sentiment rating, namely the rating of reviews, is carried out on a scale of 1 to 5. This rating requires a deeper linguistic analysis than the merely binary rating (positive or negative, possibly subjective or objective) that is usually used.
This thesis presents all the modules of the designed system globally, and the opinion-rating part in more detail. In particular, we highlight the advantages of deep linguistic analysis, which is less used in the field of sentiment analysis than statistical analysis.
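As a hedged illustration of the statistical baseline mentioned in Dziczkowski's abstract (a Bayes classifier rating reviews on a 1-to-5 scale), here is a minimal sketch; the tiny inline dataset is invented for the example.

```python
# Sketch of a Naive Bayes baseline for 1-5 review rating (illustrative data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = [
    ("a masterpiece, moving and beautifully shot", 5),
    ("solid acting but a predictable plot", 3),
    ("dull, overlong and poorly written", 1),
    ("great soundtrack, enjoyable from start to finish", 4),
    ("barely watchable, I walked out halfway", 1),
]
texts, ratings = zip(*reviews)

# Bag-of-words (with bigrams) feeding a multinomial Naive Bayes classifier.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(texts, ratings)

print(model.predict(["beautifully written and moving"]))  # likely a high rating
```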
Froissart, Christel. "Robustesse des interfaces homme-machine en langue naturelle." Grenoble 2, 1992. http://www.theses.fr/1992GRE29053.
Once having demonstrated that robustness is currently a crucial problem for systems based on a natural language man-machine interface, we evidence the extent of the problem through an analysis of research carried out in error processing. We thus define a deviation as any element which violates academic use of the language and/or the system's expectations at any level of analysis. We then show that a robust strategy must solve the double bind between tolerance (relaxing constraints) and the selection of the most plausible solution (constriction). We propose to identify deviations (either real or potential) which are not detected by the natural language understanding system by questioning the validity of the user's input as early as possible. We suggest a strategy based on additional knowledge that must be modeled in order to put in place predictive mechanisms controlling the robust processing, so as to direct suspicion towards the plausible deviation and its processing towards the most likely hypothesis. This body of knowledge is derived from: data provided by the very operation of the parser, thanks to a multi-agent structure; and external data (linguistic, cognitive, ergonomic) structured in five models constructed from a corpus of man-machine dialogue: the technological model, the field and application model, the language model (and its pitfalls), the dialogue model and the user's model.
Thollard, Franck. "Inférence grammaticale probabiliste pour l'apprentissage de la syntaxe en traitement de la langue naturelle." Saint-Etienne, 2000. http://www.theses.fr/2000STET4010.
Fredj, Mounia. "Saphir : un système d'objets inférentiels : contribution à l'étude des raisonnements en langue naturelle." Grenoble 2, 1993. http://www.theses.fr/1993GRE21010.
This work falls within the general framework of natural language processing. It especially addresses the problem of knowledge representation and the reasoning "carried" by natural language. The goal of the SAPHIR system is to construct the network of objects coming from the discourse. This construction is done by describing some of the reasonings taking place in the knowledge acquisition process, particularly those that allow resolving "associative anaphora". We define a knowledge representation model with a linguistic basis and cognitive elements. In order to support this model, we propose an object-oriented formalism whose theoretical foundations are Lesniewski's logical systems: ontology and mereology. The first system relies upon a primitive functor called "epsilon", meaning "is-a"; the second upon the "part-of" relation called "ingredience". These logical systems constitute a more appropriate theoretical foundation than the traditional predicate calculus.
Balicco, Laurence. "Génération de répliques en français dans une interface homme-machine en langue naturelle." Grenoble 2, 1993. http://www.theses.fr/1993GRE21025.
This research takes place in the context of natural language generation. This field was neglected for a long time because it seemed a much easier phase than that of analysis. The thesis corresponds to a first work on generation in the CRISS team and places the problem of generation in the context of a man-machine dialogue in natural language. Some of its consequences are: generation from a logical content to be translated into natural language, keeping this translation as close as possible to the original content, etc. After a study of existing work, we decided to create our own generation system, reusing, where possible, the tools elaborated during the analysis process. This generation process is based on a linguistic model which uses syntactic and morphological information and in which linguistic transformations called operations are defined (coordination, anaphorisation, thematisation, etc.). These operations can be given by the dialogue or calculated during the generation process. The model allows the creation of several versions of the same utterance and therefore better adaptation to different users. This thesis presents the studied works, essentially on the French and English languages, the linguistic model developed, the computing model used, and a brief presentation of a European project which offers a possible application of our work.
Hue, Jean-François. "L'analyse contextuelle des textes en langue naturelle : les systèmes de réécritures typées." Nantes, 1995. http://www.theses.fr/1995NANT2034.
Ponton, Claude. "Génération automatique de textes en langue naturelle : essai de définition d'un système noyau." Grenoble 3, 1996. http://www.theses.fr/1996GRE39030.
One of the common features of many generation systems is a strong dependence on the application. While a few attempts to define "non-dedicated" systems have been made, none of them takes into account the characteristics of the application (such as its formalism) and the communication context (application field, user, etc.). The purpose of this thesis is the definition of a generation system that is both non-dedicated and able to take these elements into account. Such a system is called a "kernel generation system". In this perspective, we studied 94 generation systems through objective, relevant criteria; this study serves as a basis for the continuation of our work. The definition of a kernel generator requires determining the frontier between the application and the kernel generator (generator tasks, inputs, outputs, data, etc.): it is necessary to be aware of the role of both parts and their communication channels before designing the kernel generator. As a result of this study, our generator accepts as input any formal content representation together with a set of constraints describing the communication context. The kernel generator then processes what is generally called the "how to say it?" and is able to produce every solution compatible with the input constraints. This definition part is followed by the realization of a first generator prototype, tested on two applications distinct in all respects (formalism, field, type of texts, etc.). Finally, this work opens onto some evolution perspectives for the generator, particularly concerning the knowledge representation formalism (cotopies d'objets) and the architecture (distributed architecture).
Palmer, Patrick. "Étude d'un analyseur de surface de la langue naturelle : application à l'indexation automatique de textes." S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00337917.
Nie, Shuling. "Enseignement de français à un public chinois constitué sur un modèle TAL implanté sur internet." Besançon, 2002. http://www.theses.fr/2002BESA1005.
Full textDerouault, Anne-Marie. "Modélisation d'une langue naturelle pour la désambiguation des chaînes phonétiques." Paris 7, 1985. http://www.theses.fr/1985PA077028.
Full textBouchaffra, Djamel. "Echantillonnage multivarié de textes pour les processus de Markov et introduction au raisonnement incertain dans le traitement de la langue naturelle." Grenoble 2, 1992. http://www.theses.fr/1992GRE21033.
This thesis aims to extract a sample of texts used as a training population for a Markov process; the model is applied to part-of-speech (POS) tagging. We adopted stratified sampling and built a piece of software called "MultivariateSampling" which extracts a sample from a stratified, ambiguous corpus while minimizing "the loss of information" in a certain sense. The results obtained are very satisfying, since we improved the number of parts of speech tagged correctly. The case of vague and uncertain variables is also treated: we evaluated the conditional probability that a sentence simultaneously satisfies a certain number of criteria, given their probabilities separately. This probability is not unique, whatever topologies are used; the Li isometries show that it is impossible to obtain a unique solution to this problem, since a unique solution constrains the "true" and "false" representations to be the same. It appeared that one has to distinguish the "true" associated with a logical formula from the "certain event" known in probability theory. Finally, we proposed a new Markov model capable of taking into account the context associated with a POS.
Al Haj Hasan, Issam. "Alimentation automatique d'une base de connaissances à partir de textes en langue naturelle." Clermont-Ferrand 2, 2008. http://www.theses.fr/2008CLF21879.
Tanguy, Ludovic. "Traitement automatique de la langue naturelle et interprétation : contribution à l'élaboration d'un modèle informatique de la sémantique interprétative." Rennes 1, 1997. http://www.theses.fr/1997REN10059.
Full textAmrani, Ahmed Charef Eddine. "Induction et visualisation interactive pour l'étiquetage morphosyntaxique des corpus de spécialité : application à la biologie moléculaire." Paris 11, 2005. http://www.theses.fr/2005PA112369.
Within the framework of a complete text-mining process, we were interested in the part-of-speech tagging of specialized corpora. Existing taggers are trained on general-language corpora and give inconsistent results on specialized texts. To solve this problem, we developed an interactive, convivial and inductive tagger named ETIQ. This tagger makes it possible for the expert to correct the tagging obtained by a general tagger and to adapt it to a specialized corpus. We complemented our approach in order to efficiently treat recurring part-of-speech tagging errors due to ambiguous words having different tags according to context. To this end, we used supervised learning to induce correction rules. In some cases, when the rules are too difficult for the domain expert to write, we let the expert annotate examples in a very simple way through the interface. In order to reduce the total number of examples to annotate, we used an active learning algorithm. The correction of difficult part-of-speech ambiguities is a significant stage in obtaining a 'perfectly' tagged specialized corpus. In order to resolve these ambiguities and thus decrease the number of tagging errors, we used an interactive and iterative approach we call Progressive Induction: a combination of machine learning, hand-crafted rules, and corrections manually engineered by the user. The proposed approach enabled us to obtain a "correctly" tagged molecular biology corpus, which we used to carry out a comparative study of several taggers.
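The active-learning step described in Amrani's abstract (choosing which examples the expert should annotate) is commonly implemented as uncertainty sampling; the following is a generic sketch with invented data, not the ETIQ code.

```python
# Uncertainty sampling sketch: pick the ambiguous tagging examples whose
# current prediction is least confident, and send those to the expert.
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_for_annotation(model, X_pool, k=5):
    """Return indices of the k pool examples with the most uncertain labels."""
    probs = model.predict_proba(X_pool)
    confidence = probs.max(axis=1)        # probability of the top class
    return np.argsort(confidence)[:k]     # least confident first

# Toy usage: in a real tagger, features would describe the context of an
# ambiguous word (previous tag, suffix, capitalization, etc.).
rng = np.random.default_rng(0)
X_labeled, y_labeled = rng.normal(size=(20, 4)), rng.integers(0, 2, 20)
X_pool = rng.normal(size=(100, 4))

clf = LogisticRegression().fit(X_labeled, y_labeled)
to_annotate = select_for_annotation(clf, X_pool)
print(to_annotate)  # the expert annotates these, then the model is retrained
```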
Makki, Jawad. "Peuplement semi-automatique d'ontologies basé sur le TALN : applications à une ontologie en management de risques." Toulouse 1, 2010. http://www.theses.fr/2010TOU10001.
This work falls within the Ontological Engineering framework and deals with issues related to ontology population, which consists of learning instances of concepts as well as relations by relying on information extraction. In this thesis, we propose a semi-automatic ontology population approach from natural language texts. This approach allows moving the knowledge found in the texts into the knowledge base associated with an ontology, as instances of concepts and relations. The approach combines NLP techniques (statistical, morphosyntactic and semantic) with automatic weighting for evaluating the extracted knowledge and providing decision support. The validation of our proposals is based on the realization of a prototype named OntoPRiMa.
Labadié, Alexandre. "Segmentation thématique de texte linéaire et non-supervisée : détection active et passive des frontières thématiques en Français." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2008. http://tel.archives-ouvertes.fr/tel-00364848.
Full textPalmer, Patrick. "Étude d'un analyseur de surface de la langue naturelle : application à l'indexation automatique de textes." Phd thesis, Grenoble 1, 1990. http://tel.archives-ouvertes.fr/tel-00337917.
Full textMoncla, Ludovic. "Automatic Reconstruction of Itineraries from Descriptive Texts." Thesis, Pau, 2015. http://www.theses.fr/2015PAUU3029/document.
This PhD thesis is part of the research project PERDIDO, which aims at extracting and retrieving displacements from textual documents. This work was conducted in collaboration with the LIUPPA laboratory of the University of Pau (France), the IAAA team of the University of Zaragoza (Spain) and the COGIT laboratory of IGN (France). The objective of this PhD is to propose a method for establishing a processing chain to support the geoparsing and geocoding of text documents describing events strongly linked with space. We propose an approach for the automatic geocoding of itineraries described in natural language. Our proposal is divided into two main tasks. The first task aims at identifying and extracting the information describing the itinerary in texts, such as spatial named entities and expressions of displacement or perception. The second task deals with the reconstruction of the itinerary. Our proposal combines local information extracted using natural language processing with physical features extracted from external geographical sources such as gazetteers or datasets providing digital elevation models. The geoparsing part is a natural language processing approach which combines the use of part-of-speech and syntactico-semantic combined patterns (cascades of transducers) for the annotation of spatial named entities and expressions of displacement or perception. The main contribution of the first task is toponym disambiguation, an important issue in Geographical Information Retrieval (GIR). We propose an unsupervised geocoding algorithm that takes advantage of clustering techniques to disambiguate the toponyms found in gazetteers, while at the same time estimating the spatial footprint of the fine-grain toponyms not found in gazetteers. We propose a generic graph-based model for the automatic reconstruction of itineraries from texts, where each vertex represents a location and each edge represents a path between locations. Our model is original in that, in addition to taking into account the classic elements (paths and waypoints), it allows representing the other elements describing an itinerary, such as features seen or mentioned as landmarks. To build this graph-based representation of the itinerary automatically, our approach computes an informed spanning tree on a weighted graph. Each edge of the initial graph is weighted using a multi-criteria analysis combining qualitative and quantitative criteria. Criteria are based on information extracted from the text and from geographical sources; for instance, we compare information given in the text, such as spatial relations describing orientation (e.g., going south), with the geographical coordinates of locations found in gazetteers. Finally, in accordance with the definition of an itinerary and the information used in natural language to describe itineraries, we propose a markup language for encoding spatial and motion information based on the Text Encoding Initiative (TEI) guidelines, which define a standard for the representation of texts in digital form. The rationale of the proposed approach has been verified with a set of experiments on a corpus of multilingual hiking descriptions (French, Spanish and Italian).
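The core of the itinerary reconstruction described in Moncla's abstract (an informed spanning tree over a weighted graph of candidate paths) can be sketched with networkx; the toponyms and edge weights below are invented stand-ins for the multi-criteria scores.

```python
# Sketch of the itinerary-reconstruction core: a minimum spanning tree over
# a graph whose edge weights come from a multi-criteria score (toy values).
import networkx as nx

G = nx.Graph()
# Vertices are toponyms; lower weights stand in for paths better supported
# by the text (spatial relations) and by gazetteer coordinates.
G.add_edge("Pau", "Col du Soulor", weight=0.4)
G.add_edge("Col du Soulor", "Arrens-Marsous", weight=0.2)
G.add_edge("Pau", "Arrens-Marsous", weight=0.9)
G.add_edge("Arrens-Marsous", "Lac d'Estaing", weight=0.3)

itinerary = nx.minimum_spanning_tree(G)
print(sorted(itinerary.edges(data="weight")))
```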
Lauly, Stanislas. "Exploration des réseaux de neurones à base d'autoencodeur dans le cadre de la modélisation des données textuelles." Thèse, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/9461.
Tromeur, Laurent. "Mise en place d'une interface en langue naturelle pour la plateforme Ontomantics." Paris 13, 2011. http://scbd-sto.univ-paris13.fr/secure/ederasme_th_2011_tromeur.pdf.
Full textEhrmann, Maud. "Les entités nommées, de la linguistique au TAL : statut théorique et méthodes de désambiguïsation." Paris 7, 2008. http://www.theses.fr/2008PA070095.
Introduced as part of the last Message Understanding Conferences dedicated to information extraction, named entity extraction is a well-studied task in Natural Language Processing. The recognition and categorization of person names, location names, organisation names, etc. is regarded as a fundamental process for a wide variety of natural language processing applications dealing with content analysis, and many research works are devoted to it, achieving very good results. Following this success, named entity treatment is moving towards new research prospects, among them disambiguation and fine-grained annotation. However, these new challenges make the question of named entity definition, little discussed until now, even more crucial. Two main lines were explored during this PhD project: first we tried to propose a definition of named entities, and then we experimented with disambiguation methods. After a presentation and a state of the art of the named entity recognition task, we examined, from a methodological point of view, how to tackle the question of the definition of named entities. Our approach led us to study, firstly, the linguistic side, with proper names and definite descriptions, and secondly the computational side, this development aiming, finally, at proposing a named entity definition that takes into account language aspects as well as the capacities and requirements of computer systems. The rest of the dissertation reports more experimental work, presenting experiments on fine-grained named entity annotation and on metonymy resolution methods.
El, Abed Walid. "Meta modèle sémantique et noyau informatique pour l'interrogation multilingue des bases de données en langue naturelle (théorie et application)." Besançon, 2001. http://www.theses.fr/2001BESA1014.
Francony, Jean Marc. "Modélisation du dialogue et représentation du contexte d'interaction dans une interface de dialogue multi-modes dont l'un des modes est dédié à la langue naturelle écrite." Grenoble 2, 1993. http://www.theses.fr/1993GRE21038.
Full textThe problems posed by the representation of the interaction context in the dialogue systeme of a multi-modal man-machine interface are art the origin of the aim of this thesis which is a study of a focusing mechanism which. The emphasis is on the anchoring of the focusing mechanism in the intervention surface. In the model we propose, anchorage is expressed at each mode level in terms of a thematic model similar to the one we proposed for natural language in this thesis. This thematic model is based on work by the prague school of formal linguistics whose hypotheses concerning the communicative function have been adopted. The thematic model allows for an utterance to translate its enunciated dynamism into a degree of activation on its knowledge representation. This model has been extended to discourse representation on the basis of a hypothesis concerning textual cohesion (which can be found for instance in anaphorical or elliptical inter-utterance relation). From this point of view, synergy of modes can be expressed as the fusion of representations of modal segments. In the focusing model, cohesion relations are considered as pipes propagating activation. This work is at the origin of the context management system implemented in the project mmi2 (esprit project 2474)
Cadilhac, Anaïs. "Preference extraction and reasoning in negotiation dialogues." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2168/.
Modelling user preferences is crucial in many real-life problems, ranging from individual and collective decision-making to strategic interactions between agents, for example. But handling preferences is not easy. Since agents don't come with their preferences transparently given in advance, we have only two means to determine what they are if we wish to exploit them in reasoning: we can infer them from what an agent says or from his nonlinguistic actions. Preference acquisition from nonlinguistic actions has been widely studied within the Artificial Intelligence community. However, to our knowledge, little work has so far investigated how preferences can be efficiently elicited from users using Natural Language Processing (NLP) techniques. In this work, we propose a new approach to extract and reason on preferences expressed in negotiation dialogues. After extracting the preferences expressed in each dialogue turn, we use the discourse structure to follow their evolution as the dialogue progresses. We use CP-nets, a model for the representation of preferences, to formalize and reason about these extracted preferences. The method is first evaluated on different negotiation corpora, for which we obtain promising results. We then apply the end-to-end method, together with principles from Game Theory, to predict trades in the win-lose game The Settlers of Catan. Our method shows good results, beating baselines that don't adequately track or reason about preferences. This work thus presents a new approach at the intersection of several research domains: Natural Language Processing (for the automatic preference extraction and the reasoning about their verbalisation), Artificial Intelligence (for the modelling of and reasoning about the extracted preferences) and Game Theory (for strategic action prediction in a bargaining game).
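To make the CP-net formalism mentioned in Cadilhac's abstract concrete, here is a toy encoding of conditional preference tables as Python dictionaries; the variables, loosely inspired by Settlers-style resource trades, are invented, and real CP-net reasoning is considerably richer.

```python
# Toy CP-net sketch: each variable has preference orders that may depend on
# the values of its parent variables (conditional preference tables).
cpnet = {
    "get_wheat": {"parents": (), "cpt": {(): ["yes", "no"]}},
    # Preference over giving wood depends on whether we obtain wheat.
    "give_wood": {
        "parents": ("get_wheat",),
        "cpt": {("yes",): ["yes", "no"],   # if we get wheat, prefer giving wood
                ("no",): ["no", "yes"]},   # otherwise, prefer keeping it
    },
}

def preferred_value(cpnet, var, assignment):
    """Most preferred value of var given an assignment of its parents."""
    node = cpnet[var]
    key = tuple(assignment[p] for p in node["parents"])
    return node["cpt"][key][0]

outcome = {"get_wheat": "yes"}
print(preferred_value(cpnet, "give_wood", outcome))  # -> "yes"
```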
Bertrand de Beuvron, François de. "Un système de programmation logique pour la création d'interfaces homme-machine en langue naturelle." Compiègne, 1992. http://www.theses.fr/1992COMPD545.
Pham, Thi Nhung. "Résolution des anaphores nominales pour la compréhension automatique des textes." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCD049/document.
In order to facilitate the interpretation of texts, this thesis is devoted to the development of a system that identifies and resolves indirect nominal anaphora and associative anaphora. Resolution of indirect nominal anaphora is based on calculating salience weights of candidate antecedents, with the purpose of associating these antecedents with the anaphoric expressions identified. It is processed by two different methods based on a linguistic approach: the first method uses lexical and morphological parameters; the second uses morphological and syntactical parameters. The resolution of associative anaphora is based on syntactical and semantic parameters. The results obtained are encouraging: 90.6% for indirect anaphora resolution with the first method, 75.7% with the second method, and 68.7% for associative anaphora resolution. These results show the contribution of each parameter used and the utility of this system in the automatic interpretation of texts.
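A hedged sketch of the salience computation described in Pham's abstract: each candidate antecedent accumulates the weights of the parameters it satisfies, and the highest-scoring candidate is linked to the anaphor. The parameter names and weights are invented placeholders.

```python
# Salience-scoring sketch for antecedent selection (illustrative weights).
WEIGHTS = {"recency": 2.0, "grammatical_subject": 1.5, "lexical_overlap": 1.0}

def salience(candidate):
    """Sum the weights of every parameter the candidate satisfies."""
    return sum(WEIGHTS[p] for p, holds in candidate["features"].items() if holds)

candidates = [
    {"np": "la maison", "features": {"recency": True, "grammatical_subject": False,
                                     "lexical_overlap": True}},
    {"np": "le jardin", "features": {"recency": False, "grammatical_subject": True,
                                     "lexical_overlap": False}},
]
antecedent = max(candidates, key=salience)
print(antecedent["np"])  # "la maison" (score 3.0 vs 1.5)
```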
Cousot, Kévin. "Inférences et explications dans les réseaux lexico-sémantiques." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS108.
Thanks to the democratization of new communication technologies, there is a growing quantity of textual resources, making Natural Language Processing (NLP) a discipline of crucial importance both scientifically and industrially. Easily available, these data offer unprecedented opportunities, and from opinion analysis to information retrieval and semantic text analysis, applications are many. However, this textual data cannot be easily exploited in its raw state, and in order to carry out such tasks, it seems essential to have resources describing semantic knowledge, particularly in the form of lexico-semantic networks such as that of the JeuxDeMots project. The constitution and maintenance of such resources nonetheless remain difficult operations, due to their large size but also because of problems of polysemy and semantic identification. Moreover, their use can be tricky, because a significant part of the necessary information is not directly accessible in the resource but must be inferred from the data of the lexico-semantic network. Our work seeks to demonstrate that lexico-semantic networks are, by their networked nature, much more than a collection of raw facts, and that more complex structures such as interpretation paths carry more information and allow multiple inference operations. In particular, we show how to use a knowledge base to provide explanations for high-level facts; these explanations make it possible, at the least, to validate and memorize new information. In doing so, we can assess the coverage and relevance of the database and consolidate it. Similarly, the search for paths is useful for classification and disambiguation problems, as paths are justifications for the computed results. In the context of named entity recognition, they also make it possible to type entities and disambiguate them (is an occurrence of the term Paris a reference to the city, and which one, or to a starlet?) by highlighting the density of connections between ambiguous entities, their context and their possible types. Finally, we propose to turn the large size of the JeuxDeMots network to our advantage, enriching the database with new facts derived from a large number of comparable examples and from an abduction process over the types of semantic relationships that can connect two given terms. Each inference is accompanied by explanations that can be validated or invalidated, thus providing a learning process.
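The interpretation paths that Cousot's abstract uses as explanations can be sketched as a breadth-first search over a small typed-relation graph; the mini-network below is a made-up fragment in the spirit of JeuxDeMots, not actual data.

```python
# Path-as-explanation sketch over a typed lexical-semantic network (toy data).
from collections import deque

# (source, relation, target) triples, loosely JeuxDeMots-style.
triples = [
    ("pigeon", "is_a", "bird"),
    ("bird", "has_part", "wing"),
    ("wing", "used_for", "flying"),
    ("pigeon", "location", "city"),
]
graph = {}
for s, r, t in triples:
    graph.setdefault(s, []).append((r, t))

def explain(start, goal):
    """Return one relation path from start to goal, usable as an explanation."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None

print(explain("pigeon", "flying"))
# [('pigeon', 'is_a', 'bird'), ('bird', 'has_part', 'wing'),
#  ('wing', 'used_for', 'flying')]
```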
Lopez, Cédric. "Titrage automatique de documents textuels." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20071/document.
During the first millennium BC, already existing libraries needed to organize the preservation of texts and were thus immediately confronted with the difficulties of indexation. The use of a title then appeared as a first solution, enabling quick identification of every work and, in most cases, helping to discern works thematically close to a given one. While in Ancient Greece titles had little informative function, though still performing an identification function, the invention of the printing press with movable type (Gutenberg, 15th century AD) dramatically increased the number of documents, which are today distributed on a large scale. The title acquired new functions little by little, very often leaning towards sociocultural or political influence (in particular in journalistic articles). Today, for both electronic and paper documents, the presence of one or several titles is very common, helping to create a first link between the reader and the subject of the document. But how can a few words have so great an influence? What functions do titles have to perform at the beginning of the 21st century? How can one automatically generate titles that respect these functions? The automatic titling of textual documents is one of the key aspects of Web page accessibility (W3C standards) as defined in standards issued by associations for the disabled. For a reader, the goal is to increase the readability of pages obtained from a search, since usual searches often dishearten readers, who must supply considerable cognitive effort. For a website designer, the aim is to improve the indexation of pages for a more relevant search. Other interests motivate this study (titling of commercial Web pages, titling in order to automatically generate content, titling to provide elements that enhance automatic summarization). In this study, we use NLP (Natural Language Processing) methods and systems. While numerous works have been published about indexation and automatic summarization, automatic titling has remained discreet and has had some difficulty positioning itself within NLP. We argue in this study that automatic titling must nevertheless be considered a task in its own right. Having defined the problems connected with automatic titling, and having positioned this task among already existing tasks, we provide a series of methods enabling the production of syntactically correct titles according to several objectives. In particular, we are interested in the generation of informative titles, and, for the first time in the history of automatic titling, we introduce the concept of catchiness. Our TIT' system consists of three methods (POSTIT, NOMIT, and CATIT) and produces sets of informative titles in 81% of cases and catchy titles in 78% of cases.
Duran, Maximiliano. "Dictionnaire électronique français-quechua des verbes pour le TAL." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCC006/document.
The automatic processing of the Quechua language lacks an electronic dictionary of French-Quechua verbs, yet any NLP project requires this important linguistic resource. The present thesis proposes such a dictionary. The realization of such a resource could also open new perspectives in different domains, such as multilingual access to information and distance learning, in the areas of annotation/indexing of documents and spelling correction, and eventually in machine translation. The first challenge was the choice of the French dictionary to be used as our basic reference. Among the numerous French dictionaries, very few are available in an electronic format, and even fewer may be used as open source. Among the latter, we found the dictionary Les verbes français (LVF) of Jean Dubois and Françoise Dubois-Charlier, published by Larousse in 1997. It is a remarkably complete dictionary, containing 25,610 verbal senses, with an open-source license, and it is entirely compatible with the NooJ platform. That is why we chose this dictionary as the one to translate into Quechua. However, this task faces a considerable obstacle: the Quechua lexicon of simple verbs contains around 1,500 entries. How to match 25,610 French verbal senses with only 1,500 Quechua verbs? Are we condemned to produce many polysemies? For example, LVF has 27 verbal senses for the verb "tourner" (to turn); should we translate them all by the Quechua verb muyuy (to turn)? Or can we make use of a particular and remarkable strategy of Quechua that may allow us to face this challenge: the generation of new verbs by suffix derivation? As a first step, we inventoried all the Quechua suffixes that make it possible to obtain a derived verbal form which behaves as if it were a simple verb. This set of suffixes, which we call IPS_DRV, contains 27 elements. Thus each Quechua verb, transitive or intransitive, gives rise to at least 27 derived verbs. Next, we formalized the paradigms and grammars that allow us to obtain derivations compatible with the morphology of the language; this was done with the help of the NooJ platform. The application of these grammars allowed us to obtain 40,500 conjugable atomic linguistic units (CALU) out of 1,500 simple Quechua verbs. This encouraging first result allows us to hope for a favorable outcome to our project of translating the 25,000 verbal senses of French into Quechua. At this point, a new difficulty appears: the translation into French of this enormous quantity of generated conjugable verbal forms, a step that is essential if we want to obtain the translation of a large part of the twenty-five thousand French verbs into Quechua. In order to obtain the translation of these CALUs, we first needed to know the modalities of enunciation that each IPS has and transmits to the verbal radical when agglutinated to it. Each suffix can have several modalities of enunciation. We obtained an inventory of them from the corpus, our own experience and some recordings obtained in fieldwork, and constructed an indexed table containing all of these modalities. Next, we used NooJ operators to program grammars that perform automatic translation into a glossed form of the enunciation modalities. Finally, we developed an algorithm that allowed us to obtain the reciprocal translation from French to Quechua of more than 8,500 verbal senses of level 3 and a number of verbal senses of levels 4 and 5.
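The derivation mechanism described in Duran's abstract (a verb root combining with derivational suffixes before the infinitive mark -y) can be sketched as follows; the suffix inventory here is a tiny illustrative subset with approximate glosses, not the thesis's 27-element IPS_DRV set.

```python
# Sketch of suffix-based verb derivation in the spirit of the IPS_DRV idea.
# Tiny illustrative subset of derivational suffixes (approximate glosses).
IPS_DRV_SAMPLE = {
    "chi": "causative (make someone V)",
    "ku": "reflexive (V oneself)",
    "mu": "translocative (go and V)",
    "naku": "reciprocal (V each other)",
}

def derive(verb_infinitive):
    """Generate derived infinitives from a Quechua verb ending in -y."""
    root = verb_infinitive[:-1]            # strip the infinitive mark -y
    return {root + suffix + "y": gloss
            for suffix, gloss in IPS_DRV_SAMPLE.items()}

for form, gloss in derive("muyuy").items():   # muyuy: "to turn"
    print(form, "-", gloss)
# muyuchiy - causative (make someone V), muyukuy - reflexive ..., etc.
```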
Bourcier, Frédéric. "Représentation des connaissances pour la résolution de problèmes et la génération d'explications en langue naturelle : contribution au projet AIDE." Compiègne, 1996. http://www.theses.fr/1996COMPD903.
Full textServan, Christophe. "Apprentissage automatique et compréhension dans le cadre d'un dialogue homme-machine téléphonique à initiative mixte." Phd thesis, Université d'Avignon, 2008. http://tel.archives-ouvertes.fr/tel-00591997.
Popesco, Liana. "Analyse et génération de textes à partir d'un seul ensemble de connaissances pour chaque langue naturelle et de meta-règles de structuration." Paris 6, 1986. http://www.theses.fr/1986PA066138.
Michalon, Olivier. "Modèles statistiques pour la prédiction de cadres sémantiques." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0221/document.
In natural language processing, each new analysis step has improved the way in which language can be modeled by machines. One analysis step that remains poorly mastered is semantic parsing. This type of analysis could provide information that would allow for many advances, such as better human-machine interaction or more reliable translations. There exist several types of meaning representation structures, such as PropBank, AMR and FrameNet. FrameNet corresponds to the frame-semantics framework whose theory was described by Charles Fillmore (1971). In this theory, each prototypical situation, and the different elements involved in it, are represented in such a way that two similar situations are represented by the same object, called a semantic frame. The work described here follows previous work on the automatic prediction of frame-semantic representations. We present four prediction systems, each of which allowed us to validate a hypothesis about the properties necessary for effective prediction. We also show that semantic parsing can be improved by providing the prediction models with refined information as input: on the one hand, a syntactic analysis in which deep links are made explicit, and on the other hand, vector representations of the vocabulary learned beforehand.
Torres Moreno, Juan-Manuel. "Du textuel au numérique : analyse et classification automatiques." Habilitation à diriger des recherches, Université d'Avignon, 2007. http://tel.archives-ouvertes.fr/tel-00390068.
A personal taste for machine learning methods steered me towards their use in Natural Language Processing. I will leave aside the psycholinguistic aspects of human language understanding and focus solely on modeling its processing as an input-output system. The linguistic approach has limitations for deciding this membership, and in general for facing three characteristics of human languages: ambiguity.
I think the linguistic approach is not entirely appropriate for dealing with problems tied to an underlying phenomenon of human languages: uncertainty. Uncertainty also affects the technological achievements derived from NLP: a speech recognition system, for example, must cope with the multiple choices generated by an input. Strange, badly written sentences or sentences with poor syntax do not pose an insurmountable problem to a human, because people are able to choose the interpretation of sentences according to their current usage. The probabilistic approach copes with uncertainty by positing a language model as a probability distribution. It makes it possible to divide a language model into several layers: morphology, syntax, semantics and so on. Throughout this dissertation, I have tried to show that numerical methods perform well, using a pragmatic yardstick: national and international evaluation campaigns. And at least in the campaigns within my knowledge, the performance of numerical methods surpasses that of linguistic methods. When it comes to processing large masses of documents, fine-grained linguistic analysis is quickly overwhelmed by the quantity of texts to process. One sees articles and studies devoted to "Jean aime Marie" and just as many to "Marie aime Jean" or "Marie est aimée par Jean". I discovered throughout my work, in particular the work devoted to automatic summarization and query refinement, that a hybrid system combining numerical approaches at the base and linguistic analysis at the top gives better performance than either approach taken in isolation.
In the introduction I asked whether linguistics could still play a role in natural language processing. Moreover, the bag-of-words model is an exaggerated simplification that neglects sentence structure, which implies a significant loss of information. I therefore reformulate the two previous questions as follows: can linguistic approaches and numerical methods form a partnership in NLP tasks? This opens an interesting avenue for the research I intend to undertake: the design of hybrid NLP systems, notably for automatic text generation and sentence compression.
One can hardly expect to break through the ceiling that numerical methods run into without calling on the finesse of linguistic approaches, but also without neglecting to validate and test them on corpora.
Molina Villegas, Alejandro. "Compression automatique de phrases : une étude vers la génération de résumés." Phd thesis, Université d'Avignon, 2013. http://tel.archives-ouvertes.fr/tel-00998924.
Manishina, Elena. "Data-driven natural language generation using statistical machine translation and discriminative learning." Thesis, Avignon, 2016. http://www.theses.fr/2016AVIG0209/document.
Humanity has long been passionate about creating intellectual machines that can freely communicate with us in our language. Most modern systems communicating directly with the user share one common feature: they have a dialog system (DS) at their base. As of today, almost all DS components have embraced statistical methods and widely use them as their core models. Until recently, the Natural Language Generation (NLG) component of a dialog system used primarily hand-coded generation templates, which represented model phrases in a natural language mapped to a particular semantic content. Today, data-driven models are making their way into the NLG domain. In this thesis, we follow along this new line of research and present several novel data-driven approaches to natural language generation. In our work we focus on two important aspects of NLG system development: building an efficient generator and diversifying its output. Two key ideas that we defend here are the following: first, the task of NLG can be regarded as translation between a natural language and a formal meaning representation, and can therefore be performed using statistical machine translation techniques; second, corpus extension and diversification, which traditionally involved manual paraphrasing and rule crafting, can be performed automatically using well-known and widely used synonym and paraphrase extraction methods. Concerning our first idea, we investigate the possibility of using an n-gram translation framework and explore the potential of discriminative learning, notably Conditional Random Fields (CRF) models, as applied to NLG; we build a generation pipeline which allows for the inclusion and combination of different generation models (n-gram and CRF) and which uses an efficient decoding framework (finite-state transducers' best-path search). Regarding the second objective, namely corpus extension, we propose to enlarge the system's vocabulary and the set of available syntactic structures by integrating automatically obtained synonyms and paraphrases into the training corpus. To our knowledge, there have been no attempts to increase the size of the system vocabulary by incorporating synonyms. To date, most studies on corpus extension have focused on paraphrasing and resorted to crowd-sourcing in order to obtain paraphrases, which then required additional manual validation, often performed by system developers. We prove that automatic corpus extension by means of paraphrase extraction and validation is just as effective as crowd-sourcing, while being less costly in terms of development time and resources. During intermediate experiments our generation models showed significantly better performance than the phrase-based baseline model and appeared to be more robust in handling unknown combinations of concepts than the current in-house rule-based generator. The final human evaluation confirmed that our data-driven NLG models are a viable alternative to rule-based generators.
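The corpus-extension idea in Manishina's abstract (automatically injecting synonyms into training utterances) can be sketched with WordNet via NLTK; this generic illustration is not the system's actual extraction-and-validation pipeline, and the sample utterance is invented.

```python
# Sketch of synonym-based corpus extension for NLG training data.
# Requires: nltk.download("wordnet") beforehand.
from nltk.corpus import wordnet as wn

def synonym_variants(sentence, target):
    """Yield copies of the sentence with the target word replaced by synonyms."""
    seen = {target}
    for synset in wn.synsets(target):
        for lemma in synset.lemmas():
            word = lemma.name().replace("_", " ")
            if word not in seen:
                seen.add(word)
                yield sentence.replace(target, word)

for variant in synonym_variants("the hotel is cheap and close to the center",
                                "cheap"):
    print(variant)
# e.g. "the hotel is inexpensive and close to the center", ...
```

In a real pipeline, each candidate variant would still pass an automatic validation step before entering the training corpus, as the abstract emphasizes.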
Mars, Mourad. "Analyse morphologique robuste de l'arabe et applications pédagogiques." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENL046.
The author did not provide an English abstract.
Wolfarth, Claire. "Apport du TAL à l’exploitation linguistique d’un corpus scolaire longitudinal." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAL025.
In recent years, there has been a real effort to constitute and promote corpora of children's writings, especially in French. The first research works on writing acquisition relied on small corpora that were not widely distributed. Longitudinal corpora, monitoring a cohort of children's productions under similar collection conditions from one year to the next, do not yet exist in French. Moreover, although natural language processing (NLP) has provided tools for a wide variety of corpora, few studies have been conducted on corpora of children's writings. This new scope represents a challenge for the NLP field because of the specificities of children's writing, particularly its deviation from the written norm; the tools currently available are not suitable for the exploitation of these corpora. There is therefore a challenge for NLP to develop methods specific to these written productions. This thesis provides two main contributions. On the one hand, it has led to the creation of a large, digitized longitudinal corpus of children's writings (from 6 to 11 years old) named the Scoledit corpus; its constitution involved the collection, digitization and transcription of productions, the annotation of linguistic data, and the dissemination of the resource thus constituted. On the other hand, this work develops a method for exploiting this corpus, called the comparison approach, which is based on the comparison between the transcription of children's productions and their standardized version. To create a first level of alignment, this method compares transcribed forms to their normalized counterparts using the aligner AliScol; it also makes possible the exploration of various linguistic analyses (lexical, morphographic, graphical). Finally, in order to analyse graphemes, an aligner of transcribed and normalized graphemes, called AliScol_Graph, was created.
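The form-level alignment underlying Wolfarth's comparison approach can be approximated with Python's difflib; this is a simplified stand-in for AliScol, with an invented pupil sentence.

```python
# Simplified sketch of transcription/norm alignment (stand-in for AliScol).
from difflib import SequenceMatcher

transcribed = "le chien manje sa soupe".split()   # invented pupil production
normalized  = "le chien mange sa soupe".split()   # standardized version

matcher = SequenceMatcher(a=transcribed, b=normalized)
for op, i1, i2, j1, j2 in matcher.get_opcodes():
    # 'equal' spans are forms matching the norm; 'replace' spans are deviations.
    print(op, transcribed[i1:i2], "->", normalized[j1:j2])
# equal ['le', 'chien'] -> ['le', 'chien']
# replace ['manje'] -> ['mange']
# equal ['sa', 'soupe'] -> ['sa', 'soupe']
```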
Gleize, Martin. "Textual Inference for Machine Comprehension." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS004/document.
With the ever-growing mass of published text, natural language understanding stands as one of the most sought-after goals of artificial intelligence. In natural language, not every fact expressed in a text is explicit: human readers naturally infer what is missing through various intuitive linguistic skills, common sense or domain-specific knowledge, and life experience. Natural Language Processing (NLP) systems do not have these initial capabilities; unable to draw the inferences needed to fill the gaps in a text, they cannot truly understand it. This dissertation focuses on this problem and presents our work on the automatic resolution of textual inferences in the context of machine reading. A textual inference is simply defined as a relation between two fragments of text: a human reading the first can reasonably infer that the second is true. Many different NLP tasks more or less directly evaluate systems on their ability to recognize textual inference. Among this multiplicity of evaluation frameworks, inferences themselves are not one and the same and present a wide variety of types. We reflect on inferences for NLP from a theoretical standpoint and present two contributions addressing these levels of diversity: an abstract contextualized inference task encompassing most NLP inference-related tasks, and a novel hierarchical taxonomy of textual inferences based on their difficulty. Automatically recognizing textual inference currently almost always involves a machine learning model, trained to use various linguistic features on a labeled dataset of textual inference samples. However, specific data on complex inference phenomena is not currently abundant enough for systems to directly learn world knowledge and commonsense reasoning. Instead, systems focus on learning how to use the syntactic structure of sentences to align the words of two semantically related sentences. To extend what systems know of the world, they include external background knowledge, often improving their results; but this addition is usually made on top of other features, and is rarely well integrated with sentence structure. The main contributions of our thesis address this concern, with the aim of solving complex natural language understanding tasks. Under the hypothesis that a simpler lexicon should make it easier to compare the sense of two sentences, we present a passage retrieval method using structured lexical expansion backed up by a simplifying dictionary. This simplification hypothesis is tested again in a contribution on textual entailment: syntactic paraphrases are extracted from the same dictionary and repeatedly applied to the first sentence to turn it into the second. We then present a kernel-based machine learning method for recognizing sentence rewritings, with a notion of types able to encode lexical-semantic knowledge. This approach is effective on three tasks: paraphrase identification, textual entailment and question answering. We address its lack of scalability, while keeping most of its strengths, in our last contribution. Reading comprehension tests are used for evaluation: these multiple-choice questions on short texts constitute the most practical way to assess textual inference within a complete context. Our system is founded on an efficient tree edit algorithm, and the features extracted from edit sequences are used to build two classifiers for the validation and invalidation of answer candidates. This approach reached second place in the "Entrance Exams" task at CLEF 2015.
Ramadier, Lionel. "Indexation et apprentissage de termes et de relations à partir de comptes rendus de radiologie." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT298/document.
In the medical field, the computerization of the health professions and the development of the personal medical record (DMP) have resulted in a fast increase in the volume of digital medical information. The need to convert and manipulate all this information in a structured form is a major challenge, and the starting point for the development of appropriate tools, for which methods from natural language processing (NLP) seem well suited. The work of this thesis lies within the field of medical document analysis and addresses the issue of representing biomedical information (especially in the radiology area) and accessing it. We propose to build a knowledge base dedicated to radiology within a general knowledge base (the lexical-semantic network JeuxDeMots), and we show the interest of the hypothesis of no separation between different types of knowledge through document analysis. This hypothesis is that the use of general knowledge, in addition to specialized knowledge, significantly improves the analysis of medical documents. At the level of the lexical-semantic network, the manual and automated addition of meta-information on annotations (frequency, relevance, etc.) is particularly useful. This network combines weights and annotations on typed relationships between terms and concepts, as well as an inference mechanism whose aim is to improve the quality and coverage of the network. We describe how, from the semantic information in the network, it is possible to define an augmentation of the raw index built for each record in order to improve information retrieval. We then present a method for extracting semantic relationships between terms or concepts; this extraction is performed using lexical patterns to which we added semantic constraints. The results show that the hypothesis of no separation between different types of knowledge improves the relevance of indexing: the index augmentation results in improved recall, while the semantic constraints improve the precision of relation extraction.
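The index augmentation described in Ramadier's abstract (expanding a record's raw index with semantically related terms from the network) can be sketched as a thresholded expansion step; the mini-network, weights and threshold are all illustrative.

```python
# Sketch of semantic index augmentation over a weighted relation network.
# Toy network: term -> [(related_term, relation, weight)], invented values.
network = {
    "fracture": [("broken bone", "synonym", 0.9), ("trauma", "is_a", 0.6)],
    "femur": [("thigh bone", "synonym", 0.9), ("leg", "part_of", 0.7)],
}

def augment_index(raw_index, threshold=0.65):
    """Add related terms whose relation weight passes the threshold."""
    augmented = dict.fromkeys(raw_index, 1.0)       # original terms, full weight
    for term in raw_index:
        for related, _rel, weight in network.get(term, []):
            if weight >= threshold and related not in augmented:
                augmented[related] = weight          # expansion terms, damped
    return augmented

print(augment_index(["fracture", "femur"]))
# {'fracture': 1.0, 'femur': 1.0, 'broken bone': 0.9, 'thigh bone': 0.9, 'leg': 0.7}
```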
Marzinotto, Gabriel. "Semantic frame based analysis using machine learning techniques : improving the cross-domain generalization of semantic parsers." Electronic Thesis or Diss., Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0483.
Full textMaking semantic parsers robust to lexical and stylistic variations is a real challenge with many industrial applications. Nowadays, semantic parsing requires domain-specific training corpora to ensure acceptable performance on a given domain. Transfer learning techniques are widely studied and adopted to address this lack of robustness, and the most common strategy is the use of pre-trained word representations. However, the best parsers still show significant performance degradation under domain shift, evidencing the need for supplementary transfer learning strategies to achieve robustness. This work proposes a new benchmark to study the domain dependence problem in semantic parsing. We use this benchmark to evaluate classical transfer learning techniques and to propose and evaluate new techniques based on adversarial learning. All these techniques are tested on state-of-the-art semantic parsers. We claim that adversarial learning approaches can improve the generalization capacities of models. We test this hypothesis on different semantic representation schemes, languages and corpora, providing experimental results to support our hypothesis
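The abstract does not detail its adversarial techniques, but a common strategy in this line of work is domain-adversarial training with a gradient reversal layer (DANN-style). The PyTorch sketch below shows that generic mechanism only; the dimensions, heads and labels are assumptions, and the thesis's actual architectures may differ.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reversed, scaled gradient backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU())
task_head = nn.Linear(128, 20)   # e.g. semantic frame labels (assumed size)
domain_head = nn.Linear(128, 2)  # source vs. target domain

x = torch.randn(8, 300)          # a batch of (assumed) sentence encodings
h = encoder(x)
task_logits = task_head(h)
domain_logits = domain_head(GradReverse.apply(h, 1.0))
# Training minimises task loss plus domain loss; the reversed gradient
# pushes the encoder towards domain-invariant features.
```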
Bendaoud, Rokia. "Analyses formelle et relationnelle de concepts pour la construction d'ontologies de domaines à partir de ressources textuelles hétérogènes." Phd thesis, Université Henri Poincaré - Nancy I, 2009. http://tel.archives-ouvertes.fr/tel-00420109.
Full textPractical experiments were carried out in two application domains: astronomy and microbiology.
Sastre, Martinez Javier Miguel. "Efficient finite-state algorithms for the application of local grammars." Thesis, Paris Est, 2011. http://www.theses.fr/2011PEST1047/document.
Full textThis work focuses on the research and development of efficient algorithms for the application of local grammars, taking as reference those of the existing open-source systems: Unitex's top-down parser and Outilex's Earley-like parser. Local grammars are a finite-state formalism for the representation of natural language grammars. Moreover, local grammars are a model for the construction of large-scale, accurate descriptions of the syntax of natural languages by means of systematic observation and methodical accumulation of data. The adequacy of local grammars for this task has been demonstrated by numerous works. Due to the ambiguous nature of natural languages, and the particular properties of local grammars, classic parsing algorithms such as the LR, CYK and Tomita algorithms cannot be used in the context of this work. Top-down and Earley parsers are possible alternatives, though they have an exponential worst-case cost for local grammars. We first conceived an algorithm for the application of local grammars with a polynomial worst-case cost. Furthermore, we conceived other optimizations which increase the efficiency of the algorithm in the general case, namely the efficient management of sets of elements and sequences. We implemented our algorithm and those of the Unitex and Outilex systems with the same tools in order to test them under the same conditions. Moreover, we implemented different versions of each algorithm, using either our custom set data structures or those included in GNU's implementation of the C++ Standard Template Library (STL). We compared the performance of the different algorithms and algorithm versions in the context of an industrial natural language application provided by the company Telefónica I+D: extending the understanding capabilities of a chatterbot that provides mobile services, such as sending SMSs to mobile phones, as well as games and other digital content. Conversation with the chatterbot is held in Spanish via Microsoft's Windows Live Messenger. In spite of the limited domain and the simplicity of the applied grammars, the execution times of our parsing algorithm, coupled with our custom implementation of sets, were the lowest. Thanks to the improved asymptotic cost of our algorithm, execution times for complex and large-coverage grammars can be expected to be considerably lower than those of the Unitex and Outilex algorithms
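A toy illustration of why the management of sets matters when applying local grammars: keeping the active parse states at each input position in a set prevents duplicate items from multiplying. The grammar, states and input below are invented, and the sketch ignores the RTN-style subgraph calls and outputs that real local grammars involve.

```python
# Breadth-first application of a tiny finite-state grammar, where a *set*
# of items per input position avoids re-exploring duplicated parse states
# (the kind of set management the thesis optimises).
TRANSITIONS = {
    (0, "in"): {1},
    (1, "new"): {1},    # adjectives may loop on state 1
    (1, "york"): {2},
    (1, "delhi"): {2},
}
FINAL = {2}

def match_spans(tokens):
    """Return (start, end) token spans recognised by the grammar."""
    spans = []
    for start in range(len(tokens)):
        active = {0}  # set of active states: no duplicates possible
        for end in range(start, len(tokens)):
            active = {q2 for q in active
                         for q2 in TRANSITIONS.get((q, tokens[end]), ())}
            if not active:
                break
            if active & FINAL:
                spans.append((start, end + 1))
    return spans

print(match_spans("born in new york city".split()))  # [(1, 4)]
```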
Cossu, Jean-Valère. "Analyse de l’image de marque sur le Web 2.0." Thesis, Avignon, 2015. http://www.theses.fr/2015AVIG0207/document.
Full textAnalysis of entity representations on the Web 2.0. Every day, millions of people publish their views on the Web 2.0 (social networks, blogs, etc.). These comments focus on subjects as diverse as news, politics, sports scores, consumer goods, etc. The accumulation and agglomeration of these opinions about an entity (be it a product, a company or a public entity) give birth to the brand image of that entity. The Internet has become in recent years a privileged place for the emergence and dissemination of opinions, putting the Web 2.0 at the forefront of opinion observatories, the latter being a means of accessing the opinions of the world's population. The image is understood here as the idea that a person or a group of people has of an entity. This idea bears a priori on a particular subject and is only valid in context for a given time. This perceived image differs from the one the entity initially wanted to project (e.g. via a communication campaign). Moreover, in reality, several images end up living together in parallel on the network, each specific to a community, and all evolve differently over time (imagine how two politicians from opposite sides would be perceived in each camp). Finally, there is the controversy caused by the deliberate behavior of some entities seeking to attract attention (think of provocative or shocking declarations). It also happens that the dissemination of an image goes beyond the framework that governed it and sometimes turns against the entity (for example, "marriage for all" became "the demonstration for all"). The views expressed are then so many clues to understanding the logic of construction and evolution of these images. The aim is to be able to know what is being talked about and how, with, implicitly, the possibility of knowing who is speaking. In this thesis we propose to use several simple supervised statistical automatic methods to monitor an entity's online reputation based on the textual contents mentioning it. More precisely, we look at the most important contents and their authors (from a reputation manager's point of view). We introduce an optimization process allowing us to enrich the data using a simulated relevance feedback (without any human involvement). We also compare content contextualization methods using information retrieval and automatic summarization methods. We also propose a reflection and a new approach to model online reputation, and to improve and evaluate reputation monitoring methods, using Partial Least Squares Path Modelling (PLS-PM). In designing the system, we wanted to address the local and global context of the reputation, that is to say, the features that can explain the decision and the correlation between topics and reputation. The goal of our work was to propose a different way to combine usual methods and features that may render reputation monitoring systems more accurate than existing ones. We evaluate and compare our systems using state-of-the-art frameworks: Imagiweb and RepLab. The performance of our proposals is comparable to the state of the art. In addition, the fact that we provide reputation models makes our methods even more attractive for reputation managers and scientists from various fields
Perez, Laura Haide. "Génération automatique de phrases pour l'apprentissage des langues." Thesis, Université de Lorraine, 2013. http://www.theses.fr/2013LORR0062/document.
Full textIn this work, we explore how Natural Language Generation (NLG) techniques can be used to address the task of (semi-)automatically generating language learning material and activities in Computer-Assisted Language Learning (CALL). In particular, we show how a grammar-based Surface Realiser (SR) can be usefully exploited for the automatic creation of grammar exercises. Our surface realiser uses a wide-coverage reversible grammar, namely SemTAG, which is a Feature-Based Tree Adjoining Grammar (FB-TAG) equipped with a unification-based compositional semantics. More precisely, the FB-TAG grammar integrates a flat and underspecified representation of First Order Logic (FOL) formulae. In the first part of the thesis, we study the task of surface realisation from flat semantic formulae and we propose an optimised FB-TAG-based realisation algorithm that supports the generation of longer sentences given a large-scale grammar and lexicon. The approach followed to optimise TAG-based surface realisation from flat semantics draws on the fact that an FB-TAG can be translated into a Feature-Based Regular Tree Grammar (FB-RTG) describing its derivation trees. The derivation tree language of TAG constitutes a simpler language than the derived tree language, and thus generation approaches based on derivation trees have already been proposed. Our approach departs from previous ones in that our FB-RTG encoding accounts for the feature structures present in the original FB-TAG, which has important consequences regarding over-generation and the preservation of the syntax-semantics interface. The concrete derivation tree generation algorithm that we propose is an Earley-style algorithm integrating a set of well-known optimisation techniques: tabulation, sharing-packing, and semantic-based indexing. In the second part of the thesis, we explore how our SemTAG-based surface realiser can be put to work for the (semi-)automatic generation of grammar exercises. Usually, teachers manually edit exercises and their solutions, and classify them according to their degree of difficulty or expected learner level. One strand of research in Natural Language Processing (NLP) for CALL addresses the (semi-)automatic generation of exercises. Mostly, this work draws on texts extracted from the Web and uses machine learning and text analysis techniques (e.g. parsing, POS tagging, etc.). These approaches expose the learner to sentences that have a potentially complex syntax and diverse vocabulary. In contrast, the approach we propose in this thesis addresses the (semi-)automatic generation of grammar exercises of the type found in grammar textbooks. In other words, it deals with the generation of exercises whose syntax and vocabulary are tailored to specific pedagogical goals and topics. Because the grammar-based generation approach associates natural language sentences with a rich linguistic description, it permits defining a specification language of syntactic and morpho-syntactic constraints for the selection of stem sentences in compliance with a given pedagogical goal. Further, it allows for the post-processing of the generated stem sentences to build grammar exercise items. We show how Fill-in-the-blank, Shuffle and Reformulation grammar exercises can be automatically produced. The approach has been integrated in the Interactive French Learning Game (I-FLEG) serious game for learning French and has been evaluated both through interactions with online players and in collaboration with a language teacher
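As a minimal illustration of the exercise-building step, the Python sketch below blanks out the words matching a pedagogical target in a POS-annotated sentence. The sentence, tags and lemma table are hard-coded stand-ins; in the thesis they would come from the SemTAG-based realiser and its linguistic descriptions.

```python
# A toy sketch of turning a realiser's annotated output into a
# fill-in-the-blank item with its answer key.
sentence = [("le", "DET"), ("chat", "NOUN"), ("dort", "VERB")]
lemmas = {"dort": "dormir"}

def fill_in_the_blank(tagged, target_pos="VERB"):
    """Blank every word matching the pedagogical target; keep the key."""
    stem, key = [], []
    for word, pos in tagged:
        if pos == target_pos:
            stem.append("_____ (" + lemmas.get(word, word) + ")")
            key.append(word)
        else:
            stem.append(word)
    return " ".join(stem), key

exercise, answers = fill_in_the_blank(sentence)
print(exercise)  # le chat _____ (dormir)
print(answers)   # ['dort']
```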
Saneifar, Hassan. "Locating Information in Heterogeneous log files." Thesis, Montpellier 2, 2011. http://www.theses.fr/2011MON20092/document.
Full textIn this thesis, we present contributions to the challenging issues encountered in question answering and in locating information in complex textual data, such as log files. Question answering systems (QAS) aim to find a relevant fragment of a document which could be regarded as the best possible concise answer to a question given by a user. In this work, we seek to propose a complete solution to locate information in a special kind of textual data, i.e., log files generated by EDA design tools. Nowadays, in many application areas, modern computing systems are instrumented to generate huge reports about occurring events in the form of log files. Log files are generated in every computing field to report the status of systems, products, or even the causes of problems that can occur. Log files may also include data about critical parameters, sensor outputs, or a combination of those. Analyzing log files, as an attractive approach to automatic system management and monitoring, has been enjoying a growing amount of attention [Li et al., 2005]. Although the process of generating log files is quite simple and straightforward, log file analysis can be a tremendous task that requires enormous computational resources, a long time and sophisticated procedures [Valdman, 2004]. Indeed, many kinds of log files generated in some application domains are not systematically exploited in an efficient way because of their special characteristics. In this thesis, we are mainly interested in log files generated by Electronic Design Automation (EDA) systems. Electronic design automation is a category of software tools for designing electronic systems such as printed circuit boards and Integrated Circuits (IC). In this domain, to ensure the design quality, there are quality check rules which should be verified. Verification of these rules is principally performed by analyzing the generated log files. In the case of large designs, where the design tools may generate megabytes or gigabytes of log files each day, the problem is to wade through all of this data to locate the critical information needed to verify the quality check rules. These log files typically include a substantial amount of data; accordingly, manually locating information is a tedious and cumbersome process. Furthermore, the particular characteristics of log files, especially those generated by EDA design tools, raise significant challenges for the retrieval of information from them. The specific features of log files limit the usefulness of manual analysis techniques and static methods. Automated analysis of such logs is complex due to their heterogeneous and evolving structures and their large, non-fixed vocabulary. In this thesis, with each contribution, we answer questions raised by the data specificities or the domain requirements. We investigate throughout this work the main concern: "how can the specificities of log files influence information extraction and natural language processing methods?". In this context, a key challenge is to provide approaches that take the log file specificities into account while considering the issues specific to QA in restricted domains. We present the following contributions: > Proposing a novel method to recognize and identify the logical units in log files in order to segment them according to their structure. We thus propose a method to characterize the complex logical units found in log files according to their syntactic characteristics.
Within this approach, we propose an original type of descriptor to model the textual structure and layout of text documents. > Proposing an approach to locate the requested information in the log files based on passage retrieval. To improve the performance of passage retrieval, we propose a novel query expansion approach to adapt an initial query to all types of corresponding log files and to overcome difficulties such as vocabulary mismatch. Our query expansion approach relies on two relevance feedback steps. In the first one, we determine the explicit relevance feedback by identifying the context of questions. The second phase consists of a novel type of pseudo relevance feedback. Our method is based on a new term weighting function, called TRQ (Term Relatedness to Query), introduced in this work, which gives a score to the terms of the corpus according to their relatedness to the query (a toy stand-in for this scoring step is sketched below). We also investigate how to apply our query expansion approach to documents from general domains. > Studying the use of morpho-syntactic knowledge in our approaches. For this purpose, we are interested in the extraction of terminology from the log files. We thus introduce our approach, named Exterlog (EXtraction of TERminology from LOGs), to extract the terminology of log files. To evaluate the extracted terms and choose the most relevant ones, we propose a candidate term evaluation method using a Web-based measure, combined with statistical measures, that takes the context of log files into account
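The TRQ weighting function itself is defined in the thesis; as a hedged stand-in, the Python sketch below scores candidate expansion terms by their normalised co-occurrence with query terms in a toy log corpus. The log lines, the query and the normalisation are all invented for illustration.

```python
from collections import Counter
from itertools import combinations
import math

# Invented EDA-style log lines and query; the real TRQ formula differs.
log_lines = [
    "timing violation on clock clk_core",
    "setup timing check failed for clk_core",
    "power report generated",
]
query = {"timing", "clk_core"}

cooc, freq = Counter(), Counter()
for line in log_lines:
    terms = set(line.split())
    freq.update(terms)
    for a, b in combinations(sorted(terms), 2):
        cooc[(a, b)] += 1

def relatedness(term):
    """Sum of normalised co-occurrence scores with each query term."""
    score = 0.0
    for q in query:
        pair = tuple(sorted((term, q)))
        if cooc[pair]:
            score += cooc[pair] / math.sqrt(freq[term] * freq[q])
    return score

candidates = {t for t in freq if t not in query}
expansion = sorted(candidates, key=relatedness, reverse=True)[:3]
print(expansion)  # e.g. ['setup', 'violation', ...]
```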
Ratkovic, Zorana. "Predicative Analysis for Information Extraction : application to the biology domain." Thesis, Paris 3, 2014. http://www.theses.fr/2014PA030110.
Full textThe abundance of biomedical information expressed in natural language has resulted in the need for methods to process this information automatically. In the field of Natural Language Processing (NLP), Information Extraction (IE) focuses on the extraction of relevant information from unstructured data in natural language. A great deal of IE work today focuses on Machine Learning (ML) approaches that rely on deep linguistic processing in order to capture the complex information contained in biomedical texts. In particular, syntactic analysis and parsing have played an important role in IE, by helping to capture how the words in a sentence are related. This thesis examines how dependency parsing can be used to facilitate IE. It focuses on a task-based approach to dependency parsing evaluation and parser selection, including a detailed error analysis. In order to achieve high-quality syntax-based IE, different stages of linguistic processing are addressed, including both pre-processing steps (such as tokenization) and the use of complementary linguistic processing (such as semantics and coreference analysis). This thesis also explores how the different levels of linguistic processing can be represented for use within an ML-based IE algorithm, and how the interface between the two is of great importance. Finally, biomedical data is very heterogeneous, encompassing different subdomains and genres. This thesis explores how subdomain adaptation can be achieved by using existing subdomain knowledge and resources. The methods and approaches described are explored using two different biomedical corpora, demonstrating how the IE results are used in real-life tasks
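One classic way dependency parsing facilitates IE is by providing path features between entity mentions. The Python sketch below extracts the shortest dependency path from a hand-written toy parse; the sentence, indices and relation labels are assumptions, and a real pipeline would obtain the parse from the parser selected through task-based evaluation.

```python
from collections import deque

# Toy parse of "IL-2 activates the STAT5 protein":
# token index -> (head index, dependency relation)
parse = {0: (1, "nsubj"), 1: (None, "root"), 2: (4, "det"),
         3: (4, "compound"), 4: (1, "obj")}

def dependency_path(parse, src, dst):
    """Shortest path between two tokens via BFS over the undirected tree."""
    edges = {}
    for child, (head, rel) in parse.items():
        if head is not None:
            edges.setdefault(child, []).append((head, rel))
            edges.setdefault(head, []).append((child, rel))
    queue, seen = deque([(src, [])]), {src}
    while queue:
        node, path = queue.popleft()
        if node == dst:
            return path
        for nxt, rel in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [rel]))
    return None

# Path between "IL-2" (0) and "STAT5" (3), a typical relation feature:
print(dependency_path(parse, 0, 3))  # ['nsubj', 'obj', 'compound']
```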
Aouini, Mourad. "Approche multi-niveaux pour l'analyse des données textuelles non-standardisées : corpus de textes en moyen français." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCC003.
Full textThis thesis presents an approach to the analysis of non-standardized texts, consisting in modeling a processing chain that allows the automatic annotation of texts: grammatical annotation using a morphosyntactic tagging method, and semantic annotation by deploying a named-entity recognition system. In this context, we present an analysis system for Middle French, a language still in the course of evolution: its spelling, inflectional system and syntax are not stable. Texts in Middle French are mainly distinguished by the absence of a normalized orthography and by the geographical and chronological variability of medieval lexicons. The main objective is to present a system dedicated to the construction of linguistic resources, in particular the construction of electronic dictionaries, based on morphological rules. We then present the steps we carried out to construct a morphosyntactic tagger which aims at automatically producing contextual analyses using disambiguation grammars. Finally, we retrace the path that led us to set up local grammars to find named entities. For this purpose, we were led to create MEDITEXT, a corpus of texts in Middle French from the end of the thirteenth to the fifteenth century
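To give a flavour of dictionary lookup over non-standardized spelling, here is a deliberately simplified Python sketch mapping spelling variants onto dictionary entries via rewrite rules. The rules and entries are invented examples; NooJ handles this with its own dictionary and grammar formalisms, not with Python code.

```python
# Invented variant rules (e.g. old "-oit" endings vs. modern "-ait")
# and a two-entry lexicon, for illustration only.
RULES = [("oi", "ai"), ("lz", "ls"), ("y", "i")]
LEXICON = {"faisait": ("faire", "VERB"), "fils": ("fils", "NOUN")}

def normalise(form):
    """Generate candidate normalisations by applying the variant rules."""
    candidates = {form}
    for old, new in RULES:
        candidates |= {c.replace(old, new) for c in set(candidates)}
    return candidates

def lookup(form):
    for candidate in normalise(form.lower()):
        if candidate in LEXICON:
            return candidate, LEXICON[candidate]
    return None

print(lookup("faisoit"))  # ('faisait', ('faire', 'VERB'))
print(lookup("filz"))     # ('fils', ('fils', 'NOUN'))
```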