Dissertations / Theses on the topic 'Modèles syntaxiques'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 24 dissertations / theses for your research on the topic 'Modèles syntaxiques.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Caelen-Haumont, Geneviève. "Stratégies des locuteurs en réponse à des consignes de lecture d'un texte : analyse des interactions entre modèles syntaxiques, sémantiques, pragmatiques et paramètres prosodiques." Aix-Marseille 1, 1991. http://www.theses.fr/1991AIX10044.
Full text
As interpretations of the relations between prosody and grammar do not concord among authors, for French as for other languages, we evaluate these relations, and more precisely those between syntax, semantics, pragmatics and a set of acoustic cues, in the frame of text readings (12 speakers x 3 reading instructions). These instructions ask the readers to speak more and more clearly. Most of the 6 models and most of the 24 cues (extracted from pitch, energy and duration parameters) which have been defined for this purpose are new. The experiment was designed to put these models (as well as the acoustic cues) in competition. Applied to the lexical items of a text, these models, each within its own grammatical domain, make it possible to ascribe a weighting to each lexical item. The initial hypothesis is that these numerical values reflect a hierarchy which predicts the pitch hierarchy in the utterances, at the intonation and accent levels. Using an appropriate methodology, we show, within the limits of the text experiment, that (1) these models predict pitch cues in 85% of the cases; (2) speakers use semantic and pragmatic models much more than syntactic ones; (3) the (new) cue which is the most precise and requires a high cost of attention is by far the most frequent in speaker productions; (4) speaker strategies reflect a meta-strategy consisting in using holistic models at the beginning of the text, or when the least exacting reading instruction is running, and analytic models otherwise; (5) these strategies do not necessarily take place within sentence boundaries but rely on minimal phrases (the lowest extension) gathered into more or less important units; (6) a sentence boundary overlap may occur; (7) the function of duration and energy cues consists precisely in the demarcation of such minimal phrases; (8) these strategies are very adaptable and are achieved locally in moment-to-moment decisions.
Ben, Amor Rafika. "Comparaison de quelques modèles syntaxiques formels (GB, GPSG, HPSG et LFG) : application au traitement de l'accord dans différentes langues avec référence particulière au français." Paris 3, 2000. http://www.theses.fr/2000PA030087.
Full text
Faure, Germain. "Structures et modèles de calculs de réécriture." Phd thesis, Université Henri Poincaré - Nancy I, 2007. http://tel.archives-ouvertes.fr/tel-00164576.
Full text
[...] equational theory that is a priori arbitrary. Aggregation is used to collect the different possible results.
In this thesis, we study different combinations of the fundamental ingredients of the rho-calculus: matching, aggregation and higher-order mechanisms.
We study higher-order matching in the pure lambda-calculus modulo a restriction of beta-conversion called superdevelopments. This new approach is expressive enough to handle second-order matching problems and those with Miller-style higher-order patterns.
We then examine the categorical models of the parallel lambda-calculus, which can be seen as an enrichment of the lambda-calculus with the aggregation of terms. We show that this is a significant step towards a denotational semantics of the rewriting calculus.
We also propose a study and a comparison of calculi with possibly dynamic patterns, that is, patterns which can be instantiated and reduced. We show that this study, and more particularly its confluence proof, is general enough to apply to all the known calculi. We then study the implementation of such calculi by proposing a rewriting calculus with matching and explicit substitutions.
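The matching-plus-rewriting core that such calculi build on can be sketched in a few lines. This is our own first-order toy (the rho-calculus itself is higher-order and far richer); the term encoding as nested tuples and the variable convention `'?x'` are invented for illustration:

```python
# Toy first-order rewriting: terms are tuples ('f', arg1, ...),
# variables are strings starting with '?'.

def match(pattern, term, subst=None):
    """Try to match `pattern` against `term`; return a substitution or None."""
    subst = dict(subst or {})
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in subst:                       # variable already bound:
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term) and pattern[0] == term[0]:
        for p, t in zip(pattern[1:], term[1:]):    # match arguments pairwise
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(pattern, subst):
    """Replace variables in `pattern` by their bindings in `subst`."""
    if isinstance(pattern, str) and pattern.startswith('?'):
        return subst[pattern]
    if isinstance(pattern, tuple):
        return (pattern[0],) + tuple(substitute(p, subst) for p in pattern[1:])
    return pattern

def rewrite(term, lhs, rhs):
    """Apply the rule lhs -> rhs at the root of `term`, if it matches."""
    s = match(lhs, term)
    return substitute(rhs, s) if s is not None else term

# Rule plus(x, zero) -> x, applied to plus(succ(zero), zero):
print(rewrite(('plus', ('succ', ('zero',)), ('zero',)),
              ('plus', '?x', ('zero',)), '?x'))  # → ('succ', ('zero',))
```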
Dugua, Céline. "Liaison, segmentation lexicale et schémas syntaxiques entre 2 et 6 ans : un modèle développemental basé sur l'usage." Grenoble 3, 2006. https://hal.archives-ouvertes.fr/tel-01272976.
Full text
This thesis focuses on the acquisition of liaison by French children aged between 2 and 6. Drawing on cognitive functional approaches, more specifically on usage-based models and construction grammars, our analyses highlight how linguistic levels (phonological, lexical, syntactic) interact during development. Building on 8 corpus studies as well as a measurement of errors in liaison contexts taken from one child's utterances, we elaborated 6 experimental study protocols, in particular a four-year longitudinal follow-up of 20 children as well as 2 cross-sectional studies with larger samples (122 and 200 subjects). We suggest a 3-stage developmental model integrating the liaison phenomenon, lexical segmentation and the emergence of constructional schemas. Early on, the child would pick up concrete linguistic sequences from her linguistic environment. She would then memorise these sequences and store them in her lexicon in the same form as the one heard. For example, she could memorise sequences like un âne (a donkey), l'âne (with determiners), or zâne, nâne (with the liaison consonant attached to the initial). These concrete sequences constitute the base from which more abstract schemas progressively emerge. The first ones are general, integrating a determiner (the pivot) and a slot which can receive any lexical form. They look like un (a/an) + X, and they explain early frequent substitution errors (like un zâne). Gradually, these schemas become more specific, integrating the phonetic nature of the liaison consonant: un + nX. Their application explains progress in liaison contexts and overgeneralization errors on words starting with a consonant (like un nèbre instead of un zèbre (zebra)).
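The contrast between the general "un + X" schema and the specific "un + nX" schema can be made concrete with a toy sketch. This is our own illustration, not Dugua's model; the child lexicon with its stored word-initial variants is invented:

```python
# Hypothetical child lexicon: stored variants of "âne" (donkey),
# segmented with different liaison consonants attached to the initial.
lexicon = {'âne': ['zâne', 'nâne', 'âne']}

def general_schema(word):
    """'un + X': any stored variant may fill the slot, so errors occur."""
    return ['un ' + v for v in lexicon[word]]

def specific_schema(word):
    """'un + nX': only the variant with the liaison consonant n- fits."""
    return ['un ' + v for v in lexicon[word] if v.startswith('n')]

print(general_schema('âne'))   # includes the attested error "un zâne"
print(specific_schema('âne'))  # only the correct liaison "un nâne"
```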
Perez, Sanchez Maria Milagrosa. "Typologie et uniformisation syntaxique des modèles de transfert de chaleur dans le contexte de la thermique du bâtiment." Lyon, INSA, 1989. http://www.theses.fr/1989ISAL0005.
Full text
Choi, Juyeon. "Problèmes morpho-syntaxiques analysés dans un modèle catégoriel étendu : application au coréen et au français avec une réalisation informatique." Thesis, Paris 4, 2011. http://www.theses.fr/2011PA040211.
Full text
This dissertation aims at proposing a formal analysis of linguistic phenomena such as the case system, double case, flexible word order, coordination, subordination and thematisation in two structurally distinct languages: Korean and French. The formalism of Applicative Combinatory Categorial Grammar, developed by Jean-Pierre Desclés and Ismail Biskri, allows us to analyze these problems by means of the combinators of Curry's Combinatory Logic and the functional calculus of Church's types. Taking these formal analyses of Korean and French into account, we discuss the « anti-anti relativist » hypothesis by identifying syntactic invariants across different operations such as predication, determination, quantification, transposition and coordination. We also propose a categorial parser, ACCG, applicable to Korean and French sentences, which automatically generates the categorial calculus and the operator-operand structures.
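The basic mechanism of categorial grammar, function application on categories, can be sketched minimally. This is a plain AB-grammar toy of ours, not the ACCG formalism of the thesis (which adds combinators and typed functional calculus); the lexicon and category encoding are invented:

```python
# Categories: atoms like 'NP', 'S', and functors encoded as tuples:
# ('/', X, Y) is X/Y (seeks Y on its right), ('\\', X, Y) is X\Y
# (seeks Y on its left), both yielding X.

def combine(left, right):
    """Forward application X/Y Y -> X, backward application Y X\\Y -> X."""
    if isinstance(left, tuple) and left[0] == '/' and left[2] == right:
        return left[1]
    if isinstance(right, tuple) and right[0] == '\\' and right[2] == left:
        return right[1]
    return None  # the two categories do not combine

# Toy lexicon: "dort" (sleeps) is NP\S, an intransitive verb.
lex = {'Marie': 'NP', 'dort': ('\\', 'S', 'NP')}
cats = [lex[w] for w in ['Marie', 'dort']]
print(combine(cats[0], cats[1]))  # backward application yields S
```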
Atar, Sharghi Navid. "Analyse syntaxique comparée du persan et du français : vers un modèle de traduction non ambigüe et une langue controlée." Phd thesis, Université de Franche-Comté, 2011. http://tel.archives-ouvertes.fr/tel-01011496.
Full text
Huet, Stéphane. "Informations morpho-syntaxiques et adaptation thématique pour améliorer la reconnaissance de la parole." Phd thesis, Université Rennes 1, 2007. http://tel.archives-ouvertes.fr/tel-00524245.
Full text
Zouaidi, Safa. "La combinatoire des verbes d'affect : analyse sémantique, syntaxique et discursive français-arabe." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAL028/document.
Full text
The paramount stake of this research is to build an integrative functional model for the analysis of affect verbs in French and Arabic. I have chosen four affect verbs: two verbs of emotion (to astonish and to rage in French and their equivalents in Arabic) and two verbs of sentiment (to admire and to envy in French and their equivalents [ʔadhaʃa], [ʔaɣḍaba] in Arabic); they belong to the semantic fields of Surprise, Anger, Admiration and Jealousy. More concretely, the analysis is shaped:
- On the semantic and syntactic level: the semantic dimensions carried by verbal collocations such as to extremely astonish and to rage prodigiously in French, or [ʔaʕʒaba ʔiʕʒāban kabīran] (admire admiration big)* and [ɣaḍaba ɣaḍaban ʃadīdan] (to rage rage extreme) in Arabic, are systematically linked to syntax (recurrent grammatical constructions) (Hoey 2005).
- On the syntactic and discursive level: the usage of passive, active and reflexive forms of affect verbs is dealt with from the perspective of informational dynamics in the sentence (Van Valin and LaPolla 1997).
From a methodological point of view, the study is based on a quantitative and qualitative approach to verbal combination and favours a contrastive one. It is founded on the French journalistic corpus of the Emobase database (Emolex project, 100 M words) and the Arabic journalistic corpus Arabicorpus (137 M words).
Furthermore, the thesis contributes to the study of the semantic values and the syntactic and discursive behaviour of affect verb combinations in Arabic and French, which will make it possible to better structure the field of emotions relative to what is proposed by current studies in lexicography. The main results of the study can be applied in language teaching, translation, and automated processing of the emotion lexicon in the two compared languages.
Le-Hong, Phuong. "Elaboration d'un composant syntaxique à base de grammaires d'arbres adjoints pour le vietnamien." Phd thesis, Université Nancy II, 2010. http://tel.archives-ouvertes.fr/tel-00529657.
Full text
Kirman, Jerome. "Mise au point d'un formalisme syntaxique de haut niveau pour le traitement automatique des langues." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0330/document.
Full text
The goal of computational linguistics is to provide a formal account of linguistic knowledge and to produce algorithmic tools for natural language processing. Often, this is done in a so-called generative framework, where grammars describe sets of valid sentences by iteratively applying some set of rewrite rules. Another approach, based on model theory, instead describes grammaticality as a set of well-formedness logical constraints, relying on deep links between logic and automata in order to produce efficient parsers. This thesis favors the latter approach. Making use of several existing results in theoretical computer science, we propose a tool for linguistic description that is both expressive and designed to facilitate grammar engineering. It first tackles the abstract structure of sentences, providing a logical language based on the lexical properties of words in order to concisely describe the set of grammatically valid sentences. It then draws the link between these abstract structures and their representations (both in syntax and semantics) through the use of linearization rules that rely on logic and lambda-calculus. In order to validate this proposal, we use it to model various linguistic phenomena, ending with a specific focus on languages that include free word order phenomena (that is, sentences which allow the free reordering of some of their words or syntagmas while keeping their meaning) and on their algorithmic complexity.
Wang, Tiexin. "A study to define an automatic model transformation approach based on semantic and syntactic comparisons." Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2015. http://www.theses.fr/2015EMAC0015/document.
Full text
Models are increasingly used both to describe a view of a complex system and to exchange information. However, transferring information from one model to another in order to share it is now an issue related to the interoperability of systems. This problem can be approached in three ways: integrated (all models identical), unified (all models refer to a pivot model), or federated (no specific rules on the models). Although standards exist, they are rarely respected rigorously, so the federated approach seems to be the most realistic one. However, because the models differ, this approach is complicated: models can have very heterogeneous structures and use different vocabulary to describe the same concept. Therefore, we must identify the concepts common to different models before defining the rules for transforming from one format to another. This thesis proposes a methodology to achieve these goals. It is based, on the one hand, on the proposal of a meta-meta-model (to unify the description of model structure, i.e. the meta-model) and, on the other hand, on calculating the distance between the elements of the models to deduce the transformation rules. This distance reflects both a syntactic distance (word occurrences) and a semantic relation based on synonymy. The search for synonym relations relies on a knowledge base represented as an ontology, such as WordNet.
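The idea of a distance mixing a syntactic score with a semantic one can be sketched as follows. This is a hedged toy of ours, not the thesis's method: the synonym table stands in for an ontology such as WordNet, and the equal weights are an invented assumption:

```python
def bigrams(word):
    """Set of character bigrams of a word."""
    return {word[i:i + 2] for i in range(len(word) - 1)}

def syntactic_sim(a, b):
    """Dice coefficient over character bigrams (a word-occurrence proxy)."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 1.0 if a == b else 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

# Tiny stand-in for an ontology-based synonym lookup.
SYNONYMS = {frozenset({'car', 'automobile'}), frozenset({'begin', 'start'})}

def semantic_sim(a, b):
    if a == b:
        return 1.0
    return 1.0 if frozenset({a, b}) in SYNONYMS else 0.0

def similarity(a, b, w_syn=0.5, w_sem=0.5):
    """Weighted mix of syntactic and semantic similarity."""
    return w_syn * syntactic_sim(a, b) + w_sem * semantic_sim(a, b)

print(similarity('car', 'automobile'))  # semantically close, syntactically far
```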
Phan, Van Trung. "Modelling of the in service behaviour of passive insulated structures for deep sea offshore applications." Thesis, Brest, 2012. http://www.theses.fr/2012BRES0098/document.
Full text
Ultra-deep offshore oil exploitation presents new challenges to offshore engineering and operating companies. Such applications require pipelines with efficient thermal protection. Passive insulation materials are commonly used to guarantee the thermal performance of the pipes, and syntactic foams are now the preferred material for this application. The mechanical behaviour of such insulation materials is quite complex, associating the time-dependent behaviour of polymers with the damage behaviour of glass microspheres. In order to allow an optimisation of such systems while ensuring in-service durability, accurate numerical models of insulation materials are thus required. During service life in deep water, hydrostatic pressure is the most important mechanical loading on the pipeline, so this study aims to describe the mechanical behaviour of the material under such loading. Using a hyperbaric chamber, we analyse the evolution of the volumetric strain with time, with respect to temperature, under different time-evolutions of the applied hydrostatic pressure. These experimental results, together with the mechanical response of the material under uniaxial tensile creep tests, allow the development of a thermo-mechanical model, so that representative loadings can be analysed.
Pécheux, Nicolas. "Modèles exponentiels et contraintes sur les espaces de recherche en traduction automatique et pour le transfert cross-lingue." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS242/document.
Full text
Most natural language processing tasks are modeled as prediction problems where one aims at finding the best-scoring hypothesis in a very large pool of possible outputs. Even if algorithms are designed to leverage some kind of structure, the output space is often too large to be searched exhaustively. This work aims at understanding the importance of the search space and the possible use of constraints to reduce its size and complexity. We report in this thesis three case studies which highlight the risks and benefits of manipulating the search space in learning and inference.
When information about the possible outputs of a sequence labeling task is available, it may seem appropriate to include this knowledge in the system, so as to facilitate and speed up learning and inference. A case study on type constraints for CRFs however shows that using such constraints at training time is likely to drastically reduce performance, even when these constraints are both correct and useful at decoding time.
On the other hand, we also consider possible relaxations of the supervision space, as in the case of learning with latent variables, or when only partial supervision is available, which we cast as ambiguous learning. Such weakly supervised methods, together with cross-lingual transfer and dictionary crawling techniques, allow us to develop natural language processing tools for under-resourced languages.
Word order differences between languages pose several combinatorial challenges to machine translation, and the constraints on word reorderings have a great impact on the set of potential translations that is explored during search. We study reordering constraints that restrict the factorial space of permutations and explore the impact of the design of the reordering search space on machine translation performance. We show, however, that even though it might be desirable to design better reordering spaces, model and search errors still seem to be the most important issues.
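How type constraints prune a sequence-labeling search space at decoding can be illustrated with a tiny constrained Viterbi decoder. This is our own toy, not the thesis's CRF setup: the tag dictionary, transition scores and sentence are all invented:

```python
import math

# Each word only licenses a restricted tag set: the type constraint.
ALLOWED = {'the': {'DET'}, 'dog': {'NOUN', 'VERB'}, 'barks': {'VERB', 'NOUN'}}
TRANS = {('DET', 'NOUN'): 0.9, ('DET', 'VERB'): 0.1,
         ('NOUN', 'VERB'): 0.8, ('NOUN', 'NOUN'): 0.2,
         ('VERB', 'NOUN'): 0.5, ('VERB', 'VERB'): 0.5}

def viterbi(words):
    """Best tag sequence; only tags allowed for each word are explored."""
    best = {t: (0.0, [t]) for t in ALLOWED[words[0]]}  # (log-score, path)
    for w in words[1:]:
        new = {}
        for t in ALLOWED[w]:  # the constraint prunes the search space here
            cands = [(s + math.log(TRANS.get((p, t), 1e-6)), path + [t])
                     for p, (s, path) in best.items()]
            new[t] = max(cands)
        best = new
    return max(best.values())[1]

print(viterbi(['the', 'dog', 'barks']))  # ['DET', 'NOUN', 'VERB']
```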
Tellier, Isabelle. "Définition et implémentation par les grammaires catégorielles d'un modèle cognitif formel de l'énonciation." Cachan, Ecole normale supérieure, 1996. http://www.theses.fr/1996DENS0007.
Full text
Tafforeau, Jérémie. "Modèle joint pour le traitement automatique de la langue : perspectives au travers des réseaux de neurones." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0430/document.
Full text
NLP researchers have identified different levels of linguistic analysis. This led to a hierarchical division of the various tasks performed in order to analyze a text statement. The traditional approach considers task-specific models which are subsequently arranged in cascade within processing chains (pipelines). This approach has a number of limitations: the empirical selection of model features, the accumulation of errors along the pipeline, and the lack of robustness to domain changes. These limitations lead to particularly high performance losses in the case of non-canonical language with limited available data, such as transcriptions of telephone conversations. Disfluencies and speech-specific syntactic schemes, as well as transcription errors produced by automatic speech recognition systems, lead to a significant drop in performance. It is therefore necessary to develop robust and flexible systems. We intend to perform syntactic and semantic analysis using a multitask deep neural network model, while taking into account variations of domain and/or language register within the data.
Ghoul, Dhaou. "Classifications et grammaires des invariants lexicaux arabes en prévision d’un traitement informatique de cette langue. Construction d’un modèle théorique de l’arabe : la grammaire des invariants lexicaux temporels." Thesis, Paris 4, 2016. http://www.theses.fr/2016PA040184.
Full text
This thesis focuses on the classification and treatment of Arabic lexical invariants that express a temporal aspect. Our aim is to create a grammar diagram (finite-state machine) for each invariant. In this work, we limited our treatment to 20 lexical invariants. Our assumption is that the lexical invariants are located at the same structural (formal) level as the schemes in the quotient language (skeleton) of Arabic. They carry much information and involve syntactic expectations that make it possible to predict the structure of the sentence.
In the first part of our research, we present the concept of a lexical invariant by exposing the various levels of invariance. Then, we classify the invariants according to several criteria.
The second part is devoted to our own study of the temporal lexical invariants. We present our linguistic method as well as our modelling approach using grammar diagrams. Then, we analyze simple lexical invariants such as “ḥattā, baʿda” and complex ones such as “baʿdamā, baynamā”.
Finally, an experimental application, “Kawâkib”, was used to detect and identify the lexical invariants, showing both its strong points and its gaps. We also propose a new vision of the next version of “Kawâkib” as a teaching application of Arabic without a lexicon.
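A grammar diagram in the sense above is just a small finite-state machine over the material expected after the invariant. The sketch below is our own toy with invented transitions (the real diagrams for "baʿda" in the thesis are not reproduced here):

```python
# Tiny deterministic FSM: the invariant "baʿda" ("after") opens a
# syntactic expectation for a nominal complement.
TRANSITIONS = {
    ('START', 'baʿda'): 'INV',   # reading the invariant itself
    ('INV', 'NOUN'): 'ACCEPT',   # baʿda + noun ("after the lesson")
    ('INV', 'PRON'): 'ACCEPT',   # baʿda + pronoun suffix
}

def accepts(tokens):
    """Run the FSM over a token/category sequence; True iff it accepts."""
    state = 'START'
    for tok in tokens:
        state = TRANSITIONS.get((state, tok))
        if state is None:        # no transition: expectation violated
            return False
    return state == 'ACCEPT'

print(accepts(['baʿda', 'NOUN']))  # True: the expectation is met
print(accepts(['baʿda', 'VERB']))  # False: no such transition
```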
Thuilier, Juliette. "Contraintes préférentielles et ordre des mots en français." Phd thesis, Université Paris-Diderot - Paris VII, 2012. http://tel.archives-ouvertes.fr/tel-00781228.
Full text
Parisse, Christophe. "Reconnaissance de l'écriture manuscrite : analyse de la forme globale des mots et utilisation de la morpho-syntaxe." Paris 11, 1989. http://www.theses.fr/1989PA112301.
Full text
Machine recognition of handwriting: global analysis of word shapes and morpho-syntactic evaluation. Machine recognition of handwriting aims at a goal which is not far removed from human reading. The study of reading may thus provide useful hints for the as yet unsuccessful computer recognition of unrestricted handwriting. A writer-oriented system (for a 10,000-word vocabulary) has been developed in this framework and tested. It operates on the basis of the interaction of full-word shape analysis with syntactic and lexical-semantic processing. The system comprises:
• Image transformations designed to enable global shape comparisons of scanned words. These transformations reflect the global shape of word images and not their internal structure, thereby permitting shape comparisons within a given unrestricted handwriting.
• A syntactic parser based on a Markovian model whose rules emerge through training. It checks the grammaticality of candidate sentences which result from shape comparisons.
• Semantic weighting of sentences which are found grammatical, based on computing lexical co-occurrences in thematically organized textual databases.
Zemirli, Zouhir. "Synthèse vocale de textes arabes voyellés." Toulouse 3, 2004. http://www.theses.fr/2004TOU30262.
Full text
Text-to-speech synthesis consists in creating speech by analysis of a text which is subject to no restriction. The object of this thesis is to describe the modeling and use of the phonetic, phonological, morpho-lexical and syntactic knowledge necessary for the development of a complete voice synthesis system for diacritized Arabic texts. The automatic generation of the prosodic-phonetic sequence required the development of several components. The morpho-syntactic tagger "TAGGAR" carries out grammatical tagging, syntactic marking and grouping, and the automatic insertion of pauses. Grapheme-to-phoneme conversion is ensured by using lexicons, syntactic grammars, and morpho-orthographical and phonological rules. A multiplicative model for predicting phoneme duration is described, and a model for generating prosodic contours based on word accents and syntactic groups is presented.
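The general shape of a multiplicative duration model can be sketched in a few lines. This is our own illustration with invented intrinsic durations and factors, not the coefficients of the thesis's model:

```python
# Duration = intrinsic phoneme duration times one factor per
# contextual condition (multiplicative model).
INTRINSIC_MS = {'a': 90.0, 'b': 70.0}    # hypothetical intrinsic durations (ms)
FACTORS = {'phrase_final': 1.4, 'stressed': 1.2, 'fast_rate': 0.8}

def phoneme_duration(phoneme, context):
    """Multiply the intrinsic duration by each applicable context factor."""
    d = INTRINSIC_MS[phoneme]
    for c in context:
        d *= FACTORS[c]
    return d

print(phoneme_duration('a', ['phrase_final', 'fast_rate']))  # 90 * 1.4 * 0.8
```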
Trouvilliez, Benoît. "Similarités de données textuelles pour l'apprentissage de textes courts d'opinions et la recherche de produits." Thesis, Artois, 2013. http://www.theses.fr/2013ARTO0403/document.
Full text
This Ph.D. thesis is about establishing textual data similarities in the client relation domain. Two subjects are mainly considered:
- the automatic analysis of short messages written in response to satisfaction surveys;
- the search for products matching criteria expressed in natural language by a human through a conversation with a program.
The first subject concerns the statistical information extracted from survey answers. The ideas expressed in the answers are identified, organized according to a taxonomy, and quantified. The second subject concerns the transcription of criteria about products into queries to be interpreted by a database management system. The range of criteria under consideration is wide, from the simplest, like material or brand, to the most complex, like color or price. The two subjects meet on the problem of establishing textual data similarities using NLP techniques. The main difficulties come from the fact that the texts to be processed, written in natural language, are short and contain many spelling errors and negations. Establishing semantic similarities between words (synonymy, antonymy, ...) and syntactic relations between syntagms (conjunction, opposition, ...) are other issues considered in our work. We also study automatic clustering and classification methods in order to analyse the answers to satisfaction surveys.
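Mapping extracted product criteria onto database queries can be sketched as follows. This is a hedged toy of ours, not the thesis's system: the `products` table, attribute names and supported operators are invented for illustration:

```python
def criteria_to_query(criteria):
    """Turn (attribute, operator, value) triples into a parameterized
    SQL WHERE clause plus its parameter list."""
    clauses, params = [], []
    for attr, op, value in criteria:
        assert op in ('=', '<=', '>='), 'unsupported operator'
        clauses.append(f'{attr} {op} ?')   # placeholder, value passed separately
        params.append(value)
    sql = 'SELECT * FROM products WHERE ' + ' AND '.join(clauses)
    return sql, params

# "a leather product under 100 euros" once the criteria are extracted:
sql, params = criteria_to_query([('material', '=', 'leather'),
                                 ('price', '<=', 100)])
print(sql)     # SELECT * FROM products WHERE material = ? AND price <= ?
print(params)  # ['leather', 100]
```

Parameterized placeholders rather than string interpolation keep the generated query safe for a real database management system.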
Prost, Jean-Philippe. "Modelling Syntactic Gradience with Loose Constraint-based Parsing." Phd thesis, Université de Provence - Aix-Marseille I, 2008. http://tel.archives-ouvertes.fr/tel-00352828.
Full text
We suggest extending to ill-formed language the concepts of Intersective Gradience and Subsective Gradience proposed by Aarts for modelling gradient judgements. According to this new model, the problem raised by gradience concerns the classification of an utterance into a particular category, according to criteria based on the syntactic characteristics of the utterance. We set out to extend the notion of Intersective Gradience (IG) so that it concerns the choice of the best solution among a set of candidates, and that of Subsective Gradience (SG) so that it concerns the computation of the degree of typicality of this structure within its category. IG is then modelled using an optimality criterion, while SG is modelled by computing a degree of grammatical acceptability. As for the syntactic characteristics required to classify an utterance, our study of different representation frameworks for natural language syntax shows that they can easily be represented in a Model-Theoretic Syntax framework. We opt for Property Grammars (PG), which offer precisely the possibility of modelling the characterization of an utterance. We present here a fully automated solution for modelling syntactic gradience, which proceeds from the characterization of a well- or ill-formed sentence, the generation of an optimal syntax tree, and the computation of a degree of grammatical acceptability for the utterance.
Through the development of this new model, the contribution of this work is threefold.
First, we specify a logical system for PG which allows its formalization to be revisited from a model-theoretic perspective. In particular, it formalizes the constraint satisfaction and constraint relaxation mechanisms at work in PG, as well as the way they allow the projection of a category during the parsing process. This new system introduces the notion of loose satisfaction, together with a first-order logic formulation for reasoning about an utterance.
Second, we present our implementation of Loose Satisfaction Chart Parsing (LSCP), which we prove always generates a complete and optimal syntactic analysis. This approach is based on a dynamic programming technique as well as on the mechanisms described above. Although of high complexity, this algorithmic solution performs well enough to let us experiment with our gradience model.
And third, after postulating that the prediction of human acceptability judgements may rely on factors derived from LSCP, we present a numerical model for estimating the degree of grammatical acceptability of an utterance. We measure a good correlation of these scores with human judgements of grammatical acceptability. Moreover, our model turns out to perform better than a pre-existing model that we use as a reference, which, for its part, was experimented with manually generated syntax trees.
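A degree of grammatical acceptability in this spirit can be sketched as a weighted ratio of satisfied to evaluated constraints. This is our own numerical toy, not Prost's model; the constraint names and weights are invented:

```python
def acceptability(satisfied, violated, weights):
    """Ratio of satisfied constraint weight to total evaluated weight:
    1.0 for a fully well-formed characterization, lower as heavier
    constraints are violated."""
    sat = sum(weights[c] for c in satisfied)
    vio = sum(weights[c] for c in violated)
    return sat / (sat + vio) if sat + vio else 1.0

# Hypothetical constraint weights for a characterization in the spirit
# of Property Grammars.
weights = {'agreement': 3.0, 'linearity': 2.0, 'uniqueness': 1.0}

# A sentence violating only determiner-noun agreement:
print(acceptability({'linearity', 'uniqueness'}, {'agreement'}, weights))  # 0.5
```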
Goyer, Simon. "Pour un modèle de l'explication pluraliste et mécaniste en psychiatrie." Mémoire, 2013. http://www.archipel.uqam.ca/5449/1/M12940.pdf.
Full text
Germain, Pierre-Luc. "L'approche sémantique offre-t-elle un meilleur modèle de l'explication scientifique que les théories qu'elle prétend supplanter ?" Thèse, 2009. http://hdl.handle.net/1866/7543.
Full text