A selection of scholarly literature on the topic "Deep syntax"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Deep syntax".

Next to each work in the bibliography you will find an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, when these are available in the metadata.

Journal articles on the topic "Deep syntax":

1

Kong, Leilei, Zhongyuan Han, Yong Han, and Haoliang Qi. "A Deep Paraphrase Identification Model Interacting Semantics with Syntax." Complexity 2020 (October 30, 2020): 1–14. http://dx.doi.org/10.1155/2020/9757032.

Abstract:
Paraphrase identification is central to many natural language applications. Based on the insight that a successful paraphrase identification model needs to adequately capture the semantics of the language objects as well as their interactions, we present a deep paraphrase identification model interacting semantics with syntax (DPIM-ISS) for paraphrase identification. DPIM-ISS introduces the linguistic features manifested in syntactic features to produce more explicit structures and encodes the semantic representation of sentences over different syntactic structures by interacting semantics with syntax. DPIM-ISS then learns the paraphrase pattern from this representation by exploiting a convolutional neural network with a convolution-pooling structure. Experiments are conducted on the Microsoft Research Paraphrase corpus (MSRP), the PAN 2010 corpus, and the PAN 2012 corpus for paraphrase plagiarism detection. The experimental results demonstrate that DPIM-ISS outperforms classical word-matching approaches, syntax-similarity approaches, convolutional neural network-based models, and some deep paraphrase identification models.
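For readers who want to place this architecture, the fragment below is a minimal PyTorch sketch of the generic convolution-pooling classifier family the abstract refers to; the interaction operator, dimensions, and module names are illustrative assumptions, not the authors' DPIM-ISS implementation.

    # A minimal sketch of a convolution-pooling paraphrase classifier over
    # pre-computed token embeddings; NOT the authors' DPIM-ISS code.
    import torch
    import torch.nn as nn

    class ParaphraseCNN(nn.Module):
        def __init__(self, emb_dim=128, n_filters=64, kernel=3):
            super().__init__()
            self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=kernel)
            self.pool = nn.AdaptiveMaxPool1d(1)          # pool over positions
            self.clf = nn.Linear(n_filters, 2)           # paraphrase / not

        def forward(self, sent_a, sent_b):               # (batch, seq, emb_dim)
            interaction = sent_a * sent_b                # crude interaction term
            h = torch.relu(self.conv(interaction.transpose(1, 2)))
            return self.clf(self.pool(h).squeeze(-1))

    logits = ParaphraseCNN()(torch.randn(2, 10, 128), torch.randn(2, 10, 128))
    print(logits.shape)                                  # torch.Size([2, 2])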
2

Wu, Xianchao, Takuya Matsuzaki, and Jun’ichi Tsujii. "Improve syntax-based translation using deep syntactic structures." Machine Translation 24, no. 2 (June 2010): 141–57. http://dx.doi.org/10.1007/s10590-010-9081-6.

3

Zhang, Zhining, Liang Wan, Kun Chu, Shusheng Li, Haodong Wei, and Lu Tang. "JACLNet: Application of adaptive code length network in JavaScript malicious code detection." PLOS ONE 17, no. 12 (December 14, 2022): e0277891. http://dx.doi.org/10.1371/journal.pone.0277891.

Abstract:
Currently, JavaScript malicious code detection methods are becoming more and more effective. Still, existing deep learning-based methods are poor at detecting JavaScript code that is too long or too short. To address this, this paper proposes JACLNet, an adaptive code length deep learning network composed of the convolutional block RDCNet, BiLSTM, and a Transformer, to capture association features over variable distances between code elements. First, an abstract syntax tree recombination algorithm is designed to provide rich syntactic information for feature extraction. Second, a deep residual convolution block network (RDCNet) is designed to capture short-distance association features between code elements. Finally, this paper proposes the JACLNet network for JavaScript malicious code detection. To verify that the model presented in this paper can effectively detect JavaScript code of variable length, we divide the datasets used in this paper into a long-text dataset DB_Long, a short-text dataset DB_Short, an original dataset DB_Or, and an enhanced dataset DB_Re. On DB_Long, our method's F1-score is 98.87%, higher than that of JSContana by 2.52%. On DB_Short, our method's F1-score is 97.32%, higher than that of JSContana by 7.79%. To verify that the abstract syntax tree recombination algorithm proposed in this paper can provide rich syntactic information for subsequent models, we conduct comparative experiments on DB_Or and DB_Re. In DPCNN+BiLSTM, the F1-score with abstract syntax tree recombination increased by 1.72%; in JSContana it increased by 1.50%; and in JACLNet it improved by 1.00% over the variant without recombination.
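As a concrete anchor for the AST-based feature extraction this abstract describes, here is a minimal sketch that linearizes an abstract syntax tree into the node-type sequence such detectors consume; Python's ast module stands in for a JavaScript parser, and the function is illustrative, not the paper's recombination algorithm.

    # Sketch: turn an abstract syntax tree into the node-type sequence a
    # length-adaptive network would consume. Python's ast module stands in
    # for a JavaScript parser; this is not the paper's recombination step.
    import ast

    def ast_sequence(source):
        """Return the type name of every AST node, root first."""
        return [type(node).__name__ for node in ast.walk(ast.parse(source))]

    print(ast_sequence("eval(unescape(payload))"))
    # ['Module', 'Expr', 'Call', 'Name', 'Call', ...]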
4

Ding, Jiaman, Weikang Fu, and Lianyin Jia. "Deep Forest and Pruned Syntax Tree-Based Classification Method for Java Code Vulnerability." Mathematics 11, no. 2 (January 15, 2023): 461. http://dx.doi.org/10.3390/math11020461.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The rapid development of J2EE (Java 2 Platform, Enterprise Edition) has brought unprecedented challenges to vulnerability mining. Current abstract syntax tree-based source code vulnerability classification methods do not eliminate irrelevant nodes when processing the abstract syntax tree, resulting in long training times and overfitting. Another problem is that different code structures are translated to the same sequence of tree nodes when abstract syntax trees are processed by depth-first traversal, so the depth-first algorithm loses semantic structure information, which reduces the accuracy of the model. Aiming at these two problems, we propose a deep forest and pruned syntax tree-based classification method (PSTDF) for Java code vulnerability. First, a breadth-first traversal of the abstract syntax tree obtains the sequence of statement trees; next, pruning the statement trees removes irrelevant nodes; then a depth-first-based encoder produces the vectors; and finally, a deep forest classifier yields the classification results. Experiments on publicly accessible vulnerability datasets show that PSTDF reduces the loss of semantic structure information and effectively removes the impact of redundant information.
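A minimal sketch of the two preprocessing steps the abstract names (breadth-first traversal into statement trees, then pruning), again with Python's ast module standing in for a Java AST; the IRRELEVANT set is an assumed placeholder, not the paper's node list.

    # Sketch of the two PSTDF preprocessing steps: breadth-first traversal
    # collecting one subtree per statement, then pruning of node types
    # deemed irrelevant. Python's ast module stands in for a Java AST and
    # the IRRELEVANT set is an assumed placeholder, not the paper's list.
    import ast
    from collections import deque

    IRRELEVANT = {"Load", "Store", "alias"}

    def statement_trees(source):
        queue = deque([ast.parse(source)])
        while queue:                                   # breadth-first order
            node = queue.popleft()
            queue.extend(ast.iter_child_nodes(node))
            if isinstance(node, ast.stmt):             # one tree per statement
                yield [type(n).__name__ for n in ast.walk(node)
                       if type(n).__name__ not in IRRELEVANT]

    for tree in statement_trees("x = read()\nif x: send(x)"):
        print(tree)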
5

Gupta, Vikram, Haoyue Shi, Kevin Gimpel, and Mrinmaya Sachan. "Deep Clustering of Text Representations for Supervision-Free Probing of Syntax." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10720–28. http://dx.doi.org/10.1609/aaai.v36i10.21317.

Abstract:
We explore deep clustering of multilingual text representations for unsupervised model interpretation and induction of syntax. As these representations are high-dimensional, out-of-the-box methods like K-means do not work well. Thus, our approach jointly transforms the representations into a lower-dimensional cluster-friendly space and clusters them. In this work, we consider two notions of syntax: Part-of-Speech Induction (POSI) and Constituency Labelling (CoLab). Interestingly, we find that Multilingual BERT (mBERT) contains a surprising amount of syntactic knowledge of English, possibly even as much as English BERT (E-BERT). Our model can be used as a supervision-free probe, which is arguably a less biased way of probing. We find that unsupervised probes show benefits from higher layers as compared to supervised probes. We further note that our unsupervised probe utilizes E-BERT and mBERT representations differently, especially for POSI. We validate the efficacy of our probe by demonstrating its capabilities as an unsupervised syntax induction technique. Our probe works well for both syntactic formalisms by simply adapting the input representations. We report competitive performance of our probe on 45-tag English POSI, state-of-the-art performance on 12-tag POSI across 10 languages, and competitive results on CoLab. We also perform zero-shot syntax induction on resource-impoverished languages and report strong results.
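The pipeline the abstract criticizes and then improves on can be pictured with scikit-learn: reduce the high-dimensional representations, then cluster. PCA below is only a convenient, non-learned stand-in for the paper's jointly learned cluster-friendly transformation.

    # Simplified stand-in for the pipeline: project high-dimensional token
    # representations to a lower-dimensional space, then cluster. The paper
    # learns the projection jointly with the clusters; PCA here is only a
    # non-learned approximation of that "cluster-friendly" transformation.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    reps = rng.normal(size=(2000, 768))        # e.g. mBERT token vectors

    low_dim = PCA(n_components=32).fit_transform(reps)
    tags = KMeans(n_clusters=45, n_init=10).fit_predict(low_dim)  # 45-tag POSI
    print(tags[:10])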
6

Khasanah, Noor. "Transformational Linguistics and the Implication Towards Second Language Learning." Register Journal 3, no. 1 (July 1, 2016): 23. http://dx.doi.org/10.18326/rgt.v3i1.23-36.

Abstract:
The essence of Chomsky's approach to language is the claim that there are linguistic universals in the domain of syntax. He felt confident in showing that syntax can be defined for any given language. For Chomsky, the nature of such mental representations is largely innate, so if a grammatical theory has explanatory adequacy, it must be able to explain the various grammatical nuances of the languages of the world as relatively minor variations in the universal pattern of human language. In teaching English as an L2, therefore, knowing the syntax and grammar of the language is important, and Transformational Generative Grammar gives an adequate elaboration for understanding them. Thus, learners are expected to be able to avoid ambiguity in interpreting the deep structure of a sentence, since ambiguity will lead listeners or hearers to misinterpret the speaker, whether consciously or unconsciously.
Keywords: Surface Structure; Deep Structure; Constituent; Transformation
7

Liang, Hongliang, Lu Sun, Meilin Wang, and Yuxing Yang. "Deep Learning With Customized Abstract Syntax Tree for Bug Localization." IEEE Access 7 (2019): 116309–20. http://dx.doi.org/10.1109/access.2019.2936948.

8

Chlipala, Adam. "Skipping the binder bureaucracy with mixed embeddings in a semantics course (functional pearl)." Proceedings of the ACM on Programming Languages 5, ICFP (August 22, 2021): 1–28. http://dx.doi.org/10.1145/3473599.

Abstract:
Rigorous reasoning about programs calls for some amount of bureaucracy in managing details like variable binding, but, in guiding students through big ideas in semantics, we might hope to minimize the overhead. We describe our experiment introducing a range of such ideas, using the Coq proof assistant, without any explicit representation of variables, instead using a higher-order syntax encoding that we dub "mixed embedding": it is neither the fully explicit syntax of deep embeddings nor the syntax-free programming of shallow embeddings. Marquee examples include different takes on concurrency reasoning, including in the traditions of model checking (partial-order reduction), program logics (concurrent separation logic), and type checking (session types) -- all presented without any side conditions on variables.
9

Gupta, Rahul, Aditya Kanade, and Shirish Shevade. "Deep Reinforcement Learning for Syntactic Error Repair in Student Programs." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 930–37. http://dx.doi.org/10.1609/aaai.v33i01.3301930.

Abstract:
Novice programmers often struggle with the formal syntax of programming languages. In the traditional classroom setting, they can make progress with the help of real-time feedback from their instructors, which is often impossible to get in the massive open online course (MOOC) setting. Syntactic error repair techniques have huge potential to assist them at scale. Towards this, we design a novel programming language correction framework amenable to reinforcement learning. The framework allows an agent to mimic human actions for text navigation and editing. We demonstrate that the agent can be trained through self-exploration directly from the raw input, that is, the program text itself, without either supervision or any prior knowledge of the formal syntax of the programming language. We evaluate our technique on a publicly available dataset containing 6975 erroneous C programs with typographic errors, written by students during an introductory programming course. Our technique fixes 1699 (24.4%) programs completely and 1310 (18.8%) programs partially, outperforming DeepFix, a state-of-the-art syntactic error repair technique, which uses a fully supervised neural machine translation approach.
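A toy sketch of the kind of text-editing environment the abstract describes: an agent navigates and edits program text and is rewarded when the program parses. Python's compile() stands in for a C front end, and the action set and rewards are illustrative assumptions, not the paper's exact design.

    # Toy text-editing repair environment: the agent moves a cursor and
    # edits characters, and is rewarded when the program parses. Python's
    # compile() stands in for the C front end used in the paper.
    def parses(src):
        try:
            compile(src, "<student>", "exec")
            return True
        except SyntaxError:
            return False

    class RepairEnv:
        def __init__(self, program):
            self.text, self.cursor = list(program), 0

        def step(self, action, char=None):
            if action == "right":
                self.cursor = min(self.cursor + 1, len(self.text))
            elif action == "delete" and self.cursor < len(self.text):
                self.text.pop(self.cursor)
            elif action == "insert" and char is not None:
                self.text.insert(self.cursor, char)
                self.cursor += 1
            program = "".join(self.text)
            done = parses(program)
            return program, (1.0 if done else -0.01), done

    env = RepairEnv("print('hello'")          # student code missing ')'
    for _ in range(len(env.text)):            # navigate to the end
        env.step("right")
    state, reward, done = env.step("insert", ")")
    print(state, reward, done)                # print('hello') 1.0 True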
10

Amini, Afra, Tiago Pimentel, Clara Meister, and Ryan Cotterell. "Naturalistic Causal Probing for Morpho-Syntax." Transactions of the Association for Computational Linguistics 11 (2023): 384–403. http://dx.doi.org/10.1162/tacl_a_00554.

Abstract:
Probing has become a go-to methodology for interpreting and analyzing deep neural models in natural language processing. However, there is still a lack of understanding of the limitations and weaknesses of various types of probes. In this work, we suggest a strategy for input-level intervention on naturalistic sentences. Using our approach, we intervene on the morpho-syntactic features of a sentence, while keeping the rest of the sentence unchanged. Such an intervention allows us to causally probe pre-trained models. We apply our naturalistic causal probing framework to analyze the effects of grammatical gender and number on contextualized representations extracted from three pre-trained models in Spanish, the multilingual versions of BERT, RoBERTa, and GPT-2. Our experiments suggest that naturalistic interventions lead to stable estimates of the causal effects of various linguistic properties. Moreover, our experiments demonstrate the importance of naturalistic causal probing when analyzing pre-trained models. https://github.com/rycolab/naturalistic-causal-probing
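The core idea of an input-level intervention can be shown in a few lines: flip one morpho-syntactic feature while leaving the rest of the sentence untouched. The lookup table below is a toy stand-in for the paper's morphological machinery.

    # Toy input-level intervention: flip grammatical gender at chosen token
    # positions while leaving the rest of the sentence unchanged. The SWAP
    # table is a stand-in for the paper's morphological machinery.
    SWAP = {"el": "la", "la": "el", "niño": "niña", "niña": "niño"}

    def intervene(words, positions):
        return [SWAP.get(w, w) if i in positions else w
                for i, w in enumerate(words)]

    words = "el niño come la manzana".split()
    print(" ".join(intervene(words, {0, 1})))   # la niña come la manzana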

Dissertations on the topic "Deep syntax":

1

Tse, Daniel Gar-shon. "Chinese CCGbank: Deep derivations and dependencies for Chinese CCG parsing." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9439.

Abstract:
The fundamental goal of this dissertation is to establish that deep, efficient, accurate parsing models can be acquired for Chinese, through parsers founded on Combinatory Categorial Grammar (CCG), a grammar formalism which has already enabled the creation of rich parsing models for English. We harness these CCG analyses of cross-linguistic syntax, harmonising them with modern accounts from Chinese generative syntax, contributing the first analysis of Chinese syntax through CCG in the literature. Supervised statistical parsing approaches rely on the availability of large annotated corpora. To avoid the cost of manual annotation, we adopt the corpus conversion methodology, in which an automatic corpus conversion algorithm projects annotations from a source corpus into the target formalism. The central contribution of this thesis is Chinese CCGbank, a corpus of 750,000 words automatically extracted from the Penn Chinese Treebank, reifying the abstract analysis through corpus conversion. We then take three state-of-the-art CCG parsers from the literature — the split-merge PCFG parser of Petrov and Klein, the transition-based CCG parser of Zhang et al., and the maximum entropy parser of Clark and Curran — and train and evaluate all three on Chinese CCGbank, achieving the first Chinese CCG parsing models in the literature. We demonstrate that while the three parsers are only separated by a small margin trained on English CCGbank, a substantial gulf of 4.8% separates the same parsers trained on Chinese CCGbank. We also confirm that the gap between the states-of-the-art in English and Chinese PSG parsing can be observed in CCG parsing. Our parsing experiments establish Chinese CCG parsing as a new and substantial challenge, a line of empirical investigation directly enabled by Chinese CCGbank.
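For readers new to the formalism, the toy functions below illustrate the two CCG application combinators (forward: X/Y Y => X; backward: Y X\Y => X) that such parsers build on; the naive string matching is illustrative only and is not code from the thesis.

    # Toy versions of CCG's two application combinators over slash
    # categories; naive string matching, not code from the thesis.
    def forward_apply(left, right):            # X/Y  Y  =>  X
        if left.endswith("/" + right):
            return left[: -len("/" + right)]
        return None

    def backward_apply(left, right):           # Y  X\Y  =>  X
        if right.endswith("\\" + left):
            return right[: -len("\\" + left)]
        return None

    print(backward_apply("NP", "S\\NP"))       # "John sleeps"  ->  S
    print(forward_apply("(S\\NP)/NP", "NP"))   # "eats apples"  ->  (S\NP)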
2

Lim, Steven. "Recommending TEE-based Functions Using a Deep Learning Model." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/104999.

Abstract:
Trusted execution environments (TEEs) are an emerging technology that provides a protected hardware environment for processing and storing sensitive information. By using TEEs, developers can bolster the security of software systems. However, incorporating TEE into existing software systems can be a costly and labor-intensive endeavor. Software maintenance—changing software after its initial release—is known to contribute the majority of the cost in the software development lifecycle. The first step of making use of a TEE requires that developers accurately identify which pieces of code would benefit from being protected in a TEE. For large code bases, this identification process can be quite tedious and time-consuming. To help reduce the software maintenance costs associated with introducing a TEE into existing software, this thesis introduces ML-TEE, a recommendation tool that uses a deep learning model to classify whether an input function handles sensitive information or sensitive code. By applying ML-TEE, developers can reduce the burden of manual code inspection and analysis. ML-TEE's model was trained and tested on functions from GitHub repositories that use Intel SGX and on an imbalanced dataset. The final model used in the recommendation system has an accuracy of 98.86% and an F1 score of 80.00%. In addition, we conducted a pilot study, in which participants were asked to identify functions that needed to be placed inside a TEE in a third-party project. The study found that on average, participants who had access to the recommendation system's output had a 4% higher accuracy and completed the task 21% faster.
Improving the security of software systems has become critically important. A trusted execution environment (TEE) is an emerging technology that can help secure software that uses or stores confidential information. To make use of this technology, developers need to identify which pieces of code handle confidential information and should thus be placed in a TEE. However, this process is costly and laborious because it requires the developers to understand the code well enough to make the appropriate changes in order to incorporate a TEE. This process can become challenging for large software that contains millions of lines of code. To help reduce the cost incurred in the process of identifying which pieces of code should be placed within a TEE, this thesis presents ML-TEE, a recommendation system that uses a deep learning model to help reduce the number of lines of code a developer needs to inspect. Our results show that the recommendation system achieves high accuracy as well as a good balance between precision and recall. In addition, we conducted a pilot study and found that participants from the intervention group who used the output from the recommendation system managed to achieve a higher average accuracy and perform the assigned task faster than the participants in the control group.
3

Michell, Theodore William Henry. "The psychasthenia of deep space : evaluating the 'reassertion of space in critical social theory'." Thesis, University College London (University of London), 2002. http://discovery.ucl.ac.uk/4325/.

Abstract:
The aim of this work is to question the notion of space that underlies the claimed ‘spatial turn’ in geographical and social theory. Section 1 examines this theoretical literature, drawing heavily on Soja as the self declared taxonomist of the genre, and also seeks parallels with more populist texts on cities and space, to suggest, following Williams, that there is a new ‘structure of feeling’ towards space. Section 1 introduces two foundational concepts. The first, derived from Soja’s misunderstanding of Borges’ story The Aleph, argues for an ‘alephic vision’, an imposition of a de-materialized and revelatory understanding of space. This is related to the second, an ‘ecstatic vision’, which describes the tendency, illustrated through the work of Koolhaas and recent exhibitions on the experience of cities, to treat spatial and material experience in hyperbolic and hallucinatory terms. Section 2 offers a series of theoretical reconstructions which seek to draw out parallels between the work of key theorists of what I term the ‘respatialization’ literature (Harvey, Giddens, Foucault and Lefebvre) and the work of Hillier et al in the Space Syntax school. A series of empirical studies demonstrate that the approach to the material realm offered by Space Syntax is not only theoretically compatible but can also help to explain ‘real world’ phenomena. However, the elision with wider theoretical positions points to the need for a reworking of elements of Space Syntax, and steps towards this goal are offered in section 3. In the final ‘speculative epilogue’ I reopen the philosophical debates about the nature of space, deliberately suppressed from the beginning, and suggest that perhaps the apparent theoretical and empirical versatility of Space Syntax, based upon a configurational approach to space as a complex relational system, may offer an alternative approach to these enduring metaphysical debates.
4

Senko, Jozef. "Hluboký syntaxí řízený překlad." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234933.

Abstract:
This thesis is a continuation of my bachelor's thesis, which was dedicated to syntax analysis based on deep pushdown automata. The theoretical part of this thesis defines everything fundamental for this work, for example deep syntax-directed translation, pushdown automata, deep pushdown automata, finite transducers, and deep pushdown transducers. The second part of this thesis is dedicated to an educational program for students of IFJ. This part describes the structure of the program and its components, all of which are analyzed from a theoretical and a practical point of view.
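The distinguishing move of the deep pushdown automata these theses build on is that an expansion may rewrite the m-th topmost nonterminal, not only the top symbol. A minimal sketch, assuming uppercase strings are nonterminals and index 0 is the top of the pushdown:

    # Sketch of the defining operation of a deep pushdown automaton:
    # expand the m-th topmost nonterminal (m = 1 gives the ordinary
    # pushdown case). Uppercase symbols are nonterminals; index 0 is
    # the top of the pushdown. Grammar and symbols are illustrative.
    def expand(pushdown, m, nonterminal, replacement):
        seen = 0
        for i, symbol in enumerate(pushdown):
            if symbol.isupper():
                seen += 1
                if seen == m:
                    if symbol != nonterminal:
                        raise ValueError("rule does not match")
                    return pushdown[:i] + list(replacement) + pushdown[i + 1:]
        raise ValueError("fewer than m nonterminals on the pushdown")

    stack = ["A", "b", "B", "c"]                    # top ... bottom
    print(expand(stack, 2, "B", ["b", "B", "c"]))   # ['A', 'b', 'b', 'B', 'c', 'c']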
5

Ribeyre, Corentin. "Méthodes d’analyse supervisée pour l’interface syntaxe-sémantique : de la réécriture de graphes à l’analyse par transitions." Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC119.

Abstract:
Nowadays, the amount of textual data has become so gigantic that it is not possible to deal with it manually. In fact, it is now necessary to use Natural Language Processing techniques to extract useful information from these data and understand their underlying meaning. In this thesis, we offer resources, models and methods to allow: (i) the automatic annotation of deep syntactic corpora to extract the argument structure that links (verbal) predicates to their arguments; (ii) the use of these resources with the help of efficient methods. First, we develop a graph rewriting system and a set of manually designed rewriting rules to automatically annotate deep syntax in French. Thanks to this approach, two corpora were created: the DeepSequoia, a deep syntactic version of the Séquoia corpus, and the DeepFTB, a deep syntactic version of the dependency version of the French Treebank. Next, we extend two transition-based parsers and adapt them to deal with graph structures. We also develop a set of rich linguistic features extracted from various syntactic trees, which we believe bring different kinds of topological information needed to accurately predict predicate-argument structures. Used in an arc-factored second-order parsing model, this set of features gives the first state-of-the-art results on French and outperforms those established on the DM and PAS corpora for English. Finally, we briefly explore a method to automatically induce the transformation between a tree and a graph. This completes our set of coherent resources and models to automatically analyze the syntax-semantics interface on French and English.
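To make the graph-rewriting step concrete, here is a deliberately tiny rule that relabels the surface subject of a passive verb as its deep object; the edge labels are illustrative and not the actual DeepSequoia annotation scheme.

    # Tiny dependency-graph rewriting rule: relabel the surface subject of
    # a passive verb as its deep object. Edge labels are illustrative, not
    # the actual DeepSequoia annotation scheme.
    def rewrite_passive(edges):
        """edges: set of (head, label, dependent) triples."""
        passive = {h for (h, lab, d) in edges if lab == "aux.pass"}
        deep = set()
        for head, label, dep in edges:
            if label == "suj" and head in passive:
                deep.add((head, "obj", dep))     # subject -> deep object
            elif label != "aux.pass":            # auxiliary dissolves
                deep.add((head, label, dep))
        return deep

    surface = {("eaten", "suj", "apple"), ("eaten", "aux.pass", "was")}
    print(rewrite_passive(surface))              # {('eaten', 'obj', 'apple')}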
6

Mille, Simon. "Deep stochastic sentence generation : resources and strategies." Doctoral thesis, Universitat Pompeu Fabra, 2014. http://hdl.handle.net/10803/283136.

Abstract:
The present Ph.D. thesis addresses the problem of deep data-driven Natural Language Generation (NLG), and in particular the role of proper corpus annotation schemata for stochastic sentence realization. The lack of multilevel corpus annotation has so far prevented the development of proper statistical NLG systems starting from abstract structures. We first detail a methodology for annotating corpora at different levels of linguistic abstraction (namely, the semantic, deep-syntactic, surface-syntactic, topological, and morphological levels), and report on the actual annotation of such corpora, performed manually for Spanish and automatically for English. Then, using the resulting annotated data for our experiments, we train and evaluate deep stochastic NLG tools which go beyond the current state of the art, in particular thanks to the absence of rules in non-isomorphic transductions. Finally, we show that such data can also serve other purposes well, such as statistical surface and deep dependency parsing.
7

Colin, Émilie. "Traitement automatique des langues et génération automatique d'exercices de grammaire." Thesis, Université de Lorraine, 2020. http://www.theses.fr/2020LORR0059.

Abstract:
The underlying theme of this thesis is neural paraphrase generation, with an educational aim: creating grammar exercises for French. Paraphrasing is a reformulation operation. Our work tends to attest that sequence-to-sequence models are not simple repeaters but can learn syntax. First, by combining various models, we showed that representing information in multiple forms (formal data (RDF) coupled with text to extend or reduce it, or text alone) makes it possible to exploit a corpus from different angles, increasing the diversity of the outputs and exploiting the syntactic levers put in place. We also addressed a recurrent problem, that of data quality, and obtained paraphrases with high syntactic adequacy (up to 98% coverage of the demand) and a very good linguistic level. We obtain up to 83.97 BLEU-4 points*, 78.41 above our baseline average, without a syntax lever; this rate indicates better control of the outputs, which remain varied and of good quality even in the absence of a lever. We then worked from raw text, producing a representation of its meaning that can serve as input for paraphrase generation; moving to French text was also an imperative for us. Working from plain text, with automated procedures, allowed us to create a corpus of more than 450,000 representation/sentence pairs, with which we learned to generate massively correct texts (92% on qualitative validation). Anonymizing everything that is not functional contributed notably to the quality of the results (68.31 BLEU, i.e. +3.96 over the baseline, which generated from non-anonymized data). This second piece of work lends itself to the integration of a syntax lever guiding the outputs: what was our baseline at first (generation without constraint) would be combined with a constrained model, and an error search would allow the constitution of a silver base associating representations with texts, which could then be multiplied by reapplying constrained generation, thus achieving the applied objective of the thesis. The formal representation of information in a framework specific to one language is a challenging task, and this thesis offers some methodological leads for automating this operation. Moreover, we were only able to process relatively short sentences; the use of more recent neural models would likely improve the results, and the use of appropriate output features would allow extensive checks. *BLEU (Papineni et al., 2002): quality of a text on a scale from 0 (worst) to 100 (best)
8

Solár, Peter. "Syntaxí řízený překlad založený na hlubokých zásobníkových automatech." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236779.

Abstract:
This thesis introduces syntax-directed translation based on deep pushdown automata. The necessary theoretical models are presented in the theoretical part. The most important model introduced in this thesis is the deep pushdown transducer, which is intended for use in syntax analysis, a significant part of translation. The practical part consists of an implementation of a simple-language interpreter based on these models.
9

Genčúrová, Ľubica. "Nové verze zásobníkových automatů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403109.

Abstract:
This thesis investigates multi-pushdown automata and introduces new modifications of them based on deep pushdowns. The first modification is the input-driven multi deep pushdown automaton, which has several deep pushdown lists; the current input symbol determines whether the automaton performs a push operation, a pop operation, an expansion operation, or does not touch the stack. The second modification is the pushdown automaton regulated by a deep pushdown: in addition to ordinary pushdowns, this version contains a deep pushdown, which is used to generate the control language. The thesis proves that the accepting power of the described variants is equal to the accepting power of Turing machines. It also discusses the program realisation of the theoretical models described in the theoretical part and introduces a syntax-analysis library based on them.
10

Seraku, Tohru. "Clefts, relatives, and language dynamics : the case of Japanese." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:0448acc3-dee6-4b1b-9020-95fd84895f24.

Abstract:
The goal of this thesis is to develop a grammar model of Japanese within the framework of Dynamic Syntax (Cann et al. 2005, Kempson et al. 2001), with special reference to constructions that involve the nominaliser no: clefts and certain kinds of relatives. The more general theoretical position which it aims to defend is that an account of these constructions in terms of ‘language dynamics’ is preferable to other ‘static’ approaches currently available. What is here meant by ‘language dynamics,’ in a nutshell, is the time-linear processing of a string and attendant growth of an interpretation. First, I shall motivate, and articulate, an integrated account of the two types of no-nominalisation. These two classes are uniformly modelled as an outcome of incremental semantic-tree growth. The analysis is corroborated by naturally-occurring data extracted from the Corpus of Spontaneous Japanese (CSJ). Moreover, novel data with regard to coordination are accounted for without losing uniformity. Second, the composite entry of no and the topic marker wa handles the two types of clefts uniformly. This account fits well with the CSJ findings. New data concerning case-marking of foci are explained in terms of whether an unfixed relation in a semantic tree is resolvable in incremental processing. The account also solves the island-puzzle without abandoning uniformity. As a further confirmation, the analysis is extendable to stripping/sluicing, making some novel predictions on case-marking patterns. Third, the entry of no characterises free relatives and change relatives in a unitary manner. Furthermore, the composite entry of no and a case particle predicts a vast range of properties of head-internal relatives, including new data (e.g., negation in the relative clause, locality restriction on the Relevancy Condition). In sum, the thesis presents a realistic, integrated, and empirically preferable model of Japanese. Some consequences stand out. The various new data reported are beneficial theory-neutrally. Formal aspects of Dynamic Syntax are advanced. The insights brought by a language dynamics account challenge the standard, static conception of grammar.

Books on the topic "Deep syntax":

1

Rauh, Gisa. Tiefenkasus, thematische Relationen und Thetarollen: Die Entwicklung einer Theorie von semantischen Relationen. Tübingen: G. Narr, 1988.

2

Huck, Geoffrey J. Ideology and linguistic theory: Noam Chomsky and the deep structure debates. London: Routledge, 1995.

3

Schubert, K. Metataxis: Contrastive Dependency Syntax for Machine Translation. Walter de Gruyter GmbH, 1987.

4

Taylor, Ralph B. How Do We Get to Causal Clarity on Physical Environment-Crime Dynamics? Edited by Gerben J. N. Bruinsma and Shane D. Johnson. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780190279707.013.2.

Abstract:
This chapter discusses research and theorizing about the crime impacts of the physical environment, relating it to past reviews of scholarship in this area, and highlighting the crucial question of causality. It introduces key stumbling blocks in community criminology that must be addressed before scholarship can advance on the crucial causality question. Environmental criminology in a deep sense represents a field within a broader field of community criminology. The chapter underscores just a few of the most important recent works in four select areas within the physical environment-crime scholarship: space syntax, facilities and land use, accessibility/permeability, and crime prevention through environmental design/defensible space. The final section sketches one possible avenue for future research which can address these concerns.
5

Waters, Keith. Postbop Jazz in the 1960s. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190604578.001.0001.

Abstract:
Innovations in postbop jazz compositions of the 1960s occurred in several dimensions, including harmony, form, and melody. Postbop jazz composers such as Wayne Shorter, Herbie Hancock, Chick Corea, along with others (Booker Little, Joe Henderson, Woody Shaw) broke with earlier tonal jazz traditions. Their compositions marked a departure from the techniques of jazz standards and original compositions that defined small-group repertory through the 1950s: single-key orientation, schematic 32-bar frameworks (in AABA or ABAC forms), and tonal harmonic progressions. The book develops analytical pathways through a number of compositions, including “El Gaucho,” “Penelope,” “Pinocchio,” “Face of the Deep” (Shorter); “King Cobra,” “Dolphin Dance,” “Jessica” (Hancock); “Windows,” “Inner Space,” “Song of the Wind” (Corea); as well as “We Speak” (Little); “Punjab” (Henderson); and “Beyond All Limits” (Shaw). These case studies offer ways to understand the works’ harmonic syntax, melodic and formal designs, and general principles of harmonic substitution. By locating points of contact among these postbop techniques—and by describing their evolution from previous tonal jazz practices—the book illustrates the syntactic changes that emerged during the 1960s.
6

McNaughton, James. Samuel Beckett and the Politics of Aftermath. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198822547.001.0001.

Abstract:
Samuel Beckett and the Politics of Aftermath explores Beckett’s creative response to the Irish Civil War and the crisis of commitment in 1930s Europe, to the rise of fascism and the atrocities of World War II. Grounded in archival material, the book reads Beckett’s letters and German Diaries to demonstrate Beckett’s personal attunement to propaganda and expectations for war. We see how profoundly Beckett’s fiction and theater engage with specific political strategies, rhetoric, and events. Deep into literary form, syntax, and language, Beckett contends with ominous political and historical developments taking place around him. More, he satirizes aesthetic and philosophical interpretations that overlook them. From critiques of the Irish Free State’s inability to examine its foundational violence to specific analysis of the functioning of Nazi propaganda, from exploring how language functions in conditions of authoritarian power to challenging postwar Europe’s conveniently limited definitions of genocide, Beckett’s writing challenges many political pieties with precision and force. He burdens all aesthetic production with guilt for how imagination and narrative form help to effect atrocity as well as cover it up. This book develops new readings of Beckett’s early and middle work up to Three Novels and Endgame.

Book chapters on the topic "Deep syntax":

1

Correia, José, Jorge Baptista, and Nuno Mamede. "Syntax Deep Explorer." In Lecture Notes in Computer Science, 189–201. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41552-9_19.

2

Coüasnon, Bertrand, Ashok Popat, and Richard Zanibbi. "Discussion Group Summary: Graphics Syntax in the Deep Learning Age." In Lecture Notes in Computer Science, 158–62. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-02284-6_13.

3

"Deep and Surface Structure." In An Introduction to Transformational Syntax, 22–32. Routledge, 2016. http://dx.doi.org/10.4324/9781315461496-9.

4

"Dependency Syntax: Surface Structure and Deep Structure." In Application of Graph Rewriting to Natural Language Processing, 35–70. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2018. http://dx.doi.org/10.1002/9781119428589.ch2.

5

"On some deep structural analogies between syntax and phonology." In Morpheme-internal Recursion in Phonology, 57–116. De Gruyter Mouton, 2020. http://dx.doi.org/10.1515/9781501512582-004.

6

Harris, Randy Allen. "The Beauty of Deep Structure." In The Linguistics Wars, 15–64. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780199740338.003.0002.

Abstract:
This chapter charts the rise of Noam Chomsky’s Transformational-Generative Grammar, from its cornerstone role in the cognitive revolution up to its widely heralded realization in Aspects of the Theory of Syntax. That realization featured the development of an evocative concept, Deep Structure, a brilliant nexus of meaning and structure that integrates seamlessly with Chomsky’s companion idea, Universal Grammar, the notion that all languages share a critical, genetically encoded core. At a technical level, Deep Structure concentrated meaning because of the Katz-Postal Principle, stipulating that transformations cannot change meaning. Transformations rearrange structure while keeping meaning stable. The appeal of Deep Structure and Universal Grammar helped Transformational Grammar propagate rapidly into language classrooms, literary studies, stylistics, and computer science, gave massive impetus to the emergence of psycholinguistics, attracted substantial military and educational funding, and featured prominently in Chomsky’s meteoric intellectual stardom.
7

Hellmuth, Sam. "Functional complementarity is only skin‐deep: Evidence from Egyptian Arabic for the autonomy of syntax and phonology in the expression of focus." In The Sound Patterns of Syntax, 247–70. Oxford University Press, 2010. http://dx.doi.org/10.1093/acprof:oso/9780199556861.003.0012.

8

Zheng, Robert Z. "Influence of Multimedia and Cognitive Strategies in Deep and Surface Verbal Processing." In Examining Multiple Intelligences and Digital Technologies for Enhanced Learning Opportunities, 162–83. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0249-5.ch009.

Abstract:
The traditional view of linguistic-verbal intelligences focuses on individual linguistic abilities at the levels of phonology, syntax, and semantics. This chapter discusses the individual linguistic abilities from a text-comprehension perspective. The chapter examines the roles of multimedia and cognitive prompts in deep and surface verbal processing. Drawn from research in working memory, multimedia learning, and deep processing, a theoretical framework is proposed to promote learners' deep and surface learning in reading. Evidence from empirical studies are reviewed to support the underlying theoretical assumptions of the framework. The theoretical and practical significance of the theoretical framework is discussed with suggestions for future research.
9

Zheng, Robert Z. "Influence of Multimedia and Cognitive Strategies in Deep and Surface Verbal Processing." In Research Anthology on Applied Linguistics and Language Practices, 341–61. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-5682-8.ch012.

Abstract:
The traditional view of linguistic-verbal intelligences focuses on individual linguistic abilities at the levels of phonology, syntax, and semantics. This chapter discusses the individual linguistic abilities from a text-comprehension perspective. The chapter examines the roles of multimedia and cognitive prompts in deep and surface verbal processing. Drawn from research in working memory, multimedia learning, and deep processing, a theoretical framework is proposed to promote learners' deep and surface learning in reading. Evidence from empirical studies are reviewed to support the underlying theoretical assumptions of the framework. The theoretical and practical significance of the theoretical framework is discussed with suggestions for future research.
10

Rey, Georges. "The Basics of Generative Grammars." In Representation of Language, 45–92. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198855637.003.0002.

Abstract:
This chapter offers explanations of some basic technical terms, and a sketch of the historical developments and continuity of Chomskyan theories: the early formal presentations; the 1965 Aspects model; issues about generative semantics, “Autonomy of Syntax” and what I call “teleotyranny”; the Principles and Parameters model; the Minimalist Program; and Chomsky’s “Third Factor” neural and evolutionary speculations. All of these developments should be regarded as they were always intended, not as finished theories, but as the development of increasingly deep and rich strategies for explaining the crucial data. The chapter concludes with two relatively simple, representative explanations: the constraints on negative polarity items (NPIs) and on binding.

Conference papers on the topic "Deep syntax":

1

Blevins, Terra, Omer Levy, and Luke Zettlemoyer. "Deep RNNs Encode Soft Hierarchical Syntax." In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/p18-2003.

2

Novák, Michal, Anna Nedoluzhko, and Zdeněk Žabokrtský. "Projection-based Coreference Resolution Using Deep Syntax." In Proceedings of the 2nd Workshop on Coreference Resolution Beyond OntoNotes (CORBON 2017). Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/w17-1508.

3

Novák, Václav. "On distance between deep syntax and semantic representation." In the Workshop. Morristown, NJ, USA: Association for Computational Linguistics, 2006. http://dx.doi.org/10.3115/1641991.1642001.

4

Fei, Hao, Yafeng Ren, and Donghong Ji. "Improving Text Understanding via Deep Syntax-Semantics Communication." In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.8.

5

Novák, Michal, Dieke Oele, and Gertjan van Noord. "Comparison of Coreference Resolvers for Deep Syntax Translation." In Proceedings of the Second Workshop on Discourse in Machine Translation. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.18653/v1/w15-2502.

6

Strubell, Emma, and Andrew McCallum. "Syntax Helps ELMo Understand Semantics: Is Syntax Still Relevant in a Deep Neural Architecture for SRL?" In Proceedings of the Workshop on the Relevance of Linguistic Structure in Neural Architectures for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/w18-2904.

7

Shin, MyungJae, Joongheon Kim, Aziz Mohaisen, Jaebok Park, and Kyung Hee Lee. "Neural Network Syntax Analyzer for Embedded Standardized Deep Learning." In MobiSys '18: The 16th Annual International Conference on Mobile Systems, Applications, and Services. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3212725.3212727.

8

Feng, Hantao, Xiaotong Fu, Hongyu Sun, He Wang, and Yuqing Zhang. "Efficient Vulnerability Detection based on abstract syntax tree and Deep Learning." In IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). IEEE, 2020. http://dx.doi.org/10.1109/infocomwkshps50562.2020.9163061.

9

Rodriguez, Lino. "Deep Genetic Programming." In LatinX in AI at International Conference on Machine Learning 2019. Journal of LatinX in AI Research, 2019. http://dx.doi.org/10.52591/lxai2019061512.

Abstract:
We propose to develop a Deep Learning (DL) framework based on the paradigm of Genetic Programming (GP). The hypothesis is that GP's non-parametric and non-differentiable learning units (abstract syntax trees) have the same learning and representation capacity as Artificial Neural Networks (ANN). In analogy to the traditional ANN/gradient descent/backpropagation DL approach, the proposed framework aims at building a DL-like model fully based on GP. Preliminary results from a number of application domains suggest that GP is able to deal with large amounts of training data, such as those required in DL tasks. However, extensive research is still required regarding the construction of a multi-layered learning architecture, another hallmark of DL.
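A minimal sketch of the learning unit the abstract refers to: a GP individual is an expression tree (an abstract syntax tree) that can be evaluated and randomly mutated; the operator set and mutation rule are illustrative.

    # Minimal GP learning unit: an expression tree (abstract syntax tree)
    # that can be evaluated and point-mutated. Operator set and mutation
    # rule are illustrative.
    import operator, random

    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

    def evaluate(tree, x):
        if isinstance(tree, tuple):              # ("op", left, right)
            op, left, right = tree
            return OPS[op](evaluate(left, x), evaluate(right, x))
        return x if tree == "x" else tree        # leaf: variable or constant

    def mutate(tree):
        if isinstance(tree, tuple) and random.random() < 0.5:
            return (random.choice(list(OPS)), mutate(tree[1]), mutate(tree[2]))
        return tree

    individual = ("+", ("*", "x", "x"), 1)       # x*x + 1
    print(evaluate(individual, 3))               # 10
    print(evaluate(mutate(individual), 3))       # depends on the mutation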
10

Wu, Bowen, Haoyang Huang, Zongsheng Wang, Qihang Feng, Jingsong Yu, and Baoxun Wang. "Improving the Robustness of Deep Reading Comprehension Models by Leveraging Syntax Prior." In Proceedings of the 2nd Workshop on Machine Reading for Question Answering. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-5807.

