
Dissertations / Theses on the topic 'Embedding types'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 17 dissertations / theses for your research on the topic 'Embedding types.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Genestier, Guillaume. "Dependently-Typed Termination and Embedding of Extensional Universe-Polymorphic Type Theory using Rewriting." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG045.

Full text
Abstract:
Dedukti is a logical framework in which the user encodes the theory she wants to use via rewriting rules. To ensure the decidability of typing, the rewriting system must be terminating. After recalling some properties of pure type systems and their extension with rewriting, a termination criterion for higher-order rewriting with dependent types is presented. It is an extension of dependency pairs to the lambda-pi-calculus modulo rewriting. This result features two main theorems. The first one states that the well-foundedness of the call relation defined from the dependency pairs implies the strong normalization of the rewriting system. The second result of this part describes decidable sufficient conditions under which the first theorem can be applied. This decidable version of the termination criterion is implemented in a tool called "SizeChange Tool". The second part of this thesis is dedicated to the use of the logical framework Dedukti to encode a rich type theory. We are particularly interested in the translation of a fragment of Agda that includes two widely used features: the extension of conversion with the eta rule and universe polymorphism. Once again, this work has a theoretical side, with encodings of both features proved correct in the lambda-pi-calculus modulo rewriting, as well as a prototype translator from Agda to Dedukti.
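The size-change principle underlying tools such as SizeChange Tool can be sketched concretely. The following is a minimal illustration of the classic first-order Lee-Jones-Ben-Amram criterion (well-foundedness of a call relation built from size-change graphs), not of the thesis's dependently-typed extension:

```python
from itertools import product

# A size-change graph relates argument positions of the caller to positions
# of the callee; each arc is (src, dst, strict), strict=True meaning the
# argument strictly decreases along the call.

def compose(g, h):
    """Compose two size-change graphs along a call path."""
    arcs = {}
    for (i, j, s1), (j2, k, s2) in product(g, h):
        if j == j2:
            # a composed arc is strict if any path realizing it has a strict edge
            arcs[(i, k)] = arcs.get((i, k), False) or s1 or s2
    return frozenset((i, k, s) for (i, k), s in arcs.items())

def terminates(graphs):
    """SCT criterion: every idempotent graph in the composition closure
    must contain a strictly decreasing self-arc (i, i, True)."""
    closure = set(graphs)
    while True:
        new = {compose(g, h) for g, h in product(closure, closure)} - closure
        if not new:
            break
        closure |= new
    return all(any(i == k and s for i, k, s in g)
               for g in closure if compose(g, g) == g)

# Ackermann-style recursion a(m, n): calls a(m-1, *) and a(m, n-1) -> terminates
ack = [frozenset({(0, 0, True)}),
       frozenset({(0, 0, False), (1, 1, True)})]
# f(x) calling f(x) with no decrease -> not size-change terminating
loop = [frozenset({(0, 0, False)})]
print(terminates(ack), terminates(loop))  # True False
```

The closure computation here composes all pairs of graphs, which is adequate for a single recursive function; a full tool would restrict compositions to graphs along matching call paths.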
APA, Harvard, Vancouver, ISO, and other styles
2

Skodlerack, Daniel. "Embedding types and canonical affine maps between Bruhat-Tits buildings of classical groups." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2010. http://dx.doi.org/10.18452/16239.

Full text
Abstract:
P. Broussous and S. Stevens studied maps between enlarged Bruhat-Tits buildings in order to construct types for p-adic unitary groups. They needed maps which respect the Moy-Prasad filtrations, a property called (CLF), i.e. compatibility with the Lie algebra filtrations. In the first part of this thesis we generalise their results on such maps to the quaternion-algebra case. Let k0 be a p-adic field of residue characteristic not two. We consider a semisimple k0-rational Lie algebra element beta of a unitary group G := U(h) defined over k0 with a signed hermitian form h. Let H be the centraliser of beta in G. We prove the existence of an affine H(k0)-equivariant CLF-map j from the enlarged Bruhat-Tits building B^1(H,k0) to B^1(G,k0). As conjectured by Broussous, the CLF-property determines j uniquely if no factor of H is k0-isomorphic to the isotropic orthogonal group of k0-rank one and all factors are unitary groups. Under the weaker assumption that the affine CLF-map j is only equivariant under the center of H^0(k0), it is uniquely determined up to a translation of B^1(H,k0). The second part is devoted to decoding embedding types via the geometry of a CLF-map. Embedding types have been studied by Broussous and M. Grabitz. We consider a division algebra D of finite index with a p-adic center F. The construction of simple types for GLn(D) in the Bushnell-Kutzko framework required an investigation of strata which had to fulfil a rigidity property. Specifying a stratum means, in particular, fixing a pair (E,a) consisting of a field extension E|F in Mn(D) and a hereditary order a which is stable under conjugation by E^x; in other words, we fix an embedding of E^x into the normalizer of a. Broussous and Grabitz classified these pairs by invariants. We describe and prove a way to decode these invariants using the geometry of a CLF-map.
APA, Harvard, Vancouver, ISO, and other styles
3

Skodlerack, Daniel [Verfasser], Ernst-Wilhelm [Akademischer Betreuer] Zink, Paul [Akademischer Betreuer] Broussous, and Bertrand [Akademischer Betreuer] Lemaire. "Embedding types and canonical affine maps between Bruhat-Tits buildings of classical groups / Daniel Skodlerack. Gutachter: Ernst-Wilhelm Zink ; Paul Broussous ; Bertrand Lemaire." Berlin : Humboldt Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2010. http://d-nb.info/1014974771/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ngwobia, Sunday C. "Capturing Knowledge of Emerging Entities from the Extended Search Snippets." University of Dayton / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=dayton157309507473671.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Karlsson, Mikael. "Identifying New Fault Types Using Transformer Embeddings." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-303009.

Full text
Abstract:
Continuous integration/delivery and deployment pipelines consist of many automated tests, some of which may fail, leading to faulty software. Similar faults may occur in different stages of the software production lifecycle, and it is necessary to identify similar faults and cluster them into fault types in order to minimize troubleshooting time. Pretrained transformer-based language models have been proven to achieve state-of-the-art results in many natural language processing tasks, such as measuring semantic textual similarity. This thesis aims to investigate whether it is possible to cluster and identify new fault types by using a transformer-based model to create context-aware vector representations of fault records, which consist of numerical data and logs with domain-specific technical terms. The clusters created were compared against those created by an existing system, where log files are grouped by manually specified filters. Relying on already existing fault types with associated log data, this thesis shows that it is possible to finetune a transformer-based model for a classification task in order to improve the quality of text embeddings. The embeddings are clustered using density-based and hierarchical clustering algorithms with cosine distance. The results show that it is possible to cluster log data and obtain results comparable to the existing manual system, where cluster similarity was assessed with V-measure and Adjusted Rand Index.
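The evaluation pipeline described in the abstract can be sketched roughly as follows, with toy random vectors standing in for the finetuned transformer embeddings (which are not reproduced here):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import v_measure_score, adjusted_rand_score

# Toy stand-ins for transformer embeddings of fault records: two well-
# separated groups of vectors. Real embeddings would come from a finetuned
# sentence encoder.
rng = np.random.default_rng(0)
embeddings = np.vstack([rng.normal(1.0, 0.05, size=(20, 16)),
                        rng.normal(-1.0, 0.05, size=(20, 16))])
manual_types = np.array([0] * 20 + [1] * 20)  # grouping from the manual filters

# Hierarchical (average-linkage) clustering with cosine distance.
Z = linkage(embeddings, method="average", metric="cosine")
predicted = fcluster(Z, t=2, criterion="maxclust")

# Assess agreement with the reference grouping, as in the thesis.
print(v_measure_score(manual_types, predicted),
      adjusted_rand_score(manual_types, predicted))
```

Both scores are 1.0 here because the toy groups are trivially separable; on real log embeddings the scores quantify how closely the unsupervised clusters match the manual fault types.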
APA, Harvard, Vancouver, ISO, and other styles
6

Neves, Julio Severino. "Fractional Sobolev-type spaces and embeddings." Thesis, University of Sussex, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341514.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hauser, Bruno. "Embedding proof-carrying components into Isabelle." Zurich : ETH, Swiss Federal Institute of Technology Zurich, Institute of Theoretical Computer Science, Chair of Software Engineering, 2009. http://e-collection.ethbib.ethz.ch/show?type=dipl&nr=436.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kinsley, Sam. "Duality methods for barrier-type solutions to the Skorokhod embedding problem." Thesis, University of Bath, 2018. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.761046.

Full text
Abstract:
The Skorokhod embedding problem is to find a stopping time of a Brownian motion, W, for which the stopped process has a given distribution. The Root, Rost, and cave embedding solutions to the problem can be seen as the first hitting time for (Wt,t) of regions known as barriers, inverse barriers, and cave barriers, respectively. In this thesis we present three ways of approaching the embedding problem, and apply the methods to these barrier-type solutions. Specifically, we consider infinite dimensional linear optimisation problems in both discrete and continuous time, and we also reformulate into an optimisation constrained by backwards stochastic differential equations and then solve using techniques from stochastic optimal control. For certain financial derivatives it is well known that there is an optimal Skorokhod embedding problem which corresponds to finding a model-independent upper bound on the price of the contingent claim. With this application in mind, the embedding problem has the dual problem of finding the minimal cost of a superhedging portfolio for the option. The methods developed in this thesis enable us to explore the relation between the primal and dual problems, and, in the applications above, find dual optimisers. We also introduce a new barrier-type embedding, known as a K-cave embedding, which has the property of maximising the price of a European call option on a leveraged exchange traded fund. For the cave and K-cave embeddings the attainment of an optimal superhedging strategy is needed to find the optimal barriers. Unlike in the cases of Root and Rost, there are not unique cave, or K-cave barriers which embed a given distribution and in this way these are the first examples of embeddings which are not uniquely determined by their geometric structure.
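For orientation, the classical Root solution mentioned above can be written as a first hitting time of a barrier (this is the standard statement, not one of the thesis's new results):

```latex
% Root's embedding: R \subseteq [0,\infty] \times \mathbb{R} is a barrier if
% it is closed and (t,x) \in R, \ t' \ge t \ \Rightarrow \ (t',x) \in R.
% The stopping time is the first hitting time of R by the space-time process:
\tau_R = \inf\bigl\{\, t \ge 0 : (t, W_t) \in R \,\bigr\},
\qquad \mathcal{L}\bigl(W_{\tau_R}\bigr) = \mu .
% Rost's solution replaces the barrier by an inverse barrier:
% (t,x) \in R, \ t' \le t \ \Rightarrow \ (t',x) \in R.
```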
APA, Harvard, Vancouver, ISO, and other styles
9

Ernstsson, August. "SkePU 2: Language Embedding and Compiler Support for Flexible and Type-Safe Skeleton Programming." Thesis, Linköpings universitet, Programvara och system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129381.

Full text
Abstract:
This thesis presents SkePU 2, the next generation of the SkePU C++ framework for programming of heterogeneous parallel systems using the skeleton programming concept. SkePU 2 is presented after a thorough study of the state of parallel programming models, frameworks and tools, including other skeleton programming systems. The advancements in SkePU 2 include a modern C++11 foundation, a native syntax for skeleton parameterization with user functions, and an entirely new source-to-source translator based on Clang compiler front-end libraries. SkePU 2 extends the functionality of SkePU 1 by embracing metaprogramming techniques and C++11 features, such as variadic templates and lambda expressions. The results are improved programmability and performance in many situations, as shown in both a usability survey and performance evaluations on high-performance computing hardware. SkePU’s skeleton programming model is also extended with a new construct, Call, unique in the sense that it does not impose any predefined skeleton structure and can encapsulate arbitrary user-defined multi-backend computations. We conclude that SkePU 2 is a promising new direction for the SkePU project, and a solid basis for future work, for example in performance optimization.
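The algorithmic-skeleton concept at the heart of SkePU can be illustrated in a language-agnostic way. The following Python sketch mimics Map and Reduce skeletons parameterized by user functions; SkePU 2 itself is a C++ framework with multi-backend source-to-source compilation, none of which this toy attempts:

```python
from functools import reduce

def map_skeleton(user_fn):
    """Return a data-parallel 'Map' instance specialized by a user function."""
    def instance(*sequences):
        return [user_fn(*args) for args in zip(*sequences)]
    return instance

def reduce_skeleton(user_fn, init):
    """Return a 'Reduce' instance specialized by an associative user function."""
    def instance(sequence):
        return reduce(user_fn, sequence, init)
    return instance

# Skeleton instances built from user functions supplied as lambdas (SkePU 2's
# native syntax does the analogous thing with C++11 lambdas).
vec_add = map_skeleton(lambda x, y: x + y)
total = reduce_skeleton(lambda a, b: a + b, 0)

print(vec_add([1, 2, 3], [10, 20, 30]))  # [11, 22, 33]
print(total([11, 22, 33]))               # 66
```

The point of the skeleton approach is that the structure (map, reduce, etc.) is fixed by the framework, so each instance can be retargeted to different backends without changing the user function.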
APA, Harvard, Vancouver, ISO, and other styles
10

Walter, Alexander I. "Embedding transdisciplinary research : interface requirements for joint problem solving between scientists and stakeholders /." Zürich : ETH, 2006. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=16938.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Zhou, Hanqing. "DBpedia Type and Entity Detection Using Word Embeddings and N-gram Models." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37324.

Full text
Abstract:
Nowadays, knowledge bases are used more and more in Semantic Web tasks, such as knowledge acquisition (Hellmann et al., 2013), disambiguation (Garcia et al., 2009) and named entity corpus construction (Hahm et al., 2014), to name a few. DBpedia is playing a central role on the linked open data cloud; therefore, the quality of this knowledge base is becoming a central point of focus. However, DBpedia suffers from three major types of problems: a) invalid types for entities, b) missing types for entities, and c) invalid entities in the resources' descriptions. In order to enhance the quality of DBpedia, it is important to detect these invalid types and resources, as well as to complete missing types. The three main goals of this thesis are: a) invalid entity type detection, to solve the problem of invalid DBpedia types for entities; b) automatic detection of the types of entities, to solve the problem of missing DBpedia types for entities; and c) invalid entity detection, to solve the problem of invalid entities in the resource description of a DBpedia entity. We compare several methods for the detection of invalid types, automatic typing of entities, and detection of invalid entities in resource descriptions. In particular, we compare different classification and clustering algorithms based on various sets of features: entity embedding features (Skip-gram and CBOW models) and traditional n-gram features. We present evaluation results for 358 DBpedia classes extracted from the DBpedia ontology. The main contribution of this work consists of the development of automatic invalid type detection, automatic entity typing, and automatic invalid entity detection methods using clustering and classification. Our results show that entity embedding models usually perform better than n-gram models, especially the Skip-gram embedding model.
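The n-gram side of the comparison can be sketched as follows. The entity names and types below are invented toy examples, not DBpedia data, and the competing entity-embedding features (Skip-gram/CBOW vectors) would replace the vectorizer with pretrained dense vectors:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy entity-typing task: predict a type from character n-grams of the name.
names = ["Paris", "Berlin", "Madrid", "Lisbon",
         "Einstein", "Curie", "Darwin", "Turing"]
types = ["City", "City", "City", "City",
         "Person", "Person", "Person", "Person"]

classifier = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(names, types)
print(classifier.predict(["London", "Newton"]))
```

With real data, both feature families feed the same classifiers, so the comparison isolates the effect of the representation.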
APA, Harvard, Vancouver, ISO, and other styles
12

Larsson, Leo. "Carlson type inequalities and their applications." Doctoral thesis, Uppsala : Univ. : Matematiska institutionen, Univ. [distributör], 2003. http://publications.uu.se/theses/91-506-1654-4/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Warren, Jared. "Using Haskell to Implement Syntactic Control of Interference." Thesis, Kingston, Ont. : [s.n.], 2008. http://hdl.handle.net/1974/1237.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Zghal, Mohamed Khalil. "Inégalités de type Trudinger-Moser et applications." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1077/document.

Full text
Abstract:
This thesis focuses on some Trudinger-Moser type inequalities and their applications to the study of the Sobolev embeddings they induce into Orlicz spaces, and to the investigation of nonlinear partial differential equations with exponential growth. The work presented here comprises three parts. The first part is devoted to the description of the lack of compactness of the 4D Sobolev embedding into the Orlicz space in the radial framework. The aim of the second part is twofold. Firstly, we characterize the lack of compactness of the 2D Sobolev embedding into the different classes of Orlicz spaces. Secondly, we undertake the study of the nonlinear Klein-Gordon equation with exponential growth, where the Orlicz norm plays a crucial role. In particular, issues of global existence, scattering and qualitative study are investigated. In the third part, we establish sharp Adams-type inequalities invoking Hardy inequalities, then we give a description of the lack of compactness of the Sobolev embeddings they induce.
APA, Harvard, Vancouver, ISO, and other styles
15

Romeo, Lauren Michele. "The Structure of the lexicon in the task of the automatic acquisition of lexical information." Doctoral thesis, Universitat Pompeu Fabra, 2015. http://hdl.handle.net/10803/325420.

Full text
Abstract:
Lexical semantic class information for nouns is critical for a broad variety of Natural Language Processing (NLP) tasks including, but not limited to, machine translation, discrimination of referents in tasks such as event detection and tracking, question answering, named entity recognition and classification, automatic construction and extension of ontologies, textual inference, etc. One approach to solve the costly and time-consuming manual construction and maintenance of large-coverage lexica to feed NLP systems is the Automatic Acquisition of Lexical Information, which involves the induction of a semantic class related to a particular word from distributional data gathered within a corpus. This is precisely why current research on methods for the automatic production of high-quality information-rich class-annotated lexica, such as the work presented here, is expected to have a high impact on the performance of most NLP applications. In this thesis, we address the automatic acquisition of lexical information as a classification problem. For this reason, we adopt machine learning methods to generate a model representing vectorial distributional data which, grounded on known examples, allows for the predictions of other unknown words. The main research questions we investigate in this thesis are: (i) whether corpus data provides sufficient distributional information to build efficient word representations that result in accurate and robust classification decisions and (ii) whether automatic acquisition can handle also polysemous nouns. To tackle these problems, we conducted a number of empirical validations on English nouns. Our results confirmed that the distributional information obtained from corpus data is indeed sufficient to automatically acquire lexical semantic classes, demonstrated by an average overall F1-Score of almost 0.80 using diverse count-context models and on different sized corpus data.
Nonetheless, both the State of the Art and the experiments we conducted highlighted a number of challenges of this type of model such as reducing vector sparsity and accounting for nominal polysemy in distributional word representations. In this context, Word Embeddings (WE) models maintain the “semantics” underlying the occurrences of a noun in corpus data by mapping it to a feature vector. With this choice, we were able to overcome the sparse data problem, demonstrated by an average overall F1-Score of 0.91 for single-sense lexical semantic noun classes, through a combination of reduced dimensionality and “real” numbers. In addition, the WE representations obtained a higher performance in handling the asymmetrical occurrences of each sense of regular polysemous complex-type nouns in corpus data. As a result, we were able to directly classify such nouns into their own lexical-semantic class with an average overall F1-Score of 0.85. The main contribution of this dissertation consists of an empirical validation of different distributional representations used for nominal lexical semantic classification along with a subsequent expansion of previous work, which results in novel lexical resources and data sets that have been made freely available for download and use.
APA, Harvard, Vancouver, ISO, and other styles
16

Lin, Chia-Hui (林家暉). "Three Types of Two-disjoint-cycle-cover Pancyclicity And Their Applications to Cycle Embedding in Locally Twisted Cubes." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/47777454426457619950.

Full text
Abstract:
Master's thesis
Providence University
Department of Computer Science and Information Engineering
104 (2015, Republic of China calendar)
A graph G = (V, E) is two-disjoint-cycle-cover [r1, r2]-pancyclic if for any integer l satisfying r1 ≤ l ≤ r2, there exist two vertex-disjoint cycles C1 and C2 in G such that the lengths of C1 and C2 are l and |V|− l, respectively, where |V| denotes the total number of vertices in G. On the basis of this definition, we further propose Ore-type conditions for graphs to be two-disjoint-cycle-cover vertex/edge [r1, r2]-pancyclic. In addition, we study cycle embedding in the n-dimensional locally twisted cube LTQn under the consideration of two-disjoint-cycle-cover vertex/edge pancyclicity.
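The definition in the abstract can be checked by brute force on very small graphs. The sketch below uses the ordinary 3-cube Q3 rather than the locally twisted cube LTQn studied in the thesis:

```python
from itertools import combinations, permutations

# A graph is two-disjoint-cycle-cover [r1, r2]-pancyclic if for every l with
# r1 <= l <= r2 there are vertex-disjoint cycles of lengths l and |V| - l
# that together cover all vertices. We test one value of l on Q3.

def q3_adjacent(u, v):
    return bin(u ^ v).count("1") == 1  # hypercube edge: labels differ in one bit

def has_cycle_through_all(vertices, adjacent):
    """Check whether the induced subgraph has a Hamiltonian cycle."""
    vs = list(vertices)
    if len(vs) < 3:
        return False
    first, rest = vs[0], vs[1:]
    for perm in permutations(rest):
        cycle = [first] + list(perm)
        if all(adjacent(cycle[i], cycle[(i + 1) % len(cycle)])
               for i in range(len(cycle))):
            return True
    return False

def two_disjoint_cycle_cover(n_vertices, adjacent, l):
    """Does some split into sets of size l and n - l give two disjoint cycles?"""
    everything = set(range(n_vertices))
    for part in combinations(everything, l):
        rest = everything - set(part)
        if (has_cycle_through_all(part, adjacent)
                and has_cycle_through_all(rest, adjacent)):
            return True
    return False

print(two_disjoint_cycle_cover(8, q3_adjacent, 4))  # True: two opposite faces
print(two_disjoint_cycle_cover(8, q3_adjacent, 3))  # False: Q3 has no odd cycle
```

This exhaustive check is only feasible for tiny graphs; results like the thesis's establish such properties for all LTQn by structural arguments instead.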
APA, Harvard, Vancouver, ISO, and other styles
17

Franců, Martin. "Isoperimetrický problém, Sobolevovy prostory a Heisenbergova grupa." Doctoral thesis, 2018. http://www.nusl.cz/ntk/nusl-392430.

Full text
Abstract:
In this thesis we study embeddings of spaces of functions defined on Carnot-Carathéodory spaces. Main results of this work consist of conditions for Sobolev-type embeddings of higher order between rearrangement-invariant spaces. In a special case when the underlying measure space is the so-called X-PS domain in the Heisenberg group we obtain full characterization of a Sobolev embedding. The next set of main results concerns compactness of the above-mentioned embeddings. In these cases we obtain sufficient conditions. We apply the general results to important particular examples of function spaces. In the final part of the thesis we present a new algorithm for approximation of the least concave majorant of a function defined on an interval complemented with the estimate of the error of such approximation.
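The least concave majorant mentioned in the final part can be approximated by a standard upper-convex-hull construction on a grid of sample points (a generic method, not the thesis's own algorithm or its error estimate):

```python
def least_concave_majorant(xs, ys):
    """Upper hull of (xs, ys); returns the hull vertices (xs must be sorted)."""
    hull = []  # every sample point ends up on or below this concave chain
    for p in zip(xs, ys):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or below the segment hull[-2] -> p
            if (y2 - y1) * (p[0] - x2) <= (p[1] - y2) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def evaluate(hull, x):
    """Piecewise-linear interpolation of the hull at x."""
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        if x1 <= x <= x2:
            return y1 + (y2 - y1) * (x - x1) / (x2 - x1)
    raise ValueError("x outside the sampled interval")

# f(x) = x^2 is convex, so its least concave majorant on [0, 1] is the chord
# from (0, 0) to (1, 1); at x = 0.5 the majorant equals 0.5.
xs = [i / 10 for i in range(11)]
hull = least_concave_majorant(xs, [x * x for x in xs])
print(evaluate(hull, 0.5))  # 0.5
```

The grid spacing controls the accuracy of the approximation; quantifying that error for general functions is the kind of estimate the thesis provides.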
APA, Harvard, Vancouver, ISO, and other styles