
Theses on the topic "Generative Semantik"



Consult the 50 best theses for your research on the topic "Generative Semantik".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organise your bibliography correctly.

1

Öhl, Peter. "Economical computation of structural descriptions in natural language: a minimally radicalist theory". [S.l.: s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10633960.

Full text
2

Teunissen, Lisanne Maria. "Semantic syntax: evaluation by implementation". Utrecht (Netherlands): LOT, 2002. http://catalogue.bnf.fr/ark:/12148/cb39075632j.

Full text
3

Berger, Mikael. "Ledföljdsvariation i svenska : Verb, semantik och syntax i samband med lång objektsflytt". Thesis, Linnéuniversitetet, Institutionen för svenska språket (SV), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-32232.

Full text
Abstract
The term ‘Object Shift’ means that a weak object pronoun has been raised to a position to the left of a sentence adverbial. Primarily, there are two positions to which a weak object pronoun can be raised; one of them is called Long Object Shift. In this paper, I primarily deal with finite verb frequencies in sentences with Long Object Shift in Swedish. Also, in order to further define the verbs in these sentences I discuss verb semantic and syntactic valence in relation to the shifted object. Furthermore, I discuss the search methods which have generated sentences with Long Object Shift in written sources in Swedish; an equivalent study has never been done before.   The most salient results of my study indicate that: verbs with physical denotation, and specifically vänta, möta and slå, frequently recur in sentences with Long Object Shift in written sources in Swedish; monotransitive verbs appear more often than ditransitive ones; shifted objects are most often assigned the semantic role of experiencer in the studied sentences; the search methods show divergent results, which indicates the significance of sentence adverbial initiating sentences with Long Object Shift. Lastly, the results show that semantic analyses, terminology and search methods need to be extended in order to further define the finite verb in sentences with Long Object Shift in Swedish.
4

Kroik, David. "Differential object marking in South Saami". Licentiate thesis, Umeå universitet, Institutionen för språkstudier, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-124959.

Full text
Abstract
This licentiate thesis investigates the case and the syntactic position of the direct object in South Saami. The focus is on plural direct objects, which have Differential Object Marking, a phenomenon in which the case alternates between different types of direct objects. In South Saami, some direct objects carry the accusative case form in the plural, while others only carry the plural marker. This variation of suffix displayed on the direct object is contingent on definiteness; definite direct objects consistently display the accusative case form in the plural while indefinite direct objects, specific and nonspecific alike, lack accusative morphology. In addition to case marking, the study presents an analysis of the alternation of the syntactic position of some direct objects. Definite and indefinite specific direct objects can be realized in two positions: as the complement of the verb or in a position as specifier of the light verb projection. By contrast, indefinite nonspecific direct objects obligatorily surface in the complement position of the verb. This variability in syntactic position of some direct objects is analyzed by means of a Specificity Operator, adjoined to the DP-level of every specific NP, definite and indefinite. The operator moves as an instance of quantifier raising in order to take scope over Existential Closure (EC). EC binds NPs in its domain and gives them an existential reading. Therefore, when the Specificity Operator raises, it anchors the DP it is adjoined to in a domain, which is unbound by EC and therefore facilitates a specific interpretation. The operator, void of phonological content, can raise alone to the specifier of vP as an instance of covert movement. The operator can also pied-pipe the DP it is adjoined to, which results in overt movement of the DP. Indefinite nonspecific direct objects lack the Specificity Operator and therefore they remain in-situ in the VP, where they are bound by EC. In addition to its theoretical value, the thesis will be of use for teachers, students and others with an interest in a better understanding of the case form and the position of the direct object in South Saami.
5

Han, Wei. "Wrapper application generation for semantic web". Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/5407.

Full text
6

Al-Jabri, S. K. "Generating Arabic words from semantic descriptions". Thesis, University of Edinburgh, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.640291.

Full text
Abstract
This thesis addresses Arabic word generation from semantic descriptions. The purpose of this study is two-fold. Firstly, to analyse semantic specifications of Arabic derivation and, secondly, to build a computational model that performs Arabic lexical choices using a knowledge-base that is motivated by the semantics of derivation. Lexical choice in a highly derived language, such as Arabic, is non-trivial in MT and NLG systems where the semantic input cannot be biased directly towards the words that the language provides. On the linguistic side, the major contribution of this study is a new framework that organises semantic interactions in Arabic derivation in a two-layer semantics: an inner layer accounts for core meanings realised by Arabic consonantal roots and an outer layer accounts for semantic features associated with derivational moulds. The meaning of derived words can be established using controlled interactions between the two layers. The interactions, accordingly, state the link between semantic concepts on the one hand and their surface realisations on the other. Furthermore, derivational conditions imposed on the interactions account for many derivational gaps in Arabic. On the computational side, semantic generalisation motivated by the two layer semantics are organised into a taxonomic knowledge-base. Following the tradition in the KL-ONE family, the knowledge-base adopts a separation between definitional and assertional components. The former is used to represent semantic specifications of Arabic derived concepts while the latter is used to describe their linguistic properties such as roots and moulds. The knowledge-base, as a whole, is built around structured hierarchies that represent an integration of knowledge resources such as default and multiple inheritance, and subsumption. This integration allows not only for an expressive representation of Arabic derivation but also for the mapping of semantic inputs into Arabic derived lexical items by means of automatic classification.
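The root-and-mould split described in this abstract (an inner layer of consonantal roots carrying core meanings, an outer layer of derivational moulds contributing semantic features) can be illustrated with a small sketch. The sketch below is not the thesis' model; the mould inventory, glosses and function names are illustrative assumptions, using the familiar root k-t-b ('writing').

# Illustrative sketch of root-and-mould word formation (not the thesis' actual model).
# A triliteral root supplies the core meaning; a mould (pattern) supplies outer
# semantic features and the vocalism into which the root consonants are slotted.

ROOTS = {
    "ktb": "writing",   # core meaning of the consonantal root k-t-b
}

# Mould patterns: digits 1-3 mark the slots for the three root consonants.
MOULDS = {
    "1a2a3a": {"gloss": "perfective verb", "example_meaning": "he wrote"},
    "1aa2i3": {"gloss": "active participle / agent noun", "example_meaning": "writer"},
    "ma12a3": {"gloss": "noun of place", "example_meaning": "office, desk"},
}

def derive(root: str, mould: str) -> str:
    """Interleave the three root consonants into the consonant slots of a mould."""
    assert len(root) == 3, "this toy example only handles triliteral roots"
    word = mould
    for i, consonant in enumerate(root, start=1):
        word = word.replace(str(i), consonant)
    return word

if __name__ == "__main__":
    for mould, features in MOULDS.items():
        surface = derive("ktb", mould)
        print(f"{surface:8s} <- root 'ktb' ({ROOTS['ktb']}) + mould {mould}: {features['gloss']}")
    # kataba, kaatib, maktab -- core meaning from the root, outer features from the mould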
7

Green, Stephen Joseph. "Automatically generating hypertext by computing semantic similarity". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0019/NQ27656.pdf.

Full text
8

Ball, Stephen Wayne. "Semantic web service generation for text classification". Thesis, University of Southampton, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.430674.

Full text
9

Ablonskis, Linas. "Programos kodo generavimas naudojant UML veiksmų semantiką". Master's thesis, Lithuanian Academic Libraries Network (LABT), 2006. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2006~D_20060602_001134-80600.

Full text
Abstract
The recent version of UML 2.0 (as of 2006) specifies activities and actions, which allow the low-level behavior of the software system being modeled to be described in an implementation-independent fashion. This work analyzes the suitability of UML 2.0 activities and actions for generating full program code (or as much of it as possible). It also proposes a method for generating program code from UML 2.0 activities and actions, which consists of a way to express UML activities with concurrently executing actions in a sequential execution scenario and a way to determine the exact code template (among the few possible) for UML activity elements based on identifying the particular context of the element in question.
10

Buys, Jan Moolman. "Incremental generative models for syntactic and semantic natural language processing". Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:a9a7b5cf-3bb1-4e08-b109-de06bf387d1d.

Full text
Abstract
This thesis investigates the role of linguistically-motivated generative models of syntax and semantic structure in natural language processing (NLP). Syntactic well-formedness is crucial in language generation, but most statistical models do not account for the hierarchical structure of sentences. Many applications exhibiting natural language understanding rely on structured semantic representations to enable querying, inference and reasoning. Yet most semantic parsers produce domain-specific or inadequately expressive representations. We propose a series of generative transition-based models for dependency syntax which can be applied as both parsers and language models while being amenable to supervised or unsupervised learning. Two models are based on Markov assumptions commonly made in NLP: The first is a Bayesian model with hierarchical smoothing, the second is parameterised by feed-forward neural networks. The Bayesian model enables careful analysis of the structure of the conditioning contexts required for generative parsers, but the neural network is more accurate. As a language model the syntactic neural model outperforms both the Bayesian model and n-gram neural networks, pointing to the complementary nature of distributed and structured representations for syntactic prediction. We propose approximate inference methods based on particle filtering. The third model is parameterised by recurrent neural networks (RNNs), dropping the Markov assumptions. Exact inference with dynamic programming is made tractable here by simplifying the structure of the conditioning contexts. We then shift the focus to semantics and propose models for parsing sentences to labelled semantic graphs. We introduce a transition-based parser which incrementally predicts graph nodes (predicates) and edges (arguments). This approach is contrasted against predicting top-down graph traversals. RNNs and pointer networks are key components in approaching graph parsing as an incremental prediction problem. The RNN architecture is augmented to condition the model explicitly on the transition system configuration. We develop a robust parser for Minimal Recursion Semantics, a linguistically-expressive framework for compositional semantics which has previously been parsed only with grammar-based approaches. Our parser is much faster than the grammar-based model, while the same approach improves the accuracy of neural Abstract Meaning Representation parsing.
11

Kolozali, Sefki. "Automatic ontology generation based on semantic audio analysis". Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/8452.

Full text
Abstract
Ontologies provide an explicit conceptualisation of a domain and a uniform framework that represents domain knowledge in a machine interpretable format. The Semantic Web heavily relies on ontologies to provide well-defined meaning and support for automated services based on the description of semantics. However, considering the open, evolving and decentralised nature of the Semantic Web – though many ontology engineering tools have been developed over the last decade – it can be a laborious and challenging task to deal with manual annotation, hierarchical structuring and organisation of data as well as maintenance of previously designed ontology structures. For these reasons, we investigate how to facilitate the process of ontology construction using semantic audio analysis. The work presented in this thesis contributes to solving the problems of knowledge acquisition and manual construction of ontologies. We develop a hybrid system that involves a formal method of automatic ontology generation for web-based audio signal processing applications. The proposed system uses timbre features extracted from audio recordings of various musical instruments. The proposed system is evaluated using a database of isolated notes and melodic phrases recorded in neutral conditions, and we make a detailed comparison between musical instrument recognition models to investigate their effects on the automatic ontology generation system. Finally, the automatically-generated musical instrument ontologies are evaluated in comparison with the terminology and hierarchical structure of the Hornbostel and Sachs organology system. We show that the proposed system is applicable in multi-disciplinary fields that deal with knowledge management and knowledge representation issues.
12

Prout, Adam. "Parameterized Code Generation From Template Semantics". Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/1027.

Full text
Abstract
We have developed a tool that can create a Java code generator for a behavioural modelling notation given only a description of the notation's semantics as a set of parameters. This description is based on template semantics, which has previously been used to describe a wide variety of notations. As a result, we have a technique for generating Java code for models written in any notation describable in template semantics. Since template semantics allows for models containing nondeterminism, we introduce mechanisms for eliminating this nondeterminism when generating code. We describe Java implementations of several template-semantics composition operators that have no natural Java representations and give some techniques for optimizing the generated code without sacrificing correctness. The efficiency of our generated code is comparable to that of commercial notation-specific code generators.
13

Khandelwal, Madhur Jugalkishore. "Semantics of time travel in a generative information space". Texas A&M University, 2004. http://hdl.handle.net/1969.1/1354.

Full text
Abstract
This thesis focuses on interactive and computational semantics for manipulating the time-based medium of an evolving information space. The interactive semantics enable the user to engage in linear timeline traversal and non-linear history manipulation. Extended tape recorder metaphor controls, including jog-shuttle based navigation, provide the user with flexible means for operating the software's generative functionalities, and linearly traversing session history. The user can see previews of information space states while traversing the history using the jog-shuttle. We also introduce a door-latch metaphor that enables one of several considered forms of nonlinear history manipulation. Users can change history by retroactively latching an information sample in its position across time. For representing the information space history, we have developed MPEG-like computational keyframe semantics. This representation is in the form of XML, which is generated automatically and converted back to Java by a framework named ecologylab.xml, which was developed as a part of this thesis. These computational keyframe semantics serve as the basis for interaction semantics. A user study was conducted in the form of a design competition, to evaluate these new features. The results indicated that the users do find the time travel features useful and they feel more in-control of the information space with access to time travel features compared to the case when time travel features are not present.
14

Neil, Richard L. "Support for Pointer Semantics in a Generative Communication Framework". Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/26275.

Full text
Abstract
The Generative Communication (GC) paradigm was introduced in the early 1980s in the form of the Linda coordination and communication language (Gelernter & Bernstein, 1982). The goal of the language was to provide an elegant, intuitive alternative to the state-of-the-art in parallel and distributed processing languages at that time, namely remote procedure calls; message passing systems; and shared memory systems (Gelernter, 1985). Despite the longevity of the Linda coordination language and the GC paradigm, in general, the literature is devoid of meaningful research into the construction and manipulation of dynamic, pointer-based data structures within the context of the GC paradigm. This document motivates the need for such data structures in GC. In addition, this document explores the reasons for the lack of literature regarding dynamic pointer-based data structures in GC. Finally, we develop solutions to address the lack of support, within the GC paradigm, for such data structures.
Ph. D.
15

Karpovič, Jaroslav. "Semantinei paieškai naudojamos ontologijos generavimo pagal duomenų bazės schemą procesas". Master's thesis, Lithuanian Academic Libraries Network (LABT), 2007. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2007~D_20070118_111316-44846.

Full text
Abstract
Semantic technologies for storing data separate the data from application code and make it possible for computers as well as people to understand and share semantics in real time. These technologies also make adding a new data source, or a link between software applications, as easy as drawing a new link in the model. Unfortunately, these technologies are not yet developed and popular enough for their strong benefits to be noticeable in daily life. The semantic search system introduced here is an attempt to show the strong points of semantic technologies. Semantic search is more precise because of its ability to narrow down the handled domain, so it gives more exact results than the usual keyword-based search. This advantage shows clearly when the database is very large and filled with plenty of data. It also makes it possible to retrieve results from multiple distant data sources and to form custom or predefined result sets, acting as a central hub for some data domain. This work proposes automatic ontology generation based on the database schema and metadata. Such a solution ensures that a semantic search that uses the generated ontology serves up-to-date search services even when the structure of the database changes.
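Because the abstract proposes generating the ontology automatically from the database schema and its metadata, a minimal sketch of that idea follows. The mapping rules (table to class, column to datatype property, foreign key to object property), the namespace and all identifiers are assumptions made for illustration, not the thesis' actual algorithm.

# Minimal sketch: map a relational schema to an RDFS/OWL ontology in Turtle.
# Table -> owl:Class, column -> owl:DatatypeProperty, foreign key -> owl:ObjectProperty.
# The mapping rules and namespace are illustrative assumptions, not the thesis' method.

SCHEMA = {
    "Movie":  {"columns": ["title", "year"], "foreign_keys": {"director_id": "Person"}},
    "Person": {"columns": ["name"], "foreign_keys": {}},
}

PREFIXES = (
    "@prefix ex: <http://example.org/ontology#> .\n"
    "@prefix owl: <http://www.w3.org/2002/07/owl#> .\n"
    "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .\n"
)

def schema_to_turtle(schema: dict) -> str:
    lines = [PREFIXES]
    for table, meta in schema.items():
        lines.append(f"ex:{table} a owl:Class .")
        for column in meta["columns"]:
            lines.append(f"ex:{column} a owl:DatatypeProperty ; rdfs:domain ex:{table} .")
        for fk_column, target in meta["foreign_keys"].items():
            prop = fk_column.removesuffix("_id")
            lines.append(
                f"ex:{prop} a owl:ObjectProperty ; rdfs:domain ex:{table} ; rdfs:range ex:{target} ."
            )
    return "\n".join(lines)

if __name__ == "__main__":
    print(schema_to_turtle(SCHEMA))

Re-running such a generator whenever the schema changes is what keeps the semantic search layer up to date, which is the point the abstract makes.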
16

Abeyruwan, Saminda Wishwajith. "PrOntoLearn: Unsupervised Lexico-Semantic Ontology Generation using Probabilistic Methods". Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_theses/28.

Full text
Abstract
An ontology is a formal, explicit specification of a shared conceptualization. Formalizing an ontology for a domain is a tedious and cumbersome process. It is constrained by the knowledge acquisition bottleneck (KAB). There exists a large number of text corpora that can be used for classification in order to create ontologies with the intention to provide better support for the intended parties. In our research we provide a novel unsupervised bottom-up ontology generation method. This method is based on lexico-semantic structures and Bayesian reasoning to expedite the ontology generation process. This process also provides evidence to domain experts to build ontologies based on top-down approaches.
17

Wächter, Thomas. "Semi-automated Ontology Generation for Biocuration and Semantic Search". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-64838.

Full text
Abstract
Background: In the life sciences, the amount of literature and experimental data grows at a tremendous rate. In order to effectively access and integrate these data, biomedical ontologies – controlled, hierarchical vocabularies – are being developed. Creating and maintaining such ontologies is a difficult, labour-intensive, manual process. Many computational methods which can support ontology construction have been proposed in the past. However, good, validated systems are largely missing. Motivation: The biocuration community plays a central role in the development of ontologies. Any method that can support their efforts has the potential to have a huge impact in the life sciences. Recently, a number of semantic search engines were created that make use of biomedical ontologies for document retrieval. To transfer the technology to other knowledge domains, suitable ontologies need to be created. One area where ontologies may prove particularly useful is the search for alternative methods to animal testing, an area where comprehensive search is of special interest to determine the availability or unavailability of alternative methods. Results: The Dresden Ontology Generator for Directed Acyclic Graphs (DOG4DAG) developed in this thesis is a system which supports the creation and extension of ontologies by semi-automatically generating terms, definitions, and parent-child relations from text in PubMed, the web, and PDF repositories. The system is seamlessly integrated into OBO-Edit and Protégé, two widely used ontology editors in the life sciences. DOG4DAG generates terms by identifying statistically significant noun-phrases in text. For definitions and parent-child relations it employs pattern-based web searches. Each generation step has been systematically evaluated using manually validated benchmarks. The term generation leads to high quality terms also found in manually created ontologies. Definitions can be retrieved for up to 78% of terms, child ancestor relations for up to 54%. No other validated system exists that achieves comparable results. To improve the search for information on alternative methods to animal testing an ontology has been developed that contains 17,151 terms of which 10% were newly created and 90% were re-used from existing resources. This ontology is the core of Go3R, the first semantic search engine in this field. When a user performs a search query with Go3R, the search engine expands this request using the structure and terminology of the ontology. The machine classification employed in Go3R is capable of distinguishing documents related to alternative methods from those which are not with an F-measure of 90% on a manual benchmark. Approximately 200,000 of the 19 million documents listed in PubMed were identified as relevant, either because a specific term was contained or due to the automatic classification. The Go3R search engine is available on-line under www.Go3R.org.
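DOG4DAG is described as retrieving definitions and parent-child relations through pattern-based searches over text. As a rough illustration of what a pattern-based definition lookup can look like, here is a small sketch; the patterns, thresholds and function names are generic assumptions, not the actual DOG4DAG implementation.

import re

# Toy pattern-based definition extraction in the spirit of what the abstract describes.
# The patterns below are generic illustrations, not DOG4DAG's actual pattern set.
DEFINITION_TEMPLATES = [
    r"\b{term}s?\s+(?:is|are)\s+(?:a|an|the)?\s*([^.;]+)",
    r"\b{term}s?,\s+(?:a|an)\s+([^,;]+)",
]

def find_definitions(term: str, sentences: list[str]) -> list[str]:
    """Return candidate definition phrases for a term from a list of sentences."""
    patterns = [re.compile(t.format(term=re.escape(term)), re.IGNORECASE)
                for t in DEFINITION_TEMPLATES]
    candidates = []
    for sentence in sentences:
        for pattern in patterns:
            match = pattern.search(sentence)
            if match:
                candidates.append(match.group(1).strip())
    return candidates

if __name__ == "__main__":
    text = [
        "An ontology is a formal specification of a shared conceptualisation.",
        "We evaluated several ontologies for anatomy.",
    ]
    print(find_definitions("ontology", text))
    # ['formal specification of a shared conceptualisation']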
18

Nieto, Erick Mauricio Gomez. "Generation of semantic layouts for interactive multidimensional data visualization". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-11052017-105059/.

Full text
Abstract
Visualization methods make use of interactive graphical representations embedded on a display area in order to enable data exploration and analysis. These typically rely on geometric primitives for representing data or building more sophisticated representations to assist the visual analysis process. One of the most challenging tasks in this context is to determine an optimal layout of these primitives which turns out to be effective and informative. Existing algorithms for building layouts from geometric primitives are typically designed to cope with requirements such as orthogonal alignment, overlap removal, optimal area usage, hierarchical organization, dynamic update among others. However, most techniques are able to tackle just a few of those requirements simultaneously, impairing their use and flexibility. In this dissertation, we propose a set of approaches for building layouts from geometric primitives that concurrently addresses a wider range of requirements. Relying on multidimensional projection and optimization formulations, our methods arrange geometric objects in the visual space so as to generate well-structured layouts that preserve the semantic relation among objects while still making an efficient use of display area. A comprehensive set of quantitative comparisons against existing methods for layout generation and applications on text, image, and video data set visualization prove the effectiveness of our approaches.
19

Mata, Cristina (Christina Florica). "Two approaches to robust hand pose estimation: generative modeling and semantic relations". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123053.

Full text
Abstract
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 107-111).
This thesis explores hand pose estimation through the use of two methods. We first investigate the use of segmentation networks within pose estimation pipelines with a focus on fine parts segmentation. We present two implementations of a novel method for fine parts segmentation employing a higher-order Conditional Random Field (CRF) that measures attachment and containment of fine parts. The first implementation is of the CRF as a post-processing module on top of a Convolutional Neural Network (CNN). The second addresses efficiency bottlenecks in the first by implementing the CRF as a Recurrent Neural Network (RNN) and allowing for end-to-end training with the CNN. Limited by the accuracy of fine parts segmentation and wishing to avoid propagation of segmentation errors through a pipeline, we turn to generative modeling methods for hand pose estimation and present an inverse-graphics approach implemented in a probabilistic programming language. Spurred by the lack of occlusion in hand image datasets, we present the MIT Partially Occluded Hands Dataset, a large-scale dataset of single RGB images, half of which feature natural hand-object interactions, and evaluate several baselines on this dataset.
by Cristina Mata.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
20

McKeever, Stephen Walter. "Automatic compiler generation from natural semantics specifications". Thesis, University of Oxford, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.480685.

Full text
21

Berry, Dave. "Generating program animators from programming language semantics". Thesis, University of Edinburgh, 1990. http://hdl.handle.net/1842/10791.

Full text
Abstract
I present a theory of program animation based on formal semantics. This theory shows how an animator for a language can be generated from a formal specification of that language. Such an animator presents a model of evaluation that is formally correct with respect to the semantics. The theory also provides a framework for comparing definitions of animation operations. The main part of the theory is the definition of an evaluation step. I compare two definitions. The first is based on the transitions used in the transitional style of structured operational semantics, and is motivated by the idea that these transitions represent an intuitive idea of a computation step. Unfortunately this definition produces unsatisfactory animations. However, it can be augmented to give one that better satisfies the needs of the user. Both of these definitions are given in the relational style of structured operational semantics. The first definition is based on an equivalence between the relational and transitional styles; I give a definition of this equivalence. I also discuss the relation between the definition of a step and the choice of semantic formalism. Views of a program in mid-evaluation can be defined by extending the specification of the programming language to include semantic display rules. Each semantic display rule specifies the display of one sub-phrase of the program in mid-evaluation. This approach is powerful enough to define a wide range of views. I also show how the definition of a step can be parameterised on a view. More advanced operations can also be defined in terms of this theory. These operations and the views mentioned in the previous paragraph cover most of the features found in existing animators. This indicates that the theory is powerful enough to specify useful systems. The main feature that is not yet provided is the ability to define views that are specific to a single program. These ideas have been implemented in a system called The Animator Generator. Animators produced by The Animator Generator support multiple views and the advanced operations mentioned here. I give a brief description of this system. I finish by discussing how this work could be developed further.
22

Spudys, Kęstutis. "Dinaminio semantinių užklausų formavimo sąsaja". Master's thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120807_154044-02570.

Full text
Abstract
The increasing popularity of the Semantic Web raises the question of how to make a simple user interface for building semantic queries while keeping the precision of the returned results high. This thesis presents a method that helps users create SPARQL queries by dynamically adding components to the user interface. The goal of the work is to improve the user interface model for semantic queries by allowing users to construct and change it dynamically until the desired results are obtained. The model was created on the basis of an analysis of Semantic Web languages, tools and existing portals, their functions and user interfaces. Algorithms for dynamic user interface generation based on user actions were developed that allow queries of various complexities to be created with a minimal number of user interface components. Implementing and testing a prototype of the system using movie and wine ontologies has shown that dynamic construction and generation of the query interface has the desired functionality and is easily applicable to various ontologies. An experimental comparison with existing semantic search portals has shown that the proposed dynamic user interface generation method can improve the precision and recall of semantic queries and may be applied in semantic search portal applications.
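The abstract describes letting users assemble a SPARQL query step by step from separately chosen components. Below is a minimal sketch of that kind of incremental query assembly; the component structure, the example class and property names, and the omission of PREFIX declarations are assumptions made for illustration, not the thesis' actual interface model.

# Minimal sketch of assembling a SPARQL query from user-chosen components.
# Each component the user adds becomes one triple pattern, optionally with a FILTER.
# PREFIX declarations for the "ex:" namespace are omitted for brevity.

def build_sparql(target_class: str, components: list[dict], limit: int = 20) -> str:
    """Build a SPARQL SELECT query for instances of target_class.

    components: list of dicts, each either
      {"property": "...", "var": "?x"}                    -> adds a triple pattern
      {"property": "...", "var": "?x", "filter": "..."}   -> adds a pattern plus a FILTER
    """
    patterns = [f"?item a {target_class} ."]
    filters = []
    for comp in components:
        patterns.append(f"?item {comp['property']} {comp['var']} .")
        if "filter" in comp:
            filters.append(f"FILTER ({comp['filter']})")
    body = "\n  ".join(patterns + filters)
    return f"SELECT DISTINCT ?item WHERE {{\n  {body}\n}} LIMIT {limit}"

if __name__ == "__main__":
    # Hypothetical wine-ontology query, grown one component at a time by the UI.
    query = build_sparql(
        "ex:Wine",
        [
            {"property": "ex:hasColor", "var": "?color", "filter": '?color = "red"'},
            {"property": "ex:locatedIn", "var": "?region"},
        ],
    )
    print(query)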
23

Wieland, Nellie Claire. "Scribbledehobble: a dissertation on linguistic agency". Diss., University of California, San Diego, 2007. http://wwwlib.umi.com/cr/ucsd/fullcit?p3259063.

Full text
Abstract
Thesis (Ph. D.)--University of California, San Diego, 2007.
Title from first page of PDF file (viewed June 21, 2007). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 197-213).
24

Hielkema, Feikje. "Using natural language generation to provide access to semantic metadata". Thesis, University of Aberdeen, 2010. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=128191.

Full text
Abstract
In recent years, the use of metadata to describe and share resources has grown in importance, especially in the context of the Semantic Web. However, access to metadata is difficult for users without experience with description logic or formal languages, and currently this description applies to most web users. There is a strong need for interfaces that provide easy access to semantic metadata, enabling novice users to browse, query and create it easily. This thesis describes a natural language generation interface to semantic metadata called LIBER (Language Interface for Browsing and Editing Rdf), driven by domain ontologies which are integrated with domain-specific linguistic information. LIBER uses the linguistic information to generate fluent descriptions and search terms through syntactic aggregation. The tool contains three modules to support metadata creation, querying and browsing, which implement the WYSIWYM (What You See Is What You Meant) natural language generation approach. Users can add and remove information by editing system-generated feedback texts. Two studies have been conducted to evaluate LIBER's usability and compare it to a different Semantic Web interface. The studies showed that subjects with no prior experience of the Semantic Web could use LIBER effectively to create, search and browse metadata, and they were a useful source of ideas for improving LIBER's usability. However, the results of these studies were less positive than we had hoped, and users actually preferred the other Semantic Web tool. This has raised questions about which user audience LIBER should aim for, and the extent to which the underlying ontologies influence the usability of the interface. LIBER's portability to other domains is supported by a tool with which ontology developers without a background in linguistics can prepare their ontologies for use in LIBER by adding the necessary linguistic information.
25

Li, Jiawei. "Semantically Correct High-resolution CT Image Interpolation and its Application". Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41150.

Full text
Abstract
Image interpolation in the medical area is of vital importance as most 3D biomedical volume images are sampled where the distance between consecutive slices is significantly greater than the in-plane pixel size due to radiation dose or scanning time. Image interpolation creates a certain number of new slices between known slices in order to obtain an isotropic volume image. The results can be used for higher-quality 2D and 3D visualization or reconstruction of human body structure. Semantic interpolation on the manifold has been proved to be very useful for smoothing the interpolation process. Nevertheless, all previous methods focused on low-resolution image interpolation, and most of them work poorly on high-resolution images. Besides, the medical field sets a high threshold for the quality of interpolations, as they need to be semantic and realistic enough, and resemble real data with only small errors permitted. Typically, people downsample the images to 32² and 64² for semantic interpolation, which does not meet the requirement for high resolution in the medical field. Thus, we explore a novel way to generate semantically correct interpolations and maintain the resolution at the same time. Our method has been proved to generate realistic and high-resolution interpolations at the sizes of 256² and 512². Our main contribution is, first, we propose a novel network, the High Resolution Interpolation Network (HRINet), aiming at producing semantically correct high-resolution CT image interpolations. Second, by combining the ideas of ACAI and GANs, we propose a unique alternative supervision method that applies supervised and unsupervised training alternately to raise the accuracy and fidelity of body structure in interpolated CT while keeping high quality. Third, we introduce an extra Markovian discriminator as a texture or fine-details regularizer to make our model generate results indistinguishable from real data. In addition, we explore other possibilities or tricks to further improve the performance of our model, including low-level feature map mixing and removing batch normalization layers within the autoencoder. Moreover, we compare the impacts of MSE-based and perceptual-based loss optimization methods for high-quality interpolation, and show the trade-off between structural correctness and sharpness. The interpolation experiments show significant improvement at both the 256² and 512² image sizes, quantitatively and qualitatively. We find that interpolations produced by HRINet are sharper and more realistic compared with other existing methods such as AE and ACAI in terms of various metrics. As an application of high-resolution interpolation, we have done 2D volume projection and 3D volume reconstruction from axial-view CT data and their interpolations. We show the great enhancement of applying HRINet for both, in sharpness and fidelity. Specifically, for 2D volume projection, we explore orthogonal projection and weighted projection respectively, so as to show the improved effectiveness for visualizing internal and external human body structure.
26

Malec, Stanislaw. "Semantic Segmentation with Carla Simulator". Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-105287.

Full text
Abstract
Autonomous vehicles perform semantic segmentation to orient themselves, but training neural networks for semantic segmentation requires large amounts of labeled data. A hand-labeled real-life dataset requires considerable effort to create, so we instead turn to virtual simulators where the segmented labels are known to generate large datasets virtually for free. This work investigates how effective synthetic datasets are in driving scenarios by collecting a dataset from a simulator and testing it against a real-life hand-labeled dataset. We show that we can get a model up and running faster by mixing synthetic and real-life data than traditional dataset collection methods and achieve close to baseline performance.
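Since the core idea of the abstract is training on a blend of simulator-generated and hand-labeled frames, the short sketch below shows one simple way to mix the two sources at a chosen ratio; the file-list representation, the oversampling of real data and all names are illustrative assumptions, not the thesis' actual pipeline.

import random

def mix_training_sets(synthetic: list[str], real: list[str],
                      real_fraction: float = 0.2, seed: int = 0) -> list[str]:
    """Return a shuffled training list in which roughly `real_fraction` of the
    items come from the (scarce) hand-labeled real set and the rest from the
    (cheap) simulator set. Real items are reused (oversampled) if needed."""
    rng = random.Random(seed)
    total = len(synthetic) + len(real)
    n_real = int(round(real_fraction * total))
    n_synth = total - n_real
    mixed = rng.choices(real, k=n_real) + rng.sample(synthetic, k=min(n_synth, len(synthetic)))
    rng.shuffle(mixed)
    return mixed

if __name__ == "__main__":
    sim_frames = [f"sim_{i:05d}.png" for i in range(1000)]    # simulator-labeled (cheap)
    real_frames = [f"real_{i:05d}.png" for i in range(50)]    # hand-labeled (scarce)
    train_list = mix_training_sets(sim_frames, real_frames, real_fraction=0.2)
    print(len(train_list), train_list[:3])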
27

Hsiao, Chih-Pin. "Toward semantic model generation from sketch and multi-touch interactions". Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54237.

Full text
Abstract
Designers usually start their design process by exploring and evolving their ideas rapidly through sketching since this helps them to make numerous attempts at creating, practicing, simulating, and representing ideas. Creativity inherent in solving the ill-defined problems (Eastman, 1969) often emerges when designers explore potential solutions while sketching in the design process (Schön, 1992). When using computer programs such as CAD or Building Information Modeling (BIM) tools, designers often preplan the tasks prior to executing commands instead of engaging in the process of designing. Researchers argue that these programs force designers to focus on how to use a tool (i.e. how to execute series of commands) rather than how to explore a design, and thus hinder creativity in the early stages of the design process (Goel, 1995; Dorta, 2007). Since recent design and documentation works have been computer-generated using BIM software, transitions between ideas in sketches and those in digital CAD systems have become necessary. By employing sketch interactions, we argue that a computer system can provide a rapid, flexible, and iterative method to create 3D models with sufficient data for facilitating smooth transitions between designers’ early sketches and BIM programs. This dissertation begins by describing the modern design workflows and discussing the necessary data to be exchanged in the early stage of design. It then briefly introduces the modern cognitive theories, including embodiment (Varela, Rosch, & Thompson, 1992), situated action (Suchman, 1986), and distributed cognition (Hutchins, 1995). It continues by identifying problems in current CAD programs used in the early stage of the design process, using these theories as lenses. After reviewing modern attempts, including sketch tools and design automation tools, we describe the design and implementation of a sketch and multi-touch program, SolidSketch, to facilitate and augment our abilities to work on ill-defined problems in the early stage of design. SolidSketch is a parametric modeling program that enables users to construct 3D parametric models rapidly through sketch and multi-touch interactions. It combines the benefits of traditional design tools, such as physical models and pencil sketches (i.e. rapid, low-cost, and flexible methods), with the computational power offered by digital modeling tools, such as CAD. To close the gap between modern BIM and traditional sketch tools, the models created with SolidSketch can be read by other BIM programs. We then evaluate the programs with comparisons to the commercial CAD programs and other sketch programs. We also report a case study in which participants used the system for their design explorations. Finally, we conclude with the potential impacts of this new technology and the next steps for ultimately bringing greater computational power to the early stages of design.
28

Duboc, Jean-Remy. "Dynamic feedback generation in virtual patients using semantic web technologies". Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/355540/.

Full text
Abstract
Virtual patients are interactive tools commonly used by medical schools for teaching and learning, and as training tools for the development of clinical reasoning. The feedback delivered to students is a crucial feature in virtual patients. Personalised feedback, in particular, helps students to reflect on their mistakes and to organise their knowledge in order to use it appropriately in a clinical context. However, authoring personalised feedback in virtual patient systems can become a difficult task, due to the large number of choices available to students and the complex implications of each choice. Additionally, the current technologies used for the design and exchange of virtual patients have limitations in terms of interoperability and data reusability. Semantic web technologies are designed to model complex knowledge in a flexible manner, allowing easy data sharing from multiple sources and automatic data processing. This thesis demonstrates the benefits of Semantic Web technologies for the design of virtual patients, in particular for the automatic generation of personalised feedback. Seven important types of personalised feedback were identified from the literature, and a preliminary survey showed that students in years 3 to 5 consider two of these types of feedback to be particularly useful: feedback indicating actions that each student should have chosen but neglected, and feedback indicating the diagnoses that each student should have tested and ruled out or confirmed, given the initial presentation of the patient. SemVP, a Semantic Web-based virtual patient system, was created and evaluated by medical students, using a quantitative survey and qualitative interviews. This study showed that SemVP can generate useful personalised feedback, without the need for a virtual case author to write feedback manually, using a semantic model representing both the virtual patient and each student's actions, and leveraging existing data sources available online.
29

Jo, In-Hee. "A unified semantic analysis of serialization : intensionality of event individuation". Virtual Press, 1993. http://liblink.bsu.edu/uhtbin/catkey/862290.

Full text
Abstract
Even though there has been little agreement as to how the phenomenon of serialization is to be defined, it is generally assumed (i) that there is a close meaning dependency (of some kind) between the event descriptions serialized (serials hereafter), (ii) that the serials cannot be modified independently by such sentential operators as tense, aspect, mood, etc., and (iii) that these aspects of serialization are closely related to the fact that a serial construction refers to a 'single event'. However, these assumptions have not been materialized into an explicit analysis of serialization. In particular, it has not been clearly accounted for how the concept of single event is attributed to the meaning dependencies between serials that are apparently so diverse as to defy a unique semantic characterization. Thus, in previous studies, the apparent heterogeneity of meaning dependencies has led to 'fragmentation' of serialization into coordinating and subordinating types, and of the subordinating type, in turn, into a variety of lexically governed subtypes. This dissertation argues against such fragmentation and provides a unified semantic analysis of serialization, drawing on the philosophical discussions of event individuation and causation. Under my analysis, the sense of inseparable connection between serials is represented by a counterfactual dependency between them. The counterfactual dependency only entails that the first serial is necessarily related to the second, without specifying the nature of the necessary relatedness. Thus, the variety of meaning dependencies observed in the literature can be accommodated as particular instances of the counterfactual dependency. Moreover, the single event reading of serialization is attributed to the 'counterfactual' dependency between serials: in virtue of the dependency, the serials are not identified independently of each other, and hence are construed as constituting a single event unit. A variety of structural constraints on serialization observed in the literature are then analyzed as natural consequences of the conceptual unity of the serialized events as a whole.
Department of English
30

Nguyen, Duc Minh Chau. "Affordance learning for visual-semantic perception". Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2021. https://ro.ecu.edu.au/theses/2443.

Full text
Abstract
Affordance Learning is linked to the study of interactions between robots and objects, including how robots perceive objects by scene understanding. This area has been popular in Psychology, which has recently come to influence Computer Vision. In this way, Computer Vision has borrowed the concept of affordance from Psychology in order to develop Visual-Semantic recognition systems, and to develop the capabilities of robots to interact with objects, in particular. However, existing systems of Affordance Learning are still limited to detecting and segmenting object affordances, which is called Affordance Segmentation. Further, these systems are not designed to develop specific abilities to reason about affordances. For example, a Visual-Semantic system, for captioning a scene, can extract information from an image, such as "a person holds a chocolate bar and eats it", but does not highlight the affordances: "hold" and "eat". Indeed, these affordances and others commonly appear within all aspects of life, since affordances usually connect to actions (from a linguistic view, affordances are generally known as verbs in sentences). Due to the above-mentioned limitations, this thesis aims to develop systems of Affordance Learning for Visual-Semantic Perception. These systems can be built using Deep Learning, which has been empirically shown to be efficient for performing Computer Vision tasks. There are two goals of the thesis: (1) study what are the key factors that contribute to the performance of Affordance Segmentation and (2) reason about affordances (Affordance Reasoning) based on parts of objects for Visual-Semantic Perception. In terms of the first goal, the thesis mainly investigates the feature extraction module as this is one of the earliest steps in learning to segment affordances. The thesis finds that the quality of feature extraction from images plays a vital role in improved performance of Affordance Segmentation. With regard to the second goal, the thesis infers affordances from object parts to reason about part-affordance relationships. Based on this approach, the thesis devises an Object Affordance Reasoning Network that can learn to construct relationships between affordances and object parts. As a result, reasoning about affordance becomes achievable in the generation of scene graphs of affordances and object parts. Empirical results, obtained from extensive experiments, show the potential of the system (that the thesis developed) towards Affordance Reasoning from Scene Graph Generation.
31

Le, Quang H. (Quang Huy). "GAN Mask R-CNN: instance semantic segmentation benefits from generative adversarial networks". Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/129896.

Full text
Abstract
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2020
Cataloged from student-submitted PDF of thesis.
Includes bibliographical references (pages 33-35).
In designing instance segmentation ConvNets that reconstruct masks, segmentation is often taken in its literal sense (assigning a label to every pixel) when defining the loss functions. That is, losses compute the difference between pixels in the predicted (reconstructed) mask and the ground-truth mask, a template-matching mechanism. However, any such instance segmentation ConvNet is a generator, so we can cast the problem of predicting masks in a GAN framework: the ground-truth mask is drawn from the true distribution, and a ConvNet like Mask R-CNN is an implicit model that infers the true distribution. In GAN terms, Mask R-CNN is the generator that reconstructs a mask as the fake one. We then send the fake mask and the real (ground-truth) one to a discriminator (critic). By playing a min-max game, we want Mask R-CNN to fool the critic, and the critic to distinguish between real and fake masks. In this way, we take advantage of a region proposal network (implemented in Mask R-CNN) to design a generator, and the benefit of a critic network to design a better loss function as opposed to a template-matching one. We discuss how we utilize GAN training-stability techniques in practice to make this concept work. We show this GAN framework performs better than the original Mask R-CNN. Furthermore, we show the results give crisper boundaries, a traditional challenge of ConvNets, where there is a trade-off between having a higher level of semantics and finer boundaries.
by Quang H. Le.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
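A rough sketch of the adversarial training idea described in the abstract above, assuming PyTorch; the toy generator, critic, tensor sizes, and loss weighting are illustrative assumptions and not the thesis' implementation (Mask R-CNN's region-proposal machinery is omitted):

import torch
import torch.nn as nn

# Toy stand-ins: a mask "generator" (in the thesis, Mask R-CNN's mask head) and a
# "critic" that scores 28x28 masks as real (ground truth) or fake (predicted).
generator = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 1), nn.Sigmoid())
critic    = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.LeakyReLU(0.2),
                          nn.Linear(64, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

roi_features = torch.randn(8, 3, 28, 28)            # pretend RoI-aligned features
gt_masks = (torch.rand(8, 1, 28, 28) > 0.5).float()  # pretend ground-truth masks

for step in range(100):
    # --- critic step: distinguish ground-truth masks from generated ones ---
    fake = generator(roi_features).detach()
    loss_c = bce(critic(gt_masks), torch.ones(8, 1)) + bce(critic(fake), torch.zeros(8, 1))
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # --- generator step: fool the critic, optionally plus a per-pixel term ---
    pred = generator(roi_features)
    loss_g = bce(critic(pred), torch.ones(8, 1)) \
             + nn.functional.binary_cross_entropy(pred, gt_masks)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

The point of the sketch is that the critic replaces, or supplements, the per-pixel template-matching loss with a learned measure of mask realism.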
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Johnson, Christopher Wayne. "Mechanical generation of concrete syntax rules for the Schütz semantic editor". Diss., A link to full text of this thesis in SOAR, 2007. http://soar.wichita.edu/dspace/handle/10057/1141.

Texto completo
Resumen
Thesis (M.S.)--Wichita State University, College of Liberal Arts and Sciences, Dept. of Computer Science.
"May 2007." Title from PDF title page (viewed on Dec. 26, 2007). Thesis adviser: Rajiv Bagai. Includes bibliographical references (65-66 leaves).
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Wojtinnek, Pia-Ramona. "Generation and application of semantic networks from plain text and Wikipedia". Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:8b9e1aab-ff11-45a4-b321-e95cd2cb4a30.

Texto completo
Resumen
Natural Language Processing systems crucially depend on the availability of lexical and conceptual knowledge representations. They need to be able to disambiguate word senses and detect synonyms. In order to draw inferences, they require access to hierarchical relations between concepts (dog isAn animal) as well as non-hierarchical ones (gasoline fuels car). Knowledge resources such as lexical databases, semantic networks and ontologies explicitly encode such conceptual knowledge. However, traditionally, these have been manually created, which is expensive and time consuming for large resources, and cannot provide adequate coverage in specialised domains. In order to alleviate this acquisition bottleneck, statistical methods have been created to acquire lexical and conceptual knowledge automatically from text. In particular, unsupervised techniques have the advantage that they can be easily adapted to any domain, given some corpus on the topic. However, due to sparseness issues, they often require very large corpora to achieve high quality results. The spectrum of resources and statistical methods has a crucial gap in situations when manually created resources do not provide the necessary coverage and only limited corpora are available. This is the case for real-world domain applications such as an NLP system for processing technical information based on a limited amount of company documentation. We provide a large-scale demonstration that this gap can be filled through the use of automatically generated networks. The corpus is automatically transformed into a network representing the terms or concepts which occur in the text and their relations, based entirely on linguistic tools. The networks structurally lie in between the unstructured corpus and the highly structured manually created resources. We show that they can be useful in situations for which neither existing approach is applicable. In contrast to manually created resources, our networks can be generated quickly and on demand. Conversely, they make it possible to achieve higher quality representations from less text than corpus-based methods, relieving the requirement of very large scale corpora. We devise scalable frameworks for building networks from plain text and Wikipedia with varying levels of expressiveness. This work creates concrete networks from the entire British National Corpus covering 1.2m terms and 21m relations and a Wikipedia network covering 2.7m concepts. We develop a network-based semantic space model and evaluate it on the task of measuring semantic relatedness. In addition, noun compound paraphrasing is tackled to demonstrate the quality of the indirect paths in the network for concept relation description. On both evaluations we achieve results competitive to the state of the art. In particular, our network-based methods outperform corpus-based methods, demonstrating the gain created by leveraging the network structure.
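As a toy illustration of using such a network for semantic relatedness (assuming the networkx package; the graph and scoring function are invented and much simpler than the thesis' linguistically derived networks):

import networkx as nx

# Toy term network; in the thesis the nodes and edges come from parsing the BNC and Wikipedia.
G = nx.Graph()
G.add_edges_from([("dog", "animal"), ("cat", "animal"), ("dog", "pet"), ("cat", "pet"),
                  ("gasoline", "fuel"), ("fuel", "car"), ("car", "engine")])

def relatedness(a: str, b: str) -> float:
    """Simple path-based relatedness: shorter paths give higher scores, 0 if disconnected."""
    try:
        d = nx.shortest_path_length(G, a, b)
    except nx.NetworkXNoPath:
        return 0.0
    return 1.0 / (1.0 + d)

print(relatedness("dog", "cat"))          # related via 'animal'/'pet' (distance 2), about 0.33
print(relatedness("gasoline", "engine"))  # related via 'fuel' and 'car' (distance 3), 0.25
print(relatedness("dog", "engine"))       # no path in this toy graph, 0.0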
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Tao, Cui. "Ontology generation, information harvesting, and semantic annotation for machine-generated web pages". Diss., Brigham Young University, 2009. http://contentdm.lib.byu.edu/ETD/image/etd2762.pdf.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Tao, Cui. "Ontology Generation, Information Harvesting and Semantic Annotation for Machine-Generated Web Pages". BYU ScholarsArchive, 2008. https://scholarsarchive.byu.edu/etd/1646.

Texto completo
Resumen
The current World Wide Web is a web of pages. Users have to guess possible keywords that might lead through search engines to the pages that contain information of interest and browse hundreds or even thousands of the returned pages in order to obtain what they want. This frustrating problem motivates an approach to turn the web of pages into a web of knowledge, so that web users can query the information of interest directly. This dissertation provides a step in this direction and a way to partially overcome the challenges. Specifically, this dissertation shows how to turn machine-generated web pages like those on the hidden web into semantic web pages for the web of knowledge. We design and develop three systems to address the challenge of turning the web pages into web-of-knowledge pages: TISP (Table Interpretation for Sibling Pages), TISP++, and FOCIH (Form-based Ontology Creation and Information Harvesting). TISP can automatically interpret hidden-web tables. Given interpreted tables, TISP++ can generate ontologies and semantically annotate the information present in the interpreted tables automatically. In this way, we offer a way to make the hidden information publicly accessible. We also provide a way for users to generate personalized ontologies. FOCIH provides users with an interface with which they can express their own view by creating a form that specifies the information they want. Based on the form, FOCIH can generate user-specific ontologies, and based on patterns in machine-generated pages, FOCIH can harvest information and annotate these pages with respect to the generated ontology. Users can directly query the annotated information. With these contributions, this dissertation serves as a foundational pillar for turning the current web of pages into a web of knowledge.
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

LIU, HSIAO-MEI. "A CATEGORIAL GRAMMAR ANALYSIS OF CHINESE SEPARABLE COMPOUNDS AND PHRASES (SYNTAX, SEMANTICS)". Diss., The University of Arizona, 1986. http://hdl.handle.net/10150/183896.

Texto completo
Resumen
The lexicon of modern Chinese is composed mainly of disyllabic compound words; some of the compounds are separable, while others are not. Hindered by problems with the definition of the Chinese word and by the concept of separate grammatical levels on which morphological, syntactic and semantic processes occur, previous linguistic studies have been unable to fully account for the separability of some compounds and for the relationship of compound separability to phrase separability. This dissertation finds that, with morphemes having the same syntactic association with other morphemes that words or phrases have with other words or phrases, categorial rules logically explain the common syntax of Chinese words and phrases. In categorial grammar analysis based on the work of Ajdukiewicz (1935), Montague (1974), Partee (1972; 1975), and Bach (1983; 1984), categories are determined by functions associating the expressions in component sets, and syntactic operations build categories up into larger derived categories according to specified functor-argument relations. In the present analysis of Chinese, to the set of the non-verb general category belong morphemes, words and phrases whose form classes are not verbs and which are generic names. Argument expressions, both compound words and verb phrases which belong to this category, combine with the intransitive/non-verb general functor to form the IV category. Rules operating by concatenation, cliticization and wrapping account for the occurrence of resultative expressions, aspect markers, and expressions of time duration or time frequency between the components of separable compounds. Further, the hierarchy of thematic roles devised by Jackendoff (1972) is applied to account for cases in which the functors in IV combine with more than one argument. In this way, an analysis which combines principles of morphology, syntax and semantics is able to account for the identity of compound and phrase separability and derive grammatical sentences for the language.
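As a rough illustration of the functor-argument machinery (a generic categorial-grammar toy in Python, not the dissertation's formalism or its wrapping and cliticization rules):

from dataclasses import dataclass

@dataclass(frozen=True)
class Cat:
    name: str                 # basic category, e.g. "NP", "S"

@dataclass(frozen=True)
class Functor:
    result: object            # category produced
    arg: object               # category consumed

def apply(functor, argument):
    """Functional application: a functor combines with a matching argument category."""
    if isinstance(functor, Functor) and functor.arg == argument:
        return functor.result
    raise TypeError("category mismatch")

NP, S = Cat("NP"), Cat("S")
IV = Functor(S, NP)           # intransitive/verb-phrase category: needs a subject NP to give S
TV = Functor(IV, NP)          # transitive functor: needs an object NP to give IV

# A separable compound behaves like a transitive functor taking its object,
# exactly as a verb phrase would:
vp = apply(TV, NP)            # compound or verb + object NP -> IV
sentence = apply(vp, NP)      # subject NP + IV -> S
print(vp, sentence)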
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Meiklejohn, Mark. "Automated software development and model generation by means of syntactic and semantic analysis". Thesis, University of Strathclyde, 2014. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=24855.

Texto completo
Resumen
Software development is a global activity, and the development of a software system starts from some requirement that describes the problem domain. These requirements need to be communicated so that the software system can be fully engineered, and in the majority of cases the communication of software requirements typically takes the form of written text, which is difficult to transform into a model of the software system and consumes an inordinate amount of project effort. This thesis proposes and evaluates a fully automated analysis and model creation technique that exploits the syntactic and semantic information contained within an English natural language requirements specification to construct a Unified Modelling Language (UML) model of the software requirements. The thesis provides a detailed description of the related literature, a thorough description of the Common Semantic Model (CSM) and Syntactic Analysis Model (SAM), and the results of a qualitative and comparative evaluation given realistic requirement specifications and ideal models. The research findings confirm that the CSM and SAM models can identify classes, relationships, multiplicities, operations, parameters and attributes, all from the written natural language requirements specification, which is subsequently transformed into a UML model. Furthermore, this transformation is undertaken without the need for manual intervention or manipulation of the requirements specification.
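As a rough illustration of the syntactic side of such an analysis (assuming spaCy and its small English model are installed; the mapping rules below are simplified assumptions, not the CSM/SAM models):

import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def candidates(requirement: str):
    """Very rough mapping: nouns become candidate classes, verbs become candidate operations."""
    doc = nlp(requirement)
    classes, operations = set(), set()
    for tok in doc:
        if tok.pos_ in ("NOUN", "PROPN"):
            classes.add(tok.lemma_.capitalize())
        elif tok.pos_ == "VERB":
            operations.add(tok.lemma_)
    return classes, operations

cls, ops = candidates("The customer places an order and the system sends a confirmation.")
print(cls)  # e.g. {'Customer', 'Order', 'System', 'Confirmation'}
print(ops)  # e.g. {'place', 'send'}

A real pipeline would add semantic analysis on top of this to decide which nouns are classes versus attributes, attach operations to owning classes, and derive relationships and multiplicities.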
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Chee, Tahir Aidid. "A framework for the semantic representation of energy policies related to electricity generation". Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:2c1f7a3c-4464-4bd0-b40b-67a0ad419529.

Texto completo
Resumen
Energy models are optimisation tools which aid in the formulation of energy policies. Built on mathematics, the strength of these models lies in their ability to process numerical data, which in turn allows for the generation of an electricity generation mix that incorporates economic and environmental aspects. Nevertheless, a comprehensive formulation of an electricity generation mix should include aspects associated with politics and society, an evaluation of which requires the consideration of non-numerical qualitative information. Unfortunately, the use of energy models for optimisation coupled with the evaluation of information other than numerical data is a complicated task. Two prerequisites must be fulfilled for energy models to consider political and societal aspects. First, the information associated with politics and society in the context of energy policies must be identified and defined. Second, a software tool which automatically converts both quantitative and qualitative data into mathematical expressions for optimisation is required. We propose a software framework which uses a semantic representation based on ontologies. Our semantic representation contains both qualitative and quantitative data. The semantic representation is integrated into an Optimisation Modelling System which outputs a model consisting of a set of mathematical expressions. The system uses ontologies, engineering models, logic inference and linear programming. To demonstrate our framework, a Prototype Energy Modelling System which accepts energy policy goals and targets as inputs and outputs an optimised electricity generation mix has been developed. To validate the capabilities of our prototype, a case study has been conducted. This thesis discusses the framework, prototype and case study.
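The optimisation core that such a framework ultimately emits can be pictured as a small linear programme; the sketch below uses scipy, with invented costs, emission factors, and policy targets rather than the thesis' actual model:

from scipy.optimize import linprog

# Decision variables: generated energy (GWh) from coal, gas, wind (illustrative figures).
cost      = [50.0, 70.0, 90.0]     # cost per GWh
emissions = [1.0, 0.5, 0.0]        # tCO2 per GWh
demand    = 100.0                  # GWh that must be supplied
co2_cap   = 40.0                   # emissions cap encoded from the policy inputs

res = linprog(
    c=cost,
    A_ub=[emissions], b_ub=[co2_cap],         # total emissions <= cap
    A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],    # total generation == demand
    bounds=[(0, None)] * 3,
    method="highs",
)
print(res.x)   # optimal mix; the CO2 cap forces generation away from the cheapest source

Qualitative policy statements would first have to be translated into such numerical constraints, which is exactly the gap the semantic representation is meant to bridge.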
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Klintberg, Josefine. "Per-actor Based Optimization for Semantic-preserving Facial Rig Generation Using Sample Data". Thesis, Linköpings universitet, Medie- och Informationsteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177887.

Texto completo
Resumen
Emphasizing the need to combine recent research and technology on automatic facial rig generation with the artistic aspects of using digital humans within film production pipelines, this thesis project presents a scalable blendshape optimization framework that fits within a VFX pipeline, provides stability for various kinds of usage, and makes the workflow of creating facial rigs more efficient. The framework successfully generates per-actor facial rigs adapted to sample data while ensuring that the semantics of the input rig are preserved in the process. Built around a reusable generic model, gradient-based deformations, user-driven regularization terms, rigid alignment, and the possibility to split blendshapes into symmetrical halves, the proposed framework provides a stable algorithm that can be applied to any target blendshape. The proposed framework serves as a source for investigating and evaluating parameters and solutions related to automatic facial rig generation and optimization.
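The per-target fitting step can be pictured as a regularised least-squares problem over blendshape weights; the numpy sketch below is a generic formulation (minimise ||Bw - d||^2 + lambda ||w||^2) with invented sizes, not the thesis' solver or its gradient-based and symmetry terms:

import numpy as np

rng = np.random.default_rng(0)

n_vertices, n_shapes = 300, 10
B = rng.normal(size=(3 * n_vertices, n_shapes))   # blendshape deltas, one column per shape
w_true = rng.uniform(0, 1, size=n_shapes)
d = B @ w_true + 0.01 * rng.normal(size=3 * n_vertices)  # sample deformation to match

lam = 1e-2   # user-driven regularisation strength
# Closed-form ridge solution: (B^T B + lam I) w = B^T d
w = np.linalg.solve(B.T @ B + lam * np.eye(n_shapes), B.T @ d)

print(np.round(w, 3))        # recovered weights, close to w_true
print(np.round(w_true, 3))

In a production setting B would come from the generic rig, d from per-actor scan or capture data, and additional terms would keep the optimised shapes semantically consistent with the input rig.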

The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.

Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Ferreira, António Alberto Vares. "Organizational modeling with a semantic wiki: formalization of content and automatic diagram generation". Master's thesis, Universidade da Madeira, 2008. http://hdl.handle.net/10400.13/112.

Texto completo
Resumen
A key to maintaining an enterprise's competitiveness is the ability to describe, standardize, and adapt the way it reacts to certain types of business events and the way it interacts with suppliers, partners, competitors, and customers. In this context the field of organization modeling has emerged with the aim of creating models that help establish a state of self-awareness in the organization. This project's context is the use of the Semantic Web in the area of organizational modeling. The advantages of Semantic Web technology can be used to improve the way organizations are modeled. This was accomplished by using a semantic wiki to model organizations. Our research and implementation had two main purposes: formalization of textual content in semantic wiki pages, and automatic generation of diagrams from organization data stored in the semantic wiki pages.
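The automatic diagram generation step can be pictured as turning formalized wiki statements (subject, relation, object triples) into a diagram description; the sketch below emits Graphviz DOT text from invented triples and is only an illustration of the idea, not the project's implementation:

# Formalized organizational statements harvested from semantic wiki pages (illustrative).
triples = [
    ("Sales", "reports_to", "Management"),
    ("Support", "reports_to", "Management"),
    ("Sales", "uses", "CRM System"),
]

def to_dot(triples) -> str:
    """Render (subject, relation, object) triples as a Graphviz DOT digraph."""
    lines = ["digraph org {"]
    for s, rel, o in triples:
        lines.append('  "%s" -> "%s" [label="%s"];' % (s, o, rel))
    lines.append("}")
    return "\n".join(lines)

print(to_dot(triples))   # paste into Graphviz, or render with a DOT-capable library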
Advisor: Pedro Campos; Co-advisor: David Aveiro.
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Oldham, Joseph D. "DEXTER: Generating Documents by means of computational registers". UKnowledge, 2000. http://uknowledge.uky.edu/gradschool_diss/321.

Texto completo
Resumen
Software is often capable of efficiently storing and managing data on computers. However, even software systems that store and manage data efficiently often do an inadequate job of presenting data to users. A prototypical example is the display of raw data in the tabular results of SQL queries. Users may need a presentation that is sensitive to data values and sensitive to domain conventions. One way to enhance presentation is to generate documents that correctly convey the data to users, taking into account the needs of the user and the values in the data. I have designed and implemented a software approach to generating human-readable documents in a variety of domains. The software to generate a document is called a computational register, or "register" for short. A register system is a software package for authoring and managing individual registers. Registers generating documents in various domains may be managed by one register system. In this thesis I describe computational registers at an architectural level and discuss registers as implemented in DEXTER, my register system. Input to DEXTER registers is a set of SQL query results. DEXTER registers use a rule-based approach to create a document outline from the input. A register creates the output document by using flexible templates to express the document outline. The register approach is unique in several ways. Content determination and structural planning are carried out sequentially rather than simultaneously. Content planning itself is broken down into data re-representation followed by content selection. No advanced linguistic knowledge is required to understand the approach. Register authoring follows a course very similar to writing a single document. The internal data representation and content planning steps allow registers to use flexible templates, rather than more abstract grammar-based approaches, to render the final document. Computational registers are applicable in a variety of domains. What registers can be written is restricted not by domain, but by the original data representation. Finally, DEXTER shows that a single software suite can assist in authoring and management of a variety of registers.
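The register pipeline sketched in the abstract (data re-representation, rule-based content selection, then flexible template rendering) can be illustrated roughly as follows; the row data, rule, and template are invented for illustration and are not DEXTER's:

from string import Template

# Step 1: data re-representation -- a raw SQL-style row becomes a domain-friendly dict.
row = {"patient": "A. Jones", "systolic": 162, "diastolic": 101}

# Step 2: rule-based content selection that is sensitive to data values.
def select_content(r):
    facts = {"name": r["patient"], "bp": f'{r["systolic"]}/{r["diastolic"]}'}
    if r["systolic"] >= 140 or r["diastolic"] >= 90:
        facts["assessment"] = "blood pressure is elevated and should be rechecked"
    else:
        facts["assessment"] = "blood pressure is within the normal range"
    return facts

# Step 3: flexible template rendering instead of a grammar-based realiser.
template = Template("Patient $name was measured at $bp; the $assessment.")
print(template.substitute(select_content(row)))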
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Serrano, José Artur Ferreira da Silva e. Vale. "Automatic generation of software design tools supporting semantics of modelling techniques". Thesis, University of Glasgow, 1997. http://theses.gla.ac.uk/7018/.

Texto completo
Resumen
A generic visual language for diagram editing has been created which may be instantiated for a particular modelling technique. The instantiated visual language is used within the generated design tool. It provides non-obtrusive guidance observing the semantics of the underlying modelling technique. The tool user is allowed to produce intermediate inconsistent states to facilitate diagram editing. A prototype, comprising a compiler for the specification language, a multi-purpose graph tool and a configurable design tool was implemented to prove the feasibility of the complete approach. The prototype compiler generates executable code. The target of the compiler is the persistent programming language Napier88. The architecture of the compiler has been designed as two separate components: a front-end (the parser), and a back-end (the code generator). This approach gives independence from the target language. The code generator comprises a number of translation rules to produce Napier88 from a VC-t specification. The prototype compiler only incorporates some of the translation rules to demonstrate that executable Napier88 can be generated automatically. The remainder of the translation rules have been applied manually to produce Napier88, to demonstrate that the rules are correct. The multi-purpose graph tool, designated as GraphTool, can be used with the class of applications using graphs as their main data structures. It provides support for visual and interactive representations of an application. The application acts as a client of the Graph-Tool. A protocol has been defined to connect the client application to the GraphTool. Any number of visual representations can be associated with the application. Maps are used for this purpose: to change the representation it is only necessary to change the map. The GraphTool includes a constraint manager used to associate constraints with the representations objects and check these at run-time. The configurable design tool has been developed as a client for the GraphTool. It includes generic components for diagram editing and a drawing canvas. The functionality of the canvas is fully provided by the GraphTool.
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Konrad, Karsten. "Model generation for natural language interpretation and analysis /". Berlin [u.a.] : Springer, 2004. http://www.loc.gov/catdir/enhancements/fy0818/2004042936-d.html.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Iyer, Laxmi R. "CANDID - A Neurodynamical Model of Idea Generation". University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1326828617.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Ackerman, Wesley. "Semantic-Driven Unsupervised Image-to-Image Translation for Distinct Image Domains". BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8684.

Texto completo
Resumen
We expand the scope of image-to-image translation to include more distinct image domains, where the image sets have analogous structures, but may not share object types between them. Semantic-Driven Unsupervised Image-to-Image Translation for Distinct Image Domains (SUNIT) is built to more successfully translate images in this setting, where content from one domain is not found in the other. Our method trains an image translation model by learning encodings for semantic segmentations of images. These segmentations are translated between image domains to learn meaningful mappings between the structures in the two domains. The translated segmentations are then used as the basis for image generation. Beginning image generation with encoded segmentation information helps maintain the original structure of the image. We qualitatively and quantitatively show that SUNIT improves image translation outcomes, especially for image translation tasks where the image domains are very distinct.
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Sabuncu, Orkunt. "Using Model Generation Theorem Provers For The Computation Of Answer Sets". Phd thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610705/index.pdf.

Texto completo
Resumen
Answer set programming (ASP) is a declarative approach to solving search problems. Logic programming constitutes the foundation of ASP. ASP is not a proof-theoretical approach where you get solutions by answer substitutions. Instead, the problem is represented by a logic program in such a way that models of the program according to the answer set semantics correspond to solutions of the problem. Answer set solvers (Smodels, Cmodels, Clasp, and Dlv) are used for finding answer sets of a given program. Although users can write programs with variables for convenience, current answer set solvers work on ground logic programs where there are no variables. The grounding step of ASP generates a propositional instance of a logic program with variables. It may generate a huge propositional instance and make the search process of answer set solvers more difficult. Model generation theorem provers (Paradox, Darwin, and FM-Darwin) have the capability of producing a model when the first-order input theory is satisfiable. This work proposes the use of model generation theorem provers as computational engines for ASP. The main motivation is to eliminate the grounding step of ASP completely or to perform it more intelligently using the model generation system. Additionally, regardless of grounding, model generation systems may display better performance than the current solvers. The proposed method can be seen as lifting SAT-based ASP, where SAT solvers are used to compute answer sets, to the first-order level for tight programs. A completion procedure which transforms a logic program to formulas of first-order logic is utilized. Besides completion, other transformations which are necessary for forming a first-order theory suitable for model generation theorem provers are investigated. A system called Completor is implemented for handling all the necessary transformations. The empirical results demonstrate that the use of Completor and the theorem provers together can be an effective way of computing answer sets. In particular, the run-time results of Paradox in the experiments have shown that using Completor and Paradox together is favorable compared to answer set solvers. This advantage has been more clearly observed for programs with large propositional instances, since grounding can be a bottleneck for such programs.
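The completion step can be illustrated at a propositional scale: for a tight program, each atom's rules are gathered into an "if and only if" formula whose classical models coincide with the answer sets. The Python sketch below is a simplified propositional version of Clark completion, not Completor itself:

# Each rule is (head, [positive body atoms], [negated body atoms]).
program = [
    ("p", ["q"], []),          # p :- q.
    ("p", ["r"], ["s"]),       # p :- r, not s.
    ("q", [], []),             # q.
]

def clark_completion(rules):
    """Clark completion: head <-> disjunction of rule bodies; unsupported atoms become false."""
    atoms = {h for h, _, _ in rules} | {a for _, pos, neg in rules for a in pos + neg}
    completion = {}
    for atom in sorted(atoms):
        bodies = []
        for head, pos, neg in rules:
            if head == atom:
                lits = pos + ["not " + a for a in neg]
                bodies.append(" & ".join(lits) if lits else "true")
        completion[atom] = atom + " <-> " + (" | ".join(bodies) if bodies else "false")
    return completion

for formula in clark_completion(program).values():
    print(formula)
# p <-> q | r & not s
# q <-> true
# r <-> false
# s <-> false

Feeding such formulas (lifted to first-order logic in the thesis) to a model generation prover then yields, for tight programs, exactly the answer sets.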
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Åkerström, Joakim y Aravena Carlos Peñaloza. "Semantiska modeller för syntetisk textgenerering - en jämförelsestudie". Thesis, Högskolan i Borås, Akademin för bibliotek, information, pedagogik och IT, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-13981.

Texto completo
Resumen
This knowledge overview examines the research field concerning music-integrated mathematics teaching. The purpose of the overview is to gain insight into how music affects pupils' mathematics performance and what research on this combination looks like. Our research question is therefore: What characterizes the research on the integration of mathematics and music? To answer this question, we carried out literature searches to find studies and articles that together form an overview. Using the method Claes Nilholm describes in SMART (2016), we created a structure for our work. In the material found during the searches, we identified patterns that speak for music's positive impact on mathematics teaching. The ability to express one's feelings in words, or to move others with them, has always been an admirable and rare quality. This project is about creating a text generator capable of writing text in the style of outstanding men and women with this quality. The work was carried out by training a neural network on quotes written by remarkable people such as Oscar Wilde, Mark Twain, Charles Dickens, etc. The network works together with two different semantic models, Word2Vec and One-Hot, and all three are the parts our text generator consists of. With the generated texts, a survey was conducted to collect students' opinions on the quality of the generated texts and thereby evaluate the suitability of the different semantic models. After analyzing the results, we learned that most respondents found the texts they read coherent and fun. We also learned that Word2Vec performed significantly better than One-Hot.
The ability to express feelings in words, or to move others with them, has always been an admired and rare quality. This project involves creating a text generator able to write text in the style of remarkable men and women with this ability, this gift. This has been done by training a neural network with quotes written by outstanding people such as Oscar Wilde, Mark Twain and Charles Dickens. This neural network cooperates with two different semantic models, Word2Vec and One-Hot, and the three of them make up our text generator. With the text generated, we carried out a survey in order to collect the opinions of students about the quality of the text produced by our generator. Upon examination of the results, we learned that most of the respondents thought the texts were coherent and fun to read; we also learned that the former semantic model (Word2Vec) performed better than the latter, though not by an order of magnitude.
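A rough sketch of the two input representations compared in the thesis, assuming the gensim package for Word2Vec; the corpus, vector size, and other parameters are placeholders rather than the authors' setup:

import numpy as np
from gensim.models import Word2Vec

# Tiny placeholder corpus of "quotes"; the thesis trains on quotes by famous authors.
corpus = [
    "we are all in the gutter but some of us are looking at the stars".split(),
    "the truth is rarely pure and never simple".split(),
]
vocab = sorted({w for sent in corpus for w in sent})

# Representation 1: one-hot vectors, one dimension per vocabulary word.
def one_hot(word):
    vec = np.zeros(len(vocab))
    vec[vocab.index(word)] = 1.0
    return vec

# Representation 2: dense Word2Vec embeddings learned from the corpus.
w2v = Word2Vec(sentences=corpus, vector_size=32, window=3, min_count=1, epochs=50)

print(one_hot("stars").shape)                 # (len(vocab),) sparse, no notion of similarity
print(w2v.wv["stars"].shape)                  # (32,) dense; related words end up close together
print(w2v.wv.most_similar("stars", topn=3))   # nearest neighbours in embedding space

Either representation can then be fed to the text-generating network; the survey in the thesis compares how readers judge the resulting output.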
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Al, Qundus Jamal [Verfasser]. "Generating Trust in Collaborative Environments : Evaluating Design Parameters in Area of Semantic Annotations / Jamal Al Qundus". Berlin : Freie Universität Berlin, 2019. http://d-nb.info/1179277937/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Chung, So-Woo. "Cataphora in discourse representation theory /". Thesis, Connect to this title online; UW restricted, 1992. http://hdl.handle.net/1773/8424.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Lavers, Peter Stanley. "Generating intensional logics : the application of paraconsistent logics to investigate certain areas of the boundaries of mathematics /". Title page, table of contents and summary only, 1985. http://web4.library.adelaide.edu.au/theses/09ARM/09arml399.pdf.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
