
Dissertations / Theses on the topic 'Polyglot'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Polyglot.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Tomassetti, Federico Cesare Argentino. "Polyglot software development." Doctoral thesis, Politecnico di Torino, 2014. http://hdl.handle.net/11583/2537697.

Full text
Abstract:
The languages we choose to design solutions influence the way we think about the problem, the words we use in discussing it with colleagues, and the processes we adopt in developing the software that should solve that problem. We should therefore strive to use the best possible language for depicting each facet of the system. To do that we have to meet two challenges: i) understand the merits and issues brought by the languages we could adopt and their long-reaching effects on organizations, and ii) combine them wisely, trying to reduce the overhead of assembling them. In the first part of this dissertation we study the adoption of modeling and domain-specific languages. On the basis of an industrial survey we identify a list of benefits attainable through these languages, how frequently they can be reached, and which techniques improve the chances of obtaining a particular benefit. In the same way we also study the common problems which either prevent or hinder the adoption of these languages. We then analyze the processes through which these languages are employed, studying the relative frequency of usage of the different techniques and the factors influencing it. Finally we present two case studies, performed in a small and in a very large company, with the intent of presenting the peculiarities of adoption in different contexts. As a consequence of adopting specialized languages, many of them have to be employed to represent the complete solution. In the second part of the thesis we therefore focus on the integration of these languages. Since this topic is still very new, we performed preliminary studies to first understand the phenomenon, studying the different ways in which languages interact and their effects on defectivity. We then present prototypal solutions for i) the automatic spotting of cross-language relations, and ii) the design of language-integration tool support in language workbenches through the exploitation of common meta-metamodeling. This thesis aims to offer a contribution towards the productive adoption of multiple, specific languages in the same software development project, hence polyglot software development. From this approach we should be able to reduce the complexity due to misrepresentation of solutions, offer better facilities for thinking about problems and, finally, solve more difficult problems with our limited brain resources. Our results consist in a better understanding of MDD and DSL adoption in companies. From that we derive guidelines for practitioners, lessons learned for deployment in companies of different sizes, and implications for the other actors involved in the process: company management and universities. Regarding cross-language relations, our contribution is an initial definition of the problem, supported by some empirical evidence of its importance. The solutions we propose are not yet mature, but we believe future work can stem from them.
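The prototype for spotting cross-language relations is only summarized above. As a purely illustrative sketch of the underlying idea (identifiers shared across artifacts written in different languages often act as implicit links between them), consider the following hypothetical Python fragment; the file names and the keyword stop-list are invented for the example and do not come from the thesis.

import re

# Illustrative sketch only (not the thesis prototype): treat identifiers that
# occur in source artifacts written in different languages as candidate
# cross-language relations, e.g. a table name shared by SQL DDL and Java code.
IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]{2,}")

def identifiers(path):
    with open(path, encoding="utf-8") as f:
        return set(IDENT.findall(f.read()))

def candidate_relations(path_a, path_b):
    # Identifiers shared by two source files, minus common keywords.
    stop_list = {"public", "class", "static", "return", "select", "from",
                 "where", "create", "table", "void", "int", "new"}
    shared = identifiers(path_a) & identifiers(path_b)
    return sorted(s for s in shared if s.lower() not in stop_list)

# Hypothetical artifact names, for illustration:
# print(candidate_relations("CustomerDao.java", "schema.sql"))

A real detector would of course use language-aware parsing rather than raw regular expressions; the point is only that cross-language links are implicit in shared names.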
APA, Harvard, Vancouver, ISO, and other styles
2

Williams, James. "Polyglot passages : multilingualism and the twentieth-century novel." Thesis, Queen Mary, University of London, 2017. http://qmro.qmul.ac.uk/xmlui/handle/123456789/25985.

Full text
Abstract:
This thesis reads the twentieth-century novel in light of its engagement with multilingualism. It treats the multilingual as a recurring formal preoccupation for writers working predominantly in English, but also as an emergent historical problematic through which they confront the linguistic and political inheritances of empire. The project thus understands European modernism as emerging from empire, and reads its formal innovations as engagements with the histories and quotidian realities of language use in the empire and in the metropolis. In addition to arguing for a rooting of modernism in the language histories of empire, I also argue for the multilingual as a potential linkage between European modernist writing and the writing of decolonisation, treating the Caribbean as a particularly productive region for this kind of enquiry. Ultimately, I argue that these periodical groupings - the modernist and the postcolonial - can be understood as part of a longer chronology of the linguistic legacy of empire. The thesis thus takes its case studies from across the twentieth century, moving between Europe and the Caribbean. The first chapter considers Joseph Conrad as the paradigmatic multilingual writer of late colonialism and early modernism, and the second treats Jean Rhys as a problematic late modernist of Caribbean extraction. The second half of the thesis reads texts more explicitly preoccupied with the Caribbean: the third chapter thus considers linguistic histories of Guyana and the Americas in the works of the experimental novelist Wilson Harris, and the fourth is concerned with the inventive and polemical contemporary Dominican-American novelist, Junot Díaz.
APA, Harvard, Vancouver, ISO, and other styles
3

Romsdorfer, Harald. "Polyglot text to speech synthesis: text analysis & prosody control." Aachen: Shaker, 2009. http://d-nb.info/993448836/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Romsdorfer, Harald [Verfasser]. "Polyglot Text-to-Speech Synthesis : Text Analysis & Prosody Control / Harald Romsdorfer." Aachen : Shaker, 2009. http://d-nb.info/1156517354/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sanz, Ortega Elena. "Beyond monolingualism : a descriptive and multimodal methodology for the dubbing of polyglot films." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/28692.

Full text
Abstract:
The days of English as the hegemonic language of cinema are slowly disappearing. Nowadays, filmmakers from different film industries are gradually embracing a multilingual shoot where languages coexist and play a key role within a film’s diegesis. This polyglot reality has brought up interesting questions and issues for the discipline of Translation Studies, where translation has been traditionally understood more in terms of going from one source language into one target language. Within the field of Audiovisual Translation (AVT), studies have concentrated on films where the presence of foreign languages is either sporadic or secondary and, as such, foreign languages have been mostly relegated to purely linguistic approaches. Interestingly, films in which foreign languages constantly reoccur or have a primary function have been hitherto widely disregarded, despite presenting the most complex scenario. Similarly, although researchers increasingly stress the relevance of film language on translational solutions, multimodal approaches to multilingualism in films remain scant. In light of this, this thesis designs a descriptive and multimodal methodology to investigate the issue of multilingualism at every stage of the dubbing process and to explore the effect of dubbing on both the plot and characterisation of polyglot films. This methodology is further complemented by para-textual information and semi-structured interviews to obtain a global perspective of the translation of the multilingual aspect. To this end, this thesis examines four polyglot films in which it is difficult to determine a predominant language. By investigating those with recurring use of languages, this project accounts for the most complex films in terms of language quantity and interplay to transcend textual restrictions and incorporate further issues surrounding translation as both process and product. This examination of original polyglot films brings to light the relevance of intermediate translations for the dubbing process as these are the foundations of the ‘rough’ translation on which the whole process relies. In turn, the macrostructure analysis unveils the use of a plethora of AVT modalities when dealing with foreign languages. Similarly, it suggests that decisions at this level depend on a complex interplay of factors of diverse natures such as filmmakers’ requests, screening habits, financial means, and film features. At the micro-textual level, a thorough list of translation techniques is compiled and their application is measured in relation to the influence of signifying codes. Additionally, a close linguistic examination of dialogue reveals a tendency towards standardisation, although certain nuances are sometimes enforced by character synchrony or added optionally to minor characters. Throughout these analyses, it becomes evident that all dubbing agents manipulate some aspects of multilingualism. Ultimately, this study suggests that dubbing affects polyglot films by hiding certain linguistic connotations and by providing different information to domestic and target audiences.
APA, Harvard, Vancouver, ISO, and other styles
6

Pienaar, Solé. "A critical evaluation of WOORDEBOEK/WÖRTERBUCH with regard to source- and target-language forms /." Link to the online version, 2006. http://hdl.handle.net/10019/1160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ramsey, Paul J. "A polyglot boardinghouse a history of public bilingual schooling in the United States, 1840-1920 /." [Bloomington, Ind.] : Indiana University, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3307564.

Full text
Abstract:
Thesis (Ph. D.)--Indiana University, Dept. of Educational Leadership and Policy Studies, 2008.
Title from PDF t.p. (viewed Dec. 9, 2008). Source: Dissertation Abstracts International, Volume: 69-05, Section: A, page: 1706. Adviser: Andrea Walton.
APA, Harvard, Vancouver, ISO, and other styles
8

Harmon, Neal S. "Book of Mormon Stories Diglot Reader on Computer." Diss., Brigham Young University, 2002. http://contentdm.lib.byu.edu/u?/MTGM,35683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Machimana, Abios Sparks. "The ordering of senses in English-Xitsonga bilingual dictionaries : Towards logical meaning arrangement in the microstructure." Thesis, University of Limpopo (Turfloop Campus), 2009. http://hdl.handle.net/10386/954.

Full text
Abstract:
Thesis (M.A. (Linguistics)) --University of Limpopo, 2009
In many bilingual dictionaries, translation equivalents reveal shortcomings with regard to the manner in which they are presented, particularly in English-Xitsonga bilingual dictionaries. Translation equivalents in the microstructure are frequently arranged without contextual guidance to assist the user. This manner of presentation prevents dictionary users from retrieving the appropriate and accurate equivalents. The study has therefore evaluated the way in which these translation equivalents are arranged. This research study shows that they should be logically and systematically arranged, starting with the translation equivalents that have the highest usage frequency, in order to be user-friendly. The study also suggests that functional equivalence must prevail in English-Xitsonga bilingual dictionaries. The problem of zero-equivalence should be resolved by giving a comprehensive description of the lemma as a translation equivalent, to help users understand the lemma better.
APA, Harvard, Vancouver, ISO, and other styles
10

Crafford, M. F. "Vertalers en hul bronne : die behoefte aan 'n vertaalwoordeboek met Engels en Afrikaans as behandelde taalpaar." Thesis, Stellenbosch : Stellenbosch University, 2005. http://hdl.handle.net/10019.1/50543.

Full text
Abstract:
Thesis (MPhil)--Stellenbosch University, 2005.
Dictionaries are important translation tools, but cause a lot of frustration when they either fail to provide the help required or offer very little guidance. This study examines the reasons that bilingual dictionaries in particular do not always provide satisfactory support to professional translators. Translation theory and different approaches to both the process and the product of translation are investigated. Specific attention is paid to linguistic, text-linguistic and functionalist approaches to translation. The study traces the development of lexicographic theory and practice, and highlights the linguistic genealogy of lexicography. Lexicography and translation studies share this genealogy. The concept of equivalence as it relates to both these academic disciplines - translation studies and lexicography - is investigated. Professional translation entails more than translating individual words or choosing the correct translation equivalent from a bilingual dictionary, and professional translators therefore tend to regard most existing bilingual dictionaries as unsatisfactory, insufficient and inadequate. The terms "bilingual dictionary", "translating/translatory dictionary", and "translation dictionary" are discussed. Requirements for a translation dictionary are identified based on research findings concerning the translation process, translation strategies, typical translation problems, and the requisite skills of professional translators. A translation dictionary should be designed so as to assist professional translators in solving problems related to the reception, translation, and production of texts. It should be based on a representative corpus of real language use. Finally, a lexicographic model of an English-Afrikaans translation dictionary is constructed, based on the genuine purpose and the lexicographic functions of a dictionary aimed at the needs of professional translators. A process of simultaneous feedback is recommended when new dictionaries are compiled.
APA, Harvard, Vancouver, ISO, and other styles
11

Nartey, Cecil Kabu. "Leveraging big data for competitive advantage in a media organisation." Thesis, Cape Peninsula University of Technology, 2015. http://hdl.handle.net/20.500.11838/1397.

Full text
Abstract:
Thesis submitted in fulfilment of the requirements for the degree Master of Technology: Information Technology in the Faculty of Informatics and Design at the Cape Peninsula University of Technology
Data sources often emerge with the potential to transform, drive and allow deriving never-envisaged business value. These data sources change the way business enacts and models value generation. As a result, sellers are compelled to capture value by collecting data about business elements that drive change. Some of these elements, such as the customer and products, generate data as part of transactions, which necessitates placing the business element at the centre of the organisation's data curation journey. This is in order to reveal changes and how these elements affect the business model. Data in business represents information translated into a format convenient for transfer. Data holds the relevant markers needed to measure business elements and provides the relevant metrics to monitor, steer and forecast business to attain enterprise goals. Data forms the building blocks of information within an organisation, allowing knowledge and facts to be obtained. At its lowest level of abstraction, it provides a platform from which insights and knowledge can be derived as a direct extract for business decision-making, as these decisions steer business into profitable situations. Because of this, organisations have had to adapt or change their business models to derive business value for sustainability, profitability and transformation. An organisation's business model reflects a conceptual representation of how the organisation obtains and delivers value to prospective customers (the service beneficiary). In the process of delivering value to the service beneficiaries, data is generated. Generated data leads to business knowledge which can be leveraged to re-engineer the business model. The business model dictates which information and technology assets are needed for a balanced, profitable and optimised operation. The information assets represent value-holding documented facts. Information assets go hand in hand with technology assets. The technology assets within an organisation are the technologies (computers, communications and databases) that support the automation of well-defined tasks as the organisation seeks to remain relevant to its clientele. What has become apparent is the fact that companies find it difficult to leverage the opportunities that data, and for that matter Big Data (BD), offers them. A data curation journey enables a seller to strategise and collect insightful data to influence how business may be conducted in a sustainable and profitable way while positioning the curating firm in a state of 'information advantage'. While much of the discussion surrounding the concept of BD has focused on programming models (such as Hadoop) and technology innovations usually referred to as disruptive technologies (such as the Internet of Things and the automation of knowledge work), the real driver of technology and business is BD economics: the combination of open-source data management and advanced analytics software coupled with commodity-based, scale-out architectures which are comparatively cheaper than prevalent sustainable technologies known to industry. Hadoop, though hugely misconstrued, is not an integration platform; it is a model that helps determine data value while bringing on board an optimised way of curating data cheaply as part of the integration architecture. The objectives of the study were to explore how BD can be used to take advantage of the opportunities it offers the organisation, such as leveraging insights to enable business transformation.
This is accomplished by assessing the level of BD integration with the business model using the BD Business Model Maturation Index. Guidelines with subsequent recommendations are proposed for curation procedures aimed at improving the curation process. A qualitative research methodology was adopted. The research design outlines the research as a single case study; it outlines the philosophy as interpretivist, the approach as data collection through interviews, and the strategy as a review of the method of analysis deployed in the study. Themes that emerged from categorised data indicate the diverging of business elements into primary business elements and secondary supporting business elements. Furthermore, results show that data curation still hinges firmly on traditional data curation processes which diminish the benefits associated with BD curation. Results suggest a guided data curation process optimised by persistence hybridisation as an enabler to gain information advantage. The research also evaluated the level of integration of BD into the case business model to extrapolate results leading to guidelines and recommendations for BD curation.
APA, Harvard, Vancouver, ISO, and other styles
12

Pienaar, Sole. "A Critical Evaluation of WOORDEBOEK/WÖRTERBUCH with Regard to Source- and Target-Language Forms." Thesis, Stellenbosch : University of Stellenbosch, 2006. http://hdl.handle.net/10019.1/2076.

Full text
Abstract:
Thesis (MA (Afrikaans and Dutch))—University of Stellenbosch, 2006.
In its Preface the WOORDEBOEK/WÖRTERBUCH claims to be suitable for both native and foreign speakers of Afrikaans and German. This study presents an evaluation of WOORDEBOEK/WÖRTERBUCH to determine to what extent the dictionary can be regarded as a helpful and up-to-date tool for the user, whether he or she is a native or foreign speaker of the language pair treated in this dictionary. The ultimate aim is to determine to what degree WOORDEBOEK/WÖRTERBUCH can serve as a helpful tool in translation. Theoretically and methodically the study is based on the dictionary usage research of H.E. Wiegand, with the genuine purpose of the dictionary as the main principle. The genuine purpose of the dictionary forms the basis of the theoretical and practical analysis. The study contains a short overview of the development of WOORDEBOEK/WÖRTERBUCH, from when it was first published in 1925 as a monolingual dictionary for native speakers of Afrikaans who wanted to learn German, to the publication of the eighth edition in 1983, which claims to be a new and more effective source of information for both languages, intended to enable closer relations between South Africa and Germany. This is followed by a lexicographic discussion of the concepts underlying the planning of a dictionary, with the emphasis on lexicographic processes and lexicographic functions. The problematic issue of active and passive dictionaries is discussed, concluding with a preference, in the case of WOORDEBOEK/WÖRTERBUCH, for a dictionary orientated towards text production, which would enable translation from Afrikaans into German and vice versa. The problematic nature of equivalent relations is discussed in detail and forms the main focus of the empirical and practical studies. The evaluation of WOORDEBOEK/WÖRTERBUCH is continued in the practical study, where the genuine purpose of the dictionary and the lexicographic functions are the main criteria. The empirical study analyses the actual problems target users have when consulting the dictionary in the context of its genuine purpose. The study concludes that WOORDEBOEK/WÖRTERBUCH should be revised thoroughly and that this revision can only be successful if it is based on an up-to-date, representative corpus of both Afrikaans and German, and if the lexicographer is steered by the lexicographic functions and the needs of the target user, which determine the genuine purpose of the dictionary.
APA, Harvard, Vancouver, ISO, and other styles
13

Boshoff, Ilene. "'n Vertalersperspektief op enkele terkortkominge in algemene en gespesialiseerde woordeboeke." Thesis, Stellenbosch : Stellenbosch University, 2005. http://hdl.handle.net/10019.1/50407.

Full text
Abstract:
Thesis (MPhil)--Stellenbosch University, 2005.
It is nothing new that the use of a bilingual dictionary forms part of the day-to-day process of communication within a multilingual society like the one we live in. Whether it is a student, expert, language practitioner or translator consulting a particular dictionary for meaning, spelling, pronunciation or grammatical information, it is of the utmost importance that the dictionary (regarding its macro- as well as its microstructure) is compiled in such a way that the user is at all times capable of optimal information retrieval. Bilingual dictionaries, whether general or specialised, are probably among the most important sources of information that a translator has at his or her disposal. The most common problem that arises here is that these dictionaries are hardly ever compiled with the translator, as a possible user, in mind. Because lexicographers in general do not bear in mind that the average translator (who, in the field of specialised language, is actually no more than a layman) may approach the dictionary for help, the vast majority of translators experience great difficulty with the effectiveness of bilingual dictionaries with regard to a) the type of data that is included and b) the way in which this data is treated and presented. How aspects such as the correct indication of, among other things, equivalent relations, and the use of, for example, context and cotext guidance, should be applied in order to ensure optimal information retrieval, to the benefit of the user (in this case the translator), should be studied thoroughly beforehand. The importance of other aspects, such as lexicographical functions, specific user needs and the influence of cultural gaps on the data found in dictionaries, may also not be ignored when planning the structure and compilation of a bilingual dictionary. All these aspects should be treated as a lexicographic duty, and this goes for any bilingual dictionary.
APA, Harvard, Vancouver, ISO, and other styles
14

Sellami, Rami. "Supporting multiple data stores based applications in cloud environments." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLL002/document.

Full text
Abstract:
The production of huge amounts of data and the emergence of Cloud computing have introduced new requirements for data management. Many applications need to interact with several heterogeneous data stores depending on the type of data they have to manage: traditional data types, documents, graph data from social networks, simple key-value data, etc. Interacting with heterogeneous data models via different APIs imposes challenging tasks on the developers of multiple data stores based applications. Indeed, programmers have to be familiar with different APIs. In addition, the execution of complex queries over heterogeneous data models cannot, currently, be achieved in a declarative way, as it can be with mono-data store applications, and therefore requires extra implementation effort. Moreover, developers need to master and deal with the complex processes of Cloud discovery and application deployment and execution. In this manuscript, we propose an integrated set of models, algorithms and tools aiming at alleviating the developer's task of developing, deploying and migrating multiple data stores applications in cloud environments. Our approach focuses on three main points. First, we provide a unified data model used by application developers to interact with heterogeneous relational and NoSQL data stores. This model is enriched by a set of refinement rules, on which we base our query algebra. Developers express queries using the OPEN-PaaS-DataBase API (ODBAPI), a unique REST API allowing programmers to write their application code independently of the target data stores. Second, we propose virtual data stores, which act as mediators and interact with the integrated data stores wrapped by ODBAPI. This run-time component supports the execution of single and complex queries over heterogeneous data stores. It implements a cost model to execute queries optimally and a dynamic-programming-based algorithm to generate an optimal query execution plan. Finally, we present a declarative approach that lightens the burden of the tedious and non-standard tasks of (1) discovering relevant Cloud environments and (2) deploying applications on them, while letting developers simply focus on specifying their storage and computing requirements. A prototype of the proposed solution has been developed and applied to use cases from the OpenPaaS project. We also performed various experiments to test the efficiency and accuracy of our proposals.
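ODBAPI itself is a REST API and is not reproduced here. Purely to illustrate the general idea of one uniform interface hiding heterogeneous back-ends, here is a minimal, hypothetical Python sketch in which a key-value store and a relational (SQLite) store expose the same two operations; none of these class or method names come from ODBAPI.

import sqlite3

# Sketch of the *idea* of a unified data-store interface (not ODBAPI itself):
# the same put/get operations are served by a relational back-end and by a
# key-value back-end, so application code is written once.

class KeyValueStore:
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class RelationalStore:
    def __init__(self):
        self._db = sqlite3.connect(":memory:")
        self._db.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")
    def put(self, key, value):
        self._db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))
    def get(self, key):
        row = self._db.execute("SELECT value FROM kv WHERE key = ?", (key,)).fetchone()
        return row[0] if row else None

# The application is written once, against the common interface:
for store in (KeyValueStore(), RelationalStore()):
    store.put("user:1", "Alice")
    print(type(store).__name__, store.get("user:1"))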
APA, Harvard, Vancouver, ISO, and other styles
15

Gelpí, Arroyo Cristina. "Mesures d'avaluació lexicogràfica de diccionaris bilingües." Doctoral thesis, Universitat de Barcelona, 1997. http://hdl.handle.net/10803/673502.

Full text
Abstract:
All activities, processes and procedures are, often unconsciously, subject to some form of evaluation. In fact, every time we watch a television programme, drive a car, sit an exam or carry out a chemistry experiment, to give just a few examples, we perform various operations that involve a process of control over the operation in question: we check the television channel we have chosen, we check the car's rear-view mirror, our exam is marked, or we follow the established steps to obtain a chemical product. Our everyday activity is full of operations intended to verify and ensure the fit between what we do and what we expect as a result of our choices. From this follows the concept of evaluation taken as the starting point of this work. Here we consider that evaluation processes can be applied to dictionaries in general, and to bilingual dictionaries in particular, using the same parameters that serve to determine the success of any product or procedure. Specifically, we focus on the evaluation of bilingual dictionaries, understood as those containing two historical languages, in which lemmas are replaced by equivalents in the target language, and whose main objective is to place the lexical units of one language in a relation of equivalence with the lexical units of another, between which an equivalence of lexical meaning exists. The particular object of this work is the bilingual dictionaries covering the Catalan-Spanish language pair, with the aim of establishing measures of lexicographic evaluation, still non-existent today, that allow an objective analysis of the quality and adequacy of Catalan-Spanish bilingual dictionaries.
APA, Harvard, Vancouver, ISO, and other styles
16

Kubica, Matěj. "Optický polygon." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442372.

Full text
Abstract:
This diploma thesis focuses on the problematics of optical networks in terms of optical cable laying and work with individual fibers. The thesis covers the basic physical properties used in fiber optics. The methodology of correct working procedures in fiber optics is discussed as well. The thesis also contains detailed documentation of the realized optical connections, including a scheme of the realized outdoor connection. A 3D design of a rack case is also part of the thesis. The rack case provides an option to simulate many different lengths of optical routes. The rack case is designed in a 6U variant.
APA, Harvard, Vancouver, ISO, and other styles
17

Wells, Stephanie Alice. "Vocabulary development in a grade 7 class using dictionary skills: an action research project." Thesis, Rhodes University, 2011. http://hdl.handle.net/10962/d1003708.

Full text
Abstract:
As I was involved as a voluntary, part-time teacher in a local, semi-rural school in the Eastern Cape, South Africa, I became increasingly aware of the learners' lack of English literacy. I therefore decided to carry out practical research on vocabulary development, focusing on dictionary skills. In this thesis I describe how I implemented a vocabulary development programme as an Action Research project. My research group was a grade 7 class of English First Additional Language learners who had minimal exposure to English at school and in their communities. The class was a mixture of Afrikaans and isiXhosa home language speakers and the medium of instruction was Afrikaans. The school served a low-income community and was poorly resourced. As dictionary skills are a requirement of the national curriculum, I used 10 time-tabled lessons over a 5-week period to introduce the learners to dictionaries. My data sources were a journal detailing my reflections on each lesson; a video recording of the lessons; small group interviews after each lesson, which were audio-recorded; task sheets on the work covered in class; and questionnaires asking the learners for written responses to the lessons. The class teacher who filmed the lessons was also asked for feedback during and after the programme. My goals were to assess my teaching approach in these circumstances and to determine to what extent the outcomes were positive for the learners. As I had come from a background of English Home Language teaching in good, well-resourced schools, I found I had to question many of my assumptions. Although I was an experienced, qualified and confident teacher, I continually had to reassess my teaching methods, which were being challenged by very different classroom conditions. The outcomes of the research show why I was not able to achieve what I had thought I could in the time given.
APA, Harvard, Vancouver, ISO, and other styles
18

Lindmark, Jonas. "No Fit Polygon problem : Developing a complete solution to the No Fit Polygon problem." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142353.

Full text
Abstract:
This thesis evaluates some of the most recent and promising approaches for deriving the No Fit Polygon of two simple polygons, possibly with holes, and verifies that the approaches work in practice. We chose three different approaches based on the complexity of the input. These approaches were implemented and benchmarked against CGAL [1], a third-party library which provides solvers for many computational geometry problems. Our solution solves problems of a higher degree of complexity than the library we benchmarked against, and for input of lower complexity we solve the problem more efficiently.
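The three approaches themselves are not detailed in the abstract. For the much simpler convex special case, the No Fit Polygon is known to reduce to a Minkowski sum, which the following illustrative Python sketch computes; this is textbook background, not one of the thesis's algorithms, which also handle holes.

# Background sketch, not the thesis's algorithms: for CONVEX polygons A and B,
# the No Fit Polygon equals the Minkowski sum A + (-B); with B's reference
# point at its local origin, the NFP traces the positions of that reference
# point at which A and B touch without overlapping. For convex inputs the
# Minkowski sum is the convex hull of all pairwise vertex sums.

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices in CCW order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def no_fit_polygon_convex(a, b):
    minus_b = [(-x, -y) for (x, y) in b]
    return convex_hull([(ax+bx, ay+by) for (ax, ay) in a for (bx, by) in minus_b])

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
triangle = [(0, 0), (1, 0), (0, 1)]
print(no_fit_polygon_convex(square, triangle))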
APA, Harvard, Vancouver, ISO, and other styles
19

Boland, Ralph Patrick. "Polygon visibility decompositions with applications." Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6244.

Full text
Abstract:
Many problems in Computational Geometry involve a simple polygon P and a family of geometric objects, say σ, contained in P. For example, if σ is the family of chords of P then we may want to find the longest chord in P. Alternatively, given a chord of P, we may wish to determine the areas of the two subpolygons of P determined by the chord. Let π be a polygonal decomposition of a polygon P. We call π a visibility decomposition with respect to σ if, for any object g ∈ σ, we can cover g with o(|P|) of the subpolygons of π. We investigate the application of visibility decompositions of polygons to solving problems of the forms described. Any visibility decomposition π of a polygon P that we construct will have the property that, for some class of polygons ℘ whose members have useful properties, π ⊆ ℘. Furthermore, the properties of ℘ will be key to solving any problems we solve on P using π. Some of the visibility decomposition classes we investigate are already known in the literature, for example weakly edge-visible polygon decompositions. We make improvements relating to these decomposition classes and in some cases we also find new applications for them. We also develop several new polygon visibility decomposition classes. We then use these decomposition classes to solve a number of problems on polygons, including the circular ray shooting problem and the largest axis-aligned rectangle problem. It is noteworthy that the solutions we provide are usually more efficient and always simpler than alternative solutions.
APA, Harvard, Vancouver, ISO, and other styles
20

Flaaten, Marcus. "Efficient polygon reduction in Maya." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-121687.

Full text
Abstract:
Reducing the number of vertices in a mesh is a problem that, if solved correctly, can save the user a lot of time in the entire process of handling the model. Most solutions today focus on reducing the mesh in one big step by running a separate application. The goal of this implementation is to bring the reduction application into the user's workspace as a plugin. Many modellers in the various computer graphics industries use Autodesk Maya; the plugin's intention is to create an efficient tool which also gives modellers as much freedom as possible without the need to ever leave Maya's workspace. Along the way, the possible issues and solutions involved in creating this tool in Maya are also examined, to help introduce the process of creating a tool for Maya. This plugin has the potential to improve on the existing reduction tool in Maya by giving the user more options and a more exact solution.
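The plugin's actual reduction scheme is not described in the abstract. As a toy illustration of the general family of techniques (iterative edge collapse), here is a deliberately naive Python sketch; it uses edge length as its only error metric, which no production reducer would do.

import itertools

# Naive edge-collapse decimation sketch (not Maya's reducer, nor the plugin
# above): repeatedly collapse the shortest edge by merging its endpoints at
# their midpoint and dropping triangles that become degenerate.

def decimate(vertices, faces, target_vertex_count):
    vertices = [list(v) for v in vertices]
    faces = [list(f) for f in faces]
    alive = set(range(len(vertices)))
    while len(alive) > target_vertex_count:
        edges = {tuple(sorted(e))
                 for f in faces for e in itertools.combinations(f, 2)}
        if not edges:
            break
        def sq_length(e):
            a, b = vertices[e[0]], vertices[e[1]]
            return sum((x - y) ** 2 for x, y in zip(a, b))
        u, v = min(edges, key=sq_length)
        # Move u to the edge midpoint and redirect every use of v to u.
        vertices[u] = [(x + y) / 2 for x, y in zip(vertices[u], vertices[v])]
        faces = [[u if i == v else i for i in f] for f in faces]
        faces = [f for f in faces if len(set(f)) == 3]
        alive.discard(v)
    # Dead vertices stay in the list; no face references them any more.
    return vertices, faces

# A square split into two triangles collapses to a single triangle:
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
print(decimate(verts, tris, 3))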
APA, Harvard, Vancouver, ISO, and other styles
21

Jackson, LillAnne Elaine, and University of Lethbridge Faculty of Arts and Science. "Polygon reconstruction from visibility information." Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 1996, 1996. http://hdl.handle.net/10133/41.

Full text
Abstract:
Reconstruction results attempt to rebuild polygons from visibility information. Reconstruction of a general polygon from its visibility graph is still open and only known to be in PSPACE; thus additional information, such as the ordering of the edges around nodes that corresponds to the order of the visibilities around vertices, is frequently added. The first section of this thesis extracts, in O(E) time, the Hamiltonian cycle that corresponds to the boundary of the polygon from the polygon's ordered visibility graph. Also, it converts an unordered visibility graph and Hamiltonian cycle to the ordered visibility graph for that polygon in O(E) time. The second, and major, result is an algorithm to reconstruct an orthogonal polygon that is consistent with the Hamiltonian cycle and visibility stabs of the sides of an unknown polygon. The algorithm uses O(n log n) time, assuming there are no collinear sides, and O(n²) time otherwise.
vii, 78 leaves ; 28 cm.
APA, Harvard, Vancouver, ISO, and other styles
22

Eldridge, Matthew Willard. "SIMD column-parallel polygon rendering." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/38742.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (p. 171-173).
by Matthew Willard Eldridge.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
23

Krocak, Makenzie, Sean Ernst, Jinan Allan, Wesley Wehde, Joseph Ripberger, Carol Silva, and Hank Jenkins-Smith. "Thinking Outside the Polygon: A Study of Tornado Warning Perception Outside of Warning Polygon Bounds." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/etsu-works/7865.

Full text
Abstract:
When the National Weather Service (NWS) issues a tornado warning, the alert is rapidly and widely disseminated to individuals in the general area of the warning. Historically, the assumption has been that a false-negative warning perception (i.e., when someone located within a warning polygon does not believe they have received a tornado warning) carries a higher cost than a false-positive warning perception (i.e., when someone located outside the warning area believes they have received a warning). While many studies investigate tornado warning false alarms (i.e., when the NWS issues a tornado warning, but a tornado does not actually occur), less work focuses on studying individuals outside of the warning polygon bounds who believe they received a warning (i.e., false-positive perceptions). This work attempts to quantify the occurrence of false-positive perceptions and possible factors associated with the rate of occurrence. Following two separate storm events, Oklahomans were asked whether they perceived a tornado warning. Their geolocated responses were then compared to issued warning polygons. Individuals closer to tornado warnings or within a different type of warning (e.g., a severe thunderstorm warning) are more likely to report a false-positive perception than those farther away or outside of other hazard warnings. Further work is needed to understand the rate of false-positive perceptions across different hazards and how this may influence warning response and trust in the National Weather Service.
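The comparison of geolocated responses with warning polygons rests on a point-in-polygon test. The following Python sketch shows the standard ray-casting version of that test; it is a generic illustration, not the authors' analysis code, and the coordinates are made up.

def point_in_polygon(point, polygon):
    # Even-odd rule: count edge crossings of a ray cast to the right.
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # Count the edge if it straddles the point's latitude and the
        # crossing lies to the right of the point.
        if (y1 > y) != (y2 > y):
            if x1 + (y - y1) * (x2 - x1) / (y2 - y1) > x:
                inside = not inside
    return inside

warning_polygon = [(-97.6, 35.2), (-97.3, 35.2), (-97.3, 35.5), (-97.6, 35.5)]
print(point_in_polygon((-97.5, 35.3), warning_polygon))  # True: inside
print(point_in_polygon((-97.0, 35.3), warning_polygon))  # False: outside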
APA, Harvard, Vancouver, ISO, and other styles
24

Wildt, Daniël de. "Automatic video segmentation by polygon evolution." [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=978906063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Kreykenbohm, Michael Walter. "Replication patterns for polygon fill algorithms." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/27974.

Full text
Abstract:
This thesis describes and compares several methods for producing bilevel patterns to simulate grey level values for use in polygon regions as generated for computer graphics. Random distribution, ordered dither, and error diffusion methods are shown to be visually inferior for many grey levels to the proposed maxmin algorithm for producing patterns for polygon area filling procedures. Through even spatial arrangement of the pixels and taking into consideration the edges of the pattern, the number of artifacts is decreased and the accuracy in small subregions of the pattern is improved, especially at low grey levels where most pattern generators degrade. At these lower levels, the maxmin algorithm can produce pleasing patterns if given sufficient flexibility through enlarged grid sizes. At higher grey levels, the proximity of pixels does not leave sufficient room to eliminate all artifacts, but by varying the criteria of the algorithm, the patterns still appear more pleasing than other methods.
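The maxmin algorithm itself is not reproduced in the abstract. As a rough, hypothetical illustration of max-min pixel placement, the Python sketch below greedily sets each next pixel at the grid cell that maximizes the minimum wrap-around distance to the pixels already set; the wrap-around metric keeps tiled copies of the pattern evenly spaced, but none of the thesis's edge handling or grid-size flexibility is modelled.

# Greedy max-min placement sketch (inspired by, but not, the thesis's maxmin
# algorithm): to render grey level k out of w*h, set k pixels one at a time,
# each at the cell whose minimum toroidal distance to the chosen pixels is
# largest.

def maxmin_pattern(w, h, k):
    def toroidal_dist2(a, b):
        dx = min(abs(a[0] - b[0]), w - abs(a[0] - b[0]))
        dy = min(abs(a[1] - b[1]), h - abs(a[1] - b[1]))
        return dx * dx + dy * dy
    chosen = [(0, 0)]
    cells = [(x, y) for x in range(w) for y in range(h)]
    while len(chosen) < k:
        best = max((c for c in cells if c not in chosen),
                   key=lambda c: min(toroidal_dist2(c, p) for p in chosen))
        chosen.append(best)
    return set(chosen)

pattern = maxmin_pattern(8, 8, 8)  # one 8x8 tile at grey level 8/64
for y in range(8):
    print("".join("#" if (x, y) in pattern else "." for x in range(8)))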
Faculty of Science, Department of Computer Science, Graduate.
APA, Harvard, Vancouver, ISO, and other styles
26

Lessard, Dominic. "Optimal polygon placement on a grid." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0028/MQ52380.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Širjov, Jakub. "Testovací polygon pro kvantovou distribuci klíčů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442371.

Full text
Abstract:
The aim of this master's thesis is to explain quantum key distribution (QKD) and the principle of signal transmission in the quantum channel. Further, the thesis discusses commercial distributors of QKD technologies and their individual appliances. The practical part of the thesis is separated into three parts. The first part handles the transmission of quantum keys in the QKDNetsim simulator. The second part covers the design and creation of a test polygon that allows testing of many optical network configurations with the quantum signal and normal data traffic transmitted in a single fiber. Multiple simulations of the use of various filter types to suppress signal noise, built in the program VPIphotonics and tested with QKDNetsim, are shown in the last part of the thesis.
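Neither QKDNetsim nor the polygon hardware can be shown here. As a minimal illustration of the QKD principle the thesis starts from, here is a toy BB84 basis-sifting simulation in Python, idealized to the point of omitting noise, loss and eavesdropping, so the sifted bits of both parties always agree.

import random

# Toy BB84 sketch (illustrative only; real QKD and QKDNetsim model much more):
# Alice sends random bits in random bases, Bob measures in random bases, and
# they keep only the positions where their bases happen to match.

def bb84_sifted_key(n_qubits):
    alice_bits = [random.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [random.choice("XZ") for _ in range(n_qubits)]
    bob_bases = [random.choice("XZ") for _ in range(n_qubits)]
    # On an ideal channel, matching bases reproduce Alice's bit exactly,
    # so the sifted key is simply Alice's bits at the matching positions.
    return [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)
            if ab == bb]

key = bb84_sifted_key(16)
print(len(key), "sifted bits:", key)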
APA, Harvard, Vancouver, ISO, and other styles
28

Lessard, Dominic. "Optimal polygon placement on a grid." Dissertation, Computer Science, Carleton University, Ottawa, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
29

Wang, Qingda. "Facility location constrained to a simple polygon." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq61034.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Manning, James David. "Directional diffuse reflection from a polygon emitter." Thesis, University of Hull, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445284.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Barwick, Michael John. "The Newton polygon and the Puiseux characteristic." Thesis, The University of Sydney, 2011. http://hdl.handle.net/2123/8755.

Full text
Abstract:
In this thesis, we will use the Newton polygon and the Puiseux characteristic to study complex analytic curves in C{x,y} and C[[x,y]]. This allows us to topologically classify the plane curve singularities. Chapter 1 will introduce the Newton polygon, the process of sliding towards a root and polar curve. The first section of chapter 2 contains the technical background to this topic. The second section introduces the Puiseux characteristic, and the third uses results from knot theory to classify the plane curve singularities as the cone over an iterated torus knot. In the third chapter, we will look at the Kuo-Lu theorem, which is a generalisation of Rolle’s theorem to complex curves. Finally, in the fourth chapter, we will give an application of the previous results to show a method of calculating the Lojasiewicz exponent.
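As a concrete illustration of the thesis's central object (the standard construction only, not any of the thesis's results): for f = Σ c_{a,b} x^a y^b, the Newton polygon is the lower convex boundary of the support {(a, b) : c_{a,b} ≠ 0}, and an edge of slope s carries candidate Puiseux roots with leading exponent -1/s. A small Python sketch:

# Standard construction (illustrative, not thesis-specific): compute the lower
# convex hull of the support of f = sum over (a, b) of c_{a,b} x^a y^b.

def newton_polygon(support):
    pts = sorted(support)                      # sort by a, then by b
    hull = []
    for p in pts:
        # Pop the last hull point while it lies on or above the segment from
        # its predecessor to p (i.e. the turn is not convex from below).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# f(x, y) = y^4 + x*y^2 + x^2*y + x^5 has support {(0,4), (1,2), (2,1), (5,0)}.
edges = newton_polygon({(0, 4), (1, 2), (2, 1), (5, 0)})
print(edges)                                   # [(0, 4), (1, 2), (2, 1), (5, 0)]
# An edge of slope s carries candidate Puiseux roots y ~ c * x^(-1/s):
print([-(q[0] - p[0]) / (q[1] - p[1]) for p, q in zip(edges, edges[1:])])
# -> [0.5, 1.0, 3.0]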
APA, Harvard, Vancouver, ISO, and other styles
32

Müller, Matthias. "Polyglotte Kommunikation: soziale Arbeit und die Vielsprachigkeit ihrer Praxis." Heidelberg: Carl-Auer-Verl, 2007. http://d-nb.info/988623080/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Tse, Siu-ching, and 謝兆政. "Cross linguistic influence in polyglots: encoding of the future by L3 learners of Swedish." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B4842187X.

Full text
Abstract:
The current study aims to investigate the source(s) of cross-linguistic influence (CLI) on the production of future encoding strategies by L1 Cantonese learners of L3 Swedish who speak L2 English. In the literature of third language acquisition (TLA) research, the language status of native and non-native languages as well as genetic and (psycho)typological language distance are identified as important to TLA processes, but current knowledge is insufficient to establish which factor(s) is/are more influential. Given the close genetic distance between English and Swedish and the status of English as a second language, it is hypothesized that CLI on L3 Swedish comes from L2 English rather than L1 Cantonese. Any confirmation or rejection of this hypothesis serves to inform the relationship of language status and language distance to TLA. To test this hypothesis, a linguistic background questionnaire and a picture elicitation task were designed to record the production of future ideas in the three languages. Through qualitative and quantitative analyses, mixed sources of CLI from Cantonese and English are identified. An equidistant representation of non-native languages is also identified, in which non-native English and Swedish respectively show a similar degree of cross-linguistic matching in relation to native Cantonese, regardless of which of them is the principal source of CLI. The hypothesis of differentiation of linguistic representation in the minds of polyglots is therefore proposed, and further verification and investigation are required.
Master of Arts (Linguistics)
APA, Harvard, Vancouver, ISO, and other styles
34

Galanda, Martin. "Automated polygon generalization in a multi agent system /." [S.l.] : [s.n.], 2003. http://opac.nebis.ch:80/F/?func=service&doc_library=EBI01&doc_number=004623660&line_number=0001&func_code=WEB-FULL&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Siljedahl, Rasmus. "3D Conversion from CAD models to polygon models." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129881.

Full text
Abstract:
This thesis describes the design and implementation of an application that converts CAD models into polygon models. When going from CAD models to 3D polygon models, a conversion of the file type has to be performed. XperDI uses these polygon models in their tool, called the sales configurator, to create a photo-realistic environment in which the end product can be viewed before it is manufactured. Existing tools are difficult to use and are missing features that are important for the sales configurator. The purpose of this thesis is to create a proof-of-concept application that converts CAD models into 3D polygon models. This new lightweight application is a simpler alternative for converting CAD models into polygon models and offers features needed for the intended use of these models that the alternative products do not offer.
APA, Harvard, Vancouver, ISO, and other styles
36

Trhlík, Tomáš. "Návrhové podmínky pro polygon specializovaný na autonomní vozidla." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2019. http://www.nusl.cz/ntk/nusl-392235.

Full text
Abstract:
The aim of this diploma thesis is research into proving grounds (test polygons) for the testing of autonomous vehicles, from the point of view of road technology as well as design aspects. The thesis covers nine of the world's most important test polygons and describes their design parameters. It presents the individual stages of automation as defined by foreign organizations concerned with research and development in the automotive industry. In addition, basic advanced driver-assistance systems and connectivity between vehicles and infrastructure are described. The conclusion also contains an assessment of existing aerodrome test areas for autonomous vehicles.
APA, Harvard, Vancouver, ISO, and other styles
37

Colleu, Thomas. "A floating polygon soup representation for 3D video." PhD thesis, Université Rennes 1, 2010. http://tel.archives-ouvertes.fr/tel-00592207.

Full text
Abstract:
This thesis presents a new representation called the floating polygon soup for applications such as 3DTV and FTV (Free Viewpoint TV). The polygon soup takes into account compactness, compression efficiency, and view synthesis. The polygons are defined in 2D with depth values at each corner. They are not necessarily connected to one another and can deform according to the viewpoint and to the time instant in the video sequence. Starting from multi-view plus depth (MVD) data, the construction takes two steps: quadtree decomposition and reduction of inter-view redundancies. A compact set of polygons is obtained in place of the depth maps, while preserving depth discontinuities and geometric details. Compression efficiency and view synthesis quality are then evaluated. Classical methods such as inpainting and post-processing are implemented and adapted to the polygon soup. A new compression method is proposed; it exploits the quadtree structure and spatial prediction. The results are compared with an MVD compression scheme using the MPEG H.264/MVC standard. Slightly higher PSNR values are obtained at medium and high bitrates, and ghosting artifacts are largely reduced. Finally, the polygon soup is deformed according to the desired viewpoint. This view-dependent geometry is guided by motion estimation between the synthesized and original views, which reduces the remaining artifacts and improves image quality.
APA, Harvard, Vancouver, ISO, and other styles
38

Colleu, Thomas. "A floating polygon soup representation for 3D video." PhD thesis, Rennes 1, 2010. http://www.theses.fr/2010REN1S201.

Full text
Abstract:
This thesis presents a new data representation for 3D video. Starting from a sequence of multi-view video plus depth (MVD), the representation, called the floating polygon soup, takes into account compactness, compression efficiency, and view synthesis quality all together. It is made of a set of polygons stored in 2D with depth values at each corner. The polygons are not necessarily connected to each other and can be deformed (or floated) through space and time. The polygon soup is extracted from MVD data in two steps: first, quadtree decomposition of the depth maps; second, redundancy reduction across the views. This results in a compact set of polygons that replace the depth maps while preserving depth discontinuities and geometric details. It also reduces the so-called ghosting artifacts. Next, the representation is evaluated at the compression and view-synthesis steps. For view synthesis, classical methods are adapted to the polygon soup to obtain good-quality virtual views. For compression, a new quadtree-based method is proposed; it exploits the quadtree structure and limits coding artifacts. This method is compared with an existing MVD compression scheme based on MPEG's H.264/MVC. Finally, the polygon soup is floated according to the desired viewpoint in order to reduce the remaining artifacts and improve the final image quality. This view-dependent geometry is guided by motion estimation between synthesized and original views.
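The first extraction step lends itself to a small sketch (an editorial illustration under simplifying assumptions, not the author's code; the max-min flatness test and all names here are invented): a depth map is recursively split into quads until the depth variation inside a block is small, and each surviving block is stored as a quad with depth values at its corners.

    import numpy as np

    def decompose(depth, x, y, size, max_dev, polygons):
        """Recursively split a square block of the depth map.

        A block is kept as one polygon when the depth values inside it
        vary little; a simple max-min test stands in for the thesis's
        actual criterion.
        """
        block = depth[y:y + size + 1, x:x + size + 1]
        if size == 1 or block.max() - block.min() <= max_dev:
            # Store the quad as (x, y, size, corner depths): a 2D
            # polygon with per-corner depth values.
            corners = (block[0, 0], block[0, -1], block[-1, -1], block[-1, 0])
            polygons.append((x, y, size, corners))
            return
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                decompose(depth, x + dx, y + dy, half, max_dev, polygons)

    # Toy depth map (power-of-two block size) with a depth discontinuity.
    depth = np.ones((9, 9)); depth[:, 5:] = 4.0
    polygons = []
    decompose(depth, 0, 0, 8, max_dev=0.5, polygons=polygons)
    print(len(polygons), "quads")

Flat regions collapse into a few large quads, while the discontinuity is surrounded by small ones, which is what makes the set of polygons compact while preserving depth edges.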
APA, Harvard, Vancouver, ISO, and other styles
39

Grossmann, Christoph. "Fretting fatigue of shape optimised polygon-shaft-hub connections." [S.l.] : [s.n.], 2006. http://opus.kobv.de/tuberlin/volltexte/2007/1519.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Klein, Jan. "Efficient collision detection for point and polygon based models." [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=976777029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Henriksson, Johan. "Face detection for selective polygon reduction of humanoid meshes." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119967.

Full text
Abstract:
Automatic mesh optimization algorithms suffer from the problem that humans are not uniformly sensitive to changes on different parts of the body. When a mesh optimization algorithm measures the errors caused by triangle reduction, those errors are strictly geometrical, so an error of a certain magnitude on the thigh of a 3D model will be perceived by a human as less of an error than one of equal geometrical significance introduced on the face. The partial solution to this problem proposed in this paper consists of detecting the faces of the 3D assets to be optimized using conventional, existing 2D face detection algorithms, and then using this information to selectively and automatically preserve the faces of the 3D assets being optimized, leading to a smaller perceived error in the optimized model, albeit not necessarily a smaller geometrical error. This is done by generating a set of per-vertex weights that are used to scale the errors measured by the reduction algorithm, hence preserving areas with higher weights. The final optimized meshes produced by this method are found to be subjectively closer to the original 3D asset than their non-weighted counterparts, and if the input meshes conform to certain criteria, this method is well suited for inclusion in a fully automatic mesh decimation pipeline.
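The weighting scheme described above can be sketched as follows (an illustrative reconstruction, not the thesis's implementation; the box test, the weight value, and the cost function are assumptions): vertices whose screen-space projection falls inside a 2D face-detection box receive a weight greater than one, and the geometric cost of an edge collapse is scaled by the larger of its two endpoint weights, so collapses on the face become expensive and the face is preserved longer.

    import numpy as np

    def face_weights(verts_2d, boxes, face_weight=5.0):
        """Per-vertex weights from 2D face-detection boxes.

        verts_2d : (N, 2) array of screen-space vertex positions
        boxes    : list of (xmin, ymin, xmax, ymax) detection boxes
        Vertices inside any box get `face_weight`; all others get 1.
        """
        w = np.ones(len(verts_2d))
        for xmin, ymin, xmax, ymax in boxes:
            inside = ((verts_2d[:, 0] >= xmin) & (verts_2d[:, 0] <= xmax) &
                      (verts_2d[:, 1] >= ymin) & (verts_2d[:, 1] <= ymax))
            w[inside] = face_weight
        return w

    def weighted_collapse_cost(geom_error, i, j, w):
        """Scale the purely geometric error of collapsing edge (i, j)
        so that edges touching a detected face are collapsed last."""
        return geom_error * max(w[i], w[j])

    verts = np.array([[10.0, 10.0], [50.0, 60.0], [200.0, 200.0]])
    w = face_weights(verts, boxes=[(0, 0, 100, 100)])
    print(weighted_collapse_cost(0.01, 0, 2, w))  # 0.05: face edge costs more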
APA, Harvard, Vancouver, ISO, and other styles
42

Wallin, Hanna. "En topologisk representation av en polygon; det rakkantiga skelettet." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-93303.

Full text
Abstract:
The aim of this thesis project is to produce an algorithm for finding a topological representation of a polygon, called the straight skeleton, using floating-point arithmetic. Various straight skeleton algorithms are examined and discussed with a focus on time complexity, and one is chosen for implementation. This implementation is then compared with the open-source library CGAL with regard to running time. The result is an algorithm which is based on an algorithm by Felkel and Obdržálek and which, for polygons with more than five thousand vertices and three significant digits representing points, runs around 25% faster than CGAL's implementation. Complications regarding the use of floating-point arithmetic are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
43

Samuel, David. "Computing the external geodesic diameter of a simple polygon." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

DERRAIK, ANDRE LUIZ BEHRENSDORF. "A COMPARATIVE STUDY OF MULTIRESOLUTION REPRESENTATIONS FOR POLYGON LINES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1997. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=6524@1.

Full text
Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
CENTRO DE PESQUISAS LEOPOLDO AMÉRICO MIGUEZ DE MELLO
This work presents a comparative study of some multiresolution representations for polygonal lines. We study the strip tree, arc tree, and box tree data structures and their variants, comparing their performance for construction, traversal (drawing), intersection, and selection (pick) operations, as well as their memory storage costs. The comparison uses real cartographic databases available on the Internet. The goal of this study is to identify techniques and algorithms suitable for the interactive exploration of large cartographic databases.
APA, Harvard, Vancouver, ISO, and other styles
45

Harvey, John Andrew 1975. "Blinking cubes : a method for polygon-based scene reconstruction." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/49661.

Full text
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (p. 50-51).
by John Andrew Harvey.
S.B. and M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
46

Highfield, Julian Charles. "Polygon-based hidden surface elimination algorithms : serial and parallel." Thesis, Loughborough University, 1994. https://dspace.lboro.ac.uk/2134/33016.

Full text
Abstract:
Chapter 1 introduces the need for rapid solutions of hidden surface elimination (HSE) problems in the interactive display of objects and scenes, as used in many application areas such as flight and driving simulators and CAD systems. It reviews the existing approaches to high-performance computer graphics and to parallel computing. It then introduces the central tenet of this thesis: that general-purpose parallel computers may be usefully applied to the solution of HSE problems. Finally, it introduces a set of metrics for describing sets of scene data and applies them to the test scenes used in this thesis. Chapter 2 describes variants of several common image-space hidden surface elimination algorithms, which solve the HSE problem for scenes described as collections of polygons. Implementations of these HSE algorithms on a traditional, serial, single-microprocessor computer are introduced, and theoretical estimates of their performance are derived. The algorithms are compared under identical conditions for various sets of test data, and the results of this comparison are placed in context with existing historical results. Chapter 3 examines the application of MIMD-style parallelism to accelerate the solution of HSE problems. MIMD parallel implementations of the previously considered HSE algorithms are introduced; their behaviour under various system configurations and for various data sets is investigated and found to match the theoretical estimates closely. Chapter 4 summarises the conclusions of this thesis, finding that HSE algorithms can be implemented to use an MIMD parallel computer effectively, and that, of the HSE algorithms examined, the z-buffer algorithm generally proves to be a good compromise solution.
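For reference, the z-buffer algorithm singled out in the conclusion can be stated in a few lines (a minimal editorial sketch, unrelated to the thesis's serial or parallel implementations): each rasterised fragment's depth is compared against the depth stored at its pixel, and the nearer fragment wins.

    import numpy as np

    W, H = 4, 3
    zbuf = np.full((H, W), np.inf)        # depth buffer, "infinitely far"
    frame = np.zeros((H, W), dtype=int)   # id buffer for the visible polygon

    def shade(fragments, poly_id):
        """Resolve visibility per pixel: keep the fragment nearest the eye.

        fragments: iterable of (x, y, z) produced by rasterising one polygon.
        """
        for x, y, z in fragments:
            if z < zbuf[y, x]:            # nearer than what is stored: visible
                zbuf[y, x] = z
                frame[y, x] = poly_id

    # Two overlapping "polygons" given directly as pixel fragments.
    shade([(0, 0, 5.0), (1, 0, 5.0)], poly_id=1)
    shade([(1, 0, 2.0), (2, 0, 2.0)], poly_id=2)  # nearer, wins pixel (1, 0)
    print(frame[0])  # [1 2 2 0]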
APA, Harvard, Vancouver, ISO, and other styles
47

Zong, Ruifa. "Molecular polygons self-assembled from conjugated 1,1'-Ferrocenediyl Bridged Bis(pyridines), Bis(2,2'-bipyridines), and Bis(1,10-phenanthrolines) and transition metals as building blocks." [S.l. : s.n.], 2002. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10236414.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Lindström, Kristian. "Smooth silhouette rendering of low polygon models for computer games." Thesis, University of Skövde, School of Humanities and Informatics, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-1035.

Full text
Abstract:

This dissertation presents a method capable of smoothing the silhouette of a 3D model using interpolation to find smooth edges. The goal is for the method to be used together with normal mapping to improve performance and give a better result with a low polygon count. To do this, the lines located on the silhouette of a model are interpolated to find a curve that is used as a clipping frame in the stencil buffer. This method is able to modify the silhouette for the better, although the amount of interpolation is rather limited.

APA, Harvard, Vancouver, ISO, and other styles
49

Willén, Julius. "Developing a process for automating UV mapping and polygon reduction." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-131341.

Full text
Abstract:
An exploratory research project was conducted at a company focusing on CAD and its own real-time 3D model viewer. The company needed to be able to convert CAD models, with high visual quality, for use in Unreal Engine. Before this project, another one was conducted to perform the basic conversion of CAD models to the FBX file format, which Unreal uses. As an extension of the previous project, functionality had to be added for manipulating the models for better quality and performance. The tasks were carried out with good results.
APA, Harvard, Vancouver, ISO, and other styles
50

Turker, Mustafa. "Polygon-based image analysis within an integrated GIS/RS environment." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq23878.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles