Dissertations / Theses on the topic 'Creative synthesis'

Consult the top 50 dissertations / theses for your research on the topic 'Creative synthesis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Miller, Alexander Albert. "Gesamtkunstwerk Arizona: A Creative Synthesis of Poetry and Music." Thesis, The University of Arizona, 2016. http://hdl.handle.net/10150/613290.

Abstract:
Throughout the history of German culture, the synthesis of text and music has been central to the cultural and social identity of its respective eras. The union of poetry and music that immediately comes to mind is the Deutsche Lieder, the term used to describe the practice of setting classic German poetry to music during the early 19th century. The poetry of the Weimarer Klassik, principally the works of Goethe and Schiller, was honored by composers of the next generation through their Lieder, which sought to manifest these renowned works in the form of music. However, the synthesis of text and music in German culture had long been a testament to the underlying social temperament, from the introspective chants of Hildegard von Bingen conveying the spiritual mysticism of the 12th century, to the cynical and ironic Episches Theater, or Epic Theater, of Bertolt Brecht in the 20th century. The idea of this project is to similarly find a synthesis of music and text, such that a perspective on contemporary German identity is manifested and communicated to an audience, simultaneously commenting on and participating in German culture. This project culminates in a live musical performance and poetry reading, both of which are original creations by the artist, in hopes of sharing an appreciation for German culture with the University of Arizona community. The performance can be found here: https://www.youtube.com/channel/UCZEehSGWO6N81Lc0gqg8tIg
2

Dzjaparidze, Michaël. "Exploring the creative potential of physically inspired sound synthesis." Thesis, Queen's University Belfast, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.695331.

Abstract:
This thesis accompanies a portfolio of compositions and, in addition, discusses a number of compositional approaches which use physical modelling and physically inspired sound synthesis methods for the creation of electroacoustic music. To this end, a software library has been developed for the purpose of the real-time simulation of systems of inter-connected 1D and 2D objects, which has proven to be indispensable for producing the musical works. It should be made clear from the outset that the primary objective of the research was not to add any novel scientific knowledge to the field of physical modelling. Instead, the aim was to explore in depth the creative possibilities of technical research carried out by others and to show that it can be utilised in a form which aids my own creative practice. From a creative perspective, it builds upon concepts and ideas formulated earlier by composers Jean-Claude Risset and Denis Smalley, centred around the interpretation of timbre and sound as constructs which actively inform compositional decision-making and structuring processes. This involves the creation of harmony out of timbre and playing with the source-cause perception of the listener through the transformation of timbre over time. In addition, the thesis offers a discussion of gesture and texture as they commonly appear in electroacoustic music and motivates my own personal preference for focussing on the development of texture over time as a means for creating musical form and function.
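The thesis's own software library is not reproduced here, but as a rough, generic illustration of what "physically inspired" synthesis means in code, the following sketch simulates a single damped mass-spring oscillator at audio rate. All parameter names and values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def mass_spring_tone(freq_hz=220.0, damping=3.0, dur_s=1.0, sr=44100):
    """Simulate one damped mass-spring oscillator at audio rate.

    A crude stand-in for the inter-connected 1D/2D objects described in the
    thesis: x'' = -(2*pi*f)^2 * x - d * x', integrated with semi-implicit
    Euler steps and read out directly as an audio signal.
    """
    omega = 2.0 * np.pi * freq_hz
    dt = 1.0 / sr
    x, v = 1.0, 0.0                    # initial displacement acts as the excitation
    out = np.empty(int(dur_s * sr))
    for n in range(out.size):
        a = -(omega ** 2) * x - damping * v    # spring + damper forces (unit mass)
        v += a * dt
        x += v * dt
        out[n] = x
    return out / np.max(np.abs(out))

tone = mass_spring_tone()              # a decaying sinusoid at ~220 Hz
```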
3

Kokotovich, Vasilije. "Creative mental synthesis in designers and non-designers : experimental examinations." PhD thesis, Department of Architectural and Design Science, 2002. http://hdl.handle.net/2123/8079.

4

Yamane, Honami. "Creative Synthesis of Novel Optically-Functional Materials by Modified BODIPYs with Unique Structures." 京都大学 (Kyoto University), 2017. http://hdl.handle.net/2433/225626.

5

McLaren, Sasha. "Material Synthesis: Negotiating experience with digital media." The University of Waikato, 2008. http://hdl.handle.net/10289/2761.

Abstract:
Given the accessibility of media devices available to us today and utilising van Leeuwen's concept of inscription and synthesis as a guide, this thesis explores the practice of re-presenting a domestic material object, the Croxley Recipe Book, into digital media. Driven by a creative practice research method, but also utilising materiality, digital storytelling practices and modality as important conceptual frames, this project was fundamentally experimental in nature. A materiality-framed content analysis, interpreted through cultural analysis, initially unravelled some of the cookbook's significance and contextualised it within a particular time in New Zealand's cultural history. Through the expressive and anecdotal practice of digital storytelling the cookbook's significance was further negotiated, especially as the material book was engaged with through the affective and experiential digital medium of moving image. A total of six digital film works were created on an accompanying DVD, each of which represents some of the cookbook's significance but approached through different representational strategies. The Croxley Recipe Book Archive Film and Pav. Bakin' with Mark are archival documentaries, while Pav is more expressive and aligned with the digital storytelling form. Spinning Yarns and Tall Tales, a film essay, engages with and reflects on the multiple processes and trajectories of the project, while Extras and The Creative Process Journal demonstrate the emergent nature of the research. The written thesis discusses the emergent nature of the research process and justifies the conceptual underpinning of the research.
6

Oldfield, R. G. "The analysis and improvement of focused source reproduction with wave field synthesis." Thesis, University of Salford, 2013. http://usir.salford.ac.uk/29510/.

Abstract:
This thesis presents a treatise on the rendering of focused sources using wave field synthesis (WFS). The thesis describes the fundamental theory of WFS and presents a thorough derivation of focused source driving functions, including monopoles, dipoles and pistonic sources. The principal characteristics of focused sources, including array truncation, spatial aliasing, pre-echo artefacts, colouration and amplitude errors, are analysed in depth, and a new spatial aliasing criterion is presented for focused sources. Additionally, a new secondary source selection protocol is presented, allowing for directed and symmetrically rendered sources. This thesis also describes how the low-frequency rendering of focused sources is limited by the focusing ability of the loudspeaker array and thus derives a formula to predict the focusing limits and the corresponding focal shift that occurs at low frequencies and with short arrays. Subsequently, a frequency-dependent position correction is derived which increases the positional accuracy of the source. Other characteristics and issues with the rendering of focused sources are also described, including the use of large arrays, the rendering of moving focused sources, issues with multiple focused sources in the scene, the phase response, and the focal point size of the focused sound field. The perceptual characteristics are also covered, with a review of the literature and a series of subjective tests into the localisation of focused sources. It is shown that an improvement in the localisation can be achieved by including the virtual first-order images as point sources in the WFS rendering. Practical rendering of focused sources is generally done in compromised scenarios such as non-anechoic, reverberant rooms which contain various scattering objects. These issues are also covered in this thesis with the aid of finite difference time domain models which allow the characterisation of room effects on the reproduced field; it is shown that room effects can actually even out spatial aliasing artefacts and therefore reduce the perception of colouration. Scattering objects can also be included in the model; thus the effects of scattering are also shown and a method of correcting for the scattering is suggested. Also covered is the rendering of focused sources using elevated arrays, which can introduce position errors in the rendering.
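The driving functions derived in the thesis are not reproduced here; the sketch below only illustrates the basic delay-and-sum idea behind a focused source, in which loudspeakers farther from the focal point fire earlier so that all wavefronts converge there. The geometry, the 1/sqrt(r) amplitude taper and all names are illustrative assumptions rather than the thesis's formulation.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def focused_source_driving(speaker_xy, focus_xy, sr=48000):
    """Per-loudspeaker delay (samples) and gain for a focused virtual source.

    Speakers farther from the focal point are fired earlier, so every
    wavefront reaches the focus at the same instant and a converging field
    forms; a 1/sqrt(r) taper is used here as a crude amplitude weighting.
    """
    r = np.linalg.norm(speaker_xy - focus_xy, axis=1)   # speaker-to-focus distances (m)
    delays_s = (r.max() - r) / C                        # non-negative; farthest speaker fires first
    gains = 1.0 / np.sqrt(np.maximum(r, 1e-3))
    return np.round(delays_s * sr).astype(int), gains / gains.max()

# Example: 16-element linear array along x, focus 1 m in front of its centre
speakers = np.stack([np.linspace(-1.5, 1.5, 16), np.zeros(16)], axis=1)
delays, gains = focused_source_driving(speakers, np.array([0.0, 1.0]))
```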
7

Ding, Huafeng [Verfasser], and Andrés [Akademischer Betreuer] Kecskeméthy. "Automatic Structural Synthesis of Planar Mechanisms and Its Application to Creative Design / Huafeng Ding. Betreuer: Andrés Kecskeméthy." Duisburg, 2016. http://d-nb.info/108130300X/34.

8

Knopf, Michael David. "Style and Genre Synthesis in Music Composition: Revealing and Examining the Craft and Creative Processes in Composing Poly-Genre Music." Thesis, Griffith University, 2011. http://hdl.handle.net/10072/366075.

Abstract:
Enhancing new musical works by referencing style and genre is a well-established practice in music making around the world. Audiences and artists of the past century were particularly exposed to music of many styles and genres through the growth of information and knowledge transfer as a result of travel, various media and the Internet. This musical and cultural interaction was, as Nettl (1986, p. 371) characterized it, a "prevailing force in musical innovation." Aubert (2000, p. 4) states that the "generalization all over the planet of the cultural hybridization process observed today is a phenomenon without precedent." Alongside this is the "need to investigate musical hybrids" that Leavy sees (2009, p. 105) as having "increased exponentially with globalization and the multidirectional cultural exchange it has fostered." My work as a performer and composer is deeply influenced by music hybrids, with genre and style synthesis integral to my creative practice. In my search of the literature to place my work in style and genre synthesis, I was unable to find a systematic description of content, process or operational philosophy. This study aims to examine and reveal the technical and creative concepts and processes I use to generate new cross-genre compositions. The research is carried out through the composition of new works as examples of genre synthesis, and through reflections on, and analyses of, the particular processes of synthesis occurring in the new works. The research furnishes evidence of the use of new and conventional concepts, strategies and terminology, and offers a model for the creative practice of composing with styles and genres.
9

Lavault, Antoine. "Generative Adversarial Networks for Synthesis and Control of Drum Sounds." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS614.

Abstract:
Audio synthesizers are electronic systems capable of generating artificial sounds under parameters depending on their architecture. Even though multiple evolutions have transformed synthesizers from simple sonic curiosities in the 1960s and earlier to the main instruments in modern musical productions, two major challenges remain: the development of a system of sound synthesis with a parameter set coherent with its perception by a human, and the design of a universal synthesis method, able to model any source and provide new original sounds. This thesis studies using and enhancing Generative Adversarial Networks (GANs) to build a system answering the previously mentioned problems. The main objective is to propose a neural synthesizer capable of generating realistic drum sounds controllable by predefined timbre parameters and hit velocity. The first step in the project was to propose an approach based on the latest technological advances at the time of its conception to generate realistic drum sounds. We added timbre control capabilities to this method by exploring a different way from existing solutions, i.e., differentiable descriptors. To give experimental guarantees to our work, we performed evaluation experiments via objective metrics based on statistics and subjective and psychophysical evaluations on perceived quality and perception of control errors. These experiments continued as we added velocity control to the timbral control. Still, with the idea of pursuing the realization of a versatile synthesizer with universal control, we have created a dataset ex nihilo composed of drum sounds to create an exhaustive database of sounds accessible in the vast majority of conditions encountered in the context of music production. From this dataset, we present experimental results related to the control of dynamics, one of the critical aspects of musical performance but left aside by the literature. To justify the capabilities offered by the GAN synthesis method, we show that it is possible to marry classical synthesis methods with neural synthesis by exploiting the limits and particularities of GANs to obtain new and musically interesting hybrid sounds.
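As a hedged illustration of what a "differentiable descriptor" can look like in practice (not the descriptors or losses actually used in the thesis), the following PyTorch sketch computes a spectral centroid that could be penalised against a target value during generator training; all names and settings are assumptions.

```python
import torch

def spectral_centroid(audio, sr=44100, n_fft=1024, hop=256):
    """Differentiable spectral centroid (Hz) of a batch of waveforms.

    Descriptors like this one can be attached to a generator's output and
    penalised against a target value; this is the general idea behind
    'differentiable descriptor' timbre control, in sketch form only.
    """
    spec = torch.stft(audio, n_fft=n_fft, hop_length=hop,
                      window=torch.hann_window(n_fft), return_complex=True).abs()
    freqs = torch.linspace(0, sr / 2, spec.shape[-2])           # bin centre frequencies
    centroid = (freqs[None, :, None] * spec).sum(dim=-2) / (spec.sum(dim=-2) + 1e-8)
    return centroid.mean(dim=-1)                                # average over frames

# Hypothetical use inside a GAN training step:
# loss_timbre = ((spectral_centroid(fake_audio) - target_centroid) ** 2).mean()
```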
10

Wright, David George. "Creativity and embodied learning : a reflection upon and a synthesis of the learning that arises in creative expression, with particular reference to writing and drama, through the perspective of the participant and self organising systems theory /." View thesis, 1998. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030807.134153/index.html.

11

Toth, A. M. "A contemporary voice for the female protagonist : an exploration of the collaborative creative process : the development and synthesis of vocal techniques in the realisation of premiere music performances." Thesis, University of Salford, 2016. http://usir.salford.ac.uk/37855/.

Abstract:
It is vital to increase the body of work where the female character acts as protagonist, rather than the foil for the masculine “norm” (McClary 1991). It is the intention of this candidature to present performances of several new music works that focus on the experience of the female character and voice through the synthesis of contemporary vocal and folk music practice, as well as the embodiment of the female protagonist on stage. The core of the presented thesis consists of a portfolio of a number of contrasting works that feature the female protagonist exploring her experience in their own way; themes include vocalised emotion, the presentation of the female body and character, gender stereotypes and archetypes, as well as roles, relationships and power. It researches a variety of collaborative composer/librettist/performer methodologies for the development of fully-human female characters onstage, which make use of a range of acting, dance and voice qualities. These vocal qualities and techniques are developed, hybridised and re-contextualised from a broad range of styles to include Classical / Contemporary, chest/Folk-style singing / vocalisations (Greek Amannes and Miroloi and Hungarian folk singing), and build upon Extended Vocal Techniques developed by various practitioners. The project draws upon fifteen years of experience as a professional singer-actor-dancer performing in a wealth of exciting projects as commissioner and collaborator using a wide range of vocal styles from Opera, Contemporary Classical, to Folk, Jazz and other Popular styles. My vision, with musical, textual, and performative input, guides collaborators in a joint exploration; stimuli include a variety of musical forms / structures, vocal techniques and textual treatment of themes, as well as visual stimuli and performance elements, like costume, set, and properties. The portfolio also includes existing works that are collaborative in the rehearsal process, with varying degrees of input from collaborators. The work takes the perspective of the performer as co-creator of on-stage female characters, using libretto, music and my own responses to theatrical elements. The creation on stage of a female protagonist must involve the personal emotional connection of the performer.
12

Carroll, Adrian Dominic. "Beat-mixing Rock music: Rock and electronic dance music merge to create the Manarays." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/61231/13/61231.pdf.

Abstract:
This research introduces the proposition that Electronic Dance Music’s beat-mixing function could be implemented to create immediacy in other musical genres. The inclusion of rhythmic sections at the beginning and end of each musical work created a ‘DJ friendly’ environment. The term used in this thesis to refer to the application of beat-mixing in Rock music is ‘ClubRock’. Collaboration between a number of DJs and Rock music professionals applied the process of beat-mixing to blend Rock tracks to produce a continuous ClubRock set. The DJ technique of beat-mixing Rock music transformed static renditions into a fluid creative work. The hybridisation of the two genres, EDM and Rock, resulted in a contribution to Rock music compositional approaches and the production of a unique Rock album; Manarays—Get Lucky.
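As a purely illustrative sketch of the tempo matching and crossfading that beat-mixing involves (not the production workflow used for the Manarays album), the following assumes two mono tracks with known BPMs that are both longer than the crossfade region; the naive resampling also shifts pitch, unlike the time-stretching used by real DJ tools.

```python
import numpy as np

def beat_mix(track_a, track_b, bpm_a, bpm_b, sr=44100, beats=16):
    """Naive beat-mix: resample track_b to track_a's tempo, then apply an
    equal-power crossfade over a fixed number of beats."""
    ratio = bpm_a / bpm_b                                   # speed factor for track_b
    idx = np.arange(0, len(track_b), ratio)                 # read positions in track_b
    b = np.interp(idx, np.arange(len(track_b)), track_b)    # crude resample (pitch shifts too)
    n = int(beats * 60.0 / bpm_a * sr)                      # crossfade length in samples
    fade = np.linspace(0.0, 1.0, n)
    head, tail = track_a[:-n], track_a[-n:]
    mixed = tail * np.cos(fade * np.pi / 2) + b[:n] * np.sin(fade * np.pi / 2)
    return np.concatenate([head, mixed, b[n:]])
```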
13

Vick, Erik. "Implementing Lexical and Creative Intentionality in Synthetic Personality." Doctoral diss., University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3252.

Abstract:
Creating engaging, interactive, and immersive synthetic characters is a difficult task, and evaluating the success of a synthetic character is often even more difficult. The latter problem is solved by extending Turing's Imitation Game thus: a computational construct should be evaluated based on the criterion of how well the character can mimic a human. In order to accomplish a successful evaluation of the proposed metric, synthetic characters must be consistently believable and capable of role-appropriate emotional expression. The author believes traditional synthetic characters must be improved to meet this goal. For a synthetic character to be believable, human users must be able to perceive a link between the mental state of the character and its behaviors. That is to say, synthetic characters must possess intentionality. In addition to intentionality, the mental state of the character must be human-like in order to provide an adequate frame of reference for the human users' internal simulations; to wit, the character's mental state must comprise a synthetic model of personality, of personality dynamics, and of cognition, each of which must be psychologically valid and of sufficient fidelity for the type of character represented. The author proposes that synthetic characters possessing these three models are more accurately described as synthetic personalities. The author proposes and implements computational models of personality, personality dynamics, and cognition in order to evaluate the psychological veracity of these models and the computational equivalence between the models and the implementation, as a first step in the process of creating believable synthetic personalities.
14

ARKEL, MARIA. "Sintesi e caratterizzazione di nuovi derivati della creatina per la terapia del deficit del trasportatore SLC6A8 e nuova procedura sintetica per l'ottenimento della fosfocreatina." Doctoral thesis, Università degli studi di Genova, 2018. http://hdl.handle.net/11567/929139.

Abstract:
Part 1: Creatine transporter deficiency is a rare hereditary disease due to the loss of function of SLC6A8 (the creatine transporter). Creatine is a polar molecule, able to cross biological barriers exclusively via its own transporter, and therefore this disease causes the lack of cerebral creatine and leads to dramatic neurological symptoms. To date there is no therapy available for this disorder. A therapeutic strategy could be represented by creatine prodrugs able to cross cellular membranes and the blood-brain barrier (BBB) independently of the creatine transporter, releasing creatine once inside the cells to exert its biological activities. Two different strategies have been developed to synthesize creatine derivatives: 1. the modification of the creatine molecular skeleton in order to obtain more lipophilic prodrugs that could cross the BBB and the biological membranes by passive diffusion; 2. the conjugation of creatine with a molecule able to exploit a different transporter than SLC6A8, i.e. the glucose transporters, creating a chimeric molecule able to use an alternative route. The synthesized derivatives have been obtained in high yield and purity and characterized by means of HPLC and mass spectrometry. Some of them have also been evaluated for their stability under physiological conditions and for their neurobiological effects.
Part 2: Exogenous phosphocreatine is currently marketed as "Neoton" by Alfa Wasserman S.p.A. (Italy) and is part of the therapy for myocardial protection in addition to cardioplegic solutions. Phosphocreatine exerts a twofold effect that makes it a potential cardioprotective agent: the regeneration of ATP reserves and the protection of biological membranes. Most of the methods published so far for synthesizing phosphocreatine give the molecule in low yield and involve several purification steps. The low yield is due to the poor reactivity of the guanylating agent used, especially when it is a cyanamide derivative. Part of this work was to develop an alternative to conventional methods that allows phosphocreatine to be obtained in good yield and purity.
15

Bilous, James Eric. "Concatenative Synthesis for Novel Timbral Creation." DigitalCommons@CalPoly, 2016. https://digitalcommons.calpoly.edu/theses/1597.

Abstract:
Modern day musicians rely on a variety of instruments for musical expression. Tones produced from electronic instruments have become almost as commonplace as those produced by traditional ones as evidenced by the plethora of artists who can be found composing and performing with nothing more than a personal computer. This desire to embrace technical innovation as a means to augment performance art has created a budding field in computer science that explores the creation and manipulation of sound for artistic purposes. One facet of this new frontier concerns timbral creation, or the development of new sounds with unique characteristics that can be wielded by the musician as a virtual instrument. This thesis presents Timcat, a software system that can be used to create novel timbres from prerecorded audio. Various techniques for timbral feature extraction from short audio clips, or grains, are evaluated for use in timbral feature spaces. Clustering is performed on feature vectors in these spaces and groupings are recombined using concatenative synthesis techniques in order to form new instrument patches. The results reveal that interesting timbres can be created using features extracted by both newly developed and existing signal analysis techniques, many common in other fields though not often applied to music audio signals. Several of the features employed also show high accuracy for instrument separation in randomly mixed tracks. Survey results demonstrate positive feedback concerning the timbres created by Timcat from electronic music composers, musicians, and music lovers alike.
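Timcat itself is not shown here; the following sketch only illustrates the general grain, feature, cluster and concatenate pipeline the abstract describes, using deliberately simple stand-in features (RMS, zero-crossing rate, spectral centroid) and k-means rather than the feature spaces evaluated in the thesis. It assumes the input audio contains at least `n_clusters` grains.

```python
import numpy as np
from sklearn.cluster import KMeans

def timbral_patch(audio, sr=44100, grain_ms=50, n_clusters=8, seed=0):
    """Slice audio into grains, describe each with simple timbral features,
    cluster them, and concatenate the largest cluster into a new 'patch'."""
    hop = int(sr * grain_ms / 1000)
    grains = [audio[i:i + hop] for i in range(0, len(audio) - hop, hop)]
    feats = []
    for g in grains:
        spec = np.abs(np.fft.rfft(g * np.hanning(len(g))))
        freqs = np.fft.rfftfreq(len(g), 1.0 / sr)
        centroid = (freqs * spec).sum() / (spec.sum() + 1e-9)
        rms = np.sqrt(np.mean(g ** 2))
        zcr = np.mean(np.abs(np.diff(np.sign(g)))) / 2.0
        feats.append([rms, zcr, centroid])
    X = np.array(feats)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)        # normalise feature scales
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    chosen = max(set(labels), key=list(labels).count)        # largest cluster
    return np.concatenate([g for g, l in zip(grains, labels) if l == chosen])
```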
16

Zubar, E. V., N. P. Efryushina, and V. P. Dotsenko. "Creating Properties of Eu3+-doped Calcium Hydroxyapatite as Biocompatible Fluorescent Probes." Thesis, Sumy State University, 2013. http://essuir.sumdu.edu.ua/handle/123456789/35425.

Abstract:
The luminescent properties of Eu3+-doped calcium hydroxyapatite (Ca10(PO4)6(OH)2, HAp) nanopowders prepared by chemical precipitation and the sol-gel method were studied. The possibility of improving the properties of HAp as a fluorescent carrier of biological molecules and drugs by controlling defect formation was demonstrated.
17

Opie, Timothy Tristram. "Creation of a real-time granular synthesis instrument for live performance." Thesis, Queensland University of Technology, 2003. https://eprints.qut.edu.au/15865/1/Timothy_Opie_Thesis.pdf.

Abstract:
This thesis explores how granular synthesis can be used in live performances. The early explorations of granular synthesis are first investigated, leading up to modern trends of electronic performance involving granular synthesis. Using this background it sets about to create a granular synthesis instrument that can be used for live performances in a range of different settings, from a computer quartet, to a flute duet. The instrument, an electronic fish called the poseidon, is documented from the creation and preparation stages right through to performance.
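For readers unfamiliar with the technique, a minimal offline sketch of granular synthesis follows (windowed grains taken from random positions in a source buffer and overlap-added at a chosen density). It does not attempt to capture the real-time, performance-oriented design of the poseidon, and it assumes the source buffer is longer than one grain.

```python
import numpy as np

def granulate(source, sr=44100, dur_s=5.0, grain_ms=60, density=80, seed=1):
    """Basic granular synthesis: Hann-windowed grains from random source
    positions are overlap-added at 'density' grains per second."""
    rng = np.random.default_rng(seed)
    n_out = int(dur_s * sr)
    glen = int(grain_ms / 1000 * sr)
    window = np.hanning(glen)
    out = np.zeros(n_out + glen)
    for _ in range(int(density * dur_s)):
        src_pos = rng.integers(0, len(source) - glen)   # where the grain is read from
        dst_pos = rng.integers(0, n_out)                # where the grain is placed
        out[dst_pos:dst_pos + glen] += source[src_pos:src_pos + glen] * window
    return out[:n_out] / (np.max(np.abs(out)) + 1e-9)
```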
18

Opie, Timothy Tristram. "Creation of a Real-Time Granular Synthesis Instrument for Live Performance." Queensland University of Technology, 2003. http://eprints.qut.edu.au/15865/.

Abstract:
This thesis explores how granular synthesis can be used in live performances. The early explorations of granular synthesis are first investigated, leading up to modern trends of electronic performance involving granular synthesis. Using this background it sets about to create a granular synthesis instrument that can be used for live performances in a range of different settings, from a computer quartet, to a flute duet. The instrument, an electronic fish called the poseidon, is documented from the creation and preparation stages right through to performance.
19

Noti, Christian. "Synthesis of heparin oligosaccharides and the creation of heparin microarrays /." Zürich : ETH, 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17150.

20

Griffin, Mark William. "Terrain synthesis : the creation, management, presentation and validation of artificial landscapes." Thesis, University of Nottingham, 2001. http://eprints.nottingham.ac.uk/13618/.

Abstract:
'Synthetic Terrain' is the term used for artificially-composed computer-based Digital Terrain Models (DTMs) created by a combination of techniques and heavily influenced by Earth Sciences applications. The synthetic landscape is created to produce 'geographically acceptable', 'realistic' or 'valid' computer-rendered landscapes, maps and 3D images, which are themselves based on synthetic terrain Digital Elevation Models (DEMs). This thesis examines the way in which mainly physical landscapes can be synthesised, and presents the techniques by which terrain data sets can be managed (created, manipulated, displayed and validated), both for academic reasons and to provide a convenient and cost-effective alternative to expensive 'real world' data sets. Indeed, the latter are collected by ground-based or aerial surveying techniques (e.g. photogrammetry), normally at considerable expense, depending on the scale, resolution and type required. The digital information for a real map could take months to collect, process and reproduce, possibly involving demanding Information Technology (IT) resources and sometimes complicated by differing (or contradictory) formats. Such techniques are invalid if the region lies within an 'unfriendly' or inaccessible part of the globe, where (for example) overflying or ground surveys are forbidden. Previous attempts at synthesising terrain have not necessarily aimed at realism. Digital terrain sets have been created by using fractal mathematical models, as 'special effects' for the entertainment industry (e.g. science fiction 'alien' landscapes for motion pictures and arcade games) or for artistic reasons. There are no known examples of synthesised DTMs being created with such a wide range of requirements and functionality, and with such a regard to validation and realism. This thesis addresses the whole concept of producing 'alternative' landscapes in artificial form - nearly 22 years of research aimed at creating 'geographically-sensible' synthetic terrain is described, with the emphasis on the last 5 years, when this PhD thesis was conceived. These concepts are based on radical, inexpensive and rapid techniques for synthesising terrain, yet value is also placed on the 'validity', realism and 'fitness for purpose' of such models. The philosophy - or the 'thought processes' - necessary to achieve the development of the algorithms leading to synthesised DTMs is one of the primary achievements of the research. This in turn led to the creation of an interactive software package called GEOFORMA, which requires some manual intervention in the form of preliminary terrain classification. The sequence is thus: the user can choose to create terrain or landform assemblages without reference to any real world area. Alternatively, he can select a real world region or a 'typical' terrain type on a 'dial up' basis, which requires a short period of intensive parametric analysis based on research into established terrain classification techniques (such as fractals and other mathematical routines, process-response models etc.). This creates a composite synthesised terrain model of high quality and realism, a factor examined both qualitatively and quantitatively.
Although the physical terrain is the primary concern, similar techniques are applied to the human landscape, noting such attributes as the density, type, nature and distribution of settlements, transport systems etc., and although this thread of the research is limited in scope compared with the physical landscape synthesis, some spectacular results are presented. The system also creates place names based on a simple algorithm. Fluvial landscapes, upland regions and coastlines have been selected from the many possible terrain types for 'treatment', and the thesis gives each of these sample landscapes a separate chapter with appropriate illustrations from this original and extensive research. Finally, the work also poses questions in attempting to provide answers; this is perhaps inevitable in a relatively new genre, encompassing so many disciplines, and with relatively sparse literature on the subject.
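Fractal models are mentioned above as one ingredient of terrain synthesis; the sketch below generates a fractal heightmap by spectral synthesis, shaping white noise with a 1/f^beta power law. It is a generic illustration only and carries none of GEOFORMA's parametric or 'geographically sensible' control; the grid size and exponent are arbitrary assumptions.

```python
import numpy as np

def fractal_heightmap(n=257, beta=2.2, seed=42):
    """Synthetic heightmap via spectral synthesis: white noise shaped by a
    1/f^beta power law gives fractal, terrain-like relief."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                                   # avoid division by zero at DC
    spectrum = np.fft.fft2(noise) / radius ** (beta / 2.0)   # amplitude ~ 1/f^(beta/2)
    height = np.real(np.fft.ifft2(spectrum))
    return (height - height.min()) / (height.max() - height.min())   # normalise to [0, 1]
```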
21

Wielemaker, Martin. "Managing initiatives : a synthesis of the conditioning and knowledge-creating view = Het managen van initiatieven : een synthese van de condities en kenniscreatie perspectieven /." Rotterdam : Erasmus Research Institute of Management, 2003. http://aleph.unisg.ch/hsgscan/hm00084594.pdf.

22

Vouloutsi, Vasiliki. "Learning from a robot: creating synthetic psychologically plausible agents." Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/456321.

Abstract:
Due to technological advancements, robots will soon become part of our daily lives and interact with us on a frequent basis. Robot acceptance is important, as it delineates whether users will potentially interact with them or not. We argue that psychological plausibility is a key determinant of acceptance, and the challenge that arises is to understand, measure and identify what affects plausibility. Here, we propose a taxonomy of four psychological benchmarks that one can apply to evaluate the behavioural components of robots and assess how they affect acceptance: social competence, task competence, autonomy and morphology. By decomposing plausibility into discrete parts and empirically testing them, we can use their interactions in practice for the meaningful design and development of social robots. In this thesis, we have identified behavioural components that are relevant to the proposed taxonomy and evaluated them in a series of studies. We show that it is possible to use the proposed taxonomy to evaluate the interaction and the robot. By systematically assessing the behavioural features of the robot, we gain useful insights that we apply to our H5WRobot, which we later validate in the domain of tutoring. We show that our robot is accepted by students and stress that our proposed taxonomy might provide useful insights regarding the establishment of future assessments for HRI.
23

Saintourens, Michel. "Des outils pour la création d'images de synthèse." Paris 8, 1989. http://www.theses.fr/1990PA080470.

Abstract:
For the creation of artistic synthetic images, rather than using ready-made software, which imposes its own specific aesthetic to the detriment of the creator's, we develop personal 'tools' adapted to our aesthetic concerns. They serve us for modelling as well as for rendering, and are particularly suited to our work on the synthesis and animation of the human face and body. With the development of experience in computer art, we can see that the style of synthetic images is not independent of the software: available commercial software produces a very standard 'high-tech' aesthetic, whereas our series of more personal tools is adapted to a particular creation and animation of synthetic characters and pictures.
24

Toghiani-Rizi, Babak. "Evaluation of Deep Learning Methods for Creating Synthetic Actors." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-324756.

Abstract:
Recent advancements in hardware, techniques and data availability have resulted in major advancements within the field of Machine Learning, and specifically in a subset of modeling techniques referred to as Deep Learning. Virtual simulations are common tools of support in training and decision making within the military. These simulations can be populated with synthetic actors, often controlled through manually implemented behaviors, developed in a streamlined process by domain doctrines and programmers. This process is often time-inefficient, expensive and error-prone, potentially resulting in actors unrealistically superior or inferior to human players. This thesis evaluates alternative methods of developing the behavior of synthetic actors through state-of-the-art Deep Learning methods. Through a few selected Deep Reinforcement Learning algorithms, the actors are trained in four different lightweight simulations with objectives like those that could be encountered in a military simulation. The results show that the actors trained with Deep Learning techniques can learn how to perform simple as well as more complex tasks by learning a behavior that could be difficult to manually program. The results also show the same algorithm can be used to train several totally different types of behavior, thus demonstrating the robustness of these methods. This thesis finally concludes that Deep Learning techniques have, given the right tools, good potential as alternative methods of training the behavior of synthetic actors, and could potentially replace the current methods in the future.
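The thesis evaluates deep reinforcement learning algorithms in richer simulations; as a hedged, minimal illustration of the underlying trial-and-error loop that replaces hand-coded behaviours, the sketch below trains a tabular Q-learning 'actor' on a toy grid world. All parameters, and the grid world itself, are illustrative assumptions.

```python
import numpy as np

def train_actor(episodes=2000, size=5, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning: the 'actor' learns to walk from the top-left
    corner of a grid to a goal in the bottom-right corner."""
    rng = np.random.default_rng(seed)
    n_states, n_actions = size * size, 4                 # actions: up, down, left, right
    q = np.zeros((n_states, n_actions))
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(episodes):
        r, c = 0, 0
        while (r, c) != (size - 1, size - 1):
            s = r * size + c
            a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q[s]))
            nr = min(max(r + moves[a][0], 0), size - 1)
            nc = min(max(c + moves[a][1], 0), size - 1)
            ns = nr * size + nc
            reward = 1.0 if (nr, nc) == (size - 1, size - 1) else -0.01
            q[s, a] += alpha * (reward + gamma * q[ns].max() - q[s, a])   # Q-learning update
            r, c = nr, nc
    return q   # greedy policy: np.argmax(q, axis=1)
```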
25

Rankine, Peter J. "Reimagining My Music Creation: A Polymodular Synthesis. A portfolio of three compositions and exegesis." Thesis, Griffith University, 2021. http://hdl.handle.net/10072/404461.

Abstract:
Synthesizers changed the way I hear and understand sound, revealing surprising synergies between the electronic oscillator, the world of music I had known up until that time, and the natural world which has always inspired my musical thinking and creation. In embracing the synthesizer, my music was altered in ways that go beyond the adjustments made in re-conceiving a work for different forces. By virtue of their intrinsic and unique nature, synthesizers challenged and then modulated my thinking about every aspect of my music creation, from idea, thematic development, form and orchestration through to realisation. This submission presents an opera recorded and explored during the research period and two multi-movement works composed and recorded during the research period. The exegesis outlines the journey undertaken from my work as a score-oriented composer of music for acoustic orchestral instrumentalists and vocalists, to my work as a synthesizer-oriented composer-performer of my music for electronic instruments. The evolving story contained in the musical works composed as the primary dimension of my research is strongly autobiographical. Accordingly, an autoethnographical methodology complements the foundational methodology of research led by artistic practice. The challenges involved in this journey were conceptual, technical, aesthetic, and methodological. This exegesis examines these various lines of development via the three works in the portfolio and, with reference to the scores, videos and recordings, the artistic responses and solutions resulting from my research. Excerpts of other works are offered as points of reference where relevant. Ultimately the work is examined for evidence of a polymodular synthesis between art, science and technology, forged in the creation of the music. Future creative potential is considered for the candidate, and for music composition more broadly in the electronic, digital age.
26

Chen, Kuan-Hao. "Creating Extended Landau Levels of Large Degeneracy with Photons." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1542878428843845.

27

Villalonga, Pineda Gabriel. "Leveraging Synthetic Data to Create Autonomous Driving Perception Systems." Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/671739.

Abstract:
L’anotació manual d’imatges per desenvolupar sistemes basats en visió per computador ha estat un dels punts més problemàtics des que s’utilitza aprenentatge automàtic per a això. Aquesta tesi es centra en aprofitar les dades sintètiques per alleujar el cost de les anotacions manuals en tres tasques de percepció relacionades amb l’assistència a la conducció i la conducció autònoma. En tot moment assumim l’ús de xarxes neuronals convolucionals per al desenvolupament dels nostres models profunds de percepció. La primera tasca planteja el reconeixement de senyals de trànsit, un problema de classificació d’imatges. Assumim que el nombre de classes de senyals de trànsit a reconèixer s’ha d’incrementar sense haver pogut anotar noves imatges amb què realitzar el corresponent reentrenament. Demostrem que aprofitant les dades sintètiques de les noves classes i transformant-les amb una xarxa adversària-generativa (GAN, de les seves sigles en anglès) entrenada amb les classes conegudes (sense usar mostres de les noves classes), és possible reentrenar la xarxa neuronal per classificar tots els senyals en una proporció ~1/4 entre classes noves i conegudes. La segona tasca consisteix en la detecció de vehicles i vianants (objectes) en imatges. En aquest cas, assumim la recepció d’un conjunt d’imatges sense anotar. L’objectiu és anotar automàticament aquestes imatges perquè així es puguin utilitzar posteriorment en l’entrenament del detector d’objectes que desitgem. Per assolir aquest objectiu, vam partir de dades sintètiques anotades i proposem un mètode d’aprenentatge semi-supervisat basat en la idea del co-aprenentatge. A més, utilitzem una GAN per reduir la distància entre els dominis sintètic i real abans d’aplicar el co-aprenentatge. Els nostres resultats quantitatius mostren que el procediment desenvolupat permet anotar el conjunt d’imatges d’entrada amb la precisió suficient per entrenar detectors d’objectes de forma efectiva; és a dir, tan precisos com si les imatges s’haguessin anotat manualment. A la tercera tasca deixem enrere l’espai 2D de les imatges, i ens centrem en processar núvols de punts 3D provinents de sensors LiDAR. El nostre objectiu inicial era desenvolupar un detector d’objectes 3D (vehicles, vianants, ciclistes) entrenat en núvols de punts sintètics estil LiDAR. En el cas de les imatges es podia esperar el problema de canvi de domini degut a les diferències visuals entre les imatges sintètiques i reals. Però, a priori, no esperàvem el mateix en treballar amb núvols de punts LiDAR, ja que es tracta d’informació geomètrica provinent del mostreig actiu del món, sense que l’aparença visual influeixi. No obstant això, a la pràctica, hem vist que també apareixen els problemes d’adaptació de domini. Factors com els paràmetres de mostreig del LiDAR, la configuració dels sensors a bord del vehicle autònom, i l’anotació manual dels objectes 3D, indueixen diferències de domini. A la tesi demostrem aquesta observació mitjançant un exhaustiu conjunt d’experiments amb diferents bases de dades públiques i detectors 3D disponibles. Per tant, en relació amb la tercera tasca, el treball s’ha centrat finalment en el disseny d’una GAN capaç de transformar núvols de punts 3D per portar-los d’un domini a un altre, un tema relativament inexplorat.Finalment, cal esmentar que tots els conjunts de dades sintètiques usats en aquestes tres tasques han estat dissenyats i generats en el context d’aquesta tesi doctoral i es faran públics. 
En general, considerem que aquesta tesi presenta un avanç en el foment de la utilització de dades sintètiques per al desenvolupament de models profunds de percepció, essencials en el camp de la conducció autònoma.<br>La anotación manual de imágenes para desarrollar sistemas basados en visión por computador ha sido uno de los puntos más problemáticos desde que se utiliza aprendizaje automático para ello. Esta tesis se centra en aprovechar los datos sintéticos para aliviar el coste de las anotaciones manuales en tres tareas de percepción relacionadas con la asistencia a la conducción y la conducción autónoma. En todo momento asumimos el uso de redes neuronales convolucionales para el desarrollo de nuestros modelos profundos de percepción. La primera tarea plantea el reconocimiento de señales de tráfico, un problema de clasificación de imágenes. Asumimos que el número de clases de señales de tráfico a reconocer se debe incrementar sin haber podido anotar nuevas imágenes con las que realizar el correspondiente reentrenamiento. Demostramos que aprovechando los datos sintéticos de las nuevas clases y transformándolas con una red adversaria-generativa (GAN, de sus siglas en inglés) entrenada con las clases conocidas (sin usar muestras de las nuevas clases), es posible reentrenar la red neuronal para clasificar todas las señales en una proporción de ~1/4 entre clases nuevas y conocidas. La segunda tarea consiste en la detección de vehículos y peatones (objetos) en imágenes. En este caso, asumimos la recepción de un conjunto de imágenes sin anotar. El objetivo es anotar automáticamente esas imágenes para que así se puedan utilizar posteriormente en el entrenamiento del detector de objetos que deseemos. Para alcanzar este objetivo, partimos de datos sintéticos anotados y proponemos un método de aprendizaje semi-supervisado basado en la idea del co-aprendizaje. Además, utilizamos una GAN para reducir la distancia entre los dominios sintético y real antes de aplicar el co-aprendizaje. Nuestros resultados cuantitativos muestran que el procedimiento desarrollado permite anotar el conjunto de imágenes de entrada con la precisión suficiente para entrenar detectores de objetos de forma efectiva; es decir, tan precisos como si las imágenes se hubiesen anotado manualmente. En la tercera tarea dejamos atrás el espacio 2D de las imágenes, y nos centramos en procesar nubes de puntos 3D provenientes de sensores LiDAR. Nuestro objetivo inicial era desarrollar un detector de objetos 3D (vehículos, peatones, ciclistas) entrenado en nubes de puntos sintéticos estilo LiDAR. En el caso de las imágenes cabía esperar el problema de cambio de dominio debido a las diferencias visuales entre las imágenes sintéticas y reales. Pero, a priori, no esperábamos lo mismo al trabajar con nubes de puntos LiDAR, ya que se trata de información geométrica proveniente del muestreo activo del mundo, sin que la apariencia visual influya. Sin embargo, en la práctica, hemos visto que también aparecen los problemas de adaptación de dominio. Factores como los parámetros de muestreo del LiDAR, la configuración de los sensores a bordo del vehículo autónomo, y la anotación manual de los objetos 3D, inducen diferencias de dominio. En la tesis demostramos esta observación mediante un exhaustivo conjunto de experimentos con diferentes bases de datos públicas y detectores 3D disponibles. 
Por tanto, en relación a la tercera tarea, el trabajo se ha centrado finalmente en el diseño de una GAN capaz de transformar nubes de puntos 3D para llevarlas de un dominio a otro, un tema relativamente inexplorado. Finalmente, cabe mencionar que todos los conjuntos de datos sintéticos usados en estas tres tareas han sido diseñados y generados en el contexto de esta tesis doctoral y se harán públicos. En general, consideramos que esta tesis presenta un avance en el fomento de la utilización de datos sintéticos para el desarrollo de modelos profundos de percepción, esenciales en el campo de la conducción autónoma.<br>Manually annotating images to develop vision models has been a major bottleneck since computer vision and machine learning started to walk together. This thesis focuses on leveraging synthetic data to alleviate manual annotation for three perception tasks related to driving assistance and autonomous driving. In all cases, we assume the use of deep convolutional neural networks (CNNs) to develop our perception models. The first task addresses traffic sign recognition (TSR), a kind of multi-class classification problem. We assume that the number of sign classes to be recognized must be suddenly increased without having annotated samples to perform the corresponding TSR CNN re-training. We show that leveraging synthetic samples of such new classes and transforming them by a generative adversarial network (GAN) trained on the known classes (i.e., without using samples from the new classes), it is possible to re-train the TSR CNN to properly classify all the signs for a ~1/4 ratio of new/known sign classes. The second task addresses on-board 2D object detection, focusing on vehicles and pedestrians. In this case, we assume that we receive a set of images without the annotations required to train an object detector, i.e., without object bounding boxes. Therefore, our goal is to self-annotate these images so that they can later be used to train the desired object detector. In order to reach this goal, we leverage from synthetic data and propose a semi-supervised learning approach based on the co-training idea. In fact, we use a GAN to reduce the synth-to-real domain shift before applying co-training. Our quantitative results show that co-training and GAN-based image-to-image translation complement each other up to allow the training of object detectors without manual annotation, and still almost reaching the upper-bound performances of the detectors trained from human annotations. While in previous tasks we focus on vision-based perception, the third task we address focuses on LiDAR pointclouds. Our initial goal was to develop a 3D object detector trained on synthetic LiDAR-style pointclouds. While for images we may expect synth/real-to-real domain shift due to differences in their appearance (e.g. when source and target images come from different camera sensors), we did not expect so for LiDAR pointclouds since these active sensors factor out appearance and provide sampled shapes. However, in practice, we have seen that it can be domain shift even among real-world LiDAR pointclouds. Factors such as the sampling parameters of the LiDARs, the sensor suite configuration on-board the ego-vehicle, and the human annotation of 3D bounding boxes, do induce a domain shift. We show it through comprehensive experiments with different publicly available datasets and 3D detectors. 
This redirected our goal towards the design of a GAN for pointcloud-to-pointcloud translation, a relatively unexplored topic. Finally, it is worth to mention that all the synthetic datasets used for these three tasks, have been designed and generated in the context of this PhD work and will be publicly released. Overall, we think this PhD presents several steps forward to encourage leveraging synthetic data for developing deep perception models in the field of driving assistance and autonomous driving.<br>Universitat Autònoma de Barcelona. Programa de Doctorat en Informàtica
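As a hedged illustration of the co-training idea mentioned above (not the thesis's detector pipeline or its GAN-based synth-to-real translation), the sketch below lets two simple classifiers, each seeing a different feature "view" of the data, pseudo-label their most confident unlabeled samples into a shared labelled pool. It assumes the labelled seed set already contains every class.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(Xa, Xb, y, labeled_idx, rounds=10, per_round=20):
    """Minimal co-training loop. y only needs to be correct at labeled_idx;
    other entries are overwritten with pseudo-labels as training proceeds."""
    labeled = set(int(i) for i in labeled_idx)
    y = np.array(y)
    for _ in range(rounds):
        unlabeled = np.array(sorted(set(range(len(y))) - labeled))
        if unlabeled.size == 0:
            break
        idx = np.array(sorted(labeled))
        clf_a = LogisticRegression(max_iter=1000).fit(Xa[idx], y[idx])
        clf_b = LogisticRegression(max_iter=1000).fit(Xb[idx], y[idx])
        for clf, X in ((clf_a, Xa), (clf_b, Xb)):
            conf = clf.predict_proba(X[unlabeled]).max(axis=1)
            picks = unlabeled[np.argsort(conf)[-per_round:]]   # most confident samples
            y[picks] = clf.predict(X[picks])                   # pseudo-labels (possibly noisy)
            labeled.update(int(i) for i in picks)
    final_idx = np.array(sorted(labeled))
    return LogisticRegression(max_iter=1000).fit(Xa[final_idx], y[final_idx])
```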
APA, Harvard, Vancouver, ISO, and other styles
28

Sukkasi, Sittha. "Commons-oriented information syntheses : a model for user-driven design and creation activities." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44801.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2008.<br>Includes bibliographical references (p. 173-195).<br>The phenomenon of user-driven creation activities has recently emerged and is quickly expanding, especially on the Web. A growing number of people participate in online activities, where they generate content by themselves, freely share their creations, and combine one another's creations in order to synthesize new material. Similar activities also occur in the area of product development, as people design products for themselves and share their designs for others to reuse or build upon. The phenomenon shows that under some special circumstances, typically passive users can become active creators. Also, under such circumstances, creation activities are not just isolated do-it-yourself activities of an individual; instead, people build on one another's creations and further share their own. Recognizing the positive potential of user-driven design, this work endeavored to understand the underlying drivers of open-source creation and the essential environmental elements. The most important element is the commons, or shared resources, of the communities where the activities take place. A model of commons-oriented information syntheses was formulated. The model provides a unifying description of user-driven creation activities and, more importantly, serves as a general prescription for how to construct a circumstance to recreate the phenomenon for desired applications. Key aspects of the model include: that, in this particular form of information synthesis, the processes of creating information, participating in a community, and sharing information take place integrally; that the three processes revolve around the commons; and that people consider the prospective benefits and costs of all three processes when they decide whether or not to engage in a synthesis activity. This understanding can be employed to build circumstances under which the phenomenon can be recreated.<br>The ability to recreate the phenomenon of user-driven creation activities can be beneficial in many areas, including design and knowledge transfer. In the design area, the understanding can be used to build an environment that induces and fosters open-source design. With such an environment, people can design things for themselves by reusing, remixing, and building on designs shared by others. They can also freely make available their own designs, which can continue to evolve through a series of building-on processes by others. In the knowledge transfer area, the understanding can be a key to constructing an environment that not only supports transfer of knowledge, but also enables people to further generate knowledge by building on what they receive, particularly when the transferred knowledge is in meta-forms such as simulation models. Possible applications include: engineering education (where students can connect models of fundamental topics in various ways to create simulations of complex systems and learn from them), sustainable development (where citizens can integrate models of potential environmental remedies to figure out which solution mix will be the most effective in their situations), and academic communities (where researchers can share and allow their colleagues to reuse or build on simulation models from which the results they publish in journal papers are derived).
A prototypical online environment was designed and implemented, employing the essential elements outlined in the model. Hosting a commons of environmental and energy-related simulation models, the environment functions as an open-source design environment for alternative energy systems and a public platform for generative transfers of environmental knowledge. Anyone can freely access the commons, build on them to synthesize new simulation models, and further share their synthesized models as new commons.<br>by Sittha Sukkasi.<br>Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
29

Van, der Merwe Johann. "Effect of creatine monohydrate supplementation for 3 weeks on testosterone conversion to dihydrotestosterone in young rugby players." Thesis, Stellenbosch : University of Stellenbosch, 2006. http://hdl.handle.net/10019.1/3464.

Full text
Abstract:
Thesis (MPhil (Physiological Sciences))--University of Stellenbosch, 2006.<br>Background. Creatine monohydrate is widely used for its purported ergogenic and anabolic properties. The mechanism by which creatine supplementation enhances muscle growth is not understood. This study was undertaken to determine whether creatine monohydrate supplementation increases the conversion rate of testosterone to dihydrotestosterone. An increase in dihydrotestosterone could partly explain the beneficial effect of creatine monohydrate on muscle hypertrophy. Methods. Subcommittee C of the research committee of the University of Stellenbosch approved the study (project number 2001/C045). The study was designed as a double-blind crossover with all subjects (n = 20) taking part in each leg of the study: group 1 (n = 10) took creatine monohydrate and group 2 (n = 10) took glucose during the first leg, and in accordance with the crossover design the groups were reversed in the second leg. Gelatin capsules were filled with either 5 g of creatine monohydrate or 5 g of glucose. Subjects taking creatine monohydrate also took 25 g of glucose to improve absorption of the creatine. In the loading phase, subjects took 25 g of creatine monohydrate plus 25 g of glucose (ten capsules in all), or ten capsules of glucose, per day for seven days. In the maintenance phase they took 5 g of creatine monohydrate plus 25 g of glucose (six capsules in all), or six capsules of glucose, per day for 14 days. The groups were reversed after a six-week washout period and the dosages repeated as per the crossover design. Blood samples were taken on day zero of the study as baseline measurements, repeated on day 7 (after the loading phase), and again on day 21 (after the maintenance phase). These were repeated in the second leg of the study as per the crossover design. Serum was separated within one hour of collection and stored at -70 °C. Testosterone and dihydrotestosterone concentrations were determined using a radio-immunoassay kit by an accredited university laboratory, and the percentage conversion of testosterone to dihydrotestosterone was calculated. The results were statistically analyzed using paired t-tests at the beginning of each leg of the study and repeated-measures analysis of variance for the pooled data for each condition over the whole study. Results. The differences in blood levels of testosterone and dihydrotestosterone on both days 0 were not statistically significant, which made pooling of the data possible. The difference in the percentage conversion of testosterone to dihydrotestosterone over the study period between the creatine monohydrate condition and the glucose condition was, however, significant (p < 0.0001). In this small study highly significant statistical results were obtained. The answer to how creatine taken as a supplement exerts its effect may lie in the increased rate of conversion of testosterone to dihydrotestosterone. Conclusion. Given the known greater androgenic effect of dihydrotestosterone compared with testosterone, the increase in testosterone conversion to dihydrotestosterone could explain how creatine supplementation exerts its anabolic effect in susceptible individuals. A larger study should be done to confirm these results and answer the questions arising from the findings.
APA, Harvard, Vancouver, ISO, and other styles
30

Cadic, Didier. "Optimisation du procede de creation de voix en synthese par selection." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00608610.

Full text
Abstract:
This thesis is set in the context of text-to-speech synthesis. More precisely, it deals with the voice-creation process in unit-selection synthesis. The state of the art relies on recording a speaker for one to two weeks, following a reading script of several tens of thousands of words. The 5 to 10 hours of speech collected are generally reviewed by human operators to verify the phonetic segmentation and thereby improve the final quality of the synthetic voice. The overall heaviness of this process considerably hinders the diversification of synthetic voices; we therefore propose a rationalisation of it here. We introduce a new unit, called the "vocalic sandwich", for optimising the coverage of reading scripts. On the phonetic level, this unit takes better account of the segmental limits of unit-selection synthesis than traditional units (diphones, triphones, syllables, words, etc.). On the linguistic level, a new contextual enrichment allows us to focus the coverage better, without neglecting prosodic aspects. We propose ways of increasing control over the sentences of the reading script, both in their length and in their phonetic and prosodic relevance, in order to better anticipate the content of the final speech corpus and to make the segmentation task automatable. We also introduce an alternative to the classical corpus-condensation strategy by developing a semi-automatic sentence-creation algorithm, thanks to which we increase the linguistic density of the reading script by 30 to 40%. These new tools allow us to establish a very efficient process for creating synthetic voices, a process that we validate through the creation and subjective evaluation of numerous voices. Perceptual scores comparable to those of the traditional approach are thus reached with as little as 40 minutes of speech (half a day of recording) and without manual post-processing. Finally, we build on this result to enrich our synthetic voices with various expressive, multi-expressive and paralinguistic components.
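For readers unfamiliar with reading-script design, the sketch below shows a plain greedy set-cover baseline for selecting sentences that cover a target unit inventory. Greedy set cover is named here explicitly as a common baseline; it is not necessarily the coverage or sentence-creation algorithm developed in this thesis, and the sentence and unit names are made up.

```python
# Greedy coverage baseline: repeatedly pick the candidate sentence that covers
# the largest number of still-uncovered target units (e.g. "vocalic sandwiches").
from typing import Dict, List, Set

def greedy_script_selection(candidate_sentences: Dict[str, Set[str]],
                            target_units: Set[str]) -> List[str]:
    candidates = dict(candidate_sentences)   # work on a copy
    uncovered = set(target_units)
    script: List[str] = []
    while uncovered and candidates:
        best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
        if not candidates[best] & uncovered:
            break   # no remaining candidate covers a missing unit
        script.append(best)
        uncovered -= candidates.pop(best)
    return script

# Toy usage with made-up sentences and unit inventories.
candidates = {
    "sentence A": {"u1", "u2", "u3"},
    "sentence B": {"u3", "u4"},
    "sentence C": {"u5"},
}
print(greedy_script_selection(candidates, {"u1", "u2", "u3", "u4", "u5"}))
```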
APA, Harvard, Vancouver, ISO, and other styles
31

Renman, Casper. "Creating Human-like AI Movement in Games Using Imitation Learning." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210887.

Full text
Abstract:
The way characters move and behave in computer and video games is an important factor in their believability, which has an impact on the player's experience. This project explores Imitation Learning using limited amounts of data as an approach to creating human-like AI behaviour in games, and through a user study investigates what factors determine if a character is human-like when observed through the character's first-person perspective. The idea is to create or shape AI behaviour by recording one's own actions. The implemented framework uses a Nearest Neighbour algorithm with a KD-tree as the policy which maps a state to an action. Results showed that the chosen approach was able to create human-like AI behaviour while respecting the performance constraints of a modern 3D game.
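A minimal sketch of the kind of policy described above: demonstrations are stored as (state, action) pairs, a KD-tree indexes the states, and at run time the action of the nearest recorded state is replayed. The state and action dimensions and the random demonstration data are illustrative assumptions, not taken from the thesis.

```python
# Nearest-neighbour imitation policy backed by a KD-tree over recorded states.
import numpy as np
from scipy.spatial import cKDTree

class NearestNeighbourPolicy:
    def __init__(self, states: np.ndarray, actions: np.ndarray):
        # states: (N, D) array of recorded game states (e.g. position, velocity)
        # actions: (N, A) array of the actions taken in those states
        self.tree = cKDTree(states)
        self.actions = actions

    def act(self, state: np.ndarray) -> np.ndarray:
        _, idx = self.tree.query(state)   # index of the closest recorded state
        return self.actions[idx]

# Toy usage with random demonstration data.
rng = np.random.default_rng(0)
demo_states = rng.normal(size=(1000, 6))     # e.g. position + velocity
demo_actions = rng.normal(size=(1000, 2))    # e.g. movement direction
policy = NearestNeighbourPolicy(demo_states, demo_actions)
print(policy.act(np.zeros(6)))
```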
APA, Harvard, Vancouver, ISO, and other styles
32

Prendergast, Cathal Francis. "Novel synthetic route for the creation of a newly formed zeolite material." Thesis, Manchester Metropolitan University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.491226.

Full text
Abstract:
The synthesis of silica by sol-gel mineralization of cellulose nanorod nematic suspensions under a magnetic field gradient was investigated. The formation of a crystalline material has been found to have a direct relation to the presence of a magnetic field gradient, coupled with the additional influence of an efficiently mixed system. This 'perfect' mixing occurs through the self-stimulated vortex flow generated in far-from-equilibrium conditions. The vortex mixing undergoes vectorial motion that is amplified and governed by a magnetic field gradient.
APA, Harvard, Vancouver, ISO, and other styles
33

Steimel, Joshua Paul. "Synthetic creation of a chemotactic system via utilization of magnetically actuated microrobotic walkers." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/78511.

Full text
Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 2012.<br>Cataloged from PDF version of thesis.<br>Includes bibliographical references (p. 55-58).<br>Chemotaxis is a fundamental biological process that plays an important role in disease, reproduction, and most biological functions. Here, we present a radically novel method to create the first synthetic chemotactic system, utilizing magnetically actuated microrobotic walkers. The system used a rotating magnetic field that, once actuated, induced the magnetic beads to self-assemble into microrobots and walk on surfaces. The velocity of these microrobotic walkers could be modulated by the frequency and the number of beads that composed the walkers. The receptor-ligand pair of biotin-streptavidin was utilized due to the extremely strong binding affinity of the pair. The presence of free biotin binding sites on the surface was required to obtain chemotactic motion, as these binding sites modulated walker velocity. The walkers moved faster in areas with a high density of binding sites and slower in areas with a low density of binding sites. To achieve chemotaxis, gradients in the density of binding sites were required. Gradients were created by placing a droplet of concentrated streptavidin on a biotinylated slide and letting the droplet evaporate. The Gaussian evaporation process created differentials in the density of binding sites. A series of continuous velocity measurements were conducted across the sample to map the walker velocity profile. The velocity profile illustrated regions with a high density of binding sites as well as a local minimum in the density of binding sites. The discrete motion of the beads was analyzed to understand how chemotactic directed motion could be achieved by breaking the symmetry of the system. Walkers in an area with a high density of binding sites experienced a significant amount of "sticking" followed by hinge-like motion, while walkers in a low-density area exhibited virtually no "sticking" and tended to slip much more frequently. Walkers were then placed on a random walk path and chemotactic directed motion was observed as the walkers drifted towards regions with a high density of binding sites. The drift velocities that were extracted from the random walk path illustrated the discrepancy between the chemical gradients present in this synthetic chemotactic system. Keywords: biomimetic, chemotaxis, superparamagnetic microrobotic walkers, biotin, streptavidin, PEG, drift velocity, random walk.<br>by Joshua Paul Steimel.<br>S.B.
APA, Harvard, Vancouver, ISO, and other styles
34

Ellafi, Ali Mosbah Mohammed. "Modelling cultural dimensions and social relationships to create cultural synthetic characters." Thesis, Heriot-Watt University, 2015. http://hdl.handle.net/10399/2962.

Full text
Abstract:
The work presented in this thesis investigates studies and theories of culture, social power, and the relationship between culture and emotion, as studied by psychologists and anthropologists. We operationalised a Cultural Dimensions model proposed by Hofstede, together with Social Power, and integrated them into an already existing architecture for autonomous agents called "FAtiMA". The purpose of the adapted system is to generate culturally specific behaviour in character interaction which is recognisably different to users. Two experiments with human participants were conducted to investigate the perceived differences between two groups of characters: with and without cultural parameters. The main result shows that users do recognise the differences in character behaviour between the two experimental cases, which demonstrates that our model is able to create culturally specific synthetic characters.
APA, Harvard, Vancouver, ISO, and other styles
35

Blagoiev, Aleksander. "Implementation and verification of a quantitative MRI method for creating and evaluating synthetic MR images." Thesis, Karlstads universitet, Institutionen för ingenjörsvetenskap och fysik (from 2013), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-79068.

Full text
Abstract:
The purpose of this thesis was to implement and quantitatively test a quantitative MRI (qMRI) method, from which synthetic MR images are created and also evaluated. The parameter maps of T1, T2*, and effective proton density (PD*) were tested with reference tubes containing different relaxation times and different concentrations of water (H2O) and heavy water (D2O). Two normal volunteers were also used to test the qMRI method, by performing regional analysis on their parameter maps. The synthetic FLASH MR images were evaluated by: using the relative standard deviation of a region of interest (ROI) as a measure of the signal-to-noise ratio (SNR), implanting artificial multiple sclerosis (MS) lesions in the parameter maps used to create the synthetic images, and obtaining an MRI radiologist's opinion of the images. All MRI measurements were conducted on a 3.0 Tesla scanner (Siemens MAGNETOM Skyrafit). The results from the reference-tube testing show that the implementation was reasonably successful, although the T2* maps cannot display values for voxels whose T2 exceeds 100 ms. In vivo parameter map ROI values were consistent between volunteers. The SNR and contrast-to-noise ratio of the synthetic images are comparable to those of their measured counterparts, depending on TE. The artificial MS lesions were distinguishable from normal-appearing tissue in a T1-weighted synthetic FLASH. The radiologist considered the synthetic T2*-weighted FLASH somewhat promising for clinical use after further research and development, whereas the synthetic T1-weighted FLASH already had clinical value.
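To make the image-synthesis step concrete, here is a minimal sketch (an illustration, not this thesis's implementation) of computing a synthetic spoiled gradient-echo (FLASH) image voxel-wise from T1, T2* and PD* maps, using the standard spoiled gradient-echo signal equation. The sequence parameters, array sizes and tissue value ranges below are assumptions.

```python
# Synthetic FLASH signal per voxel:
#   S = PD* * sin(a) * (1 - exp(-TR/T1)) / (1 - cos(a) * exp(-TR/T1)) * exp(-TE/T2*)
import numpy as np

def synthetic_flash(pd_star: np.ndarray, t1: np.ndarray, t2_star: np.ndarray,
                    tr_ms: float, te_ms: float, flip_deg: float) -> np.ndarray:
    alpha = np.deg2rad(flip_deg)
    e1 = np.exp(-tr_ms / np.clip(t1, 1e-3, None))        # avoid division by zero
    return (pd_star * np.sin(alpha) * (1.0 - e1)
            / (1.0 - np.cos(alpha) * e1)
            * np.exp(-te_ms / np.clip(t2_star, 1e-3, None)))

# Toy 64x64 parameter maps with values loosely in the range of brain tissue at 3 T.
rng = np.random.default_rng(0)
pd_map = rng.uniform(0.6, 1.0, size=(64, 64))
t1_map = rng.uniform(800.0, 1600.0, size=(64, 64))       # ms
t2s_map = rng.uniform(30.0, 70.0, size=(64, 64))         # ms

t1_weighted = synthetic_flash(pd_map, t1_map, t2s_map, tr_ms=20.0, te_ms=3.0, flip_deg=25.0)
t2s_weighted = synthetic_flash(pd_map, t1_map, t2s_map, tr_ms=50.0, te_ms=25.0, flip_deg=8.0)
print(t1_weighted.shape, t2s_weighted.shape)
```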
APA, Harvard, Vancouver, ISO, and other styles
36

Chuang, Hsin-i. "La matérialité du souvenir : l’expérience esthétique comme expérience mnésique dans l'art contemporain." Electronic Thesis or Diss., Paris 8, 2019. http://www.theses.fr/2019PA080025.

Full text
Abstract:
The memory creates a predisposition to move towards the evanescent moments of an intense memory, because it declares itself as a certain deviation related to time and space, referring to where we are located. After analysing the powers of remembrance in artistic practices, we proposed to demonstrate that the work of art, correlated with perception, teaches us to fully grasp a potential materiality that makes possible a new analysis of our sensitivities. Because works of art are based on a tangible reality, they are often confused with the formal process of the materials that support them. In particular, thinking about materiality not through reflection on the artist's work alone, but also through our feeling, completely changes our apprehension of the work. By way of an unobtrusive indication, the intensive sensation of the body leads us to recognise all those who have contributed to transmitting the emotions felt and to enriching our memory. The artist operates a presentification of his or her emotional state and explores the connections and disjunctions between the senses, in order to find himself or herself in a rhythm of duration, which only exists if he or she participates in it. We have probed the profile of a materiality of memory, with a view to considering the tangible possibility of an emotional state within the framework of our different experiences, as a force for artistic creation.
We attempted to recognise the specificity of this object of study and paid particular attention to the « process of creation », because our approach starts from the practical and theoretical conditions that make it possible to elaborate the problems of memory in the field of art.
APA, Harvard, Vancouver, ISO, and other styles
37

Yamamoto, Keisuke. "Modification and application of glycosidases to create homogeneous glycoconjugates." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:59d1917c-345d-4fe3-ace4-67dd3c8bc017.

Full text
Abstract:
In the post-genomic era, recognition of the importance of sugars is increasing in biological research. For the precise analysis of their functions, homogeneous materials are required. Chemical synthesis is a powerful tool for the preparation of homogeneous oligosaccharides and glycoconjugates. Glycosidases are potent catalysts for this purpose because they achieve high stereo- and regio-selectivities under conditions benign to biomolecules, without repetitive protection/deprotection procedures. A glycosynthase is an artificial enzyme which is derived from a glycosidase and is devised for glycosylation reactions. To suppress the mechanistically inherent oligomerization side reaction of this class of biocatalysts, a glycosidase with plastic substrate recognition was engineered to afford the first α-mannosynthase. This novel biocatalyst showed low occurrence of oligomerized products, as designed, and was applied to prepare a wide range of oligosaccharides. Glycosidases are also valuable tools for glycan engineering of glycoconjugates, which is a pivotal issue in the development of pharmaceutical agents, including immunoglobulin G (IgG)-based drugs. EndoS, an endo-β-N-acetylglucosaminidase from Streptococcus pyogenes, natively cleaves N-glycans on IgG specifically. When the latent glycosylation activity of this enzyme was applied, the N-glycan remodelling of full-length IgG was successfully achieved for the first time and a highly pure glycoform was obtained using the chemically synthesized oxazoline tetrasaccharide as the glycosyl donor. This biocatalytic reaction allows the development of a novel type of antibody-drug conjugate (ADC) in which drug molecules are linked to N-glycans site-specifically. For this purpose, glycans with bioorthogonal reaction handles were synthesized and conjugated to IgG. A model reaction using a dye compound as the reaction partner worked successfully and the synthetic method for this newly designed ADC was validated. Glycan trimming of glycoproteins expressed from Pichia pastoris was performed using exoglycosidases to derive homogeneous glycoforms. Jack bean α-mannosidase (JBM) trimmed native N-glycans down to the core trisaccharide structure, but some of the glycoforms were found to be resistant to JBM activity. Enzymatic analyses using exoglycosidases suggested that the JBM-resistant factor was likely to be a β-mannoside. In summary, this work advanced the application of modified glycosidases for the preparation of oligosaccharides and also demonstrated the biocatalytic utility of glycosidases to produce biologically relevant glycoconjugates with homogeneous glycoforms.
APA, Harvard, Vancouver, ISO, and other styles
38

Steckel, Jonathan S. (Jonathan Stephen). "The synthesis of inorganic semiconductor nanocrystalline materials for the purpose of creating hybrid organic/inorganic light-emitting devices." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34493.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemistry, 2006.<br>Includes bibliographical references.<br>Colloidal semiconductor nanocrystals (NCs) or quantum dots (QDs) can be synthesized to efficiently emit light from the ultraviolet, across the entire visible spectrum, and into the near infrared. This is now possible due to the continual development of new core and core-shell NC structures to meet specific color needs in areas as diverse as optoelectronic devices and biological imaging. Core-shell semiconductor NCs are unique light emitters. They are more stable over time to photobleaching compared to organic dyes. Their emission is efficient and their spectral full width at half maximum remains narrow as their size is synthetically changed to provide desired peak wavelengths of emission to within plus or minus a couple of nanometers. They can be purified and manipulated in solution, and their chemical interaction with the environment is the same for all sizes and can be modified using chemical techniques. These unique properties make semiconductor NCs ideal for use in light-emitting devices (QD-LEDs). This work shows how electroluminescence can be extended into the near-infrared region of the spectrum by employing infrared-emitting NCs, as well as into the blue region of the spectrum by designing and synthesizing NCs specifically for this application.<br>Once efficient and color-saturated electroluminescence at the visible spectrum's extremes had been realized, it was a natural extension to begin exploring the potential of QD-LED devices to satisfy the technological requirements of flat-panel displays and imaging applications. This led to the synthesis of a new green-emitting core-shell NC material to meet the specific color needs for flat-panel display applications. At the same time we developed a new QD-LED device fabrication method to allow the patterning of the NC monolayer in our devices. Micro-contact printing the NC monolayer, instead of using phase separation, provided efficient and highly color-saturated QD-LEDs in the red, green, and blue, and allowed us to pattern these monolayers towards the development of pixelated QD-LEDs such as those needed for flat-panel display applications. Along the way, the synthesis of colloidal NCs was studied to allow for more control in synthesizing higher-quality materials in the future. The simple synthesis of PbSe NCs was used as a model system to begin to understand the mechanism of how the molecular precursors are reduced in solution to produce solid crystalline material in the presence of phosphorus-containing molecules.<br>by Jonathan S. Steckel.<br>Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
39

Cooper, Merideth A. "Creating an Efficient Biopharmaceutical Factory: Protein Expression and Purification Using a Self-Cleaving Split Intein." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu152226172238882.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Norkevičius, Giedrius. "Method for creating phone duration models using very large, multi-speaker, automatically annotated speech corpus." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2011. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2011~D_20110201_144440-12017.

Full text
Abstract:
Two heretofore unanalyzed aspects are addressed in this dissertation: 1. Building a model capable of predicting phone durations for Lithuanian. All existing investigations of Lithuanian phone durations were performed by linguists; these investigations are usually exploratory statistics and are limited to the analysis of a single factor affecting phone duration. In this work, phone duration dependencies on contextual factors were estimated by means of a machine learning method and written in explicit form (a decision tree). 2. Construction of a language-independent method for creating phone duration models using a very large, multi-speaker, automatically annotated speech corpus. Most researchers worldwide use speech corpora that are relatively small scale, single speaker, and manually annotated or at least validated by experts. The reasons usually given are that using multi-speaker speech corpora is inappropriate because different speakers have different pronunciation manners and speak at different speech rates, and that automatically annotated corpora lack accuracy. The created method for phone duration modeling enables the use of such corpora. Its main components are the reduction of noisy data in the speech corpus and the normalization of speaker-specific phone durations using phone-type clustering. The listening tests of synthesized speech showed that the perceived naturalness is affected by the durations of the underlying phones; the use of contextual... [to full text]
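As a rough illustration of learning explicit duration rules from contextual factors, the sketch below fits a regression tree on toy features and prints it as readable rules. The feature set, data, and hyperparameters are invented for illustration and are not the thesis's corpus, features, or model.

```python
# Toy decision-tree model of phone duration as a function of contextual factors.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n = 2000
# Invented contextual features: phone class, neighbouring phone classes,
# stress flag, and position of the phone in the word.
X = np.column_stack([
    rng.integers(0, 10, n),    # phone class
    rng.integers(0, 10, n),    # previous phone class
    rng.integers(0, 10, n),    # next phone class
    rng.integers(0, 2, n),     # stressed syllable?
    rng.integers(0, 5, n),     # position in word
])
# Toy target: duration in milliseconds, loosely depending on the features.
y = 60 + 8 * X[:, 0] + 15 * X[:, 3] + rng.normal(0, 10, n)

tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=50).fit(X, y)
print(export_text(tree, feature_names=[
    "phone", "prev_phone", "next_phone", "stressed", "word_pos"]))
```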
APA, Harvard, Vancouver, ISO, and other styles
41

Jablonski, Piotr. "Synthesis of Silica Modified with Corannulene Ligands : Attempts to create an HPLC column capable of separating fullerenes and hydrogenated fullerenes." Thesis, Umeå universitet, Kemiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-123308.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Klesken, Ashley. "Toward a Catholic Cosmocentric Theological Anthropology: A Synthesis from Ask the Beasts: Darwin and the God of Love and Laudato Si'." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1596719880496661.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Bazin, Théis. "Designing novel time-frequency scales for interactive music creation with hierarchical statistical modeling." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS242.

Full text
Abstract:
Modern musical creation unfolds on many different time scales: from the vibration of a string or the resonance of an electronic instrument at the millisecond scale, through the few seconds typical of an instrument's note, to the tens of minutes of operas or DJ sets. The interleaving of these multiple scales has led to the development of numerous technical and theoretical tools to ease the manipulation of time. These abstractions, such as scales, rhythmic notations, or the usual models of audio synthesis, largely infuse current software and hardware tools for musical creation. However, these abstractions, which emerged for the most part during the 20th century in the West on the basis of classical theories of written music, are not devoid of cultural a priori. They reflect principles aimed at abstracting away certain aspects of the music (for example, micro-deviations with respect to a metronomic time, or micro-deviations of frequency with respect to an idealized pitch) whose high degree of physical variability makes them inconvenient for musical writing. These compromises are relevant when the written music is intended to be performed by musicians, who can reintroduce variation and physical and musical richness; they become limiting, however, in the context of computer-assisted music creation, where computers coldly render these coarse abstractions, and they tend to restrict the diversity of the music that can be produced with such tools. Through a review of several typical interfaces for music creation, I show that an essential factor is the scale of the human-machine interactions proposed by these abstractions. At their most flexible, such as audio representations or piano-roll representations with unquantized time, they prove difficult to manipulate, as they require a high degree of precision that is particularly unsuitable for modern mobile and touch devices. On the other hand, most commonly used abstractions with discretized time, such as scores or sequencers, prove too constraining for the creation of culturally diverse music that does not follow the proposed time and pitch grids. In this thesis, I argue that artificial intelligence, through its ability to build high-level representations of complex objects, allows the construction of new scales of music creation, designed for interaction, and thus enables radically new approaches to music creation. I present and illustrate this idea through the design and implementation of three web-based prototypes of AI-assisted music creation, one of which is based on a new neural model for the inpainting of musical instrument sounds, also designed in the framework of this thesis. These high-level representations (for sheet music, piano-rolls, and spectrograms) are deployed at a time-frequency scale coarser than the original data, but better suited to interaction.
By allowing localized transformations on these representations while also capturing, through statistical modeling, the aesthetic specificities and fine micro-variations of the original musical training data, these tools make it possible to obtain musically rich results easily and controllably. Through the evaluation of these three prototypes in real conditions by several artists, I show that these new scales of interactive creation are useful for both experts and novices. Thanks to the assistance of AI on technical aspects that normally require precision and expertise, they are also suitable for use on touch screens and mobile devices.
APA, Harvard, Vancouver, ISO, and other styles
44

Cassa, Christopher A. "Spatial outbreak detection analysis tool : a system to create sets of semi-synthetic geo-spatial clusters." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/33124.

Full text
Abstract:
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.<br>Includes bibliographical references (leaves 55-57).<br>Syndromic surveillance systems, especially software systems, have emerged as the leading outbreak detection mechanisms. Early outbreak detection systems can assist with medical and logistic decision support. One important concern for effectively testing these systems in practice is the scarcity of authentic outbreak health data. Because of this shortage, creating suitable geotemporal test clusters for surveillance algorithm validation is essential. Described is an automated tool that creates artificial patient clusters by varying a large variety of realistic outbreak parameters. The cluster creation tool is an open-source program that accepts a set of outbreak parameters and creates artificial geospatial patient data for a single cluster or a series of similar clusters. This helps automate the process of rigorous testing and validation of outbreak detection algorithms. Using the cluster generator, single patient clusters and series of patient clusters were created, as files and series of files containing patient longitude and latitude coordinates. These clusters were then tested and validated using a publicly available GIS visualization program. All generated clusters were properly created within the ranges that were entered as parameters at program execution. Sample semi-synthetic datasets from the cluster creation tool were then used to validate a popular spatial outbreak detection algorithm, the M-Statistic.<br>by Christopher A. Cassa.<br>M.Eng. and S.B.
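As a toy illustration of what generating one such geospatial cluster might look like, the sketch below draws patient coordinates around a chosen centre and writes them to a longitude/latitude file. The parameter names, the Gaussian spread, and the CSV output format are assumptions for illustration, not the actual tool's interface.

```python
# Generate one artificial patient cluster as longitude/latitude pairs.
import csv
import numpy as np

def generate_cluster(centre_lon: float, centre_lat: float, radius_km: float,
                     n_patients: int, out_path: str, seed: int = 0) -> None:
    rng = np.random.default_rng(seed)
    # Rough conversion: 1 degree of latitude ~ 111 km; longitude scaled by cos(lat).
    sigma_lat = radius_km / 111.0
    sigma_lon = radius_km / (111.0 * np.cos(np.deg2rad(centre_lat)))
    lons = centre_lon + rng.normal(0.0, sigma_lon, n_patients)
    lats = centre_lat + rng.normal(0.0, sigma_lat, n_patients)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["longitude", "latitude"])
        writer.writerows(zip(lons, lats))

# Toy usage: a 150-patient cluster with a ~2 km spread around Boston.
generate_cluster(-71.06, 42.36, radius_km=2.0, n_patients=150,
                 out_path="cluster_boston.csv")
```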
APA, Harvard, Vancouver, ISO, and other styles
45

Castex, Graciela Maria. "An analysis and synthesis of current theories of ethnicity and ethnic group processes using the creation of the Hispanic group as a case example /." Access Digital Full Text version, 1990. http://pocketknowledge.tc.columbia.edu/home.php/bybib/10937742.

Full text
Abstract:
Thesis (Ed.D.)--Teachers College, Columbia University.<br>Typescript; issued also on microfilm. Sponsor: Ellen Condliffe Lagemann. Dissertation Committee: Lambros Comitas. Bibliography: leaves 109-114.
APA, Harvard, Vancouver, ISO, and other styles
46

Smith, Andrew Fairley. "'Risk and resilience' : the mobilisation of professional knowledge in the creation of patient safety in anaesthetic practice : a reflexive interpretive synthesis of previously published work." Thesis, Lancaster University, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.727388.

Full text
Abstract:
The patient safety ‘movement’ has relied largely on formal, explicit structures in attempting to promote principles and practices from safety-critical industries within the healthcare sector. The ‘craft’ nature of medical practice is, however, predicated on the interplay of formal and tacit experiential knowledge. The specialty of anaesthesia is a branch of medicine, and yet the nature of anaesthetic practice resembles operational work within, for example, aviation and nuclear power, sharing with them the characteristics of time pressure, complex human-machine interactions, uncertainty and risk. This thesis takes the form of an interpretive meta-synthesis of six published journal articles to examine how patient safety is constructed and enacted through the transmission and co-creation of professional knowledge. After delineating the forms of knowledge used in anaesthetic work, it describes how abstract notions of safety are expressed within a number of specific aspects of practice: the use of electronic monitoring equipment; communication between actors on induction of, and emergence from, general anaesthesia; the definition and analysis of adverse incidents; postoperative handover of care in the recovery room; and the performance of regional anaesthesia as an exemplar technical procedure. Anaesthetists draw both on informal logics and routines of practice and on codified safety knowledge and tools. How, and when, to reconcile these two approaches relies on a dynamic combination of cognitive, affective and normative influences, which seem to be embedded in anaesthetists' professional identity. Paradoxically, much of this seems to be learned through contact with the very failure and error that the routines are designed to avoid and prevent; it is developed through dealing with the many perturbations and threats to the safety of their patients, whether through personal experience, the ‘cautionary tales’ of others, or imagining what might go wrong. The interpretive synthesis thus provides empirical support for theoretical notions of how resilience to error is created in everyday work in safety-critical settings.
APA, Harvard, Vancouver, ISO, and other styles
47

Franklin, Donna. "Meaningful Encounters: Creating a multi-method site for interacting with nonhuman life through bioarts praxis." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2014. https://ro.ecu.edu.au/theses/1574.

Full text
Abstract:
This research advocates a multi-method approach to bioarts praxis, reflexively and critically questioning the contemporary contexts that frame our engagement with nonhuman life. In doing so, the research aims to generate further community engagement with nonhuman life and the environment, and to engender critical discourse on the implications of developing biotechnologies. Hegemonic institutions influence the way culture is produced and how information is constructed and understood. Habermas (1987) suggests that these institutions will inevitably influence the individual's lifeworld as they shape lived experience through the process of systemic colonisation. I assert that this process also shapes how individuals engage with or understand nonhuman life. Through the implementation of three major projects, the research aims to develop the capacity of bioarts to challenge such institutions by providing opportunities for hands-on life science activities and real-time interactions with nonhuman life. By employing such methods, the research aims to counteract the impact of urbanised living and indifference to environmental conservation. Each aspect of the creative praxis provides a reflexive case study used to establish the research aims and address the research agenda. This includes my creative bioartworks, an art-science secondary educational course, and a curated group exhibition, symposium and workshop. This research provides an alternative communicative approach to hegemonic institutions such as the mass media, scientific biotechnological industries and traditional gallery spaces (Shanken, 2011).
APA, Harvard, Vancouver, ISO, and other styles
48

Forsee, William Joel. "Implementation of a Hybrid Weather Generator and Creating Sets of Synthetic Weather Series Consistent with Seasonal Climate Forecasts in the Southeastern United States." Scholarly Repository, 2008. http://scholarlyrepository.miami.edu/oa_theses/215.

Full text
Abstract:
Stochastic weather generators create multiple series of synthetic daily weather (precipitation, maximum temperature, etc.), and ideally these series will have statistical properties similar to those of the input historical data. The synthetic output has many applications and, for example, can be used in sectors such as agriculture and hydrology. This work used a 'hybrid' weather generator which consists of a parametric Markov chain for generating precipitation occurrence and a nonparametric k-nearest neighbor method for generating values of maximum temperature, minimum temperature, and precipitation. The hybrid weather generator was implemented and validated for use at 11 different locations in the Southeastern United States. A total of 36 graphic diagnostics were used to assess the model's performance. These diagnostics revealed that the weather generator successfully created synthetic series with most statistical properties of the historical data, including extreme wet and dry spell lengths and days of first and last freeze. Climate forecasts are typically provided for seasons or months. Alternatively, process models used for risk assessment often operate at daily time scales. If climate forecasts were incorporated into the daily weather input for process models, stakeholders could then use these models to assess possible impacts on their sector of interest due to anticipated changes in climate conditions. In this work, an 'ad hoc' resampling approach was developed to create sets of daily synthetic weather series consistent with seasonal climate forecasts in the Southeastern United States. In this approach, the output of the hybrid weather generator was resampled based on forecasts in two different formats: the commonly used tercile format and a probability distribution function. This resampling approach successfully created sets of synthetic series which reflected different forecast scenarios (i.e. wetter or drier conditions). Distributions of quarterly total precipitation from the resampled synthetic series were found to be shifted with respect to the corresponding historical distributions, and in some cases, the occurrence and intensity statistics of precipitation in the new weather series had changed with respect to the historical values.
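To make the two components concrete, here is a toy sketch of a hybrid generator in the spirit described above: a two-state Markov chain decides wet/dry occurrence, and a k-nearest-neighbour step resamples daily values from historical days in a similar state. The transition probabilities, k, and the similarity feature are illustrative assumptions, not the calibrated values of this work.

```python
# Toy hybrid weather generator: Markov-chain occurrence + k-NN value resampling.
import numpy as np

rng = np.random.default_rng(0)

# Toy historical record: 1000 days of (precip mm, Tmax C, Tmin C).
hist = np.column_stack([
    np.where(rng.random(1000) < 0.3, rng.gamma(2.0, 4.0, 1000), 0.0),
    rng.normal(28.0, 4.0, 1000),
    rng.normal(18.0, 4.0, 1000),
])

P_WET_GIVEN_DRY, P_WET_GIVEN_WET = 0.25, 0.55   # assumed transition probabilities
K = 10                                          # neighbours to resample from

def generate(n_days: int) -> np.ndarray:
    out = np.empty((n_days, 3))
    wet = False
    prev_tmax = hist[:, 1].mean()
    for d in range(n_days):
        # Markov chain step: today's wet/dry state depends only on yesterday's.
        wet = rng.random() < (P_WET_GIVEN_WET if wet else P_WET_GIVEN_DRY)
        # Candidate historical days with the same wet/dry state.
        pool = hist[(hist[:, 0] > 0) == wet]
        # k nearest neighbours in terms of yesterday's Tmax; pick one at random.
        idx = np.argsort(np.abs(pool[:, 1] - prev_tmax))[:K]
        day = pool[rng.choice(idx)]
        out[d] = day
        prev_tmax = day[1]
    return out

print(generate(30))
```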
APA, Harvard, Vancouver, ISO, and other styles
49

Tolle, Isabella [Verfasser], Nediljko [Akademischer Betreuer] Budisa, Nediljko [Gutachter] Budisa, Thomas [Gutachter] Friedrich, and Zoya [Gutachter] Ignatova. "Towards the creation of synthetic Escherichia coli via Tryptophan and Methionine substitutions / Isabella Tolle ; Gutachter: Nediljko Budisa, Thomas Friedrich, Zoya Ignatova ; Betreuer: Nediljko Budisa." Berlin : Technische Universität Berlin, 2021. http://d-nb.info/1238143121/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Gavazza, Giuseppe. "La synthèse par modèle physique comme outil de formalisation musicale." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAS041.

Full text
Abstract:
Physical model synthesis offers an approach to musical creation alternative to the more usual signal-processing approach. By considering the musical phenomenon as a "unicum" emerging from the interaction between the musician and the instruments at their disposal, it gives phenomenological and sensitive corporeality to creative actions.
By not conceiving of sound material and musical structure as separate entities, it directs the potentialities of the computer and creates an original and fruitful dialectic between the formal (structural) and the perceptual (cognitive). The sphere of action of this PhD concerns the development, formalisation and categorisation of structural models, created by physical modelling, that are useful for musical composition, with the aim of highlighting the musical formalisation function associated with the CORDIS-ANIMA physical model simulation paradigm. The starting point for this work is nearly 20 years of personal use, as a composer, of the GENESIS physical model musical creation software developed by the ACROE-ICA laboratory. This experience has led me, through works both scientific (modelling) and artistic (music composition), to consider this environment not as a synthesis tool but as a complex instrument which makes it possible to create a complete musical composition covering all three usual categories of acoustics and music: micro-formal (timbre, harmony, orchestration), mezzo-formal (rhythm, melody, and the basic sequences/harmonic structures) and macro-formal (the higher-level harmonic structure, the formal outline of the entire composition). My goal is not to propose the framework of a new music or a new aesthetic, but to develop "well-tempered" instruments for a new practice of musical creation that better explores and exploits the potential of the computer and of digital technologies. This also leads towards broadening the dialectic between instrumentality and musical writing into a "supra-instrumentality" [Cadoz6] and towards a "post-scriptic" outlook on musical creation [Cadoz7].
APA, Harvard, Vancouver, ISO, and other styles