Academic literature on the topic 'Data-driven synthesis'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Data-driven synthesis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Data-driven synthesis"

1

Carlson, Rolf, and Björn Granström. "Data-driven multimodal synthesis." Speech Communication 47, no. 1-2 (September 2005): 182–93. http://dx.doi.org/10.1016/j.specom.2005.02.015.

2

Campbell, Nick. "Data‐driven speech synthesis." Journal of the Acoustical Society of America 105, no. 2 (February 1999): 1029–30. http://dx.doi.org/10.1121/1.424923.

3

Taylor, Sam, Doug A. Edwards, Luis A. Plana, and Luis A. Tarazona D. "Asynchronous Data-Driven Circuit Synthesis." IEEE Transactions on Very Large Scale Integration (VLSI) Systems 18, no. 7 (July 2010): 1093–106. http://dx.doi.org/10.1109/tvlsi.2009.2020168.

4

Wang, Nannan, Mingrui Zhu, Jie Li, Bin Song, and Zan Li. "Data-driven vs. model-driven: Fast face sketch synthesis." Neurocomputing 257 (September 2017): 214–21. http://dx.doi.org/10.1016/j.neucom.2016.07.071.

5

Lv, Pei, Mingliang Xu, Bailin Yang, Mingyuan Li, and Bing Zhou. "Data-driven humanlike reaching behaviors synthesis." Neurocomputing 177 (February 2016): 26–32. http://dx.doi.org/10.1016/j.neucom.2015.10.118.

6

Bohg, Jeannette, Antonio Morales, Tamim Asfour, and Danica Kragic. "Data-Driven Grasp Synthesis—A Survey." IEEE Transactions on Robotics 30, no. 2 (April 2014): 289–309. http://dx.doi.org/10.1109/tro.2013.2289018.

7

Ravitz, Orr. "Data-driven computer aided synthesis design." Drug Discovery Today: Technologies 10, no. 3 (September 2013): e443-e449. http://dx.doi.org/10.1016/j.ddtec.2013.01.005.

8

Aghdasi, F. "Controller synthesis using data-driven clocks." Microelectronics Journal 26, no. 5 (July 1995): 449–61. http://dx.doi.org/10.1016/0026-2692(95)98947-p.

9

Horta, N. C., and J. E. Franca. "Algorithm-driven synthesis of data conversion architectures." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 16, no. 10 (1997): 1116–35. http://dx.doi.org/10.1109/43.662675.

10

Macon, Michael W. "Waveform models for data‐driven speech synthesis." Journal of the Acoustical Society of America 105, no. 2 (February 1999): 1031. http://dx.doi.org/10.1121/1.424928.

More sources

Dissertations / Theses on the topic "Data-driven synthesis"

1

Scott, Simon David. "A data-driven approach to visual speech synthesis." Thesis, University of Bath, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307116.

2

Inanoglu, Zeynep. "Data driven parameter generation for emotional speech synthesis." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612250.

3

Lundberg, Anton. "Data-Driven Procedural Audio : Procedural Engine Sounds Using Neural Audio Synthesis." Thesis, KTH, Datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280132.

Abstract:
The currently dominating approach for rendering audio content in interactive media, such as video games and virtual reality, involves playback of static audio files. This approach is inflexible and requires management of large quantities of audio data. An alternative approach is procedural audio, where sound models are used to generate audio in real time from live inputs. While providing many advantages, procedural audio has yet to find widespread use in commercial productions, partly due to the audio produced by many of the proposed models not meeting industry standards. This thesis investigates how procedural audio can be performed using data-driven methods. We do this by specifically investigating how to generate the sound of car engines using neural audio synthesis. Building on a recently published method that integrates digital signal processing with deep learning, called Differentiable Digital Signal Processing (DDSP), our method obtains sound models by training deep neural networks to reconstruct recorded audio examples from interpretable latent features. We propose a method for incorporating engine cycle phase information, as well as a differentiable transient synthesizer. Our results illustrate that DDSP can be used for procedural engine sounds; however, further work is needed before our models can generate engine sounds without undesired artifacts and before they can be used in live real-time applications. We argue that our approach can be useful for procedural audio in more general contexts, and discuss how our method can be applied to other sound sources.
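
The abstract above describes training DDSP-style models to reconstruct engine recordings from interpretable latent features. As a rough illustration of the underlying signal model only (not the thesis's implementation), the following NumPy sketch renders a toy harmonic-plus-noise engine tone whose harmonics track a firing frequency derived from an RPM contour; the sample rate, harmonic roll-off, and four-stroke firing-frequency formula are assumptions made for this example, and the learned, differentiable components are omitted.

```python
# Illustrative sketch (not the thesis's implementation): a harmonic-plus-noise
# engine tone driven by an RPM contour, in the spirit of DDSP-style synthesis.
# The neural network that would predict harmonic amplitudes from latent
# features is replaced here by a fixed 1/k amplitude roll-off.
import numpy as np

SR = 22050          # sample rate in Hz (assumed)
N_HARMONICS = 20    # number of harmonics of the engine firing frequency

def engine_tone(rpm, duration=2.0, cylinders=4):
    """Render a toy engine tone for a (possibly time-varying) RPM value."""
    n = int(SR * duration)
    rpm = np.broadcast_to(np.asarray(rpm, dtype=float), (n,))
    # Firing frequency of a four-stroke engine: each cylinder fires every
    # other revolution, so f0 = rpm/60 * cylinders/2 (an assumption here).
    f0 = rpm / 60.0 * cylinders / 2.0
    phase = 2.0 * np.pi * np.cumsum(f0) / SR          # engine cycle phase
    harmonics = np.arange(1, N_HARMONICS + 1)[:, None]
    amps = 1.0 / harmonics                            # fixed spectral roll-off
    tone = (amps * np.sin(harmonics * phase)).sum(axis=0)
    noise = 0.05 * np.random.randn(n)                 # crude broadband noise
    out = tone + noise
    return out / np.max(np.abs(out))

if __name__ == "__main__":
    # Sweep from idle to 3000 rpm over two seconds.
    rpm_curve = np.linspace(800.0, 3000.0, int(SR * 2.0))
    audio = engine_tone(rpm_curve)
    print(audio.shape, audio.dtype)
```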
4

Hagrot, Joel. "A Data-Driven Approach For Automatic Visual Speech In Swedish Speech Synthesis Applications." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-246393.

Abstract:
This project investigates the use of artificial neural networks for visual speech synthesis. The objective was to produce a framework for animated chat bots in Swedish. A survey of the literature on the topic revealed that the state-of-the-art approach was using ANNs with either audio or phoneme sequences as input. Three subjective surveys were conducted, both in the context of the final product, and in a more neutral context with less post-processing. They compared the ground truth, captured using the depth-sensing camera of the iPhone X, against both the ANN model and a baseline model. The statistical analysis used mixed effects models to find any statistically significant differences. Also, the temporal dynamics and the error were analyzed. The results show that a relatively simple ANN was capable of learning a mapping from phoneme sequences to blend shape weight sequences with satisfactory results, except for the fact that certain consonant requirements were unfulfilled. The issues with certain consonants were also observed in the ground truth, to some extent. Post-processing with consonant-specific overlays made the ANN’s animations indistinguishable from the ground truth and the subjects perceived them as more realistic than the baseline model’s animations. The ANN model proved useful in learning the temporal dynamics and coarticulation effects for vowels, but may have needed more data to properly satisfy the requirements of certain consonants. For the purposes of the intended product, these requirements can be satisfied using consonant-specific overlays.
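
To make the phoneme-to-blend-shape mapping described in this abstract concrete, here is a minimal, hypothetical PyTorch sketch: a small embedding-plus-GRU network that maps a phoneme ID sequence to per-frame blend shape weights and runs one training step on random stand-in data. The phoneme inventory size, number of blend shapes, and architecture are illustrative assumptions, not the thesis's actual model.

```python
# Minimal sketch of the kind of model described above: a recurrent network
# that maps a phoneme ID sequence to per-frame blend shape weights in [0, 1].
# Sizes and architecture are illustrative assumptions, not the thesis's model.
import torch
import torch.nn as nn

N_PHONEMES = 50      # assumed phoneme inventory size
N_BLENDSHAPES = 20   # assumed number of facial blend shapes

class PhonemeToBlendshapes(nn.Module):
    def __init__(self, emb_dim=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(N_PHONEMES, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, N_BLENDSHAPES)

    def forward(self, phoneme_ids):           # (batch, time) int64
        h, _ = self.rnn(self.emb(phoneme_ids))
        return torch.sigmoid(self.out(h))     # (batch, time, N_BLENDSHAPES)

if __name__ == "__main__":
    model = PhonemeToBlendshapes()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # One dummy training step on random data, standing in for captured
    # blend shape targets aligned to phoneme sequences.
    phonemes = torch.randint(0, N_PHONEMES, (8, 40))
    targets = torch.rand(8, 40, N_BLENDSHAPES)
    loss = nn.functional.mse_loss(model(phonemes), targets)
    opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))
```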
5

Deaguero, Andria Lynn. "Improving the enzymatic synthesis of semi-synthetic beta-lactam antibiotics via reaction engineering and data-driven protein engineering." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42727.

Abstract:
Semi-synthetic β-lactam antibiotics are the most prescribed class of antibiotics in the world. Chemical coupling of a β-lactam moiety with an acyl side chain has dominated the industrial production of semi-synthetic β-lactam antibiotics since their discovery in the early 1960s. Enzymatic coupling of a β-lactam moiety with an acyl side chain can be accomplished in a process that is much more environmentally benign but also results in a much lower yield. The goal of the research presented in this dissertation is to improve the enzymatic synthesis of β-lactam antibiotics via reaction engineering, medium engineering and data-driven protein engineering. Reaction engineering was employed to demonstrate that the hydrolysis of penicillin G to produce the β-lactam nucleus 6-aminopenicillanic acid (6-APA), and the synthesis of ampicillin from 6-APA and (R)-phenylglycine methyl ester ((R)-PGME), can be combined in a cascade conversion. In this work, penicillin G acylase (PGA) was utilized to catalyze the hydrolysis step, and PGA and α-amino ester hydrolase (AEH) were both studied to catalyze the synthesis step. Two different reaction configurations and various relative enzyme loadings were studied. Both configurations present a promising alternative to the current two-pot set-up which requires intermittent isolation of the intermediate, 6-APA. Medium engineering is primarily of interest in β-lactam antibiotic synthesis as a means to suppress the undesired primary and secondary hydrolysis reactions. The synthesis of ampicillin from 6-APA and (R)-PGME in the presence of ethylene glycol was chosen for study after a review of the literature. It was discovered that the transesterification product of (R)-PGME and ethylene glycol, (R)-phenylglycine hydroxyethyl ester, is transiently formed during the synthesis reactions. This previously unreported side reaction has the ability to positively affect yield by re-directing a portion of the consumption of (R)-PGME to an intermediate that could be used to synthesize ampicillin, rather than to an unusable hydrolysis product. Protein engineering was utilized to alter the selectivity of wild-type PGA with respect to the substituent on the alpha carbon of its substrates. Four residues were identified that had altered selectivity toward the desired product, (R)-ampicillin. Furthermore, the (R)-selective variants improved the yield from pure (R)-PGME up to 2-fold and significantly decreased the amount of secondary hydrolysis present in the reactions. Overall, we have expanded the applicability of PGA and AEH for the synthesis of semi-synthetic β-lactam antibiotics. We have shown the two enzymes can be combined in a novel one-pot cascade, which has the potential to eliminate an isolation step in the current manufacturing process. Furthermore, we have shown that the previously reported ex-situ mixed donor synthesis of ampicillin for PGA can also occur in-situ in the presence of a suitable side chain acyl donor and co-solvent. Finally, we have made significant progress towards obtaining a selective PGA that is capable of synthesizing diastereomerically pure semi-synthetic β-lactam antibiotics from racemic substrates.
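
As a purely illustrative aid to the one-pot cascade described above (hydrolysis of penicillin G to 6-APA, coupling of 6-APA with (R)-PGME to ampicillin, and the competing primary and secondary hydrolyses), the following toy mass-action simulation uses invented rate constants and simple Euler integration; it is not the dissertation's kinetic model.

```python
# Toy mass-action model of the one-pot cascade described above: penicillin G
# is hydrolysed to 6-APA, which is coupled with (R)-PGME to give ampicillin,
# while PGME hydrolysis and secondary ampicillin hydrolysis compete.
# All rate constants and initial concentrations are invented for illustration.
k_hyd_peng = 0.10   # PenG -> 6-APA            (PGA-catalysed hydrolysis)
k_syn      = 0.05   # 6-APA + PGME -> Amp      (synthesis)
k_hyd_pgme = 0.02   # PGME -> phenylglycine    (primary hydrolysis)
k_hyd_amp  = 0.01   # Amp -> 6-APA + PG        (secondary hydrolysis)

def simulate(t_end=200.0, dt=0.01):
    peng, apa, pgme, amp = 100.0, 0.0, 150.0, 0.0   # arbitrary initial amounts
    for _ in range(int(t_end / dt)):
        r1 = k_hyd_peng * peng
        r2 = k_syn * apa * pgme
        r3 = k_hyd_pgme * pgme
        r4 = k_hyd_amp * amp
        peng += dt * (-r1)
        apa  += dt * (r1 - r2 + r4)
        pgme += dt * (-r2 - r3)
        amp  += dt * (r2 - r4)
    return peng, apa, pgme, amp

if __name__ == "__main__":
    print("final amounts (PenG, 6-APA, PGME, Amp):", simulate())
```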
6

Naert, Lucie. "Capture, annotation and synthesis of motions for the data-driven animation of sign language avatars." Thesis, Lorient, 2020. http://www.theses.fr/2020LORIS561.

Abstract:
This thesis deals with the capture, annotation, synthesis and evaluation of arm and hand motions for the animation of avatars communicating in Sign Languages (SL). Currently, the production and dissemination of SL messages often depend on video recordings which lack depth information and for which editing and analysis are complex issues. Signing avatars constitute a powerful alternative to video. They are generally animated using either procedural or data-driven techniques. Procedural animation often results in robotic and unrealistic motions, but any sign can be precisely produced. With data-driven animation, the avatar's motions are realistic but the variety of the signs that can be synthesized is limited and/or biased by the initial database. As we considered the acceptance of the avatar to be a prime issue, we selected the data-driven approach but, to address its main limitation, we propose to use annotated motions present in an SL Motion Capture database to synthesize novel SL signs and utterances absent from this initial database. To achieve this goal, our first contribution is the design, recording and perceptual evaluation of a French Sign Language (LSF) Motion Capture database composed of signs and utterances performed by deaf LSF teachers. Our second contribution is the development of automatic annotation techniques for different tracks based on the analysis of the kinematic properties of specific joints and existing machine learning algorithms. Our last contribution is the implementation of different motion synthesis techniques based on motion retrieval per phonological component and on the modular reconstruction of new SL content with the additional use of motion generation techniques such as inverse kinematics, parameterized to comply with the properties of real motions.
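
The abstract mentions motion generation techniques such as inverse kinematics for assembling new sign language content. As a minimal, self-contained illustration of that single ingredient (not the thesis's pipeline), the sketch below solves two-link planar inverse kinematics analytically; the link lengths and target position are hypothetical.

```python
# Minimal two-link planar arm inverse kinematics (law-of-cosines solution),
# standing in for the IK component used to reach a hand placement of a sign.
# Link lengths and the target are hypothetical.
import numpy as np

def two_link_ik(target_x, target_y, l1=0.3, l2=0.25):
    """Return (shoulder, elbow) angles in radians reaching (target_x, target_y)."""
    d2 = target_x**2 + target_y**2
    cos_elbow = (d2 - l1**2 - l2**2) / (2.0 * l1 * l2)
    cos_elbow = np.clip(cos_elbow, -1.0, 1.0)        # clamp unreachable targets
    elbow = np.arccos(cos_elbow)
    shoulder = np.arctan2(target_y, target_x) - np.arctan2(
        l2 * np.sin(elbow), l1 + l2 * np.cos(elbow))
    return shoulder, elbow

if __name__ == "__main__":
    s, e = two_link_ik(0.35, 0.20)
    # Forward kinematics check: the wrist should land on the target.
    wrist = (0.3 * np.cos(s) + 0.25 * np.cos(s + e),
             0.3 * np.sin(s) + 0.25 * np.sin(s + e))
    print(np.round(wrist, 3))
```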
7

Želiar, Dušan. "Automatizovaná syntéza stromových struktur z reálných dat." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403196.

Abstract:
This master's thesis deals with the analysis of tree-structured data. The aim of the thesis is to design and implement a tool for automated detection of relations among samples of real data, taking into account their tree structure and node values. The output of the tool is a prescription for the automated synthesis of data for testing purposes. The tool is part of the Testos platform developed at FIT BUT.
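
As a toy illustration of the general idea (infer a "prescription" from example tree-structured data, then synthesize new test data from it), the following sketch records per-path numeric ranges from nested dictionaries and generates new trees within those ranges. It is an assumption-laden stand-in, not the Testos tool described in the thesis.

```python
# Toy sketch: infer a simple "prescription" (per-path numeric ranges) from
# example tree-structured records, then synthesise new test data that
# respects the observed structure and ranges.
import random

def collect_ranges(tree, path="", ranges=None):
    """Walk nested dicts and record min/max of numeric leaves per key path."""
    ranges = {} if ranges is None else ranges
    for key, value in tree.items():
        p = f"{path}/{key}"
        if isinstance(value, dict):
            collect_ranges(value, p, ranges)
        elif isinstance(value, (int, float)):
            lo, hi = ranges.get(p, (value, value))
            ranges[p] = (min(lo, value), max(hi, value))
    return ranges

def synthesise(template, ranges, path=""):
    """Generate a new tree with the template's shape and values in range."""
    out = {}
    for key, value in template.items():
        p = f"{path}/{key}"
        if isinstance(value, dict):
            out[key] = synthesise(value, ranges, p)
        else:
            lo, hi = ranges[p]
            out[key] = random.uniform(lo, hi)
    return out

if __name__ == "__main__":
    samples = [{"sensor": {"temp": 21.5, "rpm": 900}},
               {"sensor": {"temp": 24.0, "rpm": 1200}}]
    prescription = {}
    for s in samples:
        collect_ranges(s, ranges=prescription)
    print(synthesise(samples[0], prescription))
```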
8

Bourdeauducq, Sébastien. "A performance-driven SoC architecture for video synthesis." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-26151.

9

Kersten, Stefan. "Statistical modelling and resynthesis of environmental texture sounds." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/400395.

Abstract:
Environmental texture sounds are an integral, though often overlooked, part of our daily life. They constitute those elements of our sounding environment that we tend to perceive subconsciously but which we miss when they are missing. Those sounds are also increasingly important for adding realism to virtual environments, from immersive artificial worlds through computer games to mobile augmented reality systems. This work spans the spectrum from data-driven stochastic sound synthesis methods to distributed virtual reality environments and their aesthetic and technological implications. We propose a framework for statistically modelling environmental texture sounds in different sparse signal representations. We explore three different instantiations of this framework, two of which constitute a novel way of representing texture sounds in a physically-inspired sparse statistical model and of estimating model parameters from recorded sound examples.
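
For readers unfamiliar with statistical texture resynthesis, the sketch below shows a much simpler baseline than the sparse, physically inspired models proposed in the thesis: it estimates an average magnitude spectrum from a "recording" (here a placeholder noise signal) and resynthesizes frames that share that spectral envelope with random phases. The frame size and sample rate are arbitrary choices for the example.

```python
# Simple baseline for statistical texture resynthesis (not the thesis's sparse
# models): estimate an average magnitude spectrum from a texture signal and
# resynthesise noise frames that share it, using random phases.
import numpy as np

SR, FRAME = 16000, 1024

def average_spectrum(signal):
    """Mean magnitude spectrum over non-overlapping Hann-windowed frames."""
    window = np.hanning(FRAME)
    frames = [signal[i:i + FRAME] * window
              for i in range(0, len(signal) - FRAME, FRAME)]
    return np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)

def resynthesise(mag, n_frames=50):
    """Generate new audio whose frames share the target magnitude spectrum."""
    out = []
    for _ in range(n_frames):
        phase = np.exp(2j * np.pi * np.random.rand(len(mag)))
        out.append(np.fft.irfft(mag * phase, n=FRAME))
    audio = np.concatenate(out)
    return audio / np.max(np.abs(audio))

if __name__ == "__main__":
    recording = np.random.randn(SR * 2)          # placeholder "texture"
    texture = resynthesise(average_spectrum(recording))
    print(texture.shape)
```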
10

Gopalan, Ranganath. "Leakage power driven behavioral synthesis of pipelined asics." [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001064.

More sources

Books on the topic "Data-driven synthesis"

1

Damper, R. I. Data-Driven Techniques in Speech Synthesis. Boston, MA: Springer US, 2001.

2

Damper, Robert I., ed. Data-Driven Techniques in Speech Synthesis. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3413-3.

3

Safonov, Michael George, ed. Safe adaptive control: Data-driven stability analysis and robust synthesis. London: Springer, 2011.

4

Damper, R. I., ed. Data-driven techniques in speech synthesis. Boston: Kluwer Academic Publishers, 2001.

5

Scaletti, Carla. Sonification ≠ Music. Edited by Roger T. Dean and Alex McLean. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780190226992.013.9.

Abstract:
Starting from the observation that symbolic language is not the only channel for human communication, this chapter examines ‘data sonification’, a means of understanding, reasoning about, and communicating meaning that extends beyond that which can be conveyed by symbolic language alone. Data sonification is a mapping from data generated by a model, captured in an experiment, or otherwise gathered through observation to one or more parameters of an audio signal or sound synthesis model for the purpose of better understanding, communicating, or reasoning about the original model, experiment, or system. Although data sonification shares techniques and materials with data-driven music, it is in the interests of the practitioners of both music composition and data sonification to maintain a distinction between the two fields.
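
As a minimal example of the parameter mapping defined in this abstract (data values mapped to parameters of an audio signal), the hypothetical sketch below maps a short data series to the pitch of successive sine tones; the data, frequency range, and tone duration are illustrative.

```python
# Minimal parameter-mapping sonification: each data value is mapped linearly
# to the pitch of a short sine tone. Data and ranges are hypothetical.
import numpy as np

SR = 22050

def sonify(values, f_lo=220.0, f_hi=880.0, seconds_per_value=0.25):
    """Map each value in `values` linearly to a pitch between f_lo and f_hi."""
    values = np.asarray(values, dtype=float)
    norm = (values - values.min()) / (values.ptp() or 1.0)
    tones = []
    for v in norm:
        freq = f_lo + v * (f_hi - f_lo)
        t = np.arange(int(SR * seconds_per_value)) / SR
        tones.append(np.sin(2 * np.pi * freq * t) * np.hanning(t.size))
    return np.concatenate(tones)

if __name__ == "__main__":
    data = [3.1, 3.4, 2.8, 4.0, 5.2, 4.9]        # e.g. daily measurements
    audio = sonify(data)
    print(audio.shape)
```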
6

Reynolds, Paul. The Supply Networks of the Roman East and West. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198790662.003.0012.

Abstract:
This chapter provides a synthesis, on a scale that has not been attempted before, across both the eastern and western Mediterranean, of the picture provided by ceramic data, using amphorae, finewares, and cookwares, for long-distance trade from the second to seventh centuries AD. The chapter examines the degree to which exchange in different products spanned the entire Mediterranean, or only particular basins within it, at different periods, and traces the evolution of regional exchange networks. It examines the impact of state-driven supply both for imperial Rome, and for the military annona in the early Byzantine period, on wider private trade circuits.
7

Huffaker, Ray, Marco Bittelli, and Rodolfo Rosa. Empirically Detecting Causality. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198782933.003.0008.

Abstract:
Phenomenological models mathematically describe relationships among empirically observed phenomena without attempting to explain underlying mechanisms. Within the context of NLTS, phenomenological modeling goes beyond phase space reconstruction to extract equations governing real-world system dynamics from a single or multiple observed time series. Phenomenological models provide several benefits. They can be used to characterize the dynamics of variable interactions; for example, whether an incremental increase in one variable drives a marginal increase/decrease in the growth rate of another, and whether these dynamic interactions follow systematic patterns over time. They provide an analytical framework for data driven science still searching for credible theoretical explanation. They set a descriptive standard for how the real world operates so that theory is not misdirected in explaining fanciful behavior. The success of phenomenological modeling depends critically on selection of governing parameters. Model dimensionality, and the time delays used to synthesize dynamic variables, are guided by statistical tests run for phase space reconstruction. Other regression and numerical integration parameters can be set on a trial and error basis within ranges providing numerical stability and successful reproduction of empirically-detected dynamics. We illustrate phenomenological modeling with solutions of the Lorenz model so that we can recognize the dynamics that need to be reproduced.
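
The abstract above describes extracting governing equations from observed time series and illustrating the approach with the Lorenz model. The sketch below gives one simple, assumption-laden version of that workflow: simulate the Lorenz system, then regress numerically estimated derivatives onto a small polynomial library by least squares (a SINDy-like scheme). The integration step and library terms are choices made for this example, not the book's exact procedure.

```python
# Sketch: simulate the Lorenz system, then recover its governing equations
# from the time series alone by regressing numerical derivatives onto a small
# polynomial library. Step size and library are illustrative choices.
import numpy as np

def lorenz_series(n=20000, dt=0.001, sigma=10.0, rho=28.0, beta=8.0/3.0):
    xyz = np.empty((n, 3))
    xyz[0] = (1.0, 1.0, 1.0)
    for i in range(n - 1):
        x, y, z = xyz[i]
        xyz[i + 1] = xyz[i] + dt * np.array(
            [sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return xyz, dt

def fit_polynomial_model(xyz, dt):
    """Least-squares fit of d(x,y,z)/dt onto [1, x, y, z, xy, xz, yz]."""
    deriv = np.gradient(xyz, dt, axis=0)
    x, y, z = xyz.T
    library = np.column_stack([np.ones_like(x), x, y, z, x*y, x*z, y*z])
    coeffs, *_ = np.linalg.lstsq(library, deriv, rcond=None)
    return coeffs   # shape (7, 3): one column of coefficients per equation

if __name__ == "__main__":
    data, dt = lorenz_series()
    print(np.round(fit_polynomial_model(data, dt), 2))
```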

Book chapters on the topic "Data-driven synthesis"

1

Komura, Taku, Ikhsanul Habibie, Jonathan Schwarz, and Daniel Holden. "Data-Driven Character Animation Synthesis." In Handbook of Human Motion, 1–29. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30808-1_10-1.

2

Jörg, Sophie. "Data-Driven Hand Animation Synthesis." In Handbook of Human Motion, 1–13. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30808-1_13-1.

3

Komura, Taku, Ikhsanul Habibie, Jonathan Schwarz, and Daniel Holden. "Data-Driven Character Animation Synthesis." In Handbook of Human Motion, 2003–31. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_10.

4

Jörg, Sophie. "Data-Driven Hand Animation Synthesis." In Handbook of Human Motion, 2079–91. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_13.

5

Damper, Robert I. "Learning About Speech from Data: Beyond NETtalk." In Data-Driven Techniques in Speech Synthesis, 1–25. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3413-3_1.

6

Coleman, John, and Andrew Slater. "Estimation of Parameters for the Klatt Synthesizer from a Speech Database." In Data-Driven Techniques in Speech Synthesis, 215–38. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3413-3_10.

7

Hirschberg, Julia. "Training Accent and Phrasing Assignment on Large Corpora." In Data-Driven Techniques in Speech Synthesis, 239–73. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3413-3_11.

8

Cohen, Andrew D. "Learnable Phonetic Representations in a Connectionist TTS System — II: Phonetics to Speech." In Data-Driven Techniques in Speech Synthesis, 275–82. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3413-3_12.

9

Bakiri, Ghulum, and Thomas G. Dietterich. "Constructing High-Accuracy Letter-to-Phoneme Rules with Machine Learning." In Data-Driven Techniques in Speech Synthesis, 27–44. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3413-3_2.

10

Sullivan, Kirk P. H. "Analogy, the Corpus and Pronunciation." In Data-Driven Techniques in Speech Synthesis, 45–70. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3413-3_3.


Conference papers on the topic "Data-driven synthesis"

1

Yessenov, Kuat, Zhilei Xu, and Armando Solar-Lezama. "Data-driven synthesis for object-oriented frameworks." In the 2011 ACM international conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2048066.2048075.

2

Zhang, Yu, and Shuhong Xu. "Data-Driven Feature-Based 3D Face Synthesis." In Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007). IEEE, 2007. http://dx.doi.org/10.1109/3dim.2007.17.

3

Mabrok, Mohamed A., and Ian R. Petersen. "Data driven controller synthesis for negative imaginary systems." In 2015 10th Asian Control Conference (ASCC). IEEE, 2015. http://dx.doi.org/10.1109/ascc.2015.7244481.

4

Keel, L. H., and S. P. Bhattacharyya. "Data driven synthesis of three term digital controllers." In 2006 American Control Conference. IEEE, 2006. http://dx.doi.org/10.1109/acc.2006.1655365.

5

Mahmudi, Mentar, and Marcelo Kallmann. "Multi-modal data-driven motion planning and synthesis." In MIG '15: Motion in Games. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2822013.2822044.

6

Essa, Irfan. "Data-driven and Procedural Analysis and Synthesis of Multimedia." In Eighth International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS '07). IEEE, 2007. http://dx.doi.org/10.1109/wiamis.2007.32.

7

Kastner, R., Wenrui Gong, Xin Hao, F. Brewer, A. Kaplan, P. Brisk, and M. Sarrafzadeh. "Layout Driven Data Communication Optimization for High Level Synthesis." In 2006 Design, Automation and Test in Europe. IEEE, 2006. http://dx.doi.org/10.1109/date.2006.244021.

8

Tan, Charlie Irawan, Hung-Wei Hsu, Wen-Kai Tai, Chin-Chen Chang, and Der-Lor Way. "A Data-Driven Path Synthesis Framework for Racing Games." In 2013 Seventh International Conference on Image and Graphics (ICIG). IEEE, 2013. http://dx.doi.org/10.1109/icig.2013.138.

9

Wang, Jingbo, Chungha Sung, Mukund Raghothaman, and Chao Wang. "Data-Driven Synthesis of Provably Sound Side Channel Analyses." In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). IEEE, 2021. http://dx.doi.org/10.1109/icse43902.2021.00079.

10

Heertjes, Marcel, Bram Hunnekens, Nathan van de Wouw, and Henk Nijmeijer. "Learning in the synthesis of data-driven variable-gain controllers." In 2013 American Control Conference (ACC). IEEE, 2013. http://dx.doi.org/10.1109/acc.2013.6580889.
