Academic literature on the topic 'Data-driven synthesis'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data-driven synthesis.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Data-driven synthesis"
Carlson, Rolf, and Björn Granström. "Data-driven multimodal synthesis." Speech Communication 47, no. 1-2 (September 2005): 182–93. http://dx.doi.org/10.1016/j.specom.2005.02.015.
Campbell, Nick. "Data-driven speech synthesis." Journal of the Acoustical Society of America 105, no. 2 (February 1999): 1029–30. http://dx.doi.org/10.1121/1.424923.
Taylor, Sam, Doug A. Edwards, Luis A. Plana, and Luis A. Tarazona D. "Asynchronous Data-Driven Circuit Synthesis." IEEE Transactions on Very Large Scale Integration (VLSI) Systems 18, no. 7 (July 2010): 1093–106. http://dx.doi.org/10.1109/tvlsi.2009.2020168.
Wang, Nannan, Mingrui Zhu, Jie Li, Bin Song, and Zan Li. "Data-driven vs. model-driven: Fast face sketch synthesis." Neurocomputing 257 (September 2017): 214–21. http://dx.doi.org/10.1016/j.neucom.2016.07.071.
Lv, Pei, Mingliang Xu, Bailin Yang, Mingyuan Li, and Bing Zhou. "Data-driven humanlike reaching behaviors synthesis." Neurocomputing 177 (February 2016): 26–32. http://dx.doi.org/10.1016/j.neucom.2015.10.118.
Bohg, Jeannette, Antonio Morales, Tamim Asfour, and Danica Kragic. "Data-Driven Grasp Synthesis—A Survey." IEEE Transactions on Robotics 30, no. 2 (April 2014): 289–309. http://dx.doi.org/10.1109/tro.2013.2289018.
Ravitz, Orr. "Data-driven computer aided synthesis design." Drug Discovery Today: Technologies 10, no. 3 (September 2013): e443–e449. http://dx.doi.org/10.1016/j.ddtec.2013.01.005.
Aghdasi, F. "Controller synthesis using data-driven clocks." Microelectronics Journal 26, no. 5 (July 1995): 449–61. http://dx.doi.org/10.1016/0026-2692(95)98947-p.
Horta, N. C., and J. E. Franca. "Algorithm-driven synthesis of data conversion architectures." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 16, no. 10 (1997): 1116–35. http://dx.doi.org/10.1109/43.662675.
Macon, Michael W. "Waveform models for data-driven speech synthesis." Journal of the Acoustical Society of America 105, no. 2 (February 1999): 1031. http://dx.doi.org/10.1121/1.424928.
Full textDissertations / Theses on the topic "Data-driven synthesis"
Scott, Simon David. "A data-driven approach to visual speech synthesis." Thesis, University of Bath, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307116.
Inanoglu, Zeynep. "Data driven parameter generation for emotional speech synthesis." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612250.
Lundberg, Anton. "Data-Driven Procedural Audio: Procedural Engine Sounds Using Neural Audio Synthesis." Thesis, KTH, Datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280132.
Full textDet i dagsläget dominerande tillvägagångssättet för rendering av ljud i interaktivamedia, såsom datorspel och virtual reality, innefattar uppspelning av statiska ljudfiler. Detta tillvägagångssätt saknar flexibilitet och kräver hantering av stora mängder ljuddata. Ett alternativt tillvägagångssätt är procedurellt ljud, vari ljudmodeller styrs för att generera ljud i realtid. Trots sina många fördelar används procedurellt ljud ännu inte i någon vid utsträckning inom kommersiella produktioner, delvis på grund av att det genererade ljudet från många föreslagna modeller inte når upp till industrins standarder. Detta examensarbete undersöker hur procedurellt ljud kan utföras med datadrivna metoder. Vi gör detta genom att specifikt undersöka metoder för syntes av bilmotorljud baserade på neural ljudsyntes. Genom att bygga på en nyligen publicerad metod som integrerar digital signalbehandling med djupinlärning, kallad Differentiable Digital Signal Processing (DDSP), kan vår metod skapa ljudmodeller genom att träna djupa neurala nätverk att rekonstruera inspelade ljudexempel från tolkningsbara latenta prediktorer. Vi föreslår en metod för att använda fasinformation från motorers förbränningscykler, samt en differentierbar metod för syntes av transienter. Våra resultat visar att DDSP kan användas till procedurella motorljud, men mer arbete krävs innan våra modeller kan generera motorljud utan oönskade artefakter samt innan de kan användas i realtidsapplikationer. Vi diskuterar hur vårt tillvägagångssätt kan vara användbart inom procedurellt ljud i mer generella sammanhang, samt hur vår metod kan tillämpas på andra ljudkällor
Hagrot, Joel. "A Data-Driven Approach For Automatic Visual Speech In Swedish Speech Synthesis Applications." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-246393.
This project investigates how artificial neural networks can be used for visual speech synthesis. The purpose was to develop a framework for animated chatbots in Swedish. A review of the literature found that the state-of-the-art approach was to use artificial neural networks with either audio or phoneme sequences as input. Three surveys were conducted, both in the context of the final product and in a more neutral context with less post-processing. They compared the ground truth, recorded with the iPhone X's depth-sensing camera, with both the neural network model and a simple baseline model. The statistical analysis used mixed effects models to find statistically significant differences in the results. The temporal dynamics were also analyzed. The results show that a relatively simple neural network could learn to generate blendshape sequences from phoneme sequences with satisfactory results, except that requirements such as lip closure for certain consonants were not always met. The problems with consonants could also, to some extent, be seen in the ground truth. This could be solved with consonant-specific post-processing, which made the neural network's animations indistinguishable from the ground truth while also being perceived as better than the baseline model's animations. In summary, the neural network learned vowels well, but would probably have needed more data to satisfactorily meet the requirements for certain consonants. For the final product, these requirements can nevertheless be met with the help of consonant-specific post-processing.
Deaguero, Andria Lynn. "Improving the enzymatic synthesis of semi-synthetic beta-lactam antibiotics via reaction engineering and data-driven protein engineering." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42727.
Naert, Lucie. "Capture, annotation and synthesis of motions for the data-driven animation of sign language avatars." Thesis, Lorient, 2020. http://www.theses.fr/2020LORIS561.
This thesis deals with the capture, annotation, synthesis and evaluation of arm and hand motions for the animation of avatars communicating in Sign Languages (SL). Currently, the production and dissemination of SL messages often depend on video recordings, which lack depth information and for which editing and analysis are complex issues. Signing avatars constitute a powerful alternative to video. They are generally animated using either procedural or data-driven techniques. Procedural animation often results in robotic and unrealistic motions, but any sign can be precisely produced. With data-driven animation, the avatar's motions are realistic, but the variety of the signs that can be synthesized is limited and/or biased by the initial database. As we considered the acceptance of the avatar to be a prime issue, we selected the data-driven approach but, to address its main limitation, we propose to use annotated motions present in an SL motion capture database to synthesize novel SL signs and utterances absent from this initial database. To achieve this goal, our first contribution is the design, recording and perceptual evaluation of a French Sign Language (LSF) motion capture database composed of signs and utterances performed by deaf LSF teachers. Our second contribution is the development of automatic annotation techniques for different tracks, based on the analysis of the kinematic properties of specific joints and on existing machine learning algorithms. Our last contribution is the implementation of different motion synthesis techniques based on motion retrieval per phonological component and on the modular reconstruction of new SL content, with the additional use of motion generation techniques such as inverse kinematics, parameterized to comply with the properties of real motions.
Želiar, Dušan. "Automatizovaná syntéza stromových struktur z reálných dat" [Automated synthesis of tree structures from real data]. Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403196.
Full textBOURDEAUDUCQ, SÉBASTIEN. "A performance-driven SoC architecture for video synthesis." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-26151.
Full textKersten, Stefan. "Statistical modelling and resynthesis of environmental texture sounds." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/400395.
Environmental texture sounds are an integral part of our daily lives, even though they often go unnoticed. They are those elements of our sonic environment that we usually perceive subconsciously but miss when they disappear. These sounds are also increasingly important for adding realism to virtual environments, from immersive artificial worlds and mobile augmented reality systems to computer games. This work spans the spectrum from data-driven stochastic sound synthesis methods to distributed virtual reality environments, as well as their aesthetic and technological implications. We propose a framework for statistically modelling environmental texture sounds in different sparse signal representations. We explore three different instantiations of this framework, two of which constitute a novel way of representing texture sounds in a physically inspired statistical model and of estimating model parameters from recorded sound examples.
Gopalan, Ranganath. "Leakage power driven behavioral synthesis of pipelined ASICs." [Tampa, Fla.]: University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001064.
Books on the topic "Data-driven synthesis"
Damper, R. I. Data-Driven Techniques in Speech Synthesis. Boston, MA: Springer US, 2001.
Damper, Robert I., ed. Data-Driven Techniques in Speech Synthesis. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3413-3.
Safonov, Michael George, ed. Safe adaptive control: Data-driven stability analysis and robust synthesis. London: Springer, 2011.
Damper, R. I., ed. Data-driven techniques in speech synthesis. Boston: Kluwer Academic Publishers, 2001.
Scaletti, Carla. Sonification ≠ Music. Edited by Roger T. Dean and Alex McLean. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780190226992.013.9.
Reynolds, Paul. The Supply Networks of the Roman East and West. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198790662.003.0012.
Huffaker, Ray, Marco Bittelli, and Rodolfo Rosa. Empirically Detecting Causality. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198782933.003.0008.
Book chapters on the topic "Data-driven synthesis"
Komura, Taku, Ikhsanul Habibie, Jonathan Schwarz, and Daniel Holden. "Data-Driven Character Animation Synthesis." In Handbook of Human Motion, 1–29. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30808-1_10-1.
Jörg, Sophie. "Data-Driven Hand Animation Synthesis." In Handbook of Human Motion, 1–13. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30808-1_13-1.
Komura, Taku, Ikhsanul Habibie, Jonathan Schwarz, and Daniel Holden. "Data-Driven Character Animation Synthesis." In Handbook of Human Motion, 2003–31. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_10.
Jörg, Sophie. "Data-Driven Hand Animation Synthesis." In Handbook of Human Motion, 2079–91. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_13.
Damper, Robert I. "Learning About Speech from Data: Beyond NETtalk." In Data-Driven Techniques in Speech Synthesis, 1–25. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3413-3_1.
Coleman, John, and Andrew Slater. "Estimation of Parameters for the Klatt Synthesizer from a Speech Database." In Data-Driven Techniques in Speech Synthesis, 215–38. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3413-3_10.
Hirschberg, Julia. "Training Accent and Phrasing Assignment on Large Corpora." In Data-Driven Techniques in Speech Synthesis, 239–73. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3413-3_11.
Cohen, Andrew D. "Learnable Phonetic Representations in a Connectionist TTS System — II: Phonetics to Speech." In Data-Driven Techniques in Speech Synthesis, 275–82. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3413-3_12.
Bakiri, Ghulum, and Thomas G. Dietterich. "Constructing High-Accuracy Letter-to-Phoneme Rules with Machine Learning." In Data-Driven Techniques in Speech Synthesis, 27–44. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3413-3_2.
Sullivan, Kirk P. H. "Analogy, the Corpus and Pronunciation." In Data-Driven Techniques in Speech Synthesis, 45–70. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3413-3_3.
Conference papers on the topic "Data-driven synthesis"
Yessenov, Kuat, Zhilei Xu, and Armando Solar-Lezama. "Data-driven synthesis for object-oriented frameworks." In Proceedings of the 2011 ACM International Conference on Object Oriented Programming Systems Languages and Applications (OOPSLA '11). New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2048066.2048075.
Zhang, Yu, and Shuhong Xu. "Data-Driven Feature-Based 3D Face Synthesis." In Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007). IEEE, 2007. http://dx.doi.org/10.1109/3dim.2007.17.
Mabrok, Mohamed A., and Ian R. Petersen. "Data driven controller synthesis for negative imaginary systems." In 2015 10th Asian Control Conference (ASCC). IEEE, 2015. http://dx.doi.org/10.1109/ascc.2015.7244481.
Keel, L. H., and S. P. Bhattacharyya. "Data driven synthesis of three term digital controllers." In 2006 American Control Conference. IEEE, 2006. http://dx.doi.org/10.1109/acc.2006.1655365.
Mahmudi, Mentar, and Marcelo Kallmann. "Multi-modal data-driven motion planning and synthesis." In MIG '15: Motion in Games. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2822013.2822044.
Essa, Irfan. "Data-driven and Procedural Analysis and Synthesis of Multimedia." In Eighth International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS '07). IEEE, 2007. http://dx.doi.org/10.1109/wiamis.2007.32.
Kastner, R., Wenrui Gong, Xin Hao, F. Brewer, A. Kaplan, P. Brisk, and M. Sarrafzadeh. "Layout Driven Data Communication Optimization for High Level Synthesis." In 2006 Design, Automation and Test in Europe. IEEE, 2006. http://dx.doi.org/10.1109/date.2006.244021.
Tan, Charlie Irawan, Hung-Wei Hsu, Wen-Kai Tai, Chin-Chen Chang, and Der-Lor Way. "A Data-Driven Path Synthesis Framework for Racing Games." In 2013 Seventh International Conference on Image and Graphics (ICIG). IEEE, 2013. http://dx.doi.org/10.1109/icig.2013.138.
Wang, Jingbo, Chungha Sung, Mukund Raghothaman, and Chao Wang. "Data-Driven Synthesis of Provably Sound Side Channel Analyses." In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). IEEE, 2021. http://dx.doi.org/10.1109/icse43902.2021.00079.
Heertjes, Marcel, Bram Hunnekens, Nathan van de Wouw, and Henk Nijmeijer. "Learning in the synthesis of data-driven variable-gain controllers." In 2013 American Control Conference (ACC). IEEE, 2013. http://dx.doi.org/10.1109/acc.2013.6580889.