Selected scholarly literature on the topic "Audiovisual synthesis"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Audiovisual synthesis".
Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online, if it is available in the metadata.
Journal articles on the topic "Audiovisual synthesis"
Lokki, Tapio, Jarmo Hiipakka, Rami Hänninen, Tommi Ilmonen, Lauri Savioja and Tapio Takala. "Realtime audiovisual rendering and contemporary audiovisual art". Organised Sound 3, no. 3 (December 1998): 219–33. http://dx.doi.org/10.1017/s1355771898003069.
Pueo Ortega, Basilio, and Victoria Tur Viñes. "Sonido espacial para una inmersión audiovisual de alto realismo". Revista ICONO14 Revista científica de Comunicación y Tecnologías emergentes 7, no. 2 (July 1, 2009): 334–45. http://dx.doi.org/10.7195/ri14.v7i2.330.
Adaikhanovna, Utemgaliyeva Nassikhat, Bektemirova Saule Bekmukhamedovna, Odanova Sagira Amangeldiyevna, William P. Rivers and Akimisheva Zhanar Abdisadykkyzy. "Texts with academic terms". XLinguae 15, no. 2 (April 2022): 121–29. http://dx.doi.org/10.18355/xl.2022.15.02.09.
Richards, Michael D., Herbert C. Goltz and Agnes M. F. Wong. "Audiovisual perception in amblyopia: A review and synthesis". Experimental Eye Research 183 (June 2019): 68–75. http://dx.doi.org/10.1016/j.exer.2018.04.017.
Dufour, Frank, and Lee Dufour. "DreamArchitectonics: An Interactive Audiovisual Installation". Leonardo 51, no. 2 (April 2018): 105–10. http://dx.doi.org/10.1162/leon_a_01188.
Li, Yuanqing, Fangyi Wang, Yongbin Chen, Andrzej Cichocki and Terrence Sejnowski. "The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study". Cerebral Cortex 28, no. 10 (September 25, 2017): 3623–37. http://dx.doi.org/10.1093/cercor/bhx235.
Doel, Kees van den, Dave Knott and Dinesh K. Pai. "Interactive Simulation of Complex Audiovisual Scenes". Presence: Teleoperators and Virtual Environments 13, no. 1 (February 2004): 99–111. http://dx.doi.org/10.1162/105474604774048252.
Pecheranskyi, Ihor. "Brief Technical History and Audiovisual Parameters of Electromechanical Television". Bulletin of Kyiv National University of Culture and Arts. Series in Audiovisual Art and Production 6, no. 2 (October 20, 2023): 263–76. http://dx.doi.org/10.31866/2617-2674.6.2.2023.289313.
Schabus, Dietmar, Michael Pucher and Gregor Hofer. "Joint Audiovisual Hidden Semi-Markov Model-Based Speech Synthesis". IEEE Journal of Selected Topics in Signal Processing 8, no. 2 (April 2014): 336–47. http://dx.doi.org/10.1109/jstsp.2013.2281036.
Zozulia, I., A. Stadnii and A. Slobodianiuk. "Audiovisual teaching aids in the formation process of foreign language communicative competence". Teaching languages at higher institutions, no. 40 (May 30, 2022): 12–28. http://dx.doi.org/10.26565/2073-4379-2022-40-01.
Theses / dissertations on the topic "Audiovisual synthesis"
Mital, Parag Kumar. "Audiovisual scene synthesis". Thesis, Goldsmiths College (University of London), 2014. http://research.gold.ac.uk/10662/.
Thomas, Zach (Zachary R.). "Audiovisual Concatenative Synthesis and "Replica"". Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1538747/.
Melenchón Maldonado, Javier. "Síntesis Audiovisual Realista Personalizable". Doctoral thesis, Universitat Ramon Llull, 2007. http://hdl.handle.net/10803/9133.
Texto completo da fonteSe presenta un esquema único para la síntesis y análisis audiovisual personalizable realista de secuencias audiovisuales de caras parlantes y secuencias visuales de lengua de signos en entorno doméstico. En el primer caso, con animación totalmente sincronizada a través de una fuente de texto o voz; en el segundo, utilizando la técnica de deletreo de palabras mediante la mano. Sus posibilidades de personalización facilitan la creación de secuencias audiovisuales por parte de usuarios no expertos. Las aplicaciones posibles de este esquema de síntesis comprenden desde la creación de personajes virtuales realistas para interacción natural o vídeo juegos hasta vídeo conferencia de muy bajo ancho de banda y telefonía visual para las personas con problemas de oído, pasando por ofrecer ayuda en la pronunciación y la comunicación a este mismo colectivo. El sistema permite procesar secuencias largas con un consumo de recursos muy reducido gracias al desarrollo de un nuevo procedimiento de cálculo incremental para la descomposición en valores singulares con actualización de la información media.
A shared framework is presented for realistic, personalizable audiovisual synthesis and analysis of audiovisual sequences of talking heads and visual sequences of sign language in a domestic environment. The former offers fully synchronized animation driven by a text or auditory source of information; the latter consists of fingerspelling. The personalization capabilities ease the creation of audiovisual sequences by non-expert users. The applications range from realistic virtual avatars for natural interaction or video games to low-bandwidth videoconferencing and visual telephony for the hard of hearing, including help for speech therapists. Long sequences can be processed with reduced resources, especially storage. This is made possible by the proposed scheme for incremental singular value decomposition with mean preservation, which is complemented with three others: the decremental, the split, and the composed schemes.
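The incremental singular value decomposition this abstract relies on can be sketched in a few lines. The sketch below is an illustrative NumPy implementation of the standard rank-one column update (in the style of Brand's method), not the thesis's actual code, and it omits the mean-preservation step the thesis adds:

```python
import numpy as np

def incremental_svd_update(U, S, c, tol=1e-10):
    """Update a thin SVD X ~ U @ diag(S) @ Vt when a new column c arrives.

    Returns only the updated left singular vectors and singular values,
    which is what an appearance model needs for compression.
    """
    p = U.T @ c                      # projection onto the current subspace
    r = c - U @ p                    # residual orthogonal to the subspace
    r_norm = np.linalg.norm(r)
    k = S.size
    # Small (k+1) x (k+1) update matrix whose SVD gives the new factors
    K = np.zeros((k + 1, k + 1))
    K[:k, :k] = np.diag(S)
    K[:k, k] = p
    K[k, k] = r_norm
    Uk, Sk, _ = np.linalg.svd(K)
    if r_norm > tol:
        U_ext = np.hstack([U, (r / r_norm)[:, None]])
    else:  # new column already lies in the subspace
        U_ext = np.hstack([U, np.zeros((U.shape[0], 1))])
    return U_ext @ Uk, Sk
```

Because only the small matrix `K` is decomposed at each step, long video sequences can be folded in one frame at a time without ever storing the full data matrix, which is the resource saving the abstract describes.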
Mohamadi, Tayeb. "Synthèse à partir du texte de visages parlants : réalisation d'un prototype et mesures d'intelligibilité bimodale". Grenoble INPG, 1993. http://www.theses.fr/1993INPG0010.
Le Goff, Bertrand. "Synthèse à partir du texte de visage 3D parlant français". Grenoble INPG, 1997. http://www.theses.fr/1997INPG0140.
Boutet de Monvel, Violaine. "Du feedback vidéo à l'IA générative : sur la récursivité dans les arts et médias". Electronic Thesis or Diss., Paris 3, 2025. http://www.theses.fr/2025PA030009.
This thesis builds, through the prism of feedback, a bridge between pioneering video art from the 1960s to the 1980s and the practices associated with generative AI, which the phenomenal advances in deep learning have precipitated since the mid-2010s. Retroaction, in cybernetics, refers to the self-regulation of natural and technological systems by the loop. Applied to closed-circuit analog, digital, or hybrid setups, this automated process also qualifies the contingent effects that result from it on screen. The first section looks back at the colossal influence that information theory and the notion of noise have had on the genesis of the video genre since the advent of the medium in 1965. It revolves around the narcissistic paradigm (Rosalind Krauss, 1976) that essentialized its canons until the late 1970s, analyzing the central place occupied by human perception and its televisual prosthetic extension. The second section focuses on the concurrent exploration of so-called machine vision, in dialogue with the tools (Steina and Woody Vasulka, 1976). Building upon the technocratic reversal of aesthetics then inherent to real-time image processing, a transition is made from audiovisual synthesis to its cinematic and artificial counterparts. The third section contemplates creation with generative AI models developed since the introduction of GANs in 2014. Questioning the redistribution of agency in networks, it ultimately considers the recursive genealogy of the arts and media, as well as the conditions for a sensitive algorithmic culture between signal and data.
Dahmani, Sara. "Synthèse audiovisuelle de la parole expressive : modélisation des émotions par apprentissage profond". Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0137.
The work of this thesis concerns the modeling of emotions for expressive audiovisual text-to-speech synthesis. Today, the results of text-to-speech synthesis systems are of good quality; however, audiovisual synthesis remains an open issue, and expressive synthesis is even less studied. As part of this thesis, we present an emotion modeling method that is malleable and flexible and allows us to mix emotions as we mix shades on a palette of colors. In the first part, we present and study two expressive corpora that we have built. The recording strategy and the expressive content of these corpora are analyzed to validate their use for the purpose of audiovisual speech synthesis. In the second part, we present two neural architectures for speech synthesis. We used these two architectures to model three aspects of speech: 1) the duration of sounds, 2) the acoustic modality, and 3) the visual modality. First, we use a fully connected architecture. This architecture allowed us to study the behavior of neural networks when dealing with different contextual and linguistic descriptors. We were also able to analyze, with objective measures, the network's ability to model emotions. The second neural architecture proposed is a variational auto-encoder. This architecture is able to learn a latent representation of emotions without using emotion labels. After analyzing the latent space of emotions, we present a procedure for structuring it in order to move from a discrete representation of emotions to a continuous one. We were able to validate, through perceptual experiments, the ability of our system to generate emotions, nuances of emotions, and mixtures of emotions for expressive audiovisual text-to-speech synthesis.
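The "palette" idea described above, moving from discrete emotion categories to a continuous space, reduces to simple arithmetic once each emotion has a latent vector: a blend is a convex combination of those vectors. The function below is a purely hypothetical sketch of that mixing step (the thesis's actual latent space comes from a trained variational auto-encoder), shown only to make the arithmetic concrete:

```python
import numpy as np

def mix_emotions(latents, weights):
    """Blend per-emotion latent vectors with normalized weights.

    `latents` is an (n_emotions, latent_dim) array of emotion embeddings;
    `weights` is a length-n_emotions sequence of non-negative coefficients.
    """
    latents = np.asarray(latents, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                  # normalize to a convex combination
    return w @ latents               # weighted average in latent space

# e.g. a 50/50 blend of toy "joy" and "sadness" centroids
blend = mix_emotions([[1.0, 0.0], [0.0, 1.0]], [1, 1])
```

Feeding such interpolated vectors to the decoder is what allows nuances and mixtures of emotions, rather than only the discrete categories seen in training.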
Majerová, Radka. "Lingvistika ve speciální pedagogice". Doctoral thesis, 2016. http://www.nusl.cz/ntk/nusl-353603.
Books on the topic "Audiovisual synthesis"
Statistics on Selected Service Sectors in the EU: A Synthesis of Quantitative Results of Pilot Surveys on Audiovisual Services, Hotels and Travel Agencies and Transport. European Communities / Union (EUR-OP/OOPEC/OPOCE), 1997.
Pitozzi, Enrico. Body Soundscape. Edited by Yael Kaduri. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780199841547.013.43.
Book chapters on the topic "Audiovisual synthesis"
Aller, Sven, and Mark Fishel. "Adapting Audiovisual Speech Synthesis to Estonian". In Lecture Notes in Computer Science, 13–23. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-70566-3_2.
Sevillano, Xavier, Javier Melenchón, Germán Cobo, Joan Claudi Socoró and Francesc Alías. "Audiovisual Analysis and Synthesis for Multimodal Human-Computer Interfaces". In Engineering the User Interface, 1–16. London: Springer London, 2008. http://dx.doi.org/10.1007/978-1-84800-136-7_13.
Luerssen, Martin, Trent Lewis and David Powers. "Head X: Customizable Audiovisual Synthesis for a Multi-purpose Virtual Head". In AI 2010: Advances in Artificial Intelligence, 486–95. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17432-2_49.
Almeida, Nuno, Diogo Cunha, Samuel Silva and António Teixeira. "Designing and Deploying an Interaction Modality for Articulatory-Based Audiovisual Speech Synthesis". In Speech and Computer, 36–49. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87802-3_4.
Texto completo da fonteMeister Einar, Fagel Sascha e Metsvahi Rainer. "Towards Audiovisual TTS in Estonian". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2012. https://doi.org/10.3233/978-1-61499-133-5-138.
Texto completo da fonteMeister Einar, Metsvahi Rainer e Fagel Sascha. "Evaluation of the Estonian Audiovisual Speech Synthesis". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2014. https://doi.org/10.3233/978-1-61499-442-8-11.
Rojas Parra, Rosa Mariana, Ana María Salazar Montes and Lina María Rodríguez Granada. "Análisis de contenido audiovisual en el ejercicio físico de las personas adultas mayores. Revisión sistemática". In Psicología de la actividad física y el deporte. Formación y aplicación en Colombia, 142–68. Asociación Colombiana de Facultades de Psicología, 2023. http://dx.doi.org/10.61676/9786289532425.05.
Fernández-Martín, María José, Pilar Moreno-Crespo and Francisco Núñez-Román. "Cinema and Secondary Education in Spain". In Educational Innovation to Address Complex Societal Challenges, 103–20. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-3073-9.ch008.
"Sholay, Stereo Sound and the Auditory Spectacle". In Sound in Indian Film and Audiovisual Media. Amsterdam: Amsterdam University Press, 2023. http://dx.doi.org/10.5117/9789463724739_ch08.
"Popular Films from the Dubbing Era". In Sound in Indian Film and Audiovisual Media. Amsterdam: Amsterdam University Press, 2023. http://dx.doi.org/10.5117/9789463724739_ch06.
Conference papers on the topic "Audiovisual synthesis"
Batty, Joshua, Kipps Horn and Stefan Greuter. "Audiovisual granular synthesis". In The 9th Australasian Conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2513002.2513568.
Šimbelis, Vygandas 'Vegas', and Anders Lundström. "Synthesis in the Audiovisual". In CHI'16: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2851581.2889462.
Hussen Abdelaziz, Ahmed, Anushree Prasanna Kumar, Chloe Seivwright, Gabriele Fanelli, Justin Binder, Yannis Stylianou and Sachin Kajareker. "Audiovisual Speech Synthesis using Tacotron2". In ICMI '21: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3462244.3479883.
Chen, Sihang, Junliang Chen and Xiaojuan Gu. "EDAVS: Emotion-Driven Audiovisual Synthesis Experience". In SIGGRAPH '24: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3641234.3671080.
Silva, Samuel, and António Teixeira. "An Anthropomorphic Perspective for Audiovisual Speech Synthesis". In 10th International Conference on Bio-inspired Systems and Signal Processing. SCITEPRESS - Science and Technology Publications, 2017. http://dx.doi.org/10.5220/0006150201630172.
Texto completo da fonteBailly, Gérard. "Audiovisual speech synthesis. from ground truth to models". In 7th International Conference on Spoken Language Processing (ICSLP 2002). ISCA: ISCA, 2002. http://dx.doi.org/10.21437/icslp.2002-422.
Matthews, I. A. "Scale based features for audiovisual speech recognition". In IEE Colloquium on Integrated Audio-Visual Processing for Recognition, Synthesis and Communication. IEE, 1996. http://dx.doi.org/10.1049/ic:19961152.
Thangthai, Ausdang, Sumonmas Thatphithakkul, Kwanchiva Thangthai and Arnon Namsanit. "TSynC-3miti: Audiovisual Speech Synthesis Database from Found Data". In 2020 23rd Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA). IEEE, 2020. http://dx.doi.org/10.1109/o-cocosda50338.2020.9295001.
Mawass, Khaled, Pierre Badin and Gérard Bailly. "Synthesis of fricative consonants by audiovisual-to-articulatory inversion". In 5th European Conference on Speech Communication and Technology (Eurospeech 1997). ISCA, 1997. http://dx.doi.org/10.21437/eurospeech.1997-386.
Fagel, Sascha, and Walter F. Sendlmeier. "An expandable web-based audiovisual text-to-speech synthesis system". In 8th European Conference on Speech Communication and Technology (Eurospeech 2003). ISCA, 2003. http://dx.doi.org/10.21437/eurospeech.2003-673.