Academic literature on the topic 'Visemes'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Visemes.'
Journal articles on the topic "Visemes"
Chelali, Fatma Zohra, and Amar Djeradi. "Primary Research on Arabic Visemes, Analysis in Space and Frequency Domain." International Journal of Mobile Computing and Multimedia Communications 3, no. 4 (October 2011): 1–19. http://dx.doi.org/10.4018/jmcmc.2011100101.
Bear, Helen L., and Richard Harvey. "Alternative Visual Units for an Optimized Phoneme-Based Lipreading System." Applied Sciences 9, no. 18 (September 15, 2019): 3870. http://dx.doi.org/10.3390/app9183870.
Owens, Elmer, and Barbara Blazek. "Visemes Observed by Hearing-Impaired and Normal-Hearing Adult Viewers." Journal of Speech, Language, and Hearing Research 28, no. 3 (September 1985): 381–93. http://dx.doi.org/10.1044/jshr.2803.381.
Fenghour, Souheil, Daqing Chen, Kun Guo, Bo Li, and Perry Xiao. "An Effective Conversion of Visemes to Words for High-Performance Automatic Lipreading." Sensors 21, no. 23 (November 26, 2021): 7890. http://dx.doi.org/10.3390/s21237890.
Preminger, Jill E., Hwei-Bing Lin, Michel Payen, and Harry Levitt. "Selective Visual Masking in Speechreading." Journal of Speech, Language, and Hearing Research 41, no. 3 (June 1998): 564–75. http://dx.doi.org/10.1044/jslhr.4103.564.
De Martino, José Mario, Léo Pini Magalhães, and Fábio Violaro. "Facial animation based on context-dependent visemes." Computers & Graphics 30, no. 6 (December 2006): 971–80. http://dx.doi.org/10.1016/j.cag.2006.08.017.
Lalonde, Kaylah, and Grace A. Dwyer. "Visual phonemic knowledge and audiovisual speech-in-noise perception in school-age children." Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A337. http://dx.doi.org/10.1121/10.0019067.
Lazalde, Oscar Martinez, Steve Maddock, and Michael Meredith. "A Constraint-Based Approach to Visual Speech for a Mexican-Spanish Talking Head." International Journal of Computer Games Technology 2008 (2008): 1–7. http://dx.doi.org/10.1155/2008/412056.
Thangthai, Ausdang, Ben Milner, and Sarah Taylor. "Synthesising visual speech using dynamic visemes and deep learning architectures." Computer Speech & Language 55 (May 2019): 101–19. http://dx.doi.org/10.1016/j.csl.2018.11.003.
Henton, Caroline. "Beyond visemes: Using disemes in synthetic speech with facial animation." Journal of the Acoustical Society of America 95, no. 5 (May 1994): 3010. http://dx.doi.org/10.1121/1.408830.
Full textDissertations / Theses on the topic "Visemes"
Taylor, Sarah. "Discovering dynamic visemes." Thesis, University of East Anglia, 2013. https://ueaeprints.uea.ac.uk/47913/.
Bear, Helen L. "Decoding visemes: improving machine lip-reading." Thesis, University of East Anglia, 2016. https://ueaeprints.uea.ac.uk/59384/.
Ramage, Matthew David. "Disproving visemes as the basic visual unit of speech." Thesis, Curtin University, 2013. http://hdl.handle.net/20.500.11937/1618.
Thangthai, Ausdang. "Visual speech synthesis using dynamic visemes and deep learning architectures." Thesis, University of East Anglia, 2018. https://ueaeprints.uea.ac.uk/69371/.
Zailskas, Vytautas. "Lietuvių šnekos vizemų vizualizavimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2011. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2011~D_20110615_134342-38011.
This Master's thesis analyzes the visualization of Lithuanian speech visemes. Coding with SAMPA codes, software methods, and algorithms are examined, and vector graphics is chosen as the type of computer graphics best suited to the software's objectives. Two transformation methods for vector graphics are developed, and their differences and practical usability are analyzed. Software is created and described for building visemes, transforming them, and tuning both the duration of each viseme and the duration of the transitions between visemes; its main purpose is to animate Lithuanian speech using the selected method. Five visemes are created to animate two Lithuanian words, and a study using them demonstrates the quality and adaptability of the implementation.
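The central operation the abstract describes — morphing one vector-graphics viseme into the next over a tunable transition duration — can be sketched as linear interpolation of 2-D control points. A minimal illustration; the point sets, frame interval, and durations below are hypothetical stand-ins, not the thesis's actual data or method:

```python
import numpy as np

def blend_visemes(src, dst, t):
    """Linearly interpolate two visemes given as (n, 2) arrays of 2-D
    control points; t runs from 0.0 (src) to 1.0 (dst)."""
    return (1.0 - t) * src + t * dst

def transition(src, dst, duration_ms, frame_ms=40):
    """Sample the src -> dst morph at the animation frame interval,
    returning one control-point set per rendered frame."""
    n = max(1, round(duration_ms / frame_ms))
    return [blend_visemes(src, dst, i / n) for i in range(n + 1)]
```

Adjusting `duration_ms` per viseme pair corresponds to the transition-duration tuning the abstract mentions.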
Martinez Lazalde, Oscar Manuel. "Analyzing and evaluating the use of visemes in an interpolative synthesizer for visual speech." Thesis, University of Sheffield, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.531167.
Axelsson, Andreas, and Erik Björhäll. "Real Time Speech Driven Face Animation." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2015.
Full textThe goal of this project is to implement a system to analyse an audio signal containing speech, and produce a classifcation of lip shape categories (visemes) in order to synchronize the lips of a computer generated face with the speech.
The thesis describes the work to derive a method that maps speech to lip move- ments, on an animated face model, in real time. The method is implemented in C++ on the PC/Windows platform. The program reads speech from pre-recorded audio files and continuously performs spectral analysis of the speech. Neural networks are used to classify the speech into a sequence of phonemes, and the corresponding visemes are shown on the screen.
Some time delay between input speech and the visualization could not be avoided, but the overall visual impression is that sound and animation are synchronized.
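The final step of the pipeline this abstract describes — turning a recognized phoneme sequence into the viseme sequence to render — rests on a many-to-one phoneme-to-viseme mapping. A minimal sketch of that step; the class table below is a hypothetical illustration, not the thesis's actual viseme set:

```python
# Hypothetical many-to-one phoneme-to-viseme table, for illustration only.
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "t": "alveolar", "d": "alveolar", "s": "alveolar", "z": "alveolar",
    "k": "velar", "g": "velar",
    "a": "open", "e": "mid", "i": "spread", "o": "round", "u": "round",
}

def phonemes_to_visemes(phonemes):
    """Map a recognized phoneme sequence to the viseme sequence to render,
    merging consecutive identical visemes into a single mouth shape."""
    visemes = []
    for ph in phonemes:
        v = PHONEME_TO_VISEME.get(ph, "neutral")  # fall back for unknowns
        if not visemes or visemes[-1] != v:
            visemes.append(v)
    return visemes
```

Merging repeats matters in practice: adjacent phonemes such as /p/, /b/, /m/ share one mouth shape, so the animation should hold a single viseme rather than retrigger it.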
Akdemir, Eren. "Bimodal Automatic Speech Segmentation And Boundary Refinement Techniques." PhD thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611732/index.pdf.
Two boundary refinement techniques are proposed: a hidden Markov model (HMM) based fine-tuning system and an inverse filtering based fine-tuning system. The segment boundaries obtained by the bimodal speech segmentation system are improved further by using these techniques. To fulfill these goals, a complete two-stage automatic speech segmentation system is produced and tested on two different databases. A phonetically rich Turkish audiovisual speech database, containing acoustic data and camera recordings of 1600 Turkish sentences uttered by a male speaker, is built from scratch for use in the experiments. The visual features of the recordings are extracted, and manual phonetic alignment of the database is performed to serve as ground truth for the performance tests of the automatic speech segmentation systems.
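The boundary fine-tuning idea — take an initial segment boundary from the first-stage segmenter and nudge it to where the acoustic features change most — can be illustrated with a toy local search. This is only a sketch of the concept; the thesis's actual HMM-based and inverse-filtering-based refiners are considerably more elaborate:

```python
import numpy as np

def refine_boundary(frames, b0, radius=5, w=3):
    """Nudge an initial boundary estimate b0 to the frame index within
    +/- radius where the mean feature vectors of the w frames on either
    side differ most (a crude acoustic-change detector)."""
    best_b, best_d = b0, -1.0
    lo = max(w, b0 - radius)
    hi = min(len(frames) - w, b0 + radius)
    for b in range(lo, hi + 1):
        left = frames[b - w:b].mean(axis=0)
        right = frames[b:b + w].mean(axis=0)
        d = float(np.linalg.norm(left - right))
        if d > best_d:
            best_b, best_d = b, d
    return best_b
```

On a synthetic feature track that steps from one value to another, the search snaps an offset initial estimate onto the true change point.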
Melenchón, Maldonado Javier. "Síntesis Audiovisual Realista Personalizable." Doctoral thesis, Universitat Ramon Llull, 2007. http://hdl.handle.net/10803/9133.
A shared framework is presented for realistic, personalizable audiovisual synthesis and analysis of talking-head sequences and of visual sign-language sequences in a domestic setting. The former offers animation fully synchronized to a text or auditory source; the latter spells out words with the hand. Its personalization capabilities let non-expert users create audiovisual sequences. Applications range from realistic virtual avatars for natural interaction or video games, to very-low-bandwidth videoconferencing and visual telephony for the hard of hearing, including aids for pronunciation and communication for that same group. Long sequences can be processed with very limited resources, especially storage, thanks to a newly developed incremental computation procedure for the singular value decomposition that keeps the mean information updated; this scheme is complemented by three others: the decremental, the split, and the composed variants.
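The reduced-resource claim rests on updating a thin SVD one sample at a time instead of refactorizing the whole sequence. A sketch of the standard rank-one column-append update (the mean preservation and the decremental, split, and composed variants the thesis develops are omitted here):

```python
import numpy as np

def svd_append_column(U, s, Vt, c):
    """Update the thin SVD (U, diag(s), Vt) of an m x n matrix after
    appending one column c, via a small (k+1) x (k+1) core SVD instead
    of refactorizing the full matrix."""
    p = U.T @ c                      # component of c inside the left basis
    r = c - U @ p                    # residual orthogonal to the basis
    rho = float(np.linalg.norm(r))
    q = r / rho if rho > 1e-12 else np.zeros_like(c)
    k = s.size
    K = np.zeros((k + 1, k + 1))     # core matrix [[diag(s), p], [0, rho]]
    K[:k, :k] = np.diag(s)
    K[:k, k] = p
    K[k, k] = rho
    Uk, sk, Vtk = np.linalg.svd(K)
    U_new = np.hstack([U, q[:, None]]) @ Uk
    W = np.zeros((k + 1, Vt.shape[1] + 1))   # right factor [[Vt, 0], [0, 1]]
    W[:k, :-1] = Vt
    W[k, -1] = 1.0
    return U_new, sk, Vtk @ W
```

Each update costs an SVD of a small (k+1)×(k+1) matrix plus a few matrix products, which is what makes frame-by-frame processing of long sequences feasible with bounded memory.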
Turkmani, Aseel. "Visual analysis of viseme dynamics." Thesis, University of Surrey, 2008. http://epubs.surrey.ac.uk/804944/.
Books on the topic "Visemes"
Velázquez, Cuauhtémoc J. Allende. Tornillo de banco. [Oaxaca, Mexico]: Instituto Politécnico Nacional, Centro Interdisciplinario y de Investigación para el Desarrollo Integral Regional, Unidad Oaxaca, 1988.
Murvai, Olga. Viseltes szavak: Hoc est transsylvanicum : tanulmánygyűjtemény. Csíkszereda: Pallas-Akadémia, 2006.
L' architecture du IIIe Reich: Origines intellectuelles et visées idéologiques. Berne: P. Lang, 1987.
Legrand-Blain, Marie. Spiriferacea (Brachiopoda) viséens et serpukhoviens du Sahara algérien. Brest: Université de Bretagne occidentale, 1986.
Brochu, André. La visée critique: Essais autobiographiques et littéraires. Montréal: Boréal, 1988.
Malomfălean, Laurențiu. Nocturnalul "postmodern": Când visele imaginează hiperindivizi cu onirografii. Bucureşti: Tracus Arte, 2015.
Aslaksen, Per, ed. De trykte illegale visene: Hvordan ble de produsert? Skien: Falken Forlag, 1988.
Rowling, J. K. Harry Potter och de vises sten. 2nd ed. Stockholm, Sweden: Tiden, 2001.
Book chapters on the topic "Visemes"
Visser, Michiel, Mannes Poel, and Anton Nijholt. "Classifying Visemes for Automatic Lipreading." In Text, Speech and Dialogue, 349–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48239-3_65.
Shdaifat, Islam, Rolf-Rainer Grigat, and Stefan Lütgert. "Recognition of the German Visemes Using Multiple Feature Matching." In Lecture Notes in Computer Science, 437–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45404-7_58.
Pajorová, Eva, and Ladislav Hluchý. "Correct Speech Visemes as a Root of Total Communication Method for Deaf People." In Agent and Multi-Agent Systems. Technologies and Applications, 389–95. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-30947-2_43.
Leszczynski, Mariusz, and Władysław Skarbek. "Viseme Classification for Talking Head Application." In Computer Analysis of Images and Patterns, 773–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11556121_95.
Leszczynski, Mariusz, Władysław Skarbek, and Stanisław Badura. "Fast Viseme Recognition for Talking Head Application." In Lecture Notes in Computer Science, 516–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11559573_64.
Revéret, Lionel, and Christian Benoît. "A viseme-based approach to labiometrics for automatic lipreading." In Audio- and Video-based Biometric Person Authentication, 335–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0016013.
Lee, Soonkyu, and Dongsuk Yook. "Viseme Recognition Experiment Using Context Dependent Hidden Markov Models." In Intelligent Data Engineering and Automated Learning — IDEAL 2002, 557–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45675-9_84.
Bastanfard, Azam, Mohammad Aghaahmadi, Alireza Abdi kelishami, Maryam Fazel, and Maedeh Moghadam. "Persian Viseme Classification for Developing Visual Speech Training Application." In Advances in Multimedia Information Processing - PCM 2009, 1080–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10467-1_104.
Koller, Oscar, Hermann Ney, and Richard Bowden. "Read My Lips: Continuous Signer Independent Weakly Supervised Viseme Recognition." In Computer Vision – ECCV 2014, 281–96. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10590-1_19.
Fernandez-Lopez, Adriana, and Federico M. Sukno. "Optimizing Phoneme-to-Viseme Mapping for Continuous Lip-Reading in Spanish." In Communications in Computer and Information Science, 305–28. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-12209-6_15.
Conference papers on the topic "Visemes"
Bear, Helen L., and Richard Harvey. "Decoding visemes: Improving machine lip-reading." In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016. http://dx.doi.org/10.1109/icassp.2016.7472029.
Santhosh Kumar, S. "Encoding Malayalam visemes for facial image synthesis." In IET Conference on Wireless, Mobile and Multimedia Networks. IEE, 2008. http://dx.doi.org/10.1049/cp:20080195.
Hui Zhao and Chaojing Tang. "Visual speech synthesis based on Chinese dynamic visemes." In 2008 International Conference on Information and Automation (ICIA). IEEE, 2008. http://dx.doi.org/10.1109/icinfa.2008.4607983.
Costa, Paula Dornhofer Paro, and José Mario De Martino. "Compact 2D facial animation based on context-dependent visemes." In the ACM/SSPNET 2nd International Symposium. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1924035.1924047.
Thangthai, Ausdang, Ben Milner, and Sarah Taylor. "Visual Speech Synthesis Using Dynamic Visemes, Contextual Features and DNNs." In Interspeech 2016. ISCA, 2016. http://dx.doi.org/10.21437/interspeech.2016-1084.
Okita, Shinsuke, Yasue Mitsukura, and Nozomu Hamada. "Lip reading system using novel Japanese visemes classification and hierarchical weighted discrimination." In 2013 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS). IEEE, 2013. http://dx.doi.org/10.1109/ispacs.2013.6704608.
Okita, Shinsuke, Yasue Mitsukura, and Nozomu Hamada. "Augmented classification of Japanese visemes and hierarchical weighted discrimination for visual speech recognition." In 2013 IEEE Conference on Systems, Process & Control (ICSPC). IEEE, 2013. http://dx.doi.org/10.1109/spc.2013.6735104.
Xie Lei, Jiang Dongmei, I. Ravyse, Zhao Rongchun, W. Verhelst, H. Sahli, and J. Cornelis. "Visualize speech: a continuous speech recognition system for facial animation using acoustic visemes." In Proceedings of 2003 International Conference on Neural Networks and Signal Processing. IEEE, 2003. http://dx.doi.org/10.1109/icnnsp.2003.1280738.
Dehshibi, Mohammad Mahdi, Meysam Alavi, and Jamshid Shanbehzadeh. "Kernel-based Persian viseme clustering." In 2013 13th International Conference on Hybrid Intelligent Systems (HIS). IEEE, 2013. http://dx.doi.org/10.1109/his.2013.6920468.
Roslan, Rosniza, Nursuriati Jamil, Noraini Seman, and Syafiqa Ain Alfida Abdul Rahim. "Face and mouth localization of viseme." In 2016 IEEE Industrial Electronics and Applications Conference (IEACon). IEEE, 2016. http://dx.doi.org/10.1109/ieacon.2016.8067385.
Reports on the topic "Visemes"
Adhikari, Kamal, Bharat Adhikari, Sue Cavill, Santosh Mehrotra, Vijeta Rao Bejjanki, and Matteus Van Der Velden. Monitoria de Campanhas de Saneamento: Metas, Relatórios e Informação, e Realismo. Institute of Development Studies, June 2022. http://dx.doi.org/10.19088/slh.2022.008.