Selection of scientific literature on the topic "Gesture Synthesis"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Gesture Synthesis".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Gesture Synthesis"

1

Bao, Yihua, Dongdong Weng, and Nan Gao. "Editable Co-Speech Gesture Synthesis Enhanced with Individual Representative Gestures." Electronics 13, no. 16 (2024): 3315. http://dx.doi.org/10.3390/electronics13163315.

Full text of the source
Annotation:
Co-speech gesture synthesis is a challenging task due to the complexity and uncertainty between gestures and speech. Gestures that accompany speech (i.e., Co-Speech Gesture) are an essential part of natural and efficient embodied human communication, as they work in tandem with speech to convey information more effectively. Although data-driven approaches have improved gesture synthesis, existing deep learning-based methods use deterministic modeling which could lead to averaging out predicted gestures. Additionally, these methods lack control over gesture generation such as user editing of ge
APA, Harvard, Vancouver, ISO and other citation styles
2

Pang, Kunkun, Dafei Qin, Yingruo Fan, et al. "BodyFormer: Semantics-guided 3D Body Gesture Synthesis with Transformer." ACM Transactions on Graphics 42, no. 4 (2023): 1–12. http://dx.doi.org/10.1145/3592456.

Full text of the source
Annotation:
Automatic gesture synthesis from speech is a topic that has attracted researchers for applications in remote communication, video games and Metaverse. Learning the mapping between speech and 3D full-body gestures is difficult due to the stochastic nature of the problem and the lack of a rich cross-modal dataset that is needed for training. In this paper, we propose a novel transformer-based framework for automatic 3D body gesture synthesis from speech. To learn the stochastic nature of the body gesture during speech, we propose a variational transformer to effectively model a probabilistic dis
APA, Harvard, Vancouver, ISO and other citation styles
3

Deng, Linhai. "FPGA-based gesture recognition and voice interaction." Applied and Computational Engineering 40, no. 1 (2024): 174–79. http://dx.doi.org/10.54254/2755-2721/40/20230646.

Full text of the source
Annotation:
Human gestures, a fundamental trait, enable human-machine interactions and possibilities in interfaces. Amid technological advancements, gesture recognition research has gained prominence. Gesture recognition possesses merits in sample acquisition and intricate delineation. Delving into its nuances remains significant. Existing techniques leverage PC-based OpenCV and deep learning's computational prowess, showcasing complexity. This scholarly exposition outlines an experimental framework, centered on mobile FPGA for enhanced gesture recognition. The focus lies on DE2-115 as an image discernment
APA, Harvard, Vancouver, ISO and other citation styles
4

Ao, Tenglong, Qingzhe Gao, Yuke Lou, Baoquan Chen, and Libin Liu. "Rhythmic Gesticulator." ACM Transactions on Graphics 41, no. 6 (2022): 1–19. http://dx.doi.org/10.1145/3550454.3555435.

Full text of the source
Annotation:
Automatic synthesis of realistic co-speech gestures is an increasingly important yet challenging task in artificial embodied agent creation. Previous systems mainly focus on generating gestures in an end-to-end manner, which leads to difficulties in mining the clear rhythm and semantics due to the complex yet subtle harmony between speech and gestures. We present a novel co-speech gesture synthesis method that achieves convincing results both on the rhythm and semantics. For the rhythm, our system contains a robust rhythm-based segmentation pipeline to ensure the temporal coherence between the
APA, Harvard, Vancouver, ISO and other citation styles
5

Yang, Qi, and Georg Essl. "Evaluating Gesture-Augmented Keyboard Performance." Computer Music Journal 38, no. 4 (2014): 68–79. http://dx.doi.org/10.1162/comj_a_00277.

Full text of the source
Annotation:
The technology of depth cameras has made designing gesture-based augmentation for existing instruments inexpensive. We explored the use of this technology to augment keyboard performance with 3-D continuous gesture controls. In a user study, we compared the control of one or two continuous parameters using gestures versus the traditional control using pitch and modulation wheels. We found that the choice of mapping depends on the choice of synthesis parameter in use, and that the gesture control under suitable mappings can outperform pitch-wheel performance when two parameters are controlled s
APA, Harvard, Vancouver, ISO and other citation styles
6

Souza, Fernando, and Adolfo Maia Jr. "A Mathematical, Graphical and Visual Approach to Granular Synthesis Composition." Revista Vórtex 9, no. 2 (2021): 1–27. http://dx.doi.org/10.33871/23179937.2021.9.2.4.

Full text of the source
Annotation:
We show a method for Granular Synthesis Composition based on a mathematical modeling for the musical gesture. Each gesture is drawn as a curve generated from a particular mathematical model (or function) and coded as a MATLAB script. The gestures can be deterministic through defining mathematical time functions, hand free drawn, or even randomly generated. This parametric information of gestures is interpreted through OSC messages by a granular synthesizer (Granular Streamer). The musical composition is then realized with the models (scripts) written in MATLAB and exported to a graphical score
APA, Harvard, Vancouver, ISO and other citation styles
7

Patil, Ravindra. "AI-Driven Gesture Recognition and Multilingual Translation." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem48713.

Full text of the source
Annotation:
Abstract – This paper presents a real-time system designed to improve communication effectively for everyone with speech and hearing impairments through gesture-based language translation. This approach uses machine learning algorithms to interpret American Sign Language (ASL) hand gestures which is a universal sign language and convert them into both text and speech outputs. By integrating Mediapipe for landmark detection with a Convolutional Neural Network (CNN) for gesture classification, the system effectively identifies static hand signs and ensures robustness in diverse surrounding envir
APA, Harvard, Vancouver, ISO and other citation styles
8

He, Zhiyuan. "Automatic Quality Assessment of Speech-Driven Synthesized Gestures." International Journal of Computer Games Technology 2022 (March 16, 2022): 1–11. http://dx.doi.org/10.1155/2022/1828293.

Full text of the source
Annotation:
The automatic synthesis of realistic gestures has the ability to change the fields of animation, avatars, and communication agents. Although speech-driven synthetic gesture generation methods have been proposed and optimized, the evaluation system of synthetic gestures is still lacking. The current evaluation method still needs manual participation, but it is inefficient in the industry of synthetic gestures and has the interference of human factors. So we need a model that can construct an automatic and objective quantitative quality assessment of the synthesized gesture video. We noticed tha
APA, Harvard, Vancouver, ISO and other citation styles
9

Vasuki, M. "AI-Powered Real-Time Sign Language Detection and Translation System for Inclusive Communication Between Deaf and Hearing Communities Worldwide." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem50025.

Full text of the source
Annotation:
Abstract - Sign language is a vital communication tool for individuals who are deaf or hard of hearing, yet it remains largely inaccessible to the wider population. This project aims to address this barrier by developing a sign language recognition system that converts hand gestures into text, followed by text-to-speech (TTS) conversion. The system utilizes Convolutional Neural Networks (CNNs) to recognize static hand gestures and translate them into corresponding textual representations. The text is then processed by a TTS engine, which generates spoken language, making it comprehensible to i
APA, Harvard, Vancouver, ISO and other citation styles
10

Bouënard, Alexandre, Marcelo M. M. Wanderley, and Sylvie Gibet. "Gesture Control of Sound Synthesis: Analysis and Classification of Percussion Gestures." Acta Acustica united with Acustica 96, no. 4 (2010): 668–77. http://dx.doi.org/10.3813/aaa.918321.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
More sources

Dissertations on the topic "Gesture Synthesis"

1

Faggi, Simone. "An Evaluation Model For Speech-Driven Gesture Synthesis." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22844/.

Full text of the source
Annotation:
The research and development of embodied agents with advanced relational capabilities is constantly evolving. In recent years, the development of behavioural signal generation models to be integrated in social robots and virtual characters, is moving from rule-based to data-driven approaches, requiring appropriate and reliable evaluation techniques. This work proposes a novel machine learning approach for the evaluation of speech-to-gestures models that is independent from the audio source. This approach enables the measurement of the quality of gestures produced by these models and provides a
APA, Harvard, Vancouver, ISO and other citation styles
2

Marrin Nakra, Teresa (Teresa Anne). "Inside the conductor's jacket : analysis, interpretation and musical synthesis of expressive gesture." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9165.

Full text of the source
Annotation:
Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2000. Includes bibliographical references (leaves 154-167). We present the design and implementation of the Conductor's Jacket, a unique wearable device that measures physiological and gestural signals, together with the Gesture Construction, a musical software system that interprets these signals and applies them expressively in a musical context. Sixteen sensors have been incorporated into the Conductor's Jacket in such a way as to not encumber or interfere wi
APA, Harvard, Vancouver, ISO and other citation styles
3

Pun, James Chi-Him. "Gesture recognition with application in music arrangement." Diss., University of Pretoria, 2006. http://upetd.up.ac.za/thesis/available/etd-11052007-171910/.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
4

Wang, Yizhong Johnty. "Investigation of gesture control for articulatory speech synthesis with a bio-mechanical mapping layer." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/43193.

Full text of the source
Annotation:
In the process of working with a real-time, gesture controlled speech and singing synthesizer used for musical performance, we have documented performer related issues and provided some suggestions that will serve to improve future work in the field from an engineering and technician's perspective. One particular, significant detrimental factor in the existing system is the sound quality caused by the limitations of the one-to-one kinematic mapping between the gesture input and output. In order to solve this a force activated bio-mechanical mapping layer was implemented to drive an articulator
APA, Harvard, Vancouver, ISO and other citation styles
5

Pérez Carrillo, Alfonso Antonio. "Enhancing spectral synthesis techniques with performance gestures using the violin as a case study." Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7264.

Full text of the source
Annotation:
In this work we investigate new sound synthesis techniques for imitating musical instruments using the violin as a case study. It is a multidisciplinary research, covering several fields such as spectral modeling, machine learning, analysis of musical gestures or musical acoustics. It addresses sound production with a very empirical approach, based on the analysis of performance gestures as well as on the measurement of acoustical properties of the violin. Based on the characteristics of the main vibrating elements of the violin, we divide the study into two parts, namely bowed string and viol
APA, Harvard, Vancouver, ISO and other citation styles
6

Abel, Louis. "Co-speech gesture synthesis : Towards a controllable and interpretable model using a graph deterministic approach." Electronic Thesis or Diss., Université de Lorraine, 2025. http://www.theses.fr/2025LORR0020.

Full text of the source
Annotation:
Human communication is a multimodal process combining verbal and non-verbal dimensions, intended to foster mutual understanding. Gestures in particular enrich speech by clarifying meanings, expressing emotions, and conveying abstract ideas. However, while advances in speech synthesis have made it possible to produce artificial voices close to human speech, existing systems often neglect visual cues, limiting the effectiveness and immersion of human-machine interaction. To address this gap, this
APA, Harvard, Vancouver, ISO and other citation styles
7

Thoret, Etienne. "Caractérisation acoustique des relations entre les mouvements biologiques et la perception sonore : application au contrôle de la synthèse et à l'apprentissage de gestes." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4780/document.

Full text of the source
Annotation:
This thesis investigated the relationships between biological movements and auditory perception, considering the specific case of graphic movements and the friction sounds they generate. The originality of this work lies in the use of a sound synthesis model based on a perceptual principle drawn from the ecological approach to perception and controlled by gesture models. Sound stimuli whose timbre is modulated only by the velocity variations produced by a gesture could thus be generated, making it possible to focus on the perceptual influence of this inv
APA, Harvard, Vancouver, ISO and other citation styles
8

Devaney, Jason Wayne. "A study of articulatory gestures for speech synthesis." Thesis, University of Liverpool, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284254.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
9

Métois, Eric. "Musical sound information : musical gestures and embedding synthesis." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/29125.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
10

Vigliensoni, Martin Augusto. "Touchless gestural control of concatenative sound synthesis." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104846.

Full text of the source
Annotation:
This thesis presents research on three-dimensional position tracking technologies used to control concatenative sound synthesis and applies the achieved research results to the design of a new immersive interface for musical expression. The underlying concepts and characteristics of position tracking technologies are reviewed and musical applications using these technologies are surveyed to exemplify their use. Four position tracking systems based on different technologies are empirically compared according to their performance parameters, technical specifications, and practical considerations
APA, Harvard, Vancouver, ISO and other citation styles
More sources

Books on the topic "Gesture Synthesis"

1

Bernstein, Zachary. Thinking In and About Music. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780190949235.001.0001.

Full text of the source
Annotation:
Milton Babbitt (1916–2011) was, at once, one of the century’s foremost composers and a founder of American music theory. These two aspects of his creative life—“thinking in” and “thinking about” music, as he would put it—nourished each other. Theory and analysis inspired fresh compositional ideas, and compositional concerns focused theoretical and analytical inquiry. Accordingly, this book undertakes an excavation of the sources of his theorizing as a guide to analysis of his music. Babbitt’s idiosyncratic synthesis of ideas from Heinrich Schenker, analytic philosophy, and cognitive science—at
APA, Harvard, Vancouver, ISO and other citation styles
2

Bennett, Christopher. Grace, Freedom, and the Expression of Emotion. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198766858.003.0010.

Full text of the source
Annotation:
Schiller’s discussion of the expression of emotion takes place in the context of his arguments for the importance of grace. The expressions of emotion that help to constitute grace, on Schiller’s view, are neither physiological changes that accompany emotion, nor expressions in art, but rather gestures. Schiller notices that actions like this pose a problem for what he takes to be an attractive, Kantian conception of freedom. Schiller accepts that action out of emotion cannot be explained simply mechanistically, and accepts the Kantian conception of freedom as spontaneity; but he breaks new gr
APA, Harvard, Vancouver, ISO and other citation styles
3

Silver, Morris. Economic Structures of Antiquity. Greenwood Publishing Group, Inc., 1995. http://dx.doi.org/10.5040/9798400643606.

Full text of the source
Annotation:
The economy of the ancient Middle East and Greece is reinterpreted by Morris Silver in this provocative new synthesis. Silver finds that the ancient economy emerges as a class of economies with its own laws of motion shaped by transaction costs (the resources used up in exchanging ownership rights). The analysis of transaction costs provides insights into many characteristics of the ancient economy, such as the important role of the sacred and symbolic gestures in making contracts, magical technology, the entrepreneurial role of high-born women, the elevation of familial ties and other departu
APA, Harvard, Vancouver, ISO and other citation styles

Book chapters on the topic "Gesture Synthesis"

1

Losson, Olivier, and Jean-Marc Vannobel. "Sign Specification and Synthesis." In Gesture-Based Communication in Human-Computer Interaction. Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-46616-9_21.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
2

Neff, Michael. "Hand Gesture Synthesis for Conversational Characters." In Handbook of Human Motion. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_5.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
3

Neff, Michael. "Hand Gesture Synthesis for Conversational Characters." In Handbook of Human Motion. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30808-1_5-1.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
4

Olivier, Patrick. "Gesture Synthesis in a Real-World ECA." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24842-2_35.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
5

Wachsmuth, Ipke, and Stefan Kopp. "Lifelike Gesture Synthesis and Timing for Conversational Agents." In Gesture and Sign Language in Human-Computer Interaction. Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47873-6_13.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
6

Hartmann, Björn, Maurizio Mancini, and Catherine Pelachaud. "Implementing Expressive Gesture Synthesis for Embodied Conversational Agents." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11678816_22.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
7

Julliard, Frédéric, and Sylvie Gibet. "Reactiva’Motion Project: Motion Synthesis Based on a Reactive Representation." In Gesture-Based Communication in Human-Computer Interaction. Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-46616-9_23.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
8

Arfib, Daniel, and Loïc Kessous. "Gestural Control of Sound Synthesis and Processing Algorithms." In Gesture and Sign Language in Human-Computer Interaction. Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47873-6_30.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
9

Zhang, Fan, Naye Ji, Fuxing Gao, and Yongping Li. "DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model." In MultiMedia Modeling. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-27077-2_18.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
10

Crombie Smith, Kirsty, and William Edmondson. "The Development of a Computational Notation for Synthesis of Sign and Gesture." In Gesture-Based Communication in Human-Computer Interaction. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24598-8_29.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles

Conference papers on the topic "Gesture Synthesis"

1

Lu, Haipeng, Nan Xie, Zhengxu Li, Wei Pang, and Yongming Zhang. "EngaGes: An Engagement Fused Co-Speech Gesture Synthesis Model." In 2024 11th International Forum on Electrical Engineering and Automation (IFEEA). IEEE, 2024. https://doi.org/10.1109/ifeea64237.2024.10878669.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
2

Mughal, Muhammad Hamza, Rishabh Dabral, Ikhsanul Habibie, Lucia Donatelli, Marc Habermann, and Christian Theobalt. "ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.00138.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
3

Manjunatha, Koushik A., Morris Hsu, Rohit Kumar, and Sai Prashanth Chinnapalli. "V2R: FMCW Radar Data Synthesis from Videos for Long Range Gesture Recognition." In 2025 IEEE Topical Conference on Wireless Sensors and Sensor Networks (WiSNeT). IEEE, 2025. https://doi.org/10.1109/wisnet63956.2025.10905010.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
4

Bao, Yihua, Nan Gao, Dongdong Weng, Junyu Chen, and Zeyu Tian. "MuseGesture: A Framework for Gesture Synthesis by Virtual Agents in VR Museum Guides." In 2024 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2024. https://doi.org/10.1109/ismar-adjunct64951.2024.00079.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
5

Pang, Haozhou, Tianwei Ding, Lanshan He, Ming Tao, Lu Zhang, and Qi Gan. "LLM Gesticulator: leveraging large language models for scalable and controllable co-speech gesture synthesis." In Eighth International Conference on Computer Graphics and Virtuality (ICCGV25), edited by Haiquan Zhao. SPIE, 2025. https://doi.org/10.1117/12.3060395.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
6

Mehta, Shivam, Anna Deichler, Jim O’Regan, et al. "Fake it to make it: Using synthetic data to remedy the data shortage in joint multi-modal speech-and-gesture synthesis." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2024. http://dx.doi.org/10.1109/cvprw63382.2024.00201.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
7

Bargmann, Robert, Volker Blanz, and Hans-Peter Seidel. "A nonlinear viseme model for triphone-based speech synthesis." In Gesture Recognition (FG). IEEE, 2008. http://dx.doi.org/10.1109/afgr.2008.4813362.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
8

Sargin, M. E., O. Aran, A. Karpov, et al. "Combined Gesture-Speech Analysis and Speech Driven Gesture Synthesis." In 2006 IEEE International Conference on Multimedia and Expo. IEEE, 2006. http://dx.doi.org/10.1109/icme.2006.262663.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
9

Lu, Shuhong, Youngwoo Yoon, and Andrew Feng. "Co-Speech Gesture Synthesis using Discrete Gesture Token Learning." In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2023. http://dx.doi.org/10.1109/iros55552.2023.10342027.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
10

Wang, Siyang, Simon Alexanderson, Joakim Gustafson, Jonas Beskow, Gustav Eje Henter, and Éva Székely. "Integrated Speech and Gesture Synthesis." In ICMI '21: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. ACM, 2021. http://dx.doi.org/10.1145/3462244.3479914.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles