
Journal articles on the topic "Gesture Synthesis"


Consult the top 50 journal articles for research on the topic "Gesture Synthesis".

Next to every work in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Bao, Yihua, Dongdong Weng, and Nan Gao. "Editable Co-Speech Gesture Synthesis Enhanced with Individual Representative Gestures." Electronics 13, no. 16 (2024): 3315. http://dx.doi.org/10.3390/electronics13163315.

Annotation:
Co-speech gesture synthesis is a challenging task due to the complexity and uncertainty between gestures and speech. Gestures that accompany speech (i.e., Co-Speech Gesture) are an essential part of natural and efficient embodied human communication, as they work in tandem with speech to convey information more effectively. Although data-driven approaches have improved gesture synthesis, existing deep learning-based methods use deterministic modeling which could lead to averaging out predicted gestures. Additionally, these methods lack control over gesture generation such as user editing of ge
2

Pang, Kunkun, Dafei Qin, Yingruo Fan, et al. "BodyFormer: Semantics-guided 3D Body Gesture Synthesis with Transformer." ACM Transactions on Graphics 42, no. 4 (2023): 1–12. http://dx.doi.org/10.1145/3592456.

Annotation:
Automatic gesture synthesis from speech is a topic that has attracted researchers for applications in remote communication, video games and Metaverse. Learning the mapping between speech and 3D full-body gestures is difficult due to the stochastic nature of the problem and the lack of a rich cross-modal dataset that is needed for training. In this paper, we propose a novel transformer-based framework for automatic 3D body gesture synthesis from speech. To learn the stochastic nature of the body gesture during speech, we propose a variational transformer to effectively model a probabilistic dis
3

Deng, Linhai. "FPGA-based gesture recognition and voice interaction." Applied and Computational Engineering 40, no. 1 (2024): 174–79. http://dx.doi.org/10.54254/2755-2721/40/20230646.

Annotation:
Human gestures, a fundamental trait, enable human-machine interactions and possibilities in interfaces. Amid technological advancements, gesture recognition research has gained prominence. Gesture recognition possesses merits in sample acquisition and intricate delineation. Delving into its nuances remains significant. Existing techniques leverage PC-based OpenCV and deep learning's computational prowess, showcasing complexity. This scholarly exposition outlines an experimental framework, centered on a mobile FPGA for enhanced gesture recognition. The focus lies on DE2-115 as an image discernment
4

Ao, Tenglong, Qingzhe Gao, Yuke Lou, Baoquan Chen, and Libin Liu. "Rhythmic Gesticulator." ACM Transactions on Graphics 41, no. 6 (2022): 1–19. http://dx.doi.org/10.1145/3550454.3555435.

Annotation:
Automatic synthesis of realistic co-speech gestures is an increasingly important yet challenging task in artificial embodied agent creation. Previous systems mainly focus on generating gestures in an end-to-end manner, which leads to difficulties in mining the clear rhythm and semantics due to the complex yet subtle harmony between speech and gestures. We present a novel co-speech gesture synthesis method that achieves convincing results both on the rhythm and semantics. For the rhythm, our system contains a robust rhythm-based segmentation pipeline to ensure the temporal coherence between the
5

Yang, Qi, and Georg Essl. "Evaluating Gesture-Augmented Keyboard Performance." Computer Music Journal 38, no. 4 (2014): 68–79. http://dx.doi.org/10.1162/comj_a_00277.

Annotation:
The technology of depth cameras has made designing gesture-based augmentation for existing instruments inexpensive. We explored the use of this technology to augment keyboard performance with 3-D continuous gesture controls. In a user study, we compared the control of one or two continuous parameters using gestures versus the traditional control using pitch and modulation wheels. We found that the choice of mapping depends on the choice of synthesis parameter in use, and that the gesture control under suitable mappings can outperform pitch-wheel performance when two parameters are controlled s
6

Souza, Fernando, and Adolfo Maia Jr. "A Mathematical, Graphical and Visual Approach to Granular Synthesis Composition." Revista Vórtex 9, no. 2 (2021): 1–27. http://dx.doi.org/10.33871/23179937.2021.9.2.4.

Annotation:
We show a method for Granular Synthesis Composition based on a mathematical modeling of the musical gesture. Each gesture is drawn as a curve generated from a particular mathematical model (or function) and coded as a MATLAB script. The gestures can be deterministic, defined through mathematical time functions, drawn freehand, or even randomly generated. This parametric information of gestures is interpreted through OSC messages by a granular synthesizer (Granular Streamer). The musical composition is then realized with the models (scripts) written in MATLAB and exported to a graphical score
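The abstract's core idea, a gesture encoded as a deterministic time function that drives granular-synthesis parameters, can be sketched as follows. This is an illustrative Python rendering, not the authors' MATLAB/OSC implementation; the curve shape, parameter names, and value ranges are invented:

```python
import math

def gesture_curve(t, f0=2.0, decay=0.5):
    """Hypothetical gesture: a damped oscillation kept in [0, 1]
    that could drive one synthesis parameter over time t (seconds)."""
    return 0.5 + 0.5 * math.exp(-decay * t) * math.cos(2 * math.pi * f0 * t)

def grain_parameters(t):
    """Map the abstract gesture value to concrete grain parameters
    (the ranges are illustrative assumptions, not from the paper)."""
    g = gesture_curve(t)
    return {
        "density_hz": 5 + 95 * g,            # grains per second
        "duration_ms": 10 + 90 * (1 - g),    # grain length
        "pitch_semitones": -12 + 24 * g,     # transposition
    }

# Sample the gesture at 10 Hz over 3 seconds, like a score timeline
timeline = [grain_parameters(t / 10) for t in range(30)]
```

A freehand or random gesture would simply replace `gesture_curve` with interpolated tablet data or a stochastic generator, leaving the parameter mapping unchanged.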
7

Patil, Ravindra. "AI-Driven Gesture Recognition and Multilingual Translation." International Journal of Scientific Research in Engineering and Management 9, no. 5 (2025): 1–9. https://doi.org/10.55041/ijsrem48713.

Annotation:
Abstract – This paper presents a real-time system designed to improve communication for people with speech and hearing impairments through gesture-based language translation. The approach uses machine learning algorithms to interpret American Sign Language (ASL) hand gestures and convert them into both text and speech outputs. By integrating Mediapipe for landmark detection with a Convolutional Neural Network (CNN) for gesture classification, the system effectively identifies static hand signs and ensures robustness in diverse surrounding envir
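The landmarks-plus-classifier pipeline described in this abstract can be illustrated with a minimal stand-in. The sketch assumes 21 (x, y) hand landmarks have already been extracted (as MediaPipe Hands provides); it substitutes a nearest-centroid rule for the paper's CNN, and the sign templates are toy values, not a real ASL model:

```python
import math

def flatten(landmarks):
    """Flatten 21 (x, y) landmark points into a 42-number feature vector."""
    return [c for point in landmarks for c in point]

def classify(sample, centroids):
    """Nearest-centroid stand-in for the CNN classifier: return the
    label whose template is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Toy templates for two static signs (illustrative only)
centroids = {
    "A": [0.1] * 42,
    "B": [0.9] * 42,
}
print(classify([0.15] * 42, centroids))  # prints "A"
```

In a real system the centroids would be replaced by a trained model and the feature vector would come from the hand-tracking library frame by frame.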
8

He, Zhiyuan. "Automatic Quality Assessment of Speech-Driven Synthesized Gestures." International Journal of Computer Games Technology 2022 (March 16, 2022): 1–11. http://dx.doi.org/10.1155/2022/1828293.

Annotation:
The automatic synthesis of realistic gestures has the ability to change the fields of animation, avatars, and communication agents. Although speech-driven synthetic gesture generation methods have been proposed and optimized, an evaluation system for synthetic gestures is still lacking. The current evaluation method still requires manual participation, which is inefficient for the synthetic-gesture industry and subject to interference from human factors. So we need a model that can construct an automatic and objective quantitative quality assessment of the synthesized gesture video. We noticed tha
9

Vasuki, M. "AI-Powered Real-Time Sign Language Detection and Translation System for Inclusive Communication Between Deaf and Hearing Communities Worldwide." International Journal of Scientific Research in Engineering and Management 9, no. 6 (2025): 1–9. https://doi.org/10.55041/ijsrem50025.

Annotation:
Abstract - Sign language is a vital communication tool for individuals who are deaf or hard of hearing, yet it remains largely inaccessible to the wider population. This project aims to address this barrier by developing a sign language recognition system that converts hand gestures into text, followed by text-to-speech (TTS) conversion. The system utilizes Convolutional Neural Networks (CNNs) to recognize static hand gestures and translate them into corresponding textual representations. The text is then processed by a TTS engine, which generates spoken language, making it comprehensible to i
10

Bouënard, Alexandre, Marcelo M. M. Wanderley, and Sylvie Gibet. "Gesture Control of Sound Synthesis: Analysis and Classification of Percussion Gestures." Acta Acustica united with Acustica 96, no. 4 (2010): 668–77. http://dx.doi.org/10.3813/aaa.918321.

11

Zhang, Zeyi, Tenglong Ao, Yuyao Zhang, et al. "Semantic Gesticulator: Semantics-Aware Co-Speech Gesture Synthesis." ACM Transactions on Graphics 43, no. 4 (2024): 1–17. http://dx.doi.org/10.1145/3658134.

Annotation:
In this work, we present Semantic Gesticulator, a novel framework designed to synthesize realistic gestures accompanying speech with strong semantic correspondence. Semantically meaningful gestures are crucial for effective non-verbal communication, but such gestures often fall within the long tail of the distribution of natural human motion. The sparsity of these movements makes it challenging for deep learning-based systems, trained on moderately sized datasets, to capture the relationship between the movements and the corresponding speech semantics. To address this challenge, we develop a
12

Xu, Zunnan, Yachao Zhang, Sicheng Yang, Ronghui Li, and Xiu Li. "Chain of Generation: Multi-Modal Gesture Synthesis via Cascaded Conditional Control." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (2024): 6387–95. http://dx.doi.org/10.1609/aaai.v38i6.28458.

Annotation:
This study aims to improve the generation of 3D gestures by utilizing multimodal information from human speech. Previous studies have focused on incorporating additional modalities to enhance the quality of generated gestures. However, these methods perform poorly when certain modalities are missing during inference. To address this problem, we suggest using speech-derived multimodal priors to improve gesture generation. We introduce a novel method that separates priors from speech and employs multimodal priors as constraints for generating gestures. Our approach utilizes a chain-like modeling
13

G C, Shwethashree. "Inclusive Communication: Leveraging AI for Sign Language Translation and Real-Time Audio Transcription." International Journal of Scientific Research in Engineering and Management 9, no. 5 (2025): 1–9. https://doi.org/10.55041/ijsrem48555.

Annotation:
Abstract - Humans communicate through both natural language and body language, including gestures, facial expressions, and lip movements. While understanding spoken language is essential, recognizing sign language is equally important, especially for individuals with hearing impairments. Deaf individuals often struggle to communicate with those unfamiliar with sign language, making real-time translation systems invaluable. This paper proposes a real-time meeting platform that recognizes Indian Sign Language (ISL) gestures and converts them into text and speech, enabling smooth interaction betw
14

Fernández-Baena, Adso, Raúl Montaño, Marc Antonijoan, Arturo Roversi, David Miralles, and Francesc Alías. "Gesture synthesis adapted to speech emphasis." Speech Communication 57 (February 2014): 331–50. http://dx.doi.org/10.1016/j.specom.2013.06.005.

15

Lakshmi, Kalapala Sri, Bobbadi Manohar Rao, Allada Lehya Varshini, Abdul Kaif, and Budu Meghana. "Automatic Sign Language Interpreter with Motion Detection and Voice Synthesis." Industrial Engineering Journal 54, no. 2 (2025): 75–80. https://doi.org/10.36893/iej.2025.v52i2.008.

Annotation:
The goal of this project is to provide a real-time sign language translator so that people who use sign language may interact with others. Leveraging advanced machine learning techniques, the system recognizes fingerspelling hand gestures, converts them into text, and subsequently synthesizes the text into speech. MediaPipe is employed for real-time hand tracking and gesture recognition, while a Convolutional Neural Network (CNN) classifies the gestures. The recognized text is converted into speech using a Text-to-Speech (TTS) library, facilitating both visual and auditory communication. This
16

Seetaram, J., Sk Sahil, Md Irfan Ahmed, and N. Harshitha. "Deep Learning-Based Hand Gesture Recognition for Speech Synthesis in Telugu." International Journal of Advanced Research 12, no. 7 (2024): 390–98. http://dx.doi.org/10.21474/ijar01/19067.

Annotation:
In a world increasingly reliant on technology, individuals born with hearing impairments face significant communication challenges, leading to feelings of isolation and dependency. This paper addresses the pressing need to empower the deaf and mute community by proposing an innovative solution – Deep Learning-Based Hand Gesture Recognition for Speech Synthesis in Telugu. Deaf and mute individuals encounter barriers in expressing themselves verbally, hindering their integration into mainstream society. Conventional methods often fall short in providing effective communication channels, ex
17

Arfib, D., J. M. Couturier, L. Kessous, and V. Verfaille. "Strategies of mapping between gesture data and synthesis model parameters using perceptual spaces." Organised Sound 7, no. 2 (2002): 127–44. http://dx.doi.org/10.1017/s1355771802002054.

Annotation:
This paper is about mapping strategies between gesture data and synthesis model parameters by means of perceptual spaces. We define three layers in the mapping chain: from gesture data to gesture perceptual space, from sound perceptual space to synthesis model parameters, and between the two perceptual spaces. This approach makes the implementation highly modular. Both perceptual spaces are developed and depicted with their features. To get a simple mapping between the gesture perceptual subspace and the sound perceptual subspace, we need to focus our attention on the two other mappings. We ex
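The three-layer mapping chain described in this abstract lends itself to a direct sketch as composed functions. All feature names and value ranges below are invented for illustration; the paper defines its own perceptual spaces:

```python
def gesture_to_perceptual(raw):
    """Layer 1: raw gesture data -> gesture perceptual space
    (here, hypothetical pressure/speed readings normalised to [0, 1])."""
    return {"energy": min(raw["pressure"] / 100.0, 1.0),
            "fluidity": min(raw["speed"] / 10.0, 1.0)}

def perceptual_link(gp):
    """Layer 2: gesture perceptual space -> sound perceptual space.
    Kept trivially simple here; this is the layer a designer tunes."""
    return {"brightness": gp["energy"], "smoothness": gp["fluidity"]}

def perceptual_to_synth(sp):
    """Layer 3: sound perceptual space -> synthesis model parameters."""
    return {"cutoff_hz": 200 + 7800 * sp["brightness"],
            "attack_ms": 1 + 199 * (1 - sp["smoothness"])}

params = perceptual_to_synth(perceptual_link(
    gesture_to_perceptual({"pressure": 50.0, "speed": 5.0})))
```

The modularity claimed in the abstract shows up directly: any one layer can be swapped (a new controller, a new synthesizer) without touching the other two.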
18

Nakano, Atsushi, and Junichi Hoshino. "Composite conversation gesture synthesis using layered planning." Systems and Computers in Japan 38, no. 10 (2007): 58–68. http://dx.doi.org/10.1002/scj.20532.

19

Gudi, Swaroop. "Sign Language Detection Using Gloves." International Journal for Research in Applied Science and Engineering Technology 12, no. 11 (2024): 1387–91. http://dx.doi.org/10.22214/ijraset.2024.65315.

Annotation:
This paper presents a comprehensive system for real-time translation of Indian Sign Language (ISL) gestures into spoken language using gloves equipped with flex sensors. The system incorporates an Arduino Nano microcontroller for data acquisition, an HC-05 Bluetooth module for wireless data transmission, and an Android application for processing. A deep learning model, trained on an ISL dataset using Keras and TensorFlow, classifies the gestures. The processed data is then converted into spoken language using Google Text-to-Speech (GTTS). The gloves measure finger movements through flex sensor
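The glove pipeline's classification step can be sketched as a simple threshold rule over the five flex-sensor readings. This is a hypothetical illustration (the paper trains a Keras/TensorFlow model instead); the gesture templates and the 0-to-1 reading scale are assumptions:

```python
# Assumed scale: 0 = finger straight, 1 = fully bent, one reading per
# finger, already received from the Arduino over Bluetooth.
TEMPLATES = {
    "hello": (0, 0, 0, 0, 0),   # open hand
    "yes":   (1, 1, 1, 1, 1),   # fist
    "ok":    (1, 1, 0, 0, 0),   # thumb and index bent
}

def classify_flex(readings, threshold=0.5):
    """Binarise each finger at a bend threshold, then look for an
    exact template match; None means 'no known gesture'."""
    pattern = tuple(int(r > threshold) for r in readings)
    for word, template in TEMPLATES.items():
        if pattern == template:
            return word
    return None

print(classify_flex([0.9, 0.8, 0.1, 0.2, 0.1]))  # prints "ok"
```

A learned classifier replaces the exact-match lookup when gestures are noisy or overlap, which is why the paper opts for a neural network; the surrounding acquisition/TTS pipeline is unchanged.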
20

Kaveri, A. Chandan, and Vijay B. More. "Indian Sign Language Interpretation Using CNN and MediaPipe with Text-to-Speech Integration." Journal of the Maharaja Sayajirao University of Baroda 59, no. 1 (I) (2025): 393–403. https://doi.org/10.5281/zenodo.15237756.

Annotation:
Abstract: Indian Sign Language (ISL) is of great importance in communication for deaf and speech-impaired people from all around the nation of India. However, the lack of universally available interpretation tools has resulted in an obstacle to communication between ISL users and the general population. This research presents a real-time ISL interpretation system that consists of CNN and MediaPipe hand tracking to process gestures and convert them to natural language text. Finally, Text-to-Speech (TTS) technology is integrated to allow for the output of the gestured information to hearing individuals
21

H.V, Shashidhara, Jnanesh R, Karthik V, Kiran I S, and Hemanth I R. "A Comprehensive System for Real-Time Sign Language Translation into Text and Speech." International Journal of Scientific Research in Engineering and Management 9, no. 1 (2025): 1–9. https://doi.org/10.55041/ijsrem40428.

Annotation:
Effective communication remains a challenge for individuals who rely on sign language as their primary mode of expression, especially in interactions with non-sign language users. This research explores an innovative system that converts sign language gestures into text and subsequently into synthesized speech, enabling seamless and inclusive communication. Leveraging advancements in computer vision, natural language processing (NLP), and speech synthesis, the proposed model captures real-time sign gestures, translates them into structured textual data, and outputs audible speech with high acc
22

Dang, Xiaochao, Wenze Ke, Zhanjun Hao, Peng Jin, Han Deng, and Ying Sheng. "mm-TPG: Traffic Policemen Gesture Recognition Based on Millimeter Wave Radar Point Cloud." Sensors 23, no. 15 (2023): 6816. http://dx.doi.org/10.3390/s23156816.

Annotation:
Automatic driving technology refers to equipment such as vehicle-mounted sensors and computers that are used to navigate and control vehicles autonomously by acquiring external environmental information. To achieve automatic driving, vehicles must be able to perceive the surrounding environment and recognize and understand traffic signs, traffic signals, pedestrians, and other traffic participants, as well as accurately plan and control their path. Recognition of traffic signs and signals is an essential part of automatic driving technology, and gesture recognition is a crucial aspect of traff
23

Mane, Deepak T. "HearingHands: Amplifying the Voices of Deaf Community." Journal of Information Systems Engineering and Management 10, no. 1s (2024): 375–89. https://doi.org/10.52783/jisem.v10i1s.222.

Annotation:
Like natural language, sign language employs several modes of expression for everyday communication. Researchers have paid less attention to ISL interpretation than to other forms of sign language. This paper introduces an automatic technique for translating manual letter movements into Hindi sign language. Because it works with pictures of bare hands, the user may interact with the device naturally. The system gives deaf individuals the chance to interact with hearing people without the use of an interpreter. The paper's goal is to develop techniques and a system for automatically recognizing Hindi sign l
24

Cheng, Yongkang, Shaoli Huang, Xuelin Chen, Jifeng Ning, and Mingming Gong. "DIDiffGes: Decoupled Semi-Implicit Diffusion Models for Real-time Gesture Generation from Speech." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 3 (2025): 2464–72. https://doi.org/10.1609/aaai.v39i3.32248.

Annotation:
Diffusion models have demonstrated remarkable synthesis quality and diversity in generating co-speech gestures. However, the computationally intensive sampling steps associated with diffusion models hinder their practicality in real-world applications. Hence, we present DIDiffGes, a Decoupled Semi-Implicit Diffusion model-based framework that can synthesize high-quality, expressive gestures from speech using only a few sampling steps. Our approach leverages Generative Adversarial Networks (GANs) to enable large-step sampling for the diffusion model. We decouple gesture data into body and hand
25

Valencia, C. Roncancio, J. Gomez Garcia-Bermejo, and E. Zalama Casanova. "Combined Gesture-Speech Recognition and Synthesis Using Neural Networks." IFAC Proceedings Volumes 41, no. 2 (2008): 2968–73. http://dx.doi.org/10.3182/20080706-5-kr-1001.00499.

26

Montgermont, Nicolas, Benoit Fabre, and Patricio De La Cuadra. "Gesture synthesis: basic control of a flute physical model." Journal of the Acoustical Society of America 123, no. 5 (2008): 3797. http://dx.doi.org/10.1121/1.2935477.

27

Alexanderson, Simon, Gustav Eje Henter, Taras Kucherenko, and Jonas Beskow. "Style‐Controllable Speech‐Driven Gesture Synthesis Using Normalising Flows." Computer Graphics Forum 39, no. 2 (2020): 487–96. http://dx.doi.org/10.1111/cgf.13946.

28

Gao, Wanlin. "Speech Synthesis and Personalization under Unimodal and Multimodal Conditions." Transactions on Computer Science and Intelligent Systems Research 7 (November 25, 2024): 126–37. https://doi.org/10.62051/7b0mc109.

Annotation:
Recently, there have been notable advancements in TTS technology, with researchers optimizing the efficiency, quality, and flexibility of speech generation through various models. This paper systematically explores end-to-end TTS models based on waveform generation, including Parallel WaveGAN, NaturalSpeech, and Multi-Band MelGAN, each of which has unique features in enhancing real-time generation capabilities and sound quality. Additionally, the paper discusses the development of speech separation and synthesis technologies, highlighting the applications of models like CONTENTVEC in pitch adj
29

Mo, Dong-Han, Chuen-Lin Tien, Yu-Ling Yeh, et al. "Design of Digital-Twin Human-Machine Interface Sensor with Intelligent Finger Gesture Recognition." Sensors 23, no. 7 (2023): 3509. http://dx.doi.org/10.3390/s23073509.

Annotation:
In this study, the design of a Digital-twin human-machine interface sensor (DT-HMIS) is proposed. This is a digital-twin sensor (DT-Sensor) that can meet the demands of human-machine automation collaboration in Industry 5.0. The DT-HMIS allows users/patients to add, modify, delete, query, and restore their previously memorized DT finger gesture mapping model and programmable logic controller (PLC) logic program, enabling the operation or access of the programmable controller input-output (I/O) interface and achieving the extended limb collaboration capability of users/patients. The system has
30

Alves de Sousa, Fernando. "João Paulo II e a purificação da memória histórica no jubileu do ano 2000." Brasiliensis 5, no. 9 (2016): 95–124. https://doi.org/10.5281/zenodo.8128003.

Annotation:
https://brasiliensis.cerm.org.br/index.php/brasiliensis/article/view/96/version/96 It presents a synthesis of the thesis of the author’s master’s degree that looks to recompose the trajectory of the “Day of Forgiveness”realized in the year 2000 during the celebrations of the Jubilee - under the perspective of its promoter and idealizer, that is, as a specific gesture of John Paul II. The Journey, known as a gesture of purification of the memory, would have been matured in the heart of Karol Wojtyla from his experiences as a son of Poland and son of the Council. His elec
31

Yu, Shi Cai, and Rong Lu. "Research of Sign Language Synthesis Based on VRML." Applied Mechanics and Materials 347-350 (August 2013): 2631–35. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2631.

Annotation:
Sign language helps deaf and hearing people communicate naturally and supports computer-assisted instruction. Through an analysis of sign-language features, this paper proposes a VRML-based human-body modeling method and a context-based gesture-smoothing algorithm for the virtual human, on which sign language synthesis is researched and implemented.
32

Arefieva, Anna. "Avant-Garde Synthesis of the Arts in the Creativity of Scenographers at the Beginning of the 20th Century." Bulletin of Yaroslav Mudryi National Law University. Series: Philosophy, Philosophy of Law, Political Science, Sociology 56, no. 1 (2023): 99–108. https://doi.org/10.21564/2663-5704.56.274331.

Annotation:
The urgency of turning to the philosophical foundations of the formation of avant-garde stage synthesis on the stage of the Diaghilev seasons is emphasized. The purpose of the article is to determine the intentions of the stage synthesis of avant-garde art – cubism, Fauvism, abstractionism – as a philosophical and anthropological phenomenon of destruction and deconstruction of classical art. The research methodology is determined by comparative and systematic approaches. Transcendental, phenomenological and dialectical methods make it possible to carry out a philosophical and anthropol
33

Ryumin, Dmitry, Ildar Kagirov, Alexandr Axyonov, et al. "A Multimodal User Interface for an Assistive Robotic Shopping Cart." Electronics 9, no. 12 (2020): 2093. http://dx.doi.org/10.3390/electronics9122093.

Annotation:
This paper presents the research and development of the prototype of the assistive mobile information robot (AMIR). The main features of the presented prototype are voice and gesture-based interfaces with Russian speech and sign language recognition and synthesis techniques and a high degree of robot autonomy. AMIR prototype’s aim is to be used as a robotic cart for shopping in grocery stores and/or supermarkets. Among the main topics covered in this paper are the presentation of the interface (three modalities), the single-handed gesture recognition system (based on a collected database of Ru
34

K, Kavyasree. "Hand Glide: Gesture-Controlled Virtual Mouse with Voice Assistant." International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (2024): 5470–76. http://dx.doi.org/10.22214/ijraset.2024.61178.

Annotation:
Abstract: Hand Glide presents a unified system that uses the MediaPipe library for accurate hand landmark recognition and categorization, enabling smooth integration of gesture-based control (GBC) and voice-controlled assistant (VCA) features. The system allows users to effortlessly translate hand movements into actions such as mouse manipulation, clicking, scrolling, and system parameter adjustments by providing distinct classes for hand gesture recognition (HandRecog) and execution (Controller), as well as a main class orchestrating camera input and gesture control. In addition, Hand Glide a
35

J, Rashmi, Saurav Sahani, and Pulkit Kumar Yadav. "A Review on Advances in Indian Sign Language Recognition: Techniques, Models, and Applications." International Journal for Research in Applied Science and Engineering Technology 12, no. 11 (2024): 2332–38. https://doi.org/10.22214/ijraset.2024.65539.

Annotation:
Abstract: Sign language serves as a crucial medium of communication for individuals with hearing and speech impairments, yet it presents barriers for those unfamiliar with its nuances. Recent advancements in artificial intelligence, computer vision, and deep learning have paved the way for innovative sign language recognition (SLR) systems. This paper provides a comprehensive survey of cutting-edge approaches to real-time sign language recognition, with a specific focus on Indian Sign Language (ISL). Various methodologies, including Convolutional Neural Networks (CNNs), Hidden Markov Models (H
36

Rasamimanana, Nicolas, Florian Kaiser, and Frederic Bevilacqua. "Perspectives on Gesture–Sound Relationships Informed from Acoustic Instrument Studies." Organised Sound 14, no. 2 (2009): 208–16. http://dx.doi.org/10.1017/s1355771809000314.

Annotation:
We present an experimental study on articulation in bowed strings that provides important elements for a discussion about sound synthesis control. The study focuses on bow acceleration profiles and transient noises, measured for different players for the bowing techniques détaché and martelé. We found that the maxima of these profiles are not synchronous, and that the temporal shifts depend on the bowing technique. These results allow us to bring out important mechanisms in sound and gesture articulation. In particular, the results reveal a potential shortcoming of mapping strategies using simple fra
37

Lanzalone, Silvia. "Hidden grids: paths of expressive gesture between instruments, music and dance." Organised Sound 5, no. 1 (2000): 17–26. http://dx.doi.org/10.1017/s1355771800001047.

Annotation:
In his work Contropasso (1998–9) Michelangelo Lupone collaborates with Massimo Moricone in the dance showcase piegapiaga achieving direct interaction between dancers and live electronics performance. The choreography takes advantage of acoustic events as generated by three dancers and further elaborated on via computer by the composer through use of granular algorithms and digital filtering, allowing the construction of the musical events to occur in real time. The live electronics performer changes sound parameters in relation to the dancers' movements by use of the program SDP – Sonorous Dra
38

Ingle, Madhav D., Mahesh S. Suryawanshi, Prajakta A. Shinde, and Sonali T. Waghmare. "Intelligent Hand Sign Detection for Deaf and Mute People: A Multimodal Approach to Enhancing Communication through AI-Driven Gesture Recognition." International Journal of Scientific Research in Engineering and Management 9, no. 3 (2025): 1–9. https://doi.org/10.55041/ijsrem42545.

Full text of the source
Annotation:
Intelligent hand sign detection systems offer a transformative solution for enhancing communication between deaf and mute individuals and the broader community. This paper presents a multimodal approach for hand sign detection that integrates advanced AI-driven gesture recognition techniques. By leveraging a combination of computer vision, deep learning, and sensor technologies, the proposed system is capable of accurately recognizing and interpreting a wide range of hand signs. The approach utilizes Convolutional Neural Networks (CNN) for image-based gesture recognition, while also incorporat
APA, Harvard, Vancouver, ISO and other citation styles
39

Patil, Shraman S. "Real-Time Sign Language & Gesture Recognition for Speech-Impaired Individuals." International Journal for Research in Applied Science and Engineering Technology 13, no. 3 (2025): 3351–56. https://doi.org/10.22214/ijraset.2025.68057.

Full text of the source
Annotation:
This research presents an innovative real-time sign language detection system that enables speech-impaired individuals to communicate more effectively with the broader community. The system utilizes computer vision techniques and deep learning models to recognize hand gestures corresponding to alphabets and converts them into text and speech. By analyzing hand landmarks through MediaPipe and employing a trained neural network for classification, the system allows users to construct words and sentences through a series of gestures, which can then be vocalized using text-to-speech technology. Th
APA, Harvard, Vancouver, ISO and other citation styles
40

Zhou, Yuxuan, Huangxun Chen, Chenyu Huang, and Qian Zhang. "WiAdv." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, no. 2 (2022): 1–25. http://dx.doi.org/10.1145/3534618.

Full text of the source
Annotation:
WiFi-based gesture recognition systems have attracted enormous interest owing to the non-intrusive nature of WiFi signals and the wide adoption of WiFi for communication. Despite performance gains from integrating advanced deep neural network (DNN) classifiers, their security vulnerabilities, which are rooted in the open nature of the wireless medium and the inherent defects (e.g., adversarial attacks) of classifiers, have not been sufficiently investigated. To fill this gap, we aim to study adversarial attacks on DNN-powered WiFi-based gesture recognition to encourage proper countermeasures. We desi
APA, Harvard, Vancouver, ISO and other citation styles
41

Martin, Jean-Claude, Radoslaw Niewiadomski, Laurence Devillers, Stephanie Buisine, and Catherine Pelachaud. "Multimodal Complex Emotions: Gesture Expressivity and Blended Facial Expressions." International Journal of Humanoid Robotics 03, no. 03 (2006): 269–91. http://dx.doi.org/10.1142/s0219843606000825.

Full text of the source
Annotation:
One of the challenges of designing virtual humans is the definition of appropriate models of the relation between realistic emotions and the coordination of behaviors in several modalities. In this paper, we present the annotation, representation and modeling of multimodal visual behaviors occurring during complex emotions. We illustrate our work using a corpus of TV interviews. This corpus has been annotated at several levels of information: communicative acts, emotion labels, and multimodal signs. We have defined a copy-synthesis approach to drive an Embodied Conversational Agent from these
APA, Harvard, Vancouver, ISO and other citation styles
42

Ketabdar, Hamed, Amin Haji-Abolhassani, and Mehran Roshandel. "MagiThings." International Journal of Mobile Human Computer Interaction 5, no. 3 (2013): 23–41. http://dx.doi.org/10.4018/jmhci.2013070102.

Full text of the source
Annotation:
The theory of around-device interaction (ADI) has recently gained a lot of attention in the field of human-computer interaction (HCI). As an alternative to classic data entry methods, such as keypads and touch screens, ADI proposes a touchless user interface that extends beyond the peripheral area of a device. In this paper, the authors propose a new approach for around-mobile-device interaction based on the magnetic field. Our new approach, which we call "MagiThings", takes advantage of the digital compass (a magnetometer) embedded in the new generation of mobile devices such as Ap
APA, Harvard, Vancouver, ISO and other citation styles
43

Thoret, Etienne, Mitsuko Aramaki, Charles Gondre, Sølvi Ystad, and Richard Kronland-Martinet. "Eluding the Physical Constraints in a Nonlinear Interaction Sound Synthesis Model for Gesture Guidance." Applied Sciences 6, no. 7 (2016): 192. http://dx.doi.org/10.3390/app6070192.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
44

Camurri, Antonio, Giovanni De Poli, Anders Friberg, Marc Leman, and Gualtiero Volpe. "The MEGA Project: Analysis and Synthesis of Multisensory Expressive Gesture in Performing Art Applications." Journal of New Music Research 34, no. 1 (2005): 5–21. http://dx.doi.org/10.1080/09298210500123895.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
45

Bouënard, Alexandre, Marcelo M. Wanderley, Sylvie Gibet, and Fabrice Marandola. "Virtual Gesture Control and Synthesis of Music Performances: Qualitative Evaluation of Synthesized Timpani Exercises." Computer Music Journal 35, no. 3 (2011): 57–72. http://dx.doi.org/10.1162/comj_a_00069.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
46

Harrison, Reginald Langford, Stefan Bilbao, James Perry, and Trevor Wishart. "An Environment for Physical Modeling of Articulated Brass Instruments." Computer Music Journal 39, no. 4 (2015): 80–95. http://dx.doi.org/10.1162/comj_a_00332.

Full text of the source
Annotation:
This article presents a synthesis environment for physical modeling of valved brass instrument sounds. Synthesis is performed using finite-difference time-domain methods that allow for flexible simulation of time-varying systems. Users have control over the instrument configuration as well as player parameters, such as mouth pressure, lip dynamics, and valve depressions, which can be varied over the duration of a gesture. This article introduces the model used in the environment, the development of code from prototyping in MATLAB and optimization in C, and the incorporation of the executable f
APA, Harvard, Vancouver, ISO and other citation styles
47

Deinega, Volodymyr. "Influence of Timbral Sound Coloring on the Evolution of the Conductor's Gesture." Часопис Національної музичної академії України ім.П.І.Чайковського, no. 3(60) (September 27, 2023): 85–97. http://dx.doi.org/10.31318/2414-052x.3(60).2023.296801.

Full text of the source
Annotation:
An overview of the professional aspects of conducting as a process of managing a performing ensemble is carried out. The main directions in the development of the conductor's performance technique, from the noisy type to the silent type, are revealed. The dynamics of changes in the forms and methods of conducting are traced, while the priority of gesture as the main means of influencing the ensemble during the performance of a musical piece is preserved. It was found that the conductor controls not only the logic of the development of the musical material, but implements it by using a specific
APA, Harvard, Vancouver, ISO and other citation styles
48

Phukon, Debasish. "A Deep Learning Approach for ASL Recognition and Text-to-Speech Synthesis using CNN." International Journal for Research in Applied Science and Engineering Technology 11, no. 8 (2023): 2135–43. http://dx.doi.org/10.22214/ijraset.2023.55528.

Full text of the source
Annotation:
Sign language is a visual language used by the deaf and hard-of-hearing community to communicate. However, sign language is not universally understood by non-signers, which can create communication barriers for deaf and hard-of-hearing individuals. In this paper, we present a novel application for American Sign Language (ASL) to text-to-speech conversion using deep learning techniques. Our app aims to bridge the communication gap between hearing-impaired individuals who use ASL as their primary mode of communication and individuals who do not understand ASL. The app compri
APA, Harvard, Vancouver, ISO and other citation styles
49

Gong, Linyi. "Advancing Gesture Recognition: Innovations in Self-Powered, Flexible Wearable Devices and Sensory Systems." Highlights in Science, Engineering and Technology 102 (July 11, 2024): 338–45. http://dx.doi.org/10.54097/k2c50p45.

Full text of the source
Annotation:
Wearable biosensors, increasingly integral in a digitally-evolving landscape, offer the benefits of flexibility, portability, safety, and real-time physiological monitoring, while also being environmentally sustainable. Despite significant advancements, the market expansion of these flexible devices is impeded by challenges related to energy availability and power consumption. This review delves into recent developments in self-powered, flexible wearable devices and sensing systems, focusing particularly on the roles of generators, batteries, and functional circuits. The existing obstacles are
APA, Harvard, Vancouver, ISO and other citation styles
50

Nichols, Charles. "The vBow: a virtual violin bow controller for mapping gesture to synthesis with haptic feedback." Organised Sound 7, no. 2 (2002): 215–20. http://dx.doi.org/10.1017/s135577180200211x.

Full text of the source
Annotation:
The vBow, a virtual violin bow musical controller, has been designed to provide the computer musician with most of the gestural freedom of a bow on a violin string. Four cable and servomotor systems allow for four degrees of freedom, including the lateral motion of a bow stroke across a string, the rotational motion of a bow crossing strings, the vertical motion of a bow approaching and pushing into a string, and the longitudinal motion of a bow travelling along the length of a string. Encoders, attached to the shaft of the servomotors, sense the gesture of the performer, through the rotation
APA, Harvard, Vancouver, ISO and other citation styles