Journal articles on the topic 'Gesture generation'

Consult the top 50 journal articles for your research on the topic 'Gesture generation.'

1

Katagami, Daisuke, Yusuke Ikeda, and Katsumi Nitta. "Behavior Generation and Evaluation of Negotiation Agent Based on Negotiation Dialogue Instances." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 7 (2010): 840–51. http://dx.doi.org/10.20965/jaciii.2010.p0840.

Abstract:
This study focuses on gestures in negotiation dialogs. Analyzing the relationship between situations and gestures, we suggest how to enable agents to produce adequate human-like gestures, and we evaluate whether an agent’s gestures can give an impression similar to a human’s. We collected negotiation dialogs to study common human gestures, studied gesture frequency in different situations, and extracted high-frequency gestures, building an agent gesture module based on these characteristics. Using a questionnaire, we evaluated the impressions of gestures by human users and agents, conf
2

Kumar, Nitin, Suraj Prakash Sahu, Jay Prakash Maurya, G. C. Nandi, and Pavan Chakraborty. "Recognizing Gestures for Humanoid Robot Using Proto-Symbol Space." Advanced Materials Research 403-408 (November 2011): 4769–76. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.4769.

Abstract:
This paper describes a non-verbal communication method for developing a gesture-based system using the Mimesis model. The proposed method is applicable to any hand gesture represented by a multi-dimensional signal. The work concentrates mainly on hand gesture recognition and develops a way for humans and humanoid robots to communicate through a gestural medium. Mimesis is the technique of performing human gestures through imitation, recognition and generation. Different gestures are converted into code words through the use of a code book. These code words are then converte
3

Ferstl, Ylva, Michael Neff, and Rachel McDonnell. "Adversarial gesture generation with realistic gesture phasing." Computers & Graphics 89 (June 2020): 117–30. http://dx.doi.org/10.1016/j.cag.2020.04.007.

4

Chen, Shuai, Yoichiro Maeda, and Yasutake Takahashi. "Chaotic Music Generation System Using Music Conductor Gesture." Journal of Advanced Computational Intelligence and Intelligent Informatics 17, no. 2 (2013): 194–200. http://dx.doi.org/10.20965/jaciii.2013.p0194.

Abstract:
In research on interactive music generation, we propose a music generation method in which the computer generates music by recognizing a human music conductor’s gestures. Generated music is tuned by the parameters of a network of chaotic elements, which are determined from the recognized gesture in real time. The music conductor’s hand motions are detected by a Microsoft Kinect. Music theories are embedded in the algorithm and, as a result, the generated music is richer. Furthermore, we constructed the music generation system and performed experiments for generatin
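The "network of chaotic elements" this abstract mentions can be illustrated with a minimal sketch: a ring of diffusively coupled logistic maps. The parameter names (`r`, `eps`) and the gesture-to-parameter mapping are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def coupled_logistic_step(x, r=3.9, eps=0.1):
    """One update of a ring of diffusively coupled logistic maps,
    a simple example of a 'network of chaotic elements'."""
    f = r * x * (1.0 - x)                      # local chaotic dynamics
    neighbors = 0.5 * (np.roll(f, 1) + np.roll(f, -1))
    return (1.0 - eps) * f + eps * neighbors   # blend with neighbors

# In the paper's setting, a recognized conductor gesture would tune
# r and eps in real time, and the evolving state would be mapped to
# musical parameters such as pitch or tempo (hypothetical mapping).
state = np.linspace(0.1, 0.9, 8)               # 8 chaotic elements
for _ in range(100):
    state = coupled_logistic_step(state)
print(state.shape)  # (8,)
```

For `r <= 4` and an initial state in [0, 1], every element stays in [0, 1], so the chaotic trajectory can safely be rescaled to any musical parameter range.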
5

Kirk, Elizabeth, and Carine Lewis. "Gesture Facilitates Children’s Creative Thinking." Psychological Science 28, no. 2 (2016): 225–32. http://dx.doi.org/10.1177/0956797616679183.

Abstract:
Gestures help people think and can help problem solvers generate new ideas. We conducted two experiments exploring the self-oriented function of gesture in a novel domain: creative thinking. In Experiment 1, we explored the relationship between children’s spontaneous gesture production and their ability to generate novel uses for everyday items (alternative-uses task). There was a significant correlation between children’s creative fluency and their gesture production, and the majority of children’s gestures depicted an action on the target object. Restricting children from gesturing did not s
6

Azar, Zeynep, Ad Backus, and Aslı Özyürek. "Language contact does not drive gesture transfer: Heritage speakers maintain language specific gesture patterns in each language." Bilingualism: Language and Cognition 23, no. 2 (2019): 414–28. http://dx.doi.org/10.1017/s136672891900018x.

Abstract:
This paper investigates whether there are changes in gesture rate when speakers of two languages with different gesture rates (Turkish-high gesture; Dutch-low gesture) come into daily contact. We analyzed gestures produced by second-generation heritage speakers of Turkish in the Netherlands in each language, comparing them to monolingual baselines. We did not find differences between bilingual and monolingual speakers, possibly because bilinguals were proficient in both languages and used them frequently – in line with a usage-based approach to language. However, bilinguals produced more deict
7

Xiao, Yang, Zhijun Zhang, Aryel Beck, Junsong Yuan, and Daniel Thalmann. "Human–Robot Interaction by Understanding Upper Body Gestures." Presence: Teleoperators and Virtual Environments 23, no. 2 (2014): 133–54. http://dx.doi.org/10.1162/pres_a_00176.

Abstract:
In this paper, a human–robot interaction system based on a novel combination of sensors is proposed. It allows one person to interact with a humanoid social robot using natural body language. The robot understands the meaning of human upper body gestures and expresses itself by using a combination of body movements, facial expressions, and verbal language. A set of 12 upper body gestures is involved for communication. This set also includes gestures with human–object interactions. The gestures are characterized by head, arm, and hand posture information. The wearable Immersion CyberGlove II is
8

Huenerfauth, Matt. "Spatial, Temporal, and Semantic Models for American Sign Language Generation: Implications for Gesture Generation." International Journal of Semantic Computing 2, no. 1 (2008): 21–45. http://dx.doi.org/10.1142/s1793351x08000336.

Abstract:
Software to generate animations of American Sign Language (ASL) has important accessibility benefits for the significant number of deaf adults with low levels of written language literacy. We have implemented a prototype software system to generate an important subset of ASL phenomena called "classifier predicates," complex and spatially descriptive types of sentences. The output of this prototype system has been evaluated by native ASL signers. Our generator includes several novel models of 3D space, spatial semantics, and temporal coordination motivated by linguistic properties of ASL. These
9

Pusik Park, Rakhimov Rustam Igorevich, Jongchan Choi, Dugki Min, and Jongho Yoon. "Angular Depth Map Generation for Gesture Recognition." Journal of Next Generation Information Technology 3, no. 2 (2012): 1–8. http://dx.doi.org/10.4156/jnit.vol3.issue2.1.

10

Salem, Maha, Stefan Kopp, Ipke Wachsmuth, Katharina Rohlfing, and Frank Joublin. "Generation and Evaluation of Communicative Robot Gesture." International Journal of Social Robotics 4, no. 2 (2012): 201–17. http://dx.doi.org/10.1007/s12369-011-0124-9.

11

Aswad, Fadwa El, Gilde Vanel Tchane Djogdom, Martin J. D. Otis, Johannes C. Ayena, and Ramy Meziane. "Image Generation for 2D-CNN Using Time-Series Signal Features from Foot Gesture Applied to Select Cobot Operating Mode." Sensors 21, no. 17 (2021): 5743. http://dx.doi.org/10.3390/s21175743.

Abstract:
Advances in robotics can reduce the burden of manufacturing tasks on workers; for example, a cobot could serve as a “third arm” during assembly tasks. This raises the need for new, intuitive control modalities. This paper presents a foot-gesture approach centered on robot control constraints to switch between four operating modalities. The control scheme is based on raw data acquired by an instrumented insole on the wearer’s foot, composed of an inertial measurement unit (IMU) and four force sensors. Firstly, a gesture dictionary was pr
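The paper's title describes generating images from time-series signals to feed a 2D-CNN. A generic way to do that, sketched below, is to render a window of samples as a fixed-size image via per-channel normalization and resampling; the channel layout and sizes here are assumptions, not the paper's exact encoding.

```python
import numpy as np

def window_to_image(window, out_size=(32, 32)):
    """Render a (samples, channels) sensor window as a fixed-size 2D
    'image': per-channel min-max normalization, then nearest-neighbor
    resampling to out_size. A generic sketch, not the paper's encoding."""
    w = np.asarray(window, dtype=float)
    mins, maxs = w.min(axis=0), w.max(axis=0)
    w = (w - mins) / np.where(maxs > mins, maxs - mins, 1.0)
    rows = np.linspace(0, w.shape[0] - 1, out_size[0]).round().astype(int)
    cols = np.linspace(0, w.shape[1] - 1, out_size[1]).round().astype(int)
    return w[np.ix_(rows, cols)]

# Hypothetical example: 50 samples of 10 channels
# (6 IMU axes + 4 force sensors, matching the insole described above).
img = window_to_image(np.random.randn(50, 10))
print(img.shape)  # (32, 32)
```

The resulting array can be treated as a single-channel image batch element for any standard 2D-CNN.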
12

de Nooijer, Jacqueline A., Tamara van Gog, Fred Paas, and Rolf A. Zwaan. "Words in action." Gesture 14, no. 1 (2014): 46–69. http://dx.doi.org/10.1075/gest.14.1.03noo.

Abstract:
Research on embodied cognition has shown that action and language are closely intertwined. The present study seeks to exploit this relationship by systematically investigating whether motor activation improves eight- to nine-year-old children’s learning of vocabulary in their first language. In a within-subjects paradigm, 49 children learned novel object manipulation, locomotion and abstract verbs via a verbal definition alone and in combination with gesture observation, imitation, or generation (i.e., enactment). Results showed that learning of locomotion verbs significantly improved thr
13

Tanaka, Yoshifumi. "The Effect of Gesture on Idea Generation Task." Proceedings of the Annual Convention of the Japanese Psychological Association 78 (September 10, 2014): 3AM—1–098–3AM—1–098. http://dx.doi.org/10.4992/pacjpa.78.0_3am-1-098.

14

Elms, J. T. E. "Gesture-to-Speech Mismatch in the Construction of Problem Solving Insight." Annual Meeting of the Berkeley Linguistics Society 36, no. 1 (2010): 101. http://dx.doi.org/10.3765/bls.v36i1.3905.

Abstract:
In lieu of an abstract, here is a brief excerpt: The present report analyzes a case of problem-solving insight achieved in a dyadic problem-solving discourse task. The task required two participants to work together to solve a murder mystery based on a story by Raymond Chandler. One participant appeared to use propositional speech and gestural simulation as checks on each other while he hypothesized alternative interpretations for the actions of a murder suspect. Each hypothetical scenario began with a gestural metaphor for a named kinship relationship between suspect and murder victim. Irrepar
15

K J, Monika. "Conversation Engine for Deaf and Dumb." International Journal for Research in Applied Science and Engineering Technology 9, no. VII (2021): 2271–75. http://dx.doi.org/10.22214/ijraset.2021.36841.

Abstract:
Deaf and hard-of-hearing people use sign language to exchange information within their own community and with others. Computer recognition of sign language involves sign gesture acquisition and text/speech generation. Sign gestures are classified as static and dynamic; both kinds of recognition system matter to the community, but static gesture recognition is less complicated than dynamic gesture recognition. Inability to talk is considered a disability. To speak with others, people with this disability use different modes; there are a number of m
16

Kaneko, Naoshi, Kenta Takeuchi, Dai Hasegawa, Shinichi Shirakawa, Hiroshi Sakuta, and Kazuhiko Sumi. "Speech-to-Gesture Generation Using Bi-Directional LSTM Network." Transactions of the Japanese Society for Artificial Intelligence 34, no. 6 (2019): C—J41_1–12. http://dx.doi.org/10.1527/tjsai.c-j41.

17

Bagawade, Ramdas Pandurang, Pournima Akash Chavan, and Kajal Kantilal Jadhav. "Multi-touch Gesture Generation and Recognition Techniques: A Study." IJARCCE 6, no. 3 (2017): 5–9. http://dx.doi.org/10.17148/ijarcce.2017.6302.

18

Burgbacher, Ulrich, and Klaus Hinrichs. "Synthetic Word Gesture Generation for Stroke-Based Virtual Keyboards." IEEE Transactions on Human-Machine Systems 47, no. 2 (2017): 221–34. http://dx.doi.org/10.1109/thms.2016.2599487.

19

Ibánez, José de Jesús Luis González, and Alf Inge Wang. "Learning Recycling from Playing a Kinect Game." International Journal of Game-Based Learning 5, no. 3 (2015): 25–44. http://dx.doi.org/10.4018/ijgbl.2015070103.

Abstract:
The emergence of gesture-based computing and inexpensive gesture recognition technology such as the Kinect has opened doors for a new generation of educational games. Gesture-based interfaces make it possible to provide user interfaces that are more natural and closer to the tasks being carried out, helping students who learn best through movement (compared to audio and vision). For younger students, motion interfaces can stimulate the development of motor skills and let students be physically active during the school day. In this article, an evaluation is presented of a Kinect educatio
20

Mukherjee, Saptarshi, and Subhendu Garai. "Next Generation Television." International Journal of Students' Research in Technology & Management 4, no. 1 (2016): 21–23. http://dx.doi.org/10.18510/ijsrtm.2016.416.

Abstract:
We are blessed with UHD 4K curved TVs, Chromecast, 3D TV and various breakthrough services. A story with no illustration leads us to imagination; eventually we end up penetrating to a larger depth. Still, television = idiot box. We portray here big and small ideas on how our next-generation TV can cope with injustice, corruption, and blues, and provide advertisement pleasure, cartoon creation, gesture feedback, data storage and awareness, disaster help, festival togetherness, past patterns, ingredient love, and magnificent features of entertainment and rejuvenation coupled with a dictionary effect.
21

Kopp, Stefan, Kirsten Bergmann, and Ipke Wachsmuth. "Multimodal Communication from Multimodal Thinking — Towards an Integrated Model of Speech and Gesture Production." International Journal of Semantic Computing 2, no. 1 (2008): 115–36. http://dx.doi.org/10.1142/s1793351x08000361.

Abstract:
A computational model for the automatic production of combined speech and iconic gesture is presented. The generation of multimodal behavior is grounded in processes of multimodal thinking, in which a propositional representation interacts and interfaces with an imagistic representation of visuo-spatial imagery. An integrated architecture for this is described, in which the planning of content and the planning of form across both modalities proceed in an interactive manner. Results from an empirical study are reported that inform the on-the-spot formation of gestures.
22

Huang, Jinmiao, Prakhar Jaiswal, and Rahul Rai. "Gesture-based system for next generation natural and intuitive interfaces." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 33, no. 1 (2018): 54–68. http://dx.doi.org/10.1017/s0890060418000045.

Abstract:
We present a novel and trainable gesture-based system for next-generation intelligent interfaces. The system requires a non-contact depth-sensing device such as an RGB-D (color and depth) camera for user input. The camera records the user's static hand pose and the dynamic motion trajectory of the palm center. Both static pose and dynamic trajectory are used independently to provide commands to the interface. The sketches/symbols formed by the palm center trajectory are recognized by a Support Vector Machine classifier. The sketch/symbol recognition process is based on a set of geometrical and statistic
23

Cui, Runpeng, Zhong Cao, Weishen Pan, Changshui Zhang, and Jianqiang Wang. "Deep Gesture Video Generation With Learning on Regions of Interest." IEEE Transactions on Multimedia 22, no. 10 (2020): 2551–63. http://dx.doi.org/10.1109/tmm.2019.2960700.

24

Poggi, Isabella. "Iconicity in different types of gestures." Dimensions of gesture 8, no. 1 (2008): 45–61. http://dx.doi.org/10.1075/gest.8.1.05pog.

Abstract:
The paper presents a framework for classifying gestures in terms of different parameters, and shows that the parameter of iconicity cuts across that of cognitive construction, which distinguishes codified gestures — those represented in memory as stable signal–meaning pairs — from creative ones — those invented on the spot on the basis of a few generative rules. While creative gestures are necessarily iconic, because they can be understood only thanks to their iconicity, codified gestures can be iconic too. A model for the generation of iconic gestures is presented, according to which, to crea
25

HACKEL, MATTHIAS, and STEFAN SCHWOPE. "A HUMANOID INTERACTION ROBOT FOR INFORMATION, NEGOTIATION AND ENTERTAINMENT USE." International Journal of Humanoid Robotics 01, no. 03 (2004): 551–63. http://dx.doi.org/10.1142/s0219843604000198.

Abstract:
This paper presents the design, implementation and application of a humanoid interaction robot (H10). In interdisciplinary cooperation, H10 was developed as a case study to operate at points of sale, information desks and demonstrations. If the user's speech input matches an entry in the adaptive database, H10 reacts with a suitable answer. Synchronously with the speech generation, face animation and pre-defined gestures of the hands and arms are triggered by the core of the system. The principles of the speech, gesture and physical interaction interface as well as some fundamental mechanic a
26

Chen, Kun Wei, Xing Guo, and Jian Guo Wu. "Gesture Recognition System Based on Wavelet Moment." Applied Mechanics and Materials 401-403 (September 2013): 1377–80. http://dx.doi.org/10.4028/www.scientific.net/amm.401-403.1377.

Abstract:
Vision-based gesture recognition is a key technique for achieving a new generation of human-computer interaction. As few text-input search systems based on gesture recognition have been developed, we build on existing gesture recognition techniques, using gestures corresponding to Chinese letters and numbers as input and using a Microsoft Kinect to obtain depth images for hand gesture segmentation. First, the edge of the gesture is extracted by the Canny algorithm, and then features are extracted based on wavelet moments. Finally the gesture letters are obtained. Achieved the text input syst
27

Hashimoto, Shuji, and Hideyuki Sawada. "A Grasping Device to Sense Hand Gesture for Expressive Sound Generation." Journal of New Music Research 34, no. 1 (2005): 115–23. http://dx.doi.org/10.1080/09298210500124232.

28

Dessì, Stefano, and Lucio Davide Spano. "DG3: Exploiting Gesture Declarative Models for Sample Generation and Online Recognition." Proceedings of the ACM on Human-Computer Interaction 4, EICS (2020): 1–21. http://dx.doi.org/10.1145/3397870.

29

Ma, Rui, Zhendong Zhang, and Enqing Chen. "Human Motion Gesture Recognition Based on Computer Vision." Complexity 2021 (February 10, 2021): 1–11. http://dx.doi.org/10.1155/2021/6679746.

Abstract:
Human motion gesture recognition is among the most challenging research directions in the field of computer vision, and it is widely used in human-computer interaction, intelligent monitoring, virtual reality, human behaviour analysis, and other fields. This paper proposes a new type of deep convolutional generative adversarial network to recognize human motion pose. The method uses a deep convolutional stacked hourglass network to accurately extract the locations of key joint points in the image. The generation and identification parts of the network are designed to encode the first hierarchy (paren
30

Wang, Rui Hu, and Bin Fang. "Emotion Fusion Recognition for Intelligent Surveillance with PSO-CSVM." Advanced Materials Research 225-226 (April 2011): 51–56. http://dx.doi.org/10.4028/www.scientific.net/amr.225-226.51.

Abstract:
The next generation of intelligent surveillance systems should be able to recognize humans' spontaneous emotional states automatically. Compared to speaker recognition, sensor signal analysis, fingerprint or iris recognition, etc., facial expression and body gesture processing are the two main non-intrusive vision modalities, providing potential action information for video surveillance. In our work, we consider only one kind of facial expression, anxiety, together with gesture motion. Firstly, facial expression and body gesture features are extracted. A Particle Swarm Optimization algorithm is used to sele
31

Hiramatsu, Masami, Yasushi Yagi, Koichi Hashimoto, and Masahiko Yachida. "The Robot Gesture Generation Approach Based on Appearance from the User Viewpoint." Journal of the Robotics Society of Japan 21, no. 3 (2003): 265–72. http://dx.doi.org/10.7210/jrsj.21.265.

32

Ishi, Carlos T., Daichi Machiyashiki, Ryusuke Mikata, and Hiroshi Ishiguro. "A Speech-Driven Hand Gesture Generation Method and Evaluation in Android Robots." IEEE Robotics and Automation Letters 3, no. 4 (2018): 3757–64. http://dx.doi.org/10.1109/lra.2018.2856281.

33

Pinto, Raimundo F., Carlos D. B. Borges, Antônio M. A. Almeida, and Iális C. Paula. "Static Hand Gesture Recognition Based on Convolutional Neural Networks." Journal of Electrical and Computer Engineering 2019 (October 10, 2019): 1–12. http://dx.doi.org/10.1155/2019/4167890.

Abstract:
This paper proposes a gesture recognition method using convolutional neural networks. The procedure involves the application of morphological filters, contour generation, polygonal approximation, and segmentation during preprocessing, which contribute to better feature extraction. Training and testing are performed with different convolutional neural networks, compared with architectures known in the literature and with other known methodologies. All calculated metrics and convergence graphs obtained during training are analyzed and discussed to validate the robustness of the propose
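The "polygonal approximation" preprocessing step this abstract names is commonly implemented with the Ramer-Douglas-Peucker algorithm, sketched below in plain NumPy; the paper's exact algorithm and tolerance are not specified here, so treat this as an illustration of the technique.

```python
import numpy as np

def rdp(points, eps=1.0):
    """Ramer-Douglas-Peucker simplification of a polyline/contour:
    keep a point only if it lies farther than eps from the chord
    joining the segment's endpoints, recursing on both halves."""
    pts = np.asarray(points, dtype=float)
    a, b = pts[0], pts[-1]
    ab = b - a
    norm = np.hypot(ab[0], ab[1])
    if norm == 0.0:
        norm = 1.0
    # perpendicular distance of every point to the chord a-b
    d = np.abs(ab[0] * (pts[:, 1] - a[1]) - ab[1] * (pts[:, 0] - a[0])) / norm
    i = int(d.argmax())
    if d[i] > eps:                      # keep the farthest point, recurse
        left = rdp(pts[: i + 1], eps)
        right = rdp(pts[i:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([a, b])            # chord is a good enough fit

# A nearly straight, noisy polyline collapses to its two endpoints.
line = np.column_stack([np.arange(10.0), 0.1 * np.random.rand(10)])
simplified = rdp(line, eps=0.5)
print(len(simplified))  # 2
```

Reducing a hand contour to a few dominant vertices this way makes the subsequent feature extraction less sensitive to pixel-level noise.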
34

Patel, Sunil, and Ramji Makwana. "Connectionist Temporal Classification Model for Dynamic Hand Gesture Recognition using RGB and Optical flow Data." International Arab Journal of Information Technology 17, no. 4 (2020): 497–506. http://dx.doi.org/10.34028/iajit/17/4/8.

Abstract:
Automatic classification of dynamic hand gestures is challenging due to the large diversity across gesture classes, low resolution, and the fine finger movements involved. Because of these challenges, many researchers focus on this area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal Red, Green, Blue, Depth (RGBD) and optical flow data and pass
35

Breitfuss, Werner, Helmut Prendinger, and Mitsuru Ishizuka. "Automatic Generation of Gaze and Gestures for Dialogues between Embodied Conversational Agents." International Journal of Semantic Computing 2, no. 1 (2008): 71–90. http://dx.doi.org/10.1142/s1793351x0800035x.

Abstract:
In this paper we introduce a system that automatically adds different types of non-verbal behavior to a given dialogue script between two virtual embodied agents. It allows us to transform a dialogue in text format into an agent behavior script enriched by eye gaze and conversational gesture behavior. The agents' gaze behavior is informed by theories of human face-to-face gaze behavior. Gestures are generated based on the analysis of linguistic and contextual information of the input text. The resulting annotated dialogue script is then transformed into the Multimodal Presentation Markup Langu
36

Ghuse, Namrata. "Sign Language Recognition using Smart Glove." International Journal for Research in Applied Science and Engineering Technology 9, no. VII (2021): 328–33. http://dx.doi.org/10.22214/ijraset.2021.36347.

Abstract:
Sign language recognition through technology has been a neglected idea, even though an enormous community could profit from it. More than 3% of the world's population cannot speak or hear properly. Hand-gesture-based communication lets hearing- and speech-impaired people communicate with each other and with the rest of the world's population. Most people, however, never become familiar with sign-language-based communication. This is one reason there is a gap bet
37

Ghuse, Namrata. "Sign Language Recognition using Smart Glove." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (2021): 789–92. http://dx.doi.org/10.22214/ijraset.2021.36465.

Abstract:
Sign language recognition through technology has been a neglected idea, even though an enormous community could profit from it. More than 3% of the world's population cannot speak or hear properly. Hand-gesture-based communication lets hearing- and speech-impaired people communicate with each other and with the rest of the world's population. Most people, however, never become familiar with sign-language-based communication. This is one reason there is a gap bet
38

Watanabe, Takahiro, Kuniteru Sakakibara, and Masahiko Yachida. "Real Time Gesture Recognition in a Real Complex Environment Using Interactive Model Generation." IEEJ Transactions on Industry Applications 119, no. 1 (1999): 21–29. http://dx.doi.org/10.1541/ieejias.119.21.

39

Li, Jiajun, Jianguo Tao, Liang Ding, et al. "A new iterative synthetic data generation method for CNN based stroke gesture recognition." Multimedia Tools and Applications 77, no. 13 (2017): 17181–205. http://dx.doi.org/10.1007/s11042-017-5285-6.

40

Yoon, Youngwoo, Bok Cha, Joo-Haeng Lee, et al. "Speech gesture generation from the trimodal context of text, audio, and speaker identity." ACM Transactions on Graphics 39, no. 6 (2020): 1–16. http://dx.doi.org/10.1145/3414685.3417838.

41

Kang, Jinsheng, Kang Zhong, Shengfeng Qin, Hongan Wang, and David Wright. "Instant 3D design concept generation and visualization by real-time hand gesture recognition." Computers in Industry 64, no. 7 (2013): 785–97. http://dx.doi.org/10.1016/j.compind.2013.04.012.

42

Rojc, Matej, Zdravko Kačič, Marko Presker, and Izidor Mlakar. "TTS-driven Embodied Conversation Avatar for UMB-SmartTV." International Journal of Computers and Communications 15 (April 14, 2021): 1–7. http://dx.doi.org/10.46300/91013.2021.15.1.

Abstract:
When human-TV interaction is performed only by remote controllers and mobile devices, the interaction tends to be mechanical, dreary and uninformative. To achieve more advanced interaction, closer to human-human communication, we introduce virtual agent technology as a feedback interface. Verbal and co-verbal gestures are linked through complex mental processes, and although they represent different sides of the same mental process, their formulations are quite different: verbal information is bound by rules and grammar, whereas gestures are influenced by emotions, personality, etc. In t
43

Saez-Mingorance, Borja, Antonio Escobar-Molero, Javier Mendez-Gomez, Encarnacion Castillo-Morales, and Diego P. Morales-Santos. "Object Positioning Algorithm Based on Multidimensional Scaling and Optimization for Synthetic Gesture Data Generation." Sensors 21, no. 17 (2021): 5923. http://dx.doi.org/10.3390/s21175923.

Abstract:
This work studies the feasibility of a novel two-step algorithm for infrastructure and object positioning using pairwise distances. The proposal is based on two optimization algorithms, Scaling-by-Majorizing-a-Complicated-Function (SMACOF) and Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS). A qualitative evaluation of these algorithms is performed for 3D positioning. As the final stage, smoothing filters are applied to estimate the trajectory from the previously obtained positions. This approach can also be used as a synthetic gesture data generator framework. This framework is ind
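Positioning from pairwise distances, the core problem this abstract describes, can be demonstrated with classical multidimensional scaling; this is a minimal stand-in for the paper's two-step SMACOF + L-BFGS optimization, not its implementation.

```python
import numpy as np

def classical_mds(D, dim=3):
    """Recover point coordinates (up to rotation/translation/reflection)
    from a matrix of pairwise Euclidean distances via classical MDS:
    double-center the squared distances, then eigendecompose."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:dim]           # keep the largest `dim`
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Sanity check: distances computed from the recovered 3D coordinates
# match the input pairwise distances.
pts = np.random.rand(5, 3)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
rec = classical_mds(D, dim=3)
D2 = np.linalg.norm(rec[:, None] - rec[None, :], axis=-1)
print(np.allclose(D, D2))  # True
```

Classical MDS is exact for noise-free Euclidean distances; iterative schemes such as SMACOF or L-BFGS over a stress function become preferable when the measured distances are noisy or incomplete.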
44

Nakstad, Frederik H., Hironori Washizaki, and Yoshiaki Fukazawa. "Finding and Emulating Keyboard, Mouse, and Touch Interactions and Gestures while Crawling RIAs." International Journal of Software Engineering and Knowledge Engineering 25, no. 09n10 (2015): 1777–82. http://dx.doi.org/10.1142/s0218194015710163.

Abstract:
Existing techniques for crawling Javascript-heavy Rich Internet Applications tend to ignore user interactions beyond mouse clicking, and therefore often fail to consider potential mouse, keyboard and touch interactions. We propose a new technique for automatically finding and exercising such interactions by analyzing and exercising event handlers registered in the DOM. A basic form of gesture emulation is employed to find states accessible via swiping and tapping. Testing the tool against 6 well-known gesture libraries and 5 actual RIAs, we find that the technique discovers many states and tra
45

Thakur, Amrita, Pujan Budhathoki, Sarmila Upreti, Shirish Shrestha, and Subarna Shakya. "Real Time Sign Language Recognition and Speech Generation." Journal of Innovative Image Processing 2, no. 2 (2020): 65–76. http://dx.doi.org/10.36548/jiip.2020.2.001.

Abstract:
Sign Language is the method of communication of deaf and dumb people all over the world. However, it has always been a difficulty in communication between a verbal impaired person and a normal person. Sign Language Recognition is a breakthrough for helping deaf-mute people to communicate with others. The commercialization of an economical and accurate recognition system is today’s concern of researchers all over the world. Thus, sign language recognition systems based on Image processing and neural networks are preferred over gadget system as they are more accurate and easier to make. The aim
46

Jones, Kim. "American Modernism: Reimagining Martha Graham's Lost Imperial Gesture (1935)." Dance Research Journal 47, no. 3 (2015): 51–69. http://dx.doi.org/10.1017/s0149767715000352.

Full text
Abstract:
This article explores the process of reimagining Martha Graham's 1935 “lost” work, Imperial Gesture, into a complete work for performance. The solo was last performed by Graham in 1938 and constitutes the first political solo of her career. With no musical score, no notation score, and scant archival evidence, Graham dancer, régisseur, and contemporary choreographer Kim Jones pieced together the fragments left behind. Beginning in 2011, Jones assembled a team of artists in order to reimagine Imperial Gesture for the Martha Graham Dance Company. This article discusses how Jones found primary an
47

Williamson, Rebecca, Yu Zhang, Bruce Mehler, and Ying Wang. "Challenges and Opportunities in Developing Next Generation In-Vehicle HMI Systems." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (2019): 2115–16. http://dx.doi.org/10.1177/1071181319631516.

Full text
Abstract:
The next generation of automotive human machine interface (HMI) systems is expected to be heavily dependent upon artificial intelligence: from autonomous driving to speech assistance, from gesture- and touch-enabled interfaces to web and mobile integration. Smooth, safe, and user-friendly interaction between the driver and the vehicle is a key to winning market share. This panel aims to discuss challenges and opportunities for the next generation of automotive HMI from the perspective of human factors and user behavior. Panelists from industry and academia will offer their unique perspectiv
48

Clynes, Manfred. "Methodology in Sentographic Measurement of Motor Expression of Emotion: Two-Dimensional Freedom of Gesture Essential." Perceptual and Motor Skills 68, no. 3 (1989): 779–83. http://dx.doi.org/10.2466/pms.1989.68.3.779.

Full text
Abstract:
Measures of motor expression of specific emotions recently reported by Trussoni, et al. in 1988 in this journal used large one-dimensional displacement (7 cm or more, vertical) as a measure, rather than pressure free in two dimensions, as in the sentographic measurement of dynamic form used by Clynes. Being forced to produce such one-dimensional movement, however, strongly inhibits emotional expression. Additional problems of using displacement are discussed, including resetting. Also, to obtain adequate emotion generation experimentally, it is desirable to use the emotion-generating properties of a flo
49

Lin, Xu, and Gao Wen. "Human-Computer Chinese Sign Language Interaction System." International Journal of Virtual Reality 4, no. 3 (2000): 82–92. http://dx.doi.org/10.20870/ijvr.2000.4.3.2651.

Full text
Abstract:
The generation and recognition of body language are key technologies of VR. Sign language is a visual-gestural language mainly used by hearing-impaired people. In this paper, gesture and facial expression models are created using computer graphics and used to synthesize Chinese Sign Language (CSL), and from them a human-computer CSL interaction system is implemented. Using a system combining CSL synthesis and CSL recognition subsystems, hearing-impaired people with data-gloves can pantomime CSL, which can then be displayed on the computer screen in real time and translated into Chinese text. Hea
50

Alvarez-Lopez, Fernando, Marcelo Fabián Maina, and Francesc Saigí-Rubió. "Use of Commercial Off-The-Shelf Devices for the Detection of Manual Gestures in Surgery: Systematic Literature Review." Journal of Medical Internet Research 21, no. 5 (2019): e11925. http://dx.doi.org/10.2196/11925.

Full text
Abstract:
Background The increasingly pervasive presence of technology in the operating room raises the need to study the interaction between the surgeon and the computer system. A new generation of tools known as commercial off-the-shelf (COTS) devices enabling touchless gesture-based human-computer interaction is currently being explored as a solution in surgical environments. Objective The aim of this systematic literature review was to provide an account of the state of the art of COTS devices in the detection of manual gestures in surgery and to identify their use as a simulation tool for motor skills