
Journal articles on the topic 'Head Gestures'


Consult the top 50 journal articles for your research on the topic 'Head Gestures.'


1

Bishop, Laura, and Werner Goebl. "Beating time: How ensemble musicians’ cueing gestures communicate beat position and tempo." Psychology of Music 46, no. 1 (2017): 84–106. http://dx.doi.org/10.1177/0305735617702971.

Full text
Abstract:
Ensemble musicians typically exchange visual cues to coordinate piece entrances. “Cueing-in” gestures indicate when to begin playing and at what tempo. This study investigated how timing information is encoded in musicians’ cueing-in gestures. Gesture acceleration patterns were expected to indicate beat position, while gesture periodicity, duration, and peak gesture velocity were expected to indicate tempo. Same-instrument ensembles (e.g., piano–piano) were expected to synchronize more successfully than mixed-instrument ensembles (e.g., piano–violin). Duos performed short passages as their hea
2

Prieur, Jacques, Stéphanie Barbu, and Catherine Blois-Heulin. "Assessment and analysis of human laterality for manipulation and communication using the Rennes Laterality Questionnaire." Royal Society Open Science 4, no. 8 (2017): 170035. http://dx.doi.org/10.1098/rsos.170035.

Full text
Abstract:
Despite significant scientific advances, the nature of the left-hemispheric systems involved in language (speech and gesture) and manual actions is still unclear. To date, investigations of human laterality focused mainly on non-communication functions. Although gestural laterality data have been published for infants and children, relatively little is known about laterality of human gestural communication. This study investigated human laterality in depth considering non-communication manipulation actions and various gesture types involving hands, feet, face and ears. We constructed an online
3

Gruber, James, Jeanette King, Jen Hay, and Lucy Johnston. "The hands, head, and brow." Gesture 15, no. 1 (2016): 1–36. http://dx.doi.org/10.1075/gest.15.1.01gru.

Full text
Abstract:
This paper examines the speech-accompanying gesture and other kinesic behaviour of bilingual English-Māori and monolingual English speakers in New Zealand. Physical expression has long been regarded as a key component of Māori artistic and spoken performance, as well as of personal interactions. This study asks (1) whether there are gestures more common to or exclusively employed by the Māori population of New Zealand and (2) whether their frequency and form are influenced by speaking Māori. More generally, the study considers the effect of different languages on gesture within the same speaker. Four biling
4

Komang Somawirata, I., and Fitri Utaminingrum. "Smart wheelchair controlled by head gesture based on vision." Journal of Physics: Conference Series 2497, no. 1 (2023): 012011. http://dx.doi.org/10.1088/1742-6596/2497/1/012011.

Full text
Abstract:
Head-gesture recognition has been developed using a variety of devices, most of which contain a sensor, such as a gyroscope or an accelerometer, for determining the direction and magnitude of movement. This paper explains how to control a smart wheelchair using head-gesture recognition based on computer vision. Using the Haar cascade algorithm to determine the position of the face and nose, the direction of the head gesture can be determined easily. We classify head gestures into four categories: look down, look up/center, turn right, and turn left. The four gesture informa
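The classification step this abstract describes can be sketched once a face and nose bounding box are available (the paper obtains them with Haar cascades). The box format and thresholds below are illustrative assumptions, not values from the paper:

```python
def classify_head_gesture(face_box, nose_box, h_thresh=0.15, v_thresh=0.15):
    """Map the nose position relative to the face centre to one of the four
    head-gesture classes named in the abstract. Boxes are (x, y, w, h);
    thresholds are fractions of face size, chosen here for illustration."""
    fx, fy, fw, fh = face_box
    nx, ny, nw, nh = nose_box
    # Normalised offset of the nose centre from the face centre.
    dx = ((nx + nw / 2) - (fx + fw / 2)) / fw
    dy = ((ny + nh / 2) - (fy + fh / 2)) / fh
    if dx > h_thresh:
        return "turn right"
    if dx < -h_thresh:
        return "turn left"
    if dy > v_thresh:
        return "look down"
    return "look up/center"
```

A wheelchair controller would poll this decision every frame and debounce it over several frames before issuing a motion command.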
5

Bross, Fabian. "Why do we shake our heads?" Gesture 19, no. 2-3 (2020): 269–98. http://dx.doi.org/10.1075/gest.17001.bro.

Full text
Abstract:
This article discusses several arguments in favor of the hypothesis that the headshake as a gesture for negation has its origins in early childhood experiences. It elaborates on Charles Darwin’s observation that children inevitably shake their heads in order to stop food intake when sated, thereby establishing a connection between rejection and the head gesture. It is argued that later in life the semantics of the headshake extends from rejection to negation – just as it can be observed in the development of spoken language negation. While Darwin’s hypothesis can hardly be tested dire
6

Brown, Amanda, and Masaaki Kamiya. "Gesture in contexts of scopal ambiguity: Negation and quantification in English." Applied Psycholinguistics 40, no. 05 (2019): 1141–72. http://dx.doi.org/10.1017/s014271641900016x.

Full text
Abstract:
Gestures can play a facilitative role in the interpretation of structural ambiguities (Guellaï, Langus, & Nespor, 2014; Prieto, Borràs-Comes, Tubau, & Espinal, 2013; Tubau, González-Fuente, Prieto, & Espinal, 2015) and are associated with spoken expression of negation (Calbris, 2011; Harrison, 2014a; Kendon, 2002, 2004). This study examines gestural forms and timing patterns with specific interpretations intended by speakers in a context of negation in English where the presence of quantification (all/most/many) yields scope ambiguities, for example, All the st
7

Oben, Bert, and Geert Brône. "What you see is what you do: on the relationship between gaze and gesture in multimodal alignment." Language and Cognition 7, no. 4 (2015): 546–62. http://dx.doi.org/10.1017/langcog.2015.22.

Full text
Abstract:
Interactive language use inherently involves a process of coordination, which often leads to matching behaviour between interlocutors in different semiotic channels. We study this process of interactive alignment from a multimodal perspective: using data from head-mounted eye-trackers in a corpus of face-to-face conversations, we measure what effect gaze fixations by speakers (on their own gestures, condition 1) and fixations by interlocutors (on the gestures by those speakers, condition 2) have on subsequent gesture production by those interlocutors. The results show there is a signi
8

Bankar, Rushikesh Tukaram, and Suresh Salankar. "The Comparative Analysis of a Vision Based HGR System Used for Handicapped People." European Journal of Engineering Research and Science 4, no. 10 (2019): 52–54. http://dx.doi.org/10.24018/ejers.2019.4.10.1509.

Full text
Abstract:
Object tracking is critical to visual/video surveillance, activity analysis, and gesture recognition. The major difficulties in visual tracking are varying environmental conditions, illumination changes, occlusion, and appearance changes. This paper discusses a comparative analysis of the different systems used to recognize head gestures under different environmental conditions. The existing algorithms for recognizing head gestures have limitations; in particular, they cannot work under outdoor environmental conditions. The traditional
9

Bankar, Rushikesh Tukaram, and Suresh Salankar. "Comparative Analysis of a Vision Based HGR System Used for Handicapped People." European Journal of Engineering and Technology Research 4, no. 10 (2019): 52–54. http://dx.doi.org/10.24018/ejeng.2019.4.10.1509.

Full text
Abstract:
Object tracking is critical to visual/video surveillance, activity analysis, and gesture recognition. The major difficulties in visual tracking are varying environmental conditions, illumination changes, occlusion, and appearance changes. This paper discusses a comparative analysis of the different systems used to recognize head gestures under different environmental conditions. The existing algorithms for recognizing head gestures have limitations; in particular, they cannot work under outdoor environmental conditions. The traditional
10

Samad, Mariwan Asaad, and Nawzad Anwer Omar. "Speech Act Analysis for Head Movement and Gesture." Journal of University of Raparin 8, no. 4 (2021): 661–90. http://dx.doi.org/10.26750/vol(8).no(4).paper29.

Full text
Abstract:
This research, entitled "Speech Act Analysis for Head Movement and Gesture", attempts to analyze the movements and gestures of one part of the human body, the head, according to the conditions and rules of speech act theory. The research consists of an introduction and two parts. The first part is devoted to speech act theory, highlighting the history of the theory and diagnosing its most important features, with a number of classifications for the main parts of this theory. The second part is a practical part, which includes a
11

Xiao, Yang, Zhijun Zhang, Aryel Beck, Junsong Yuan, and Daniel Thalmann. "Human–Robot Interaction by Understanding Upper Body Gestures." Presence: Teleoperators and Virtual Environments 23, no. 2 (2014): 133–54. http://dx.doi.org/10.1162/pres_a_00176.

Full text
Abstract:
In this paper, a human–robot interaction system based on a novel combination of sensors is proposed. It allows one person to interact with a humanoid social robot using natural body language. The robot understands the meaning of human upper body gestures and expresses itself by using a combination of body movements, facial expressions, and verbal language. A set of 12 upper body gestures is involved for communication. This set also includes gestures with human–object interactions. The gestures are characterized by head, arm, and hand posture information. The wearable Immersion CyberGlove II is
12

Lelandais, Manon. "Multimodal marks of iteration in discourse." Faits de Langues 53, no. 2 (2024): 41–68. http://dx.doi.org/10.1163/19589514-53020003.

Full text
Abstract:
This article investigates the way speakers combine different verbal and gestural resources to express aspectual meaning in American English, specifically the way an event is repeated. It identifies stable gestural correlates to the OVER AND OVER adverbial phrase, which involves lexical iteration. Speakers have been shown to rely on co-verbal gestures to communicate aspectual information along with lexical and grammatical means. However, most of the research has focused on hand gestures, and on comparing the embodiment of different categories of aspect. Cyclic gestures have particularl
13

Kettner, Viktoria A., and Jeremy I. M. Carpendale. "Developing gestures for no and yes." Gesture 13, no. 2 (2013): 193–209. http://dx.doi.org/10.1075/gest.13.2.04ket.

Full text
Abstract:
Yes and no, or acceptance and refusal, are widespread communicative skills that are common across cultures. Although nodding and shaking the head are common ways to express these seemingly simple responses, these gestures develop later than others such as pointing. We analyzed diary observations from eight infants to investigate the origins of these gestures, why they develop later than other early gestures, and why nodding the head to indicate yes develops later than shaking the head for no. We found that young infants were able to shake their heads side-to-side, but they did not use this mov
14

Lee, Yoonjeong, Jelena Krivokapić, McLaren Lindsey, Vikas Tatineni, and Lynn Yambaye. "The phrasal organization in speech gestures and co-speech head and eyebrow beats." Journal of the Acoustical Society of America 153, no. 3_supplement (2023): A373. http://dx.doi.org/10.1121/10.0019219.

Full text
Abstract:
Non-referential, beat-like gestures that accompany speech are often recruited as coparticipants for marking prosodic structure, especially signaling prominence. Only a few studies have examined the coordination between vocal tract and co-speech gestures in relation to phrasal boundaries. Our recent study with eight Korean speakers (5F, 3M) has revealed that co-speech manual beat gestures and speech are synchronous with both phrase-edge segment and tone gestures [Lee et al., JASA 152, A199 (2022)]. Building on this, the present study examines different gesticulators, namely, co-speech head and ey
15

Hasheminejad, Atiye Sadat, Mahdieh Shafiee Tabar, and Akbari Chermahini Soghra. "The Effects of High Power and Low Power Posing on Students’ Pain Threshold." Journal of Arak University Medical Sciences 24, no. 4 (2021): 554–65. http://dx.doi.org/10.32598/jams.24.4.6348.1.

Full text
Abstract:
Background and Aim: Research has shown that social power affects information processing in many ways and can induce powerful movements or gestures. This study aimed to investigate the effect of pretending power gestures on changing the pain threshold of a group of female students. Methods & Materials: The method of the present study was quasi-experimental, with a pre-test post-test design and a control group. The statistical population of this study included all female students of Arak University in the academic year 2016-2017, from which 60 people were selected by the convenience sampling method,
16

Harrison, Simon. "The organisation of kinesic ensembles associated with negation." Gesture 14, no. 2 (2014): 117–40. http://dx.doi.org/10.1075/gest.14.2.01har.

Full text
Abstract:
This paper describes the organisation of kinesic ensembles associated with negation in speech through a qualitative study of negative utterances identified in face-to-face conversations between English speakers. All the utterances contain a verbal negative particle (no, not, nothing, etc.) and the kinesic ensembles comprise Open Hand Prone gestures and head shakes, both associated with the expression of negation in previous studies (e.g., Kendon, 2002, 2004; Calbris, 1990, 2011; Harrison, 2009, 2010). To analyse how these elements relate to each other, the utterances were studied in ELAN annot
17

Mechraoui, Amal, and Faridah Noor Binti Mohd Noor. "The direction giving pointing gestures of the Malay Malaysian speech community." Gesture 16, no. 1 (2017): 68–99. http://dx.doi.org/10.1075/gest.16.1.03mec.

Full text
Abstract:
When we speak, we do not only produce a chain of words and utterances, but we also perform various body movements that convey information. These movements are usually made with the hands and are what McNeill (1992) terms gestures. Although gesturing is universal, the way we gesture and the meanings we associate with gestures vary cross-culturally. Using a qualitative approach, this paper describes and illustrates the forms and functions of pointing gestures used by Malay speakers. The data discussed is based on 10 video recorded direction-giving interactions. Findings show that pointi
18

Murphy, Talley. "Surveillance as Gesture." TDR: The Drama Review 68, no. 2 (2024): 93–111. http://dx.doi.org/10.1017/s1054204324000078.

Full text
Abstract:
Surveillance is gestic, in Bertolt Brecht’s sense: it constitutes and is constituted by a set of practices that police and control the social at the level of gestures. In a surveillant Gestus of the everyday, gestures conscribe bodies as subjects of surveillance, from the touchscreen scroll that operates Amazon’s Neighbors social network to the hands-over-head posture imaged by airport body scanners. Gestures, not digital devices, watch—and enforce—the bounds of a “criminal” human.
19

Yang, Xiaoying, Xue Wang, Gaofeng Dong, et al. "Headar." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, no. 3 (2023): 1–28. http://dx.doi.org/10.1145/3610900.

Full text
Abstract:
Nod and shake of one's head are intuitive and universal gestures in communication. As smartwatches become increasingly intelligent through advances in user activity sensing technologies, many use scenarios of smartwatches demand quick responses from users in confirmation dialogs, to accept or dismiss proposed actions. Such proposed actions include making emergency calls, taking service recommendations, and starting or stopping exercise timers. Head gestures in these scenarios could be preferable to touch interactions for being hands-free and easy to perform. We propose Headar to recognize thes
20

Afdaliah, Nihla. "Teachers’ Gestures in EFL Classroom." Al-Lisan 7, no. 2 (2022): 182–97. http://dx.doi.org/10.30603/al.v7i2.2735.

Full text
Abstract:
The research analyzed teachers’ gestures in EFL classrooms. It covered the teachers’ gestures, the functions of those gestures, and their effect on the students. The research applied a qualitative research design. The subjects of the research were 2 English teachers and 14 students of a senior high school in Majene. The research instruments were classroom observation and teacher and student interviews. The results of this research revealed that the teachers performed hand gestures and head gestures in the classroom. The hand gestures were pointing, beckoning
21

Yoshida, Hanako, Paul Cirino, Sarah S. Mire, Joseph M. Burling, and Sunbok Lee. "Parents’ gesture adaptations to children with autism spectrum disorder." Journal of Child Language 47, no. 1 (2019): 205–24. http://dx.doi.org/10.1017/s0305000919000497.

Full text
Abstract:
The present study focused on parents’ social cue use in relation to young children's attention. Participants were ten parent–child dyads; all children were 36 to 60 months old and were either typically developing (TD) or were diagnosed with autism spectrum disorder (ASD). Children wore a head-mounted camera that recorded the proximate child view while their parent played with them. The study compared the following between the TD and ASD groups: (a) frequency of parent's gesture use; (b) parents’ monitoring of their child's face; and (c) how children looked at parents’ gestures. Results
22

Angrisani, Leopoldo, Mauro D’Arco, Egidio De Benedetto, et al. "Performance Measurement of Gesture-Based Human–Machine Interfaces Within eXtended Reality Head-Mounted Displays." Sensors 25, no. 9 (2025): 2831. https://doi.org/10.3390/s25092831.

Full text
Abstract:
This paper proposes a method for measuring the performance of Human–Machine Interfaces based on hand-gesture recognition, implemented within eXtended Reality Head-Mounted Displays. The proposed method leverages a systematic approach, enabling performance measurement in compliance with the Guide to the Expression of Uncertainty in Measurement. As an initial step, a testbed is developed, comprising a series of icons accommodated within the field of view of the eXtended Reality Head-Mounted Display considered. Each icon must be selected through a cue-guided task using the hand gestures under eval
23

Amina, Atiya Dawood. "Developing a new automated model to classify combined and basic gestures from complex head motion in real time by using All-vs-All HMM." International Journal of Emerging Technologies and Innovative Research 4, no. 3 (2017): 156–65. https://doi.org/10.5281/zenodo.579686.

Full text
Abstract:
Human head gestures convey rich messages, delivering information to people as a communication tool. Nodding and shaking are commonly used gestures, serving as non-verbal signals that communicate intent and emotion. However, the majority of head-gesture classification systems have focused on detecting nodding and shaking, ignoring other head gestures that carry more expressive emotional signals, such as rest (up and down), turn, tilt, and tilting. In this paper we developed a new model to classify all head gestures (rest, turn, tilt, nod, shake, and tilting) from complex head motion
24

Amina, Atiya Dawood. "Developing a new automated model to classify combined and basic gestures from complex head motion in real time by using All-vs-All HMM." International Journal of Emerging Technologies and Innovative Research 4, no. 3 (2017): 156–65. https://doi.org/10.5281/zenodo.579688.

Full text
Abstract:
Human head gestures convey rich messages, delivering information to people as a communication tool. Nodding and shaking are commonly used gestures, serving as non-verbal signals that communicate intent and emotion. However, the majority of head-gesture classification systems have focused on detecting nodding and shaking, ignoring other head gestures that carry more expressive emotional signals, such as rest (up and down), turn, tilt, and tilting. In this paper we developed a new model to classify all head gestures (rest, turn, tilt, nod, shake, and tilting) from complex head motion
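The all-vs-all HMM scheme these two entries describe scores an observed motion sequence under one model per gesture class and picks the best-scoring class. A minimal forward-algorithm sketch of that scoring step, with a toy two-state discrete model invented purely for illustration:

```python
def forward_likelihood(obs, start, trans, emit):
    """Probability of a discrete observation sequence under one HMM,
    computed with the forward algorithm. In an all-vs-all scheme, one
    such model is trained per head-gesture class (nod, shake, turn, ...)
    and the class whose model assigns the highest likelihood wins.
    obs: list of symbol indices; start[s], trans[p][s], emit[s][o]
    are the usual HMM parameters."""
    n = len(start)
    # alpha[s] = P(observations so far, current state = s)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[p] * trans[p][s] for p in range(n))
                 for s in range(n)]
    return sum(alpha)
```

Classification then reduces to `max(models, key=lambda m: forward_likelihood(obs, *m.params))` over the per-class models.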
25

Sandler, Wendy. "Dedicated gestures and the emergence of sign language." Gesture 12, no. 3 (2012): 265–307. http://dx.doi.org/10.1075/gest.12.3.01san.

Full text
Abstract:
Sign languages make use of the two hands, facial features, the head, and the body to produce multifaceted gestures that are dedicated for linguistic functions. In a newly emerging sign language — Al-Sayyid Bedouin Sign Language — the appearance of dedicated gestures in signers of four age groups or strata reveals that recruitment of gesture for language is a gradual process. Starting with only the hands in Stratum I, each additional articulator is recruited to perform grammatical functions as the language matures, resulting in ever increasing grammatical complexity. The emergence of dedicated
26

Ozdemir, Ali. "HAND GESTURE CHARACTER RECOGNITION USING AI DEEP LEARNING." Pakistan's Multidisciplinary Journal for Arts & Science 6, no. 1 (2025): 01–10. https://doi.org/10.5281/zenodo.15341952.

Full text
Abstract:
Interaction between two peers can occur when there is a communication medium between them such that both can exchange ideas, information, meaning, news, or feelings. This communication medium can be either linguistic or gestural. Linguistic communication concerns the statistical or rule-based modeling of natural language from a computational point of view, whereas a gesture is a movement of any part of the body (such as the head or a hand) in order to express a meaning or idea. The aim of this report is to review the different proposals about human gesture recognitio
27

Mello, Heliana, Lúcia Ferrari, and Bruno Rocha. "Editorial." Journal of Speech Sciences 9 (September 9, 2020): 01–06. http://dx.doi.org/10.20396/joss.v9i00.14953.

Full text
Abstract:
Speech and gestures meet at their departure point which is actionality. The same departing point keeps the two channels connected through their execution in the creation of meaning and interactivity. Both speech and gestures require segmentation in order to be studied and understood scientifically, as knowing what the units of analysis are is crucial to the scientific endeavor. Prominence is both a characteristic carried by prosody (be it defined functionally, physically or cognitively), as well as by several gestural acts, such as widening of the eyes, increased speed in hand motion, head til
28

Pizzolato, Ednaldo Brigante, Mauro dos Santos Anjo, and Sebastian Feuerstack. "An evaluation of real-time requirements for automatic sign language recognition using ANNs and HMMs - The LIBRAS use case." Journal on Interactive Systems 4, no. 1 (2013): 1. http://dx.doi.org/10.5753/jis.2013.624.

Full text
Abstract:
Sign languages are the natural way Deaf people communicate with others. They have their own formal semantic definitions and syntactic rules and are composed of a large set of gestures involving the hands and head. Automatic recognition of sign languages (ARSL) tries to recognize the signs and translate them into a written language. ARSL is a challenging task, as it involves background segmentation; hand and head posture modeling, recognition, and tracking; temporal analysis; and syntactic and semantic interpretation. Moreover, when real-time requirements are considered, this task becomes even
29

Borowska-Terka, Anna, and Pawel Strumillo. "Person Independent Recognition of Head Gestures from Parametrised and Raw Signals Recorded from Inertial Measurement Unit." Applied Sciences 10, no. 12 (2020): 4213. http://dx.doi.org/10.3390/app10124213.

Full text
Abstract:
Numerous applications of human–machine interfaces, e.g., dedicated to persons with disabilities, require contactless handling of devices or systems. The purpose of this research is to develop a hands-free head-gesture-controlled interface that can support persons with disabilities to communicate with other people and devices, e.g., the paralyzed to signal messages or the visually impaired to handle travel aids. The hardware of the interface consists of a small stereovision rig with a built-in inertial measurement unit (IMU). The device is to be positioned on a user’s forehead. Two approaches t
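The kind of IMU-based head-gesture decision this abstract describes can be sketched with a naive energy comparison between gyroscope axes. This is an illustrative baseline only, not the parametrised or raw-signal classifiers evaluated in the paper, and the threshold is an invented value:

```python
def classify_imu_gesture(gyro_pitch, gyro_yaw, energy_floor=1.0):
    """Decide nod vs. shake vs. none from gyroscope pitch-rate and
    yaw-rate samples of one gesture window, by comparing per-axis
    signal energy. A nod rotates the head mostly about the pitch axis,
    a shake mostly about the yaw axis; energy_floor rejects windows
    with no deliberate motion."""
    e_pitch = sum(s * s for s in gyro_pitch)
    e_yaw = sum(s * s for s in gyro_yaw)
    if max(e_pitch, e_yaw) < energy_floor:
        return "none"
    return "nod" if e_pitch > e_yaw else "shake"
```

A real interface would precede this with windowing and filtering, and would typically learn the decision boundary from labelled recordings rather than hard-code it.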
30

Catteau, Fanny, and Claudia S. Bianchini. "Mise en place d’un protocole d’identification de traits articulatoires communs à la gestualité co-verbale du français et à la Langue des Signes Française (LSF): application au geste épistémique." Faits de Langues 53, no. 2 (2024): 89–110. http://dx.doi.org/10.1163/19589514-53020005.

Full text
Abstract:
This article focuses on the articulatory characteristics of epistemic gestures (i.e., gestures used to express certainty or uncertainty) in co-speech gestures (CSG) in French and in French Sign Language (LSF). It presents a new methodology for analysis, which relies on the complementary use of manual annotation (using Typannot) and semi-automatic annotation (using AlphaPose) to highlight the kinesiological characteristics of these epistemic gestures. The presented methodology makes it possible to analyze the flexion/extension movements of the head in epistemic contexts. The results of this analy
31

Amangeldy, Nurzada, Marek Milosz, Saule Kudubayeva, Akmaral Kassymova, Gulsim Kalakova, and Lena Zhetkenbay. "A Real-Time Dynamic Gesture Variability Recognition Method Based on Convolutional Neural Networks." Applied Sciences 13, no. 19 (2023): 10799. http://dx.doi.org/10.3390/app131910799.

Full text
Abstract:
Among the many problems in machine learning, the most critical ones involve improving the categorical response prediction rate based on extracted features. In spite of this, it is noted that most of the time from the entire cycle of multi-class machine modeling for sign language recognition tasks is spent on data preparation, including collection, filtering, analysis, and visualization of data. To find the optimal solution for the above-mentioned problem, this paper proposes a methodology for automatically collecting the spatiotemporal features of gestures by calculating the coordinates of the
32

Bavelas, Janet, and Nicole Chovil. "Some pragmatic functions of conversational facial gestures1." Gesture 17, no. 1 (2018): 98–127. http://dx.doi.org/10.1075/gest.00012.bav.

Full text
Abstract:
Conversational facial gestures are not emotional expressions (Ekman, 1997). Facial gestures are co-speech gestures – configurations of the face, eyes, and/or head that are synchronized with words and other co-speech gestures. Facial gestures are the most frequent facial actions in dialogue, and the majority serve pragmatic (meta-communicative) rather than referential functions. A qualitative microanalysis of a close-call story illustrates three pragmatic facial gestures in their macro- and micro-context: (a) The narrator’s thinking faces (Goodwin & Goodwin, 1986) occurred as the n
33

Lee, Sang Hun, and Se-One Yoon. "User interface for in-vehicle systems with on-wheel finger spreading gestures and head-up displays." Journal of Computational Design and Engineering 7, no. 6 (2020): 700–721. http://dx.doi.org/10.1093/jcde/qwaa052.

Full text
Abstract:
Interacting with an in-vehicle system through a central console is known to induce visual and biomechanical distractions, thereby delaying the danger recognition and response times of the driver and significantly increasing the risk of an accident. To address this problem, various hand gestures have been developed. Although such gestures can reduce visual demand, they are limited in number, lack passive feedback, and can be vague and imprecise, difficult to understand and remember, and culture-bound. To overcome these limitations, we developed a novel on-wheel finger spreading gestura
34

Esipova, Maria. "Polar responses in Russian across modalities and across interfaces." Journal of Slavic Linguistics 29, no. 3 (2021): 1–11. http://dx.doi.org/10.1353/jsl.2021.a923057.

Full text
Abstract:
This paper investigates gestures and prosody in polar responses in Russian as part of a larger research program of studying meaning as it is expressed through various channels and constrained at various levels of representation and their interfaces. Based on the data on head nods and a gestural–intonational cluster used to question the rationale behind the antecedent speech act in Russian responses, it argues that gestures and intonational contours should be treated on a par with spoken words and their parts when it comes to fitting them into typologies of meaning-encoding expression
35

Naga Sesha Lakshmi Pokanati, Satvik Reddy Alla, Sai Raja Pradeep Pampana, Ashan Mohammad, Swamy Sakala, and Veera Venkata Naga Surya Devi Sai Chintam. "Next-gen interaction experience using virtual mouse system." International Journal of Science and Research Archive 15, no. 1 (2025): 348–54. https://doi.org/10.30574/ijsra.2025.15.1.0899.

Full text
Abstract:
The Virtual Mouse System is an innovative, touch-free control mechanism that replaces traditional computer mice using computer vision and artificial intelligence (AI). With a standard camera or webcam, it interprets hand, head, and eye gestures to perform actions like cursor movement, clicking, and scrolling. Designed for both general users and individuals with physical disabilities, it provides an intuitive, accessible, and futuristic interaction method. Key technologies like OpenCV and AI enable real-time gesture recognition, making it suitable for applications in accessibility, gaming, and
APA, Harvard, Vancouver, ISO, and other styles
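The cursor-mapping step such a virtual mouse needs — turning a normalized landmark coordinate from the camera into a screen position, with smoothing to damp camera jitter — can be sketched as follows. This is an illustrative sketch only; the screen size and smoothing factor are assumptions, not details from the paper.

```python
def to_screen(x_norm, y_norm, width=1920, height=1080):
    """Map a normalized landmark coordinate (0..1) to screen pixels."""
    return int(x_norm * width), int(y_norm * height)

class Smoother:
    """Exponential smoothing to damp frame-to-frame jitter before moving the cursor."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha   # weight of the newest sample
        self.state = None
    def update(self, point):
        if self.state is None:
            self.state = point
        else:
            self.state = tuple(self.alpha * n + (1 - self.alpha) * s
                               for n, s in zip(point, self.state))
        return self.state
```

In a full system the smoothed point would be fed to an OS-level cursor API each frame; the smoothing trades a little latency for a much steadier pointer.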
36

Zhang, Huafeng, Kang Liu, Yuanhui Zhang, and Jihong Lin. "TRANS-CNN-Based Gesture Recognition for mmWave Radar." Sensors 24, no. 6 (2024): 1800. http://dx.doi.org/10.3390/s24061800.

Full text
Abstract:
To improve the real-time performance of gesture recognition from the micro-Doppler map of an mmWave radar, a point-cloud-based gesture recognition method for mmWave radar is proposed in this paper. Gesture recognition proceeds in two steps. The first step is to estimate the point cloud of the gestures by 3D-FFT and peak grouping. The second step is to train the TRANS-CNN model, which combines multi-head self-attention with a 1D-convolutional network, to extract deeper features from the point cloud data and categorize the gestures. In the experimen
APA, Harvard, Vancouver, ISO, and other styles
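The first step of the pipeline the abstract describes — an FFT over the radar samples followed by peak grouping to form point detections — can be sketched in miniature. The paper's 3D-FFT spans range, Doppler, and angle; the sketch below reduces this to a single range FFT per chirp, and the grouping rule is a toy stand-in for the paper's method, not its actual algorithm.

```python
import numpy as np

def range_profile(adc_samples: np.ndarray) -> np.ndarray:
    """First FFT stage: turn raw ADC samples of one chirp into a range profile."""
    window = np.hanning(len(adc_samples))   # taper to suppress spectral leakage
    return np.abs(np.fft.rfft(adc_samples * window))

def group_peaks(profile: np.ndarray, threshold: float):
    """Toy peak grouping: merge runs of adjacent above-threshold range bins
    into single detections, each reported at its centroid bin."""
    groups, current = [], []
    for i, v in enumerate(profile):
        if v > threshold:
            current.append(i)
        elif current:
            groups.append(int(np.mean(current)))
            current = []
    if current:
        groups.append(int(np.mean(current)))
    return groups
```

With a synthetic target at range bin 10, the windowed spectrum spreads over bins 9–11, and grouping collapses them back to a single detection at bin 10.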
37

S Sastry, Dr Anitha. "Gesture Based Wheel Chair Control." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem48216.

Full text
Abstract:
This paper presents the design and development of a gesture-controlled smart wheelchair system intended for individuals with severe physical impairments. The proposed system utilizes head movements and eye blink detection as primary inputs to enable hands-free navigation. An accelerometer sensor interprets directional head gestures, while an infrared sensor detects intentional eye blinks for auxiliary control functions. To enhance safety, ultrasonic sensors are integrated to detect obstacles and edges, with real-time alerts delivered through a buzzer, vibration motor, and a 16x2 LCD
APA, Harvard, Vancouver, ISO, and other styles
38

Epke, Michael R., Lars Kooijman, and Joost C. F. de Winter. "I See Your Gesture: A VR-Based Study of Bidirectional Communication between Pedestrians and Automated Vehicles." Journal of Advanced Transportation 2021 (April 27, 2021): 1–10. http://dx.doi.org/10.1155/2021/5573560.

Full text
Abstract:
Automated vehicles (AVs) are able to detect pedestrians reliably but still have difficulty in predicting pedestrians’ intentions from their implicit body language. This study examined the effects of using explicit hand gestures and receptive external human-machine interfaces (eHMIs) in the interaction between pedestrians and AVs. Twenty-six participants interacted with AVs in a virtual environment while wearing a head-mounted display. The participants’ movements in the virtual environment were visualized using a motion-tracking suit. The first independent variable was the participants’ opportu
APA, Harvard, Vancouver, ISO, and other styles
39

Kim, Jinhyuk, Jaekwang Cha, and Shiho Kim. "Hands-Free User Interface for VR Headsets Based on In Situ Facial Gesture Sensing." Sensors 20, no. 24 (2020): 7206. http://dx.doi.org/10.3390/s20247206.

Full text
Abstract:
The typical configuration of virtual reality (VR) devices consists of a head-mounted display (HMD) and handheld controllers. As such, these units have limited utility in tasks that require hands-free operation, such as in surgical operations or assembly works in cyberspace. We propose a user interface for a VR headset based on a wearer’s facial gestures for hands-free interaction, similar to a touch interface. By sensing and recognizing the expressions associated with the in situ intentional movements of a user’s facial muscles, we define a set of commands that combine predefined facial gesture
APA, Harvard, Vancouver, ISO, and other styles
40

Davidson, Jane W. "The Role of the Body in the Production and Perception of Solo Vocal Performance: A Case Study of Annie Lennox." Musicae Scientiae 5, no. 2 (2001): 235–56. http://dx.doi.org/10.1177/102986490100500206.

Full text
Abstract:
The work described in this paper interprets the body movements of singers in an attempt to understand the relationships between physical control and the musical material being performed, and the performer's implicit and explicit expressive intentions. The work builds upon a previous literature which has suggested that the relationship between physical execution and the expression of mental states is a subtle and complex one. For instance, performers appear to develop a vocabulary of expressive gestures, yet these gestures – though perceptually discrete – co-exist and are even integrated to bec
APA, Harvard, Vancouver, ISO, and other styles
41

Rai, Radha, Prakhar Kumar Chandraker, Shaik Nowsheen, Shantanu Kumar Singh, and Veena G. "AI Cricket Score Board." International Journal of Innovative Research in Information Security 9, no. 02 (2023): 28–38. http://dx.doi.org/10.26562/ijiris.2023.v0902.05.

Full text
Abstract:
Gesture Recognition pertains to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and/or body. It is of utmost importance in designing an intelligent and efficient human–computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation, monitoring patients or elder people, surveillance systems, sports gesture analysis, human behavior analysis etc., to virtual reality. In recent years, there has been increased interest in video summarization and automatic sports highlights generation i
APA, Harvard, Vancouver, ISO, and other styles
42

Jeong, Jaehoon, Haegyeom Choi, and Donghun Lee. "Multi-Mode Hand Gesture-Based VR Locomotion Technique for Intuitive Telemanipulation Viewpoint Control in Tightly Arranged Logistic Environments." Sensors 25, no. 4 (2025): 1181. https://doi.org/10.3390/s25041181.

Full text
Abstract:
Telemanipulation-based object-side picking with a suction gripper often faces challenges such as occlusion of the target object or the gripper and the need for precise alignment between the suction cup and the object’s surface. These issues can significantly affect task success rates in logistics environments. To address these problems, this study proposes a multi-mode hand gesture-based virtual reality (VR) locomotion method to enable intuitive and precise viewpoint control. The system utilizes a head-mounted display (HMD) camera to capture hand skeleton data, which a multi-layer perceptron (
APA, Harvard, Vancouver, ISO, and other styles
43

Anjusha Pimpalshende. "Facial Gesture Recognition with Human Computer Interaction for Physically Impaired." Advances in Nonlinear Variational Inequalities 27, no. 4 (2024): 48–59. http://dx.doi.org/10.52783/anvi.v27.1492.

Full text
Abstract:
Modern technologies have succeeded in reducing human effort and risk in many activities. Because physical disability is a barrier to computer interaction based on electronic devices, human-computer interaction (HCI) has proven to be a dynamic field of study over the last few decades and a major milestone in improving the lives of the physically impaired through technology. In this study, we propose a method that assists physically impaired people in communicating through these techniques. We accomplished this task by considering the
APA, Harvard, Vancouver, ISO, and other styles
44

Fusaro, Maria, Paul L. Harris, and Barbara A. Pan. "Head nodding and head shaking gestures in children’s early communication." First Language 32, no. 4 (2011): 439–58. http://dx.doi.org/10.1177/0142723711419326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Schweitzer, Frédéric, and Alexandre Campeau-Lecours. "IMU-Based Hand Gesture Interface Implementing a Sequence-Matching Algorithm for the Control of Assistive Technologies." Signals 2, no. 4 (2021): 729–53. http://dx.doi.org/10.3390/signals2040043.

Full text
Abstract:
Assistive technologies (ATs) often have a high-dimensionality of possible movements (e.g., assistive robot with several degrees of freedom or a computer), but the users have to control them with low-dimensionality sensors and interfaces (e.g., switches). This paper presents the development of an open-source interface based on a sequence-matching algorithm for the control of ATs. Sequence matching allows the user to input several different commands with low-dimensionality sensors by not only recognizing their output, but also their sequential pattern through time, similarly to Morse code. In th
APA, Harvard, Vancouver, ISO, and other styles
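The Morse-code analogy in this abstract — decoding a temporal sequence of presses on a low-dimensionality switch into one of many commands — can be sketched as below. The command table and the 0.4 s long-press threshold are hypothetical illustrations, not the paper's actual vocabulary or open-source implementation.

```python
# Hypothetical command table, Morse-style: a sequence of press symbols maps
# to one assistive-technology command.
COMMANDS = {
    ("short",): "stop",
    ("short", "short"): "forward",
    ("short", "long"): "left",
    ("long", "short"): "right",
}

def classify_presses(durations, long_threshold=0.4):
    """Turn a list of switch-press durations (seconds) into a symbol sequence."""
    return tuple("long" if d >= long_threshold else "short" for d in durations)

def decode(durations):
    """Look up the symbol sequence; returns None when the pattern is unknown."""
    return COMMANDS.get(classify_presses(durations))
```

The appeal of the approach is exactly what the abstract notes: a single one-bit sensor yields as many commands as there are distinguishable sequences, at the cost of input time.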
46

Priyanayana, S., B. Jayasekara, and R. Gopura. "Adapting concept of human-human multimodal interaction in human-robot applications." Bolgoda Plains 2, no. 2 (2022): 18–20. http://dx.doi.org/10.31705/bprm.v2(2).2022.4.

Full text
Abstract:
Human communication is multimodal in nature. In a normal environment, people interact with other humans and with the environment using more than one modality or medium of communication. They speak, use gestures, and look at things to interact with nature and other humans. By listening to different voice tones and watching facial gazes and arm movements, people understand communication cues. A discussion between two people involves vocal communication, hand gestures, head gestures, facial cues, etc. [1]. By the textbook definition, synergistic use of these interaction methods
APA, Harvard, Vancouver, ISO, and other styles
47

Rajath, Ravi, R. Koundinya Rohan, S. Srinivas Naidu T, Manoharan Vaishnav, and H. Anu. "Wireless Sensing of Gestures using MEMS Accelerometer." Journal of Network Security Computer Networks 5, no. 2 (2019): 6–9. https://doi.org/10.5281/zenodo.3326119.

Full text
Abstract:
Technology plays a significant role in healthcare, not only for sensor devices but also in communication, recording, and display devices. The proposed system translates hand or head movement via complex algorithms, implemented in software, hardware, or a combination of both. The main goal of a gesture recognition system is to enable an individual or a group of individuals to communicate using specific sign languages or gestures. This paper proposes the use of a three-axis accelerometer, wireless communication protocols, and a database system on a computer.
APA, Harvard, Vancouver, ISO, and other styles
48

Dawood, Dr Amina Atiya, and Balasem Alawi Hussain. "Machine Learning for Single and Complex 3D Head Gestures: Classification in Human-Computer Interaction." Webology 19, no. 1 (2022): 1431–45. http://dx.doi.org/10.14704/web/v19i1/web19095.

Full text
Abstract:
This paper presents a new Hidden Markov Model based approach for fast and automatic detection and classification of head movements in real time dynamic videos. The model has been developed for human-computer interaction applications using only a laptop webcam. The proposed model can recognize both single and combined head movements with fast response. Previous models focused on classifying head nods and shakes only, whereas our model also covers other head movements. The model proposed here doesn’t need any user intervention or previous knowledge of its envir
APA, Harvard, Vancouver, ISO, and other styles
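An HMM classifier of the kind this abstract describes ultimately reduces to decoding the most likely hidden-state sequence (nod, shake, ...) from a stream of observed motion directions. A minimal log-domain Viterbi sketch is shown below; the two-state nod/shake model and all probabilities in the usage example are illustrative toys, not the paper's trained parameters.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence.
    Works in log space to avoid underflow on long sequences."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # best predecessor for state s given observation o
            prob, prev = max(
                (V[-2][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][o]), p)
                for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]
```

With sticky self-transitions and direction-specific emissions, vertical motions decode to a run of "nod" states and horizontal ones to "shake".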
49

Mao, Xiaoqian, Xi Wen, Yu Song, Wei Li, and Genshe Chen. "Eliminating drift of the head gesture reference to enhance Google Glass-based control of an NAO humanoid robot." International Journal of Advanced Robotic Systems 14, no. 2 (2017): 172988141769258. http://dx.doi.org/10.1177/1729881417692583.

Full text
Abstract:
This article presents a strategy for hands-free control of an NAO humanoid robot via head gesture detected by Google Glass-based multi-sensor fusion. First, we introduce a Google Glass-based robot system by integrating the Google Glass and the NAO humanoid robot, which is able to send robot commands through Wi-Fi communications between the Google Glass and the robot. Second, we detect the operator’s head gestures by processing data from multiple sensors including accelerometers, geomagnetic sensors and gyroscopes. Next, we use a complementary filter to eliminate drift of the head gesture refere
APA, Harvard, Vancouver, ISO, and other styles
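The drift-elimination idea in this abstract — blending the gyroscope's integrated rate (fast but drifting) with the accelerometer's gravity-referenced angle (noisy but drift-free) — is the classic complementary filter. A minimal sketch follows; the blending weight alpha = 0.98 and the axis conventions are illustrative assumptions, not the paper's tuned values.

```python
import math

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One filter step: alpha weights the gyro path (angle_prev + rate * dt),
    and (1 - alpha) slowly pulls the estimate toward the accelerometer angle,
    bleeding off gyro drift over time."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

def accel_to_pitch(ax, ay, az):
    """Pitch angle (radians) from a static accelerometer reading of gravity."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))
```

Because the accelerometer term enters with weight (1 - alpha) every step, any constant gyro bias decays geometrically instead of accumulating, which is exactly the drift behavior the article sets out to remove.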
50

Nurieva, Irina M. "GESTURE IN THE UDMURTS SONG CULTURE: FUNCTIONS AND SEMANTICS." Vestnik Tomskogo gosudarstvennogo universiteta. Kul'turologiya i iskusstvovedenie, no. 43 (2021): 214–21. http://dx.doi.org/10.17223/22220836/43/17.

Full text
Abstract:
The problems of non-verbal behavior have so far remained in the shadow of modern ethnomusicology. Meanwhile, the spatial behavior of singers, and the poses and gestures they use while singing, are a language of culture no less informative than musical and verbal language, yet more emotional, expressive, and accessible. Gesture as body language is usually associated with movement of the arm, palm, head, shoulders, or legs. In the present study, we operate with the broader meaning of the word “gesture” as behavior (from Lat. gestura). The spatial behavior of performers during the
APA, Harvard, Vancouver, ISO, and other styles