Journal articles on the topic 'Gestural Movements'

Consult the top 50 journal articles for your research on the topic 'Gestural Movements.'


1

Wacewicz, Sławomir, Przemysław Żywiczyński, and Sylwester Orzechowski. "Visible movements of the orofacial area." Gesture 15, no. 2 (2016): 250–82. http://dx.doi.org/10.1075/gest.15.2.05wac.

Abstract:
The age-old debate between the proponents of the gesture-first and speech-first positions has returned to occupy a central place in current language evolution theorizing. The gestural scenarios, suffering from the problem known as “modality transition” (why a gestural system would have changed into a predominantly spoken system), frequently appeal to the gestures of the orofacial area as a platform for this putative transition. Here, we review currently available evidence on the significance of the orofacial area in language evolution. While our review offers some support for orofacial movements as an evolutionary “bridge” between manual gesture and speech, we see the evidence as far more consistent with a multimodal approach. We also suggest that, more generally, the “gestural versus spoken” formulation is limiting and would be better expressed in terms of the relative input and interplay of the visual and vocal-auditory sensory modalities.
2

Müller, Cornelia. "How recurrent gestures mean." Gesture 16, no. 2 (2017): 277–304. http://dx.doi.org/10.1075/gest.16.2.05mul.

Abstract:
Drawing upon corpus analyses of recurrent gestures, a pragmatics perspective on gestural meaning and conventionalization will be developed. Gesture pragmatics is considered in terms of usage-based, embodied and interactively emerging meaning. The article brings together cognitive linguistic, cognitive semiotic and interactional perspectives on meaning making. How the interrelation between different types of context (interactional, semantic/pragmatic/syntactic, distribution across a corpus) with the embodied motivation of kinesic forms in actions and movement experiences of the body might play out in the process of conventionalization is illustrated by discussing three recurrent gestures: the Palm-Up-Open-Hand, the Holding Away, and the Cyclic gesture. By merging conventional and idiosyncratic elements, recurrent gestures occupy a place between spontaneously created (singular) gestures and emblems as fully conventionalized gestural expressions on a continuum of increasing conventionalization (cf. Kendon’s continuum: McNeill, 1992, 2000). Recurrent gestures are an interesting case for studying how processes of conventionalization may involve emergent de-compositions of gestural movements into smaller concomitant Gestalts (cf. Kendon, 2004, Chapters 15 & 16). They are particularly revealing in showing how those de-compositional processes are grounded experientially in contexts-of-use and remain grounded in conventionalized, yet still embodied, experiential frames.
3

Wodehouse, Andrew, and Jonathon Marks. "Gestural Product Interaction." International Journal of Art, Culture and Design Technologies 3, no. 2 (2013): 1–13. http://dx.doi.org/10.4018/ijacdt.2013070101.

Abstract:
This research explores emotional response to gesture in order to inform future product interaction design. After describing the emergence and likely role of full-body interfaces with devices and systems, the importance of emotional reaction to the necessary movements and gestures is outlined. A gestural vocabulary for the control of a web page is then presented, along with a semantic differential questionnaire for its evaluation. An experiment is described where users undertook a series of web navigation tasks using the gestural vocabulary, then recorded their reaction to the experience. A number of insights were drawn on the context, precision, distinction, repetition and scale of gestures when used to control or activate a product. These insights will be of help in interaction design, and provide a basis for further development of the gestural vocabulary.
4

Sekine, Kazuki, Catharine Wood, and Sotaro Kita. "Gestural depiction of motion events in narrative increases symbolic distance with age." Language, Interaction and Acquisition 9, no. 1 (2018): 40–68. http://dx.doi.org/10.1075/lia.15020.sek.

Abstract:
We examined gesture representation of motion events in narratives produced by three- and nine-year-olds, and adults. Two aspects of gestural depiction were analysed: how protagonists were depicted, and how gesture space was used. We found that older groups were more likely to express protagonists as an object that a gesturing hand held and manipulated, and less likely to express protagonists with whole-body enactment gestures. Furthermore, for older groups, gesture space increasingly became less similar to narrated space. The older groups were less likely to use large gestures or gestures in the periphery of the gesture space to represent movements that were large relative to a protagonist’s body or that took place next to a protagonist. They were also less likely to produce gestures on a physical surface (e.g. table) to represent movement on a surface in narrated events. The development of gestural depiction indicates that older speakers become less immersed in the story world and start to control and manipulate story representation from an outside perspective in a bounded and stage-like gesture space. We discuss this developmental shift in terms of increasing symbolic distancing (Werner & Kaplan, 1963).
5

Adema, Janneke, and Kamila Kuc. "Unruly Gestures: Seven Cine-Paragraphs on Reading/Writing Practices in our Post-Digital Condition." Culture Unbound 11, no. 1 (2019): 190–208. http://dx.doi.org/10.3384/cu.2000.1525.2019111190.

Abstract:
Unruly Gestures presents a hybrid performative intervention by means of video, text, and still images. With this experimental essay we aspire to break down various preconceptions about reading/writing gestures. Breaking away from a narrative that sees these gestures foremost as passive entities – as either embodiments of pure subjective intentionality, or as bodily movements shaped and controlled by media technologies (enabling specific sensory engagements with texts) – we aim to reappraise them. Indeed, in this essay we identify numerous dominant narratives that relate to gestural agency, to the media-specificity of gestures, and to their (linear) historicity, naturalness and humanism. This essay disrupts these preconceptions, and by doing so, it unfolds an alternative genealogy of ‘unruly gestures.’ These are gestures that challenge gestural conditioning through particular media technologies, cultural power structures, hegemonic discourses, and the biopolitical self. We focus on reading/writing gestures that have disrupted gestural hegemonies and material-discursive forms of gestural control through time and across media. Informed by Tristan Tzara’s cut-up techniques, where through the gesture of cutting the Dadaists subverted established traditions of authorship, intentionality, and linearity, this essay has been cut up into seven semi-autonomous cine-paragraphs (accessible in video and print). Each of these cine-paragraphs confronts specific gestural preconceptions while simultaneously showcasing various unruly gestures.
6

Browman, Catherine P., and Louis Goldstein. "Articulatory gestures as phonological units." Phonology 6, no. 2 (1989): 201–51. http://dx.doi.org/10.1017/s0952675700001019.

Abstract:
We have argued that dynamically defined articulatory gestures are the appropriate units to serve as the atoms of phonological representation. Gestures are a natural unit, not only because they involve task-oriented movements of the articulators, but because they arguably emerge as prelinguistic discrete units of action in infants. The use of gestures, rather than constellations of gestures as in Root nodes, as basic units of description makes it possible to characterise a variety of language patterns in which gestural organisation varies. Such patterns range from the misorderings of disordered speech through phonological rules involving gestural overlap and deletion to historical changes in which the overlap of gestures provides a crucial explanatory element. Gestures can participate in language patterns involving overlap because they are spatiotemporal in nature and therefore have internal duration. In addition, gestures differ from current theories of feature geometry by including the constriction degree as an inherent part of the gesture. Since the gestural constrictions occur in the vocal tract, which can be characterised in terms of tube geometry, all the levels of the vocal tract will be constricted, leading to a constriction degree hierarchy. The values of the constriction degree at each higher-level node in the hierarchy can be predicted on the basis of the percolation principles and tube geometry. In this way, the use of gestures as atoms can be reconciled with the use of constriction degree at various levels in the vocal tract (or feature geometry) hierarchy. The phonological notation developed for the gestural approach might usefully be incorporated, in whole or in part, into other phonologies. Five components of the notation were discussed, all derived from the basic premise that gestures are the primitive phonological unit, organised into gestural scores. These components include (1) constriction degree as a subordinate of the articulator node and (2) stiffness (duration) as a subordinate of the articulator node. That is, both constriction degree and duration are inherent to the gesture. The gestures are arranged in gestural scores using (3) articulatory tiers, with (4) the relevant geometry (articulatory, tube or feature) indicated to the left of the score and (5) structural information above the score, if desired. Association lines can also be used to indicate how the gestures are combined into phonological units. Thus, gestures can serve both as characterisations of articulatory movement data and as the atoms of phonological representation.
7

Keller, M. David, Patrick Mead, and Megan Kozub. "Gaze Supported Gestural Computer Interaction: Performance Implications of Training." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (2017): 1990–94. http://dx.doi.org/10.1177/1541931213601993.

Abstract:
Gaze-supported non-tactile gestural control combines gesture-based body movements with eye-gaze positioning to provide an input source for a user’s control of a system. Combining body gestures with eye movements allows for unique computer control methods beyond the traditional mouse. However, research is mixed on the effectiveness of emerging control types, such as gestures and eye-tracking, with some studies showing positive performance outcomes for one or more control aspects but performance detriments in other areas that would prohibit the use of such novel control methods. One important aspect that is often ignored is familiarity with the control method. Unlike the mouse, users are typically unfamiliar with eye- and gesture-based control methods. In order to truly understand the benefit of new concepts like gaze-supported gestural controls, testing experienced users is necessary. In the current experiment, participants were trained on the gaze-supported gesture system in order to become “experts” and achieve similar levels of proficiency with the different control methods to be assessed, including mouse, non-gaze and gaze-supported gestural controls. Results showed that after as few as five practice sessions, participants were able to perform a simple point-and-click task as well as, or even better than, with mouse control when using gaze-supported gestures.
8

Rahaim, Matt. "Gesture and melody in Indian vocal music." Gesture 8, no. 3 (2008): 325–47. http://dx.doi.org/10.1075/gest.8.3.04rah.

Abstract:
The gestures that accompany improvisation in Indian vocal music, like the gestures that accompany speech, are closely co-ordinated with vocalization. Though linked to what is being sung, these movements are not determined by vocal action; nor are they taught explicitly, deliberately rehearsed, or tied to specific meanings. Students tend to gesture recognizably like their teachers, producing lineage-based gesture dialects, but the gestural repertoire of every vocalist is nonetheless idiosyncratic. This paper aims to trace a brief history of song gesture in India, and to show some of the links between gesture and vocalization. It also adapts Katharine Young’s theory of the “family body” to the transmission of gesture dialects through teaching lineages. Gesture and sound are taken to be parallel channels for the expression of melody.
9

Mead, Patrick, David Keller, and Megan Kozub. "Point with your eyes not with your hands." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 60, no. 1 (2016): 835–39. http://dx.doi.org/10.1177/1541931213601190.

Abstract:
The emergence of gesture-based controls like the Microsoft Kinect provides new opportunities for creative and innovative methods of human-computer interaction. However, such devices are not without their limitations. The gross-motor movements of gestural interaction present physical limitations that may negatively affect interaction speed, accuracy, and workload, and subsequently affect the design of system interfaces and inputs. Conversely, interaction methods such as eye tracking require little physical effort, leveraging the unconscious and natural behaviors of human eye movements as inputs. Unfortunately, eye tracking is, in most cases, limited to a simple pointing device. This research shows that by combining these interactions into gaze-based gestural controls, it is possible to overcome the limitations of each method, improving interaction performance by associating gestural commands with interface elements within a user’s field of view.
10

Florkiewicz, Brittany, and Matthew Campbell. "Chimpanzee facial gestures and the implications for the evolution of language." PeerJ 9 (September 22, 2021): e12237. http://dx.doi.org/10.7717/peerj.12237.

Abstract:
Great ape manual gestures are described as communicative, flexible, intentional, and goal-oriented. These gestures are thought to be an evolutionary pre-cursor to human language. Conversely, facial expressions are thought to be inflexible, automatic, and derived from emotion. However, great apes can make a wide range of movements with their faces, and they may possess the control needed to gesture with their faces as well as their hands. We examined whether chimpanzee facial expressions possess the four important gesture properties and how they compare to manual gestures. To do this, we quantified variables that have been previously described through largely qualitative means. Chimpanzee facial expressions met all four gesture criteria and performed remarkably similar to manual gestures. Facial gestures have implications for the evolution of language. If other mammals also show facial gestures, then the gestural origins of language may be much older than the human/great ape lineage.
11

Khairunizam, Wan, Khairul Ikram, Hafiz Halim, et al. "Analysis of attribute domain for geometrical gesture performed by arm movements." Indonesian Journal of Electrical Engineering and Computer Science 16, no. 2 (2019): 759. http://dx.doi.org/10.11591/ijeecs.v16.i2.pp759-766.

Abstract:
Hand gesture recognition commonly uses a camera to track hand movements, which are then transformed into a gesture database using various computational approaches. Motion tracking is utilized to map the coordinate points of the subject’s movement, either in a skeletal model or by marker tracing. Data from motion trackers usually contain massive coordinate sequences of marker movement, and a reliable method is required to select the best features and analyze these data. The open issue, however, is whether the selected features and data presentation are significant for the research. This research brings the concept of ontology design to arm gesture recognition systems by utilizing a motion capture system. An ontology is a conceptual structure mainly used to retrieve information by establishing relations in a complex data model. The proposed ontology framework is divided into three domains: the knowledge domain, the attribute domain and the process domain. The knowledge domain holds pre-processed gestural data from motion capture, while the attribute domain is the level where all the attribute elements are presented. This paper presents an analysis of the datasets in the attribute domain. The analysis is divided into two parts, a precision measure and an ANOVA test, both of which serve to establish the reliability of the datasets in the attribute domain. The precision measure is used to remove data common to all gestures. The statistical analysis yields a p-value lower than 0.01, which means the gestural data are statistically significant for use in the similarity measure.
12

Wolf, Catherine G. "A Comparative Study of Gestural and Keyboard Interfaces." Proceedings of the Human Factors Society Annual Meeting 32, no. 5 (1988): 273–77. http://dx.doi.org/10.1177/154193128803200506.

Abstract:
This paper presents results from two experiments which compared gestural and keyboard interfaces to a spreadsheet program; this is the first quantitative comparison of these two types of interfaces known to the author. The gestural interface employed gestures (hand-drawn marks such as carets or brackets) for commands and handwriting as input techniques. In one configuration, the input/output hardware consisted of a transparent digitizing tablet mounted on top of an LCD, which allowed the user to interact with the program by writing on the tablet with a stylus. The experiments found that participants were faster with the gestural interface: subjects performed the operations in about 72% of the time taken with the keyboard. In addition, there was a preference for the gestural interface over the keyboard interface. These findings are explained in terms of the smaller number of movements required to carry out an operation with the gestural interface, the greater ease of remembering gestural commands, and the benefits of performing operations directly on objects of interest.
13

LEVELT, WILLEM J. M. "Speech, gesture and the origins of language." European Review 12, no. 4 (2004): 543–49. http://dx.doi.org/10.1017/s1062798704000468.

Abstract:
During the second half of the 19th century, the psychology of language was invented as a discipline for the sole purpose of explaining the evolution of spoken language. These efforts culminated in Wilhelm Wundt's monumental Die Sprache of 1900, which outlined the psychological mechanisms involved in producing utterances and considered how these mechanisms could have evolved. Wundt assumes that articulatory movements were originally rather arbitrary concomitants of larger, meaningful expressive bodily gestures. The sounds such articulations happened to produce slowly acquired the meaning of the gesture as a whole, ultimately making the gesture superfluous. Over a century later, gestural theories of language origins still abound. I argue that such theories are unlikely and wasteful, given the biological, neurological and genetic evidence.
14

Givens, David B. "Reading palm-up signs: Neurosemiotic overview of a common hand gesture." Semiotica 2016, no. 210 (2016): 235–50. http://dx.doi.org/10.1515/sem-2016-0053.

Abstract:
This article explores ways in which the human nervous system encodes and decodes palm-up gestural signs, signals, and cues. Palm-up gestures and their accompanying speech acts evolved from an ancient neurological system that gave rise to both gestural (pectoral) communication and vocal (laryngeal) language (Bass and Chagnaud 2013, Shared developmental and evolutionary origins for neural basis of vocal–acoustic and pectoral–gestural signaling. Proceedings of the National Academy of Sciences 109. 10677–10684). Meanings of palm-up cues are multifaceted and nuanced, and express degrees of emotional helplessness, cognitive uncertainty, prosodic emphasis, and social deference. By themselves or in combination with other hand movements – such as reaching, showing, and pointing – palm-up cues are used to begin speaking turns, ask questions, request favors, and share personal opinions, feelings, and moods. The palm-up hand movement is a possibly universal signal of deference, in Erving Goffman’s (1956, The nature of deference and demeanor. American Anthropologist 58(3). 473–502) sense of the term, not unlike other deferential body-motion cues such as the anjali mudra, bow, curtsy, genuflection, kowtow, namaste, poussi-poussi, pranama, sampeah, and wai.
15

Holle, Henning, and Thomas C. Gunter. "The Role of Iconic Gestures in Speech Disambiguation: ERP Evidence." Journal of Cognitive Neuroscience 19, no. 7 (2007): 1175–92. http://dx.doi.org/10.1162/jocn.2007.19.7.1175.

Abstract:
The present series of experiments explored the extent to which iconic gestures convey information not found in speech. Electroencephalogram (EEG) was recorded as participants watched videos of a person gesturing and speaking simultaneously. The experimental sentences contained an unbalanced homonym in the initial part of the sentence (e.g., She controlled the ball …) and were disambiguated at a target word in the subsequent clause (which during the game … vs. which during the dance …). Coincident with the initial part of the sentence, the speaker produced an iconic gesture which supported either the dominant or the subordinate meaning. Event-related potentials were time-locked to the onset of the target word. In Experiment 1, participants were explicitly asked to judge the congruency between the initial homonym-gesture combination and the subsequent target word. The N400 at target words was found to be smaller after a congruent gesture and larger after an incongruent gesture, suggesting that listeners can use gestural information to disambiguate speech. Experiment 2 replicated the results using a less explicit task, indicating that the disambiguating effect of gesture is somewhat task-independent. Unrelated grooming movements were added to the paradigm in Experiment 3. The N400 at subordinate targets was found to be smaller after subordinate gestures and larger after dominant gestures as well as grooming, indicating that an iconic gesture can facilitate the processing of a less frequent word meaning. The N400 at dominant targets no longer varied as a function of the preceding gesture in Experiment 3, suggesting that the addition of meaningless movements weakened the impact of gesture. Thus, the integration of gesture and speech in comprehension does not appear to be an obligatory process but is modulated by situational factors such as the amount of observed meaningful hand movements.
16

McMahon, April, Paul Foulkes, and Laura Tollfree. "Gestural representation and Lexical Phonology." Phonology 11, no. 2 (1994): 277–316. http://dx.doi.org/10.1017/s0952675700001974.

Abstract:
Recent work on Articulatory Phonology (Browman & Goldstein 1986, 1989, 1991, 1992a, b) raises a number of questions, specifically involving the phonetics–phonology ‘interface’. One advantage of using Articulatory Phonology (henceforth ArtP), with its basic units of abstract gestures based on articulatory movements, is its ability to link phenomena previously seen as phonological to those which are conventionally described as allophonic, or even lower-level phonetic effects, since ‘gestures are... useful primitives for characterising phonological patterns as well as for analysing the activity of the vocal tract articulators’ (Browman & Goldstein 1991: 313). If both phonetics and phonology could ultimately be cast entirely in gestural terms, the phonetics–phonology interface might effectively cease to exist, at least in terms of units of analysis.
17

Davidson, Jane W. "The Role of the Body in the Production and Perception of Solo Vocal Performance: A Case Study of Annie Lennox." Musicae Scientiae 5, no. 2 (2001): 235–56. http://dx.doi.org/10.1177/102986490100500206.

Abstract:
The work described in this paper interprets the body movements of singers in an attempt to understand the relationships between physical control and the musical material being performed, and the performer's implicit and explicit expressive intentions. The work builds upon a previous literature which has suggested that the relationship between physical execution and the expression of mental states is a subtle and complex one. For instance, performers appear to develop a vocabulary of expressive gestures, yet these gestures – though perceptually discrete – co-exist and are even integrated to become part of the functional movement of playing. Additionally, there is the matter of how both musical and extra-musical concerns are coordinated between performer, co-performers and audience using body movements. A case study shows how, in the interaction between body style, musical expression and communication, movements of both an individual and a culturally-determined style are used. Many of these performance movements have clear functions and meanings: to communicate expressive intention (for instance, a sudden surge forwards to facilitate the execution of a loud musical passage, or a high curving hand gesture to link sections of the music during a pause); to communicate to the audience or co-performers a need for co-ordination or participation (for example, nodding the head to indicate “now” for the audience to join in a chorus of a song, or exchanging glances for the co-performer to take over a solo); to signal extra-musical concerns (for example, gesturing to the audience to remain quiet); to present information about the performer's personality, with individualized characteristics providing important cues (muted, contained gestures, or large, extravagant gestures, for example); and to show off to the audience. From these results a theory is developed to explain how gestural elements help to make a performance meaningful.
18

Janczyk, Markus, Aiping Xiong, and Robert W. Proctor. "Stimulus-Response and Response-Effect Compatibility With Touchless Gestures and Moving Action Effects." Human Factors: The Journal of the Human Factors and Ergonomics Society 61, no. 8 (2019): 1297–314. http://dx.doi.org/10.1177/0018720819831814.

Abstract:
Objective: To determine whether response-effect (R-E) compatibility or stimulus-response (S-R) compatibility is more critical for touchless gesture responses. Background: Content on displays can be moved in the same direction (S-R incompatible but R-E compatible) or opposite direction (S-R compatible but R-E incompatible) as the touchless gesture that produces the movement. Previous studies suggested that it is easier to produce a button-press response when it is R-E compatible (and S-R incompatible). However, whether this R-E compatibility effect also occurs for touchless gesture responses is unknown. Method: Experiments 1 and 2 employed an R-E compatibility manipulation in which participants made responses with an upward or downward touchless gesture that resulted in the display content moving in the same (compatible) or opposite (incompatible) direction. Experiment 3 employed an S-R compatibility manipulation in which the stimulus occurred at the upper or lower location on the screen. Results: Overall, only negligible influences of R-E compatibility on performing the touchless gestures were observed (in contrast to button-press responses), whereas S-R compatibility heavily affected the gestural responses. Conclusion: The R-E compatibility obtained in many previous studies with various types of responses appears not to hold for touchless gestures as responses. Application: The results suggest that in the design of touchless interfaces, unique factors may contribute to determining which mappings of gesture and display movements are preferred by users.
19

Roth, Wolff-Michael. "From epistemic (ergotic) actions to scientific discourse." Pragmatics and Cognition 11, no. 1 (2003): 141–70. http://dx.doi.org/10.1075/pc.11.1.06rot.

Abstract:
The role of gestures in communication is still debated: Some claim that gestures are merely ancillary forms of expressions, whereas others suggest a central role of gestures in the development of language. In this article, I provide data in support of the overarching hypothesis that gestures have a transitional function between ergotic/epistemic movements of hands and symbolic expressions. The context for the study of these transitions is constituted by school science laboratory activities conducted by students who are also asked to describe and explain while still within proximity of the materials of their investigations. It is hypothesized that communication is distributed across the context (verbal, gestural, material) and shifts increasingly into a verbal modality as students become familiar with the phenomena they are to learn about. Furthermore, it is hypothesized that initial temporal delays between gestures and the corresponding words decrease and finally disappear so that gestural and verbal modalities coincide. It is suggested that engaging in communication in the presence of material has an important cognitive function in that it affords a distribution of cognition across different modalities until individuals have developed the competence to express themselves effectively in the verbal modality.
20

Mizuguchi, Takashi, Ryoko Sugimura, and Toshisada Deguchi. "Children's Imitations of Movements are Goal-Directed and Context-Specific." Perceptual and Motor Skills 108, no. 2 (2009): 513–23. http://dx.doi.org/10.2466/pms.108.2.513-523.

Abstract:
Previous research indicates that imitation of gestures in preschool children is goal-directed. A goal may be a salient feature of a presented movement; that goal may be imitated correctly while other features are ignored, resulting in observable errors. Objects (e.g., a dot on the table) can become the most salient features, and the presence or absence of objects influences imitation responses. Here, imitation responses were examined under conditions in which objects could not be used directly as the most salient feature. 60 children (M age = 5:6) were assigned to Gestural, Dot, No-dot, and Un-dot conditions and were asked to imitate 20 movements. The type of presented movement and the occurrence of correct, mirror, and error responses were examined. Responses in the Un-dot condition were similar to those in the Dot condition, and error responses in the Un-dot condition were related to age. Children may extract a more abstract feature in a context without visible objects; this ability is associated with a cognitive mechanism developed in the preschool years.
APA, Harvard, Vancouver, ISO, and other styles
21

Fonteles, Joyce Horn, Édimo Sousa Silva, and Maria Andréia Formico Rodrigues. "Gesture-Driven Interaction Using the Leap Motion to Conduct a 3D Particle System: Evaluation and Analysis of an Orchestral Performance." Journal on Interactive Systems 6, no. 2 (2015): 1. http://dx.doi.org/10.5753/jis.2015.660.

Full text
Abstract:
In this work, we present and evaluate an interactive simulation of 3D particles conducted by the Leap Motion, for an orchestral arrangement. A real-time visual feedback during gesture entry is generated for the conductor and the audience, through a set of particle emitters displayed on the screen and the path traced by the captured gesture. We use two types of data input: the captured left and right hand conducting gestures (some universal movements, such as the beat patterns for the most common time signatures, the indication of a specific section of the orchestra, and the cutoff gestures), which are responsible for setting the tempo and dynamics of the orchestra; and a MIDI file, which contains information about the score of an orchestral arrangement, defining which notes each musical instrument should play. As regards the gestural input of the conductor, we have considered two musical elements of expression: tempo and dynamics. Besides performing functional testing, we analyzed the simulation results focusing on the occurrence of false positives, false negatives, positional deviations and latency. Moreover, a professional conductor evaluated our system and provided qualitative feedback about it.
APA, Harvard, Vancouver, ISO, and other styles
22

White, T. P., F. Borgan, O. Ralley, and S. S. Shergill. "You looking at me?: Interpreting social cues in schizophrenia." Psychological Medicine 46, no. 1 (2015): 149–60. http://dx.doi.org/10.1017/s0033291715001622.

Full text
Abstract:
Background. Deficits in the perception of social cues are common in schizophrenia and predict functional outcome. While effective communication depends on deciphering both verbal and non-verbal features, work on non-verbal communication in the disorder is scarce. Method. This behavioural study of 29 individuals with schizophrenia and 25 demographically matched controls used silent video-clips to examine gestural identification, its contextual modulation and related metacognitive representations. Results. In accord with our principal hypothesis, we observed that individuals with schizophrenia exhibited a preserved ability to identify archetypal gestures and did not differentially infer communicative intent from incidental movements. However, patients were more likely than controls to perceive gestures as self-referential when confirmatory evidence was ambiguous. Furthermore, the severity of their current hallucinatory experience inversely predicted their confidence ratings associated with these self-referential judgements. Conclusions. These findings suggest a deficit in the contextual refinement of social-cue processing in schizophrenia that is potentially attributable to impaired monitoring of a mirror mechanism underlying intentional judgements, or to an incomplete semantic representation of gestural actions. Non-verbal communication may be improved in patients through psychotherapeutic interventions that include performance and perception of gestures in group interactions.
APA, Harvard, Vancouver, ISO, and other styles
23

Rusiewicz, Heather Leavy, and Jessica Lynch Rivera. "The Effect of Hand Gesture Cues Within the Treatment of /r/ for a College-Aged Adult With Persisting Childhood Apraxia of Speech." American Journal of Speech-Language Pathology 26, no. 4 (2017): 1236–43. http://dx.doi.org/10.1044/2017_ajslp-15-0172.

Full text
Abstract:
Purpose Despite the widespread use of hand movements as visual and kinesthetic cues to facilitate accurate speech produced by individuals with speech sound disorders (SSDs), no experimental investigation of gestural cues that mimic the spatiotemporal parameters of speech sounds (e.g., holding fingers and thumb together and “popping” them to cue /p/) currently exists. The purpose of this study was to examine the effectiveness of manual mimicry cues within a multisensory intervention of persisting childhood apraxia of speech (CAS). Method A single-subject ABAB withdrawal design was implemented to assess the accuracy of vowel + /r/ combinations produced by a 21-year-old woman with persisting CAS. The effect of manual mimicry gestures paired with multisensory therapy consisting of verbal instructions and visual modeling was assessed via clinician and naïve listener ratings of target sound accuracy. Results According to the perceptual ratings of the treating clinician and 28 naïve listeners, the participant demonstrated improved speech sound accuracy as a function of the manual mimicry/multisensory therapy. Conclusions These data offer preliminary support for the incorporation of gestural cues in therapy for CAS and other SSDs. The need for continued research on the interaction of speech and manual movements for individuals with SSDs is discussed.
APA, Harvard, Vancouver, ISO, and other styles
24

Mangiamele, Lisa A., Matthew J. Fuxjager, Eric R. Schuppe, Rebecca S. Taylor, Walter Hödl, and Doris Preininger. "Increased androgenic sensitivity in the hind limb muscular system marks the evolution of a derived gestural display." Proceedings of the National Academy of Sciences 113, no. 20 (2016): 5664–69. http://dx.doi.org/10.1073/pnas.1603329113.

Full text
Abstract:
Physical gestures are prominent features of many species’ multimodal displays, yet how evolution incorporates body and leg movements into animal signaling repertoires is unclear. Androgenic hormones modulate the production of reproductive signals and sexual motor skills in many vertebrates; therefore, one possibility is that selection for physical signals drives the evolution of androgenic sensitivity in select neuromotor pathways. We examined this issue in the Bornean rock frog (Staurois parvus, family: Ranidae). Males court females and compete with rivals by performing both vocalizations and hind limb gestural signals, called “foot flags.” Foot flagging is a derived display that emerged in the ranids after vocal signaling. Here, we show that administration of testosterone (T) increases foot flagging behavior under seminatural conditions. Moreover, using quantitative PCR, we also find that adult male S. parvus maintain a unique androgenic phenotype, in which androgen receptor (AR) in the hind limb musculature is expressed at levels ∼10× greater than in two other anuran species, which do not produce foot flags (Rana pipiens and Xenopus laevis). Finally, because males of all three of these species solicit mates with calls, we accordingly detect no differences in AR expression in the vocal apparatus (larynx) among taxa. The results show that foot flagging is an androgen-dependent gestural signal, and its emergence is associated with increased androgenic sensitivity within the hind limb musculature. Selection for this novel gestural signal may therefore drive the evolution of increased AR expression in key muscles that control signal production to support adaptive motor performance.
APA, Harvard, Vancouver, ISO, and other styles
25

Bloomfield, Lauren, Elizabeth Lane, Madhur Mangalam, and Damian G. Kelty-Stephen. "Perceiving and remembering speech depend on multifractal nonlinearity in movements producing and exploring speech." Journal of The Royal Society Interface 18, no. 181 (2021): 20210272. http://dx.doi.org/10.1098/rsif.2021.0272.

Full text
Abstract:
Speech perception and memory for speech require active engagement. Gestural theories have emphasized mainly the effect of speaker's movements on speech perception. They fail to address the effects of listener movement, focusing on communication as a boundary condition constraining movement among interlocutors. The present work attempts to break new ground by using multifractal geometry of physical movement as a common currency for supporting both sides of the speaker–listener dyads. Participants self-paced their listening to a narrative, after which they completed a test of memory querying their narrative comprehension and their ability to recognize words from the story. The multifractal evidence of nonlinear interactions across timescales predicted the fluency of speech perception. Self-pacing movements that enabled listeners to control the presentation of speech sounds constituted a rich exploratory process. The multifractal nonlinearity of this exploration supported several aspects of memory for the perceived spoken language. These findings extend the role of multifractal geometry in the speaker's movements to the narrative case of speech perception. In addition to posing novel basic research questions, these findings make a compelling case for calibrating multifractal structure in text-to-speech synthesizers for better perception and memory of speech.
APA, Harvard, Vancouver, ISO, and other styles
26

Buck, Bryony, Jennifer MacRitchie, and Nicholas J. Bailey. "The Interpretive Shaping of Embodied Musical Structure in Piano Performance." Empirical Musicology Review 8, no. 2 (2013): 92. http://dx.doi.org/10.18061/emr.v8i2.3929.

Full text
Abstract:
Research has indicated that the magnitude of physical expressive movements during a performance helps to communicate a musician's affective intent. However, the underlying function of these performance gestures remains unclear. Nine highly skilled solo pianists are examined here to investigate the effect of structural interpretation on performance motion patterns. Following previous findings that these performers generate repeated patterns of motion through overall upper-body movements corresponding to phrasing structure, this study now investigates the particular shapes traced by these movements. Through this we identify universal and idiosyncratic features within the shapes of motion patterns generated by these performers. Gestural shapes are examined for performances of Chopin’s explicitly structured A major Prelude (Op. 28, No. 7) and are related to individual interpretations of the more complex phrasing structure of Chopin’s B minor Prelude (Op. 28, No. 6). Findings reveal a universal general embodiment of phrasing structure and other higher-level structural features of the music. The physical makeup of this embodiment, however, is particular to both the performer and the piece being performed. Examining the link between performers' movements and interpreted structure strengthens understanding of the connection between body and instrument, furthering awareness of the relations between cognitive interpretation and physical expression of structure within music performance.
APA, Harvard, Vancouver, ISO, and other styles
27

Merello, Marcelo, Jorge Balej, and Ramon Leiguarda. "Pallidotomy in Parkinson's disease improves single-joint, repetitive, ballistic movements, but fails to modify multijoint, repetitive, gestural movements." Movement Disorders 18, no. 3 (2003): 280–86. http://dx.doi.org/10.1002/mds.10336.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Schacher, Jan C. "Gestural Performance of Electronic Music—A “NIME” Practice as Research." Leonardo 49, no. 1 (2016): 84–85. http://dx.doi.org/10.1162/leon_a_01122.

Full text
Abstract:
The practice of gestural electronic music performance provides a valid context for artistic or practice-based investigations in the field of ’NIME.’ To this end, the material and conceptual conditions for the development of performance pieces using gestural actions need to be explored. The use of digital musical instruments and concepts for the expressive performance with digital sounds leads to questions of perception—by the musician and by the audience—of movements and actions, the body, the instruments, and of their affordances. When considering this performance mode as a topic for investigation, it becomes evident that in order to be based on practice, research in this field needs a definition and differentiation that helps to identify the specific perspectives that are only made possible through application in an actual artistic practice.
APA, Harvard, Vancouver, ISO, and other styles
29

Duncan, Susan D. "Gesture, verb aspect, and the nature of iconic imagery in natural discourse." Gesture 2, no. 2 (2002): 183–206. http://dx.doi.org/10.1075/gest.2.2.04dun.

Full text
Abstract:
Linguistic analyses of Mandarin Chinese and English have detailed the differences between the two languages in terms of the devices each makes available for expressing distinctions in the temporal contouring of events — verb aspect and Aktionsart. In this study, adult native speakers of each language were shown a cartoon, a movie, or a series of short action sequences and then videotaped talking about what they had seen. Comparisons revealed systematic within-language covariation of choice of aspect and/or Aktionsart in speech with features of co-occurring iconic gestures. In both languages, the gestures that speakers produced in imperfective aspect-marked speech contexts were more likely to take longer to produce and were more complex than those in perfective aspect speech contexts. Further, imperfective-progressive aspect-marked spoken utterances regularly accompanied iconic gestures in which the speaker’s hands engaged in some kind of temporally-extended, repeating or‘agitated’ movements. Gestures sometimes incorporated this type of motion even when there was nothing corresponding to it in the visual stimulus; for example, when speakers described events of stasis. These facts suggest that such gestural agitation may derive from an abstract level of representation, perhaps linked to aspectual view itself. No significant between-language differences in aspect- or Aktionsart-related gesturing were observed. We conclude that gestural representations of witnessed events, when performed in conjunction with speech, are not simply derived from visual images, stored as perceived in the stimulus, and transposed as faithfully as possible to the hands and body of the speaker (cf. Hadar & Butterworth, 1997). Rather, such gestures are part of a linguistic-conceptual representation (McNeill & Duncan, 2000) in which verb aspect has a role. 
We further conclude that the noted differences between the systems for marking aspectual distinctions in spoken Mandarin and English are at a level of patterning that has little or no influence on speech-co-occurring imagistic thinking.
APA, Harvard, Vancouver, ISO, and other styles
30

Frențiu, Luminița. "The Management of Non-Verbal Signs in Disagreements." Romanian Journal of English Studies 16, no. 1 (2019): 119–22. http://dx.doi.org/10.1515/rjes-2019-0014.

Full text
Abstract:
Gesture, outlined in terms of physical activity, philosophical theory and linguistics, is strongly connected to the course of human interaction. Utterances are accompanied by facial expressions, shifts of gaze, movements, posture, etc. Therefore, we support the thesis that a gestural approach to analyzing communicative events is appropriate, since non-verbal components communicate attitudes and emotions and complete the verbal interchange in numerous ways. Disagreements are especially prone to such an analysis.
APA, Harvard, Vancouver, ISO, and other styles
31

Child, Simon, Anna Theakston, and Simone Pika. "How do modelled gestures influence preschool children’s spontaneous gesture production?" Gesture 14, no. 1 (2014): 1–25. http://dx.doi.org/10.1075/gest.14.1.01chi.

Full text
Abstract:
Around the age of nine months, children start to communicate by using first words and gestures, during interactions with caregivers. The question remains as to how older preschool children incorporate the gestures they observe into their own gestural representations of previously unseen objects. Two accounts of gesture production (the ‘gesture learning’ and ‘simulated representation’ accounts) offer different predictions for how preschool children use the gestures they observe when describing objects. To test these two competing accounts of gesture production, we showed 42 children (mean age: 45 months 14 days) four novel objects using speech only, or speech accompanied by either movement or physical feature gestures. Analyses revealed that (a) overall symbolic gesture production showed a high degree of individual variability, and (b) distinct observed gesture types influenced the children’s subsequent gesture use. Specifically, it was found that children preferred to match movement gestures in a subsequent communicative interaction including the same objects, but not physical feature gestures. We conclude that the observation of gestures (in particular gestures that depict movement) may act to change preschool children’s object representations, which in turn influences how they depict objects in space.
APA, Harvard, Vancouver, ISO, and other styles
32

Mantovani-Nagaoka, Joana, and Karin Zazo Ortiz. "The influence of age, gender and education on the performance of healthy individuals on a battery for assessing limb apraxia." Dementia & Neuropsychologia 10, no. 3 (2016): 232–36. http://dx.doi.org/10.1590/s1980-5764-2016dn1003010.

Full text
Abstract:
Introduction: Apraxia is defined as a disorder of learned skilled movements, in the absence of elementary motor or sensory deficits and general cognitive impairment, such as inattention to commands, object-recognition deficits or poor oral comprehension. Limb apraxia has long been a challenge for clinical assessment and understanding and covers a wide spectrum of disorders, all involving motor cognition and the inability to perform previously learned actions. Demographic variables such as gender, age, and education can influence the performance of individuals on different neuropsychological tests. Objective: The present study aimed to evaluate the performance of healthy subjects on a limb apraxia battery and to determine the influence of gender, age, and education on the praxis skills assessed. Methods: Forty-four subjects underwent a limb apraxia battery composed of numerous subtests assessing both the semantic aspects of gestural production and motor performance itself. The tasks encompassed lexical-semantic aspects related to gestural production and motor activity in response to verbal commands and imitation. Results: We observed no gender effects on any of the subtests. Only the subtest involving visual recognition of transitive gestures showed a correlation between performance and age. However, we observed that education level influenced subject performance on all subtests involving motor actions, and for most of these, moderate correlations were observed between education level and performance of the praxis tasks. Conclusion: We conclude that the education level of participants can have an important influence on the outcome of limb apraxia tests.
APA, Harvard, Vancouver, ISO, and other styles
33

Son, Minjung. "Normalized gestural overlap measures and spatial properties of lingual movements in Korean non-assimilating contexts*." Phonetics and Speech Sciences 11, no. 3 (2019): 31–38. http://dx.doi.org/10.13064/ksss.2019.11.3.031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

WILBUR, RONNIE B., and EVGUENIA MALAIA. "CONTRIBUTIONS OF SIGN LANGUAGE RESEARCH TO GESTURE UNDERSTANDING: WHAT CAN MULTIMODAL COMPUTATIONAL SYSTEMS LEARN FROM SIGN LANGUAGE RESEARCH." International Journal of Semantic Computing 02, no. 01 (2008): 5–19. http://dx.doi.org/10.1142/s1793351x08000324.

Full text
Abstract:
This paper considers neurological, formational and functional similarities between gestures and signed verb predicates. From analysis of verb sign movement, we offer suggestions for analyzing gestural movement (motion capture, kinematic analysis, trajectory internal structure). From analysis of verb sign distinctions, we offer suggestions for analyzing co-speech gesture functions.
APA, Harvard, Vancouver, ISO, and other styles
35

Choi, Hyo-Rim, and TaeYong Kim. "Modified Dynamic Time Warping Based on Direction Similarity for Fast Gesture Recognition." Mathematical Problems in Engineering 2018 (2018): 1–9. http://dx.doi.org/10.1155/2018/2404089.

Full text
Abstract:
We propose a modified dynamic time warping (DTW) algorithm that compares gesture-position sequences based on the direction of the gestural movement. Standard DTW does not specifically consider the two-dimensional characteristic of the user’s movement. Therefore, in gesture recognition, the sequence comparison by standard DTW needs to be improved. The proposed gesture-recognition system compares the sequences of the input gesture’s position with gesture positions saved in the database and selects the most similar gesture by filtering out unrelated gestures. The suggested algorithm uses the cosine similarity of the movement direction at each moment to calculate the difference and reflects the characteristics of the gesture movement by using the ratio of the Euclidean distance and the proportional distance to the calculated difference. Selective spline interpolation assists in solving the issue of recognition-decline at instances of gestures. Through experiments with public databases (MSRC-12 and G3D), the suggested algorithm revealed an improved performance on both databases compared to other methods.
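The direction-aware cost described in this abstract can be sketched roughly as follows; this is an illustrative reconstruction, not the authors' implementation, and the exact weighting of distance against cosine dissimilarity is an assumption.

```python
import math

def direction_cost(p_prev, p, q_prev, q):
    """Combine Euclidean distance between two trajectory points with the
    cosine dissimilarity of the movement directions leading into them."""
    u = (p[0] - p_prev[0], p[1] - p_prev[1])   # movement vector in sequence A
    v = (q[0] - q_prev[0], q[1] - q_prev[1])   # movement vector in sequence B
    dist = math.hypot(p[0] - q[0], p[1] - q[1])
    nu, nv = math.hypot(*u), math.hypot(*v)
    if nu == 0 or nv == 0:                     # no movement: distance only
        return dist
    cos_sim = (u[0] * v[0] + u[1] * v[1]) / (nu * nv)
    return dist * (1.0 + (1.0 - cos_sim))      # penalize diverging directions

def dtw(seq_a, seq_b):
    """Standard DTW over 2D position sequences using the direction-aware cost."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = direction_cost(seq_a[max(i - 2, 0)], seq_a[i - 1],
                               seq_b[max(j - 2, 0)], seq_b[j - 1])
            D[i][j] = c + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Identical trajectories yield zero cost, while trajectories that pass near the same points but move in different directions are penalized, which is the intuition behind filtering out unrelated gestures.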
APA, Harvard, Vancouver, ISO, and other styles
36

Schroder, Ulrike Agathe. "Between cultures." Journal of Speech Sciences 9 (September 9, 2020): 49–71. http://dx.doi.org/10.20396/joss.v9i00.14963.

Full text
Abstract:
It is still hard to find the examination of real interaction from a cognitive, ‘embodied’ and multimodal perspective in empirical practice, concurrently maintaining the operational framework of conversation analysis. The following article aims at showing how co-participants in talk-in-interaction co-construct intercultural experience multimodally, that is, on verbal, prosodic and gestural-corporal levels. Based on two sequences taken from the ICMI corpus of the research group Intercultural Communication in Multimodal Interactions, it will be revealed how (inter)cultural conceptualizations are (co-)built by means of iconic, metaphorical and beat gestures, by gaze, posture and body movements, as well as by prosodic cues such as pitch jumps and contours, accents, volume and tempo. Concurrently, all those means serve as contextualization cues to express the interlocutors’ involvement, stance, alignment, as well as affiliation, and can be conceived as ‘points of access’ to deeper entrenched schemata related to the participant’s (inter)cultural experiences. In this sense, the study aims to bridge the gap between conversational and interactional linguistics, on the one hand, and cognitive and cultural linguistics, on the other.
APA, Harvard, Vancouver, ISO, and other styles
37

Shun-chiu, Yau. "Derivation Lexicale En Langues Gestuelles Et Chinoises." Cahiers de Linguistique Asie Orientale 16, no. 2 (1987): 213–36. http://dx.doi.org/10.1163/19606028-90000025.

Full text
Abstract:
Given a certain number of basic signs (gestural lexical items), the lexicon of a sign language can expand by modifying the morphology of its basic items. The case I present here concerns only those data where morphological modification is exploited as a lexical branching device, i.e., where a sign acquires a new signification but no additional morphemes after undergoing such a modification, and the root form of that sign retains its original meaning. The movement parameter is the principal device of an intrinsic nature for lexical branching in sign languages. In this respect, movement modification in sign language is comparable to tonal modification in certain oral languages. In Cantonese, for example, the difference between cognate pairs such as “sugar:sweet” is marked by a tonal shift. Tones are generally attached to the vowels. Since vowels are the nuclei of syllables in a sonorous modality, as movements are of signs in a visually dynamic medium, we can consider the two lexical branching mechanisms in their respective systems to be parallel linguistic phenomena.
APA, Harvard, Vancouver, ISO, and other styles
38

BHUYAN, M. K., P. K. BORA, and D. GHOSH. "AN INTEGRATED APPROACH TO THE RECOGNITION OF A WIDE CLASS OF CONTINUOUS HAND GESTURES." International Journal of Pattern Recognition and Artificial Intelligence 25, no. 02 (2011): 227–52. http://dx.doi.org/10.1142/s0218001411008592.

Full text
Abstract:
Gesture segmentation distinguishes meaningful gestures from unintentional movements and is a prerequisite to continuous gesture recognition, as it locates the start and end points of a gesture in an input sequence. Yet this is an extremely difficult task, due both to the multitude of possible gesture variations in spatio-temporal space and to the co-articulation/movement epenthesis of successive gestures. In this paper, we focus our attention on coping with this problem in continuous gesture recognition. This requires gesture spotting that distinguishes meaningful gestures from co-articulation and unintentional movements. In our method, we first segment the input video stream by detecting gesture boundaries at which the hand pauses for a while during gesturing. Next, every segment is checked for movement epenthesis and co-articulation via finite state machine (FSM) matching or by using hand motion information. Thus, movement epenthesis phases are detected and eliminated from the sequence, and we are left with a set of isolated gestures. Finally, we apply different recognition schemes to identify each individual gesture in the sequence. Our experimental results show that the proposed scheme is suitable for recognizing continuous gestures with different spatio-temporal behavior.
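The pause-based boundary detection described in this abstract might be sketched as below; the speed threshold is an illustrative assumption, and the paper's subsequent FSM-based epenthesis check is omitted.

```python
import math

def segment_at_pauses(positions, speed_thresh=0.05):
    """Split a sequence of 2D hand positions into candidate gesture segments,
    cutting wherever the inter-frame speed drops below a threshold (a pause).
    Returns (start, end) index pairs into the inter-frame movement sequence."""
    moving = [math.dist(positions[i], positions[i - 1]) >= speed_thresh
              for i in range(1, len(positions))]
    segments, start = [], None
    for i, is_moving in enumerate(moving):
        if is_moving and start is None:
            start = i                       # segment begins at first moving frame
        elif not is_moving and start is not None:
            segments.append((start, i))     # a pause closes the open segment
            start = None
    if start is not None:                   # sequence ended while still moving
        segments.append((start, len(moving)))
    return segments
```

For instance, a trajectory that moves, holds still for two frames, and moves again yields two candidate segments, each of which would then be checked for movement epenthesis before recognition.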
APA, Harvard, Vancouver, ISO, and other styles
39

Kaplan, Gisela. "Pointing gesture in a bird- merely instrumental or a cognitively complex behavior?" Current Zoology 57, no. 4 (2011): 453–67. http://dx.doi.org/10.1093/czoolo/57.4.453.

Full text
Abstract:
Gestures, particularly pointing, are regarded as important pre-speech acts. Intentional and referential pointing has been shown previously in humans and apes but not in songbirds, although some avian species show cognitive abilities rivaling those of apes, and their brain structures and functions show putative preconditions for referential gestural signaling (i.e. mirror neurons, links of vocal learning nuclei to discrete brain areas active during limb and body movements). The results reported are based on trials testing predator detection and responses to a taxidermic model of a wedge-tailed eagle by Australian magpies Gymnorhina tibicen. Magpies were subjected to three conditions of finding this model in their territory (open, sheltered and hidden). In the sheltered and hidden conditions, the discoverer simultaneously engaged in alarm calls and beak pointing, a behavior that has not been described previously. Other group members at once assembled and, after watching the first bird, adopted the same posture by pointing to the location of the intruder. The question is whether beak and body movements orienting towards important stimuli or events are instances of arousal, imitation or intentional communication. The latter presupposes that onlookers interpret the signal and respond by altering their own behavior appropriate to the original stimulus and not merely by imitating the first signaler. Evidence presented here indicates that the act of pointing may well be a complex cognitive behavior, i.e., an intentional and referential signal, showing that pointing is not limited to having hands and arms.
APA, Harvard, Vancouver, ISO, and other styles
40

Olthuis, Raimey, John van der Kamp, Koen Lemmink, and Simone Caljouw. "Touchscreen Pointing and Swiping: The Effect of Background Cues and Target Visibility." Motor Control 24, no. 3 (2020): 422–34. http://dx.doi.org/10.1123/mc.2019-0096.

Full text
Abstract:
By assessing the precision of gestural interactions with touchscreen targets, the authors investigate how the type of gesture, target location, and scene visibility impact movement endpoints. Participants made visually and memory-guided pointing and swiping gestures with a stylus to targets located in a semicircle. Specific differences in aiming errors were identified between swiping and pointing. In particular, participants overshot the target more when swiping than when pointing and swiping endpoints showed a stronger bias toward the oblique than pointing gestures. As expected, the authors also found specific differences between conditions with and without delays. Overall, the authors observed an influence on movement execution from each of the three parameters studied and uncovered that the information used to guide movement appears to be gesture specific.
APA, Harvard, Vancouver, ISO, and other styles
41

Schröder, Ulrike. "Zwischen den Welten: zur kognitiven und kommunikativen Ko-Konstruktion von Alteritätserfahrung." Linguistik Online 104, no. 4 (2020): 137–65. http://dx.doi.org/10.13092/lo.104.7321.

Full text
Abstract:
It is still hard to find the examination of real interaction from a cognitive, “embodied” and multimodal perspective in empirical practice, concurrently maintaining the operational framework of conversation analysis. The following article aims at showing how co-participants in talk-in-interaction co-construct intercultural experience multimodally, that is, on verbal, prosodic and gestural-corporal levels. Based on five sequences taken from the ICMI corpus of the research group Intercultural Communication in Multimodal Interactions, founded at the Federal University of Minas Gerais in 2010, it will be revealed how interactants narrate their intercultural experience based on reenactments, how (inter)cultural conceptualizations are (co-)built by means of iconic, metaphorical and beat gestures, by gaze, posture and body movements, as well as by prosodic cues such as pitch jumps and contours, accents, volume and tempo. Concurrently, all those means serve as contextualization cues to express the interlocutors’ involvement, stance, alignment as well as affiliation, and can be conceived as “points of access” to deeper entrenched schemata related to the participant’s (inter)cultural experiences. In this sense, the study aims to bridge the gap between conversational and interactional linguistics, on the one hand, and cognitive and cultural linguistics, on the other.
APA, Harvard, Vancouver, ISO, and other styles
42

K, Srinivas, and Manoj Kumar Rajagopal. "STUDY OF HAND GESTURE RECOGNITION AND CLASSIFICATION." Asian Journal of Pharmaceutical and Clinical Research 10, no. 13 (2017): 25. http://dx.doi.org/10.22159/ajpcr.2017.v10s1.19540.

Full text
Abstract:
To recognize different hand gestures and achieve efficient classification of the static and dynamic hand movements used for communication. Static and dynamic hand movements are first captured using gesture recognition devices including the Kinect device, hand movement sensors, connecting electrodes, and accelerometers. These gestures are processed using hand gesture recognition algorithms such as multivariate fuzzy decision trees, hidden Markov models (HMM), dynamic time warping frameworks, latent regression forests, support vector machines, and surface electromyography. Hand movements made by one or both hands are captured by gesture capture devices under proper illumination conditions. These captured gestures are processed for occlusions and close finger interactions to identify the right gesture, classify it, and ignore the intermittent gestures. Real-time hand gesture recognition needs robust algorithms like HMM to detect only the intended gesture. Classified gestures are then compared for effectiveness with training and tested against standard datasets such as sign language alphabets and the KTH datasets. Hand gesture recognition plays a very important role in applications such as sign language recognition, robotics, television control, rehabilitation, and music orchestration.
APA, Harvard, Vancouver, ISO, and other styles
43

Tavares, Rafael, Hugo Mesquita, Rui Penha, Paulo Abreu, and Maria Teresa Restivo. "An Instrumented Glove for Control Audiovisual Elements in Performing Arts." International Journal of Online Engineering (iJOE) 14, no. 02 (2018): 173. http://dx.doi.org/10.3991/ijoe.v14i02.8247.

Full text
Abstract:
The use of cutting-edge technologies such as wearable devices to control reactive audiovisual systems is rarely applied in more conventional stage performances, such as opera. This work reports a cross-disciplinary approach to the research and development of the WMTSensorGlove, a data glove used in an opera performance to control audiovisual elements on stage through gestural movements. A system architecture for the interaction between the wireless wearable device and the different audiovisual systems is presented, taking advantage of the Open Sound Control (OSC) protocol. The developed wearable system was used as an audiovisual controller in “As sete mulheres de Jeremias Epicentro”, a Portuguese opera by Quarteto Contratempus, which premiered in September 2017.
APA, Harvard, Vancouver, ISO, and other styles
44

Leiguarda, R., M. Merello, J. Balej, S. Starkstein, and C. D. Marsden. "1-30-14 Disruption of spatial organization of gestural movements in patients with Parkinson's disease: A kinematic analysis." Journal of the Neurological Sciences 150 (September 1997): S45. http://dx.doi.org/10.1016/s0022-510x(97)85052-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Dwijayanti, Ida, I. Ketut Budayasa, and Tatag Yuli Eko Siswono. "Students' gestures in understanding algebraic concepts." Beta: Jurnal Tadris Matematika 12, no. 2 (2019): 133–43. http://dx.doi.org/10.20414/betajtm.v12i2.307.

Full text
Abstract:
[English]: The purpose of this qualitative exploratory study was to analyze students’ gestures in understanding algebraic expression. It involved 59 7th-grade students in Semarang city, Indonesia. Students’ gestures were identified through interviews and observations, then analyzed in three stages: data condensation, data display, and drawing and verifying conclusion. Time triangulation was utilized to assure data validity. The results showed that students employed: (1) direct gestures as a representation of coefficients and variables in the form of hand movements forming the shape of objects that they recognize in the everyday environment, (2) indirect gestures as a representation of coefficients and variables in the form of hand movements as if forming the shape of objects that they recognize in the daily environment then followed by consistent and repetitive hand movements as a representation of the coefficients, (3) direct gesture representing constants in the form of hand movements forming a specific number, and (4) writing gestures and pointing gestures to strengthen the explanation given. The present study concludes that the gestures made by the students in understanding the concepts of algebraic expression consist of representation, pointing, and writing. This study yields an important description of students' gestures and types of gestures about the algebraic concept, which provide a further understanding of the topic. 
 Keywords: Gesture, Conceptual understanding, Algebra
 [Bahasa]: This qualitative study aimed to analyze students’ gestures in understanding algebraic expressions. It involved 59 students at a junior high school in Semarang. Gesture data were identified through observation and interviews, then analyzed through data condensation, data display, and conclusion drawing and verification. Data validity was verified using time triangulation. The results showed that students used (1) direct gestures as an embodiment of understanding the concepts of coefficients and variables, in the form of hand movements forming objects recognized in the everyday environment, (2) indirect gestures as representations of coefficients and variables, in the form of hand movements as if forming objects recognized in the everyday environment, followed by consistent and repetitive hand movements representing the coefficients, (3) direct gestures representing constants through hand movements forming a specific number, and (4) writing and pointing gestures to reinforce the explanation given. The study concludes that the gestures students form in understanding algebraic expressions consist of representational gestures (direct and indirect), pointing gestures, and writing gestures. It yields an important description of students’ gestures and gesture types concerning the algebraic concept, providing further understanding of the topic.
 Keywords: Gesture, Conceptual understanding, Algebra
APA, Harvard, Vancouver, ISO, and other styles
46

Earis, Helen, and Kearsy Cormier. "Point of view in British Sign Language and spoken English narrative discourse: the example of “The Tortoise and the Hare”." Language and Cognition 5, no. 4 (2013): 313–43. http://dx.doi.org/10.1515/langcog-2013-0021.

Full text
Abstract:
This paper discusses how point of view (POV) is expressed in British Sign Language (BSL) and spoken English narrative discourse. Spoken languages can mark changes in POV using strategies such as direct/indirect discourse, whereas signed languages can mark changes in POV in a unique way using “role shift”. Role shift is where the signer “becomes” a referent by taking on attributes of that referent, e.g. facial expression. In this study, two native BSL users and two native British English speakers were asked to tell the story “The Tortoise and the Hare”. The data were then compared to see how point of view is expressed and maintained in both languages. The results indicated that the spoken English users preferred the narrator's perspective, whereas the BSL users preferred a character's perspective. This suggests that spoken and signed language users may structure stories in different ways. However, some co-speech gestures and facial expressions used in the spoken English stories to denote characters' thoughts and feelings bear resemblance to the hand movements and facial expressions used by the BSL storytellers. This suggests that while approaches to storytelling may differ, both languages share some gestural resources which manifest themselves in different ways across different modalities.
APA, Harvard, Vancouver, ISO, and other styles
47

Nafisi, Julia. "Gesture and body-movement as teaching and learning tools in the classical voice lesson: a survey into current practice." British Journal of Music Education 30, no. 3 (2013): 347–67. http://dx.doi.org/10.1017/s0265051712000551.

Full text
Abstract:
This article discusses the use of gesture and body-movement in the teaching of singing and reports on a survey amongst professional singing teachers in Germany regarding their use of gesture and body movement as pedagogic tools in their teaching. The nomenclature of gestures and movements used in the survey is based on a previous study by the author (Nafisi, 2008, 2010) categorising movements in the teaching of singing according to their pedagogical intent into Physiological Gestures, Sensation-related Gestures, Musical Gestures and Body-Movements. The survey demonstrated that Gestures were used by a significant number of voice teachers to enhance explanation and/or demonstration, that a significant number of voice teachers encouraged their students to carry out similar Gestures whilst singing to enhance their learning experience and that another type of essentially non-expressive Body-Movements was also encouraged by a significant number of voice teachers to enhance students’ learning. The paper validates the author's nomenclature and offers some hitherto unpublished insights.
APA, Harvard, Vancouver, ISO, and other styles
48

Ram, Sharan, Anjan Mahadevan, Hadi Rahmat-Khah, Guiseppe Turini, and Justin G. Young. "Effect of Control-Display Gain and Mapping and Use of Armrests on Accuracy in Temporally Limited Touchless Gestural Steering Tasks." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (2017): 380–84. http://dx.doi.org/10.1177/1541931213601577.

Full text
Abstract:
Touchless gestural controls are becoming an important natural input technique for interaction with emerging virtual environments, but design parameters that improve task performance while reducing user fatigue require investigation. This experiment aims to understand how control-display (CD) parameters such as gain and mapping, as well as the use of armrests, affect gesture accuracy in specific movement directions. Twelve participants completed temporally constrained two-dimensional steering tasks using free-hand fingertip gestures in several conditions. Use of an armrest, increased CD gain, and horizontal mapping significantly reduced success rate. The results show that optimal transfer functions for gestures will depend on the movement direction as well as arm support features.
APA, Harvard, Vancouver, ISO, and other styles
49

Ruprecht, Lucia. "Gesture, Interruption, Vibration: Rethinking Early Twentieth-Century Gestural Theory and Practice in Walter Benjamin, Rudolf von Laban, and Mary Wigman." Dance Research Journal 47, no. 2 (2015): 23–41. http://dx.doi.org/10.1017/s0149767715000200.

Full text
Abstract:
This article compares Rudolf von Laban's and Mary Wigman's practices and theories of gestural flow with Walter Benjamin's theory of gesture as interruption. For Laban and Wigman, gesture mirrors a vitalist understanding of life that is based on the rediscovery of transhistorical continuities between human and cosmic energy. Benjamin's Brechtian gestures address inscriptions and manipulations of bodies, which provide comment on the conditions of society by subjecting to critique the essentializing aspects of historical and vitalist flow. Addressing in particular forms of vibration as both enriching and destabilizing the gestural from its margins, my article explores how vibratory energy indicates a self-reflexive theory of media, but also a revolutionary charge, in Benjamin; how it engenders a politically ambivalent process of transmission between dancers and audience in Laban; and how it becomes an actual mode of movement in Wigman. The historical inquiry contributes to a genealogy of vibration in contemporary dance.
APA, Harvard, Vancouver, ISO, and other styles
50

Gerofsky, Susan. "Mathematical learning and gesture." Gesture and Multimodal Development 10, no. 2-3 (2010): 321–43. http://dx.doi.org/10.1075/gest.10.2-3.10ger.

Full text
Abstract:
This paper reports on a research project in mathematics education involving the use of gesture, movement and vocal sound to highlight mathematically salient features of the graphs of polynomial functions. Empirical observations of students’ spontaneous gesture types when enacting elicited gestures of these graphs reveal a number of useful binaries (proximal/distal, being the graph/seeing the graph, within sight/within reach). These binaries inform an analysis of videotaped gestural and interview data and appear to predict teachers’ assessments of student mathematical engagement and understanding with great accuracy. Reframing this data in terms of C-VPT and O-VPT adds a further layer of sophistication to the analysis and connects it with deeper findings in cognitive science, neuroscience, and gesture studies.
APA, Harvard, Vancouver, ISO, and other styles