
Journal articles on the topic 'Gestural input'


Consult the top 50 journal articles for your research on the topic 'Gestural input.'


1

Jurewicz, Katherina, and David M. Neyens. "Mapping 3D Gestural Inputs to Traditional Touchscreen Interface Designs within the Context of Anesthesiology." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (2017): 696–700. http://dx.doi.org/10.1177/1541931213601660.

Abstract:
Gestures are a natural means of everyday human-human communication, and with the advances in gestural input technology, there is an opportunity to investigate gestures as a means of communicating with computers and other devices. The primary benefit of gestural input technology is that it facilitates a touchless interaction, so the ideal market demand for this technology is an environment where touch needs to be minimized. Perfect examples of environments that discourage touch are sterile or clean environments, such as operating rooms (ORs). Healthcare-associated infections are a great
2

Jurewicz, Katherina A., David M. Neyens, Ken Catchpole, and Scott T. Reeves. "Developing a 3D Gestural Interface for Anesthesia-Related Human-Computer Interaction Tasks Using Both Experts and Novices." Human Factors: The Journal of the Human Factors and Ergonomics Society 60, no. 7 (2018): 992–1007. http://dx.doi.org/10.1177/0018720818780544.

Abstract:
Objective: The purpose of this research was to compare gesture-function mappings for experts and novices using a 3D, vision-based, gestural input system when exposed to the same context of anesthesia tasks in the operating room (OR). Background: 3D, vision-based, gestural input systems can serve as a natural way to interact with computers and are potentially useful in sterile environments (e.g., ORs) to limit the spread of bacteria. Anesthesia providers’ hands have been linked to bacterial transfer in the OR, but a gestural input system for anesthetic tasks has not been investigated. Methods:
3

Villiers, Jill De, Lynne Bibeau, Eliane Ramos, and Janice Gatty. "Gestural communication in oral deaf mother-child pairs: Language with a helping hand?" Applied Psycholinguistics 14, no. 3 (1993): 319–47. http://dx.doi.org/10.1017/s0142716400010821.

Abstract:
This article reports a longitudinal study of developing communication in two profoundly deaf preschool boys growing up in oral deaf families who use oral English as their primary language. The children were videotaped in play interactions with their profoundly deaf mothers. The nature of the gestural communication used by the dyads is the focus of interest in this article. In contrast to hearing mothers of deaf children, the two mothers used extensive gestures to accompany their speech, including rich and varied gesture sequences. The children also developed a repertoire of gestures th
4

Wacewicz, Sławomir, Przemysław Żywiczyński, and Sylwester Orzechowski. "Visible movements of the orofacial area." Gesture 15, no. 2 (2016): 250–82. http://dx.doi.org/10.1075/gest.15.2.05wac.

Abstract:
The age-old debate between the proponents of the gesture-first and speech-first positions has returned to occupy a central place in current language evolution theorizing. The gestural scenarios, suffering from the problem known as “modality transition” (why a gestural system would have changed into a predominantly spoken system), frequently appeal to the gestures of the orofacial area as a platform for this putative transition. Here, we review currently available evidence on the significance of the orofacial area in language evolution. While our review offers some support for orofacial movemen
5

ZAMMIT, MARIA, and GRAHAM SCHAFER. "Maternal label and gesture use affects acquisition of specific object names." Journal of Child Language 38, no. 1 (2010): 201–21. http://dx.doi.org/10.1017/s0305000909990328.

Abstract:
Ten mothers were observed prospectively, interacting with their infants aged 0;10 in two contexts (picture description and noun description). Maternal communicative behaviours were coded for volubility, gestural production and labelling style. Verbal labelling events were categorized into three exclusive categories: label only; label plus deictic gesture; label plus iconic gesture. We evaluated the predictive relations between maternal communicative style and children's subsequent acquisition of ten target nouns. Strong relations were observed between maternal communicative style and
6

Schuler, Patrik T., Katherina A. Jurewicz, and David M. Neyens. "Applying a User-Centered Method to Develop 3D Gestural Inputs for In-Vehicle Tasks." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (2019): 408–12. http://dx.doi.org/10.1177/1071181319631416.

Abstract:
Gestures are a natural input method for human communication and may be effective for drivers to interact with in-vehicle infotainment systems (IVIS). Most of the existing work on gesture-based human-computer interaction (HCI) in and outside of the vehicle focuses on the distinguishability of computer systems. The purpose of this study was to identify gesture sets that are used for IVIS tasks and to compare task times across the different functions for gesturing and touchscreens. Task times for user-defined gestures were quicker than for a novel touchscreen. There were several functions that resu
7

Wolf, Catherine G. "A Comparative Study of Gestural and Keyboard Interfaces." Proceedings of the Human Factors Society Annual Meeting 32, no. 5 (1988): 273–77. http://dx.doi.org/10.1177/154193128803200506.

Abstract:
This paper presents results from two experiments which compared gestural and keyboard interfaces to a spreadsheet program. This is the first quantitative comparison of these two types of interfaces known to the author. The gestural interface employed gestures (hand-drawn marks such as carets or brackets) for commands, and handwriting as input techniques. In one configuration, the input/output hardware consisted of a transparent digitizing tablet mounted on top of an LCD which allowed the user to interact with the program by writing on the tablet with a stylus. The experiments found that partic
8

NAMY, LAURA L., and SUSAN A. NOLAN. "Characterizing changes in parent labelling and gesturing and their relation to early communicative development." Journal of Child Language 31, no. 4 (2004): 821–35. http://dx.doi.org/10.1017/s0305000904006543.

Abstract:
In a longitudinal study, 17 parent–child dyads were observed during free-play when the children were 1;0, 1;6, and 2;0. Parents' labelling input in the verbal and gestural modalities was coded at each session, and parents completed a vocabulary checklist for their children at each visit. We analysed how the frequency of labelling in the verbal and gestural modalities changed across observation points and how changes in parental input related to children's vocabulary development. As a group, parents' verbal labelling remained constant across sessions, but gestural labelling declined at 2;0. How
9

Lüke, Carina, Ute Ritterfeld, Angela Grimminger, Ulf Liszkowski, and Katharina J. Rohlfing. "Development of Pointing Gestures in Children With Typical and Delayed Language Acquisition." Journal of Speech, Language, and Hearing Research 60, no. 11 (2017): 3185–97. http://dx.doi.org/10.1044/2017_jslhr-l-16-0129.

Abstract:
Purpose: This longitudinal study compared the development of hand and index-finger pointing in children with typical language development (TD) and children with language delay (LD). First, we examined whether the number and the form of pointing gestures during the second year of life are potential indicators of later LD. Second, we analyzed the influence of caregivers' gestural and verbal input on children's communicative development. Method: Thirty children with TD and 10 children with LD were observed together with their primary caregivers in a seminatural setting in 5 sessions between the age
10

Nurmala, Ma'rifah. "HOW GESTURE PROVIDES A HELPING HAND AND SUPPORTS CHILDREN’S LANGUAGE ACQUISITION." Ana' Bulava: Jurnal Pendidikan Anak 1, no. 2 (2020): 63–74. http://dx.doi.org/10.24239/abulava.vol1.iss2.13.

Abstract:
Children use gesture to refer to objects before they produce labels for these objects, and to convey semantic relations between objects before conveying sentences in speech. The gestural input that children receive from their parents or teachers shows that adults provide models for the types of gestures their children produce, and do so by modifying their gestures to meet the communicative needs of their children. This article aims to discuss what we know about the impact of gestures on the memorization of words. This article describes the form of gestures and gives examples of why using gesture would
11

Rahim, Md Abdur, Jungpil Shin, and Md Rashedul Islam. "Gestural flick input-based non-touch interface for character input." Visual Computer 36, no. 8 (2019): 1559–72. http://dx.doi.org/10.1007/s00371-019-01758-8.

12

Choi, Hyo-Rim, and TaeYong Kim. "Modified Dynamic Time Warping Based on Direction Similarity for Fast Gesture Recognition." Mathematical Problems in Engineering 2018 (2018): 1–9. http://dx.doi.org/10.1155/2018/2404089.

Abstract:
We propose a modified dynamic time warping (DTW) algorithm that compares gesture-position sequences based on the direction of the gestural movement. Standard DTW does not specifically consider the two-dimensional characteristic of the user’s movement. Therefore, in gesture recognition, the sequence comparison by standard DTW needs to be improved. The proposed gesture-recognition system compares the sequences of the input gesture’s position with gesture positions saved in the database and selects the most similar gesture by filtering out unrelated gestures. The suggested algorithm uses the cosi
13

Ram, Sharan, Anjan Mahadevan, Hadi Rahmat-Khah, Guiseppe Turini, and Justin G. Young. "Effect of Control-Display Gain and Mapping and Use of Armrests on Accuracy in Temporally Limited Touchless Gestural Steering Tasks." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (2017): 380–84. http://dx.doi.org/10.1177/1541931213601577.

Abstract:
Touchless gestural controls are becoming an important natural input technique for interaction with emerging virtual environments, but design parameters that improve task performance while at the same time reducing user fatigue require investigation. This experiment aims to understand how control-display (CD) parameters such as gain and mapping as well as the use of armrests affect gesture accuracy in specific movement directions. Twelve participants completed temporally constrained two-dimensional steering tasks using free-hand fingertip gestures in several conditions. Use of an armrest, increase
14

Keller, M. David, Patrick Mead, and Megan Kozub. "Gaze Supported Gestural Computer Interaction: Performance Implications of Training." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (2017): 1990–94. http://dx.doi.org/10.1177/1541931213601993.

Abstract:
Gaze-supported non-tactile gestural control uses a combination of gesture-based body movements with eye-gaze positioning to provide an input source for a user's control of a system. Combining body gestures with eye movements allows for unique computer control methods other than the traditional mouse. However, research is mixed on the effectiveness of emerging control types, such as gestures and eye-tracking, with some showing positive performance outcomes for one or more control aspects but performance detriments in other areas that would prohibit the use of such novel control methods. One
15

Fonteles, Joyce Horn, Édimo Sousa Silva, and Maria Andréia Formico Rodrigues. "Gesture-Driven Interaction Using the Leap Motion to Conduct a 3D Particle System: Evaluation and Analysis of an Orchestral Performance." Journal on Interactive Systems 6, no. 2 (2015): 1. http://dx.doi.org/10.5753/jis.2015.660.

Abstract:
In this work, we present and evaluate an interactive simulation of 3D particles conducted by the Leap Motion, for an orchestral arrangement. A real-time visual feedback during gesture entry is generated for the conductor and the audience, through a set of particle emitters displayed on the screen and the path traced by the captured gesture. We use two types of data input: the captured left and right hand conducting gestures (some universal movements, such as the beat patterns for the most common time signatures, the indication of a specific section of the orchestra, and the cutoff gestures), w
16

Tencer, Heather L., and Jana M. Iverson. "Maternal input: Its role in infant gestural communication." Infant Behavior and Development 21 (April 1998): 714. http://dx.doi.org/10.1016/s0163-6383(98)91927-0.

17

Vallotton, Claire D., Kalli B. Decker, Alicia Kwon, Wen Wang, and TzuFen Chang. "Quantity and Quality of Gestural Input: Caregivers’ Sensitivity Predicts Caregiver-Infant Bidirectional Communication Through Gestures." Infancy 22, no. 1 (2016): 56–77. http://dx.doi.org/10.1111/infa.12155.

18

양승민, 반영환, and Kim, Hyung Min. "A Study on Gestural Input Interaction on Wrist Wearable Devices." Journal of Digital Design 14, no. 3 (2014): 823–30. http://dx.doi.org/10.17280/jdd.2014.14.3.081.

19

Gullberg, Marianne, Leah Roberts, and Christine Dimroth. "What word-level knowledge can adult learners acquire after minimal exposure to a new language?" International Review of Applied Linguistics in Language Teaching 50, no. 4 (2012): 239–76. http://dx.doi.org/10.1515/iral-2012-0010.

Abstract:
Discussions about the adult L2 learning capacity often take as their starting point stages where considerable L2 knowledge has already been accumulated. This paper probes the absolute earliest stages of learning and investigates what lexical knowledge adult learners can extract from complex, continuous speech in an unknown language after minimal exposure and without any help. Dutch participants were exposed to naturalistic but controlled audiovisual input in Mandarin Chinese, in which item frequency and gestural highlighting were manipulated. The results from a word recognition task s
20

Jurewicz, Katherina A., and David M. Neyens. "A Longitudinal Study Investigating the Effects of Workload and Exposure on 3D Gestural Human Computer Interaction." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (2020): 390–94. http://dx.doi.org/10.1177/1071181320641088.

Abstract:
3D gestural input technology has the ability to expand human-computer interaction (HCI) beyond traditional input modalities. It is known that context and domain expertise are influential to gesture development, but there is little known about other individual factors such as workload and exposure. Therefore, the objective of this work is to explore the effects of workload and exposure on intuitive gesture choice and reaction time under a general HCI context. A longitudinal study was conducted to investigate the differences in intuitive mappings for high and low workload conditions as well as a
21

ÖZÇALIŞKAN, ŞEYDA, and SUSAN GOLDIN-MEADOW. "Do parents lead their children by the hand?" Journal of Child Language 32, no. 3 (2005): 481–505. http://dx.doi.org/10.1017/s0305000905007002.

Abstract:
The types of gesture+speech combinations children produce during the early stages of language development change over time. This change, in turn, predicts the onset of two-word speech and thus might reflect a cognitive transition that the child is undergoing. An alternative, however, is that the change merely reflects changes in the types of gesture+speech combinations that their caregivers produce. To explore this possibility, we videotaped 40 American child–caregiver dyads in their homes for 90 minutes when the children were 1;2, 1;6, and 1;10. Each gesture was classified according to type (
22

Avera, Angie, Christy Harper, Natalia Russi-Vigoya, and Stephen Stoll. "Effects of Touchpad Size on Pointing and Gestural Input Area and Performance." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 60, no. 1 (2016): 825–29. http://dx.doi.org/10.1177/1541931213601188.

Abstract:
With the introduction of gestural input, many laptops are now being designed with larger touchpads in order to allow for the most accurate use of these features. Vendors have recommended surface input area sizes ranging from 60mm (w) x 45mm (h) to 105mm (w) x 65mm (h) and larger. However, as touchpads have continued to grow, it has been discovered that large touchpads can sometimes increase usability issues from unintended activation. From observing users over time we have noticed that people tend to focus input in the center area of the touchpad, regardless of touchpad dimensions. Therefore i
23

Moyle, M., and A. Cockburn. "A flick in the right direction: a case study of gestural input." Behaviour & Information Technology 24, no. 4 (2005): 275–88. http://dx.doi.org/10.1080/01449290512331321866.

24

Jatmika, Ovan Bagus. "Faktor Penunjang Pertunjukan Musik: Input, Proses, dan Output." Journal of Music Science, Technology, and Industry 3, no. 1 (2020): 79–90. http://dx.doi.org/10.31091/jomsti.v3i1.966.

Abstract:
This research examines the supporting factors in improving the quality of musical performances. The method used is qualitative and the research data are found through library research. The author concludes that a performance is considered to have ideal conditions if it involves three elements that support each other in the process towards the end of the musical execution on the stage, namely input, process, and output. These three elements (input, process, and output) are expected to improve the quality of the performance. The intended form of quality improvement is the formation of two-way co
25

Weismer, Susan Ellis, and Linda J. Hesketh. "The Influence of Prosodic and Gestural Cues on Novel Word Acquisition by Children With Specific Language Impairment." Journal of Speech, Language, and Hearing Research 36, no. 5 (1993): 1013–25. http://dx.doi.org/10.1044/jshr.3605.1013.

Abstract:
The purpose of this study was to investigate the effects of prosodic and gestural cues on children’s lexical learning. Acquisition of novel words was examined under linguistic input conditions that varied in terms of rate of speech, stress, and use of supplemental visual cues (i.e., gestures). Sixteen kindergarten children served as subjects in this study, including 8 children with normal language (NL) and 8 children with specific language impairment (SLI). A repeated-measures design was used such that all subjects in both groups participated in each of the three experimental conditions (the Ra
26

Flaherty, Molly, Dea Hunsicker, and Susan Goldin-Meadow. "Structural biases that children bring to language learning: A cross-cultural look at gestural input to homesign." Cognition 211 (June 2021): 104608. http://dx.doi.org/10.1016/j.cognition.2021.104608.

27

Tadeja, Sławomir Konrad, Yupu Lu, Maciej Rydlewicz, Wojciech Rydlewicz, Tomasz Bubas, and Per Ola Kristensson. "Exploring gestural input for engineering surveys of real-life structures in virtual reality using photogrammetric 3D models." Multimedia Tools and Applications 80, no. 20 (2021): 31039–58. http://dx.doi.org/10.1007/s11042-021-10520-z.

Abstract:
Photogrammetry is a promising set of methods for generating photorealistic 3D models of physical objects and structures. Such methods may rely solely on camera-captured photographs or include additional sensor data. Digital twins are digital replicas of physical objects and structures. Photogrammetry is an opportune approach for generating 3D models for the purpose of preparing digital twins. At a sufficiently high level of quality, digital twins provide effective archival representations of physical objects and structures and become effective substitutes for engineering inspections an
28

O'Neill, Tara, Janice Light, and Lauramarie Pope. "Effects of Interventions That Include Aided Augmentative and Alternative Communication Input on the Communication of Individuals With Complex Communication Needs: A Meta-Analysis." Journal of Speech, Language, and Hearing Research 61, no. 7 (2018): 1743–65. http://dx.doi.org/10.1044/2018_jslhr-l-17-0132.

Abstract:
Purpose: The purpose of this meta-analysis was to investigate the effects of augmentative and alternative communication (AAC) interventions that included aided AAC input (e.g., aided AAC modeling, aided language modeling, aided language stimulation, augmented input) on communicative outcomes (both comprehension and expression) for individuals with developmental disabilities who use AAC. Method: A systematic search resulted in the identification of 26 single-case experimental designs (88 participants) and 2 group experimental designs (103 participants). Studies were coded in terms of participants, i
29

López Ibáñez, Manuel, Maximiliano Miranda, Nahum Alvarez, and Federico Peinado. "Using gestural emotions recognised through a neural network as input for an adaptive music system in virtual reality." Entertainment Computing 38 (May 2021): 100404. http://dx.doi.org/10.1016/j.entcom.2021.100404.

30

Butkiewicz, Thomas. "A More Flexible Approach to Utilizing Depth Cameras for Hand andTouch Interaction." International Journal of Virtual Reality 11, no. 3 (2012): 53–57. http://dx.doi.org/10.20870/ijvr.2012.11.3.2851.

Abstract:
Many researchers have utilized depth cameras for tracking users' hands to implement various interaction methods, such as touch-sensitive displays and gestural input. With the recent introduction of Microsoft's low-cost Kinect sensor, there is increased interest in this strategy. However, a review of the existing literature on these systems suggests that the majority suffer from similar limitations due to the image processing methods used to extract, segment, and relate the user's body to the environment/display. This paper presents a simple, efficient method for extracting interactions from de
31

Menzies, Dylan. "Composing instrument control dynamics." Organised Sound 7, no. 3 (2002): 255–66. http://dx.doi.org/10.1017/s1355771802003059.

Abstract:
The expression gestural mapping is well embedded in the language of instrument designers, describing the function from interface control parameters to synthesis control parameters. This function is in most cases implicitly assumed to be instantaneous, so that at any time its output depends only on its input at that time. Here more general functions are considered, in which the output depends on the history of input, especially functions that behave like physical dynamic systems, such as a damped resonator. Acoustic instruments are rich in dynamical behaviour. Introducing dynamics at the contro
32

Namy, Laura L., Rebecca Vallas, and Jennifer Knight-Schwarz. "Linking parent input and child receptivity to symbolic gestures." Gesture 8, no. 3 (2008): 302–24. http://dx.doi.org/10.1075/gest.8.3.03nam.

Abstract:
This study explored the relation between parents’ production of gestures and symbolic play during free play and children’s production and comprehension of symbolic gestures. Thirty-one 16- to 22-month-olds and their parents participated in a free play session. Children also participated in a forced-choice novel gesture-learning task. Parents’ pretend play with objects in hand was predictive of children’s gesture production during play and gesture vocabulary according to parental report. No relationship was found between parent gesture and child performance on the forced-choice gesture-learning
33

Schneegass, Stefan, Thomas Olsson, Sven Mayer, and Kristof van Laerhoven. "Mobile Interactions Augmented by Wearable Computing." International Journal of Mobile Human Computer Interaction 8, no. 4 (2016): 104–14. http://dx.doi.org/10.4018/ijmhci.2016100106.

Abstract:
Wearable computing has a huge potential to shape the way we interact with mobile devices in the future. Interaction with mobile devices is still mainly limited to visual output and tactile finger-based input. Despite the visions of next-generation mobile interaction, the hand-held form factor hinders new interaction techniques becoming commonplace. In contrast, wearable devices and sensors are intended for more continuous and close-to-body use. This makes it possible to design novel wearable-augmented mobile interaction methods – both explicit and implicit. For example, the EEG signal from a w
34

ÖZYÜREK, ASLI, REYHAN FURMAN, and SUSAN GOLDIN-MEADOW. "On the way to language: event segmentation in homesign and gesture." Journal of Child Language 42, no. 1 (2014): 64–94. http://dx.doi.org/10.1017/s0305000913000512.

Abstract:
Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in o
35

Holder, Sherrie, and Leia Stirling. "Effect of Gesture Interface Mapping on Controlling a Multi-degree-of-freedom Robotic Arm in a Complex Environment." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (2020): 183–87. http://dx.doi.org/10.1177/1071181320641045.

Abstract:
There are many robotic scenarios that require real-time function in large or unconstrained environments, for example, the robotic arm on the International Space Station (ISS). Fully wearable gesture control systems are well suited to human-robot interaction scenarios where users are mobile and must have hands free. A human study examined operation of a simulated ISS robotic arm using three different gesture input mappings compared to the traditional joystick interface. Two gesture mappings permitted multiple simultaneous inputs (multi-input), while the third was a single-input method. E
36

Applebaum, Lauren, Marie Coppola, and Susan Goldin-Meadow. "Prosody in a communication system developed without a language model." Sign Language and Linguistics 17, no. 2 (2014): 181–212. http://dx.doi.org/10.1075/sll.17.2.02app.

Abstract:
Prosody, the “music” of language, is an important aspect of all natural languages, spoken and signed. We ask here whether prosody is also robust across learning conditions. If a child were not exposed to a conventional language and had to construct his own communication system, would that system contain prosodic structure? We address this question by observing a deaf child who received no sign language input and whose hearing loss prevented him from acquiring spoken language. Despite his lack of a conventional language model, this child developed his own gestural system. In this system, featur
37

Kota, Sreemannarayana, and Justin G. Young. "Effects of Control-Display Mapping and Spatially Dependent Gain on Supported Free-Hand Gesture Pointing Performance." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (2019): 391–95. http://dx.doi.org/10.1177/1071181319631518.

Abstract:
While gesture controls will be important as future computing input modalities, they are limited by reduced performance and increased ergonomic risk. Performance of pointing in free-hand gestural controls is affected by control-display (CD) parameters such as gain and mapping, but specific CD function parameters that yield optimal performance are unclear. This paper describes an experiment that examines the effect of altering CD gain in different movement directions (‘spatially dependent gain’, SDG) for different CD mappings on performance in armrest-supported pointing tasks. Thirteen part
38

Thom-Santelli, Jennifer, and Alan Hedge. "Effects of a Multitouch Keyboard on Wrist Posture, Typing Performance and Comfort." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 5 (2005): 646–49. http://dx.doi.org/10.1177/154193120504900503.

Abstract:
This study compares the use of a conventional keyboard (CK) and a prototype ultra-low profile MultiTouch keyless keyboard (MTK) that only requires contact force to register a keystroke and allows mousing and gestural input on the same surface. Twelve subjects completed eight randomly assigned 7.5-minute typing tasks of text passages of similar difficulty and identical length for each keyboard condition. Typing speed, accuracy, wrist postures and user comfort were measured. Subjects typed slower (F1,11 = 41.86, p=0.000) and less accurately (F1,11 = 23.55, p=0.001) on the MTK during the typing
39

Lucente, Luciana. "Dynamic model of speech." Journal of Speech Sciences 3, no. 2 (2021): 21–62. http://dx.doi.org/10.20396/joss.v3i2.15045.

Full text
Abstract:
This article explores the relationship between intonational patterns, speech rhythm and discourse, according to the dynamic systems research program. The study of these relationships was based on Barbosa’s (2006) Dynamic Model of Speech Rhythm; on the Dato intonational annotation system proposed by Lucente (2008); and on the Computational Model of the Structure of Discourse proposed by Grosz & Sidner (1986). The Dynamic Model of Rhythm suggests that speech rhythm is the result of the action of two oscillators – accentual and syllabic – which receive as input ling
APA, Harvard, Vancouver, ISO, and other styles
40

Wilbourn, Makeba Parramore, and Jacqueline Prince Sims. "Get by With a Little Help From a Word: Multimodal Input Facilitates 26-Month-Olds' Ability to Map and Generalize Arbitrary Gestural Labels." Journal of Cognition and Development 14, no. 2 (2013): 250–69. http://dx.doi.org/10.1080/15248372.2012.658930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Guo, Xing, Zhen Yu Lu, Rong Bin Xu, Zheng Yi Liu, and Jian Guo Wu. "Big-Screen Text Input System Based on Gesture Recognition." Advanced Materials Research 765-767 (September 2013): 2653–56. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.2653.

Full text
Abstract:
Gesture-based interaction systems are increasingly common, but most support only simple gestures for mouse-style interaction and provide no text-input function. In this paper, manual-alphabet (fingerspelling) gestures are used as input gestures: a Kinect captures the depth image, the gestures are segmented, and SIFT features are extracted; the recognized manual-alphabet letters then drive a Pinyin input method to provide Chinese character input to the system.
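As a toy illustration of the abstract's final step (recognized manual-alphabet letters feeding a Pinyin input method), here is a minimal Python sketch. The lookup table and the greedy grouping rule are hypothetical, not taken from the paper:

```python
# Toy lookup table (hypothetical): Pinyin syllable -> Chinese character.
PINYIN_TABLE = {"ni": "你", "hao": "好"}

def letters_to_text(letters, table=PINYIN_TABLE):
    """Greedily group recognized manual-alphabet letters into the
    longest Pinyin syllables present in the table."""
    out, buf = [], ""
    for ch in letters:
        buf += ch
        if buf in table:
            # Commit only if no longer syllable could still match.
            if not any(k != buf and k.startswith(buf) for k in table):
                out.append(table[buf])
                buf = ""
    return "".join(out)
```

A real system would of course use a full Pinyin input-method engine with candidate selection; this only shows where the recognized letters would enter the pipeline.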
APA, Harvard, Vancouver, ISO, and other styles
42

BHUYAN, M. K., P. K. BORA, and D. GHOSH. "AN INTEGRATED APPROACH TO THE RECOGNITION OF A WIDE CLASS OF CONTINUOUS HAND GESTURES." International Journal of Pattern Recognition and Artificial Intelligence 25, no. 02 (2011): 227–52. http://dx.doi.org/10.1142/s0218001411008592.

Full text
Abstract:
Gesture segmentation is a method that distinguishes meaningful gestures from unintentional movements. It is a prerequisite stage for continuous gesture recognition, locating the start and end points of a gesture in an input sequence. Yet this is an extremely difficult task, due both to the multitude of possible gesture variations in spatio-temporal space and to the co-articulation/movement epenthesis of successive gestures. In this paper, we focus our attention on coping with this problem in continuous gesture recognition. This requires gesture spotting that
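The spotting step sketched in the abstract (locating start and end points in a continuous input stream) can be illustrated with simple hysteresis thresholding on per-frame motion energy. The thresholds and minimum length below are illustrative assumptions, not the paper's method:

```python
def spot_gestures(energy, on_thresh=0.5, off_thresh=0.2, min_len=3):
    """Return (start, end) frame-index pairs where motion energy
    indicates a gesture, using hysteresis thresholding: a segment
    opens when energy rises above on_thresh and closes when it
    falls below off_thresh."""
    segments, start = [], None
    for i, e in enumerate(energy):
        if start is None and e >= on_thresh:
            start = i
        elif start is not None and e < off_thresh:
            if i - start >= min_len:  # reject very short blips
                segments.append((start, i))
            start = None
    if start is not None and len(energy) - start >= min_len:
        segments.append((start, len(energy)))
    return segments
```

Real spotting must additionally handle co-articulation between successive gestures, which a pure energy threshold cannot separate.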
APA, Harvard, Vancouver, ISO, and other styles
43

Beaupoil-Hourdel, Pauline, and Camille Debras. "Developing communicative postures." Language, Interaction and Acquisition 8, no. 1 (2017): 89–116. http://dx.doi.org/10.1075/lia.8.1.05bea.

Full text
Abstract:
This article analyses the development of a composite communicative posture, the shrug (which can combine palm-up flips, lifted shoulders and a head tilt), in a video corpus of spontaneous interactions between a typically developing British girl, Ellie, and her mother, filmed at home one hour each month from Ellie’s tenth month to her fourth birthday. The systematic coding of every shrug yields a total of 124 tokens (Ellie: 98; her mother: 26), providing results in terms of forms, functions and input. Ellie’s first shrug components emerge from non-linguistic actions and she acquires th
APA, Harvard, Vancouver, ISO, and other styles
44

Shaw, Alex, Jaime Ruiz, and Lisa Anthony. "A Survey on Applying Automated Recognition of Touchscreen Stroke Gestures to Children’s Input." Interacting with Computers 32, no. 5-6 (2020): 524–47. http://dx.doi.org/10.1093/iwc/iwab009.

Full text
Abstract:
Gesture recognition algorithms help designers create intelligent user interfaces for a number of application areas. However, these recognition algorithms are usually designed to recognize the gestures of adults, not children, and as such they generally do not perform as well for children as adults. Recognition of younger children’s gestures is particularly poor when compared to recognition of older children’s and adults’ gestures. Researchers have begun to examine the aspects of children’s gesture articulation patterns that make recognition difficult. This paper extends the initial wo
APA, Harvard, Vancouver, ISO, and other styles
45

Goldin-Meadow, Susan, and Carolyn Mylander. "The role of parental input in the development of a morphological system." Journal of Child Language 17, no. 3 (1990): 527–63. http://dx.doi.org/10.1017/s0305000900010874.

Full text
Abstract:
In order to isolate the properties of language whose development can withstand wide variations in learning conditions, we have observed children who have not had access to any conventional linguistic input but who have otherwise experienced normal social environments. The children we study are deaf with hearing losses so severe that they cannot naturally acquire spoken language, and whose hearing parents have chosen not to expose them to a sign language. In previous work, we demonstrated that, despite their lack of conventional linguistic input, the children developed spontaneous gestu
APA, Harvard, Vancouver, ISO, and other styles
46

Newby, Gregory B. "Gesture Recognition Based upon Statistical Similarity." Presence: Teleoperators and Virtual Environments 3, no. 3 (1994): 236–43. http://dx.doi.org/10.1162/pres.1994.3.3.236.

Full text
Abstract:
One of the improvements virtual reality offers traditional human-computer interfaces is that it enables the user to interact with virtual objects using gestures. The use of natural hand gestures for computer input provides opportunities for direct manipulation in computing environments, but not without some challenges. The mapping of a human gesture onto a particular system function is not nearly so easy as mapping with a keyboard or mouse. Reasons for this difficulty include individual variations in the exact gesture movement, the problem of knowing when a gesture starts and ends, and variati
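A minimal sketch of classifying a gesture by statistical similarity to stored templates, assuming gestures have already been reduced to fixed-length feature vectors. Cosine similarity and the rejection threshold are illustrative choices here, not necessarily Newby's exact measure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def classify(sample, templates, threshold=0.8):
    """Return the name of the most similar stored template, or None
    if no template is similar enough (handles 'unknown' gestures,
    one of the difficulties the abstract mentions)."""
    best, best_sim = None, -1.0
    for name, vec in templates.items():
        s = cosine_similarity(sample, vec)
        if s > best_sim:
            best, best_sim = name, s
    return best if best_sim >= threshold else None
```

The rejection threshold addresses the individual-variation problem the abstract raises: a gesture that matches no template well enough is reported as unrecognized rather than forced onto the nearest class.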
APA, Harvard, Vancouver, ISO, and other styles
47

Saputra, Artha Gilang, Ema Utami, and Hanif Al Fatta. "Analisis Penerapan Metode Convex Hull Dan Convexity Defects Untuk Pengenalan Isyarat Tangan." Jurnal SAINTEKOM 8, no. 2 (2018): 105. http://dx.doi.org/10.33020/saintekom.v8i2.59.

Full text
Abstract:
Research in Human Computer Interaction (HCI) and Computer Vision (CV) is increasingly focused on advanced interfaces for interacting with humans and on creating system models for various purposes, especially on the input-device problem of interacting with a computer. Humans are accustomed to communicating with each other using voice, accompanied by body pose and hand gestures. The main purpose of this research is to apply the Convex Hull and Convexity Defects methods to a Hand Gesture Recognition system.
 In this research, the Hand Gesture Recognition system is designed with the
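The Convex Hull and Convexity Defects idea can be illustrated without OpenCV by a small pure-Python sketch (in OpenCV one would use `cv2.convexHull` and `cv2.convexityDefects`); the toy contour and the depth threshold below are illustrative, not from the paper:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def convexity_defects(contour, min_depth=1.0):
    """For each hull edge, find the contour point deviating most from
    it; report (point, depth) pairs deeper than min_depth. On a hand
    contour such defects fall between extended fingers."""
    hull = convex_hull(contour)
    idx = sorted(contour.index(p) for p in hull)
    defects, n = [], len(contour)
    for k in range(len(idx)):
        i, j = idx[k], idx[(k + 1) % len(idx)]
        a, b = contour[i], contour[j]
        ex, ey = b[0] - a[0], b[1] - a[1]
        elen = (ex * ex + ey * ey) ** 0.5
        best, depth = None, 0.0
        m = i
        while (m + 1) % n != j:
            m = (m + 1) % n
            p = contour[m]
            d = abs(ex * (p[1]-a[1]) - ey * (p[0]-a[0])) / elen
            if d > depth:
                best, depth = p, d
        if best is not None and depth >= min_depth:
            defects.append((best, depth))
    return defects
```

Counting defects deeper than a threshold is the usual heuristic for estimating the number of extended fingers in this family of methods.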
APA, Harvard, Vancouver, ISO, and other styles
48

Haber, Jeffrey, and Joon Chung. "Assessment of UAV operator workload in a reconfigurable multi-touch ground control station environment." Journal of Unmanned Vehicle Systems 4, no. 3 (2016): 203–16. http://dx.doi.org/10.1139/juvs-2015-0039.

Full text
Abstract:
Multi-touch computer inputs allow users to interact with a virtual environment through the use of gesture commands on a monitor instead of a mouse and keyboard. This style of input is easy for the human mind to adapt to because gestures directly reflect how one interacts with the natural environment. This paper presents and assesses a personal-computer-based unmanned aerial vehicle ground control station that utilizes multi-touch gesture inputs and system reconfigurability to enhance operator performance. The system was developed at Ryerson University’s Mixed-Reality Immersive Motion Simulatio
APA, Harvard, Vancouver, ISO, and other styles
49

Harrington, M. E., R. W. Daniel, and P. J. Kyberd. "A Measurement System for the Recognition of Arm Gestures Using Accelerometers." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 209, no. 2 (1995): 129–34. http://dx.doi.org/10.1243/pime_proc_1995_209_330_02.

Full text
Abstract:
This paper describes a strategy to measure arm movements using accelerometers for the computer recognition of arm gestures. Gesture recognition is being investigated as an alternative method of computer input for people with severe speech and motor impairment; the emphasis is on the needs of people with athetoid cerebral palsy who have difficulties with existing computer input devices. An initial model-based approach to estimate the kinematic motion of the arm from acceleration measurements is given, followed by the chosen measurement scheme. The current system considers the forearm as a rigid
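As a minimal sketch of the underlying idea (recovering forearm orientation from accelerometer readings under a quasi-static, rigid-forearm assumption), here is the generic tilt-from-gravity formula; this is not the paper's full model-based estimation scheme:

```python
import math

def forearm_pitch(ax, ay, az):
    """Estimate forearm elevation (pitch, in degrees) from a static
    3-axis accelerometer reading, assuming gravity is the only
    acceleration and the sensor is rigidly fixed to the forearm."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
```

During actual gesturing the sensed signal mixes gravity with motion acceleration, which is why the paper pursues a model-based kinematic estimate rather than this static formula alone.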
APA, Harvard, Vancouver, ISO, and other styles
50

Udvag, R. Mahesh, T. Hari Kumar, R. S. Kavin Raj, and M. P. Karthikeyan. "Manipulation of Web Using Gestures." Journal of Computational and Theoretical Nanoscience 17, no. 8 (2020): 3782–85. http://dx.doi.org/10.1166/jctn.2020.9320.

Full text
Abstract:
Gesture recognition is a type of perceptual computing user interface that allows computers to capture and interpret human gestures as commands. Gestures are generally movements of the hands, a form of non-verbal communication. We use gestures as input to control devices, applications or websites. Using gestures, the user can directly open or manipulate a website. When the user performs a gesture, the system captures it and compares it with the stored gesture data. If the gesture matches the data, the required website is manipulated. These gestures are
APA, Harvard, Vancouver, ISO, and other styles