Academic literature on the topic 'Audio-tactile interaction'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Audio-tactile interaction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Audio-tactile interaction"

1

Hoggan, Eve. "Crossmodal Audio and Tactile Interaction with Mobile Touchscreens." International Journal of Mobile Human Computer Interaction 2, no. 4 (October 2010): 29–44. http://dx.doi.org/10.4018/jmhci.2010100102.

Full text
Abstract:
This article asserts that using crossmodal auditory and tactile interaction can aid mobile touchscreen users in accessing data non-visually and, by providing a choice of modalities, can help to overcome problems that occur in different mobile situations where one modality may be less suitable than another (Hoggan, 2010). By encoding data using the crossmodal parameters of audio and vibration, users can learn mappings and translate information between both modalities. In this regard, data may be presented to the most appropriate modality given the situation and surrounding environment.
APA, Harvard, Vancouver, ISO, and other styles
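To make the crossmodal encoding idea in Hoggan's abstract more concrete, the sketch below shows what a table of crossmodal icons might look like, with the same message renderable either as an earcon (audio) or a tacton (vibration). The messages, parameter values, and function names are illustrative assumptions, not details from the article.

```python
# Hypothetical sketch of a crossmodal icon table: each message is encoded with
# parameters (rhythm, texture, spatial location) that can be rendered in either
# modality. Values are invented for illustration, not taken from Hoggan (2010).

CROSSMODAL_ICONS = {
    "new_email":      {"rhythm": [200, 100, 200],      "texture": "smooth",     "location": "top"},
    "calendar_alert": {"rhythm": [500, 250],           "texture": "rough",      "location": "middle"},
    "low_battery":    {"rhythm": [100, 100, 100, 100], "texture": "very_rough", "location": "bottom"},
}

def render(message: str, modality: str) -> str:
    """Describe how one message would be presented in the chosen modality."""
    icon = CROSSMODAL_ICONS[message]
    if modality == "audio":
        return (f"earcon: tone with {icon['texture']} timbre, "
                f"rhythm {icon['rhythm']} ms, panned to {icon['location']}")
    if modality == "tactile":
        return (f"tacton: actuator driven with {icon['texture']} waveform, "
                f"rhythm {icon['rhythm']} ms, at {icon['location']} of device")
    raise ValueError("modality must be 'audio' or 'tactile'")

if __name__ == "__main__":
    for msg in CROSSMODAL_ICONS:
        print(render(msg, "audio"))
        print(render(msg, "tactile"))
```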
2

Lin, I.-Fan, and Makio Kashino. "Is There Audio-Tactile Interaction in Perceptual Organization?" i-Perception 2, no. 8 (October 2011): 802. http://dx.doi.org/10.1068/ic802.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Altinsoy, M. Ercan, Sebastian Merchel, and Sebastian Tilsch. "Perceptual evaluation of violin vibrations and audio-tactile interaction." Journal of the Acoustical Society of America 133, no. 5 (May 2013): 3255. http://dx.doi.org/10.1121/1.4805250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Martolini, Chiara, Giulia Cappagli, Sabrina Signorini, and Monica Gori. "Effects of Increasing Stimulated Area in Spatiotemporally Congruent Unisensory and Multisensory Conditions." Brain Sciences 11, no. 3 (March 9, 2021): 343. http://dx.doi.org/10.3390/brainsci11030343.

Full text
Abstract:
Research has shown that the ability to integrate complementary sensory inputs into a unique and coherent percept based on spatiotemporal coincidence can improve perceptual precision, namely multisensory integration. Despite the extensive research on multisensory integration, very little is known about the principal mechanisms responsible for the spatial interaction of multiple sensory stimuli. Furthermore, it is not clear whether the size of spatialized stimulation can affect unisensory and multisensory perception. The present study aims to unravel whether the stimulated area’s increase has a detrimental or beneficial effect on sensory threshold. Sixteen typical adults were asked to discriminate unimodal (visual, auditory, tactile), bimodal (audio-visual, audio-tactile, visuo-tactile) and trimodal (audio-visual-tactile) stimulation produced by one, two, three or four devices positioned on the forearm. Results related to unisensory conditions indicate that the increase of the stimulated area has a detrimental effect on auditory and tactile accuracy and visual reaction times, suggesting that the size of stimulated areas affects these perceptual stimulations. Concerning multisensory stimulation, our findings indicate that integrating auditory and tactile information improves sensory precision only when the stimulation area is augmented to four devices, suggesting that multisensory interaction is occurring for expanded spatial areas.
APA, Harvard, Vancouver, ISO, and other styles
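The benefit of integrating auditory and tactile estimates described above is often discussed against the standard maximum-likelihood (inverse-variance) cue-combination model, in which the combined estimate has lower variance than either unisensory estimate. The snippet below works that prediction through with made-up numbers; it is a common reference model from the multisensory literature, not the analysis used by Martolini et al.

```python
# Standard maximum-likelihood cue-combination prediction: the variance of the
# optimally combined audio-tactile estimate is smaller than either unimodal
# variance. Shown only to illustrate why integration can sharpen precision.

def combined_variance(sigma_a: float, sigma_t: float) -> float:
    """Predicted variance of the combined audio-tactile estimate."""
    return (sigma_a**2 * sigma_t**2) / (sigma_a**2 + sigma_t**2)

if __name__ == "__main__":
    sigma_a, sigma_t = 2.0, 3.0  # hypothetical unimodal SDs (arbitrary units)
    sigma_at = combined_variance(sigma_a, sigma_t) ** 0.5
    print(f"auditory SD = {sigma_a}, tactile SD = {sigma_t}, combined SD = {sigma_at:.2f}")
    # The combined SD (~1.66) is smaller than either unimodal SD, i.e. higher precision.
```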
5

Zeng, Limin, Mei Miao, and Gerhard Weber. "Interactive Audio-haptic Map Explorer on a Tactile Display." Interacting with Computers 27, no. 4 (February 26, 2014): 413–29. http://dx.doi.org/10.1093/iwc/iwu006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Serino, Andrea, Elisa Canzoneri, and Alessio Avenanti. "Fronto-parietal Areas Necessary for a Multisensory Representation of Peripersonal Space in Humans: An rTMS Study." Journal of Cognitive Neuroscience 23, no. 10 (October 2011): 2956–67. http://dx.doi.org/10.1162/jocn_a_00006.

Full text
Abstract:
A network of brain regions including the ventral premotor cortex (vPMc) and the posterior parietal cortex (PPc) is consistently recruited during processing of multisensory stimuli within peripersonal space (PPS). However, to date, information on the causal role of these fronto-parietal areas in multisensory PPS representation is lacking. Using low-frequency repetitive TMS (rTMS; 1 Hz), we induced transient virtual lesions to the left vPMc, PPc, and visual cortex (V1, control site) and tested whether rTMS affected audio–tactile interaction in the PPS around the hand. Subjects performed a timed response task to a tactile stimulus on their right (contralateral to rTMS) hand while concurrent task-irrelevant sounds were presented either close to the hand or 1 m far from the hand. When no rTMS was delivered, a sound close to the hand reduced RT-to-tactile targets as compared with when a far sound was presented. This space-dependent, auditory modulation of tactile perception was specific to a hand-centered reference frame. Such a specific form of multisensory interaction near the hand can be taken as a behavioral hallmark of PPS representation. Crucially, virtual lesions to vPMc and PPc, but not to V1, eliminated the speeding effect due to near sounds, showing a disruption of audio–tactile interactions around the hand. These findings indicate that multisensory interaction around the hand depends on the functions of vPMc and PPc, thus pointing to the necessity of this human fronto-parietal network in multisensory representation of PPS.
APA, Harvard, Vancouver, ISO, and other styles
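The behavioural hallmark described in this abstract is the speeding of tactile reaction times when a task-irrelevant sound is near the hand. The toy sketch below shows how that facilitation could be summarised from reaction-time data; all values are simulated for illustration, not data from Serino et al. (2011).

```python
# Toy illustration of the "near-sound advantage" used as a behavioural marker
# of peripersonal space: tactile RTs with a sound near the hand are compared
# to RTs with a sound 1 m away. Numbers are invented.
from statistics import mean

rt_near = [345, 360, 338, 352, 341]  # ms, tactile RTs with near sound
rt_far  = [392, 405, 388, 399, 410]  # ms, tactile RTs with far sound

facilitation = mean(rt_far) - mean(rt_near)
print(f"mean RT near sound: {mean(rt_near):.0f} ms")
print(f"mean RT far sound:  {mean(rt_far):.0f} ms")
print(f"near-sound facilitation: {facilitation:.0f} ms")
# Per the abstract, rTMS over vPMc or PPc (but not V1) abolishes this
# facilitation, which would show up here as a difference close to zero.
```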
7

Chen, Lihan, Qingcui Wang, and Ming Bao. "Spatial References and Audio-Tactile Interaction in Cross-Modal Dynamic Capture." Multisensory Research 27, no. 1 (2014): 55–70. http://dx.doi.org/10.1163/22134808-00002441.

Full text
Abstract:
In audiotactile dynamic capture, judgment of the direction of an apparent motion stream (such as auditory motion) was impeded (hence ‘captured’) by the presentation of a concurrent, but directionally opposite apparent motion stream (such as tactile motion) from a distractor modality, leading to a cross-modal dynamic capture (CDC) effect. That is to say, the percentage of correct reporting of the direction of the target motion was reduced. Previous studies have revealed the effect of stimulus onset asynchronies (SOAs) and the potential spatial remapping (by adopting a cross-hands posture) in CDC. However, further exploration of the dynamic capture process under different postures was not available due to the fact that only two levels of time asynchronies were employed (either synchronous or with an SOA of 500 ms). This study introduced a broad range of SOAs (−400 ms to 400 ms, tactile stream preceded auditory stream or vice versa) to explore the time course of audio-tactile interaction in CDC with two spatial references — arms-uncrossed or arms-crossed postures. Participants judged the direction of auditory apparent motion with tactile distractors. The results showed that in the arms-uncrossed condition, the CDC effect was prominent when the auditory–tactile events were in the temporal integration window (0–60 ms). However, with a preceding tactile cueing effect of SOA equal to and above 150 ms, the CDC effect was reduced, and no CDC effect was observed with the arms-crossed posture. These results suggest that the CDC effect is modulated by both cross-modal interaction and the spatial reference (especially for the distractors). The magnitude of the CDC effects in audiotactile interaction may be accounted for by the reliability of tactile spatial-temporal information.
APA, Harvard, Vancouver, ISO, and other styles
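The cross-modal dynamic capture effect described above is typically summarised as the drop in direction-judgement accuracy when the tactile distractor moves opposite to, rather than with, the auditory target, computed separately per SOA. The sketch below illustrates that bookkeeping with invented accuracy values.

```python
# Sketch of how a cross-modal dynamic capture (CDC) effect could be summarised:
# congruent minus incongruent accuracy for auditory direction judgements, per
# SOA. All accuracy values are invented for illustration.

accuracy = {
    # SOA (ms): (congruent accuracy, incongruent accuracy)
    -400: (0.95, 0.93),
    -150: (0.94, 0.90),
       0: (0.96, 0.71),  # strongest capture inside the temporal integration window
      60: (0.95, 0.78),
     150: (0.94, 0.91),
     400: (0.95, 0.94),
}

for soa, (cong, incong) in accuracy.items():
    cdc = cong - incong
    print(f"SOA {soa:+5d} ms: CDC effect = {cdc:.2f}")
```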
8

Tonelli, Alessia, Claudio Campus, Andrea Serino, and Monica Gori. "Enhanced audio-tactile multisensory interaction in a peripersonal task after echolocation." Experimental Brain Research 237, no. 3 (January 7, 2019): 855–64. http://dx.doi.org/10.1007/s00221-019-05469-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Karam, M., F. A. Russo, and D. I. Fels. "Designing the Model Human Cochlea: An Ambient Crossmodal Audio-Tactile Display." IEEE Transactions on Haptics 2, no. 3 (July 2009): 160–69. http://dx.doi.org/10.1109/toh.2009.32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Monkman, G. J. "An Electrorheological Tactile Display." Presence: Teleoperators and Virtual Environments 1, no. 2 (January 1992): 219–28. http://dx.doi.org/10.1162/pres.1992.1.2.219.

Full text
Abstract:
In addition to force and torque reflection, teleoperation also requires a degree of tactile feedback. This is particularly important where knowledge of a surface topology is desired, such as might be encountered by an underwater or space exploration vehicle. Similarly, the aerospace industry is presently developing ever increasingly sophisticated virtual reality environments for pilot training. It is felt that, in addition to visual, audio, and torque feedback, some form of tactile feedback would be useful. This paper presents a means by which electrorheological fluids may be used to provide a relatively high resolution tactile display containing virtually no moving parts. Design parameters are outlined and an example of a working model is shown. The extension of this and similar technology to the display of rapidly time varying tactile images is also discussed.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Audio-tactile interaction"

1

Hoggan, Eve Elizabeth. "Crossmodal audio and tactile interaction with mobile touchscreens." Thesis, University of Glasgow, 2010. http://theses.gla.ac.uk/1863/.

Full text
Abstract:
Touchscreen mobile devices often use cut-down versions of desktop user interfaces placing high demands on the visual sense that may prove awkward in mobile settings. The research in this thesis addresses the problems encountered by situationally impaired mobile users by using crossmodal interaction to exploit the abundant similarities between the audio and tactile modalities. By making information available to both senses, users can receive the information in the most suitable way, without having to abandon their primary task to look at the device. This thesis begins with a literature review of related work followed by a definition of crossmodal icons. Two icons may be considered to be crossmodal if and only if they provide a common representation of data, which is accessible interchangeably via different modalities. Two experiments investigated possible parameters for use in crossmodal icons with results showing that rhythm, texture and spatial location are effective. A third experiment focused on learning multi-dimensional crossmodal icons and the extent to which this learning transfers between modalities. The results showed identification rates of 92% for three-dimensional audio crossmodal icons when trained in the tactile equivalents, and identification rates of 89% for tactile crossmodal icons when trained in the audio equivalent. Crossmodal icons were then incorporated into a mobile touchscreen QWERTY keyboard. Experiments showed that keyboards with audio or tactile feedback produce fewer errors and greater speeds of text entry compared to standard touchscreen keyboards. The next study examined how environmental variables affect user performance with the same keyboard. The data showed that each modality performs differently with varying levels of background noise or vibration and the exact levels at which these performance decreases occur were established. The final study involved a longitudinal evaluation of a touchscreen application, CrossTrainer, focusing on longitudinal effects on performance with audio and tactile feedback, the impact of context on performance and personal modality preference. The results show that crossmodal audio and tactile icons are a valid method of presenting information to situationally impaired mobile touchscreen users with recognition rates of 100% over time. This thesis concludes with a set of guidelines on the design and application of crossmodal audio and tactile feedback to enable application and interface designers to employ such feedback in all systems.
APA, Harvard, Vancouver, ISO, and other styles
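One practical idea running through this thesis is that feedback can be routed to whichever modality the current environment leaves intact: loud surroundings favour tactile feedback, strong ambient vibration favours audio. A minimal sketch of that routing logic follows; the thresholds and the function itself are hypothetical placeholders, not values reported in the thesis.

```python
# Hypothetical sketch of routing feedback to the modality that the current
# environment degrades least. Threshold values are invented placeholders.

def choose_feedback(ambient_noise_db: float, ambient_vibration_g: float) -> str:
    NOISE_LIMIT_DB = 75.0     # assumed level above which audio cues are masked
    VIBRATION_LIMIT_G = 0.5   # assumed level above which tactile cues are masked
    noisy = ambient_noise_db > NOISE_LIMIT_DB
    shaky = ambient_vibration_g > VIBRATION_LIMIT_G
    if noisy and not shaky:
        return "tactile"
    if shaky and not noisy:
        return "audio"
    return "audio+tactile"    # redundant presentation when neither or both are degraded

if __name__ == "__main__":
    print(choose_feedback(82.0, 0.1))  # loud train carriage -> tactile
    print(choose_feedback(55.0, 0.9))  # quiet but bumpy bus  -> audio
    print(choose_feedback(60.0, 0.2))  # office               -> audio+tactile
```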
2

Hobeika, Lise. "Interplay between multisensory integration and social interaction in auditory space : towards an integrative neuroscience approach of proxemics." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCB116.

Full text
Abstract:
The space near the body, called peripersonal space (PPS), was originally studied in social psychology and anthropology as an important factor in interpersonal communication. It was later described by neurophysiological studies in monkeys as a space mapped with multisensory neurons. Those neurons discharge only when events are occurring near the body (be it tactile, visual or audio information), delineating the space that people consider as belonging to them. The human brain also codes events that are near the body differently from those that are farther away. This dedicated brain function is critical to interact satisfactorily with the external world, be it for defending oneself or to reach objects of interest. However, little is known about how this function is impacted by real social interactions. In this work, we have conducted several studies aiming at understanding the factors that contribute to the permeability and adaptive aspects of PPS. A first study examined lateral PPS for individuals in isolation, by measuring reaction time to tactile stimuli when an irrelevant sound is looming towards the body of the individual. It revealed an anisotropy of reaction time across hemispaces, that we could link to handedness. A second study explored the modulations of PPS in social contexts. It was found that minimal social instructions could influence the shape of peripersonal space, with a complex modification of behaviors in collaborative tasks that outreaches the handedness effect. The third study is a methodological investigation attempting to go beyond the limitations of the behavioral methods measuring PPS, and proposing a new direction to assess how stimuli coming towards the body are integrated according to their distance and the multisensory context in which they are processed. Taken together, our work emphasizes the importance of investigating multisensory integration in 3D space around the body to fully capture PPS mechanisms, and the potential impacts of social factors on low-level multisensory processes. Moreover, this research provides evidence that neurocognitive social investigations, in particular on space perception, benefit from going beyond the traditional isolated individual protocols towards actual live social interactive paradigms
APA, Harvard, Vancouver, ISO, and other styles
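The looming-sound paradigm mentioned in this abstract measures tactile reaction time while a task-irrelevant sound approaches the body. In this literature, such data are commonly described with a sigmoidal function of sound distance whose midpoint estimates the peripersonal-space boundary. The sketch below illustrates that common model with hypothetical parameters; it is not necessarily the exact analysis used in the thesis.

```python
# Common descriptive model for looming-sound PPS experiments: tactile RT as a
# sigmoid function of the sound's distance from the body. Parameters below are
# hypothetical and chosen only to produce a plausible-looking curve.
import math

def rt_model(distance_cm: float, rt_far: float, gain: float,
             midpoint_cm: float, slope: float) -> float:
    """Predicted tactile RT given the distance of the concurrent sound."""
    return rt_far - gain / (1.0 + math.exp((distance_cm - midpoint_cm) / slope))

if __name__ == "__main__":
    # Hypothetical parameters: RTs speed up by up to ~40 ms as the sound
    # approaches, with the transition centred around 45 cm from the body.
    for d in (100, 80, 60, 45, 30, 15):
        print(f"distance {d:3d} cm -> predicted RT {rt_model(d, 380, 40, 45, 8):.0f} ms")
```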
3

Dicke, Christina. "A Holistic Design Concept For Eyes-Free Mobile Interfaces." Thesis, University of Canterbury. Computer Science and Software Engineering, 2012. http://hdl.handle.net/10092/7174.

Full text
Abstract:
This thesis presents a series of studies to explore and understand the design of eyes-free interfaces for mobile devices. The motivation is to devise a holistic design concept that is based on the WIMP paradigm and is adapted to the requirements of mobile user interaction. It is proposed that audio is a very efficient and effective modality for use in an eyes-free mobile interface. Methods to transfer the WIMP paradigm to eyes-free interfaces are proposed and evaluated. Guidelines for the implementation of the paradigm are given and – by means of an example – a holistic design concept is proposed. This thesis begins with an introduction to and critical reflection of recurrently important themes and research methods from the disciplines of psychoacoustics, psychology, and presence research. An overview of related work is given, paying particular attention to the use of interface metaphors in mobile eyes-free interfaces. The notion of distance is discussed as a method to prioritise, structure, and manage attention in eyes-free interfaces. Practical issues arising from sources becoming inaudible with increasing distance can be addressed by proposing a method modeled on echo location. This method was compared to verbally coded distance information and proved useful for identifying the closest of several objects, while verbally coded distance information was found to be more efficient for identifying the precise distance of an object. The knowledge gained from the study can contribute to improving other applications, such as GPS based navigation. Furthermore, the issue of gaining an overview of accessible objects by means of sound was examined. The results showed that a minimum of 200 ms between adjacent sound samples should be adhered to. Based on these findings, both earcons and synthesized speech are recommendable, although speech has the advantage of being more flexible and easier to learn. Monophonic reproduction yields comparable results to spatial reproduction. However, spatial reproduction has the additional benefit of indicating an item’s position. These results are transferable and generally relevant for the use of audio in HCI. Tactile interaction techniques were explored as a means to interact with an auditory interface and were found to be both effective and enjoyable. One of the more general observations was that 2D and 3D gestures were intuitively used by participants, who transferred their knowledge of established gestures to auditory interfaces. It was also found that participants often used 2D gestures to select an item and proceeded to manipulate it with a 3D gesture. The results suggest the use of a small gesture set with reversible gestures for do/undo-type actions, which was further explored in a follow-up study. It could be shown that simple 3D gestures are a viable way of manipulating spatialized sound sources in a complex 3D auditory display. While the main contribution of this thesis lies in the area of HCI, previously unresearched issues from adjacent disciplines that impact the user experience of auditory interfaces have been addressed. It was found that regular, predictable movement patterns in 3D audio spaces cause symptoms of simulator sickness. However, these were found to be minor and only occurred under extreme conditions. Additionally, the influence of the audio reproduction method on the perception of presence, social presence, and realism was examined. It was found that both stereophonic and binaural reproduction have advantages over monophonic sound reproduction: stereophonic sound increases the perception of social presence while binaural sound increases the feeling of being present in a virtual environment. The results are important contributions insofar as one of the main applications of mobile devices is voice based communication; it is reasonable to assume that there will be an increase in real-time voice based social and cooperative networking applications. This thesis concludes with a conceptual design of a system called “Foogue”, which uses the results of the previous experiments as the basis of an eyes-free interface that utilizes spatial audio and gesture input.
APA, Harvard, Vancouver, ISO, and other styles
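One concrete guideline from this thesis is that adjacent sound samples in an auditory overview should be separated by at least 200 ms. The sketch below shows a simple scheduler that respects that gap; the menu items and durations are placeholders.

```python
# Sketch of one guideline from the abstract: keep at least 200 ms of silence
# between adjacent sound samples when presenting an auditory overview.

MIN_GAP_S = 0.200  # minimum silence between adjacent samples, per the abstract

def schedule(items):
    """Return (onset_time_s, name) for each item, inserting the minimum gap."""
    t, plan = 0.0, []
    for name, duration_s in items:
        plan.append((round(t, 3), name))
        t += duration_s + MIN_GAP_S
    return plan

if __name__ == "__main__":
    menu = [("Contacts", 0.45), ("Messages", 0.50), ("Music", 0.40), ("Settings", 0.55)]
    for onset, name in schedule(menu):
        print(f"{onset:5.2f} s  ->  {name}")
```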
4

Giesa, Anette Isabella. "Navigating through haptics and sound: Exploring non-visual navigation for urban cyclists to enhance the cycling experience." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-21693.

Full text
Abstract:
Bicyclists are increasingly shaping the picture of urban traffic. With regard to guided navigation through urban areas, navigation systems designed for this type of traffic participant do not offer a satisfying solution. Voice instructions are often perceived as annoying and far too detailed. In addition, the use of headphones to hear these instructions significantly reduces the hearing and localization of environmental sounds. Visual information, on the other hand, draws attention too far away from the main traffic situation. This affects the ability to react to and interact with other traffic participants and the surroundings, and results in a feeling of insecurity. This thesis investigates how acoustic and vibro-tactile signals can be used to provide cyclists with necessary navigation instructions while maintaining the ability to perceive ambient sounds and keep attention on the environment. In addition, the focus is placed on the experience of guided navigation with a non-visual, multi-sensory system.
APA, Harvard, Vancouver, ISO, and other styles
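A minimal sketch of the kind of non-visual cue mapping this thesis explores follows: turn instructions rendered as vibration bursts on the left or right side, growing more urgent as the turn approaches. The patterns, distance thresholds, and function names are invented for illustration and are not taken from the thesis.

```python
# Hypothetical mapping from an upcoming turn to a vibro-tactile cue on the
# handlebars. All patterns and thresholds are invented placeholders.

def haptic_cue(turn: str, distance_m: float):
    """Return (side, burst pattern in ms) for an upcoming turn."""
    side = "left grip" if turn == "left" else "right grip"
    if distance_m > 100:
        pattern = [300]                  # single gentle pre-announcement
    elif distance_m > 30:
        pattern = [150, 100, 150]        # double burst: turn is coming up
    else:
        pattern = [80, 60, 80, 60, 80]   # rapid bursts: turn now
    return side, pattern

if __name__ == "__main__":
    for d in (150, 60, 15):
        side, pattern = haptic_cue("right", d)
        print(f"{d:3d} m before turn: vibrate {side} with pattern {pattern} ms")
```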
5

Vissani, Matteo. "Multisensory features of peripersonal space representation: an analysis via neural network modelling." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
The peripersonal space (PPS) is the space immediately surrounding the body. It is coded in the brain in a multisensory, body part-centered (e.g. hand-centered, trunk-centered), modular fashion. This is supported by the existence of multisensory neurons (in fronto-parietal areas) with a tactile receptive field on a specific body part (hand, arm, trunk, etc.) and a visual/auditory receptive field surrounding the same body part. Recent behavioural results (Serino et al., Sci Rep 2015), obtained using an audio-tactile paradigm, have further supported the existence of distinct PPS representations, each specific to a single body part (hand, trunk, face) and characterized by specific properties. That study also showed that the PPS representations – although distinct – are not independent. In particular, the hand-PPS loses its properties and assumes those of the trunk-PPS when the hand is close to the trunk, as if the hand-PPS were encapsulated within the trunk-PPS. Similarly, the face-PPS appears to be englobed within the trunk-PPS. It remains unclear how this interaction, which manifests behaviourally, can be implemented at a neural level by the modular organization of PPS representations. The aim of this thesis is to propose a neural network model to aid comprehension of the underlying neurocomputational mechanisms. The model includes three subnetworks devoted to the single PPS representations around the hand, the face and the trunk. Furthermore, interaction mechanisms – controlled by proprioceptive neurons – are postulated among the subnetworks. The network is able to reproduce the behavioural data, explaining them in terms of neural properties and responses. Moreover, the network provides some novel predictions that can be tested in vivo. One of these predictions has been tested in this work by performing an ad-hoc behavioural experiment at the Laboratory of Cognitive Neuroscience (Campus Biotech, Geneva) under the supervision of the neuropsychologist Dr Serino.
APA, Harvard, Vancouver, ISO, and other styles
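The interaction described in this abstract, in which the hand-centred representation takes on the trunk's properties when the hand is brought close to the trunk, can be caricatured as a simple proprioceptive gate. The toy sketch below illustrates only that gating idea with made-up extents; it is not the thesis's actual neural network model.

```python
# Toy illustration of the proprioceptive gating idea: each body part has its
# own PPS extent, but the hand-PPS assumes the trunk's (larger) extent when
# the hand is near the trunk. Extents and the gate distance are invented.

PPS_EXTENT_CM = {"hand": 45.0, "face": 55.0, "trunk": 75.0}

def effective_hand_pps(hand_to_trunk_cm: float, gate_cm: float = 20.0) -> float:
    """Hand PPS extent, switching to the trunk value when the hand is near the trunk."""
    if hand_to_trunk_cm <= gate_cm:
        return PPS_EXTENT_CM["trunk"]   # hand-PPS "englobed" by trunk-PPS
    return PPS_EXTENT_CM["hand"]

if __name__ == "__main__":
    for d in (60, 25, 10):
        print(f"hand {d} cm from trunk -> effective hand PPS = {effective_hand_pps(d)} cm")
```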
6

de Barros, Paulo. "Evaluation of Multi-sensory Feedback in Virtual and Real Remote Environments in a USAR Robot Teleoperation Scenario." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-dissertations/454.

Full text
Abstract:
The area of Human-Robot Interaction deals with problems not only related to robots interacting with humans, but also with problems related to humans interacting and controlling robots. This dissertation focuses on the latter and evaluates multi-sensory (vision, hearing, touch, smell) feedback interfaces as a means to improve robot-operator cognition and performance. A set of four empirical studies using both simulated and real robotic systems evaluated a set of multi-sensory feedback interfaces with various levels of complexity. The task scenario for the robot in these studies involved the search for victims in a debris-filled environment after a fictitious catastrophic event (e.g., earthquake) took place. The results show that, if well-designed, multi-sensory feedback interfaces can indeed improve the robot operator data perception and performance. Improvements in operator performance were detected for navigation and search tasks despite minor increases in workload. In fact, some of the multi-sensory interfaces evaluated even led to a reduction in workload. The results also point out that redundant feedback is not always beneficial to the operator. While introducing the concept of operator omni-directional perception, that is, the operator’s capability of perceiving data or events coming from all senses and in all directions, this work explains that feedback redundancy is only beneficial when it enhances the operator omni-directional perception of data relevant to the task at hand. Last, the comprehensive methodology employed and refined over the course of the four studies is suggested as a starting point for the design of future HRI user studies. In summary, this work sheds some light on the benefits and challenges multi-sensory feedback interfaces bring, specifically on teleoperated robotics. It adds to our current understanding of these kinds of interfaces and provides a few insights to assist the continuation of research in the area.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Audio-tactile interaction"

1

Edwards, Alistair D. N., Nazatul Naquiah Abd Hamid, and Helen Petrie. "Exploring Map Orientation with Interactive Audio-Tactile Maps." In Human-Computer Interaction – INTERACT 2015, 72–79. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-22701-6_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tahiroğlu, Koray, Johan Kildal, Teemu Ahmaniemi, Simon Overstall, and Valtteri Wikström. "Embodied Interactions with Audio-Tactile Virtual Objects in AHNE." In Haptic and Audio Interaction Design, 101–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32796-4_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Papetti, Stefano, Federico Fontana, Marco Civolani, Amir Berrezag, and Vincent Hayward. "Audio-tactile Display of Ground Properties Using Interactive Shoes." In Haptic and Audio Interaction Design, 117–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15841-4_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tuuri, Kai, Tuomas Eerola, and Antti Pirhonen. "Leaping across Modalities: Speed Regulation Messages in Audio and Tactile Domains." In Haptic and Audio Interaction Design, 10–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15841-4_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hoggan, Eve. "Crossmodal Interaction: Using Audio or Tactile Displays in Mobile Devices." In Lecture Notes in Computer Science, 577–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-74800-7_54.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Olivetti Belardinelli, Marta, Stefano Federici, Franco Delogu, and Massimiliano Palmiero. "Sonification of Spatial Information: Audio-Tactile Exploration Strategies by Normal and Blind Subjects." In Universal Access in Human-Computer Interaction. Intelligent and Ubiquitous Interaction Environments, 557–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02710-9_62.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Picelli Sanches, Emilia Christie, Juliana Bueno, and Maria Lucia Leite Ribeiro Okimoto. "Designing 3D Printed Audio-Tactile Graphics: Recommendations from Prior Research." In Universal Access in Human-Computer Interaction. Design Methods and User Experience, 461–72. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78092-0_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Engel, Christin, Nadja Konrad, and Gerhard Weber. "TouchPen: Rich Interaction Technique for Audio-Tactile Charts by Means of Digital Pens." In Lecture Notes in Computer Science, 446–55. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58796-3_52.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Abd Hamid, Nazatul Naquiah, Wan Adilah Wan Adnan, and Fariza Hanis Abdul Razak. "Identifying Sound Cues of the Outdoor Environment by Blind People to Represent Landmarks on Audio-Tactile Maps." In Universal Access in Human–Computer Interaction. Human and Technological Environments, 279–90. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58700-4_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Koustriava, Eleni, Konstantinos Papadopoulos, Panagiotis Koukourikos, and Marialena Barouti. "The Impact of Orientation and Mobility Aids on Wayfinding of Individuals with Blindness: Verbal Description vs. Audio-Tactile Map." In Universal Access in Human-Computer Interaction. Users and Context Diversity, 577–85. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-40238-3_55.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Audio-tactile interaction"

1

Altinsoy, M. Ercan, Sebastian Merchel, and Sebastian Tilsch. "Perceptual evaluation of violin vibrations and audio-tactile interaction." In ICA 2013 Montreal. ASA, 2013. http://dx.doi.org/10.1121/1.4799328.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Metatla, Oussama, Fiore Martin, Tony Stockman, and Nick Bryan-Kinns. "Non-Visual Menu Navigation: the Effect of an Audio-Tactile Display." In Proceedings of the 28th International BCS Human Computer Interaction Conference (HCI 2014). BCS Learning & Development, 2014. http://dx.doi.org/10.14236/ewic/hci2014.33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hamid, Nazatul Naquiah Abd, Wan Adilah Wan Adnan, and Fariza Hanis Abdul Razak. "Investigating blind people's preferences when exploring maps using static and rotatable audio-tactile maps at different orientations." In OzCHI '18: 30th Australian Computer-Human Interaction Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3292147.3292199.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Altinsoy, M. Ercan, and Maik Stamm. "Touch the sound: The role of audio-tactile and audio-proprioceptive interaction on the spatial orientation in virtual scenes." In ICA 2013 Montreal. ASA, 2013. http://dx.doi.org/10.1121/1.4799882.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Väänänen-Vainio-Mattila, Kaisa, Katja Suhonen, Jari Laaksonen, Johan Kildal, and Koray Tahiroğlu. "User experience and usage scenarios of audio-tactile interaction with virtual objects in a physical environment." In the 6th International Conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2513506.2513514.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Reichinger, Andreas, Anton Fuhrmann, Stefan Maierhofer, and Werner Purgathofer. "Gesture-Based Interactive Audio Guide on Tactile Reliefs." In ASSETS '16: The 18th International ACM SIGACCESS Conference on Computers and Accessibility. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2982142.2982176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Thevin, Lauren, Christophe Jouffrais, Nicolas Rodier, Nicolas Palard, Martin Hachet, and Anke M. Brock. "Creating Accessible Interactive Audio-Tactile Drawings using Spatial Augmented Reality." In ISS '19: Interactive Surfaces and Spaces. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3343055.3359711.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Abd Hamid, Nazatul Naquiah, and Alistair D. N. Edwards. "Facilitating route learning using interactive audio-tactile maps for blind and visually impaired people." In CHI '13 Extended Abstracts on Human Factors in Computing Systems. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2468356.2468364.

Full text
APA, Harvard, Vancouver, ISO, and other styles