To see the other types of publications on this topic, follow the link: Audio-tactile interaction.

Journal articles on the topic 'Audio-tactile interaction'

Consult the top 44 journal articles for your research on the topic 'Audio-tactile interaction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Hoggan, Eve. "Crossmodal Audio and Tactile Interaction with Mobile Touchscreens." International Journal of Mobile Human Computer Interaction 2, no. 4 (October 2010): 29–44. http://dx.doi.org/10.4018/jmhci.2010100102.

Abstract:
This article asserts that using crossmodal auditory and tactile interaction can aid mobile touchscreen users in accessing data non-visually and, by providing a choice of modalities, can help to overcome problems that occur in different mobile situations where one modality may be less suitable than another (Hoggan, 2010). By encoding data using the crossmodal parameters of audio and vibration, users can learn mappings and translate information between both modalities. In this regard, data may be presented to the most appropriate modality given the situation and surrounding environment.
2

Lin, I.-Fan, and Makio Kashino. "Is There Audio-Tactile Interaction in Perceptual Organization?" i-Perception 2, no. 8 (October 2011): 802. http://dx.doi.org/10.1068/ic802.

3

Altinsoy, M. Ercan, Sebastian Merchel, and Sebastian Tilsch. "Perceptual evaluation of violin vibrations and audio-tactile interaction." Journal of the Acoustical Society of America 133, no. 5 (May 2013): 3255. http://dx.doi.org/10.1121/1.4805250.

4

Martolini, Chiara, Giulia Cappagli, Sabrina Signorini, and Monica Gori. "Effects of Increasing Stimulated Area in Spatiotemporally Congruent Unisensory and Multisensory Conditions." Brain Sciences 11, no. 3 (March 9, 2021): 343. http://dx.doi.org/10.3390/brainsci11030343.

Abstract:
Research has shown that the ability to integrate complementary sensory inputs into a unique and coherent percept based on spatiotemporal coincidence, namely multisensory integration, can improve perceptual precision. Despite the extensive research on multisensory integration, very little is known about the principal mechanisms responsible for the spatial interaction of multiple sensory stimuli. Furthermore, it is not clear whether the size of spatialized stimulation can affect unisensory and multisensory perception. The present study aims to unravel whether increasing the stimulated area has a detrimental or beneficial effect on sensory thresholds. Sixteen typical adults were asked to discriminate unimodal (visual, auditory, tactile), bimodal (audio-visual, audio-tactile, visuo-tactile) and trimodal (audio-visual-tactile) stimulation produced by one, two, three or four devices positioned on the forearm. Results related to unisensory conditions indicate that increasing the stimulated area has a detrimental effect on auditory and tactile accuracy and on visual reaction times, suggesting that the size of the stimulated area affects these percepts. Concerning multisensory stimulation, our findings indicate that integrating auditory and tactile information improves sensory precision only when the stimulation area is augmented to four devices, suggesting that multisensory interaction occurs over expanded spatial areas.
5

Zeng, Limin, Mei Miao, and Gerhard Weber. "Interactive Audio-haptic Map Explorer on a Tactile Display." Interacting with Computers 27, no. 4 (February 26, 2014): 413–29. http://dx.doi.org/10.1093/iwc/iwu006.

6

Serino, Andrea, Elisa Canzoneri, and Alessio Avenanti. "Fronto-parietal Areas Necessary for a Multisensory Representation of Peripersonal Space in Humans: An rTMS Study." Journal of Cognitive Neuroscience 23, no. 10 (October 2011): 2956–67. http://dx.doi.org/10.1162/jocn_a_00006.

Abstract:
A network of brain regions including the ventral premotor cortex (vPMc) and the posterior parietal cortex (PPc) is consistently recruited during processing of multisensory stimuli within peripersonal space (PPS). However, to date, information on the causal role of these fronto-parietal areas in multisensory PPS representation is lacking. Using low-frequency repetitive TMS (rTMS; 1 Hz), we induced transient virtual lesions to the left vPMc, PPc, and visual cortex (V1, control site) and tested whether rTMS affected audio–tactile interaction in the PPS around the hand. Subjects performed a timed response task to a tactile stimulus on their right (contralateral to rTMS) hand while concurrent task-irrelevant sounds were presented either close to the hand or 1 m away from the hand. When no rTMS was delivered, a sound close to the hand reduced RTs to tactile targets as compared with when a far sound was presented. This space-dependent, auditory modulation of tactile perception was specific to a hand-centered reference frame. Such a specific form of multisensory interaction near the hand can be taken as a behavioral hallmark of PPS representation. Crucially, virtual lesions to vPMc and PPc, but not to V1, eliminated the speeding effect due to near sounds, showing a disruption of audio–tactile interactions around the hand. These findings indicate that multisensory interaction around the hand depends on the functions of vPMc and PPc, thus pointing to the necessity of this human fronto-parietal network in multisensory representation of PPS.
7

Chen, Lihan, Qingcui Wang, and Ming Bao. "Spatial References and Audio-Tactile Interaction in Cross-Modal Dynamic Capture." Multisensory Research 27, no. 1 (2014): 55–70. http://dx.doi.org/10.1163/22134808-00002441.

Abstract:
In audiotactile dynamic capture, judgment of the direction of an apparent motion stream (such as auditory motion) was impeded (hence ‘captured’) by the presentation of a concurrent, but directionally opposite, apparent motion stream (such as tactile motion) from a distractor modality, leading to a cross-modal dynamic capture (CDC) effect. That is to say, the percentage of correct reports of the direction of the target motion was reduced. Previous studies have revealed the effect of stimulus onset asynchronies (SOAs) and of potential spatial remapping (by adopting a cross-hands posture) in CDC. However, further exploration of the dynamic capture process under different postures was not available, since only two levels of time asynchrony were employed (either synchronous or with an SOA of 500 ms). This study introduced a broad range of SOAs (−400 ms to 400 ms, tactile stream preceding auditory stream or vice versa) to explore the time course of audio-tactile interaction in CDC with two spatial references — arms-uncrossed or arms-crossed postures. Participants judged the direction of auditory apparent motion with tactile distractors. The results showed that in the arms-uncrossed condition, the CDC effect was prominent when the auditory–tactile events fell within the temporal integration window (0–60 ms). However, with a preceding tactile cue at SOAs equal to or above 150 ms, the CDC effect was reduced, and no CDC effect was observed with the arms-crossed posture. These results suggest that the CDC effect is modulated by both cross-modal interaction and the spatial reference (especially for the distractors). The magnitude of the CDC effect in audiotactile interaction may be accounted for by the reliability of tactile spatiotemporal information.
8

Tonelli, Alessia, Claudio Campus, Andrea Serino, and Monica Gori. "Enhanced audio-tactile multisensory interaction in a peripersonal task after echolocation." Experimental Brain Research 237, no. 3 (January 7, 2019): 855–64. http://dx.doi.org/10.1007/s00221-019-05469-3.

9

Karam, M., F. A. Russo, and D. I. Fels. "Designing the Model Human Cochlea: An Ambient Crossmodal Audio-Tactile Display." IEEE Transactions on Haptics 2, no. 3 (July 2009): 160–69. http://dx.doi.org/10.1109/toh.2009.32.

10

Monkman, G. J. "An Electrorheological Tactile Display." Presence: Teleoperators and Virtual Environments 1, no. 2 (January 1992): 219–28. http://dx.doi.org/10.1162/pres.1992.1.2.219.

Abstract:
In addition to force and torque reflection, teleoperation also requires a degree of tactile feedback. This is particularly important where knowledge of a surface topology is desired, such as might be encountered by an underwater or space exploration vehicle. Similarly, the aerospace industry is presently developing increasingly sophisticated virtual reality environments for pilot training. It is felt that, in addition to visual, audio, and torque feedback, some form of tactile feedback would be useful. This paper presents a means by which electrorheological fluids may be used to provide a relatively high-resolution tactile display containing virtually no moving parts. Design parameters are outlined and an example of a working model is shown. The extension of this and similar technology to the display of rapidly time-varying tactile images is also discussed.
11

Brecher, Christian, Daniel Kolster, and Werner Herfs. "Audio-Tactile Feedback Mechanisms for Multi-Touch HMI Panels of Production Engineering Systems." International Journal of Automation Technology 6, no. 3 (May 5, 2012): 369–76. http://dx.doi.org/10.20965/ijat.2012.p0369.

Abstract:
Over the last decade, touch screen interaction has been gaining wide acceptance in information technology and daily consumer products. Accordingly, the first approaches to applications and devices for production engineering systems are now entering the market. Although they employ intuitive user concepts, touch screens for industrial HMI panels still lack haptic feedback. Since operators observe the machining process and machine handling is often done blind, false handling or wrong input signals may damage machines or injure human workers. With this in mind, this paper presents a haptic feedback mechanism for touch-based interaction. A user evaluation performed on the developed system reveals increased input security and thus an enhanced user experience.
12

Altinsoy, M. Ercan, and Maik Stamm. "Touch the sound: The role of audio-tactile and audio-proprioceptive interaction on the spatial orientation in virtual scenes." Journal of the Acoustical Society of America 133, no. 5 (May 2013): 3458. http://dx.doi.org/10.1121/1.4806149.

13

Haynes, Alice, Jonathan Lawry, Christopher Kent, and Jonathan Rossiter. "FeelMusic: Enriching Our Emotive Experience of Music through Audio-Tactile Mappings." Multimodal Technologies and Interaction 5, no. 6 (May 31, 2021): 29. http://dx.doi.org/10.3390/mti5060029.

Abstract:
We present the concept of FeelMusic and evaluate an implementation of it: an augmentation of music through the haptic translation of core musical elements. Music and touch are intrinsic modes of affective communication that are physically sensed. By projecting musical features such as rhythm and melody into the haptic domain, we can explore and enrich this embodied sensation; hence, we investigated audio-tactile mappings that successfully render emotive qualities. We began by investigating the affective qualities of vibrotactile stimuli through a psychophysical study with 20 participants using the circumplex model of affect. We found positive correlations between vibration frequency and arousal across participants, but correlations with valence were specific to the individual. We then developed novel FeelMusic mappings by translating key features of music samples and implementing them with “Pump-and-Vibe”, a wearable interface utilising fluidic actuation and vibration to generate dynamic haptic sensations. We conducted a preliminary investigation to evaluate the FeelMusic mappings by gathering 20 participants’ responses to the musical, tactile and combined stimuli, using valence ratings and descriptive words from Hevner’s adjective circle to measure affect. These mappings, and new tactile compositions, validated that FeelMusic interfaces have the potential to enrich musical experiences and be a means of affective communication in their own right. FeelMusic is a tangible realisation of the expression “feel the music”, enriching our musical experiences.
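The per-participant correlation analysis described in this abstract can be sketched roughly as follows; the stimulus frequencies, rating scale, and data below are illustrative assumptions, not the study's materials.

```python
# Per-participant Pearson correlations between vibration frequency and affect
# ratings, mirroring the kind of analysis described above. Data are made up
# for illustration; arousal/valence are assumed to be 1-9 ratings.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
frequencies = np.array([40.0, 80.0, 120.0, 160.0, 200.0, 240.0])  # Hz, assumed stimulus set

for participant in range(3):
    # Arousal is assumed to rise with frequency; valence is idiosyncratic per person.
    arousal = 1 + 8 * (frequencies / frequencies.max()) + rng.normal(0, 0.5, frequencies.size)
    valence = rng.uniform(1, 9, frequencies.size)
    r_arousal, p_arousal = pearsonr(frequencies, arousal)
    r_valence, p_valence = pearsonr(frequencies, valence)
    print(f"P{participant + 1}: arousal r={r_arousal:.2f} (p={p_arousal:.3f}), "
          f"valence r={r_valence:.2f} (p={p_valence:.3f})")
```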
14

Vacher, Michel, François Portet, Anthony Fleury, and Norbert Noury. "Development of Audio Sensing Technology for Ambient Assisted Living." International Journal of E-Health and Medical Communications 2, no. 1 (January 2011): 35–54. http://dx.doi.org/10.4018/jehmc.2011010103.

Abstract:
One of the greatest challenges in Ambient Assisted Living is to design health smart homes that anticipate the needs of their inhabitants while maintaining their safety and comfort. It is thus essential to ease interaction with the smart home through systems that naturally react to voice commands using microphones rather than tactile interfaces. However, efficient audio analysis in such noisy environments is a challenging task. In this paper, a real-time audio analysis system devoted to smart home environments, the AuditHIS system, is presented. AuditHIS has been tested through three experiments carried out in a smart home, which are detailed. The results show the difficulty of the task and serve as a basis for discussing the stakes and challenges of this promising technology in the domain of AAL.
15

ROSÉN, B., and G. LUNDBORG. "Enhanced Sensory Recovery after Median Nerve Repair Using Cortical Audio–Tactile Interaction: A Randomised Multicentre Study." Journal of Hand Surgery (European Volume) 32, no. 1 (February 2007): 31–37. http://dx.doi.org/10.1016/j.jhsb.2006.08.019.

Abstract:
The “Sensor Glove System” offers an alternative afferent inflow from the hand early after nerve repair in the forearm, mediated through the hearing sense, implying that deprivation of one sense can be compensated for by another sense. This sensory “by-pass” was used early after repair of the median nerve with the intention of improving recovery of functional sensibility by maintaining an active sensory map of the hand in the somatosensory cortex during the deafferentation period. In a prospective multicentre clinical study, one group (n = 14) started sensory re-education early after surgery using the Sensor Glove System, and the control group (n = 12) received conventional sensory re-education starting 3 months postoperatively. The patients were checked regularly during a 1-year period, with a focus on recovery of tactile gnosis. After 12 months, tactile gnosis was significantly better in the Sensor Glove System group. This highlights the importance of the timing of the introduction of training after nerve repair, and of immediate sensory re-learning.
16

Mendes, Raquel Metzker, Carlo Rondinoni, Marisa de Cássia Registro Fonseca, Rafael Inácio Barbosa, Carlos Ernesto Garrido Salmón, Cláudio Henrique Barbieri, and Nilton Mazzer. "Cortical and functional responses to an early protocol of sensory re-education of the hand using audio–tactile interaction." Hand Therapy 23, no. 2 (December 7, 2017): 45–52. http://dx.doi.org/10.1177/1758998317746699.

Abstract:
Introduction: Early sensory re-education techniques are important strategies associated with preservation of the cortical hand area. The aim of this study was to investigate early cortical responses, sensory function outcomes and disability in patients treated with an early protocol of sensory re-education of the hand using an audio-tactile interaction device with a sensor glove model. Methods: After surgical repair of the median and/or ulnar nerves, participants received either early sensory re-education twice a week with the sensor glove for three months or no specific sensory training. Both groups underwent standard rehabilitation. Patients were assessed at one, three and six months after surgery for training-related cortical responses by functional magnetic resonance imaging, sensory thresholds, discriminative touch and disability using the Disabilities of the Arm, Shoulder and Hand patient-reported questionnaire. Results: At six months, there were no statistically significant differences in sensory function between groups. During functional magnetic resonance imaging, trained patients presented complex cortical responses to auditory stimulation, indicating effective connectivity between the cortical hand map and associative areas. Conclusion: Training with the sensor glove model seems to provide some form of early cortical audio-tactile interaction in patients with sensory impairment of the hand after nerve injury. Although no differences were observed between groups in sensory function and disability at the intermediate phase of peripheral reinnervation, this study suggests that an early sensory intervention by sensory substitution could be an option to enhance cortical reorganization after nerve repair in the hand. Longer follow-up and an adequately powered trial are needed to confirm our findings.
17

Tanaka, Yukari, Yasuhiro Kanakogi, Masahiro Kawasaki, and Masako Myowa. "The integration of audio−tactile information is modulated by multimodal social interaction with physical contact in infancy." Developmental Cognitive Neuroscience 30 (April 2018): 31–40. http://dx.doi.org/10.1016/j.dcn.2017.12.001.

18

Haritaipan, Lalita, Masahiro Hayashi, and Céline Mougenot. "Design of a Massage-Inspired Haptic Device for Interpersonal Connection in Long-Distance Communication." Advances in Human-Computer Interaction 2018 (July 9, 2018): 1–11. http://dx.doi.org/10.1155/2018/5853474.

Abstract:
The use of tactile senses in mediated communication has generated considerable research interest in past decades. Since massage is a common practice in Asian cultures, we propose to introduce massage-based interactions in mediated communication between people in a close relationship. We designed a device for distant interactive massage to be used during online conversation and assessed its effect on interpersonal connection with eight pairs of Chinese participants in romantic relationships. All pairs were asked to engage in a conversation, either through a video call or through a massage-assisted video call. The findings showed that the use of the massage device significantly increased the perceived emotional and physical connection between the users. The results also showed a significant increase in engagement in the massage activity, e.g., total massage time and average force per finger, from positive conversation to negative conversation, demonstrating evidence of the interplay between audio-visual and haptic communication. Post hoc interviews showed the potential of the massage device for long-distance communication in romantic relationships as well as in parent–child relationships.
19

Papadopoulos, Konstantinos, Eleni Koustriava, and Marialena Barouti. "Cognitive maps of individuals with blindness for familiar and unfamiliar spaces: Construction through audio-tactile maps and walked experience." Computers in Human Behavior 75 (October 2017): 376–84. http://dx.doi.org/10.1016/j.chb.2017.04.057.

20

Maldonado, Ivan, Alfredo Illanes, Marco Kalmar, Thomas Sühn, Axel Boese, and Michael Friebe. "Audio waves and its loss of energy in puncture needles." Current Directions in Biomedical Engineering 5, no. 1 (September 1, 2019): 21–24. http://dx.doi.org/10.1515/cdbme-2019-0006.

Abstract:
The location of a puncture needle’s tip and the resistance of tissue against puncture are crucial information for clinicians during a percutaneous procedure. The tip location and needle alignment can be observed by image guidance. Tactile information caused by tissue resistance to rupture allows clinicians to perceive structural changes during puncture. Nevertheless, this sense is individual and subjective. To improve percutaneous procedures, the implementation of transducers to enhance or complement the senses offers objective feedback to the user. Known approaches are based, for example, on integrated force sensors. However, this comes with higher device costs and sterilization and certification issues. A recent publication shows the implementation of an audio transducer clipped to the proximal end of the needle. This sensor is capable of acquiring emitted sounds of the distal tip–tissue interaction that are transmitted over the needle structure. The interpretation of the measured audio signals is highly dependent on the transmission over the needle, the tissue and the penetration depth. To evaluate the influence of these parameters, this work implements a simplified experimental setup in a controlled environment with a minimum of noise and without the micro-tremors induced by a clinician’s hands. A steel rod simulating a needle is inserted into pork meat of different thicknesses. A controlled impact covering the needle’s tip mimics tissue contact. The resulting signals are recorded and analyzed for a better understanding of the system.
21

Griffin, Weston B., William R. Provancher, and Mark R. Cutkosky. "Feedback Strategies for Telemanipulation with Shared Control of Object Handling Forces." Presence: Teleoperators and Virtual Environments 14, no. 6 (December 2005): 720–31. http://dx.doi.org/10.1162/105474605775196634.

Abstract:
Shared control represents a middle ground between supervisory control and traditional bilateral control in which the remote system can exert control over some aspects of the task while the human operator maintains access to low-level forces and motions. In the case of dexterous telemanipulation, a natural approach is to share control of the object handling forces while giving the human operator direct access to remote tactile and force information at the slave fingertips. We describe a set of experiments designed to determine whether shared control can improve the ability of an operator to handle objects delicately and to determine what combinations of force, visual, and audio feedback provide the best level of performance and operator sense of presence. The results demonstrate the benefits of shared control and the need to carefully choose the types and methods of direct and indirect feedback.
22

Ramenahalli, Sudarshan. "A Biologically Motivated, Proto-Object-Based Audiovisual Saliency Model." AI 1, no. 4 (November 3, 2020): 487–509. http://dx.doi.org/10.3390/ai1040030.

Abstract:
The natural environment and our interaction with it are essentially multisensory, where we may deploy visual, tactile and/or auditory senses to perceive, learn and interact with our environment. Our objective in this study is to develop a scene analysis algorithm using multisensory information, specifically vision and audio. We develop a proto-object-based audiovisual saliency map (AVSM) for the analysis of dynamic natural scenes. A specialized audiovisual camera with a 360° field of view, capable of locating sound direction, is used to collect spatiotemporally aligned audiovisual data. We demonstrate that the performance of a proto-object-based audiovisual saliency map in detecting and localizing salient objects/events is in agreement with human judgment. In addition, the proto-object-based AVSM that we compute as a linear combination of visual and auditory feature conspicuity maps captures a higher number of valid salient events compared to unisensory saliency maps. Such an algorithm can be useful in surveillance, robotic navigation, video compression and related applications.
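The linear combination of conspicuity maps mentioned in this abstract can be sketched as follows; the map sizes, normalisation, and weights are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np

def normalize(conspicuity: np.ndarray) -> np.ndarray:
    """Scale a conspicuity map to [0, 1] (assumed normalisation step)."""
    lo, hi = conspicuity.min(), conspicuity.max()
    return (conspicuity - lo) / (hi - lo + 1e-9)

def audiovisual_saliency(visual: np.ndarray, auditory: np.ndarray,
                         w_v: float = 0.5, w_a: float = 0.5) -> np.ndarray:
    """Linear combination of visual and auditory conspicuity maps into an AVSM."""
    return w_v * normalize(visual) + w_a * normalize(auditory)

# Toy example: a panoramic scene discretised into a 64 x 256 grid (azimuth on columns).
rng = np.random.default_rng(0)
visual_map = rng.random((64, 256))        # stand-in visual conspicuity
auditory_map = np.zeros((64, 256))
auditory_map[:, 180:200] = 1.0            # sound energy localised to one azimuth band
avsm = audiovisual_saliency(visual_map, auditory_map)
print("most salient cell:", np.unravel_index(avsm.argmax(), avsm.shape))
```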
23

Fife Donaldson, Lucy. "Surface Contact: Film Design as an Exchange of Meaning." Film-Philosophy 22, no. 2 (June 2018): 203–21. http://dx.doi.org/10.3366/film.2018.0073.

Abstract:
Surface has become an important consideration of sensory film theory, conceived of in various forms: the screen itself as less a barrier than a permeable skin, the site of a meaningful interaction between film and audience; the image as a surface to be experienced haptically, the eye functioning as a hand that brushes across and engages with the field of vision; surfaces within the film, be they organic or fabricated, presenting a tactile appeal. Surface evokes contact and touch, the look or sound it produces (or produced on it) inviting consideration of its materiality, and perhaps even a tactile interchange. If the surface of film, across its varied associations, presents the possibility of an intersubjective contact between film and audience, this article seeks to include another body: that of the filmmaker. There are many people who contribute to the material constitution of a film and I would suggest that we might seek to appreciate its textures just as we might that of a painting. Focus on the fine detail of textures within the film becomes a way to foreground the contributions of filmmakers who have been less central to discussions of meaning, but whose work in the making of décor, costume and sound effects, has a significant impact on filmic affect. Through detailed discussion of film moments, archival design materials and interviews with film designers, this article will attend to the exchanges of meaning situated on the audio-visual surfaces of film.
24

Canzoneri, Elisa, Elisa Magosso, Amedeo Amoresano, and Andrea Serino. "Plasticity in multisensory body representations after amputation and prosthesis implantation." Seeing and Perceiving 25 (2012): 135. http://dx.doi.org/10.1163/187847612x647676.

Abstract:
Multisensory representations of the body and of the space around it (i.e., peripersonal space, PPS) depend on the physical structure of the body, in that they are constructed from incoming multisensory signals from different body parts. After a sudden change in the physical structure of the body, such as limb amputation, little is known about how multimodal representations of the body and of the PPS adapt to losing a part of the body, and how partially restoring the function of the missing body part by means of prosthesis implantation affects these multimodal body representations. We assessed body representation in a group of upper limb amputees by means of a tactile distance perception task, measuring the implicitly perceived length of the arm, and PPS representation by means of an audio–tactile interaction task, assessing the extension of the multisensory space where environmental stimuli interact with somatosensory processing. When patients performed the task on the amputated limb, without the prosthesis, the perceived arm length shrank, with a concurrent shift of PPS boundaries towards the stump. Wearing the prosthesis increased the perceived length of the stump and extended the boundaries of the PPS so as to include the prosthetic hand. The representations of the healthy limb were comparable to those of healthy controls. These results suggest that a modification of the physical body affects multisensory body and PPS representations for the amputated side; such representations are further shaped if prostheses are used to replace the lost body part.
25

Engels, Leonard F., Leonardo Cappello, Anke Fischer, and Christian Cipriani. "Testing silicone digit extensions as a way to suppress natural sensation to evaluate supplementary tactile feedback." PLOS ONE 16, no. 9 (September 1, 2021): e0256753. http://dx.doi.org/10.1371/journal.pone.0256753.

Abstract:
Dexterous use of the hands depends critically on sensory feedback, so it is generally agreed that functional supplementary feedback would greatly improve the use of hand prostheses. Much research still focuses on improving non-invasive feedback that could potentially become available to all prosthesis users. However, few studies on supplementary tactile feedback for hand prostheses demonstrated a functional benefit. We suggest that confounding factors impede accurate assessment of feedback, e.g., testing non-amputee participants that inevitably focus intently on learning EMG control, the EMG’s susceptibility to noise and delays, and the limited dexterity of hand prostheses. In an attempt to assess the effect of feedback free from these constraints, we used silicone digit extensions to suppress natural tactile feedback from the fingertips and thus used the tactile feedback-deprived human hand as an approximation of an ideal feed-forward tool. Our non-amputee participants wore the extensions and performed a simple pick-and-lift task with known weight, followed by a more difficult pick-and-lift task with changing weight. They then repeated these tasks with one of three kinds of audio feedback. The tests were repeated over three days. We also conducted a similar experiment on a person with severe sensory neuropathy to test the feedback without the extensions. Furthermore, we used a questionnaire based on the NASA Task Load Index to gauge the subjective experience. Unexpectedly, we did not find any meaningful differences between the feedback groups, neither in the objective nor the subjective measurements. It is possible that the digit extensions did not fully suppress sensation, but since the participant with impaired sensation also did not improve with the supplementary feedback, we conclude that the feedback failed to provide relevant grasping information in our experiments. The study highlights the complex interaction between task, feedback variable, feedback delivery, and control, which seemingly rendered even rich, high-bandwidth acoustic feedback redundant, despite substantial sensory impairment.
26

Fanibhare, Vaibhav, Nurul I. Sarkar, and Adnan Al-Anbuky. "A Survey of the Tactile Internet: Design Issues and Challenges, Applications, and Future Directions." Electronics 10, no. 17 (September 6, 2021): 2171. http://dx.doi.org/10.3390/electronics10172171.

Abstract:
The Tactile Internet (TI) is an emerging area of research involving 5G and beyond (B5G) communications to enable real-time interaction of haptic data over the Internet between tactile ends, with audio-visual data as feedback. This emerging TI technology is viewed as the next evolutionary step for the Internet of Things (IoT) and is expected to bring about a massive change in Healthcare 4.0, Industry 4.0 and autonomous vehicles to resolve complicated issues in modern society. This vision of TI makes a dream into a reality. This article aims to provide a comprehensive survey of TI, focussing on design architecture, key application areas, potential enabling technologies, current issues, and challenges to realise it. To illustrate the novelty of our work, we present a brainstorming mind-map of all the topics discussed in this article. We emphasise the design aspects of the TI and discuss the three main sections of the TI, i.e., master, network, and slave sections, with a focus on the proposed application-centric design architecture. With the help of the proposed illustrative diagrams of use cases, we discuss and tabulate the possible applications of the TI with a 5G framework and its requirements. Then, we extensively address the currently identified issues and challenges with promising potential enablers of the TI. Moreover, a comprehensive review focussing on related articles on enabling technologies is explored, including Fifth Generation (5G), Software-Defined Networking (SDN), Network Function Virtualisation (NFV), Cloud/Edge/Fog Computing, Multiple Access, and Network Coding. Finally, we conclude the survey with several research issues that are open for further investigation. Thus, the survey provides insights into the TI that can help network researchers and engineers to contribute further towards developing the next-generation Internet.
27

Renzi, Chiara, Patrick Bruns, Kirstin-Friederike Heise, Maximo Zimerman, Jan-Frederik Feldheim, Friedhelm C. Hummel, and Brigitte Röder. "Spatial Remapping in the Audio-tactile Ventriloquism Effect: A TMS Investigation on the Role of the Ventral Intraparietal Area." Journal of Cognitive Neuroscience 25, no. 5 (May 2013): 790–801. http://dx.doi.org/10.1162/jocn_a_00362.

Abstract:
Previous studies have suggested that the putative human homologue of the ventral intraparietal area (hVIP) is crucially involved in the remapping of tactile information into external spatial coordinates and in the realignment of tactile and visual maps. It is unclear, however, whether hVIP is critical for the remapping process during audio-tactile cross-modal spatial interactions. The audio-tactile ventriloquism effect, where the perceived location of a sound is shifted toward the location of a synchronous but spatially disparate tactile stimulus, was used to probe spatial interactions in audio-tactile processing. Eighteen healthy volunteers were asked to report the perceived location of brief auditory stimuli presented from three different locations (left, center, and right). Auditory stimuli were presented either alone (unimodal stimuli) or concurrently to a spatially discrepant tactile stimulus applied to the left or right index finger (bimodal stimuli), with the hands adopting either an uncrossed or a crossed posture. Single pulses of TMS were delivered over the hVIP or a control site (primary somatosensory cortex, SI) 80 msec after trial onset. TMS to the hVIP, compared with the control SI-TMS, interfered with the remapping of touch into external space, suggesting that hVIP is crucially involved in transforming spatial reference frames across audition and touch.
28

Zelic, Gregory, Denis Mottet, and Julien Lagarde. "Multisensory integration enhances coordination: The necessity of a phasing matching between cross-modal events and movements." Seeing and Perceiving 25 (2012): 212–13. http://dx.doi.org/10.1163/187847612x648404.

Abstract:
Recent research revealed what substrates may subserve the fascinating capacity of the brain to put together different senses, from single cells to extended networks (see, for review, Driver and Noesselt, 2008; Ghazanfar and Schroeder, 2006; Sperdin et al., 2010; Stein and Stanford, 2008), and how these lead to interesting behavioral benefits in response to cross-modal events, such as shorter reaction times, easier detection or more precise synchronization (Diederich and Colonius, 2004; Elliott et al., 2010). But what happens when a combination of multisensory perception and action is required? This is a key issue, since the organization of movements in space–time in harmony with our surrounding environment is the basis of our everyday life. Surprisingly enough, little is known about how different senses and movement are combined dynamically. Coordination skills allow us to test the effectiveness of such a combination, since external events have been shown to stabilize coordination performance when adequately tuned (Fink et al., 2000). We then tested the modulation of participants’ capacity to produce an anti-symmetric rhythmic bimanual coordination while synchronizing with audio–tactile versus audio-only and tactile-only metronomes pacing the coordination from low to high rates of motion. Three conditions of metronome structure known to stabilize the anti-symmetric mode were used: Simple, Double and Lateralized. We found redundant signal effects for Lateralized metronomes, but not for Simple and Double metronomes, better explained by neural audio–tactile interactions than by simple statistical redundancy. These results reflect effective cortical cooperation between components in charge of audio–tactile integration and those sustaining the anti-symmetric coordination pattern. We will discuss the apparent necessity for cross-modal events to match the phasing of movements in order to further stabilize the coordination.
29

Kalra, Siddharth, Sarika Jain, and Amit Agarwal. "Gesture Controlled Tactile Augmented Reality Interface for the Visually Impaired." Journal of Information Technology Research 14, no. 2 (April 2021): 125–51. http://dx.doi.org/10.4018/jitr.2021040107.

Abstract:
This paper proposes to create an augmented reality interface for the visually impaired, enabling a way of haptically interacting with the computer system by creating a virtual workstation, facilitating a natural and intuitive way to accomplish a multitude of computer-based tasks (such as emailing, word processing, storing and retrieving files from the computer, making a phone call, searching the web, etc.). The proposed system utilizes a combination of a haptic glove device, a gesture-based control system, and an augmented reality computer interface which creates an immersive interaction between the blind user and the computer. The gestures are recognized, and the user is provided with audio and vibratory haptic feedbacks. This user interface allows the user to actually “touch, feel, and physically interact” with digital controls and virtual real estate of a computer system. A test of applicability was conducted which showcased promising positive results.
30

Raheel, Aasim, Muhammad Majid, Majdi Alnowami, and Syed Muhammad Anwar. "Physiological Sensors Based Emotion Recognition While Experiencing Tactile Enhanced Multimedia." Sensors 20, no. 14 (July 21, 2020): 4037. http://dx.doi.org/10.3390/s20144037.

Abstract:
Emotion recognition has increased the potential of affective computing by providing instant feedback from users and, thereby, a better understanding of their behavior. Physiological sensors have been used to recognize human emotions in response to audio and video content that engages single (auditory) and multiple (two: auditory and vision) human senses, respectively. In this study, human emotions were recognized using physiological signals observed in response to tactile enhanced multimedia content that engages three (tactile, vision, and auditory) human senses. The aim was to give users an enhanced real-world sensation while engaging with multimedia content. To this end, four videos were selected and synchronized with an electric fan and a heater, based on timestamps within the scenes, to generate tactile enhanced content with cold and hot air effects, respectively. Physiological signals, i.e., electroencephalography (EEG), photoplethysmography (PPG), and galvanic skin response (GSR), were recorded using commercially available sensors while experiencing these tactile enhanced videos. The precision of the acquired physiological signals (including EEG, PPG, and GSR) is enhanced by pre-processing with a Savitzky-Golay smoothing filter. Frequency domain features (rational asymmetry, differential asymmetry, and correlation) from EEG, time domain features (variance, entropy, kurtosis, and skewness) from GSR, and heart rate and heart rate variability from PPG data are extracted. A k-nearest-neighbor classifier is applied to the extracted features to classify four emotions (happy, relaxed, angry, and sad). Our experimental results show that among individual modalities, PPG-based features give the highest accuracy of 78.57% as compared to EEG- and GSR-based features. The fusion of EEG, GSR, and PPG features further improved the classification accuracy to 79.76% (for four emotions) when interacting with tactile enhanced multimedia.
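A minimal sketch of the kind of pipeline this abstract describes (Savitzky-Golay smoothing, simple time-domain features, and a k-nearest-neighbor classifier); the window length, feature set, and toy data are assumptions, not the authors' configuration.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.stats import kurtosis, skew, entropy
from sklearn.neighbors import KNeighborsClassifier

def gsr_features(gsr: np.ndarray) -> np.ndarray:
    """Time-domain features from a GSR trace: variance, entropy, kurtosis, skewness."""
    smoothed = savgol_filter(gsr, window_length=51, polyorder=3)  # assumed window
    hist, _ = np.histogram(smoothed, bins=32, density=True)
    return np.array([smoothed.var(), entropy(hist + 1e-12),
                     kurtosis(smoothed), skew(smoothed)])

# Toy data: one feature vector per trial, labels 0..3 for happy/relaxed/angry/sad.
rng = np.random.default_rng(0)
X = np.vstack([gsr_features(rng.normal(size=2000)) for _ in range(40)])
y = rng.integers(0, 4, size=40)

clf = KNeighborsClassifier(n_neighbors=5).fit(X[:30], y[:30])
print("held-out accuracy:", clf.score(X[30:], y[30:]))
```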
31

SHLIENKOVA, Elena V., and Khristina V. KAYGORODOVA. "IMMERSIVE AUDIO EXPOSITION AND ITS VISUAL CONTENT AS ACTUALIZATION OF MUSEUM DESIGN PRINCIPLES." Urban construction and architecture 10, no. 3 (December 15, 2020): 114–22. http://dx.doi.org/10.17673/vestnik.2020.03.15.

Abstract:
The article is devoted to the experimental practice of new local history and museum design, the study of collective identity, the actualization of “gene memory” and the representation of the Finno-Ugric ethnic group of the North of Udmurtia, Russia. The project continues to develop a long-term partnership of an inter-regional consortium consisting of specialists in the field of cultural anthropology and authentic geography, local history, music and stage art, folklore, design, architecture and modern art practices, and the local community. The article deals with the study of the principles of organizing a traditional local history museum and its tactile and spatial reconfiguration based on immersive interaction with the visitor, his active participation, polylogue, post-empathy and total involvement (psychophysiological “linkage” with reality). It covers a wide range of topics, from the technological to meta-immersion, the creation of spaces of holistic experience, space-events and space-situations, where the viewer becomes a key subject.
32

Kostov, Vlaho, Eiichi Naito, Takashi Tajima, and Jun Ozawa. "Evaluation of Appropriate Body Placement and Notification Modality of a Wearable Clip-on Notifier Using an Experimental Platform." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 1 (January 20, 2007): 111–17. http://dx.doi.org/10.20965/jaciii.2007.p0111.

Abstract:
We have designed and developed an experimental infrastructure capable of interacting with different wearable information terminals for the delivery of timely push information based on the user profile. Focusing on the evaluation of body placement, we have developed a wearable clip-on notifier which can be placed on different parts of the human body. It is a small accessory device capable of producing visual, buzzer-audio and tactile-vibrator effects, with a natural and relatively small form factor in the shape of a paper clip that can be attached or worn as a pendant. In order to evaluate the best placement and modality for the clip-on notifier, we selected a number of candidate positions and performed in-situ notification experiments while the subjects performed different activities. We present a hierarchical system for the subjective evaluation of notifier placements, and our device was shown to be a potential solution for personalized information notification.
33

Noel, Jean-Paul, Olaf Blanke, Elisa Magosso, and Andrea Serino. "Neural adaptation accounts for the dynamic resizing of peripersonal space: evidence from a psychophysical-computational approach." Journal of Neurophysiology 119, no. 6 (June 1, 2018): 2307–33. http://dx.doi.org/10.1152/jn.00652.2017.

Abstract:
Interactions between the body and the environment occur within the peripersonal space (PPS), the space immediately surrounding the body. The PPS is encoded by multisensory (audio-tactile, visual-tactile) neurons that possess receptive fields (RFs) anchored on the body and restricted in depth. The extension in depth of PPS neurons’ RFs has been documented to change dynamically as a function of the velocity of incoming stimuli, but the underlying neural mechanisms are still unknown. Here, by integrating a psychophysical approach with neural network modeling, we propose a mechanistic explanation behind this inherent dynamic property of PPS. We psychophysically mapped the size of participants’ peri-face and peri-trunk space as a function of the velocity of task-irrelevant approaching auditory stimuli. Findings indicated that the peri-trunk space was larger than the peri-face space and, importantly, as for the neurophysiological delineation of RFs, both of these representations enlarged as the velocity of incoming sound increased. We propose a neural network model to mechanistically interpret these findings: the network includes reciprocal connections between unisensory areas and higher order multisensory neurons, and it implements neural adaptation to persistent stimulation as a mechanism sensitive to stimulus velocity. The network was capable of replicating the behavioral observations of PPS size remapping and relates behavioral proxies of PPS size to neurophysiological measures of multisensory neurons’ RF size. We propose that a biologically plausible neural adaptation mechanism embedded within the network encoding for PPS can be responsible for the dynamic alterations in PPS size as a function of the velocity of incoming stimuli. NEW & NOTEWORTHY Interactions between body and environment occur within the peripersonal space (PPS). PPS neurons are highly dynamic, adapting online as a function of body-object interactions. The mechanisms underpinning PPS dynamic properties are unexplained. We demonstrate with a psychophysical approach that PPS enlarges as incoming stimulus velocity increases, efficiently preventing contact with faster approaching objects. We present a neurocomputational model of multisensory PPS implementing neural adaptation to persistent stimulation to propose a neurophysiological mechanism underlying this effect.
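A generic illustration of the psychophysical mapping idea referred to in this abstract: tactile reaction times recorded at different distances of a task-irrelevant sound are fitted with a sigmoid whose central point serves as a proxy for the PPS boundary. The functional form, parameter names, and data are assumptions, not the authors' model.

```python
import numpy as np
from scipy.optimize import curve_fit

def rt_sigmoid(distance, rt_far, rt_gain, center, slope):
    """RT rises from (rt_far - rt_gain) near the body to rt_far at far distances."""
    return rt_far - rt_gain / (1.0 + np.exp((distance - center) / slope))

# Hypothetical data: sound distance from the body (cm) vs. mean tactile RT (ms).
distance = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
rt = np.array([395, 398, 402, 415, 430, 442, 448, 450, 451, 452], dtype=float)

params, _ = curve_fit(rt_sigmoid, distance, rt, p0=[450.0, 60.0, 50.0, 10.0])
print(f"estimated PPS boundary (sigmoid central point): {params[2]:.1f} cm")
```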
34

Li, Ning, and Linda Ng Boyle. "Allocation of Driver Attention for Varying In-Vehicle System Modalities." Human Factors: The Journal of the Human Factors and Ergonomics Society 62, no. 8 (December 30, 2019): 1349–64. http://dx.doi.org/10.1177/0018720819879585.

Abstract:
Objective: This paper examines drivers’ allocation of attention, using response time to a tactile detection response task (TDRT), while interacting with an in-vehicle information system (IVIS) over time. Background: Longer TDRT response time is associated with higher cognitive workload. However, it is not clear what role is assumed by the human and the system in response to varying in-vehicle environments over time. Method: A driving simulator study with 24 participants was conducted with a restaurant selection task of two difficulty levels (easy and hard) presented in three modalities (audio only, visual only, hybrid). A linear mixed-effects model was applied to identify factors that affect TDRT response time. A nonparametric time-series model was also used to explore visual attention allocation under the hybrid mode over time. Results: The visual-only mode significantly increased participants’ response time compared with the audio-only mode. Females took longer to respond to the TDRT when engaged with an IVIS. The study showed that participants tend to use the visual component more toward the end of the easy tasks, whereas the visual mode was used more at the beginning of the harder tasks. Conclusion: The visual-only mode of the IVIS increased drivers’ cognitive workload when compared with the auditory-only mode. Drivers showed different visual attention allocation during the easy and hard restaurant selection tasks in the hybrid mode. Application: The findings can help guide the design of automotive user interfaces and help manage cognitive workload.
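The linear mixed-effects analysis mentioned in the Method section could look roughly like the following sketch; the column names, toy data, and model specification are assumptions, not the study's dataset or exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy dataset: TDRT response time (ms) per trial, with IVIS modality, task
# difficulty, and participant identifiers (all values invented for illustration).
data = pd.DataFrame({
    "rt_ms":       [820, 910, 870, 790, 980, 940, 850, 890, 930, 800, 960, 870],
    "modality":    ["audio", "visual", "hybrid"] * 4,
    "difficulty":  ["easy"] * 6 + ["hard"] * 6,
    "participant": ["p1", "p1", "p1", "p2", "p2", "p2",
                    "p3", "p3", "p3", "p4", "p4", "p4"],
})

# Fixed effects for modality and difficulty, random intercept per participant.
model = smf.mixedlm("rt_ms ~ modality + difficulty", data, groups=data["participant"])
result = model.fit()
print(result.summary())
```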
35

Lee, Sang Hun, and Se-One Yoon. "User interface for in-vehicle systems with on-wheel finger spreading gestures and head-up displays." Journal of Computational Design and Engineering 7, no. 6 (June 19, 2020): 700–721. http://dx.doi.org/10.1093/jcde/qwaa052.

Abstract:
Interacting with an in-vehicle system through a central console is known to induce visual and biomechanical distractions, thereby delaying the danger recognition and response times of the driver and significantly increasing the risk of an accident. To address this problem, various hand gestures have been developed. Although such gestures can reduce visual demand, they are limited in number, lack passive feedback, and can be vague and imprecise, difficult to understand and remember, and culture-bound. To overcome these limitations, we developed a novel on-wheel finger spreading gestural interface combined with a head-up display (HUD) allowing the user to choose a menu displayed in the HUD with a gesture. This interface displays audio and air conditioning functions on the central console of a HUD and enables their control using a specific number of fingers while keeping both hands on the steering wheel. We compared the effectiveness of the newly proposed hybrid interface against a traditional tactile interface for a central console using objective measurements and subjective evaluations regarding both the vehicle and driver behaviour. A total of 32 subjects were recruited to conduct experiments on a driving simulator equipped with the proposed interface under various scenarios. The results showed that the proposed interface was approximately 20% faster in emergency response than the traditional interface, whereas its performance in maintaining vehicle speed and lane was not significantly different from that of the traditional one.
36

Bosman, Isak de Villiers, Koos De Beer, and Theo J. D. Bothma. "Creating pseudo-tactile feedback in virtual reality using shared crossmodal properties of audio and tactile feedback." South African Computer Journal 33, no. 1 (July 12, 2021). http://dx.doi.org/10.18489/sacj.v33i1.883.

Abstract:
Virtual reality has the potential to enhance a variety of real-world training and entertainment applications by creating the illusion that a user of virtual reality is physically present inside the digitally created environment. However, the use of tactile feedback to convey information about this environment is often lacking in VR applications. New methods for inducing a degree of tactile feedback in users are described, which induce the illusion of a tactile experience, referred to as pseudo-tactile feedback. These methods utilised properties shared between audio and tactile feedback that can be crossmodally mapped between the two modalities, in the design of a virtual reality prototype for a qualitative usability study, in order to test the effectiveness and underlying causes of such feedback in the total absence of any real-world tactile feedback. Results show that participants required believable audio stimuli that they could conceive of as real-world textures, as well as a sense of hand-ownership, to suspend disbelief and construct an internally consistent mental model of the virtual environment. This allowed them to conceive of believable tactile sensations resulting from interaction with virtual objects inside this environment.
37

Murat Baldwin, Mina, Zhuoni Xiao, and Aja Murray. "Temporal Synchrony in Autism: a Systematic Review." Review Journal of Autism and Developmental Disorders, July 2, 2021. http://dx.doi.org/10.1007/s40489-021-00276-5.

Abstract:
Temporal synchrony is the alignment of processes in time within or across individuals in social interaction and is observed and studied in various domains using wide-ranging paradigms. Evidence suggesting reduced temporal synchrony in autism (e.g. compared to neurotypicals) has hitherto not been reviewed. To systematically review the magnitude and generalisability of the difference across different tasks and contexts, EBSCO, OVID, Web of Science, and Scopus databases were searched. Thirty-two studies were identified that met our inclusion criteria in audio-visual, audio-motor, visuo-tactile, visuo-motor, social motor, and conversational synchrony domains. Additionally, two intervention studies were included. The findings suggest that autistic participants showed reduced synchrony tendencies in every category of temporal synchrony reviewed. Implications, methodological weaknesses, and evidence gaps are discussed.
38

Cairco Dukes, Lauren, Amy Ulinski Banic, Jerome McClendon, Toni Bloodworth Pence, James Mathieson, Joshua Summers, and Larry F. Hodges. "Evaluation of System-Directed Multimodal Systems for Vehicle Inspection." Journal of Computing and Information Science in Engineering 13, no. 1 (January 7, 2013). http://dx.doi.org/10.1115/1.4023004.

Abstract:
Multimodal systems have been previously used as an aid to improve quality and safety inspection in various domains, though few studies have evaluated these systems for accuracy and user comfort. Our research aims to combine our software interface designed for high usability with multimodal hardware configurations and to evaluate these systems to determine their user performance benefits and user acceptance data. We present two multimodal systems for using a novel system-directed interface to aid in inspecting vehicles along the assembly line: (1) wearable monocular display with speech input and audio output and (2) large screen display with speech input and audio output. We conducted two evaluations: (a) an experimental evaluation with novice users, resulting in accuracy, timing, user preferences, and other performance results and (b) an expert-based usability evaluation conducted on and off the assembly line providing insight on user acceptance, preferences, and performance potential in the production environment. We also compared these systems to current technology used in the production environment: a handheld display without speech input/output. Our results show that for visual and tactile tasks, benefits of system-directed interfaces are best realized when used with multimodal systems that reduce visual and tactile interaction per item and instead deliver system-directed information on the audio channel. Interface designers that combine system-directed interfaces with multimodal systems can expect faster and more efficient user performance when the delivery channel is different from channels necessary for task completion.
39

Cornelio, Patricia, Carlos Velasco, and Marianna Obrist. "Multisensory Integration as per Technological Advances: A Review." Frontiers in Neuroscience 15 (June 22, 2021). http://dx.doi.org/10.3389/fnins.2021.652611.

Abstract:
Multisensory integration research has allowed us to better understand how humans integrate sensory information to produce a unitary experience of the external world. However, this field is often challenged by the limited ability to deliver and control sensory stimuli, especially when going beyond audio–visual events and outside laboratory settings. In this review, we examine the scope and challenges of new technology in the study of multisensory integration in a world that is increasingly characterized as a fusion of physical and digital/virtual events. We discuss multisensory integration research through the lens of novel multisensory technologies and, thus, bring research in human–computer interaction, experimental psychology, and neuroscience closer together. Today, for instance, displays have become volumetric so that visual content is no longer limited to 2D screens, new haptic devices enable tactile stimulation without physical contact, olfactory interfaces provide users with smells precisely synchronized with events in virtual environments, and novel gustatory interfaces enable taste perception through levitating stimuli. These technological advances offer new ways to control and deliver sensory stimulation for multisensory integration research beyond traditional laboratory settings and open up new experimentations in naturally occurring events in everyday life experiences. Our review then summarizes these multisensory technologies and discusses initial insights to introduce a bridge between the disciplines in order to advance the study of multisensory integration.
APA, Harvard, Vancouver, ISO, and other styles
40

"Color synesthesia in modern female poetry." Journal of V. N. Karazin Kharkiv National University, Series "Philology", no. 88 (2021). http://dx.doi.org/10.26565/2227-1864-2021-88-08.

Full text
Abstract:
The display of synesthesia in the poetic speech of contemporary artists is analyzed. The lack of special studies of synesthetic linguistic representation in modern poetry determines the relevance of the proposed investigation. In scientific interpretation, synesthesia is the interaction of words denoting emotions or other abstract or particular concepts. In linguistics, the term "synesthesia" is used to denote the mechanism of metaphorical analogy, formed on the basis of the visual, gustatory, auditory, odorative, and tactile human senses, which supply their own units to denote other conceptual areas. In modern poetry, synesthesia is productive as a means of verbalizing subjective experiences. Furthermore, achromatic colors (white, gray, black) are actively used in poetry. Tokens denoting shades of blue (blue, cyan, blue), associated with the concept of melancholy, are also used quite often. In the analyzed poems, color-sound synesthesia prevails. This fact can be explained by the physiological significance of the visual and auditory channels for obtaining information, as well as by the genre specificity of the analyzed texts. At the syntactic level, the traditional model of a synesthetic phrase is a combination of an adjectival colorative with a noun token denoting a concept of another conceptual sphere. Placing the modifier in postposition to the word it defines gives synesthetic constructions additional stylistic significance and strengthens their expressive potential. The use of complex adjectives as double epithets is a rarely used syntactic model of synesthesia. Sometimes synesthesia becomes the creative basis of an entire poem. The analysis of synesthetic constructions makes it possible to comprehend the deep inner world of the poets, the features of poetic thinking, and the Ukrainian worldview in general.
APA, Harvard, Vancouver, ISO, and other styles
41

Yau, Jeffrey M., Alison I. Weber, and Sliman J. Bensmaia. "Separate Mechanisms for Audio-Tactile Pitch and Loudness Interactions." Frontiers in Psychology 1 (2010). http://dx.doi.org/10.3389/fpsyg.2010.00160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Lerner, France, Guillaume Tahar, Alon Bar, Ori Koren, and Tamar Flash. "VR Setup to Assess Peripersonal Space Audio-Tactile 3D Boundaries." Frontiers in Virtual Reality 2 (May 13, 2021). http://dx.doi.org/10.3389/frvir.2021.644214.

Full text
Abstract:
Many distinct spaces surround our bodies. Most schematically, the key division is between peripersonal space (PPS), the close space surrounding our body, and extrapersonal space, which is the space out of one’s reach. The PPS is considered an action space, which allows us to interact with our environment by touching and grasping. In the current scientific literature, visual representations of the PPS appear as mere bubbles of even dimensions wrapped around the body. Although more recent investigations of the PPS of the upper body (trunk, head, and hands) and lower body (legs and feet) have provided new representations, no investigation has yet been made concerning the estimation of the PPS’s overall representation in 3D. Previous findings have demonstrated how the relationship between tactile processing and the location of sound sources in space is modified along a spatial continuum. These findings suggest that similar methods can be used to localize the boundaries of the subjective individual representation of the PPS. Hence, we designed a behavioral paradigm in virtual reality based on audio-tactile interactions, which has enabled us to infer a detailed individual 3D audio-tactile representation of the PPS. Considering that inadequate body-related multisensory integration processes can produce incoherent spatio–temporal perception, the development of a virtual reality setup and a method to estimate the subjective PPS’s volumetric boundaries will be a valuable addition to the comprehension of the mismatches occurring between the body’s physical boundaries and body schema representations in 3D.
APA, Harvard, Vancouver, ISO, and other styles
43

Matsuda, Yusuke, Maki Sugimoto, Masahiko Inami, and Michiteru Kitazaki. "Peripersonal space in the front, rear, left and right directions for audio-tactile multisensory integration." Scientific Reports 11, no. 1 (May 28, 2021). http://dx.doi.org/10.1038/s41598-021-90784-5.

Full text
Abstract:
Peripersonal space (PPS) is important for humans to perform body–environment interactions. However, many previous studies focused only on a specific direction of the PPS, such as the front space, even though PPSs have been suggested to exist in all directions. We aimed to measure and compare the peri-trunk PPS in four directions (front, rear, left, and right). To measure the PPS, we used a tactile and an audio stimulus, because auditory information is available at any time and in all directions. We used approaching and receding task-irrelevant sounds in the experiment. Observers were asked to respond as quickly as possible when a tactile stimulus was applied to a vibrator on their chest. We found that peri-trunk PPS representations exist with an approaching sound, irrespective of the direction.
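The paradigm above relies on tactile reaction times recorded while a task-irrelevant sound approaches or recedes from the body. As a minimal sketch of how such data are commonly analysed in audio-tactile PPS studies (the specific analysis is not stated in this abstract), the following hypothetical Python example fits a sigmoid to reaction time as a function of the sound's distance and takes the inflection point as an estimate of the PPS boundary; all function names, variable names, and values are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not from the paper): estimating a peri-trunk PPS
# boundary from tactile reaction times recorded while a task-irrelevant
# sound approaches the body. Assumes RTs speed up once the sound enters
# the PPS; the boundary is taken as the midpoint of a fitted sigmoid.

import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, rt_far, rt_near, d0, k):
    """Reaction time (ms) as a function of sound distance d (metres)."""
    return rt_near + (rt_far - rt_near) / (1.0 + np.exp(-k * (d - d0)))

def estimate_pps_boundary(distances, reaction_times):
    """Fit the sigmoid and return the inflection point d0 as the PPS boundary."""
    p0 = [max(reaction_times), min(reaction_times), np.median(distances), 5.0]
    params, _ = curve_fit(sigmoid, distances, reaction_times, p0=p0, maxfev=10000)
    return params[2]  # d0: distance at which tactile facilitation emerges

# Example: simulated data for one direction (e.g. front), RTs in ms
distances = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6])
rts = np.array([412, 418, 425, 452, 478, 490, 493, 495], dtype=float)
print(f"Estimated PPS boundary: {estimate_pps_boundary(distances, rts):.2f} m")
```

The same fit would be repeated per direction (front, rear, left, right) to compare boundary estimates, under the assumption that distance at tactile-stimulus onset is the relevant predictor.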
APA, Harvard, Vancouver, ISO, and other styles
44

Jethani, Suneel. "Lists, Spatial Practice and Assistive Technologies for the Blind." M/C Journal 15, no. 5 (October 12, 2012). http://dx.doi.org/10.5204/mcj.558.

Full text
Abstract:
Introduction
Supermarkets are functionally challenging environments for people with vision impairments. A supermarket is likely to house an average of 45,000 products in a median floor-space of 4,529 square meters, and many visually impaired people are unable to shop without assistance, which greatly impedes personal independence (Nicholson et al.). The task of selecting goods in a supermarket is an “activity that is expressive of agency, identity and creativity” (Sutherland) from which many vision-impaired persons are excluded. In response to this, a number of proof of concept (demonstrating feasibility) and prototype assistive technologies are being developed which aim to use smart phones as potential sensorial aids for vision impaired persons. In this paper, I discuss two such prototypic technologies, Shop Talk and BlindShopping. I engage with this issue’s list theme by suggesting that, on the one hand, list making is a uniquely human activity that demonstrates our need for order and reliance on memory, reveals our idiosyncrasies, and provides insights into our private lives (Keaggy 12). On the other hand, lists feature in the creation of spatial inventories that represent physical environments (Perec 3-4, 9-10). The use of lists in the architecture of assistive technologies for shopping illuminates the interaction between these two modalities of list use, where items contained in a list are not only textual but also cartographic elements that link the material and immaterial in space and time (Haber 63). I argue that despite the emancipatory potential of assistive shopping technologies, their efficacy in practical situations is highly dependent on the extent to which they can integrate a number of lists to produce representations of space that are meaningful for vision impaired users. I suggest that the extent to which these prototypes may translate into commercially viable, widely adopted technologies is heavily reliant upon commercial and institutional infrastructures, data sources, and regulation. Thus, their design, manufacture, and adoption-potential are shaped by the extent to which certain data inventories are accessible and made interoperable. To overcome such constraints, it is important to better understand the “spatial syntax” associated with the shopping task for a vision impaired person; that is, the connected ordering of real and virtual spatial elements that result in a supermarket as a knowable space within which an assisted “spatial practice” of shopping can occur (Kellerman 148, Lefebvre 16). In what follows, I use the concept of lists to discuss the production of supermarket-space in relation to the enabling and disabling potentials of assistive technologies. First, I discuss mobile digital technologies relative to disability and impairment and describe how the shopping task produces a disabling spatial practice. Second, I present a case study showing how assistive technologies function in aiding vision impaired users in completing the task of supermarket shopping. Third, I discuss various factors that may inhibit the liberating potential of technology-assisted shopping by vision-impaired people.
Addressing Shopping as a Disabling Spatial Practice
Consider how a shopping list might inform one’s experience of supermarket space. The way shopping lists are written demonstrates the variability in the logic that governs list writing.
As Bill Keaggy demonstrates in his found shopping list Web project and subsequent book, Milk, Eggs, Vodka, a shopping list may be written on a variety of materials, be arranged in a number of orientations, and the writer may use differing textual attributes, such as size or underlining, to show emphasis. The writer may use longhand, abbreviate, write neatly, scribble, and use an array of alternate spelling and naming conventions. For example, items may be listed based on knowledge of the location of products, they may be arranged on a list as a result of an inventory of a pantry or fridge, or they may be copied in the order they appear in a recipe. Whilst shopping, some may strictly follow the order of their list, crossing back and forth between aisles. Some may work through their list item-by-item, perhaps forward scanning to achieve greater economies of time and space. As a person shops, their memory may be stimulated by visual cues reminding them of products they need that may not be included on their list. For the vision impaired, this task is near impossible to complete without the assistance of a relative, friend, agency volunteer, or store employee. Such forms of assistance are often unsatisfactory, as delays may be caused by the unavailability of an assistant, or by the assistant having limited literacy, knowledge, or patience to adequately meet the shopper’s needs. Home delivery services, though readily available, impede personal independence (Nicholson et al.). Katie Ellis and Mike Kent argue that “an impairment becomes a disability due to the impact of prevailing ableist social structures” (3). It can be said, then, that supermarkets function as a disability-producing space for the vision impaired shopper. For the vision impaired, a supermarket is a “hegemonic modern visual infrastructure” where, for example, merchandisers may reposition items regularly to induce customers to explore areas of the shop that they wouldn’t usually, a move which adds to the difficulty faced by those customers with impaired vision who work on the assumption that items remain where they usually are (Schillmeier 161). In addressing this issue, much emphasis has been placed on the potential of mobile communications technologies in affording vision impaired users greater mobility and flexibility (Jolley 27). However, as Gerard Goggin argues, the adoption of mobile communication technologies has not necessarily “gone hand in hand with new personal and collective possibilities” given the limited access to standard features, even if the device is text-to-speech enabled (98). Issues with Digital Rights Management (DRM) limit the way a device accesses and reproduces information, and confusion over whether audio rights are needed to convert text to speech impedes the accessibility of mobile communications technologies for vision impaired users (Ellis and Kent 136). Accessibility and functionality issues like these arise out of the needs, desires, and expectations of the visually impaired as a user group being considered as an afterthought rather than a significant factor in the early phases of design and prototyping (Goggin 89). Thus, the development of assistive technologies for the vision impaired has been left to third parties, who must adapt their solutions to fit within certain technical parameters. It is valuable to consider what is involved in the task of shopping in order to appreciate the considerations that must be made in the design of assistive technologies intended for shopping.
Shopping generally consists of five sub-tasks: travelling to the store; finding items in-store; paying for and bagging items at the register; exiting the store and getting home; and the often overlooked task of putting items away once at home. In this process supermarkets exhibit a “trichotomous spatial ontology” consisting of locomotor space, through which a shopper moves around the store, haptic space in the immediate vicinity of the shopper, and search space where individual products are located (Nicholson et al.). In completing these tasks, a shopper will constantly be moving through and switching between all three of these spaces. In the next section I examine how assistive technologies function in producing supermarkets as both enabling and disabling spaces for the vision impaired.
Assistive Technologies for Vision Impaired Shoppers
Jason Farman (43) and Adriana de Souza e Silva both argue that in many ways spaces have always acted as information interfaces where data of all types can reside. Global Positioning System (GPS), Radio Frequency Identification (RFID), and Quick Response (QR) codes all allow for practically every spatial encounter to be an encounter with information. Site-specific and location-aware technologies address the desire for meaningful representations of space for use in everyday situations by the vision impaired. Further, the possibility of an “always-on” connection to spatial information via a mobile phone with WiFi or 3G connections transforms spatial experience by “enfolding remote [and latent] contexts inside the present context” (de Souza e Silva). A range of GPS navigation systems adapted for vision-impaired users is currently on the market. Typically, these systems convert GPS information into text-to-speech instructions and are either standalone devices, such as the Trekker Breeze, or they use the compass, accelerometer, and 3G or WiFi functions found on most smart phones, such as Loadstone. Whilst both these products are adequate in guiding a vision-impaired user from their home to a supermarket, there are significant differences in their interfaces and data architectures. Trekker Breeze is a standalone hardware device that produces talking menus, maps, and GPS information. While its navigation functionality relies on a worldwide radio-navigation system that uses a constellation of 24 satellites to triangulate one’s position (May and LaPierre 263-64), its map and text-to-speech functionality relies on data on a DVD provided with the unit. Loadstone is an open-source software system for Nokia devices that has been developed within the vision-impaired community. Loadstone is built on GNU General Public License (GPL) software and is developed from private and user-based funding; this overcomes the issue of Trekker Breeze’s reliance on the trading policies and pricing models of the few global vendors of satellite navigation data. Both products have significant shortcomings if viewed in the broader context of the five sub-tasks involved in shopping described above. Trekker Breeze and Loadstone require that additional devices be connected to them. In the case of Trekker Breeze it is a tactile keypad, and with Loadstone it is an aftermarket screen reader. To function optimally, Trekker Breeze requires that routes be pre-recorded and, according to a review conducted by the American Foundation for the Blind, it requires a 30-minute warm-up time to properly orient itself.
Both Trekker Breeze and Loadstone allow users to create and share Points of Interest (POI) databases showing the location of various places along a given route. Non-standard or duplicated user-generated content in POI databases may, however, have a negative effect on usability (Ellis and Kent 2). Furthermore, GPS-based navigation systems are accurate to approximately ten metres, which means that users must rely on their own mobility skills when they are required to change direction or stop for traffic. This issue with GPS accuracy is more pronounced when a vision-impaired user is approaching a supermarket, where they are likely to encounter environmental hazards with greater frequency and both pedestrian and vehicular traffic in greater density. Here the relations between spaces defined and spaces poorly defined or undefined by the GPS device interact to produce the supermarket surrounds as a disabling space (Galloway).
Prototype Systems for Supermarket Navigation and Product Selection
In the discussion to follow, I look at two prototype systems using QR codes and RFID that are designed to be used in-store by vision-impaired shoppers. Shop Talk is a proof of concept system developed by researchers at Utah State University that uses synthetic verbal route directions to assist vision impaired shoppers with supermarket navigation, product search, and selection (Nicholson et al.). Its hardware consists of a portable computational unit, a numeric keypad, a wireless barcode scanner and base station, headphones for the user to receive the synthetic speech instructions, a USB hub to connect all the components, and a backpack to carry them (with the exception of the barcode scanner) which has been slightly modified with a plastic stabiliser to assist in correct positioning. Shop Talk represents the supermarket environment using two data structures. The first comprises two elements: a topological map of locomotor space that allows directional labels of “left,” “right,” and “forward” to be added to the supermarket floor plan; and, for navigation of haptic space, the supermarket inventory management system, which is used to create verbal descriptions of product information. The second data structure is a Barcode Connectivity Matrix (BCM), which associates each shelf barcode with several pieces of information such as aisle, aisle side, section, shelf, position, Universal Product Code (UPC) barcode, product description, and price. Nicholson et al. suggest that one of their “most immediate objectives for future work is to migrate the system to a more conventional mobile platform” such as a smart phone (see Mobile Shopping). The Personalisable Interactions with Resources on AMI-Enabled Mobile Dynamic Environments (PRIAmIDE) research group at the University of Deusto is also approaching Ambient Assisted Living (AAL) by exploring the smart phone’s sensing, communication, computing, and storage potential. As part of their work, the prototype system BlindShopping was developed to address the issue of assisted shopping using entirely off-the-shelf technology, with minimal environmental adjustments, to navigate the store and search, browse, and select products (López-de-Ipiña et al. 34). BlindShopping’s architecture is based on three components. First, a navigation system provides synthetic verbal instructions to the user via headphones connected to the smart phone in order to guide them around the store.
This requires an RFID reader to be attached to the tip of the user’s white cane and road-marking-like RFID tag lines to be distributed throughout the aisles. A smartphone application processes the RFID data that is received by the smart phone via Bluetooth, generating the verbal navigation commands as a result. Products are recognised by pointing a smart phone with a QR code reader at an embossed code located on a shelf. The system is managed by a Rich Internet Application (RIA) interface, which operates via a Web browser and is used to register the RFID tags situated in the aisles and the QR codes located on shelves (López-de-Ipiña et al. 37-38). A typical use-scenario for BlindShopping involves a user activating the system by tracing an “L” on the screen or issuing the “Location” voice command, which activates the supermarket navigation system; the system then asks the user to either touch an RFID floor marking with their cane or scan a QR code on a nearby shelf to orient the system. The application then asks the user to dictate the product or category of product that they wish to locate. The smart phone maintains a continuous Bluetooth connection with the RFID reader to keep track of user location at all times. By drawing a “P” or issuing the “Product” voice command, a user can switch the device into product recognition mode, where the smart phone camera is pointed at an embossed QR code on a shelf to retrieve information about a product, such as manufacturer, name, weight, and price, via synthetic speech (López-de-Ipiña et al. 38-39). Despite both systems aiming to operate with as little environmental adjustment as possible, as well as to minimise the extent to which a supermarket would need to allocate infrastructural, administrative, and human resources to implementing assistive technologies for vision impaired shoppers, there will undoubtedly be significant establishment and maintenance costs associated with the adoption of production versions of systems resembling either prototype described in this paper. As both systems rely on data obtained from a server by invoking Web services, supermarkets would need to provide in-store WiFi. Further, both systems’ dependence on store inventory data would mean that commercial versions of either of these systems are likely to be supermarket-specific or exclusive, given that there will be policies in place that forbid third-party access to inventory systems, which contain pricing information. Secondly, an assumption in the design of both prototypes is that the shopping task ends with the user arriving at home; this overlooks the important task of being able to recognise products in order to put them away or to use them at a later time. The BCM and QR product recognition components of the respective prototypic systems associate information with products in order to assist users in the product search and selection sub-tasks. However, information such as use-by dates, discount offers, country of manufacture, country of manufacturer’s origin, nutritional information, and the labelling of products as Halal, Kosher, or containing alcohol, nuts, gluten, lactose, phenylalanine, and so on, creates further challenges in how different data sources are managed within the devices’ software architecture. The reliance of both systems on existing smartphone technology is also problematic. Changes in the production and uptake of mobile communication devices, and in the software that they operate on, occur rapidly.
Once a retail space has been fitted out with the necessary instrumentation to accommodate a particular system, that system is unlikely to be able to cater to the requirement for frequent upgrades, as built environments are less flexible in the upgrading of their technological infrastructure (Kellerman 148). This sets up a scenario where the supermarket may persist as a disabling space due to a gap between the functional capacities of applications designed for mobile communication devices and the environments in which they are to be used.
Lists and Disabling Spatial Practice
The development and provision of access to assistive technologies and the data they rely upon is a commercial issue (Ellis and Kent 7). The use of assistive technologies in supermarket-spaces that rely on the inter-functional coordination of multiple inventories may have the unintended effect of excluding people with disabilities from access to legitimate content (Ellis and Kent 7). With de Certeau, we can ask of supermarket-space “What spatial practices correspond, in the area where discipline is manipulated, to these apparatuses that produce a disciplinary space?” (96). In designing assistive technologies, such as those discussed in this paper, developers must strive to achieve integration across multiple data inventories. Software architectures must be optimised to overcome issues relating to intellectual property, cross-platform access, standardisation, fidelity, potential duplication, and mass storage. This need for “cross sectioning,” however, “merely adds to the muddle” (Lefebvre 8). This is a predicament that only intensifies as space and objects in space become increasingly “representable” (Galloway), and as the impetus for the project of spatial politics for the vision impaired moves beyond representation to centre on access and meaning-making.
Conclusion
Supermarkets act as sites of hegemony, resistance, difference, and transformation, where the vision impaired and their allies resist the “repressive socialization of impaired bodies” through their own social movements relating to environmental accessibility and the technology-assisted spatial practice of shopping (Gleeson 129). It is undeniable that the prototype technologies described in this paper, and those like them, have a great deal of emancipatory potential. However, it should be understood that these devices produce representations of supermarket-space as a simulation within a framework that attempts to mimic the real, and these representations are pre-determined by the industrial, technological, and regulatory forces that govern their production (Lefebvre 8). Thus, the potential of assistive technologies is dependent upon a range of constraints relating to data accessibility, and upon the interaction of various kinds of lists across the geographic area that surrounds the supermarket, the locomotor, haptic, and search spaces of the supermarket, the home-space, and the internal spaces of a shopper’s imaginary. These interactions are important in contributing to the reproduction of disability in supermarkets through the use of assistive shopping technologies. The ways by which people make and read shopping lists complicate the relations between supermarket-space as location data and product inventories versus that which is intuited and experienced by a shopper (Sutherland).
Not only should we be creating inventories of supermarket locomotor, haptic, and search spaces; developers working in this area of assistive technologies should also look beyond the challenges of spatial representation and move towards a focus on issues of interoperability and expanded access to spatial inventory databases and data within and beyond supermarket-space.
References
De Certeau, Michel. The Practice of Everyday Life. Berkeley: University of California Press, 1984.
De Souza e Silva, A. “From Cyber to Hybrid: Mobile Technologies As Interfaces of Hybrid Spaces.” Space and Culture 9.3 (2006): 261-78.
Ellis, Katie, and Mike Kent. Disability and New Media. New York: Routledge, 2011.
Farman, Jason. Mobile Interface Theory: Embodied Space and Locative Media. New York: Routledge, 2012.
Galloway, Alexander. “Are Some Things Unrepresentable?” Theory, Culture and Society 28 (2011): 85-102.
Gleeson, Brendan. Geographies of Disability. London: Routledge, 1999.
Goggin, Gerard. Cell Phone Culture: Mobile Technology in Everyday Life. London: Routledge, 2006.
Haber, Alex. “Mapping the Void in Perec’s Species of Spaces.” Tattered Fragments of the Map. Ed. Adam Katz and Brian Rosa. S.l.: Thelimitsoffun.org, 2009.
Jolley, William M. When the Tide Comes in: Towards Accessible Telecommunications for People with Disabilities in Australia. Sydney: Human Rights and Equal Opportunity Commission, 2003.
Keaggy, Bill. Milk Eggs Vodka: Grocery Lists Lost and Found. Cincinnati, Ohio: HOW Books, 2007.
Kellerman, Aharon. Personal Mobilities. London: Routledge, 2006.
Kleege, Georgia. “Blindness and Visual Culture: An Eyewitness Account.” The Disability Studies Reader. 2nd edition. Ed. Lennard J. Davis. New York: Routledge, 2006. 391-98.
Lefebvre, Henri. The Production of Space. Oxford, UK: Blackwell, 1991.
López-de-Ipiña, Diego, Tania Lorido, and Unai López. “Indoor Navigation and Product Recognition for Blind People Assisted Shopping.” Ambient Assisted Living. Ed. J. Bravo, R. Hervás, and V. Villarreal. Berlin: Springer-Verlag, 2011. 25-32.
May, Michael, and Charles LaPierre. “Accessible Global Position System (GPS) and Related Orientation Technologies.” Assistive Technology for Visually Impaired and Blind People. Ed. Marion A. Hersh and Michael A. Johnson. London: Springer-Verlag, 2008. 261-88.
Nicholson, John, Vladimir Kulyukin, and Daniel Coster. “Shoptalk: Independent Blind Shopping Through Verbal Route Directions and Barcode Scans.” The Open Rehabilitation Journal 2.1 (2009): 11-23.
Perec, Georges. Species of Spaces and Other Pieces. Trans. and Ed. John Sturrock. London: Penguin Books, 1997.
Schillmeier, Michael W. J. Rethinking Disability: Bodies, Senses, and Things. New York: Routledge, 2010.
Sutherland, I. “Mobile Media and the Socio-Technical Protocols of the Supermarket.” Australian Journal of Communication 36.1 (2009): 73-84.
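As an illustration of the Barcode Connectivity Matrix (BCM) described in the Shop Talk abstract above, which associates each shelf barcode with aisle, aisle side, section, shelf, position, UPC, product description, and price, the following is a minimal, hypothetical Python sketch of such a structure and of turning a scanned shelf barcode into a verbal prompt. It is not taken from the Shop Talk implementation; all names, fields, and values are illustrative assumptions.

```python
# Illustrative sketch (not from Shop Talk): a minimal Barcode Connectivity
# Matrix entry keyed by shelf barcode, used to generate a text-to-speech
# prompt for product search and selection. Field and function names are
# hypothetical.

from dataclasses import dataclass

@dataclass
class BCMEntry:
    aisle: int
    aisle_side: str        # "left" or "right"
    section: int
    shelf: int
    position: int
    upc: str
    description: str
    price: float

# The BCM: shelf barcode -> location and product information.
bcm = {
    "0123456789012": BCMEntry(3, "left", 2, 4, 7, "0123456789012",
                              "Whole wheat bread 700 g", 3.49),
}

def speech_prompt(shelf_barcode: str) -> str:
    """Turn a scanned shelf barcode into a verbal description for synthesis."""
    entry = bcm.get(shelf_barcode)
    if entry is None:
        return "Barcode not recognised. Please scan again."
    return (f"Aisle {entry.aisle}, {entry.aisle_side} side, section {entry.section}, "
            f"shelf {entry.shelf}: {entry.description}, {entry.price:.2f} dollars.")

print(speech_prompt("0123456789012"))
```

In a production system of the kind the article discusses, such entries would be populated from the supermarket's inventory management system rather than hard-coded, which is precisely where the article's concerns about data access and interoperability arise.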
APA, Harvard, Vancouver, ISO, and other styles