
Journal articles on the topic 'Auditory Interface'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Auditory Interface.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Song, Eun-sung, and Eun-seok Lee. "Study on Design Methodology of Auditory User Interface (AUI) Considering Silver Generation." Journal of Communication Design 67 (April 30, 2019): 63–74. http://dx.doi.org/10.25111/jcd.2019.67.05.

2

Nijboer, Femke, Adrian Furdea, Ingo Gunst, Jürgen Mellinger, Dennis J. McFarland, Niels Birbaumer, and Andrea Kübler. "An auditory brain–computer interface (BCI)." Journal of Neuroscience Methods 167, no. 1 (January 2008): 43–50. http://dx.doi.org/10.1016/j.jneumeth.2007.02.009.

3

Hirota, Koichi, and Michitaka Hirose. "An Implementation of Wearable Auditory Interface." Proceedings of the International Conference on Motion and Vibration Control 6.1 (2002): 570–75. http://dx.doi.org/10.1299/jsmeintmovic.6.1.570.

4

Omiya, Hidefumi, Akinori Komatsubara, and Shigeo Fujisaki. "Evaluation of Usability on Auditory Display Interface." Japanese Journal of Ergonomics 35, no. 1 Supplement (1999): 145. http://dx.doi.org/10.5100/jje.35.1supplement_145.

5

Omiya, Hidefumi, Akinori Komatsubara, and Shigeo Fujisaki. "Evaluation of Usability on Auditory Display Interface." Japanese Journal of Ergonomics 35, no. 2 Supplement (1999): 548–49. http://dx.doi.org/10.5100/jje.35.2supplement_548.

6

Edwards, Alistair. "Soundtrack: An Auditory Interface for Blind Users." Human-Computer Interaction 4, no. 1 (March 1, 1989): 45–66. http://dx.doi.org/10.1207/s15327051hci0401_2.

7

Amer, T. S., and Todd L. Johnson. "The Perceived Hazard of Sound Scheme and Desktop Theme Auditory Elements." International Journal of Technology and Human Interaction 17, no. 1 (January 2021): 59–74. http://dx.doi.org/10.4018/ijthi.2021010104.

Abstract:
The interface concept of adaptable design allows users to select and apply alternative auditory elements to the user interface. This study examines the consistency of the arousal strength of auditory elements that accompany exception messages in two adaptable design options available for the Microsoft Windows operating system: (1) sound schemes and (2) desktop themes. The auditory elements available in these options differ in composition and sound features. Prior work indicates that such differences could result in differences in the arousal strength communicated by the auditory elements and therefore violate the key user interface design principle of consistency. The auditory elements within IT environments should communicate consistent levels of hazard, as measured by arousal strength, in order to achieve “hazard matching.” Results reveal differences in the arousal strength of the important critical stop auditory element across both sound schemes and desktop themes. Implications of this finding are discussed.
8

Fernández-Rodríguez, Álvaro, Ricardo Ron-Angevin, Ernesto J. Sanz-Arigita, Antoine Parize, Juliette Esquirol, Alban Perrier, Simon Laur, Jean-Marc André, Véronique Lespinet-Najib, and Liliana Garcia. "Effect of Distracting Background Speech in an Auditory Brain–Computer Interface." Brain Sciences 11, no. 1 (January 1, 2021): 39. http://dx.doi.org/10.3390/brainsci11010039.

Abstract:
Studies so far have analyzed the effect of distractor stimuli in different types of brain–computer interface (BCI). However, the effect of background speech has not been studied for an auditory event-related potential BCI (ERP-BCI), a convenient option when the visual pathway cannot be used. Thus, the aim of the present work is to examine the impact of background speech on selection performance and user workload in auditory BCI systems. Eleven participants tested three conditions: (i) an auditory BCI control condition, (ii) auditory BCI with background speech to ignore (non-attentional condition), and (iii) auditory BCI while the user pays attention to the background speech (attentional condition). The results demonstrated that, despite no significant differences in performance, shared attention to the auditory BCI and the background speech required a higher cognitive workload. In addition, the P300 responses to target stimuli in the non-attentional condition were significantly larger than those in the attentional condition at several channels. The non-attentional condition was the only condition that showed significant differences in P300 amplitude between target and non-target stimuli. The present study indicates that background speech, especially when attended to, is an important source of interference that should be avoided while using an auditory BCI.
9

Davies, T. Claire, Catherine M. Burns, and Shane D. Pinder. "Testing a Novel Auditory Interface Display to Enable Visually Impaired Travelers to Use Sonar Mobility Devices Effectively." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 51, no. 4 (October 2007): 278–82. http://dx.doi.org/10.1177/154193120705100428.

Abstract:
This paper discusses the pilot testing of an auditory interface designed to increase navigational ability of visually impaired individuals. Sonar devices have been developed to increase preview distances, but these have gained limited acceptance as they lack an easily interpreted interface. This paper presents usability testing of an auditory prototype interface developed using the work domain analysis of ecological interface design (EID). An interface design that provides the user with sufficient preview to avoid obstacles may offer more environmental information than the single tones of the current designs.
10

Jacko, Julie A., and David J. Rosenthal. "Psychology of Computer Use: XLVI. Age-Related Differences in the Mapping of Auditory Icons to Visual Icons in Computer Interfaces for Children." Perceptual and Motor Skills 84, no. 3_suppl (June 1997): 1223–33. http://dx.doi.org/10.2466/pms.1997.84.3c.1223.

Abstract:
An investigation was conducted to characterize how children ages 6 through 9 identify auditory icons present in educational software. 24 subjects were required to map auditory icons to visual icons, both present in a computer interface. The interface used in the experiment was constructed with Visual Basic and involved 40 auditory icons, 40 corresponding visual icons, and 66 extraneous visual icons. It was hypothesized that older children would be better able to map the auditory icons to visual icons due to more extensive exposure to everyday sounds. The results supported the hypothesis and suggestions for additional research were provided.
11

Walker, Bruce N., Jeffrey Lindsay, Amanda Nance, Yoko Nakano, Dianne K. Palladino, Tilman Dingler, and Myounghoon Jeon. "Spearcons (Speech-Based Earcons) Improve Navigation Performance in Advanced Auditory Menus." Human Factors: The Journal of the Human Factors and Ergonomics Society 55, no. 1 (July 2, 2012): 157–82. http://dx.doi.org/10.1177/0018720812450587.

Abstract:
Objective: The goal of this project is to evaluate a new auditory cue, which the authors call spearcons, in comparison to other auditory cues with the aim of improving auditory menu navigation. Background: With the shrinking displays of mobile devices and increasing technology use by visually impaired users, it becomes important to improve the usability of non-graphical user interfaces such as auditory menus. Using nonspeech sounds called auditory icons (i.e., representative real sounds of objects or events) or earcons (i.e., brief musical melody patterns) has been proposed to enhance menu navigation. To compensate for the weaknesses of traditional nonspeech auditory cues, the authors developed spearcons by speeding up a spoken phrase, even to the point where it is no longer recognized as speech. Method: The authors conducted five empirical experiments. In Experiments 1 and 2, they measured menu navigation efficiency and accuracy among cues. In Experiments 3 and 4, they evaluated the learning rate of cues and speech itself. In Experiment 5, they assessed spearcon enhancements compared to plain TTS (text to speech: spoken readouts of written menu items) in a two-dimensional auditory menu. Results: Spearcons outperformed traditional and newer hybrid auditory cues in navigation efficiency, accuracy, and learning rate. Moreover, spearcons showed learnability comparable to normal speech and led to better performance than speech-only auditory cues in two-dimensional menu navigation. Conclusion: These results show that spearcons can be more effective than previous auditory cues in menu-based interfaces. Application: Spearcons have broadened the taxonomy of nonspeech auditory cues. Users can benefit from the application of spearcons in real devices.
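Spearcon generation as described here is conceptually simple: take a text-to-speech rendering of a menu item and compress it in time, without shifting its pitch, until it no longer sounds like ordinary speech. Below is a minimal sketch using librosa's phase-vocoder time stretching; the file names and the 2.5x rate are illustrative assumptions, not values from the study.

```python
# Minimal spearcon sketch: time-compress a spoken menu phrase while
# preserving pitch. File names and compression rate are illustrative.
import librosa
import soundfile as sf

def make_spearcon(tts_wav_path: str, out_path: str, rate: float = 2.5) -> None:
    """Speed up a TTS recording by `rate` (>1 is faster) with pitch
    preserved, via librosa's phase-vocoder time stretching."""
    y, sr = librosa.load(tts_wav_path, sr=None)  # keep original sample rate
    spearcon = librosa.effects.time_stretch(y, rate=rate)
    sf.write(out_path, spearcon, sr)

# Example: compress a spoken "Save as..." menu item to 40% of its duration.
# make_spearcon("save_as_tts.wav", "save_as_spearcon.wav", rate=2.5)
```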
12

Seki, Shinya, Ikuo Akita, Tatsuya Hayashi, Tomoya Nishida, and Nobuyuki Yamawaki. "Brain–computer interface using auditory event-related potential." Neuroscience Research 71 (September 2011): e204. http://dx.doi.org/10.1016/j.neures.2011.07.884.

13

Nojo, Hideki, Minoru Kawasaki, Tetsu Jyo, Atsushi Ishiyama, Naoko Kasai, and Yumie Ono. "Appropriate auditory stimuli for P300 brain–computer interface." Neuroscience Research 68 (January 2010): e327. http://dx.doi.org/10.1016/j.neures.2010.07.1449.

14

Takao, Hidenobu, Kaoru Sakai, Jun Osufi, and Hiroaki Ishii. "Acoustic User Interface (AUI) for the auditory displays." Displays 23, no. 1-2 (April 2002): 65–73. http://dx.doi.org/10.1016/s0141-9382(02)00011-2.

15

Vasilijevic, A., D. Nad, N. Miskovic, and Z. Vukic. "Auditory interface for teleoperation - Path following experimental results." IFAC Proceedings Volumes 47, no. 3 (2014): 4234–39. http://dx.doi.org/10.3182/20140824-6-za-1003.02064.

16

Gaver, William. "The SonicFinder: An Interface That Uses Auditory Icons." Human-Computer Interaction 4, no. 1 (March 1, 1989): 67–94. http://dx.doi.org/10.1207/s15327051hci0401_3.

17

Lopez-Gordo, M. A., F. Pelayo, A. Prieto, and E. Fernandez. "An Auditory Brain-Computer Interface with Accuracy Prediction." International Journal of Neural Systems 22, no. 03 (May 16, 2012): 1250009. http://dx.doi.org/10.1142/s0129065712500098.

Abstract:
Fully auditory brain–computer interfaces based on the dichotic listening task (DL-BCIs) are suited for users unable to make any muscular movement, including gazing, exploring, or coordinating their eyes to locate inputs in the form of feedback, stimulation, or visual support. However, one of their disadvantages, in contrast with visual BCIs, is their lower performance, which makes them inadequate for applications that require high accuracy. To overcome this disadvantage, we employed a Bayesian approach in which the DL-BCI was modeled as a binary phase-shift keying (BPSK) receiver, for which the accuracy can be estimated a priori as a function of the signal-to-noise ratio. The results showed the measured accuracy to match the predefined target accuracy, validating a model that makes it possible to estimate the classification accuracy in advance on a trial-by-trial basis. This constitutes a novel methodology in the design of fully auditory DL-BCIs that lets us first define the target accuracy for a specific application and second classify only when the signal-to-noise ratio guarantees that target accuracy.
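Under the BPSK-receiver model this abstract describes, single-trial accuracy can be predicted from the signal-to-noise ratio with the textbook BPSK error probability, Pe = Q(sqrt(2·SNR)). The sketch below illustrates the prediction and the trial-by-trial gating; it uses the standard formula and is not necessarily the paper's exact estimator.

```python
# Sketch of BPSK-style accuracy prediction for a binary BCI, assuming the
# textbook error probability Pe = Q(sqrt(2*SNR)).
import math

def q_function(x: float) -> float:
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def predicted_accuracy(snr: float) -> float:
    """Predicted single-trial accuracy at a linear (not dB) SNR."""
    return 1.0 - q_function(math.sqrt(2.0 * snr))

def classify_this_trial(snr: float, target_accuracy: float) -> bool:
    """Gate classification on the predicted accuracy reaching the target,
    mirroring the trial-by-trial methodology described in the abstract."""
    return predicted_accuracy(snr) >= target_accuracy

print(predicted_accuracy(1.0))  # SNR of 1 (0 dB) -> about 0.92
```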
18

Murray, La Tondra A. "Spatially-Enhanced Auditory Cues and Computer Performance on a Simulated Monitoring Task." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 42, no. 5 (October 1998): 496–500. http://dx.doi.org/10.1177/154193129804200511.

Abstract:
This research examines the selection of nonspeech sounds for use in a computer-based application and furthermore assesses the performance of users engaged in a simulated manufacturing task. An empirical study was conducted to evaluate the utility of spatially-enhanced (three-dimensional) auditory cues in the computer interface. The seventy-two participants in this experiment interacted with a simulated interface using one of two spatially-enhanced sound groups (pure and hybrid) as selected from previous studies. Participants used the mouse to interact with the interface simulation and received alert information in the form of auditory cues, textual cues, or both. Performance for the simulated interface was measured in terms of: skill acquisition (learning), perceived workload, task omissions, incorrect responses, and task response time. The results show that spatially-enhanced nonspeech auditory cues can reduce perceived workload and shorten response times.
19

Saika, Hiroki, Shuoyu Wang, and Naoki Miura. "1413 Development of a Brain-Computer Interface Using Auditory Area." Proceedings of Conference of Chugoku-Shikoku Branch 2009.47 (2009): 497–98. http://dx.doi.org/10.1299/jsmecs.2009.47.497.

20

Brown, James, Justin F. Morgan, John Campbell, Connor Hoover, and Christian Jerome. "Validations of Integrated Driver Vehicle Interface (DVI) Configurations." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 1431–35. http://dx.doi.org/10.1177/1541931213601843.

Abstract:
This project examines questions of discernibility and presentation methods for safety-critical driving messages. A driving simulator experiment tested two methods of providing safety messages: distinct (with all alerts having distinct auditory and visual components) and master (a common visual and auditory alert) presentations. Participants completed drives that contained a safety critical event, with and without an alert, and reported their perceptions of the alert’s meaning and hazard location. No significant differences were observed in participants’ ability to identify the location of the referent hazard. There were significant differences in participants’ ability to assess the meaning of the alert: the distinct group displayed higher overall performance as compared to the master group. Implications of the study for design guidance and potential future research topics are discussed.
21

Tudor, Leslie G. "Designing an Auditory Interface for a Call-Handling Tutorial." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 42, no. 23 (October 1998): 1612. http://dx.doi.org/10.1177/154193129804202325.

22

Edwards, Alistair D. N. "Modelling blind users' interactions with an auditory computer interface." International Journal of Man-Machine Studies 30, no. 5 (May 1989): 575–89. http://dx.doi.org/10.1016/s0020-7373(89)80035-5.

23

Lopez-Gordo, M. A., E. Fernandez, S. Romero, F. Pelayo, and Alberto Prieto. "An auditory brain–computer interface evoked by natural speech." Journal of Neural Engineering 9, no. 3 (May 25, 2012): 036013. http://dx.doi.org/10.1088/1741-2560/9/3/036013.

24

Guo, Jing, Shangkai Gao, and Bo Hong. "An Auditory Brain–Computer Interface Using Active Mental Response." IEEE Transactions on Neural Systems and Rehabilitation Engineering 18, no. 3 (June 2010): 230–35. http://dx.doi.org/10.1109/tnsre.2010.2047604.

25

Edwards, Alistair D. N. "Soundtrack: An Auditory Interface for Blind Users (Abstract Only)." ACM SIGCHI Bulletin 21, no. 1 (August 1989): 124. http://dx.doi.org/10.1145/67880.1046600.

26

Halder, S., M. Rea, R. Andreoni, F. Nijboer, E. M. Hammer, S. C. Kleih, N. Birbaumer, and A. Kübler. "An auditory oddball brain–computer interface for binary choices." Clinical Neurophysiology 121, no. 4 (April 2010): 516–23. http://dx.doi.org/10.1016/j.clinph.2009.11.087.

27

Yin, Erwei, Timothy Zeyl, Rami Saab, Dewen Hu, Zongtan Zhou, and Tom Chau. "An Auditory-Tactile Visual Saccade-Independent P300 Brain–Computer Interface." International Journal of Neural Systems 26, no. 01 (January 5, 2016): 1650001. http://dx.doi.org/10.1142/s0129065716500015.

Abstract:
Most P300 event-related potential (ERP)-based brain–computer interface (BCI) studies focus on gaze shift-dependent BCIs, which cannot be used by people who have lost voluntary eye movement. However, the performance of visual saccade-independent P300 BCIs is generally poor. To improve saccade-independent BCI performance, we propose a bimodal P300 BCI approach that simultaneously employs auditory and tactile stimuli. The proposed P300 BCI is a vision-independent system because no visual interaction is required of the user. Specifically, we designed a direction-congruent bimodal paradigm by randomly and simultaneously presenting auditory and tactile stimuli from the same direction. Furthermore, the channels and number of trials were tailored to each user to improve online performance. With 12 participants, the average online information transfer rate (ITR) of the bimodal approach improved by 45.43% and 51.05% over that attained, respectively, with the auditory and tactile approaches individually. Importantly, the average online ITR of the bimodal approach, including the break time between selections, reached 10.77 bits/min. These findings suggest that the proposed bimodal system holds promise as a practical visual saccade-independent P300 BCI.
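The information transfer rates quoted here are conventionally computed with the Wolpaw ITR formula, which combines the number of classes, the selection accuracy, and the time per selection. A minimal sketch follows; how the paper counts break time between selections may differ from this simple version.

```python
# Wolpaw ITR: bits per selection, scaled to bits per minute.
import math

def bits_per_selection(n_classes: int, accuracy: float) -> float:
    """log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0               # at or below chance: no information
    if p >= 1.0:
        return math.log2(n)      # perfect accuracy
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_min(n_classes: int, accuracy: float,
                     seconds_per_selection: float) -> float:
    return bits_per_selection(n_classes, accuracy) * 60.0 / seconds_per_selection

# Example: a 4-class interface at 85% accuracy and 10 s per selection
# yields roughly 6.9 bits/min.
print(itr_bits_per_min(4, 0.85, 10.0))
```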
28

Strybel, Thomas Z. "Auditory Spatial Information and Head-Coupled Display Systems." Proceedings of the Human Factors Society Annual Meeting 32, no. 2 (October 1988): 75. http://dx.doi.org/10.1177/154193128803200215.

Abstract:
Developments of head-coupled control/display systems have focused primarily on the display of three dimensional visual information, as the visual system is the optimal sensory channel for the acquisition of spatial information in humans. The auditory system improves the efficiency of vision, however, by obtaining spatial information about relevant objects outside of the visual field of view. This auditory information can be used to direct head and eye movements. Head-coupled display systems can also benefit from the addition of auditory spatial information, as it provides a natural method of signaling the location of important events outside of the visual field of view. This symposium will report on current efforts in the development of head-coupled display systems, with an emphasis on the auditory spatial component. The first paper, “Virtual Interface Environment Workstations” by Scott S. Fisher, will report on the development of a prototype virtual environment. This environment consists of a head-mounted, wide-angle, stereoscopic display system which is controlled by operator position, voice, and gesture. With this interface, an operator can virtually explore a 360 degree synthesized environment and viscerally interact with its components. The second paper, “A Virtual Display System For Conveying Three-Dimensional Acoustic Information” by Elizabeth M. Wenzel, Frederic L. Wightman and Scott H. Foster, will report on the development of a method of synthetically generating three-dimensional sound cues for the above-mentioned interface. The development of simulated auditory spatial cues is limited, to some extent, by our knowledge of auditory spatial processing. The remaining papers will report on two areas of auditory space perception that have received little attention until recently. “Perception of Real and Simulated Motion in the Auditory Modality,” by Thomas Z. Strybel, will review recent research on auditory motion perception, because a natural acoustic environment must contain moving sounds. This review will consider applications of this knowledge to head-coupled display systems. The last paper, “Auditory Psychomotor Coordination,” will examine the interplay between the auditory, visual and motor systems. The specific emphasis of this paper is the use of auditory spatial information in the regulation of motor responses so as to provide efficient application of the visual channel.
29

Maddox, Ross K., Willy Cheung, and Adrian K. C. Lee. "Selective attention in an overcrowded auditory scene: Implications for auditory-based brain-computer interface design." Journal of the Acoustical Society of America 132, no. 5 (November 2012): EL385–EL390. http://dx.doi.org/10.1121/1.4757696.

30

Baek, Hyun Jae, Min Hye Chang, Jeong Heo, and Kwang Suk Park. "Enhancing the Usability of Brain-Computer Interface Systems." Computational Intelligence and Neuroscience 2019 (June 16, 2019): 1–12. http://dx.doi.org/10.1155/2019/5427154.

Abstract:
Brain-computer interfaces (BCIs) aim to enable people to interact with the external world through an alternative, nonmuscular communication channel that uses brain signal responses to complete specific cognitive tasks. BCIs have been growing rapidly during the past few years, with most of the BCI research focusing on system performance, such as improving accuracy or information transfer rate. Despite these advances, BCI research and development is still in its infancy and requires further consideration to significantly affect human experience in most real-world environments. This paper reviews the most recent studies and findings about ergonomic issues in BCIs. We review dry electrodes that can be used to detect brain signals with high enough quality to apply in BCIs and discuss their advantages, disadvantages, and performance. Also, an overview is provided of the wide range of recent efforts to create new interface designs that do not induce fatigue or discomfort during everyday, long-term use. The basic principles of each technique are described, along with examples of current applications in BCI research. Finally, we demonstrate a user-friendly interface paradigm that uses dry capacitive electrodes that do not require any preparation procedure for EEG signal acquisition. We explore the capacitively measured steady-state visual evoked potential (SSVEP) response to an amplitude-modulated visual stimulus and the auditory steady-state response (ASSR) to an auditory stimulus modulated by familiar natural sounds to verify their availability for BCI. We report the first results of an online demonstration that adopted this ergonomic approach to evaluating BCI applications. We expect BCI to become a routine clinical, assistive, and commercial tool through advanced EEG monitoring techniques and innovative interface designs.
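The SSVEP and ASSR paradigms demonstrated in this review are both frequency-tagged: the system decides which stimulus the user attends by comparing spectral power at the candidate modulation frequencies. A minimal single-channel sketch of that decision rule, with illustrative frequencies and synthetic data standing in for real EEG:

```python
# Frequency-tagged detection sketch: pick the candidate stimulus frequency
# with the most spectral power in one EEG channel. Frequencies, window,
# and data are illustrative assumptions.
import numpy as np

def detect_tagged_frequency(eeg, fs, candidates):
    """Return the candidate frequency (Hz) whose FFT bin holds the most power."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidates]
    return candidates[int(np.argmax(powers))]

# Synthetic 4 s trial at 250 Hz with a 10 Hz "response" buried in noise.
fs = 250.0
t = np.arange(0, 4.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10.0 * t) + np.random.randn(t.size)
print(detect_tagged_frequency(eeg, fs, [8.0, 10.0, 12.0]))  # -> 10.0
```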
31

Gao, Hai Juan, Lei Wang, and Ping Wang. "An Auditory Brain-Computer Interface Based on Dichotic Listening Paradigm." Advanced Materials Research 1030-1032 (September 2014): 2360–63. http://dx.doi.org/10.4028/www.scientific.net/amr.1030-1032.2360.

Abstract:
This study proposes a novel auditory brain–computer interface (BCI) based on the dichotic listening paradigm, which allows the subject to select a target from two different sound stimulus sequences played in each ear. EEG data from 6 subjects showed that the amplitudes of the N200 and P300 elicited by target stimuli were significantly higher than those of non-targets. We found the N2ac component: a negative wave in the N2 latency range at anterior contralateral electrodes. Target detection accuracy was assessed with a support vector machine (SVM); accuracy based on multiple electrodes was higher than that based on a single electrode. The dichotic listening paradigm can thus be used for a binary-class brain-computer interface system.
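The target-detection step in this abstract (an SVM over single- or multi-electrode ERP features) can be sketched with scikit-learn as follows. The 200-500 ms window covering the N200/P300, the electrode count, and the linear kernel are illustrative assumptions, and the synthetic epochs stand in for real EEG.

```python
# Sketch of SVM target vs. non-target classification on multi-electrode
# ERP features. Window, channels, and kernel are illustrative assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def erp_features(epochs, fs, t_min=0.2, t_max=0.5):
    """Flatten each epoch's 200-500 ms window across all electrodes.

    epochs: (n_trials, n_channels, n_samples), time-locked to stimulus onset.
    """
    lo, hi = int(t_min * fs), int(t_max * fs)
    return epochs[:, :, lo:hi].reshape(len(epochs), -1)

# Synthetic stand-in data: 120 trials, 8 electrodes, 1 s at 250 Hz.
fs = 250.0
rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 8, 250))
labels = rng.integers(0, 2, size=120)       # 1 = target, 0 = non-target

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, erp_features(epochs, fs), labels, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")  # ~0.5 on noise
```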
32

Marassi, Alessandro, Riccardo Budai, and Luca Chittaro. "A P300 auditory brain-computer interface based on mental repetition." Biomedical Physics & Engineering Express 4, no. 3 (April 26, 2018): 035040. http://dx.doi.org/10.1088/2057-1976/aab7d4.

33

Thorp, Elias B., Eric Larson, and Cara E. Stepp. "Combined Auditory and Vibrotactile Feedback for Human–Machine-Interface Control." IEEE Transactions on Neural Systems and Rehabilitation Engineering 22, no. 1 (January 2014): 62–68. http://dx.doi.org/10.1109/tnsre.2013.2273177.

34

Gaver, William W. "The SonicFinder: An Interface that Uses Auditory Icons (Abstract Only)." ACM SIGCHI Bulletin 21, no. 1 (August 1989): 124. http://dx.doi.org/10.1145/67880.1046601.

35

Halder, Sebastian, Eva Maria Hammer, Sonja Claudia Kleih, Martin Bogdan, Wolfgang Rosenstiel, Niels Birbaumer, and Andrea Kübler. "Prediction of Auditory and Visual P300 Brain-Computer Interface Aptitude." PLoS ONE 8, no. 2 (February 14, 2013): e53513. http://dx.doi.org/10.1371/journal.pone.0053513.

36

Käthner, Ivo, Carolin A. Ruf, Emanuele Pasqualotto, Christoph Braun, Niels Birbaumer, and Sebastian Halder. "A portable auditory P300 brain–computer interface with directional cues." Clinical Neurophysiology 124, no. 2 (February 2013): 327–38. http://dx.doi.org/10.1016/j.clinph.2012.08.006.

37

Klobassa, D. S., T. M. Vaughan, P. Brunner, N. E. Schwartz, J. R. Wolpaw, C. Neuper, and E. W. Sellers. "Toward a high-throughput auditory P300-based brain–computer interface." Clinical Neurophysiology 120, no. 7 (July 2009): 1252–61. http://dx.doi.org/10.1016/j.clinph.2009.04.019.

38

Darkow, David J., and William P. Marshak. "In Search of an Objective Metric for Complex Displays." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 42, no. 19 (October 1998): 1361–65. http://dx.doi.org/10.1177/154193129804201907.

Abstract:
Advanced displays for military and other user-interaction intensive systems need objective measures of merit for analyzing the information transfer from the displays to the user. A usable objective metric for display interface designers needs to be succinct, modular, and scalable. The authors have combined the concepts of weighted Signal-to-Noise Ratio (SNR) and multidimensional correlation to calculate a novel index of display complexity. Preliminary data supporting the development of this metric for complex visual, auditory, and mixed auditory and visual displays will be presented. Analysis of the human subject data indicates the coefficients for the algorithm are easily determined. Furthermore, the metric can predict reaction times and accuracy rates for complex displays. This combination of semi-automated reduction of display information and calculation of a single complexity index makes this algorithm a potentially convenient tool for designers of complex display interfaces.
39

Aboitiz, Francisco. "Voice, gesture and working memory in the emergence of speech." Interaction Studies 19, no. 1-2 (September 17, 2018): 70–85. http://dx.doi.org/10.1075/is.17032.abo.

Abstract:
Language and speech depend on a relatively well-defined neural circuitry, located predominantly in the left hemisphere. In this article, I discuss the origin of the speech circuit in early humans as an expansion of an auditory-vocal articulatory network that took place after the last common ancestor with the chimpanzee. I will attempt to converge this perspective with aspects of the Mirror System Hypothesis, particularly those related to the emergence of a meaningful grammar in human communication. Basically, the strengthening of auditory-vocal connectivity via the arcuate fasciculus and related tracts generated an expansion of working memory capacity for vocalizations that was key for learning complex utterances. This process was concomitant with the development of a robust interface with visual working memory, both in the dorsal and ventral streams of auditory and visual processing. This enabled the bidirectional translation of sequential codes into hierarchical visual representations, through the development of a multimodal interface between both systems.
40

Kim, Kwang S., Hantao Wang, and Ludo Max. "It's About Time: Minimizing Hardware and Software Latencies in Speech Research With Real-Time Auditory Feedback." Journal of Speech, Language, and Hearing Research 63, no. 8 (August 10, 2020): 2522–34. http://dx.doi.org/10.1044/2020_jslhr-19-00419.

Abstract:
Purpose: Various aspects of speech production related to auditory–motor integration and learning have been examined through auditory feedback perturbation paradigms in which participants' acoustic speech output is experimentally altered and played back via earphones/headphones “in real time.” Scientific rigor requires high precision in determining and reporting the involved hardware and software latencies. Many reports in the literature, however, are not consistent with the minimum achievable latency for a given experimental setup. Here, we focus specifically on this methodological issue associated with implementing real-time auditory feedback perturbations, and we offer concrete suggestions for increased reproducibility in this particular line of work. Method: Hardware and software latencies as well as total feedback loop latency were measured for formant perturbation studies with the Audapter software. Measurements were conducted for various audio interfaces, desktop and laptop computers, and audio drivers. An approach for lowering Audapter's software latency through nondefault parameter specification was also tested. Results: Oft-overlooked hardware-specific latencies were not negligible for some of the tested audio interfaces (adding up to 15 ms). Total feedback loop latencies (including both hardware and software latency) were also generally larger than claimed in the literature. Nondefault parameter values can improve Audapter's own processing latency without negative impact on formant tracking. Conclusions: Audio interface selection and software parameter optimization substantially affect total feedback loop latency. Thus, the actual total latency (hardware plus software) needs to be correctly measured and described in all published reports. Future speech research with “real-time” auditory feedback perturbations should increase scientific rigor by minimizing this latency.
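A common way to measure the total feedback-loop latency this abstract emphasizes is a loopback test: play a known click through the audio output, record it back through the input (via a loopback cable or a microphone held at the earphone), and locate the click by cross-correlation. The sketch below is a generic measurement of this kind, not Audapter's own procedure.

```python
# Generic audio round-trip latency measurement via loopback, assuming the
# output is physically routed back to the input.
import numpy as np
import sounddevice as sd
from scipy.signal import correlate

fs = 44100
click = np.zeros(int(0.5 * fs), dtype=np.float32)
click[:32] = 1.0                  # short rectangular click at t = 0

recording = sd.playrec(click, samplerate=fs, channels=1)
sd.wait()                         # block until playback and recording finish
recorded = recording[:, 0]

# Lag (in samples) that best aligns the recording with the played click.
lag = np.argmax(correlate(recorded, click, mode="full")) - (len(click) - 1)
print(f"round-trip latency: {1000.0 * lag / fs:.1f} ms")
```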
41

Sanderson, Penelope M., and Marcus O. Watson. "From Information Content to Auditory Display with Ecological Interface Design: Prospects and Challenges." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 3 (September 2005): 259–63. http://dx.doi.org/10.1177/154193120504900310.

Abstract:
We examine how Ecological Interface Design (EID) might better bridge the gap from analysis to design by taking different modalities into account. Whereas almost all previous research using EID has focused on visual displays, attempts to extend the use of EID to non-visual modalities have revealed hidden assumptions that need to be made explicit and questioned. In this paper we explore the potential for EID to support a systematic process for the design of auditory displays, illustrating our argument with the design of auditory displays to support anaesthesia monitoring. We propose a set of steps that analysts might take to move more deliberatively and effectively from analysis to design with EID.
42

Vasilijevic, A., Z. Vukic, and N. Miskovic. "Teleoperated trajectory tracking of remotely operated vehicles using spatial auditory interface." IFAC-PapersOnLine 49, no. 23 (2016): 97–102. http://dx.doi.org/10.1016/j.ifacol.2016.10.327.

43

Chang, Moonjeong, Koichi Mori, Shoji Makino, and Tomasz M. Rutkowski. "Spatial Auditory Two-step Input Japanese Syllabary Brain-computer Interface Speller." Procedia Technology 18 (2014): 25–31. http://dx.doi.org/10.1016/j.protcy.2014.11.007.

44

Kumar, N., W. Himmelbauer, G. Cauwenberghs, and A. G. Andreou. "An analog VLSI chip with asynchronous interface for auditory feature extraction." IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 45, no. 5 (May 1998): 600–606. http://dx.doi.org/10.1109/82.673642.

45

Cohen, Robert F., Arthur Meacham, and Joelle Skaff. "Teaching graphs to visually impaired students using an active auditory interface." ACM SIGCSE Bulletin 38, no. 1 (March 31, 2006): 279–82. http://dx.doi.org/10.1145/1124706.1121428.

46

Senn, Pascal, Marta Roccio, Stefan Hahnewald, Claudia Frick, Monika Kwiatkowska, Masaaki Ishikawa, Peter Bako, et al. "NANOCI—Nanotechnology Based Cochlear Implant With Gapless Interface to Auditory Neurons." Otology & Neurotology 38, no. 8 (September 2017): e224–e231. http://dx.doi.org/10.1097/mao.0000000000001439.

47

Kanoh, Shin'ichiro, Ko-ichiro Miyamoto, and Tatsuo Yoshinobu. "A Brain-Computer Interface (BCI) System Based on Auditory Stream Segregation." Journal of Biomechanical Science and Engineering 5, no. 1 (2010): 32–40. http://dx.doi.org/10.1299/jbse.5.32.

48

McCreadie, Karl A., Damien H. Coyle, and Girijesh Prasad. "Sensorimotor learning with stereo auditory feedback for a brain–computer interface." Medical & Biological Engineering & Computing 51, no. 3 (November 30, 2012): 285–93. http://dx.doi.org/10.1007/s11517-012-0992-7.

49

Cai, Zhenyu, Shoji Makino, and Tomasz M. Rutkowski. "Brain Evoked Potential Latencies Optimization for Spatial Auditory Brain–Computer Interface." Cognitive Computation 7, no. 1 (August 6, 2013): 34–43. http://dx.doi.org/10.1007/s12559-013-9228-x.

50

Wu, Jibin, Qi Liu, Malu Zhang, Zihan Pan, Haizhou Li, and Kay Chen Tan. "HuRAI: A brain-inspired computational model for human-robot auditory interface." Neurocomputing 465 (November 2021): 103–13. http://dx.doi.org/10.1016/j.neucom.2021.08.115.
