
Journal articles on the topic 'Audiometry. Speech perception. Speech processing systems'


Consult the top 50 journal articles for your research on the topic 'Audiometry. Speech perception. Speech processing systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Hope, A. J., L. M. Luxon, and D.-E. Bamiou. "Effects of chronic noise exposure on speech-in-noise perception in the presence of normal audiometry." Journal of Laryngology & Otology 127, no. 3 (2013): 233–38. http://dx.doi.org/10.1017/s002221511200299x.

Abstract:
Objective: To assess auditory processing in noise-exposed subjects with normal audiograms and compare the findings with those of non-noise-exposed normal controls. Methods: Ten noise-exposed Royal Air Force aircrew pilots were compared with 10 Royal Air Force administrators who had no history of noise exposure. Participants were matched in terms of age and sex. The subjects were assessed in terms of: pure tone audiometry, transient evoked otoacoustic emissions, suppression of transient evoked otoacoustic emissions in contralateral noise and auditory processing task performance (i.e. maski
2

Sams, M. "Audiovisual Speech Perception." Perception 26, no. 1_suppl (1997): 347. http://dx.doi.org/10.1068/v970029.

Abstract:
Persons with hearing loss use visual information from articulation to improve their speech perception. Even persons with normal hearing utilise visual information, especially when the stimulus-to-noise ratio is poor. A dramatic demonstration of the role of vision in speech perception is the audiovisual fusion called the ‘McGurk effect’. When the auditory syllable /pa/ is presented in synchrony with the face articulating the syllable /ka/, the subject usually perceives /ta/ or /ka/. The illusory perception is clearly auditory in nature. We recently studied the audiovisual fusion (acoustical /p/
3

Mullennix, John W., and David B. Pisoni. "Stimulus variability and processing dependencies in speech perception." Perception & Psychophysics 47, no. 4 (1990): 379–90. http://dx.doi.org/10.3758/bf03210878.

4

Koohi, Nehzat, Gilbert Thomas-Black, Paola Giunti, and Doris-Eva Bamiou. "Auditory Phenotypic Variability in Friedreich’s Ataxia Patients." Cerebellum 20, no. 4 (2021): 497–508. http://dx.doi.org/10.1007/s12311-021-01236-9.

Abstract:
Auditory neural impairment is a key clinical feature of Friedreich’s Ataxia (FRDA). We aimed to characterize the phenotypical spectrum of the auditory impairment in FRDA in order to facilitate early identification and timely management of auditory impairment in FRDA patients and to explore the relationship between the severity of auditory impairment with genetic variables (the expansion size of GAA trinucleotide repeats, GAA1 and GAA2), when controlled for variables such as disease duration, severity of the disease and cognitive status. Twenty-seven patients with genetically confirmed
5

Boymans, Monique, and Wouter A. Dreschler. "In situ Hearing Tests for the Purpose of a Self-Fit Hearing Aid." Audiology and Neurotology 22, no. 1 (2017): 15–23. http://dx.doi.org/10.1159/000457829.

Abstract:
This study investigated the potential and limitations of a self-fit hearing aid. This can be used in the “developing” world or in countries with large distances between the hearing-impaired subjects and the professional. It contains an on-board tone generator for in situ user-controlled, automated audiometry, and other tests for hearing aid fitting. Twenty subjects with mild hearing losses were involved. In situ audiometry showed a test-retest reliability (SD <3.7 dB) that compared well with the precision of diagnostic audiometry using headphones. There was good correspondence (SD <5.2 d
6

Pinard, Minola A. "Native and Cross-Language Speech Sounds: Some Perceptual Processes." Perceptual and Motor Skills 73, no. 1 (1991): 227–34. http://dx.doi.org/10.2466/pms.1991.73.1.227.

Abstract:
Using a developmental approach, two aspects of debate in the speech perception literature were tested, (a) the nature of adult speech processing, the dichotomy being along nonlinguistic versus linguistic lines, and (b) the nature of speech processing by children of different ages, the hypotheses here implying in infancy detector-like processes and at age four “adult-like” speech perception reorganizations. Children ranging in age from 4 up to 18 years discriminated native and foreign speech contrasts. Results confirm the hypotheses for adults. It is clear that different processes are operating
7

Ito, Takayuki, Alexis R. Johns, and David J. Ostry. "Left Lateralized Enhancement of Orofacial Somatosensory Processing Due to Speech Sounds." Journal of Speech, Language, and Hearing Research 56, no. 6 (2013): 1875–81. http://dx.doi.org/10.1044/1092-4388(2013/12-0226).

Abstract:
Purpose Somatosensory information associated with speech articulatory movements affects the perception of speech sounds and vice versa, suggesting an intimate linkage between speech production and perception systems. However, it is unclear which cortical processes are involved in the interaction between speech sounds and orofacial somatosensory inputs. The authors examined whether speech sounds modify orofacial somatosensory cortical potentials that were elicited using facial skin perturbations. Method Somatosensory event-related potentials in EEG were recorded in 3 background sound conditions
8

Wang, Hsiao-Lan S., I.-Chen Chen, Chun-Han Chiang, Ying-Hui Lai, and Yu Tsao. "Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers." Perceptual and Motor Skills 123, no. 2 (2016): 365–82. http://dx.doi.org/10.1177/0031512516663164.

9

Ujiie, Yuta, and Kohske Takahashi. "Weaker McGurk Effect for Rubin’s Vase-Type Speech in People With High Autistic Traits." Multisensory Research 34, no. 6 (2021): 663–79. http://dx.doi.org/10.1163/22134808-bja10047.

Abstract:
While visual information from facial speech modulates auditory speech perception, it is less influential on audiovisual speech perception among autistic individuals than among typically developed individuals. In this study, we investigated the relationship between autistic traits (Autism-Spectrum Quotient; AQ) and the influence of visual speech on the recognition of Rubin’s vase-type speech stimuli with degraded facial speech information. Participants were 31 university students (13 males and 18 females; mean age: 19.2, SD: 1.13 years) who reported normal (or corrected-to-normal) hear
10

Tampas, Joanna W., Ashley W. Harkrider, and Mark S. Hedrick. "Neurophysiological Indices of Speech and Nonspeech Stimulus Processing." Journal of Speech, Language, and Hearing Research 48, no. 5 (2005): 1147–64. http://dx.doi.org/10.1044/1092-4388(2005/081).

Abstract:
Auditory event-related potentials (mismatch negativity and P300) and behavioral discrimination were measured to synthetically generated consonant-vowel (CV) speech and nonspeech contrasts in 10 young adults with normal auditory systems. Previous research has demonstrated that behavioral and P300 responses reflect a phonetic, categorical level of processing. The aims of the current investigation were (a) to examine whether the mismatch negativity (MMN) response is also influenced by the phonetic characteristics of a stimulus or if it reflects purely an acoustic level of processing and (b) to ex
11

Di Stadio, Arianna, Laura Dipietro, Roberta Toffano, et al. "Working Memory Function in Children with Single Side Deafness Using a Bone-Anchored Hearing Implant: A Case-Control Study." Audiology and Neurotology 23, no. 4 (2018): 238–44. http://dx.doi.org/10.1159/000493722.

Abstract:
The importance of a good hearing function to preserve memory and cognitive abilities has been shown in the adult population, but studies on the pediatric population are currently lacking. This study aims at evaluating the effects of a bone-anchored hearing implant (BAHI) on speech perception, speech processing, and memory abilities in children with single side deafness (SSD). We enrolled n = 25 children with SSD and assessed them prior to BAHI implantation, and at 1-month and 3-month follow-ups after BAHI implantation using tests of perception in silence and perception in phonemic confusion, d
12

Everdell, Ian T., Heidi Marsh, Micheal D. Yurick, Kevin G. Munhall, and Martin Paré. "Gaze Behaviour in Audiovisual Speech Perception: Asymmetrical Distribution of Face-Directed Fixations." Perception 36, no. 10 (2007): 1535–45. http://dx.doi.org/10.1068/p5852.

Abstract:
Speech perception under natural conditions entails integration of auditory and visual information. Understanding how visual and auditory speech information are integrated requires detailed descriptions of the nature and processing of visual speech information. To understand better the process of gathering visual information, we studied the distribution of face-directed fixations of humans performing an audiovisual speech perception task to characterise the degree of asymmetrical viewing and its relationship to speech intelligibility. Participants showed stronger gaze fixation asymmetries while
13

McCotter, Maxine V., and Timothy R. Jordan. "The Role of Facial Colour and Luminance in Visual and Audiovisual Speech Perception." Perception 32, no. 8 (2003): 921–36. http://dx.doi.org/10.1068/p3316.

Abstract:
We conducted four experiments to investigate the role of colour and luminance information in visual and audiovisual speech perception. In experiments 1a (stimuli presented in quiet conditions) and 1b (stimuli presented in auditory noise), face display types comprised naturalistic colour (NC), grey-scale (GS), and luminance inverted (LI) faces. In experiments 2a (quiet) and 2b (noise), face display types comprised NC, colour inverted (CI), LI, and colour and luminance inverted (CLI) faces. Six syllables and twenty-two words were used to produce auditory and visual speech stimuli. Auditory and v
14

Delić, Vlado, Zoran Perić, Milan Sečujski, et al. "Speech Technology Progress Based on New Machine Learning Paradigm." Computational Intelligence and Neuroscience 2019 (June 25, 2019): 1–19. http://dx.doi.org/10.1155/2019/4368036.

Abstract:
Speech technologies have been developed for decades as a typical signal processing area, while the last decade has brought a huge progress based on new machine learning paradigms. Owing not only to their intrinsic complexity but also to their relation with cognitive sciences, speech technologies are now viewed as a prime example of interdisciplinary knowledge area. This review article on speech signal analysis and processing, corresponding machine learning algorithms, and applied computational intelligence aims to give an insight into several fields, covering speech production and auditory per
15

Clark, Graeme M. "The multiple-channel cochlear implant: the interface between sound and the central nervous system for hearing, speech, and language in deaf people—a personal perspective." Philosophical Transactions of the Royal Society B: Biological Sciences 361, no. 1469 (2006): 791–810. http://dx.doi.org/10.1098/rstb.2005.1782.

Abstract:
The multiple-channel cochlear implant is the first sensori-neural prosthesis to effectively and safely bring electronic technology into a direct physiological relation with the central nervous system and human consciousness, and to give speech perception to severely-profoundly deaf people and spoken language to children. Research showed that the place and temporal coding of sound frequencies could be partly replicated by multiple-channel stimulation of the auditory nerve. This required safety studies on how to prevent the effects to the cochlea of trauma, electrical stimuli, biomaterials and m
16

Finke, Mareike, Pascale Sandmann, Hanna Bönitz, Andrej Kral, and Andreas Büchner. "Consequences of Stimulus Type on Higher-Order Processing in Single-Sided Deaf Cochlear Implant Users." Audiology and Neurotology 21, no. 5 (2016): 305–15. http://dx.doi.org/10.1159/000452123.

Abstract:
Single-sided deaf subjects with a cochlear implant (CI) provide the unique opportunity to compare central auditory processing of the electrical input (CI ear) and the acoustic input (normal-hearing, NH, ear) within the same individual. In these individuals, sensory processing differs between their two ears, while cognitive abilities are the same irrespectively of the sensory input. To better understand perceptual-cognitive factors modulating speech intelligibility with a CI, this electroencephalography study examined the central-auditory processing of words, the cognitive abilities, and the sp
17

Gabr, Takwa A., and Reham M. Lasheen. "Binaural Interaction in Tinnitus Patients." Audiology and Neurotology 25, no. 6 (2020): 315–22. http://dx.doi.org/10.1159/000507274.

Abstract:
The auditory brainstem response (ABR) is a commonly used objective clinical measure for hearing evaluation. It can also be used to draw conclusions about the functioning of distinct stages of the auditory pathway, including the binaural processing stages, using the binaural interaction component (BIC) of the ABR. Objective: To study binaural processing in normal hearing subjects complaining of tinnitus. Methods: Sixty cases with bilateral normal peripheral hearing were included in this work, divided into 2 groups, i.e.,
18

Rance, Gary, Louise Corben, and Martin Delatycki. "Auditory Processing Deficits in Children With Friedreich Ataxia." Journal of Child Neurology 27, no. 9 (2012): 1197–203. http://dx.doi.org/10.1177/0883073812448963.

Abstract:
Friedreich ataxia is a neurodegenerative disease with an average age of onset of 10 years. The authors sought to investigate the presence and functional consequences of auditory neuropathy in a group of affected children and to evaluate the ability of personal FM-listening systems to improve perception. Nineteen school-aged individuals with Friedreich ataxia and a cohort of matched control subjects underwent a battery of auditory function tests. Sound detection was relatively normal, but auditory temporal processing and speech understanding in noise were severely impaired, with children with F
19

Arunphalungsanti, Kittipun, and Chailerd Pichitpornchai. "Brain Processing (Auditory Event-Related Potential) of Stressed Versus Unstressed Words in Thai Speech." Perceptual and Motor Skills 125, no. 6 (2018): 995–1010. http://dx.doi.org/10.1177/0031512518794107.

Abstract:
This study investigated the effect of the stressed word in Thai language on auditory event-related potential (aERP) in unattended conditions. We presented 30 healthy participants with monosyllabic Thai words consisting of either stressed or unstressed words. We instructed them not to attend to the sound stimuli, but rather to watch and memorize the contents of a silent natural documentary without subtitles. The two listening conditions consisted of 20% deviant stimuli (70 stressed and 70 unstressed words, respectively) and 80% standard stimuli (other 280 unstressed words) presented pseudorando
20

Mirman, Daniel, and Melissa Thye. "Uncovering the Neuroanatomy of Core Language Systems Using Lesion-Symptom Mapping." Current Directions in Psychological Science 27, no. 6 (2018): 455–61. http://dx.doi.org/10.1177/0963721418787486.

Abstract:
Recent studies have integrated noninvasive brain-imaging methods and advanced analysis techniques to study associations between the location of brain damage and cognitive deficits. By applying data-driven analysis methods to large sets of data on language deficits after stroke (aphasia), these studies have identified the cognitive systems that support language processing—phonology, semantics, fluency, and executive functioning—and their neural basis. Phonological processing is supported by dual pathways around the Sylvian fissure, a ventral speech-recognition component and a dorsal speech-prod
21

Kelly, Andrea S., Suzanne C. Purdy, and Peter R. Thorne. "Electrophysiological and speech perception measures of auditory processing in experienced adult cochlear implant users." Clinical Neurophysiology 116, no. 6 (2005): 1235–46. http://dx.doi.org/10.1016/j.clinph.2005.02.011.

22

Calvino, Miryam, Isabel Sánchez-Cuadrado, Javier Gavilán, and Luis Lassaletta. "Cochlear Implant Users with Otosclerosis: Are Hearing and Quality of Life Outcomes Worse than in Cochlear Implant Users without Otosclerosis?" Audiology and Neurotology 23, no. 6 (2018): 345–55. http://dx.doi.org/10.1159/000496191.

Abstract:
Background: The otosclerotic process may influence the performance of the cochlear implant (CI). Difficulty in inserting the electrode array due to potential ossification of the cochlea, facial nerve stimulation, and instability of the results are potential challenges for the CI team. Objectives: To evaluate hearing results and subjective outcomes of CI users with otosclerosis and to compare them with those of CI users without otosclerosis. Method: Retrospective review of 239 adults with bilateral profound postlingual deafness who underwent unilateral cochlear implantation between 1992 and 201
23

Calvert, Gemma A., and Ruth Campbell. "Reading Speech from Still and Moving Faces: The Neural Substrates of Visible Speech." Journal of Cognitive Neuroscience 15, no. 1 (2003): 57–70. http://dx.doi.org/10.1162/089892903321107828.

Abstract:
Speech is perceived both by ear and by eye. Unlike heard speech, some seen speech gestures can be captured in stilled image sequences. Previous studies have shown that in hearing people, natural time-varying silent seen speech can access the auditory cortex (left superior temporal regions). Using functional magnetic resonance imaging (fMRI), the present study explored the extent to which this circuitry was activated when seen speech was deprived of its time-varying characteristics. In the scanner, hearing participants were instructed to look for a prespecified visible speech target sequence (“
24

Liang, Chun, Lisa M. Houston, Ravi N. Samy, Lamiaa Mohamed Ibrahim Abedelrehim, and Fawen Zhang. "Cortical Processing of Frequency Changes Reflected by the Acoustic Change Complex in Adult Cochlear Implant Users." Audiology and Neurotology 23, no. 3 (2018): 152–64. http://dx.doi.org/10.1159/000492170.

Abstract:
The purpose of this study was to examine neural substrates of frequency change detection in cochlear implant (CI) recipients using the acoustic change complex (ACC), a type of cortical auditory evoked potential elicited by acoustic changes in an ongoing stimulus. A psychoacoustic test and electroencephalographic recording were administered in 12 postlingually deafened adult CI users. The stimuli were pure tones containing different magnitudes of upward frequency changes. Results showed that the frequency change detection threshold (FCDT) was 3.79% in the CI users, with a large variability. The
25

Cheng, Stella T. T., Gary Y. H. Lam, and Carol K. S. To. "Pitch Perception in Tone Language-Speaking Adults With and Without Autism Spectrum Disorders." i-Perception 8, no. 3 (2017): 204166951771120. http://dx.doi.org/10.1177/2041669517711200.

Abstract:
Enhanced low-level pitch perception has been universally reported in autism spectrum disorders (ASD). This study examined whether tone language speakers with ASD exhibit this advantage. The pitch perception skill of 20 Cantonese-speaking adults with ASD was compared with that of 20 neurotypical individuals. Participants discriminated pairs of real syllable, pseudo-syllable (syllables that do not conform the phonotactic rules or are accidental gaps), and non-speech (syllables with attenuated high-frequency segmental content) stimuli contrasting pitch levels. The results revealed significantly h
26

Mankel, Kelsey, and Gavin M. Bidelman. "Inherent auditory skills rather than formal music training shape the neural encoding of speech." Proceedings of the National Academy of Sciences 115, no. 51 (2018): 13129–34. http://dx.doi.org/10.1073/pnas.1811793115.

Abstract:
Musical training is associated with a myriad of neuroplastic changes in the brain, including more robust and efficient neural processing of clean and degraded speech signals at brainstem and cortical levels. These assumptions stem largely from cross-sectional studies between musicians and nonmusicians which cannot address whether training itself is sufficient to induce physiological changes or whether preexisting superiority in auditory function before training predisposes individuals to pursue musical interests and appear to have similar neuroplastic benefits as musicians. Here, we recorded n
27

Thoidis, Iordanis, Lazaros Vrysis, Dimitrios Markou, and George Papanikolaou. "Temporal Auditory Coding Features for Causal Speech Enhancement." Electronics 9, no. 10 (2020): 1698. http://dx.doi.org/10.3390/electronics9101698.

Abstract:
Perceptually motivated audio signal processing and feature extraction have played a key role in the determination of high-level semantic processes and the development of emerging systems and applications, such as mobile phone telecommunication and hearing aids. In the era of deep learning, speech enhancement methods based on neural networks have seen great success, mainly operating on the log-power spectra. Although these approaches surpass the need for exhaustive feature extraction and selection, it is still unclear whether they target the important sound characteristics related to speech per
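For readers less familiar with the log-power spectra mentioned in this abstract, the short Python sketch below shows how such a feature matrix is commonly computed from a waveform. It is only an illustration under assumed frame, hop, and FFT settings, not the feature extraction used in the article.

import numpy as np

def log_power_spectrogram(signal, frame=512, hop=128, eps=1e-10):
    # Frame the signal, window it, and take the per-frame log power spectrum.
    window = np.hanning(frame)
    n_frames = 1 + (len(signal) - frame) // hop
    frames = np.stack([signal[i * hop:i * hop + frame] * window for i in range(n_frames)])
    spectrum = np.fft.rfft(frames, axis=1)        # per-frame FFT
    return np.log(np.abs(spectrum) ** 2 + eps)    # (frames, bins) log-power matrix

fs = 16000
noisy = np.random.default_rng(1).standard_normal(fs)   # stand-in for 1 s of noisy speech
features = log_power_spectrogram(noisy)                 # input features for an enhancement model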
28

Moore, Brian C. J. "The Role of Temporal Fine Structure Processing in Pitch Perception, Masking, and Speech Perception for Normal-Hearing and Hearing-Impaired People." Journal of the Association for Research in Otolaryngology 9, no. 4 (2008): 399–406. http://dx.doi.org/10.1007/s10162-008-0143-x.

29

Chang, Yu-Tuan, Hui-Mei Yang, Yi-Hui Lin, Shu-Hui Liu, and Jiunn-Liang Wu. "Tone Discrimination and Speech Perception Benefit in Mandarin-Speaking Children Fit With HiRes Fidelity 120 Sound Processing." Otology & Neurotology 30, no. 6 (2009): 750–57. http://dx.doi.org/10.1097/mao.0b013e3181b286b2.

30

Hickok, Gregory, and Bradley Buchsbaum. "Temporal lobe speech perception systems are part of the verbal working memory circuit: Evidence from two recent fMRI studies." Behavioral and Brain Sciences 26, no. 6 (2003): 740–41. http://dx.doi.org/10.1017/s0140525x03340166.

Abstract:
In the verbal domain, there is only very weak evidence favoring the view that working memory is an active state of long-term memory. We strengthen existing evidence by reviewing two recent fMRI studies of verbal working memory, which clearly demonstrate activation in the superior temporal lobe, a region known to be involved in processing speech during comprehension tasks.
31

Jeon, Jeong-Bae, Min-Chae Jeon, and Dong-Hee Lee. "Verbal Auditory Agnosia Developed after Unilateral Temporal Lobe Infarction." Korean Journal of Otorhinolaryngology-Head and Neck Surgery 64, no. 4 (2021): 277–84. http://dx.doi.org/10.3342/kjorl-hns.2020.00332.

Abstract:
Stroke results in sudden loss of function related to its damaged portion. When this occurs in temporal lobe, the function of hearing and listening may be affected, although receptive language processing is affected while hearing perception is relatively spared. This is called as “central deafness.” It has been known that hearing ability is seldom impaired in the case of temporal lobe stroke except in the case of bilateral lesions. However, we experienced a 72-year-old, right-handed woman who presented with both sudden hearing difficulty due to unilateral temporal lobe infarctions after suddenl
32

Sussman, Harvey M., David Fruchter, Jon Hilbert, and Joseph Sirosh. "Linear correlates in the speech signal: The orderly output constraint." Behavioral and Brain Sciences 21, no. 2 (1998): 241–59. http://dx.doi.org/10.1017/s0140525x98001174.

Abstract:
Neuroethological investigations of mammalian and avian auditory systems have documented species-specific specializations for processing complex acoustic signals that could, if viewed in abstract terms, have an intriguing and striking relevance for human speech sound categorization and representation. Each species forms biologically relevant categories based on combinatorial analysis of information-bearing parameters within the complex input signal. This target article uses known neural models from the mustached bat and barn owl to develop, by analogy, a conceptualization of human processing of
33

Sanchez-Lopez, Raul, Michal Fereczkowski, Tobias Neher, Sébastien Santurette, and Torsten Dau. "Robust Data-Driven Auditory Profiling Towards Precision Audiology." Trends in Hearing 24 (January 2020): 233121652097353. http://dx.doi.org/10.1177/2331216520973539.

Abstract:
The sources and consequences of a sensorineural hearing loss are diverse. While several approaches have aimed at disentangling the physiological and perceptual consequences of different etiologies, hearing deficit characterization and rehabilitation have been dominated by the results from pure-tone audiometry. Here, we present a novel approach based on data-driven profiling of perceptual auditory deficits that attempts to represent auditory phenomena that are usually hidden by, or entangled with, audibility loss. We hypothesize that the hearing deficits of a given listener, both at hearing thr
34

Kropotov, Yuriy, Aleksey Belov, and Aleksandr Proskuryakov. "Effectiveness Increase in Audio Exchange Telecommunication Systems under Conditions of External Acoustic Noise by Methods of Adaptive Filtering." Bulletin of Bryansk State Technical University 2019, no. 3 (2019): 71–77. http://dx.doi.org/10.30987/article_5c8b5cebac6217.27543313.

Abstract:
Signal processing in the telecommunication systems of audioinformation exchange is conditioned on the requirement in the separation of useful speech acoustic information, in the increase of the verification of information perception by subscribers of a communication system, in the stability increase of telecommunication systems at the suppression of external acoustic interference and echosignal compensations. Therefore during designing telecommunication systems, in particular, speakerphone systems (SS) operating under conditions of an active impact of external acoustic interference and echosig
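As a rough illustration of the adaptive filtering this abstract refers to, the following minimal Python sketch applies a normalized LMS (NLMS) filter to subtract an adaptively filtered copy of a reference interference signal from the observed mixture. It is not the authors' algorithm; the filter length, step size, and toy signals are assumptions.

import numpy as np

def nlms_cancel(reference, observed, taps=64, mu=0.5, eps=1e-8):
    # Adapt FIR weights so the filtered reference tracks the interference in `observed`.
    w = np.zeros(taps)
    cleaned = np.zeros(len(observed))
    for n in range(taps, len(observed)):
        x = reference[n - taps:n][::-1]          # most recent reference samples
        e = observed[n] - w @ x                  # error = observed minus estimated interference
        w += (mu / (eps + x @ x)) * e * x        # NLMS weight update
        cleaned[n] = e
    return cleaned

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(fs) / fs
speech = 0.3 * np.sin(2 * np.pi * 220 * t)                  # stand-in for the useful speech signal
noise = rng.standard_normal(fs)                             # external interference / far-end signal
echo_path = np.array([0.6, 0.0, 0.3, 0.0, 0.1])             # assumed acoustic path
observed = speech + np.convolve(noise, echo_path, mode="same")
cleaned = nlms_cancel(noise, observed)                      # residual approaches the speech alone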
35

Persson, Ann-Charlotte, Sabine Reinfeldt, Bo Håkansson, Cristina Rigato, Karl-Johan Fredén Jansson, and Måns Eeg-Olofsson. "Three-Year Follow-Up with the Bone Conduction Implant." Audiology and Neurotology 25, no. 5 (2020): 263–75. http://dx.doi.org/10.1159/000506588.

Abstract:
Background: The bone conduction implant (BCI) is an active transcutaneous bone conduction device where the transducer has direct contact to the bone, and the skin is intact. Sixteen patients have been implanted with the BCI with a planned follow-up of 5 years. This study reports on hearing, quality of life, and objective measures up to 36 months of follow-up in 10 patients. Method: Repeated measures were performed at fitting and after 1, 3, 6, 12, and 36 months including sound field warble tone thresholds, speech recognition thresholds in quiet, speech recognition score in noise, and speech-to
36

Tremblay, Kelly L., Curtis Billings, and Neeru Rohila. "Speech Evoked Cortical Potentials: Effects of Age and Stimulus Presentation Rate." Journal of the American Academy of Audiology 15, no. 03 (2004): 226–37. http://dx.doi.org/10.3766/jaaa.15.3.5.

Abstract:
We examined the effects of stimulus complexity and stimulus presentation rate in ten younger and ten older normal-hearing adults. A 1 kHz tone burst as well as a speech syllable were used to elicit the N1-P2 complex. Three different interstimulus intervals (ISI) were used (510, 910, and 1510 msec). When stimuli were presented at the medium presentation rate (910 msec ISI), N1 and P2 latencies were prolonged for older listeners in response to the speech stimulus but not the tone stimulus. These age effects were absent when stimuli were presented at a slower rate (1510 msec ISI). Results from th
37

Green, Patrick A., Nicholas C. Brandley, and Stephen Nowicki. "Categorical perception in animal communication and decision-making." Behavioral Ecology 31, no. 4 (2020): 859–67. http://dx.doi.org/10.1093/beheco/araa004.

Abstract:
The information an animal gathers from its environment, including that associated with signals, often varies continuously. Animals may respond to this continuous variation in a physical stimulus as lying in discrete categories rather than along a continuum, a phenomenon known as categorical perception. Categorical perception was first described in the context of speech and thought to be uniquely associated with human language. Subsequent work has since discovered that categorical perception functions in communication and decision-making across animal taxa, behavioral contexts, and sen
38

Wagner, Luise, Stefan K. Plontke, and Torsten Rahne. "Perception of Iterated Rippled Noise Periodicity in Cochlear Implant Users." Audiology and Neurotology 22, no. 2 (2017): 104–15. http://dx.doi.org/10.1159/000478649.

Abstract:
Pitch perception is more challenging for individuals with cochlear implants (CIs) than normal-hearing subjects because the signal processing by CIs is restricted. Processing and perceiving the periodicity of signals may contribute to pitch perception. Whether individuals with CIs can discern pitch within an iterated rippled noise (IRN) signal is still unclear. In a prospective controlled psychoacoustic study with 34 CI users and 15 normal-hearing control subjects, the difference limen between IRN signals with different numbers of iterations was measured. In 7 CI users and 15 normal-hearing con
39

Piro, Joseph M. "Laterality Effects for Music Perception among Differentially Talented Adolescents." Perceptual and Motor Skills 76, no. 2 (1993): 499–514. http://dx.doi.org/10.2466/pms.1993.76.2.499.

Abstract:
To examine the comparative nature of laterality patterns for music perception among differentially talented adolescents, 138 right-handed subjects (56 boys, 82 girls) trained in music, mathematics, and dance, respectively, were tested on dichotic chords and dichotic melodies tasks. Analyses demonstrated that only the musically trained subjects displayed task-dependent ear asymmetry, that is, a left-ear advantage for dichotic chords and a right-ear advantage for dichotic melodies. The mathematically and dance-talented students displayed a left-ear bias for both tasks of music perception. A cont
40

Shallice, Tim, Peter McLeod, and Kristin Lewis. "Isolating Cognitive Modules with the Dual-Task Paradigm: Are Speech Perception and Production Separate Processes?" Quarterly Journal of Experimental Psychology Section A 37, no. 4 (1985): 507–32. http://dx.doi.org/10.1080/14640748508400917.

Abstract:
A dual-task paradigm is used to investigate whether the auditory input logogen is distinct from the articulatory output logogen. In the first two experiments it is shown that the task of detecting an unspecified name in an auditory input stream can be combined with reading aloud visually presented words with relatively little single- to dual-task decrement. The stimuli for both tasks are independent streams of random words presented at rapid rates. A series of control experiments suggest that the first task places a considerable information processing load on the auditory input logogen, the se
41

Smart, Jennifer L., Suzanne C. Purdy, and Andrea S. Kelly. "Impact of Personal Frequency Modulation Systems on Behavioral and Cortical Auditory Evoked Potential Measures of Auditory Processing and Classroom Listening in School-Aged Children with Auditory Processing Disorder." Journal of the American Academy of Audiology 29, no. 07 (2018): 568–86. http://dx.doi.org/10.3766/jaaa.16074.

Abstract:
Personal frequency modulation (FM) systems are often recommended for children diagnosed with auditory processing disorder (APD) to improve their listening environment in the classroom. Further evidence is required to support the continuation of this recommendation. To determine whether personal FM systems enhance auditory processing abilities and classroom listening in school-aged children with APD. Two baseline assessments separated by eight weeks were undertaken before a 20-week trial of bilateral personal FM in the classroom. The third assessment was completed immediately after the FM
42

Lount, Sarah A., Suzanne C. Purdy, and Linda Hand. "Hearing, Auditory Processing, and Language Skills of Male Youth Offenders and Remandees in Youth Justice Residences in New Zealand." Journal of Speech, Language, and Hearing Research 60, no. 1 (2017): 121–35. http://dx.doi.org/10.1044/2016_jslhr-l-15-0131.

Abstract:
Purpose International evidence suggests youth offenders have greater difficulties with oral language than their nonoffending peers. This study examined the hearing, auditory processing, and language skills of male youth offenders and remandees (YORs) in New Zealand. Method Thirty-three male YORs, aged 14–17 years, were recruited from 2 youth justice residences, plus 39 similarly aged male students from local schools for comparison. Testing comprised tympanometry, self-reported hearing, pure-tone audiometry, 4 auditory processing tests, 2 standardized language tests, and a nonverbal intelligenc
43

Sánchez-García, Carolina, Sonia Kandel, Christophe Savariaux, and Salvador Soto-Faraco. "The Time Course of Audio-Visual Phoneme Identification: a High Temporal Resolution Study." Multisensory Research 31, no. 1-2 (2018): 57–78. http://dx.doi.org/10.1163/22134808-00002560.

Abstract:
Speech unfolds in time and, as a consequence, its perception requires temporal integration. Yet, studies addressing audio-visual speech processing have often overlooked this temporal aspect. Here, we address the temporal course of audio-visual speech processing in a phoneme identification task using a Gating paradigm. We created disyllabic Spanish word-like utterances (e.g., /pafa/, /paθa/, …) from high-speed camera recordings. The stimuli differed only in the middle consonant (/f/, /θ/, /s/, /r/, /g/), which varied in visual and auditory saliency. As in classical Gating tasks, the utterances
44

Kates, James M. "Principles of Digital Dynamic-Range Compression." Trends in Amplification 9, no. 2 (2005): 45–76. http://dx.doi.org/10.1177/108471380500900202.

Abstract:
This article provides an overview of dynamic-range compression in digital hearing aids. Digital technology is becoming increasingly common in hearing aids, particularly because of the processing flexibility it offers and the opportunity to create more-effective devices. The focus of the paper is on the algorithms used to build digital compression systems. Of the various approaches that can be used to design a digital hearing aid, this paper considers broadband compression, multi-channel filter banks, a frequency-domain compressor using the FFT, the side-branch design that separates the filteri
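A minimal single-band compressor sketch in Python may help illustrate the broadband compression principle surveyed in this article. The threshold, ratio, and attack/release values are assumed example settings, not Kates' design.

import numpy as np

def compress(x, fs, threshold_db=-30.0, ratio=3.0, attack_ms=5.0, release_ms=50.0):
    # One-pole envelope follower plus a static gain curve applied above the threshold.
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    y = np.zeros_like(x)
    for n, sample in enumerate(x):
        mag = abs(sample)
        a = a_att if mag > env else a_rel
        env = a * env + (1.0 - a) * mag                 # smoothed level estimate
        level_db = 20.0 * np.log10(env + 1e-9)
        over_db = max(0.0, level_db - threshold_db)     # dB above threshold
        gain_db = -over_db * (1.0 - 1.0 / ratio)        # compressive gain reduction
        y[n] = sample * 10.0 ** (gain_db / 20.0)
    return y

fs = 16000
t = np.arange(fs) / fs
tone = np.concatenate([0.02 * np.sin(2 * np.pi * 500 * t), 0.8 * np.sin(2 * np.pi * 500 * t)])
out = compress(tone, fs)    # the quiet half passes nearly unchanged; the loud half is turned down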
45

Dimoska, A., S. McDonald, M. C. Pell, R. L. Tate, and C. M. James. "Recognizing vocal expressions of emotion in patients with social skills deficits following traumatic brain injury." Journal of the International Neuropsychological Society 16, no. 2 (2010): 369–82. http://dx.doi.org/10.1017/s1355617709991445.

Abstract:
Perception of emotion in voice is impaired following traumatic brain injury (TBI). This study examined whether an inability to concurrently process semantic information (the “what”) and emotional prosody (the “how”) of spoken speech contributes to impaired recognition of emotional prosody and whether impairment is ameliorated when little or no semantic information is provided. Eighteen individuals with moderate-to-severe TBI showing social skills deficits during inpatient rehabilitation were compared with 18 demographically matched controls. Participants completed two discrimination ta
46

Savage, Robert, Ulla Patni, Norah Frederickson, Roz Goodwin, Nicola Smith, and Louise Tuersley. "Evaluating Current Deficit Theories of Poor Reading: Role of Phonological Processing, Naming Speed, Balance Automaticity, Rapid Verbal Perception and Working Memory." Perceptual and Motor Skills 101, no. 2 (2005): 345–61. http://dx.doi.org/10.2466/pms.101.2.345-361.

Abstract:
To clarify the nature of cognitive deficits experienced by poor readers, 9 10-yr.-old poor readers were matched against 9 chronological age and 9 younger reading age-matched controls screened and selected from regular classrooms. Poor readers performed significantly more poorly than chronological age-matched peers on digit naming speed, spoonerisms, and nonsense word reading. Poor readers were also significantly poorer than reading age-matched controls on nonword reading but were significantly better than reading age-matched controls on postural stability. Analyses of effect sizes were consist
47

Häggström, Jenny, Christina Hederstierna, Ulf Rosenhall, Per Östberg, and Esma Idrizbegovic. "Prognostic Value of a Test of Central Auditory Function in Conversion from Mild Cognitive Impairment to Dementia." Audiology and Neurotology 25, no. 5 (2020): 276–82. http://dx.doi.org/10.1159/000506621.

Abstract:
Background/Objective: It has been suggested that central auditory processing dysfunction might precede the development of cognitive decline and Alzheimer’s disease (AD). The Dichotic Digits Test (DDT) has been proposed as a test of central auditory function. Our objective was to evaluate the predictive capacity of the DDT in conversion from mild cognitive impairment (MCI) to dementia. Methods: A total of 57 participants (26 females) with MCI were tested at baseline with pure tone audiometry, speech in quiet and in noise, and the DDT. The cognitive outcome was retrieved from medical files after
48

Blumstein, Sheila E., Emily B. Myers, and Jesse Rissman. "The Perception of Voice Onset Time: An fMRI Investigation of Phonetic Category Structure." Journal of Cognitive Neuroscience 17, no. 9 (2005): 1353–66. http://dx.doi.org/10.1162/0898929054985473.

Abstract:
This study explored the neural systems underlying the perception of phonetic category structure by investigating the perception of a voice onset time (VOT) continuum in a phonetic categorization task. Stimuli consisted of five synthetic speech stimuli which ranged in VOT from 0 msec ([da]) to 40 msec ([ta]). Results from 12 subjects showed that the neural system is sensitive to VOT differences of 10 msec and that details of phonetic category structure are retained throughout the phonetic processing stream. Both the left inferior frontal gyrus (IFG) and cingulate showed graded activation as a f
49

Skuratovskii, R., A. Bazarna, and E. Osadhyy. "Analysis of speech MEL scale and its classification as big data by parameterized KNN." Artificial Intelligence 26, jai2021.26(1) (2021): 42–57. http://dx.doi.org/10.15407/jai2021.01.042.

Abstract:
Recognizing emotions and human speech has always been an exciting challenge for scientists. In our work the parameterization of the vector is obtained and realized from the sentence divided into the containing emotional-informational part and the informational part is effectively applied. The expressiveness of human speech is improved by the emotion it conveys. There are several characteristics and features of speech that differentiate it among utterances, i.e. various prosodic features like pitch, timbre, loudness and vocal tone which categorize speech into several emotions. They were supplem
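To illustrate the general idea of mel-scale features feeding a nearest-neighbour classifier, here is a minimal Python sketch using the widely available librosa and scikit-learn libraries. It is not the authors' parameterized KNN; the feature settings, k, waveforms, and emotion labels below are assumptions for demonstration only.

import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def utterance_features(y, sr=16000, n_mfcc=13):
    # Mel-frequency cepstral coefficients summarised into one fixed-length vector per utterance.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)           # shape (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

sr = 16000
rng = np.random.default_rng(0)
utterances = [0.1 * rng.standard_normal(sr),                          # stand-ins for labelled
              np.sin(2 * np.pi * 200 * np.arange(sr) / sr)]           # emotional-speech recordings
labels = ["neutral", "angry"]                                         # hypothetical emotion labels

X = np.vstack([utterance_features(u, sr) for u in utterances])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
prediction = clf.predict([utterance_features(utterances[0], sr)])     # -> ['neutral']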
50

McGettigan, Carolyn, Jane E. Warren, Frank Eisner, Chloe R. Marshall, Pradheep Shanmugalingam, and Sophie K. Scott. "Neural Correlates of Sublexical Processing in Phonological Working Memory." Journal of Cognitive Neuroscience 23, no. 4 (2011): 961–77. http://dx.doi.org/10.1162/jocn.2010.21491.

Abstract:
This study investigated links between working memory and speech processing systems. We used delayed pseudoword repetition in fMRI to investigate the neural correlates of sublexical structure in phonological working memory (pWM). We orthogonally varied the number of syllables and consonant clusters in auditory pseudowords and measured the neural responses to these manipulations under conditions of covert rehearsal (Experiment 1). A left-dominant network of temporal and motor cortex showed increased activity for longer items, with motor cortex only showing greater activity concomitant with addin