Academic literature on the topic 'Audiometry. Speech perception. Speech processing systems'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Audiometry. Speech perception. Speech processing systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Audiometry. Speech perception. Speech processing systems"

1. Hope, A. J., L. M. Luxon, and D.-E. Bamiou. "Effects of chronic noise exposure on speech-in-noise perception in the presence of normal audiometry." Journal of Laryngology & Otology 127, no. 3 (2013): 233–38. http://dx.doi.org/10.1017/s002221511200299x.

Abstract: Objective: To assess auditory processing in noise-exposed subjects with normal audiograms and compare the findings with those of non-noise-exposed normal controls. Methods: Ten noise-exposed Royal Air Force aircrew pilots were compared with 10 Royal Air Force administrators who had no history of noise exposure. Participants were matched in terms of age and sex. The subjects were assessed in terms of: pure tone audiometry, transient evoked otoacoustic emissions, suppression of transient evoked otoacoustic emissions in contralateral noise, and auditory processing task performance (i.e. masking…)

2. Sams, M. "Audiovisual Speech Perception." Perception 26, no. 1_suppl (1997): 347. http://dx.doi.org/10.1068/v970029.

Abstract: Persons with hearing loss use visual information from articulation to improve their speech perception. Even persons with normal hearing utilise visual information, especially when the stimulus-to-noise ratio is poor. A dramatic demonstration of the role of vision in speech perception is the audiovisual fusion called the ‘McGurk effect’. When the auditory syllable /pa/ is presented in synchrony with the face articulating the syllable /ka/, the subject usually perceives /ta/ or /ka/. The illusory perception is clearly auditory in nature. We recently studied the audiovisual fusion (acoustical /p/…)

3. Mullennix, John W., and David B. Pisoni. "Stimulus variability and processing dependencies in speech perception." Perception & Psychophysics 47, no. 4 (1990): 379–90. http://dx.doi.org/10.3758/bf03210878.

4. Koohi, Nehzat, Gilbert Thomas-Black, Paola Giunti, and Doris-Eva Bamiou. "Auditory Phenotypic Variability in Friedreich’s Ataxia Patients." Cerebellum 20, no. 4 (2021): 497–508. http://dx.doi.org/10.1007/s12311-021-01236-9.

Abstract: Auditory neural impairment is a key clinical feature of Friedreich’s Ataxia (FRDA). We aimed to characterize the phenotypical spectrum of the auditory impairment in FRDA in order to facilitate early identification and timely management of auditory impairment in FRDA patients, and to explore the relationship between the severity of auditory impairment and genetic variables (the expansion size of GAA trinucleotide repeats, GAA1 and GAA2), when controlled for variables such as disease duration, severity of the disease, and cognitive status. Twenty-seven patients with genetically confirmed…

5. Boymans, Monique, and Wouter A. Dreschler. "In situ Hearing Tests for the Purpose of a Self-Fit Hearing Aid." Audiology and Neurotology 22, no. 1 (2017): 15–23. http://dx.doi.org/10.1159/000457829.

Abstract: This study investigated the potential and limitations of a self-fit hearing aid. This can be used in the “developing” world or in countries with large distances between the hearing-impaired subjects and the professional. It contains an on-board tone generator for in situ user-controlled, automated audiometry, and other tests for hearing aid fitting. Twenty subjects with mild hearing losses were involved. In situ audiometry showed a test-retest reliability (SD < 3.7 dB) that compared well with the precision of diagnostic audiometry using headphones. There was good correspondence (SD < 5.2 dB…)

6. Pinard, Minola A. "Native and Cross-Language Speech Sounds: Some Perceptual Processes." Perceptual and Motor Skills 73, no. 1 (1991): 227–34. http://dx.doi.org/10.2466/pms.1991.73.1.227.

Abstract: Using a developmental approach, two aspects of debate in the speech perception literature were tested: (a) the nature of adult speech processing, the dichotomy being along nonlinguistic versus linguistic lines, and (b) the nature of speech processing by children of different ages, the hypotheses here implying detector-like processes in infancy and “adult-like” speech perception reorganizations at age four. Children ranging in age from 4 up to 18 years discriminated native and foreign speech contrasts. Results confirm the hypotheses for adults. It is clear that different processes are operating…

7. Ito, Takayuki, Alexis R. Johns, and David J. Ostry. "Left Lateralized Enhancement of Orofacial Somatosensory Processing Due to Speech Sounds." Journal of Speech, Language, and Hearing Research 56, no. 6 (2013): 1875–81. http://dx.doi.org/10.1044/1092-4388(2013/12-0226).

Abstract: Purpose: Somatosensory information associated with speech articulatory movements affects the perception of speech sounds and vice versa, suggesting an intimate linkage between speech production and perception systems. However, it is unclear which cortical processes are involved in the interaction between speech sounds and orofacial somatosensory inputs. The authors examined whether speech sounds modify orofacial somatosensory cortical potentials that were elicited using facial skin perturbations. Method: Somatosensory event-related potentials in EEG were recorded in 3 background sound conditions…

8. Wang, Hsiao-Lan S., I-Chen Chen, Chun-Han Chiang, Ying-Hui Lai, and Yu Tsao. "Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers." Perceptual and Motor Skills 123, no. 2 (2016): 365–82. http://dx.doi.org/10.1177/0031512516663164.

9. Ujiie, Yuta, and Kohske Takahashi. "Weaker McGurk Effect for Rubin’s Vase-Type Speech in People With High Autistic Traits." Multisensory Research 34, no. 6 (2021): 663–79. http://dx.doi.org/10.1163/22134808-bja10047.

Abstract: While visual information from facial speech modulates auditory speech perception, it is less influential on audiovisual speech perception among autistic individuals than among typically developed individuals. In this study, we investigated the relationship between autistic traits (Autism-Spectrum Quotient; AQ) and the influence of visual speech on the recognition of Rubin’s vase-type speech stimuli with degraded facial speech information. Participants were 31 university students (13 males and 18 females; mean age: 19.2, SD: 1.13 years) who reported normal (or corrected-to-normal) hearing…

10. Tampas, Joanna W., Ashley W. Harkrider, and Mark S. Hedrick. "Neurophysiological Indices of Speech and Nonspeech Stimulus Processing." Journal of Speech, Language, and Hearing Research 48, no. 5 (2005): 1147–64. http://dx.doi.org/10.1044/1092-4388(2005/081).

Abstract: Auditory event-related potentials (mismatch negativity and P300) and behavioral discrimination were measured to synthetically generated consonant-vowel (CV) speech and nonspeech contrasts in 10 young adults with normal auditory systems. Previous research has demonstrated that behavioral and P300 responses reflect a phonetic, categorical level of processing. The aims of the current investigation were (a) to examine whether the mismatch negativity (MMN) response is also influenced by the phonetic characteristics of a stimulus or if it reflects purely an acoustic level of processing, and (b) to ex…

Dissertations / Theses on the topic "Audiometry. Speech perception. Speech processing systems"

1. Ng, H. N. Elaine. "Effects of noise type on speech understanding." E-thesis, The University of Hong Kong, 2006. http://sunzi.lib.hku.hk/hkuto/record/B37990159.

2. Ng, H. N. Elaine (吳凱寧). "Effects of noise type on speech understanding." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B37990159.

3. Yip, Ki-chun Charis (葉琪蓁). "Effects of noise on speech understanding in individuals with moderate to severe hearing loss." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B44489948.

4. Cummings, Kathleen E. "Analysis, synthesis, and recognition of stressed speech." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/15673.

5. Chen, Xin. "Ensemble methods in large vocabulary continuous speech recognition." Diss., Columbia, Mo.: University of Missouri-Columbia, 2008. http://hdl.handle.net/10355/5797.

Abstract: Thesis (M.S.)--University of Missouri-Columbia, 2008. The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on August 28, 2008). Vita. Includes bibliographical references.

6. Sanders, Lisa Diane. "Speech segmentation by native and non-native speakers: Behavioral and event-related potential evidence." Ph.D. diss., University of Oregon, 2001. http://wwwlib.umi.com/cr/uoregon/fullcit?p3018392.

Abstract: Thesis (Ph.D.)--University of Oregon, 2001. Typescript. Includes vita and abstract. Includes bibliographical references (leaves 215–239). Also available for download via the World Wide Web; free to University of Oregon users.

7. Menozi, Lucia, David Ryan, Kim S. Schairer, Sherri L. Smith, and Marcy K. Lau. "Objective Measurement of Cognitive Systems During Effortful Listening." Digital Commons @ East Tennessee State University, 2019. https://dc.etsu.edu/asrf/2019/schedule/237.

Abstract: Introduction: Adults with hearing loss who report difficulty understanding speech with and without hearing aids often also report increased mental or listening effort. Although speech recognition measures are well known and have been in use for decades, measures of listening effort are relatively new and include objective measures such as working memory tasks, pupillometry, heart rate, skin conductance, and brain imaging. Objectives: The purpose of this study is to evaluate an electroencephalogram (EEG)-based method to assess cognitive states during a speech-in-noise perception task. Methods: …

8. Cho, Jaeyoun. "Speech enhancement using microphone array." Columbus, Ohio: Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1132239060.

9. Rohani Mehdiabadi, Behrooz. "Power control for mobile radio systems using perceptual speech quality metrics." University of Western Australia, School of Electrical, Electronic and Computer Engineering, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0174.

Abstract: As the characteristics of mobile radio channels vary over time, transmit power must be controlled accordingly to ensure that the received signal level is within the receiver's sensitivity. As a consequence, modern mobile radio systems employ power control to regulate the received signal level such that it is neither less than nor excessively larger than receiver sensitivity, in order to maintain adequate service quality. In this context, speech quality measurement is an important aspect in the delivery of speech services, as it will impact satisfaction of customers as well as the usage of precious sy…

10. Du Toit, Ilze. "Non-acoustic speaker recognition." Thesis, Stellenbosch: University of Stellenbosch, 2004. http://hdl.handle.net/10019.1/16315.

Abstract: Thesis (MScIng)--University of Stellenbosch, 2004. English abstract: In this study the phoneme labels derived from a phoneme recogniser are used for phonetic speaker recognition. The time-dependencies among phonemes are modelled by using hidden Markov models (HMMs) for the speaker models. Experiments are done using first-order and second-order HMMs, and various smoothing techniques are examined to address the problem of data scarcity. The use of word labels for lexical speaker recognition is also investigated. Single word frequencies are counted, and the use of various word selections as…

Books on the topic "Audiometry. Speech perception. Speech processing systems"

1. Ainsworth, W. A., Steven Greenberg, and Richard R. Fay. Speech processing in the auditory system. Springer, 2011.

2. Audiovisual speech processing. Cambridge University Press, 2012.

3. Morgan, Nelson, ed. Speech and audio signal processing: Processing and perception of speech and music. John Wiley, 2000.

4. Gold, Bernard. Speech and audio signal processing: Processing and perception of speech and music. 2nd ed. Wiley, 2011.

5. Saitō, Shūzō. Fundamentals of speech signal processing. Academic Press, 1985.

6. Nakata, Kazuo, ed. Fundamentals of speech signal processing. Academic Press, 1985.

7. Kummert, Franz. Flexible Steuerung eines sprachverstehenden Systems mit homogener Wissensbasis [Flexible control of a speech-understanding system with a homogeneous knowledge base]. Infix, 1992.

8. United States. Social Security Administration. Technology Assessment and Forecasting Group. ADP voice technology: Speech recognition and speech synthesis. U.S. Social Security Administration, 1985.

9. Poock, G. K. An examination of some error correcting techniques for continuous speech recognition technology. Naval Postgraduate School, 1985.


Book chapters on the topic "Audiometry. Speech perception. Speech processing systems"

1. Oviatt, Sharon. "Advances in the Robust Processing of Multimodal Speech and Pen Systems." In Series in Machine Perception and Artificial Intelligence. World Scientific, 2002. http://dx.doi.org/10.1142/9789812778543_0008.

2. Singh, Preety, Vijay Laxmi, and M. S. Gaur. "Speechreading using Modified Visual Feature Vectors." In Emerging Applications of Natural Language Processing. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2169-5.ch012.

Abstract: Audio-Visual Speech Recognition (AVSR) is an emerging technology that helps improve machine perception of speech by taking into account the bimodality of human speech. It is inspired by the fact that human beings subconsciously use visual cues to interpret speech. This chapter surveys the techniques for audio-visual speech recognition. Through this survey, the authors discuss the steps involved in a robust mechanism for perception of speech for human-computer interaction. The main emphasis is on visual speech recognition, taking only the visual cues into account. Previous research has shown that visual-only speech recognition systems pose many challenges. The authors present a speech recognition system where only the visual modality is used for recognition of the spoken word. Significant features are extracted from lip images. These features are used to build n-gram feature vectors. Classification of speech using these modified feature vectors results in improved recognition accuracy of the spoken word.

3. Kaur, Gagandeep. "Introduction to Human Electroencephalography." In Advances in Systems Analysis, Software Engineering, and High Performance Computing. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7879-6.ch013.

Abstract: This chapter is a general introduction to electroencephalography and popular methods used to manipulate EEG in order to elicit markers of sensory and cognitive perception and behavior. With the development of interdisciplinary research, there is increased curiosity among engineers towards biomedical research. Those using signal processing techniques attempt to apply algorithms to real-life signals and retrieve characteristics of signals such as speech, echo, and EEG, among others. The chapter briefs the history of human EEG and goes back to the origins and fundamentals of electrical activity in the brain, how this activity reaches the scalp, and methods to capture this high-temporal-resolution activity. It then takes the reader through the design methodology behind EEG experiments and a general schema for analysis of the EEG signal. It describes the concept of early evoked potentials, which are known responses used in the study of sensory perception and are used extensively in medical science. It moves on to another popular EEG manipulation technique used to elicit event-related potentials.

4. Bourguet, Marie-Luce. "An Overview of Multimodal Interaction Techniques and Applications." In Encyclopedia of Human Computer Interaction. IGI Global, 2006. http://dx.doi.org/10.4018/978-1-59140-562-7.ch068.

Abstract: Desktop multimedia (multimedia personal computers) dates from the early 1970s. At that time, the enabling force behind multimedia was the emergence of the new digital technologies in the form of digital text, sound, animation, photography, and, more recently, video. Nowadays, multimedia systems mostly are concerned with the compression and transmission of data over networks, large capacity and miniaturized storage devices, and quality of services; however, what fundamentally characterizes a multimedia application is that it does not understand the data (sound, graphics, video, etc.) that it manipulates. In contrast, intelligent multimedia systems at the crossing of the artificial intelligence and multimedia disciplines gradually have gained the ability to understand, interpret, and generate data with respect to content. Multimodal interfaces are a class of intelligent multimedia systems that make use of multiple and natural means of communication (modalities), such as speech, handwriting, gestures, and gaze, to support human-machine interaction. More specifically, the term modality describes human perception on one of the three following perception channels: visual, auditive, and tactile. Multimodality qualifies interactions that comprise more than one modality on either the input (from the human to the machine) or the output (from the machine to the human) and the use of more than one device on either side (e.g., microphone, camera, display, keyboard, mouse, pen, track ball, data glove). Some of the technologies used for implementing multimodal interaction come from speech processing and computer vision; for example, speech recognition, gaze tracking, recognition of facial expressions and gestures, perception of sounds for localization purposes, lip movement analysis (to improve speech recognition), and integration of speech and gesture information. In 1980, the put-that-there system (Bolt, 1980) was developed at the Massachusetts Institute of Technology and was one of the first multimodal systems. In this system, users simultaneously could speak and point at a large-screen graphics display surface in order to manipulate simple shapes. In the 1990s, multimodal interfaces started to depart from the rather simple speech-and-point paradigm to integrate more powerful modalities such as pen gestures and handwriting input (Vo, 1996) or haptic output. Currently, multimodal interfaces have started to understand 3D hand gestures, body postures, and facial expressions (Ko, 2003), thanks to recent progress in computer vision techniques.

5. Bourguet, Marie-Luce. "An Overview of Multimodal Interaction Techniques and Applications." In Human Computer Interaction. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-87828-991-9.ch008.

6. Jain, Saransh, and Suma Raju. "Subjective Fatigue in Children and Adults." In Advances in Psychology, Mental Health, and Behavioral Studies. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-4955-0.ch010.

Abstract: Fatigue is a common yet poorly understood topic. The psychological, physiological, social, emotional, and cognitive wellbeing of a person may be affected due to fatigue. Despite a century of research in understanding the effect of fatigue on human systems, there is no concrete explanation as to how fatigue affects the perception of speech. Fatigue impairs auditory cognition, and the reduced cognitive abilities further increase mental and physical fatigue. Since cognition is markedly affected in individuals experiencing mental fatigue, its consequences are widespread. According to the top-down approach of auditory processing, there is a direct link between cognition and speech perception. Thus, in the present chapter, the influence of fatigue on perception is reviewed. It is noted that the impact of fatigue on cognition and quality of life is different for children and adults. Training in music, meditation, and exposure to more than one language are some of the measures that help to reduce the effect of fatigue and improve cognitive abilities in both children and adults.

Conference papers on the topic "Audiometry. Speech perception. Speech processing systems"

1. "Session MA7b: Biological models of speech perception and their applications in automatic speech processing." In 2010 44th Asilomar Conference on Signals, Systems and Computers. IEEE, 2010. http://dx.doi.org/10.1109/acssc.2010.5757475.

2. Han, Wenjing, Haifeng Li, and Chunyu Guo. "A Hybrid Speech Emotion Perception Method of VQ-based Feature Processing and ANN Recognition." In 2009 WRI Global Congress on Intelligent Systems. IEEE, 2009. http://dx.doi.org/10.1109/gcis.2009.432.

3. Labelle, Felix, Roch Lefebvre, and Philippe Gournay. "A subjective evaluation of the effects of speech coding on the perception of emotions." In 2016 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS). IEEE, 2016. http://dx.doi.org/10.1109/ispacs.2016.7824685.

4. Kong, Xiang, Jeung-Yoon Choi, and Stefanie Shattuck-Hufnagel. "Evaluating automatic speech recognition systems in comparison with human perception results using distinctive feature measures." In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017. http://dx.doi.org/10.1109/icassp.2017.7953270.

5. Rahmawati, Sabrina, and Michitaka Ohgishi. "Cross-cultural studies on audiovisual speech processing: The McGurk effects observed in consonant and vowel perception." In 2011 6th International Conference on Telecommunication Systems, Services, and Applications (TSSA). IEEE, 2011. http://dx.doi.org/10.1109/tssa.2011.6095406.

6. Zhigoreva, Marina V., Svetlana A. Kuzminova, and Larisa A. Panteleeva. "Development of communication and speech skills of children with disabilities based on the polymodal audiovisual method." In Особый ребенок: Обучение, воспитание, развитие [The special child: Education, upbringing, development]. Yaroslavl State Pedagogical University named after K. D. Ushinsky, 2021. http://dx.doi.org/10.20323/978-5-00089-474-3-2021-157-164.

Abstract: The paper considers the polymodal characteristics of the audiovisual method and reveals the mechanism of its application in correctional work on the development of communication and speech skills of children with disabilities, in which various analyzer systems are activated and a compensatory effect is exerted on lost or disturbed channels of perception and information processing.