To see the other types of publications on this topic, follow the link: Speech Comprehension.

Journal articles on the topic 'Speech Comprehension'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Speech Comprehension.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Marslen-Wilson, William D. "Speech shadowing and speech comprehension." Speech Communication 4, no. 1-3 (August 1985): 55–73. http://dx.doi.org/10.1016/0167-6393(85)90036-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Reynolds, Mary E., and Donald Fucci. "Synthetic Speech Comprehension." Journal of Speech, Language, and Hearing Research 41, no. 2 (April 1998): 458–66. http://dx.doi.org/10.1044/jslhr.4102.458.

Full text
Abstract:
This study compared the ability of children with normal language (NL) and children with specific language impairment (SLI) to comprehend natural speech and DECtalk synthetic speech by using a sentence verification task. The effect of listening practice on subjects' ability to comprehend both types of speech also was investigated. Subjects were matched for age and sex. Mean nonverbal intelligence scores of the groups did not differ significantly. Results showed that DECtalk was significantly more difficult for all subjects to comprehend than was natural speech, and false sentences were significantly more difficult to comprehend than were true sentences. Response latencies shortened significantly from time 1 to time 2 for all subjects. Subjects with SLI had significantly more difficulty comprehending both natural and synthetic speech than did subjects with NL. Implications these results might have for theories of the underlying cause of specific language impairment are discussed.
APA, Harvard, Vancouver, ISO, and other styles
3

Sariroh, Chilyatus. "A COMPREHENSIVE ABOUT THE PART OF SPEECH USING MIND MAPPING AT STUDENTS OF 2017A STKIP PGRI JOMBANG." JURNAL EDUKASI: KAJIAN ILMU PENDIDIKAN 5, no. 1 (July 4, 2020): 87–94. http://dx.doi.org/10.51836/je.v5i1.117.

Full text
Abstract:
We present an implementation of learning the parts of speech through direct explanation using mind mapping. This research aims to share knowledge about learning the parts of speech using mind mapping and to assess students' comprehension of the topic, so that students can understand the function of each part of speech through this simple learning technique. The study was carried out among 20 students of STKIP PGRI Jombang, specifically in the English Department, and applied a descriptive qualitative research method. The results show that 25% of students have very good comprehension, 50% have good comprehension, 20% have adequate comprehension, and the remaining 5% have low comprehension of the parts of speech (n=20). The parts of speech are a grammatical foundation essential for understanding all subsequent grammar; therefore, the explanation needs to be shared clearly and simply with students who do not yet understand it. In conclusion, the students' grammatical comprehension of the parts of speech using mind mapping was categorized as “good comprehensive.” Based on these results, we put forward a number of recommendations and suggestions.
APA, Harvard, Vancouver, ISO, and other styles
4

Meyer, Antje S., and Willem J. M. Levelt. "Merging speech perception and production." Behavioral and Brain Sciences 23, no. 3 (June 2000): 339–40. http://dx.doi.org/10.1017/s0140525x00373241.

Full text
Abstract:
A comparison of Merge, a model of comprehension, and WEAVER, a model of production, raises five issues: (1) merging models of comprehension and production necessarily creates feedback; (2) neither model is a comprehensive account of word processing; (3) the models are incomplete in different ways; (4) the models differ in their handling of competition; (5) as opposed to WEAVER, Merge is a model of metalinguistic behavior.
APA, Harvard, Vancouver, ISO, and other styles
5

Yamadori, Atsushi. "Categorical aspects in speech comprehension." Higher Brain Function Research 17, no. 1 (1997): 15–24. http://dx.doi.org/10.2496/apr.17.15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Monahan, Philip J. "Phonological Knowledge and Speech Comprehension." Annual Review of Linguistics 4, no. 1 (January 14, 2018): 21–47. http://dx.doi.org/10.1146/annurev-linguistics-011817-045537.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Specht, Karsten. "Neuronal basis of speech comprehension." Hearing Research 307 (January 2014): 121–35. http://dx.doi.org/10.1016/j.heares.2013.09.011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Boulenger, Véronique, Michel Hoen, Emmanuel Ferragne, François Pellegrino, and Fanny Meunier. "Real-time lexical competitions during speech-in-speech comprehension." Speech Communication 52, no. 3 (March 2010): 246–53. http://dx.doi.org/10.1016/j.specom.2009.11.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Freese, Anne Reilley. "Subvocal Speech, Reading Rate, and Comprehension." Perceptual and Motor Skills 82, no. 3_suppl (June 1996): 1343–68. http://dx.doi.org/10.2466/pms.1996.82.3c.1343.

Full text
Abstract:
The relationship of subvocal speech and reading rate to comprehension was investigated in 25 children, ranging from 8 to 15 years of age, by means of electromyographic (EMG) recordings taken while the subjects silently read two meaningful passages. The first was orthographically regular, and the second was composed of approximately sixty percent homophones. Labial muscle action recordings, latencies, and comprehension measures were obtained. Variables derived from these measures were used to predict reading age. Profiles derived from the EMGs provided information about how each reader processed the information from the reading passages. The empirical results of the study provide strong support for the valuable role of subvocal speech in the extraction of information and the importance of readers' ability to use the reading process flexibly when reading for meaning.
APA, Harvard, Vancouver, ISO, and other styles
10

Shibata, Midori, Hiroaki Itoh, Koji Shimada, and Jun-ichi Abe. "Neuroanatomical bases of indirect speech comprehension." Neuroscience Research 68 (January 2010): e409. http://dx.doi.org/10.1016/j.neures.2010.07.1813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Crinion, Jennifer, Matt Lambon-Ralph, David Howard, Elizabeth Warburton, and Richard Wise. "Cortical regions involved in speech comprehension." NeuroImage 13, no. 6 (June 2001): 519. http://dx.doi.org/10.1016/s1053-8119(01)91862-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Beaucousin, V., A. Lacheret, M. R. Turbelin, M. Morel, B. Mazoyer, and N. Tzourio-Mazoyer. "FMRI Study of Emotional Speech Comprehension." Cerebral Cortex 17, no. 2 (February 22, 2006): 339–52. http://dx.doi.org/10.1093/cercor/bhj151.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Carraturo, Sita, and Kristin J. Van Engen. "Bilinguals' comprehension of foreign-accented speech." Journal of the Acoustical Society of America 148, no. 4 (October 2020): 2654. http://dx.doi.org/10.1121/1.5147385.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Bozic, M., L. K. Tyler, D. T. Ives, B. Randall, and W. D. Marslen-Wilson. "Bihemispheric foundations for human speech comprehension." Proceedings of the National Academy of Sciences 107, no. 40 (September 20, 2010): 17439–44. http://dx.doi.org/10.1073/pnas.1000531107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Ralston, James V., John W. Mullennix, Scott E. Lively, Beth G. Greene, and David B. Pisoni. "Comprehension of natural and synthetic speech." Journal of the Acoustical Society of America 86, S1 (November 1989): S101. http://dx.doi.org/10.1121/1.2027259.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Hustad, Katherine C., and David R. Beukelman. "Listener Comprehension of Severely Dysarthric Speech." Journal of Speech, Language, and Hearing Research 45, no. 3 (June 2002): 545–58. http://dx.doi.org/10.1044/1092-4388(2002/043).

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

FitzPatrick, Ian, and Peter Indefrey. "Lexical Competition in Nonnative Speech Comprehension." Journal of Cognitive Neuroscience 22, no. 6 (June 2010): 1165–78. http://dx.doi.org/10.1162/jocn.2009.21301.

Full text
Abstract:
Electrophysiological studies consistently find N400 effects of semantic incongruity in nonnative (L2) language comprehension. These N400 effects are often delayed compared with native (L1) comprehension, suggesting that semantic integration in one's second language occurs later than in one's first language. In this study, we investigated whether such a delay could be attributed to (1) intralingual lexical competition and/or (2) interlingual lexical competition. We recorded EEG from Dutch–English bilinguals who listened to English (L2) sentences in which the sentence-final word was (a) semantically fitting and (b) semantically incongruent or semantically incongruent but initially congruent due to sharing initial phonemes with (c) the most probable sentence completion within the L2 or (d) the L1 translation equivalent of the most probable sentence completion. We found an N400 effect in each of the semantically incongruent conditions. This N400 effect was significantly delayed to L2 words but not to L1 translation equivalents that were initially congruent with the sentence context. Taken together, these findings firstly demonstrate that semantic integration in nonnative listening can start based on word initial phonemes (i.e., before a single lexical candidate could have been selected based on the input) and secondly suggest that spuriously elicited L1 lexical candidates are not available for semantic integration in L2 speech comprehension.
APA, Harvard, Vancouver, ISO, and other styles
18

Link, Kristen E., and Roger J. Kreuz. "The Comprehension of Ostensible Speech Acts." Journal of Language and Social Psychology 24, no. 3 (September 2005): 227–51. http://dx.doi.org/10.1177/0261927x05278384.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Lundberg, Ingvar, and Åke Olofsson. "Can computer speech support reading comprehension?" Computers in Human Behavior 9, no. 2-3 (June 1993): 283–93. http://dx.doi.org/10.1016/0747-5632(93)90012-h.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Huntress, Linda M., Linda Lee, Nancy A. Creaghead, Daniel D. Wheeler, and Kathleen M. Braverman. "Aphasic Subjects' Comprehension of Synthetic and Natural Speech." Journal of Speech and Hearing Disorders 55, no. 1 (February 1990): 21–27. http://dx.doi.org/10.1044/jshd.5501.21.

Full text
Abstract:
This study investigated the ability of aphasic patients with mild auditory comprehension problems to respond to synthetic speech produced by an inexpensive speech synthesizer attached to a personal computer. Subjects were given four practice sessions with synthetic speech; testing of synthetic speech comprehension was performed during Sessions 1 and 4. During testing, aphasic subjects' comprehension of synthetic speech was compared with their comprehension of natural speech on four tasks: (a) picture identification, (b) following commands, (c) yes/no questions, and (d) paragraph comprehension with yes/no questions. Aphasic subjects comprehended natural speech better than synthetic speech in Session 1 but not in Session 4. Their synthetic speech scores improved between Sessions 1 and 4. There was also a significant difference among scores on the four tasks for both sessions. The means for picture identification were highest, followed by yes/no questions, commands, and finally paragraph comprehension for both sessions. Although performance by some subjects on some tasks was accurate enough to indicate that an inexpensive speech synthesizer could be a useful tool for working with mild aphasic patients, considerable caution in selecting both tasks and patients is warranted.
APA, Harvard, Vancouver, ISO, and other styles
21

Wilsch, Anna, Toralf Neuling, Jonas Obleser, and Christoph S. Herrmann. "Transcranial alternating current stimulation with speech envelopes modulates speech comprehension." NeuroImage 172 (May 2018): 766–74. http://dx.doi.org/10.1016/j.neuroimage.2018.01.038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Fontan, Lionel, Julien Tardieu, Pascal Gaillard, Virginie Woisard, and Robert Ruiz. "Relationship Between Speech Intelligibility and Speech Comprehension in Babble Noise." Journal of Speech, Language, and Hearing Research 58, no. 3 (June 2015): 977–86. http://dx.doi.org/10.1044/2015_jslhr-h-13-0335.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Marchegiani, Letizia, Xenofon Fafoutis, and Sahar Abbaspour. "Speech Identification and Comprehension in the Urban Soundscape." Environments 5, no. 5 (May 7, 2018): 56. http://dx.doi.org/10.3390/environments5050056.

Full text
Abstract:
Urban environments are characterised by the presence of copious and unstructured noise. This noise continuously challenges speech intelligibility both in normal-hearing and hearing-impaired individuals. In this paper, we investigate the impact of urban noise, such as traffic, on speech identification and, more generally, speech understanding. With this purpose, we perform listening experiments to evaluate the ability of individuals with normal hearing to detect words and interpret conversational speech in the presence of urban noise (e.g., street drilling, traffic jams). Our experiments confirm previous findings in different acoustic environments and demonstrate that speech identification is influenced by the similarity between the target speech and the masking noise in urban scenarios as well. More specifically, we propose the use of the structural similarity index to quantify this similarity. Our analysis confirms that speech identification is more successful in the presence of noise with tempo-spectral characteristics different from speech. Moreover, our results show that speech comprehension is not as challenging as word identification in urban sound environments that are characterised by the presence of severe noise. Indeed, our experiments demonstrate that speech comprehension can be fairly successful even in acoustic scenes where the ability to identify speech is highly reduced.
APA, Harvard, Vancouver, ISO, and other styles
24

Massey, Holly J. "Language-Impaired Children's Comprehension of Synthesized Speech." Language, Speech, and Hearing Services in Schools 19, no. 4 (October 1988): 401–9. http://dx.doi.org/10.1044/0161-1461.1904.401.

Full text
Abstract:
The Token Test for Children was given in a synthesized-speech version and a natural-speech version to 11 language-impaired children aged 8 years, 9 months to 10 years, 1 month and to 11 control subjects matched for age and sex. The scores of the language-impaired children on the synthesized version were significantly lower than (a) the synthesized-speech scores of the control group and (b) their own scores on the natural-speech version. Task complexity was a significant factor for the experimental group. Language-impaired children may have difficulty understanding some synthesized voice commands.
APA, Harvard, Vancouver, ISO, and other styles
25

Huyck, Julia Jones. "Comprehension of Degraded Speech Matures During Adolescence." Journal of Speech, Language, and Hearing Research 61, no. 4 (April 17, 2018): 1012–22. http://dx.doi.org/10.1044/2018_jslhr-h-17-0252.

Full text
Abstract:
Purpose: The aim of the study was to compare comprehension of spectrally degraded (noise-vocoded [NV]) speech and perceptual learning of NV speech between adolescents and young adults and examine the role of phonological processing and executive functions in this perception. Method: Sixteen younger adolescents (11–13 years), 16 older adolescents (14–16 years), and 16 young adults (18–22 years) listened to 40 NV sentences and repeated back what they heard. They also completed tests assessing phonological processing and a variety of executive functions. Results: Word-report scores were generally poorer for younger adolescents than for the older age groups. Phonological processing also predicted initial word-report scores. Learning (i.e., improvement across training times) did not differ with age. Starting performance and processing speed predicted learning, with greater learning for those who started with the lowest scores and those with faster processing speed. Conclusions: Degraded (NV) speech comprehension is not mature even by early adolescence; however, like adults, adolescents are able to improve their comprehension of degraded speech with training. Thus, although adolescents may have initial difficulty in understanding degraded speech or speech as presented through hearing aids or cochlear implants, they are able to improve their perception with experience. Processing speed and phonological processing may play a role in degraded speech comprehension in these age groups.
APA, Harvard, Vancouver, ISO, and other styles
26

Schneider, Bruce A., Liang Li, and Meredyth Daneman. "How Competing Speech Interferes with Speech Comprehension in Everyday Listening Situations." Journal of the American Academy of Audiology 18, no. 07 (July 2007): 559–72. http://dx.doi.org/10.3766/jaaa.18.7.4.

Full text
Abstract:
Listeners often complain that they have trouble following a conversation when the environment is noisy. The environment could be noisy because of the presence of other unrelated but meaningful conversations, or because of the presence of less meaningful sound sources such as ventilation noise. Both kinds of distracting sound sources produce interference at the auditory periphery (activate similar regions along the basilar membrane), and this kind of interference is called "energetic masking." However, in addition to energetic masking, meaningful sound sources, such as competing speech, can and do interfere with the processing of the target speech at more central levels (phonetic and/or semantic), and this kind of interference is often called informational masking. In this article we review what is known about informational masking of speech by competing speech, and the auditory and cognitive factors that determine its severity.
APA, Harvard, Vancouver, ISO, and other styles
27

Hoen, Michel, Claire Grataloup, François Pellegrino, Lionel Collet, and Fanny Meunier. "Characterizing lexical interferences in informational masking during speech‐in‐speech comprehension." Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3719. http://dx.doi.org/10.1121/1.2935175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Dahan, Delphine. "The Time Course of Interpretation in Speech Comprehension." Current Directions in Psychological Science 19, no. 2 (April 2010): 121–26. http://dx.doi.org/10.1177/0963721410364726.

Full text
Abstract:
Determining how language comprehension proceeds over time has been central to theories of human language use. Early research on the comprehension of speech in real time put special emphasis on the sequential property of speech, by assuming that the interpretation of what is said proceeds at the same rate that information in the speech signal reaches the senses. The picture that is emerging from recent work suggests a more complex process, one in which information from speech has an immediate influence while enabling later-arriving information to modulate initial hypotheses. “Right-context” effects, in which the later portion of a spoken stimulus can affect the interpretation of an earlier portion, are pervasive and can span several syllables or words. Thus, the interpretation of a segment of speech appears to result from the accumulation of information and integration of linguistic constraints over a larger temporal window than the duration of the speech segment itself. This helps explain how human listeners can understand language so efficiently, despite massive perceptual uncertainty in the speech signal.
APA, Harvard, Vancouver, ISO, and other styles
29

Fox Tree, Jean E. "Listeners' uses of um and uh in speech comprehension." Memory & Cognition 29, no. 2 (March 2001): 320–26. http://dx.doi.org/10.3758/bf03194926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Pulvermüller, Friedemann, Yury Shtyrov, Risto J. Ilmoniemi, and William D. Marslen-Wilson. "Tracking speech comprehension in space and time." NeuroImage 31, no. 3 (July 2006): 1297–305. http://dx.doi.org/10.1016/j.neuroimage.2006.01.030.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Quené, Hugo, Gün R. Semin, and Francesco Foroni. "Audible smiles and frowns affect speech comprehension." Speech Communication 54, no. 7 (September 2012): 917–22. http://dx.doi.org/10.1016/j.specom.2012.03.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Ye, Zheng, Arjen Stolk, Ivan Toni, and Peter Hagoort. "Oxytocin Modulates Semantic Integration in Speech Comprehension." Journal of Cognitive Neuroscience 29, no. 2 (February 2017): 267–76. http://dx.doi.org/10.1162/jocn_a_01044.

Full text
Abstract:
Listeners interpret utterances by integrating information from multiple sources including word level semantics and world knowledge. When the semantics of an expression is inconsistent with their knowledge about the world, the listener may have to search through the conceptual space for alternative possible world scenarios that can make the expression more acceptable. Such cognitive exploration requires considerable computational resources and might depend on motivational factors. This study explores whether and how oxytocin, a neuropeptide known to influence social motivation by reducing social anxiety and enhancing affiliative tendencies, can modulate the integration of world knowledge and sentence meanings. The study used a between-participant double-blind randomized placebo-controlled design. Semantic integration, indexed with magnetoencephalography through the N400m marker, was quantified while 45 healthy male participants listened to sentences that were either congruent or incongruent with facts of the world, after receiving intranasally delivered oxytocin or placebo. Compared with congruent sentences, world knowledge incongruent sentences elicited a stronger N400m signal from the left inferior frontal and anterior temporal regions and medial pFC (the N400m effect) in the placebo group. Oxytocin administration significantly attenuated the N400m effect at both sensor and cortical source levels throughout the experiment, in a state-like manner. Additional electrophysiological markers suggest that the absence of the N400m effect in the oxytocin group is unlikely due to the lack of early sensory or semantic processing or a general downregulation of attention. These findings suggest that oxytocin drives listeners to resolve challenges of semantic integration, possibly by promoting the cognitive exploration of alternative possible world scenarios.
APA, Harvard, Vancouver, ISO, and other styles
33

Tomlinson, John M., and Jean E. Fox Tree. "Listeners’ comprehension of uptalk in spontaneous speech." Cognition 119, no. 1 (April 2011): 58–69. http://dx.doi.org/10.1016/j.cognition.2010.12.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Holtgraves, Thomas. "Second Language Learners and Speech Act Comprehension." Language Learning 57, no. 4 (October 18, 2007): 595–610. http://dx.doi.org/10.1111/j.1467-9922.2007.00429.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Mikk, Jaan. "Parts of speech in predicting reading comprehension." Journal of Quantitative Linguistics 4, no. 1-3 (December 1997): 156–63. http://dx.doi.org/10.1080/09296179708590091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Norris, Dennis, Anne Cutler, James M. McQueen, and Sally Butterfield. "Phonological and conceptual activation in speech comprehension." Cognitive Psychology 53, no. 2 (September 2006): 146–93. http://dx.doi.org/10.1016/j.cogpsych.2006.03.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Iimura, Daichi, Shintaro Uehara, Shinji Yamamoto, Tsuyoshi Aihara, and Keisuke Kushiro. "Does Excessive Attention to Speech Contribute to Stuttering? A Preliminary Study With a Reading Comprehension Task." Perspectives of the ASHA Special Interest Groups 1, no. 4 (March 31, 2016): 5–15. http://dx.doi.org/10.1044/persp1.sig4.5.

Full text
Abstract:
People who stutter (PWS) presumably pay excessive attention to monitoring their speech, possibly compromising speech fluency. Using a reading comprehension task, we investigated whether or not PWS devote excessive attention to their speech. Methods: Eleven PWS and 11 people who do not stutter (PNS) read passages in silent and oral reading conditions with and without noise masking, then answered comprehension questions. For PWS, auditory noise masking and silent reading would presumably divert their attention away from their speech. Results: The comprehension performance of PWS was lower in the oral-no-masking condition than in the oral-masking and silent-no-masking conditions. In contrast, there were no significant differences in the comprehension performance of PNS across the four conditions. Conclusions: PWS had poor comprehension when listening to their own speech, suggesting excessive attention to speech and limited attention to concurrent cognitive tasks.
APA, Harvard, Vancouver, ISO, and other styles
38

Driskell, James E., and Paul H. Radtke. "The Effect of Gesture on Speech Production and Comprehension." Human Factors: The Journal of the Human Factors and Ergonomics Society 45, no. 3 (September 2003): 445–54. http://dx.doi.org/10.1518/hfes.45.3.445.27258.

Full text
Abstract:
Hand gestures are ubiquitous in communication. However, there is considerable debate regarding the fundamental role that gesture plays in communication and, subsequently, regarding the value of gesture for telecommunications. Controversy exists regarding whether gesture has a primarily communicative function (enhancing listener comprehension) or a primarily noncommunicative function (enhancing speech production). Moreover, some have argued that gesture seems to enhance listener comprehension only because of the effect gesture has on speech production. The purpose of this study was to examine the extent to which gesture enhances listener comprehension and the extent to which the effect of gesture on listener comprehension is mediated by the effects of gesture on speech production. Results indicated that gesture enhanced both listener comprehension and speech production. When the effects of gesture on speech production were controlled, the relationship between gesture and listener comprehension was reduced but still remained significant. These results suggest that gesture aids the listener as well as the speaker and that gesture has a direct effect on listener comprehension, independent of the effects gesture has on speech production. Implications for understanding the value of gestural information in telecommunications are discussed. Potential applications of this research include the design of computer-mediated communication systems and displays in which the visibility of gestures may be beneficial.
APA, Harvard, Vancouver, ISO, and other styles
39

Weissbart, Hugo, Katerina D. Kandylaki, and Tobias Reichenbach. "Cortical Tracking of Surprisal during Continuous Speech Comprehension." Journal of Cognitive Neuroscience 32, no. 1 (January 2020): 155–66. http://dx.doi.org/10.1162/jocn_a_01467.

Full text
Abstract:
Speech comprehension requires rapid online processing of a continuous acoustic signal to extract structure and meaning. Previous studies on sentence comprehension have found neural correlates of the predictability of a word given its context, as well as of the precision of such a prediction. However, they have focused on single sentences and on particular words in those sentences. Moreover, they compared neural responses to words with low and high predictability, as well as with low and high precision. However, in speech comprehension, a listener hears many successive words whose predictability and precision vary over a large range. Here, we show that cortical activity in different frequency bands tracks word surprisal in continuous natural speech and that this tracking is modulated by precision. We obtain these results through quantifying surprisal and precision from naturalistic speech using a deep neural network and through relating these speech features to EEG responses of human volunteers acquired during auditory story comprehension. We find significant cortical tracking of surprisal at low frequencies, including the delta band as well as in the higher frequency beta and gamma bands, and observe that the tracking is modulated by the precision. Our results pave the way to further investigate the neurobiology of natural speech comprehension.
APA, Harvard, Vancouver, ISO, and other styles
40

BILIANSKA, Iryna. "DEVELOPING SPEECH PERCEPTION SKILLS FOR BETTER LISTENING COMPREHENSION." Освітні обрії 48, no. 1 (March 18, 2019): 20–23. http://dx.doi.org/10.15330/obrii.48.1.20-23.

Full text
Abstract:
This article provides arguments for incorporating bottom-up practice activities into listening instruction in the university EFL classroom. It begins with a brief overview of current research into listening, followed by a review of the main processing components involved in speech comprehension. The paper also discusses Ukrainian pre-service teachers' major listening difficulties and the reasons that prevent them from developing listening fluency in English. The role of the speech processor in teaching listening comprehension is determined, and the psycholinguistic peculiarities of its development are discussed. It is argued that a well-developed L2 speech processor ensures adequate perception of foreign speech and auditory self-control of one's own articulation. Activities for enriching Ukrainian university students' L2 perceptual experience (transcribing, noticing exercises) and training their L2 pronunciation (reading aloud, shadow reading) are suggested. Finally, it is concluded that improvement in speech perception makes the listening process increasingly automatic, which in turn makes the development of top-down listening skills more effective.
APA, Harvard, Vancouver, ISO, and other styles
41

Nagaraj, Naveen K. "Working Memory and Speech Comprehension in Older Adults With Hearing Impairment." Journal of Speech, Language, and Hearing Research 60, no. 10 (October 17, 2017): 2949–64. http://dx.doi.org/10.1044/2017_jslhr-h-17-0022.

Full text
Abstract:
Purpose This study examined the relationship between working memory (WM) and speech comprehension in older adults with hearing impairment (HI). It was hypothesized that WM would explain significant variance in speech comprehension measured in multitalker babble (MTB). Method Twenty-four older (59–73 years) adults with sensorineural HI participated. WM capacity (WMC) was measured using 3 complex span tasks. Speech comprehension was assessed using multiple passages, and speech identification ability was measured using recall of sentence-final words and key words. Speech measures were performed in quiet and in the presence of MTB at a +5 dB signal-to-noise ratio. Results Results suggested that participants' speech identification was poorer in MTB, but their ability to comprehend discourse in MTB was at least as good as in quiet. WMC did not explain significant variance in speech comprehension before or after controlling for age and audibility. However, WMC explained significant variance in key word identification for low-context sentences in MTB. Conclusions These results suggest that WMC plays an important role in identifying low-context sentences in MTB, but not in comprehending semantically rich discourse passages. In general, the data did not support individual variability in WMC as a factor that predicts speech comprehension ability in older adults with HI.
APA, Harvard, Vancouver, ISO, and other styles
42

Battestini, Joëlle, and Jeanne Rolin-Ianziti. "Nonverbal features of speech and foreign language comprehension." Australian Review of Applied Linguistics 23, no. 1 (January 1, 2000): 15–30. http://dx.doi.org/10.1075/aral.23.1.02bat.

Full text
Abstract:
Are nonverbal features of speech a valuable source of information in L2 aural comprehension? This paper starts with the results of a questionnaire suggesting the teaching profession’s belief that nonverbal features of speech facilitate foreign language comprehension. This belief is then examined in the light of studies in various fields (applied linguistics, social psychology, anthropology, cross-cultural studies) where research deals with nonverbal features of speech at the receptive level. Our review of this literature raises several issues: 1) the importance of the context of use in decoding the meaning and functions of gestures; 2) potential discrepancies in the use of gestures in L1 and L2; 3) the interpretability of gestural expressions across cultures; and 4) the most appropriate teaching approach to integrate nonverbal features of speech into the teaching of L2 comprehension. The paper also discusses possible avenues for further research.
APA, Harvard, Vancouver, ISO, and other styles
43

Remington, Bob, and Sue Clarke. "Simultaneous communication and speech comprehension. Part I: comparison of two methods of teaching expressive signing and speech comprehension skills." Augmentative and Alternative Communication 9, no. 1 (January 1993): 36–48. http://dx.doi.org/10.1080/07434619312331276391.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Crinion, Jennifer, S. Catrin Blank, and Richard Wise. "Central neural systems for both narrative speech comprehension and propositional speech production." NeuroImage 13, no. 6 (June 2001): 520. http://dx.doi.org/10.1016/s1053-8119(01)91863-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Hoen, Michel, Fanny Meunier, Claire-Léonie Grataloup, François Pellegrino, Nicolas Grimault, Fabien Perrin, Xavier Perrot, and Lionel Collet. "Phonetic and lexical interferences in informational masking during speech-in-speech comprehension." Speech Communication 49, no. 12 (December 2007): 905–16. http://dx.doi.org/10.1016/j.specom.2007.05.008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Newman, Rochelle S., Monita Chatterjee, Giovanna Morini, and Molly Nasuta. "Toddlers' comprehension of noise-vocoded speech and sine-wave analogs to speech." Journal of the Acoustical Society of America 133, no. 5 (May 2013): 3338. http://dx.doi.org/10.1121/1.4805627.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Kubose, Tate T., Kathryn Bock, Gary S. Dell, Susan M. Garnsey, Arthur F. Kramer, and Jeff Mayhugh. "The effects of speech production and speech comprehension on simulated driving performance." Applied Cognitive Psychology 20, no. 1 (January 2006): 43–63. http://dx.doi.org/10.1002/acp.1164.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Hux, Karen, Kelly Knollman-Porter, Jessica Brown, and Sarah E. Wallace. "Comprehension of synthetic speech and digitized natural speech by adults with aphasia." Journal of Communication Disorders 69 (September 2017): 15–26. http://dx.doi.org/10.1016/j.jcomdis.2017.06.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Hjelmquist, E., U. Dahlstrand, and L. Hedelin. "Visually Impaired Persons’ Comprehension of Text Presented with Speech Synthesis." Journal of Visual Impairment & Blindness 86, no. 10 (December 1992): 426–28. http://dx.doi.org/10.1177/0145482x9208601005.

Full text
Abstract:
Three groups of visually impaired persons (two middle-aged and one older) were investigated with respect to memory for and understanding of texts presented with speech synthesis and natural speech, respectively. The results showed that speech synthesis generally yielded poorer performance than did natural speech. Experience had no effect on performance, and there were only marginal effects related to age. However, there were large differences among the groups with respect to the presentation speed chosen in the speech-synthesis condition.
APA, Harvard, Vancouver, ISO, and other styles
50

Wood, Sarah G., Jerad H. Moxley, Elizabeth L. Tighe, and Richard K. Wagner. "Does Use of Text-to-Speech and Related Read-Aloud Tools Improve Reading Comprehension for Students With Reading Disabilities? A Meta-Analysis." Journal of Learning Disabilities 51, no. 1 (January 23, 2017): 73–84. http://dx.doi.org/10.1177/0022219416688170.

Full text
Abstract:
Text-to-speech and related read-aloud tools are being widely implemented in an attempt to assist students’ reading comprehension skills. Read-aloud software, including text-to-speech, is used to translate written text into spoken text, enabling one to listen to written text while reading along. It is not clear how effective text-to-speech is at improving reading comprehension. This study addresses this gap in the research by conducting a meta-analysis on the effects of text-to-speech technology and related read-aloud tools on reading comprehension for students with reading difficulties. Random effects models yielded an average weighted effect size of .35 (95% confidence interval .14 to .56, p < .01). Moderator effects of study design were found to explain some of the variance. Taken together, this suggests that text-to-speech technologies may assist students with reading comprehension. However, more studies are needed to further explore the moderating variables of text-to-speech and read-aloud tools’ effectiveness for improving reading comprehension. Implications and recommendations for future research are discussed.
APA, Harvard, Vancouver, ISO, and other styles