Journal articles on the topic 'Emotionell prosodi' (Swedish for 'emotional prosody')

Consult the top 50 journal articles for your research on the topic 'Emotionell prosodi.'


1

Schirmer, Annett, and Sonja A. Kotz. "ERP Evidence for a Sex-Specific Stroop Effect in Emotional Speech." Journal of Cognitive Neuroscience 15, no. 8 (November 1, 2003): 1135–48. http://dx.doi.org/10.1162/089892903322598102.

Abstract:
The present study investigated the interaction of emotional prosody and word valence during emotional comprehension in men and women. In a prosody-word interference task, participants listened to positive, neutral, and negative words that were spoken with a happy, neutral, and angry prosody. Participants were asked to rate word valence while ignoring emotional prosody, or vice versa. Congruent stimuli were responded to faster and more accurately than incongruent emotional stimuli. This behavioral effect was more salient for the word valence task than for the prosodic task and was comparable between men and women. The event-related potentials (ERPs) revealed a smaller N400 amplitude for congruent as compared to emotionally incongruent stimuli. This ERP effect, however, was significant only for the word valence judgment and only for female listeners. The present data suggest that the word valence judgment was more difficult and more easily influenced by task-irrelevant emotional information than the prosodic task in both men and women. Furthermore, although emotional prosody and word valence may have a similar influence on an emotional judgment in both sexes, ERPs indicate sex differences in the underlying processing. Women, but not men, show an interaction between prosody and word valence during a semantic processing stage.
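The behavioral measure in interference tasks of this kind is typically the congruency effect: the reaction-time cost of incongruent trials. A minimal sketch of that computation, using entirely hypothetical trial data (this is not the authors' analysis code):

```python
# Illustrative sketch of a congruency effect from a prosody-word
# interference task. All trial data below are hypothetical.

def congruency_effect(trials):
    """Mean reaction time (ms) on incongruent trials minus congruent trials.

    Each trial is (word_valence, prosody_valence, rt_ms); a trial is
    congruent when the two valence labels match.
    """
    congruent = [rt for w, p, rt in trials if w == p]
    incongruent = [rt for w, p, rt in trials if w != p]
    return (sum(incongruent) / len(incongruent)
            - sum(congruent) / len(congruent))

# Hypothetical data: congruent trials tend to be answered faster.
trials = [
    ("positive", "positive", 500),
    ("negative", "negative", 520),
    ("positive", "negative", 580),
    ("negative", "positive", 600),
]
print(congruency_effect(trials))  # → 80.0
```

A positive value indicates the interference pattern the abstract describes: slower responses when word valence and prosody conflict.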
2

CHAMPOUX-LARSSON, MARIE-FRANCE, and ALEXANDRA S. DYLMAN. "A prosodic bias, not an advantage, in bilinguals' interpretation of emotional prosody." Bilingualism: Language and Cognition 22, no. 2 (June 4, 2018): 416–24. http://dx.doi.org/10.1017/s1366728918000640.

Abstract:
A bilingual advantage has been found in prosody understanding in pre-school children. To understand this advantage better, we asked 73 children (6-8 years) to identify the emotional valence of spoken words, based on either semantics or emotional prosody (which were either consistent or discrepant with each other). Bilingual experience ranged from no to equal exposure to and use of two languages. Both age and bilingual experience predicted accurate identification of prosody, particularly for trials where the semantics were discrepant with the targeted prosody. Bilingual experience, but not age, predicted a prosodic bias, meaning that participants had more difficulty ignoring the irrelevant discrepant prosody when the task was to identify the semantics of the word. The decline of a semantic bias was predicted by age and bilingual experience together. Our results suggest that previous findings on the bilingual advantage in prosody processing may in fact be driven by a prosodic bias.
3

Chinn, Lisa K., Irina Ovchinnikova, Anastasia A. Sukmanova, Aleksandra O. Davydova, and Elena L. Grigorenko. "Early institutionalized care disrupts the development of emotion processing in prosody." Development and Psychopathology 33, no. 2 (February 15, 2021): 421–30. http://dx.doi.org/10.1017/s0954579420002023.

Abstract:
Millions of children worldwide are raised in institutionalized settings. Unfortunately, institutionalized rearing is often characterized by psychosocial deprivation, leading to difficulties in numerous social, emotional, physical, and cognitive skills. One such skill is the ability to recognize emotional facial expressions. Children with a history of institutional rearing tend to be worse at recognizing emotions in facial expressions than their peers, and this deficit likely affects social interactions. However, emotional information is also conveyed vocally, and neither prosodic information processing nor the cross-modal integration of facial and prosodic emotional expressions have been investigated in these children to date. We recorded electroencephalograms (EEG) while 47 children under institutionalized care (IC) (n = 24) or biological family care (BFC) (n = 23) viewed angry, happy, or neutral facial expressions while listening to pseudowords with angry, happy, or neutral prosody. The results indicate that 20- to 40-month-olds living in IC have event-related potentials (ERPs) over midfrontal brain regions that are less sensitive to incongruent facial and prosodic emotions relative to children under BFC, and that their brain responses to prosody are less lateralized. Children under IC also showed midfrontal ERP differences in processing of angry prosody, indicating that institutionalized rearing may specifically affect the processing of anger.
4

Ben-David, Boaz M., Namita Multani, Vered Shakuf, Frank Rudzicz, and Pascal H. H. M. van Lieshout. "Prosody and Semantics Are Separate but Not Separable Channels in the Perception of Emotional Speech: Test for Rating of Emotions in Speech." Journal of Speech, Language, and Hearing Research 59, no. 1 (February 2016): 72–89. http://dx.doi.org/10.1044/2015_jslhr-h-14-0323.

Abstract:
Purpose: Our aim is to explore the complex interplay of prosody (tone of speech) and semantics (verbal content) in the perception of discrete emotions in speech.
Method: We implement a novel tool, the Test for Rating of Emotions in Speech. Eighty native English speakers were presented with spoken sentences made of different combinations of 5 discrete emotions (anger, fear, happiness, sadness, and neutral) presented in prosody and semantics. Listeners were asked to rate the sentence as a whole, integrating both speech channels, or to focus on one channel only (prosody or semantics).
Results: We observed supremacy of congruency, failure of selective attention, and prosodic dominance. Supremacy of congruency means that a sentence that presents the same emotion in both speech channels was rated highest; failure of selective attention means that listeners were unable to selectively attend to one channel when instructed; and prosodic dominance means that prosodic information plays a larger role than semantics in processing emotional speech.
Conclusions: Emotional prosody and semantics are separate but not separable channels, and it is difficult to perceive one without the influence of the other. Our findings indicate that the Test for Rating of Emotions in Speech can reveal specific aspects in the processing of emotional speech and may in the future prove useful for understanding emotion-processing deficits in individuals with pathologies.
5

Misiewicz, Sylwia, Adam M. Brickman, and Giuseppe Tosto. "Prosodic Impairment in Dementia: Review of the Literature." Current Alzheimer Research 15, no. 2 (January 3, 2018): 157–63. http://dx.doi.org/10.2174/1567205014666171030115624.

Abstract:
Objective: Prosody, an important aspect of spoken language, is defined as the emphasis placed on certain syllables, changes in tempo or timing, and variance in pitch and intonation. Most studies investigating expression and comprehension of prosody have focused primarily on emotional prosody and less extensively on supralexical prosody. The distinction is indeed important, as the latter conveys information such as interrogative or assertive mode, whereas the former delivers emotional connotation, such as happiness, anger, and sadness. These functions appear to rely on distinct neuronal networks, supported by functional neuroimaging studies that show activation of the right hemisphere, specifically in the right inferior frontal area during emotional detection. Conclusion: This review summarizes the studies conducted on prosody impairment in Alzheimer's disease and other dementias, with emphasis on experiments designed to investigate the emotional vs. the supralexical aspect of speech production. We also discussed the available tools validated to test and quantify the prosodic impairment.
6

Martens, Heidi, Gwen Van Nuffelen, Patrick Cras, Barbara Pickut, Miet De Letter, and Marc De Bodt. "Assessment of Prosodic Communicative Efficiency in Parkinson's Disease As Judged by Professional Listeners." Parkinson's Disease 2011 (2011): 1–10. http://dx.doi.org/10.4061/2011/129310.

Abstract:
This study examines the impact of Parkinson's disease (PD) on communicative efficiency conveyed through prosody. A new assessment method for evaluating productive prosodic skills in Dutch speaking dysarthric patients was devised and tested on 36 individuals (18 controls, 18 PD patients). Three professional listeners judged the intended meanings in four communicative functions of Dutch prosody: Boundary Marking, Focus, Sentence Typing, and Emotional Prosody. Each function was tested through reading and imitation. Interrater agreement was calculated. Results indicated that healthy speakers, compared to PD patients, performed significantly better on imitation of Boundary Marking, Focus, and Sentence Typing. PD patients with a moderate or severe dysarthria performed significantly worse on imitation of Focus than on reading of Focus. No significant differences were found for Emotional Prosody. Judges agreed well on all tasks except Emotional Prosody. Future research will focus on elaborating the assessment and on developing a therapy programme paralleling the assessment.
7

Bach, D. R., K. Buxtorf, D. Grandjean, and W. K. Strik. "The influence of emotion clarity on emotional prosody identification in paranoid schizophrenia." Psychological Medicine 39, no. 6 (November 12, 2008): 927–38. http://dx.doi.org/10.1017/s0033291708004704.

Abstract:
Background: Identification of emotional facial expression and emotional prosody (i.e. speech melody) is often impaired in schizophrenia. For facial emotion identification, a recent study suggested that the relative deficit in schizophrenia is enhanced when the presented emotion is easier to recognize. It is unclear whether this effect is specific to face processing or part of a more general emotion recognition deficit.
Method: We used clarity-graded emotional prosodic stimuli without semantic content, and tested 25 in-patients with paranoid schizophrenia, 25 healthy control participants and 25 depressive in-patients on emotional prosody identification. Facial expression identification was used as a control task.
Results: Patients with paranoid schizophrenia performed worse than both control groups in identifying emotional prosody, with no specific deficit in any individual emotion category. This deficit was present in high-clarity but not in low-clarity stimuli. Performance in facial control tasks was also impaired, with identification of emotional facial expression being a better predictor of emotional prosody identification than illness-related factors. Of those, negative symptoms emerged as the best predictor for emotional prosody identification.
Conclusions: This study suggests a general deficit in identifying high-clarity emotional cues. This finding is in line with the hypothesis that schizophrenia is characterized by high noise in internal representations and by increased fluctuations in cerebral networks.
8

Van Rheenen, Tamsyn E., and Susan L. Rossell. "Multimodal Emotion Integration in Bipolar Disorder: An Investigation of Involuntary Cross-Modal Influences between Facial and Prosodic Channels." Journal of the International Neuropsychological Society 20, no. 5 (April 11, 2014): 525–33. http://dx.doi.org/10.1017/s1355617714000253.

Abstract:
The ability to integrate information from different sensory channels is a vital process that serves to facilitate perceptual decoding in times of unimodal ambiguity. Despite its relevance to psychosocial functioning, multimodal integration of emotional information across facial and prosodic modes has not been addressed in bipolar disorder (BD). In light of this paucity of research we investigated multimodal processing in a BD cohort using a focused attention paradigm. Fifty BD patients and 52 healthy controls completed a task assessing the cross-modal influence of emotional prosody on facial emotion recognition across congruent and incongruent facial and prosodic conditions, where attention was directed to the facial channel. There were no differences in multi-modal integration between groups at the level of accuracy, but differences were evident at the level of response time; emotional prosody biased facial recognition latencies in the control group only, where a fourfold increase in response times was evident between congruent and incongruent conditions relative to patients. The results of this study indicate that the automatic process of integrating multimodal information from facial and prosodic sensory channels is delayed in BD. Given that interpersonal communication usually occurs in real time, these results have implications for social functioning in the disorder. (JINS, 2014, 20, 1–9)
9

Demenescu, Liliana Ramona, Yutaka Kato, and Klaus Mathiak. "Neural Processing of Emotional Prosody across the Adult Lifespan." BioMed Research International 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/590216.

Abstract:
Emotion recognition deficits emerge with increasing age; in particular, there is a decline in the identification of sadness. However, little is known about the age-related changes of emotion processing in sensory, affective, and executive brain areas. This functional magnetic resonance imaging (fMRI) study investigated neural correlates of auditory processing of prosody across the adult lifespan. Unattended detection of emotional prosody changes was assessed in 21 young (age range: 18–35 years), 19 middle-aged (age range: 36–55 years), and 15 older (age range: 56–75 years) adults. Pseudowords uttered with neutral prosody were standards in an oddball paradigm with angry, sad, happy, and gender deviants (total 20% deviants). Changes in emotional prosody and voice gender elicited bilateral superior temporal gyri (STG) responses reflecting automatic encoding of prosody. At the right STG, responses to sad deviants decreased linearly with age, whereas happy events exhibited a nonlinear relationship. In contrast to behavioral data, no age by sex interaction emerged on the neural networks. The aging decline of emotion processing of prosodic cues emerges already at an early automatic stage of information processing at the level of the auditory cortex. However, top-down modulation may lead to an additional perceptional bias, for example, towards positive stimuli, and may depend on context factors such as the listener's sex.
10

Beatty, William W., Diana M. Orbelo, Kristen H. Sorocco, and Elliott D. Ross. "Comprehension of affective prosody in multiple sclerosis." Multiple Sclerosis Journal 9, no. 2 (April 2003): 148–53. http://dx.doi.org/10.1191/1352458503ms897oa.

Abstract:
Deficits in cognition have been repeatedly documented in patients with multiple sclerosis (MS), but their ability to comprehend emotional information has received little study. Forty-seven patients with MS and 19 demographically matched controls received the comprehension portion of the Aprosodia Battery, which is known to be sensitive to the impairments of patients with strokes and other neurological conditions. Patients also received tests of hearing, verbal comprehension and naming, a short cognitive battery, and the Beck Depression Inventory. Patients with MS were impaired in identifying emotional states from prosodic cues. The magnitude of the deficits was greatest for patients with severe physical disability and under test conditions of limited prosodic information. Correlational analyses suggested that the patients' difficulties in comprehending affective prosodic information were not secondary to hearing loss, aphasic deficits, cognitive impairment, or depression. For some patients with MS, deficits in comprehending emotional information may contribute to their difficulties in maintaining effective social interactions.
11

Ramos-Loyo, Julieta, Leonor Mora-Reynoso, Luis Miguel Sánchez-Loyo, and Virginia Medina-Hernández. "Sex Differences in Facial, Prosodic, and Social Context Emotional Recognition in Early-Onset Schizophrenia." Schizophrenia Research and Treatment 2012 (2012): 1–12. http://dx.doi.org/10.1155/2012/584725.

Abstract:
The purpose of the present study was to determine sex differences in facial, prosodic, and social context emotional recognition in schizophrenia (SCH). Thirty-eight patients (SCH, 20 females) and 38 healthy controls (CON, 20 females) participated in the study. Clinical scales (BPRS and PANSS) and an Affective States Scale were applied, as well as tasks to evaluate facial, prosodic, and within a social context emotional recognition. SCH showed lower accuracy and longer response times than CON, but no significant sex differences were observed in either facial or prosody recognition. In social context emotions, however, females showed higher empathy than males with respect to happiness in both groups. SCH reported being more identified with sad films than CON and females more with fear than males. The results of this study confirm the deficits of emotional recognition in male and female patients with schizophrenia compared to healthy subjects. Sex differences were detected in relation to social context emotions and facial and prosodic recognition depending on age.
12

Leon, Susan A., and Amy D. Rodriguez. "Aprosodia and Its Treatment." Perspectives on Neurophysiology and Neurogenic Speech and Language Disorders 18, no. 2 (June 2008): 66–72. http://dx.doi.org/10.1044/nnsld18.2.66.

Abstract:
Aprosodia is a deficit in comprehending or expressing variations in tone of voice used to express both linguistic and emotional information. Affective aprosodia refers to a specific deficit in producing or comprehending the emotional or affective tones of voice. Aprosodia is most commonly associated with right hemisphere strokes; however, it may also result from other types of brain damage such as traumatic brain injury. Although research investigating hemispheric lateralization of prosody continues, there is strong evidence that most aspects of affective prosody are directed by the right hemisphere. Disorders of emotional communication can have a significant impact on quality of life for those affected and their families. However, there has been relatively little research regarding treatment for this disorder. Recently, 14 individuals were treated for affective aprosodia using two treatments, one based on cognitive-linguistic cues and the other on imitation of prosodic modeling. Most of the participants responded to at least one of the two treatments, and a refinement of the treatments is currently underway. Because researchers are finding support for the hypothesis that expressive aprosodia can result from a motor deficit, the refined treatment incorporates principles of motor learning to enhance imitation of prosodic models, as well as cognitive-linguistic cues.
13

Fonseca, Rochele Paz, Jandyra Maria Guimarães Fachel, Márcia Lorena Fagundes Chaves, Francéia Veiga Liedtke, and Maria Alice de Mattos Pimenta Parente. "Right hemisphere damage: Communication processing in adults evaluated by the Brazilian Protocole MEC - Bateria MAC." Dementia & Neuropsychologia 1, no. 3 (September 2007): 266–75. http://dx.doi.org/10.1590/s1980-57642008dn10300008.

Abstract:
Right-brain-damaged individuals may present discursive, pragmatic, lexical-semantic and/or prosodic disorders. Objective: To verify the effect of right hemisphere damage on communication processing evaluated by the Brazilian version of the Protocole Montréal d'Évaluation de la Communication (Montreal Communication Evaluation Battery) - Bateria Montreal de Avaliação da Comunicação, Bateria MAC, in Portuguese. Methods: A clinical group of 29 right-brain-damaged participants and a control group of 58 non-brain-damaged adults formed the sample. A questionnaire on sociocultural and health aspects, together with the Brazilian MAC Battery was administered. Results: Significant differences between the clinical and control groups were observed in the following MAC Battery tasks: conversational discourse, unconstrained, semantic and orthographic verbal fluency, linguistic prosody repetition, emotional prosody comprehension, repetition and production. Moreover, the clinical group was less homogeneous than the control group. Conclusions: A right-brain-damage effect was identified directly, on three communication processes: discursive, lexical-semantic and prosodic processes, and indirectly, on pragmatic process.
14

Spierings, Michelle J., and Carel ten Cate. "Zebra finches are sensitive to prosodic features of human speech." Proceedings of the Royal Society B: Biological Sciences 281, no. 1787 (July 22, 2014): 20140480. http://dx.doi.org/10.1098/rspb.2014.0480.

Abstract:
Variation in pitch, amplitude and rhythm adds crucial paralinguistic information to human speech. Such prosodic cues can reveal information about the meaning or emphasis of a sentence or the emotional state of the speaker. To examine the hypothesis that sensitivity to prosodic cues is language independent and not human specific, we tested prosody perception in a controlled experiment with zebra finches. Using a go/no-go procedure, subjects were trained to discriminate between speech syllables arranged in XYXY patterns with prosodic stress on the first syllable and XXYY patterns with prosodic stress on the final syllable. To systematically determine the salience of the various prosodic cues (pitch, duration and amplitude) to the zebra finches, they were subjected to five tests with different combinations of these cues. The zebra finches generalized the prosodic pattern to sequences that consisted of new syllables and used prosodic features over structural ones to discriminate between stimuli. This strong sensitivity to the prosodic pattern was maintained when only a single prosodic cue was available. The change in pitch was treated as more salient than changes in the other prosodic features. These results show that zebra finches are sensitive to the same prosodic cues known to affect human speech perception.
15

Lin, Yi, Hongwei Ding, and Yang Zhang. "Prosody Dominates Over Semantics in Emotion Word Processing: Evidence From Cross-Channel and Cross-Modal Stroop Effects." Journal of Speech, Language, and Hearing Research 63, no. 3 (March 23, 2020): 896–912. http://dx.doi.org/10.1044/2020_jslhr-19-00258.

Abstract:
Purpose: Emotional speech communication involves multisensory integration of linguistic (e.g., semantic content) and paralinguistic (e.g., prosody and facial expressions) messages. Previous studies on linguistic versus paralinguistic salience effects in emotional speech processing have produced inconsistent findings. In this study, we investigated the relative perceptual saliency of emotion cues in a cross-channel auditory-alone task (i.e., semantics–prosody Stroop task) and a cross-modal audiovisual task (i.e., semantics–prosody–face Stroop task).
Method: Thirty normal Chinese adults participated in two Stroop experiments with spoken emotion adjectives in Mandarin Chinese. Experiment 1 manipulated auditory pairing of emotional prosody (happy or sad) and lexical semantic content in congruent and incongruent conditions. Experiment 2 extended the protocol to cross-modal integration by introducing visual facial expression during auditory stimulus presentation. Participants were asked to judge emotional information for each test trial according to the instruction of selective attention.
Results: Accuracy and reaction time data indicated that, despite an increase in cognitive demand and task complexity in Experiment 2, prosody was consistently more salient than semantic content for emotion word processing and did not take precedence over facial expression. While congruent stimuli enhanced performance in both experiments, the facilitatory effect was smaller in Experiment 2.
Conclusion: Together, the results demonstrate the salient role of paralinguistic prosodic cues in emotion word processing and a congruence facilitation effect in multisensory integration. Our study contributes tonal language data on how linguistic and paralinguistic messages converge in multisensory speech processing and lays a foundation for further exploring the brain mechanisms of cross-channel/modal emotion integration with potential clinical applications.
16

Tompkins, Connie A., and Charles R. Flowers. "Perception of Emotional Intonation by Brain-Damaged Adults." Journal of Speech, Language, and Hearing Research 28, no. 4 (December 1985): 527–38. http://dx.doi.org/10.1044/jshr.2804.527.

Abstract:
This research examined perception of moods from the tone-of-voice of semantically neutral phrases following unilateral cerebrovascular accident. It was hypothesized that right hemisphere damage (RHD) would impair even low-level discrimination and recognition of affective prosody, while left hemisphere damage (LHD) would affect performance only as associational-cognitive task demands increased. Thirty-three male subjects, 11 each in RHD, LHD, and normal groups, were given three tasks that varied in presumed amounts of processing undertaken for successful completion. Discrimination of prosodic patterns was expected to require the fewest cognitive operations. An intermediate task involved selecting from two possibilities the label that described moods conveyed prosodically. In the third task, prosodic mood selection was made from four choices, increasing the number of comparisons necessary for accurate judgment. As hypothesized, RHD subjects were inferior to normal subjects in all tasks. LHD subjects were equivalent to normal subjects for the first two tasks, but fell to the level of the RHD group for the third task. These results indicated that the right hemisphere in men was primarily involved in the reception and recognition of emotional prosodic stimuli. Increasing cognitive demands, however, brought about a shift in emphasis from the right hemisphere to both hemispheres. An implication of these findings concerns the need to examine performance levels that invoke changes from expected patterns of hemispheric specialization to advance our knowledge of functional asymmetries.
17

Brosch, Tobias, Didier Grandjean, David Sander, and Klaus R. Scherer. "Cross-modal Emotional Attention: Emotional Voices Modulate Early Stages of Visual Processing." Journal of Cognitive Neuroscience 21, no. 9 (September 2009): 1670–79. http://dx.doi.org/10.1162/jocn.2009.21110.

Abstract:
Emotional attention, the boosting of the processing of emotionally relevant stimuli, has, up to now, mainly been investigated within a sensory modality, for instance, by using emotional pictures to modulate visual attention. In real-life environments, however, humans typically encounter simultaneous input to several different senses, such as vision and audition. As multiple signals entering different channels might originate from a common, emotionally relevant source, the prioritization of emotional stimuli should be able to operate across modalities. In this study, we explored cross-modal emotional attention. Spatially localized utterances with emotional and neutral prosody served as cues for a visually presented target in a cross-modal dot-probe task. Participants were faster to respond to targets that appeared at the spatial location of emotional compared to neutral prosody. Event-related brain potentials revealed emotional modulation of early visual target processing at the level of the P1 component, with neural sources in the striate visual cortex being more active for targets that appeared at the spatial location of emotional compared to neutral prosody. These effects were not found using synthesized control sounds matched for mean fundamental frequency and amplitude envelope. These results show that emotional attention can operate across sensory modalities by boosting early sensory stages of processing, thus facilitating the multimodal assessment of emotionally relevant stimuli in the environment.
18

Weed, Ethan, and Riccardo Fusaroli. "Acoustic Measures of Prosody in Right-Hemisphere Damage: A Systematic Review and Meta-Analysis." Journal of Speech, Language, and Hearing Research 63, no. 6 (June 22, 2020): 1762–75. http://dx.doi.org/10.1044/2020_jslhr-19-00241.

Abstract:
Purpose: The aim of the study was to use systematic review and meta-analysis to quantitatively assess the currently available acoustic evidence for prosodic production impairments as a result of right-hemisphere damage (RHD), as well as to develop methodological recommendations for future studies.
Method: We systematically reviewed papers reporting acoustic features of prosodic production in RHD in order to identify shortcomings in the literature and make recommendations for future studies. We estimated the meta-analytic effect size of the acoustic features. We extracted standardized mean differences from 16 papers and estimated aggregated effect sizes using hierarchical Bayesian regression models.
Results: RHD did present reduced fundamental frequency variation, but the trait was shared with left-hemisphere damage. RHD also presented evidence for increased pause duration. No meta-analytic evidence for an effect of prosody type (emotional vs. linguistic) was found.
Conclusions: Taken together, the currently available acoustic data show only a weak specific effect of RHD on prosody production. However, the results are not definitive, as more reliable analyses are hindered by small sample sizes, lack of detail on lesion location, and divergent measuring techniques. We propose recommendations to overcome these issues: cumulative science practices (e.g., open data and code sharing), more nuanced speech signal processing techniques, and the integration of acoustic measures and perceptual judgments are recommended to more effectively investigate prosody in RHD.
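The effect-size extraction step in such a meta-analysis relies on the standardized mean difference. A minimal sketch of one common variant (Cohen's d with a pooled standard deviation); the group values are hypothetical and this is not the paper's analysis pipeline:

```python
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference: (mean_a - mean_b) / pooled SD."""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    # Sample variances (n - 1 denominator), then the pooled SD.
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical per-speaker F0 variation values (semitones):
rhd = [2.1, 2.4, 1.9, 2.2]       # right-hemisphere damage
controls = [3.0, 3.4, 2.8, 3.2]  # healthy controls
print(round(cohens_d(rhd, controls), 2))  # negative: less F0 variation in RHD
```

Meta-analyses typically apply a small-sample correction (Hedges' g) on top of this before aggregating across studies.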
19

Yazdani motlagh, Negin, and Masih Rahimi Nezhad. "Investigation on Productivity of Synonym Words with Different Semantic Prosody in English." International Journal of Linguistics and Translation Studies 2, no. 3 (April 28, 2021): 65–75. http://dx.doi.org/10.36892/ijlts.v2i3.146.

Full text
Abstract:
“Semantic prosody” has been researched since Sinclair's first claim in 1987. Since then, it has become one of the most important issues in language studies as a linguistic phenomenon. In 1993, Louw defined semantic prosody as a special tendency of words to occur either in a pleasant environment, creating a ‘positive semantic prosody’, or in an unpleasant environment, creating a ‘negative semantic prosody’. The current research is based on a corpus analysis design using “COCA” and “COHA”. Two synonym pairs, “start/begin” and “guide/lead to”, were chosen as a case study, and a representative sample of each word was estimated with Cochran’s formula. The study investigates the finding that, although words with negative semantic prosody are much more frequent than words with positive semantic prosody, the productivity of words with positive semantic prosody within English synonym pairs exceeds that of their negative counterparts, in line with the linguistic positivity bias and the “Pollyanna hypothesis” introduced by Boucher and Osgood (1969). This may be due to social interactions, the emotional content of words, and linguistic behavior: notably, people tend to talk more about the brighter side of life than the darker side. The discrepancy makes word choice problematic for translators and English learners.
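The entry above reports estimating a representative sample for each word with Cochran's formula. As an illustrative aside, a minimal sketch of that calculation, assuming the conventional defaults (95% confidence, maximum variability p = 0.5, 5% margin of error), which are not taken from the paper itself:

```python
import math

def cochran_sample_size(population: int, z: float = 1.96,
                        p: float = 0.5, e: float = 0.05) -> int:
    """Representative sample size via Cochran's formula.

    First the infinite-population estimate n0 = z^2 * p * (1 - p) / e^2,
    then the finite-population correction n = n0 / (1 + (n0 - 1) / N).
    """
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# e.g. for a corpus with 10,000 occurrences of a word:
sample = cochran_sample_size(10_000)
```

For very large populations the correction term vanishes and the result approaches the familiar 385 tokens at these defaults.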
APA, Harvard, Vancouver, ISO, and other styles
20

Lyssenko, Catherine. "Prosodic Peculiarities of Introgatives in Theatric Public Speech as a Variety of Dramatic Soliloquy." PROBLEMS OF SEMANTICS, PRAGMATICS AND COGNITIVE LINGUISTICS, no. 37 (2020): 54–66. http://dx.doi.org/10.17721/2663-6530.2020.37.04.

Full text
Abstract:
The article deals with specific features of the perception of public speaking prosody. Speech intonation is an important component of oral text, a carrier of semantic meanings, and at the same time a means of demonstrating the emotionally expressive nature of expression. The nature and influence of public speaking depend not only on adequately disclosed facts, but also on the very speech form of their presentation. The leading role in this process belongs to prosodic means, in which the peculiarities of public speech are clearly revealed. The article examines the interrogatives of quasi-spontaneous public speaking monologues from Shakespeare's plays, gathered from audio recordings of different periods, dating back to the 1930s. This approach made it possible to compare the main features of interrogative rhetorical questions in chronological terms. In the analysis of prosodic features of question structures, an inventory of their universal characteristics was distinguished, since there are certain normative, invariant prosodic models for the main types of questions. On the other hand, melody should be attributed to the most variable parameters of interrogative constructions, so the most typical variants of intonational deviations were also analyzed.
APA, Harvard, Vancouver, ISO, and other styles
21

Zhu, Yinyin. "Which is the best listener group?" Dutch Journal of Applied Linguistics 2, no. 2 (October 7, 2013): 170–83. http://dx.doi.org/10.1075/dujal.2.2.03zhu.

Full text
Abstract:
This study investigated the perception of six Chinese emotional prosodies (neutrality, happiness, anger, surprise, sadness and sarcasm) by 20 Chinese native listeners, 20 naïve Dutch listeners and 20 advanced Dutch L2 learners of Chinese. The results showed that advanced Dutch L2 learners of Chinese recognized Chinese emotional prosody significantly better than Chinese native listeners and naïve Dutch listeners. The results also indicated that naïve non-native listeners could recognize emotions in an unknown language as well as natives did. Chinese native listeners did not show an in-group advantage for identifying emotions in Chinese more accurately and confidently. Neutrality was the easiest emotion for all three listener groups to identify, and anger was recognized equally well by all groups. The prediction made at the beginning of the study is confirmed: listeners of a tonal language are less intent on the paralinguistic use of prosody than listeners of a non-tonal language.
APA, Harvard, Vancouver, ISO, and other styles
22

Korolova, Tetiana, and Natalya Zhmayeva. "PROSODY OF NEGATIVE MODALITY IN TRANSLATION." Naukovy Visnyk of South Ukrainian National Pedagogical University named after K. D. Ushynsky: Linguistic Sciences 2021, no. 32 (2021): 74–83. http://dx.doi.org/10.24195/2616-5317-2021-32-6.

Full text
Abstract:
The work is devoted to researching the ways in which the prosodic characteristics of English utterances containing negative semantics are reflected in Ukrainian oral translation. The difficulty of achieving this goal when treating prosody lies in its multifunctionality and multicomponential nature. Prosody is the main means of signifying the communicative types of sentences and the pragmatics of the communicative process. The correlation of prosody with the pragmatic and semantic components of communication is carried out through the prism of the emotional sphere of human speech behavior. The analysis of the experimental material provided an opportunity to find characteristics of negative modality that are both distinctive and typologically common to Ukrainian and English at the level of the prosody of negative connotative meanings, such as objections, condemnation and reproach, as well as specific correlates of negative modality in either of the two languages under analysis. Typologically similar characteristics include the following: the register of melody, the range of the dynamic parameter and the timbre components of prosody, the state of the vocal cords and larynx, and their combinations. The mechanism of realization of the prosody of negative connotations is typologically similar in the two languages: the most frequent combination of interacting prosody parameters in English and Ukrainian is an increase in intensity with a synchronous rise of melody, in 24.7% and 25.3% of the experimental phrases, respectively. The functional significance of the temporal component at the segmental level in English, and the absence of such a characteristic in Ukrainian, expands the possibility of applying the time parameter at the suprasegmental level in Ukrainian. Knowledge of the laws of prosodic variability in the two languages helps to interpret correctly and produce the semantics of a foreign language in translation.
APA, Harvard, Vancouver, ISO, and other styles
23

Werner, S., and G. N. Petrenko. "Speech Emotion Recognition: Humans vs Machines." Discourse 5, no. 5 (December 18, 2019): 136–52. http://dx.doi.org/10.32603/2412-8562-2019-5-5-136-152.

Full text
Abstract:
Introduction. The study focuses on emotional speech perception and speech emotion recognition using prosodic clues alone. Theoretical problems of defining prosody, intonation and emotion, along with the challenges of emotion classification, are discussed. An overview of acoustic and perceptual correlates of emotions found in speech is provided. Technical approaches to speech emotion recognition are also considered in the light of the latest automatic classification experiments on emotional speech. Methodology and sources. The typical “big six” classification commonly used in technical applications is chosen and modified to include such emotions as disgust and shame. A database of emotional speech in Russian is created under sound laboratory conditions. A perception experiment is run using Praat software’s experimental environment. Results and discussion. Cross-cultural emotion recognition possibilities are revealed, as the Finnish and international participants recognised about half of the samples correctly. Nonetheless, native speakers of Russian appear to distinguish a larger proportion of emotions correctly. The effects of knowledge of foreign languages, musical training and gender on performance in the experiment were insufficiently prominent. The most commonly confused pairs of emotions, such as shame and sadness, surprise and fear, and anger and disgust, as well as confusions with the neutral emotion, were also given due attention. Conclusion. The work can contribute to psychological studies, clarifying emotion classification and the gender aspect of emotionality; to linguistic research, providing new evidence for prosodic and comparative language studies; and to language technology, deepening the understanding of possible challenges for SER systems.
APA, Harvard, Vancouver, ISO, and other styles
24

Mitchell, Rachel L. C., Rebecca Elliott, Martin Barry, Alan Cruttenden, and Peter W. R. Woodruff. "Neural response to emotional prosody in schizophrenia and in bipolar affective disorder." British Journal of Psychiatry 184, no. 3 (March 2004): 223–30. http://dx.doi.org/10.1192/bjp.184.3.223.

Full text
Abstract:
Background: Evidence suggests a reversal of the normal left-lateralised response to speech in schizophrenia. Aims: To test the brain's response to emotional prosody in schizophrenia and bipolar disorder. Method: BOLD contrast functional magnetic resonance imaging of subjects while they passively listened or attended to sentences that differed in emotional prosody. Results: Patients with schizophrenia exhibited normal right-lateralisation of the passive response to ‘pure’ emotional prosody and relative left-lateralisation of the response to unfiltered emotional prosody. When attending to emotional prosody, patients with schizophrenia activated the left insula more than healthy controls. When listening passively, patients with bipolar disorder demonstrated less activation of the bilateral superior temporal gyri in response to pure emotional prosody, and greater activation of the left superior temporal gyrus in response to unfiltered emotional prosody. In both passive experiments, the patient groups activated different lateral temporal lobe regions. Conclusions: Patients with schizophrenia and bipolar disorder may display some left-lateralisation of the normal right-lateralised temporal lobe response to emotional prosody.
APA, Harvard, Vancouver, ISO, and other styles
25

Abelin, Åsa. "Emotional Prosody in Interjections: A Case of Non-arbitrariness in Language." Public Journal of Semiotics 5, no. 1 (December 8, 2013): 63–76. http://dx.doi.org/10.37693/pjos.2013.5.9648.

Full text
Abstract:
Emotional prosody shows a connection between meaning and expression, and constitutes a special case of non-arbitrariness in language. Emotional interjections are learned at an early age, and also have a biological basis for their expression. It is therefore possible that emotional prosody of interjections is part of the phonological and semantic representation in the mental lexicon, and hence can be expected to influence visual lexical decisions. If confirmed, this would have some bearing on the debate concerning the relations between ‘linguistic’ and ‘affective’ prosody. A priming experiment was performed on Swedish interjections with two emotional meanings: HAPPINESS (positive words) and DISGUST (negative words). The main question was whether emotional prosody primes written interjections with emotional content, through cross-modal priming, and the chosen method was to elicit lexical decisions in a cross-modal priming task and in isolation. The results show that there was an effect of priming, and that the effect was significantly greater for HAPPINESS words (and HAPPINESS prosody) than for DISGUST words (and DISGUST prosody). For individual words, there was a positive correlation between a high priming effect for the corresponding emotion and the degree of correct interpretations of emotional primes. Furthermore, there was a tendency for high-frequency words to be primed more than low-frequency words, when the emotion of the prosody was matched. There was no such effect for high-frequency words when the emotion of the prosody was mismatched. There was also a tendency to a negative correlation between degree of correct interpretations of emotional primes and high priming effect, when the prosody was mismatched. 
We interpret these results to mean that it is problematic to regard emotional prosody as non-linguistic and disconnected from the lexicon, since there was a gradual connection between spoken emotional prosody, written emotional interjections, and lexical frequency of interjections.
APA, Harvard, Vancouver, ISO, and other styles
26

Kjelgaard, Margaret M., and Helen Tager-Flusberg. "The Perception of the Relationship Between Affective Prosody and the Emotional Content in Utterances in Children With Autism Spectrum Disorders." Perspectives on Language Learning and Education 20, no. 1 (February 2013): 20–32. http://dx.doi.org/10.1044/lle20.1.20.

Full text
Abstract:
Children with autism spectrum disorders (ASD) were compared to children with specific language impairment (SLI) and typically developing (TD) children and adults in their ability to perceive and judge the emotional information conveyed by happy, neutral, and sad prosody. The authors found that high-functioning verbal children with ASD have an implicit sensitivity to emotional prosody, but are unable to explicitly judge the emotion of the same prosody. Children with SLI were better able to judge the emotional prosody, performing similarly to TD children, although not as well as adults. The findings indicate that, uniquely to children with ASD, there is a disconnect between the implicit processing of emotional prosody and the explicit labeling of the emotion in prosody. This is promising for interventions aimed at facilitating the abilities of children with ASD in their everyday understanding of emotional prosody in conversation.
APA, Harvard, Vancouver, ISO, and other styles
27

Hoertnagl, Christine M., Falko Biedermann, Nursen Yalcin-Siedentopf, Anna-Sophia Welte, Beatrice Frajo-Apor, Eberhard A. Deisenhammer, Armand Hausmann, Georg Kemmler, Moritz Muehlbacher, and Alex Hofer. "Combined Processing of Facial and Vocal Emotion in Remitted Patients With Bipolar I Disorder." Journal of the International Neuropsychological Society 25, no. 3 (February 7, 2019): 275–84. http://dx.doi.org/10.1017/s1355617718001145.

Full text
Abstract:
Objectives: Bipolar disorder (BD) is associated with impairments in facial emotion and emotional prosody perception during both mood episodes and periods of remission. To expand on previous research, the current study investigated cross-modal emotion perception, that is, matching of facial emotion and emotional prosody in remitted BD patients. Methods: Fifty-nine outpatients with BD and 45 healthy volunteers were included in a cross-sectional study. Cross-modal emotion perception was investigated using two subtests from the Comprehensive Affective Testing System (CATS). Results: Compared to control subjects, patients were impaired in matching sad (p < .001) and angry emotional prosody (p = .034) to one of five emotional faces exhibiting the corresponding emotion, and significantly more frequently matched sad emotional prosody to happy faces (p < .001) and angry emotional prosody to neutral faces (p = .017). In addition, patients were impaired in matching neutral emotional faces to the emotional prosody of one of three sentences (p = .006) and significantly more often matched neutral faces to sad emotional prosody (p = .014). Conclusions: These findings demonstrate that, even during periods of symptomatic remission, patients suffering from BD are impaired in matching facial emotion and emotional prosody. As this type of emotion processing is relevant in everyday life, our results point to the necessity of providing specific training programs to improve psychosocial outcomes. (JINS, 2019, 25, 336–342)
APA, Harvard, Vancouver, ISO, and other styles
28

Leiva, Samanta, Micaela Difalcis, Cynthia López, Laura Margulis, Andrea Micciulli, Valeria Abusamra, and Aldo Ferreres. "Disociaciones entre prosodia emocional y lingüística en pacientes con lesiones cerebrales del hemisferio derecho." Liberabit: Revista Peruana de Psicología 23, no. 2 (December 30, 2017): 213–34. http://dx.doi.org/10.24265/liberabit.2017.v23n2.04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Roux, Paul, Damien Vistoli, Anne Christophe, Christine Passerieux, and Eric Brunet-Gouet. "ERP Evidence of a Stroop-Like Effect in Emotional Speech Related to Social Anhedonia." Journal of Psychophysiology 28, no. 1 (January 1, 2014): 11–21. http://dx.doi.org/10.1027/0269-8803/a000106.

Full text
Abstract:
The present study investigated the ERP correlates of the integration of emotional prosody to the emotional meaning of a spoken word. Thirty-four nonclinical participants listened to negative and positive words that were spoken with an angry or happy prosody and classified the emotional valence of the word meaning while ignoring emotional prosody. Social anhedonia was also self-rated by the subjects. Compared to congruent trials, incongruent ones elicited slower and less accurate behavioral responses, and a smaller P300 component at the brain response level. The present data suggest that vocal emotional information is salient enough to be integrated early in verbal processing. The P300 amplitude modulation by the prosody-meaning congruency positively correlated with the social anhedonia score, suggesting that the sensitivity of the electrical brain response to emotional prosody increased with social anhedonia. Interpretations of this result in terms of emotional processing in social anhedonia are discussed.
APA, Harvard, Vancouver, ISO, and other styles
30

Mitchell, Rachel L. C., and Rachel A. Kingston. "Age-Related Decline in Emotional Prosody Discrimination." Experimental Psychology 61, no. 3 (November 1, 2014): 215–23. http://dx.doi.org/10.1027/1618-3169/a000241.

Full text
Abstract:
It is now accepted that older adults have difficulty recognizing prosodic emotion cues, but it is not clear at what processing stage this ability breaks down. We manipulated the acoustic characteristics of tones in pitch, amplitude, and duration discrimination tasks to assess whether impaired basic auditory perception coexisted with our previously demonstrated age-related prosodic emotion perception impairment. It was found that pitch perception was particularly impaired in older adults, and that it displayed the strongest correlation with prosodic emotion discrimination. We conclude that an important cause of age-related impairment in prosodic emotion comprehension exists at the fundamental sensory level of processing.
APA, Harvard, Vancouver, ISO, and other styles
31

Föcker, Julia, and Brigitte Röder. "Event-Related Potentials Reveal Evidence for Late Integration of Emotional Prosody and Facial Expression in Dynamic Stimuli: An ERP Study." Multisensory Research 32, no. 6 (2019): 473–97. http://dx.doi.org/10.1163/22134808-20191332.

Full text
Abstract:
The aim of the present study was to test whether multisensory interactions of emotional signals are modulated by intermodal attention and emotional valence. Faces, voices and bimodal emotionally congruent or incongruent face–voice pairs were randomly presented. The EEG was recorded while participants were instructed to detect sad emotional expressions in either faces or voices while ignoring all stimuli with another emotional expression and sad stimuli of the task-irrelevant modality. Participants processed congruent sad face–voice pairs more efficiently than sad stimuli paired with an incongruent emotion, and performance was higher in congruent bimodal compared to unimodal trials, irrespective of which modality was task-relevant. Event-related potentials (ERPs) to congruent emotional face–voice pairs started to differ from ERPs to incongruent emotional face–voice pairs at 180 ms after stimulus onset: irrespective of which modality was task-relevant, ERPs revealed a more pronounced positivity (180 ms post-stimulus) to emotionally congruent trials compared to emotionally incongruent trials if the angry emotion was presented in the attended modality. A larger negativity to incongruent compared to congruent trials was observed in the time range of 400–550 ms (N400) for all emotions (happy, neutral, angry), irrespective of whether faces or voices were task-relevant. These results suggest an automatic interaction of emotion-related information.
APA, Harvard, Vancouver, ISO, and other styles
32

Gil, Sandrine, Marc Aguert, Ludovic Le Bigot, Agnès Lacroix, and Virginie Laval. "Children’s understanding of others’ emotional states." International Journal of Behavioral Development 38, no. 6 (May 14, 2014): 539–49. http://dx.doi.org/10.1177/0165025414535123.

Full text
Abstract:
The ability to infer the emotional states of others is central to our everyday interactions. These inferences can be drawn from several different sources of information occurring simultaneously in the communication situation. Based on previous studies revealing that children pay more heed to situational context than to emotional prosody when inferring the emotional states of others, we decided to focus on this issue, broadening the investigation to find out whether the natural combination of emotional prosody and faces (that is, paralinguistic cues) can overcome the dominance of situational context (that is, extralinguistic cues), and if so, at what age? In Experiment 1, children aged 3–9 years played a computer game in which they had to judge the emotional state of a character, based on two sources of information (that is, extralinguistic and paralinguistic) that were either congruent or conflicting. In Condition 1, situational context was compared with emotional prosody; in Condition 2, situational context was compared with emotional prosody combined with emotional faces. In a complementary study (Experiment 2) the same 3-year-olds performed recognition tasks with the three cues presented in isolation. Results highlighted the fundamental role of both cues, as a) situational context dominated prosody in all age groups, but b) the combination of emotional facial expression and prosody overcame this dominance, especially among the youngest and oldest children. We discuss our findings in the light of previous research and theories of both language and emotional development.
APA, Harvard, Vancouver, ISO, and other styles
33

Le Maner-Idrissi, Gaïd, Sandrine Le Sourn Bissaoui, Virginie Dardier, Maxime Codet, Nathalie Botte-Bonneton, Fanny Delahaye, Virginie Laval, Marc Aguert, Géraldine Tan-Bescond, and Benoit Godey. "Emotional Speech Comprehension in Deaf Children with Cochlear Implant." Psychology of Language and Communication 24, no. 1 (January 1, 2020): 44–69. http://dx.doi.org/10.2478/plc-2020-0003.

Full text
Abstract:
We examined the understanding of emotional speech by deaf children with a cochlear implant (CI). Thirty deaf children with CI and 60 typically developing controls (matched on chronological age or hearing age) performed a computerized task featuring emotional prosody, either embedded in a discrepant context or without any context at all. Across the task conditions, the deaf participants with CI scored lower on prosody-based responses than their peers matched on chronological age or hearing age. Additionally, we analyzed the effect of age on determining correct prosody-based responses and found that hearing age was a predictor of the accuracy of prosody-based responses. We discuss these findings with respect to delays in prosody and intermodal processing. Future research should aim to specify the nature of the cognitive processes required to process prosody.
APA, Harvard, Vancouver, ISO, and other styles
34

Savchenko, Yevheniia. "PHONETIC MEANS EXECUTING THEME AND RHEME FUNCTIONING IN SPEECH." Naukovy Visnyk of South Ukrainian National Pedagogical University named after K. D. Ushynsky: Linguistic Sciences 18, no. 28 (July 2019): 165–76. http://dx.doi.org/10.24195/2616-5317-2019-28-15.

Full text
Abstract:
The paper deals with phonetic means executing theme and rheme functioning in speech. The main components of prosodic arrangement of the theme and rheme structure of the utterance are studied, and a problem of structural units of intonation is investigated. Multi-functionality of intonation tends to complicate a study of speech prosody. At the stage of inventory and taxonomic analysis of the formal means of intonation the basic components of prosodic arrangement of the theme and rheme structure of the utterance are considered and a problem of the structural intonation units is studied. The analysis is based on a study of the material essence of the intonation units which differentiation is provided not only by the melodic component but also by speech intensity, speech tempo (including pauses), voice timbre as well as the integral prosodic characteristic — the phrase stress. It is possible to speak definitely about presence of essential differences in the degree of informational melody, speech intensity, tempo and timbre in the context of communication of meanings, and a complex nature of their accomplishment in speech. Therefore, it becomes important to study not just the role of each of these components in the accomplishment of the communicative function of intonation but also to establish their hierarchy, inter-relation and interdependence. Functional analysis of intonation is primarily aimed at specification of the very principle of classification of the intonation structure functional loading. It is advisable to study the relative autonomy of various functions and the nature of their interaction. 
The list of intonation functions may be limited to the following set: the intellectual-logical function (segmentation into syntagms, links between syntagms, actual segmentation, accentual marking of syntagm elements), the function differentiating communication types (situations), the function expressing emotional states and relations, and the function that transfers modal relations. At the level of prosody, the actual segmentation of utterances is accomplished in speech primarily by tonal and, partially, dynamic means of intonation (the emphasis is often linked to the forceful intonation components, intensity and the energy component); moreover, in order to identify the content, the place of stress is important, as well as certain peculiarities of its accomplishment.
APA, Harvard, Vancouver, ISO, and other styles
35

Gil, Sandrine, Jamila Hattouti, and Virginie Laval. "How children use emotional prosody: Crossmodal emotional integration?" Developmental Psychology 52, no. 7 (2016): 1064–72. http://dx.doi.org/10.1037/dev0000121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Nesterenko, N. M., and C. V. Lyssenko. "Specificity of Repetition as a Rhetoric Device in public speech." PROBLEMS OF SEMANTICS, PRAGMATICS AND COGNITIVE LINGUISTICS, no. 36 (2019): 65–81. http://dx.doi.org/10.17721/2663-6530.2019.36.05.

Full text
Abstract:
The article deals with the peculiarities of the intonation design of certain elements of such a rhetorical device as repetition, on the material of audio recordings of Shakespeare's plays in chronology, namely rhetorical questions related to expressions of a peculiar interrogative modality. The article presents the results of the study of the invariant features of the prosody of interrogative sentences in dramatic discourse in chronological terms. Repetition as a means of emotional enhancement is considered. In public speaking, repetition serves as a means of expressing a specific function of information, persuasion, which adds rich emotional and intonational content. Through repetition, the speaker deepens the semantic side of speech and heightens its emotional impact. Syntactic parallelism, which is realized in the combination of repetitions of syntactic constructions and various intensifiers and is perceived as rhythmicality, has been analyzed. The syntactic parallelism of identical questions or sentences is amplified and correlated with the identical prosodic contour of intonation groups. To achieve an emotional effect, when presenting the syntactically parallel interrogative constructions of the second and third questions, actors can violate the rule of normative intonation of a question by using a gradually ascending scale, or, on the contrary, adhere to the normative intonation contours and design them according to the canonical rule.
APA, Harvard, Vancouver, ISO, and other styles
37

Nestereko, Natalia, and Catherine Lyssenko. "Prosodic Peculiarities of Repetition as a Rhetorical Device in Public Speech." PROBLEMS OF SEMANTICS, PRAGMATICS AND COGNITIVE LINGUISTICS, no. 37 (2020): 39–53. http://dx.doi.org/10.17721/2663-6530.2020.37.03.

Full text
Abstract:
The article deals with the peculiarities of the intonation design of certain elements of such a rhetorical device as repetition, on the material of audio recordings of Shakespeare's plays in chronology, namely rhetorical questions related to expressions of a peculiar interrogative modality. The article presents the results of the study of the invariant features of the prosody of interrogative sentences in dramatic discourse in chronological terms. Repetition as a means of emotional enhancement is considered. In public speaking, repetition serves as a means of expressing a specific function of information, persuasion, which adds rich emotional and intonational content. Through repetition, the speaker deepens the semantic side of speech and heightens its emotional impact. Syntactic parallelism, which is realized in the combination of repetitions of syntactic constructions and various intensifiers and is perceived as rhythmicality, has been analyzed. The syntactic parallelism of identical questions or sentences is amplified and correlated with the identical prosodic contour of intonation groups. To achieve an emotional effect, when presenting the syntactically parallel interrogative constructions of the second and third questions, actors can violate the rule of normative intonation of a question by using a gradually ascending scale, or, on the contrary, adhere to the normative intonation contours and design them according to the canonical rule.
APA, Harvard, Vancouver, ISO, and other styles
38

LEITMAN, DAVID I., RACHEL ZIWICH, ROEY PASTERNAK, and DANIEL C. JAVITT. "Theory of Mind (ToM) and counterfactuality deficits in schizophrenia: misperception or misinterpretation?" Psychological Medicine 36, no. 8 (May 15, 2006): 1075–83. http://dx.doi.org/10.1017/s0033291706007653.

Full text
Abstract:
Background. Theory of Mind (ToM) refers to the ability to infer another person's mental state based upon interactional information. ToM deficits have been suggested to underlie crucial aspects of social interaction failure in disorders such as autism and schizophrenia, although the development of paradigms for demonstrating such deficits remains an ongoing area of research. Recent studies have explored the use of sarcasm perception, in which subjects must infer an individual's sincerity or lack thereof, as a ‘real-life’ index of ToM ability, and as an index of functioning of specific right-hemispheric structures. Sarcasm detection ability has not previously been studied in schizophrenia, although patients have been shown to have deficits in the ability to decode emotional information from speech (‘affective prosody’). Method. Twenty-two schizophrenia patients and 17 control subjects were tested on their ability to detect sarcasm from spoken speech, as well as on measures of affective prosody and basic pitch perception. Results. Despite normal overall intelligence, patients performed substantially worse than controls in the ability to detect sarcasm (d = 2.2), showing both decreased sensitivity (A′) in detection of sincerity versus sarcasm and an increased bias (B″) toward sincerity. Correlations across groups revealed significant relationships between impairments in sarcasm recognition, affective prosody and basic pitch perception. Conclusions. These findings demonstrate substantial deficits in the ability to infer an internal subjective state based upon vocal modulation among subjects with schizophrenia. Deficits were related to, but were significantly more severe than, more general forms of prosodic and sensorial misperception, and are consistent with both right-hemispheric and ‘bottom-up’ theories of the disorder.
APA, Harvard, Vancouver, ISO, and other styles
39

Scholten, M. R. M., A. Aleman, and R. S. Kahn. "The processing of emotional prosody and semantics in schizophrenia: relationship to gender and IQ." Psychological Medicine 38, no. 6 (October 22, 2007): 887–98. http://dx.doi.org/10.1017/s0033291707001742.

Full text
Abstract:
Background: Female patients with schizophrenia are less impaired in social life than male patients. Because social impairment in schizophrenia has been found to be associated with deficits in emotion recognition, we examined whether the female advantage in processing emotional prosody and semantics is preserved in schizophrenia. Method: Forty-eight patients (25 males, 23 females) and 46 controls (23 males, 23 females) were assessed using an emotional language task (in which healthy women generally outperform healthy men), consisting of 96 sentences in four conditions: (1) neutral-content/emotional-tone (happy, sad, angry or anxious); (2) neutral-tone/emotional-content; (3) emotional-tone/incongruous emotional-content; and (4) emotional-content/incongruous emotional-tone. Participants had to ignore the emotional-content in the third condition and the emotional-tone in the fourth condition. In addition, participants were assessed with a visuospatial task (in which healthy men typically excel). Correlation coefficients were computed for associations between emotional language data, visuospatial data, IQ measures and patient variables. Results: Overall, on the emotional language task, patients made more errors than control subjects, and women outperformed men across diagnostic groups. Controlling for IQ revealed a significant effect on task performance in all groups, especially in the incongruent tasks. On the rotation task, healthy men outperformed healthy women, but male patients, female patients and female controls obtained similar scores. Conclusion: The advantage in emotional prosodic and semantic processing in healthy women is preserved in schizophrenia, whereas the male advantage in visuospatial processing is lost. These findings may explain, in part, why social functioning is less compromised in women with schizophrenia than in men.
APA, Harvard, Vancouver, ISO, and other styles
40

Lin, Yi, Hongwei Ding, and Yang Zhang. "Emotional Prosody Processing in Schizophrenic Patients: A Selective Review and Meta-Analysis." Journal of Clinical Medicine 7, no. 10 (October 17, 2018): 363. http://dx.doi.org/10.3390/jcm7100363.

Full text
Abstract:
Emotional prosody (EP) has been increasingly recognized as an important area of schizophrenic patients’ dysfunctions in their language use and social communication. The present review aims to provide an updated synopsis on emotional prosody processing (EPP) in schizophrenic disorders, with a specific focus on performance characteristics, the influential factors and underlying neural mechanisms. A literature search up to 2018 was conducted with online databases, and final selections were limited to empirical studies which investigated the prosodic processing of at least one of the six basic emotions in patients with a clear diagnosis of schizophrenia without co-morbid diseases. A narrative synthesis was performed, covering the range of research topics, task paradigms, stimulus presentation, study populations and statistical power with a quantitative meta-analytic approach in Comprehensive Meta-Analysis Version 2.0. Study outcomes indicated that schizophrenic patients’ EPP deficits were consistently observed across studies (d = −0.92, 95% CI = −1.06 < δ < −0.78), with identification tasks (d = −0.95, 95% CI = −1.11 < δ < −0.80) being more difficult to process than discrimination tasks (d = −0.74, 95% CI = −1.03 < δ < −0.44) and emotional stimuli being more difficult than neutral stimuli. Patients’ performance was influenced by both participant- and experiment-related factors. Their social cognitive deficits in EP could be further explained by right-lateralized impairments and abnormalities in primary auditory cortex, medial prefrontal cortex and auditory-insula connectivity. The data pointed to impaired pre-attentive and attentive processes, both of which played important roles in the abnormal EPP in the schizophrenic population. The current selective review and meta-analysis support the clinical advocacy of including EP in early diagnosis and rehabilitation in the general framework of social cognition and neurocognition deficits in schizophrenic disorders. Future cross-sectional and longitudinal studies are further suggested to investigate schizophrenic patients’ perception and production of EP in different languages and cultures, modality forms and neuro-cognitive domains.
APA, Harvard, Vancouver, ISO, and other styles
41

Zaidan, Noor Aina, and Md Sah Hj Salam. "Emotional speech feature selection using end-part segmented energy feature." Indonesian Journal of Electrical Engineering and Computer Science 15, no. 3 (September 1, 2019): 1374. http://dx.doi.org/10.11591/ijeecs.v15.i3.pp1374-1381.

Full text
Abstract:
Accurate detection of human emotion is crucial in industry to ensure effective conversations and message delivery. The process of identifying emotions must be carried out properly, using a method that guarantees a high level of emotion recognition. Energy is considered a prosodic information encoder, and ongoing research on the role of energy in speech prosody motivated us to run an experiment on energy features. We conducted two sets of studies: (1) whether local or global features contribute most to emotion recognition, and (2) the effect of end-part segment length on emotion recognition accuracy using two types of segmentation approach. This paper discusses the Absolute Time Intervals at Relative Positions (ATIR) segmentation approach and global ATIR (GATIR), using end-part segmented global energy features extracted from the Berlin Emotional Speech Database (EMO-DB). We observed that global features contribute more to emotion recognition, and that global features derived from longer segments give higher recognition accuracy than global features derived from short segments. The addition of an utterance-based feature (GTI) to ATIR segmentation increases accuracy by 5% to 8%, and GATIR outperformed the ATIR segmentation approach in terms of recognition rate. Almost all sub-tests in this study showed improved results, demonstrating that global features derived from longer segments capture more emotional information and enhance system performance.
APA, Harvard, Vancouver, ISO, and other styles
42

Monnot, Marilee, Robert Foley, and Elliott Ross. "Affective prosody: Whence motherese." Behavioral and Brain Sciences 27, no. 4 (August 2004): 518–19. http://dx.doi.org/10.1017/s0140525x04390114.

Full text
Abstract:
Motherese is a form of affective prosody injected automatically into speech during caregiving solicitude. Affective prosody is the aspect of language that conveys emotion by changes in tone, rhythm, and emphasis during speech. It is a neocortical function that allows graded, highly varied vocal emotional expression. Other mammals have only rigid, species-specific, limbic vocalizations. Thus, encephalization with corticalization is necessary for the evolution of progressively complex vocal emotional displays.
APA, Harvard, Vancouver, ISO, and other styles
43

Shea, T., A. Sergejew, D. Burnham, C. Jones, S. Rossell, D. Copolov, and G. Egan. "Emotional prosodic processing in auditory hallucinations." Schizophrenia Research 90, no. 1-3 (February 2007): 214–20. http://dx.doi.org/10.1016/j.schres.2006.09.021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Tsao, J. W., D. H. Dickey, and K. M. Heilman. "Emotional prosody in primary progressive aphasia." Neurology 63, no. 1 (July 12, 2004): 192–93. http://dx.doi.org/10.1212/01.wnl.0000132836.03040.2d.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Raithel, Vivian, and Martina Hielscher-Fastabend. "Emotional and Linguistic Perception of Prosody." Folia Phoniatrica et Logopaedica 56, no. 1 (2004): 7–13. http://dx.doi.org/10.1159/000075324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Paulmann, Silke, Desire Furnes, Anne Ming Bøkenes, and Philip J. Cozzolino. "How Psychological Stress Affects Emotional Prosody." PLOS ONE 11, no. 11 (November 1, 2016): e0165022. http://dx.doi.org/10.1371/journal.pone.0165022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Szymanowski, F., F. Szymanowski, S. A. Kotz, C. Schröder, M. Rotte, and R. Dengler. "Gender Differences in Processing Emotional Prosody." Clinical Neurophysiology 118, no. 4 (April 2007): e102-e103. http://dx.doi.org/10.1016/j.clinph.2006.11.239.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Krestar, Maura L., and Conor T. McLennan. "Responses to Semantically Neutral Words in Varying Emotional Intonations." Journal of Speech, Language, and Hearing Research 62, no. 3 (March 25, 2019): 733–44. http://dx.doi.org/10.1044/2018_jslhr-h-17-0428.

Full text
Abstract:
Purpose: Recent research on perception of emotionally charged material has found both an “emotionality effect” in which participants respond differently to emotionally charged stimuli relative to neutral stimuli in some cognitive–linguistic tasks and a “negativity bias” in which participants respond differently to negatively charged stimuli relative to neutral and positively charged stimuli. The current study investigated young adult listeners' bias when responding to neutral-meaning words in 2 tasks that varied attention to emotional intonation. Method: Half the participants completed a word identification task in which they were instructed to type a word they had heard presented binaurally through Sony stereo MDR-ZX100 headphones. The other half of the participants completed an intonation identification task in which they were instructed to use a SuperLab RB-740 button box to identify the emotional prosody of the same words over headphones. For both tasks, all auditory stimuli were semantically neutral words spoken in happy, sad, and neutral emotional intonations. Researchers measured percent correct and reaction time (RT) for each word in both tasks. Results: In the word identification task, when identifying semantically neutral words spoken in happy, sad, and neutral intonations, listeners' RTs to words in a sad intonation were longer than RTs to words in a happy intonation. In the intonation identification task, when identifying the emotional intonation of the same words spoken in the same emotional tones of voice, listeners' RTs to words in a sad intonation were significantly faster than those in a neutral intonation. Conclusions: Results demonstrate a potential attentional negativity bias for neutral words varying in emotional intonation. Such results support an attention-based theoretical account. In an intonation identification task, an advantage emerged for words in a negative (sad) intonation relative to words in a neutral intonation. Thus, current models of emotional speech should acknowledge the amount of attention to emotional content (i.e., prosody) necessary to complete a cognitive task, as it has the potential to bias processing.
APA, Harvard, Vancouver, ISO, and other styles
49

Yow, W. Quin, Jiawen Lee, and Xiaoqian Li. "AGE-RELATED DECLINES IN SOCIAL COGNITIVE PROCESSES OF OLDER ADULTS." Innovation in Aging 3, Supplement_1 (November 2019): S882—S883. http://dx.doi.org/10.1093/geroni/igz038.3232.

Full text
Abstract:
Despite current literature suggesting that various social cognitive processes seem to be impaired in late adulthood, e.g., processing of social gaze cues, the trajectory of decline in social cognition in late adulthood is not well understood (e.g., Grainger et al., 2018; Paal & Bereczkei, 2007). As part of a multi-institutional research project, we began to systematically investigate whether there is age-related decline in older adults’ ability to infer others’ mental states, integrate multiple referential cues, and identify emotional states of others using prosodic cues. Sixteen older adults aged 71-85, of which 9 were cognitively healthy and 7 had mild-to-moderate dementia, and 7 younger adults aged 19-37 underwent three tasks. In a theory-of-mind story task, participants answered true/false questions about the beliefs of the protagonists in the stories. A cue integration task assessed participants’ ability to integrate the experimenter’s gaze and semantic cues to identify a referent object. In an emotion-prosody task, participants judged whether the speaker sounded happy or sad in low-pass filtered audio. Non-parametric tests revealed that younger adults outperformed both groups of older adults (both ps = .001) in inferring the protagonists’ beliefs in the stories. Younger adults were also better and more accurate than both groups of older adults in integrating cues to identify the referent object and in using prosodic cues to identify emotional states, respectively (ps < .001). Both groups of older adults did not differ significantly from each other in the tasks. These findings provide emerging and important insights into the decline of social cognitive processes in late adulthood.
APA, Harvard, Vancouver, ISO, and other styles
50

Pinheiro, A. P., E. del Re, J. Mezin, P. G. Nestor, A. Rauber, R. W. McCarley, Ó. F. Gonçalves, and M. A. Niznikiewicz. "Sensory-based and higher-order operations contribute to abnormal emotional prosody processing in schizophrenia: an electrophysiological investigation." Psychological Medicine 43, no. 3 (July 10, 2012): 603–18. http://dx.doi.org/10.1017/s003329171200133x.

Full text
Abstract:
Background: Schizophrenia is characterized by deficits in emotional prosody (EP) perception. However, it is not clear which stages of processing prosody are abnormal and whether the presence of semantic content contributes to the abnormality. This study aimed to examine event-related potential (ERP) correlates of EP processing in 15 chronic schizophrenia individuals and 15 healthy controls. Method: A total of 114 sentences with neutral semantic content [sentences with semantic content (SSC) condition] were generated by a female speaker (38 with happy, 38 with angry, and 38 with neutral intonation). The same sentences were synthesized and presented in the ‘pure prosody’ sentences (PPS) condition where semantic content was unintelligible. Results: Group differences were observed for N100 and P200 amplitude: patients were characterized by more negative N100 for SSC, and more positive P200 for angry and happy SSC and happy PPS. Correlations were found between delusions and P200 amplitude for happy SSC and PPS. Higher error rates in the recognition of EP were also observed in schizophrenia: higher error rates in neutral SSC were associated with reduced N100, and higher error rates in angry SSC were associated with reduced P200. Conclusions: These results indicate that abnormalities in prosody processing occur at the three stages of EP processing, and are enhanced in SSC. Correlations between P200 amplitude for happy prosody and delusions suggest a role that abnormalities in the processing of emotionally salient acoustic cues may play in schizophrenia symptomatology. Correlations between ERP and behavioral data point to a relationship between early sensory abnormalities and prosody recognition in schizophrenia.
APA, Harvard, Vancouver, ISO, and other styles