Journal articles on the topic 'Musical perception and cognition'

Consult the top 50 journal articles for your research on the topic 'Musical perception and cognition.'

1

Merten, Natascha, Mary E. Fischer, Lauren K. Dillard, Barbara E. K. Klein, Ted S. Tweed, and Karen J. Cruickshanks. "Benefit of Musical Training for Speech Perception and Cognition Later in Life." Journal of Speech, Language, and Hearing Research 64, no. 7 (2021): 2885–96. http://dx.doi.org/10.1044/2021_jslhr-20-00588.

Abstract:
Purpose: The aim of this study was to determine the long-term associations of musical training with speech perception in adverse conditions and cognition in a longitudinal cohort study of middle-age to older adults. Method: This study is based on Epidemiology of Hearing Loss Study participants. We asked participants at baseline (1993–1995) about their musical training. Speech perception (word recognition in competing message; Northwestern University Auditory Test Number 6), cognitive function (cognitive test battery), and impairment (self-report or surrogate report of Alzheimer's disease or dementia, and/or a Mini-Mental State Examination score ≤ 24) were assessed up to 5 times over the 20-year follow-up. We included 2,938 Epidemiology of Hearing Loss Study participants who had musical training data and at least one follow-up of speech perception and/or cognitive assessment. We used linear mixed-effects models to determine associations between musicianship and decline in speech perception and cognitive function over time and Cox regression models to evaluate associations of musical training with 20-year cumulative incidence of speech perception and cognitive impairment. Models were adjusted for age, sex, and occupation and repeated with additional adjustment for health-related confounders and education. Results: Musicians showed less speech perception decline over time with stronger effects in women (0.16% difference, 95% confidence interval [CI] [0.05, 0.26]). Among men, musicians had, on average, better speech perception than nonmusicians (3.41% difference, 95% CI [0.62, 6.20]) and were less likely to develop a cognitive impairment than nonmusicians (hazard ratio = 0.58, 95% CI [0.37, 0.91]). Conclusions: Musicians showed an advantage in speech perception abilities and cognition later in life and less decline over time with different magnitudes of effect sizes in men and women. Associations remained with further adjustment, indicating that some degree of the advantage of musical training is independent of socioeconomic or health differences. If confirmed, these findings could have implications for developing speech perception intervention and prevention strategies. Supplemental Material: https://doi.org/10.23641/asha.14825454
2

Large, Edward W., Ji Chul Kim, Nicole Kristine Flaig, Jamshed J. Bharucha, and Carol Lynne Krumhansl. "A Neurodynamic Account of Musical Tonality." Music Perception 33, no. 3 (2016): 319–31. http://dx.doi.org/10.1525/mp.2016.33.3.319.

Abstract:
Science since antiquity has asked whether mathematical relationships among acoustic frequencies govern musical relationships. Psychophysics rejected frequency ratio theories, focusing on sensory phenomena predicted by linear analysis of sound. Cognitive psychologists have since focused on long-term exposure to the music of one’s culture and short-term sensitivity to statistical regularities. Today evidence is rapidly mounting that oscillatory neurodynamics is an important source of nonlinear auditory responses. This leads us to reevaluate the significance of frequency relationships in the perception of music. Here, we present a dynamical systems analysis of mode-locked neural oscillation that predicts cross-cultural invariances in music perception and cognition. We show that this theoretical framework combines with short- and long-term learning to explain the perception of Hindustani rāgas, not only by encultured Indian listeners but also by Western listeners unfamiliar with the style. These findings demonstrate that intrinsic neurodynamics contribute significantly to the perception of musical structure.
3

Kippen, James. "An Ethnomusicological Approach to the Analysis of Musical Cognition." Music Perception 5, no. 2 (1987): 173–95. http://dx.doi.org/10.2307/40285391.

Abstract:
A genre of North Indian drumming has become the focus of experimental research in which an "expert system" is programmed to simulate the musical knowledge of the drummers themselves. Experiments involve the interaction of musicians with a computerized linguistic model contained within the expert system that formalizes their intuitive ideas regarding musical structure in a generative grammar. The accuracy of the model is determined by the musicians themselves, who assess its ability to generate correct pieces of music. The main aims of the research are the identification of the cognitive patterns involved in the creation and interpretation of a particular musical system, and the establishment of new techniques that make this approach to cognitive analysis applicable to other musical systems. This article attempts to demonstrate the advantages an ethnomusicological approach can bring to the analysis of musical perception and cognition. Such an approach links the analysis of musical sound to an understanding of the sociocultural context in which that music is created and interpreted.
4

Heaton, Pamela. "Assessing musical skills in autistic children who are not savants." Philosophical Transactions of the Royal Society B: Biological Sciences 364, no. 1522 (2009): 1443–47. http://dx.doi.org/10.1098/rstb.2008.0327.

Abstract:
Descriptions of autistic musical savants suggest that they possess extraordinary skills within the domain. However, until recently little was known about the musical skills and potential of individuals with autism who are not savants. The results from these more recent studies investigating music perception, cognition and learning in musically untrained children with autism have revealed a pattern of abilities that are either enhanced or spared. For example, increased sensitivity to musical pitch and timbre is frequently observed, and studies investigating perception of musical structure and emotions have consistently failed to reveal deficits in autism. While the phenomenon of the savant syndrome is of considerable theoretical interest, it may have led to an under-consideration of the potential talents and skills of that vast majority of autistic individuals, who do not meet savant criteria. Data from empirical studies show that many autistic children possess musical potential that can and should be developed.
5

Koniari, Dimitra, Sandrine Predazzer, and Marc Mélen. "Categorization and Schematization Processes Used in Music Perception by 10- to 11-Year-Old Children." Music Perception 18, no. 3 (2001): 297–324. http://dx.doi.org/10.1525/mp.2001.18.3.297.

Abstract:
This study investigates the role of the cue abstraction mechanism within the framework of cognitive processes underlying listening to a piece of music by 10- to 11-year-old children. Four experiments used different procedures to address three main processes: (a) the categorization of musical features, (b) the segmentation of the musical discourse, and (c) the elaboration of a mental schema of the piece. Two short tonal pieces from the classical piano repertoire were used as experimental material. Experiments 1 and 2 assessed children's capacity to classify segments from the same musical piece into the appropriate category and to evaluate the segments' degree of similarity. Experiment 3 investigated the segmentation process, which underlies the organization of musical events into groups. Experiment 4 explored children's ability to reconstruct a piece of music after hearing it. The influence of musical training is investigated by comparing musician and nonmusician children. In addition, the effects of different musical features are explored.
6

Marčenoka, Marina. "Formation of the Artistic Image in the Music Perception Process." SOCIETY, INTEGRATION, EDUCATION. Proceedings of the International Scientific Conference 1 (May 30, 2015): 467. http://dx.doi.org/10.17770/sie2013vol1.578.

Abstract:
Musical art, which clears vast possibilities for cognition of the man’s internal world, develops feelings of empathy and tolerance, facilitates the creative comprehension of personal, moral and aesthetical values of micro and macro social media. Musical art, while reflecting the reality by means of the artistic image, the system of musical expression means, has its own specificity in development of universal values. This specificity consists in development of personality’s aesthetical and moral needs and in recovery of the spiritual culture; and only music with high spiritual contents is able to achieve it. Aim of the paper is to define the content and succession of formation of students’ artistic image of a musical composition in the process of music perception. Methods of the research are: theoretical analysis of psychological, pedagogical and musical literature about approaches to the problem of formation of the image in the music perception process. Results of the research: the essence and content of the artistic image of a musical composition and succession of its development in the musical education process were defined.
7

Maes, Pieter-Jan, Edith Van Dyck, Micheline Lesaffre, Marc Leman, and Pieter M. Kroonenberg. "The Coupling of Action and Perception in Musical Meaning Formation." Music Perception 32, no. 1 (2014): 67–84. http://dx.doi.org/10.1525/mp.2014.32.1.67.

Abstract:
The embodied perspective on music cognition has stressed the central role of the body and body movements in musical meaning formation processes. In the present study, we investigate by means of a behavioral experiment how free body movements in response to music (i.e., action) can be linked to specific linguistic, metaphorical descriptions people use to describe the expressive qualities they perceive in the music (i.e., perception). We introduce a dimensional model based on the Effort/Shape theory of Laban in order to target musical expressivity from an embodied perspective. Also, we investigate whether a coupling between action and perception is dependent on the musical background of the participants (i.e., trained versus untrained). The results show that the physical appearance of the free body movements that participants perform in response to music are reliably linked to the linguistic descriptions of musical expressiveness in terms of the underlying quality. Moreover, this result is found to be independent of the participants’ musical background.
8

Patel, Aniruddh D. "Musical Rhythm, Linguistic Rhythm, and Human Evolution." Music Perception 24, no. 1 (2006): 99–104. http://dx.doi.org/10.1525/mp.2006.24.1.99.

Abstract:
There is now a vigorous debate over the evolutionary status of music. Some scholars argue that humans have been shaped by evolution to be musical, while others maintain that musical abilities have not been a target of natural selection but reflect an alternative use of more adaptive cognitive skills. One way to address this debate is to break music cognition into its underlying components and determine whether any of these are innate, specific to music, and unique to humans. Taking this approach, Justus and Hutsler (2005) and McDermott and Hauser (2005) suggest that musical pitch perception can be explained without invoking natural selection for music. However, they leave the issue of musical rhythm largely unexplored. This comment extends their conceptual approach to musical rhythm and suggests how issues of innateness, domain specificity, and human specificity might be addressed.
9

Leman, Marc, and Pieter-Jan Maes. "The Role of Embodiment in the Perception of Music." Empirical Musicology Review 9, no. 3-4 (2015): 236. http://dx.doi.org/10.18061/emr.v9i3-4.4498.

Abstract:
In this paper, we present recent and on-going research in the field of embodied music cognition, with a focus on studies conducted at IPEM, the research laboratory in systematic musicology at Ghent University, Belgium. Attention is devoted to encoding/decoding principles underlying musical expressiveness, synchronization and entrainment, and action-based effects on music perception. The discussed empirical findings demonstrate that embodiment is only one component in an interconnected network of sensory, motor, affective, and cognitive systems involved in music perception. Currently, these findings drive embodiment theory towards a more dynamical approach in which the interaction between various internal processes and the external environment are of central importance.
10

Matyja, Jakub Ryszard. "Toward Extended Music Cognition: Commentary on Music and Cognitive Extension." Empirical Musicology Review 9, no. 3-4 (2015): 203. http://dx.doi.org/10.18061/emr.v9i3-4.4450.

Abstract:
In his paper, Luke Kersten (2014) argues that since music cognition is part of a locationally wide computational system, it can be considered as an extended process. Overall I sympathize with Kersten’s (2014) view. However, in the present paper I underline those issues that need to be, in my opinion, developed in a more detailed and cautious way. Extended music perception is the idea that “it ain’t all in the head”, but rather involves the exploitation of non-neural body and musical environment. In order to push the debate further, I suggest situating Kersten’s views within a broader context of recent research, thus strengthening the theoretical importance of his proposal.
11

Ross, Barry, and Sarah Knight. "Reports of equitonic scale systems in African musical traditions and their implications for cognitive models of pitch organization." Musicae Scientiae 23, no. 4 (2017): 387–402. http://dx.doi.org/10.1177/1029864917736105.

Abstract:
Psychological research into musical behavior has mostly focused on Western music, explored with experiments utilizing Western participants. This ethnocentric bias limits the generalizability of many claims in the field. We argue that our current understanding of the cognition of pitch organization might be helpfully informed by data gathered in non-Western contexts. In particular, musical traditions featuring equal-spaced scales (where all scale-step interval sizes are equal) are suggested to pose a challenge to popular models of pitch organization, in which unequally spaced scales are suggested to provide cognitive anchor points for on-the-fly pitch orientation. This article presents a summary and theoretical consideration of all available evidence on equal-spaced scales, the vast majority of which appear in east Africa. It is noted that despite equal spacing, there is evidence to suggest that tonal centers are still perceived by idiomatic listeners. We then proceed to propose how such tonal center perception is possible within equal-spaced tonal environments. In short, the existence of equal-spaced scale systems shifts the focus of research from interval uniqueness to alternative explanations for the perception of tonal centers, such as implicit statistical tracking, secondary parameters, recognition of learnt patterns as tonal cues, and so on. Throughout, we note that interdisciplinary work involving ethnomusicologists and psychologists would be beneficial in answering questions about music cognition, and by extension, human cognition in general.
12

Iyer, Vijay. "Embodied Mind, Situated Cognition, and Expressive Microtiming in African-American Music." Music Perception 19, no. 3 (2002): 387–414. http://dx.doi.org/10.1525/mp.2002.19.3.387.

Abstract:
The dual theories of embodied mind and situated cognition, in which physical/temporal embodiment and physical/social/cultural environment contribute crucially to the structure of mind, are brought to bear on issues in music perception. It is argued that cognitive universals grounded in human bodily experience are tempered by the cultural specificity that constructs the role of the body in musical performance. Special focus is given to microrhythmic techniques in specific forms of African-American music, using audio examples created by the author or sampled from well-known jazz recordings.
13

Haslinger, B., P. Erhard, E. Altenmüller, U. Schroeder, H. Boecker, and A. O. Ceballos-Baumann. "Transmodal Sensorimotor Networks during Action Observation in Professional Pianists." Journal of Cognitive Neuroscience 17, no. 2 (2005): 282–93. http://dx.doi.org/10.1162/0898929053124893.

Abstract:
Audiovisual perception and imitation are essential for musical learning and skill acquisition. We compared professional pianists to musically naive controls with fMRI while observing piano playing finger–hand movements and serial finger–thumb opposition movements both with and without synchronous piano sound. Pianists showed stronger activations within a fronto-parieto-temporal network while observing piano playing compared to controls and contrasted to perception of serial finger–thumb opposition movements. Observation of silent piano playing additionally recruited auditory areas in pianists. Perception of piano sounds coupled with serial finger–thumb opposition movements evoked increased activation within the sensorimotor network. This indicates specialization of multimodal auditory–sensorimotor systems within a fronto-parieto-temporal network by professional musical training. Musical “language,” which is acquired by observation and imitation, seems to be tightly coupled to this network in accord with an observation–execution system linking visual and auditory perception to motor performance.
14

Nan, Yun, Li Liu, Eveline Geiser, et al. "Piano training enhances the neural processing of pitch and improves speech perception in Mandarin-speaking children." Proceedings of the National Academy of Sciences 115, no. 28 (2018): E6630–E6639. http://dx.doi.org/10.1073/pnas.1808412115.

Abstract:
Musical training confers advantages in speech-sound processing, which could play an important role in early childhood education. To understand the mechanisms of this effect, we used event-related potential and behavioral measures in a longitudinal design. Seventy-four Mandarin-speaking children aged 4–5 y old were pseudorandomly assigned to piano training, reading training, or a no-contact control group. Six months of piano training improved behavioral auditory word discrimination in general as well as word discrimination based on vowels compared with the controls. The reading group yielded similar trends. However, the piano group demonstrated unique advantages over the reading and control groups in consonant-based word discrimination and in enhanced positive mismatch responses (pMMRs) to lexical tone and musical pitch changes. The improved word discrimination based on consonants correlated with the enhancements in musical pitch pMMRs among the children in the piano group. In contrast, all three groups improved equally on general cognitive measures, including tests of IQ, working memory, and attention. The results suggest strengthened common sound processing across domains as an important mechanism underlying the benefits of musical training on language processing. In addition, although we failed to find far-transfer effects of musical training to general cognition, the near-transfer effects to speech perception establish the potential for musical training to help children improve their language skills. Piano training was not inferior to reading training on direct tests of language function, and it even seemed superior to reading training in enhancing consonant discrimination.
15

Boltz, Marilyn G. "Illusory Tempo Changes Due to Musical Characteristics." Music Perception 28, no. 4 (2011): 367–86. http://dx.doi.org/10.1525/mp.2011.28.4.367.

Abstract:
Recent research in music cognition has investigated ways in which different structural dimensions interact to influence perception and cognition. In the present research, various musical characteristics were manipulated to observe their potential influence on perceived tempo. In Experiment 1, participants were given a paired comparison task in which music-like patterns differed in both the pitch octave (high vs. low) and timbre (bright vs. dull) in which they were played. The results indicated that relative to their standard referents, comparison melodies were judged faster when displaying a higher pitch and/or a brighter timbre—even when no actual tempo differences existed. Experiment 2 converged on these findings by demonstrating that the perceived tempo of a melody was judged faster when it increased in pitch and/or loudness over time. These results are suggested to stem from an overgeneralization of certain structural correlations within the natural environment that, in turn, has implications for both musical performance and the processing of tempo information.
16

Carey, Daniel, Stuart Rosen, Saloni Krishnan, et al. "Generality and specificity in the effects of musical expertise on perception and cognition." Cognition 137 (April 2015): 81–105. http://dx.doi.org/10.1016/j.cognition.2014.12.005.

17

Leech-Wilkinson, Daniel. "Sound and Meaning in Recordings of Schubert's “Die junge Nonne”." Musicae Scientiae 11, no. 2 (2007): 209–36. http://dx.doi.org/10.1177/102986490701100204.

Abstract:
Musicology's growing interest in performance brings it closer to musical science through a shared interest in the relationship between musical sounds and emotional states. However, the fact that musical performance styles change over time implies that understandings of musical compositions change too. And this has implications for studies of music cognition. While the mechanisms by which musical sounds suggest meaning are likely to be biologically grounded, what musical sounds signify in specific performance contexts today may not always be what they signified in the past, nor what they will signify in the future. Studies of music cognition need to take account of performance style change and its potential to inflect conclusions with cultural assumptions. The recorded performance history of Schubert's “Die junge Nonne” offers examples of significant change in style, as well as a range of radically contrasting views of what the song's text may mean. By examining details of performances, and interpreting them in the light of work on music perception and cognition, it is possible to gain a clearer understanding of how signs of emotional state are deployed in performance by singers. At the same time, in the absence of strong evidence as to how individual performances were understood in the past, we have to recognise that we can only speak with any confidence for our own time.
18

Seger, Carol A., Brian J. Spiering, Anastasia G. Sares, et al. "Corticostriatal Contributions to Musical Expectancy Perception." Journal of Cognitive Neuroscience 25, no. 7 (2013): 1062–77. http://dx.doi.org/10.1162/jocn_a_00371.

Abstract:
This study investigates the functional neuroanatomy of harmonic music perception with fMRI. We presented short pieces of Western classical music to nonmusicians. The ending of each piece was systematically manipulated in the following four ways: Standard Cadence (expected resolution), Deceptive Cadence (moderate deviation from expectation), Modulated Cadence (strong deviation from expectation but remaining within the harmonic structure of Western tonal music), and Atonal Cadence (strongest deviation from expectation by leaving the harmonic structure of Western tonal music). Music compared with baseline broadly recruited regions of the bilateral superior temporal gyrus (STG) and the right inferior frontal gyrus (IFG). Parametric regressors scaled to the degree of deviation from harmonic expectancy identified regions sensitive to expectancy violation. Areas within the BG were significantly modulated by expectancy violation, indicating a previously unappreciated role in harmonic processing. Expectancy violation also recruited bilateral cortical regions in the IFG and anterior STG, previously associated with syntactic processing in other domains. The posterior STG was not significantly modulated by expectancy. Granger causality mapping found functional connectivity between IFG, anterior STG, posterior STG, and the BG during music perception. Our results imply the IFG, anterior STG, and the BG are recruited for higher-order harmonic processing, whereas the posterior STG is recruited for basic pitch and melodic processing.
19

Schaefer, Rebecca S. "Mental Representations in Musical Processing and their Role in Action-Perception Loops." Empirical Musicology Review 9, no. 3-4 (2015): 161. http://dx.doi.org/10.18061/emr.v9i3-4.4291.

Abstract:
Music is created in the listener as it is perceived and interpreted - its meaning derived from our unique sense of it; likely driving the range of interpersonal differences found in music processing. Person-specific mental representations of music are thought to unfold on multiple levels as we listen, spanning from an entire piece of music to regularities detected across notes. As we track incoming auditory information, predictions are generated at different levels for different musical aspects, leading to specific percepts and behavioral outputs, illustrating a tight coupling of cognition, perception and action. This coupling, together with a prominent role of prediction in music processing, fits well with recently described ideas about the role of predictive processing in cognitive function, which appears to be especially suitable to account for the role of mental models in musical perception and action. Investigating the cerebral correlates of constructive music imagination offers an experimentally tractable approach to clarifying how mental models of music are represented in the brain. I suggest here that mental representations underlying imagery are multimodal, informed and modulated by the body and its in- and outputs, while perception and action are informed and modulated by predictions based on mental models.  
20

Schaefer, Rebecca S., Katie Overy, and Peter Nelson. "Affect and non-uniform characteristics of predictive processing in musical behaviour." Behavioral and Brain Sciences 36, no. 3 (2013): 226–27. http://dx.doi.org/10.1017/s0140525x12002373.

Abstract:
The important roles of prediction and prior experience are well established in music research and fit well with Clark's concept of unified perception, cognition, and action arising from hierarchical, bidirectional predictive processing. However, in order to fully account for human musical intelligence, Clark needs to further consider the powerful and variable role of affect in relation to prediction error.
21

Bharucha, Jamshed J. "Music Cognition and Perceptual Facilitation: A Connectionist Framework." Music Perception 5, no. 1 (1987): 1–30. http://dx.doi.org/10.2307/40285384.

Abstract:
The mind internalizes persistent structural regularities in music and recruits these internalized representations to facilitate subsequent perception. Facilitation underlies the generation of musical expectations and implications and the influence of a musical context on consonance and memory. Facilitation is demonstrated in experiments showing priming of chords: chords that are harmonically closely related to a preceding context are processed more quickly than chords that are harmonically distant from the context. A tonal context enhances intonational sensitivity for related chords and heightens their consonance. Facilitation occurs even when related chords don't share component tones with the context, and even when overlapping harmonics are eliminated. These results point to the indirect activation of representational units at a cognitive level. In a parallel study conducted in India, tones considered to play an important role in a rag but absent from the experimental rendition of that rag were facilitated in the same way. In a connectionist framework, facilitation is a consequence of activation spreading through a network of representational units whose pattern of connectivity encodes musical relationships. In a proposed connectionist model of harmony, each event in a musical sequence activates tone units, and activation spreads via connecting links to parent chord units and then to parent key units. Activation reverberates bidirectionally until the network settles into a state of equilibrium. The initial stages of the activation process constitute the bottom-up influence of the sounded tones, while the later, reverberatory stages constitute the top-down influence of learned, schematic structures internalized at the cognitive level. Computer simulations of the model show the same pattern of data as human subjects in experiments on relatedness judgments of chords and memory for chord sequences.
22

Sammler, Daniela, and Stefan Elmer. "Advances in the Neurocognition of Music and Language." Brain Sciences 10, no. 8 (2020): 509. http://dx.doi.org/10.3390/brainsci10080509.

Abstract:
Neurocomparative music and language research has seen major advances over the past two decades. The goal of this Special Issue “Advances in the Neurocognition of Music and Language” was to showcase the multiple neural analogies between musical and linguistic information processing, their entwined organization in human perception and cognition and to infer the applicability of the combined knowledge in pedagogy and therapy. Here, we summarize the main insights provided by the contributions and integrate them into current frameworks of rhythm processing, neuronal entrainment, predictive coding and cognitive control.
23

Gerardi, Gina M., and Louann Gerken. "The Development of Affective Responses to Modality and Melodic Contour." Music Perception 12, no. 3 (1995): 279–90. http://dx.doi.org/10.2307/40286184.

Abstract:
Although it is well established that melodic contour (ascending vs. descending) and modality (major vs. minor) evoke consistent emotional responses in adult listeners, the mechanisms underlying musical affect are unknown. One possibility is that the mechanisms are based on innate perceptual abilities (e.g., Helmholtz, 1885/1954). Another possibility is that the ability to associate various aspects of music with emotion is learned through exposure to one's musical culture (e.g., Serafine, 1988). The current research examines the affective responses to major and minor ascending and descending melodies by 5-year-olds, 8-year-olds, and college students. Affective responses to modality did not appear until age eight and affective responses to contour appeared only in the college students. These results are consistent with previous developmental perception experiments on contour and modality (Imberty, 1969; Krumhansl & Keil, 1982; Morrongiello & Roes, 1990) and extend the understanding of the relation of perception, cognition, and culture in determining musical affect.
24

Schüler, Nico. "From Musical Grammars to Music Cognition in the 1980s and 1990s: Highlights of the History of Computer-Assisted Music Analysis." Musicological Annual 43, no. 2 (2007): 371–96. http://dx.doi.org/10.4312/mz.43.2.371-396.

Abstract:
While approaches that had already established historical precedents – computer-assisted analytical approaches drawing on statistics and information theory – developed further, many research projects conducted during the 1980s aimed at the development of new methods of computer-assisted music analysis. Some projects discovered new possibilities related to using computers to simulate human cognition and perception, drawing on cognitive musicology and Artificial Intelligence, areas that were themselves spurred on by new technical developments and by developments in computer program design. The 1990s ushered in revolutionary methods of music analysis, especially those drawing on Artificial Intelligence research. Some of these approaches started to focus on musical sound, rather than scores. They allowed music analysis to focus on how music is actually perceived. In some approaches, the analysis of music and of music cognition merged. This article provides an overview of computer-assisted music analysis of the 1980s and 1990s, as it relates to music cognition. Selected approaches are being discussed.
25

Petitot, Jean. "Perception, cognition and morphological objectivity." Contemporary Music Review 4, no. 1 (1989): 171–80. http://dx.doi.org/10.1080/07494468900640271.

26

Ohnishi, Takashi, Hiroshi Matsuda, Takashi Asada, et al. "Functional anatomy of musical perception in musicians." NeuroImage 13, no. 6 (2001): 923. http://dx.doi.org/10.1016/s1053-8119(01)92265-7.

27

Bharucha, Jamshed J., and W. Einar Mencl. "Two Issues in Auditory Cognition: Self-Organization of Octave Categories and Pitch-Invariant Pattern Recognition." Psychological Science 7, no. 3 (1996): 142–49. http://dx.doi.org/10.1111/j.1467-9280.1996.tb00347.x.

Abstract:
The study of auditory and music cognition provides opportunities to explore general cognitive mechanisms in a specific, highly structured domain. We discuss two problems with implications for other domains of perception: the self-organization of perceptual categories and invariant pattern recognition. The perceptual category we consider is the octave. We show how general principles of self-organization operating on a cochlear spectral representation can yield octave categories. The example of invariant pattern recognition we consider is the recognition of invariant frequency patterns transformed to different absolute frequencies. We suggest a system that uses pitch or musical key to map tones into a pitch-invariant format.
28

Kon, Maria. "The Context-Dependency of the Experience of Auditory Succession and Prospects for Embodying Philosophical Models of Temporal Experience." Empirical Musicology Review 9, no. 3-4 (2015): 213. http://dx.doi.org/10.18061/emr.v9i3-4.4478.

Abstract:
Recent philosophical work on temporal experience offers generic models that are often assumed to apply to all sensory modalities.  We show that the models serve as broad frameworks in which different aspects of cognitive science can be slotted and, thus, are beneficial to furthering research programs in embodied music cognition.  Here we discuss a particular feature of temporal experience that plays a key role in such philosophical work: a distinction between the experience of succession and the mere succession of experiences.  We question the presupposition that there is such an evident, clear distinction and suggest that, instead, it is context-dependent.  After suggesting a way to modify the philosophical models of temporal experience to accommodate this context-dependency, we illustrate that these models can fruitfully incorporate research programs in embodied musical cognition.  To do so we supplement a philosophical model with Godøy’s recent work that links bodily movement with musical perception.  The Godøy-informed model is shown to facilitate novel hypotheses, refine our general notion of context-dependency and point towards possible extensions.
29

Storino, Mariateresa, Rossana Dalmonte, and Mario Baroni. "An Investigation on the Perception of Musical Style." Music Perception 24, no. 5 (2007): 417–32. http://dx.doi.org/10.1525/mp.2007.24.5.417.

Abstract:
This study focuses on the processing of Italian Baroque composer Giovanni Legrenzi's musical style. In a previous work, Baroni, Dalmonte, and Jacoboni (2003) elaborated a generative grammar and implemented it in a computer program called LEGRE that supposedly produces pieces in the style of Legrenzi. The main purpose of the present research is to assess whether a grammar can capture stylistic features that are relevant for stylistic categorization. Four experiments were designed to inquire into the cognitive processes involved in musical style categorization. The overall set of data was then used to evaluate whether a generative grammar could describe all of the features of a style, and to shed new light onto the cognitive processes involved in musical style perception.
30

Serdiuk, Ya A. "Complex structures of the virtual in the formation of an associative-figurative plan of a musical work." Aspects of Historical Musicology 14, no. 14 (2018): 207–28. http://dx.doi.org/10.34064/khnum2-14.14.

Abstract:
Background. In recent decades, musicology has demonstrated a steady interest not only in the grammar of the musical language, in the structural-logical side of the musical form, but also in the associative-figurative plan of music. The latter is increasingly becoming the subject of not only spontaneously-intuitive cognition, but also of decoding and systematization. The works on musical semantics, disclosing the meanings of musical lexemes in the context of that culture, that epoch-making style, in which they originated, belong, in particular, to L. Shaimuhametova, A. Asfandiarova, I. Alekseeva, H. Baikieva, A. Hordeeva, N. Kuznetsova, N. Drach, I. Stohniy, H. Taraeva, L. Kazantseva. We also need to note the numerous studies of rhetoric and symbolism in Baroque music, especially, in J. S. Bach’s works. H. Poltavtseva describes the connection of types of musical language and their perception, as well as the process of comprehension on this basis of musical imagery. The study of musical content as a new direction of scientific thought was carried out by V. Kholopova and A. Kudriashov and found its application in creating of same name training courses. The purpose of the proposed study is an attempt to describe the process of forming a figurative subtext of a musical work using the concept of “virtual”, which, despite the wide spreading in modern musicological works of various directions, still does not have an established semantic structure. At the same time, it can be fruitfully used to study many musical phenomena, including the figurative subtext of a musical work. Research methodology. In this study, we rely on the previously developed by us the conceptual system of the virtual in music, in particular, on one of its components – the virtual cognitive applied to the sphere of musical semantics. We used the modeling method to reproduce the algorithm of the formation of the figurative plan of a musical work, the method of semantic analysis to reveal the meanings of musical lexemes, the analytical methods of musical theoretical disciplines for considering the musical material. The results of the research. Virtual cognitive is connected with the hidden possibilities of the musical text and the hidden semantic plans of the musical work. We define hidden musical text plans as “virtual structures”. The latter, in combination with other components of the musical composition and specific features of the perception of music, shape the complex structure of the virtual, which we consider as one of the factors in the formation of the associative-figurative plan of the musical work. In our study, we rely on the idea of both a virtual cognitive and a musical work as a sign-oriented structure that can be studied from the standpoint of the general theory of language, as well as on the statement about the dependence of the perception of the musical text on a thesaurus of a performer or a listener. In this connection, using such terminological pairs as “language-speech”, “text-context”, “denotation-connotation”, we add one more: “text-subtext”. The relation of the last two pairs, in our opinion, is a correlation: “denotation / text”, “connotation / subtext”. Connotations arise thanks to the context, in which this or that text appears and functions.
Both denotations (direct meanings) and connotations (accompanying meanings constituting the area of a subtext) belong to the realm of the virtual, because: 1) the meanings of the linguistic sign are products of individual and collective consciousness; 2) contextualized meanings can be revealed to the recipient only if the latter knows the context, in which this or that sign unit originated. However, if there is not enough auditory experience, both the denotation and the hidden connotations of the text (subtext) exist only as an unrealized opportunity, that is, as virtual. We will consider as musical-speech denotations: pitch characteristics, in particular, mode and tonal, harmonic, various “types of musical language” (after H. Poltavtseva), intonational turns with fixed meanings, the virtual structures of the facture-sound level – individual coloring of sound, hidden polyphony. As connotations – the virtual structures of the composition of the dramaturgic level: numerical symbolism, hidden form-building principles, which are the expression of a certain philosophical idea. Thus, the figurative subtext of a musical work is a compound structure formed by a number of elements and processes. The substrates of this complex are: 1) the musical notation, in which the basic parameters of the sounding are fixed; 2) sounding. On this basis, the other virtual formations arise: 1) performer’s mental, audial and motor ideas about a musical work; 2) listener’s perceptions – the modes of psychosomatic activity and figurative associations that arise on their basis; 3) denotations – semantics, fixed to one or another tonalities, to intonation formulas, to rhetorical figures etc.; 4) connotations, the area of subtext, often generated by the contexts in which the text appears, functions and is apperceived, taking into account the recipient’s thesaurus. In the context of perception, we include not only the properties of the recipient’s thesaurus, but also the communicative situation, in which this or that work is performed / perceived. Conclusions. Consequently, another part of the sphere of virtual cognitive is the complex structures of the virtual, which act as a factor of the formation of an associative-figurative plan of a musical work. The components of these complexes are the components of the musical language (which include the virtual structures of the musical text, especially the factural and syntactic levels) in conjunction with the individual characteristics of perception due to the context of the communicative situation, the capacity and the nature of recipient’s thesaurus – a kind of filter, through which the meaning goes filling those or other musical constructions. The prospects for further research on virtual cognitive in the field of musical semantics provide for a more detailed and multidimensional consideration of the complex structures of the virtual in musical texts of different historical eras and styles.
31

Leman, Marc, and Luiz Naveda. "Basic Gestures as Spatiotemporal Reference Frames for Repetitive Dance/Music Patterns in Samba and Charleston." Music Perception 28, no. 1 (2010): 71–91. http://dx.doi.org/10.1525/mp.2010.28.1.71.

Abstract:
The goal of the present study is to gain better insight into how dancers establish, through dancing, a spatiotemporal reference frame in synchrony with musical cues. With the aim of achieving this, repetitive dance patterns of samba and Charleston were recorded using a three-dimensional motion capture system. Geometric patterns then were extracted from each joint of the dancer's body. The method uses a body-centered reference frame and decomposes the movement into nonorthogonal periodicities that match periods of the musical meter. Musical cues (such as meter and loudness) as well as action-based cues (such as velocity) can be projected onto the patterns, thus providing spatiotemporal reference frames, or 'basic gestures,' for action-perception couplings. Conceptually speaking, the spatiotemporal reference frames control minimum effort points in action-perception couplings. They reside as memory patterns in the mental and/or motor domains, ready to be dynamically transformed in dance movements. The present study raises a number of hypotheses related to spatial cognition that may serve as guiding principles for future dance/music studies.
32

Gfeller, Kate, Jacob Oleson, John F. Knutson, Patrick Breheny, Virginia Driscoll, and Carol Olszewski. "Multivariate Predictors of Music Perception and Appraisal by Adult Cochlear Implant Users." Journal of the American Academy of Audiology 19, no. 02 (2008): 120–34. http://dx.doi.org/10.3766/jaaa.19.2.3.

Abstract:
The research examined whether performance by adult cochlear implant recipients on a variety of recognition and appraisal tests derived from real-world music could be predicted from technological, demographic, and life experience variables, as well as speech recognition scores. A representative sample of 209 adults implanted between 1985 and 2006 participated. Using multiple linear regression models and generalized linear mixed models, sets of optimal predictor variables were selected that effectively predicted performance on a test battery that assessed different aspects of music listening. These analyses established the importance of distinguishing between the accuracy of music perception and the appraisal of musical stimuli when using music listening as an index of implant success. Importantly, neither device type nor processing strategy predicted music perception or music appraisal. Speech recognition performance was not a strong predictor of music perception, and primarily predicted music perception when the test stimuli included lyrics. Additionally, limitations in the utility of speech perception in predicting musical perception and appraisal underscore the utility of music perception as an alternative outcome measure for evaluating implant outcomes. Music listening background, residual hearing (i.e., hearing aid use), cognitive factors, and some demographic factors predicted several indices of perceptual accuracy or appraisal of music.
33

Gottfried, Terry L., Irene Deliege, and John Sloboda. "Perception and Cognition of Music." Notes 55, no. 2 (1998): 374. http://dx.doi.org/10.2307/900181.

34

Noble, Jason, Tanor Bonin, and Stephen McAdams. "Experiences of Time and Timelessness in Electroacoustic Music." Organised Sound 25, no. 2 (2020): 232–47. http://dx.doi.org/10.1017/s135577182000014x.

Abstract:
Electroacoustic music and its historical antecedents open up new ways of thinking about musical time. Whereas music performed by humans is necessarily constrained by certain temporal limits that define human information processing and embodiment, machines are capable of producing sound with scales and structures of time that reach potentially very far outside of these human limitations. But even musics produced with superhuman means are still subject to human constraints in music perception and cognition. Focusing on five principles of auditory perception – segmentation, grouping, pulse, metre and repetition – we hypothesise that musics that exceed or subvert the thresholds that define ‘human time’ are likely to be recognised by listeners as expressing timelessness. To support this hypothesis, we report an experiment in which a listening panel reviewed excerpts of electroacoustic music selected for their temporally subversive or excessive properties, and rated them (1) for the pace of time they express (normative, speeding up, or slowing down), and (2) for whether or not the music expresses ‘timelessness’. We find that while the specific musical parameters associated with temporal phenomenology vary from one musical context to the next, a general trend obtains across musical contexts through the excess or subversion of a particular perceptual constraint by a given musical parameter on the one hand, and the subjective experiences of time and timelessness on the other.
35

Wise, Karen J., and John A. Sloboda. "Establishing an empirical profile of self-defined “tone deafness”: Perception, singing performance and self-assessment." Musicae Scientiae 12, no. 1 (2008): 3–26. http://dx.doi.org/10.1177/102986490801200102.

Abstract:
Research has suggested that around 17% of Western adults self-define as “tone deaf” (Cuddy, Balkwill, Peretz & Holden, 2005). But questions remain about the exact nature of tone deafness. One candidate for a formal definition is “congenital amusia” (Peretz et al., 2003), characterised by a dense music-specific perceptual deficit. However, most people self-defining as tone deaf are not congenitally amusic (Cuddy et al., 2005). According to Sloboda, Wise and Peretz (2005), the general population defines tone deafness as perceived poor singing ability, suggesting the need to extend investigations to production abilities and self-perceptions. The present research aims to discover if self-defined tone deaf people show any pattern of musical difficulties relative to controls, and to offer possible explanations for them (e.g. perceptual, cognitive, productive, motivational). 13 self-reporting “tone deaf” (TD) and 17 self-reporting “not tone deaf” (NTD) participants were assessed on a range of measures for musical perception, cognition, memory, production and self-ratings of performance. This paper reports on four measures to assess perception (Montreal Battery of Evaluation of Amusia), vocal production (songs and pitch-matching) and self-report. Results showed that the TD group performed significantly less well than the NTD group in all measures, but did not demonstrate the dense deficits characteristic of “congenital amusics”. Singing performance was influenced by context, with both groups performing better when accompanied than unaccompanied. The TD group self-rated the accuracy of their singing significantly lower than the NTD group, but not disproportionately so, and were less confident in their vocal quality. The TD participants are not facing an insurmountable difficulty, but are likely to improve with targeted intervention.
36

Wallmark, Zachary, Marco Iacoboni, Choi Deblieck, and Roger A. Kendall. "Embodied Listening and Timbre." Music Perception 35, no. 3 (2018): 332–63. http://dx.doi.org/10.1525/mp.2018.35.3.332.

Abstract:
Timbre plays an essential role in transmitting musical affect, and in recent years, our understanding of emotional expression in music has been enriched by contributions from the burgeoning field of embodied music cognition. However, little attention has been paid to timbre as a possible mediator between musical embodiment and affect. In three experiments, we investigated the embodied dimensions of timbre perception by focusing on timbral qualities considered “noisy” and aversive. In Experiment 1, participants rated brief isolated natural timbres scaled into ordinal levels of “noisiness.” Experiment 2 employed the same design with a focus on polyphonic timbre, using brief (400 ms) excerpts from 6 popular music genres as stimuli. In Experiment 3, functional magnetic resonance imaging was used to explore neural activations associated with perception of stimuli from Experiment 1. Converging results from behavioral, acoustical, and fMRI data suggest a motor component to timbre processing, particularly timbral qualities considered “noisy,” indicating a possible enactive mechanism in timbre processing. Activity in somatomotor areas, insula, and the limbic system increased the more participants disliked a timbre, and connectivity between the premotor cortex and insula relay decreased. Implications for recent theories of embodied music cognition, affect, and timbre semantics are discussed in conclusion.
APA, Harvard, Vancouver, ISO, and other styles
37

Madsen, Clifford K., and Katia Madsen. "Perception and Cognition in Music: Musically Trained and Untrained Adults Compared to Sixth-Grade and Eighth-Grade Children." Journal of Research in Music Education 50, no. 2 (2002): 111–30. http://dx.doi.org/10.2307/3345816.

Full text
Abstract:
We investigated different levels of age and musical training in relation to subjects' melodic perception by testing their ability to perceive a target melody when extremely similar melodies were interpolated between the original melody and its recurrence. Participants were sixth graders, eighth graders, young adults, and trained musicians who listened to 16 original melodies, each of which was followed by 8 extremely similar melodies. Two different experiments (A and B) tested different arrangements of mode and meter interpolations. We also asked the adult musicians to specify the cognitive strategies they used to accomplish the task. Results demonstrated greater accuracy among experienced musicians, yet showed that even young students are capable of remembering and discriminating similar melodies with high accuracy. Written analyses of the strategies used by the musicians indicated that they considered the task extremely difficult and that their past musical training helped with it; they also indicated that children could not do this task, which was not the case.
APA, Harvard, Vancouver, ISO, and other styles
38

Tramo, Mark Jude, Jamshed J. Bharucha, and Frank E. Musiek. "Music Perception and Cognition Following Bilateral Lesions of Auditory Cortex." Journal of Cognitive Neuroscience 2, no. 3 (1990): 195–212. http://dx.doi.org/10.1162/jocn.1990.2.3.195.

Full text
Abstract:
We present experimental and anatomical data from a case study of impaired auditory perception following bilateral hemispheric strokes. To consider the cortical representation of sensory, perceptual, and cognitive functions mediating tonal information processing in music, pure tone sensation thresholds, spectral intonation judgments, and the associative priming of spectral intonation judgments by harmonic context were examined, and lesion localization was analyzed quantitatively using straight-line two-dimensional maps of the cortical surface reconstructed from magnetic resonance images. Despite normal pure tone sensation thresholds at 250–8000 Hz, the perception of tonal spectra was severely impaired, such that harmonic structures (major triads) were almost uniformly judged to sound dissonant; yet, the associative priming of spectral intonation judgments by harmonic context was preserved, indicating that cognitive representations of tonal hierarchies in music remained intact and accessible. Brainprints demonstrated complete bilateral lesions of the transverse gyri of Heschl and partial lesions of the right and left superior temporal gyri involving 98% and 20% of their surface areas, respectively. In the right hemisphere, there was partial sparing of the planum temporale, temporoparietal junction, and inferior parietal cortex. In the left hemisphere, all of the superior temporal region anterior to the transverse gyrus and parts of the planum temporale, temporoparietal junction, inferior parietal cortex, and insula were spared. These observations suggest that (1) sensory, perceptual, and cognitive functions mediating tonal information processing in music are neurologically dissociable; (2) complete bilateral lesions of primary auditory cortex combined with partial bilateral lesions of auditory association cortex chronically impair tonal consonance perception; (3) cognitive functions that hierarchically structure pitch information and generate harmonic expectancies during music perception do not rely on the integrity of primary auditory cortex; and (4) musical priming may be mediated by broadly tuned subcomponents of the thalamocortical auditory system.
APA, Harvard, Vancouver, ISO, and other styles
39

Klein, ME, and RJ Zatorre. "Neural Correlates of Categorical Perception in Musical Chords." NeuroImage 47 (July 2009): S54. http://dx.doi.org/10.1016/s1053-8119(09)70167-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Bhatara, Anjali, Anna K. Tirovolas, Lilu Marie Duan, Bianca Levy, and Daniel J. Levitin. "Perception of emotional expression in musical performance." Journal of Experimental Psychology: Human Perception and Performance 37, no. 3 (2011): 921–34. http://dx.doi.org/10.1037/a0021922.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Lynch, Michael P., Rebecca E. Eilers, D. Kimbrough Oller, Richard C. Urbano, and Paul Wilson. "Influences of acculturation and musical sophistication on perception of musical interval patterns." Journal of Experimental Psychology: Human Perception and Performance 17, no. 4 (1991): 967–75. http://dx.doi.org/10.1037/0096-1523.17.4.967.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Bogart, Willard Van De. "Cognition, Perception and the Computer." Leonardo 23, no. 2/3 (1990): 307. http://dx.doi.org/10.2307/1578629.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Dolean, Dacian Dorin, and Ioana Tincas. "Cognitive factors explain inter-cultural variations of abilities in rhythm perception: The case of the Roma minority." Psychology of Music 47, no. 5 (2018): 757–66. http://dx.doi.org/10.1177/0305735618766715.

Full text
Abstract:
This study aimed to determine what role cognitive factors play in inter-cultural variations of rhythm perception and to assess whether the stereotype of enhanced musical abilities in the Roma minority is supported by empirical evidence. The rhythm perception of 487 Roma and non-Roma children was assessed comparatively, while controlling for cognitive skills. Contrary to popular belief, the rhythm perception of Roma children was lower than that of their non-Roma peers; however, this difference in performance was explained fully by cognitive variables. The results indicate that further comparative investigations of rhythm perception across cultures should account for cognitive factors, and that the reported enhanced musical ability of the Roma minority is a stereotype that is not supported by empirical evidence.
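The central analytic step described in this abstract is comparing groups on rhythm perception while statistically controlling for cognitive covariates. As a rough, hypothetical sketch of that kind of covariate adjustment (not the authors' actual variables, sample, or model), one might fit the group contrast with and without the covariate and compare the group coefficients, for example in Python with pandas and statsmodels:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical toy data: rhythm-perception scores, group membership,
# and a composite cognitive score. All values and column names are
# illustrative only.
df = pd.DataFrame({
    "rhythm":    [12, 15, 14, 10, 18, 16, 11, 17, 13, 19],
    "group":     ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "cognitive": [95, 100, 98, 90, 110, 105, 92, 112, 99, 115],
})

# Model 1: raw group difference in rhythm perception.
raw = smf.ols("rhythm ~ C(group)", data=df).fit()

# Model 2: the same group contrast with the cognitive covariate added.
# If the group coefficient shrinks toward zero once the covariate is in
# the model, the group difference is accounted for by cognitive skills,
# which is the pattern the abstract reports.
adjusted = smf.ols("rhythm ~ C(group) + cognitive", data=df).fit()

print("group effect, unadjusted:", raw.params["C(group)[T.B]"])
print("group effect, adjusted:  ", adjusted.params["C(group)[T.B]"])
```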
APA, Harvard, Vancouver, ISO, and other styles
44

Addessi, Anna Rita, and Roberto Caterina. "Perceptual Musical Analysis: Segmentation and Perception of Tension." Musicae Scientiae 4, no. 1 (2000): 31–54. http://dx.doi.org/10.1177/102986490000400102.

Full text
Abstract:
Recent investigations have studied the processes of segmentation and the perception of points of tension that occur while listening to tonal and post-tonal music. The present study aims to investigate the criteria people use to segment and memorise post-tonal pieces.
APA, Harvard, Vancouver, ISO, and other styles
45

Umemoto, Takao. "The Psychological Structure of Music." Music Perception 8, no. 2 (1990): 115–27. http://dx.doi.org/10.2307/40285492.

Full text
Abstract:
Music is rich in information that can be processed along different dimensions. Four types of musical dimensions that correspond to different levels of perception and cognition are discussed: (1) the dimension of sound, (2) the dimensions of melody, rhythm, and harmony, (3) the dimension of compositional structure, and (4) the dimension of compositional content. These psychological dimensions of music, and the psychological activities relevant to them, depend highly on context and on schema. Thus the four types of musical dimensions are not independent, but interact with each other. Some evidence from new research on the sense of pitch deviation, the sense of fit of timbre to melody, and similarity and octave judgments relating to the problem of wording is discussed.
APA, Harvard, Vancouver, ISO, and other styles
46

Halpern, Andrea R., Jennifer M. Talarico, Nura Gouda, and Victoria J. Williamson. "Are Musical Autobiographical Memories Special? It Ain’t Necessarily So." Music Perception 35, no. 5 (2018): 561–72. http://dx.doi.org/10.1525/mp.2018.35.5.561.

Full text
Abstract:
We compared young adults’ autobiographical (AB) memories involving Music to memories concerning other specific categories and to Everyday AB memories with no specific cue. In all cases, participants reported both their most vivid memory and another AB memory from approximately the same time. We analyzed responses via quantitative ratings scales on aspects such as vividness and importance, as well as via qualitative thematic coding. In the initial phase, comparison of Music-related to Everyday memories suggested all Musical memories had high emotional and vividness characteristics whereas Everyday memories elicited emotion and other heightened responses only in the “vivid” instruction condition. However, when we added two other specific AB categories (Dining and Holidays) in phase two, the Music memories were no longer unique. We offer these results as a cautionary tale: before concluding that music is special in its relationship to cognition, perception, or emotion, studies should include appropriate control conditions.
APA, Harvard, Vancouver, ISO, and other styles
47

Filimon, Rosina Caterina. "Decoding the Musical Message via the Structural Analogy between Verbal and Musical Language." Artes. Journal of Musicology 18, no. 1 (2018): 151–60. http://dx.doi.org/10.2478/ajm-2018-0009.

Full text
Abstract:
This paper aims to identify the structural similarities between verbal and musical language and to highlight the process of decoding the musical message through the structural analogy between them. The process of musical perception and decoding involves physiological, psychological and aesthetic phenomena. Besides the reception of sound waves, it implies the activation of complex cognitive processes whose aim is to decode the musical material at the cerebral level. Starting from the research methods of cognitive psychology, music researchers have redefined the process of musical perception in a series of works on musical cognitive psychology. In the case of the analogy between language and music, the deciphering of musical structure and its perception are due, according to researchers, to several common structural configurations. A significant model for the description of musical structure is Noam Chomsky’s generative-transformational model, which claims that, at a deep level, all languages share the same syntactic structure, on account of innate anatomical and physiological structures that became specialized as a consequence of the universal nature of certain mechanisms of the human intellect. Studies in this tradition, supported by sophisticated experimental devices, computerised analyses and algorithmic models, have identified the syntax of the musical message, as well as the rules and principles that underlie the listener's processing of sound-related information; this syntax and these principles and rules show surprising similarities with verbal language. The musicologist Heinrich Schenker, some 20 years ahead of Chomsky, considered that there is a parallel between the analysis of natural language and that of musical structure, and developed his own theory of the structure of music. Schenker’s structural analysis is based on the idea that tonal music is organized hierarchically, in a layering of structural levels. Thus, spoken language and music are governed by common rules: phonology, syntax and semantics. Fred Lerdahl and Ray Jackendoff developed a musical grammar in which a set of generative rules is defined to explain the hierarchical structure of tonal music. The authors of this generative theory propose the hypothesis of a musical grammar based on two types of rules, which take into account the conscious and unconscious principles that govern the organization of musical perception. The structural analogy between verbal and musical language thus rests on several common elements: the hierarchical organization of both domains; their governance by the same kinds of rules (phonology, syntax, semantics); and, as a consequence of the universal nature of certain mechanisms of the human intellect, the decoding of the transmitted message by means of universal, biologically inherited innate structures. Following Chomsky’s linguistic model, a musical grammar can be configured, governed by well-formedness rules and preference rules. Thus, a musical piece is not perceived as a stream of disordered sounds, but is deconstructed, processed and assimilated at the cerebral level by means of pre-existing cognitive schemes.
APA, Harvard, Vancouver, ISO, and other styles
48

Bimbot, Frédéric, Emmanuel Deruty, Gabriel Sargent, and Emmanuel Vincent. "System & Contrast." Music Perception 33, no. 5 (2016): 631–61. http://dx.doi.org/10.1525/mp.2016.33.5.631.

Full text
Abstract:
This article introduces the System & Contrast (S&C) model, which aims at describing the inner organization of structural segments within music pieces as: (i) a carrier system, i.e., a sequence of morphological elements forming a network of self-deducible syntagmatic relationships, and (ii) a contrast, i.e., a substitutive element, usually the last one, which departs from the logic implied by the carrier system. Initially used for the structural annotation of pop songs (Bimbot, Deruty, Sargent, & Vincent, 2012), the S&C model provides a framework to describe implication patterns in musical segments by encoding similarities and relations between its elements. It is applicable at several timescales to various musical dimensions in a polymorphous way, thus offering an attractive meta-description of musical contents. We formalize the S&C model, illustrate how it applies to music and establish its filiation with Narmour’s implication-realization model (Narmour, 1990, 1992) and cognitive rule-mapping (Narmour, 2000). We introduce the minimum description length scheme as a productive paradigm to support the estimation of S&C descriptions. The S&C model highlights promising connections between music data processing and information retrieval on the one hand, and modern theories in music perception, cognition and semiotics on the other hand, together with interesting perspectives in musicology.
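The abstract invokes minimum description length (MDL) as a criterion for choosing among candidate structural descriptions. Purely as an illustration of that general principle, and not of the paper's actual encoding of systems and contrasts, the sketch below scores two candidate descriptions of a short symbolic segment with a toy cost: the symbols needed to state a repeating pattern plus the exceptions it leaves unexplained. The cheaper description, a carrier pattern plus a single final deviation, wins.

```python
def literal_cost(segment):
    """Cost of describing the segment by listing every element."""
    return len(segment)

def pattern_plus_exceptions_cost(segment, pattern):
    """Cost of describing the segment as a repeated pattern plus exceptions.

    Toy cost model (illustrative only): the pattern itself, one symbol for
    the repeat count, and two symbols (position + value) per element that
    deviates from what the pattern predicts.
    """
    predicted = [pattern[i % len(pattern)] for i in range(len(segment))]
    exceptions = sum(1 for got, exp in zip(segment, predicted) if got != exp)
    return len(pattern) + 1 + 2 * exceptions

# A segment of eight elements: seven conforming elements and a final deviation.
segment = ["a", "a", "a", "a", "a", "a", "a", "b"]

candidates = {
    "literal listing": literal_cost(segment),
    "repeat 'a', with a contrast at the end": pattern_plus_exceptions_cost(segment, ["a"]),
}

# An MDL-style criterion prefers the description with the smallest total cost.
best = min(candidates, key=candidates.get)
print(candidates)
print("preferred description:", best)
```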
APA, Harvard, Vancouver, ISO, and other styles
49

Cameron, Daniel, Keith Potter, Geraint Wiggins, and Marcus Pearce. "Perception of Rhythmic Similarity is Asymmetrical, and Is Influenced by Musical Training, Expressive Performance, and Musical Context." Timing & Time Perception 5, no. 3-4 (2017): 211–27. http://dx.doi.org/10.1163/22134468-00002085.

Full text
Abstract:
Rhythm is an essential part of the structure, behaviour, and aesthetics of music. However, the cognitive processing that underlies the perception of musical rhythm is not fully understood. In this study, we tested whether rhythm perception is influenced by three factors: musical training, the presence of expressive performance cues in human-performed music, and the broader musical context. We compared musicians and nonmusicians’ similarity ratings for pairs of rhythms taken from Steve Reich’s Clapping Music. The rhythms were heard both in isolation and in musical context and both with and without expressive performance cues. The results revealed that rhythm perception is influenced by the experimental conditions: rhythms heard in musical context were rated as less similar than those heard in isolation; musicians’ ratings were unaffected by expressive performance, but nonmusicians rated expressively performed rhythms as less similar than those with exact timing; and expressively-performed rhythms were rated as less similar compared to rhythms with exact timing when heard in isolation but not when heard in musical context. The results also showed asymmetrical perception: the order in which two rhythms were heard influenced their perceived similarity. Analyses suggest that this asymmetry was driven by the internal coherence of rhythms, as measured by normalized Pairwise Variability Index (nPVI). As predicted, rhythms were perceived as less similar when the first rhythm in a pair had greater coherence (lower nPVI) than the second rhythm, compared to when the rhythms were heard in the opposite order.
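The normalized Pairwise Variability Index (nPVI) used here as a measure of a rhythm's internal coherence quantifies the average contrast between successive durations. A minimal sketch of the computation, applied to hypothetical inter-onset intervals rather than to the Clapping Music patterns themselves, could look like this:

```python
def npvi(durations):
    """Normalized Pairwise Variability Index of a sequence of durations.

    Takes inter-onset intervals (in any consistent unit) and returns a
    value between 0 and 200: lower values mean more even successive
    durations (greater internal coherence in the sense used above),
    higher values mean greater durational contrast.
    """
    if len(durations) < 2:
        raise ValueError("nPVI requires at least two durations")
    pairs = zip(durations[:-1], durations[1:])
    terms = [abs(a - b) / ((a + b) / 2.0) for a, b in pairs]
    return 100.0 * sum(terms) / len(terms)

# Hypothetical rhythms, expressed as inter-onset intervals in eighth-note units.
even_rhythm = [1, 1, 1, 1, 1, 1, 1, 1]
uneven_rhythm = [1, 0.5, 1.5, 0.5, 2, 0.5, 1, 1]

print(npvi(even_rhythm))    # 0.0: maximally even, highest coherence
print(npvi(uneven_rhythm))  # larger value: greater contrast between successive durations
```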
APA, Harvard, Vancouver, ISO, and other styles
50

Padjen, Ante L. "Music, Brain and Health." International Journal of Whole Person Care 7, no. 1 (2020): 38. http://dx.doi.org/10.26443/ijwpc.v7i1.229.

Full text
Abstract:
Music, like language, is a uniquely human experience, ubiquitous across human cultures and across the human life span. Musical capacity appears early in evolution, and it seems to be innate to most of the human population. Neurobiological studies show that music perception and music performance profoundly affect the brain, both acutely and chronically, by modulating networks involved in cognition, sensation, emotion, reward, and movement, corresponding to the empirical findings on why people listen to music: pleasure, self-awareness, social relatedness, and arousal and mood regulation. Most intriguing is the “salutogenic” effect of musical activities, such as instrumental and choral “musicking” (particularly in non-professional musicians), both at the individual level and in populations. Musical training can promote the development of non-musical skills as diverse as language development, attention, visuospatial perception, and executive functions. Music is also a prophylactic resource: it improves the bonding of mother and child. There is a wide range of therapeutic domains and disorders in which musical interventions improve outcomes. As an example, familiar music has an exceptional ability to elicit memories, movements, motivation and positive emotions in adults affected by dementia. Considering that one of the most important problems in biomedicine is “understanding what it is to be human”, “music should be an essential part of this pursuit” – of an understanding of the whole person. Despite evidence of significant effects of music on health and well-being, music is not yet well represented in the current re-humanization of medicine.
APA, Harvard, Vancouver, ISO, and other styles