
Dissertations / Theses on the topic 'Music Perception and cognition'


Consult the top 50 dissertations / theses for your research on the topic 'Music Perception and cognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Viel, Massimiliano. "Listening patterns : from music to perception and cognition." Thesis, University of Plymouth, 2018. http://hdl.handle.net/10026.1/11809.

Full text
Abstract:
The research aims to propose a narrative of the experience of listening and to provide some first examples of its possible application. This is done in three parts. Part One, “Words”, aims to methodologically frame the narrative by discussing the limits and requirements of a theory of listening. After discussing the difficulties of building an objective characterization of the listening experience, the research proposes that any theorization on listening can only express a point of view that is implied by descriptions of listening both in linguistic terms and in the data they involve. The analysis of theories about listening is therefore conducted through a grammatical path that unfolds by following the syntactic roles of the words involved in theoretical claims about listening. Starting from the problem of synonymy, the analysis moves around the subject, the object, adjectives and adverbs to finally discuss the status of the references of the discourses on listening. Part One ends by claiming the need to reintroduce the subject in theories about listening and proposes to attribute the epistemological status of the narrative to any discourse about the listening experience. This implies that any proposed narrative must substitute its truth-value with the instrumental value that is expressed by the idea of “viability”. Part Two, “Patterns”, is devoted to introducing a narrative of listening. This is first informally introduced in terms of the experience of a distinction within the sonic flow. After an intermission dedicated to connecting the idea of distinction to Gaston Bachelard’s metaphysics of time, the narrative is finally presented as a dialectics among three ways of organizing perceptive distinctions. Three perceptive modes of distinction are presented as a basic mechanism that is responsible for articulating the sonic continuum in a complex structure of expectations and reactions, in terms of patterns, that is constantly renewed under the direction of statistical learning. The final chapter of Part Two briefly applies the narrative of pattern structures to the experience of noise. Part Three aims to show the “viability” of the proposed narrative of listening. First, a method for analysing music by listening is discussed. Then, a second chapter puts the idea of pattern structures in contact with music composition, as a framework that can be applied to data sonification, installations, music production and to the didactics of composition. Finally, the last chapter is devoted to the discussion of the ideas of “soundscape” and “identity formation”, in order to show the potential of applying the proposed narrative to the context of cultural and social studies.
APA, Harvard, Vancouver, ISO, and other styles
2

Tirovolas, Anna Kristina. "Applied music perception and cognition: predicting sight-reading performance." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=116886.

Full text
Abstract:
This research sought to translate three standardized assessment measures of phonological processing known to be related to text reading into experimental tasks that measure music processing. The primary aim of this thesis was to ascertain the relationship between these musically-adapted tasks and sight-reading performance in music. A broader goal was to explore and compare task performance across text and music, thereby informing a larger issue in cognitive and educational psychology: the relationship between music and language. In this manuscript-based thesis, there are six chapters, including three manuscripts (one previously published) that contribute to these goals. The first manuscript, published in the journal Music Perception, is a 26-year review of the field of music perception and cognition. The categorical and bibliometric analysis sought to document the longitudinal course of empirical studies in the journal Music Perception, by examining 384 empirical articles, as well as the full set of 578 articles, published between 1983 and 2010. The review suggested that only 9% of music perception studies use any assessment measures (mostly standardized tests, but also measures of musical ability). An increase over time in the use of assessment measures (β = .40, p < .05) as data collection instruments was observed. It was thus inferred that the development of tasks which measure musical ability would be important to the continued advancement of psychometrics in the field of music perception and cognition. The second and third manuscripts were devoted to designing measures of music processing based on standardized tests of text reading. The objective was to search for relationships between the language and music tasks themselves, as well as testing their capacity to predict errors in musical sight-reading (SR) performance. In other words, they investigated whether musically-adapted tasks, initially developed specifically for the assessment of text reading, would be significant predictors of SR performance. The second manuscript explored the effectiveness of the Rapid Automatized Naming (RAN) task in predicting SR by testing 41 participants: pianists aged 18 to 36. For all RAN tasks, response times (interonset intervals of vocal responses) were used to predict errors in sight-reading performance of piano music. Correlational analyses revealed several significant associations between performance on standard RAN and music RAN tasks. Regression analyses revealed that the RAN letter task was the most consistent predictor of SR, with music RAN tasks adding explanatory power to the model. These findings suggested that processing specific to musical symbols may underlie aspects of SR performance, but that an already existing standardized task typically used for text reading could be more useful in predicting SR ability. The third manuscript reports an experiment in which musical tasks were designed to mirror two phonological awareness tasks from the "Comprehensive Test of Phonological Processing" (CTOPP), Elision and Blending Words. Participants were 25 pianists, aged 18 to 53. Regression analyses revealed the importance of music training and working memory in SR, and showed that performance on a musical blending task was important to the prediction of SR performance in certain cases.
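For readers who want a concrete picture of the analysis described above, here is a minimal sketch of a hierarchical regression in which sight-reading errors are predicted from letter-RAN response times and a music-RAN task is then added to test its extra explanatory power. The data are simulated and the variable names are illustrative assumptions, not the thesis's actual dataset or model.

```python
# Hierarchical regression sketch: does a music RAN task add explanatory power
# over the standard letter RAN task when predicting sight-reading (SR) errors?
# Simulated data; column names and effect sizes are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 41  # the second manuscript tested 41 pianists
df = pd.DataFrame({
    "ran_letter_rt": rng.normal(0.55, 0.08, n),  # mean inter-onset interval (s)
    "ran_music_rt":  rng.normal(0.70, 0.10, n),
})
# SR errors loosely related to both predictors (illustrative only).
df["sr_errors"] = (10 * df["ran_letter_rt"] + 6 * df["ran_music_rt"]
                   + rng.normal(0, 1.0, n)).round()

# Enter letter RAN first, then add music RAN to test the change in fit.
base = smf.ols("sr_errors ~ ran_letter_rt", data=df).fit()
full = smf.ols("sr_errors ~ ran_letter_rt + ran_music_rt", data=df).fit()
print(base.rsquared, full.rsquared)   # change in R^2 when music RAN is added
print(full.compare_f_test(base))      # F-test for the added predictor
```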
APA, Harvard, Vancouver, ISO, and other styles
3

Ilari, Beatriz Senoi. "Music cognition in infancy : infants' preferences and long-term memory for complex music." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=38490.

Full text
Abstract:
The purpose of this study was to investigate infants' preferences and long-term memory for two contrasting complex pieces of music, that is, Prelude and Forlane from Le Tombeau de Couperin by Maurice Ravel (1875–1937). Seventy 8.5-month-old infants were randomly assigned to one of four experiments conducted using the Headturn Preference Procedure (HPP). The first experiment examined infants' preferences for Prelude and Forlane in piano timbre. The second experiment assessed infants' preferences for Prelude and Forlane in orchestra timbre. Infants' preferences for the Forlane in piano and orchestra timbres were investigated in the third experiment. The last experiment examined infants' long-term memory for complex music. Thirty infants were exposed to either the Prelude or the Forlane three times a day for ten consecutive days. Two weeks following the exposure, infants were tested on the HPP. It was predicted that these infants would prefer to listen to the familiar piece from the exposure over the unfamiliar one. Results suggested that 8.5-month-olds could tell apart two complex pieces of music in orchestra timbre and could discriminate between the piano and the orchestra timbres. Contrary to the belief that infants are ill-equipped to process complex music, this study found that infants could encode and remember complex pieces of music for at least two weeks.
Because infants rely on their caretakers to provide musical experiences for them, maternal beliefs and uses of music were also investigated. Mothers of participating infants were interviewed about their musical background, listening preferences, and musical behaviors and beliefs concerning their infants. The analysis of interview data yielded the following main results: (1) Singing was the primary musical activity of mothers and babies; (2) Maternal occupation and previous musical experiences affected their musical behaviors with their babies; (3) Most mothers held the belief that there is appropriate music for babies to listen to, although there was no consensus as to what is appropriate music. Such beliefs reflect a conflict between maternal beliefs regarding infants' music cognition and the actual music-related perceptual and cognitive abilities of infants. To help attenuate this conflict, suggestions for music educators, parents, and researchers are proposed.
APA, Harvard, Vancouver, ISO, and other styles
4

Carrabré, Ariel. "Understanding Schenkerian Analysis from the Perspective of Music Perception and Cognition." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32850.

Full text
Abstract:
This thesis investigates the perceptual and cognitive reality of Schenkerian theory through a survey of relevant empirical research. It reviews existing Schenkerian-specific empirical research, examines general tonal research applicable to Schenkerian analysis, and proposes the possibility of an optimal empirical research method by which to explore the theory. It evaluates data dealing with musical instruction’s effect on perception. From this review, reasonable evidence for the perceptual reality of Schenkerian-style structural levels is found to exist. This thesis asserts that the perception of Schenkerian analytical structures is largely an unconscious process.
APA, Harvard, Vancouver, ISO, and other styles
5

Pinard-Welyczko, Kira. "Does Training Enhance Entraining? Musical Ability and Neural Signatures of Beat Perception." Oberlin College Honors Theses / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1495617848085978.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hass, Richard William. "DEVELOPMENT OF CREATIVE EXPERTISE IN MUSIC: A QUANTITATIVE ANALYSIS OF THE SONGS OF COLE PORTER AND IRVING BERLIN." Diss., Temple University Libraries, 2008. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/21257.

Full text
Abstract:
Psychology
Ph.D.
Previous studies of musical creativity lacked strong foundations in music theory and music analysis. The goal of the current project was to merge the study of music perception and cognition with the study of expertise-based musical creativity. Three hypotheses about the nature of creativity were tested. According to the productive-thinking hypothesis, creativity represents a complete break from past knowledge. According to the reproductive-thinking hypothesis, creators develop a core collection of kernel ideas early in their careers and continually recombine those ideas in novel ways. According to what can be called the field hypothesis, creativity involves more than just the individual creator; creativity represents an interaction between the individual creator, the domain in which the creator works, and the field, or collection of institutions that evaluate creative products. In order to evaluate each hypothesis, the musical components of a sample of songs by two eminent 20th century American songwriters, Cole Porter and Irving Berlin, were analyzed. Five separate analyses were constructed to examine changes in the psychologically salient musical components of Berlin's and Porter's songs over time. In addition, comparisons between hit songs and non-hit songs were also drawn to investigate whether the composers learned from their cumulative songwriting experiences. Several developmental trends were found in the careers of both composers; however, there were few differences between hit songs and non-hit songs on all measures. The careers of both composers contain evidence of productive and reproductive creativity. Implications of the results and suggestions for future research are discussed.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
7

Leinbach, Cade. "A Multi-Dimensional Approach towards Understanding Music Notation through Cognition." Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1703356/.

Full text
Abstract:
Composition has been conceptualized as a method for communicating a way of thinking (i.e., cognition) from composers to performers and audience members. Music notation, or how music is represented in a visual format, becomes the vehicle through which such cognition is communicated. In the past, research on notation has been approached either categorically or as a taxonomy, where it is placed into separate categories based primarily on visual elements, including its symbols, conventions, and practices. The modern application of notation in Western classical music repertoire, however, has shown that the boundaries between these systems are not always clear and sometimes blend together. Viewing music notation from a spectrum-based approach instead provides a better understanding of notation through its cognitive effects. These spectra can then be viewed through multiple dimensions, all addressing different aspects. The first dimension consists of the historical systems of notation, ranging from standard music notation (SMN) to music graphics. Additional kinds of notation, such as proportional, pictorial, and aleatoric, serve as intermediate levels between these two. The second dimension focuses on whether notation is processed intuitively, based on either cultural priming or general cognitive principles, or through conscious interpretation. The last dimension views notation as either a visual representation of the sound (descriptive) or a representation of the process performed to create the sound (prescriptive). This thesis conceptualizes a theory for understanding music notation through these multiple dimensions by synthesizing psychological studies about music, music notation research, and pre-existing musical scores.
APA, Harvard, Vancouver, ISO, and other styles
8

Bianchi, Frederick W. "The cognition of atonal pitch structures." Virtual Press, 1985. http://liblink.bsu.edu/uhtbin/catkey/438705.

Full text
Abstract:
The Cognition of Atonal Pitch Structures investigated the ability of a listener to internally organize atonal pitch sequences into hierarchical structures. Based on an information processing model proposed by Deutsch and Feroe (1981), the internal organization of well-processed pitch sequences will result in the formation of hierarchical structures. The more efficiently information is processed by the listener, the more organized its internal hierarchical representation in memory. Characteristic of a well-organized internal hierarchy is redundancy. Each ensuing level of the hierarchical structure represents a parsimonious recoding of the lower levels. In this respect, each higher hierarchical level contains the most salient structural features extracted from lower levels. Because efficient internal organization increases redundancy, more memory space must be allocated to retain a well-processed pitch sequence. Based on this assumption, an experiment was conducted to determine the amount of information retained when listening to pre-organized atonal pitch structures and randomly organized pitch structures. Using time duration estimation techniques (Ornstein, 1969; Block, 1974), the relative size of memory allocated for a processing task was determined. Since the subjective experience of time is influenced by the amount of information processed and retained in memory (Ornstein, 1969; Block, 1974), longer time estimations corresponded to larger memory space allocations, and thus, more efficiently organized hierarchical structures. Conclusion: Though not significant at the .05 level (p = .21), the results suggest a tendency for atonal pitch structures to be more efficiently organized into internal hierarchical structures than random pitch structures. The results of the experiment also suggest that a relationship exists between efficient internal hierarchical organization and increased attention and enjoyment. The present study also investigated the influence that other parameters may have on the cognition of pre-organized music. Of interest were the characteristics inherent in music which may facilitate internal organization.
APA, Harvard, Vancouver, ISO, and other styles
9

Feinberg, Daniel K. "Infants’ Responses to Affect in Music and Speech." Scholarship @ Claremont, 2013. http://scholarship.claremont.edu/pitzer_theses/44.

Full text
Abstract:
Existing literature demonstrates that infants can discriminate between categories of infant-directed (ID) speech based on the speaker’s intended message – that is, infants recognize the difference between comforting and approving ID speech, and treat different utterances from within these two categories similarly. Furthermore, the literature also demonstrates that infants understand many aspects of music and can discriminate between happy and sad music. Building on these findings, the present study investigated whether exposure to happy or sad piano music would systematically affect infants’ preferences for comforting or approving ID speech. Five- to nine-month-old infants’ preferences for comforting or approving ID speech were examined as a function of whether infants were exposed to sad or happy piano music. Seventeen (10 male, 7 female) full-term, healthy infants were included in the study. It was hypothesized that relative to infants exposed to happy music, infants exposed to sad music would demonstrate a stronger desire to hear comforting ID speech. The study employed an infant controlled, preferential looking procedure to test this hypothesis. The results of the study did not statistically support the researchers’ hypotheses. Limitations of the present work and suggestions for future research are discussed.
APA, Harvard, Vancouver, ISO, and other styles
10

Plazak, Joseph Stephen. "Listener Knowledge Gained from Brief Musical Excerpts." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1250696592.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Bhatara, Anjali K. "Music as a means of investigating perception of emotion and social attribution in typical development and in autism spectrum disorders." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=109613.

Full text
Abstract:
This thesis uses music as a means of investigating both typical and atypical perception of emotion and attribution of social intent. The primary aim of this thesis is to investigate this perception and attribution in individuals with autism spectrum disorders (ASD) and compare this with typical adults and children. Chapter 1 comprises a literature review of music and emotion, and of music cognition in individuals with ASD. The first manuscript (Chapter 2) describes the development of a new method for investigating perception of emotion from musical performance. Using this method, we found that typical adults can reliably rate the emotional content of musical performances which vary in expressive parameters. In the second manuscript (Chapter 3), we used this method to examine the ability of adolescents with ASD to rate the emotional content of musical performance. We compared the group with ASD to a group of typically developing adolescents as well as a group of individuals with Williams syndrome (WS). The results of this study showed that adolescents with ASD are impaired in this kind of emotional recognition relative to both comparison groups. Emotional recognition is an important aspect of everyday social interactions, both in understanding and predicting others' actions. Thus, in the third manuscript (Chapter 4), we examined the effect of musical soundtracks on attribution of social action and intent in ASD by adding music to an established visual task. [...]
APA, Harvard, Vancouver, ISO, and other styles
12

Carpinteiro, Otavio Augusto Salgado. "A connectionist approach in music perception." Thesis, University of Sussex, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309481.

Full text
Abstract:
Little research has been carried out to understand the mechanisms underlying the perception of polyphonic music. Perception of polyphonic music involves thematic recognition, that is, recognition of instances of a theme across polyphonic voices, whether they appear unaccompanied, transposed, altered, or not. There are many questions still open to debate concerning thematic recognition in the polyphonic domain. One of them, in particular, is the question of whether or not cognitive mechanisms of segmentation and thematic reinforcement facilitate thematic recognition in polyphonic music. This dissertation proposes a connectionist model to investigate the role of segmentation and thematic reinforcement in thematic recognition in polyphonic music. The model comprises two stages. The first stage consists of a supervised artificial neural model to segment musical pieces in accordance with three cases of rhythmic segmentation. The supervised model is trained and tested on sets of contrived patterns, and successfully applied to six musical pieces by J. S. Bach. The second stage consists of an original unsupervised artificial neural model to perform thematic recognition. The unsupervised model is trained and assessed on a four-part fugue by J. S. Bach. The research carried out in this dissertation contributes to two distinct fields. Firstly, it contributes to the field of artificial neural networks. The original unsupervised model encodes and manipulates context information effectively, which enables it to perform sequence classification and discrimination efficiently. It has application in cognitive domains which demand classifying either a set of sequences of vectors in time or sub-sequences within a single large sequence of vectors in time. Secondly, the research contributes to the field of music perception. The results obtained by the connectionist model suggest, along with other important conclusions, that thematic recognition in polyphony is not facilitated by segmentation, but is facilitated by thematic reinforcement.
APA, Harvard, Vancouver, ISO, and other styles
13

Spurrier, Graham. "Consonant and dissonant music chords improve visual attention capture." Scholarship @ Claremont, 2019. https://scholarship.claremont.edu/cmc_theses/2125.

Full text
Abstract:
Recent research has suggested that music may enhance or reduce cognitive interference, depending on whether it is tonally consonant or dissonant. Tonal consonance is often described as being pleasant and agreeable, while tonal dissonance is often described as being unpleasant and harsh. However, the exact cognitive mechanisms underlying these effects remain unclear. We hypothesize that tonal dissonance may increase cognitive interference through its effects on attentional cueing. We predict that (a) consonant musical chords are attentionally demanding, but (b) dissonant musical chords are more attentionally demanding than consonant musical chords. Using a Posner cueing task, a standard measure of attention capture, we measured the differential effects of consonant chords, dissonant chords, and no music on attentional cueing. Musical chords were presented binaurally at the same time as a visual cue which correctly predicted the spatial location of a subsequent target in 80% of trials. As in previous studies, valid cues led to faster response times (RTs) compared to invalid cues; however, contrary to our predictions, both consonant and dissonant music chords produced faster RTs compared to the no music condition. Although inconsistent with our hypotheses, these results support previous research on cross-modal cueing, which suggests that non-predictive auditory cues enhance the effectiveness of visual cues. Our study further demonstrates that this effect is not influenced by auditory qualities such as tonal consonance and dissonance, suggesting that previously reported cognitive interference effects for tonal dissonance may depend on high-level changes in mood and arousal.
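The cueing design described in this abstract can be illustrated with a small simulation. The sketch below generates trials in which the visual cue is valid on 80% of trials under three sound conditions and then compares mean response times for valid versus invalid cues; all trial counts, timing values, and column names are illustrative assumptions, not the study's materials.

```python
# Posner-style cueing sketch: 80% valid cues under three sound conditions,
# comparing mean RTs for valid vs. invalid cues. Simulated data only; the
# effect sizes are chosen to mirror the pattern reported in the abstract
# (faster RTs with chords, regardless of consonance), not measured values.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
conditions = ["consonant", "dissonant", "no_music"]
rows = []
for cond in conditions:
    for _ in range(200):                      # 200 trials per condition (assumed)
        valid = rng.random() < 0.8            # cue predicts target location 80% of the time
        base_rt = 0.32 if valid else 0.37     # validity effect (illustrative)
        sound_benefit = 0.02 if cond != "no_music" else 0.0
        rows.append({"condition": cond, "valid": valid,
                     "rt": base_rt - sound_benefit + rng.normal(0, 0.03)})

trials = pd.DataFrame(rows)
# Mean RT by condition and cue validity.
print(trials.groupby(["condition", "valid"])["rt"].mean().unstack())
```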
APA, Harvard, Vancouver, ISO, and other styles
14

Kung, Hsiang-Ning. "Cultural Influence on the Perception and Cognition of Musical Pulse and Meter." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1494228392604585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Bowles, Shannon L. "MEMORY, COGNITION, AND THE EFFECT OF A MUSIC INTERVENTION ON HEALTHY OLDER ADULTS." UKnowledge, 2013. http://uknowledge.uky.edu/gerontol_etds/8.

Full text
Abstract:
Music is a powerful modality that can bring about changes in individuals of all ages. This research employed both an experimental and a quasi-experimental design to identify the effects of music as it influenced psychological well-being, memory, and cognition among older adults. Specifically, it addressed three aims: (a) to determine to what extent learning to play a music instrument later in life influenced psychological well-being and cognitive function of non-institutionalized healthy seniors, (b) to determine the effects of the amount of music involvement on psychological well-being and cognitive function, and (c) to determine the benefit of music for those with limited/no music experience. For the first aim, it was hypothesized that individuals in the experimental music group would maintain and/or improve psychological well-being, memory, and cognitive function more than those assigned to the wait-list control group. For the second aim, it was hypothesized that participants with extensive music involvement would have higher scores on cognitive ability measures and experience greater psychological well-being than those who had not been actively involved in music throughout their life. For the third aim, it was hypothesized that the participants with limited/no music involvement throughout their life would have a larger change on the psychological well-being measures and cognitive assessments than those who had more music involvement. For the experimental portion (Aim 1), the study employed a 6-week music intervention with non-institutionalized older adults. The quasi-experimental portion (Aims 2 & 3) divided participants according to their amount of time involved in music and then looked at psychological well-being and cognitive function. This dissertation did not show a strong connection between music, memory, and cognition, so it did not achieve the desired overall results. However, the findings did suggest that music may modify some areas of cognitive function (verbal learning, memory, and retention) and psychological well-being but did not influence other areas (playing a music instrument for any length of time). Therefore, the findings of this dissertation can be a basis upon which future research relating to music, cognitive functioning, psychological well-being, and involvement in music can build.
APA, Harvard, Vancouver, ISO, and other styles
16

Granzow, John, and University of Lethbridge Faculty of Arts and Science. "Ventriloquial dummy tones : embodied cognition of pitch direction." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Psychology, c2010, 2010. http://hdl.handle.net/10133/2558.

Full text
Abstract:
Tone pairs constructed so that the frequencies of the overtones move in opposition to the missing fundamental frequencies they imply produce expertise differences in the tracking of pitch direction. One interpretation of this result is that it arises as a function of rudimentary differences in the perceptual systems of musicians and non-musicians. Several experiments suggest instead a more embodied source of expertise, to be found in vocal mediation, such that the effect of musical experience in these tasks is the result of the most salient action of musicians: making sound.
x, 87 leaves : ill. ; 29 cm
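The stimuli described above are easier to grasp with a worked example. The sketch below constructs a hypothetical tone pair in which every audible partial falls while the missing fundamental the partials imply rises; the harmonic numbers and frequencies are illustrative assumptions, not the stimuli actually used in the thesis.

```python
# Hypothetical "missing fundamental" tone pair: the audible partials of the
# second tone are all LOWER than those of the first, yet the fundamental they
# imply is HIGHER. Frequencies and harmonic numbers are illustrative only.
import numpy as np

f0_a, harmonics_a = 200.0, [3, 4, 5]   # implied fundamental 200 Hz
f0_b, harmonics_b = 220.0, [2, 3, 4]   # implied fundamental 220 Hz (rises)

partials_a = [f0_a * h for h in harmonics_a]   # [600, 800, 1000] Hz
partials_b = [f0_b * h for h in harmonics_b]   # [440, 660, 880] Hz

print(partials_a, "->", partials_b)            # every partial falls ...
print(f0_a, "->", f0_b)                        # ... while the implied pitch rises

# Synthesize the two tones (0.5 s each) as sums of their partials.
sr, dur = 44100, 0.5
t = np.arange(int(sr * dur)) / sr
tone_a = sum(np.sin(2 * np.pi * f * t) for f in partials_a)
tone_b = sum(np.sin(2 * np.pi * f * t) for f in partials_b)
```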
APA, Harvard, Vancouver, ISO, and other styles
17

Salselas, Inês. "Exploring interactions between music and language during the early development of music cognition. A computational modelling approach." Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/112058.

Full text
Abstract:
This dissertation concerns the computational modelling of the early development of music perception and cognition. Experimental psychology and neuroscience show results that suggest that the development of musical representations in infancy, whether concerning pitch or rhythm features, depends on exposure both to music and language. Early musical and linguistic skills seem to be, therefore, tangled in ways we are yet to characterize. In parallel, computational modelling has produced powerful frameworks for the study of learning and development. The use of these models for studying the development of music perception and cognition, connecting music and language, still remains to be explored. We therefore propose to produce computational solutions suitable for studying factors that contribute to shaping our cognitive structure, building the predispositions that allow us to enjoy and make sense of music. We also adopt a comparative approach to the study of the early development of musical predispositions that involves both music and language, searching for possible interactions and correlations. We first address pitch representation (absolute vs relative) and its relation to development. Simulations have allowed us to observe a parallel between learning and the type of pitch information being used, where the type of encoding influenced the ability of the model to perform a discrimination task correctly. Next, we performed a prosodic characterization of infant-directed speech and singing by comparing rhythmic and melodic patterning in two Portuguese (European and Brazilian) variants. In the computational experiments, rhythm-related descriptors exhibited strong predictive ability in discriminating the language variant of both speech and singing, with different rhythmic patterning for each variant. This reveals that the prosody of an infant's surrounding sonic environment is a rich source of information, with rhythm as a key element for characterizing the prosody of each culture's language and songs. Finally, we built a computational model based on temporal information processing and representation for exploring how the temporal prosodic patterns of a specific culture influence the development of rhythmic representations and predispositions. The simulations show that exposure to the surrounding sound environment influences the development of temporal representations and that the structure of the exposure environment, specifically the lack of maternal songs, has an impact on how the model organizes its internal representations. We conclude that there is a reciprocal influence between music and language. Exposure to the structure of the sonic background influences the shaping of our cognitive structure, which supports our understanding of musical experience. Within that sonic background, the structure of language plays a predominant role in biasing the building of musical predispositions and representations.
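As an illustration of what a rhythm-related prosodic descriptor can look like, the sketch below computes the normalized pairwise variability index (nPVI) over a sequence of durations. The abstract does not name the descriptors it used, so treating nPVI as representative is an assumption, and the duration values are invented.

```python
# nPVI sketch: a widely used rhythm descriptor for speech and song, computed
# over successive durations (e.g., vowel or inter-onset intervals).
# Treating nPVI as representative of the thesis's descriptors is an assumption.
def npvi(durations):
    """nPVI = 100/(m-1) * sum |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2)."""
    pairs = zip(durations[:-1], durations[1:])
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)

# Hypothetical duration sequences (in seconds) for two utterances.
even_rhythm   = [0.18, 0.20, 0.19, 0.21, 0.20]
uneven_rhythm = [0.10, 0.30, 0.12, 0.28, 0.15]
print(npvi(even_rhythm), npvi(uneven_rhythm))  # higher nPVI = more durational contrast
```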
APA, Harvard, Vancouver, ISO, and other styles
18

Vassillière, Christa Theresa. "The Spatial Properties of Music Perception: Differences in Visuo-spatial Performance According to Musicianship and Interference of Musical Structure." Oberlin College Honors Theses / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1340331749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Buonviri, Nathan. "EFFECTS OF VISUAL PRESENTATION ON AURAL MEMORY FOR MELODIES." Diss., Temple University Libraries, 2010. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/215416.

Full text
Abstract:
Music Education
Ph.D.
The purpose of this study was to determine how pitch and rhythm aspects of melodic memory are affected by aural distractions when melodic stimuli are presented both visually and aurally, as compared to aurally only. The rationale for this research is centered on the need for improved melodic memory skills of students taking melodic dictation, and the possibility that temporary visual imagery storage of target melodies might enhance those skills. The participants in this study were undergraduate and graduate music majors (n=41) at a large northeastern university. All participants had successfully completed the first two semesters of college-level music theory, and none had perfect pitch. Participants progressed through two self-contained experimental tests at the computer. Identical target melodies were presented: 1) aurally only on one test; and 2) aurally, with visual presentation of the matching notation, on the other test. After the target melody, a distraction melody sounded, during which time participants were to maintain the original target melody in memory. Participants then chose which of two aural options matched the original target, with a third choice of "neither." The incorrect answer choice in each item contained either a pitch or rhythm discrepancy. The 2x2 factorial design of this experiment was based on independent variables of test presentation format and answer discrepancy type. The dependent variable was experimental test scores. Each participant took both parts of both tests, yielding 164 total observations. Additional data were collected for exploratory analysis: the order in which each participant took the tests, the major instrument of each participant, and the educational status of each participant (undergraduate or graduate). Results of a 2x2 ANOVA revealed no significant differences in test scores, based on either test format or answer discrepancy type, and no interaction between the factors. The exploratory analyses revealed no significant differences in test scores, based on test order, major instrument, or student status. Results suggest that visual reinforcement of melodies does not affect aural memory for those melodies, in terms of either pitch or rhythm. Suggestions for further research include an aural-visual melodic memory test paired with a learning modalities survey, a longitudinal study of visual imagery applied to aural skills study, and a detailed survey of strategies used by successful and unsuccessful dictation students.
Temple University--Theses
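The 2x2 analysis described in this abstract can be sketched as follows. The data are simulated and the factor coding is a between-subjects simplification of what was actually a within-subjects design; the sketch is meant only to show the shape of a two-way ANOVA on test scores with presentation format and discrepancy type as factors.

```python
# 2x2 ANOVA sketch: test scores modelled by presentation format (audio-only vs.
# audio-visual) and answer-discrepancy type (pitch vs. rhythm), plus their
# interaction. Simulated data with no true effects, echoing the null results;
# the real study's 164 observations were repeated measures from 41 participants.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
presentation = np.repeat(["audio_only", "audio_visual"], 82)
discrepancy = np.tile(np.repeat(["pitch", "rhythm"], 41), 2)
scores = rng.normal(75, 10, 164)               # illustrative scores, no built-in effects

df = pd.DataFrame({"presentation": presentation,
                   "discrepancy": discrepancy,
                   "score": scores})
model = smf.ols("score ~ C(presentation) * C(discrepancy)", data=df).fit()
print(anova_lm(model, typ=2))                  # main effects and interaction
```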
APA, Harvard, Vancouver, ISO, and other styles
20

Buonviri, Nathan. "Audio-OnlyTest [Digital File]." Diss., Temple University Libraries, 2010. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/254227.

Full text
Abstract:
Music Education
Ph.D.
The purpose of this study was to determine how pitch and rhythm aspects of melodic memory are affected by aural distractions when melodic stimuli are presented both visually and aurally, as compared to aurally only. The rationale for this research is centered on the need for improved melodic memory skills of students taking melodic dictation, and the possibility that temporary visual imagery storage of target melodies might enhance those skills. The participants in this study were undergraduate and graduate music majors (n=41) at a large northeastern university. All participants had successfully completed the first two semesters of college-level music theory, and none had perfect pitch. Participants progressed through two self-contained experimental tests at the computer. Identical target melodies were presented: 1) aurally only on one test; and 2) aurally, with visual presentation of the matching notation, on the other test. After the target melody, a distraction melody sounded, during which time participants were to maintain the original target melody in memory. Participants then chose which of two aural options matched the original target, with a third choice of "neither." The incorrect answer choice in each item contained either a pitch or rhythm discrepancy. The 2x2 factorial design of this experiment was based on independent variables of test presentation format and answer discrepancy type. The dependent variable was experimental test scores. Each participant took both parts of both tests, yielding 164 total observations. Additional data were collected for exploratory analysis: the order in which each participant took the tests, the major instrument of each participant, and the educational status of each participant (undergraduate or graduate). Results of a 2x2 ANOVA revealed no significant differences in test scores, based on either test format or answer discrepancy type, and no interaction between the factors. The exploratory analyses revealed no significant differences in test scores, based on test order, major instrument, or student status. Results suggest that visual reinforcement of melodies does not affect aural memory for those melodies, in terms of either pitch or rhythm. Suggestions for further research include an aural-visual melodic memory test paired with a learning modalities survey, a longitudinal study of visual imagery applied to aural skills study, and a detailed survey of strategies used by successful and unsuccessful dictation students.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
21

Buonviri, Nathan. "Audio-VisualTest [Digital File]." Diss., Temple University Libraries, 2010. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/254228.

Full text
Abstract:
Music Education
Ph.D.
The purpose of this study was to determine how pitch and rhythm aspects of melodic memory are affected by aural distractions when melodic stimuli are presented both visually and aurally, as compared to aurally only. The rationale for this research is centered on the need for improved melodic memory skills of students taking melodic dictation, and the possibility that temporary visual imagery storage of target melodies might enhance those skills. The participants in this study were undergraduate and graduate music majors (n=41) at a large northeastern university. All participants had successfully completed the first two semesters of college-level music theory, and none had perfect pitch. Participants progressed through two self-contained experimental tests at the computer. Identical target melodies were presented: 1) aurally only on one test; and 2) aurally, with visual presentation of the matching notation, on the other test. After the target melody, a distraction melody sounded, during which time participants were to maintain the original target melody in memory. Participants then chose which of two aural options matched the original target, with a third choice of "neither." The incorrect answer choice in each item contained either a pitch or rhythm discrepancy. The 2x2 factorial design of this experiment was based on independent variables of test presentation format and answer discrepancy type. The dependent variable was experimental test scores. Each participant took both parts of both tests, yielding 164 total observations. Additional data were collected for exploratory analysis: the order in which each participant took the tests, the major instrument of each participant, and the educational status of each participant (undergraduate or graduate). Results of a 2x2 ANOVA revealed no significant differences in test scores, based on either test format or answer discrepancy type, and no interaction between the factors. The exploratory analyses revealed no significant differences in test scores, based on test order, major instrument, or student status. Results suggest that visual reinforcement of melodies does not affect aural memory for those melodies, in terms of either pitch or rhythm. Suggestions for further research include an aural-visual melodic memory test paired with a learning modalities survey, a longitudinal study of visual imagery applied to aural skills study, and a detailed survey of strategies used by successful and unsuccessful dictation students.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
22

Russell, Michael L. "The Phenomenology of Harmonic Progression." Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1703408/.

Full text
Abstract:
This dissertation explores a method of music analysis that is designed to reflect the phenomenology of the listening experience, specifically with regard to harmony. It is primarily inspired by the theoretical approaches of the music theorist Moritz Hauptmann and by the writings of philosopher Edmund Husserl.
APA, Harvard, Vancouver, ISO, and other styles
23

Barbosa, Rafael. "Vers des outils d'analyse musicale à l'échelle humaine." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR2013/document.

Full text
Abstract:
While music has contributed to the development of cognitive psychology and experimental aesthetics, musicology, and more particularly its analytical branch, has benefited little from the achievements of what François Delalande calls "the sciences of music". This situation is the result of a growing ontological distance between the paradigms underlying the development of the cognitive sciences and those on which musicological theory and analysis are grounded. The difficulty of assimilating a transdisciplinary methodology – a central epistemological question that accompanies the development of the cognitive sciences – is also responsible for the chronic lack of interest on the part of musicologists in the scientific disciplines that have opened the possibility of understanding music as an object shaped by perceptual and cognitive constraints, and as a living aesthetic experience. This research evaluates the reasons that make it relevant and necessary to build a direct relation between analytical musicology and the scientific study of perception and aesthetics. It also proposes a definition of the aims and means of an analytical musicology that recognizes and preserves its place within the contemporary episteme of the human and natural sciences.
APA, Harvard, Vancouver, ISO, and other styles
24

Olivier, Ryan. "Musica Speculativa: An Exploration of the Multimedia Concert Experience through Theory and Practice." Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/329943.

Full text
Abstract:
Music Composition
D.M.A.
Musica Speculativa is a final project in two parts in which I explore, through both theory and practice, the role of metaphors in our understanding of reality with special attention given to the use of visual representation in multimedia concert works that employ electroacoustics. Part I, entitled, "Imaginary Cognition: Interpreting the Topoi of Intermedia Electroacoustic Concert Works," explores how metaphors play a core role in our musical experience and how aural metaphors can be enhanced by and ultimately interact with visual metaphors to create a contrapuntal intermedia experience. Part II, "Musica Speculativa: A Multimedia Concert in Five Movements and Three Intermezzi," for mezzo-soprano, flute, B-flat bass clarinet, violin, cello, piano, a percussionist performing an array of lightning bottles, a dancer with a gesture-sensing wand, and a technologist operating interactive audio and video processing, focuses on the medieval philosophy of Musica Speculativa and how it relates to our current understanding of the world. In part I explore the heightened experience of metaphorical exchange through the utilization of multimedia. The starting point is the expansion of visual enhancement in electroacoustic compositions due to the widespread availability of projection in concert halls and the multimedia expectations created through 21st-century Western culture. With the use of visual representation comes the potential to map musical ideas onto visual signs, creating another level of cognition. The subsequent unfolding of visual signifiers offers a direct visual complement and subsequent interaction to the unfolding of aural themes in electroacoustic compositions. The paper surveys the current research surrounding metaphorical thematic recognition in electroacoustic works whose transformational processes might be unfamiliar, and which in turn create fertile ground for the negotiation of meaning. The interaction of media and the differences created among the various signs within the music and the visual art create a heightened concert experience that is familiar to and in many ways expected by contemporary listeners. Composers such as Jaroslaw Kapuscinski have sought to use multimedia as a means to enhance the concert experience, giving movement to the acousmatic presence in their electroacoustic works. In turn, these works create a concert experience that is more familiar to the 21st-century audience. Through examining Kapuscinski's recent work, Oli's Dream, in light of cognitive research by Zbikowski (1998 & 2002), topic theory by Agawu (1991 & 2009), and multimedia research by Cook (1998), I propose a theory for analyzing contrapuntal meaning in multimedia concert works. The themes explored in Part I, regarding the use of metaphor to interpret both visual and aural stimuli, ultimately creating a metaphor for a reality never fully grasped due to the limits of human understanding, are further explored artistically in the multimedia concert work, Musica Speculativa. The medieval philosophy of Musica Speculativa suggests that music as it is understood today (musica instrumentalis) is the only tangible form of the metaphysical music ruling human interactions (musica humana) and ordering the cosmos (musica mundana). I found the concept of Musica Speculativa to be a fitting metaphor for how music and art allude to our own perception of reality and our place within that world. 
The project as a whole re-examines the concept of Musica Speculativa in light of our current technological landscape to gain a deeper understanding of how we interact with the world around us.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
25

Olivier, Ryan. "MusicaSpeculativa-PartII-MultimediaConcertWork.pdf." Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/330327.

Full text
Abstract:
Music Composition
D.M.A.
Musica Speculativa is a final project in two parts in which I explore, through both theory and practice, the role of metaphors in our understanding of reality, with special attention given to the use of visual representation in multimedia concert works that employ electroacoustics. Part I, entitled "Imaginary Cognition: Interpreting the Topoi of Intermedia Electroacoustic Concert Works," explores how metaphors play a core role in our musical experience and how aural metaphors can be enhanced by and ultimately interact with visual metaphors to create a contrapuntal intermedia experience. Part II, "Musica Speculativa: A Multimedia Concert in Five Movements and Three Intermezzi," for mezzo-soprano, flute, B-flat bass clarinet, violin, cello, piano, a percussionist performing an array of lightning bottles, a dancer with a gesture-sensing wand, and a technologist operating interactive audio and video processing, focuses on the medieval philosophy of Musica Speculativa and how it relates to our current understanding of the world. In Part I, I explore the heightened experience of metaphorical exchange through the utilization of multimedia. The starting point is the expansion of visual enhancement in electroacoustic compositions due to the widespread availability of projection in concert halls and the multimedia expectations created through 21st-century Western culture. With the use of visual representation comes the potential to map musical ideas onto visual signs, creating another level of cognition. The unfolding of visual signifiers offers a direct visual complement, and a point of interaction, to the unfolding of aural themes in electroacoustic compositions. The paper surveys the current research surrounding metaphorical thematic recognition in electroacoustic works whose transformational processes might be unfamiliar, and which in turn create fertile ground for the negotiation of meaning. The interaction of media and the differences created among the various signs within the music and the visual art create a heightened concert experience that is familiar to and in many ways expected by contemporary listeners. Composers such as Jaroslaw Kapuscinski have sought to use multimedia as a means to enhance the concert experience, giving movement to the acousmatic presence in their electroacoustic works. In turn, these works create a concert experience that is more familiar to the 21st-century audience. Through examining Kapuscinski's recent work, Oli's Dream, in light of cognitive research by Zbikowski (1998 & 2002), topic theory by Agawu (1991 & 2009), and multimedia research by Cook (1998), I propose a theory for analyzing contrapuntal meaning in multimedia concert works. The themes explored in Part I, regarding the use of metaphor to interpret both visual and aural stimuli, ultimately creating a metaphor for a reality never fully grasped due to the limits of human understanding, are further explored artistically in the multimedia concert work, Musica Speculativa. The medieval philosophy of Musica Speculativa suggests that music as it is understood today (musica instrumentalis) is the only tangible form of the metaphysical music ruling human interactions (musica humana) and ordering the cosmos (musica mundana). I found the concept of Musica Speculativa to be a fitting metaphor for how music and art allude to our own perception of reality and our place within that world.
The project as a whole re-examines the concept of Musica Speculativa in light of our current technological landscape to gain a deeper understanding of how we interact with the world around us.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
26

Soley, Gaye. "Exploring the nature of early social preferences: The case of music." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10390.

Full text
Abstract:
This dissertation aims to explore the nature of early social preferences by testing attention to a cue that might have evolved as a reliable signal of shared group membership: shared cultural knowledge. Part 1 shows that children attend to this cue when making social choices: children both prefer others who know songs they themselves know and avoid others who know songs they do not know, while other cues, such as shared preferences for songs, are not as powerful drivers of social preferences. Part 2 shows that this cue affects how five-month-old infants allocate attention to human singers. After listening to two individuals singing different songs, infants look longer at singers of familiar songs than at singers of unfamiliar songs. When both songs are unfamiliar, infants do not show preferences for singers of songs that follow or violate Western melodic structure, although they are sensitive to these differences. In focusing on familiar songs but not musical styles, infants may selectively attend to information that might mark group membership later in life, namely shared knowledge of specific songs. Part 3 investigates whether children are selective in the properties they use to infer that two individuals belong to the same group, targeting two potentially important social cues: race and gender. Specifically, Part 3 asks whether children attribute shared musical knowledge to individuals of the same race or gender. Four-year-olds attribute shared knowledge to individuals of the same gender, but not of the same race. Five-year-olds attribute shared knowledge to individuals of the same race, but not of the same gender. In contrast, a control measure unrelated to group membership, attributions of shared musical preferences, does not yield any dissociation between attributions based on race or gender. Thus, as they gain experience, children seem to adaptively update the social cues they use to infer shared group membership. Together, these results begin to elucidate the mechanisms underlying early social preferences by showing that children might selectively attend to the most reliable cues to shared group membership, which, in turn, might allow them later in life to participate in the complex social organization that is unique to human societies.
Psychology
APA, Harvard, Vancouver, ISO, and other styles
27

Love, Diana Bonham. "The relationship of tempo, pattern length, and grade level on the recognition of rhythm patterns." Diss., Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/77909.

Full text
Abstract:
The purpose of this study was to investigate the effects of tempo, pattern length, and grade level on student ability to recognize rhythm patterns. The study was also intended to determine whether age and experience are factors that affect rhythm recognition and memory. A 48-item Rhythm Pattern Identification (RPI) test was administered to 2,146 band students and 114 nonmusic students in grades 6 through 12. The RPI consisted of 48 pairs of rhythm patterns varied in time length (seconds), number of note values (sound events), and tempo. Students indicated whether the pairs of rhythm patterns were the same or different. Statistical analysis indicated that the reliability estimate (KR-20) of the RPI ranged from .445 to .792, with a median of .553. Criterion-related validity was established through a correlation of student scores on the Iowa Tests of Music Literacy (Gordon, 1970) and the RPI, r = .39. A multiple regression analysis indicated that .36 of the variance in RPI scores was attributable to the linear combination of tempo, length in seconds, number of sound events, and grade level. As expected, the independent variables of length in seconds and length in sound events were significantly correlated (R = .63); however, there were no significant correlations among the other independent variables. Inverse relationships were found between tempo and score and between length and score. Beta weights indicated that the number of sound events was the most significant influence on student scores. Data indicated a slight increase in score from one grade level to the next, with significant differences occurring between grade six and grades eleven and twelve and between grade seven and grades eleven and twelve. The results of the study indicate that length of pattern in seconds, number of sound events, tempo, and grade level all affect memory for rhythm patterns. These findings corroborate those of Dowling (1973), Sink (1983), and Fraisse (1982). The implications for music education are: (1) tempo may be a factor that influences how students learn rhythm, and (2) student perception of rhythm may be affected more by the length of the rhythm pattern in number of sound events than by the length of the pattern in seconds. Future research should include further investigation of young students' ability to comprehend rhythm patterns. It is evident that young students can perceive and recognize patterns as complex as those recognized by older students.
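To make the reported statistics concrete, the following minimal Python sketch (synthetic data and invented variable names, not the study's dataset) shows how a KR-20 reliability estimate and a multiple regression of recognition scores on tempo, pattern length, sound events, and grade level might be computed.

# Illustrative sketch only: synthetic data, hypothetical variable names.
import numpy as np

rng = np.random.default_rng(0)

# --- KR-20 reliability for a dichotomously scored 48-item test ---
items = (rng.random((200, 48)) > 0.4).astype(int)   # 200 simulated test-takers
k = items.shape[1]
p = items.mean(axis=0)                               # proportion correct per item
total_var = items.sum(axis=1).var(ddof=1)            # variance of total scores
kr20 = (k / (k - 1)) * (1 - np.sum(p * (1 - p)) / total_var)

# --- Multiple regression: score ~ tempo + seconds + sound_events + grade ---
n = 200
tempo = rng.uniform(60, 160, n)          # beats per minute
seconds = rng.uniform(2, 8, n)           # pattern length in seconds
events = rng.integers(4, 16, n)          # number of sound events
grade = rng.integers(6, 13, n)           # grade level 6-12
score = 40 - 0.02 * tempo - 0.5 * events + 0.6 * grade + rng.normal(0, 3, n)

X = np.column_stack([np.ones(n), tempo, seconds, events, grade])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)     # ordinary least squares
fitted = X @ beta
r_squared = 1 - ((score - fitted) ** 2).sum() / ((score - score.mean()) ** 2).sum()
print(f"KR-20 = {kr20:.3f}, R^2 = {r_squared:.3f}, betas = {beta.round(3)}")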
Ed. D.
APA, Harvard, Vancouver, ISO, and other styles
28

Crespo, Bojorque Paola 1985. "Biological foundations of consonance perception : exploring phylogenetic roots and neural mechanisms." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/481992.

Full text
Abstract:
Consonance is one of the most salient features of music. Despite its central role in Western music, its origins remain controversial. Understanding the mechanisms involved in the perception of a chord as consonant (stable) or dissonant (unstable) has therefore become an outstanding issue in music perception research. The present dissertation is devoted to exploring the biological bases of consonance perception through a comparative and a neurophysiological approach. Results from several experiments showed that rats, a species with no documented vocal learning abilities, share with humans the capacity to discriminate consonance from dissonance. The animals, however, lack the ability to generalize to new stimuli and do not exhibit the processing benefits for consonance that humans show. Moreover, musicians' neural responses triggered by changes in consonance and dissonance differed from those of non-musicians. Together, the results reported in this dissertation highlight that experience with harmonic stimuli, such as vocal production and musical training, is an important factor in accounting for the emergence of consonance within our musical system.
APA, Harvard, Vancouver, ISO, and other styles
29

Aarden, Bret J. "Dynamic melodic expectancy." The Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=osu1060969388.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Giorgio, Maurizio. "Salience concept in auditory domain with regard to music cognition." Thesis, Paris 10, 2014. http://www.theses.fr/2014PA100099/document.

Full text
Abstract:
This research examines several issues related to the perception, representation, and categorization of musical stimuli during listening. We investigate these cognitive processes with reference to the different theoretical approaches found in the international scientific literature. In particular, the thesis focuses on the perceptual segmentation of musical pieces during listening. Two behavioral experiments analyze the respective roles of various structural and dynamic features in the development of the listener's representation of a musical composition; variables related to the musician and to the listener are also taken into account. The experimental data are discussed in relation to current models of the auditory salience map, as well as to the more specific models of musical segmentation developed over the past thirty years in the cognitive psychology of music. The experiments used a musical segmentation paradigm in which participants heard and segmented atonal pieces twice, with presentation order counterbalanced across participants. The results demonstrate that the salience map is not an immutable frame that is simply filled in by combinations of stimulus features. On the contrary, it can be modulated by goal-directed attention, for example through the modulation of specific perceptual thresholds for certain features.
APA, Harvard, Vancouver, ISO, and other styles
31

Rodrigues, Fabrizio Veloso. "O processamento de informação rítmica em pessoas com ouvido absoluto." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/47/47135/tde-04092017-180636/.

Full text
Abstract:
Absolute pitch is described as the ability to name or produce musical notes without an external reference. Recent studies suggest faster processing of linguistic information in people with this ability. Rhythmic content is known to be an essential element of linguistic processing; however, it is not known whether people with absolute pitch process rhythmic information differently. The objective of this work was to verify whether absolute pitch possessors and non-possessors differ in processing rhythmic patterns in sound stimuli. Sixteen participants, 8 with absolute pitch and 8 without the ability, performed a psychophysical task in which they were asked to reproduce, as accurately as possible, rhythmic sequences presented to them. As criteria of performance, we considered the number of intervals produced and the evolution of temporal accuracy over the course of the task. No significant differences were found between the two groups. The results suggest that the processing of rhythmic information does not involve any significant contribution of neural processes present only in people with absolute pitch.
APA, Harvard, Vancouver, ISO, and other styles
32

Lima, Letícia Dias de [UNESP]. "Percepção musical e cognição: abordagem de aspectos rítmicos no treinamento auditivo." Universidade Estadual Paulista (UNESP), 2018. http://hdl.handle.net/11449/154736.

Full text
Abstract:
This work aims to investigate the relations between the perception of musical rhythm and the ear-training methods used in undergraduate music programs at the state universities of São Paulo. The main tools of this research are theories and experiments in the field of music cognition, a field that investigates how information is acquired, processed, and organized, that is, the cognitive activities related to knowledge. From this point of view, we approach aspects of rhythm perception, cognitive development, and music teaching and learning. First, we discuss the concepts of beat, accent, meter, rhythm, and grouping, and the perceptual processes involved in them. We contextualize these issues by discussing cognitive development and aspects of learning and memory. Finally, we seek to understand how the methods used in the Music Perception courses at the universities mentioned deal with rhythm perception. The evaluations show that the selected methods do not address rhythm perception directly, since they do not lead the student to develop perceptual processes in the way they occur in real musical listening and performance. They fit into a mechanistic model of teaching, in which learning consists of training and the repeated practice of exercises whose focus is musical production rather than perception. The lack of theoretical depth, of discussion of concepts, of musical contextualization, and of attention to stylistic and organological issues related to performance leads us to conclude that these methods only complement the course; their function is to provide materials for ear training based especially on the performance of rhythmic patterns.
APA, Harvard, Vancouver, ISO, and other styles
33

Ramos, Danilo. "Fatores emocionais durante uma escuta musical afetam a percepção temporal de músicos e não-músicos?" Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/59/59137/tde-08102008-013413/.

Full text
Abstract:
RAMOS, Danilo. Do emotional factors during music listening tasks affect time perception of musicians and nonmusicians? 2008, 268 pages. Thesis (PhD). Faculty of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Ribeirão Preto, 2008. This study aimed to verify the role of emotions triggered by music in the time perception of musicians and nonmusicians. Four experiments were conducted. In Experiment I, musicians and nonmusicians performed emotional association tasks for 36-second musical excerpts belonging to the Western classical repertoire. The task consisted of listening to each excerpt and associating it with one of the emotional categories Joy, Serenity, Sadness, Fear, or Anger. The results showed that most excerpts triggered a single specific emotion in listeners; moreover, the emotional associations of musicians were similar to those of nonmusicians for most of the excerpts presented. In Experiment II, musicians and nonmusicians performed temporal association tasks for the excerpts most representative of each emotion from Experiment I: each excerpt was presented and participants had to associate it with a duration of 16, 18, 20, 22, or 24 seconds. The results showed that, for the musicians, the three excerpts associated with Sadness were underestimated relative to their real durations; no other emotional category had more than one excerpt that was underestimated or overestimated relative to its real duration, for either group. Recent research in the psychology of music has identified two structural properties as modulators of the specific emotions perceived during music listening: mode (the organization of the notes within a musical scale) and tempo (the number of beats per minute). Thus, in Experiment III, musicians and nonmusicians carried out emotional association tasks with musical compositions constructed in seven modes (Ionian, Dorian, Phrygian, Lydian, Mixolydian, Aeolian, and Locrian) and three tempi (adagio, moderato, and presto), following the procedure of Experiment I. The results showed that musical mode modulated the affective valence triggered by the excerpts: excerpts in major modes obtained positive valence ratings and excerpts in minor modes obtained negative valence ratings. Musical tempo, in turn, modulated the arousal triggered by the excerpts: the faster the tempo, the higher the arousal levels, and vice versa, for both groups. In Experiment IV, musicians and nonmusicians performed temporal association tasks for the modal excerpts used in Experiment III, following the procedure of Experiment II. The results showed that manipulations affecting arousal in particular influenced listeners' time perception: temporal underestimations were found for low-arousal excerpts in both groups, and temporal overestimations were found for high-arousal excerpts in the nonmusician group. These results show that, for musicians, time perception was affected by emotional atmospheres related to Sadness, whereas for nonmusicians, time perception was affected by factors related to the arousal level of the musical events heard.
APA, Harvard, Vancouver, ISO, and other styles
34

Warrenburg, Lindsay Alison. "Subtle Semblances of Sorrow: Exploring Music, Emotional Theory, and Methodology." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1566765247386444.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Jimison, Zachary N. "The Effect of Music Familiarity on Driving: A Simulated Study of the Impact of Music Familiarity Under Different Driving Conditions." UNF Digital Commons, 2014. http://digitalcommons.unf.edu/etd/539.

Full text
Abstract:
Listening to music is one of the most popular activities while driving. Previous research on music and driving has been mixed, with some researchers finding music to be a distractor and others finding it facilitative to driving performance. The current study was designed to determine whether familiarity with the music might explain the differences found between self-selected and experimenter-selected music, and whether the difficulty of the driving conditions affected music's relationship to driving performance. One hundred sixty-five university students completed a driving simulation both with and without music. In the music condition, participants were randomly assigned to one of three music conditions: self-selected music, experimenter-selected familiar music, and experimenter-selected unfamiliar music. In the simulated drive, participants first drove under a simple, low-mental-workload condition (a car-following task on a simulated suburban road) and then under a complex, high-mental-workload condition (a city/urban road). The results showed that whether music was self- or experimenter-selected did not affect overall driving performance, nor did whether the music was familiar or unfamiliar. However, self-selected music appeared to improve driving performance under low-workload conditions, leading to less car-following delay and a smaller standard deviation in steering, but it also caused participants to drive faster, leading to a higher mean speed and a higher car-following modulus, though not more speed-limit violations. Self-selected music did not have any significant effect under high-mental-workload conditions.
APA, Harvard, Vancouver, ISO, and other styles
36

Woloszyn, Michael Richard. "Perceptual asymmetries in a diatonic context." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0031/NQ66246.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Mendoza, Jennifer. "Characterizing the Structure of Infants' Everyday Musical Input." Thesis, University of Oregon, 2018. http://hdl.handle.net/1794/23763.

Full text
Abstract:
Infants acculturate to their soundscape over the first year of life (e.g., Hannon & Trehub, 2005a; Werker & Tees, 1984). This perceptual tuning of early auditory skills requires integrating across experiences that repeat and vary in content and are distributed in time. Music is part of this soundscape, yet little is known about the real-world musical input available to infants as they begin learning sounds, melodies, rhythms, and words. In this dissertation, we collected and analyzed a first-of-its-kind corpus of music identified in day-long audio recordings of 6- to 12-month-old infants and their caregivers in their natural, at-home environments. We characterized the structure of this input in terms of key distributional and temporal properties that shape learning in many domains (e.g., Oakes & Spalding, 1997; Roy et al., 2015; Vlach et al., 2008; Weisleder & Fernald, 2013). This everyday sensory input serves as the data available for infants to aggregate in order to build knowledge about music. We discovered that infants encountered nearly an hour of cumulative music per day distributed across multiple instances. Infants encountered many different tunes and voices in their daily music. Within this diverse range, infants encountered consistency, such that some tunes and voices were more available than others in infants’ everyday musical input. The proportion of music produced by live voices varied widely across infants. As infants progressed in time through their days, they encountered many music instances close together in time as well as some music instances separated by much longer lulls. This bursty temporal pattern also characterized how infants encountered instances of their top tune and their top voice – the specific tune and specific voice that occurred for the longest cumulative duration in each infant’s day. Finally, infants encountered many pairs of consecutive music bouts with repeated content – the same unique tune or the same unique voice. Taken together, we discovered that infants’ everyday musical input was more consistent than random in both content and time across infants’ days at home. These findings have potential to inform theory and future research examining how the nature of early music experience shapes infants’ early learning.
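To make the kinds of summaries described here concrete, the following minimal Python sketch computes cumulative daily music, the most available tune, and inter-onset gaps from a toy table of music bouts; the field names and values are invented for illustration and are not drawn from the corpus.

# Illustrative sketch with made-up bout data; field names are hypothetical.
from collections import defaultdict

# Each bout: (onset in seconds from midnight, duration in seconds, tune label)
bouts = [(9 * 3600, 120, "tune_A"), (9 * 3600 + 300, 45, "tune_B"),
         (13 * 3600, 600, "tune_A"), (18 * 3600, 90, "tune_C")]

cumulative = sum(d for _, d, _ in bouts)                 # total music per day
per_tune = defaultdict(float)
for _, dur, tune in bouts:
    per_tune[tune] += dur
top_tune = max(per_tune, key=per_tune.get)               # the "top tune"

onsets = sorted(onset for onset, _, _ in bouts)
gaps = [b - a for a, b in zip(onsets, onsets[1:])]       # inter-onset intervals
burstiness = (max(gaps) / min(gaps)) if gaps and min(gaps) > 0 else None  # rough index

print(cumulative / 60, "minutes of music; top tune:", top_tune)
print("inter-onset gaps (s):", gaps, "| max/min gap ratio:", burstiness)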
APA, Harvard, Vancouver, ISO, and other styles
38

Byron, Timothy Patrick. "The processing of pitch and temporal information in relational memory for melodies." View thesis, 2008. http://handle.uws.edu.au:8081/1959.7/37492.

Full text
Abstract:
Thesis (Ph.D.) -- University of Western Sydney, 2008.
A thesis submitted to the University of Western Sydney, College of Arts, School of Psychology, in fulfilment of the requirements for the degree of Doctor of Philosophy. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
39

Celma, Miralles Alexandre 1991. "Neural and evolutionary correlates of rhythm processing through beat and meter." Doctoral thesis, Universitat Pompeu Fabra, 2020. http://hdl.handle.net/10803/668448.

Full text
Abstract:
Time is a structural component of music. In every culture, musical sounds are produced and perceived as rhythmic patterns possessing an underlying isochronous pulse. This isochronous pulse is organized by meter into patterns that arrange strong and weak positions hierarchically. Both the isochronous pulse and meter are cognitive constructs that function as temporal reference points for categorizing and predicting events, which allows, among other things, the synchronization of movements. This thesis aims to explore the biological bases of the isochronous pulse and of hierarchical meter from a neurophysiological and comparative approach. Electrophysiological studies with humans revealed that neural populations can synchronize with periodic visual and auditory stimuli, and with ternary meter, whether imagined in the visual modality or marked by spatial auditory features. In addition, musical training and attention interact with rhythm processing and strengthen neural synchrony with the periodicities of the pulse and the meter. Behavioral studies with rats revealed that other animals are able to recognize the rhythmic structure underlying a familiar song and can detect isochrony in auditory sequences presented at various tempi, independently of the absolute duration of the tones. Unlike humans, rats lack vocal learning abilities, which therefore appear not to be necessary for processing these two temporal components of rhythm. Taken together, these findings indicate that some rhythmic aspects of music extend beyond the auditory modality in humans and that their origins may be found in other species.
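The abstract does not detail the analysis, but neural synchronization to pulse and meter periodicities is commonly assessed by looking for spectral peaks at the corresponding frequencies. The following Python sketch illustrates that general idea on a synthetic signal; the sampling rate, the 2.4 Hz pulse, and the 0.8 Hz ternary-meter frequency are illustrative assumptions, not values from the thesis.

# Illustrative only: synthetic "neural" signal, assumed pulse/meter frequencies.
import numpy as np

fs = 250.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)     # 60 s of signal
beat_hz, meter_hz = 2.4, 0.8     # a 2.4 Hz pulse grouped in three (ternary meter)

signal = (0.5 * np.sin(2 * np.pi * beat_hz * t)
          + 0.3 * np.sin(2 * np.pi * meter_hz * t)
          + np.random.default_rng(1).normal(0, 1, t.size))

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for label, f in [("pulse", beat_hz), ("meter", meter_hz)]:
    idx = np.argmin(np.abs(freqs - f))        # nearest frequency bin
    print(f"{label} ({f} Hz): amplitude {spectrum[idx]:.3f}")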
APA, Harvard, Vancouver, ISO, and other styles
40

Vinke, Louis Nicholas. "Factors Affecting the Perceived Rhythmic Complexity of Auditory Rhythms." Bowling Green State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1269042162.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Peckel, Mathieu. "Le lien réciproque entre musique et mouvement étudié à travers les mouvements induits par la musique." Thesis, Dijon, 2014. http://www.theses.fr/2014DIJOL025/document.

Full text
Abstract:
Music and movement are inseparable. The movements spontaneously produced when listening to music are thought to reflect a close link between the perceptual and motor systems. This link is the main topic of this thesis. A first approach focused on the impact of music-induced movements on music cognition. In two studies, we show that moving in time with music enhances neither the retention of new musical pieces (Study 1) nor the retention of contextual information related to their encoding (Study 2). These results suggest that the processing inherent to the expression of musical affordances required to produce music-induced movements in the motor task is shallow, and that music is automatically processed in a motoric fashion independently of the task. The importance of musical groove was also highlighted. A second approach concerned the influence of the perception of musical rhythms on the production of rhythmic movements. Our third study tested the hypothesis that different limbs would be influenced differently depending on the musical tempo. Results show that the tapping task was the most influenced by the perception of musical rhythms. We argue that this stems from the similar nature of the musical pulse and the timing mechanisms involved in tapping, as well as from motor resonance phenomena. We also observed various strategies that participants adopted to cope with the task. All these results are discussed in light of the link between perception and action, embodied music cognition, and musical affordances.
APA, Harvard, Vancouver, ISO, and other styles
42

Savander, Alma. "Is music listening associated with our cognitive abilities? : A study about how auditory working memory, speech-in-noise perception and listening habits are connected." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170527.

Full text
Abstract:
This study explores whether the number of hours young adults with self-reported normal hearing spend listening to music is associated with auditory working memory, and whether hours of music listening and auditory working memory can predict speech-in-noise perception. Thirty native Swedish-speaking university students with self-reported normal hearing, aged 21 to 29 years (M = 23.2), completed a self-report questionnaire about their listening habits, a listening-span test, and a speech-in-noise test. A hierarchical multiple linear regression analysis was performed. The results did not suggest a significant correlation between hours of music listening and auditory working memory, nor did they indicate that hours of music listening and auditory working memory could significantly predict speech-in-noise perception. These null findings might be due to several reasons, including methodological issues such as the sample size, communication difficulties due to poor internet connection, and/or the use of self-reported answers. These results and the arguments presented in the discussion indicate that further research is needed to better answer the research questions of the current study.
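As an illustration of the analysis described, here is a minimal Python sketch of a hierarchical (nested-model) regression, comparing variance explained before and after adding auditory working memory as a predictor. The data, scales, and variable names are invented for the example and are not the study's.

# Illustrative sketch of a hierarchical (nested) regression; data are synthetic.
import numpy as np

def r_squared(X, y):
    """Ordinary least squares fit; returns the proportion of variance explained."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(2)
n = 30                                        # same order as the reported sample size
hours_music = rng.uniform(0, 5, n)            # self-reported daily listening (assumed scale)
working_memory = rng.normal(0, 1, n)          # listening-span score (standardized)
speech_in_noise = 0.4 * working_memory + rng.normal(0, 1, n)

r2_step1 = r_squared(hours_music.reshape(-1, 1), speech_in_noise)
r2_step2 = r_squared(np.column_stack([hours_music, working_memory]), speech_in_noise)
print(f"Step 1 R^2 = {r2_step1:.3f}; Step 2 R^2 = {r2_step2:.3f}; "
      f"Delta R^2 = {r2_step2 - r2_step1:.3f}")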
APA, Harvard, Vancouver, ISO, and other styles
43

Harris, Philip Geoffrey. "Cortical activity associated with rhythmic grouping of pitch sequences." Australasian Digital Thesis Program, 2007. http://adt.lib.swin.edu.au/public/adt-VSWT20071001.113258/index.html.

Full text
Abstract:
Thesis (PhD) - Swinburne University of Technology, Brain Sciences Institute, 2007.
A thesis for Doctorate of Philosophy, Brain Sciences Institute, Swinburne University of Technology - 2007. Typescript. Bibliography: p. 245-285.
APA, Harvard, Vancouver, ISO, and other styles
44

Wilson, Maura L. "Examining the effects of variation in emotional tone of voice on spoken word recognition." Cleveland State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=csu1304093822.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Giomi, Andrea. "La pensée sonore du corps : Pour une approche écologique à la médiation technologique, au mouvement et à l'interaction sonore." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR2041/document.

Full text
Abstract:
In recent years, motion-sensing technologies have radically transformed the universe of artistic practice while opening new perspectives for scientific research. Music is currently one of the domains most affected by this expressive and epistemological renewal. In this context, the interaction between technological mediation, movement, and sound unfolds along two main lines: on the one hand, movement-analysis technologies make it possible to study experimentally the mutual connection between acoustic phenomena and the sensorimotor system; on the other hand, an embodied understanding of musical experience orients the design and development of interactive technologies for performance toward a more holistic model. Starting from these premises, this thesis focuses on how the transformation of imperceptible aspects of movement into perceptible data, in the form of sound, allows the performer to become aware of the physiological and figurative processes underlying gesture. The relation between movement and sound feedback is analyzed from an ecological perspective, highlighting how technological mediation induces an autopoietic process of extension and intensification of corporeality. In performance practice in particular, sound interaction thus offers the performer the possibility of redefining his or her perceptual organization on the basis of a new repertoire of sensory data, and thereby of rethinking the expressive composition of movement.
APA, Harvard, Vancouver, ISO, and other styles
46

Jarvstad, Andreas. "The optimality of perception and cognition : the perception-cognition gap explored." Thesis, Cardiff University, 2012. http://orca.cf.ac.uk/24208/.

Full text
Abstract:
The ability to choose wisely is crucial for our survival. Yet the received wisdom has been that humans choose irrationally and sub-optimally. This conclusion is largely based on studies in which participants are asked to make choices on the basis of explicit numerical information. Lately, our ability to make such high-level choices has been contrasted with our ability to make low-level (perceptual or perceptuo-motor) choices. Remarkably, we seem able to make near-optimal low-level choices. Taken at face value, the discrepancy gives rise to a perception-cognition gap. The gap implies, for example, that our ancestors were much better at choosing where to put their feet on a rocky ridge (a perceptuo-motor task) than at choosing which prey to hunt (a cognitive task). The work reported herein probes this gap. There are many differences between the literatures showing optimal and sub-optimal performance. The main approach taken here was to match low- and high-level tasks as closely as possible to eliminate such differences. When this is done, one finds very little evidence for a perception-cognition gap. Moreover, once the standards of performance assessment of the respective literatures are applied to data generated under such conditions, it becomes apparent that the cause of the gap seems to lie in the standards themselves. When low-level standards are applied, human choice, whether low- or high-level, looks good. When high-level standards are applied, human choice, whether low- or high-level, looks rather poor. It is easy to see, then, that applying high-level standards to high-level tasks, and low-level standards to low-level tasks, will give rise to a "gap", with little or no actual difference in performance.
APA, Harvard, Vancouver, ISO, and other styles
47

Pinard, Dominique. "Analogie et saisie discursive de l'expérience sonore : du sensible à l'intelligible." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCH046/document.

Full text
Abstract:
Like any human experience, the relationship to sound calls for language, which helps inscribe it in the lives of individuals and communities. Speakers nevertheless recurrently report a particular difficulty in talking about acoustic phenomena. Should this be attributed to an essential irreducibility of the perceived to the spoken, to possible deficiencies of language in capturing auditory experience, or to the specificity of a situation in which speech, bound to sound by the double link of signans and signatum, reactivates the semiotic process at its source? Highlighting the fact that talking about sounds always means talking about our relation to acoustic phenomena, this corpus-based study focuses on the way speech bears witness to, and takes part in, the inscription of sound experience at the heart of being in the world. Drawing on the cognitive approach to analogy (Hofstadter & Sander, 2013), it examines, from the sonic to the musical, the links between auditory categorization processes and the various modalities by which perception is captured in discourse. If, as the analysis suggests, the same analogies are at work both in our relation to auditory phenomena and in the meaningful use of sound in language, can speech about sound help us better understand what is at stake in the original link that binds, in and through acoustic experience, the exploration of the sensible to the engagement of the adventure of meaning?
APA, Harvard, Vancouver, ISO, and other styles
48

Hoch, Lisianne. "Perception et apprentissage des structures musicales et langagières : études des ressources cognitives partagées et des effets attentionnels." Thesis, Lyon 2, 2010. http://www.theses.fr/2010LYO20049/document.

Full text
Abstract:
Music and language are structurally organized materials based on combinatorial principles. Listeners acquire knowledge about these structural regularities through mere exposure, and this knowledge allows them to develop expectations about upcoming events when perceiving music and language. My PhD investigated two aspects of the domain-specificity versus domain-generality of music and language processing: perception and statistical learning. In the first part (perception), musical structure processing was shown to influence the processing of spoken language and of language presented visually (Studies 1 to 4), partly reflecting dynamic attending mechanisms (Jones, 1976). More specifically, musical structure processing interacted with linguistic-syntactic processing, but not with linguistic-semantic processing (Study 3), supporting the hypothesis of shared syntactic integration resources for music and language (Patel, 2003). Together with previous studies investigating simultaneous musical and linguistic (syntactic and semantic) structure processing, these results led us to propose that the shared resources might extend to the processing of other structured information requiring structural and temporal integration. This hypothesis was tested and supported by interactive influences between simultaneous musical and arithmetic structure processing (Study 4). In the second part (learning), statistical learning was directly compared for verbal and nonverbal materials. In particular, we investigated the influence of dynamic attending driven by non-acoustic (Studies 5 and 6) and acoustic (Study 7) temporal cues on statistical learning. Non-acoustic temporal cues influenced the statistical learning of both verbal and nonverbal artificial languages. In agreement with the dynamic attending theory (Jones, 1976), we propose that non-acoustic temporal cues guide attention over time and thereby influence statistical learning. Based on the influence of dynamic attending mechanisms on perception and learning, and on evidence of shared structural and temporal integration resources for the processing of musical structures and other structured information, this PhD opens new questions about the potential influence of tonal and temporal auditory structure processing on the general cognitive sequencing abilities required for structured sequence perception and learning.
Jones, M. R. (1976). Time, our lost dimension: Toward a new theory of perception, attention, and memory. Psychological Review, 83(5), 323-355. doi:10.1037/0033-295X.83.5.323
Patel, A. D. (2003). Language, music, syntax and the brain. Nature Neuroscience, 6(7), 674-681. doi:10.1038/nn1082
49

Veto, Peter, Marvin Uhlig, Nikolaus F. Troje, and Wolfgang Einhäuser. "Cognition modulates action-to-perception transfer in ambiguous perception." Association for Research in Vision and Ophthalmology (ARVO), 2018. https://monarch.qucosa.de/id/qucosa%3A31533.

Abstract:
Can cognition penetrate action-to-perception transfer? Participants observed a structure-from-motion cylinder of ambiguous rotation direction. Beforehand, they experienced one of two mechanical models: an unambiguous cylinder was connected to a rod either by a belt (cylinder and rod rotating in the same direction) or by gears (the two rotating in opposite directions). During ambiguous cylinder presentation, mechanics and rod were invisible, making both conditions visually identical. Observers inferred the rod's direction from their moment-by-moment subjective perceptual interpretation of the ambiguous cylinder. They reported the (hidden) rod's direction by rotating a manipulandum in either the same or the opposite direction. In its effect on perceptual stability, the resulting match or nonmatch between perceived cylinder rotation and manipulandum rotation showed a significant interaction with the cognitive model with which observers had previously been biased. For the "belt" model, congruency between cylinder perception and manual action is induced by same-direction report. Here, we found that same-direction movement stabilized the perceived motion direction, replicating a known congruency effect. For the "gear" model, congruency between perception and action is, in contrast, induced by opposite-direction report. Here, no effect of perception-action congruency was found: perceptual congruency and cognitive model nullified each other. Hence, an observer's internal model of a machine's operation guides action-to-perception transfer.
50

Siegel, Max Harmon. "Compositional simulation in perception and cognition." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/121814.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2019
Cataloged from PDF version of thesis. "February 2019."
Includes bibliographical references (pages 97-103).
Despite rapid recent progress in machine perception and models of biological perception, fundamental questions remain open. In particular, the paradigm underlying these advances, pattern recognition, requires large amounts of training data and struggles to generalize to situations outside the domain of training. In this thesis, I focus on a broad class of perceptual concepts - those that are generated by the composition of multiple causal processes, in this case certain physical interactions - that humans use essentially and effortlessly in making sense of the world, but for which any specific instance is extremely rare in our experience. Pattern recognition, or any strongly learning-based approach, might then be an inappropriate way to understand people's perceptual inferences.
I propose an alternative approach, compositional simulation, that can in principle account for these inferences, and I show in practice that it provides both qualitative and quantitative explanatory value for several experimental settings. Consider a box and a number of marbles in the box, and imagine trying to guess how many there are based on the sound produced when the box is shaken. I demonstrate that human observers are quite good at this task, even for subtle numerical differences. Compositional simulation hypothesizes that people succeed by leveraging internal causal models: they simulate the physical collisions that would result from shaking the box (in a particular way), and what those collisions would sound like, for different numbers of marbles. They then compare their simulated sounds with the sound they heard.
Crucially, these simulation models can generalize to a wide range of percepts, even those never before experienced, by exploiting the compositional structure of the causal processes being modeled, in terms of objects and their interactions, and physical dynamics and auditory events. Because the motion of the box is a key ingredient in physical simulation, I hypothesize that people can take cues to motion into account in our task; I give evidence that people do. I also consider the domain of unfamiliar objects covered by cloth: a similar mechanism should enable successful recognition even for unfamiliar covered objects (like airplanes). I show that people can succeed in the recognition task, even when the shape of the object is very different when covered. Finally, I show how compositional simulation provides a way to "glue together" the data received by perception (images and sounds) with the contents of cognition (objects).
I apply compositional simulation to two cognitive domains: children's intuitive exploration (obtaining quantitative prediction of exploration time), and causal inference from audiovisual information.
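The thesis describes compositional simulation only at the level of the inference it supports. As a hedged illustration of the general idea (simulate candidate worlds, then compare simulated and observed evidence), the sketch below casts the marble-counting example as simulation-based inference; the toy simulator, the single collision-count feature, and the Gaussian comparison are assumptions made for this illustration, not the thesis's actual model.

```python
import math
import random

def simulate_shake_sound(n_marbles, shake_vigour, n_steps=200, seed=None):
    """Toy stand-in for a physics + audio simulator: returns a crude sound
    summary (total collision count) for n_marbles marbles shaken with a
    given vigour. A real model would simulate rigid-body collisions and
    synthesize audio; this placeholder only preserves the idea that more
    marbles and harder shaking produce more collision sounds."""
    rng = random.Random(seed)
    collisions = 0
    for _ in range(n_steps):
        # Probability of a collision in this time step grows with the
        # number of marbles and the vigour of the shake.
        p = min(shake_vigour * n_marbles * (n_marbles + 1) / 50.0, 1.0)
        collisions += rng.random() < p
    return collisions

def infer_marble_count(observed, shake_vigour, candidates, n_sims=50):
    """Compare the observed sound summary with simulations for each
    candidate marble count and return the best-matching count."""
    best, best_score = None, -math.inf
    for n in candidates:
        sims = [simulate_shake_sound(n, shake_vigour) for _ in range(n_sims)]
        mean = sum(sims) / n_sims
        var = sum((s - mean) ** 2 for s in sims) / n_sims + 1e-6
        # Gaussian log-likelihood of the observation under this hypothesis.
        score = -0.5 * math.log(2 * math.pi * var) - (observed - mean) ** 2 / (2 * var)
        if score > best_score:
            best, best_score = n, score
    return best

# Hypothetical use: the "heard" sound is summarised with the same feature.
heard = simulate_shake_sound(n_marbles=3, shake_vigour=1.0, seed=1)
print(infer_marble_count(heard, shake_vigour=1.0, candidates=[1, 2, 3, 4]))
```

A fuller model along these lines would replace the placeholder simulator with rigid-body physics and audio synthesis and would compare richer auditory features; the compositional aspect lies in recombining the same object, motion, and sound components to handle situations never encountered before.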
