
Dissertations / Theses on the topic 'Audio-visual integration'



Consult the top 15 dissertations / theses for your research on the topic 'Audio-visual integration.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Dietrich, Kelly. "Analysis of talker characteristics in audio-visual speech integration." 2008. http://hdl.handle.net/1811/32149.

2

Mihalik, Agoston. "The neural basis of audio-visual integration and adaptation." Thesis, University of Birmingham, 2017. http://etheses.bham.ac.uk//id/eprint/7692/.

Abstract:
The brain integrates or segregates audio-visual signals effortlessly in everyday life. In order to do so, it needs to infer the causal structure by which the signals were generated. Although behavioural studies have extensively characterized causal inference in audio-visual perception, its neural mechanisms remain barely explored. The current thesis sheds light on these neural processes and demonstrates how the brain adapts to dynamic as well as long-term changes in the environmental statistics of audio-visual signals. In Chapter 1, I introduce the causal inference problem and demonstrate how spatial audio-visual signals are integrated at the behavioural as well as the neural level. In Chapter 2, I describe the methodological foundations for the following empirical chapters. In Chapter 3, I present the neural mechanisms of explicit causal inference and the representations of audio-visual space along the human cortical hierarchy. Chapter 4 reveals that the brain is able to use the recent past to adapt to a dynamically changing environment. In Chapter 5, I discuss the neural substrates of encoding auditory space and its adaptive changes in response to spatially conflicting visual signals. Finally, in Chapter 6, I summarize the findings of the thesis and its contributions to the literature, and outline directions for future research.
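For readers new to this framework, the standard formalization of audio-visual causal inference (following Körding et al., 2007 — a textbook sketch, not an equation taken from the thesis itself) has the observer compute the posterior probability that auditory and visual signals $x_A$ and $x_V$ arose from a common cause ($C = 1$) rather than from independent causes ($C = 2$):

```latex
P(C=1 \mid x_A, x_V)
  = \frac{p(x_A, x_V \mid C=1)\, p(C=1)}
         {p(x_A, x_V \mid C=1)\, p(C=1) + p(x_A, x_V \mid C=2)\,\bigl(1 - p(C=1)\bigr)}
```

The final spatial estimate can then be formed by model averaging, $\hat{s} = P(C=1 \mid x_A, x_V)\,\hat{s}_{\text{fused}} + \bigl(1 - P(C=1 \mid x_A, x_V)\bigr)\,\hat{s}_{\text{segregated}}$, which weights integration and segregation by the inferred causal structure.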
3

Makovac, Elena. "Audio-visual interactions in manual and saccadic responses." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8040.

Abstract:
Chapter 1 introduces the notions of multisensory integration (the binding of information coming from different modalities into a unitary percept) and multisensory response enhancement (the improvement of the response to multisensory stimuli relative to the response to the most efficient unisensory stimulus), as well as the general goal of the present thesis, which is to investigate different aspects of the multisensory integration of auditory and visual stimuli in manual and saccadic responses. The subsequent chapters report experimental evidence of different factors affecting the multisensory response: spatial discrepancy, stimulus salience, congruency between cross-modal attributes, and the inhibitory influence of concurrent distractors. Chapter 2 reports three experiments on the role of the superior colliculus (SC) in multisensory integration. To this end, the absence of S-cone input to the SC was exploited, following the method introduced by Sumner, Adamjee, and Mollon (2002). I found evidence that the spatial rule of multisensory integration (Meredith & Stein, 1983) applies only to SC-effective (luminance-channel) stimuli and does not apply to SC-ineffective (S-cone) stimuli. The same results were obtained with an alternative method for creating S-cone stimuli: the tritanopic technique (Cavanagh, MacLeod, & Anstis, 1987; Stiles, 1959; Wald, 1966). In both cases, significant multisensory response enhancements were obtained using a focused attention paradigm, in which participants had to focus their attention on the visual modality and inhibit responses to auditory stimuli. Chapter 3 reports two experiments showing the influence of shape congruency between auditory and visual stimuli on multisensory integration, i.e., the correspondence between structural aspects of visual and auditory stimuli (e.g., spiky shapes and “spiky” sounds). Detection of audio-visual events was faster for congruent than incongruent pairs, and this congruency effect also occurred in a focused attention task, where participants were required to respond only to visual targets and could ignore irrelevant auditory stimuli. This particular type of cross-modal congruency was evaluated in relation to the inverse effectiveness rule of multisensory integration (Meredith & Stein, 1983). In Chapter 4, the locus of the cross-modal shape congruency effect was evaluated by applying the race model analysis (Miller, 1982). The results showed that violation of the model is stronger for some congruent pairings than for incongruent pairings. Evidence of multisensory depression was found for some pairs of incongruent stimuli. These data imply a perceptual locus for the cross-modal shape congruency effect. Moreover, it is evident that multisensory stimulation does not always induce an enhancement; in some cases, when the attributes of the stimuli are particularly incompatible, a unisensory response may be more effective than the multisensory one. Chapter 5 reports experiments centred on saccadic generation mechanisms. Specifically, the multisensory nature of the saccadic inhibition (SI; Reingold & Stampe, 2002) phenomenon is investigated. Saccadic inhibition refers to a characteristic inhibitory dip in saccadic frequency beginning 60-70 ms after the onset of a distractor. The very short latency of SI suggests that the distractor interferes directly with subcortical target selection processes in the SC. The impact of multisensory stimulation on SI was studied in four experiments.
In Experiments 7 and 8, a visual target was presented with a concurrent auditory, visual, or audio-visual distractor. Multisensory audio-visual distractors induced stronger SI than did unisensory distractors, but there was no evidence of multisensory integration (as assessed by a race model analysis). In Experiments 9 and 10, visual, auditory, or audio-visual targets were accompanied by a visual distractor. When there was no distractor, multisensory integration was observed for multisensory targets. However, this multisensory integration effect disappeared in the presence of a visual distractor. As a general conclusion, the results from Chapter 5 indicate that multisensory integration occurs for target stimuli but not for distracting stimuli, and that the process of audio-visual integration is itself sensitive to disruption by distractors.
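For readers unfamiliar with the race model analysis used in Chapters 4 and 5, the sketch below illustrates Miller's (1982) race model inequality, F_AV(t) ≤ F_A(t) + F_V(t): if the audio-visual reaction-time distribution exceeds the summed unisensory distributions at fast latencies, a race between independent channels cannot explain the speed-up and integration is inferred. This is a minimal illustrative implementation with simulated data, not code from the thesis.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution function of reaction times."""
    rts = np.asarray(rts)
    return np.array([(rts <= t).mean() for t in t_grid])

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Miller's (1982) race model inequality: F_AV(t) <= F_A(t) + F_V(t).

    Returns F_AV(t) minus the (capped) race-model bound; positive values
    at fast latencies indicate violation, i.e., evidence that audio-visual
    integration, not a race between independent channels, drove the speed-up.
    """
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

# Example with simulated reaction times (ms):
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 500)
rt_v = rng.normal(300, 40, 500)
rt_av = rng.normal(260, 35, 500)        # faster than either unisensory RT
t = np.linspace(150, 450, 61)
print(race_model_violation(rt_a, rt_v, rt_av, t).max())  # > 0: violation
```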
4

Exner, Megan. "Training effects in audio-visual integration of sine wave speech." 2008. http://hdl.handle.net/1811/32154.

5

Slabu, Lavinia Mihaela. "Auditory processing in the brainstem and audio visual integration in humans studied with fMRI." Groningen: University of Groningen, 2007. http://irs.ub.rug.nl/ppn/305609564.

6

Megnin, O. "Electrophysiological correlates of audio-visual integration of spoken words in typical development and autism spectrum disorder." Thesis, University College London (University of London), 2010. http://discovery.ucl.ac.uk/19735/.

Abstract:
This thesis examined audio-visual (AV) integration effects in speech processing using event-related potentials (ERPs) in healthy adults, adolescents, and children. In a further study, ERP recordings in adolescent boys with autism spectrum disorder (ASD) were compared to those of matched typically developing boys. ERP effects were examined in three post-stimulus time windows: the N1 time window as a measure of sensitivity to word onset; the P2 as a measure of the transition from phonetic to lexical-semantic processing; and the N4 as a measure of semantic processing. Participants were presented with monosyllabic words in four conditions: auditory-only, visual-only, audio-visual with face, and audio-visual with scrambled face. The study reports on the modulation of ERPs due to such multimodal interactions between visual and auditory input, its developmental trajectory, and evidence for disruption in ASD.
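As a rough illustration of the analysis logic described above (a sketch under assumed array shapes, not the thesis pipeline), the mean amplitude of an ERP component such as the N1, P2, or N4 is typically quantified by averaging the stimulus-locked epochs and then averaging the resulting waveform within the component's post-stimulus time window:

```python
import numpy as np

def mean_window_amplitude(epochs, fs, window_ms, stim_sample=0):
    """Mean ERP amplitude in a post-stimulus time window.

    epochs      : array (n_trials, n_samples), baseline-corrected EEG epochs
    fs          : sampling rate in Hz
    window_ms   : (start, end) of the window in ms after stimulus onset
    stim_sample : index of stimulus onset within each epoch
    """
    erp = epochs.mean(axis=0)                     # average over trials
    start = stim_sample + int(window_ms[0] * fs / 1000)
    end = stim_sample + int(window_ms[1] * fs / 1000)
    return erp[start:end].mean()

# e.g., N1 (80-120 ms), P2 (150-250 ms), N4 (350-550 ms) -- windows here
# are typical textbook values, not necessarily those used in the thesis.
epochs = np.random.randn(100, 512)                # fake data, fs = 512 Hz
print(mean_window_amplitude(epochs, fs=512, window_ms=(80, 120)))
```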
7

Zachau, S. (Swantje). "Signs in the brain: Hearing signers’ cross-linguistic semantic integration strategies." Doctoral thesis, Oulun yliopisto, 2016. http://urn.fi/urn:isbn:9789526213293.

Abstract:
Audio-oral speech and visuo-manual sign language as used by the Deaf community are two very different realizations of the human linguistic communication system. Sign language is used not only by the hearing impaired but also by different groups of hearing individuals. To date, there is a great discrepancy in scientific knowledge about signed and spoken languages. Particularly little is known about the integration of the two systems, even though the vast majority of deaf and hearing signers also have a command of some form of speech. This neurolinguistic study aimed to establish basic knowledge about semantic integration mechanisms across speech and sign language in hearing native and non-native signers. Basic principles of sign processing, as reflected in electrocortical brain activation and behavioral decisions, were examined in three groups of participants: hearing native signers (children of deaf adults, CODAs), hearing late-learned signers (professional sign language interpreters), and hearing non-signing controls. Event-related brain potentials (ERPs) and behavioral response frequencies were recorded while the participants performed a semantic decision task on priming lexeme pairs. The lexeme pairs were presented either within speech (spoken prime, spoken target) or across speech and sign language (spoken prime, signed target). Target-related ERP responses were subjected to temporal principal component analysis (tPCA). The neurocognitive basis of semantic integration processes was assessed by analyzing different ERP components (N170, N400, late positive complex) in response to antonymic and unrelated targets. Behavioral decision sensitivity to the target lexemes is discussed in relation to the measured brain activity. Behaviorally, all three groups performed above chance level when making semantic decisions about the primed targets. Different result patterns, however, hinted at three different processing strategies. Because the target-locked electrophysiological data were analyzed by PCA, applied here for the first time in the context of sign language processing, objectively allocated ERP components of interest could be explored. Somewhat surprisingly, the results from the sign-naïve control group showed that they performed in a more content-guided way than expected, suggesting that even non-experts in sign language are equipped with basic skills for processing cross-linguistically primed signs. Together, the behavioral and electrophysiological results further revealed qualitative differences in processing between native and late-learned signers, raising the question: can a unitary model of sign processing do justice to different groups of sign language users?
8

Moulin, Samuel. "Quel son spatialisé pour la vidéo 3D ? : influence d'un rendu Wave Field Synthesis sur l'expérience audio-visuelle 3D." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015PA05H102/document.

Abstract:
The digital entertainment industry is undergoing a major evolution due to the recent spread of stereoscopic 3D video. It is now possible to experience 3D by watching movies, playing video games, and so on. In this context, video attracts most of the attention, but what about the accompanying audio rendering? Today, the most commonly used sound reproduction technologies are based on lateralization effects (stereophony, 5.1 surround systems). Nevertheless, it is natural to wonder about the need for a new audio technology adapted to this new visual dimension: depth. Several technologies seem able to render 3D sound environments (binaural technologies, Ambisonics, Wave Field Synthesis). Using these technologies could potentially improve users' quality of experience: it could increase the feeling of realism by improving audio-visual spatial congruence, and also heighten the sensation of immersion. In order to test this hypothesis, a 3D audio-visual rendering system was set up, coupling stereoscopic 3D images with Wave Field Synthesis sound rendering. Three research axes were then studied: 1/ Depth perception using unimodal or bimodal presentations. To what extent is the audio-visual system able to render the depth of visual, sound, and audio-visual objects? The conducted experiments show that Wave Field Synthesis can render virtual sound sources perceived at different distances. Moreover, visual and audio-visual objects are localized with higher accuracy than sound-only objects. 2/ Cross-modal integration in the depth dimension. How can the perception of congruence be guaranteed when audio-visual stimuli are spatially misaligned? The extent of the audio-visual spatial integration window was measured at different visual object distances; in other words, for each visual stimulus position, we studied where sound objects should be placed for the percepts to fuse into a single unified audio-visual event. 3/ 3D audio-visual quality of experience. What is the contribution of sound depth rendering to the 3D audio-visual quality of experience? We first assessed today's quality of experience using sound systems dedicated to the playback of 5.1 soundtracks (a 5.1 surround system, headphones, and a soundbar) in combination with 3D video. We then studied the impact of sound depth rendering using the proposed audio-visual system (3D video combined with Wave Field Synthesis).
9

Zannoli, Marina. "Organisation de l'espace audiovisuel tridimensionnel." PhD thesis, Université René Descartes - Paris V, 2012. http://tel.archives-ouvertes.fr/tel-00789816.

Abstract:
The term stereopsis refers to the sensation of depth that is perceived when a scene is viewed binocularly. The visual system relies on the horizontal disparities between the images projected onto the left and right eyes to compute a map of the different depths present in the visual scene. It is commonly accepted that the stereoscopic system is encapsulated and strongly constrained by the neural connections that extend from the primary visual areas (V1/V2) to the integrative areas of the dorsal and ventral streams (V3, inferior temporal cortex, MT). Across four experimental projects, we studied how the visual system uses binocular disparity to compute the depth of objects. We showed that the processing of binocular disparity can be strongly influenced by other sources of information such as binocular occlusion or sound. More precisely, our experimental results suggest that: (1) Da Vinci stereopsis is resolved by a mechanism that integrates classical stereo processes (double fusion), geometric constraints (monocular objects are necessarily hidden from one eye, and are therefore located behind the plane of the occluding object), and prior knowledge (a preference for small disparities). (2) The processing of motion in depth can be influenced by auditory information: a sound temporally correlated with a target defined by stereo motion can significantly improve visual search. Stereo motion detectors are optimally tuned for detecting 3D motion but poorly suited to processing 2D motion. (3) Grouping binocular disparity with an auditory signal in an orthogonal dimension (pitch) can improve stereo acuity by approximately 30%.
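As a point of reference for the disparity processing discussed above, the geometry linking horizontal disparity to depth can be sketched with the standard small-angle approximation (a textbook relation, not a formula from the thesis): for interocular separation $I$, viewing distance $Z$, and a depth offset $\Delta Z \ll Z$, the horizontal disparity is approximately

```latex
\delta \;\approx\; \frac{I\,\Delta Z}{Z^{2}}
```

so a fixed depth interval yields rapidly shrinking disparities as viewing distance grows, one reason weak or ambiguous disparity signals can profit from the auxiliary cues (occlusion geometry, correlated sound) studied in this thesis.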
10

Wu, Kuan-Chen (吳冠辰). "The Integration of Digital Audio Workstation and Audio-Visualization: The Case of VISUAL PHOTIS." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/84v27a.

Abstract:
Master's thesis
Fu Jen Catholic University
Department of Music
Academic year 106 (2017)
Approaches to music creation and modes of music presentation have changed with constant innovation in digital music production software and hardware. This study discusses the digital music production methods studied by the author, as well as the technical aspects of audio-visual integration. The author's audio-visual integration project VISUAL PHOTIS is used as an example to analyze the user experience and technological integration of music production software and hardware.

The author first integrated the experience of using four digital music production software packages — iMaschine 2, Maschine 2, Cubase 8, and Bitwig 2 — as the starting point of this research. After establishing a basic understanding of the functions and production concepts behind these packages' function areas, tool areas, work areas, and time axes, the study conducts an in-depth exploration of the three packages (iMaschine 2, Maschine 2, Cubase 8) used by the author during the three stages of music composition — creative thinking, integration, and post-mixing. The author's own practice is also analyzed to illustrate how a customized music production workflow was developed.

After exploring the basic concepts of the digital music software and creative tools employed, the author's audio-visual integration project VISUAL PHOTIS is analyzed. The project combines the results of the author's integration of digital music and video production software packages. The author also analyzes the actual experience of performing the work in order to present it in full, as a series of materials and integration methods.

Finally, the author interviewed other users and students from the author's digital music class to collect their questions about the interactive audio-visual production process designed by the author, and integrated their responses to questions regarding digital music composition. The purpose was to improve the production techniques explored by the author and to implement them in digital music education, providing audio-visual integration researchers with substantial assistance and references for the future.
11

Roach, N. W., James Heron, and Paul V. McGraw. "Resolving multisensory conflict: a strategy for balancing the costs and benefits of audio-visual integration." 2006. http://hdl.handle.net/10454/3564.

Abstract:
In order to maintain a coherent, unified percept of the external environment, the brain must continuously combine information encoded by our different sensory systems. Contemporary models suggest that multisensory integration produces a weighted average of sensory estimates, where the contribution of each system to the ultimate multisensory percept is governed by the relative reliability of the information it provides (maximum-likelihood estimation). In the present study, we investigate interactions between auditory and visual rate perception, where observers are required to make judgments in one modality while ignoring conflicting rate information presented in the other. We show a gradual transition between partial cue integration and complete cue segregation with increasing inter-modal discrepancy that is inconsistent with mandatory implementation of maximum-likelihood estimation. To explain these findings, we implement a simple Bayesian model of integration that is also able to predict observer performance with novel stimuli. The model assumes that the brain takes into account prior knowledge about the correspondence between auditory and visual rate signals, when determining the degree of integration to implement. This provides a strategy for balancing the benefits accrued by integrating sensory estimates arising from a common source, against the costs of conflating information relating to independent objects or events.
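The maximum-likelihood estimation scheme referred to above has a standard closed form (e.g., Ernst & Banks, 2002); the equations below are that textbook formulation, not notation taken from this study. Given unisensory rate estimates $\hat{s}_A$ and $\hat{s}_V$ with variances $\sigma_A^2$ and $\sigma_V^2$, the integrated estimate is the reliability-weighted average

```latex
\hat{s}_{AV} = w_A\,\hat{s}_A + w_V\,\hat{s}_V,
\qquad
w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2},
\qquad
w_V = 1 - w_A
```

with combined variance $\sigma_{AV}^2 = \sigma_A^2\sigma_V^2/(\sigma_A^2 + \sigma_V^2)$, never larger than either unisensory variance. Mandatory application of this rule predicts integration at any discrepancy; the Bayesian model described above instead lets a prior on audio-visual correspondence withdraw the weights as the inter-modal conflict grows, producing the observed transition from partial integration to segregation.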
12

"Investigating Compensatory Mechanisms for Sound Localization: Visual Cue Integration and the Precedence Effect." Master's thesis, 2015. http://hdl.handle.net/2286/R.I.34880.

Abstract:
Sound localization can be difficult in a reverberant environment. Fortunately, listeners can use various perceptual compensatory mechanisms to increase the reliability of sound localization when provided with ambiguous physical evidence. For example, the directional information of echoes can be perceptually suppressed by the direct sound to achieve a single, fused auditory event, in a process called the precedence effect (Litovsky et al., 1999). Visual cues also influence sound localization through a phenomenon known as the ventriloquist effect. It is classically demonstrated by a puppeteer who speaks without visible lip movements while moving the mouth of a puppet synchronously with his or her speech (Gelder and Bertelson, 2003). If the ventriloquist is successful, sound is "captured" by vision and perceived to originate at the location of the puppet. This thesis investigates the influence of vision on the spatial localization of audio-visual stimuli. Two types of stereophonic phantom sound sources, created by modulating either the inter-stimulus time interval (ISI) or the level difference between two loudspeakers, were used as auditory stimuli; participants seated in a sound-attenuated room indicated the perceived locations of these stimuli under free-field conditions. The results showed that light cues influenced auditory spatial perception to a greater extent for the ISI stimuli than for the level-difference stimuli. A binaural signal analysis further revealed that the greater visual bias for the ISI phantom sound sources was correlated with the increasingly ambiguous binaural cues of the ISI signals. This finding suggests that when sound localization cues are unreliable, perceptual decisions become increasingly biased towards vision for finding a sound source. These results support the cue saliency theory underlying cross-modal bias and extend this theory to stereophonic phantom sound sources.
Master's thesis
Bioengineering, 2015
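To make the stimulus construction concrete, here is a minimal sketch (illustrative only; parameter names and values are assumptions, not taken from the thesis) of how a stereophonic phantom source can be steered between two loudspeakers either by an inter-stimulus time interval or by a level difference:

```python
import numpy as np

def phantom_source_pair(signal, fs, isi_ms=0.0, level_db=0.0):
    """Two-loudspeaker feeds whose summed image ('phantom source') is
    steered toward the left speaker by making it lead in time (isi_ms > 0)
    or by attenuating the right channel (level_db > 0)."""
    delay = int(round(isi_ms * 1e-3 * fs))
    left = np.concatenate([signal, np.zeros(delay)])              # leads
    right = np.concatenate([np.zeros(delay), signal]) * 10 ** (-level_db / 20)
    return left, right

# e.g., a 0.5 ms lead or a 6 dB level difference both pull the image left;
# the thesis found vision biases localization more for the (binaurally
# ambiguous) ISI stimuli than for the level-difference stimuli.
fs = 44100
noise = np.random.randn(fs // 10)        # 100 ms noise burst
L, R = phantom_source_pair(noise, fs, isi_ms=0.5)
```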
13

Quan-Baffour, Kofi Poku. "The introduction of audio cassettes in an integrated study package in solving the problems of adult distance education students in Lesotho." Diss., 1995. http://hdl.handle.net/10500/15839.

Abstract:
This research project reports on an empirical study of the suitability and feasibility of audio cassette lectures in solving the study problems of adult distance education students. Having reviewed relevant literature on the subject, the researcher collected data through: (a) an empirical investigation constituting a two-group (experimental/control) design; (b) questionnaires to find out students' opinions on audio cassettes. The study reveals that there is a significant difference between the academic achievement of students who study via audio cassette lectures in addition to textbooks and face-to-face lectures and those who study through textbooks and face-to-face lectures only. The study therefore validates audio cassette lectures in an integrated study package. Other outcomes of the study are: (a) suggestions to I.E.M.S. authorities to introduce audio cassette lectures on I.E.M.S. part-time courses; (b) suggestions to course organisers at I.E.M.S. to liaise with distance education institutions to adopt their instructional strategies.
Teacher Education
M. Ed. (Didactics)
14

Hadid, Vanessa. "Réadaptation et performance visuelle chez la personne hémianopsique : une étude de cas portant sur les saccades oculaires et le blindsight." Thesis, 2015. http://hdl.handle.net/1866/13871.

15

Chien, Yen Hsiu (顏秀倩). "Action Research of Integrating Art Picture Books and Audio Books into Visual Arts Appreciation Instruction for the Second Graders in an Elementary School: Using《Li Mei-Shu in San-Sia》and《Van Gogh》as Examples." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/84495804573763323418.

Abstract:
Master's thesis
Taipei Municipal University of Education
Department of Visual Arts, Master's Program in Visual Arts Teaching
Academic year 100 (2011)
Art educationist Su Jen-ming once said, "The majority of students will not become creators of art in the future; instead, they will become appreciators of art." Thus, enabling every student to become aware of the various values of art and their cultural contexts through aesthetic and appreciation activities, while engaging in varied art activities, has become one of the most important goals of the Arts and Humanities curriculum. This study explores the effects on student learning and teacher professional development of integrating audio and paper art picture books into the teaching of art appreciation. This is an action research study; the participants were 27 second-grade students from an elementary school in Taoyuan County. The purposes of this study are as follows: (1) To explore the value of audio and paper art picture books in teaching visual art appreciation to elementary school students. (2) To explore possible strategies for applying audio and paper art picture books in teaching visual art appreciation to elementary school students. (3) To explore and understand student learning outcomes and the teacher's reflections on instruction guided by audio and paper art picture books. Data were collected through observations, recordings, interviews, and questionnaires. The results of the study revealed the following: (1) The delivery of text and the characteristics of images in paper art picture books can boost students' interest in art appreciation, and a good audio art picture book can help lead auditory learners into the world of art appreciation. (2) Combining visual and auditory learning styles can help students understand art appreciation; an ask-and-think observation method can lead students to a more structured observation and discovery of artworks; and props and role-playing games can be adopted to create a sensory experience and a better learning environment for students. (3) Through the appreciation of audio and paper art picture books, students are able to increase their knowledge about art and sharpen their aesthetic abilities, apply what they have learned to their daily lives, and increase their understanding of local artworks. An ideal art appreciation activity should be student-centered, and teaching activities should be designed to meet the needs of students.
