Academic literature on the topic 'Parole visuelle et audiovisuelle'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parole visuelle et audiovisuelle.'
Journal articles on the topic "Parole visuelle et audiovisuelle"
Mueller, Jeansue, and Charles M. Mueller. "Style Shift in Korean Teledramas." FORUM / Revue internationale d’interprétation et de traduction / International Journal of Interpretation and Translation 7, no. 2 (October 1, 2009): 215–46. http://dx.doi.org/10.1075/forum.7.2.09mue.
Stonner, Christian. "L’écriture et l’illusion dans l’adaptation audiovisuelle et le cas particulier de « Karambolage »." FORUM / Revue internationale d’interprétation et de traduction / International Journal of Interpretation and Translation 15, no. 2 (December 1, 2017): 331–41. http://dx.doi.org/10.1075/forum.15.2.10sto.
Duée, Claude. "L’énonciation et l’avènement de Gaston Lagaffe." Semiotica 2019, no. 226 (January 8, 2019): 1–27. http://dx.doi.org/10.1515/sem-2018-0012.
George, Éric, and Philippe-Antoine Lupien. "Internet, nouvel eldorado pour la circulation de la production audiovisuelle autochtone ?" Recherches amérindiennes au Québec 42, no. 1 (March 7, 2014): 31–40. http://dx.doi.org/10.7202/1023718ar.
Jérôme, Laurent, and Vicky Veilleux. "Witamowikok, « dire » le territoire atikamekw nehirowisiw aujourd’hui." Recherches amérindiennes au Québec 44, no. 1 (December 17, 2014): 11–22. http://dx.doi.org/10.7202/1027876ar.
Armstrong, Philip, and Cosmin Popovici-Toma. "Gloss (à partir de quelques photos d’Ann Hamilton)." Études françaises 51, no. 2 (June 17, 2015): 163–74. http://dx.doi.org/10.7202/1031234ar.
Kamerhuber, Julia, Julia Horvath, and Elissa Pustka. "Lecture, répétition, parole spontanée : l’impact de la tâche sur le comportement du schwa en FLE." Journal of French Language Studies 30, no. 2 (July 2020): 161–88. http://dx.doi.org/10.1017/s095926952000006x.
Remillet, Gilles. "Filmer les pratiques de soin dans la consultation médicale en acupuncture." Ethnologies 33, no. 2 (April 4, 2013): 99–121. http://dx.doi.org/10.7202/1015027ar.
Vampé, Anne, and Véronique Aubergé. "Prosodie expressive audio-visuelle de l'interaction personne-machine. Etats mentaux, attitudes, intentions et affects (Feeling of Thinking) en dehors du tour de parole." Techniques et sciences informatiques 29, no. 7 (September 20, 2010): 807–32. http://dx.doi.org/10.3166/tsi.29.807-832.
Vassiliou, Konstantinos. "Artistic Creativity and Human Evolution – Art Theory and the Work of André Leroi-Gourhan." Zeitschrift für Ästhetik und Allgemeine Kunstwissenschaft 58, no. 2 (2013): 107–21. http://dx.doi.org/10.28937/1000106223.
Full textDissertations / Theses on the topic "Parole visuelle et audiovisuelle"
Fort, Mathilde. "L'accès au lexique dans la perception audiovisuelle et visuelle de la parole." PhD thesis, Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00652068.
Rouger, Julien. "Perception audiovisuelle de la parole chez le sourd postlingual implanté cochléaire et le sujet normo-entendant : étude longitudinale psychophysique et neurofonctionnelle." PhD thesis, Université Paul Sabatier - Toulouse III, 2007. http://tel.archives-ouvertes.fr/tel-00364405.
Borel, Stéphanie. "Perception auditive, visuelle et audiovisuelle des voyelles nasales par les adultes devenus sourds. Lecture labiale, implant cochléaire, implant du tronc cérébral." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCA016/document.
Full textThis thesis focuses on the visual, auditory and auditory-visual perception of french nasal vowels [ɑ̃](« lent »), [ɔ̃] (« long ») and [ɛ̃] (« lin ») by Cochlear Implant (CI) and Auditory Brainstem Implant(ABI) adults users. The study on visual perception of vowels, with 22 deafened adults, redefines thelip configuration of french nasal vowels and provides an update of the classification of vocalic visualphonemes. Three studies on auditory identification of nasal vowels with 82, 15 and 10 CI usershighlight their difficulty in recognizing the three nasal vowels, which they perceive as oral vowels.Acoustic and perceptual analyzes suggest that adults with CI rely on frequency informations of thefirst two spectral peaks but miss the informations of relative intensity of these peaks. The study with13 ABI users show that some linguistic acoustic cues are transmitted by the ABI but the fusion ofauditory and visual features could be optimized for the identification of vowels. Finally, a survey of179 Speech Language and Hearing Therapists show the need of an update on the phonetic articulationof french nasal vowels [ɑ̃] and [ɛ̃]
Burfin, Sabine. "L'apport des informations visuelles des gestes oro-faciaux dans le traitement phonologique des phonèmes natifs et non-natifs : approches comportementale, neurophysiologique." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GRENS002/document.
During audiovisual speech perception, as in face-to-face conversation, we can take advantage of the visual information conveyed by the speaker's oro-facial gestures. This enhances the intelligibility of the utterance. The aim of this work was to determine whether this "audiovisual benefit" can improve the identification of phonemes that do not exist in our mother tongue. Our results revealed that visual information contributes to overcoming the phonological deafness phenomenon we experience in an audio-only situation (Study 1). An ERP study indicates that this benefit could be due to the modulation of early processing in the primary auditory cortex (Study 2): the audiovisual presentation of non-native phonemes generates a P50 that is not observed for native phonemes. Linguistic background affects the way we use visual information; early bilinguals take less advantage of visual cues during the processing of unfamiliar phonemes (Study 3). We examined the identification processes of native plosive consonants with a gating paradigm to evaluate the differential contribution of auditory and visual cues across time (Study 4). We observed that the audiovisual benefit is not systematic: phoneme predictability depends on the visual saliency of the speaker's articulatory movements.
Drouet, Jeanne. "La "performance contée" à l'épreuve des technologies audiovisuelles : des passerelles culturelles et sociales en images et en sons." Thesis, Lyon 2, 2014. http://www.theses.fr/2014LYO20092.
The investigation reported in this thesis was conducted in Brittany (France) and in urban areas of Lyon, in close collaboration with contemporary storytellers, most of whose practices took shape in the wake of the so-called "revival of storytelling" that occurred in France in the early 1970s. The research provides an analysis of oral performance, in order to better understand the scope of this practice (the social ties, acquaintances and encounters it creates) and the causes of its social efficiency. To that end, the fieldwork was approached on three ethnographic scales: the stage (where the enunciation is examined very closely), the wings (an observation of the creative process) and the context (ethnography aiming to understand the social and cultural environment of storytelling). The methodology was intended to be experimental (searching by trial and error), reflexive and dialogic; many devices were developed, most of them requiring the use of audiovisual technology. The itinerary proposed here starts with an immersion in the world of two Breton storytellers, which shows why storytellers can be considered "memory holders". We then zoom in on the oral performance of storytellers: the ways they enter the stage, their choreography and their reception by the audience are examined at length. It emerges that storytellers bring about a situation in which imaginaries intersect. The final line of research concerns situations in which storytelling is used as an instrument of social mediation: storytellers and their apprentices work to "put into their mouths" stories in which feelings of belonging and life experiences are expressed beneath the surface, and through which social and cultural bridges are created.
Erjavec, Grozdana. "Apport des mouvements buccaux, des mouvements extra-buccaux et du contexte facial à la perception de la parole chez l'enfant et chez l'adulte." Thesis, Paris 8, 2015. http://www.theses.fr/2015PA080118/document.
The present thesis falls within the framework of research on audio-visual (AV) speech perception. Its objective is to answer the following questions: (i) What is the nature of visual input processing (holistic vs analytic) in AV speech perception? (ii) What is the implication of extra-oral facial movement in AV speech perception? (iii) What are the oculomotor patterns in AV speech perception? (iv) What are the developmental changes in aspects (i), (ii) and (iii)? The classic noise degradation paradigm was applied in two experiments, each conducted with participants from four age groups (adults, adolescents, pre-adolescents and children) of 16 participants each. The participants' task was to repeat consonant-vowel (/a/) syllables. The syllables were either mildly or strongly degraded by pink noise and were presented in four conditions, one purely auditory (AO) and three audio-visual. The AV conditions were: (i) AV face (AVF), (ii) AV « mouth extraction » (AVM-E; mouth format without visual contrasts) and (iii) AV « mouth window » (AVM-W; mouth format with high visual contrasts) in Experiment 1; and (i) AVF, (ii) AVF « mouth active (and facial frame static) » (AVF-MA) and (iii) AVF « extra-oral regions active (and mouth absent) » (AVF-EOA) in Experiment 2. The data analyzed were (i) the total number of correct repetitions (total performance), (ii) the difference in correct-repetition scores between each AV condition and the AO condition (AV gain), and (iii) the total fixation duration in the oral area and other facial areas (for the AV formats). The main results showed that the mechanisms involved in AV speech perception reach maturity before late childhood. Seeing the talker's full face does not appear to be advantageous in this context; it may even perturb AV speech processing in adults, possibly because it triggers the processing of other types of information (identity, facial expressions) which could in turn interfere with the processing of the acoustic aspects of speech. The contribution of extra-oral articulatory movement to AV speech perception was poor and limited to the condition of highly degraded auditory information. For ecologically presented facial information, the oculomotor patterns in AV speech perception varied as a function of the level of auditory degradation, but appeared rather stable across the four groups. Finally, the presentation format of the featural (mouth) information affected oculomotor behavior patterns in adults, pre-adolescents and children, suggesting a certain sensitivity of visuo-attentional processing to low-level visual stimulus characteristics in AV speech perception. These variations in visuo-attentional processing seemed to be associated, to a certain extent, with variations in AV speech perception.
Ouni, Slim. "Parole Multimodale : de la parole articulatoire à la parole audiovisuelle." Habilitation à diriger des recherches, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00927119.
Adjoudani, Ali. "Reconnaissance automatique de la parole audiovisuelle : stratégies d'intégration et réalisation du liptrack, labiomètre temps réel." Grenoble INPG, 1997. http://www.theses.fr/1997INPG0022.
Dahmani, Sara. "Synthèse audiovisuelle de la parole expressive : modélisation des émotions par apprentissage profond." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0137.
The work of this thesis concerns the modeling of emotions for expressive audiovisual text-to-speech synthesis. Today, the output of text-to-speech systems is of good quality, but audiovisual synthesis remains an open issue and expressive synthesis is even less studied. As part of this thesis, we present a malleable, flexible method of modeling emotions that allows us to mix emotions as we mix shades on a palette of colors. In the first part, we present and study two expressive corpora that we built. The recording strategy and the expressive content of these corpora are analyzed to validate their use for audiovisual speech synthesis. In the second part, we present two neural architectures for speech synthesis, which we used to model three aspects of speech: (1) the duration of sounds, (2) the acoustic modality and (3) the visual modality. The first is a fully connected architecture, which allowed us to study the behavior of neural networks when dealing with different contextual and linguistic descriptors, and to analyze, with objective measures, the network's ability to model emotions. The second is a variational auto-encoder, which is able to learn a latent representation of emotions without using emotion labels. After analyzing this latent space of emotions, we present a procedure for structuring it in order to move from a discrete representation of emotions to a continuous one. Perceptual experiments validated the ability of our system to generate emotions, nuances of emotions and mixtures of emotions for expressive audiovisual text-to-speech synthesis.
Dubois, Cyril Michel Robert. "Les bases neurophysiologiques de la perception audiovisuelle syllabique : étude simultanée en Imagerie par Résonance Magnétique fonctionnelle et en électroencéphalographie (IRMf/EEG)." Strasbourg, 2009. https://publication-theses.unistra.fr/public/theses_doctorat/2009/DUBOIS_Cyril_Michel_Robert_2009.pdf.
In a noisy environment, speech intelligibility is improved by perceiving the speaker's face (Sumby & Pollack, 1954), a dimension which seemingly involves a facilitation effect in accessing the mental lexicon. Massaro (1990) assumes that the influence of one source of information is greatest when the other source is neutral or ambiguous. However, the McGurk effect suggests that the audible and visible sources have an equal impact on the speech perception system (McGurk & MacDonald, 1976); the result is a perturbation, in terms of misperception of the "target". Several studies claim that the McGurk effect operates at the lexical level as well as at the word or phrasal level. Taken together, previous studies indicate that the bimodal integration of the visual source is early and prelexical, and that it could be influenced by a top-down effect. To investigate the neural substrates of audiovisual syllabic perception, we conducted a study with simultaneous fMRI/EEG recordings during a discrimination task on consonant-vowel syllables, in two perception modalities: audiovisual and audio-only. The discrimination task was based on syllable pairs contrasting three features: vowel lip rounding, consonant place of articulation, and voicing. For syllabic discrimination, the results show bilateral activation of the primary auditory cortex in both modalities; furthermore, the fusiform gyrus and area MT/V5 (in the occipital cortex) are recruited in the audiovisual modality. ERPs indicate significant modulations around 150 and 250 milliseconds.
Books on the topic "Parole visuelle et audiovisuelle"
Mirzoeff, Nicholas. An introduction to visual culture. 2nd ed. New York: Routledge, 2009.
Book chapters on the topic "Parole visuelle et audiovisuelle"
Dupont, Malika, and Brigitte Lejeune. "Lecture Labiale et Perception Audio-Visuelle de la Parole." In Rééducation De la Boucle Audio-phonatoire, 17–18. Elsevier, 2010. http://dx.doi.org/10.1016/b978-2-294-70754-4.50003-9.