
Dissertations on the topic "Lipreading"


Consult the top 45 dissertations for your research on the topic "Lipreading".

Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read an online abstract of the work, provided the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Lucey, Patrick Joseph. "Lipreading across multiple views." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16676/1/Patrick_Joseph_Lucey_Thesis.pdf.

Abstract:
Visual information from a speaker's mouth region is known to improve automatic speech recognition (ASR) robustness, especially in the presence of acoustic noise. Currently, the vast majority of audio-visual ASR (AVASR) studies assume frontal images of the speaker's face, which is a rather restrictive human-computer interaction (HCI) scenario. The lack of research into AVASR across multiple views has been dictated by the lack of large corpora that contain varying pose/viewpoint speech data. Recently, research has concentrated on recognising human behaviours within "meeting" or "lect
2

Lucey, Patrick Joseph. "Lipreading across multiple views." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16676/.

Abstract:
Visual information from a speaker's mouth region is known to improve automatic speech recognition (ASR) robustness, especially in the presence of acoustic noise. Currently, the vast majority of audio-visual ASR (AVASR) studies assume frontal images of the speaker's face, which is a rather restrictive human-computer interaction (HCI) scenario. The lack of research into AVASR across multiple views has been dictated by the lack of large corpora that contain varying pose/viewpoint speech data. Recently, research has concentrated on recognising human behaviours within "meeting" or "lect
3

MacLeod, A. "Effective methods for measuring lipreading skills." Thesis, University of Nottingham, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233400.

4

MacDermid, Catriona. "Lipreading and language processing by deaf children." Thesis, University of Surrey, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.291020.

5

Yuan, Hanfeng 1972. "Tactual display of consonant voicing to supplement lipreading." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87906.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (p. 241-251). This research is concerned with the development of tactual displays to supplement the information available through lipreading. Because voicing carries a high informational load in speech and is not well transmitted through lipreading, the efforts are focused on providing tactual displays of voicing to supplement the information available on the lips of the talker. This research includes exploration of 1) signal-processing s
6

Chiou, Greg I. "Active contour models for distinct feature tracking and lipreading /." Thesis, Connect to this title online; UW restricted, 1995. http://hdl.handle.net/1773/6023.

7

Kaucic, Robert August. "Lip tracking for audio-visual speech recognition." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360392.

8

Matthews, Iain. "Features for audio-visual speech recognition." Thesis, University of East Anglia, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266736.

9

Thangthai, Kwanchiva. "Computer lipreading via hybrid deep neural network hidden Markov models." Thesis, University of East Anglia, 2018. https://ueaeprints.uea.ac.uk/69215/.

Abstract:
Constructing a viable lipreading system is a challenge because it is claimed that only 30% of the information in speech production is visible on the lips. Nevertheless, in small-vocabulary tasks there have been several reports of high accuracies. However, investigation of larger-vocabulary tasks is rare. This work examines constructing a large-vocabulary lipreading system using an approach based on Deep Neural Network Hidden Markov Models (DNN-HMMs). We present the historical development of computer lipreading technology and the state-of-the-art results in small- and large-vocabulary tasks. In prel
10

Hiramatsu, Sandra. "Does lipreading help word reading? : an investigation of the relationship between visible speech and early reading achievement /." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/7913.

11

Divin, William. "The irrelevant speech effect, lipreading and theories of short-term memory." Thesis, University of Ulster, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365401.

12

Alness, Borg Axel, and Marcus Enström. "A study of the temporal resolution in lipreading using event vision." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280325.

Abstract:
Mechanically analysing visual features from lips to extract spoken words consists of finding patterns in movements, which is why machine learning has been applied to this problem in previous research. In previous research, conventional frame-based cameras have been used with good results. Classifying visual features is expensive, and capturing just enough information can be of importance. Event cameras are a type of camera inspired by human vision that only captures changes in the scene and offers very high temporal resolution. In this report we investigate the importance of t
13

Gray, Michael Stewart. "Unsupervised statistical methods for processing of image sequences /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1998. http://wwwlib.umi.com/cr/ucsd/fullcit?p9901442.

14

Dupuis, Karine. "Bimodal cueing in aphasia : the influence of lipreading on speech discrimination and language comprehension." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/33791.

Abstract:
Previous research on the influence of lipreading on speech perception has failed to consistently show that individuals with aphasia benefit from the visual cues provided by lipreading. The present study was designed to replicate these findings, and to investigate the role of lipreading at the discourse level. Six participants with aphasia took part in this study. A syllable discrimination task using the syllables /pem, tem, kem, bem, dem, gem/, and a discourse task consisting of forty short fictional passages, were administered to the participants. The stimuli were presented in two modality co
15

Zhou, Yichao. "Lip password-based speaker verification system with unknown language alphabet." HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/562.

Abstract:
Traditional security systems that verify the identity of users based on passwords usually face the risk of leaking the password contents. To solve this problem, biometrics such as the face, iris, and fingerprint have begun to be widely used in verifying people's identity. However, these biometrics cannot be changed if the database is hacked. What's more, verification systems based on traditional biometrics might be cheated by a fake fingerprint or a photo. Liu and Cheung (Liu and Cheung 2014) have recently initiated the concept of the lip password, which is composed of a password embedded
16

Montserrat, Maria Navarro. "The influence of situational cues on a standardized speechreading test." PDXScholar, 1985. https://pdxscholar.library.pdx.edu/open_access_etds/3546.

Abstract:
The purpose of the present study was to determine the influence of situational cues on a standardized speechreading test in order to assess an individual's natural speechreading ability. The widely used, standardized Utley Lipreading Test was selected, to which photo slides of message-related situational cues were added. The Utley Lipreading Test consists of two relatively equivalent test lists containing series of unrelated sentences.
17

Nayfeh, Taysir H. "Multi-signal processing for voice recognition in noisy environments." Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-10222009-125021/.

18

Ho, Eve. "Speechreading abilities of Cantonese-speaking hearing-impaired children on consonants and words." Click to view the E-thesis via HKUTO, 1997. http://sunzi.lib.hku.hk/hkuto/record/B36209454.

Abstract:
Thesis (B.Sc.)--University of Hong Kong, 1997. "A dissertation submitted in partial fulfilment of the requirements for the Bachelor of Science (Speech and Hearing Sciences), The University of Hong Kong, April 30, 1997." Also available in print.
19

Liu, Xin. "Lip motion tracking and analysis with application to lip-password based speaker verification." HKBU Institutional Repository, 2013. http://repository.hkbu.edu.hk/etd_ra/1538.

20

Gorman, Benjamin Millar. "A framework for speechreading acquisition tools." Thesis, University of Dundee, 2018. https://discovery.dundee.ac.uk/en/studentTheses/fc05921f-024e-471e-abd4-0d053634a2e7.

Abstract:
At least 360 million people worldwide have disabling hearing loss that frequently causes difficulties in day-to-day conversations. Hearing aids often fail to offer enough benefit and have low adoption rates. However, people with hearing loss find that speechreading can improve their understanding during conversation. Speechreading (often called lipreading) refers to using visual information about the movements of a speaker's lips, teeth, and tongue to help understand what they are saying. Speechreading is commonly used by people with all severities of hearing loss to understand speech, and pe
21

Li, Meng. "On study of lip segmentation in color space." HKBU Institutional Repository, 2014. https://repository.hkbu.edu.hk/etd_oa/42.

Abstract:
This thesis mainly addresses two issues: 1) to investigate how to perform lip segmentation without knowing the true number of segments in advance, and 2) to investigate how to select the local optimal observation scale for each structure effectively from the viewpoint of lip segmentation. Regarding the first issue, two lip segmentation methods that are independent of the predefined number of segments are proposed. In the first one, a multi-layer model is built up, in which each layer corresponds to one segment cluster. Subsequently, a Markov random field (MRF) derived from this model is obtained such that the
22

Lees, Nicole C. "Vocalisations with a better view : hyperarticulation augments the auditory-visual advantage for the detection of speech in noise." Thesis, View thesis, 2007. http://handle.uws.edu.au:8081/1959.7/19576.

Abstract:
Recent studies have shown that there is a visual influence early in speech processing - visual speech enhances the ability to detect auditory speech in noise. However, identifying exactly how visual speech interacts with auditory processing at such an early stage has been challenging, because this so-called AV speech detection advantage is both highly related to a specific lower-order, signal-based, optic-acoustic relationship between the second formant amplitude and the area of the mouth (F2/Mouth-area), and mediated by higher-order, information-based factors. Previous investigations either h
23

Lees, Nicole C. "Vocalisations with a better view hyperarticulation augments the auditory-visual advantage for the detection of speech in noise /." View thesis, 2007. http://handle.uws.edu.au:8081/1959.7/19576.

Abstract:
Thesis (Ph.D.)--University of Western Sydney, 2007. A thesis submitted to the University of Western Sydney, College of Arts, in fulfilment of the requirements for the degree of Doctor of Philosophy. Includes bibliography.
24

Habermann, Barbara L. "Speechreading ability in elementary school-age children with and without functional articulation disorders." PDXScholar, 1990. https://pdxscholar.library.pdx.edu/open_access_etds/4087.

Abstract:
The purpose of this study was to compare the speechreading abilities of elementary school-age children with mild to severe articulation disorders with those of children with normal articulation. Speechreading ability, as determined by a speechreading test, indicates how well a person recognizes the visual cues of speech. Speech sounds that have similar visual characteristics were defined as visemes by Jackson in 1988 and can be categorized into distinct groups based on their place of articulation. A relationship between recognition of these visemes and correct articulation was first propo
25

Engelbrecht, Elizabeth M. "Die ontwikkeling van sosiale verhoudings van adolessente met ernstige gehoorverlies met hulle normaal horende portuurgroep." Pretoria : [s.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-09122008-135458/.

26

Wagner, Jessica Lynn. "Exploration of Lip Shape Measures and their Association with Tongue Contact Patterns." Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd984.pdf.

27

Lidestam, Björn. "Semantic Framing of Speech : Emotional and Topical Cues in Perception of Poorly Specified Speech." Doctoral thesis, Linköpings universitet, Institutionen för beteendevetenskap, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6344.

Abstract:
The general aim of this thesis was to test the effects of paralinguistic (emotional) and prior contextual (topical) cues on perception of poorly specified visual, auditory, and audiovisual speech. The specific purposes were to (1) examine if facially displayed emotions can facilitate speechreading performance; (2) study the mechanism for such facilitation; (3) map information-processing factors that are involved in processing of poorly specified speech; and (4) present a comprehensive conceptual framework for speech perception, with specification of the signal being considered. Experi
28

Shou, Virginia. "WHAT?: Visual Interpretations of the Miscommunication Between the Hearing and Deaf." VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/3125.

Abstract:
This thesis visualizes the communication challenges, both latent and obvious, of my daily life as a hard of hearing individual. By focusing on a variety of experiences and examples, I demonstrate the implications of a hard of hearing individual's life. The prints, objects and videos that I have created for my visual thesis aim to enrich a broader public's understanding of issues regularly faced by Deaf people. At the heart of my work, my goal is to generate mutual empathy between the hearing and the Deaf.
29

Horacio, Camila Paes. "Manifestações linguísticas em adultos com alterações no espectro da neuropatia auditiva." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/5/5143/tde-26082010-170001/.

Abstract:
Introduction: In adults who have already developed language, hearing loss of neural origin can impair speech comprehension, with difficulty in the auditory discrimination of sounds and in the full understanding of the message. Among the causes of neural hearing loss is auditory neuropathy spectrum disorder (ANSD). Most publications on ANSD describe the pattern of the audiological diagnosis; however, accounts of the consequences of this hearing disorder for the individual's communication, and of their implications for speech-language therapy, are scarce. It is therefore necessary to i
30

Charlier, Brigitte. "Le développement des représentations phonologiques chez l'enfant sourd: étude comparative du langage parlé complété avec d'autres outils de communication." Doctoral thesis, Universite Libre de Bruxelles, 1994. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212631.

31

Bayard, Clémence. "Perception de la langue française parlée complétée: intégration du trio lèvres-main-son." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209168.

Abstract:
Cued French (Langue française Parlée Complétée, LPC) is a system little known to the general public. Adapted from Cued Speech in 1977, it aims to help French-speaking deaf people perceive an oral message by complementing the information provided by lipreading with a manual gesture. Although LPC has been the subject of much scientific research since its creation, few researchers have so far studied the processes involved in the perception of cued speech. Yet, given the joint presence of visual cues (linked to the lips and the hand) and auditory cues (via hearing aid
32

Huyse, Aurélie. "Intégration audio-visuelle de la parole: le poids de la vision varie-t-il en fonction de l'âge et du développement langagier?" Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209690.

Abstract:
To perceive speech, the human brain uses sensory information coming not only from the auditory modality but also from the visual modality. Indeed, previous research has highlighted the importance of lipreading in speech perception, showing its capacity to improve and to modify it. This is what is called audio-visual integration of speech. The objective of this doctoral thesis was to study the possibility of varying this integration process as a function of different variables. This work thus
33

Fang-Chen, Chang, and 昌芳騁. "Lipreading System." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/42032761223426994701.

34

Yuan, Hanfeng. "Tactual Display of Consonant Voicing to Supplement Lipreading." 2004. http://hdl.handle.net/1721.1/6568.

Abstract:
This research is concerned with the development of tactual displays to supplement the information available through lipreading. Because voicing carries a high informational load in speech and is not well transmitted through lipreading, the efforts are focused on providing tactual displays of voicing to supplement the information available on the lips of the talker. This research includes exploration of 1) signal-processing schemes to extract information about voicing from the acoustic speech signal, 2) methods of displaying this information through a multi-finger tactual display, and 3) percep
35

Chang, Chih-Yu, and 張志瑜. "A Lipreading System Based on Hidden Markov Model." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/68212276335580923113.

Abstract:
Master's thesis, Tamkang University, Department of Electrical Engineering, ROC year 97. Nowadays, the conventional speech recognition system is used in many applications. However, the conventional speech recognition system can be interfered with by voice noise; owing to this disturbance, the recognition rate decreases in noisy conditions. Researchers therefore proposed a visual-feature-only speech recognition system, a lipreading system, to avoid the effect of voice noise. The lipreading system can serve as an assistive component of the conventional speech recognition system, to raise the speech recognition rate. In our research, we
36

Southard, Stuart D. Morris Richard. "Speechreading's benefit to the recognition of sentences as a function of signal-to-noise ratio." 2003. http://etd.lib.fsu.edu/theses/available/etd-11202003-175600/.

Abstract:
Thesis (Ph. D.)--Florida State University, 2003. Advisor: Dr. Richard Morris, Florida State University, College of Communication, Dept. of Communication Disorders. Title and description from dissertation home page (viewed Mar. 3, 2004). Includes bibliographical references.
37

Lees, Nicole C., University of Western Sydney, and College of Arts. "Vocalisations with a better view : hyperarticulation augments the auditory-visual advantage for the detection of speech in noise." 2007. http://handle.uws.edu.au:8081/1959.7/19576.

Abstract:
Recent studies have shown that there is a visual influence early in speech processing - visual speech enhances the ability to detect auditory speech in noise. However, identifying exactly how visual speech interacts with auditory processing at such an early stage has been challenging, because this so-called AV speech detection advantage is both highly related to a specific lower-order, signal-based, optic-acoustic relationship between the second formant amplitude and the area of the mouth (F2/Mouth-area), and mediated by higher-order, information-based factors. Previous investigations either h
38

Mirus, Gene R. 1969. "The linguistic repertoire of deaf cuers: an ethnographic query on practice." Thesis, 2008. http://hdl.handle.net/2152/3889.

Abstract:
Taking an anthropological perspective, this dissertation focuses on a small segment of the American deaf community that uses Cued Speech by examining the nature of the cuers' linguistic repertoire. Multimodality is at issue for this dissertation. It can affect the ways of speaking or, more appropriately, ways of communicating (specifically, signing or cueing). Speech and Cued Speech rely on different modalities by using different sets of articulators. Hearing adults do not learn Cued Speech the same way deaf children do. English-speaking, hearing adult learners can base their articulation of Cu
39

Lin, Wen-Chieh, and 林文杰. "A Space-Time Delay Neural Network for Motion Recognition and Its Application to Lipreading in Bimodal Speech Recognition." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/30448892490229517714.

Abstract:
Master's thesis, National Chiao Tung University, Department of Control Engineering, ROC year 84. Research on motion recognition has received more and more attention in recent years because the need for computer vision is increasing in many domains, such as surveillance systems, multimodal human-computer interfaces, and traffic control systems. Most of the existing approaches separate recognition into spatial feature extraction and time-domain recognition. However, we believe that the information of motion resides in the space-time domain
40

Gritzman, Ashley Daniel. "Adaptive threshold optimisation for colour-based lip segmentation in automatic lip-reading systems." Thesis, 2016. http://hdl.handle.net/10539/22664.

Abstract:
A thesis submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy. Johannesburg, September 2016. Having survived the ordeal of a laryngectomy, the patient must come to terms with the resulting loss of speech. With recent advances in portable computing power, automatic lip-reading (ALR) may become a viable approach to voice restoration. This thesis addresses the image processing aspect of ALR, and focuses on three contributions to colour-based lip segmentation.
41

Hochstrasser, Daniel. "Investigating the effect of visual phonetic cues on the auditory N1 & P2." Thesis, 2017. http://hdl.handle.net/1959.7/uws:44884.

Abstract:
Studies have shown that the N1 and P2 auditory event-related potentials (ERPs) that occur to a speech sound when the talker can be seen (i.e., auditory-visual speech) occur earlier and are reduced in amplitude compared to when the talker cannot be seen (auditory-only speech). An explanation for why seeing the talker changes the brain's response to sound is that visual speech provides information about the upcoming auditory speech event. This information reduces uncertainty about when the sound will occur and about what the event will be (resulting in a smaller N1 and P2, which are markers ass
42

Tan, Sok Hui (Jessica). "Seeing a talking face matters to infants, children and adults : behavioural and neurophysiological studies." Thesis, 2020. http://hdl.handle.net/1959.7/uws:59610.

Abstract:
Everyday conversations typically occur face-to-face. Over and above auditory information, visual information from a speaker's face, e.g., lips and eyebrows, contributes to speech perception and comprehension. The facilitation that visual speech cues bring, termed the visual speech benefit, is experienced by infants, children and adults. Even so, studies on speech perception have largely focused on auditory-only speech, leaving a relative paucity of research on the visual speech benefit. Central to this thesis are the behavioural and neurophysiological manifestations of the visual speech benefit. A
43

Goecke, Roland. "A stereo vision lip tracking algorithm and subsequent statistical analyses of the audio-video correlation in Australian English." Phd thesis, 2004. http://hdl.handle.net/1885/149999.

44

Fitzpatrick, Michael F. "Auditory and auditory-visual speech perception and production in noise in younger and older adults." Thesis, 2014. http://handle.uws.edu.au:8081/1959.7/uws:31936.

Abstract:
The overall aim of the thesis was to investigate spoken communication in adverse conditions using methods that take into account that spoken communication is a highly dynamic and adaptive process, underpinned by interaction and feedback between speech partners. To this end, I first assessed the speech production adaptations of talkers in quiet and in noise, and in different communicative settings, i.e., where the talker and interlocutor were face to face (FTF) or could not see each other (non-visual) (Chapter 2). Results showed that talkers adapted their speech production to suit the specific
45

Beadle, Julianne M. "Contributions of visual speech, visual distractors, and cognition to speech perception in noise for younger and older adults." Thesis, 2019. http://hdl.handle.net/1959.7/uws:55879.

Abstract:
Older adults report that understanding speech in noisy situations (e.g., a restaurant) is difficult. Repeated experiences of frustration in noisy situations may cause older adults to withdraw socially, increasing their susceptibility to mental and physical illness. Understanding the factors that contribute to older adults' difficulty in noise, and in turn, what might be able to alleviate this difficulty, is therefore an important area of research. The experiments in this thesis investigated how sensory and cognitive factors, in particular attention, affect older and younger adults' ability to