
Dissertations / Theses on the topic 'Speech Recognition and Transcription Technologies'


Consult the top 42 dissertations / theses for your research on the topic 'Speech Recognition and Transcription Technologies.'


1

Silvestre Cerdà, Joan Albert. "Different Contributions to Cost-Effective Transcription and Translation of Video Lectures." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/62194.

Abstract:
In recent years, on-line multimedia repositories have experienced strong growth that has consolidated them as essential knowledge assets, especially in the area of education, where large repositories of video lectures have been built in order to complement or even replace traditional teaching methods. However, most of these video lectures are neither transcribed nor translated due to a lack of cost-effective solutions able to do so with sufficiently accurate results. Solutions of this kind are clearly necessary in order to make these lectures accessible to speakers of different languages and to people with hearing disabilities. They would also facilitate lecture searchability and analysis functions, such as classification, recommendation or plagiarism detection, as well as the development of advanced educational functionalities like content summarisation to assist student note-taking. For this reason, the main aim of this thesis is to develop a cost-effective solution capable of transcribing and translating video lectures to a reasonable degree of accuracy. More specifically, we address the integration of state-of-the-art techniques in Automatic Speech Recognition and Machine Translation into large video lecture repositories to generate high-quality multilingual video subtitles without human intervention and at a reduced computational cost. We also explore the potential benefits of exploiting the information we know a priori about these repositories, that is, lecture-specific knowledge such as the speaker, topic or slides, to create specialised, in-domain transcription and translation systems by means of massive adaptation techniques. The proposed solutions have been tested in real-life scenarios by carrying out several objective and subjective evaluations, obtaining very positive results. The main outcome of this thesis, The transLectures-UPV Platform, has been publicly released as open-source software and, at the time of writing, is serving automatic transcriptions and translations for several thousand video lectures at many Spanish and European universities and institutions.
Silvestre Cerdà, JA. (2016). Different Contributions to Cost-Effective Transcription and Translation of Video Lectures [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/62194
2

Valor Miró, Juan Daniel. "Evaluation of innovative computer-assisted transcription and translation strategies for video lecture repositories." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90496.

Abstract:
Nowadays, the technology-enhanced learning area has experienced strong growth, with many new learning approaches such as blended learning, flipped teaching, massive open online courses, and open educational resources complementing face-to-face lectures. Specifically, video lectures are fast becoming an everyday educational resource in higher education for all of these new learning approaches, and they are being incorporated into existing university curricula around the world. Transcriptions and translations can improve the utility of these audiovisual assets, but they are rarely present due to a lack of cost-effective solutions for producing them. Lecture searchability, accessibility for people with impairments, translatability for foreign students, plagiarism detection, content recommendation, note-taking, and discovery of content-related videos are examples of the advantages of having transcriptions. For this reason, the aim of this thesis is to test, in real-life case studies, ways to obtain multilingual captions for video lectures in a cost-effective way by using state-of-the-art automatic speech recognition and machine translation techniques. We also explore interaction protocols for reviewing these automatic transcriptions and translations, since automatic subtitles are unfortunately not error-free. In addition, we take a step further into multilingualism by extending our findings and evaluation to several languages. Finally, the outcomes of this thesis have been applied to thousands of video lectures in European universities and institutions.
Valor Miró, JD. (2017). Evaluation of innovative computer-assisted transcription and translation strategies for video lecture repositories [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90496
3

Shah, Afnan Arafat. "Improving automatic speech recognition transcription through signal processing." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/418970/.

Abstract:
Automatic speech recognition (ASR) in the educational environment could be a solution to the problem of gaining access to the spoken words of a lecture for the many students who find lectures hard to understand, such as those whose mother tongue is not English or who have a hearing impairment. In such an environment, it is difficult for ASR to provide transcripts with Word Error Rates (WER) of less than 25% across the wide range of speakers. Reducing the WER reduces the time, and therefore the cost, of correcting errors in the transcripts. To deal with the variation of acoustic features between speakers, ASR systems implement automatic vocal tract normalisation (VTN), which warps the formants (resonant frequencies) of the speaker to better match the formants of the speakers in the training set. The ASR also implements automatic dynamic time warping (DTW) to deal with variation in the speaker's rate of speaking, by aligning the time series of the new spoken words with the time series of the matching spoken words of the training set. This research investigates whether the ASR's automatic estimation of VTN and DTW can be enhanced by pre-processing the recording, manually warping the formants and speaking rate of the recordings with sound processing libraries (Rubber Band and SoundTouch) before transcribing the pre-processed recordings using ASR. An initial experiment, performed with the recordings of two male and two female speakers, showed that pre-processing the recording could improve the WER by an average of 39.5% for male speakers and 36.2% for female speakers. However, the selection of the best warp factors was achieved through an iterative 'trial and error' approach that involved many hours of calculating the word error rate for each warp factor setting. A more efficient approach for selecting the warp factors for pre-processing was then investigated. The second experiment investigated the development of a modification function that uses, as its training set, the best warp factors from the 'trial and error' approach to estimate the modification percentage required to improve the WER of a recording. A modification function was found that on average improved the WER by 16% for female speakers and 7% for male speakers.
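The 'trial and error' search described above repeatedly scores candidate transcripts against a reference, so the key measurement is the Word Error Rate. The sketch below is a generic, minimal WER implementation together with a brute-force warp-factor search; it is illustrative only, and transcribe_with_warp is a hypothetical stand-in for "pre-process the recording with a given warp factor, then run the ASR", not code from the thesis.

def wer(reference: str, hypothesis: str) -> float:
    # Word Error Rate = word-level edit distance / number of reference words.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def best_warp_factor(reference, recording, warp_factors, transcribe_with_warp):
    # Brute-force 'trial and error': transcribe once per candidate warp factor,
    # keep the factor that yields the lowest WER.
    return min(warp_factors,
               key=lambda w: wer(reference, transcribe_with_warp(recording, w)))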
4

Sundaram, Ramasubramanian H. "Effects of transcription errors on supervised learning in speech recognition." Master's thesis, Mississippi State : Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-06132003-120252.

5

Rangarajan, Vibhav Shyam 1980. "Interfacing speech recognition and vision guided microphone array technologies." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29687.

Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (p. 57-58).
One goal of a pervasive computing environment is to allow the user to interact with the environment in an easy and natural manner. The use of spoken commands, as inputs to a speech recognition system, is one such way to naturally interact with the environment. In challenging acoustic environments, microphone arrays can improve the quality of the input audio signal by beamforming, or steering, to the location of the speaker of interest. The existence of multiple speakers, large interfering signals and/or reverberations or reflections in the audio signal(s) requires the use of advanced beamforming techniques which attempt to separate the target audio from the mixed signal received at the microphone array. In this thesis I present and evaluate a method of modelling reverberations as separate anechoic interfering sources emanating from fixed locations. This acoustic modelling technique allows for tracking of acoustic changes in the environment, such as those caused by speaker motion.
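For background, the simplest form of the beamforming mentioned above is delay-and-sum steering toward a known speaker position; the thesis goes further by additionally modelling reverberations as extra anechoic interfering sources. The sketch below shows only the basic delay-and-sum step, under assumed microphone geometry and sampling rate, and is not the method developed in the thesis.

import numpy as np

def delay_and_sum(signals, mic_positions, source_position, fs, c=343.0):
    # signals: (num_mics, num_samples); positions in metres; fs in Hz; c = speed of sound (m/s).
    dists = np.linalg.norm(mic_positions - source_position, axis=1)
    delays = (dists - dists.min()) / c           # arrival delay of each mic relative to the closest one
    shifts = np.round(delays * fs).astype(int)   # the same delays expressed in whole samples
    num_mics, num_samples = signals.shape
    out = np.zeros(num_samples)
    for m in range(num_mics):
        # Advance later-arriving channels so the target speech lines up, then average.
        out[:num_samples - shifts[m]] += signals[m, shifts[m]:]
    return out / num_mics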
6

Girerd, Daniel. "Strategic Selection of Training Data for Domain-Specific Speech Recognition." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1847.

Abstract:
Speech recognition is now a key topic in computer science with the proliferation of voice-activated assistants and voice-enabled devices. Many companies offer a speech recognition service for developers to use to enable smart devices and services. These speech-to-text systems, however, have significant room for improvement, especially for domain-specific speech. IBM's Watson speech-to-text service attempts to support domain-specific uses by allowing users to upload their own training data for building custom models that augment Watson's general model. This requires deciding on a strategy for picking the training data. This thesis experiments with different training choices for custom language models that augment Watson's speech-to-text service. The results show that using recent utterances is the best choice of training data in our use case of Digital Democracy. We are able to improve speech recognition accuracy by 2.3% over the control with no custom model. However, choosing the training utterances most specific to the use case is better when large enough volumes of such training data are available.
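The winning strategy described above, training the custom language model on the most recent utterances, amounts to a simple data-selection step before any model training. The sketch below illustrates that selection with a hypothetical utterance record format and word budget; uploading the resulting corpus to a speech service as custom-model training data is service-specific and not shown.

from datetime import datetime

def recent_corpus(utterances, max_words=50000):
    # utterances: list of {"text": ..., "date": datetime} records (hypothetical format).
    # Keep the newest utterances first until the word budget is spent.
    selected, total = [], 0
    for u in sorted(utterances, key=lambda u: u["date"], reverse=True):
        n = len(u["text"].split())
        if total + n > max_words:
            break
        selected.append(u["text"])
        total += n
    return "\n".join(selected)

corpus = recent_corpus([
    {"text": "the committee will now hear testimony on the bill", "date": datetime(2017, 5, 2)},
    {"text": "the measure passes on a vote of six to one", "date": datetime(2017, 6, 14)},
])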
7

Tran, Thao, and Nathalie Tkauc. "Face recognition and speech recognition for access control." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-39776.

Abstract:
This project is a collaboration with the company JayWay in Halmstad. In order to enter the office today, a tag-key is needed for the employees and a doorbell for the guests. If someone rings the doorbell, someone on the inside has to open the door manually, which is considered a disturbance during work time. The purpose of the project is to minimize the disturbances in the office. The goal of the project is to develop a system that uses face recognition and speech-to-text to control the lock system for the entrance door. The components used for the project are two Raspberry Pis, a 7-inch LCD touch display, a Raspberry Pi Camera Module V2, an external sound card, a microphone and a speaker. The whole project was written in Python, and the platform used was Amazon Web Services (AWS) for storage and the face recognition, while speech-to-text was provided by Google. The system is divided into three functions, for employees, guests and deliveries. The employee function has two authentication steps: the face recognition and a randomly generated code that needs to be confirmed to avoid biometric spoofing. The guest function includes the speech-to-text service to state the name of an employee that the guest wants to meet, and the employee is then notified. The delivery function informs the specific persons in the office that are responsible for the deliveries by sending a notification. The tests prove that the system will always match the right person when using the face recognition. They also show what the threshold for the face recognition can be set to in order to make sure that only authorized people enter the office. Using the two-step authentication, the face recognition and the code, makes the system secure and protects it against spoofing. One downside is that it is an extra step that takes time. The speech-to-text is set to Swedish and works quite well for Swedish-speaking persons. However, for a multicultural company it can be hard to use the speech-to-text service. It can also be hard for the service to listen and translate if there is a lot of background noise or if several people speak at the same time.
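The two-step employee check described above (a face match followed by confirmation of a randomly generated code) can be summarised as a short control flow. In the sketch below, match_face, show_code, read_code and unlock_door are hypothetical placeholders for the project's AWS-based recognition, display, input and lock components, and the threshold value is likewise illustrative.

import secrets

FACE_MATCH_THRESHOLD = 0.8   # illustrative similarity threshold

def employee_entry(frame, match_face, show_code, read_code, unlock_door):
    similarity, employee = match_face(frame)
    if similarity < FACE_MATCH_THRESHOLD:
        return False                          # unknown face: the door stays locked
    code = f"{secrets.randbelow(10000):04d}"  # random one-time code, the second factor
    show_code(code)
    if read_code() == code:                   # confirming the code guards against photo/video spoofing
        unlock_door(employee)
        return True
    return False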
8

Du Toit, A. (Andre). "Automatic classification of spoken South African English variants using a transcription-less speech recognition approach." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/49866.

Abstract:
Thesis (MEng)--University of Stellenbosch, 2004.
We present the development of a pattern recognition system which is capable of classifying different Spoken Variants (SVs) of South African English (SAE) using a transcription-less speech recognition approach. Spoken Variants (SVs) allow us to unify the linguistic concepts of accent and dialect from a pattern recognition viewpoint. The need for the SAE SV classification system arose from the multi-linguality requirement for South African speech recognition applications and the costs involved in developing such applications.
9

Harris, Leroy W. "Feasibility study of speech recognition technologies for operating within a medical First Responder's environment." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA386380.

Abstract:
Thesis (M.S. in Information Systems Technology), Naval Postgraduate School, Dec. 2000. Thesis advisor(s): Monique P. Fargues, Ray T. Clifford, Douglas E. Brinkley. "December 2000." Includes bibliographical references (p. 55-56). Also available in print.
10

De, Villiers Pieter Theunis. "Lecture transcription systems in resource-scarce environments / Pieter Theunis de Villiers." Thesis, North-West University, 2014. http://hdl.handle.net/10394/10620.

Abstract:
Classroom note taking is a fundamental task performed by learners on a daily basis. These notes provide learners with valuable offline study material, especially in the case of more difficult subjects. The use of class notes has been found not only to provide students with a better learning experience, but also to lead to overall higher academic performance. In a previous study, an increase of 10.5% in student grades was observed after these students had been provided with multimedia class notes. This is not surprising, as other studies have found that the rate of successful transfer of information to humans increases when both visual and audio information are provided. Note taking might seem like an easy task; however, students with hearing impairments, visual impairments, physical impairments, learning disabilities or even non-native listeners find this task very difficult or even impossible. It has also been reported that even non-disabled students find note taking time-consuming and that it requires a great deal of mental effort while also trying to pay full attention to the lecturer. This is illustrated by a study which found that college students were only able to record ~40% of the data presented by the lecturer. It is thus reasonable to expect an automatic way of generating class notes to be beneficial to all learners. Lecture transcription (LT) systems are used in educational environments to assist learners by providing them with real-time in-class transcriptions, or with recordings and transcriptions for offline use. Such systems have already been successfully implemented in the developed world, where all required resources were easily obtained. These systems are typically trained on hundreds to thousands of hours of speech, while their language models are trained on millions or even hundreds of millions of words. Such amounts of data are generally not available in the developing world. In this dissertation, a number of approaches toward the development of LT systems in resource-scarce environments are investigated. We focus on different approaches to obtaining sufficient amounts of well-transcribed data for building acoustic models, using corpora with few transcriptions and of variable quality. One approach investigates the use of a dynamic programming phone string alignment procedure to harvest as much usable data as possible from approximately transcribed speech data. We find that target-language acoustic models are optimal for this purpose, but encouraging results are also found when using models from another language for alignment. Another approach entails using unsupervised training methods, where an initial low-accuracy recognizer is used to transcribe a set of untranscribed data. From this poorly transcribed data, correctly recognized portions are extracted based on a word confidence threshold. The initial system is then retrained together with the newly recognized data in order to increase its overall accuracy. The initial acoustic models are trained using as little as 11 minutes of transcribed speech. After several iterations of unsupervised training, a noticeable increase in accuracy was observed (47.79% WER to 33.44% WER). Similar results (35.97% WER) were, however, found after using a large speaker-independent corpus to train the initial system. Usable LMs were also created using as few as 17,955 words from transcribed lectures; however, this resulted in large out-of-vocabulary rates. This problem was solved by means of LM interpolation. LM interpolation was found to be very beneficial in cases where subject-specific data (such as lecture slides and books) was available. We also introduce our NWU LT system, which was developed for use in learning environments and was designed using a client/server-based architecture. Based on the results found in this study, we are confident that usable models for LT systems can be developed in resource-scarce environments.
MSc (Computer Science), North-West University, Vaal Triangle Campus, 2014
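Two techniques from the abstract above lend themselves to a compact illustration: harvesting automatically recognized segments whose word confidences clear a threshold (for the next round of unsupervised training), and interpolating a small in-domain language model with a large general one. The sketch below uses hypothetical data structures and a placeholder interpolation weight; it is a simplification, not the dissertation's implementation.

def harvest_segments(decoded_segments, confidence_threshold=0.9):
    # decoded_segments: list of segments, each a list of (word, confidence) pairs
    # produced by the initial low-accuracy recognizer.
    kept = []
    for segment in decoded_segments:
        if all(conf >= confidence_threshold for _, conf in segment):
            kept.append(" ".join(word for word, _ in segment))
    return kept   # treated as additional 'transcribed' data when retraining

def interpolate_lm(p_indomain, p_general, lam=0.5):
    # Linear LM interpolation: P(w|h) = lam * P_indomain(w|h) + (1 - lam) * P_general(w|h).
    # A tiny lecture-specific model supplies domain words while the general model
    # keeps the out-of-vocabulary rate down.
    return lambda word, history: (lam * p_indomain(word, history)
                                  + (1.0 - lam) * p_general(word, history))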
11

Sánchez, Cortina Isaías. "Confidence Measures for Automatic and Interactive Speech Recognition." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/61473.

Abstract:
This thesis contributes to the field of Automatic Speech Recognition (ASR), and particularly to Interactive Speech Transcription (IST) and Confidence Measures (CM) for ASR. The main goals of this work can be summarised as follows: 1. To design IST methods and tools to tackle the problem of improving automatically generated transcripts. 2. To assess the designed IST methods and tools on real-life transcription tasks in large educational repositories of video lectures. 3. To improve the reliability of IST by improving the underlying CM.
Automatic Speech Recognition is a crucial task in a broad range of important applications which could not be accomplished by means of manual transcription. ASR can provide cost-effective transcripts in scenarios of increasing social impact such as Massive Open Online Courses (MOOC), for which the availability of accurate enough transcripts is crucial even if they are not flawless. Transcripts enable searchability, summarisation, recommendation and translation; they make the contents accessible to non-native speakers and users with impairments, etc. Their usefulness is such that students improve their academic performance when learning from subtitled video lectures even when the transcripts are not perfect. Unfortunately, current ASR technology is still far from the necessary accuracy. The imperfect transcripts resulting from ASR can be manually supervised and corrected, but the effort can be even higher than that of manual transcription. In order to alleviate this issue, a novel Interactive Transcription of Speech (IST) system is presented in this thesis. This IST system succeeded in reducing the effort whenever a small number of errors can be tolerated, and also in improving the underlying ASR models in a cost-effective way. In order to adapt the proposed framework to real-life MOOCs, other intelligent interaction methods involving limited user effort were also investigated. In addition, a new method was introduced that benefits from the user interactions to automatically improve the unsupervised parts (Constrained Search for ASR). The conducted research was deployed into a web-based IST platform with which it was possible to produce a massive number of semi-supervised lectures from two well-known repositories, videoLectures.net and poliMedia. Finally, the performance of the IST and ASR systems can be easily increased by improving the computation of the Confidence Measure of transcribed words. To this end, two contributions were developed: a new Logistic Regression (LR) model, and speaker adaptation of the CM for cases in which it is possible, such as MOOCs.
Sánchez Cortina, I. (2016). Confidence Measures for Automatic and Interactive Speech Recognition [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/61473
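A logistic-regression confidence measure of the kind mentioned above maps a few word-level predictor features (for example, the decoder posterior, the word duration and a normalised acoustic score) to a probability that the word is correct. The sketch below is a minimal version of that idea; the feature set and weights are illustrative assumptions, not the model estimated in the thesis.

import math

def logistic_confidence(features, weights, bias=0.0):
    # Standard logistic regression: sigmoid of a weighted sum of the features,
    # giving a confidence score in [0, 1] for the transcribed word.
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Example with made-up numbers: posterior 0.72, duration 0.31 s, acoustic score -1.4.
print(logistic_confidence([0.72, 0.31, -1.4], weights=[3.0, 0.5, 0.2], bias=-1.0))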
12

Schulz, Henrik. "Large vocabulary continuous speech recognition for the transcription of Catalan broadcast news and conversations : towards analysis and modelling of acoustic reduction in spontaneous speech." Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/405985.

Abstract:
The transcription of spontaneous speech still poses a challenge to state-of-the-art methods for automatic speech recognition. The present thesis describes the comprehensive development of a large vocabulary continuous speech recognition system for the transcription of Catalan broadcast news and conversations, and evolves towards novel approaches for the analysis and modelling of acoustic reduction in spontaneous speech. It initially concentrates on various conventional methods for acoustic analysis, acoustic and language modelling, and hypothesis search. Improvements over the original single-pass baseline system are mainly attained by domain- and speaking-style-emphasising interpolation of individually estimated language models, a linear discriminating projection of acoustic observations that improves phonetic class separability, speaker normalisation of the acoustic observations, speaker-adaptive training, and acoustic model adaptation in a multi-pass system approach. The analysis of acoustic reduction initially concentrates on context-independent, vowel- and consonant-specific spectral and temporal properties whose parameters display statistically significant differences between the phoneme prototypes in spontaneous speech and their canonical realisations in planned speech. The introduction of the feature space analysis provides the general means to reveal these differences in conventional acoustic observations for automatic speech recognition. It displays statistically significant differences both context-independently and, in a syllable context, between adjacent phonemes, suggesting particular reduction patterns. The analysis furthermore challenges the often suggested coherence between the co-occurring reduction of spectral and temporal properties. The modelling of acoustic reduction first considers segment-conditioned discriminating variables, variability-class-dependent models and variability-class-specific adaptation of the original acoustic model. It introduces phoneme rate as a means of analysing temporal properties, and feature space reduction ratio as a means of analysing the reduction of spectral properties in the conventional feature space for large vocabulary continuous speech recognition, as discriminating variables. These variables are clustered and determine the classes for segment-conditioned, variability-class-dependent models and their scoring during the hypothesis search in recognition. Neither approach displays a significant performance improvement. The modelling then advances towards segment-constituent predictability-dependent models, which introduce predictability as a discriminating variable for variability-class-dependent models, relying on the fundamental coherence between predictability and acoustic reduction suggested by the principle of least effort and redundancy theory, with emphasis on word and phoneme predictability. This approach displays no significant performance improvement either. Planned speech apparently works against the principle of least effort. Thus, a prior segment-conditioned analysis of acoustic reduction may indicate a segment's average degree of reduction, while the within-segment variation may indicate whether the speech exhibits sufficient relaxation of the speaking style to adopt the principle of least effort. Segments exhibiting small within-segment variation may then be modelled separately from those with large within-segment variation, whereas modelling the latter with word-, syllable- or phoneme-predictability-dependent models may provide a perspective for further research.
13

Granell, Romero Emilio. "Advances on the Transcription of Historical Manuscripts based on Multimodality, Interactivity and Crowdsourcing." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/86137.

Abstract:
Natural Language Processing (NLP) is an interdisciplinary research field of Computer Science, Linguistics, and Pattern Recognition that studies, among others, the use of human natural languages in Human-Computer Interaction (HCI). Most NLP research tasks can be applied to solving real-world problems. This is the case of natural language recognition and natural language translation, which can be used to build automatic systems for document transcription and document translation. Regarding digitalised handwritten text documents, transcription is used to obtain easy digital access to the contents, since simple image digitalisation only provides, in most cases, search by image and not by linguistic contents (keywords, expressions, syntactic or semantic categories). Transcription is even more important for historical manuscripts, since most of these documents are unique and the preservation of their contents is crucial for cultural and historical reasons. The transcription of historical manuscripts is usually done by paleographers, who are experts in ancient script and vocabulary. Recently, Handwritten Text Recognition (HTR) has become a common tool for assisting paleographers in their task, by providing a draft transcription that they may amend with more or less sophisticated methods. This draft transcription is useful when it presents an error rate low enough to make the amending process more comfortable than a complete transcription from scratch. Thus, obtaining a draft transcription with an acceptably low error rate is crucial for this NLP technology to be incorporated into the transcription process. The work described in this thesis is focused on improving the draft transcription offered by an HTR system, with the aim of reducing the effort made by paleographers to obtain the actual transcription of digitalised historical manuscripts. This problem is approached from three different, but complementary, scenarios:
· Multimodality: The use of HTR systems allows paleographers to speed up the manual transcription process, since they only need to correct a draft transcription. Another alternative is to obtain the draft transcription by dictating the contents to an Automatic Speech Recognition (ASR) system. When both sources (image and speech) are available, a multimodal combination is possible and an iterative process can be used to refine the final hypothesis.
· Interactivity: The use of assistive technologies in the transcription process reduces the time and human effort required to obtain the actual transcription, given that the assistive system and the paleographer cooperate to generate a perfect transcription. Multimodal feedback can be used to provide the assistive system with additional sources of information, using signals that represent the same whole sequence of words to transcribe (e.g. a text image and the speech of a dictation of its contents), or that represent just a word or character to correct (e.g. an on-line handwritten word).
· Crowdsourcing: Open distributed collaboration emerges as a powerful tool for massive transcription at a relatively low cost, since the paleographer supervision effort may be dramatically reduced. Multimodal combination makes it possible to use the speech dictation of handwritten text lines in a multimodal crowdsourcing platform, where collaborators may provide their speech with their own mobile devices instead of desktop or laptop computers, which makes it possible to recruit more collaborators.
Granell Romero, E. (2017). Advances on the Transcription of Historical Manuscripts based on Multimodality, Interactivity and Crowdsourcing [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86137
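The multimodal scenario described in the abstract above combines, for the same manuscript line, an HTR hypothesis (from the page image) and an ASR hypothesis (from a spoken dictation of that line). A deliberately simple stand-in for that combination is sketched below: where the two pre-aligned hypotheses agree the word is kept, and where they disagree the word with the higher confidence wins. The actual system combines the models iteratively rather than word by word, so this is only an illustration under those simplifying assumptions.

def combine_line(htr_words, asr_words):
    # htr_words, asr_words: pre-aligned lists of (word, confidence) pairs of equal length.
    merged = []
    for (h_word, h_conf), (a_word, a_conf) in zip(htr_words, asr_words):
        if h_word == a_word:
            merged.append(h_word)                                   # both systems agree
        else:
            merged.append(h_word if h_conf >= a_conf else a_word)   # higher confidence wins
    return " ".join(merged)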
APA, Harvard, Vancouver, ISO, and other styles
14

Cesene, Daniel Fredrick. "The Completeness of the Electronic Medical Record with the Implementation of Speech Recognition Technology." Youngstown State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1401735616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Karlsson, Fredrik. "User-centered Visualizations of Transcription Uncertainty in AI-generated Subtitles of News Broadcast." Thesis, Uppsala universitet, Människa-datorinteraktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-415658.

Full text
Abstract:
AI-generated subtitles have recently started to automate the process of subtitling with automatic speech recognition. However, people may not perceive that the transcription is based on probabilities and may contain errors. For news that is broadcast live, this can be controversial and cause misinterpretation. A user-centered design approach was followed, investigating three possible solutions for visualizing transcription uncertainty in real-time presentation. Based on the user needs, one proposed solution was used in a qualitative comparison with AI-generated subtitles without visualizations. The results suggest that visualization of uncertainty supports users' interpretation of AI-generated subtitles and helps to identify possible errors. However, it does not improve transcription intelligibility. The results also suggest that unnoticed transcription errors during news broadcasts are perceived as critical and decrease trust in the news. Uncertainty visualizations may increase trust and reduce the risk of misinterpretation of important information.
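The kind of real-time visualization investigated here can be illustrated with a small sketch: given per-word confidence scores from a speech recognizer, words below a threshold are wrapped in a marker before the subtitle line is rendered. This is only a minimal Python illustration, not the design produced in the thesis; the input format, the marker style and the 0.75 threshold are all assumptions.

    def mark_uncertain_words(words, threshold=0.75, marker=("[", "?]")):
        """Wrap low-confidence words so a subtitle renderer can style them differently."""
        rendered = []
        for text, confidence in words:
            if confidence < threshold:
                rendered.append(f"{marker[0]}{text}{marker[1]}")
            else:
                rendered.append(text)
        return " ".join(rendered)

    # Hypothetical per-word confidences as they might come from an ASR decoder.
    line = [("the", 0.98), ("prime", 0.95), ("minister", 0.96),
            ("denies", 0.52), ("the", 0.97), ("claim", 0.61)]
    print(mark_uncertain_words(line))   # the prime minister [denies?] the [claim?]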
APA, Harvard, Vancouver, ISO, and other styles
16

Washburn, Scott Stuart. "New technologies for data collection and their application for empirical investigation of travel time measurement issues /." Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/10139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Kruspe, Anna Marie [Verfasser], Karlheinz [Akademischer Betreuer] Brandenburg, Masataka [Gutachter] Goto, and Sebastian [Gutachter] Stober. "Application of automatic speech recognition technologies to singing / Anna Marie Kruspe ; Gutachter: Masataka Goto, Sebastian Stober ; Betreuer: Karlheinz Brandenburg." Ilmenau : TU Ilmenau, 2018. http://d-nb.info/1178128814/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Sella, Valeria. "Automatic phonological transcription using forced alignment : FAVE toolkit performance on four non-standard varieties of English." Thesis, Stockholms universitet, Engelska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-167843.

Full text
Abstract:
Forced alignment, a speech recognition technique that performs semi-automatic phonological transcription, constitutes a methodological revolution in the recent history of linguistic research. Its use is progressively becoming the norm in research fields such as sociophonetics, but its general performance and range of applications have been relatively understudied. This thesis investigates the performance and portability of the Forced Alignment and Vowel Extraction program suite (FAVE), an aligner that was trained on, and designed to study, American English. It was decided to test FAVE on four non-American varieties of English (Scottish, Irish, Australian and Indian English) and a control variety (General American). First, the performance of FAVE was compared with that of human annotators, and then it was tested on three potentially problematic variables: /p, t, k/ realization, rhotic consonants and /l/. Although FAVE was found to perform significantly differently from human annotators on identical datasets, further analysis revealed that the aligner performed quite similarly on the non-standard varieties and the control variety, suggesting that the difference in accuracy does not constitute a major drawback to its extended usage. The study discusses the implications of the findings in relation to doubts expressed about the usage of such technology and argues for a wider implementation of forced alignment tools such as FAVE in sociophonetic research.
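A common way to quantify aligner performance in studies of this kind is to measure how far automatically placed segment boundaries fall from manually placed ones, and what proportion agrees within a tolerance (20 ms is a frequent choice). The sketch below is illustrative only and is not FAVE's evaluation code; the input format and the tolerance are assumptions.

    def boundary_agreement(auto_bounds, manual_bounds, tolerance=0.020):
        """Compare two equally long lists of segment boundaries given in seconds.

        Returns the mean absolute deviation and the fraction of boundaries
        that agree within the tolerance."""
        assert len(auto_bounds) == len(manual_bounds)
        deviations = [abs(a - m) for a, m in zip(auto_bounds, manual_bounds)]
        mean_dev = sum(deviations) / len(deviations)
        within = sum(d <= tolerance for d in deviations) / len(deviations)
        return mean_dev, within

    # Hypothetical boundaries (seconds) for one short utterance.
    automatic = [0.12, 0.31, 0.55, 0.80]
    manual = [0.10, 0.33, 0.54, 0.86]
    print(boundary_agreement(automatic, manual))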
APA, Harvard, Vancouver, ISO, and other styles
19

Prince, Bradley Justin Cegielski Casey. "An exploration of the impact of speech recognition technologies on group efficiency and effectiveness during an electronic idea generation scenario." Auburn, Ala., 2006. http://repo.lib.auburn.edu/2006%20Spring/doctoral/PRINCE_BRADLEY_15.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Radke, Annemarie Katherine. "Design and Development of a Metadata-Driven Search Tool for use with Digital Recordings." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/90376.

Full text
Abstract:
It is becoming more common for researchers to use existing recordings as a source for data rather than to generate new media for research. Prior to the examination of recordings, data must be extracted from the recordings and the recordings must be described with metadata to allow users to search for the recordings and to search information within the recordings. The purpose of this small-scale study was to develop a web based search tool that will permit a comprehensive search of spoken information within a collection of existing digital recordings archived in an open-access digital repository. The study is significant to the field of instructional design and technology (IDT) as the digital recordings used in this study are interviews, which contain personal histories and insight from leaders and scholars who have influenced and advanced the field of IDT. This study explored and used design and development research methods for the development of a search tool for use with digital video interviews. The study applied speech recognition technology, tool prototypes, usability testing, expert review, and the skills of a program developer. Results from the study determined that the produced tool provided a more comprehensive and flexible search for users to locate content from within AECT Legends and Legacies Project video interviews.
Doctor of Philosophy
It is becoming more common for researchers to use existing recordings in studies. Prior to examination, the information about the recordings and within the recordings must be determined to allow users the ability to search information. The purpose of this small-scale study was to develop an online search tool that allows users to locate spoken words within a video interview. The study is important to the field of instructional design and technology (IDT) as the video interviews used in this study contain experience and insight from people who have advanced the field of IDT. Using current and free technology, this study developed a practical search tool to search information from AECT Legends and Legacies Project video interviews.
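The core of such a search tool can be reduced to an inverted index that maps every spoken word, as recognized by the speech recognizer, to the recordings and timestamps where it occurs. The sketch below assumes a simple (recording_id, word, start_time) token format; it is a schematic illustration, not the tool developed in the dissertation.

    from collections import defaultdict

    def build_index(tokens):
        """tokens: iterable of (recording_id, word, start_time_in_seconds)."""
        index = defaultdict(list)
        for rec_id, word, start in tokens:
            index[word.lower()].append((rec_id, start))
        return index

    def search(index, query):
        """Return (recording_id, start_time) hits for every word in the query."""
        hits = []
        for word in query.lower().split():
            hits.extend(index.get(word, []))
        return sorted(hits)

    # Hypothetical ASR tokens from two interviews.
    tokens = [("interview_03", "instructional", 12.4),
              ("interview_03", "design", 12.9),
              ("interview_07", "design", 301.2)]
    idx = build_index(tokens)
    print(search(idx, "design"))   # [('interview_03', 12.9), ('interview_07', 301.2)]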
APA, Harvard, Vancouver, ISO, and other styles
21

Bougares, Fethi. "Attelage de systèmes de transcription automatique de la parole." Phd thesis, Université du Maine, 2012. http://tel.archives-ouvertes.fr/tel-00839990.

Full text
Abstract:
In this thesis we address methods for combining large-vocabulary speech transcription systems. Our study focuses on coupling heterogeneous transcription systems with the aim of improving transcription quality under latency constraints. Statistical systems are affected by the many sources of variability that characterize the speech signal, and a single system is generally not able to model all of them. Combining different transcription systems relies on the idea of exploiting the strengths of each to obtain an improved final transcription. The combination methods proposed in the literature are mostly applied a posteriori, in a multi-pass transcription architecture. This induces considerable latency because of the waiting time required before the combination can be applied. Recently, an integrated combination method was proposed. This method is based on the driven decoding paradigm (DDA: Driven Decoding Algorithm), which makes it possible to combine different systems during decoding. The method consists of integrating information coming from several so-called auxiliary systems into the decoding process of a so-called primary system. Our contribution in this thesis is twofold. On the one hand, we propose a study of the robustness of combination by driven decoding, and we then propose an efficiently generalizable improvement based on bag-of-n-grams driven decoding, called BONG. On the other hand, we propose a framework for coupling several single-pass systems in order to build the final recognition hypothesis collaboratively and with reduced latency. We present different theoretical models of the coupling architecture and give an example implementation using a distributed client/server architecture. After defining the collaboration architecture, we focus on combination methods suited to reduced-latency automatic transcription. We propose an adaptation of the BONG combination allowing the reduced-latency collaboration of several single-pass systems running in parallel. We also present an adaptation of the ROVER combination that can be applied during the decoding process, via a local alignment process followed by a voting process based on word frequency. The two proposed combination methods reduce the latency of combining several single-pass systems while yielding a significant WER gain.
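The ROVER-style voting adapted in this thesis can be pictured, in much simplified form, as majority voting over word hypotheses that have already been aligned position by position. Real ROVER builds the alignment with dynamic programming and may also weight votes by confidence; the sketch below assumes pre-aligned hypotheses (with "-" marking a gap) and counts word frequencies only.

    from collections import Counter

    def vote(aligned_hypotheses):
        """aligned_hypotheses: equal-length word lists; '-' marks a gap."""
        combined = []
        for position in zip(*aligned_hypotheses):
            word, _ = Counter(position).most_common(1)[0]
            if word != "-":
                combined.append(word)
        return combined

    # Three single-pass systems, already aligned to a common length.
    h1 = ["the", "cat", "sat", "on", "the", "mat"]
    h2 = ["the", "cat", "sat", "-",  "a",   "mat"]
    h3 = ["a",   "cat", "sat", "on", "the", "hat"]
    print(vote([h1, h2, h3]))   # ['the', 'cat', 'sat', 'on', 'the', 'mat']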
APA, Harvard, Vancouver, ISO, and other styles
22

Cui, Can. "Séparation, diarisation et reconnaissance de la parole conjointes pour la transcription automatique de réunions." Electronic Thesis or Diss., Université de Lorraine, 2024. http://www.theses.fr/2024LORR0103.

Full text
Abstract:
Far-field microphone-array meeting transcription is particularly challenging due to overlapping speech, ambient noise, and reverberation. To address these issues, we explored three approaches. First, we employ a multichannel speaker separation model to isolate individual speakers, followed by a single-channel, single-speaker automatic speech recognition (ASR) model to transcribe the separated and enhanced audio. This method effectively enhances speech quality for ASR. Second, we propose an end-to-end multichannel speaker-attributed ASR (MC-SA-ASR) model, which builds on an existing single-channel SA-ASR model and incorporates a multichannel Conformer-based encoder with multi-frame cross-channel attention (MFCCA). Unlike traditional approaches that require a multichannel front-end speech enhancement model, the MC-SA-ASR model handles far-field microphones in an end-to-end manner. We also experimented with different input features, including Mel filterbank and phase features, for that model. Lastly, we incorporate a multichannel beamforming and enhancement model as a front-end processing step, followed by a single-channel SA-ASR model to process the enhanced multi-speaker speech signals. We tested different fixed, hybrid, and fully neural network-based beamformers and proposed to jointly optimize the neural beamformer and SA-ASR models using the training objective for the latter. In addition to these methods, we developed a meeting transcription pipeline that integrates voice activity detection, speaker diarization, and SA-ASR to process real meeting recordings effectively. Experimental results indicate that, while using a speaker separation model can enhance speech quality, separation errors can propagate to ASR, resulting in suboptimal performance. A guided speaker separation approach proves to be more effective. Our proposed MC-SA-ASR model demonstrates efficiency in integrating multichannel information and the shared information between the ASR and speaker blocks. Experiments with different input features reveal that models trained with Mel filterbank features perform better in terms of word error rate (WER) and speaker error rate (SER) when the number of channels and speakers is low (2 channels with 1 or 2 speakers). However, for settings with 3 or 4 channels and 3 speakers, models trained with additional phase information outperform those using only Mel filterbank features. This suggests that phase information can enhance ASR by leveraging localization information from multiple channels. Although MFCCA-based MC-SA-ASR outperforms the single-channel SA-ASR and MC-ASR models without a speaker block, the joint beamforming and SA-ASR model further improves the performance. Specifically, joint training of the neural beamformer and SA-ASR yields the best performance, indicating that improving speech quality might be a more direct and efficient approach than using an end-to-end MC-SA-ASR model for multichannel meeting transcription. Furthermore, the study of the real meeting transcription pipeline underscores the potential for better end-to-end models.
In our investigation into improving speaker assignment in SA-ASR, we found that the speaker block does not effectively help improve the ASR performance. This highlights the need for improved architectures that more effectively integrate ASR and speaker information.
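The WER figures discussed in this work are instances of the standard word error rate: the Levenshtein distance between the reference and hypothesis word sequences divided by the reference length. A minimal reference implementation, for illustration only:

    def word_error_rate(reference, hypothesis):
        """Levenshtein distance over words, normalised by the reference length."""
        ref, hyp = reference.split(), hypothesis.split()
        # d[i][j] = edit distance between the first i reference and first j hypothesis words.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
        return d[len(ref)][len(hyp)] / len(ref)

    print(word_error_rate("the meeting starts at noon", "the meeting start at noon"))   # 0.2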
APA, Harvard, Vancouver, ISO, and other styles
23

Bazillon, Thierry. "Transcription et traitement manuel de la parole spontanée pour sa reconnaissance automatique." Phd thesis, Université du Maine, 2011. http://tel.archives-ouvertes.fr/tel-00598427.

Full text
Abstract:
The EPAC project is the starting point of our research work; we present this working context in our first chapter. We then focus on the speech transcription task: we outline some of its milestones, as well as an inventory of the oral corpora available today, and we compare two transcription methods, manual and assisted. Next, we carry out a comparative study of eight transcription-assistance tools, in order to show that, depending on the situation, some are more suitable than others. Data encoding is the subject of our fourth chapter. Can transcriptions be exchanged easily? We show that interoperability is an area where much work remains to be done. Finally, we conclude with a detailed analysis of what we call spontaneous speech. Through different angles, definitions and experiments, we attempt to delimit what this term covers.
APA, Harvard, Vancouver, ISO, and other styles
24

Nezval, Jiří. "Odhad přesnosti řečových technologií na základě měření signálové kvality a obsahové bohatosti audia." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413168.

Full text
Abstract:
This thesis presents a theoretical analysis of the origin of speech, introduces applications of speech technologies and explains the contemporary approach to the phonetic transcription of speech recordings. Furthermore, it describes the metrics used for assessing the quality of audio recordings, which are split into two classes: the first groups signal quality metrics, while the second groups content richness metrics. The first goal of the practical section is to create a statistical model for predicting the accuracy of machine transcription of speech recordings based on a measurement of their quality. The second goal is to evaluate which partial metrics are the most essential for predicting machine transcription accuracy.
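A statistical model of the kind described can be as simple as a linear regression from signal-quality and content-richness metrics to the observed transcription error rate. The sketch below fits such a model by ordinary least squares with NumPy; the chosen features (SNR, speech rate) and the tiny data set are invented for illustration and are not the metrics selected in the thesis.

    import numpy as np

    # Each row: [SNR in dB, speech rate in words/s]; target: measured WER of the transcriber.
    features = np.array([[28.0, 2.1], [15.0, 3.4], [22.0, 2.8], [9.0, 4.0], [31.0, 1.9]])
    wer = np.array([0.08, 0.27, 0.15, 0.41, 0.06])

    # Add an intercept column and solve the least-squares problem.
    X = np.hstack([np.ones((features.shape[0], 1)), features])
    coeffs, *_ = np.linalg.lstsq(X, wer, rcond=None)

    def predict_wer(snr_db, speech_rate):
        return float(coeffs @ np.array([1.0, snr_db, speech_rate]))

    print(predict_wer(20.0, 3.0))   # estimated WER for a new recording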
APA, Harvard, Vancouver, ISO, and other styles
25

Ringelienė, Živilė. "Programinė įranga kompiuterio valdymui balsu." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080924_182111-55003.

Full text
Abstract:
The thesis presents a prototype of a software system for Web browser control by voice. The prototype consists of two parts: a Hidden Markov Model based word recognition system and a program which implements browser control by voice commands and is integrated into the word recognition system. The prototype is a speaker-independent Lithuanian word (voice command) recognition system and can recognize 71 voice commands: 1 command intended to launch the browser, 54 commands for browser control, and 16 commands for opening various user-predefined websites. Taking into account various factors which have an impact on recognition (amount of training data, number of Gaussian mixture components, gender of the speaker, use of different hardware for recognition), different sets of acoustic models of Lithuanian voice commands were created and trained. An experimental investigation of how using these sets in the Lithuanian word recognition system influences word recognition accuracy was performed. The results of the experimental investigation showed that the created prototype system achieves 98% word recognition accuracy. The prototype system can be used at secondary school as a visual speech recognition learning tool in informatics, physics, psychology and mathematics lessons for pupils of senior classes.
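Isolated-command recognition of this kind scores the feature sequence of an utterance under the HMM of every command and picks the best-scoring one. The sketch below shows the forward algorithm for toy discrete-output HMMs; the real system described above uses continuous Gaussian-mixture densities and trained acoustic models, so the models and numbers here are purely illustrative.

    def forward_probability(observations, start_p, trans_p, emit_p):
        """P(observations | HMM) for a discrete-output HMM given as plain dicts."""
        states = list(start_p)
        alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
        for obs in observations[1:]:
            alpha = {t: sum(alpha[s] * trans_p[s][t] for s in states) * emit_p[t][obs]
                     for t in states}
        return sum(alpha.values())

    def recognise(observations, command_models):
        """Pick the command whose HMM assigns the observations the highest likelihood."""
        return max(command_models,
                   key=lambda c: forward_probability(observations, *command_models[c]))

    # Two invented command models over a tiny discrete feature alphabet {0, 1}.
    models = {
        "open":  ({"a": 1.0, "b": 0.0},
                  {"a": {"a": 0.6, "b": 0.4}, "b": {"a": 0.0, "b": 1.0}},
                  {"a": {0: 0.8, 1: 0.2}, "b": {0: 0.3, 1: 0.7}}),
        "close": ({"a": 1.0, "b": 0.0},
                  {"a": {"a": 0.6, "b": 0.4}, "b": {"a": 0.0, "b": 1.0}},
                  {"a": {0: 0.2, 1: 0.8}, "b": {0: 0.7, 1: 0.3}}),
    }
    print(recognise([0, 0, 1], models))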
APA, Harvard, Vancouver, ISO, and other styles
26

Struhař, Michal. "Detekce chybné výslovnosti v mluvené řeči." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217311.

Full text
Abstract:
This thesis deals with the detection of speech disorders. One of its aims is choosing a suitable parameterization: short-time energy, zero-crossing rate, linear predictive analysis, perceptual linear predictive analysis, the RASTA method, cepstral analysis and mel-frequency cepstral coefficients can be chosen for detection. The next aim is the construction of a detector of speech disorders based on DTW (Dynamic Time Warping) and an artificial neural network. Detection itself proceeds on the basis of tokens collected from the chosen analyses and a phonetic transcription of the speech. The analyses, the detector and the phonetic transcription of the Czech language are implemented in the MATLAB simulation environment.
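Dynamic Time Warping, one of the two detector back-ends mentioned above, compares a parameterised test utterance with a reference template while allowing non-linear stretching of the time axis. A minimal sketch of the DTW cost over one-dimensional feature sequences follows; a real detector would compare multidimensional MFCC or PLP vectors.

    def dtw_distance(seq_a, seq_b, dist=lambda x, y: abs(x - y)):
        """Classic O(len(a) * len(b)) dynamic time warping cost."""
        inf = float("inf")
        n, m = len(seq_a), len(seq_b)
        cost = [[inf] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                step = dist(seq_a[i - 1], seq_b[j - 1])
                # Best of the three allowed predecessor cells.
                cost[i][j] = step + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
        return cost[n][m]

    reference = [1.0, 1.2, 3.1, 3.0, 1.1]        # template of a correctly pronounced segment
    test = [1.1, 1.1, 1.3, 3.2, 2.9, 1.0]        # slower rendition of the same segment
    print(dtw_distance(reference, test))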
APA, Harvard, Vancouver, ISO, and other styles
27

Alabau, Gonzalvo Vicente. "Multimodal interactive structured prediction." Doctoral thesis, Universitat Politècnica de València, 2014. http://hdl.handle.net/10251/35135.

Full text
Abstract:
This thesis presents scientific contributions to the field of multimodal interactive structured prediction (MISP). The aim of MISP is to reduce the human effort required to supervise an automatic output, in an efficient and ergonomic way. Hence, this thesis focuses on the two aspects of MISP systems. The first aspect, which refers to the interactive part of MISP, is the study of strategies for efficient human-computer collaboration to produce error-free outputs. Multimodality, the second aspect, deals with other more ergonomic modalities of communication with the computer rather than keyboard and mouse. To begin with, in sequential interaction the user is assumed to supervise the output from left to right so that errors are corrected in sequential order. We study the problem under the decision theory framework and define an optimum decoding algorithm. The optimum algorithm is compared to the usually applied, standard approach. Experimental results on several tasks suggest that the optimum algorithm is slightly better than the standard algorithm. In contrast to sequential interaction, in active interaction it is the system that decides what should be given to the user for supervision. On the one hand, user supervision can be reduced if the user is required to supervise only the outputs that the system expects to be erroneous. In this respect, we define a strategy that retrieves the outputs with the highest expected error first. Moreover, we prove that this strategy is optimum under certain conditions, which is validated by experimental results. On the other hand, if the goal is to reduce the number of corrections, active interaction works by selecting elements, one by one, e.g., words of a given output to be supervised by the user. For this case, several strategies are compared. Unlike the previous case, the strategy that performs better is to choose the element with the highest confidence, which coincides with the findings of the optimum algorithm for sequential interaction. However, this also suggests that minimizing effort and supervision are contradictory goals. With respect to the multimodality aspect, this thesis delves into techniques to make multimodal systems more robust. To achieve that, multimodal systems are improved by providing contextual information of the application at hand. First, we study how to integrate e-pen interaction in a machine translation task. We contribute to the state of the art by leveraging the information from the source sentence. Several strategies are compared, basically grouped into two approaches: one inspired by word-based translation models and one based on n-grams generated from a phrase-based system. The experiments show that the former outperforms the latter for this task. Furthermore, the results present remarkable improvements against not using contextual information. Second, similar experiments are conducted on a speech-enabled interface for interactive machine translation. The improvements over the baseline are also noticeable. However, in this case, phrase-based models perform much better than word-based models. We attribute that to the fact that acoustic models are poorer estimations than morphologic models and, thus, they benefit more from the language model. Finally, similar techniques are proposed for dictation of handwritten documents. The results show that speech and handwriting recognition can be combined in an effective way.
Finally, an evaluation with real users is carried out to compare an interactive machine translation prototype with a post-editing prototype. The results of the study reveal that users are very sensitive to the usability aspects of the user interface. Therefore, usability is a crucial aspect to consider in a human evaluation, as it can hinder the real benefits of the technology being evaluated. Once usability problems are fixed, the evaluation indicates that users are more favourable to working with the interactive machine translation system than with the post-editing system.
Alabau Gonzalvo, V. (2014). Multimodal interactive structured prediction [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/35135
TESIS
Premiado
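One of the active-interaction strategies studied in this thesis, handing the user the outputs with the highest expected error first, reduces to ranking automatic outputs by model confidence and supervising the least confident ones first. The confidence values below are placeholders; the thesis derives them from the underlying structured prediction models.

    def supervision_order(outputs):
        """outputs: list of (output_id, confidence in [0, 1]).

        Returns the ids ordered so that the outputs most likely to contain errors come first."""
        return [out_id for out_id, conf in sorted(outputs, key=lambda pair: pair[1])]

    hypotheses = [("sent_1", 0.93), ("sent_2", 0.41), ("sent_3", 0.77)]
    print(supervision_order(hypotheses))   # ['sent_2', 'sent_3', 'sent_1']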
APA, Harvard, Vancouver, ISO, and other styles
28

Zhezhela, Oleksandr. "Vizualizace výstupu z řečových technologií pro potřeby kontaktních center." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236041.

Full text
Abstract:
The thesis is aimed at the visualisation of data mined by speech processing technologies. Some methods of speech data extraction were studied and technologies for this task were analysed. The variety of metadata that can be mined from speech was defined. Existing standards and processes of call centres were also examined. Requirements for the user interface were gathered and analysed; on that basis, and after communication with call centre employees, a concept for speech data visualization was defined and implemented. The resulting solution was integrated into the Speech Analytics Server (SPAS).
APA, Harvard, Vancouver, ISO, and other styles
29

Jansson, Annika. "Tal till text för relevant metadatataggning av ljudarkiv hos Sveriges Radio." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169464.

Full text
Abstract:
Speech to text for relevant metadata tagging of audio archives at Sveriges Radio. Abstract: In the years 2009-2013, Sveriges Radio digitized its programme archive. Sveriges Radio's ambition is that more material from the 175,000 hours of radio broadcast every year should be archived. It is a relatively time-consuming process to make all materials searchable, and it is far from certain that the quality of the data is equally high for all items. The question treated in this thesis is: what opportunities exist to develop a system for Sveriges Radio for Swedish speech to text? Systems for speech to text have been analyzed and examined to give Sveriges Radio a current overview of this subject. Interviews with other similar organizations working in the field have been performed to see how far they have come in their own development in this area. A literature study has been conducted on recent research reports in speech recognition to compare which system would best match Sveriges Radio's needs and requirements. What Sveriges Radio should concentrate on first, in order to build an ASR (Automatic Speech Recognition) system, is to transcribe their audio material.
There are three alternatives. The first is to transcribe the material themselves, selecting a number of programmes with different orientations to cover as wide a range of content as possible, preferably with different speakers, so that speaker recognition can also be developed later; the easiest way is to let the various professionals who enter the features and programmes into the system do it. The second option is to start a project similar to what the BBC has done and enlist the help of the public. The third option is to buy transcription as a service. My advice is to continue evaluating the Kaldi system, because it has evolved significantly in recent years and seems to be relatively easy to extend. The open-source solution that Lingsoft uses is also interesting to study further.
APA, Harvard, Vancouver, ISO, and other styles
30

Dufraux, Adrien. "Exploitation de transcriptions bruitées pour la reconnaissance automatique de la parole." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0032.

Full text
Abstract:
Usual methods to design automatic speech recognition systems require speech datasets with high quality transcriptions.
These datasets are composed of the acoustic signals uttered by speakers and the corresponding word-level transcripts representing what is being said. It takes several thousand hours of transcribed speech to build a good speech recognition model. The dataset must include a variety of speakers recorded in different situations in order to cover the wide variability of speech and language. To create such a system, human annotators are asked to listen to audio tracks and to write down the corresponding text. This process is costly and can lead to errors. What is being said in realistic settings is indeed not always easy to understand. Poorly transcribed signals cause a drop in performance of the acoustic model. To improve the quality of the transcripts, the same utterances may be transcribed by several people, but this leads to an even more expensive process. This thesis takes the opposite view. We design algorithms which can exploit datasets with "noisy" transcriptions, i.e., transcriptions which contain errors. The main goal of this thesis is to reduce the costs of building an automatic speech recognition system by limiting the performance drop induced by these errors. We first introduce the Lead2Gold algorithm. Lead2Gold is based on a cost function that is tolerant to datasets with noisy transcriptions. We model transcription errors at the letter level with a noise model. For each transcript in the dataset, the algorithm searches for a set of likely better transcripts relying on a beam search in a graph. This technique is usually not used to design cost functions. We show that it is possible to explicitly add new elements (here a noise model) to design complex cost functions. We then express the Lead2Gold loss in the wFST formalism. wFSTs are graphs whose edges are weighted and represent symbols. To build flexible cost functions we can compose several graphs. With our proposal, it becomes easier to add new elements, such as a lexicon, to better characterize good transcriptions. We show that using wFSTs is a good alternative to using Lead2Gold's explicit beam search. The modular formulation allows us to design a new variety of cost functions that model transcription errors. Finally, we conduct a data collection experiment in real conditions. We observe different types of annotator profiles. Annotators do not have the same perception of acoustic signals and hence can produce different types of errors. The explicit goal of this experiment is to collect transcripts with errors and to prove the usefulness of modeling these errors.
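The letter-level noise model behind Lead2Gold can be pictured as a weighted edit distance: the probability that an annotator produces a noisy transcript given a candidate correct transcript decomposes into per-letter match, substitution, insertion and deletion probabilities. The sketch below scores candidates with a best-path log-probability of that kind; the probability values are arbitrary placeholders, not those used in the thesis, and the real loss sums over many candidates found by beam search rather than scoring a fixed list.

    import math

    LOG_MATCH, LOG_SUB, LOG_INS, LOG_DEL = map(math.log, (0.94, 0.02, 0.02, 0.02))

    def log_p_noisy_given_clean(noisy, clean):
        """Best-path log-probability of producing the noisy string from the clean one."""
        n, m = len(clean), len(noisy)
        dp = [[-math.inf] * (m + 1) for _ in range(n + 1)]
        dp[0][0] = 0.0
        for i in range(n + 1):
            for j in range(m + 1):
                if i < n:                      # clean letter dropped by the annotator
                    dp[i + 1][j] = max(dp[i + 1][j], dp[i][j] + LOG_DEL)
                if j < m:                      # spurious letter inserted
                    dp[i][j + 1] = max(dp[i][j + 1], dp[i][j] + LOG_INS)
                if i < n and j < m:            # letter copied correctly or substituted
                    step = LOG_MATCH if clean[i] == noisy[j] else LOG_SUB
                    dp[i + 1][j + 1] = max(dp[i + 1][j + 1], dp[i][j] + step)
        return dp[n][m]

    # Rank candidate corrections of a noisy transcript under this noise model.
    noisy = "recognitoin"
    for candidate in ("recognition", "recondition", "recognitoin"):
        print(candidate, round(log_p_noisy_given_clean(noisy, candidate), 2))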
APA, Harvard, Vancouver, ISO, and other styles
31

Vythelingum, Kévin. "Construction rapide, performante et mutualisée de systèmes de reconnaissance et de synthèse de la parole pour de nouvelles langues." Thesis, Le Mans, 2019. http://www.theses.fr/2019LEMA1035.

Full text
Abstract:
We study in this thesis the joint construction of speech recognition and synthesis systems for new languages, with the goals of accuracy and quick development. The rapid development of voice technologies for new languages is driving scientific ambitions and is now considered strategic by industrial players. However, language development research is carried out in a piecemeal way by a few research centers, each working on a limited number of languages, even though these technologies share many common points. Our study focuses on building and sharing tools between systems for creating lexicons, learning phonetic rules and taking advantage of imperfect data. Our contributions focus on the selection of relevant data for learning acoustic models, the joint development of phonetizers and pronunciation lexicons for speech recognition and synthesis, and the use of neural models for phonetic transcription from text and from the speech signal. In addition, we present an approach for the automatic detection of phonetic transcription errors in annotated speech signal databases. This study has shown that it is possible to significantly reduce the quantity of data that has to be annotated manually for the development of new text-to-speech systems. This naturally helps to reduce data collection time in the creation of new systems. Finally, we study an application case by jointly building a system for recognizing and synthesizing speech for a new language.
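The automatic detection of transcription errors mentioned above can be approximated by flagging lexicon entries for which two independent sources of the phonetic transcription, for instance an existing lexicon and a grapheme-to-phoneme model, disagree by more than a small edit distance. This is only a schematic illustration with invented phone strings; the thesis itself relies on neural phonetisers and also exploits the speech signal.

    def phone_edit_distance(a, b):
        """Levenshtein distance between two phone sequences (lists of phone symbols)."""
        d = list(range(len(b) + 1))
        for i, pa in enumerate(a, 1):
            prev, d[0] = d[0], i
            for j, pb in enumerate(b, 1):
                prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (pa != pb))
        return d[len(b)]

    def suspicious_entries(lexicon, g2p_output, max_distance=1):
        """Words whose lexicon and G2P transcriptions differ by more than max_distance."""
        return [w for w in lexicon
                if phone_edit_distance(lexicon[w], g2p_output.get(w, [])) > max_distance]

    lexicon = {"speech": ["s", "p", "i:", "tS"], "vowel": ["v", "aU", "@", "l"]}
    g2p = {"speech": ["s", "p", "i:", "tS"], "vowel": ["v", "O", "E", "l"]}
    print(suspicious_entries(lexicon, g2p))   # ['vowel']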
APA, Harvard, Vancouver, ISO, and other styles
32

Treml, Felicia, and Pontus Claesson. "Att skriva eller att tala in text? Likheter och skillnader i textkvalitet och textlängd med och utan tal-till-text-teknik." Thesis, Linnéuniversitetet, Institutionen för psykologi (PSY), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-103663.

Full text
Abstract:
Being able to express yourself in writing is a prerequisite for academic success and participation in society. Research shows that compensatory aids in the form of assistive technologies for individuals with reading and writing difficulties are particularly important in learning contexts. This study examined similarities and differences in students' text quality and text length when typing on a keyboard compared to when using a particular type of assistive technology in the form of a speech-to-text program. The study comprised 41 Swedish middle school pupils. The results showed that using speech recognition software, whereby students are allowed to produce text by speaking instead of typing, generates both longer texts and higher-quality texts. Speech-to-text programs were also significantly more time efficient. Based on these results, speech recognition technology can bring educational benefits. The results are discussed in relation to previous research and methodological limitations. More research is needed, among other things, in order to understand how long-term use of assistive technology can affect students' writing ability.
APA, Harvard, Vancouver, ISO, and other styles
33

Planet, García Santiago. "Reconeixement afectiu automàtic mitjançant l'anàlisi de paràmetres acústics i lingüístics de la parla espontània." Doctoral thesis, Universitat Ramon Llull, 2013. http://hdl.handle.net/10803/125335.

Full text
Abstract:
The topic of this thesis is automatic spontaneous emotion recognition from the analysis of the speech signal. It is carried out in the Grup de recerca de Tecnologies Mèdia of Enginyeria i Arquitectura La Salle, and it was started when several research lines related to the synthesis of emotions were in progress but none related to their analysis. The motivation is to improve human-machine interaction by developing an analysis module to be adapted as an input to the devices able to generate an appropriate answer at the output through their synthesis modules. The highlight is the expression of emotion, trying to give emotional intelligence skills to systems of artificial intelligence. The main goal is to make human-machine interaction more similar to human communication. First, we carried out a preliminary analysis of utterances recorded under ideal conditions. Vocal expression was, in this case, acted, and the recordings followed a script which determined the descriptive label of their emotional content. Although this was not the paradigm of interaction in a realistic scenario, this previous step was useful to test the first approaches to the parameterisation of corpora, feature selection methods and their utility in optimizing the proposed procedures, and to determine whether considering the emotion recognition problem as a categorical classification exercise is viable. Moreover, it allowed the comparison of the results in this scenario with the results obtained in the realistic environment. This framework can be useful in other contexts, in addition to this comparison utility. In this thesis we propose a system based on it with the goal of automatically validating an expressive speech corpus intended for synthesis, since in synthesis the corpus does need to be recorded under optimal conditions, given that it will be used to generate new speech utterances.
Second, we present an analysis of the FAU Aibo corpus, a multispeaker corpus of emotional spontaneous speech recorded in German from the interaction of a group of children with a robot equipped with a microphone. In this case the approach was completely different because of the definition of the corpus itself. The recordings of the FAU Aibo corpus did not follow a script, and the emotion category labels were assigned afterwards through a subjective evaluation process. Moreover, the emotional content of these recordings was lower than in those recorded by actors because of their spontaneity, and the emotions were not prototypical because they were generated naturally, not following a script. Furthermore, the recording conditions were not the same as those of a professional recording studio. In this scenario, results were very different from those obtained in the previous one, so a more detailed analysis was required. In this sense we used two parameterisations, adding linguistic parameters to the acoustic information, because the linguistic information could be more robust than the acoustic one to noise and other artefacts. We considered several classifiers of different complexity although, often, simple systems get the best results. Moreover, we defined several sets of features, trying to obtain a data set as small as possible that is still able to work efficiently in the automatic emotion recognition task. Results related to the analysis of spontaneous emotions confirmed the complexity of the problem and revealed lower values than those associated with the corpus recorded under ideal conditions. However, the proposed schemes achieved better results than those published so far in works carried out under similar conditions. This opens a door to future research in this area.
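Two of the simplest acoustic descriptors used in this kind of paralinguistic analysis are short-time energy and zero-crossing rate computed over fixed-length frames. The sketch below computes both for a raw sample sequence; the frame length and hop are common but arbitrary choices, and the feature sets actually used in the thesis are far richer (prosodic, spectral and linguistic parameters).

    import math

    def frame_features(samples, frame_len=400, hop=160):
        """Per-frame short-time energy and zero-crossing rate for a list of samples."""
        features = []
        for start in range(0, len(samples) - frame_len + 1, hop):
            frame = samples[start:start + frame_len]
            energy = sum(x * x for x in frame) / frame_len
            zero_crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
            features.append((energy, zero_crossings / (frame_len - 1)))
        return features

    # At 16 kHz, 400 samples = 25 ms frames with a 10 ms hop (a common setting).
    toy_signal = [math.sin(2 * math.pi * 220 * t / 16000) for t in range(8000)]
    print(frame_features(toy_signal)[:2])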
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Chih Ying, and 陳至瑩. "Automatically Phonetic Transcription of Taiwanese Speech Corpus based on HTK Continuous Speech Recognition." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/20640147119716046254.

Full text
Abstract:
碩士<br>長庚大學<br>資訊工程學系<br>99<br>Collection of Taiwanese speech corpus with phonetic transcrip-tion suffers from the problems of pronunciation variation and the mismatch between speech and corresponding phonetic transcription. In this paper, we propose a procedure to verify the correctness of the collected speech corpus. The procedure is based on speech recognition using Hidden Markov Model Toolkit (HTK). By further augmenting the text with read speech, and using a sausage searching net constructed from the multiple pronun¬ci¬ations of the text corresponding to its speech utterance, we are able to reduce the effort for phonetic transcription to some extent. Experiments are conducted using a Taiwanese speech and text corpus, and the accuracy of this procedure can achieve about 90% at best.
APA, Harvard, Vancouver, ISO, and other styles
35

Wang, Sing-Yue, and 王星月. "Speech Recognition Quality Estimation-based Semi-Supervised Training for Broadcast Radio Program Transcription." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/v665zj.

Full text
Abstract:
碩士<br>國立臺北科技大學<br>電子工程系研究所<br>105<br>It is difficult to collect enough labeled data to well train a high performance Automatic Speech Recognition. However, we can easily obtain unlimited amount of unlabeled speech data. In order to gain this advantage, the objective of this paper to use semi-supervised training to improve ASR’s performance. We using Quality Estimation to predict utterance WER. Then a subset of the unlabeled speech utterance which is predicted to have good recognition quality was added into the training data of the speech recognizer and retrain acoustic model. In experimental results, we evaluate two test data set of broadcast materials. The CER could be reduced from 25.00% to 23.61% and from 14.24% to 13.24% with QE-based data selection methods. We also retrain language model with Giga Word, the CER could be reduced from 23.61% to 23.25% and from 13.24% to 12.63%. Finally, we implement online Automatic Radio Transcriber provides speech recognition service.
APA, Harvard, Vancouver, ISO, and other styles
36

徐偉棠. "Error-Spotting in Pronunciation of English Vowels based on Speech Recognition Technologies." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/50928695748515630113.

Full text
Abstract:
碩士<br>國立清華大學<br>資訊工程學系<br>93<br>This thesis investigates the method for detecting error pronunciation of English vowels in utterances spoken by L2 learners, which requires the techniques from digital signal processing and speech recognition. We propose a text independent approach (which does not require the use of a target utterance) for English vowels error detection and learning. Various studies in formant-based speech synthesis have suggested the importance of formant coefficients; this motivates us to investigate pronunciation assessment using formant information instead of MFCC (Mel-frequency cesptrum coefficients) alone. In particular, we explore the addition of formant information to improve the recognition rates of HMM. Then we propose the use of PCN (pronunciation confusion network) together with a formant-based confidence measure to raise error detection rates. The phonology knowledge about the formant and the articulator is then employing to generate high-level feedbacks to the user. Experimental results demonstrate that automatic generation of reliable pronunciation instruction (without using a target utterance) becomes highly possible.
APA, Harvard, Vancouver, ISO, and other styles
37

Boulanger-Lewandowski, Nicolas. "Modeling High-Dimensional Audio Sequences with Recurrent Neural Networks." Thèse, 2014. http://hdl.handle.net/1866/11181.

Full text
Abstract:
Cette thèse étudie des modèles de séquences de haute dimension basés sur des réseaux de neurones récurrents (RNN) et leur application à la musique et à la parole. Bien qu'en principe les RNN puissent représenter les dépendances à long terme et la dynamique temporelle complexe propres aux séquences d'intérêt comme la vidéo, l'audio et la langue naturelle, ceux-ci n'ont pas été utilisés à leur plein potentiel depuis leur introduction par Rumelhart et al. (1986a) en raison de la difficulté de les entraîner efficacement par descente de gradient. Récemment, l'application fructueuse de l'optimisation Hessian-free et d'autres techniques d'entraînement avancées ont entraîné la recrudescence de leur utilisation dans plusieurs systèmes de l'état de l'art. Le travail de cette thèse prend part à ce développement. L'idée centrale consiste à exploiter la flexibilité des RNN pour apprendre une description probabiliste de séquences de symboles, c'est-à-dire une information de haut niveau associée aux signaux observés, qui en retour pourra servir d'à priori pour améliorer la précision de la recherche d'information. Par exemple, en modélisant l'évolution de groupes de notes dans la musique polyphonique, d'accords dans une progression harmonique, de phonèmes dans un énoncé oral ou encore de sources individuelles dans un mélange audio, nous pouvons améliorer significativement les méthodes de transcription polyphonique, de reconnaissance d'accords, de reconnaissance de la parole et de séparation de sources audio respectivement. L'application pratique de nos modèles à ces tâches est détaillée dans les quatre derniers articles présentés dans cette thèse. Dans le premier article, nous remplaçons la couche de sortie d'un RNN par des machines de Boltzmann restreintes conditionnelles pour décrire des distributions de sortie multimodales beaucoup plus riches. Dans le deuxième article, nous évaluons et proposons des méthodes avancées pour entraîner les RNN. Dans les quatre derniers articles, nous examinons différentes façons de combiner nos modèles symboliques à des réseaux profonds et à la factorisation matricielle non-négative, notamment par des produits d'experts, des architectures entrée/sortie et des cadres génératifs généralisant les modèles de Markov cachés. Nous proposons et analysons également des méthodes d'inférence efficaces pour ces modèles, telles la recherche vorace chronologique, la recherche en faisceau à haute dimension, la recherche en faisceau élagué et la descente de gradient. Finalement, nous abordons les questions de l'étiquette biaisée, du maître imposant, du lissage temporel, de la régularisation et du pré-entraînement.<br>This thesis studies models of high-dimensional sequences based on recurrent neural networks (RNNs) and their application to music and speech. While in principle RNNs can represent the long-term dependencies and complex temporal dynamics present in real-world sequences such as video, audio and natural language, they have not been used to their full potential since their introduction by Rumelhart et al. (1986a) due to the difficulty to train them efficiently by gradient-based optimization. In recent years, the successful application of Hessian-free optimization and other advanced training techniques motivated an increase of their use in many state-of-the-art systems. The work of this thesis is part of this development. The main idea is to exploit the power of RNNs to learn a probabilistic description of sequences of symbols, i.e. 
high-level information associated with observed signals, that in turn can be used as a prior to improve the accuracy of information retrieval. For example, by modeling the evolution of note patterns in polyphonic music, chords in a harmonic progression, phones in a spoken utterance, or individual sources in an audio mixture, we can improve significantly the accuracy of polyphonic transcription, chord recognition, speech recognition and audio source separation respectively. The practical application of our models to these tasks is detailed in the last four articles presented in this thesis. In the first article, we replace the output layer of an RNN with conditional restricted Boltzmann machines to describe much richer multimodal output distributions. In the second article, we review and develop advanced techniques to train RNNs. In the last four articles, we explore various ways to combine our symbolic models with deep networks and non-negative matrix factorization algorithms, namely using products of experts, input/output architectures, and generative frameworks that generalize hidden Markov models. We also propose and analyze efficient inference procedures for those models, such as greedy chronological search, high-dimensional beam search, dynamic programming-like pruned beam search and gradient descent. Finally, we explore issues such as label bias, teacher forcing, temporal smoothing, regularization and pre-training.
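The inference procedures mentioned here can be illustrated with a generic beam search over a sequence model's stepwise predictions; step_log_probs below is an assumed stand-in for the RNN's conditional distribution over the next symbol given a prefix, so this is a schematic sketch rather than any of the thesis's actual algorithms.

import heapq
import math

def beam_search(step_log_probs, n_steps, beam_width=8):
    # Each beam entry is (cumulative log-probability, prefix of symbols).
    beam = [(0.0, ())]
    for _ in range(n_steps):
        candidates = []
        for score, prefix in beam:
            for symbol, logp in step_log_probs(prefix).items():
                candidates.append((score + logp, prefix + (symbol,)))
        beam = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    return max(beam, key=lambda c: c[0])

# Toy usage: a fixed bigram-like distribution standing in for an RNN.
toy = {"C": {"C": 0.1, "E": 0.6, "G": 0.3}, "E": {"C": 0.2, "E": 0.1, "G": 0.7},
       "G": {"C": 0.7, "E": 0.2, "G": 0.1}, None: {"C": 0.5, "E": 0.3, "G": 0.2}}
def step_log_probs(prefix):
    last = prefix[-1] if prefix else None
    return {s: math.log(p) for s, p in toy[last].items()}
best_score, best_sequence = beam_search(step_log_probs, n_steps=4)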
APA, Harvard, Vancouver, ISO, and other styles
38

Hsu, Wei-Hao, and 許偉皓. "Improved Technologies for Distributed Speech Recognition : Feature Compression, Extra Transmission Functionalities and Integrated System Simulation." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/07574833527594406563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Agua, Teba Miguel Ángel del. "CONTRIBUTIONS TO EFFICIENT AUTOMATIC TRANSCRIPTION OF VIDEO LECTURES." Doctoral thesis, 2019. http://hdl.handle.net/10251/130198.

Full text
Abstract:
[ES] Durante los últimos años, los repositorios multimedia en línea se han convertido en fuentes clave de conocimiento gracias al auge de Internet, especialmente en el área de la educación. Instituciones educativas de todo el mundo han dedicado muchos recursos en la búsqueda de nuevos métodos de enseñanza, tanto para mejorar la asimilación de nuevos conocimientos, como para poder llegar a una audiencia más amplia. Como resultado, hoy en día disponemos de diferentes repositorios con clases grabadas que siven como herramientas complementarias en la enseñanza, o incluso pueden asentar una nueva base en la enseñanza a distancia. Sin embargo, deben cumplir con una serie de requisitos para que la experiencia sea totalmente satisfactoria y es aquí donde la transcripción de los materiales juega un papel fundamental. La transcripción posibilita una búsqueda precisa de los materiales en los que el alumno está interesado, se abre la puerta a la traducción automática, a funciones de recomendación, a la generación de resumenes de las charlas y además, el poder hacer llegar el contenido a personas con discapacidades auditivas. No obstante, la generación de estas transcripciones puede resultar muy costosa. Con todo esto en mente, la presente tesis tiene como objetivo proporcionar nuevas herramientas y técnicas que faciliten la transcripción de estos repositorios. En particular, abordamos el desarrollo de un conjunto de herramientas de reconocimiento de automático del habla, con énfasis en las técnicas de aprendizaje profundo que contribuyen a proporcionar transcripciones precisas en casos de estudio reales. Además, se presentan diferentes participaciones en competiciones internacionales donde se demuestra la competitividad del software comparada con otras soluciones. Por otra parte, en aras de mejorar los sistemas de reconocimiento, se propone una nueva técnica de adaptación de estos sistemas al interlocutor basada en el uso Medidas de Confianza. Esto además motivó el desarrollo de técnicas para la mejora en la estimación de este tipo de medidas por medio de Redes Neuronales Recurrentes. Todas las contribuciones presentadas se han probado en diferentes repositorios educativos. De hecho, el toolkit transLectures-UPV es parte de un conjunto de herramientas que sirve para generar transcripciones de clases en diferentes universidades e instituciones españolas y europeas.<br>[CAT] Durant els últims anys, els repositoris multimèdia en línia s'han convertit en fonts clau de coneixement gràcies a l'expansió d'Internet, especialment en l'àrea de l'educació. Institucions educatives de tot el món han dedicat molts recursos en la recerca de nous mètodes d'ensenyament, tant per millorar l'assimilació de nous coneixements, com per poder arribar a una audiència més àmplia. Com a resultat, avui dia disposem de diferents repositoris amb classes gravades que serveixen com a eines complementàries en l'ensenyament, o fins i tot poden assentar una nova base a l'ensenyament a distància. No obstant això, han de complir amb una sèrie de requisits perquè la experiència siga totalment satisfactòria i és ací on la transcripció dels materials juga un paper fonamental. La transcripció possibilita una recerca precisa dels materials en els quals l'alumne està interessat, s'obri la porta a la traducció automàtica, a funcions de recomanació, a la generació de resums de les xerrades i el poder fer arribar el contingut a persones amb discapacitats auditives. No obstant, la generació d'aquestes transcripcions pot resultar molt costosa. 
Amb això en ment, la present tesi té com a objectiu proporcionar noves eines i tècniques que faciliten la transcripció d'aquests repositoris. En particular, abordem el desenvolupament d'un conjunt d'eines de reconeixement automàtic de la parla, amb èmfasi en les tècniques d'aprenentatge profund que contribueixen a proporcionar transcripcions precises en casos d'estudi reals. A més, es presenten diferents participacions en competicions internacionals on es demostra la competitivitat del programari comparada amb altres solucions. D'altra banda, per tal de millorar els sistemes de reconeixement, es proposa una nova tècnica d'adaptació d'aquests sistemes a l'interlocutor basada en l'ús de Mesures de Confiança. A més, això va motivar el desenvolupament de tècniques per a la millora en l'estimació d'aquest tipus de mesures per mitjà de Xarxes Neuronals Recurrents. Totes les contribucions presentades s'han provat en diferents repositoris educatius. De fet, el toolkit transLectures-UPV és part d'un conjunt d'eines que serveix per generar transcripcions de classes en diferents universitats i institucions espanyoles i europees.<br>[EN] During the last years, on-line multimedia repositories have become key knowledge assets thanks to the rise of Internet and especially in the area of education. Educational institutions around the world have devoted big efforts to explore different teaching methods, to improve the transmission of knowledge and to reach a wider audience. As a result, online video lecture repositories are now available and serve as complementary tools that can boost the learning experience to better assimilate new concepts. In order to guarantee the success of these repositories the transcription of each lecture plays a very important role because it constitutes the first step towards the availability of many other features. This transcription allows the searchability of learning materials, enables the translation into another languages, provides recommendation functions, gives the possibility to provide content summaries, guarantees the access to people with hearing disabilities, etc. However, the transcription of these videos is expensive in terms of time and human cost. To this purpose, this thesis aims at providing new tools and techniques that ease the transcription of these repositories. In particular, we address the development of a complete Automatic Speech Recognition Toolkit with an special focus on the Deep Learning techniques that contribute to provide accurate transcriptions in real-world scenarios. This toolkit is tested against many other in different international competitions showing comparable transcription quality. Moreover, a new technique to improve the recognition accuracy has been proposed which makes use of Confidence Measures, and constitutes the spark that motivated the proposal of new Confidence Measures techniques that helped to further improve the transcription quality. To this end, a new speaker-adapted confidence measure approach was proposed for models based on Recurrent Neural Networks. The contributions proposed herein have been tested in real-life scenarios in different educational repositories. In fact, the transLectures-UPV toolkit is part of a set of tools for providing video lecture transcriptions in many different Spanish and European universities and institutions.<br>Agua Teba, MÁD. (2019). CONTRIBUTIONS TO EFFICIENT AUTOMATIC TRANSCRIPTION OF VIDEO LECTURES [Tesis doctoral no publicada]. Universitat Politècnica de València. 
https://doi.org/10.4995/Thesis/10251/130198
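As a schematic illustration of word-level confidence estimation of the kind discussed in this abstract (not the transLectures-UPV implementation), a simple classifier can map decoder-side features of each recognised word to a probability of being correct. The feature values and labels below are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per recognised word: [word posterior, acoustic score, LM score, duration in frames].
features = np.array([[0.93, -210.0, -4.1, 32],
                     [0.41, -530.0, -7.9, 12],
                     [0.78, -320.0, -5.5, 25],
                     [0.35, -610.0, -8.3, 9]])
labels = np.array([1, 0, 1, 0])              # 1 = word matched the reference transcript
cm_model = LogisticRegression().fit(features, labels)
confidence = cm_model.predict_proba(features)[:, 1]   # per-word confidence in [0, 1]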
APA, Harvard, Vancouver, ISO, and other styles
40

Ananthakrishnan, G. "Music And Speech Analysis Using The 'Bach' Scale Filter-Bank." Thesis, 2007. https://etd.iisc.ac.in/handle/2005/592.

Full text
Abstract:
The aim of this thesis is to define a perceptual scale for the ‘Time-Frequency’ analysis of music signals. The equal tempered ‘Bach ’ scale is a suitable scale, since it covers most of the genres of music and the error is equally distributed for each semi-tone. However, it may be necessary to allow a tolerance of around 50 cents or half the interval of the Bach scale, so that the interval can accommodate other common intonation schemes. The thesis covers the formulation of the Bach scale filter-bank as a time-varying model. It makes a comparative study with other commonly used perceptual scales. Two applications for the Bach scale filter-bank are also proposed, namely automated segmentation of speech signals and transcription of singing voice for query-by-humming applications. Even though this filter-bank is suggested with a motivation from music, it could also be applied to speech. A method for automatically segmenting continuous speech into phonetic units is proposed. The results, obtained from the proposed method, show around 82% accuracy for the English and 85% accuracy for the Hindi databases. This is an improvement of around 2 -3% when the performance is compared with other popular methods in the literature. Interestingly, the Bach scale filters perform better than the filters designed for other common perceptual scales, such as Mel and Bark scales. ‘Musical transcription’ refers to the process of converting a musical rendering or performance into a set of symbols or notations. A query in a ‘query-by-humming system’ can be made in several ways, some of which are singing with words, or with arbitrary syllables, or whistling. Two algorithms are suggested to annotate a query. The algorithms are designed to be fairly robust for these various forms of queries. The first algorithm is a frequency selection based method. It works on the basis of selecting the most likely frequency components at any given time instant. The second algorithm works on the basis of finding time-connected contours of high energy in the ‘Time-Frequency’ plane of the input signal. The time domain algorithm works better in terms of instantaneous pitch estimates. It results in an error of around 10 -15%, while the frequency domain method results in an error of around 12 -20%. A song rendered by two different people will have quite a few different properties. Their absolute pitches, rates of rendering, timbres based on voice quality and inaccuracies, may be different. The thesis discusses a method to quantify the distance between two different renderings of musical pieces. The distance function has been evaluated by attempting a search for a particular song from a database of a size of 315, made up of songs sung by both male and female singers and whistled queries. Around 90 % of the time, the correct song is found among the top five best choices picked. Thus, the Bach scale has been proposed as a suitable scale for representing the perception of music. It has been explored in two applications, namely automated segmentation of speech and transcription of singing voices. Using the transcription obtained, a measure of the distance between renderings of musical pieces has also been suggested.
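The layout of such an equal-tempered filter-bank is easy to make concrete: centre frequencies spaced one semitone apart (a ratio of 2**(1/12)), with bands wide enough to tolerate roughly +/-50 cents. The reference frequency, band count and triangular shape in this sketch are assumptions for illustration, not the exact design in the thesis.

import numpy as np

def bach_centres(f_ref=55.0, n_bands=96):
    # Semitone-spaced centre frequencies: f_k = f_ref * 2**(k/12), about 8 octaves from A1.
    return f_ref * 2.0 ** (np.arange(n_bands) / 12.0)

def triangular_band(freqs, f_lo, f_c, f_hi):
    rise = (freqs - f_lo) / (f_c - f_lo)
    fall = (f_hi - freqs) / (f_hi - f_c)
    return np.clip(np.minimum(rise, fall), 0.0, None)

centres = bach_centres()
fft_freqs = np.linspace(0.0, 8000.0, 1025)   # bin frequencies of a 2048-point FFT at 16 kHz
filter_bank = np.array([triangular_band(fft_freqs, lo, c, hi)
                        for lo, c, hi in zip(centres[:-2], centres[1:-1], centres[2:])])
# Each row of filter_bank weights the magnitude spectrum of one analysis frame.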
APA, Harvard, Vancouver, ISO, and other styles
41

Ananthakrishnan, G. "Music And Speech Analysis Using The 'Bach' Scale Filter-Bank." Thesis, 2007. http://hdl.handle.net/2005/592.

Full text
Abstract:
The aim of this thesis is to define a perceptual scale for the ‘Time-Frequency’ analysis of music signals. The equal tempered ‘Bach ’ scale is a suitable scale, since it covers most of the genres of music and the error is equally distributed for each semi-tone. However, it may be necessary to allow a tolerance of around 50 cents or half the interval of the Bach scale, so that the interval can accommodate other common intonation schemes. The thesis covers the formulation of the Bach scale filter-bank as a time-varying model. It makes a comparative study with other commonly used perceptual scales. Two applications for the Bach scale filter-bank are also proposed, namely automated segmentation of speech signals and transcription of singing voice for query-by-humming applications. Even though this filter-bank is suggested with a motivation from music, it could also be applied to speech. A method for automatically segmenting continuous speech into phonetic units is proposed. The results, obtained from the proposed method, show around 82% accuracy for the English and 85% accuracy for the Hindi databases. This is an improvement of around 2 -3% when the performance is compared with other popular methods in the literature. Interestingly, the Bach scale filters perform better than the filters designed for other common perceptual scales, such as Mel and Bark scales. ‘Musical transcription’ refers to the process of converting a musical rendering or performance into a set of symbols or notations. A query in a ‘query-by-humming system’ can be made in several ways, some of which are singing with words, or with arbitrary syllables, or whistling. Two algorithms are suggested to annotate a query. The algorithms are designed to be fairly robust for these various forms of queries. The first algorithm is a frequency selection based method. It works on the basis of selecting the most likely frequency components at any given time instant. The second algorithm works on the basis of finding time-connected contours of high energy in the ‘Time-Frequency’ plane of the input signal. The time domain algorithm works better in terms of instantaneous pitch estimates. It results in an error of around 10 -15%, while the frequency domain method results in an error of around 12 -20%. A song rendered by two different people will have quite a few different properties. Their absolute pitches, rates of rendering, timbres based on voice quality and inaccuracies, may be different. The thesis discusses a method to quantify the distance between two different renderings of musical pieces. The distance function has been evaluated by attempting a search for a particular song from a database of a size of 315, made up of songs sung by both male and female singers and whistled queries. Around 90 % of the time, the correct song is found among the top five best choices picked. Thus, the Bach scale has been proposed as a suitable scale for representing the perception of music. It has been explored in two applications, namely automated segmentation of speech and transcription of singing voices. Using the transcription obtained, a measure of the distance between renderings of musical pieces has also been suggested.
APA, Harvard, Vancouver, ISO, and other styles
42

Peche, Marius. "Spoken language identification in resource-scarce environments." Diss., 2010. http://hdl.handle.net/2263/27513.

Full text
Abstract:
South Africa has eleven official languages, ten of which are considered “resource-scarce”. For these languages, even basic linguistic resources required for the development of speech technology systems can be difficult or impossible to obtain. In this thesis, the process of developing Spoken Language Identification (S-LID) systems in resource-scarce environments is investigated. A Parallel Phoneme Recognition followed by Language Modeling (PPR-LM) architecture is utilized and three specific scenarios are investigated: (1) incomplete resources, including the lack of audio transcriptions and/or pronunciation dictionaries; (2) inconsistent resources, including the use of speech corpora that are unmatched with regard to domain or channel characteristics; and (3) poor quality resources, such as wrongly labeled or poorly transcribed data. Each situation is analysed, techniques defined to mitigate the effect of limited or poor quality resources, and the effectiveness of these techniques evaluated experimentally. Techniques evaluated include the development of orthographic tokenizers, bootstrapping of transcriptions, filtering of low quality audio, diarization and channel normalization techniques, and the human verification of miss-classified utterances. The knowledge gained from this research is used to develop the first S-LID system able to distinguish between all South African languages. The system performs well, able to differentiate among the eleven languages with an accuracy of above 67%, and among the six primary South African language families with an accuracy of higher than 80%, on segments of speech of between 2s and 10s in length. AFRIKAANS : Suid-Afrika het elf amptelike tale waarvan tien as hulpbron-skaars beskou word. Vir die tien tale kan selfs die basiese hulpbronne wat benodig word om spraak tegnologie stelsels te ontwikkel moeilik wees om te bekom. Die proses om ‘n Gesproke Taal Identifisering stelsel vir hulpbron-skaars omgewings te ontwikkel, word in hierdie tesis ondersoek. ‘n Parallelle Foneem Herkenning gevolg deur Taal Modellering argitektuur word ingespan om drie spesifieke moontlikhede word ondersoek: (1) Onvolledige Hulpbronne, byvoorbeeld vermiste transkripsies en uitspraak woordeboeke; (2) Teenstrydige Hulpbronne, byvoorbeeld die gebruik van spraak data-versamelings wat teenstrydig is in terme van kanaal kenmerke; en (3) Hulpbronne van swak kwaliteit, byvoorbeeld foutief geklasifiseerde data en klank opnames wat swak getranskribeer is. Elke situasie word geanaliseer, tegnieke om die negatiewe effekte van min of swak hulpbronne te verminder word ontwikkel, en die bruikbaarheid van hierdie tegnieke word deur middel van eksperimente bepaal. Tegnieke wat ontwikkel word sluit die ontwikkeling van ortografiese ontleders, die outomatiese ontwikkeling van nuwe transkripsies, die filtrering van swak kwaliteit klank-data, klank-verdeling en kanaal normalisering tegnieke, en menslike verifikasie van verkeerd geklassifiseerde uitsprake in. Die kennis wat deur hierdie navorsing bekom word, word gebruik om die eerste Gesproke Taal Identifisering stelsel wat tussen al die tale van Suid-Afrika kan onderskei, te ontwikkel. Hierdie stelsel vaar relatief goed, en kan die elf tale met ‘n akkuraatheid van meer as 67% identifiseer. Indien daar op die ses taal families gefokus word, verbeter die persentasie tot meer as 80% vir segmente wat tussen 2 en 10 sekondes lank. 
Copyright<br>Dissertation (MEng)--University of Pretoria, 2010.<br>Electrical, Electronic and Computer Engineering<br>unrestricted
APA, Harvard, Vancouver, ISO, and other styles