A selection of scholarly literature on the topic "Interfaces vocales"
Cite a source in APA, MLA, Chicago, Harvard, and other styles
Browse lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Interfaces vocales".
Next to every work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these details are available in the metadata.
Journal articles on the topic "Interfaces vocales":
Fagundes Pase, André, Gisele Noll, Mariana Gomes da Fontoura, and Letícia Dallegrave. "Who Controls the Voice? The Journalistic Use and the Informational Domain in Vocal Transactors." Brazilian Journalism Research 16, no. 3 (December 29, 2020): 576–603. http://dx.doi.org/10.25200/bjr.v16n3.2021.1316.
Swoboda, Danya, Jared Boasen, Pierre-Majorique Léger, Romain Pourchon, and Sylvain Sénécal. "Comparing the Effectiveness of Speech and Physiological Features in Explaining Emotional Responses during Voice User Interface Interactions." Applied Sciences 12, no. 3 (January 25, 2022): 1269. http://dx.doi.org/10.3390/app12031269.
Wagner, Amber, and Jeff Gray. "An Empirical Evaluation of a Vocal User Interface for Programming by Voice." International Journal of Information Technologies and Systems Approach 8, no. 2 (July 2015): 47–63. http://dx.doi.org/10.4018/ijitsa.2015070104.
Topoleanu, Tudor Sabin, and Gheorghe Leonte Mogan. "Aspects Concerning Voice Cognitive Control Systems for Mobile Robots." Solid State Phenomena 166-167 (September 2010): 427–32. http://dx.doi.org/10.4028/www.scientific.net/ssp.166-167.427.
Gutiérrez Calderón, Jenny Alejandra, Erika Nathalia Gama Melo, Darío Amaya Hurtado, and Oscar Fernando Avilés Sánchez. "Desarrollo de interfaces para la detección del habla sub-vocal." Revista Tecnura 17, no. 37 (September 18, 2013): 138. http://dx.doi.org/10.14483/udistrital.jour.tecnura.2013.3.a12.
Young, Andrea. "The Voice-Index and Digital Voice Interface." Leonardo Music Journal 24 (December 2014): 3–5. http://dx.doi.org/10.1162/lmj_a_00186.
Carvalho, Diogo Rebel e. "Multiplicidade de recursos vocais, dramáticos e expressivos a partir da análise da obra Sound - para quatro vozes femininas, luz, cena e amplificação - de Luiz Carlos Csekö." Per Musi, no. 41 (September 29, 2021): 1–15. http://dx.doi.org/10.35699/2317-6377.2021.34851.
Suzuki, Toshitaka N., David Wheatcroft, and Michael Griesser. "The syntax–semantics interface in animal vocal communication." Philosophical Transactions of the Royal Society B: Biological Sciences 375, no. 1789 (November 18, 2019): 20180405. http://dx.doi.org/10.1098/rstb.2018.0405.
Metzner, W. "An audio-vocal interface in echolocating horseshoe bats." Journal of Neuroscience 13, no. 5 (May 1, 1993): 1899–915. http://dx.doi.org/10.1523/jneurosci.13-05-01899.1993.
Goble, J. R., P. F. Suarez, S. K. Rogers, D. W. Ruck, C. Arndt, and M. Kabrisky. "A facial feature communications interface for the non-vocal." IEEE Engineering in Medicine and Biology Magazine 12, no. 3 (September 1993): 46–48. http://dx.doi.org/10.1109/51.232340.
Dissertations on the topic "Interfaces vocales":
Janer Mestres, Jordi. "Singing-driven interfaces for sound synthesizers." Doctoral thesis, Universitat Pompeu Fabra, 2008. http://hdl.handle.net/10803/7550.
Digital musical instruments are usually decomposed in two main constituent parts: a user interface and a sound synthesis engine. The user interface is popularly referred as a musical controller, and its design is the primary objective of this dissertation. Under the title of singing-driven interfaces, we aim to design systems that allow controlling the synthesis of musical instruments sounds with the singing voice.
This dissertation searches for the relationships between the voice and the sound of musical instruments by addressing both, the voice signal description, as well as the mapping strategies for a meaningful control of the synthesized sound.
We propose two different approaches, one for controlling a singing voice synthesizer, and another for controlling the synthesis of instrumental sounds. For the latter, we suggest to represent voice signal as vocal gestures, contributing with several voice analysis methods.
To demonstrate the obtained results, we developed two real-time prototypes.
Srivastava, Brij Mohan Lal. "Anonymisation du locuteur : représentation, évaluation et garanties formelles." Thesis, Université de Lille (2018-2021), 2021. https://pepite-depot.univ-lille.fr/LIBRE/EDMADIS/2021/2021LILUB029.pdf.
Large-scale centralized storage of speech data poses severe privacy threats to the speakers. Indeed, the emergence and widespread use of voice interfaces, from the telephone to mobile applications and now digital assistants, has enabled easier communication between customers and service providers. Massive speech data collection allows its users, for instance researchers, to develop tools for human convenience, like voice passwords for banking, personalized smart speakers, etc. However, centralized storage is vulnerable to cybersecurity threats which, when combined with advanced speech technologies like voice cloning, speaker recognition, and spoofing, may endow a malicious entity with the capability to re-identify speakers and breach their privacy by gaining access to their sensitive biometric characteristics, emotional states, personality attributes, pathological conditions, etc. Individuals and the members of civil society worldwide, and especially in Europe, are becoming aware of this threat. With firm backing from the GDPR, several initiatives are being launched, including the publication of white papers and guidelines, to spread mass awareness and to regulate voice data so that citizens' privacy is protected. This thesis is a timely effort to bolster such initiatives and propose solutions to remove the biometric identity of speakers from speech signals, thereby rendering them useless for re-identifying the speakers who spoke them. Besides the goal of protecting the speaker's identity from malicious access, this thesis aims to explore solutions which do so without degrading the usefulness of speech. We present several anonymization schemes based on voice conversion methods to achieve this two-fold objective.
The output of such schemes is a high-quality speech signal that is usable for publication and a variety of downstream tasks. All the schemes are subjected to a rigorous evaluation protocol, which is one of the major contributions of this thesis. This protocol led to the finding that previous approaches do not effectively protect privacy, and thereby directly inspired the VoicePrivacy initiative, an effort to gather individuals, industry, and the scientific community to participate in building a robust anonymization scheme. We introduce a range of anonymization schemes under the purview of the VoicePrivacy initiative and empirically prove their superiority in terms of privacy protection and utility. Finally, we endeavor to remove the residual speaker identity from the anonymized speech signal using techniques inspired by differential privacy. Such techniques provide provable analytical guarantees for the proposed anonymization schemes and open up promising perspectives for future research. In practice, the tools developed in this thesis are an essential component for building trust in any software ecosystem where voice data is stored, transmitted, processed, or published. They aim to help organizations comply with the rules mandated by civil governments and give a choice to individuals who wish to exercise their right to privacy.
Murdoch, Michael J. "Nonverbal vocal interface." Link to online version, 2006. https://ritdml.rit.edu/dspace/handle/1850/10346.
Hatt, Grégory. "Interface homme-machine intégrant la reconnaissance vocale et l'analyse d'image." Sion, 2008. http://doc.rero.ch/record/12810?ln=fr.
Martin, Pierre. "C3i systeme de reconnaissance vocale du chinois moderne (chinese ideograms input interface)." Nice, 1994. http://www.theses.fr/1994NICE4809.
Carneiro, Maria Isabel Farias. "Abordagem multidimensional para avaliação da acessibilidade de interfaces vocais considerando a modelagem da incerteza." Universidade Federal de Campina Grande, 2014. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1307.
Voice interaction design per se does not provide quality assurance of the interactive process for visually impaired users. In this dissertation, a method for evaluating voice user interface (VUI) accessibility based upon a set of techniques already well known to the HCI (Human-Computer Interaction) community is proposed. For each technique, the problem is focused from a different perspective: (i) the user's perspective, which is expressed as views on the product gathered from an inquiry-based approach; (ii) the specialist's perspective, which is expressed by the analysis of performance results in accessibility testing sessions; and (iii) the accessibility community's perspective, which is expressed by design reviews to determine whether a user interface design conforms to standards. Additionally, Bayesian networks were used in order to make explicit the uncertainty inherent in conformity inspection processes. A case study with the DOSVOX system was performed to validate the proposed approach. DOSVOX was developed at the Federal University of Rio de Janeiro (UFRJ) with the aim of helping visually impaired users use the computer. A conformity inspection was performed in accordance with parts 14 (Menu dialogues), 17 (Form-filling dialogues), and 171 (Guidance on software accessibility) of ISO 9241. In parallel, user performance measurement and user subjective satisfaction measurement were conducted via accessibility testing. One hundred subjects were enrolled in this study. First, they were categorized as blind (40 users), low vision (20 users), and non-visually impaired (40 users), according to their visual impairment. Second, they were grouped as novices (46 users) and intermediates (54 users), according to their level of knowledge of Informatics and experience with the evaluated product. Accessibility test results were statistically analyzed in order to verify the correlation between category performances and between group performances.
No statistically significant differences between the user categories or the user groups were found. On the other hand, data comparison showed that the three strategies adopted (user performance measurement, user satisfaction measurement, and standard conformity inspection) add to the evaluation process, producing complementary data that are significant to the process, and reinforcing the relevance of a multi-method approach for the accessibility evaluation of voice user interfaces.
Chapman, Jana Lynn. "BYU Vocal Performance Database." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2146.
Perrotin, Olivier. "Chanter avec les mains : interfaces chironomiques pour les instruments de musique numériques." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112207/document.
This thesis deals with the real-time control of singing voice synthesis by a graphic tablet, based on the digital musical instrument Cantor Digitalis. The relevance of the graphic tablet for intonation control is first considered, showing that the tablet provides more precise pitch control than the real voice under experimental conditions. To extend this accuracy of control to any situation, a dynamic pitch-warping method for intonation correction is developed. It makes it possible to play below the pitch perception limens while preserving the musician's expressivity. Objective and perceptual evaluations validate the method's efficiency. The use of new interfaces for musical expression raises the question of the modalities involved in playing the instrument. A third study reveals a preponderance of the visual modality over auditory perception for intonation control, due to the introduction of visual cues on the tablet surface. Nevertheless, this is compensated by the expressivity allowed by the interface. The writing and drawing skills acquired in early childhood enable rapid acquisition of expert control of the instrument. A set of gestures dedicated to the control of different vocal effects is suggested. Finally, the instrument is practiced intensively within the Chorus Digitalis ensemble, to test and promote our work. Artistic research was conducted on the choice of the Cantor Digitalis' musical repertoire. Moreover, a visual feedback display dedicated to the audience has been developed, extending the perception of the players' pitch and articulation.
Dours, Daniel. "Conception d'un système multiprocesseur traitant un flot continu de données en temps réel pour la réalisation d'une interface vocale intelligente." Grenoble 2 : ANRT, 1986. http://catalogue.bnf.fr/ark:/12148/cb375972845.
Dours, Daniel. "Conception d'un systeme multiprocesseur traitant un flot continu de donnees en temps reel pour la realisation d'une interface vocale intelligente." Toulouse 3, 1986. http://www.theses.fr/1986TOU30107.
Books on the topic "Interfaces vocales":
Cavicchio, Federica, and Emanuela Magno Caldognetto, eds. Aspetti emotivi e relazionali nell'e-learning. Florence: Firenze University Press, 2008. http://dx.doi.org/10.36253/978-88-8453-833-8.
Jacobi, Jeffrey. The Vocal Advantage. Prentice Hall, 1996.
Jacobi, Jeffrey. The Vocal Advantage. Prentice Hall Trade, 1996.
Brauth, Steven E., and W. S. Hall. Avian Auditory-Vocal Motor Interfaces (Journal-Brain, Behavior and Evolution, 1994 , Vol 44, No 4-5). S Karger Pub, 1994.
Book chapters on the topic "Interfaces vocales":
Céspedes-Hernández, David, Juan Manuel González-Calleros, Josefina Guerrero-García, and Liliana Rodríguez-Vizzuett. "Model-Driven Development of Vocal User Interfaces." In Human Computer Interaction, 30–34. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-03068-5_7.
Atassi, Hicham, Maria Teresa Riviello, Zdeněk Smékal, Amir Hussain, and Anna Esposito. "Emotional Vocal Expressions Recognition Using the COST 2102 Italian Database of Emotional Speech." In Development of Multimodal Interfaces: Active Listening and Synchrony, 255–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12397-9_21.
Murphy, Peter J., and Anne-Maria Laukkanen. "Analysis of Emotional Voice Using Electroglottogram-Based Temporal Measures of Vocal Fold Opening." In Development of Multimodal Interfaces: Active Listening and Synchrony, 286–93. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12397-9_24.
Paternò, Fabio, and Christian Sisti. "Deriving Vocal Interfaces from Logical Descriptions in Multi-device Authoring Environments." In Lecture Notes in Computer Science, 204–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13911-6_14.
Esposito, Anna, and Alda Troncone. "Emotions and Speech Disorders: Do Developmental Stutters Recognize Emotional Vocal Expressions?" In Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. Theoretical and Practical Issues, 155–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-18184-9_14.
Schuller, Gerd, and Susanne Radtke-Schuller. "Midbrain Areas as Candidates for Audio-Vocal Interface in Echolocating Bats." In Animal Sonar, 93–98. Boston, MA: Springer US, 1988. http://dx.doi.org/10.1007/978-1-4684-7493-0_10.
Morise, Masanori, Masato Onishi, Hideki Kawahara, and Haruhiro Katayose. "v.morish’09: A Morphing-Based Singing Design Interface for Vocal Melodies." In Lecture Notes in Computer Science, 185–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04052-8_18.
Vinciarelli, Alessandro, Hugues Salamin, Gelareh Mohammadi, and Khiet Truong. "More Than Words: Inference of Socially Relevant Information from Nonverbal Vocal Cues in Speech." In Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. Theoretical and Practical Issues, 23–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-18184-9_3.
Ons, Bart, Jort F. Gemmeke, and Hugo Van hamme. "Label Noise Robustness and Learning Speed in a Self-Learning Vocal User Interface." In Natural Interaction with Robots, Knowbots and Smartphones, 249–59. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-8280-2_22.
"Vocal Interfaces in Supporting and Enhancing Accessibility in Digital Libraries." In The Universal Access Handbook, 709–20. CRC Press, 2009. http://dx.doi.org/10.1201/9781420064995-54.
Conference papers on the topic "Interfaces vocales":
Sena, Claudia P. P., and Celso A. S. Santos. "Desenvolvimento de interfaces multimodais a partir da integração de comandos vocais à interface gráfica." In VII Brazilian symposium. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1298023.1298028.
Tahiroğlu, Koray, and Teemu Ahmaniemi. "Vocal sketching." In International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1891903.1891956.
Gemmeke, Jort F. "The self-taught vocal interface." In 2014 4th Joint Workshop on Hands-free Speech Communication and Microphone Arrays (HSCMA). IEEE, 2014. http://dx.doi.org/10.1109/hscma.2014.6843243.
Gemmeke, Jort F., Siddharth Sehgal, Stuart Cunningham, and Hugo Van hamme. "Dysarthric vocal interfaces with minimal training data." In 2014 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2014. http://dx.doi.org/10.1109/slt.2014.7078582.
Zielasko, Daniel, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen. "A reliable non-verbal vocal input metaphor for clicking." In 2017 IEEE Symposium on 3D User Interfaces (3DUI). IEEE, 2017. http://dx.doi.org/10.1109/3dui.2017.7893316.
Céspedes-Hernández, David, Juan González-Calleros, Josefina Guerrero-García, Jean Vanderdonckt, and Liliana Rodríguez-Vizzuett. "Methodology for the development of vocal user interfaces." In the 4th Mexican Conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2382176.2382184.
Nakano, Tomoyasu, Yuki Koyama, Masahiro Hamasaki, and Masataka Goto. "Autocomplete vocal-fo annotation of songs using musical repetitions." In IUI '19: 24th International Conference on Intelligent User Interfaces. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3308557.3308700.
Guimaraes, Rui, Theologos Athanaselis, Stelios Bakamidis, Ioannis Dologlou, and Stavroula-Evita Fotinea. "A vocal user interface plug-in for jMRUI." In 2010 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2010. http://dx.doi.org/10.1109/ist.2010.5548488.
"AN INNOVATIVE VOCAL INTERFACE FOR AUTOMOTIVE INFORMATION SYSTEMS." In 6th International Conference on Enterprise Information Systems. SciTePress - Science and Technology Publications, 2004. http://dx.doi.org/10.5220/0002653100090014.
Fan, Yuan-Yi, Soyoung Shin, and Vids Samanta. "Evaluating expressiveness of a voice-guided speech re-synthesis system using vocal prosodic parameters." In IUI '19: 24th International Conference on Intelligent User Interfaces. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3308557.3308715.