A selection of scholarly literature on the topic "Interfaces vocales"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Interfaces vocales".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, whenever these details are available in the source's metadata.

Journal articles on the topic "Interfaces vocales":

1

Fagundes Pase, André, Gisele Noll, Mariana Gomes da Fontoura, and Letícia Dallegrave. "Who Controls the Voice? The Journalistic Use and the Informational Domain in Vocal Transactors." Brazilian Journalism Research 16, no. 3 (December 29, 2020): 576–603. http://dx.doi.org/10.25200/bjr.v16n3.2021.1316.

Abstract:
This article aims to understand the transformations caused by new informational ecosystems in contemporary journalism. The analysis is based on news accessed through personal digital assistants embedded in smart speakers. As a methodological procedure, it adopts a multiple case study, defining the vocal transactors of Google (Nest Home/Google Assistant) and Amazon (Echo/Alexa) as its object. The paper notes that the inclusion of algorithmic routines and the extension of news content to intelligent voice interfaces require adaptation for the personalization of information, an ecosystem continually fed by traditional outlets, journalists, and the people who interact with these artifacts.
2

Swoboda, Danya, Jared Boasen, Pierre-Majorique Léger, Romain Pourchon, and Sylvain Sénécal. "Comparing the Effectiveness of Speech and Physiological Features in Explaining Emotional Responses during Voice User Interface Interactions." Applied Sciences 12, no. 3 (January 25, 2022): 1269. http://dx.doi.org/10.3390/app12031269.

Abstract:
The rapid rise of voice user interface technology has changed the way users traditionally interact with interfaces, as tasks requiring gestural or visual attention are replaced by vocal commands. This shift has equally affected designers, who must disregard common digital interface guidelines in order to adapt to non-visual user interaction (No-UI) methods. The guidelines for voice user interface evaluation are far from the maturity of those surrounding digital interface evaluation, resulting in a lack of consensus and clarity. Thus, we sought to contribute to the emerging literature on voice user interface evaluation and, consequently, assist user experience professionals in their quest to create optimal vocal experiences. To do so, we compared the effectiveness of physiological features (e.g., phasic electrodermal activity amplitude) and speech features (e.g., spectral slope amplitude) in predicting the intensity of users' emotional responses during voice user interface interactions. We performed a within-subjects experiment in which the speech, facial expression, and electrodermal activity responses of 16 participants were recorded during voice user interface interactions purposely designed to elicit frustration and shock, resulting in 188 analyzed interactions. Our results suggest that the physiological measure of facial expression and its extracted feature, automatic facial-expression-based valence, is most informative of emotional events experienced during voice user interface interactions. By comparing the unique effectiveness of each feature, the results make both theoretical and practical contributions: they add to the voice user interface literature while providing key insights that favor efficient voice user interface evaluation.
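One of the speech features mentioned above, spectral slope, can be estimated by fitting a line to a frame's log-magnitude spectrum. A minimal sketch under that generic definition (not necessarily the exact formulation used in the paper):

```python
import numpy as np

def spectral_slope(frame: np.ndarray, sr: int) -> float:
    """Fit a line to the log-magnitude spectrum; return its slope (dB/Hz)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    log_mag = 20.0 * np.log10(spectrum + 1e-10)      # magnitude in dB
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)  # bin frequencies in Hz
    slope, _intercept = np.polyfit(freqs, log_mag, deg=1)
    return float(slope)

# Low-pass-filtered noise loses high-frequency energy, so its spectrum
# should slope downward more steeply than that of white noise.
rng = np.random.default_rng(0)
white = rng.standard_normal(2048)
lowpass = np.convolve(white, np.ones(32) / 32, mode="same")
print(spectral_slope(lowpass, 16000) < spectral_slope(white, 16000))  # True
```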
3

Wagner, Amber, and Jeff Gray. "An Empirical Evaluation of a Vocal User Interface for Programming by Voice." International Journal of Information Technologies and Systems Approach 8, no. 2 (July 2015): 47–63. http://dx.doi.org/10.4018/ijitsa.2015070104.

Abstract:
Although Graphical User Interfaces (GUIs) often improve usability, individuals with physical disabilities may be unable to use a mouse and keyboard to navigate through a GUI-based application. In such situations, a Vocal User Interface (VUI) may be a viable alternative. Existing vocal tools (e.g., Vocal Joystick) can be integrated into software applications; however, integrating an assistive technology into a legacy application may require tedious and manual adaptation. Furthermore, the challenges are deeper for an application whose GUI changes dynamically (e.g., based on the context of the program) and evolves with each new application release. This paper provides a discussion of challenges observed while mapping a GUI to a VUI. The context of the authors' examples and evaluation are taken from Myna, which is the VUI that is mapped to the Scratch programming environment. Initial user studies on the effectiveness of Myna are also presented in the paper.
4

Topoleanu, Tudor Sabin, and Gheorghe Leonte Mogan. "Aspects Concerning Voice Cognitive Control Systems for Mobile Robots." Solid State Phenomena 166-167 (September 2010): 427–32. http://dx.doi.org/10.4028/www.scientific.net/ssp.166-167.427.

Abstract:
In this paper we present a general structure of a cognitive control system that allows a mobile robot to behave semi-autonomously while receiving tasks through vocal commands. Furthermore, the paper contains an analysis of human robot interfaces, voice interface systems and cognitive systems. The main purpose is to identify the optimum structure of a mobile robot control platform and determine the outlines within which this solution will be developed. The mobile robot using such a solution will operate in the services and leisure domain, and therefore the specifications for the cognitive system will be tailored to the needs of such applications.
5

Gutiérrez Calderón, Jenny Alejandra, Erika Nathalia Gama Melo, Darío Amaya Hurtado, and Oscar Fernando Avilés Sánchez. "Desarrollo de interfaces para la detección del habla sub-vocal." Revista Tecnura 17, no. 37 (September 18, 2013): 138. http://dx.doi.org/10.14483/udistrital.jour.tecnura.2013.3.a12.

Abstract:
This article surveys the most prominent techniques currently used to detect sub-vocal speech, both in people with cerebral palsy and in commercial applications (for example, enabling communication in noisy places). The methodologies described acquire and process speech signals at different stages of speech production: the methods presented detect and analyze signals from the moment they arise as neural impulses in the brain until they reach the vocal apparatus in the throat, just before being uttered. The quality of acquisition and processing depends on several factors analyzed in the following sections. The first part of the article briefly explains the complete process of voice generation. The techniques for acquiring and analyzing sub-vocal speech signals are then presented, followed by an analysis of the advantages and disadvantages each presents for possible implementation in a device for detecting sub-vocal, or silent, speech. The results of the study show that the NAM (Non-Audible Murmur) microphone is one of the alternatives offering the greatest benefits, not only for signal acquisition and processing but also for the future discrimination of Spanish-language phonemes.
6

Young, Andrea. "The Voice-Index and Digital Voice Interface." Leonardo Music Journal 24 (December 2014): 3–5. http://dx.doi.org/10.1162/lmj_a_00186.

Abstract:
The voice-index is discussed as a conceptual model for creating a live digital voice. Vocal feature extraction employs the voice as a live electronic interface, referenced in the author’s performative work.
7

Carvalho, Diogo Rebel e. "Multiplicidade de recursos vocais, dramáticos e expressivos a partir da análise da obra Sound - para quatro vozes femininas, luz, cena e amplificação - de Luiz Carlos Csekö." Per Musi, no. 41 (September 29, 2021): 1–15. http://dx.doi.org/10.35699/2317-6377.2021.34851.

Abstract:
This article investigates the multiplicity of vocal, dramatic, and expressive resources, the diversity of styles and tendencies at the interfaces between singing and performance, and the expansion of the scenic-musical space, through an analysis of the work Sound, for four female voices, light, scene, and amplification, by the composer Luiz Carlos Csekö. After a brief account of the composer's trajectory, the article presents the strategies Csekö uses to notate his work, such as Hybrid Graphic Notation, Suspended Time, and Interfaces with Multimedia and Intermedia, as well as his use of miking and amplification in his approach to the concept of the electroacoustic amalgam. In addition to Sound, three selected vocal works are discussed in which the composer also explores a wide range of possibilities and particular sound materials, the manipulation of text translated into textures, and indications of scenic elements for the performer.
8

Suzuki, Toshitaka N., David Wheatcroft, and Michael Griesser. "The syntax–semantics interface in animal vocal communication." Philosophical Transactions of the Royal Society B: Biological Sciences 375, no. 1789 (November 18, 2019): 20180405. http://dx.doi.org/10.1098/rstb.2018.0405.

Abstract:
Syntax (rules for combining words or elements) and semantics (meaning of expressions) are two pivotal features of human language, and interaction between them allows us to generate a limitless number of meaningful expressions. While both features were traditionally thought to be unique to human language, research over the past four decades has revealed intriguing parallels in animal communication systems. Many birds and mammals produce specific calls with distinct meanings, and some species combine multiple meaningful calls into syntactically ordered sequences. However, it remains largely unclear whether, like phrases or sentences in human language, the meaning of these call sequences depends on both the meanings of the component calls and their syntactic order. Here, leveraging recently demonstrated examples of meaningful call combinations, we introduce a framework for exploring the interaction between syntax and semantics (i.e. the syntax-semantic interface) in animal vocal sequences. We outline methods to test the cognitive mechanisms underlying the production and perception of animal vocal sequences and suggest potential evolutionary scenarios for syntactic communication. We hope that this review will stimulate phenomenological studies on animal vocal sequences as well as experimental studies on the cognitive processes, which promise to provide further insights into the evolution of language. This article is part of the theme issue ‘What can animal communication teach us about human language?’
9

Metzner, W. "An audio-vocal interface in echolocating horseshoe bats." Journal of Neuroscience 13, no. 5 (May 1, 1993): 1899–915. http://dx.doi.org/10.1523/jneurosci.13-05-01899.1993.

10

Goble, J. R., P. F. Suarez, S. K. Rogers, D. W. Ruck, C. Arndt, and M. Kabrisky. "A facial feature communications interface for the non-vocal." IEEE Engineering in Medicine and Biology Magazine 12, no. 3 (September 1993): 46–48. http://dx.doi.org/10.1109/51.232340.


Dissertations on the topic "Interfaces vocales":

1

Janer, Mestres Jordi. "Singing-driven interfaces for sound synthesizers." Doctoral thesis, Universitat Pompeu Fabra, 2008. http://hdl.handle.net/10803/7550.

Abstract:
Digital musical instruments are usually decomposed into two main constituent parts: a user interface and a sound synthesis engine. The user interface is popularly referred to as a musical controller, and its design is the primary objective of this dissertation. Under the title of singing-driven interfaces, we aim to design systems that allow controlling the synthesis of musical instrument sounds with the singing voice.

This dissertation searches for the relationships between the voice and the sound of musical instruments by addressing both the description of the voice signal and the mapping strategies for a meaningful control of the synthesized sound. We propose two different approaches: one for controlling a singing voice synthesizer, and another for controlling the synthesis of instrumental sounds. For the latter, we suggest representing the voice signal as vocal gestures, contributing several voice analysis methods. To demonstrate the obtained results, we developed two real-time prototypes.
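The voice-to-synthesizer mapping described above can be sketched in miniature: estimate pitch and loudness from a voiced frame, then map them to synthesizer controls. This is an illustrative toy, not the thesis's actual analysis chain; the autocorrelation pitch estimator and the note/amplitude mapping are deliberately simplistic:

```python
import numpy as np

def frame_features(frame: np.ndarray, sr: int) -> tuple[float, float]:
    """Estimate (f0_hz, rms) of a voiced frame via autocorrelation."""
    frame = frame - frame.mean()
    rms = float(np.sqrt(np.mean(frame ** 2)))
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = sr // 500, sr // 60            # search lags for 60-500 Hz
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag, rms

def to_synth_params(f0: float, rms: float) -> dict:
    """Map vocal features to synthesizer controls (MIDI note + amplitude)."""
    midi = 69.0 + 12.0 * np.log2(f0 / 440.0)  # equal-tempered note number
    return {"note": int(round(midi)), "amp": min(1.0, rms * 4)}

sr = 16000
t = np.arange(sr // 10) / sr
voice = 0.2 * np.sin(2 * np.pi * 220.0 * t)   # a 220 Hz "sung" tone
f0, rms = frame_features(voice, sr)
print(to_synth_params(f0, rms)["note"])        # 57 (A3)
```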
2

Srivastava, Brij Mohan Lal. "Anonymisation du locuteur : représentation, évaluation et garanties formelles." Thesis, Université de Lille (2018-2021), 2021. https://pepite-depot.univ-lille.fr/LIBRE/EDMADIS/2021/2021LILUB029.pdf.

Abstract:
Large-scale centralized storage of speech data poses severe privacy threats to the speakers. Indeed, the emergence and widespread usage of voice interfaces, from telephones to mobile applications and now digital assistants, have enabled easier communication between customers and service providers. Massive speech data collection allows its users, for instance researchers, to develop tools for human convenience, like voice passwords for banking, personalized smart speakers, etc. However, centralized storage is vulnerable to cybersecurity threats which, when combined with advanced speech technologies like voice cloning, speaker recognition, and spoofing, may endow a malicious entity with the capability to re-identify speakers and breach their privacy by gaining access to their sensitive biometric characteristics, emotional states, personality attributes, pathological conditions, etc. Individuals and the members of civil society worldwide, and especially in Europe, are becoming aware of this threat. With firm backing by the GDPR, several initiatives are being launched, including the publication of white papers and guidelines, to spread mass awareness and to regulate voice data so that citizens' privacy is protected.

This thesis is a timely effort to bolster such initiatives and proposes solutions to remove the biometric identity of speakers from speech signals, thereby rendering them useless for re-identifying the speakers who spoke them. Besides the goal of protecting the speaker's identity from malicious access, this thesis aims to explore solutions which do so without degrading the usefulness of speech. We present several anonymization schemes based on voice conversion methods to achieve this two-fold objective. The output of such schemes is a high-quality speech signal that is usable for publication and a variety of downstream tasks. All the schemes are subjected to a rigorous evaluation protocol, which is one of the major contributions of this thesis. This protocol led to the finding that previous approaches do not effectively protect privacy, and it thereby directly inspired the VoicePrivacy initiative, an effort to gather individuals, industry, and the scientific community to participate in building a robust anonymization scheme. We introduce a range of anonymization schemes under the purview of the VoicePrivacy initiative and empirically prove their superiority in terms of privacy protection and utility. Finally, we endeavor to remove the residual speaker identity from the anonymized speech signal using techniques inspired by differential privacy. Such techniques provide provable analytical guarantees for the proposed anonymization schemes and open up promising perspectives for future research. In practice, the tools developed in this thesis are an essential component for building trust in any software ecosystem where voice data is stored, transmitted, processed, or published. They aim to help organizations comply with the rules mandated by civil governments and give a choice to individuals who wish to exercise their right to privacy.
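The differential-privacy-inspired step can be illustrated at its smallest scale: release a speaker embedding only after adding Laplace noise scaled by sensitivity/ε. This is a toy sketch; the thesis's schemes operate inside a voice-conversion pipeline, not on raw vectors like this, and the 192-dimensional "x-vector" below is a stand-in:

```python
import numpy as np

def laplace_mechanism(embedding: np.ndarray, sensitivity: float,
                      epsilon: float, rng: np.random.Generator) -> np.ndarray:
    """Release a noisy embedding: Laplace noise with scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    return embedding + rng.laplace(loc=0.0, scale=scale, size=embedding.shape)

rng = np.random.default_rng(42)
xvector = rng.standard_normal(192)   # stand-in speaker embedding
private = laplace_mechanism(xvector, sensitivity=1.0, epsilon=0.5, rng=rng)

# Smaller epsilon means stronger privacy, hence larger distortion.
very_private = laplace_mechanism(xvector, sensitivity=1.0, epsilon=0.05, rng=rng)
print(np.linalg.norm(private - xvector) < np.linalg.norm(very_private - xvector))  # True
```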
3

Murdoch, Michael J. "Nonverbal vocal interface." Link to online version, 2006. https://ritdml.rit.edu/dspace/handle/1850/10346.

4

Hatt, Grégory. "Interface homme-machine intégrant la reconnaissance vocale et l'analyse d'image." Sion, 2008. http://doc.rero.ch/record/12810?ln=fr.

5

Martin, Pierre. "C3i systeme de reconnaissance vocale du chinois moderne (chinese ideograms input interface)." Nice, 1994. http://www.theses.fr/1994NICE4809.

Abstract:
Modern Chinese, or Mandarin, is an ideographic language with nearly 6,000 characters. The need for a phonetic encoding of Chinese characters led linguists in mainland China to devise pinyin, a phonetic alphabet made up of the pronunciations of the ideograms. The written symbol, however, does not reflect the phonetic symbol, and this has always made the design of Chinese character input systems difficult. To overcome this drawback, we propose a speech understanding system adapted to this natural language. Its two modules, the acoustic-phonetic decoder and the linguistic analyzer, were designed around the specific features of the language. The expert approach used in the decoding module, namely phonetic triplets (a sound in context), matches the particular syllable structure of the form consonant + vowel + final; the sounds in context correspond to the central vowels of the pinyin syllables. Exploiting this phonetic expertise requires good detectors of acoustic events, the most important of which is formant tracking, since the central part of a triplet is always a vowel. The tone carried by the syllable is identified from the shape of the fundamental-frequency contour. The decoding module outputs a lattice of phonetic syllables, from which the linguistic analyzer first attempts to determine a lexical representation (words) and then a syntactic representation, possibly incomplete or even multiple, of the original sentence. Modern Chinese, however, lacks morphological inflection: it relies on word order, the function of words in the sentence, and the use of function words to express grammatical notions and categories. The morpho-lexical and syntactic analyzers were built around these particularities. A semantic analysis then validates the sentence by resolving syntactic ambiguities.
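The tone-identification step, reading the tone off the fundamental-frequency contour, can be caricatured with a few slope tests over the two halves of a syllable. A toy sketch with invented thresholds, not the system's actual classifier:

```python
import numpy as np

def classify_tone(f0: np.ndarray) -> int:
    """Guess a Mandarin tone (1-4) from a syllable's F0 contour in Hz."""
    half = len(f0) // 2
    first = np.polyfit(np.arange(half), f0[:half], 1)[0]              # early slope
    second = np.polyfit(np.arange(len(f0) - half), f0[half:], 1)[0]   # late slope
    flat = 0.1 * float(np.mean(f0)) / len(f0)   # "level" threshold, Hz per sample
    if abs(first) < flat and abs(second) < flat:
        return 1                                # tone 1: high level
    if first < 0 and second > 0:
        return 3                                # tone 3: dipping (fall, then rise)
    return 2 if first + second > 0 else 4       # tone 2: rising; tone 4: falling

print(classify_tone(np.linspace(180.0, 260.0, 100)))  # 2 (rising contour)
```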
6

CARNEIRO, Maria Isabel Farias. "Abordagem multidimensional para avaliação da acessibilidade de interfaces vocais considerando a modelagem da incerteza." Universidade Federal de Campina Grande, 2014. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1307.

Abstract:
0 desenvolvimento de interfaces vocais [VUI - Voice User Interface) per se não é uma garantia para um processo interativo de qualidade entre usuários com deficiência visual e sistemas computacionais. Com o intuito de avaliar os problemas de acessibilidade em VUI, a presente pesquisa focalizou a proposição de uma abordagem de avaliação baseada em um conjunto de técnicas já conhecidas pela comunidade de IHC (Interação Homem-Máquina). No tocante a cada técnica utilizada, o problema foi focado a partir de diferentes perspectivas: (i) do usuário, expresso a partir das visões dos usuários sobre o produto, reunidas a partir de uma abordagem de avaliação; (ii) do especialista, expresso sob a forma de análise dos resultados dos desempenhos dos usuários em sessões de teste de acessibilidade; e (iii) da comunidade de acessibilidade, expresso com base em revisões de projeto, a fim de determinar se o projeto da interface está em conformidade com um padrão. Além disso, visando a evidenciar a incerteza associada aos julgamentos do avaliador na inspeção de conformidade do produto, incorporou-se a modelagem de incerteza, a partir da utilização de Redes Bayesianas, possibilitando ao avaliador explicitar os níveis de incerteza associados às inspeções de conformidade do produto a um padrão, por ele realizadas. A abordagem metodológica foi validada a partir de um estudo de caso envolvendo a avaliação da acessibilidade do sistema computacional DOSVOX, desenvolvido na Universidade Federal do Rio de Janeiro (UFRJ), com o objetivo de auxiliar usuários com deficiência visual no uso de sistemas computacionais. No enfoque da inspeção de conformidade, consideraram-se as partes 14 (Diálogos via menus), 17 (Diálogos via preenchimento via formulários) e 171 (Guia de acessibilidade de software) do padrão internacional ISO 9241. 
Por outro lado, nos enfoques da mensuração de desempenho e da sondagem da satisfação subjetiva do usuário, foram realizados testes de acessibilidade, envolvendo um universo amostrai de 100 usuários. Inicialmente, os participantes foram agrupados como cegos (40 usuários), baixa visão (20 usuários) e sem deficiência visual (40 usuários), de acordo com tipo de deficiência visual. Em seguida, eles foram classificados como principiantes (46 usuários) ou intermediários (54 usuários), de acordo com o nível de conhecimento em Informática e de experiência o produto avaliado. Os dados resultantes dos testes de acessibilidade foram processados estatisticamente, a fim de verificar a correlação entre os desempenhos dos grupos de usuários e entre o desempenho das categorias de usuários de cada grupo. O processamento estatístico dos dados evidenciou a inexistência de diferenças significativas entre os desempenhos dos grupos, bem como entre as categorias de usuários. Por outro lado, a confrontação dos resultados dos três enfoques (mensuração de desempenho do usuário, mensuração da satisfação subjetiva do usuário e inspeção de conformidade do produto a padrões) demonstrou que a abordagem de avaliação proposta produziu resultados complementares e reforçou a relevância da utilização de uma abordagem multimétodos para a avaliação de acessibilidade de interfaces vocais.
Voice interaction design per se does not provide quality assurance of the interactive process for visually impaired users. In this dissertation, a method for evaluating voice user interface (VUI) accessibility based upon a set of techniques already well-known to the HCI (Human-Computer Interaction) community is proposed. For each technique, the problem is focused from a different perspective: (i) the user's perspective, which is expressed as views on the product gathered from an inquiry-based approach; (ii) the specialist's perspective, which is expressed by the analysis of the performance results in accessibility testing sessions; and (iii) the accessibility community's perspective, which is expressed by design reviews to determine whether a user interface design conforms to standards. Additionally, Bayesian networks were used in order to make explicit the uncertainty inherent in conformity inspection processes. A case study with DOSVOX system was performed to validate the proposed approach. DOSVOX system was developed at Federal University of Rio de Janeiro (UFRJ) with the aim of helping visually impaired users use the computer. A conformity inspection was performed in accordance with parts 14 (Menu dialogues), 17 (Form-filling dialogues) 171 (Guidance on software accessibility) of ISO 9241. On the other hand, the user performance measurement and the user subjective satisfaction measurement were conducted via accessibility testing. One hundred subjects were enrolled in this study. First, they were categorized as blind (40 users), low vision (20 users) and non-visually impaired (40 users), according to their visual impairment. Second, they were grouped as novices (46 users) and intermediates (54 users), according to their knowledge level in Informatics and experience with the evaluated product. Accessibility test results were statistically analyzed in order to verify the correlation between category performances and between group performances. 
No statistically significant differences between the user categories or the user groups were found. On the other hand, the comparison of the results showed that the three strategies adopted (user performance measurement, user satisfaction measurement, and standards conformity inspection) complement one another, producing data that are significant to the evaluation process and reinforcing the relevance of a multi-method approach to the accessibility evaluation of voice user interfaces.
7

Chapman, Jana Lynn. "BYU Vocal Performance Database." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2146.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The vocal performance database is a tool with which BYU vocal performance faculty and students may practice, assess, and review vocal performances, including practice juries, recitals, and end-of-semester juries. This document describes the process and results of designing, developing, implementing, and evaluating the vocal performance database. Using this tool, vocal performance professors are able to give faster, higher-quality feedback to students following a jury, and students receive legible feedback from their professors in a timely manner.
8

Perrotin, Olivier. "Chanter avec les mains : interfaces chironomiques pour les instruments de musique numériques." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112207/document.

Abstract:
Le travail de cette thèse porte sur l'étude du contrôle en temps réel de synthèse de voix chantée par une tablette graphique dans le cadre de l'instrument de musique numérique Cantor Digitalis. La pertinence de l'utilisation d'une telle interface pour le contrôle de l'intonation vocale a été traitée en premier lieu, démontrant que la tablette permet un contrôle de la hauteur mélodique plus précis que la voix réelle en situation expérimentale. Pour étendre la justesse du jeu à toutes situations, une méthode de correction dynamique de l'intonation a été développée, permettant de jouer en dessous du seuil de perception de justesse et préservant en même temps l'expressivité du musicien. Des évaluations objective et perceptive ont permis de valider l'efficacité de cette méthode. L'utilisation de nouvelles interfaces pour la musique pose la question des modalités impliquées dans le jeu de l'instrument. Une troisième étude révèle une prépondérance de la perception visuelle sur la perception auditive pour le contrôle de l'intonation, due à l'introduction d'indices visuels sur la surface de la tablette. Néanmoins, celle-ci est compensée par l'important pouvoir expressif de l'interface. En effet, la maîtrise de l'écriture ou du dessin dès l'enfance permet l'acquisition rapide d'un contrôle expert de l'instrument. Pour formaliser ce contrôle, nous proposons une suite de gestes adaptés à différents effets musicaux rencontrés dans la musique vocale. Enfin, une pratique intensive de l'instrument est réalisée au sein de l'ensemble Chorus Digitalis à des fins de test et de diffusion. Un travail de recherche artistique est conduit tant dans la mise en scène que dans le choix du répertoire musical à associer à l'instrument. De plus, un retour visuel dédié au public a été développé, afin d'aider à la compréhension du maniement de l'instrument.
This thesis deals with the real-time control of singing voice synthesis by a graphic tablet, based on the digital musical instrument Cantor Digitalis. The relevance of the graphic tablet for intonation control is first considered, showing that the tablet provides more precise pitch control than the real voice under experimental conditions. To extend this accuracy of control to any situation, a dynamic pitch-warping method for intonation correction is developed. It makes it possible to play below the pitch perception limen while preserving the musician's expressivity. Objective and perceptive evaluations validate the method's efficiency. The use of new interfaces for musical expression raises the question of the modalities involved in playing the instrument. A third study reveals a preponderance of the visual modality over auditory perception for intonation control, due to the introduction of visual cues on the tablet surface. Nevertheless, this is compensated by the expressivity allowed by the interface. The writing or drawing ability acquired since early childhood enables quick acquisition of expert control of the instrument. An ensemble of gestures dedicated to the control of different vocal effects is suggested. Finally, intensive practice of the instrument is carried out through the Chorus Digitalis ensemble, to test and promote our work. Artistic research has been conducted on the choice of the Cantor Digitalis' musical repertoire. Moreover, a visual feedback display dedicated to the audience has been developed, extending the perception of the players' pitch and articulation.
9

Dours, Daniel. "Conception d'un système multiprocesseur traitant un flot continu de données en temps réel pour la réalisation d'une interface vocale intelligente." Grenoble 2 : ANRT, 1986. http://catalogue.bnf.fr/ark:/12148/cb375972845.

10

Dours, Daniel. "Conception d'un système multiprocesseur traitant un flot continu de données en temps réel pour la réalisation d'une interface vocale intelligente." Toulouse 3, 1986. http://www.theses.fr/1986TOU30107.

Abstract:
A series of syntactic and semantic transformations for parallelizing an application is defined in the second chapter, yielding a representation of the application in terms of networks of nested modules. A reconfigurable modular architecture suited to this type of representation is described in the third chapter. To map the application onto this architecture, an appropriate language is defined, and a set of means and methods is described for building an interactive tool that searches for the optimal configuration of the multiprocessor system executing the given application. The final part aims to demonstrate the close match between the multiprocessor system thus designed and the modular organization of a voice terminal, and to take a prospective look at the use of such a system in other application domains, in particular vision systems and intelligent robots.

Books on the topic "Interfaces vocales":

1

Cavicchio, Federica, and Emanuela Magno Caldognetto, eds. Aspetti emotivi e relazionali nell'e-learning. Florence: Firenze University Press, 2008. http://dx.doi.org/10.36253/978-88-8453-833-8.

Abstract:
This book investigates the role of emotions and multimodal communication in face-to-face teaching and in e-learning, and assesses the impact of these not merely verbal components on the cognitive processes of the student. It also presents certain types of human-machine interface that use natural language in written, vocal, and multimodal form; the latter implement a new, more human-oriented metaphor of interaction with the computer. This is, therefore, a new and interdisciplinary research theme, one that highlights the technical and theoretical complexity that e-learning specialists and scholars of multimodal communication and emotions must address in order to devise systems of human-computer communication that are more natural and more motivating for learning.
2

Jacobi, Jeffrey. The Vocal Advantage. Prentice Hall, 1996.

Find the full text of the source
3

Jacobi, Jeffrey. The Vocal Advantage. Prentice Hall Trade, 1996.

4

Jacobi, Jeffrey. The Vocal Advantage. Prentice Hall, 1996.

5

Brauth, Steven E., and W. S. Hall. Avian Auditory-Vocal Motor Interfaces (Journal: Brain, Behavior and Evolution, 1994, Vol. 44, No. 4-5). S. Karger Pub, 1994.


Book chapters on the topic "Interfaces vocales":

1

Céspedes-Hernández, David, Juan Manuel González-Calleros, Josefina Guerrero-García, and Liliana Rodríguez-Vizzuett. "Model-Driven Development of Vocal User Interfaces." In Human Computer Interaction, 30–34. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-03068-5_7.

2

Atassi, Hicham, Maria Teresa Riviello, Zdeněk Smékal, Amir Hussain, and Anna Esposito. "Emotional Vocal Expressions Recognition Using the COST 2102 Italian Database of Emotional Speech." In Development of Multimodal Interfaces: Active Listening and Synchrony, 255–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12397-9_21.

3

Murphy, Peter J., and Anne-Maria Laukkanen. "Analysis of Emotional Voice Using Electroglottogram-Based Temporal Measures of Vocal Fold Opening." In Development of Multimodal Interfaces: Active Listening and Synchrony, 286–93. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12397-9_24.

4

Paternò, Fabio, and Christian Sisti. "Deriving Vocal Interfaces from Logical Descriptions in Multi-device Authoring Environments." In Lecture Notes in Computer Science, 204–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13911-6_14.

5

Esposito, Anna, and Alda Troncone. "Emotions and Speech Disorders: Do Developmental Stutters Recognize Emotional Vocal Expressions?" In Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. Theoretical and Practical Issues, 155–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-18184-9_14.

6

Schuller, Gerd, and Susanne Radtke-Schuller. "Midbrain Areas as Candidates for Audio-Vocal Interface in Echolocating Bats." In Animal Sonar, 93–98. Boston, MA: Springer US, 1988. http://dx.doi.org/10.1007/978-1-4684-7493-0_10.

7

Morise, Masanori, Masato Onishi, Hideki Kawahara, and Haruhiro Katayose. "v.morish’09: A Morphing-Based Singing Design Interface for Vocal Melodies." In Lecture Notes in Computer Science, 185–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04052-8_18.

8

Vinciarelli, Alessandro, Hugues Salamin, Gelareh Mohammadi, and Khiet Truong. "More Than Words: Inference of Socially Relevant Information from Nonverbal Vocal Cues in Speech." In Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. Theoretical and Practical Issues, 23–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-18184-9_3.

9

Ons, Bart, Jort F. Gemmeke, and Hugo Van hamme. "Label Noise Robustness and Learning Speed in a Self-Learning Vocal User Interface." In Natural Interaction with Robots, Knowbots and Smartphones, 249–59. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-8280-2_22.

10

"Vocal Interfaces in Supporting and Enhancing Accessibility in Digital Libraries." In The Universal Access Handbook, 709–20. CRC Press, 2009. http://dx.doi.org/10.1201/9781420064995-54.


Conference papers on the topic "Interfaces vocales":

1

Sena, Claudia P. P., and Celso A. S. Santos. "Desenvolvimento de interfaces multimodais a partir da integração de comandos vocais à interface gráfica." In VII Brazilian symposium. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1298023.1298028.

2

Tahiroğlu, Koray, and Teemu Ahmaniemi. "Vocal sketching." In International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1891903.1891956.

3

Gemmeke, Jort F. "The self-taught vocal interface." In 2014 4th Joint Workshop on Hands-free Speech Communication and Microphone Arrays (HSCMA). IEEE, 2014. http://dx.doi.org/10.1109/hscma.2014.6843243.

4

Gemmeke, Jort F., Siddharth Sehgal, Stuart Cunningham, and Hugo Van hamme. "Dysarthric vocal interfaces with minimal training data." In 2014 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2014. http://dx.doi.org/10.1109/slt.2014.7078582.

5

Zielasko, Daniel, Neha Neha, Benjamin Weyers, and Torsten W. Kuhlen. "A reliable non-verbal vocal input metaphor for clicking." In 2017 IEEE Symposium on 3D User Interfaces (3DUI). IEEE, 2017. http://dx.doi.org/10.1109/3dui.2017.7893316.

6

Céspedes-Hernández, David, Juan González-Calleros, Josefina Guerrero-García, Jean Vanderdonckt, and Liliana Rodríguez-Vizzuett. "Methodology for the development of vocal user interfaces." In the 4th Mexican Conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2382176.2382184.

7

Nakano, Tomoyasu, Yuki Koyama, Masahiro Hamasaki, and Masataka Goto. "Autocomplete vocal-fo annotation of songs using musical repetitions." In IUI '19: 24th International Conference on Intelligent User Interfaces. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3308557.3308700.

8

Guimaraes, Rui, Theologos Athanaselis, Stelios Bakamidis, Ioannis Dologlou, and Stavroula-Evita Fotinea. "A vocal user interface plug-in for jMRUI." In 2010 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2010. http://dx.doi.org/10.1109/ist.2010.5548488.

9

"AN INNOVATIVE VOCAL INTERFACE FOR AUTOMOTIVE INFORMATION SYSTEMS." In 6th International Conference on Enterprise Information Systems. SciTePress - Science and and Technology Publications, 2004. http://dx.doi.org/10.5220/0002653100090014.

10

Fan, Yuan-Yi, Soyoung Shin, and Vids Samanta. "Evaluating expressiveness of a voice-guided speech re-synthesis system using vocal prosodic parameters." In IUI '19: 24th International Conference on Intelligent User Interfaces. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3308557.3308715.

