Journal articles on the topic "Interfaces vocales"


Consult the top 50 journal articles for your research on the topic "Interfaces vocales".

Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, where these are available in the publication's metadata.

Browse journal articles from many disciplines and compile your bibliography correctly.

1

Fagundes Pase, André, Gisele Noll, Mariana Gomes da Fontoura, and Letícia Dallegrave. "Who Controls the Voice? The Journalistic Use and the Informational Domain in Vocal Transactors." Brazilian Journalism Research 16, no. 3 (December 29, 2020): 576–603. http://dx.doi.org/10.25200/bjr.v16n3.2021.1316.

Abstract:
This article aims to understand the transformations caused by new informational ecosystems in contemporary journalism. The analysis is based on news accessed through personal digital assistants embedded in smart speakers. As a methodological procedure, it adopts a multiple case study, defining the vocal transactors of Google (Nest Home/Google Assistant) and Amazon (Echo/Alexa) as its object. The paper finds that the inclusion of algorithmic routines and the extension of news content to intelligent voice interfaces require adaptation for the personalization of information, an ecosystem fed back by traditional outlets, journalists, and the people who interact with the artifacts.
2

Swoboda, Danya, Jared Boasen, Pierre-Majorique Léger, Romain Pourchon, and Sylvain Sénécal. "Comparing the Effectiveness of Speech and Physiological Features in Explaining Emotional Responses during Voice User Interface Interactions." Applied Sciences 12, no. 3 (January 25, 2022): 1269. http://dx.doi.org/10.3390/app12031269.

Abstract:
The rapid rise of voice user interface technology has changed the way users traditionally interact with interfaces, as tasks requiring gestural or visual attention are replaced by vocal commands. This shift has equally affected designers, who are required to disregard common digital interface guidelines in order to adapt to non-visual user interaction (No-UI) methods. The guidelines regarding voice user interface evaluation are far from the maturity of those surrounding digital interface evaluation, resulting in a lack of consensus and clarity. Thus, we sought to contribute to the emerging literature regarding voice user interface evaluation and, consequently, assist user experience professionals in their quest to create optimal vocal experiences. To do so, we compared the effectiveness of physiological features (e.g., phasic electrodermal activity amplitude) and speech features (e.g., spectral slope amplitude) to predict the intensity of users’ emotional responses during voice user interface interactions. We performed a within-subjects experiment in which the speech, facial expression, and electrodermal activity responses of 16 participants were recorded during voice user interface interactions that were purposely designed to elicit frustration and shock, resulting in 188 analyzed interactions. Our results suggest that the physiological measure of facial expression and its extracted feature, automatic facial expression-based valence, is most informative of emotional events experienced during voice user interface interactions. By comparing the unique effectiveness of each feature, theoretical and practical contributions may be noted, as the results contribute to the voice user interface literature while providing key insights favoring efficient voice user interface evaluation.
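The spectral-slope feature named in this abstract is easy to illustrate. The following Python sketch (a generic illustration under arbitrary choices of frame length and sample rate, not the authors' pipeline) fits a line to the log-magnitude spectrum of one audio frame:

```python
# Illustrative sketch (not the authors' pipeline): one way to compute a
# spectral-slope feature of the kind compared in the study, using only NumPy.
import numpy as np

def spectral_slope(frame: np.ndarray, sr: int) -> float:
    """Slope of a line fit to the log-magnitude spectrum of one audio frame."""
    windowed = frame * np.hanning(len(frame))
    mag = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    log_mag = 20.0 * np.log10(mag + 1e-10)          # dB, floored to avoid log(0)
    slope, _intercept = np.polyfit(freqs, log_mag, deg=1)
    return slope                                     # dB per Hz; more negative = duller

# Example: a 25 ms frame of synthetic noise at 16 kHz (arbitrary test input)
sr = 16000
frame = np.random.default_rng(0).standard_normal(int(0.025 * sr))
print(f"spectral slope: {spectral_slope(frame, sr):.6f} dB/Hz")
```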
3

Wagner, Amber, and Jeff Gray. "An Empirical Evaluation of a Vocal User Interface for Programming by Voice." International Journal of Information Technologies and Systems Approach 8, no. 2 (July 2015): 47–63. http://dx.doi.org/10.4018/ijitsa.2015070104.

Abstract:
Although Graphical User Interfaces (GUIs) often improve usability, individuals with physical disabilities may be unable to use a mouse and keyboard to navigate through a GUI-based application. In such situations, a Vocal User Interface (VUI) may be a viable alternative. Existing vocal tools (e.g., Vocal Joystick) can be integrated into software applications; however, integrating an assistive technology into a legacy application may require tedious and manual adaptation. Furthermore, the challenges are deeper for an application whose GUI changes dynamically (e.g., based on the context of the program) and evolves with each new application release. This paper provides a discussion of challenges observed while mapping a GUI to a VUI. The context of the authors' examples and evaluation are taken from Myna, which is the VUI that is mapped to the Scratch programming environment. Initial user studies on the effectiveness of Myna are also presented in the paper.
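As a rough illustration of the GUI-to-VUI mapping challenge described here, the sketch below shows the kind of phrase-to-action registry a voice interface must maintain; the command phrases and actions are hypothetical, not taken from Myna's source:

```python
# A minimal sketch of voice-command-to-GUI-action mapping; hypothetical
# commands and actions, not Myna's actual code.
from typing import Callable, Dict

class VoiceCommandRouter:
    def __init__(self) -> None:
        self._commands: Dict[str, Callable[[], None]] = {}

    def register(self, phrase: str, action: Callable[[], None]) -> None:
        self._commands[phrase.lower()] = action

    def dispatch(self, recognized_phrase: str) -> None:
        action = self._commands.get(recognized_phrase.lower())
        if action is None:
            print(f"unrecognized command: {recognized_phrase!r}")
        else:
            action()

# Registrations must be updated whenever the GUI changes -- the maintenance
# burden the paper describes for dynamically evolving interfaces.
router = VoiceCommandRouter()
router.register("move block up", lambda: print("block moved up"))
router.register("run program", lambda: print("program started"))
router.dispatch("move block up")
```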
4

Topoleanu, Tudor Sabin, and Gheorghe Leonte Mogan. "Aspects Concerning Voice Cognitive Control Systems for Mobile Robots." Solid State Phenomena 166-167 (September 2010): 427–32. http://dx.doi.org/10.4028/www.scientific.net/ssp.166-167.427.

Abstract:
In this paper we present a general structure of a cognitive control system that allows a mobile robot to behave semi-autonomously while receiving tasks through vocal commands. Furthermore, the paper contains an analysis of human robot interfaces, voice interface systems and cognitive systems. The main purpose is to identify the optimum structure of a mobile robot control platform and determine the outlines within which this solution will be developed. The mobile robot using such a solution will operate in the services and leisure domain, and therefore the specifications for the cognitive system will be tailored to the needs of such applications.
5

Gutiérrez Calderón, Jenny Alejandra, Erika Nathalia Gama Melo, Darío Amaya Hurtado, and Oscar Fernando Avilés Sánchez. "Desarrollo de interfaces para la detección del habla sub-vocal." Revista Tecnura 17, no. 37 (September 18, 2013): 138. http://dx.doi.org/10.14483/udistrital.jour.tecnura.2013.3.a12.

Abstract:
This article surveys the most prominent techniques currently used for sub-vocal speech detection, both in people with cerebral palsy and for commercial applications (for example, enabling communication in noisy places). The methodologies presented acquire and process speech signals at different stages of speech generation: methods are described that detect and analyze the signals from the moment they are produced as neural impulses in the brain until they reach the vocal apparatus in the throat, just before being pronounced. The quality of acquisition and processing depends on several factors, which are analyzed in the following sections. The first part of the article briefly explains the complete process of voice generation. The techniques for acquiring and analyzing sub-vocal speech signals are then presented, followed by an analysis of their advantages and disadvantages for possible implementation in a device for detecting sub-vocal, or silent, speech. The results of this research show that the NAM (Non-Audible Murmur) microphone is one of the alternatives offering the greatest benefits, not only for signal acquisition and processing but also for the future discrimination of the phonemes of Spanish.
6

Young, Andrea. "The Voice-Index and Digital Voice Interface." Leonardo Music Journal 24 (December 2014): 3–5. http://dx.doi.org/10.1162/lmj_a_00186.

Abstract:
The voice-index is discussed as a conceptual model for creating a live digital voice. Vocal feature extraction employs the voice as a live electronic interface, referenced in the author’s performative work.
7

Carvalho, Diogo Rebel e. "Multiplicidade de recursos vocais, dramáticos e expressivos a partir da análise da obra Sound - para quatro vozes femininas, luz, cena e amplificação - de Luiz Carlos Csekö." Per Musi, no. 41 (September 29, 2021): 1–15. http://dx.doi.org/10.35699/2317-6377.2021.34851.

Abstract:
This article investigates the multiplicity of vocal, dramatic, and expressive resources, the diversity of styles and tendencies at the interfaces between singing and performance, and the expansion of the scenic-musical space, based on an analysis of the work Sound, for four female voices, light, scene, and amplification, by the composer Luiz Carlos Csekö. After a brief account of the composer's career, the article presents the strategies Csekö uses to notate his work, such as Hybrid Graphic Notation, Suspended Time, and Interfaces with Multimedia and Intermedia, as well as the use of miking and amplification in his approach to the concept of the electroacoustic amalgam. In addition to Sound, three selected vocal works are discussed in which the composer likewise explores an enormous range of possibilities and distinctive sound materials, the manipulation of text translated into textures, and indications of scenic elements for the performer.
8

Suzuki, Toshitaka N., David Wheatcroft, and Michael Griesser. "The syntax–semantics interface in animal vocal communication." Philosophical Transactions of the Royal Society B: Biological Sciences 375, no. 1789 (November 18, 2019): 20180405. http://dx.doi.org/10.1098/rstb.2018.0405.

Abstract:
Syntax (rules for combining words or elements) and semantics (meaning of expressions) are two pivotal features of human language, and interaction between them allows us to generate a limitless number of meaningful expressions. While both features were traditionally thought to be unique to human language, research over the past four decades has revealed intriguing parallels in animal communication systems. Many birds and mammals produce specific calls with distinct meanings, and some species combine multiple meaningful calls into syntactically ordered sequences. However, it remains largely unclear whether, like phrases or sentences in human language, the meaning of these call sequences depends on both the meanings of the component calls and their syntactic order. Here, leveraging recently demonstrated examples of meaningful call combinations, we introduce a framework for exploring the interaction between syntax and semantics (i.e. the syntax-semantic interface) in animal vocal sequences. We outline methods to test the cognitive mechanisms underlying the production and perception of animal vocal sequences and suggest potential evolutionary scenarios for syntactic communication. We hope that this review will stimulate phenomenological studies on animal vocal sequences as well as experimental studies on the cognitive processes, which promise to provide further insights into the evolution of language. This article is part of the theme issue ‘What can animal communication teach us about human language?’
9

Metzner, W. "An audio-vocal interface in echolocating horseshoe bats." Journal of Neuroscience 13, no. 5 (May 1, 1993): 1899–915. http://dx.doi.org/10.1523/jneurosci.13-05-01899.1993.

10

Goble, J. R., P. F. Suarez, S. K. Rogers, D. W. Ruck, C. Arndt, and M. Kabrisky. "A facial feature communications interface for the non-vocal." IEEE Engineering in Medicine and Biology Magazine 12, no. 3 (September 1993): 46–48. http://dx.doi.org/10.1109/51.232340.

11

Ogata, Kohichi, Tayuto Kodama, Tomohiro Hayakawa, and Riku Aoki. "Inverse estimation of the vocal tract shape based on a vocal tract mapping interface." Journal of the Acoustical Society of America 145, no. 4 (April 2019): 1961–74. http://dx.doi.org/10.1121/1.5095409.

12

Brower, Virgil W. "Techno-Telepathy & Silent Subvocal Speech-Recognition Robotics: Do Androids Read of Electric Thoughts?" Horizon. Studies in Phenomenology 10, no. 1 (2021): 232–57. http://dx.doi.org/10.21638/2226-5260-2021-10-1-232-257.

Abstract:
The primary focus of this project is the silent and subvocal speech-recognition interface unveiled in 2018 as an ambulatory device wearable on the neck that detects a myoelectrical signature by electrodes worn on the surface of the face, throat, and neck. These emerge from an alleged “intending to speak” by the wearer silently-saying-something-to-oneself. This inner voice is believed to occur while one reads in silence or mentally talks to oneself. The artifice does not require spoken sounds, opening the mouth, or any explicit or external movement of the lips. The essay then considers such subvocal “speech” as a mode of writing or saying and the interior of the mouth or oral cavity as its writing surface. It briefly revisits discussions of telepathy to recontextualize Heidegger’s warning against enframing language exclusively within calculative technics and physiology, which he suggests is detrimental to Mundarten (mouth-modes of regional dialects). It closes in reconsideration of Husserl’s phenomenology of language and meaning in Ideas as it might apply to subvocal speech-recognition interfaces. It suggests ways by which the electrophysiology that the device detects and deciphers (as an alleged intention of a presumed natural language unspoken vocally or aloud) might supplement Husserl’s insinuation of the Leiblichkeit of language through a self-stamping extraction of an extension of meaning.
13

Pardo, Bryan, Mark Cartwright, Prem Seetharaman, and Bongjun Kim. "Learning to Build Natural Audio Production Interfaces." Arts 8, no. 3 (August 29, 2019): 110. http://dx.doi.org/10.3390/arts8030110.

Abstract:
Improving audio production tools provides a great opportunity for meaningful enhancement of creative activities due to the disconnect between existing tools and the conceptual frameworks within which many people work. In our work, we focus on bridging the gap between the intentions of both amateur and professional musicians and the audio manipulation tools available through software. Rather than force nonintuitive interactions, or remove control altogether, we reframe the controls to work within the interaction paradigms identified by research done on how audio engineers and musicians communicate auditory concepts to each other: evaluative feedback, natural language, vocal imitation, and exploration. In this article, we provide an overview of our research on building audio production tools, such as mixers and equalizers, to support these kinds of interactions. We describe the learning algorithms, design approaches, and software that support these interaction paradigms in the context of music and audio production. We also discuss the strengths and weaknesses of the interaction approach we describe in comparison with existing control paradigms.
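To make the "natural language" interaction paradigm concrete, here is a toy Python sketch: a descriptive word indexes a table of equalizer gain curves. In the authors' work such mappings are learned from data; the words and gain values below are invented for illustration:

```python
# Toy sketch of word-based equalization: word -> per-band gain in dB.
# The vocabulary and curves are hypothetical, not the authors' learned models.
import numpy as np

BANDS_HZ = np.array([100, 300, 1000, 3000, 10000])

WORD_TO_EQ = {
    "warm":   np.array([+3.0, +2.0,  0.0, -1.0, -2.0]),
    "bright": np.array([-2.0, -1.0,  0.0, +2.0, +4.0]),
    "muddy":  np.array([+4.0, +3.0, -1.0, -3.0, -4.0]),
}

def eq_for_word(word: str, amount: float = 1.0) -> np.ndarray:
    """Scale the stored curve by an 'amount' slider, as in word-based EQ tools."""
    return amount * WORD_TO_EQ[word]

# Example: apply "warm" at half strength and show gain per band.
print(dict(zip(BANDS_HZ.tolist(), eq_for_word("warm", 0.5).tolist())))
```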
14

Bastianelli, Emanuele, Daniele Nardi, Luigia Carlucci Aiello, Fabrizio Giacomelli, and Nicolamaria Manes. "Speaky for robots: the development of vocal interfaces for robotic applications." Applied Intelligence 44, no. 1 (July 23, 2015): 43–66. http://dx.doi.org/10.1007/s10489-015-0695-5.

15

Holdengreber, Eldad, Roi Yozevitch, and Vitali Khavkin. "Intuitive Cognition-Based Method for Generating Speech Using Hand Gestures." Sensors 21, no. 16 (August 5, 2021): 5291. http://dx.doi.org/10.3390/s21165291.

Abstract:
Muteness at its various levels is a common disability. Most technological solutions to the problem create vocal speech through the transition from mute languages to vocal acoustic sounds. We present a new approach for creating speech: a technology that does not require prior knowledge of sign language. This technology is based on the most basic level of speech according to the phonetic division into vowels and consonants. The speech itself is expressed through sensing of the hand movements, which are divided into three rotations: yaw, pitch, and roll. The proposed algorithm converts these rotations programmatically into vowels and consonants. For the hand movement sensing, we used a depth camera, and standard speakers to produce the sounds. The combination of the programmed depth camera and the speakers, together with the cognitive activity of the brain, is integrated into a unique speech interface. Using this interface, the user can develop speech through an intuitive cognitive process in accordance with the ongoing brain activity, similar to the natural use of the vocal cords. Based on the performance of the presented speech interface prototype, it is substantiated that the proposed device could be a solution for those suffering from speech disabilities.
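The core idea, quantizing three hand rotations into phoneme choices, can be sketched in a few lines of Python. The bin edges and phoneme assignments below are invented for illustration; the paper derives its own mapping from depth-camera data:

```python
# Minimal sketch: quantize yaw/pitch/roll into bins and map combinations to
# phonemes. Bin edges and assignments are hypothetical, not the paper's.
VOWELS = ["a", "e", "i", "o", "u"]
CONSONANTS = ["b", "d", "k", "l", "m", "n", "p", "r", "s", "t"]

def bin_angle(angle_deg: float, n_bins: int) -> int:
    """Quantize an angle in [-90, 90] degrees into n_bins equal bins."""
    clamped = max(-90.0, min(90.0, angle_deg))
    return min(int((clamped + 90.0) / 180.0 * n_bins), n_bins - 1)

def rotations_to_phoneme(yaw: float, pitch: float, roll: float) -> str:
    # Hypothetical convention: roll selects vowel vs. consonant,
    # yaw selects the vowel, yaw+pitch select the consonant.
    if bin_angle(roll, 2) == 0:
        return VOWELS[bin_angle(yaw, len(VOWELS))]
    return CONSONANTS[bin_angle(yaw, 5) * 2 + bin_angle(pitch, 2)]

print(rotations_to_phoneme(-30.0, 10.0, -45.0))  # negative roll selects a vowel
```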
16

Fasciani, Stefano, and Lonce Wyse. "Vocal Control of Sound Synthesis Personalized by Unsupervised Machine Listening and Learning." Computer Music Journal 42, no. 1 (April 2018): 37–59. http://dx.doi.org/10.1162/comj_a_00450.

Abstract:
In this article we describe a user-driven adaptive method to control the sonic response of digital musical instruments using information extracted from the timbre of the human voice. The mapping between heterogeneous attributes of the input and output timbres is determined from data collected through machine-listening techniques and then processed by unsupervised machine-learning algorithms. This approach is based on a minimum-loss mapping that hides any synthesizer-specific parameters and that maps the vocal interaction directly to perceptual characteristics of the generated sound. The mapping adapts to the dynamics detected in the voice and maximizes the timbral space covered by the sound synthesizer. The strategies for mapping vocal control to perceptual timbral features and for automating the customization of vocal interfaces for different users and synthesizers, in general, are evaluated through a variety of qualitative and quantitative methods.
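A compact sketch of the general strategy, under simplifying assumptions: stand-in voice-timbre features are reduced with PCA and mapped by nearest neighbour onto the synthesizer sound whose perceptual features are closest, keeping synthesizer parameters hidden from the user. This is a generic scikit-learn illustration, not the authors' minimum-loss mapping:

```python
# Generic illustration of voice-to-synth mapping; random data stand in for
# machine-listening features. Not the authors' algorithm.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
voice_feats = rng.standard_normal((200, 13))   # e.g., MFCC frames of the user's voice
synth_feats = rng.standard_normal((500, 3))    # perceptual features of sampled synth sounds
synth_params = rng.uniform(size=(500, 8))      # synth parameter settings behind each sound

# Reduce the voice space to the dimensionality of the synth perceptual space.
pca = PCA(n_components=3).fit(voice_feats)
nn = NearestNeighbors(n_neighbors=1).fit(synth_feats)

def voice_to_synth_params(frame: np.ndarray) -> np.ndarray:
    reduced = pca.transform(frame.reshape(1, -1))
    _dist, idx = nn.kneighbors(reduced)
    return synth_params[idx[0, 0]]              # parameters stay hidden from the user

print(voice_to_synth_params(voice_feats[0]))
```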
17

Aboitiz, Francisco. "Voice, gesture and working memory in the emergence of speech." Interaction Studies 19, no. 1-2 (September 17, 2018): 70–85. http://dx.doi.org/10.1075/is.17032.abo.

Abstract:
Language and speech depend on a relatively well defined neural circuitry, located predominantly in the left hemisphere. In this article, I discuss the origin of the speech circuit in early humans, as an expansion of an auditory-vocal articulatory network that took place after the last common ancestor with the chimpanzee. I will attempt to converge this perspective with aspects of the Mirror System Hypothesis, particularly those related to the emergence of a meaningful grammar in human communication. Basically, the strengthening of auditory-vocal connectivity via the arcuate fasciculus and related tracts generated an expansion of working memory capacity for vocalizations, that was key for learning complex utterances. This process was concomitant with the development of a robust interface with visual working memory, both in the dorsal and ventral streams of auditory and visual processing. This enabled the bidirectional translation of sequential codes into hierarchical visual representations, through the development of a multimodal interface between both systems.
18

Levendoski, Elizabeth Erickson, Ciara Leydon, and Susan L. Thibeault. "Vocal Fold Epithelial Barrier in Health and Injury: A Research Review." Journal of Speech, Language, and Hearing Research 57, no. 5 (October 2014): 1679–91. http://dx.doi.org/10.1044/2014_jslhr-s-13-0283.

Abstract:
Purpose: Vocal fold epithelium is composed of layers of individual epithelial cells joined by junctional complexes constituting a unique interface with the external environment. This barrier provides structural stability to the vocal folds and protects underlying connective tissue from injury while being nearly continuously exposed to potentially hazardous insults, including environmental or systemic-based irritants such as pollutants and reflux, surgical procedures, and vibratory trauma. Small disruptions in the epithelial barrier may have a large impact on susceptibility to injury and overall vocal health. The purpose of this article is to provide a broad-based review of current knowledge of the vocal fold epithelial barrier. Method: A comprehensive review of the literature was conducted. Details of the structure of the vocal fold epithelial barrier are presented and evaluated in the context of function in injury and pathology. The importance of the epithelial-associated vocal fold mucus barrier is also introduced. Results/Conclusions: Information presented in this review is valuable for clinicians and researchers as it highlights the importance of this understudied portion of the vocal folds to overall vocal health and disease. Prevention and treatment of injury to the epithelial barrier is a significant area awaiting further investigation.
19

Geringer, John M., and Justine K. Sasanfar. "Listener Perception of Expressivity in Collaborative Performances Containing Expressive and Unexpressive Playing by the Pianist." Journal of Research in Music Education 61, no. 2 (June 13, 2013): 160–74. http://dx.doi.org/10.1177/0022429413485246.

Abstract:
Listener perception of musical expression in collaborative performance was explored in this study. Performances of two duos (a violinist and pianist, and a vocalist and pianist) were recorded. The level of expressivity of the violinist and vocalist remained stylistically appropriate during pieces; however, the pianist alternated between very expressive and unexpressive playing during each performance. The piece performed by each duo contained approximately equal sections of expressive and unexpressive playing by the pianist, and listeners heard each piece twice with the sections juxtaposed. Sixty-six undergraduate and graduate music students turned a Continuous Response Digital Interface dial to indicate their ongoing perception of expressivity as they listened throughout each performance. Graphic analysis of listeners’ responses for both pieces illustrated that they differentiated between sections with expressive and unexpressive playing by the pianist. Statistical analysis revealed that sections in which the pianist played expressively were perceived with significantly higher levels of expressivity than unexpressive sections. We found no significant differences in perceived expressivity between performance experience groups, gender, graduates versus undergraduates, or orders. Thus, in collaborative performances of a vocalist or instrumentalist with a pianist, pianist expressiveness appears to influence perception of overall expressivity.
20

Merker, Bjorn. "The Uneven Interface Between Culture and Biology in Human Music." Music Perception 24, no. 1 (September 1, 2006): 95–98. http://dx.doi.org/10.1525/mp.2006.24.1.95.

Abstract:
Two recent reviews in Music Perception address potential cognitive adaptations for music. In this commentary I sketch a number of connections between issues raised in these reviews and the biology of music more generally. Potential perceptual and cognitive specializations for music are distinguished from those of production, the latter supplying a key adaptation for music in the form of vocal learning. The generative nature of human music is emphasized, as is the potential relevance of nonlinear resonance phenomena in audition and the shaping power of the “learner bottleneck” in cultural transmission for our understanding of the structural content of extant musical forms.
21

M. J., Arpitha, Binduja B., Jahnavi G., and Kusuma Mohanchandra. "Brain Computer Interface for Emergency Virtual Voice." International Journal of Artificial Intelligence 8, no. 1 (June 22, 2021): 40–47. http://dx.doi.org/10.36079/lamintang.ijai-0801.222.

Abstract:
A brain-computer interface (BCI) is a thriving emergent technology that acts as an interface between the brain and an external device. BCI for speech communication is acquiring recognition in various fields. Speech is one of the most natural ways to express thoughts and feelings through articulate vocal sounds. The purpose of this study is to restore the communication ability of people suffering from severe muscular disorders such as amyotrophic lateral sclerosis (ALS), stroke-induced paralysis, locked-in syndrome, tetraplegia, and myasthenia gravis. These patients cannot interact with their environment even though their intellectual capabilities are intact. Our work provides a summary of research articles published in reputed journals, covering published BCI articles, BCI prototypes, bio-signals for BCI, the intent of the articles, target applications, classification techniques, algorithms and methodologies, and BCI system types. The detailed survey thus outlines the available studies and recent results and looks forward to future developments that provide a communication pathway for paralyzed patients to convey their needs.
22

Catarci, Tiziana, Francesco Leotta, Andrea Marrella, Massimo Mecella, and Mahmoud Sharf. "Process-Aware Enactment of Clinical Guidelines through Multimodal Interfaces." Computers 8, no. 3 (September 11, 2019): 67. http://dx.doi.org/10.3390/computers8030067.

Abstract:
Healthcare is one of the largest business segments in the world and is a critical area for future growth. In order to ensure efficient access to medical and patient-related information, hospitals have invested heavily in improving clinical mobile technologies and spreading their use among doctors towards a more efficient and personalized delivery of care procedures. However, there are also indications that their use may have a negative impact on patient-centeredness and often places many cognitive and physical demands on doctors, making them prone to make medical errors. To tackle this issue, in this paper, we present the main outcomes of the project TESTMED, which aimed at realizing a clinical system that provides operational support to doctors using mobile technologies for delivering care to patients, in a bid to minimize medical errors. The system exploits concepts from Business Process Management (BPM) on how to manage a specific class of care procedures, called clinical guidelines, and how to support their execution and mobile orchestration among doctors. To allow a non-invasive interaction of doctors with the system, we leverage the use of touch and vocal user interfaces. A robust user evaluation performed in a real clinical case study shows the usability and effectiveness of the system.
23

Ons, Bart, Jort F. Gemmeke, and Hugo Van hamme. "Fast vocabulary acquisition in an NMF-based self-learning vocal user interface." Computer Speech & Language 28, no. 4 (July 2014): 997–1017. http://dx.doi.org/10.1016/j.csl.2014.03.004.

24

Reimer, Bryan, Linda Angell, David Strayer, Louis Tijerina, and Bruce Mehler. "Evaluating Demands Associated with the Use of Voice-Based In-Vehicle Interfaces." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 60, no. 1 (September 2016): 2083–87. http://dx.doi.org/10.1177/1541931213601472.

Abstract:
This panel addresses current efforts to evaluate the demands associated with the use of voice-based in-vehicle interfaces. As generally implemented, these systems are perhaps best characterized as mixed-mode interfaces drawing upon varying levels of auditory, vocal, visual, manual, and cognitive resources. Numerous efforts have quantified the demands associated with these systems, and several have proposed evaluation methods. However, there has been limited discussion in the scientific literature on the benefits and drawbacks of various measures of workload; appropriate reference points for comparison (i.e., just driving, visual-manual versions of the task one is looking to replace, etc.); the relationship of demand characteristics to safety; and practical design considerations that can be gleaned from efforts to date. Panelists will discuss scientific progress in the topic areas. Each panelist is expected to present a brief perspective followed by discussion and Q&A.
25

Hamel, Marie-Josée. "Les outils de TALN dans SAFRAN." ReCALL 10, no. 1 (May 1998): 79–85. http://dx.doi.org/10.1017/s0958344000004286.

Abstract:
The SAFRAN project aims to develop an interface dedicated to computer-assisted French language teaching, into which natural language processing (NLP) tools are progressively integrated. Within the SAFRAN project, these tools are the FIPSvox syntactic analyser and speech synthesiser, the FR-Tool conceptual electronic dictionary, and the FLEX conjugator. They give access to rich and varied linguistic resources, encourage experimentation, and provide support for diagnosis. Our article reports on two years of scientific activity, in which our efforts focused on the development of a module for teaching French phonetics that integrates the NLP tools mentioned above.
26

Sattler, Gene D., Patricia Sawaya, and Michael J. Braun. "An Assessment of Song Admixture as an Indicator of Hybridization in Black-Capped Chickadees (Poecile Atricapillus) and Carolina Chickadees (P. Carolinensis)." Auk 124, no. 3 (July 1, 2007): 926–44. http://dx.doi.org/10.1093/auk/124.3.926.

Abstract:
Vocal admixture often occurs where differentiated populations or species of birds meet. This may entail song sympatry, bilingually singing birds, and songs with intermediate or atypical characteristics. Different levels of vocal admixture at the range interface between Black-capped Chickadees (Poecile atricapillus) and Carolina Chickadees (P. carolinensis) have been interpreted as indicating that hybridization is frequent at some locations but not others. However, song ontogeny in these birds has a strong nongenetic component, so that inferences regarding hybridization based on vocal admixture require confirmation. We used diagnostic genetic markers and quantitative analyses of song to characterize population samples along two transects of the chickadee contact zone in the Appalachian Mountains. More than 50% of individuals at the range interface were of hybrid ancestry, yet only 20% were observed to be bilingual or to sing atypical songs. Principal component analysis revealed minimal song intermediacy. This result contrasts with an earlier analysis of the hybrid zone in Missouri that found considerable song intermediacy. Re-analysis of the Missouri data confirmed this difference. Correlation between an individual’s genetic composition and its song type was weak in Appalachian hybrid populations, and genetic introgression in both forms extended far beyond the limits of vocal admixture. Therefore, song is not a reliable indicator of levels of hybridization or genetic introgression at this contact zone. Varying ecological factors may play a role in producing variable levels of song admixture in different regions of the range interface.
27

Aoki, Riku, Kohichi Ogata, and Akihiro Taruguchi. "Feasibility study on synthesis of English vowels with a vocal tract mapping interface." Journal of the Acoustical Society of America 140, no. 4 (October 2016): 2960. http://dx.doi.org/10.1121/1.4969142.

28

Hackworth, Rhonda S. "Prevalence of vocal problems: Speech-language pathologists’ evaluation of music and non-music teacher recordings." International Journal of Music Education 31, no. 1 (February 9, 2012): 26–34. http://dx.doi.org/10.1177/0255761411431398.

Abstract:
The current study, a preliminary examination of whether music teachers are more susceptible to vocal problems than teachers of other subjects, asked for expert evaluation of audio recordings from licensed speech-language pathologists. Participants (N = 41) taught music (n = 23) or another subject (n = 18) in either elementary (n = 21), middle (n = 10), or high school (n = 10), and had a mean of 14 years’ teaching experience. Each teacher read a poem while being audio recorded. Nine licensed speech-language pathologists with a mean of 20 years’ clinical experience served as expert evaluators by listening to the 41 recordings while manipulating the Continuous Response Digital Interface (CRDI) dial. Results showed no significant differences between music and non-music teacher evaluations. The individual variations in scores showed no trends for any particular group, but rather pointed out how personal vocal hygiene (care of the voice) is for individual teachers. Suggestions for future research include ways to best help teachers manage individual vocal problems.
29

McMahon, April, Paul Foulkes, and Laura Tollfree. "Gestural representation and Lexical Phonology." Phonology 11, no. 2 (August 1994): 277–316. http://dx.doi.org/10.1017/s0952675700001974.

Abstract:
Recent work on Articulatory Phonology (Browman & Goldstein 1986, 1989, 1991, 1992a, b) raises a number of questions, specifically involving the phonetics–phonology ‘interface’. One advantage of using Articulatory Phonology (henceforth ArtP), with its basic units of abstract gestures based on articulatory movements, is its ability to link phenomena previously seen as phonological to those which are conventionally described as allophonic, or even lower-level phonetic effects, since ‘gestures are... useful primitives for characterising phonological patterns as well as for analysing the activity of the vocal tract articulators’ (Browman & Goldstein 1991: 313). If both phonetics and phonology could ultimately be cast entirely in gestural terms, the phonetics–phonology interface might effectively cease to exist, at least in terms of units of analysis.
30

Lewandowski, Brian, Alexei Vyssotski, Richard H. R. Hahnloser, and Marc Schmidt. "At the interface of the auditory and vocal motor systems: NIf and its role in vocal processing, production and learning." Journal of Physiology-Paris 107, no. 3 (June 2013): 178–92. http://dx.doi.org/10.1016/j.jphysparis.2013.04.001.

31

Ogata, Kohichi, and Tomohiro Hayakawa. "Vowel synthesis using a vocal tract mapping interface and simulation study of inverse mapping." Journal of the Acoustical Society of America 133, no. 5 (May 2013): 3522. http://dx.doi.org/10.1121/1.4806326.

32

Mulwafu, Wapulumuka. "The Interface of Christianity and Conservation in Colonial Malawi, C. 1850-1930." Journal of Religion in Africa 34, no. 3 (2004): 298–319. http://dx.doi.org/10.1163/1570066041725420.

Abstract:
The study of the relationship between religion and the environment in Malawi has only recently begun to be appreciated. Christian missionaries in general did not actively promote the campaign for conservation of resources but some early missionaries frequently evoked biblical images and ideas that had a strong bearing on the perception and management of the environment. Later, certain religious groups were vocal in their support for or opposition to state-sponsored conservation schemes in the colonial period. This paper demonstrates that African religious beliefs and customs equally played a critical role in creating a set of ideas about conservation and the environment. The study is part of an effort to recover some early voices promoting conservation of natural resources in the country. It thus addresses the issues of religion and conservation as critical in the initial encounter between Europeans and Africans.
33

Otto, Randal A., and William Davis. "Functional Electrical Stimulation for the Treatment of Bilateral Recurrent Laryngeal Nerve Paralysis." Otolaryngology–Head and Neck Surgery 95, no. 1 (July 1986): 47–51. http://dx.doi.org/10.1177/019459988609500111.

Abstract:
We have previously presented the concept of electrophysiologic pacing of bilaterally paralyzed vocal cord abductors as a solution to the difficult problem incurred in this clinical situation. Initially, we demonstrated that it was indeed feasible to electrophysiologically pace abduction of the vocal cords synchronously with respiration, employing the EMG activity of the diaphragm as a trigger stimulus. Further research has led us to evaluate other possible physiologic trigger stimuli to ascertain which of these will prove most suitable in long-term pacing studies. In this article, we will report our preliminary results, employing the negative intrathoracic pressure occurring with respiration, as detected by an implanted pressure transducer, as a trigger stimulus. This device was interfaced with a muscle stimulator attached to electrodes placed in the cricoarytenoid muscles in five canines whose recurrent laryngeal nerves had been sectioned bilaterally. In all animals, obvious physiologic synchrony of vocal cord abduction and a reduction of negative inspiratory intratracheal pressure was achieved during electrical pacing. This reinforces our initial findings that it is indeed feasible to pace vocal cord abduction in bilateral recurrent laryngeal nerve paralysis with resultant return of physiologic normality to the glottis. Thus, functional electrical stimulation offers an alternative approach to the difficult problems incurred in the patient with bilateral recurrent laryngeal nerve paralysis. It also demonstrates that physiologic negative intrathoracic pressure activity occurring with inspiration can be a trigger source.
34

Otto, Randal A., Jerry Templer, William Davis, David Homeyer, and Mark Stroble. "Coordinated Electrical Pacing of Vocal Cord Abductors in Recurrent Laryngeal Nerve Paralysis." Otolaryngology–Head and Neck Surgery 93, no. 5 (October 1985): 634–38. http://dx.doi.org/10.1177/019459988509300512.

Abstract:
Electrodes were placed into the posterior cricoarytenoid and diaphragmatic muscles of five tracheostomized dogs. With the use of a sensor that would selectively detect diaphragmatic electromyographic activity, this activity served as a trigger and was amplified and interfaced with a muscle stimulator attached to electrodes placed in the posterior cricoarytenoid muscles. In all animals obvious physiologic synchrony of vocal fold abduction and a reduction of the negative inspiratory intratracheal pressure were observed during electrical pacing. This represents a preliminary step in the development of an alternative approach to the patient with bilateral recurrent laryngeal nerve paralysis.
35

Winstanley, Sue, and Hazel Wright. "Vocal fold contact area patterns in normal speakers: An investigation using the electro-laryngograph interface system." International Journal of Language & Communication Disorders 26, no. 1 (January 1991): 25–39. http://dx.doi.org/10.3109/13682829109011991.

36

Lee, Wookey, Jessica Jiwon Seong, Busra Ozlu, Bong Sup Shim, Azizbek Marakhimov, and Suan Lee. "Biosignal Sensors and Deep Learning-Based Speech Recognition: A Review." Sensors 21, no. 4 (February 17, 2021): 1399. http://dx.doi.org/10.3390/s21041399.

Abstract:
Voice is one of the essential mechanisms for communicating and expressing one’s intentions as a human being. There are several causes of voice inability, including disease, accident, vocal abuse, medical surgery, ageing, and environmental pollution, and the risk of voice loss continues to increase. Because voice loss can seriously undermine quality of life and sometimes leads to isolation from society, novel approaches to speech recognition and production need to be developed. In this review, we survey mouth interface technologies, which are mouth-mounted devices for speech recognition, production, and volitional control, and the corresponding research to develop artificial mouth technologies based on various sensors, including electromyography (EMG), electroencephalography (EEG), electropalatography (EPG), electromagnetic articulography (EMA), permanent magnet articulography (PMA), gyros, images, and 3-axial magnetic sensors, especially with deep learning techniques. We examine in particular the deep learning technologies related to voice recognition, including visual speech recognition and silent speech interfaces, analyze their workflow, and systematize them into a taxonomy. Finally, we discuss methods to solve the communication problems of people with speaking disabilities and future research with respect to deep learning components.
37

Efthimiou, Eleni, Stavroula-Evita Fotinea, Theodore Goulas, Anna Vacalopoulou, Kiki Vasilaki, and Athanasia-Lida Dimou. "Sign Language Technologies and the Critical Role of SL Resources in View of Future Internet Accessibility Services." Technologies 7, no. 1 (January 29, 2019): 18. http://dx.doi.org/10.3390/technologies7010018.

Abstract:
In this paper, we touch upon the requirement for accessibility via Sign Language as regards dynamic composition and exchange of new content in the context of natural language-based human interaction, and also the accessibility of web services and electronic content in written text by deaf and hard-of-hearing individuals. In this framework, one key issue remains the option for composition of signed “text”, along with the ability for the reuse of pre-existing signed “text” by exploiting basic editing facilities similar to those available for written text that serve vocal language representation. An equally critical related issue is accessibility of vocal language text by born or early deaf signers, as well as the use of web-based facilities via Sign Language-supported interfaces, taking into account that the majority of native signers present limited reading skills. It is, thus, demonstrated how Sign Language technologies and resources may be integrated in human-centered applications, enabling web services and content accessibility in the education and an everyday communication context, in order to facilitate integration of signer populations in a societal environment that is strongly defined by smart life style conditions. This potential is also demonstrated by end-user-evaluation results.
38

Killian, Janice N., and Lynn Basinger. "Perception of Choral Blend Among Choral, Instrumental, and Nonmusic Majors Using the Continuous Response Digital Interface." Journal of Research in Music Education 55, no. 4 (December 2007): 313–25. http://dx.doi.org/10.1177/0022429408317373.

Abstract:
The concept of choral blend is often adjudicated but seldom researched. Voice matching to achieve choral blend (placing specific voices next to one another to achieve a blended sound within a section) is frequently recommended. The authors asked participants (N = 55) comprising vocal, instrumental, and nonmusic majors to move a continuous response digital interface dial to indicate judgment of blend quality while listening to voice-matched choral groupings. Graphic analyses indicated general agreement in judgments of good blend and bad blend among all three groups, especially within alto and bass excerpts. Less agreement appeared for soprano and tenor excerpts. Pearson correlations between repeated excerpts were highly positive for vocalists but less consistent for others. Vocalists listened longer before making a judgment. Few group differences in judgment magnitude appeared, but general tendencies toward good blend judgments were evident. Discussion included future research implications and applications for educators.
39

Fairhurst, M. C., and M. Q. Hasan. "Characteristics of user interaction with a communication system for the non-vocal using a hierarchically structured interface." International Journal of Bio-Medical Computing 26, no. 1-2 (July 1990): 53–61. http://dx.doi.org/10.1016/0020-7101(90)90019-q.

40

Lee, Seo-young, Gyuho Lee, Soomin Kim, and Joonhwan Lee. "Expressing Personalities of Conversational Agents through Visual and Verbal Feedback." Electronics 8, no. 7 (July 16, 2019): 794. http://dx.doi.org/10.3390/electronics8070794.

Abstract:
As the uses of conversational agents increase, the affective and social abilities of agents become important with their functional abilities. Agents that lack affective abilities could frustrate users during interaction. This study applied personality to implement the natural feedback of conversational agents referring to the concept of affective computing. Two types of feedback were used to express conversational agents’ personality: (1) visual feedback and (2) verbal cues. For visual feedback, participants (N = 45) watched visual feedback with different colors and motions. For verbal cues, participants (N = 60) heard different conditions of agents’ voices with different scripts. The results indicated that the motions of visual feedback were more significant than colors. Fast motions could express distinct and positive personalities. Different verbal cues were perceived as different personalities. The perceptions of personalities differed according to the vocal gender. This study provided design implications for personality expressions applicable to diverse interfaces.
41

Roy, Arani, and Richard Mooney. "Song Decrystallization in Adult Zebra Finches Does Not Require the Song Nucleus NIf." Journal of Neurophysiology 102, no. 2 (August 2009): 979–91. http://dx.doi.org/10.1152/jn.00293.2009.

Abstract:
In adult male zebra finches, transecting the vocal nerve causes previously stable (i.e., crystallized) song to slowly degrade, presumably because of the resulting distortion in auditory feedback. How and where distorted feedback interacts with song motor networks to induce this process of song decrystallization remains unknown. The song premotor nucleus HVC is a potential site where auditory feedback signals could interact with song motor commands. Although the forebrain nucleus interface of the nidopallium (NIf) appears to be the primary auditory input to HVC, NIf lesions made in adult zebra finches do not trigger song decrystallization. One possibility is that NIf lesions do not interfere with song maintenance, but do compromise the adult zebra finch's ability to express renewed vocal plasticity in response to feedback perturbations. To test this idea, we bilaterally lesioned NIf and then transected the vocal nerve in adult male zebra finches. We found that bilateral NIf lesions did not prevent nerve section–induced song decrystallization. To test the extent to which the NIf lesions disrupted auditory processing in the song system, we made in vivo extracellular recordings in HVC and a downstream anterior forebrain pathway (AFP) in NIf-lesioned birds. We found strong and selective auditory responses to the playback of the birds' own song persisted in HVC and the AFP following NIf lesions. These findings suggest that auditory inputs to the song system other than NIf, such as the caudal mesopallium, could act as a source of auditory feedback signals to the song motor network.
42

Cardin, Jessica A., Jonathan N. Raksin, and Marc F. Schmidt. "Sensorimotor Nucleus NIf Is Necessary for Auditory Processing But Not Vocal Motor Output in the Avian Song System." Journal of Neurophysiology 93, no. 4 (April 2005): 2157–66. http://dx.doi.org/10.1152/jn.01001.2004.

Abstract:
Sensorimotor integration in the avian song system is crucial for both learning and maintenance of song, a vocal motor behavior. Although a number of song system areas demonstrate both sensory and motor characteristics, their exact roles in auditory and premotor processing are unclear. In particular, it is unknown whether input from the forebrain nucleus interface of the nidopallium (NIf), which exhibits both sensory and premotor activity, is necessary for both auditory and premotor processing in its target, HVC. Here we show that bilateral NIf lesions result in long-term loss of HVC auditory activity but do not impair song production. NIf is thus a major source of auditory input to HVC, but an intact NIf is not necessary for motor output in adult zebra finches.
43

Berg, Silvia Maria Pires Cabrera. "A composição original para vozes infantis e infanto-juvenis e suas interfaces com a contemporaneidade, tradição e a literatura popular e neofolclórica." Revista da Tulha 1, no. 2 (December 22, 2015): 444. http://dx.doi.org/10.11606/issn.2447-7117.rt.2015.108800.

Abstract:
The development of choral singing as an essential instrument in music education makes it necessary to investigate the production of didactic-pedagogical materials for beginning, intermediate, and advanced choral singing as a tool for the musical education of children and young people. The demand for pedagogical materials aligned with the growing research in children's vocal technique, building on the interaction of the theoretical and performance areas with music education, points to the need for integrated projects in composition, interpretation, performance, research, and music education. Starting from the proposals of Émile Jaques-Dalcroze (1865–1950), Zoltán Kodály (1882–1967), Boysen, and Enevold, this article surveys the production of such materials, the adaptation of existing materials, and the composition of original materials covering teaching, research, and outreach activities in the three major areas of music and their respective interfaces and interdisciplinarities: composition, interpretation/performance, and research.
44

Subali, Muhammad, Djasiodi Djasri, and Neneng Alawiyah. "MAKHRAJ PENGUCAPAN HURUF HIJAIYAH DALAM BENTUK SIMULASI MODEL TABUNG VOCAL TRACT DARI ALAT UCAP MANUSIA." ALQALAM 31, no. 2 (February 7, 2019): 334. http://dx.doi.org/10.32678/alqalam.v31i2.1403.

Abstract:
When reading the Qur'an, each letter should be pronounced according to its proper point of articulation (makhraj). A mistake in the pronunciation of a letter or makhraj can change the meaning of that letter. The sound elements of Arabic are therefore very important to learn, so that pronunciation follows the established rules of the Arabic language. The purpose of this research is to analyze the pattern of frequencies, called formants, for each pronounced hijaiyah letter, which characterizes its proper pronunciation. Data were obtained by recording expert qori and qoriah (Qur'an reciters), then processed with the software "Praat" to extract the formant frequencies, which were then analyzed. Furthermore, the data were processed into a Graphical User Interface (GUI) using MatLab, yielding a pronunciation pattern for each letter in the form of a tube resonator model that expresses the articulation pattern. Keywords: makhraj, resonator, formant, phoneme
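The authors extract formants with Praat and visualize them in a MatLab GUI. As a generic illustration of the same measurement, the Python sketch below estimates formant candidates from the roots of an LPC polynomial (the test signal and LPC order are arbitrary choices here, not the authors' pipeline):

```python
# Standard LPC-based formant estimation; a generic sketch, not the paper's
# Praat/MatLab workflow.
import numpy as np
import librosa

def estimate_formants(y: np.ndarray, sr: int, order: int = 12) -> list:
    """Return candidate formant frequencies (Hz) from LPC polynomial roots."""
    a = librosa.lpc(y, order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]   # one of each conjugate pair
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    return [f for f in freqs if f > 90.0]                # drop near-DC roots

sr = 16000
t = np.arange(int(0.03 * sr)) / sr
# Crude stand-in for a vowel: two resonances near 700 Hz and 1200 Hz.
y = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
print([round(f) for f in estimate_formants(y.astype(float), sr)])
```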
45

Tosin, Giuliano. "Poesia sonora no Brasil e no mundo:." FronteiraZ. Revista do Programa de Estudos Pós-Graduados em Literatura e Crítica Literária, no. 27 (December 15, 2021): 50–64. http://dx.doi.org/10.23925/1983-4373.2021i27p50-64.

Abstract:
Sound poetry is a specific genre of experimental poetry that explores the wide range of uses of the human phonetic apparatus, reaching back to levels prior to the intonations of speech. This article opens with a conceptual presentation of sound poetry and then builds a historical retrospective from Dadaism through Lettrism and Ultra-Lettrism, demonstrating the predominantly European roots of an art form that subsequently became international. A few decades later, the advent of magnetic-tape audio recorders directly affected sound poetry, which began to combine vocal expression with technological resources. This possibility intensified with the spread of digital resources, both hardware (computers, interfaces, and sound cards) and software, which became increasingly accessible. Despite a late start, Brazilian sound poetry has a considerable body of work, which is also addressed in this article.
46

Robbins, Mark B., Michael J. Braun, and Emily A. Tobey. "Morphological and Vocal Variation Across a Contact Zone between the Chickadees Parus Atricapillus and P. Carolinensis." Auk 103, no. 4 (October 1, 1986): 655–66. http://dx.doi.org/10.1093/auk/103.4.655.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
A contact zone between Black-capped and Carolina chickadees (Parus atricapillus and P. carolinensis) exists in southwestern Missouri. The zone was less than 15 km wide and paralleled the interface between the relatively treeless Great Plains and the forested Ozark Plateau. Many birds in this zone were intermediate in morphology, vocalizations, or both. Moreover, both the morphological and the vocal discriminant analysis scores of contact-zone birds were unimodally distributed, and there was no correlation between the morphological discriminant scores of mated males and females in the contact zone, indicating little or no assortative mating. Playback experiments demonstrated that birds to the north or south of the contact zone responded aggressively only to their own song type, while contact-zone birds responded to either song type. We believe that the southwestern Missouri contact-zone populations are derived from extensive hybridization between atricapillus and carolinensis.
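As a brief methodological aside: the "discriminant scores" referred to above come from linear discriminant analysis, which projects multivariate measurements onto the single axis that best separates the two parental species. A minimal sketch with made-up measurements (not the study's data):

```python
# Toy illustration of a morphological discriminant score (fabricated
# numbers, not the study's data): fit a linear discriminant on birds
# from outside the contact zone, then score contact-zone birds along
# the resulting axis.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
atricapillus = rng.normal([132.0, 65.0], 2.0, size=(40, 2))  # wing, tail (mm)
carolinensis = rng.normal([125.0, 58.0], 2.0, size=(40, 2))
X = np.vstack([atricapillus, carolinensis])
y = np.array([0] * 40 + [1] * 40)

lda = LinearDiscriminantAnalysis().fit(X, y)
contact_zone = rng.normal([128.5, 61.5], 2.5, size=(20, 2))  # intermediates
scores = lda.transform(contact_zone).ravel()   # one score per bird
print(scores.mean(), scores.std())
```

A unimodal distribution of such scores among contact-zone birds, as the authors report, is what extensive hybridization would produce; two parental clusters would instead yield a bimodal distribution.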
47

Hahnloser, Richard H. R., and Michale S. Fee. "Sleep-Related Spike Bursts in HVC Are Driven by the Nucleus Interface of the Nidopallium." Journal of Neurophysiology 97, no. 1 (January 2007): 423–35. http://dx.doi.org/10.1152/jn.00547.2006.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The function and origin of the replay of motor activity during sleep are currently unknown. Spontaneous activity patterns in the nucleus robustus of the arcopallium (RA) and in HVC (high vocal center) of the sleeping songbird resemble the premotor patterns observed in these areas during singing. We test the hypothesis that the nucleus interface of the nidopallium (NIf) plays an important role in initiating and shaping these sleep-related activity patterns. In head-fixed, sleeping zebra finches, we find that injections of the GABAA agonist muscimol into NIf lead to a transient abolition of premotor-like bursting activity in HVC neurons. Using antidromic activation of NIf neurons by electrical stimulation in HVC, we are able to distinguish a class of HVC-projecting NIf neurons from a second class of NIf neurons. Paired extracellular recordings in NIf and HVC show that NIf neurons provide a strong bursting drive to HVC. In contrast to HVC neurons, whose bursting activity waxes and wanes in burst epochs, individual NIf projection neurons burst nearly continuously and tend to burst only once on the timescale of song syllables. Two types of HVC projection neurons (premotor and striatal-projecting) respond differently to the NIf drive, in agreement with the notion that HVC relays premotor signals to RA and an anticipatory copy thereof to areas of a basal ganglia pathway.
48

Su, Yue, Kainan Ma, Xu Zhang, and Ming Liu. "Neural Network-Enabled Flexible Pressure and Temperature Sensor with Honeycomb-like Architecture for Voice Recognition." Sensors 22, no. 3 (January 19, 2022): 759. http://dx.doi.org/10.3390/s22030759.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Flexible pressure sensors have been studied as wearable voice-recognition devices for use in human-machine interaction. However, the development of highly sensitive, skin-attachable, and comfortable sensing devices that achieve clear voice detection remains a considerable challenge. Herein, we present a wearable, flexible pressure and temperature sensor with a sensitive response to vibration, which can accurately recognize the human voice when combined with an artificial neural network. The device consists of a polyethylene terephthalate (PET) film printed with a silver electrode, a filament-microstructured polydimethylsiloxane (PDMS) film embedded with single-walled carbon nanotubes, and a polyimide (PI) film sputtered with a patterned Ti/Pt thermistor strip. The developed pressure sensor exhibits a pressure sensitivity of 0.398 kPa⁻¹ in the low-pressure regime, and the fabricated temperature sensor shows a desirable temperature coefficient of resistance of 0.13%/°C in the range of 25 °C to 105 °C. By training and testing a neural network model on the sensor's waveform data obtained from human pronunciation, the vocal fold vibrations corresponding to different words can be successfully recognized, with a total recognition accuracy of 93.4%. Our results suggest that the fabricated sensor has substantial potential for application in human-computer interface fields such as voice control, vocal healthcare monitoring, and voice authentication.
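To illustrate the recognition stage, a small one-dimensional convolutional network over fixed-length waveform windows is one plausible form such a classifier could take. The sketch below is not the authors' model; the window length (1024 samples) and vocabulary size (10 words) are assumptions.

```python
# Illustrative sketch (not the authors' architecture) of a small 1-D CNN
# that classifies words from fixed-length sensor waveform windows.
import torch
import torch.nn as nn

class VoiceNet(nn.Module):
    def __init__(self, n_words=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),            # -> (batch, 32, 8)
        )
        self.classifier = nn.Linear(32 * 8, n_words)

    def forward(self, x):                       # x: (batch, 1, 1024)
        h = self.features(x).flatten(1)         # pooled features
        return self.classifier(h)               # word logits

model = VoiceNet()
logits = model(torch.randn(4, 1, 1024))         # 4 dummy waveform windows
print(logits.shape)                             # torch.Size([4, 10])
```

In practice the windows would be vibration traces recorded from the throat-mounted sensor, segmented per utterance and labeled by word, with the 93.4% accuracy figure reported on held-out test data.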
49

Naie, Katja, and Richard H. R. Hahnloser. "Regulation of learned vocal behavior by an auditory motor cortical nucleus in juvenile zebra finches." Journal of Neurophysiology 106, no. 1 (July 2011): 291–300. http://dx.doi.org/10.1152/jn.01035.2010.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
In the process of song learning, songbirds such as the zebra finch shape their initially soft and poorly formed vocalizations (subsong) first into variable plastic songs with a discernible recurring motif and then into highly stereotyped adult songs. A premotor brain area critically involved in plastic and adult song production is the cortical nucleus HVC. One of HVC's primary afferents, the nucleus interface of the nidopallium (NIf), provides a significant source of auditory input to HVC. However, the premotor involvement of NIf has not yet been studied extensively. Here we report that brief, reversible pharmacological inactivation of NIf in juvenile birds leads to a transient degradation of plastic song toward subsong, as revealed by spectral and temporal song features. No such song degradation is seen following NIf inactivation in adults. However, in both juveniles and adults, NIf inactivation leads to a transient decrease in song stereotypy. Our findings reveal a contribution of NIf to song production in juveniles that agrees with its known role in adults of mediating thalamic drive to downstream vocal motor areas during sleep.
50

Boutet, Dominique, Claudia S. Bianchini, Patrick Doan, Léa Chèvrefils-Desbiolles, Chloé Thomas, Morgane Rébulard, Adrien Contesse, Claire Danet, Jean-François Dauphin, and Mathieu Réguer. "Réflexions sur la formalisation, en tant que système, d’une transcription des formes des Langues des Signes : l’approche Typannot." SHS Web of Conferences 78 (2020): 11001. http://dx.doi.org/10.1051/shsconf/20207811001.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Transcribing sign languages (SL) requires taking their gestural nature into consideration and understanding the reasons (among them, a focus on the hands and the use of an egocentric frame of reference) why typographic systems such as HamNoSys and SignWriting have failed to establish themselves as transcription tools. Gesture sets all segments of the upper limb in motion according to their degrees of freedom, within particular ranges of amplitude, and through a series of intrinsic frames of reference centered on each segment. Typannot, the typographic system presented here, is based on these characteristics and integrates them into a structural hierarchy that provides an informational level corresponding to features (characters), assembled into a level of composed glyphs. Typannot's ease of use is ensured by adherence to four design principles (genericity, modularity, readability, and writability) and by an input interface that manages these levels of information. To illustrate the use of Typannot, analyses were conducted on handshapes and the hand's own placement, showing the influence of praxic gesture on symbolic gesture (i.e., the signs of SLs). Given the still very considerable annotation time such a transcription system requires, the goal is to enable "gestural dictation", that is, direct transcription from motion-capture data. This perspective should also facilitate the transcription of all so-called co-verbal gestures in any vocal language.
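The two-level hierarchy described above (feature-level characters assembled into composed glyphs) can be pictured with a small data-structure sketch; the feature names, values, and code points below are hypothetical simplifications, not Typannot's actual inventory:

```python
# Hypothetical sketch of Typannot's two-level idea: low-level feature
# characters (one per articulatory trait) compose into a single glyph.
# Feature names, values, and code points are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class HandshapeFeatures:
    thumb: str      # e.g. "opposed"
    fingers: str    # e.g. "flexed"
    spread: str     # e.g. "closed"

    def to_glyph(self) -> str:
        # Each feature value maps to one character; in the real system
        # the font's ligature rules render the sequence as one glyph.
        table = {"opposed": "\u2460", "flexed": "\u2461", "closed": "\u2462"}
        return "".join(table[v] for v in (self.thumb, self.fingers, self.spread))

fist = HandshapeFeatures(thumb="opposed", fingers="flexed", spread="closed")
print(fist.to_glyph())   # three feature characters, ligated by the font
```

Keeping the features searchable as individual characters while displaying them as one composed glyph is what the abstract's "genericity, modularity, readability, and writability" principles aim to reconcile.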
