
Journal articles on the topic "Computational auditory scene analysis"



Consult the top 50 journal articles for your research on the topic "Computational auditory scene analysis".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Brown, Guy J., and Martin Cooke. "Computational auditory scene analysis." Computer Speech & Language 8, no. 4 (1994): 297–336. http://dx.doi.org/10.1006/csla.1994.1016.

2

Alain, Claude, and Lori J. Bernstein. "Auditory Scene Analysis." Music Perception 33, no. 1 (2015): 70–82. http://dx.doi.org/10.1525/mp.2015.33.1.70.

Abstract
Albert Bregman’s (1990) book Auditory Scene Analysis: The Perceptual Organization of Sound has had a tremendous impact on research in auditory neuroscience. Here, we outline some of the accomplishments. This review is not meant to be exhaustive, but rather aims to highlight milestones in the brief history of auditory neuroscience. The steady increase in neuroscience research following the book’s pivotal publication has advanced knowledge about how the brain forms representations of auditory objects. This research has far-reaching societal implications on health and quality of life. For instance…
3

Brown, Guy J. "Computational auditory scene analysis: A representational approach." Journal of the Acoustical Society of America 94, no. 4 (1993): 2454. http://dx.doi.org/10.1121/1.407441.

4

Lewicki, Michael S., Bruno A. Olshausen, Annemarie Surlykke, and Cynthia F. Moss. "Computational issues in natural auditory scene analysis." Journal of the Acoustical Society of America 137, no. 4 (2015): 2249. http://dx.doi.org/10.1121/1.4920202.

5

Niessen, Maria E., Ronald A. Van Elburg, Dirkjan J. Krijnders, and Tjeerd C. Andringa. "A computational model for auditory scene analysis." Journal of the Acoustical Society of America 123, no. 5 (2008): 3301. http://dx.doi.org/10.1121/1.2933719.

6

Nakadai, Kazuhiro, and Hiroshi G. Okuno. "Robot Audition and Computational Auditory Scene Analysis." Advanced Intelligent Systems 2, no. 9 (2020): 2000050. http://dx.doi.org/10.1002/aisy.202000050.

7

McMullin, Margaret A., Rohit Kumar, Nathan C. Higgins, Brian Gygi, Mounya Elhilali, and Joel S. Snyder. "Preliminary Evidence for Global Properties in Human Listeners During Natural Auditory Scene Perception." Open Mind 8 (2024): 333–65. http://dx.doi.org/10.1162/opmi_a_00131.

Abstract
Theories of auditory and visual scene analysis suggest the perception of scenes relies on the identification and segregation of objects within it, resembling a detail-oriented processing style. However, a more global process may occur while analyzing scenes, which has been evidenced in the visual domain. It is our understanding that a similar line of research has not been explored in the auditory domain; therefore, we evaluated the contributions of high-level global and low-level acoustic information to auditory scene perception. An additional aim was to increase the field’s ecological…
8

Godsmark, Darryl, and Guy J. Brown. "A blackboard architecture for computational auditory scene analysis." Speech Communication 27, no. 3-4 (1999): 351–66. http://dx.doi.org/10.1016/s0167-6393(98)00082-x.

9

Kondo, Hirohito M., Anouk M. van Loon, Jun-Ichiro Kawahara, and Brian C. J. Moore. "Auditory and visual scene analysis: an overview." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (2017): 20160099. http://dx.doi.org/10.1098/rstb.2016.0099.

Abstract
We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the…
10

Cooke, M. P., and G. J. Brown. "Computational auditory scene analysis: Exploiting principles of perceived continuity." Speech Communication 13, no. 3-4 (1993): 391–99. http://dx.doi.org/10.1016/0167-6393(93)90037-l.

11

Shao, Yang, and DeLiang Wang. "Sequential organization of speech in computational auditory scene analysis." Speech Communication 51, no. 8 (2009): 657–67. http://dx.doi.org/10.1016/j.specom.2009.02.003.

12

Fodróczi, Zoltán, and András Radványi. "Computational auditory scene analysis in cellular wave computing framework." International Journal of Circuit Theory and Applications 34, no. 4 (2006): 489–515. http://dx.doi.org/10.1002/cta.362.

13

Bregman, Albert S. "Progress in Understanding Auditory Scene Analysis." Music Perception 33, no. 1 (2015): 12–19. http://dx.doi.org/10.1525/mp.2015.33.1.12.

Abstract
In this paper, I make the following claims: (1) Subjective experience is tremendously useful in guiding productive research. (2) Studies of auditory scene analysis (ASA) in adults, newborn infants, and non-human animals (e.g., in goldfish or pigeons) establish the generality of ASA and suggest that it has an innate foundation. (3) ASA theory does not favor one musical style over another. (4) The principles used in the composition of polyphony (slightly modified) apply not only to one particular musical style or culture but to any form of layered music. (5) Neural explanations of ASA do not sup…
14

Crawford, Malcolm, Martin Cooke, and Guy Brown. "Interactive computational auditory scene analysis: An environment for exploring auditory representations and groups." Journal of the Acoustical Society of America 93, no. 4 (1993): 2308. http://dx.doi.org/10.1121/1.406432.

15

Hongyan, Li, Cao Meng, and Wang Yue. "Separation of Reverberant Speech Based on Computational Auditory Scene Analysis." Automatic Control and Computer Sciences 52, no. 6 (2018): 561–71. http://dx.doi.org/10.3103/s0146411618060068.

16

Kawamoto, Mitsuru. "Sound-Environment Monitoring Method Based on Computational Auditory Scene Analysis." Journal of Signal and Information Processing 8, no. 2 (2017): 65–77. http://dx.doi.org/10.4236/jsip.2017.82005.

17

Cooke, Martin, Guy J. Brown, Malcolm Crawford, and Phil Green. "Computational auditory scene analysis: listening to several things at once." Endeavour 17, no. 4 (1993): 186–90. http://dx.doi.org/10.1016/0160-9327(93)90061-7.

18

McLachlan, Neil, Dinesh Kant Kumar, and John Becker. "Wavelet Classification of Indoor Environmental Sound Sources." International Journal of Wavelets, Multiresolution and Information Processing 4, no. 1 (2006): 81–96. http://dx.doi.org/10.1142/s0219691306001105.

Abstract
Computational auditory scene analysis (CASA) has been attracting growing interest since the publication of Bregman's text on human auditory scene analysis, and is expected to find many applications in data retrieval, autonomous robots, security and environmental analysis. This paper reports on the use of Fourier transforms and wavelet transforms to produce spectral data of sounds from different sources for classification by neural networks. It was found that the multiresolution time-frequency analyses of wavelet transforms dramatically improved classification accuracy when statistical descript…
19

Haykin, Simon, and Zhe Chen. "The Cocktail Party Problem." Neural Computation 17, no. 9 (2005): 1875–902. http://dx.doi.org/10.1162/0899766054322964.

Abstract
This review presents an overview of a challenging problem in auditory perception, the cocktail party phenomenon, the delineation of which goes back to a classic paper by Cherry in 1953. In this review, we address the following issues: (1) human auditory scene analysis, which is a general process carried out by the auditory system of a human listener; (2) insight into auditory perception, which is derived from Marr's vision theory; (3) computational auditory scene analysis, which focuses on specific approaches aimed at solving the machine cocktail party problem; (4) active audition, the proposal…
20

Drake, Laura A., Janet C. Rutledge, and Aggelos Katsaggelos. "Computational auditory scene analysis‐constrained array processing for sound source separation." Journal of the Acoustical Society of America 106, no. 4 (1999): 2238. http://dx.doi.org/10.1121/1.427622.

21

Kashino, Makio, Eisuke Adachi, and Haruto Hirose. "A computational model for the dynamic aspects of primitive auditory scene analysis." Journal of the Acoustical Society of America 131, no. 4 (2012): 3230. http://dx.doi.org/10.1121/1.4708046.

22

Hu, Ying, and Guizhong Liu. "Singer identification based on computational auditory scene analysis and missing feature methods." Journal of Intelligent Information Systems 42, no. 3 (2013): 333–52. http://dx.doi.org/10.1007/s10844-013-0271-6.

23

Kaya, Emine Merve, and Mounya Elhilali. "Modelling auditory attention." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (2017): 20160101. http://dx.doi.org/10.1098/rstb.2016.0101.

Abstract
Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information—a phenomenon referred to as the ‘cocktail party problem’. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a mu…
24

Cichy, Radoslaw Martin, and Santani Teng. "Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (2017): 20160108. http://dx.doi.org/10.1098/rstb.2016.0108.

Abstract
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique p…
25

Bregman, Albert S. "Constraints on computational models of auditory scene analysis, as derived from human perception." Journal of the Acoustical Society of Japan (E) 16, no. 3 (1995): 133–36. http://dx.doi.org/10.1250/ast.16.133.

26

Shao, Yang, Soundararajan Srinivasan, Zhaozhang Jin, and DeLiang Wang. "A computational auditory scene analysis system for speech segregation and robust speech recognition." Computer Speech & Language 24, no. 1 (2010): 77–93. http://dx.doi.org/10.1016/j.csl.2008.03.004.

27

Zeremdini, Jihen, Mohamed Anouar Ben Messaoud, and Aicha Bouzid. "A comparison of several computational auditory scene analysis (CASA) techniques for monaural speech segregation." Brain Informatics 2, no. 3 (2015): 155–66. http://dx.doi.org/10.1007/s40708-015-0016-0.

28

Andringa, Tjeerd C. "Appraising the Sonic Environment: A Conceptual Framework for Perceptual, Computational, and Cognitive Requirements." Behavioral Sciences 15, no. 6 (2025): 797. https://doi.org/10.3390/bs15060797.

Abstract
This paper provides a conceptual framework for soundscape appraisal as a key outcome of the hearing process. Sound appraisal involves auditory sense-making and produces the soundscape as the perceived and understood acoustic environment. The soundscape exists in the experiential domain and involves meaning-giving. Soundscape research has reached a consensus about the relevance of two experiential dimensions—pleasure and eventfulness—which give rise to four appraisal quadrants: calm, lively/vibrant, chaotic, and boring/monotonous. Requirements for and constraints on the hearing and appraisal pr…
29

Hupé, Jean-Michel, and Daniel Pressnitzer. "The initial phase of auditory and visual scene analysis." Philosophical Transactions of the Royal Society B: Biological Sciences 367, no. 1591 (2012): 942–53. http://dx.doi.org/10.1098/rstb.2011.0368.

Abstract
Auditory streaming and visual plaids have been used extensively to study perceptual organization in each modality. Both stimuli can produce bistable alternations between grouped (one object) and split (two objects) interpretations. They also share two peculiar features: (i) at the onset of stimulus presentation, organization starts with a systematic bias towards the grouped interpretation; (ii) this first percept has ‘inertia’; it lasts longer than the subsequent ones. As a result, the probability of forming different objects builds up over time, a landmark of both behavioural and neurophysiological…
30

Darwin, Chris. Review of Computational Auditory Scene Analysis: Principles, Algorithms and Applications, by DeLiang Wang and Guy J. Brown. Wiley-IEEE Press, Hoboken, N.J., 2006. xxiii+395 pp. $95.50 (hardcover), ISBN: 0471741094. Journal of the Acoustical Society of America 124, no. 1 (2008): 13. http://dx.doi.org/10.1121/1.2920958.

31

Li, P., Y. Guan, B. Xu, and W. Liu. "Monaural Speech Separation Based on Computational Auditory Scene Analysis and Objective Quality Assessment of Speech." IEEE Transactions on Audio, Speech and Language Processing 14, no. 6 (2006): 2014–23. http://dx.doi.org/10.1109/tasl.2006.883258.

32

Park, J. H., J. S. Yoon, and H. K. Kim. "HMM-Based Mask Estimation for a Speech Recognition Front-End Using Computational Auditory Scene Analysis." IEICE Transactions on Information and Systems E91-D, no. 9 (2008): 2360–64. http://dx.doi.org/10.1093/ietisy/e91-d.9.2360.

33

Veale, Richard, Ziad M. Hafed, and Masatoshi Yoshida. "How is visual salience computed in the brain? Insights from behaviour, neurobiology and modelling." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (2017): 20160113. http://dx.doi.org/10.1098/rstb.2016.0113.

Abstract
Inherent in visual scene analysis is a bottleneck associated with the need to sequentially sample locations with foveating eye movements. The concept of a ‘saliency map’ topographically encoding stimulus conspicuity over the visual scene has proven to be an efficient predictor of eye movements. Our work reviews insights into the neurobiological implementation of visual salience computation. We start by summarizing the role that different visual brain areas play in salience computation, whether at the level of feature analysis for bottom-up salience or at the level of goal-directed priority map…
34

McElveen, J. K., Leonid Krasny, and Scott Nordlund. "Applying matched field array processing and machine learning to computational auditory scene analysis and source separation challenges." Journal of the Acoustical Society of America 151, no. 4 (2022): A232. http://dx.doi.org/10.1121/10.0011162.

Abstract
Matched field processing (MFP) techniques employing physics-based models of acoustic propagation have been successfully and widely applied to underwater target detection and localization, while machine learning (ML) techniques have enabled detection and extraction of patterns in data. Fusing MFP and ML enables the estimation of Green’s Function solutions to the Acoustic Wave Equation for waveguides from data captured in real, reverberant acoustic environments. These Green’s Function estimates can further enable the robust separation of individual sources, even in the presence of multiple loud…
35

Jiang, Yi, Yuan Yuan Zu, and Ying Ze Wang. "An Unsupervised Approach to Close-Talk Speech Enhancement." Applied Mechanics and Materials 614 (September 2014): 363–66. http://dx.doi.org/10.4028/www.scientific.net/amm.614.363.

Abstract
A K-means based unsupervised approach to close-talk speech enhancement is proposed in this paper. With the frame work of computational auditory scene analysis (CASA), the dual-microphone energy difference (DMED) is used as the cue to classify the noise domain time-frequency (T-F) units and target speech domain units. A ratio mask is used to separate the target speech and noise. Experiment results show the robust performance of the proposed algorithm than the Wiener filtering algorithm.
36

Gregoire, Jerry. "Review and comparison of methods using spectral characteristics for the purposes of CASA, Computational Auditory Scene Analysis." Journal of the Acoustical Society of America 114, no. 4 (2003): 2331. http://dx.doi.org/10.1121/1.4781037.

37

Rouat, J. "Computational Auditory Scene Analysis: Principles, Algorithms, and Applications (Wang, D. and Brown, G.J., Eds.; 2006) [Book review]." IEEE Transactions on Neural Networks 19, no. 1 (2008): 199. http://dx.doi.org/10.1109/tnn.2007.913988.

38

Zhou, Hong, Yi Jiang, Ming Jiang, and Qiang Chen. "Energy Difference Based Speech Segregation for Close-Talk System." Applied Mechanics and Materials 229-231 (November 2012): 1738–41. http://dx.doi.org/10.4028/www.scientific.net/amm.229-231.1738.

Abstract
Within the framework of computational auditory scene analysis (CASA), a speech separation algorithm based on energy difference for close-talk system was proposed. The two microphones received the mixture signal of close target speech and far noise sound at the same time. The inter-microphone intensity differences (IMID) of the two microphones in time-frequency (T-F) units were calculated. And used as cues to generate the binary masks with the K-means two class clustering method. Experiments indicated that this novel algorithm could separate the target speech from the mixture sound, and perform…
39

Otsuka, Takuma, Katsuhiko Ishiguro, Hiroshi Sawada, and Hiroshi Okuno. "Bayesian Unification of Sound Source Localization and Separation with Permutation Resolution." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2012): 2038–45. http://dx.doi.org/10.1609/aaai.v26i1.8376.

Abstract
Sound source localization and separation with permutation resolution are essential for achieving a computational auditory scene analysis system that can extract useful information from a mixture of various sounds. Because existing methods cope separately with these problems despite their mutual dependence, the overall result with these approaches can be degraded by any failure in one of these components. This paper presents a unified Bayesian framework to solve these problems simultaneously where localization and separation are regarded as a clustering problem. Experimental results confirm that…
40

Ellis, Daniel P. W. "Using knowledge to organize sound: The prediction-driven approach to computational auditory scene analysis and its application to speech/nonspeech mixtures." Speech Communication 27, no. 3-4 (1999): 281–98. http://dx.doi.org/10.1016/s0167-6393(98)00083-1.

41

Li, Hongyan, Yue Wang, Rongrong Zhao, and Xueying Zhang. "An Unsupervised Two-Talker Speech Separation System Based on CASA." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 07 (2018): 1858002. http://dx.doi.org/10.1142/s0218001418580028.

Abstract
On the basis of the theory about blind separation of monaural speech based on computational auditory scene analysis (CASA), a two-talker speech separation system combining CASA and speaker recognition was proposed to separate speech from other speech interferences in this paper. First, a tandem algorithm is used to organize voiced speech, then based on the clustering of gammatone frequency cepstral coefficients (GFCCs), an object function is established to recognize the speaker, and the best group is achieved through exhaustive search or beam search, so that voiced speech is organized sequentially…
42

Abe, Mototsugu, and Shigeru Ando. "Auditory scene analysis based on time-frequency integration of shared FM and AM (I): Lagrange differential features and frequency-axis integration." Systems and Computers in Japan 33, no. 11 (2002): 95–106. http://dx.doi.org/10.1002/scj.1167.

43

Abe, Mototsugu, and Shigeru Ando. "Auditory scene analysis based on time-frequency integration of shared FM and AM (II): Optimum time-domain integration and stream sound reconstruction." Systems and Computers in Japan 33, no. 10 (2002): 83–94. http://dx.doi.org/10.1002/scj.1160.

44

He, Yuebo, Hui Gao, Hai Liu, and Guoxi Jing. "Identification of prominent noise components of an electric powertrain using a psychoacoustic model." Noise Control Engineering Journal 70, no. 2 (2022): 103–14. http://dx.doi.org/10.3397/1/37709.

Abstract
Because of the electric power transmission system has no sound masking effect compared with the traditional internal combustion power transmission system, electric powertrain noise has become the prominent noise of electric vehicles, adversely affecting the sound quality of the vehicle interior. Because of the strong coupling of motor and transmission noise, it is difficult to separate and identify the compositions of the electric powertrain by experiments. A psychoacoustic model is used to separate and identify the noise sources of the electric powertrain of a vehicle, considering the masking…
45

He, Zhuang, and Yin Feng. "Singing Transcription from Polyphonic Music Using Melody Contour Filtering." Applied Sciences 11, no. 13 (2021): 5913. http://dx.doi.org/10.3390/app11135913.

Abstract
Automatic singing transcription and analysis from polyphonic music records are essential in a number of indexing techniques for computational auditory scenes. To obtain a note-level sequence in this work, we divide the singing transcription task into two subtasks: melody extraction and note transcription. We construct a salience function in terms of harmonic and rhythmic similarity and a measurement of spectral balance. Central to our proposed method is the measurement of melody contours, which are calculated using edge searching based on their continuity properties. We calculate the mean cont…
46

Johnson, Keith. "Auditory Scene Analysis." Journal of Phonetics 21, no. 4 (1993): 491–96. http://dx.doi.org/10.1016/s0095-4470(19)30232-3.

47

Fay, Richard R. "Auditory Scene Analysis." Bioacoustics 17, no. 1-3 (2008): 106–9. http://dx.doi.org/10.1080/09524622.2008.9753783.

48

Sutter, Mitchell L., Christopher Petkov, Kathleen Baynes, and Kevin N. O'Connor. "Auditory scene analysis in dyslexics." NeuroReport 11, no. 9 (2000): 1967–71. http://dx.doi.org/10.1097/00001756-200006260-00032.

49

Fay, Richard. "Auditory scene analysis in fish." Journal of the Acoustical Society of America 126, no. 4 (2009): 2290. http://dx.doi.org/10.1121/1.3249386.

50

Itatani, Naoya, and Georg M. Klump. "Animal models for auditory streaming." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (2017): 20160112. http://dx.doi.org/10.1098/rstb.2016.0112.

Abstract
Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in…