Academic literature on the topic "Computational auditory scene analysis"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Computational auditory scene analysis".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Computational auditory scene analysis"

1

Brown, Guy J., and Martin Cooke. "Computational auditory scene analysis." Computer Speech & Language 8, no. 4 (October 1994): 297–336. http://dx.doi.org/10.1006/csla.1994.1016.

2

Alain, Claude, and Lori J. Bernstein. "Auditory Scene Analysis." Music Perception 33, no. 1 (September 1, 2015): 70–82. http://dx.doi.org/10.1525/mp.2015.33.1.70.

Abstract
Albert Bregman’s (1990) book Auditory Scene Analysis: The Perceptual Organization of Sound has had a tremendous impact on research in auditory neuroscience. Here, we outline some of the accomplishments. This review is not meant to be exhaustive, but rather aims to highlight milestones in the brief history of auditory neuroscience. The steady increase in neuroscience research following the book’s pivotal publication has advanced knowledge about how the brain forms representations of auditory objects. This research has far-reaching societal implications on health and quality of life. For instance, it helped us understand why some people experience difficulties understanding speech in noise, which in turn has led to development of therapeutic interventions. Importantly, the book acts as a catalyst, providing scientists with a common conceptual framework for research in such diverse fields as speech perception, music perception, neurophysiology and computational neuroscience. This interdisciplinary approach to research in audition is one of this book’s legacies.
3

Brown, Guy J. "Computational auditory scene analysis: A representational approach." Journal of the Acoustical Society of America 94, no. 4 (October 1993): 2454. http://dx.doi.org/10.1121/1.407441.

4

Lewicki, Michael S., Bruno A. Olshausen, Annemarie Surlykke, and Cynthia F. Moss. "Computational issues in natural auditory scene analysis." Journal of the Acoustical Society of America 137, no. 4 (April 2015): 2249. http://dx.doi.org/10.1121/1.4920202.

5

Niessen, Maria E., Ronald A. Van Elburg, Dirkjan J. Krijnders, and Tjeerd C. Andringa. "A computational model for auditory scene analysis." Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3301. http://dx.doi.org/10.1121/1.2933719.

6

Nakadai, Kazuhiro, and Hiroshi G. Okuno. "Robot Audition and Computational Auditory Scene Analysis." Advanced Intelligent Systems 2, no. 9 (July 8, 2020): 2000050. http://dx.doi.org/10.1002/aisy.202000050.

7

McMullin, Margaret A., Rohit Kumar, Nathan C. Higgins, Brian Gygi, Mounya Elhilali, and Joel S. Snyder. "Preliminary Evidence for Global Properties in Human Listeners During Natural Auditory Scene Perception." Open Mind 8 (2024): 333–65. http://dx.doi.org/10.1162/opmi_a_00131.

Abstract Theories of auditory and visual scene analysis suggest the perception of scenes relies on the identification and segregation of objects within it, resembling a detail-oriented processing style. However, a more global process may occur while analyzing scenes, which has been evidenced in the visual domain. It is our understanding that a similar line of research has not been explored in the auditory domain; therefore, we evaluated the contributions of high-level global and low-level acoustic information to auditory scene perception. An additional aim was to increase the field’s ecological validity by using and making available a new collection of high-quality auditory scenes. Participants rated scenes on 8 global properties (e.g., open vs. enclosed) and an acoustic analysis evaluated which low-level features predicted the ratings. We submitted the acoustic measures and average ratings of the global properties to separate exploratory factor analyses (EFAs). The EFA of the acoustic measures revealed a seven-factor structure explaining 57% of the variance in the data, while the EFA of the global property measures revealed a two-factor structure explaining 64% of the variance in the data. Regression analyses revealed each global property was predicted by at least one acoustic variable (R2 = 0.33–0.87). These findings were extended using deep neural network models where we examined correlations between human ratings of global properties and deep embeddings of two computational models: an object-based model and a scene-based model. The results support that participants’ ratings are more strongly explained by a global analysis of the scene setting, though the relationship between scene perception and auditory perception is multifaceted, with differing correlation patterns evident between the two models. Taken together, our results provide evidence for the ability to perceive auditory scenes from a global perspective. 
Some of the acoustic measures predicted ratings of global scene perception, suggesting representations of auditory objects may be transformed through many stages of processing in the ventral auditory stream, similar to what has been proposed in the ventral visual stream. These findings and the open availability of our scene collection will make future studies on perception, attention, and memory for natural auditory scenes possible.
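The regression step described in this abstract (predicting each global property from acoustic variables and reporting R²) can be sketched in a few lines. The variable names and data below are hypothetical stand-ins, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical acoustic measures and a global-property rating ("openness")
spectral_centroid = rng.normal(size=n)
event_density = rng.normal(size=n)
openness = 0.8 * spectral_centroid - 0.3 * event_density + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), spectral_centroid, event_density])
beta, *_ = np.linalg.lstsq(X, openness, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((openness - pred) ** 2) / np.sum((openness - openness.mean()) ** 2)
print(round(r2, 2))  # a moderately high R^2, in the spirit of the reported 0.33-0.87
```

The R² computed here is the proportion of rating variance explained by the acoustic predictors, which is what the reported range 0.33–0.87 summarizes per global property.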
8

Godsmark, Darryl, and Guy J. Brown. "A blackboard architecture for computational auditory scene analysis." Speech Communication 27, no. 3-4 (April 1999): 351–66. http://dx.doi.org/10.1016/s0167-6393(98)00082-x.

9

Kondo, Hirohito M., Anouk M. van Loon, Jun-Ichiro Kawahara, and Brian C. J. Moore. "Auditory and visual scene analysis: an overview." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (February 19, 2017): 20160099. http://dx.doi.org/10.1098/rstb.2016.0099.

Abstract
We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’.
10

Cooke, M. P., and G. J. Brown. "Computational auditory scene analysis: Exploiting principles of perceived continuity." Speech Communication 13, no. 3-4 (December 1993): 391–99. http://dx.doi.org/10.1016/0167-6393(93)90037-l.

More sources

Theses on the topic "Computational auditory scene analysis"

1

Ellis, Daniel Patrick Whittlesey. "Prediction-driven computational auditory scene analysis." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/11006.

Abstract
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (p. 173-180). By Daniel P. W. Ellis.
2

Delmotte, Varinthira Duangudom. "Computational auditory saliency." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45888.

Abstract
The objective of this dissertation research is to identify sounds that grab a listener's attention. These sounds that draw a person's attention are sounds that are considered salient. The focus here will be on investigating the role of saliency in the auditory attentional process. In order to identify these salient sounds, we have developed a computational auditory saliency model inspired by our understanding of the human auditory system and auditory perception. By identifying salient sounds we can obtain a better understanding of how sounds are processed by the auditory system, and in particular, the key features contributing to sound salience. Additionally, studying the salience of different auditory stimuli can lead to improvements in the performance of current computational models in several different areas, by making use of the information obtained about what stands out perceptually to observers in a particular scene. Auditory saliency also helps to rapidly sort the information present in a complex auditory scene. Since our resources are finite, not all information can be processed equally. We must, therefore, be able to quickly determine the importance of different objects in a scene. Additionally, an immediate response or decision may be required. In order to respond, the observer needs to know the key elements of the scene. The issue of saliency is closely related to many different areas, including scene analysis. The thesis provides a comprehensive look at auditory saliency. It explores the advantages and limitations of using auditory saliency models through different experiments and presents a general computational auditory saliency model that can be used for various applications.
3

Shao, Yang. "Sequential organization in computational auditory scene analysis." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1190127412.

4

Brown, Guy Jason. "Computational auditory scene analysis : a representational approach." Thesis, University of Sheffield, 1992. http://etheses.whiterose.ac.uk/2982/.

Abstract
This thesis addresses the problem of how a listener groups together acoustic components which have arisen from the same environmental event, a phenomenon known as auditory scene analysis. A computational model of auditory scene analysis is presented, which is able to separate speech from a variety of interfering noises. The model consists of four processing stages. Firstly, the auditory periphery is simulated by a bank of bandpass filters and a model of inner hair cell function. In the second stage, physiologically-inspired models of higher auditory organization - auditory maps - are used to provide a rich representational basis for scene analysis. Periodicities in the acoustic input are coded by an autocorrelation map and a crosscorrelation map. Information about spectral continuity is extracted by a frequency transition map. The times at which acoustic components start and stop are identified by an onset map and an offset map. In the third stage of processing, information from the periodicity and frequency transition maps is used to characterize the auditory scene as a collection of symbolic auditory objects. Finally, a search strategy identifies objects that have similar properties and groups them together. Specifically, objects are likely to form a group if they have a similar periodicity, onset time or offset time. The model has been evaluated in two ways, using the task of segregating voiced speech from a number of interfering sounds such as random noise, "cocktail party" noise and other speech. Firstly, a waveform can be resynthesized for each group in the auditory scene, so that segregation performance can be assessed by informal listening tests. The resynthesized speech is highly intelligible and fairly natural. Secondly, the linear nature of the resynthesis process allows the signal-to-noise ratio (SNR) to be compared before and after segregation. An improvement in SNR is obtained after segregation for each type of interfering noise. Additionally, the performance of the model is significantly better than that of a conventional frame-based autocorrelation segregation strategy.
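The periodicity-grouping cue central to this abstract (frequency channels excited by the same voice agree on a common fundamental) can be illustrated with a minimal correlogram sketch. This is not the thesis's implementation: a plain Butterworth filterbank stands in for the auditory periphery, and the "map" is simply a summary autocorrelation pooled across channels.

```python
import numpy as np
from scipy.signal import butter, lfilter, correlate

def bandpass_bank(x, fs, centers, bw=0.2):
    """Crude stand-in for a gammatone filterbank: one Butterworth
    band-pass filter per centre frequency."""
    out = []
    for fc in centers:
        b, a = butter(2, [fc * (1 - bw) / (fs / 2), fc * (1 + bw) / (fs / 2)],
                      btype="band")
        out.append(lfilter(b, a, x))
    return np.array(out)

def summary_f0(channels, fs, fmin=80.0, fmax=400.0):
    """Pool per-channel autocorrelations into a summary correlogram and
    read off the fundamental from the strongest common lag."""
    n = channels.shape[1]
    acs = [correlate(c, c, mode="full", method="fft")[n - 1:] for c in channels]
    summary = np.sum(acs, axis=0)
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(summary[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(0, 0.25, 1 / fs)
f0 = 125.0  # harmonic "voice" at 125 Hz
x = sum(np.sin(2 * np.pi * k * f0 * t) for k in (1, 2, 3, 4))

chans = bandpass_bank(x, fs, [125, 250, 375, 500])
print(round(summary_f0(chans, fs)))  # ≈ 125
```

Each channel's autocorrelation peaks at multiples of its harmonic's period, so the only lag where all channels peak together is the fundamental's period, which is exactly the grouping evidence the model exploits.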
5

Srinivasan, Soundararajan. "Integrating computational auditory scene analysis and automatic speech recognition." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1158250036.

6

Narayanan, Arun. "Computational auditory scene analysis and robust automatic speech recognition." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1401460288.

7

Unnikrishnan, Harikrishnan. "Audio Scene Segmentation Using a Microphone Array and Auditory Features." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_theses/622.

Abstract
Auditory stream denotes the abstract effect a source creates in the mind of the listener. An auditory scene consists of many streams, which the listener uses to analyze and understand the environment. Computer analyses that attempt to mimic human analysis of a scene must first perform Audio Scene Segmentation (ASS). ASS finds applications in surveillance, automatic speech recognition and human-computer interfaces. Microphone arrays can be employed for extracting streams corresponding to spatially separated sources. However, when a source moves to a new location during a period of silence, such a system loses track of the source. This results in multiple spatially localized streams for the same source. This thesis proposes to identify local streams associated with the same source using auditory features extracted from the beamformed signal. ASS using the spatial cues is first performed. Then auditory features are extracted and segments are linked together based on similarity of the feature vector. An experiment was carried out with two simultaneous speakers. A classifier is used to classify the localized streams as belonging to one speaker or the other. The best performance was achieved when pitch appended with Gammatone Frequency Cepstral Coefficients (GFCC) was used as the feature vector. An accuracy of 96.2% was achieved.
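The linking step this abstract describes (joining spatially localized segments that belong to the same speaker by feature similarity) can be sketched with a nearest-model rule. The feature vectors below are invented stand-ins for pitch-plus-GFCC statistics, and this simple rule stands in for the thesis's actual classifier:

```python
import numpy as np

def link_segments(segments, anchors):
    """Assign each localized stream segment to the speaker model whose
    feature vector is closest in Euclidean distance."""
    return [int(np.argmin([np.linalg.norm(s - a) for a in anchors]))
            for s in segments]

# Hypothetical per-segment feature vectors (e.g. scaled pitch + GFCC means)
speaker_a = np.array([1.20, 1.0, 0.2, 0.1])
speaker_b = np.array([2.10, 0.4, 0.9, 0.3])
segments = [speaker_a + 0.05, speaker_b - 0.05, speaker_a * 1.1]
print(link_segments(segments, [speaker_a, speaker_b]))  # → [0, 1, 0]
```

Segments whose features drift only slightly (as after a silent relocation) still land on the correct speaker model, which is how the localized streams get re-associated with one source.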
8

Nakatani, Tomohiro. "Computational Auditory Scene Analysis Based on Residue-driven Architecture and Its Application to Mixed Speech Recognition." 京都大学 (Kyoto University), 2002. http://hdl.handle.net/2433/149754.

9

Javadi, Ailar. "Bio-inspired noise robust auditory features." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44801.

Abstract
The purpose of this work is to investigate a series of biologically inspired modifications to state-of-the-art Mel-frequency cepstral coefficients (MFCCs) that may improve automatic speech recognition results. We have provided recommendations to improve speech recognition results depending on signal-to-noise ratio levels of input signals. This work has been motivated by noise-robust auditory features (NRAF). In the feature extraction technique, after a signal is filtered using bandpass filters, a spatial derivative step is used to sharpen the results, followed by an envelope detector (rectification and smoothing) and down-sampling for each filter bank before being compressed. DCT is then applied to the results of all filter banks to produce features. The Hidden Markov Model Toolkit (HTK) is used as the recognition back-end to perform speech recognition given the features we have extracted. In this work, we investigate the role of filter types, window size, spatial derivative, rectification types, smoothing, down-sampling and compression and compare the final results to state-of-the-art Mel-frequency cepstral coefficients (MFCCs). A series of conclusions and insights are provided for each step of the process. The goal of this work has not been to outperform MFCCs; however, we have shown that by changing the compression type from log compression to 0.07 root compression we are able to outperform MFCCs for all noisy conditions.
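The compression change highlighted at the end of the abstract is easy to illustrate. This sketch (illustrative values only, not the thesis's pipeline) contrasts standard log compression with 0.07 root compression of filter-bank energies:

```python
import numpy as np

def compress_log(energy, floor=1e-10):
    """Standard MFCC-style log compression of filter-bank energies."""
    return np.log(np.maximum(energy, floor))

def compress_root(energy, r=0.07):
    """Root compression (energy ** 0.07), the variant the abstract
    reports as more noise-robust than log compression."""
    return np.power(energy, r)

# Filter-bank energies spanning several orders of magnitude
e = np.array([1e-6, 1e-3, 1.0, 1e3])
print(compress_log(e))   # large negative values for quiet bands
print(compress_root(e))  # stays positive and bounded, roughly 0.38 to 1.62
```

The practical difference: log compression sends near-silent (noise-dominated) bands toward minus infinity, while the root nonlinearity keeps them in a narrow positive range, which limits how much low-energy noise perturbs the features.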
10

Melih, Kathy. "Audio Source Separation Using Perceptual Principles for Content-Based Coding and Information Management." Griffith University. School of Information Technology, 2004. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20050114.081327.

Abstract
The information age has brought with it a dual problem. In the first place, the ready access to mechanisms to capture and store vast amounts of data in all forms (text, audio, image and video), has resulted in a continued demand for ever more efficient means to store and transmit this data. In the second, the rapidly increasing store demands effective means to structure and access the data in an efficient and meaningful manner. In terms of audio data, the first challenge has traditionally been the realm of audio compression research that has focused on statistical, unstructured audio representations that obfuscate the inherent structure and semantic content of the underlying data. This has only served to further complicate the resolution of the second challenge resulting in access mechanisms that are either impractical to implement, too inflexible for general application or too low level for the average user. Thus, an artificial dichotomy has been created from what is in essence a dual problem. The founding motivation of this thesis is that, although the hypermedia model has been identified as the ideal, cognitively justified method for organising data, existing audio data representations and coding models provide little, if any, support for, or resemblance to, this model. It is the contention of the author that any successful attempt to create hyperaudio must resolve this schism, addressing both storage and information management issues simultaneously. In order to achieve this aim, an audio representation must be designed that provides compact data storage while, at the same time, revealing the inherent structure of the underlying data. Thus it is the aim of this thesis to present a representation designed with these factors in mind. Perhaps the most difficult hurdle in the way of achieving the aims of content-based audio coding and information management is that of auditory source separation. 
The MPEG committee has noted this requirement during the development of its MPEG-7 standard, however, the mechanics of "how" to achieve auditory source separation were left as an open research question. This same committee proposed that MPEG-7 would "support descriptors that can act as handles referring directly to the data, to allow manipulation of the multimedia material." While meta-data tags are a part solution to this problem, these cannot allow manipulation of audio material down to the level of individual sources when several simultaneous sources exist in a recording. In order to achieve this aim, the data themselves must be encoded in such a manner that allows these descriptors to be formed. Thus, content-based coding is obviously required. In the case of audio, this is impossible to achieve without effecting auditory source separation. Auditory source separation is the concern of computational auditory scene analysis (CASA). However, the findings of CASA research have traditionally been restricted to a limited domain. To date, the only real application of CASA research to what could loosely be classified as information management has been in the area of signal enhancement for automatic speech recognition systems. In these systems, a CASA front end serves as a means of separating the target speech from the background "noise". As such, the design of a CASA-based approach, as presented in this thesis, to one of the most significant challenges facing audio information management research represents a significant contribution to the field of information management. Thus, this thesis unifies research from three distinct fields in an attempt to resolve some specific and general challenges faced by all three. It describes an audio representation that is based on a sinusoidal model from which low-level auditory primitive elements are extracted. 
The use of a sinusoidal representation is somewhat contentious with the modern trend in CASA research tending toward more complex approaches in order to resolve issues relating to co-incident partials. However, the choice of a sinusoidal representation has been validated by the demonstration of a method to resolve many of these issues. The majority of the thesis contributes several algorithms to organise the low-level primitives into low-level auditory objects that may form the basis of nodes or link anchor points in a hyperaudio structure. Finally, preliminary investigations in the representation’s suitability for coding and information management tasks are outlined as directions for future research.
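The sinusoidal representation discussed above is built from low-level partials. A minimal sketch of extracting such primitives by spectral peak picking on one windowed frame (an illustrative simplification, not the thesis's algorithm):

```python
import numpy as np

def sinusoidal_peaks(frame, fs, n_fft=2048, thresh_db=-20.0):
    """Return (frequency, relative dB) pairs for spectral peaks of one
    windowed frame: crude sinusoidal-model partials."""
    win = np.hanning(len(frame))
    spec = np.abs(np.fft.rfft(frame * win, n_fft))
    mag_db = 20 * np.log10(spec / (spec.max() + 1e-12) + 1e-12)
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)
    return [(freqs[k], mag_db[k])
            for k in range(1, len(spec) - 1)
            if mag_db[k] > thresh_db
            and spec[k] > spec[k - 1] and spec[k] >= spec[k + 1]]

fs = 8000
t = np.arange(1024) / fs
frame = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
for f, m in sinusoidal_peaks(frame, fs):
    print(round(f, 1), round(m, 1))  # two partials, near 440 Hz and 880 Hz
```

Tracking such peaks across successive frames yields the partial trajectories that low-level auditory objects, and ultimately hyperaudio anchor points, could be built from.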
More sources

Books on the topic "Computational auditory scene analysis"

1

Rosenthal, David F., and Hiroshi G. Okuno, eds. Computational auditory scene analysis. Mahwah, NJ: Lawrence Erlbaum Associates, 1998.

2

Ellis, Daniel P. W. Prediction-driven computational auditory scene analysis. [New York, N.Y.?]: [publisher not identified], 1996.

3

Lerch, Alexander. Audio content analysis: An introduction. Hoboken, N.J: Wiley, 2012.

4

Wang, Wenwu. Machine audition: Principles, algorithms, and systems. Hershey, PA: Information Science Reference, 2010.

5

Rowe, Robert. Interactive music systems: Machine listening and composing. Cambridge, Mass: MIT Press, 1993.

6

Giannakopoulos, Theodoros, and Aggelos Pikrakis. Introduction to audio analysis: A MATLAB approach. Kidlington, Oxford: Academic Press, 2014.

7

McDonald, Kelly Loreen. The role of harmonicity and location cues in auditory scene analysis. Ottawa: National Library of Canada, 2003.

8

Emerit, Sibylle, and Sylvain Perrot. Le paysage sonore de l'antiquité: Méthodologie, historiographie et perspectives : actes de la journée d'études tenue à l'École française de Rome, le 7 janvier 2013. Le Caire: Institut Français d'Archéologie Orientale, 2015.

9

Rosenthal, David F., and Hiroshi G. Okuno, eds. Computational Auditory Scene Analysis. CRC Press, 2020. http://dx.doi.org/10.1201/9781003064183.

10

Wang, DeLiang, and Guy J. Brown. Computational Auditory Scene Analysis. IEEE, 2006. http://dx.doi.org/10.1109/9780470043387.

More sources

Book chapters on the topic "Computational auditory scene analysis"

1

Mellinger, David K., and Bernard M. Mont-Reynaud. "Scene Analysis." In Auditory Computation, 271–331. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-4070-9_7.

2

Brown, Guy J. "Physiological Models of Auditory Scene Analysis." In Computational Models of the Auditory System, 203–36. Boston, MA: Springer US, 2010. http://dx.doi.org/10.1007/978-1-4419-5934-8_8.

3

Narayanan, Arun, and DeLiang Wang. "Computational Auditory Scene Analysis and Automatic Speech Recognition." In Techniques for Noise Robustness in Automatic Speech Recognition, 433–62. Chichester, UK: John Wiley & Sons, Ltd, 2012. http://dx.doi.org/10.1002/9781118392683.ch16.

4

Kashino, Makio, Eisuke Adachi, and Haruto Hirose. "A Computational Approach to the Dynamic Aspects of Primitive Auditory Scene Analysis." In Advances in Experimental Medicine and Biology, 519–26. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-1590-9_57.

5

Hummersone, Christopher, Toby Stokes, and Tim Brookes. "On the Ideal Ratio Mask as the Goal of Computational Auditory Scene Analysis." In Blind Source Separation, 349–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-55016-4_12.

6

Wang, DeLiang. "Computational Scene Analysis." In Challenges for Computational Intelligence, 163–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-71984-7_8.

7

Leibold, Lori J. "Development of Auditory Scene Analysis and Auditory Attention." In Human Auditory Development, 137–61. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4614-1421-6_5.

8

Carlyon, Robert P., Sarah K. Thompson, Antje Heinrich, Friedemann Pulvermuller, Matthew H. Davis, Yury Shtyrov, Rhodri Cusack, and Ingrid S. Johnsrude. "Objective Measures of Auditory Scene Analysis." In The Neurophysiological Bases of Auditory Perception, 507–19. New York, NY: Springer New York, 2010. http://dx.doi.org/10.1007/978-1-4419-5686-6_47.

9

Stowell, Dan. "Computational Bioacoustic Scene Analysis." In Computational Analysis of Sound Scenes and Events, 303–33. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_11.

10

Mountain, David C., and Allyn E. Hubbard. "Computational Analysis of Hair Cell and Auditory Nerve Processes." In Auditory Computation, 121–56. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-4070-9_4.


Conference proceedings on the topic "Computational auditory scene analysis"

1

Pile, Santa, Oleg Lesota, Silvan David Peter, Christina Humer, and Martin Di Gasser. "Spin-Wave Voices: Sonification of Nanoscale Spin Waves as an Engagement and Research Tool." In ICAD 2024: The 29th International Conference on Auditory Display, 132–39. icad.org: International Community for Auditory Display, 2024. http://dx.doi.org/10.21785/icad2024.024.

Abstract
Magnonics is an emerging research field that addresses the use of spin waves (magnons), purely magnetic waves, for information transport and processing. Spin waves are a potential replacement for electric current in novel computational devices that would make them more compact and energy efficient. The field is still little known, even among physicists. Additionally, with the development of new measuring techniques and computational physics, the obtained magnetic data becomes more complex, in some cases including 3D vector fields and time resolution. This work presents an approach to the audio-visual representation of spin waves and discusses its use in science communication exhibits and as a possible data analysis tool. The work also details an instance of such an exhibit presented at the annual international digital art exhibition Ars Electronica Festival in 2022.
2

Brown, Guy J., and Martin P. Cooke. "A computational model of auditory scene analysis." In 2nd International Conference on Spoken Language Processing (ICSLP 1992). ISCA: ISCA, 1992. http://dx.doi.org/10.21437/icslp.1992-172.

3

Shao, Yang, and DeLiang Wang. "Robust speaker identification using auditory features and computational auditory scene analysis." In ICASSP 2008 - 2008 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2008. http://dx.doi.org/10.1109/icassp.2008.4517928.

4

Tu, Ming, Xiang Xie, and Xingyu Na. "Computational Auditory Scene Analysis Based Voice Activity Detection." In 2014 22nd International Conference on Pattern Recognition (ICPR). IEEE, 2014. http://dx.doi.org/10.1109/icpr.2014.147.

5

Kawamoto, Mitsuru, and Takuji Hamamoto. "Building Health Monitoring Using Computational Auditory Scene Analysis." In 2020 16th International Conference on Distributed Computing in Sensor Systems (DCOSS). IEEE, 2020. http://dx.doi.org/10.1109/dcoss49796.2020.00033.

6

Larigaldie, Nathanael, and Ulrik Beierholm. "Explaining Human Auditory Scene Analysis Through Bayesian Clustering." In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1227-0.

7

Okuno, Hiroshi G., Tetsuya Ogata, and Kazunori Komatani. "Robot Audition from the Viewpoint of Computational Auditory Scene Analysis." In International Conference on Informatics Education and Research for Knowledge-Circulating Society (icks 2008). IEEE, 2008. http://dx.doi.org/10.1109/icks.2008.10.

8

Srinivasan, Soundararajan, Yang Shao, Zhaozhang Jin, and DeLiang Wang. "A computational auditory scene analysis system for robust speech recognition." In Interspeech 2006. ISCA: ISCA, 2006. http://dx.doi.org/10.21437/interspeech.2006-19.

9

Kawamoto, Mitsuru. "Sound-environment monitoring technique based on computational auditory scene analysis." In 2017 25th European Signal Processing Conference (EUSIPCO). IEEE, 2017. http://dx.doi.org/10.23919/eusipco.2017.8081664.

10

Okuno, Hiroshi G., and Kazuhiro Nakadai. "Computational Auditory Scene Analysis and its Application to Robot Audition." In 2008 Hands-Free Speech Communication and Microphone Arrays (HSCMA 2008). IEEE, 2008. http://dx.doi.org/10.1109/hscma.2008.4538702.


Reports on the topic "Computational auditory scene analysis"

1

Shao, Yang, Soundararajan Srinivasan, Zhaozhang Jin, and DeLiang Wang. A Computational Auditory Scene Analysis System for Speech Segregation and Robust Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, January 2007. http://dx.doi.org/10.21236/ad1001212.

2

Lazzaro, John, and John Wawrzynek. Silicon Models for Auditory Scene Analysis. Fort Belvoir, VA: Defense Technical Information Center, January 1995. http://dx.doi.org/10.21236/ada327239.

3

McKinnon, Mark, and Daniel Madrzykowski. Literature Review to Support the Development of a Database of Contemporary Material Properties for Fire Investigation Analysis. UL Firefighter Safety Research Institute, June 2020. http://dx.doi.org/10.54206/102376/wmah2173.

Abstract
The NIJ Technology Working Group’s Operational Requirements (TWG ORs) for Fire and Arson Investigation have included several scientific research needs that require knowledge of the thermophysical properties of materials that are common in the built environment, and therefore likely to be involved in a fire scene. The specific areas of research include: adequate materials property data inputs for accurate computer models, understanding the effect of materials properties on the development and interpretation of fire patterns, and evaluation of incident heat flux profiles to walls and neighboring items in support of fire model validation. These topics certainly address, in a concise way, many of the gaps that limit the analysis capability of fire investigators and engineers. Each of the three aforementioned research topics rely, in part, on accurate knowledge of the physical conditions of a material prior to the fire, how the material will respond to the exposure of heat, and how it will perform once it has ignited. This general information is required to visually assess a fire scene. The same information is needed by investigators to estimate the evolution and consequences of a fire incident using a computer model. Data sources that are currently most commonly used to determine the required properties and model inputs are outdated and incomplete. This report includes the literature review used to provide a technical approach to developing a materials database for use in fire investigations and computational fire models. A summary of the input from the project technical panel is presented which guided the initial selection of materials to be included in the database as well as the selection of test measurements.