Selected scientific literature on the topic "Flux audio"
Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles
Consult the list of current articles, books, theses, conference papers, and other scholarly sources on the topic "Flux audio".
Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf and read the online abstract (summary) of the work if it is included in the metadata.
Journal articles on the topic "Flux audio"
Bryant, M. D. "Bond Graph Models for Linear Motion Magnetostrictive Actuators". Journal of Dynamic Systems, Measurement, and Control 118, no. 1 (March 1, 1996): 161–67. http://dx.doi.org/10.1115/1.2801139.
Stupacher, Jan, Michael J. Hove, and Petr Janata. "Audio Features Underlying Perceived Groove and Sensorimotor Synchronization in Music". Music Perception 33, no. 5 (June 1, 2016): 571–89. http://dx.doi.org/10.1525/mp.2016.33.5.571.
Rossetti, Danilo, and Jônatas Manzolli. "Analysis of Granular Acousmatic Music: Representation of sound flux and emergence". Organised Sound 24, no. 2 (August 2019): 205–16. http://dx.doi.org/10.1017/s1355771819000244.
Valiveti, Hima Bindu, Anil Kumar B., Lakshmi Chaitanya Duggineni, Swetha Namburu, and Swaraja Kuraparthi. "Soft computing based audio signal analysis for accident prediction". International Journal of Pervasive Computing and Communications 17, no. 3 (March 26, 2021): 329–48. http://dx.doi.org/10.1108/ijpcc-08-2020-0120.
Stanton, Polly. "Sound, listening and the moving image". Qualitative Research Journal 19, no. 1 (February 4, 2019): 65–71. http://dx.doi.org/10.1108/qrj-12-2018-0019.
Istvanek, Matej, Zdenek Smekal, Lubomir Spurny, and Jiri Mekyska. "Enhancement of Conventional Beat Tracking System Using Teager–Kaiser Energy Operator". Applied Sciences 10, no. 1 (January 4, 2020): 379. http://dx.doi.org/10.3390/app10010379.
Hao, Yiya, Yaobin Chen, Weiwei Zhang, Gong Chen, and Liang Ruan. "A real-time music detection method based on convolutional neural network using Mel-spectrogram and spectral flux". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, no. 1 (August 1, 2021): 5910–18. http://dx.doi.org/10.3397/in-2021-11599.
Luck, Geoff, and Petri Toiviainen. "Ensemble Musicians’ Synchronization With Conductors’ Gestures: An Automated Feature-Extraction Analysis". Music Perception 24, no. 2 (December 1, 2006): 189–200. http://dx.doi.org/10.1525/mp.2006.24.2.189.
Purnomo, Endra Dwi, Ubaidillah Ubaidillah, Fitrian Imaduddin, Iwan Yahya, and Saiful Amri Mazlan. "Preliminary experimental evaluation of a novel loudspeaker featuring magnetorheological fluid surround absorber". Indonesian Journal of Electrical Engineering and Computer Science 17, no. 2 (February 1, 2020): 922. http://dx.doi.org/10.11591/ijeecs.v17.i2.pp922-928.
Mauch, Matthias, Robert M. MacCallum, Mark Levy, and Armand M. Leroi. "The evolution of popular music: USA 1960–2010". Royal Society Open Science 2, no. 5 (May 2015): 150081. http://dx.doi.org/10.1098/rsos.150081.
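Several of the works listed on this page (for example Hao et al. 2021 above, and the Wang et al. and Lee et al. conference papers further down) use spectral flux as a low-level audio feature. Purely as an illustration of the general idea, and not as a reproduction of any cited method, the following sketch computes half-wave rectified spectral flux from short-time Fourier magnitudes; the function name, window, frame length, and hop size are assumptions made for this example.

```python
import numpy as np

def spectral_flux(signal, frame_len=1024, hop=512):
    """Half-wave rectified spectral flux between consecutive STFT frames.

    Illustrative sketch only: frames the signal with a Hann window, takes
    magnitude spectra, and sums the positive bin-wise increases between
    consecutive frames. Peaks in the output tend to mark onsets/activity.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    mags = np.empty((n_frames, frame_len // 2 + 1))
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len] * window
        mags[i] = np.abs(np.fft.rfft(frame))
    diff = np.diff(mags, axis=0)                   # change per frequency bin
    return np.sum(np.maximum(diff, 0.0), axis=1)   # one value per frame pair

# Tiny usage example on a synthetic signal (silence followed by noise).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sig = np.concatenate([np.zeros(8000), rng.standard_normal(8000)])
    print(spectral_flux(sig)[:5])
```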
Testo completoTesi sul tema "Flux audio"
Nesvadba, Jan. "Segmentation sémantique des contenus audio-visuels". Bordeaux 1, 2007. http://www.theses.fr/2007BOR13456.
Ramona, Mathieu. "Classification automatique de flux radiophoniques par Machines à Vecteurs de Support". PhD thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00529331.
Soldi, Giovanni. "Diarisation du locuteur en temps réel pour les objets intelligents". Electronic Thesis or Diss., Paris, ENST, 2016. http://www.theses.fr/2016ENST0061.
Testo completoOn-line speaker diarization aims to detect “who is speaking now" in a given audio stream. The majority of proposed on-line speaker diarization systems has focused on less challenging domains, such as broadcast news and plenary speeches, characterised by long speaker turns and low spontaneity. The first contribution of this thesis is the development of a completely unsupervised adaptive on-line diarization system for challenging and highly spontaneous meeting data. Due to the obtained high diarization error rates, a semi-supervised approach to on-line diarization, whereby speaker models are seeded with a modest amount of manually labelled data and adapted by an efficient incremental maximum a-posteriori adaptation (MAP) procedure, is proposed. Obtained error rates may be low enough to support practical applications. The second part of the thesis addresses instead the problem of phone normalisation when dealing with short-duration speaker modelling. First, Phone Adaptive Training (PAT), a recently proposed technique, is assessed and optimised at the speaker modelling level and in the context of automatic speaker verification (ASV) and then is further developed towards a completely unsupervised system using automatically generated acoustic class transcriptions, whose number is controlled by regression tree analysis. PAT delivers significant improvements in the performance of a state-of-the-art iVector ASV system even when accurate phonetic transcriptions are not available
Poignant, Johann. "Identification non-supervisée de personnes dans les flux télévisés". Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00958774.
Trad, Abdelbasset. "Déploiement à grande échelle de la voix sur IP dans des environnements hétérogènes". PhD thesis, Nice, 2006. http://tel.archives-ouvertes.fr/tel-00406513.
Michaud, Jérôme. "Re-conceptualiser notre expérience de l’environnement audio-visuel qui nous entoure : l’individuation, entre attention et mémoire". Thesis, 2016. http://hdl.handle.net/1866/16151.
Testo completoThis thesis re-conceptualizes our new audio-visual environment and analyses the experience we make of it. In the digital age marked by the dissemination of moving images, we circumscribe a category of images which we see as the most likely to have an impact on human development. We call it synchrono-photo-temporalized images-sounds. Specifically, we seek to highlight their power of affection and control by showing that they have some influence on the process of individuation, an influence which is greatly facilitated by the structural isotopy between the stream of consciousness and the flow of motion images. By examining the research of Bernard Stiegler, we also note the important roles attention and memory play in the process of individuation. This thinking makes us realize how the current education system in Quebec fails in its mission to give a good civic education by not providing an adequate teaching of moving images.
Books on the topic "Flux audio"
Colbert, Don. Bible Cure for Colds, Flu & Sinus Infections (Bible Cure (Oasis Audio)). Oasis Audio, 2004.
Wheatley, Greg (Narrator), ed. The Bible Cure for Colds, Flu and Sinus Infections (Bible Cure (Oasis Audio)). Oasis Audio, 2004.
Shatzkin, Mike, and Robert Paris Riger. The Book Business. Oxford University Press, 2019. http://dx.doi.org/10.1093/wentk/9780190628031.001.0001.
Testo completoCapitoli di libri sul tema "Flux audio"
Elrom, Elad. "Facilitating Audio and Video". In AdvancED Flex 4, 461–503. Berkeley, CA: Apress, 2010. http://dx.doi.org/10.1007/978-1-4302-2484-6_14.
Richardson, Darren, Paul Milbourne, Steve Webster, Todd Yard, and Sean McSharry. "Using Audio". In Foundation ActionScript 3.0 for Flash and Flex, 301–53. Berkeley, CA: Apress, 2009. http://dx.doi.org/10.1007/978-1-4302-1919-4_8.
McSharry, Sean. "Using Audio". In Foundation ActionScript 3.0 with Flash CS3 and Flex, 293–343. Berkeley, CA: Apress, 2008. http://dx.doi.org/10.1007/978-1-4302-0196-0_8.
Al-Shoshan, Abdullah I. "Classification and Separation of Audio and Music Signals". In Multimedia Information Retrieval [Working Title]. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.94940.
Testo completoAtti di convegni sul tema "Flux audio"
Wang, Wengen, Xiaoqing Yu, Yun Hui Wang, and Ram Swaminathan. "Audio fingerprint based on Spectral Flux for audio retrieval". In 2012 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2012. http://dx.doi.org/10.1109/icalip.2012.6376781.
Lee, Sangkil, Jieun Kim, and Insung Lee. "Speech/Audio Signal Classification Using Spectral Flux Pattern Recognition". In 2012 IEEE Workshop on Signal Processing Systems (SiPS). IEEE, 2012. http://dx.doi.org/10.1109/sips.2012.36.
Xu, Y., P. Smagacz, J. Lapinskas, J. Webster, P. Shaw, and R. P. Taleyarkhan. "Neutron Detection with Centrifugally-Tensioned Metastable Fluid Detectors (CTMFD)". In 14th International Conference on Nuclear Engineering. ASMEDC, 2006. http://dx.doi.org/10.1115/icone14-89199.
Zheng, Haiming, and Tieqiao Guo. "Relative Accuracy Test Audit Evaluation for Flue Gas Continuous Emission Monitoring Systems in Power Plant". In 2008 Pacific-Asia Workshop on Computational Intelligence and Industrial Application (PACIIA). IEEE, 2008. http://dx.doi.org/10.1109/paciia.2008.123.
Meng, Liu, Chen Yang, Zhong Zhuhai, Zhang Xiaodan, Deng Guoliang, Mingyan Yin, Jun Li, and Qi Sun. "Numerical Tests on the Effect Factors of the Last Stage Blade for Low Pressure Exhaust Hood Simulation". In ASME Turbo Expo 2017: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/gt2017-63964.