Academic literature on the topic 'Musical Instrument Recognition'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Musical Instrument Recognition.'
Journal articles on the topic "Musical Instrument Recognition"
Livshin, A., and X. Rodet. "Purging Musical Instrument Sample Databases Using Automatic Musical Instrument Recognition Methods." IEEE Transactions on Audio, Speech, and Language Processing 17, no. 5 (July 2009): 1046–51. http://dx.doi.org/10.1109/tasl.2009.2018439.
Lei, Lei. "Multiple Musical Instrument Signal Recognition Based on Convolutional Neural Network." Scientific Programming 2022 (March 25, 2022): 1–11. http://dx.doi.org/10.1155/2022/5117546.
Chitre, Abhijit V., Ketan J. Raut, Tushar Jadhav, Minal S. Deshmukh, and Kirti Wanjale. "Hybrid Feature Based Classifier Performance Evaluation of Monophonic and Polyphonic Indian Classical Instruments Recognition." Journal of University of Shanghai for Science and Technology 23, no. 11 (November 2, 2021): 879–90. http://dx.doi.org/10.51201/jusst/21/11969.
Essid, S., G. Richard, and B. David. "Musical instrument recognition by pairwise classification strategies." IEEE Transactions on Audio, Speech and Language Processing 14, no. 4 (July 2006): 1401–12. http://dx.doi.org/10.1109/tsa.2005.860842.
Martin, Keith D., and Youngmoo E. Kim. "Musical instrument identification: A pattern-recognition approach." Journal of the Acoustical Society of America 104, no. 3 (September 1998): 1768. http://dx.doi.org/10.1121/1.424083.
Rajesh, Sangeetha, and Nalini N. J. "Recognition of Musical Instrument Using Deep Learning Techniques." International Journal of Information Retrieval Research 11, no. 4 (October 2021): 41–60. http://dx.doi.org/10.4018/ijirr.2021100103.
Kurnia, Yusuf, and Toga Parlindungan Silaen. "Android-Based Musical Instrument Recognition Application For Vocational High School Level." bit-Tech 4, no. 2 (December 30, 2021): 47–55. http://dx.doi.org/10.32877/bt.v4i2.288.
Gonzalez, Yubiry, and Ronaldo C. Prati. "Similarity of Musical Timbres Using FFT-Acoustic Descriptor Analysis and Machine Learning." Eng 4, no. 1 (February 9, 2023): 555–68. http://dx.doi.org/10.3390/eng4010033.
Sankaye, Satish R., Suresh C. Mehrotra, and U. S. Tandon. "Indian Musical Instrument Recognition using Modified LPC Features." International Journal of Computer Applications 122, no. 13 (July 18, 2015): 6–10. http://dx.doi.org/10.5120/21758-4991.
Siedenburg, Kai, Marc René Schädler, and David Hülsmeier. "Modeling the onset advantage in musical instrument recognition." Journal of the Acoustical Society of America 146, no. 6 (December 2019): EL523–EL529. http://dx.doi.org/10.1121/1.5141369.
Full textDissertations / Theses on the topic "Musical Instrument Recognition"
Malheiro, Frederico Alberto Santos de Carteado. "Automatic musical instrument recognition for multimedia indexing." Master's thesis, Faculdade de Ciências e Tecnologia, 2011. http://hdl.handle.net/10362/6124.
Automatic indexing of multimedia has been the subject of much discussion and study. This interest is due to the exponential growth of multimedia content and the consequent need for methods that automatically catalogue this data. Several projects and areas of study have emerged to address it. The most relevant of these are the MPEG-7 standard, which defines a standardized system for the representation and automatic extraction of information present in the content, and Music Information Retrieval (MIR), which gathers several paradigms and areas of study relating to music. The main approach to this indexing problem relies on analysing the data to identify descriptors that help define what we intend to recognize (for instance, musical instruments, voice, facial expressions, and so on); this in turn provides information we can use to index the data. This dissertation will focus on audio indexing in music, specifically the recognition of musical instruments from recorded musical notes. Moreover, the developed system and techniques will also be tested on the recognition of ambient sounds (such as running water, cars driving by, and so on). Our approach will use non-negative matrix factorization to extract features from various types of sounds; these will then be used to train a classification algorithm capable of identifying new sounds.
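The pipeline this abstract describes — unsupervised feature extraction via non-negative matrix factorization, feeding a downstream classifier — can be sketched in a few lines. The following is a generic illustration of NMF using the classic Lee–Seung multiplicative updates on a toy "spectrogram"; it is a sketch of the technique only, not the dissertation's actual system, and the toy data, rank, and iteration count are my own choices.

```python
import numpy as np

def nmf(V, rank, n_iter=500, eps=1e-9):
    """Non-negative matrix factorization V ~ W @ H via the
    Lee-Seung multiplicative updates (Euclidean cost)."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank)) + eps  # spectral templates (freq x rank)
    H = rng.random((rank, m)) + eps  # activations (rank x time)
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "spectrogram" (frequency x time) built from two spectral
# templates active over disjoint time frames.
t1 = np.array([1.0, 0.0, 0.5, 0.0])
t2 = np.array([0.0, 1.0, 0.0, 0.5])
V = np.outer(t1, [1.0, 1.0, 0.0, 0.0]) + np.outer(t2, [0.0, 0.0, 1.0, 1.0])

W, H = nmf(V, rank=2)
print(np.linalg.norm(V - W @ H))  # reconstruction error is small
```

Each column of W can be read as a learned spectral template and each row of H as its activation over time; in an instrument recognizer of this kind, such templates or activations would serve as the features handed to the classifier.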
Cros Vila, Laura. "Musical Instrument Recognition using the Scattering Transform." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-283597.
Thanks to technological advances in networking and signal processing, we can access a vast amount of musical content. For users to search these large catalogues, they need access to music-related information beyond the raw digital music file. Since manual annotation would be too expensive, the process must be automated. A meaningful description of musical pieces requires incorporating information about the instruments present in them. In this work we present a method for musical instrument recognition using the scattering transform, a transformation that yields a translation-invariant representation which is stable to deformations and preserves high-frequency information for classification. We study recognition in both single-instrument and multi-instrument conditions. We compare the performance of models using the scattering transform with models using other standard features, and we also examine the effect of the amount of training data. The experiments performed do not show a clearly superior performance for either representation over the other. Still, the scattering transform is worth considering when choosing a feature extraction method if we want to characterize non-stationary signal structures.
Fuhrmann, Ferdinand. "Automatic musical instrument recognition from polyphonic music audio signals." Doctoral thesis, Universitat Pompeu Fabra, 2012. http://hdl.handle.net/10803/81328.
In this dissertation we present a method for the automatic recognition of musical instruments from music audio signals. Unlike most related approaches, our conception largely avoids laboratory constraints on the method's algorithmic design, its input data, and the targeted application context. To account for the complex nature of the input signal, we limit the basic process in the processing chain to the recognition of a single predominant musical instrument from a short audio fragment; rather than resolving the mixture, we predict one source from the timbre of the sound. To compensate for this restriction, we further incorporate information derived from a hierarchical music analysis: first, we use musical context to extract instrument labels from the time-varying model decisions; second, the method incorporates information regarding the piece's formal aspects; finally, we include information at the collection level by exploiting associations between musical genres and instrumentations.
Sandrock, Trudie. "Multi-label feature selection with application to musical instrument recognition." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019/11071.
ENGLISH ABSTRACT: An area of data mining and statistics that is currently receiving considerable attention is the field of multi-label learning. Problems in this field are concerned with scenarios where each data case can be associated with a set of labels instead of only one. In this thesis, we review the field of multi-label learning and discuss the lack of suitable benchmark data available for evaluating multi-label algorithms. We propose a technique for simulating multi-label data, which allows good control over different data characteristics and which could be useful for conducting comparative studies in the multi-label field. We also discuss the explosion in data in recent years, and highlight the need for some form of dimension reduction in order to alleviate some of the challenges presented by working with large datasets. Feature (or variable) selection is one way of achieving dimension reduction, and after a brief discussion of different feature selection techniques, we propose a new technique for feature selection in a multi-label context, based on the concept of independent probes. This technique is empirically evaluated by using simulated multi-label data and it is shown to achieve classification accuracy with a reduced set of features similar to that achieved with a full set of features. The proposed technique for feature selection is then also applied to the field of music information retrieval (MIR), specifically the problem of musical instrument recognition. An overview of the field of MIR is given, with particular emphasis on the instrument recognition problem. The particular goal of (polyphonic) musical instrument recognition is to automatically identify the instruments playing simultaneously in an audio clip, which is not a simple task. We specifically consider the case of duets – in other words, where two instruments are playing simultaneously – and approach the problem as a multi-label classification one.
In our empirical study, we illustrate the complexity of musical instrument data and again show that our proposed feature selection technique is effective in identifying relevant features and thereby reducing the complexity of the dataset without negatively impacting on performance.
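The "independent probes" idea behind the feature selection technique described in this abstract can be illustrated generically: shuffled copies of real features act as probes that, by construction, carry no information about the labels, and a real feature is kept only if it outscores the best probe. The sketch below illustrates the concept only — the relevance score (absolute feature–label correlation) and all parameters are stand-ins of my own, not the thesis's actual procedure.

```python
import numpy as np

def probe_select(X, Y, n_probes=20, seed=0):
    """Sketch of probe-based multi-label feature selection: keep
    only real features whose relevance score exceeds the best
    score any irrelevant 'probe' feature reaches by chance."""
    rng = np.random.default_rng(seed)
    cols = rng.integers(0, X.shape[1], size=n_probes)
    # Shuffling a column destroys its association with the labels.
    probes = np.stack([rng.permutation(X[:, c]) for c in cols], axis=1)

    def relevance(F):
        # Max absolute correlation of each feature with any label.
        Fz = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-12)
        Yz = (Y - Y.mean(axis=0)) / (Y.std(axis=0) + 1e-12)
        return np.abs(Fz.T @ Yz / len(F)).max(axis=1)

    return np.flatnonzero(relevance(X) > relevance(probes).max())

# Toy multi-label data: features 0 and 1 determine the two labels,
# the remaining eight features are pure noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
Y = np.column_stack([X[:, 0] > 0, X[:, 1] > 0]).astype(float)
selected = probe_select(X, Y)
print(selected)  # features 0 and 1 should be among those kept
```

On this toy data the two informative features comfortably clear the probe threshold, while pure-noise features rarely do; the probe maximum thus serves as a data-driven cutoff requiring no tuning.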
Cox, Bethany G. "The Effects of Musical Instrument Gender on Spoken Word Recognition." Cleveland State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=csu1624382611571213.
Kitahara, Tetsuro. "Computational musical instrument recognition and its application to content-based music information retrieval." Kyoto University, 2007. http://hdl.handle.net/2433/135955.
Freddi, Jacopo. "Metodi di Machine Learning applicati alla classificazione degli strumenti musicali" [Machine learning methods applied to the classification of musical instruments]. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13225/.
Nyströmer, Carl. "Musical Instrument Activity Detection using Self-Supervised Learning and Domain Adaptation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280810.
With the ever-growing media and music catalogues, tools are needed to search and navigate them. More complex search queries require metadata, but manually annotating the enormous amounts of new data is impossible. This thesis investigates automatic annotation of instrument activity in music, focusing on the lack of annotated data for instrument-activity recognition models. Two methods to work around the data shortage are proposed and examined. The first builds on self-supervised learning based on automatic annotation and random mixing of different instrument tracks. The second uses domain adaptation, training models on sampled MIDI files to detect instruments in recorded music. The self-supervised method outperformed the baseline, suggesting that deep learning models can learn instrument recognition even though the audio mixes lack musical structure. The domain-adaptation models trained only on sampled MIDI data performed worse than the baseline, but using MIDI data together with recorded music improved the results. A hybrid model combining self-supervised learning and domain adaptation, using both sampled MIDI data and recorded music, gave the best overall results.
Kaminskyj, Ian. "Automatic recognition of musical instruments using isolated monophonic sounds." Monash University, Dept. of Electrical and Computer Systems Engineering, 2004. http://arrow.monash.edu.au/hdl/1959.1/5212.
Carota, Massimo. "Neural network approach to problems of static/dynamic classification." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2008. http://hdl.handle.net/2108/580.
Books on the topic "Musical Instrument Recognition"
Lapidus, Benjamin. New York and the International Sound of Latin Music, 1940-1990. University Press of Mississippi, 2020. http://dx.doi.org/10.14325/mississippi/9781496831286.001.0001.
Cook, Nicholas. Music: A Very Short Introduction. 2nd ed. Oxford University Press, 2021. http://dx.doi.org/10.1093/actrade/9780198726043.001.0001.
Grutzmacher, Patricia A. The effect of tonal pattern training on the aural perception, reading recognition and melodic sight reading achievement of first year instrumental music students. 1985.
Lewis, Philip. Recognition and remediation of common playing problems of second-year grade 9 instrumentalists. 1986.
Suchowiejko, Renata. Polsko-rosyjskie spotkania w przestrzeni kultury muzycznej: XIX wiek i początek XX stulecia [Polish-Russian encounters in the space of musical culture: the nineteenth and early twentieth centuries]. Księgarnia Akademicka Publishing, 2022. http://dx.doi.org/10.12797/9788381386685.
Irving, John. Performing Topics in Mozart’s Chamber Music with Piano. Edited by Danuta Mirka. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199841578.013.0021.
Book chapters on the topic "Musical Instrument Recognition"
Datta, Asoke Kumar, Sandeep Singh Solanki, Ranjan Sengupta, Soubhik Chakraborty, Kartik Mahto, and Anirban Patranabis. "Automatic Musical Instrument Recognition." In Signals and Communication Technology, 167–232. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-3959-1_9.
Eichhoff, Markus, and Claus Weihs. "Musical Instrument Recognition by High-Level Features." In Challenges at the Interface of Data Analysis, Computer Science, and Optimization, 373–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-24466-7_38.
Patil, Swarupa R., and Sheetal J. Machale. "Indian Musical Instrument Recognition Using Gaussian Mixture Model." In Techno-Societal 2018, 51–57. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-16962-6_6.
Hall, Glenn Eric, Hassan Ezzaidi, and Mohammed Bahoura. "Study of Feature Categories for Musical Instrument Recognition." In Communications in Computer and Information Science, 152–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-35326-0_16.
Ślęzak, Dominik, Piotr Synak, Alicja Wieczorkowska, and Jakub Wróblewski. "KDD-Based Approach to Musical Instrument Sound Recognition." In Lecture Notes in Computer Science, 28–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-48050-1_5.
Diment, Aleksandr, Padmanabhan Rajan, Toni Heittola, and Tuomas Virtanen. "Group Delay Function from All-Pole Models for Musical Instrument Recognition." In Lecture Notes in Computer Science, 606–18. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12976-1_37.
Bhalke, D. G., C. B. Rama Rao, and D. S. Bormane. "Fractional Fourier Transform Based Features for Musical Instrument Recognition Using Machine Learning Techniques." In Proceedings of the International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA) 2013, 155–63. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-02931-3_19.
Mazarakis, Giorgos, Panagiotis Tzevelekos, and Georgios Kouroupetroglou. "Musical Instrument Recognition and Classification Using Time Encoded Signal Processing and Fast Artificial Neural Networks." In Advances in Artificial Intelligence, 246–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11752912_26.
Eichhoff, Markus, and Claus Weihs. "Recognition of Musical Instruments in Intervals and Chords." In Studies in Classification, Data Analysis, and Knowledge Organization, 333–41. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-01595-8_36.
Kubera, Elżbieta, Alicja A. Wieczorkowska, and Zbigniew W. Raś. "Time Variability-Based Hierarchic Recognition of Multiple Musical Instruments in Recordings." In Rough Sets and Intelligent Systems - Professor Zdzisław Pawlak in Memoriam, 347–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-30341-8_18.
Conference papers on the topic "Musical Instrument Recognition"
Hall, Glenn Eric, Hassan Ezzaidi, and Mohammed Bahoura. "Hierarchical parametrisation and classification for musical instrument recognition." In 2012 11th International Conference on Information Sciences, Signal Processing and their Applications (ISSPA). IEEE, 2012. http://dx.doi.org/10.1109/isspa.2012.6310442.
Lee, Wan-chi, and C. C. Jay Kuo. "Feature extraction for musical instrument recognition with application to music segmentation." In Optics East 2005, edited by Anthony Vetro, Chang Wen Chen, C. C. J. Kuo, Tong Zhang, Qi Tian, and John R. Smith. SPIE, 2005. http://dx.doi.org/10.1117/12.634225.
Jeyalakshmi, C., B. Murugeshwari, and M. Karthick. "HMM and K-NN based Automatic Musical Instrument Recognition." In 2018 2nd International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC). IEEE, 2018. http://dx.doi.org/10.1109/i-smac.2018.8653725.
Singh, Chetan Pratap, and T. Kishore Kumar. "Efficient selection of rhythmic features for musical instrument recognition." In 2014 International Conference on Advanced Communication, Control and Computing Technologies (ICACCCT). IEEE, 2014. http://dx.doi.org/10.1109/icaccct.2014.7019329.
Zhang, Lin, Shan Wang, Lianming Wang, and Yiyuan Zhang. "Musical Instrument Recognition Based on the Bionic Auditory Model." In 2013 International Conference on Information Science and Cloud Computing Companion (ISCC-C). IEEE, 2013. http://dx.doi.org/10.1109/iscc-c.2013.91.
Fujimoto, Minoru, Naotaka Fujita, Yoshinari Takegawa, Tsutomu Terada, and Masahiko Tsukamoto. "A Motion Recognition Method for a Wearable Dancing Musical Instrument." In 2009 International Symposium on Wearable Computers (ISWC). IEEE, 2009. http://dx.doi.org/10.1109/iswc.2009.22.
Ashraf, Mohsin, Farooq Ahmad, Raeena Rauqir, Fazeel Abid, Mudasser Naseer, and Ehteshamul Haq. "Emotion Recognition Based on Musical Instrument using Deep Neural Network." In 2021 International Conference on Frontiers of Information Technology (FIT). IEEE, 2021. http://dx.doi.org/10.1109/fit53504.2021.00066.
Gunasekaran, S., and K. Revathy. "Fractal dimension analysis of audio signals for Indian musical instrument recognition." In 2008 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2008. http://dx.doi.org/10.1109/icalip.2008.4590238.
Bhalke, D. G., C. B. Rama Rao, and D. S. Bormane. "Dynamic time warping technique for musical instrument recognition for isolated notes." In 2011 International Conference on Emerging Trends in Electrical and Computer Technology (ICETECT 2011). IEEE, 2011. http://dx.doi.org/10.1109/icetect.2011.5760221.
Azarloo, Akram, and Fardad Farokhi. "Automatic Musical Instrument Recognition Using K-NN and MLP Neural Networks." In 2012 4th International Conference on Computational Intelligence, Communication Systems and Networks (CICSyN 2012). IEEE, 2012. http://dx.doi.org/10.1109/cicsyn.2012.61.