Academic literature on the topic 'Musical Instrument Recognition'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Musical Instrument Recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Musical Instrument Recognition"

1

Livshin, A., and X. Rodet. "Purging Musical Instrument Sample Databases Using Automatic Musical Instrument Recognition Methods." IEEE Transactions on Audio, Speech, and Language Processing 17, no. 5 (July 2009): 1046–51. http://dx.doi.org/10.1109/tasl.2009.2018439.

2

Lei, Lei. "Multiple Musical Instrument Signal Recognition Based on Convolutional Neural Network." Scientific Programming 2022 (March 25, 2022): 1–11. http://dx.doi.org/10.1155/2022/5117546.

Abstract:
To improve the accuracy of multi-instrument recognition, a multi-instrument recognition method based on a convolutional neural network (CNN) is proposed, building on the basic principles and structure of CNNs. First, pitch feature detection and the constant-Q transform (CQT) are used to extract the signal characteristics of multiple instruments, which serve as the input to the CNN. Then, to improve the accuracy of multi-instrument signal recognition, a benchmark recognition model and a two-level recognition model are constructed, and both are verified experimentally. The results show that the two-level classification model established in this article can accurately identify and classify various musical instruments, with the most marked improvement in recognition accuracy for the xylophone. Compared with the benchmark model, the two-level model achieves the highest accuracy and precision, demonstrating superior performance for multi-instrument recognition.
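
As an illustration of the CQT feature-extraction step this abstract describes, here is a naive constant-Q magnitude computation in NumPy. It is a sketch only: `f_min`, `bins_per_octave`, the Hann window, and the normalization are assumptions for the example, not the paper's settings.

```python
import numpy as np

def cqt_magnitudes(x, sr, f_min=55.0, bins_per_octave=12, n_bins=48):
    """Naive constant-Q transform of a 1-D signal: one geometrically
    spaced complex kernel per bin, window length proportional to Q/f."""
    Q = 1.0 / (2 ** (1.0 / bins_per_octave) - 1)
    mags = np.zeros(n_bins)
    for k in range(n_bins):
        f_k = f_min * 2 ** (k / bins_per_octave)
        n_k = min(len(x), int(np.ceil(Q * sr / f_k)))
        n = np.arange(n_k)
        kernel = np.hanning(n_k) * np.exp(-2j * np.pi * f_k * n / sr)
        mags[k] = np.abs(np.dot(x[:n_k], kernel)) / n_k
    return mags

# A 440 Hz test tone should peak at the bin whose centre is 440 Hz.
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
m = cqt_magnitudes(tone, sr)
peak_bin = int(np.argmax(m))
```

In a recognition pipeline such magnitude vectors (computed per frame) would form the CNN's input plane.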
3

Chitre, Abhijit V., Ketan J. Raut, Tushar Jadhav, Minal S. Deshmukh, and Kirti Wanjale. "Hybrid Feature Based Classifier Performance Evaluation of Monophonic and Polyphonic Indian Classical Instruments Recognition." Journal of University of Shanghai for Science and Technology 23, no. 11 (November 2, 2021): 879–90. http://dx.doi.org/10.51201/jusst/21/11969.

Abstract:
Instrument recognition in computer music is an important research area that deals with sound modelling. Musical sounds comprise five prominent constituents: pitch, timbre, loudness, duration, and spatialization. The tonal sound is a function of all these components, each playing a critical role in deciding quality. The first four parameters can be modified, but timbre remains a challenge [6]; timbre therefore became the focus of this work. It is the sound quality that distinguishes one musical instrument from another, regardless of pitch or volume, and it is critical. Monophonic and polyphonic recordings of musical instruments can be identified using this method. To evaluate the proposed approach, three Indian instruments were used to generate the training data set: flute, harmonium, and sitar. The Indian instrument sounds are classified using statistical and spectral parameters, with hybrid features from different domains extracting the important characteristics of the musical sounds. SVM and GMM classifiers demonstrate their ability to classify Indian musical instruments accurately. SVM produces average accuracies of 89.88% on monophonic and 91.10% on polyphonic sounds. According to the experimental results, GMM outperforms SVM, with accuracies of 96.33% on monophonic recordings and 93.33% on polyphonic recordings.
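
The GMM classifier mentioned above can be sketched in its simplest form as one Gaussian per class, chosen by maximum log-likelihood. This single-component, diagonal-covariance stand-in is an illustration of the idea, not the paper's actual model; the "flute"/"sitar" feature data here are synthetic.

```python
import numpy as np

def fit_gaussians(X, y):
    """Fit one diagonal Gaussian per class: a 1-component stand-in
    for a per-class GMM."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6)
    return params

def log_likelihood(x, mean, var):
    # Diagonal-covariance Gaussian log-density (up to no constants dropped).
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def predict(params, x):
    # Pick the class whose Gaussian explains x best.
    return max(params, key=lambda c: log_likelihood(x, *params[c]))

rng = np.random.default_rng(0)
# Two synthetic "instrument" classes in a toy 4-D feature space.
flute = rng.normal(0.0, 1.0, size=(100, 4))
sitar = rng.normal(3.0, 1.0, size=(100, 4))
X = np.vstack([flute, sitar])
y = np.array([0] * 100 + [1] * 100)
params = fit_gaussians(X, y)
pred = predict(params, np.full(4, 3.0))
```

A full GMM would mix several such components per class and fit them with EM, but the decision rule stays the same.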
4

Essid, S., G. Richard, and B. David. "Musical instrument recognition by pairwise classification strategies." IEEE Transactions on Audio, Speech and Language Processing 14, no. 4 (July 2006): 1401–12. http://dx.doi.org/10.1109/tsa.2005.860842.

5

Martin, Keith D., and Youngmoo E. Kim. "Musical instrument identification: A pattern‐recognition approach." Journal of the Acoustical Society of America 104, no. 3 (September 1998): 1768. http://dx.doi.org/10.1121/1.424083.

6

Rajesh, Sangeetha, and Nalini N. J. "Recognition of Musical Instrument Using Deep Learning Techniques." International Journal of Information Retrieval Research 11, no. 4 (October 2021): 41–60. http://dx.doi.org/10.4018/ijirr.2021100103.

Abstract:
The proposed work investigates the impact of Mel Frequency Cepstral Coefficients (MFCC), Chroma DCT Reduced Pitch (CRP), and Chroma Energy Normalized Statistics (CENS) for instrument recognition from monophonic instrumental music clips using deep learning techniques: Bidirectional Recurrent Neural Networks with Long Short-Term Memory (BRNN-LSTM), stacked autoencoders (SAE), and Convolutional Neural Network - Long Short-Term Memory (CNN-LSTM). Initially, MFCC, CENS, and CRP features are extracted from instrumental music clips collected as a dataset from various online libraries. The deep neural network models are then built by training on the extracted features. Recognition rates of 94.9%, 96.8%, and 88.6% are achieved using combined MFCC and CENS features, and 90.9%, 92.2%, and 87.5% using combined MFCC and CRP features, with the deep learning models BRNN-LSTM, CNN-LSTM, and SAE, respectively. The experimental results show that combining MFCC features with CENS and CRP features at score level improves the efficacy of the proposed system.
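
The "score level" combination the abstract refers to can be sketched as a weighted average of per-class probability vectors from models trained on different feature sets. The class names and score values below are hypothetical, purely for illustration.

```python
import numpy as np

def fuse_scores(score_list, weights=None):
    """Score-level fusion: weighted average of per-class probability
    vectors coming from models trained on different feature sets."""
    scores = np.asarray(score_list, dtype=float)
    if weights is None:
        weights = np.ones(len(scores)) / len(scores)
    fused = np.average(scores, axis=0, weights=weights)
    return fused, int(np.argmax(fused))

# Hypothetical class scores for [flute, violin, piano] from two models.
mfcc_scores = [0.5, 0.3, 0.2]   # model trained on MFCC features
cens_scores = [0.2, 0.6, 0.2]   # model trained on CENS features
fused, label = fuse_scores([mfcc_scores, cens_scores])
```

Here the fused vector is [0.35, 0.45, 0.20], so the second class wins even though each model alone favoured a different instrument.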
7

Kurnia, Yusuf, and Toga Parlindungan Silaen. "Android-Based Musical Instrument Recognition Application For Vocational High School Level." bit-Tech 4, no. 2 (December 30, 2021): 47–55. http://dx.doi.org/10.32877/bt.v4i2.288.

Abstract:
With old ways of learning and teaching, Vocational High School students become bored, especially in Arts and Culture, a subject most students consider difficult and dislike. It is therefore necessary to create alternative, Android-based learning media so that learning becomes more interesting and enjoyable. Such media can also support children's creativity and thinking, because they include components that stimulate the imagination, such as the pictures and videos embedded in the material. The authors therefore designed and built an interactive learning application that includes these components, aimed at the Vocational High School level and titled "Android-Based Musical Instrument Recognition Application for Vocational High School Level"; the research was accordingly conducted at that level. Users wanted attractive images of each musical instrument (guitar, bass, drums, keyboard), video links, in-app questions, an icon-based menu, and attractive colors. After designing, building, and testing the application, several conclusions were drawn. First, with this application, Vocational High School students can easily learn musical instruments in a different way; students become enthusiastic, and studying arts and culture lessons, especially instrument techniques and chords, becomes fun. Second, the application hones students' cognitive abilities, because it contains practice questions integrated with the material, so students can easily test their understanding. Third, and lastly, in the authors' view the application is lacking in animation, which was added late in development. The authors hope the application can serve as study material for students who want to write a thesis related to learning musical instruments, and they welcome criticism and suggestions, since the design still has many shortcomings.
8

Gonzalez, Yubiry, and Ronaldo C. Prati. "Similarity of Musical Timbres Using FFT-Acoustic Descriptor Analysis and Machine Learning." Eng 4, no. 1 (February 9, 2023): 555–68. http://dx.doi.org/10.3390/eng4010033.

Abstract:
Musical timbre is a phenomenon of auditory perception that allows the recognition of musical sounds. The recognition of musical timbre is a challenging task because the timbre of a musical instrument or sound source is a complex and multifaceted phenomenon that is influenced by a variety of factors, including the physical properties of the instrument or sound source, the way it is played or produced, and the recording and processing techniques used. In this paper, we explore an abstract space with 7 dimensions formed by the fundamental frequency and FFT-Acoustic Descriptors in 240 monophonic sounds from the Tinysol and Good-Sounds databases, corresponding to the fourth octave of the transverse flute and clarinet. This approach allows us to unequivocally define a collection of points and, therefore, a timbral space (Category Theory) that allows different sounds of any type of musical instrument with its respective dynamics to be represented as a single characteristic vector. The geometric distance would allow studying the timbral similarity between audios of different sounds and instruments or between different musical dynamics and datasets. Additionally, a Machine-Learning algorithm that evaluates timbral similarities through Euclidean distances in the abstract space of 7 dimensions was proposed. We conclude that the study of timbral similarity through geometric distances allowed us to distinguish between audio categories of different sounds and musical instruments, between the same type of sound and an instrument with different relative dynamics, and between different datasets.
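
The core mechanism of this paper, measuring timbral similarity as Euclidean distance between 7-D descriptor vectors, is easy to sketch. The descriptor values and instrument labels below are illustrative placeholders, not data from the paper.

```python
import numpy as np

def timbral_distance(a, b):
    """Euclidean distance between two 7-D descriptor vectors
    (fundamental frequency plus six FFT-based acoustic descriptors)."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def nearest_timbre(query, catalogue):
    """Return the catalogue key whose descriptor vector lies closest."""
    return min(catalogue, key=lambda k: timbral_distance(query, catalogue[k]))

# Hypothetical descriptor vectors; values are for illustration only.
catalogue = {
    "flute_C4":    [261.6, 0.12, 0.80, 0.05, 0.30, 0.10, 0.02],
    "clarinet_C4": [261.6, 0.35, 0.55, 0.20, 0.10, 0.25, 0.08],
}
query = [262.0, 0.13, 0.78, 0.06, 0.29, 0.11, 0.02]
match = nearest_timbre(query, catalogue)
```

In practice the raw dimensions would usually be normalized first, since the fundamental frequency is on a much larger scale than the spectral descriptors.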
9

Sankaye, Satish R., Suresh C. Mehrotra, and U. S. Tandon. "Indian Musical Instrument Recognition using Modified LPC Features." International Journal of Computer Applications 122, no. 13 (July 18, 2015): 6–10. http://dx.doi.org/10.5120/21758-4991.

10

Siedenburg, Kai, Marc René Schädler, and David Hülsmeier. "Modeling the onset advantage in musical instrument recognition." Journal of the Acoustical Society of America 146, no. 6 (December 2019): EL523–EL529. http://dx.doi.org/10.1121/1.5141369.


Dissertations / Theses on the topic "Musical Instrument Recognition"

1

Malheiro, Frederico Alberto Santos de Carteado. "Automatic musical instrument recognition for multimedia indexing." Master's thesis, Faculdade de Ciências e Tecnologia, 2011. http://hdl.handle.net/10362/6124.

Abstract:
Dissertation presented within the Master's programme in Computer Engineering, as a partial requirement for obtaining the degree of Master in Computer Engineering.
The subject of automatic indexing of multimedia has been a target of much discussion and study. This interest is due to the exponential growth of multimedia content and the subsequent need to create methods that automatically catalogue this data. To pursue this idea, several projects and areas of study have emerged. The most relevant of these are the MPEG-7 standard, which defines a standardized system for the representation and automatic extraction of information present in the content, and Music Information Retrieval (MIR), which gathers several paradigms and areas of study relating to music. The main approach to this indexing problem relies on analysing data to obtain and identify descriptors that can help define what we intend to recognize (for instance, musical instruments, voice, facial expressions, and so on); this then provides us with information we can use to index the data. This dissertation will focus on audio indexing in music, specifically the recognition of musical instruments from recorded musical notes. Moreover, the developed system and techniques will also be tested on the recognition of ambient sounds (such as the sound of running water, cars driving by, and so on). Our approach uses non-negative matrix factorization to extract features from various types of sounds; these are then used to train a classification algorithm that is capable of identifying new sounds.
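
The non-negative matrix factorization step can be sketched with the classic Lee–Seung multiplicative updates for the Frobenius objective. This is a generic NMF illustration on a toy non-negative matrix, not the dissertation's actual feature pipeline.

```python
import numpy as np

def nmf(V, rank, n_iter=200, seed=0):
    """Non-negative matrix factorization V ≈ W @ H via Lee–Seung
    multiplicative updates minimizing the Frobenius norm."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    eps = 1e-9  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "spectrogram" built exactly from two non-negative sources,
# so a rank-2 factorization should reconstruct it almost perfectly.
rng = np.random.default_rng(1)
true_W = rng.random((8, 2))
true_H = rng.random((2, 20))
V = true_W @ true_H
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

For audio, V would be a magnitude spectrogram; the columns of W then act as spectral templates whose activations H can feed a classifier.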
2

Cros Vila, Laura. "Musical Instrument Recognition using the Scattering Transform." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-283597.

Abstract:
Thanks to technological progress in networking and signal processing, we can access a large amount of musical content. In order for users to search among these vast catalogs, they need to have access to music-related information beyond the pure digital music file. Manual annotation of music is too expensive, therefore automated annotation would be of great use. A meaningful description of the musical pieces requires incorporating information about the instruments present in them. In this work, we present an approach for musical instrument recognition using the scattering transform, a transformation that gives a translation-invariant representation that is stable to deformations and preserves high-frequency information for classification. We study recognition in both single-instrument and multiple-instrument contexts. We compare the performance of models using the scattering transform to those using other standard features. We also examine the impact of the amount of training data. The experiments carried out do not show a clearly superior performance for either feature representation. Still, the scattering transform is worth considering when choosing a feature extraction method if we want to be able to characterize non-stationary signal structures.
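
The scattering transform's first-order coefficients can be caricatured as: band-pass filter, take the complex modulus, then average (the low-pass step). The sketch below is a heavily simplified illustration of that cascade using Gaussian band-pass filters in the Fourier domain; it is not Mallat's full scattering network, and the filter centres and bandwidth are arbitrary choices for the example.

```python
import numpy as np

def scattering_first_order(x, n_filters=8, sigma=0.05):
    """Toy first-order scattering-like coefficients: analytic band-pass
    filtering, complex modulus, then global averaging."""
    N = len(x)
    X = np.fft.fft(x)
    freqs = np.fft.fftfreq(N)                 # normalized freqs in [-0.5, 0.5)
    centers = np.geomspace(0.02, 0.4, n_filters)
    coeffs = []
    for fc in centers:
        # Gaussian band-pass centred at fc; positive frequencies only,
        # so the filtered signal is (approximately) analytic.
        filt = np.exp(-0.5 * ((freqs - fc) / sigma) ** 2)
        band = np.fft.ifft(X * filt)
        coeffs.append(np.abs(band).mean())    # modulus + averaging
    return np.array(coeffs)

# A tone at normalized frequency 0.1 should excite the filter whose
# centre lies nearest 0.1 (index 4 of the geometric grid above).
n = 2048
x = np.cos(2 * np.pi * 0.1 * np.arange(n))
s = scattering_first_order(x)
```

The real transform iterates this wavelet-modulus operation to recover, at second order, the high-frequency information the averaging discards.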
3

Fuhrmann, Ferdinand. "Automatic musical instrument recognition from polyphonic music audio signals." Doctoral thesis, Universitat Pompeu Fabra, 2012. http://hdl.handle.net/10803/81328.

Abstract:
In this dissertation we present a method for the automatic recognition of musical instruments from music audio signals. Unlike most related approaches, our specific conception mostly avoids laboratory constraints on the method's algorithmic design, its input data, or the targeted application context. To account for the complex nature of the input signal, we limit the basic process in the processing chain to the recognition of a single predominant musical instrument from a short audio fragment. We thereby avoid resolving the mixture and rather predict one source from the overall timbre of the sound. To compensate for this restriction we further incorporate information derived from a hierarchical music analysis: we first incorporate musical context to extract instrumental labels from the time-varying model decisions. Second, the method incorporates information regarding the piece's formal aspects into the process. Finally, we include information from the collection level by exploiting associations between musical genres and instrumentations.
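
One very simple way to realize the "musical context" step the abstract mentions, stabilizing time-varying model decisions, is a sliding majority vote over frame-wise predictions. This is a generic smoothing sketch, not the author's actual label-extraction procedure.

```python
import numpy as np

def smooth_labels(frame_preds, win=5):
    """Majority vote over a centred sliding window: a simple way to
    fold temporal context into frame-wise instrument decisions."""
    frame_preds = np.asarray(frame_preds)
    half = win // 2
    out = np.empty_like(frame_preds)
    for t in range(len(frame_preds)):
        lo, hi = max(0, t - half), min(len(frame_preds), t + half + 1)
        vals, counts = np.unique(frame_preds[lo:hi], return_counts=True)
        out[t] = vals[np.argmax(counts)]    # ties go to the smallest label
    return out

# A spurious single-frame switch to class 2 is voted away.
preds = [1, 1, 1, 2, 1, 1, 0, 0, 0, 0]
smoothed = smooth_labels(preds, win=5)
```

More elaborate alternatives (median filtering, HMM decoding) follow the same intuition: an instrument label should persist over musically plausible time spans.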
4

Sandrock, Trudie. "Multi-label feature selection with application to musical instrument recognition." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019/11071.

Abstract:
Thesis (PhD)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: An area of data mining and statistics that is currently receiving considerable attention is the field of multi-label learning. Problems in this field are concerned with scenarios where each data case can be associated with a set of labels instead of only one. In this thesis, we review the field of multi-label learning and discuss the lack of suitable benchmark data available for evaluating multi-label algorithms. We propose a technique for simulating multi-label data, which allows good control over different data characteristics and which could be useful for conducting comparative studies in the multi-label field. We also discuss the explosion in data in recent years, and highlight the need for some form of dimension reduction in order to alleviate some of the challenges presented by working with large datasets. Feature (or variable) selection is one way of achieving dimension reduction, and after a brief discussion of different feature selection techniques, we propose a new technique for feature selection in a multi-label context, based on the concept of independent probes. This technique is empirically evaluated by using simulated multi-label data and it is shown to achieve classification accuracy with a reduced set of features similar to that achieved with a full set of features. The proposed technique for feature selection is then also applied to the field of music information retrieval (MIR), specifically the problem of musical instrument recognition. An overview of the field of MIR is given, with particular emphasis on the instrument recognition problem. The particular goal of (polyphonic) musical instrument recognition is to automatically identify the instruments playing simultaneously in an audio clip, which is not a simple task. We specifically consider the case of duets – in other words, where two instruments are playing simultaneously – and approach the problem as a multi-label classification one. 
In our empirical study, we illustrate the complexity of musical instrument data and again show that our proposed feature selection technique is effective in identifying relevant features and thereby reducing the complexity of the dataset without negatively impacting on performance.
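
The thesis's probe idea, judging a feature against artificial "probe" variables that are irrelevant by construction, can be sketched as follows. This is one generic interpretation (single-label scoring by absolute correlation, probes built by permuting real columns), not the thesis's exact multi-label algorithm.

```python
import numpy as np

def probe_feature_selection(X, y, n_probes=20, seed=0):
    """Keep features whose relevance score beats every random probe.
    Score: absolute Pearson correlation with a binary label; probes
    are random permutations of real feature columns (irrelevant by
    construction)."""
    rng = np.random.default_rng(seed)

    def score(col):
        return abs(np.corrcoef(col, y)[0, 1])

    probe_scores = []
    for _ in range(n_probes):
        col = rng.permutation(X[:, rng.integers(X.shape[1])])
        probe_scores.append(score(col))
    threshold = max(probe_scores)
    return [j for j in range(X.shape[1]) if score(X[:, j]) > threshold]

rng = np.random.default_rng(42)
n = 400
y = rng.integers(0, 2, n)
informative = y + rng.normal(0, 0.3, n)   # strongly correlated with the label
noise = rng.normal(size=(n, 5))           # pure noise features
X = np.column_stack([informative, noise])
selected = probe_feature_selection(X, y)
```

The appeal of the probe threshold is that it is data-driven: no fixed number of features has to be chosen in advance.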
5

Cox, Bethany G. "The Effects of Musical Instrument Gender on Spoken Word Recognition." Cleveland State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=csu1624382611571213.

6

Kitahara, Tetsuro. "Computational musical instrument recognition and its application to content-based music information retrieval." 京都大学 (Kyoto University), 2007. http://hdl.handle.net/2433/135955.

7

Freddi, Jacopo. "Metodi di Machine Learning applicati alla classificazione degli strumenti musicali." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13225/.

Abstract:
The thesis evaluates the effectiveness of mid-term spectral and temporal features for the problem of recognizing musical instruments from their tones. In particular, it tests how performance varies with the data source, and the robustness of analyses based on these features in contexts where the samples come from multiple sources, with different playing and recording techniques. The influence of other variables on recognition performance is also evaluated, and some complementary analysis techniques are tested.
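
"Mid-term" features are typically statistics of short-term frame features pooled over a longer window. The sketch below computes frame-wise energy and zero-crossing rate, then their clip-level mean and standard deviation; the frame/hop sizes and the choice of features are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np

def short_term_features(x, frame=256, hop=128):
    """Frame-wise energy and zero-crossing rate."""
    feats = []
    for start in range(0, len(x) - frame + 1, hop):
        f = x[start:start + frame]
        energy = float(np.mean(f ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(f))) > 0))
        feats.append((energy, zcr))
    return np.array(feats)

def mid_term_features(x, frame=256, hop=128):
    """Mid-term representation: mean and std of each short-term
    feature over the whole clip -> [mean_E, mean_ZCR, std_E, std_ZCR]."""
    st = short_term_features(x, frame, hop)
    return np.concatenate([st.mean(axis=0), st.std(axis=0)])

t = np.arange(4096) / 8000.0
tone = np.sin(2 * np.pi * 200 * t)                   # low tone: low ZCR
noise = np.random.default_rng(0).normal(size=4096)   # noise: high ZCR
mt_tone = mid_term_features(tone)
mt_noise = mid_term_features(noise)
```

A real system would use richer short-term descriptors (spectral centroid, rolloff, MFCCs), but the pooling step is the same.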
8

Nyströmer, Carl. "Musical Instrument Activity Detection using Self-Supervised Learning and Domain Adaptation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280810.

Abstract:
With the ever growing media and music catalogs, tools that search and navigate this data are important. For more complex search queries, meta-data is needed, but to manually label the vast amounts of new content is impossible. In this thesis, automatic labeling of musical instrument activities in song mixes is investigated, with a focus on ways to alleviate the lack of annotated data for instrument activity detection models. Two methods for alleviating the problem of small amounts of data are proposed and evaluated. Firstly, a self-supervised approach based on automatic labeling and mixing of randomized instrument stems is investigated. Secondly, a domain-adaptation approach that trains models on sampled MIDI files for instrument activity detection on recorded music is explored. The self-supervised approach yields better results compared to the baseline and points to the fact that deep learning models can learn instrument activity detection without an intrinsic musical structure in the audio mix. The domain-adaptation models trained solely on sampled MIDI files performed worse than the baseline, however using MIDI data in conjunction with recorded music boosted the performance. A hybrid model combining both self-supervised learning and domain adaptation by using both sampled MIDI data and recorded music produced the best results overall.
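
The self-supervised setup described here, mixing random instrument stems so that multi-label annotations come for free, can be sketched in a few lines. The stem names and random "audio" below are stand-ins for real recordings.

```python
import numpy as np

def make_training_example(stems, rng, min_active=1):
    """Mix a random subset of instrument stems and emit the multi-hot
    activity labels for free: the core of the self-supervised setup."""
    names = sorted(stems)
    k = rng.integers(min_active, len(names) + 1)
    active = rng.choice(len(names), size=k, replace=False)
    mix = sum(stems[names[i]] for i in active)
    labels = np.zeros(len(names))
    labels[active] = 1.0
    return mix, labels, [names[i] for i in sorted(active)]

rng = np.random.default_rng(7)
n = 1000
stems = {                      # stand-ins for real instrument stems
    "bass":   rng.normal(size=n),
    "drums":  rng.normal(size=n),
    "guitar": rng.normal(size=n),
}
mix, labels, active = make_training_example(stems, rng)
```

As the abstract notes, such mixes lack the musical structure of real songs, yet models trained on them can still learn instrument activity detection.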
9

Kaminskyj, Ian. "Automatic recognition of musical instruments using isolated monophonic sounds." Monash University, Dept. of Electrical and Computer Systems Engineering, 2004. http://arrow.monash.edu.au/hdl/1959.1/5212.

10

Carota, Massimo. "Neural network approach to problems of static/dynamic classification." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2008. http://hdl.handle.net/2108/580.

Abstract:
The purpose of my doctorate work has consisted in the exploration of the potentialities and of the effectiveness of different neural classifiers, by experimenting their application in the solution of classification problems occurring in the fields of interest typical of the research group of the “Laboratorio Circuiti” at the Department of Electronic Engineering in Tor Vergata. Moreover, though inspired by works already developed by other scholars, the adopted neural classifiers have been partially modified, in order to add to them interesting peculiarities not present in the original versions, as well as to adapt them to the applications of interest. These applications can be grouped in two great families. As regards the first application, the objects to be classified are identified by features of static nature, while as regards the second family, the objects to be classified are identified by features evolving in time. In relation to the research fields taken as reference, the ones that belong to the first family are the following: • classification, by means of fuzzy algorithms, of acoustic signals, with the aim of attributing them to the source that generated them (recognition of musical instruments) • exclusive classification of simple human motor acts for the purpose of a precocious diagnosis of nervous system diseases The second family of application has been represented by that research field that aims to the development of neural tools for the Automatic Tanscription of piano pieces. The first part of this thesis has been devoted to the detailed description of the adopted neural classification techniques, as well as of the modifications introduced in order to improve their behavior in relation to the particular applications. In the second part, the experiments by means of which I have estimated the before-mentioned neural classification techniques have been introduced. It exactly deals with experiments carried out in the chosen research fields. 
For every application, the II results achieved have been reported; in some cases, the further steps to perform have also been proposed. After a brief introduction to the biological neural model, a description follows about the model of the artificial neuron that has afterwards inspired all the other models: the one proposed by McCulloch and Pitts in 1943. Subsequently, the different typologies of architectures that characterize neural networks are shortly introduced, as regards the feed-forward networks as well as the recursive networks. Then, a description of some learning strategies (supervised and unsupervised), adopted in order to train neural networks, is also given; some criteria by means of which one can estimate the goodness of an opportunely trained neural network are also given (errors made vs. generalization capability). A great part of the adopted networks is based on adaptations of the Backpropagation algorithm; the other networks have been instead trained by means of algorithms based on statistical or geometric criteria. The Backpropagation algorithm has been improved by augmenting the degrees of freedom to the learning ability of a feed-forward neural network with the introduction of a spline adaptive activation function. A wide description has been given of the recurrent neural networks and particularly of the locally recurrent neural networks, networks for dynamic classification exploited in the automatic transcription of piano music. After a more or less rigorous definition of the concepts of classification and clustering, some paragraphs have been devoted to some statistical and geometric neural architectures, exploited in the implementation of static classifiers of common use and in particular in the application fields that have regarded my doctorate work. A separate paragraph has been devoted to the Simpson’s classifier and to the variants originated from my research work. 
These classifiers have proven to be very simple to implement and at the same time very flexible and efficient in many situations, including the problem of musical source recognition. Two approaches have been adopted in this case: in the first, the classifiers are trained by means of a purely supervised learning approach; in the second, the training algorithm, though keeping a substantially supervised nature, is preceded by a clustering phase, with the aim of improving the coverage of the input space in terms of errors and generalization. Subsequently, the locally recurrent neural networks are revisited as dynamic classifiers; their training, however, has been rethought to directly reduce the classification error instead of the classic mean-square error. The last three paragraphs are devoted to a detailed description, in terms of specifications, implementation choices, and final results, of the aforesaid fields of application. The results obtained in all three fields can be considered encouraging. In particular, the recognition of musical instruments by means of the adopted neural networks has shown results that can be considered comparable if not better than those obtained by means of other techniques, but with considerably less complex structures. In the case of the Automatic Transcription of piano pieces, the dynamic networks I adopted have given good results, although the computational resources required by such networks cannot be considered negligible. As far as the medical applications are concerned, the research is still in an incipient phase; however, the opinions expressed by those who work in this field have been substantially favorable.
The research activities my doctoral work is part of have been carried out in collaboration with the Department “INFOCOM” of the first University of Rome “La Sapienza”, as far as the recognition of musical instruments and the Automatic Transcription of piano pieces are concerned. The need to study the potential of neural classifiers in medical applications has instead arisen from a profitable existing collaboration with the Istituto Superiore di Sanità in Rome.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Musical Instrument Recognition"

1

Lapidus, Benjamin. New York and the International Sound of Latin Music, 1940-1990. University Press of Mississippi, 2020. http://dx.doi.org/10.14325/mississippi/9781496831286.001.0001.

Full text
Abstract:
New York City has long been a generative nexus for the transnational Latin music scene. Currently, there is no other place in the Americas where such large numbers of people from throughout the Caribbean come together to make music. This book seeks to recognize all of those musicians under one mighty musical sound, especially those who have historically gone unnoticed. Based on archival research, oral histories, interviews, and musicological analysis, the book examines how interethnic collaboration among musicians, composers, dancers, instrument builders, and music teachers in New York City set a standard for the study, creation, performance, and innovation of Latin music. Musicians specializing in Spanish Caribbean music in New York cultivated a sound that was grounded in tradition, including classical, jazz, and Spanish Caribbean folkloric music. The book studies this sound in detail and in its context. It offers a fresh understanding of how musicians made and formally transmitted Spanish Caribbean popular music in New York City from 1940 to 1990. Without diminishing the historical facts of segregation and racism the musicians experienced, the book treats music as a unifying force. By giving recognition to those musicians who helped bridge the gap between cultural and musical backgrounds, it recognizes the impact of entire ethnic groups who helped change music in New York. The study of these individual musicians through interviews and musical transcriptions helps to characterize the specific and identifiable New York City Latin music aesthetic that has come to be emulated internationally.
APA, Harvard, Vancouver, ISO, and other styles
2

Cook, Nicholas. Music: A Very Short Introduction. 2nd ed. Oxford University Press, 2021. http://dx.doi.org/10.1093/actrade/9780198726043.001.0001.

Full text
Abstract:
Music: A Very Short Introduction is a study of music and thinking about music, focusing on its social, cultural, and historical dimensions. It draws on a wealth of accessible examples, ranging from Beethoven to Chinese zither music. This VSI also discusses the nature of music as a real-time performance practice; the role of music in social and political action; and the nature of musical thinking, including the roles played in it by instruments, notations, and creative imagination. It explores the impact of digital technology on the production and consumption of music, including how it has transformed participatory music-making and the music business. Finally it examines music’s position in a globalized world. In many ways music has changed out of all recognition over the last twenty years, and so the second edition of this VSI has been comprehensively rewritten.
APA, Harvard, Vancouver, ISO, and other styles
3

Grutzmacher, Patricia A. The effect of tonal pattern training on the aural perception, reading recognition and melodic sight reading achievement of first year instrumental music students. 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lewis, Philip. Recognition and remediation of common playing problems of second-year grade 9 instrumentalists. 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Suchowiejko, Renata. Polsko-rosyjskie spotkania w przestrzeni kultury muzycznej: XIX wiek i początek XX stulecia. Ksiegarnia Akademicka Publishing, 2022. http://dx.doi.org/10.12797/9788381386685.

Full text
Abstract:
POLISH-RUSSIAN ENCOUNTERS IN THE SPACE OF MUSICAL CULTURE: THE 19TH AND EARLY 20TH CENTURIES
Musicians travel: in their youth, to learn and perfect their métier; later, to gain fame and recognition, secure artistic and financial satisfaction. Circulating music prints reach various recipients at home and abroad, while the production and distribution of such publications depend mainly on the needs and tastes of consumers. Musical instruments provided by the music industry also find their way to many customers. This industry is an integral part of culture as it provides the material basis for creating and performing music. Musical culture emerges ‘in movement’: through encounters and the exchange of people, compositions, ideas, and physical goods. It has its own dynamics and channels of expansion; it relies on extensive and ever-changing networks on personal, professional, institutional, and commercial levels. This musical exchange happens across state borders; it is not blocked by geography or politics, although both may affect it to an extent. The present collective work Polsko-rosyjskie spotkania w przestrzeni kultury muzycznej (XIX wiek i początek XX stulecia) [Polish-Russian Encounters in the Space of Musical Culture: The 19th and Early 20th Centuries] attempts to show this exchange through the testimony of historical sources: autographs, music prints, records of social life (concert programmes), and press materials. The main focus of the articles is on the presence of Polish music and Polish musicians in Russian culture; however, there is also a discussion of the opposite perspective, of Russian music and musicians in Polish culture.
APA, Harvard, Vancouver, ISO, and other styles
6

Irving, John. Performing Topics in Mozart’s Chamber Music with Piano. Edited by Danuta Mirka. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199841578.013.0021.

Full text
Abstract:
This chapter discusses ways in which an awareness of topics might influence performance behaviors. It contrasts topics as understood respectively by Aristotle (abstract concepts) and Vico (potential for action). Through case studies taken from Mozart’s chamber music with piano (specifically in a “period-instrument” context), it investigates subtle interactions between different dance topics (sarabande, gavotte, bourrée), which emerge only through careful consideration of notational features such as beat hierarchy and other aspects of historically informed performance practice hinted at in the notation. Awareness of these interactions, and recognition of their invitations to engage in certain performance gestures, offers the potential to create performance narratives that counterpoint the formal design mapped out in the notated score.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Musical Instrument Recognition"

1

Datta, Asoke Kumar, Sandeep Singh Solanki, Ranjan Sengupta, Soubhik Chakraborty, Kartik Mahto, and Anirban Patranabis. "Automatic Musical Instrument Recognition." In Signals and Communication Technology, 167–232. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-3959-1_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Eichhoff, Markus, and Claus Weihs. "Musical Instrument Recognition by High-Level Features." In Challenges at the Interface of Data Analysis, Computer Science, and Optimization, 373–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-24466-7_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Patil, Swarupa R., and Sheetal J. Machale. "Indian Musical Instrument Recognition Using Gaussian Mixture Model." In Techno-Societal 2018, 51–57. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-16962-6_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hall, Glenn Eric, Hassan Ezzaidi, and Mohammed Bahoura. "Study of Feature Categories for Musical Instrument Recognition." In Communications in Computer and Information Science, 152–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-35326-0_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ślȩzak, Dominik, Piotr Synak, Alicja Wieczorkowska, and Jakub Wróblewski. "KDD-Based Approach to Musical Instrument Sound Recognition." In Lecture Notes in Computer Science, 28–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-48050-1_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Diment, Aleksandr, Padmanabhan Rajan, Toni Heittola, and Tuomas Virtanen. "Group Delay Function from All-Pole Models for Musical Instrument Recognition." In Lecture Notes in Computer Science, 606–18. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12976-1_37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bhalke, D. G., C. B. Rama Rao, and D. S. Bormane. "Fractional Fourier Transform Based Features for Musical Instrument Recognition Using Machine Learning Techniques." In Proceedings of the International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA) 2013, 155–63. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-02931-3_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mazarakis, Giorgos, Panagiotis Tzevelekos, and Georgios Kouroupetroglou. "Musical Instrument Recognition and Classification Using Time Encoded Signal Processing and Fast Artificial Neural Networks." In Advances in Artificial Intelligence, 246–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11752912_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Eichhoff, Markus, and Claus Weihs. "Recognition of Musical Instruments in Intervals and Chords." In Studies in Classification, Data Analysis, and Knowledge Organization, 333–41. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-01595-8_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kubera, Elżbieta, Alicja A. Wieczorkowska, and Zbigniew W. Raś. "Time Variability-Based Hierarchic Recognition of Multiple Musical Instruments in Recordings." In Rough Sets and Intelligent Systems - Professor Zdzisław Pawlak in Memoriam, 347–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-30341-8_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Musical Instrument Recognition"

1

Hall, Glenn Eric, Hassan Ezzaidi, and Mohammed Bahoura. "Hierarchical parametrisation and classification for musical instrument recognition." In 2012 11th International Conference on Information Sciences, Signal Processing and their Applications (ISSPA). IEEE, 2012. http://dx.doi.org/10.1109/isspa.2012.6310442.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lee, Wan-chi, and C. C. Jay Kuo. "Feature extraction for musical instrument recognition with application to music segmentation." In Optics East 2005, edited by Anthony Vetro, Chang Wen Chen, C. C. J. Kuo, Tong Zhang, Qi Tian, and John R. Smith. SPIE, 2005. http://dx.doi.org/10.1117/12.634225.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Jeyalakshmi, C., B. Murugeshwari, and M. Karthick. "HMM and K-NN based Automatic Musical Instrument Recognition." In 2018 2nd International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC). IEEE, 2018. http://dx.doi.org/10.1109/i-smac.2018.8653725.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Singh, Chetan Pratap, and T. Kishore Kumar. "Efficient selection of rhythmic features for musical instrument recognition." In 2014 International Conference on Advanced Communication, Control and Computing Technologies (ICACCCT). IEEE, 2014. http://dx.doi.org/10.1109/icaccct.2014.7019329.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Lin, Shan Wang, Lianming Wang, and Yiyuan Zhang. "Musical Instrument Recognition Based on the Bionic Auditory Model." In 2013 International Conference on Information Science and Cloud Computing Companion (ISCC-C). IEEE, 2013. http://dx.doi.org/10.1109/iscc-c.2013.91.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Fujimoto, Minoru, Naotaka Fujita, Yoshinari Takegawa, Tsutomu Terada, and Masahiko Tsukamoto. "A Motion Recognition Method for a Wearable Dancing Musical Instrument." In 2009 International Symposium on Wearable Computers (ISWC). IEEE, 2009. http://dx.doi.org/10.1109/iswc.2009.22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ashraf, Mohsin, Farooq Ahmad, Raeena Rauqir, Fazeel Abid, Mudasser Naseer, and Ehteshamul Haq. "Emotion Recognition Based on Musical Instrument using Deep Neural Network." In 2021 International Conference on Frontiers of Information Technology (FIT). IEEE, 2021. http://dx.doi.org/10.1109/fit53504.2021.00066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gunasekaran, S., and K. Revathy. "Fractal dimension analysis of audio signals for Indian musical instrument recognition." In 2008 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2008. http://dx.doi.org/10.1109/icalip.2008.4590238.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bhalke, D. G., C. B. Rama Rao, and D. S. Bormane. "Dynamic time warping technique for musical instrument recognition for isolated notes." In 2011 International Conference on Emerging Trends in Electrical and Computer Technology (ICETECT 2011). IEEE, 2011. http://dx.doi.org/10.1109/icetect.2011.5760221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Azarloo, Akram, and Fardad Farokhi. "Automatic Musical Instrument Recognition Using K-NN and MLP Neural Networks." In 2012 4th International Conference on Computational Intelligence, Communication Systems and Networks (CICSyN 2012). IEEE, 2012. http://dx.doi.org/10.1109/cicsyn.2012.61.

Full text
APA, Harvard, Vancouver, ISO, and other styles
