Journal articles on the topic 'Musical Instrument Recognition'

Consult the top 50 journal articles for your research on the topic 'Musical Instrument Recognition.'

1

Livshin, A., and X. Rodet. "Purging Musical Instrument Sample Databases Using Automatic Musical Instrument Recognition Methods." IEEE Transactions on Audio, Speech, and Language Processing 17, no. 5 (July 2009): 1046–51. http://dx.doi.org/10.1109/tasl.2009.2018439.

2

Lei, Lei. "Multiple Musical Instrument Signal Recognition Based on Convolutional Neural Network." Scientific Programming 2022 (March 25, 2022): 1–11. http://dx.doi.org/10.1155/2022/5117546.

Abstract:
To improve the accuracy of multi-instrument recognition, a multipitch instrument recognition method based on the convolutional neural network (CNN) is proposed, building on the basic principles and structure of CNNs. First, pitch feature detection and the constant Q transform (CQT) are used to extract the signal characteristics of multiple instruments, which serve as the input to the CNN. Then, to further improve recognition accuracy, a benchmark recognition model and a two-level recognition model are constructed, and both models are verified experimentally. The results show that the two-level classification model established in this article can accurately identify and classify various musical instruments, with the clearest accuracy gain on the xylophone. Compared with the benchmark model, the two-level recognition model has the highest accuracy and precision, showing that it has superior performance and can improve the accuracy of multi-instrument recognition.
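The constant Q transform (CQT) front end mentioned in this abstract can be illustrated with a naive, unoptimized NumPy sketch. This is the generic textbook CQT (per-bin windowed complex kernels with a constant quality factor), not the feature extractor used in the paper; all parameter values are illustrative.

```python
import numpy as np

def naive_cqt(x, sr=22050, fmin=55.0, bins_per_octave=12, n_bins=48):
    """Naive constant-Q magnitude spectrum of one analysis frame.

    Bin k is centered at fmin * 2**(k / bins_per_octave); its window
    length shrinks with frequency so the quality factor Q stays constant.
    """
    Q = 1.0 / (2 ** (1.0 / bins_per_octave) - 1)   # constant quality factor
    out = np.zeros(n_bins)
    for k in range(n_bins):
        fk = fmin * 2 ** (k / bins_per_octave)
        n = min(int(np.ceil(Q * sr / fk)), len(x))  # per-bin window length
        t = np.arange(n)
        kernel = np.hanning(n) * np.exp(-2j * np.pi * fk * t / sr) / n
        out[k] = np.abs(np.dot(x[:n], kernel))
    return out
```

With these parameters, a pure 220 Hz tone (two octaves above fmin = 55 Hz) should peak at bin 24.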
3

Chitre, Abhijit V., Ketan J. Raut, Tushar Jadhav, Minal S. Deshmukh, and Kirti Wanjale. "Hybrid Feature Based Classifier Performance Evaluation of Monophonic and Polyphonic Indian Classical Instruments Recognition." Journal of University of Shanghai for Science and Technology 23, no. 11 (November 2, 2021): 879–90. http://dx.doi.org/10.51201/jusst/21/11969.

Abstract:
Instrument recognition is an important research area in computer music that deals with sound modelling. Musical sounds comprise five prominent constituents: pitch, timbre, loudness, duration, and spatialization. The tonal quality of a sound is a function of all of these components. The first four parameters can be modified, but timbre remains a challenge [6], so timbre is the focus of this work. Timbre is the sound quality that distinguishes one musical instrument from another regardless of pitch or volume, and it is critical. The proposed method can identify instruments in both monophonic and polyphonic recordings. To evaluate the approach, training data were generated from three Indian instruments: flute, harmonium, and sitar. The sounds are classified using statistical and spectral parameters, with hybrid features from different domains extracting the important characteristics of the musical sounds. SVM and GMM classifiers demonstrate accurate classification of Indian musical instruments: SVM achieves an average accuracy of 89.88% on monophonic and 91.10% on polyphonic sounds, while GMM outperforms SVM with 96.33% on monophonic and 93.33% on polyphonic recordings.
4

Essid, S., G. Richard, and B. David. "Musical instrument recognition by pairwise classification strategies." IEEE Transactions on Audio, Speech and Language Processing 14, no. 4 (July 2006): 1401–12. http://dx.doi.org/10.1109/tsa.2005.860842.

5

Martin, Keith D., and Youngmoo E. Kim. "Musical instrument identification: A pattern‐recognition approach." Journal of the Acoustical Society of America 104, no. 3 (September 1998): 1768. http://dx.doi.org/10.1121/1.424083.

6

Rajesh, Sangeetha, and N. J. Nalini. "Recognition of Musical Instrument Using Deep Learning Techniques." International Journal of Information Retrieval Research 11, no. 4 (October 2021): 41–60. http://dx.doi.org/10.4018/ijirr.2021100103.

Abstract:
The proposed work investigates the impact of Mel Frequency Cepstral Coefficients (MFCC), Chroma DCT-Reduced Pitch (CRP), and Chroma Energy Normalized Statistics (CENS) features on instrument recognition from monophonic instrumental music clips using three deep learning techniques: Bidirectional Recurrent Neural Networks with Long Short-Term Memory (BRNN-LSTM), stacked autoencoders (SAE), and Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM). Initially, MFCC, CENS, and CRP features are extracted from instrumental music clips collected from various online libraries. The deep neural network models are then built by training on the extracted features. Recognition rates of 94.9%, 96.8%, and 88.6% are achieved using combined MFCC and CENS features, and 90.9%, 92.2%, and 87.5% using combined MFCC and CRP features, with the BRNN-LSTM, CNN-LSTM, and SAE models, respectively. The experimental results show that combining MFCC features with CENS and CRP features at score level improves the performance of the proposed system.
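The MFCC features used in this entry (and several others in this list) follow a standard pipeline: framing, windowing, power spectrum, mel filterbank, log compression, and a DCT. A from-scratch NumPy sketch of that generic pipeline, with illustrative parameter values rather than the authors' settings:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels=26):
    # Triangular filters evenly spaced on the mel scale from 0 Hz to sr/2.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fb[m - 1, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fb[m - 1, k] = (r - k) / max(r - c, 1)   # falling slope
    return fb

def mfcc(x, sr, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    # Frame, window, power spectrum, mel filterbank, log, DCT-II.
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2 / n_fft
    log_mel = np.log(power @ mel_filterbank(sr, n_fft, n_mels).T + 1e-10)
    n = np.arange(n_mels)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return log_mel @ basis.T   # (n_frames, n_ceps)
```

In practice a library such as librosa would be used instead; this sketch only shows the shape of the computation.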
7

Kurnia, Yusuf, and Toga Parlindungan Silaen. "Android-Based Musical Instrument Recognition Application For Vocational High School Level." bit-Tech 4, no. 2 (December 30, 2021): 47–55. http://dx.doi.org/10.32877/bt.v4i2.288.

Abstract:
With old ways of learning and teaching, Vocational High School students become bored, especially in Arts and Culture, a subject most students consider difficult and dislike. Other, Android-based learning media are therefore needed to make learning more interesting and enjoyable. Such media are also expected to foster children's creativity and thinking, because they include components that hone the imagination, such as pictures and videos accompanying the material. The authors therefore design and build an interactive learning-media application that includes these components, aimed at the Vocational High School level and titled "Android-Based Musical Instrument Recognition Application for Vocational High School Level"; the research was conducted at that level. Users want attractive images of each musical instrument (guitar, bass, drums, keyboard), video links, in-application questions, an icon-based menu, and attractive colors. After designing, building, and testing the application, several conclusions can be drawn. First, with this application, Vocational High School students can easily learn musical instruments in new ways; students become enthusiastic, and studying cultural arts lessons, especially instrument techniques and chords, becomes fun. Second, the application hones students' cognitive abilities, because it contains practice questions integrated with the material, so students can easily test their abilities.
Third, and lastly, the authors acknowledge that the application is still lacking in animation. They hope it can serve as material for students who want to write a thesis related to learning musical instruments, and they welcome criticism and suggestions, since the design still has many shortcomings.
8

Gonzalez, Yubiry, and Ronaldo C. Prati. "Similarity of Musical Timbres Using FFT-Acoustic Descriptor Analysis and Machine Learning." Eng 4, no. 1 (February 9, 2023): 555–68. http://dx.doi.org/10.3390/eng4010033.

Abstract:
Musical timbre is a phenomenon of auditory perception that allows the recognition of musical sounds. The recognition of musical timbre is a challenging task because the timbre of a musical instrument or sound source is a complex and multifaceted phenomenon that is influenced by a variety of factors, including the physical properties of the instrument or sound source, the way it is played or produced, and the recording and processing techniques used. In this paper, we explore an abstract space with 7 dimensions formed by the fundamental frequency and FFT-Acoustic Descriptors in 240 monophonic sounds from the Tinysol and Good-Sounds databases, corresponding to the fourth octave of the transverse flute and clarinet. This approach allows us to unequivocally define a collection of points and, therefore, a timbral space (Category Theory) that allows different sounds of any type of musical instrument with its respective dynamics to be represented as a single characteristic vector. The geometric distance would allow studying the timbral similarity between audios of different sounds and instruments or between different musical dynamics and datasets. Additionally, a Machine-Learning algorithm that evaluates timbral similarities through Euclidean distances in the abstract space of 7 dimensions was proposed. We conclude that the study of timbral similarity through geometric distances allowed us to distinguish between audio categories of different sounds and musical instruments, between the same type of sound and an instrument with different relative dynamics, and between different datasets.
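The timbral-similarity measure described above reduces to Euclidean distance between 7-dimensional descriptor vectors. A minimal sketch, with entirely made-up descriptor values standing in for the paper's fundamental frequency and FFT-acoustic descriptors:

```python
import numpy as np

# Hypothetical 7-D descriptor vectors (f0 plus six acoustic descriptors);
# the numbers are illustrative, not taken from the paper or its datasets.
descriptors = {
    "flute_A4":    np.array([440.0, 0.82, 0.11, 0.30, 0.05, 0.40, 0.21]),
    "flute_B4":    np.array([494.0, 0.80, 0.12, 0.28, 0.06, 0.41, 0.20]),
    "clarinet_A4": np.array([440.0, 0.35, 0.52, 0.61, 0.30, 0.22, 0.48]),
}

def nearest_timbre(query, bank):
    """Return the bank entry with the smallest Euclidean distance to query."""
    return min(bank, key=lambda k: np.linalg.norm(bank[k] - query))

# A flute-like query near A4 should land on the flute_A4 prototype.
query = np.array([441.0, 0.81, 0.12, 0.30, 0.05, 0.40, 0.21])
```

In a real system the dimensions would typically be standardized first, so that the raw frequency axis does not dominate the distance.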
9

Sankaye, Satish R., Suresh C. Mehrotra, and U. S. Tandon. "Indian Musical Instrument Recognition using Modified LPC Features." International Journal of Computer Applications 122, no. 13 (July 18, 2015): 6–10. http://dx.doi.org/10.5120/21758-4991.

10

Siedenburg, Kai, Marc René Schädler, and David Hülsmeier. "Modeling the onset advantage in musical instrument recognition." Journal of the Acoustical Society of America 146, no. 6 (December 2019): EL523–EL529. http://dx.doi.org/10.1121/1.5141369.

11

Li, Yuhan. "Study on Intelligent Online Piano Teaching System Based on Deep Learning Recurrent Neural Network Model." Mobile Information Systems 2022 (July 21, 2022): 1–9. http://dx.doi.org/10.1155/2022/9469975.

Abstract:
This study was conducted to solve the problem of repetitive piano lessons and to bring a personalized experience to each piano learner. Applying deep learning (DL) to children's piano teaching has a positive effect on their interest in the subject and improves teaching quality. Musical instruments were identified in the system using an instrument recognition model developed with deep learning techniques; the model was also used to guide children learning the piano and to boost their enthusiasm for it. The proposed model's ability to recognize and acquire features was improved: the recurrent neural network (RNN) achieved an instrument recognition accuracy of 96.4%, and the model's recognition error rate decreased and stabilized as the number of iterations increased. The proposed RNN recognizes instruments by using DL to accurately identify musical properties.
12

Basri, Muntasyir, Lewi Jutomo, Caro David Hadel Edon, Marline Mayners, and Sri Prilmayanti Awaluddin. "Pendampingan Siswa SMP Dan SMA Dalam Memainkan Alat Musik Tradisional Sasando Daun Untuk Melestarikan Alat Musik Tradisional Etnik Nusa Tenggara Timur." JATI EMAS (Jurnal Aplikasi Teknik dan Pengabdian Masyarakat) 4, no. 1 (April 7, 2020): 33. http://dx.doi.org/10.36339/je.v4i1.273.

Abstract:
This community service aims to empower junior and senior high school students to improve their knowledge, abilities, and skills in playing the traditional musical instruments of the NTT (East Nusa Tenggara) ethnic groups, as an effort to preserve local culture, especially traditional instruments. The implementation methods are designed around community empowerment: identification of schools and target participants, preparation of training tools and materials, provision of materials, introduction of the traditional sasando daun instrument, recognition of tone and sound, training in playing the instrument, and monitoring, evaluation, and sustainability. The activity was carried out with students of St. Familia Catholic Middle School and Kupang Superior Generation Christian High School, 20 students from each. The results showed that all 40 participants were able to play the traditional sasando daun skillfully and well. It is expected that these 40 students will be able to direct and motivate other students to play the instrument, thereby preserving NTT ethnic culture and maintaining the ability to play it. As a continuation of the activity, all of these students will be included in performances of traditional sasando daun music at various events in and outside Kupang, and a Sasando Traditional Music Art Unit has been formed.
13

Sreekar, K., and A. Devansh Reddy. "Musical Tones Classification using Machine Learning." International Journal for Research in Applied Science and Engineering Technology 10, no. 12 (December 31, 2022): 1004–7. http://dx.doi.org/10.22214/ijraset.2022.48084.

Abstract:
In today's scenario, most things are digitized. Music is one of the most famous forms of art, learned by different people and taught by many musicians. Music information retrieval (MIR) and its applications are therefore gaining popularity following advances in machine learning technology. Various applications such as genre recognition, song recognition, automatic score generation, music transcription, and tempo and beat-type detection have been developed toward this goal. However, little research has been done on identifying musical instruments. The method proposed here develops a machine learning model based on Mel-frequency cepstral coefficients, extracting audio features directly from the raw audio dataset. A total of 600 audio files containing sound samples from 6 different instruments are used for system training and instrument prediction.
14

Rajesh, Sangeetha, and N. J. Nalini. "Musical instrument emotion recognition using deep recurrent neural network." Procedia Computer Science 167 (2020): 16–25. http://dx.doi.org/10.1016/j.procs.2020.03.178.

15

Driscoll, Virginia D., Jacob Oleson, Dingfeng Jiang, and Kate Gfeller. "Effects of Training on Recognition of Musical Instruments Presented through Cochlear Implant Simulations." Journal of the American Academy of Audiology 20, no. 01 (January 2009): 071–82. http://dx.doi.org/10.3766/jaaa.20.1.7.

Abstract:
Background: The simulation of the CI (cochlear implant) signal presents a degraded representation of each musical instrument, which makes recognition difficult. Purpose: To examine the efficiency and effectiveness of three types of training on recognition of musical instruments as presented through simulations of the sounds transmitted through a CI. Research Design: Participants were randomly assigned to one of three training conditions: repeated exposure, feedback, and direct instruction. Study Sample: Sixty-six adults with normal hearing. Intervention: Each participant completed three training sessions per week over a five-week period, in which they listened to the CI simulations of eight different musical instruments. Data Collection and Analysis: Analyses of the percentage of instruments identified correctly showed statistically significant differences in recognition accuracy between the three training conditions (p < .01). Results: Those assigned to the direct instruction group showed the greatest improvement over the five-week training period as well as sustained improvement after training. The feedback group achieved the next highest level of recognition accuracy. The repeated exposure group showed modest improvement during the first three weeks but no subsequent improvement. Conclusions: These results indicate that different types of training are differentially effective at improving recognition of musical instruments presented through a degraded signal, which has practical implications for the auditory rehabilitation of cochlear implant users.
16

Kwon, Soon-Il, and Wan-Joo Park. "Musical Instrument Recognition for the Categorization of UCC Music Source." KIPS Transactions:PartB 17B, no. 2 (April 30, 2010): 107–14. http://dx.doi.org/10.3745/kipstb.2010.17b.2.107.

17

Eyharabide, Victoria, Imad Eddine Ibrahim Bekkouch, and Nicolae Dragoș Constantin. "Knowledge Graph Embedding-Based Domain Adaptation for Musical Instrument Recognition." Computers 10, no. 8 (August 3, 2021): 94. http://dx.doi.org/10.3390/computers10080094.

Abstract:
Convolutional neural networks raised the bar for machine learning and artificial intelligence applications, mainly due to the abundance of data and computation. However, there is not always enough data for training, especially for historical collections of cultural heritage where the original artworks have been destroyed or damaged over time. Transfer learning and domain adaptation techniques are possible solutions to this data scarcity. This article presents a new method for domain adaptation based on knowledge graph embeddings, which project a knowledge graph into a lower-dimensional space where entities and relations are represented as continuous vectors. Our method uses these semantic vector spaces as a key ingredient to guide the domain adaptation process. We combined knowledge graph embeddings with visual embeddings from the images and trained a neural network with the combined embeddings as anchors, using an extension of Fisher's linear discriminant. We evaluated our approach on two cultural heritage datasets of images containing medieval and renaissance musical instruments. The experimental results showed a significant improvement over the baselines and over state-of-the-art domain adaptation methods.
18

Kaminskyj, Ian, and Tadeusz Czaszejko. "Automatic Recognition of Isolated Monophonic Musical Instrument Sounds using kNNC." Journal of Intelligent Information Systems 24, no. 2-3 (March 2005): 199–221. http://dx.doi.org/10.1007/s10844-005-0323-7.

19

Giannoulis, Dimitrios, and Anssi Klapuri. "Musical Instrument Recognition in Polyphonic Audio Using Missing Feature Approach." IEEE Transactions on Audio, Speech, and Language Processing 21, no. 9 (September 2013): 1805–17. http://dx.doi.org/10.1109/tasl.2013.2248720.

20

Artan, Ismihan, and Gulden Uyanik Balat. "Recognition of Musical Instruments by Children between 4 and 6 Years of Age and Research concerning the Natural Sounds They Associate with Those Instruments." Contemporary Issues in Early Childhood 4, no. 3 (September 2003): 357–69. http://dx.doi.org/10.2304/ciec.2003.4.3.9.

Abstract:
Musical instruments help children to gain a lot of experience related to sounds and they play an important role in supporting skill development in children. In addition, with instruments children can create and explore their own music, rather than participate with and react to others. In this school-based research study 147 children were chosen randomly from among those who attended private kindergartens in high socio-economic areas in the city center of Ankara, Turkey. All children were aged between 4 and 6 years. The research methodology comprised a questionnaire to gather demographic information about the children, the use of musical instruments and a set of cards containing pictures of musical instruments. When the children were asked the question, ‘What is music’, they answered mainly by saying, ‘playing a musical instrument’. Many of the children were able to identify musical instruments correctly when shown pictures of them.
21

Liu, Lili. "Lute Acoustic Quality Evaluation and Note Recognition Based on the Softmax Regression BP Neural Network." Mathematical Problems in Engineering 2022 (April 12, 2022): 1–7. http://dx.doi.org/10.1155/2022/1978746.

Abstract:
Note recognition technology has very important applications in instrument tuning, automatic computer music recognition, music database retrieval, and electronic music synthesis. This paper studies acoustic quality evaluation and note recognition based on artificial neural networks, taking the lute as an example. For acoustic quality evaluation, subjective evaluation criteria for musical instruments are used as the basis for obtaining subjective ratings of the acoustic quality of the lute. CQT and MFCC note-signal features are then extracted, and the single and combined features are used as the input to a Softmax regression BP neural network multiclass recogniser, with the classification coding of standard tones as the target for supervised network learning. The algorithm can identify 25 notes from bass to treble with high accuracy, with an average recognition rate of 95.6%; compared with other recognition algorithms, it has fewer constraints, a wider note range, and a higher recognition rate.
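The output stage this abstract describes, a softmax-regression classifier trained by backpropagation, can be sketched generically in NumPy. This is plain multinomial logistic regression with batch gradient descent, not the paper's full network; the toy data and hyperparameters below are illustrative:

```python
import numpy as np

def train_softmax(X, y, n_classes, lr=0.5, epochs=300):
    """Multinomial logistic (softmax) regression by batch gradient descent.

    X: (n_samples, n_features); y: integer class labels.
    Returns a weight matrix W of shape (n_features + 1, n_classes),
    the extra row being the bias.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append bias column
    W = np.zeros((Xb.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                       # one-hot targets
    for _ in range(epochs):
        z = Xb @ W
        z -= z.max(axis=1, keepdims=True)          # numerical stability
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)          # softmax probabilities
        W -= lr * Xb.T @ (p - Y) / len(X)          # cross-entropy gradient
    return W

def predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(Xb @ W, axis=1)
```

On real note data, X would hold the CQT/MFCC feature vectors and y the note labels.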
22

Wang, Zhuo, Zhenjiang Zhao, and Lujia Wei. "Correlation Analysis between Cultural Differences and Normal Music Perception Based on Embedded Voice Multisensor." Journal of Sensors 2022 (January 10, 2022): 1–9. http://dx.doi.org/10.1155/2022/9248820.

Abstract:
In order to effectively reduce the sense of difference that an extracorporeal device brings to users and to minimize related derived problems, implementations based on embedded multisensors have become a major breakthrough in cochlear implant research. This study explores the impact of cultural differences on timbre perception: it evaluates the correlation between cultural differences and music perception in normal-hearing listeners using embedded multisensors, and assesses the ability of such listeners to discriminate musical timbre, providing a basis for evaluating the music perception of normal-hearing people and for the design and development of evaluation tools. Adults with normal hearing from different cultures, matched for musical experience, were selected and tested with music evaluation software on their ability to recognize different musical instruments and the number of instruments playing, and the recognition accuracy of the two tests was recorded. The results show that musical instrument recognition accuracy in the mother tongue group was 15% higher than in the foreign language group; the average recognition rates of oboe, trumpet, and xylophone in the foreign language group were lower than in the mother tongue group; and among the wind instruments, the recognition rate of oboe and trumpet was low in both groups, with confusion between oboe and trumpet especially high in the foreign language group.
23

Blazhevych, Vasyl. "EVOLUTION OF GUITAR ART PERFORMANCE TRADITIONS IN THE NATIONAL CULTURAL AND EDUCATIONAL DIMENSION." Aesthetics and Ethics of Pedagogical Action, no. 15 (March 9, 2017): 107–15. http://dx.doi.org/10.33989/2226-4051.2017.15.175896.

Abstract:
The essence and content of the concepts "performing tradition" and "cultural and educational dimension" are explained in the article. The author examines the history of the emergence and development of guitar art in Ukraine as a whole, and specifically the performance traditions of its guitarists. The practical educational and performing experience of many prominent guitarists in the national cultural and educational dimension, together with their performing concepts, techniques, and methods, is described, and a complete account of the evolution of guitar art in Ukraine is given. An objective study of the historical development of national musical culture is an extremely topical issue today in the context of scientific understanding, particularly through disclosing the distinctive features of national musical performance on individual instruments. Interest in the history, theory, and technique of instrumental performance is currently growing, and the evolution of performance traditions has been studied in light of the diversity of the world's musical instruments. The twentieth century began a process of recognition of the guitar as a professional instrument, and it was integrated into the system of specialized music education. As a result, the quality of guitar performance has significantly increased, the palette of guitar music has become more popular, and the multidisciplinary academic chamber-instrumental direction has come to embrace both classical and jazz guitar techniques. The principles and methods of forming performance skills elaborated in the practice of Ukrainian and foreign guitarists can be used for the further development of musical training and the education of talented youth.
24

Dewi, Christine, and Rung-Ching Chen. "Combination of Resnet and Spatial Pyramid Pooling for Musical Instrument Identification." Cybernetics and Information Technologies 22, no. 1 (March 1, 2022): 104–16. http://dx.doi.org/10.2478/cait-2022-0007.

Abstract:
Identifying similar objects is one of the most challenging tasks in computer vision image recognition. The following musical instruments are recognized in this study: French horn, harp, recorder, bassoon, cello, clarinet, erhu, guitar, saxophone, trumpet, and violin. Numerous musical instruments are nearly identical in size, form, and sound. Our work combines ResNet-50 with Spatial Pyramid Pooling (SPP) to identify musical instruments that are similar to one another. The ResNet-50 and ResNet-50 SPP models are evaluated on floating-point operations (FLOPS), detection time, mAP, and IoU. Our work increases detection performance on musical instruments that resemble one another: the proposed method, ResNet-50 SPP, shows the highest average accuracy of 84.64% compared with the results of previous studies.
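Spatial pyramid pooling, which the authors add to ResNet-50, turns a variable-sized feature map into a fixed-length vector by max-pooling over progressively finer grids. A NumPy sketch of the pooling step alone (the pyramid levels are illustrative, and the map is assumed to be at least as large as the finest grid):

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map over an l x l grid per pyramid
    level and concatenate, yielding a fixed-length vector regardless of
    the spatial size H x W."""
    C, H, W = fmap.shape
    pooled = []
    for l in levels:
        hs = np.linspace(0, H, l + 1).astype(int)   # grid row boundaries
        ws = np.linspace(0, W, l + 1).astype(int)   # grid col boundaries
        for i in range(l):
            for j in range(l):
                cell = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))  # per-channel max
    return np.concatenate(pooled)
```

With levels (1, 2, 4) the output always has C * (1 + 4 + 16) = 21C entries, which is what lets a fixed-size classifier head follow convolutional features of any input size.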
25

Kumar Banchhor, Sumit, and Arif Khan. "Musical Instrument Recognition using Zero Crossing Rate and Short-time Energy." International Journal of Applied Information Systems 1, no. 3 (February 18, 2012): 16–19. http://dx.doi.org/10.5120/ijais12-450131.

26

Szeliga, Dominika, Paweł Tarasiuk, Bartłomiej Stasiak, and Piotr S. Szczepaniak. "Musical Instrument Recognition with a Convolutional Neural Network and Staged Training." Procedia Computer Science 207 (2022): 2493–502. http://dx.doi.org/10.1016/j.procs.2022.09.307.

27

Morvidone, Marcela, Bob L. Sturm, and Laurent Daudet. "Incorporating scale information with cepstral features: Experiments on musical instrument recognition." Pattern Recognition Letters 31, no. 12 (September 2010): 1489–97. http://dx.doi.org/10.1016/j.patrec.2009.12.035.

28

Zlatintsi, A., and P. Maragos. "Multiscale Fractal Analysis of Musical Instrument Signals With Application to Recognition." IEEE Transactions on Audio, Speech, and Language Processing 21, no. 4 (April 2013): 737–48. http://dx.doi.org/10.1109/tasl.2012.2231073.

29

Maliki, I., and Sofiyanudin. "Musical Instrument Recognition using Mel-Frequency Cepstral Coefficients and Learning Vector Quantization." IOP Conference Series: Materials Science and Engineering 407 (September 26, 2018): 012118. http://dx.doi.org/10.1088/1757-899x/407/1/012118.

30

Bhalke, D. G., C. B. Rama Rao, and D. S. Bormane. "Hybridization of Fractional Fourier Transform and Acoustic Features for Musical Instrument Recognition." International Journal of Signal Processing, Image Processing and Pattern Recognition 7, no. 1 (February 28, 2014): 275–82. http://dx.doi.org/10.14257/ijsip.2014.7.1.26.

31

Kania, Paulina, Dariusz Kania, and Tomasz Łukaszewicz. "A Hardware-Oriented Algorithm for Real-Time Music Key Signature Recognition." Applied Sciences 11, no. 18 (September 20, 2021): 8753. http://dx.doi.org/10.3390/app11188753.

Abstract:
The algorithm presented in this paper provides the means for the real-time recognition of the key signature associated with a given piece of music, based on the analysis of a very small number of initial notes. The algorithm can easily be implemented in electronic musical instruments, enabling real-time generation of musical notation. The essence of the solution proposed herein boils down to the analysis of a music signature, defined as a set of twelve vectors representing the particular pitch classes. These vectors are anchored in the center of the circle of fifths, pointing radially towards each of the twelve tones of the chromatic scale. Besides a thorough description of the algorithm, the authors also present a theoretical introduction to the subject matter. The results of the experiments performed on preludes and fugues by J.S. Bach, as well as the preludes, nocturnes, and etudes of F. Chopin, validating the usability of the method, are also presented and thoroughly discussed. Additionally, the paper includes a comparison of the efficacies obtained using the developed solution with the efficacies observed in the case of music notation generated by a musical instrument of a reputable brand, which clearly indicates the superiority of the proposed algorithm.
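The "music signature" described above, twelve pitch-class vectors anchored at the center of the circle of fifths, can be sketched as follows. This is a simplified illustration of the representation only, not the published key-recognition algorithm:

```python
import numpy as np

# Pitch classes ordered along the circle of fifths, 30 degrees apart.
FIFTHS = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]

def music_signature(notes):
    """Sum unit vectors, one per note, each pointing radially at its
    pitch class's position on the circle of fifths (30 deg per step)."""
    sig = np.zeros(2)
    for name in notes:
        k = FIFTHS.index(name)
        angle = np.deg2rad(30.0 * k)
        sig += np.array([np.cos(angle), np.sin(angle)])
    return sig
```

As a sanity check, the diatonic set of C major (F, C, G, D, A, E, B) occupies seven consecutive positions on the circle and is symmetric about D's direction (60 degrees), so its resultant vector points exactly there.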
APA, Harvard, Vancouver, ISO, and other styles
32

King, Caleb J., Anya E. Shorey, Kelly L. Whiteford, and Christian E. Stilp. "Testing the role of primary musical instrument on context effects in music perception." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A132. http://dx.doi.org/10.1121/10.0015790.

Full text
Abstract:
Musicians display numerous perceptual benefits versus nonmusicians, such as better pitch and melody perception (the “musician advantage”). Recently, Shorey et al. (2021 ASA) investigated whether this musician advantage extended to spectral contrast effects (SCEs; categorization shifts produced by acoustic properties of surrounding sounds) in musical instrument recognition. Musicians and nonmusicians listened to a context sound (filtered string quartet passage highlighting frequencies of the horn or saxophone), then categorized a target sound (tone from a six-step series varying from horn to saxophone). Although musicians displayed superior pitch discrimination, their SCEs did not differ from those of nonmusicians. Importantly, separate research has reported that a musician’s instrument of training heavily influences musical perception, potentially improving frequency discrimination and rhythm perception/production. However, in the Shorey et al. study, musicians were recruited without respect to their primary instrument. This follow-up study uses the same methodology as Shorey et al. but recruits only musicians who play horn or saxophone (the instruments used as target sounds) as their primary instrument. It is predicted that horn and saxophone players will display larger SCEs than nonmusicians due to their intimate familiarity with the instrument timbres. Preliminary data are trending in the predicted direction; full results will be discussed.
APA, Harvard, Vancouver, ISO, and other styles
33

Kim, Daeyeol, Tegg Taekyong Sung, Soo Young Cho, Gyunghak Lee, and Chae Bong Sohn. "A Single Predominant Instrument Recognition of Polyphonic Music Using CNN-based Timbre Analysis." International Journal of Engineering & Technology 7, no. 3.34 (September 1, 2018): 590. http://dx.doi.org/10.14419/ijet.v7i3.34.19388.

Full text
Abstract:
Classifying musical instruments in polyphonic music is a challenging but important task in music information retrieval; it enables automatic tagging of music information, such as genre classification. Previously, most spectrogram-analysis work has used the Short-Time Fourier Transform (STFT) and Mel-Frequency Cepstral Coefficients (MFCC). Recently, the sparkgram has been researched and used in audio source analysis. Moreover, modified convolutional neural networks (CNN) have been widely studied as a deep learning approach, but many results have not improved drastically. Instead of improving backbone networks, we focus on the preprocessing stage. In this paper, we use a CNN and Hilbert Spectral Analysis (HSA) to address the polyphonic music problem. HSA is performed on fixed-length segments of polyphonic music, and the predominant instrument is labeled from its result. As a result, we achieve a state-of-the-art result on the IRMAS dataset and a 3% performance improvement on individual instruments.
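As a point of reference for the preprocessing step discussed above, the conventional STFT log-spectrogram, the baseline the authors contrast with Hilbert Spectral Analysis, can be sketched in plain NumPy; this is an illustrative baseline, not the paper's HSA pipeline, and all names are our own:

```python
import numpy as np

def stft_log_spectrogram(signal, frame_len=1024, hop=512):
    """Short-Time Fourier Transform magnitude in dB: the kind of
    fixed-size 2-D 'image' typically fed to a CNN classifier."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))   # shape (frames, bins)
    return 20 * np.log10(spec + 1e-10).T         # shape (bins, frames)

# One second of a 440 Hz tone at 16 kHz, as a stand-in for an instrument note.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)
S = stft_log_spectrogram(x)
print(S.shape)
```

With these parameters each column is one analysis frame and each row one frequency bin, so the output can be fed to a 2-D convolutional network exactly like an image.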
APA, Harvard, Vancouver, ISO, and other styles
34

Ramadhan, Zulkifli Syahrir, Reza Andrea, and Suswanto. "Development of Augmented Reality Traditional Musical Education Applications." TEPIAN 3, no. 1 (March 1, 2022): 49–54. http://dx.doi.org/10.51967/tepian.v3i1.690.

Full text
Abstract:
Technology is developing very rapidly today, and education is no exception: the convenience offered by technology has shifted the way children born and raised in the digital era learn. In the field of education, augmented reality technology has been widely implemented, for example in applications that use augmented reality books as aids. This study aims to create an alternative application as a learning medium for the introduction of traditional musical instruments, using a multimedia development method, for an elementary school educational institution. The augmented reality method used in this study is image-based tracking, which uses images as markers. The tool that supports building the AR is the Vuforia SDK, which is used to store the markers. The result of this research is an Augmented Reality Traditional Musical Instrument Recognition application that runs on the Android platform; respondent testing using the User Acceptance Testing (UAT) method yielded a score of 86%.
APA, Harvard, Vancouver, ISO, and other styles
35

Wieczorkowska, Alicja, Elżbieta Kubera, and Agnieszka Kubik-Komar. "Analysis of Recognition of a Musical Instrument in Sound Mixes Using Support Vector Machines." Fundamenta Informaticae 107, no. 1 (2011): 85–104. http://dx.doi.org/10.3233/fi-2011-394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Manitsaris, Sotiris, Apostolos Tsagaris, Kosmas Dimitropoulos, and Athanasios Manitsaris. "Finger musical gesture recognition in 3D space without any tangible instrument for performing arts." International Journal of Arts and Technology 8, no. 1 (2015): 11. http://dx.doi.org/10.1504/ijart.2015.067390.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Duan, Ying. "Construction of Vocal Timbre Evaluation System Based on Classification Algorithm." Scientific Programming 2022 (June 6, 2022): 1–10. http://dx.doi.org/10.1155/2022/6893128.

Full text
Abstract:
With the continuous development of communication, computer, and network technology, information such as images, videos, and audio has grown exponentially, and people are exposed to massive amounts of multimedia content, with easy and quick access to increasingly rich music resources. New technologies are therefore urgently needed for their effective management, and automatic classification of audio signals has become a focus of engineering and academic attention. Currently, music retrieval can be achieved by selecting song titles and singer names, but as living standards improve, the spiritual realm is also enriched: people want to select music whose type of emotional expression matches their own emotions. This work mainly covers the basic principles of audio classification, the analysis and extraction of music emotion features, and the selection of the best classifier. Two classification algorithms, a Gaussian mixture model and AdaBoost, are used to classify music emotions, and the two classifiers are combined. In this paper, we propose the Discrete Harmonic Transform (DHT), a sparse transform based on harmonic frequencies. The paper derives and proves the formula of the Discrete Harmonic Transform and further analyzes the harmonic structure of musical tone signals and the accuracy of that structure. Since the timbre of a musical instrument depends on its harmonic structure, and similar instruments have similar harmonic structures, the discrete harmonic transform coefficients can be defined as objective indicators corresponding to instrument timbre; the concept of a timbre expression spectrum is thus proposed, and a specific construction algorithm is given. In the application of musical instrument recognition, 53-dimensional combined features of LPCC, MFCC, and the timbre expression spectrum are selected, and a nonlinear support vector machine is used as the classifier. The classification recognition rate is improved by reducing the number of feature dimensions.
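The abstract does not reproduce the DHT formula, but the underlying idea, projecting a tone onto the harmonics of its fundamental to obtain a compact descriptor of harmonic structure (and hence timbre), can be sketched as follows. This is an assumption-laden illustration, not the paper's definition:

```python
import numpy as np

def harmonic_coefficients(x, sr, f0, n_harmonics=8):
    """Amplitudes of the first n harmonics of f0, via inner products
    with cosine/sine bases at k*f0; over an integer number of periods
    these bases are (near-)orthogonal, so this equals a least-squares fit."""
    t = np.arange(len(x)) / sr
    amps = []
    for k in range(1, n_harmonics + 1):
        c = np.cos(2 * np.pi * k * f0 * t)
        s = np.sin(2 * np.pi * k * f0 * t)
        a = 2 * np.dot(x, c) / len(x)
        b = 2 * np.dot(x, s) / len(x)
        amps.append(np.hypot(a, b))
    return np.array(amps)

sr, f0 = 16000, 220.0
t = np.arange(sr) / sr
# A synthetic 'instrument' tone: fundamental plus a strong 3rd harmonic.
x = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 3 * f0 * t)
h = harmonic_coefficients(x, sr, f0)
print(np.round(h, 3))
```

A vector like this, concatenated with LPCC and MFCC features, is the kind of combined descriptor the abstract describes feeding to a support vector machine.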
APA, Harvard, Vancouver, ISO, and other styles
38

Bai, Jie. "Improvement of Speech Recognition Technology in Piano Music Scene Based on Deep Learning of Internet of Things." Computational Intelligence and Neuroscience 2022 (July 13, 2022): 1–12. http://dx.doi.org/10.1155/2022/4024511.

Full text
Abstract:
The main goal of speech recognition technology is to use computers to convert human analog speech signals into machine-usable signals, such as behavior patterns or binary codes. This is distinct from speaker identification and speaker verification, which attempt to identify or confirm the speaker who uttered the speech rather than the lexical content it contains. The short-term goal is a system that can record the musical sound a user plays on a given instrument, extract note and duration information from it, and generate the corresponding MID file according to the MIDI standard; the instrument type can be set in advance so that the sound is transformed, for example, a melody played on a harmonica is played back as a piano sound. With the rapid development of the mobile Internet, fields such as machine learning, electronic communication, and navigation place high demands on real-time, standards-compliant text recognition technology. This paper merges the sound of visual music into text-based dataset training, uses exported scanner features for model training, uses the model to extract features, and then uses those features for pretraining of a DNN. The results, including behavioral tests on mobile devices, show that in end-to-end speech recognition, replacing long short-term memory networks with dilated convolutions can provide a larger receptive field. The experimental results show that, with 2400 input sampling points, the convergence of the model slows after more than 90 iterations and the loss on the validation set increases with the number of iterations. This shows that the model in this paper can fully meet the needs of speech recognition in piano music scenes.
APA, Harvard, Vancouver, ISO, and other styles
39

Shi, Xuan, Erica Cooper, and Junichi Yamagishi. "Use of Speaker Recognition Approaches for Learning and Evaluating Embedding Representations of Musical Instrument Sounds." IEEE/ACM Transactions on Audio, Speech, and Language Processing 30 (2022): 367–77. http://dx.doi.org/10.1109/taslp.2022.3140549.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Özden, Berrin, and Selin Özdemir. "University Students’ Attitudes towards Perceived Family Support in Individual Musical Instrument Education." International Journal of Education and Literacy Studies 10, no. 4 (October 31, 2022): 75–80. http://dx.doi.org/10.7575/aiac.ijels.v.10n.4p.75.

Full text
Abstract:
Education promotes the mental, physical and emotional development of individuals. In particular, music education has a great impact on an individual's personal development in terms of self-recognition and self-actualization. One of the most significant factors in the development of the individual is their family, and family support is crucial in education as in all other aspects of life. This study aimed to determine university students' attitudes towards perceived family support in individual musical instrument education and to analyze the differences in family support in this context. A set of variables for the perceived family support in the individual instrument education of university students studying in music education undergraduate programs was examined. Three variables were identified: gender, undergraduate level, and the university where the students receive their education. This descriptive study is based on a correlational survey model. The study sample consists of 216 students studying in music education programs in the 2021-2022 academic year; 120 (55.6%) of the students are female and 96 (44.4%) are male. When the perceived family support of music education students in terms of instrument education was examined by gender, no significant relationship was found in the dimensions of the families' valuing instrument education and their involvement in the process. For the other variables, there was no significant relationship in the dimension of the families' valuing instrument education, while there was a significant relationship in their involvement in the process. Overall, the responses of the students studying in the music education program indicated a positive attitude towards perceived family support in their instrument education.
APA, Harvard, Vancouver, ISO, and other styles
41

Luo, Xin, and Brendon Warner. "Effect of instrument timbre on musical emotion recognition in normal-hearing listeners and cochlear implant users." Journal of the Acoustical Society of America 147, no. 6 (June 2020): EL535—EL539. http://dx.doi.org/10.1121/10.0001475.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Samson, Guillaume, and Carlos Sandroni. "The recognition of Brazilian samba de roda and reunion maloya as intangible cultural heritage of humanity." Vibrant: Virtual Brazilian Anthropology 10, no. 1 (June 2013): 530–51. http://dx.doi.org/10.1590/s1809-43412013000100022.

Full text
Abstract:
In this essay, we present a comparative analysis of the UNESCO heritage nomination process for two African Diaspora music and dance forms: samba de roda, from the Bahian Recôncavo (a coastal area of the northeastern Brazilian state of Bahia), and maloya, from Reunion Island (a former French colony in the Indian Ocean, which is now officially an "overseas department of France"). Samba de roda, as the Brazilian candidate, was included in the III Proclamation of Masterpieces of the Intangible Heritage of Humanity, in 2005. And maloya, the French candidate, was inscribed onto the Representative List of the Intangible Cultural Heritage of Humanity, in 2009. Despite a number of formal commonalities between samba de roda and maloya, such as responsorial singing, choreography, and the main musical instrument types, the controversies raised during their respective processes of nomination were quite distinct. The former is regarded as a traditional and less well known style of samba, the musical genre widely recognized as the musical emblem of Brazil. The latter competes with séga, a genre of popular music consolidated in the local media, for the position of chief musical representative of Reunion Island. The disparate symbolic identities attributed to these musical expressions pave the way for distinct manners of employing the international resources related to the safeguarding of intangible heritage. This suggests that the local impact of inclusion onto international lists depends as much on the contextual particularities of each candidacy as on central decision-making bodies such as UNESCO.
APA, Harvard, Vancouver, ISO, and other styles
43

Dubnov, Shlomo, and Melvin J. Hinich. "Analyzing several musical instrument tones using the randomly modulated periodicity model." Signal Processing 89, no. 1 (January 2009): 24–30. http://dx.doi.org/10.1016/j.sigpro.2008.07.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Stock, Jonathan P. J. "An Ethnomusicological Perspective on Musical Style, with Reference to Music for Chinese Two-Stringed Fiddles." Journal of the Royal Musical Association 118, no. 2 (1993): 276–99. http://dx.doi.org/10.1093/jrma/118.2.276.

Full text
Abstract:
In a major publication of 1983 Bruno Nettl identified the explanation of musical style as a central problem in ethnomusicological research. This essay is intended to offer a partial solution of that problem, seeking to define musical style as an abstraction of the matrix of cognitive and physical aspects which constitute human music-making. In the cognitive part of this equation I include the critically important role played by social context, concurring with John Blacking's statement that 'the creation of a musical style is the result of conscious decisions about the organization of musical symbols in the context of real or imagined social interaction'. However, in this category, I accord equal recognition to the body of musical and music-related knowledge held by a musician or any other member of society, whether this knowledge is implicitly assumed or explicitly acknowledged, historically conditioned or geographically referent, abstractly theoretical or firmly practical. The 'conscious decisions' Blacking points to are indeed made in the actual or perceptual domain of social interaction, but they are also considered from the cognitive perspective of acquired musical thought. Physical ingredients which help form the concept of musical style include the limits and possibilities of the human body and its movement patterns and material factors such as the parameters of any musical instrument (size, shape, posture, potential playing techniques, etc.) and performance location.
APA, Harvard, Vancouver, ISO, and other styles
45

Prins, Yopie. "“What Is Historical Poetics?”." Modern Language Quarterly 77, no. 1 (March 1, 2016): 13–40. http://dx.doi.org/10.1215/00267929-3331577.

Full text
Abstract:
In posing questions about what is "historical" and what counts as "poetics," historical poetics cannot separate the practice of reading a poem from the histories and theories of reading that mediate our ideas about poetry. While nineteenth-century verse cultures revolved around reading by generic recognition, a reading of poetry as a form of cognition emerges among later critics like I. A. Richards, who illustrates how a line from Robert Browning is read in the mind's eye, as if in the present tense. But Browning was already doing a version of historical poetics, in writing "Pan and Luna" as a poem about reading other poems about Pan, among them "A Musical Instrument," by Elizabeth Barrett Browning. In the composition and reception of her poem, we see how Victorian poetry foregrounds its multiple mediations, including the mediation of voice by meter as a musical instrument. The recirculation of her popular poem through citation and recitation, illustration and anthologization, prosody and parody, demonstrates a varied history of thinking through—simultaneously "about" and "in"—verse.
APA, Harvard, Vancouver, ISO, and other styles
46

Apridiansyah, Yovi Apri, and Pahrizal Pahrizal. "PENGENALAN ALAT MUSIK TRADISIONAL BENGKULU (DOL) DIGITAL BERBASIS ANDROID." Journal of Technopreneurship and Information System (JTIS) 2, no. 1 (March 5, 2019): 12–17. http://dx.doi.org/10.36085/jtis.v2i1.179.

Full text
Abstract:
Indonesia possesses a highly diverse wealth of art and culture; from Sabang to Merauke, a variety of arts and cultures has been bequeathed from generation to generation. The dol is a traditional musical instrument that is played by striking, here implemented with electroacoustic technology, or digital methods. Its tone is heard through an amplifier and loudspeaker, and in terms of sound quality the electronic dol is practically indistinguishable from an ordinary dol. Based on this background, the problem was formulated as how to build an Android-based digital application for recognizing the traditional musical instrument (dol). The research objective of introducing the traditional musical instrument (dol) on an Android-based digital platform is to add insight into learning the virtual dol with Android so that it becomes more interactive. The limitations of this application are that it is not a 3D mobile experience, and the dol and tasa sounds are still captured with ordinary recording. Keywords: Application, Music, Dol, Android
APA, Harvard, Vancouver, ISO, and other styles
47

Benassi-Werke, Mariana E., Marcelo Queiroz, Rúben S. Araújo, Orlando F. A. Bueno, and Maria Gabriela M. Oliveira. "Musicians' Working Memory for Tones, Words, and Pseudowords." Quarterly Journal of Experimental Psychology 65, no. 6 (June 2012): 1161–71. http://dx.doi.org/10.1080/17470218.2011.644799.

Full text
Abstract:
Studies investigating factors that influence tone recognition generally use recognition tests, whereas the majority of the studies on verbal material use self-generated responses in the form of serial recall tests. In the present study we intended to investigate whether tonal and verbal materials share the same cognitive mechanisms, by presenting an experimental instrument that evaluates short-term and working memories for tones, using self-generated sung responses that may be compared to verbal tests. This paradigm was designed according to the same structure of the forward and backward digit span tests, but using digits, pseudowords, and tones as stimuli. The profile of amateur singers and professional singers in these tests was compared in forward and backward digit, pseudoword, tone, and contour spans. In addition, an absolute pitch experimental group was included, in order to observe the possible use of verbal labels in tone memorization tasks. In general, we observed that musical schooling has a slight positive influence on the recall of tones, as opposed to verbal material, which is not influenced by musical schooling. Furthermore, the ability to reproduce melodic contours (up and down patterns) is generally higher than the ability to reproduce exact tone sequences. However, backward spans were lower than forward spans for all stimuli (digits, pseudowords, tones, contour). Curiously, backward spans were disproportionately lower for tones than for verbal material—that is, the requirement to recall sequences in backward rather than forward order seems to differentially affect tonal stimuli. This difference does not vary according to musical expertise.
APA, Harvard, Vancouver, ISO, and other styles
48

H T, Panduranga, and Mani C. "Non – Vision Based Sensors for Dynamic Hand Gesture Recognition Systems: A Comparative Study." International Journal of Engineering & Technology 7, no. 3.12 (July 20, 2018): 1175. http://dx.doi.org/10.14419/ijet.v7i3.12.17782.

Full text
Abstract:
Gestures are considered a type of configuration associated with motion of the relevant body part, signifying meaningful information, expressing emotion, or intending to command and control. A wide range of sensors based on different technologies is available on the market. The gesture recognition process involves steps such as data acquisition from the sensor, segmentation, an algorithm that takes gesture data as input, an algorithm to extract parameters, and an algorithm to classify hand gestures. Three-dimensional hand gestures have been widely adopted for advanced applications such as the creation of virtual worlds in which users can experience the naturalness of interacting with, or playing, a musical instrument without any physical device being present. Techniques for dynamic finger gesture recognition can be classified as vision-based and wearable-sensor-based. The purpose of this paper is to compare various non-vision-based sensors with different tracking technologies, summarizing their advantages and drawbacks to help investigators and researchers working in this area.
APA, Harvard, Vancouver, ISO, and other styles
49

Tansella, Francesca, Luisa Vigorelli, Gabriele Ricchiardi, Alessandro Re, Letizia Bonizzoni, Sabrina Grassini, Manuel Staropoli, and Alessandro Lo Giudice. "X-ray Computed Tomography Analysis of Historical Woodwind Instruments of the Late Eighteenth Century." Journal of Imaging 8, no. 10 (September 24, 2022): 260. http://dx.doi.org/10.3390/jimaging8100260.

Full text
Abstract:
In this work, two historical flutes of the late eighteenth century were analysed by means of X-ray computed tomography (CT). The first one is a piccolo flute whose manufacturer is unknown, though some features could suggest an English or American origin. The second musical instrument is a baroque transverse flute, probably produced by Lorenzo Cerino, an Italian instrument maker active in Turin (Italy) in the late eighteenth century. Analyses carried out provided information on manufacturing techniques, materials and conservation state, and are suitable to plan restoration intervention. In particular, through the CT images, it was possible to observe the presence of defects, cracks, fractures and previous restorations, as well as indications of the tools used in the making of the instruments. Particular attention was directed towards extracting metrological information about the objects. In fact, this work is the first step of a study with a final aim of determining an operative protocol to enable the making of precise-sounding copies of ancient instruments starting from CT images, that can be used to plan a virtual restoration, consisting in the creation of digitally restored copies with a 3D printer.
APA, Harvard, Vancouver, ISO, and other styles
50

Maszczyńska, Dominika. "Nannette and Johann Andreas Streicher - their role in shaping musical life in Vienna in the early 19th century." Notes Muzyczny 1, no. 13 (June 9, 2020): 49–80. http://dx.doi.org/10.5604/01.3001.0014.1937.

Full text
Abstract:
Nannette and Andreas Streicher were important figures in the musical life of Vienna in the early 19th century. The article introduces their profiles, describes the history of their company, their social, cultural and teaching activity as well as different types of artistic activity. It also explains how keyboard instruments shaped sound and aesthetics-related piano ideals at the turn of the 19th century. The versatile activity of the Streichers, which first of all included instrument building, piano playing, composition, teaching and the organisation of musical life, made a great contribution to Europe's cultural heritage. We can notice their numerous connections with outstanding figures of the musical life of that time; one that deserves particular attention is their acquaintance with Beethoven. Nannette Streicher was an extremely talented builder who not only coped with what was then a typically masculine craft, but was also significantly successful in that field. Her instruments were popular, earning general recognition, and the innovative solutions she introduced also influenced the work of other builders and the further development of the piano. Their marriage became the basis for a very fruitful cooperation. Andreas's numerous connections and his familiarity with the community became an important part of the activity of the company and contributed to its development. Nannette and Andreas shared their passion and passed it on to their son Johann Baptist, who successfully continued their piano-making tradition and introduced further improvements, earning a great reputation as well. The social, cultural and teaching activities of the Streichers also played an important role in the musical life of Vienna. Andreas Streicher taught his students the secrets of piano technique and, apart from that, shaped their musical and aesthetic awareness. His Kurze Bemerkungen are a valuable source of knowledge also for modern-day performers, who can, thanks to this text, learn more about the aesthetics of piano playing at the turn of the 19th century as well as a number of universal music and performance topics which remain relevant to this day. Concerts organised in their house had an educational function too: on the one hand they shaped the tastes of music lovers and supported composers, allowing them to present their latest pieces, and on the other hand they contributed to the promotion of young performers, for whom concerts there were often the first step towards Vienna's professional musical stage. The development of the topic of this article in this issue of "Notes Muzyczny" is the translation of the text by Andreas Streicher entitled Some observations on the playing, tuning and maintenance of pianos built in Vienna by Nannette Streicher née Stein.
APA, Harvard, Vancouver, ISO, and other styles