Journal articles on the topic 'Mel-Frequency Cepstral coefficients'

Consult the top 50 journal articles for your research on the topic 'Mel-Frequency Cepstral coefficients.'


1

Ajinurseto, Galih, La Ode Bakrim, and Nur Islamuddin. "Penerapan Metode Mel Frequency Cepstral Coefficients pada Sistem Pengenalan Suara Berbasis Desktop." Infomatek 25, no. 1 (June 29, 2023): 11–20. http://dx.doi.org/10.23969/infomatek.v25i1.6109.

Abstract:
Biometric technology is becoming a technological trend in many areas of life. It uses parts of the human body, which are unique to each individual, as the measuring instrument of a system. The voice is a part of the human body that is unique and well suited as a measuring instrument in systems adopting biometric technology. A voice recognition system is one application of biometric technology that focuses on the human voice. Such a system requires a feature extraction method; one such method is Mel Frequency Cepstral Coefficients. MFCC is a voice feature extraction method that adopts the principles of the human sense of hearing, aiming for results as close as possible to those of human hearing. The method proceeds through the stages of pre-emphasis, frame blocking, windowing, fast Fourier transform, mel frequency wrapping, and cepstrum. Based on the tests, under ideal conditions the system achieved a success rate of 90% and a failure rate of 10%, with a top-5 error rate of 0%; under non-ideal conditions, the success rate was 76.67% and the failure rate 23.33%, also with a top-5 error rate of 0%.
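The stages listed in this abstract (pre-emphasis, frame blocking, windowing, fast Fourier transform, mel frequency wrapping, cepstrum) can be sketched in plain NumPy. The sample rate, frame sizes, filter count, and coefficient count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512, n_mels=26, n_ceps=13):
    # 1. Pre-emphasis boosts the high-frequency part of the spectrum
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # 2. Frame blocking and Hamming windowing
    n_frames = 1 + (len(emphasized) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = emphasized[idx] * np.hamming(frame_len)
    # 3. FFT -> per-frame power spectrum
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 4. Mel frequency wrapping: triangular filters equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energy = np.log(power @ fbank.T + 1e-10)
    # 5. Cepstrum: DCT-II of the log filterbank energies, keep the first n_ceps
    n = np.arange(n_mels)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return log_energy @ basis.T

# One second of a synthetic 440 Hz tone at 16 kHz
sig = np.sin(2 * np.pi * 440.0 * np.arange(16000) / 16000.0)
feats = mfcc(sig)
print(feats.shape)  # one 13-coefficient vector per 25 ms frame
```

The output is a matrix of one cepstral vector per frame, which a recognizer then compares against enrolled templates.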
2

Sato, Nobuo, and Yasunari Obuchi. "Emotion Recognition using Mel-Frequency Cepstral Coefficients." Journal of Natural Language Processing 14, no. 4 (2007): 83–96. http://dx.doi.org/10.5715/jnlp.14.4_83.

3

Hashad, F. G., T. M. Halim, S. M. Diab, B. M. Sallam, and F. E. Abd El-Samie. "Fingerprint recognition using mel-frequency cepstral coefficients." Pattern Recognition and Image Analysis 20, no. 3 (September 2010): 360–69. http://dx.doi.org/10.1134/s1054661810030120.

4

Park, Won Gyeong, Young Bae Lim, Dong Woo Kim, Ho Kyoung Lee, and Seongwon Cho. "Prediction Method of Electrical Abnormal States Using Simplified Mel-Frequency Cepstral Coefficients." Journal of Korean Institute of Intelligent Systems 28, no. 5 (October 31, 2018): 514–22. http://dx.doi.org/10.5391/jkiis.2018.28.5.514.

5

Indrawaty, Youllia, Irma Amelia Dewi, and Rizki Lukman. "Ekstraksi Ciri Pelafalan Huruf Hijaiyyah Dengan Metode Mel-Frequency Cepstral Coefficients." MIND Journal 4, no. 1 (June 1, 2019): 49–64. http://dx.doi.org/10.26760/mindjournal.v4i1.49-64.

Abstract:
Hijaiyyah letters are the letters that make up the verses of the Qur'an. Each hijaiyyah letter has distinct pronunciation characteristics. In practice, however, readers sometimes disregard the rules of makhorijul huruf, the manner and place of articulation of each letter. With speech recognition technology, differences in the pronunciation of hijaiyyah letters can be observed quantitatively through a system. Two stages are required for a voice to be recognized: first, extraction of the speech signal, and then identification of the voice or recitation. MFCC (Mel Frequency Cepstral Coefficients) is a feature extraction method that produces cepstral values from a speech signal. This study aims to determine the cepstral values of each hijaiyyah letter. The test results show that each hijaiyyah letter has distinct cepstral values.
6

Arora, Shruti, Sushma Jain, and Inderveer Chana. "A Fusion Framework Based on Cepstral Domain Features from Phonocardiogram to Predict Heart Health Status." Journal of Mechanics in Medicine and Biology 21, no. 04 (April 22, 2021): 2150034. http://dx.doi.org/10.1142/s0219519421500342.

Abstract:
A great increase in the number of cardiovascular cases has been a cause of serious concern for the medical experts all over the world today. In order to achieve valuable risk stratification for patients, early prediction of heart health can benefit specialists to make effective decisions. Heart sound signals help to know about the condition of heart of a patient. Motivated by the success of cepstral features in speech signal classification, authors have used here three different cepstral features, viz. Mel-frequency cepstral coefficients (MFCCs), gammatone frequency cepstral coefficients (GFCCs), and Mel-spectrogram for classifying phonocardiogram into normal and abnormal. Existing research has explored only MFCCs and Mel-feature set extensively for classifying the phonocardiogram. However, in this work, the authors have used a fusion of GFCCs with MFCCs and Mel-spectrogram, and achieved a better accuracy score of 0.96 with sensitivity and specificity scores as 0.91 and 0.98, respectively. The proposed model has been validated on the publicly available benchmark dataset PhysioNet 2016.
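The fusion of GFCCs with MFCCs and Mel-spectrogram features described here is, at its simplest, feature concatenation. A minimal sketch follows, assuming time-averaged per-recording vectors with hypothetical dimensions; the paper's actual fusion details may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-recording summaries (dimensions are assumptions, not the paper's)
mfcc_vec = rng.random(13)      # time-averaged MFCCs
gfcc_vec = rng.random(13)      # time-averaged gammatone frequency cepstral coefficients
melspec_vec = rng.random(64)   # pooled mel-spectrogram summary

def znorm(v):
    # Z-normalise each feature family before fusing so no family dominates by scale
    return (v - v.mean()) / (v.std() + 1e-8)

fused = np.concatenate([znorm(mfcc_vec), znorm(gfcc_vec), znorm(melspec_vec)])
print(fused.shape)  # the fused vector would feed a normal/abnormal classifier
```

Per-family normalisation before concatenation is a common guard against one descriptor's scale swamping the others.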
7

Elizarov, D. A., P. A. Ashaeva, and E. A. Stepanova. "Voice authentication module using mel-cepstral coefficients." Herald of Dagestan State Technical University. Technical Sciences 51, no. 2 (July 25, 2024): 77–82. http://dx.doi.org/10.21822/2073-6185-2024-51-2-77-82.

Abstract:
Objective. The purpose of the study is to develop and apply a method for extracting information about the identity of users from recordings of their voices by computing mel-cepstral coefficients. Method. In studying methods for extracting informative features from a voice recording that allow identification of the speaker, an authentication scheme using mel-cepstral coefficients is presented. Result. Based on this method, an authentication module working on audio recordings of user voices was implemented using the simplest MFCC. The module was developed in Python. Conclusion. Biometric authentication is an inexpensive and relatively simple way to verify the authenticity of users. Despite the obvious advantages of mel-cepstral coefficients, the method has certain disadvantages. To eliminate them, various frequency filters can be used, as well as third-party algorithms for analyzing audio recordings.
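The abstract says the module was built in Python around "the simplest MFCC". One minimal form such an authentication check can take, sketched here under my own assumptions (mean-vector voiceprint, cosine-similarity threshold; neither is confirmed as the paper's design):

```python
import numpy as np

def enroll(mfcc_frames):
    # Voiceprint = mean MFCC frame of the enrolment recording
    return mfcc_frames.mean(axis=0)

def authenticate(voiceprint, attempt_frames, threshold=0.85):
    # Accept when the cosine similarity of mean MFCC vectors clears a threshold
    probe = attempt_frames.mean(axis=0)
    cos = probe @ voiceprint / (np.linalg.norm(probe) * np.linalg.norm(voiceprint))
    return cos >= threshold

# Deterministic toy frames (values are illustrative, not real MFCCs)
enrolled = np.tile(np.arange(1.0, 14.0), (100, 1))
genuine = enrolled * 1.02                                  # same shape, slightly louder
impostor = np.tile(np.arange(13.0, 0.0, -1.0), (100, 1))   # a different "voice"

vp = enroll(enrolled)
print(bool(authenticate(vp, genuine)), bool(authenticate(vp, impostor)))
# prints: True False
```

Cosine similarity ignores overall gain, which is one reason it is a common choice for such simple voiceprint comparisons.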
8

Ma, Liqiang, Anqi Jiang, and Wanlu Jiang. "The Intelligent Diagnosis of a Hydraulic Plunger Pump Based on the MIGLCC-DLSTM Method Using Sound Signals." Machines 12, no. 12 (November 29, 2024): 869. https://doi.org/10.3390/machines12120869.

Abstract:
To fully exploit the rich state and fault information embedded in the acoustic signals of a hydraulic plunger pump, this paper proposes an intelligent diagnostic method based on sound signal analysis. First, acoustic signals were collected under normal and various fault conditions. Then, four distinct acoustic features—Mel Frequency Cepstral Coefficients (MFCCs), Inverse Mel Frequency Cepstral Coefficients (IMFCCs), Gammatone Frequency Cepstral Coefficients (GFCCs), and Linear Prediction Cepstral Coefficients (LPCCs)—were extracted and integrated into a novel hybrid cepstral feature called MIGLCCs. This fusion enhances the model’s ability to distinguish both high- and low-frequency characteristics, resist noise interference, and capture resonance peaks, achieving a complementary advantage. Finally, the MIGLCC feature set was input into a double layer long short-term memory (DLSTM) network to enable intelligent recognition of the hydraulic plunger pump’s operational states. The results indicate that the MIGLCC-DLSTM method achieved a diagnostic accuracy of 99.41% under test conditions. Validation on the CWRU bearing dataset and operational data from a high-pressure servo motor in a turbine system yielded overall recognition accuracies of 99.64% and 98.07%, respectively, demonstrating the robustness and broad application potential of the MIGLCC-DLSTM method.
9

Yan, Hao, Huajun Bai, Xianbiao Zhan, Zhenghao Wu, Liang Wen, and Xisheng Jia. "Combination of VMD Mapping MFCC and LSTM: A New Acoustic Fault Diagnosis Method of Diesel Engine." Sensors 22, no. 21 (October 30, 2022): 8325. http://dx.doi.org/10.3390/s22218325.

Abstract:
Diesel engines have a wide range of functions in the industrial and military fields. An urgent problem to be solved is how to diagnose and identify their faults effectively and timely. In this paper, a diesel engine acoustic fault diagnosis method based on variational modal decomposition mapping Mel frequency cepstral coefficients (MFCC) and long-short-term memory network is proposed. Variational mode decomposition (VMD) is used to remove noise from the original signal and differentiate the signal into multiple modes. The sound pressure signals of different modes are mapped to the Mel filter bank in the frequency domain, and then the Mel frequency cepstral coefficients of the respective mode signals are calculated in the mapping range of frequency domain, and the optimized Mel frequency cepstral coefficients are used as the input of long and short time memory network (LSTM) which is trained and verified, and the fault diagnosis model of the diesel engine is obtained. The experimental part compares the fault diagnosis effects of different feature extraction methods, different modal decomposition methods and different classifiers, finally verifying the feasibility and effectiveness of the method proposed in this paper, and providing solutions to the problem of how to realise fault diagnosis using acoustic signals.
10

Sheu, Jia-Shing, and Ching-Wen Chen. "Voice Recognition and Marking Using Mel-frequency Cepstral Coefficients." Sensors and Materials 32, no. 10 (October 9, 2020): 3209. http://dx.doi.org/10.18494/sam.2020.2860.

11

Koolagudi, Shashidhar G., Deepika Rastogi, and K. Sreenivasa Rao. "Identification of Language using Mel-Frequency Cepstral Coefficients (MFCC)." Procedia Engineering 38 (2012): 3391–98. http://dx.doi.org/10.1016/j.proeng.2012.06.392.

12

Saldanha, Jennifer C., T. Ananthakrishna, and Rohan Pinto. "Vocal Fold Pathology Assessment Using Mel-Frequency Cepstral Coefficients and Linear Predictive Cepstral Coefficients Features." Journal of Medical Imaging and Health Informatics 4, no. 2 (April 1, 2014): 168–73. http://dx.doi.org/10.1166/jmihi.2014.1253.

13

Anacleto Silva, Harry. "ATRIBUTOS PNCC PARA RECONOCIMIENTO ROBUSTO DE LOCUTOR INDEPENDIENTE DEL TEXTO." INGENIERÍA: Ciencia, Tecnología e Innovación 3, no. 2 (September 12, 2016): 35–40. http://dx.doi.org/10.26495/icti.v3i2.431.

Abstract:
Automatic speaker recognition has been the subject of intense research throughout the past decade. However, the performance of state-of-the-art algorithms degrades drastically in the presence of noise. This article focuses on the application of a new technique called Power-Normalized Cepstral Coefficients (PNCC) to text-independent speaker recognition. The aim of this study is to evaluate the performance of this technique in comparison with the conventional Mel Frequency Cepstral Coefficients (MFCC) technique and the Gammatone Frequency Cepstral Coefficients (GFCC) technique.
14

Rasyid, Muhammad Fahim, Herlina Jayadianti, and Herry Sofyan. "APLIKASI PENGENALAN PENUTUR PADA IDENTIFIKASI SUARA PENELEPON MENGGUNAKAN MEL-FREQUENCY CEPSTRAL COEFFICIENT DAN VECTOR QUANTIZATION (Studi Kasus : Layanan Hotline Universitas Pembangunan Nasional “Veteran” Yogyakarta)." Telematika 17, no. 2 (October 31, 2020): 68. http://dx.doi.org/10.31315/telematika.v1i1.3380.

Abstract:
The hotline service of Universitas Pembangunan Nasional "Veteran" Yogyakarta is a service that anyone can use. Lecturers and staff use it to share information with the departments located in the rectorate building. A caller can communicate with the intended department once identified by the hotline operator, who asks for identity details consisting of name, position, and home department or unit. No record of caller identification is kept, either physically or in a database, so there is no documentation that could serve as evidence when following up cases of misidentification. This research focuses on reducing the risk of caller misidentification using speaker recognition technology. Voice frequencies are extracted with the Mel-Frequency Cepstral Coefficient (MFCC) method, producing mel frequency cepstrum coefficient values. These values, computed from the training voice data of all university staff, are then compared against the caller's speech signal using Vector Quantization (VQ). The speaker recognition application identifies callers with 80% accuracy at a threshold of 25.
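The MFCC-plus-Vector-Quantization matching described above can be sketched as a per-speaker k-means codebook, with the lowest average distortion deciding the match. The sketch below uses synthetic frames and an assumed codebook size, not the paper's data or settings.

```python
import numpy as np

def train_codebook(frames, k=4, iters=20, seed=0):
    # Plain k-means: the codebook is the set of centroids of the MFCC frames
    rng = np.random.default_rng(seed)
    centroids = frames[rng.choice(len(frames), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(frames[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = frames[labels == j].mean(axis=0)
    return centroids

def distortion(frames, codebook):
    # Average distance from each frame to its nearest codeword
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Synthetic demo: two "speakers" whose MFCC frames occupy different regions
rng = np.random.default_rng(2)
speaker_a = rng.normal(0.0, 1.0, size=(200, 13))
speaker_b = rng.normal(5.0, 1.0, size=(200, 13))
cb_a, cb_b = train_codebook(speaker_a), train_codebook(speaker_b)

probe = rng.normal(0.0, 1.0, size=(50, 13))  # an unseen utterance from speaker A
print(distortion(probe, cb_a) < distortion(probe, cb_b))  # prints True
```

An acceptance threshold on the winning distortion (the abstract's threshold of 25 plays this role) turns the closed-set match into open-set identification.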
15

Ramashini, Murugaiya, P. Emeroylariffion Abas, Kusuma Mohanchandra, and Liyanage C. De Silva. "Robust cepstral feature for bird sound classification." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 2 (April 1, 2022): 1477. http://dx.doi.org/10.11591/ijece.v12i2.pp1477-1487.

Abstract:
Birds are excellent environmental indicators and may indicate the sustainability of an ecosystem; they provide provisioning, regulating, and supporting services. Research on birdlife conservation therefore always takes centre stage. Because birds are airborne and tropical forests are dense, identifying birds by audio may work better than visual identification. The goal of this study is to find the cepstral features that classify bird sounds most accurately. Fifteen (15) endemic Bornean bird sounds were selected and segmented using an automated energy-based algorithm. Three (3) types of cepstral features were extracted: linear prediction cepstrum coefficients (LPCC), mel frequency cepstral coefficients (MFCC), and gammatone frequency cepstral coefficients (GTCC), each used separately for classification with a support vector machine (SVM). Comparison of the prediction results demonstrates that the model using GTCC features, at 93.3% accuracy, outperforms the models using MFCC and LPCC features, showing the robustness of GTCC for bird sound classification. The result is significant for the advancement of bird sound classification research, which has many applications, such as eco-tourism and wildlife management.
17

Chen, Young-Long, Neng-Chung Wang, Jing-Fong Ciou, and Rui-Qi Lin. "Combined Bidirectional Long Short-Term Memory with Mel-Frequency Cepstral Coefficients Using Autoencoder for Speaker Recognition." Applied Sciences 13, no. 12 (June 10, 2023): 7008. http://dx.doi.org/10.3390/app13127008.

Abstract:
Recently, neural network technology has shown remarkable progress in speech recognition, including word classification, emotion recognition, and identity recognition. This paper introduces three novel speaker recognition methods to improve accuracy. The first method, called long short-term memory with mel-frequency cepstral coefficients for triplet loss (LSTM-MFCC-TL), utilizes MFCC as input features for the LSTM model and incorporates triplet loss and cluster training for effective training. The second method, bidirectional long short-term memory with mel-frequency cepstral coefficients for triplet loss (BLSTM-MFCC-TL), enhances speaker recognition accuracy by employing a bidirectional LSTM model. The third method, bidirectional long short-term memory with mel-frequency cepstral coefficients and autoencoder features for triplet loss (BLSTM-MFCCAE-TL), utilizes an autoencoder to extract additional AE features, which are then concatenated with MFCC and fed into the BLSTM model. The results showed that the performance of the BLSTM model was superior to the LSTM model, and the method of adding AE features achieved the best learning effect. Moreover, the proposed methods exhibit faster computation times compared to the reference GMM-HMM model. Therefore, utilizing pre-trained autoencoders for speaker encoding and obtaining AE features can significantly enhance the learning performance of speaker recognition. Additionally, it also offers faster computation time compared to traditional methods.
18

Fahmy, Maged M. M. "Palmprint recognition based on Mel frequency Cepstral coefficients feature extraction." Ain Shams Engineering Journal 1, no. 1 (September 2010): 39–47. http://dx.doi.org/10.1016/j.asej.2010.09.005.

19

Mahalakshmi, P. "A REVIEW ON VOICE ACTIVITY DETECTION AND MEL-FREQUENCY CEPSTRAL COEFFICIENTS FOR SPEAKER RECOGNITION (TREND ANALYSIS)." Asian Journal of Pharmaceutical and Clinical Research 9, no. 9 (December 1, 2016): 360. http://dx.doi.org/10.22159/ajpcr.2016.v9s3.14352.

Abstract:
Objective: The objective of this review article is to give a complete review of the various techniques that have been used for speech recognition over two decades. Methods: VAD (Voice Activity Detection) and SAD (Speech Activity Detection) techniques, used to distinguish voiced from unvoiced signals, are discussed, along with the MFCC (Mel Frequency Cepstral Coefficient) technique, which detects specific features. Results: The review shows that research on MFCC has been dominant in signal processing in comparison with VAD and other existing techniques. Conclusion: Speaker recognition techniques used previously and those in current research are compared, and a clear picture of the better technique emerges from a review of the literature spanning more than two decades. Keywords: Cepstral analysis, Mel-frequency cepstral coefficients, signal processing, speaker recognition, voice activity detection.
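Of the VAD techniques this review surveys, a short-time-energy detector is about the simplest possible form. The sketch below uses arbitrary illustrative frame sizes and threshold ratio, not any particular method from the reviewed literature.

```python
import numpy as np

def energy_vad(signal, frame_len=400, hop=160, ratio=0.1):
    # Mark a frame as speech when its short-time energy exceeds a fixed
    # fraction of the loudest frame's energy (a classic, simple VAD)
    n = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n)[:, None]
    energy = (signal[idx] ** 2).mean(axis=1)
    return energy > ratio * energy.max()

# 0.5 s of near-silence followed by 0.5 s of a loud tone, at 16 kHz
sr = 16000
t = np.arange(sr // 2) / sr
sig = np.concatenate([0.001 * np.sin(2 * np.pi * 440 * t),
                      0.8 * np.sin(2 * np.pi * 440 * t)])
flags = energy_vad(sig)
print(flags[:3], flags[-3:])  # quiet frames rejected, loud frames accepted
```

Real VAD front-ends add smoothing, hangover frames, and noise-floor tracking on top of this basic thresholding.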
20

Varma, V. Sai Nitin, and Abdul Majeed K.K. "Advancements in Speaker Recognition: Exploring Mel Frequency Cepstral Coefficients (MFCC) for Enhanced Performance in Speaker Recognition." International Journal for Research in Applied Science and Engineering Technology 11, no. 8 (August 31, 2023): 88–98. http://dx.doi.org/10.22214/ijraset.2023.55124.

Abstract:
Speaker recognition, a fundamental capability of software or hardware systems, involves receiving speech signals, identifying the speaker present in the speech signal, and subsequently recognizing the speaker for future interactions. This process emulates the cognitive task performed by the human brain. At its core, speaker recognition begins with speech as the input to the system. Various techniques have been developed for speech recognition, including Mel frequency cepstral coefficients (MFCC), Linear Prediction Coefficients (LPC), Linear Prediction Cepstral coefficients (LPCC), Line Spectral Frequencies (LSF), Discrete Wavelet Transform (DWT), and Perceptual Linear Prediction (PLP). Although LPC and several other techniques have been explored, they are often deemed impractical for real-time applications. In contrast, MFCC stands out as one of the most prominent and widely used techniques for speaker recognition. The utilization of cepstrum allows for the computation of resemblance between two cepstral feature vectors, making it an effective tool in this domain. In comparison to LPC-derived cepstrum features, the use of MFCC features has demonstrated superior performance in metrics such as False Acceptance Rate (FAR) and False Rejection Rate (FRR) for speaker recognition systems. MFCCs leverage the human ear's critical bandwidth fluctuations with respect to frequency. To capture phonetically important characteristics of speech signals, filters are linearly separated at low frequencies and logarithmically separated at high frequencies. This design choice is central to the effectiveness of the MFCC technique. The primary objective of the proposed work is to devise efficient techniques that extract pertinent information related to the speaker, thereby enhancing the overall performance of the speaker recognition system. By optimizing feature extraction methods, this research aims to contribute to the advancement of speaker recognition technology.
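The abstract's point about filter spacing (linear at low frequencies, logarithmic at high) can be checked numerically. A small sketch, assuming the common 2595·log10(1 + f/700) mel formula:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Place 12 filter centres at equal mel steps between 0 Hz and 8 kHz
centres = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(8000.0), 12))
gaps = np.diff(centres)
# The Hz gap between neighbouring centres grows monotonically:
# near-uniform (linear-like) at low frequencies, ever wider (log-like) at the top
print(np.round(gaps, 1))
```

Because mel-to-Hz is an exponential (convex) map, equal mel steps yield strictly widening Hz gaps, which is exactly the critical-band behaviour the abstract describes.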
21

Pedalanka, P. S. Subhashini, M. SatyaSai Ram, and Duggirala Sreenivasa Rao. "Mel Frequency Cepstral Coefficients based Bacterial Foraging Optimization with DNN-RBF for Speaker Recognition." Indian Journal of Science and Technology 14, no. 41 (November 3, 2021): 3082–92. http://dx.doi.org/10.17485/ijst/v14i41.1858.

22

Mantri, D. B. "IMPLEMENTATION OF SPEECH RECOGNITION SYSTEM." IJIERT - International Journal of Innovations in Engineering Research and Technology 3, no. 12 (December 20, 2016): 72–80. https://doi.org/10.5281/zenodo.1462451.

Abstract:
Speech recognition is an important and active research area of recent years. This work aims to build a speech recognition system with the help of the dynamic time warping algorithm, comparing the speech signal of the speaker with pre-stored speech signals in the database and extracting Mel-frequency cepstral coefficients, the main features of the speaker's speech signal and one of the most important factors in achieving high recognition accuracy. Extraction and matching are performed after the signal is pre-processed and filtered. MFCCs (Mel Frequency Cepstral Coefficients), a non-parametric method for modeling the human perception system, are used as the extraction technique.
23

Tran, Thi Thanh. "Analysis of Building the Music Feature Extraction Systems: A Review." Engineering and Technology Journal 9, no. 05 (May 22, 2024): 4055–60. https://doi.org/10.5281/zenodo.11242886.

Abstract:
Music genre classification is a basic method for sound processing in the field of music retrieval. The application of machine learning has become increasingly popular in automatically classifying music genres, and in recent years many methods have been studied and developed to solve this problem. In this article, an overview of the process and of some music feature extraction methods is presented. The feature extraction method using Mel Frequency Cepstral Coefficients (MFCC) is discussed in detail, and some typical results of using MFCC to improve accuracy in the classification process are introduced and discussed. The MFCC-based feature extraction method has shown its suitability through high accuracy and has much potential for further research and development.
24

Shao, Xu, and Ben Milner. "Predicting fundamental frequency from mel-frequency cepstral coefficients to enable speech reconstruction." Journal of the Acoustical Society of America 118, no. 2 (August 2005): 1134–43. http://dx.doi.org/10.1121/1.1953269.

25

Nasr, Marwa A., Mohammed Abd-Elnaby, Adel S. El-Fishawy, S. El-Rabaie, and Fathi E. Abd El-Samie. "Speaker identification based on normalized pitch frequency and Mel Frequency Cepstral Coefficients." International Journal of Speech Technology 21, no. 4 (September 17, 2018): 941–51. http://dx.doi.org/10.1007/s10772-018-9524-7.

26

Kasim, Anita Ahmad, Muhammad Bakri, Irwan Mahmudi, Rahmawati Rahmawati, and Zulnabil Zulnabil. "Artificial Intelligent for Human Emotion Detection with the Mel-Frequency Cepstral Coefficient (MFCC)." JUITA : Jurnal Informatika 11, no. 1 (May 6, 2023): 47. http://dx.doi.org/10.30595/juita.v11i1.15435.

Abstract:
Emotions are an important aspect of human communication, and the expression of human emotions can be identified through sound. Voice detection, or speech recognition, is a technology that has developed rapidly to help improve human-machine interaction. This study aims to classify emotions through the detection of human voices. One of the most frequently used methods for sound detection is the Mel-Frequency Cepstrum Coefficient (MFCC), in which sound waves are converted into several types of representation. Mel-frequency cepstral coefficients (MFCCs) are coefficients that collectively represent the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency. The primary data used in this research were recorded by the authors. The secondary data come from the "Berlin Database of Emotional Speech", comprising 500 voice recordings. MFCC can extract implied information from the human voice, in particular the feelings a speaker experiences while producing the sound. In this study, the highest accuracy, 85%, was obtained when training for 10,000 epochs.
27

Лавриненко, Александр Юрьевич, Юрий Анатольевич Кочергин, and Георгий Филимонович Конахович. "СИСТЕМА РАСПОЗНАВАНИЯ СТЕГАНОГРАФИЧЕСКИ-ПРЕОБРАЗОВАННЫХ ГОЛОСОВЫХ КОМАНД УПРАВЛЕНИЯ БПЛА." RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 3 (October 30, 2018): 20–28. http://dx.doi.org/10.32620/reks.2018.3.03.

Abstract:
A system for recognizing steganographically transformed voice commands for unmanned aerial vehicle (UAV) control, based on cepstral analysis, is developed. It provides effective recognition and covert transmission of commands to the UAV by converting voice control commands into a steganographic feature vector, thereby concealing the voice control information. A mathematical model of the algorithm for computing mel-frequency cepstral coefficients and a classifier for recognizing voice control commands are synthesized to solve the problem of semantic identification and protection of UAV control information in the communication channel. A software package is developed that includes tools for compiling a database of reference voice images for training and testing the recognition system, together with computer models of the proposed recognition methods and algorithms in the MATLAB environment. The expediency of applying the proposed system is substantiated and experimentally proved. An algorithm is presented for computing the mel-frequency cepstral coefficients, which serve both as the main recognition features and as the result of the steganographic transformation of speech; automatic recognition of voice commands is evaluated with a classifier built on the minimum-distance criterion, based on the variance of the difference from the expected mel-frequency cepstral coefficients. The experimental results support the further practical application of the developed system.
28

Deng, Lei, and Yong Gao. "Gammachirp Filter Banks Applied in Roust Speaker Recognition Based GMM-UBM Classifier." International Arab Journal of Information Technology 17, no. 2 (February 28, 2019): 170–77. http://dx.doi.org/10.34028/iajit/17/2/4.

Abstract:
In this paper, the authors propose an auditory feature extraction algorithm to improve the performance of speaker recognition systems in noisy environments. In this algorithm, a Gammachirp filter bank is adapted to simulate the auditory model of the human cochlea. In addition, three techniques are applied: cube-root compression, the Relative Spectral Filtering technique (RASTA), and the Cepstral Mean and Variance Normalization algorithm (CMVN). Subsequently, experiments were conducted based on the Gaussian Mixture Model-Universal Background Model (GMM-UBM). The results imply that speaker recognition systems using the new auditory feature have better robustness and recognition performance than Mel-Frequency Cepstral Coefficients (MFCC), Relative Spectral-Perceptual Linear Predictive (RASTA-PLP), Cochlear Filter Cepstral Coefficients (CFCC), and Gammatone Frequency Cepstral Coefficients (GFCC) features.
APA, Harvard, Vancouver, ISO, and other styles
29

Noda, Juan J., Carlos M. Travieso-González, David Sánchez-Rodríguez, and Jesús B. Alonso-Hernández. "Acoustic Classification of Singing Insects Based on MFCC/LFCC Fusion." Applied Sciences 9, no. 19 (October 1, 2019): 4097. http://dx.doi.org/10.3390/app9194097.

Full text
Abstract:
This work introduces a new approach for the automatic identification of crickets, katydids and cicadas by analyzing their acoustic signals. We propose building a tool to identify this biodiversity. The study proposes a sound parameterization technique designed specifically for the identification and classification of insect acoustic signals using Mel Frequency Cepstral Coefficients (MFCC) and Linear Frequency Cepstral Coefficients (LFCC). These two sets of coefficients are evaluated individually, as in previous studies, and compared with the fusion proposed in this work, which shows an outstanding increase in identification and classification at the species level, reaching a success rate of 98.07% on 343 insect species.
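At its simplest, MFCC/LFCC fusion of the kind evaluated here is an early fusion by concatenation of per-frame feature vectors; the sketch below assumes feature matrices of matching frame count (shapes are illustrative, not taken from the paper).

```python
import numpy as np

def fuse_features(mfcc, lfcc):
    """Early fusion: concatenate per-frame MFCC and LFCC vectors
    into one feature vector per frame. A real pipeline would
    extract both sets from the same framed signal."""
    return np.concatenate([mfcc, lfcc], axis=-1)

mfcc = np.random.randn(100, 13)   # 100 frames x 13 mel-scale coefficients
lfcc = np.random.randn(100, 13)   # 100 frames x 13 linear-scale coefficients
fused = fuse_features(mfcc, lfcc)
print(fused.shape)  # (100, 26)
```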
APA, Harvard, Vancouver, ISO, and other styles
30

Sreejith C., and Reghuraj P. C. "Isolated Spoken Word Identification in Malayalam using Mel-frequency Cepstral Coefficients and K-means clustering." International Journal of Science and Research (IJSR) 1, no. 3 (March 5, 2012): 163–67. http://dx.doi.org/10.21275/ijsr12120377.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Astuti, Dwi. "Aplikasi Identifikasi Suara Hewan Menggunakan Metode Mel-Frequency Cepstral Coefficients (MFCC)." Journal of Informatics, Information System, Software Engineering and Applications (INISTA) 1, no. 2 (May 30, 2019): 26–34. http://dx.doi.org/10.20895/inista.v1i2.50.

Full text
Abstract:
Voice recognition falls under the field of computational linguistics. It covers the identification, recognition, and translation of detected speech into text by a computer. This study uses a mobile phone, and the system is designed around voice input. The main goal of this research is to use voice recognition techniques to detect, identify and translate animal sounds. The system consists of two stages: training and testing. Training involves teaching the system by building a dictionary, an acoustic model for each word that the system needs to recognize (offline analysis). The testing stage uses the acoustic models to recognize isolated words with a classification algorithm. An audio storage application for identifying various animal sounds can thus be made more accurate in the future.
APA, Harvard, Vancouver, ISO, and other styles
32

Wulandari Siagian, Thasya Nurul, Hilal Hudan Nuha, and Rahmat Yasirandi. "Footstep Recognition Using Mel Frequency Cepstral Coefficients and Artificial Neural Network." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 4, no. 3 (June 20, 2020): 497–503. http://dx.doi.org/10.29207/resti.v4i3.1964.

Full text
Abstract:
Footstep recognition is a relatively new biometric based on learning from footstep signals captured from people walking over a sensing area. The classification of footstep signals for security systems still has a low level of accuracy, so a classification system with high accuracy is needed. Most systems are developed using geometric and holistic features but still yield high error rates. In this research, a new system is proposed using Mel Frequency Cepstral Coefficients (MFCCs) for feature extraction, because they imitate the human hearing system well, and an Artificial Neural Network (ANN) as the classification algorithm, because of its good accuracy, with a dataset of 500 footstep recordings. The classification results show that the proposed system achieves a validation loss of 57.3, a testing accuracy of 92.0%, a loss of 193.8, and a training accuracy of 100%; these results evaluate the system's ability to improve footstep-signal recognition for security systems in a smart-home environment.
APA, Harvard, Vancouver, ISO, and other styles
33

Ayvaz, Uğur, Hüseyin Gürüler, Faheem Khan, Naveed Ahmed, Taegkeun Whangbo, and Abdusalomov Akmalbek Bobomirzaevich. "Automatic Speaker Recognition Using Mel-Frequency Cepstral Coefficients Through Machine Learning." Computers, Materials & Continua 71, no. 3 (2022): 5511–21. http://dx.doi.org/10.32604/cmc.2022.023278.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Kapoor, Tripti, and R. K. Sharma. "Parkinsons disease Diagnosis using Mel frequency Cepstral Coefficients and Vector Quantization." International Journal of Computer Applications 14, no. 3 (January 12, 2011): 43–46. http://dx.doi.org/10.5120/1821-2393.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Rachna. "FEATURE EXTRACTION FROM ASTHMA PATIENT’S VOICE USING MEL-FREQUENCY CEPSTRAL COEFFICIENTS." International Journal of Research in Engineering and Technology 03, no. 06 (June 25, 2014): 273–76. http://dx.doi.org/10.15623/ijret.2014.0306050.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Liu, Delian, Xiaorui Wang, Jianqi Zhang, and Xi Huang. "Feature extraction using Mel frequency cepstral coefficients for hyperspectral image classification." Applied Optics 49, no. 14 (May 6, 2010): 2670. http://dx.doi.org/10.1364/ao.49.002670.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Milner, Ben, and Jonathan Darch. "Robust Acoustic Speech Feature Prediction From Noisy Mel-Frequency Cepstral Coefficients." IEEE Transactions on Audio, Speech, and Language Processing 19, no. 2 (February 2011): 338–47. http://dx.doi.org/10.1109/tasl.2010.2047811.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

K, Sureshkumar, and Thatchinamoorthy P. "Speech and Spectral Landscapes using Mel-Frequency Cepstral Coefficients Signal Processing." International Journal of VLSI & Signal Processing 3, no. 1 (April 25, 2016): 5–8. http://dx.doi.org/10.14445/23942584/ijvsp-v3i1p102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Eskidere, Ömer, and Ahmet Gürhanlı. "Voice Disorder Classification Based on Multitaper Mel Frequency Cepstral Coefficients Features." Computational and Mathematical Methods in Medicine 2015 (2015): 1–12. http://dx.doi.org/10.1155/2015/956249.

Full text
Abstract:
The Mel Frequency Cepstral Coefficients (MFCCs) are widely used to extract essential information from a voice signal and have become a popular feature extractor in audio processing. However, MFCC features are usually calculated from a single window (taper), which is characterized by large variance. This study investigates reducing that variance for the classification of two different voice qualities (normal voice and disordered voice) using multitaper MFCC features. We also compare the performance of two newly proposed windowing techniques and the conventional single-taper technique. The results demonstrate that the adapted weighted Thomson multitaper method distinguishes between normal and disordered voice better than the conventional single-taper (Hamming window) technique and the two newly proposed windowing methods. The multitaper MFCC features may be helpful in identifying voices at risk for a real pathology that has to be proven later.
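A multitaper spectral estimate of the kind underlying multitaper MFCCs can be sketched as below, assuming SciPy's DPSS (Slepian) window generator. This is an illustrative uniform average, not the adapted weighted Thomson method from the paper.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_power_spectrum(frame, n_tapers=6, nw=4.0):
    """Average the periodograms obtained with several orthogonal
    DPSS (Slepian) tapers; this lowers the variance of the spectral
    estimate compared with a single Hamming window."""
    tapers = dpss(len(frame), nw, n_tapers)              # (n_tapers, N)
    specs = np.abs(np.fft.rfft(tapers * frame, axis=1)) ** 2
    return specs.mean(axis=0)                            # averaged estimate

frame = np.random.randn(400)           # one 400-sample analysis frame
ps = multitaper_power_spectrum(frame)
print(ps.shape)  # (201,)
```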
APA, Harvard, Vancouver, ISO, and other styles
40

Krasnoproshin, D. V., and M. I. Vashkevich. "Speech Emotion Recognition Method Based on Support Vector Machine and Suprasegmental Acoustic Features." Doklady BGUIR 22, no. 3 (June 24, 2024): 93–100. http://dx.doi.org/10.35596/1729-7648-2024-22-3-93-100.

Full text
Abstract:
The problem of recognizing emotions in a speech signal from mel-frequency cepstral coefficients with a classifier based on the support vector machine has been studied. The RAVDESS data set was used in the experiments. A model is proposed that uses a 306-component suprasegmental feature vector as input to a support vector machine classifier. Model quality was assessed using unweighted average recall (UAR). The use of linear, polynomial and radial basis functions as the kernel of the support vector machine classifier is considered. The use of different signal analysis frame sizes (from 23 to 341 ms) at the stage of extracting mel-frequency cepstral coefficients was investigated. The research results revealed significant accuracy of the resulting model (UAR = 48 %). The proposed approach shows potential for applications such as voice assistants, virtual agents, and mental health diagnostics.
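The UAR metric used above is the mean of per-class recalls, so every emotion class contributes equally regardless of how many samples it has. A minimal sketch:

```python
import numpy as np

def unweighted_average_recall(y_true, y_pred):
    """UAR: average the recall of each class, ignoring class frequency."""
    classes = np.unique(y_true)
    recalls = []
    for c in classes:
        mask = (y_true == c)                    # samples of class c
        recalls.append(np.mean(y_pred[mask] == c))
    return float(np.mean(recalls))

y_true = np.array([0, 0, 0, 1])
y_pred = np.array([0, 0, 1, 1])
print(unweighted_average_recall(y_true, y_pred))  # (2/3 + 1) / 2 = 0.8333...
```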
APA, Harvard, Vancouver, ISO, and other styles
41

Li, Guan Yu, Hong Zhi Yu, Yong Hong Li, and Ning Ma. "Features Extraction for Lhasa Tibetan Speech Recognition." Applied Mechanics and Materials 571-572 (June 2014): 205–8. http://dx.doi.org/10.4028/www.scientific.net/amm.571-572.205.

Full text
Abstract:
Speech feature extraction is discussed. The mel frequency cepstral coefficient (MFCC) and perceptual linear prediction (PLP) methods are analyzed. These two types of features are extracted in a Lhasa large-vocabulary continuous speech recognition system, and the recognition results are compared.
APA, Harvard, Vancouver, ISO, and other styles
42

Ali, Yusnita Mohd, Emilia Noorsal, Nor Fadzilah Mokhtar, Siti Zubaidah Md Saad, Mohd Hanapiah Abdullah, and Lim Chee Chin. "Speech-based gender recognition using linear prediction and mel-frequency cepstral coefficients." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 2 (November 1, 2022): 753–61. https://doi.org/10.11591/ijeecs.v28.i2.pp753-761.

Full text
Abstract:
Gender discrimination and awareness are essentially practiced in social, education, workplace, and economic sectors across the globe. A person manifests this attribute naturally in gait, body gesture, facial, including speech. For that reason, automatic gender recognition (AGR) has become an interesting sub-topic in speech recognition systems that can be found in many speech technology applications. However, retrieving salient gender-related information from a speech signal is a challenging problem since speech contains abundant information apart from gender. The paper intends to compare the performance of human vocal tract-based model i.e., linear prediction coefficients (LPC) and human auditory-based model i.e., Mel-frequency cepstral coefficients (MFCC) which are popularly used in other speech recognition tasks by experimentation of optimal feature parameters and classifier’s parameters. The audio data used in this study was obtained from 93 speakers uttering selected words with different vowels. The two feature vectors were tested using two classification algorithms namely, discriminant analysis (DA) and artificial neural network (ANN). Although the experimental results were promising using both feature parameters, the best overall accuracy rate of 97.07% was recorded using MFCC-ANN techniques with almost equal performance for male and female classes.
APA, Harvard, Vancouver, ISO, and other styles
43

Patil, Adwait. "Covid Classification Using Audio Data." International Journal for Research in Applied Science and Engineering Technology 9, no. 10 (October 31, 2021): 1633–37. http://dx.doi.org/10.22214/ijraset.2021.38675.

Full text
Abstract:
The coronavirus outbreak has affected the entire world adversely; this project was developed to help the general public assess their chances of being COVID-positive using only coughing sounds and basic patient data. Audio classification is one of the most interesting applications of deep learning. Like image data, audio data is stored as bits, and to understand and analyze this audio data we used Mel frequency cepstral coefficients (MFCCs), which make it possible to feed the audio to our neural network. We used Coughvid, a crowdsourced dataset consisting of 27000 audio files and metadata for the same number of patients, and a 1D Convolutional Neural Network (CNN) to process the audio and metadata. Future scope for this project is a model that rates how likely a person is to be infected instead of binary classification. Keywords: Audio classification, Mel frequency cepstral coefficients, Convolutional neural network, deep learning, Coughvid
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Wei, Zixuan Zhou, Junze Bao, Chengniu Wang, Hanqing Chen, Chen Xu, Gangcai Xie, Hongmin Shen, and Huiqun Wu. "Classifying Heart-Sound Signals Based on CNN Trained on MelSpectrum and Log-MelSpectrum Features." Bioengineering 10, no. 6 (May 25, 2023): 645. http://dx.doi.org/10.3390/bioengineering10060645.

Full text
Abstract:
The intelligent classification of heart-sound signals can assist clinicians in the rapid diagnosis of cardiovascular diseases. Mel-frequency spectrums (MelSpectrums) and log Mel-frequency spectrums (Log-MelSpectrums) based on a short-time Fourier transform (STFT) can represent the temporal and spectral structures of original heart-sound signals. Recently, various systems based on convolutional neural networks (CNNs) trained on the MelSpectrum and Log-MelSpectrum of segmental heart-sound frames have been presented that outperform systems using handcrafted features and classify heart-sound signals accurately. However, there is no a priori evidence of the best input representation for classifying heart sounds when using CNN models. Therefore, in this study, the MelSpectrum and Log-MelSpectrum features of heart-sound signals, combined with a mathematical model of cardiac-sound acquisition, were analysed theoretically. Both the experimental results and the theoretical analysis demonstrated that the Log-MelSpectrum features can reduce the classification difference between domains and improve the performance of CNNs for heart-sound classification.
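The difference between the two representations comes down to a log compression step; the sketch below (illustrative values only) shows how log compression turns multiplicative amplitude differences between recordings, e.g. different microphone gains across domains, into additive offsets.

```python
import numpy as np

def log_compress(mel_spectrum, eps=1e-10):
    """Log compression of a mel power spectrum; reduces dynamic range
    so amplitude scalings between recordings become additive offsets."""
    return np.log(mel_spectrum + eps)

mel = np.array([1.0, 10.0, 100.0])          # toy mel-band energies
print(log_compress(mel))                     # roughly [0, 2.30, 4.61]
# A 5x louder recording shifts every band by the same constant, log(5):
print(log_compress(mel * 5) - log_compress(mel))
```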
APA, Harvard, Vancouver, ISO, and other styles
45

Prajapati, Pooja, and Miral Patel. "Feature Extraction of Isolated Gujarati Digits with Mel Frequency Cepstral Coefficients (MFCCs)." International Journal of Computer Applications 163, no. 6 (April 17, 2017): 29–33. http://dx.doi.org/10.5120/ijca2017913551.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Almanfaluti, Istian Kriya, and Judi Prajetno Sugiono. "Identifikasi Pola Suara Pada Bahasa Jawa Meggunakan Mel Frequency Cepstral Coefficients (MFCC)." JURNAL MEDIA INFORMATIKA BUDIDARMA 4, no. 1 (January 29, 2020): 22. http://dx.doi.org/10.30865/mib.v4i1.1793.

Full text
Abstract:
Voice recognition is part of developing systems for interaction between computers and humans. The purpose of this study is to identify a person's voice pattern from spoken Javanese. This study used the Mel Frequency Cepstral Coefficients (MFCC) method to solve the problem of feature extraction from human voices. Tests were carried out on 4 users, 2 women and 2 men, each saying the single word "KUTHO" 5 times. The tests yield a voice pattern from the characteristics of each person, showing that the MFCC method can produce distinct voice patterns for different speakers.
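The MFCC pipeline used for feature extraction here (pre-emphasis, frame blocking, windowing, FFT, mel filterbank, cepstrum) can be sketched end-to-end in NumPy. Parameter values below are common defaults, not those of this paper.

```python
import numpy as np

def hz_to_mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
def mel_to_hz(m):  return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters evenly spaced on the mel scale."""
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)   # falling slope
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160,
         n_filters=26, n_ceps=13, pre=0.97):
    """Minimal MFCC sketch: pre-emphasis, framing, Hamming window,
    FFT power spectrum, mel filterbank, log, DCT-II."""
    # Pre-emphasis boosts high frequencies.
    sig = np.append(signal[0], signal[1:] - pre * signal[:-1])
    # Frame blocking and windowing.
    n_frames = 1 + max(0, (len(sig) - frame_len) // hop)
    frames = np.stack([sig[i*hop : i*hop + frame_len] for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Power spectrum and mel filterbank energies.
    power = np.abs(np.fft.rfft(frames, n=frame_len)) ** 2 / frame_len
    fbank = power @ mel_filterbank(n_filters, frame_len, sr).T
    logfb = np.log(fbank + 1e-10)
    # DCT-II decorrelates the log energies, yielding the cepstrum.
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), (2*n + 1) / (2*n_filters)))
    return logfb @ basis.T

tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s, 440 Hz
feats = mfcc(tone)
print(feats.shape)  # (98, 13): 98 frames, 13 coefficients each
```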
APA, Harvard, Vancouver, ISO, and other styles
47

H.Mansour, Abdelmajid, Gafar Zen Alabdeen Salh, and Khalid A. Mohammed. "Voice Recognition using Dynamic Time Warping and Mel-Frequency Cepstral Coefficients Algorithms." International Journal of Computer Applications 116, no. 2 (April 22, 2015): 34–41. http://dx.doi.org/10.5120/20312-2362.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Mohd Ali, Yusnita, Emilia Noorsal, Nor Fadzilah Mokhtar, Siti Zubaidah Md Saad, Mohd Hanapiah Abdullah, and Lim Chee Chin. "Speech-based gender recognition using linear prediction and mel-frequency cepstral coefficients." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 2 (November 1, 2022): 753. http://dx.doi.org/10.11591/ijeecs.v28.i2.pp753-761.

Full text
Abstract:
Gender discrimination and awareness are essentially practiced in social, education, workplace, and economic sectors across the globe. A person manifests this attribute naturally in gait, body gesture, facial, including speech. For that reason, automatic gender recognition (AGR) has become an interesting sub-topic in speech recognition systems that can be found in many speech technology applications. However, retrieving salient gender-related information from a speech signal is a challenging problem since speech contains abundant information apart from gender. The paper intends to compare the performance of human vocal tract-based model i.e., linear prediction coefficients (LPC) and human auditory-based model i.e., Mel-frequency cepstral coefficients (MFCC) which are popularly used in other speech recognition tasks by experimentation of optimal feature parameters and classifier’s parameters. The audio data used in this study was obtained from 93 speakers uttering selected words with different vowels. The two feature vectors were tested using two classification algorithms namely, discriminant analysis (DA) and artificial neural network (ANN). Although the experimental results were promising using both feature parameters, the best overall accuracy rate of 97.07% was recorded using MFCC-ANN techniques with almost equal performance for male and female classes.
APA, Harvard, Vancouver, ISO, and other styles
49

Maliki, I., and Sofiyanudin. "Musical Instrument Recognition using Mel-Frequency Cepstral Coefficients and Learning Vector Quantization." IOP Conference Series: Materials Science and Engineering 407 (September 26, 2018): 012118. http://dx.doi.org/10.1088/1757-899x/407/1/012118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Cinoglu, Bahadir, Umut Durak, and T. Hikmet Karakoc. "Utilizing Mel-Frequency Cepstral Coefficients for Acoustic Diagnostics of Damaged UAV Propellers." International Journal of Aviation Science and Technology vm05, is02 (November 2, 2024): 79–89. http://dx.doi.org/10.23890/ijast.vm05is02.0201.

Full text
Abstract:
In this study, the diagnostic potential of the acoustic signatures of Unmanned Aerial Vehicle (UAV) propellers, one of the critical components of these vehicles, was examined under different damage conditions. For this purpose, a test bench was set up and acoustic data for five differently damaged propellers and one undamaged propeller were collected. The methodology uses an omnidirectional microphone to collect data at three thrust levels corresponding to 25%, 50% and 75%. Propeller sound characteristics were extracted using the Mel Frequency Cepstrum Coefficient (MFCC) technique, which incorporates the Fast Fourier Transform (FFT), and the visual differences between sound patterns were discussed to underline their importance for diagnostics. The results indicated the potential for successfully classifying slightly and symmetrically damaged and undamaged propellers in an Artificial Intelligence-based diagnostic application using MFCC. This study aimed to demonstrate how MFCC can be used effectively to detect damaged and undamaged propellers through their sound profiles and highlighted its potential for future integration into Artificial Intelligence (AI) methods for UAV diagnostics. The findings provide a foundation for an advanced diagnostic method that increases UAV safety and operational efficiency.
APA, Harvard, Vancouver, ISO, and other styles