Journal articles on the topic 'Speech Features'

Consult the top 50 journal articles for your research on the topic 'Speech Features.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Axelrod, Scott E. "Speech recognition utilizing multitude of speech features." Journal of the Acoustical Society of America 128, no. 4 (2010): 2259. http://dx.doi.org/10.1121/1.3500788.

2

Khonglah, Banriskhem K., and S. R. Mahadeva Prasanna. "Speech / music classification using speech-specific features." Digital Signal Processing 48 (January 2016): 71–83. http://dx.doi.org/10.1016/j.dsp.2015.09.005.

3

Zhu, Qiang, Zhong Wang, Yunfeng Dou, and Jian Zhou. "Whispered Speech Conversion Based on the Inversion of Mel Frequency Cepstral Coefficient Features." Algorithms 15, no. 2 (2022): 68. http://dx.doi.org/10.3390/a15020068.

Abstract:
A conversion method based on the inversion of Mel frequency cepstral coefficient (MFCC) features was proposed to convert whispered speech into normal speech. First, the MFCC features of whispered speech and normal speech were extracted and a matching relation between the MFCC feature parameters of whispered speech and normal speech was developed through the Gaussian mixture model (GMM). Then, the MFCC feature parameters of normal speech corresponding to whispered speech were obtained based on the GMM and, finally, whispered speech was converted into normal speech through the inversion of MFCC
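
The pipeline this abstract outlines (MFCC extraction, a GMM over paired whispered/normal features, then MFCC inversion) can be pictured with the hedged sketch below. It is only an illustration, not the cited paper's implementation: synthetic waveforms stand in for real recordings, frame alignment and the inversion stage are omitted, and librosa and scikit-learn are assumed to be available.

```python
# Illustrative sketch only: MFCC extraction for a whispered/normal pair
# plus a GMM over the joint feature vectors. Synthetic waveforms stand
# in for recordings; alignment and MFCC inversion are not shown.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

sr = 16000
whispered = 0.1 * np.random.randn(2 * sr)                   # stand-in whispered utterance
normal = np.sin(2 * np.pi * 150 * np.arange(2 * sr) / sr)   # stand-in normal utterance

def mfcc_frames(y, sr=16000, n_mfcc=13):
    # librosa returns (n_mfcc, frames); transpose to (frames, n_mfcc)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

w, s = mfcc_frames(whispered, sr), mfcc_frames(normal, sr)
n = min(len(w), len(s))                  # assume the pair is already frame-aligned
joint = np.hstack([w[:n], s[:n]])        # joint (whispered, normal) vectors
gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(joint)
print(gmm.means_.shape)                  # (8, 26)
```
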
4

Усар, Угилой. "Manpulative speech discourse features." Арабский язык в эпоху глобализации: инновационные подходы и методы обучения 1, no. 1 (2023): 648–51. http://dx.doi.org/10.47689/atgd:iyom-vol1-iss1-pp648-651-id28726.

Abstract:
Manipulation is one of the phenomena that are highly affected by propaganda and closely related to media and political discourses. This paper is an approach whose purpose is to study, through comparative, contrastive, and observational methods, the concept of manipulation as a linguistic phenomenon, with the central emphasis on the manipulative techniques and tactics that are utilized for various reasons in various fields of study.
5

Jurayeva, Dilorom Rakhmatillo qizi. "FEATURES OF DIALOGIC SPEECH." Educational Research in Universal Sciences 2, no. 1 (2023): 522–24. https://doi.org/10.5281/zenodo.7620256.

Abstract:
When speaking a language, a person not only communicates information but also thinks and becomes aware of the hidden aspects of the natural world. Speech is the actual form of communication; language is merely its means. Language is not part of history, but speech has historical characteristics. Language is a social phenomenon, while speaking is a mental phenomenon. The ways in which the speaker and the listener participate in communication are not necessarily the same. As a result, dialogic and monologic speech types develop. At the same time, dialogic speech has several types.
6

Kizi, Bakirova Umida Bakhtiyor. "FEATURES OF SPEECH FORMATION IN PRESCHOOL CHILDREN." International Journal of Pedagogics 03, no. 03 (2023): 49–53. http://dx.doi.org/10.37547/ijp/volume03issue03-10.

Abstract:
Speech is a wonderful gift of nature – it is not given to a person from birth. It should take time for the child to start talking. And adults should make a lot of efforts to ensure that the child's speech develops correctly and in a timely manner.
7

Sun, Ying, Xue-Ying Zhang, Jiang-He Ma, Chun-Xiao Song, and Hui-Fen Lv. "Nonlinear Dynamic Feature Extraction Based on Phase Space Reconstruction for the Classification of Speech and Emotion." Mathematical Problems in Engineering 2020 (April 9, 2020): 1–15. http://dx.doi.org/10.1155/2020/9452976.

Abstract:
Due to the shortcomings of linear feature parameters in speech signals, and the limitations of existing time- and frequency-domain attribute features in characterizing the integrity of the speech information, in this paper, we propose a nonlinear method for feature extraction based on the phase space reconstruction (PSR) theory. First, the speech signal was analyzed using a nonlinear dynamic model. Then, the model was used to reconstruct a one-dimensional time speech signal. Finally, nonlinear dynamic (NLD) features based on the reconstruction of the phase space were extracted as the new chara
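
As a concrete reminder of what phase space reconstruction means here, the snippet below builds a simple delay embedding of a one-dimensional signal with NumPy. The embedding dimension and delay are arbitrary example values, not the settings used by the authors.

```python
# Basic delay embedding (phase space reconstruction) of a 1-D signal.
# dim and tau are arbitrary illustrative values.
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Return an (N, dim) matrix of delay vectors [x(t), x(t+tau), ...]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

signal = np.sin(np.linspace(0, 20 * np.pi, 2000))   # toy stand-in for a speech frame
rps = delay_embed(signal, dim=3, tau=5)
print(rps.shape)                                    # (1990, 3)
```
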
8

Cheng, Shidan, Ying Shen, and Dongqing Wang. "Target Speaker Extraction by Fusing Voiceprint Features." Applied Sciences 12, no. 16 (2022): 8152. http://dx.doi.org/10.3390/app12168152.

Abstract:
It is a critical problem to accurately separate clean speech in the multispeaker scenario for different speakers. However, in most cases, smart devices such as smart phones interact with only one specific user. As a consequence, the speech separation models adopted by these devices only have to extract the target speaker’s speech. A voiceprint, which reflects the speaker’s voice characteristics, provides prior knowledge for the target speech separation. Therefore, how to efficiently integrate voiceprint features into the existing speech separation models to improve their performance for the ta
9

Miao, Yuji, Haiying Liu, and Shan Gu. "English Speech Feature Recognition-Based Fuzzy Algorithm and Artificial Intelligent." Wireless Communications and Mobile Computing 2022 (May 4, 2022): 1–10. http://dx.doi.org/10.1155/2022/4421520.

Abstract:
It is necessary to study the application of digital technology in English speech feature recognition. This paper combines the actual needs of English speech feature recognition to improve the digital algorithm. Moreover, this paper combines fuzzy algorithm to analyze English speech features, analyzes the shortcomings of traditional algorithms, proposes the fuzzy digitized English speech recognition algorithm, and builds an English speech feature recognition model on this basis. In addition, this paper conducts time-frequency analysis on chaotic signals and speech signals, eliminates noise in E
10

JAFARI, AYYOOB, and FARSHAD ALMASGANJ. "USING NONLINEAR MODELING OF RECONSTRUCTED PHASE SPACE AND FREQUENCY DOMAIN ANALYSIS TO IMPROVE AUTOMATIC SPEECH RECOGNITION PERFORMANCE." International Journal of Bifurcation and Chaos 22, no. 03 (2012): 1250053. http://dx.doi.org/10.1142/s0218127412500538.

Abstract:
This paper introduces a combinational feature extraction approach to improve speech recognition systems. The main idea is to simultaneously benefit from some features obtained from nonlinear modeling applied to speech reconstructed phase space (RPS) and typical Mel frequency Cepstral coefficients (MFCCs) which have a proved role in speech recognition field. With an appropriate dimension, the reconstructed phase space of speech signal is assured to be topologically equivalent to the dynamics of the speech production system, and could therefore include information that may be absent in linear an
11

Kumar, Dr Tribhuwan, Klinge Orlando Villalba-Condori, Dennis Arias-Chavez, Rajesh K., Kalyan Chakravarthi M, and Dr Suman Rajest S. "An Evaluation on Speech Recognition Technology based on Machine Learning." Webology 19, no. 1 (2022): 646–63. http://dx.doi.org/10.14704/web/v19i1/web19046.

Abstract:
Speech is the basic way of interaction between the listener to the speaker by voice or expression. Humans can easily understand the speakers' message, but machines can't understand the speaker's word. Nowadays, most of our lives are occupied by machines; but we can't interact with machines. The human brain, like machine learning technology, is essential for speech recognition to interact with machines to humans. The language used for speech recognition must be a global language, so English has been used in this paper. The machine learning methodology is used in a lot of assignments through the
12

Lin, Chang-Hong, Wei-Kai Liao, Wen-Chi Hsieh, Wei-Jiun Liao, and Jia-Ching Wang. "Emotion Identification Using Extremely Low Frequency Components of Speech Feature Contours." Scientific World Journal 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/757121.

Abstract:
The investigations of emotional speech identification can be divided into two main parts, features and classifiers. In this paper, how to extract an effective speech feature set for the emotional speech identification is addressed. In our speech feature set, we use not only statistical analysis of frame-based acoustical features, but also the approximated speech feature contours, which are obtained by extracting extremely low frequency components to speech feature contours. Furthermore, principal component analysis (PCA) is applied to the approximated speech feature contours so that an efficie
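
One schematic way to read "extremely low frequency components of a speech feature contour" is to keep only the lowest DFT coefficients of the frame-wise contour, as sketched below. This is an interpretation for illustration only; the cutoff and the random contour are placeholders, not the authors' procedure.

```python
# Schematic: approximate a frame-wise feature contour by keeping only
# its lowest-frequency DFT components. The cutoff is illustrative.
import numpy as np

def low_freq_approximation(contour, keep=4):
    spec = np.fft.rfft(contour)
    spec[keep:] = 0                       # discard everything above the cutoff
    return np.fft.irfft(spec, n=len(contour))

contour = np.random.randn(200)            # stand-in for a pitch or energy track
approx = low_freq_approximation(contour, keep=4)
print(approx.shape)                       # (200,)
```
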
13

Zhang, Li-Min, Yang Li, Yue-Ting Zhang, Giap Weng Ng, Yu-Beng Leau, and Hao Yan. "A Deep Learning Method Using Gender-Specific Features for Emotion Recognition." Sensors 23, no. 3 (2023): 1355. http://dx.doi.org/10.3390/s23031355.

Abstract:
Speech reflects people’s mental state and using a microphone sensor is a potential method for human–computer interaction. Speech recognition using this sensor is conducive to the diagnosis of mental illnesses. The gender difference of speakers affects the process of speech emotion recognition based on specific acoustic features, resulting in the decline of emotion recognition accuracy. Therefore, we believe that the accuracy of speech emotion recognition can be effectively improved by selecting different features of speech for emotion recognition based on the speech representations of differen
14

Huang, Chenchen, Wei Gong, Wenlong Fu, and Dongyu Feng. "A Research of Speech Emotion Recognition Based on Deep Belief Network and SVM." Mathematical Problems in Engineering 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/749604.

Abstract:
Feature extraction is a very important part in speech emotion recognition, and in allusion to feature extraction in speech emotion recognition problems, this paper proposed a new method of feature extraction, using DBNs in DNN to extract emotional features in speech signal automatically. By training a 5 layers depth DBNs, to extract speech emotion feature and incorporate multiple consecutive frames to form a high dimensional feature. The features after training in DBNs were the input of nonlinear SVM classifier, and finally speech emotion recognition multiple classifier system was achieved. Th
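
The frame-stacking step mentioned above (concatenating several consecutive frames into one high-dimensional vector before classification) can be outlined as follows. The DBN feature-learning stage is omitted, the data are random placeholders, and the SVM comes from scikit-learn, so this is only a sketch of the pipeline shape, not the authors' model.

```python
# Outline only: stack consecutive feature frames and train an RBF SVM.
# The DBN stage described in the abstract is not shown here.
import numpy as np
from sklearn.svm import SVC

def stack_frames(frames, context=5):
    """Concatenate `context` consecutive frames into one long vector each."""
    n = len(frames) - context + 1
    return np.array([frames[i:i + context].ravel() for i in range(n)])

frames = np.random.randn(100, 39)              # toy (frames, features) matrix
X = stack_frames(frames, context=5)            # shape (96, 195)
y = np.random.randint(0, 4, size=len(X))       # toy emotion labels
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```
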
15

Phapatanaburi, Khomdet, Wongsathon Pathonsuwan, Longbiao Wang, et al. "Whispered Speech Detection Using Glottal Flow-Based Features." Symmetry 14, no. 4 (2022): 777. http://dx.doi.org/10.3390/sym14040777.

Abstract:
Recent studies have reported that the performance of Automatic Speech Recognition (ASR) technologies designed for normal speech notably deteriorates when it is evaluated by whispered speech. Therefore, the detection of whispered speech is useful in order to attenuate the mismatch between training and testing situations. This paper proposes two new Glottal Flow (GF)-based features, namely, GF-based Mel-Frequency Cepstral Coefficient (GF-MFCC) as a magnitude-based feature and GF-based relative phase (GF-RP) as a phase-based feature for whispered speech detection. The main contribution of the pro
16

Wu, Xuan, Silong Zhou, Mingwei Chen, et al. "Combined spectral and speech features for pig speech recognition." PLOS ONE 17, no. 12 (2022): e0276778. http://dx.doi.org/10.1371/journal.pone.0276778.

Abstract:
The sound of the pig is one of its important signs, which can reflect various states such as hunger, pain or emotional state, and directly indicates the growth and health status of the pig. Existing speech recognition methods usually start with spectral features. The use of spectrograms to achieve classification of different speech sounds, while working well, may not be the best approach for solving such tasks with single-dimensional feature input. Based on the above assumptions, in order to more accurately grasp the situation of pigs and take timely measures to ensure the health status of pig
17

Kanwal, Sofia, Sohail Asghar, and Hazrat Ali. "Feature selection enhancement and feature space visualization for speech-based emotion recognition." PeerJ Computer Science 8 (November 4, 2022): e1091. http://dx.doi.org/10.7717/peerj-cs.1091.

Abstract:
Robust speech emotion recognition relies on the quality of the speech features. We present speech features enhancement strategy that improves speech emotion recognition. We used the INTERSPEECH 2010 challenge feature-set. We identified subsets from the features set and applied principle component analysis to the subsets. Finally, the features are fused horizontally. The resulting feature set is analyzed using t-distributed neighbour embeddings (t-SNE) before the application of features for emotion recognition. The method is compared with the state-of-the-art methods used in the literature. The
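
A minimal sketch of the pipeline this abstract outlines, assuming scikit-learn is available: apply PCA to each feature subset, fuse the reduced subsets horizontally, and embed the result with t-SNE. The subset sizes, component counts, and random data are placeholders rather than the INTERSPEECH 2010 features used in the paper.

```python
# Sketch: PCA per feature subset, horizontal fusion, then a t-SNE
# embedding for visualization. All dimensions are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
subsets = [rng.normal(size=(300, d)) for d in (400, 600, 582)]   # toy feature subsets

reduced = [PCA(n_components=20).fit_transform(s) for s in subsets]
fused = np.hstack(reduced)                        # horizontal fusion -> (300, 60)
embedding = TSNE(n_components=2).fit_transform(fused)
print(embedding.shape)                            # (300, 2)
```
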
18

Yuan, Hongyan, Linjuan Zhang, Baoning Niu, and Xianrong Zheng. "A Spoofing Speech Detection Method Combining Multi-Scale Features and Cross-Layer Information." Information 16, no. 3 (2025): 194. https://doi.org/10.3390/info16030194.

Abstract:
Pre-trained self-supervised speech models can extract general acoustic features, providing feature inputs for various speech downstream tasks. Spoofing speech detection, which is a pressing issue in the age of generative AI, requires both global information and local features of speech. The multi-layer transformer structure in pre-trained speech models can effectively capture temporal information and global context in speech, but there is still room for improvement in handling local features. To address this issue, a speech spoofing detection method that integrates multi-scale features and cro
19

Avdji, Sumeyra Hussein. "Speech features of stuttering children." SCIENTIFIC WORK 62, no. 01 (2021): 151–54. http://dx.doi.org/10.36719/2663-4619/62/151-154.

Abstract:
To understand the nature of stuttering, it is important to clarify the speech characteristics of children who stutter. The level of development of language skills in stuttering children is almost the same as in normal-speaking children. They determine the reactions of individual characteristics to the influence of various situational factors. Research on the speech characteristics of stuttering children shows that they have difficulty using the means of communication in the communicative processes of speech, despite the richness of vocabulary and the ability to compose sentences. Stuttering is
20

Myrhorod, Violetta. "FEATURES OF LITERARY TRANSLATION." Scientific Journal of Polonia University 66, no. 5 (2024): 46–51. https://doi.org/10.23856/6606.

Abstract:
The aim of this paper is to determine the phenomenon of artistic translation. In modern philological science, there are various directions of studying the speech characteristics of characters as the primary component of a character's image in a literary work. Contemporary literary studies consider the speech characteristics of characters as one of the aspects of creating characters in literary texts of different genres. In linguistics, the subject of study is based on the material of oral speech and texts that present general, typological, and specific features of communication, as well as the
21

Kumar, M. Rupesh, Susmitha Vekkot, S. Lalitha, et al. "Dementia Detection from Speech Using Machine Learning and Deep Learning Architectures." Sensors 22, no. 23 (2022): 9311. http://dx.doi.org/10.3390/s22239311.

Abstract:
Dementia affects the patient’s memory and leads to language impairment. Research has demonstrated that speech and language deterioration is often a clear indication of dementia and plays a crucial role in the recognition process. Even though earlier studies have used speech features to recognize subjects suffering from dementia, they are often used along with other linguistic features obtained from transcriptions. This study explores significant standalone speech features to recognize dementia. The primary contribution of this work is to identify a compact set of speech features that aid in th
22

Deichakivska, Oleksandra. "Pragmatic features of using predicative adjectives." Revista Amazonia Investiga 13, no. 82 (2024): 27–42. https://doi.org/10.34069/ai/2024.82.10.2.

Abstract:
The article examines predicative adjectives as components of the speech acts of apology and gratitude, as well as the speech act of compliment. The examples show the importance of the speech acts of apology, gratitude, and compliment; the general characteristics and multi-intentionality of these acts are presented. Peculiarities of the language implementation of apology, gratitude, and compliment when using the structure Vcop+Adj are highlighted. Ways of implementing the contact-establishing strategy by using expressive speech acts of apology, gratitude, praise, and compliment are demonstrated
23

Zhao, Peng, Fangai Liu, and Xuqiang Zhuang. "Speech Sentiment Analysis Using Hierarchical Conformer Networks." Applied Sciences 12, no. 16 (2022): 8076. http://dx.doi.org/10.3390/app12168076.

Abstract:
Multimodality has been widely used for sentiment analysis tasks, especially for speech sentiment analysis. Compared with the emotion expression of most text languages, speech is more intuitive for human emotion, as speech contains more and richer emotion features. Most of the current studies mainly involve the extraction of speech features, but the accuracy and prediction rate of the models still need to be improved. To improve the extraction and fusion of speech sentiment feature information, we present a new framework. The framework adopts a hierarchical conformer model and an attention-base
24

Rakhmanenko, I. A., A. A. Shelupanov, and E. Y. Kostyuchenko. "Automatic text-independent speaker verification using convolutional deep belief network." Computer Optics 44, no. 4 (2020): 596–605. http://dx.doi.org/10.18287/2412-6179-co-621.

Abstract:
This paper is devoted to the use of the convolutional deep belief network as a speech feature extractor for automatic text-independent speaker verification. The paper describes the scope and problems of automatic speaker verification systems. Types of modern speaker verification systems and types of speech features used in speaker verification systems are considered. The structure and learning algorithm of convolutional deep belief networks is described. The use of speech features extracted from three layers of a trained convolution deep belief network is proposed. Experimental studies of the
25

qizi, Majidova Maftuna Samad. "Specific Features of Linguosomatic Speech of Women and Men." American Journal Of Social Sciences And Humanity Research 5, no. 5 (2025): 340–42. https://doi.org/10.37547/ajsshr/volume05issue05-68.

Abstract:
Speech produced through nonverbal means (linguosomatic speech) is one of the means of effectively expressing the emotional, mental and social state of a person. Linguosomatic speech of women and men is formed on the basis of specific physiological, psychological, cultural and social differences. This article studies the gender-related features of nonverbal communication based on a linguosomatic approach. How and for what purposes women and men use body language, the degree of its harmony with linguistic speech and its connection with gender stereotypes formed in society are analyzed on a scien
26

kizi, Xolmurodova Madina Alisher. "PHONETIC FEATURES OF ENGLISH DIALECTS." American Journal of Philological Sciences 4, no. 4 (2024): 39–45. http://dx.doi.org/10.37547/ajps/volume04issue04-08.

Abstract:
The article examines how the term dialect is often used in the sense of regional, local or geographic varieties of a language mainly used in oral speech. A language belongs to a nation or nations, as English does; therefore it is a social phenomenon, understandable by all its members. A language is not a complex combination of individual speech forms. The phonetic and phonological features of the language-dialect relationship, natural bilingualism and also some types of speech communities classified by their social characteristics are studied in a new branch of phonetics, namely social phonetics. Idi
27

Byun, Sung-Woo, and Seok-Pil Lee. "A Study on a Speech Emotion Recognition System with Effective Acoustic Features Using Deep Learning Algorithms." Applied Sciences 11, no. 4 (2021): 1890. http://dx.doi.org/10.3390/app11041890.

Abstract:
The goal of the human interface is to recognize the user’s emotional state precisely. In the speech emotion recognition study, the most important issue is the effective parallel use of the extraction of proper speech features and an appropriate classification engine. Well defined speech databases are also needed to accurately recognize and analyze emotions from speech signals. In this work, we constructed a Korean emotional speech database for speech emotion analysis and proposed a feature combination that can improve emotion recognition performance using a recurrent neural network model. To i
28

Mporas, Iosif, Todor Ganchev, and Mihalis Siafarikas. "Comparison of Speech Features on the Speech Recognition Task." Journal of Computer Science 3, no. 8 (2007): 608–16. http://dx.doi.org/10.3844/jcssp.2007.608.616.

29

Ramírez, J., P. Yélamos, J. M. Górriz, and J. C. Segura. "SVM-based speech endpoint detection using contextual speech features." Electronics Letters 42, no. 7 (2006): 426. http://dx.doi.org/10.1049/el:20064068.

30

Kacur, Juraj, and Vladimir Chudy. "Topological invariants as speech features for automatic speech recognition." International Journal of Signal and Imaging Systems Engineering 7, no. 4 (2014): 235. http://dx.doi.org/10.1504/ijsise.2014.066601.

31

Zharovskaya, Elena Viktorovna. "PROSODIC FEATURES OF YOUTH’S SPEECH." Philological Sciences. Issues of Theory and Practice, no. 8-1 (August 2018): 95–99. http://dx.doi.org/10.30853/filnauki.2018-8-1.22.

32

Kamaruddin, Norhaslinda, and Abdul Wahab. "Features extraction for speech emotion." Journal of Computational Methods in Sciences and Engineering 9, s1 (2009): S1—S12. http://dx.doi.org/10.3233/jcm-2009-0231.

33

Rozov, V. A. "TYPOLOGICAL FEATURES OF SACRAL SPEECH." Philologos 44, no. 1 (2020): 50–57. http://dx.doi.org/10.24888/2079-2638-2020-44-1-50-57.

34

Vyaltseva, Darya. "Acoustic Features of Twins’ Speech." Vestnik Volgogradskogo gosudarstvennogo universiteta. Serija 2. Jazykoznanije 16, no. 3 (2017): 227–34. http://dx.doi.org/10.15688/jvolsu2.2017.3.24.

35

Bahl, Lalit R. "Speech recognition using dynamic features." Journal of the Acoustical Society of America 102, no. 6 (1997): 3252. http://dx.doi.org/10.1121/1.420242.

36

Huang, Chang-Han, and Frank Torsten Bernd Seide. "Tone features for speech recognition." Journal of the Acoustical Society of America 117, no. 5 (2005): 2698. http://dx.doi.org/10.1121/1.1932393.

37

Sitosanova, Ol'ga. "ORAL SPEECH AND ITS FEATURES." Bulletin of the Angarsk State Technical University 1, no. 18 (2024): 428–30. https://doi.org/10.36629/2686-777x-2024-1-18-428-430.

Abstract:
Modern literary language has two forms: oral and written. They are characterized by features in terms of lexical composition and grammatical structure, as they are designed for different types of perception - auditory and visual. The article examines the features of oral speech
38

Eide, Ellen M. "Speech recognition using discriminant features." Journal of the Acoustical Society of America 126, no. 3 (2009): 1646. http://dx.doi.org/10.1121/1.3230471.

39

Huang, Kuo-Chang, Yau-Tarng Juang, and Wen-Chieh Chang. "Robust integration for speech features." Signal Processing 86, no. 9 (2006): 2282–88. http://dx.doi.org/10.1016/j.sigpro.2005.10.020.

40

Chen, Chia-Ping, and Jeff A. Bilmes. "MVA Processing of Speech Features." IEEE Transactions on Audio, Speech and Language Processing 15, no. 1 (2007): 257–70. http://dx.doi.org/10.1109/tasl.2006.876717.

41

Babchuk, Yuliia, and Sergii Babii. "PROSODIC FEATURES OF SPONTANEOUS SPEECH." Knowledge, Education, Law, Management 56, no. 4 (2023): 76–80. http://dx.doi.org/10.51647/kelm.2023.4.12.

42

Nacafova, S. "National features of speech etiquette." Bulletin of Science and Practice, no. 11 (November 14, 2017): 436–42. https://doi.org/10.5281/zenodo.1048747.

Abstract:
The article shows the differences between the speech etiquette of different peoples. The most important thing is to find a common language with this or that interlocutor. Knowledge of national etiquette and national character helps to learn the principles of speech of another nation. The article indicates in which cases certain forms of etiquette are considered acceptable. At the same time, the rules of etiquette are emphasized in the conduct of a dialogue in official meetings and, for example, in the exchange of business cards. Because the prerequisite for the culture of the language is conversational et
43

Ergashova, Mashkhura Amandzhanovna, and Madina Mamasolievna Ibragimova. "DISTINCTIVE FEATURES OF SPEECH CULTURE." ACADEMIC RESEARCH JOURNAL 1, no. 5 (2022): 148–52. https://doi.org/10.5281/zenodo.7319736.

44

Zhao, Huan, Lixuan Li, Xupeng Zha, Yujiang Wang, Zhaoxin Xie, and Zixing Zhang. "ACG-EmoCluster: A Novel Framework to Capture Spatial and Temporal Information from Emotional Speech Enhanced by DeepCluster." Sensors 23, no. 10 (2023): 4777. http://dx.doi.org/10.3390/s23104777.

Abstract:
Speech emotion recognition (SER) is a task that tailors a matching function between the speech features and the emotion labels. Speech data have higher information saturation than images and stronger temporal coherence than text. This makes entirely and effectively learning speech features challenging when using feature extractors designed for images or texts. In this paper, we propose a novel semi-supervised framework for extracting spatial and temporal features from speech, called the ACG-EmoCluster. This framework is equipped with a feature extractor for simultaneously extracting the spatia
45

Park, Seongjin, and John Culnan. "Automatic proficiency judgments: Accentedness, fluency, and comprehensibility." Journal of the Acoustical Society of America 150, no. 4 (2021): A357. http://dx.doi.org/10.1121/10.0008582.

Abstract:
The goal of the present study is to investigate whether computational models can approximate human perceptual judgments of accentedness, fluency, and comprehensibility in non-native speech using low-level acoustic features and speech rhythm features. Previous studies have used the results of automatic speech recognition systems, such as word error rate, as features to automatically measure speakers’ accentedness or fluency. However, in the present study, we aim to build automatic perceptual judgment model for lexically constrained speech using only audio vectors (wav2vec), acoustic features, a
46

Agustinus, Bimo Gumelar, Yogatama Astri, Pramono Adi Derry, Frismanda, and Sugiarto Indar. "Forward feature selection for toxic speech classification using support vector machine and random forest." International Journal of Artificial Intelligence (IJ-AI) 11, no. 2 (2022): 717–26. https://doi.org/10.11591/ijai.v11.i2.pp717-726.

Abstract:
This study describes the methods for eliminating irrelevant features in speech data to enhance toxic speech classification accuracy and reduce the complexity of the learning process. Therefore, the wrapper method is introduced to estimate the forward selection technique based on support vector machine (SVM) and random forest (RF) classifier algorithms. Eight main speech features were then extracted with derivatives consisting of 9 statistical sub-features from 72 features in the extraction process. Furthermore, Python is used to implement the classifier algorithm of 2,000 toxic data collected
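
For orientation, wrapper-style forward feature selection with SVM and random forest estimators can be expressed with scikit-learn's SequentialFeatureSelector, as in the sketch below. The random data, the number of selected features, and the cross-validation setting are placeholders, not the paper's configuration.

```python
# Sketch of wrapper-based forward feature selection with SVM and
# random forest estimators; data and settings are placeholders.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 72))              # toy stand-in for 72 speech features
y = rng.integers(0, 2, size=200)            # toy toxic / non-toxic labels

for estimator in (SVC(kernel="rbf"), RandomForestClassifier(n_estimators=100)):
    selector = SequentialFeatureSelector(
        estimator, n_features_to_select=10, direction="forward", cv=3)
    selector.fit(X, y)
    print(type(estimator).__name__, np.flatnonzero(selector.get_support()))
```
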
47

Zheng, Chunjun, Chunli Wang, and Ning Jia. "An Ensemble Model for Multi-Level Speech Emotion Recognition." Applied Sciences 10, no. 1 (2019): 205. http://dx.doi.org/10.3390/app10010205.

Abstract:
Speech emotion recognition is a challenging and widely examined research topic in the field of speech processing. The accuracy of existing models in speech emotion recognition tasks is not high, and the generalization ability is not strong. Since the feature set and model design of effective speech directly affect the accuracy of speech emotion recognition, research on features and models is important. Because emotional expression is often correlated with the global features, local features, and model design of speech, it is often difficult to find a universal solution for effective speech emo
48

Jokić, Ivan, Stevan Jokić, Vlado Delić, and Zoran Perić. "One Solution of Extension of Mel-Frequency Cepstral Coefficients Feature Vector for Automatic Speaker Recognition." Information Technology And Control 49, no. 2 (2020): 224–36. http://dx.doi.org/10.5755/j01.itc.49.2.22258.

Abstract:
One extension of feature vector for automatic speaker recognition is considered in this paper. The starting feature vector consisted of 18 mel-frequency cepstral coefficients (MFCCs). Extension was done with two additional features derived from the spectrum of the speech signal. The main idea that generated this research is that it is possible to increase the efficiency of automatic speaker recognition by constructing a feature vector which tracks a real perceived spectrum in the observed speech. Additional features are based on the energy maximums in the appropriate frequency ranges of observ
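
To make the feature-vector extension described above easier to picture, the sketch below appends two spectral-peak frequencies to a mean 18-dimensional MFCC vector. The frequency bands and the synthetic signal are invented for illustration and do not reproduce the authors' feature definition; librosa is assumed.

```python
# Sketch: extend a mean MFCC vector with the peak frequencies of the
# average power spectrum in two fixed bands. Band edges are placeholders.
import numpy as np
import librosa

sr = 16000
y = np.sin(2 * np.pi * 200 * np.arange(sr) / sr)        # 1 s synthetic stand-in for speech
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=18).mean(axis=1)

spec = np.abs(librosa.stft(y)) ** 2                     # power spectrogram
freqs = librosa.fft_frequencies(sr=sr)                  # bin centre frequencies
mean_spec = spec.mean(axis=1)

def peak_freq(lo, hi):
    band = (freqs >= lo) & (freqs < hi)
    return freqs[band][np.argmax(mean_spec[band])]

extended = np.concatenate([mfcc, [peak_freq(300, 1500), peak_freq(1500, 4000)]])
print(extended.shape)                                   # (20,)
```
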
49

Guo, Pengfei, Shucheng Huang, and Mingxing Li. "DDA-MSLD: A Multi-Feature Speech Lie Detection Algorithm Based on a Dual-Stream Deep Architecture." Information 16, no. 5 (2025): 386. https://doi.org/10.3390/info16050386.

Abstract:
Speech lie detection is a technique that analyzes speech signals in detail to determine whether a speaker is lying. It has significant application value and has attracted attention from various fields. However, existing speech lie detection algorithms still have certain limitations. These algorithms fail to fully explore manually extracted features based on prior knowledge and also neglect the dynamic characteristics of speech as well as the impact of temporal context, resulting in reduced detection accuracy and generalization. To address these issues, this paper proposes a multi-feature speec
50

Wu, Yuezhou, Guimin Li, and Qiang Fu. "Non-Intrusive Air Traffic Control Speech Quality Assessment with ResNet-BiLSTM." Applied Sciences 13, no. 19 (2023): 10834. http://dx.doi.org/10.3390/app131910834.

Abstract:
In the current field of air traffic control speech, there is a lack of effective objective speech quality evaluation methods. This paper proposes a new network framework based on ResNet–BiLSTM to address this issue. Firstly, the mel-spectrogram of the speech signal is segmented using the sliding window technique. Next, a preceding feature extractor composed of convolutional and pooling layers is employed to extract shallow features from the mel-spectrogram segment. Then, ResNet is utilized to extract spatial features from the shallow features, while BiLSTM is used to extract temporal features,
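
The first step described in this abstract (slicing the mel-spectrogram with a sliding window before the ResNet and BiLSTM stages) might look roughly like the sketch below. The tone signal, window width, and hop size are placeholders, and the network itself is not shown.

```python
# Illustrative first step only: compute a log-mel-spectrogram and cut
# it into fixed-width sliding-window segments. Sizes are placeholders
# and the ResNet/BiLSTM stages are not shown.
import librosa

sr = 16000
y = librosa.tone(440, sr=sr, duration=4.0)        # synthetic stand-in for a recording
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel)                # shape (64, T)

win, hop = 64, 32                                 # segment width and hop, in frames
segments = [log_mel[:, i:i + win]
            for i in range(0, log_mel.shape[1] - win + 1, hop)]
print(len(segments), segments[0].shape)           # each segment is (64, win)
```
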