
Journal articles on the topic 'Phonetic algorithm'

Consult the top 50 journal articles for your research on the topic 'Phonetic algorithm.'


1

Buriachok, Volodymyr, Matin Hadzhyiev, Volodymyr Sokolov, Pavlo Skladannyi, and Lidiia Kuzmenko. "IMPLANTATION OF INDEXING OPTIMIZATION TECHNOLOGY FOR HIGHLY SPECIALIZED TERMS BASED ON METAPHONE PHONETICAL ALGORITHM." Eastern-European Journal of Enterprise Technologies 5, no. 2 (101) (2019): 43–50. https://doi.org/10.15587/1729-4061.2019.181943.

Abstract:
When compiling databases, for example to meet the needs of healthcare establishments, there is quite a common problem with the entry and further processing of names and surnames of doctors and patients that are highly specialized both in terms of pronunciation and writing. This is because names and surnames of people cannot be unique, their notation is not subject to any rules of phonetics, and their length in different languages may not match. With the advent of the Internet, this situation has become critical in general and can lead to multiple copies of e-mails being sent to one address. The problem can be solved by using the phonetic word-comparison algorithms Daitch-Mokotoff, SoundEx, NYSIIS, Polyphone, and Metaphone, as well as the Levenshtein and Jaro algorithms and Q-gram-based algorithms, which make it possible to measure distances between words. The most widespread among them are the SoundEx and Metaphone algorithms, which index words by their sound, taking the rules of pronunciation into consideration. By applying the Metaphone algorithm, an attempt has been made to optimize phonetic search for fuzzy-matching tasks, for example data deduplication in various databases and registries, in order to reduce the number of errors caused by incorrect input of surnames. An analysis of the most common surnames reveals that some of them are of Ukrainian or Russian origin. At the same time, the rules by which names are pronounced and written, for example in Ukrainian, differ radically from the basic algorithms for English and differ quite significantly for Russian. That is why a phonetic algorithm should first of all take into consideration the peculiarities of Ukrainian surname formation, which is of special relevance now.
The paper reports the results of an experiment in generating phonetic indexes, as well as the performance gains achieved when using the formed indexes. A method for adapting the search to other areas and several related languages is presented separately, using the search for medical preparations as an example.
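The abstract above contrasts sound-based indexing (SoundEx, Metaphone) with distance-based matching (Levenshtein, Jaro). As a rough illustration of the indexing idea, here is a minimal sketch of the classic American Soundex code in Python; it is not the Metaphone variant the paper optimizes, and production implementations handle more edge cases.

```python
def soundex(name: str) -> str:
    """Simplified American Soundex: index a word by how it sounds."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    result = name[0].upper()            # keep the first letter as-is
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:       # skip repeated adjacent codes
            result += code
        if ch not in "hw":              # h and w do not reset the previous code
            prev = code
    return (result + "000")[:4]         # pad/truncate to letter + 3 digits
```

Because similar-sounding surnames collapse to the same code, `soundex("Robert")` and `soundex("Rupert")` both index to the same bucket, which is exactly the deduplication behavior the abstract describes.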
2

Lopatin, D. V., E. S. Chirkin, and A. A. Fadeeva. "PHONETIC SEARCH ALGORITHM OF INAPPROPRIATE CONTENT." Tambov University Reports. Series: Natural and Technical Sciences 22, no. 5-2 (2017): 1138–41. http://dx.doi.org/10.20310/1810-0198-2017-22-5-1138-1141.

3

Grabowski, Emily, and Jennifer Kuo. "Comparing K-means and OPTICS clustering algorithms for identifying vowel categories." Proceedings of the Linguistic Society of America 8, no. 1 (2023): 5488. http://dx.doi.org/10.3765/plsa.v8i1.5488.

Abstract:
The K-means algorithm is the most commonly used clustering method for phonetic vowel description but has some properties that may be sub-optimal for representing phonetic data. This study compares K-means with an alternative algorithm, OPTICS, in two speech styles (lab vs. conversational) in English to test whether OPTICS is a viable alternative to K-means for characterizing vowel spaces. We find that with noisier data, OPTICS identifies clusters that more accurately represent the underlying data. Our results highlight the importance of choosing an algorithm whose assumptions are in line with the phonetic data being considered.
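As a toy illustration of the K-means side of this comparison, the sketch below clusters hypothetical F1/F2 formant pairs (values invented for illustration) with a from-scratch K-means; the OPTICS alternative the study evaluates is not reproduced here.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain K-means on 2-D points, e.g. (F1, F2) vowel formant pairs in Hz."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)           # initialize from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                        # assignment step
            nearest = min(range(k),
                          key=lambda c: (p[0] - centroids[c][0]) ** 2
                                        + (p[1] - centroids[c][1]) ** 2)
            clusters[nearest].append(p)
        centroids = [                           # update step (keep old centroid if empty)
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical formant data: a tight /i/-like cloud and an /a/-like cloud
vowels = [(300, 2300), (310, 2290), (290, 2310), (700, 1200), (710, 1190), (690, 1210)]
centroids, clusters = kmeans(vowels, k=2, seed=1)
```

On clean, well-separated data like this K-means recovers the two vowel clouds; the abstract's point is that on noisier conversational data the assumption of spherical, equally sized clusters starts to fail.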
4

Druzhinets, M. L. "PSYCHO-PHONO-SEMANTIC POTENTIAL OF A FEMALE NAME: AN ALGORITHM OF ATTRIBUTIVE GRADATION." Opera in Linguistica Ukrainiana, no. 31 (July 14, 2024): 26–36. http://dx.doi.org/10.18524/2414-0627.2024.31.309394.

Abstract:
The article is devoted to the study of the psycho-phono-semantics of female names using the author’s algorithm. The purpose of the study is to find out the functions of the sound of female names according to a gradation algorithm based on the phonetic content and colour associations of vowel sounds. The objectives of the study are: to present the psycho-phono-semantics of vowel sounds based on empirical studies; to substantiate the phonetic content of the female noun according to the attributive gradation algorithm (names with the status of very, more, the most); to present the colour range of the female noun based on the stressed vowel(s); and to find out the role and functions of the psycho-phono-semantics of the female noun. The object of the study is the psychological features of the sound organisation of female names. The subject of the study is the phonetic content and colour of the most popular names according to the author’s algorithm. The research material was based on data from the Ministry of Justice, which lists the most popular female names. Using the author’s algorithm of attributive gradation, the phonetic content of female names based on vowel sounds was studied for the first time. For the first time, the colour scheme of a female noun was studied based on all the vowel sounds of the name. The common attributive meaning beautiful in all of the female names may be related to the phonetic properties of the vowels or perhaps to historical associations that these names carry. Differences in other associations and attributes may be due to other phonetic details of each name and the specifics of their cultural usage. The analysis of colour associations and semantics of names with different vowels but with unstressed [i] is interesting: the sound [i] can have different correlations with colours arising from phonetic features, and names with vowels can have a wide range of colours.
The psycho-phono-semantics of the feminine noun plays an important role in the perception and understanding of names, determining what associations, impressions and emotions female names evoke. The phonetic meaning of names is subjective and may vary depending on the perception of the individual. A proper name, in addition to its phonetic properties, always carries a personal identity and history, and this is also an important aspect of its semantics. Prospects for further research on the stated problem lie in a thorough study of the psycho-phono-semantics of masculine and feminine nouns based on the phonetic content and colour associations of vowels and consonants, using the algorithm of attributive gradation.
5

Khan, Shahidul Islam, Md Mahmudul Hasan, Mohammad Imran Hossain, and Abu Sayed Md Latiful Hoque. "nameGist: a novel phonetic algorithm with bilingual support." International Journal of Speech Technology 22, no. 4 (2019): 1135–48. http://dx.doi.org/10.1007/s10772-019-09653-2.

6

Chen, HongLin. "English Phonetic Synthesis Based on DFGA G2P Conversion Algorithm." Journal of Physics: Conference Series 1533 (April 2020): 032031. http://dx.doi.org/10.1088/1742-6596/1533/3/032031.

7

Zhu, Lili, Xiujing Yan, and Jing Wang. "A Recognition Method Based on Speech Feature Parameters-English Teaching Practice." Mathematical Problems in Engineering 2022 (April 27, 2022): 1–11. http://dx.doi.org/10.1155/2022/2287468.

Abstract:
In order to improve the effect of English teaching practice, this paper constructs an intelligent English phonetic teaching system combined with a method of phonetic feature parameter recognition. Moreover, the paper simulates the noisy self-mixing interference signal by establishing a simulation model, analyzes the noise level and its possible variations, and selects the EEMD method as the English speech denoising algorithm. In addition, with the support of the intelligent denoising algorithm, the paper implements an English intelligent teaching system based on a recognition algorithm for English speech feature parameters. Finally, the teaching effect of the proposed intelligent English speech feature recognition algorithm and of the intelligent teaching system is evaluated by means of simulation teaching. The research shows that the English teaching system based on the proposed intelligent speech feature recognition algorithm has a good effect.
8

Schatz, Thomas, Naomi H. Feldman, Sharon Goldwater, Xuan-Nga Cao, and Emmanuel Dupoux. "Early phonetic learning without phonetic categories: Insights from large-scale simulations on realistic input." Proceedings of the National Academy of Sciences 118, no. 7 (2021): e2001844118. http://dx.doi.org/10.1073/pnas.2001844118.

Abstract:
Before they even speak, infants become attuned to the sounds of the language(s) they hear, processing native phonetic contrasts more easily than nonnative ones. For example, between 6 to 8 mo and 10 to 12 mo, infants learning American English get better at distinguishing English [ɹ] and [l], as in “rock” vs. “lock,” relative to infants learning Japanese. Influential accounts of this early phonetic learning phenomenon initially proposed that infants group sounds into native vowel- and consonant-like phonetic categories—like [ɹ] and [l] in English—through a statistical clustering mechanism dubbed “distributional learning.” The feasibility of this mechanism for learning phonetic categories has been challenged, however. Here, we demonstrate that a distributional learning algorithm operating on naturalistic speech can predict early phonetic learning, as observed in Japanese and American English infants, suggesting that infants might learn through distributional learning after all. We further show, however, that, contrary to the original distributional learning proposal, our model learns units too brief and too fine-grained acoustically to correspond to phonetic categories. This challenges the influential idea that what infants learn are phonetic categories. More broadly, our work introduces a mechanism-driven approach to the study of early phonetic learning, together with a quantitative modeling framework that can handle realistic input. This allows accounts of early phonetic learning to be linked to concrete, systematic predictions regarding infants’ attunement.
9

Vijay Sharma, Nishu, and Anshu Malhotra. "An encryption and decryption of phonetic alphabets using signed graphs." Scientific Temper 15, spl-2 (2024): 212–17. https://doi.org/10.58414/scientifictemper.2024.15.spl-2.33.

Abstract:
In signed graphs, edge weights can be both positive and negative, which provides a solid framework for representing and manipulating complicated relationships among phonetic symbols. Encryption and decryption of phonetic alphabets pose a number of special challenges and opportunities. This paper introduces a novel approach utilizing the eigenvalues and eigenvectors of signed graphs to develop more secure and efficient methods of encoding phonetic alphabets. A new cryptographic scheme is presented: phonetic alphabets are mapped onto a signed graph, and encryption is carried out by means of structure-changing transformations of the graph that leave the integrity of the encoded information intact. This approach allows for secure, invertible transformations that resist typical cryptographic attacks. The decryption algorithm restores the encrypted graph to the original phonetic symbols by systematically reversing the steps taken during encryption. The use of signed graphs for phonetic alphabet encryption and decryption opens new frontiers in cryptographic practice, with useful implications for secure communication systems and data protection.
10

Ponomaryova, Liliya, and Elena Osadcha. "Development of the Phonetic Skills in German as the Second Foreign Language on the Basis of the English Language." International Letters of Social and Humanistic Sciences 70 (June 2016): 62–69. http://dx.doi.org/10.18052/www.scipress.com/ilshs.70.62.

Abstract:
The problems of forming phonetic skills in German, studied on the basis of English, have been considered. The aim of this research is to make a comparative analysis of the phonetic aspects of foreign languages that are taught one after another. An attempt has been made to analyze, generalize and systematize the material on the given topic presented in works in German, English, Ukrainian and Russian on the main theoretical questions connected with the process of teaching a second foreign language. It was shown that, when forming phonetic skills in German, it is necessary to characterize the phonetic, rhythmic and intonation peculiarities of both German and English; to point out the difficulties of mastering the pronunciation system of German; to develop the introductory course and the material for phonetic warming-up; and to work out the algorithm for introducing a new sound.
11

Georgiou, Georgios P. "Identification of Perceptual Phonetic Training Gains in a Second Language Through Deep Learning." AI 6, no. 7 (2025): 134. https://doi.org/10.3390/ai6070134.

Abstract:
Background/Objectives: While machine learning has made substantial strides in pronunciation detection in recent years, there remains a notable gap in the literature regarding research on improvements in the acquisition of speech sounds following a training intervention, especially in the domain of perception. This study addresses this gap by developing a deep learning algorithm designed to identify perceptual gains resulting from second language (L2) phonetic training. Methods: The participants underwent multiple sessions of high-variability phonetic training, focusing on discriminating challenging L2 vowel contrasts. The deep learning model was trained on perceptual data collected before and after the intervention. Results: The results demonstrated good model performance across a range of metrics, confirming that learners’ gains in phonetic training could be effectively detected by the algorithm. Conclusions: This research underscores the potential of deep learning techniques to track improvements in phonetic training, offering a promising and practical approach for evaluating language learning outcomes and paving the way for more personalized, adaptive language learning solutions. Deep learning enables the automatic extraction of complex patterns in learner behavior that might be missed by traditional methods. This makes it especially valuable in educational contexts where subtle improvements need to be captured and assessed objectively.
12

Koo, Chan-Mo, and Gi-Nam Wang. "Phonetic Acoustic Knowledge and Divide And Conquer Based Segmentation Algorithm." KIPS Transactions:PartB 9B, no. 2 (2002): 215–22. http://dx.doi.org/10.3745/kipstb.2002.9b.2.215.

13

Zhu, Ping, Guangli Xiang, Wenna Song, Ankang Li, Yuexin Zhang, and Ran Tao. "A text zero-watermarking algorithm based on Chinese phonetic alphabets." Wuhan University Journal of Natural Sciences 21, no. 4 (2016): 277–82. http://dx.doi.org/10.1007/s11859-016-1171-8.

14

Zeldin, A. "The Euclidean metrics applied to the interphonemic distance measurements." Philology and Culture, no. 1 (March 20, 2023): 10–13. http://dx.doi.org/10.26907/2782-4756-2023-71-1-10-13.

Abstract:
The similarity or dissimilarity of spoken words is generally judged by intuition, depending on the personal orientation or traits of a listener/speaker. The existing methods of phonetic encoding of words suffer from a number of shortcomings, the main one being the impossibility of weighing spoken words in quantitative terms. Moreover, the existing methods may be tied to a certain language or language family. The algorithm advanced in the present paper compares the characteristics of the different phonemes that make up a word. The paper treats phonemic frequency and sonority as elements common to both consonants and vowels; backness and openness as features pertaining to vowels; and the place of articulation as pertaining to consonants only. The algorithm makes it possible to compare in quantitative terms words of different length, whether formed by open or closed syllables. The inter-phonemic distances are calculated using Euclidean metrics. The paper suggests fields of application for the method: the scheme can be applied in comparative linguistics, in medicine when hearing disorders are scrutinized, and in brain cortex mapping.
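A minimal sketch of the idea described above, with invented feature values (the paper's actual frequency, sonority, backness/openness and place-of-articulation coefficients are not given here): each phoneme becomes a feature vector, and the inter-phonemic distance is the Euclidean norm of the difference.

```python
import math

# Hypothetical (sonority, frequency, articulation) values -- illustrative only,
# not the coefficients used in the paper.
PHONEME_FEATURES = {
    "a": (9.0, 8.2, 0.5),
    "i": (9.0, 7.0, 0.1),
    "t": (1.0, 7.5, 0.3),
    "k": (1.0, 3.1, 0.9),
    "m": (5.0, 3.5, 0.1),
}

def phoneme_distance(p: str, q: str) -> float:
    """Euclidean distance between two phoneme feature vectors."""
    u, v = PHONEME_FEATURES[p], PHONEME_FEATURES[q]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def word_distance(w1: str, w2: str) -> float:
    """Mean inter-phonemic distance over aligned positions (equal-length words;
    the paper extends the comparison to words of different length)."""
    return sum(phoneme_distance(p, q) for p, q in zip(w1, w2)) / len(w1)
```

With such vectors, two vowels end up closer to each other than a vowel and a stop consonant, which is the quantitative "weighing" of words the abstract argues for.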
15

Hu, Nan, and Yongyi Bi. "Multimodal Intelligent Acoustic Sensor-Assisted English Pronunciation Signal Acquisition and Phonetic Calibration." Journal of Sensors 2022 (March 2, 2022): 1–11. http://dx.doi.org/10.1155/2022/3383685.

Abstract:
In this paper, a multimodal intelligent acoustic sensor is used for an in-depth study and analysis of English pronunciation signal acquisition and of the calibration of English phonetic symbols based on the acquired sound signals. The paper proposes a bimodal fusion algorithm built around feature extension and the fusion of acoustic recognition features. After the classification error cost of each single modality is minimized, the fusion process is determined by adaptive weights fixed at the decision layer. The adaptive-weight approach improves on the drawback of fixed-weight fusion, which always identifies one mode as optimal, and further improves applicability and performance compared with unimodal recognition. A random network generation algorithm is used to generate a random network for sound source data acquisition; the algorithm is then investigated by applying a decomposition algorithm containing a fusion center to each node, with data preprocessing implemented at each node; finally, a distributed consistency algorithm based on average weights performs consistent averaging iterations to achieve a consistent speech enhancement effect at each node. The experimental results show that this distributed algorithm can effectively suppress the interference of noncoherent noise, and each node can obtain an enhanced signal close to the source signal-to-noise ratio. In this study, factors that may affect the readability of spoken texts are summarized, analyzed, defined, and extracted; the difficulty of spoken items obtained from the divisional scoring model is used as the dependent variable and the extracted influencing factors as independent variables for feature screening, model construction, and tuning; and the generated results are interpreted and analyzed.
From this, it was found that phonological features, mainly phonemes, syllables, and stress, have a strong influence on the readability of spoken texts. The study is then summarized, and the shortcomings of location-based contextual mobile learning of spoken English in terms of student management, device deployment, and empirical evidence are pointed out, to provide references and lessons for research on IT-supported language learning.
16

STATHOPOULOU-ZOIS, P. "A GRAPHEME-TO-PHONEME TRANSLATOR FOR TTS SYNTHESIS IN GREEK." International Journal on Artificial Intelligence Tools 14, no. 06 (2005): 901–18. http://dx.doi.org/10.1142/s0218213005002466.

Abstract:
This paper presents the algorithm of an automatic grapheme-to-phoneme translator for the Greek language. The proposed algorithm is designed to work with a high-quality Text-to-Speech synthesis system for Greek. The algorithm assimilates the full reading process of written text as realized by a Greek-speaking person. A detailed study of the operation of the Greek language led us to the implementation of an automatic integrated system which describes its phonetic behaviour in an exact and natural way. The software that implements the algorithm can receive written text from any input (keyboard, file, screen reader, OCR system, etc.) and transform it into phonetic form. The output of the algorithm is then directed to the input of a concatenation-based speech synthesizer, and the correct pronunciation of any written text is achieved in real time. During the reading process the software locates and distinguishes Greek written text from foreign-language words, specially written symbols, abbreviations, etc., and manages them so that the flow of the reading process permits the right perception of the produced spoken messages. The most important capability of the algorithm is the possibility of incorporating it into Text-to-Speech synthesis systems of different technology. Finally, experimental measurements indicate the successful operation of the algorithm.
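The abstract describes rule-based grapheme-to-phoneme conversion without giving the Greek rule set, so the sketch below uses a longest-match-first rewrite loop with a toy, made-up rule table purely to show the mechanism.

```python
# Toy rewrite rules (digraph -> phoneme); NOT the paper's Greek rule set.
RULES = {"ou": "u", "ai": "e", "mp": "b", "nt": "d"}

def g2p(word: str) -> str:
    """Left-to-right, longest-match-first grapheme-to-phoneme conversion."""
    out, i = [], 0
    while i < len(word):
        for length in (2, 1):                # try digraphs before single letters
            chunk = word[i:i + length]
            if chunk in RULES:
                out.append(RULES[chunk])
                i += length
                break
        else:
            out.append(word[i])              # letters without a rule pass through
            i += 1
    return "".join(out)
```

A real translator of the kind described would add stress assignment, exception handling for foreign words and abbreviations, and a far larger rule table, but the control flow is the same.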
17

Shah, Rima, and Dheeraj Kumar Singh. "Improvement of Soundex Algorithm for Indian Language Based on Phonetic Matching." International Journal of Computer Science, Engineering and Applications 4, no. 3 (2014): 31–39. http://dx.doi.org/10.5121/ijcsea.2014.4303.

18

Wang, Fengying. "Chinese Language Information Processing Considering Efficient Decoding Algorithm for Phonetic Conversion." Journal of Physics: Conference Series 1578 (July 2020): 012038. http://dx.doi.org/10.1088/1742-6596/1578/1/012038.

19

Cuzzocrea, Alfredo, Enzo Mumolo, and Giorgio Mario Grasso. "An Effective and Efficient Genetic-Fuzzy Algorithm for Supporting Advanced Human-Machine Interfaces in Big Data Settings." Algorithms 13, no. 1 (2019): 13. http://dx.doi.org/10.3390/a13010013.

Abstract:
In this paper we describe a novel algorithm, inspired by the mirror neuron discovery, to support automatic learning oriented to advanced man-machine interfaces. The algorithm introduces several points of innovation, based on complex similarity metrics that involve different characteristics of the entire learning process. In more detail, the proposed approach deals with a humanoid robot algorithm suited for automatic vocalization acquisition from a human tutor. The learned vocalization can be used for multi-modal reproduction of speech, as the articulatory and acoustic parameters that compose the vocalization database can be used to synthesize unrestricted speech utterances and to reproduce, automatically synchronized, the articulatory and facial movements of the humanoid talking face. The algorithm uses fuzzy articulatory rules, which describe transitions between phonemes derived from the International Phonetic Alphabet (IPA), to allow simpler adaptation to different languages, together with genetic optimization of the membership degrees. Extensive experimental evaluation and analysis of the proposed algorithm on synthetic and real data sets confirms the benefits of our proposal. Indeed, experimental results show that the acquired vocalization respects the basic phonetic rules of the Italian language, and subjective results show the effectiveness of multi-modal speech production with automatic synchronization between facial movements and speech emissions. The algorithm has been applied to a virtual speaking face but may also be used in mechanical vocalization systems.
20

Mateus, Maria Helena, and Ernesto D’Andrade. "THE SYLLABLE STRUCTURE IN EUROPEAN PORTUGUESE." DELTA: Documentação de Estudos em Lingüística Teórica e Aplicada 14, no. 1 (1998): 13–32. http://dx.doi.org/10.1590/s0102-44501998000100002.

Abstract:
The goal of this paper is to discuss the internal structure of the syllable in European Portuguese and to propose an algorithm for base syllabification. Based on the analysis of consonant clusters in onset position and the occurrence of epenthetic vowels, and considering the variation of the vowels in word-initial position that occupy the syllable nucleus without an onset at the phonetic level, we assume that, in European Portuguese, the syllable is always constituted by an onset and a rhyme, even though one of these constituents (but not both) may be empty, that is, one of them may have no phonetic realisation.
21

Christopher Jaisunder, G., Israr Ahmed, and R. K. Mishra. "Need for Customized Soundex based Algorithm on Indian Names for Phonetic Matching." Global Journal of Enterprise Information System 8, no. 2 (2017): 30. http://dx.doi.org/10.18311/gjeis/2016/7658.

Abstract:
In any digitization program, the reproduction of handwritten demographic data is a challenging job, particularly for records from previous decades. Nowadays, the digitization of an individual’s past records has become essential. In areas like financial inclusion, border security, driving licenses, passport issuance, weapon licenses, banking, health care and social welfare benefits, the individual’s earlier case history is a mandatory part of the decision-making process. Documents are scanned and stored in a systematic method; each scanned document is tagged with a proper key and retrieved with the help of that assigned key for data entry through a software program or package. Here arises the difficulty that the data, particularly critical personal data like name and father's name, may not be legible, and data entry operators type the characters as per their own understanding. The chances of error are high for name variations in terms of duplicate characters, abbreviations, omissions, ignored spaces between names and wrong spellings. The challenge is that retrieval over these key fields may fail because of wrong data entry. We need to explore the opportunities and challenges in defining effective strategies to execute this job without compromising the quality and quantity of the matches. In this scenario, we need an appropriate string matching algorithm with phonetic matching. The algorithm is to be defined according to the nature, type and region of the data domain, so that the search is phonetic-based rather than a simple string comparison. In this paper, I have tried to explain the need for a customized soundex-based algorithm for phonetic matching over misspelt, incomplete, repetitive and partial prevalent data.
22

Zhang, Peng Hao. "Study Speech Recognition System Based on Manifold Learning." Applied Mechanics and Materials 380-384 (August 2013): 3762–65. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.3762.

Abstract:
This paper conducts a comprehensive study and discussion of manifold learning and its relevant technologies. The traditional MFCC phonetic feature leads to slower learning because it is high-dimensional and large in data quantity. To solve this problem, we introduce manifold learning, putting forward two new extraction methods for MFCC-Manifold phonetic features. Dimensionality can be reduced using the ISOMAP algorithm, which is based on classical MDS (multidimensional scaling). Introducing geodesic distance in place of the original Euclidean distance reduces the twenty-four-dimensional data of traditional MFCC feature extraction to ten dimensions.
23

Shi, Liu, Moyan Li, Yawen Su, and Yi Chen. "Study about Chinese Speech Synthesis Algorithm and Acoustic Model Based on Wireless Communication Network." Wireless Communications and Mobile Computing 2021 (October 4, 2021): 1–14. http://dx.doi.org/10.1155/2021/7180769.

Abstract:
Chinese speech synthesis refers to the technology by which machines transform human speech signals into corresponding texts or commands through recognition and understanding. This paper combines simulations of the classic VAD and GSM VAD1 algorithms, improves on these two algorithms to recognize and collect speech, and amplifies the signal through a filter to analyze Chinese proficiency. Adult Southeast Asian students at Zhengzhou University (whose mother tongues are Indonesian and Thai) serve as the research objects, in order to explore the relationship between Chinese phonetic proficiency and acquisition motivation. The article combines the disciplines of algorithms and language. According to the results from Praat and SPSS (scores of 55-80 account for 70%, below 55 for 20%, and above 80 for 10%), we find that intrinsic motivation plays a vital role in CSL acquisition. Intrinsic motivation can help mature learners from Southeast Asia acquire Chinese better and better. The earlier one learns Chinese, the higher the motivation, and the easier it is to set Chinese learning goals. The greater the enthusiasm for learning Chinese, the better the Chinese scores (such as HSK test scores and Chinese phonetic test scores). Therefore, the Chinese proficiency of international students is strongly related to their interest in the Chinese language: the greater the interest, the stronger the motivation to learn, and the better the Chinese proficiency.
24

Wang, Na. "Simulation of English Linguistic Phonetic and Lexical Variation Based on Sociology Perspective." Tobacco Regulatory Science 7, no. 5 (2021): 4752–62. http://dx.doi.org/10.18001/trs.7.5.2.40.

Abstract:
Objectives: From the perspective of sociology, the simulation of phonetic and lexical variation in English linguistics is discussed in depth. First, the background and significance of the research are elaborated; then the research theory related to the simulation of English phonetic and lexical variation is analyzed. Methods: Through the design of a network teaching system for English phonetic intonation, design ideas that conform to the development of each function of the platform are proposed. Results: Furthermore, the simulation model algorithm for English phonetic and lexical variation based on a sociological perspective is used to design and verify the functions of the teaching system, and the effectiveness of the algorithm is verified by empirical analysis. Conclusion: The final results of the experiment show that using Internet of Things (IoT) technology to develop a system tool that conforms to the teaching method, and putting it into specific teaching work, can improve students’ English pronunciation and vocabulary learning ability.
APA, Harvard, Vancouver, ISO, and other styles
25

Kim, Beom-Seung, and Soon-Hyob Kim. "The Automated Threshold Decision Algorithm for Node Split of Phonetic Decision Tree." Journal of the Acoustical Society of Korea 31, no. 3 (2012): 170–78. http://dx.doi.org/10.7776/ask.2012.31.3.170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Ibrahim, Ahmed B., Yasser Mohammad Seddiq, Ali Hamid Meftah, et al. "Optimizing Arabic Speech Distinctive Phonetic Features and Phoneme Recognition Using Genetic Algorithm." IEEE Access 8 (2020): 200395–411. http://dx.doi.org/10.1109/access.2020.3034762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Zhang, Baicheng. "Hybrid Algorithm for English Translation Speech Recognition Based on Deep Learning Model and Clustering." Security and Communication Networks 2022 (May 21, 2022): 1–11. http://dx.doi.org/10.1155/2022/9308188.

Full text
Abstract:
Speech recognition is the most important research direction in human-computer interaction; it is the key to the connection between human beings and machines and an expression of intelligence and automation in the information society. Taking English as the research object and drawing on speech recognition techniques, this study builds on hidden Markov model technology from deep learning and a clustering analysis algorithm, and evaluates a cross-language English phoneme recognition system based on the sparse autoencoder (SA) method. By studying the speech recognition algorithm for English translation, the influence of the recognition environment on recognition accuracy is confirmed, which points a direction for studying speech recognition at a deeper level. Based on a Transformer language model and a Seq2Seq language model, different vocabularies are set; data are collected both in the laboratory and outdoors, and a test template library is formed after collection. In the task of restoring phonetic symbols to English characters, the error rate is lowest when phonemes are the modeling units: the error rate on the test set reached 9.54%, which was 6.97 percentage points lower than that of the syllable modeling unit.
APA, Harvard, Vancouver, ISO, and other styles
28

Zhan, Xue Gang, and Peng Zhang. "Recursive Enumeration K-Best Decoding Algorithm in Chinese Input Method Application." Advanced Materials Research 846-847 (November 2013): 1326–29. http://dx.doi.org/10.4028/www.scientific.net/amr.846-847.1326.

Full text
Abstract:
The transformation from a phonetic sequence to words is a critical part of an input method. When the input method cannot find a candidate word directly through the dictionary, it needs to obtain the result desired by the user through sentence transformation. In this paper, a recursive-enumeration k-best decoding algorithm, combined with a language model, is used in the input method's sentence transformation to obtain the k best transformation results. Experimental results show that in the input method application environment, the decoding efficiency of the recursive-enumeration k-best decoding algorithm is significantly better than that of the baseline deletion algorithm.
APA, Harvard, Vancouver, ISO, and other styles
29

Yang, Cheng‐Hong. "A new mandarin phonetic Morse code recognition method using a variant LMS algorithm." Journal of the Chinese Institute of Engineers 23, no. 6 (2000): 741–48. http://dx.doi.org/10.1080/02533839.2000.9670595.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Chen, Yonghong, Shibing Zhang, and Dongmei Li. "Helium Speech Recognition Method Based on Spectrogram with Deep Learning." Big Data and Cognitive Computing 9, no. 5 (2025): 136. https://doi.org/10.3390/bdcc9050136.

Full text
Abstract:
With the development of the marine economy and the increase in marine activities, deep saturation diving has gained significant attention. Helium speech communication is indispensable for saturation diving operations and is a critical technology for deep saturation diving, serving as the sole communication method to ensure the smooth execution of such operations. This study introduces deep learning into helium speech recognition and proposes a spectrogram-based dual-model helium speech recognition method. First, we extract the spectrogram features from the helium speech. Then, we combine a deep fully convolutional neural network with connectionist temporal classification (CTC) to form an acoustic model, in which the spectrogram features of helium speech are used as an input to convert speech signals into phonetic sequences. Finally, a maximum-entropy Markov model (MEMM) is employed as the language model to convert the phonetic sequences to word outputs, which is regarded as a dynamic programming problem. We use the Viterbi algorithm to find the optimal path to decode the phonetic sequences to word sequences. The simulation results show that the method can effectively recognize helium speech with a recognition rate of 97.89% for isolated words and 95.99% for continuous helium speech.
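The final decoding step described here, finding the optimal state path by dynamic programming, is the classic Viterbi algorithm. A textbook sketch over a toy HMM (not the paper's helium-speech model; the states and probabilities below are illustrative):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence for an observation
    sequence under an HMM, via the standard Viterbi recursion."""
    # V[t][s] = (probability of the best path ending in s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            best_prev = max(states, key=lambda p: V[t - 1][p][0] * trans_p[p][s])
            prob = V[t - 1][best_prev][0] * trans_p[best_prev][s] * emit_p[s][obs[t]]
            V[t][s] = (prob, best_prev)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))
```

Production decoders work in log space to avoid underflow and prune the beam, but the recursion is identical.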
APA, Harvard, Vancouver, ISO, and other styles
31

Tarak, Hussain, and P. S. Aithal. "Fake Name Clustering using Locality Sensitive Hashing." International Journal of Enhanced Research in Management & Computer Applications 12, no. 3 (2023): 1–5. https://doi.org/10.5281/zenodo.7947062.

Full text
Abstract:
This paper proposes a method for generating fake names using Locality Sensitive Hashing (LSH). The approach involves creating a hash function that maps real names to fake names based on phonetic similarity measures. The dataset is taken from Kaggle and re-engineered. The LSH algorithm is then used to find pairs of real and fake names that have similar phonetic codes. The proposed method is implemented in Python using the datasketch library, and sample code is provided to demonstrate its feasibility. The results show that LSH can be used to generate fake names that are similar in structure and characteristics to real names, and the approach could be useful in contexts where anonymity is desired. However, the ethical and legal implications of using fake names should be carefully considered before adopting this approach.
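The paper uses the datasketch library; as a rough, self-contained illustration of the same idea — MinHash signatures over character bigrams, banded so that similar names land in the same bucket — one might write the following (the names, band/row counts, and use of MD5 as a seeded hash are all illustrative, not the paper's setup):

```python
import hashlib

def bigrams(name):
    """Character-bigram set of a name (a crude stand-in for phonetic codes)."""
    name = name.lower()
    return {name[i:i + 2] for i in range(len(name) - 1)}

def minhash(grams, num_hashes=16):
    """MinHash signature: for each seeded hash, keep the minimum value."""
    return [min(int(hashlib.md5(f"{seed}:{g}".encode()).hexdigest(), 16)
                for g in grams)
            for seed in range(num_hashes)]

def lsh_buckets(names, bands=4, rows=4):
    """Group names whose signatures collide in at least one band."""
    buckets = {}
    for name in names:
        sig = minhash(bigrams(name), bands * rows)
        for b in range(bands):
            key = (b, tuple(sig[b * rows:(b + 1) * rows]))
            buckets.setdefault(key, []).append(name)
    return [group for group in buckets.values() if len(group) > 1]
```

Banding trades precision for recall: more rows per band makes collisions stricter, more bands makes them more permissive.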
APA, Harvard, Vancouver, ISO, and other styles
32

Paliulionis, Viktoras. "Lietuviškų adresų geokodavimo problemos ir jų sprendimo būdai." Informacijos mokslai 50 (January 1, 2009): 217–22. http://dx.doi.org/10.15388/im.2009.0.3235.

Full text
Abstract:
Geocoding is the process of converting a textual description of a location into geographic coordinates. One of the most frequently used ways to describe a place is its postal address, which contains a city name, street name, house number and other address components. The paper deals with the problems of geocoding Lithuanian addresses; the main problems are the variety of address formats in use and possible typing and spelling errors. The paper describes the steps of the geocoding process and the algorithms used. We propose a phonetic algorithm called LT-Soundex, adapted for the Lithuanian language, which makes it possible to index address components by phonetic similarity and to perform approximate address searching. It is used together with Levenshtein distance for effective approximate address search.
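LT-Soundex itself is adapted to Lithuanian phonetics and is not detailed in the abstract, but the classic American Soundex it builds on can be sketched as follows (first letter kept, consonants mapped to digit classes, h/w skipped without separating duplicate codes, padded to four characters):

```python
# Digit classes of classic American Soundex.
CODES = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
         **dict.fromkeys("dt", "3"), "l": "4",
         **dict.fromkeys("mn", "5"), "r": "6"}

def soundex(word):
    """Classic 4-character Soundex code, e.g. 'Robert' -> 'R163'."""
    word = word.lower()
    first = word[0].upper()
    prev = CODES.get(word[0], "")
    digits = []
    for ch in word[1:]:
        if ch in "hw":
            continue                      # h/w are skipped and keep prev
        code = CODES.get(ch, "")
        if code and code != prev:
            digits.append(code)
        prev = code                       # vowels clear prev, allowing repeats
    return (first + "".join(digits) + "000")[:4]
```

A language-specific variant like LT-Soundex would replace the letter-to-digit table (and the skip rules) with classes tuned to Lithuanian orthography and phonology.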
APA, Harvard, Vancouver, ISO, and other styles
33

Fitriani, Intan Khairunnisa, Moch Arif Bijaksana, and Kemas Muslim Lhaksmana. "Qur’an Search System for Handling Cross Verse Based on Phonetic Similarity." Jurnal Sisfokom (Sistem Informasi dan Komputer) 10, no. 1 (2021): 46–51. http://dx.doi.org/10.32736/sisfokom.v10i1.986.

Full text
Abstract:
Searching the Qur'an's many verses manually would be difficult and time-consuming. Building a search system for Qur'anic verses using the Indonesian Arabic-Latin transliteration will be very helpful for the Muslim community in Indonesia, especially for those who are not familiar with Arabic script. In this study, a verse search system for the Qur'an is built based on phonetic similarity, with particular attention to handling cross-verse cases. The system uses the Jaro-Winkler algorithm to calculate similarity values and the N-gram algorithm to rank documents. A similar study, Lafzi+, was done before, achieving 90% MAP and 93% recall; in that work, cases such as nun wiqoyah at the end of a verse could not be handled, so the system could not search the entire Qur'an, and the Jaro-Winkler method for calculating similarity had not been fully implemented. To complete the previous research, this study adds rules to the pre-existing ones so that nun wiqoyah at the end of a verse can be handled. By applying the Jaro-Winkler method for similarity, N-grams for document ranking, and the added nun wiqoyah rules, the system achieves 94% MAP and 92% recall. The improvement in MAP shows that this system improves on the accuracy of previously built systems.
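The Jaro-Winkler measure used here is standard: count matches within a sliding window, count transpositions, then add a bonus for a shared prefix. A minimal implementation:

```python
def jaro(s1, s2):
    """Jaro similarity: matches within a window, minus transpositions."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if not len1 or not len2:
        return 0.0
    window = max(len1, len2) // 2 - 1
    m1, m2 = [False] * len1, [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        for j in range(max(0, i - window), min(len2, i + window + 1)):
            if not m2[j] and s2[j] == c:
                m1[i] = m2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count transpositions among matched characters.
    t, k = 0, 0
    for i in range(len1):
        if m1[i]:
            while not m2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len1 + matches / len2 + (matches - t) / matches) / 3

def jaro_winkler(s1, s2, p=0.1):
    """Jaro plus a bonus for a common prefix of up to four characters."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)
```

The prefix bonus is what makes Jaro-Winkler suit transliterated text, where word beginnings tend to be the most stable part.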
APA, Harvard, Vancouver, ISO, and other styles
34

Arora, Monika, and Vineet Kansal. "The Inverse Edit Term Frequency for Informal Word Conversion Using Soundex for Analysis of Customer’s Reviews." Recent Advances in Computer Science and Communications 13, no. 5 (2020): 917–25. http://dx.doi.org/10.2174/2213275912666190405114330.

Full text
Abstract:
Background: E-commerce/m-commerce has emerged as a new way of doing business, one that requires understanding customers' needs with the utmost precision. With the advent of technology, mobile devices have become vital tools, and smartphones have changed the way we communicate: the user can access any information in a single click, and text messages have become the basic channel of interaction. The use of informal text messages by customers has created a challenge for businesses, since the informal representation of needs via short message service leaves a gap between what customers write and what they actually require. Objective: Informally written text messages have become a focus for researchers seeking to analyze and normalize such textual data. In this paper, SMS data are analyzed for information retrieval using the Soundex phonetic algorithm and its variations. Methods: Two datasets, the SMS-based FAQ collection of FIRE 2012 and a self-generated survey dataset, are used to evaluate the performance of the proposed Soundex phonetic algorithm. Results: It is observed that applying Soundex with Inverse Edit Term Frequency significantly improves the lexical similarity between SMS words and natural-language text; results are reported to support this. Conclusion: Soundex with the Inverse Edit Term Frequency Distribution algorithm is the best suited among the various Soundex variations. The algorithm normalizes informally written text and retrieves the exact match from the bag of words.
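The "Inverse Edit Term Frequency" weighting is specific to the paper, but the edit distance underlying it is the standard Levenshtein dynamic program, which in a two-row form looks like:

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions
    turning string a into string b (two-row dynamic program)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]
```

In a normalization pipeline like the one described, such a distance can rank dictionary words that share a phonetic code with an SMS token.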
APA, Harvard, Vancouver, ISO, and other styles
35

Wedel, Andrew B. "Feedback and regularity in the lexicon." Phonology 24, no. 1 (2007): 147–85. http://dx.doi.org/10.1017/s0952675707001145.

Full text
Abstract:
Phonologies are characterised by regularity, from the stereotyped phonetic characteristics of allophones to the contextually conditioned alternations between them. Most models of grammar account for regularity by hypothesising that there is only a limited set of symbols for expressing underlying forms, and that an independent grammar algorithm transforms symbol sequences into an output representation. However, this explanation for regularity is called into question by research which suggests that the mental lexicon records rich phonetic detail that directly informs production. Given evidence for biases favouring previously experienced forms at many levels of production and perception, I argue that positive feedback within a richly detailed lexicon can produce regularity over many cycles of production and perception. Using simulation as a tool, I show that under the influence of positive feedback, gradient biases in usage can convert an initially gradient and variable distribution of lexical behaviours into a more categorical and simpler pattern.
APA, Harvard, Vancouver, ISO, and other styles
36

Zhang, Yingyi. "Russian Speech Conversion Algorithm Based on a Parallel Corpus and Machine Translation." Wireless Communications and Mobile Computing 2022 (March 23, 2022): 1–9. http://dx.doi.org/10.1155/2022/8023115.

Full text
Abstract:
Phonetic conversion technology is crucial in the resource construction of Russian phonetic information processing. This paper explains how to build a corpus and its key algorithms, as well as how to design auxiliary translation software and implement those algorithms. It focuses on the "parallel corpus" method of problem solving and the indispensable role of a parallel corpus in learning Russian, examining the foundations, motivations, and methods for using parallel corpora in translation instruction. The main way of using a parallel corpus in the classroom is to present data, so that learners are exposed to a large amount of easily screened bilingual data, and translation skills and the translation of specific language items can be taught in a concentrated, focused manner. The creation of a large-scale Russian-Chinese parallel corpus will play an important role not only in improving the translation quality of Russian-Chinese machine translation systems but also in Chinese and Russian teaching and in other branches of linguistics and translation studies, all of which should be given sufficient attention. In response to the shortcomings of pronunciation teaching in Russian instruction in China, this paper proposes the use of automatic speech analysis technology to assist Russian pronunciation learning and designs a Russian word pronunciation learning assistant system with demonstration, scoring, and feedback functions. The system can provide corpus support for gathering a large number of parallel corpora and, in the future, for enabling online translation; it is used for automatic corpus construction, and future automatic corpus construction systems could be built on top of it. The proper application of parallel corpus data will aid the development of a high-quality environment for autonomous learning and translation teaching.
APA, Harvard, Vancouver, ISO, and other styles
37

Flis, P. P., V. V. Filonenko, A. O. Melnyk, Y. P. Nemyrovych, and A. P. Lopoha. "ALGORITHM FOR SPEECH DISORDERS CORRECTION USING PROPRIETARY CONSTRUCTION DEVICE." Вісник наукових досліджень, no. 4 (January 31, 2019): 145–51. http://dx.doi.org/10.11603/2415-8798.2018.4.9780.

Full text
Abstract:
Currently, there is a tendency in Ukraine toward an increasing number of children with speech disorders. One of the most common disorders of speech function is dyslalia. Speech therapy sessions are the main form of corrective training; children are assigned definite, consistent stages of speech therapy. Along with this, various individual and standard devices are used.
The aim of the study was to conduct logopedic correction of speech disorders in patients with physiological occlusion, using the in-house designed device according to the proposed algorithm.
Materials and Methods. A survey was conducted of 73 children (24 aged 3 to 6 years, 49 aged 6 to 12 years) without significant orthodontic pathology, with speech impairment but normal hearing and intelligence, undergoing speech correction. In addition to logopedic exercises, it was recommended to use Dr. Hinz MUPPY-P vestibular plates with beads, removable orthodontic devices with beads, Bluegrass appliances, and devices for the elimination and prevention of unhealthy tongue habits. In order to identify early risk factors for major dental diseases, the hygienic state of the oral cavity, the intensity of caries, and the presence or absence of inflammatory processes in periodontal tissues were determined.
Results and Discussion. The first step in the algorithm for successful correction of speech disorders was to explain its necessity. The second stage involved phonetic diagnosis of all aspects of speech, logic, intelligence, memory and thinking. Polymorphic dyslalia was diagnosed in all subjects of the reporting panel. The third stage of the algorithm, direct speech correction, involved work to overcome abnormalities of the phonetic side of speech. The proposed device for the elimination and prevention of unhealthy tongue habits was used in 6 cases.
Conclusions. After speech therapy correction, correct articulation and sound production were formed. The proposed device for the elimination and prevention of unhealthy tongue habits should be used in conjunction with speech therapy, in particular for dyslalia. In addition to the positive logopedic effect of the proposed therapeutic and prophylactic measures, we also observed improvement in the hygienic state of the oral cavity, no increase in the intensity of caries of permanent teeth, and increased motivation in patients.
APA, Harvard, Vancouver, ISO, and other styles
38

Alfano, Iolanda. "Intonation, information structure and syntax in yes-no questions in the Spanish of Barcelona." Loquens 3, no. 1 (2016): 027. http://dx.doi.org/10.3989/loquens.2016.027.

Full text
Abstract:
The aim of this work is to study the intonation of yes-no questions in the Spanish of Barcelona, analysing its interface with information structure and morphosyntactic structure. To this purpose, we present new data to describe this kind of utterance and examine the state of the art and controversial issues. Although experimental phonetic and phonological research has paid particular attention to yes-no questions, some problems remain open. We use a transcription system closely linked to the phonetic realization of the intonation contour, run in semiautomatic mode by a program that provides a stylization algorithm and an annotation process. Our findings provide empirical evidence that information structure and the morphosyntactic level do affect the prosodic realization of utterances. We conclude that, even though it presents various theoretical and methodological problems, the study of linguistic interfaces is very useful and allows a deeper and better description than separate analyses of the same linguistic levels.
APA, Harvard, Vancouver, ISO, and other styles
39

Makowski, Ryszard, and Robert Hossa. "Automatic speech signal segmentation based on the innovation adaptive filter." International Journal of Applied Mathematics and Computer Science 24, no. 2 (2014): 259–70. http://dx.doi.org/10.2478/amcs-2014-0019.

Full text
Abstract:
Speech segmentation is an essential stage in designing automatic speech recognition systems, and several algorithms have been proposed in the literature. It is a difficult problem, as speech is immensely variable. The aim of the authors' studies was to design an algorithm that could be employed at the stage of automatic speech recognition, making it possible to avoid some problems related to speech signal parametrization. Posing the problem in this way requires the algorithm to be capable of working in real time. The only such algorithm was proposed by Tyagi et al. (2006), and it is a modified version of Brandt's algorithm. The article presents a new algorithm for unsupervised automatic speech signal segmentation. It performs segmentation without access to information about the phonetic content of the utterances, relying exclusively on second-order statistics of the speech signal. The starting point for the proposed method is the time-varying Schur coefficients of an innovation adaptive filter. The Schur algorithm is known to be fast, precise, stable and capable of rapidly tracking changes in second-order signal statistics. A transition from one phoneme to another in the speech signal always indicates a change in signal statistics caused by vocal tract changes. To allow for the properties of human hearing, detection of inter-phoneme boundaries is performed based on statistics defined on the mel spectrum determined from the reflection coefficients. The paper presents the structure of the algorithm, defines its properties, lists parameter values, describes detection efficiency results, and compares them with those of another algorithm. The obtained segmentation results are satisfactory.
APA, Harvard, Vancouver, ISO, and other styles
40

P.Parmar, Vimal, and C. K Kumbharana. "Study Existing Various Phonetic Algorithms and Designing and Development of a working model for the New Developed Algorithm and Comparison by implementing it with Existing Algorithm(s)." International Journal of Computer Applications 98, no. 19 (2014): 45–49. http://dx.doi.org/10.5120/17295-7795.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Uvarova, Olga V. "The Pedagogical Potential of the Phonetic Approach in the Training a Novice Wind Instruments Musician." Musical Art and Education 8, no. 2 (2020): 109–23. http://dx.doi.org/10.31862//2309-1428-2020-8-2-109-123.

Full text
Abstract:
This article is devoted to improving the educational process in the wind instrument class, focusing on developing the performing apparatus of novice wind instrument musicians. The author presents a scientifically grounded technology for teaching novice performers on wind instruments (trumpet, trombone, French horn, tuba) high-quality, professionally competent sound production. It is based on the author's phonetic approach, which involves a conscious restructuring of the articulatory apparatus by changing the shape of the oral cavity according to mentally pronounced phonemes and by modeling the position of the larynx. The article describes the content and organization of classes aimed at students' gradual mastery of the performing apparatus, starting with specially composed exercises, followed by a transition to instructional material in the form of more complex exercises, scales, etudes and pieces from repertoire collections, and then to ensemble performance. As a result of the experimental work, an effective algorithm of pedagogical actions was modeled, and the validity and pedagogical expediency of using the phonetic approach in training future wind instrument performers was demonstrated.
APA, Harvard, Vancouver, ISO, and other styles
42

Reddy, Akkireddy Mohan Kumar. "Optimized Multirate Wideband Speech Steganography for Improving Embedding Capacity Compared with Neighbor-Index-Division Codebook Division Algorithm." Revista Gestão Inovação e Tecnologias 11, no. 2 (2021): 1362–76. http://dx.doi.org/10.47059/revistageintec.v11i2.1763.

Full text
Abstract:
Aim: The main motive of this study is to perform Adaptive Multi-Rate Wideband (AMR-WB) speech steganography in network security, producing stego speech with less loss of quality while increasing embedding capacity. Materials and Methods: The TIMIT Acoustic-Phonetic Continuous Speech Corpus consists of about 16000 speech samples, of which 1000 samples are taken, with 80% pretest power, for analyzing the speech steganography. AMR-WB speech steganography is performed with the Diameter Neighbor (DN) codebook partition algorithm (Group 1) and the Neighbor Index Division (NID) codebook division algorithm (Group 2). Results: AMR-WB speech steganography using DN codebook partition obtained an average quality rating of 2.8893, and the NID codebook division algorithm obtained an average quality rating of 2.4196, at an embedding capacity in the range of 300 bps. Conclusion: The outcomes of this study show that, as embedding capacity increases, the quality loss with NID is twice that of DN-based steganography.
APA, Harvard, Vancouver, ISO, and other styles
43

Novak, Irina. "Распределение переднеязычных щелевых согласных в говорах карельского языка Средней Карелии (на основе применения алгоритма «анализ когнатов» лингвистической платформы ЛингвоДок)". Ural-Altaic Studies 45, № 2 (2022): 79–105. http://dx.doi.org/10.37892/2500-2902-2022-45-2-79-105.

Full text
Abstract:
The article reports the results of an analysis of the distribution of front fricative consonants in the Middle Karelian group of Karelian sub-dialects. The study area was chosen due to its position at a transition between Karelian supradialects, where two opposite sibilant presentation systems collide. Intensive migrations of Karelians inside the study area have generated a fairly complicated situation with the phenomenon in question: which consonant variant is used depends on quite a few factors (opening or closing position in the word, presence of the vowel i in the immediate vicinity, front or back vocalism of the word, quality of the second component in consonant blends), which appear in different combinations across the distribution range. Application of the cognate analysis algorithm of the LingvoDoc linguistic platform to the thematic dictionaries, which were compiled using the "Programs for collecting material for the dialectal atlas of the Karelian language" filled out in the mid-20th century in 146 settlements in Karelia, made it possible to determine which specific word-initial and word-medial phonetic positions influence the distribution of possible variants of front fricatives in the Middle Karelian sub-dialect group. Visualization of the results on a map leads, on the one hand, to the conclusion that this dialect-differentiating phonetic phenomenon is areal in nature, and demonstrates, on the other, that the main sibilant-distribution isoglosses do not coincide with the boundaries of Karelian dialects and supradialects in the traditional division.
APA, Harvard, Vancouver, ISO, and other styles
44

Risky, Aswi Ramadhani, Ketut Gede Darma Putra I, Sudarma Made, and A. D. Giriantari I. "Detecting Indonesian ambiguous sentences using Boyer-Moore algorithm." TELKOMNIKA Telecommunication, Computing, Electronics and Control 18, no. 5 (2020): 2480–87. https://doi.org/10.12928/TELKOMNIKA.v18i5.14027.

Full text
Abstract:
Ambiguous sentences are divided into three types: phonetic, lexical, and grammatical. This study focuses on grammatical ambiguity, which arises from the grammar itself; such ambiguity may disappear once the word is used within a sentence. Ambiguous sentences become a big problem when they are processed by a computer. So that the computer can interpret ambiguous words correctly, this study develops detection of Indonesian ambiguous sentences using the Boyer-Moore algorithm. The algorithm matches ambiguous sentences given as input against a data set; the system then detects whether the input contains ambiguity by calculating the percentage of similarity using the cosine similarity method, which allows the system to identify the intended meaning of the sentence. The data set contains 50 ambiguous words, comprising ambiguous word data, ambiguous sentences, and the meanings of the ambiguous sentences. The system was tested 200 times, achieving an accuracy of 0.935, precision of 0.9320, recall of 0.8, and an F-measure of 0.8061, with a word-search time of 0.003275 seconds.
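The matcher is based on Boyer-Moore; the simplified Boyer-Moore-Horspool variant below shows the core idea (right-to-left comparison within the window, bad-character shifts on mismatch). This is a generic sketch, not the paper's implementation:

```python
def horspool_search(text, pattern):
    """Boyer-Moore-Horspool substring search.
    Returns the index of the first match, or -1 if absent."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1 if m else 0
    # Bad-character table: how far to slide when the window's last char mismatches.
    shift = {c: m for c in set(text)}
    for i, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - i
    i = 0
    while i <= n - m:
        j = m - 1
        while j >= 0 and text[i + j] == pattern[j]:
            j -= 1            # compare right-to-left
        if j < 0:
            return i          # full match
        i += shift.get(text[i + m - 1], m)
    return -1
```

The shift table lets the search skip up to the full pattern length per mismatch, which is why Boyer-Moore-family algorithms are sublinear on average.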
APA, Harvard, Vancouver, ISO, and other styles
45

G, Manju. "MACHINE LEARNING-BASED EARLY DETECTION OF PARKINSON’S DISEASE USING VOICE ANALYSIS." Al-Shodhana 13, no. 1 (2025): 1–15. https://doi.org/10.70644/as.v13.i1.1.

Full text
Abstract:
This study investigates the potential of machine learning techniques, specifically k-Nearest Neighbours (KNN) and Support Vector Machines (SVM), for detecting early-stage cases of Parkinson’s disease (PD) using voice data. Leveraging a dataset from the UCI Machine Learning Repository, which consists of 147 phonetic samples from PD patients and 48 from healthy controls, the methodology involved data preprocessing, feature selection using a genetic algorithm, and handling class imbalance with the Synthetic Minority Oversampling Technique (SMOTE). Dimensionality reduction was performed using Principal Component Analysis (PCA), retaining the most informative features. Both classifiers were trained and validated using stratified 10-fold cross-validation to ensure robust performance evaluation. The KNN classifier achieved an accuracy of 96.11%, with high precision (95.87%), recall (94.76%), and an AUC-ROC of 0.97, indicating superior discriminatory power. The SVM classifier also demonstrated strong performance, with an accuracy of 94.57%, precision of 93.68%, recall of 92.35%, and an AUC-ROC of 0.95. The results show that the KNN model is effective in distinguishing PD patients from healthy individuals using non-invasive phonetic data. The study underscores phonetic analysis as a reliable biomarker for early PD detection, offering a promising alternative to current diagnostic methods that rely heavily on clinical observation. Future work should focus on validating these findings with larger, more diverse datasets and integrating additional data types to further improve diagnostic accuracy and clinical applicability. The findings support the development of accessible and early diagnostic tools that could significantly enhance patients’ quality of life.
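The genetic-algorithm feature selection, SMOTE and PCA steps are specific to the study, but the KNN classifier at its core is simple to sketch (toy 2-D feature vectors and labels below are illustrative, not the UCI data):

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest training
    points under Euclidean distance."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], x))[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

In practice features are standardized first, since KNN's distance is sensitive to feature scale, which is part of why the preprocessing pipeline matters as much as the classifier.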
APA, Harvard, Vancouver, ISO, and other styles
46

Dong, Hui. "Modeling and Simulation of English Speech Rationality Optimization Recognition Based on Improved Particle Filter Algorithm." Complexity 2020 (August 24, 2020): 1–10. http://dx.doi.org/10.1155/2020/6053129.

Full text
Abstract:
As one of the most important communication tools for human beings, English pronunciation conveys not only literal information but also emotion through changes of tone. Based on the standard particle filtering algorithm, an improved auxiliary unscented particle filtering algorithm is proposed. In importance sampling, based on the latest observation information, the unscented Kalman filter method is used to compute each particle estimate, improving the accuracy of the particles' nonlinear transformation estimates; during resampling, auxiliary factors are introduced to modify the particle weights, enriching particle diversity and weakening particle degeneracy. The improved particle filter algorithm was used for online parameter identification and compared with the standard particle filter, the extended Kalman particle filter, and the unscented particle filter in terms of identification accuracy and computational efficiency. A topic model is used to extract a semantic space vector representation of English phonetic text and to sequentially predict emotional information at the chapter, paragraph, and sentence levels. The system has reasonable recognition ability for general speech, and the improved particle filter evaluation method is further used to address its weaknesses in assessing the rationality of English speech and its high recognition error rate. Related experiments have verified the effectiveness of the method.
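The paper's auxiliary unscented particle filter is more elaborate, but the bootstrap particle filter it improves on — predict, weight by observation likelihood, resample — can be sketched for a 1-D random-walk state (all model parameters and noise levels here are illustrative):

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def particle_filter(observations, n=500, proc_std=0.5, obs_std=1.0):
    """Bootstrap particle filter for a 1-D random-walk state observed
    with Gaussian noise; returns the filtered mean at each step."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for z in observations:
        # Predict: propagate each particle through the motion model.
        particles = [p + random.gauss(0.0, proc_std) for p in particles]
        # Update: weight each particle by the observation likelihood.
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights] if total > 0 else [1.0 / n] * n
        # Resample (multinomial) to concentrate on high-weight particles.
        particles = random.choices(particles, weights=weights, k=n)
        estimates.append(sum(particles) / n)
    return estimates
```

The improvements the abstract describes target this loop's two weak points: the crude proposal in the predict step (replaced by unscented-Kalman proposals) and the diversity loss in resampling (mitigated by auxiliary weight factors).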
APA, Harvard, Vancouver, ISO, and other styles
47

Sabdenbekova, B. M. "METHODOLOGICAL ASPECTS OF USING SONGS IN THE PROCESS OF TEACHING A FOREIGN LANGUAGE." Statistika, učet i audit 87, no. 4 (2022): 77–83. http://dx.doi.org/10.51579/1563-2415.2022-4.08.

Full text
Abstract:
Among the new methods of teaching a foreign language is teaching with authentic materials (audio, video). This article discusses the role of songs in teaching foreign languages. The author shows that the use of music in teaching engages all aspects of a foreign language: phonetic, lexical, grammatical, and syntactic. The article presents an algorithm and examples of using song material as a means of teaching grammar in an English lesson. The methodology discussed in this article for teaching the grammar of a foreign language with the help of authentic songs makes it possible to increase interest and motivation to learn the language, as well as to find new forms of work in the lesson. One of the basic concepts in our work is the concept of “authenticity”. Authentic material is material created by a native speaker for other native speakers. It is not intended for educational use; however, as practice and our study show, it can be used for it. The special methodological value of this material is that it contains ready-made phonetic, lexical, and grammatical speech samples, which eliminates the need for students to independently construct these forms by translating from their native language.
APA, Harvard, Vancouver, ISO, and other styles
48

Kumar, S. "A pattern-based approach to detect irony in twitter sentiment analysis." i-manager’s Journal on Pattern Recognition 10, no. 2 (2023): 19. http://dx.doi.org/10.26634/jpr.10.2.20354.

Full text
Abstract:
Twitter sentiment analysis poses challenges due to the informal language, limited character count, and prevalence of sarcasm, which can alter the polarity of messages. This paper presents a pattern-based approach to detect irony in Twitter sentiment analysis. By analyzing various types of irony and identifying their patterns, this paper proposes a methodology to improve the efficiency of sentiment analysis. Tweets are classified into different categories based on their sarcasm using a machine learning algorithm. The proposed approach involves feature extraction from tweets, including sentiment-related features, punctuation-related features, grammatical and phonetic features, and pattern-based features. A hybrid pattern extraction with a classification model is employed to process tweet data and classify it as sarcastic or not. Experimental results demonstrate the effectiveness of the proposed approach in detecting sarcasm in tweets, with precision ranging from 84.6% to 98.1% across different classifier algorithms. This pattern-based approach offers promising results for enhancing sentiment analysis on Twitter and understanding the nuances of communication in social media discourse.
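The feature families named in the abstract (sentiment, punctuation, and pattern cues) can be illustrated with a toy extractor like the one below. The word lists, feature names, and the "polarity clash" irony cue are all illustrative assumptions, not the paper's actual feature set.

```python
import re

def tweet_features(text):
    """Toy versions of the feature families the paper names: sentiment words,
    punctuation, and simple pattern cues (all word lists are illustrative)."""
    positive = {"love", "great", "awesome", "perfect", "wonderful"}
    negative = {"hate", "terrible", "awful", "worst", "broken"}
    words = re.findall(r"[a-z']+", text.lower())
    return {
        # sentiment-related features
        "pos_words": sum(w in positive for w in words),
        "neg_words": sum(w in negative for w in words),
        # punctuation-related features
        "exclamations": text.count("!"),
        "question_marks": text.count("?"),
        "ellipsis": text.count("..."),
        # emphasis cue often associated with sarcasm
        "all_caps_words": sum(1 for w in text.split() if w.isupper() and len(w) > 1),
        # crude irony cue: positive sentiment word next to a negative one
        "polarity_clash": int(any(w in positive for w in words)
                              and any(w in negative for w in words)),
    }

f = tweet_features("I just LOVE waiting two hours for the worst service ever!!!")
print(f["polarity_clash"], f["exclamations"], f["all_caps_words"])  # -> 1 3 1
```

Feature dictionaries like this would then be vectorized and fed to any standard classifier to produce the sarcastic/non-sarcastic decision.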
APA, Harvard, Vancouver, ISO, and other styles
49

Treeratpituk, Pucktada, and C. Lee Giles. "Name-Ethnicity Classification and Ethnicity-Sensitive Name Matching." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2021): 1141–47. http://dx.doi.org/10.1609/aaai.v26i1.8324.

Full text
Abstract:
Personal names are important and common information in many data sources, ranging from social networks and news articles to patient records and scientific documents. They are often used as queries for retrieving records and also as key information for linking documents from multiple sources. Matching personal names can be challenging due to variations in spelling and various formatting of names. While many approximate name matching techniques have been proposed, most are generic string-matching algorithms. Unlike other types of proper names, personal names are highly cultural. Many ethnicities have their own unique naming systems and identifiable characteristics. In this paper we explore such relationships between ethnicities and personal names to improve name matching performance. First, we propose a name-ethnicity classifier based on multinomial logistic regression. Our model can effectively identify name-ethnicity from personal names in Wikipedia, which we use to define name-ethnicity, to within 85% accuracy. Next, we propose a novel alignment-based name matching algorithm, based on the Smith–Waterman algorithm and logistic regression. Different name matching models are then trained for different name-ethnicity groups. Our preliminary experimental result on DBLP's disambiguated author dataset yields a performance of 99% precision and 89% recall. Surprisingly, textual features carry more weight than phonetic ones in name-ethnicity classification.
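The Smith–Waterman local alignment that underlies the paper's matcher can be sketched as below. This is the generic textbook scoring scheme (match/mismatch/gap values are illustrative assumptions); the paper's contribution is to learn ethnicity-specific scoring on top of this backbone.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two strings.
    Scores never drop below zero, so the best locally aligned
    substring pair determines the result."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # local alignment: floor at 0, consider diagonal and both gap moves
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# variant spellings of the same surname score higher than unrelated names
print(smith_waterman("meyer", "meier"), smith_waterman("meyer", "zhang"))  # -> 7 0
```

An ethnicity-sensitive matcher would replace the flat match/mismatch costs with substitution weights learned per name-ethnicity group, so that, e.g., y/i substitutions are cheap for Germanic surnames.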
APA, Harvard, Vancouver, ISO, and other styles
50

Liu, Xiaofeng, Pradeep Kumar Singh, and Pljonkin Anton Pavlovich. "Accent labeling algorithm based on morphological rules and machine learning in English conversion system." Journal of Intelligent Systems 30, no. 1 (2021): 881–92. http://dx.doi.org/10.1515/jisys-2020-0144.

Full text
Abstract:
The dependency of a speech recognition system on the accent of a user leads to variation in its performance, as people from different backgrounds have different accents. Accent labeling and conversion have been reported as a prospective solution for the challenges faced in language learning and various other voice-based applications. In the English TTS system, the accent labeling of unregistered words is another very important link besides phonetic conversion. Since the importance of the primary stress is much greater than that of the secondary stress, and the primary stress is easier to label than the secondary stress, the labeling of the primary stress is separated from that of the secondary stress. In this work, the labeling of primary accents uses an algorithm that combines morphological rules and machine learning; the labeling of secondary accents is done entirely through machine learning algorithms. After 10 rounds of cross-validation, the average tagging accuracy of primary stress was 94%, the average tagging accuracy of secondary stress was 94%, and the total tagging accuracy was 83.6%. This perceptual study separates the labeling of primary and secondary accents, providing promising outcomes.
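The rule half of such a hybrid labeller can be illustrated with suffix-driven stress rules of the kind found in English morphology. The rule table below is a toy assumption (real systems use far larger rule sets); words the rules cannot decide fall through to the machine-learned model, mirroring the hybrid design the abstract describes.

```python
def primary_stress_position(word):
    """Return the primary-stressed syllable counted from the END of the word
    according to simple suffix rules, or None if no rule fires (in a hybrid
    system, None would fall through to the machine-learned labeller)."""
    # (suffix, stressed syllable from the end) - illustrative rules only
    suffix_rules = [
        ("ation", 2),  # infor-MA-tion: stress on the syllable before -tion
        ("ition", 2),  # compe-TI-tion
        ("ic",    2),  # elec-TRON-ic: stress on the syllable before -ic
        ("ity",   3),  # elec-TRI-ci-ty: antepenultimate stress
        ("ee",    1),  # train-EE: stress on the suffix itself
    ]
    for suffix, pos in suffix_rules:
        if word.endswith(suffix):
            return pos
    return None

print(primary_stress_position("information"),   # 2
      primary_stress_position("electricity"),   # 3
      primary_stress_position("algorithm"))     # None -> ML fallback
```

Separating primary from secondary stress, as the paper does, keeps the rule table small: secondary stress placement is far less regular, so it is left entirely to the learned model.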
APA, Harvard, Vancouver, ISO, and other styles