Academic literature on the topic 'Soundex algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Soundex algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Soundex algorithm"

1

Amin, M. Miftakul, and Yevi Dwitayanti. "Komparasi Kinerja Algoritma Blocking Pada Proses Indexing Untuk Deteksi Duplikasi." Jurnal Teknologi Informasi dan Ilmu Komputer 11, no. 4 (2024): 715–22. http://dx.doi.org/10.25126/jtiik.1148080.

Abstract:
The process of integrating data from heterogeneous data sources requires good data quality. One characteristic of good data quality is the absence of duplicate records. To detect duplicates, each record in a dataset can be compared with the others to form candidate record pairs. Blocking techniques are used in the indexing process to reduce the number of record pairs that must be compared. This research compares several blocking algorithms in order to recommend the most effective one. Six blocking algorithms are investigated: Soundex, NYSIIS, Metaphone, Double Metaphone, Jaro-Winkler Similarity, and Cosine Similarity. The dataset used is a restaurant dataset containing 112 records, several of which are suspected duplicates. The results show that the NYSIIS algorithm produced the most effective record blocking, at 97 records, while the Soundex and Cosine Similarity algorithms produced the fewest candidate record pairs, at 8. In terms of execution time, the Soundex and NYSIIS algorithms were fastest, at 0.04 seconds.
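The blocking step this abstract describes can be sketched with the classic American Soundex as the blocking key. This is an illustrative sketch under stated assumptions, not the paper's implementation; the restaurant dataset and the exact Soundex variant used there are not reproduced here.

```python
from collections import defaultdict
from itertools import combinations

def soundex(name: str) -> str:
    """Classic American Soundex: first letter + three digits."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return ""
    key, prev = name[0], codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            key += code
        if ch not in "HW":          # H and W do not reset the previous code
            prev = code
    return (key + "000")[:4]

def candidate_pairs(records):
    """Group records by Soundex key; only records sharing a block are paired."""
    blocks = defaultdict(list)
    for rec in records:
        blocks[soundex(rec)].append(rec)
    pairs = []
    for block in blocks.values():
        pairs.extend(combinations(block, 2))
    return pairs
```

For example, `candidate_pairs(["Robert", "Rupert", "Smith", "Smyth"])` yields only the two within-block pairs `("Robert", "Rupert")` and `("Smith", "Smyth")` instead of all six comparisons.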
2

Arora, Monika, and Vineet Kansal. "The Inverse Edit Term Frequency for Informal Word Conversion Using Soundex for Analysis of Customer’s Reviews." Recent Advances in Computer Science and Communications 13, no. 5 (2020): 917–25. http://dx.doi.org/10.2174/2213275912666190405114330.

Abstract:
Background: E-commerce/m-commerce has emerged as a new way of doing business in the present world, one that requires understanding customers' needs with the utmost precision and appropriateness. With the advent of technology, mobile devices have become vital tools in today's world. In fact, smartphones have changed the way of communication: the user can access any information in a single click, and text messages have become the basic channel of interaction. The use of informal text messages by customers has created a challenge for business segments, opening a gap with respect to customers' actual requirements due to the inappropriate representation of their needs when using the short message service in an informal manner. Objective: Informally written text messages have become a center of attraction for researchers who analyze and normalize such textual data. In this paper, SMS data have been analyzed for information retrieval using the Soundex phonetic algorithm and its variations. Methods: Two datasets have been considered: the SMS-based FAQ of FIRE 2012 and a self-generated survey dataset, both used to evaluate the performance of the proposed Soundex phonetic algorithm. Results: It has been observed that by applying Soundex with Inverse Edit Term Frequency, the lexical similarity between SMS words and natural-language text is significantly improved. The results shown support this claim. Conclusion: Soundex with the Inverse Edit Term Frequency Distribution algorithm is the best suited among the various variations of Soundex. This algorithm normalizes informally written text and retrieves the exact match from the bag of words.
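As a rough illustration of the general idea of pairing a phonetic key with spelling similarity for SMS normalization: the paper's Inverse Edit Term Frequency weighting is not reproduced here, and the toy lexicon, `skeleton()` and `normalize()` below are illustrative assumptions, with a crude consonant skeleton standing in for a full Soundex key.

```python
import difflib

LEXICON = ["please", "text", "great", "tomorrow", "message"]  # toy vocabulary (assumption)

def skeleton(word: str) -> str:
    """Crude phonetic skeleton (stand-in for a full Soundex key):
    first letter plus remaining consonants, with consecutive repeats collapsed."""
    word = word.lower()
    out = word[:1]
    for c in word[1:]:
        if c not in "aeiouy" and not out.endswith(c):
            out += c
    return out

def normalize(sms_word: str, lexicon=LEXICON) -> str:
    """Match an informal token to the lexicon entry with the closest spelling
    among entries sharing its phonetic skeleton; fall back to the full lexicon."""
    pool = [w for w in lexicon if skeleton(w) == skeleton(sms_word)] or lexicon
    return max(pool, key=lambda w: difflib.SequenceMatcher(None, sms_word, w).ratio())
```

For example, `normalize("tmrw")` returns `"tomorrow"` because both share the skeleton `"tmrw"`.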
3

Shah, Rima, and Dheeraj Kumar Singh. "Improvement of Soundex Algorithm for Indian Language Based on Phonetic Matching." International Journal of Computer Science, Engineering and Applications 4, no. 3 (2014): 31–39. http://dx.doi.org/10.5121/ijcsea.2014.4303.

4

Christopher Jaisunder, G., Israr Ahmed, and R. K. Mishra. "Need for Customized Soundex based Algorithm on Indian Names for Phonetic Matching." Global Journal of Enterprise Information System 8, no. 2 (2017): 30. http://dx.doi.org/10.18311/gjeis/2016/7658.

Abstract:
In any digitization program, the reproduction of handwritten demographic data is a challenging job, particularly for records from previous decades. Nowadays, digitization of individuals' past records has become essential. In areas like financial inclusion, border security, driving licenses, passport issuance, weapon licenses, banking, health care and social welfare benefits, an individual's earlier case history is a mandatory part of the decision-making process. Documents are scanned and stored systematically; each scanned document is tagged with a proper key and retrieved with the help of that key for data entry through a software program or package. Here comes the difficulty: the data, particularly critical personal data like name and father's name, may not be legible, and data entry operators type the characters as per their understanding. The chances of error are high, with name variations involving duplicate characters, abbreviations, omissions, missing spaces between names and wrong spellings. The challenge is that data retrieval over these key fields may fail because of the wrong data entry. We need to explore the opportunities and challenges of defining effective strategies to execute this job without compromising the quality and quantity of the matches. In this scenario, we need an appropriate string matching algorithm with phonetic matching, defined according to the nature, type and region of the data domain, so that the search is phonetic rather than a simple string comparison. In this paper, I have tried to explain the need for a customized Soundex-based algorithm for phonetic matching over misspelt, incomplete, repetitive and partial prevalent data.
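A toy illustration of why phonetic keying helps with the entry errors listed above: the canonicalization below is an assumption for illustration, not the paper's customized algorithm, but it shows several common spelling variants of one name collapsing to a single key.

```python
import re

def name_key(raw: str) -> str:
    """Illustrative canonicalization for noisy name entry (assumption, not the
    paper's algorithm): keep letters only, uppercase, collapse runs of the same
    letter, then drop internal vowels/H/W as a crude phonetic step."""
    s = re.sub(r"[^A-Za-z]", "", raw).upper()
    s = re.sub(r"(.)\1+", r"\1", s)                      # duplicate characters
    return s[:1] + re.sub(r"[AEIOUYHW]", "", s[1:])      # spelling variation

entries = ["Mohammed Khan", "Mohamed  Khan", "MOHAMMAD KHAN", "Muhammad Khan"]
```

All four entries map to the same key, so they would land in one retrieval bucket despite the spelling differences and the extra space.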
5

Buriachok, Volodymyr, Matin Hadzhyiev, Volodymyr Sokolov, Pavlo Skladannyi, and Lidiia Kuzmenko. "Implantation of Indexing Optimization Technology for Highly Specialized Terms Based on the Metaphone Phonetical Algorithm." Eastern-European Journal of Enterprise Technologies 5, no. 2 (101) (2019): 43–50. https://doi.org/10.15587/1729-4061.2019.181943.

Abstract:
When compiling databases, for example to meet the needs of healthcare establishments, there is quite a common problem with the entry and further processing of names and surnames of doctors and patients that are highly specialized both in pronunciation and in writing. This is because people's names and surnames cannot be unique, their notation is not subject to any rules of phonetics, and their length in different languages may not match. With the advent of the Internet, this situation has become critical in general and can lead to multiple copies of e-mails being sent to one address. The problem can be solved by using the phonetic word-comparison algorithms Daitch-Mokotoff, SoundEx, NYSIIS, Polyphone, and Metaphone, as well as the Levenshtein and Jaro algorithms and Q-gram-based algorithms, which make it possible to find distances between words. The most widespread among them are the SoundEx and Metaphone algorithms, which are designed to index words based on their sound, taking the rules of pronunciation into consideration. By applying the Metaphone algorithm, an attempt has been made to optimize phonetic search for tasks of fuzzy matching, for example in data deduplication in various databases and registries, in order to reduce the number of errors from incorrectly entered surnames. An analysis of the most common surnames reveals that some of them are of Ukrainian or Russian origin. At the same time, the rules by which names are pronounced and written, for example in Ukrainian, differ radically from the basic algorithms for English and differ quite significantly for Russian. That is why a phonetic algorithm should first of all take into consideration the peculiarities of Ukrainian surname formation, which is of special relevance now.
The paper reports results from an experiment generating phonetic indexes, as well as the performance gains obtained when using the formed indexes. A method for adapting the search to other areas and several related languages is presented separately, using a search for medical preparations as an example.
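The phonetic-index idea common to the algorithms named in this abstract can be sketched as a key-to-bucket map with a pluggable key function. The toy key below merely stands in for Metaphone or SoundEx; it is an assumption for illustration, not the paper's Ukrainian-adapted tables.

```python
from collections import defaultdict

class PhoneticIndex:
    """Minimal phonetic index: any key function (Soundex, NYSIIS, Metaphone, ...)
    can be plugged in; a lookup returns every record sharing the query's key."""
    def __init__(self, key_fn):
        self.key_fn = key_fn
        self.buckets = defaultdict(set)

    def add(self, record):
        self.buckets[self.key_fn(record)].add(record)

    def lookup(self, query):
        return sorted(self.buckets.get(self.key_fn(query), set()))

# Crude stand-in key: uppercase letters minus vowels/H/W (not a real phonetic algorithm)
toy_key = lambda s: "".join(c for c in s.upper() if c.isalpha() and c not in "AEIOUYHW")

idx = PhoneticIndex(toy_key)
for name in ["Smith", "Schmidt", "Smythe"]:
    idx.add(name)
```

With this index, `idx.lookup("Smyth")` finds both "Smith" and "Smythe" but not "Schmidt", because the first two share the query's key.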
6

Paliulionis, Viktoras. "Lietuviškų adresų geokodavimo problemos ir jų sprendimo būdai." Informacijos mokslai 50 (January 1, 2009): 217–22. http://dx.doi.org/10.15388/im.2009.0.3235.

Abstract:
Geocoding is the process of converting a textual description of a location into geographic coordinates. One of the most frequently used ways to describe a place is its postal address, which contains a city name, street name, house number and other address components. The paper deals with the problems of geocoding Lithuanian addresses, the main ones being the variety of address formats in use and possible typing and spelling errors. The paper describes the steps of the geocoding process and the algorithms used in them. We propose a phonetic algorithm called LT-Soundex, adapted for the Lithuanian language, which enables indexing address components by phonetic similarity and performing approximate searching. It is used together with Levenshtein distance for effective approximate address searching.
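The language-adaptation idea behind LT-Soundex can be illustrated by parameterizing the Soundex letter groups. The Lithuanian-like grouping below is purely hypothetical and is not the published LT-Soundex table; it only shows how the digit groups can be re-specified per language.

```python
def make_soundex(groups, keep="HW"):
    """Build a Soundex-style key function from language-specific letter groups.
    The groups passed in are illustrative only."""
    codes = {ch: str(i + 1) for i, grp in enumerate(groups) for ch in grp}
    def key(word: str, length: int = 4) -> str:
        word = word.upper()
        out, prev = word[:1], codes.get(word[:1], "")
        for ch in word[1:]:
            code = codes.get(ch, "")
            if code and code != prev:
                out += code
            if ch not in keep:        # letters outside 'keep' reset the previous code
                prev = code
        return (out + "0" * length)[:length]
    return key

english = make_soundex(["BFPV", "CGJKQSXZ", "DT", "L", "MN", "R"])
# A hypothetical re-grouping that codes Lithuanian sibilants together:
lt_like = make_soundex(["BP", "CČSŠZŽ", "DT", "L", "MN", "R", "GK"])
```

The English table reproduces classic behavior (`english("Robert")` gives `"R163"`), while the hypothetical table keys words containing Č, Š or Ž without any transliteration step.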
7

Alksasbeh, Malek Z., Bassam A. Y. Alqaralleh, Tamer Abukhalil, Anas Abukaraki, Tawfiq Al Rawashdeh, and Moha'med Al-Jaafreh. "Smart detection of offensive words in social media using the soundex algorithm and permuterm index." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 5 (2021): 4431–38. https://doi.org/10.11591/ijece.v11i5.pp4431-4438.

Abstract:
Offensive posts on social media that are inappropriate for a specific age, level of maturity, or impression are quite often aimed more at minors than at adult participants. Nowadays, the growth in the number of masked offensive words on social media is one of the ethically challenging problems, so there has been growing interest in developing methods that can automatically detect posts containing such words. This study aimed at developing a method that can detect masked offensive words, in which partial alteration of a word may trick conventional monitoring systems when it is posted on social media. The proposed method progresses in a series of phases: a pre-processing phase, which includes filtering, tokenization, and stemming; an offensive word extraction phase, which relies on the soundex algorithm and a permuterm index; and a post-processing phase that classifies users' posts in order to highlight offensive content. Accordingly, the method detects masked offensive words in written text, thus forbidding certain types of offensive words from being published. Evaluation results indicate a 99% accuracy in detecting offensive words.
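The permuterm-index component can be sketched as follows. This is the textbook single-wildcard construction, not the authors' exact implementation, and `build_permuterm` / `wildcard_lookup` are illustrative names; a masked word like "d*mn" is matched by rotating the pattern so the wildcard falls at the end.

```python
def permuterm_rotations(term: str):
    """All rotations of term + end marker '$' — the permuterm vocabulary."""
    t = term + "$"
    return [t[i:] + t[:i] for i in range(len(t))]

def build_permuterm(vocab):
    """Map every rotation back to its source term."""
    index = {}
    for term in vocab:
        for rot in permuterm_rotations(term):
            index[rot] = term
    return index

def wildcard_lookup(index, pattern):
    """Single-* wildcard: rotate 'pre*suf' to 'suf$pre', then prefix-match."""
    pre, _, suf = pattern.partition("*")
    key = suf + "$" + pre
    return sorted({t for rot, t in index.items() if rot.startswith(key)})
```

For a toy vocabulary, `wildcard_lookup(build_permuterm(["damn", "darn"]), "d*mn")` returns `["damn"]`, because the rotation `"mn$da"` is a prefix match for `"mn$d"`.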
9

Cox, Shelley, Rohan Martin, Piyali Somaia, and Karen Smith. "The development of a data-matching algorithm to define the ‘case patient’." Australian Health Review 37, no. 1 (2013): 54. http://dx.doi.org/10.1071/ah11161.

Abstract:
Objectives. To describe a model that matches electronic patient care records within a given case to one or more patients within that case. Method. This retrospective study included data from all metropolitan Ambulance Victoria electronic patient care records (n = 445 576) for the time period 1 January 2009–31 May 2010. Data were captured via VACIS (Ambulance Victoria, Melbourne, Vic., Australia), an in-field electronic data capture system linked to an integrated data warehouse database. The case patient algorithm included ‘Jaro–Winkler’, ‘Soundex’ and ‘weight matching’ conditions. Results. The case patient matching algorithm has a sensitivity of 99.98%, a specificity of 99.91% and an overall accuracy of 99.98%. Conclusions. The case patient algorithm provides Ambulance Victoria with a sophisticated, efficient and highly accurate method of matching patient records within a given case. This method has applicability to other emergency services where unique identifiers are case based rather than patient based. What is known about the topic? Accurate pre-hospital data that can be linked to patient outcomes is widely accepted as critical to support pre-hospital patient care and system performance. What does this paper add? There is a paucity of literature describing electronic matching of patient care records at the patient level rather than the case level. Ambulance Victoria has developed a complex yet efficient and highly accurate method for electronically matching patient records, in the absence of a patient-specific unique identifier. Linkage of patient information from multiple patient care records to determine if the records are for the same individual defines the ‘case patient’. What are the implications for practitioners? This paper describes a model of record linkage where patients are matched within a given case at the patient level as opposed to the case level. This methodology is applicable to other emergency services where unique identifiers are case based.
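A toy sketch of a multi-condition record match in the spirit of the case-patient algorithm: `difflib`'s ratio stands in for Jaro–Winkler, and the field names, threshold and phonetic skeleton below are assumptions for illustration, not Ambulance Victoria's actual matching conditions.

```python
import difflib

def similar(a: str, b: str) -> float:
    # difflib's ratio used as a stand-in for Jaro-Winkler similarity (assumption)
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def same_patient(r1, r2, threshold=0.85):
    """Toy multi-condition match: dates of birth must agree, and names must
    either be close in spelling or share a crude phonetic skeleton."""
    if r1["dob"] != r2["dob"]:
        return False
    skel = lambda s: "".join(c for c in s.lower() if c not in "aeiouyhw ")
    return similar(r1["name"], r2["name"]) >= threshold or skel(r1["name"]) == skel(r2["name"])
```

The OR of a string-similarity condition and a phonetic condition lets "Jon Smith" and "John Smyth" match on the same case while a differing date of birth still blocks the pair.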
10

Maarif, Haris Al Qodri, Teddy Surya Gunawan, and Rini Akmeliawati. "Adaptive language processing unit for Malaysian sign language synthesizer." IAES International Journal of Robotics and Automation (IJRA) 10, no. 4 (2021): 326–39. https://doi.org/10.11591/ijra.v10i4.pp326-339.

Abstract:
A language processing unit (LPU) is a system built to process text-based data so that it complies with the rules of sign language grammar. This system was developed as an important part of a sign language synthesizer. Sign language (SL) uses grammatical rules different from those of spoken/verbal language, involving only the important words that hearing- and speech-impaired people can understand. Therefore, word classification by the LPU is needed to produce grammatically processed sentences for the sign language synthesizer. However, existing language processing units in SL synthesizers suffer from time lag and complexity problems, resulting in high processing times. The two features, computational time and success rate, become a trade-off: the processing time grows longer to achieve a higher success rate. This paper proposes an adaptive LPU that converts spoken words to Malaysian SL grammatical rules with a relatively fast processing time and a good success rate. It involves n-grams, natural language processing (NLP), and hidden Markov models (HMM)/Bayesian networks as the classifier to process the text-based input. As a result, the proposed LPU system provides an efficient (fast) processing time and a good success rate compared to LPUs using other edit distances (Mahalanobis, Levenshtein, and Soundex). The system was tested on 130 input sentences ranging from 3 to 10 words. Results showed that the proposed LPU achieved around 1.497 ms processing time with an average success rate of 84.23% for sentences of up to ten words.
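Of the edit distances this abstract compares against, Levenshtein distance is the classic dynamic-programming one; a minimal sketch:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute),
    keeping only the previous row of the DP table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```

For example, `levenshtein("kitten", "sitting")` is 3 (two substitutions and one insertion).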

Dissertations / Theses on the topic "Soundex algorithm"

1

Alghassi, Hedayat. "Eye array sound source localization." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/5114.

Abstract:
Sound source localization with microphone arrays has received considerable attention as a means for the automated tracking of individuals in an enclosed space and as a necessary component of any general-purpose speech capture and automated camera pointing system. A novel method, computationally efficient compared to traditional source localization techniques, is proposed and both theoretically and experimentally investigated in this research. This thesis first reviews the previous work in this area. The evolution of a new localization algorithm, accompanied by an array structure for audio signal localization in three-dimensional space, is then presented. This method, which has similarities to the structure of the eye, consists of a novel hemispherical microphone array with microphones on the shell and one microphone in the center of the sphere. The hemispherical array provides such benefits as 3D coverage, simple signal processing and low computational complexity. The signal processing scheme utilizes parallel computation of a special and novel closeness function for each microphone direction on the shell. The closeness functions have output values that are linearly proportional to the spatial angular difference between the sound source direction and each of the shell microphone directions. Finally, by choosing directions corresponding to the highest closeness function values and implementing linear weighted spatial averaging in those directions, we estimate the sound source direction. The experimental tests validate the method with less than 3.1° of error in a small office room. Contrary to traditional algorithmic sound source localization techniques, the proposed method is based on parallel mathematical calculations in the time domain. Consequently, it can be easily implemented on a custom designed integrated circuit.
2

Prescott, Tom. "Sound design, composition and performance with interactive genetic algorithms." Thesis, Keele University, 2018. http://eprints.keele.ac.uk/5030/.

Abstract:
A variety of work has been carried out investigating the suitability of interactive genetic algorithms (IGAs) for musical composition. There have been some promising results demonstrating that it is, in principle, an effective approach. Modern sound synthesis and processing techniques (SSPTs) are often very complex and difficult to use. They often consist of tens or hundreds of parameters and a large range of values can be assigned to each parameter. This results in an immense number of parameter combinations; listening to the result of each one is clearly not viable. Furthermore, the effect each parameter has on the audio output may not be immediately obvious. Effectively using these systems can require a considerable time commitment and a great deal of theoretical knowledge. This means that in many cases these techniques are not being used to their full potential. IGAs offer a solution to this problem by providing a user with a simpler, more accessible interface to a range of SSPTs. This allows the user to navigate more effectively through the parameter space and explore the range of materials which can be generated by an SSPT. This thesis presents compositions and software that investigate a range of approaches to the application of IGAs to sound design, composition and performance. While investigating these areas, the aim has been to overcome the limitations of previous IGA based systems and extend this approach into new areas. A number of IGA based systems have been developed which allow a user to develop varied compositions consisting of diverse and complex material with minimal training.
3

Markle, Blake L. "A comparative study of time-stretching algorithms for audio signals /." Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=31119.

Abstract:
Algorithms exist which will perform independent transformations on the frequency or duration of a digital audio signal. These processes have different results for different types of audio signals. A comparative study of granular and phase vocoder algorithms, their implementation, and their respective effects on audio signals was made to determine which algorithm is best suited to a particular type of audio signal.
4

Nalavolu, Praveen Reddy. "PERFORMANCE ANALYSIS OF SRCP IMAGE BASED SOUND SOURCE DETECTION ALGORITHMS." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_theses/50.

Abstract:
Steered Response Power based algorithms are widely used for finding sound source locations using microphone array systems. SRCP-PHAT is one such algorithm that has robust performance under noisy and reverberant conditions. The algorithm creates a likelihood function over the field of view. This thesis employs image processing methods on SRCP-PHAT images to exploit the difference in power levels and pixel patterns to discriminate between sound source and background pixels. Hough Transform based ellipse detection is used to identify the sound source locations by finding the centers of elliptical edge pixel regions typical of source patterns. Monte Carlo simulations of an eight-microphone perimeter array with single and multiple sound sources are used to simulate the test environment, and the area under the receiver operating characteristic (ROC) curve is used to analyze the algorithm performance. Performance was compared to a simpler algorithm involving Canny edge detection and image averaging, and to an algorithm based simply on the magnitude of local maxima in the SRCP image. Analysis shows that the Canny edge detection based method performed better in the presence of coherent noise sources.
5

Khan, Muhammad Salman. "Informed algorithms for sound source separation in enclosed reverberant environments." Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/13350.

Abstract:
While humans can separate a sound of interest amidst a cacophony of contending sounds in an echoic environment, machine-based methods lag behind in solving this task. This thesis thus aims at improving performance of audio separation algorithms when they are informed i.e. have access to source location information. These locations are assumed to be known a priori in this work, for example by video processing. Initially, a multi-microphone array based method combined with binary time-frequency masking is proposed. A robust least squares frequency invariant data independent beamformer designed with the location information is utilized to estimate the sources. To further enhance the estimated sources, binary time-frequency masking based post-processing is used but cepstral domain smoothing is required to mitigate musical noise. To tackle the under-determined case and further improve separation performance at higher reverberation times, a two-microphone based method which is inspired by human auditory processing and generates soft time-frequency masks is described. In this approach interaural level difference, interaural phase difference and mixing vectors are probabilistically modeled in the time-frequency domain and the model parameters are learned through the expectation-maximization (EM) algorithm. A direction vector is estimated for each source, using the location information, which is used as the mean parameter of the mixing vector model. Soft time-frequency masks are used to reconstruct the sources. A spatial covariance model is then integrated into the probabilistic model framework that encodes the spatial characteristics of the enclosure and further improves the separation performance in challenging scenarios i.e. when sources are in close proximity and when the level of reverberation is high. 
Finally, new dereverberation based pre-processing is proposed based on a cascade of three dereverberation stages, each of which enhances the two-microphone reverberant mixture. The dereverberation stages are based on amplitude spectral subtraction, where the late reverberation is estimated and suppressed. The combination of such dereverberation based pre-processing and soft mask separation yields the best separation performance. All methods are evaluated with real and synthetic mixtures formed, for example, from speech signals from the TIMIT database and measured room impulse responses.
6

Virgulti, Marco. "Advancend algorithms for sound reproduction enhancement in portable multimedia devices." Doctoral thesis, Università Politecnica delle Marche, 2015. http://hdl.handle.net/11566/243003.

Abstract:
In this thesis, a study of the problems related to audio reproduction on mobile devices has been carried out. The main topics are audio spatialization techniques, low-frequency enhancement approaches, and advanced techniques for room equalization, taking into consideration different types of equalizers. The work can be divided into two parts. In the first part, an advanced audio reproduction enhancement architecture for mobile devices is presented. The main problems to solve are the low computational capabilities of the devices and the small physical dimensions of their loudspeakers. The proposed solution is a suitable architecture of advanced algorithms, capable of enhancing audio reproduction with low computational effort, forming an integrated acoustic system. The architecture is composed of three components: a crosstalk canceller, an advanced multipoint equalizer, and a virtual bass enhancer. In the second part of the work, graphic equalizers are studied. Starting from a high-quality FIR filterbank graphic equalizer, two optimization strategies are investigated, each replacing the FIR prototype filter with an approximating IIR filter in order to reduce the computational cost while preserving the original overall quality. After the theoretical study, the work focuses on software implementations of the proposed approaches. For the advanced audio system, implementations were realized for the two main mobile operating systems, iOS and Android; for the optimized graphic equalizers, the implementation was realized in the NU-Tech real-time environment. Finally, a full set of tests was performed on these platforms to evaluate performance and measure the computational cost. The implementation of the integrated audio system was assessed with both objective and subjective tests, while the proposed graphic equalizers were evaluated in terms of both frequency bandwidth and computational complexity and compared with the reference FIR approach, demonstrating the effectiveness of the proposed methods.
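The FIR-to-IIR substitution described in this abstract trades the per-sample cost of a long FIR filterbank for cheap recursive filters. As a generic illustration of that trade-off (a first-order low-pass, not the thesis's actual prototype filters), a recursive filter needs only one multiply-add per sample regardless of how long its impulse response effectively is:

```python
def iir_lowpass(samples, alpha=0.2):
    """First-order recursive (IIR) low-pass: one multiply-add per sample,
    versus N multiply-adds per sample for an N-tap FIR filter."""
    out, state = [], 0.0
    for x in samples:
        state += alpha * (x - state)  # y[n] = y[n-1] + alpha*(x[n] - y[n-1])
        out.append(state)
    return out

# A unit step settles toward the filter's DC gain of 1.
step_response = iir_lowpass([1.0] * 100)
```

The recursion reaches any target accuracy at a fixed cost per sample, which is why approximating an FIR prototype with IIR filters reduces the computational load whenever the approximation quality is adequate.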
7

Burt, Warren. "Algorithms, microtonality, performance: eleven musical compositions." Access electronically, 2007. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20080131.162859/index.html.

Full text
Abstract:
Thesis (Ph.D.)--University of Wollongong, 2007. Typescript. Includes 2 sound discs and 1 DVD-ROM in back pocket. CD 1: The animation of lists; CD 2: And the archytan transpositions. DVD-ROM contains Part Three - Appendix. Includes bibliographical references (leaves 291-301).
8

Cousins, David Bruce. "A model-based algorithm for environmentally adaptive bathymetry and sound velocity profile estimation." View online; access limited to URI, 2005. http://0-wwlib.umi.com.helin.uri.edu/dissertations/dlnow/3186901.

Full text
9

Roman, Nicoleta. "Auditory-based algorithms for sound segregation in multisource and reverberant environments." Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1124370749.

Full text
Abstract:
Thesis (Ph.D.)--Ohio State University, 2005. Title from first page of PDF file. Document formatted into pages; contains i-xxii, 183 p.; also includes graphics. Includes bibliographical references (p. 171-183). Available online via OhioLINK's ETD Center.
10

Wang, Xun. "Sound source localization with data and model uncertainties using the EM and Evidential EM algorithms." Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP2164/document.

Full text
Abstract:
This work addresses the problem of multiple sound source localization for both deterministic and random signals measured by an array of microphones. The problem is solved in a statistical framework via maximum likelihood estimation. The pressure measured by a microphone is interpreted as a mixture of latent signals emitted by the sources; both the source locations and strengths can then be estimated using an expectation-maximization (EM) algorithm. Two kinds of uncertainty are also considered: on the microphone locations and on the wave number. These uncertainties are transposed to the data in the belief-functions framework, and the source locations and strengths are then estimated using a variant of the EM algorithm known as the Evidential EM (E2M) algorithm. The first part of this work begins with the deterministic signal model without consideration of uncertainty. The EM algorithm is used to estimate the source locations and strengths, and the update equations for the model parameters are provided. Experimental results are presented and compared with beamforming and statistically optimized near-field holography (SONAH), demonstrating the advantage of the EM algorithm. The second part raises the issue of model uncertainty and shows how the uncertainties on microphone locations and wave number can be taken into account at the data level. In this case, the notion of likelihood is extended to uncertain data, and the E2M algorithm is used to solve the source estimation problem. In both simulations and real experiments, the E2M algorithm proves more robust in the presence of model and data uncertainty. The third part considers the case of random signals, in which the amplitude is modeled as a Gaussian random variable; both the certain and uncertain cases are investigated. In the former, the EM algorithm is employed to estimate the sound sources; in the latter, the microphone location and wave number uncertainties are quantified as in the second part, and the source locations and the variances of the random amplitudes are estimated using the E2M algorithm. The results again show the advantage of a statistical model for source estimation, and the value of accounting for uncertainty in the model parameters.
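The expectation-maximization principle used in this thesis can be illustrated on a much simpler mixture problem than the acoustic one treated above. The sketch below (illustrative only, not the thesis's pressure-mixture model) alternates responsibility computation (E-step) and weighted re-estimation (M-step) to fit two Gaussian component means to one-dimensional data:

```python
import math

def em_two_means(data, mu=(0.5, 4.0), sigma=1.0, iters=50):
    """EM for a two-component 1-D Gaussian mixture with known, equal
    variances and equal weights; only the two means are estimated."""
    mu1, mu2 = mu
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r1 = []
        for x in data:
            p1 = math.exp(-0.5 * ((x - mu1) / sigma) ** 2)
            p2 = math.exp(-0.5 * ((x - mu2) / sigma) ** 2)
            r1.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted means
        w1 = sum(r1)
        w2 = len(data) - w1
        mu1 = sum(r * x for r, x in zip(r1, data)) / w1
        mu2 = sum((1 - r) * x for r, x in zip(r1, data)) / w2
    return mu1, mu2

# Two well-separated clusters, centred near 0 and 5
estimates = em_two_means([-0.1, 0.0, 0.1, 4.9, 5.0, 5.1])
```

In the thesis the latent variables are the per-source signal components rather than cluster memberships, but the alternating structure of the algorithm is the same.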

Books on the topic "Soundex algorithm"

1

Mo, Tsan. Microwave humidity sounder calibration algorithm. U.S. Dept. of Commerce, National Oceanic and Atmospheric Administration, National Environmental Satellite, Data, and Information Service, 2004.

Find full text
2

Kacprzyk, Janusz. Music-Inspired Harmony Search Algorithm: Theory and Applications. Springer Berlin Heidelberg, 2009.

Find full text

3

Wang, Wenwu. Machine audition: Principles, algorithms, and systems. Information Science Reference, 2010.

Find full text

4

Pitre, Richard. A boundary scattering strength extraction algorithm for the analysis of long-range reverberation data. Naval Research Laboratory, 1991.

Find full text

5

Garas, John. Adaptive 3D sound systems. Kluwer Academic, 2000.

Find full text

6

Nelson, P. A. MINT, the multiple error LMS algorithm, and the design of inverse filters for multi-channel sound reproduction systems. University of Southampton, 1992.

Find full text

7

Grecu, Andrei. Musical instrument sound separation: Extracting instruments from musical performances: theory and algorithms. VDM Verlag Dr. Müller, 2008.

Find full text

8

Lee, Ding, Robert L. Sternberg, Martin H. Schultz, and the International Association for Mathematics and Computers in Simulation, eds. Computational acoustics: Proceedings of the 1st IMACS Symposium on Computational Acoustics, New Haven, CT, USA, 6-8 August 1986. North-Holland, 1988.

Find full text

Book chapters on the topic "Soundex algorithm"

1

Anand, Rahul, Rohan Mahajan, Nimish Verma, and Prabhnoor Singh. "Soundex Algorithm for Hindi Language Names." In Lecture Notes in Electrical Engineering. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-0372-6_22.

Full text
2

Gautam, Vishakha, Aayush Pipal, and Monika Arora. "SoundEx Algorithm Revisited for Indian Language." In International Conference on Innovative Computing and Communications. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2354-6_6.

Full text
3

Pinto, David, Darnes Vilariño, Yuridiana Alemán, Helena Gómez, Nahun Loya, and Héctor Jiménez-Salazar. "The Soundex Phonetic Algorithm Revisited for SMS Text Representation." In Text, Speech and Dialogue. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32790-2_5.

Full text
4

Dardinier, Thibault, Gaurav Parthasarathy, Noé Weeks, Peter Müller, and Alexander J. Summers. "Sound Automation of Magic Wands." In Computer Aided Verification. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13188-2_7.

Full text
Abstract:
The magic wand −∗ (also called separating implication) is a separation logic connective commonly used to specify properties of partial data structures, for instance during iterative traversals. A footprint of a magic wand formula A −∗ B is a state that, combined with any state in which A holds, yields a state in which B holds. The key challenge of proving a magic wand (also called packaging a wand) is to find such a footprint. Existing package algorithms either have a high annotation overhead or, as we show in this paper, are unsound. We present a formal framework that precisely characterises a wide design space of possible package algorithms applicable to a large class of separation logics. We prove in Isabelle/HOL that our formal framework is sound and complete, and use it to develop a novel package algorithm that offers competitive automation and is sound. Moreover, we present a novel, restricted definition of wands and prove in Isabelle/HOL that it is possible to soundly combine fractions of such wands, which is not the case for arbitrary wands. We have implemented our techniques for the Viper language, and demonstrate that they are effective in practice.
5

Akshay, S., Paul Gastin, and Karthik R. Prakash. "Fast Zone-Based Algorithms for Reachability in Pushdown Timed Automata." In Computer Aided Verification. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81685-8_30.

Full text
Abstract:
Given the versatility of timed automata, a huge body of work has evolved that considers extensions of timed automata. One extension that has received a lot of interest is timed automata with a possibly unbounded stack, also called pushdown timed automata (PDTA). While different algorithms have been given for reachability in different variants of this model, most of these results are purely theoretical and do not give rise to efficient implementations. One main reason for this is that none of these algorithms (and the implementations that exist) use the so-called zone-based abstraction, but rely either on the region abstraction or other approaches, which are significantly harder to implement. In this paper, we show that a naive extension, using simulations, of the zone-based reachability algorithm for the control state reachability problem of timed automata is not sound in the presence of a stack. To understand this better, we give an inductive rule-based view of the zone reachability algorithm for timed automata. This alternate view allows us to analyze and adapt the rules to also work for pushdown timed automata. We obtain the first zone-based algorithm for PDTA which is terminating, sound and complete. We implement our algorithm in the tool TChecker and perform experiments to show its efficacy, thus leading the way for more practical approaches to the verification of timed pushdown systems.
6

Hartmanns, Arnd, Bram Kohlen, and Peter Lammich. "Efficient Formally Verified Maximal End Component Decomposition for MDPs." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-71162-6_11.

Full text
Abstract:
Identifying a Markov decision process's maximal end components is a prerequisite for applying sound probabilistic model checking algorithms. In this paper, we present the first mechanized correctness proof of a maximal end component decomposition algorithm, an important algorithm in model checking, using the Isabelle/HOL theorem prover. We iteratively refine the high-level algorithm and proof into an imperative LLVM bytecode implementation that we integrate into the Modest Toolset's existing model checker. We bring the benefits of interactive theorem proving into practice by reducing the trusted code base of a popular probabilistic model checker, and we experimentally show that our new verified maximal end component decomposition performs on par with the tool's previous unverified implementation.
7

Kokologiannakis, Michalis, Iason Marmanis, and Viktor Vafeiadis. "Unblocking Dynamic Partial Order Reduction." In Computer Aided Verification. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37706-8_12.

Full text
Abstract:
Existing dynamic partial order reduction (DPOR) algorithms scale poorly on concurrent data structure benchmarks because they visit a huge number of blocked executions due to spinloops. In response, we develop Awamoche, a sound, complete, and strongly optimal DPOR algorithm that avoids exploring any useless blocked executions in programs with await and confirmation-CAS loops. Consequently, it outperforms the state of the art, often by an exponential factor.
8

Eilers, Marco, Malte Schwerhoff, and Peter Müller. "Verification Algorithms for Automated Separation Logic Verifiers." In Computer Aided Verification. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-65627-9_18.

Full text
Abstract:
Most automated program verifiers for separation logic use either symbolic execution or verification condition generation to extract proof obligations, which are then handed over to an SMT solver. Existing verification algorithms are designed to be sound, but differ in performance and completeness. These characteristics may also depend on the programs and properties to be verified. Consequently, developers and users of program verifiers have to select a verification algorithm carefully for their application domain. Taking an informed decision requires a systematic comparison of the performance and completeness characteristics of the verification algorithms used by modern separation logic verifiers, but such a comparison does not exist. This paper describes five verification algorithms for separation logic: three that are used in existing tools and two novel algorithms that combine characteristics of existing symbolic execution and verification condition generation algorithms. A detailed evaluation of implementations of these five algorithms in the Viper infrastructure assesses their performance and completeness for different classes of input programs. Based on the experimental results, we identify candidate portfolios of algorithms that maximize completeness and performance.
9

Meggendorfer, Tobias, and Maximilian Weininger. "Playing Games with Your PET: Extending the Partial Exploration Tool to Stochastic Games." In Computer Aided Verification. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-65633-0_16.

Full text
Abstract:
We present version 2.0 of the Partial Exploration Tool (Pet), a tool for verification of probabilistic systems. We extend the previous version by adding support for stochastic games, based on a recent unified framework for sound value iteration algorithms. Thereby, Pet 2 is the first tool implementing a sound and efficient approach for solving stochastic games with objectives of the type reachability/safety and mean payoff. We complement this approach by developing and implementing a partial-exploration-based variant for all three objectives. Our experimental evaluation shows that Pet 2 offers the most efficient partial-exploration-based algorithm and is the most viable tool on SGs, even outperforming unsound tools.
10

Karakasidis, Alexandros, and Georgia Koloniari. "More Sparking Soundex-Based Privacy-Preserving Record Linkage." In Algorithmic Aspects of Cloud Computing. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-33437-5_5.

Full text

Conference papers on the topic "Soundex algorithm"

1

Lai, Yu Liang, Yen Min Jasmina Khaw, Seng Poh Lim, and Tien Ping Tan. "Text Normalization of Penang Hokkien Dialect Leveraging Adapted Soundex Algorithm." In 2024 5th International Conference on Artificial Intelligence and Data Sciences (AiDAS). IEEE, 2024. http://dx.doi.org/10.1109/aidas63860.2024.10730327.

Full text
2

Shin, Sanghyun, Yiqing Ding, and Inseok Hwang. "Cockpit Alarm Detection and Identification Algorithm for Helicopters." In Vertical Flight Society 72nd Annual Forum & Technology Display. The Vertical Flight Society, 2016. http://dx.doi.org/10.4050/f-0072-2016-11532.

Full text
Abstract:
In recent years, the National Transportation Safety Board (NTSB) has emphasized the importance of analyzing flight data such as cockpit voice recordings as an effective method to improve the safety of helicopter operations. Cockpit voice recordings contain the sounds of engines, crew conversations, alarms, switch activations, and others within a cockpit. Thus, analyzing cockpit voice recordings can contribute to identifying the causes of an accident or incident. Among various types of the sounds in cockpit voice recordings, this paper focuses on cockpit alarm sounds as an object of analysis. Identifying the cockpit alarm sound which is activated when a helicopter enters an atypical state of flying could help identify the state and timing of the incident. Nonetheless, alarm sound analysis presents challenges due to the corruption of the alarm sounds by various noises from the engine and wind. In order to assist in resolving such a problem, this paper proposes an alarm sound analysis algorithm as a way to identify types of alarm sounds and detect the occurrence times of an abnormal flight. For this purpose, the algorithm finds the highest correlation with the Short Time Fourier Transform (STFT) and the Cumulative Sum Control Chart (CUSUM) using a database of the characteristic features of the alarm sounds. The proposed algorithm is successfully applied to a set of simulated audio data which was generated by the X-plane flight simulator in order to demonstrate its desired performance and utility in enhancing helicopter safety.
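The CUSUM step mentioned in this abstract is a standard change-detection statistic. A minimal one-sided version (illustrative only, with made-up parameters, not the paper's implementation) flags the first sample at which the cumulative deviation from a reference level exceeds a threshold:

```python
def cusum_detect(samples, target_mean=0.0, drift=0.5, threshold=4.0):
    """One-sided CUSUM: return the index of the first sample where the
    accumulated positive deviation from target_mean (less a drift
    allowance) exceeds threshold, or None if no change is detected."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return i
    return None

# A level shift from 0 to 2 starting at index 5 accumulates 1.5 per
# sample and crosses the threshold of 4.0 at index 7.
change_at = cusum_detect([0, 0, 0, 0, 0, 2, 2, 2, 2, 2])
```

Applied to a feature such as the correlation of an STFT frame with an alarm template, this accumulation suppresses isolated noisy spikes while still reacting quickly to a sustained change.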
3

Grzywalski, Tomasz, Dick Botteldooren, Yanjue Song, and Nilesh Madhu. "Salient sound extraction using deep neural networks predicting complex masks." In 2024 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA). IEEE, 2024. http://dx.doi.org/10.23919/spa61993.2024.10715626.

Full text
4

Arogeti, Merav, Etan Fisher, and Dima Bykhovsky. "Sound Analysis of Drop Characteristics by Evaluation of Impact on Water Pool." In 2024 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA). IEEE, 2024. http://dx.doi.org/10.23919/spa61993.2024.10715636.

Full text
5

Wang, Tianqi, Guanqun Liu, and Rubo Zhang. "Research on Sound Source Localization Based on Random Forest." In 2025 2nd International Conference on Algorithms, Software Engineering and Network Security (ASENS). IEEE, 2025. https://doi.org/10.1109/asens64990.2025.11011154.

Full text
6

Shahriare, Hasan Md, and Nursadul Mamun. "Classification of Heart Sounds using Machine Learning Algorithm." In 2025 International Conference on Electrical, Computer and Communication Engineering (ECCE). IEEE, 2025. https://doi.org/10.1109/ecce64574.2025.11013872.

Full text
7

Bhadoria, Mahendra P. S., Prantik Chakraborty, Ameya Anil Kesarkar, and Ch V. N. Rao. "Automatic Gain and Offset Control Algorithm for Millimeter-Wave Humidity Sounder Payload." In 2024 IEEE India Geoscience and Remote Sensing Symposium (InGARSS). IEEE, 2024. https://doi.org/10.1109/ingarss61818.2024.10984043.

Full text
8

He, Yuxuan, Jingyi Su, and Hongshan Shang. "Multi-Feature Fusion Based Sound Classification Algorithm." In 2024 IEEE 2nd International Conference on Image Processing and Computer Applications (ICIPCA). IEEE, 2024. http://dx.doi.org/10.1109/icipca61593.2024.10709199.

Full text
9

Das, Madhav, Manish Kumar, Ganesh Mulay, Vinaykumar S, Himanshu Patel, and B. Saravana Kumar. "Fault Resilient Payload Controller With AGOC Algorithm for mm-Wave Humidity Sounder Payload." In 2024 IEEE Space, Aerospace and Defence Conference (SPACE). IEEE, 2024. http://dx.doi.org/10.1109/space63117.2024.10667699.

Full text
10

Zahid, Muhammad Adeel, Naveed Iqbal Rao, and Adil Masood Siddiqui. "English to Urdu transliteration: An application of Soundex algorithm." In 2010 International Conference on Information and Emerging Technologies (ICIET). IEEE, 2010. http://dx.doi.org/10.1109/iciet.2010.5625681.

Full text
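For reference, the classic American Soundex encoding that the papers above adapt to other languages maps a name to its first letter plus three digits, merging similar consonants and dropping vowels. A compact sketch:

```python
def soundex(name):
    """Classic American Soundex for a non-empty ASCII name:
    first letter retained, remaining letters coded into digits."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    digits = []
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        if ch in "hw":          # h and w are skipped and do not
            continue            # separate equal consonant codes
        code = codes.get(ch, "")
        if code and code != prev:
            digits.append(code)
        prev = code             # vowels reset prev, allowing repeats
    return (name[0].upper() + "".join(digits) + "000")[:4]

# "Robert" and "Rupert" both encode to "R163": this deliberate
# collision of similar-sounding names is what the adaptations for
# Hindi, Urdu, and Hokkien aim to reproduce for other phonologies.
```

The language-specific variants in the entries above replace the consonant classes (and the vowel and h/w rules) with groupings suited to the target language's phonology, while keeping this letter-plus-digits scheme.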

Reports on the topic "Soundex algorithm"

1

Ostashev, Vladimir, Michael Muhlestein, and D. Wilson. Extra-wide-angle parabolic equations in motionless and moving media. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/42043.

Full text
Abstract:
Wide-angle parabolic equations (WAPEs) play an important role in physics. They are derived by an expansion of a square-root pseudo-differential operator in one-way wave equations, and then solved by finite-difference techniques. In the present paper, a different approach is suggested. The starting point is an extra-wide-angle parabolic equation (EWAPE) valid for small variations of the refractive index of a medium. This equation is written in an integral form, solved by a perturbation technique, and transformed to the spectral domain. The resulting split-step spectral algorithm for the EWAPE accounts for the propagation angles up to 90° with respect to the nominal direction. This EWAPE is also generalized to large variations in the refractive index. It is shown that WAPEs known in the literature are particular cases of the two EWAPEs. This provides an alternative derivation of the WAPEs, enables a better understanding of the underlying physics and ranges of their applicability, and opens an opportunity for innovative algorithms. Sound propagation in both motionless and moving media is considered. The split-step spectral algorithm is particularly useful in the latter case since complicated partial derivatives of the sound pressure and medium velocity reduce to wave vectors (essentially, propagation angles) in the spectral domain.
2

Peñaloza, Rafael. Towards a Tableau Algorithm for Fuzzy ALC with Product T-norm. Technische Universität Dresden, 2011. http://dx.doi.org/10.25368/2022.181.

Full text
Abstract:
Very recently, the tableau-based algorithm for deciding consistency of general fuzzy DL ontologies over the product t-norm was shown to be incorrect, due to a very weak blocking condition. In this report we take the first steps towards a correct algorithm by modifying the blocking condition, such that the (finite) structure obtained through the algorithm uniquely describes an infinite system of quadratic constraints. We show that this procedure terminates, and is sound and complete in the sense that the input is consistent iff the corresponding infinite system of constraints is satisfiable.
3

Kelner, E., Darren George, Marybeth Nored, and Russell C. Burkey. NMCQ4YK Development of a Low Cost Inferential Natural Gas Energy Flow Rate Prototype Retrofit Module. Pipeline Research Council International, Inc. (PRCI), 2008. http://dx.doi.org/10.55274/r0011158.

Full text
Abstract:
In 1998, a multi-year project to develop a working prototype instrument module for natural gas energy measurement was initiated. The module will be used to retrofit a natural gas custody transfer flow meter for energy measurement, at a cost an order of magnitude lower than a gas chromatograph. Development and evaluation of the prototype energy meter in 2002-2003 included: (1) refinement of the algorithm used to infer properties of the natural gas stream, such as heating value; (2) evaluation of potential sensing technologies for nitrogen content, improvements in carbon dioxide measurements, and improvements in ultrasonic measurement technology and signal processing for improved speed of sound measurements; (3) design, fabrication and testing of a new prototype energy meter module incorporating these algorithm and sensor refinements; and (4) laboratory and field performance tests of the original and modified energy meter modules. Field tests of the original energy meter module have provided results in close agreement with an onsite gas chromatograph. The original algorithm has also been tested at a field site as a stand-alone application using measurements from in situ instruments, and has demonstrated its usefulness as a diagnostic tool. The algorithm has been revised to use measurement technologies existing in the module to measure the gas stream at multiple states and infer nitrogen content. The instrumentation module has also been modified to incorporate recent improvements in CO2 and sound speed sensing technology. Laboratory testing of the upgraded module has identified additional testing needed to attain the target accuracy in sound speed measurements and heating value.
4

Dolotii, Marharyta H., and Pavlo V. Merzlykin. Using the random number generator with a hardware entropy source for symmetric cryptography problems. [б. в.], 2018. http://dx.doi.org/10.31812/123456789/2883.

Full text
Abstract:
The aim of the research is to test the possibility of using the developed random number generator [1], which utilizes sound card noise as an entropy source, in symmetric cryptography algorithms.
5

Berges, B. J. P., and A. T. M. van Helmond. Practical implementation of real-time fish classification from acoustic broadband echo sounder data - RealFishEcho: classification algorithm improvements. Wageningen Marine Research, 2018. http://dx.doi.org/10.18174/440683.

Full text
6

Brandt, Sebastian, Anni-Yasmin Turhan, and Ralf Küsters. Foundations of non-standard inferences for DLs with transitive roles. Technische Universität Dresden, 2003. http://dx.doi.org/10.25368/2022.127.

Full text
Abstract:
Description Logics (DLs) are a family of knowledge representation formalisms used for terminological reasoning. They have a wide range of applications, such as medical knowledge bases or the semantic web. Research on DLs has been focused on the development of sound and complete inference algorithms to decide satisfiability and subsumption for increasingly expressive DLs. Non-standard inferences are a group of relatively new inference services which provide reasoning support for the building, maintaining, and deployment of DL knowledge bases. So far, non-standard inferences are not available for very expressive DLs. In this paper we present first results on non-standard inferences for DLs with transitive roles. As a basis, we give a structural characterization of subsumption for DLs where existential and value restrictions can be imposed on transitive roles. We propose sound and complete algorithms to compute the least common subsumer (lcs).
7

Murrill, Steven R., and Michael V. Scanlon. Design of a Heart Sound Extraction Algorithm for an Acoustic-Based Health Monitoring System. Defense Technical Information Center, 2002. http://dx.doi.org/10.21236/ada409127.

Full text
8

Horrocks, Ian, Ulrike Sattler, and Stephan Tobies. A Description Logic with Transitive and Converse Roles, Role Hierarchies and Qualifying Number Restrictions. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.94.

Full text
Abstract:
As widely argued [HG97; Sat96], transitive roles play an important role in the adequate representation of aggregated objects: they allow these objects to be described by referring to their parts without specifying a level of decomposition. In [HG97], the Description Logic (DL) ALCHR+ is presented, which extends ALC with transitive roles and a role hierarchy. It is argued in [Sat98] that ALCHR+ is well-suited to the representation of aggregated objects in applications that require various part-whole relations to be distinguished, some of which are transitive. However, ALCHR+ allows neither the description of parts by means of the whole to which they belong, or vice versa. To overcome this limitation, we present the DL SHI which allows the use of, for example, has part as well as is part of. To achieve this, ALCHR+ was extended with inverse roles. It could be argued that, instead of defining yet another DL, one could make use of the results presented in [DL96] and use ALC extended with role expressions which include transitive closure and inverse operators. The reason for not proceeding like this is the fact that transitive roles can be implemented more efficiently than the transitive closure of roles (see [HG97]), although they lead to the same complexity class (ExpTime-hard) when added, together with role hierarchies, to ALC. Furthermore, it is still an open question whether the transitive closure of roles together with inverse roles necessitates the use of the cut rule [DM98], and this rule leads to an algorithm with very bad behaviour. We will present an algorithm for SHI without such a rule. Furthermore, we enrich the language with functional restrictions and, finally, with qualifying number restrictions. We give sound and complete decision proceduresfor the resulting logics that are derived from the initial algorithm for SHI. 
The structure of this report is as follows: In Section 2, we introduce the DL SI and present a tableaux algorithm for satisfiability (and subsumption) of SI-concepts; in another report [HST98] we prove that this algorithm can be refined to run in polynomial space. In Section 3 we add role hierarchies to SI and show how the algorithm can be modified to handle this extension appropriately. Please note that this logic, namely SHI, allows for the internalisation of general concept inclusion axioms, one of the most general forms of terminological axioms. In Section 4 we augment SHI with functional restrictions and, using the so-called pairwise-blocking technique, the algorithm can be adapted to this extension as well. Finally, in Section 5, we show that standard techniques for handling qualifying number restrictions [HB91; BBH96] together with the techniques described in previous sections can be used to decide satisfiability and subsumption for SHIQ, namely ALC extended with transitive and inverse roles, role hierarchies, and qualifying number restrictions. Although Section 5 heavily depends on the previous sections, we have made it self-contained, i.e. it contains all necessary definitions and proofs from scratch, for better readability. Building on the previous sections, Section 6 presents an algorithm that decides the satisfiability of SHIQ-ABoxes.
APA, Harvard, Vancouver, ISO, and other styles
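The abstract above notes that transitive roles can be implemented more efficiently than the transitive closure of roles, and that SHI pairs a role such as is part of with its inverse has part. As a rough, hypothetical illustration of those two ideas (a naive fixpoint over role assertions, not the tableaux algorithm from the report; all names are invented):

```python
# Hypothetical sketch: saturate assertions of a transitive role
# ("is_part_of") under transitivity, then read off the inverse
# role ("has_part") as the converse relation.

def transitive_closure(edges):
    """Return the set of pairs closed under transitivity (naive fixpoint)."""
    closed = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closed):
            for (c, d) in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

# engine is part of car, car is part of fleet
is_part_of = {("engine", "car"), ("car", "fleet")}
closed = transitive_closure(is_part_of)

# The inverse role has_part is simply the converse of is_part_of.
has_part = {(b, a) for (a, b) in closed}

assert ("engine", "fleet") in closed    # derived by transitivity
assert ("fleet", "engine") in has_part  # derived via the inverse role
```

The sketch only shows why declaring a role transitive is cheap to handle (one saturation rule) compared with reasoning about an explicit transitive-closure role constructor.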
9

Baader, Franz, and Ralf Küsters. Matching Concept Descriptions with Existential Restrictions. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.93.

Full text
Abstract:
Matching of concepts with variables (concept patterns) is a relatively new operation that has been introduced in the context of description logics, originally to help filter out unimportant aspects of large concepts appearing in industrial-strength knowledge bases. Previous work has concentrated on (sub-)languages of CLASSIC, which in particular do not allow for existential restrictions. In this work, we present sound and complete decision algorithms for the solvability of matching problems and for computing sets of matchers for matching problems in description logics with existential restrictions.
APA, Harvard, Vancouver, ISO, and other styles
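The abstract above describes matching concept patterns (concepts containing variables) against concept descriptions with existential restrictions. As a hypothetical, purely structural sketch of the underlying idea (far simpler than the sound and complete algorithms in the paper; the term representation and examples are invented):

```python
# Hypothetical sketch: structural matching of a concept pattern against a
# concept description. Concepts are nested tuples:
#   ("atom", name), ("exists", role, concept), ("and", c1, c2),
# and patterns may additionally contain variables ("var", name).
# Matching succeeds if some substitution of the variables makes the
# pattern syntactically equal to the description.

def match(pattern, desc, subst=None):
    """Return a substitution {var: subconcept}, or None if no match."""
    subst = dict(subst or {})
    tag = pattern[0]
    if tag == "var":
        name = pattern[1]
        if name in subst:                       # variable already bound:
            return subst if subst[name] == desc else None
        subst[name] = desc                      # bind variable to subconcept
        return subst
    if tag != desc[0]:
        return None
    if tag == "atom":
        return subst if pattern[1] == desc[1] else None
    if tag == "exists":
        if pattern[1] != desc[1]:               # role names must agree
            return None
        return match(pattern[2], desc[2], subst)
    if tag == "and":
        subst = match(pattern[1], desc[1], subst)
        if subst is None:
            return None
        return match(pattern[2], desc[2], subst)
    return None

# Pattern  ∃has_child.X  matched against  ∃has_child.(Tall ⊓ ∃likes.Soccer)
desc = ("exists", "has_child",
        ("and", ("atom", "Tall"), ("exists", "likes", ("atom", "Soccer"))))
pattern = ("exists", "has_child", ("var", "X"))
print(match(pattern, desc))  # binds X to the conjunction under has_child
```

Note that this sketch treats matching as syntactic unification; the paper's algorithms instead work modulo equivalence of concept descriptions, which is what makes the problem nontrivial.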