Academic literature on the topic 'Name matching algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Name matching algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Name matching algorithm"

1

Treeratpituk, Pucktada, and C. Lee Giles. "Name-Ethnicity Classification and Ethnicity-Sensitive Name Matching." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2012): 1141–47. http://dx.doi.org/10.1609/aaai.v26i1.8324.

Abstract:
Personal names are important and common information in many data sources, ranging from social networks and news articles to patient records and scientific documents. They are often used as queries for retrieving records and also as key information for linking documents from multiple sources. Matching personal names can be challenging due to variations in spelling and various formatting of names. While many approximate name matching techniques have been proposed, most are generic string-matching algorithms. Unlike other types of proper names, personal names are highly cultural. Many ethnicities have their own unique naming systems and identifiable characteristics. In this paper we explore such relationships between ethnicities and personal names to improve the name matching performance. First, we propose a name-ethnicity classifier based on the multinomial logistic regression. Our model can effectively identify name-ethnicity from personal names in Wikipedia, which we use to define name-ethnicity, to within 85% accuracy. Next, we propose a novel alignment-based name matching algorithm, based on the Smith–Waterman algorithm and logistic regression. Different name matching models are then trained for different name-ethnicity groups. Our preliminary experimental result on DBLP's disambiguated author dataset yields a performance of 99% precision and 89% recall. Surprisingly, textual features carry more weight than phonetic ones in name-ethnicity classification.
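To make the alignment-based idea concrete, here is a minimal Python sketch of Smith-Waterman local alignment applied to name strings. The scoring parameters and the length normalisation are illustrative assumptions; the paper's actual matcher trains ethnicity-specific models on top of the alignment and is not reproduced here.

```python
def smith_waterman(a: str, b: str, match: int = 2, mismatch: int = -1, gap: int = -1) -> int:
    """Best local alignment score between two strings (Smith-Waterman)."""
    a, b = a.lower(), b.lower()
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

def name_similarity(a: str, b: str) -> float:
    """Normalise by the best score the shorter name could achieve."""
    denom = 2 * min(len(a), len(b)) or 1
    return smith_waterman(a, b) / denom

print(name_similarity("Catherine Smith", "Katherine Smyth"))  # spelling variants score high (~0.83)
print(name_similarity("Li Wei", "Wei Li"))                    # reordered tokens are not handled by alignment alone
```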
2

Hadwan, Mohammed, Mohammed A. Al-Hagery, Maher Al-Sanabani, and Salah Al-Hagree. "Soft Bigram distance for names matching." PeerJ Computer Science 7 (April 21, 2021): e465. http://dx.doi.org/10.7717/peerj-cs.465.

Abstract:
Background: Bi-gram distance (BI-DIST) is a recent approach to measure the distance between two strings that have an important role in a wide range of applications in various areas. The importance of BI-DIST is due to its representational and computational efficiency, which has led to extensive research to further enhance its efficiency. However, developing an algorithm that can measure the distance of strings accurately and efficiently has posed a major challenge to many developers. Consequently, this research aims to design an algorithm that can match the names accurately. BI-DIST distance is considered the best orthographic measure for names identification; nevertheless, it lacks a distance scale between the name bigrams. Methods: In this research, the Soft Bigram Distance (Soft-Bidist) measure is proposed. It is an extension of BI-DIST by softening the scale of comparison among the name Bigrams for improving the name matching. Different datasets are used to demonstrate the efficiency of the proposed method. Results: The results show that Soft-Bidist outperforms the compared algorithms using different name matching datasets.
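As background for the bigram-based measures discussed above, the following sketch computes a plain Dice similarity over padded character bigrams. The softened comparison scale that Soft-Bidist adds is not reproduced, and the padding character is an arbitrary choice.

```python
def bigrams(name: str) -> list[str]:
    s = f"#{name.lower()}#"                      # pad so first/last letters also form bigrams
    return [s[i:i + 2] for i in range(len(s) - 1)]

def bigram_dice(a: str, b: str) -> float:
    """Dice coefficient over character bigrams."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    remaining, shared = list(bb), 0
    for g in ba:                                 # count shared bigrams with multiplicity
        if g in remaining:
            remaining.remove(g)
            shared += 1
    return 2 * shared / (len(ba) + len(bb))

print(bigram_dice("Mohammed", "Mohamed"))        # ~0.94
print(bigram_dice("Ahmed", "Ahmad"))
```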
3

Wu, Wei, Yong Xian, Juan Su, Daqiao Zhang, Shaopeng Li, and Bing Li. "Visible light infrared matching algorithm based on improved SuperPoint and linear converter." Journal of Applied Artificial Intelligence 1, no. 2 (2024): 122–33. http://dx.doi.org/10.59782/aai.v1i2.295.

Abstract:
In order to solve the problems of high difficulty and mismatch rate in heterogeneous image matching of visible light and infrared images, a deep learning matching algorithm based on improved SuperPoint and linear transformer is proposed. The algorithm first introduces the idea of feature pyramid to construct a feature description branch based on the SuperPoint network structure, and trains it based on the hinge loss function, so as to better learn the multi-scale deep features of visible light and infrared images and increase the similarity of image point pair descriptors with the same name; in the feature matching module, the linear transformer is used to improve the SuperGlue matching algorithm and aggregate features to improve matching performance. The proposed algorithm is experimentally verified on multiple data sets. The results show that compared with the existing algorithms, the algorithm achieves better matching results and improves the matching accuracy.
4

Asarina, Alya, and Olga Simek. "Using Crowdsourcing to Generate an Evaluation Dataset for Name Matching Technologies." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 1 (November 3, 2013): 6–7. http://dx.doi.org/10.1609/hcomp.v1i1.13122.

Abstract:
Crowdsourcing can be a fast, flexible and cost-effective approach to obtaining data for training and evaluating machine learning algorithms. In this paper, we discuss a novel crowdsourcing application: creating a dataset for evaluating name matchers. Name matching is the challenging and subjective task of identifying which names refer to the same person; it is crucial for effective entity disambiguation and search. We have developed an effective question interface and work quality analysis algorithm for our task, which can be applied to other ranking tasks (e.g. search result ranking, recommendation system evaluation, etc.). We have demonstrated that our crowdsourced dataset can successfully be used to evaluate automatic name-matching algorithms.
5

Wang, Shuai, Qingsheng Guo, Xinglin Xu, and Yuwu Xie. "A Study on a Matching Algorithm for Urban Underground Pipelines." ISPRS International Journal of Geo-Information 8, no. 8 (2019): 352. http://dx.doi.org/10.3390/ijgi8080352.

Abstract:
Urban underground pipelines are known as “urban blood vessels”. To detect changes in integrated pipelines and professional pipelines, the matching of same-name spatial objects is critical. Existing algorithms used for vector network matching were analyzed to develop an improved matching algorithm that can adapt to underground pipeline networks. Our algorithm improves the holistic matching of pipeline strokes, and also a partial matching algorithm is provided. In this study, appropriate geometric measures were selected to calculate the geometric similarity between pipeline strokes in their holistic matching. Existing methods for evaluating similarities in spatial scene structures in partial underground pipeline networks were improved. A method of partial matching of strokes was additionally investigated, and it compensates for the deficiencies of holistic stroke matching. Experiments showed that the matching performance was good, and the operation efficiency was high.
6

Yiu, Cheuk Hei Josh. "Research on Matching and Vertex Cover Problems in Bipartite Graphs using Simplex Method." Highlights in Science, Engineering and Technology 38 (March 16, 2023): 82–89. http://dx.doi.org/10.54097/hset.v38i.5737.

Abstract:
This paper considers a bipartite graph where a perfect matching does not necessarily exist. Linear programming is used in this paper, as a special case of the linear program is the assignment problem, which is another name for the weighted maximum matching problem. The objective is to show that linear programming, in particular the simplex algorithm, can be used to calculate maximum weight matchings and minimum weighted vertex covers. This study also calculates and shows the equivalence of the maximum matching and minimum vertex cover cardinalities, and uses linear programming duality to present its relevance to König's theorem. Finally, Hall's marriage theorem is used to explicitly prove the absence of a perfect matching in particular graphs. There exist many algorithms developed over the years that are designed to solve maximum matching problems in polynomial time, but the simplex method is one of the oldest and simplest algorithms to understand in solving these problems.
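A minimal sketch of the linear-programming view of weighted bipartite matching, using SciPy's linprog (HiGHS) as the LP solver. The weight matrix is made up, and the integrality of the result rests on the well-known integrality of the bipartite matching polytope, not on any detail taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up benefit matrix: W[i, j] = weight of matching left node i to right node j.
W = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])
m, n = W.shape

# Variables x[i, j] flattened row-major; linprog minimises, so negate the weights.
c = -W.ravel()

# Each left node and each right node may be matched at most once.
A_ub = np.zeros((m + n, m * n))
for i in range(m):
    A_ub[i, i * n:(i + 1) * n] = 1.0          # sum_j x[i, j] <= 1
for j in range(n):
    A_ub[m + j, j::n] = 1.0                   # sum_i x[i, j] <= 1
b_ub = np.ones(m + n)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1), method="highs")

# The bipartite matching polytope is integral, so a vertex optimum is a matching.
x = res.x.reshape(m, n).round().astype(int)
print("assignment:\n", x)
print("total weight:", float((W * x).sum()))   # expected optimum here: 11.0
```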
7

Yu, Shu Chun, Xiao Yang Yu, Jian Ying Fan, and Hai Bin Wu. "Aligning Genomic Sequence Applied in Stereo Matching." Applied Mechanics and Materials 121-126 (October 2011): 4357–61. http://dx.doi.org/10.4028/www.scientific.net/amm.121-126.4357.

Abstract:
A stereo matching algorithm based on aligning genomic sequences is proposed in this paper. The method is divided into three steps: genomic sequences are built on the same-name epipolar lines of the stereo pair; a branch matrix is obtained from the established genomic sequences; and control-point technology and a dynamic backtracking method are used to obtain the disparity. The experimental results show that the stereo matching method based on genomic sequences has fast speed and good matching quality.
8

Kirubakaran, Anusuya, and M. Aramudhan. "A watchdog approach - name-matching algorithm for big data risk intelligence." International Journal of Data Analysis Techniques and Strategies 10, no. 3 (2018): 273. http://dx.doi.org/10.1504/ijdats.2018.094128.

9

Aramudhan, M., and Anusuya Kirubakaran. "A watchdog approach - name-matching algorithm for big data risk intelligence." International Journal of Data Analysis Techniques and Strategies 10, no. 3 (2018): 273. http://dx.doi.org/10.1504/ijdats.2018.10015173.

10

Chen, Yue, Kaiyu Feng, Gao Cong, and Han Mao Kiah. "Example-based spatial pattern matching." Proceedings of the VLDB Endowment 15, no. 11 (2022): 2572–84. http://dx.doi.org/10.14778/3551793.3551815.

Abstract:
The prevalence of GPS-enabled mobile devices and location-based services yield massive volume of spatial objects where each object contains information including geographical location, name, address, category and other attributes. This paper introduces a novel type of query termed example-based spatial pattern matching (EPM) query. It takes as input a set of spatial objects, each of which is associated with one or more keywords and a location. These objects serve as an example that depicts the spatial pattern that users want to retrieve. The EPM query returns all sets of objects that match the spatial pattern. The EPM query can be used for applications like urban planning, scene recognition and similar region search. We propose an efficient algorithm and three pruning techniques to answer EPM queries. Furthermore, we provide an approximation guarantee for intermediate results of the algorithm. Our experimental evaluations on four real-world datasets demonstrate the effectiveness and efficiency of our proposed algorithm and techniques.

Dissertations / Theses on the topic "Name matching algorithm"

1

Momeninasab, Leila. "Design and Implementation of a Name Matching Algorithm for Persian Language." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-102210.

Abstract:
Name matching plays a vital and crucial role in many applications. They are for example used in information retrieval or deduplication systems to do comparisons among names to match them together or to find the names that refer to identical objects, persons, or companies. Since names in each application are subject to variations and errors that are unavoidable in any system and because of the importance of name matching, so far many algorithms have been developed to handle matching of names. These algorithms consider the name variations that may happen because of spelling, pattern or phonetic modifications. However most existing methods were developed for use with the English language and so cover the characteristics of this language. Up to now no specific one has been designed and implemented for the Persian language. The purpose of this thesis is to present a name matching algorithm for Persian. In this project, after consideration of all major algorithms in this area, we selected one of the basic methods for name matching that we then expanded to make it work particularly well for Persian names. This proposed algorithm, called Persian Edit Distance Algorithm or shortly PEDA, was built based on the characteristics of the Persian language and it compares Persian names with each other on three levels: phonetic similarity, character form similarity and keyboard distance, in order to give more accurate results for Persian names. The algorithm gets Persian names as its input and determines their similarity as a percentage in the output. In this thesis three series of experiments have been accomplished in order to evaluate the proposed algorithm. The f-measure average shows a value of 0.86 for the first series and a value of 0.80 for the second series results. The first series of experiments have been repeated with Levenshtein as well, and have 33.9% false negatives on average while PEDA has a false negative average of 6.4%. The third series of experiments shows that PEDA works well for one edit, two edits and three edits with true positive average values of 99%, 81%, and 69% respectively.
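The following Python sketch illustrates the general idea of an edit distance with non-uniform substitution costs, of the kind PEDA builds on. The cost table is a made-up Latin-alphabet example, not the thesis's Persian phonetic, character-form, or keyboard-distance tables.

```python
# Made-up similarity table standing in for phonetic/keyboard closeness.
CHEAP_SUBSTITUTIONS = {("c", "k"): 0.3, ("i", "y"): 0.3, ("s", "z"): 0.4}

def sub_cost(x: str, y: str) -> float:
    if x == y:
        return 0.0
    return CHEAP_SUBSTITUTIONS.get(tuple(sorted((x, y))), 1.0)

def weighted_edit_distance(a: str, b: str) -> float:
    a, b = a.lower(), b.lower()
    d = [[0.0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        d[i][0] = i
    for j in range(1, len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                                  # deletion
                          d[i][j - 1] + 1,                                  # insertion
                          d[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]))   # substitution
    return d[len(a)][len(b)]

def similarity(a: str, b: str) -> float:
    return 1 - weighted_edit_distance(a, b) / max(len(a), len(b), 1)

print(similarity("Catrin", "Katryn"))   # 0.90 with the toy cost table
```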
2

Tang, Ling-Xiang. "Link discovery for Chinese/English cross-language web information retrieval." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/58416/1/Ling-Xiang_Tang_Thesis.pdf.

Abstract:
Nowadays people heavily rely on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages. It is often considered to be a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, the pages in different languages are rarely cross-linked except for direct equivalent pages on the same subject in different languages. This could pose serious difficulties to users seeking information or knowledge from different lingual sources, or where there is no equivalent page in one language or another. In this thesis, a new information retrieval task—cross-lingual link discovery (CLLD) is proposed to tackle the problem of the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross language link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in an alternative language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery in different language domains. This study is specifically focused on Chinese / English link discovery (C/ELD). Chinese / English link discovery is a special case of cross-lingual link discovery task. It involves tasks including natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To justify the effectiveness of CLLD, a standard evaluation framework is also proposed. The evaluation framework includes topics, document collections, a gold standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation. With the evaluation framework, performance of CLLD approaches and systems can be quantified. This thesis contributes to the research on natural language processing and cross-lingual information retrieval in CLLD: 1) a new simple, but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism of name entity translation is demonstrated for achieving a high precision of English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia. This approach was examined in the experiments for better, automatic generation of cross-lingual links that were carried out as part of the study. The overall major contribution of this thesis is the provision of a standard evaluation framework for cross-lingual link discovery research. It is important in CLLD evaluation to have this framework which helps in benchmarking the performance of various CLLD systems and in identifying good CLLD realisation approaches. The evaluation methods and the evaluation framework described in this thesis have been utilised to quantify the system performance in the NTCIR-9 Crosslink task which is the first information retrieval track of this kind.
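As a rough illustration of the n-gram mutual information idea mentioned above, the sketch below scores adjacent character pairs by pointwise mutual information and places word boundaries where the score is low. The corpus, threshold, and exact formulation are illustrative assumptions, not the thesis's method.

```python
import math
from collections import Counter

def train_pmi(corpus: str):
    """Pointwise mutual information for adjacent character pairs."""
    chars = Counter(corpus)
    pairs = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
    n_chars, n_pairs = sum(chars.values()), sum(pairs.values())

    def pmi(x: str, y: str) -> float:
        if pairs[x + y] == 0 or chars[x] == 0 or chars[y] == 0:
            return float("-inf")                    # unseen pair: assume a boundary
        p_xy = pairs[x + y] / n_pairs
        return math.log(p_xy / ((chars[x] / n_chars) * (chars[y] / n_chars)))

    return pmi

def segment(text: str, pmi, threshold: float) -> list[str]:
    """Cut wherever adjacent characters attract each other only weakly."""
    words, current = [], text[0]
    for x, y in zip(text, text[1:]):
        if pmi(x, y) < threshold:
            words.append(current)
            current = y
        else:
            current += y
    words.append(current)
    return words

# Tiny toy corpus; a real system would train on a large unsegmented corpus.
corpus = "信息检索信息检索跨语言检索语言信息"
pmi = train_pmi(corpus)
print(segment("跨语言信息检索", pmi, threshold=1.2))   # ['跨语言', '信息检索'] on this toy data
```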

Book chapters on the topic "Name matching algorithm"

1

Valarakos, Alexandros G., Georgios Paliouras, Vangelis Karkaletsis, and George Vouros. "A Name-Matching Algorithm for Supporting Ontology Enrichment." In Methods and Applications of Artificial Intelligence. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24674-9_40.

2

Grannis, Shaun J., J. Marc Overhage, and Clement McDonald. "Real World Performance of Approximate String Comparators for use in Patient Matching." In Studies in Health Technology and Informatics. IOS Press, 2004. https://doi.org/10.3233/978-1-60750-949-3-43.

Abstract:
Medical record linkage is becoming increasingly important as clinical data is distributed across independent sources. To improve linkage accuracy we studied different name comparison methods that establish agreement or disagreement between corresponding names. In addition to exact raw name matching and exact phonetic name matching, we tested three approximate string comparators. The approximate comparators included the modified Jaro-Winkler method, the longest common substring, and the Levenshtein edit distance. We also calculated the combined root-mean square of all three. We tested each name comparison method using a deterministic record linkage algorithm. Results were consistent across both hospitals. At a threshold comparator score of 0.8, the Jaro-Winkler comparator achieved the highest linkage sensitivities of 97.4% and 97.7%. The combined root-mean square method achieved sensitivities higher than the Levenshtein edit distance or longest common substring while sustaining high linkage specificity. Approximate string comparators increase deterministic linkage sensitivity by up to 10% compared to exact match comparisons and represent an accurate method of linking to vital statistics data.
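A minimal sketch of combining several string comparators by a root-mean-square score, in the spirit of the chapter. Here difflib's ratio stands in for the Jaro-Winkler comparator (a library such as jellyfish would provide it directly), and the normalisations are illustrative choices.

```python
import math
from difflib import SequenceMatcher

def levenshtein_sim(a: str, b: str) -> float:
    """1 minus the normalised edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return 1 - prev[-1] / max(len(a), len(b), 1)

def lcs_substring_sim(a: str, b: str) -> float:
    """Longest common substring length over the longer name's length."""
    best = 0
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
                best = max(best, table[i][j])
    return best / max(len(a), len(b), 1)

def combined_rms(a: str, b: str) -> float:
    a, b = a.lower(), b.lower()
    scores = [
        levenshtein_sim(a, b),
        lcs_substring_sim(a, b),
        SequenceMatcher(None, a, b).ratio(),   # stand-in for Jaro-Winkler
    ]
    return math.sqrt(sum(s * s for s in scores) / len(scores))

print(combined_rms("Johnathan", "Jonathan"))
print(combined_rms("Smith", "Smyth"))
```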
3

Schuster, Martin, Lukas Tittmann, and Andreas Wolf. "Predicting Matching Quality of Record Linkage Algorithms on Growing Data Sets." In Studies in Health Technology and Informatics. IOS Press, 2018. https://doi.org/10.3233/978-1-61499-896-9-70.

Abstract:
A record linkage algorithm tries to identify records which belong to the same individual. We analyze the matching behavior of an approach used in the E-PIX matching tool on the very limited attribute set of first name, last name, date of birth and sex. Our benchmark set contains almost 37,000 records from the Popgen biobank. We develop a model which allows us to predict the workload on clerical review for data sets growing up to a factor of 10 or even more, without the need for a data set of this size. Based on this model we show two parameter sets with comparable detection rate of true duplicates, but where only one of them scales well on growing data sets. Our model provides realistic example records for each predicted matching of an upscaled data set. Thus, it enables to identify the parameters which need to be adjusted in order to improve the quality of the matching candidates. We also show that unreviewed merging of records is prone to homonym errors on data sets with 200,000 records and the limited attribute set above, while the merged record pairs are obviously different in clerical review.
4

Das, Tapan Kumar. "Logo Matching and Recognition Based on Context." In Feature Dimension Reduction for Content-Based Image Identification. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5775-3.ch009.

Abstract:
Logos are graphic productions that recall some real-world objects, emphasize a name, or simply display some abstract signs that have strong perceptual appeal. Color may have some relevance to assess the logo identity. Different logos may have a similar layout with slightly different spatial disposition of the graphic elements, localized differences in the orientation, size and shape, or differ by the presence/absence of one or few traits. In this chapter, the author uses an ensemble-based framework to choose the best combination of preprocessing methods and candidate extractors. The proposed system has reference logos and test logos which are verified depending on some features like regions, pre-processing, and key points. These features are extracted from the gray-scale image by the scale-invariant feature transform (SIFT) and Affine-SIFT (ASIFT) descriptor methods. The pre-processing phase employs four different filters. Key-point extraction is carried out by the SIFT and ASIFT algorithms. Key points are matched to recognize fake logos.
5

Pates, Robert D., Kenneth W. Scully, Jonathan S. Einbinder, et al. "Adding Value to Clinical Data By Linkage to a Public Death Registry." In Studies in Health Technology and Informatics. IOS Press, 2001. https://doi.org/10.3233/978-1-60750-928-8-1384.

Abstract:
We describe the methodology and impact of merging detailed statewide mortality data into the master patient index tables of the clinical data repository (CDR) of the University of Virginia Health System (UVAHS). We employ three broadly inclusive linkage passes (designed to result in large numbers of false positives) to match the patients in the CDR to those in the statewide files using the following criteria: a) Social Security Number; b) Patient Last Name and Birth Date; c) Patient Last Name and Patient First Name. The results from these initial matches are refined by calculation and assignment of a total score comprised of partial scores depending on the quality of matching between the various identifiers. In order to validate our scoring algorithm, we used those patients known to have died at UVAHS over the eight year period as an internal control. We conclude that we are able to update our CDR with 97% of the deaths from the state source using this scheme. We illustrate the potential of the resulting system to assist caregivers in identification of at-risk patient groups by description of those patients in the CDR who were found to have committed suicide. We suggest that our approach represents an efficient and inexpensive way to enrich hospital data with important outcomes information.
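The sketch below mimics the chapter's strategy of several deliberately broad linkage passes followed by scoring. The Record fields, pass keys, and weights are hypothetical illustrations, not the authors' actual schema or scoring scheme.

```python
from dataclasses import dataclass

@dataclass
class Record:                      # hypothetical fields, not the chapter's schema
    ssn: str | None
    first: str
    last: str
    birth_date: str                # "YYYY-MM-DD"

# Three deliberately broad passes: SSN; last name + birth date; last + first name.
PASSES = [
    lambda r: r.ssn,
    lambda r: (r.last.lower(), r.birth_date),
    lambda r: (r.last.lower(), r.first.lower()),
]

def candidate_pairs(cdr: list[Record], registry: list[Record]):
    """Over-generate candidate links; a later scoring step refines them."""
    for key in PASSES:
        index: dict = {}
        for r in registry:
            k = key(r)
            if k:
                index.setdefault(k, []).append(r)
        for r in cdr:
            for match in index.get(key(r), []):
                yield r, match

def score(a: Record, b: Record) -> float:
    """Toy total of partial scores; the chapter's weights are not reproduced."""
    return (2.0 * (a.ssn is not None and a.ssn == b.ssn)
            + 1.0 * (a.last.lower() == b.last.lower())
            + 1.0 * (a.first.lower() == b.first.lower())
            + 1.0 * (a.birth_date == b.birth_date))

cdr = [Record("123-45-6789", "Ann", "Lee", "1950-02-01")]
registry = [Record(None, "Anne", "Lee", "1950-02-01")]
best = max(((a, b, score(a, b)) for a, b in candidate_pairs(cdr, registry)),
           key=lambda t: t[2], default=None)
print(best)
```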
6

"Appendix 2: Name-Matching Algorithms." In The Small Worlds of Corporate Governance. The MIT Press, 2012. http://dx.doi.org/10.7551/mitpress/9115.003.0014.

7

Kasiiti, Noah, Judy Wawira, Saptarshi Purkayastha, and Martin C. Were. "Comparative Performance Analysis of Different Fingerprint Biometric Scanners for Patient Matching." In Studies in Health Technology and Informatics. IOS Press, 2017. https://doi.org/10.3233/978-1-61499-830-3-1053.

Abstract:
Unique patient identification within health services is an operational challenge in healthcare settings. Use of key identifiers, such as patient names, hospital identification numbers, national ID, and birth date are often inadequate for ensuring unique patient identification. In addition approximate string comparator algorithms, such as distance-based algorithms, have proven suboptimal for improving patient matching, especially in low-resource settings. Biometric approaches may improve unique patient identification. However, before implementing the technology in a given setting, such as health care, the right scanners should be rigorously tested to identify an optimal package for the implementation. This study aimed to investigate the effects of factors such as resolution, template size, and scan capture area on the matching performance of different fingerprint scanners for use within health care settings. Performance analysis of eight different scanners was tested using the demo application distributed as part of the Neurotech Verifinger SDK 6.0.
8

Kim, Kyungmo, and Jinwook Choi. "Developing Methodologies to Find Abbreviated Laboratory Test Names in Narrative Clinical Documents by Generating High Quality Q-Grams." In Studies in Health Technology and Informatics. IOS Press, 2017. https://doi.org/10.3233/978-1-61499-830-3-452.

Abstract:
Laboratory test names are used as basic information to diagnose diseases. However, this kind of medical information is usually written in a natural language. To find this information, lexicon based methods have been good solutions but they cannot find terms that do not have abbreviated expressions, such as “neuts” that means “neutrophils”. To address this issue, similar word matching can be used; however, it can be disadvantageous because of significant false positives. Moreover, processing time is longer as the size of terms is bigger. Therefore, we suggest a novel q-gram based algorithm, named modified triangular area filtering, to find abbreviated laboratory test terms in clinical documents, minimizing the possibility to impair the lexicons' precision. In addition, we found the terms using the methodology with reasonable processing time. The results show that this method can achieve 92.54 precision, 87.72 recall, 90.06 f1-score in test sets when edit distance threshold(τ) = 3.
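To illustrate q-gram-based candidate filtering in general, here is a small Python sketch that uses a set-based count filter before an edit-distance verification step. The modified triangular area filtering proposed in the chapter is not reproduced, and the lexicon and threshold are made-up examples.

```python
def q_grams(term: str, q: int = 2) -> set[str]:
    padded = "#" * (q - 1) + term.lower() + "#" * (q - 1)
    return {padded[i:i + q] for i in range(len(padded) - q + 1)}

def qgram_filter(query: str, lexicon: list[str], tau: int = 3, q: int = 2):
    """Cheap count filter: keep terms that share enough q-grams to possibly be
    within edit distance tau; survivors still need edit-distance verification."""
    qg = q_grams(query, q)
    for term in lexicon:
        # Each edit destroys at most q padded q-grams (set-based approximation).
        needed = max(max(len(query), len(term)) + q - 1 - q * tau, 1)
        if len(qg & q_grams(term, q)) >= needed:
            yield term

lexicon = ["neutrophils", "neuts", "lymphocytes", "hemoglobin"]
print(list(qgram_filter("neut", lexicon, tau=3)))   # ['neuts']
```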
9

Liu, Chenglong, Jintao Liu, Haorao Wei, et al. "Detecting Objects as Cascade Corners." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240536.

Abstract:
The corner-based detection paradigm enjoys the potential to produce high-quality boxes. But the development is constrained by three factors: 1) Hard to match corners. Heuristic corner matching algorithms can lead to incorrect boxes, especially when similar-looking objects co-occur. 2) Poor instance context. Two separate corners preserve few instance semantics, so it is difficult to guarantee getting both two class-specific corners on the same heatmap channel. 3) Unfriendly backbone. The training cost of the hourglass network is high. Accordingly, we build a novel corner-based framework, named Corner2Net. To achieve the corner-matching-free manner, we devise the cascade corner pipeline which progressively predicts the associated corner pair in two steps instead of synchronously searching two independent corners via parallel heads. Corner2Net decouples corner localization and object classification. Both two corners are class-agnostic and the instance-specific bottom-right corner further simplifies its search space. Meanwhile, RoI features with rich semantics are extracted for classification. Popular backbones (e.g., ResNeXt) can be easily connected to Corner2Net. Experimental results on COCO show Corner2Net surpasses all existing corner-based detectors by a large margin in accuracy and speed.

Conference papers on the topic "Name matching algorithm"

1

Top, Philip, Farid Dowla, and Jim Gansemer. "A Dynamic Programming Algorithm for Name Matching." In 2007 IEEE Symposium on Computational Intelligence and Data Mining. IEEE, 2007. http://dx.doi.org/10.1109/cidm.2007.368923.

2

Snae, Chakkrit, and Michael Brueckner. "Novel Phonetic Name Matching Algorithm with a Statistical Ontology for Analysing Names Given in Accordance with Thai Astrology." In InSITE 2009: Informing Science + IT Education Conference. Informing Science Institute, 2009. http://dx.doi.org/10.28945/3347.

Abstract:
Since antiquity names have been very important to people. Naming from the past to the present has been continuously developed and has evolved into a variety of patterns. Each pattern has its own rules depending on local belief and language that has been developed until the present. In many cultures naming is important not only because every individual needs to have a name, but also because people want helpful names or names with a good sound. The basic goal of naming in Thai society is to provide a good fortune and progress of living. Most Thai parents try to choose names they feel will bring good luck to their offspring and to the family. The choice of appropriate names is based on old rules of Thai astrology that take the weekday of birth as input, and the letters available for a name are believed to influence the destiny of the individual, as described in Thai astrology. Thais can change their own given names as often as they want in order to achieve a good fortune. The current web-based systems for Thai names are static web pages and cannot deal with the problem of helping change a name to a good name with similar sound.
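For readers unfamiliar with phonetic keying, the sketch below implements the classic English-oriented Soundex code as a generic illustration. It is not the Thai-specific phonetic matching algorithm or statistical ontology described in the paper.

```python
SOUNDEX_CODES = {
    **dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
    **dict.fromkeys("dt", "3"), "l": "4", **dict.fromkeys("mn", "5"), "r": "6",
}

def soundex(name: str) -> str:
    """Classic four-character Soundex key (English-oriented)."""
    name = "".join(ch for ch in name.lower() if ch.isalpha())
    if not name:
        return "0000"
    first, rest = name[0], name[1:]
    codes = [SOUNDEX_CODES.get(first, "")]
    for ch in rest:
        code = SOUNDEX_CODES.get(ch, "")       # vowels, h, w, y have no code
        if code and code != codes[-1]:
            codes.append(code)
        elif not code and ch not in "hw":
            codes.append("")                   # vowels break runs of equal codes
    key = first.upper() + "".join(c for c in codes[1:] if c)
    return (key + "000")[:4]

print(soundex("Robert"), soundex("Rupert"))    # both R163
print(soundex("Smith"), soundex("Smyth"))      # both S530
```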
3

Abdel Ghafour, Hesham H., Ali El-Bastawissy, and Abdel Fattah A. Heggazy. "AEDA: Arabic edit distance algorithm: Towards a new approach for Arabic name matching." In 2011 International Conference on Computer Engineering & Systems (ICCES). IEEE, 2011. http://dx.doi.org/10.1109/icces.2011.6141061.

4

Branting, L. Karl. "A comparative evaluation of name-matching algorithms." In Proceedings of the 9th International Conference on Artificial Intelligence and Law (ICAIL '03). ACM Press, 2003. http://dx.doi.org/10.1145/1047788.1047837.

5

Shaikh, Muniba, Nasrullah Memon, and Uffe Kock Wiil. "Extended approximate string matching algorithms to detect name aliases." In 2011 IEEE International Conference on Intelligence and Security Informatics (ISI 2011). IEEE, 2011. http://dx.doi.org/10.1109/isi.2011.5984085.

6

Cheng, Gang, Fei Wang, Haiyang Lv, and Yinling Zhang. "A new matching algorithm for Chinese place names." In 2011 19th International Conference on Geoinformatics. IEEE, 2011. http://dx.doi.org/10.1109/geoinformatics.2011.5980801.

7

Al-Hagree, Salah, Maher Al-Sanabani, Khaled M. A. Alalayah, and Mohammed Hadwan. "Designing an Accurate and Efficient Algorithm for Matching Arabic Names." In 2019 First International Conference of Intelligent Computing and Engineering (ICOICE). IEEE, 2019. http://dx.doi.org/10.1109/icoice48418.2019.9035184.

8

Al-Hagree, Salah, Sarah Abdulmalik, Muneer Alsurori, and Maher Al-Sanabani. "An Enhanced Algorithm for Matching Arabic Names Entered by Mobile Phones." In 2019 First International Conference of Intelligent Computing and Engineering (ICOICE). IEEE, 2019. http://dx.doi.org/10.1109/icoice48418.2019.9035148.

9

Liu, Weiquan, Xuelun Shen, Cheng Wang, Zhihong Zhang, Chenglu Wen, and Jonathan Li. "H-Net: Neural Network for Cross-domain Image Patch Matching." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/119.

Abstract:
Describing the same scene with different imaging style or rendering image from its 3D model gives us different domain images. Different domain images tend to have a gap and different local appearances, which raise the main challenge on the cross-domain image patch matching. In this paper, we propose to incorporate AutoEncoder into the Siamese network, named as H-Net, of which the structural shape resembles the letter H. The H-Net achieves state-of-the-art performance on the cross-domain image patch matching. Furthermore, we improved H-Net to H-Net++. The H-Net++ extracts invariant feature descriptors in cross-domain image patches and achieves state-of-the-art performance by feature retrieval in Euclidean space. As there is no benchmark dataset including cross-domain images, we made a cross-domain image dataset which consists of camera images, rendering images from UAV 3D model, and images generated by CycleGAN algorithm. Experiments show that the proposed H-Net and H-Net++ outperform the existing algorithms. Our code and cross-domain image dataset are available at https://github.com/Xylon-Sean/H-Net.
10

Koneru, Keerthi, Venkata Sai Venkatesh Pulla, and Cihan Varol. "Performance Evaluation of Phonetic Matching Algorithms on English Words and Street Names - Comparison and Correlation." In 5th International Conference on Data Management Technologies and Applications. SCITEPRESS - Science and Technology Publications, 2016. http://dx.doi.org/10.5220/0005926300570064.

