
Journal articles on the topic 'Neighbor selection'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Neighbor selection.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Bingming, Shi Ying, and Zhe Yang. "A Log-Based Anomaly Detection Method with Efficient Neighbor Searching and Automatic K Neighbor Selection." Scientific Programming 2020 (June 2, 2020): 1–17. http://dx.doi.org/10.1155/2020/4365356.

Abstract:
Using the k-nearest neighbor (kNN) algorithm for supervised anomaly detection can yield more accurate results. However, finding k neighbors in large-scale log data is inefficient, and because log data are imbalanced in quantity, selecting a proper k for different data distributions is a challenge. In this paper, we propose a log-based anomaly detection method with efficient neighbor search and automatic selection of k. First, we propose a neighbor search method based on minhash and an MVP-tree: the minhash algorithm groups similar logs into the same bucket, and an MVP-tree is built over the samples in each bucket. This reduces both the cost of distance calculation and the number of neighbor samples that must be compared, improving the efficiency of neighbor search. For selecting k, we propose an automatic method based on the silhouette coefficient, which chooses a proper k to improve the accuracy of anomaly detection. Our method is verified on six different types of log data to demonstrate its generality and feasibility.
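The minhash bucketing step described in this abstract can be sketched as follows. This is an illustrative toy version, not the authors' implementation: the whitespace tokenization and the number of hash functions are assumptions, and the per-bucket MVP-tree is omitted.

```python
import hashlib

def minhash_signature(tokens, num_hashes=4):
    # One value per seeded hash function: the minimum hash over all tokens.
    # Similar token sets are likely to share minima, and hence signatures.
    return tuple(
        min(int(hashlib.sha1(f"{seed}:{t}".encode()).hexdigest(), 16)
            for t in tokens)
        for seed in range(num_hashes)
    )

def bucket_logs(logs, num_hashes=4):
    # Logs whose signatures collide land in the same bucket; neighbor search
    # then only compares samples within a bucket (the paper additionally
    # builds an MVP-tree per bucket to prune distance computations further).
    buckets = {}
    for log in logs:
        key = minhash_signature(log.split(), num_hashes)
        buckets.setdefault(key, []).append(log)
    return buckets
```

Identical logs always share a bucket; near-duplicates collide with a probability that grows with their token overlap.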
2

Zhai, Junhai, Jiaxing Qi, and Sufang Zhang. "An instance selection algorithm for fuzzy K-nearest neighbor." Journal of Intelligent & Fuzzy Systems 40, no. 1 (January 4, 2021): 521–33. http://dx.doi.org/10.3233/jifs-200124.

Abstract:
The condensed nearest neighbor (CNN) is a pioneering instance selection algorithm for 1-nearest neighbor. Many variants of CNN for K-nearest neighbor have been proposed by different researchers. However, few studies have addressed condensed fuzzy K-nearest neighbor. In this paper, we present a condensed fuzzy K-nearest neighbor (CFKNN) algorithm that starts from an initial instance set S and iteratively selects informative instances from the training set T, moving them from T to S. Specifically, CFKNN consists of three steps. First, for each instance x ∈ T, it finds the K nearest neighbors in S and calculates their fuzzy membership degrees using S rather than T. Second, it computes the fuzzy membership degrees of x using the fuzzy K-nearest neighbor algorithm. Finally, it calculates the information entropy of x and selects an instance according to the calculated value. Extensive experiments on 11 datasets compare CFKNN with four state-of-the-art algorithms (CNN, edited nearest neighbor (ENN), Tomek links, and OneSidedSelection) regarding the number of selected instances, testing accuracy, and compression ratio. The experimental results show that CFKNN provides excellent performance and outperforms the other four algorithms.
3

Liu, Huawen, Xindong Wu, and Shichao Zhang. "Neighbor selection for multilabel classification." Neurocomputing 182 (March 2016): 187–96. http://dx.doi.org/10.1016/j.neucom.2015.12.035.

4

Sironen, S., A. Kangas, M. Maltamo, and J. Kangas. "Estimating individual tree growth with nonparametric methods." Canadian Journal of Forest Research 33, no. 3 (March 1, 2003): 444–49. http://dx.doi.org/10.1139/x02-162.

Abstract:
The aim of the study was to demonstrate the use of nonparametric methods in estimating tree-level growth models. In the nonparametric methods the growth of a tree is predicted as a weighted mean of the values of neighboring observations. The selection of the nearest neighbors is based on the similarities between tree- and stand-level characteristics of the target tree and the neighbors. The data for the models were collected from Kuusamo in northeastern Finland. Models for the 5-year diameter growth were constructed for Scots pine (Pinus sylvestris L.) with three different nonparametric methods: the k-nearest neighbor regression, k-most-similar neighbor, and generalized additive model.
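The weighted-mean prediction idea behind these nonparametric growth models can be sketched as a small k-nearest-neighbor regression. This is an illustrative sketch: the Euclidean distance and inverse-distance weights here are assumptions, not the paper's exact similarity measure over tree- and stand-level characteristics.

```python
import numpy as np

def knn_growth_estimate(x_query, X, y, k=3):
    # Predict growth as an inverse-distance-weighted mean of the k most
    # similar observations (rows of X: tree- and stand-level characteristics,
    # y: observed 5-year diameter growth).
    d = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)  # closer neighbors get larger weights
    return float(np.sum(w * y[idx]) / np.sum(w))
```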
5

Pfahlberg, A., O. Gefeller, and R. Weißbach. "Double-smoothing in Kernel Hazard Rate Estimation." Methods of Information in Medicine 47, no. 02 (2008): 167–73. http://dx.doi.org/10.3414/me0447.

Abstract:
Objectives: In oncological studies, the hazard rate can be used to differentiate subgroups of the study population according to their patterns of survival risk over time. Nonparametric curve estimation has been suggested as an exploratory means of revealing such patterns. The decision about the type of smoothing parameter is critical for performance in practice. In this paper, we study data-adaptive smoothing. Methods: A decade ago, the nearest-neighbor bandwidth was introduced for censored data in survival analysis. It is specified by one parameter, namely the number of nearest neighbors. Bandwidth selection in this setting has rarely been investigated, although the heuristic advantages over the frequently studied fixed bandwidth are quite obvious. The asymptotic relationship between the fixed and the nearest-neighbor bandwidth can be used to generate novel approaches. Results: We develop a new selection algorithm, termed double-smoothing, for the nearest-neighbor bandwidth in hazard rate estimation. Our approach uses a finite-sample approximation of the asymptotic relationship between the fixed and nearest-neighbor bandwidths. In doing so, we identify the nearest-neighbor bandwidth as an additional smoothing step and achieve further data adaptation after fixed-bandwidth smoothing. We illustrate the application of the new algorithm in a clinical study and compare the outcome to the traditional fixed-bandwidth result, demonstrating the practical performance of the technique. Conclusion: The double-smoothing approach enlarges the methodological repertoire for selecting smoothing parameters in nonparametric hazard rate estimation. The slight increase in computational effort is rewarded with a substantial gain in estimation stability, demonstrating the benefit of the technique for biostatistical applications.
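The nearest-neighbor bandwidth referred to in this abstract is specified by a single parameter k: at each evaluation point the bandwidth equals the distance to the k-th nearest observation, so smoothing automatically widens where events are sparse. A minimal sketch of that idea (not the double-smoothing algorithm itself, and ignoring censoring):

```python
import numpy as np

def nn_bandwidth(t, event_times, k):
    # Bandwidth at t = distance to the k-th nearest observed event time;
    # the local data density, not a global constant, sets the smoothing scale.
    d = np.sort(np.abs(np.asarray(event_times, dtype=float) - t))
    return float(d[k - 1])
```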
6

Jagruthi, Y., Dr Y. Ramadevi, and A. Sangeeta. "An Instance Selection Algorithm Based On Reverse k Nearest Neighbor." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 10, no. 7 (August 30, 2013): 1858–61. http://dx.doi.org/10.24297/ijct.v10i7.3217.

Abstract:
Classification is one of the most important data mining techniques; it belongs to supervised learning. The objective of classification is to assign a class label to unlabelled data. As data grows rapidly, handling it has become a major concern, so preprocessing should be done before classification, and data reduction is essential. Data reduction extracts a subset of features from the feature set of a data set; it decreases the storage requirement and increases the efficiency of classification. One way to measure data reduction is the reduction rate. The key issue is choosing representative samples for the final data set. Many instance selection algorithms are based on the nearest neighbor decision rule (NN) and select samples using an incremental or decremental strategy. Both incremental and decremental algorithms take considerable processing time because they iteratively scan the data set. Another instance selection algorithm, reverse nearest neighbor reduction (RNNR), based on the concept of the reverse nearest neighbor (RNN), does not iteratively scan the data set. In this paper, we extend RNN to RkNN and apply the concept of RNNR to RkNN. RkNN finds the objects that have the query point among their k nearest neighbors. Our approach thus retains the advantage of RNNR while using RkNN. We extracted sample sets from data sets of theatres, hospitals, and restaurants, and performed classification on the resulting sample sets. We report two parameters: classification accuracy and reduction rate.
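The reverse k-nearest-neighbor query at the heart of this approach can be sketched by brute force. This is an illustrative sketch; practical implementations use index structures rather than scanning all points per query.

```python
import numpy as np

def reverse_knn(points, q_idx, k):
    # RkNN of the query point: indices of points that count the query
    # among their own k nearest neighbors (excluding themselves).
    P = np.asarray(points, dtype=float)
    result = []
    for i in range(len(P)):
        if i == q_idx:
            continue
        d = np.linalg.norm(P - P[i], axis=1)
        d[i] = np.inf                 # a point is not its own neighbor
        knn = np.argsort(d)[:k]
        if q_idx in knn:
            result.append(i)
    return result
```

Note the asymmetry: a point can be in the query's kNN set without the query being in its RkNN set, which is what makes RkNN-selected samples good representatives.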
7

Cano, José-Ramón, Naif R. Aljohani, Rabeeh Ayaz Abbasi, Jalal S. Alowidbi, and Salvador García. "Prototype selection to improve monotonic nearest neighbor." Engineering Applications of Artificial Intelligence 60 (April 2017): 128–35. http://dx.doi.org/10.1016/j.engappai.2017.02.006.

8

Liu, Xianglong, Junfeng He, and Shih-Fu Chang. "Hash Bit Selection for Nearest Neighbor Search." IEEE Transactions on Image Processing 26, no. 11 (November 2017): 5367–80. http://dx.doi.org/10.1109/tip.2017.2695895.

9

Zhang, Shichao. "Nearest neighbor selection for iteratively kNN imputation." Journal of Systems and Software 85, no. 11 (November 2012): 2541–52. http://dx.doi.org/10.1016/j.jss.2012.05.073.

10

Cafagna, Francesco, Michael H. Böhlen, and Annelies Bracher. "Category- and selection-enabled nearest neighbor joins." Information Systems 68 (August 2017): 3–16. http://dx.doi.org/10.1016/j.is.2017.01.006.

11

Lee, Hee-Sung, Jae-Hun Lee, and Eun-Tai Kim. "Feature Selection for Multiple K-Nearest Neighbor classifiers using GAVaPS." Journal of Korean Institute of Intelligent Systems 18, no. 6 (December 25, 2008): 871–75. http://dx.doi.org/10.5391/jkiis.2008.18.6.871.

12

Zeniarja, Junta, Anisatawalanita Ukhifahdhina, and Abu Salam. "Diagnosis Of Heart Disease Using K-Nearest Neighbor Method Based On Forward Selection." Journal of Applied Intelligent System 4, no. 2 (March 6, 2020): 39–47. http://dx.doi.org/10.33633/jais.v4i2.2749.

Abstract:
The heart is one of the essential organs in the human body, yet heart disease is a leading cause of death. World Health Organization (WHO) data from 2012 showed that 7.4 million (42.3%) of all deaths from cardiovascular disease were caused by heart disease. The rising number of cases calls for early prevention, starting with early diagnosis of heart disease. In this research, early diagnosis of heart disease is performed with a data mining classification process. The algorithm used is K-Nearest Neighbor with the forward selection method. The K-Nearest Neighbor algorithm performs the classification that yields the diagnosis, while forward selection is used as a feature selection step intended to increase accuracy: it removes attributes that are irrelevant to the classification process. In this research, the accuracy of heart disease diagnosis with the K-Nearest Neighbor algorithm is 73.44%, while the accuracy of the K-Nearest Neighbor algorithm with the feature selection method is 78.66%. Combining the K-Nearest Neighbor algorithm with the forward selection method thus improves accuracy. Keywords - K-Nearest Neighbor, Classification, Heart Disease, Forward Selection, Data Mining
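The forward-selection wrapper around kNN can be sketched as below. This is an illustrative sketch, not the paper's implementation: leave-one-out accuracy as the wrapper score and k=3 are assumptions.

```python
import numpy as np

def knn_accuracy(X, y, feats, k=3):
    # Leave-one-out accuracy of kNN restricted to the chosen feature subset.
    Xs = X[:, feats]
    correct = 0
    for i in range(len(y)):
        d = np.linalg.norm(Xs - Xs[i], axis=1)
        d[i] = np.inf                      # exclude the held-out point
        votes = y[np.argsort(d)[:k]]
        correct += np.bincount(votes).argmax() == y[i]
    return correct / len(y)

def forward_selection(X, y, k=3):
    # Greedily add the feature that most improves accuracy; stop when
    # no remaining feature helps, discarding irrelevant attributes.
    remaining, chosen, best = list(range(X.shape[1])), [], 0.0
    while remaining:
        acc, f = max((knn_accuracy(X, y, chosen + [f], k), f) for f in remaining)
        if acc <= best:
            break
        best, chosen = acc, chosen + [f]
        remaining.remove(f)
    return chosen, best
```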
13

Paik, Minhui, and Yuhong Yang. "Combining Nearest Neighbor Classifiers Versus Cross-Validation Selection." Statistical Applications in Genetics and Molecular Biology 3, no. 1 (January 9, 2004): 1–19. http://dx.doi.org/10.2202/1544-6115.1054.

Abstract:
Various discriminant methods have been applied for classification of tumors based on gene expression profiles, among which the nearest neighbor (NN) method has been reported to perform relatively well. Usually cross-validation (CV) is used to select the neighbor size as well as the number of variables for the NN method. However, CV can perform poorly when there is considerable uncertainty in choosing the best candidate classifier. As an alternative to selecting a single “winner,” we propose a weighting method to combine the multiple NN rules. Four gene expression data sets are used to compare its performance with CV methods. The results show that when the CV selection is unstable, the combined classifier performs much better.
14

Li, Peipei, Bin Lu, and Daofeng Li. "BGP Neighbor Trust Establishment Mechanism Based on the Bargaining Game." Information 12, no. 3 (March 4, 2021): 110. http://dx.doi.org/10.3390/info12030110.

Abstract:
The Border Gateway Protocol (BGP) is the standard inter-domain routing protocol on the Internet. Autonomous System (AS) traffic is forwarded by BGP neighbors. In route selection, malicious or inactive neighbors can degrade the network's performance or even cause the network to crash. Therefore, choosing trusted and safe neighbors is an essential part of BGP security research. In response to this problem, we propose a BGP Neighbor Trust Establishment Mechanism based on the Bargaining Game (BNTE-BG). By combining service quality attributes such as bandwidth, packet loss rate, jitter, delay, and price with bargaining game theory, it allows an AS to independently select trusted neighbors that satisfy its quality-of-service requirements. When the trusted neighbors are forwarding data, we draw on the gray correlation algorithm to calculate neighbors' behavioral trust and detect malicious or inactive BGP neighbors.
15

Tomasev, Nenad, Krisztian Buza, and Dunja Mladenic. "Correcting the hub occurrence prediction bias in many dimensions." Computer Science and Information Systems 13, no. 1 (2016): 1–21. http://dx.doi.org/10.2298/csis140929039t.

Abstract:
Data reduction is a common pre-processing step for k-nearest neighbor classification (kNN). The existing prototype selection methods implement different criteria for selecting relevant points to use in classification, which constitutes a selection bias. This study examines the nature of the instance selection bias in intrinsically high-dimensional data. In high-dimensional feature spaces, hubs are known to emerge as centers of influence in kNN classification. These points dominate most kNN sets and are often detrimental to classification performance. Our experiments reveal that different instance selection strategies bias the predictions of the behavior of hub-points in high-dimensional data in different ways. We propose to introduce an intermediate un-biasing step when training the neighbor occurrence models and we demonstrate promising improvements in various hubness-aware classification methods, on a wide selection of high-dimensional synthetic and real-world datasets.
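The k-occurrence counts that make hubs visible can be computed directly. A brute-force illustrative sketch:

```python
import numpy as np

def k_occurrence(X, k):
    # N_k(x): how many times each point appears in other points' k-NN lists.
    # In high-dimensional data the distribution of N_k becomes skewed, and
    # points with unusually large N_k are the "hubs" discussed above.
    n = len(X)
    counts = np.zeros(n, dtype=int)
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                  # a point is not its own neighbor
        counts[np.argsort(d)[:k]] += 1
    return counts
```

The counts always sum to n*k; hubness shows up as a heavy right tail in their distribution.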
16

Cooper, William E. "Tree selection by the broad-headed skink, Eumeces laticeps: size, holes, and cover." Amphibia-Reptilia 14, no. 3 (1993): 285–94. http://dx.doi.org/10.1163/156853893x00480.

Abstract:
Broad-headed skinks (Eumeces laticeps) are semiarboreal lizards that are strongly associated with live oak trees (Quercus virginiana). Examination of the frequencies with which lizards occupied the largest of four nearest-neighbor trees and those having holes revealed a strong preference for large trees having holes. The presence of holes large enough for entry was a more important factor than tree size per se, as indicated by consistent occupation of smaller trees having holes when the largest of the four nearest neighbors lacked holes, although a significant preference for large size was demonstrated by the significantly greater than chance occupation of the largest of four nearest neighbor trees among those having holes. Large adults occupied significantly larger trees than did smaller adults, suggesting that larger individuals aggressively exclude smaller ones from preferred trees. Pairs consisting of an adult female and the male guarding her preferred trees surrounded by dense bushes, presumably because bushes limit detection and attack by predators and possibly because they harbor prey. Broad-headed skinks thus prefer large live oaks having holes and a fringe of dense cover.
17

Kuncheva, Ludmila I., and Lakhmi C. Jain. "Nearest neighbor classifier: Simultaneous editing and feature selection." Pattern Recognition Letters 20, no. 11-13 (November 1999): 1149–56. http://dx.doi.org/10.1016/s0167-8655(99)00082-3.

18

Gong, Maoguo, Licheng Jiao, Haifeng Du, and Liefeng Bo. "Multiobjective Immune Algorithm with Nondominated Neighbor-Based Selection." Evolutionary Computation 16, no. 2 (June 2008): 225–55. http://dx.doi.org/10.1162/evco.2008.16.2.225.

Abstract:
Nondominated Neighbor Immune Algorithm (NNIA) is proposed for multiobjective optimization by using a novel nondominated neighbor-based selection technique, an immune-inspired operator, two heuristic search operators, and elitism. The unique selection technique of NNIA selects only a minority of isolated nondominated individuals in the population. The selected individuals are then cloned proportionally to their crowding-distance values before heuristic search. By using nondominated neighbor-based selection and proportional cloning, NNIA pays more attention to the less-crowded regions of the current trade-off front. We compare NNIA with NSGA-II, SPEA2, PESA-II, and MISA in solving five DTLZ problems, five ZDT problems, and three low-dimensional problems. The statistical analysis, based on three performance metrics (the coverage of two sets, the convergence metric, and spacing), shows that the unique selection method is effective and that NNIA is an effective algorithm for solving multiobjective optimization problems. The empirical study of NNIA's scalability with respect to the number of objectives shows that the new algorithm scales well with the number of objectives.
19

Ferrandiz, Sylvain, and Marc Boullé. "Bayesian instance selection for the nearest neighbor rule." Machine Learning 81, no. 3 (May 29, 2010): 229–56. http://dx.doi.org/10.1007/s10994-010-5170-2.

20

KIM, T. H. "An Improved Neighbor Selection Algorithm in Collaborative Filtering." IEICE Transactions on Information and Systems E88-D, no. 5 (May 1, 2005): 1072–76. http://dx.doi.org/10.1093/ietisy/e88-d.5.1072.

21

Gertheiss, Jan, and Gerhard Tutz. "Feature selection and weighting by nearest neighbor ensembles." Chemometrics and Intelligent Laboratory Systems 99, no. 1 (November 2009): 30–38. http://dx.doi.org/10.1016/j.chemolab.2009.07.004.

22

Zarifzadeh, Sajjad, and Nasser Yazdani. "Neighbor Selection Game in Wireless Ad Hoc Networks." Wireless Personal Communications 70, no. 2 (June 15, 2012): 617–40. http://dx.doi.org/10.1007/s11277-012-0711-6.

23

Zubeldía, Martín, Andrés Ferragut, and Fernando Paganini. "Neighbor selection for proportional fairness in P2P networks." Computer Networks 83 (June 2015): 249–64. http://dx.doi.org/10.1016/j.comnet.2015.03.016.

24

Peramunage, Dasun, Sheila E. Blumstein, Emily B. Myers, Matthew Goldrick, and Melissa Baese-Berk. "Phonological Neighborhood Effects in Spoken Word Production: An fMRI Study." Journal of Cognitive Neuroscience 23, no. 3 (March 2011): 593–603. http://dx.doi.org/10.1162/jocn.2010.21489.

Abstract:
The current study examined the neural systems underlying lexically conditioned phonetic variation in spoken word production. Participants were asked to read aloud singly presented words, which either had a voiced minimal pair (MP) neighbor (e.g., cape) or lacked a minimal pair (NMP) neighbor (e.g., cake). The voiced neighbor never appeared in the stimulus set. Behavioral results showed longer voice-onset time for MP target words, replicating earlier behavioral results [Baese-Berk, M., & Goldrick, M. Mechanisms of interaction in speech production. Language and Cognitive Processes, 24, 527–554, 2009]. fMRI results revealed reduced activation for MP words compared to NMP words in a network including left posterior superior temporal gyrus, the supramarginal gyrus, inferior frontal gyrus, and precentral gyrus. These findings support cascade models of spoken word production and show that neural activation at the lexical level modulates activation in those brain regions involved in lexical selection, phonological planning, and, ultimately, motor plans for production. The facilitatory effects for words with MP neighbors suggest that competition effects reflect the overlap inherent in the phonological representation of the target word and its MP neighbor.
25

Czuppon, Peter, and Peter Pfaffelhuber. "A spatial model for selection and cooperation." Journal of Applied Probability 54, no. 2 (June 2017): 522–39. http://dx.doi.org/10.1017/jpr.2017.15.

Abstract:
We study the evolution of cooperation in an interacting particle system with two types. The model we investigate is an extension of a two-type biased voter model. One type (called defector) has a (positive) bias α with respect to the other type (called cooperator). However, a cooperator helps a neighbor (either defector or cooperator) to reproduce at rate γ. We prove that the one-dimensional nearest-neighbor interacting dynamical system exhibits a phase transition at α = γ. For a special choice of interaction kernels, cooperators always die out if α > γ, but cooperation is the winning strategy if γ > α.
26

Mirman, Daniel, and Kristen M. Graziano. "The Neural Basis of Inhibitory Effects of Semantic and Phonological Neighbors in Spoken Word Production." Journal of Cognitive Neuroscience 25, no. 9 (September 2013): 1504–16. http://dx.doi.org/10.1162/jocn_a_00408.

Abstract:
Theories of word production and word recognition generally agree that multiple word candidates are activated during processing. The facilitative and inhibitory effects of these “lexical neighbors” have been studied extensively using behavioral methods and have spurred theoretical development in psycholinguistics, but relatively little is known about the neural basis of these effects and how lesions may affect them. This study used voxel-wise lesion overlap subtraction to examine semantic and phonological neighbor effects in spoken word production following left hemisphere stroke. Increased inhibitory effects of near semantic neighbors were associated with inferior frontal lobe lesions, suggesting impaired selection among strongly activated semantically related candidates. Increased inhibitory effects of phonological neighbors were associated with posterior superior temporal and inferior parietal lobe lesions. In combination with previous studies, these results suggest that such lesions cause phonological-to-lexical feedback to more strongly activate phonologically related lexical candidates. The comparison of semantic and phonological neighbor effects and how they are affected by left hemisphere lesions provides new insights into the cognitive dynamics and neural basis of phonological, semantic, and cognitive control processes in spoken word production.
27

Salam, Abu, Ferry Bintang Nugroho, and Junta Zeniarja. "Implementasi Algoritma K-Nearest Neighbor Berbasis Forward Selection Untuk Prediksi Mahasiswa Non Aktif Universitas Dian Nuswantoro Semarang." JOINS (Journal of Information System) 5, no. 1 (May 31, 2020): 69–76. http://dx.doi.org/10.33633/joins.v5i1.3351.

Abstract:
One problem related to student status is non-active (inactive) status. Contributing factors include economic circumstances, academic ability, and others. University management needs to identify and act on students with an "undesired" status; to uncover the factors behind the problem, an evaluation should be carried out halfway through a student's study period, so that students at risk of becoming inactive can be detected as early as possible and the impact of inactive status reduced. This research predicts inactive students using the K-Nearest Neighbor classification algorithm combined with the forward selection method for attribute selection, which is expected to increase the accuracy of the classification process. The accuracy obtained with the K-Nearest Neighbor algorithm is 96.43%, while the K-Nearest Neighbor algorithm based on Forward Selection achieves 97.27%. Keywords: Inactive Students, Forward Selection, K-Nearest Neighbor
28

Wang, Aiguo, Ning An, Guilin Chen, Lian Li, and Gil Alterovitz. "Accelerating wrapper-based feature selection with K-nearest-neighbor." Knowledge-Based Systems 83 (July 2015): 81–91. http://dx.doi.org/10.1016/j.knosys.2015.03.009.

29

Sandberg, Oskar. "Neighbor selection and hitting probability in small-world graphs." Annals of Applied Probability 18, no. 5 (October 2008): 1771–93. http://dx.doi.org/10.1214/07-aap499.

30

Marchiori, E. "Class Conditional Nearest Neighbor for Large Margin Instance Selection." IEEE Transactions on Pattern Analysis and Machine Intelligence 32, no. 2 (February 2010): 364–70. http://dx.doi.org/10.1109/tpami.2009.164.

31

Kaleli, Cihan. "An entropy-based neighbor selection approach for collaborative filtering." Knowledge-Based Systems 56 (January 2014): 273–80. http://dx.doi.org/10.1016/j.knosys.2013.11.020.

32

Bellogín, Alejandro, Pablo Castells, and Iván Cantador. "Neighbor Selection and Weighting in User-Based Collaborative Filtering." ACM Transactions on the Web 8, no. 2 (March 2014): 1–30. http://dx.doi.org/10.1145/2579993.

33

Vasseur, David A., Priyanga Amarasekare, Volker H. W. Rudolf, and Jonathan M. Levine. "Eco-Evolutionary Dynamics Enable Coexistence via Neighbor-Dependent Selection." American Naturalist 178, no. 5 (November 2011): E96—E109. http://dx.doi.org/10.1086/662161.

34

BARANDELA, RICARDO, FRANCESC J. FERRI, and J. SALVADOR SÁNCHEZ. "DECISION BOUNDARY PRESERVING PROTOTYPE SELECTION FOR NEAREST NEIGHBOR CLASSIFICATION." International Journal of Pattern Recognition and Artificial Intelligence 19, no. 06 (September 2005): 787–806. http://dx.doi.org/10.1142/s0218001405004332.

Abstract:
The excessive computational resources required by the Nearest Neighbor rule are a major concern for a number of specialists and practitioners in the Pattern Recognition community. Many proposals for decreasing this computational burden, through reduction of the training sample size, have been published. This paper introduces an algorithm to reduce the training sample size while preserving the original decision boundaries as much as possible. Consequently, the algorithm tends to obtain classification accuracy close to that of the whole training sample. Several experimental results demonstrate the effectiveness of this method when compared to other reduction algorithms based on similar ideas.
35

Xiao, Cao, and Wanpracha Art Chaovalitwongse. "Optimization Models for Feature Selection of Decomposed Nearest Neighbor." IEEE Transactions on Systems, Man, and Cybernetics: Systems 46, no. 2 (February 2016): 177–84. http://dx.doi.org/10.1109/tsmc.2015.2429637.

36

Kwon, Kwiseok, Jinhyung Cho, and Yongtae Park. "Multidimensional credibility model for neighbor selection in collaborative recommendation." Expert Systems with Applications 36, no. 3 (April 2009): 7114–22. http://dx.doi.org/10.1016/j.eswa.2008.08.071.

37

Helms, T. C., J. H. Orf, and R. A. Scott. "Nearest-neighbor-adjusted means as a selection criterion within two soybean populations." Canadian Journal of Plant Science 75, no. 4 (October 1, 1995): 857–63. http://dx.doi.org/10.4141/cjps95-142.

Abstract:
When the nearest-neighbor adjustment (NNA) reduces the magnitude of the residual mean square, plant breeders have the option of selecting genotypes on the basis of the NNA or unadjusted (UNADJ) means. The actual gain from selection for a specific set of experiments can be compared when selection is based on each criterion. Our objective was to compare the yields of lines selected with the NNA and UNADJ criteria. Three hundred soybean [Glycine max (L.) Merr.] experimental lines were evaluated in six environments. Each environment was considered a selection environment, and the actual yield advance was measured in the other five environments. In 11 out of 12 cases, the lines selected by the NNA and UNADJ criteria were equal in yield when compared in testing environments. Interlocation correlations were similar for both models. Predicted genetic gain was overestimated more often when using the NNA than the UNADJ model. Key words: Glycine max, heritability
38

GAGNÉ, CHRISTIAN, and MARC PARIZEAU. "COEVOLUTION OF NEAREST NEIGHBOR CLASSIFIERS." International Journal of Pattern Recognition and Artificial Intelligence 21, no. 05 (August 2007): 921–46. http://dx.doi.org/10.1142/s0218001407005752.

Abstract:
This paper presents experiments of Nearest Neighbor (NN) classifier design using different evolutionary computation methods. Through multiobjective and coevolution techniques, it combines genetic algorithms and genetic programming to both select NN prototypes and design a neighborhood proximity measure, in order to produce a more efficient and robust classifier. The proposed approach is compared with the standard NN classifier, with and without the use of classic prototype selection methods, and classic data normalization. Results on both synthetic and real data sets show that the proposed methodology performs as well or better than other methods on all tested data sets.
APA, Harvard, Vancouver, ISO, and other styles
39

ABBAS, SYED RAHAT, and MUHAMMAD ARIF. "LONG RANGE TIME SERIES FORECASTING BY UPSAMPLING AND USING CROSS-CORRELATION BASED SELECTION OF NEAREST NEIGHBOR." International Journal of Pattern Recognition and Artificial Intelligence 20, no. 08 (December 2006): 1261–78. http://dx.doi.org/10.1142/s021800140600523x.

Full text
Abstract:
Long range or multistep-ahead time series forecasting is an important issue in various fields of business, science and technology. In this paper, we have proposed a modified nearest neighbor based algorithm that can be used for long range time series forecasting. In the original time series, optimal selection of an embedding dimension that can unfold the dynamics of the system is improved by upsampling the time series. Zeroth-order cross-correlation and Euclidean distance criteria are used to select the nearest neighbor from the up-sampled time series. Embedding dimension size and the number of candidate vectors for nearest neighbor selection play an important role in forecasting. The size of the embedding is optimized by using the auto-correlation function (ACF) plot of the time series. It is observed that the proposed algorithm outperforms the standard nearest neighbor algorithm. The cross-correlation-based criterion shows better performance than the Euclidean-distance criterion.
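Setting aside the upsampling step, the candidate-scoring idea the abstract contrasts can be sketched as follows: score every past window of length m against the most recent window, by zeroth-lag cross-correlation or by Euclidean distance, and forecast with the value that followed the best match. The function name and the sine-wave data are illustrative assumptions.

```python
import numpy as np

def select_nearest_neighbor(series, m, corr_based=True):
    """Return the start index of the past window most similar
    to the last m values of the series."""
    query = series[-m:]
    best_idx, best_score = None, -np.inf
    # Candidate windows: every past window of length m whose successor
    # exists and which does not overlap the query window.
    for i in range(len(series) - 2 * m):
        cand = series[i:i + m]
        if corr_based:
            # Zeroth-lag cross-correlation (here: Pearson correlation).
            score = np.corrcoef(query, cand)[0, 1]
        else:
            # Negative Euclidean distance, so that larger is better.
            score = -np.linalg.norm(query - cand)
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

# One-step forecast: the value that followed the best-matching window.
rng = np.random.default_rng(0)
s = np.sin(np.linspace(0, 20, 200)) + 0.05 * rng.standard_normal(200)
i = select_nearest_neighbor(s, m=10)
forecast = s[i + 10]
```

Iterating this step, feeding each forecast back into the series, gives the multistep-ahead setting the paper studies.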
APA, Harvard, Vancouver, ISO, and other styles
40

LUO, FULIN, JIAMIN LIU, HONG HUANG, and YUMEI LIU. "HYPERSPECTRAL IMAGE CLASSIFICATION USING LOCAL SPECTRAL ANGLE-BASED MANIFOLD LEARNING." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 06 (September 2014): 1450016. http://dx.doi.org/10.1142/s0218001414500165.

Full text
Abstract:
Locally linear embedding (LLE) depends on the Euclidean distance (ED) to select the k-nearest neighbors. However, the ED may not reflect the actual geometric structure of the data, which may lead to the selection of ineffective neighbors. The aim of our work is to make full use of the local spectral angle (LSA) to find proper neighbors for dimensionality reduction (DR) and classification of hyperspectral remote sensing data. First, we propose an improved LLE method, called local spectral angle LLE (LSA-LLE), for DR. It uses the ED of the data to obtain a large-scale neighbor set, then utilizes the spectral angle to pick the exact neighbors from within that set. Furthermore, a local spectral angle-based nearest neighbor classifier (LSANN) has been proposed for classification. Experiments on two hyperspectral image data sets demonstrate the effectiveness of the presented methods.
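The two-stage neighbor search the abstract describes can be sketched in a few lines: a coarse Euclidean pass produces a candidate pool, and the spectral angle refines it to the final k neighbors. The helper names, the pool size `k_large`, and the random "pixels" are illustrative assumptions.

```python
import numpy as np

def spectral_angle(a, b):
    # Spectral angle between two spectra, in radians.
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def lsa_neighbors(X, i, k, k_large=None):
    """Two-stage neighbor search for sample i.

    Stage 1: k_large candidates by Euclidean distance;
    stage 2: keep the k with the smallest spectral angle.
    """
    k_large = k_large or 3 * k
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf                          # exclude the point itself
    candidates = np.argsort(d)[:k_large]
    angles = [spectral_angle(X[i], X[j]) for j in candidates]
    return candidates[np.argsort(angles)[:k]]

rng = np.random.default_rng(1)
X = rng.random((100, 8))                   # 100 "pixels", 8 spectral bands
nbrs = lsa_neighbors(X, i=0, k=5)
```

Plugging such a neighbor set into the LLE reconstruction step, in place of the plain Euclidean k-nearest neighbors, is the substitution LSA-LLE makes.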
APA, Harvard, Vancouver, ISO, and other styles
41

Sanjaya, Rangga, and Fitriyani Fitriyani. "Prediksi Bedah Toraks Menggunakan Seleksi Fitur Forward Selection dan K-Nearest Neighbor." Jurnal Edukasi dan Penelitian Informatika (JEPIN) 5, no. 3 (December 24, 2019): 316. http://dx.doi.org/10.26418/jp.v5i3.35324.

Full text
Abstract:
Lung cancer is a disease that requires fast and targeted treatment, and smoking is its leading cause. Thoracic surgery is the most common operation for lung cancer. Thoracic surgery can treat lung cancer, but the patient's post-operative life expectancy is the concern, so before operating the physician must select patients carefully on the basis of risks and benefits. This study uses the thoracic surgery dataset with the K-Nearest Neighbor algorithm. The thoracic surgery dataset contains irrelevant classes or features, so feature selection was performed using Forward Selection. The experiments and data processing were carried out with the RapidMiner software. This study compares the performance of the K-Nearest Neighbor algorithm without feature selection against K-Nearest Neighbor with Forward Selection feature selection. Based on the test results and the comparison of the two proposed models, the K-NN algorithm with feature optimization using the forward selection method achieves better accuracy than the K-NN algorithm without feature selection.
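The wrapper scheme this study applies, forward selection scored by kNN accuracy, can be sketched without RapidMiner: greedily add whichever feature most improves held-out accuracy, and stop when nothing improves. The helper names, the value k=5, and the synthetic data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def knn_accuracy(Xtr, ytr, Xte, yte, k=5):
    # Plain Euclidean kNN accuracy on a held-out set.
    correct = 0
    for x, y in zip(Xte, yte):
        d = np.linalg.norm(Xtr - x, axis=1)
        votes = ytr[np.argsort(d)[:k]]
        correct += int(np.bincount(votes).argmax() == y)
    return correct / len(yte)

def forward_selection(Xtr, ytr, Xte, yte, k=5):
    """Greedily add the feature that most improves kNN accuracy."""
    selected, best_acc = [], 0.0
    remaining = list(range(Xtr.shape[1]))
    improved = True
    while improved and remaining:
        improved = False
        for f in list(remaining):
            acc = knn_accuracy(Xtr[:, selected + [f]], ytr,
                               Xte[:, selected + [f]], yte, k)
            if acc > best_acc:
                best_acc, best_f, improved = acc, f, True
        if improved:
            selected.append(best_f)
            remaining.remove(best_f)
    return selected, best_acc

# Synthetic data: features 0 and 1 are informative, the rest are noise.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 200)
X = rng.standard_normal((200, 6))
X[:, 0] += 2 * y
X[:, 1] -= 2 * y
feats, acc = forward_selection(X[:150], y[:150], X[150:], y[150:])
```

Comparing `acc` against `knn_accuracy` on all six features reproduces, in miniature, the with/without-selection comparison the study reports.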
APA, Harvard, Vancouver, ISO, and other styles
42

Kim, Taek-Hun, and Sung-Bong Yang. "A Refined Neighbor Selection Algorithm for Clustering-Based Collaborative Filtering." KIPS Transactions:PartD 14D, no. 3 (June 30, 2007): 347–54. http://dx.doi.org/10.3745/kipstd.2007.14-d.3.347.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Chakareski, Jacob. "Know thy neighbor: Community-aware recovery of content selection preferences." Signal Processing 101 (August 2014): 151–61. http://dx.doi.org/10.1016/j.sigpro.2014.01.028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Song, Yunsheng, Jiye Liang, Jing Lu, and Xingwang Zhao. "An efficient instance selection algorithm for k nearest neighbor regression." Neurocomputing 251 (August 2017): 26–34. http://dx.doi.org/10.1016/j.neucom.2017.04.018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Yao, Zhongmei, and Dmitri Loguinov. "Analysis of Link Lifetimes and Neighbor Selection in Switching DHTs." IEEE Transactions on Parallel and Distributed Systems 22, no. 11 (November 2011): 1834–41. http://dx.doi.org/10.1109/tpds.2011.101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Yun, and Bao-Liang Lu. "Feature selection based on loss-margin of nearest neighbor classification." Pattern Recognition 42, no. 9 (September 2009): 1914–21. http://dx.doi.org/10.1016/j.patcog.2008.10.011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Xia, Shixiong, Shaoda Chen, and Zhixiao Wang. "An Hybrid Similarity Function for Neighbor Selection in Collaborative Filtering." International Journal of Database Theory and Application 8, no. 6 (December 31, 2015): 243–52. http://dx.doi.org/10.14257/ijdta.2015.8.6.22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Garcia, Salvador, Joaquin Derrac, Jose Ramon Cano, and Francisco Herrera. "Prototype Selection for Nearest Neighbor Classification: Taxonomy and Empirical Study." IEEE Transactions on Pattern Analysis and Machine Intelligence 34, no. 3 (March 2012): 417–35. http://dx.doi.org/10.1109/tpami.2011.142.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Fuchs, Karen, Jan Gertheiss, and Gerhard Tutz. "Nearest neighbor ensembles for functional data with interpretable feature selection." Chemometrics and Intelligent Laboratory Systems 146 (August 2015): 186–97. http://dx.doi.org/10.1016/j.chemolab.2015.04.019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Tutz, Gerhard, and Dominik Koch. "Improved nearest neighbor classifiers by weighting and selection of predictors." Statistics and Computing 26, no. 5 (July 5, 2015): 1039–57. http://dx.doi.org/10.1007/s11222-015-9588-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles