Academic literature on the topic 'K-Nearest Neighbours (KNN)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'K-Nearest Neighbours (KNN).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "K-Nearest Neighbours (KNN)"

1

Mahfouz, Mohamed A. "Incorporating Density in K-Nearest Neighbors Regression." International Journal of Advanced Research in Computer Science 14, no. 3 (2023): 144–49. http://dx.doi.org/10.26483/ijarcs.v14i3.6989.

Abstract:
The application of the traditional k-nearest neighbours algorithm to regression analysis suffers from several difficulties when only a limited number of samples is available. In this paper, two density-based decision models are proposed. To reduce testing time, a k-nearest-neighbours table (kNN-Table) is maintained that stores the neighbours of each object x along with their weighted Manhattan distance to x and a binary vector representing the increase or decrease in each dimension relative to x's values. In the first decision model, if the unseen sample's distance to one of its neighbours x is less than the distance from x to its own farthest neighbour, the label is estimated using linear interpolation; otherwise, linear extrapolation is used. In the second decision model, for each neighbour x of the unseen sample, the distance of the unseen sample to x and the binary vector are computed, and the set S of nearest neighbours of x is identified from the kNN-Table. For each sample in S, a normalized distance to the unseen sample is computed using the information stored in the kNN-Table and used to weight each neighbour of the neighbours of the unseen object. In both models, a weighted average of the labels computed for each neighbour is assigned to the unseen object. The diversity between the two proposed decision models and the traditional kNN regressor motivated the development of an ensemble of the two proposed models together with the traditional kNN regressor. The evaluation shows that the ensemble achieves a significant increase in performance compared to its base regressors and several related algorithms.
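For orientation, here is a minimal sketch of the distance-weighted kNN regression baseline that these density-based models extend, using an unweighted Manhattan distance; the function name, the inverse-distance weighting scheme, and the toy data are illustrative assumptions, not the paper's code:

```python
import numpy as np

def knn_regress(X_train, y_train, x_query, k=5):
    """Baseline distance-weighted kNN regression (a sketch, not the paper's model)."""
    # Manhattan (L1) distance; the paper uses a *weighted* Manhattan distance,
    # which reduces to this form when all feature weights are 1.
    dists = np.abs(X_train - x_query).sum(axis=1)
    idx = np.argsort(dists)[:k]          # indices of the k nearest neighbours
    w = 1.0 / (dists[idx] + 1e-12)       # inverse-distance weights (assumed scheme)
    return float(np.dot(w, y_train[idx]) / w.sum())

# Toy usage
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
y = np.array([1.0, 2.0, 2.0, 5.0])
print(knn_regress(X, y, np.array([0.5, 0.5]), k=3))
```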
2

He, Hongxing, Simon Hawkins, Warwick Graco, and Xin Yao. "Application of Genetic Algorithm and K-Nearest Neighbour Method in Real World Medical Fraud Detection Problem." Journal of Advanced Computational Intelligence and Intelligent Informatics 4, no. 2 (2000): 130–37. http://dx.doi.org/10.20965/jaciii.2000.p0130.

Abstract:
In the k-Nearest Neighbour (kNN) algorithm, the classification of a new sample is determined by the class of its k nearest neighbours. The performance of the kNN algorithm is influenced by three main factors: (1) the distance metric used to locate the nearest neighbours; (2) the decision rule used to derive a classification from the k nearest neighbours; and (3) the number of neighbours used to classify the new sample. Using k = 1, 3, or 5 nearest neighbours, this study uses a Genetic Algorithm (GA) to find the optimal non-Euclidean distance metric in the kNN algorithm and examines two alternative methods (Majority Rule and Bayes Rule) to derive a classification from the k nearest neighbours. This modified algorithm was evaluated on two real-world medical fraud problems. The General Practitioner (GP) database is a 2-class problem in which GPs are classified as either practising appropriately or inappropriately. The 'Doctor-Shoppers' database is a 5-class problem in which patients are classified according to the likelihood that they are 'doctor-shoppers', i.e., patients who consult many physicians in order to obtain multiple prescriptions of drugs of addiction in excess of their own therapeutic need. In both applications, classification accuracy was improved by optimising the distance metric in the kNN algorithm. The agreement rate on the GP dataset improved from around 70% (using Euclidean distance) to 78% (using an optimised distance metric), and from about 55% to 82% on the Doctor-Shoppers dataset. Differences in either the decision rule or the number of nearest neighbours had little or no impact on the classification performance of the kNN algorithm. The excellent performance of the kNN algorithm when the distance metric is optimised using a genetic algorithm paves the way for its application to the real-world fraud detection problems faced by the Health Insurance Commission (HIC).
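To make the tuning target concrete, the sketch below parameterises the kNN distance metric with a feature-weight vector and scores it by validation-set agreement, the kind of fitness function a genetic algorithm would maximise; the weighted-Euclidean form and all names are assumptions, since the paper's exact non-Euclidean metric family is not reproduced here:

```python
import numpy as np
from collections import Counter

def weighted_dist(a, b, w):
    # Feature-weighted distance; the weight vector w is what the GA evolves.
    return np.sqrt(np.sum(w * (a - b) ** 2))

def knn_classify(X_train, y_train, x_query, w, k=3):
    dists = [weighted_dist(x, x_query, w) for x in X_train]
    idx = np.argsort(dists)[:k]
    return Counter(y_train[idx]).most_common(1)[0][0]   # majority rule

def fitness(w, X_train, y_train, X_val, y_val, k=3):
    # Agreement rate on held-out data: the objective the GA maximises.
    preds = [knn_classify(X_train, y_train, x, w, k) for x in X_val]
    return float(np.mean(np.array(preds) == y_val))
```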
3

Sukshitha, R., and Satyanarayana Satyanarayana. "Empirical Likelihood Ratio Based K-Nearest Neighbours Regression." International Journal of Agricultural and Statistical Sciences 20, no. 2 (2024): 421. https://doi.org/10.59467/ijass.2024.20.421.

Abstract:
Regression models play a pivotal role in real-life applications by enabling the analysis and prediction of continuous outcomes. Among these, the k-Nearest Neighbours (KNN) model stands out as a significant advancement in machine learning. KNN's ability to make predictions based on the proximity of data points has found wide-ranging applications in various fields. However, the traditional KNN regression model has its limitations, including sensitivity to noise and an uneven distribution of neighbours. In response, this paper introduces a novel approach: an empirical likelihood ratio (ELR) based regression algorithm. The ELR technique offers distinct advantages over distance-based nearest neighbour computations, particularly in handling skewed data distributions and minimizing the impact of outliers. The proposed ELR-based KNN regression model is rigorously assessed through both simulation studies and real-life scenarios. The results explicitly demonstrate the enhanced performance of the ELR-based approach over the conventional KNN model. This research contributes to a deeper understanding of regression techniques and underscores the practical significance of leveraging empirical likelihood ratios in refining predictive models for real-world applications. Keywords: regression model, k-nearest neighbours, empirical likelihood ratio, distance measures, data distributions.
4

Farooq, Muhammad, Sehrish Sarfraz, Christophe Chesneau, et al. "Computing Expectiles Using k-Nearest Neighbours Approach." Symmetry 13, no. 4 (2021): 645. http://dx.doi.org/10.3390/sym13040645.

Abstract:
Expectiles have gained considerable attention in recent years due to wide applications in many areas. In this study, the k-nearest neighbours approach, together with the asymmetric least squares loss function, called ex-kNN, is proposed for computing expectiles. First, the effect of various distance measures on ex-kNN in terms of test error and computational time is evaluated. It is found that the Canberra, Lorentzian, and Soergel distance measures lead to minimum test error, whereas Euclidean, Canberra, and the average of (L1, L∞) lead to a low computational cost. Second, the performance of ex-kNN is compared with the existing packages er-boost and ex-svm for computing expectiles on nine real-life examples. Depending on the nature of the data, ex-kNN showed two to ten times better performance than er-boost and comparable performance with ex-svm regarding test error. Computationally, ex-kNN is found to be two to five times faster than ex-svm and much faster than er-boost, particularly in the case of high-dimensional data.
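As a concrete illustration of the asymmetric least squares idea behind ex-kNN, the sketch below computes a sample expectile by the standard fixed-point iteration and applies it to the k nearest responses; the Euclidean distance and all names are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def expectile(y, tau=0.5, tol=1e-10, max_iter=100):
    """tau-expectile of a sample via fixed-point iteration (tau=0.5 gives the mean)."""
    mu = y.mean()
    for _ in range(max_iter):
        w = np.where(y > mu, tau, 1.0 - tau)   # asymmetric squared-loss weights
        mu_new = np.dot(w, y) / w.sum()
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

def ex_knn_predict(X_train, y_train, x_query, k=10, tau=0.9):
    # Take the k nearest responses and return their tau-expectile.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    return expectile(y_train[np.argsort(dists)[:k]], tau)
```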
5

Fatah, Haerul, and Agus Subekti. "Prediksi Harga Cryptocurrency dengan Metode K-Nearest Neighbours." Jurnal Pilar Nusa Mandiri 14, no. 2 (2018): 137. http://dx.doi.org/10.33480/pilar.v14i2.894.

Abstract:
Electronic money is becoming an increasingly popular choice among many people, especially entrepreneurs, businesspeople, and investors, who believe that electronic money will replace physical money in the future. Cryptocurrency emerged as an answer to the limitation of electronic money, namely its heavy dependence on third parties. One type of cryptocurrency is Bitcoin. Financially, Bitcoin is analogous to the stock market: prices fluctuate unpredictably from second to second. The aim of this research is to predict cryptocurrency prices using the KNN (K-Nearest Neighbours) method. The results show that the best KNN model for predicting cryptocurrency prices uses K=3 with the Linear NN Search nearest-neighbour search algorithm, achieving a Mean Absolute Error (MAE) of 0.0018 and a Root Mean Squared Error (RMSE) of 0.0089.
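For reference, the two error measures reported above are defined as follows; this is a generic sketch of the standard formulas, not the study's code:

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean Absolute Error: average magnitude of the prediction errors.
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    # Root Mean Squared Error: penalises large errors more heavily than MAE.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```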
6

Lu, Zhigang, and Hong Shen. "An accuracy-assured privacy-preserving recommender system for internet commerce." Computer Science and Information Systems 12, no. 4 (2015): 1307–26. http://dx.doi.org/10.2298/csis140725056l.

Abstract:
Recommender systems, tools for predicting users' potential preferences by computing history data and users' interests, are of increasing importance in various Internet applications such as online shopping. As a well-known recommendation method, neighbourhood-based collaborative filtering has attracted considerable attention recently. The risk of revealing users' private information during the process of filtering has attracted noticeable research interest. Among the current solutions, probabilistic techniques have shown a powerful privacy-preserving effect. Existing methods deploying probabilistic techniques fall into three categories: one [19] adds differential-privacy noise to the covariance matrix; one [1] introduces randomisation in the neighbour selection process; the other [29] applies differential privacy in both the neighbour selection process and the covariance matrix. When facing the k Nearest Neighbour (kNN) attack, all the existing methods provide no data utility guarantee, owing to the introduction of global randomness. In this paper, to overcome the problem of recommendation accuracy loss, we propose a novel approach, Partitioned Probabilistic Neighbour Selection, to ensure a required prediction accuracy while maintaining high security against the kNN attack. We define the sum of the k neighbours' similarity as the accuracy metric α and the number of user partitions, across which we select the k neighbours, as the security metric β, and we generalise the k Nearest Neighbour attack to the βk Nearest Neighbours attack. Differing from the existing approach that selects neighbours across the entire candidate list randomly, our method selects neighbours from each exclusive partition of size k with a decreasing probability. Theoretical and experimental analysis shows that, to provide an accuracy-assured recommendation, our Partitioned Probabilistic Neighbour Selection method yields a better trade-off between recommendation accuracy and system security.
7

Jagath Prasad, Himayavardhini, and Roji Marjorie S. "Optimized k-nearest neighbours classifier based prediction of epileptic seizures." Bulletin of Electrical Engineering and Informatics 13, no. 4 (2024): 2442–55. http://dx.doi.org/10.11591/eei.v13i4.6598.

Abstract:
An epileptic seizure is an unstable condition of the brain that causes severe mental disorders and can be fatal if not properly diagnosed at an early stage. The electroencephalogram (EEG) plays a major role in the early diagnosis of epileptic seizures. The volume of medical databases is enormous, and classification may become less accurate if the dataset contains redundant and irrelevant attributes. To reduce the mortality rate due to epilepsy, a decision support system is required that can assist medical professionals in taking immediate precautionary measures before a critical condition is reached. In this work, the k-nearest neighbours (KNN) classifier algorithm is optimised using a genetic algorithm for effective classification and faster prediction to meet this requirement. Genetic algorithms search for optimal solutions in complex and large environments. Results are compared with other machine learning models such as support vector machine (SVM), KNN, decision tree classifier, and random forest. With genetic-algorithm optimisation, KNN achieved an enhancement in accuracy at lower training and testing times; the accuracy offered by the optimised KNN was 92%. Random forest classifiers showed minimum complexity, and the KNN algorithm provided faster performance with better accuracy.
8

Magnussen, S., R. E. McRoberts, and E. O. Tomppo. "A resampling variance estimator for the k nearest neighbours technique." Canadian Journal of Forest Research 40, no. 4 (2010): 648–58. http://dx.doi.org/10.1139/x10-020.

Abstract:
Current estimators of variance for the k nearest neighbours (kNN) technique are designed for estimates of population totals. Their efficiency in small-area estimation problems can be poor. In this study, we propose a modified balanced repeated replication estimator of variance (BRR) of a kNN total that performs well in small-area estimation problems and under both simple random and cluster sampling. The BRR estimate of variance is the sum of variances and covariances of unit-level kNN estimates in the area of interest. In Monte Carlo simulations of simple random and cluster sampling from seven artificial populations with real and simulated forest inventory data, the agreement between averages of BRR estimates of variance and Monte Carlo sampling variances was good both for population and for small-area totals. The modified BRR estimator is currently limited to sample sizes no larger than 1984. An accurate approximation to the proposed BRR estimator allows significant savings in computing time.
9

Pandey, Shubham, Vivek Sharma, and Garima Agrawal. "Modification of KNN Algorithm." International Journal of Engineering and Computer Science 8, no. 11 (2019): 24869–77. http://dx.doi.org/10.18535/ijecs/v8i11.4383.

Abstract:
K-Nearest Neighbor (KNN) classification is one of the most fundamental and simple classification methods. It is among the most frequently used classification algorithms when there is little or no prior knowledge about the distribution of the data. In this paper, a modification is proposed to improve the performance of KNN. The main idea of KNN is to use a set of robust neighbors from the training data. The modified KNN proposed in this paper improves on traditional KNN in both robustness and performance. Inspired by the traditional KNN algorithm, the main idea is to classify an input query according to the most frequent tag in the set of neighbor tags, with the tag closest to the new tuple having the greatest say. The proposed modified KNN can be considered a kind of weighted KNN, in which the query label is approximated by weighting the neighbors of the query: the procedure computes the frequency of each label among the neighbors, with each label's count multiplied by a factor inversely proportional to the distance between the new tuple and the neighbour. The proposed method is evaluated on a variety of standard UCI data sets. Experiments show a significant improvement in the performance of the KNN method.
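A minimal sketch of the kind of inverse-distance-weighted voting this abstract describes, in which the closest neighbour's tag has the greatest say; the exact weighting factor used in the paper is not specified here and is assumed to be 1/distance:

```python
import numpy as np
from collections import defaultdict

def weighted_vote_knn(X_train, y_train, x_query, k=5):
    dists = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(dists)[:k]
    scores = defaultdict(float)
    for i in idx:
        # Each neighbour votes for its label, weighted inversely by distance,
        # so closer neighbours contribute more to the final decision.
        scores[y_train[i]] += 1.0 / (dists[i] + 1e-12)
    return max(scores, key=scores.get)
```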
10

Safar, Maytham. "Spatial Queries in Road Networks Based on PINE." JUCS - Journal of Universal Computer Science 14, no. 4 (2008): 590–611. https://doi.org/10.3217/jucs-014-04-0590.

Abstract:
Over the last decade, due to rapid developments in information technology (IT), a new breed of information systems has appeared, such as geographic information systems, which introduced new challenges for researchers, developers, and users. One of its applications is the car navigation system, which allows drivers to receive navigation instructions without taking their eyes off the road. Using a Global Positioning System (GPS) in the car navigation system enables the driver to perform a wide range of queries, from locating the car position, to finding a route from a source to a destination, to dynamically selecting the best route in real time. Several types of spatial queries (e.g., nearest neighbour - NN, K nearest neighbours - KNN, continuous k nearest neighbours - CKNN, reverse nearest neighbour - RNN) have been proposed and studied in the context of spatial databases. With spatial network databases (SNDB), objects are restricted to move on pre-defined paths (e.g., roads) that are specified by an underlying network. In our previous work, we proposed a novel approach, termed Progressive Incremental Network Expansion (PINE), to efficiently support NN and KNN queries. In this work, we utilize our PINE system to efficiently support other spatial queries such as CKNN. The continuous K nearest neighbour (CKNN) query is an important type of query that continuously finds the K nearest objects to a query point on a given path. We focus on moving queries issued on stationary objects in SNDB (e.g., continuously report the five nearest gas stations while I am driving). The result of this type of query is a set of intervals (defined by split points) and their corresponding KNNs. This means that the KNN of an object travelling on one interval of the path remains the same all through that interval, until it reaches a split point where its KNNs change. Existing methods for CKNN are based on Euclidean distances. In this paper we propose a new algorithm for answering CKNN in SNDB where the important measure for the shortest path is network distance rather than Euclidean distance. Our solution addresses a new type of query that is relevant to many applications where the answer to the query depends not only on the distances of the nearest neighbours, but also on the user or application need. By distinguishing between two types of split points, we reduce the number of computations required to retrieve the continuous KNN of a moving object. We compared our algorithm with CKNN based on VN3 using IE (Intersection Examination). Our experiments show that our approach has a better response time than approaches based on IE, and requires fewer shortest-distance computations and KNN queries.

Dissertations / Theses on the topic "K-Nearest Neighbours (KNN)"

1

Villa, Medina Joe Luis. "Reliability of classification and prediction in k-nearest neighbours." Doctoral thesis, Universitat Rovira i Virgili, 2013. http://hdl.handle.net/10803/127108.

Abstract:
This doctoral thesis develops the calculation of classification reliability and prediction reliability using the k-nearest neighbours (kNN) method and bootstrap-based resampling strategies. Two new classification methods, Probabilistic Bootstrap k-Nearest Neighbours (PBkNN) and Bagged k-Nearest Neighbours (Bagged kNN), and a new prediction method, Direct Orthogonalization kNN (DOkNN), were also developed. In all cases, the results obtained with the new methods were comparable to or better than those obtained using classical methods of classification and multivariate calibration.
2

Darborg, Alex. "Identifiera känslig data inom ramen för GDPR : Med K-Nearest Neighbors." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-34070.

Abstract:
The General Data Protection Regulation, GDPR, is a regulation coming into effect on May 25th, 2018. Due to this, organizations face large decisions concerning how sensitive data stored in databases is to be identified. Meanwhile, machine learning is expanding on the software market. The goal of this project has been to develop a tool which, through machine learning, can identify sensitive data. The development of this tool has been accomplished through the use of agile methods and has included comparisons of various algorithms and the development of a prototype, using tools such as Spyder and XAMPP. The results show that different types of sensitive data give varying results in the developed software solution. The kNN algorithm showed strong results in cases where the sensitive data concerned Swedish social security numbers of 10 digits, phone numbers of ten or eleven digits starting with 46, 070, 072, or 076, and addresses. Regular expressions showed strong results for e-mails and IP addresses.
3

Kuhlman, Caitlin Anne. "Pivot-based Data Partitioning for Distributed k Nearest Neighbor Mining." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/1212.

Abstract:
This thesis addresses the need for a scalable distributed solution for k-nearest-neighbor (kNN) search, a fundamental data mining task. This unsupervised method poses particular challenges on shared-nothing distributed architectures, where global information about the dataset is not available to individual machines. The distance to search for neighbors is not known a priori, and therefore a dynamic data partitioning strategy is required to guarantee that exact kNN can be found autonomously on each machine. Pivot-based partitioning has been shown to facilitate bounding of partitions, however state-of-the-art methods suffer from prohibitive data duplication (upwards of 20x the size of the dataset). In this work an innovative method for solving exact distributed kNN search called PkNN is presented. The key idea is to perform computation over several rounds, leveraging pivot-based data partitioning at each stage. Aggressive data-driven bounds limit communication costs, and a number of optimizations are designed for efficient computation. Experimental study on large real-world data (over 1 billion points) compares PkNN to the state-of-the-art distributed solution, demonstrating that the benefits of additional stages of computation in the PkNN method heavily outweigh the added I/O overhead. PkNN achieves a data duplication rate close to 1, significant speedup over previous solutions, and scales effectively in data cardinality and dimension. PkNN can facilitate distributed solutions to other unsupervised learning methods which rely on kNN search as a critical building block. As one example, a distributed framework for the Local Outlier Factor (LOF) algorithm is given. Testing on large real-world and synthetic data with varying characteristics measures the scalability of PkNN and the distributed LOF framework in data size and dimensionality.
4

Aikes, Junior Jorge. "Estudo da influência de diversas medidas de similaridade na previsão de séries temporais utilizando o algoritmo KNN-TSP." Universidade Estadual do Oeste do Parana, 2012. http://tede.unioeste.br:8080/tede/handle/tede/1084.

Abstract:
Time series can be understood as any set of observations ordered in time. Among the many tasks applicable to temporal data, one that has attracted increasing interest, due to its various applications, is time series forecasting. The k-Nearest Neighbor - Time Series Prediction (kNN-TSP) algorithm is a non-parametric method for forecasting time series. One of its advantages is its ease of application when compared to parametric methods. Even though its parameters are easier to define, some related questions remain open. This research is focused on the study of one of these parameters: the similarity measure. This parameter was empirically evaluated using various similarity measures on a large set of time series, including artificial series with seasonal and chaotic characteristics and several real-world time series. A case study was also carried out comparing the predictive accuracy of the kNN-TSP algorithm with the Moving Average (MA), univariate Seasonal Auto-Regressive Integrated Moving Average (SARIMA), and multivariate SARIMA methods on a time series of daily patient flow in the Emergency Department of a Korean hospital. This work also proposes an approach to the development of a hybrid similarity measure which combines characteristics of several measures. The results demonstrate that the Lp-norm measures have an advantage over the other measures evaluated, due to their lower computational cost and because they provide, in general, greater accuracy in temporal data forecasting using the kNN-TSP algorithm. Although the literature generally adopts the Euclidean measure to compute the similarity between time series, the Manhattan distance can be considered an interesting candidate for defining similarity, since it shows no statistically significant difference from the Euclidean measure and has a lower computational cost. The measure proposed in this work does not show significant results, but it is promising for further research. Regarding the case study, the kNN-TSP algorithm, with only the similarity-measure parameter optimized, achieves a considerably lower error than the best MA configuration, and an error only slightly greater (less than one percent) than the optimal settings of the univariate and multivariate SARIMA methods.
5

Kharsikar, Saket. "A Gene Ontology Based Computational Approach for the Prediction of Protein Functions." University of Akron / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=akron1187026388.

6

Bertilsson, Tobias, and Romario Johansson. "Undersökning om hjulmotorströmmar kan användas som alternativ metod för kollisiondetektering i autonoma gräsklippare. : Klassificering av hjulmotorströmmar med KNN och MLP." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Datateknik och informatik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-43555.

Abstract:
Purpose – The purpose of the study is to expand the knowledge of how wheel motor currents can be combined with machine learning to be used in a collision detection system for autonomous robots, in order to decrease the number of external sensors, open new design opportunities, and lower production costs. Method – The study is conducted with design science research, where two artefacts are developed in cooperation with Globe Tools Group. The artefacts are evaluated on how they categorize data given by an autonomous robot into the two categories collision and non-collision. The artefacts are then tested on generated data to analyse their ability to categorize. Findings – Both artefacts showed 100% accuracy in detecting the collisions in the data given by the autonomous robot. In the second part of the experiment, the artefacts show that they have different decision boundaries in how they categorize the data, which will make them useful in different applications. Implications – The study contributes to expanding knowledge of how machine learning and wheel motor currents can be used in a collision detection system. The results can lead to lower production costs and open new design opportunities. Limitations – The data used in the study was gathered by an autonomous robot which only performed frontal collisions on an artificial lawn. Keywords – Machine learning, K-Nearest Neighbour, Multilayer Perceptron, collision detection, autonomous robots, collision detection based on current.
7

Ozsakabasi, Feray. "Classification Of Forest Areas By K Nearest Neighbor Method: Case Study, Antalya." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609548/index.pdf.

Abstract:
Among the various remote sensing methods that can be used to map forest areas, the K Nearest Neighbor (KNN) supervised classification method is becoming increasingly popular for creating forest inventories in some countries. In this study, the utility of the KNN algorithm is evaluated for forest/non-forest/water stratification. Antalya is selected as the study area. The data used comprise Landsat TM and Landsat ETM satellite images acquired in 1987 and 2002, respectively; an SRTM 90-metre digital elevation model (DEM); and land use data from 2003. The accuracies of different modifications of the KNN algorithm are evaluated using leave-one-out, a special case of K-fold cross-validation, and traditional accuracy assessment using error matrices. The best parameters are found to be the Euclidean distance metric, inverse distance weighting, and k equal to 14, using bands 4, 3, and 2. With these parameters, the cross-validation error is 0.009174, and the overall accuracy is around 86%. The results are compared with those from the Maximum Likelihood algorithm, and the KNN results are found to be accurate enough for practical application of this method to mapping forest areas.
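The leave-one-out procedure used here to compare parameter settings can be sketched as follows for a plain kNN classifier; the Euclidean distance and unweighted majority vote are simplifications of the study's inverse-distance-weighted variant, and all names are illustrative:

```python
import numpy as np
from collections import Counter

def loo_error(X, y, k):
    """Leave-one-out error of a plain kNN classifier."""
    errors = 0
    for i in range(len(X)):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf                     # hold out sample i
        idx = np.argsort(dists)[:k]
        pred = Counter(y[idx]).most_common(1)[0][0]
        errors += int(pred != y[i])
    return errors / len(X)

# Choosing k by minimising the LOO error over a candidate range:
# best_k = min(range(1, 21), key=lambda k: loo_error(X, y, k))
```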
8

Neo, Toh Koon. "A Direct Algorithm for the K-Nearest-Neighbor Classifier via Local Warping of the Distance Metric." Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2168.pdf.

9

Mestre, Ricardo Jorge Palheira. "Improvements on the KNN classifier." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/10923.

Abstract:
Dissertation submitted for the degree of Master in Computer Engineering. Object classification is an important area within artificial intelligence, and its applications extend to various fields, within and beyond science. Among classifiers, the K-nearest neighbor (KNN) is among the simplest and most accurate, especially in environments where the data distribution is unknown or apparently not parameterizable. This algorithm assigns to the element being classified the majority class among its K nearest neighbors. According to the original algorithm, this classification implies calculating the distances between the instance being classified and each of the training objects. If, on the one hand, having an extensive training set is important in order to obtain high accuracy, on the other hand, it makes the classification of each object slower due to the lazy-learning nature of the algorithm. Indeed, the algorithm does not provide any means of storing information about previously calculated classifications, so the classification of two identical instances must be computed twice; in a way, it may be said that this classifier does not learn. This dissertation focuses on this lazy-learning fragility and proposes a solution that transforms KNN into an eager-learning classifier. In other words, it is intended that the algorithm learn effectively from the training set, thus avoiding redundant calculations. In the context of the proposed change to the algorithm, it is important to highlight the attributes that best characterize the objects according to their discriminating power. In this framework, the implementation of these transformations is studied on data of different types: continuous and/or categorical.
10

Chucre, Mirla Rafaela Rafael Braga. "K-nearest neighbors queries in time-dependent road networks: analyzing scenarios where points of interest move to the query point." Master's thesis, Universidade Federal do Ceará, 2015. http://www.repositorio.ufc.br/handle/riufc/23696.

Abstract:
A kNN query retrieves the k points of interest that are closest to the query point, where proximity is computed from the query point to the points of interest. Time-dependent road networks are represented as weighted graphs, where the weight of an edge depends on the time at which one traverses that edge. This way, we can model periodic congestion during rush hour and similar effects. Travel time on road networks heavily depends on traffic, and, typically, the time a moving object takes to traverse a segment depends on the departure time. In time-dependent networks, a kNN query, called TD-kNN, returns the k points of interest with minimum travel time from the query point. As a concrete example, imagine a tourist in Paris who is interested in visiting the touristic attraction closest to him/her, and consider two points of interest in the city, the Eiffel Tower and the Cathedral of Notre Dame. If he/she asks for the attraction with the fastest path at that moment, the answer depends on the departure time. For example, at 10h it takes 10 minutes to reach the Cathedral, making it the nearest attraction; if he/she asks the same query at 22h from the same spatial point, the nearest attraction is the Eiffel Tower. In this work, we identify a variation of nearest neighbour queries in time-dependent road networks that has wide applications and requires novel processing algorithms. Differently from TD-kNN queries, we aim at minimizing the travel time from the points of interest to the query point. With this approach, a cab company can find the taxi nearest in time to a passenger requesting transportation. More specifically, we address the following query: find the k points of interest (e.g., taxi drivers) which can move to the query point (e.g., a taxi user) in the minimum amount of time. Previous works have proposed solutions to answer kNN queries considering the time dependency of the network, but not computing proximity from the points of interest to the query point. We propose and discuss a solution to this type of query based on previously proposed incremental network expansion, using the A* search algorithm equipped with suitable heuristic functions. We also discuss the design and correctness of our algorithm and present experimental results that show the efficiency and effectiveness of our solution.

Book chapters on the topic "K-Nearest Neighbours (KNN)"

1

Zhang, Tongyi. "Decision Tree and K-Nearest-Neighbors (KNN)." In An Introduction to Materials Informatics. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-99-7992-9_5.

2

Hurtado, Remigio, and Eduardo Ayora. "Intelligent System for Predicting Bank Policy Acceptance by Ensemble Machine Learning and Model Explanation." In Lecture Notes in Networks and Systems. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-87065-1_41.

Abstract:
Efficient management of financial resources is crucial for the sustainability and competitiveness of banks, particularly in optimizing term deposit subscriptions to maintain liquidity. This paper introduces an advanced intelligent system for predicting term deposit acceptance using ensemble machine learning techniques. Our approach combines Random Forest and K-Nearest Neighbors (KNN) models to enhance prediction accuracy while providing clear explanations. The system follows the CRISP-DM methodology, which includes detailed phases of data preparation, modeling, fine-tuning, and model explanation. We utilize Random Forest for its feature importance metrics and KNN for assessing feature relevance through nearest neighbor analysis. The integration of these methods allows us to generate comprehensive explanations of prediction outcomes by identifying and interpreting key features influencing decision-making. By applying this method to the Bank Marketing Data Set, we demonstrate improved performance across standard metrics such as accuracy, precision, recall, and F1-score. The detailed explanation phase helps understand the model's decision process, providing actionable insights for refining telemarketing strategies. This research presents a robust framework for implementing explainable machine learning in financial marketing, enhancing both predictive accuracy and interpretability for better-informed decision-making.
3

Feng, Yifan, Yutong Ai, and Hao Jiang. "LLE Based K-Nearest Neighbor Smoothing for scRNA-Seq Data Imputation." In Financial Mathematics and Fintech. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-2366-3_11.

Abstract:
The single-cell RNA sequencing (scRNA-seq) technique allows gene expression to be measured at the single-cell level, but scRNA-seq data often contain missing values, with a large proportion caused by technical defects that fail to detect gene expression, which is called the dropout event. The dropout issue poses a great challenge for scRNA-seq data analysis. In this chapter, we introduce a method based on KNN-smoothing, LLE-KNN-smoothing, to impute the dropout values in scRNA-seq data and show that LLE-KNN-smoothing greatly improves the recovery of gene expression in cells and outperforms state-of-the-art imputation methods on a number of scRNA-seq data sets.
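As a rough illustration of the smoothing idea that KNN-smoothing builds on (the published algorithm proceeds in steps with variance stabilisation, and the chapter's LLE-based variant changes how neighbours are found), each cell profile can be replaced by the mean over itself and its k nearest cells; this naive sketch is an assumption for exposition only:

```python
import numpy as np

def knn_smooth_naive(expr, k=5):
    """Replace each cell's profile by the mean over itself and its k nearest cells.

    expr: cells x genes expression matrix. A naive O(n^2) sketch for illustration.
    """
    # Pairwise Euclidean distances between cell profiles
    d = np.linalg.norm(expr[:, None, :] - expr[None, :, :], axis=2)
    smoothed = np.empty_like(expr, dtype=float)
    for i in range(expr.shape[0]):
        idx = np.argsort(d[i])[: k + 1]   # the cell itself plus its k neighbours
        smoothed[i] = expr[idx].mean(axis=0)
    return smoothed
```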
4

Hoque, Nazrul, Dhruba K. Bhattacharyya, and Jugal K. Kalita. "KNN-DK: A Modified K-NN Classifier with Dynamic k Nearest Neighbors." In Advances in Applications of Data-Driven Computing. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-6919-1_2.

5

Robindro, Khumukcham, Yambem Ranjan Singh, Urikhimbam Boby Clinton, Linthoingambi Takhellambam, and Nazrul Hoque. "CD-KNN: A Modified K-Nearest Neighbor Classifier with Dynamic K Value." In Lecture Notes in Electrical Engineering. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-4831-2_62.

6

Bartz-Beielstein, Thomas, and Martin Zaefferer. "Models." In Hyperparameter Tuning for Machine and Deep Learning with R. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-5170-1_3.

Abstract:
This chapter presents a unique overview and a comprehensive explanation of Machine Learning (ML) and Deep Learning (DL) methods. Frequently used ML and DL methods; their hyperparameter configurations; and their features such as types, their sensitivity, and robustness, as well as heuristics for their determination, constraints, and possible interactions are presented. In particular, we cover the following methods: k-Nearest Neighbor (KNN), Elastic Net (EN), Decision Tree (DT), Random Forest (RF), Extreme Gradient Boosting (XGBoost), Support Vector Machine (SVM), and DL. This chapter in itself might serve as a stand-alone handbook already. It contains years of experience in transferring theoretical knowledge into a practical guide.
7

Hamidon Majid, Nur Farina, Muhammad Sharfi Najib, Muhamad Faruqi Zahari, Suziyanti Zaib, and Tuan Sidek Tuan Muda. "The Classification of Meat Odor-Profile Using K-Nearest Neighbors (KNN)." In Lecture Notes in Electrical Engineering. Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8690-0_50.

8

Bevinamarad, Prabhu, and Prakash H. Unki. "Robust Image Tampering Detection Technique Using K-Nearest Neighbors (KNN) Classifier." In Advances in Intelligent Systems and Computing. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0475-2_19.

9

Pai, Siddharth S., Om Bhayde, V. G. Narendra, and G. Shivaprasad. "ScanSense: An Optical Character Recognition (OCR) Using K-Nearest Neighbors (KNN)." In Algorithms for Intelligent Systems. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-1452-3_19.

10

Li, Yannan, Jingbo Wang, and Chao Wang. "Certifying the Fairness of KNN in the Presence of Dataset Bias." In Computer Aided Verification. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37703-7_16.

Abstract:
We propose a method for certifying the fairness of the classification result of a widely used supervised learning algorithm, the k-nearest neighbors (KNN), under the assumption that the training data may have historical bias caused by systematic mislabeling of samples from a protected minority group. To the best of our knowledge, this is the first certification method for KNN based on three variants of the fairness definition: individual fairness, ε-fairness, and label-flipping fairness. We first define the fairness certification problem for KNN and then propose sound approximations of the complex arithmetic computations used in the state-of-the-art KNN algorithm. This is meant to lift the computation results from the concrete domain to an abstract domain, to reduce the computational cost. We show the effectiveness of this abstract interpretation based technique through experimental evaluation on six datasets widely used in the fairness research literature. We also show that the method is accurate enough to obtain fairness certifications for a large number of test inputs, despite the presence of historical bias in the datasets.

Conference papers on the topic "K-Nearest Neighbours (KNN)"

1

Afrina Wan Muhammad Azan, Wan Nur, S. Sarifah Radiah Shariff, Siti Meriam Zahari, et al. "Classification of Severity Areas in Dengue Control Strategies Using k-Nearest Neighbours (kNN)." In 2024 IEEE 15th Control and System Graduate Research Colloquium (ICSGRC). IEEE, 2024. http://dx.doi.org/10.1109/icsgrc62081.2024.10691203.

2

Siregar, Fachri Auliansyah, Ade Candra, and Sutarman. "Determining The Parameter K of K-Nearest Neighbors (KNN) using Random Grid Search." In 2024 4th International Conference of Science and Information Technology in Smart Administration (ICSINTESA). IEEE, 2024. http://dx.doi.org/10.1109/icsintesa62455.2024.10748027.

3

Kou, Jianshang, Benfeng Xu, Chiwei Zhu, and Zhendong Mao. "KNN-Instruct: Automatic Instruction Construction with K Nearest Neighbor Deduction." In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.emnlp-main.577.

4

Senasli, Lamia, Mohammed Chitioui, Mehdi Damou, Abdelhakim Boudkhil, Bouasria Fatima, and Slimane Gounni. "Applying the K Nearest Neighbor algorithm (KNN) in a microwave filter." In 2024 International Conference on Advances in Electrical and Communication Technologies (ICAECOT). IEEE, 2024. https://doi.org/10.1109/icaecot62402.2024.10829183.

5

Adebiyi, Marion O., Oladayo G. Atanda, Chidinma Okeke, Ayodele A. Adebiyi, and Abayomi A. Adebiyi. "Network Intrusion Detection Using K-Nearest Neighbors (KNN) and Recurrent Neural Networks (RNN)." In 2024 International Conference on Science, Engineering and Business for Driving Sustainable Development Goals (SEB4SDG). IEEE, 2024. http://dx.doi.org/10.1109/seb4sdg60871.2024.10629867.

6

Kalyan, Pavan, R. Surendran, and Raveena Selvanarayanan. "Hydroponic NFT for Brinjal cultivation using Decision Tree (DT) and K-Nearest Neighbour (KNN)." In 2024 2nd International Conference on Self Sustainable Artificial Intelligence Systems (ICSSAS). IEEE, 2024. https://doi.org/10.1109/icssas64001.2024.10760940.

7

Centeno, Criselle, Leisyl Mahusay, Dan Michael Cortez, Jay Rome Rana, Regina Diaz, and Ariel Antwaun Rolando Sison. "TherapEase: Enhancing Remote Physical Therapy Using Pose Estimation, Emotion Recognition, and K-Nearest Neighbors (KNN) Algorithm." In 2024 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES). IEEE, 2024. https://doi.org/10.1109/icses63760.2024.10910635.

8

Rahman, Fahri Ari, Triyanna Widiyaningtyas, Hanif Rifai Adha, Azhar Ahmad Smaragdina, Lismi Animatul Chisbiyah, and Levina Lintang Pramita. "Comparative Performance Analysis of Decision Tree and K-Nearest Neighbors (KNN) Algorithms for Malformed Food Aroma Classification." In 2024 IEEE 2nd International Conference on Electrical Engineering, Computer and Information Technology (ICEECIT). IEEE, 2024. https://doi.org/10.1109/iceecit63698.2024.10859612.

9

Vinutha, K., and Usharani Thirunavukkarasu. "Prediction of arrhythmia from MIT-BIH database using J48 and k-nearest neighbours (KNN) classifiers." In Fifth International Conference on Applied Sciences: ICAS2023. AIP Publishing, 2024. http://dx.doi.org/10.1063/5.0197451.

10

Ganesh, Sri, and Radhika Baskar. "Comparative analysis of early detection of lung cancer through breath analysis by comparing K-nearest neighbours (KNN) with Naive Bayes." In International Conference on Newer Engineering Concepts and Technology: ICONNECT-2024. AIP Publishing, 2025. https://doi.org/10.1063/5.0262284.
