
Journal articles on the topic 'Non-parametric learning'



Consult the top 50 journal articles for your research on the topic 'Non-parametric learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Liu, Bing, Shi-Xiong Xia, and Yong Zhou. "Unsupervised non-parametric kernel learning algorithm." Knowledge-Based Systems 44 (May 2013): 1–9. http://dx.doi.org/10.1016/j.knosys.2012.12.008.

2

Esser, Pascal, Maximilian Fleissner, and Debarghya Ghoshdastidar. "Non-parametric Representation Learning with Kernels." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 11910–18. http://dx.doi.org/10.1609/aaai.v38i11.29077.

Abstract:
Unsupervised and self-supervised representation learning has become popular in recent years for learning useful features from unlabelled data. Representation learning has been mostly developed in the neural network literature, and other models for representation learning are surprisingly unexplored. In this work, we introduce and analyze several kernel-based representation learning approaches: Firstly, we define two kernel Self-Supervised Learning (SSL) models using contrastive loss functions and secondly, a Kernel Autoencoder (AE) model based on the idea of embedding and reconstructing data. We argue that the classical representer theorems for supervised kernel machines are not always applicable for (self-supervised) representation learning, and present new representer theorems, which show that the representations learned by our kernel models can be expressed in terms of kernel matrices. We further derive generalisation error bounds for representation learning with kernel SSL and AE, and empirically evaluate the performance of these methods in both small data regimes as well as in comparison with neural network based models.
3

Cruz, David Luviano, Francesco José García Luna, and Luis Asunción Pérez Domínguez. "Multiagent reinforcement learning using Non-Parametric Approximation." Respuestas 23, no. 2 (2018): 53–61. http://dx.doi.org/10.22463/0122820x.1738.

Abstract:
This paper presents a hybrid control proposal for multi-agent systems in which the advantages of reinforcement learning and non-parametric functions are exploited. A modified version of the Q-learning algorithm is used to provide training data for a kernel; this approach yields a sub-optimal set of actions to be used by the agents. The proposed algorithm is experimentally tested on a path-generation task for mobile robots in an unknown environment.
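To make the general pattern concrete, here is a minimal sketch of feeding Q-learning estimates into a non-parametric kernel approximator: tabular Q-learning on a toy single-agent grid world, followed by a kernel regression over the learned state-action values. The grid world, hyper-parameters, and the use of scikit-learn's KernelRidge are illustrative assumptions, not the authors' multi-agent implementation.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n_states, n_actions = 25, 4          # 5x5 grid, moves: up/down/left/right
goal = 24
Q = np.zeros((n_states, n_actions))

def step(s, a):
    """Deterministic 5x5 grid transitions; reward 1 only at the goal."""
    r, c = divmod(s, 5)
    if a == 0: r = max(r - 1, 0)
    elif a == 1: r = min(r + 1, 4)
    elif a == 2: c = max(c - 1, 0)
    else:       c = min(c + 1, 4)
    s2 = r * 5 + c
    return s2, float(s2 == goal), s2 == goal

# Standard epsilon-greedy Q-learning to generate training targets.
for _ in range(2000):
    s = int(rng.integers(n_states))
    for _ in range(50):
        a = int(rng.integers(n_actions)) if rng.random() < 0.2 else int(Q[s].argmax())
        s2, r, done = step(s, a)
        Q[s, a] += 0.1 * (r + 0.95 * Q[s2].max() - Q[s, a])
        s = s2
        if done:
            break

# Fit a non-parametric (kernel) approximation of the learned Q-function.
X = np.array([[s // 5, s % 5, a] for s in range(n_states) for a in range(n_actions)])
y = Q.reshape(-1)
model = KernelRidge(kernel="rbf", gamma=0.5, alpha=1e-3).fit(X, y)

# The kernel model can now rank candidate actions for a given state encoding.
state = np.array([[2, 3, a] for a in range(n_actions)])
print("kernel Q-estimates for state (2,3):", model.predict(state).round(3))
```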
4

Khadse, Vijay M., Parikshit Narendra Mahalle, and Gitanjali R. Shinde. "Statistical Study of Machine Learning Algorithms Using Parametric and Non-Parametric Tests." International Journal of Ambient Computing and Intelligence 11, no. 3 (2020): 80–105. http://dx.doi.org/10.4018/ijaci.2020070105.

Abstract:
The emerging area of the internet of things (IoT) generates a large amount of data from IoT applications such as health care, smart cities, etc. This data needs to be analyzed in order to derive useful inferences. Machine learning (ML) plays a significant role in analyzing such data. It becomes difficult to select an optimal algorithm from the available set of algorithms/classifiers to obtain the best results. The performance of algorithms differs when applied to datasets from different application domains. In machine learning, it is difficult to tell whether a difference in performance is real or due to random variation in the test data, the training data, or the internal randomness of the learning algorithms. This study takes these issues into account during a comparison of ML algorithms for binary and multivariate classification and helps provide guidelines for the statistical validation of results. The results obtained show that the accuracy of one algorithm differs from the others by the critical difference (CD) over binary and multivariate datasets obtained from different application domains.
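The kind of statistical validation the abstract refers to can be sketched with a Friedman omnibus test followed by pairwise non-parametric comparisons on per-dataset accuracies. The accuracy matrix below is made up, and the Nemenyi critical-difference computation is only hinted at through mean ranks; this is not the paper's exact protocol.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical accuracies of three classifiers over ten datasets (rows = datasets).
acc = np.array([
    [0.91, 0.89, 0.85], [0.88, 0.90, 0.84], [0.93, 0.91, 0.88],
    [0.86, 0.87, 0.83], [0.90, 0.92, 0.86], [0.89, 0.88, 0.84],
    [0.92, 0.90, 0.87], [0.87, 0.86, 0.82], [0.94, 0.93, 0.89],
    [0.90, 0.89, 0.85],
])

# Omnibus non-parametric test: do the classifiers differ at all?
stat, p = friedmanchisquare(acc[:, 0], acc[:, 1], acc[:, 2])
print(f"Friedman chi2={stat:.3f}, p={p:.4f}")

# If the omnibus test rejects, follow up with pairwise Wilcoxon signed-rank tests
# (a Nemenyi-style critical-difference analysis on mean ranks is the usual alternative).
if p < 0.05:
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        w, pw = wilcoxon(acc[:, i], acc[:, j])
        print(f"classifier {i} vs {j}: Wilcoxon p={pw:.4f}")

# Mean rank per classifier (rank 1 = best accuracy) as input to a CD diagram.
ranks = (-acc).argsort(axis=1).argsort(axis=1) + 1
print("mean ranks:", ranks.mean(axis=0))
```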
5

Yoa, Seungdong, Jinyoung Park, and Hyunwoo J. Kim. "Learning Non-Parametric Surrogate Losses With Correlated Gradients." IEEE Access 9 (2021): 141199–209. http://dx.doi.org/10.1109/access.2021.3120092.

6

Rutkowski, Leszek. "Non-parametric learning algorithms in time-varying environments." Signal Processing 18, no. 2 (1989): 129–37. http://dx.doi.org/10.1016/0165-1684(89)90045-5.

7

Liu, Mingming, Bing Liu, Chen Zhang, and Wei Sun. "Embedded non-parametric kernel learning for kernel clustering." Multidimensional Systems and Signal Processing 28, no. 4 (2016): 1697–715. http://dx.doi.org/10.1007/s11045-016-0440-1.

8

Chen, Changyou, Junping Zhang, Xuefang He, and Zhi-Hua Zhou. "Non-Parametric Kernel Learning with robust pairwise constraints." International Journal of Machine Learning and Cybernetics 3, no. 2 (2011): 83–96. http://dx.doi.org/10.1007/s13042-011-0048-6.

9

Kaur, Navdeep, Gautam Kunapuli, and Sriraam Natarajan. "Non-parametric learning of lifted Restricted Boltzmann Machines." International Journal of Approximate Reasoning 120 (May 2020): 33–47. http://dx.doi.org/10.1016/j.ijar.2020.01.003.

10

Wang, Mingyang, Zhenshan Bing, Xiangtong Yao, et al. "Meta-Reinforcement Learning Based on Self-Supervised Task Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (2023): 10157–65. http://dx.doi.org/10.1609/aaai.v37i8.26210.

Abstract:
Meta-reinforcement learning enables artificial agents to learn from related training tasks and adapt to new tasks efficiently with minimal interaction data. However, most existing research is still limited to narrow task distributions that are parametric and stationary, and does not consider out-of-distribution tasks during the evaluation, thus restricting its application. In this paper, we propose MoSS, a context-based Meta-reinforcement learning algorithm based on Self-Supervised task representation learning to address this challenge. We extend meta-RL to broad non-parametric task distributions which have never been explored before, and also achieve state-of-the-art results in non-stationary and out-of-distribution tasks. Specifically, MoSS consists of a task inference module and a policy module. We utilize the Gaussian mixture model for task representation to imitate the parametric and non-parametric task variations. Additionally, our online adaptation strategy enables the agent to react at the first sight of a task change, thus being applicable in non-stationary tasks. MoSS also exhibits strong generalization robustness in out-of-distribution tasks, which benefits from the reliable and robust task representation. The policy is built on top of an off-policy RL algorithm and the entire network is trained completely off-policy to ensure high sample efficiency. On MuJoCo and Meta-World benchmarks, MoSS outperforms prior works in terms of asymptotic performance, sample efficiency (3-50x faster), adaptation efficiency, and generalization robustness on broad and diverse task distributions.
11

Jung, Hyungjoo, and Kwanghoon Sohn. "Single Image Depth Estimation With Integration of Parametric Learning and Non-Parametric Sampling." Journal of Korea Multimedia Society 19, no. 9 (2016): 1659–68. http://dx.doi.org/10.9717/kmms.2016.19.9.1659.

12

Tanwani, Ajay Kumar, and Sylvain Calinon. "Small-variance asymptotics for non-parametric online robot learning." International Journal of Robotics Research 38, no. 1 (2018): 3–22. http://dx.doi.org/10.1177/0278364918816374.

Abstract:
Small-variance asymptotics is emerging as a useful technique for inference in large-scale Bayesian non-parametric mixture models. This paper analyzes the online learning of robot manipulation tasks with Bayesian non-parametric mixture models under small-variance asymptotics. The analysis yields a scalable online sequence clustering (SOSC) algorithm that is non-parametric in the number of clusters and the subspace dimension of each cluster. SOSC groups the new datapoint in low-dimensional subspaces by online inference in a non-parametric mixture of probabilistic principal component analyzers (MPPCA) based on a Dirichlet process, and captures the state transition and state duration information online in a hidden semi-Markov model (HSMM) based on a hierarchical Dirichlet process. A task-parameterized formulation of our approach autonomously adapts the model to changing environmental situations during manipulation. We apply the algorithm in a teleoperation setting to recognize the intention of the operator and remotely adjust the movement of the robot using the learned model. The generative model is used to synthesize both time-independent and time-dependent behaviors by relying on the principles of shared and autonomous control. Experiments with the Baxter robot yield parsimonious clusters that adapt online with new demonstrations and assist the operator in performing remote manipulation tasks.
13

Meharunnisa S P. "Improving Network Traffic Security with Parametric and Non-parametric Anomaly Detection Techniques." Journal of Information Systems Engineering and Management 10, no. 33s (2025): 897–907. https://doi.org/10.52783/jisem.v10i33s.5669.

Abstract:
Introduction: Anomaly detection in network traffic is a critical component in multiple domains like IoT, Cloud Computing, cybersecurity and other fields, focusing on the identification of malicious activities to preserve the integrity of network systems. Objectives: This research investigates the performance of both parametric and non-parametric machine learning algorithms in detecting anomalies within network traffic datasets. Parametric models such as Logistic Regression and Support Vector Machines (SVM) were evaluated alongside non-parametric methods, including Random Forest and K-Nearest Neighbors (KNN). Methods: The dataset underwent an extensive preprocessing pipeline to address issues such as missing data, feature normalization, and categorical encoding to improve model accuracy. Results: Among the different algorithms assessed, Random Forest demonstrated the highest efficacy, achieving an accuracy rate of 98.68%. This notable performance highlights the advantages of ensemble techniques in capturing complex, non-linear patterns inherent in network traffic. The results underscore the significant contribution of machine learning, particularly non-parametric methods, in enhancing anomaly detection systems within cybersecurity. Conclusions: This study provides valuable insights into algorithm selection for network traffic analysis, facilitating the development of more robust and efficient intrusion detection systems.
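A minimal sketch of the parametric-versus-non-parametric comparison the abstract describes, assuming a generic preprocessing pipeline (imputation, scaling) and a synthetic stand-in for the network-traffic dataset; the models and split are illustrative, not the study's exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a labelled network-traffic dataset (1 = anomaly).
X, y = make_classification(n_samples=3000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    # Parametric baselines
    "logistic_regression": make_pipeline(SimpleImputer(), StandardScaler(), LogisticRegression(max_iter=1000)),
    "svm_rbf": make_pipeline(SimpleImputer(), StandardScaler(), SVC()),
    # Non-parametric models
    "random_forest": make_pipeline(SimpleImputer(), RandomForestClassifier(n_estimators=200, random_state=0)),
    "knn": make_pipeline(SimpleImputer(), StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, model.predict(X_te)):.4f}")
```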
14

ZHANG, Chao, and Takuya AKASHI. "Two-Side Agreement Learning for Non-Parametric Template Matching." IEICE Transactions on Information and Systems E100.D, no. 1 (2017): 140–49. http://dx.doi.org/10.1587/transinf.2016edp7233.

15

Ma, Yuchao, and Hassan Ghasemzadeh. "LabelForest: Non-Parametric Semi-Supervised Learning for Activity Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4520–27. http://dx.doi.org/10.1609/aaai.v33i01.33014520.

Abstract:
Activity recognition is central to many motion analysis applications ranging from health assessment to gaming. However, the need for obtaining sufficiently large amounts of labeled data has limited the development of personalized activity recognition models. Semi-supervised learning has traditionally been a promising approach in many application domains to alleviate reliance on large amounts of labeled data by learning the label information from a small set of seed labels. Nonetheless, existing approaches perform poorly in highly dynamic settings, such as wearable systems, because some algorithms rely on predefined hyper-parameters or distribution models that need to be tuned for each user or context. To address these challenges, we introduce LabelForest, a novel non-parametric semi-supervised learning framework for activity recognition. LabelForest has two algorithms at its core: (1) a spanning forest algorithm for sample selection and label inference; and (2) a silhouette-based filtering method to finalize label augmentation for machine learning model training. Our thorough analysis on three human activity datasets demonstrates that LabelForest achieves a labeling accuracy of 90.1% in the presence of a skewed label distribution in the seed data. Compared to self-training and other sequential learning algorithms, LabelForest achieves up to 56.9% and 175.3% improvement in accuracy on balanced and unbalanced seed data, respectively.
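The silhouette-based filtering step can be illustrated as follows. This sketch substitutes a simple 1-NN label propagation for the paper's spanning-forest inference and uses synthetic blobs with an arbitrary silhouette threshold, so it only mirrors the overall two-stage structure, not LabelForest itself.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import silhouette_samples

# Toy stand-in: a few labelled "seed" samples plus a pool of unlabelled samples.
X, y_true = make_blobs(n_samples=300, centers=3, random_state=0)
rng = np.random.default_rng(0)
seed_idx = rng.choice(len(X), size=15, replace=False)

# Step 1 (label inference, simplified): propagate seed labels to the pool
# with a 1-NN rule instead of the paper's spanning-forest construction.
knn = KNeighborsClassifier(n_neighbors=1).fit(X[seed_idx], y_true[seed_idx])
y_hat = knn.predict(X)

# Step 2 (silhouette-based filtering): keep only confidently grouped samples
# before they are used to train the downstream activity-recognition model.
sil = silhouette_samples(X, y_hat)
keep = sil > 0.3
print(f"kept {keep.sum()} / {len(X)} samples for model training")
print(f"label accuracy on kept samples: {(y_hat[keep] == y_true[keep]).mean():.3f}")
```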
16

Pareek, Parikshit, Chuan Wang, and Hung D. Nguyen. "Non-parametric probabilistic load flow using Gaussian process learning." Physica D: Nonlinear Phenomena 424 (October 2021): 132941. http://dx.doi.org/10.1016/j.physd.2021.132941.

17

Naeem, Muhammad, and Sohail Asghar. "Structure learning via non-parametric factorized joint likelihood function." Journal of Intelligent & Fuzzy Systems 27, no. 3 (2014): 1589–99. http://dx.doi.org/10.3233/ifs-141125.

18

Karumanchi, Sisir, Thomas Allen, Tim Bailey, and Steve Scheding. "Non-parametric Learning to Aid Path Planning over Slopes." International Journal of Robotics Research 29, no. 8 (2010): 997–1018. http://dx.doi.org/10.1177/0278364910370241.

19

Dervilis, Nikolaos, Thomas E. Simpson, David J. Wagg, and Keith Worden. "Nonlinear modal analysis via non-parametric machine learning tools." Strain 55, no. 1 (2018): e12297. http://dx.doi.org/10.1111/str.12297.

20

Barut, Emre, and Warren B. Powell. "Optimal learning for sequential sampling with non-parametric beliefs." Journal of Global Optimization 58, no. 3 (2013): 517–43. http://dx.doi.org/10.1007/s10898-013-0050-5.

21

Lu, Zhong-Lin, Yukai Zhao, Jiajuan Liu, and Barbara Dosher. "Non-parametric Hierarchical Bayesian Modeling of the Learning Curve in Perceptual Learning." Journal of Vision 23, no. 9 (2023): 5752. http://dx.doi.org/10.1167/jov.23.9.5752.

22

Gaviria-Chavarro, Javier, Isabel Cristina Rojas-Padilla, and Yury Vergara-López. "Virtual Learning Object (VLO) for Teaching and Learning Non-Parametric Statistical Methods." Tecné, Episteme y Didaxis: TED, no. 54 (July 1, 2023): 285–302. http://dx.doi.org/10.17227/ted.num54-14155.

Abstract:
Interpreting, understanding, and applying statistical knowledge presents, in many cases, some difficulties for students in the training process. For this reason, and thanks to the rise of information and communication technologies, a virtual object was developed for learning the statistical methods of Kruskal-Wallis, Mann-Whitney U, and Wilcoxon, which are included in non-parametric statistics. The objective of this quasi-experimental design study was to apply the virtual object as a teaching-learning strategy for these three statistical methods after its creation and validation, in order to support the training of students in biostatistics. The virtual learning object was evaluated by experts through the LORI instrument (a tool that evaluates learning objects on nine variables), granting it a quality level in the medium-high range according to the final weighting. The evaluation instrument and the comparative statistical analysis used in this process showed that the learning object is adequate for the purpose and objective set, concluding that there is a significant difference in the academic results of the students to whom this digital tool was applied.
23

Deco, Gustavo, Ralph Neuneier, and Bernd Schümann. "Non-parametric Data Selection for Neural Learning in Non-stationary Time Series." Neural Networks 10, no. 3 (1997): 401–7. http://dx.doi.org/10.1016/s0893-6080(96)00108-6.

24

Rajathi, C., and P. Rukmani. "Hybrid Learning Model for intrusion detection system: A combination of parametric and non-parametric classifiers." Alexandria Engineering Journal 112 (January 2025): 384–96. http://dx.doi.org/10.1016/j.aej.2024.10.101.

25

Pal, Dipan K., and Marios Savvides. "Non-Parametric Transformation Networks for Learning General Invariances from Data." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4667–74. http://dx.doi.org/10.1609/aaai.v33i01.33014667.

Abstract:
ConvNets, through their architecture, only enforce invariance to translation. In this paper, we introduce a new class of deep convolutional architectures called Non-Parametric Transformation Networks (NPTNs) which can learn general invariances and symmetries directly from data. NPTNs are a natural generalization of ConvNets and can be optimized directly using gradient descent. Unlike almost all previous works in deep architectures, they make no assumption regarding the structure of the invariances present in the data and in that aspect are flexible and powerful. We also model ConvNets and NPTNs under a unified framework called Transformation Networks (TN), which yields a better understanding of the connection between the two. We demonstrate the efficacy of NPTNs on data such as MNIST with extreme transformations and CIFAR10 where they outperform baselines, and further outperform several recent algorithms on ETH-80. They do so while having the same number of parameters. We also show that they are more effective than ConvNets in modelling symmetries and invariances from data, without the explicit knowledge of the added arbitrary nuisance transformations. Finally, we replace ConvNets with NPTNs within Capsule Networks and show that this enables Capsule Nets to perform even better.
26

Kardan, Ahmad Agha, and Samira Ghareh Gozlou. "A new non-parametric feature learning for supervised link prediction." International Journal of System Control and Information Processing 1, no. 4 (2015): 319. http://dx.doi.org/10.1504/ijscip.2015.075877.

27

Zoričić, Davor. "Non-parametric testing of the machine learning electricity prices forecasts." International journal of multidisciplinarity in business and science 10, no. 16 (2024): 5–11. https://doi.org/10.56321/ijmbs.10.16.5.

Abstract:
This research analyzes forecast accuracy in the day-ahead electricity market. The performance of Random Forest and XGBoost machine learning models is compared on day-ahead electricity market data for Germany. Data for 2018 and 2021 are analyzed in order to explore differences in forecast accuracy in low and high market volatility periods. Initial training data for 2017 are used to produce forecasts for 2018 up to one month ahead. The training set is then rolled one month forward, creating a fixed-length rolling window of training and forecast set data for the remainder of the analyzed period. This methodological framework results in 11 forecasting sets for each analyzed year. Forecast accuracy is then evaluated by comparing root-mean-square errors (RMSE) for the observed period. The focus of the research is on examining whether differences in the RMSE values of the competing machine learning models can be reliably determined. For this purpose, the forecasting exercise was first conducted 30 times for both machine learning models and each forecast set containing all forecast horizons. Second, median RMSE values are analyzed for each forecast set, and the non-parametric Wilcoxon rank-sum test is used to determine whether the observed differences in RMSE are statistically significant. The results show small differences in RMSE values; however, they are found to be statistically significant for all forecast sets except one. Moreover, Random Forest seems to slightly outperform the XGBoost model during the period of low market volatility, while XGBoost seems to perform better in the last three forecast sets of 2021, associated with higher market volatility.
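The core statistical step, testing whether two models' RMSE distributions differ, can be sketched with SciPy's Wilcoxon rank-sum test. The RMSE samples below are synthetic placeholders, not the German day-ahead market results.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(42)

# Hypothetical RMSE values from 30 repeated runs of two forecasting models
# on one rolling forecast set (the study repeats this per monthly set).
rmse_rf  = rng.normal(loc=8.10, scale=0.12, size=30)   # e.g. Random Forest
rmse_xgb = rng.normal(loc=8.25, scale=0.12, size=30)   # e.g. XGBoost

print(f"median RMSE  RF : {np.median(rmse_rf):.3f}")
print(f"median RMSE  XGB: {np.median(rmse_xgb):.3f}")

# Non-parametric Wilcoxon rank-sum test: is the difference in RMSE
# distributions statistically significant?
stat, p = ranksums(rmse_rf, rmse_xgb)
print(f"rank-sum statistic = {stat:.3f}, p-value = {p:.4f}")
```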
28

Yang, Z., and C. W. Chan. "Learning control for non-parametric uncertainties with new convergence property." IET Control Theory & Applications 4, no. 10 (2010): 2177–83. http://dx.doi.org/10.1049/iet-cta.2009.0458.

29

Wang, Yi, Bin Li, Yang Wang, Fang Chen, Bang Zhang, and Zhidong Li. "Robust Bayesian non-parametric dictionary learning with heterogeneous Gaussian noise." Computer Vision and Image Understanding 150 (September 2016): 31–43. http://dx.doi.org/10.1016/j.cviu.2016.05.015.

30

Li, Der-Chang, and Chun-Wu Yeh. "A non-parametric learning algorithm for small manufacturing data sets." Expert Systems with Applications 34, no. 1 (2008): 391–98. http://dx.doi.org/10.1016/j.eswa.2006.09.008.

31

Fu, R., D. Xiao, A. G. Buchan, X. Lin, Y. Feng, and G. Dong. "A parametric non-linear non-intrusive reduce-order model using deep transfer learning." Computer Methods in Applied Mechanics and Engineering 438 (April 2025): 117807. https://doi.org/10.1016/j.cma.2025.117807.

32

Park, Yeonseok, Anthony Choi, and Keonwook Kim. "Parametric Estimations Based on Homomorphic Deconvolution for Time of Flight in Sound Source Localization System." Sensors 20, no. 3 (2020): 925. http://dx.doi.org/10.3390/s20030925.

Abstract:
Vehicle-mounted sound source localization systems provide comprehensive information to improve driving conditions by monitoring the surroundings. The three-dimensional structure of vehicles hinders omnidirectional sound localization because of the long and uneven propagation. In the received signal, the flight times between microphones deliver the essential information for locating the sound source. This paper proposes a novel method to design a sound localization system based on a single analog microphone network. The article addresses flight-time estimation for two microphones with non-parametric homomorphic deconvolution. Parametric methods are also suggested, using the Yule-Walker, Prony, and Steiglitz-McBride algorithms to derive the coefficient values of the propagation model for flight-time estimation. The non-parametric and Steiglitz-McBride methods demonstrated significantly low bias and variance for ensemble average lengths of 20 or higher. The Yule-Walker and Prony algorithms showed gradually improved statistical performance for increased ensemble average length. Hence, both the non-parametric and the parametric homomorphic deconvolution represent the flight-time information well. The derived non-parametric and parametric outputs of distinct lengths will serve as feature information for a complete localization system based on machine learning or deep learning in future work.
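The non-parametric homomorphic-deconvolution idea can be sketched with a real cepstrum computed via NumPy FFTs: a delayed, attenuated copy of a source shows up as a cepstral peak at the delay lag. The sample rate, delay, and amplitudes below are invented, and the parametric (Yule-Walker, Prony, Steiglitz-McBride) variants are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 16000                      # sample rate (Hz), assumed
n = 4096
delay = 120                     # true inter-path delay in samples
alpha = 0.6                     # relative amplitude of the delayed path

# Simulated microphone signal: a broadband source plus a delayed, attenuated copy,
# i.e. a convolution of the source with a two-path impulse response.
src = rng.standard_normal(n)
sig = src.copy()
sig[delay:] += alpha * src[:-delay]

# Non-parametric homomorphic deconvolution: the real cepstrum turns the
# convolution into an addition, so the propagation delay appears as a peak.
spectrum = np.fft.rfft(sig)
cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))

# Search for the dominant peak away from the low-quefrency region.
search = cepstrum[20:n // 2]
est = int(np.argmax(search)) + 20
print(f"true delay = {delay} samples, cepstral estimate = {est} samples")
print(f"equivalent flight-time difference = {est / fs * 1000:.2f} ms")
```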
33

Souaissi, Zina, Taha B. M. J. Ouarda, and André St-Hilaire. "Non-parametric, semi-parametric, and machine learning models for river temperature frequency analysis at ungauged basins." Ecological Informatics 75 (July 2023): 102107. http://dx.doi.org/10.1016/j.ecoinf.2023.102107.

34

Herranz-Matey, Ivan, and Luis Ruiz-Garcia. "New Agricultural Tractor Manufacturer’s Suggested Retail Price (MSRP) Model in Europe." Agriculture 14, no. 3 (2024): 342. http://dx.doi.org/10.3390/agriculture14030342.

Abstract:
Research investigating models for assessing new tractor pricing is notably scarce, despite its fundamental importance in conducting comprehensive cost analyses. This study aims to identify a model that is both user-friendly and robust, evaluating both parametric and Machine Learning-optimized non-parametric models. Among parametric models, the second-order polynomial model demonstrated superior performance in terms of R-squared (R2) of 0.97469 and a Root Mean Square Error (RMSE) of 15,633. Conversely, Machine Learning-optimized Gaussian Processes Regressions exhibited the most favorable overall R-squared (R2) of 0.99951 and a Root Mean Square Error (RMSE) of 2321. While the parametric polynomial model offers a solution with minimal mathematical and computational complexity, the non-parametric GPR model delivers highly robust outcomes, presenting stakeholders involved in new agriculture tractor transactions with superior data-driven decision-making capabilities.
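A rough sketch of the parametric-versus-non-parametric comparison reported above: a second-order polynomial regression against a Gaussian process regression on synthetic power-to-price data. The data, kernel choice, and single feature are assumptions; the paper's dataset and model specifications are not reproduced here.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)

# Hypothetical data: engine power (kW) vs list price (EUR); the true relation is
# mildly non-linear, mimicking the parametric-vs-GPR comparison in the paper.
power = rng.uniform(50, 400, size=200).reshape(-1, 1)
price = 15000 + 380 * power[:, 0] + 0.45 * power[:, 0] ** 2 + rng.normal(0, 4000, 200)

# Parametric model: second-order polynomial regression.
poly = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(power, price)

# Non-parametric model: Gaussian process regression with an RBF + noise kernel.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=100.0) + WhiteKernel(),
                               normalize_y=True, random_state=0).fit(power, price)

for name, model in [("polynomial (deg 2)", poly), ("gaussian process", gpr)]:
    pred = model.predict(power)
    rmse = mean_squared_error(price, pred) ** 0.5
    print(f"{name}: R2 = {r2_score(price, pred):.5f}, RMSE = {rmse:.0f}")
```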
35

Maddalena, Emilio T., and Colin N. Jones. "Learning Non-Parametric Models with Guarantees: A Smooth Lipschitz Regression Approach." IFAC-PapersOnLine 53, no. 2 (2020): 965–70. http://dx.doi.org/10.1016/j.ifacol.2020.12.1265.

36

Wang, Dongqi, Haoran Wei, Zhirui Zhang, Shujian Huang, Jun Xie, and Jiajun Chen. "Non-parametric Online Learning from Human Feedback for Neural Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (2022): 11431–39. http://dx.doi.org/10.1609/aaai.v36i10.21395.

Abstract:
We study the problem of online learning with human feedback in human-in-the-loop machine translation, in which human translators revise the machine-generated translations and the corrected translations are then used to improve the neural machine translation (NMT) system. However, previous methods require online model updating or additional translation memory networks to achieve high-quality performance, making them inflexible and inefficient in practice. In this paper, we propose a novel non-parametric online learning method that does not change the model structure. This approach introduces two k-nearest-neighbor (KNN) modules: one module memorizes the human feedback, i.e., the corrected sentences provided by human translators, while the other adaptively balances the usage of the historical human feedback and the original NMT model. Experiments conducted on the EMEA and JRC-Acquis benchmarks demonstrate that the proposed method obtains substantial improvements in translation accuracy and achieves better adaptation performance with fewer repeated human correction operations.
37

Tohill, C., L. Ferreira, C. J. Conselice, S. P. Bamford, and F. Ferrari. "Quantifying Non-parametric Structure of High-redshift Galaxies with Deep Learning." Astrophysical Journal 916, no. 1 (2021): 4. http://dx.doi.org/10.3847/1538-4357/ac033c.

38

Wirayasa, I. Ketut Adi, Arko Djajadi, H. Andri Santoso, and Eko Indrajit. "Comparison Non-Parametric Machine Learning Algorithms for Prediction of Employee Talent." IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 15, no. 4 (2021): 403. http://dx.doi.org/10.22146/ijccs.69366.

Abstract:
Classification of ordinal data is a subset of categorical data classification. Ordinal data consist of features whose values are based on order or ranking. The use of machine learning methods in Human Resources Management is intended to support decision-making based on objective data analysis rather than on subjective aspects. The purpose of this study is to analyze the relationship between features and whether the features used as objective factors can classify and predict whether or not an employee is talented. This study uses a public dataset provided by IBM analytics. The dataset was analyzed with statistical tests and confirmatory factor analysis validity tests to determine the relationships or correlations between features and to formulate hypothesis tests before building models with four algorithms, namely Support Vector Machine, K-Nearest Neighbor, Decision Tree, and Artificial Neural Networks. The test results are expressed in the confusion matrix and classification report of each model. The best evaluation is produced by the SVM algorithm, with identical Accuracy, Precision, and Recall values of 94.00%, Sensitivity of 93.28%, a False Positive rate of 4.62%, a False Negative rate of 6.72%, and an AUC-ROC value of 0.97, an excellent result for the employee talent prediction classification model.
39

Singh, Sumeet, Jonathan Lacotte, Anirudha Majumdar, and Marco Pavone. "Risk-sensitive inverse reinforcement learning via semi- and non-parametric methods." International Journal of Robotics Research 37, no. 13-14 (2018): 1713–40. http://dx.doi.org/10.1177/0278364918772017.

Abstract:
The literature on inverse reinforcement learning (IRL) typically assumes that humans take actions to minimize the expected value of a cost function, i.e., that humans are risk neutral. Yet, in practice, humans are often far from being risk neutral. To fill this gap, the objective of this paper is to devise a framework for risk-sensitive (RS) IRL to explicitly account for a human’s risk sensitivity. To this end, we propose a flexible class of models based on coherent risk measures, which allow us to capture an entire spectrum of risk preferences from risk neutral to worst case. We propose efficient non-parametric algorithms based on linear programming and semi-parametric algorithms based on maximum likelihood for inferring a human’s underlying risk measure and cost function for a rich class of static and dynamic decision-making settings. The resulting approach is demonstrated on a simulated driving game with 10 human participants. Our method is able to infer and mimic a wide range of qualitatively different driving styles from highly risk averse to risk neutral in a data-efficient manner. Moreover, comparisons of the RS-IRL approach with a risk-neutral model show that the RS-IRL framework more accurately captures observed participant behavior both qualitatively and quantitatively, especially in scenarios where catastrophic outcomes such as collisions can occur.
40

Syed, Zeeshan, Ilan Rubinfeld, Pat Patton, et al. "Using diagnostic codes for risk adjustment: A non-parametric learning approach." Journal of the American College of Surgeons 211, no. 3 (2010): S99–S100. http://dx.doi.org/10.1016/j.jamcollsurg.2010.06.262.

41

Nesa, Nashreen, Tania Ghosh, and Indrajit Banerjee. "Non-parametric sequence-based learning approach for outlier detection in IoT." Future Generation Computer Systems 82 (May 2018): 412–21. http://dx.doi.org/10.1016/j.future.2017.11.021.

42

Nurul Amelina Nasharuddin and Nurul Shuhada Zamri. "Non-Parametric Machine Learning for Pollinator Image Classification: A Comparative Study." Journal of Advanced Research in Applied Sciences and Engineering Technology 34, no. 1 (2023): 106–15. http://dx.doi.org/10.37934/araset.34.1.106115.

Abstract:
Pollinators play a crucial role in maintaining the health of our planet's ecosystems by aiding in plant reproduction. However, identifying and differentiating between different types of pollinators can be a difficult task, especially when they have similar appearances. This difficulty in identification can cause significant problems for conservation efforts, as effective conservation requires knowledge of the specific pollinator species present in an ecosystem. Thus, the aim of this study is to identify the most effective methods, features, and classifiers for developing a reliable pollinator classifier. Specifically, this initial study uses two primary features to differentiate between the pollinator types: shape and colour. To develop the pollinator classifiers, a dataset of 186 images of black ants, ladybirds, and yellow jacket wasps was collected. The dataset was then divided into training and testing sets, and four different non-parametric classifiers were used to train the extracted features. The classifiers used were the k-Nearest Neighbour, Decision Tree, Random Forest, and Support Vector Machine classifiers. The results showed that the Random Forest classifier was the most accurate, with a maximum accuracy of 92.11% when the dataset was partitioned into 80% training and 20% testing sets. By developing a reliable pollinator classifier, researchers and conservationists can better understand the roles of different pollinator species in maintaining ecosystem health. This understanding can lead to better conservation strategies to protect these important creatures, ultimately helping to preserve our planet's biodiversity.
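A simplified sketch of the pipeline described above, using hand-rolled colour and shape descriptors on synthetic 32x32 images and a Random Forest with an 80/20 split. The feature definitions and fake images are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
CLASSES = ["black_ant", "ladybird", "yellow_jacket"]

def colour_shape_features(img):
    """Simple colour + shape descriptor: mean RGB, an 8-bin histogram of the
    red channel, and the fraction of dark pixels as a crude shape cue."""
    mean_rgb = img.mean(axis=(0, 1))
    hist, _ = np.histogram(img[..., 0], bins=8, range=(0, 1), density=True)
    dark_fraction = (img.mean(axis=2) < 0.2).mean()
    return np.concatenate([mean_rgb, hist, [dark_fraction]])

def fake_image(label):
    """Synthetic 32x32 RGB stand-ins with class-dependent colour statistics."""
    base = {"black_ant": [0.1, 0.1, 0.1],
            "ladybird": [0.8, 0.1, 0.1],
            "yellow_jacket": [0.9, 0.8, 0.1]}[label]
    return np.clip(rng.normal(base, 0.15, size=(32, 32, 3)), 0, 1)

labels = rng.choice(CLASSES, size=186)                  # same dataset size as the study
X = np.array([colour_shape_features(fake_image(l)) for l in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                          stratify=labels, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"Random Forest accuracy (80/20 split): {accuracy_score(y_te, clf.predict(X_te)):.4f}")
```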
43

Muji, Mujiansyah. "Creative Thinking for PJBL Approach Non-Parametric Analysis." JISAE: Journal of Indonesian Student Assessment and Evaluation 10, no. 2 (2024): 59–65. https://doi.org/10.21009/jisae.v10i2.49241.

Abstract:
Based on the 2018 PISA results, Indonesia ranked 74th out of 79 countries in creative thinking. This shows how important it is that creative thinking be fostered, taught, and developed in students. One important approach to developing this capability is project-based learning (PjBL) supported by digital libraries such as Google Scholar and Crossref, examined here through a systematic literature review focused on PGMI UIN Antasari Banjarmasin students. The results show that, for training students' creative thinking, PjBL is not significantly different from conventional methods when the learning sources are the same (Google Scholar and Crossref for science subjects), but the pre-test and post-test results do show differences in student thinking after using the PjBL approach.
44

Chen, Junjin, and Jiatong Song. "Research on Traffic Flow Prediction Methods Based on Deep Learning." Applied and Computational Engineering 111, no. 1 (2024): 72–80. http://dx.doi.org/10.54254/2755-2721/111/2024ch0096.

Abstract:
In recent years, traffic flow prediction technology has shifted from statistics-based parametric methods and machine-learning-driven non-parametric methods to big-data-driven deep learning methods. This paper summarizes the existing methods and improvement measures for long- and short-term traffic flow prediction based on deep learning. The forecasting horizon is divided into long term and short term. Short-term traffic flow forecasting methods are subdivided into time-series models, non-parametric forecasting models, and probabilistic forecasting models, and the advantages, disadvantages, and feasibility of each method are summarized. Long-term models are mainly based on combining the GCN model with other models, and the specific methods of such hybrid models are outlined, systematically describing the value of deep learning in traffic flow prediction. Finally, future research directions and development trends in this field are discussed.
45

Hakim, Abdul, Nurhikmah H. Nurhikmah, Nur Halisa, Farida Febriati, Latri Aras, and Lutfi B. Lutfi. "The Effect of Online Learning on Student Learning Outcomes in Indonesian Subjects." Journal of Innovation in Educational and Cultural Research 4, no. 1 (2023): 133–40. http://dx.doi.org/10.46843/jiecr.v4i1.312.

Abstract:
This study employs a pre-experimental design to examine whether online learning has any effect on learning outcomes in Indonesian-language subjects. The sample size for this study was 16 students, chosen at random. Data collection methods include observation, testing, and documentation. Observations were made of both teacher and student activities. The testing consists of a pretest before implementing offline learning and a posttest after implementing online learning, alongside documentation for research purposes. The data were analyzed using descriptive statistics and the non-parametric Wilcoxon signed-ranks test (Z), processed using SPSS 22 for Windows. The average value of student learning outcomes in the Indonesian class after offline learning was higher than after online learning. The findings reveal that learning outcomes in the Indonesian language subject after implementing offline learning are greater than after implementing online learning. It is possible to conclude that online learning has an effect on students' learning outcomes in Indonesian Class IV subjects at SD Negeri 1 Bonto-Bonto, Pangkep Regency.
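The paired non-parametric test used in the study corresponds to SciPy's Wilcoxon signed-rank test; a sketch with hypothetical pretest/posttest scores for 16 students follows.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical pretest / posttest scores for 16 students (the study's sample size).
pretest  = np.array([60, 55, 70, 65, 58, 62, 68, 74, 59, 61, 66, 72, 57, 63, 69, 64])
posttest = np.array([72, 60, 78, 70, 66, 70, 75, 80, 65, 68, 74, 79, 64, 70, 77, 71])

# Non-parametric Wilcoxon signed-rank test (Z) for paired samples,
# the same kind of test the study ran in SPSS.
stat, p = wilcoxon(pretest, posttest)
print(f"Wilcoxon statistic = {stat}, p-value = {p:.4f}")
print("significant at 5%" if p < 0.05 else "not significant at 5%")
```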
46

Shi, Chao, and Yu Wang. "Non-parametric machine learning methods for interpolation of spatially varying non-stationary and non-Gaussian geotechnical properties." Geoscience Frontiers 12, no. 1 (2021): 339–50. http://dx.doi.org/10.1016/j.gsf.2020.01.011.

47

Yang, Z., and C. W. Chan. "Conditional iterative learning control for non-linear systems with non-parametric uncertainties under alignment condition." IET Control Theory & Applications 3, no. 11 (2009): 1521–27. http://dx.doi.org/10.1049/iet-cta.2008.0532.

48

Wang, Menglin, Zhun Zhong, and Xiaojin Gong. "Prior-Constrained Association Learning for Fine-Grained Generalized Category Discovery." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 20 (2025): 21162–70. https://doi.org/10.1609/aaai.v39i20.35414.

Abstract:
This paper addresses generalized category discovery (GCD), the task of clustering unlabeled data from potentially known or unknown categories with the help of labeled instances from each known category. Compared to traditional semi-supervised learning, GCD is more challenging because unlabeled data could be from novel categories not appearing in labeled data. Current state-of-the-art methods typically learn a parametric classifier assisted by self-distillation. While effective, these methods do not make use of cross-instance similarity to discover the class-specific semantics that are essential for representation learning and category discovery. In this paper, we revisit the association-based paradigm and propose a Prior-constrained Association Learning method to capture and learn the semantic relations within data. In particular, the labeled data from known categories provides a unique prior for the association of unlabeled data. Unlike previous methods that adopt the prior only as a pre- or post-clustering refinement, we fully incorporate the prior into the association process and let it constrain the association towards a reliable grouping outcome. The estimated semantic groups are utilized through non-parametric prototypical contrast to enhance the representation learning. A further combination of parametric and non-parametric classification, in which the two complement each other, leads to a model that outperforms existing methods by a significant margin. On multiple GCD benchmarks, we perform extensive experiments and validate the effectiveness of our proposed method.
49

Huang, Lei, Yuqing Ma, and Xianglong Liu. "A general non-parametric active learning framework for classification on multiple manifolds." Pattern Recognition Letters 130 (February 2020): 250–58. http://dx.doi.org/10.1016/j.patrec.2019.01.013.

50

Shah, Sonali Rajesh, Abhishek Kaushik, Shubham Sharma, and Janice Shah. "Opinion-Mining on Marglish and Devanagari Comments of YouTube Cookery Channels Using Parametric and Non-Parametric Learning Models." Big Data and Cognitive Computing 4, no. 1 (2020): 3. http://dx.doi.org/10.3390/bdcc4010003.

Abstract:
YouTube is a boon, and through it people can educate, entertain, and express themselves about various topics. YouTube India currently has millions of active users. As there are millions of active users, it can be understood that the volume of data on YouTube is large. India being a very diverse country, many people are multilingual. People express their opinions in code-mixed form, that is, by mixing two or more languages. It has become a necessity to perform sentiment analysis on code-mixed languages, as there is not much research on Indian code-mixed language data. In this paper, Sentiment Analysis (SA) is carried out on Marglish (Marathi + English) as well as Devanagari Marathi comments, which are extracted via the YouTube API from top Marathi channels. Several machine-learning models are applied to the dataset along with three different vectorizing techniques. A Multilayer Perceptron (MLP) with the count vectorizer provides the best accuracy of 62.68% on the Marglish dataset, and Bernoulli Naïve Bayes with the count vectorizer gives an accuracy of 60.60% on the Devanagari dataset. Multilayer Perceptron and Bernoulli Naïve Bayes are considered to be the best-performing algorithms. Ten-fold cross-validation and statistical testing were also carried out on the dataset to confirm the results.
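The best-performing configurations reported above (a count vectorizer feeding a Multilayer Perceptron or Bernoulli Naïve Bayes, evaluated with 10-fold cross-validation) can be sketched in scikit-learn. The code-mixed comments below are invented placeholders, not the YouTube dataset.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Tiny hypothetical code-mixed (Marathi + English) comments with sentiment labels;
# the real study used YouTube comments from top Marathi cookery channels.
comments = [
    "recipe khup chan hoti, loved it", "mast video, thank you tai",
    "ha padarth changla nahi zala", "worst recipe, ajibat aavadla nahi",
    "superb explanation, khup sopi recipe", "ekdum bakwas, waste of time",
    "very nice, punha try karen", "not good, chav nahi aali",
] * 10  # repeat so that 10-fold cross-validation has enough samples per class
labels = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg"] * 10

models = {
    "count + multilayer perceptron": make_pipeline(CountVectorizer(),
                                                   MLPClassifier(max_iter=1000, random_state=0)),
    "count + bernoulli naive bayes": make_pipeline(CountVectorizer(), BernoulliNB()),
}

for name, model in models.items():
    scores = cross_val_score(model, comments, labels, cv=10)
    print(f"{name}: mean 10-fold accuracy = {scores.mean():.4f}")
```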