
Journal articles on the topic 'Kernel Inference'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Kernel Inference.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Nishiyama, Yu, Motonobu Kanagawa, Arthur Gretton, and Kenji Fukumizu. "Model-based kernel sum rule: kernel Bayesian inference with probabilistic models." Machine Learning 109, no. 5 (2020): 939–72. http://dx.doi.org/10.1007/s10994-019-05852-9.

Abstract:
Kernel Bayesian inference is a principled approach to nonparametric inference in probabilistic graphical models, where probabilistic relationships between variables are learned from data in a nonparametric manner. Various algorithms of kernel Bayesian inference have been developed by combining kernelized basic probabilistic operations such as the kernel sum rule and kernel Bayes’ rule. However, the current framework is fully nonparametric, and it does not allow a user to flexibly combine nonparametric and model-based inferences. This is inefficient when there are good probabilistic mod
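The kernel sum rule this abstract refers to has a simple Gram-matrix form. Below is a minimal NumPy sketch of the standard nonparametric construction (an RBF kernel and a made-up regularisation `lam` are assumptions; this is not the paper's model-based variant):

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    # Gaussian RBF Gram matrix between row-sample sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_sum_rule(X, Y, Xtil, alpha, lam=1e-3, gamma=1.0):
    """Weights beta for the embedding of Q(Y) = sum_x P(Y|x) pi(x),
    given joint samples (X, Y) and a prior embedding sum_j alpha_j k(., Xtil_j).
    The returned beta gives mu_Q = sum_i beta_i k(., Y_i)."""
    n = X.shape[0]
    G = rbf_gram(X, X, gamma)        # Gram matrix on the X samples
    K = rbf_gram(X, Xtil, gamma)     # cross-Gram to the prior's points
    beta = np.linalg.solve(G + n * lam * np.eye(n), K @ alpha)
    return beta
```

An expectation E[f(Y)] under Q is then estimated as `beta @ f(Y)`.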
2

Rogers, Mark F., Colin Campbell, and Yiming Ying. "Probabilistic Inference of Biological Networks via Data Integration." BioMed Research International 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/707453.

Abstract:
There is significant interest in inferring the structure of subcellular networks of interaction. Here we consider supervised interactive network inference in which a reference set of known network links and nonlinks is used to train a classifier for predicting new links. Many types of data are relevant to inferring functional links between genes, motivating the use of data integration. We use pairwise kernels to predict novel links, along with multiple kernel learning to integrate distinct sources of data into a decision function. We evaluate various pairwise kernels to establish which are mos
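The pairwise kernels mentioned here are commonly built as a symmetrized tensor product of a base gene-gene kernel. A hedged sketch (the function name and loop form are illustrative; the paper evaluates several pairwise kernels, not necessarily this one):

```python
import numpy as np

def pairwise_kernel(K, pairs1, pairs2):
    """Symmetrized tensor-product pairwise kernel between item pairs,
    built from a base kernel matrix K:
    K_pair((a,b),(c,d)) = K[a,c]*K[b,d] + K[a,d]*K[b,c]."""
    out = np.empty((len(pairs1), len(pairs2)))
    for i, (a, b) in enumerate(pairs1):
        for j, (c, d) in enumerate(pairs2):
            out[i, j] = K[a, c] * K[b, d] + K[a, d] * K[b, c]
    return out
```

By construction the value is invariant to the order within each pair, which is what makes it suitable for undirected link prediction.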
3

Lugo-Martinez, Jose, and Predrag Radivojac. "Generalized graphlet kernels for probabilistic inference in sparse graphs." Network Science 2, no. 2 (2014): 254–76. http://dx.doi.org/10.1017/nws.2014.14.

Abstract:
Graph kernels for learning and inference on sparse graphs have been widely studied. However, the problem of designing robust kernel functions that can effectively compare graph neighborhoods in the presence of noisy and complex data remains less explored. Here we propose a novel graph-based kernel method referred to as an edit distance graphlet kernel. The method was designed to add flexibility in capturing similarities between local graph neighborhoods as a means of probabilistically annotating vertices in sparse and labeled graphs. We report experiments on nine real-life data sets fr
4

Lazarus, Eben, Daniel J. Lewis, and James H. Stock. "The Size‐Power Tradeoff in HAR Inference." Econometrica 89, no. 5 (2021): 2497–516. http://dx.doi.org/10.3982/ecta15404.

Abstract:
Heteroskedasticity‐ and autocorrelation‐robust (HAR) inference in time series regression typically involves kernel estimation of the long‐run variance. Conventional wisdom holds that, for a given kernel, the choice of truncation parameter trades off a test's null rejection rate and power, and that this tradeoff differs across kernels. We formalize this intuition: using higher‐order expansions, we provide a unified size‐power frontier for both kernel and weighted orthonormal series tests using nonstandard “fixed‐b” critical values. We also provide a frontier for the subset of these tests for w
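The kernel long-run variance estimator at the heart of HAR inference can be sketched in a few lines. A textbook Bartlett-kernel construction with truncation S = bT (an illustration of the general object, not the paper's frontier derivation):

```python
import numpy as np

def bartlett_lrv(u, b=0.1):
    """Kernel (Bartlett) long-run variance estimate of a scalar series u,
    with truncation parameter S = b*T as in fixed-b HAR notation."""
    u = np.asarray(u, float) - np.mean(u)
    T = len(u)
    S = max(1, int(b * T))
    omega = u @ u / T                    # lag-0 autocovariance
    for j in range(1, S + 1):
        w = 1.0 - j / (S + 1)            # Bartlett (triangular) weight
        gamma_j = u[j:] @ u[:-j] / T     # lag-j autocovariance
        omega += 2.0 * w * gamma_j
    return omega
```

Larger `b` lowers size distortion at the cost of power, which is exactly the tradeoff the paper formalizes.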
5

Billio, M. "Kernel-Based Indirect Inference." Journal of Financial Econometrics 1, no. 3 (2003): 297–326. http://dx.doi.org/10.1093/jjfinec/nbg014.

6

Zhang, Li Lyna, Shihao Han, Jianyu Wei, Ningxin Zheng, Ting Cao, and Yunxin Liu. "nn-METER." GetMobile: Mobile Computing and Communications 25, no. 4 (2022): 19–23. http://dx.doi.org/10.1145/3529706.3529712.

Abstract:
Inference latency has become a crucial metric in running Deep Neural Network (DNN) models on various mobile and edge devices. To this end, latency prediction of DNN inference is highly desirable for many tasks where measuring the latency on real devices is infeasible or too costly. Yet it is very challenging and existing approaches fail to achieve a high accuracy of prediction, due to the varying model-inference latency caused by the runtime optimizations on diverse edge devices. In this paper, we propose and develop nn-Meter, a novel and efficient system to accurately predict the DNN inferenc
7

Robinson, P. M. "Inference on Nonparametrically Trending Time Series with Fractional Errors." Econometric Theory 25, no. 6 (2009): 1716–33. http://dx.doi.org/10.1017/s0266466609990302.

Abstract:
The central limit theorem for nonparametric kernel estimates of a smooth trend, with linearly generated errors, indicates asymptotic independence and homoskedasticity across fixed points, irrespective of whether disturbances have short memory, long memory, or antipersistence. However, the asymptotic variance depends on the kernel function in a way that varies across these three circumstances, and in the latter two it involves a double integral that cannot necessarily be evaluated in closed form. For a particular class of kernels, we obtain analytic formulas. We discuss extensions to more gener
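The kernel estimate of a smooth trend discussed here is, in its simplest form, a Nadaraya-Watson smoother. A minimal sketch (Gaussian kernel; the bandwidth `h` is a free illustrative parameter, not a value from the paper):

```python
import numpy as np

def kernel_trend(t, y, h):
    """Nadaraya-Watson kernel estimate of a smooth trend y_t = f(t/T) + error,
    using a Gaussian kernel with bandwidth h (minimal illustration)."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    # weight matrix W[i, j] = K((t_i - t_j) / h)
    W = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    return (W @ y) / W.sum(axis=1)
```

The paper's point is that the asymptotic variance of such estimates involves the kernel in a way that differs under short memory, long memory, and antipersistence of the errors.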
8

Yuan, Ao. "Semiparametric inference with kernel likelihood." Journal of Nonparametric Statistics 21, no. 2 (2009): 207–28. http://dx.doi.org/10.1080/10485250802553382.

9

Cheng, Yansong, and Surajit Ray. "Multivariate Modality Inference Using Gaussian Kernel." Open Journal of Statistics 4, no. 5 (2014): 419–34. http://dx.doi.org/10.4236/ojs.2014.45041.

10

Agbokou, Komi, and Yaogan Mensah. "Inference on the Reproducing Kernel Hilbert Spaces." Universal Journal of Mathematics and Mathematical Sciences 15 (October 10, 2021): 11–29. http://dx.doi.org/10.17654/2277141722002.

11

Memisevic, R., L. Sigal, and D. J. Fleet. "Shared Kernel Information Embedding for Discriminative Inference." IEEE Transactions on Pattern Analysis and Machine Intelligence 34, no. 4 (2012): 778–90. http://dx.doi.org/10.1109/tpami.2011.154.

12

Maswadah, M. "Kernel Inference on the Inverse Weibull Distribution." Communications for Statistical Applications and Methods 13, no. 3 (2006): 503–12. http://dx.doi.org/10.5351/ckss.2006.13.3.503.

13

Racine, Jeffrey S., and James G. MacKinnon. "Inference via kernel smoothing of bootstrap values." Computational Statistics & Data Analysis 51, no. 12 (2007): 5949–57. http://dx.doi.org/10.1016/j.csda.2006.11.013.

14

Sun, Yixiao, and Jingjing Yang. "Testing-optimal kernel choice in HAR inference." Journal of Econometrics 219, no. 1 (2020): 123–36. http://dx.doi.org/10.1016/j.jeconom.2020.06.007.

15

Hayes, Kyle, Michael W. Fouts, Ali Baheri, and David S. Mebane. "Forward variable selection enables fast and accurate dynamic system identification with Karhunen-Loève decomposed Gaussian processes." PLOS ONE 19, no. 9 (2024): e0309661. http://dx.doi.org/10.1371/journal.pone.0309661.

Abstract:
A promising approach for scalable Gaussian processes (GPs) is the Karhunen-Loève (KL) decomposition, in which the GP kernel is represented by a set of basis functions which are the eigenfunctions of the kernel operator. Such decomposed kernels have the potential to be very fast, and do not depend on the selection of a reduced set of inducing points. However KL decompositions lead to high dimensionality, and variable selection thus becomes paramount. This paper reports a new method of forward variable selection, enabled by the ordered nature of the basis functions in the KL expansion of the Bay
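A discrete analogue of the KL decomposition described above is the eigendecomposition of the kernel Gram matrix, which yields an explicit finite feature map. A Nyström-style sketch (the RBF kernel, `m`, and `gamma` are illustrative assumptions, not the paper's Bayesian smoothing kernel):

```python
import numpy as np

def kl_features(X, m, gamma=1.0):
    """Karhunen-Loeve-style feature map: approximate the GP kernel
    k(x, x') by phi(x) . phi(x') using the top-m eigenpairs of the
    Gram matrix built on the training inputs X."""
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    vals, vecs = np.linalg.eigh(K)
    idx = np.argsort(vals)[::-1][:m]                    # top-m eigenpairs
    phi = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
    return phi                                          # row i is phi(x_i)
```

Because the eigenpairs come ordered by eigenvalue, forward selection over the basis functions (the paper's contribution) has a natural ordering to exploit.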
16

Kondratyev, Dmitry A. "Towards Automatic Deductive Verification of C Programs with Sisal Loops Using the C-lightVer System." Modeling and Analysis of Information Systems 28, no. 4 (2021): 372–93. http://dx.doi.org/10.18255/1818-1015-2021-4-372-393.

Abstract:
The C-lightVer system is developed in IIS SB RAS for C-program deductive verification. C-kernel is an intermediate verification language in this system. Cloud parallel programming system (CPPS) is also developed in IIS SB RAS. Cloud Sisal is an input language of CPPS. The main feature of CPPS is implicit parallel execution based on automatic parallelization of Cloud Sisal loops. Cloud-Sisal-kernel is an intermediate verification language in the CPPS system. Our goal is automatic parallelization of such a superset of C that allows implementing automatic verification. Our solution is such a supe
17

Lei, Zijian, and Liang Lan. "Memory and Computation-Efficient Kernel SVM via Binary Embedding and Ternary Model Coefficients." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (2021): 8316–23. http://dx.doi.org/10.1609/aaai.v35i9.17011.

Abstract:
Kernel approximation is widely used to scale up kernel SVM training and prediction. However, the memory and computation costs of kernel approximation models are still too large if we want to deploy them on memory-limited devices such as mobile phones, smart watches and IoT devices. To address this challenge, we propose a novel memory and computation-efficient kernel SVM model by using both binary embedding and binary model coefficients. First, we propose an efficient way to generate compact binary embedding of the data which can preserve the kernel similarity. Second, we propose a simple but e
18

Lu, Chi-Ken, and Patrick Shafto. "Conditional Deep Gaussian Processes: Empirical Bayes Hyperdata Learning." Entropy 23, no. 11 (2021): 1387. http://dx.doi.org/10.3390/e23111387.

Abstract:
It is desirable to combine the expressive power of deep learning with Gaussian Process (GP) in one expressive Bayesian learning model. Deep kernel learning showed success as a deep network used for feature extraction. Then, a GP was used as the function model. Recently, it was suggested that, albeit training with marginal likelihood, the deterministic nature of a feature extractor might lead to overfitting, and replacement with a Bayesian network seemed to cure it. Here, we propose the conditional deep Gaussian process (DGP) in which the intermediate GPs in hierarchical composition are support
19

Kumar, Mukesh, and Santanu Kumar Rath. "Classification of Microarray Data Using Kernel Fuzzy Inference System." International Scholarly Research Notices 2014 (August 21, 2014): 1–18. http://dx.doi.org/10.1155/2014/769159.

Abstract:
The DNA microarray classification technique has gained more popularity in both research and practice. In real data analysis, such as microarray data, the dataset contains a huge number of insignificant and irrelevant features that tend to lose useful information. Classes with high relevance and feature sets with high significance are generally referred for the selected features, which determine the samples classification into their respective classes. In this paper, kernel fuzzy inference system (K-FIS) algorithm is applied to classify the microarray data (leukemia) using t-test as a feature s
20

Cawley, Gavin C., and Nicola L. C. Talbot. "Kernel learning at the first level of inference." Neural Networks 53 (May 2014): 69–80. http://dx.doi.org/10.1016/j.neunet.2014.01.011.

21

Wang, Kai. "Conditional asymptotic inference for the kernel association test." Bioinformatics 33, no. 23 (2017): 3733–39. http://dx.doi.org/10.1093/bioinformatics/btx511.

22

Li, Wuchen, Luwen Zhang, Jian Xu, Linghui Li, and Liping Bai. "Jump amplitude inference in SDEs with cosine kernel." Results in Applied Mathematics 26 (May 2025): 100596. https://doi.org/10.1016/j.rinam.2025.100596.

23

Auzina, Ilze A., and Jakub M. Tomczak. "Approximate Bayesian Computation for Discrete Spaces." Entropy 23, no. 3 (2021): 312. http://dx.doi.org/10.3390/e23030312.

Abstract:
Many real-life processes are black-box problems, i.e., the internal workings are inaccessible or a closed-form mathematical expression of the likelihood function cannot be defined. For continuous random variables, likelihood-free inference problems can be solved via Approximate Bayesian Computation (ABC). However, an optimal alternative for discrete random variables is yet to be formulated. Here, we aim to fill this research gap. We propose an adjusted population-based MCMC ABC method by re-defining the standard ABC parameters to discrete ones and by introducing a novel Markov kernel that is i
24

Stordal, Andreas S., Rafael J. Moraes, Patrick N. Raanes, and Geir Evensen. "p-Kernel Stein Variational Gradient Descent for Data Assimilation and History Matching." Mathematical Geosciences 53, no. 3 (2021): 375–93. http://dx.doi.org/10.1007/s11004-021-09937-x.

Abstract:
A Bayesian method of inference known as “Stein variational gradient descent” was recently implemented for data assimilation problems, under the heading of “mapping particle filter”. In this manuscript, the algorithm is applied to another type of geoscientific inversion problems, namely history matching of petroleum reservoirs. In order to combat the curse of dimensionality, the commonly used Gaussian kernel, which defines the solution space, is replaced by a p-kernel. In addition, the ensemble gradient approximation used in the mapping particle filter is rectified, and the data assimil
25

Xiao, Chengcheng, Xiaowen Liu, Chi Sun, Zhongyu Liu, and Enjie Ding. "Hierarchical Prototypes Polynomial Softmax Loss Function for Visual Classification." Applied Sciences 12, no. 20 (2022): 10336. http://dx.doi.org/10.3390/app122010336.

Abstract:
A well-designed loss function can effectively improve the characterization ability of network features without increasing the amount of calculation in the model inference stage, and has become the focus of attention in recent research. Given that the existing lightweight network adds a loss to the last layer, which severely attenuates the gradient during the backpropagation process, we propose a hierarchical polynomial kernel prototype loss function in this study. In this function, the addition of a polynomial kernel loss function to multiple stages of the deep neural network effectively enhan
26

Massaroppe, Lucas, and Luiz Baccalá. "Kernel Methods for Nonlinear Connectivity Detection." Entropy 21, no. 6 (2019): 610. http://dx.doi.org/10.3390/e21060610.

Abstract:
In this paper, we show that the presence of nonlinear coupling between time series may be detected using kernel feature space F representations while dispensing with the need to go back to solve the pre-image problem to gauge model adequacy. This is done by showing that the kernelized auto/cross sequences in F can be computed from the model rather than from prediction residuals in the original data space X. Furthermore, this allows for reducing the connectivity inference problem to that of fitting a consistent linear model in F that works even in the case of nonlinear interactions in the X-s
27

Nie, Junlan, Ruibo Gao, and Ye Kang. "Urban Noise Inference Model Based on Multiple Views and Kernel Tensor Decomposition." Fluctuation and Noise Letters 20, no. 03 (2021): 2150027. http://dx.doi.org/10.1142/s0219477521500279.

Abstract:
Prediction of urban noise is becoming more significant for tackling noise pollution and protecting human mental health. However, the existing noise prediction algorithms neglected not only the correlation between noise regions, but also the nonlinearity and sparsity of the data, which resulted in low accuracy of filling in the missing entries of data. In this paper, we propose a model based on multiple views and kernel-matrix tensor decomposition to predict the noise situation at different times of day in each region. We first construct a kernel tensor decomposition model by using kernel mappi
28

Maswadah, Mohamed, and Seham Mohamed. "Bayesian Inference on the Generalized Exponential Distribution Based on the Kernel Prior." Science Journal of Applied Mathematics and Statistics 12, no. 2 (2024): 29–36. http://dx.doi.org/10.11648/j.sjams.20241202.12.

Abstract:
In this work, we introduce an objective prior based on the kernel density estimation to eliminate the subjectivity of the Bayesian estimation for information other than data. For comparing the kernel prior with the informative gamma prior, the mean squared error and the mean percentage error for the generalized exponential (GE) distribution parameters estimations are studied using both symmetric and asymmetric loss functions via Monte Carlo simulations. The simulation results indicated that the kernel prior outperforms the informative gamma prior. Finally, a numerical example is given to demon
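A kernel prior of the kind described here can be built with a plain Gaussian kernel density estimate over preliminary parameter estimates. A minimal sketch (the helper name, bandwidth, and grid are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

def kernel_prior(theta_grid, samples, h):
    """Kernel (KDE) prior density on a grid of parameter values, built
    from preliminary estimates `samples` with a Gaussian kernel of
    bandwidth h -- a data-driven alternative to a subjective prior."""
    z = (theta_grid[:, None] - samples[None, :]) / h
    dens = np.exp(-0.5 * z ** 2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))
    return dens
```

The resulting density can then be plugged into the Bayesian machinery wherever an informative prior (e.g. a gamma prior) would otherwise appear.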
29

Hou, Yuxin, Ari Heljakka, and Arno Solin. "Gaussian Process Priors for View-Aware Inference." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (2021): 7762–70. http://dx.doi.org/10.1609/aaai.v35i9.16948.

Abstract:
While frame-independent predictions with deep neural networks have become the prominent solutions to many computer vision tasks, the potential benefits of utilizing correlations between frames have received less attention. Even though probabilistic machine learning provides the ability to encode correlation as prior knowledge for inference, there is a tangible gap between the theory and practice of applying probabilistic methods to modern vision problems. For this, we derive a principled framework to combine information coupling between camera poses (translation and orientation) with deep mode
30

Liang, Junjie, Yanting Wu, Dongkuan Xu, and Vasant G. Honavar. "Longitudinal Deep Kernel Gaussian Process Regression." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (2021): 8556–64. http://dx.doi.org/10.1609/aaai.v35i10.17038.

Abstract:
Gaussian processes offer an attractive framework for predictive modeling from longitudinal data, i.e., irregularly sampled, sparse observations from a set of individuals over time. However, such methods have two key shortcomings: (i) They rely on ad hoc heuristics or expensive trial and error to choose the effective kernels, and (ii) They fail to handle multilevel correlation structure in the data. We introduce Longitudinal deep kernel Gaussian process regression (L-DKGPR) to overcome these limitations by fully automating the discovery of complex multilevel correlation structure from longitudina
31

Zhang, Rui, Christian Walder, and Marian-Andrei Rizoiu. "Variational Inference for Sparse Gaussian Process Modulated Hawkes Process." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 6803–10. http://dx.doi.org/10.1609/aaai.v34i04.6160.

Abstract:
The Hawkes process (HP) has been widely applied to modeling self-exciting events including neuron spikes, earthquakes and tweets. To avoid designing parametric triggering kernel and to be able to quantify the prediction confidence, the non-parametric Bayesian HP has been proposed. However, the inference of such models suffers from unscalability or slow convergence. In this paper, we aim to solve both problems. Specifically, first, we propose a new non-parametric Bayesian HP in which the triggering kernel is modeled as a squared sparse Gaussian process. Then, we propose a novel variational infe
32

Wang, Qihuan, Haolin Yang, Qianghao He, Dong Yue, Ce Zhang, and Duanyang Geng. "Real-Time Detection System of Broken Corn Kernels Based on BCK-YOLOv7." Agronomy 13, no. 7 (2023): 1750. http://dx.doi.org/10.3390/agronomy13071750.

Abstract:
Accurately and effectively measuring the breaking quality of harvested corn kernels is a critical step in the intelligent development of corn harvesters. The detection of broken corn kernels is complicated during the harvesting process due to turbulent corn kernel movement, uneven lighting, and interference from numerous external factors. This paper develops a deep learning-based detection method in real time for broken corn kernels in response to these issues. The system uses an image acquisition device to continuously acquire high-quality corn kernel image data and cooperates with a deep lea
33

Cui, Chen, Shengyi Jiang, and Bruno C. d. S. Oliveira. "Greedy Implicit Bounded Quantification." Proceedings of the ACM on Programming Languages 7, OOPSLA2 (2023): 2083–111. http://dx.doi.org/10.1145/3622871.

Abstract:
Mainstream object-oriented programming languages such as Java, Scala, C#, or TypeScript have polymorphic type systems with subtyping and bounded quantification. Bounded quantification, despite being a pervasive and widely used feature, has attracted little research work on type-inference algorithms to support it. A notable exception is local type inference, which is the basis of most current implementations of type inference for mainstream languages. However, support for bounded quantification in local type inference has important restrictions, and its non-algorithmic specification is complex.
34

Teng, Tong, Jie Chen, Yehong Zhang, and Bryan Kian Hsiang Low. "Scalable Variational Bayesian Kernel Selection for Sparse Gaussian Process Regression." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 5997–6004. http://dx.doi.org/10.1609/aaai.v34i04.6061.

Abstract:
This paper presents a variational Bayesian kernel selection (VBKS) algorithm for sparse Gaussian process regression (SGPR) models. In contrast to existing GP kernel selection algorithms that aim to select only one kernel with the highest model evidence, our VBKS algorithm considers the kernel as a random variable and learns its belief from data such that the uncertainty of the kernel can be interpreted and exploited to avoid overconfident GP predictions. To achieve this, we represent the probabilistic kernel as an additional variational variable in a variational inference (VI) framework for SG
35

Patel, Zeel B., Palak Purohit, Harsh M. Patel, Shivam Sahni, and Nipun Batra. "Accurate and Scalable Gaussian Processes for Fine-Grained Air Quality Inference." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 12080–88. http://dx.doi.org/10.1609/aaai.v36i11.21467.

Abstract:
Air pollution is a global problem and severely impacts human health. Fine-grained air quality (AQ) monitoring is important in mitigating air pollution. However, existing AQ station deployments are sparse. Conventional interpolation techniques fail to learn the complex AQ phenomena. Physics-based models require domain knowledge and pollution source data for AQ modeling. In this work, we propose a Gaussian processes based approach for estimating AQ. The important features of our approach are: a) a non-stationary (NS) kernel to allow input-dependent smoothness of fit; b) a Hamming distance-based k
36

Gudmundarson, Ragnar L., and Gareth W. Peters. "Assessing portfolio diversification via two-sample graph kernel inference. A case study on the influence of ESG screening." PLOS ONE 19, no. 4 (2024): e0301804. http://dx.doi.org/10.1371/journal.pone.0301804.

Abstract:
In this work we seek to enhance the frameworks practitioners in asset management and wealth management may adopt to assess how different screening rules may influence the diversification benefits of portfolios. The problem arises naturally in the area of Environmental, Social, and Governance (ESG) based investing practices as practitioners often need to select subsets of the total available assets based on some ESG screening rule. Once a screening rule is identified, one constructs a dynamic portfolio which is usually compared with another dynamic portfolio to check if it satisfies or outperfor
37

Ren, Ming, Chi Cheung, and Gao Xiao. "Gaussian Process Based Bayesian Inference System for Intelligent Surface Measurement." Sensors 18, no. 11 (2018): 4069. http://dx.doi.org/10.3390/s18114069.

Abstract:
This paper presents a Gaussian process based Bayesian inference system for the realization of intelligent surface measurement on multi-sensor instruments. The system considers the surface measurement as a time series data collection process, and the Gaussian process is used as mathematical foundation to establish an inferring plausible model to aid the measurement process via multi-feature classification and multi-dataset regression. Multi-feature classification extracts and classifies the geometric features of the measured surfaces at different scales to design an appropriate composite covari
38

Rocha, Gustavo H. M. A., Rosangela H. Loschi, and Reinaldo B. Arellano-Valle. "Inference in flexible families of distributions with normal kernel." Statistics 47, no. 6 (2013): 1184–206. http://dx.doi.org/10.1080/02331888.2012.688207.

39

Gao, Junbin, Paul W. Kwan, and Daming Shi. "Sparse kernel learning with LASSO and Bayesian inference algorithm." Neural Networks 23, no. 2 (2010): 257–64. http://dx.doi.org/10.1016/j.neunet.2009.07.001.

40

Capobianco, Enrico. "Kernel methods and flexible inference for complex stochastic dynamics." Physica A: Statistical Mechanics and its Applications 387, no. 16-17 (2008): 4077–98. http://dx.doi.org/10.1016/j.physa.2008.03.003.

41

Lam, Clifford, and Jianqing Fan. "Profile-kernel likelihood inference with diverging number of parameters." Annals of Statistics 36, no. 5 (2008): 2232–60. http://dx.doi.org/10.1214/07-aos544.

42

Li, Bochong, and Lingchong You. "Stochastic Sensitivity Analysis and Kernel Inference via Distributional Data." Biophysical Journal 107, no. 5 (2014): 1247–55. http://dx.doi.org/10.1016/j.bpj.2014.07.025.

43

Li, Degui, Peter C. B. Phillips, and Jiti Gao. "Kernel-based Inference in Time-Varying Coefficient Cointegrating Regression." Journal of Econometrics 215, no. 2 (2020): 607–32. http://dx.doi.org/10.1016/j.jeconom.2019.10.005.

44

González-Vanegas, Wilson, Andrés Álvarez-Meza, José Hernández-Muriel, and Álvaro Orozco-Gutiérrez. "AKL-ABC: An Automatic Approximate Bayesian Computation Approach Based on Kernel Learning." Entropy 21, no. 10 (2019): 932. http://dx.doi.org/10.3390/e21100932.

Abstract:
Bayesian statistical inference under unknown or hard to assess likelihood functions is a very challenging task. Currently, approximate Bayesian computation (ABC) techniques have emerged as a widely used set of likelihood-free methods. A vast number of ABC-based approaches have appeared in the literature; however, they all share a hard dependence on free parameters selection, demanding expensive tuning procedures. In this paper, we introduce an automatic kernel learning-based ABC approach, termed AKL-ABC, to automatically compute posterior estimations from a weighting-based inference. To reach t
45

Huh, Jaeseok, Jonghun Park, Dongmin Shin, and Yerim Choi. "A Hierarchical SVM Based Behavior Inference of Human Operators Using a Hybrid Sequence Kernel." Sustainability 11, no. 18 (2019): 4836. http://dx.doi.org/10.3390/su11184836.

Abstract:
To train skilled unmanned combat aerial vehicle (UCAV) operators, it is important to establish a real-time training environment where an enemy appropriately responds to the action performed by a trainee. This can be addressed by constructing the inference method for the behavior of a UCAV operator from given simulation log data. Through this method, the virtual enemy is capable of performing actions that are highly likely to be made by an actual operator. To achieve this, we propose a hybrid sequence (HS) kernel-based hierarchical support vector machine (HSVM) for the behavior inference of a U
46

Xiao, Heng, Donglin Jing, Fujun Zhao, and Shaokang Zha. "Feature Symmetry Fusion Remote Sensing Detection Network Based on Spatial Adaptive Selection." Symmetry 17, no. 4 (2025): 602. https://doi.org/10.3390/sym17040602.

Abstract:
This paper proposes a spatially adaptive feature fine fusion network consisting of a Fast Convolution Decomposition Sequence (FCDS) and a Spatial Selection Mechanism (SSM). Firstly, in FCDS, a large kernel convolution decomposition operation is used to break down dense convolution kernels into small convolutions with gradually increasing hole rates, forming a continuous kernel sequence to obtain finer scale features. This approach significantly reduces the number of parameters, improves network inference efficiency, and preserves the spatial feature expression ability of the network. Notably,
47

Song, Le, Kenji Fukumizu, and Arthur Gretton. "Kernel Embeddings of Conditional Distributions: A Unified Kernel Framework for Nonparametric Inference in Graphical Models." IEEE Signal Processing Magazine 30, no. 4 (2013): 98–111. http://dx.doi.org/10.1109/msp.2013.2252713.

48

Wang, Lin, Shuqiao Zhou, Tianhao Zhang, Chao Guo, and Xiaojin Huang. "An Unsupervised Anomaly Detection Method for Nuclear Reactor Coolant Pumps Based on Kernel Self-Organizing Map and Bayesian Posterior Inference." Energies 18, no. 11 (2025): 2887. https://doi.org/10.3390/en18112887.

Abstract:
Effectively monitoring the operational status of reactor coolant pumps (RCPs) is crucial for enhancing the safety and stability of nuclear power operations. To address the challenges of limited interpretability and suboptimal detection performance in existing methods for detecting abnormal operating states of RCPs, this paper proposes an interpretable, unsupervised anomaly detection approach. This innovative method designs a framework that combines Kernel Self-Organizing Map (Kernel SOM) clustering with Bayesian Posterior Inference. Specifically, the proposed method uses Kernel SOM to extract
49

Lee, Dong-Yeong, Hayotjon Aliev, Muhammad Junaid, et al. "High-Speed CNN Accelerator SoC Design Based on a Flexible Diagonal Cyclic Array." Electronics 13, no. 8 (2024): 1564. http://dx.doi.org/10.3390/electronics13081564.

Abstract:
The latest convolutional neural network (CNN) models for object detection include complex layered connections to process inference data. Each layer utilizes different types of kernel modes, so the hardware needs to support all kernel modes at an optimized speed. In this paper, we propose a high-speed and optimized CNN accelerator with flexible diagonal cyclic arrays (FDCA) that supports the acceleration of CNN networks with various kernel sizes and significantly reduces the time required for inference processing. The accelerator uses four FDCAs to simultaneously calculate 16 input channels and
50

Dixit, Purushottam D. "Introducing User-Prescribed Constraints in Markov Chains for Nonlinear Dimensionality Reduction." Neural Computation 31, no. 5 (2019): 980–97. http://dx.doi.org/10.1162/neco_a_01184.

Abstract:
Stochastic kernel-based dimensionality-reduction approaches have become popular in the past decade. The central component of many of these methods is a symmetric kernel that quantifies the vicinity between pairs of data points and a kernel-induced Markov chain on the data. Typically, the Markov chain is fully specified by the kernel through row normalization. However, in many cases, it is desirable to impose user-specified stationary-state and dynamical constraints on the Markov chain. Unfortunately, no systematic framework exists to impose such user-defined constraints. Here, based on our pre