Journal articles on the topic 'Epanechnikov kernel'


Consult the top 33 journal articles for your research on the topic 'Epanechnikov kernel.'


1

Mesquita, D. P. P., J. P. P. Gomes, and A. H. Souza Junior. "Epanechnikov kernel for incomplete data." Electronics Letters 53, no. 21 (October 2017): 1408–10. http://dx.doi.org/10.1049/el.2017.0507.

2

Chu, Chi-Yang, Daniel J. Henderson, and Christopher F. Parmeter. "On discrete Epanechnikov kernel functions." Computational Statistics & Data Analysis 116 (December 2017): 79–105. http://dx.doi.org/10.1016/j.csda.2017.07.003.

3

Du, Xiang Ran, Hai Tao Liu, and Min Zhang. "Comparative Analysis on Kernel Based Probability Density Estimation." Applied Mechanics and Materials 543-547 (March 2014): 1655–58. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.1655.

Abstract:
In this paper, we compare the estimation performance of 7 different kernels (Uniform, Triangular, Epanechnikov, Biweight, Triweight, Cosine, and Gaussian) when they are used for probability density estimation with the Parzen window method. We first analyze the efficiencies of these 7 kernels and then compare their estimation errors as measured by mean squared error (MSE). The theoretical analysis and the experimental comparisons show that the commonly used Gaussian kernel is not the best choice for probability density estimation: its efficiency is low and its estimation error is high. The conclusions provide guidelines for kernel selection in practical applications of probability density estimation.
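
The comparison this abstract describes can be reproduced in outline with a few lines of NumPy. The sketch below is only an illustration of the setup, not the authors' code: it assumes standard normal data, two of the seven kernels (Epanechnikov and Gaussian), and normal-reference rule-of-thumb bandwidths, and reports the mean squared error of each Parzen-window estimate against the true density.

```python
import numpy as np
from scipy.stats import norm

def epanechnikov(u):
    # K(u) = 0.75 * (1 - u^2) on [-1, 1], zero outside
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def parzen_kde(x_eval, sample, h, kernel):
    # Parzen-window estimate: average of rescaled kernels centred at the sample points
    u = (x_eval[:, None] - sample[None, :]) / h
    return kernel(u).mean(axis=1) / h

rng = np.random.default_rng(0)
sample = rng.standard_normal(500)                  # data drawn from N(0, 1)
grid = np.linspace(-4.0, 4.0, 201)
true_density = norm.pdf(grid)

n, sigma = len(sample), sample.std()
bandwidths = {                                     # normal-reference rules of thumb
    "Gaussian": 1.06 * sigma * n ** (-1 / 5),
    "Epanechnikov": 2.34 * sigma * n ** (-1 / 5),
}
kernels = {"Gaussian": norm.pdf, "Epanechnikov": epanechnikov}

for name in kernels:
    est = parzen_kde(grid, sample, bandwidths[name], kernels[name])
    mse = np.mean((est - true_density) ** 2)
    print(f"{name:12s} h = {bandwidths[name]:.3f}  MSE = {mse:.2e}")
```
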
4

Karczewski, Maciej, and Andrzej Michalski. "The study and comparison of one-dimensional kernel estimators – a new approach. Part 2. A hydrology case study." ITM Web of Conferences 23 (2018): 00018. http://dx.doi.org/10.1051/itmconf/20182300018.

Abstract:
The main purpose of this article is to present the numerical consequences of selected methods of kernel estimation, using the example of empirical data from a hydrological experiment [1, 2]. In the construction of kernel estimators we used two types of kernels – Gaussian and Epanechnikov – and several methods of selecting the optimal smoothing bandwidth (see Part 1), based on various statistical and analytical conditions [3–6]. Further analysis of the properties of kernel estimators is limited to eight characteristic estimators. To assess the effectiveness of the considered estimates and their similarity, we applied the distance measure of Marczewski and Steinhaus [7]. Theoretical and numerical considerations enable the development of an algorithm for the selection of locally most effective kernel estimators.
5

IVAN, KOMANG CANDRA, I. WAYAN SUMARJAYA, and MADE SUSILAWATI. "ANALISIS MODEL REGRESI NONPARAMETRIK SIRKULAR-LINEAR BERGANDA." E-Jurnal Matematika 5, no. 2 (May 31, 2016): 52. http://dx.doi.org/10.24843/mtk.2016.v05.i02.p121.

Abstract:
Circular data are data whose values, expressed as vectors, lie on a circle. The statistical analysis used for such data is circular statistics. In regression analysis, if any of the predictor or response variables, or both, are circular, the analysis is called circular regression. Observational data in circular statistics that use direction and time units usually do not satisfy the parametric assumptions, which makes nonparametric regression a good alternative. The nonparametric regression function is estimated with the Epanechnikov kernel estimator for the linear variables and the von Mises kernel estimator for the circular variable. This study shows that circular descriptive statistics describe these data better than ordinary descriptive statistics. Multiple circular-linear nonparametric regression with the Epanechnikov and von Mises kernel estimators does not produce an explicit estimated model as parametric regression does, but instead produces estimates at the observation knots.
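
The estimator sketched below is a generic circular-linear Nadaraya-Watson smoother of the type this abstract describes (Epanechnikov kernel for the linear predictor, von Mises kernel for the circular one). The toy data, bandwidth h, and concentration kappa are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.special import i0

def epanechnikov(u):
    # Epanechnikov kernel for the linear predictor
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def von_mises(theta, theta0, kappa):
    # von Mises density used as the kernel for the circular predictor
    return np.exp(kappa * np.cos(theta - theta0)) / (2.0 * np.pi * i0(kappa))

def nw_circular_linear(x0, theta0, x, theta, y, h=1.0, kappa=4.0):
    # Nadaraya-Watson estimate at (x0, theta0) with a product kernel
    w = epanechnikov((x - x0) / h) * von_mises(theta, theta0, kappa)
    return np.nan if w.sum() == 0 else np.sum(w * y) / np.sum(w)

# toy data: the response depends on one linear variable and one direction
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 300)
theta = rng.uniform(0.0, 2.0 * np.pi, 300)
y = 2.0 * x + 3.0 * np.sin(theta) + rng.normal(0.0, 0.5, 300)

# fitted value near the true regression value 2*5 + 3*sin(pi/2) = 13
print(nw_circular_linear(5.0, np.pi / 2, x, theta, y))
```
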
6

Yu, Liang-ju, Gen-ke Yang, and Yue Chen. "Improved Independent Component Analysis Based on Epanechnikov Kernel Function." International Journal of Control and Automation 9, no. 7 (July 31, 2016): 147–58. http://dx.doi.org/10.14257/ijca.2016.9.7.14.

7

Kalita, Jumi, and Pranita Sarmah. "Application of Epanechnikov kernel smoothing technique in disability data." International Journal of Intelligent Systems Design and Computing 1, no. 1/2 (2017): 198. http://dx.doi.org/10.1504/ijisdc.2017.082874.

8

Kalita, Jumi, and Pranita Sarmah. "Application of Epanechnikov kernel smoothing technique in disability data." International Journal of Intelligent Systems Design and Computing 1, no. 1/2 (2017): 198. http://dx.doi.org/10.1504/ijisdc.2017.10003810.

9

Czesak, Barbara, Renata Różycka-Czas, Tomasz Salata, Robert Dixon-Gough, and Józef Hernik. "Determining the Intangible: Detecting Land Abandonment at Local Scale." Remote Sensing 13, no. 6 (March 18, 2021): 1166. http://dx.doi.org/10.3390/rs13061166.

Abstract:
Precisely determining agricultural land abandonment (ALA) in an area is still difficult, even with recent progress in data collection and analysis. It is especially difficult in fragmented areas that need more tailor-made methods. The aim of this research was to determine ALA using airborne laser scanning (ALS) data, which are available in Poland with 4 to 6 points per square metre resolution. ALS data were processed into heat maps and modified with chosen kernel functions: triweight and Epanechnikov. The results of ALS data processing were compared to the control method, i.e., visual interpretation of an orthophotomap. This study shows that ALS data modelled with kernel functions allow for a good identification of ALA. The accuracy of results shows 82% concordance as compared to the control method. When comparing triweight and Epanechnikov functions, higher accuracy was achieved when using the triweight function. The research shows that ALS data processing is a promising method of detection of ALA and could provide an alternative to well-known methods such as the analysis of satellite images.
10

Karczewski, Maciej, and Andrzej Michalski. "The study and comparison of one-dimensional kernel estimators – a new approach. Part 1. Theory and methods." ITM Web of Conferences 23 (2018): 00017. http://dx.doi.org/10.1051/itmconf/20182300017.

Abstract:
In this article we compare and examine the effectiveness of different kernel density estimates for some experimental data. For a given random sample X1, X2, …, Xn we present eight kernel estimators fn of the density function f with the Gaussian kernel and with the kernel given by Epanechnikov [1] using several methods: Silverman’s rule of thumb, the Sheather–Jones method, cross-validation methods, and other better-known plug-in methods [2–5]. To assess the effectiveness of the considered estimators and their similarity, we applied a distance measure for measurable and integrable functions [6]. All numerical calculations were performed for a set of experimental data recording groundwater level at a land reclamation facility (cf. [7–8]). The goal of the paper is to present a method that allows the study of local properties of the examined kernel estimators.
11

Moraes, Caroline P. A., Denis G. Fantinato, and Aline Neves. "Epanechnikov kernel for PDF estimation applied to equalization and blind source separation." Signal Processing 189 (December 2021): 108251. http://dx.doi.org/10.1016/j.sigpro.2021.108251.

12

Lee, Hyojin, and Kwangmin Kang. "Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling." Advances in Meteorology 2015 (2015): 1–12. http://dx.doi.org/10.1155/2015/935868.

Abstract:
Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely, Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares estimation of missing precipitation data through Kth nearest neighborhood (KNN) regression to the five different kernel estimations and their performance in simulating streamflow using the Soil Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher quality interpolation of precipitation data compared with the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
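
In outline, kernel-based interpolation of a missing daily value weights the neighbouring gauges by a kernel of their distance. The sketch below is a minimal distance-weighted version using the Epanechnikov kernel, with made-up station coordinates and bandwidth; the paper's full estimator and its coupling to SWAT are not reproduced here.

```python
import numpy as np

def epanechnikov(u):
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def interpolate_missing(target_xy, station_xy, station_precip, bandwidth):
    """Estimate precipitation at a gauge with a missing record as an
    Epanechnikov-kernel-weighted average of the surrounding stations."""
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    w = epanechnikov(d / bandwidth)
    if w.sum() == 0.0:              # no station falls inside the kernel support
        return np.nan
    return np.sum(w * station_precip) / np.sum(w)

# hypothetical example: three nearby gauges (coordinates in km), one day of rainfall (mm)
stations = np.array([[0.0, 0.0], [5.0, 2.0], [8.0, 9.0]])
rain = np.array([12.0, 9.5, 4.0])
print(interpolate_missing(np.array([3.0, 3.0]), stations, rain, bandwidth=15.0))
```
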
13

Gyamerah, Samuel Asante, Philip Ngare, and Dennis Ikpe. "Probabilistic forecasting of crop yields via quantile random forest and Epanechnikov Kernel function." Agricultural and Forest Meteorology 280 (January 2020): 107808. http://dx.doi.org/10.1016/j.agrformet.2019.107808.

14

Ali, Fayyadh, and Tareq Salih. "Analysis of semi-parametric single-index models by using MAVE-method based on some kernel functions." International Journal of Advanced Statistics and Probability 5, no. 1 (May 11, 2017): 37. http://dx.doi.org/10.14419/ijasp.v5i1.7258.

Abstract:
In this paper, we use several kernel functions with the minimum average variance estimation (MAVE) method [Xia 2002]; we call the proposed methods MAVE-Biweight, MAVE-Epanechnikov, and MAVE-Gaussian. They are used to estimate the parameters and the link function of the single-index model (SIM) and are compared with other estimation methods. Simulations and real data were used to evaluate the performance of the various methods. The conclusions show that the MAVE-Gaussian method gave better results than the other methods according to the mean squared error (MSE) and mean absolute error (MAE) criteria.
15

Batóg, Barbara, Jacek Batóg, and Magdalena Mojsiewicz. "Application of Kernel Estimation in Analysis of Labour Productivity of the Largest Polish Firms in 2004-2008." Folia Oeconomica Stetinensia 8, no. 1 (January 1, 2009): 126–39. http://dx.doi.org/10.2478/v10031-009-0007-5.

Abstract:
The parametric methods of statistical and econometric analysis are not always useful in examining the labour productivity of economic entities. In previous work the authors found that labour productivity lacks stable regularities in its structure and interdependencies. In that case it is possible to apply non-parametric methods. In the paper the authors model the distributions of labour productivity over time by means of kernel estimation, using classical approaches (Epanechnikov, Rosenblatt) and a new proposal called kernel B. The proposed approach is a useful merger of statistical modelling theory and economic practice that allows one to analyse changes in labour productivity, an essential factor for long-term economic growth and the welfare of society. The empirical results show that labour productivity in the largest Polish companies increased in 2004-2008, but the growth dynamics differed across economic sectors.
16

Ramadhani, Puput, Dwi Ispriyanti, and Diah Safitri. "KAPABILITAS PROSES DENGAN ESTIMASI FUNGSI DENSITAS KERNEL PADA PRODUKSI DENIM DI PT APAC INTI CORPORA." Jurnal Gaussian 7, no. 3 (August 29, 2018): 326–36. http://dx.doi.org/10.14710/j.gauss.v7i3.26665.

Abstract:
Production quality is one of the basic factors in consumers' decisions when choosing a product, and quality control is needed to control the production process. The control chart is a tool used in statistical quality control. One alternative when the distribution of the data is unknown is a nonparametric approach based on kernel density function estimation. The key step in kernel density estimation is selecting the optimal bandwidth (h), which minimizes the Cross Validation (CV) value. The kernel functions used in this research are the Rectangular, Epanechnikov, Triangular, Biweight, and Gaussian kernels. If the process is statistically in control on the control chart, a process capability analysis can be carried out using the process conformity index to determine the nature of the process capability. In this research, the kernel control chart and the process conformity index were used to analyze the slope shift of the Akira-F and Corvus-SI fabric styles in denim production at PT Apac Inti Corpora. The results show that the production process for the Akira-F style is statistically in control, but Ypk > Yp (0.889823 > 0.508059), indicating that the process does not yet conform to the limits set by the company, while the Corvus-SI style is statistically in control with Ypk < Yp (0.637742 < 0.638776), indicating that the process conforms to the specification limits set by the company. Keywords: kernel density function estimation, Cross Validation, kernel control chart, denim fabric, process capability
17

Blokh, A. I., N. A. Penyevskaya, N. V. Rudakov, O. A. Mikhaylova, A. S. Fedorov, A. V. Sannikov, and S. V. Nikitin. "Geographic information systems as a part of epidemiological surveillance for COVID-19 in urban areas." Fundamental and Clinical Medicine 6, no. 2 (July 1, 2021): 16–23. http://dx.doi.org/10.23946/2500-0764-2021-6-2-16-23.

Abstract:
Aim. To identify clustering areas of COVID-19 cases during the first 3 months of the pandemic in a city of one million. Materials and Methods. We collected data on polymerase chain reaction verified cases of novel coronavirus infection (COVID-19) in Omsk for the period from April 15 until July 1, 2020. We drew heat maps using the Epanechnikov kernel and calculated the Getis-Ord general G statistic (Gi*). Analysis of geographic information was carried out in QGIS 3.14 Pi (qgis.org) using the Visualist plugin. Results. Having inspected the spatial distribution of COVID-19 cases, we identified certain clustering areas. The spread of COVID-19 involved the Sovietskiy, Central, and Kirovskiy districts, and also the Leninskiy and Oktyabrskiy districts a short time later. We found an uneven spatiotemporal distribution of COVID-19 cases across Omsk, as 13 separate clusters were documented in all administrative districts of the city. Conclusions. Rapid assessment of the spatial distribution of the infection using geographic information systems enables the design of kernel density maps and harbors considerable potential for real-time planning of preventive measures.
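
The heat-map step described here is essentially a two-dimensional kernel density estimate with the Epanechnikov kernel. The sketch below reproduces only that generic step outside QGIS, with hypothetical case coordinates, grid, and bandwidth; it is not the Visualist plugin's implementation.

```python
import numpy as np

def epanechnikov_2d(u, v):
    # product Epanechnikov kernel, zero outside the unit square
    inside = (np.abs(u) <= 1.0) & (np.abs(v) <= 1.0)
    return np.where(inside, 0.5625 * (1.0 - u**2) * (1.0 - v**2), 0.0)

def kde_heatmap(points, xgrid, ygrid, h):
    """Kernel density of case locations evaluated on a regular grid (a 'heat map')."""
    gx, gy = np.meshgrid(xgrid, ygrid)
    dens = np.zeros_like(gx)
    for px, py in points:
        dens += epanechnikov_2d((gx - px) / h, (gy - py) / h)
    return dens / (len(points) * h**2)

# made-up case coordinates (km) on a 10 km x 10 km area, evaluated on a 0.1 km grid
rng = np.random.default_rng(3)
cases = rng.normal(loc=[5.0, 5.0], scale=1.5, size=(200, 2))
grid = np.linspace(0.0, 10.0, 101)
heat = kde_heatmap(cases, grid, grid, h=1.0)
print(heat.max(), heat.sum() * 0.1**2)   # peak intensity; total mass integrates to about 1
```
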
18

Zhang Qinli and Chen Yu. "An Improved Generalized Fuzzy Model Based on Epanechnikov Quadratic Kernel and Its Application to Nonlinear System Identification." Automatic Control and Computer Sciences 53, no. 1 (January 2019): 12–21. http://dx.doi.org/10.3103/s0146411619010103.

19

He, Hui, Junting Pan, Nanyan Lu, Bo Chen, and Runhai Jiao. "Short-term load probabilistic forecasting based on quantile regression convolutional neural network and Epanechnikov kernel density estimation." Energy Reports 6 (December 2020): 1550–56. http://dx.doi.org/10.1016/j.egyr.2020.10.053.

20

Mikkonen, S., K. E. J. Lehtinen, A. Hamed, J. Joutsensaari, M. C. Facchini, and A. Laaksonen. "Using discriminant analysis as a nucleation event classification method." Atmospheric Chemistry and Physics 6, no. 12 (December 11, 2006): 5549–57. http://dx.doi.org/10.5194/acp-6-5549-2006.

Abstract:
Abstract. More than three years of measurements of aerosol size distribution and different gas and meteorological parameters made in the Po Valley, Italy, were analysed in this study to examine which of the meteorological and trace gas variables affect the emergence of nucleation events. As the analysis method, we used discriminant analysis with a non-parametric Epanechnikov kernel, as part of a non-parametric density estimation method. The best classification result in our data was reached with the combination of relative humidity, ozone concentration, and a third-degree polynomial of radiation. RH appeared to have an inhibiting effect on new particle formation, whereas the effects of O3 and radiation were more conducive to it. The concentrations of SO2 and NO2 also appeared to have a significant effect on the emergence of nucleation events, but because of the large number of missing observations we had to exclude them from the final analysis.
21

Gui, Yi, and Nong Cheng. "An Improved Approach of Terrain-Aided Navigation Based on RBPF." Applied Mechanics and Materials 336-338 (July 2013): 336–42. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.336.

Abstract:
The Rao-Blackwellized Particle Filter (RBPF) is suitable for solving the mixed linear/nonlinear Terrain-Aided Navigation (TAN) problem. However, the particle filter (PF) part of the RBPF is a standard particle filter (SPF), which causes particle diversity to shrink and can even make the filter diverge under extreme conditions. To obtain a better estimate of the INS errors, this paper proposes an improved approach called the Regularized Rao-Blackwellized Particle Filter (RRBPF). After updating the nonlinear state and the corresponding importance weights, the RRBPF resamples from the Epanechnikov kernel and then obtains the resampled particles through a linear transition process. Theoretically, the resampling part of the RRBPF is equivalent to resampling from an approximated continuous posterior probability density function. Shuttle Radar Topography Mission (SRTM) terrain data are used in simulations to investigate the performance of the RRBPF. Results show that the RRBPF provides more accurate TAN estimates and tolerates a larger initial position error than Sandia Inertial Terrain-Aided Navigation (SITAN).
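
The regularized resampling idea mentioned in this abstract amounts to perturbing each resampled particle with a draw from the Epanechnikov kernel so that resampling approximates a continuous posterior. The sketch below shows a generic one-dimensional version of that step, not the paper's RRBPF (and without the linear transition it applies afterwards); the median-of-three-uniforms draw is a standard way to sample the Epanechnikov density.

```python
import numpy as np

def sample_epanechnikov(size, rng):
    # The median of three independent Uniform(-1, 1) draws has the
    # Epanechnikov density 0.75 * (1 - x^2) on [-1, 1].
    u = rng.uniform(-1.0, 1.0, size=(3, size))
    return np.median(u, axis=0)

def regularized_resample(particles, weights, bandwidth, rng):
    """Multinomial resampling followed by an Epanechnikov-kernel jitter,
    as in a generic regularized particle filter (1-D sketch)."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx] + bandwidth * sample_epanechnikov(len(particles), rng)

# toy usage: concentrated weights would collapse particle diversity; the jitter restores it
rng = np.random.default_rng(42)
particles = rng.normal(0.0, 1.0, 1000)
weights = np.exp(-0.5 * ((particles - 0.3) / 0.2) ** 2)
weights /= weights.sum()
print(np.unique(regularized_resample(particles, weights, 0.1, rng)).size)
```
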
22

Mikkonen, S., K. E. J. Lehtinen, A. Hamed, J. Joutsensaari, M. C. Facchini, and A. Laaksonen. "Using discriminant analysis as a nucleation event classification method." Atmospheric Chemistry and Physics Discussions 6, no. 5 (September 7, 2006): 8485–510. http://dx.doi.org/10.5194/acpd-6-8485-2006.

Abstract:
Abstract. More than three years of measurements of aerosol size distribution and different gas and meteorological parameters made in the Po Valley, Italy, were analysed in this study to examine which of the meteorological and trace gas variables affect the emergence of nucleation events. As the analysis method, we used discriminant analysis with a non-parametric Epanechnikov kernel, as part of a non-parametric density estimation method. The best classification result in our data was reached with the combination of relative humidity, ozone concentration, and a third-degree polynomial of radiation. RH appeared to have an inhibiting effect on new particle formation, whereas the effects of O3 and radiation were more conducive to it. The concentrations of SO2 and NO2 also appeared to have a significant effect on the emergence of nucleation events, but because of the large number of missing observations we had to exclude them from the final analysis.
23

Darmawan, Indra, Hermanto Siregar, Dedi Budiman Hakim, and Adler Haymans Manurung. "World Oil Price Changes and Inflation in Indonesia: A Nonparametric Regression Approach." Signifikan: Jurnal Ilmu Ekonomi 10, no. 1 (March 14, 2021): 161–76. http://dx.doi.org/10.15408/sjie.v10i1.19010.

Abstract:
This study aims to investigate the effect of world oil price changes on inflation in Indonesia. We used a nonparametric regression approach that has never been employed in previous studies, either domestically or internationally. This study shows that the second-order Epanechnikov kernel function is statistically significant in explaining the effect of world oil price changes on Indonesia's inflation. We found that world oil price changes had a lower effect on Indonesia's inflation when the price was below USD 100 per barrel, and a higher effect when the price was above USD 100 per barrel. These results have important implications for Bank Indonesia and Indonesia's government in responding to world oil price changes. Policies aimed at reducing the effect of world oil price changes on inflation in Indonesia should take the world oil price level into account. JEL Classification: A12, B26, C14, E31, F29, F62
24

Ma, Hongtao, Yuan Meng, Hanfa Xing, and Cansong Li. "Investigating Road-Constrained Spatial Distributions and Semantic Attractiveness for Area of Interest." Sustainability 11, no. 17 (August 26, 2019): 4624. http://dx.doi.org/10.3390/su11174624.

Abstract:
An area of interest (AOI) refers to an urban area that attracts people's attention and serves different urban functions throughout a city. The wide availability of big geo-data able to capture human activities and environmental socioeconomics enables a more nuanced identification of AOIs. Current research has proposed various approaches to delineate continuous AOI patterns using big geo-data. However, these approaches ignore the effects of urban structures such as road networks on reshaping AOIs, and fail to investigate the attractiveness and the functions within AOIs. To fill this gap, this paper proposes a systematic framework to investigate the spatial distribution of road-constrained AOIs and analyze their semantic attractiveness. First, we propose an Epanechnikov-based kernel density estimation (KDE) with a bandwidth selection strategy to extract road-constrained AOIs. Then, we establish semantic attractiveness indices for AOIs based on textual information and the number of reviews. Finally, we investigate in detail the spatial distribution and semantic attractiveness of AOIs in Yuexiu, Guangzhou. The results show that road-constrained AOIs not only effectively capture the human activity patterns influenced by urban structures, but also depict certain urban functions, including entertainment, public, service, hotel, education, and food functions. This method provides a quantitative reference for monitoring urban structures and human activities to support city planning.
25

Shah, Ismail, Hasnain Iftikhar, Sajid Ali, and Depeng Wang. "Short-Term Electricity Demand Forecasting Using Components Estimation Technique." Energies 12, no. 13 (July 1, 2019): 2532. http://dx.doi.org/10.3390/en12132532.

Abstract:
Currently, in most countries, the electricity sector is liberalized, and electricity is traded in deregulated electricity markets. In these markets, electricity demand is determined the day before the physical delivery through (semi-)hourly concurrent auctions. Hence, accurate forecasts are essential for efficient and effective management of power systems. The electricity demand and prices, however, exhibit specific features, including non-constant mean and variance, calendar effects, multiple periodicities, high volatility, jumps, and so on, which complicate the forecasting problem. In this work, we compare different modeling techniques able to capture the specific dynamics of the demand time series. To this end, the electricity demand time series is divided into two major components: deterministic and stochastic. Both components are estimated using different regression and time series methods with parametric and nonparametric estimation techniques. Specifically, we use linear regression-based models (local polynomial regression models based on different types of kernel functions; tri-cubic, Gaussian, and Epanechnikov), spline function-based models (smoothing splines, regression splines), and traditional time series models (autoregressive moving average, nonparametric autoregressive, and vector autoregressive). Within the deterministic part, special attention is paid to the estimation of the yearly cycle as it was previously ignored by many authors. This work considers electricity demand data from the Nordic electricity market for the period covering 1 January 2013–31 December 2016. To assess the one-day-ahead out-of-sample forecasting accuracy, Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE) are calculated. The results suggest that the proposed component-wise estimation method is extremely effective at forecasting electricity demand. Further, vector autoregressive modeling combined with spline function-based regression gives superior performance compared with the rest.
26

Dudka, Alexander. "Instrumental drift correction by nonparametric statistics." Journal of Applied Crystallography 42, no. 2 (February 7, 2009): 354–55. http://dx.doi.org/10.1107/s0021889809002271.

Abstract:
A program for instrumental drift correction by nonparametric statistics using a point-detector diffractometer is described. The correction improves structural results such as merging and refinement R factors. The kernels available are as follows: uniform, Epanechnikov, quartic, octic, Gaussian.
27

Utkin, Lev V., Anatoly I. Chekh, and Yulia A. Zhuk. "Binary classification SVM-based algorithms with interval-valued training data using triangular and Epanechnikov kernels." Neural Networks 80 (August 2016): 53–66. http://dx.doi.org/10.1016/j.neunet.2016.04.005.

28

Agarwal, Ravi Kumar, and Vignesh Ramakrishnan. "Epanechnikov Kernel Estimation of Value at Risk." SSRN Electronic Journal, 2010. http://dx.doi.org/10.2139/ssrn.1537087.

29

Nurfitri Imro’ah, Sani, Setyo Wira Rizki. "BAGAN KENDALI NONPARAMETRIK DENGAN ESTIMASI FUNGSI KEPEKATAN KERNEL (Studi Kasus: Indeks Prestasi Mahasiswa Jurusan Matematika Angkatan 2014-2016 FMIPA Untan pada Semester Genap 2016/2017)." Bimaster: Buletin Ilmiah Matematika, Statistika dan Terapannya 8, no. 1 (January 7, 2019). http://dx.doi.org/10.26418/bbimst.v8i1.30862.

Abstract:
The Grade Point Average (GPA, Indeks Prestasi) is one measure of learning achievement in university coursework. To keep the GPA consistent and stable, quality control is needed, and quality control can be carried out with a control chart. The control chart used here is a nonparametric control chart with a kernel approach. The first step is to choose the kernel functions to be used; the density function is then estimated with each of these kernel functions. Based on the estimated kernel density, the optimal bandwidth is determined and the control limits are computed, and these limits are used to construct the control chart. The data are GPA records of students of the Department of Mathematics, FMIPA Untan, cohorts 2014-2016, for the even semester of 2016/2017. The results show that, for this case, the Triangular kernel density estimate performs best compared with the Epanechnikov, Biweight, and Gaussian kernels. Keywords: control chart, kernel density, nonparametric
30

"Blind Steganalysis for JPEG Images using SVM and SVM-PSO Classifiers." International Journal of Innovative Technology and Exploring Engineering 8, no. 11S (October 11, 2019): 1239–46. http://dx.doi.org/10.35940/ijitee.k1250.09811s19.

Abstract:
Blind steganalysis, or universal steganalysis, helps to identify hidden information without prior knowledge of the content or the embedding technique. The Support Vector Machine (SVM) and SVM-Particle Swarm Optimization (SVM-PSO) classifiers are adopted for the proposed blind steganalysis. The important features of the JPEG images are extracted using the Discrete Cosine Transform (DCT). The kernel functions used for the classifiers in the proposed work are the linear, Epanechnikov, multi-quadratic, radial, ANOVA, and polynomial kernels. The proposed work uses linear, shuffle, stratified, and automatic sampling techniques. It employs four image-embedding techniques, namely Least Significant Bit (LSB) matching, LSB replacement, Pixel Value Differencing (PVD), and F5, and applies 25% embedding. The data for the classifier are split 80:20 for training and testing, and 10-fold cross-validation is carried out.
31

Pereima, João Basilio, and Alexandre Porsse. "Transição demográfica, acumulação de capital e progresso tecnológico: desafios para o crescimento brasileiro." Revista Economia & Tecnologia 9, no. 1 (April 12, 2013). http://dx.doi.org/10.5380/ret.v9i1.31407.

Abstract:
This article addresses the effects of the demographic transition on economies' capacity for growth, discusses the Brazilian demographic transition, and highlights the restrictive effects that the end of the demographic bonus and the rise in the old-age dependency ratio will impose on the investment and saving rates and, therefore, on growth, comparing the Brazilian transition with what is happening in the rest of the world. The article also presents a nonparametric estimation, using the Epanechnikov kernel method, to demonstrate the nonlinear relationships between the old-age dependency ratio and GDP growth, the investment rate, and the saving rate for a sample of 140 countries. It concludes that once the old-age dependency ratio reaches 8 the growth rate begins to fall, and once the ratio reaches 12% the investment and saving rates also fall. The article concludes that the Brazilian case is already very close to this inflection point.
32

Poda, Pasteur, Samir Saoudi, Thierry Chonavel, Frédéric GUILLOUD, and Théodore Tapsoba. "Non-parametric kernel-based bit error probability estimation in digital communication systems: An estimator for soft coded QAM BER computation." Revue Africaine de la Recherche en Informatique et Mathématiques Appliquées Volume 27 - 2017 - Special... (August 3, 2018). http://dx.doi.org/10.46298/arima.4348.

Abstract:
Standard Monte Carlo estimation of rare-event probabilities suffers from excessive computation time. To make estimation faster, kernel-based estimators have proved more efficient for binary systems and are better suited to situations where the probability density function of the samples is unknown. We propose a kernel-based Bit Error Probability (BEP) estimator for coded M-ary Quadrature Amplitude Modulation (QAM) systems. We define soft real-valued bits upon which an Epanechnikov kernel-based estimator is designed. Compared with the standard Monte Carlo technique, simulation results showed accurate, reliable, and efficient BEP estimates for 4-QAM and 16-QAM symbol transmissions over the additive white Gaussian noise channel and over a frequency-selective Rayleigh fading channel.
33

Bodyanskiy, Yevgeniy, Anastasiia Deineko, Antonina Bondarchuk, and Maksym Shalamov. "Kernel Online System for Fast Principal Component Analysis and its Adaptive Learning." International Journal of Computing, June 28, 2021, 175–80. http://dx.doi.org/10.47839/ijc.20.2.2164.

Abstract:
An artificial neural system for data compression that sequentially processes linearly nonseparable classes is proposed. The main elements of this system include adjustable radial-basis functions (Epanechnikov’s kernels), an adaptive linear associator learned by a multistep optimal algorithm, and Hebb-Sanger neural network whose nodes are formed by Oja’s neurons. For tuning the modified Oja’s algorithm, additional filtering (in case of noisy data) and tracking (in case of nonstationary data) properties were introduced. The main feature of the proposed system is the ability to work in conditions of significant nonlinearity of the initial data that are sequentially fed to the system and have a non-stationary nature. The effectiveness of the developed approach was confirmed by the experimental results. The proposed kernel online neural system is designed to solve compression and visualization tasks when initial data form linearly nonseparable classes in general problem of Data Stream Mining and Dynamic Data Mining. The main benefit of the proposed approach is high speed and ability to process data whose characteristics are changed in time.