Academic literature on the topic 'Neighbour Mean Interpolation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neighbour Mean Interpolation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Neighbour Mean Interpolation"

1. Rudder, Andrew, Wayne Goodridge, and Shareeda Mohammed. "Using Bias Optimization for Reversible Data Hiding Using Image Interpolation." International Journal of Network Security & Its Applications (IJNSA) 5, no. 2 (2013): 65–76. https://doi.org/10.5281/zenodo.3955520.

Abstract:
In this paper, we propose a reversible data hiding method in the spatial domain for compressed grayscale images. The proposed method embeds secret bits into a compressed thumbnail of the original image by using a novel interpolation method and the Neighbour Mean Interpolation (NMI) technique as scaling up to the original image occurs. Experimental results presented in this paper show that the proposed method has significantly improved embedding capacities over the approach proposed by Jung and Yoo.
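
Since NMI is the thread connecting many of the sources below, a minimal sketch of the technique may help. Following Jung and Yoo's formulation, a thumbnail is scaled up to roughly twice its size: original pixels are kept, and each new pixel takes the mean of its two or three already-known neighbours. The function name and array layout here are our own, a sketch rather than the paper's code.

    import numpy as np

    def nmi_upscale(thumb):
        """Neighbour Mean Interpolation: (h, w) grayscale -> (2h-1, 2w-1)."""
        t = np.asarray(thumb, dtype=float)      # avoid uint8 overflow in the sums
        h, w = t.shape
        out = np.zeros((2 * h - 1, 2 * w - 1))
        out[::2, ::2] = t                                  # keep original pixels
        out[::2, 1::2] = (t[:, :-1] + t[:, 1:]) / 2        # means of horizontal pairs
        out[1::2, ::2] = (t[:-1, :] + t[1:, :]) / 2        # means of vertical pairs
        # centre pixels: mean of the top-left original and the two
        # freshly interpolated neighbours above and to the left
        out[1::2, 1::2] = (t[:-1, :-1] + out[:-2:2, 1::2] + out[1::2, :-2:2]) / 3
        return np.rint(out).astype(np.uint8)

Reversible schemes in the NMI family then hide secret bits in the differences between interpolated values and their neighbours, so the cover image can be recovered exactly after extraction.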

2. Kim, K. S., R. M. Beresford, and W. R. Henshall. "Spatial interpolation of daily humidity using natural neighbours over mountain areas in south eastern Australia." New Zealand Plant Protection 61 (August 1, 2008): 292–95. http://dx.doi.org/10.30843/nzpp.2008.61.6838.

Abstract:
Natural neighbour interpolation was investigated to estimate daily humidity at specific sites in a mountain area. The Global Summary of Day (GSOD) dataset was used to obtain weather data in mountain areas in south eastern Australia. Eighteen weather stations were selected as validation sites. Dew point temperature was estimated from January to December 2007. When the inverse distance weight method was used without adjusting for the elevation difference between stations, the accuracy of virtual dew point temperature was poor, with a mean absolute error (MAE) of 3.6°C. When natural neighbour interpolation was used with altitude adjustment, the MAE for dew point temperature was 2.1°C. Furthermore, application of a wet adiabatic lapse rate (0.004°C/m) for the altitude adjustment reduced the MAE to 1.3°C. These results will be used to improve the accuracy of weather estimates in areas with complex terrain in order to implement crop disease predictions using risk models.
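
The altitude adjustment described above is easy to restate in code: reduce every station value to a common reference level with a lapse rate, interpolate horizontally, then restore the target elevation. The sketch below illustrates the idea only; it uses SciPy's griddata (which offers linear and nearest-neighbour, not true natural-neighbour interpolation) and invented station data.

    import numpy as np
    from scipy.interpolate import griddata

    LAPSE = 0.004  # wet adiabatic lapse rate from the abstract, °C per metre

    def dewpoint_at(xy_st, t_st, z_st, xy_target, z_target):
        sea_level = t_st + LAPSE * z_st                  # reduce stations to sea level
        t0 = griddata(xy_st, sea_level, xy_target, method="linear")
        return t0 - LAPSE * z_target                     # restore the target elevation

    # toy example: three stations around a 900 m site
    xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    print(dewpoint_at(xy, np.array([10.0, 8.0, 9.0]),
                      np.array([200.0, 700.0, 400.0]),
                      np.array([[0.4, 0.3]]), 900.0))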

3. Xu, H., R. De Jong, S. Gameda, and B. Qian. "Development and evaluation of a Canadian agricultural ecodistrict climate database." Canadian Journal of Soil Science 90, no. 2 (2010): 373–85. http://dx.doi.org/10.4141/cjss09064.

Abstract:
Spatially representative climate data are required input in various agricultural and environmental modelling studies. An agricultural ecodistrict climate database for Canada was developed from climate station data using a spatial interpolation procedure. This database includes daily maximum and minimum air temperatures, precipitation and incoming global solar radiation, which are necessary inputs for many agricultural modelling studies. The spatial interpolation procedure combines inverse distance squared weighting with the nearest neighbour approach. Cross-validation was performed to evaluate the accuracy of the interpolation procedure. In addition to some common error measurements, such as mean biased error and root mean square error, empirical probability distributions and accurate rates of precipitation occurrence were also examined. Results show that the magnitude of errors for this database was similar to those in other studies that used similar or different interpolation procedures. The average root mean square error (RMSE) was 1.7°C, 2.2°C and 3.8 mm for daily maximum and minimum temperature, and precipitation, respectively. The RMSE for solar radiation varied from 16 to 19% of the climate normal during April through September and from 21 to 28% of the climate normal during the remainder of the year.
Key words: Maximum and minimum temperature, precipitation, solar radiation, ecodistrict, interpolation, cross validation
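
The inverse-distance-squared weighting at the heart of such a procedure fits in a few lines; the nearest-neighbour fallback shown for a point that coincides with a station is our guess at how the two methods might be combined, not the authors' exact rule.

    import numpy as np

    def idw2(station_xy, station_val, target_xy):
        d2 = np.sum((station_xy - target_xy) ** 2, axis=1)  # squared distances
        if np.any(d2 == 0):                                 # exact hit on a station:
            return float(station_val[np.argmin(d2)])        # nearest-neighbour value
        w = 1.0 / d2                                        # inverse-distance-squared weights
        return float(np.sum(w * station_val) / np.sum(w))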

4. Noor, Norazian Mohamed, A. S. Yahaya, N. A. Ramli, and Mohd Mustafa Al Bakri Abdullah. "Filling the Missing Data of Air Pollutant Concentration Using Single Imputation Methods." Applied Mechanics and Materials 754-755 (April 2015): 923–32. http://dx.doi.org/10.4028/www.scientific.net/amm.754-755.923.

Abstract:
Hourly measured PM10 concentrations at eight monitoring stations within peninsular Malaysia in 2006 were used to generate simulated missing data. The gap lengths of the simulated missing values were limited to 12 hours, since the actual runs of missingness are typically short. Two percentages of simulated missing gaps were generated: 5% and 15%. A number of single imputation methods (linear interpolation (LI), nearest neighbour interpolation (NN), mean above below (MAB), daily mean (DM), mean 12-hour (12M), mean 6-hour (6M), row mean (RM) and previous year (PY)) were applied to fill in the simulated missing data. In addition, multiple imputation (MI) was conducted for comparison with the single imputation methods. The performances were evaluated using four statistical criteria, namely mean absolute error, root mean squared error, prediction accuracy and index of agreement. The results show that 6M performs comparably to LI, indicating that a smaller averaging window gives better predictions. The other single imputation methods predict the missing data well, except for PY. RM and MI perform moderately, with performance increasing at the higher fraction of missing gaps, whereas LR is the worst method at both simulated missing-data percentages.
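
Several of the single imputation methods compared here are one-liners on an hourly pandas series. The sketch below shows LI, NN, and a centred six-hour mean as one plausible reading of 6M; the paper's exact window definition may differ.

    import numpy as np
    import pandas as pd

    idx = pd.date_range("2006-01-01", periods=24, freq="h")
    pm10 = pd.Series(np.linspace(40.0, 60.0, 24), index=idx)
    pm10.iloc[8:12] = np.nan                     # simulate a 4-hour missing gap

    li = pm10.interpolate(method="linear")       # LI
    nn = pm10.interpolate(method="nearest")      # NN (requires SciPy)
    m6 = pm10.fillna(pm10.rolling(6, center=True, min_periods=1).mean())  # ~6M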

5. Vázquez, Raúl F., Pablo V. Mosquera, and Henrietta Hampel. "Bathymetric Modelling of High Mountain Tropical Lakes of Southern Ecuador." Water 16, no. 8 (2024): 1142. http://dx.doi.org/10.3390/w16081142.

Abstract:
Very little is known about the high mountain tropical lakes of South America. Thus, the main motivation of this research was to obtain base bathymetric data for 119 tropical lakes of the Cajas National Park (CNP), Ecuador, that could be used in future geomorphological studies. Eleven interpolation methods were applied with the intention of selecting the best one for processing the scattered observations, which were collected with a low-cost fishing echo-sounder. A split-sample (SS) test was used and repeated several times, considering different proportions of the available observations, selected randomly, for training of the interpolation methods and accuracy evaluation of the respective products. This accuracy was assessed through the use of empirical exceedance probability distributions of the mean absolute error (MAE). A single best interpolation method could not be identified. Instead, the study suggested six better-performing methods, including the complex methods kriging (ordinary), minimum curvature (spline), multiquadric, and TIN with linear interpolation, but also the much simpler methods natural neighbour and nearest neighbour. A sensitivity analysis (SA), considering several data error magnitudes, confirmed this. This suggests that sophisticated interpolation methods do not always produce the best products, as the geomorphological characteristics of the study site(s), together with the characteristics of the observation data, are likely to play important roles in their performance. As such, this type of assessment should be carried out in any terrestrial mapping of bathymetry that is based on the interpolation of scattered observations. Based on the analysis of their relative hypsometric curves, the 119 study lakes were classified into three average form categories: convex, concave, and mixed. A separate accuracy analysis of these three groups of lakes did not help in identifying a single best method. Finally, the interpolated bathymetries of 114 of the study lakes were incorporated into the best DEM of the study site by equalising their elevation reference systems. It is believed that the resulting enhanced DEM could be a very useful tool for a more appropriate management of these very beautiful but fragile high mountain tropical lakes.
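
The evaluation device used here, an empirical exceedance probability distribution of the MAE, is generic enough to sketch: sort the MAE values from the repeated split-sample runs and attach to each the fraction of runs exceeding it. Variable names and numbers below are invented.

    import numpy as np

    def mae_exceedance(maes):
        """Empirical P(MAE > m) from repeated split-sample runs."""
        m = np.sort(np.asarray(maes, dtype=float))
        p = 1.0 - np.arange(1, m.size + 1) / m.size
        return m, p

    depth_maes = [0.42, 0.35, 0.58, 0.31, 0.47]   # made-up MAEs in metres
    for m, p in zip(*mae_exceedance(depth_maes)):
        print(f"P(MAE > {m:.2f} m) = {p:.2f}")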

6. Skromulis, Andris, Juris Breidaks, Svetlana Aņiskeviča, Līga Klints, and Darja Hudjakova. "Evaluating the Performance of EFAS Hydrological Predictions in Latvian River Basins: A Comparison with Observational Data." ENVIRONMENT. TECHNOLOGY. RESOURCES. Proceedings of the International Scientific and Practical Conference 1 (June 11, 2025): 509–14. https://doi.org/10.17770/etr2025vol1.8705.

Abstract:
This study evaluates the performance of the European Flood Awareness System (EFAS) [1] in predicting hydrological variables by comparing EFAS reforecast data with observational data from the Latvian Environment, Geology and Meteorology Centre (LVGMC). Using the open-source LISFLOOD hydrological model [2], the study examines the accuracy of ECMWF-driven predictions of river discharge and water levels across Latvia’s diverse river basins. The study employs a variety of interpolation techniques, including linear interpolation and nearest neighbour interpolation, to extract grid data from the Copernicus Early Warning Data Store (EWDS) [3] dataset at hydrological station points. To assess prediction accuracy, a range of statistical and error metrics, including Mean Error (ME) [4], [5], Root Mean Squared Error (RMSE) [5]-[7], Nash-Sutcliffe Efficiency (NSE) [5], [8]-[12] and Kling-Gupta Efficiency (KGE) [5], [12], [13], are utilized. The analysis highlights the effectiveness of EFAS under different seasonal and hydrometeorological conditions, identifying both strengths and limitations in the model's performance. Furthermore, the study explores potential calibration approaches for improving regional forecasting capabilities, particularly in light of climate change impacts on low-flow and drought period predictions. This research provides valuable insights into the application of continental-scale hydrological models at the regional level, offering recommendations for improving the accuracy of flood forecasting systems.
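
Extracting gridded reforecast values at station coordinates with the two interpolation techniques named above is a standard SciPy operation, and the NSE follows its usual definition. The grid axes, field, and station coordinates below are placeholders, not EWDS data.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    lats = np.linspace(55.5, 58.5, 61)              # placeholder grid axes
    lons = np.linspace(20.5, 28.5, 161)
    field = np.random.rand(lats.size, lons.size)    # placeholder discharge field

    stations = np.array([[56.95, 24.10], [57.40, 21.56]])   # illustrative points
    lin = RegularGridInterpolator((lats, lons), field, method="linear")(stations)
    nea = RegularGridInterpolator((lats, lons), field, method="nearest")(stations)

    def nse(sim, obs):
        """Nash-Sutcliffe Efficiency: 1 is perfect; 0 is no better than the mean."""
        sim, obs = np.asarray(sim), np.asarray(obs)
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)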

7. Sekulić, Aleksandar, Milan Kilibarda, Gerard B. M. Heuvelink, Mladen Nikolić, and Branislav Bajat. "Random Forest Spatial Interpolation." Remote Sensing 12, no. 10 (2020): 1687. http://dx.doi.org/10.3390/rs12101687.

Abstract:
For many decades, kriging and deterministic interpolation techniques, such as inverse distance weighting and nearest neighbour interpolation, have been the most popular spatial interpolation techniques. Kriging with external drift and regression kriging have become basic techniques that benefit both from spatial autocorrelation and covariate information. More recently, machine learning techniques, such as random forest and gradient boosting, have become increasingly popular and are now often used for spatial interpolation. Some attempts have been made to explicitly take the spatial component into account in machine learning, but so far, none of these approaches have taken the natural route of incorporating the nearest observations and their distances to the prediction location as covariates. In this research, we explored the value of including observations at the nearest locations and their distances from the prediction location by introducing Random Forest Spatial Interpolation (RFSI). We compared RFSI with deterministic interpolation methods, ordinary kriging, regression kriging, Random Forest and Random Forest for spatial prediction (RFsp) in three case studies. The first case study made use of synthetic data, i.e., simulations from normally distributed stationary random fields with a known semivariogram, for which ordinary kriging is known to be optimal. The second and third case studies evaluated the performance of the various interpolation methods using daily precipitation data for the 2016–2018 period in Catalonia, Spain, and mean daily temperature for the year 2008 in Croatia. Results of the synthetic case study showed that RFSI outperformed most simple deterministic interpolation techniques and had similar performance as inverse distance weighting and RFsp. As expected, kriging was the most accurate technique in the synthetic case study. In the precipitation and temperature case studies, RFSI mostly outperformed regression kriging, inverse distance weighting, random forest, and RFsp. Moreover, RFSI was substantially faster than RFsp, particularly when the training dataset was large and high-resolution prediction maps were made.
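
The covariate construction that defines RFSI is straightforward to sketch with scikit-learn: for each location, the values at its k nearest observed sites and the distances to those sites are appended as features before fitting a random forest. This is a rough sketch of the idea on synthetic data, not the authors' implementation.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neighbors import NearestNeighbors

    def rfsi_features(xy_obs, z_obs, xy_query, k=5, skip_self=False):
        """Per query point: distances to the k nearest observations plus their values."""
        nn = NearestNeighbors(n_neighbors=k + int(skip_self)).fit(xy_obs)
        dist, idx = nn.kneighbors(xy_query)
        if skip_self:                       # training points find themselves first
            dist, idx = dist[:, 1:], idx[:, 1:]
        return np.hstack([dist, z_obs[idx]])

    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 1, (200, 2))                      # synthetic observation sites
    z = np.sin(4 * xy[:, 0]) + 0.1 * rng.standard_normal(200)
    grid = rng.uniform(0, 1, (50, 2))                     # prediction locations

    rf = RandomForestRegressor(n_estimators=250, random_state=0)
    rf.fit(rfsi_features(xy, z, xy, skip_self=True), z)
    pred = rf.predict(rfsi_features(xy, z, grid))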

8. Etherington, Thomas R., George L. W. Perry, and Janet M. Wilmshurst. "HOTRUNZ: an open-access 1 km resolution monthly 1910–2019 time series of interpolated temperature and rainfall grids with associated uncertainty for New Zealand." Earth System Science Data 14, no. 6 (2022): 2817–32. http://dx.doi.org/10.5194/essd-14-2817-2022.

Abstract:
Long time series of temperature and rainfall grids are fundamental to understanding how these environmental variables affect environmental or ecological patterns and processes such as plant distributions, plant and animal phenology, wildfires, and hydrology. Ideally such temperature and rainfall grids are openly available and associated with uncertainties so that data-quality issues are transparent to users. We present a History of Open Temperature and Rainfall with Uncertainty in New Zealand (HOTRUNZ) that uses climatologically aided natural neighbour interpolation to provide monthly 1 km resolution grids of total rainfall, mean air temperature, mean daily maximum air temperature, and mean daily minimum air temperature across New Zealand from 1910 to 2019. HOTRUNZ matches the best available temporal extent and spatial resolution of any open-access temperature and rainfall grids that include New Zealand and is unique in providing associated spatial uncertainty in the variables' units. The HOTRUNZ grids capture the dynamic spatial and temporal nature of monthly temperature and rainfall and the uncertainties associated with the interpolation. We also demonstrate how to quantify and visualise temporal trends across New Zealand that recognise the temporal and spatial variation in uncertainties in the HOTRUNZ data. The HOTRUNZ data are openly available at https://doi.org/10.7931/zmvz-xf30 (Etherington et al., 2021).

9. Oni, Olubukola A., and Ahzegbobor P. Aizebeokhai. "Aeromagnetic data processing using MATLAB." IOP Conference Series: Earth and Environmental Science 993, no. 1 (2022): 012017. http://dx.doi.org/10.1088/1755-1315/993/1/012017.

Abstract:
This study focuses on the evaluation of magnetic field variation in a two-dimensional plot in the form of a contour map, by carrying out interpolation on the magnetic field data and mapping regional structures to infer the direction of dykes. To pinpoint areas of magnetic highs and lows, a MATLAB program was used to delineate magnetic field trends in the data. The program was also used to produce graphical, colourized and contoured plots of data from XYZ files (data with random locations) using interpolation functions. The program was used for both gridding and smoothing of the magnetic field data, and also allowed the setting of contour values and the use of vivid colour scales. The aeromagnetic data vector may contain outliers due to instrumental error and data extraction during field data collection. These outliers were removed and replaced using three interpolation methods (linear, nearest-neighbour and cubic spline) to obtain a non-distorted representation plot; the same methods were also used to fit surfaces to the gridded data. The results show that the piecewise cubic spline interpolant contour plot has finer precision, with higher detail at the output edges. A piecewise linear surface B(X,Y) was fitted, where X is normalized by a mean of 2.235e+05 and a standard deviation of 3.202e+04, and Y is normalized by a mean of 7.809e+05 and a standard deviation of 551. The residual magnetic intensity plot shows the magnetic field ranging between -200 nT for the magnetic low and 200 nT for the magnetic high. The use of MATLAB is not intended to displace the Oasis Montaj geosoftware, but to give more scientific meaning to the automation of the filtering techniques used in Oasis.
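
A Python analogue of the outlier repair step reads as follows (the study itself used MATLAB's interpolation functions): flag samples far from the median on a robust scale, then refill them from their neighbours. The MAD threshold is our choice, not the paper's.

    import numpy as np

    def repair_outliers(values, thresh=3.5):
        v = np.asarray(values, dtype=float).copy()
        med = np.median(v)
        mad = max(np.median(np.abs(v - med)), 1e-9)       # robust spread estimate
        bad = np.abs(v - med) > thresh * 1.4826 * mad     # flag outliers
        x = np.arange(v.size)
        v[bad] = np.interp(x[bad], x[~bad], v[~bad])      # linear refill from neighbours
        return v

    print(repair_outliers([51200.0, 51190.0, 99999.0, 51210.0, 51205.0]))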

10. Jurišić, Mladen, Ivan Plaščak, Oleg Antonić, and Dorijan Radočaj. "Suitability Calculation for Red Spicy Pepper Cultivation (Capsicum annum L.) Using Hybrid GIS-Based Multicriteria Analysis." Agronomy 10, no. 1 (2019): 3. http://dx.doi.org/10.3390/agronomy10010003.

Abstract:
Red spicy pepper is traditionally considered as the fundamental ingredient for multiple authentic products of Eastern Croatia. The objectives of this study were to: (1) evaluate the optimal interpolation method necessary for modeling of criteria layers; (2) calculate the sustainability and vulnerability of red spicy pepper cultivation using hybrid Geographic Information System (GIS)-based multicriteria analysis with the analytical hierarchy process (AHP) method; (3) determine the suitability classes for red spicy pepper cultivation using K-means unsupervised classification. The inverse distance weighted interpolation method was selected as optimal as it produced higher accuracies than ordinary kriging and natural neighbour. Sustainability and vulnerability represented the positive and negative influences on red spicy pepper production. These values served as the input in the K-means unsupervised classification of four classes. Classes were ranked by the average of mean class sustainability and vulnerability values. Top two ranked classes, highest suitability and moderate-high suitability, produced suitability values of 3.618 and 3.477 out of a possible 4.000, respectively. These classes were considered as the most suitable for red spicy pepper cultivation, covering an area of 2167.5 ha (6.9% of the total study area). A suitability map for red spicy pepper cultivation was created as a basis for the establishment of red spicy pepper plantations.

Books on the topic "Neighbour Mean Interpolation"

1. Wikle, Christopher K. Spatial Statistics. Oxford University Press, 2018. http://dx.doi.org/10.1093/acrefore/9780190228620.013.710.

Abstract:
The climate system consists of interactions between physical, biological, chemical, and human processes across a wide range of spatial and temporal scales. Characterizing the behavior of components of this system is crucial for scientists and decision makers. There is substantial uncertainty associated with observations of this system as well as our understanding of various system components and their interaction. Thus, inference and prediction in climate science should accommodate uncertainty in order to facilitate the decision-making process. Statistical science is designed to provide the tools to perform inference and prediction in the presence of uncertainty. In particular, the field of spatial statistics considers inference and prediction for uncertain processes that exhibit dependence in space and/or time. Traditionally, this is done descriptively through the characterization of the first two moments of the process, one expressing the mean structure and one accounting for dependence through covariability.

Historically, there are three primary areas of methodological development in spatial statistics: geostatistics, which considers processes that vary continuously over space; areal or lattice processes, which considers processes that are defined on a countable discrete domain (e.g., political units); and spatial point patterns (or point processes), which consider the locations of events in space to be a random process. All of these methods have been used in the climate sciences, but the most prominent has been the geostatistical methodology. This methodology was simultaneously discovered in geology and in meteorology and provides a way to do optimal prediction (interpolation) in space and can facilitate parameter inference for spatial data. These methods rely strongly on Gaussian process theory, which is increasingly of interest in machine learning. These methods are common in the spatial statistics literature, but much development is still being done in the area to accommodate more complex processes and “big data” applications. Newer approaches are based on restricting models to neighbor-based representations or reformulating the random spatial process in terms of a basis expansion. There are many computational and flexibility advantages to these approaches, depending on the specific implementation. Complexity is also increasingly being accommodated through the use of the hierarchical modeling paradigm, which provides a probabilistically consistent way to decompose the data, process, and parameters corresponding to the spatial or spatio-temporal process.

Perhaps the biggest challenge in modern applications of spatial and spatio-temporal statistics is to develop methods that are flexible yet can account for the complex dependencies between and across processes, account for uncertainty in all aspects of the problem, and still be computationally tractable. These are daunting challenges, yet it is a very active area of research, and new solutions are constantly being developed. New methods are also being rapidly developed in the machine learning community, and these methods are increasingly more applicable to dependent processes. The interaction and cross-fertilization between the machine learning and spatial statistics community is growing, which will likely lead to a new generation of spatial statistical methods that are applicable to climate science.

Book chapters on the topic "Neighbour Mean Interpolation"

1. Jana, Manasi, Shubhankar Joardar, and Biswapati Jana. "A New Reversible Data Hiding Scheme by Altering Interpolated Pixels Exploiting Neighbor Mean Interpolation (NMI)." In Computational Intelligence in Pattern Recognition. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-3089-8_38.

2. Zhang, Jiayu, Youpeng Jin, Xiaobo Hu, Bohan Kong, and Yu Zhang. "A Dual-Layer Reversible Data Hiding Scheme Based on Optimal Neighbor Mean Interpolation (ONMI) and Histogram Shifting." In Communications in Computer and Information Science. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-7005-5_10.

3. Abdul Karim, Samsul Ariffin, Nur Atiqah Binti Zulkifli, A'fza Binti Shafie, Muhammad Sarfraz, Abdul Ghaffar, and Kottakkaran Sooppy Nisar. "Medical Image Zooming by Using Rational Bicubic Ball Function." In Research Anthology on Improving Medical Imaging Techniques for Analysis and Intervention. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-7544-7.ch028.

Abstract:
This chapter deals with image processing in the specific area of image zooming via interpolation. The authors employ a bivariate rational cubic ball function defined on rectangular meshes. These bivariate splines have six free parameters that can be used to alter the shape of the surface without needing to change the data; they can also be used to refine the resolution of the image. To cater for image zooming, the authors propose an efficient algorithm that includes image downscaling and upscaling procedures. To measure the effectiveness of the proposed scheme, they compare its performance based on the values of peak signal-to-noise ratio (PSNR) and root mean square error (RMSE). Comparisons with existing schemes such as nearest neighbour (NN), bilinear (BL), bicubic (BC), bicubic Hermite (BH), and the existing scheme of Karim and Saaban (KS) have been made in detail. Across all numerical results, the proposed scheme gave higher PSNR values and smaller RMSE values for all tested images.
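
The two quality measures used in this comparison are standard; for 8-bit images they reduce to the following sketch (not the chapter's code):

    import numpy as np

    def rmse(a, b):
        return float(np.sqrt(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)))

    def psnr(a, b, peak=255.0):
        """Peak signal-to-noise ratio in dB; higher means a closer match."""
        e = rmse(a, b)
        return float("inf") if e == 0 else 20.0 * np.log10(peak / e)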

4. Abdul Karim, Samsul Ariffin, Nur Atiqah Binti Zulkifli, A'fza Binti Shafie, Muhammad Sarfraz, Abdul Ghaffar, and Kottakkaran Sooppy Nisar. "Medical Image Zooming by Using Rational Bicubic Ball Function." In Advancements in Computer Vision Applications in Intelligent Systems and Multimedia Technologies. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-4444-0.ch008.

Abstract:
This chapter deals with image processing in the specific area of image zooming via interpolation. The authors employ a bivariate rational cubic ball function defined on rectangular meshes. These bivariate splines have six free parameters that can be used to alter the shape of the surface without needing to change the data; they can also be used to refine the resolution of the image. To cater for image zooming, the authors propose an efficient algorithm that includes image downscaling and upscaling procedures. To measure the effectiveness of the proposed scheme, they compare its performance based on the values of peak signal-to-noise ratio (PSNR) and root mean square error (RMSE). Comparisons with existing schemes such as nearest neighbour (NN), bilinear (BL), bicubic (BC), bicubic Hermite (BH), and the existing scheme of Karim and Saaban (KS) have been made in detail. Across all numerical results, the proposed scheme gave higher PSNR values and smaller RMSE values for all tested images.

5. Tereikovska, Liudmyla, and Ihor Tereikovskyi. "Mathematical Support of Geometric Transformations of Images During Data Augmentation of Neuron Network Tools." In Science, technology and innovation in the context of global transformation. Publishing House “Baltija Publishing”, 2024. https://doi.org/10.30525/978-9934-26-499-3-12.

Abstract:
One of the key problems in increasing the efficiency of neural network tools intended for the analysis of graphic materials is the formation of representative training databases. A promising way to overcome this problem is to increase the size of the training sample by augmenting the training examples with geometric transformations. However, the modern mathematical apparatus for modifying the geometric parameters of images has shortcomings that can reduce the quality of the obtained images or lead to their insufficient compliance with the tasks. The purpose of the work is the formation of the mathematical support used to implement geometric transformations of images during the augmentation of training data of neural network tools. The research methodology is based on the theory of digital processing of signals and images, system analysis and the theory of neural networks, and involves determining the key components of the mathematical support for affine transformations and defining the mathematical support for calculating the color of pixels when scaling an image using interpolation methods under various application conditions. As a result of the conducted research, the mathematical support used to implement geometric transformations of images when augmenting the training data of neural network tools was formed. The key components of the mathematical support of affine transformations, related to determining the display dimensions of the modified image and determining the color of individual points of the modified image, are defined. A mathematical apparatus is defined that allows the dimensions of the display area of the modified image to be calculated, provided that a rectangular display area is used. It is shown that in the task of augmenting the training data of neural network tools it is advisable to perform image scaling using proven non-adaptive interpolation methods: nearest neighbor, bilinear interpolation, bicubic interpolation, Lanczos interpolation, the box filter, and the triangular filter. A mathematical apparatus for calculating the pixel color of a scaled image using each of the above interpolation methods is defined, and the conditions that determine when each method is appropriate in practice are outlined. The nearest neighbor method is advisable when simplicity of implementation and high throughput of the augmentation procedure are needed, and when noise in small details and distortion of the shape of objects containing thin lines can be neglected. The bilinear interpolation method is recommended when a balance between the quality of the scaled image and computational cost must be maintained, and the bicubic interpolation method when there are no strict limits on computational resources and high image quality during scaling is required. The Lanczos interpolation method is recommended for scaling images in cases where maximum preservation of quality and detail is required. The box filter and triangular filter methods are appropriate when the speed of scaling matters more than the quality of the resulting image.

Prospects for further research are related to the development of a methodology for adapting the means of integrated modification of the geometry and visual characteristics of images to the conditions of augmentation of training data of neural network systems for video stream analysis.
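
Of the non-adaptive methods ranked above, bilinear interpolation is the simplest to state exactly: the color at a fractional position is the distance-weighted mean of the four enclosing source pixels. A single-channel sketch:

    import numpy as np

    def bilinear_sample(img, x, y):
        """Grayscale value at fractional (x, y); x runs along columns."""
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1 = min(x0 + 1, img.shape[1] - 1)
        y1 = min(y0 + 1, img.shape[0] - 1)
        fx, fy = x - x0, y - y0
        top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
        bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
        return (1 - fy) * top + fy * bottom

Nearest neighbor simply rounds (x, y) to the closest source pixel, which is why it is fast but noisy on thin lines; bicubic widens the neighbourhood to 4×4 pixels, and Lanczos widens it further still (6×6 for a = 3).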

Conference papers on the topic "Neighbour Mean Interpolation"

1. Sakthivel, S. M., and A. Ravi Sankar. "FPGA implementation of data hiding in grayscale images using neighbour mean interpolation." In 2015 2nd International Conference on Electronics and Communication Systems (ICECS). IEEE, 2015. http://dx.doi.org/10.1109/ecs.2015.7124758.

2. Gustafson, Steven C., Gordon R. Little, and Darren M. Simon. "Interpolation between evenly spaced image pixels." In OSA Annual Meeting. Optica Publishing Group, 1989. http://dx.doi.org/10.1364/oam.1989.tht13.

Abstract:
Interpolation between evenly spaced image pixels is often required in optical metrology and other applications. The interpolation model considered here is motivated by neural network techniques and is applicable when each pixel is characterized by a single value. The model assumes that each pixel value may be approximated by a linear function of its nearest neighbors in accordance with a minimum mean-squared error criterion. This assumption is satisfied by functional forms for approximate pixel value vs interpixel distance (along each Cartesian coordinate) that consist of two possibly complex exponential terms. Linear combinations of these forms are used to synthesize functions that (1) match each pixel value and (2) provide continuous image values at all points between pixels. The resulting interpolation is accomplished at relatively low computational cost, as is demonstrated in computer simulations. This model may be advantageous compared to other interpolation methods because the assumption that each pixel value may be approximated by a linear function of its nearest neighbors is consistent with first-order analyses of the physics of many image forming processes.

3. G, Chandana, Haritha J, Janani J, Kalpana S, and Vijaya Lakshmi DM. "Chronic Kidney Diseases Prediction Using K-Means Algorithm." In International Conference on Recent Trends in Computing & Communication Technologies (ICRCCT’2K24). International Journal of Advanced Trends in Engineering and Management, 2024. http://dx.doi.org/10.59544/huit4742/icrcct24p141.

Abstract:
This transformation allows the model to capture the relationships between different health indicators and their correlation with CKD risk. At the core of the predictive analysis lies a Random Forest Classifier, a powerful ensemble learning method known for its accuracy and robustness in classification tasks. The model is trained on a comprehensive dataset encompassing various health metrics associated with CKD. By analysing this data, the classifier predicts the likelihood of a user developing CKD based on their input, enabling early detection and timely intervention. In addition to predicting CKD risk, the application utilizes K-means clustering to categorize users into distinct stages of CKD based on their health data patterns. This clustering approach aids in providing a clearer understanding of an individual’s kidney health status, which is essential for appropriate treatment planning and management. Furthermore, to enhance user experience and support proactive healthcare decisions, the application employs the K-Nearest Neighbors (KNN) algorithm to offer personalized recommendations for nearby hospitals. This web application is designed to predict chronic kidney disease (CKD) through an intuitive user interface that enables individuals to securely log in and input their health details. The application employs a robust data processing pipeline to ensure the reliability of the predictions. Initially, it utilizes interpolation techniques for data cleaning, addressing any missing values in the user input to enhance dataset integrity. This is crucial, as incomplete data can lead to inaccurate predictions and hinder effective analysis. To facilitate the handling of categorical variables, the application incorporates a Label Encoder, which converts these categories into numerical values that machine learning algorithms can process effectively.
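
The hospital recommendation step maps naturally onto a nearest-neighbour query; a sketch with scikit-learn on invented coordinates follows. Euclidean distance on raw latitude/longitude is a crude stand-in here; a haversine metric would be the geographically correct choice.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    hospitals = np.array([[13.04, 80.24], [13.08, 80.27], [12.98, 80.22]])  # lat, lon
    user = np.array([[13.05, 80.25]])

    nn = NearestNeighbors(n_neighbors=2).fit(hospitals)
    dist, idx = nn.kneighbors(user)
    print(idx[0])     # indices of the two closest hospitals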

4. Abraham, J. J., C. Devers, C. Teodoriu, and M. Amani. "Machine Learning Approaches for Pattern Recognition and Missing Data Prediction in Field Datasets from Oil and Gas Operations." In GOTECH. SPE, 2024. http://dx.doi.org/10.2118/219384-ms.

Abstract:
The oil and gas industry is currently undergoing a technology transformation with ‘big data’ playing a huge role in making smart data-driven decisions to optimize operations. New tools and systems generate a large amount of data while performing drilling, completions, or production operations and this has become invaluable in well design, field development, monitoring operations as well as optimizing production and recovery. However, sometimes, the data collected has issues that complicate its ability to be interpreted effectively – most commonly being the lack of adequate data to perform meaningful analysis or the presence of missing or null data points. Significant amounts of data are usually generated during the early stages of field development (seismic, well logs, modeling), during drilling and completions (MWD, LWD tools, wireline tools), as well as production operations (production data, pressure, and rate testing). Supervised and unsupervised machine learning (ML) algorithms such as K-Nearest Neighbor, K-Means, Regression (Logistic, Ridge) as well as Clustering algorithms can be used as predictive tools for modeling and interpreting limited datasets. These can be used to identify and resolve deficiencies in datasets including those with missing values and null datapoints. ML and predictive algorithms can be used to determine complex patterns and interdependencies between various variables and parameters in large and complex datasets, which may not be apparent through common regression or curve fitting methods. Work done on a representative dataset of oilwell cement properties including compressive strength, acoustic and density measurements showed potential for accurate pattern recognition with a reasonable margin of error. Missing or null datapoints were rectified through different strategies including interpolation, regression and imputation using KNN models. Supervised machine learning models were determined to be efficient and adequate for structured data when the variables and parameters are known and identified, while unsupervised models and clustering algorithms were more efficient when the data was unstructured and included a sizeable portion of missing or null values. Certain algorithms are more efficient in predicting or imputing missing data values and most models had a prediction accuracy of 85% or better, with reasonable error margins. Clustering algorithms also correctly grouped the datapoints into six clusters corresponding to each class of cement and their curing temperatures, indicating their effectiveness in predicting patterns in unlabeled datasets. Using such machine learning algorithms on oil and gas datasets can help create effective ML models by identifying and grouping similar data with consistent accuracy to complement industry expertise. This can be utilized as a reliable prediction tool when it comes to working with limited datasets or those with missing values, especially when it comes to downhole data.
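
The KNN-based refill of missing values that performed well here is available off the shelf; a sketch with scikit-learn's KNNImputer on a toy table of cement properties (the column meanings are invented for illustration):

    import numpy as np
    from sklearn.impute import KNNImputer

    # rows: [compressive strength (MPa), acoustic velocity (m/s), density (g/cc)]
    X = np.array([
        [31.2, 3120.0, 1.92],
        [28.4, np.nan, 1.88],
        [np.nan, 2980.0, 1.90],
        [33.1, 3200.0, np.nan],
    ])
    print(KNNImputer(n_neighbors=2).fit_transform(X))   # NaNs filled with neighbour means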