Journal articles on the topic 'Mean absolute errors'

Consult the top 50 journal articles for your research on the topic 'Mean absolute errors.'


1

Frías-Paredes, Laura, Fermin Mallor, Martín Gastón-Romeo, and Teresa León. "Dynamic mean absolute error as new measure for assessing forecasting errors." Energy Conversion and Management 162 (April 2018): 176–88. http://dx.doi.org/10.1016/j.enconman.2018.02.030.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bingham, Rory J., and Keith Haines. "Mean dynamic topography: intercomparisons and errors." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 364, no. 1841 (2006): 903–16. http://dx.doi.org/10.1098/rsta.2006.1745.

Abstract:
Knowledge of the ocean dynamic topography, defined as the height of the sea surface above its rest-state (the geoid), would allow oceanographers to study the absolute circulation of the ocean and determine the associated geostrophic surface currents that help to regulate the Earth's climate. Here a novel approach to computing a mean dynamic topography (MDT), together with an error field, is presented for the northern North Atlantic. The method uses an ensemble of MDTs, each of which has been produced by the assimilation of hydrographic data into a numerical ocean model, to form a composite MDT, and uses the spread within the ensemble as a measure of the error on this MDT. The r.m.s. error for the composite MDT is 3.2 cm, and for the associated geostrophic currents the r.m.s. error is 2.5 cm s−1. Taylor diagrams are used to compare the composite MDT with several MDTs produced by a variety of alternative methods. Of these, the composite MDT is found to agree remarkably well with an MDT based on the GRACE geoid GGM01C. It is shown how the composite MDT and its error field are useful validation products against which other MDTs and their error fields can be compared.
3

Zhang, Jun, Bingqing Lin, and Zhenghui Feng. "Conditional absolute mean calibration for partial linear multiplicative distortion measurement errors models." Computational Statistics & Data Analysis 141 (January 2020): 77–93. http://dx.doi.org/10.1016/j.csda.2019.06.009.

4

Tcheou, Michel Pompeu, Lisandro Lovisolo, Alexandre Ribeiro Freitas, and Sin Chan Chou. "Reducing Forecast Errors of a Regional Climate Model Using Adaptive Filters." Applied Sciences 11, no. 17 (2021): 8001. http://dx.doi.org/10.3390/app11178001.

Abstract:
In this work, the use of adaptive filters for reducing forecast errors produced by a Regional Climate Model (RCM) is investigated. Seasonal forecasts are compared against the reanalysis data provided by the National Centers for Environmental Prediction. The reanalysis is used to train adaptive filters based on the Recursive Least Squares algorithm in order to reduce the forecast error. The K-means unsupervised learning algorithm is used to obtain the number of filters to employ from the climate variables. The proposed approach is applied to some climate variables such as the meridional wind, zonal wind, and the geopotential height. The forecast is produced by the Eta RCM at 40-km resolution in a domain covering most of Brazil. Results show that the proposed approach is capable of reducing the forecast errors, according to evaluation metrics such as normalized mean square error, maximum absolute error, and maximum normalized absolute error, thus improving the seasonal climate forecasts.
5

HE, XIAO-GANG, and A. ZEE. "GEOMETRIC MEAN NEUTRINO MASS RELATION." Modern Physics Letters A 22, no. 25n28 (2007): 2107–12. http://dx.doi.org/10.1142/s0217732307025352.

Abstract:
Present experimental data from neutrino oscillations have provided much information about the neutrino mixing angles. Since neutrino oscillations only determine the mass squared differences [Formula: see text], the absolute values of the neutrino masses mi cannot be determined using data from oscillations alone. In this work we study the implications for neutrino masses of a geometric mean mass relation [Formula: see text], which enables one to determine the absolute masses of the neutrinos. We find the central values of the three neutrino masses and their 2σ errors to be m1 = (1.58 ± 0.18) meV, m2 = (9.04 ± 0.42) meV, and m3 = (51.8 ± 3.5) meV. Implications for cosmological observation, beta decay and neutrinoless double beta decays are discussed.
6

Alam, S. M. Mahfuz, and Mohd Hasan Ali. "Equation Based New Methods for Residential Load Forecasting." Energies 13, no. 23 (2020): 6378. http://dx.doi.org/10.3390/en13236378.

Abstract:
This work proposes two non-linear and one linear equation-based systems for residential load forecasting considering heating degree days, cooling degree days, occupancy, and day type, which are applicable to any residential building with small sets of smart meter data. The coefficients of the proposed nonlinear and linear equations are tuned by particle swarm optimization (PSO) and the multiple linear regression method, respectively. For the purpose of comparison, a subtractive clustering based adaptive neuro-fuzzy inference system (ANFIS), random forests, gradient boosting trees, a long short-term memory neural network, and conventional and modified support vector regression methods were considered. Simulations were performed in the MATLAB environment, and all the methods were tested with randomly chosen 30-day data of a residential building in Memphis City for energy consumption prediction. The absolute average error, root mean square error, and mean average percentage error are tabulated and considered as performance indices. The efficacy of the proposed systems for residential load forecasting over the other systems has been validated by both simulation results and performance indices, which indicate that the proposed equation-based systems have the lowest absolute average errors, root mean square errors, and mean average percentage errors compared to the other methods. In addition, the proposed systems can easily be implemented in practice.
7

Reza, Imran, Nedal T. Ratrout, and Syed Masiur Rahman. "Calibration protocol for PARAMICS microscopic traffic simulation model: application of neuro-fuzzy approach." Canadian Journal of Civil Engineering 43, no. 4 (2016): 361–68. http://dx.doi.org/10.1139/cjce-2015-0435.

Abstract:
This study investigated the challenges of calibration of the PARAMICS microscopic simulation model for the local traffic conditions in the Kingdom of Saudi Arabia. It proposed an adaptive neuro-fuzzy inference system (ANFIS) based calibration protocol for the PARAMICS model. The developed ANFIS model performs adequately in modeling the queue length as a function of two key calibration parameters, namely mean headway time and mean reaction time. The selected values of the calibration parameters obtained through the ANFIS modeling approach were used as the input parameters for the PARAMICS model. The error indices such as mean absolute errors and mean absolute percentage errors of the developed ANFIS model in predicting the queue lengths varied between 1.11 and 1.24, and between 3.44 and 4.06, respectively. The conformance of the PARAMICS output and the measured queue length indicates the validity of the proposed calibration protocol.
8

Radziukynas, V., and A. Klementavičius. "Short-Term Forecasting of Loads and Wind Power for Latvian Power System: Accuracy and Capacity of the Developed Tools." Latvian Journal of Physics and Technical Sciences 53, no. 2 (2016): 3–13. http://dx.doi.org/10.1515/lpts-2016-0008.

Abstract:
The paper analyses the performance results of the recently developed short-term forecasting suite for the Latvian power system. The system load and wind power are forecasted using ANN and ARIMA models, respectively, and the forecasting accuracy is evaluated in terms of errors, mean absolute errors and mean absolute percentage errors. The investigation of the influence of additional input variables on load forecasting errors is performed. The interplay of hourly loads and wind power forecasting errors is also evaluated for the Latvian power system with historical loads (the year 2011) and planned wind power capacities (the year 2023).
9

Shankar, P. Sai, and M. Krishna Reddy. "Forecasting Gold Prices in India using Time series and Deep Learning Algorithms." International Journal of Engineering and Advanced Technology 10, no. 5 (2021): 21–27. http://dx.doi.org/10.35940/ijeat.d2537.0610521.

Abstract:
The primary objective of this paper is to compare traditional time series models with deep learning algorithms. The ARIMA model is developed to forecast Indian gold prices using daily data for the period 2016 to 2020 obtained from the World Gold Council. We fitted the ARIMA (2,1,2) model, which exhibited the lowest AIC value. Meanwhile, MLP, CNN and LSTM models are also examined to forecast the gold prices in India. Mean absolute error, mean absolute percentage error and root mean squared error were used to evaluate the forecasting performance of the models. The LSTM model proved superior to the other three models for forecasting gold prices in India.
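Several abstracts in this list, including the one above, evaluate forecasts with the same trio of metrics. As a point of reference, here is a minimal sketch of how they are computed; the function names and data are illustrative, not taken from any of the papers:

```python
import math

def mae(actual, predicted):
    # Mean absolute error: average magnitude of the forecast errors
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    # Mean absolute percentage error: scale-free, but undefined if any actual value is 0
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # Root mean squared error: penalises large errors more heavily than MAE
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual = [100.0, 102.0, 98.0, 101.0]
predicted = [99.0, 104.0, 97.0, 100.0]
print(mae(actual, predicted))   # 1.25
print(mape(actual, predicted))
print(rmse(actual, predicted))
```

By construction RMSE is never smaller than MAE on the same data, which is one reason papers report both: a large gap between the two signals a few unusually bad forecasts.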
10

Jia, Yunhong, Xiaodong Zhang, Zhenchong Wang, and Wei Wang. "Intelligent Calibration of a Heavy-Duty Mechanical Arm in Coal Mine." Electronics 9, no. 8 (2020): 1186. http://dx.doi.org/10.3390/electronics9081186.

Abstract:
Accurate positioning of an airborne heavy-duty mechanical arm in a coal mine, such as a roof bolter, is important for the efficiency and safety of coal mining. Its positioning accuracy is affected not only by geometric errors but also by nongeometric errors such as link and joint compliance. In this paper, a novel calibration method based on an error limited genetic algorithm (ELGA) and a regularized extreme learning machine (RELM) is proposed to improve the positioning accuracy of a roof bolter. To achieve the improvement, the ELGA is first implemented to identify the geometric parameters of the roof bolter's kinematics model. Then, the residual positioning errors caused by nongeometric factors are compensated with the regularized extreme learning machine (RELM) network. Experiments were carried out to validate the proposed calibration method. The experimental results show that the root mean square error (RMSE) and the mean absolute error (MAE) between the actual mast end position and the nominal mast end position are reduced by more than 78.23%. They also show that the maximum absolute error (MAXE) between the actual and nominal mast end positions is reduced by more than 58.72% in the three directions of the Cartesian coordinate system.
11

Elam, Emmett W., and Shelby H. Holder. "An Evaluation of the Rice Outlook and Situation Price Forecasts." Journal of Agricultural and Applied Economics 17, no. 2 (1985): 155–62. http://dx.doi.org/10.1017/s0081305200025152.

Abstract:
The Rice Outlook and Situation (RO&S) forecasts were compared to the forecasts of a univariate Box-Jenkins (BJ) model. On balance, the RO&S forecasts had lower mean square forecast errors and lower mean absolute forecast errors than the BJ model forecasts. The differences in the squared and absolute forecast errors were not significant, however. Based on the concept of conditional efficiency as set forth by Granger and Newbold, it was found that the BJ forecasts did not add any information that might improve forecast accuracy beyond what was already incorporated in the RO&S forecasts.
12

Hsu, Yan-Cheng, Yung-Hui Li, Ching-Chun Chang, and Latifa Nabila Harfiya. "Generalized Deep Neural Network Model for Cuffless Blood Pressure Estimation with Photoplethysmogram Signal Only." Sensors 20, no. 19 (2020): 5668. http://dx.doi.org/10.3390/s20195668.

Abstract:
Due to the growing public awareness of cardiovascular disease (CVD), blood pressure (BP) estimation models have been developed based on physiological parameters extracted from both electrocardiograms (ECGs) and photoplethysmograms (PPGs). Still, in order to enhance the usability as well as reduce the sensor cost, researchers endeavor to establish a generalized BP estimation model using only PPG signals. In this paper, we propose a deep neural network model capable of extracting 32 features exclusively from PPG signals for BP estimation. The effectiveness and accuracy of our proposed model was evaluated by the root mean square error (RMSE), mean absolute error (MAE), the Association for the Advancement of Medical Instrumentation (AAMI) standard and the British Hypertension Society (BHS) standard. Experimental results showed that the RMSEs in systolic blood pressure (SBP) and diastolic blood pressure (DBP) are 4.643 mmHg and 3.307 mmHg, respectively, across 9000 subjects, with 80.63% of absolute errors among estimated SBP records lower than 5 mmHg and 90.19% of absolute errors among estimated DBP records lower than 5 mmHg. We demonstrated that our proposed model has remarkably high accuracy on the largest BP database found in the literature, which shows its effectiveness compared to some prior works.
13

Zhang, Rui, Li Chun Jiang, Feng Ri Li, and Yao Xiang Li. "Modeling Branch Length of Larch Tree Using Linear Mixed-Effects Models." Applied Mechanics and Materials 71-78 (July 2011): 3007–11. http://dx.doi.org/10.4028/www.scientific.net/amm.71-78.3007.

Abstract:
In this study, the sample data were based on 2190 branch length samples of 30 trees from dahurian larch (Larix gmelinii Rupr.) plantations located in Wuying forest bureau in Heilongjiang Province. A second order polynomial equation with linear mixed effects was used for modeling branch length of larch trees. The LME procedure in S-Plus was used to fit the mixed-effects models for the branch length data. The results showed that the polynomial model with three random parameters could significantly improve the model performance. The fitted mixed-effects model was also evaluated using mean error, mean absolute error, mean percent error, and mean absolute percent error. The mixed model was found to predict branch length better than the original model fitted using ordinary least squares on all error measures. The application of the mixed branch length model not only captured the mean trends of branch length, but also showed individual differences through the variance-covariance structure.
14

Truitt, Theodore O., Roger A. Adelman, Dan H. Kelly, and J. Paul Willging. "Quantitative Endoscopy: Initial Accuracy Measurements." Annals of Otology, Rhinology & Laryngology 109, no. 2 (2000): 128–32. http://dx.doi.org/10.1177/000348940010900203.

Abstract:
The geometric optics of an endoscope can be used to determine the absolute size of an object in an endoscopic field without knowing the actual distance from the object. This study explores the accuracy of a technique that estimates absolute object size from endoscopic images. Quantitative endoscopy involves calibrating a rigid endoscope to produce size estimates from 2 images taken with a known traveled distance between the images. The heights of 12 samples, ranging in size from 0.78 to 11.80 mm, were estimated with this calibrated endoscope. Backup distances of 5 mm and 10 mm were used for comparison. The mean percent error for all estimated measurements when compared with the actual object sizes was 1.12%. The mean errors for 5-mm and 10-mm backup distances were 0.76% and 1.65%, respectively. The mean errors for objects <2 mm and ≥2 mm were 0.94% and 1.18%, respectively. Quantitative endoscopy estimates endoscopic image size to within 5% of the actual object size. This method remains promising for quantitatively evaluating object size from endoscopic images. It does not require knowledge of the absolute distance of the endoscope from the object; only the distance traveled by the endoscope between images is needed.
15

Roulston, M. S. "A comparison of predictors of the error of weather forecasts." Nonlinear Processes in Geophysics 12, no. 6 (2005): 1021–32. http://dx.doi.org/10.5194/npg-12-1021-2005.

Abstract:
Three different potential predictors of forecast error - ensemble spread, mean errors of recent forecasts and the local gradient of the predicted field - were compared. The comparison was performed using the forecasts of 500 hPa geopotential and 2-m temperature of the ECMWF ensemble prediction system at lead times of 96, 168 and 240 h, over North America for each day in 2004. Ensemble spread was found to be the best overall predictor of absolute forecast error. The mean absolute error of recent forecasts (past 30 days) was found to contain some information, however, and the local gradient of the geopotential also provided some information about the error in the prediction of this variable. Ensemble spatial error covariance and the mean spatial error covariance of recent forecasts (past 30 days) were also compared as predictors of actual spatial error covariance. Both were found to provide some predictive information, although the ensemble error covariance was found to provide substantially more information for both variables tested at all three lead times. The results of the study suggest that past errors and local field gradients should not be ignored as predictors of forecast error as they can be computed cheaply from single forecasts when an ensemble is not available. Alternatively, in some cases, they could be used to supplement the information about forecast error provided by an ensemble to provide a better prediction of forecast skill.
16

Stocker, M., and D. J. Noakes. "Evaluating Forecasting Procedures for Predicting Pacific Herring (Clupea harengus pallasi) Recruitment in British Columbia." Canadian Journal of Fisheries and Aquatic Sciences 45, no. 6 (1988): 928–35. http://dx.doi.org/10.1139/f88-114.

Abstract:
The ability of four forecasting methods to generate one-step-ahead forecasts of Pacific herring (Clupea harengus pallasi) recruitment is considered in this paper. Recruitment time series for five coastal stocks and various environmental time series are employed in the analyses. Information up to and including time t is employed to estimate the parameters of each model used to forecast recruitment in year t + 1. Parameter estimates are then updated after each time step with a total of seven one-step-ahead forecasts being generated by each model for each stock. The forecast errors are compared using five criteria: (1) root mean squared error, (2) mean absolute deviation, (3) mean absolute percent error, (4) median absolute deviation, and (5) median absolute percent error. The results of the study indicate that time series models may provide better forecasts of recruitment for the Strait of Georgia/Johnstone Strait stocks than the other competing procedures. A Ricker stock–recruitment model that takes into account environmental data appears to produce marginally better forecasts for the Central Coast and Queen Charlotte Island stocks, while all models produced equally good/bad forecasts for the Prince Rupert district stocks.
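The five comparison criteria enumerated in the abstract above can be sketched in a few lines of code; the function and key names here are illustrative, not taken from the paper:

```python
import math
from statistics import median

def forecast_error_criteria(actual, forecast):
    # Compares one-step-ahead forecasts on the five criteria from the abstract.
    errors = [a - f for a, f in zip(actual, forecast)]
    abs_errors = [abs(e) for e in errors]
    pct_errors = [100 * abs(e) / a for e, a in zip(errors, actual)]  # assumes nonzero actuals
    n = len(errors)
    return {
        "root_mean_squared_error": math.sqrt(sum(e * e for e in errors) / n),
        "mean_absolute_deviation": sum(abs_errors) / n,
        "mean_absolute_percent_error": sum(pct_errors) / n,
        "median_absolute_deviation": median(abs_errors),
        "median_absolute_percent_error": median(pct_errors),
    }

# Toy recruitment series and forecasts (made-up numbers)
criteria = forecast_error_criteria([10.0, 20.0, 40.0], [12.0, 19.0, 40.0])
```

Median-based criteria are less sensitive to a single badly forecast year than their mean-based counterparts, which is the point of reporting both families side by side.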
17

Makatjane, Katleho Daniel, and Ntebogang Dinah Moroke. "Comparative study of holt-winters triples exponential smoothing and seasonal Arima: Forecasting short term seasonal car sales in South Africa." Risk Governance and Control: Financial Markets and Institutions 6, no. 1 (2016): 71–82. http://dx.doi.org/10.22495/rgcv6i1art8.

Abstract:
In this paper, both seasonal ARIMA and Holt-Winters models are developed to predict the monthly car sales in South Africa using data for the period of January 1994 to December 2013. The purpose of this study is to choose an optimal model suited for the sector. Three error metrics (mean absolute error, mean absolute percentage error and root mean square error) were used in making such a choice. Upon realizing that the three forecast errors could not provide a concrete basis for a conclusion, the power test was calculated for each model, proving Holt-Winters to have about 0.3% more predictive power. Empirical results also indicate that the Holt-Winters model produced more precise short-term seasonal forecasts. The findings also revealed a structural break in April 2009, implying that the car industry was significantly affected by the 2008 and 2009 US financial crisis.
18

Caven, Elias J. G., Tom J. E. Bryan, Amelia F. Dingley, et al. "Group versus Individualised Minimum Velocity Thresholds in the Prediction of Maximal Strength in Trained Female Athletes." International Journal of Environmental Research and Public Health 17, no. 21 (2020): 7811. http://dx.doi.org/10.3390/ijerph17217811.

Abstract:
This study examined the accuracy of different velocity-based methods in the prediction of bench press and squat one-repetition maximum (1RM) in female athletes. Seventeen trained females (age 17.8 ± 1.3 years) performed an incremental loading test to 1RM on bench press and squat with the mean velocity being recorded. The 1RM was estimated from the load–velocity relationship using the multiple- (8 loads) and two-point (2 loads) methods and group and individual minimum velocity thresholds (MVT). No significant effect of method, MVT or interaction was observed for the two exercises (p > 0.05). For bench press and squat, all prediction methods demonstrated very large to nearly perfect correlations with respect to the actual 1RM (r range = 0.76 to 0.97). The absolute error (range = 2.1 to 3.8 kg) for bench press demonstrated low errors that were independent of the method and MVT used. For squat, the favorable group MVT errors for the multiple- and two-point methods (absolute error = 7.8 and 9.7 kg, respectively) were greater than the individual MVT errors (absolute error = 4.9 and 6.3 kg, respectively). The 1RM can be accurately predicted from the load–velocity relationship in trained females, with the two-point method offering a quick and less fatiguing alternative to the multiple-point method.
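The two-point method described above reduces to fitting a straight line through two load–velocity pairs and extrapolating it to the minimum velocity threshold (MVT); a minimal sketch, with made-up loads, velocities and MVT (all numbers illustrative, not from the study):

```python
def one_rm_two_point(load1, vel1, load2, vel2, mvt):
    # Assumes a linear load-velocity relationship: velocity = slope * load + intercept.
    # The 1RM estimate is the load at which mean velocity falls to the MVT.
    slope = (vel2 - vel1) / (load2 - load1)
    intercept = vel1 - slope * load1
    return (mvt - intercept) / slope

# e.g. 20 kg moved at 1.00 m/s and 60 kg at 0.60 m/s, with an MVT of 0.17 m/s
estimated_1rm = one_rm_two_point(20.0, 1.00, 60.0, 0.60, 0.17)
print(estimated_1rm)  # ~103 kg
```

The multiple-point method fits the same line by least squares over many loads; the appeal of the two-point variant, as the abstract notes, is that two submaximal sets are quicker and less fatiguing to collect.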
19

Spokas, Kurt, and Frank Forcella. "Software Tools for Weed Seed Germination Modeling." Weed Science 57, no. 2 (2009): 216–27. http://dx.doi.org/10.1614/ws-08-142.1.

Abstract:
The next generation of weed seed germination models will need to account for variable soil microclimate conditions. To predict this microclimate environment we have developed a suite of individual tools (models) that can be used in conjunction with the next generation of weed seed germination models. The three tools that will be outlined here are GlobalTempSIM, GlobalRainSIM, and the soil temperature and moisture model (STM2). Each model was compared with several sets of observed data from worldwide locations. Overall, the climate predictors compared favorably. GlobalTempSIM had a bias between −2.7 and +0.9 C, mean absolute errors between 1.9 and 5.0 C, and an overall Willmott d-index of 0.79 to 0.95 (where d = 1 represents total agreement between observed and modeled data) for 12 global validation sites in 2007. GlobalRainSIM had a bias for cumulative precipitation ranging from −210 to +305 mm, a mean absolute error between 29 and 311 mm, and a corresponding d-index of 0.78 to 0.99 for the sites and years compared. The high d-indices indicate that the models adequately captured the annual patterns for the validation sites. STM2 also performed well in comparisons with actual soil temperatures with a range of −2 to +4.6 C biases and mean absolute errors between 0.7 and 6.8 C, with the d-index ranging from 0.83 to 0.99 for the soil temperature comparisons. The soil moisture prediction annual bias was between −0.09 and +0.12 cm3 cm−3, mean absolute errors ranging from 0.02 to 0.16 cm3 cm−3, and possessed a d-index between 0.32 and 0.91 for the validation sites. These models were developed in JAVA, are simple to use, operate on multiple platforms (e.g., Mac, personal computer, Sun), and are freely available for download from the U.S. Department of Agriculture Agricultural Research Service website (http://www.ars.usda.gov/Services/docs.htm?docid=11787).
20

Huang, Chao, Longpeng Cao, Nanxin Peng, et al. "Day-Ahead Forecasting of Hourly Photovoltaic Power Based on Robust Multilayer Perception." Sustainability 10, no. 12 (2018): 4863. http://dx.doi.org/10.3390/su10124863.

Abstract:
Photovoltaic (PV) modules convert renewable and sustainable solar energy into electricity. However, the uncertainty of PV power production brings challenges for the grid operation. To facilitate the management and scheduling of PV power plants, forecasting is an essential technique. In this paper, a robust multilayer perception (MLP) neural network was developed for day-ahead forecasting of hourly PV power. A generic MLP is usually trained by minimizing the mean squared loss. The mean squared error is sensitive to a few particularly large errors that can lead to a poor estimator. To tackle the problem, the pseudo-Huber loss function, which combines the best properties of squared loss and absolute loss, was adopted in this paper. The effectiveness and efficiency of the proposed method was verified by benchmarking against a generic MLP network with real PV data. Numerical experiments illustrated that the proposed method performed better than the generic MLP network in terms of root mean squared error (RMSE) and mean absolute error (MAE).
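The pseudo-Huber loss adopted in the abstract above interpolates smoothly between squared loss for small residuals and absolute loss for large ones; a minimal sketch (the default delta is an illustrative choice, not the paper's setting):

```python
import math

def pseudo_huber(residual, delta=1.0):
    # Behaves like residual**2 / 2 when |residual| << delta,
    # and like delta * |residual| when |residual| >> delta.
    return delta ** 2 * (math.sqrt(1.0 + (residual / delta) ** 2) - 1.0)

print(pseudo_huber(0.01))   # close to 0.01**2 / 2
print(pseudo_huber(100.0))  # grows roughly linearly, far below 100**2 / 2
```

Unlike the plain Huber loss, the pseudo-Huber variant is differentiable everywhere, so gradient-based training of an MLP needs no special handling at the transition point, while a few very large errors no longer dominate the objective the way they do under mean squared loss.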
21

Noël, S., K. Bramstedt, H. Bovensmann, et al. "Quantification and mitigation of the impact of scene inhomogeneity on Sentinel-4 UVN UV-VIS retrievals." Atmospheric Measurement Techniques Discussions 5, no. 2 (2012): 2043–75. http://dx.doi.org/10.5194/amtd-5-2043-2012.

Abstract:
The quality of trace gas products derived from measurements of a space-borne imaging spectrometer is affected by the inhomogeneity of the illumination of the instrument slit and thus by the heterogeneity of the observed scene. This paper aims to quantify this effect and summarise findings on how to mitigate the impact of inhomogeneous slit illumination on tropospheric O3, NO2, SO2 and HCHO columns derived from measurements of the Sentinel-4 UVN imaging spectrometer. For this purpose, spectra for inhomogeneous ground scenes have been simulated based on a combination of a radiative transfer model and spatially high resolved MODIS (Moderate Resolution Imaging Spectroradiometer) data. The resulting errors on tropospheric O3, NO2, SO2 and HCHO columns derived from these spectra have been determined via an optimal estimation approach. It could be concluded that inhomogeneous illumination results in significant errors in the data products if the natural inhomogeneity of the observed scenes is not accounted for. O3 columns are less affected than the other data products; largest errors occur for NO2 (mean absolute errors about 5%, maximum error exceeding 50%). These errors may be significantly reduced (by factors up to >10) by an appropriate wavelength calibration applied individually to each Earthshine radiance spectrum. With wavelength calibration the estimated mean absolute errors due to inhomogeneity are for all gases well below 1%; maximum errors are about 10% for NO2 and around 5% for the other gases.
22

Qi, Zhaoxiang, Yong Yu, Richard L. Smart, et al. "Absolute Proper Motions Outside the Plane (APOP)." Proceedings of the International Astronomical Union 8, S293 (2012): 413–15. http://dx.doi.org/10.1017/s174392131301329x.

Abstract:
Most of the discovered exoplanets are close to our Sun, and their host stars usually have large proper motions, an important parameter for exoplanet searching. The first version of an absolute proper motion catalogue, derived from Digitized Sky Survey Schmidt plates outside the galactic plane (|b| ≥ 27°), is presented, with a zero-point error of less than ±0.3 mas/yr, an overall accuracy better than ±4.5 mas/yr for objects brighter than RF = 18.5, and accuracies ranging from 4.5 to 9.0 mas/yr for objects with magnitudes 18.5 < RF < 20.5. The systematic errors of the absolute proper motions related to position, magnitude and colour have been practically all removed. The sky coverage of this catalogue is 22,525 square degrees, the mean density is 6444 objects per square degree and the magnitude limit is around RF = 20.5.
23

Noël, S., K. Bramstedt, H. Bovensmann, et al. "Quantification and mitigation of the impact of scene inhomogeneity on Sentinel-4 UVN UV-VIS retrievals." Atmospheric Measurement Techniques 5, no. 6 (2012): 1319–31. http://dx.doi.org/10.5194/amt-5-1319-2012.

Abstract:
The quality of trace gas products derived from measurements of a space-borne imaging spectrometer is affected by the inhomogeneity of the illumination of the instrument slit and thus by the heterogeneity of the observed scene. This paper aims to quantify this effect and summarise findings on how to mitigate the impact of inhomogeneous slit illumination on tropospheric O3, NO2, SO2 and HCHO columns derived from measurements of the Sentinel-4 UVN imaging spectrometer. For this purpose, spectra for inhomogeneous ground scenes have been simulated based on a combination of a radiative transfer model and spatially high resolved MODIS (Moderate Resolution Imaging Spectroradiometer) data. The resulting errors on tropospheric O3, NO2, SO2 and HCHO columns derived from these spectra have been determined via an optimal estimation approach. We conclude that inhomogeneous illumination results in significant errors in the data products if the natural inhomogeneity of the observed scenes is not accounted for. O3 columns are less affected than the other data products; largest errors occur for NO2 (mean absolute errors about 5%, maximum error exceeding 50%, standard deviation of the errors about 8%). These errors may be significantly reduced (by factors up to about 10) by an appropriate wavelength calibration applied individually to each Earthshine radiance spectrum. With wavelength calibration the estimated mean absolute errors due to inhomogeneity are for all gases well below 1%; standard deviations of the errors are 1.5% or lower; maximum errors are about 10% for NO2 and around 5% for the other gases.
24

Akhmetov, V. S., P. N. Fedorov, A. B. Velichko, and V. M. Shulga. "The PMA Catalogue: 420 million positions and absolute proper motions." Monthly Notices of the Royal Astronomical Society 469, no. 1 (2017): 763–73. http://dx.doi.org/10.1093/mnras/stx812.

Full text
Abstract:
We present a catalogue that contains about 420 million absolute proper motions of stars. It was derived from the combination of positions from Gaia DR1 and 2MASS, with a mean difference of epochs of about 15 yr. Most of the systematic zonal errors inherent in the 2MASS Catalogue were eliminated before deriving the absolute proper motions. The absolute calibration procedure (zero-pointing of the proper motions) was carried out using about 1.6 million positions of extragalactic sources. The mean formal error of the absolute calibration is less than 0.35 mas yr⁻¹. The derived proper motions cover the whole celestial sphere without gaps for a range of stellar magnitudes from 8 to 21 mag. In the sky areas where the extragalactic sources are invisible (the avoidance zone), a dedicated procedure was used that transforms the relative proper motions into absolute ones. The rms error of proper motions depends on stellar magnitude and ranges from 2–5 mas yr⁻¹ for stars with 10 mag < G < 17 mag to 5–10 mas yr⁻¹ for faint ones. The present catalogue contains the Gaia DR1 positions of stars for the J2015 epoch. The system of the PMA proper motions does not depend on the systematic errors of the 2MASS positions, and in the range from 14 to 21 mag represents an independent realization of a quasi-inertial reference frame in the optical and near-infrared wavelength range. The Catalogue also contains stellar magnitudes taken from the Gaia DR1 and 2MASS catalogues. A comparison of the PMA proper motions of stars with similar data from certain recent catalogues has been undertaken.
APA, Harvard, Vancouver, ISO, and other styles
25

Jadhav, Smita, and Dipika Jaspal. "Adsorptive eradication of tartrazine from aqueous solutions onto doped polyaniline." Journal of the Serbian Chemical Society 85, no. 2 (2020): 251–63. http://dx.doi.org/10.2298/jsc190705116j.

Full text
Abstract:
A potential polymeric adsorbent, doped polyaniline (PANI) has been investigated for the eradication of the hazardous dye tartrazine from textile effluent. During the adsorption process, the influence of the acidic character of the adsorbate, pH, dose of the adsorbent, dye concentration and time of contact between the adsorbent and adsorbate were evaluated. The outcomes attained from batch experiments were applied to the Langmuir and the Freundlich isothermal models. Different error analysis techniques, such as mean square error, root mean square error, the Chi-square test (χ²), sum of absolute errors and sum of squared errors, were determined for the doped polyaniline–tartrazine system. The Langmuir isotherm was established as the best-fit isothermal model, with minimum errors and high regression values. About 90–97 % removal was achieved in the first 70 min. A positive enthalpy value implied the adsorption process was endothermic. The energy of activation for the dye adsorbent system was found to be 28.9 kJ mol⁻¹, which is in line with physisorption.
APA, Harvard, Vancouver, ISO, and other styles
26

Qi, Junde, Bing Chen, and Dinghua Zhang. "Compensation for absolute positioning error of industrial robot considering the optimized measurement space." International Journal of Advanced Robotic Systems 17, no. 2 (2020): 172988142092164. http://dx.doi.org/10.1177/1729881420921644.

Full text
Abstract:
Industrial robots are getting widely applied due to their low use-cost and high flexibility. However, the low absolute positioning accuracy limits their expansion in the area of high-precision manufacturing. Aiming to improve the positioning accuracy, a compensation method for the positioning error is put forward in terms of the optimization of the experimental measurement space and accurate modelling of the positioning error. Firstly, the influence of robot kinematic performance on the measurement accuracy is analysed, and a quantitative index describing the performance is adopted. On this basis and combined with the joints motion characteristics, the optimized measurement space in joint space as well as Cartesian space is obtained respectively, which can provide accurate measurement data to the error model. Then the overall model of the positioning error is constructed based on modified Denavit–Hartenberg method, in which the geometric errors and compliance errors are considered comprehensively, and an error decoupling method between them is carried out based on the error-feature analyses. Experiments on the KUKA KR210 robot are carried out finally. The mean absolute positioning accuracy of the robot increases from 1.179 mm to 0.093 mm, which verifies the effectiveness of the compensation methodology in this article.
APA, Harvard, Vancouver, ISO, and other styles
27

PRASANNAVENKADESAN, VARATHARAJAN, and PONNUSAMY PANDITHEVAN. "JOHNSON–COOK MODEL COMBINED WITH COWPER–SYMONDS MODEL FOR BONE CUTTING SIMULATION WITH EXPERIMENTAL VALIDATION." Journal of Mechanics in Medicine and Biology 21, no. 02 (2021): 2150010. http://dx.doi.org/10.1142/s021951942150010x.

Full text
Abstract:
Constitutive models are widely used to predict the mechanical behavior of different kinds of materials. Although the Johnson–Cook model for bovine bone and Cowper–Symonds model for human thoracic rib and tibia was developed, the predictability of these models was found good only at particular strain rates. This study addresses this lack of information by investigating the Cowper–Symonds model, Johnson–Cook model, and Johnson–Cook model combined with Cowper–Symonds model at different strain rates to utilize in the bone cutting simulation. Specimens prepared using two rear femurs harvested from a 3.50-year-old bovine were investigated at different strain rates (0.00001–1/s). A comparative study made among the stresses predicted from these models showed 29.41%, 10.91%, and 11.11% mean absolute percentage errors using Cowper–Symonds model, and 2.03%, 7.19%, and 3.62% mean absolute percentage errors using Johnson–Cook model, respectively, at 0.0001, 0.001 and 1/s strain rates. However, the Johnson–Cook model combined with the Cowper–Symonds model predicted the stress with a maximum of only 2.03% mean absolute percentage error. The potential of each model to utilize in the orthogonal bone cutting was also evaluated using Ansys® and found that the combined model predicted the cutting force close to experimental cutting force with minimal error (5.20%). The outcomes of this study can be used in the surgical practice and osteotomy procedure before commencing actual surgery.
APA, Harvard, Vancouver, ISO, and other styles
28

Stathakis, E., and E. Stambologlou. "The Forecasting Natural Gas demand in a region with seven models and evaluating their accuracy using as criteria five types of errors/residuals." International Journal of Scientific Research and Management 8, no. 08 (2020): 423–38. http://dx.doi.org/10.18535/ijsrm/v8i08.ec03.

Full text
Abstract:
The modeling of Natural Gas (NG) demand differs significantly from the demand for electricity in terms of the determinants that affect it, as all fields of economic activity in a modern economy are directly related to electricity but not to NG. But NG is the second energy type after electricity, used in all countries in percentages greater than 10% in average terms. NG is going to be installed in the Region of East Macedonia-Thrace (REMTH) in the next years. So, we consider it worthwhile to predict the NG demand in REMTH using eight deterministic forecasting models. To do so, we used a dataset of 20 years concerning two Greek regions in which NG was used during that period, and through them we built the eight forecasting models, aiming to find the NG demand in the REMTH. In order to evaluate their reliability and accuracy we used four types of statistical errors: Mean Error (ME), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Mean Absolute Scaled Error (MASE). These are the most widely used measures for evaluating the accuracy of deterministic predictive models, due to their advantages of scale-independency and interpretability. When each of them is used alone it has the significant disadvantage of producing infinite or undefined values for zero or close-to-zero actual values. In order to address this disadvantage, we propose a way to use all of them at the same time to measure the accuracy of a model used to forecast the demand of Natural Gas in the Greek region REMTH. The innovation of this paper is that for NG demand forecasting seven different models were used, and they are evaluated regarding their reliability/accuracy using five types of residuals or statistical errors.
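The four accuracy measures named in this abstract can be sketched in a few lines. This is an illustrative implementation, not the paper's code: the MASE scaling lag `m` (a naive lag-1 forecast by default) is an assumption, since the abstract does not state which naive benchmark is used.

```python
import numpy as np

def forecast_errors(actual, forecast, m=1):
    """ME, MAE, MAPE and MASE for a forecast series.

    MASE scales errors by the in-sample MAE of a naive lag-m forecast
    (m=1 assumed here). MAPE divides by the actual values, so it is
    undefined for zero actuals -- the disadvantage the abstract notes.
    """
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    e = actual - forecast
    me = e.mean()                                  # Mean Error (signed bias)
    mae = np.abs(e).mean()                         # Mean Absolute Error
    mape = 100.0 * np.abs(e / actual).mean()       # Mean Absolute Percentage Error
    naive_mae = np.abs(actual[m:] - actual[:-m]).mean()
    mase = mae / naive_mae                         # Mean Absolute Scaled Error
    return me, mae, mape, mase
```

Because MASE compares against a naive forecast rather than the actual values, it stays finite for zero-valued observations, which is why it is often paired with MAPE.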
APA, Harvard, Vancouver, ISO, and other styles
29

Adler, Robert F., Guojun Gu, and George J. Huffman. "Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)." Journal of Applied Meteorology and Climatology 51, no. 1 (2012): 84–99. http://dx.doi.org/10.1175/jamc-d-11-052.1.

Full text
Abstract:
A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within ±50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation σ of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (σ/μ, where μ is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%–15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (σ) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet’s mean precipitation.
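The abstract does not spell out its area-averaging technique, but the basic operation it relies on can be sketched: a cosine-latitude-weighted mean of a gridded bias-error field σ and of the relative error σ/μ. The 3×2 grid and all values below are invented purely for illustration.

```python
import numpy as np

# Toy gridded fields: rows are latitude bands, columns are longitudes.
lat = np.array([-60.0, 0.0, 60.0])                       # cell-centre latitudes
sigma = np.array([[0.5, 0.6], [0.3, 0.2], [0.7, 0.4]])   # bias error, mm/day
mu = np.array([[2.0, 3.0], [6.0, 5.0], [1.5, 2.0]])      # mean precip, mm/day

# Grid-cell area is proportional to cos(latitude) on a regular lat-lon grid.
w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(sigma)

sigma_bar = np.average(sigma, weights=w)       # area-mean bias error
rel = np.average(sigma / mu, weights=w)        # area-mean relative error sigma/mu
```

Note that averaging σ directly, rather than the signed errors, gives the "upper bound" behaviour the abstract mentions: positive and negative biases cannot cancel.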
APA, Harvard, Vancouver, ISO, and other styles
30

Sahebi, M. R., and J. Angles. "An inversion method based on multi-angular approaches for estimating bare soil surface parameters from RADARSAT-1." Hydrology and Earth System Sciences Discussions 6, no. 1 (2009): 207–41. http://dx.doi.org/10.5194/hessd-6-207-2009.

Full text
Abstract:
The radar signal recorded by earth observation (EO) satellites is known to be sensitive to soil moisture and soil surface roughness, which influence the onset of runoff. This paper focuses on the inversion of these parameters using a multi-angular approach based on RADARSAT-1 data with incidence angles of 35° and 47° (in mode S3 and S7). This inversion was done based on three backscatter models: Geometrical Optics Model (GOM), Oh Model (OM) and Modified Dubois Model (MDM), which are compared in order to obtain the best configuration. For roughness expressed in rms of heights, mean absolute errors of 1.23 cm, 1.12 cm and 2.08 cm, and for dielectric constant, mean absolute errors of 2.46, 4.95 and 3.31 were obtained for the MDM, GOM and the OM simulation, respectively. This means that the MDM provided the best results with minimum errors. Based on these results, the latter inversion algorithm was applied on the images and the final results are presented in two different maps showing pixel and homogeneous zones for surface roughness and soil moisture.
APA, Harvard, Vancouver, ISO, and other styles
31

Hutchinson, Michael F., Dan W. McKenney, Kevin Lawrence, et al. "Development and Testing of Canada-Wide Interpolated Spatial Models of Daily Minimum–Maximum Temperature and Precipitation for 1961–2003." Journal of Applied Meteorology and Climatology 48, no. 4 (2009): 725–41. http://dx.doi.org/10.1175/2008jamc1979.1.

Full text
Abstract:
The application of trivariate thin-plate smoothing splines to the interpolation of daily weather data is investigated. The method was used to develop spatial models of daily minimum and maximum temperature and daily precipitation for all of Canada, at a spatial resolution of 300 arc s of latitude and longitude, for the period 1961–2003. Each daily model was optimized automatically by minimizing the generalized cross validation. The fitted trivariate splines incorporated a spatially varying dependence on ground elevation and were able to adapt automatically to the large variation in station density over Canada. Extensive quality control measures were performed on the source data. Error estimates for the fitted surfaces based on withheld data across southern Canada were comparable to, or smaller than, errors obtained by daily interpolation studies elsewhere with denser data networks. Mean absolute errors in daily maximum and minimum temperature averaged over all years were 1.1° and 1.6°C, respectively. Daily temperature extremes were also well matched. Daily precipitation is challenging because of short correlation length scales, the preponderance of zeros, and significant error associated with measurement of snow. A two-stage approach was adopted in which precipitation occurrence was estimated and then used in conjunction with a surface of positive precipitation values. Daily precipitation occurrence was correctly predicted 83% of the time. Withheld errors in daily precipitation were small, with mean absolute errors of 2.9 mm, although these were relatively large in percentage terms. However, mean percent absolute errors in seasonal and annual precipitation totals were 14% and 9%, respectively, and seasonal precipitation upper 95th percentiles were attenuated on average by 8%. Precipitation and daily maximum temperatures were most accurately interpolated in the autumn, consistent with the large well-organized synoptic systems that prevail in this season. Daily minimum temperatures were most accurately interpolated in summer. The withheld data tests indicate that the models can be used with confidence across southern Canada in applications that depend on daily temperature and accumulated seasonal and annual precipitation. They should be used with care in applications that depend critically on daily precipitation extremes.
APA, Harvard, Vancouver, ISO, and other styles
32

Jourdier, Bénédicte, and Philippe Drobinski. "Errors in wind resource and energy yield assessments based on the Weibull distribution." Annales Geophysicae 35, no. 3 (2017): 691–700. http://dx.doi.org/10.5194/angeo-35-691-2017.

Full text
Abstract:
The methodology used in wind resource assessments often relies on modeling the wind-speed statistics using a Weibull distribution. In spite of its common use, this distribution has been shown to not always accurately model real wind-speed distributions. Very few studies have examined the arising errors in power outputs, using either observed power productions or theoretical power curves. This article focuses on France, using surface wind measurements at 89 locations covering all regions of the country. It investigates how statistical modeling using a Weibull distribution impacts the prediction of the wind energy content and of the power output in the context of an annual energy production assessment. For this purpose it uses a plausible power curve adapted to each location. Three common methods for fitting the Weibull distribution are tested (maximum likelihood, first and third moments, and the Wind Atlas Analysis and Application Program (WAsP) method). The first two methods generate large errors in the production (mean absolute error around 5 %), especially in the southern areas where the goodness of fit of the Weibull distribution is poorer. The production is mainly overestimated except at some locations with bimodal wind distributions. With the third method, the errors are much lower at most locations (mean absolute error around 2 %). Another distribution, a mixed Rayleigh–Rice distribution, is also tested and shows better skill at assessing the wind energy yield.
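One of the fitting methods the abstract compares, the first-and-third-moment fit, can be sketched on a synthetic sample with known parameters (the sample, the bisection bracket and the iteration count are illustrative assumptions; the paper itself uses French surface observations). The point of the third moment is that wind energy content scales with the mean cubed speed, so this fit reproduces it by construction.

```python
import math
import numpy as np

# Synthetic wind-speed sample from a known Weibull (shape k=2, scale A=8 m/s)
# drawn by inverse-CDF sampling: v = A * (-ln(1-u))**(1/k).
rng = np.random.default_rng(0)
u = rng.random(200_000)
k_true, A_true = 2.0, 8.0
v = A_true * (-np.log1p(-u)) ** (1.0 / k_true)

def moment_ratio(k):
    # E[v^3] / E[v]^3 for a Weibull with shape k (scale cancels out).
    return math.gamma(1 + 3 / k) / math.gamma(1 + 1 / k) ** 3

# Match the first and third sample moments: solve moment_ratio(k) = m3/m1^3
# by plain bisection (moment_ratio is decreasing in k on this bracket).
m1, m3 = v.mean(), (v ** 3).mean()
target = m3 / m1 ** 3
lo, hi = 0.5, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if moment_ratio(mid) > target:
        lo = mid
    else:
        hi = mid
k_mom = 0.5 * (lo + hi)
A_mom = m1 / math.gamma(1 + 1 / k_mom)

# The fitted distribution's E[v^3] equals the sample's mean cubed speed,
# i.e. the wind energy content is preserved exactly by this method.
e_fit = A_mom ** 3 * math.gamma(1 + 3 / k_mom)
```

A maximum-likelihood or first-moment fit would match other features of the sample instead, which is consistent with the abstract's finding that those methods give larger energy-yield errors when the Weibull shape is a poor fit.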
APA, Harvard, Vancouver, ISO, and other styles
33

Querino, Carlos Alexandre Santos, Marcelo Sacardi Biudes, Nadja Gomes Machado, Juliane Kayse Albuquerque da Silva Querino, Marcos Antônio Lima Moura, and Péricles Vale Alves. "Modelling parametrization to estimate atmospheric long wave radiation in the Northern Mato Grosso, Brazil." Ciência e Natura 42 (May 9, 2020): e105. http://dx.doi.org/10.5902/2179460x41205.

Full text
Abstract:
Measurements of atmospheric long wave radiation are onerous, which creates the need for alternative methods. Thus, the main aim of this paper was to test and parameterize models from the literature for estimating atmospheric long wave radiation. The data were collected at Fazenda São Nicolau (2002–2003), located in the northwest of Mato Grosso State. Data were processed hourly, monthly and seasonally (dry and wet), as well as averaged over clear and partly cloudy days. The models of Swinbank, Idso-Jackson, Idso, Prata and Duarte were applied. The performance of the models was assessed using the mean error, root mean square error, mean absolute error, Pearson's coefficient and Willmott's coefficient. All models presented high errors and low Pearson and Willmott coefficients. After parameterizing, all models reduced their errors and increased their Pearson and Willmott coefficients. The models of Idso and Swinbank presented the best and worst performance, respectively. No improvement in model performance was observed when the data were classified according to cloudiness and seasonality. The Idso model presented the lowest errors among the models; the model with the worst performance in every tested situation was Swinbank.
APA, Harvard, Vancouver, ISO, and other styles
34

Pospelov, Boris, Evgenіy Rybka, Ruslan Meleshchenko, et al. "Short-term fire forecast based on air state gain recurrence and zero-order brown model." Eastern-European Journal of Enterprise Technologies 3, no. 10(111) (2021): 27–33. http://dx.doi.org/10.15587/1729-4061.2021.233606.

Full text
Abstract:
Possibilities of parameterization of the zero-order Brown model for indoor air forecasting based on the current measure of air state gain recurrence are considered. The key to the zero-order parametric Brown forecasting model is the selection of the smoothing parameter, which characterizes forecast adaptability to the current air state gain recurrence measure. It is shown that for effective short-term indoor fire forecast, the Brown model parameter must be selected from the out-of-limit set defined by 1 and 2. The out-of-limit set for the Brown model parameter is an area of effective fire forecasting based on the measure of current indoor air state gain recurrence. Errors of fire forecast based on the parameterized zero-order Brown model in the case of the classical and out-of-limit sets of the model parameters are investigated using the example of ignition of various materials in a laboratory chamber. As quantitative indicators of forecast quality, the absolute and mean forecast errors exponentially smoothed with a parameter of 0.4 are investigated. It was found that for alcohol, the smoothed absolute and mean forecast errors for the classical smoothing parameter in the no-ignition interval do not exceed 20 %. At the same time, for the out-of-limit case, the indicated forecast errors are, on average, an order of magnitude smaller. Similar ratios for forecast errors remain for paper, wood and textile ignition. However, for the transition zone corresponding to the time of material ignition, a sharp decrease in the current measure of chamber air state gain recurrence is observed. It was found that for this zone, the smoothed absolute forecast error for alcohol is about 2 % if the model parameter is selected from the classical set. If the model parameter is selected from the out-of-limit set, the forecast error is about 0.2 %. The results generally demonstrate significant advantages of using the zero-order Brown parametric model with out-of-limit model parameters for indoor fire forecasting.
APA, Harvard, Vancouver, ISO, and other styles
35

Li, Fei, Hua Yong Zhang, Zhong Yu Wang, and Li Zhang. "A Comparison of Two Biomass Estimation Methods: A Case Study of Several Pine Forests in China." Advanced Materials Research 807-809 (September 2013): 806–9. http://dx.doi.org/10.4028/www.scientific.net/amr.807-809.806.

Full text
Abstract:
Power function model and linear function model were commonly used to express the relationships between stand volume and biomass. However, the relative accuracy is still unclear. In order to compare the accuracy of the two types of model, field measurement data of 279 pine forest stands in China were collected from published literatures. Using the data collected, the relationships between stand volume and aboveground biomass (AGB) of Pinus koraiensis forest, Pinus armandii forest, Pinus massoniana forest, Pinus tabulaeformis forest and Pinus forest were established. The mean relative error and mean absolute values of relative errors were employed to test the errors of the established equations. The goodness-of-fit and errors of these two types of model were compared. The results show that the power function models could generally express the relationships better than the linear function models. Also, the errors of the power function models are generally lower than those of the linear function models.
APA, Harvard, Vancouver, ISO, and other styles
36

Grabar, Anna, Darya Starkova, Olga Soboleva, and Tatyana Kondratyeva. "Economic factors of electricity transport based on energy consumption forecasting." E3S Web of Conferences 210 (2020): 13036. http://dx.doi.org/10.1051/e3sconf/202021013036.

Full text
Abstract:
Forecasting significance in the energy market is extremely high. Demand for electricity determines the key decisions on its purchase and production, load transfer and transmission control. Over the past few decades, several methods have been developed to accurately predict the future of energy consumption. This article discusses various methods for forecasting energy demand. Three blocks of methods are considered: statistical, methods using artificial intelligence, and hybrid. The authors defined the metrics that show the quality of the models and help to compare their results: mean absolute error (MAE), mean absolute percentage error (MAPE), root-mean-square deviation (RMSE), and minimum and maximum errors on the test sample. A comparative analysis of forecasting methods has been launched on an open data set. The best result is obtained using a combined model based on the Lasso regression method. The accuracy and speed of predictions helps to get an economic effect from regulating generation by selling electricity at the peak of consumption.
APA, Harvard, Vancouver, ISO, and other styles
37

Cruz, Miguel G., Susan Kidnie, Stuart Matthews, et al. "Evaluation of the predictive capacity of dead fuel moisture models for Eastern Australia grasslands." International Journal of Wildland Fire 25, no. 9 (2016): 995. http://dx.doi.org/10.1071/wf16036.

Full text
Abstract:
The moisture content of dead grass fuels is an important input to grassland fire behaviour prediction models. We used standing dead grass moisture observations collected within a large latitudinal spectrum in Eastern Australia to evaluate the predictive capacity of six different fuel moisture prediction models. The best-performing models, which ranged from a simple empirical formulation to a physically based process model, yield mean absolute errors of 2.0% moisture content, corresponding to a 25–30% mean absolute percentage error. These models tended to slightly underpredict the moisture content observations. The results have important implications for the authenticity of fire danger rating and operational fire behaviour prediction, which form the basis of community information and warnings, such as evacuation notices, in Australia.
APA, Harvard, Vancouver, ISO, and other styles
38

Desviona, Nayla, and Ferra Yanuar. "Simulation Study of Autocorrelated Error Using Bayesian Quantile Regression." Science and Technology Indonesia 5, no. 3 (2020): 70. http://dx.doi.org/10.26554/sti.2020.5.3.70-74.

Full text
Abstract:

The purpose of this study is to compare the ability of the classical quantile regression method and the Bayesian quantile regression method in estimating models that contain autocorrelated errors, using simulation studies. In the quantile regression approach, the response data are divided into several quantiles conditional on indicator variables. The model parameters are then estimated for each selected quantile. The parameters are estimated using conditional quantile functions obtained by minimizing asymmetric absolute errors. In the Bayesian quantile regression method, the data errors are assumed to follow an asymmetric Laplace distribution. The Bayesian approach for quantile regression uses the Markov chain Monte Carlo method with the Gibbs sampling algorithm to produce a converging posterior mean. The best estimation method is the one that produces the smallest absolute bias and the narrowest confidence interval. This study found that the Bayesian quantile method produces smaller absolute bias values and confidence intervals than the quantile regression method. These results show that the Bayesian quantile regression method tends to produce better estimates than the quantile regression method in the case of autocorrelated errors.
Keywords: Quantile Regression Method, Bayesian Quantile Regression Method, Confidence Interval, Autocorrelation.
APA, Harvard, Vancouver, ISO, and other styles
39

Sinclair, Jonathan, Sarah J. Hobbs, Laurence Protheroe, Christopher J. Edmundson, and Andrew Greenhalgh. "Determination of Gait Events Using an Externally Mounted Shank Accelerometer." Journal of Applied Biomechanics 29, no. 1 (2013): 118–22. http://dx.doi.org/10.1123/jab.29.1.118.

Full text
Abstract:
Biomechanical analysis requires the determination of specific foot contact events. This is typically achieved using force platform information; however, when force platforms are unavailable, alternative methods are necessary. A method was developed for the determination of gait events using an accelerometer mounted to the distal tibia, measuring axial accelerations. The aim of the investigation was to determine the efficacy of this method. Sixteen participants ran at 4.0 m/s ±5%. Synchronized tibial accelerations and vertical ground reaction forces were sampled at 1000 Hz as participants struck a force platform with their dominant foot. Events determined using the accelerometer, were compared with the corresponding events determined using the force platform. Mean errors of 1.68 and 5.46 ms for average and absolute errors were observed for heel strike and of –3.59 and 5.00 ms for toe-off. Mean and absolute errors of 5.18 and 11.47 ms were also found for the duration of the stance phase. Strong correlations (r= .96) were also observed between duration of stance obtained using the two different methods. The error values compare favorably to other alternative methods of predicting gait events. This suggests that shank-mounted accelerometers can be used to accurately and reliably detect gait events.
APA, Harvard, Vancouver, ISO, and other styles
40

Koenker, Roger, and Kevin F. Hallock. "Quantile Regression." Journal of Economic Perspectives 15, no. 4 (2001): 143–56. http://dx.doi.org/10.1257/jep.15.4.143.

Full text
Abstract:
Quantile regression, as introduced by Koenker and Bassett (1978), may be viewed as an extension of classical least squares estimation of conditional mean models to the estimation of an ensemble of models for several conditional quantile functions. The central special case is the median regression estimator which minimizes a sum of absolute errors. Other conditional quantile functions are estimated by minimizing an asymmetrically weighted sum of absolute errors. Quantile regression methods are illustrated with applications to models for CEO pay, food expenditure, and infant birthweight.
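The asymmetrically weighted sum of absolute errors this abstract describes (often called the check or pinball loss) can be sketched directly; the tiny data set and the grid search over a constant predictor below are purely illustrative.

```python
import numpy as np

def pinball_loss(y, c, tau):
    """Asymmetric absolute error for quantile level tau:
    under-predictions weighted by tau, over-predictions by (1 - tau)."""
    u = np.asarray(y, dtype=float) - c
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))

# Minimizing over a constant predictor recovers the empirical quantile;
# tau = 0.5 gives the median-regression special case named in the abstract.
y = [1.0, 2.0, 3.0, 4.0, 100.0]
grid = np.linspace(0.0, 10.0, 1001)
best = grid[int(np.argmin([pinball_loss(y, c, 0.5) for c in grid]))]
```

For this sample `best` sits at the median (3.0) rather than the outlier-pulled mean (22.0), which is the robustness property that motivates median regression.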
APA, Harvard, Vancouver, ISO, and other styles
41

Qi, Jun, Jun Du, Sabato Marco Siniscalchi, Xiaoli Ma, and Chin-Hui Lee. "Analyzing Upper Bounds on Mean Absolute Errors for Deep Neural Network-Based Vector-to-Vector Regression." IEEE Transactions on Signal Processing 68 (2020): 3411–22. http://dx.doi.org/10.1109/tsp.2020.2993164.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Bruno, Jennifer L., Zhizhong Li, Matthieu Trudeau, Sachin M. Raina, and Jack T. Dennerlein. "A Single Video Camera Postural Assessment System to Measure Rotation of the Shoulder During Computer Use." Journal of Applied Biomechanics 28, no. 3 (2012): 343–48. http://dx.doi.org/10.1123/jab.28.3.343.

Full text
Abstract:
The goal of this study was to evaluate the performance of a single video camera system for measuring shoulder rotation during computer work, and to quantify the work and postural space within which the system performs optimally. Shoulder rotation angles calculated using the video system were compared with angles calculated using an active infrared LED three-dimensional motion analysis system while 10 adult volunteers simulated postures for two different trials: typical of normal computer work (freestyle) and with forced shoulder abduction (constrained). Average and absolute errors were calculated to determine the accuracy and precision of the system, respectively, for each trial, for each position, and for both the right and left hands. For the right hand, mean values for the average and absolute errors were –1 and 0 degrees, respectively. Only the absolute error increased significantly to 12 degrees for the constrained posture compared with freestyle. During normal computer work, the video system provided shoulder rotation angle values similar to those of a three-dimensional system, thus making it a viable and simple instrument to use in field studies.
APA, Harvard, Vancouver, ISO, and other styles
43

Nuss, Kayla J., Nicholas A. Hulett, Alden Erickson, et al. "Comparison of Energy Expenditure and Step Count Measured by ActiGraph Accelerometers Among Dominant and Nondominant Wrist and Hip Sites." Journal for the Measurement of Physical Behaviour 3, no. 4 (2020): 315–22. http://dx.doi.org/10.1123/jmpb.2019-0064.

Full text
Abstract:
Objective: To validate and compare the accuracy of energy expenditure (EE) and step counts measured by ActiGraph accelerometers (ACT) at dominant and nondominant wrist and hip sites. Methods: Thirty young adults (15 females, age 22.93 ± 3.30 years) wore four ActiGraph wGT3X accelerometers while walking and running on a treadmill for 7 min at seven different speeds (1.7, 2.5, 3.4, 4.2, 5.0, 5.5, and 6.0 mph). The EE from each ACT was calculated using the Freedson Adult equation, and the “worn on the wrist” option was selected for the wrist data. Indirect calorimetry and manually counted steps were used as criterion measures. Mean absolute percentage error and two one-sided test procedures for equivalence were used for the analyses. Results: All ACTs underestimated the EE with mean absolute percentage errors over 30% for wrist placement and over 20% for hip placement. The wrist-worn ACTs underestimated the step count with mean absolute percentage errors above 30% for both dominant and nondominant placements. The hip-worn ACTs accurately assessed steps for the whole sample and for women and men (p < .001 to .05 for two one-sided tests procedures), but not at speeds slower than 2.0 mph. Conclusion: Neither hip nor wrist placements assess EE accurately. More algorithms and methods to derive EE estimates from wrist-worn ACTs must be developed and validated. For step counts, both dominant and nondominant hip placements, but not wrist placements, lead to accurate results for both men and women.
APA, Harvard, Vancouver, ISO, and other styles
44

Chalermwongphan, Karn, and Prapatpong Upala. "The Analysis of Dynamic O/D Adjustment for Bicycle Traffic Demand Estimation with AIMSUN Simulation Model: A Case Study of Nakhon Sawan Municipality in Thailand." Open Transportation Journal 12, no. 1 (2018): 352–65. http://dx.doi.org/10.2174/1874447801812010352.

Full text
Abstract:
Aim: This research aimed to present the process of estimating bicycle traffic demand in order to design bike routes that meet the daily transportation needs of the people in Nakhon Sawan Municipality. Methods: Primary and secondary traffic data were collected to develop a virtual traffic simulation model using the AIMSUN simulation software. The model was validated by adjusting the origin and destination survey data (O/D matrix) through a dynamic O/D adjustment. The 99 replication scenarios were statistically examined and assessed using goodness-of-fit tests. The nine measures examined were: 1) Root Mean Square Error (RMSE), 2) Root Mean Square Percentage Error (RMSPE%), 3) Mean Absolute Deviation (MAD), 4) Mean Bias Error (MBE), 5) Mean Percentage Error (MPE%), 6) Mean Absolute Percentage Error (MAPE%), 7) Coefficient of Determination (R2), 8) GEH Statistic (GEH), and 9) Theil's U Statistic (Theil's U). Results: The resulting statistical values were used to determine the acceptable ranges according to the acceptable indicators of each factor. Conclusion: It was found that only 8 scenarios met the evaluation criteria. The selection and ranking process was consequently carried out using the multi-factor scoring method, which could eliminate errors that might arise from applying only one goodness-of-fit measure.
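Two of the calibration measures named in this abstract have compact closed forms. As a generic illustration (not code from the study), a minimal Python sketch of RMSPE and the GEH statistic, where `observed` and `modelled` are hypothetical paired traffic counts:

```python
import math

def rmspe(observed, modelled):
    """Root Mean Square Percentage Error (%) over paired counts."""
    n = len(observed)
    return 100.0 * math.sqrt(
        sum(((o - m) / o) ** 2 for o, m in zip(observed, modelled)) / n
    )

def geh(observed, modelled):
    """GEH statistic for a single count pair (traffic-modelling convention);
    values below 5 are conventionally taken as a good match."""
    return math.sqrt(2.0 * (modelled - observed) ** 2 / (modelled + observed))
```

Unlike a plain percentage error, GEH mixes absolute and relative deviation, which is why it is a standard acceptance criterion for simulated link flows.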
APA, Harvard, Vancouver, ISO, and other styles
45

Sagar, Pinki, Prinima Gupta, and Rohit Tanwar. "A novel prediction algorithm for multivariate data sets." Decision Making: Applications in Management and Engineering 4, no. 2 (2021): 225–40. http://dx.doi.org/10.31181/dmame210402215s.

Full text
Abstract:
Regression analysis is a statistical technique most commonly used for forecasting. Data sets are becoming very large due to continuous transactions in today's high-paced world, which makes the data difficult to manage and interpret. Not all independent variables can be considered for prediction, because maintaining the full data set is costly. A novel prediction algorithm is implemented in this paper. Its emphasis is on extracting efficient independent variables from the many variables of the data set. The selection of variables is based on the Mean Square Error (MSE) as well as on the coefficient of determination (r2p); the final prediction equation of the algorithm is then framed on the basis of deviation from the actual mean. This statistics-based prediction algorithm is evaluated on four parameters: Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and residuals. The algorithm has been implemented for a multivariate data set with low maintenance costs, low preprocessing costs, and low root mean square error and residuals. It can also be applied to one-dimensional, two-dimensional, frequent stream, time series and continuous data. The impact of this algorithm is to enhance forecasting accuracy and minimize the average error rate.
APA, Harvard, Vancouver, ISO, and other styles
46

Ceylan, Hüseyin. "An Artificial Neural Networks Approach to Estimate Occupational Accident: A National Perspective for Turkey." Mathematical Problems in Engineering 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/756326.

Full text
Abstract:
Occupational accident estimation models were developed for Turkey using artificial neural networks (ANNs). Using these models, the number of occupational accidents and the numbers of deaths and cases of permanent incapacity resulting from occupational accidents were estimated for Turkey through the year 2025 under three different scenarios. In developing the models, insured workers, workplaces, occupational accidents, deaths, and permanent incapacity values were used as model parameters, with data from 1970 to 2012. A 2-5-1 neural network architecture was selected as the best architecture. A sigmoid function was used in the hidden layer and a linear function at the output layer. The feed-forward back-propagation algorithm was used to train the network. To obtain a useful model, the network was trained on data from 1970 to 1999 and used to estimate the values for 2000 to 2012. The results were compared with the real values, and the model was found applicable for this aim. The performance of all developed models was evaluated using mean absolute percentage error (MAPE), mean absolute error (MAE), and root mean square error (RMSE).
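The three performance measures used here (and in several other entries on this page) are straightforward to state in code. A minimal, self-contained sketch for reference — a generic illustration, not code from the study:

```python
import math

def mae(actual, predicted):
    """Mean Absolute Error: average magnitude of the errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean Absolute Percentage Error (%); actual values must be nonzero."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Square Error: penalizes large errors more than MAE."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))
```

Because RMSE squares the residuals before averaging, RMSE ≥ MAE always holds, and a large gap between the two signals a few unusually large errors.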
APA, Harvard, Vancouver, ISO, and other styles
47

Mustafa, Mohammad Z., Ashraf A. Khan, Harry Bennett, Andrew J. Tatham, and Mark Wright. "Accuracy of biometric formulae in hypermetropic patients undergoing cataract surgery." European Journal of Ophthalmology 29, no. 5 (2018): 510–15. http://dx.doi.org/10.1177/1120672118803509.

Full text
Abstract:
Purpose: To audit and analyse the accuracy of current biometric formulae on refractive outcomes following cataract surgery in patients with axial length less than 22 mm. Methods: A total of 84 eyes from 84 patients with axial length <22 mm were identified from consecutive patients undergoing cataract surgery retrospectively at a single university hospital. All subjects had biometry using the IOLMaster (Carl Zeiss Meditec, Inc, Dublin, CA, USA) and a Sensar AR40 intraocular lens implant (Abbott Medical Optics, CA, USA). One eye from each patient was randomly selected for inclusion. Prediction errors were calculated by comparing expected refraction from optimized formulas (SRK/T, Hoffer Q, Haigis and Holladay 1) to postoperative refraction. A national survey of ophthalmologists was conducted to ascertain biometric formula preference for small eyes. Results: The mean axial length was 21.00 ± 0.55 mm. Mean error was greatest for Hoffer Q at −0.57 dioptres. There was no significant difference in mean absolute error between formulae. SRK/T achieved the highest percentage of outcomes within 0.5 dioptres (45.2%) and 1 dioptre (76.2%) of target. Shallower anterior chamber depth was associated with higher mean absolute error for SRK/T (p = 0.028), Hoffer Q (p = 0.003) and Haigis (p = 0.016) but not Holladay (p = 0.111). Conclusion: SRK/T had the highest proportion of patients achieving refractive results close to predicted outcomes. However, there was a significant association between a shallower anterior chamber depth and higher mean absolute error for all formulae except Holladay 1. This suggests that anterior chamber depth with axial length should be considered when counselling patients about refractive outcome.
APA, Harvard, Vancouver, ISO, and other styles
48

Louzazni, Mohamed, Sameer Al-Dahidi, and Marco Mussetta. "Fuel Cell Characteristic Curve Approximation Using the Bézier Curve Technique." Sustainability 12, no. 19 (2020): 8127. http://dx.doi.org/10.3390/su12198127.

Full text
Abstract:
Accurate modelling of the fuel cell characteristic curve is essential for the simulation analysis, control management, performance evaluation, and fault detection of fuel cell power systems. However, the big challenge in fuel cell modelling is the multi-variable complexity of the characteristic curves. In this paper, we propose the implementation of a computer graphics technique called the Bézier curve to approximate the characteristic curves of the fuel cell. Four different case studies are examined: Ballard Systems, the Horizon H-12 W stack, NedStackPS6, and a 250 W proton exchange membrane fuel cell (PEMFC). The main objective is to minimize the absolute errors between experimental and calculated data by using the control points of the Bernstein–Bézier function and de Casteljau's algorithm. The application of this technique entails subdividing the fuel cell curve into segments, where each segment is approximated by a Bézier curve so that the approximation error is minimized. Further, the performance and accuracy of the proposed technique are compared with recent results obtained by different metaheuristic algorithms and analytical methods. The comparison is carried out in terms of various statistical error indicators, such as Individual Absolute Error (IAE), Relative Error (RE), Root Mean Square Error (RMSE), Mean Bias Error (MBE), and Autocorrelation Function (ACF). The results obtained by the Bézier curve technique show excellent agreement with the experimental data and are more accurate than those obtained by the other comparative techniques.
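De Casteljau's algorithm, on which the abstract relies, evaluates a Bézier curve by repeated linear interpolation of its control points. A minimal Python sketch (an illustration with hypothetical 2-D control points, not the authors' implementation):

```python
def de_casteljau(points, t):
    """Evaluate a Bézier curve at parameter t in [0, 1] by repeatedly
    interpolating adjacent control points until one point remains."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bézier with control points (0, 0), (1, 2), (2, 0):
# at t = 0.5 the curve passes through (1.0, 1.0).
```

The scheme is numerically stable, which is one reason it is preferred over evaluating the Bernstein polynomial directly when fitting measured characteristic curves.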
APA, Harvard, Vancouver, ISO, and other styles
49

Zahid, Faisal M., and Christian Heumann. "Multiple imputation with sequential penalized regression." Statistical Methods in Medical Research 28, no. 5 (2018): 1311–27. http://dx.doi.org/10.1177/0962280218755574.

Full text
Abstract:
Missing data is a common issue that can cause problems in estimation and inference in biomedical, epidemiological and social research. Multiple imputation is an increasingly popular approach for handling missing data. In case of a large number of covariates with missing data, existing multiple imputation software packages may not work properly and often produce errors. We propose a multiple imputation algorithm called mispr based on sequential penalized regression models. Each variable with missing values is assumed to have a different distributional form and is imputed with its own imputation model using the ridge penalty. In the case of a large number of predictors with respect to the sample size, the use of a quadratic penalty guarantees unique estimates for the parameters and leads to better predictions than the usual Maximum Likelihood Estimation (MLE), with a good compromise between bias and variance. As a result, the proposed algorithm performs well and provides imputed values that are better even for a large number of covariates with small samples. The results are compared with the existing software packages mice, VIM and Amelia in simulation studies. The missing at random mechanism was the main assumption in the simulation study. The imputation performance of the proposed algorithm is evaluated with mean squared imputation error and mean absolute imputation error. The mean squared error ([Formula: see text]), parameter estimates with their standard errors and confidence intervals are also computed to compare the performance in the regression context. The proposed algorithm is observed to be a good competitor to the existing algorithms, with smaller mean squared imputation error, mean absolute imputation error and mean squared error. The algorithm’s performance becomes considerably better than that of the existing algorithms with increasing number of covariates, especially when the number of predictors is close to or even greater than the sample size. Two real-life datasets are also used to examine the performance of the proposed algorithm using simulations.
APA, Harvard, Vancouver, ISO, and other styles
50

Goulet, Eric D. B., and Lindsay B. Baker. "Sweat Sodium Concentration: Inter-Unit Variability of a Low Cost, Portable, and Battery Operated Sodium Analyzer." International Journal of Sport Nutrition and Exercise Metabolism 27, no. 6 (2017): 528–32. http://dx.doi.org/10.1123/ijsnem.2017-0136.

Full text
Abstract:
The B-722 Laqua Twin is a low cost, portable, and battery operated sodium analyzer which can be used for the assessment of sweat sodium concentration. The Laqua Twin is reliable and provides a degree of accuracy similar to more expensive analyzers; however, its interunit measurement error remained unknown. The purpose of this study was to compare the sodium concentration values of 70 sweat samples measured using three different Laqua Twin units. Mean absolute errors, random errors, and constant errors among the different Laqua Twins ranged between 1.7 and 3.5 mmol/L, 2.5 and 3.7 mmol/L, and –0.6 and 3.9 mmol/L, respectively. Proportional errors among Laqua Twins were all < 2%. Based on a within-subject biological variability in sweat sodium concentration of ± 12%, the maximal allowable imprecision among instruments was considered to be ≤ 6%. In that respect, the within (2.9%), between (4.5%), and total (5.4%) measurement error coefficients of variation were all < 6%. For a given sweat sodium concentration value, the largest observed differences in mean and in lower and upper bound error of measurement among instruments were, respectively, 4.7 mmol/L, 2.3 mmol/L, and 7.0 mmol/L. In conclusion, our findings show that the interunit measurement error of the B-722 Laqua Twin is low and methodologically acceptable.
APA, Harvard, Vancouver, ISO, and other styles