
Journal articles on the topic 'Linear weighted averaging method'


Consult the top 50 journal articles for your research on the topic 'Linear weighted averaging method.'


1

Al-Quran, Ashraf. "T-spherical linear Diophantine fuzzy aggregation operators for multiple attribute decision-making." AIMS Mathematics 8, no. 5 (2023): 12257–86. http://dx.doi.org/10.3934/math.2023618.

Abstract:
This paper aims to amalgamate the notion of a T-spherical fuzzy set (T-SFS) and a linear Diophantine fuzzy set (LDFS) to elaborate on the notion of the T-spherical linear Diophantine fuzzy set (T-SLDFS). The new concept is very effective and is more dominant as compared to T-SFS and LDFS. Then, we advance the basic operations of T-SLDFS and examine their properties. To effectively aggregate the T-spherical linear Diophantine fuzzy data, a T-spherical linear Diophantine fuzzy weighted averaging (T-SLDFWA) operator and a T-spherical linear Diophantine fuzzy weighted geometric (T-SLDFWG) operator are proposed. Then, the properties of these operators are also provided. Furthermore, the notions of the T-spherical linear Diophantine fuzzy-ordered weighted averaging (T-SLDFOWA) operator; T-spherical linear Diophantine fuzzy hybrid weighted averaging (T-SLDFHWA) operator; T-spherical linear Diophantine fuzzy-ordered weighted geometric (T-SLDFOWG) operator; and T-spherical linear Diophantine fuzzy hybrid weighted geometric (T-SLDFHWG) operator are proposed. To compare T-spherical linear Diophantine fuzzy numbers (T-SLDFNs), different types of score and accuracy functions are defined. On the basis of the T-SLDFWA and T-SLDFWG operators, a multiple attribute decision-making (MADM) method within the framework of T-SLDFNs is designed, and the ranking results are examined by different types of score functions. A numerical example is provided to depict the practicality and ascendancy of the proposed method. Finally, to demonstrate the excellence and accessibility of the proposed method, a comparison analysis with other methods is conducted.
2

Nourani, Vahid, Gozen Elkiran, and S. I. Abba. "Wastewater treatment plant performance analysis using artificial intelligence – an ensemble approach." Water Science and Technology 78, no. 10 (2018): 2064–76. http://dx.doi.org/10.2166/wst.2018.477.

Abstract:
In the present study, three different artificial intelligence based non-linear models, i.e. feed-forward neural network (FFNN), adaptive neuro-fuzzy inference system (ANFIS) and support vector machine (SVM) approaches, and a classical multi-linear regression (MLR) method were applied for predicting the performance of the Nicosia wastewater treatment plant (NWWTP), in terms of effluent biological oxygen demand (BODeff), chemical oxygen demand (CODeff) and total nitrogen (TNeff). The daily data were used to develop single and ensemble models to improve the prediction ability of the methods. The results showed that, among the single models, the ANFIS model provides the most effective outcomes. In the ensemble modeling, simple averaging ensemble, weighted averaging ensemble and neural network ensemble techniques were subsequently proposed to improve the performance of the single models. The results showed that in predicting BODeff, the simple averaging ensemble (SAE), weighted averaging ensemble (WAE) and neural network ensemble (NNE) models increased the performance efficiency of artificial intelligence (AI) modeling by up to 14%, 20% and 24% in the verification phase, respectively, and by up to 5% for both CODeff and TNeff in the calibration phase. This shows that the NNE model is a more robust and reliable ensemble method for predicting the NWWTP performance due to its non-linear averaging kernel.
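
The simple and weighted averaging ensembles described above are easy to reproduce. Below is a minimal sketch, assuming the single-model predictions are already available as arrays and that the WAE weights are taken inversely proportional to each model's calibration-phase RMSE (one common choice; the paper does not spell out its exact weighting rule):

```python
import numpy as np

def simple_averaging_ensemble(predictions):
    """SAE: unweighted mean of the single-model outputs."""
    return np.mean(predictions, axis=0)

def weighted_averaging_ensemble(predictions, calibration_errors):
    """WAE: weights inversely proportional to each model's calibration error
    (an assumed weighting scheme for illustration)."""
    inv = 1.0 / np.asarray(calibration_errors, dtype=float)
    weights = inv / inv.sum()
    return np.tensordot(weights, predictions, axes=1)

# Example: three models predicting effluent BOD over five time steps
preds = np.array([[20.1, 21.0, 19.5, 22.3, 20.8],
                  [19.8, 20.5, 19.9, 21.7, 21.0],
                  [20.5, 21.2, 19.2, 22.0, 20.4]])
rmse = [1.2, 0.9, 1.5]  # hypothetical calibration RMSEs
print(simple_averaging_ensemble(preds))
print(weighted_averaging_ensemble(preds, rmse))
```
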
3

Łęski, J. M., and N. Henzel. "Generalized ordered linear regression with regularization." Bulletin of the Polish Academy of Sciences: Technical Sciences 60, no. 3 (2012): 481–89. http://dx.doi.org/10.2478/v10175-012-0061-2.

Abstract:
Linear regression analysis has become a fundamental tool in the experimental sciences. We propose a new method for parameter estimation in linear models. The 'Generalized Ordered Linear Regression with Regularization' (GOLRR) uses various loss functions (including the ε-insensitive ones), ordered weighted averaging of the residuals, and regularization. The algorithm consists of solving a sequence of weighted quadratic minimization problems, where the weights used for the next iteration depend not only on the values but also on the order of the model residuals obtained in the current iteration. Such a regression problem may be transformed into the iterative reweighted least squares scenario. The conjugate gradient algorithm is used to minimize the proposed criterion function. Finally, numerical examples are given to demonstrate the validity of the proposed method.
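
The iteratively reweighted least squares loop described here can be illustrated compactly. A minimal sketch, assuming a squared loss, a linearly decreasing ordered-weight profile over the rank-sorted absolute residuals, ridge-style regularization, and a direct solve standing in for the paper's conjugate-gradient step:

```python
import numpy as np

def golrr_sketch(X, y, lam=0.1, iters=20):
    """Iteratively reweighted least squares with rank-ordered residual weights."""
    n, p = X.shape
    # Linearly decreasing weight profile: rank 0 (smallest |residual|) gets
    # the largest weight, so large residuals are downweighted.
    profile = np.arange(n, 0, -1, dtype=float)
    profile /= profile.sum()
    w = np.full(n, 1.0 / n)
    beta = np.zeros(p)
    for _ in range(iters):
        # Weighted ridge step: (X'WX + lam*I) beta = X'Wy
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X + lam * np.eye(p), X.T @ W @ y)
        r = np.abs(y - X @ beta)
        ranks = np.argsort(np.argsort(r))  # 0 = smallest residual
        w = profile[ranks]                 # weights depend on residual order
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
print(golrr_sketch(X, y))  # should approach [1, -2, 0.5]
```
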
4

Kim, Do-Yeop, and Ju-Yong Chang. "Attention-Based 3D Human Pose Sequence Refinement Network." Sensors 21, no. 13 (2021): 4572. http://dx.doi.org/10.3390/s21134572.

Abstract:
Three-dimensional human mesh reconstruction from a single video has made much progress in recent years due to the advances in deep learning. However, previous methods still often reconstruct temporally noisy pose and mesh sequences given in-the-wild video data. To address this problem, we propose a human pose refinement network (HPR-Net) based on a non-local attention mechanism. The pipeline of the proposed framework consists of a weight-regression module, a weighted-averaging module, and a skinned multi-person linear (SMPL) module. First, the weight-regression module creates pose affinity weights from a 3D human pose sequence represented in a unit quaternion form. Next, the weighted-averaging module generates a refined 3D pose sequence by performing temporal weighted averaging using the generated affinity weights. Finally, the refined pose sequence is converted into a human mesh sequence using the SMPL module. HPR-Net is a simple but effective post-processing network that can substantially improve the accuracy and temporal smoothness of 3D human mesh sequences obtained from an input video by existing human mesh reconstruction methods. Our experiments show that the noisy results of the existing methods are consistently improved using the proposed method on various real datasets. Notably, our proposed method reduces the pose and acceleration errors of VIBE, the existing state-of-the-art human mesh reconstruction method, by 1.4% and 66.5%, respectively, on the 3DPW dataset.
5

Sharghi, Elnaz, Vahid Nourani, and Nazanin Behfar. "Earthfill dam seepage analysis using ensemble artificial intelligence based modeling." Journal of Hydroinformatics 20, no. 5 (2018): 1071–84. http://dx.doi.org/10.2166/hydro.2018.151.

Abstract:
In this paper, an ensemble artificial intelligence (AI) based model is proposed for seepage modeling. For this purpose, firstly several AI models (i.e. Feed Forward Neural Network, Support Vector Regression and Adaptive Neural Fuzzy Inference System) were employed to model seepage through the Sattarkhan earthfill dam located in northwest Iran. Three different scenarios were considered where each scenario employs a specific input combination suitable for different real world conditions. Afterwards, an ensemble method as a post-processing approach was used to improve predicting performance of the water head through the dam and the results of the models were compared and evaluated. For this purpose, three methods of model ensemble (simple linear averaging, weighted linear averaging and non-linear neural ensemble) were employed and compared. The obtained results indicated that the model ensemble could lead to a promising improvement in seepage modeling. The results indicated that the ensembling method could increase the performance of AI modeling by up to 20% in the verification step.
6

Koers, Greetje, Lambertus J. M. Mulder, and Frederik M. van der Veen. "The Computation of Evoked Heart Rate and Blood Pressure." Journal of Psychophysiology 13, no. 2 (1999): 83–91. http://dx.doi.org/10.1027//0269-8803.13.2.83.

Abstract:
For many years psychophysiologists have been interested in stimulus related changes in heart rate and blood pressure. To represent these evoked heart rate and blood pressure patterns, heart rate and blood pressure data have to be transformed into equidistant time series. This paper presents an extensive comparison between two methods. The most often used method is based on linear interpolation, also known as weighted averaging. The low pass filtering method presented here is based on a well-known model for the generation of heart beats, the integral pulse frequency modulation model (IPFM). The comparison shows that the results of the filtering and interpolation procedures are virtually identical. Practically, small differences between the methods disappear in the averaging process. Therefore, the interpolation method is a suitable practical alternative to the computationally complex filtering method.
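
The interpolation route favored by the authors amounts to resampling the irregular beat-to-beat series onto an equidistant time grid. A minimal sketch with hypothetical beat times (numpy's np.interp performs the linear interpolation; the IPFM-based low-pass filtering alternative is not reproduced):

```python
import numpy as np

# Beat occurrence times (s) and instantaneous heart rate (bpm) per interval
beat_times = np.array([0.0, 0.8, 1.7, 2.5, 3.2, 4.1])
ibi = np.diff(beat_times)      # inter-beat intervals
hr = 60.0 / ibi                # instantaneous heart rate
hr_times = beat_times[1:]      # assign each rate to the closing beat

# Resample onto an equidistant 4 Hz grid by linear interpolation
grid = np.arange(hr_times[0], hr_times[-1], 0.25)
hr_equidistant = np.interp(grid, hr_times, hr)
print(grid)
print(hr_equidistant)
```
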
7

Manjumari, M. "Decision Making Problem with Multiple Attribute Group Employing an Array of Singularly Perturbed Differential Equations." Indian Journal Of Science And Technology 18, no. 14 (2025): 1147–54. https://doi.org/10.17485/ijst/v18i14.2820.

Abstract:
Objectives: This study deals with an ensemble of linear singularly perturbed differential equations that is employed to solve Multiple Attribute Group Decision Making (MAGDM) problems in which the weights of the Decision Makers (DMs) are unknown. Methods: Two sophisticated operators, the Intuitionistic Fuzzy Generalised Hybrid Weighted Averaging (IFGHWA) operator and the Intuitionistic Fuzzy Weighted Averaging (IFWA) operator, are employed to aid in the course of decision-making. Findings: In order to determine the best course of action, these operators are used to combine intuitionistic fuzzy decision matrices into a collective decision matrix. The newly proposed correlation coefficient method and score function are used to rank the best alternative among the available alternatives. Novelty: To demonstrate the efficiency and applicability of the suggested method in resolving MAGDM situations with ambiguous decision-maker weights, a computational instance is provided. A numerical illustration is given to show the effectiveness of the proposed approach. Keywords: Intuitionistic Fuzzy Sets (IFSs), Singular Perturbation Problem, IFGHWA operator, IFWA operator, MAGDM
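
The IFWA operator used here has a standard closed form for intuitionistic fuzzy numbers (mu, nu): IFWA_w(a_1, ..., a_n) = (1 - prod_i (1 - mu_i)^w_i, prod_i nu_i^w_i). A minimal sketch of that aggregation with hypothetical judgments (the IFGHWA operator and the singularly perturbed weighting scheme of the paper are not reproduced):

```python
import numpy as np

def ifwa(alphas, weights):
    """Intuitionistic fuzzy weighted averaging of (membership, non-membership) pairs."""
    mu = np.array([a[0] for a in alphas])
    nu = np.array([a[1] for a in alphas])
    w = np.asarray(weights, dtype=float)
    agg_mu = 1.0 - np.prod((1.0 - mu) ** w)
    agg_nu = np.prod(nu ** w)
    return agg_mu, agg_nu

# Three expert judgments on one alternative, expert weights summing to 1
print(ifwa([(0.6, 0.3), (0.7, 0.2), (0.5, 0.4)], [0.4, 0.35, 0.25]))
```
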
8

Iampan, Aiyared, Gustavo Santos García, Muhammad Riaz, Hafiz Muhammad Athar Farid, and Ronnason Chinram. "Linear Diophantine Fuzzy Einstein Aggregation Operators for Multi-Criteria Decision-Making Problems." Journal of Mathematics 2021 (July 17, 2021): 1–31. http://dx.doi.org/10.1155/2021/5548033.

Abstract:
The linear Diophantine fuzzy set (LDFS) has been proved to be an efficient tool in expressing decision maker (DM) evaluation values in the multicriteria decision-making (MCDM) procedure. To more effectively represent DMs’ evaluation information in the complicated MCDM process, this paper proposes an MCDM method based on proposed novel aggregation operators (AOs) under the linear Diophantine fuzzy set (LDFS). A q-rung orthopair fuzzy set (q-ROFS), Pythagorean fuzzy set (PFS), and intuitionistic fuzzy set (IFS) are rudimentary concepts in computational intelligence, which have diverse applications in modeling uncertainty and MCDM. Unfortunately, these theories have their own limitations related to the membership and nonmembership grades. The linear Diophantine fuzzy set (LDFS) is a new approach towards uncertainty which has the ability to relax the strict constraints of IFS, PFS, and q-ROFS by considering reference/control parameters. LDFS provides an appropriate way for the decision experts (DEs) to deal with vague and uncertain information in a comprehensive way. Under these environments, we introduce several AOs named as linear Diophantine fuzzy Einstein weighted averaging (LDFEWA) operator, linear Diophantine fuzzy Einstein ordered weighted averaging (LDFEOWA) operator, linear Diophantine fuzzy Einstein weighted geometric (LDFEWG) operator, and linear Diophantine fuzzy Einstein ordered weighted geometric (LDFEOWG) operator. We investigate certain characteristics and operational laws with some illustrations. Ultimately, an innovative approach for MCDM under the linear Diophantine fuzzy information is examined by implementing the suggested aggregation operators. A useful example related to a country’s national health administration (NHA) creating a fully developed postacute care (PAC) model network for the health recovery of patients suffering from cerebrovascular diseases (CVDs) is exhibited to specify the practicability and efficacy of the intended approach.
9

Hongcong, Liu. "Time Series Forecasting as a Measure." International Journal of Advanced Pervasive and Ubiquitous Computing 5, no. 2 (2013): 47–55. http://dx.doi.org/10.4018/japuc.2013040105.

Abstract:
In this paper, time series prediction is treated as a measurement problem: each individual forecasting method is regarded as a measurement of the true value, and the optimal combination forecast is defined accordingly. The error of the theoretical estimate depends on the correlation coefficients among the individual forecast errors. It is proved that the optimal weighted linear combination is the theoretically best prediction and that, under suitable conditions, the simple averaging method provides the optimal weights for linear combination forecasting. In particular, based on robust statistical theory, the mathematical derivation of this superiority is simple, and it is confirmed by numerical tests.
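
The dependence of the optimal weights on the error correlations is the classical combination-forecasting (Bates-Granger) result: with forecast-error covariance matrix S, the variance-minimizing weights are w = S^{-1} 1 / (1' S^{-1} 1), and simple averaging is recovered when the errors are exchangeable. A minimal sketch under the assumption of unbiased forecasts:

```python
import numpy as np

def optimal_combination_weights(error_cov):
    """Variance-minimizing weights for a linear combination of unbiased forecasts."""
    ones = np.ones(error_cov.shape[0])
    w = np.linalg.solve(error_cov, ones)
    return w / w.sum()

# Correlated forecast errors: optimal weights deviate from simple averaging
sigma = np.array([[1.0, 0.3, 0.1],
                  [0.3, 2.0, 0.2],
                  [0.1, 0.2, 1.5]])
print(optimal_combination_weights(sigma))      # unequal weights
print(optimal_combination_weights(np.eye(3)))  # equal errors -> simple average
```
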
10

Agbeti, Michael D. "Relationship between Diatom Assemblages and Trophic Variables: A Comparison of Old and New Approaches." Canadian Journal of Fisheries and Aquatic Sciences 49, no. 6 (1992): 1171–75. http://dx.doi.org/10.1139/f92-131.

Abstract:
Diatoms have been used as indicators of trophic status, and many ad hoc diatom indices have been developed. Although many of these indices have been useful, more direct methods for inferring trophic status are now available. Weighted-averaging regression and calibration was found to be a superior method for assessing lake trophic status from diatom assemblages when compared with a multiple linear regression index developed from a set of 30 Canadian Lakes.
11

Teppa-Garran, Pedro, Diego Muñoz-de Escalona, and Javier Zambrano. "Liquid level tracking for a coupled tank system using quasi–lpv control." Ingenius, no. 33 (January 6, 2025): 15–26. https://doi.org/10.17163/ings.n33.2025.02.

Abstract:
This article proposes a gain-scheduling procedure based on quasi-LPV modeling for a nonlinear coupled tank system to track the liquid level with zero steady-state error. The nonlinearities are directly represented by a parameter vector that varies within a bounded set constrained by the physical limits of the tank system levels. This approach enables accurate nonlinear system modeling using a linear parameter-varying model. State-feedback linear controllers are designed at the extreme vertices of the bounded set. The global controller is derived as the weighted average of local controller contributions, with the weighting determined by the instantaneous values of the parameter vector. Two interpolation mechanisms are proposed to implement this weighted averaging of the linear controllers. The results confirm the effectiveness of the proposed method in achieving accurate liquid level tracking.
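
The weighted averaging of vertex controllers can be illustrated in one scheduling dimension: with parameter theta in [theta_min, theta_max], the gain blends as K(theta) = lam*K_max + (1 - lam)*K_min with lam = (theta - theta_min)/(theta_max - theta_min). A minimal sketch with hypothetical gains (the paper's two interpolation mechanisms operate on a parameter vector rather than a scalar):

```python
import numpy as np

def scheduled_gain(theta, theta_min, theta_max, K_min, K_max):
    """Linear interpolation between two vertex state-feedback gains."""
    lam = np.clip((theta - theta_min) / (theta_max - theta_min), 0.0, 1.0)
    return lam * np.asarray(K_max) + (1.0 - lam) * np.asarray(K_min)

# Hypothetical vertex gains designed at the level limits of the tank
K_low = np.array([2.0, 0.5])
K_high = np.array([3.5, 0.9])
for level in [0.0, 0.25, 0.5, 1.0]:  # normalized liquid level
    print(level, scheduled_gain(level, 0.0, 1.0, K_low, K_high))
```
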
12

Keikha, Abazar, and Tabasam Rashid. "Non-linear averaging-based operators of pseudo-hesitant fuzzy elements and an application." Croatian operational research review 14, no. 2 (2023): 179–92. http://dx.doi.org/10.17535/crorr.2023.0015.

Abstract:
Data modeling and aggregation in many uncertain real-world problems, such as decision-making processes, has gotten more attention in recent years. Due to the variety of uncertainty sources, various types of fuzzy sets and averaging-based aggregation functions have been proposed. The power average operator (PAO), as a nonlinear operator, is more appropriate than other averaging-based functions for situations where different values are given on a single subject. In this paper, the PAO is extended to the aggregation of pseudo-hesitant fuzzy elements (pseudo-HFEs), and the needed properties are discussed. Four kinds of PAO with pseudo-HFEs are then defined: the power average operator, the power weighted average operator, the power ordered weighted average operator, and the power hybrid average operator of pseudo-HFEs. To solve a multi-attribute group decision-making (MAGDM) problem, the evaluations made by both decision-makers and self-assessment are quantified by pseudo-HFEs, and the PAO is applied to aggregate the row elements of the resulting decision matrix. The ranking order of the obtained pseudo-HFEs gives the ranking of the options. Finally, the proposed method is used to solve a MAGDM problem, illustrated numerically, analyzed, and validated.
13

BALLINI, R., and R. R. YAGER. "LINEAR DECAYING WEIGHTS FOR TIME SERIES SMOOTHING: AN ANALYSIS." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 22, no. 01 (2014): 23–40. http://dx.doi.org/10.1142/s0218488514500020.

Abstract:
In this paper, we investigate the use of weighted averaging aggregation operators as techniques for time series smoothing. We analyze the moving average, exponential smoothing methods, and a new class of smoothing operators based on linearly decaying weights from the perspective of ordered weights averaging to estimate a constant model. We examine two important features associated with the smoothing processes: the average age of the data and the expected variance, both defined in terms of the associated weights. We show that there exists a fundamental conflict between keeping the variance small while using the freshest data. We illustrate the flexibility of the smoothing methods with real datasets; that is, we evaluate the aggregation operators with respect to their minimal attainable variance versus average age. We also examine the efficiency of the smoothed models in time series smoothing, considering real datasets. Good smoothing generally depends upon the underlying method's ability to select appropriate weights to satisfy the criteria of both small variance and recent data.
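
The two diagnostics analyzed in the paper are commonly defined as simple functions of the weight vector: the average age is sum_i w_i * i (with i the age of the i-th observation, 0 being the newest) and the variance factor for i.i.d. noise is sum_i w_i^2. A minimal sketch comparing moving-average, linearly decaying, and truncated exponential weights, assuming a 10-sample window:

```python
import numpy as np

def diagnostics(weights):
    """Average age (0 = newest sample) and variance factor of a weight vector."""
    w = np.asarray(weights, dtype=float)
    ages = np.arange(len(w))  # index 0 is the most recent sample
    return (w * ages).sum(), (w ** 2).sum()

n = 10
moving_avg = np.full(n, 1.0 / n)
linear = np.arange(n, 0, -1, dtype=float)
linear /= linear.sum()
alpha = 0.3
exponential = alpha * (1 - alpha) ** np.arange(n)
exponential /= exponential.sum()

for name, w in [("moving average", moving_avg), ("linear decay", linear),
                ("exponential", exponential)]:
    age, var = diagnostics(w)
    print(f"{name:15s} age={age:.2f} variance factor={var:.3f}")
```

Running this shows the conflict noted in the abstract: weight profiles that favor fresh data (small average age) pay for it with a larger variance factor.
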
14

Xie, Qichang, and Meng Du. "The Optimal Selection for Restricted Linear Models with Average Estimator." Abstract and Applied Analysis 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/692472.

Abstract:
The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has comparable efficiency to some alternative model selection techniques.
15

Wei, Cuiping, Xijin Tang, and Xiaojie Wang. "Linguistic Multi-Attribute Decision Making with a Prioritization Relationship." International Journal of Knowledge and Systems Science 4, no. 4 (2013): 46–54. http://dx.doi.org/10.4018/ijkss.2013100104.

Abstract:
In this paper we consider linguistic information aggregation problems where a prioritization relationship exists over attributes. The authors define a prioritized 2-tuple ordered weighted averaging (PTOWA) operator to aggregate satisfactions of alternatives under attributes with a linear prioritized ordering. The authors then use the PTOWA and a TOWA operator to aggregate linguistic information where attributes are partitioned into some categories of which prioritization between categories exists. Finally, two illustrative examples are employed to show the feasibility of the proposed method.
16

Djavadov, Natig H., Hikmat H. Asadov, and Reyhana V. Kazimli. "Method of double averaging for optimum accounting of non-certainty of results of measurements greenhouse gases low gases concentrations at the ground distributed systems of atmospheric measurements." Metrologiya, no. 2 (June 8, 2020): 19–30. http://dx.doi.org/10.32446/0132-4713.2020-2-19-30.

Abstract:
To increase the effectiveness of measurements of greenhouse gas concentrations, questions of optimally accounting for the uncertainty of measurement results for low gas concentrations in ground-based distributed networks of atmospheric measurements are considered. It is noted that the temporal and structural instability of atmospheric aerosol leads to uncertainty in the carried-out measurements. It is suggested to use the method of unconditional variational optimization to determine the optimum interrelation between the cost functions of the researched atmospheric gas and aerosol, which provides the best metrological support for the measurements. In order to form the optimization functional, the newly suggested method of double averaging is used. The essence of the suggested double-averaging method is that the following two different averaging operations are carried out sequentially: geometric weighted averaging and algebraic averaging. To form the target optimization functional, a limitation condition imposed on the sought-for optimum function is adopted. The solution of the formulated unconditional variational optimization task shows that, in the presence of a linear interrelation between the scalar cost functions of gas and aerosol, the target functional can reach its maximum, i.e., the extreme value of the uncertainty of the measurement results is reached. On this basis, heuristic recommendations are formulated on the necessity of forming an inverse interrelation between the scalar cost functions of the researched gas and atmospheric aerosol.
17

Liu, Luping, Xin Li, Jianmin Yang, Xinliang Tian, and Lei Liu. "Three-Stage Interpolation Method for Demosaicking Monochrome Polarization DoFP Images." Sensors 24, no. 10 (2024): 3018. http://dx.doi.org/10.3390/s24103018.

Abstract:
The emergence of polarization image sensors presents both opportunities and challenges for real-time full-polarization reconstruction in scene imaging. This paper presents an innovative three-stage interpolation method specifically tailored for monochrome polarization image demosaicking, emphasizing both precision and processing speed. The method introduces a novel linear interpolation model based on polarization channel difference priors in the initial two stages. To enhance results through bidirectional interpolation, a continuous adaptive edge detection method based on variance differences is employed for weighted averaging. In the third stage, a total intensity map, derived from the previous two stages, is integrated into a residual interpolation process, thereby further elevating estimation precision. The proposed method undergoes validation using publicly available advanced datasets, showcasing superior performance in both global parameter evaluations and local visual details when compared with existing state-of-the-art techniques.
18

Soleymannejad, Mohammad, Danial Sadrian Zadeh, Behzad Moshiri, Ebrahim Navid Sadjadi, Jesús García Herrero, and Jose Manuel Molina López. "State Estimation Fusion for Linear Microgrids over an Unreliable Network." Energies 15, no. 6 (2022): 2288. http://dx.doi.org/10.3390/en15062288.

Abstract:
Microgrids should be continuously monitored in order to maintain suitable voltages over time. Microgrids are mainly monitored remotely, and their measurement data transmitted through lossy communication networks are vulnerable to cyberattacks and packet loss. The current study leverages the idea of data fusion to address this problem. Hence, this paper investigates the effects of estimation fusion using various machine-learning (ML) regression methods as data fusion methods by aggregating the distributed Kalman filter (KF)-based state estimates of a linear smart microgrid in order to achieve more accurate and reliable state estimates. This unreliability in measurements is because they are received through a lossy communication network that incorporates packet loss and cyberattacks. In addition to ML regression methods, multi-layer perceptron (MLP) and dependent ordered weighted averaging (DOWA) operators are also employed for further comparisons. The results of simulation on the IEEE 4-bus model validate the effectiveness of the employed ML regression methods through the RMSE, MAE and R-squared indices under the condition of missing and manipulated measurements. In general, the results obtained by the Random Forest regression method were more accurate than those of other methods.
19

Yang, Guang Yong, Shui Jin Chen, Guo Qing Hu, and Jian Wei Zhang. "Nonlinear Optimization of Structure and Calibration for Scattered Triangulation Laser Displacement Measurement." Advanced Materials Research 204-210 (February 2011): 695–98. http://dx.doi.org/10.4028/www.scientific.net/amr.204-210.695.

Abstract:
The nonlinear output properties of scattered triangulation laser displacement measurement sensors with charge-coupled devices (CCDs) were analyzed and optimized. In order to improve the measurement precision and performance of the displacement and inclination for an object surface, dynamic skewness was regarded as the coefficient for a weighted linear optimization fitting algorithm to calculate the displacement instead of the averaging method; kurtosis was used to select the minimum and maximum of the asymmetric distribution for the inclination calculation. In addition, a novel calibration framework, configured with a stepper motor to correct the discrepancy and update the coordinate origin for the CCD array, was introduced.
20

Wang, Lidong, and Yanjun Wang. "Group Decision-Making Approach Based on Generalized Grey Linguistic 2-Tuple Aggregation Operators." Complexity 2018 (September 26, 2018): 1–14. http://dx.doi.org/10.1155/2018/2301252.

Abstract:
To address complex information fusion problems involving fuzzy and grey uncertainty information, we develop a prioritized averaging aggregation operator and a Bonferroni mean aggregation operator with grey linguistic 2-tuple variables and apply them to design a new decision-making scheme. First, the grey linguistic 2-tuple prioritized averaging (GLTPA) operator is developed to characterize the prioritization relationship among experts and is employed to fuse experts’ information into an overall opinion. Second, we establish a dual generalized grey linguistic 2-tuple weighted Bonferroni mean (DGGLTWBM) operator to capture the interrelationship among any attribute subsets, which can be reduced to some conventional operators by adjusting the parameter vector. On that basis, a flexible group decision-making approach with fuzzy and grey information is designed and applied to an evaluation problem, in which the grey relational analysis (GRA) method and a linear programming model are combined to extract attribute weights from partially known attribute information. Furthermore, an illustrative example is employed to illustrate the practicality and flexibility of the designed method by conducting related comparative studies.
21

Brown, Leland. "Weighted Median Filtering for Terrain and Contour Generalization." Proceedings of the ICA 6 (December 18, 2024): 1–4. https://doi.org/10.5194/ica-proc-6-4-2024.

Abstract:
When working with terrain elevation data, as in image processing, it is often desirable to smooth the data, for purposes such as map generalization or removal of noise. Median filtering is one common technique that can be used for this purpose. It differs from linear filtering techniques like local averaging or Gaussian blurring by its ability to smooth while retaining sharp edges in an image. When applied to elevation data, this means that median filtering can better preserve steep slopes and cliffs while otherwise reducing noise or excessive detail in the terrain. However, median filtering as typically applied can also introduce new artifacts, such as lopping off the tops of peaks and ridges to create flat plateaus that don’t exist in the original landscape. A lesser known technique, a weighted median filter, can reduce or eliminate these artifacts. This method shows promise as a way to generalize digital elevation models, as well as their associated contour lines. It can also be used to smooth hillshaded images, preserving the sharp transition in shading that occurs across ridges. And due to its ability to retain discontinuities in the data, it can be used to locate latent terracing effects hidden in elevation data, which may represent real terrain features or may indicate artifacts of the processing methods used to generate the data.
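
A weighted median is computed by sorting the window values and accumulating their weights until half of the total weight is reached. A minimal one-dimensional sketch with hypothetical center-heavy weights (the paper works on 2-D terrain grids):

```python
import numpy as np

def weighted_median(values, weights):
    """Value at which the cumulative weight first reaches half the total."""
    values = np.asarray(values, dtype=float)
    order = np.argsort(values)
    cum = np.cumsum(np.asarray(weights, dtype=float)[order])
    return values[order][np.searchsorted(cum, 0.5 * cum[-1])]

def weighted_median_filter(signal, weights):
    """Sliding-window weighted median with edge padding."""
    half = len(weights) // 2
    padded = np.pad(np.asarray(signal, dtype=float), half, mode="edge")
    return np.array([weighted_median(padded[i:i + len(weights)], weights)
                     for i in range(len(signal))])

# A noisy step edge: the weighted median smooths but keeps the cliff sharp
rng = np.random.default_rng(1)
profile = np.r_[np.full(10, 100.0), np.full(10, 180.0)] + rng.normal(0, 2, 20)
print(weighted_median_filter(profile, weights=[1, 2, 4, 2, 1]))
```
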
22

Wang, Xiao-Ning, Hai-Yu Ding, Xu-Gang He, Yang Dai, Yuan Zhang, and Sen Ding. "Assessing Fish Species Tolerance in the Huntai River Basin, China: Biological Traits versus Weighted Averaging Approaches." Water 10, no. 12 (2018): 1843. http://dx.doi.org/10.3390/w10121843.

Abstract:
Fish species tolerance used as a component of the fish index of biological integrity (F-IBI) can be problematic, as it is usually classified using historical data, data from the literature, or expert judgments. In this study, fish assemblages, water quality parameters and physical habitat factors from 206 sampling sites in the Huntai River Basin were analyzed to develop tolerance indicator values (TIVs) of fish based on a biological traits approach (Fb-TIVs) and the weighted averaging (WA) method (FW-TIVs). The two quantitative methods for fish tolerance were then compared. The FW-TIVs and Fb-TIVs of fish species were calculated separately using a WA inference model based on ten water quality parameters (WT, pH, DO, SC, TDS, NH3, NO2−, NO3−, TP, Cl−, and SO42−), and six biological traits (lithophilic spawning, benthic invertivores, cold water species, equilibrium or periodic life history strategies, families of Cottidae, and species distribution range). Fish species were then classified into three categories (tolerant species, moderately tolerant species, and sensitive species). The results indicated that only 30.3% of fish species have the same classification based on FW-TIVs and Fb-TIVs. However, the proportion of tolerant species based on the two methods had a similar response to environmental stress, and these tolerant species were correlated with PCA axis 1 site scores (FW-TIVs, p < 0.05, R2 = 0.434; Fb-TIVs, p < 0.05, R2 = 0.334) and not correlated with PCA axis 2 site scores (FW-TIVs, p > 0.05, R2 = 0.001; Fb-TIVs, p > 0.05, R2 = 0.012) or PCA axis 3 site scores (FW-TIVs, p > 0.05, R2 = 0.000; Fb-TIVs, p > 0.05, R2 = 0.013). The results of linear regression analyses indicated that Fb-TIVs can be used for the study of fish tolerance. Fish tolerance assessment based on FW-TIVs requires long-term monitoring of fish assemblages and water quality parameters to provide sufficient data for quantitative studies. The Fb-TIV method relies on the accurate identification of fish traits by an ichthyologist. The two methods used in this study can provide methodological references for quantitative studies of fish tolerance in other regions, and are of great significance for the development of biological assessment tools.
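
The weighted averaging (WA) regression step behind the FW-TIVs has a simple closed form: a species' indicator value is the abundance-weighted mean of the environmental variable over the sites where it occurs, u_k = sum_i y_ik x_i / sum_i y_ik. A minimal sketch with hypothetical abundances (the calibration and deshrinking steps of a full WA model are omitted):

```python
import numpy as np

def wa_optima(abundance, env):
    """Abundance-weighted average of an environmental variable per species.
    abundance: sites x species matrix; env: per-site values (e.g., TP)."""
    abundance = np.asarray(abundance, dtype=float)
    return abundance.T @ env / abundance.sum(axis=0)

# 4 sites, 3 species; env is total phosphorus at each site
y = np.array([[10, 0, 2],
              [ 5, 1, 8],
              [ 0, 6, 4],
              [ 1, 9, 0]])
tp = np.array([20.0, 40.0, 80.0, 120.0])
print(wa_optima(y, tp))  # species abundant at high-TP sites get high optima
```
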
23

Li, She, Xiangyang Cui, and Gang Wang. "Bending and vibration analyses of plate and shell using an element decomposition method." Engineering Computations 35, no. 1 (2018): 287–314. http://dx.doi.org/10.1108/ec-09-2016-0333.

Abstract:
Purpose: The purpose of this paper is to apply the element decomposition method (EDM) in the study of the bending and vibration properties of plates and shells. Design/methodology/approach: In the present method, each quadrilateral element is first divided into four sub-triangular cells, and the local strains are obtained in those sub-triangles based on linear interpolation. The whole strain field is formulated through a weighted averaging operation of local strains, implying that only one integration point is adopted to construct the stiffness matrix. To reduce the instability of one-point integration and increase the accuracy of the present method, a stabilization item of the stiffness matrix is formulated by the variance of the local strains. A mixed interpolated tensorial components (MITC) method is used in eliminating the shear locking phenomenon. Findings: The novel EDM based on linear interpolation is effective in bending and vibration analyses of plates and shells, and the present method used in practical problems is reliable for static and free vibration analysis. Originality/value: This method eliminated the instability of one-point integration and increased the accuracy by a stabilization item and performed stably in engineering analysis including large-scale problems of vehicle components.
24

Wang, Jun, Pengcheng Luo, Xinwu Hu, and Xiaonan Zhang. "Combining an Extended SMAA-2 Method with Integer Linear Programming for Task Assignment of Multi-UCAV under Multiple Uncertainties." Symmetry 10, no. 11 (2018): 587. http://dx.doi.org/10.3390/sym10110587.

Abstract:
Uncertainty should be taken into account when establishing multiobjective task assignment models for multiple unmanned combat aerial vehicles (UCAVs) due to errors in the target information acquired by sensors, implicit preferences of the commander for operational objectives, and partially known weights of sensors. In this paper, we extend the stochastic multicriteria acceptability analysis-2 (SMAA-2) method and combine it with integer linear programming to achieve multiobjective task assignment for multi-UCAV under multiple uncertainties. We first represent the uncertain target information as normal distribution interval numbers so that the values of criteria (operational objectives) concerned can be computed based on the weighted arithmetic averaging operator. Thus, we obtain multiple criteria value matrices for each UCAV. Then, we propose a novel aggregation method to generate the final criteria value matrix based on which the holistic acceptability indices are computed by the extended SMAA-2 method. On this basis, we convert the task assignment model with uncertain parameters into an integer linear programming model without uncertainty so as to implement task assignment using the integer linear programming method. Finally, we conduct a case study and demonstrate the feasibility of the proposed method in solving the multiobjective task assignment problem multi-UCAV under multiple uncertainties.
25

Wang, Gang, Xiangyang Cui, and Guangyao Li. "An Element Decomposition Method for the Helmholtz Equation." Communications in Computational Physics 20, no. 5 (2016): 1258–82. http://dx.doi.org/10.4208/cicp.110415.240316a.

Abstract:
It is well known that the traditional fully integrated quadrilateral element fails to provide accurate results for the Helmholtz equation with large wave numbers due to the “pollution error” caused by numerical dispersion. To overcome this deficiency, this paper proposes an element decomposition method (EDM) for analyzing 2D acoustic problems using quadrilateral elements. In the present EDM, the quadrilateral element is first subdivided into four sub-triangles, and the local acoustic gradient in each sub-triangle is obtained using a linear interpolation function. The acoustic gradient field of the whole quadrilateral is then formulated through a weighted averaging operation, which means only one integration point is adopted to construct the system matrix. To cure the numerical instability of one-point integration, a variation gradient item is complemented by the variance of the local gradients. The discretized system equations are derived using the generalized Galerkin weak form. Numerical examples demonstrate that the EDM achieves better accuracy and higher computational efficiency. Besides, as no mapping or coordinate transformation is involved, restrictions on element shapes can be easily removed, which makes the EDM work well even for severely distorted meshes.
26

Ren, Rupeng, Jun Fang, Jun Hu, Xiaotong Ma, and Xiaoyao Li. "Risk Assessment Modeling of Urban Railway Investment and Financing Based on Improved SVM Model for Advanced Intelligent Systems." International Journal on Semantic Web and Information Systems 19, no. 1 (2023): 1–19. http://dx.doi.org/10.4018/ijswis.331596.

Abstract:
A risk assessment method for urban railway investment and financing based on an improved SVM model under big data is proposed. First, the inner product in the traditional SVM is replaced by a kernel function to obtain a more accurate non-linear SVM, and a classifier with high classification accuracy is achieved by finding the optimal separating hyperplane. Then, a risk index system is constructed based on grounded theory combined with intuitionistic fuzzy sets, interval intuitionistic fuzzy sets, weighted averaging operators and the distance measure, and the selection method for assessment indexes is analyzed based on statistical methods. Finally, the SVM model with fuzzy membership is obtained by fuzzifying the input samples of the SVM based on the given rules of fuzzy membership design. The results show that, with three different test sets tested in the proposed method, the maximum relative error between the final test results and the actual value is 0.316% and the minimum relative error is 0.133%, indicating that the method can accurately assess the investment and financing risk.
27

Song, Xinfu, Gang Liang, Changzu Li, and Weiwei Chen. "Electricity Consumption Prediction for Xinjiang Electric Energy Replacement." Mathematical Problems in Engineering 2019 (March 20, 2019): 1–11. http://dx.doi.org/10.1155/2019/3262591.

Abstract:
In recent years, the curtailment of wind and solar energy in Xinjiang has become severe, the contradiction between power grid supply and demand is obvious, and the proportion of electricity in the energy consumption structure is relatively low, thus hindering the development of Xinjiang’s green power. In this context, the focus of Xinjiang’s power sector has shifted to promoting the development of electric energy replacement. Therefore, using the Xinjiang region as an example, we first select important indicators such as terminal energy substitution in Xinjiang, the added value of the secondary industry, population, terminal power consumption intensity, and per capita disposable income. Subsequently, eight combined forecasting models based on the grey model (GM), multiple linear regression (MLR), and error back-propagation neural network (BP) are constructed to predict and analyse the electricity consumption of the whole society in Xinjiang. The results indicate that the optimal weighted combination forecasting model based on the induced ordered weighted harmonic averaging (IOWHA) operator, GM-MLR-BP, exhibits better prediction accuracy, and the effectiveness of the proposed method is proven.
28

Riaz, Muhammad, Hafiz Muhammad Athar Farid, Weiwei Wang, and Dragan Pamucar. "Interval-Valued Linear Diophantine Fuzzy Frank Aggregation Operators with Multi-Criteria Decision-Making." Mathematics 10, no. 11 (2022): 1811. http://dx.doi.org/10.3390/math10111811.

Abstract:
We introduce the notion of the interval-valued linear Diophantine fuzzy set, which is a generalized fuzzy model for providing more accurate information, particularly in emergency decision-making, with the help of intervals of membership grades and non-membership grades, as well as reference parameters that provide freedom to the decision makers to analyze multiple objects and alternatives in the universe. The accuracy of interval-valued linear Diophantine fuzzy numbers is analyzed using Frank operations. We first extend the Frank t-conorm and t-norm (FTcTn) to interval-valued linear Diophantine fuzzy information and then offer new operations such as the Frank product, Frank sum, Frank exponentiation, and Frank scalar multiplication. Based on these operations, we develop novel interval-valued linear Diophantine fuzzy aggregation operators (AOs), including the “interval-valued linear Diophantine fuzzy Frank weighted averaging operator and the interval-valued linear Diophantine fuzzy Frank weighted geometric operator”. We also demonstrate various features of these AOs and examine the interactions between the proposed AOs. FTcTns offer two significant advantages. Firstly, they function in the same way as algebraic, Einstein, and Hamacher t-conorms and t-norms. Secondly, they have an additional parameter that results in a more dynamic and reliable aggregation process, making them more effective than other general t-conorm and t-norm approaches. Furthermore, we use these operators to design a method for dealing with multi-criteria decision-making with IVLDFNs. Finally, a numerical case study of the novel coronavirus issue is shown as an application for emergency decision-making based on the proposed AOs. The purpose of this numerical example is to demonstrate the practicality and viability of the provided AOs.
29

Suh, M. S., S. G. Oh, D. K. Lee, et al. "Development of New Ensemble Methods Based on the Performance Skills of Regional Climate Models over South Korea." Journal of Climate 25, no. 20 (2012): 7067–82. http://dx.doi.org/10.1175/jcli-d-11-00457.1.

Abstract:
In this paper, the prediction skills of five ensemble methods for temperature and precipitation are discussed by considering 20 yr of simulation results (from 1989 to 2008) for four regional climate models (RCMs) driven by NCEP–Department of Energy and ECMWF Interim Re-Analysis (ERA-Interim) boundary conditions. The simulation domain is the Coordinated Regional Downscaling Experiment (CORDEX) for East Asia, and the number of grid points is 197 × 233 with a 50-km horizontal resolution. Three new performance-based ensemble averaging (PEA) methods are developed in this study using 1) bias, root-mean-square errors (RMSEs) and absolute correlation (PEA_BRC), RMSE and absolute correlation (PEA_RAC), and RMSE and original correlation (PEA_ROC). The other two ensemble methods are equal-weighted averaging (EWA) and multivariate linear regression (Mul_Reg). To derive the weighting coefficients and cross validate the prediction skills of the five ensemble methods, the authors considered 15-yr and 5-yr data, respectively, from the 20-yr simulation data. Among the five ensemble methods, the Mul_Reg (EWA) method shows the best (worst) skill during the training period. The PEA_RAC and PEA_ROC methods show skills that are similar to those of Mul_Reg during the training period. However, the skills and stabilities of Mul_Reg were drastically reduced when this method was applied to the prediction period. But, the skills and stabilities of PEA_RAC were only slightly reduced in this case. As a result, PEA_RAC shows the best skill, irrespective of the seasons and variables, during the prediction period. This result confirms that the new ensemble method developed in this study, PEA_RAC, can be used for the prediction of regional climate.
30

Amini, Navid, Changsoo Shin, and Jaejoon Lee. "Second-order implicit finite-difference schemes for the acoustic wave equation in the time-space domain." GEOPHYSICS 86, no. 5 (2021): T421–T437. http://dx.doi.org/10.1190/geo2020-0684.1.

Abstract:
We have developed compact implicit finite-difference (FD) schemes in the time-space domain based on the second-order FD approximation for accurate solution of the acoustic wave equation in 1D, 2D, and 3D. Our method is based on the weighted linear combination of the second-order FD operators with different spatial orientations to mitigate numerical error anisotropy and the weighted averaging of the mass acceleration term over the grid points of the second-order FD stencil to reduce the overall numerical dispersion error. We have developed a derivation of the schemes for 1D, 2D, and 3D cases. We obtain their corresponding dispersion equations, then we find the optimum weights by optimization of the time-space domain dispersion function, and finally we tabulate the optimized weights for each case. We analyze the numerical dispersion, stability, and convergence rates of our schemes and compare their numerical dispersion characteristics with the standard high-order ones. We also discuss the efficient solution of the system of equations associated with our implicit schemes using the conjugate-gradient method. The comparison of dispersion curves and the numerical solutions with the analytical and the pseudospectral solutions reveals that our schemes have better performance than the standard spatial high-order schemes and remain stable for relatively large time steps.
31

Kim, Eun-Young, and Byeong-Seok Ahn. "An Efficient Approach to Solve the Constrained OWA Aggregation Problem." Symmetry 14, no. 4 (2022): 724. http://dx.doi.org/10.3390/sym14040724.

Abstract:
Constrained ordered weighted averaging (OWA) aggregation attempts to solve the OWA optimization problem subject to multiple constraints. The problem is nonlinear in nature due to the reordered variables of arguments in the objective function, and the solution approach via mixed integer linear programming is quite complex even in the problem with one restriction of which coefficients are all one. Recently, this has been relaxed to allow a constraint with variable coefficients but the solution approach is still abstruse. In this paper, we present a new intuitive method to constructing a problem with auxiliary symmetric constraints to convert it into linear programming problem. The side effect is that we encounter many small sub-problems to be solved. Interestingly, however, we discover that they share common symmetric features in the extreme points of the feasible region of each sub-problem. Consequently, we show that the structure of extreme points and the reordering process of input arguments peculiar to the OWA operator lead to a closed optimal solution to the constrained OWA optimization problem. Further, we extend our findings to the OWA optimization problem constrained by a range of order-preserving constraints and present the closed optimal solutions.
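
The OWA operator at the heart of this optimization problem attaches the weights to the ranks of its arguments rather than to the arguments themselves, which is precisely what makes the constrained problem nonlinear. A minimal sketch of plain OWA aggregation (the closed-form solution of the constrained problem is not reproduced):

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: weights attach to ranks, not to arguments."""
    return np.dot(np.sort(values)[::-1], weights)

x = [0.2, 0.9, 0.5]
print(owa(x, [1.0, 0.0, 0.0]))    # max operator
print(owa(x, [0.0, 0.0, 1.0]))    # min operator
print(owa(x, [1/3, 1/3, 1/3]))    # plain average
```

Because the weights apply to the sorted values, swapping two entries of x leaves the result unchanged, which is the reordering behavior the constrained optimization has to work around.
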
32

Zhuang, Hua. "Additively Consistent Interval-Valued Intuitionistic Fuzzy Preference Relations and Their Application to Group Decision Making." Information 9, no. 10 (2018): 260. http://dx.doi.org/10.3390/info9100260.

Abstract:
This paper aims to propose an innovative approach to group decision making (GDM) with interval-valued intuitionistic fuzzy (IVIF) preference relations (IVIFPRs). First, an IVIFPR is proposed based on the additive consistency of an interval-valued fuzzy preference relation (IVFPR). Then, two mathematical or adjusted programming models are established to extract two special consistent IVFPRs. In order to derive the priority weight of an IVIFPR, after taking the two special IVFPRs into consideration, a linear optimization model is constructed by minimizing the deviations between individual judgments and between the width degrees of the interval priority weights. For GDM with IVIFPRs, the decision makers’ weights are generated by combining the adjusted subjective weights with the objective weights. Subsequently, using an IVIF-weighted averaging operator, the collective IVIFPR is obtained and utilized to derive the IVIF priority weights. Finally, a practical example of a supplier selection is analyzed to demonstrate the application of the proposed method.
33

Hao, Shuai, Beiyi An, Hu Wen, Xu Ma, and Keping Yu. "A Heterogeneous Image Fusion Method Based on DCT and Anisotropic Diffusion for UAVs in Future 5G IoT Scenarios." Wireless Communications and Mobile Computing 2020 (June 27, 2020): 1–11. http://dx.doi.org/10.1155/2020/8816818.

Abstract:
Unmanned aerial vehicles, with their inherent fine attributes, such as flexibility, mobility, and autonomy, play an increasingly important role in the Internet of Things (IoT). Airborne infrared and visible image fusion, which constitutes an important data basis for the perception layer of IoT, has been widely used in various fields such as electric power inspection, military reconnaissance, emergency rescue, and traffic management. However, traditional infrared and visible image fusion methods suffer from weak detail resolution. In order to better preserve useful information from source images and produce a more informative image for human observation or unmanned aerial vehicle vision tasks, a novel fusion method based on discrete cosine transform (DCT) and anisotropic diffusion is proposed. First, the infrared and visible images are denoised by using DCT. Second, anisotropic diffusion is applied to the denoised infrared and visible images to obtain the detail and base layers. Third, the base layers are fused by using weighted averaging, and the detail layers are fused by using the Karhunen–Loeve transform, respectively. Finally, the fused image is reconstructed through the linear superposition of the base layer and detail layer. Compared with six other typical fusion methods, the proposed approach shows better fusion performance in both objective and subjective evaluations.
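
The fusion rule for the base layers is plain pixel-wise weighted averaging. A minimal sketch of that one step on two registered layers, assuming equal weights (the DCT denoising, the anisotropic-diffusion decomposition, and the Karhunen-Loeve fusion of the detail layers are omitted):

```python
import numpy as np

def fuse_base_layers(base_ir, base_vis, w_ir=0.5, w_vis=0.5):
    """Pixel-wise weighted averaging of infrared and visible base layers."""
    return w_ir * base_ir + w_vis * base_vis

# Hypothetical 2x2 base layers extracted by anisotropic diffusion
base_ir = np.array([[0.80, 0.60], [0.40, 0.20]])
base_vis = np.array([[0.30, 0.50], [0.70, 0.90]])
print(fuse_base_layers(base_ir, base_vis))
```
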
34

Shanmugam, Karthikeyan, and Harikumar Rajaguru. "Enhanced Superpixel-Guided ResNet Framework with Optimized Deep-Weighted Averaging-Based Feature Fusion for Lung Cancer Detection in Histopathological Images." Diagnostics 15, no. 7 (2025): 805. https://doi.org/10.3390/diagnostics15070805.

Abstract:
Background/Objectives: Lung cancer is a leading cause of cancer-related mortalities, with early diagnosis crucial for survival. While biopsy is the gold standard, manual histopathological analysis is time-consuming. This research enhances lung cancer diagnosis through deep learning-based feature extraction, fusion, optimization, and classification for improved accuracy and efficiency. Methods: The study begins with image preprocessing using an adaptive fuzzy filter, followed by segmentation with a modified simple linear iterative clustering (SLIC) algorithm. The segmented images are input into deep learning architectures, specifically ResNet-50 (RN-50), ResNet-101 (RN-101), and ResNet-152 (RN-152), for feature extraction. The extracted features are fused using a deep-weighted averaging-based feature fusion (DWAFF) technique, producing ResNet-X (RN-X)-fused features. To further refine these features, particle swarm optimization (PSO) and red deer optimization (RDO) techniques are employed within the selective feature pooling layer. The optimized features are classified using various machine learning classifiers, including support vector machine (SVM), decision tree (DT), random forest (RF), K-nearest neighbor (KNN), SoftMax discriminant classifier (SDC), Bayesian linear discriminant analysis classifier (BLDC), and multilayer perceptron (MLP). A performance evaluation is performed using K-fold cross-validation with K values of 2, 4, 5, 8, and 10. Results: The proposed DWAFF technique, combined with feature selection using RDO and classification with MLP, achieved the highest classification accuracy of 98.68% when using K = 10 for cross-validation. The RN-X features demonstrated superior performance compared to individual ResNet variants, and the integration of segmentation and optimization significantly enhanced classification accuracy. Conclusions: The proposed methodology automates lung cancer classification using deep learning, feature fusion, optimization, and advanced classification techniques. Segmentation and feature selection enhance performance, improving diagnostic accuracy. Future work may explore further optimizations and hybrid models.
35

Ahmad, Irshad, Muhammad Hameed Siddiqi, Sultan Fahad Alhujaili, and Ziyad Awadh Alrowaili. "Improving Alzheimer’s Disease Classification in Brain MRI Images Using a Neural Network Model Enhanced with PCA and SWLDA." Healthcare 11, no. 18 (2023): 2551. http://dx.doi.org/10.3390/healthcare11182551.

Full text
Abstract:
The examination of Alzheimer’s disease (AD) using adaptive machine learning algorithms has unveiled promising findings. However, achieving substantial credibility in medical contexts necessitates a combination of notable accuracy, minimal processing time, and universality across diverse populations. Therefore, we have formulated a hybrid methodology in this study to classify AD by employing a brain MRI image dataset. We incorporated an averaging filter during preprocessing in the initial stage to reduce extraneous details. Subsequently, a combined strategy was utilized, involving principal component analysis (PCA) in conjunction with stepwise linear discriminant analysis (SWLDA), followed by an artificial neural network (ANN). SWLDA employs a combination of forward and backward recursion methods to choose a restricted set of features. The forward recursion identifies the most interconnected features based on partial Z-test values. Conversely, the backward recursion method eliminates the least correlated features from the same feature space. After the extraction and selection of features, an optimized artificial neural network (ANN) was utilized to differentiate the various classes of AD. To demonstrate the significance of this hybrid approach, we utilized publicly available brain MRI datasets using a 10-fold cross-validation strategy. The proposed method excelled over existing state-of-the-art systems, attaining weighted average recognition rates of 99.35% and 96.66%, respectively, on the datasets considered.
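The paper's SWLDA selects features by forward/backward recursion with partial Z-tests; as a loose, hypothetical stand-in for that pipeline (not the authors' implementation), one could chain PCA with scikit-learn's sequential (stepwise) selector around an LDA estimator:

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline

# PCA for dimensionality reduction, then stepwise feature selection around an
# LDA estimator; the component/feature counts are assumptions for illustration.
model = make_pipeline(
    PCA(n_components=50),
    SequentialFeatureSelector(LinearDiscriminantAnalysis(),
                              n_features_to_select=20, direction="forward"),
    LinearDiscriminantAnalysis(),
)
# Usage: model.fit(X_train, y_train); model.predict(X_test)
```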
APA, Harvard, Vancouver, ISO, and other styles
37

Gigović, Ljubomir, Siniša Drobnjak, and Dragan Pamučar. "The Application of the Hybrid GIS Spatial Multi-Criteria Decision Analysis Best–Worst Methodology for Landslide Susceptibility Mapping." ISPRS International Journal of Geo-Information 8, no. 2 (2019): 79. http://dx.doi.org/10.3390/ijgi8020079.

Full text
Abstract:
The main goal of this article is to produce a landslide susceptibility map by using the hybrid Geographical Information System (GIS) spatial multi-criteria decision analysis best–worst methodology (MCDA-BWM) in the western part of the Republic of Serbia. Initially, a landslide inventory map was prepared using the National Landslide Database, aerial photographs, and also by carrying out field surveys. A total of 1082 landslide locations were detected. This methodology considers the fifteen conditioning factors that are relevant to landslide susceptibility mapping: the elevation, slope, aspect, distance to the road network, distance to the river, distance to faults, lithology, the Normalized Difference Vegetation Index (NDVI), the Topographic Wetness Index (TWI), the Stream Power Index (SPI), the Sediment Transport Index (STI), annual rainfall, the distance to urban areas, and the land use/cover. The expert evaluation takes into account the nature and severity of the observed criteria, and it was tested by using two scenarios: the different aggregation methods of the BWM. The prediction performances of the generated maps were checked by the receiver operating characteristics (ROCs). The validation results confirmed that the areas under the ROC curve for the weighted linear combination (WLC) and the ordered weighted averaging (OWA) aggregation methods of the MCDA-BWM have a very high accuracy. The results of the landslide susceptibility assessment obtained by applying the proposed best–worst method were the first step in the development of landslide risk management and they are expected to be used by local governments for effective management planning purposes.
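For readers unfamiliar with the two aggregation rules being compared, a minimal sketch (assuming criterion layers already standardised to [0, 1]) contrasts WLC, where weights attach to specific criteria, with OWA, where order weights attach to rank positions:

```python
import numpy as np

def wlc(x, w):
    """WLC: criterion weights attach to specific criteria."""
    return float(np.dot(w, x))

def owa(x, v):
    """OWA: order weights attach to rank positions of the sorted values."""
    return float(np.dot(v, np.sort(x)[::-1]))  # largest value gets v[0]

cell = np.array([0.8, 0.4, 0.6])        # suitability scores of one raster cell
print(wlc(cell, np.array([0.5, 0.3, 0.2])))   # 0.64
print(owa(cell, np.array([0.0, 0.0, 1.0])))   # 0.4 -> fully pessimistic "min"
```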
APA, Harvard, Vancouver, ISO, and other styles
38

Guo, Shunsheng, Yuji Gao, Jun Guo, Zhijie Yang, Baigang Du, and Yibing Li. "A multi-stage group decision making for strategic supplier selection based on prospect theory with interval-valued q-rung orthopair fuzzy linguistic sets." Journal of Intelligent & Fuzzy Systems 40, no. 5 (2021): 9855–71. http://dx.doi.org/10.3233/jifs-202415.

Full text
Abstract:
With the aggravation of market competition, strategic suppliers are becoming more and more critical for the success of manufacturing enterprises. Supplier selection, being the critical and foremost activity, must ensure that selected suppliers are capable of supporting the long-term development of organizations. Hence, strategic supplier selection must be restructured considering long-term relationships and prospects for sustainable cooperation. This paper proposes a novel multi-stage multi-attribute group decision making method under an interval-valued q-rung orthopair fuzzy linguistic set (IVq-ROFLS) environment, considering the decision makers’ (DMs’) psychological state in the group decision-making process. First, the initial comprehensive fuzzy evaluations of DMs are represented as IVq-ROFLSs. Subsequently, two new operators are proposed for aggregating different stages and DMs’ preferences, respectively, by extending generalized weighted averaging (GWA) to the IVq-ROFLS context. Later, a new Hamming distance-based linear programming method built on an entropy measure and score function is introduced to evaluate the unknown criteria weights. Additionally, the Euclidean distance is employed to compute the gain and loss matrix, and objects are prioritized by extending the popular prospect theory (PT) method to the IVq-ROFLS context. Finally, the practical use of the proposed decision framework is validated on a strategic supplier selection problem, and the effectiveness and applicability of the framework are discussed through a comparative analysis with other methods.
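The classical generalized weighted averaging (GWA) operator that the paper extends is easy to state on crisp numbers; the sketch below shows only that crisp baseline, not the IVq-ROFLS extension:

```python
import numpy as np

def gwa(a, w, lam=1.0):
    """(sum_i w_i * a_i**lam)**(1/lam); lam = 1 recovers the ordinary
    weighted mean. Assumes positive arguments and weights summing to 1."""
    a = np.asarray(a, dtype=float)
    w = np.asarray(w, dtype=float)
    return float(np.dot(w, a ** lam) ** (1.0 / lam))
```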
APA, Harvard, Vancouver, ISO, and other styles
39

Yager, Ronald R. "On the OWA Aggregation with Probabilistic Inputs." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 23, Suppl. 1 (2015): 143–62. http://dx.doi.org/10.1142/s0218488515400115.

Full text
Abstract:
We discuss the use of the ordered weighted averaging (OWA) operator in multi-criteria decision problems as a means of aggregating the individual criteria satisfactions. We emphasize the need for ordering the arguments, the criteria satisfactions, when using the OWA operator. We consider the situation where the criteria satisfactions have some uncertainty and are finite probability distributions, and note the resulting requirement of ordering probability distributions. We introduce the idea of using pairwise stochastic dominance to provide the necessary ordering relationship over the probability distributions. We note that while this approach is appropriate, it is often not possible, since a stochastic dominance relationship does not hold between every pair of probability distributions; the relationship is not complete. To circumvent this problem we introduce an approach called the probabilistic exceedance method (PEM), which provides a surrogate for the OWA aggregation of probability distributions that does not require a linear ordering over them. We examine both the case in which the criteria have equal importances and the case in which they do not.
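A hedged sketch of the pairwise first-order stochastic dominance test underlying this ordering (the PEM surrogate itself is not reproduced here), for two finite distributions over a shared, ascending support:

```python
import numpy as np

def dominates(p, q):
    """First-order stochastic dominance: p dominates q if the CDF of p lies
    at or below the CDF of q at every support point (higher values better)."""
    return bool(np.all(np.cumsum(p) <= np.cumsum(q) + 1e-12))

# Example: p shifts mass to higher outcomes, so p dominates q, not vice versa.
p, q = [0.1, 0.3, 0.6], [0.3, 0.3, 0.4]
print(dominates(p, q), dominates(q, p))   # True False
```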
APA, Harvard, Vancouver, ISO, and other styles
40

Haghshenas, Elham, Mehdi Gholamalifard, Nemat Mahmoudi, and Tiit Kutser. "Developing a GIS-Based Decision Rule for Sustainable Marine Aquaculture Site Selection: An Application of the Ordered Weighted Average Procedure." Sustainability 13, no. 5 (2021): 2672. http://dx.doi.org/10.3390/su13052672.

Full text
Abstract:
Fish consumption is increasing with the growth of the global population. Therefore, taking advantage of new methods such as marine aquaculture can provide a reliable source of fish production worldwide. It is necessary to allocate suitable sites from environmental, economic, and social points of view in the decision-making process. In this study, in order to specify suitable areas for marine aquaculture using the Ordered Weighted Averaging (OWA) methodology in the Caspian Sea (Iran), efforts were made to incorporate the concept of risk into the GIS-based analysis. Using the OWA-based method, a model was provided which can generate marine aquaculture maps under various pessimistic or optimistic strategies. Eighteen modeling criteria (14 factors and 4 constraints) were considered to determine the appropriate areas for marine aquaculture. This was done in 6 scenarios using multi-criteria evaluation (MCE) and ordered weighted average (OWA) methodologies. The results of the sensitivity analysis showed that the parameters most affecting marine aquaculture location in the region were the socio-economic, water quality, and physical–environmental parameters. In addition, based on Cramer’s V coefficient values for each parameter, bathymetry and distance from the coastline had the greatest impact and maximum temperature the least impact on site selection for marine aquaculture. Finally, the final aggregated suitability image (FASI) of the weighted linear combination (WLC) scenario was compared with existing sites for cage culture in the southern part of the Caspian Sea, and the ROC (Relative Operating Characteristic) value was 0.69. Although the existing sites (9 farms) were almost compatible with the results of the study, their locations could be transferred to more favorable areas with less risk, and the mapping risk level can be controlled and low- or high-risk sites for marine aquaculture determined by using the OWA method.
APA, Harvard, Vancouver, ISO, and other styles
41

Smith, Edward A. K., Carla Winterhalter, Tracy S. A. Underwood, et al. "A Monte Carlo study of different LET definitions and calculation parameters for proton beam therapy." Biomedical Physics & Engineering Express 8, no. 1 (2021): 015024. http://dx.doi.org/10.1088/2057-1976/ac3f50.

Full text
Abstract:
The strong in vitro evidence that proton Relative Biological Effectiveness (RBE) varies with Linear Energy Transfer (LET) has led to an interest in applying LET within treatment planning. However, there is a lack of consensus on LET definition, Monte Carlo (MC) parameters or clinical methodology. This work aims to investigate how common variations of LET definition may affect potential clinical applications. MC simulations (GATE/GEANT4) were used to calculate absorbed dose and different types of LET for a simple Spread Out Bragg Peak (SOBP) and for four clinical PBT plans covering a range of tumour sites. Variations in the following LET calculation methods were considered: (i) averaging (dose-averaged LET (LETd) and track-averaged LET); (ii) scoring (LETd to water, to medium and to mass density); (iii) particle inclusion (LETd to all protons, to primary protons and to all particles); (iv) MC settings (hit type and Maximum Step Size (MSS)). LET distributions were compared using qualitative comparison, LET Volume Histograms (LVHs), single-value criteria (maximum and mean values) and optimised LET-weighted dose models. Substantial differences were found between LET values across the averaging, scoring and particle-inclusion choices. These differences depended on the methodology, but for one patient a difference of ∼100% was observed between the maximum LETd for all particles and the maximum LETd for all protons within the brainstem in the high isodose region (4 keV μm⁻¹ and 8 keV μm⁻¹ respectively). An RBE model using LETd including heavier ions was found to predict substantially different LET-weighted dose compared to those using other LET definitions. In conclusion, the selection of LET definition may affect the results of clinical metrics considered in treatment planning and the results of an RBE model. The authors advocate for the scoring of dose-averaged LET to water for primary and secondary protons using a random hit type and automated MSS.
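The averaging distinction in (i) is conventionally captured by the step-weighted means below; this sketch assumes Monte Carlo step lengths and step LETs have already been extracted:

```python
import numpy as np

def track_and_dose_averaged_let(step_lengths, step_lets):
    """Track- vs dose-averaged LET from MC step data:
    l_i = step length, L_i = step LET, e_i = l_i * L_i = deposited energy."""
    l = np.asarray(step_lengths, dtype=float)
    L = np.asarray(step_lets, dtype=float)
    e = l * L                                   # energy deposited per step
    let_track = np.sum(l * L) / np.sum(l)       # track (length)-averaged LET
    let_dose = np.sum(e * L) / np.sum(e)        # dose (energy)-averaged LET
    return let_track, let_dose
```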
APA, Harvard, Vancouver, ISO, and other styles
42

Guziy, A. G., A. M. Lushkin, and A. V. Fokin. "THE METHODOLOGY FOR THE SYNTHESIS AND CORRECTION OF THE "RISK PYRAMIDS" IN THE AIRPLANE SEGMENT OF COMMERCIAL AVIATION OF RUSSIA." Civil Aviation High TECHNOLOGIES 21, no. 4 (2018): 8–16. http://dx.doi.org/10.26467/2079-0619-2018-21-4-8-16.

Full text
Abstract:
The article presents the results of an analysis of the "risk pyramids" of commercial aviation for their adequacy to the current state of the aviation transport system of Russia. The necessity of annually updating the "risk pyramids" is shown, as the aviation transport system (ATS) of Russia is dynamic and the state of the ATS changes faster than the accident-rate statistical indicators characterizing it. A linear weighted moving average method with an optimized averaging window of 7 years is substantiated and proposed for the synthesis and annual correction of the "risk pyramids" parameters. The optimization of the averaging coefficient is performed by the criterion of minimum mismatch between the averaged values of the "risk pyramids" parameters and the current (annual) values determined from the statistical data on aviation events. The general and private "risk pyramids" of commercial aviation of Russia, synthesized from the results of the statistical factorial analysis of aviation events for 2009–2016, are presented. The synthesis of the "risk pyramids" is made in accordance with the classification of aviation events in the civil aviation of Russia, separately by causative factors: "Human", "Aircraft", "Environment". The parameters of the "risk pyramids" reflect the conditional probability of an aviation event of greater severity (for example, a catastrophe), given that there were aviation events of lesser severity (for example, incidents). The parameters of the presented pyramids are intended for inclusion into algorithms for the indirect estimation of the probability of aviation accidents for any airline and any period of flight work (from a month or more).
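A minimal sketch of a linear weighted moving average with the 7-year window the article substantiates; the linearly increasing weights (newest year heaviest) are a common convention, assumed here for illustration:

```python
import numpy as np

def lwma(series, k=7):
    """Linearly weighted moving average over the last k samples."""
    w = np.arange(1, k + 1, dtype=float)          # weights 1, 2, ..., k
    w /= w.sum()
    x = np.asarray(series, dtype=float)
    return np.convolve(x, w[::-1], mode="valid")  # newest sample * largest w

rates = [3.1, 2.8, 3.4, 2.9, 2.6, 2.7, 2.3, 2.1, 2.4]  # illustrative annual data
print(lwma(rates))   # one smoothed value per complete 7-year window
```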
APA, Harvard, Vancouver, ISO, and other styles
43

Tavakoli, Mortaza, Zeynab Karimzadeh Motlagh, Dominika Dąbrowska, Youssef M. Youssef, Bojan Đurin, and Ahmed M. Saqr. "Harnessing AHP and Fuzzy Scenarios for Resilient Flood Management in Arid Environments: Challenges and Pathways Toward Sustainability." Water 17, no. 9 (2025): 1276. https://doi.org/10.3390/w17091276.

Full text
Abstract:
Flash floods rank among the most devastating natural hazards, causing widespread socio-economic, environmental, and infrastructural damage globally. Hence, innovative management approaches are required to mitigate their increasing frequency and intensity, driven by factors such as climate change and urbanization. Accordingly, this study introduced an integrated flood assessment approach (IFAA) for sustainable management of flood risks by integrating the analytical hierarchy process-weighted linear combination (AHP-WLC) and fuzzy-ordered weighted averaging (FOWA) methods. The IFAA was applied in South Khorasan Province, Iran, an arid and flood-prone region. Fifteen controlling factors, including rainfall (RF), slope (SL), land use/land cover (LU/LC), and distance to rivers (DTR), were processed using the collected data. The AHP-WLC method classified the region into flood susceptibility zones: very low (10.23%), low (23.14%), moderate (29.61%), high (17.54%), and very high (19.48%). The FOWA technique ensured these findings by introducing optimistic and pessimistic fuzzy scenarios of flood risk. The most extreme scenario indicated that 98.79% of the area was highly sensitive to flooding, while less than 5% was deemed low-risk under conservative scenarios. Validation of the IFAA approach demonstrated its reliability, with the AHP-WLC method achieving an area under curve (AUC) of 0.83 and an average accuracy of ~75% across all fuzzy scenarios. Findings revealed elevated flood dangers in densely populated and industrialized areas, particularly in the northern and southern regions, which were influenced by proximity to rivers. Therefore, the study also addressed challenges linked to sustainable development goals (SDGs), particularly SDG 13 (climate action), proposing adaptive strategies to meet 60% of its targets. This research can offer a scalable framework for flood risk management, providing actionable insights for hydrologically vulnerable regions worldwide.
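One common way to generate optimistic and pessimistic OWA scenarios, assumed here purely for illustration, is a RIM quantifier Q(r) = r**alpha: with arguments sorted in descending order, alpha < 1 skews weight toward the best-ranked factors and alpha > 1 toward the worst:

```python
import numpy as np

def owa_weights(n, alpha):
    """Order weights from the RIM quantifier Q(r) = r**alpha."""
    r = np.arange(n + 1) / n
    return np.diff(r ** alpha)   # v_i = Q(i/n) - Q((i-1)/n), sums to 1

print(owa_weights(5, 0.3))   # optimistic: weight piles on top-ranked values
print(owa_weights(5, 3.0))   # pessimistic: weight piles on worst-ranked values
```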
APA, Harvard, Vancouver, ISO, and other styles
44

García-Zurdo, Rubén. "Three-Dimensional Face Shape by Local Fitting to a Single Reference Model." International Journal of Computer Vision and Image Processing 4, no. 1 (2014): 17–29. http://dx.doi.org/10.4018/ijcvip.2014010102.

Full text
Abstract:
The authors present a simple method to estimate the three-dimensional shape of a face from an input image using a single reference model, based on least squares between the output of the linear-nonlinear (LN) neuronal model applied to blocks from an intensity image and blocks from a depth reference model. The authors present the results obtained by varying the LN model parameters and estimate their best values, which provide an acceptable reconstruction of each subject's depth. The authors show that increasing the light source angle over the horizontal plane in the input image produces slight increases in reconstruction error, but increasing the ambient light proportion produces greater increases in reconstruction error. The authors applied the method to predict each subject's unknown depth using different individual reference models and an average reference model, which provides the best results. As a noise reduction technique, the authors perform a point by point weighted averaging with the average reference model with weights equal to the fractions of the squares of the Laplacian of a Gaussian applied to the prediction and to the reference depth over the sum of both. Finally, the authors present acceptable visual results obtained from external images of faces under arbitrary illumination, having performed an illumination estimation previously.
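A hedged sketch of the point-by-point weighted averaging described above, using SciPy's Laplacian-of-Gaussian filter as the edge-energy measure (the filter scale sigma is an assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_weighted_average(pred_depth, ref_depth, sigma=2.0):
    """Blend predicted and reference depth maps with weights given by the
    fraction of squared LoG energy at each point."""
    gp = gaussian_laplace(pred_depth, sigma) ** 2
    gr = gaussian_laplace(ref_depth, sigma) ** 2
    w = gp / (gp + gr + 1e-12)            # fraction of squared LoG energy
    return w * pred_depth + (1.0 - w) * ref_depth
```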
APA, Harvard, Vancouver, ISO, and other styles
45

Mert, Ali. "Defuzzification of Non-Linear Pentagonal Intuitionistic Fuzzy Numbers and Application in the Minimum Spanning Tree Problem." Symmetry 15, no. 10 (2023): 1853. http://dx.doi.org/10.3390/sym15101853.

Full text
Abstract:
In recent years, with the variety of digital objects around us becoming a source of information, the fields of artificial intelligence (AI) and machine learning (ML) have experienced very rapid development. Processing and converting the information around us into data within the framework of the information processing theory is important, as AI and ML techniques need large amounts of reliable data in the training and validation stages. Even though information naturally contains uncertainty, information must still be modeled and converted into data without neglecting this uncertainty. Mathematical techniques, such as the fuzzy theory and the intuitionistic fuzzy theory, are used for this purpose. In the intuitionistic fuzzy theory, membership and non-membership functions are employed to describe intuitionistic fuzzy sets and intuitionistic fuzzy numbers (IFNs). IFNs are characterized by the mathematical statements of these two functions. A more general and inclusive definition of IFN is always a requirement in AI technologies, as the uncertainty introduced by various information sources needs to be transformed into similar IFNs without neglecting the variety of uncertainty. In this paper, we proposed a general and inclusive mathematical definition for IFN and called this IFN a non-linear pentagonal intuitionistic fuzzy number (NLPIFN), which allows its users to maintain variety in uncertainty. We know that AI technology implementations are performed in computerized environments, so we need to transform the IFN into a crisp number to make such IFNs available in such environments. Techniques used in transformation are called defuzzification methods. In this paper, we proposed a short-cut formula for the defuzzification of a NLPIFN using the intuitionistic fuzzy weighted averaging based on levels (IF-WABL) method. We also implemented our findings in the minimum spanning tree problem by taking weights as NLPIFNs to determine the uncertainty in the process more precisely.
APA, Harvard, Vancouver, ISO, and other styles
46

Zhu, Changrui, Qun Yue, and Jiaqi Huang. "Projections of Mean and Extreme Precipitation Using the CMIP6 Model: A Study of the Yangtze River Basin in China." Water 15, no. 17 (2023): 3043. http://dx.doi.org/10.3390/w15173043.

Full text
Abstract:
In this study, we conducted an analysis of the CN05.1 daily precipitation observation dataset spanning from 1985 to 2014. Subsequently, we ranked the 30 global climate model datasets within the NEX-GDDP-CMIP6 dataset using the RS rank score method. Multi-model weighted-ensemble averaging was then performed based on these RS scores, followed by a revision of the multi-model weighted-ensemble averaging (rs-MME) using the quantile mapping method. The revised rs-MME model data were utilized for simulating precipitation variations within the Yangtze River Basin. We specifically selected 11 extreme-precipitation indices to comprehensively evaluate the capability of the revised rs-MME model data in simulating extreme-precipitation occurrences in the region. Our investigation culminated in predicting the characteristics of precipitation and the potential shifts in extreme-precipitation patterns across the region under three distinct shared socioeconomic pathways (SSP1-2.6, SSP2-4.5, and SSP5-8.5) for three temporal segments: the Near 21C (2021–2040), Mid 21C (2041–2070), and Late 21C (2071–2100). Our findings reveal that the revised rs-MME model data effectively resolve the issues of the overestimation and underestimation of precipitation data present in the previous model. This leads to an enhanced simulation of mean annual precipitation, the 95th percentile of precipitation, and the extreme-precipitation index for the historical period. However, there are shortcomings in the simulation of linear trends in mean annual precipitation, alongside a significant overestimation of the CWD and CDD indices. Furthermore, our analysis forecasts a noteworthy increase in future mean annual precipitation within the Yangtze River Basin region, with a proportional rise in forced radiation across varying scenarios. Notably, an ascending trend of precipitation is detected at the headwaters of the Yangtze River Basin, specifically under the Late 21C SSP5-8.5 scenario, while a descending trend is observed in other scenarios. Conversely, there is an escalating pattern of precipitation within the middle and lower reaches of the Yangtze River Basin, with most higher-rate changes situated in the middle reaches. Regarding extreme-precipitation indices, similar to the annual average precipitation, a remarkable upsurge is evident in the middle and lower reaches of the Yangtze River Basin, whereas a relatively modest increasing trend prevails at the headwaters of the Yangtze River Basin. Notably, the SSP5-8.5 scenario portrays a substantial increase in extreme-precipitation indices.
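A minimal sketch of score-weighted multi-model ensemble averaging in the spirit of rs-MME (the per-model scores below stand in for the paper's RS rank scores):

```python
import numpy as np

def weighted_ensemble(fields, rs_scores):
    """fields: (n_models, ny, nx) array of model precipitation fields;
    rs_scores: per-model skill scores used as ensemble weights."""
    w = np.asarray(rs_scores, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.asarray(fields, dtype=float), axes=1)
```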
APA, Harvard, Vancouver, ISO, and other styles
47

Shen, Yicheng, Luke Sweeney, Mengmeng Liu, et al. "Reconstructing burnt area during the Holocene: an Iberian case study." Climate of the Past 18, no. 5 (2022): 1189–201. http://dx.doi.org/10.5194/cp-18-1189-2022.

Full text
Abstract:
Abstract. Charcoal accumulated in lake, bog or other anoxic sediments through time has been used to document the geographical patterns in changes in fire regimes. Such reconstructions are useful to explore the impact of climate and vegetation changes on fire during periods when human influence was less prevalent than today. However, charcoal records only provide semi-quantitative estimates of change in biomass burning. Here we derive quantitative estimates of burnt area from vegetation data in two stages. First, we relate the modern charcoal abundance to burnt area using a conversion factor derived from a generalised linear model of burnt area probability based on eight environmental predictors. Then, we establish the relationship between fossil pollen assemblages and burnt area using tolerance-weighted weighted averaging partial least-squares regression with a sampling frequency correction (fxTWA-PLS). We test this approach using the Iberian Peninsula as a case study because it is a fire-prone region with abundant pollen and charcoal records covering the Holocene. We derive the vegetation–burnt area relationship using the 31 records that have both modern and fossil charcoal and pollen data and then reconstruct palaeoburnt area for the 113 records with Holocene pollen records. The pollen data predict charcoal-derived burnt area relatively well (R2 = 0.44), and the changes in reconstructed burnt area are synchronous with known climate changes through the Holocene. This new method opens up the possibility of reconstructing changes in fire regimes quantitatively from pollen records, after regional calibration of the vegetation–burnt area relationship, in regions where pollen records are more abundant than charcoal records.
APA, Harvard, Vancouver, ISO, and other styles
48

Kamb, Barclay, and Keith A. Echelmeyer. "Stress-Gradient Coupling in Glacier Flow: I. Longitudinal Averaging of the Influence of Ice Thickness and Surface Slope." Journal of Glaciology 32, no. 111 (1986): 267–84. http://dx.doi.org/10.1017/s0022143000015604.

Full text
Abstract:
For a glacier flowing over a bed of longitudinally varying slope, the influence of longitudinal stress gradients on the flow is analyzed by means of a longitudinal flow-coupling equation derived from the “vertically” (cross-sectionally) integrated longitudinal stress equilibrium equation, by an extension of an approach originally developed by Budd (1968). Linearization of the flow-coupling equation, by treating the flow velocity u (“vertically” averaged), ice thickness h, and surface slope α in terms of small deviations Δu, Δh, and Δα from overall average (datum) values u₀, h₀, and α₀, results in a differential equation that can be solved by Green’s function methods, giving Δu(x) as a function of Δh(x) and Δα(x), x being the longitudinal coordinate. The result has the form of a longitudinal averaging integral of the influence of local h(x) and α(x) on the flow u(x), where the integration is over the length L of the glacier. The Δ operator specifies deviations from the datum state, and the term on which it operates, which is a function of the integration variable x′, represents the influence of local h(x′), α(x′), and channel-shape factor f(x′), at longitudinal coordinate x′, on the flow u at coordinate x, the influence being weighted by the “influence transfer function” exp(−|x′ − x|/ℓ) in the integral. The quantity ℓ that appears as the scale length in the exponential weighting function is called the longitudinal coupling length. It is determined by rheological parameters via a relationship in which n is the flow-law exponent, η̄ the effective longitudinal viscosity, and η̂ the effective shear viscosity of the ice profile; η̄ is an average of the local effective viscosity η over the ice cross-section, and (η̂)⁻¹ is an average of η⁻¹ that gives strongly increased weight to values near the base. Theoretically, the coupling length ℓ is generally in the range of one to three times the ice thickness for valley glaciers and four to ten times for ice sheets; for a glacier in surge, it is even longer, ℓ ~ 12h. It is distinctly longer for non-linear (n = 3) than for linear rheology, so that the flow-coupling effects of longitudinal stress gradients are markedly greater for non-linear flow. The averaging integral indicates that longitudinal variations in flow that occur under the influence of sinusoidal longitudinal variations in h or α, with wavelength λ, are attenuated by the factor 1/(1 + (2πℓ/λ)²) relative to what they would be without longitudinal coupling. The short, intermediate, and long scales of glacier motion (Raymond, 1980), over which the longitudinal flow variations are strongly, partially, and little attenuated, correspond to λ ≲ 2ℓ, 2ℓ ≲ λ ≲ 20ℓ, and λ ≳ 20ℓ. For practical glacier-flow calculations, the exponential weighting function can be approximated by a symmetrical triangular averaging window of length 4ℓ, called the longitudinal averaging length. The traditional rectangular window is a poor approximation. Because of the exponential weighting, the local surface slope has an appreciable though muted effect on the local flow, which is clearly seen in field examples, contrary to what would result from a rectangular averaging window. Tested with field data for Variegated Glacier, Alaska, and Blue Glacier, Washington, the longitudinal averaging theory is able to account semi-quantitatively for the observed longitudinal variations in flow of these glaciers and for the representation of flow in terms of “effective surface slope” values. Exceptions occur where the flow is augmented by large contributions from basal sliding in the ice fall and terminal zone of Blue Glacier and in the reach of surge initiation in Variegated Glacier. The averaging length 4ℓ that gives the best agreement between calculated and observed flow patterns is 2.5 km for Variegated Glacier and 1.8 km for Blue Glacier, corresponding to ℓ/h ≈ 2 in both cases. If ℓ varies with x, but not too rapidly, the exponential weighting function remains a fairly good approximation to the exact Green’s function of the differential equation for longitudinal flow coupling; in this approximation, ℓ in the averaging integral is ℓ(x) but is not a function of x′. Effects of longitudinal variation of ℓ are probably important near the glacier terminus and head, and near ice falls. The longitudinal averaging formulation can also be used to express the local basal shear stress in terms of longitudinal variations in the local “slope stress” with the mediation of longitudinal stress gradients.
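To make the weighting concrete, here is a short sketch (an illustration, not the paper's computation) of the exponential longitudinal averaging and the stated attenuation factor for a sinusoidal variation of wavelength lam:

```python
import numpy as np

def attenuation(lam, ell):
    """Attenuation of a sinusoidal h or alpha variation of wavelength lam."""
    return 1.0 / (1.0 + (2.0 * np.pi * ell / lam) ** 2)

def longitudinal_average(x, signal, ell):
    """Average `signal` along x with the normalised kernel exp(-|x'-x|/ell)."""
    x = np.asarray(x, dtype=float)
    s = np.asarray(signal, dtype=float)
    out = np.empty_like(s)
    for i, xi in enumerate(x):
        w = np.exp(-np.abs(x - xi) / ell)
        out[i] = np.sum(w * s) / np.sum(w)
    return out
```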
APA, Harvard, Vancouver, ISO, and other styles
49

Feng, Zhouquan, Wenai Shen, and Zhengqing Chen. "Consistent Multilevel RDT-ERA for Output-Only Ambient Modal Identification of Structures." International Journal of Structural Stability and Dynamics 17, no. 09 (2017): 1750106. http://dx.doi.org/10.1142/s0219455417501061.

Full text
Abstract:
This paper presents an improved method called the consistent multilevel random decrement technique in conjunction with the eigensystem realization algorithm (RDT-ERA) for modal parameter identification of linear dynamic systems using ambient vibration data. The conventional RDT-ERA is briefly revisited first, and the problem of triggering-level selection in the RDT is thoroughly studied. Because the conventional RDT-ERA uses a single triggering level, an inappropriate level may produce poor random decrement (RD) functions, thereby yielding a poor estimate of the modal parameters. In the proposed consistent multilevel RDT-ERA, multiple triggering levels are used and a consistency analysis is proposed to sift out the RD functions that deviate largely from the majority of the RD functions. Then the ERA is applied to the retained RD functions for modal parameter identification. Subsequently, a similar consistency analysis is conducted on the identified modal parameters to sift out outliers. Finally, the final estimates of the modal parameters are calculated by weighted averaging, with the weights set proportional to the number of RD segments extracted at the corresponding triggering levels. A distinguishing feature of the proposed method is that the information in the signal is fully utilized through multiple triggering levels while outliers are sifted out by consistency analysis, making the identified results more accurate and reliable. The effectiveness and accuracy of the method are demonstrated in examples using simulated and experimental data.
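A hedged sketch of the final weighted-averaging step, with weights proportional to the number of RD segments each retained triggering level contributed (the names and numbers below are illustrative):

```python
import numpy as np

def combine_modal_estimates(freqs, dampings, n_segments):
    """Weighted average of per-level modal estimates; weights are
    proportional to the RD segment count of each triggering level."""
    w = np.asarray(n_segments, dtype=float)
    w /= w.sum()
    return float(np.dot(w, freqs)), float(np.dot(w, dampings))

# Illustrative values: three retained triggering levels for one mode.
f_hat, zeta_hat = combine_modal_estimates(
    [1.02, 1.00, 1.01], [0.021, 0.019, 0.020], [120, 300, 210])
```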
APA, Harvard, Vancouver, ISO, and other styles
50

Yaqoob, Sana, Ayman Noor, Talal H. Noor, et al. "Enhancing student career guidance and sentimental analysis: A performance-driven hybrid learning approach with feature ranking." PLOS One 20, no. 5 (2025): e0321108. https://doi.org/10.1371/journal.pone.0321108.

Full text
Abstract:
Choosing the appropriate career path poses a significant hurdle for students, especially when time is constrained. This research addresses the challenge of career prediction by introducing a method that integrates additional attributes, refines feature prioritization, and streamlines feature selection to enhance prediction precision. The key objectives of this study are to pinpoint pertinent features, accurately rank them, and enhance prediction accuracy by eliminating non-essential features. To accomplish these aims, three methodologies are employed: Feature Fusion and Normalization (FFN) for precise data identification, Average Feature Ranking (AFR) utilizing a blend of Random Forest (RF) and Linear Regression (LR) for feature prioritization, and Improved Prediction with Weighted Characteristics (PWF) which integrates Principal Component (PC) analysis for feature reduction. The prediction performance is assessed using a hybrid Multilayer Perceptron (MLP) classifier with 5-fold cross-validation. The outcomes reveal that the hybrid approach yields a superior feature set for prediction. The top twelve ranked features are determined by averaging each feature’s RF scores and coefficients. The achieved accuracy (ACC), precision (P), recall (R), and F1 scores stand at 87%, 87%, 86%, and 86%, respectively, with an Area Under the Receiver Operating Characteristic Curve (AUC-ROC) value of 92%. These findings underscore the efficacy of the proposed hybrid learning technique in accurately forecasting career trajectories.
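A minimal sketch of the AFR idea, assuming RF importances and LR coefficients are first normalised to a common scale before averaging (the top-12 cut follows the abstract):

```python
import numpy as np

def average_feature_ranking(rf_importances, lr_coefs, k=12):
    """Average normalised RF importances and |LR coefficients|, returning
    the indices of the k best-ranked features."""
    rf = np.asarray(rf_importances, dtype=float)
    lr = np.abs(np.asarray(lr_coefs, dtype=float))
    rf, lr = rf / rf.sum(), lr / lr.sum()   # bring both to a common scale
    score = (rf + lr) / 2.0                 # per-feature averaged score
    return np.argsort(score)[::-1][:k]      # top-k feature indices
```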
APA, Harvard, Vancouver, ISO, and other styles