Journal articles on the topic 'Fletcher Reeves conjugate gradient (CGFR)'

Consult the top 40 journal articles for your research on the topic 'Fletcher Reeves conjugate gradient (CGFR).'


1

Lyn Dee, Goh, Norhisham Bakhary, Azlan Abdul Rahman, and Baderul Hisham Ahmad. "A Comparison of Artificial Neural Network Learning Algorithms for Vibration-Based Damage Detection." Advanced Materials Research 163-167 (December 2010): 2756–60. http://dx.doi.org/10.4028/www.scientific.net/amr.163-167.2756.

Abstract:
This paper investigates the performance of Artificial Neural Network (ANN) learning algorithms for vibration-based damage detection. The capabilities of six different learning algorithms in detecting damage are studied and their performances are compared. The algorithms are the Levenberg-Marquardt (LM), Resilient Backpropagation (RP), Scaled Conjugate Gradient (SCG), Conjugate Gradient with Powell-Beale Restarts (CGB), Polak-Ribière Conjugate Gradient (CGP), and Fletcher-Reeves Conjugate Gradient (CGF) algorithms. The performances of these algorithms are assessed based on their generalisation capability in relating the vibration parameters (frequencies and mode shapes) to damage locations and severities under various numbers of input and output variables. The results show that the Levenberg-Marquardt algorithm provides the best generalisation performance.
2

Mazloom, Mohammad Sadegh, Farzaneh Rezaei, Abdolhossein Hemmati-Sarapardeh, Maen M. Husein, Sohrab Zendehboudi, and Amin Bemani. "Artificial Intelligence Based Methods for Asphaltenes Adsorption by Nanocomposites: Application of Group Method of Data Handling, Least Squares Support Vector Machine, and Artificial Neural Networks." Nanomaterials 10, no. 5 (May 6, 2020): 890. http://dx.doi.org/10.3390/nano10050890.

Abstract:
Asphaltenes deposition is considered a serious production problem. The literature does not include enough comprehensive studies on adsorption phenomenon involved in asphaltenes deposition utilizing inhibitors. In addition, effective protocols on handling asphaltenes deposition are still lacking. In this study, three efficient artificial intelligent models including group method of data handling (GMDH), least squares support vector machine (LSSVM), and artificial neural network (ANN) are proposed for estimating asphaltenes adsorption onto NiO/SAPO-5, NiO/ZSM-5, and NiO/AlPO-5 nanocomposites based on a databank of 252 points. Variables influencing asphaltenes adsorption include pH, temperature, amount of nanocomposites over asphaltenes initial concentration (D/C0), and nanocomposites characteristics such as BET surface area and volume of micropores. The models are also optimized using nine optimization techniques, namely coupled simulated annealing (CSA), genetic algorithm (GA), Bayesian regularization (BR), scaled conjugate gradient (SCG), ant colony optimization (ACO), Levenberg–Marquardt (LM), imperialistic competitive algorithm (ICA), conjugate gradient with Fletcher-Reeves updates (CGF), and particle swarm optimization (PSO). According to the statistical analysis, the proposed RBF-ACO and LSSVM-CSA are the most accurate approaches that can predict asphaltenes adsorption with average absolute percent relative errors of 0.892% and 0.94%, respectively. The sensitivity analysis shows that temperature has the most impact on asphaltenes adsorption from model oil solutions.
3

Zhu, Hongfei, Jorge Leandro, and Qing Lin. "Optimization of Artificial Neural Network (ANN) for Maximum Flood Inundation Forecasts." Water 13, no. 16 (August 18, 2021): 2252. http://dx.doi.org/10.3390/w13162252.

Abstract:
Flooding is the world's most catastrophic natural event in terms of losses. The ability to forecast flood events is crucial for controlling the risk of flooding to society and the environment. Artificial neural networks (ANN) have been adopted in recent studies to provide fast flood inundation forecasts. In this paper, an existing ANN trained on synthetic events was optimized in two directions: extending the training dataset with a hybrid dataset, and selecting the best training function from six candidates, namely conjugate gradient backpropagation with Fletcher-Reeves updates (CGF), with Polak-Ribière updates (CGP), and with Powell-Beale restarts (CGB), one-step secant backpropagation (OSS), resilient backpropagation (RP), and scaled conjugate gradient backpropagation (SCG). Four real flood events were used to validate the performance of the improved ANN over the existing one. The new training dataset reduced the model's root mean square error (RMSE) by 10% for the testing dataset and 16% for the real events. The selection of the resilient backpropagation algorithm contributed to 15% lower RMSE for the testing dataset and up to 35% for the real events when compared with the other five training functions.
4

Djordjevic, Snezana. "New hybrid conjugate gradient method as a convex combination of FR and PRP methods." Filomat 30, no. 11 (2016): 3083–100. http://dx.doi.org/10.2298/fil1611083d.

Abstract:
We consider a new hybrid conjugate gradient algorithm, which is obtained from the algorithm of Fletcher-Reeves and the algorithm of Polak-Ribière-Polyak. Numerical comparisons show that the present hybrid conjugate gradient algorithm often behaves better than some known algorithms.
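As background for this entry, the FR and PRP methods differ only in the scalar beta used to update the search direction; a convex combination of the two (a generic sketch with an illustrative blending parameter theta, not the paper's specific rule for choosing it) can be written as:

```python
import numpy as np

def beta_fr(g_new, g_old):
    # Fletcher-Reeves: beta = ||g_k||^2 / ||g_{k-1}||^2
    return (g_new @ g_new) / (g_old @ g_old)

def beta_prp(g_new, g_old):
    # Polak-Ribiere-Polyak: beta = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2
    return (g_new @ (g_new - g_old)) / (g_old @ g_old)

def beta_hybrid(g_new, g_old, theta):
    # convex combination: theta in [0, 1] blends the two formulas
    return theta * beta_fr(g_new, g_old) + (1.0 - theta) * beta_prp(g_new, g_old)
```

The new direction is then d_k = -g_k + beta * d_{k-1}; theta = 1 recovers FR and theta = 0 recovers PRP.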
5

Kaelo, Pro, Sindhu Narayanan, and M. V. Thuto. "A modified quadratic hybridization of Polak-Ribiere-Polyak and Fletcher-Reeves conjugate gradient method for unconstrained optimization problems." An International Journal of Optimization and Control: Theories & Applications (IJOCTA) 7, no. 2 (July 15, 2017): 177–85. http://dx.doi.org/10.11121/ijocta.01.2017.00339.

Abstract:
This article presents a modified quadratic hybridization of the Polak–Ribiere–Polyak and Fletcher–Reeves conjugate gradient method for solving unconstrained optimization problems. Global convergence, with the strong Wolfe line search conditions, of the proposed quadratic hybrid conjugate gradient method is established. We also report some numerical results to show the competitiveness of the new hybrid method.
6

Wang, C. Y., and M. X. Li. "Convergence property of the Fletcher-Reeves conjugate gradient method with errors." Journal of Industrial & Management Optimization 1, no. 2 (2005): 193–200. http://dx.doi.org/10.3934/jimo.2005.1.193.

7

ZENG, MEILAN, and GUANGHUI ZHOU. "A MODIFIED FR CONJUGATE GRADIENT METHOD FOR COMPUTING Z-EIGENPAIRS OF SYMMETRIC TENSORS." Bulletin of the Australian Mathematical Society 94, no. 3 (July 26, 2016): 411–20. http://dx.doi.org/10.1017/s0004972716000381.

Abstract:
This paper proposes improvements to the modified Fletcher–Reeves conjugate gradient method (FR-CGM) for computing $Z$-eigenpairs of symmetric tensors. The FR-CGM does not need to compute the exact gradient and Jacobian. The global convergence of this method is established. We also test other conjugate gradient methods such as the modified Polak–Ribière–Polyak conjugate gradient method (PRP-CGM) and shifted power method (SS-HOPM). Numerical experiments of FR-CGM, PRP-CGM and SS-HOPM show the efficiency of the proposed method for finding $Z$-eigenpairs of symmetric tensors.
8

Pang, Deyan, Shouqiang Du, and Jingjie Ju. "The smoothing Fletcher-Reeves conjugate gradient method for solving finite minimax problems." ScienceAsia 42, no. 1 (2016): 40. http://dx.doi.org/10.2306/scienceasia1513-1874.2016.42.040.

9

Alshorman, Omar, Mustafa Mamat, Ahmad Alhawarat, and Mohd Revaie. "A modifications of conjugate gradient method for unconstrained optimization problems." International Journal of Engineering & Technology 7, no. 2.14 (April 6, 2018): 21. http://dx.doi.org/10.14419/ijet.v7i2.14.11146.

Abstract:
The Conjugate Gradient (CG) methods play an important role in solving large-scale unconstrained optimization problems. Several studies have recently been devoted to improving and modifying these methods in relation to efficiency and robustness. In this paper, a new parameter of the CG method is proposed. The new parameter possesses global convergence properties under the Strong Wolfe-Powell (SWP) line search. The numerical results show that the proposed formula is more efficient and robust compared with the Polak-Ribière-Polyak (PRP), Fletcher-Reeves (FR), and Wei, Yao, and Liu (WYL) parameters.
10

Sellami, Badreddine, and Mohamed Chiheb Eddine Sellami. "Global convergence of a modified Fletcher–Reeves conjugate gradient method with Wolfe line search." Asian-European Journal of Mathematics 13, no. 04 (April 4, 2019): 2050081. http://dx.doi.org/10.1142/s1793557120500813.

Abstract:
In this paper, we are concerned with conjugate gradient methods for solving unconstrained optimization problems. We propose a modified Fletcher-Reeves (abbreviated FR) [Function minimization by conjugate gradients, Comput. J. 7 (1964) 149–154] conjugate gradient algorithm satisfying a parametrized sufficient descent condition with a parameter [Formula: see text]. The parameter [Formula: see text] is computed by means of the conjugacy condition, yielding a positive multiplicative modification of the Hestenes and Stiefel (abbreviated HS) [Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards Sec. B 48 (1952) 409–436] algorithm, which produces a descent search direction at every iteration at which the line search satisfies the Wolfe conditions. Under appropriate conditions, we show that the modified FR method with the strong Wolfe line search is globally convergent for uniformly convex functions. We also present extensive preliminary numerical experiments to show the efficiency of the proposed method.
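The (strong) Wolfe conditions referenced in this abstract are standard; a minimal checker, written generically and not tied to this paper's algorithm, might look like:

```python
import numpy as np

def strong_wolfe(f, grad, x, d, t, c1=1e-4, c2=0.9):
    """True if step length t along direction d satisfies the strong Wolfe conditions."""
    fx = f(x)
    gxd = grad(x) @ d                      # directional derivative at x
    x_new = x + t * d
    # sufficient decrease (Armijo) condition
    sufficient_decrease = f(x_new) <= fx + c1 * t * gxd
    # strong curvature condition
    strong_curvature = abs(grad(x_new) @ d) <= c2 * abs(gxd)
    return bool(sufficient_decrease and strong_curvature)
```

For example, on the quadratic f(x) = x^T x / 2 with d = -x, the full step t = 1 jumps to the exact minimizer and satisfies both conditions, while t = 2 overshoots and fails the sufficient decrease test.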
11

Sun, Min, Jing Liu, and Yaru Wang. "Two Improved Conjugate Gradient Methods with Application in Compressive Sensing and Motion Control." Mathematical Problems in Engineering 2020 (May 5, 2020): 1–11. http://dx.doi.org/10.1155/2020/9175496.

Abstract:
To solve the monotone equations with convex constraints, a novel multiparameterized conjugate gradient method (MPCGM) is designed and analyzed. This kind of conjugate gradient method is derivative-free and can be viewed as a modified version of the famous Fletcher-Reeves (FR) conjugate gradient method. Under appropriate conditions, we show that the proposed method has the global convergence property. Furthermore, we generalize the MPCGM to solve the unconstrained optimization problem and offer another novel conjugate gradient method (NCGM), which satisfies the sufficient descent property without any line search. Global convergence of the NCGM is also proved. Finally, we report some numerical results to show the efficiency of the two novel methods. Specifically, their practical applications in compressive sensing and motion control of a robot manipulator are also investigated.
12

Jabbar, Hawraz N., and Basim A. Hassan. "Two-versions of descent conjugate gradient methods for large-scale unconstrained optimization." Indonesian Journal of Electrical Engineering and Computer Science 22, no. 3 (June 1, 2021): 1643. http://dx.doi.org/10.11591/ijeecs.v22.i3.pp1643-1649.

Abstract:
The conjugate gradient methods are noted to be exceedingly valuable for solving large-scale unconstrained optimization problems, since they do not require the storage of matrices. Mostly, the conjugacy parameter is the focus for conjugate gradient methods. The current paper proposes new conjugate-gradient-type parameters to solve problems of large-scale unconstrained optimization. A Hessian approximation in diagonal matrix form, on the basis of second- and third-order Taylor series expansions, was employed in this study. The sufficient descent property for the proposed algorithm is proved, and the new method converges globally. The new algorithm is found to be competitive with the Fletcher-Reeves (FR) algorithm in a number of numerical experiments.
13

Dalla, Carlos Eduardo Rambalducci, Wellington Betencurte da Silva, Júlio Cesar Sampaio Dutra, and Marcelo José Colaço. "A comparative study of gradient-based and meta-heuristic optimization methods using Griewank benchmark function/ Um estudo comparativo de métodos de otimização baseados em gradientes e meta-heurísticos usando a função de benchmark do Griewank." Brazilian Journal of Development 7, no. 6 (June 7, 2021): 55341–50. http://dx.doi.org/10.34117/bjdv7n6-102.

Abstract:
Optimization methods are frequently applied to solve real-world problems such as engineering design, computer science, and computational chemistry. This paper compares gradient-based algorithms and the meta-heuristic particle swarm optimization to minimize the multidimensional benchmark Griewank function, a multimodal function with widespread local minima. Several gradient-based methods were compared: steepest descent, conjugate gradient with the Fletcher-Reeves and Polak-Ribiere formulations, and the quasi-Newton Davidon-Fletcher-Powell approach. The results show that the meta-heuristic method is recommended for functions with this behavior because it requires no prior information about the search space. The performance comparison includes computation time and convergence to global and local optima.
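As a rough illustration of the gradient-based side of this comparison (assuming a plain Armijo backtracking line search rather than whatever line search the authors used), a Fletcher-Reeves CG applied to the Griewank function can be sketched as:

```python
import numpy as np

def griewank(x):
    # Griewank: f(x) = 1 + sum(x_i^2)/4000 - prod(cos(x_i / sqrt(i)))
    i = np.arange(1, x.size + 1)
    return 1.0 + x @ x / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def griewank_grad(x):
    i = np.arange(1, x.size + 1)
    c = np.cos(x / np.sqrt(i))
    s = np.sin(x / np.sqrt(i))
    # derivative of the product term, computed as prod(c)/c_i (assumes no c_i is exactly zero)
    return x / 2000.0 + np.prod(c) / c * s / np.sqrt(i)

def fr_cg(f, grad, x0, iters=200, tol=1e-8):
    """Fletcher-Reeves CG with Armijo backtracking; restarts on a failed line search."""
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search along d
        fx, slope, t = f(x), g @ d, 1.0
        while f(x + t * d) > fx + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5
        if f(x + t * d) > fx:        # line search failed: restart with steepest descent
            d = -g
            continue
        x_new = x + t * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves update
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

Starting from a point such as (3, 4), this sketch monotonically decreases the objective but, as the abstract notes, it converges to whichever local minimum the line search happens to descend into, which is exactly why the paper pits it against a meta-heuristic.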
14

Babaie-Kafaki, Saman. "A Quadratic Hybridization of Polak–Ribière–Polyak and Fletcher–Reeves Conjugate Gradient Methods." Journal of Optimization Theory and Applications 154, no. 3 (March 13, 2012): 916–32. http://dx.doi.org/10.1007/s10957-012-0016-7.

15

Yao, Teng-Teng, Zheng-Jian Bai, Zhi Zhao, and Wai-Ki Ching. "A Riemannian Fletcher--Reeves Conjugate Gradient Method for Doubly Stochastic Inverse Eigenvalue Problems." SIAM Journal on Matrix Analysis and Applications 37, no. 1 (January 2016): 215–34. http://dx.doi.org/10.1137/15m1023051.

16

Babaie-Kafaki, Saman, and Reza Ghanbari. "A hybridization of the Polak-Ribière-Polyak and Fletcher-Reeves conjugate gradient methods." Numerical Algorithms 68, no. 3 (May 22, 2014): 481–95. http://dx.doi.org/10.1007/s11075-014-9856-6.

17

Wang, Chang-yu, and Shu-jun Lian. "Global convergence properties of the two new dependent Fletcher–Reeves conjugate gradient methods." Applied Mathematics and Computation 181, no. 2 (October 2006): 920–31. http://dx.doi.org/10.1016/j.amc.2006.01.078.

18

Al-batah, Mohammad Subhi, Mutasem Sh Alkhasawneh, Lea Tien Tay, Umi Kalthum Ngah, Habibah Hj Lateh, and Nor Ashidi Mat Isa. "Landslide Occurrence Prediction Using Trainable Cascade Forward Network and Multilayer Perceptron." Mathematical Problems in Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/512158.

Abstract:
Landslides are one of the dangerous natural phenomena that hinder development on Penang Island, Malaysia. Therefore, finding a reliable method to predict the occurrence of landslides is still of research interest. In this paper, two artificial neural network models, namely, the Multilayer Perceptron (MLP) and the Cascade Forward Neural Network (CFNN), are introduced to predict the landslide hazard map of Penang Island. These two models were tested and compared using eleven machine learning algorithms, that is, Levenberg-Marquardt, Broyden-Fletcher-Goldfarb-Shanno, Resilient Backpropagation, Scaled Conjugate Gradient, Conjugate Gradient with Powell-Beale restarts, Conjugate Gradient with Fletcher-Reeves updates, Conjugate Gradient with Polak-Ribiere updates, One Step Secant, Gradient Descent, Gradient Descent with Momentum and Adaptive Learning Rate, and the Gradient Descent with Momentum algorithm. Often, the performance of landslide prediction depends on the input factors besides the prediction method. In this research work, 14 input factors were used. The prediction accuracies of the networks were verified using the area under the receiver operating characteristic curve. The results indicated that the best prediction accuracy of 82.89% was achieved by the CFNN with the Levenberg-Marquardt learning algorithm for the training data set, and 81.62% for the testing data set.
19

Abubakar, Auwal Bala, Poom Kumam, Hassan Mohammad, Aliyu Muhammed Awwal, and Kanokwan Sitthithakerngkiet. "A Modified Fletcher–Reeves Conjugate Gradient Method for Monotone Nonlinear Equations with Some Applications." Mathematics 7, no. 8 (August 15, 2019): 745. http://dx.doi.org/10.3390/math7080745.

Abstract:
One of the fastest growing and most efficient methods for solving the unconstrained minimization problem is the conjugate gradient (CG) method. Recently, considerable efforts have been made to extend the CG method to solving monotone nonlinear equations. In this research article, we present a modification of the Fletcher-Reeves (FR) conjugate gradient projection method for constrained monotone nonlinear equations. The method possesses the sufficient descent property, and its global convergence was proved using some appropriate assumptions. Two sets of numerical experiments were carried out to show the good performance of the proposed method compared with some existing ones. The first experiment solved monotone constrained nonlinear equations using some benchmark test problems, while the second applied the method to signal and image recovery problems arising from compressive sensing.
20

Chatterjee, A. "A Fletcher–Reeves Conjugate Gradient Neural-Network-Based Localization Algorithm for Wireless Sensor Networks." IEEE Transactions on Vehicular Technology 59, no. 2 (February 2010): 823–30. http://dx.doi.org/10.1109/tvt.2009.2035132.

21

Liu, Wei, Jian Jun Cai, and Xi Pin Fan. "A Study of PID Control System Based on BP Neural Network." Advanced Materials Research 328-330 (September 2011): 1908–11. http://dx.doi.org/10.4028/www.scientific.net/amr.328-330.1908.

Abstract:
To deal with the defects of steepest descent, namely slow convergence and easy entrapment in local minima, this paper proposes a new type of PID control system based on the BP neural network, which combines the neural network and the PID strategy. It has the merits of both the neural network and the PID controller. Moreover, the Fletcher-Reeves conjugate gradient in the controller can make the training of the network faster and can eliminate the disadvantages of steepest descent in the BP algorithm. The parameters of the neural network PID controller are modified online by the improved conjugate gradient method. The programming steps under MATLAB are finally described. The simulation result shows that the controller is effective.
22

Abubakar, Auwal Bala, Kanikar Muangchoo, Abdulkarim Hassan Ibrahim, Jamilu Abubakar, and Sadiya Ali Rano. "FR-type algorithm for finding approximate solutions to nonlinear monotone operator equations." Arabian Journal of Mathematics 10, no. 2 (February 17, 2021): 261–70. http://dx.doi.org/10.1007/s40065-021-00313-5.

Abstract:
This paper focuses on the problem of convex constraint nonlinear equations involving monotone operators in Euclidean space. A Fletcher and Reeves type derivative-free conjugate gradient method is proposed. The proposed method is designed to ensure the descent property of the search direction at each iteration. Furthermore, the convergence of the proposed method is proved under the assumption that the underlying operator is monotone and Lipschitz continuous. The numerical results show that the method is efficient for the given test problems.
23

Kannan, B. K., and S. N. Kramer. "An Augmented Lagrange Multiplier Based Method for Mixed Integer Discrete Continuous Optimization and Its Applications to Mechanical Design." Journal of Mechanical Design 116, no. 2 (June 1, 1994): 405–11. http://dx.doi.org/10.1115/1.2919393.

Abstract:
An algorithm for solving nonlinear optimization problems involving discrete, integer, zero-one, and continuous variables is presented. The augmented Lagrange multiplier method combined with Powell’s method and Fletcher and Reeves Conjugate Gradient method are used to solve the optimization problem where penalties are imposed on the constraints for integer/discrete violations. The use of zero-one variables as a tool for conceptual design optimization is also described with an example. Several case studies have been presented to illustrate the practical use of this algorithm. The results obtained are compared with those obtained by the Branch and Bound algorithm. Also, a comparison is made between the use of Powell’s method (zeroth order) and the Conjugate Gradient method (first order) in the solution of these mixed variable optimization problems.
24

Wanto, Anjar, Muhammad Zarlis, Sawaluddin, and Dedy Hartama. "Analysis of Artificial Neural Network Backpropagation Using Conjugate Gradient Fletcher Reeves In The Predicting Process." Journal of Physics: Conference Series 930 (December 2017): 012018. http://dx.doi.org/10.1088/1742-6596/930/1/012018.

25

Jiang, Xianzhen, and Jinbao Jian. "Improved Fletcher–Reeves and Dai–Yuan conjugate gradient methods with the strong Wolfe line search." Journal of Computational and Applied Mathematics 348 (March 2019): 525–34. http://dx.doi.org/10.1016/j.cam.2018.09.012.

26

Zhang, Li, Weijun Zhou, and Donghui Li. "Global convergence of a modified Fletcher–Reeves conjugate gradient method with Armijo-type line search." Numerische Mathematik 104, no. 4 (September 5, 2006): 561–72. http://dx.doi.org/10.1007/s00211-006-0028-z.

27

Boumaraf, Badreddine, Nacira Seddik-Ameur, and Vlad Stefan Barbu. "Estimation of Beta-Pareto Distribution Based on Several Optimization Methods." Mathematics 8, no. 7 (July 1, 2020): 1055. http://dx.doi.org/10.3390/math8071055.

Abstract:
This paper is concerned with the maximum likelihood estimators of the Beta-Pareto distribution introduced in Akinsete et al. (2008), which comes from the mixing of two probability distributions, Beta and Pareto. Since these estimators cannot be obtained explicitly, we use nonlinear optimization methods that provide them numerically. The methods we investigate are the Newton-Raphson method, the gradient method, and the conjugate gradient method; for the conjugate gradient method we use the Fletcher-Reeves model. The corresponding algorithms are developed and the performances of the methods used are confirmed by an important simulation study. In order to compare between several concurrent models, namely generalized Beta-Pareto, Beta, Pareto, Gamma, and Beta-Pareto, model selection criteria are used. We firstly consider completely observed data and, secondly, assume the observations to be right censored, and we derive the same type of results.
28

Khalid Awang, Mohd, Mohammad Ridwan Ismail, Mokhairi Makhtar, M. Nordin A Rahman, and Abd Rasid Mamat. "Performance Comparison of Neural Network Training Algorithms for Modeling Customer Churn Prediction." International Journal of Engineering & Technology 7, no. 2.15 (April 6, 2018): 35. http://dx.doi.org/10.14419/ijet.v7i2.15.11196.

Abstract:
Predicting customer churn has become the priority of every telecommunication service provider as the market becomes more saturated and competitive. This paper presents a comparison of neural network learning algorithms for customer churn prediction. The data set used to train and test the neural network algorithms was provided by one of the leading telecommunication companies in Malaysia. The Multilayer Perceptron (MLP) networks are trained using nine types of learning algorithms: Levenberg-Marquardt backpropagation (trainlm), BFGS quasi-Newton backpropagation (trainbfg), conjugate gradient backpropagation with Fletcher-Reeves updates (traincgf), conjugate gradient backpropagation with Polak-Ribiere updates (traincgp), conjugate gradient backpropagation with Powell-Beale restarts (traincgb), scaled conjugate gradient backpropagation (trainscg), one-step secant backpropagation (trainoss), Bayesian regularization backpropagation (trainbr), and resilient backpropagation (trainrp). The performance of the neural network is measured based on the prediction accuracy of the learning and testing phases. The LM learning algorithm is found to yield the optimum neural network model, consisting of fourteen input units, one hidden node, and one output node. The best result of the experiment indicated that this model is able to produce a performance accuracy of 94.82%.
29

Ng, Kin Wei, and Ahmad Rohanin. "Modified Fletcher-Reeves and Dai-Yuan Conjugate Gradient Methods for Solving Optimal Control Problem of Monodomain Model." Applied Mathematics 03, no. 08 (2012): 864–72. http://dx.doi.org/10.4236/am.2012.38128.

30

Wang, Nengjian, Qinhui Liu, Chunping Ren, and Chunsheng Liu. "A Novel Method of Dynamic Force Identification and Its Application." Mathematical Problems in Engineering 2019 (December 14, 2019): 1–10. http://dx.doi.org/10.1155/2019/1534560.

Abstract:
In this paper, an efficient mixed spectral conjugate gradient (EMSCG, for short) method is presented for solving unconstrained optimization problems. In this work, we construct a novel formula performed by using a conjugate gradient parameter which takes into account the advantages of Fletcher–Reeves (FR), Polak–Ribiere–Polyak (PRP), and a variant Polak-Ribiere-Polyak (VPRP), prove its stability and convergence, and apply it to the dynamic force identification of practical engineering structure. The analysis results show that the present method has higher efficiency, stronger robust convergence quality, and fewer iterations. In addition, the proposed method can provide more efficient and numerically stable approximation of the actual force, compared with the FR method, PRP method, and VPRP method. Therefore, we can make a clear conclusion that the proposed method in this paper can provide an effective optimization solution. Meanwhile, there is reason to believe that the proposed method can offer a reference for future research.
31

ZHANG, XIAODONG, SHIRA L. BROSCHAT, and PATRICK J. FLYNN. "A NUMERICAL STUDY OF CONJUGATE GRADIENT DIRECTIONS FOR AN ULTRASOUND INVERSE PROBLEM." Journal of Computational Acoustics 12, no. 04 (December 2004): 587–604. http://dx.doi.org/10.1142/s0218396x04002377.

Abstract:
In ultrasound inverse problems, the integral equation can be nonlinear, ill-posed, and computationally expensive. One approach to solving such problems is the conjugate gradient (CG) method. A key parameter in the CG method is the conjugate gradient direction. In this paper, we investigate the CG directions proposed by Polyak et al. (PPR), Hestenes and Stiefel (HS), Fletcher and Reeves (FR), Dai and Yuan (YD), and the two-parameter family generalization proposed by Nazareth (TPF). Each direction is applied to three test cases with different contrasts and phase shifts. Test case 1 has low contrast with a phase shift of 0.2π. Reconstruction of the object is obtained for all directions. The performances of the PPR, HS, YD, and TPF directions are comparable, while the FR direction gives the poorest performance. Test case 2 has medium contrast with a phase shift of 0.75π. Reconstruction is obtained for all but the FR direction. The PPR, HS, YD, and TPF directions have similar mean square error; the YD direction takes the least amount of CPU time. Test case 3 has the highest contrast with a phase shift of 1.003π. Only the YD direction gives reasonably accurate results.
32

Lahmiri, Salim, Mounir Boukadoum, and Sylvain Chartier. "Exploring Information Categories and Artificial Neural Networks Numerical Algorithms in S&P500 Trend Prediction." International Journal of Strategic Decision Sciences 5, no. 1 (January 2014): 76–94. http://dx.doi.org/10.4018/ijsds.2014010105.

Abstract:
The purpose of this study is to examine four major issues. First, the authors compare the performance of economic information, technical indicators, historical information, and investor sentiment measures in financial predictions using backpropagation neural networks (BPNN). Granger causality tests are applied to each category of information to select the relevant variables that statistically and significantly affect stock market shifts. Second, the authors investigate the effect of combining all four categories of information variables selected by the Granger causality test on the prediction accuracy. Third, the effectiveness of different numerical techniques on the accuracy of the BPNN is explored. The authors include conjugate gradient algorithms (Fletcher-Reeves update, Polak-Ribière update, Powell-Beale restart), quasi-Newton (Broyden-Fletcher-Goldfarb-Shanno, BFGS), and the Levenberg-Marquardt (LM) algorithm, which is commonly used in the literature. Fourth, the authors compare the performance of the BPNN and support vector machine (SVM) in terms of stock market trend prediction. The comparative study is applied to S&P500 data to predict its future moves. The out-of-sample forecasting results show that (i) historical values and sentiment measures allow obtaining higher accuracy than economic information and technical indicators, (ii) combining the four categories of information does not help improve the accuracy of the BPNN and SVM, (iii) the LM algorithm is outperformed by the Polak-Ribière, Powell-Beale, and Fletcher-Reeves algorithms, and (iv) the BPNN outperforms the SVM except when using sentiment measures as predictive information.
33

Yao, Teng-Teng, Zheng-Jian Bai, and Zhi Zhao. "A Riemannian variant of the Fletcher-Reeves conjugate gradient method for stochastic inverse eigenvalue problems with partial eigendata." Numerical Linear Algebra with Applications 26, no. 2 (October 25, 2018): e2221. http://dx.doi.org/10.1002/nla.2221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Hashim, Mimi Nurzilah, Muhammad Khusairi Osman, Mohammad Nizam Ibrahim, Ahmad Farid Abidin, and Ahmad Asri Abd Samat. "A Comparison Study of Learning Algorithms for Estimating Fault Location." Indonesian Journal of Electrical Engineering and Computer Science 6, no. 2 (May 1, 2017): 464. http://dx.doi.org/10.11591/ijeecs.v6.i2.pp464-472.

Full text
Abstract:
Fault location is one of the important schemes in power system protection, used to locate the exact position of a disturbance. Nowadays, artificial neural networks (ANNs) are widely used to identify the exact fault location on transmission lines. Selecting a suitable training algorithm is important when analysing ANN performance. This paper presents a comparative study of various ANN training algorithms for fault location in transmission lines. The feature fed into the ANN is the time of the first peak change in the discrete wavelet transform (DWT) of the faulted current signal, acting as a travelling-wave fault location technique. Six commonly used backpropagation training algorithms were selected: Levenberg-Marquardt, Bayesian Regularization, conjugate gradient backpropagation with Powell-Beale restarts, BFGS quasi-Newton, conjugate gradient backpropagation with Polak-Ribière updates, and conjugate gradient backpropagation with Fletcher-Reeves updates. The proposed fault location method is tested with varying fault locations, fault types, fault resistances, and inception angles. The performance of each training algorithm is evaluated by goodness of fit (R<sup>2</sup>), mean square error (MSE), and percentage prediction error (PPE). Simulation results show that the best training algorithm for estimating fault location is Bayesian Regularization (R<sup>2</sup> = 1.0, MSE = 0.034557, and PPE = 0.014%).
APA, Harvard, Vancouver, ISO, and other styles
35

Sadchikov, Pavel, Tatyana Khomenko, and Galina Ternovaya. "Numerical optimization of the transfer function of the intelligent building management system." E3S Web of Conferences 97 (2019): 01015. http://dx.doi.org/10.1051/e3sconf/20199701015.

Full text
Abstract:
The paper deals with structural-parametric models describing the dynamic processes of the technical systems of an intelligent building. The task of finding the transfer function of the synthesized elements and devices of its information-measuring and control systems, based on the Mason method, is formalized. The components of the transfer function are presented as characteristic polynomials in the structural scheme of the energy-information model of the circuit. The results of a comparative analysis of search methods for multiple real and complex-conjugate polynomial roots are presented. To organize the search, the Fletcher-Reeves iterative method of unconstrained optimization was chosen. This conjugate gradient method solves the numerical optimization problem in a finite number of steps and shows better convergence than the method of steepest descent, at the same per-step computational cost.
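The Fletcher-Reeves iteration this abstract refers to can be sketched in a few lines. The objective function, starting point, and simple backtracking (Armijo) line search below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def fletcher_reeves(f, grad, x0, tol=1e-8, max_iter=500):
    """Fletcher-Reeves conjugate gradient minimisation with a simple
    backtracking (Armijo) line search; production codes typically use
    a strong-Wolfe line search instead."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # start with steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                      # safeguard: restart if d is
            d = -g                          # not a descent direction
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5                    # backtrack until Armijo holds
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# illustrative quadratic with minimiser at (1, 2)
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] - 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] - 2.0)])
print(fletcher_reeves(f, grad, [0.0, 0.0]))
```

On a quadratic like this, conjugate directions avoid the zig-zagging of steepest descent, which is the convergence advantage the abstract highlights.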
APA, Harvard, Vancouver, ISO, and other styles
36

Babaie-Kafaki, Saman. "A Note on the Global Convergence of the Quadratic Hybridization of Polak–Ribière–Polyak and Fletcher–Reeves Conjugate Gradient Methods." Journal of Optimization Theory and Applications 157, no. 1 (September 28, 2012): 297–98. http://dx.doi.org/10.1007/s10957-012-0184-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Sun, Zhongbo, Yantao Tian, and Jing Wang. "A novel projected Fletcher-Reeves conjugate gradient approach for finite-time optimal robust controller of linear constraints optimization problem: Application to bipedal walking robots." Optimal Control Applications and Methods 39, no. 1 (July 11, 2017): 130–59. http://dx.doi.org/10.1002/oca.2339.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Mashaly, Ahmed F., and A. A. Alazba. "Comparative investigation of artificial neural network learning algorithms for modeling solar still production." Journal of Water Reuse and Desalination 5, no. 4 (April 20, 2015): 480–93. http://dx.doi.org/10.2166/wrd.2015.009.

Full text
Abstract:
Three artificial neural network learning algorithms were utilized to forecast the productivity (MD) of a solar still operating in a hyper-arid environment. The learning algorithms were Levenberg–Marquardt (LM), conjugate gradient backpropagation with Fletcher–Reeves updates, and resilient backpropagation. The Julian day, ambient air temperature, relative humidity, wind speed, solar radiation, temperature of feed water, temperature of brine water, total dissolved solids (TDS) of feed water, and TDS of brine water were used in the input layer of the developed neural network model; the MD was located in the output layer. The developed model for each algorithm was trained, tested, and validated with data obtained from field experiments. Findings revealed that the developed model could be utilized to predict the MD with excellent accuracy. The LM algorithm (with the minimum root mean squared error and the maximum overall index of model performance) was found to be the best in the training, testing, and validation stages. Relative errors in the MD values predicted by the developed model using the LM algorithm were mostly within ±10%. These results indicate that the LM algorithm is the most accurate algorithm for predicting the MD with the developed model.
APA, Harvard, Vancouver, ISO, and other styles
39

Khan, Taimur, Teh Sabariah Binti Abd Manan, Mohamed Hasnain Isa, Abdulnoor A. J. Ghanim, Salmia Beddu, Hisyam Jusoh, Muhammad Shahid Iqbal, Gebiaw T. Ayele, and Mohammed Saedi Jami. "Modeling of Cu(II) Adsorption from an Aqueous Solution Using an Artificial Neural Network (ANN)." Molecules 25, no. 14 (July 17, 2020): 3263. http://dx.doi.org/10.3390/molecules25143263.

Full text
Abstract:
This research optimized the adsorption performance of rice husk char (RHC4) for copper (Cu(II)) from an aqueous solution. Various physicochemical analyses such as Fourier transform infrared spectroscopy (FTIR), field-emission scanning electron microscopy (FESEM), carbon, hydrogen, nitrogen, and sulfur (CHNS) analysis, Brunauer–Emmett–Teller (BET) surface area analysis, bulk density (g/mL), ash content (%), pH, and pH<sub>ZPC</sub> were performed to determine the characteristics of RHC4. The effects of operating variables such as aqueous pH, contact time, Cu(II) concentration, and RHC4 dose on adsorption were studied. The maximum adsorption was achieved at 120 min of contact time, pH 6, and an RHC4 dose of 8 g/L. The prediction of percentage Cu(II) adsorption was investigated via an artificial neural network (ANN). The Fletcher–Reeves conjugate gradient backpropagation (BP) algorithm was the best fit among all of the tested algorithms (mean squared error (MSE) of 3.84 and R<sup>2</sup> of 0.989). The pseudo-second-order kinetic model fitted the experimental data well, indicating chemical adsorption. The intraparticle analysis showed that the adsorption process proceeded by boundary layer adsorption initially and by intraparticle diffusion at the later stage. The Langmuir and Freundlich isotherm models described the adsorption capacity and intensity well. The thermodynamic parameters indicated that the adsorption of Cu(II) by RHC4 was spontaneous. The adsorption capacity of RHC4 is comparable to that of other agricultural material-based adsorbents, making RHC4 a competent adsorbent for Cu(II) removal from wastewater.
APA, Harvard, Vancouver, ISO, and other styles
40

Habibi, Azwar Riza, Vivi Aida Fitria, and Lukman Hakim. "Optimasi Learning Rate Neural Network Backpropagation Dengan Search Direction Conjugate Gradient Pada Electrocardiogram." NUMERICAL: Jurnal Matematika dan Pendidikan Matematika, January 6, 2020, 131–37. http://dx.doi.org/10.25217/numerical.v3i2.603.

Full text
Abstract:
This paper develops a neural network (NN) trained with the conjugate gradient (CG) method. The modification lies in defining the direction of the line search. The conjugate gradient method offers several formulas for determining the step size, such as the Fletcher-Reeves, Dixon, Polak-Ribière, Hestenes-Stiefel, and Dai-Yuan methods, applied here to discrete electrocardiogram data. Conjugate gradients are used to update the learning rate of the neural network using different step sizes, while the gradient search direction is used to update the weights of the NN. The results show that Polak-Ribière obtains an optimal error, but the direction of the weight search on the NN widens, so NN training requires more epochs. The Hestenes-Stiefel and Dai-Yuan methods could not find the gradient search direction, so they could not update the weights, causing the error and the number of epochs to diverge.
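The CG variants this abstract compares differ only in the scalar β used to build the new search direction d_new = -g_new + β·d_old. A minimal sketch of the four classical formulas follows (the Dixon variant is omitted, and the sample gradients are illustrative assumptions, not values from the paper):

```python
import numpy as np

def cg_beta(g_new, g_old, d_old, variant):
    """Classical beta formulas for the conjugate gradient direction
    update d_new = -g_new + beta * d_old.  The Hestenes-Stiefel and
    Dai-Yuan denominators vanish when d_old is orthogonal to the
    gradient change, which matches the breakdown the abstract reports."""
    y = g_new - g_old                       # change in the gradient
    if variant == "fletcher-reeves":
        return (g_new @ g_new) / (g_old @ g_old)
    if variant == "polak-ribiere":
        return (g_new @ y) / (g_old @ g_old)
    if variant == "hestenes-stiefel":
        return (g_new @ y) / (d_old @ y)
    if variant == "dai-yuan":
        return (g_new @ g_new) / (d_old @ y)
    raise ValueError(f"unknown variant: {variant}")

# illustrative gradients at two successive iterates
g_old = np.array([1.0, 0.0])
g_new = np.array([0.5, 0.5])
d_old = np.array([-1.0, 0.0])
for v in ("fletcher-reeves", "polak-ribiere",
          "hestenes-stiefel", "dai-yuan"):
    print(v, cg_beta(g_new, g_old, d_old, v))
```

Note that Fletcher-Reeves never divides by a quantity involving d_old, which is one reason it is more robust than Hestenes-Stiefel or Dai-Yuan when the line search is inexact.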
APA, Harvard, Vancouver, ISO, and other styles