Academic literature on the topic 'Fletcher Reeves conjugate gradient (CGFR)'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Fletcher Reeves conjugate gradient (CGFR).'

Journal articles on the topic "Fletcher Reeves conjugate gradient (CGFR)"

1

Lyn Dee, Goh, Norhisham Bakhary, Azlan Abdul Rahman, and Baderul Hisham Ahmad. "A Comparison of Artificial Neural Network Learning Algorithms for Vibration-Based Damage Detection." Advanced Materials Research 163-167 (December 2010): 2756–60. http://dx.doi.org/10.4028/www.scientific.net/amr.163-167.2756.

Abstract:
This paper investigates the performance of Artificial Neural Network (ANN) learning algorithms for vibration-based damage detection. The capabilities of six different learning algorithms in detecting damage are studied and their performances are compared. The algorithms are Levenberg-Marquardt (LM), Resilient Backpropagation (RP), Scaled Conjugate Gradient (SCG), Conjugate Gradient with Powell-Beale Restarts (CGB), Polak-Ribiere Conjugate Gradient (CGP) and Fletcher-Reeves Conjugate Gradient (CGF) algorithms. The performances of these algorithms are assessed based on their generalisation capability in relating the vibration parameters (frequencies and mode shapes) with damage locations and severities under various numbers of input and output variables. The results show that Levenberg-Marquardt algorithm provides the best generalisation performance.
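For context, the Fletcher–Reeves scheme that the CGF algorithm above (and most entries below) builds on is, in its standard textbook form:

```latex
% Standard Fletcher-Reeves conjugate gradient update, with g_k = \nabla f(x_k)
\[
x_{k+1} = x_k + \alpha_k d_k, \qquad
d_{k+1} = -g_{k+1} + \beta_{k+1}^{FR} d_k, \qquad
\beta_{k+1}^{FR} = \frac{\|g_{k+1}\|^{2}}{\|g_k\|^{2}},
\]
% with d_0 = -g_0 and the step length \alpha_k chosen by a line search.
```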
2

Mazloom, Mohammad Sadegh, Farzaneh Rezaei, Abdolhossein Hemmati-Sarapardeh, Maen M. Husein, Sohrab Zendehboudi, and Amin Bemani. "Artificial Intelligence Based Methods for Asphaltenes Adsorption by Nanocomposites: Application of Group Method of Data Handling, Least Squares Support Vector Machine, and Artificial Neural Networks." Nanomaterials 10, no. 5 (May 6, 2020): 890. http://dx.doi.org/10.3390/nano10050890.

Abstract:
Asphaltenes deposition is considered a serious production problem. The literature does not include enough comprehensive studies on the adsorption phenomenon involved in asphaltenes deposition utilizing inhibitors. In addition, effective protocols for handling asphaltenes deposition are still lacking. In this study, three efficient artificial intelligence models, including group method of data handling (GMDH), least squares support vector machine (LSSVM), and artificial neural network (ANN), are proposed for estimating asphaltenes adsorption onto NiO/SAPO-5, NiO/ZSM-5, and NiO/AlPO-5 nanocomposites based on a databank of 252 points. Variables influencing asphaltenes adsorption include pH, temperature, amount of nanocomposites over asphaltenes initial concentration (D/C0), and nanocomposites characteristics such as BET surface area and volume of micropores. The models are also optimized using nine optimization techniques, namely coupled simulated annealing (CSA), genetic algorithm (GA), Bayesian regularization (BR), scaled conjugate gradient (SCG), ant colony optimization (ACO), Levenberg–Marquardt (LM), imperialistic competitive algorithm (ICA), conjugate gradient with Fletcher-Reeves updates (CGF), and particle swarm optimization (PSO). According to the statistical analysis, the proposed RBF-ACO and LSSVM-CSA are the most accurate approaches, predicting asphaltenes adsorption with average absolute percent relative errors of 0.892% and 0.94%, respectively. The sensitivity analysis shows that temperature has the most impact on asphaltenes adsorption from model oil solutions.
3

Zhu, Hongfei, Jorge Leandro, and Qing Lin. "Optimization of Artificial Neural Network (ANN) for Maximum Flood Inundation Forecasts." Water 13, no. 16 (August 18, 2021): 2252. http://dx.doi.org/10.3390/w13162252.

Abstract:
Flooding is the world’s most catastrophic natural event in terms of losses. The ability to forecast flood events is crucial for controlling the risk of flooding to society and the environment. Artificial neural networks (ANN) have been adopted in recent studies to provide fast flood inundation forecasts. In this paper, an existing ANN trained on synthetic events was optimized in two directions: extending the training dataset with the use of a hybrid dataset, and selecting the best training function from six candidates, namely conjugate gradient backpropagation with Fletcher–Reeves updates (CGF), with Polak–Ribière updates (CGP), and with Powell–Beale restarts (CGB); one-step secant backpropagation (OSS); resilient backpropagation (RP); and scaled conjugate gradient backpropagation (SCG). Four real flood events were used to validate the performance of the improved ANN over the existing one. The new training dataset reduced the model’s root mean square error (RMSE) by 10% for the testing dataset and 16% for the real events. The selection of the resilient backpropagation algorithm contributed to 15% lower RMSE for the testing dataset and up to 35% for the real events when compared with the other five training functions.
4

Djordjevic, Snezana. "New hybrid conjugate gradient method as a convex combination of FR and PRP methods." Filomat 30, no. 11 (2016): 3083–100. http://dx.doi.org/10.2298/fil1611083d.

Abstract:
We consider a new hybrid conjugate gradient algorithm, which is obtained from the algorithm of Fletcher-Reeves and the algorithm of Polak-Ribière-Polyak. Numerical comparisons show that the present hybrid conjugate gradient algorithm often behaves better than some known algorithms.
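The convex-combination idea in this entry can be sketched as follows; the weight \theta_k below is generic, since Djordjevic's specific rule for choosing it is given in the paper itself:

```latex
% Hybrid CG parameter as a convex combination of FR and PRP (generic sketch)
\[
\beta_k = (1 - \theta_k)\,\beta_k^{FR} + \theta_k\,\beta_k^{PRP},
\qquad \theta_k \in [0, 1],
\]
\[
\beta_k^{FR} = \frac{\|g_k\|^{2}}{\|g_{k-1}\|^{2}}, \qquad
\beta_k^{PRP} = \frac{g_k^{T}(g_k - g_{k-1})}{\|g_{k-1}\|^{2}}.
\]
```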
5

Kaelo, Pro, Sindhu Narayanan, and M. V. Thuto. "A modified quadratic hybridization of Polak-Ribiere-Polyak and Fletcher-Reeves conjugate gradient method for unconstrained optimization problems." An International Journal of Optimization and Control: Theories & Applications (IJOCTA) 7, no. 2 (July 15, 2017): 177–85. http://dx.doi.org/10.11121/ijocta.01.2017.00339.

Abstract:
This article presents a modified quadratic hybridization of the Polak–Ribiere–Polyak and Fletcher–Reeves conjugate gradient method for solving unconstrained optimization problems. Global convergence, with the strong Wolfe line search conditions, of the proposed quadratic hybrid conjugate gradient method is established. We also report some numerical results to show the competitiveness of the new hybrid method.
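The strong Wolfe line search conditions invoked here are, in their usual statement for a step \alpha_k along a search direction d_k, with constants 0 < c_1 < c_2 < 1:

```latex
% Strong Wolfe conditions: sufficient decrease plus a curvature bound
\[
f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k g_k^{T} d_k, \qquad
\bigl|\nabla f(x_k + \alpha_k d_k)^{T} d_k\bigr| \le c_2 \bigl|g_k^{T} d_k\bigr|.
\]
```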
6

Wang, C. Y., and M. X. Li. "Convergence property of the Fletcher-Reeves conjugate gradient method with errors." Journal of Industrial & Management Optimization 1, no. 2 (2005): 193–200. http://dx.doi.org/10.3934/jimo.2005.1.193.

7

Zeng, Meilan, and Guanghui Zhou. "A Modified FR Conjugate Gradient Method for Computing Z-Eigenpairs of Symmetric Tensors." Bulletin of the Australian Mathematical Society 94, no. 3 (July 26, 2016): 411–20. http://dx.doi.org/10.1017/s0004972716000381.

Abstract:
This paper proposes improvements to the modified Fletcher–Reeves conjugate gradient method (FR-CGM) for computing Z-eigenpairs of symmetric tensors. The FR-CGM does not need to compute the exact gradient and Jacobian. The global convergence of this method is established. We also test other methods, such as the modified Polak–Ribière–Polyak conjugate gradient method (PRP-CGM) and the shifted power method (SS-HOPM). Numerical experiments with FR-CGM, PRP-CGM and SS-HOPM show the efficiency of the proposed method for finding Z-eigenpairs of symmetric tensors.
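For readers outside tensor spectral theory: a Z-eigenpair of an order-m symmetric tensor, in the usual sense of Qi and Lim, is a real pair (\lambda, x) satisfying

```latex
% Z-eigenpair of a symmetric tensor A of order m and dimension n
\[
\mathcal{A}x^{m-1} = \lambda x, \qquad x^{T} x = 1,
\qquad\text{where}\quad
(\mathcal{A}x^{m-1})_i = \sum_{i_2,\dots,i_m=1}^{n} a_{i i_2 \cdots i_m} x_{i_2} \cdots x_{i_m}.
\]
```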
8

Pang, Deyan, Shouqiang Du, and Jingjie Ju. "The smoothing Fletcher-Reeves conjugate gradient method for solving finite minimax problems." ScienceAsia 42, no. 1 (2016): 40. http://dx.doi.org/10.2306/scienceasia1513-1874.2016.42.040.

9

Alshorman, Omar, Mustafa Mamat, Ahmad Alhawarat, and Mohd Revaie. "A modifications of conjugate gradient method for unconstrained optimization problems." International Journal of Engineering & Technology 7, no. 2.14 (April 6, 2018): 21. http://dx.doi.org/10.14419/ijet.v7i2.14.11146.

Abstract:
The conjugate gradient (CG) methods play an important role in solving large-scale unconstrained optimization problems. Several studies have recently been devoted to improving and modifying these methods in relation to efficiency and robustness. In this paper, a new parameter of the CG method is proposed. The new parameter possesses global convergence properties under the strong Wolfe-Powell (SWP) line search. The numerical results show that the proposed formula is more efficient and robust compared with the Polak-Ribière-Polyak (PRP), Fletcher-Reeves (FR) and Wei, Yao, and Liu (WYL) parameters.
10

Sellami, Badreddine, and Mohamed Chiheb Eddine Sellami. "Global convergence of a modified Fletcher–Reeves conjugate gradient method with Wolfe line search." Asian-European Journal of Mathematics 13, no. 04 (April 4, 2019): 2050081. http://dx.doi.org/10.1142/s1793557120500813.

Abstract:
In this paper, we are concerned with conjugate gradient methods for solving unconstrained optimization problems. We propose a modified Fletcher–Reeves (abbreviated FR) [Function minimization by conjugate gradients, Comput. J. 7 (1964) 149–154] conjugate gradient algorithm satisfying a parametrized sufficient descent condition with a parameter [Formula: see text]. The parameter [Formula: see text] is computed by means of the conjugacy condition, which yields a positive multiplicative modification of the Hestenes–Stiefel (abbreviated HS) [Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards Sec. B 48 (1952) 409–436] algorithm that produces a descent search direction at every iteration at which the line search satisfies the Wolfe conditions. Under appropriate conditions, we show that the modified FR method with the strong Wolfe line search is globally convergent on uniformly convex functions. We also present extensive preliminary numerical experiments to show the efficiency of the proposed method.
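As a concrete baseline for the FR-type methods in the entries above, here is a minimal NumPy sketch of the classical Fletcher–Reeves iteration. It is not the modified algorithm of Sellami and Sellami: the Wolfe line search they analyze is replaced by plain Armijo backtracking, and the descent-direction restart is an illustrative safeguard.

```python
import numpy as np

def fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=5000):
    """Classical Fletcher-Reeves CG with Armijo backtracking (sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                       # safeguard: restart if not a descent direction
            d = -g
        alpha = 1.0                          # Armijo backtracking line search
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage on the Rosenbrock function (a common test problem)
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(fletcher_reeves(f, grad, [-1.2, 1.0]))   # should approach (1, 1)
```

Without periodic restarts, pure FR can progress slowly on ill-conditioned problems, which is part of what the modified methods in this bibliography aim to fix.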

Dissertations / Theses on the topic "Fletcher Reeves conjugate gradient (CGFR)"

1

Al-Mudhaf, Ali F. "A feed forward neural network approach for matrix computations." Thesis, Brunel University, 2001. http://bura.brunel.ac.uk/handle/2438/5010.

Abstract:
A new neural network approach for performing matrix computations is presented. The idea of this approach is to construct a feed-forward neural network (FNN) and then train it by matching a desired set of patterns. The solution of the problem is the converged weight of the FNN. Accordingly, unlike conventional FNN research that concentrates on external properties (mappings) of the networks, this study concentrates on the internal properties (weights) of the network. The present network is linear and its weights are usually strongly constrained; hence, a complicated overlapped network needs to be constructed. It should be noted, however, that the present approach depends highly on the training algorithm of the FNN. Unfortunately, the available training methods, such as the original back-propagation (BP) algorithm, encounter many deficiencies when applied to matrix algebra problems, e.g., slow convergence due to improper choice of learning rates (LR). Thus, this study focuses on the development of new efficient and accurate FNN training methods. One improvement suggested to alleviate the problem of LR choice is the use of a line search with the steepest descent method, namely bracketing with the golden section method. This provides an optimal LR as training progresses. Another improvement proposed in this study is the use of conjugate gradient (CG) methods to speed up the training process of the neural network. The computational feasibility of these methods is assessed on two matrix problems, namely the LU-decomposition of both band and square ill-conditioned unsymmetric matrices and the inversion of square ill-conditioned unsymmetric matrices. In this study, two performance indexes have been considered, namely learning speed and convergence accuracy. Extensive computer simulations have been carried out using the following training methods: the steepest descent with line search (SDLS) method, the conventional back-propagation (BP) algorithm, and conjugate gradient (CG) methods, specifically the Fletcher-Reeves conjugate gradient (CGFR) method and the Polak-Ribière conjugate gradient (CGPR) method. The performance comparisons between these minimization methods have demonstrated that the CG training methods give better convergence accuracy and are by far superior with respect to learning time; they offer speed-ups of anything between 3 and 4 over SDLS, depending on the severity of the error goal chosen and the size of the problem. Furthermore, when using Powell's restart criteria with the CG methods, the problem of wrong convergence directions usually encountered in pure CG learning methods is alleviated. In general, CG methods with restarts have shown the best performance among all other methods in training the FNN for LU-decomposition and matrix inversion. Consequently, it is concluded that CG methods are good candidates for training FNNs for matrix computations, in particular the Polak-Ribière conjugate gradient method with Powell's restart criteria.
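Powell's restart criterion, as referenced in this thesis, resets the search direction to steepest descent whenever successive gradients are far from orthogonal; the standard form of the test is:

```latex
% Powell's restart test for conjugate gradient methods
\[
\bigl|g_{k+1}^{T} g_k\bigr| \ge 0.2\,\|g_{k+1}\|^{2}
\quad\Longrightarrow\quad
d_{k+1} = -g_{k+1}.
\]
```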

Book chapters on the topic "Fletcher Reeves conjugate gradient (CGFR)"

1

Lahmiri, Salim. "On Simulation Performance of Feedforward and NARX Networks Under Different Numerical Training Algorithms." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 171–83. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-8823-0.ch005.

Abstract:
This chapter focuses on comparing the forecasting ability of the backpropagation neural network (BPNN) and the nonlinear autoregressive moving average with exogenous inputs (NARX) network trained with different algorithms, namely the quasi-Newton (Broyden-Fletcher-Goldfarb-Shanno, BFGS), conjugate gradient (Fletcher-Reeves update, Polak-Ribière update, Powell-Beale restart), and Levenberg-Marquardt algorithms. Three synthetic signals are generated to conduct experiments. The simulation results showed that, in general, the NARX network, which is a dynamic system, outperforms the popular BPNN. In addition, conjugate gradient algorithms provide better prediction accuracy than the Levenberg-Marquardt algorithm, widely used in the literature, in modeling the exponential signal. However, the LM algorithm performed best when used for forecasting the Moroccan and South African stock price indices under both the BPNN and NARX systems.

Conference papers on the topic "Fletcher Reeves conjugate gradient (CGFR)"

1

Liu, Fang, Hu Wang, and Hongxia Hao. "Fletcher-Reeves Conjugate Gradient for Sparse Reconstruction: Application to Image Compressed Sensing." In 2009 2nd Asian-Pacific Conference on Synthetic Aperture Radar (APSAR). IEEE, 2009. http://dx.doi.org/10.1109/apsar.2009.5374158.

2

Zhao, Liming, Keping Liu, Chunxu Li, Long Jin, and Zhongbo Sun. "Form-finding of Tensegrity Structures Utilizing a Nonlinear Fletcher-Reeves Conjugate Gradient Method." In 2021 IEEE International Conference on Real-time Computing and Robotics (RCAR). IEEE, 2021. http://dx.doi.org/10.1109/rcar52367.2021.9517591.

3

Masood, Sarfaraz, M. N. Doja, and Pravin Chandra. "Analysis of weight initialization routines for conjugate gradient training algorithm with Fletcher-Reeves updates." In 2016 International Conference on Computing, Communication and Automation (ICCCA). IEEE, 2016. http://dx.doi.org/10.1109/ccaa.2016.7813734.

4

Anam, Syaiful, Tommy Adriyanto, and Wuryansari M. K. "Diagnosis Diabetes Mellitus Menggunakan Algoritma Jaringan Syaraf Tiruan Backpropagation dengan Metode Conjugate Gradient Fletcher-Reeves Adaptive Gain" [Diagnosis of Diabetes Mellitus Using a Backpropagation Artificial Neural Network Algorithm with the Adaptive-Gain Fletcher-Reeves Conjugate Gradient Method]. In Seminar Nasional: Peranan Ipteks Menuju Industri Masa Depan (PIMIMD) 2017. ITP Press, 2017. http://dx.doi.org/10.21063/pimimd4.2017.47-52.

5

Kannan, B. K., and Steven N. Kramer. "An Augmented Lagrange Multiplier Based Method for Mixed Integer Discrete Continuous Optimization and its Applications to Mechanical Design." In ASME 1993 Design Technical Conferences. American Society of Mechanical Engineers, 1993. http://dx.doi.org/10.1115/detc1993-0382.

Abstract:
An algorithm for solving nonlinear optimization problems involving discrete, integer, zero-one and continuous variables is presented. The augmented Lagrange multiplier method, combined with Powell’s method and the Fletcher-Reeves conjugate gradient method, is used to solve the optimization problem, where penalties are imposed on the constraints for integer/discrete violations. The use of zero-one variables as a tool for conceptual design optimization is also described with an example. Several case studies have been presented to illustrate the practical use of this algorithm. The results obtained are compared with those obtained by the branch and bound algorithm. Also, a comparison is made between the use of Powell’s method (zeroth order) and the conjugate gradient method (first order) in the solution of these mixed-variable optimization problems.
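The augmented Lagrange multiplier framework named in this abstract combines the classical Lagrangian with a quadratic penalty. Its textbook form for equality constraints h(x) = 0 is sketched below; the paper's additional penalties for integer and discrete violations are an extension not reproduced here:

```latex
% Augmented Lagrangian for: minimize f(x) subject to h(x) = 0
\[
L_{A}(x, \lambda; \rho) = f(x) + \lambda^{T} h(x) + \frac{\rho}{2}\,\|h(x)\|^{2},
\]
% minimized over x for fixed (\lambda, \rho), with the multiplier update
% \lambda \leftarrow \lambda + \rho\, h(x) between outer iterations.
```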