Academic literature on the topic 'Polak Ribiere conjugate gradient (CGPR)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Polak Ribiere conjugate gradient (CGPR).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Polak Ribiere conjugate gradient (CGPR)"

1

Goh, Lyn Dee, Norhisham Bakhary, Azlan Abdul Rahman, and Baderul Hisham Ahmad. "A Comparison of Artificial Neural Network Learning Algorithms for Vibration-Based Damage Detection." Advanced Materials Research 163-167 (December 2010): 2756–60. http://dx.doi.org/10.4028/www.scientific.net/amr.163-167.2756.

Abstract:
This paper investigates the performance of Artificial Neural Network (ANN) learning algorithms for vibration-based damage detection. The capabilities of six different learning algorithms in detecting damage are studied and their performances are compared. The algorithms are the Levenberg-Marquardt (LM), Resilient Backpropagation (RP), Scaled Conjugate Gradient (SCG), Conjugate Gradient with Powell-Beale Restarts (CGB), Polak-Ribiere Conjugate Gradient (CGP) and Fletcher-Reeves Conjugate Gradient (CGF) algorithms. The performances of these algorithms are assessed based on their generalisation capability in relating the vibration parameters (frequencies and mode shapes) with damage locations and severities under various numbers of input and output variables. The results show that the Levenberg-Marquardt algorithm provides the best generalisation performance.
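As background for the entries in this list: the Polak-Ribiere (PRP) update that gives the CGP/CGPR algorithms their name takes the standard textbook form (notation ours, not drawn from any single entry):

    \beta_k^{\mathrm{PRP}} \;=\; \frac{g_{k+1}^{\top}(g_{k+1} - g_k)}{\lVert g_k \rVert^{2}},
    \qquad
    d_{k+1} \;=\; -\,g_{k+1} \;+\; \beta_k^{\mathrm{PRP}}\, d_k,

where g_k = \nabla f(x_k) is the gradient at the current iterate and the step is x_{k+1} = x_k + \alpha_k d_k, with \alpha_k chosen by a line search.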
2

Lim, D. K. H., and P. K. Kolay. "Predicting Hydraulic Conductivity (k) of Tropical Soils by using Artificial Neural Network (ANN)." Journal of Civil Engineering, Science and Technology 1, no. 1 (August 1, 2009): 1–6. http://dx.doi.org/10.33736/jcest.63.2009.

Abstract:
Hydraulic conductivity of tropical soils is very complex. Several hydraulic conductivity prediction methods have focused on laboratory and field tests, such as the Constant Head Test, Falling Head Test, Ring Infiltrometer, Instantaneous Profile Method and Test Basins. In the present study, an Artificial Neural Network (ANN) has been used as a tool for predicting the hydraulic conductivity (k) of some tropical soils. ANN is potentially useful in situations where the underlying physical process relationships are not fully understood and is well suited to modeling dynamic systems on a real-time basis. The hydraulic conductivity of tropical soil can be predicted by using an ANN if the physical properties of the soil, e.g., moisture content, specific gravity, void ratio, etc., are known. This study demonstrates the comparison between the conventional estimation of k using Shepard's approximating equation and the k predicted by the ANN. A programme was written using MATLAB 6.5.1, and eight different training algorithms, namely Resilient Backpropagation (rp), the Levenberg-Marquardt algorithm (lm), the Conjugate Gradient Polak-Ribiere algorithm (cgp), Scaled Conjugate Gradient (scg), BFGS Quasi-Newton (bfg), Conjugate Gradient with Powell/Beale Restarts (cgb), Fletcher-Powell Conjugate Gradient (cgf), and One-step Secant (oss), were compared to produce the best prediction of k. The result shows that the network trained with Resilient Backpropagation (rp) consistently produces the most accurate results, with values of R = 0.8493 and E2 = 0.7209.
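The cgp routine compared above is MATLAB's conjugate gradient backpropagation with Polak-Ribiere updates. For readers who want to experiment outside MATLAB, SciPy's general-purpose minimizer offers a Polak-Ribiere-type nonlinear conjugate gradient under method='CG'. The following minimal Python sketch uses the Rosenbrock test function, which is our illustrative choice and has nothing to do with the soil data of this study:

    import numpy as np
    from scipy.optimize import minimize

    # Rosenbrock function: a standard unconstrained test problem (illustrative choice).
    def f(x):
        return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

    def grad_f(x):
        return np.array([
            -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
            200.0 * (x[1] - x[0] ** 2),
        ])

    x0 = np.array([-1.2, 1.0])
    # method='CG' is SciPy's nonlinear conjugate gradient (a Polak-Ribiere variant).
    res = minimize(f, x0, jac=grad_f, method='CG')
    print(res.x, res.nit)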
3

Kaelo, P., Sindhu Narayanan, and M. V. Thuto. "A modified quadratic hybridization of Polak-Ribiere-Polyak and Fletcher-Reeves conjugate gradient method for unconstrained optimization problems." An International Journal of Optimization and Control: Theories & Applications (IJOCTA) 7, no. 2 (July 15, 2017): 177–85. http://dx.doi.org/10.11121/ijocta.01.2017.00339.

Abstract:
This article presents a modified quadratic hybridization of the Polak–Ribiere–Polyak and Fletcher–Reeves conjugate gradient method for solving unconstrained optimization problems. Global convergence of the proposed quadratic hybrid conjugate gradient method is established under the strong Wolfe line search conditions. We also report some numerical results to show the competitiveness of the new hybrid method.
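To make the hybridization idea concrete: with \beta_k^{\mathrm{PRP}} as given earlier and the Fletcher-Reeves parameter

    \beta_k^{\mathrm{FR}} \;=\; \frac{\lVert g_{k+1} \rVert^{2}}{\lVert g_k \rVert^{2}},

a hybrid method chooses each \beta_k from some combination of the two, the simplest being the convex form \beta_k = (1 - \theta_k)\,\beta_k^{\mathrm{FR}} + \theta_k\,\beta_k^{\mathrm{PRP}} with \theta_k \in [0, 1]. The quadratic hybridization of this paper determines the mixing from a quadratic relation instead; the convex form above is only a generic illustration, not the authors' exact scheme.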
4

Hu, Guofang, and Biao Qu. "Convergence properties of a correlative Polak-Ribiere conjugate gradient method." Journal of Applied Mathematics and Computing 22, no. 1-2 (September 2006): 461–66. http://dx.doi.org/10.1007/bf02896494.

5

Wang, Nengjian, Qinhui Liu, Chunping Ren, and Chunsheng Liu. "A Novel Method of Dynamic Force Identification and Its Application." Mathematical Problems in Engineering 2019 (December 14, 2019): 1–10. http://dx.doi.org/10.1155/2019/1534560.

Abstract:
In this paper, an efficient mixed spectral conjugate gradient (EMSCG, for short) method is presented for solving unconstrained optimization problems. We construct a novel formula using a conjugate gradient parameter that combines the advantages of the Fletcher–Reeves (FR), Polak–Ribiere–Polyak (PRP), and variant Polak–Ribiere–Polyak (VPRP) methods, prove its stability and convergence, and apply it to the dynamic force identification of a practical engineering structure. The analysis results show that the present method has higher efficiency, more robust convergence, and fewer iterations. In addition, the proposed method provides a more efficient and numerically stable approximation of the actual force than the FR, PRP, and VPRP methods. Therefore, the proposed method offers an effective optimization solution, and there is reason to believe that it can serve as a reference for future research.
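For readers who prefer code to formulas, the FR/PRP/VPRP family discussed in this abstract can be illustrated by a minimal, self-contained Polak-Ribiere-Polyak iteration in Python, using the common \beta \leftarrow \max(\beta, 0) ('PRP+') safeguard and a simple backtracking line search. This is a generic sketch of the classical method under our own simplifying choices, not the EMSCG scheme of the paper:

    import numpy as np

    def prp_plus_cg(f, grad, x0, tol=1e-6, max_iter=1000):
        """Nonlinear CG with the Polak-Ribiere-Polyak update and PRP+ safeguard."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            if g.dot(d) >= 0:          # not a descent direction: restart
                d = -g
            # Backtracking (Armijo) line search; the convergence theory above
            # uses strong Wolfe conditions, but Armijo keeps the demo short.
            alpha, c, rho = 1.0, 1e-4, 0.5
            while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
                alpha *= rho
            x_new = x + alpha * d
            g_new = grad(x_new)
            beta = g_new.dot(g_new - g) / g.dot(g)  # Polak-Ribiere-Polyak
            beta = max(beta, 0.0)                   # PRP+ safeguard
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x

    # Example: minimize a small convex quadratic (our choice of test problem).
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    xmin = prp_plus_cg(lambda x: 0.5 * x @ A @ x - b @ x,
                       lambda x: A @ x - b,
                       x0=np.zeros(2))
    print(xmin)  # approaches the solution of A x = b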
6

Tinambunan, Medi Herman, Erna Budhiarti Nababan, and Benny Benyamin Nasution. "Conjugate Gradient Polak Ribiere In Improving Performance in Predicting Population Backpropagation." IOP Conference Series: Materials Science and Engineering 835 (May 23, 2020): 012055. http://dx.doi.org/10.1088/1757-899x/835/1/012055.

7

Xu, X. P., R. T. Burton, and C. M. Sargent. "Experimental Identification of a Flow Orifice Using a Neural Network and the Conjugate Gradient Method." Journal of Dynamic Systems, Measurement, and Control 118, no. 2 (June 1, 1996): 272–77. http://dx.doi.org/10.1115/1.2802314.

Abstract:
An experimental approach that uses a neural network model to identify a nonlinear, non-pressure-compensated flow valve is described in this paper. The conjugate gradient method with the Polak-Ribiere formula is applied to train the neural network to approximate the nonlinear relationships represented by noisy data. The ability of the trained neural network to reproduce and to generalize is demonstrated by its excellent approximation of the experimental data. The training algorithm derived from the conjugate gradient method is shown to lead to a stable solution.
8

Dalla, Carlos Eduardo Rambalducci, Wellington Betencurte da Silva, Júlio Cesar Sampaio Dutra, and Marcelo José Colaço. "A comparative study of gradient-based and meta-heuristic optimization methods using Griewank benchmark function/ Um estudo comparativo de métodos de otimização baseados em gradientes e meta-heurísticos usando a função de benchmark do Griewank." Brazilian Journal of Development 7, no. 6 (June 7, 2021): 55341–50. http://dx.doi.org/10.34117/bjdv7n6-102.

Abstract:
Optimization methods are frequently applied to solve real-world problems such as engineering design, computer science, and computational chemistry. This paper compares gradient-based algorithms and the meta-heuristic particle swarm optimization in minimizing the multidimensional Griewank benchmark function, a multimodal function with widespread local minima. Several gradient-based methods, such as steepest descent, conjugate gradient with the Fletcher-Reeves and Polak-Ribiere formulations, and the quasi-Newton Davidon-Fletcher-Powell approach, were compared. The results showed that the meta-heuristic method is recommended for functions with this behavior because no prior information about the search space is needed. The performance comparison includes computation time and convergence to global and local optima.
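For reference, the n-dimensional Griewank benchmark minimized in this study has the standard form (a textbook definition, not quoted from the paper):

    f(\mathbf{x}) \;=\; 1 \;+\; \frac{1}{4000} \sum_{i=1}^{n} x_i^{2} \;-\; \prod_{i=1}^{n} \cos\!\left( \frac{x_i}{\sqrt{i}} \right),

whose global minimum f = 0 at \mathbf{x} = \mathbf{0} is surrounded by many regularly spaced local minima; this is exactly what makes it a demanding test for the gradient-based methods listed above.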
9

Alhawarat, Ahmad, Thoi Trung Nguyen, Ramadan Sabra, and Zabidin Salleh. "An Efficient Modified AZPRP Conjugate Gradient Method for Large-Scale Unconstrained Optimization Problem." Journal of Mathematics 2021 (April 26, 2021): 1–9. http://dx.doi.org/10.1155/2021/6692024.

Abstract:
To find a solution of unconstrained optimization problems, we normally use a conjugate gradient (CG) method, since it does not require the memory or storage of second derivatives that Newton's method or the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method does. Recently, a new modification of the Polak and Ribiere method was proposed with a new restart condition, giving the so-called AZPRP method. In this paper, we propose a new modification of the AZPRP CG method to solve large-scale unconstrained optimization problems, based on a modification of the restart condition. The new parameter satisfies the descent property, and global convergence is established with the strong Wolfe-Powell line search. The numerical results show that the new CG method is strongly competitive with the CG_Descent method. The comparisons are made on a set of more than 140 standard functions from the CUTEst library and include the number of iterations and CPU time.
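The strong Wolfe-Powell line search mentioned here (and in several other entries on this page) refers to the standard pair of step-length conditions; in common notation, with constants 0 < c_1 < c_2 < 1,

    f(x_k + \alpha_k d_k) \;\le\; f(x_k) + c_1\, \alpha_k\, g_k^{\top} d_k,
    \qquad
    \lvert g(x_k + \alpha_k d_k)^{\top} d_k \rvert \;\le\; c_2\, \lvert g_k^{\top} d_k \rvert.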
10

Khalid Awang, Mohd, Mohammad Ridwan Ismail, Mokhairi Makhtar, M. Nordin A Rahman, and Abd Rasid Mamat. "Performance Comparison of Neural Network Training Algorithms for Modeling Customer Churn Prediction." International Journal of Engineering & Technology 7, no. 2.15 (April 6, 2018): 35. http://dx.doi.org/10.14419/ijet.v7i2.15.11196.

Abstract:
Predicting customer churn has become the priority of every telecommunication service provider as the market is becoming more saturated and competitive. This paper presents a comparison of neural network learning algorithms for customer churn prediction. The data set used to train and test the neural network algorithms was provided by one of the leading telecommunication companies in Malaysia. The Multilayer Perceptron (MLP) networks are trained using nine (9) types of learning algorithms: Levenberg-Marquardt backpropagation (trainlm), BFGS Quasi-Newton backpropagation (trainbfg), Conjugate Gradient backpropagation with Fletcher-Reeves updates (traincgf), Conjugate Gradient backpropagation with Polak-Ribiere updates (traincgp), Conjugate Gradient backpropagation with Powell-Beale restarts (traincgb), Scaled Conjugate Gradient backpropagation (trainscg), One Step Secant backpropagation (trainoss), Bayesian Regularization backpropagation (trainbr), and Resilient backpropagation (trainrp). The performance of the neural network is measured by the prediction accuracy of the learning and testing phases. The LM learning algorithm is found to produce the optimum neural network model, consisting of fourteen input units, one hidden node, and one output node. The best result of the experiment indicated that this model achieves a prediction accuracy of 94.82%.

Dissertations / Theses on the topic "Polak Ribiere conjugate gradient (CGPR)"

1

Al-Mudhaf, Ali F. "A feed forward neural network approach for matrix computations." Thesis, Brunel University, 2001. http://bura.brunel.ac.uk/handle/2438/5010.

Abstract:
A new neural network approach for performing matrix computations is presented. The idea of this approach is to construct a feed-forward neural network (FNN) and then train it by matching a desired set of patterns. The solution of the problem is the converged weight of the FNN. Accordingly, unlike conventional FNN research that concentrates on the external properties (mappings) of the networks, this study concentrates on the internal properties (weights) of the network. The present network is linear and its weights are usually strongly constrained; hence, a complicated overlapped network needs to be constructed. It should be noted, however, that the present approach depends highly on the training algorithm of the FNN. Unfortunately, the available training methods, such as the original back-propagation (BP) algorithm, encounter many deficiencies when applied to matrix algebra problems, e.g., slow convergence due to an improper choice of learning rate (LR). Thus, this study focuses on the development of new efficient and accurate FNN training methods. One improvement suggested to alleviate the problem of LR choice is the use of a line search with the steepest descent method, namely bracketing with the golden section method. This provides an optimal LR as training progresses. Another improvement proposed in this study is the use of conjugate gradient (CG) methods to speed up the training process of the neural network. The computational feasibility of these methods is assessed on two matrix problems, namely the LU-decomposition of both band and square ill-conditioned unsymmetric matrices and the inversion of square ill-conditioned unsymmetric matrices. In this study, two performance indexes have been considered, namely learning speed and convergence accuracy. Extensive computer simulations have been carried out using the following training methods: the steepest descent with line search (SDLS) method, the conventional back-propagation (BP) algorithm, and conjugate gradient (CG) methods, specifically the Fletcher Reeves conjugate gradient (CGFR) method and the Polak Ribiere conjugate gradient (CGPR) method. The performance comparisons between these minimization methods have demonstrated that the CG training methods give better convergence accuracy and are by far superior with respect to learning time; they offer speed-ups of between 3 and 4 over SDLS, depending on the severity of the error goal chosen and the size of the problem. Furthermore, when using Powell's restart criteria with the CG methods, the problem of wrong convergence directions usually encountered in pure CG learning methods is alleviated. In general, CG methods with restarts have shown the best performance among all other methods in training the FNN for LU-decomposition and matrix inversion. Consequently, it is concluded that CG methods are good candidates for training the FNN for matrix computations, in particular the Polak-Ribiere conjugate gradient method with Powell's restart criteria.
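The Powell restart criterion cited in this abstract is conventionally stated as follows: restart the search with the steepest-descent direction whenever successive gradients are far from orthogonal, i.e. whenever

    \lvert g_{k+1}^{\top} g_k \rvert \;\ge\; 0.2\, \lVert g_{k+1} \rVert^{2},

with 0.2 being the threshold Powell proposed (implementations sometimes vary the constant).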

Conference papers on the topic "Polak Ribiere conjugate gradient (CGPR)"

1

Pratiwi, Melati Suci, Adiwijaya, and Annisa Aditsania. "Cancer Detection Based on Microarray Data Classification using Genetic Bee Colony (GBC) and Conjugate Gradient Backpropagation with Modified Polak Ribiere (MBP-CGP)." In 2018 International Conference on Computer, Control, Informatics and its Applications (IC3INA). IEEE, 2018. http://dx.doi.org/10.1109/ic3ina.2018.8629538.

2

Novia Wisesty, Untari, and Alvina Noor Kharima. "Deteksi Anomali pada Intrusion Detection System (IDS) Menggunakan Algoritma Backpropagation Termodifikasi Conjugate Gradient Polak Ribiere" [Anomaly Detection in Intrusion Detection Systems (IDS) Using a Backpropagation Algorithm Modified with Polak-Ribiere Conjugate Gradient]. In Indonesia Symposium on Computing. SOCPRES, 2016. http://dx.doi.org/10.21108/indosc.2016.136.

3

Ghasemi-Fard, M., K. Ansari-Asl, L. Albera, A. Kachenoura, and L. Senhadji. "A low cost and reliable Polak-Ribiere conjugate gradient deflation ICA algorithm for real signals." In 2013 21st Iranian Conference on Electrical Engineering (ICEE). IEEE, 2013. http://dx.doi.org/10.1109/iraniancee.2013.6599835.

4

Ghani, Nur Hamizah Abdul, Mustafa Mamat, and Mohd Rivaie. "A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search." In PROCEEDINGS OF THE 24TH NATIONAL SYMPOSIUM ON MATHEMATICAL SCIENCES: Mathematical Sciences Exploration for the Universal Preservation. Author(s), 2017. http://dx.doi.org/10.1063/1.4995892.

