
Journal articles on the topic 'Polak Ribiere conjugate gradient (CGPR)'

Consult the top 17 journal articles for your research on the topic 'Polak Ribiere conjugate gradient (CGPR).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.
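For context, the articles below build on the Polak-Ribière (also written Polak-Ribière-Polyak, PRP) nonlinear conjugate gradient method. A standard textbook statement of the iteration is:

```latex
% Polak-Ribiere (PRP) nonlinear conjugate gradient iteration for minimizing f
\begin{aligned}
x_{k+1} &= x_k + \alpha_k d_k, \qquad \alpha_k \text{ chosen by a line search},\\
d_0 &= -g_0, \qquad d_{k+1} = -g_{k+1} + \beta_k^{PRP} d_k, \qquad g_k = \nabla f(x_k),\\
\beta_k^{PRP} &= \frac{g_{k+1}^{\top}\,(g_{k+1}-g_k)}{\|g_k\|^{2}}.
\end{aligned}
```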

1

Lyn Dee, Goh, Norhisham Bakhary, Azlan Abdul Rahman, and Baderul Hisham Ahmad. "A Comparison of Artificial Neural Network Learning Algorithms for Vibration-Based Damage Detection." Advanced Materials Research 163-167 (December 2010): 2756–60. http://dx.doi.org/10.4028/www.scientific.net/amr.163-167.2756.

Abstract:
This paper investigates the performance of Artificial Neural Network (ANN) learning algorithms for vibration-based damage detection. The capabilities of six different learning algorithms in detecting damage are studied and their performances are compared. The algorithms are the Levenberg-Marquardt (LM), Resilient Backpropagation (RP), Scaled Conjugate Gradient (SCG), Conjugate Gradient with Powell-Beale Restarts (CGB), Polak-Ribiere Conjugate Gradient (CGP) and Fletcher-Reeves Conjugate Gradient (CGF) algorithms. The performances of these algorithms are assessed based on their generalisation capability in relating the vibration parameters (frequencies and mode shapes) with damage locations and severities under various numbers of input and output variables. The results show that the Levenberg-Marquardt algorithm provides the best generalisation performance.
2

Lim, D. K. H., and P. K. Kolay. "Predicting Hydraulic Conductivity (k) of Tropical Soils by using Artificial Neural Network (ANN)." Journal of Civil Engineering, Science and Technology 1, no. 1 (August 1, 2009): 1–6. http://dx.doi.org/10.33736/jcest.63.2009.

Abstract:
Hydraulic conductivity of tropical soils is very complex. Several hydraulic conductivity prediction methods have focused on laboratory and field tests, such as the Constant Head Test, Falling Head Test, Ring Infiltrometer, Instantaneous Profile Method and Test Basins. In the present study, an Artificial Neural Network (ANN) has been used as a tool for predicting the hydraulic conductivity (k) of some tropical soils. ANN is potentially useful in situations where the underlying physical process relationships are not fully understood, and it is well suited to modeling dynamic systems on a real-time basis. The hydraulic conductivity of tropical soil can be predicted by using ANN if the physical properties of the soil, e.g., moisture content, specific gravity, void ratio, etc., are known. This study demonstrates the comparison between the conventional estimation of k by using Shepard's equation for approximating k and the k predicted by ANN. A programme was written in MATLAB 6.5.1, and eight different training algorithms, namely Resilient Backpropagation (rp), the Levenberg-Marquardt algorithm (lm), the Conjugate Gradient Polak-Ribiere algorithm (cgp), Scaled Conjugate Gradient (scg), BFGS Quasi-Newton (bfg), Conjugate Gradient with Powell/Beale Restarts (cgb), Fletcher-Powell Conjugate Gradient (cgf), and One-step Secant (oss), were compared to produce the best prediction of k. The result shows that the network trained with Resilient Backpropagation (rp) consistently produces the most accurate results, with a value of R = 0.8493 and E2 = 0.7209.
3

Kaelo, Pro, Sindhu Narayanan, and M. V. Thuto. "A modified quadratic hybridization of Polak-Ribiere-Polyak and Fletcher-Reeves conjugate gradient method for unconstrained optimization problems." An International Journal of Optimization and Control: Theories & Applications (IJOCTA) 7, no. 2 (July 15, 2017): 177–85. http://dx.doi.org/10.11121/ijocta.01.2017.00339.

Abstract:
This article presents a modified quadratic hybridization of the Polak–Ribiere–Polyak and Fletcher–Reeves conjugate gradient method for solving unconstrained optimization problems. Global convergence, with the strong Wolfe line search conditions, of the proposed quadratic hybrid conjugate gradient method is established. We also report some numerical results to show the competitiveness of the new hybrid method.
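As background for this entry, hybrid PRP-FR methods combine the two classical coefficients; a common convex-combination form (not necessarily the exact quadratic hybridization studied in the article) is:

```latex
% Classical FR and PRP coefficients and a generic convex-combination hybrid
\beta_k^{FR} = \frac{\|g_{k+1}\|^{2}}{\|g_k\|^{2}}, \qquad
\beta_k^{PRP} = \frac{g_{k+1}^{\top}(g_{k+1}-g_k)}{\|g_k\|^{2}}, \qquad
\beta_k^{hyb} = (1-\theta_k)\,\beta_k^{FR} + \theta_k\,\beta_k^{PRP}, \quad \theta_k \in [0,1].
```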
4

Hu, Guofang, and Biao Qu. "Convergence properties of a correlative Polak-Ribiere conjugate gradient method." Journal of Applied Mathematics and Computing 22, no. 1-2 (September 2006): 461–66. http://dx.doi.org/10.1007/bf02896494.

5

Wang, Nengjian, Qinhui Liu, Chunping Ren, and Chunsheng Liu. "A Novel Method of Dynamic Force Identification and Its Application." Mathematical Problems in Engineering 2019 (December 14, 2019): 1–10. http://dx.doi.org/10.1155/2019/1534560.

Abstract:
In this paper, an efficient mixed spectral conjugate gradient (EMSCG, for short) method is presented for solving unconstrained optimization problems. In this work, we construct a novel formula for the conjugate gradient parameter which takes into account the advantages of the Fletcher–Reeves (FR), Polak–Ribiere–Polyak (PRP), and variant Polak–Ribiere–Polyak (VPRP) methods, prove its stability and convergence, and apply it to the dynamic force identification of a practical engineering structure. The analysis results show that the present method has higher efficiency, stronger and more robust convergence quality, and fewer iterations. In addition, the proposed method can provide a more efficient and numerically stable approximation of the actual force, compared with the FR method, PRP method, and VPRP method. Therefore, we can draw a clear conclusion that the proposed method in this paper provides an effective optimization solution. Meanwhile, there is reason to believe that the proposed method can offer a reference for future research.
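For readers unfamiliar with the PRP building block that this work modifies, the following is a minimal Python sketch of a plain PRP conjugate gradient loop with the common PRP+ restart; it is illustrative only and does not reproduce the paper's EMSCG formula.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def prp_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Plain Polak-Ribiere-Polyak conjugate gradient (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]   # Wolfe line search along d
        if alpha is None:                       # fall back to a small fixed step
            alpha = 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)    # PRP coefficient
        beta = max(beta, 0.0)                   # PRP+ restart when beta < 0
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock test function
print(prp_cg(rosen, rosen_der, np.array([-1.2, 1.0])))
```

The max(beta, 0) restart is the widely used PRP+ safeguard; variants such as VPRP and the paper's EMSCG alter how the coefficient is formed.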
6

Tinambunan, Medi Herman, Erna Budhiarti Nababan, and Benny Benyamin Nasution. "Conjugate Gradient Polak Ribiere In Improving Performance in Predicting Population Backpropagation." IOP Conference Series: Materials Science and Engineering 835 (May 23, 2020): 012055. http://dx.doi.org/10.1088/1757-899x/835/1/012055.

7

Xu, X. P., R. T. Burton, and C. M. Sargent. "Experimental Identification of a Flow Orifice Using a Neural Network and the Conjugate Gradient Method." Journal of Dynamic Systems, Measurement, and Control 118, no. 2 (June 1, 1996): 272–77. http://dx.doi.org/10.1115/1.2802314.

Abstract:
An experimental approach using a neural network model to identify a nonlinear non-pressure-compensated flow valve is described in this paper. The conjugate gradient method with the Polak-Ribiere formula is applied to train the neural network to approximate the nonlinear relationships represented by noisy data. The ability of the trained neural network to reproduce and to generalize is demonstrated by its excellent approximation of the experimental data. The training algorithm derived from the conjugate gradient method is shown to lead to a stable solution.
8

Dalla, Carlos Eduardo Rambalducci, Wellington Betencurte da Silva, Júlio Cesar Sampaio Dutra, and Marcelo José Colaço. "A comparative study of gradient-based and meta-heuristic optimization methods using Griewank benchmark function/ Um estudo comparativo de métodos de otimização baseados em gradientes e meta-heurísticos usando a função de benchmark do Griewank." Brazilian Journal of Development 7, no. 6 (June 7, 2021): 55341–50. http://dx.doi.org/10.34117/bjdv7n6-102.

Abstract:
Optimization methods are frequently applied to solve real-world problems in areas such as engineering design, computer science, and computational chemistry. This paper aims to compare gradient-based algorithms and the meta-heuristic particle swarm optimization in minimizing the multidimensional benchmark Griewank function, a multimodal function with widespread local minima. Several gradient-based methods, such as steepest descent, conjugate gradient with the Fletcher-Reeves and Polak-Ribiere formulations, and the quasi-Newton Davidon-Fletcher-Powell approach, were compared. The results presented showed that the meta-heuristic method is recommended for functions with this behavior because no prior information about the search space is needed. The performance comparison includes computation time and convergence to global and local optima.
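As a small illustration of the kind of comparison described above (not the authors' code), the Griewank function can be minimized with SciPy, whose 'CG' method implements a Polak-Ribière-type nonlinear conjugate gradient:

```python
import numpy as np
from scipy.optimize import minimize

def griewank(x):
    """Griewank benchmark: global minimum f(0) = 0, with many shallow local minima."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

x0 = np.full(10, 50.0)           # 10-dimensional start away from the origin
for method in ("CG", "BFGS"):    # SciPy's 'CG' is a Polak-Ribiere nonlinear CG
    res = minimize(griewank, x0, method=method)
    print(f"{method:5s} f* = {res.fun:.6f}  nit = {res.nit}")
```

From such a start, both gradient-based runs may stop in one of the many local minima, which is the behaviour that motivates the meta-heuristic comparison in the article.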
9

Alhawarat, Ahmad, Thoi Trung Nguyen, Ramadan Sabra, and Zabidin Salleh. "An Efficient Modified AZPRP Conjugate Gradient Method for Large-Scale Unconstrained Optimization Problem." Journal of Mathematics 2021 (April 26, 2021): 1–9. http://dx.doi.org/10.1155/2021/6692024.

Abstract:
To find a solution of unconstrained optimization problems, we normally use a conjugate gradient (CG) method, since it does not require memory or storage of second derivatives like Newton's method or the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method. Recently, a new modification of the Polak and Ribiere method was proposed with a new restart condition to give the so-called AZPRP method. In this paper, we propose a new modification of the AZPRP CG method to solve large-scale unconstrained optimization problems based on a modification of the restart condition. The new parameter satisfies the descent property and the global convergence analysis with the strong Wolfe-Powell line search. The numerical results show that the new CG method is strongly competitive with the CG_Descent method. The comparisons are made on a set of more than 140 standard functions from the CUTEst library, and they include the number of iterations and CPU time.
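For reference, the strong Wolfe-Powell line search conditions mentioned in this abstract require the step length α_k along the direction d_k to satisfy, with constants 0 < δ < σ < 1:

```latex
% Strong Wolfe-Powell conditions on the step length alpha_k
f(x_k + \alpha_k d_k) \le f(x_k) + \delta\,\alpha_k\, g_k^{\top} d_k,
\qquad
\left| g(x_k + \alpha_k d_k)^{\top} d_k \right| \le \sigma \left| g_k^{\top} d_k \right|.
```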
10

Khalid Awang, Mohd, Mohammad Ridwan Ismail, Mokhairi Makhtar, M. Nordin A Rahman, and Abd Rasid Mamat. "Performance Comparison of Neural Network Training Algorithms for Modeling Customer Churn Prediction." International Journal of Engineering & Technology 7, no. 2.15 (April 6, 2018): 35. http://dx.doi.org/10.14419/ijet.v7i2.15.11196.

Abstract:
Predicting customer churn has become the priority of every telecommunication service provider as the market is becoming more saturated and competitive. This paper presents a comparison of neural network learning algorithms for customer churn prediction. The data set used to train and test the neural network algorithms was provided by one of the leading telecommunication companies in Malaysia. The Multilayer Perceptron (MLP) networks are trained using nine (9) types of learning algorithms: Levenberg-Marquardt backpropagation (trainlm), BFGS Quasi-Newton backpropagation (trainbfg), Conjugate Gradient backpropagation with Fletcher-Reeves Updates (traincgf), Conjugate Gradient backpropagation with Polak-Ribiere Updates (traincgp), Conjugate Gradient backpropagation with Powell-Beale Restarts (traincgb), Scaled Conjugate Gradient backpropagation (trainscg), One Step Secant backpropagation (trainoss), Bayesian Regularization backpropagation (trainbr), and Resilient backpropagation (trainrp). The performance of the neural network is measured based on the prediction accuracy of the learning and testing phases. The LM learning algorithm is found to be the optimum choice for a neural network model consisting of fourteen input units, one hidden node and one output node. The best result of the experiment indicated that this model is able to produce a prediction accuracy of 94.82%.
11

Hajar, Nurul, Mustafa Mamat, Mohd Rivaie, and Zabidin Salleh. "A combination of Polak-Ribiere and Hestenes-Steifel coefficient in conjugate gradient method for unconstrained optimization." Applied Mathematical Sciences 9 (2015): 3131–42. http://dx.doi.org/10.12988/ams.2015.53242.

12

Hashim, Mimi Nurzilah, Muhammad Khusairi Osman, Mohammad Nizam Ibrahim, Ahmad Farid Abidin, and Ahmad Asri Abd Samat. "A Comparison Study of Learning Algorithms for Estimating Fault Location." Indonesian Journal of Electrical Engineering and Computer Science 6, no. 2 (May 1, 2017): 464. http://dx.doi.org/10.11591/ijeecs.v6.i2.pp464-472.

Abstract:
Fault location is one of the important schemes in power system protection, used to locate the exact location of a disturbance. Nowadays, artificial neural networks (ANNs) are being used significantly to identify the exact fault location on transmission lines. Selection of a suitable training algorithm is important in the analysis of ANN performance. This paper presents a comparative study of various ANN training algorithms used to perform a fault location scheme in transmission lines. The feature fed into the ANN is the time of the first peak change in the discrete wavelet transform (DWT) signal of the faulted current, used as a travelling-wave fault location technique. Six commonly used backpropagation training algorithms were selected, including Levenberg-Marquardt, Bayesian Regulation, Conjugate gradient backpropagation with Powell-Beale restarts, BFGS quasi-Newton, Conjugate gradient backpropagation with Polak-Ribiere updates and Conjugate gradient backpropagation with Fletcher-Reeves updates. The proposed fault location method is tested with varying fault locations, fault types, fault resistances and inception angles. The performance of each training algorithm is evaluated by goodness-of-fit (R²), mean square error (MSE) and percentage prediction error (PPE). Simulation results show that the best training algorithm for estimating fault location is Bayesian Regulation (R² = 1.0, MSE = 0.034557 and PPE = 0.014%).
13

Herawati, Sri, and M. Latif. "Analisis Kinerja Gabungan Metode Ensemble Empirical Mode Decomposition Dan Generalized Regression Neural Network." JURNAL INFOTEL - Informatika Telekomunikasi Elektronika 8, no. 2 (November 14, 2016): 132. http://dx.doi.org/10.20895/infotel.v8i2.124.

Abstract:
Time series methods are suitable when the data patterns are examined systematically and there are many variables, as in the case of crude oil prices. One study that utilizes time series methods is the integration of Ensemble Empirical Mode Decomposition (EEMD) and a neural network algorithm based on the Polak-Ribiere Conjugate Gradient (PCG). However, PCG requires setting free parameters in the learning process, while appropriate parameters are needed to obtain accurate forecasting results. This research proposes the integration of EEMD and the Generalized Regression Neural Network (GRNN). GRNN has advantages such as not requiring any parameter settings and a quick learning process. For the evaluation, the performance of the EEMD-GRNN method was compared with GRNN. The experimental results showed that the EEMD-GRNN method produces better forecasts than GRNN. Keywords: crude oil price forecasting; EEMD; GRNN.
14

Zhu, Hongfei, Jorge Leandro, and Qing Lin. "Optimization of Artificial Neural Network (ANN) for Maximum Flood Inundation Forecasts." Water 13, no. 16 (August 18, 2021): 2252. http://dx.doi.org/10.3390/w13162252.

Abstract:
Flooding is the world’s most catastrophic natural event in terms of losses. The ability to forecast flood events is crucial for controlling the risk of flooding to society and the environment. Artificial neural networks (ANN) have been adopted in recent studies to provide fast flood inundation forecasts. In this paper, an existing ANN trained on synthetic events was optimized in two directions: extending the training dataset with the use of a hybrid dataset, and selecting the best training function from six possible functions, namely conjugate gradient backpropagation with Fletcher–Reeves updates (CGF), with Polak–Ribière updates (CGP), and with Powell–Beale restarts (CGB), one-step secant backpropagation (OSS), resilient backpropagation (RP), and scaled conjugate gradient backpropagation (SCG). Four real flood events were used to validate the performance of the improved ANN over the existing one. The new training dataset reduced the model’s root mean square error (RMSE) by 10% for the testing dataset and 16% for the real events. The selection of the resilient backpropagation algorithm contributed to a 15% lower RMSE for the testing dataset and up to 35% for the real events when compared with the other five training functions.
15

Lu, Tiao, Wei Cai, Jianguo Xin, and Yinglong Guo. "Linear Scaling Discontinuous Galerkin Density Matrix Minimization Method with Local Orbital Enriched Finite Element Basis: 1-D Lattice Model System." Communications in Computational Physics 14, no. 2 (August 2013): 276–300. http://dx.doi.org/10.4208/cicp.290212.240812a.

Abstract:
In the first of a series of papers, we will study a discontinuous Galerkin (DG) framework for many electron quantum systems. The salient feature of this framework is the flexibility of using hybrid physics-based local orbitals and accuracy-guaranteed piecewise polynomial basis in representing the Hamiltonian of the many body system. Such a flexibility is made possible by using the discontinuous Galerkin method to approximate the Hamiltonian matrix elements with proper constructions of numerical DG fluxes at the finite element interfaces. In this paper, we will apply the DG method to the density matrix minimization formulation, a popular approach in the density functional theory of many body Schrödinger equations. The density matrix minimization is to find the minima of the total energy, expressed as a functional of the density matrix ρ(r,r′), approximated by the proposed enriched basis, together with two constraints of idempotency and electric neutrality. The idempotency will be handled with the McWeeny’s purification while the neutrality is enforced by imposing the number of electrons with a penalty method. A conjugate gradient method (a Polak-Ribiere variant) is used to solve the minimization problem. Finally, the linear-scaling algorithm and the advantage of using the local orbital enriched finite element basis in the DG approximations are verified by studying examples of one dimensional lattice model systems.
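The McWeeny purification mentioned in this abstract is a simple fixed-point map that drives an approximate density matrix toward idempotency (ρ² = ρ):

```latex
% McWeeny purification step applied to the density matrix rho
\rho \;\leftarrow\; 3\rho^{2} - 2\rho^{3}.
```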
16

Wahyuni, Komang Tri, I. Made Oka Widyantara, and NMAE Dewi Wirastuti. "Deteksi Tipe Modulasi Digital Pada Automatic Modulation Recognition Menggunakan Support Vector Machine dan Conjugate Gradient Polak Ribiere-Backpropagation." Majalah Ilmiah Teknologi Elektro 18, no. 2 (August 22, 2019). http://dx.doi.org/10.24843/mite.2019.v18i02.p18.

Abstract:
This study uses randomly generated digital data for feature selection of modulation types. The modulation types used are QPSK, 16QAM and 64QAM. Feature extraction uses a statistical feature set approach with the mean, variance, kurtosis and skewness, while feature selection uses a multi-class Support Vector Machine (SVM) with five classification classes: (i) no feature, (ii) mean, (iii) variance, (iv) kurtosis and (v) skewness. Modulation type detection uses a backpropagation artificial neural network trained with the Conjugate Gradient Polak Ribiere algorithm. Comparing training results on 401 training samples, Conjugate Gradient Polak Ribiere learning performs much better than gradient descent learning, with an accuracy of 86.20% and an error rate of 13.80%, whereas gradient descent learning at the same number of iterations (781) achieves an accuracy of 67.83% and an error rate of 32.17%. From these tests, four feature groups are able to recognize the modulation types: (i) mean, variance, kurtosis; (ii) mean, variance, skewness; (iii) variance, kurtosis, skewness; and (iv) mean, kurtosis, skewness.
17

Jayaseelan, Revathy, Gajalskshmi Pandulu, and Ashwini G. "Neural Networks for the Prediction of Fresh Properties and Compressive Strength of Flowable Concrete." Journal of Urban and Environmental Engineering, October 5, 2019, 183–97. http://dx.doi.org/10.4090/juee.2019.v13n1.183197.

Abstract:
This paper presents the prediction of fresh concrete properties and compressive strength of flowable concrete through a neural network approach. A comprehensive data set was generated from experiments performed in the laboratory under standard conditions. The flowable concrete was made with two different types of micro particles and with a single type of nano particle. The input parameters chosen for the neural network model were cement, fine aggregate, coarse aggregate, superplasticizer, water-cement ratio, micro aluminium oxide particles, micro titanium oxide particles, and nano silica. The output parameters include the slump flow, L-box flow, V-funnel flow and compressive strength of the flowable concrete. To develop a suitable neural network model, several training algorithms were used, such as BFGS Quasi-Newton backpropagation, Fletcher-Powell conjugate gradient backpropagation, Polak-Ribiere conjugate gradient backpropagation, gradient descent with adaptive learning rate backpropagation and Levenberg-Marquardt backpropagation. It was found that the BFGS Quasi-Newton backpropagation and Levenberg-Marquardt backpropagation algorithms provide more than 90% prediction accuracy. Hence, the model performance was acceptable for predicting the fresh properties and compressive strength of flowable concrete.