Journal articles on the topic 'Conjugate gradient method, preconditioning, steepest descent method'


Consult the top 29 journal articles for your research on the topic 'Conjugate gradient method, preconditioning, steepest descent method.'


1

Knyazev, Andrew V., and Ilya Lashuk. "Steepest Descent and Conjugate Gradient Methods with Variable Preconditioning." SIAM Journal on Matrix Analysis and Applications 29, no. 4 (January 2008): 1267–80. http://dx.doi.org/10.1137/060675290.

2

Mohammed Salih, Dana Taha, and Bawar Mohammed Faraj. "Comparison Between Steepest Descent Method and Conjugate Gradient Method by Using Matlab." Journal of Studies in Science and Engineering 1, no. 1 (August 26, 2021): 20–31. http://dx.doi.org/10.53898/josse2021113.

Abstract:
The steepest descent method and the conjugate gradient method for minimizing nonlinear functions are studied in this work. Algorithms for both methods are presented and implemented in Matlab. A comparison is then made between the steepest descent method and the conjugate gradient method in terms of time and efficiency. It is shown that the conjugate gradient method needs fewer iterations and is more efficient than the steepest descent method. On the other hand, the steepest descent method converges in less time than the conjugate gradient method.
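The comparison this abstract describes can be reproduced in miniature. The sketch below is plain Python rather than the paper's Matlab, on an arbitrary small SPD quadratic rather than the authors' test functions; it counts the iterations each method needs to minimize f(x) = (1/2)xᵀAx - bᵀx.

```python
# Illustrative sketch: steepest descent vs. conjugate gradient on a small
# SPD quadratic. The matrix A, vector b, and tolerance are arbitrary choices.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

def steepest_descent(A, b, tol=1e-8, max_iter=10_000):
    x = [0.0] * len(b)
    for k in range(1, max_iter + 1):
        r = [bi - ai for bi, ai in zip(b, matvec(A, x))]  # residual = -gradient
        if dot(r, r) ** 0.5 < tol:
            return x, k - 1
        alpha = dot(r, r) / dot(r, matvec(A, r))          # exact line search
        x = [xi + alpha * ri for xi, ri in zip(x, r)]
    return x, max_iter

def conjugate_gradient(A, b, tol=1e-8, max_iter=10_000):
    x = [0.0] * len(b)
    r = list(b)                                           # residual for x = 0
    p = list(r)
    rr = dot(r, r)
    for k in range(1, max_iter + 1):
        if rr ** 0.5 < tol:
            return x, k - 1
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]  # Fletcher-Reeves beta
        rr = rr_new
    return x, max_iter

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]  # symmetric positive definite
b = [1.0, 2.0, 3.0]
_, sd_iters = steepest_descent(A, b)
_, cg_iters = conjugate_gradient(A, b)
```

On this example the conjugate gradient method terminates in at most n = 3 iterations (up to rounding), while steepest descent needs many more, consistent with the iteration-count comparison in the abstract.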
3

Bouwmeester, Henricus, Andrew Dougherty, and Andrew V. Knyazev. "Nonsymmetric Preconditioning for Conjugate Gradient and Steepest Descent Methods." Procedia Computer Science 51 (2015): 276–85. http://dx.doi.org/10.1016/j.procs.2015.05.241.

4

Rahali, Noureddine, Mohammed Belloufi, and Rachid Benzine. "A new conjugate gradient method for acceleration of gradient descent algorithms." Moroccan Journal of Pure and Applied Analysis 7, no. 1 (January 1, 2021): 1–11. http://dx.doi.org/10.2478/mjpaa-2021-0001.

Abstract:
An acceleration of the steepest descent method for solving unconstrained optimization problems is presented, in the form of a fundamentally different conjugate gradient method in which the well-known parameter βk is computed by a new formula. Under common assumptions, using a modified Wolfe line search, the descent property and global convergence results are established for the new method. Experimental results provide evidence that the proposed method is in general superior to the classical steepest descent method and has the potential to significantly enhance the computational efficiency and robustness of the training process.
5

Farhana Husin, Siti, Mustafa Mamat, Mohd Asrul Hery Ibrahim, and Mohd Rivaie. "A modification of steepest descent method for solving large-scaled unconstrained optimization problems." International Journal of Engineering & Technology 7, no. 3.28 (August 17, 2018): 72. http://dx.doi.org/10.14419/ijet.v7i3.28.20969.

Abstract:
In this paper, we develop a new search direction for the Steepest Descent (SD) method by replacing the previous search direction from the Conjugate Gradient (CG) method with the gradient from the previous step, for solving large-scale optimization problems. We also use one of the conjugate coefficients as a coefficient for the matrix. Under some reasonable assumptions, we prove that the proposed method with exact line search satisfies the descent property and possesses global convergence. Further, the numerical results on some unconstrained optimization problems show that the proposed algorithm is promising.
6

Connolly, T. J., K. A. Landman, and L. R. White. "On Gerchberg's method for the Fourier inverse problem." Journal of the Australian Mathematical Society. Series B. Applied Mathematics 37, no. 1 (July 1995): 26–44. http://dx.doi.org/10.1017/s0334270000007554.

Abstract:
If a finite segment of a spectrum is known, the determination of the finite object function in image space (or the full spectrum in frequency space) is a fundamental problem in image analysis. Gerchberg's method, which solves this problem, can be formulated as a fixed point iteration. This and other related algorithms are shown to be equivalent to a steepest descent method applied to the minimization of an appropriate functional for the Fourier Inversion Problem. Optimal steepest descent and conjugate gradient methods are derived. Numerical results from the application of these techniques are presented. The regularization of the problem and control of noise growth in the iteration are also discussed.
7

Causse, Emmanuel, Rune Mittet, and Bjørn Ursin. "Preconditioning of full‐waveform inversion in viscoacoustic media." GEOPHYSICS 64, no. 1 (January 1999): 130–45. http://dx.doi.org/10.1190/1.1444510.

Abstract:
Intrinsic absorption in the earth affects the amplitude and phase spectra of the seismic wavefields and records, and may degrade significantly the results of acoustic full‐waveform inversion. Amplitude distortion affects the strength of the scatterers and decreases the resolution. Phase distortion may result in mislocated interfaces. We show that viscoacoustic gradient‐based inversion algorithms (e.g., steepest descent or conjugate gradients) compensate for the effects of phase distortion, but not for the effects of amplitude distortion. To solve this problem at a reasonable numerical cost, we have designed two new forms of preconditioning derived from an analysis of the inverse Hessian operator. The first type of preconditioning is a frequency‐dependent compensation for dispersion and attenuation, which involves two extra modeling steps with inverse absorption (amplification) at each iteration. The second type only corrects the strength of the recovered scatterers, and requires two extra modeling steps at the first iteration only. The new preconditioning methods have been incorporated into a finite‐difference inversion scheme for viscoacoustic media. Numerical tests on noise‐free synthetic data illustrate and support the theory.
8

Johansson, E. M., F. U. Dowla, and D. M. Goodman. "BACKPROPAGATION LEARNING FOR MULTILAYER FEED-FORWARD NEURAL NETWORKS USING THE CONJUGATE GRADIENT METHOD." International Journal of Neural Systems 02, no. 04 (January 1991): 291–301. http://dx.doi.org/10.1142/s0129065791000261.

Abstract:
In many applications, the number of interconnects or weights in a neural network is so large that the learning time for the conventional backpropagation algorithm can become excessively long. Numerical optimization theory offers a rich and robust set of techniques which can be applied to neural networks to improve learning rates. In particular, the conjugate gradient method is easily adapted to the backpropagation learning problem. This paper describes the conjugate gradient method, its application to the backpropagation learning problem and presents results of numerical tests which compare conventional backpropagation, steepest descent and the conjugate gradient methods. For the parity problem, we find that the conjugate gradient method is an order of magnitude faster than conventional backpropagation with momentum.
9

Liu, Chein-Shan. "Modifications of Steepest Descent Method and Conjugate Gradient Method Against Noise for Ill-posed Linear Systems." Communications in Numerical Analysis 2012 (2012): 1–24. http://dx.doi.org/10.5899/2012/cna-00115.

10

Kim, Jibum. "An Efficient Approach for Solving Mesh Optimization Problems Using Newton’s Method." Mathematical Problems in Engineering 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/273732.

Abstract:
We present an efficient approach for solving various mesh optimization problems. Our approach is based on Newton’s method, which uses both first-order (gradient) and second-order (Hessian) derivatives of the nonlinear objective function. The volume and surface mesh optimization algorithms are developed such that mesh validity and surface constraints are satisfied. We also propose several Hessian modification methods when the Hessian matrix is not positive definite. We demonstrate our approach by comparing our method with nonlinear conjugate gradient and steepest descent methods in terms of both efficiency and mesh quality.
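The Hessian-modification idea the abstract mentions can be sketched generically. This is not the authors' algorithm: it is a standard illustration on a hypothetical 2-D test function, in which a multiple of the identity is added to the Hessian (doubled until a simple 2x2 positive-definiteness test passes) so that the Newton direction is always a descent direction.

```python
# Newton's method with a simple Hessian modification, on a hypothetical
# 2-D test function whose Hessian is indefinite near the origin.
# All constants and the test function are illustrative choices.

def f(x):
    x1, x2 = x
    return x1 ** 4 + x1 * x2 + (1.0 + x2) ** 2

def grad(x):
    x1, x2 = x
    return [4.0 * x1 ** 3 + x2, x1 + 2.0 * (1.0 + x2)]

def hess(x):
    x1, _ = x
    return [[12.0 * x1 ** 2, 1.0], [1.0, 2.0]]

def modified_newton(x0, tol=1e-8, max_iter=200):
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if (g[0] ** 2 + g[1] ** 2) ** 0.5 < tol:
            break
        (h11, h12), (_, h22) = hess(x)
        tau = 0.0
        # Increase tau until H + tau*I is positive definite (2x2 criterion:
        # positive leading entry and positive determinant).
        while h11 + tau <= 0.0 or (h11 + tau) * (h22 + tau) - h12 * h12 <= 0.0:
            tau = 2.0 * tau if tau > 0.0 else 1e-3
        a, c = h11 + tau, h22 + tau
        det = a * c - h12 * h12
        # Solve (H + tau*I) p = -g directly for the 2x2 system.
        p = [(-c * g[0] + h12 * g[1]) / det, (h12 * g[0] - a * g[1]) / det]
        # Backtracking (Armijo) line search along the descent direction p.
        t, fx, slope = 1.0, f(x), g[0] * p[0] + g[1] * p[1]
        while f([x[0] + t * p[0], x[1] + t * p[1]]) > fx + 1e-4 * t * slope:
            t *= 0.5
        x = [x[0] + t * p[0], x[1] + t * p[1]]
    return x

x_star = modified_newton([0.0, 0.0])
```

At the starting point the Hessian is indefinite, so the modification is exercised immediately; once the iterates enter a region where the Hessian is positive definite, tau stays zero and plain Newton steps take over.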
11

Wang, Qiuyu, and Yingtao Che. "Sufficient Descent Polak-Ribière-Polyak Conjugate Gradient Algorithm for Large-Scale Box-Constrained Optimization." Abstract and Applied Analysis 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/236158.

Abstract:
A practical algorithm for solving large-scale box-constrained optimization problems is developed, analyzed, and tested. In the proposed algorithm, an identification strategy is used to estimate the active set at each iteration. The components of the inactive variables are determined by the steepest descent method for a first finite number of steps and by the conjugate gradient method subsequently. Under some appropriate conditions, we show that the algorithm converges globally. Numerical experiments and comparisons using some box-constrained problems from the CUTEr library are reported. The numerical comparisons illustrate that the proposed method is promising and competitive with the well-known L-BFGS-B method.
12

Wu, Yulun, Mengxiang Zhang, and Yan Li. "Modified Three-Term Liu–Storey Conjugate Gradient Method for Solving Unconstrained Optimization Problems and Image Restoration Problems." Mathematical Problems in Engineering 2020 (October 17, 2020): 1–20. http://dx.doi.org/10.1155/2020/7859286.

Abstract:
A new three-term conjugate gradient method is proposed in this article. The new method is able to solve unconstrained optimization problems, image restoration problems, and compressed sensing problems. The method is a convex combination of the steepest descent method and the classical Liu–Storey (LS) method. Without any line search, the new method has the sufficient descent property and the trust region property. Unlike previous methods, the information of the function f(x) is assigned to the direction d_k. Next, we make some reasonable assumptions and establish the global convergence of this method under the modified Armijo line search. The results of subsequent numerical experiments show that the new algorithm is more competitive than other algorithms and has good application prospects.
13

Bhaya, Amit, and Eugenius Kaszkurewicz. "Steepest descent with momentum for quadratic functions is a version of the conjugate gradient method." Neural Networks 17, no. 1 (January 2004): 65–71. http://dx.doi.org/10.1016/s0893-6080(03)00170-9.
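The paper's observation can be checked numerically: on a quadratic f(x) = (1/2)xᵀAx - bᵀx, steepest descent with momentum, with the step size and momentum weight chosen optimally at each iteration (an exact 2-D minimization over the span of the gradient and the previous step), terminates in at most n steps, just as the conjugate gradient method does. A hedged sketch with an arbitrary 3x3 SPD example follows.

```python
# Illustration: "steepest descent with momentum" with per-iteration optimal
# step and momentum weights reproduces conjugate-gradient-like finite
# termination on a quadratic. A and b are arbitrary illustrative choices.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

def momentum_cg(A, b, tol=1e-10, max_iter=100):
    x = [0.0] * len(b)
    x_prev = list(x)
    for k in range(max_iter):
        r = [bi - ai for bi, ai in zip(b, matvec(A, x))]  # r = -grad f(x)
        if dot(r, r) ** 0.5 < tol:
            return x, k
        d1 = r                                            # steepest descent direction
        d2 = [xi - pi for xi, pi in zip(x, x_prev)]       # momentum direction
        Ad1, Ad2 = matvec(A, d1), matvec(A, d2)
        m11, m12, m22 = dot(d1, Ad1), dot(d1, Ad2), dot(d2, Ad2)
        r1, r2 = dot(d1, r), dot(d2, r)
        det = m11 * m22 - m12 * m12
        if abs(det) > 1e-14:
            # Exact minimization of f over x + span{d1, d2}: solve the 2x2
            # normal equations for the step size (alpha) and momentum (beta).
            alpha = (m22 * r1 - m12 * r2) / det
            beta = (m11 * r2 - m12 * r1) / det
        else:
            alpha, beta = r1 / m11, 0.0                   # first step: plain steepest descent
        x_prev = list(x)
        x = [xi + alpha * d1i + beta * d2i for xi, d1i, d2i in zip(x, d1, d2)]
    return x, max_iter

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]  # symmetric positive definite
b = [1.0, 2.0, 3.0]
x_opt, n_steps = momentum_cg(A, b)
```

Because each iterate exactly minimizes f over the gradient and the previous displacement, the iterates coincide (in exact arithmetic) with conjugate gradient iterates, which is the equivalence the paper formalizes.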

14

Siahlooei, Esmaeil, and Seyed Abolfazl Shahzadeh Fazeli. "Two Iterative Methods for Solving Linear Interval Systems." Applied Computational Intelligence and Soft Computing 2018 (October 8, 2018): 1–13. http://dx.doi.org/10.1155/2018/2797038.

Abstract:
Conjugate gradient is an iterative method that solves a linear system Ax=b, where A is a positive definite matrix. We present a new iterative method for solving linear interval systems Ãx̃=b̃, where Ã is a diagonally dominant interval matrix, as defined in this paper. Our method is based on the conjugate gradient algorithm in the context of interval numbers. Numerical experiments show that the new interval modified conjugate gradient method reduces the norm of the difference of Ãx̃ and b̃ at every step until the norm is sufficiently small. In addition, we present another iterative method that solves Ãx̃=b̃, where Ã is a diagonally dominant interval matrix. This method, using the idea of steepest descent, finds the exact solution x̃ for linear interval systems Ãx̃=b̃; we present a proof that this iterative method is convergent. Also, our numerical experiments illustrate the efficiency of the proposed methods.
15

Dalla, Carlos Eduardo Rambalducci, Wellington Betencurte da Silva, Júlio Cesar Sampaio Dutra, and Marcelo José Colaço. "A comparative study of gradient-based and meta-heuristic optimization methods using Griewank benchmark function/ Um estudo comparativo de métodos de otimização baseados em gradientes e meta-heurísticos usando a função de benchmark do Griewank." Brazilian Journal of Development 7, no. 6 (June 7, 2021): 55341–50. http://dx.doi.org/10.34117/bjdv7n6-102.

Abstract:
Optimization methods are frequently applied to solve real-world problems such as engineering design, computer science, and computational chemistry. This paper compares gradient-based algorithms and the meta-heuristic particle swarm optimization method for minimizing the multidimensional benchmark Griewank function, a multimodal function with widespread local minima. Several gradient-based methods, such as steepest descent, conjugate gradient with the Fletcher-Reeves and Polak-Ribiere formulations, and the quasi-Newton Davidon-Fletcher-Powell approach, were compared. The results show that the meta-heuristic method is recommended for functions with this behavior because no prior information about the search space is needed. The performance comparison includes computation time and convergence to global and local optima.
16

Li, Chuang, Jianping Huang, Zhenchun Li, and Rongrong Wang. "Plane-wave least-squares reverse time migration with a preconditioned stochastic conjugate gradient method." GEOPHYSICS 83, no. 1 (January 1, 2018): S33–S46. http://dx.doi.org/10.1190/geo2017-0339.1.

Abstract:
This study derives a preconditioned stochastic conjugate gradient (CG) method that combines stochastic optimization with singular spectrum analysis (SSA) denoising to improve the efficiency and image quality of plane-wave least-squares reverse time migration (PLSRTM). This method reduces the computational costs of PLSRTM by applying a controlled group-sampling method to a sufficiently large number of plane-wave sections and accelerates the convergence using a hybrid of stochastic descent (SD) iteration and CG iteration. However, the group sampling also produces aliasing artifacts in the migration results. We use SSA denoising as a preconditioner to remove the artifacts. Moreover, we implement the preconditioning on the take-off angle-domain common-image gathers (CIGs) for better results. We conduct numerical tests using the Marmousi model and Sigsbee2A salt model and compare the results of this method with those of the SD method and the CG method. The results demonstrate that our method efficiently eliminates the artifacts and produces high-quality images and CIGs.
17

Salihu, Nasiru, Mathew Remilekun Odekunle, Also Mohammed Saleh, and Suraj Salihu. "A Dai-Liao Hybrid Hestenes-Stiefel and Fletcher-Revees Methods for Unconstrained Optimization." International Journal of Industrial Optimization 2, no. 1 (February 24, 2021): 33. http://dx.doi.org/10.12928/ijio.v2i1.3054.

Abstract:
Some problems have no analytical solution or are too difficult for scientists, engineers, and mathematicians to solve, so the development of numerical methods to obtain approximate solutions became necessary. Gradient methods are more efficient when the function to be minimized is continuous in its first derivative. This article therefore presents a new hybrid Conjugate Gradient (CG) method to solve unconstrained optimization problems. The method requires only first-order derivatives, but it overcomes the steepest descent method's shortcoming of slow convergence and need not save or compute the second-order derivatives needed by the Newton method. The CG update parameter is derived from the Dai-Liao conjugacy condition as a convex combination of the Hestenes-Stiefel and Fletcher-Reeves algorithms, employing an optimal modulating choice parameter to avoid matrix storage. The numerical computation adopts an inexact line search to obtain a step-size that generates the descent property, showing that the algorithm is robust and efficient. The scheme converges globally under the Wolfe line search and is likely suitable for compressive sensing problems and M-tensor systems.
18

Martin, Matthieu, Sebastian Krumscheid, and Fabio Nobile. "Complexity Analysis of stochastic gradient methods for PDE-constrained optimal Control Problems with uncertain parameters." ESAIM: Mathematical Modelling and Numerical Analysis 55, no. 4 (July 2021): 1599–633. http://dx.doi.org/10.1051/m2an/2021025.

Abstract:
We consider the numerical approximation of an optimal control problem for an elliptic Partial Differential Equation (PDE) with random coefficients. Specifically, the control function is a deterministic, distributed forcing term that minimizes the expected squared L2 misfit between the state (i.e. solution to the PDE) and a target function, subject to a regularization for well posedness. For the numerical treatment of this risk-averse Optimal Control Problem (OCP) we consider a Finite Element discretization of the underlying PDE, a Monte Carlo sampling method, and gradient-type iterations to obtain the approximate optimal control. We provide full error and complexity analyses of the proposed numerical schemes. In particular we investigate the complexity of a conjugate gradient method applied to the fully discretized OCP (so called Sample Average Approximation), in which the Finite Element discretization and Monte Carlo sample are chosen in advance and kept fixed over the iterations. This is compared with a Stochastic Gradient method on a fixed or varying Finite Element discretization, in which the expectation in the computation of the steepest descent direction is approximated by Monte Carlo estimators, independent across iterations, with small sample sizes. We show in particular that the second strategy results in an improved computational complexity. The theoretical error estimates and complexity results are confirmed by numerical experiments.
19

Акимова, Е. Н., В. Е. Мисилов, А. Ф. Скурыдина, and А. И. Третьяков. "Gradient methods for solving inverse gravimetry and magnetometry problems on the Uran supercomputer." Numerical Methods and Programming (Vychislitel'nye Metody i Programmirovanie), no. 1 (April 2, 2015): 155–64. http://dx.doi.org/10.26089/nummet.v16r116.

Abstract:
A modified linearized steepest descent method with variable weight factors is proposed to solve three-dimensional structural inverse gravimetry and magnetometry problems of finding the interfaces between constant density or magnetization layers in a multilayer medium. A linearized conjugate gradient method and its modified version with weight factors for solving the gravimetry and magnetometry problems in a multilayer medium are constructed. On the basis of the modified gradient-type methods, a number of efficient parallel algorithms are numerically implemented on an Intel multi-core processor and NVIDIA GPUs. The developed parallel iterative algorithms are compared for a model problem in terms of the relative error, the number of iterations, and the execution time.
20

Clegg, S. T., and R. B. Roemer. "A Comparative Evaluation of Unconstrained Optimization Methods Applied to the Thermal Tomography Problem." Journal of Biomechanical Engineering 107, no. 3 (August 1, 1985): 228–33. http://dx.doi.org/10.1115/1.3138547.

Abstract:
In cancer hyperthermia treatments, it is important to be able to predict complete tissue temperature fields from sampled temperatures taken at the limited number of locations allowed by clinical constraints. An initial attempt to do this automatically using unconstrained optimization techniques to minimize the differences between experimental temperatures and temperatures predicted from treatment simulations has been previously reported [1]. This paper reports on a comparative study which applies a range of different optimization techniques (relaxation, steepest descent, conjugate gradient, Gauss, Box-Kanemasu, and Modified Box-Kanemasu) to this problem. The results show that the Gauss method converges more rapidly than the others, and that it converges to the correct solution regardless of the initial guess for the unknown blood perfusion vector. A sensitivity study of the error space is also performed, and the relationships between the error space characteristics and the comparative speeds of the optimization techniques are discussed.
21

Li, Zhishan, and Yaoyao Shi. "Tool Orientation Optimization for Disk Milling Process Based on Torque Balance Method." Symmetry 12, no. 1 (December 27, 2019): 60. http://dx.doi.org/10.3390/sym12010060.

Abstract:
The disk milling strategy has been applied in grooving for decades for its capacity to provide a huge milling force on difficult-to-cut material. However, basic research on the tool orientation of the disk milling cutter for milling free surfaces, especially the free surfaces of a blisk, is still lacking in previous studies. In this study, the minimum residual amount after the disk milling process is used as the optimization target to obtain the optimal tool orientation of the disk cutter. To address this problem, a torque balance method, including a torque balance algorithm and a concentric circle ray point (CCRP) method, is proposed. The torque calculation and torque balance problem are solved by the torque balance algorithm, and the problem of generating random points to produce torque symmetry is solved by the CCRP method. Based on secondary development of the UG NX software, a series of tool orientations of the disk milling cutter are calculated. Finally, the torque balance method is compared with the steepest descent method, the Newton method, and the conjugate gradient method in terms of calculation accuracy, operation speed, and convergence speed. Both the calculation speed and the convergence speed are better than those of the other three algorithms; compared with them, the operation time of the torque balance method is reduced by 0.35 times, 1.5 times, and 2.25 times, respectively. The results prove the feasibility of the torque balance method in solving the tool orientation optimization problem of the disk milling cutter.
22

Habermann, Marijke, David Maxwell, and Martin Truffer. "Reconstruction of basal properties in ice sheets using iterative inverse methods." Journal of Glaciology 58, no. 210 (2012): 795–808. http://dx.doi.org/10.3189/2012jog11j168.

Abstract:
Inverse problems are used to estimate model parameters from observations. Many inverse problems are ill-posed because they lack stability, meaning it is not possible to find solutions that are stable with respect to small changes in input data. Regularization techniques are necessary to stabilize the problem. For nonlinear inverse problems, iterative inverse methods can be used as a regularization method. These methods start with an initial estimate of the model parameters, update the parameters to match the observations in an iterative process that adjusts large-scale spatial features first, and use a stopping criterion to prevent the overfitting of data. This criterion determines the smoothness of the solution and thus the degree of regularization. Here, iterative inverse methods are implemented for the specific problem of reconstructing the basal stickiness of an ice sheet by using the shallow-shelf approximation as a forward model and synthetically derived surface velocities as input data. The incomplete Gauss-Newton (IGN) method is introduced and compared to the commonly used steepest descent and nonlinear conjugate gradient methods. Two different stopping criteria, the discrepancy principle and a recent-improvement threshold, are compared. The IGN method is favored because it is rapidly converging and it incorporates the discrepancy principle, which leads to optimally resolved solutions.
23

Habashy, T. M., A. Abubakar, G. Pan, and A. Belani. "Source-receiver compression scheme for full-waveform seismic inversion." GEOPHYSICS 76, no. 4 (July 2011): R95–R108. http://dx.doi.org/10.1190/1.3590213.

Abstract:
We have developed a source-receiver compression approach for reducing the computational time and memory usage of the acoustic and elastic full-waveform inversions. By detecting and quantifying the extent of redundancy in the data, we assembled a reduced set of simultaneous sources and receivers that are weighted sums of the physical sources and receivers used in the survey. Because the numbers of these simultaneous sources and receivers could be significantly less than those of the physical sources and receivers, the computational time and memory usage of any gradient-type inversion method such as steepest descent, nonlinear conjugate gradient, contrast-source inversion, and quasi-Newton methods could be reduced. The scheme is based on decomposing the data into their principal components using a singular-value decomposition approach, and the data reduction is done through the elimination of the small eigenvalues. Consequently, this would suppress the effect of noise in the data. Moreover, taking advantage of the redundancy in the data, this compression scheme effectively stacks the redundant data, resulting in an increased signal-to-noise ratio. For demonstration of the concept, we produced inversion results for the 2D acoustic Marmousi and BP models for surface measurements and an elastic model for crosswell measurements. We found that this approach has the potential to significantly reduce computational time and memory usage of the Gauss-Newton method by 1–2 orders of magnitude.
24

Faghidian, S. Ali. "A regularized approach to linear regression of fatigue life measurements." International Journal of Structural Integrity 7, no. 1 (February 1, 2016): 95–105. http://dx.doi.org/10.1108/ijsi-12-2014-0071.

Abstract:
Purpose – The linear regression technique is widely used to determine empirical parameters of the fatigue life profile, but the results may not depend continuously on the experimental data. Thus, the Tikhonov-Morozov method is utilized here to regularize the linear regression results and consequently reduce the influence of measurement noise without notably distorting the fatigue life distribution. The paper aims to discuss these issues. Design/methodology/approach – The Tikhonov-Morozov regularization method is shown to effectively reduce the influence of measurement noise without distorting the fatigue life distribution. Moreover, since iterative regularization methods are known to be an attractive alternative to Tikhonov regularization, four gradient iterative methods, called the simple iteration, minimum error, steepest descent, and conjugate gradient methods, are examined with an appropriate initial guess of the regularized coefficients. Findings – It is shown that in the case of sparse fatigue life measurements, the linear regression results may not depend continuously on the experimental data, and measurement error could lead to misinterpretations of the solution. Therefore, from an engineering safety point of view, utilizing a regularization method can successfully reduce the influence of measurement noise without significantly distorting the fatigue life distribution. Originality/value – An excellent initial guess for the mixed iterative-direct algorithm is introduced, and it is shown that the combination of the Newton iterative approach and the Morozov discrepancy principle is an interesting strategy for determining the regularization parameter, with an excellent rate of convergence. Moreover, since iterative methods are known to be an attractive alternative to Tikhonov regularization, four gradient descent methods are examined here for regularization of the linear regression problem. It has been found that all of the gradient descent methods, with an appropriate initial guess of the regularized coefficients, converge to the Tikhonov-Morozov regularization results.
25

Lohvithee, Manasavee, Wenjuan Sun, Stephane Chretien, and Manuchehr Soleimani. "Ant Colony-Based Hyperparameter Optimisation in Total Variation Reconstruction in X-ray Computed Tomography." Sensors 21, no. 2 (January 15, 2021): 591. http://dx.doi.org/10.3390/s21020591.

Abstract:
In this paper, a computer-aided training method for hyperparameter selection in limited-data X-ray computed tomography (XCT) reconstruction was proposed. The proposed method employed the ant colony optimisation (ACO) approach to assist hyperparameter selection for the adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm, which is a total-variation (TV) based regularisation algorithm. During the implementation, a colony of artificial ants swarmed through the AwPCSD algorithm. Each ant chose a set of hyperparameters for its iterative CT reconstruction, and a correlation coefficient (CC) score was assigned to the reconstructed image by comparison with the reference image. Each generation of ants left pheromone along its chosen path, representing a choice of hyperparameters; a higher score meant stronger pheromone and a higher probability of attracting ants in the next generations. At the end of the implementation, the hyperparameter configuration with the highest score was chosen as the optimal set of hyperparameters. In the experimental results section, reconstruction using hyperparameters from the proposed method was compared with results from three other cases: the conjugate gradient least squares (CGLS) algorithm, the AwPCSD algorithm using an arbitrary set of hyperparameters, and the cross-validation method. The experiments showed that the results from the proposed method were superior to those of the CGLS algorithm and of the AwPCSD algorithm with arbitrary hyperparameters. Although the results of the ACO algorithm were slightly inferior to those of the cross-validation method as measured by the quantitative metrics, the ACO algorithm was over 10 times faster than cross-validation. The optimal set of hyperparameters from the proposed method was also robust against an increase of noise in the data and is applicable to different imaging samples in a similar context.
The ACO approach in the proposed method was able to identify optimal values of hyperparameters for a dataset and, as a result, produced a good-quality reconstructed image from a limited number of projections. The proposed method in this work successfully addresses hyperparameter selection, a major challenge in the implementation of TV-based reconstruction algorithms.
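The pheromone-update scheme the abstract describes can be sketched as a toy ant-colony search over a discrete hyperparameter grid. This is a minimal illustration, not the paper's method: the `score_fn` below is a hypothetical stand-in for the paper's correlation-coefficient scoring of a reconstruction, and the grid values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def aco_select(score_fn, grid, n_ants=20, n_gens=15, rho=0.3):
    """Toy ant-colony search over a discrete hyperparameter grid.

    grid: dict mapping hyperparameter name -> list of candidate values.
    Each ant samples one value per hyperparameter with probability
    proportional to the pheromone on that choice; pheromone evaporates
    by a factor rho each generation and is reinforced by the score.
    """
    names = list(grid)
    pher = {n: np.ones(len(grid[n])) for n in names}
    best, best_score = None, -np.inf
    for _ in range(n_gens):
        deposits = {n: np.zeros(len(grid[n])) for n in names}
        for _ in range(n_ants):
            # Sample one index per hyperparameter, pheromone-weighted.
            idx = {n: rng.choice(len(grid[n]), p=pher[n] / pher[n].sum())
                   for n in names}
            params = {n: grid[n][idx[n]] for n in names}
            score = score_fn(params)  # hypothetical quality score
            for n in names:
                deposits[n][idx[n]] += max(score, 0.0)
            if score > best_score:
                best, best_score = params, score
        for n in names:  # evaporate old pheromone, then deposit new
            pher[n] = (1 - rho) * pher[n] + deposits[n]
    return best
```

Because the score only reinforces pheromone marginally per hyperparameter, later generations concentrate their sampling near high-scoring configurations while the best configuration seen so far is tracked explicitly.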
APA, Harvard, Vancouver, ISO, and other styles
26

Gao, Guohua, and Albert C. Reynolds. "An Improved Implementation of the LBFGS Algorithm for Automatic History Matching." SPE Journal 11, no. 01 (March 1, 2006): 5–17. http://dx.doi.org/10.2118/90058-pa.

Full text
Abstract:
Summary For large-scale history matching problems, where it is not feasible to compute individual sensitivity coefficients, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) method is an efficient optimization algorithm (Zhang and Reynolds, 2002; Zhang, 2002). However, computational experiments reveal that the original implementation of LBFGS may encounter the following problems: convergence to a model that gives an unacceptable match of production data; generation of a bad search direction that either leads to false convergence or forces a restart with the steepest descent direction, which radically reduces the convergence rate; and overshooting and undershooting, i.e., convergence to a vector of model parameters containing some abnormally high or low values that are physically unreasonable. Overshooting and undershooting can occur even though all history matching problems are formulated in a Bayesian framework with a prior model providing regularization. We show that the rate of convergence and the robustness of the algorithm can be significantly improved by: a more robust line search algorithm motivated by the theoretical result that the Wolfe conditions should be satisfied; application of a data damping procedure at early iterations; or enforcement of constraints on the model parameters. Computational experiments also indicate that a simple rescaling of model parameters prior to application of the optimization algorithm can improve the convergence properties of the algorithm, although the scaling procedure used cannot be theoretically validated. Introduction Minimization of a smooth objective function is customarily done using a gradient-based optimization algorithm such as the Gauss-Newton (GN) method or the Levenberg-Marquardt (LM) algorithm.
The standard implementations of these algorithms (Tan and Kalogerakis, 1991; Wu et al., 1999; Li et al., 2003), however, require the computation of all sensitivity coefficients in order to formulate the Hessian matrix. We are interested in history matching problems where the number of data to be matched ranges from a few hundred to several thousand and the number of reservoir variables or model parameters to be estimated or simulated ranges from a few hundred to a hundred thousand or more. For the larger problems in this range, the computer resources required to compute all sensitivity coefficients would prohibit the use of the standard Gauss-Newton and Levenberg-Marquardt algorithms. Even for the smallest problems in this range, computation of all sensitivity coefficients may not be feasible, as the resulting GN and LM algorithms may require the equivalent of several hundred simulation runs. The relative computational efficiency of GN, LM, nonlinear conjugate gradient, and quasi-Newton methods has been discussed in some detail by Zhang and Reynolds (2002) and Zhang (2002).
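For context, the core of LBFGS is the two-loop recursion, which applies an implicit inverse-Hessian approximation built from the last few curvature pairs without ever forming a matrix. The sketch below is a minimal textbook version with a simple backtracking (Armijo) line search; the paper's improved implementation uses a more robust Wolfe-condition line search, data damping, and parameter constraints, none of which are reproduced here.

```python
import numpy as np
from collections import deque

def lbfgs(f, grad, x0, m=5, tol=1e-6, max_iter=200):
    """Minimal limited-memory BFGS with a backtracking Armijo line search."""
    x = x0.astype(float)
    g = grad(x)
    pairs = deque(maxlen=m)  # the m most recent (s_k, y_k) curvature pairs
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Two-loop recursion: compute d = -H_k g implicitly.
        q = g.copy()
        alphas = []
        for s, y in reversed(pairs):          # newest to oldest
            a = (s @ q) / (y @ s)
            alphas.append(a)
            q -= a * y
        if pairs:
            s, y = pairs[-1]
            q *= (s @ y) / (y @ y)            # initial scaling H0 = gamma*I
        for (s, y), a in zip(pairs, reversed(alphas)):  # oldest to newest
            b = (y @ q) / (y @ s)
            q += (a - b) * s
        d = -q
        # Backtracking Armijo line search (a Wolfe search is more robust,
        # which is exactly the improvement the paper argues for).
        t, c = 1.0, 1e-4
        while f(x + t * d) > f(x) + c * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:  # keep only pairs with positive curvature s'y > 0
            pairs.append((s, y))
        x, g = x_new, g_new
    return x
```

Discarding pairs with non-positive curvature keeps the implicit Hessian approximation positive definite, so every search direction remains a descent direction and the backtracking loop terminates.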
APA, Harvard, Vancouver, ISO, and other styles
27

Arai, Kohei. "Comparative Study Among Least Square Method, Steepest Descent Method, and Conjugate Gradient Method for Atmospheric Sounder Data Analysis." International Journal of Advanced Research in Artificial Intelligence 2, no. 9 (2013). http://dx.doi.org/10.14569/ijarai.2013.020906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Sudirman, Rubita, Sh Hussain Salleh, and Shaharuddin Salleh. "An Improved Method In Speech Signal Input Representation Based On DTW Technique For NN Speech Recognition System." Jurnal Teknologi, January 20, 2012. http://dx.doi.org/10.11113/jt.v46.291.

Full text
Abstract:
A pre-processing of linear predictive coefficient (LPC) features for preparing reliable reference templates for the set of words to be recognized using an artificial neural network is presented in this paper. The paper also proposes the use of a pitch feature derived from the recorded speech data as another input feature. The Dynamic Time Warping (DTW) algorithm is the backbone of the newly developed algorithm, called the DTW fixing frame (DTW-FF) algorithm, which is designed to perform template matching for input preprocessing. The purpose of the new algorithm is to align the input frames in the test set to the template frames in the reference set. This frame normalization is required because the neural network is designed to compare data of the same length, whereas utterances of the same word usually vary in length. By frame fixing, the input frames are adjusted to the same number of frames as the reference frames. Another task of the study is to extract pitch features using the Harmonic Filter algorithm. After pitch extraction and fixing of the LPC features to the desired number of frames, speech recognition using a neural network can be performed, and the results are very promising: recognition as high as 98% was achieved using the combination of the two features mentioned above. At the end of the paper, a convergence comparison among conjugate gradient descent (CGD), Quasi-Newton, and steepest gradient descent (SGD) search directions is performed, and the results show that CGD outperformed Quasi-Newton and SGD.
Key words: Dynamic time warping, time normalization, neural network, speech recognition, conjugate gradient descent
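The dynamic-programming core of DTW that underlies this frame-alignment idea can be sketched as follows. This is the classic DTW distance on 1-D sequences, not the paper's DTW-FF frame fixing on LPC feature vectors; the same cost table, however, is what a warping-path-based frame alignment would be read from.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping cost between two 1-D sequences.

    D[i, j] holds the minimal cumulative cost of aligning the first i
    elements of a with the first j elements of b; each cell extends the
    cheapest of the three admissible predecessor alignments.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

For example, `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0: the repeated frame is absorbed by the warping path at no cost, which is exactly why DTW tolerates utterances of the same word having different lengths.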
APA, Harvard, Vancouver, ISO, and other styles
29

Yuan, Gonglin, Tingting Li, and Wujie Hu. "A conjugate gradient algorithm and its application in large-scale optimization problems and image restoration." Journal of Inequalities and Applications 2019, no. 1 (September 18, 2019). http://dx.doi.org/10.1186/s13660-019-2192-6.

Full text
Abstract:
To solve large-scale unconstrained optimization problems, a modified PRP conjugate gradient algorithm is proposed; it is of interest because it combines the steepest descent algorithm with the conjugate gradient method and fully exploits their excellent properties. For smooth functions, the algorithm uses information about the gradient function and the previous direction to determine the next search direction. For nonsmooth functions, a Moreau-Yosida regularization is introduced into the proposed algorithm, which simplifies the treatment of complex problems. The proposed algorithm has the following characteristics: (i) a sufficient descent property as well as a trust region trait; (ii) global convergence; (iii) numerical results for large-scale smooth/nonsmooth functions show that the proposed algorithm is outstanding compared with other similar optimization methods; (iv) experiments on image restoration problems demonstrate that the algorithm is successful.
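The hybrid flavour described here, conjugate gradient steps with a built-in fall-back toward steepest descent, can be illustrated with a standard PRP+ nonlinear CG sketch. This is the textbook method, not the paper's modified algorithm: clamping the PRP coefficient at zero restarts the iteration with the pure steepest descent direction.

```python
import numpy as np

def prp_conjugate_gradient(f, grad, x0, tol=1e-6, max_iter=500):
    """Polak-Ribiere-Polyak (PRP+) nonlinear conjugate gradient method.

    Whenever the PRP beta would be negative it is clamped to zero,
    which resets the search direction to steepest descent.
    """
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking (Armijo) line search along d.
        t, c = 1.0, 1e-4
        while f(x + t * d) > f(x) + c * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        # PRP+ coefficient: beta = max(0, g_new'(g_new - g) / g'g).
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

When successive gradients are nearly identical, beta is close to zero and the update behaves like steepest descent; when the iterates make progress, the beta term retains conjugacy information from the previous direction.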
APA, Harvard, Vancouver, ISO, and other styles
