Academic literature on the topic 'Conjugate gradient method, preconditioning, steepest descent method'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Conjugate gradient method, preconditioning, steepest descent method.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Conjugate gradient method, preconditioning, steepest descent method"

1. Knyazev, Andrew V., and Ilya Lashuk. "Steepest Descent and Conjugate Gradient Methods with Variable Preconditioning." SIAM Journal on Matrix Analysis and Applications 29, no. 4 (January 2008): 1267–80. http://dx.doi.org/10.1137/060675290.

2. Mohammed Salih, Dana Taha, and Bawar Mohammed Faraj. "Comparison Between Steepest Descent Method and Conjugate Gradient Method by Using Matlab." Journal of Studies in Science and Engineering 1, no. 1 (August 26, 2021): 20–31. http://dx.doi.org/10.53898/josse2021113.

Abstract:
The Steepest descent method and the Conjugate gradient method for minimizing nonlinear functions have been studied in this work. Algorithms for both methods are presented and implemented in Matlab, and a comparison has been made between them. The results obtained in Matlab are evaluated in terms of time and efficiency. It is shown that the Conjugate gradient method needs fewer iterations and is more efficient than the Steepest descent method. On the other hand, the Steepest descent method converges in less time than the Conjugate gradient method.
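To make the comparison concrete, here is a minimal sketch in Python with NumPy (an illustration only; the paper's experiments are in Matlab, and its test functions are not reproduced here) contrasting the two methods on an ill-conditioned symmetric positive definite quadratic, where the difference in iteration counts is most visible:

```python
import numpy as np

def steepest_descent(A, b, x, tol=1e-8, max_iter=20000):
    """Minimize 0.5*x'Ax - b'x with exact line search along the negative gradient."""
    for k in range(max_iter):
        r = b - A @ x                      # residual = negative gradient
        if np.linalg.norm(r) < tol:
            return x, k
        alpha = (r @ r) / (r @ (A @ r))    # exact step length for quadratics
        x = x + alpha * r
    return x, max_iter

def conjugate_gradient(A, b, x, tol=1e-8, max_iter=20000):
    """Standard conjugate gradient method for the same quadratic model."""
    r = b - A @ x
    p = r.copy()
    for k in range(max_iter):
        if np.linalg.norm(r) < tol:
            return x, k
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, max_iter

# Ill-conditioned test problem: CG needs far fewer iterations.
n = 100
A = np.diag(np.linspace(1.0, 1000.0, n))
b = np.ones(n)
_, it_sd = steepest_descent(A, b, np.zeros(n))
_, it_cg = conjugate_gradient(A, b, np.zeros(n))
print(f"steepest descent: {it_sd} iterations, CG: {it_cg} iterations")
```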
3. Bouwmeester, Henricus, Andrew Dougherty, and Andrew V. Knyazev. "Nonsymmetric Preconditioning for Conjugate Gradient and Steepest Descent Methods." Procedia Computer Science 51 (2015): 276–85. http://dx.doi.org/10.1016/j.procs.2015.05.241.

4. Rahali, Noureddine, Mohammed Belloufi, and Rachid Benzine. "A New Conjugate Gradient Method for Acceleration of Gradient Descent Algorithms." Moroccan Journal of Pure and Applied Analysis 7, no. 1 (January 1, 2021): 1–11. http://dx.doi.org/10.2478/mjpaa-2021-0001.

Abstract:
An acceleration of the steepest descent method for solving unconstrained optimization problems is presented, which proposes a fundamentally different conjugate gradient method in which the well-known parameter βk is computed by a new formula. Under common assumptions, using a modified Wolfe line search, the descent property and global convergence results are established for the new method. Experimental results provide evidence that the proposed method is in general superior to the classical steepest descent method and has the potential to significantly enhance the computational efficiency and robustness of the training process.
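The abstract does not state the paper's new formula for βk, so the sketch below (in Python with NumPy) uses the classical Fletcher-Reeves choice as a stand-in and simple Armijo backtracking in place of the paper's modified Wolfe line search; it shows only the generic nonlinear conjugate gradient skeleton that such methods modify:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=5000):
    """Generic nonlinear CG: d <- -g + beta*d. The Fletcher-Reeves beta below
    is a stand-in for the paper's new formula, and Armijo backtracking
    replaces its modified Wolfe line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k
        if g @ d >= 0:                     # safeguard: restart along -gradient
            d = -g
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5                       # Armijo backtracking
        x_new = x + t * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves ratio
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x, max_iter

# Example: the Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
x_star, iters = nonlinear_cg(f, grad, [-1.2, 1.0])
```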
5. Farhana Husin, Siti, Mustafa Mamat, Mohd Asrul Hery Ibrahim, and Mohd Rivaie. "A Modification of Steepest Descent Method for Solving Large-Scaled Unconstrained Optimization Problems." International Journal of Engineering & Technology 7, no. 3.28 (August 17, 2018): 72. http://dx.doi.org/10.14419/ijet.v7i3.28.20969.

Abstract:
In this paper, we develop a new search direction for the Steepest Descent (SD) method, for solving large-scale unconstrained optimization problems, by replacing the previous search direction from the Conjugate Gradient (CG) method with the gradient from the previous step. We also use one of the conjugate coefficients as a coefficient for the matrix. Under some reasonable assumptions, we prove that the proposed method with exact line search satisfies the descent property and is globally convergent. Further, numerical results on some unconstrained optimization problems show that the proposed algorithm is promising.
6. Connolly, T. J., K. A. Landman, and L. R. White. "On Gerchberg's Method for the Fourier Inverse Problem." Journal of the Australian Mathematical Society. Series B. Applied Mathematics 37, no. 1 (July 1995): 26–44. http://dx.doi.org/10.1017/s0334270000007554.

Abstract:
If a finite segment of a spectrum is known, the determination of the finite object function in image space (or the full spectrum in frequency space) is a fundamental problem in image analysis. Gerchberg's method, which solves this problem, can be formulated as a fixed point iteration. This and other related algorithms are shown to be equivalent to a steepest descent method applied to the minimization of an appropriate functional for the Fourier Inversion Problem. Optimal steepest descent and conjugate gradient methods are derived. Numerical results from the application of these techniques are presented. The regularization of the problem and control of noise growth in the iteration are also discussed.
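For orientation, the basic Gerchberg fixed-point iteration that the authors reformulate can be sketched in a few lines of Python with NumPy; this is the plain alternating-constraint version, not the optimal steepest descent or conjugate gradient variants derived in the paper:

```python
import numpy as np

def gerchberg(F_known, known_mask, support_mask, n_iter=200):
    """Basic Gerchberg iteration (illustrative sketch).
    F_known:      full-length spectrum array, valid where known_mask is True
    known_mask:   True on the measured frequency segment
    support_mask: True where the object function may be nonzero
    """
    F = np.where(known_mask, F_known, 0.0)
    for _ in range(n_iter):
        f = np.fft.ifft(F)
        f = np.where(support_mask, f, 0.0)    # enforce finite support
        F = np.fft.fft(f)
        F = np.where(known_mask, F_known, F)  # reinsert measured segment
    return np.fft.ifft(F)

# Example: recover a compactly supported signal from a low-frequency band
n = 256
x = np.zeros(n); x[100:140] = 1.0
F_true = np.fft.fft(x)
known = np.zeros(n, bool); known[:20] = True; known[-19:] = True
support = np.zeros(n, bool); support[100:140] = True
x_rec = gerchberg(F_true, known, support).real
```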
7. Causse, Emmanuel, Rune Mittet, and Bjørn Ursin. "Preconditioning of Full-Waveform Inversion in Viscoacoustic Media." GEOPHYSICS 64, no. 1 (January 1999): 130–45. http://dx.doi.org/10.1190/1.1444510.

Abstract:
Intrinsic absorption in the earth affects the amplitude and phase spectra of the seismic wavefields and records, and may degrade significantly the results of acoustic full‐waveform inversion. Amplitude distortion affects the strength of the scatterers and decreases the resolution. Phase distortion may result in mislocated interfaces. We show that viscoacoustic gradient‐based inversion algorithms (e.g., steepest descent or conjugate gradients) compensate for the effects of phase distortion, but not for the effects of amplitude distortion. To solve this problem at a reasonable numerical cost, we have designed two new forms of preconditioning derived from an analysis of the inverse Hessian operator. The first type of preconditioning is a frequency‐dependent compensation for dispersion and attenuation, which involves two extra modeling steps with inverse absorption (amplification) at each iteration. The second type only corrects the strength of the recovered scatterers, and requires two extra modeling steps at the first iteration only. The new preconditioning methods have been incorporated into a finite‐difference inversion scheme for viscoacoustic media. Numerical tests on noise‐free synthetic data illustrate and support the theory.
8. Johansson, E. M., F. U. Dowla, and D. M. Goodman. "Backpropagation Learning for Multilayer Feed-Forward Neural Networks Using the Conjugate Gradient Method." International Journal of Neural Systems 2, no. 4 (January 1991): 291–301. http://dx.doi.org/10.1142/s0129065791000261.

Abstract:
In many applications, the number of interconnects or weights in a neural network is so large that the learning time for the conventional backpropagation algorithm can become excessively long. Numerical optimization theory offers a rich and robust set of techniques which can be applied to neural networks to improve learning rates. In particular, the conjugate gradient method is easily adapted to the backpropagation learning problem. This paper describes the conjugate gradient method, its application to the backpropagation learning problem and presents results of numerical tests which compare conventional backpropagation, steepest descent and the conjugate gradient methods. For the parity problem, we find that the conjugate gradient method is an order of magnitude faster than conventional backpropagation with momentum.
9. Liu, Chein-Shan. "Modifications of Steepest Descent Method and Conjugate Gradient Method Against Noise for Ill-posed Linear Systems." Communications in Numerical Analysis 2012 (2012): 1–24. http://dx.doi.org/10.5899/2012/cna-00115.

10. Kim, Jibum. "An Efficient Approach for Solving Mesh Optimization Problems Using Newton's Method." Mathematical Problems in Engineering 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/273732.

Abstract:
We present an efficient approach for solving various mesh optimization problems. Our approach is based on Newton’s method, which uses both first-order (gradient) and second-order (Hessian) derivatives of the nonlinear objective function. The volume and surface mesh optimization algorithms are developed such that mesh validity and surface constraints are satisfied. We also propose several Hessian modification methods when the Hessian matrix is not positive definite. We demonstrate our approach by comparing our method with nonlinear conjugate gradient and steepest descent methods in terms of both efficiency and mesh quality.
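As one concrete instance of the kind of Hessian modification discussed here (not necessarily one of the paper's proposals), a Newton step can be safeguarded by adding a multiple of the identity until a Cholesky factorization succeeds; a minimal Python/NumPy sketch:

```python
import numpy as np

def modified_newton_step(g, H):
    """Newton step p solving (H + tau*I) p = -g, where tau is increased
    until H + tau*I is positive definite (Cholesky succeeds). A classical
    modification; the paper proposes several alternatives."""
    n = H.shape[0]
    tau = 0.0
    while True:
        try:
            L = np.linalg.cholesky(H + tau * np.eye(n))
            break
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, 1e-3)      # grow the shift until PD
    y = np.linalg.solve(L, -g)              # forward substitution
    return np.linalg.solve(L.T, y)          # back substitution
```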

Dissertations / Theses on the topic "Conjugate gradient method, preconditioning, steepest descent method"

1. Solov'ëv, Sergey I. "Preconditioned Iterative Methods for Monotone Nonlinear Eigenvalue Problems." Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200600657.

Abstract:
This paper proposes new iterative methods for the efficient computation of the smallest eigenvalue of the symmetric nonlinear matrix eigenvalue problems of large order with a monotone dependence on the spectral parameter. Monotone nonlinear eigenvalue problems for differential equations have important applications in mechanics and physics. The discretization of these eigenvalue problems leads to ill-conditioned nonlinear eigenvalue problems with very large sparse matrices monotone depending on the spectral parameter. To compute the smallest eigenvalue of large matrix nonlinear eigenvalue problem, we suggest preconditioned iterative methods: preconditioned simple iteration method, preconditioned steepest descent method, and preconditioned conjugate gradient method. These methods use only matrix-vector multiplications, preconditioner-vector multiplications, linear operations with vectors and inner products of vectors. We investigate the convergence and derive grid-independent error estimates of these methods for computing eigenvalues. Numerical experiments demonstrate practical effectiveness of the proposed methods for a class of mechanical problems.
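A linear-case sketch in Python with NumPy conveys the flavor of the preconditioned steepest descent method for eigenvalue problems (the thesis treats the harder nonlinear case with monotone dependence on the spectral parameter): only matrix-vector products, preconditioner applications, and inner products are used, with a small Rayleigh-Ritz step on span{x, w}:

```python
import numpy as np

def psd_smallest_eig(A, apply_prec, x0, n_iter=200):
    """Preconditioned steepest descent for the smallest eigenvalue of a
    symmetric matrix A (linear-case sketch only)."""
    x = x0 / np.linalg.norm(x0)
    lam = x @ (A @ x)                        # Rayleigh quotient (||x|| = 1)
    for _ in range(n_iter):
        r = A @ x - lam * x                  # eigenvalue residual
        w = apply_prec(r)                    # preconditioned direction
        w = w - (x @ w) * x                  # orthogonalize against x
        nw = np.linalg.norm(w)
        if nw < 1e-14:
            break
        V = np.column_stack([x, w / nw])     # orthonormal basis of span{x, w}
        theta, S = np.linalg.eigh(V.T @ (A @ V))  # 2x2 Rayleigh-Ritz problem
        x = V @ S[:, 0]                      # Ritz vector of smallest Ritz value
        lam = theta[0]
    return lam, x

# Example: diagonal test matrix with a Jacobi-type preconditioner
n = 500
A = np.diag(np.linspace(1.0, 100.0, n))
lam_min, _ = psd_smallest_eig(A, lambda r: r / np.diag(A), np.random.rand(n))
```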

Book chapters on the topic "Conjugate gradient method, preconditioning, steepest descent method"

1. Zhang, Jiapu. "The Hybrid Method of Steepest Descent: Conjugate Gradient with Simulated Annealing." In Molecular Structures and Structural Dynamics of Prion Proteins and Prions, 171–201. Dordrecht: Springer Netherlands, 2015. http://dx.doi.org/10.1007/978-94-017-7318-8_11.

2. "14. The Method of Steepest Descent and the Conjugate Gradient Method." In Finite Difference Schemes and Partial Differential Equations, Second Edition, 373–98. Society for Industrial and Applied Mathematics, 2004. http://dx.doi.org/10.1137/1.9780898717938.ch14.


Conference papers on the topic "Conjugate gradient method, preconditioning, steepest descent method"

1. Sowayan, A. S., A. Bénard, and A. R. Diaz. "A Wavelets Method for Solving Heat Transfer and Viscous Fluid Flow Problems." In ASME 2013 Fluids Engineering Division Summer Meeting. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/fedsm2013-16312.

Abstract:
Wavelet-based methods have demonstrated great potential for solving partial differential equations of various types. The capabilities of the wavelet Galerkin method are explored by solving various heat transfer and fluid flow problems. A fictitious domain approach is used to simplify the discretization of the domain, and a penalty method allows an efficient implementation of the boundary conditions. The resulting system of equations is solved iteratively via the Conjugate Gradient and Preconditioned Conjugate Gradient methods. The fluid flow problems in the present study are formulated in such a manner that the solution of the continuity and momentum equations is obtained by solving a series of Poisson equations; this is achieved by using the steepest descent method. The examples solved show that the method is amenable to solving large problems rapidly with modest computational resources.
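For reference, a minimal preconditioned conjugate gradient routine of the kind used for such Poisson solves, written in Python with NumPy and a Jacobi (diagonal) preconditioner; this is a generic sketch, not the paper's wavelet Galerkin implementation:

```python
import numpy as np

def pcg(A, b, apply_prec, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradient for a symmetric positive definite
    system A x = b; apply_prec(r) should return M^{-1} r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Example: 1-D variable-coefficient Poisson-type system, Jacobi preconditioner
n = 200
main = 2.0 + np.linspace(0.0, 10.0, n)
A = np.diag(main) - np.eye(n, k=1) - np.eye(n, k=-1)
x, iters = pcg(A, np.ones(n), lambda r: r / main)
```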
2. Kim, HyungTae, SeungTaek Kim, KyungChan Jin, Jongseok Kim, and SungHo Lee. "Robust Parameter Design of Derivative Optimization Methods for Image Acquisition Using a Color Mixer." In ASME 2015 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/imece2015-50568.

Abstract:
A tuning method was proposed for automatic lighting (auto-lighting) algorithms derived from the steepest descent and conjugate gradient methods. The auto-lighting algorithms maximize the image quality of the industrial machine vision by adjusting multiple-color light emitting diodes (LEDs), usually called color mixers. Searching for the driving condition for achieving maximum sharpness, which influences image quality, using multiple color LEDs, is time-consuming. Hence, the steepest descent and conjugate gradient methods were applied to reduce the searching time for achieving maximum image quality. The relationship between lighting and image quality is multi-dimensional, non-linear, and difficult to describe using mathematical equations. Hence the Taguchi method is actually the only method that can determine the parameters of auto-lighting algorithms. The Taguchi method was applied to an inspection system consisting of an industrial camera, coaxial lens, color mixer, image acquisition device, analog interface board, and semiconductor patterns for target objects. The algorithm parameters were determined using orthogonal arrays and the candidate parameters were selected by increasing the sharpness and decreasing the iterations of the algorithm, which were dependent on the searching time. After conducting retests using the selected parameters, the image quality was almost the same as that in the best-case parameters with a smaller number of iterations. The Taguchi method will be useful in reducing time-consuming tasks and the time required to set up the inspection process in manufacturing.