Academic literature on the topic 'Conjugate Gradient Algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Conjugate Gradient Algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Conjugate Gradient Algorithm"

1

Guo, Jie, and Zhong Wan. "A new three-term conjugate gradient algorithm with modified gradient-differences for solving unconstrained optimization problems." AIMS Mathematics 8, no. 2 (2022): 2473–88. http://dx.doi.org/10.3934/math.2023128.

Full text
Abstract:
Unconstrained optimization problems often arise from the mining of big data and from scientific computing. On the basis of a modified gradient-difference, this article aims to present a new three-term conjugate gradient algorithm to efficiently solve unconstrained optimization problems. Compared with the existing nonlinear conjugate gradient algorithms, the search directions in this algorithm are always sufficiently descent independent of any line search, as well as having the conjugacy property. Using the standard Wolfe line search, global and local convergence of the proposed algorithm is proved under mild assumptions. Implementing the developed algorithm to solve 750 benchmark test problems available in the literature, it is shown that its numerical performance is remarkable, especially in comparison with that of other similar efficient algorithms.
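For orientation, a three-term direction of the general kind this abstract describes can be sketched as follows. This is a hedged illustration: the choices of beta and theta below are the classical Hestenes-Stiefel-style ones, not Guo and Wan's modified gradient-difference formulas, but they share the property the abstract highlights, namely sufficient descent independent of the line search.

```python
import numpy as np

def three_term_direction(g_new, g_old, d_old):
    """Generic three-term CG direction d = -g + beta*d_old - theta*y,
    with y = g_new - g_old.  This HS-style choice of (beta, theta)
    gives d @ g_new == -||g_new||**2 regardless of the line search."""
    y = g_new - g_old
    denom = d_old @ y
    if abs(denom) < 1e-12:      # degenerate denominator: restart
        return -g_new
    beta = (g_new @ y) / denom
    theta = (g_new @ d_old) / denom
    return -g_new + beta * d_old - theta * y
```

Checking `d @ g` after computing a direction confirms the sufficient-descent identity numerically.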
APA, Harvard, Vancouver, ISO, and other styles
2

Qasim, Aseel M., Zinah F. Salih, and Basim A. Hassan. "A new conjugate gradient algorithms using conjugacy condition for solving unconstrained optimization." Indonesian Journal of Electrical Engineering and Computer Science 24, no. 3 (December 1, 2021): 1647. http://dx.doi.org/10.11591/ijeecs.v24.i3.pp1647-1653.

Full text
Abstract:
The primary objective of this paper, which lies in the field of conjugate gradient algorithms for unconstrained optimization problems, is to show the advantage of the newly proposed algorithm in comparison with the standard Hestenes-Stiefel method. Since the conjugate update parameter is crucial, we propose a simple modification of it, which is used to derive the new formula for the conjugate gradient update parameter described in this paper. Our modification is based on the conjugacy condition for nonlinear conjugate gradient methods, with a nonnegative parameter added to obtain the new extension of the method. Under mild Wolfe conditions, the global convergence theorem and lemmas are also stated and proved. The proposed method's efficiency is demonstrated by numerical examples, which were very encouraging.
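The Hestenes-Stiefel parameter that this abstract takes as its starting point, together with one common nonnegative safeguard, can be written in a few lines. This is only an illustration of the classical formulas; the paper's own modification differs in detail.

```python
import numpy as np

def beta_hs(g_new, g_old, d_old):
    """Classical Hestenes-Stiefel CG update parameter."""
    y = g_new - g_old
    return float((g_new @ y) / (d_old @ y))

def beta_hs_plus(g_new, g_old, d_old):
    """Nonnegative truncation (often written HS+): one standard way to
    impose a nonnegativity safeguard on the update parameter."""
    return max(0.0, beta_hs(g_new, g_old, d_old))
```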
3

Wang, Zhan Jun, and Liu Li. "Implementation of Modified Conjugate Gradient Algorithm in Electromagnetic Tomography Lab System." Advanced Materials Research 655-657 (January 2013): 693–96. http://dx.doi.org/10.4028/www.scientific.net/amr.655-657.693.

Full text
Abstract:
The advantages of electromagnetic tomography are introduced briefly. Based on the conjugate gradient algorithm, a modified conjugate gradient algorithm for electromagnetic tomography (EMT) is proposed, which efficiently improves the quality of the reconstructed image and the convergence speed. Using the lab electromagnetic tomography system, the modified conjugate gradient method for reconstructing images is verified. By evaluating image error and correlation, the regularization, Landweber, conjugate gradient, and modified conjugate gradient algorithms are compared. The conclusion is that, for different flow models, the image error and correlation of the modified conjugate gradient algorithm are superior to those of the other algorithms in the lab EMT system.
4

Ocłoń, Paweł, Stanisław Łopata, and Marzena Nowak. "Comparative study of conjugate gradient algorithms performance on the example of steady-state axisymmetric heat transfer problem." Archives of Thermodynamics 34, no. 3 (September 1, 2013): 15–44. http://dx.doi.org/10.2478/aoter-2013-0013.

Full text
Abstract:
The finite element method (FEM) is one of the most frequently used numerical methods for finding the approximate discrete-point solution of partial differential equations (PDE). In this method, linear or nonlinear systems of equations, assembled during numerical discretization, are solved to obtain the numerical solution of the PDE. Conjugate gradient algorithms are efficient iterative solvers for large sparse linear systems. In this paper the performance of different conjugate gradient algorithms (the conjugate gradient algorithm (CG), the biconjugate gradient algorithm (BICG), the biconjugate gradient stabilized algorithm (BICGSTAB), the conjugate gradient squared algorithm (CGS), and the biconjugate gradient stabilized algorithm with l GMRES restarts (BICGSTAB(l))) is compared when solving the steady-state axisymmetric heat conduction problem. Different values of the l parameter are studied. The engineering problem for which this comparison is made is the two-dimensional, axisymmetric heat conduction in a finned circular tube.
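A miniature version of the comparison this abstract describes is easy to reproduce with SciPy's Krylov solvers. Here a 1-D Poisson matrix serves as an assumed stand-in for the paper's axisymmetric FEM systems; BICGSTAB(l) is omitted because SciPy does not ship it.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, bicg, bicgstab, cgs

# 1-D Poisson (tridiagonal, symmetric positive definite) test matrix
n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

for solver in (cg, bicg, bicgstab, cgs):
    x, info = solver(A, b)          # info == 0 means converged
    res = np.linalg.norm(A @ x - b)
    print(f"{solver.__name__:9s} info={info} residual={res:.2e}")
```

On a symmetric positive definite system like this one, plain CG is the natural choice; the nonsymmetric solvers exist for the general matrices that BICG-type methods target.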
5

Sellami, Badreddine, and Mohamed Chiheb Eddine Sellami. "Global convergence of a modified Fletcher–Reeves conjugate gradient method with Wolfe line search." Asian-European Journal of Mathematics 13, no. 04 (April 4, 2019): 2050081. http://dx.doi.org/10.1142/s1793557120500813.

Full text
Abstract:
In this paper, we are concerned with conjugate gradient methods for solving unconstrained optimization problems. We propose a modified Fletcher-Reeves (abbreviated FR) [Function minimization by conjugate gradients, Comput. J. 7 (1964) 149–154] conjugate gradient algorithm satisfying a parametrized sufficient descent condition with a parameter [Formula: see text]. The parameter [Formula: see text] is computed by means of the conjugacy condition, so that an algorithm is obtained which is a positive multiplicative modification of the Hestenes and Stiefel (abbreviated HS) [Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards Sec. B 48 (1952) 409–436] algorithm, and which produces a descent search direction at every iteration at which the line search satisfies the Wolfe conditions. Under appropriate conditions, we show that the modified FR method with the strong Wolfe line search is globally convergent for uniformly convex functions. We also present extensive preliminary numerical experiments to show the efficiency of the proposed method.
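A plain Fletcher-Reeves iteration with a Wolfe line search can be sketched with SciPy's `line_search`. This is the unmodified FR method, shown only for context, not the parametrized variant the paper proposes.

```python
import numpy as np
from scipy.optimize import line_search

def fr_cg(f, grad, x, iters=200, tol=1e-6):
    """Fletcher-Reeves nonlinear CG with a Wolfe line search."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:            # line search failed: restart
            d, alpha = -g, 1e-4
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves parameter
        d = -g_new + beta * d
        g = g_new
    return x

# demo on a strongly convex quadratic
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_min = fr_cg(lambda v: 0.5 * v @ A @ v - b @ v,
              lambda v: A @ v - b, np.zeros(2))
```

On a quadratic with a near-exact line search this reduces to linear CG, which is why the demo converges in very few iterations.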
6

Hasibuan, Eka Hayana, Surya Hendraputra, GS Achmad Daengs, and Liharman Saragih. "Comparison Fletcher-Reeves and Polak-Ribiere ANN Algorithm for Forecasting Analysis." Journal of Physics: Conference Series 2394, no. 1 (December 1, 2022): 012008. http://dx.doi.org/10.1088/1742-6596/2394/1/012008.

Full text
Abstract:
Each ANN method and algorithm performs differently depending on the algorithm used and the parameters given. The purpose of this research is to determine the better of the two compared algorithms based on the performance value, i.e., the smallest/lowest MSE value, so that it can serve as a reference for solving forecasting problems. The ANN algorithms compared were Conjugate Gradient Fletcher-Reeves and Conjugate Gradient Polak-Ribiere. The conjugate gradient algorithm can solve unconstrained optimization problems and is much more efficient than gradient-descent-based algorithms because of its faster turnaround time and fewer iterations. The research data used for the forecasting analysis of the two algorithms are data on the number of rural poor people in Sumatra, Indonesia, analyzed with the 6-10-1, 6-15-1, and 6-20-1 architectures. The results showed that the Polak-Ribiere Conjugate Gradient algorithm with the 6-10-1 architecture has the best performance and the smallest/lowest MSE value compared to the Fletcher-Reeves algorithm and the two other architectures. It can therefore be concluded that the 6-10-1 architecture with the Conjugate Gradient Polak-Ribiere algorithm can be used to solve forecasting problems, because the training time to achieve convergence is not too long and the resulting performance is quite good.
7

Ahmed, Huda I., Eman T. Hamed, and Hamsa Th Saeed Chilmeran. "A Modified Bat Algorithm with Conjugate Gradient Method for Global Optimization." International Journal of Mathematics and Mathematical Sciences 2020 (June 4, 2020): 1–14. http://dx.doi.org/10.1155/2020/4795793.

Full text
Abstract:
Metaheuristic algorithms are used to solve many optimization problems. The firefly algorithm, particle swarm optimization, harmony search, and the bat algorithm are used as search algorithms to find the optimal solution in the problem field. In this paper, we investigate and analyze a new scaled conjugate gradient algorithm and its implementation, based on exact Wolfe line search conditions and the Powell restart criterion. The new spectral conjugate gradient algorithm is a modification of the Birgin and Martínez method, a way to overcome the lack of positive definiteness of the matrix defining the search direction. Preliminary computational results on a set of 30 unconstrained optimization test problems show that this new spectral conjugate gradient outperforms a standard conjugate gradient in this field. We then apply the newly proposed spectral conjugate gradient algorithm within the bat algorithm to reach the lowest possible goal of the bat algorithm. The resulting approach, namely the directional bat algorithm (CG-BAT), was tested against five other algorithms on several standard and nonstandard benchmarks from the CEC'2005 benchmark suite, and nonparametric statistical tests show its superiority. We also adopt the performance profiles of Dolan and Moré, which likewise show the superiority of the new algorithm (CG-BAT).
8

Ahmed, Alaa Saad, Hisham M. Khudhur, and Mohammed S. Najmuldeen. "A new parameter in three-term conjugate gradient algorithms for unconstrained optimization." Indonesian Journal of Electrical Engineering and Computer Science 23, no. 1 (July 1, 2021): 338. http://dx.doi.org/10.11591/ijeecs.v23.i1.pp338-344.

Full text
Abstract:
In this study, we develop a new parameter of the three-term conjugate gradient kind. The scheme depends principally on the pure conjugacy condition (PCC), an important condition in unconstrained nonlinear optimization in general and in conjugate gradient methods in particular. The proposed method is convergent and satisfies the descent property under certain hypotheses. The numerical results display the effectiveness of the new method for solving unconstrained nonlinear optimization test problems compared to other conjugate gradient algorithms, such as the Fletcher-Reeves (FR) algorithm and the three-term Fletcher-Reeves (TTFR) algorithm, as shown in Table 1 (number of iterations and function evaluations) and in Figures 1, 2, and 3 (comparisons of the number of iterations, the number of function evaluations, and the time taken).
9

Anwer Mustafa, Ahmed, and Salah Gazi Shareef. "Global convergence of new three terms conjugate gradient for unconstrained optimization." General Letters in Mathematics 11, no. 1 (September 2021): 1–9. http://dx.doi.org/10.31559/glm2021.11.1.1.

Full text
Abstract:
In this paper, a new formula for 𝛽𝑘 is suggested for the conjugate gradient method for solving unconstrained optimization problems, based on three terms and a cubic step size. Our newly proposed CG method has the descent condition, sufficient descent condition, conjugacy condition, and global convergence properties. Numerical comparisons with two standard conjugate gradient algorithms show that this algorithm is very effective as measured by the number of iterations and the number of function evaluations.
10

Bridson, Robert, and Chen Greif. "A Multipreconditioned Conjugate Gradient Algorithm." SIAM Journal on Matrix Analysis and Applications 27, no. 4 (January 2006): 1056–68. http://dx.doi.org/10.1137/040620047.

Full text
More sources

Dissertations / Theses on the topic "Conjugate Gradient Algorithm"

1

Oliveira, Ivan B. (Ivan Borges) 1975. "A "HUM" conjugate gradient algorithm for constrained nonlinear optimal control : terminal and regular problems." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/89883.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2002.
Includes bibliographical references (p. 145-147).
Optimal control problems often arise in engineering applications when a known desired behavior is to be imposed on a dynamical system. Typically, there is a performance and controller use trade-off that can be quantified as a total cost functional of the state and control histories. Problems stated in such a manner are not required to follow an exact desired behavior, alleviating potential controllability issues. We present a method for solving large deterministic optimal control problems defined by quadratic cost functionals, nonlinear state equations, and box-type constraints on the control variables. The algorithm has been developed so that systems governed by general parabolic partial differential equations can be solved. The problems addressed are of the regulator-terminal type, in which deviations from specified state variable behavior are minimized over the entire trajectory as well as at the final time. The core of the algorithm consists of an extension of the Hilbert Uniqueness Method which, we show, can be considered a statement of the dual. With the definition of a problem-specific inner-product space, a formulation is constructed around a well-conditioned, stable, SPD operator, thus leading to fast rates of convergence when solved by, for instance, a conjugate gradient procedure (denoted here TRCG). Total computational time scales roughly as twice the order of magnitude of the computational cost of a single initial-value problem.
Standard logarithmic barrier functions and Newton methods are employed to address the hard constraints on control variables of the type Umin < U < Umax. We have shown that the TRCG algorithm allows for the incorporation of these techniques, and that convergence results maintain advantageous properties found in the standard (linear programming) literature. The TRCG operator is shown to maintain its symmetric positive-definiteness for temporal discretizations, a property that is crucial to the practical implementation of the proposed algorithm. Sample calculations are presented which illustrate the performance of the method when applied to a nonlinear heat transfer problem governed by partial differential equations.
2

Barker, David Gary. "Reconstruction of the Temperature Profile Along a Blackbody Optical Fiber Thermometer." BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/59.

Full text
Abstract:
A blackbody optical fiber thermometer consists of an optical fiber whose sensing tip is given a metallic coating. The sensing tip of the fiber forms an isothermal cavity, and the emission from this cavity is approximately equal to the emission from a blackbody. Standard two-color optical fiber thermometry involves measuring the spectral intensity at the end of the fiber at two wavelengths. The temperature at the sensing tip of the fiber can then be inferred using Planck's law and the ratio of the spectral intensities. If, however, the length of the optical fiber is exposed to elevated temperatures, erroneous temperature measurements will occur due to emission by the fiber. This thesis presents a method to account for emission by the fiber and accurately infer the temperature at the tip of the optical fiber. Additionally, an estimate of the temperature profile along the fiber may be obtained. A mathematical relation for radiation transfer down the optical fiber is developed. The radiation exiting the fiber and the temperature profile along the fiber are related to the detector signal by a signal measurement equation. Since the temperature profile cannot be solved for directly using the signal measurement equation, two inverse minimization techniques are developed to find the temperature profile. Simulated temperature profile reconstructions show the techniques produce valid and unique results. Tip temperatures are reconstructed to within 1.0%. Experimental results are also presented. Due to the limitations of the detection system and the optical fiber probe, the uncertainty in the signal measurement equation is high. Also, due to the limitations of the laboratory furnace and the optical detector, the measurement uncertainty is also high. This leads to reconstructions that are not always accurate. 
Even though the temperature profiles are not completely accurate, the tip-temperatures are reconstructed to within 1%—a significant improvement over the standard two-color technique under the same conditions. Improvements are recommended that will lead to decreased measurement and signal measurement equation uncertainty. This decreased uncertainty will lead to the development of a reliable and accurate temperature measurement device.
3

Friefeld, Andrew Scott 1967. "A geometry-independent algorithm for electrical impedance tomography using wavelet-Galerkin discretization and conjugate gradient regularization." Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/282511.

Full text
Abstract:
Electrical impedance tomography is a rapidly growing discipline with an increasing number of medical and nonmedical applications. Many recent studies indicate that while the technique shows promise, improvements must be made before impedance imaging systems take their place beside more mature imaging technologies in the clinic and in the laboratory. This dissertation is an effort to address two of the shortcomings of currently available impedance tomography systems. First, a new numerical solution to the governing partial differential equation is presented which allows the user a fast, easy means of making geometrical changes. Treating the domain of interest as an input to the problem, recent results from the field of wavelet theory provide a simple means of identifying the boundary as well as giving a method for solving the partial differential equation in a fast, efficient manner. Since the algorithm only requires a pixel representation of the geometry and does not use a grid generation program, it may be of interest in applications where the geometry varies with time or the user may not be familiar with the complexities of typical finite element method grid generation programs. Second, an application of the conjugate gradient method to the problem of regularizing the nonlinear Newton-Raphson conductivity update leads to significant improvement over the popular Levenberg-Marquardt trust region regularization. The use of the conjugate gradient method as a regularization technique allows for convergence of the conductivity reconstruction in far fewer iterations and can perform reconstructions with an initial assumption of uniform conductivity in situations where other methods require either a priori knowledge or internal measurement of voltages.
4

Al-Mudhaf, Ali F. "A feed forward neural network approach for matrix computations." Thesis, Brunel University, 2001. http://bura.brunel.ac.uk/handle/2438/5010.

Full text
Abstract:
A new neural network approach for performing matrix computations is presented. The idea of this approach is to construct a feed-forward neural network (FNN) and then train it by matching a desired set of patterns. The solution of the problem is the converged weight of the FNN. Accordingly, unlike conventional FNN research that concentrates on the external properties (mappings) of the networks, this study concentrates on the internal properties (weights) of the network. The present network is linear and its weights are usually strongly constrained; hence, a complicated overlapped network needs to be constructed. It should be noted, however, that the present approach depends highly on the training algorithm of the FNN. Unfortunately, the available training methods, such as the original back-propagation (BP) algorithm, encounter many deficiencies when applied to matrix algebra problems, e.g., slow convergence due to an improper choice of learning rates (LR). Thus, this study focuses on the development of new efficient and accurate FNN training methods. One improvement suggested to alleviate the problem of LR choice is the use of a line search with the steepest descent method, namely bracketing with the golden section method. This provides an optimal LR as training progresses. Another improvement proposed in this study is the use of conjugate gradient (CG) methods to speed up the training process of the neural network. The computational feasibility of these methods is assessed on two matrix problems, namely the LU-decomposition of both band and square ill-conditioned unsymmetric matrices, and the inversion of square ill-conditioned unsymmetric matrices. In this study, two performance indexes have been considered, namely learning speed and convergence accuracy.
Extensive computer simulations have been carried out using the following training methods: the steepest descent with line search (SDLS) method, the conventional back-propagation (BP) algorithm, and conjugate gradient (CG) methods, specifically the Fletcher-Reeves conjugate gradient (CGFR) method and the Polak-Ribiere conjugate gradient (CGPR) method. The performance comparisons between these minimization methods have demonstrated that the CG training methods give better convergence accuracy and are by far superior with respect to learning time; they offer speed-ups of anything between 3 and 4 over SDLS, depending on the severity of the error goal chosen and the size of the problem. Furthermore, when using Powell's restart criteria with the CG methods, the problem of wrong convergence directions usually encountered in pure CG learning methods is alleviated. In general, CG methods with restarts have shown the best performance among all other methods in training the FNN for LU-decomposition and matrix inversion. Consequently, it is concluded that CG methods are good candidates for training FNNs for matrix computations, in particular the Polak-Ribiere conjugate gradient method with Powell's restart criteria.
5

Pester, M., and S. Rjasanow. "A parallel version of the preconditioned conjugate gradient method for boundary element equations." Universitätsbibliothek Chemnitz, 1998. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-199800455.

Full text
Abstract:
The parallel version of preconditioning techniques is developed for matrices arising from the Galerkin boundary element method for two-dimensional domains with Dirichlet boundary conditions. Results were obtained for implementations on a transputer network as well as on an nCUBE-2 parallel computer, showing that iterative solution methods are very well suited to MIMD computers. A comparison of numerical results for iterative and direct solution methods is presented and underlines the superiority of iterative methods for large systems.
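The serial skeleton of the preconditioned conjugate gradient method being parallelized here looks as follows, with a simple Jacobi (diagonal) preconditioner standing in for the more elaborate boundary-element preconditioners the paper develops.

```python
import numpy as np

def pcg(A, b, M_inv_diag, iters=500, tol=1e-10):
    """Preconditioned CG for symmetric positive definite A.
    M_inv_diag holds the inverse diagonal of the (Jacobi)
    preconditioner; each iteration applies M^{-1} once."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r           # apply M^{-1}
    p = z.copy()
    rz = r @ z
    for _ in range(iters):
        if np.linalg.norm(r) < tol:
            break
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# demo on a random SPD system
rng = np.random.default_rng(1)
B = rng.standard_normal((30, 30))
A = B @ B.T + 30.0 * np.eye(30)
b = rng.standard_normal(30)
x = pcg(A, b, 1.0 / np.diag(A))
```

The matrix-vector product `A @ p` dominates the cost and is the natural place to parallelize, which is exactly what the paper does for its dense BEM matrices.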
6

Ansoni, Jonas Laerte. "Resolução de um problema térmico inverso utilizando processamento paralelo em arquiteturas de memória compartilhada." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/18/18147/tde-19012011-104826/.

Full text
Abstract:
Parallel programming has frequently been adopted for the development of applications that demand high computational performance. With the advent of multi-core architectures and the existence of several levels of parallelism, it is important to define programming strategies that take advantage of the processing power of these architectures. In this context, this study evaluates the performance of multi-core architectures, mainly that offered by graphics processing units (GPUs) and multi-core CPUs, in the resolution of an inverse thermal problem. Parallel algorithms for the GPU and CPU were developed using, respectively, the shared-memory programming tools NVIDIA CUDA (Compute Unified Device Architecture) and the POSIX Threads API. The preconditioned conjugate gradient algorithm for solving sparse linear systems was implemented entirely in the global memory space of the GPU in CUDA. The developed algorithm was evaluated on two GPU models, which proved more efficient, presenting a speedup of four times over the serial version of the algorithm. The POSIX Threads parallel application was evaluated on different multi-core CPUs with distinct microarchitectures. Seeking higher performance of the parallelized code, optimization flags were used, which proved very effective in the developed application. With the help of these flags, the parallelized code achieved processing times about twelve times faster than the serial version on the same processor without any optimization. Thus, both the approach using the GPU as a generic coprocessor to the CPU and the parallel application employing multi-core CPUs proved to be efficient tools for solving the inverse thermal problem.
7

Hewlett, Joel David (advisor: Bogdan M. Wilamowski). "Novel approaches to creating robust globally convergent algorithms for numerical optimization." Auburn, Ala., 2009. http://hdl.handle.net/10415/1930.

Full text
8

Irani, Kashmira M. "Preconditioned sequential and parallel conjugate gradient algorithms for homotopy curve tracking." Thesis, Virginia Tech, 1990. http://hdl.handle.net/10919/41971.

Full text
Abstract:

There are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. The HOMPACK algorithms for sparse Jacobian matrices use a preconditioned conjugate gradient algorithm for the computation of the kernel of the homotopy Jacobian matrix, a required linear algebra step for homotopy curve tracking. Variants of the conjugate gradient algorithm along with different preconditioners are implemented in the context of homotopy curve tracking and compared with Craig's preconditioned conjugate gradient method used in HOMPACK. In addition, a parallel version of Craig's method with incomplete LU factorization preconditioning is implemented on a shared memory parallel computer with various levels and degrees of parallelism (e.g., linear algebra, function and Jacobian matrix evaluation, etc.). An in-depth study is presented for each of these levels with respect to the speedup in execution time obtained with the parallelism, the time spent implementing the parallel code and the extra memory allocated by the parallel algorithm.
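Craig's method, used as the baseline in HOMPACK's comparison above, is conjugate gradient applied implicitly to A Aᵀ y = b with x = Aᵀ y. A minimal unpreconditioned sketch follows; the thesis itself studies preconditioned and parallel variants.

```python
import numpy as np

def craig(A, b, iters=1000, tol=1e-10):
    """Craig's method (CGNE): CG on (A @ A.T) y = b with x = A.T @ y,
    carried out implicitly so A @ A.T is never formed.  At each step it
    minimizes the error norm over the Krylov subspace."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    p = A.T @ r
    rr = r @ r
    for _ in range(iters):
        if np.sqrt(rr) < tol:
            break
        alpha = rr / (p @ p)
        x += alpha * p
        r -= alpha * (A @ p)
        rr_new = r @ r
        p = A.T @ r + (rr_new / rr) * p
        rr = rr_new
    return x

# demo on a well-conditioned nonsymmetric system
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 20)) + 12.0 * np.eye(20)
b = rng.standard_normal(20)
x = craig(A, b)
```

Because the effective operator is A Aᵀ, convergence degrades with the square of the condition number of A, which is why preconditioning matters so much in the homotopy curve tracking setting.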
Master of Science

9

Pinto, Marcio Augusto Sampaio 1977. "Método de otimização assitido para comparação entre poços convencionais e inteligentes considerando incertezas." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/263725.

Full text
Abstract:
Advisor: Denis José Schiozer
Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica and Instituto de Geociências
Resumo: Neste trabalho, um método de otimização assistido é proposto para estabelecer uma comparação refinada entre poços convencionais e inteligentes, considerando incertezas geológicas e econômicas. Para isto é apresentada uma metodologia dividida em quatro etapas: (1) representação e operação dos poços no simulador; (2) otimização das camadas/ou blocos completados nos poços convencionais e do número e posicionamento das válvulas nos poços inteligentes; (3) otimização da operação dos poços convencionais e das válvulas nos poços inteligentes, através de um método híbrido de otimização, composto pelo algoritmo genético rápido, para realizar a otimização global, e pelo método de gradiente conjugado, para realizar a otimização local; (4) uma análise de decisão considerando os resultados de todos os cenários geológicos e econômicos. Esta metodologia foi validada em modelos de reservatórios mais simples e com configuração de poços verticais do tipo five-spot, para em seguida ser aplicada em modelos de reservatórios mais complexos, com quatro poços produtores e quatro injetores, todos horizontais. Os resultados mostram uma clara diferença ao aplicar a metodologia proposta para estabelecer a comparação entre os dois tipos de poços. Apresenta também a comparação entre os resultados dos poços inteligentes com três tipos de controle, o reativo e mais duas formas de controle proativo. Os resultados mostram, para os casos utilizados nesta tese, uma ampla vantagem em se utilizar pelo menos uma das formas de controle proativo, ao aumentar a recuperação de óleo e VPL, reduzindo a produção e injeção de água na maioria dos casos
Abstract: In this work, an assisted optimization method is proposed to establish a refined comparison between conventional and intelligent wells, considering geological and economic uncertainties. The methodology is divided into four steps: (1) representation and operation of the wells in the simulator; (2) optimization of the completed layers/blocks in conventional wells and of the number and placement of the valves in intelligent wells; (3) optimization of the operation of conventional wells and of the valves in intelligent wells through a hybrid optimization method, comprising a fast genetic algorithm for global optimization and the conjugate gradient method for local optimization; (4) decision analysis considering the results of all geological and economic scenarios. The method was validated on simple reservoir models with a five-spot configuration of vertical wells, and then applied to a more complex reservoir model with four producer and four injector wells, all horizontal. The results show a clear difference when the proposed methodology is applied to compare the two types of wells. The work also compares the results of intelligent wells under three types of control: reactive control and two forms of proactive control. For the cases studied, the results show a clear advantage in using intelligent wells with at least one form of proactive control, which increases oil recovery and NPV while reducing water production and injection in most cases.
Doctorate
Reservoirs and Management
Doctor of Petroleum Science and Engineering
APA, Harvard, Vancouver, ISO, and other styles
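The thesis above couples a fast genetic algorithm for global search with the conjugate gradient method for local refinement. The hybrid pattern it describes can be sketched as follows; this is an illustrative toy in which crude random search stands in for the genetic stage, and the objective `f`, its gradient, and the search box are invented for the example rather than taken from the thesis:

```python
import random

def f(x, y):
    # Toy smooth objective with minimum at (1.0, -0.5)
    return (x - 1.0) ** 2 + 2.0 * (y + 0.5) ** 2

def grad(x, y):
    return 2.0 * (x - 1.0), 4.0 * (y + 0.5)

def cg_refine(x, y, iters=50):
    """Local stage: Fletcher-Reeves nonlinear CG with Armijo backtracking."""
    gx, gy = grad(x, y)
    dx, dy = -gx, -gy
    for _ in range(iters):
        if gx * gx + gy * gy < 1e-18:
            break  # gradient is essentially zero
        if gx * dx + gy * dy >= 0.0:
            dx, dy = -gx, -gy  # restart: fall back to steepest descent
        # Backtracking line search satisfying the Armijo condition
        t, fxy, slope = 1.0, f(x, y), gx * dx + gy * dy
        while f(x + t * dx, y + t * dy) > fxy + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5
        x, y = x + t * dx, y + t * dy
        ngx, ngy = grad(x, y)
        beta = (ngx * ngx + ngy * ngy) / (gx * gx + gy * gy)  # Fletcher-Reeves
        dx, dy = -ngx + beta * dx, -ngy + beta * dy
        gx, gy = ngx, ngy
    return x, y

def hybrid_optimize(samples=200, seed=0):
    rng = random.Random(seed)
    # Global stage: random search over [-5, 5]^2 (a stand-in for the
    # genetic algorithm used in the thesis)
    best = min(((rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(samples)),
               key=lambda p: f(*p))
    # Local stage: conjugate gradient polishing of the best candidate
    return cg_refine(*best)
```

Calling `hybrid_optimize()` returns a point close to the toy minimum at `(1.0, -0.5)`; the division of labor (global stage locates a basin, CG polishes within it) is the point of the sketch.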
10

Heinrich, André. "Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration." Doctoral thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-108923.

Full text
Abstract:
The main contribution of this thesis is the concept of Fenchel duality, with a focus on its application to machine learning problems and image restoration tasks. We formulate a general optimization problem for modeling support vector machine tasks, assign a Fenchel dual problem to it, and prove weak and strong duality statements as well as necessary and sufficient optimality conditions for that primal-dual pair. In addition, several special instances of the general optimization problem are derived for different choices of loss functions, for both the regression and the classification task. The convenience of these approaches is demonstrated by numerically solving several problems. We then formulate a general nonsmooth optimization problem and assign a Fenchel dual problem to it. It is shown that the optimal objective values of the primal and dual problems coincide and that the primal problem has an optimal solution under certain assumptions. The dual problem turns out to be nonsmooth in general, so a regularization is performed twice to obtain an approximate dual problem that can be solved efficiently via a fast gradient algorithm. We show how an approximate optimal and feasible primal solution can be constructed by means of sequences of proximal points closely related to the dual iterates, and that this solution converges to the optimal solution of the primal problem for arbitrarily small accuracy. Finally, the support vector regression task arises as a particular case of the general optimization problem, and the theory is specialized to this problem. We calculate several proximal points occurring for different loss functions as well as for some regularization problems applied in image restoration tasks. Numerical experiments illustrate the applicability of our approach to these types of problems.
APA, Harvard, Vancouver, ISO, and other styles
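The thesis above solves its regularized dual problem with a fast gradient algorithm. The sketch below shows the generic Nesterov-type acceleration such schemes are built on, applied to a toy quadratic; the objective, Lipschitz constant, and iteration count are illustrative assumptions, not the actual dual problem from the thesis:

```python
def fast_gradient(gradient, x0, lipschitz, iters=200):
    """Nesterov's accelerated ("fast") gradient method for a smooth convex
    objective whose gradient is Lipschitz-continuous with the given constant."""
    x = list(x0)   # main iterate
    y = list(x0)   # extrapolated point at which the gradient is evaluated
    t = 1.0
    for _ in range(iters):
        g = gradient(y)
        # Plain gradient step with step size 1/L from the extrapolated point
        x_new = [yi - gi / lipschitz for yi, gi in zip(y, g)]
        t_new = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)
        # Momentum/extrapolation step
        y = [xn + ((t - 1.0) / t_new) * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x

# Example: minimize f(v) = 0.5*v1^2 + 2*v2^2, whose gradient is 4-Lipschitz
sol = fast_gradient(lambda v: [v[0], 4.0 * v[1]], [3.0, -2.0], 4.0)
```

The returned `sol` is close to the minimizer at the origin. The same iteration, applied to the twice-regularized (hence smooth) dual, is what makes the approach in the thesis efficient, with primal approximations recovered from proximal points along the dual iterates.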
More sources

Books on the topic "Conjugate Gradient Algorithm"

1

Conjugate gradient algorithms in nonconvex optimization. Berlin: Springer, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Křížek, Michal, Pekka Neittaanmäki, Sergey Korotov, and Roland Glowinski, eds. Conjugate Gradient Algorithms and Finite Element Methods. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-642-18560-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Meurant, Gérard. The Lanczos and conjugate gradient algorithms: From theory to finite precision computations. Philadelphia: Society for Industrial and Applied Mathematics, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Conjugate Gradient Algorithms in Nonconvex Optimization. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-85634-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Krizek, Michal, Roland Glowinski, Pekka Neittaanmäki, and Sergey Korotov. Conjugate Gradient Algorithms and Finite Element Methods. Springer, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Conjugate Gradient Algorithms and Finite Element Methods. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

McIntosh, A. Fitting Linear Models: An Application of Conjugate Gradient Algorithms. Springer London, Limited, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Krizek, M., P. Neittaanmäki, R. Glowinski, and S. Korotov, eds. Conjugate Gradient Algorithms and Finite Element Methods (Scientific Computation). Springer, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Conjugate Gradient Algorithms in Nonconvex Optimization Nonconvex Optimization and Its Applications. Springer, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Meurant, Gérard. The Lanczos and Conjugate Gradient Algorithms: From Theory to Finite Precision Computations (Software, Environments and Tools). SIAM, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Conjugate Gradient Algorithm"

1

Andrei, Neculai. "Linear Conjugate Gradient Algorithm." In Nonlinear Conjugate Gradient Methods for Unconstrained Optimization, 67–87. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-42950-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sabbagh, Harold A., R. Kim Murphy, Elias H. Sabbagh, Liming Zhou, and Russell Wincheski. "A Bilinear Conjugate-Gradient Inversion Algorithm." In Scientific Computation, 3–18. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67956-9_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kotlyar, Vladimir, Keshav Pingali, and Paul Stodghill. "Automatic parallelization of the conjugate gradient algorithm." In Languages and Compilers for Parallel Computing, 480–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0014219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jordan, Andrzej, and Robert Piotr Bycul. "The Parallel Algorithm of Conjugate Gradient Method." In Lecture Notes in Computer Science, 156–65. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47840-x_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Qteish, Abdallah, and Mohammad Hamdan. "Hybrid Particle Swarm and Conjugate Gradient Optimization Algorithm." In Lecture Notes in Computer Science, 582–88. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13495-1_71.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bilski, Jarosław, and Jacek Smoląg. "Fast Conjugate Gradient Algorithm for Feedforward Neural Networks." In Artificial Intelligence and Soft Computing, 27–38. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61401-0_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Field, Martyn R. "Adaptive polynomial preconditioning for the conjugate gradient algorithm." In Lecture Notes in Computer Science, 189–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-60902-4_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Serrarens, Pascal R. "Implementing the conjugate gradient algorithm in a functional language." In Implementation of Functional Languages, 125–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63237-9_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Aich, Ankit, Amit Dutta, and Aruna Chakraborty. "A Scaled Conjugate Gradient Backpropagation Algorithm for Keyword Extraction." In Advances in Intelligent Systems and Computing, 674–84. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-7512-4_67.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhou, Feiyan, and Xiaofeng Zhu. "Alphabet Recognition Based on Scaled Conjugate Gradient BP Algorithm." In Proceedings of the 9th International Symposium on Linear Drives for Industry Applications, Volume 4, 21–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40640-9_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Conjugate Gradient Algorithm"

1

Diniz, Paulo S. R., Marcele O. K. Mendonca, Jonathas O. Ferreira, and Tadeu N. Ferreira. "Data-Selective Conjugate Gradient Algorithm." In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018. http://dx.doi.org/10.23919/eusipco.2018.8553023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Diniz, Paulo S. R., Jonathas O. Ferreira, Marcele O. K. Mendonca, and Tadeu N. Ferreira. "Data Selection Kernel Conjugate Gradient Algorithm." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9054667.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Boray, G. K., and M. D. Srinath. "Conjugate gradient algorithm for adaptive echo cancellation." In [Proceedings] ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1992. http://dx.doi.org/10.1109/icassp.1992.226423.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Apolinário Jr, José Antonio, Stefan Werner, and Paulo Sérgio Ramirez Diniz. "Conjugate Gradient Algorithm with Data Selective Updating." In XIX Simpósio Brasileiro de Telecomunicações. Sociedade Brasileira de Telecomunicações, 2001. http://dx.doi.org/10.14209/sbrt.2001.04400026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sitjongsataporn, Suchada, and Aphichata Thongrak. "Complex block orthogonal gradient adaptive-based algorithm with conjugate gradient principle." In 2016 13th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON). IEEE, 2016. http://dx.doi.org/10.1109/ecticon.2016.7561246.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhao Shengkui, Man Zhihong, and Khoo Suiyang. "Conjugate gradient algorithm design with RLS normal equation." In 2007 6th International Conference on Information, Communications & Signal Processing. IEEE, 2007. http://dx.doi.org/10.1109/icics.2007.4449580.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Semira, Hichem, Hocine Belkacemi, and Noureddine Doghmane. "A novel conjugate gradient-based source localization algorithm." In 2007 9th International Symposium on Signal Processing and Its Applications (ISSPA). IEEE, 2007. http://dx.doi.org/10.1109/isspa.2007.4555290.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jiao, Baocong, Jing Han, and Lanping Chen. "A Modified Conjugate Gradient Algorithm with Sufficient Descent." In 2011 Fourth International Joint Conference on Computational Sciences and Optimization (CSO). IEEE, 2011. http://dx.doi.org/10.1109/cso.2011.38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jiang, Xuguang, and Daniel Thedens. "New iterative gridding algorithm using conjugate gradient method." In Medical Imaging 2004, edited by J. Michael Fitzpatrick and Milan Sonka. SPIE, 2004. http://dx.doi.org/10.1117/12.535685.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Apolinario, Jose A., and Marcello L. R. de Campos. "The constrained generalized data windowing conjugate gradient algorithm." In 2008 IEEE 9th Workshop on Signal Processing Advances in Wireless Communications (SPAWC 2008). IEEE, 2008. http://dx.doi.org/10.1109/spawc.2008.4641656.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Conjugate Gradient Algorithm"

1

D'Azevedo, E. F., and C. H. Romine. Reducing communication costs in the conjugate gradient algorithm on distributed memory multiprocessors. Office of Scientific and Technical Information (OSTI), September 1992. http://dx.doi.org/10.2172/7172467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

D'Azevedo, E. F., and C. H. Romine. Reducing communication costs in the conjugate gradient algorithm on distributed memory multiprocessors. Office of Scientific and Technical Information (OSTI), September 1992. http://dx.doi.org/10.2172/10176473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Singh, Surendra, Klaus Halterman, and J. M. Elson. Bi-Conjugate Gradient Algorithm for Solution of Integral Equations Arising in Electromagnetic Scattering Problems. Fort Belvoir, VA: Defense Technical Information Center, September 2004. http://dx.doi.org/10.21236/ada433650.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Peters, T. J. A Conjugate Gradient Based Algorithm to Minimize the Sidelobe Level of Planar Arrays with Element Failures. Fort Belvoir, VA: Defense Technical Information Center, August 1991. http://dx.doi.org/10.21236/ada240667.

Full text
APA, Harvard, Vancouver, ISO, and other styles
