Journal articles on the topic 'Hessian matrices'

Consult the top 50 journal articles for your research on the topic 'Hessian matrices.'


1

Li, Xiang, Shusen Wang, and Zhihua Zhang. "Do Subsampled Newton Methods Work for High-Dimensional Data?" Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4723–30. http://dx.doi.org/10.1609/aaai.v34i04.5905.

Abstract:
Subsampled Newton methods approximate Hessian matrices through subsampling techniques to alleviate the per-iteration cost. Previous results require Ω(d) samples to approximate Hessians, where d is the dimension of the data points, making them less practical for high-dimensional data. The situation worsens when d is comparable to the number of data points n, since the whole dataset must then be taken into account, rendering subsampling useless. This paper theoretically justifies the effectiveness of subsampled Newton methods on strongly convex empirical risk minimization with high-dimensional data. Specifically, we provably require only Θ̃(d_eff^γ) samples for approximating the Hessian matrices, where d_eff^γ is the γ-ridge leverage and can be much smaller than d as long as nγ ≫ 1. Our theory covers three types of Newton methods: subsampled Newton, distributed Newton, and proximal Newton.
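A minimal sketch of the idea, with illustrative assumptions throughout (uniform rather than leverage-score sampling, made-up names and sample size): one subsampled Newton step for gamma-regularized logistic regression computes the exact gradient but estimates the Hessian from only s sampled rows.

```python
import numpy as np

def subsampled_newton_step(X, y, w, gamma, s, rng):
    """One Newton step: exact gradient, Hessian estimated from
    s uniformly subsampled rows of X (illustrative sketch)."""
    n, d = X.shape
    p = 1.0 / (1.0 + np.exp(-X @ w))              # sigmoid predictions
    grad = X.T @ (p - y) / n + gamma * w          # exact regularized gradient
    idx = rng.choice(n, size=s, replace=False)    # uniform subsample
    Xs, ps = X[idx], p[idx]
    curv = ps * (1.0 - ps)                        # logistic curvature weights
    H = (Xs * curv[:, None]).T @ Xs / s + gamma * np.eye(d)
    return w - np.linalg.solve(H, grad)
```

With leverage-score-based sampling, as the paper advocates, the subsample size can shrink well below d; the uniform sampling above is only for brevity.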
2

Ek, David, and Anders Forsgren. "Exact linesearch limited-memory quasi-Newton methods for minimizing a quadratic function." Computational Optimization and Applications 79, no. 3 (April 28, 2021): 789–816. http://dx.doi.org/10.1007/s10589-021-00277-4.

Abstract:
The main focus of this paper is exact linesearch methods for minimizing a quadratic function whose Hessian is positive definite. We give a class of limited-memory quasi-Newton Hessian approximations which generate search directions parallel to those of the BFGS method, or equivalently, to those of the method of preconditioned conjugate gradients. In the setting of reduced Hessians, the class provides a dynamical framework for the construction of limited-memory quasi-Newton methods. These methods attain finite termination on quadratic optimization problems in exact arithmetic. We demonstrate the performance of the methods within this framework in finite-precision arithmetic by numerical simulations on sequences of related systems of linear equations originating from the CUTEst test collection. In addition, we give a compact representation of the Hessian approximations in the full Broyden class for the general unconstrained optimization problem. This representation consists of explicit matrices and gradients only as vector components.
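The finite-termination property on quadratics that the paper ties to preconditioned conjugate gradients can be sketched with plain CG under exact linesearch (a generic illustration, not the paper's limited-memory scheme): on f(x) = 0.5 xᵀAx − bᵀx with positive definite A, it reaches the exact minimizer in at most n iterations in exact arithmetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)        # positive definite "Hessian"
b = rng.standard_normal(n)

# Conjugate gradients on f(x) = 0.5 x^T A x - b^T x with exact linesearch.
x = np.zeros(n)
r = b - A @ x                      # residual = negative gradient
p = r.copy()
for _ in range(n):                 # at most n steps to the minimizer
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)     # exact linesearch along p
    x += alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
```

In floating point the residual after n steps is tiny rather than exactly zero, which is the finite-precision behavior the paper studies.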
3

Coleman, Thomas F., Burton S. Garbow, and Jorge J. Moré. "Software for estimating sparse Hessian matrices." ACM Transactions on Mathematical Software 11, no. 4 (December 1985): 363–77. http://dx.doi.org/10.1145/6187.6190.

4

Lu, Yi, Yan Shi, and Jianping Yu. "Kinematic analysis of limited-dof parallel manipulators based on translational/rotational Jacobian and Hessian matrices." Robotica 27, no. 7 (February 27, 2009): 971–80. http://dx.doi.org/10.1017/s0263574709005396.

Abstract:
This paper proposes an approach for solving the velocity and acceleration of limited-dof (dof n < 6) parallel kinematic machines with linear active legs by means of translational/rotational Jacobian and Hessian matrices. First, based on the established or derived constraint and displacement equations, the translational/rotational Jacobian and Hessian matrices are derived. Second, the formulae for solving inverse/forward velocities and accelerations are derived from the translational and rotational Jacobian/Hessian matrices. Third, a 2SPR + UPU PKM and a 2SPS + RPRR PKM are used to illustrate the method. This approach is simple because it needs neither to eliminate 6 − n rows of an n × 6 Jacobian matrix nor to determine the screw or pose of the constrained wrench.
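The role the Jacobian and Hessian matrices play in velocity and acceleration analysis can be seen on a toy planar 2R arm (an illustrative example unrelated to the paper's PKMs; link lengths and poses are made up): the end-effector velocity is v = J(q)q̇ and, per output coordinate k, the acceleration is a_k = (J(q)q̈)_k + q̇ᵀH_k(q)q̇.

```python
import numpy as np

L1, L2 = 1.0, 0.7  # link lengths (arbitrary)

def fk(q):
    """End-effector position of a planar 2R arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def hessian(q):
    """hessian(q)[k] is the 2x2 Hessian of the k-th output of fk."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    Hx = np.array([[-L1 * c1 - L2 * c12, -L2 * c12],
                   [-L2 * c12,           -L2 * c12]])
    Hy = np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                   [-L2 * s12,           -L2 * s12]])
    return np.array([Hx, Hy])

q, qd, qdd = np.array([0.3, 0.8]), np.array([0.5, -0.2]), np.array([0.1, 0.4])
H = hessian(q)
vel = jacobian(q) @ qd
acc = jacobian(q) @ qdd + np.array([qd @ H[0] @ qd, qd @ H[1] @ qd])
```

Differentiating v(t) = J(q(t))q̇(t) in time reproduces exactly the Jacobian term plus the quadratic Hessian term used above.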
5

Ma, W. J., T. M. Wu, J. Hsieh, and S. L. Chang. "Level statistics of Hessian matrices: random matrices with conservation constraints." Physica A: Statistical Mechanics and its Applications 321, no. 1-2 (April 2003): 364–68. http://dx.doi.org/10.1016/s0378-4371(02)01796-x.

6

FONSECA, IRENE, GIOVANNI LEONI, and ROBERTO PARONI. "ON HESSIAN MATRICES IN THE SPACE BH." Communications in Contemporary Mathematics 07, no. 04 (August 2005): 401–20. http://dx.doi.org/10.1142/s0219199705001805.

Abstract:
An extension of Alberti's result to second order derivatives is obtained. Precisely, if Ω is an open subset of ℝ^N and if f ∈ L^1(Ω; ℝ^{N×N}) is symmetric-valued, then there exist u ∈ W^{1,1}(Ω) with ∇u ∈ BV(Ω; ℝ^N) and a constant C > 0 depending only on N such that [Formula: see text] and [Formula: see text]
7

Erleben, Kenny, and Sheldon Andrews. "Solving inverse kinematics using exact Hessian matrices." Computers & Graphics 78 (February 2019): 1–11. http://dx.doi.org/10.1016/j.cag.2018.10.012.

8

Tuan, Nguyen Thai Minh, Chung Thanh Pham, Do Dang Khoa, and Phan Dang Phong. "KINEMATIC AND DYNAMIC ANALYSIS OF MULTIBODY SYSTEMS USING THE KRONECKER PRODUCT." Vietnam Journal of Science and Technology 57, no. 1 (February 18, 2019): 112. http://dx.doi.org/10.15625/2525-2518/57/1/12285.

Abstract:
This paper employs Khang's definition of the partial derivative of a matrix with respect to a vector, together with the Kronecker product, to define translational and rotational Hessian matrices. With these definitions, the generalized velocities in the expression of a linear or angular acceleration are collected into a quadratic term. The relations of Jacobian and Hessian matrices in relative motion are then established. A new matrix form of Lagrange's equations showing clearly the quadratic term of generalized velocities is also introduced.
9

Tůma, Miroslav. "A note on direct methods for approximations of sparse Hessian matrices." Applications of Mathematics 33, no. 3 (1988): 171–76. http://dx.doi.org/10.21136/am.1988.104300.

10

Jiang, Bowu, and Jianfeng Zhang. "Least-squares migration with a blockwise Hessian matrix: A prestack time-migration approach." GEOPHYSICS 84, no. 4 (July 1, 2019): R625–R640. http://dx.doi.org/10.1190/geo2018-0533.1.

Abstract:
We have developed an explicit inverse approach with a Hessian matrix for the least-squares (LS) implementation of prestack time migration (PSTM). A full Hessian matrix is divided into a series of computationally tractable small-sized matrices using a localized approach, thus significantly reducing the size of the inversion. The scheme is implemented by dividing the imaging volume into a series of subvolumes related to the blockwise Hessian matrices that govern the mapping relationship between the migrated result and the corresponding reflectivity. The proposed blockwise LS-PSTM can be implemented in a target-oriented fashion. The localized approach that we use to modify the Hessian matrix can eliminate the boundary effects originating from the blockwise implementation. We derive the explicit formula of the offset-dependent Hessian matrix using the deconvolution imaging condition with an analytical Green's function of PSTM. This avoids the challenging task of estimating the source wavelet. Moreover, migrated gathers can be generated with the proposed scheme. The smaller size of the blockwise Hessian matrix makes it possible to incorporate total-variation regularization into the inversion, thus attenuating noise significantly. We validate the proposed blockwise LS-PSTM with synthetic and field data sets. Higher-quality common-reflection-point gathers of the field data are obtained.
11

Marjugi, Siti Mahani, and Wah June Leong. "Diagonal Hessian Approximation for Limited Memory Quasi-Newton via Variational Principle." Journal of Applied Mathematics 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/523476.

Abstract:
This paper proposes some diagonal matrices that approximate the (inverse) Hessian by parts using a variational principle analogous to the one employed in constructing quasi-Newton updates. The way we derive our approximations is inspired by the least-change secant updating approach, in which we let the diagonal approximation be the sum of two diagonal matrices: the first diagonal matrix carries information about the local Hessian, while the second is chosen so as to induce positive definiteness of the diagonal approximation as a whole. Some numerical results are also presented to illustrate the effectiveness of our approximating matrices when incorporated within the L-BFGS algorithm.
12

Kim, Yeongrak. "Cubic forms having matrix factorizations by Hessian matrices." Proceedings of the American Mathematical Society 148, no. 7 (March 25, 2020): 2799–809. http://dx.doi.org/10.1090/proc/14993.

13

Ivochkina, N. M., and N. V. Filimonenkova. "Gårding Cones and Bellman Equations in the Theory of Hessian Operators and Equations." Contemporary Mathematics. Fundamental Directions 63, no. 4 (December 15, 2017): 615–26. http://dx.doi.org/10.22363/2413-3639-2017-63-4-615-626.

Abstract:
In this work, we continue the investigation of algebraic properties of Gårding cones in the space of symmetric matrices. Based on this theory, we propose a new approach to the study of fully nonlinear differential operators and second-order partial differential equations. We prove comparison theorems of a new type for evolution Hessian operators and establish a relation between Hessian and Bellman equations.
14

Gao, Wenlei, Gian Matharu, and Mauricio D. Sacchi. "Fast least-squares reverse time migration via a superposition of Kronecker products." GEOPHYSICS 85, no. 2 (March 1, 2020): S115–S134. http://dx.doi.org/10.1190/geo2019-0254.1.

Abstract:
Least-squares reverse time migration (LSRTM) has become increasingly popular for complex wavefield imaging due to its ability to equalize image amplitudes, attenuate migration artifacts, handle incomplete and noisy data, and improve spatial resolution. The major drawback of LSRTM is the considerable computational cost incurred by performing migration/demigration at each iteration of the optimization. To ameliorate the computational cost, we introduced a fast method to solve the LSRTM problem in the image domain. Our method is based on a new factorization that approximates the Hessian using a superposition of Kronecker products. The Kronecker factors are small matrices relative to the size of the Hessian. Crucially, the factorization is able to honor the characteristic block-band structure of the Hessian. We have developed a computationally efficient algorithm to estimate the Kronecker factors via low-rank matrix completion. The completion algorithm uses only a small percentage of preferentially sampled elements of the Hessian matrix. Element sampling requires computation of the source and receiver Green’s functions but avoids explicitly constructing the entire Hessian. Our Kronecker-based factorization leads to an imaging technique that we name Kronecker-LSRTM (KLSRTM). The iterative solution of the image-domain KLSRTM is fast because we replace computationally expensive migration/demigration operations with fast matrix multiplications involving small matrices. We first validate the efficacy of our method by explicitly computing the Hessian for a small problem. Subsequent 2D numerical tests compare LSRTM with KLSRTM for several benchmark models. We observe that KLSRTM achieves near-identical images to LSRTM at a significantly reduced computational cost (approximately 5–15× faster); however, KLSRTM has an increased, yet manageable, memory cost.
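A quick way to see why small Kronecker factors are computationally attractive is the standard identity (A ⊗ B) vec(X) = vec(B X Aᵀ); the NumPy sketch below (toy sizes, nothing from the paper's completion algorithm) applies a 12 x 12 operator using only 3 x 3 and 4 x 4 factors.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))      # small Kronecker factor
B = rng.standard_normal((3, 3))      # small Kronecker factor
X = rng.standard_normal((3, 4))      # "image" the operator acts on

# Dense route: explicitly build the 12 x 12 operator A kron B.
dense = np.kron(A, B) @ X.reshape(-1, order="F")

# Matrix-free route: (A kron B) vec(X) = vec(B X A^T),
# using only small matrix products; vec stacks columns (order="F").
fast = (B @ X @ A.T).reshape(-1, order="F")
```

For a Hessian approximated by a superposition of such products, every "migration-like" application reduces to a few small matrix multiplications of this form.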
15

Fletcher, R. "An Optimal Positive Definite Update for Sparse Hessian Matrices." SIAM Journal on Optimization 5, no. 1 (February 1995): 192–218. http://dx.doi.org/10.1137/0805010.

16

Mönnigmann, M. "Efficient Calculation of Bounds on Spectra of Hessian Matrices." SIAM Journal on Scientific Computing 30, no. 5 (January 2008): 2340–57. http://dx.doi.org/10.1137/070704186.

17

Coleman, Thomas F., Burton S. Garbow, and Jorge J. Moré. "Algorithm 636: FORTRAN subroutines for estimating sparse Hessian matrices." ACM Transactions on Mathematical Software 11, no. 4 (December 1985): 378. http://dx.doi.org/10.1145/6187.6193.

18

Lombardi, Marco J., and Giampiero M. Gallo. "Analytic Hessian matrices and the computation of FIGARCH estimates." Statistical Methods & Applications 11, no. 2 (June 2002): 247–64. http://dx.doi.org/10.1007/bf02511490.

19

Coleman, Thomas F., and Jin-Yi Cai. "The Cyclic Coloring Problem and Estimation of Sparse Hessian Matrices." SIAM Journal on Algebraic Discrete Methods 7, no. 2 (April 1986): 221–35. http://dx.doi.org/10.1137/0607026.

20

Mönnigmann, M. "Fast Calculation of Spectral Bounds for Hessian Matrices on Hyperrectangles." SIAM Journal on Matrix Analysis and Applications 32, no. 4 (October 2011): 1351–66. http://dx.doi.org/10.1137/10078760x.

21

Bofill, Josep Maria, and Mònica Comajuan. "Analysis of the updated Hessian matrices for locating transition structures." Journal of Computational Chemistry 16, no. 11 (November 1995): 1326–38. http://dx.doi.org/10.1002/jcc.540161103.

22

Zhao, Tieshi, Mingchao Geng, Yuhang Chen, Erwei Li, and Jiantao Yang. "Kinematics and dynamics Hessian matrices of manipulators based on screw theory." Chinese Journal of Mechanical Engineering 28, no. 2 (January 28, 2015): 226–35. http://dx.doi.org/10.3901/cjme.2014.1230.182.

23

Farid, Mahboubeh, Wah June Leong, and Lihong Zheng. "Accumulative Approach in Multistep Diagonal Gradient-Type Method for Large-Scale Unconstrained Optimization." Journal of Applied Mathematics 2012 (2012): 1–11. http://dx.doi.org/10.1155/2012/875494.

Abstract:
This paper focuses on developing diagonal gradient-type methods that employ an accumulative approach in multistep diagonal updating to determine a better Hessian approximation in each step. The interpolating curve is used to derive a generalization of the weak secant equation, which carries information about the local Hessian. The new parameterization of the interpolating curve in variable space is obtained via an accumulative approach using a norm weighting defined by two positive definite weighting matrices. We also note that the storage needed for all computations of the proposed method is just O(n). Numerical results show that the proposed algorithm is efficient and superior in comparison with some other gradient-type methods.
24

Mahdavi-Amiri, Nezam, and Rohollah Yousefpour. "Constructing a sequence of discrete Hessian matrices of an SC^1 function uniformly convergent to the generalized Hessian matrix." Mathematical Programming 121, no. 2 (July 25, 2008): 387–414. http://dx.doi.org/10.1007/s10107-008-0238-5.

25

Jeyakumar, V., and X. Wang. "Approximate Hessian matrices and second-order optimality conditions for nonlinear programming problems with C1-data." Journal of the Australian Mathematical Society. Series B. Applied Mathematics 40, no. 3 (January 1999): 403–20. http://dx.doi.org/10.1017/s0334270000010985.

Abstract:
In this paper, we present generalizations of the Jacobian matrix and the Hessian matrix to continuous maps and continuously differentiable functions, respectively. We then establish second-order optimality conditions for mathematical programming problems with continuously differentiable functions. The results also sharpen the corresponding results for problems involving C^{1,1} functions.
26

Magnus, Jan R., and H. Neudecker. "Symmetry, 0-1 Matrices and Jacobians: A Review." Econometric Theory 2, no. 2 (August 1986): 157–90. http://dx.doi.org/10.1017/s0266466600011476.

Abstract:
In this paper we bring together those properties of the Kronecker product, the vec operator, and 0-1 matrices which in our view are of interest to researchers and students in econometrics and statistics. The treatment of Kronecker products and the vec operator is fairly exhaustive; the treatment of 0–1 matrices is selective. In particular we study the “commutation” matrix K (defined implicitly by K vec A = vec A′ for any matrix A of the appropriate order), the idempotent matrix N = ½ (I + K), which plays a central role in normal distribution theory, and the “duplication” matrix D, which arises in the context of symmetry. We present an easy and elegant way (via differentials) to evaluate Jacobian matrices (first derivatives), Hessian matrices (second derivatives), and Jacobian determinants, even if symmetric matrix arguments are involved. Finally we deal with the computation of information matrices in situations where positive definite matrices are arguments of the likelihood function.
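The defining property of the commutation matrix, K vec A = vec A′, and the idempotency of N = ½(I + K) are easy to check numerically; the construction below is a small illustrative sketch under the usual column-major vec convention.

```python
import numpy as np

def commutation_matrix(m, n):
    """K with K @ vec(A) = vec(A.T) for any m x n matrix A,
    where vec stacks columns (column-major order)."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # vec(A)[j*m + i] = A[i, j] must land at vec(A.T)[i*n + j]
            K[i * n + j, j * m + i] = 1.0
    return K

vec = lambda M: M.reshape(-1, order="F")   # column-stacking vec operator

m, n = 3, 2
A = np.arange(m * n, dtype=float).reshape(m, n)
K = commutation_matrix(m, n)
```

For square arguments K² = I, which is what makes N = ½(I + K) idempotent, the property used in normal distribution theory.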
27

Liu, Li. "A New BFGS Algorithm Using the Decomposition Matrix of the Correction Matrix to Obtain the Search Directions." Journal of Control Science and Engineering 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/674617.

Abstract:
We present an improved method for determining the search direction in the BFGS algorithm. Our approach uses the equal inner product decomposition method for positive-definite matrices. The decomposition of an approximated Hessian matrix expresses a correction formula that is independent from the exact line search. This decomposed matrix is used to compute the search direction in a new BFGS algorithm.
28

Favennec, Y. "Hessian and Fisher Matrices For Error Analysis in Inverse Heat Conduction Problems." Numerical Heat Transfer, Part B: Fundamentals 52, no. 4 (August 23, 2007): 323–40. http://dx.doi.org/10.1080/10407790701443958.

29

Nasseh, Saeed, Alexandra Seceleanu, and Junzo Watanabe. "Determinants of incidence and Hessian matrices arising from the vector space lattice." Journal of Commutative Algebra 11, no. 1 (February 2019): 131–54. http://dx.doi.org/10.1216/jca-2019-11-1-131.

30

Han, Qing, and Qi S. Zhang. "An Upper Bound for Hessian Matrices of Positive Solutions of Heat Equations." Journal of Geometric Analysis 26, no. 2 (February 3, 2015): 715–49. http://dx.doi.org/10.1007/s12220-015-9569-7.

31

Schulze Darup, Moritz, Martin Kastsian, Stefan Mross, and Martin Mönnigmann. "Efficient computation of spectral bounds for Hessian matrices on hyperrectangles for global optimization." Journal of Global Optimization 58, no. 4 (August 24, 2013): 631–52. http://dx.doi.org/10.1007/s10898-013-0099-1.

32

Hall, Matt. "Linear inversion." Leading Edge 35, no. 12 (December 2016): 1085–87. http://dx.doi.org/10.1190/tle35121085.1.

Abstract:
As a student geologist, I was never inducted into the world of linear algebra. Later, as a professional, I remained happily ignorant of Hessian matrices and Hermitian adjoints. But ever since reading Brian Russell's essay "Don't neglect your math" (Russell, 2012), I've wanted to put things right. In particular, I have wanted to understand the well-known geophysical equation at the heart of every inversion: d = Gm. It's only an equation — how hard can it be?
33

Lu, Yi, and Bo Hu. "Unified Solving Jacobian/Hessian Matrices of Some Parallel Manipulators With n SPS Active Legs and a Passive Constrained Leg." Journal of Mechanical Design 129, no. 11 (November 15, 2006): 1161–69. http://dx.doi.org/10.1115/1.2771572.

Abstract:
Some parallel manipulators with n spherical joint-prismatic joint-spherical joint (SPS)-type active legs and a passive constrained leg possess a larger load-bearing capability and have structurally simple active legs. In this paper, a unified and simple approach is proposed for solving the Jacobian/Hessian matrices and inverse/forward velocity and acceleration of this type of parallel manipulator. First, a general parallel manipulator with n SPS-type active legs and one passive constrained leg in various possible serial structures is synthesized, and some formulae for solving the poses of the constrained force/torque and the active/constrained force matrix are derived. Second, the formulae for solving the extension of the active legs and the auxiliary velocity/acceleration equations are derived. Third, the formulae for solving inverse/forward velocity and acceleration are derived, together with a Jacobian matrix free of first-order partial differentiation and a Hessian matrix free of second-order partial differentiation. Finally, the procedure is applied to three parallel manipulators with four and five SPS-type active legs and one passive constrained leg in different serial structures to illustrate the approach.
34

JENSEN, ARNE, and KENJI YAJIMA. "SPATIAL GROWTH OF FUNDAMENTAL SOLUTIONS FOR CERTAIN PERTURBATIONS OF THE HARMONIC OSCILLATOR." Reviews in Mathematical Physics 22, no. 02 (March 2010): 193–206. http://dx.doi.org/10.1142/s0129055x10003928.

Abstract:
We consider the fundamental solution for the Cauchy problem for perturbations of the harmonic oscillator by time dependent potentials which grow at spatial infinity slower than quadratic but faster than linear functions and whose Hessian matrices have a fixed sign. We prove that the fundamental solution at resonant times grows indefinitely at spatial infinity with an algebraic growth rate, which increases indefinitely when the growth rate of perturbations at infinity decreases from the near quadratic to the near linear ones.
35

Hansen, Leif Ove, Andreas Hauge, Jan Myrheim, and Per Øyvind Sollid. "Extremal entanglement witnesses." International Journal of Quantum Information 13, no. 08 (December 2015): 1550060. http://dx.doi.org/10.1142/s0219749915500604.

Abstract:
We present a study of extremal entanglement witnesses on a bipartite composite quantum system. We define the cone of witnesses as the dual of the set of separable density matrices, thus [Formula: see text] when [Formula: see text] is a witness and [Formula: see text] is a pure product state, [Formula: see text] with [Formula: see text]. The set of witnesses of unit trace is a compact convex set, uniquely defined by its extremal points. The expectation value [Formula: see text] as a function of vectors [Formula: see text] and [Formula: see text] is a positive semidefinite biquadratic form. Every zero of [Formula: see text] imposes strong real-linear constraints on f and [Formula: see text]. The real and symmetric Hessian matrix at the zero must be positive semidefinite. Its eigenvectors with zero eigenvalue, if such exist, we call Hessian zeros. A zero of [Formula: see text] is quadratic if it has no Hessian zeros, otherwise it is quartic. We call a witness quadratic if it has only quadratic zeros, and quartic if it has at least one quartic zero. A main result we prove is that a witness is extremal if and only if no other witness has the same, or a larger, set of zeros and Hessian zeros. A quadratic extremal witness has a minimum number of isolated zeros depending on dimensions. If a witness is not extremal, then the constraints defined by its zeros and Hessian zeros determine all directions in which we may search for witnesses having more zeros or Hessian zeros. A finite number of iterated searches in random directions, by numerical methods, leads to an extremal witness which is nearly always quadratic and has the minimum number of zeros. We discuss briefly some topics related to extremal witnesses, in particular the relation between the facial structures of the dual sets of witnesses and separable states. 
We discuss the relation between extremality and optimality of witnesses, and a conjecture of separability of the so-called structural physical approximation (SPA) of an optimal witness. Finally, we discuss how to treat the entanglement witnesses on a complex Hilbert space as a subset of the witnesses on a real Hilbert space.
36

Futami, Futoshi, Tomoharu Iwata, Naonori Ueda, and Issei Sato. "Accelerated Diffusion-Based Sampling by the Non-Reversible Dynamics with Skew-Symmetric Matrices." Entropy 23, no. 8 (July 30, 2021): 993. http://dx.doi.org/10.3390/e23080993.

Abstract:
Langevin dynamics (LD) has been extensively studied theoretically and practically as a basic sampling technique. Recently, the incorporation of non-reversible dynamics into LD has been attracting attention because it accelerates the mixing speed of LD. Popular choices for non-reversible dynamics include underdamped Langevin dynamics (ULD), which uses second-order dynamics, and perturbations with skew-symmetric matrices. Although ULD has been widely used in practice, the application of skew acceleration remains limited, even though it is expected to show superior performance theoretically. Current work lacks a theoretical understanding of issues that are important to practitioners, including the selection criteria for skew-symmetric matrices, quantitative evaluations of acceleration, and the large memory cost of storing skew matrices. In this study, we theoretically and numerically clarify these problems by analyzing the acceleration, focusing on how the skew-symmetric matrix perturbs the Hessian matrix of the potential function. We also present a practical algorithm that accelerates standard LD and ULD and uses novel memory-efficient skew-symmetric matrices under parallel-chain Monte Carlo settings.
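One way to see the flavor of this analysis, namely how a skew-symmetric perturbation reshapes the spectrum that governs mixing, is a 2x2 toy computation (the matrices below are illustrative assumptions, not the paper's construction): for the flow x' = -(I + S)Hx, the slowest decay rate is the smallest real part of an eigenvalue of (I + S)H, and the skew term can push it above the smallest eigenvalue of H.

```python
import numpy as np

H = np.diag([0.1, 1.0])             # ill-conditioned "Hessian" of the potential
S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])         # skew-symmetric perturbation (S^T = -S)

# Reversible flow x' = -Hx decays at rate min eig(H).
plain_rate = np.min(np.linalg.eigvalsh(H))

# Non-reversible flow x' = -(I + S)Hx: the slowest decay rate is
# now the smallest real part of an eigenvalue of (I + S)H.
skew_rate = np.min(np.linalg.eigvals((np.eye(2) + S) @ H).real)
```

For this choice the skew term more than doubles the worst-case decay rate, which is the kind of quantitative acceleration the paper studies.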
37

Tang, Weilin, and Thomas Bally. "Evaluation of harmonic force fields from limited spectroscopic information and calculated Hessian matrices: butadiene." Journal of Physical Chemistry 97, no. 17 (April 1993): 4365–72. http://dx.doi.org/10.1021/j100119a019.

38

Lin, Psang Dain. "Determination of caustic surfaces using point spread function and ray Jacobian and Hessian matrices." Applied Optics 53, no. 26 (September 4, 2014): 5889. http://dx.doi.org/10.1364/ao.53.005889.

39

Abo-Shanab, Roshdy Foaad. "Dynamic modeling of parallel manipulators based on Lagrange–D’Alembert formulation and Jacobian/Hessian matrices." Multibody System Dynamics 48, no. 4 (October 2, 2019): 403–26. http://dx.doi.org/10.1007/s11044-019-09705-0.

40

Carlsen, Martin. "Using operators to expand the block matrices forming the Hessian of a molecular potential." Journal of Computational Chemistry 35, no. 15 (April 16, 2014): 1149–58. http://dx.doi.org/10.1002/jcc.23609.

41

Li, Qiuwei, Zhihui Zhu, and Gongguo Tang. "The non-convex geometry of low-rank matrix optimization." Information and Inference: A Journal of the IMA 8, no. 1 (March 22, 2018): 51–96. http://dx.doi.org/10.1093/imaiai/iay003.

Abstract:
This work considers two popular minimization problems: (i) the minimization of a general convex function f(X) with the domain being positive semi-definite matrices, and (ii) the minimization of a general convex function f(X) regularized by the matrix nuclear norm $\|X\|_{*}$ with the domain being general matrices. Despite their optimal statistical performance in the literature, these two optimization problems have a high computational complexity even when solved using tailored fast convex solvers. To develop faster and more scalable algorithms, we follow the proposal of Burer and Monteiro to factor the low-rank variable $X = UU^{\top } $ (for semi-definite matrices) or $X=UV^{\top } $ (for general matrices) and also replace the nuclear norm $\|X\|_{*}$ with $\big(\|U\|_{F}^{2}+\|V\|_{F}^{2}\big)/2$. In spite of the non-convexity of the resulting factored formulations, we prove that each critical point either corresponds to the global optimum of the original convex problems or is a strict saddle where the Hessian matrix has a strictly negative eigenvalue. Such a nice geometric structure of the factored formulations allows many local-search algorithms to find a global optimizer even with random initializations.
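The Burer-Monteiro idea, replacing the PSD variable X with UUᵀ and optimizing over the small factor, together with the benign geometry claimed above, can be seen on a toy matrix-approximation instance (an illustrative sketch with made-up sizes, not the paper's setting): plain gradient descent from a random initialization reaches the global optimum.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 2
U_star = rng.standard_normal((n, r))
M = U_star @ U_star.T                  # rank-2 positive semidefinite target

# Factored problem: minimize f(U) = ||U U^T - M||_F^2 over the n x r factor U.
# The gradient of f is 4 (U U^T - M) U.
U = 0.1 * rng.standard_normal((n, r))  # random initialization
for _ in range(5000):
    U -= 0.005 * 4.0 * (U @ U.T - M) @ U
```

Despite the non-convexity of f, the strict-saddle structure lets gradient descent with random initialization avoid the non-optimal critical points here.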
42

Uschmajew, André, and Bart Vandereycken. "On critical points of quadratic low-rank matrix optimization problems." IMA Journal of Numerical Analysis 40, no. 4 (March 20, 2020): 2626–51. http://dx.doi.org/10.1093/imanum/drz061.

Abstract:
The absence of spurious local minima in certain nonconvex low-rank matrix recovery problems has been of recent interest in computer science, machine learning and compressed sensing since it explains the convergence of some low-rank optimization methods to global optima. One such example is low-rank matrix sensing under restricted isometry properties (RIPs). It can be formulated as a minimization problem for a quadratic function on the Riemannian manifold of low-rank matrices, with a positive semidefinite Riemannian Hessian that acts almost like an identity on low-rank matrices. In this work new estimates for singular values of local minima for such problems are given, which lead to improved bounds on RIP constants to ensure absence of nonoptimal local minima and sufficiently negative curvature at all other critical points. A geometric viewpoint is taken, which is inspired by the fact that the Euclidean distance function to a rank-$k$ matrix possesses no critical points on the corresponding embedded submanifold of rank-$k$ matrices except for the single global minimum.
43

Ghazali, K., J. Sulaiman, Y. Dasril, and D. Gabda. "Newton Method with AOR Iteration for Finding Large Scale Unconstrained Minimizer with Tridiagonal Hessian Matrices." Journal of Physics: Conference Series 1298 (August 2019): 012002. http://dx.doi.org/10.1088/1742-6596/1298/1/012002.

44

Ghazali, K., J. Sulaiman, Y. Dasril, and D. Gabda. "Newton-SOR Iteration for Solving Large-Scale Unconstrained Optimization Problems with an Arrowhead Hessian Matrices." Journal of Physics: Conference Series 1358 (November 2019): 012054. http://dx.doi.org/10.1088/1742-6596/1358/1/012054.

45

Yazawa, Akiko. "The eigenvalues of the Hessian matrices of the generating functions for trees with k components." Linear Algebra and its Applications 631 (December 2021): 48–66. http://dx.doi.org/10.1016/j.laa.2021.08.017.

46

Fairbank, Michael, and Eduardo Alonso. "Efficient Calculation of the Gauss-Newton Approximation of the Hessian Matrix in Neural Networks." Neural Computation 24, no. 3 (March 2012): 607–10. http://dx.doi.org/10.1162/neco_a_00248.

Abstract:
The Levenberg-Marquardt (LM) learning algorithm is a popular algorithm for training neural networks; however, for large neural networks, it becomes prohibitively expensive in terms of running time and memory requirements. The most time-critical step of the algorithm is the calculation of the Gauss-Newton matrix, which is formed by multiplying two large Jacobian matrices together. We propose a method that uses backpropagation to reduce the time of this matrix-matrix multiplication. This reduces the overall asymptotic running time of the LM algorithm by a factor of the order of the number of output nodes in the neural network.
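The matrices involved can be sketched on a toy problem. Below, the Gauss-Newton approximation of the Hessian is formed as G = JᵀJ from the residual Jacobian J, and a damped Levenberg-Marquardt step solves (G + μI)Δw = -Jᵀe. The paper's contribution is computing with G via backpropagation instead of building J explicitly; here J comes from finite differences on a hypothetical one-layer model, so everything below is illustrative rather than the paper's algorithm:

```python
import numpy as np

def residuals(w, x, y):
    return np.tanh(x @ w) - y              # one-layer "network" residuals

rng = np.random.default_rng(2)
x = 0.5 * rng.standard_normal((20, 3))     # inputs
w_true = rng.standard_normal(3)
y = np.tanh(x @ w_true)                    # realizable targets

w = np.zeros(3)
for _ in range(100):                       # LM iterations with fixed damping mu
    e = residuals(w, x, y)
    eps = 1e-6                             # finite-difference Jacobian (illustrative)
    J = np.stack([(residuals(w + eps * np.eye(3)[j], x, y) - e) / eps
                  for j in range(3)], axis=1)
    G = J.T @ J                            # Gauss-Newton approximation of the Hessian
    w = w - np.linalg.solve(G + 1e-3 * np.eye(3), J.T @ e)
```

Forming G this way costs a full Jacobian product per step, which is exactly the bottleneck the paper's backpropagation trick targets for large networks.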
47

Yin, Hui, Dejie Yu, Shengwen Yin, and Baizhan Xia. "Efficient Midfrequency Analysis of Built-Up Structure Systems with Interval Parameters." Shock and Vibration 2015 (2015): 1–18. http://dx.doi.org/10.1155/2015/712428.

Abstract:
To improve the efficiency of midfrequency analysis of built-up structure systems with interval parameters, the second-order interval and subinterval perturbation methods are introduced into the hybrid finite element/statistical energy analysis (FE/SEA) framework in this paper. Based on the FE/SEA for built-up structure systems and the second-order interval perturbation method, the response variables are expanded with the second-order Taylor series and nondiagonal elements of the Hessian matrices are neglected. Extreme values of the expanded variables are searched using an efficient search algorithm. For large parameter intervals, the subinterval perturbation method is introduced. Numerical results verify the effectiveness of the proposed methods.
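The second-order expansion with a diagonal Hessian that this abstract describes can be sketched in a few lines: once the off-diagonal Hessian terms are dropped, each interval parameter contributes a decoupled 1-D quadratic whose extremes are easy to search. The response function, intervals, and finite-difference derivatives below are illustrative assumptions, not from the paper:

```python
import numpy as np

def v(p):                                   # illustrative scalar response
    return p[0] ** 2 + np.sin(p[1]) + 0.5 * p[0] * p[1]

lo = np.array([0.8, 0.4]); hi = np.array([1.2, 0.8])   # parameter intervals
p0 = (lo + hi) / 2                          # interval midpoints
d = (hi - lo) / 2                           # interval radii

eps = 1e-4                                  # finite-difference gradient/Hessian diagonal at p0
e = np.eye(2)
g = np.array([(v(p0 + eps * e[i]) - v(p0 - eps * e[i])) / (2 * eps) for i in range(2)])
h = np.array([(v(p0 + eps * e[i]) - 2 * v(p0) + v(p0 - eps * e[i])) / eps ** 2 for i in range(2)])

# With off-diagonal Hessian terms neglected, each coordinate's contribution
# g_i*t + 0.5*h_i*t^2 over t in [-d_i, d_i] is independent: the candidate
# extremes are the interval endpoints and any interior stationary point.
def extremes(gi, hi_, di):
    ts = [-di, di] + ([-gi / hi_] if hi_ != 0 and abs(gi / hi_) < di else [])
    vals = [gi * t + 0.5 * hi_ * t * t for t in ts]
    return min(vals), max(vals)

mins, maxs = zip(*(extremes(g[i], h[i], d[i]) for i in range(2)))
v_lo, v_hi = v(p0) + sum(mins), v(p0) + sum(maxs)       # approximate response bounds
```

Because the cross terms are neglected, the bounds are approximate; for wide intervals the abstract's subinterval refinement would split each interval and repeat this search on the pieces.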
48

Domenzain, Diego, John Bradford, and Jodi Mead. "Efficient inversion of 2.5D electrical resistivity data using the discrete adjoint method." GEOPHYSICS 86, no. 3 (April 27, 2021): E225–E237. http://dx.doi.org/10.1190/geo2020-0373.1.

Abstract:
We have developed a memory- and operation-count-efficient 2.5D inversion algorithm for electrical resistivity (ER) data that can handle the fine discretization domains imposed by other geophysical (e.g., ground-penetrating radar or seismic) data. Due to numerical stability criteria and available computational memory, joint inversion of different types of geophysical data can impose different grid discretization constraints on the model parameters. Our algorithm enables the ER data sensitivities to be directly joined with other geophysical data without the need to interpolate or coarsen the discretization. We have used the adjoint method directly in the discretized steady-state Maxwell's equation to compute the data sensitivity to the conductivity. In doing so, we make no finite-difference approximation on the Jacobian of the data and avoid the need to store large and dense matrices. Rather, we exploit matrix-vector multiplication of sparse matrices and find successful convergence using gradient descent for our inversion routine without having to resort to the Hessian of the objective function. By assuming a 2.5D subsurface, we are able to linearly reduce memory requirements when compared to a 3D gradient descent inversion, and by a power of two when compared to storing a 2D Hessian. Moreover, our method linearly outperforms operation counts when compared with 3D Gauss-Newton conjugate-gradient schemes, which scales cubically in our favor with respect to the thickness of the 3D domain. We physically appraise the domain of the recovered conductivity using a cutoff of the electric current density present in our survey. We evaluate two case studies to assess the validity of our algorithm: first on a 2.5D synthetic example, and then on field data acquired in a controlled alluvial aquifer, where we were able to match the recovered conductivity to borehole observations.
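The discrete adjoint trick this abstract relies on, one extra linear solve in place of a dense Jacobian, can be illustrated on a toy 1-D system: for a discretized equation A(m)u = q and misfit φ(m) = ½‖u − u_obs‖², a single adjoint solve Aᵀλ = (u − u_obs) yields the full gradient, here dφ/dm_i = −λ_i u_i since ∂A/∂m_i = e_i e_iᵀ. The operator, misfit, and sizes below are assumptions, not the paper's 2.5D resistivity code:

```python
import numpy as np

n = 6
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian stencil
q = np.ones(n)                                          # source term

def solve(m):
    return np.linalg.solve(L + np.diag(m), q)           # forward solve A(m) u = q

m_true = 1.0 + 0.3 * np.sin(np.arange(n))               # "true" model
u_obs = solve(m_true)                                   # observed data

m = np.full(n, 1.0)                                     # current model estimate
u = solve(m)
A = L + np.diag(m)
lam = np.linalg.solve(A.T, u - u_obs)                   # single adjoint solve
grad_adj = -lam * u                                     # gradient of the misfit wrt each m_i
```

One forward solve plus one adjoint solve gives all n sensitivities at once; nothing dense of size data-by-model is ever formed, which is the scaling advantage the abstract describes.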
49

Chen, Wen-Kai, Yaolong Zhang, Bin Jiang, Wei-Hai Fang, and Ganglong Cui. "Efficient Construction of Excited-State Hessian Matrices with Machine Learning Accelerated Multilayer Energy-Based Fragment Method." Journal of Physical Chemistry A 124, no. 27 (June 12, 2020): 5684–95. http://dx.doi.org/10.1021/acs.jpca.0c04117.

50

Hladík, Milan. "An extension of the αBB-type underestimation to linear parametric Hessian matrices." Journal of Global Optimization 64, no. 2 (April 18, 2015): 217–31. http://dx.doi.org/10.1007/s10898-015-0304-5.

