Journal articles on the topic 'Truncated Newton methods'

Consult the top 36 journal articles for your research on the topic 'Truncated Newton methods.'

1. Nash, Stephen G. "Preconditioning of Truncated-Newton Methods." SIAM Journal on Scientific and Statistical Computing 6, no. 3 (July 1985): 599–616. http://dx.doi.org/10.1137/0906042.

2. Nash, Stephen G. "A survey of truncated-Newton methods." Journal of Computational and Applied Mathematics 124, no. 1–2 (December 2000): 45–59. http://dx.doi.org/10.1016/s0377-0427(00)00426-x.

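The methods Nash surveys pair an outer Newton iteration with an inner conjugate-gradient (CG) solve of the Newton system that is truncated early. A minimal sketch of that structure, using standard textbook choices for the forcing term, negative-curvature guard, and line search (not code from the survey):

```python
import numpy as np

def truncated_newton(f, grad, hess_vec, x0, tol=1e-6, max_outer=100):
    """Line-search truncated Newton: solve H d = -g inexactly by CG."""
    x = x0.astype(float).copy()
    for _ in range(max_outer):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        eta = min(0.5, np.sqrt(gnorm))        # forcing term for truncation
        d = np.zeros_like(x)
        r = -g                                 # residual of H d + g at d = 0
        p = r.copy()
        for _ in range(x.size):
            Hp = hess_vec(x, p)
            pHp = p @ Hp
            if pHp <= 0:                       # negative curvature: bail out
                if not d.any():
                    d = -g                     # fall back to steepest descent
                break
            alpha = (r @ r) / pHp
            d = d + alpha * p
            r_new = r - alpha * Hp
            if np.linalg.norm(r_new) <= eta * gnorm:
                break                          # truncate the inner solve
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        t = 1.0                                # Armijo backtracking
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
    return x

# Example: minimize the convex quadratic f(x) = 0.5 x'Ax - b'x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = truncated_newton(lambda x: 0.5 * x @ A @ x - b @ x,
                          lambda x: A @ x - b,
                          lambda x, v: A @ v,
                          np.zeros(2))
```
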
3. Nash, Stephen G., and Ariela Sofer. "Block truncated-Newton methods for parallel optimization." Mathematical Programming 45, no. 1–3 (August 1989): 529–46. http://dx.doi.org/10.1007/bf01589117.

4. Zou, X., I. M. Navon, M. Berger, K. H. Phua, T. Schlick, and F. X. Le Dimet. "Numerical Experience with Limited-Memory Quasi-Newton and Truncated Newton Methods." SIAM Journal on Optimization 3, no. 3 (August 1993): 582–608. http://dx.doi.org/10.1137/0803029.

5. Papadrakakis, M., and C. J. Gantes. "Truncated Newton methods for nonlinear finite element analysis." Computers & Structures 30, no. 3 (January 1988): 705–14. http://dx.doi.org/10.1016/0045-7949(88)90306-9.

6. Gräser, Carsten, and Oliver Sander. "Truncated nonsmooth Newton multigrid methods for block-separable minimization problems." IMA Journal of Numerical Analysis 39, no. 1 (November 9, 2018): 454–81. http://dx.doi.org/10.1093/imanum/dry073.

7. Kelley, C. T., and E. W. Sachs. "Truncated Newton Methods for Optimization with Inaccurate Functions and Gradients." Journal of Optimization Theory and Applications 116, no. 1 (January 2003): 83–98. http://dx.doi.org/10.1023/a:1022110219090.

8. Yang, Ping, and Yao-Lin Jiang. "Truncated model reduction methods for linear time-invariant systems via eigenvalue computation." Transactions of the Institute of Measurement and Control 42, no. 10 (February 3, 2020): 1908–20. http://dx.doi.org/10.1177/0142331219899745.

Abstract:
This paper provides three model reduction methods for linear time-invariant systems from the viewpoint of the Riemannian Newton method and the Jacobi–Davidson method. First, the computation of Hankel singular values is converted into a linear eigenproblem by a similarity transformation, and the Riemannian Newton method is used to establish the model reduction method. In addition, we introduce a block version of the Jacobi–Davidson method for the linear eigenproblem and present the corresponding model reduction method, which can be seen as an acceleration of the former method. Both resulting reduced systems are equivalent to the reduced system originating from a balancing transformation. Then, the computation of Hankel singular values is transformed into a generalized eigenproblem. The Jacobi–Davidson method is employed to establish the model reduction method, which also leads to a reduced system equivalent to that resulting from a balancing transformation; this method can likewise be regarded as an acceleration of a Riemannian Newton method. Moreover, the application to model reduction of nonlinear systems with inhomogeneous conditions is also investigated.

9. Fauci, Lisa J., and Aaron L. Fogelson. "Truncated Newton methods and the modeling of complex immersed elastic structures." Communications on Pure and Applied Mathematics 46, no. 6 (July 1993): 787–818. http://dx.doi.org/10.1002/cpa.3160460602.

10. Florian, M., S. J. Thomas, and R. V. M. Zahar. "On truncated-Newton methods for solving the spatial price equilibrium problem." Networks 25, no. 4 (July 1995): 177–82. http://dx.doi.org/10.1002/net.3230250403.

11. Jensen, T. L., and M. Diehl. "An Approach for Analyzing the Global Rate of Convergence of Quasi-Newton and Truncated-Newton Methods." Journal of Optimization Theory and Applications 172, no. 1 (September 23, 2016): 206–21. http://dx.doi.org/10.1007/s10957-016-1013-z.

12. Lucidi, Stefano, Francesco Rochetich, and Massimo Roma. "Curvilinear Stabilization Techniques for Truncated Newton Methods in Large Scale Unconstrained Optimization." SIAM Journal on Optimization 8, no. 4 (November 1998): 916–39. http://dx.doi.org/10.1137/s1052623495295250.

13. Matharu, Gian, and Mauricio Sacchi. "A subsampled truncated-Newton method for multiparameter full-waveform inversion." GEOPHYSICS 84, no. 3 (May 1, 2019): R333–R340. http://dx.doi.org/10.1190/geo2018-0624.1.

Abstract:
Accounting for the Hessian in full-waveform inversion (FWI) can lead to higher convergence rates, improved resolution, and better mitigation of parameter trade-off in multiparameter problems. In spite of these advantages, the adoption of second-order optimization methods (e.g., truncated Newton [TN]) has been precluded by their high computational cost. We propose a subsampled TN (STN) algorithm for time-domain FWI with applications presented for the elastic isotropic case. By using uniform or nonuniform source subsampling during the computation of Hessian-vector products, we reduce the number of partial differential equation solves required per iteration when compared to the conventional TN algorithm. We evaluate the performance of STN through synthetic inversions on the Marmousi II and BP 2.5D models, using the limited-memory Broyden–Fletcher–Goldfarb–Shanno and TN algorithms as benchmarks. We determine that STN reaches a target misfit reduction at an overall cost comparable to first-order gradient methods, while retaining favorable convergence properties of TN methods. Furthermore, we evaluate an example in which nonuniform sampling outperforms uniform sampling in STN due to highly nonuniform source contributions to the Hessian.

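The subsampling idea is that the full Hessian-vector product in FWI is a sum of per-source contributions, each costing PDE solves, so a random subset of sources gives a cheap unbiased estimate. A sketch under that assumption (`hess_vec_per_source` is a hypothetical callback; the paper's nonuniform source weighting is noted but not implemented here):

```python
import numpy as np
rng = np.random.default_rng(0)

def subsampled_hess_vec(hess_vec_per_source, n_sources, v, k):
    """Estimate sum_s H_s v from k uniformly sampled sources."""
    idx = rng.choice(n_sources, size=k, replace=True)
    # Scale by n/k so the expectation matches the full sum over sources.
    return (n_sources / k) * sum(hess_vec_per_source(s, v) for s in idx)
```
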
14. Xu, Min, Bojian Zhou, and Jie He. "Improving Truncated Newton Method for the Logit-Based Stochastic User Equilibrium Problem." Mathematical Problems in Engineering 2019 (October 9, 2019): 1–15. http://dx.doi.org/10.1155/2019/7313808.

Abstract:
This study proposes an improved truncated Newton (ITN) method for the logit-based stochastic user equilibrium problem. The ITN method incorporates a preprocessing procedure into the traditional truncated Newton method so that a good initial point is generated, on the basis of which a useful principle is developed for the choice of the basic variables. We discuss the rationale of both improvements from a theoretical point of view and demonstrate that they can enhance computational efficiency in the early and late iteration stages, respectively, when solving the logit-based stochastic user equilibrium problem. The ITN method is compared with other related methods in the literature. Numerical results show that the ITN method performs favorably against these methods.

15. Roma, Massimo. "Dynamic scaling based preconditioning for truncated Newton methods in large scale unconstrained optimization." Optimization Methods and Software 20, no. 6 (December 2005): 693–713. http://dx.doi.org/10.1080/10556780410001727709.

16. Gräser, Carsten, Max Kahnt, and Ralf Kornhuber. "Numerical Approximation of Multi-Phase Penrose–Fife Systems." Computational Methods in Applied Mathematics 16, no. 4 (October 1, 2016): 523–42. http://dx.doi.org/10.1515/cmam-2016-0020.

Abstract:
We consider a non-isothermal multi-phase field model, discretized implicitly in time and with linear finite elements. The arising algebraic problem is formulated in two variables, one being the multi-phase field and the other containing the inverse temperature field. We solve this saddle point problem numerically by a non-smooth Schur–Newton approach using truncated non-smooth Newton multigrid methods. An application to grain growth, as it occurs in liquid-phase crystallization of silicon, is considered.

17. Caliciotti, Andrea, Giovanni Fasano, Stephen G. Nash, and Massimo Roma. "An adaptive truncation criterion, for linesearch-based truncated Newton methods in large scale nonconvex optimization." Operations Research Letters 46, no. 1 (January 2018): 7–12. http://dx.doi.org/10.1016/j.orl.2017.10.014.

18. Caliciotti, Andrea, Giovanni Fasano, Florian Potra, and Massimo Roma. "Issues on the use of a modified Bunch and Kaufman decomposition for large scale Newton's equation." Computational Optimization and Applications 77, no. 3 (September 18, 2020): 627–51. http://dx.doi.org/10.1007/s10589-020-00225-8.

Abstract:
In this work, we deal with truncated Newton methods for solving large scale (possibly nonconvex) unconstrained optimization problems. In particular, we consider the use of a modified Bunch and Kaufman factorization for solving the Newton equation at each (outer) iteration of the method. The Bunch and Kaufman factorization of a tridiagonal matrix is an effective and stable matrix decomposition, which is well exploited in the widely adopted SYMMBK routine (Bunch and Kaufman in Math Comput 31:163–179, 1977; Chandra in Conjugate gradient methods for partial differential equations, vol 129, 1978; Conn et al. in Trust-region methods, MPS-SIAM series on optimization, Society for Industrial Mathematics, Philadelphia, 2000; HSL, A collection of Fortran codes for large scale scientific computation, http://www.hsl.rl.ac.uk/; Marcia in Appl Numer Math 58:449–458, 2008). It can be used to provide conjugate directions, both in the case of 1 × 1 and 2 × 2 pivoting steps. The main drawback is that the resulting solution of Newton's equation might not be gradient-related in the case the objective function is nonconvex. Here we first focus on some theoretical properties which ensure that, at each iteration of the truncated Newton method, the search direction obtained by using an adapted Bunch and Kaufman factorization is gradient-related. This allows us to perform a standard Armijo-type linesearch procedure using a bounded descent direction. Furthermore, the results of an extended numerical experience using large scale CUTEst problems are reported, showing the reliability and the efficiency of the proposed approach on both convex and nonconvex problems.

19. Gao, Lingli, Yudi Pan, Andreas Rieder, and Thomas Bohlen. "Multiparameter viscoelastic full-waveform inversion of shallow-seismic surface waves with a pre-conditioned truncated Newton method." Geophysical Journal International 227, no. 3 (August 6, 2021): 2044–57. http://dx.doi.org/10.1093/gji/ggab311.

Abstract:
2-D full-waveform inversion (FWI) of shallow-seismic Rayleigh waves has become a powerful method for reconstructing viscoelastic multiparameter models of the shallow subsurface with high resolution. Multiparameter reconstruction in FWI is challenging due to the potential presence of cross-talk between different parameters and the unbalanced sensitivity of Rayleigh-wave data with respect to different parameter classes. Accounting for the inverse Hessian using truncated Newton methods based on second-order adjoint methods provides an effective tool to mitigate cross-talk caused by the coupling between different parameters. In this study, we apply a pre-conditioned truncated Newton (PTN) method to shallow-seismic FWI to simultaneously invert for multiparameter near-surface models (P- and S-wave velocities, attenuation of P and S waves, and density). We first investigate the scattered wavefields caused by these parameters to evaluate the coupling between them. Then we investigate the performance of the PTN method on shallow-seismic FWI of Rayleigh waves for reconstructing all five parameters simultaneously. The application to spatially correlated and uncorrelated models demonstrates that the PTN method helps to mitigate the cross-talk and improves the resolution of the multiparameter reconstructions, especially for weak parameters with small sensitivity such as attenuation and density. The attenuation of P waves cannot be inverted reliably because the Rayleigh wave is barely sensitive to it. The comparison with the classical pre-conditioned conjugate gradient method highlights the improved performance of the PTN method and thus the benefit of accounting for the information contained in the Hessian.

20. Zeng, Xinyi, and Wenhao Gui. "Statistical Inference of Truncated Normal Distribution Based on the Generalized Progressive Hybrid Censoring." Entropy 23, no. 2 (February 2, 2021): 186. http://dx.doi.org/10.3390/e23020186.

Abstract:
In this paper, the parameter estimation problem of a truncated normal distribution is discussed based on generalized progressive hybrid censored data. The maximum likelihood estimates of the unknown quantities are first derived through the Newton–Raphson algorithm and the expectation-maximization algorithm. Based on the asymptotic normality of the maximum likelihood estimators, we develop asymptotic confidence intervals. The percentile bootstrap method is also employed for small sample sizes. Further, the Bayes estimates are evaluated under various loss functions, such as squared error, general entropy, and LINEX loss. The Tierney–Kadane approximation, as well as an importance sampling approach, is applied to obtain the Bayesian estimates under proper prior distributions, and the associated Bayesian credible intervals are constructed. Extensive numerical simulations are implemented to compare the performance of the different estimation methods. Finally, an authentic example is analyzed to illustrate the inference approaches.

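The basic likelihood-maximization step for a truncated normal can be sketched with SciPy's truncated-normal density; the paper's censoring scheme, EM algorithm, and Bayes estimators are omitted. Fittingly, the sketch uses SciPy's truncated Newton solver ("TNC"):

```python
import numpy as np
from scipy.stats import truncnorm
from scipy.optimize import minimize

# Synthetic data from a normal(1, 2) left-truncated at 0;
# standardized lower bound is (0 - 1) / 2 = -0.5.
data = truncnorm.rvs(-0.5, np.inf, loc=1.0, scale=2.0, size=500,
                     random_state=0)

def nll(theta):
    """Negative log-likelihood in (mu, log sigma) to keep sigma > 0."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    a = (0.0 - mu) / sigma                  # standardized truncation point
    return -truncnorm.logpdf(data, a, np.inf, loc=mu, scale=sigma).sum()

res = minimize(nll, x0=np.zeros(2), method="TNC")  # truncated Newton
print(res.x[0], np.exp(res.x[1]))           # estimates of (mu, sigma)
```
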
21. Caliciotti, Andrea, Giovanni Fasano, Stephen G. Nash, and Massimo Roma. "Data and performance profiles applying an adaptive truncation criterion, within linesearch-based truncated Newton methods, in large scale nonconvex optimization." Data in Brief 17 (April 2018): 246–55. http://dx.doi.org/10.1016/j.dib.2018.01.012.

22. Clayton, R. P., and R. F. Martinez-Botas. "Application of genetic algorithms in aerodynamic optimisation design procedures." Aeronautical Journal 108, no. 1090 (December 2004): 611–20. http://dx.doi.org/10.1017/s0001924000000440.

Abstract:
Direct optimisation techniques using different methods are presented and compared for the solution of two common flows: a two-dimensional diffuser and a drag minimisation problem of a fixed area body. The methods studied are a truncated Newton algorithm (gradient method), a simplex approach (direct search method), and a genetic algorithm (stochastic method). The diffuser problem has a known solution supported by experimental data; it has one design performance measure (the pressure coefficient) and two design variables. The fixed area body also has one performance measure (the drag coefficient), but this time there are four design variables; no experimental data is available, and this computation is performed to assess the speed and progression of the solution. In all cases the direct search approach (simplex method) required a significantly smaller number of evaluations than the genetic algorithm. The simplest approach, the gradient method (Newton), performed as well as the simplex approach for the diffuser problem but was unable to provide a solution to the four-variable fixed area body drag minimisation problem. The level of robustness obtained by the use of a genetic algorithm is in principle superior to the other methods, but a large price in terms of evaluations has to be paid.

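The three-way comparison maps onto readily available solvers; a toy sketch, with SciPy's differential evolution standing in for the genetic algorithm and an illustrative quadratic replacing the paper's aerodynamic objectives:

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

# Gradient (truncated Newton), direct search (simplex), and a stochastic
# global method, compared by function-evaluation count on a toy objective.
f = lambda x: (x[0] - 1.0) ** 2 + 5.0 * (x[1] + 0.5) ** 2
x0, bounds = np.array([3.0, 3.0]), [(-5, 5), (-5, 5)]
for m in ("TNC", "Nelder-Mead"):
    r = minimize(f, x0, method=m)
    print(m, r.x, r.nfev)
r = differential_evolution(f, bounds, seed=0)
print("DE", r.x, r.nfev)   # typically needs far more evaluations
```
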
23. Métivier, Ludovic, and Romain Brossier. "The SEISCOPE optimization toolbox: A large-scale nonlinear optimization library based on reverse communication." GEOPHYSICS 81, no. 2 (March 1, 2016): F1–F15. http://dx.doi.org/10.1190/geo2015-0031.1.

Abstract:
The SEISCOPE optimization toolbox is a set of FORTRAN 90 routines, which implement first-order methods (steepest-descent and nonlinear conjugate gradient) and second-order methods (l-BFGS and truncated Newton), for the solution of large-scale nonlinear optimization problems. An efficient line-search strategy ensures the robustness of these implementations. The routines are proposed as black boxes that are easy to interface with any computational code in which such large-scale minimization problems have to be solved. Traveltime tomography, least-squares migration, and full-waveform inversion are examples of such problems in the context of geophysics. Integrating the toolbox for solving this class of problems presents two advantages. First, thanks to the reverse communication protocol, it helps to separate the routines depending on the physics of the problem from those related to the minimization itself, which enhances flexibility in code development and maintenance. Second, it allows us to switch easily between different optimization algorithms; in particular, it reduces the complexity related to the implementation of second-order methods. Because the latter benefit from faster convergence rates compared to first-order methods, significant improvements in terms of computational effort can be expected.

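A reverse-communication driver inverts the usual control flow: the optimizer returns a request flag and the caller supplies function and gradient values, keeping the physics code separate from the minimization code. A schematic sketch of the pattern (illustrative names only, not the toolbox's actual FORTRAN interface):

```python
import numpy as np

def optimizer_step(x, fval, grad, state):
    """One steepest-descent step standing in for the toolbox's solvers.
    A real implementation would also use fval and keep l-BFGS/CG state."""
    if np.linalg.norm(grad) < 1e-8:
        return "CONVERGED", x, state
    return "NEED_F_AND_G", x - state["step"] * grad, state

x, state, flag = np.array([2.0, -1.5]), {"step": 0.1}, "NEED_F_AND_G"
while flag == "NEED_F_AND_G":
    fval = (x ** 2).sum()        # user code: evaluate the misfit ...
    grad = 2.0 * x               # ... and its gradient (the "physics" side)
    flag, x, state = optimizer_step(x, fval, grad, state)
```
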
24

Chen, Kai, Rongchun Li, Yong Dou, Zhengfa Liang, and Qi Lv. "Ranking Support Vector Machine with Kernel Approximation." Computational Intelligence and Neuroscience 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/4629534.

Full text
Abstract:
Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problem. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. Primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
APA, Harvard, Vancouver, ISO, and other styles
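Of the two approximations explored, random Fourier features are the simpler to sketch: for an RBF kernel k(x, y) = exp(-gamma * ||x - y||^2), sampled cosine features give an explicit map whose inner products approximate the kernel, after which a primal (linear) truncated Newton solver can act on the mapped data. A minimal sketch:

```python
import numpy as np
rng = np.random.default_rng(0)

def random_fourier_features(X, n_features=256, gamma=1.0):
    """Map X so that Z @ Z.T approximates exp(-gamma * ||xi - xj||^2)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, n_features=2000)
print(Z @ Z.T)   # ≈ RBF kernel matrix; a linear solver then acts on Z
```
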
25. Feng, Xuan, Qianci Ren, Cai Liu, and Xuebing Zhang. "Joint acoustic full-waveform inversion of crosshole seismic and ground-penetrating radar data in the frequency domain." GEOPHYSICS 82, no. 6 (November 1, 2017): H41–H56. http://dx.doi.org/10.1190/geo2016-0008.1.

Abstract:
Integrating crosshole ground-penetrating radar (GPR) with seismic methods is an efficient way to reduce the uncertainty and ambiguity of data interpretation in shallow geophysical investigations. We have developed a new approach for joint full-waveform inversion (FWI) of crosshole seismic and GPR data in the frequency domain to improve the inversion results of both FWI methods. In a joint objective function, three geophysical parameters (P-wave velocity, permittivity, and conductivity) are effectively connected by three weighted cross-gradient terms that enforce the structural similarity between parameter models. Simulation of acoustic seismic and scalar electromagnetic problems is implemented using 2D finite-difference frequency-domain methods, and the inverse problems of seismic FWI and GPR FWI are solved using a matrix-free truncated Newton algorithm. The joint inversion procedure is performed in several hierarchical frequencies, and the three parameter models are sequentially inverted at each frequency. The joint FWI approach is illustrated using three numerical examples. The results indicate that the joint FWI approach can effectively enhance the structural similarity among the models, modify the structure of each model, and improve the accuracy of inversion results compared with those of individual FWI approaches. Moreover, joint inversion can reduce the trade-off between permittivity and conductivity in GPR FWI, leading to an improved conductivity model in which artifacts are significantly decreased.

26. Li, Yaru, Yulai Zhang, and Yongping Cai. "A New Hyper-Parameter Optimization Method for Power Load Forecast Based on Recurrent Neural Networks." Algorithms 14, no. 6 (May 24, 2021): 163. http://dx.doi.org/10.3390/a14060163.

Abstract:
The selection of hyper-parameters plays a critical role in prediction tasks based on recurrent neural networks (RNN). Traditionally, the hyper-parameters of machine learning models are selected by simulation as well as human experience. In recent years, multiple algorithms based on Bayesian optimization (BO) have been developed to determine the optimal values of the hyper-parameters; in most of these methods, gradients must be calculated. In this work, particle swarm optimization (PSO) is used under the BO framework to develop a new method for hyper-parameter optimization. The proposed algorithm (BO-PSO) is free of gradient calculation, and the particles can be optimized in parallel naturally, so the computational complexity is effectively reduced, which means better hyper-parameters can be obtained for the same amount of computation. Experiments are done on real-world power load data, where the proposed method outperforms the existing state-of-the-art algorithms, BO with limited-memory BFGS with bounds (BO-L-BFGS-B) and BO with truncated Newton (BO-TNC), in terms of prediction accuracy. The prediction errors of the different models show that BO-PSO is an effective hyper-parameter optimization method.

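The two baselines named by the suffixes correspond to SciPy's bounded quasi-Newton and truncated Newton solvers; a minimal sketch of calling both on the same toy objective (illustrative only; the paper tunes RNN hyper-parameters, not a test function):

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function as a stand-in objective for the two inner solvers.
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
for method in ("L-BFGS-B", "TNC"):
    res = minimize(rosen, x0=np.array([-1.2, 1.0]), method=method,
                   bounds=[(-2, 2), (-2, 2)])
    print(method, res.x, res.nfev)
```
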
27. Saxton, R. D., A. M. Read, S. Komossa, P. Lira, K. D. Alexander, I. Steele, F. Ocaña, E. Berger, and P. Blanchard. "XMMSL2 J144605.0+685735: a slow tidal disruption event." Astronomy & Astrophysics 630 (September 26, 2019): A98. http://dx.doi.org/10.1051/0004-6361/201935650.

Abstract:
Aims. We investigate the evolution of X-ray selected tidal disruption events. Methods. New events are found in near real-time data from XMM-Newton slews, and are monitored by multi-wavelength facilities. Results. In August 2016, X-ray emission was detected from the galaxy XMMSL2 J144605.0+685735 (also known as 2MASX 14460522+6857311) that was 20 times higher than an upper limit from 25 years earlier. The X-ray flux was flat for ∼100 days and then fell by a factor of 100 over the following 500 days. The UV flux was stable for the first 400 days before fading by a magnitude, while the optical (U, B, V) bands were roughly constant for 850 days. Optically, the galaxy appears to be quiescent, at a distance of 127 ± 4 Mpc (z = 0.029 ± 0.001) with a spectrum consisting of a young stellar population of 1–5 Gyr in age, an older population, and a total stellar mass of ∼6 × 10^9 M⊙. The bolometric luminosity peaked at Lbol ∼ 10^43 ergs s^−1 with an X-ray spectrum that may be modelled by a power law of Γ ∼ 2.6 or Comptonisation of a low-temperature thermal component by thermal electrons. We consider a tidal disruption event to be the most likely cause of the flare. Radio emission was absent in this event down to < 10 μJy, which limits the total energy of a hypothetical off-axis jet to E < 5 × 10^50 ergs. The independent behaviour of the optical, UV, and X-ray light curves challenges models where the UV emission is produced by reprocessing of thermal nuclear emission or by stream-stream collisions. We suggest that the observed UV emission may have been produced from a truncated accretion disc and the X-rays from Compton upscattering of these disc photons.

28. Koliopanos, F., G. Vasilopoulos, J. Buchner, C. Maitra, and F. Haberl. "Investigating ULX accretion flows and cyclotron resonance in NGC 300 ULX1." Astronomy & Astrophysics 621 (January 2019): A118. http://dx.doi.org/10.1051/0004-6361/201834144.

Abstract:
Aims. We investigate accretion models for the newly discovered pulsating ultraluminous X-ray source (ULX) NGC 300 ULX1. Methods. We analyzed broadband XMM-Newton and NuSTAR observations of NGC 300 ULX1, performing phase-averaged and phase-resolved spectroscopy. Using the Bayesian framework, we compared two physically motivated models for the source spectrum: non-thermal accretion column emission modeled by a power law with a high-energy exponential roll-off (AC model), and multicolor thermal emission from an optically thick accretion envelope plus a hard power-law tail (MCAE model). The AC model is an often used phenomenological model for the emission of X-ray pulsars, while the MCAE model has recently been proposed for the emission of the optically thick accretion envelope that is expected to form in ultraluminous (LX > 10^39 erg s^−1), highly magnetized accreting neutron stars. We combined the findings of our Bayesian analysis with qualitative physical considerations to evaluate the suitability of each model. Results. The low-energy part (< 2 keV) of the source spectrum is dominated by non-pulsating, multicolor thermal emission. The (pulsating) high-energy continuum is more ambiguous. If modeled with the AC model, a residual structure is detected that can be modeled using a broad Gaussian absorption line centered at ∼12 keV. However, the same residuals can be successfully modeled using the MCAE model, without the need for the absorption-like feature. Model comparison using the Bayesian approach strongly indicates that the MCAE model without the absorption line is the preferred model. Conclusions. The spectro-temporal characteristics of NGC 300 ULX1 are consistent with previously reported traits for X-ray pulsars and (pulsating) ULXs. All models considered strongly indicate the presence of an accretion disk that is truncated at a large distance from the central object, as has recently been suggested for a large portion of both pulsating and non-pulsating ULXs. The hard, pulsed emission is not described by a smooth spectral continuum. If modeled by a broad Gaussian absorption line, the fit residuals can be interpreted as a cyclotron scattering feature (CRSF) compatible with a ∼10^12 G magnetic field. However, the MCAE model can successfully describe the spectral and temporal characteristics of the source emission, without the need for an additional absorption feature, and it yields physically meaningful parameter values. Therefore strong doubts are cast on the presence of a CRSF in NGC 300 ULX1.

29. Liu, Ning, and Dean S. Oliver. "Critical Evaluation of the Ensemble Kalman Filter on History Matching of Geologic Facies." SPE Reservoir Evaluation & Engineering 8, no. 06 (December 1, 2005): 470–77. http://dx.doi.org/10.2118/92867-pa.

Abstract:
The objective of this paper is to compare the performance of the ensemble Kalman filter (EnKF) to the performance of a gradient-based minimization method for the problem of estimation of facies boundaries in history matching. The EnKF is a Monte Carlo method for data assimilation that uses an ensemble of reservoir models to represent and update the covariance of variables. In several published studies, it outperforms traditional history-matching algorithms in adaptability and efficiency. Because of the approximate nature of the EnKF, the realizations from one ensemble tend to underestimate uncertainty, especially for problems that are highly nonlinear. In this paper, the distributions of reservoir-model realizations from 20 independent ensembles are compared with the distributions from 20 randomized-maximum-likelihood (RML) realizations for a 2D waterflood model with one injector and four producers. RML is a gradient-based sampling method that generates one reservoir realization in each minimization of the objective function. It is an approximate sampling method, but its sampling properties are similar to the Markov-chain Monte Carlo (McMC) method on highly nonlinear problems and are relatively more efficient than McMC. Despite the nonlinear relationship between the data (such as production rates and facies observations) and the model variables, the EnKF was effective at history matching the production data. We find that the computational effort to generate 20 independent realizations was similar for the two methods, although the complexity of the code is substantially less for the EnKF.

Several questions regarding the use of the EnKF for history matching are addressed in this paper. The most important is a comparison of the efficiency with a gradient-based method for a history-matching problem with known facies properties but unknown boundary locations. Secondly, the EnKF and a gradient-based method are unlikely to give identical estimates of model variables, so it is also important to know if one method generates better realizations. Finally, because there is often a desire to use the history-matched realizations to quantify uncertainty, it is important to determine if one of the methods is more efficient at generating independent realizations. Gradient-based history matching can be performed in several ways (e.g., assimilating data in batch or sequentially); a variety of minimization algorithms can be used (e.g., conjugate gradient or quasi-Newton); and several different methods for computing the gradient are available (e.g., adjoint or sensitivity equations). In this paper, we use what we believe is the most efficient of the traditional gradient-based methods: an adjoint method to compute the gradient of the squared data mismatch and the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) method to compute the direction of the change. The remaining choice is whether to incorporate all data at once or sequentially. Simultaneous, or batch, inversion of all data is clearly a well-established history-matching procedure. Although data from wells or sensors may arrive nearly continuously, the practice of updating reservoir models as the data arrive is not common.

There are several reasons that make sequential assimilation of data difficult for large, nonlinear models: the covariance for all model variables must be updated as new data are assimilated, but the covariance matrix is very large; the covariance may not be a good measure of uncertainty for nonlinear problems; and the sensitivity of a datum to changes in the values of model variables is expensive to compute. Bayesian updating in general is described by Woodbury. Modifying a method described by Tarantola, Oliver evaluated the possibility of using a sequential assimilation approach for transient flow in porous media. He found that the results from sequential assimilation could be almost as good as those from batch assimilation if the order of the data was carefully selected. The problem was quite small, however, and an extension to large models was impractical. Although a sequential method has the advantage of generating a sequence of history-matched models that may all be useful at the time they are generated, our comparisons of efficiency will be based primarily on the effort required to assimilate all the data. If the intermediate predictions are needed (as they would be for control of a reservoir), the comparison provided here will underestimate the value of the sequential assimilation. A secondary objective of history matching is often to assess the uncertainty in the predictions of future reservoir performance or in the estimates of reservoir properties such as permeability, porosity, or saturation. In general, uncertainty is estimated from an examination of a moderate number of conditional simulations of the prediction or properties. Unless the realizations are generated fairly carefully and the sample is sufficiently large, however, the estimate of uncertainty could be quite poor. Two large comparative studies of the ability of Monte Carlo methods to quantify uncertainty in history matching have been carried out, one in groundwater and one in petroleum. Neither was conclusive, partly because of the small sample size. Liu and Oliver used a smaller reservoir model (fewer variables), but a much larger sample size. They found that the method that minimizes an objective function containing a model mismatch part and a data mismatch part, with noise added to observations, created realizations that were distributed nearly the same as realizations from McMC. The EnKF is a Monte Carlo method for updating reservoir models. It solves several problems with the application of the Kalman filter to large nonlinear problems. It has been applied to reservoir flow problems with generally good results. There has been no examination, however, of the distribution of the members of a single ensemble. The adequacy of the uncertainty estimate is completely unknown. In the first paper on the EnKF, Evensen described how the evolution of the probability density function for the model variables can be approximated by the motion of "particles" or ensemble members in phase space. Any desired statistical quantities can be estimated from the ensemble of points. When the size of the ensemble is relatively small, however, the approximation of the covariance from the ensemble almost certainly contains substantial errors. Houtekamer and Mitchell noted the tendency for a reduction in variance caused by "inbreeding." When the ensemble estimate is used in a Kalman filter, van Leeuwen explained how nonlinearity in the covariance update relation causes growth in the error as additional data are assimilated.

In this paper, the comparison is made using history matching on a truncated plurigaussian model for geologic facies. It provides a difficult history-matching problem with significant nonlinearities that make both the EnKF and the LBFGS method difficult to apply.

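The EnKF analysis step being evaluated can be sketched compactly: the ensemble anomaly covariance stands in for the exact covariance in the Kalman gain, and each member is updated against perturbed observations. A minimal sketch in the perturbed-observation form (the paper's facies parameterization and reservoir simulator are far beyond this):

```python
import numpy as np
rng = np.random.default_rng(0)

def enkf_update(E, H, y, R):
    """E: (n, N) state ensemble; H: (m, n) observation operator;
    y: (m,) observations; R: (m, m) observation-error covariance."""
    N = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)            # ensemble anomalies
    P = A @ A.T / (N - 1)                            # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return E + K @ (Y - H @ E)                       # updated ensemble

# Toy usage: a 2-state ensemble, observing only the first component.
E = rng.normal(size=(2, 50)) + np.array([[1.0], [0.0]])
E_post = enkf_update(E, np.array([[1.0, 0.0]]), np.array([2.0]),
                     np.array([[0.1]]))
```
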
30. Galli, Leonardo, and Chih-Jen Lin. "A Study on Truncated Newton Methods for Linear Classification." IEEE Transactions on Neural Networks and Learning Systems, 2021, 1–14. http://dx.doi.org/10.1109/tnnls.2020.3045836.

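The setting of this study, truncated Newton methods for regularized linear classification, is easy to sketch: the Hessian of the logistic loss is never formed, only applied to vectors inside an inner CG solve. A minimal version via SciPy's Newton-CG (a generic truncated Newton solver, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import minimize
rng = np.random.default_rng(0)

# L2-regularized logistic regression on synthetic data with labels in {-1, +1}.
X = rng.normal(size=(200, 10))
y = np.sign(X @ rng.normal(size=10) + 0.1 * rng.normal(size=200))
C = 1.0

def loss(w):
    z = y * (X @ w)
    return 0.5 * w @ w + C * np.logaddexp(0, -z).sum()

def grad(w):
    z = y * (X @ w)
    return w + C * X.T @ (-y / (1 + np.exp(z)))

def hess_vec(w, v):
    z = y * (X @ w)
    sigma = 1 / (1 + np.exp(-z))
    D = sigma * (1 - sigma)          # diagonal weights of the logistic Hessian
    return v + C * X.T @ (D * (X @ v))   # (I + C X' D X) v, matrix-free

res = minimize(loss, np.zeros(10), jac=grad, hessp=hess_vec,
               method="Newton-CG")
print(res.fun, np.linalg.norm(grad(res.x)))
```
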
31. Song, L., and L. N. Vicente. "Modeling Hessian-vector products in nonlinear optimization: new Hessian-free methods." IMA Journal of Numerical Analysis, April 6, 2021. http://dx.doi.org/10.1093/imanum/drab022.

Abstract:
In this paper we suggest two ways of calculating interpolation models for unconstrained smooth nonlinear optimization when Hessian-vector products are available. The main idea is to interpolate the objective function using a quadratic on a set of points around the current one, and concurrently using the curvature information from products of the Hessian times appropriate vectors, possibly defined by the interpolating points. These enriched interpolating conditions then form an affine space of model Hessians or model Newton directions, from which a particular one can be computed once an equilibrium or least secant principle is defined. A first approach consists of recovering the Hessian matrix satisfying the enriched interpolating conditions, from which then a Newton direction model can be computed. In a second approach we pose the recovery problem directly in the Newton direction. These techniques can lead to a significant reduction in the overall number of Hessian-vector products when compared to the inexact or truncated Newton method, although simple implementations may pay a cost in the number of function evaluations and the dense linear algebra involved poses a scalability challenge.

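The Hessian-vector products these methods assume available can be obtained without forming the Hessian, for example by automatic differentiation or, as sketched below, a central finite difference of gradients:

```python
import numpy as np

def hess_vec_fd(grad, x, v, eps=1e-6):
    """Approximate H(x) v by a central difference of gradients."""
    return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)

# Quadratic test: gradient of 0.5 x'Hx with H = [[2, 1], [1, 6]].
grad = lambda x: np.array([2 * x[0] + x[1], x[0] + 6 * x[1]])
print(hess_vec_fd(grad, np.zeros(2), np.array([1.0, -1.0])))  # ≈ [1, -5]
```
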
32. Zhang, Rongzhe, Tonglin Li, Cai Liu, Xingguo Huang, Kristian Jensen, and Malte Sommer. "3-D Joint Inversion of Gravity and Magnetic Data Using Data-Space and Truncated Gauss-Newton Methods." IEEE Geoscience and Remote Sensing Letters, 2021, 1–5. http://dx.doi.org/10.1109/lgrs.2021.3077936.

33. Walker, K. P., and T. L. Sham. "A Fixed-Point Iteration Method With Quadratic Convergence." Journal of Applied Mechanics 79, no. 3 (April 5, 2012). http://dx.doi.org/10.1115/1.4005878.

Abstract:
The fixed-point iteration algorithm is turned into a quadratically convergent scheme for a system of nonlinear equations. Most of the usual methods for obtaining the roots of a system of nonlinear equations rely on expanding the equation system about the roots in a Taylor series, and neglecting the higher order terms. Rearrangement of the resulting truncated system then results in the usual Newton-Raphson and Halley type approximations. In this paper the introduction of unit root functions avoids the direct expansion of the nonlinear system about the root, and relies, instead, on approximations which enable the unit root functions to considerably widen the radius of convergence of the iteration method. Methods for obtaining higher order rates of convergence and larger radii of convergence are discussed.

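For contrast with the paper's scheme (whose unit root functions are not reproduced here), the familiar special case: Newton's method is itself a fixed-point iteration x ← x − f(x)/f′(x), and its quadratic convergence shows as the error roughly squaring at each step:

```python
import numpy as np

# Fixed-point map g(x) = x - f(x)/f'(x) for f(x) = x^2 - 2.
f  = lambda x: x ** 2 - 2.0
fp = lambda x: 2.0 * x
x = 1.0
for _ in range(5):
    x = x - f(x) / fp(x)
    print(x - np.sqrt(2.0))   # error roughly squares each iteration
```
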
34. Khayat, Majid, Abdolhossein Baghlani, Seyed Mehdi Dehghan, and Mohammad Amir Najafgholipour. "Geometrically nonlinear dynamic analysis of functionally graded porous partially fluid-filled cylindrical shells subjected to exponential loads." Journal of Vibration and Control, December 31, 2020, 107754632098246. http://dx.doi.org/10.1177/1077546320982462.

Abstract:
This article investigates the influence of graphene platelet reinforcements and nonlinear elastic foundations on the geometrically nonlinear dynamic response of a partially fluid-filled functionally graded porous cylindrical shell under exponential loading. Material properties are assumed to vary continuously through the thickness in terms of porosity and graphene platelet reinforcement. In this study, three different distributions for porosity and three different dispersions for graphene platelets have been considered in the direction of the shell thickness. The Halpin–Tsai equations are used to find the effective material properties of the graphene platelet–reinforced materials. The equations of motion are derived based on the higher-order shear deformation theory and Sanders's theory. Displacements and rotations of the shell middle surface are approximated by combining polynomial functions in the meridian direction and truncated Fourier series with an appropriate number of harmonic terms in the circumferential direction. An incremental–iterative approach is used to solve the nonlinear equations of motion of partially fluid-filled cylindrical shells based on the Newmark direct integration and Newton–Raphson methods. The governing equations of liquid motion are derived using a finite strip formulation of incompressible inviscid potential flow. A detailed numerical study is carried out to bring out the effects of some influential parameters, such as fluid depth, porosity distribution, and graphene platelet dispersion, on the nonlinear dynamic behavior of functionally graded porous nanocomposite partially fluid-filled cylindrical shells reinforced with graphene platelets.

35. DONG, Xiaoyue, Xiaofan SUN, Zhangbin YU, and Shuping HAN. "Image-Based Neonatal Hyperbilirubinemia Screening after Hospital Discharge." Iranian Journal of Public Health, June 23, 2020. http://dx.doi.org/10.18502/ijph.v49i6.3359.

Abstract:
Background: Newborn infants who are at risk for severe hyperbilirubinemia and cared for at home should be monitored for progression of jaundice. We aimed to verify whether a smart phone application (BiliScan Inc), which uses automated imaging for bilirubin (AIB), can be used to estimate total serum bilirubin (TSB) levels at home. Methods: A convenience sample of 1038 "healthy" infants in China was prospectively enrolled in a single-center study in 2016. AIB and TcB measurements were correlated with TB measurements. Bias and imprecision of AIB measurements were determined using Bland–Altman analysis. The diagnostic value of AIB was compared by the area-under-curve (AUC) values of receiver operating characteristic (ROC) curves. Results: The best correlation and AUC for AIB were at the sternum, both with values of 0.76. We truncated performance to 369 TB values >5 and <15 mg/dL, and sternal AIB showed the best correlation to TB (r = 0.5, P < 0.0001); the AUC for this range was 0.54. However, for a subset of 200 AIB values >13.5 mg/dL (n = 369 babies), the sensitivity and negative predictive value (NPV) were 100% with a specificity of 50%. Furthermore, Bland–Altman analyses showed a bias and imprecision of AIB and TcB when TB was >13.5 and <15 mg/dL. Conclusion: AIB may be a potentially useful screening device for neonatal jaundice, but its performance requires additional improvements for accurate measurement across wider ranges of TB levels.

36. Chattergoon, Natasha N., Sara McCrohan, Kent L. Thornburg, and Philip Stork. "Abstract 416: B-Raf Loss Suppresses Extracellular-Regulated Kinase Activation and Cardiomyocyte Proliferation." Circulation Research 117, suppl_1 (July 17, 2015). http://dx.doi.org/10.1161/res.117.suppl_1.416.

Abstract:
Objectives: The postnatal heart does not retain the proliferative capacity it had during fetal life. Unlike those of large mammals, murine cardiomyocytes (CM) continue to divide into the first week of life before terminal differentiation and binucleation. We hypothesized that B-Raf regulates ERK activation in newborn CM and that loss of B-Raf suppresses cyclin levels and reduces proliferation. To test this, we determined whether loss of B-Raf disrupts the ERK (extracellular-regulated kinase) cascade and impairs CM growth. Methods: CM-specific knockout (KO) of B-Raf was generated using CRE/lox (floxed B-Raf x αMHC CRE), resulting in a truncated, unstable B-Raf and a null phenotype. KO mice (α-MHC-CRE / B-Raf lox/lox) were compared to CRE-negative / B-Raf lox/lox mice (wild type; WT). Hearts from 3d and 8d old pups were harvested for molecular analysis of B-Raf signaling and cell cycle markers. Hearts from 3d old pups were also harvested and CMs isolated for culture using a trypsin/DNAse digestion. The cells were treated with isoproterenol (Iso; 10uM), forskolin (20uM), and IGF-1 (1ng/ml) for 15 min to determine whether the loss of B-Raf results in reduced activation of ERK. Results: Heart weight to body weight (HW/BW) ratio was lower in 3d KO versus 3d WT (n=50, p<0.05). HW/BW ratio became greater in 8d KO; there was no difference between 3d and 8d HW/BW in WT animals. Baseline B-Raf and phosphorylated ERK levels were reduced in KO hearts (p<0.05). Cell cycle inhibitors p21 and p53 were increased in 3d KO hearts, with decreased levels of all cyclins (p<0.05). In 8d KO hearts, increased p21, p27, and p53 expression was accompanied by increased cyclin levels (p<0.05). In vitro ERK activation was blunted in KO CMs by forskolin and Iso compared to IGF-1. Conclusions: ERK activation was suppressed in KO hearts, resulting in smaller newborn hearts that nevertheless exceeded normal HW/BW by 8d. This may represent premature hypertrophy, as the proliferative period of CM development had ended; cell cycle analysis supports reduced CM mitosis among 8d CM. Such early disturbances in normal CM growth may increase susceptibility to reduced cardiac function in the face of increased postnatal load stress.
