
Journal articles on the topic 'First Order Optimization Methods'

Consult the top journal articles listed below for your research on the topic 'First Order Optimization Methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1. Dvurechensky, Pavel, Shimrit Shtern, and Mathias Staudigl. "First-Order Methods for Convex Optimization." EURO Journal on Computational Optimization 9 (2021): 100015. http://dx.doi.org/10.1016/j.ejco.2021.100015.
2. Ershov, M. D. "First-Order Optimization Methods in Machine Learning." INFORMACIONNYE TEHNOLOGII 25, no. 11 (2019): 662–69. http://dx.doi.org/10.17587/it.25.662-669.
3. Lu, Zhaosong, and Sanyou Mei. "First-Order Penalty Methods for Bilevel Optimization." SIAM Journal on Optimization 34, no. 2 (2024): 1937–69. http://dx.doi.org/10.1137/23m1566753.
4. Chiralaksanakul, Anukal, and Sankaran Mahadevan. "First-Order Approximation Methods in Reliability-Based Design Optimization." Journal of Mechanical Design 127, no. 5 (2004): 851–57. http://dx.doi.org/10.1115/1.1899691.
Abstract:
Efficiency of reliability-based design optimization (RBDO) methods is a critical criterion as to whether they are viable for real-world problems. Early RBDO methods are thus based primarily on the first-order reliability method (FORM) due to its efficiency. Recently, several first-order RBDO methods have been proposed, and their efficiency is significantly improved through problem reformulation and/or the use of inverse FORM. Our goal is to present these RBDO methods from a mathematical optimization perspective by formalizing FORM, inverse FORM, and associated RBDO reformulations. Through the
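For orientation, the two optimization subproblems behind FORM and inverse FORM that the abstract refers to can be written in standard textbook form (generic notation in the standard-normal space u, with limit-state function g and target reliability index β_t; this is not taken from the paper itself):

```latex
\[
\text{FORM:}\quad \beta \;=\; \min_{u}\;\|u\| \;\;\text{s.t.}\;\; g(u)=0,
\qquad P_f \;\approx\; \Phi(-\beta),
\]
\[
\text{inverse FORM:}\quad g_p \;=\; \min_{u}\; g(u) \;\;\text{s.t.}\;\; \|u\| = \beta_t .
\]
```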
5. Gonzaga, Clóvis C., and Elizabeth W. Karas. "Complexity of First-Order Methods for Differentiable Convex Optimization." Pesquisa Operacional 34, no. 3 (2014): 395–419. http://dx.doi.org/10.1590/0101-7438.2014.034.03.0395.
6. Teboulle, Marc. "A simplified view of first order methods for optimization." Mathematical Programming 170, no. 1 (2018): 67–96. http://dx.doi.org/10.1007/s10107-018-1284-2.
7. Dambrine, M., Ch. Dossal, B. Puig, and A. Rondepierre. "Stochastic Differential Equations for Modeling First Order Optimization Methods." SIAM Journal on Optimization 34, no. 2 (2024): 1402–26. http://dx.doi.org/10.1137/21m1435665.
8. Villacís, David. "First Order Methods for High Resolution Image Denoising." Latin-American Journal of Computing 4, no. 3 (2017): 37–42. https://doi.org/10.5281/zenodo.5764177.
Abstract:
In this paper we are interested in comparing the performance of some of the most relevant first order non-smooth optimization methods applied to the Rudin, Osher and Fatemi(ROF) Image Denoising Model and a Primal-Dual Chambolle-Pock Image Denoising Model. Because of the properties of the resulting numerical schemes it is possible to handle these computations pixel wise, allowing implementations based on parallel paradigms which are helpful in the context of high resolution imaging.
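As a rough illustration of the kind of first-order primal-dual scheme compared in this line of work, here is a minimal NumPy sketch of the Chambolle-Pock iteration applied to the ROF model min_u ½‖u − f‖² + λ·TV(u); the discretization, step sizes, and the regularization weight `lam` are generic textbook choices, not the authors' implementation:

```python
import numpy as np

def grad(u):
    """Forward-difference discrete gradient, shape (2, H, W)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return np.stack([gx, gy])

def div(p):
    """Discrete divergence, the negative adjoint of grad."""
    px, py = p
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def rof_chambolle_pock(f, lam=0.1, n_iter=200):
    """Primal-dual iteration for min_u 0.5*||u - f||^2 + lam*TV(u)."""
    tau = sigma = 1.0 / np.sqrt(8.0)          # tau * sigma * ||grad||^2 <= 1
    u = f.copy(); u_bar = f.copy()
    p = np.zeros((2, *f.shape))
    for _ in range(n_iter):
        # dual ascent step followed by pixel-wise projection onto {|p| <= lam}
        p = p + sigma * grad(u_bar)
        scale = np.maximum(1.0, np.sqrt(p[0] ** 2 + p[1] ** 2) / lam)
        p = p / scale
        # primal step: proximal operator of 0.5*||u - f||^2
        u_old = u
        u = (u + tau * div(p) + tau * f) / (1.0 + tau)
        u_bar = 2.0 * u - u_old               # over-relaxation (theta = 1)
    return u
```

Every update above acts pixel by pixel, which is what makes parallel implementations attractive for high-resolution images, as the abstract notes.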
9. Savchuk, Oleg S., Alexander A. Titov, Fedor Sergeevich Stonyakin, and Mohammad S. Alkousa. "Adaptive first-order methods for relatively strongly convex optimization problems." Computer Research and Modeling 14, no. 2 (2022): 445–72. http://dx.doi.org/10.20537/2076-7633-2022-14-2-445-472.
10. Lu, Haihao, Robert M. Freund, and Yurii Nesterov. "Relatively Smooth Convex Optimization by First-Order Methods, and Applications." SIAM Journal on Optimization 28, no. 1 (2018): 333–54. http://dx.doi.org/10.1137/16m1099546.
11. Xin, Ran, Shi Pu, Angelia Nedic, and Usman A. Khan. "A General Framework for Decentralized Optimization With First-Order Methods." Proceedings of the IEEE 108, no. 11 (2020): 1869–89. http://dx.doi.org/10.1109/jproc.2020.3024266.
12. Devolder, Olivier, François Glineur, and Yurii Nesterov. "First-order methods of smooth convex optimization with inexact oracle." Mathematical Programming 146, no. 1-2 (2013): 37–75. http://dx.doi.org/10.1007/s10107-013-0677-5.
13. Scheinberg, Katya, Donald Goldfarb, and Xi Bai. "Fast First-Order Methods for Composite Convex Optimization with Backtracking." Foundations of Computational Mathematics 14, no. 3 (2014): 389–417. http://dx.doi.org/10.1007/s10208-014-9189-9.
14. Rovnyak, Steven M., Edwin K. P. Chong, and James Rovnyak. "First-Order Conditions for Set-Constrained Optimization." Mathematics 11, no. 20 (2023): 4274. http://dx.doi.org/10.3390/math11204274.
Abstract:
A well-known first-order necessary condition for a point to be a local minimizer of a given function is the non-negativity of the dot product of the gradient and a vector in a feasible direction. This paper proposes a series of alternative first-order necessary conditions and corresponding first-order sufficient conditions that seem not to appear in standard texts. The conditions assume a nonzero gradient. The methods use extensions of the notions of gradient, differentiability, and twice differentiability. Examples, including one involving the Karush–Kuhn–Tucker (KKT) theorem, illustrate the
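Stated compactly, the classical condition that the paper takes as its starting point reads (standard notation, not specific to this article): if x* is a local minimizer of f over a constraint set and d is a feasible direction at x*, then

```latex
\[
\nabla f(x^{*})^{\mathsf T} d \;\ge\; 0 ,
\]
```

with the familiar unconstrained special case ∇f(x*) = 0 when every direction is feasible.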
15. Necoara, Ion. "General Convergence Analysis of Stochastic First-Order Methods for Composite Optimization." Journal of Optimization Theory and Applications 189, no. 1 (2021): 66–95. http://dx.doi.org/10.1007/s10957-021-01821-2.
16. Madden, Liam, Stephen Becker, and Emiliano Dall’Anese. "Bounds for the Tracking Error of First-Order Online Optimization Methods." Journal of Optimization Theory and Applications 189, no. 2 (2021): 437–57. http://dx.doi.org/10.1007/s10957-021-01836-9.
17. Necoara, I., Yu. Nesterov, and F. Glineur. "Linear convergence of first order methods for non-strongly convex optimization." Mathematical Programming 175, no. 1-2 (2018): 69–107. http://dx.doi.org/10.1007/s10107-018-1232-1.
18. Battiti, Roberto. "First- and Second-Order Methods for Learning: Between Steepest Descent and Newton's Method." Neural Computation 4, no. 2 (1992): 141–66. http://dx.doi.org/10.1162/neco.1992.4.2.141.
Abstract:
On-line first-order backpropagation is sufficiently fast and effective for many large-scale classification problems but for very high precision mappings, batch processing may be the method of choice. This paper reviews first- and second-order optimization methods for learning in feedforward neural networks. The viewpoint is that of optimization: many methods can be cast in the language of optimization techniques, allowing the transfer to neural nets of detailed results about computational complexity and safety procedures to ensure convergence and to avoid numerical problems. The review is not
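For context, the two extremes of the spectrum surveyed here are the first-order steepest-descent update and the second-order Newton update on the training error E(w) (standard forms; g_k = ∇E(w_k), H_k = ∇²E(w_k), step size α_k):

```latex
\[
w_{k+1} \;=\; w_k - \alpha_k\, g_k \quad \text{(steepest descent)},
\qquad
w_{k+1} \;=\; w_k - H_k^{-1} g_k \quad \text{(Newton's method)},
\]
```

with quasi-Newton and conjugate-gradient schemes, as reviewed in such surveys, sitting in between by approximating curvature from gradient information only.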
19. Dragomir, Radu-Alexandru, Alexandre d’Aspremont, and Jérôme Bolte. "Quartic First-Order Methods for Low-Rank Minimization." Journal of Optimization Theory and Applications 189, no. 2 (2021): 341–63. http://dx.doi.org/10.1007/s10957-021-01820-3.
20. Djordjević, Dragan S., and Predrag S. Stanimirović. "Iterative methods for computing generalized inverses related with optimization methods." Journal of the Australian Mathematical Society 78, no. 2 (2005): 257–72. http://dx.doi.org/10.1017/s1446788700008077.
Abstract:
We develop several iterative methods for computing generalized inverses using both first and second order optimization methods in C*-algebras. Known steepest descent iterative methods are generalized in C*-algebras. We introduce second order methods based on the minimization of the norms ‖Ax − b‖² and ‖x‖² by means of the known second order unconstrained minimization methods. We give several examples which illustrate our theory.
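As a concrete instance of the first-order ingredient mentioned here, steepest descent applied to ‖Ax − b‖² uses the gradient direction A*(Ax − b) (up to a constant factor), giving the iteration (generic notation, not the paper's):

```latex
\[
x_{k+1} \;=\; x_k \;-\; \alpha_k\, A^{*}\!\left(A x_k - b\right), \qquad \alpha_k > 0 .
\]
```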
21. Taylor, Adrien B., Julien M. Hendrickx, and François Glineur. "Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization." SIAM Journal on Optimization 27, no. 3 (2017): 1283–313. http://dx.doi.org/10.1137/16m108104x.
22. Zhou, Pan, Xiao-Tong Yuan, Shuicheng Yan, and Jiashi Feng. "Faster First-Order Methods for Stochastic Non-Convex Optimization on Riemannian Manifolds." IEEE Transactions on Pattern Analysis and Machine Intelligence 43, no. 2 (2021): 459–72. http://dx.doi.org/10.1109/tpami.2019.2933841.
23. Porta, Federica, Anastasia Cornelio, and Valeria Ruggiero. "Runge–Kutta-like scaling techniques for first-order methods in convex optimization." Applied Numerical Mathematics 116 (June 2017): 256–72. http://dx.doi.org/10.1016/j.apnum.2016.08.011.
24. Lu, Zhaosong, and Sanyou Mei. "Accelerated First-Order Methods for Convex Optimization with Locally Lipschitz Continuous Gradient." SIAM Journal on Optimization 33, no. 3 (2023): 2275–310. http://dx.doi.org/10.1137/22m1500496.
25. Xie, Chenghan, Chenxi Li, Chuwen Zhang, Qi Deng, Dongdong Ge, and Yinyu Ye. "Trust Region Methods for Nonconvex Stochastic Optimization beyond Lipschitz Smoothness." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (2024): 16049–57. http://dx.doi.org/10.1609/aaai.v38i14.29537.
Abstract:
In many important machine learning applications, the standard assumption of having a globally Lipschitz continuous gradient may fail to hold. This paper delves into a more general (L0, L1)-smoothness setting, which gains particular significance within the realms of deep neural networks and distributionally robust optimization (DRO). We demonstrate the significant advantage of trust region methods for stochastic nonconvex optimization under such generalized smoothness assumption. We show that first-order trust region methods can recover the normalized and clipped stochastic gradient as special
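A minimal sketch of the clipped stochastic gradient step that, per the abstract, such first-order trust-region methods recover as a special case; the threshold and step size below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def clipped_sgd_step(x, stoch_grad, lr=0.1, clip=1.0):
    """One clipped SGD update: rescale the gradient if its norm exceeds `clip`."""
    g = stoch_grad(x)
    g_norm = np.linalg.norm(g)
    if g_norm > clip:
        g = g * (clip / g_norm)          # gradient clipping
    return x - lr * g

# Example: one step on f(x) = 0.25 * ||x||^4, whose gradient ||x||^2 * x
# grows faster than any global Lipschitz bound allows.
x = np.array([3.0, -4.0])
x = clipped_sgd_step(x, lambda z: np.dot(z, z) * z)
```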
26. Ha, Seung-Yeal, Shi Jin, and Doheon Kim. "Convergence of a first-order consensus-based global optimization algorithm." Mathematical Models and Methods in Applied Sciences 30, no. 12 (2020): 2417–44. http://dx.doi.org/10.1142/s0218202520500463.
Abstract:
Global optimization of a non-convex objective function often appears in large-scale machine learning and artificial intelligence applications. Recently, consensus-based optimization (CBO) methods have been introduced as one of the gradient-free optimization methods. In this paper, we provide a convergence analysis for the first-order CBO method in [J. A. Carrillo, S. Jin, L. Li and Y. Zhu, A consensus-based global optimization method for high dimensional machine learning problems, https://arxiv.org/abs/1909.09249v1 ]. Prior to this work, the convergence study was carried out for CBO methods on
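To make the method being analyzed concrete, the following is a simplified, discrete-time sketch of a consensus-based optimization update (particles drift toward a softmin-weighted consensus point and keep scaled exploration noise); it omits details of the cited scheme such as the exact drift weighting and noise model, and all parameter values are illustrative:

```python
import numpy as np

def cbo_step(X, f, beta=30.0, lam=1.0, sigma=0.7, dt=0.01, rng=None):
    """One simplified consensus-based optimization (CBO) update.

    X: (N, d) array of particle positions; f: objective evaluated row-wise."""
    rng = np.random.default_rng() if rng is None else rng
    fx = np.apply_along_axis(f, 1, X)
    w = np.exp(-beta * (fx - fx.min()))              # softmin weights (shifted for stability)
    x_bar = (w[:, None] * X).sum(axis=0) / w.sum()   # weighted consensus point
    noise = rng.standard_normal(X.shape)
    # drift toward the consensus point plus componentwise-scaled exploration noise
    return X - lam * dt * (X - x_bar) + sigma * np.sqrt(dt) * (X - x_bar) * noise

# Example: 200 particles minimizing a shifted sphere function in 5 dimensions.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 5))
for _ in range(500):
    X = cbo_step(X, lambda x: np.sum((x - 1.0) ** 2), rng=rng)
```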
27. Nesterov, Yurii, and Arkadi Nemirovski. "On first-order algorithms for l1/nuclear norm minimization." Acta Numerica 22 (April 2, 2013): 509–75. http://dx.doi.org/10.1017/s096249291300007x.
Abstract:
In the past decade, problems related to l1/nuclear norm minimization have attracted much attention in the signal processing, machine learning and optimization communities. In this paper, devoted to l1/nuclear norm minimization as ‘optimization beasts’, we give a detailed description of two attractive first-order optimization techniques for solving problems of this type. The first one, aimed primarily at lasso-type problems, comprises fast gradient methods applied to composite minimization formulations. The second approach, aimed at Dantzig-selector-type problems, utilizes saddle-point first-or
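As a small illustration of the first family mentioned here, a fast-gradient treatment of the lasso problem min_x ½‖Ax − b‖² + λ‖x‖₁ is built on the proximal-gradient (soft-thresholding) step below; accelerated variants add an extrapolation step on top of the same operation. This is a generic sketch, not code from the article:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (component-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(A, b, lam, n_iter=500):
    """Proximal gradient method for  min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```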
28. Gracia, Victor, Pablo Krupa, Teodoro Alamo, and Daniel Limon. "Efficient Online Update of Model Predictive Control in Embedded Systems Using First-Order Methods." IEEE Control Systems Letters 7 (December 7, 2023): 3693–98. https://doi.org/10.1109/LCSYS.2023.3341283.
Abstract:
Model Predictive Control (MPC) is typically characterized for being computationally demanding, as it requires solving optimization problems online; a particularly relevant point when considering its implementation in embedded systems. To reduce the computational burden of the optimization algorithm, most solvers perform as many offline operations as possible, typically performing the computation and factorization of its expensive matrices offline and then storing them in the embedded system. This improves the efficiency of the solver, with the disadvantage that online changes on some of the in
29. Alamo, Teodoro, Pablo Krupa, and Daniel Limon. "Restart of Accelerated First-Order Methods With Linear Convergence Under a Quadratic Functional Growth Condition." IEEE Transactions on Automatic Control 68, no. 1 (2023): 612–19. https://doi.org/10.1109/TAC.2022.3146054.
Abstract:
Accelerated first-order methods, also called fast gradient methods, are popular optimization methods in the field of convex optimization. However, they are prone to suffer from oscillatory behavior that slows their convergence when medium to high accuracy is desired. In order to address this, restart schemes have been proposed in the literature, which seek to improve the practical convergence by suppressing the oscillatory behavior. This article presents a restart scheme for accelerated first-order methods, for which we show linear convergence under the satisfaction of a quadratic functional g
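The general idea can be sketched with a Nesterov-type accelerated gradient loop plus a simple function-value restart rule that resets the momentum whenever the objective increases; the actual restart condition and the linear-convergence guarantee in the article are tied to its quadratic functional growth assumption, so this is only a generic illustration:

```python
import numpy as np

def accelerated_gradient_with_restart(f, grad_f, x0, step, n_iter=1000):
    """Nesterov-style accelerated gradient with a function-value restart rule."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    f_prev = f(x)
    for _ in range(n_iter):
        x_new = y - step * grad_f(y)                    # gradient step at the extrapolated point
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum / extrapolation
        if f(x_new) > f_prev:                           # restart: suppress the oscillation
            y, t_new = x_new.copy(), 1.0
        x, t, f_prev = x_new, t_new, f(x_new)
    return x

# Example: an ill-conditioned quadratic, where restarts damp the oscillations.
Q = np.diag([1.0, 100.0])
x_opt = accelerated_gradient_with_restart(
    lambda z: 0.5 * z @ Q @ z, lambda z: Q @ z,
    x0=np.array([10.0, 10.0]), step=1.0 / 100.0)
```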
30. Beck, Amir, and Nili Guttmann-Beck. "FOM – a MATLAB toolbox of first-order methods for solving convex optimization problems." Optimization Methods and Software 34, no. 1 (2018): 172–93. http://dx.doi.org/10.1080/10556788.2018.1437159.
31. Denisov, S. V., and V. V. Semenov. "First-Order Methods for Generalized Optimal Control Problems for Systems with Distributed Parameters." Journal of Numerical and Applied Mathematics, no. 2 (134) (2020): 18–44. http://dx.doi.org/10.17721/2706-9699.2020.2.02.
Abstract:
The problems of optimization of linear distributed systems with generalized control and first-order methods for their solution are considered. The main focus is on proving the convergence of methods. It is assumed that the operator describing the model satisfies a priori estimates in negative norms. For control problems with convex and preconvex admissible sets, the convergence of several first-order algorithms with errors in iterative subproblems is proved.
32. Talaei, Shayan, Matin Ansaripour, Giorgi Nadiradze, and Dan Alistarh. "Hybrid Decentralized Optimization: Leveraging Both First- and Zeroth-Order Optimizers for Faster Convergence." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 19 (2025): 20778–86. https://doi.org/10.1609/aaai.v39i19.34290.
Abstract:
Distributed optimization is the standard way of speeding up machine learning training, and most of the research in the area focuses on distributed first-order, gradient-based methods. Yet, there are settings where some computationally-bounded nodes may not be able to implement first-order, gradient-based optimization, while they could still contribute to joint optimization tasks. In this paper, we initiate the study of hybrid decentralized optimization, studying settings where nodes with zeroth-order and first-order optimization capabilities co-exist in a distributed system, and attempt to joi
33. Necoara, I., and A. Patrascu. "Iteration complexity analysis of dual first-order methods for conic convex programming." Optimization Methods and Software 31, no. 3 (2016): 645–78. http://dx.doi.org/10.1080/10556788.2016.1161763.
34. Larson, Jeffrey, Matt Menickelly, and Stefan M. Wild. "Derivative-free optimization methods." Acta Numerica 28 (May 1, 2019): 287–404. http://dx.doi.org/10.1017/s0962492919000060.
Abstract:
In many optimization problems arising from scientific, engineering and artificial intelligence applications, objective and constraint functions are available only as the output of a black-box or simulation oracle that does not provide derivative information. Such settings necessitate the use of methods for derivative-free, or zeroth-order, optimization. We provide a review and perspectives on developments in these methods, with an emphasis on highlighting recent developments and on unifying treatment of such problems in the non-linear optimization and machine learning literature. We categorize
35. Jia, Zhichao, and Benjamin Grimmer. "First-Order Methods for Nonsmooth Nonconvex Functional Constrained Optimization with or without Slater Points." SIAM Journal on Optimization 35, no. 2 (2025): 1300–1329. https://doi.org/10.1137/23m1569551.
36. Bubeck, Sébastien, Ronen Eldan, and Yin Tat Lee. "Kernel-based Methods for Bandit Convex Optimization." Journal of the ACM 68, no. 4 (2021): 1–35. http://dx.doi.org/10.1145/3453721.
Abstract:
We consider the adversarial convex bandit problem and we build the first poly(T)-time algorithm with poly(n)√T-regret for this problem. To do so, we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves Õ(n^9.5 √T)-regret, and we show that a simple variant of this algorithm can be run in poly(n log(T))-time per step (for polytopes with polynomially many const
37. Torkzadeh, P., J. Salajegheh, and E. Salajegheh. "Efficient Methods for Structural Optimization with Frequency Constraints Using Higher Order Approximations." International Journal of Structural Stability and Dynamics 8, no. 3 (2008): 439–50. http://dx.doi.org/10.1142/s0219455408002739.
Abstract:
Presented herein are four different methods for the optimum design of structures subject to multiple natural frequency constraints. During the optimization process the optimum cross-sectional dimensions of elements are determined. These methods are robust and efficient in terms of the number of eigenvalue analyses required, as well as the overall computational time for the optimum design. A new third order approximate function is presented for the structural response quantities, as functions of the cross-sectional properties, and four different methods for the optimum design are defined based
38. Métivier, Ludovic, and Romain Brossier. "The SEISCOPE optimization toolbox: A large-scale nonlinear optimization library based on reverse communication." GEOPHYSICS 81, no. 2 (2016): F1–F15. http://dx.doi.org/10.1190/geo2015-0031.1.
Abstract:
The SEISCOPE optimization toolbox is a set of FORTRAN 90 routines, which implement first-order methods (steepest-descent and nonlinear conjugate gradient) and second-order methods (l-BFGS and truncated Newton), for the solution of large-scale nonlinear optimization problems. An efficient line-search strategy ensures the robustness of these implementations. The routines are proposed as black boxes easy to interface with any computational code, where such large-scale minimization problems have to be solved. Traveltime tomography, least-squares migration, or full-waveform invers
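All four solver families named in the abstract fit one generic descent template, differing only in how the search direction d_k is built (α_k comes from the line search; notation is generic, not the toolbox's API):

```latex
\[
x_{k+1} = x_k + \alpha_k d_k,
\qquad
d_k =
\begin{cases}
-\nabla f(x_k), & \text{steepest descent},\\[2pt]
-\nabla f(x_k) + \beta_k d_{k-1}, & \text{nonlinear conjugate gradient},\\[2pt]
-B_k \nabla f(x_k),\; B_k \approx \nabla^2 f(x_k)^{-1}, & l\text{-BFGS},\\[2pt]
d_k \approx -\nabla^2 f(x_k)^{-1}\nabla f(x_k), & \text{truncated Newton}.
\end{cases}
\]
```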
39. Ahookhosh, Masoud. "Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity." Mathematical Methods of Operations Research 89, no. 3 (2019): 319–53. http://dx.doi.org/10.1007/s00186-019-00674-w.
40. Li, Zichong, Pin-Yu Chen, Sijia Liu, Songtao Lu, and Yangyang Xu. "Zeroth-Order Optimization for Composite Problems with Functional Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (2022): 7453–61. http://dx.doi.org/10.1609/aaai.v36i7.20709.
Abstract:
In many real-world problems, first-order (FO) derivative evaluations are too expensive or even inaccessible. For solving these problems, zeroth-order (ZO) methods that only need function evaluations are often more efficient than FO methods or sometimes the only options. In this paper, we propose a novel zeroth-order inexact augmented Lagrangian method (ZO-iALM) to solve black-box optimization problems, which involve a composite (i.e., smooth+nonsmooth) objective and functional constraints. This appears to be the first work that develops an iALM-based ZO method for functional constrained optimi
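The building block of such ZO methods is a gradient estimate formed from function values only, e.g. the standard two-point random-direction estimator sketched below (generic; the smoothing parameter mu and sample count are illustrative, and this is not the paper's ZO-iALM algorithm itself):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, n_samples=20, rng=None):
    """Two-point zeroth-order gradient estimate using random Gaussian directions."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros(x.size)
    for _ in range(n_samples):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - f(x)) / mu * u      # finite difference along direction u
    return g / n_samples

# Example: the estimate approximates the true gradient of a smooth function.
f = lambda z: np.sum(z ** 2)
x = np.array([1.0, -2.0, 0.5])
print(zo_gradient(f, x), "vs true gradient", 2 * x)
```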
41. Wang, Wei, Xiaoshan Zhang, and Min Li. "A Filled Function Method Dominated by Filter for Nonlinearly Global Optimization." Journal of Applied Mathematics 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/245427.
Abstract:
This work presents a filled function method based on the filter technique for global optimization. Filled function method is one of the effective methods for nonlinear global optimization, since it can effectively find a better minimizer. Filter technique is applied to local optimization methods for its excellent numerical results. In order to optimize the filled function method, the filter method is employed for global optimizations in this method. A new filled function is proposed first, and then the algorithm and its properties are proved. The numerical results are listed at the end.
42. Murinto, Murinto, Nur Rochmah Dyah Puji Astuti, and Murein Miksa Mardhia. "Multilevel thresholding hyperspectral image segmentation based on independent component analysis and swarm optimization methods." International Journal of Advances in Intelligent Informatics 5, no. 1 (2019): 66. http://dx.doi.org/10.26555/ijain.v5i1.311.
Abstract:
High dimensional problems are often encountered in studies related to hyperspectral data. One of the challenges that arise is how to find representations that are accurate so that important structures can be clearly easily. This study aims to process segmentation of hyperspectral image by using swarm optimization techniques. This experiments use Aviris Indian Pines hyperspectral image dataset that consist of 103 bands. The method used for segmentation image is particle swarm optimization (PSO), Darwinian particle swarm optimization (DPSO) and fractional order Darwinian particle swarm optimizat
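For reference, the particle swarm update underlying PSO and its Darwinian and fractional-order variants is (standard form: inertia weight w, acceleration coefficients c_1, c_2, uniform random factors r_1, r_2, personal best p_i, global best g):

```latex
\[
v_i^{t+1} = w\, v_i^{t} + c_1 r_1 \left(p_i - x_i^{t}\right) + c_2 r_2 \left(g - x_i^{t}\right),
\qquad
x_i^{t+1} = x_i^{t} + v_i^{t+1}.
\]
```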
43. Leon, Florin, Petru Caşcaval, and Costin Bădică. "Optimization Methods for Redundancy Allocation in Large Systems." Vietnam Journal of Computer Science 7, no. 3 (2020): 281–99. http://dx.doi.org/10.1142/s2196888820500165.
Abstract:
This paper addresses the issue of optimal allocation of spare modules in large series-redundant systems in order to obtain a required reliability under cost constraints. Both cases of active and standby redundancy are considered. Moreover, for a subsystem with standby redundancy, two cases are examined: in the first case, all the spares are maintained in cold state (cold standby redundancy) and, in the second one, to reduce the time needed to put a spare into operation when the active one fails, one of the spares is maintained in warm conditions. To solve this optimization problem, for the sim
44. Ito, Masaru, and Mituhiro Fukuda. "Nearly Optimal First-Order Methods for Convex Optimization under Gradient Norm Measure: an Adaptive Regularization Approach." Journal of Optimization Theory and Applications 188, no. 3 (2021): 770–804. http://dx.doi.org/10.1007/s10957-020-01806-7.
45. Kong, Weiwei. "Complexity-Optimal and Parameter-Free First-Order Methods for Finding Stationary Points of Composite Optimization Problems." SIAM Journal on Optimization 34, no. 3 (2024): 3005–32. http://dx.doi.org/10.1137/22m1498826.
46. Muhamediyeva, Dilnoz, Nilufar Niyozmatova, Dilfuza Yusupova, and Boymirzo Samijonov. "Quantum optimization methods in water flow control." E3S Web of Conferences 590 (2024): 02003. http://dx.doi.org/10.1051/e3sconf/202459002003.
Abstract:
This paper examines the problem of optimizing water flow control in order to minimize costs, represented as the square of the water flow. This takes into account restrictions on this flow, such as the maximum flow value. To solve this problem, two optimization methods are used: the classical optimization method Sequential Least SQuares Programming (SLSQP) and the quantum optimization method Variational Quantum Eigensolver (VQE). First, the classical SLSQP method finds the optimal control (water flow) according to the given cost function and constraints. Then the obtained result is refined usin
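The classical half of this pipeline is available off the shelf; a toy version of the stated problem (minimize the squared flow cost subject to a maximum flow value, plus an assumed minimum-demand constraint added here only to make the example non-trivial) can be solved with SciPy's SLSQP as follows:

```python
import numpy as np
from scipy.optimize import minimize

# Toy illustration: minimize the squared flow cost subject to an upper bound
# on the flow and an (assumed, illustrative) minimum-demand constraint.
q_max, q_demand = 10.0, 3.0

res = minimize(
    fun=lambda q: q[0] ** 2,                       # cost = square of the water flow
    x0=np.array([5.0]),
    method="SLSQP",
    bounds=[(0.0, q_max)],                         # 0 <= q <= q_max
    constraints=[{"type": "ineq", "fun": lambda q: q[0] - q_demand}],  # q >= demand
)
print(res.x)   # -> [3.], the smallest flow that still meets the demand
```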
47. Zhang, Huimin, Runsen Zhang, Yufeng Xing, and Pierangelo Masarati. "On the optimization of n-sub-step composite time integration methods." Nonlinear Dynamics 102, no. 3 (2020): 1939–62. http://dx.doi.org/10.1007/s11071-020-06020-8.
Abstract:
A family of n-sub-step composite time integration methods, which employs the trapezoidal rule in the first n − 1 sub-steps and a general formula in the last one, is discussed in this paper. A universal approach to optimize the parameters is provided for any cases of n ≥ 2, and two optimal sub-families of the method are given for different purposes. From linear analysis, the first sub-family can achieve nth-order accuracy and unconditional stability with controllable algorithmic dissipation, so it is recommended for high-accuracy purposes. The second sub-family has s
48. Yakubu, Saidu Daudu, and Precious Sibanda. "One-Step Family of Three Optimized Second-Derivative Hybrid Block Methods for Solving First-Order Stiff Problems." Journal of Applied Mathematics 2024 (April 29, 2024): 1–18. http://dx.doi.org/10.1155/2024/5078943.
Abstract:
This paper introduces a novel approach for solving first-order stiff initial value problems through the development of a one-step family of three optimized second-derivative hybrid block methods. The optimization process was integrated into the derivation of the methods to achieve maximal accuracy. Through a rigorous analysis, it was determined that the methods exhibit properties of consistency, zero-stability, convergence, and A-stability. The proposed methods were implemented using the waveform relaxation technique, and the computed results demonstrated the superiority of these schemes over
49. Yang, Suh-Yuh, and Jinn-Liang Liu. "Analysis of least squares finite element methods for a parameter-dependent first-order system." Numerical Functional Analysis and Optimization 19, no. 1-2 (1998): 191–213. http://dx.doi.org/10.1080/01630569808816823.