A selection of scholarly literature on the topic "Approximate norm descent methods"



Consult the lists of relevant articles, books, dissertations, theses, and other scholarly sources on the topic "Approximate norm descent methods".



Journal articles on the topic "Approximate norm descent methods"

1

Morini, Benedetta, Margherita Porcelli, and Philippe L. Toint. "Approximate norm descent methods for constrained nonlinear systems." Mathematics of Computation 87, no. 311 (2017): 1327–51. http://dx.doi.org/10.1090/mcom/3251.

2

Jin, Yang, Li, and Liu. "Sparse Recovery Algorithm for Compressed Sensing Using Smoothed l0 Norm and Randomized Coordinate Descent." Mathematics 7, no. 9 (2019): 834. http://dx.doi.org/10.3390/math7090834.

Abstract:
Compressed sensing theory is widely used in the field of fault signal diagnosis and image processing. Sparse recovery is one of the core concepts of this theory. In this paper, we proposed a sparse recovery algorithm using a smoothed l0 norm and a randomized coordinate descent (RCD), then applied it to sparse signal recovery and image denoising. We adopted a new strategy to express the (P0) problem approximately and put forward a sparse recovery algorithm using RCD. In the computer simulation experiments, we compared the performance of this algorithm to other typical methods. The results show that our algorithm possesses higher precision in sparse signal recovery. Moreover, it achieves higher signal to noise ratio (SNR) and faster convergence speed in image denoising.
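The abstract above combines a smoothed l0 surrogate with randomized coordinate descent (RCD). Below is a minimal sketch of that combination, assuming the common Gaussian surrogate sum_i(1 − exp(−x_i²/2σ²)) and a penalized least-squares formulation; the paper's exact surrogate, continuation schedule, and step-size rules may differ, and the names rcd_smoothed_l0, sigma, and lam are illustrative placeholders.

```python
import numpy as np

def rcd_smoothed_l0(A, y, sigma=0.5, lam=0.1, n_iters=5000, seed=0):
    """Sketch: minimize 0.5*||Ax - y||^2 + lam * sum(1 - exp(-x_i^2 / (2 sigma^2)))
    by randomized coordinate descent (one coordinate gradient step per iteration).
    An illustrative surrogate for the (P0) problem, not the authors' exact algorithm."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - y                      # residual, updated coordinate-wise
    col_sq = (A ** 2).sum(axis=0)      # per-coordinate curvature of the data term
    for _ in range(n_iters):
        i = rng.integers(n)            # pick a random coordinate
        # gradient of the smooth objective with respect to x_i
        g = A[:, i] @ r + lam * (x[i] / sigma**2) * np.exp(-x[i]**2 / (2 * sigma**2))
        step = 1.0 / (col_sq[i] + lam / sigma**2)   # crude step-size bound
        x_new = x[i] - step * g
        r += A[:, i] * (x_new - x[i])  # keep the residual consistent
        x[i] = x_new
    return x

# toy usage: recover a 5-sparse vector from 40 random measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
x_hat = rcd_smoothed_l0(A, A @ x_true, sigma=0.2, lam=0.05, n_iters=20000)
```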
3

Xu, Kai, and Zhi Xiong. "Nonparametric Tensor Completion Based on Gradient Descent and Nonconvex Penalty." Symmetry 11, no. 12 (2019): 1512. http://dx.doi.org/10.3390/sym11121512.

Abstract:
Existing tensor completion methods all require some hyperparameters. However, these hyperparameters determine the performance of each method, and it is difficult to tune them. In this paper, we propose a novel nonparametric tensor completion method, which formulates tensor completion as an unconstrained optimization problem and designs an efficient iterative method to solve it. In each iteration, we not only calculate the missing entries by the aid of data correlation, but consider the low-rank of tensor and the convergence speed of iteration. Our iteration is based on the gradient descent method, and approximates the gradient descent direction with tensor matricization and singular value decomposition. Considering the symmetry of every dimension of a tensor, the optimal unfolding direction in each iteration may be different. So we select the optimal unfolding direction by scaled latent nuclear norm in each iteration. Moreover, we design formula for the iteration step-size based on the nonconvex penalty. During the iterative process, we store the tensor in sparsity and adopt the power method to compute the maximum singular value quickly. The experiments of image inpainting and link prediction show that our method is competitive with six state-of-the-art methods.
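As a rough illustration of the iteration pattern the abstract describes (gradient step toward the observed entries, matricization, SVD-based low-rank approximation, mode selection by a nuclear-norm criterion), here is a hedged Python sketch; the unfolding-selection rule, step size, nonconvex penalty, and power-method acceleration of the paper are replaced by simple stand-ins, so this is not the authors' algorithm.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def complete_tensor(T_obs, mask, rank=3, n_iters=200, step=1.0):
    """Illustrative completion loop: gradient step on the observed entries, then a rank-`rank`
    truncated SVD of one unfolding as a low-rank projection. The mode is chosen by the smallest
    scaled nuclear norm of the unfolding (a crude proxy for the paper's scaled latent nuclear
    norm criterion); the paper's step-size formula and penalty are not reproduced."""
    X = T_obs.copy()
    for _ in range(n_iters):
        X = X - step * mask * (X - T_obs)            # gradient of 0.5*||mask*(X - T_obs)||^2
        mode = min(range(X.ndim),
                   key=lambda m: np.linalg.norm(unfold(X, m), ord='nuc') / np.sqrt(X.shape[m]))
        U, s, Vt = np.linalg.svd(unfold(X, mode), full_matrices=False)
        s[rank:] = 0.0                               # keep only the leading singular values
        X = fold((U * s) @ Vt, mode, X.shape)
        X[mask] = T_obs[mask]                        # keep observed entries fixed
    return X

# toy usage: complete a random rank-3 20x20x20 tensor with 30% observed entries
rng = np.random.default_rng(0)
U = [rng.standard_normal((20, 3)) for _ in range(3)]
T_true = np.einsum('ir,jr,kr->ijk', *U)
mask = rng.random(T_true.shape) < 0.3
X_hat = complete_tensor(T_true * mask, mask, rank=3)
```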
4

Ko, Dongnam, and Enrique Zuazua. "Model predictive control with random batch methods for a guiding problem." Mathematical Models and Methods in Applied Sciences 31, no. 08 (2021): 1569–92. http://dx.doi.org/10.1142/s0218202521500329.

Abstract:
We model, simulate and control the guiding problem for a herd of evaders under the action of repulsive drivers. The problem is formulated in an optimal control framework, where the drivers (controls) aim to guide the evaders (states) to a desired region of the Euclidean space. The numerical simulation of such models quickly becomes unfeasible for a large number of interacting agents, as the number of interactions grows as O(N²) for N agents. To reduce the computational cost to O(N), we use the Random Batch Method (RBM), which provides a computationally feasible approximation of the dynamics. First, the considered time interval is divided into a number of subintervals. In each subinterval, the RBM randomly divides the set of particles into small subsets (batches), considering only the interactions inside each batch. Due to the averaging effect, the RBM approximation converges to the exact dynamics in an expectation norm as the length of the subintervals goes to zero. For this approximated dynamics, the corresponding optimal control can be computed efficiently using a classical gradient descent. The resulting control is not optimal for the original system, but for the reduced RBM model. We therefore adopt a Model Predictive Control (MPC) strategy to handle the error in the dynamics. This leads to a semi-feedback control strategy, where the control is applied to the original system only for a short time interval, after which the optimal control is recomputed for the next time interval from the state of the (controlled) original dynamics. Through numerical experiments we show that the combination of RBM and MPC leads to a significant reduction of the computational cost, while preserving the capacity to control the overall dynamics.
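The Random Batch Method step described above (shuffle the agents, split them into small batches, keep only intra-batch interactions on each subinterval) can be sketched as follows; the interaction kernel, batch size, and time discretization are illustrative assumptions, and the optimal-control and MPC layers of the paper are omitted.

```python
import numpy as np

def random_batch_step(x, kernel, dt, batch_size, rng):
    """One RBM subinterval: shuffle the N agents, split them into batches, and advance each
    agent using only the interactions inside its own batch (averaged over batch_size - 1
    neighbours instead of N - 1). Forward Euler is used purely for illustration."""
    N = x.shape[0]
    perm = rng.permutation(N)
    for start in range(0, N, batch_size):
        batch = perm[start:start + batch_size]
        xb = x[batch]                                    # positions in this batch
        diffs = xb[None, :, :] - xb[:, None, :]          # pairwise differences within the batch
        force = kernel(diffs).sum(axis=1) / max(len(batch) - 1, 1)
        x[batch] = xb + dt * force
    return x

# toy usage: N agents in 2D with a hypothetical smooth repulsive kernel
rng = np.random.default_rng(0)
N, dt, T, batch_size = 200, 0.01, 1.0, 2
x = rng.standard_normal((N, 2))
kernel = lambda d: -d * np.exp(-np.linalg.norm(d, axis=-1, keepdims=True) ** 2)
for _ in range(int(T / dt)):        # here each Euler step plays the role of one subinterval
    x = random_batch_step(x, kernel, dt, batch_size, rng)
```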
5

Utomo, Rukmono Budi. "METODE NUMERIK STEPEST DESCENT DENGAN DIRECTION DAN NORMRERATA ARITMATIKA." AKSIOMA Journal of Mathematics Education 5, no. 2 (2017): 128. http://dx.doi.org/10.24127/ajpm.v5i2.673.

Abstract:
This research investigates a Steepest Descent numerical method whose direction and norm are based on the arithmetic mean. We begin by reviewing the standard Steepest Descent method and its algorithm. We then construct a new Steepest Descent method that uses an alternative direction and norm, namely the arithmetic mean. The paper also contains worked numerical examples using both methods and analyzes them.
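For context, here is a minimal sketch of the classical steepest descent baseline that the paper starts from (negative-gradient direction with Armijo backtracking); the arithmetic-mean direction and norm introduced in the paper are not reproduced here.

```python
import numpy as np

def steepest_descent(f, grad, x0, alpha0=1.0, beta=0.5, c=1e-4, tol=1e-8, max_iter=500):
    """Classical steepest descent: move along the negative gradient with an Armijo
    backtracking line search until the gradient norm is small."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        alpha = alpha0
        while f(x - alpha * g) > f(x) - c * alpha * (g @ g):   # Armijo sufficient decrease
            alpha *= beta
        x = x - alpha * g
    return x

# toy usage on a convex quadratic
f = lambda x: 0.5 * x @ np.diag([1.0, 10.0]) @ x
grad = lambda x: np.diag([1.0, 10.0]) @ x
x_star = steepest_descent(f, grad, np.array([3.0, -2.0]))
```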
6

Goh, B. S. "Approximate Greatest Descent Methods for Optimization with Equality Constraints." Journal of Optimization Theory and Applications 148, no. 3 (2010): 505–27. http://dx.doi.org/10.1007/s10957-010-9765-3.

7

Xiao, Yunhai, Chunjie Wu, and Soon-Yi Wu. "Norm descent conjugate gradient methods for solving symmetric nonlinear equations." Journal of Global Optimization 62, no. 4 (2014): 751–62. http://dx.doi.org/10.1007/s10898-014-0218-7.

8

Qiu, Yixuan, and Xiao Wang. "Stochastic Approximate Gradient Descent via the Langevin Algorithm." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 5428–35. http://dx.doi.org/10.1609/aaai.v34i04.5992.

Abstract:
We introduce a novel and efficient algorithm called the stochastic approximate gradient descent (SAGD), as an alternative to the stochastic gradient descent for cases where unbiased stochastic gradients cannot be trivially obtained. Traditional methods for such problems rely on general-purpose sampling techniques such as Markov chain Monte Carlo, which typically requires manual intervention for tuning parameters and does not work efficiently in practice. Instead, SAGD makes use of the Langevin algorithm to construct stochastic gradients that are biased in finite steps but accurate asymptotically, enabling us to theoretically establish the convergence guarantee for SAGD. Inspired by our theoretical analysis, we also provide useful guidelines for its practical implementation. Finally, we show that SAGD performs well experimentally in popular statistical and machine learning problems such as the expectation-maximization algorithm and the variational autoencoders.
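A minimal sketch of the pattern the abstract describes, under assumed notation: the likelihood gradient of an unnormalized model contains an intractable expectation, which is estimated with a few unadjusted Langevin iterations, giving gradients that are biased in finite steps but accurate asymptotically. The toy Gaussian energy, step sizes, and iteration counts below are illustrative assumptions, not the SAGD paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized model p_theta(x) ∝ exp(-theta * x**2 / 2) (a Gaussian with precision theta).
# The log-likelihood gradient  d/dtheta log p_theta(x) = -x**2/2 + E_{x'~p_theta}[x'**2/2]
# contains an intractable expectation; we estimate it with a few Langevin steps.

def langevin_samples(theta, n_samples, n_steps=50, eta=0.01):
    """Unadjusted Langevin algorithm targeting p_theta: x <- x - eta*U'(x) + sqrt(2*eta)*noise."""
    x = rng.standard_normal(n_samples)
    for _ in range(n_steps):
        x = x - eta * theta * x + np.sqrt(2 * eta) * rng.standard_normal(n_samples)
    return x

def approximate_gradient(theta, data, n_samples=256):
    """Biased-but-asymptotically-accurate stochastic gradient of the negative log-likelihood."""
    model_x = langevin_samples(theta, n_samples)
    return 0.5 * np.mean(data ** 2) - 0.5 * np.mean(model_x ** 2)

# toy usage: data drawn with true precision 4.0; descent steps use Langevin-based gradients
data = rng.normal(0.0, 0.5, size=2000)            # std 0.5  <=>  precision 4.0
theta = 1.0
for t in range(300):
    theta -= 0.5 * approximate_gradient(theta, data)
    theta = max(theta, 1e-3)                      # keep the precision positive
```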
9

Yang, Yin, and Yunqing Huang. "Spectral-Collocation Methods for Fractional Pantograph Delay-Integrodifferential Equations." Advances in Mathematical Physics 2013 (2013): 1–14. http://dx.doi.org/10.1155/2013/821327.

Abstract:
We propose and analyze a spectral Jacobi-collocation approximation for fractional order integrodifferential equations of Volterra type with pantograph delay. The fractional derivative is described in the Caputo sense. We provide a rigorous error analysis for the collocation method, which shows that the error of the approximate solution decays exponentially in the L∞-norm and the weighted L2-norm. Numerical examples are given to illustrate the theoretical results.
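For readers unfamiliar with the terminology, the display below fixes notation for the Caputo fractional derivative (order 0 < α < 1) and shows a representative pantograph-delay Volterra integro-differential equation; it is an illustrative form only, and the exact equation class and the Jacobi-collocation construction analyzed in the paper may differ.

```latex
% Caputo fractional derivative of order 0 < \alpha < 1, and a representative
% pantograph-delay Volterra integro-differential equation (illustrative form only):
\[
  {}^{C}\!D^{\alpha} y(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} (t-s)^{-\alpha}\, y'(s)\, \mathrm{d}s ,
\]
\[
  {}^{C}\!D^{\alpha} y(t) \;=\; a(t)\, y(t) + b(t)\, y(qt)
      + \int_{0}^{qt} K(t,s)\, y(s)\, \mathrm{d}s + g(t),
  \qquad 0 < q < 1, \quad y(0) = y_0 .
\]
```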
10

Poggio, Tomaso, Andrzej Banburski, and Qianli Liao. "Theoretical issues in deep networks." Proceedings of the National Academy of Sciences 117, no. 48 (2020): 30039–45. http://dx.doi.org/10.1073/pnas.1907369117.

Abstract:
While deep learning is successful in a number of applications, it is not yet well understood theoretically. A theoretical characterization of deep learning should answer questions about their approximation power, the dynamics of optimization, and good out-of-sample performance, despite overparameterization and the absence of explicit regularization. We review our recent results toward this goal. In approximation theory both shallow and deep networks are known to approximate any continuous functions at an exponential cost. However, we proved that for certain types of compositional functions, deep networks of the convolutional type (even without weight sharing) can avoid the curse of dimensionality. In characterizing minimization of the empirical exponential loss we consider the gradient flow of the weight directions rather than the weights themselves, since the relevant function underlying classification corresponds to normalized networks. The dynamics of normalized weights turn out to be equivalent to those of the constrained problem of minimizing the loss subject to a unit norm constraint. In particular, the dynamics of typical gradient descent have the same critical points as the constrained problem. Thus there is implicit regularization in training deep networks under exponential-type loss functions during gradient flow. As a consequence, the critical points correspond to minimum norm infima of the loss. This result is especially relevant because it has been recently shown that, for overparameterized models, selection of a minimum norm solution optimizes cross-validation leave-one-out stability and thereby the expected error. Thus our results imply that gradient descent in deep networks minimize the expected error.
