Academic literature on the topic 'Approximate norm descent methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Approximate norm descent methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Approximate norm descent methods"

1. Morini, Benedetta, Margherita Porcelli, and Philippe L. Toint. "Approximate norm descent methods for constrained nonlinear systems." Mathematics of Computation 87, no. 311 (2017): 1327–51. http://dx.doi.org/10.1090/mcom/3251.

2. Jin, Yang, Li, and Liu. "Sparse Recovery Algorithm for Compressed Sensing Using Smoothed l0 Norm and Randomized Coordinate Descent." Mathematics 7, no. 9 (2019): 834. http://dx.doi.org/10.3390/math7090834.

Abstract:
Compressed sensing theory is widely used in the field of fault signal diagnosis and image processing. Sparse recovery is one of the core concepts of this theory. In this paper, we propose a sparse recovery algorithm using a smoothed l0 norm and randomized coordinate descent (RCD), then apply it to sparse signal recovery and image denoising. We adopt a new strategy to express the (P0) problem approximately and put forward a sparse recovery algorithm using RCD. In computer simulation experiments, we compared the performance of this algorithm to other typical methods. The results show …
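The smoothed-l0 idea described in this abstract is easy to prototype. Below is a minimal sketch assuming the standard smooth surrogate ||x||_0 ≈ Σ_i (1 − exp(−x_i²/(2σ²))) with an annealed σ and plain randomized coordinate steps; the function name, penalty weight, step rule, and annealing schedule are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def smoothed_l0_rcd(A, y, lam=0.1, sigma=1.0, sigma_decay=0.95,
                    n_iters=20000, seed=0):
    """Sketch: smoothed-l0 sparse recovery via randomized coordinate descent.

    Objective: 0.5*||Ax - y||^2 + lam * sum_i (1 - exp(-x_i^2 / (2*sigma^2))).
    The exponential sum is a smooth surrogate for ||x||_0; sigma is
    annealed toward zero to tighten the approximation.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - y                      # running residual Ax - y
    col_sq = (A ** 2).sum(axis=0)      # per-coordinate curvature of the data term
    for t in range(n_iters):
        i = rng.integers(n)            # random coordinate
        g = A[:, i] @ r + lam * (x[i] / sigma**2) * np.exp(-x[i]**2 / (2 * sigma**2))
        x_new = x[i] - g / max(col_sq[i], 1e-12)
        r += A[:, i] * (x_new - x[i])  # O(m) residual update
        x[i] = x_new
        if (t + 1) % 1000 == 0:
            sigma = max(sigma * sigma_decay, 1e-3)
    return x

# Toy check: recover a 3-sparse vector from 40 Gaussian measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 0.8]
x_hat = smoothed_l0_rcd(A, A @ x_true)
print(sorted(np.argsort(-np.abs(x_hat))[:3]))  # estimated support
```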
3. Xu, Kai, and Zhi Xiong. "Nonparametric Tensor Completion Based on Gradient Descent and Nonconvex Penalty." Symmetry 11, no. 12 (2019): 1512. http://dx.doi.org/10.3390/sym11121512.

Abstract:
Existing tensor completion methods all require some hyperparameters. However, these hyperparameters determine the performance of each method, and it is difficult to tune them. In this paper, we propose a novel nonparametric tensor completion method, which formulates tensor completion as an unconstrained optimization problem and designs an efficient iterative method to solve it. In each iteration, we not only calculate the missing entries with the aid of data correlation, but also account for the low rank of the tensor and the convergence speed of the iteration. Our iteration is based on the gradient descent method …
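To make the gradient-descent mechanics of completion concrete, here is a generic sketch in the matrix case, assuming only the observed-entry squared loss and a fixed-rank factorization; the cited paper's nonparametric scheme and nonconvex penalty are not reproduced, and all names and parameters are illustrative.

```python
import numpy as np

def complete_by_gd(M_obs, mask, rank=2, lr=0.01, n_iters=3000, seed=0):
    """Generic completion sketch: gradient descent on the observed-entry
    squared loss with a fixed-rank factorization X = U @ V.T.
    (Illustrative only; the cited method is nonparametric and uses a
    nonconvex penalty instead of a fixed rank.)
    """
    rng = np.random.default_rng(seed)
    m, n = M_obs.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(n_iters):
        R = mask * (U @ V.T - M_obs)   # residual on observed entries only
        U -= lr * (R @ V)              # gradient step for each factor
        V -= lr * (R.T @ U)
    return U @ V.T

# Toy check: 60% of a rank-2 matrix observed; relative error on the rest.
rng = np.random.default_rng(2)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
mask = (rng.random(M.shape) < 0.6).astype(float)
X = complete_by_gd(mask * M, mask)
hidden = 1.0 - mask
print(np.linalg.norm(hidden * (X - M)) / np.linalg.norm(hidden * M))
```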
4. Ko, Dongnam, and Enrique Zuazua. "Model predictive control with random batch methods for a guiding problem." Mathematical Models and Methods in Applied Sciences 31, no. 08 (2021): 1569–92. http://dx.doi.org/10.1142/s0218202521500329.

Abstract:
We model, simulate and control the guiding problem for a herd of evaders under the action of repulsive drivers. The problem is formulated in an optimal control framework, where the drivers (controls) aim to guide the evaders (states) to a desired region of the Euclidean space. The numerical simulation of such models quickly becomes unfeasible for a large number of interacting agents, as the number of interactions grows as O(N^2) for N agents. For reducing the computational cost to O(N), we use the Random Batch Method (RBM), which provides a computationally …
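The Random Batch Method's cost reduction is simple to illustrate. The sketch below, for a generic first-order interacting-particle system in 1D, shuffles agents into small random batches each step and sums forces only within each batch, so the per-step work drops from O(N^2) to O(N); the drivers/evaders structure and the model predictive control loop of the paper are omitted, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def rbm_step(x, dt, force, batch_size, rng):
    """One Random Batch Method step for a first-order particle system
    dx_i/dt = mean over j != i of force(x_i - x_j).

    Agents are shuffled into random batches; each agent interacts only
    with the others in its batch, so the per-step work is O(N) rather
    than the O(N^2) of summing all pairwise interactions.
    """
    N = len(x)
    idx = rng.permutation(N)
    for start in range(0, N, batch_size):
        batch = idx[start:start + batch_size]
        p = len(batch)
        if p < 2:
            continue
        diff = x[batch][:, None] - x[batch][None, :]  # pairwise differences in batch
        f = force(diff).sum(axis=1) / (p - 1)         # batch mean replaces full mean
        x[batch] += dt * f                            # (diagonal terms are zero)
    return x

# Toy check: linear attraction (force(d) = -d) pulls 1D agents together.
rng = np.random.default_rng(3)
x = rng.standard_normal(1000)
for _ in range(200):
    x = rbm_step(x, dt=0.01, force=lambda d: -d, batch_size=4, rng=rng)
print(x.std())  # spread shrinks as agents aggregate
```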
5. Utomo, Rukmono Budi. "METODE NUMERIK STEPEST DESCENT DENGAN DIRECTION DAN NORM RERATA ARITMATIKA" [The Steepest Descent Numerical Method with Arithmetic-Mean Direction and Norm]. AKSIOMA Journal of Mathematics Education 5, no. 2 (2017): 128. http://dx.doi.org/10.24127/ajpm.v5i2.673.

Abstract:
This research investigates a Steepest Descent numerical method whose direction and norm are based on the arithmetic mean. We begin by reviewing the Steepest Descent method and its algorithm. We then construct the new variant using the arithmetic-mean direction and norm. The paper also contains worked numerical examples for both methods and analyzes the results.
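For reference, the classical steepest descent baseline that this paper modifies can be sketched as follows; the arithmetic-mean direction and norm of the cited method are not reproduced here, and the step size and tolerance are illustrative assumptions.

```python
import numpy as np

def steepest_descent(grad, x0, alpha=0.1, tol=1e-8, max_iters=10000):
    """Classical steepest descent: step against the gradient until its
    norm falls below a tolerance. (The cited paper's arithmetic-mean
    direction and norm are a modification of this baseline and are not
    reproduced here.)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - alpha * g
    return x

# Toy check: quadratic f(x) = (x1^2 + 10*x2^2)/2, minimizer at the origin.
print(steepest_descent(lambda x: np.array([1.0, 10.0]) * x, [5.0, 5.0]))
```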
6. Goh, B. S. "Approximate Greatest Descent Methods for Optimization with Equality Constraints." Journal of Optimization Theory and Applications 148, no. 3 (2010): 505–27. http://dx.doi.org/10.1007/s10957-010-9765-3.

7. Xiao, Yunhai, Chunjie Wu, and Soon-Yi Wu. "Norm descent conjugate gradient methods for solving symmetric nonlinear equations." Journal of Global Optimization 62, no. 4 (2014): 751–62. http://dx.doi.org/10.1007/s10898-014-0218-7.

8. Qiu, Yixuan, and Xiao Wang. "Stochastic Approximate Gradient Descent via the Langevin Algorithm." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 5428–35. http://dx.doi.org/10.1609/aaai.v34i04.5992.

Abstract:
We introduce a novel and efficient algorithm called the stochastic approximate gradient descent (SAGD), as an alternative to the stochastic gradient descent for cases where unbiased stochastic gradients cannot be trivially obtained. Traditional methods for such problems rely on general-purpose sampling techniques such as Markov chain Monte Carlo, which typically requires manual intervention for tuning parameters and does not work efficiently in practice. Instead, SAGD makes use of the Langevin algorithm to construct stochastic gradients that are biased in finite steps but accurate asymptotically …
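The core mechanism here, Langevin dynamics supplying approximate samples whose gradients are biased at finite step size but asymptotically accurate, can be sketched as follows. This is a hypothetical minimal illustration, not the paper's SAGD algorithm: the helper names, the single-sample gradient estimate, and the toy objective are all assumptions.

```python
import numpy as np

def langevin_sample(grad_U, z0, step=0.01, n_steps=500, rng=None):
    """Unadjusted Langevin algorithm:
    z <- z - step * grad_U(z) + sqrt(2*step) * noise.
    For finite step the samples (and any gradients built from them) are
    biased, but the bias vanishes as step -> 0 and n_steps grows.
    """
    rng = rng if rng is not None else np.random.default_rng()
    z = np.asarray(z0, dtype=float)
    for _ in range(n_steps):
        z = z - step * grad_U(z) + np.sqrt(2 * step) * rng.standard_normal(z.shape)
    return z

def sagd_like_step(theta, grad_loss, grad_U, lr, rng):
    """Hypothetical SAGD-style step: estimate E_z[grad_theta loss(theta, z)]
    with a single Langevin-generated sample z."""
    z = langevin_sample(grad_U, np.zeros_like(theta), rng=rng)
    return theta - lr * grad_loss(theta, z)

# Toy check: loss(theta, z) = 0.5*||theta - z||^2 with z ~ N(0, I)
# (grad_U(z) = z), so theta should drift toward the origin.
rng = np.random.default_rng(4)
theta = np.array([3.0, -2.0])
for _ in range(200):
    theta = sagd_like_step(theta, lambda t, z: t - z, lambda z: z, lr=0.05, rng=rng)
print(theta)
```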
9. Yang, Yin, and Yunqing Huang. "Spectral-Collocation Methods for Fractional Pantograph Delay-Integrodifferential Equations." Advances in Mathematical Physics 2013 (2013): 1–14. http://dx.doi.org/10.1155/2013/821327.

Abstract:
We propose and analyze a spectral Jacobi-collocation approximation for fractional order integrodifferential equations of Volterra type with pantograph delay. The fractional derivative is described in the Caputo sense. We provide a rigorous error analysis for the collocation method, which shows that the error of the approximate solution decays exponentially in the L∞-norm and the weighted L2-norm. Numerical examples are given to illustrate the theoretical results.
10. Poggio, Tomaso, Andrzej Banburski, and Qianli Liao. "Theoretical issues in deep networks." Proceedings of the National Academy of Sciences 117, no. 48 (2020): 30039–45. http://dx.doi.org/10.1073/pnas.1907369117.

Abstract:
While deep learning is successful in a number of applications, it is not yet well understood theoretically. A theoretical characterization of deep learning should answer questions about their approximation power, the dynamics of optimization, and good out-of-sample performance, despite overparameterization and the absence of explicit regularization. We review our recent results toward this goal. In approximation theory both shallow and deep networks are known to approximate any continuous function at an exponential cost. However, we proved that for certain types of compositional functions, deep …