Academic literature on the topic 'Gradient descent algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Gradient descent algorithm.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Gradient descent algorithm"

1

Dong, Xuemei, and Ding-Xuan Zhou. "Learning gradients by a gradient descent algorithm." Journal of Mathematical Analysis and Applications 341, no. 2 (2008): 1018–27. http://dx.doi.org/10.1016/j.jmaa.2007.10.044.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jiao, Xianqi, Jia Liu, and Zhiping Chen. "Learning Complexity of Gradient Descent and Conjugate Gradient Algorithms." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 17 (2025): 17671–79. https://doi.org/10.1609/aaai.v39i17.33943.

Full text
Abstract:
Gradient Descent (GD) and Conjugate Gradient (CG) methods are among the most effective iterative algorithms for solving unconstrained optimization problems, particularly in machine learning and statistical modeling, where they are employed to minimize cost functions. In these algorithms, tunable parameters, such as step sizes or conjugate parameters, play a crucial role in determining key performance metrics, like runtime and solution quality. In this work, we introduce a framework that models algorithm selection as a statistical learning problem, and thus learning complexity can be estimated
APA, Harvard, Vancouver, ISO, and other styles
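
As a point of reference for this entry, the sketch below contrasts the two iterative schemes named in the abstract, fixed-step gradient descent and linear conjugate gradient, on a convex quadratic. It is a generic Python/NumPy illustration with an arbitrary test matrix and step size, not the learning-complexity framework proposed in the paper.

```python
import numpy as np

def gradient_descent(A, b, x0, step, iters):
    """Fixed-step gradient descent on the quadratic f(x) = 0.5*x^T A x - b^T x."""
    x = x0.copy()
    for _ in range(iters):
        x -= step * (A @ x - b)          # A x - b is the gradient
    return x

def conjugate_gradient(A, b, x0, iters):
    """Linear conjugate gradient for the same quadratic (A symmetric positive definite)."""
    x = x0.copy()
    r = b - A @ x                        # residual = negative gradient
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)       # exact step size along the search direction
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r) # conjugate parameter
        p = r_new + beta * p
        r = r_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)            # symmetric positive definite test matrix
b = rng.standard_normal(20)
x_star = np.linalg.solve(A, b)
x0 = np.zeros(20)
print("GD error:", np.linalg.norm(gradient_descent(A, b, x0, 1e-2, 200) - x_star))
print("CG error:", np.linalg.norm(conjugate_gradient(A, b, x0, 20) - x_star))
```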
3

Sun, Haijing, Ying Cai, Ran Tao, et al. "An Improved Reacceleration Optimization Algorithm Based on the Momentum Method for Image Recognition." Mathematics 12, no. 11 (2024): 1759. http://dx.doi.org/10.3390/math12111759.

Full text
Abstract:
The optimization algorithm plays a crucial role in image recognition by neural networks. However, it is challenging to accelerate the model’s convergence and maintain high precision. As a commonly used stochastic gradient descent optimization algorithm, the momentum method requires many epochs to find the optimal parameters during model training. The velocity of its gradient descent depends solely on the historical gradients and is not subject to random fluctuations. To address this issue, an optimization algorithm to enhance the gradient descent velocity, i.e., the momentum reacceleration gra
APA, Harvard, Vancouver, ISO, and other styles
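
The abstract above refers to the classical momentum method, whose velocity is built from the gradient history. A minimal heavy-ball sketch on a toy quadratic is given below; the learning rate and momentum coefficient are illustrative values, not ones taken from the paper, and the proposed reacceleration scheme is not reproduced.

```python
import numpy as np

def momentum_gd(grad_fn, x0, lr=0.1, beta=0.9, iters=200):
    """Classical momentum (heavy ball): the velocity accumulates past gradients."""
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)
    for _ in range(iters):
        v = beta * v - lr * grad_fn(x)   # velocity built from the gradient history
        x = x + v
    return x

# Toy quadratic f(x) = 0.5 * x^T diag(d) x, minimized at the origin.
d = np.array([1.0, 10.0])
print(momentum_gd(lambda x: d * x, x0=[5.0, 5.0]))   # approaches [0, 0]
```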
4

Krutikov, Vladimir, Elena Tovbis, Svetlana Gutova, Ivan Rozhnov, and Lev Kazakovtsev. "Gradient Method with Step Adaptation." Mathematics 13, no. 1 (2024): 61. https://doi.org/10.3390/math13010061.

Full text
Abstract:
The paper solves the problem of constructing step adjustment algorithms for a gradient method based on the principle of the steepest descent. The expansion of the step adjustment principle, its formalization and parameterization led the researchers to gradient-type methods with incomplete relaxation or over-relaxation. Such methods require only the gradient of the function to be calculated at the iteration. Optimization of the parameters of the step adaptation algorithms enables us to obtain methods that significantly exceed the steepest descent method in terms of convergence rate. In this pap
APA, Harvard, Vancouver, ISO, and other styles
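
Step adaptation for gradient methods comes in many forms; one standard, widely known rule is Armijo backtracking, sketched below on a small quadratic. This is a textbook baseline shown for orientation only, not the adaptive schemes analyzed in the paper.

```python
import numpy as np

def gd_backtracking(f, grad_f, x0, iters=100, alpha0=1.0, c=1e-4, tau=0.5):
    """Gradient descent with Armijo backtracking as the step-adaptation rule."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        g = grad_f(x)
        step = alpha0
        # Halve the step until the sufficient-decrease (Armijo) condition holds.
        while f(x - step * g) > f(x) - c * step * (g @ g):
            step *= tau
        x = x - step * g
    return x

# Simple ill-conditioned quadratic test: f(x) = (x0 - 1)^2 + 10*(x1 + 2)^2.
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
print(gd_backtracking(f, grad_f, [0.0, 0.0]))   # close to [1, -2]
```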
5

Tu, Quan, Yingjiao Rong, and Jing Chen. "Parameter Identification of ARX Models Based on Modified Momentum Gradient Descent Algorithm." Complexity 2020 (June 17, 2020): 1–11. http://dx.doi.org/10.1155/2020/9537075.

Full text
Abstract:
The parameter estimation problem of the ARX model is studied in this paper. First, some traditional identification algorithms are briefly introduced, and then a new parameter estimation algorithm—the modified momentum gradient descent algorithm—is developed. Two gradient directions with their corresponding step sizes are derived in each iteration. Compared with the traditional parameter identification algorithms, the modified momentum gradient descent algorithm has a faster convergence rate. A simulation example shows that the proposed algorithm is effective.
APA, Harvard, Vancouver, ISO, and other styles
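
For orientation, the sketch below estimates the parameters of a simulated first-order ARX model with plain gradient descent on the least-squares prediction error. The system, data, and learning rate are invented for illustration, and the paper's modified momentum directions and step sizes are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a first-order ARX system: y(t) = a*y(t-1) + b*u(t-1) + e(t).
a_true, b_true, N = 0.7, 1.5, 500
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] + 0.05 * rng.standard_normal()

# Regression form: y(t) ~ phi(t)^T theta with phi(t) = [y(t-1), u(t-1)].
Phi = np.column_stack([y[:-1], u[:-1]])
target = y[1:]

# Plain gradient descent on the mean squared one-step prediction error.
theta = np.zeros(2)
lr = 0.1
for _ in range(2000):
    grad = Phi.T @ (Phi @ theta - target) / len(target)
    theta -= lr * grad

print(theta)   # approximately [0.7, 1.5]
```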
6

Syed Shahul Hameed, A., and Narendran Rajagopalan. "SPGD: Search Party Gradient Descent Algorithm, a Simple Gradient-Based Parallel Algorithm for Bound-Constrained Optimization." Mathematics 10, no. 5 (2022): 800. http://dx.doi.org/10.3390/math10050800.

Full text
Abstract:
Nature-inspired metaheuristic algorithms remain a strong trend in optimization. Human-inspired optimization algorithms should be more intuitive and relatable. This paper proposes a novel optimization algorithm inspired by a human search party. We hypothesize the behavioral model of a search party searching for a treasure. Motivated by the search party’s behavior, we abstract the “Divide, Conquer, Assemble” (DCA) approach. The DCA approach allows us to parallelize the traditional gradient descent algorithm in a strikingly simple manner. Essentially, multiple gradient descent instances with diff
APA, Harvard, Vancouver, ISO, and other styles
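
A heavily simplified reading of the "Divide, Conquer, Assemble" idea is multi-start gradient descent within box constraints: several independent descent instances are launched from different starting points and the best result is kept at the end. The sketch below implements only that simplification on a made-up multimodal test function; it is not the authors' SPGD coordination scheme.

```python
import numpy as np

def multi_start_gd(f, grad_f, bounds, n_parties=8, lr=0.01, iters=500, seed=0):
    """Run several gradient-descent instances from random starts inside the bounds,
    then assemble the results by keeping the best one."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    best_x, best_val = None, np.inf
    for _ in range(n_parties):
        x = rng.uniform(lo, hi)
        for _ in range(iters):
            x = np.clip(x - lr * grad_f(x), lo, hi)   # projected step keeps x in the box
        if f(x) < best_val:
            best_x, best_val = x, f(x)
    return best_x, best_val

# Multimodal one-dimensional test with several local minima on [-5, 5].
f = lambda x: x ** 2 + 3.0 * np.sin(3.0 * x)
grad_f = lambda x: 2.0 * x + 9.0 * np.cos(3.0 * x)
print(multi_start_gd(f, grad_f, bounds=(-5.0, 5.0)))
```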
7

Zhang, Shoujing. "Gradient Descent Algorithm Optimization and its Application in Linear Regression Model." Academic Journal of Natural Science 1, no. 1 (2024): 1–5. https://doi.org/10.5281/zenodo.13753916.

Full text
Abstract:
This paper systematically analyzes the application of the gradient descent algorithm in the linear regression model and proposes a variety of optimization methods. The basic concepts and mathematical expressions of the linear regression model are introduced, and the basic principles and mathematical derivation of the gradient descent algorithm are explained. The specific application of the gradient descent algorithm in parameter estimation and model optimization is discussed, especially in big data environments. Several optimization methods for the gradient descent algorithm are proposed, including learning rate adjust
APA, Harvard, Vancouver, ISO, and other styles
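
The application this abstract describes, gradient descent for least-squares linear regression, has a standard batch update; a minimal NumPy sketch with synthetic data and an illustrative learning rate follows.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: y = X w_true + intercept + noise.
n, d = 200, 3
X = rng.standard_normal((n, d))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 4.0 + 0.1 * rng.standard_normal(n)

# Append a bias column so the intercept is learned like any other coefficient.
Xb = np.hstack([X, np.ones((n, 1))])

w = np.zeros(d + 1)
lr = 0.05
for _ in range(1000):
    grad = Xb.T @ (Xb @ w - y) / n       # gradient of 0.5 * mean squared error
    w -= lr * grad

print(w)   # approximately [2.0, -1.0, 0.5, 4.0]
```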
8

Liu, Zhipeng, Rui Feng, Xiuhan Li, Wei Wang, and Xiaoling Wu. "Gradient-Sensitive Optimization for Convolutional Neural Networks." Computational Intelligence and Neuroscience 2021 (March 22, 2021): 1–16. http://dx.doi.org/10.1155/2021/6671830.

Full text
Abstract:
Convolutional neural networks (CNNs) are effective models for image classification and recognition. Gradient descent optimization (GD) is the basic algorithm for CNN model optimization. Since GD appeared, a series of improved algorithms have been derived. Among these algorithms, adaptive moment estimation (Adam) has been widely recognized. However, local changes are ignored in Adam to some extent. In this paper, we introduce an adaptive learning rate factor based on current and recent gradients. According to this factor, we can dynamically adjust the learning rate of each independent parameter
APA, Harvard, Vancouver, ISO, and other styles
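
For reference, the widely recognized Adam update mentioned in the abstract combines first- and second-moment estimates of the gradient with bias correction. The sketch below is the standard textbook form applied to a toy problem; the paper's gradient-sensitive learning-rate factor is not reproduced.

```python
import numpy as np

def adam(grad_fn, x0, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8, iters=2000):
    """Standard Adam: per-parameter step sizes from first/second moment estimates."""
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)      # first moment (running mean of gradients)
    v = np.zeros_like(x)      # second moment (running mean of squared gradients)
    for t in range(1, iters + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)     # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Poorly scaled quadratic: f(x) = x0^2 + 100 * x1^2, minimized at the origin.
grad = lambda x: np.array([2.0 * x[0], 200.0 * x[1]])
print(adam(grad, [3.0, 3.0]))   # near [0, 0]
```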
9

Liu, Jinze. "The advance of underdetermined linear regression optimization based on implicit bias." Journal of Physics: Conference Series 2580, no. 1 (2023): 012008. http://dx.doi.org/10.1088/1742-6596/2580/1/012008.

Full text
Abstract:
The gradient descent method has the characteristics of easy realization and simple structure. The traditional gradient descent method has many advantages, especially in solving convex optimization problems. In recent years, some researchers have noticed that the gradient descent algorithm is helpful to solve the problem of underdetermined linear regression optimization. Therefore, in order to explore the specific relationship between gradient descent and under-determined linear regression optimization, this paper focuses on a case with a unique finite root loss function and discusses
APA, Harvard, Vancouver, ISO, and other styles
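
The implicit-bias phenomenon discussed here can be checked numerically: on an underdetermined least-squares problem, gradient descent started from zero converges to the minimum-norm interpolating solution. The sketch below runs that experiment with arbitrary problem sizes and step size; it illustrates the general phenomenon, not the specific case studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Underdetermined least squares: fewer equations than unknowns.
n, d = 20, 100
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Gradient descent on 0.5 * ||X w - y||^2, started from w = 0.
w = np.zeros(d)
lr = 1e-3
for _ in range(5000):
    w -= lr * X.T @ (X @ w - y)

w_min_norm = np.linalg.pinv(X) @ y        # minimum-norm interpolating solution
print(np.linalg.norm(X @ w - y))          # ~0: the data is fit exactly
print(np.linalg.norm(w - w_min_norm))     # ~0: GD selected the min-norm solution
```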
10

Gasnikov, A. V., M. S. Alkousa, A. V. Lobanov, et al. "On Quasi-Convex Smooth Optimization Problems by a Comparison Oracle." Nelineinaya Dinamika 20, no. 5 (2024): 813–25. https://doi.org/10.20537/nd241211.

Full text
Abstract:
Frequently, when dealing with many machine learning models, optimization problems appear to be challenging due to a limited understanding of the constructions and characterizations of the objective functions in these problems. Therefore, major complications arise when dealing with first-order algorithms, in which gradient computations are challenging or even impossible in various scenarios. For this reason, we resort to derivative-free methods (zeroth-order methods). This paper is devoted to an approach to minimizing quasi-convex functions using a recently proposed (in [56]) comparison oracle
APA, Harvard, Vancouver, ISO, and other styles
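
The paper relies on a comparison oracle; as a simpler, generic illustration of derivative-free (zeroth-order) optimization, the sketch below estimates gradients from two function evaluations along a random direction. The test function and parameters are made up, and this is not the comparison-oracle method of the paper.

```python
import numpy as np

def zeroth_order_gd(f, x0, lr=0.02, mu=1e-4, iters=5000, seed=0):
    """Derivative-free descent: a two-point random-direction finite-difference
    gradient estimate followed by an ordinary gradient step."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        u = rng.standard_normal(x.size)
        slope = (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu)   # directional estimate
        x -= lr * slope * u
    return x

# Smooth test function; only its values are queried, never its gradient.
f = lambda x: np.log(1.0 + (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2)
print(zeroth_order_gd(f, [4.0, 4.0]))   # moves toward the minimizer [1, -2]
```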

Dissertations / Theses on the topic "Gradient descent algorithm"

1

Mankau, Jan Peter. "A Nonsmooth Nonconvex Descent Algorithm." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-217556.

Full text
Abstract:
In many applications nonsmooth nonconvex energy functions, which are Lipschitz continuous, appear quite naturally. Contact mechanics with friction is a classic example. A second example is the 1-Laplace operator and its eigenfunctions. In this work we will give an algorithm such that for every locally Lipschitz continuous function f and every sequence produced by this algorithm it holds that every accumulation point of the sequence is a critical point of f in the sense of Clarke. Here f is defined on a reflexive Banach space X, such that X and its dual space X' are strictly convex and Clarkson
APA, Harvard, Vancouver, ISO, and other styles
2

Johnson, Sandra Gomulka. "Antenna array output power minimization using steepest descent adaptive algorithm." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Leong, Sio Hong. "Kinematics control of redundant manipulators using CMAC neural networks combined with Descent Gradient Optimizers & Genetic Algorithm Optimizers." Thesis, University of Macau, 2003. http://umaclib3.umac.mo/record=b1446170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Holmgren, Faghihi Josef, and Paul Gorgis. "Time efficiency and mistake rates for online learning algorithms : A comparison between Online Gradient Descent and Second Order Perceptron algorithm and their performance on two different data sets." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260087.

Full text
Abstract:
This dissertation investigates the differences between two online learning algorithms, Online Gradient Descent (OGD) and the Second-Order Perceptron (SOP) algorithm, and how well they perform on different data sets in terms of mistake rate, time cost, and number of updates. Studying different online learning algorithms and how they perform in different environments will help us understand and develop new strategies for handling further online learning tasks. The study includes two data sets, Pima Indians Diabetes and Mushroom, together with the LIBOL library for testing. The result
APA, Harvard, Vancouver, ISO, and other styles
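
A minimal online gradient descent loop of the kind compared in this thesis processes one example at a time, counts a mistake before each update, and takes a gradient step on a per-example loss. The sketch below uses synthetic streaming data and logistic loss; it is not the LIBOL implementation, and the Second-Order Perceptron is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)

# Streaming binary classification data: label = sign(w_true . x + small noise).
T, d = 5000, 10
w_true = rng.standard_normal(d)
X = rng.standard_normal((T, d))
labels = np.sign(X @ w_true + 0.1 * rng.standard_normal(T))

w = np.zeros(d)
lr = 0.1
mistakes = 0
for t in range(T):
    x, y = X[t], labels[t]
    if np.sign(x @ w) != y:                 # count the mistake before updating
        mistakes += 1
    margin = y * (x @ w)
    grad = -y * x / (1.0 + np.exp(margin))  # gradient of log(1 + exp(-margin))
    w -= lr * grad                          # online gradient descent step

print("online mistake rate:", mistakes / T)
```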
5

Al-Mudhaf, Ali F. "A feed forward neural network approach for matrix computations." Thesis, Brunel University, 2001. http://bura.brunel.ac.uk/handle/2438/5010.

Full text
Abstract:
A new neural network approach for performing matrix computations is presented. The idea of this approach is to construct a feed-forward neural network (FNN) and then train it by matching a desired set of patterns. The solution of the problem is the converged weight of the FNN. Accordingly, unlike the conventional FNN research that concentrates on external properties (mappings) of the networks, this study concentrates on the internal properties (weights) of the network. The present network is linear and its weights are usually strongly constrained; hence, complicated overlapped network needs to
APA, Harvard, Vancouver, ISO, and other styles
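
In the spirit of this abstract, where the converged weights of a linear network are themselves the solution of a matrix problem, the sketch below drives a weight matrix W toward the inverse of A by gradient descent on the Frobenius error of A·W − I. The matrix and hyperparameters are invented, and the thesis's constrained network construction is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

# Well-conditioned target matrix whose inverse we want.
n = 5
G = rng.standard_normal((n, n))
A = G @ G.T / n + np.eye(n)

# Train the weight matrix W of a linear layer so that A @ W ~ I.
W = np.zeros((n, n))
I = np.eye(n)
lr = 0.01
for _ in range(3000):
    grad = A.T @ (A @ W - I)      # gradient of 0.5 * ||A W - I||_F^2
    W -= lr * grad

print(np.linalg.norm(A @ W - I))  # small: the converged W approximates inv(A)
```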
6

Castoldi, Marcelo Favoretto. "Algoritmo híbrido para projeto de controladores de amortecimento de sistemas elétricos de potência utilizando algoritmos genéticos e gradiente descendente." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-07042011-151406/.

Full text
Abstract:
Electric power systems are frequently subjected to disturbances caused, for example, by a sudden load increase or by a short circuit on a transmission line. These disturbances can produce electromechanical oscillations in the system, since the speed of the generators oscillates. To reduce such oscillations, power system controllers are used, the most common being controllers of the PSS (Power System Stabilizer) type. However, in some systems, the use of PSSs alone is not sufficient to guarantee a satisfactory minimum level of damping …
APA, Harvard, Vancouver, ISO, and other styles
7

Kresoja, Milena. "Modifications of Stochastic Approximation Algorithm Based on Adaptive Step Sizes." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2017. https://www.cris.uns.ac.rs/record.jsf?recordId=104786&source=NDLTD&language=en.

Full text
Abstract:
The problem under consideration is an unconstrained minimization problem in a noisy environment. The common approach for solving the problem is the Stochastic Approximation (SA) algorithm. We propose a class of adaptive step size schemes for the SA algorithm. The step size selection in the proposed schemes is based on the objective function values. At each iterate, interval estimates of the optimal function value are constructed using the fixed number of previously observed function values. If the observed function value in the current iterate is larger than the upper bound of the interval,
APA, Harvard, Vancouver, ISO, and other styles
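
As background, the baseline Robbins–Monro stochastic approximation iteration uses noisy gradient observations with a diminishing step-size sequence; the sketch below shows that baseline on a noisy quadratic with made-up parameters. The adaptive, function-value-based step-size schemes proposed in the thesis are not reproduced.

```python
import numpy as np

def stochastic_approximation(grad_fn, x0, noise_std=1.0, iters=20000, a=1.0, seed=0):
    """Robbins-Monro iteration x_{k+1} = x_k - gamma_k * (grad(x_k) + noise)
    with the classical diminishing step sizes gamma_k = a / (k + 1)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(iters):
        g_noisy = grad_fn(x) + noise_std * rng.standard_normal(x.shape)
        x -= (a / (k + 1)) * g_noisy
    return x

# Noisy minimization of f(x) = 0.5 * ||x - c||^2 with c = [1, -3].
c = np.array([1.0, -3.0])
print(stochastic_approximation(lambda x: x - c, x0=np.zeros(2)))   # near [1, -3]
```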
8

Apidopoulos, Vasileios. "Inertial Gradient-Descent algorithms for convex minimization." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0175/document.

Full text
Abstract:
This thesis studies inertial methods for solving structured convex minimization problems. Since the early work of Polyak and Nesterov, these methods have become very popular thanks to their acceleration effects. In this work, we study a family of inertial proximal gradient algorithms of Nesterov type with a specific choice of over-relaxation sequences. The various convergence properties of this family of algorithms are presented in a unified way, as a function of the over-relaxation parameter. In addition, we study these properties …
APA, Harvard, Vancouver, ISO, and other styles
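
A plain-gradient member of the inertial family studied here is Nesterov's accelerated method with the classical (k − 1)/(k + 2) over-relaxation sequence. The sketch below shows it on an ill-conditioned quadratic with an illustrative step size; the proximal term and the thesis's generalized parameter choices are omitted.

```python
import numpy as np

def nesterov_gd(grad_f, x0, lr, iters=1000):
    """Nesterov-type inertial gradient method with the classical (k-1)/(k+2) inertia."""
    x = np.asarray(x0, dtype=float).copy()
    x_prev = x.copy()
    for k in range(1, iters + 1):
        beta = (k - 1) / (k + 2)          # over-relaxation (inertial) sequence
        y = x + beta * (x - x_prev)       # extrapolated point
        x_prev = x
        x = y - lr * grad_f(y)            # gradient step taken at the extrapolation
    return x

# Ill-conditioned quadratic f(x) = 0.5 * x^T diag(1, 100) x, step size 1/L.
d = np.array([1.0, 100.0])
print(nesterov_gd(lambda x: d * x, x0=[5.0, 5.0], lr=1.0 / 100.0))   # close to [0, 0]
```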
9

Zerbinati, Adrien. "Algorithme à gradients multiples pour l'optimisation multiobjectif en simulation de haute fidélité : application à l'aérodynamique compressible." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00868031.

Full text
Abstract:
In multiobjective optimization, knowledge of the Pareto front and the Pareto set is essential for solving a problem. A large number of evolutionary strategies have been proposed in the classical literature, and they have proven effective at identifying the Pareto front. To reach such a result, however, these algorithms require a large number of evaluations. In engineering, numerical simulations are generally carried out with high-fidelity models, so each evaluation demands a long computation time. As with single-objective algorithms, the …
APA, Harvard, Vancouver, ISO, and other styles
10

Giacomini, Matteo. "Quantitative a posteriori error estimators in Finite Element-based shape optimization." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX070/document.

Full text
Abstract:
Gradient-based shape optimization methods rely on the computation of the shape derivative. In many applications, the cost functional depends on the solution of a PDE. It follows that it cannot be solved exactly and that only an approximation of it can be computed, for example by the finite element method. The same holds for the shape derivative. Thus, gradient methods in shape optimization - based on approximations of the gradient - do not guarantee a priori that the direction computed at each iteration is actually a …
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Gradient descent algorithm"

1

Nonlinear performance seeking control using fuzzy model reference learning control and the method of steepest descent. National Aeronautics and Space Administration, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bierlaire, Michel. Optimization: Principles and Algorithms. EPFL Press, 2015. http://dx.doi.org/10.55430/6116v1mb.

Full text
Abstract:
Every engineer and decision scientist must have a good mastery of optimization, an essential element in their toolkit. Thus, this articulate introductory textbook will certainly be welcomed by students and practicing professionals alike. Drawing from his vast teaching experience, the author skillfully leads the reader through a rich choice of topics in a coherent, fluid and tasteful blend of models and methods anchored on the underlying mathematical notions (only prerequisites: first year calculus and linear algebra). Topics range from the classics to some of the most recent developments in sm
APA, Harvard, Vancouver, ISO, and other styles
3

Sangeetha, V., and S. Kevin Andrews. Introduction to Artificial Intelligence and Neural Networks. Magestic Technology Solutions (P) Ltd, Chennai, Tamil Nadu, India, 2023. http://dx.doi.org/10.47716/mts/978-93-92090-24-0.

Full text
Abstract:
Artificial Intelligence (AI) has emerged as a defining force in the current era, shaping the contours of technology and deeply permeating our everyday lives. From autonomous vehicles to predictive analytics and personalized recommendations, AI continues to revolutionize various facets of human existence, progressively becoming the invisible hand guiding our decisions. Simultaneously, its growing influence necessitates the need for a nuanced understanding of AI, thereby providing the impetus for this book, “Introduction to Artificial Intelligence and Neural Networks.” This book aims to equip it
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Gradient descent algorithm"

1

Gao, Jiaxin, Yao Lyu, Wenxuan Wang, Yuming Yin, Fei Ma, and Shengbo Eben Li. "Gradient Correction for Asynchronous Stochastic Gradient Descent in Reinforcement Learning." In Lecture Notes in Mechanical Engineering. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-70392-8_127.

Full text
Abstract:
Distributed stochastic gradient descent techniques have gained significant attention in recent years as a prevalent approach for reinforcement learning. Current distributed learning predominantly employs synchronous or asynchronous training strategies. While the asynchronous scheme avoids idle computing resources present in synchronous methods, it grapples with the stale gradient issue. This paper introduces a novel gradient correction algorithm aimed at alleviating the stale gradient problem. By leveraging second-order information within the worker node and incorporating current param
APA, Harvard, Vancouver, ISO, and other styles
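
The stale-gradient issue described in the abstract can be simulated in a single process by applying gradients that were computed on parameters from several updates ago. The sketch below does only that, on a toy quadratic with made-up delay and step size; the chapter's gradient-correction algorithm is not reproduced.

```python
import numpy as np

# Toy quadratic objective f(x) = 0.5 * x^T diag(d) x, minimized at the origin.
d = np.array([1.0, 10.0])
grad = lambda x: d * x

def gd_with_stale_gradients(delay, lr=0.02, iters=500):
    """Each update applies the gradient evaluated at the parameters from `delay`
    steps earlier, mimicking an asynchronous worker that read stale parameters."""
    xs = [np.array([5.0, 5.0])]           # parameter history
    for k in range(iters):
        stale = xs[max(0, k - delay)]     # stale read of the parameters
        xs.append(xs[-1] - lr * grad(stale))
    return xs[-1]

print("delay = 0:", gd_with_stale_gradients(0))   # ordinary gradient descent
print("delay = 5:", gd_with_stale_gradients(5))   # still converges for this small delay
# Larger delays (or larger steps) make the stale-gradient iteration oscillate or diverge.
```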
2

Garcia, M., E. Fernandez, M. Graña, and F. J. Torrealdea. "A Gradient Descent MRI Illumination Correction Algorithm." In Computational Intelligence and Bioinspired Systems. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11494669_112.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Désidéri, Jean-Antoine. "Multiple-gradient Descent Algorithm for Pareto-Front Identification." In Computational Methods in Applied Sciences. Springer Netherlands, 2014. http://dx.doi.org/10.1007/978-94-017-9054-3_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tian, Yuan, Yong-quan Liang, and Yan-jun Peng. "Cuckoo Search Algorithm Based on Stochastic Gradient Descent." In Proceedings of the Fifth Euro-China Conference on Intelligent Data Analysis and Applications. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03766-6_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yan, Xiaodan, Tianxin Zhang, Baojiang Cui, and Jiangdong Deng. "Hinge Classification Algorithm Based on Asynchronous Gradient Descent." In Advances on Broad-Band Wireless Computing, Communication and Applications. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69811-3_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Dong, Hailing, Yichao Zhang, Ming Yang, Wen Liu, Rong Fan, and Yu Shi. "On Abelian Tensor Decomposition and Gradient-Descent Algorithm." In Advances in Intelligent, Interactive Systems and Applications. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-02804-6_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cheng, Xianfu, Yanqing Yao, and Ao Liu. "An Improved Privacy-Preserving Stochastic Gradient Descent Algorithm." In Machine Learning for Cyber Security. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62223-7_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ratsaby, Joel. "A Stochastic Gradient Descent Algorithm for Structural Risk Minimisation." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39624-6_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Liu, Jinkun. "RBF Neural Network Control Based on Gradient Descent Algorithm." In Radial Basis Function (RBF) Neural Network Control for Mechanical Systems. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34816-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Rajora, Sunaina, Mansi Butola, and Kedar Khare. "Mean Gradient Descent Algorithm for Single-Shot Interferogram Analysis." In Springer Proceedings in Physics. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-9259-1_116.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Gradient descent algorithm"

1

Hodlevskyi, Yurii O., and Tetiana A. Vakaliuk. "Optimal Gradient Descent Algorithm for LSTM Neural Network Learning." In 2024 IEEE 4th International Conference on Smart Information Systems and Technologies (SIST). IEEE, 2024. http://dx.doi.org/10.1109/sist61555.2024.10629398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jia, Xiaojing, Xiangchu Feng, and Hua Wu. "Accelerated Scaled Gradient Descent Algorithm For Low-Rank Matrix Factorization." In 2024 5th International Conference on Computer Engineering and Intelligent Control (ICCEIC). IEEE, 2024. https://doi.org/10.1109/icceic64099.2024.10775372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Akhtar, Muhammad Moin, Yong Li, Wei Cheng, and Yumei Tan. "Waveform Optimization of CS-MIMO Radar Based on Gradient Descent Algorithm." In 2024 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC). IEEE, 2024. https://doi.org/10.1109/icspcc62635.2024.10770341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lyu, Lyu, Shitao Zhu, Die Li, and Caipin Li. "Compensation of OAM Phase Noise in UCCA Using Gradient Descent Algorithm." In 2024 7th Asia Conference on Cognitive Engineering and Intelligent Interaction (CEII). IEEE, 2024. https://doi.org/10.1109/ceii65291.2024.00037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Grigoletto, Eliana Contharteze, and Aurelio Ribeiro Leite de Oliveira. "Fractional Order Gradient Descent Algorithm." In CNMAC 2019 - XXXIX Congresso Nacional de Matemática Aplicada e Computacional. SBMAC, 2020. http://dx.doi.org/10.5540/03.2020.007.01.0387.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Moreschini, Alessio, Mattia Mattioni, Salvatore Monaco, and Dorothee Normand-Cyrot. "A gradient descent algorithm built on approximate discrete gradients." In 2022 26th International Conference on System Theory, Control and Computing (ICSTCC). IEEE, 2022. http://dx.doi.org/10.1109/icstcc55426.2022.9931872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bayert, Jonathan, and Sami Khorbotly. "Robotic Swarm Dispersion Using Gradient Descent Algorithm." In 2019 IEEE International Symposium on Robotic and Sensors Environments (ROSE). IEEE, 2019. http://dx.doi.org/10.1109/rose.2019.8790430.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Ruochi, and Parv Venkitasubramaniam. "Mutual-Information-Private Online Gradient Descent Algorithm." In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461756.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Giryes, Raja, Yonina C. Eldar, Alex M. Bronstein, and Guillermo Sapiro. "The Learned Inexact Project Gradient Descent Algorithm." In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8462136.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mandic, D. P., D. Obradovic, and Anthony Kuh. "A robust general normalised gradient descent algorithm." In 2005 Microwave Electronics: Measurements, Identification, Applications. IEEE, 2005. http://dx.doi.org/10.1109/ssp.2005.1628578.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Gradient descent algorithm"

1

Pasupuleti, Murali Krishna. Phase Transitions in High-Dimensional Learning: Understanding the Scaling Limits of Efficient Algorithms. National Education Services, 2025. https://doi.org/10.62311/nesx/rr1125.

Full text
Abstract:
Abstract: High-dimensional learning models exhibit phase transitions, where small changes in model complexity, data size, or optimization dynamics lead to abrupt shifts in generalization, efficiency, and computational feasibility. Understanding these transitions is crucial for scaling modern machine learning algorithms and identifying critical thresholds in optimization and generalization performance. This research explores the role of high-dimensional probability, random matrix theory, and statistical physics in analyzing phase transitions in neural networks, kernel methods, and convex vs. no
APA, Harvard, Vancouver, ISO, and other styles