Academic literature on the topic 'Gradient descent ascent'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Gradient descent ascent.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Gradient descent ascent"

1

Lu, Songtao, Kaiqing Zhang, Tianyi Chen, Tamer Başar, and Lior Horesh. "Decentralized Policy Gradient Descent Ascent for Safe Multi-Agent Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (2021): 8767–75. http://dx.doi.org/10.1609/aaai.v35i10.17062.

Abstract:
This paper deals with distributed reinforcement learning problems with safety constraints. In particular, we consider that a team of agents cooperate in a shared environment, where each agent has its individual reward function and safety constraints that involve all agents' joint actions. As such, the agents aim to maximize the team-average long-term return, subject to all the safety constraints. More intriguingly, no central controller is assumed to coordinate the agents, and both the rewards and constraints are only known to each agent locally/privately. Instead, the agents are connected by
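Entry 1 above casts safe multi-agent reinforcement learning as maximizing return subject to safety constraints, a problem class that is usually rewritten as a Lagrangian and solved by gradient descent on the decision variables and gradient ascent on the multipliers. The snippet below is not the authors' decentralized policy-gradient algorithm; it is only a minimal, deterministic sketch of that primal-dual descent-ascent pattern on an assumed toy quadratic objective with one linear constraint (x_target, a, b, and eta are all invented for illustration).

```python
import numpy as np

# Toy constrained problem (all quantities assumed for illustration):
#   minimize   f(x) = ||x - x_target||^2
#   subject to g(x) = a @ x - b <= 0
x_target = np.array([2.0, 1.0])
a, b = np.array([1.0, 1.0]), 1.0                # constraint: x1 + x2 <= 1

def grad_x(x, lam):
    # gradient of the Lagrangian L(x, lam) = f(x) + lam * g(x) with respect to x
    return 2.0 * (x - x_target) + lam * a

x, lam = np.zeros(2), 0.0
eta = 0.05
for _ in range(2000):
    x = x - eta * grad_x(x, lam)                # descent on the primal variable
    lam = max(0.0, lam + eta * (a @ x - b))     # projected ascent on the multiplier

print("x ≈", x, "constraint value ≈", a @ x - b, "lambda ≈", lam)
```

For this particular toy problem the multiplier should settle near 2 and the iterate near (1, 0), the point at which the constraint becomes active.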
2

Pan, Zibin, Zhichao Wang, Chi Li, et al. "Federated Unlearning with Gradient Descent and Conflict Mitigation." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 19 (2025): 19804–12. https://doi.org/10.1609/aaai.v39i19.34181.

Abstract:
Federated Learning (FL) has received much attention in recent years. However, although clients are not required to share their data in FL, the global model itself can implicitly remember clients' local data. Therefore, it’s necessary to effectively remove the target client's data from the FL global model to ease the risk of privacy leakage and implement "the right to be forgotten". Federated Unlearning (FU) has been considered a promising solution to remove data without full retraining. But the model utility easily suffers significant reduction during unlearning due to the gradient conflicts.
3

Tang, Zheng, Xu Gang Wang, Hiroki Tamura, and Masahiro Ishii. "An Algorithm of Supervised Learning for Multilayer Neural Networks." Neural Computation 15, no. 5 (2003): 1125–42. http://dx.doi.org/10.1162/089976603765202686.

Abstract:
A method of supervised learning for multilayer artificial neural networks to escape local minima is proposed. The learning model has two phases: a backpropagation phase and a gradient ascent phase. The backpropagation phase performs steepest descent on a surface in weight space whose height at any point in weight space is equal to an error measure, and it finds a set of weights minimizing this error measure. When the backpropagation gets stuck in local minima, the gradient ascent phase attempts to fill up the valley by modifying gain parameters in a gradient ascent direction of the error measu
4

Shen, Zhubin, Jianfeng Li, and Qihui Wu. "A fast adaptive beamformer with sidelobe control based on gradient descent ascent." Signal Processing 206 (May 2023): 108906. http://dx.doi.org/10.1016/j.sigpro.2022.108906.

5

Hedworth, Hayden, Jeffrey Page, John Sohl, and Tony Saad. "Investigating Errors Observed during UAV-Based Vertical Measurements Using Computational Fluid Dynamics." Drones 6, no. 9 (2022): 253. http://dx.doi.org/10.3390/drones6090253.

Abstract:
Unmanned Aerial Vehicles (UAVs) are a popular platform for air quality measurements. For vertical measurements, rotary-wing UAVs are particularly well-suited. However, an important concern with rotary-wing UAVs is how the rotor-downwash affects measurement accuracy. Measurements from a recent field campaign showed notable discrepancies between data from ascent and descent, which suggested the UAV downwash may be the cause. To investigate and explain these observed discrepancies, we use high-fidelity computational fluid dynamics (CFD) simulations to simulate a UAV during vertical flight. We use
6

Zhang, Fengjiao, Aoyu Luo, Zongbo Hao, and Juncong Lu. "Two-stream neural network with different gradient update strategies." Journal of Physics: Conference Series 2741, no. 1 (2024): 012018. http://dx.doi.org/10.1088/1742-6596/2741/1/012018.

Abstract:
Deep neural networks will be affected by various noises in different scenes. Traditional deep neural networks often use gradient descent algorithms to update parameter weights. When the gradient falls to a certain range, it is easy to fall into the local optimal solution. Although the impulse method and other methods can escape from local optimization in some scenarios, they still have some limitations, which will greatly reduce the application effect of the actual scenes. To solve the above problems, a two-stream neural network with different gradient update strategies was proposed.
7

Pendharkar, Parag C. "A comparison of gradient ascent, gradient descent and genetic-algorithm-based artificial neural networks for the binary classification problem." Expert Systems 24, no. 2 (2007): 65–86. http://dx.doi.org/10.1111/j.1468-0394.2007.00421.x.

8

Cheng, Jieren, Chen Zhang, Xiangyan Tang, Victor S. Sheng, Zhe Dong, and Junqi Li. "Adaptive DDoS Attack Detection Method Based on Multiple-Kernel Learning." Security and Communication Networks 2018 (October 16, 2018): 1–19. http://dx.doi.org/10.1155/2018/5198685.

Abstract:
Distributed denial of service (DDoS) attacks have caused huge economic losses to society. They have become one of the main threats to Internet security. Most of the current detection methods based on a single feature and fixed model parameters cannot effectively detect early DDoS attacks in cloud and big data environments. In this paper, an adaptive DDoS attack detection method (ADADM) based on multiple-kernel learning (MKL) is proposed. Based on the burstiness of DDoS attack flow, the distribution of addresses, and the interactivity of communication, we define five features to describe the netw
9

Zhang, Ruijia, Mingxi Lei, Meng Ding, Zihang Xiang, Jinhui Xu, and Di Wang. "Improved Rates of Differentially Private Nonconvex-Strongly-Concave Minimax Optimization." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 21 (2025): 22524–32. https://doi.org/10.1609/aaai.v39i21.34410.

Abstract:
In this paper, we study the problem of (finite sum) minimax optimization in the Differential Privacy (DP) model. Unlike most of the previous studies on the (strongly) convex-concave settings or loss functions satisfying the Polyak-Lojasiewicz condition, here we mainly focus on the nonconvex-strongly-concave one, which encapsulates many models in deep learning such as deep AUC maximization. Specifically, we first analyze a DP version of Stochastic Gradient Descent Ascent (SGDA) and show the utility bound in terms of the Euclidean norm of the gradient for the empirical risk function. We then pro
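Entry 9 above analyzes a differentially private variant of Stochastic Gradient Descent Ascent (SGDA) for nonconvex-strongly-concave minimax problems. Leaving the privacy mechanism aside entirely (a DP version would additionally clip and perturb the stochastic gradients), the sketch below shows plain SGDA on an assumed toy finite-sum objective that is nonconvex in x and strongly concave in y; the data, objective, and step size are all invented for illustration and this is not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=500)                       # assumed toy data

# Toy finite-sum minimax objective (assumed for illustration):
#   f(x, y) = (1/n) * sum_i [ sin(a_i * x) ] + x * y - 0.5 * y**2
# nonconvex in x, strongly concave in y.
def stochastic_grads(x, y, batch):
    gx = np.mean(batch * np.cos(batch * x)) + y   # minibatch estimate of df/dx
    gy = x - y                                    # df/dy (deterministic here)
    return gx, gy

x, y, eta = 2.0, 0.0, 0.05
for step in range(3000):
    batch = rng.choice(data, size=32)
    gx, gy = stochastic_grads(x, y, batch)
    x -= eta * gx                                 # stochastic descent on x
    y += eta * gy                                 # stochastic ascent on y

print("x ≈", x, "y ≈", y)                         # y tracks argmax_y f(x, y) = x
```

Because the inner maximization is strongly concave, the ascent variable simply tracks the best response y = x while x drifts toward a stationary point of the resulting envelope function.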
10

Bieliński, Adrian, Izabela Rojek, and Dariusz Mikołajewski. "Comparison of Selected Machine Learning Algorithms in the Analysis of Mental Health Indicators." Electronics 12, no. 21 (2023): 4407. http://dx.doi.org/10.3390/electronics12214407.

Abstract:
Machine learning is increasingly being used to solve clinical problems in diagnosis, therapy and care. Aim: the main aim of the study was to investigate how the selected machine learning algorithms deal with the problem of determining a virtual mental health index. Material and Methods: a number of machine learning models based on Stochastic Dual Coordinate Ascent, limited-memory Broyden–Fletcher–Goldfarb–Shanno, Online Gradient Descent, etc., were built based on a clinical dataset and compared based on criteria in the form of learning time, running time during use and regression accuracy. Res

Dissertations / Theses on the topic "Gradient descent ascent"

1

Calder, Jeffrey. "Sobolev Gradient Flows and Image Processing." Thesis, 2010. http://hdl.handle.net/1974/5986.

Abstract:
In this thesis we study Sobolev gradient flows for Perona-Malik style energy functionals and generalizations thereof. We begin with first order isotropic flows which are shown to be regularizations of the heat equation. We show that these flows are well-posed in the forward and reverse directions which yields an effective linear sharpening algorithm. We furthermore establish a number of maximum principles for the forward flow and show that edges are preserved for a finite period of time. We then go on to study isotropic Sobolev gradient flows with respect to higher order Sobolev metrics. As th

Book chapters on the topic "Gradient descent ascent"

1

Brereton, R. G. "Steepest Ascent, Steepest Descent, and Gradient Methods." In Comprehensive Chemometrics. Elsevier, 2009. http://dx.doi.org/10.1016/b978-044452701-1.00037-5.

2

Brereton, Richard G. "Optimisation: Steepest Ascent, Steepest Descent and Gradient Methods." In Comprehensive Chemometrics. Elsevier, 2020. http://dx.doi.org/10.1016/b978-0-12-409547-2.14835-8.


Conference papers on the topic "Gradient descent ascent"

1

Takefuji. "Parallel distributed gradient descent and ascent methods." In International Joint Conference on Neural Networks. IEEE, 1989. http://dx.doi.org/10.1109/ijcnn.1989.118349.

2

Zhang, Yihan, Meikang Qiu, and Hongchang Gao. "Communication-Efficient Stochastic Gradient Descent Ascent with Momentum Algorithms." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/512.

Abstract:
Numerous machine learning models can be formulated as a stochastic minimax optimization problem, such as imbalanced data classification with AUC maximization. Developing efficient algorithms to optimize such kinds of problems is of importance and necessity. However, most existing algorithms restrict their focus on the single-machine setting so that they are incapable of dealing with the large communication overhead in a distributed training system. Moreover, most existing communication-efficient optimization algorithms only focus on the traditional minimization problem, failing to handle the m
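Entry 2 above studies communication-efficient stochastic gradient descent ascent with momentum for distributed minimax training. Ignoring both the stochasticity and the communication aspects, the sketch below only illustrates the heavy-ball momentum form of gradient descent ascent on an assumed strongly-convex-strongly-concave toy objective; the objective, step size, and momentum coefficient are invented and this is not the paper's algorithm.

```python
# Assumed toy saddle-point objective: f(x, y) = 0.5*x**2 + x*y - 0.5*y**2
# (strongly convex in x, strongly concave in y, saddle point at the origin).
def grads(x, y):
    return x + y, x - y          # (df/dx, df/dy)

x, y = 3.0, -2.0
vx = vy = 0.0
eta, beta = 0.05, 0.5            # step size and momentum coefficient (assumed)

for _ in range(1000):
    gx, gy = grads(x, y)
    vx = beta * vx + gx          # heavy-ball momentum on the descent direction
    vy = beta * vy + gy          # heavy-ball momentum on the ascent direction
    x -= eta * vx
    y += eta * vy

print("(x, y) ≈", (x, y))        # should approach the saddle point (0, 0)
```

The modest momentum coefficient is deliberate: on this same toy, a larger value (for example 0.8 with the same step size) already makes plain momentum GDA drift away from the saddle point, which is one reason momentum-based minimax methods need their own convergence analysis.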
3

Gao, Hongchang, Xiaoqian Wang, Lei Luo, and Xinghua Shi. "On the Convergence of Stochastic Compositional Gradient Descent Ascent Method." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/329.

Abstract:
The compositional minimax problem covers plenty of machine learning models such as the distributionally robust compositional optimization problem. However, it is yet another understudied problem to optimize the compositional minimax problem. In this paper, we develop a novel efficient stochastic compositional gradient descent ascent method for optimizing the compositional minimax problem. Moreover, we establish the theoretical convergence rate of our proposed method. To the best of our knowledge, this is the first work achieving such a convergence rate for the compositional minimax problem. Fi
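Entry 3 above addresses the compositional minimax problem, where the variable being minimized enters through a nested expectation, min_x max_y f(E[g(x)], y). The sketch below is a generic illustration of that structure, not the authors' method: it keeps a running estimate of the inner expectation and combines it with chain-rule stochastic gradients for descent on x and ascent on y; the data, functions, and rates are all assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy compositional minimax problem:
#   min_x max_y  f(E[g_xi(x)], y),   g_xi(x) = a_xi * x,
#   f(u, y) = 0.5*u**2 + u*y - 0.5*y**2   (strongly concave in y)
a_samples = 1.0 + 0.5 * rng.normal(size=2000)   # a_xi with mean 1

x, y, u = 2.0, 0.0, 0.0
eta, alpha = 0.02, 0.1                          # step size and inner tracking rate

for t in range(4000):
    a1, a2 = rng.choice(a_samples, size=2)      # independent samples for inner/outer parts
    u = (1 - alpha) * u + alpha * (a1 * x)      # running estimate of E[g_xi(x)]
    gx = (u + y) * a2                           # chain-rule stochastic gradient in x
    gy = u - y                                  # stochastic gradient in y
    x -= eta * gx                               # descent on x
    y += eta * gy                               # ascent on y

print("x ≈", x, "y ≈", y)                       # both should end up close to 0
```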
4

Adnan, Risman, Muchlisin Adi Saputra, Junaidillah Fadlil, Muhamad Iqbal, and Tjan Basaruddin. "Simultaneous Gradient Descent-Ascent for GANs Minimax Optimization using Sinkhorn Divergence." In AIRC'20: 2020 2nd International Conference on Artificial Intelligence, Robotics and Control. ACM, 2020. http://dx.doi.org/10.1145/3448326.3448328.

5

Chen, Ziyi, Shaocong Ma, and Yi Zhou. "Accelerated Proximal Alternating Gradient-Descent-Ascent for Nonconvex Minimax Machine Learning." In 2022 IEEE International Symposium on Information Theory (ISIT). IEEE, 2022. http://dx.doi.org/10.1109/isit50566.2022.9834691.

6

Becker, Evan, Parthe Pandit, Sundeep Rangan, and Alyson K. Fletcher. "Local Convergence of Gradient Descent-Ascent for Training Generative Adversarial Networks." In 2023 57th Asilomar Conference on Signals, Systems, and Computers. IEEE, 2023. http://dx.doi.org/10.1109/ieeeconf59524.2023.10476957.

7

Zhu, Bowei, Shaojie Li, and Yong Liu. "Towards Sharper Risk Bounds for Minimax Problems." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/630.

Abstract:
Minimax problems have achieved success in machine learning such as adversarial training, robust optimization, reinforcement learning. For theoretical analysis, current optimal excess risk bounds, which are composed of generalization error and optimization error, present 1/n-rates in strongly-convex-strongly-concave (SC-SC) settings. Existing studies mainly focus on minimax problems with specific algorithms for optimization error, with only a few studies on generalization performance, which limit better excess risk bounds. In this paper, we study the generalization bounds measured by the gradie
8

Niu, Xiaochun, and Ermin Wei. "GRAND: A Gradient-Related Ascent and Descent Algorithmic Framework for Minimax Problems." In 2022 58th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2022. http://dx.doi.org/10.1109/allerton49937.2022.9929389.

9

Lu, Songtao, Rahul Singh, Xiangyi Chen, Yongxin Chen, and Mingyi Hong. "Alternating Gradient Descent Ascent for Nonconvex Min-Max Problems in Robust Learning and GANs." In 2019 53rd Asilomar Conference on Signals, Systems, and Computers. IEEE, 2019. http://dx.doi.org/10.1109/ieeeconf44664.2019.9048943.

10

Xu, Meng, Bo Jiang, Wenqiang Pu, Ya-Feng Liu, and Anthony Man-Cho So. "An Efficient Alternating Riemannian/Projected Gradient Descent Ascent Algorithm for Fair Principal Component Analysis." In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. http://dx.doi.org/10.1109/icassp48485.2024.10447172.
