Academic literature on the topic 'Adaptive Sharpness Aware Minimization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Adaptive Sharpness Aware Minimization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Adaptive Sharpness Aware Minimization"

1. Ratchatorn, Tanapat, and Masayuki Tanaka. "Improving Sharpness-Aware Minimization Using Label Smoothing and Adaptive Adversarial Cross-Entropy Loss." IEEE Access 13 (2025): 100326–37. https://doi.org/10.1109/access.2025.3578265.

2. Dong, Mingrong, Yixuan Yang, Kai Zeng, Qingwang Wang, and Tao Shen. "Implicit Sharpness-Aware Minimization for Domain Generalization." Remote Sensing 16, no. 16 (2024): 2877. http://dx.doi.org/10.3390/rs16162877.

Abstract:
Domain generalization (DG) aims to learn knowledge from multiple related domains to achieve a robust generalization performance in unseen target domains, which is an effective approach to mitigate domain shift in remote sensing image classification. Although the sharpness-aware minimization (SAM) method enhances DG capability and improves remote sensing image classification performance by promoting the convergence of the loss minimum to a flatter loss surface, the perturbation loss (maximum loss within the neighborhood of a local minimum) of SAM fails to accurately measure the true sharpness …
3. Wu, Tao, Tie Luo, and Donald C. Wunsch II. "CR-SAM: Curvature Regularized Sharpness-Aware Minimization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (2024): 6144–52. http://dx.doi.org/10.1609/aaai.v38i6.28431.

Abstract:
The capacity to generalize to future unseen data stands as one of the utmost crucial attributes of deep neural networks. Sharpness-Aware Minimization (SAM) aims to enhance the generalizability by minimizing worst-case loss using one-step gradient ascent as an approximation. However, as training progresses, the non-linearity of the loss landscape increases, rendering one-step gradient ascent less effective. On the other hand, multi-step gradient ascent will incur higher training cost. In this paper, we introduce a normalized Hessian trace to accurately measure the curvature of loss landscape on …
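Several of the abstracts above refer to SAM's one-step gradient ascent. As background, the basic update can be sketched as follows (a minimal NumPy illustration of vanilla SAM, not the CR-SAM method; `sam_step`, `grad_fn`, and the toy quadratic loss are hypothetical names chosen for this sketch):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One vanilla-SAM update: take a one-step gradient ascent to the
    (approximate) worst-case point in a rho-ball, then descend using
    the gradient evaluated there."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # one-step ascent direction
    g_adv = grad_fn(w + eps)                      # gradient at perturbed weights
    return w - lr * g_adv                         # descent with perturbed gradient

# Toy quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda v: v)
# w ends up near the minimum at the origin
```

The "perturbation loss" mentioned in entry 2's abstract is the loss evaluated at `w + eps`, i.e., the worst case found by this single ascent step.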
4. Xing, Xinda, Qiugang Zhan, Xiurui Xie, Yuning Yang, Qiang Wang, and Guisong Liu. "Flexible Sharpness-Aware Personalized Federated Learning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 20 (2025): 21707–15. https://doi.org/10.1609/aaai.v39i20.35475.

Abstract:
Personalized federated learning (PFL) is a new paradigm to address the statistical heterogeneity problem in federated learning. Most existing PFL methods focus on leveraging global and local information such as model interpolation or parameter decoupling. However, these methods often overlook the generalization potential during local client learning. From a local optimization perspective, we propose a simple and general PFL method, Federated learning with Flexible Sharpness-Aware Minimization (FedFSA). Specifically, we emphasize the importance of applying a larger perturbation to critical …
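The FedFSA abstract emphasizes applying a larger perturbation to some layers than to others. A layer-wise ascent direction with per-layer radii might be sketched like this (an illustrative assumption, not the authors' implementation; `layerwise_perturbation` and the radii are hypothetical):

```python
import numpy as np

def layerwise_perturbation(layer_grads, rhos):
    """Build a per-layer SAM ascent direction: each layer's normalized
    gradient is scaled by its own radius, so layers deemed critical can
    receive a larger perturbation (illustrative only)."""
    return [rho * g / (np.linalg.norm(g) + 1e-12)
            for g, rho in zip(layer_grads, rhos)]

# Two toy "layers"; give the first a larger perturbation radius.
grads = [np.ones(4), np.ones(2)]
eps = layerwise_perturbation(grads, rhos=[0.1, 0.05])
```

Each resulting perturbation has norm equal to its layer's radius, so the choice of `rhos` directly controls how aggressively each layer is perturbed.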
5. Deng, Jiaxin, Junbiao Pang, Baochang Zhang, and Guodong Guo. "Asymptotic Unbiased Sample Sampling to Speed Up Sharpness-Aware Minimization." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 15 (2025): 16208–16. https://doi.org/10.1609/aaai.v39i15.33780.

Abstract:
Sharpness-Aware Minimization (SAM) has emerged as a promising approach for effectively reducing the generalization error. However, SAM incurs twice the computational cost compared to the base optimizer (e.g., SGD). We propose Asymptotic Unbiased data sampling to accelerate SAM (AUSAM), which maintains the model's generalization capacity while significantly enhancing computational efficiency. Concretely, we probabilistically sample a subset of data points beneficial for SAM optimization based on a theoretically guaranteed criterion, i.e., the Gradient Norm of each Sample (GNS). We further …
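The AUSAM abstract describes probabilistically sampling data points according to each sample's gradient norm (GNS). That kind of criterion can be illustrated with a small sketch (hypothetical names; not the paper's code):

```python
import numpy as np

def sample_by_gradient_norm(per_sample_grads, k, rng=None):
    """Pick k sample indices without replacement, with probability
    proportional to each sample's gradient norm."""
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(per_sample_grads, axis=1)
    probs = norms / norms.sum()
    return rng.choice(len(per_sample_grads), size=k, replace=False, p=probs)

# Toy batch: six samples with 3-dimensional per-sample gradients.
grads = np.array([[1, 0, 0], [0, 2, 0], [0, 0, 3],
                  [4, 0, 0], [0, 5, 0], [0, 0, 6]], dtype=float)
idx = sample_by_gradient_norm(grads, k=3)   # indices of the chosen samples
```

Samples with larger gradient norms are more likely to be selected, so the expensive SAM step would be spent on the points that influence the update most.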
6. Zhou, Changbao, Jiawei Du, Ming Yan, Hengshan Yue, Xiaohui Wei, and Joey Tianyi Zhou. "SAR: Sharpness-Aware Minimization for Enhancing DNNs' Robustness Against Bit-Flip Errors." Journal of Systems Architecture 156 (November 2024): 103284. http://dx.doi.org/10.1016/j.sysarc.2024.103284.

7. Mariam, Iqra, Xiaorong Xue, and Kaleb Gadson. "A Retinal Vessel Segmentation Method Based on the Sharpness-Aware Minimization Model." Sensors 24, no. 13 (2024): 4267. http://dx.doi.org/10.3390/s24134267.

Abstract:
Retinal vessel segmentation is crucial for diagnosing and monitoring various eye diseases such as diabetic retinopathy, glaucoma, and hypertension. In this study, we examine how sharpness-aware minimization (SAM) can improve RF-UNet’s generalization performance. RF-UNet is a novel model for retinal vessel segmentation. We focused our experiments on the digital retinal images for vessel extraction (DRIVE) dataset, which is a benchmark for retinal vessel segmentation, and our test results show that adding SAM to the training procedure leads to notable improvements. Compared to the non-SAM model …
8. Liang, Hailun, Haowen Zheng, Hao Wang, Liu He, Haoyi Lin, and Yanyan Liang. "Exploring Flatter Loss Landscape Surface via Sharpness-Aware Minimization with Linear Mode Connectivity." Mathematics 13, no. 8 (2025): 1259. https://doi.org/10.3390/math13081259.

Abstract:
The Sharpness-Aware Minimization (SAM) optimizer connects flatness and generalization, suggesting that loss basins with lower sharpness are correlated with better generalization. However, SAM requires manually tuning the open ball radius, which complicates its practical application. To address this, we propose a method inspired by linear connectivity, using two models initialized differently as endpoints to automatically determine the optimal open ball radius. Specifically, we introduce distance regularization between the two models during training, which encourages them to approach each other
9. Wei, Zheng, Xingjun Zhang, and Zhendong Tan. "Unifying and Revisiting Sharpness-Aware Minimization with Noise-Injected Micro-Batch Scheduler for Efficiency Improvement." Neural Networks 185 (May 2025): 107205. https://doi.org/10.1016/j.neunet.2025.107205.

10. Jeong, In-Woong, Han-Jin Lee, Jae-Hwan Jeong, and Seok-Hwan Choi. "A Study on Malware Family Classification Method Based on Separable Vision Transformer Using Sharpness-Aware Minimization." Journal of Korean Institute of Intelligent Systems 34, no. 4 (2024): 329–38. http://dx.doi.org/10.5391/jkiis.2024.34.4.329.


Book chapters on the topic "Adaptive Sharpness Aware Minimization"

1. Bakurov, Illya, Nathan Haut, and Wolfgang Banzhaf. "Sharpness-Aware Minimization in Genetic Programming." In Genetic and Evolutionary Computation. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-0077-9_8.

2. Khanh, Pham Duy, Hoang-Chau Luong, Boris S. Mordukhovich, Dat Ba Tran, and Truc Vo. "Convergence of Sharpness-Aware Minimization with Momentum." In Communications in Computer and Information Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73420-5_11.

3. Li, Dongqi, Zhu Teng, Qirui Li, and Ziyin Wang. "Sharpness-Aware Minimization for Out-of-Distribution Generalization." In Communications in Computer and Information Science. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8126-7_43.

4. Wang, Xuehao, Weisen Jiang, Shuai Fu, and Yu Zhang. "Enhancing Sharpness-Aware Minimization by Learning Perturbation Radius." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-70344-7_22.

5. Wu, Chenyu, Ting Zhang, and ZhiXian Li. "SAM-FT: Enhanced Generalizable Medical Image Segmentation via Sharpness-Aware Minimization and Focal Loss." In Communications in Computer and Information Science. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-6688-1_7.

6. Ibayashi, Hikaru, Taufeq Mohammed Razakh, Liqiu Yang, et al. "Allegro-Legato: Scalable, Fast, and Robust Neural-Network Quantum Molecular Dynamics via Sharpness-Aware Minimization." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-32041-5_12.

7. Wu, Lilei, Zhen Wang, and Jie Liu. "Adaptive Self-Supervised Continual Learning." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230576.

Abstract:
Continual Learning (CL) studies the problem of developing a robust model that can learn new tasks while retaining previously learned knowledge. However, the current CL methods exclusively focus on data with annotations, disregarding that unlabelled data is the mainstream in real-world applications. To close this research gap, this study concentrates on continual self-supervised learning, which is plagued by challenges of memory over-fitting and class imbalance. Besides, these challenges are exacerbated throughout incremental training. Aimed at addressing these challenges from both loss and …

Conference papers on the topic "Adaptive Sharpness Aware Minimization"

1. Ratchatorn, Tanapat, and Masayuki Tanaka. "Adaptive Adversarial Cross-Entropy Loss for Sharpness-Aware Minimization." In 2024 IEEE International Conference on Image Processing (ICIP). IEEE, 2024. http://dx.doi.org/10.1109/icip51287.2024.10647582.

2. Zou, Jinping, Xiaoge Deng, and Tao Sun. "Sharpness-Aware Minimization with Adaptive Regularization for Training Deep Neural Networks." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10890114.

3. Li, Tao, Pan Zhou, Zhengbao He, Xinwen Cheng, and Xiaolin Huang. "Friendly Sharpness-Aware Minimization." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.00538.

4. Zhang, Fengchun, Dongfen Li, Jinshan Lai, Yang Zhang, Fengli Zhang, and Ruijin Wang. "General Dynamic Regularization Federated Learning with Hybrid Sharpness-Aware Minimization." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10890601.

5. Zhang, Yilang, Bingcong Li, and Georgios B. Giannakis. "Preconditioned Sharpness-Aware Minimization: Unifying Analysis and a Novel Learning Algorithm." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10889586.

6. Nguyen, Tung, Tue Le, Hoang Tran Vuong, et al. "Sharpness-Aware Minimization for Topic Models with High-Quality Document Representations." In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). Association for Computational Linguistics, 2025. https://doi.org/10.18653/v1/2025.naacl-long.231.

7. Wu, Chenyu, Ting Zhang, Zhixian Li, and Zhichao Lian. "Enhancing Adversarial Patch Effectiveness for Face Recognition Systems Using Sharpness-Aware Minimization." In 2024 IEEE Cyber Science and Technology Congress (CyberSciTech). IEEE, 2024. https://doi.org/10.1109/cyberscitech64112.2024.00066.

8. Zareno, Kaitlin, Jarett C. Dewbury, Siamak K. Sorooshyari, Hossein Mobahi, and Loza F. Tadesse. "Sharpness-Aware Minimization (SAM) Improves Generalization Performance of Bacterial Raman Spectral Data Enabling Portable Diagnostics." In Optics and Biophotonics in Low-Resource Settings XI, edited by David Levitz and Aydogan Ozcan. SPIE, 2025. https://doi.org/10.1117/12.3049180.

9. Henwood, Sébastien, Gonçalo Mordido, Yvon Savaria, Sarath Chandar, and François Leduc-Primeau. "Sharpness-Aware Minimization Scaled by Outlier Normalization for Robust DNNs on In-Memory Computing Accelerators." In 2024 58th Asilomar Conference on Signals, Systems, and Computers. IEEE, 2024. https://doi.org/10.1109/ieeeconf60004.2024.10943038.

10. Liu, Hailong, and Wenbi Rao. "Supervised Contrastive Probability Learning and Sharpness-Aware Minimization Model for Multimodal Emotion Recognition in Conversations." In Fourth International Conference on Algorithms, Microchips, and Network Applications (AMNA 2025), edited by Javid Taheri and Lei Chen. SPIE, 2025. https://doi.org/10.1117/12.3068505.
