Academic literature on the topic 'White-box attack'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'White-box attack.'

Journal articles on the topic "White-box attack"

1. Chen, Jinghui, Dongruo Zhou, Jinfeng Yi, and Quanquan Gu. "A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3486–94. http://dx.doi.org/10.1609/aaai.v34i04.5753.

Abstract:
Depending on how much information an adversary can access, adversarial attacks can be classified as white-box or black-box attacks. For white-box attacks, optimization-based algorithms such as projected gradient descent (PGD) can achieve relatively high attack success rates within a moderate number of iterations. However, they tend to generate adversarial examples near or on the boundary of the perturbation set, resulting in large distortion. Furthermore, their corresponding black-box attack algorithms also suffer from high query complexity, thereby limiting their practical usefulness. …
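
The abstract contrasts PGD-style white-box attacks with their black-box counterparts. For orientation, below is a minimal L-infinity PGD sketch in PyTorch; `model`, `eps`, `alpha`, and `steps` are illustrative assumptions, and this is the baseline the paper argues against, not its Frank-Wolfe method:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Minimal L-infinity PGD sketch: random start, signed-gradient
    ascent on the loss, projection back onto the eps-ball each step."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project onto the eps-ball
        x_adv = x_adv.clamp(0, 1)                     # keep a valid input range
    return x_adv
```

The projection step keeps iterates inside, and often on, the boundary of the perturbation set, which is the distortion behavior the abstract criticizes.
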
2. Josse, Sébastien. "White-box attack context cryptovirology." Journal in Computer Virology 5, no. 4 (2008): 321–34. http://dx.doi.org/10.1007/s11416-008-0097-x.

3. Porkodi, V., M. Sivaram, Amin Salih Mohammed, and V. Manikandan. "Survey on White-Box Attacks and Solutions." Asian Journal of Computer Science and Technology 7, no. 3 (2018): 28–32. http://dx.doi.org/10.51983/ajcst-2018.7.3.1904.

Abstract:
This research paper surveys white-box attacks and the results that follow from them. Wearable IoT devices can be captured and then accessed in an unauthorized manner because of their physical nature. In that case they operate in a white-box attack setting, where the opponent may have full visibility of the implementation of the built-in cryptosystem and complete control over its execution platform. White-box attacks on wearable devices are therefore a genuine challenge. To serve as a countermeasure against these problems in such con…

4. Alshekh, Mokhtar, and Köksal Erentürk. "Defense against White Box Adversarial Attacks in Arabic Natural Language Processing (ANLP)." International Journal of Advanced Natural Sciences and Engineering Researches 7, no. 6 (2023): 151–55. http://dx.doi.org/10.59287/ijanser.1149.

Abstract:
Adversarial attacks are among the biggest threats to the accuracy of classifiers in machine learning systems. This type of attack tricks the classification model into making false predictions by supplying noised data whose noise only a human can detect. The risk of attack is high in natural language processing applications because most of the data collected in this setting comes from social networking sites that impose no restrictions on users when writing comments, which allows an attack to be created (either intentionally or unintentionally) easily and simply af…
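
As a toy illustration of how such noised text can arise (a hypothetical helper, not the paper's attack or defense):

```python
import random

def add_typo_noise(text, rate=0.05, seed=None):
    """Swap adjacent letters at random, mimicking the accidental or
    deliberate character-level noise found in user comments."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)
```
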
5. Park, Hosung, Gwonsang Ryu, and Daeseon Choi. "Partial Retraining Substitute Model for Query-Limited Black-Box Attacks." Applied Sciences 10, no. 20 (2020): 7168. http://dx.doi.org/10.3390/app10207168.

Abstract:
Black-box attacks against deep neural network (DNN) classifiers are receiving increasing attention because they represent a more practical approach in the real world than white-box attacks. In black-box environments, adversaries have limited knowledge of the target model. This makes it difficult to estimate gradients for crafting adversarial examples, so that powerful white-box algorithms cannot be directly applied to black-box attacks. Therefore, a well-known black-box attack strategy creates local DNNs, called substitute models, to emulate the target model. The adversaries then craf…
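
The substitute-model strategy described here can be sketched as a label-only training loop. A minimal sketch, assuming a black-box `target` queried for labels, a local `substitute` network, and a `data_loader` of unlabeled inputs (all illustrative names):

```python
import torch
import torch.nn.functional as F

def train_substitute(target, substitute, data_loader, epochs=5, lr=1e-3):
    """Fit a local substitute to the target's predicted labels so that
    white-box attacks can be run on the substitute and transferred."""
    opt = torch.optim.Adam(substitute.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in data_loader:  # ground-truth labels are ignored
            with torch.no_grad():
                y = target(x).argmax(dim=1)  # query the black box
            loss = F.cross_entropy(substitute(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return substitute
```

White-box algorithms such as FGSM or PGD are then applied to the substitute, and the resulting adversarial examples are transferred to the target.
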
6. Zhou, Jie, Jian Bai, and Meng Shan Jiang. "White-Box Implementation of ECDSA Based on the Cloud Plus Side Mode." Security and Communication Networks 2020 (November 19, 2020): 1–10. http://dx.doi.org/10.1155/2020/8881116.

Abstract:
The white-box attack context assumes that the running environments of algorithms are visible and modifiable. Algorithms that can resist attacks in this context are called white-box cryptography. The elliptic curve digital signature algorithm (ECDSA) is one of the most widely used digital signature algorithms and can provide integrity, authenticity, and nonrepudiation. Since the private key in classical ECDSA is held in plaintext, it is easy for attackers to obtain it. To increase the security of the private key under the white-box attack context, this article presents an algorithm …
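
For context, the classical (non-white-box) ECDSA signing equations make the exposure plain; this is textbook background, not the paper's construction:

```latex
% ECDSA signing of message m with private key d, fresh nonce k,
% base point G of prime order n, and hash function H:
r = (k\,G)_x \bmod n, \qquad s = k^{-1}\,\bigl(H(m) + r\,d\bigr) \bmod n
% Both d and k appear as plain values during signing, so an adversary
% who can inspect memory in the white-box attack context reads the key.
```
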
7. Lin, Ting-Ting, and Xue-Jia Lai. "Efficient Attack to White-Box SMS4 Implementation." Journal of Software 24, no. 8 (2014): 2238–49. http://dx.doi.org/10.3724/sp.j.1001.2013.04356.

8. Zhang, Sicheng, Yun Lin, Zhida Bao, and Jiangzhi Fu. "A Lightweight Modulation Classification Network Resisting White Box Gradient Attacks." Security and Communication Networks 2021 (October 12, 2021): 1–10. http://dx.doi.org/10.1155/2021/8921485.

Abstract:
Improving the attack resistance of the modulation classification model is an important means of improving the security of the physical layer of the Internet of Things (IoT). In this paper, a binary modulation classification defense network (BMCDN) is proposed, which has the advantages of a small model scale and strong immunity to white-box gradient attacks. Specifically, an end-to-end modulation signal recognition network that directly recognizes the form of the signal sequence is constructed, and its parameters are quantized to 1 bit to obtain the advantages of low memory usage and fast calculat…
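
One-bit parameter quantization is commonly implemented as sign binarization with a straight-through estimator for the backward pass; a generic PyTorch sketch under that assumption, not the paper's exact BMCDN architecture:

```python
import torch
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Quantize weights to 1 bit via sign, with a straight-through
    estimator so the network stays trainable."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Straight-through: pass gradients only where |w| <= 1.
        return grad_out * (w.abs() <= 1).float()

def binary_linear(x, weight, bias=None):
    """Linear layer whose weights are binarized on the fly."""
    return F.linear(x, BinarizeSTE.apply(weight), bias)
```
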
9. Gao, Xianfeng, Yu-an Tan, Hongwei Jiang, Quanxin Zhang, and Xiaohui Kuang. "Boosting Targeted Black-Box Attacks via Ensemble Substitute Training and Linear Augmentation." Applied Sciences 9, no. 11 (2019): 2286. http://dx.doi.org/10.3390/app9112286.

Abstract:
In recent years, deep neural networks (DNNs) have shown unprecedented performance in many areas. However, some recent studies revealed their vulnerability to small perturbations added to source inputs. The methods for generating these perturbations are called adversarial attacks, which come in two types, black-box and white-box attacks, according to the adversary's access to the target model. To overcome the problem of black-box attackers being unable to reach the internals of the target DNN, many researchers have put forward a series of strategies. Previous works include a method of training a …
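
Ensemble substitute training typically crafts perturbations from gradients averaged over several locally trained substitutes. A minimal targeted step, with `models` and `y_target` as illustrative assumptions rather than the paper's exact procedure:

```python
import torch
import torch.nn.functional as F

def ensemble_targeted_step(models, x, y_target, alpha=2/255):
    """One targeted attack step using the mean loss gradient of an
    ensemble of substitute models."""
    x = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(m(x), y_target) for m in models) / len(models)
    loss.backward()
    # Targeted: descend the loss to move the input toward y_target.
    return (x - alpha * x.grad.sign()).clamp(0, 1).detach()
```
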
10. Jiang, Yi, and Dengpan Ye. "Black-Box Adversarial Attacks against Audio Forensics Models." Security and Communication Networks 2022 (January 17, 2022): 1–8. http://dx.doi.org/10.1155/2022/6410478.

Abstract:
Speech synthesis technology has made great progress in recent years and is widely used in the Internet of Things, but it also brings the risk of being abused by criminals. Therefore, a series of studies on audio forensics models has arisen to reduce or eliminate these negative effects. In this paper, we propose a black-box adversarial attack method that relies only on the output scores of audio forensics models. To improve the transferability of adversarial attacks, we utilize the ensemble-model method. A defense method is also designed against our proposed attack method under the view of the …
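
Score-only black-box attacks typically estimate gradients from queries alone. A standard NES-style finite-difference estimator is sketched below as one such technique; the excerpt does not specify the paper's estimator, and `score_fn`, `sigma`, and `n_samples` are assumptions:

```python
import torch

def nes_gradient(score_fn, x, sigma=1e-3, n_samples=50):
    """Estimate the gradient of a scalar black-box score at x with
    antithetic Gaussian sampling (NES), using only score queries."""
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        u = torch.randn_like(x)
        grad += (score_fn(x + sigma * u) - score_fn(x - sigma * u)) * u
    return grad / (2 * sigma * n_samples)
```
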